
Bicriteria Approximation for k-Edge-Connectivity

Zeev Nutov, The Open University of Israel, Ra'anana, Israel
Reut Cohen, The Open University of Israel, Ra'anana, Israel
Abstract

In the k-Edge Connected Spanning Subgraph (k-ECSS) problem we are given a (multi-)graph G=(V,E) with edge costs and an integer k, and seek a min-cost k-edge-connected spanning subgraph of G. The problem admits a 2-approximation algorithm and no better approximation ratio is known. Recently, Hershkowitz, Klein, and Zenklusen [STOC 24] gave a bicriteria (1, k−10)-approximation algorithm that computes a (k−10)-edge-connected spanning subgraph of cost at most the optimal value of a standard Cut-LP for k-ECSS. We improve the bicriteria approximation to (1, k−4) and also give another non-trivial bicriteria approximation (3/2, k−2).

The k-Edge-Connected Spanning Multi-subgraph (k-ECSM) problem is almost the same as k-ECSS, except that any edge can be selected multiple times at the same cost. A (1, k−p) bicriteria approximation for k-ECSS w.r.t. the Cut-LP implies approximation ratio 1+p/k for k-ECSM, hence our result also improves the approximation ratio for k-ECSM.

Keywords and phrases:
k-edge-connected subgraph, bicriteria approximation, iterative LP-rounding
Copyright and License:
© Zeev Nutov and Reut Cohen; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Design and analysis of algorithms
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman

1 Introduction

A graph is k-edge-connected if it contains k edge-disjoint paths between any two nodes. We consider the following problem.

k-Edge-Connected Spanning Subgraph (k-ECSS)
Input: A (multi-)graph G=(V,E) with edge costs and an integer k.
Output: A min-cost k-edge-connected spanning subgraph J of G.

The problem admits approximation ratio 2 by a reduction to a bidirected graph (cf. [17]), and no better approximation ratio is known. Recently, Hershkowitz, Klein, and Zenklusen [14] (henceforth HKZ) gave a bicriteria (1, k−10)-approximation algorithm that computes a (k−10)-edge-connected spanning subgraph of cost at most the optimal value of a standard Cut-LP for k-ECSS. Our first result improves the connectivity approximation to k−4.

Theorem 1.

k-ECSS admits a bicriteria approximation ratio (1, k−4) w.r.t. the Cut-LP.

The k-Edge-Connected Spanning Multi-subgraph (k-ECSM) problem is almost the same as k-ECSS, except that any edge can be selected multiple times at the same cost. For k-ECSM, approximation ratios 3/2 for k even and 3/2+O(1/k) for k odd were known long ago [10, 11], and it was conjectured by Pritchard [29] that k-ECSM admits approximation ratio 1+O(1/k). HKZ [14] showed that k-ECSM has a (1+Ω(1/k))-approximation threshold and that a (1, k−p) bicriteria approximation for k-ECSS w.r.t. the standard Cut-LP implies approximation ratio 1+p/k for k-ECSM. Thus the novel result of HKZ [14] implies an approximation ratio 1+10/k for k-ECSM (improving the previous ratio 1+O(1/√k) of [16]), and we reduce the approximation ratio to 1+4/k.
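The transfer to k-ECSM can be sketched as follows (a rough outline of the reduction from [14], with details omitted): apply the (1, k−p) bicriteria algorithm with connectivity target k+p on the multigraph that has sufficiently many parallel copies of each edge; if x* is an optimal k-ECSM Cut-LP solution, then ((k+p)/k)·x* is feasible for the (k+p) Cut-LP, since upper bounds of the form xe ≤ 1 are not binding in the multi-subgraph setting. The returned subgraph J is then ((k+p)−p) = k-edge-connected, and

\[
\mathrm{cost}(J)\ \le\ \mathsf{LP}_{k+p}\ \le\ c^{T}\!\Big(\tfrac{k+p}{k}\,x^{*}\Big)\ =\ \Big(1+\tfrac{p}{k}\Big)\mathsf{LP}_{k}\ \le\ \Big(1+\tfrac{p}{k}\Big)\mathsf{opt}.
\]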

Corollary 2.

k-ECSM admits approximation ratio 1+4/k.

In addition, we also study bicriteria approximation ratios when the cost approximation is below 2, and prove the following.

Theorem 3.

k-ECSS admits a bicriteria approximation ratio (3/2, k−2).

We note that while HKZ presented a novel approach to the problem and gave the first additive bicriteria approximation, our contribution here is more modest, as follows.

  • We present a simpler algorithm that uses the idea of HKZ [14] and has a much simpler proof, based on new properties of k-ECSS Cut-LP extreme point solutions; see Lemma 4.

  • This enabled us to improve the connectivity approximation from k−10 to k−4.

  • We use the same idea to present another nontrivial bicriteria approximation (3/2, k−2).

We now survey some related work. k-ECSS is an old fundamental problem in combinatorial optimization and network design. The case k=1 is the MST problem, while for k=2 the problem is MAX-SNP hard even for unit costs [7]. Pritchard [29] showed that there is a constant ϵ>0 such that for any k ≥ 2, k-ECSS admits no (1+ϵ)-approximation algorithm, unless P = NP. No such hardness of approximation was known for k-ECSM, and Pritchard [29] conjectured that k-ECSM admits approximation ratio 1+O(1/k). This conjecture was resolved by HKZ [14], who also proved a matching hardness of approximation result.

The k-ECSS problem and its special cases have been extensively studied, cf. [22, 8, 10, 11, 18, 1, 4, 17, 13, 12, 29, 30, 32, 14] for only a small sample of papers in the field. Nevertheless, the current best known approximation ratio for k-ECSS is 2, the same as it was four decades ago. The 2-approximation is obtained by bidirecting the edges of G, computing a min-cost directed spanning subgraph that contains k edge-disjoint dipaths from some root node to all other nodes (this can be done in polynomial time [5]), and returning its underlying graph. A 2-approximation can also be achieved with the iterative rounding method [15], which gives a 2-approximation also for the more general Steiner Network problem, where we seek a min-cost subgraph that contains ruv edge-disjoint paths for every pair of nodes u,v ∈ V. The directed version of k-ECSS was also studied, cf. [5, 13, 1, 17], and also for this case no approximation ratio better than 2 is known, except for special cases.
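For completeness, the accounting behind this folklore 2-approximation can be written as follows (our own summary, not a statement taken from [17]): if D is a min-cost subgraph of the bidirected graph with k arc-disjoint dipaths from a root r to every node, and J is its underlying graph, then bidirecting an optimal k-ECSS solution shows c(D) ≤ 2·opt, and every cut of J is crossed at least k times because the k dipaths enter or leave it:

\[
c(J)\ \le\ c(D)\ \le\ 2\,\mathsf{opt},
\qquad
d_{J}(S)\ \ge\ \max\big\{|\delta^{\mathrm{in}}_{D}(S)|,\,|\delta^{\mathrm{out}}_{D}(S)|\big\}\ \ge\ k
\quad\text{for all }\emptyset\ne S\subsetneq V .
\]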

Bicriteria approximation algorithms were widely studied for degree constrained network design problems such as Bounded Degree MST and Bounded Degree Steiner Network, cf. [31, 21, 20] and the references therein. A particular case of a bicriteria approximation is when only one parameter is relaxed while the cost/size of the solution is bounded by a budget B. For example, the (1−1/e)-approximation for the Budgeted Max-Coverage problem can be viewed as a bicriteria approximation (1, 1−1/e) for Set Cover, where only a fraction of 1−1/e of the elements is covered by opt sets. Similarly, in the budgeted version Budgeted ECSS of k-ECSS, we are given a budget B instead of k and seek a spanning subgraph of cost at most B whose edge-connectivity k is maximum.

One can view Theorem 1 as a k−4 additive approximation for Budgeted ECSS, where k is the maximum edge-connectivity achievable under the budget B. We note that budgeted connectivity problems were studied before for unit costs. Specifically, in the Min-Size k-ECSS and Min-Size k-CSS problems we seek a k-edge-connected and k-node-connected, respectively, spanning subgraph with a minimal number of edges. Nutov [24] showed that the following simple heuristic (previously analyzed by Cheriyan and Thurimella [1]) is a (1, k−1) bicriteria approximation for Min-Size k-CSS: compute a min-size (k−2)-edge-cover Ik−2 (a spanning subgraph of minimum degree ≥ k−2) and then augment it by an inclusion-minimal edge set F ⊆ E such that Ik−2 ∪ F is (k−1)-connected. This (1, k−1) bicriteria approximation implies a k−1 approximation for Budgeted Min-Size k-CSS, which is tight, since the problem is NP-hard. A similar result for k-ECSS, for large enough k, can be deduced from the work of Gabow and Gallagher [12].

One can obtain a bicriteria approximation (2/θ, k/θ) for k-ECSS for any θ ≥ 1, by just solving the k/θ-ECSS problem. This is because the 2-approximation is w.r.t. the Cut-LP. Specifically, if x is a k-ECSS Cut-LP solution, then x/θ is a k/θ-ECSS Cut-LP solution, thus a 2-approximation w.r.t. the Cut-LP for k/θ-ECSS computes a k/θ-edge-connected subgraph of cost at most (2/θ)·optk. Similar "LP-scaling" gives a bicriteria approximation (2/θ, ruv/θ) for any θ ≥ 1 for the Steiner Network problem.
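Spelled out, if x* is an optimal k-ECSS Cut-LP solution and J is the output of a 2-approximation w.r.t. the Cut-LP for the connectivity requirement k/θ, then

\[
\tfrac{x^{*}}{\theta}(\delta_{E}(S))\ \ge\ \tfrac{k}{\theta}\quad\text{for all }\emptyset\ne S\subsetneq V,
\qquad
c(J)\ \le\ 2\,\mathsf{LP}_{k/\theta}\ \le\ 2\,c^{T}\tfrac{x^{*}}{\theta}\ =\ \tfrac{2}{\theta}\,\mathsf{LP}_{k}\ \le\ \tfrac{2}{\theta}\,\mathsf{opt}_{k}.
\]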

The node connectivity version k-CSS of k-ECSS was also studied extensively [19, 6, 25, 1, 24, 2, 28, 26, 27]; it admits approximation ratio 4+ϵ when k is bounded by a constant, but for general k only a polylogarithmic approximation is known [6, 25]. The Survivable Network Design Problem (SNDP) is the node-connectivity version of Steiner Network, where there should be ruv internally node-disjoint paths between every u,v ∈ V; k-CSS is the particular case when ruv = k for all u,v ∈ V. The Rooted SNDP is another particular case of SNDP where we require k disjoint paths from a root s to every node in a given set T of terminals, and the k-Out-Connected Spanning Subgraph (k-OCSS) problem is a particular case of Rooted SNDP when T = V∖{s}. Since the best approximation ratios for these problems are w.r.t. the Cut-LP (a.k.a. "biset LP"), they can be used to obtain bicriteria approximations using the method described above. We summarize in the following table the best known approximation ratios for these problems and the bicriteria approximation ratios that can be derived for them by simple "LP-scaling".

Table 1: LP and bicriteria approximation ratios for node connectivity problems. The third row gives the approximation cost that can achieve connectivity k/θ (ruv/θ for SNDP) for any θ ≥ 1.
problem          | SNDP                    | Rooted SNDP              | k-CSS                                          | k-OCSS
approximation    | O(k³ log n) [3]         | O(k log k) [23]          | 4 + (4 lg k + 1)/(lg n − lg k) [28]            | 2 [9]
bicriteria appr. | (1/θ)·O((k/θ)³ log n)   | (1/θ)·O((k/θ) log(k/θ)) | (1/θ)·(4 + (4 lg(k/θ) + 1)/(lg n − lg(k/θ)))   | 2/θ

This paper is organized as follows. In Sect. 2 we describe our iterative rounding algorithm. In Sect. 3 we prove the connectivity guarantee, and in Sect. 4 we prove a certain property of extreme point solutions that is used in the algorithm. Sect. 5 gives a bicriteria approximation (3/2, k−2).

2 The Algorithm

For an edge set F and disjoint node sets S,T let δF(S,T) denote the set of edges in F with one end in S and the other end in T, and let dF(S,T) = |δF(S,T)|. Let δF(S) = δF(S,S̄) and dF(S) = |δF(S)|, where S̄ = V∖S is the node complement of S. For x ∈ ℝ^E and F ⊆ E let x(F) = Σe∈F xe. We will often refer to a proper non-empty subset S of V (so ∅ ≠ S ⊊ V) as a cut. A standard Cut-LP for k-ECSS is:

min  cᵀx
s.t. x(δE(S)) ≥ k    for all ∅ ≠ S ⊊ V
     0 ≤ xe ≤ 1      for all e ∈ E
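To make the Cut-LP concrete, here is a minimal sketch that solves it exactly on tiny instances by enumerating all cuts (the function cut_lp and the use of scipy are our own illustration choices and not part of the paper; the paper instead relies on the Ellipsoid method with the separation oracle of Lemma 20):

from itertools import combinations
from scipy.optimize import linprog

def cut_lp(nodes, edges, costs, k):
    # edges: list of (u, v) pairs, parallel edges allowed; costs: one cost per edge.
    # Builds one constraint x(delta_E(S)) >= k per proper node subset S (exponential,
    # so only for illustration), plus the box constraints 0 <= x_e <= 1.
    nodes = list(nodes)
    A_ub, b_ub = [], []
    for r in range(1, len(nodes)):
        for S in combinations(nodes, r):
            S = set(S)
            row = [-1.0 if (u in S) != (v in S) else 0.0 for (u, v) in edges]
            A_ub.append(row)            # -x(delta(S)) <= -k  encodes  x(delta(S)) >= k
            b_ub.append(-float(k))
    res = linprog(c=costs, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * len(edges), method="highs")
    return res.fun, list(res.x)

# Example: K4 with unit costs and k = 2; the optimal LP value is 4 (a Hamiltonian cycle).
V4 = [0, 1, 2, 3]
E4 = [(u, v) for u, v in combinations(V4, 2)]
value, x = cut_lp(V4, E4, [1.0] * len(E4), 2)
print(round(value, 2))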

As in HKZ [14], we will use the iterative rounding relaxation method. This means that while E ≠ ∅, we repeatedly compute an extreme point optimal solution x to the Cut-LP above, and do at least one of the following steps:

  1. Remove from E an edge e with xe = 0.

  2. Add to a partial solution I an edge e with xe = 1, and remove e from E.

  3. Relax some cut constraints x(δ(S)) ≥ k to x(δ(S)) ≥ k−q for some integer q.

The relaxation of the cut constraints will be implemented in two different ways.

  (i) Relaxing the cut constraint of a single cut S (and also of S̄ = V∖S) and then contracting S into a single node vS; this also removes the constraint of every cut A such that both A∩S and A∩S̄ are non-empty. We will store the nodes that correspond to such relaxed contracted cuts S in a set U.

  (ii) Relaxing the constraints of all cuts S that separate two nodes u1,u2 ∈ U (that correspond to contracted sets C1,C2) to x(δ(S)) ≥ k−2. This is achieved by adding to the partial solution a "ghost edge" u1u2 of capacity 2, and considering the "residual" demands of sets w.r.t. the ghost edges. We will store the added ghost edges in a set H.

Instead of working with the cut constraints x(δE(S)) ≥ k it will be convenient to consider the residual constraints x(δE(S)) ≥ f(S) for an appropriately defined set function f. To define this f, suppose that we are already given:

  • A partial solution I of already chosen “integral” edges, that were removed from E.

  • A set H of “ghost-edges” – these are new virtual edges that are added to the solution.

  • A set U of nodes that correspond to contracted cuts whose constraints are relaxed.

Let f(S) be a set function defined on proper node subsets S by

f(S) = k − dI(S) − 2dH(S) − 2   if S = {u} or S̄ = {u} for some u ∈ U,
f(S) = k − dI(S) − 2dH(S)        otherwise.    (1)
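As a concrete reading of definition (1), the following small helper (our own illustration; the name f_value is not from the paper) evaluates f(S) from k, the partial solution I, the ghost edges H, and the set U of relaxed contracted nodes:

def f_value(S, k, I, H, U, V):
    # I and H are lists of node pairs (parallel edges allowed); S is a set of nodes of
    # the current graph, V is its full node set, U is the set of relaxed contracted nodes.
    S = set(S)
    crossing = lambda F: sum(1 for (u, v) in F if (u in S) != (v in S))   # d_F(S)
    value = k - crossing(I) - 2 * crossing(H)
    complement = set(V) - S
    if (len(S) == 1 and next(iter(S)) in U) or (len(complement) == 1 and next(iter(complement)) in U):
        value -= 2                              # the relaxed singleton case of (1)
    return value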

This function f(S) is a relaxation of the usual residual function k − dI(S) of k-ECSS. Consider the following LP-relaxation with the function f above:

min  cᵀx
s.t. x(δE(S)) ≥ f(S)   for all ∅ ≠ S ⊊ V
     0 ≤ xe ≤ 1        for all e ∈ E        (2)

A set S is f-positive if f(S)>0. Given an LP solution x we say that S is x-tight if the LP-inequality of S holds with equality, namely, if x(δ(S))=f(S). An x-core, or simply a core if x is clear from the context, is an inclusion-minimal f-positive x-tight set (note that an x-core must be f-positive and not only x-tight).

To contract a node subset C of a graph G=(V,E) means to identify all nodes in C into one node vC; edges that have an end in C now have vC as their end and the arising loops (if any) are deleted. During the algorithm, we iteratively contract certain node subsets of G, so we denote the initial graph by G0=(V0,E0).

Our algorithm is given in Algorithm 1. The main difference between Algorithm 1 and that of HKZ is that we never add parallel ghost edges, namely, we must have dH(u,v) = 0 whenever we add a ghost edge uv. This triggers a chain reaction of simplifications in other parts of the algorithm. For example, we add a ghost edge when dI(u,v) ≥ ⌈(k−3)/2⌉ (and dH(u,v) = 0), while the analogous condition in HKZ is dI∪H(u,v) ∈ [k/2−2, k/2). The upper bound dI∪H(u,v) < k/2 in HKZ is needed to control the number of parallel ghost edges added, which also complicates the analysis.
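Since the pseudocode of Algorithm 1 itself appears in the figure, the following Python skeleton records our reading of a single iteration as described above (a structural sketch only: the LP solver, the routine of Lemma 4, and the contraction are passed in as callables, and the exact ordering of steps in Algorithm 1 may differ):

def iterate_once(E, I, H, U, k, solve_extreme_point, find_core_or_pair, contract):
    # E: remaining fractional edges, I: partial solution, H: ghost edges, U: relaxed nodes.
    x = solve_extreme_point(E, I, H, U, k)              # extreme point of LP (2)
    for e in [e for e in E if x[e] == 0.0]:
        E.remove(e)                                     # step 1: drop edges with x_e = 0
    for e in [e for e in E if x[e] == 1.0]:
        E.remove(e)
        I.append(e)                                     # step 2: fix edges with x_e = 1
    if not E:
        return
    kind, found = find_core_or_pair(x, E, I, H, U, k)   # one of the two cases of Lemma 4
    if kind == "core":                                  # relaxation (i): contract a core C
        v_C = contract(found, E, I, H)                  # found = C; v_C is the new node
        U.add(v_C)
    else:                                               # relaxation (ii): ghost edge uv with
        u, v = found                                    # d_H(u,v) = 0, d_I(u,v) >= ceil((k-3)/2)
        H.append((u, v))
        U.discard(u)
        U.discard(v)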

In Section 4 we will prove the following lemma about extreme point solutions of LP (2).

Algorithm 1 A bicriteria (1, k−4)-approximation.
Lemma 4.

Let f be defined by (1) and suppose that dH(u) ≤ 1 and dI(u) + 2dH(u) ≥ k−2 for all u ∈ U. Let x be an extreme point of the polytope P = {x ∈ [0,1]^E : x(δE(S)) ≥ f(S) for all ∅ ≠ S ⊊ V} such that 0 < xe < 1 for all e ∈ E, where E ≠ ∅. Then at least one of the following holds.

  (i) There is an x-core C with dE(C) ∈ {2,3}; moreover, no edge in E has both ends in C.

  (ii) There are u,v ∈ U with dH(u,v) = 0 and dI(u,v) ≥ μ, where μ = ⌈(k−3)/2⌉.

Furthermore, such u,v or such C can be found in polynomial time.

Interestingly, the two cases in Lemma 4 depend on whether there exists a laminar set family ℒ such that x is the unique solution to the equation system {x(δE(S)) = f(S) : S ∈ ℒ}. Specifically, if such ℒ exists then case (i) in Lemma 4 holds, and otherwise case (ii) must hold.

In the next Section 3 we will show that the algorithm terminates after a polynomial number of iterations and prove the connectivity guarantee k−4. Since in each iteration the cut constraints are only relaxed (by contractions, or by including nodes in U, or by adding ghost edges), the cost of the produced solution is at most the initial LP-value. In this context note that the relaxation by 2 of the constraints of the nodes u,v ∈ U is not canceled when u,v are removed from U; it is just replaced by the relaxation by 2 caused by the added ghost edge uv.

3 Connectivity guarantee

For every ghost edge h = uv added, let I(h) = I(uv) denote the set of (at least μ = ⌈(k−3)/2⌉) edges of the partial solution I that appear in δI(u,v) at the moment h is added. A key observation is the following.

Lemma 5.

For any distinct ghost edges h, h′ added during the algorithm, I(h) ∩ I(h′) = ∅.

Proof.

This follows from the condition that when a ghost edge h = uv is added, there is no other ghost edge between u and v. Specifically, assume w.l.o.g. that h = uv was added after h′. Then all edges in I(h) are uv-edges, i.e., they go between the sets Cu, Cv that correspond to u,v, while no edge in I(h′) is a uv-edge.

Lemma 6.

When a core C is found in the algorithm but not yet contracted, dH(C) ≤ 1, x(δE(C)) ∈ {1,2}, and dI(C) = k − 2dH(C) − x(δE(C)). Consequently,

  • dI(C) ∈ {k−1, k−2} if dH(C) = 0.

  • dI(C) ∈ {k−3, k−4} if dH(C) = 1.

Proof.

If dH(C) ≥ 2 then dI(C) + 2dH(C) ≥ 2μ + 4 ≥ k, hence f(C) ≤ 0, contradicting that C is a core (recall that a core must be an f-positive set). We have dE(C) ∈ {2,3}, so there are 2 or 3 fractional edges in δE(C). Since C is x-tight, x(δE(C)) = f(C) is a positive integer, and since each xe < 1 it is either 1 or 2, so there are k−1−2dH(C) or k−2−2dH(C) (integral) edges in δI(C). If dH(C) = 0 then dI(C) ∈ {k−1, k−2} and if dH(C) = 1 then dI(C) ∈ {k−3, k−4}; see Fig. 1(a,b), where x(δE(C)) = 2 and dH(C) = 0 or dH(C) = 1, respectively.

Figure 1: Illustration to the proofs of Lemmas 6 and 8. Edges in I are shown by black lines, edges in E (the fractional edges) by green dashed lines, and the ghost edge is blue.

The following lemma shows that the conditions of Lemma 4 hold during the algorithm, and thus the algorithm terminates.

Lemma 7.

During the algorithm, dH(u) ≤ 1 and dI(u) + 2dH(u) ≥ k−2 hold for all u ∈ U. Furthermore, dH(v) ≤ 2 for all v ∈ V∖U.

Proof.

By Lemma 6, when u enters U we have dH(u) ≤ 1 and dI(u) ≥ k − 2dH(u) − 2. This implies f({u}) = k − dI(u) − 2dH(u) − 2 ≤ 0. The addition of a ghost edge incident to u excludes u from U but does not increase f({u}). Hence {u} will never be chosen as a core C and will never enter U again. If we had dH(u) = 1 when u entered U, then u will leave U with dH(u) = 2 and will never enter U again. Thus dH(v) ≤ 2 for all v ∈ V∖U.

A set family is laminar if any two sets in the family are either disjoint or one contains the other. Let ℛ be the family of subsets of V0 that correspond to the sets contracted during the algorithm (recall that G0 = (V0, E0) denotes the initial graph). It is known and not hard to see that ℛ is laminar. We will say that a cut S of G0 is compatible with a graph G during the algorithm if R ⊆ S or R ∩ S = ∅ for every R ∈ ℛ that was contracted so far to obtain G. As long as S is compatible with G, we can look at the cut of G that corresponds to S, which for simplicity of notation we will also denote by S, and lower bound dI(S) (the degree of the set that corresponds to S w.r.t. the partial solution I) instead of dJ(S) (the degree of S w.r.t. the final solution J), since I ⊆ J and hence dJ(S) ≥ dI(S).

We would like to establish the bound dJ(S) ≥ k−4 for a cut S of G0 such that ℛ ∪ {S} is laminar. For such S, it is easy to determine the last graph G that S is compatible with. For a cut S of G0 let RS be the inclusion-minimal set in ℛ ∪ {V0} that properly contains S. If RS = V0 then S is compatible with G until the end of the algorithm. If RS ≠ V0 then S (such that ℛ ∪ {S} is laminar) is compatible with G as long as RS is not contracted, hence we will analyze such S as a cut of G right before the core C = RS is contracted. Since at this moment no edge in E has both ends in C, and since dE(C) ≤ 3 and x(δE(C)) ≤ 2 (by Lemma 4(i) and Lemma 6), the difference between dI(S) and dJ(S) is small.

Lemma 8.

When a core C in the algorithm is found but not yet contracted, the following holds for any proper subset S of C.

  (i) dI(S) ≥ dH(S)·μ and dI(S) ≥ k − 2dH(S) − 2.

  (ii) dI(S, C∖S) ≥ μ.

Proof.

We prove (i). dI(S) ≥ dH(S)·μ follows from Lemma 5. If S = {u} for some u ∈ U then dI(u) ≥ k − 2dH(u) − 2 by Lemma 6. Otherwise, the constraint of S is relaxed by ghost edges only, and since δE(S) ⊆ δE(C) (Lemma 4(i)) and x(δE(C)) ≤ 2 (Lemma 6), we get

dI(S) ≥ k − 2dH(S) − x(δE(S)) ≥ k − 2dH(S) − 2.

We prove (ii). If dH(S, C∖S) ≥ 1 then dI(S, C∖S) ≥ μ, since dI(u,v) ≥ μ when a ghost edge uv is added. Otherwise, dH(S) + dH(C∖S) = dH(C) ≤ 1, and since dI(C) ≤ k − 2dH(C) − 1 (by Lemma 6) and by part (i) we get

2dI(S, C∖S) = dI(S) + dI(C∖S) − dI(C)
            ≥ [k − 2dH(S) − 2] + [k − 2dH(C∖S) − 2] − [k − 2dH(C) − 1]
            = k − 3 − 2[dH(S) + dH(C∖S) − dH(C)] = k − 3,

The notation δH(S) and dH(S) is well defined for any node subset S of the current graph at any iteration of the algorithm. We now extend it to any subset S of V0. We can view a ghost edge v1v2 as a pair {C1, C2} of sets in ℛ that correspond to v1, v2. We say that such a ghost edge covers S if one of C1, C2 is contained in S and the other in S̄. Let δH(S) denote the set of ghost edges in H that cover S and let dH(S) be their number.
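Concretely (our own illustrative helper, not part of the paper), if each ghost edge is stored as the pair of contracted sets its ends correspond to, the extended dH(S) for S ⊆ V0 is:

def d_H_on_V0(S, ghost_pairs):
    # ghost_pairs: list of (C1, C2), where C1 and C2 are the subsets of V0 corresponding
    # to the two ends of a ghost edge; the edge covers S if one of them lies inside S
    # and the other inside the complement of S.
    S = set(S)
    count = 0
    for C1, C2 in ghost_pairs:
        C1, C2 = set(C1), set(C2)
        if (C1 <= S and S.isdisjoint(C2)) or (C2 <= S and S.isdisjoint(C1)):
            count += 1
    return count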

Corollary 9.

Let S be a cut of G0 such that ℛ ∪ {S} is laminar. Then dJ(S) ≥ k − 2dH(S) − 2 and dJ(S) ≥ dH(S)·μ. Furthermore, if RS = V0 and S ∉ ℛ then dJ(S) ≥ k − 2dH(S). Consequently, dJ(S) ≥ max{k − 2dH(S) − 2, dH(S)·μ} ≥ k−4.

Proof.

The bound dJ(S) ≥ dH(S)·μ follows from Lemma 5, so we prove only the bound dJ(S) ≥ k − 2dH(S) − 2. For S ∈ ℛ it follows from Lemma 6, so assume that S ∉ ℛ. If S is contained in some set in ℛ, then let C be the inclusion-minimal set in ℛ that contains S. The bound then follows from Lemma 8(i). Otherwise, the constraint of S was relaxed by ghost edges only, and then dJ(S) ≥ k − 2dH(S).

Before establishing the bound dJ(S) ≥ k−4 for sets S such that ℛ ∪ {S} is not laminar, we need three additional lemmas. The next lemma considers sets R ∈ ℛ and T ⊊ R̄ such that both ℛ ∪ {T} and ℛ ∪ {R̄∖T} are laminar; one can verify that this is equivalent to ℛ ∪ {T, R̄∖T} being laminar. Moreover, this means that R is an inclusion-maximal set in ℛ, and any other set in ℛ, except for maybe R̄ (if R̄ ∈ ℛ), is contained in one of R, T, R̄∖T.

Lemma 10.

Let R ∈ ℛ and let T be a proper subset of R̄ such that the set family ℛ ∪ {T, R̄∖T} is laminar. Then dJ(T, R̄∖T) ≥ ⌈(k−6)/2⌉.

Proof.

If dH(T, R̄∖T) ≥ 1 then dJ(T, R̄∖T) ≥ μ > ⌈(k−6)/2⌉ by Lemma 5. Otherwise, dH(R) = dH(T) + dH(R̄∖T). Since dJ(T) ≥ k − 2dH(T) − 2 and dJ(R̄∖T) ≥ k − 2dH(R̄∖T) − 2 by Corollary 9, and dJ(R) ≤ k − 2dH(R) + 2 by Lemma 6, we get (see Fig. 2(a))

2dJ(T, R̄∖T) = dJ(T) + dJ(R̄∖T) − dJ(R)
            ≥ [k − 2dH(T) − 2] + [k − 2dH(R̄∖T) − 2] − [k − 2dH(R) + 2]
            = k − 6 − 2[dH(T) + dH(R̄∖T) − dH(R)] = k − 6,

concluding the proof.

We say that a set S overlaps a set R, or that R,S overlap, if the pair {R,S} is not laminar, namely if the sets R∩S, R∖S, S∖R are all non-empty. Now we consider sets S that overlap some R ∈ ℛ. Note however that it may be that S overlaps some R ∈ ℛ while S̄ does not, so we can still deduce from Corollary 9 that dJ(S) = dJ(S̄) ≥ k−4. To avoid this confusion we will examine, among the sets S, S̄, one that is more "compatible" with ℛ. More formally, let ℛ(S) = {R ∈ ℛ : S overlaps R} be the family of sets in ℛ that S overlaps, and note that so far we examined the case |ℛ(S)| = 0. So now w.l.o.g. we will assume that 1 ≤ |ℛ(S)| ≤ |ℛ(S̄)|.
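For concreteness (again our own helpers, with hypothetical names), the overlap relation and the family ℛ(S) read as:

def overlaps(R, S):
    # R, S overlap iff R∩S, R∖S and S∖R are all non-empty
    R, S = set(R), set(S)
    return bool(R & S) and bool(R - S) and bool(S - R)

def overlapped_sets(S, R_family):
    # R_family: the laminar family of contracted sets; returns the subfamily R(S)
    return [R for R in R_family if overlaps(R, S)]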

Lemma 11.

If 1 ≤ |ℛ(S)| ≤ |ℛ(S̄)| then V0 ∖ (R∪S) ≠ ∅ for any R ∈ ℛ(S).

Proof.

Let R ∈ ℛ(S) and suppose that |ℛ(S)| ≤ |ℛ(S̄)|. It is known that ℛ(R∖S) ⊊ ℛ(S); this is so since every set in ℛ that overlaps R∖S also overlaps S, while R overlaps S but not R∖S, cf. [33, Lemma 23.15]. Thus |ℛ(R∖S)| < |ℛ(S)|. If V0 ∖ (R∪S) = ∅ then R∖S = S̄, and we get |ℛ(S̄)| < |ℛ(S)|, contradicting the assumption |ℛ(S)| ≤ |ℛ(S̄)|.

Lemma 12.

Suppose that 1 ≤ |ℛ(S)| ≤ |ℛ(S̄)|.

  (i) If R is a minimal set in ℛ(S) then dJ(R∩S, R∖S) ≥ μ.

  (ii) If R is the unique maximal set in ℛ(S) then dJ(R̄∩S, R̄∖S) ≥ ⌈(k−6)/2⌉.

Proof.

We prove (i). The minimality of R implies that if T ∈ {R∩S, R∖S} then ℛ ∪ {T} is laminar. Therefore by Lemma 8(ii), dJ(R∩S, R∖S) ≥ μ.

We prove (ii). The maximality of R implies that if T ∈ {R̄∩S, R̄∖S} then ℛ ∪ {T} is laminar. Therefore by Lemma 10, dJ(R̄∩S, R̄∖S) ≥ ⌈(k−6)/2⌉.

Lemma 13.

If 1 ≤ |ℛ(S)| ≤ |ℛ(S̄)| then dJ(S) ≥ k−4.

Proof.

Suppose that there are two disjoint sets in ℛ(S). Then there are two minimal sets in ℛ(S) that are disjoint, say R1, R2; see Fig. 2(b). Then by Lemma 12(i)

dJ(S) ≥ dJ(R1∩S, R1∖S) + dJ(R2∩S, R2∖S) ≥ μ + μ > k−4.

Otherwise, ℛ(S) is a nested family with a unique maximal set R1 and a unique minimal set R2; see Fig. 2(c), and note that R2 ⊆ R1 and possibly R1 = R2. Then by Lemma 12 we get

dJ(S) ≥ dJ(R̄1∩S, R̄1∖S) + dJ(R2∩S, R2∖S) ≥ ⌈(k−6)/2⌉ + μ = k−4,

concluding the proof.

Figure 2: Illustration to the proofs of Lemmas 10 and 13.

This concludes the proof of the connectivity guarantee of Algorithm 1. Now we show that the algorithm terminates after a polynomial number of iterations.

Lemma 14.

The algorithm terminates after O(n) iterations, where n=|V|.

Proof.

Let x be an extreme point computed at the first iteration of the algorithm. It is known (cf. [13]) that |{e ∈ E : 0 < xe < 1}| ≤ 2n−1, namely, at most 2n−1 variables xe are fractional. This follows from two facts. The first is that after the integral entries are substituted, the fractional entries are determined by a full rank system of constraints {x(δE(S)) = k : S ∈ ℒ}, where ℒ is laminar, cf. [15, 13]. The second is that a laminar family on a set of n elements has size at most 2n−1. The latter fact also implies that |ℛ| ≤ 2n−1. By Lemma 7, the ghost edges form a 2-matching on ℛ, thus |H| ≤ |ℛ| ≤ 2n−1. At each iteration we remove an edge from E, or add a set to ℛ, or add a ghost edge. Hence the number of iterations is bounded by |E| + |ℛ| + |H| ≤ (2n−1) + (2n−1) + (2n−1) = 3(2n−1) = O(n).

4 Properties of extreme point solutions (Lemma 4)

Here we will prove Lemma 4. Fix an extreme point solution x as in the lemma. Then there exists a family ℒ of f-positive x-tight sets such that x is the unique solution to the equation system {x(δE(S)) = f(S) : S ∈ ℒ}; namely, |ℒ| = |E| and the characteristic vectors of the sets {δE(S) : S ∈ ℒ} are linearly independent. We will call such a family ℒ x-defining. The following was essentially proved in [14].

Lemma 15 ([14]).

If there exists a laminar x-defining family ℒ then there is an x-core C such that dE(C) ∈ {2,3} and f(C) ∈ {1,2}; moreover, no edge in E has both ends in C.

Proof.

Let 𝒞 be the family of minimal sets in ℒ. It was proved by HKZ [14] that there is S ∈ 𝒞 such that dE(S) ≤ 3 and no edge in E has both ends in S; we provide a proof sketch for completeness of exposition. Suppose to the contrary that dE(S) ≥ 4 for every S ∈ 𝒞. Assign 2 tokens to every edge uv ∈ E, and then reassign these tokens – one to the smallest set in ℒ that contains u and one to the smallest set in ℒ that contains v (if such sets exist). The paper of Jain [15] shows that then for any S ∈ ℒ, the tokens assigned to sets in ℒ(S) = {T ∈ ℒ : T ⊆ S} can be redistributed such that S gets at least 4 tokens and every T ∈ ℒ(S) gets at least 2 tokens, giving the contradiction |E| > |ℒ|. The proof in [15] is by induction on |ℒ(S)|. If S ∈ 𝒞 is a leaf then S already has 4 tokens, since dE(S) ≥ 4 for every S ∈ 𝒞. Else, S can collect 2 tokens from each child in ℒ, so if S has at least two children then we are done. Else, S has exactly one child, and then [15] shows that by linear independence, there are 2 tokens assigned to S, which together with the extra 2 tokens of the child of S gives the required 4 tokens. This shows that there is S ∈ 𝒞 such that dE(S) ≤ 3.

Note that for any T ∈ 𝒞, no edge in E has both ends in T: for every e ∈ E there is Se ∈ ℒ such that e ∈ δE(Se), since the variable xe must appear in at least one equation among {x(δE(S)) = f(S) : S ∈ ℒ}; and since ℒ is laminar and T is minimal in ℒ, no set of ℒ can separate two nodes of T. Let S ∈ 𝒞 be such that dE(S) ≤ 3. If S is an x-core then we are done. Otherwise, let C be an x-core properly contained in S. We claim that δE(C) ⊆ δE(S) and that dE(C) ≥ 2. This follows from the following observations.

  1. dE(C) ≥ 2, since C is f-positive and since we assume that 0 < xe < 1 for all e ∈ E.

  2. Every e ∈ δE(C) has one end in C ⊊ S, and since no edge in E has both ends in S, its other end lies outside S; hence e ∈ δE(S).

This gives dE(C) ∈ {2,3}, as required. Finally, since C is x-tight, f(C) = x(δE(C)) is a positive integer smaller than 3 (as dE(C) ≤ 3 and each xe < 1), hence f(C) ∈ {1,2}.

In the rest of this section we will prove the following.

Lemma 16.

Under the assumptions of Lemma 4, if no laminar x-defining family exists then there are u,v ∈ U with dI(u,v) ≥ μ and dH(u,v) = 0.

In the rest of this section we assume that the assumptions of Lemma 4 hold (dH(u) ≤ 1 and dI(u) + 2dH(u) ≥ k−2 for all u ∈ U), and that no laminar x-defining family exists.

Lemma 17.

There are f-positive x-tight sets A,B that cross (namely, all four sets A∩B, A∖B, B∖A, Ā∩B̄ are non-empty) such that the following holds:

  (i) A∩B = {u} or Ā∩B̄ = {u} for some u ∈ U.

  (ii) A∖B = {v} or B∖A = {v} for some v ∈ U.

Proof.

For a cut S let χS ∈ {0,1}^E be the incidence vector of δE(S). Let 𝒯 be the family of f-positive x-tight sets (if S is x-tight but not f-positive, then χS = 0, since xe > 0 for all e ∈ E). Let us say that A,B ∈ 𝒯 are x-uncrossable if at least one of the following holds:

  (a) A∩B, A∪B are both x-tight and χA + χB = χA∩B + χA∪B.

  (b) A∖B, B∖A are both x-tight and χA + χB = χA∖B + χB∖A.

We will prove two claims that will imply the lemma.

Claim.

There are A,B ∈ 𝒯 that are not x-uncrossable.

Proof.

Let ℒ be a maximal laminar subfamily of 𝒯. Let span(𝒯) denote the linear space spanned by the incidence vectors of the sets in 𝒯, and similarly span(ℒ). Since no laminar x-defining family exists, and by the maximality of ℒ, there is A ∈ 𝒯 such that χA ∉ span(ℒ); any such A overlaps some set in ℒ. Choose such A that overlaps the minimal number of sets in ℒ, and let B ∈ ℒ be a set that A overlaps. We show that (a) cannot hold; the proof that (b) cannot hold is similar. Suppose to the contrary that (a) holds, namely that χA + χB = χA∩B + χA∪B and A∩B, A∪B are both x-tight. It is known that each of the sets A∩B, A∪B overlaps strictly fewer sets in ℒ than A, cf. [33, Lemma 23.15]. Thus by our choice of A, each of these sets has its incidence vector in span(ℒ), namely χA∩B, χA∪B ∈ span(ℒ). But χA = χA∩B + χA∪B − χB, contradicting that χA ∉ span(ℒ).

Claim.

If A,B ∈ 𝒯 are not x-uncrossable then A,B cross and both (i) and (ii) hold.

Proof.

We show that A,B cross. If A,B do not overlap then {A,B} = {A∩B, A∪B} or {A,B} = {A∖B, B∖A}, so (a) or (b) holds trivially, contradicting that A,B are not x-uncrossable. We claim that also Ā∩B̄ ≠ ∅. Otherwise, A∖B = B̄ and B∖A = Ā, and thus χA∖B = χB̄ = χB and χB∖A = χĀ = χA, implying that A∖B, B∖A are both x-tight and χA + χB = χA∖B + χB∖A, again a contradiction. Now we prove that (i) holds; the proof that (ii) holds is similar, and in fact can be deduced from (i) by the symmetry of f. Suppose to the contrary that (i) does not hold. Denoting Δ = x(δE(A∖B, B∖A)) we have:

f(A) + f(B) = x(δE(A)) + x(δE(B))
            = x(δE(A∩B)) + x(δE(A∪B)) + 2Δ
            ≥ f(A∩B) + f(A∪B) + 2Δ
            ≥ f(A) + f(B) + 2Δ.

The first equality is since A,B are tight, and the second can be verified by counting the contribution of each edge to both sides. The first inequality is since x is a feasible LP-solution. The second inequality is since none of A, B, A∩B, A∪B is a singleton from U or the complement of one, hence f coincides on these sets with the symmetric supermodular function g(S) = k − dI(S) − 2dH(S), which satisfies the inequality g(A∩B) + g(A∪B) ≥ g(A) + g(B). Consequently, equality holds everywhere, hence A∩B, A∪B are both tight. Moreover, Δ = 0, and this implies χA + χB = χA∩B + χA∪B, contradicting that A,B are not x-uncrossable.

By the first claim there exist A,B ∈ 𝒯 that are not x-uncrossable, while by the second claim such A,B cross and satisfy both (i) and (ii), concluding the proof of the lemma.

By the symmetry of f, assume w.l.o.g. that A∩B = {u} and A∖B = {v} for some u,v ∈ U.

Lemma 18.

If dH(u,v) = 0 then dI(u,v) ≥ μ.

Proof.

By the assumption of Lemma 4, dI(u) + 2dH(u) ≥ k−2 and dI(v) + 2dH(v) ≥ k−2. Since A = {u,v} is f-positive, dI(A) + 2dH(A) ≤ k−1. Thus we get (see Fig. 3(a), using dH(u,v) = 0)

k−1 ≥ dI(A) + 2dH(A) = [dI(u) + 2dH(u)] + [dI(v) + 2dH(v)] − 2dI(u,v) ≥ 2(k−2) − 2dI(u,v).

Consequently, 2dI(u,v) ≥ k−3, as claimed.

Figure 3: Illustration to the proofs of Lemmas 18 and 19.

The next lemma shows that if dH(u,v) = 1 then there is an "alternative" node w ∈ U such that we can add a ghost edge uw or vw.

Lemma 19.

If dH(u,v) = 1 then there is w ∈ U such that one of the following holds:

  (i) Ā∩B = {w}, dH(u,w) = 0, and dI(u,w) ≥ μ.

  (ii) Ā∖B = {w}, dH(v,w) = 0, and dI(v,w) ≥ μ.

Proof.

For disjoint node sets S,T let ψ(S,T) = dI(S,T) + 2dH(S,T) + x(δE(S,T)), and let ψ(S) = ψ(S,S̄) be the "coverage" of S by I, H, and the fractional edges. Note that

ψ(S) ≥ k−2 if S = {u} or S̄ = {u} for some u ∈ U, and ψ(S) ≥ k otherwise.

In particular, ψ(S) < k implies that S = {w} or S̄ = {w} for some w ∈ U.

Let W = Ā∩B and Z = Ā∖B, see Fig. 3(b). Note that ψ(A) = ψ(B) = k since A,B are x-tight, and that ψ(u,v) ≥ 2dH(u,v) + μ = 2 + μ (the ghost edge between u and v contributes 2dH(u,v) = 2, and its at least μ edges of I are uv-edges). This implies (see Fig. 3(b))

ψ(W,Z) ≤ ψ(B) − ψ(u,v) ≤ k − (2+μ).

Thus we get

2k − 2(2+μ) ≥ 2ψ(W,Z) = ψ(W) + ψ(Z) − ψ(A) = ψ(W) + ψ(Z) − k.

Consequently, ψ(W) + ψ(Z) ≤ 3k − 2(2+μ) < 2k, since 2(2+μ) > k. Thus ψ(W) < k or ψ(Z) < k, and w.l.o.g. assume that ψ(W) < k. This implies that W = {w} for some w ∈ U; see Fig. 3(c). Note that dH(u,w) = 0, since dH(u,v) = 1 and since dH(u) ≤ 1 by the assumption of Lemma 4. Also, dI(B) + 2dH(B) ≤ k−1, since B = {u,w} is f-positive. Thus we have

2dI(u,w) = 2[dI(u,w) + 2dH(u,w)]
         = [dI(u) + 2dH(u)] + [dI(w) + 2dH(w)] − [dI(B) + 2dH(B)]
         ≥ (k−2) + (k−2) − (k−1) = k−3,

concluding the proof.

To show a polynomial time implementation it is sufficient to prove the following lemma, that was essentially proved in [14]; we provide a proof for completeness of exposition.

Lemma 20 ([14]).

At any step of the algorithm, an extreme point solution x to LP (2) with f as in (1), as well as the x-cores, can be computed in polynomial time.

Proof.

An extreme point solution can be found using the Ellipsoid Algorithm. For that, we need to show the existence of a polynomial time separation oracle: given x, determine whether x is a feasible LP solution or find a violated inequality. Checking the inequalities 0 ≤ xe ≤ 1 is trivial. For cut inequalities, assign capacities we to the edges in E ∪ I ∪ H as follows: we = xe if e ∈ E, we = 1 if e ∈ I, and we = 2 if e ∈ H. The inequality of a cut S is violated if and only if w(δ(S)) < k and none of S, S̄ is a singleton in U. Thus to find an st-cut with a violated inequality we do the following. If s,t ∉ U then we find a minimum st-cut S; if w(δ(S)) < k then we found a violated inequality, else no st-cut with a violated inequality exists. If s ∈ U and t ∉ U, then for every u ∈ V∖{s,t} we contract u into s and find a minimum st-cut Su; if w(δ(Su)) < k for some u, then we found a violated inequality, else no st-cut with a violated inequality exists. Similarly, if s,t ∈ U then for every pair u,v ∈ V∖{s,t} we contract u into s and v into t and find a minimum st-cut Su,v; if w(δ(Su,v)) < k for some u,v then we found a violated inequality, else no st-cut with a violated inequality exists.

We show how to find all x-cores. Assume w.l.o.g. that 0 < xe < 1 for all e ∈ E. One can see that a cut C is an x-core if and only if C is an inclusion-minimal set that has w-capacity k and δE(C) ≠ ∅. For every ordered pair (s,t) with st ∈ E we check if there exists an st-cut C (so s ∈ C and t ∉ C) of capacity exactly k, and if so, then we find the inclusion-minimal such cut C(s,t). The x-cores are the inclusion-minimal sets among the sets C(s,t).
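The following minimal sketch illustrates the min-cut separation idea for the plain Cut-LP, i.e., without the singleton relaxation for U that Lemma 20 handles by contracting nodes; networkx and the name violated_cut are our own choices, and parallel edges are merged by summing capacities:

import networkx as nx

def violated_cut(nodes, weighted_edges, k):
    # weighted_edges: list of (u, v, w) with w = x_e for fractional edges and w = 1 for edges of I.
    # Every undirected edge becomes two opposite arcs, so s-t min cuts equal undirected min cuts.
    D = nx.DiGraph()
    D.add_nodes_from(nodes)
    for u, v, w in weighted_edges:
        for a, b in ((u, v), (v, u)):
            if D.has_edge(a, b):
                D[a][b]["capacity"] += w
            else:
                D.add_edge(a, b, capacity=w)
    s = nodes[0]
    for t in nodes[1:]:
        value, (S, _) = nx.minimum_cut(D, s, t)   # the global min cut separates s from some t
        if value < k - 1e-9:
            return set(S)                         # the cut constraint of S is violated
    return None                                   # all cuts have capacity at least k

# Tiny usage example: a 4-cycle with x = 1 on every edge has min cut 2, so k = 3 is violated.
print(violated_cut([0, 1, 2, 3], [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)], 3))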

This concludes the proof of Lemma 4 and thus also of Theorem 1.

5 Proof of Theorem 3

In the algorithm of Theorem 3 we use the following function f:

f(S) = k − dI(S) − dH(S) − 1   if S = {u} or S̄ = {u} for some u ∈ U,
f(S) = k − dI(S) − dH(S)        otherwise.    (3)

The main differences between Algorithm 1 and the following Algorithm 2 are:

  1. We use a different function f – the one defined in (3).

  2. We round to 1 edges e with xe ≥ 2/3.

  3. Adding a ghost edge uv requires μ = ⌈(k−1)/2⌉ integral uv-edges.

Algorithm 2 A bicriteria (3/2, k−2)-approximation.

A counterpart of Lemma 4 is as follows.

Lemma 21.

Let f be defined by (3) and suppose that dH(u) ≤ 1 and dI(u) + dH(u) ≥ k−1 for all u ∈ U. Let x be an extreme point of the polytope P = {x ∈ [0,1]^E : x(δE(S)) ≥ f(S) for all ∅ ≠ S ⊊ V} with 0 < xe < 2/3 for all e ∈ E, where E ≠ ∅. Then at least one of the following holds.

  (i) There is an x-core C with dE(C) ∈ {2,3} and f(C) = 1; moreover, no edge in E has both ends in C.

  (ii) There are u,v ∈ U with dH(u,v) = 0 and dI(u,v) ≥ μ, where μ = ⌈(k−1)/2⌉.

Furthermore, such u,v or such C can be found in polynomial time.

Proof.

By Lemma 15, if there exists an x-defining laminar family, then there is an x-core C such that dE(C) ∈ {2,3}, f(C) ∈ {1,2}, and no edge in E has both ends in C. But f(C) = 2 is not possible, as otherwise there is e ∈ δE(C) with xe ≥ 2/3. Hence in this case (i) holds.

We now show that if no laminar x-defining family exists then (ii) holds. Similarly to Lemma 17, we conclude that there is a pair A,B of x-tight f-positive sets that cross such that A∩B = {u} and A∖B = {v} for some u,v ∈ U.

We claim that if dH(u,v) = 0 then dI(u,v) ≥ μ. By the assumption of the lemma, dI(u) + dH(u) ≥ k−1 and dI(v) + dH(v) ≥ k−1. Since A = {u,v} is f-positive, dI(A) + dH(A) ≤ k−1. Thus we get (see Fig. 3(a))

k−1 ≥ dI(A) + dH(A) = [dI(u) + dH(u)] + [dI(v) + dH(v)] − 2dI(u,v) ≥ 2(k−1) − 2dI(u,v).

Consequently, 2dI(u,v) ≥ k−1, as claimed.

We prove that if dH(u,v) = 1 then there is w ∈ U such that one of the following holds:

  • Ā∩B = {w}, dH(u,w) = 0, and dI(u,w) ≥ μ.

  • Ā∖B = {w}, dH(v,w) = 0, and dI(v,w) ≥ μ.

Let ψ(S,T) = dI(S,T) + dH(S,T) + x(δE(S,T)) and ψ(S) = ψ(S,S̄). Note that

ψ(S) ≥ k−1 if S = {u} or S̄ = {u} for some u ∈ U, and ψ(S) ≥ k otherwise.

In particular, ψ(S) < k implies that S = {w} or S̄ = {w} for some w ∈ U.

Let W = Ā∩B and Z = Ā∖B. Note that ψ(A) = ψ(B) = k since A,B are x-tight, and that ψ(u,v) ≥ dH(u,v) + μ = 1 + μ. This implies ψ(W,Z) ≤ ψ(B) − ψ(u,v) ≤ k − (1+μ), see Fig. 3(b). Thus we get

2k − 2(1+μ) ≥ 2ψ(W,Z) = ψ(W) + ψ(Z) − ψ(A) = ψ(W) + ψ(Z) − k.

Consequently, ψ(W) + ψ(Z) ≤ 3k − 2(1+μ) < 2k, since 2(1+μ) > k. Thus ψ(W) < k or ψ(Z) < k, and w.l.o.g. ψ(W) < k. This implies that W = {w} for some w ∈ U, see Fig. 3(c). Note that dH(u,w) = 0, since dH(u,v) = 1 and since dH(u) ≤ 1 by the assumption of the lemma. Also, dI(B) + dH(B) ≤ k−1, since B = {u,w} is f-positive. Thus we have

2dI(u,w) = 2[dI(u,w) + dH(u,w)]
         = [dI(u) + dH(u)] + [dI(w) + dH(w)] − [dI(B) + dH(B)]
         ≥ (k−1) + (k−1) − (k−1) = k−1.

The polynomial time implementation details are identical to those in Lemma 20.

We now prove the connectivity guarantee. As before, let ℛ be the family of subsets of V0 that correspond to the sets contracted during the algorithm. Similarly to the previous case, we have the following sequence of lemmas.

Lemma 22.

When a core C is found in the algorithm but not yet contracted, dH(C) ≤ 1, x(δE(C)) = 1, and dI(C) = k − dH(C) − x(δE(C)) = k − dH(C) − 1. Consequently, dI(C) = k−1 if dH(C) = 0 and dI(C) = k−2 if dH(C) = 1.

Proof.

If dH(C) ≥ 2 then dI(C) + dH(C) ≥ 2μ + 2 ≥ k, contradicting that C is a core. Since dE(C) ∈ {2,3} and x(δE(C)) is a positive integer, we must have x(δE(C)) = 1, as otherwise there is e ∈ δE(C) with xe ≥ 2/3.

Lemma 23.

During the algorithm, dH(u) ≤ 1 and dI(u) + dH(u) = k−1 hold for all u ∈ U. Furthermore, dH(v) ≤ 2 for all v ∈ V∖U. Thus the algorithm terminates after O(m) iterations.

Proof.

By Lemma 22, when u enters U we have dH(u) ≤ 1 and dI(u) = k − dH(u) − 1. This implies f({u}) = k − dI(u) − dH(u) − 1 = 0. The addition of a ghost edge incident to u excludes u from U but does not increase f({u}), hence {u} will never enter U again. If we had dH(u) = 1 when u entered U, then u will leave U with dH(u) = 2 and will never enter U again. Thus dH(v) ≤ 2 for all v ∈ V∖U. The proof that the algorithm terminates after O(m) iterations is identical to the proof of Lemma 14.

Lemma 24.

When a core C is found in the algorithm but not yet contracted, the following holds for any proper subset S of C.

  (i) dI(S) ≥ dH(S)·μ and dI(S) ≥ k − dH(S) − 1.

  (ii) dI(S, C∖S) ≥ μ.

Proof.

We prove (i). dI(S) ≥ dH(S)·μ follows from Lemma 5. If S = {u} for some u ∈ U then dI(u) = k − dH(u) − 1 by Lemma 22. Otherwise, the constraint of S is relaxed by ghost edges only, and since δE(S) ⊆ δE(C) (by Lemma 21(i)) and x(δE(C)) ≤ 1 (by Lemma 22), we get dI(S) ≥ k − dH(S) − x(δE(S)) ≥ k − dH(S) − 1.

We prove (ii). If dH(S, C∖S) ≥ 1 then dI(S, C∖S) ≥ μ, since dI(u,v) ≥ μ when a ghost edge uv is added. Otherwise, dH(S) + dH(C∖S) = dH(C) ≤ 1, and since dI(C) ≤ k − dH(C) − 1 (by Lemma 22) and by part (i) we get

2dI(S, C∖S) = dI(S) + dI(C∖S) − dI(C)
            ≥ [k − dH(S) − 1] + [k − dH(C∖S) − 1] − [k − dH(C) − 1]
            = k − 1 − [dH(S) + dH(C∖S) − dH(C)] = k − 1,

concluding the proof.

Corollary 25.

Let S be a cut of G0 such that ℛ ∪ {S} is laminar. Then dJ(S) ≥ k − dH(S) − 1 and dJ(S) ≥ dH(S)·μ. Furthermore, if RS = V0 and S ∉ ℛ then dJ(S) ≥ k − dH(S). Consequently, dJ(S) ≥ max{k − dH(S) − 1, dH(S)·μ} ≥ k−2.

Proof.

The bound dJ(S) ≥ dH(S)·μ follows from Lemma 5, so we prove only the bound dJ(S) ≥ k − dH(S) − 1. For S ∈ ℛ it follows from Lemma 22, so assume that S ∉ ℛ. If S is contained in some set in ℛ, then let C be the inclusion-minimal set in ℛ that contains S. The bound then follows from Lemma 24(i). Otherwise, the constraint of S was relaxed by ghost edges only, and then dJ(S) ≥ k − dH(S).

Lemma 26.

Let R ∈ ℛ and let T be a proper subset of R̄ such that ℛ ∪ {T, R̄∖T} is laminar. Then dJ(T, R̄∖T) ≥ ⌈(k−4)/2⌉.

Proof.

If dH(T, R̄∖T) ≥ 1 then by Lemma 5 dJ(T, R̄∖T) ≥ μ > ⌈(k−4)/2⌉. Otherwise, dH(R) = dH(T) + dH(R̄∖T). Since dJ(T) ≥ k − dH(T) − 1 and dJ(R̄∖T) ≥ k − dH(R̄∖T) − 1 by Corollary 25, and dJ(R) ≤ k − dH(R) + 2 by Lemma 22, we get

2dJ(T, R̄∖T) = dJ(T) + dJ(R̄∖T) − dJ(R)
            ≥ [k − dH(T) − 1] + [k − dH(R̄∖T) − 1] − [k − dH(R) + 2]
            = k − 4 − [dH(T) + dH(R̄∖T) − dH(R)] = k − 4,

concluding the proof.

Lemma 27.

Suppose that 1 ≤ |ℛ(S)| ≤ |ℛ(S̄)|.

  (i) If R is a minimal set in ℛ(S) then dJ(R∩S, R∖S) ≥ μ.

  (ii) If R is the unique maximal set in ℛ(S) then dJ(R̄∩S, R̄∖S) ≥ ⌈(k−4)/2⌉.

Proof.

We prove (i). The minimality of R implies that if T ∈ {R∩S, R∖S} then ℛ ∪ {T} is laminar. Therefore by Lemma 24(ii), dJ(R∩S, R∖S) ≥ μ.

We prove (ii). The maximality of R implies that if T ∈ {R̄∩S, R̄∖S} then ℛ ∪ {T} is laminar. Therefore by Lemma 26, dJ(R̄∩S, R̄∖S) ≥ ⌈(k−4)/2⌉.

Lemma 28.

If 1 ≤ |ℛ(S)| ≤ |ℛ(S̄)| then dJ(S) ≥ k−2.

Proof.

Suppose that there are two disjoint sets in ℛ(S). Then there are two minimal sets in ℛ(S) that are disjoint, say R1, R2; see Fig. 2(b). Then by Lemma 27(i)

dJ(S) ≥ dJ(R1∩S, R1∖S) + dJ(R2∩S, R2∖S) ≥ μ + μ > k−2.

Otherwise, ℛ(S) is a nested family with a unique maximal set R1 and a unique minimal set R2; see Fig. 2(c), and note that R2 ⊆ R1 and possibly R1 = R2. Then by Lemma 27

dJ(S) ≥ dJ(R̄1∩S, R̄1∖S) + dJ(R2∩S, R2∖S) ≥ ⌈(k−4)/2⌉ + μ = k−2,

concluding the proof.

This concludes the proof of the connectivity guarantee of Algorithm 2 and thus also of Theorem 3.

References

  • [1] J. Cheriyan and R. Thurimella. Approximating minimum-size k-connected spanning subgraphs via matching. SIAM J. Computing, 30:528–560, 2000. doi:10.1137/S009753979833920X.
  • [2] J. Cheriyan and L. A. Végh. Approximating minimum-cost k-node connected subgraphs via independence-free graphs. SIAM Journal on Comput., 43(4):1342–1362, 2014. doi:10.1137/120902847.
  • [3] J. Chuzhoy and S. Khanna. An O(k³ log n)-approximation algorithm for vertex-connectivity survivable network design. Theory Comput., 8(1):401–413, 2012. doi:10.4086/TOC.2012.V008A018.
  • [4] A. Czumaj and A. Lingas. On approximability of the minimum-cost k-connected spanning subgraph problem. In SODA, pages 281–290, 1999.
  • [5] J. Edmonds. Edge-disjoint branchings. In B. Rustin, editor, Combinatorial Algorithms, pages 91–96. Academic, New York, 1973.
  • [6] J. Fakcharoenphol and B. Laekhanukit. An O(log² k)-approximation algorithm for the k-vertex connected subgraph problem. In STOC, pages 153–158, 2008.
  • [7] C. G. Fernandes. A better approximation ratio for the minimum size k-edge-connected spanning subgraph problem. J. Algorithms, 28(1):105–124, 1998. doi:10.1006/JAGM.1998.0931.
  • [8] A. Frank. Augmenting graphs to meet edge-connectivity requirements. SIAM J. Discrete Math., 5(1):25–53, 1992. doi:10.1137/0405003.
  • [9] A. Frank and É. Tardos. An application of submodular flows. Linear Algebra and its Applications, 114-115:329–348, 1989.
  • [10] G. N. Fredrickson and J. F. JáJá. Approximation algorithms for several graph augmentation problems. SIAM J. Comput., 10(2):270–283, 1981. doi:10.1137/0210019.
  • [11] G. N. Fredrickson and J. F. JáJá. On the relationship between the biconnectivity augmentation and traveling salesman problem. Theoretical Computer Science, 19:189–201, 1982. doi:10.1016/0304-3975(82)90059-7.
  • [12] H. N. Gabow and S. Gallagher. Iterated rounding algorithms for the smallest k-edge connected spanning subgraph. SIAM J. Computing, 41(1):61–103, 2012. doi:10.1137/080732572.
  • [13] H. N. Gabow, M. X. Goemans, É. Tardos, and D. P. Williamson. Approximating the smallest k-edge connected spanning subgraph by LP-rounding. Networks, 53(4):345–357, 2009. doi:10.1002/NET.20289.
  • [14] D. E. Hershkowitz, N. Klein, and R. Zenklusen. Ghost value augmentation for k-edge-connectivity. In STOC, pages 1853–1864, 2024. Full version available at arXiv:2311.09941.
  • [15] K. Jain. A factor 2 approximation algorithm for the generalized steiner network problem. Combinatorica, 21(1):39–60, 2001. doi:10.1007/S004930170004.
  • [16] A. R. Karlin, N. Klein, S. Oveis Gharan, and X. Zhang. An improved approximation algorithm for the minimum k-edge connected multi-subgraph problem. In STOC, pages 1612–1620, 2022.
  • [17] S. Khuller. Approximation algorithms for finding highly connected subgraphs. In D. S. Hochbaum, editor, Approximation Algorithms for NP-hard Problems, chapter 6, pages 236–265. PWS, 1995.
  • [18] S. Khuller and U. Vishkin. Biconnectivity approximations and graph carvings. J. ACM, 41(2):214–235, 1994. doi:10.1145/174652.174654.
  • [19] G. Kortsarz and Z. Nutov. Approximating node connectivity problems via set covers. Algorithmica, 37(2):75–92, 2003. doi:10.1007/S00453-003-1027-4.
  • [20] L. C. Lau, R. Ravi, and M. Singh. Iterative methods in combinatorial optimization, volume 46. Cambridge University Press, 2011.
  • [21] L. C. Lau and H. Zhou. A unified algorithm for degree bounded survivable network design. Math. Program., 154(1-2):515–532, 2015. doi:10.1007/S10107-015-0858-5.
  • [22] L. Lovász. Combinatorial Problems and Exercises. North-Holland Publishing Co., Amsterdam, 1979.
  • [23] Z. Nutov. Approximating minimum-cost connectivity problems via uncrossable bifamilies. ACM Trans. Algorithms, 9(1):1:1–1:16, 2012. doi:10.1145/2390176.2390177.
  • [24] Z. Nutov. Small ℓ-edge-covers in k-connected graphs. Discret. Appl. Math., 161(13-14):2101–2106, 2013.
  • [25] Z. Nutov. Approximating minimum-cost edge-covers of crossing biset-families. Combinatorica, 34(1):95–114, 2014. doi:10.1007/S00493-014-2773-4.
  • [26] Z. Nutov. Improved approximation algorithms for minimum cost node-connectivity augmentation problems. Theory Comput. Syst., 62(3):510–532, 2018. doi:10.1007/S00224-017-9786-5.
  • [27] Z. Nutov. The k-connected subgraph problem. In T. Gonzalez, editor, Approximation Algorithms and Metaheuristics, chapter 12, pages 213–232. Chapman & Hall, 2018.
  • [28] Z. Nutov. A (4+ϵ)-approximation for k-connected subgraphs. J. Comput. Syst. Sci., 123:64–75, 2022.
  • [29] D. Pritchard. k-edge-connectivity: Approximation and LP relaxation. In WAOA, pages 225–236, 2010.
  • [30] A. Sebö and J. Vygen. Shorter tours by nicer ears: 7/5-approximation for the graph-tsp, 3/2 for the path version, and 4/3 for two-edge-connected subgraphs. Combinatorica, 34(5):597–629, 2014. doi:10.1007/S00493-014-2960-3.
  • [31] M. Singh and L. C. Lau. Approximating minimum bounded degree spanning trees to within one of optimal. J. ACM, 62(1), 2015.
  • [32] V. Traub and R. Zenklusen. A (1.5+ϵ)-approximation algorithm for weighted connectivity augmentation. In STOC, pages 1820–1833, 2023.
  • [33] V. V. Vazirani. Approximation Algorithms. Springer, 2010.