
Approximating Maximum Cut on Interval Graphs and Split Graphs Beyond Goemans-Williamson

Jungho Ahn, Inha University, Incheon, South Korea
Ian DeHaan, University of Michigan, Ann Arbor, MI, USA
Eun Jung Kim, KAIST and IBS, Daejeon, South Korea, and CNRS, Paris, France
Euiwoong Lee, University of Michigan, Ann Arbor, MI, USA
Abstract

We present a polynomial-time (α_GW + ε)-approximation algorithm for the Maximum Cut problem on interval graphs and split graphs, where α_GW ≈ 0.878 is the approximation guarantee of the Goemans-Williamson algorithm and ε > 10^{−34} is a fixed constant. To attain this, we give an improved analysis of a slight modification of the Goemans-Williamson algorithm for graphs in which triangles can be packed into a constant fraction of their edges. We then pair this analysis with structural results showing that both interval graphs and split graphs either have such a triangle packing or have maximum cut close to their number of edges. We also show that, subject to the Small Set Expansion Hypothesis, there exists a constant c > 0 such that there is no polynomial-time (1 − c)-approximation for Maximum Cut on split graphs.

Keywords and phrases:
Maximum cut, graph theory, interval graphs, split graphs
Category:
APPROX
Funding:
Jungho Ahn: Supported by Leverhulme Trust Research Project Grant RPG-2024-182.
Eun Jung Kim: Supported by Institute for Basic Science (IBS-R029-C1) and National Research Foundation, Korea (RS-2025-00563533).
Euiwoong Lee: Supported in part by NSF grant CCF-2236669.
Copyright and License:
© Jungho Ahn, Ian DeHaan, Eun Jung Kim, and Euiwoong Lee; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Approximation algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2507.10436
Editors:
Alina Ene and Eshan Chattopadhyay

1 Introduction

Given a graph G=(V,E), the Maximum Cut problem asks for a subset of vertices SV that maximizes the number of edges with exactly one endpoint contained in S. In this paper, we study the approximability of the Maximum Cut problem on interval graphs and split graphs.

A graph G = (V, E) is interval if there exists a collection of intervals {I_v}_{v∈V} on the real line such that uv ∈ E if and only if I_u ∩ I_v ≠ ∅. See Figure 1 for an example. Interval graphs are used in biology, where they model natural phenomena such as DNA and food webs [13]. They have also been used in the study of register allocation, where vertices correspond to variables and intervals correspond to "live ranges" [11]. Finally, they have numerous desirable theoretical properties and have arisen as a natural class of graphs to design algorithms for. For example, it is shown in [5] that a certain variant of the graph homomorphism problem is polynomial-time solvable if and only if the label graph is interval.

Figure 1: An interval graph. The left figure shows the interval representation. The right figure shows the resulting graph.

A graph G = (V, E) is split if there exists a partition V = K ∪ I such that G[K], the subgraph induced by K, is a clique and G[I], the subgraph induced by I, is an independent set. See Figure 2 for an example. Split graphs and interval graphs are both important subclasses of chordal graphs, which themselves are a subclass of perfect graphs. In particular, split graphs are often the "simplest" subclass of perfect graphs in which problems are difficult to approximate. Thus, considering split graphs is a natural first step when attempting to characterize a problem on chordal or perfect graphs. In this paper, we show that, assuming the Small Set Expansion Hypothesis, there is some constant c > 0 such that there is no polynomial-time (1 − c)-approximation for Maximum Cut on split graphs. To our knowledge, this is the first known hardness of approximation result for Maximum Cut on perfect graphs.

Figure 2: A split graph with K = {k_1, k_2, k_3, k_4} and I = {i_1, i_2, i_3}.

Maximum Cut is one of the original 21 problems shown to be NP-Complete by Karp [14] and has long been a staple problem among algorithm researchers. The seminal work of Goemans and Williamson shows that there is a polynomial-time (α_GW ≈ 0.878)-approximation algorithm for Maximum Cut on all graphs [8]. The optimality of this result was open until 2007, when Khot et al. showed that, subject to the Unique Games Conjecture, there is no polynomial-time approximation algorithm for Maximum Cut with an approximation ratio better than α_GW [15].

Remarkably, relatively little is known about Maximum Cut when the input graph is restricted to some structured class. Even for subclasses of perfect graphs, where problems such as Independent Set and Chromatic Number admit polynomial-time algorithms based on semidefinite programming, Maximum Cut, another flagship application of semidefinite programming, remains mostly unexplored. In particular, the extremely well-structured classes of interval graphs and split graphs, two important subclasses of perfect graphs, had no known approximation algorithm with a ratio better than α_GW prior to this work. Our main results provide the first improved approximation for these classes of graphs since the work of Goemans and Williamson.

Theorem 1.

There is a polynomial-time (α_GW + 10^{−34})-approximation algorithm for Maximum Cut on interval graphs.

Theorem 2.

There is a polynomial-time (α_GW + 10^{−16})-approximation algorithm for Maximum Cut on split graphs.

On the hardness side, it was shown by Bodlaender and Jansen in 2000 that Maximum Cut on split graphs is NP-Complete [3]. Maximum Cut on interval graphs has seen further attention lately: it was shown only recently that Maximum Cut on interval graphs is NP-Complete [1]. Further work refined this hardness to interval graphs with at most 4 interval lengths [4], and later at most 2 interval lengths [2]. Hardness for unit interval graphs, that is, interval graphs with only 1 interval length, remains open. We remark that none of these hardness results imply any hardness of approximation. Our final main result is that, subject to the Small Set Expansion Hypothesis of [17], Maximum Cut on split graphs is hard to approximate within some constant factor.

Theorem 3.

There exists a constant c>0 such that there is no polynomial-time (1c)-approximation algorithm for Maximum Cut on split graphs, subject to the Small Set Expansion Hypothesis under randomized reductions.

Table 1: Known results for Maximum Cut. Our results are marked with *. Results marked with † are subject to the Unique Games Conjecture. Results marked with ‡ are subject to the Small Set Expansion Hypothesis under randomized reductions.

                   lower bound                     upper bound
General graphs     α_GW [8]                        α_GW [15] †
Degree-d graphs    α_GW + Ω(1/(d² log d)) [12]     α_GW + O(1/√d) [19] †
Interval graphs    α_GW + 10^{−34} *               NP-Complete [1]
Split graphs       α_GW + 10^{−16} *               (1 − c) * ‡
Planar graphs      1 [10]
Line graphs        1 [9]

1.1 Our Techniques

1.1.1 Analysis of the Perturbed Goemans-Williamson Algorithm

Our starting point is the Goemans-Williamson algorithm [8]. This algorithm first solves the following semidefinite program (SDP)

maximize { (1/2) · Σ_{uv∈E} (1 − x_u · x_v) : x_v ∈ 𝕊^{|V|−1} for all v ∈ V }.

That is, the algorithm maps each vertex to a unit vector in a way that maximizes the sum of 1 − x_u · x_v = 1 − cos θ_{uv}, where θ_{uv} is the angle between x_u and x_v. Next, the algorithm samples a random Gaussian vector r ~ 𝒩(0,1)^{|V|}, creates the set S = {v : r · x_v ≥ 0}, and returns the cut defined by S. This step is equivalent to sampling a random hyperplane and taking S to be the vertices on one side of this hyperplane. It is a straightforward calculation to see that the probability of an edge e ∈ E being cut by S is equal to θ_e/π. Thus, the approximation guarantee of the Goemans-Williamson algorithm is equal to

α_GW := min_{θ∈[0,π]} (2/π) · θ/(1 − cos θ) ≈ 0.878.

Define θ_c := argmin_{θ∈[0,π]} θ/(1 − cos θ) ≈ 134° to be the "critical angle" at which this ratio is minimized. We plot the performance guarantee over all angles in Figure 3. Any edge e ∈ E with θ_e ≠ θ_c has approximation ratio strictly better than α_GW. Intuitively, if at least a constant fraction of edges have angle bounded away from θ_c, we should expect the Goemans-Williamson algorithm to achieve a better approximation ratio. Unfortunately, this is not exactly true.
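The constants above can be reproduced numerically. The following is an illustrative sketch (a simple grid search, not part of the paper's derivation); the names gw_ratio, theta_c, and alpha_gw are ours:

```python
import math

# Locate the Goemans-Williamson constant alpha_GW and the critical angle
# theta_c by minimizing (2/pi) * theta / (1 - cos(theta)) over a fine grid.
def gw_ratio(theta):
    # approximation ratio contributed by an edge with angle theta
    return (2.0 / math.pi) * theta / (1.0 - math.cos(theta))

theta_c = min((i * math.pi / 10**5 for i in range(1, 10**5 + 1)), key=gw_ratio)
alpha_gw = gw_ratio(theta_c)
print(alpha_gw, math.degrees(theta_c))  # roughly 0.8786 and 133.6 degrees
```

The minimizer is close to 134°, matching the value of θ_c quoted above.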

Figure 3: Plot of (2/π) · θ/(1 − cos θ) for θ ∈ (0, π).

Consider the graph P_3, the path on 3 vertices. Suppose the SDP is solved suboptimally, so that the first edge has angle θ_c and the second edge has angle 0. Even though half of the edges have angle bounded away from θ_c, the expected size of the cut from rounding is still only α_GW times the optimal value of the SDP. To handle situations like this, we introduce a "perturbed" version of the Goemans-Williamson algorithm. In this perturbed algorithm, any vertex v ∈ V with |r · x_v| < η for some small η will instead be included in S with probability 1/2. The motivation behind this perturbation is to consider what happens when every x_v is "moved" by a small amount. If x_u and x_v are close together, then they are likely to move further apart after this perturbation. Conversely, if x_u and x_v are far apart, they are likely to move closer together after this perturbation.
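The perturbed rounding step just described can be sketched as follows. This is an illustrative toy implementation: the vectors are hard-coded (an actual run would use an SDP solution), and the function name perturbed_round is ours:

```python
import random

# Perturbed hyperplane rounding: vertices whose projection onto the random
# direction r is smaller than eta in absolute value get a random coin-flip
# side; all others are rounded by the sign of the projection as usual.
def perturbed_round(vectors, eta, rng):
    dim = len(next(iter(vectors.values())))
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    side = {}
    for v, x in vectors.items():
        proj = sum(ri * xi for ri, xi in zip(r, x))
        if abs(proj) < eta:
            side[v] = rng.random() < 0.5   # perturbed: uniformly random side
        else:
            side[v] = proj >= 0            # usual hyperplane rounding
    return {v for v, s in side.items() if s}

rng = random.Random(0)
vecs = {"a": (1.0, 0.0), "b": (0.0, 1.0), "c": (-1.0, 0.0)}
S = perturbed_round(vecs, eta=0.1, rng=rng)
print(S)  # one side of the cut
```

Setting eta=0 recovers the original Goemans-Williamson rounding.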

Indeed, in Lemma 17, we show that the perturbed algorithm is more likely to cut any edge e ∈ E with θ_e < π/2, and less likely to cut any edge with θ_e > π/2. Moreover, the further θ_e is from π/2, the more the results of the perturbed algorithm differ from the unperturbed algorithm. Thus, if there are many edges with angle near 0 but few edges with angle near π, the perturbed algorithm will achieve an approximation ratio above α_GW; see Lemma 19. On the other hand, if there are many edges with angle near π, then the Goemans-Williamson algorithm itself will achieve an approximation ratio above α_GW; see Lemma 15. Thus, if we take the better of the results of the perturbed and unperturbed Goemans-Williamson algorithms, we can ignore the case of having many angle-0 edges, which contribute little to the optimum. That is, as shown in Lemma 21, we can indeed assume that if at least a constant fraction of edges have angle bounded away from θ_c, the Goemans-Williamson algorithm will achieve an approximation ratio above α_GW.

Consider a single triangle T={uv,vw,wu}. A straightforward analysis shows that

θ_{uv} + θ_{vw} + θ_{wu} ≤ 360°,

regardless of the values of x_u, x_v, x_w. Thus, there is some edge e ∈ T with θ_e ≤ 120° < θ_c. That is, for every triangle in our graph, we can expect at least one of its edges to have an angle bounded away from θ_c. This leads us into Section 2, where we show that every interval graph and split graph either has a large edge-disjoint triangle packing or has a cut with at least 0.9|E| edges.
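The bound θ_{uv} + θ_{vw} + θ_{wu} ≤ 360° can also be checked empirically. The following sketch (ours, a random experiment rather than a proof) samples triples of random unit vectors and records the largest angle sum seen:

```python
import math, random

# Empirical check: for any three unit vectors, the pairwise angles
# sum to at most 360 degrees.
def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def rand_unit(rng, dim=3):
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(a * a for a in v))
    return [a / norm for a in v]

rng = random.Random(42)
worst = 0.0
for _ in range(10000):
    xu, xv, xw = (rand_unit(rng) for _ in range(3))
    s = angle(xu, xv) + angle(xv, xw) + angle(xw, xu)
    worst = max(worst, s)
print(worst)  # never exceeds 360
```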

1.1.2 Finding an Edge-Disjoint Triangle Packing

Suppose G = (K ∪ I, E) is a split graph. If at least 0.1|E| of its edges are contained in the clique K, then we can pack triangles almost perfectly into those edges. Otherwise, at least 0.9|E| edges cross between K and I, and so we have found a cut with 0.9|E| ≥ 0.9 · mc(G) edges, where mc(G) is the size of a maximum cut of G. Thus, we find ourselves in a "win-win" situation, where G either has an edge-disjoint triangle packing on a constant fraction of its edges, or G is nearly bipartite, and we can find a large cut. This analysis leads directly to an improved approximation for Maximum Cut on split graphs.

It turns out that the same “win-win” structural result also holds for interval graphs, which we prove in Theorem 5. To show this, we employ a marking scheme, where each vertex is classified as either “small” or “large” based on how it interacts with the rest of the graph. We first show in Lemma 8 that the number of edges between large vertices is low compared to |E|. The situation for small vertices is more complicated. We show in Lemmas 9 and 10 that one of the following conditions must always hold:

  1. G has a bridge; or

  2. there is a clique C in G such that the sum of degrees Σ_{v∈C} d_v is not much more than |C|²; or

  3. the number of edges between small vertices is low compared to |E|.

If Condition 1 holds, we can essentially delete the bridge; because adding a bridge to a bipartite graph does not create an odd cycle, we can always add back in the bridge after finding a cut in the rest of the graph. If Condition 2 holds and |C| ≥ 3, then we can pack edge-disjoint triangles into C and delete C from the graph. Because the sum of degrees is not much more than |C|², we have packed triangles into at least a constant fraction of the deleted edges. The case of |C| ≤ 2 is trickier, as one cannot pack triangles into one or two vertices, and must be handled separately. If Condition 3 holds, then most of the edges in G go between small and large vertices. If we have already packed triangles into a constant fraction of edges via Condition 2, then we are done. Otherwise, "most" edges remain in the graph, and we have found a cut on "most" of these remaining edges. For the right definition of "most," this is a large cut of at least 0.9|E| edges.

1.1.3 Hardness of Approximating Maximum Cut on Split Graphs

To show that Maximum Cut is hard to approximate on split graphs, we start from the following hardness result, which follows using standard techniques from [18]. Assuming the Small Set Expansion Hypothesis holds under randomized reductions, for any sufficiently small ε > 0, there is no polynomial-time algorithm that, given a graph G = (V, E), can distinguish between the following two cases.

  1. There exists a cut S ⊆ V with |S| ≈ 0.5|V| such that |δ(S)| ≤ O(ε)·|E|.

  2. For all cuts S ⊆ V with |S| ≤ 0.5|V|, either |S| ≤ 0.2|V| or |δ(S)| ≥ Ω(ε)·|E|.

That is, in Case 1, there is a cut with nearly half the vertices that cuts very few edges. In Case 2, all cuts with at least a constant fraction of the vertices must cut many more edges. We transform G into a split graph G′ = (K ∪ I, E′) by turning V into a clique and setting K := V, setting I := {v_e : e ∈ E}, and adding an edge from v_e to each endpoint of e. Any cut of G′ can have at most 0.25|V|² edges from those edges contained in K, and at most 2|E| edges from those going between I and K. In Case 1, there exists a cut that does cut nearly 0.25|V|² + 2|E| edges. In Case 2, any cut of G′ either cuts at most 0.2|V|² edges from those edges contained in K, or at most (2 − Ω(ε))|E| from those edges crossing between I and K. After ensuring that |E| = Θ(|V|²), this shows that Maximum Cut is hard to approximate on split graphs.
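The transformation from G to G′ described above takes only a few lines. The following sketch is ours; in particular, representing each edge-vertex v_e as a tagged tuple is an illustrative choice:

```python
# The reduction: K := V becomes a clique, and each original edge e = uv
# contributes an independent-set vertex v_e adjacent to u and v.
def to_split_graph(V, E):
    K = set(V)
    I = {("edge", u, v) for (u, v) in E}
    new_edges = {(u, v) for u in V for v in V if u < v}      # clique on K
    for (u, v) in E:
        new_edges.add((u, ("edge", u, v)))
        new_edges.add((v, ("edge", u, v)))
    return K, I, new_edges

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4)]
K, I, Ep = to_split_graph(V, E)
print(len(K), len(I), len(Ep))  # 4 3 12
```

As in the analysis above, |E′| = |V|(|V| − 1)/2 + 2|E| (here 6 + 6 = 12).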

1.2 Preliminaries

1.2.1 Graphs

We consider only finite graphs in this paper. Apart from Section 4, all considered graphs will also be simple and unweighted. When the subject graph G is clear from context, we will use V := V(G) to refer to the vertex set of G, E := E(G) to refer to the edge set of G, and n := |V| to refer to the number of vertices in G. Let S ⊆ V be a subset of vertices of G. We define G[S] := (S, {uv ∈ E : u, v ∈ S}) as the subgraph of G induced by S and E[S] := E(G[S]) as the set of edges in this subgraph. We define δ_G(S) := {uv ∈ E : u ∈ S, v ∉ S} to be the cut induced by S and mc(G) := max_{S⊆V} |δ_G(S)| to be the maximum cut size of G. We will often omit G and write only δ(S) when the graph G is clear from context. Let v ∈ V be any vertex of G. We define δ(v) := δ({v}) as the set of edges adjacent to v, N(v) := {u ∈ V : uv ∈ E} as the set of neighbors of v, and d_v := |N(v)| as the degree of v. We define a triangle of G as a set of edges {uv, vw, wu} ⊆ E forming a triangle. We say that G has a triangle packing of size t if there exist t edge-disjoint triangles T_1, T_2, …, T_t ⊆ E.

1.2.2 Gaussians

We define 𝒩(0,1) as the Gaussian distribution with mean 0 and variance 1. Moreover, we define 𝒩(0,1)^n as the n-dimensional Gaussian distribution, where a sampled vector v ~ 𝒩(0,1)^n has v_i ~ 𝒩(0,1) for each i ∈ [n], with v_i independent of v_j for i ≠ j. We make use of the fact that sampling a vector in this way is equivalent to sampling a random direction; that is, after normalizing, v becomes a uniformly random unit vector in n-dimensional space. In particular, this means that 𝒩(0,1)^n is symmetric up to rotations, which we will exploit frequently. We will also make use of the following lemma, which says that 𝒩(0,1) is roughly equivalent to a uniform distribution close to 0.

Lemma 4.

For a randomly sampled r𝒩(0,1), we have that

x/2 ≤ P[|r| ≤ x] ≤ x

for all x[0,1].

The proof of Lemma 4 follows from direct calculation and is not instructive, so we omit it.
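Although the proof is omitted, Lemma 4 is easy to check numerically via the Gaussian CDF, using P[|r| ≤ x] = erf(x/√2). The following sketch (ours) verifies both bounds on a grid:

```python
import math

# Sanity check of Lemma 4: for r ~ N(0,1), P[|r| <= x] = erf(x / sqrt(2)),
# and this probability lies between x/2 and x for all x in (0, 1].
for i in range(1, 101):
    x = i / 100.0
    p = math.erf(x / math.sqrt(2.0))
    assert x / 2.0 <= p <= x, (x, p)
print("bounds hold on the grid")
```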

2 Triangle Packing and Maximum Cut Tradeoff

This section is devoted to proving the following structural result.

Theorem 5.

If G = (V, E) is an interval graph, then G either has a triangle packing of size 10^{−8}|E| or has a cut of size at least 0.9|E| that can be found in polynomial time.

2.1 Warmup: Tradeoff for Split Graphs

As a warmup, we prove the following structural result that is essentially equivalent to Theorem 5, except it is for split graphs instead of interval graphs.

Theorem 6.

If G=(V,E) is a split graph, then G either has a triangle packing of size 0.01|E| or has a cut of size at least 0.9|E| that can be found in polynomial time.

Before we prove this, we need the following helpful lemma.

Lemma 7.

The complete graph K_n on n ≥ 3 vertices has an edge-disjoint triangle packing of size at least |E(K_n)|/10 = n(n−1)/20.

Edge-disjoint triangle packings in complete graphs have been studied in the literature before, with [6] giving an optimal bound. Lemma 7 is not very close to the optimal bound, but it is sufficient for our purposes and substantially easier to prove.

Proof.

Label the vertices of K_n with the numbers 0, 1, 2, …, n−1. Label each triangle {uv, vw, wu} with (u + v + w) modulo n. Fix any edge uv ∈ E(K_n). Then, for each possible triangle label, there is only one triangle involving uv with that label. So we may take all the triangles of any specific label and obtain an edge-disjoint triangle packing. By the pigeonhole principle, some label has at least (1/n)·(n choose 3) triangles. Therefore, K_n has a triangle packing of size (1/n)·(n choose 3) ≥ |E(K_n)|/10, as wanted.
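The labeling scheme in the proof above is easy to implement and check. The following sketch (ours) builds the packing for a small K_n and verifies both edge-disjointness and the claimed bound:

```python
from itertools import combinations

# The labeling scheme from the proof of Lemma 7: label each triangle of K_n
# by (u + v + w) mod n; triangles sharing a label are pairwise edge-disjoint.
def triangle_packing(n):
    best = []
    for label in range(n):
        tris = [t for t in combinations(range(n), 3) if sum(t) % n == label]
        if len(tris) > len(best):
            best = tris
    return best

n = 12
packing = triangle_packing(n)
used = set()
for (u, v, w) in packing:
    for edge in ((u, v), (v, w), (u, w)):
        assert edge not in used           # edge-disjointness
        used.add(edge)
assert len(packing) >= n * (n - 1) // 20  # the bound of Lemma 7
print(len(packing))
```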

With Lemma 7 in hand, we can now prove Theorem 6.

Proof of Theorem 6.

Write V = K ∪ I, where K and I are the clique and independent portions of G, respectively. Then we can partition E = δ(K) ∪ E[K]. If |δ(K)| ≥ 0.9|E|, then we are done, as we have constructed a cut of size at least 0.9|E| in polynomial time. Otherwise, we have that |E[K]| ≥ 0.1|E|. Now, recalling that G[K] is the complete graph on |K| vertices, we use Lemma 7 to find that G[K] (and thus G) has an edge-disjoint triangle packing of size at least |E[K]|/10 ≥ 0.01|E|.
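The case analysis in the proof above can be sketched as follows; split_win_win is a hypothetical helper name, and the example graph is ours:

```python
# Win-win for split graphs, assuming the partition (K, I) is given: either the
# crossing edges already form a 0.9-fraction cut, or a tenth of the clique
# edges can be covered by an edge-disjoint triangle packing (Lemma 7).
def split_win_win(K, I, edges):
    crossing = [(u, v) for (u, v) in edges if (u in K) != (v in K)]
    clique_edges = [(u, v) for (u, v) in edges if u in K and v in K]
    if len(crossing) >= 0.9 * len(edges):
        return ("cut", crossing)               # delta(K) is a large cut
    # Otherwise |E[K]| >= 0.1|E|; Lemma 7 packs >= |E[K]|/10 triangles into K.
    return ("packing", len(clique_edges) // 10)

K, I = {0, 1, 2, 3, 4}, {5, 6}
edges = [(u, v) for u in K for v in K if u < v] + [(0, 5), (1, 6)]
kind, result = split_win_win(K, I, edges)
print(kind, result)
```

On this example (10 clique edges, 2 crossing edges), the clique branch fires and one triangle is guaranteed.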

2.2 Finding a Cut

The proof of Theorem 5 has a similar flavor to that of Theorem 6. While the proof of Theorem 6 either finds a large cut or a single large clique to pack triangles into, we need to repeatedly apply Lemma 7 during the proof of Theorem 5, as there may be no single large-enough clique. That is, at each stage, we either identify a clique that we may pack triangles into and remove, being careful to bound the number of non-clique edges we delete, or we conclude that there is a large cut.

Given an interval graph G = (V, E), we proceed by partitioning the vertices into "small" and "large" vertices, V = 𝒮 ∪ ℒ. At each step, we will either certify that the implied cut is large, |δ(𝒮)| ≥ 0.9|E|, or find a way to expand a triangle packing on edges incident to 𝒮.

For any point t on the real line, define B_t := {v ∈ V : t ∈ I_v} to be the "bag" of vertices whose intervals occupy position t. For a vertex v ∈ V, define b_v := min_{t∈I_v} |B_t| to be the size of the smallest bag in v's interval. Let T be a constant whose value we will decide later. We say that v is "small" if d_v ≤ T · b_v and "large" otherwise. In other words, if at least a constant fraction of v's edges "come from" v's smallest bag, then v is small. We define 𝒮 as the set of small vertices and ℒ as the set of large vertices.
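The bags B_t and the quantities b_v can be computed directly from an interval representation. The following sketch (ours) evaluates candidate positions at interval endpoints and at midpoints between consecutive endpoints, which suffices because bag membership only changes at endpoints:

```python
# Compute the bags B_t and quantities b_v from an interval representation.
def bags_and_bv(intervals):
    ends = sorted({p for (l, r) in intervals.values() for p in (l, r)})
    points = ends + [(s + t) / 2.0 for s, t in zip(ends, ends[1:])]
    bag = {t: {v for v, (l, r) in intervals.items() if l <= t <= r}
           for t in points}
    b = {v: min(len(bag[t]) for t in points if l <= t <= r)
         for v, (l, r) in intervals.items()}
    return bag, b

intervals = {"a": (0, 2), "b": (1, 3), "c": (2, 4)}
bag, b = bags_and_bv(intervals)
print(b)  # {'a': 1, 'b': 2, 'c': 1}
```

A vertex v would then be classified as small exactly when d_v ≤ T · b[v].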

Lemma 8.

|E[ℒ]| ≤ 8T^{−1}|E|.

Proof.

Define an ordering ⪯ on ℒ by u ⪯ v if b_u < b_v. If b_u = b_v, we say u ⪯ v if the leftmost point of I_u is to the left of the leftmost point of I_v. For u, v ∈ ℒ with b_u = b_v and equal leftmost points, we break ties arbitrarily to extend ⪯ into a total ordering.

Fix any v ∈ ℒ. We will bound the number of edges uv ∈ E[ℒ] with u ⪯ v as a function of d_v. Consider any u ∈ N(v) such that I_u does not contain the leftmost point or the rightmost point of I_v. Then we must have that I_u ⊆ I_v and thus b_u ≥ b_v. Combined with the fact that the leftmost point of I_u is to the right of the leftmost point of I_v, this implies that v ⪯ u. Thus, we need only consider u ∈ N(v) such that I_u contains either the leftmost or the rightmost point of I_v.

Let P_v be the set of u ∈ N(v) such that I_u contains the leftmost point of I_v. Let L_v ⊆ P_v be the first min{|P_v|, b_v} of these neighbors sorted by increasing leftmost point. Similarly, let R_v ⊆ P_v be the last min{|P_v|, b_v} of these neighbors sorted by increasing rightmost point. Now notice that all u ∈ P_v ∖ (L_v ∪ R_v) have b_u > b_v and thus v ⪯ u. This is because every point of I_u is contained in all of the intervals of L_v or all of the intervals of R_v, which have size at least b_v assuming P_v ∖ (L_v ∪ R_v) is non-empty. See Figure 4 for an illustration. Thus, the number of vertices u ∈ P_v such that u ⪯ v is at most |L_v ∪ R_v| ≤ 2b_v. We can similarly bound by 2b_v the number of vertices u ∈ N(v) such that I_u contains the rightmost point of I_v and u ⪯ v.

Putting these bounds together with the fact that v ∈ ℒ, we can bound the total number of edges uv ∈ E[ℒ] with u ⪯ v by 4b_v ≤ 4T^{−1}d_v. Thus, we find that |E[ℒ]| ≤ Σ_{u∈ℒ} 4T^{−1}d_u ≤ 8T^{−1}|E|.

Figure 4: Representation of I_u, I_v, L_v, R_v with b_v = 3. Every point of I_u is contained in either all of L_v or all of R_v.

Unlike with large vertices, we cannot unconditionally bound |E[𝒮]|. We instead introduce a condition that, if unsatisfied, will allow us to make progress towards a triangle packing.

Lemma 9.

If, for some ε > 0, every point t has |B_t ∩ 𝒮| ≤ max{1, ε|B_t|}, then |E[𝒮]| ≤ 4ε|E|.

Proof.

Fix any non-isolated v ∈ 𝒮. Let t denote the leftmost point of I_v. Let P_v ⊆ 𝒮 ∩ N(v) denote the set of small neighbors of v whose intervals contain t. Note that Σ_{u∈𝒮} |P_u| ≥ |E[𝒮]|. By the definition of B_t, we can bound |P_v| ≤ |B_t ∩ 𝒮| − 1 ≤ ε|B_t|. Additionally, we have that d_v ≥ |B_t| − 1 ≥ |B_t|/2 and thus |P_v| ≤ 2ε·d_v. Now we can bound |E[𝒮]| by iterating over all small vertices u ∈ 𝒮:

|E[𝒮]| ≤ Σ_{u∈𝒮} |P_u| ≤ Σ_{u∈𝒮} 2ε·d_u ≤ 4ε|E|.

2.3 Building a Triangle Packing

Lemma 10.

If, for some ε > 0 and some point t, |B_t ∩ 𝒮| ≥ max{2, ε|B_t|}, then either

  1. we can pack at least (ε/(30T)) · Σ_{u∈B_t∩𝒮} d_u edge-disjoint triangles into ⋃_{u∈B_t∩𝒮} δ(u); or

  2. G has a bridge.

Proof.

Suppose |B_t ∩ 𝒮| ≥ 3. Then by Lemma 7, we can pack at least |B_t ∩ 𝒮|²/30 edge-disjoint triangles into B_t ∩ 𝒮 (note that n(n−1)/20 ≥ n²/30 for n ≥ 3). By the definition of 𝒮, we have that

Σ_{u∈B_t∩𝒮} d_u ≤ Σ_{u∈B_t∩𝒮} T·b_u ≤ |B_t ∩ 𝒮| · T · |B_t| ≤ |B_t ∩ 𝒮|² · T · ε^{−1}.

This fulfills Condition 1.

Now suppose |B_t ∩ 𝒮| = 2. Label B_t ∩ 𝒮 = {v_1, v_2}. If N(v_1) ∩ N(v_2) = ∅, then v_1v_2 is a bridge of G and thus Condition 2 is fulfilled. Otherwise, let u ∈ N(v_1) ∩ N(v_2). We can pack a single triangle into the edges v_1v_2, uv_1, uv_2. Further, we have that ε|B_t| ≤ 2, and so d_{v_1} + d_{v_2} ≤ 4Tε^{−1}. This fulfills Condition 1.

This leads us to Algorithm 1. We will iteratively find a clique to pack triangles into, delete the clique, and continue. If we reach a point where we have deleted at least 0.01|E| edges, then we can finish as we have packed triangles into a constant fraction of the edges. Otherwise, if we run out of cliques to pack triangles into, we certify that we have an almost-complete cut on the remaining graph, which still contains most of the original edges of G. Finally, whenever we identify a bridge, we can simply delete it from the graph and add it to our final cut, as bridges can always be added to any cut.

Algorithm 1 IntervalMaxCut(G=(V,E),T,ε).

Note that Algorithm 1 runs in polynomial time. There are at most |E| iterations of the while loop, as each iteration either returns or removes at least one edge.

Lemma 11.

If IntervalMaxCut(G, T, ε) returns the set δ(𝒮) ∪ A from within the while loop, then |δ(𝒮) ∪ A| ≥ 0.99(1 − 4ε − 8T^{−1})|E| and δ(𝒮) ∪ A is a valid cut.

Proof.

Suppose that Algorithm 1 returns on the i-th iteration of the while loop. By Lemmas 8 and 9, we have that |δ(𝒮)| ≥ (1 − 4ε − 8T^{−1})|E_i|. Note that E = E_i ∪ A ∪ 𝒯 and, due to the while loop condition, we have that |𝒯| ≤ 0.01|E|. Thus, |δ(𝒮) ∪ A| ≥ 0.99(1 − 4ε − 8T^{−1})|E|.

To see that δ(𝒮)A is a valid cut, note that adding a bridge to a bipartite graph cannot introduce an odd cycle. Thus, we can iteratively add each edge of A to δ(𝒮) without invalidating our cut.

Lemma 12.

If IntervalMaxCut(G, T, ε) exits the while loop, then G has an edge-disjoint triangle packing of size at least (ε/(3000T))|E|.

Proof.

At the conclusion of the while loop, we have that |𝒯| ≥ 0.01|E|. Due to Lemma 10, each time we expand 𝒯 by x edges, we can pack an additional (ε/(30T))·x edge-disjoint triangles into 𝒯. By iterating this process, there exists an edge-disjoint triangle packing of size at least (ε/(30T))·|𝒯| ≥ (ε/(3000T))|E| in G.

Now set ε = 0.01 and T = 200. Lemmas 11 and 12 imply that either G has a cut of size at least 0.99 · 0.92|E| > 0.9|E| or an edge-disjoint triangle packing of size at least (1/6) · 10^{−7}|E| > 10^{−8}|E|, as wanted.
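The arithmetic behind this parameter choice is a one-line check (ours):

```python
# Arithmetic for the final parameter choice eps = 0.01 and T = 200.
eps, T = 0.01, 200
cut_bound = 0.99 * (1 - 4 * eps - 8 / T)   # Lemma 11: fraction of |E| cut
packing_bound = eps / (3000 * T)           # Lemma 12: packing size over |E|
print(cut_bound, packing_bound)            # 0.9108 and about 1.67e-08
assert cut_bound > 0.9
assert packing_bound > 1e-8
```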

2.4 No Tradeoff for Chordal Graphs

In this subsection, we will show that there is no tradeoff for chordal graphs equivalent to those for interval graphs and split graphs. Thus, improving upon α_GW for chordal graphs will likely require new algorithmic insights.

Theorem 13.

For all c > 0, there exists a chordal graph G = (V, E) such that mc(G) < α_GW·|E| and G has no edge-disjoint triangle packing of size c|E|.

Proof.

Suppose we are given a chordal graph G = (V, E) with |V| < c|E| and no triangle packing of size c|E|. We will show later how to obtain such a G. Construct G′ = (V′, E′) by setting V′ := V ∪ {w} ∪ {x_e, y_e : e ∈ E} and E′ := E ∪ {x_e u, y_e v : e = uv ∈ E} ∪ {wv : v ∈ V′ ∖ {w}}. That is, for each edge e, we attach one fresh vertex to each endpoint of the edge, and create a "universal" vertex w connected to all vertices in the graph.

We first note that G′ is chordal, as it is obtained from a chordal graph G by iteratively adding simplicial vertices. By inspection, any triangle of G′ that is not a triangle of G must contain at least one edge from {wv : v ∈ V}. Thus, the maximum number of edge-disjoint triangles in G′ is at most |V| + c|E| < 2c|E| ≤ (2/5)·c|E′| < c|E′|. For each edge e = uv ∈ E, G′ contains a 5-cycle wx_e, x_e u, uv, vy_e, y_e w, and so G′ contains |E| edge-disjoint 5-cycles. Thus, we have that mc(G′) ≤ |E′| − |E|. Note that |E′| = 5|E| + |V| ≤ 6|E|, so mc(G′) ≤ (5/6)|E′| < α_GW·|E′|.

It remains to show that a chordal graph G with the desired properties exists. We give the following interval graph construction for G. Select k large enough. Create 2^k − 1 vertices in a "segment-tree" pattern as follows. In the first layer, create one vertex with interval (0, 1). In the second layer, create two vertices with intervals (0, 0.5) and (0.5, 1), respectively. In the third layer, create four vertices with intervals (0, 0.25), (0.25, 0.5), (0.5, 0.75), and (0.75, 1), respectively. Then iterate this process for k total layers; see Figure 5 for an illustration.

Figure 5: The interval representation of G for k=4.

We have that |V| = 2^k − 1 and |E| ≥ 2^{k−1}(k−1) = Ω(|V| log₂|V|) by counting only the edges adjacent to the bottom layer. Thus, for k sufficiently large, we have that |V| < c|E|.

To show that G has no edge-disjoint triangle packing of size c|E|, it is sufficient to show that mc(G) > (1 − c/3)|E|, as each triangle results in at least one un-cuttable edge. Consider the cut of G obtained by taking the bottom t layers as one side of the cut. The number of edges in this cut is (2^k − 2^{k−t})(k − t). Note also that |E| ≤ 2^k·k. Therefore, we have that mc(G) ≥ [(2^k − 2^{k−t})(k − t)/(2^k·k)]·|E|. As k tends to infinity, the term (2^k − 2^{k−t})(k − t)/(2^k·k) tends to 1 − 2^{−t}. So, by setting t to be a sufficiently large constant based on c and letting k be sufficiently large based on t, we have that mc(G) > (1 − c/3)|E|, and so G has no edge-disjoint triangle packing of size c|E|. Thus, G fulfills all the conditions we needed to produce G′.
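The cut count used in the proof can be verified directly for small k. The following sketch (ours) enumerates the ancestor-descendant edges of the segment-tree graph and compares against the closed form:

```python
# Verify, for one (k, t), that the cut separating the bottom t layers of the
# segment-tree interval graph has (2^k - 2^(k-t)) * (k - t) edges.
def edges(k):
    # Vertex (j, i): layer j in 1..k, index i in 0..2^(j-1)-1. The interval
    # of (j, i) is contained in that of (j2, i >> (j - j2)) for all j2 < j.
    E = set()
    for j in range(2, k + 1):
        for i in range(2 ** (j - 1)):
            for j2 in range(1, j):
                E.add(((j2, i >> (j - j2)), (j, i)))
    return E

def cut_bottom(k, t):
    bottom = {(j, i) for j in range(k - t + 1, k + 1)
              for i in range(2 ** (j - 1))}
    return sum(1 for (u, v) in edges(k) if (u in bottom) != (v in bottom))

k, t = 8, 3
print(cut_bottom(k, t))  # 1120
assert cut_bottom(k, t) == (2 ** k - 2 ** (k - t)) * (k - t)
```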

3 Analysis of the Perturbed Goemans-Williamson Algorithm

Our second main contribution is an improved approximation for Maximum Cut in graphs with large triangle packings. Given a fixed triangle {uv, vw, wu}, it is impossible for all angles θ_{uv}, θ_{vw}, θ_{wu} to be equal or very close to the critical angle θ_c. However, it is possible to have θ_{uv} = θ_{vw} = θ_c, achieving the critical angle, while θ_{wu} = 0. In this case, despite the entire graph being a single triangle, the Goemans-Williamson rounding algorithm will not perform beyond its worst-case guarantee. This is because the contribution of the edge wu to the objective function is 0, and so rounding it "better" does not actually increase our expected value.

To grapple with this issue, we introduce the “Perturbed Goemans-Williamson Algorithm.” Intuitively speaking, this algorithm randomly “perturbs” each vector slightly. We will see that edges with near-zero angle stand to gain much more in this perturbation process than any other edges have to lose besides those with an angle of nearly π. Thus, any SDP solution with many near-zero angle edges and few near-π angle edges can be rounded with a guarantee better than αGW.

Before presenting the algorithm, we must first define the semidefinite program from which we will round a solution.

maximize: (1/2) · Σ_{uv∈E} (1 − x_u · x_v)    (SDP-GW)
subject to: x_v ∈ 𝕊^{n−1} for all v ∈ V
Algorithm 2 PerturbGW(G=(V,E),η).

Note that Algorithm 2 runs in polynomial time, as solving SDPs and sampling from a Gaussian distribution can be made to run in polynomial time.

Theorem 14.

If G has an edge-disjoint triangle packing of size at least t|E|, then

𝔼[|PerturbGW(G, η := t²·10^{−4})|] ≥ (α_GW + 10^{−10}·t³) · mc(G).

Theorem 14, when combined with Theorem 5 and Theorem 6, immediately shows that there is a polynomial-time (α_GW + 10^{−34})-approximation for Maximum Cut on interval graphs and a polynomial-time (α_GW + 10^{−16})-approximation for Maximum Cut on split graphs.

Note that δ(S) is the result of running the original Goemans-Williamson algorithm, so the output of Algorithm 2 immediately has expected size at least 𝔼[|δ(S)|] ≥ α_GW · mc(G).

For an edge e = uv ∈ E, let C_e := 𝟙[e ∈ δ(S)] and C′_e := 𝟙[e ∈ δ(S′)] be the random variables indicating that e is cut by S and by the perturbed set S′, respectively. Define θ_e := arccos(x_u · x_v) as the angle between x_u and x_v. For θ ∈ [0, π], let

E_θ := {e ∈ E : |θ_e − θ| ≤ √η}

be the set of edges with angle “close to” θ, where η is a parameter of Algorithm 2. We first deal with the case where there are many edges in Eπ.

Lemma 15.

If η ≤ 0.01 and |E_π| ≥ η^{3/2}|E|, then 𝔼[|δ(S)|] ≥ (α_GW + 10^{−2}·η^{3/2}) · mc(G).

Proof.

Let SDP equal the value of SDP-GW at the optimal solution {x_v}_{v∈V}. Consider any e = uv ∈ E. Recall that the contribution of e to SDP is (1 − x_u·x_v)/2 = (1 − cos θ_e)/2. Also, by simple calculation, we have that P[e ∈ δ(S)] = θ_e/π. We calculate

𝔼[|δ(S)|] = Σ_{e∈E} θ_e/π
= Σ_{e∈E} [(1 − cos θ_e)/2] · [2θ_e / (π(1 − cos θ_e))]
= Σ_{e∈E∖E_π} [(1 − cos θ_e)/2] · [2θ_e / (π(1 − cos θ_e))] + Σ_{e∈E_π} [(1 − cos θ_e)/2] · [2θ_e / (π(1 − cos θ_e))]
≥ α_GW · Σ_{e∈E∖E_π} (1 − cos θ_e)/2 + (α_GW + 10^{−2}) · Σ_{e∈E_π} (1 − cos θ_e)/2
= α_GW · SDP + 10^{−2} · Σ_{e∈E_π} (1 − cos θ_e)/2
≥ α_GW · SDP + 10^{−2} · (|E_π|/|E|) · SDP.

The first inequality follows by calculating that 2θ_e/(π(1 − cos θ_e)) ≥ α_GW + 10^{−2} for θ_e ≥ π − 0.1. The second inequality follows from the fact that 1 − cos θ is increasing on [0, π]. The lemma now follows from the fact that SDP ≥ mc(G).

Now we show that for edges not in E_π, S′ is not much worse than S. For technical reasons, our lemma statement also excludes edges in E_0. We will see later that for all edges with angle at most π/2, S′ is no worse than S. Additionally, for edges with angle very close to 0, S′ is substantially better than S.

Lemma 16.

For all e ∈ E ∖ (E_0 ∪ E_π), we have that P[C′_e] ≥ P[C_e] − 10η^{3/2}.

Proof.

For any vertex v ∈ V, let R_v := 𝟙[|r · x_v| < η] be the random variable indicating that v's side is chosen by the random coin flip in Algorithm 2. Fix any e = uv ∈ E ∖ (E_0 ∪ E_π). We first note that P[C′_e | ¬R_u ∧ ¬R_v] = P[C_e | ¬R_u ∧ ¬R_v] and P[C′_e | R_u ∨ R_v] = 1/2.

We now turn our attention to P[C_e | R_v]. Note that x_u and x_v lie on a single plane through the origin, so by symmetry of 𝕊^{n−1}, we can assume without loss of generality that x_v = (1, 0, …, 0) and x_u = (x, y, 0, …, 0) for y ≥ 0. Label the random vector r = (r_1, r_2, …, r_n). By symmetry of the output, we can assume without loss of generality that r_1 ≥ 0. Note that these assumptions imply that R_v is independent of the value of r_2.

Suppose that |r_2| > η/y. Then, conditioned on R_v (so that |r_1| < η), we have that |r_2·y| > η > |r_1·x|, and so sign(r · x_u) = sign(r_2). That is, the sign of r · x_u is entirely determined by the sign of r_2. This implies that, when |r_2| > η/y, we have C_e = 𝟙[sign(r_2) < 0]. Thus, by independence of R_v and the value of r_2,

P[C_e | R_v ∧ |r_2| > η/y] = P[sign(r_2) < 0 | R_v ∧ |r_2| > η/y] = P[sign(r_2) < 0] = 1/2.

We then apply Lemma 4 to bound

P[C_e | R_v] ≤ 1/2 + P[|r_2| ≤ η/y | R_v] ≤ 1/2 + η/y.

Let M := max{P[C_e | R_u], P[C_e | R_v]} ≤ 1/2 + η/y. We wish to bound P[C_e | R_u ∨ R_v] ≤ M + 4η/y. To this end, we manipulate probabilities:

P[C_e ∧ (R_u ∨ R_v)] ≤ P[C_e ∧ R_u] + P[C_e ∧ R_v]
= P[C_e | R_u] · P[R_u] + P[C_e | R_v] · P[R_v]
≤ M · (P[R_u] + P[R_v])
= M · (P[R_u ∨ R_v] + P[R_u ∧ R_v]).

Now we find

$$\mathbb{P}[C_e\mid R_u\vee R_v]=\frac{\mathbb{P}[C_e\wedge(R_u\vee R_v)]}{\mathbb{P}[R_u\vee R_v]}\leq M+\frac{\mathbb{P}[R_u\wedge R_v]}{\mathbb{P}[R_u\vee R_v]}.$$

To bound the numerator of the error term, first consider $\mathbb{P}[R_u\wedge R_v]$. Recall that $R_u=\mathbb{I}[|r_1x+r_2y|<\eta]$ and $R_v=\mathbb{I}[|r_1|<\eta]$. Thus, assuming $R_v$ holds, in order for $R_u$ to hold, we must have that $|r_2y|<2\eta$. Using Lemma 4, we can bound

$$\mathbb{P}[R_u\mid R_v]\leq\mathbb{P}\Big[|r_2y|\leq 2\eta\;\Big|\;R_v\Big]=\mathbb{P}\Big[|r_2|\leq\tfrac{2\eta}{y}\Big]\leq\frac{2\eta}{y},$$

and using Lemma 4 again,

$$\mathbb{P}[R_u\wedge R_v]=\mathbb{P}[R_u\mid R_v]\,\mathbb{P}[R_v]\leq\frac{2\eta^2}{y}.$$

Applying this with yet another application of Lemma 4, which gives $\mathbb{P}[R_u\vee R_v]\geq\mathbb{P}[R_v]\geq\frac{\eta}{2}$, yields

$$\frac{\mathbb{P}[R_u\wedge R_v]}{\mathbb{P}[R_u\vee R_v]}\leq\frac{4\eta}{y}.$$

This yields the desired result, that $\mathbb{P}[C_e\mid R_u\vee R_v]\leq\frac12+\frac{5\eta}{y}$.

Putting it all together, we find that

$$\begin{aligned}
\mathbb{P}[C_e]&=\mathbb{P}[C_e\mid\neg R_u\wedge\neg R_v]\,\mathbb{P}[\neg R_u\wedge\neg R_v]+\mathbb{P}[C_e\mid R_u\vee R_v]\,\mathbb{P}[R_u\vee R_v]\\
&\leq\mathbb{P}[C_e\mid\neg R_u\wedge\neg R_v]\,\mathbb{P}[\neg R_u\wedge\neg R_v]+\Big(\frac12+\frac{5\eta}{y}\Big)\mathbb{P}[R_u\vee R_v],
\end{aligned}$$

and using Lemma 4,

$$\begin{aligned}
\mathbb{P}[C'_e]&=\mathbb{P}[C'_e\mid\neg R_u\wedge\neg R_v]\,\mathbb{P}[\neg R_u\wedge\neg R_v]+\mathbb{P}[C'_e\mid R_u\vee R_v]\,\mathbb{P}[R_u\vee R_v]\\
&=\mathbb{P}[C_e\mid\neg R_u\wedge\neg R_v]\,\mathbb{P}[\neg R_u\wedge\neg R_v]+\frac12\,\mathbb{P}[R_u\vee R_v]\\
&\geq\mathbb{P}[C_e]-\frac{5\eta}{y}\,\mathbb{P}[R_u\vee R_v]\\
&\geq\mathbb{P}[C_e]-\frac{10\eta^2}{y}.
\end{aligned}$$

Noting that $y=\sin\theta_e\geq\sqrt{\eta}$ because $e\notin E_0\cup E_\pi$, so that $\frac{10\eta^2}{y}\leq 10\eta^{3/2}$, completes the proof of the lemma.
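The behavior bounded by Lemma 16 can be estimated empirically. The sketch below is illustrative only (the parameters $\theta=2$ and $\eta=0.1$ are arbitrary choices of ours, not values from the paper): it estimates $\mathbb{P}[C_e]$ for plain hyperplane rounding, which should be close to $\theta_e/\pi$, alongside the corresponding probability for the perturbed rounding in which any vertex with $|r\cdot x_v|<\eta$ gets a random side.

```python
import math
import random

def cut_probabilities(theta, eta, trials=200_000, seed=0):
    """Monte Carlo estimates of P[C_e] (plain GW rounding) and the
    perturbed variant where near-hyperplane vertices get random sides."""
    rng = random.Random(seed)
    xu = (math.cos(theta), math.sin(theta))   # x_v is (1, 0)
    cut = perturbed_cut = 0
    for _ in range(trials):
        r1, r2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        pv = r1                        # r . x_v
        pu = r1 * xu[0] + r2 * xu[1]   # r . x_u
        sv = (pv > 0) if abs(pv) >= eta else (rng.random() < 0.5)
        su = (pu > 0) if abs(pu) >= eta else (rng.random() < 0.5)
        cut += (pv > 0) != (pu > 0)
        perturbed_cut += sv != su
    return cut / trials, perturbed_cut / trials

p_plain, p_pert = cut_probabilities(theta=2.0, eta=0.1)
assert abs(p_plain - 2.0 / math.pi) < 0.01   # P[C_e] = theta/pi
assert abs(p_pert - p_plain) < 0.05          # perturbation loss is small
```

The observed gap between the two estimates is far below the worst-case bound $10\eta^{3/2}$, consistent with the looseness of the lemma.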

Lemma 16 allows us to bound the perturbation loss on all angles sufficiently bounded away from 0 and π. However, we will not be able to guarantee that the angles we consider are sufficiently bounded away from 0 to properly utilize Lemma 16. Thus, we strengthen Lemma 16 to show that perturbation does not cause any loss on angles below π2.

Lemma 17.

For all $e\in E$ with $\theta_e\leq\frac{\pi}{2}$, we have that $\mathbb{P}[C'_e]\geq\mathbb{P}[C_e]$.

Proof.

Take any $e=uv\in E$ such that $\theta_e\leq\frac{\pi}{2}$. As in the proof of Lemma 16, restrict to two dimensions, rotate, and reflect so that we may assume $x_v=(1,0,\dots,0)$, $x_u=(x,y,0,\dots,0)$ for $y\geq 0$, and $r_1\geq 0$. Due to our assumption on $\theta_e$, we have that $x=\cos\theta_e\geq 0$. As earlier, define the event $R_w:=\mathbb{I}[|r\cdot x_w|<\eta]$ for $w\in V$. As in the proof of Lemma 16, our main task is to bound $\mathbb{P}[C_e\mid R_v]$. However, this time we will show $\mathbb{P}[C_e\mid R_v]\leq\frac12$.

The event $R_v$ is equivalent to $\mathbb{I}[|r_1|<\eta]$. Since $x,y\geq 0$ and $r_1\geq 0$, if $C_e$ happens, then we must have $r_2\leq 0$, so

$$\mathbb{P}[C_e\mid R_v]=\mathbb{P}\big[r_1x+r_2y<0\;\big|\;r_1\in[0,\eta)\big]\leq\mathbb{P}[r_2\leq 0]=\frac12.$$

We have that

$$R_u=\mathbb{I}[|r_1x+r_2y|<\eta]=\mathbb{I}\big[r_2y\in(-\eta-r_1x,\;\eta-r_1x)\big].$$

Recall that $x\geq 0$ and $r_1\geq 0$, so $|-\eta-r_1x|\geq|\eta-r_1x|$. Thus, the event $R_u$ contains a larger portion of the negative values of $r_2y$ than of the positive values, and

$$\mathbb{P}[r_2y<0\mid\neg R_u]\leq\frac12.$$

Recalling that $y>0$, and that $r_1$ and $r_2$ are independent, we can now calculate

$$\begin{aligned}
\mathbb{P}[C_e\mid R_v\wedge\neg R_u]&=\mathbb{P}\big[r_1x+r_2y<0\;\big|\;r_1\in[0,\eta)\wedge\neg R_u\big]\\
&\leq\mathbb{P}\big[r_2<0\;\big|\;r_1\in[0,\eta)\wedge\neg R_u\big]\\
&\leq\mathbb{P}[r_2<0\mid\neg R_u]\\
&\leq\frac12,
\end{aligned}$$

and by symmetry, $\mathbb{P}[C_e\mid R_u\wedge\neg R_v]\leq\frac12$. A computation similar to that in the proof of Lemma 16 completes the proof.

Lemma 18.

For all $e\in E$, we have that $\mathbb{P}[C'_e]\geq\frac{\eta}{4}$.

Proof.

Take any $e=uv\in E$ and calculate, using Lemma 4,

$$\mathbb{P}[C'_e]\geq\mathbb{P}[C'_e\mid R_v]\,\mathbb{P}[R_v]\geq\frac12\cdot\frac{\eta}{2}=\frac{\eta}{4}.$$

Now notice that if $\theta_e<\frac{\pi}{4}\eta$, then $\mathbb{P}[C'_e]\geq\frac{\eta}{4}>\frac{\theta_e}{\pi}=\mathbb{P}[C_e]$. Thus, if a substantial fraction of $E$ has very small angle, then $\delta(S')$ will be noticeably larger than $\delta(S)$. To this end, define

$$E_z:=\Big\{e\in E:\theta_e\leq\frac{\pi}{8}\eta\Big\}.$$

Note that for each $e\in E_z$, we have $\mathbb{P}[C_e]=\frac{\theta_e}{\pi}\leq\frac{\eta}{8}$, and hence $\mathbb{P}[C'_e]\geq\frac{\eta}{4}\geq 2\,\mathbb{P}[C_e]$.

Lemma 19.

If $|E_z|\geq 96\sqrt{\eta}\,|E|$ and $|E_\pi|<\eta^{3/2}|E|$, then

$$\mathbb{E}[|\delta(S')|]\geq\mathbb{E}[|\delta(S)|]+\eta^{3/2}|E|\geq(\alpha_{GW}+\eta^{3/2})\,\mathrm{mc}(G).$$
Proof.

Calculate, using Lemmas 16, 17, and 18,

$$\begin{aligned}
\mathbb{E}[|\delta(S')|]&=\sum_{e\in E_z}\mathbb{P}[C'_e]+\sum_{e\in E_\pi}\mathbb{P}[C'_e]+\sum_{e\in E\setminus(E_\pi\cup E_z)}\mathbb{P}[C'_e]\\
&\geq\sum_{e\in E_z}\frac{\eta}{4}+\sum_{e\in E\setminus(E_\pi\cup E_z)}\big(\mathbb{P}[C_e]-10\eta^{3/2}\big)\\
&\geq\sum_{e\in E_z}\Big(\mathbb{P}[C_e]+\frac{\eta}{8}\Big)+\sum_{e\in E\setminus(E_\pi\cup E_z)}\big(\mathbb{P}[C_e]-10\eta^{3/2}\big)\\
&\geq\mathbb{E}[|\delta(S)|]+\frac{\eta}{8}|E_z|-10\eta^{3/2}|E|-|E_\pi|\\
&\geq\mathbb{E}[|\delta(S)|]+\eta^{3/2}|E|.
\end{aligned}$$

We have shown that if either $E_z$ or $E_\pi$ contains a constant fraction of $E$, then we can improve upon Goemans-Williamson, so we may assume from now on that both of these sets have negligible size. Now we will utilize the fact that $G$ has an edge-disjoint triangle packing of size $t|E|$ to show that there are many edges not near the critical angle $\theta_c$. Define the set

$$E_\angle:=\Big\{e\in E:\theta_e\leq\frac{2\pi}{3}\Big\}\setminus E_z.$$
Lemma 20.

$|E_\angle|\geq t|E|-|E_z|$.

Proof.

Fix any triangle $T=\{uv,vw,wu\}$. We can bound the sum of angles $\theta_{uv}+\theta_{vw}+\theta_{wu}\leq 2\pi$, and so at least one $e\in T$ has $\theta_e\leq\frac{2\pi}{3}$. Thus, in any triangle $T$, we have that $T\cap(E_\angle\cup E_z)$ is non-empty. Because $G$ has $t|E|$ edge-disjoint triangles, there are at least $t|E|$ edges in $E_\angle\cup E_z$. Subtracting out those edges in $E_z$ completes the proof.
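The geometric fact driving this proof, that the pairwise angles among any three unit vectors sum to at most $2\pi$ (the perimeter bound for spherical triangles), can be spot-checked numerically; a small sketch with random vectors:

```python
import math
import random

def angle(a, b):
    # angle between unit vectors, clamped for floating-point safety
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.acos(dot)

def random_unit_vector(rng, dim=6):
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

rng = random.Random(42)
max_total = 0.0
for _ in range(10_000):
    u, v, w = (random_unit_vector(rng) for _ in range(3))
    total = angle(u, v) + angle(v, w) + angle(w, u)
    max_total = max(max_total, total)
    assert total <= 2 * math.pi + 1e-9  # hence some pair has angle <= 2*pi/3
```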

We note that $\min_{\theta\in[0,\frac{2\pi}{3}]}\frac{2}{\pi}\cdot\frac{\theta}{1-\cos\theta}=\frac89>\alpha_{GW}+0.01$, which motivates the following lemma.
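This constant can be verified numerically; the minimum is attained at the endpoint $\theta=\frac{2\pi}{3}$, where the ratio equals $\frac{4/3}{3/2}=\frac89$:

```python
import math

def ratio(theta):
    # (theta/pi) / ((1 - cos(theta))/2); diverges as theta -> 0
    return (2.0 * theta) / (math.pi * (1.0 - math.cos(theta)))

grid = [(2 * math.pi / 3) * i / 100_000 for i in range(1, 100_001)]
m = min(ratio(t) for t in grid)
assert abs(m - 8.0 / 9.0) < 1e-6   # minimum is 8/9, at theta = 2*pi/3
assert m > 0.8786 + 0.01           # exceeds alpha_GW + 0.01 (alpha_GW ~= 0.8786)
```

Note how thin the margin is: $\frac89\approx 0.8889$ versus $\alpha_{GW}+0.01\approx 0.8886$.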

Lemma 21.

If $t\geq 97\sqrt{\eta}$ and $|E_z|\leq 96\sqrt{\eta}\,|E|$, then

$$\mathbb{E}[|\delta(S)|]\geq(\alpha_{GW}+10^{-4}\eta^{3/2})\,\mathrm{mc}(G).$$
Proof.

The assumptions on $t$ and $|E_z|$ imply, through Lemma 20, that $|E_\angle|\geq\sqrt{\eta}\,|E|$. We calculate

$$\begin{aligned}
\mathbb{E}[|\delta(S)|]&=\sum_{e\in E_\angle}\frac{\theta_e}{\pi}+\sum_{e\in E\setminus E_\angle}\frac{\theta_e}{\pi}\\
&\geq\frac89\sum_{e\in E_\angle}\frac{1-\cos\theta_e}{2}+\alpha_{GW}\sum_{e\in E\setminus E_\angle}\frac{1-\cos\theta_e}{2}\\
&\geq\alpha_{GW}\cdot\mathrm{SDP}+0.01\sum_{e\in E_\angle}\frac{1-\cos\theta_e}{2}\\
&\geq\alpha_{GW}\cdot\mathrm{SDP}+10^{-3}\sum_{e\in E_\angle}\frac{\theta_e}{\pi}\\
&\geq\alpha_{GW}\cdot\mathrm{SDP}+10^{-4}\eta|E_\angle|\\
&\geq\alpha_{GW}\cdot\mathrm{SDP}+10^{-4}\eta^{3/2}|E|\\
&\geq(\alpha_{GW}+10^{-4}\eta^{3/2})\,\mathrm{mc}(G).
\end{aligned}$$

Now we can recall and prove Theorem 14.

Proof.

Note that the definition of $\eta$ implies $\eta<0.01$ and $t\geq 97\sqrt{\eta}$. If $|E_\pi|\geq\eta^{3/2}|E|$, then apply Lemma 15 to find that

$$\mathbb{E}[|\delta(S)|]\geq(\alpha_{GW}+10^{-2}\eta^{3/2})\,\mathrm{mc}(G)=(\alpha_{GW}+10^{-8}t^3)\,\mathrm{mc}(G).$$

If $|E_\pi|<\eta^{3/2}|E|$ and $|E_z|\geq 96\sqrt{\eta}\,|E|$, then apply Lemma 19 to find that

$$\mathbb{E}[|\delta(S')|]\geq(\alpha_{GW}+\eta^{3/2})\,\mathrm{mc}(G)=(\alpha_{GW}+10^{-6}t^3)\,\mathrm{mc}(G).$$

Finally, if $|E_\pi|<\eta^{3/2}|E|$ and $|E_z|<96\sqrt{\eta}\,|E|$, then apply Lemma 21 to find that

$$\mathbb{E}[|\delta(S)|]\geq(\alpha_{GW}+10^{-4}\eta^{3/2})\,\mathrm{mc}(G)=(\alpha_{GW}+10^{-10}t^3)\,\mathrm{mc}(G).$$

4 Hardness of Approximating Maximum Cut on Split Graphs

In this section, we show that, subject to the Small Set Expansion Hypothesis, Maximum Cut on split graphs is hard to approximate within some constant factor. Our starting point is the hardness of finding a small balanced cut in weighted graphs. Given a graph $G=(V,E)$ with weights $w_e\in[0,1]$ for $e\in E$, we define

$$\mu(S):=\frac{\sum_{u\in S}d_u}{\sum_{v\in V}d_v}$$

as the normalized set size of $S\subseteq V$. Here, $d_u:=\sum_{e\in\delta(u)}w_e$ is the weighted degree of $u$. In an unweighted regular graph, we have that $\mu(S)=\frac{|S|}{|V|}$. Also define

$$\Phi(S):=\frac{\sum_{e\in\delta(S)}w_e}{\sum_{u\in S}d_u}$$

as the edge expansion of $S\subseteq V$. In an unweighted $d$-regular graph, we have that $\Phi(S)=\frac{|\delta(S)|}{d|S|}$.
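As a concrete illustration of these definitions, the sketch below computes $\mu$ and $\Phi$ for a toy unweighted (unit-weight) example, a 4-cycle, where the regular-graph formulas apply:

```python
from collections import defaultdict

def mu_phi(weighted_edges, S):
    """Normalized set size mu(S) and edge expansion Phi(S) of a weighted graph,
    given edges as ((u, v), w) pairs and a vertex subset S."""
    deg = defaultdict(float)
    for (u, v), w in weighted_edges:
        deg[u] += w
        deg[v] += w
    vol_S = sum(deg[u] for u in S)
    cut = sum(w for (u, v), w in weighted_edges if (u in S) != (v in S))
    return vol_S / sum(deg.values()), cut / vol_S

# 4-cycle with unit weights: 2-regular, so mu(S) = |S|/|V| and Phi(S) = |delta(S)|/(d|S|)
edges = [((0, 1), 1.0), ((1, 2), 1.0), ((2, 3), 1.0), ((3, 0), 1.0)]
mu, phi = mu_phi(edges, {0, 1})
assert mu == 0.5    # |S|/|V| = 2/4
assert phi == 0.5   # |delta(S)|/(d|S|) = 2/(2*2)
```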

Lemma 22 (Corollary 3.6 of [18]).

There is a constant $c_1>0$ such that for any sufficiently small $\epsilon>0$, it is SSE-hard to distinguish between the following two cases for a weighted graph $G=(V,E)$:

Yes: There exists $S\subseteq V$ such that $\mu(S)=\frac12$ and $\Phi(S)\leq 2\epsilon$.

No: Every $S\subseteq V$ with $\mu(S)\in[\frac{1}{10},\frac12]$ satisfies $\Phi(S)\geq c_1\sqrt{\epsilon}$.

To utilize this hardness, we first create an unweighted instance as follows.

Lemma 23.

There is a constant $c_2>0$ such that for any sufficiently small $\epsilon,\eta>0$, it is SSE-hard to distinguish between the following two cases for an unweighted, non-simple graph $G=(V,E)$ with $|E|\leq|V|^5$:

Yes: There exists $S\subseteq V$ such that $\mu(S)\in[\frac12-\eta,\frac12+\eta]$ and $\Phi(S)\leq 3\epsilon$.

No: Every $S\subseteq V$ with $\mu(S)\in[\frac{1}{10}+\eta,\frac12]$ satisfies $\Phi(S)\geq c_2\sqrt{\epsilon}$.

Proof.

We begin with a gap instance $G=(V,E)$ of the form in Lemma 22. We will create a weighted instance $G'$ with the same vertex and edge sets such that each edge has weight in $\{1,2,\dots,n^3\}$. Duplicating edges according to their weights will then yield the desired statement for unweighted graphs.

First, we may assume without loss of generality that the edge weights in $G$ are scaled such that the sum of degrees is $\sum_{v\in V}d_v=n^3$. This is because multiplying the weights of all edges by a common constant factor changes neither $\mu$ nor $\Phi$. Now produce the weights $\{w'_e\}_{e\in E}$ by setting $w'_e:=\lceil w_e\rceil$. Note in particular that this implies $w'_e\leq|V|^3$ for all $e\in E$. Define $\mu',\Phi'$, and $d'_v$ analogously to $\mu,\Phi$, and $d_v$, except with the weights $\{w'_e\}_{e\in E}$. Note that $w'_e\geq w_e>w'_e-1$ for all $e\in E$, and so $d'_v\geq d_v>d'_v-n$ for all $v\in V$.

Suppose that $G$ is in the Yes case of Lemma 22, and let $S\subseteq V$ have $\mu(S)=\frac12$ and $\Phi(S)\leq 2\epsilon$. Then we calculate

$$\mu'(S)=\frac{\sum_{u\in S}d'_u}{\sum_{v\in V}d'_v}\geq\frac{\sum_{u\in S}d_u}{\sum_{v\in V}d_v+n^2}\geq\mu(S)\Big(1-\frac1n\Big)\geq\frac12-\frac1n.$$

Thus, for sufficiently large $n$, we have that $\mu'(S)\geq\frac12-\eta$. By a similar calculation, we have that $\mu'(V\setminus S)\geq\mu(V\setminus S)-\eta=\frac12-\eta$, and so $\mu'(S)=1-\mu'(V\setminus S)\leq\frac12+\eta$. We now aim to show that $\Phi'(S)\leq 3\epsilon$. We calculate

$$\Phi'(S)=\frac{\sum_{e\in\delta(S)}w'_e}{\sum_{u\in S}d'_u}\leq\frac{\sum_{e\in\delta(S)}w_e+n|S|}{\sum_{u\in S}d_u}=\Phi(S)+\frac{n|S|}{\sum_{u\in S}d_u}\leq 2\epsilon+\frac{2}{n}.$$

Here, the first inequality uses $w'_e<w_e+1$, $|\delta(S)|\leq n|S|$, and $d'_u\geq d_u$; the last inequality uses the fact that $\mu(S)=\frac12$ and so $\sum_{u\in S}d_u=\frac{n^3}{2}$, together with $|S|\leq n$. Thus, for sufficiently large $n$, we have that $\Phi'(S)\leq 3\epsilon$. This establishes that the Yes case of $G$ maps to the Yes case of $G'$.

Suppose that $G$ is in the No case of Lemma 22. Then any $S\subseteq V$ with $\mu(S)\leq\frac12$ either has $\mu(S)<\frac{1}{10}$ or $\Phi(S)\geq c_1\sqrt{\epsilon}$. Take any $S\subseteq V$. If $\mu(S)<\frac{1}{10}$, then $\mu'(S)\leq\mu(S)+\eta<\frac{1}{10}+\eta$, so such sets fall outside the No range for $G'$. So consider the case in which $\mu(S)\geq\frac{1}{10}$. Then we calculate

$$\Phi'(S)=\frac{\sum_{e\in\delta(S)}w'_e}{\sum_{u\in S}d'_u}\geq\frac{\sum_{e\in\delta(S)}w_e}{\sum_{u\in S}d_u+n^2}\geq\Phi(S)-\frac{n^2}{\sum_{u\in S}d_u}\geq\Phi(S)-\frac{10}{n}\geq c_1\sqrt{\epsilon}-\frac{10}{n}.$$

If we set $c_2=\frac{c_1}{2}$, then for sufficiently large $n$, $\Phi'(S)\geq c_2\sqrt{\epsilon}$. This establishes that the No case of $G$ maps to the No case of $G'$.

Our final reduction to Maximum Cut requires our graph to be nearly regular with $|E|\in\Theta(|V|^2)$, so before proceeding, we must first take a standard sparsification step.

Lemma 24.

There is a constant $c_3>0$ such that for any sufficiently small $\epsilon,\zeta>0$, it is SSE-hard under randomized reductions to distinguish between the following two cases for an unweighted, non-simple graph $G=(V,E)$ with $|E|\in\Theta(|V|^2)$ and maximum degree at most $(1+\zeta)d$, where $d$ is the minimum degree:

Yes: There exists $S\subseteq V$ such that $\mu(S)\in[\frac12-\zeta,\frac12+\zeta]$ and $\Phi(S)\leq 4\epsilon$.

No: Every $S\subseteq V$ with $\mu(S)\in[\frac{1}{10}+\zeta,\frac12]$ satisfies $\Phi(S)\geq c_3\sqrt{\epsilon}$.

Proof.

We begin with a gap instance $G=(V,E)$ of the form in Lemma 23. We randomly create $G'=(V',E')$ as follows. For each vertex $u\in V$, create $d_u$ copies and add them to $G'$. That is, we set $V'=\{u_1,u_2,\dots,u_{d_u}:u\in V\}$. Let $g:V'\to V$ map each copy $u'\in V'$ to its original vertex $u\in V$. For each edge $uv\in E$ and each pair of copies $u'\in g^{-1}(u)$, $v'\in g^{-1}(v)$, let $p_{uv}:=\frac{R}{d_ud_v}$ for a parameter $R$ we will adjust later. Now add $\lfloor p_{uv}\rfloor$ copies of $u'v'$ to $E'$ and randomly add a final copy with probability $p_{uv}-\lfloor p_{uv}\rfloor$. Note that the expected number of $u'v'$ edges is equal to $p_{uv}$.
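The construction above can be sketched directly. The toy parameters below (a triangle with $R=12$) are our own choices, picked so that every $p_{uv}$ is an integer and the run is deterministic:

```python
import random

def sparsify(vertices, edges, R, seed=0):
    """Blow each vertex u up into deg(u) copies, and connect every copy pair of
    an original edge uv with p_uv = R/(d_u d_v) parallel edges in expectation."""
    rng = random.Random(seed)
    deg = {u: 0 for u in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    copies = {u: [(u, i) for i in range(deg[u])] for u in vertices}
    new_edges = []
    for u, v in edges:
        p = R / (deg[u] * deg[v])
        whole, frac = int(p), p - int(p)
        for cu in copies[u]:
            for cv in copies[v]:
                k = whole + (1 if rng.random() < frac else 0)
                new_edges.extend([(cu, cv)] * k)
    return copies, new_edges

copies, E2 = sparsify([0, 1, 2], [(0, 1), (1, 2), (2, 0)], R=12)
assert sum(len(c) for c in copies.values()) == 6   # |V'| = sum of degrees = 2|E|
assert len(E2) == 36                               # |E'| = R|E| (every p_uv integral here)
```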

Let us consider the properties of $G'$. We have that $|V'|=\sum_{v\in V}d_v=2|E|$ and

$$\mathbb{E}[|E'|]=\sum_{uv\in E}p_{uv}\,d_ud_v=R|E|.$$

Let $q_e$ be the random variable denoting the number of edges in $G'$ "produced" from $e\in E$. By a standard application of Chernoff bounds and the union bound over $2^{V'}$ and $E$, we can show, for any constant $\alpha>0$, that

1. $(1-\alpha)\mathbb{E}[|\delta_{G'}(S')|]\leq|\delta_{G'}(S')|\leq(1+\alpha)\mathbb{E}[|\delta_{G'}(S')|]$ for all $S'\subseteq V'$, and

2. $(1-\alpha)R\leq q_e\leq(1+\alpha)R$ for all $e\in E$

with high probability, assuming $R\in\Omega(n)$. In particular, item 1 implies that $d'_{v'}\in[(1-\alpha)R,(1+\alpha)R]$ for all $v'\in V'$, where $d'_{v'}$ is the degree of $v'$ in $G'$. Suppose from now on that items 1 and 2 are both true. Now suppose $G$ is a Yes instance of Lemma 23. That is, there is an $S\subseteq V$ such that $\mu(S)\in[\frac12-\eta,\frac12+\eta]$ and $\Phi(S)\leq 3\epsilon$. Let $S':=g^{-1}(S)$, and let $\mu'$ be defined for $G'$ as $\mu$ is defined for $G$. Then we calculate

$$\mu'(S')=\frac{\sum_{u'\in S'}d'_{u'}}{\sum_{v'\in V'}d'_{v'}}\geq\frac{(1-\alpha)R\,|S'|}{(1+\alpha)R\,|V'|}=\frac{1-\alpha}{1+\alpha}\cdot\frac{\sum_{u\in S}d_u}{\sum_{v\in V}d_v}=\frac{1-\alpha}{1+\alpha}\,\mu(S)\geq\frac{1-\alpha}{1+\alpha}\Big(\frac12-\eta\Big).$$

Applying the same calculation to $\mu'(V'\setminus S')$ yields the upper bound

$$\mu'(S')=1-\mu'(V'\setminus S')\leq 1-\frac{1-\alpha}{1+\alpha}\Big(\frac12-\eta\Big).$$

For sufficiently small $\alpha,\eta>0$, this implies that $\mu'(S')\in[\frac12-\zeta,\frac12+\zeta]$. Let $\Phi'$ be defined for $G'$ as $\Phi$ is for $G$. We calculate

$$\Phi'(S')=\frac{|\delta_{G'}(S')|}{\sum_{u'\in S'}d'_{u'}}\leq\frac{(1+\alpha)R\,|\delta_G(S)|}{(1-\alpha)R\sum_{u\in S}d_u}=\frac{1+\alpha}{1-\alpha}\,\Phi(S)\leq\frac{1+\alpha}{1-\alpha}\cdot 3\epsilon.$$

For sufficiently small $\alpha>0$, we get the upper bound $\Phi'(S')\leq 4\epsilon$. This establishes that the Yes case of $G$ maps to the Yes case of $G'$ with high probability.

Now suppose that $G$ is a No instance of Lemma 23. Consider any $S'\subseteq V'$ with $\mu'(S')\in[\frac{1}{10}+\zeta,\frac12]$. Let $f\in[0,1]^V$ be the vector defined by $f_v=\frac{|S'\cap g^{-1}(v)|}{d_v}$. That is, $f$ records the proportion of the copies of each original vertex selected by $S'$. We calculate

$$\begin{aligned}
\Phi'(S')&=\frac{|\delta_{G'}(S')|}{\sum_{u'\in S'}d'_{u'}}\geq\frac{|\delta_{G'}(S')|}{\sum_{v'\in V'}d'_{v'}}\geq\frac{(1-\alpha)\,\mathbb{E}[|\delta_{G'}(S')|]}{(1+\alpha)R\,|V'|}\\
&=\frac{1-\alpha}{(1+\alpha)R}\cdot\frac{\sum_{uv\in E}p_{uv}\big((1-f_u)d_u\cdot f_vd_v+f_ud_u\cdot(1-f_v)d_v\big)}{|V'|}\\
&=\frac{1-\alpha}{1+\alpha}\cdot\frac{\sum_{uv\in E}\big((1-f_u)f_v+f_u(1-f_v)\big)}{|V'|}.
\end{aligned}$$

Let $S\subseteq V$ be a random set sampled by including each $v\in S$ independently with probability $f_v$. Then we have that

$$\mathbb{E}[\Phi(S)]=\frac{\mathbb{E}[|\delta_G(S)|]}{\sum_{v\in V}d_v}=\frac{\sum_{uv\in E}\big((1-f_u)f_v+f_u(1-f_v)\big)}{|V'|}\leq\frac{1+\alpha}{1-\alpha}\,\Phi'(S').$$

We can apply Chernoff bounds on $|\delta_G(S)|$ to find that $\Phi(S)<(1+\alpha)\mathbb{E}[\Phi(S)]$ with high probability. We additionally calculate

$$\mathbb{E}[\mu(S)]=\frac{\sum_{u\in V}f_ud_u}{\sum_{v\in V}d_v}=\frac{|S'|}{|V'|}\geq\frac{1-\alpha}{1+\alpha}\,\mu'(S')\geq\frac{1-\alpha}{1+\alpha}\Big(\frac{1}{10}+\zeta\Big).$$

By setting $\alpha,\eta>0$ sufficiently small and applying Chernoff bounds, this implies that $\mu(S)>\frac{1}{10}+\eta$ with probability at least $\frac12$. Thus, by the probabilistic method, there exists some $S\subseteq V$ such that $\mu(S)>\frac{1}{10}+\eta$ and $\Phi(S)<(1+\alpha)\mathbb{E}[\Phi(S)]$. Applying Lemma 23, we find that

$$\Phi'(S')\geq\frac{1-\alpha}{1+\alpha}\,\mathbb{E}[\Phi(S)]>\frac{1-\alpha}{(1+\alpha)^2}\,\Phi(S)\geq\frac{1-\alpha}{(1+\alpha)^2}\,c_2\sqrt{\epsilon}.$$

So, set $c_3:=\frac{1-\alpha}{(1+\alpha)^2}\,c_2$. This establishes that the No case of $G$ maps to the No case of $G'$ with high probability.

We now convert the language of μ and Φ to the simpler language of cardinalities of sets.

Lemma 25.

There exists a constant $c>0$ such that for all sufficiently small $\epsilon,\eta>0$, it is SSE-hard under randomized reductions to distinguish between the following cases for an unweighted graph $G=(V,E)$ with $|E|\in\Theta(|V|^2)$:

Yes: There exists a cut $S\subseteq V$ with $(\frac12-\eta)|V|\leq|S|\leq\frac{|V|}{2}$ such that $|\delta(S)|\leq 10\epsilon|E|$.

No: For all cuts $S\subseteq V$ with $|S|\leq\frac{|V|}{2}$, either $|S|\leq\frac{|V|}{5}$ or $|\delta(S)|\geq c\sqrt{\epsilon}\,|E|$.

Proof.

We begin with a gap instance $G=(V,E)$ of the form in Lemma 24. We will not need to modify this instance to produce our desired result. Suppose that $G$ is a Yes instance of Lemma 24. Then there is some $S\subseteq V$ with $\mu(S)\in[\frac12-\zeta,\frac12+\zeta]$ and $\Phi(S)\leq 4\epsilon$. Using that every vertex $v\in V$ has degree $d_v\leq(1+\zeta)d$, where $d$ is the minimum degree of $G$, we can bound

$$\mu(S)=\frac{\sum_{u\in S}d_u}{\sum_{v\in V}d_v}\leq\frac{|S|(1+\zeta)d}{|V|d}.$$

Thus, $|S|\geq\frac{1/2-\zeta}{1+\zeta}|V|$. For sufficiently small $\zeta>0$, we can then bound $|S|\geq(\frac12-\eta)|V|$ from below. Because $\mu(V\setminus S)=1-\mu(S)\in[\frac12-\zeta,\frac12+\zeta]$, the same argument also bounds $|V\setminus S|\geq(\frac12-\eta)|V|$. If $|S|>\frac{|V|}{2}$, then swap $S$ with $V\setminus S$, noting that $\delta(S)=\delta(V\setminus S)$. This fulfills the $(\frac12-\eta)|V|\leq|S|\leq\frac{|V|}{2}$ condition. Similarly, we can bound

$$\Phi(S)=\frac{|\delta(S)|}{\sum_{u\in S}d_u}\geq\frac{|\delta(S)|}{|V|d(1+\zeta)}.$$

With the bound $\Phi(S)\leq 4\epsilon$ from Lemma 24, this implies that

$$|\delta(S)|\leq 4\epsilon(1+\zeta)|V|d\leq 10\epsilon|E|$$

for sufficiently small $\zeta>0$. Thus, $G$ being a Yes instance of Lemma 24 implies the Yes conditions of this lemma.

Suppose that $G$ is a No instance of Lemma 24. Consider any $S\subseteq V$ with $|S|\leq\frac{|V|}{2}$. Suppose first that $\mu(S)<\frac{1}{10}+\zeta$. Then we have that

$$\mu(S)=\frac{\sum_{u\in S}d_u}{\sum_{v\in V}d_v}\geq\frac{|S|d}{|V|(1+\zeta)d},$$

and so $|S|\leq(\frac{1}{10}+\zeta)(1+\zeta)|V|$. For $\zeta$ sufficiently small, this implies $|S|\leq\frac{|V|}{5}$. Suppose otherwise, that $\mu(S)\geq\frac{1}{10}+\zeta$. Then $\Phi(S)\geq c_3\sqrt{\epsilon}$, and we have that

$$\Phi(S)=\frac{|\delta(S)|}{\sum_{u\in S}d_u}\leq\frac{10|\delta(S)|}{|V|d},$$

and so $|\delta(S)|\geq\frac{c_3\sqrt{\epsilon}\,|V|d}{10}\geq\frac{c_3}{5(1+\zeta)}\sqrt{\epsilon}\,|E|\geq\frac{c_3}{6}\sqrt{\epsilon}\,|E|$ for sufficiently small $\zeta>0$. Setting $c:=\frac{c_3}{6}$ finishes the proof.

We now reduce from gap instances of the type in Lemma 25 to show hardness for Maximum Cut on split graphs.

Theorem 3. [Restated, see original statement.]

There exists a constant $c>0$ such that, assuming the Small Set Expansion Hypothesis (under randomized reductions), there is no polynomial-time $(1-c)$-approximation algorithm for Maximum Cut on split graphs.

Proof.

We reduce from a graph $G=(V,E)$ of the form in Lemma 25 to a (simple) split graph $G'=(V'=K\cup I,E')$ as follows. Let the clique portion of $G'$ be $K:=V$, and let the independent set portion of $G'$ be $I:=\{v_e:e\in E\}$. We define the edge set of $G'$ as $E':=\{uv_e:u\in V,\,e\in E,\,u\in e\}\cup\{uw:u,w\in V,\,u\neq w\}$. That is, we place a copy of $V$ in the clique portion of $G'$, and place one vertex for each edge of $E$ in the independent set portion, each connected to the two endpoints of its edge in $V$.

Note that in a split graph, a maximum cut is determined by its intersection with the clique portion $K$ of the graph, as the decision for vertices in the independent set portion $I$ can be made greedily: $v_e$ joins the side opposite the majority of its two neighbors. Define $\delta'(S)$ for $S\subseteq K=V$ to be the edge set of the maximum cut of $G'$ whose clique side is $S$, breaking ties arbitrarily.

Suppose that $G$ is a Yes instance as defined in Lemma 25, witnessed by the cut $S$. Then by direct counting, we have that, in $G'$,

$$\mathrm{mc}(G')\geq|\delta'(S)|\geq\Big(0.25-\frac{\eta}{2}\Big)|V|^2+(2-10\epsilon)|E|.$$

Define $\omega:=(0.25-\frac{\eta}{2})|V|^2+(2-10\epsilon)|E|$. Suppose instead that $G$ is a No instance, and consider any $S\subseteq V$ with $|S|\leq 0.5|V|$. We will show that, with the right choice of $\epsilon$ and $\eta$, $|\delta'(S)|\leq(1-c')\omega$ for some constant $c'>0$. If $|S|\leq\frac{|V|}{5}$, then

$$|\delta'(S)|\leq\frac{|V|^2}{5}+2|E|.$$

Because $|E|\in\Theta(|V|^2)$, and $\frac15<0.25-\frac{\eta}{2}$ for sufficiently small $\eta$, this implies that $|\delta'(S)|<(1-c')\omega$ for some constant $c'>0$ when $\eta$ and $\epsilon$ are sufficiently small. Otherwise, if $|S|\geq\frac{|V|}{5}$, then we must have that $|\delta_G(S)|\geq c\sqrt{\epsilon}\,|E|$. Then

$$|\delta'(S)|\leq 0.25|V|^2+(2-c\sqrt{\epsilon})|E|.$$

As above, because $2-c\sqrt{\epsilon}<2-10\epsilon$ for sufficiently small $\epsilon$, we have that $|\delta'(S)|<(1-c')\omega$ for some constant $c'>0$ when $\eta$ and $\epsilon$ are sufficiently small. Thus, in the No case, we have that $\mathrm{mc}(G')\leq(1-c')\omega$. Therefore, it is SSE-hard to distinguish between $\mathrm{mc}(G')\geq\omega$ and $\mathrm{mc}(G')\leq(1-c')\omega$. This implies SSE-hardness of approximating Maximum Cut to a factor of $1-c'$ on split graphs.

It is critical that Lemma 25 allows for non-simple graphs. If $G$ were simple, then our reduction in the proof of Theorem 3 would yield the relation $\mathrm{mc}(G')=\mathrm{mc}(G^c)+2|E|$, where $G^c=(V,E^c)$ is the complement of $G$. This is proved explicitly in [3]. Then, if $|E|\in\omega(|E^c|)$, the value $2|E|$ is a good estimate for $\mathrm{mc}(G')$. If $|E|\in\mathcal{O}(|E^c|)$, then $|E^c|\in\Omega(|V|^2)$. In this case, $G^c$ is a dense graph and so there is a PTAS for $\mathrm{mc}(G^c)$ [7, 16]. In particular, these cases imply that there is a PTAS for Maximum Cut on every 2-split graph (i.e., a split graph in which all vertices in $I$ have degree 2) without any "duplicated" vertices of $I$ that have the same neighborhood.
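The identity $\mathrm{mc}(G')=\mathrm{mc}(G^c)+2|E|$ discussed above, together with the reduction of Theorem 3 behind it, can be brute-force checked on a small simple graph (a path on four vertices; the helper names are ours):

```python
from itertools import combinations

def max_cut(n, edges):
    """Brute-force maximum cut of a graph on vertices 0..n-1."""
    best = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            side = set(S)
            best = max(best, sum((u in side) != (v in side) for u, v in edges))
    return best

n, E = 4, [(0, 1), (1, 2), (2, 3)]  # simple graph G: a path

# split graph G': clique on V, plus one degree-2 vertex per edge of G
clique = list(combinations(range(n), 2))
split_edges = clique + [(w, n + i) for i, (u, v) in enumerate(E) for w in (u, v)]

# complement G^c of G
Ec = [(u, v) for u, v in combinations(range(n), 2) if (u, v) not in E]

assert max_cut(n + len(E), split_edges) == max_cut(n, Ec) + 2 * len(E)
```

Brute force over all $2^{|V'|}$ sides automatically realizes the greedy placement of the independent-set vertices, which is why the identity holds exactly.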

References

  • [1] Ranendu Adhikary, Kaustav Bose, Satwik Mukherjee, and Bodhayan Roy. Complexity of maximum cut on interval graphs. Discrete & Computational Geometry, 70(2):307–322, 2023. doi:10.1007/S00454-022-00472-Y.
  • [2] Alexey Barsukov and Bodhayan Roy. Maximum cut on interval graphs of interval count two is NP-complete. arXiv preprint, 2022. arXiv:2203.06630.
  • [3] Hans L Bodlaender and Klaus Jansen. On the complexity of the maximum cut problem. Nordic Journal of Computing, 7(1):14–31, 2000.
  • [4] Celina MH de Figueiredo, Alexsander A de Melo, Fabiano S Oliveira, and Ana Silva. Maximum cut on interval graphs of interval count four is NP-complete. Discrete & Computational Geometry, 71(3):893–917, 2024. doi:10.1007/S00454-023-00508-X.
  • [5] Tomas Feder and Pavol Hell. List homomorphisms to reflexive graphs. Journal of Combinatorial Theory, Series B, 72(2):236–250, 1998. doi:10.1006/JCTB.1997.1812.
  • [6] Tomás Feder and Carlos Subi. Packing edge-disjoint triangles in given graphs. In Electronic Colloquium on Computational Complexity (ECCC), volume 19, page 13, 2012.
  • [7] W Fernandez De La Vega. Max-cut has a randomized approximation scheme in dense graphs. Random Structures & Algorithms, 8(3):187–198, 1996. doi:10.1002/(SICI)1098-2418(199605)8:3\%3C187::AID-RSA3\%3E3.0.CO;2-U.
  • [8] Michel X Goemans and David P Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995. doi:10.1145/227683.227684.
  • [9] Venkatesan Guruswami. Maximum cut on line and total graphs. Discrete applied mathematics, 92(2-3):217–221, 1999. doi:10.1016/S0166-218X(99)00056-6.
  • [10] Frank Hadlock. Finding a maximum cut of a planar graph in polynomial time. SIAM Journal on Computing, 4(3):221–225, 1975. doi:10.1137/0204019.
  • [11] Laurie J Hendren, Guang R Gao, Erik R Altman, and Chandrika Mukerji. A register allocation framework based on hierarchical cyclic interval graphs. In Compiler Construction: 4th International Conference, CC’92 Paderborn, FRG, October 5–7, 1992 Proceedings 4, pages 176–191. Springer, 1992. doi:10.1007/3-540-55984-1_17.
  • [12] Jun-Ting Hsieh and Pravesh K Kothari. Approximating Max-Cut on bounded degree graphs: Tighter analysis of the FKL algorithm. arXiv preprint arXiv:2206.09204, 2022. doi:10.48550/arXiv.2206.09204.
  • [13] John R. Jungck and Rama Viswanathan. Chapter 1 - graph theory for systems biology: Interval graphs, motifs, and pattern recognition. In Raina S. Robeva, editor, Algebraic and Discrete Mathematical Methods for Modern Biology, pages 1–27. Academic Press, Boston, 2015. doi:10.1016/B978-0-12-801213-0.00001-0.
  • [14] Richard M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103, 1972. doi:10.1007/978-1-4684-2001-2_9.
  • [15] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing, 37(1):319–357, 2007. doi:10.1137/S0097539705447372.
  • [16] Claire Mathieu and Warren Schudy. Yet another algorithm for dense max cut: go greedy. In SODA, volume 8, pages 176–182, 2008. URL: http://dl.acm.org/citation.cfm?id=1347082.1347102.
  • [17] Prasad Raghavendra and David Steurer. Graph expansion and the unique games conjecture. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 755–764, 2010. doi:10.1145/1806689.1806792.
  • [18] Prasad Raghavendra, David Steurer, and Madhur Tulsiani. Reductions between expansion problems. In 2012 IEEE 27th Conference on Computational Complexity, pages 64–73. IEEE, 2012. doi:10.1109/CCC.2012.43.
  • [19] Luca Trevisan. Non-approximability results for optimization problems on bounded degree instances. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, pages 453–461, 2001. doi:10.1145/380752.380839.