
Sampling with a Black Box: Faster Parameterized Approximation Algorithms for Vertex Deletion Problems

Barış Can Esmer (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany; Saarbrücken Graduate School of Computer Science, Germany) and Ariel Kulik (Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel)
Abstract

In this paper, we present Sampling with a Black Box, a unified framework for the design of parameterized approximation algorithms for vertex deletion problems (e.g., Vertex Cover, Feedback Vertex Set, etc.). The framework relies on two components:

  • A Sampling Step. A polynomial-time randomized algorithm that, given a graph G, returns a random vertex v such that the optimum of G−{v} is smaller by 1 than the optimum of G, with some prescribed probability q. We show that such algorithms exist for multiple vertex deletion problems.

  • A Black Box algorithm which is either an exact parameterized algorithm, a polynomial-time approximation algorithm, or a parameterized-approximation algorithm.

The framework combines these two components together. The sampling step is applied iteratively to remove vertices from the input graph, and then the solution is extended using the black box algorithm. The process is repeated sufficiently many times so that the target approximation ratio is attained with a constant probability.

We use the technique to derive parameterized approximation algorithms for several vertex deletion problems, including Feedback Vertex Set, d-Hitting Set and ℓ-Path Vertex Cover. In particular, for every approximation ratio 1 < β < 2, we attain a parameterized β-approximation for Feedback Vertex Set which is faster than the parameterized β-approximation of [Jana, Lokshtanov, Mandal, Rai and Saurabh, MFCS '23]. Furthermore, our algorithms are always faster than the algorithms attained using Fidelity Preserving Transformations [Fellows, Kulik, Rosamond, and Shachnai, JCSS '18].

Keywords and phrases:
Parameterized Approximation Algorithms, Random Sampling
Category:
Track A: Algorithms, Complexity and Games
Funding:
Ariel Kulik: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 852780-ERC (SUBMODULAR).
Copyright and License:
© Barış Can Esmer and Ariel Kulik; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation Parameterized complexity and exact algorithms
; Theory of computation Approximation algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2407.12654
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

A vast body of research has been dedicated to basic vertex deletion problems such as Vertex Cover, 3-Hitting Set and Feedback Vertex Set. In these problems, the objective is to delete a minimum cardinality set of vertices from the input (hyper-)graph so that the remaining (hyper-)graph satisfies a specific property (edge-free, cycle-free, etc.). As many of these problems are NP-hard, multiple algorithmic results focus on either polynomial-time approximations or exact parameterized algorithms. In between these two classes of algorithms lies the class of parameterized approximation algorithms. Such algorithms aim to provide approximation ratios which cannot be attained in polynomial-time, at the expense of a parameterized running time.

In this paper we explore how existing exact parameterized algorithms and polynomial-time approximation algorithms can be used together with sampling steps to derive efficient parameterized approximation algorithms. Informally, a sampling step with success probability q ∈ (0,1) is a polynomial-time algorithm which, given an input graph G = (V,E), returns a random vertex v ∈ V. The vertex v should, with probability at least q, satisfy the condition that removing v from G reduces the optimum (i.e., the minimum number of vertices that must be removed for the graph to satisfy the property) by 1. As we show in this paper, such algorithms can be easily obtained for various vertex deletion problems.

Our framework, Sampling with a Black Box, applies the sampling step t times and subsequently uses the existing parameterized/approximation algorithms to complete the solution. The whole process is executed sufficiently many times so that a β-approximate solution is found with a constant probability.

Sampling with a Black-Box is applicable to a wide collection of vertex deletion problems. We show that sampling steps exist for every vertex deletion problem in which the property can be described by a finite set of forbidden vertex induced hypergraphs. Some examples of such problems are d-Hitting Set, ℓ-Path Vertex Cover and Vertex Cover on graphs with maximum degree 3. We further provide sampling steps for Feedback Vertex Set (FVS) and Pathwidth One Vertex Deletion, where the set of forbidden hypergraphs is infinite.

We compare Sampling with a Black Box to existing benchmarks.

  • In [24], Jana, Lokshtanov, Mandal, Rai, and Saurabh developed a parameterized β-approximation for Feedback Vertex Set (FVS) for every 1 < β < 2. The objective in FVS is to remove a minimum number of vertices from an undirected graph so that the remaining graph does not contain cycles. We achieve faster parameterized approximation algorithms for FVS for every approximation ratio between 1 and 2; see the comparison in Figure 1(a).

  • In [19], Fellows, Kulik, Rosamond and Shachnai provided a generic technique, called Fidelity Preserving Transformations, which can be applied to every vertex deletion problem in which the property can be described by a finite set of forbidden vertex induced subgraphs. Assuming the maximum number of vertices in a forbidden induced subgraph is η, and the problem has an exact parameterized c^k n^{O(1)} algorithm, the technique of [19] yields a β-approximation in time c^{((η−β)/(η−1))k} n^{O(1)}. (The result in [19] was stated for Vertex Cover and 3-Hitting Set, but can be easily generalized to the stated result.) We prove that the running time we obtain for every problem in the class is always faster than the running time attained by [19], for every 1 < β < η. Figure 1(b) provides a comparison between the running times we attain and those of [19] for 3-Path Vertex Cover, in which the objective is to remove vertices from a graph so that the remaining graph does not have a path on 3 vertices.

Sampling with a Black-Box builds upon ideas which have been developed in several previous works [20, 16, 18, 26, 24, 19]. The framework overcomes weaknesses of existing techniques in parameterized approximations [19, 26, 24], and utilizes concepts developed for exponential algorithms [20, 16, 18] in a parameterized setting. A key technical contribution of this paper is the analysis of the framework, which effectively leverages the potential of each of its components.

(a) Feedback Vertex Set.
(b) 3-Path Vertex Cover.
Figure 1: Comparison of the running times for Feedback Vertex Set and 3-Path Vertex Cover. The x-axis corresponds to the approximation ratio, while the y-axis corresponds to the base of the exponent in the running time. A dot at (β,c) means that there is a parameterized β-approximation for FVS or 3-Path Vertex Cover in time c^k n^{O(1)}.

1.1 Related Work

The field of parameterized approximations aims to derive approximation algorithms which run in parameterized running time. In the classic setting, the considered problem is one which is not expected to have an exact parameterized algorithm, and further has a hardness of approximation lower bound which indicates that a polynomial-time α-approximation is unlikely to exist, for some α>1. In such a setting, the objective is to derive a β-approximation algorithm that runs in parameterized running time for β<α, or to show that such an algorithm cannot exist (subject to common complexity assumptions). In recent years, there has been a surge of breakthrough results in the field, providing both algorithms (e.g., [27, 21, 15, 32, 31, 4, 1, 14, 23]) and hardness of approximation results (e.g., [22, 29, 36, 11, 5, 33, 25, 30, 12]).

Within the broad field of parameterized approximations, a subset of works consider problems which are in FPT. For problems in FPT, there is always a parameterized β-approximation algorithm for all β ≥ 1, as the exact algorithm is also an approximation algorithm. Therefore, the main focus of these works is on the trade-off between approximation and running time.

A prominent approach to attain such parameterized approximation algorithms relies on using existing exact parameterized algorithms as a black box. In [6], the authors showed that an exact parameterized c^k n^{O(1)} algorithm for Vertex Cover implies a parameterized β-approximation in time c^{(2−β)k} n^{O(1)}. The same running time has been attained for Vertex Cover by Fellows et al. [19] through a more generic framework which can be applied to additional problems.

Other works focused on directly designing parameterized approximation algorithms. In [8, 7] Brankovic and Fernau provided parameterized approximation algorithms for Vertex Cover and 3-Hitting Set. The designed algorithms are branching algorithms that involve branching rules aimed at approximation and interleave approximative reduction rules. The works provide, among others, a parameterized 1.5-approximation for Vertex Cover in time 1.0883^k n^{O(1)} and a parameterized 2-approximation for 3-Hitting Set in time 1.29^k n^{O(1)}.

In [26], Kulik and Shachnai showed that the use of randomized branching can lead to significantly faster parameterized approximation algorithms. In particular, they attained a parameterized 1.5-approximation for Vertex Cover in time 1.017^k n^{O(1)} and a parameterized 2-approximation for 3-Hitting Set in time 1.0659^k n^{O(1)}. The idea in randomized branching is that the algorithm picks one of the branching options at random. Subsequently, a good approximation is attained as long as the randomly picked options are not too far from the correct options. The branching rules used by [26] are fairly involved in comparison to the sampling steps used in this paper, and their analysis requires non-trivial mathematical machinery.

While [26] showed that randomized branching is a powerful technique for the design of parameterized approximation algorithms, it has only been applied to a limited set of problems. In [26], the authors provided applications of the technique to Vertex Cover and 3-Hitting Set. Furthermore, for approximation ratios close to 1, the randomized-branching algorithms are inferior to the exact algorithms in terms of running times (or to the combination of those with [19]). A central goal of this paper is to overcome some of these difficulties: Sampling with a Black Box leverages the power of randomized branching, it is applicable to a wide set of problems, and always improves upon the running times attained using the best known exact algorithms in conjunction with [19].

In [24], the authors developed parameterized approximation algorithms for FVS using the combination of a randomized branching rule for FVS and existing algorithms as a black box. Their work indeed served as a motivation for this paper. While [24] uses randomized branching, the work does not facilitate the full power of randomized branching for approximation. In [24], for the algorithm to succeed, the randomized branching must always select the “correct” branching option, while a main benefit of randomized branching for approximation is that an incorrect option may still lead to an approximate solution. ([24] presents three different algorithms. One of the algorithms does utilize this property of randomized branching. However, that specific algorithm is a plain randomized branching algorithm which does not use an exact parameterized algorithm or a polynomial-time approximation algorithm as a black box.) The faster running times attained in this paper are a result of a better utilization of randomized branching for parameterized approximations. While the approach in [24] can potentially be generalized to other problems, such as 3-Path Vertex Cover, the resulting running time would be inferior to the ones attained by Sampling with a Black Box.

The concept of randomly sampling a partial solution and subsequently extending it using a parameterized algorithm is central to the monotone local search technique of Fomin, Gaspers, Lokshtanov and Saurabh [20]. Later variants of the technique are designed to obtain exponential time approximation algorithms [16, 17, 18]. A central idea in those later variants is that an approximate solution is obtained as long as the initial sampling does not contain too many vertices which do not belong to the optimum. This resembles the use of the sampling step in this paper.

Organization.

Section 2 gives several standard definitions used throughout the paper. In Section 3 we formally state the results of the paper. Section 4 lists applications of the technique for several problems. Section 5 provides our main algorithm. Additional material, including detailed proofs, sampling step constructions, and extended applications, is provided in the full version of the paper.

2 Preliminaries

In this section, we present several standard concepts and notations used throughout this paper.

Graph Notations.

Given a hypergraph G = (V,E) we use V(G) = V and E(G) = E to denote the sets of vertices and hyperedges of the graph (respectively).

Vertex Induced Subhypergraphs and Vertex Deletion.

Given a hypergraph G = (V,E) and a subset of vertices U ⊆ V, the vertex induced subhypergraph of G and U is the hypergraph G[U] = (U, E′) where E′ = {e ∈ E | e ⊆ U}. We also define the vertex deletion of U from G by G − U = G[V∖U]. For a single vertex v ∈ V(G) we use the shorthand G − v = G − {v}.
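These definitions translate directly into code. The following is a minimal sketch of our own (not part of the paper's algorithms), representing a hypergraph by its vertex set together with a list of hyperedges:

```python
def induced(V, E, U):
    # G[U]: keep the vertices of U and every hyperedge fully contained in U.
    U = set(U)
    return U, [e for e in E if set(e) <= U]

def delete(V, E, U):
    # G - U = G[V \ U]: drop the vertices of U and every hyperedge touching them.
    return induced(V, E, set(V) - set(U))
```

For instance, deleting vertex 2 from a triangle leaves a single edge on the remaining two vertices.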

Hypergraph Properties.

A hypergraph property is a set Π of hypergraphs such that for every G ∈ Π and every G′ isomorphic to G, it holds that G′ ∈ Π as well. A hypergraph property Π is called hereditary if for every G ∈ Π and U ⊆ V(G) it holds that G[U] ∈ Π as well. Furthermore, we say that Π is polynomial-time decidable if, given a hypergraph G, we can decide in polynomial time whether G ∈ Π or not.

In this paper we solely focus on hypergraph properties that are both hereditary and polynomial-time decidable; while some hypergraph properties do not meet these criteria, Sampling with a Black Box cannot be applied to the problems they define. For simplicity, we use the term hypergraph property exclusively to refer to properties that are both hereditary and polynomial-time decidable. In other words, whenever we say that Π is a hypergraph property, we implicitly mean that it satisfies these two conditions.

A closed set of hypergraphs.

We say that a set 𝒢 of hypergraphs is closed if every vertex induced subhypergraph of a hypergraph in 𝒢 is also in 𝒢. That is, for every G ∈ 𝒢 and S ⊆ V(G) it also holds that G[S] ∈ 𝒢. A key step in Sampling with a Black Box is removing a vertex v from a given hypergraph G ∈ 𝒢. The fact that 𝒢 is closed ensures that G − {v} ∈ 𝒢. In other words, removing a vertex from a hypergraph in 𝒢 results in another hypergraph that also belongs to 𝒢.

Kullback-Leibler Divergence.

Given two numbers a, b ∈ [0,1], the Kullback-Leibler divergence of a and b is defined to be

𝒟(a ‖ b) = a·ln(a/b) + (1−a)·ln((1−a)/(1−b)).

We follow the standard convention that 0·ln(0) = 0·ln(0/x) = 0, which implies

𝒟(1 ‖ b) = 1·ln(1/b) + (1−1)·ln((1−1)/(1−b)) = −ln(b). (1)
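For concreteness, the divergence and the convention above can be evaluated numerically; the following short helper (our own illustration, using only the Python standard library) reproduces identity (1):

```python
import math

def kl(a, b):
    # Kullback-Leibler divergence D(a || b) for a, b in [0, 1],
    # with the convention 0 * ln(0 / x) = 0.
    def term(p, r):
        return 0.0 if p == 0 else p * math.log(p / r)
    return term(a, b) + term(1 - a, 1 - b)
```

For instance, kl(1, b) evaluates to −ln(b), in accordance with (1).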

3 Our Results

In this section we formally state our results, starting with some formal definitions. The definitions of vertex deletion problems and parameterized approximation algorithms are given in Section 3.1. Section 3.2 provides the definition of sampling steps. Then, we state our main result in Section 3.3. The comparison of our results to those of [19] is finally given in Section 3.4.

3.1 Vertex Deletion Problems

The focus of this paper is the class of (𝒢,Π)-Vertex Deletion problems. For any hypergraph property Π and a closed set of hypergraphs 𝒢, the input for (𝒢,Π)-Vertex Deletion ((𝒢,Π)-Del for short) is a hypergraph G ∈ 𝒢. A solution is a subset of vertices S ⊆ V(G) such that G − S ∈ Π. We use SOL_Π(G) = {S ⊆ V(G) | G − S ∈ Π} to denote the set of all solutions. The objective is to find a smallest cardinality solution S ∈ SOL_Π(G). In the decision version of the problem, we are given a hypergraph G ∈ 𝒢 and an integer k ≥ 0, and we are asked whether there exists a set S ∈ SOL_Π(G) such that |S| ≤ k.

The family of (𝒢,Π)-Vertex Deletion problems includes many well-known problems. Some notable examples are:

  • Vertex Cover. In this case 𝒢 corresponds to graphs, i.e., hypergraphs with edge cardinality exactly 2. The hypergraph property Π consists of all edgeless graphs.

  • Vertex Cover on graphs with maximum degree 3. In this case 𝒢 corresponds to graphs of maximum degree 3 or less. The property Π, again, consists of all edgeless graphs.

  • d-Hitting Set. Similar to Vertex Cover, in this case 𝒢 corresponds to hypergraphs with edge cardinality at most d. Π also consists of all edgeless hypergraphs.

  • Feedback Vertex Set. In this case 𝒢 is the set of all graphs and Π is the set of graphs that have no cycles.

We use OPT_{𝒢,Π}(G) to denote the size of an optimal solution for G ∈ 𝒢 with respect to the (𝒢,Π)-Vertex Deletion problem. That is, OPT_{𝒢,Π}(G) = min{|S| : S ∈ SOL_Π(G)}. If 𝒢 and Π are clear from context we use OPT instead of OPT_{𝒢,Π}.

Our goal is to develop parameterized approximation algorithms for (𝒢,Π)-Vertex Deletion problems, where the parameter is the solution size. Such algorithms return a solution of size α·k or less (with probability at least 1/2) if the optimum of the instance is at most k.

Definition 1 (Parameterized Approximation).

Let Π be a hypergraph property and 𝒢 be a closed set of hypergraphs. An algorithm 𝒜 is a randomized parameterized α-approximation algorithm for (𝒢,Π)-Vertex Deletion if it takes a graph G ∈ 𝒢 and an integer k ≥ 0 as input, and returns a solution S ∈ SOL_Π(G) which satisfies the following:

If OPT_{𝒢,Π}(G) ≤ k, then Pr(|S| ≤ α·k) ≥ 1/2. (2)

Moreover, the running time of 𝒜 is f(k)·n^{O(1)} for some function f.

We note that the above definition of parameterized approximation algorithms captures the classic definition of exact parameterized algorithms (α = 1) and polynomial-time approximation algorithms (f(k) = O(1)) as special cases. Sampling with a Black Box converts a randomized parameterized α-approximation algorithm 𝒜 which runs in time c^k n^{O(1)} into a randomized parameterized β-approximation algorithm that runs in time d^k n^{O(1)}. The conversion relies on a simple problem-dependent sampling step. In all our applications the algorithm 𝒜 is either the state-of-the-art exact parameterized algorithm (i.e., α = 1) or the state-of-the-art polynomial-time approximation algorithm (i.e., c = 1) for the problem.

3.2 Sampling Steps

We exploit inherent properties of a specific (𝒢,Π)-Vertex Deletion problem to come up with basic sampling strategies. From a high-level perspective, a sampling step is a polynomial-time algorithm that takes as input a hypergraph G ∈ 𝒢∖Π and returns a vertex v ∈ V(G) such that, with a prescribed probability, the size of the optimal solution of G − v is smaller by 1 than that of G.

Definition 2 (Sampling Step).

Let 𝒢 be a closed set of hypergraphs, Π be a hypergraph property and q ∈ (0,1). A sampling step with success probability q is a polynomial-time randomized algorithm that takes as input a hypergraph G ∈ 𝒢∖Π and returns a vertex v ∈ V(G) such that

Pr(OPT_Π(G − v) ≤ OPT_Π(G) − 1) ≥ q.

For example, consider Vertex Cover, which is the (𝒢,Π)-Vertex Deletion problem where 𝒢 is the set of all graphs and Π consists of all edgeless graphs. The following is a very simple sampling step for Vertex Cover with success probability 1/2: pick an arbitrary edge and return each of its endpoints with probability 1/2. This algorithm clearly runs in polynomial time. Moreover, for each vertex cover S of G and for each edge e, at least one of the endpoints of e belongs to S; in particular, with probability at least 1/2 the returned endpoint belongs to a fixed optimal solution, and removing it decreases the optimum by 1. Therefore, this is a sampling step with success probability q = 1/2 for Vertex Cover. We provide sampling steps for the general case where Π is defined by a finite set of forbidden subgraphs, for Feedback Vertex Set and for Pathwidth One Vertex Deletion.

On their own, sampling steps can be easily used to derive parameterized approximation algorithms. To obtain a β-approximation, one may apply the sampling step β·k times, where each execution returns a vertex v which is removed from the graph and added to the solution S. After β·k steps, with some probability P, the set S of returned vertices is indeed a solution. Thus, by repeating this multiple sampling step 1/P times, one gets a β-approximate solution with probability at least 1/2. By carefully tracing the probability in the above argument, we show that P ≥ (exp(β·𝒟(1/β ‖ q)))^{−k}, as demonstrated by the following lemma. (Recall that 𝒟(·‖·) stands for the Kullback-Leibler divergence defined in Section 2.)
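As an illustration, the sketch below (our own, not the paper's implementation) instantiates this repeat-and-extend loop for Vertex Cover with the edge-endpoint sampling step described earlier; for simplicity it hard-codes a constant number of repetitions instead of the d^k repetition bound:

```python
import math
import random

def sampling_step(edges):
    # Sampling step for Vertex Cover with success probability 1/2:
    # pick an arbitrary uncovered edge and return one endpoint uniformly.
    u, v = edges[0]
    return random.choice((u, v))

def single_run(edges, budget):
    # Apply the sampling step `budget` times, deleting the sampled vertex
    # from the graph each time; report the vertices if they form a solution.
    remaining = list(edges)
    solution = set()
    for _ in range(budget):
        if not remaining:
            break
        x = sampling_step(remaining)
        solution.add(x)
        remaining = [e for e in remaining if x not in e]
    return solution if not remaining else None

def beta_approx_vertex_cover(edges, k, beta, repetitions=200):
    # The lemma repeats roughly d^k times; a fixed constant
    # suffices for a small example.
    budget = math.floor(beta * k)
    for _ in range(repetitions):
        solution = single_run(edges, budget)
        if solution is not None:
            return solution
    return None
```

On a path on 4 vertices with k = 2 and β = 1.5, every application of the step covers at least one remaining edge, so a cover of size at most 3 is always found.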

Lemma 3.

Let 𝒢 be a closed set of hypergraphs and Π be a hypergraph property such that there is a sampling step with success probability q for (𝒢,Π)-Del. Then for every 1 ≤ β ≤ 1/q there is a randomized parameterized β-approximation for (𝒢,Π)-Del which runs in time d^k n^{O(1)} where d = exp(β·𝒟(1/β ‖ q)).

The proof of Lemma 3 can be found in Section 5. Observe that for β = 1/q we get d = exp((1/q)·𝒟(q ‖ q)) = 1 for every q ∈ (0,1). Thus, Lemma 3 provides a randomized polynomial-time (1/q)-approximation algorithm for (𝒢,Π)-Del. This justifies the restriction of the lemma to β ≤ 1/q.

While Lemma 3 provides a parameterized β-approximation algorithm for essentially any β, the resulting algorithms are often far from optimal. For example, for Feedback Vertex Set (FVS) we provide a sampling step with success probability 1/4. Thus, by Lemma 3 we get a randomized parameterized 1.1-approximation for FVS which runs in time 2.944^k n^{O(1)}. However, FVS has an exact randomized parameterized algorithm which runs in time 2.7^k n^{O(1)}. That is, the running time of the approximation algorithm is slower than that of the exact algorithm. Our goal is to combine the sampling step with the exact algorithm to attain improved running times in such cases.

3.3 Sampling with a Black-Box

Our main theorem states the following: a sampling step with success probability q, together with a parameterized α-approximation algorithm 𝒜 which runs in time c^k n^{O(1)}, can be used to obtain a β-approximation algorithm. By Lemma 3, the sampling step alone can be used to obtain a randomized parameterized α-approximation which runs in time (exp(α·𝒟(1/α ‖ q)))^k n^{O(1)}. Our assumption is that 𝒜 is at least as fast as the algorithm provided by Lemma 3, hence we only consider the case in which c ≤ exp(α·𝒟(1/α ‖ q)).

We use the following functions to express the running time of the resulting β-approximation algorithm. Define δ_left(α,c,q) ∈ (1,α] and δ_right(α,c,q) ∈ [α, 1/q] to be the unique numbers which satisfy

𝒟(1/α ‖ 1/δ_left(α,c,q)) = 𝒟(1/α ‖ 1/δ_right(α,c,q)) = 𝒟(1/α ‖ q) − ln(c)/α. (3)

We write δ_left = δ_left(α,c,q) and δ_right = δ_right(α,c,q) if the values of α, c and q are clear from the context. The following lemma provides conditions which guarantee that δ_left and δ_right are well defined in certain domains.

Lemma 4.

For every c ≥ 1, 0 < q < 1 and α ≥ 1 such that c ≤ exp(α·𝒟(1/α ‖ q)), the value of δ_right(α,c,q) is well-defined. Furthermore, if α > 1, then δ_left(α,c,q) is also well-defined.

The proof of Lemma 4 is included in the full version of the paper. We note that 𝒟(1/α ‖ 1/x) is monotone in each of the domains x ∈ (1,α] and x ∈ [α,∞). Hence the values of δ_left(α,c,q) and δ_right(α,c,q) can be easily evaluated to arbitrary precision using a simple binary search.

We use δ_left and δ_right to express the running time of the resulting algorithm. For every β, α ≥ 1, 0 < q ≤ 1 and c ≥ 1 such that c ≤ exp(α·𝒟(1/α ‖ q)), we define

convert(α,β,c,q) =
  c·exp( ((δ_right·𝒟(1/δ_right ‖ q) − ln(c)) / (δ_right − α)) · (β − α) )  if α ≤ β ≤ δ_right(α,c,q),
  c·exp( ((δ_left·𝒟(1/δ_left ‖ q) − ln(c)) / (δ_left − α)) · (β − α) )  if δ_left(α,c,q) ≤ β ≤ α,
  exp(β·𝒟(1/β ‖ q))  otherwise.  (4)

As we can compute δ_left and δ_right, it follows that we can also compute convert(α,β,c,q) to arbitrary precision.
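This computation can be sketched in a few lines; the code below is our own numerical illustration for the special case α = 1 (where only δ_right is needed). It reproduces, for example, the base 2.483 reported for the 1.1-approximation of Feedback Vertex Set in Table 1:

```python
import math

def kl(a, b):
    # D(a || b) with the convention 0 * ln(0/x) = 0.
    def term(p, r):
        return 0.0 if p == 0 else p * math.log(p / r)
    return term(a, b) + term(1 - a, 1 - b)

def delta_right(alpha, c, q, iters=200):
    # Binary search for the x in [alpha, 1/q] satisfying
    # D(1/alpha || 1/x) = D(1/alpha || q) - ln(c)/alpha;
    # D(1/alpha || 1/x) grows as x moves away from alpha.
    target = kl(1 / alpha, q) - math.log(c) / alpha
    lo, hi = alpha, 1 / q
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl(1 / alpha, 1 / mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def convert_alpha1(beta, c, q):
    # convert(1, beta, c, q): log-linear interpolation between the black box
    # at beta = 1 and the sampling-only bound at beta = delta_right.
    dr = delta_right(1.0, c, q)
    if beta <= dr:
        slope = (dr * kl(1 / dr, q) - math.log(c)) / (dr - 1.0)
        return c * math.exp(slope * (beta - 1.0))
    return math.exp(beta * kl(1 / beta, q))
```

For α = 1, the binary search converges to δ_right = 1/(qc), consistent with the closed form of Theorem 6 below.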

Recall that Lemma 3 provides a polynomial-time (1/q)-approximation algorithm for (𝒢,Π)-Del. Consequently, we restrict our consideration to α and β values that are less than or equal to 1/q. Our main technical result is the following.

Theorem 5 (Sampling with a Black-Box).

Let 𝒢 be a closed set of hypergraphs and Π be a hypergraph property. Assume the following:

  • There is a sampling step with success probability q ∈ (0,1) for (𝒢,Π)-Del.

  • There is a randomized parameterized α-approximation algorithm for (𝒢,Π)-Del which runs in time c^k n^{O(1)} for some c ≥ 1, 1 ≤ α ≤ 1/q and c ≤ exp(α·𝒟(1/α ‖ q)).

Then, for every 1 ≤ β ≤ 1/q, there is a randomized parameterized β-approximation for (𝒢,Π)-Del which runs in time (convert(α,β,c,q))^k n^{O(1)}.

The proof of Theorem 5 is given in Section 5. For example, by Theorem 5, we can use the sampling step with success probability 1/4 for FVS, together with the randomized exact parameterized algorithm for the problem which runs in time 2.7^k n^{O(1)} [28], to get a randomized parameterized 1.1-approximation algorithm with running time 2.49^k n^{O(1)}. The algorithm achieves a better running time than that of the exact parameterized algorithm, as well as the running time which can be attained solely by the sampling step, i.e., 2.944^k n^{O(1)} (Lemma 3). We note that this running time is also superior to the running time of 2.62^k n^{O(1)} for the same approximation ratio given in [24].

The running time of the algorithm generated by Theorem 5 (i.e., the value of convert(α,β,c,q)) can always be computed efficiently, though this computation requires a binary search for the evaluation of δ_right and δ_left. For the special cases α = 1, as well as α = 2 and c = 1, we provide a closed-form expression for convert(α,β,c,q).

Theorem 6 (simple formula for α=1).

For every 0 < q < 1, 1 ≤ β ≤ 1/q and c ≥ 1 such that c ≤ exp(1·𝒟(1 ‖ q)) = 1/q, it holds that

convert(1,β,c,q) =
  c·((1−cq)/(1−q))^{β−1}  if 1 ≤ β < 1/(qc),
  exp(β·𝒟(1/β ‖ q))  if 1/(qc) ≤ β ≤ 1/q.
Theorem 7 (simple formula for α=2 and c=1).

Let 0 < q ≤ 1/2 and 1 ≤ β ≤ 2. Then it holds that

convert(2,β,1,q) =
  exp(β·𝒟(1/β ‖ q))  if 1 ≤ β ≤ 1/(1−q),
  (q/(1−q))^{β−2}  if 1/(1−q) < β ≤ 2.

Both Theorems 6 and 7 follow from a closed-form formula which we can attain for δ_right or δ_left for the specific values of α and c considered in the theorems. The proofs of Theorems 6 and 7 are given in the full version of the paper.
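As a quick sanity check (our own, not from the paper), the two branches of Theorem 7 agree at the breakpoint β = 1/(1−q), and at β = 2 the base is 1, matching the polynomial-time 2-approximation black box:

```python
import math

def kl(a, b):
    # D(a || b) with the convention 0 * ln(0/x) = 0.
    def term(p, r):
        return 0.0 if p == 0 else p * math.log(p / r)
    return term(a, b) + term(1 - a, 1 - b)

def theorem7_first_branch(beta, q):
    # Sampling-only bound: exp(beta * D(1/beta || q)).
    return math.exp(beta * kl(1 / beta, q))

def theorem7_second_branch(beta, q):
    # Closed-form interpolation branch: (q / (1 - q)) ** (beta - 2).
    return (q / (1 - q)) ** (beta - 2)
```

For q = 1/4 the breakpoint is β = 4/3, where both branches evaluate to 3^{2/3} ≈ 2.08.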

3.4 Comparison to Fidelity Preserving Transformations

We compare Sampling with a Black-Box with the technique of [19] for (𝒢,Π)-Vertex Deletion  problems in which Π is defined by a finite set of forbidden vertex induced hypergraphs.

Definition 8.

Let Ω = {F_1, …, F_ℓ} be a finite set of hypergraphs for some ℓ > 0. Then Π_Ω is the hypergraph property where a hypergraph G belongs to Π_Ω if and only if there is no vertex induced subhypergraph X of G such that X is isomorphic to F_i, for some 1 ≤ i ≤ ℓ.

We note that Π_Ω is always hereditary and polynomial-time decidable. We also note that the family of hypergraph properties defined by a finite set of forbidden hypergraphs suffices to define many fundamental graph problems such as Vertex Cover, d-Hitting Set and ℓ-Path Vertex Cover.

For a set of hypergraphs Ω = {F_1, …, F_ℓ}, define η(Ω) = max_{1≤i≤ℓ} |V(F_i)|, the maximal number of vertices of a hypergraph in Ω. The following result has been (implicitly) given in [19].

Lemma 9 ([19]).

Let Ω be a finite set of hypergraphs and let 𝒢 be a closed set of hypergraphs. Furthermore, assume there is a randomized exact parameterized c^k n^{O(1)} algorithm for (𝒢,Π_Ω)-Vertex Deletion. Then for every 1 ≤ β ≤ η(Ω) there is a randomized parameterized β-approximation algorithm for (𝒢,Π_Ω)-Vertex Deletion which runs in time

c^{((η(Ω)−β)/(η(Ω)−1))·k} n^{O(1)}.

There is a simple and generic way to design a sampling step for Π_Ω. The sampling step finds a set of vertices S ⊆ V(G) of the input hypergraph such that G[S] is isomorphic to a hypergraph in Ω, and returns a vertex from S uniformly at random. This leads to the following lemma.

Lemma 10.

Let Ω be a finite set of hypergraphs and let 𝒢 be a closed set of hypergraphs. There is a sampling step for (𝒢,Π_Ω)-Vertex Deletion with success probability 1/η(Ω).

A formal proof of Lemma 10 is given in the full version of the paper. Together with Theorem 5, the lemma implies the following.

Corollary 11.

Let Ω be a finite set of hypergraphs and 𝒢 be a closed set of hypergraphs. Furthermore, assume there is a randomized exact parameterized c^k n^{O(1)} algorithm for (𝒢,Π_Ω)-Vertex Deletion. Then for every 1 ≤ β ≤ η(Ω) there is a randomized parameterized β-approximation algorithm for (𝒢,Π_Ω)-Vertex Deletion which runs in time

(convert(1, β, c, 1/η(Ω)))^k n^{O(1)}.

Observe that Lemma 9 and Corollary 11 differ only in the resulting running time. The following lemma implies that for every 1 < β < η(Ω) the running time of Sampling with a Black Box, as given in Corollary 11, is always strictly better than the running time of [19] as given in Lemma 9.

Lemma 12.

For every η such that η ≥ 2, 1 < c < η and 1 < β < η it holds that

convert(1, β, c, 1/η) < c^{(η−β)/(η−1)}.

The proof of Lemma 12 is given in the full version of the paper.
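Lemma 12 can be illustrated numerically via the closed form of Theorem 6 with q = 1/η. The snippet below is our own check with illustrative values η = 3 and c = 2 (chosen arbitrarily, not taken from the paper):

```python
import math

def kl(a, b):
    # D(a || b) with the convention 0 * ln(0/x) = 0.
    def term(p, r):
        return 0.0 if p == 0 else p * math.log(p / r)
    return term(a, b) + term(1 - a, 1 - b)

def convert_closed_form(beta, c, eta):
    # Theorem 6 with q = 1/eta: the base of our running time.
    q = 1 / eta
    if beta < 1 / (q * c):
        return c * ((1 - c * q) / (1 - q)) ** (beta - 1)
    return math.exp(beta * kl(1 / beta, q))

def fidelity_base(beta, c, eta):
    # The base c^((eta - beta)/(eta - 1)) from Lemma 9 ([19]).
    return c ** ((eta - beta) / (eta - 1))
```

For every ratio checked, the base from Corollary 11 is strictly below the base from Lemma 9, as Lemma 12 asserts.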

4 Applications

In this section we describe some problems to which our results can be applied. For each problem, we utilize a sampling step to obtain a parameterized approximation algorithm. We also compare the running time of our algorithms against a relevant benchmark, when applicable.

4.1 Feedback Vertex Set

Recall that given a graph G and an integer k, the Feedback Vertex Set (FVS) problem asks whether there exists a set S ⊆ V(G) of size at most k such that G − S is acyclic. FVS can also be described as a (𝒢,Π)-Vertex Deletion problem, where 𝒢 is the set of all graphs and Π is the set of graphs that have no cycles. First, we demonstrate that there exists a sampling step for FVS.

Lemma 13.

Feedback Vertex Set has a sampling step with success probability 1/4.

The proof of Lemma 13, included in the full version of the paper, analyzes a sampling step that begins by removing vertices of degree at most 1. Then, a vertex is sampled from the remaining vertices, where the sampling probability for each vertex is proportional to its degree in the remaining graph. The proof of Lemma 13 relies on ideas from [3], which were also similarly used by [24]. Unlike [3, 24], Lemma 13 avoids the use of reduction rules which modify the graph. This technical difference is essential for meeting the formal definition of a sampling step.
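A sketch of this sampling step for simple graphs (our own illustration; the full version of the paper handles the details and the probability analysis) might look as follows:

```python
import random

def fvs_sampling_step(adj):
    # adj: dict mapping each vertex to the set of its neighbors (simple graph).
    # Step 1: iteratively discard vertices of degree at most 1; such vertices
    # are never needed for hitting a cycle.
    adj = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 1:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    if not adj:
        return None  # the graph is already acyclic; no step is needed
    # Step 2: sample a remaining vertex with probability proportional
    # to its degree in the remaining graph.
    vertices = list(adj)
    weights = [len(adj[v]) for v in vertices]
    return random.choices(vertices, weights=weights)[0]
```

On a triangle with a pendant vertex attached, the pendant is discarded in Step 1, so the returned vertex always lies on the cycle.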

In [28], the authors present a randomized parameterized algorithm for FVS which runs in time 2.7^k n^{O(1)} (i.e., in the terminology of this paper, α = 1, c = 2.7). Moreover, FVS also has a 2-approximation algorithm that runs in polynomial time (i.e., α = 2, c = 1) [2]. Using the sampling step together with Theorem 5, we attain the following.

Theorem 14.

For each 1 ≤ β ≤ 2, Feedback Vertex Set has a randomized parameterized β-approximation algorithm which runs in time d^k n^{O(1)} where

d =
  2.7·(13/30)^{β−1}  if 1 ≤ β < 1.402,
  (1/3)^{β−2}  if 1.402 ≤ β ≤ 2.  (5)

The proof of Theorem 14 is provided in the full version of the paper. It leverages the FPT algorithm presented in [28] and the 2-approximation algorithm introduced in [2], alongside Theorems 6 and 7.

In [24], the authors present a β-approximation algorithm for each 1 < β ≤ 2. It can be visually (see Figure 1(a)) and numerically (see Table 1) verified that our algorithms demonstrate a strictly better running time.

Table 1: Comparison of the bases of the exponents of different algorithms for FVS. For each column with a β value of b, a value d in the second or third row implies a b-approximation algorithm with running time d^k n^{O(1)}.
β 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9
Our Algorithm (5) 2.483 2.284 2.101 1.932 1.732 1.552 1.390 1.246 1.116
Jana et al. [24] 2.620 2.467 2.160 1.942 1.778 1.649 1.56 1.319 1.149

4.2 Pathwidth One Vertex Deletion

Given a graph G and an integer k, the Pathwidth One Vertex Deletion (POVD) problem asks whether there exists a set S ⊆ V(G) of size at most k such that G − S has pathwidth at most 1 (see [13] for the definition of pathwidth). First, we demonstrate that there exists a sampling step for POVD.

Lemma 15.

Pathwidth One Vertex Deletion has a sampling step with success probability 1/7.

The proof of Lemma 15 can be found in the full version of the paper. The sampling step for POVD is very similar to that for FVS, with a slight modification. This follows from the known property that a graph has pathwidth at most 1 if and only if it is acyclic and does not contain a specific graph on 7 vertices as a vertex induced subgraph. Due to this additional forbidden subgraph, POVD has a sampling step with success probability 1/7, instead of 1/4 as in the case of FVS.

To the best of our knowledge, parameterized approximation algorithms have not been studied for POVD. However, there exists an FPT algorithm for POVD with running time 3.888^k·n^𝒪(1) (α=1, c=3.888) [34]. Next, we demonstrate how combining the aforementioned sampling step with a parameterized algorithm yields a new approximation algorithm. Refer to Figure 2 and Table 2 for the corresponding running times.

Theorem 16.

For each 1 ≤ β ≤ 7, Pathwidth One Vertex Deletion has a β-approximation algorithm which runs in time d^k·n^𝒪(1) where

d = 3.888·(0.519)^(β−1)   if 1 ≤ β ≤ 1.8,
d = exp(β·𝒟(1/β ‖ 1/7))   if 1.8 < β < 7.  (6)

The proof of Theorem 16, included in the full version of the paper, uses the FPT algorithm from [34] together with Theorem 6.
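Under this reconstruction of (6), with 𝒟 the Kullback–Leibler divergence between Bernoulli parameters (an assumption on our part, consistent with its use throughout the paper), the entries of Table 2 below can be reproduced; the helper names are ours:

```python
import math

def kl(a, b):
    """Kullback-Leibler divergence D(a || b) between Bernoulli parameters."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def povd_base(beta):
    """Base d of the d^k * n^O(1) running time in Theorem 16, Eq. (6)."""
    if 1 <= beta <= 1.8:
        return 3.888 * 0.519 ** (beta - 1)
    if 1.8 < beta < 7:
        return math.exp(beta * kl(1 / beta, 1 / 7))
    raise ValueError("beta must lie in [1, 7)")
```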

Figure 2: A plot of the running time of our algorithm for Pathwidth One Vertex Deletion. The x-axis corresponds to the approximation ratio, while the y-axis corresponds to the base of the exponent in the running time. A point (β, d) in the plot describes a running time of the form d^k·n^𝒪(1) for a β-approximation.
Table 2: The table displays the bases of the exponents for our algorithm for Pathwidth One Vertex Deletion. Each pair (b, d), listed in the same row under consecutive β and Value columns, represents a b-approximation algorithm with running time d^k·n^𝒪(1).
β Value β Value β Value β Value β Value β Value
1.1 3.6412 2.1 1.9391 3.1 1.3776 4.1 1.1573 5.1 1.0553 6.1 1.0107
1.2 3.4100 2.2 1.8498 3.2 1.3466 4.2 1.1433 5.2 1.0488 6.2 1.0083
1.3 3.1936 2.3 1.7713 3.3 1.3181 4.3 1.1303 5.3 1.0428 6.3 1.0063
1.4 2.9908 2.4 1.7018 3.4 1.2920 4.4 1.1183 5.4 1.0374 6.4 1.0046
1.5 2.8010 2.5 1.6399 3.5 1.2679 4.5 1.1071 5.5 1.0323 6.5 1.0031
1.6 2.6232 2.6 1.5844 3.6 1.2457 4.6 1.0968 5.6 1.0277 6.6 1.0020
1.7 2.4567 2.7 1.5346 3.7 1.2252 4.7 1.0871 5.7 1.0236 6.7 1.0011
1.8 2.3007 2.8 1.4895 3.8 1.2062 4.8 1.0782 5.8 1.0198 6.8 1.0005
1.9 2.1604 2.9 1.4487 3.9 1.1886 4.9 1.0700 5.9 1.0164 6.9 1.0001
2.0 2.0417 3.0 1.4115 4.0 1.1724 5.0 1.0624 6.0 1.0134

4.3 (𝓖,𝚷)-Vertex Deletion for a finite set of forbidden sub-hypergraphs

There are many problems that can be described as (𝒢,Π)-Vertex Deletion problems in which Π is defined by a finite set Ω of forbidden vertex-induced hypergraphs. For each of these problems, by Lemma 10 there exists a sampling step with success probability 1/η, where η is the maximum number of vertices of a hypergraph in Ω. In the following, we demonstrate how to obtain parameterized approximation algorithms for such problems. For the sake of presentation, we focus on a specific problem called 3-Path Vertex Cover. We provide additional examples in the full version of the paper.

Given a graph G, a subset of vertices S ⊆ V(G) is called an ℓ-Path Vertex Cover if every path on ℓ vertices contains a vertex from S. The ℓ-Path Vertex Cover problem asks whether there exists an ℓ-Path Vertex Cover of size at most k, where k is the parameter [9]. ℓ-Path Vertex Cover can be described as a (𝒢,Π)-Vertex Deletion problem where 𝒢 is the set of all graphs and Π is the set of graphs that do not contain a path on ℓ vertices. Alternatively, let Ω be the set of all graphs G on ℓ vertices such that G contains a path on ℓ vertices. It holds that ℓ-Path Vertex Cover is equivalent to (𝒢,Π_Ω)-Vertex Deletion. Moreover, by the definition of Ω we have η(Ω) = ℓ. Therefore, by Lemma 10, there is a sampling step for ℓ-Path Vertex Cover with success probability 1/ℓ.

In the following, we consider ℓ = 3, i.e., the problem 3-Path Vertex Cover. There exists an FPT algorithm for 3-Path Vertex Cover which runs in time 1.708^k·n^𝒪(1) (α=1, c=1.708) [10]. Moreover, there is also a polynomial-time 2-approximation algorithm (α=2, c=1) [35].

Theorem 17.

For each 1 < β < 2, 3-Path Vertex Cover has a β-approximation algorithm which runs in time d^k·n^𝒪(1) where

d = 1.708·(0.644)^(β−1)   if 1 ≤ β < 1.6143,
d = (0.5)^(β−2)           if 1.6143 ≤ β ≤ 2.  (7)

The proof of Theorem 17, detailed in the full version of the paper, combines the FPT algorithm from [10] with the polynomial-time 2-approximation algorithm from [35]. Leveraging Theorems 6 and 7, we derive approximation algorithms and select the one with the smaller running time based on the approximation ratio.

Using the technique of [19] (see Lemma 9), one can attain a β-approximation algorithm for 3-Path Vertex Cover with running time 1.708^(((3−β)/2)·k)·n^𝒪(1), for every 1 ≤ β ≤ 2. As can be verified visually (see Figure 1(b)), numerically (see Table 3) or formally (see Lemma 12), our algorithm has a strictly better running time for all values of 1 < β ≤ 2.

Table 3: Comparison of the bases of the exponents of different algorithms for 3-Path Vertex Cover. For each column with a β value of b, a value d in the second or third row implies a b-approximation algorithm with running time d^k·n^𝒪(1).
β 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9
Our Algorithm (7) 1.6345 1.5641 1.4968 1.4323 1.3707 1.3117 1.2311 1.1487 1.0718
Fellows et al. [19] 1.6628 1.6189 1.5762 1.5345 1.4940 1.4545 1.416 1.3787 1.3423
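The strict dominance claimed above can be checked numerically under the two reconstructed bounds (a sketch; the helper names `ours` and `fellows` are ours, not the paper's):

```python
def ours(beta):
    """Base d from Theorem 17, Eq. (7)."""
    if 1 <= beta < 1.6143:
        return 1.708 * 0.644 ** (beta - 1)
    return 0.5 ** (beta - 2)  # for 1.6143 <= beta <= 2

def fellows(beta):
    """Base 1.708^((3 - beta)/2) from the fidelity-preserving bound of [19]."""
    return 1.708 ** ((3 - beta) / 2)

# The gap stays positive over a fine grid of beta in (1, 2).
betas = [1 + i / 100 for i in range(1, 100)]
gap = min(fellows(b) - ours(b) for b in betas)
```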

5 Sampling with a Black Box

In this section we present our framework, Sampling with a Black Box, and prove Theorem 5. The technique is designed using three main components, enabling a modular analysis of each part. We use the notion of a (δ,p,r,T)-procedure to abstract the outcome of iteratively executing a sampling step. We use this abstract notion in RandAndExtend, which combines the (δ,p,r,T)-procedure with the black-box parameterized α-approximation algorithm. On its own, RandAndExtend only attains a β-approximate solution with low probability. Our main algorithm, SamplingWithABlackBox, executes RandAndExtend multiple times to get a β-approximate solution with constant probability.

We start with the formal definition of a (δ,p,r,T)-procedure. The procedure gets a hypergraph G ∈ 𝒢 and a number t as input. The procedure returns a set S of size at most δ·t such that, with probability roughly p^t, the optimum of G∖S is smaller than the optimum of G by t, or reaches zero. The parameters T and r are used to overcome technicalities in the implementation of the procedure: the guarantees only hold if t ≥ T, and the probability that the optimum decreases is p^t/(t+1)^r.

Definition 18.

Let Π be a hypergraph property and 𝒢 be a closed set of hypergraphs. For every δ ≥ 1, r ≥ 0, 0 < p ≤ 1 and T ≥ 0, a (δ,p,r,T)-procedure for (𝒢,Π) is a polynomial-time randomized algorithm that takes as input a hypergraph G ∈ 𝒢 and an integer t ≥ 0, and returns a set S ⊆ V(G) with the following properties:

  1. P1

    It holds that |S| ≤ δ·t.

  2. P2

    Suppose that OPT_{𝒢,Π}(G) ≤ k for some k ≥ 0. If t ≥ T, then with probability at least p^t/(t+1)^r it holds that G∖S has a solution of size at most max(0, k−t), i.e.

    Pr( OPT_{𝒢,Π}(G∖S) ≤ max(0, k−t) ) ≥ p^t/(t+1)^r.

Additionally, we use the notation (δ,p)-procedure to refer to a (δ,p,r,T)-procedure for some constants r, T ≥ 0.

Observe that if there is a (δ,p)-procedure for (𝒢,Π)-Vertex Deletion, then we can use it to obtain a δ-approximation as follows. Suppose G has a solution of size k. Then, by setting t = k in Item 2, it holds that with probability at least p^k/(k+1)^r, G∖S has a solution of size 0, i.e., G∖S ∈ Π. By our assumption (that Π is polynomial-time decidable), we can check in polynomial time whether a graph belongs to the property Π. By Item 1 it holds that |S| ≤ δ·k. Additionally, we can repeat this algorithm p^{−k}·n^𝒪(1) times to obtain a δ-approximate solution with constant probability. We summarize these insights in the following observation.

Observation 19.

If there is a (δ,p)-procedure for (𝒢,Π)-Vertex Deletion, then there is a randomized parameterized δ-approximation algorithm for (𝒢,Π)-Vertex Deletion with running time (1/p)^k·n^𝒪(1).
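To make this concrete, here is a toy instantiation for Vertex Cover, which is not one of the algorithms analyzed in this paper: the classic rule "return a uniformly random endpoint of an uncovered edge" is a sampling step with q = 1/2, iterating it gives a (1, 1/2)-procedure (δ = 1), and repeating 𝒪(2^k) times yields the well-known randomized exact (β = 1) algorithm, exactly as in the observation above. All function names are ours:

```python
import random

random.seed(0)  # reproducibility of the demo below

def sampling_step(edges):
    """A random endpoint of an arbitrary uncovered edge lies in any
    fixed optimal vertex cover with probability >= 1/2."""
    u, v = next(iter(edges))
    return random.choice((u, v))

def multi_sample(edges, t):
    """Procedure with delta = 1: apply the sampling step up to t times,
    stopping early once every edge is covered."""
    S = set()
    while edges and len(S) < t:
        x = sampling_step(edges)
        S.add(x)
        edges = {e for e in edges if x not in e}
    return S, edges

def vc_via_sampling(edges, k, repeats):
    """Each trial succeeds with probability >= (1/2)^k, so O(2^k)
    trials find a cover of size <= k with constant probability."""
    for _ in range(repeats):
        S, rest = multi_sample(set(edges), k)
        if not rest:  # membership in Pi (edgeless graph) is easy to check
            return S
    return None

edges = {frozenset(e) for e in [(1, 2), (1, 3), (1, 4), (2, 5), (3, 5)]}
cover = vc_via_sampling(edges, k=2, repeats=100 * 2 ** 2)
```

The graph above has the optimal cover {1, 5}, and the returned set has size at most δ·k = 2 by construction.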

In the remainder of Section 5, we fix the values of 0 < q ≤ 1, 1 ≤ α ≤ 1/q, 1 ≤ β ≤ 1/q and 1 ≤ c ≤ exp(α·𝒟(1/α ‖ q)) to specific numbers such that α ≠ β, unless specified explicitly. For notational simplicity, we omit α, β, and c from the subscript of functions dependent on these variables. Moreover, let Π be a fixed polynomial-time decidable hypergraph property, 𝒢 be a closed set of hypergraphs and 𝒜 be a randomized parameterized α-approximation algorithm for the (𝒢,Π)-Vertex Deletion problem with running time c^k·n^𝒪(1).

Sampling with a Black Box combines the α-approximation algorithm 𝒜 with a (δ,p)-procedure to obtain a β-approximation. By Observation 19, the (δ,p)-procedure can be interpreted as a δ-approximation algorithm. Intuitively, this implies that, for the combination to work, β should lie between α and δ. We define interval(α,β) as the set of values that δ can take under this logic.

Definition 20.

For α, β ≥ 1 such that α ≠ β, we define the set interval(α,β) as

interval(α,β) ≔ [β, ∞) if β > α;  [1, β] if β < α.

The next component in our technique is RandAndExtend, given in Algorithm 1. The algorithm is configured with a (δ,p)-procedure 𝒫_{δ,p}. Given a hypergraph G and an integer t as input, the algorithm invokes the procedure 𝒫_{δ,p}, which returns a random set S ⊆ V(G) of size at most δ·t, and then runs the randomized parameterized α-approximation algorithm 𝒜 on the remaining hypergraph G∖S. The idea is to hope that G∖S has a solution of size at most (βk−δt)/α. Note that the parameter for the α-approximation algorithm is also (βk−δt)/α, which ensures that the approximation algorithm returns a set of size at most βk−δt, with high probability. In the event that this holds, by adding S to the returned set, we obtain a solution of size at most βk.

Algorithm 1 RandAndExtend.

Our main algorithm, given in Algorithm 2, begins by selecting a value for t. This value ensures the following: if the set S in Algorithm 1 contains at least t elements from an optimal solution of size k, then G∖S has a solution of size at most (βk−δt)/α. Furthermore, the size of the set returned by Algorithm 1 is at most βk if the above event occurs. Finally, Algorithm 2 executes Algorithm 1 multiple times to ensure a β-approximate solution is attained with constant probability.

Algorithm 2 SamplingWithABlackBox.
Lemma 21.

Let 0 < p ≤ 1, δ ∈ interval(α,β), and let 𝒫_{δ,p} be a (δ,p,r,T)-procedure for (𝒢,Π). Then Algorithm 2 is a randomized parameterized β-approximation algorithm for (𝒢,Π)-Del with running time

f(δ,p)^k·n^𝒪(1)

where f(δ,p) is given by

f(δ,p) ≔ exp( ((δ−β)·ln(c) + (β−α)·ln(1/p)) / (δ−α) ).

The proof of Lemma 21, included in the full version of the paper, proceeds as follows: for each set S ∈ 𝒮, we demonstrate that with non-negligible probability, S is of size at most βk. From this, it follows that the set returned by Algorithm 2 is, with high probability, of size at most βk. Lastly, the running time of the algorithm is computed through detailed calculations, resulting in an expression of f(δ,p)^k·n^𝒪(1), completing the proof.

Algorithm 2 relies on the existence of a (δ,p)-procedure; however, so far we have not shown how to design one. We generate a (δ,p)-procedure from a sampling step (Definition 2) via a simple algorithm which iteratively invokes the sampling step. The pseudo-code of the algorithm is given in Algorithm 3.

Algorithm 3 MultiSample.

Define

ϕ(δ,q) ≔ exp(−δ·𝒟(1/δ ‖ q)). (8)
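Equation (8) can be evaluated directly, taking 𝒟 to be the Kullback–Leibler divergence between Bernoulli parameters (as used throughout). A small sketch with sanity checks, e.g. ϕ(1/q, q) = 1 (as used for Observation 23 below) and ϕ(1, q) = q:

```python
import math

def kl(a, b):
    """D(a || b): Kullback-Leibler divergence between Bernoulli parameters."""
    def term(x, y):
        return 0.0 if x == 0 else x * math.log(x / y)
    return term(a, b) + term(1 - a, 1 - b)

def phi(delta, q):
    """phi(delta, q) = exp(-delta * D(1/delta || q)), Eq. (8)."""
    return math.exp(-delta * kl(1 / delta, q))
```

For example, with q = 1/7 and β = 2 the quantity 1/ϕ(2, 1/7) reproduces the base 2.0417 appearing in Table 2 for Pathwidth One Vertex Deletion.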

The following lemma states that Algorithm 3 is indeed a (δ,p)-procedure for p = ϕ(δ,q).

Lemma 22.

Let 0 < q ≤ 1 and consider a sampling step for (𝒢,Π)-Vertex Deletion with success probability q. Then, for any 1 ≤ δ ≤ 1/q, Algorithm 3 is a (δ, ϕ(δ,q))-procedure for (𝒢,Π)-Del.

The proof of Lemma 22, provided in the full version of the paper, shows that Algorithm 3 satisfies the properties of a (δ, ϕ(δ,q))-procedure. It is structured in three steps: first, the algorithm is proven to run in polynomial time using the polynomial-time decidability of Π. Second, Item 1 is shown to hold by observing that the while loop in Algorithm 3 can execute at most δ·t times. Finally, Item 2 is shown by deriving a bound on the probability of success using auxiliary results about the behavior of dependent random variables and hereditary properties of hypergraphs. Together, these steps show that the algorithm meets all necessary conditions.

Lemma 22 implies that for δ = 1/q, there exists a (1/q, 1)-procedure for (𝒢,Π)-Vertex Deletion. Observe that this also serves as a (δ,1)-procedure for every δ > 1/q. Therefore, we make the following observation.

Observation 23.

Let 0 < q ≤ 1 and suppose there is a sampling step for (𝒢,Π)-Vertex Deletion with success probability q. Then, for any δ > 1/q, there is a (δ,1)-procedure for (𝒢,Π)-Vertex Deletion.

Furthermore, we can now prove Lemma 3 using Lemma 22 together with Observation 19.

Proof of Lemma 3.

Assume there is a sampling step with success probability q for (𝒢,Π)-Del, and let 1 ≤ β ≤ 1/q. Then by Lemma 22 there is a (β, ϕ(β,q))-procedure for (𝒢,Π)-Del. Hence, by Observation 19 there is a parameterized β-approximation for (𝒢,Π)-Del which runs in time d^k·n^𝒪(1) where d = 1/ϕ(β,q) = exp(β·𝒟(1/β ‖ q)).

Using the procedure from Lemma 22 together with Lemma 21 (Algorithm 2) we get the following result.

Lemma 24.

Suppose that:

  • There is a sampling step with success probability q ∈ (0,1) for (𝒢,Π)-Del.

  • There is a randomized parameterized α-approximation algorithm for (𝒢,Π)-Del which runs in time c^k·n^𝒪(1).

Then, there is a randomized parameterized β-approximation algorithm for (𝒢,Π)-Del with running time

( min_{δ ∈ interval(α,β) ∩ [1, 1/q]} f̃(δ,q) )^k · n^𝒪(1)

where f~(δ,q) is defined as

f̃(δ,q) ≔ f(δ, ϕ(δ,q)) = exp( ((δ−β)·ln(c) + (β−α)·ln(1/ϕ(δ,q))) / (δ−α) ).

Proof.

For each δ ∈ interval(α,β) such that 1 ≤ δ ≤ 1/q, by Lemma 22 there is a (δ, ϕ(δ,q))-procedure for (𝒢,Π)-Del. Therefore, by Lemma 21, there is a randomized parameterized β-approximation algorithm for (𝒢,Π)-Del with running time

f(δ, ϕ(δ,q))^k·n^𝒪(1) = f̃(δ,q)^k·n^𝒪(1).

In particular, one can consider all possible δ ∈ interval(α,β) ∩ [1, 1/q] and choose the δ that minimizes the running time. Therefore, there is a randomized parameterized β-approximation algorithm for (𝒢,Π)-Del which runs in time

( min_{δ ∈ interval(α,β) ∩ [1, 1/q]} f̃(δ,q) )^k · n^𝒪(1).
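For intuition, the optimization in Lemma 24 can be carried out numerically by a grid search over δ; the following sketch instantiates it for FVS (α = 1, c = 2.7, q = 1/4) at β = 1.3, again taking 𝒟 to be the Bernoulli Kullback–Leibler divergence (the helper names are ours). The resulting minimum matches the first branch of Theorem 14 and improves on the pure-sampling bound exp(β·𝒟(1/β ‖ q)) ≈ 2.1605 of Lemma 3:

```python
import math

def kl(a, b):
    """Kullback-Leibler divergence D(a || b) between Bernoulli parameters."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def f_tilde(delta, q, alpha, beta, c):
    """f~(delta, q) from Lemma 24, written for beta > alpha (so delta >= beta)."""
    log_inv_phi = delta * kl(1 / delta, q)  # ln(1/phi(delta, q))
    num = (delta - beta) * math.log(c) + (beta - alpha) * log_inv_phi
    return math.exp(num / (delta - alpha))

# FVS instance: alpha = 1, c = 2.7 (FPT algorithm of [28]), q = 1/4.
alpha, c, q, beta = 1.0, 2.7, 0.25, 1.3
grid = [beta + i / 1000 for i in range(0, 2701)]  # interval(alpha,beta) ∩ [1, 1/q] = [1.3, 4]
best = min(f_tilde(d, q, alpha, beta, c) for d in grid)
```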

Lemma 24 provides a β-approximation algorithm whose running time is the solution of an optimization problem. The final step towards the proof of Theorem 5 is to upper bound the value of this optimization problem.

Lemma 25.

It holds that

min_{δ ∈ interval(α,β) ∩ [1, 1/q]} f̃(δ,q) ≤ convert(α,β,c,q).

The proof of Lemma 25, which can be found in the full version of the paper, simply selects a value for δ. We note that the selected value is in fact optimal, that is,

min_{δ ∈ interval(α,β) ∩ [1, 1/q]} f̃(δ,q) = convert(α,β,c,q).

However, we do not formally show the above equality. With all necessary components in place, we are now ready to proceed with the proof of Theorem 5.

Proof of Theorem 5.

In case α ≠ β the claim follows immediately from Lemmas 24 and 25. If α = β then convert(α,β,c,q) = c, and indeed 𝒜 is a randomized parameterized β-approximation algorithm for (𝒢,Π)-Vertex Deletion which runs in time c^k·n^𝒪(1).

References

  • [1] Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, and Joachim Spoerhase. Parameterized approximation schemes for clustering with general norm objectives. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 1377–1399. IEEE, 2023. doi:10.1109/FOCS57990.2023.00085.
  • [2] Vineet Bafna, Piotr Berman, and Toshihiro Fujito. A 2-Approximation Algorithm for the Undirected Feedback Vertex Set Problem. SIAM Journal on Discrete Mathematics, 12(3):289–297, January 1999. doi:10.1137/S0895480196305124.
  • [3] Ann Becker, Reuven Bar-Yehuda, and Dan Geiger. Randomized algorithms for the loop cutset problem. Journal of Artificial Intelligence Research, 12(1):219–234, May 2000. doi:10.1613/JAIR.638.
  • [4] Rémy Belmonte, Michael Lampis, and Valia Mitsou. Parameterized (approximate) defective coloring. SIAM Journal on Discrete Mathematics, 34(2):1084–1106, 2020. doi:10.1137/18M1223666.
  • [5] Arnab Bhattacharyya, Édouard Bonnet, László Egri, Suprovat Ghoshal, Karthik C. S., Bingkai Lin, Pasin Manurangsi, and Dániel Marx. Parameterized intractability of even set and shortest vector problem. J. ACM, 68(3), March 2021. doi:10.1145/3444942.
  • [6] Nicolas Bourgeois, Bruno Escoffier, and Vangelis Th Paschos. Approximation of max independent set, min vertex cover and related problems by moderately exponential algorithms. Discrete Applied Mathematics, 159(17):1954–1970, 2011. doi:10.1016/J.DAM.2011.07.009.
  • [7] Ljiljana Brankovic and Henning Fernau. Parameterized Approximation Algorithms for Hitting Set. In Roberto Solis-Oba and Giuseppe Persiano, editors, Approximation and Online Algorithms, pages 63–76, Berlin, Heidelberg, 2012. Springer. doi:10.1007/978-3-642-29116-6_6.
  • [8] Ljiljana Brankovic and Henning Fernau. A novel parameterised approximation algorithm for minimum vertex cover. Theoretical Computer Science, 511:85–108, 2013. doi:10.1016/J.TCS.2012.12.003.
  • [9] Boštjan Brešar, František Kardoš, Ján Katrenič, and Gabriel Semanišin. Minimum k-path vertex cover. Discrete Applied Mathematics, 159(12):1189–1195, July 2011. doi:10.1016/j.dam.2011.04.008.
  • [10] Radovan Červený and Ondřej Suchý. Generating Faster Algorithms for d-Path Vertex Cover. In Daniël Paulusma and Bernard Ries, editors, Graph-Theoretic Concepts in Computer Science, pages 157–171, Cham, 2023. Springer Nature Switzerland. doi:10.1007/978-3-031-43380-1_12.
  • [11] Parinya Chalermsook, Marek Cygan, Guy Kortsarz, Bundit Laekhanukit, Pasin Manurangsi, Danupon Nanongkai, and Luca Trevisan. From gap-exponential time hypothesis to fixed parameter tractable inapproximability: Clique, dominating set, and more. SIAM Journal on Computing, 49(4):772–810, 2020. doi:10.1137/18M1166869.
  • [12] Yijia Chen, Yi Feng, Bundit Laekhanukit, and Yanlin Liu. Simple combinatorial construction of the k^{o(1)}-lower bound for approximating the parameterized k-clique. arXiv preprint arXiv:2304.07516, 2023. doi:10.48550/arXiv.2304.07516.
  • [13] Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer International Publishing, Cham, 2015. doi:10.1007/978-3-319-21275-3.
  • [14] Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai. Budgeted Matroid Maximization: a Parameterized Viewpoint. In Proc. IPEC 2023, pages 13:1–13:17, 2023. doi:10.4230/LIPICS.IPEC.2023.13.
  • [15] Pavel Dvorák, Andreas Emil Feldmann, Dusan Knop, Tomás Masarík, Tomas Toufar, and Pavel Veselỳ. Parameterized approximation schemes for steiner trees with small number of steiner vertices. In Proc. STACS, 2018.
  • [16] Barış Can Esmer, Ariel Kulik, Dániel Marx, Daniel Neuen, and Roohani Sharma. Faster Exponential-Time Approximation Algorithms Using Approximate Monotone Local Search. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, Proc. ESA, 2022. doi:10.4230/LIPIcs.ESA.2022.50.
  • [17] Barış Can Esmer, Ariel Kulik, Dániel Marx, Daniel Neuen, and Roohani Sharma. Approximate monotone local search for weighted problems. In Neeldhara Misra and Magnus Wahlström, editors, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023), volume 285 of Leibniz International Proceedings in Informatics (Lipics), pages 17:1–17:23, Dagstuhl, Germany, 2023. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.IPEC.2023.17.
  • [18] Can Barış Esmer, Ariel Kulik, Dániel Marx, Daniel Neuen, and Roohani Sharma. Optimally Repurposing Existing Algorithms to Obtain Exponential-Time Approximations. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Proceedings, pages 314–345. Society for Industrial and Applied Mathematics, January 2024. doi:10.1137/1.9781611977912.13.
  • [19] Michael R Fellows, Ariel Kulik, Frances Rosamond, and Hadas Shachnai. Parameterized approximation via fidelity preserving transformations. Journal of Computer and System Sciences, 93:30–40, 2018. doi:10.1016/J.JCSS.2017.11.001.
  • [20] Fedor V Fomin, Serge Gaspers, Daniel Lokshtanov, and Saket Saurabh. Exact algorithms via monotone local search. Journal of the ACM (JACM), 66(2):1–23, 2019. doi:10.1145/3284176.
  • [21] Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese. Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack. In Proc. ESA, 2019.
  • [22] Venkatesan Guruswami, Bingkai Lin, Xuandi Ren, Yican Sun, and Kewen Wu. Parameterized inapproximability hypothesis under ETH. In Proc. STOC (to appear), 2024.
  • [23] Tanmay Inamdar, Daniel Lokshtanov, Saket Saurabh, and Vaishali Surianarayanan. Parameterized Complexity of Fair Bisection: (FPT-Approximation meets Unbreakability). In Proc. ESA, 2023.
  • [24] Satyabrata Jana, Daniel Lokshtanov, Soumen Mandal, Ashutosh Rai, and Saket Saurabh. Parameterized Approximation Scheme for Feedback Vertex Set. In Jérôme Leroux, Sylvain Lombardy, and David Peleg, editors, 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023), volume 272 of Leibniz International Proceedings in Informatics (LIPIcs), pages 56:1–56:15, Dagstuhl, Germany, 2023. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.MFCS.2023.56.
  • [25] CS Karthik and Subhash Khot. Almost polynomial factor inapproximability for parameterized k-clique. In 37th Computational Complexity Conference (CCC 2022). Schloss-Dagstuhl-Leibniz Zentrum für Informatik, 2022.
  • [26] Ariel Kulik and Hadas Shachnai. Analysis of two-variable recurrence relations with application to parameterized approximations. In Proc. FOCS, pages 762–773, 2020. doi:10.1109/FOCS46700.2020.00076.
  • [27] Michael Lampis. Parameterized approximation schemes using graph widths. In International Colloquium on Automata, Languages, and Programming, pages 775–786. Springer, 2014. doi:10.1007/978-3-662-43948-7_64.
  • [28] Jason Li and Jesper Nederlof. Detecting Feedback Vertex Sets of Size k in O(2.7^k) Time. ACM Transactions on Algorithms, 18(4):34:1–34:26, October 2022. doi:10.1145/3504027.
  • [29] Bingkai Lin, Xuandi Ren, Yican Sun, and Xiuhan Wang. Constant approximating parameterized k-setcover is W[2]-hard. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3305–3316. SIAM, 2023. doi:10.1137/1.9781611977554.CH126.
  • [30] Bingkai Lin, Xuandi Ren, Yican Sun, and Xiuhan Wang. Improved hardness of approximating k-clique under eth. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), 2023.
  • [31] Daniel Lokshtanov, Saket Saurabh, and Vaishali Surianarayanan. A parameterized approximation scheme for min k-cut. SIAM Journal on Computing, 2022.
  • [32] Pasin Manurangsi. A Note on Max k-Vertex Cover: Faster FPT-AS, Smaller Approximate Kernel and Improved Approximation. In Proc. SOSA, 2019.
  • [33] Pasin Manurangsi. Tight running time lower bounds for strong inapproximability of maximum k-coverage, unique set cover and related problems (via t-wise agreement testing theorem). In Proc. SODA, pages 62–81. SIAM, 2020. doi:10.1137/1.9781611975994.5.
  • [34] Dekel Tsur. Faster algorithm for pathwidth one vertex deletion. Theoretical Computer Science, 921:63–74, June 2022. doi:10.1016/j.tcs.2022.04.001.
  • [35] Jianhua Tu and Wenli Zhou. A factor 2 approximation algorithm for the vertex cover P3 problem. Information Processing Letters, 111(14):683–686, July 2011. doi:10.1016/j.ipl.2011.04.009.
  • [36] Michał Włodarczyk. Parameterized inapproximability for steiner orientation by gap amplification. In Proc. ICALP, 2020.