
On the Complexity of Recoverable Robust Optimization in the Polynomial Hierarchy

Christoph Grüne, Department of Computer Science, RWTH Aachen University, Germany. Lasse Wulf, Section of Algorithms, Logic and Graphs, Technical University of Denmark, Kongens Lyngby, Denmark.
Abstract

Recoverable robust optimization is a popular multi-stage approach, in which it is possible to adjust a first-stage solution after the uncertain cost scenario is revealed. We consider recoverable robust optimization in combination with discrete budgeted uncertainty. In this setting, it seems plausible that many problems become Σ3p-complete and therefore it is impossible to find compact IP formulations of them (unless the unlikely conjecture NP = Σ3p holds). Even though this seems plausible, few concrete results of this kind are known. In this paper, we fill that gap of knowledge. We consider recoverable robust optimization for the nominal problems of Sat, 3Sat, vertex cover, dominating set, set cover, hitting set, feedback vertex set, feedback arc set, uncapacitated facility location, p-center, p-median, independent set, clique, subset sum, knapsack, partition, scheduling, Hamiltonian path/cycle (directed/undirected), TSP, k-directed disjoint path (k ≥ 2), and Steiner tree. We show that for each of these problems, and for each of three widely used distance measures, the recoverable robust problem becomes Σ3p-complete. Concretely, we show that all these problems share a certain abstract property and prove that this property implies that their recoverable robust counterpart is Σ3p-complete. This reveals the insight that all the above problems are Σ3p-complete “for the same reason”. Our result extends a recent framework by Grüne and Wulf.

Keywords and phrases:
Complexity, Robust Optimization, Recoverable Robust Optimization, Two-Stage Problems, Polynomial Hierarchy, Sigma 2, Sigma 3
Funding:
Christoph Grüne: Funded by the German Research Foundation (DFG) – GRK 2236/2.
Lasse Wulf: Funded by the Carlsberg Foundation CF21-0302 “Graph Algorithms with Geometric Applications”.
Copyright and License:
© Christoph Grüne and Lasse Wulf; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Problems, reductions and completeness
Related Version:
Full Version: https://arxiv.org/abs/2411.18590 [19]
Editors:
Paweł Gawrychowski, Filip Mazowiecki, and Michał Skrzypczak

1 Introduction

Real-world decision makers are faced with a large degree of uncertainty when making important decisions in economics, planning, and operations research. The successful area of Robust optimization [1, 14, 28] has been developed as one possible way to deal with these uncertainties. However, sometimes the classical robust optimization approach has turned out to be too conservative. For this reason, Recoverable robust optimization was introduced by Liebchen, Lübbecke, Möhring and Stiller [31]. Initially motivated by train scheduling problems, it has since then found wide-spread application in practice and in the analysis of standard optimization problems. For instance, it has been successfully applied to combinatorial problems such as s-t-path [4, 23, 24], matching [11], scheduling [3], spanning tree [21, 22], knapsack [6, 7, 8, 29], and TSP [9, 15]. Recoverable robust optimization follows a two-step robust optimization approach. In contrast to classic robust optimization, where a first-stage solution cannot be changed, in recoverable robust optimization the decision maker is allowed to incorporate a limited recovery action after the underlying uncertainty is revealed. Mathematically, this is described by the expression

min_{S_1} max_{c_2 ∈ C} min_{S_2 : dist(S_1, S_2) ≤ κ} c_1(S_1) + c_2(S_2).

Here, S_1 denotes the first-stage solution, S_2 denotes the second-stage solution, c_1 denotes the first-stage cost function, C denotes the set of uncertain scenarios, and dist is some distance function. A more formal definition will be given later.

In this paper, we make the standard assumption that the underlying uncertainty for the robust problem is given as a discrete budgeted uncertainty set, also denoted as discrete Γ-uncertainty [2].
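For intuition, the following Python sketch (our own illustration, not part of the paper; all names are hypothetical) evaluates this min-max-min expression by brute force for a toy selection problem whose feasible solutions are listed explicitly, with the adversary allowed to raise at most Γ item costs from a lower to an upper bound.

```python
from itertools import combinations

def recoverable_robust_value(universe, feasible, c1, c_low, c_high, Gamma, kappa, dist):
    """Brute-force value of min_{S1} max_{c2} min_{S2 : dist(S1,S2) <= kappa} c1(S1) + c2(S2)
    under discrete budgeted uncertainty: the adversary raises at most Gamma costs to c_high."""
    best = float("inf")
    for S1 in feasible:
        worst = float("-inf")
        for r in range(Gamma + 1):
            for raised in combinations(universe, r):        # adversary's choice of raised items
                c2 = {u: (c_high[u] if u in raised else c_low[u]) for u in universe}
                inner = min(sum(c1[u] for u in S1) + sum(c2[u] for u in S2)
                            for S2 in feasible if dist(S1, S2) <= kappa)
                worst = max(worst, inner)
        best = min(best, worst)
    return best

# Toy instance: choose 2 of 3 items, Hamming distance, at most one cost may be raised.
U = ["a", "b", "c"]
feasible = [frozenset(S) for S in combinations(U, 2)]
hamming = lambda A, B: len(A ^ B)
value = recoverable_robust_value(
    U, feasible,
    c1={"a": 1, "b": 2, "c": 3},
    c_low={"a": 1, "b": 1, "c": 1}, c_high={"a": 9, "b": 9, "c": 9},
    Gamma=1, kappa=2, dist=hamming)
print(value)
```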

The study of Σ3p-complete problems.

Recoverable robust problems are described by min-max-min formulations. From a complexity-theoretic point of view, such min-max-min problems are often even harder than NP-complete, namely they are often complete for the third stage of the so-called polynomial hierarchy [33]. A problem complete for the k-th stage of the hierarchy is called Σkp-complete. (In this paper, we are concerned mostly with the case of k = 3.) The theoretical study of Σkp-complete problems is important: If a problem is found to be Σ3p-complete, it means that, under some basic complexity-theoretic assumptions (more specifically, we assume here that Σ3p ≠ NP; this is a similar assumption to the famous P ≠ NP assumption, and it is believed to be similarly unlikely, although the true status of the conjecture is not known; more details can be found in [34]), it is not possible to find an integer programming formulation of the problem of polynomial size [34] (also called a compact model). This means that no matter how cleverly a decision maker tries to design their integer programming model, it must inherently have a huge number of constraints and/or variables, and may be much harder to solve than even NP-complete problems.

Even though this fact makes the study of Σ3p-complete problems compelling, surprisingly few results relevant to the area of min-max-min combinatorial optimization were known until recently. While the usual approach to prove Σkp-completeness (or NP-completeness) is to formulate a new proof for each single problem, a recent paper by Grüne & Wulf [18], extending earlier ideas by Johannes [26], breaks with this approach. Instead, it is shown that there exists a large ’base list’ of problems (called SSP-NP-complete problems in [18]), including many classic problems like independent set, vertex cover, knapsack, TSP, etc. Grüne and Wulf show that for each problem from the base list, some corresponding min-max version is Σ2p-complete, and some corresponding min-max-min version is Σ3p-complete. This approach has three main advantages: A1.) It uncovers a large number of previously unknown Σkp-complete problems. A2.) It reveals the theoretically interesting insight that for all these problems the Σkp-completeness follows from essentially the same argument. A3.) It can simplify future proofs, since heuristically it seems to be true that for a new problem it is often easier to show that the nominal problem belongs to the list of SSP-NP-complete problems than to find a Σkp-completeness proof from scratch.

The main goal of the current paper is to extend the framework of Grüne & Wulf [18] also to the setting of recoverable optimization, since this setting was not considered in the original paper. It turns out that compared to [18] more complicated assumptions are needed. We introduce a set of sufficient conditions for some nominal problem, which imply that the recoverable robust version of that problem is Σ3p-complete.

Our results.

We consider recoverable robust optimization for the following nominal problems: Sat, 3Sat, vertex cover, dominating set, set cover, hitting set, feedback vertex set, feedback arc set, uncapacitated facility location, p-center, p-median, independent set, clique, subset sum, knapsack, partition, scheduling, Hamiltonian path/cycle (directed/undirected), TSP, k-directed disjoint path (k ≥ 2), and Steiner tree. In addition, we consider the three most popular distance measures dist used in recoverable robust optimization: the κ-addition distance, the κ-deletion distance, and the Hamming distance. (A formal definition of these distance measures is given in Section 4.1.)

We show that for every combination of the above problems with any of the three distance measures, the recoverable robust problem (with discrete budgeted uncertainty) is Σ3p-complete. More generally, we identify an abstract property of all our studied problems, and prove that this abstract property already implies that the recoverable robust problem becomes Σ3p-complete. This answers a question asked by Goerigk, Lendl and Wulf [16]. As a consequence, the advantages A1–A3 as explained above also apply to recoverable robust problems. We remark that Σ3p-completeness was already known in the case of clique/independent set, TSP or shortest path combined with the Hamming distance [16]. It was also already known for shortest path in combination with all three distance measures [23] and for several optimization problems [17] for so-called xor-dependencies and Γ-set scenarios. Hence our work is an extension of these results.

1.1 Related Work

Recoverable robustness concepts were analyzed for a variety of different standard optimization problems. Büsing [4] analyzed the recoverable shortest path problem with discrete budgeted uncertainty, in which adding at most κ elements to the second-stage solution is allowed. This analysis was later continued by Jackiewicz, Kasperski, and Zieliński [24] for several different graph classes and for interval budgeted uncertainty. Furthermore, recoverable robust knapsack was analyzed by Büsing, Koster, and Kutschka [8] for discrete scenarios, in which both adding and deleting a bounded number of elements is allowed. Büsing, Koster, and Kutschka [7] additionally analyzed the Γ-scenario case (discrete budgeted uncertainty) while allowing a bounded number of elements to be deleted. This work was also continued by Büsing, Goderbauer, Koster, and Kutschka [6]. Further classical optimization problems that were studied in the recoverable robustness context are matching by Dourado, Meierling, Penso, Rautenbach, Protti, and de Almeida [11] and spanning tree under interval cost uncertainty by Hradovich, Kasperski, and Zieliński [21, 22]. Additionally, Lendl, Peis, and Timmermans [30] examined matroidal problems. One variant of independent set for recoverable robustness with a commitment property was analyzed by Hommelsheim, Megow, Muluk, and Peis [20]. Beside these problems, recoverable robust selection was studied by Kasperski and Zieliński [27], Chassein, Goerigk, Kasperski, and Zieliński [10], and Goerigk, Lendl, and Wulf [15]. Moreover, Lachmann, Lendl, and Woeginger [29] developed a linear time algorithm for the recoverable Γ-robust knapsack problem. The recoverable robust assignment problem was also investigated by Fischer, Hartmann, Lendl, and Woeginger [12].

Approximation results for recoverable robust problems were also achieved. For example, the recoverable traveling salesman problem was studied by Chassein and Goerigk [9], as well as Goerigk, Lendl, and Wulf [15], where the latter showed a 4-approximation algorithm. Beyond that Bold and Goerigk [3] presented a 2-approximation for the recoverable robust single machine scheduling problem under interval uncertainty.

Closely related to our work are the complexity studies by Goerigk, Lendl, and Wulf [16] who analyze the problems independent set, traveling salesman and vertex cover and obtain Σ3p-completeness for the three problems. At the same time, Grüne [17] introduced a gadget reduction framework to derive Σ3p-completeness for various different optimization problems by reusing already known reductions for recoverable robust problems with so-called xor-dependency scenarios and Γ-set scenarios. Additionally, Jackiewicz, Kasperski, and Zieliński [23] show that the shortest path problem with discrete budgeted interval uncertainty is Σ3p-complete.

2 Preliminaries

A language is a set L ⊆ {0,1}*. A language L is contained in Σkp iff there exists some polynomial-time computable function V (verifier), and m_1, m_2, …, m_k = |w|^O(1) such that for all w ∈ {0,1}*

w ∈ L ⟺ ∃ y_1 ∈ {0,1}^{m_1} ∀ y_2 ∈ {0,1}^{m_2} ⋯ Q y_k ∈ {0,1}^{m_k} : V(w, y_1, y_2, …, y_k) = 1,

where Q = ∃ if k is odd, and Q = ∀ if k is even. An introduction to the polynomial hierarchy and the classes Σkp can be found in the book by Papadimitriou [32] or in the article by Jeroslow [25]. An introduction specifically in the context of bilevel optimization can be found in the article of Woeginger [34]. A many-one reduction or Karp reduction from a language L to a language L′ is a map f : {0,1}* → {0,1}* such that w ∈ L iff f(w) ∈ L′ for all w ∈ {0,1}*. A language L is Σkp-hard if every L′ ∈ Σkp can be reduced to L with a polynomial-time many-one reduction. If L is both Σkp-hard and contained in Σkp, it is Σkp-complete.
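As a purely illustrative sketch of this definition (ours, not from the paper), the alternating quantifier expression can be evaluated by brute force for small block lengths m_1, …, m_k and a given verifier V:

```python
from itertools import product

def sigma_k_holds(verifier, w, bit_lengths):
    """Evaluate  exists y1  forall y2  exists y3 ... : V(w, y1, ..., yk) = 1  by brute force.

    bit_lengths = (m1, ..., mk); quantifiers alternate, starting with an existential one."""
    def rec(prefix, level):
        if level == len(bit_lengths):
            return verifier(w, *prefix) == 1
        blocks = product([0, 1], repeat=bit_lengths[level])
        if level % 2 == 0:   # existential quantifier
            return any(rec(prefix + [list(y)], level + 1) for y in blocks)
        else:                # universal quantifier
            return all(rec(prefix + [list(y)], level + 1) for y in blocks)
    return rec([], 0)

# Toy verifier: accepts iff every bit of y2 is dominated by the bitwise OR of y1 and y3.
V = lambda w, y1, y2, y3: int(all((a | c) >= b for a, b, c in zip(y1, y2, y3)))
print(sigma_k_holds(V, "irrelevant", (2, 2, 2)))   # True, e.g. by choosing y1 = (1, 1)
```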

For some cost function c : U → ℤ and some subset U′ ⊆ U, we define the cost of the subset U′ as c(U′) := Σ_{u ∈ U′} c(u). For a map f : A → B and some subset A′ ⊆ A, we define the image of the subset A′ as f(A′) = {f(a) : a ∈ A′}. We defer the proofs marked with (⋆) to the full version of the paper [19].

3 Framework

Because this work builds upon the framework developed by Grüne and Wulf, it is essential to reintroduce its key concepts. A more comprehensive explanation of these concepts and their underlying motivation can be found in the original paper [18]. Grüne and Wulf begin by providing a precise definition of their primary focus: linear optimization problems, or in short LOP problems. An example of an LOP problem is vertex cover.

Definition 1 (Linear Optimization Problem, from [18]).

A linear optimization problem (or in short LOP problem) Π is a tuple (ℐ, 𝒰, ℱ, d, t), such that

  • ℐ ⊆ {0,1}* is a language. We call ℐ the set of instances of Π.

  • For each instance I ∈ ℐ, there are

    • a set 𝒰(I) with |𝒰(I)| = |I|^O(1) that we call the universe associated to the instance I,

    • a set ℱ(I) ⊆ 2^𝒰(I) that we call the feasible solution set associated to the instance I,

    • a function d^(I) : 𝒰(I) → ℤ mapping each universe element e to its costs d^(I)(e),

    • a threshold t^(I) ∈ ℤ.

For I ∈ ℐ, we define the solution set 𝒮(I) := {S ∈ ℱ(I) : d^(I)(S) ≤ t^(I)} as the set of feasible solutions below the cost threshold. The instance I is a Yes-instance if and only if 𝒮(I) ≠ ∅. We say that an LOP is contained in NP if it can be checked in polynomial time in |I| whether some proposed set F ⊆ 𝒰(I) is a solution, i.e. F ∈ 𝒮(I).

Vertex Cover
Instances: Graph G=(V,E), number k.
Universe: Vertex set V=:𝒰.
Feasible solution set: The set of all vertex covers of G.
Solution set: The set of all vertex covers of G of size at most k.
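To make the LOP/SSP view concrete, the following small sketch (our own, with a toy graph) enumerates the universe, the feasible solution set, and the solution set of a Vertex Cover instance exactly as in the box above:

```python
from itertools import chain, combinations

def vertex_cover_ssp(vertices, edges, k):
    """Return (universe, feasible solution set, solution set) of a Vertex Cover instance."""
    universe = set(vertices)
    subsets = chain.from_iterable(combinations(vertices, r) for r in range(len(vertices) + 1))
    feasible = [frozenset(s) for s in subsets
                if all(u in s or v in s for (u, v) in edges)]      # all vertex covers
    solutions = [S for S in feasible if len(S) <= k]               # covers below the threshold k
    return universe, feasible, solutions

U, F, S = vertex_cover_ssp(["a", "b", "c"], [("a", "b"), ("b", "c")], k=1)
print(S)   # [frozenset({'b'})] -- a Yes-instance, since the solution set is non-empty
```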

It turns out that often the mathematical discussion is a lot clearer when one omits the concepts ℱ, d^(I), and t^(I), since for the abstract proof of the theorems only ℐ, 𝒰, 𝒮 are important. This leads to the following abstraction from the concept of an LOP problem:

Definition 2 (Subset Search Problem (SSP), from [18]).

A subset search problem (or short SSP problem) Π is a tuple (ℐ, 𝒰, 𝒮), such that

  • ℐ ⊆ {0,1}* is a language. We call ℐ the set of instances of Π.

  • For each instance I ∈ ℐ, there is some set 𝒰(I) with |𝒰(I)| = |I|^O(1) which we call the universe associated to the instance I.

  • For each instance I ∈ ℐ, there is some (potentially empty) set 𝒮(I) ⊆ 2^𝒰(I) which we call the solution set associated to the instance I.

An instance I of an SSP problem is referred to as a Yes-instance if 𝒮(I) ≠ ∅. Any LOP problem is transformable into an SSP problem by defining 𝒮(I) := {S ∈ ℱ(I) : d^(I)(S) ≤ t^(I)}. We refer to this as the SSP problem derived from an LOP problem. There are problems that are more naturally represented as SSP problems, rather than as LOP problems. For instance, the problem 3Satisfiability can be derived as an SSP problem as follows.

3Satisfiability
Instances: Literal set L = {ℓ_1, …, ℓ_n} ∪ {ℓ̄_1, …, ℓ̄_n}, clause set
C = {C_1, …, C_m} such that C_j ∈ L³ for all j ∈ {1, …, m}.
Universe: L =: 𝒰.
Solution set: The set of all subsets L′ ⊆ 𝒰 of the literals such that for all i ∈ {1, …, n} we have |L′ ∩ {ℓ_i, ℓ̄_i}| = 1, and such that |L′ ∩ C_j| ≥ 1 for all clauses C_j ∈ C.

Grüne and Wulf introduce a novel type of reduction, termed SSP reduction. In essence, a typical polynomial-time reduction from a problem Π to another problem Π′ possesses the SSP property if it includes an additional injective mapping f : 𝒰 → 𝒰′ that embeds the universe elements 𝒰 of Π into the universe 𝒰′ of Π′. This embedding allows Π to be viewed as a ’subinstance’ of Π′ while preserving the topology of solutions within the subset induced by the image of f. More precisely, we interpret W := f(𝒰) as the subinstance of Π within the instance of Π′ and we require the following two conditions to be met:

  1. For every solution S of Π, the set f(S) is a partial solution of Π′ and can be extended to a full solution using elements that are not in W.

  2. For every solution S′ of Π′, the set f⁻¹(S′ ∩ W) is a solution of Π.

These two conditions are encapsulated in the single equation (1). We indicate that such a reduction exists by writing Π ≤_SSP Π′. For a more intuitive explanation and an example that shows 3Sat ≤_SSP Vertex Cover, we direct readers to [18].

Definition 3 (SSP Reduction, from [18]).

Let Π = (ℐ, 𝒰, 𝒮) and Π′ = (ℐ′, 𝒰′, 𝒮′) be two SSP problems. We say that there is an SSP reduction from Π to Π′, and write Π ≤_SSP Π′, if

  • There exists a function g : {0,1}* → {0,1}* computable in polynomial time in the input size |I|, such that I is a Yes-instance iff g(I) is a Yes-instance (i.e. 𝒮(I) ≠ ∅ iff 𝒮′(g(I)) ≠ ∅).

  • There exist functions (f_I)_{I ∈ ℐ} computable in polynomial time in |I| such that for all instances I ∈ ℐ, we have that f_I : 𝒰(I) → 𝒰′(g(I)) is an injective function mapping from the universe of the instance I to the universe of the instance g(I) such that

    {f_I(S) : S ∈ 𝒮(I)} = {S′ ∩ f_I(𝒰(I)) : S′ ∈ 𝒮′(g(I))}. (1)

We remark that the map f_I is required to be uniformly computable in polynomial time in |I|, i.e. there exists a fixed polynomial p such that f_I(u) can be computed in time p(|I|) for all I ∈ ℐ and u ∈ 𝒰(I). In [18], it is demonstrated that SSP reductions are transitive, that is, Π_1 ≤_SSP Π_2 and Π_2 ≤_SSP Π_3 imply Π_1 ≤_SSP Π_3. We denote the class of SSP-NP-complete problems by SSP-NPc. It is defined as the set of all SSP problems Π that are polynomial-time verifiable (containment) and for which there is an SSP reduction from Satisfiability, i.e. Satisfiability ≤_SSP Π (hardness). The key insight in [18] is that many classical problems fall within the class SSP-NPc. Additionally, this insight can be leveraged to show that their corresponding min-max versions are Σ2p-complete.
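The set equality (1) can be checked mechanically on explicit instances. The following sketch (ours, with hypothetical toy data) takes the solution sets of I and g(I) together with the embedding f_I and tests Equation (1):

```python
def is_ssp_consistent(solutions_I, solutions_gI, f):
    """Check Equation (1): {f(S) : S in S(I)} == {S' restricted to f(U(I)) : S' in S'(g(I))}."""
    image = frozenset(f.values())                                  # f_I(U(I))
    lhs = {frozenset(f[u] for u in S) for S in solutions_I}
    rhs = {frozenset(S_prime & image) for S_prime in solutions_gI}
    return lhs == rhs

# Hypothetical toy data: a one-variable universe embedded into a larger universe.
f = {"x": "v_x", "not_x": "v_notx"}
sols_I = [frozenset({"x"})]
sols_gI = [frozenset({"v_x", "helper1"}), frozenset({"v_x", "helper2"})]
print(is_ssp_consistent(sols_I, sols_gI, f))   # True: every restriction equals {v_x}
```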

4 Recoverable Robust Problems

In this section, we consider recoverable robust optimization problems. We show that the recoverable robust optimization problem is Σ3p-complete for the following nominal problems: satisfiability, 3-satisfiability, vertex cover, dominating set, set cover, hitting set, feedback vertex set, feedback arc set, uncapacitated facility location, p-center, p-median, independent set, clique, subset sum, knapsack, partition, scheduling, (un)directed Hamiltonian path, (un)directed Hamiltonian cycle, traveling salesman, two directed disjoint path, k directed disjoint path, and Steiner tree.

Recoverable robust optimization problems are defined as follows: We are given some instance I of a linear optimization problem (like the shortest path problem, the traveling salesman problem, etc.), and are faced with an uncertain future. The goal is to find a feasible solution S_1 for the first stage (called here-and-now decision) such that after the reveal of the uncertainty we can find a feasible solution S_2 in the second stage (called wait-and-see decision) such that S_1 and S_2 are not far away from each other according to some distance measure. Formally we require dist(S_1, S_2) ≤ κ, where dist(S_1, S_2) is some abstract distance function (for example the Hamming distance |S_1 Δ S_2|). The cost of the solution is given by c_1(S_1) + c_2(S_2). Here, c_1 is a fixed cost function not affected by uncertainty, called the setup costs, and c_2 is affected by uncertainty. More specifically, we assume that c_2 ∈ C^Γ, that is, c_2 is affected by discrete budgeted uncertainty C^Γ (usually in the literature, this set is denoted by 𝒰^Γ; however, we have already used the letter 𝒰 in this paper). Precisely, given Γ and lower and upper bounds c̲(u) ≤ c̄(u) for all elements in the universe, the set C^Γ contains all cost functions such that at most Γ elements u ∈ 𝒰(I) have costs of c̄(u), while all others have c̲(u):

C^Γ := { c_2 : ∀ u ∈ 𝒰(I) : c_2(u) = c̲(u) + δ_u · (c̄(u) − c̲(u)), δ_u ∈ {0,1}, Σ_{u ∈ 𝒰(I)} δ_u ≤ Γ }

This leads to the following abstract definition of a recoverable robust problem. We remark that the formal definition in some sense “disregards” the cost functions d and the cost threshold t of the original LOP problem. This is intentional because in the new instance of the recoverable robust problem, it becomes necessary to replace the old function d and the old threshold t by new ones. However, these concepts are necessary to correctly understand the proofs.

Definition 4 (Recoverable Robust Problem).

Let an LOP problem Π = (ℐ, 𝒰, ℱ, d, t) and a distance measure dist : 2^𝒰 × 2^𝒰 → ℤ_{≥0} be given. The recoverable robust problem associated to Π and dist is denoted by RR-Π and defined as follows: The input is an instance I ∈ ℐ together with three cost functions c_1 : 𝒰(I) → ℤ, c̲ : 𝒰(I) → ℤ and c̄ : 𝒰(I) → ℤ, a cost threshold t^RR, an uncertainty parameter Γ ≥ 0 and a recoverability parameter κ ≥ 0. The question is whether

min_{S_1 ∈ ℱ(I)} max_{c_2 ∈ C^Γ} min_{S_2 ∈ ℱ(I), dist(S_1, S_2) ≤ κ} c_1(S_1) + c_2(S_2) ≤ t^RR. (2)

Example.

Let Π = TSP. TSP is encoded as an LOP problem in the following way: An instance is given by I = (G, d, t), where G = (V, E) is a complete undirected graph, d : E → ℤ_{≥0} are the edge costs and t is the cost threshold. The decision problem of TSP asks if there is a tour T ⊆ E with cost d(T) ≤ t visiting all vertices from V exactly once. The universe is 𝒰(I) = E. The set ℱ(I) ⊆ 2^E is the set of all feasible tours (including those of cost greater than t). The set 𝒮(I) is the set of all tours of cost at most t. To turn the TSP into a recoverable robust problem, we “forget” about the cost function d and the threshold t. Given c_1, c̲, c̄, Γ, κ, t^RR, the decision problem associated to the recoverable robust TSP (under some fixed distance function dist) is to decide whether Equation 2 holds.

4.1 Distance Measures

In the literature, multiple different distance measures are used. All these distance functions share the abstract properties specified in Definition 5.

  • the κ-addition or simply κ-distance measure is used in [5, 21]: dist(A_1, A_2) = |A_2 \ A_1|

  • the κ-deletion distance is used in [7]: dist(A_1, A_2) = |A_1 \ A_2|

  • the Hamming distance measure is used in [11, 16, 17]: dist(A_1, A_2) = |A_1 Δ A_2|

Definition 5 (Distance Measure).

A distance measure is a family (dist_U), where U ranges over all finite sets, such that for all finite sets U, U′, and all subsets A_1, A_2 ⊆ U, the map dist_U : 2^U × 2^U → ℤ_{≥0} has the following properties:

  • computable in polynomial time in |U|,

  • invariant on injective mappings f : U → U′, i.e. dist_U(A_1, A_2) = dist_{U′}(f(A_1), f(A_2)),

  • invariant on union, i.e. dist_U(A_1, A_2) = dist_U(A_1 ∪ {x}, A_2 ∪ {x}) for x ∈ U \ (A_1 ∪ A_2),

  • dist_U(A_1, A_1) = 0.

If U is clear from the context, we omit the subscript.

One can easily verify that all of the distance measures from above fulfill these criteria.
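For illustration, here is a small sketch (ours) of the three distance measures together with a sanity check of the properties of Definition 5 on a toy example, using the κ-addition distance as representative:

```python
def add_dist(A1, A2):      # kappa-addition distance |A2 \ A1|
    return len(A2 - A1)

def del_dist(A1, A2):      # kappa-deletion distance |A1 \ A2|
    return len(A1 - A2)

def hamming_dist(A1, A2):  # Hamming distance |A1 symmetric-difference A2|
    return len(A1 ^ A2)

A1, A2 = {1, 2}, {2, 3, 4}
f = {1: "a", 2: "b", 3: "c", 4: "d", 5: "e"}                       # an injective mapping U -> U'
assert add_dist(A1, A1) == 0                                        # identity
assert add_dist({f[x] for x in A1}, {f[x] for x in A2}) == add_dist(A1, A2)   # injective invariance
assert add_dist(A1 | {5}, A2 | {5}) == add_dist(A1, A2)                        # union invariance
print(add_dist(A1, A2), del_dist(A1, A2), hamming_dist(A1, A2))                # 2 1 3
```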

4.2 Containment in Σ3p

Recoverable robust problems can also be understood as a three-stage two-player game of an ∃-player playing against an adversary (the ∀-player). The ∃-player controls both min operators and is able to choose the solutions S_1 and S_2. On the other hand, the adversary controls the max operator and is able to choose the uncertainty scenario, i.e. the cost function c_2 from the set C^Γ. Thus, it is possible to reformulate the question as

∃ S_1 ⊆ 𝒰(I) : ∀ c_2 ∈ C^Γ : ∃ S_2 ⊆ 𝒰(I) : S_1, S_2 ∈ ℱ(I), c_1(S_1) + c_2(S_2) ≤ t^RR and dist(S_1, S_2) ≤ κ.

With the game-theoretical perspective, it is intuitive to see the containment in Σ3p for all problems that have a polynomial-time verifier.

Theorem 6 (⋆).

If Π is an LOP problem in NP, then RR-Π is contained in Σ3p.

4.3 Combinatorial Recoverable Robust Problems

To show the hardness of recoverable robust problems, we introduce a new problem which we call the combinatorial version of recoverable robust problems. There are two differences to Definition 4: In this new problem, the cost function is replaced by a set B of so-called blockable elements (this can be interpreted as the case where the cost coefficients c̲, c̄ are restricted to come from {0, ∞}). Furthermore, in this new definition we replace ℱ(I) by 𝒮(I). As we show later, the hardness of the new combinatorial version also implies the hardness of the cost version. Finally, as explained above, Definition 7 applies to all SSP problems.

Definition 7 (Combinatorial Recoverable Robust Problem).

Let an SSP problem Π = (ℐ, 𝒰, 𝒮) be given. The combinatorial recoverable robust problem associated to Π is denoted by Comb. RR-Π and defined as follows: The input is an instance I ∈ ℐ together with a set B ⊆ 𝒰(I), an uncertainty parameter Γ ≥ 0, and a recoverability parameter κ ≥ 0. The question is whether

∃ S_1 ⊆ 𝒰(I) : ∀ B′ ⊆ B with |B′| ≤ Γ : ∃ S_2 ⊆ 𝒰(I) :
S_1, S_2 ∈ 𝒮(I), S_2 ∩ B′ = ∅ and dist(S_1, S_2) ≤ κ.
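On explicit instances, this ∃∀∃ statement can again be decided by brute force. The following sketch (our own; the toy data is hypothetical) enumerates first-stage solutions, blockers of size at most Γ, and second-stage solutions:

```python
from itertools import combinations

def comb_rr_decision(solutions, B, Gamma, kappa, dist):
    """Decide: exists S1 in S(I) such that for every blocker B' of size <= Gamma
    there exists S2 in S(I) with S2 disjoint from B' and dist(S1, S2) <= kappa."""
    blockers = [frozenset(b) for r in range(Gamma + 1) for b in combinations(sorted(B), r)]
    for S1 in solutions:
        if all(any(not (S2 & Bp) and dist(S1, S2) <= kappa for S2 in solutions)
               for Bp in blockers):
            return True
    return False

# Toy example: two interchangeable solutions; blocking one element leaves a close alternative.
sols = [frozenset({"x", "a"}), frozenset({"x", "b"})]
hamming = lambda A, B: len(A ^ B)
print(comb_rr_decision(sols, B={"a", "b"}, Gamma=1, kappa=2, dist=hamming))   # True
```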

5 An SSP Framework for Recoverable Robust Problems

In this section, we prove our main result, i.e. we introduce and prove a sufficient condition for nominal problems such that the corresponding recoverable robust problem is Σ3p-hard. We first explain the rough idea.

We want to show that recoverable robust problems are Σ3p-hard, so we have to choose some Σ3p-hard problem from which we start our reduction. The canonical Σ3p-complete problem is ∃∀∃-Satisfiability [33]. Intuitively, this satisfiability problem can be understood as a game between Alice and Bob. Alice first chooses a variable assignment on the variables of X, then Bob chooses an assignment of the variables Y, and then again Alice selects an assignment on the variables Z, where Alice wishes to satisfy the formula φ(X,Y,Z), and Bob wishes the opposite. On the other hand, in our target problem RR-Π, any solution consists of two solutions of the nominal problem Π, the first-stage solution S_1 and the second-stage solution S_2. The main challenge of our reduction is that we have to model the three variable sets X, Y, Z of the formula ∃X ∀Y ∃Z φ(X,Y,Z) in the order of quantification into the problem RR-Π. Accordingly, both solutions S_1, S_2 need to include an assignment to all variables from X, Y, Z that satisfies φ(X,Y,Z) in agreement with the nominal Sat problem. However, note that there is a difference in the solution structure of both problems, which we need to address. If we model Alice’s decision on the X-variables in ∃∀∃-Sat in the first-stage solution S_1 of RR-Π, we need to make sure that Alice is not able to reassign the X-variables in the second-stage solution S_2 of RR-Π. Otherwise, Alice does not adhere to the order of quantification.

This problem can be circumvented by adding an additional property to the SSP reduction. We demand that for a given set of literals L_b the distance of two solutions S_1 and S_2 is small if and only if the partial solutions of S_1 and S_2 restricted to L_b are the same. One can interpret this construction as a gadget that blows up the literals of L_b in comparison to all the other literals. Accordingly, we call the literals of L_b blow-up literals and the corresponding reduction a blow-up SSP reduction with a blow-up factor β_I. Then, we can set L_b = L_X, where L_X is the set of literals corresponding to variables of X. Therefore, Alice is not able to change the assignment of the variable set X from the first-stage solution S_1 to the second-stage solution S_2 and thus has to adhere to the order of quantification. Then, it is also possible to reuse the injective correspondence function f of the SSP reduction to set the blockable elements B and to show the correctness of the reduction by using the elementary correspondence between the Sat solution and the solution of Π. Formally, we define blow-up SSP reductions as follows.

Definition 8 (Blow-Up SSP Reduction).

Let dist be a distance measure. Let 3Sat = (ℐ, 𝒰, 𝒮) with 𝒰 = L = {ℓ_1, …, ℓ_n} ∪ {ℓ̄_1, …, ℓ̄_n} and Π′ = (ℐ′, 𝒰′, 𝒮′) be two SSP problems. Then, 3Sat is SSP blow-up reducible to Π′ with respect to dist(⋅,⋅) if for all sets L_b ⊆ L fulfilling ℓ_i ∈ L_b ⟺ ℓ̄_i ∈ L_b there exists an SSP reduction (g, f_I) with the following property: There is a polynomial-time computable blow-up factor β_I corresponding to each instance I such that for all solutions S_1, S_2 ∈ 𝒮′(g(I)) of g(I):

f(L_b) ∩ S_1 = f(L_b) ∩ S_2 ⟺ dist_{𝒰′(g(I))}(S_1, S_2) ≤ β_I.

A concrete example of a blow-up reduction is given in Section 5.1.

Remark 1. Note that we define blow-up reductions to start at 3Sat. Because 3Sat is one of the ’first’ NP-complete problems, many reductions start at 3Sat or have a short reduction chain from 3Sat. Thus, it is convenient to reuse these reductions or reduction chains, respectively, to construct blow-up reductions. As we will show later in Sections 5.1 and 6, this is not really a restriction.

Remark 2. Blow-up SSP reductions are not transitive. (This is because there is no restriction on newly introduced elements in Π′ and how they behave with respect to solutions of Π′. Therefore, the distance between solutions in Π′ cannot be related to the distance between the corresponding solutions in Π.) We will tackle this problem in Section 6, in which we will show that we only need to find a blow-up reduction once at the beginning of a reduction chain and we can use so-called blow-up preserving reductions to append further problems to the reduction chain starting at 3Sat. Such a blow-up preserving reduction is typically much easier to find than a new blow-up reduction.

With everything in place, we show that blow-up SSP reductions enable us to show the Σ3p-hardness of recoverable robust problems as long as the nominal problem Π is SSP-NP-hard. In particular, our main theorem now states that our newly introduced blow-up reductions indeed are a sufficient criterion for Σ3p-hardness.

Theorem 9.

For all SSP-NP-hard problems Π that are blow-up SSP reducible from 3Sat, the combinatorial recoverable robust variant Comb. RR-Π is Σ3p-hard.

Proof.

For the proof of the main theorem, we require some definitions. Let X = {x_1, …, x_n} be a set of binary variables and L := {x_1, …, x_n} ∪ {x̄_1, …, x̄_n} = X ∪ X̄ be the set of corresponding literals. An assignment is a subset A ⊆ X ∪ X̄ such that |A ∩ {x_i, x̄_i}| = 1 for all i = 1, …, n. We say that the assignment A assigns the value 1 to variable x_i if x_i ∈ A, and A assigns 0 to x_i if x̄_i ∈ A. We remark that this notation for an assignment is non-standard, but it turns out to be convenient in the context of our framework. A Sat-formula φ is in 3CNF if it is a conjunction of clauses and every clause has exactly three literals. We denote by φ(A) ∈ {0,1} the evaluation of the formula φ under the assignment A. In the following, we often consider the case where the set of variables is partitioned into three disjoint sets X, Y, Z, and denote this case by writing φ(X, Y, Z).

For the Σ3p-hardness proof, we require a known Σ3p-complete problem to reduce from. It turns out that instead of reducing from the classic problem ∃∀∃-Satisfiability, it is more convenient to base our proof on Robust Adjustable Sat with budgeted uncertainty (in short, R-Adj-Sat), introduced by Goerigk, Lendl and Wulf [16], which we reformulate to adhere to the notation of this paper:

Problem R-Adj-Sat
Instance: A Sat-formula φ(X, Y, Z) in 3CNF. A partition of the set of variables into three disjoint parts X ∪ Y ∪ Z with |X| = |Y| = |Z|. An integer Γ ≥ 0.
Question: Is there an assignment A_X ⊆ X ∪ X̄ such that for all subsets Y′ ⊆ Y of size |Y′| ≤ Γ, there exist assignments A_Y ⊆ Y ∪ Ȳ and A_Z ⊆ Z ∪ Z̄ such that A_Y ∩ Y′ = ∅, i.e. all variables in Y′ are set to 0, and φ(A_X, A_Y, A_Z) = 1?

The problem R-Adj-Sat is best understood as a game between Alice and Bob: Alice wants to satisfy the formula, Bob wants to achieve the opposite. Alice initially selects an assignment A_X of the variables in X. Afterwards, Bob is allowed to select a blocker Y′ ⊆ Y of size at most Γ and force all these variables to be “0”. Finally, Alice is allowed to assign values to all variables in Y ∪ Z that have not yet been assigned. In [16] it is shown that R-Adj-Sat is Σ3p-complete. In order to provide additional insight for the reader, we quickly sketch the main argument in [16]: The idea is to reduce from the classic problem ∃∀∃-Satisfiability. Let ∃A_X ∀A_Y ∃A_Z φ(A_X, A_Y, A_Z) be an instance of this problem. How do we transform it into an R-Adj-Sat problem? It is intuitive that in such a reduction, Alice’s role of choosing the assignments A_X and A_Z should stay roughly the same. However, how do we express the choice of A_Y in terms of a blocker Y′ that is forced to 0? The idea is to split the variable y_i ∈ Y into two new variables y_i^t, y_i^f for every i = 1, …, n. We say that Bob plays honestly if for all i = 1, …, n we have |{y_i^t, y_i^f} ∩ Y′| = 1. Such an honest choice of Y′ naturally encodes an assignment A_Y. It turns out that, using the pigeonhole principle and the budget constraint |Y′| ≤ Γ, one can enrich the formula φ with a “cheat-detection gadget”. This gadget can be used by Alice to win the game trivially if and only if Bob cheats. Hence the modified game of R-Adj-Sat is equivalent to the original instance of ∃∀∃-Satisfiability.

We prove Σ3p-hardness by providing a reduction from R-Adj-Sat to Comb. RR-Π. This reduction will crucially rely on the blow-up SSP reduction from 3Sat to Π, which we assumed to exist. More formally, let Π = (ℐ_Π, 𝒰_Π, 𝒮_Π) be an SSP problem, such that there is a blow-up reduction from 3Sat to Π defined by the mappings g, (f_I)_{I ∈ ℐ_Sat}, and blow-up factor (β_I)_{I ∈ ℐ_Sat}. Recall that by the definition of a blow-up reduction this means the following: Let ℐ_Sat denote the set of all possible 3Sat instances, i.e. the set of all 3CNF formulas. For each fixed 3Sat instance I_Sat ∈ ℐ_Sat, its universe 𝒰(I_Sat) is the set of positive and negative literals of variables appearing in that formula. By assumption, for every fixed 3Sat instance I_Sat ∈ ℐ_Sat and every set L_b ⊆ 𝒰(I_Sat) (with ℓ ∈ L_b iff ℓ̄ ∈ L_b) there exists an equivalent instance I_Π = g(I_Sat) of Π. Furthermore, associated to each fixed 3Sat instance I_Sat ∈ ℐ_Sat we have a function f_{I_Sat}, which describes in which way the universe of I_Sat can be injectively embedded into the universe of the equivalent instance I_Π such that the SSP property is true. Finally, there is a tight correspondence between solutions of I_Π having distance at most β_{I_Sat}, and (the pre-image of) the solutions agreeing on the set L_b. For brevity of notation, in the following we often write 𝒰(I_Π) instead of 𝒰_Π(I_Π) and 𝒮(I_Π) instead of 𝒮_Π(I_Π) to denote the solution set/universe associated to instance I_Π, if the correct subscript is clear from the context.

We now turn our attention to the Σ3p-hardness proof. As explained above, the problem R-Adj-Sat is Σ3p-complete. Let an R-Adj-Sat instance I_RAS be given. Our goal is to transform this instance into a new instance I_RRΠ of Comb. RR-Π such that

I_RAS is a Yes-instance of R-Adj-Sat ⟺ I_RRΠ is a Yes-instance of Comb. RR-Π.

Clearly, if we can achieve this goal, we are done. By definition of the problem R-Adj-Sat, the instance I_RAS consists of the following parts:

I_RAS = (φ(X, Y, Z), Γ),

where φ ∈ ℐ_Sat is a 3CNF formula whose variables are partitioned into three parts X, Y, Z of equal size n := |X| = |Y| = |Z|, and Γ ≥ 0. Let the literal sets corresponding to X, Y, Z be denoted by L_X := X ∪ X̄, L_Y := Y ∪ Ȳ, and L_Z := Z ∪ Z̄. Note that the universe associated to the 3Sat instance φ is L_X ∪ L_Y ∪ L_Z. Let us denote this fact by writing 𝒰(φ) = L_X ∪ L_Y ∪ L_Z.

Given the instance I_RAS as input, we describe in the following how to construct the instance I_RRΠ of Comb. RR-Π, which consists of the following parts:

I_RRΠ = (I_Π, B, Γ′, κ).

Here, I_Π is an instance of the nominal problem Π, B ⊆ 𝒰(I_Π) is the set of blockable elements, Γ′ ≥ 0 is the uncertainty budget, and κ ≥ 0 is the recoverability parameter.

Before we give a formal description of I_Π, B, Γ′, and κ, we explain the rough idea: In particular, what is the right choice for the instance I_Π? The answer is provided by the blow-up reduction. Note that this reduction can take any 3CNF formula and a subset L_b of the literals (with the property that ℓ ∈ L_b iff ℓ̄ ∈ L_b for all literals ℓ) and produce an instance of Π. Note that the set L_b gets “blown up” by the reduction. We claim that L_b = L_X is a good choice. Indeed, consider the following formal definition of I_RRΠ.

Description of the Instance I_RRΠ.

Let g, (f_φ)_{φ ∈ ℐ_Sat}, and (β_φ)_{φ ∈ ℐ_Sat} be the objects describing the blow-up reduction from 3Sat to Π for the fixed choice of L_b := L_X. We let I_Π = g(φ) be the instance produced by the blow-up reduction applied to the formula φ and the literal set L_b = L_X. We remark that the following holds by the definition of the blow-up reduction: A blow-up reduction in particular is also an SSP reduction. Hence the function f_φ : 𝒰(φ) → 𝒰(I_Π) maps the literals of φ injectively to elements of the new universe 𝒰(I_Π). We can hence define the set of blockable elements

B := f_φ(Y),

where Y ⊆ 𝒰(φ) describes the positive literals in L_Y (recall that L_Y = Y ∪ Ȳ).

Furthermore, note that by definition of a blow-up reduction, there exists some β_φ, computable in polynomial time, with the property that for all S_1, S_2 ∈ 𝒮(I_Π):

dist(S_1, S_2) ≤ β_φ ⟺ S_1 ∩ f_φ(L_X) = S_2 ∩ f_φ(L_X).

In other words, the new instance I_Π has the property that two of its solutions have small distance if and only if they agree on f_φ(L_X), i.e. they agree on those universe elements of 𝒰(I_Π) that correspond to L_X. We finally define

κ := β_φ and Γ′ := Γ.

This completes the description of the instance I_RRΠ = (I_Π, B, Γ′, κ).
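Schematically, the assembly of I_RRΠ can be summarized as follows (an illustrative sketch assuming a hypothetical interface for the blow-up reduction; it is not the paper’s formal notation):

```python
def build_comb_rr_instance(phi, X_vars, Y_vars, Gamma, blow_up_reduce):
    """Assemble I_RRPi = (I_Pi, B, Gamma', kappa) from a blow-up reduction.

    blow_up_reduce(phi, L_b) is assumed to return (I_Pi, f, beta): the nominal
    instance g(phi), the universe embedding f (a dict on literals), and beta_phi.
    Literals are written as strings, with "!" marking negation."""
    L_X = set(X_vars) | {"!" + x for x in X_vars}   # blow up exactly the X-literals
    I_Pi, f, beta = blow_up_reduce(phi, L_X)
    B = {f[y] for y in Y_vars}                      # blockable elements: images of positive Y-literals
    return I_Pi, B, Gamma, beta                     # Gamma' := Gamma, kappa := beta_phi
```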

Correctness

We start the correctness proof by showing

I_RAS is a Yes-instance ⟹ I_RRΠ = (I_Π, B, Γ′, κ) is a Yes-instance.

In this case, let A_X ⊆ X ∪ X̄ be an assignment of the variables X with the property that

∀ Y′ ⊆ Y, |Y′| ≤ Γ : ∃ A_Y, A_Z : A_Y ∩ Y′ = ∅ and φ(A_X, A_Y, A_Z) = 1.

Such an A_X exists by the assumption that I_RAS is a Yes-instance. Now, let Y′ ⊆ Y be an arbitrary subset of Y with |Y′| ≤ Γ. Then, it is possible to choose assignments A_Y, A_Z of Y, Z such that A_Y ∩ Y′ = ∅ and φ(A_X, A_Y, A_Z) = 1. We consider such an assignment A_1 := A_X ∪ A_Y ∪ A_Z. Note that if we interpret φ as a 3Sat instance, then A_1 ⊆ 𝒰(φ) and even A_1 ∈ 𝒮(φ), due to φ(A_1) = 1. Since the blow-up reduction is in particular an SSP reduction, we can make use of the central property of SSP reductions. The property implies that the 3Sat solution A_1 can be “lifted” to a solution of Π. More precisely, there exists a solution S_1 ∈ 𝒮(I_Π) such that

S_1 ∩ f_φ(𝒰(φ)) = f_φ(A_1).

(Intuitively, the solution S_1 ∈ 𝒮(I_Π) of the nominal problem instance I_Π corresponds to the solution A_1 ∈ 𝒮(φ) of the 3Sat instance φ when restricted to the injectively embedded “sub-instance” of 3Sat inside Π.)

We claim that S_1 is a solution for I_RRΠ. To prove this claim, we have to show that for all blockers B′ ⊆ B with |B′| ≤ Γ′ = Γ, there exists a solution S_2 ∈ 𝒮(I_Π) with S_2 ∩ B′ = ∅ and dist(S_1, S_2) ≤ κ. Indeed, such a solution S_2 exists for all choices of blockers B′. This can be seen by applying the following construction: Repeat exactly the same construction as for S_1, except that the set Y′ is not chosen arbitrarily. Instead, choose Y′ by letting Y′ := f_φ^{-1}(B′). Note that by the definition of B, the set Y′ is well-defined, i.e. Y′ ⊆ Y, |Y′| ≤ Γ, and f_φ(Y′) = B′. Repeating the same construction, the assignment A_X stays the same, but we obtain different assignments A_Y, A_Z with A_Y ∩ Y′ = ∅ and φ(A_X, A_Y, A_Z) = 1. We let A_2 := A_X ∪ A_Y ∪ A_Z. Again, by the SSP property there exists a solution S_2 ∈ 𝒮(I_Π) such that S_2 ∩ f_φ(𝒰(φ)) = f_φ(A_2). Therefore we have

∅ = f_φ(A_2 ∩ Y′) = f_φ(A_2) ∩ f_φ(Y′) = (S_2 ∩ f_φ(𝒰(φ))) ∩ B′ = S_2 ∩ B′.

Note that A_X is the same in both constructions. Therefore S_1 ∩ f_φ(L_X) = S_2 ∩ f_φ(L_X), and hence, by the definition of a blow-up reduction, we have dist(S_1, S_2) ≤ β_φ = κ. In total, we have shown that S_1 is a solution of Π such that for every blocker B′ ⊆ B with |B′| ≤ Γ′, there exists a good solution S_2. This shows that I_RRΠ is a Yes-instance.

It remains to consider the reverse direction:

I_RRΠ = (I_Π, B, Γ′, κ) is a Yes-instance ⟹ I_RAS is a Yes-instance.

The strategy is very similar, with the difference that we use the SSP property and the blow-up property in the reverse direction. Consider some solution S_1 ∈ 𝒮(I_Π) which satisfies

∀ B′ ⊆ B, |B′| ≤ Γ′ : ∃ S_2 ∈ 𝒮(I_Π), S_2 ∩ B′ = ∅ : dist(S_1, S_2) ≤ κ.

We have to show that there exists an assignment A_X ⊆ X ∪ X̄ such that for all subsets Y′ ⊆ Y of size |Y′| ≤ Γ there are assignments A_Y, A_Z with A_Y ∩ Y′ = ∅ and φ(A_X, A_Y, A_Z) = 1. Indeed, to define A_X, consider the solution S_1. By the SSP property, since S_1 ∈ 𝒮(I_Π), the set f_φ^{-1}(S_1) ⊆ 𝒰(φ) is a set of literals which satisfies φ. We let A_X be the restriction of that satisfying assignment to the variables in X. More formally, we let A_X := f_φ^{-1}(S_1) ∩ L_X.

We claim that this assignment A_X proves that I_RAS is a Yes-instance. Indeed, let some set Y′ ⊆ Y with |Y′| ≤ Γ be given. We define the blocker to be B′ := f_φ(Y′). Observe that B′ ⊆ f_φ(Y) = B, and |B′| ≤ Γ′. Therefore there exists S_2 ∈ 𝒮(I_Π) such that S_2 ∩ B′ = ∅ and dist(S_1, S_2) ≤ κ. We can again interpret the solution S_2 as an assignment A_2 := f_φ^{-1}(S_2). By the SSP property, this assignment satisfies φ, that is A_2 ∈ 𝒮(φ). Since dist(S_1, S_2) ≤ κ = β_φ, we have S_1 ∩ f_φ(L_X) = S_2 ∩ f_φ(L_X) by the property of a blow-up reduction. This in turn implies A_2 ∩ (X ∪ X̄) = A_X. Finally, we have ∅ = f_φ^{-1}(S_2 ∩ B′) = f_φ^{-1}(S_2) ∩ f_φ^{-1}(B′) = A_2 ∩ Y′. This shows that I_RAS is a Yes-instance, and concludes the proof.

Polynomial Time.

The instance I_RRΠ = (I_Π, B, Γ′, κ) can indeed be constructed in polynomial time. The nominal instance I_Π can be computed in polynomial time, since the map g in the SSP reduction can be computed in polynomial time. The set B can be computed in polynomial time, since the map f_φ in the SSP reduction can be computed in polynomial time. The number κ = β_φ can be computed in polynomial time by definition of a blow-up SSP reduction. The number Γ′ = Γ remains the same.

We have shown that the combinatorial version is indeed Σ3p-hard as long as the nominal SSP problem Π is SSP-NP-hard with a blow-up SSP reduction from Satisfiability. This, however, does not directly imply that the linear optimization version RR-Π is also Σ3p-hard. We remedy this problem with a short reduction that simulates the set of blockable elements in Comb. RR-Π with the cost functions in RR-Π, which results in the following theorem.

Theorem 10 (⋆).

For all LOP problems Π ∈ NP with the property that the SSP problem derived from it is blow-up SSP reducible from 3Sat, the recoverable robust version RR-Π is Σ3p-complete.

5.1 Blow-Up SSP Reductions for Various Problems

We want to apply this framework to several problems. In order to convey the intuition of how a blow-up SSP reduction works, we only present the reduction from 3-satisfiability to vertex cover and defer the other reductions to the full version.

Recall the definitions of vertex cover and 3-satisfiability as given in Section 3. Now, we can use a modification of the reduction by Garey and Johnson [13] from 3Sat to Vertex Cover as a blow-up SSP reduction. We first describe the original reduction and then argue why it is an SSP reduction. At last, we show how to modify the reduction into a blow-up SSP reduction with a blow-up factor β_I.

Let (L, C) be the 3Sat instance with literals L and clauses C. We construct the corresponding Vertex Cover instance (G, k) with graph G = (V, E) and integer k as follows. The reduction maps each literal ℓ to a vertex v_ℓ. Then the vertices v_ℓ and v_ℓ̄ of a literal ℓ and its negation ℓ̄ are connected by the edge {v_ℓ, v_ℓ̄}. Next, for all clauses c ∈ C there is a 3-clique on the three vertices {v_ℓ^c : ℓ ∈ c}. All the edges connecting the vertices in the 3-clique to their corresponding literal vertex are added, i.e. the edge set {{v_ℓ^c, v_ℓ} : c ∈ C, ℓ ∈ c}. The threshold of the vertex cover instance is set to k = |L|/2 + 2|C|.

We now describe the blow-up gadget for every possible 3Sat instance I, blow-up literal set L_b, and number β_I. The blow-up gadget for ℓ, ℓ̄ ∈ L_b is a duplication of the literal vertices v_ℓ and v_ℓ̄ forming a complete bipartite graph K_{β_I+1, β_I+1} between the vertex sets V_ℓ^{β_I} = {v_ℓ, v_ℓ^1, …, v_ℓ^{β_I}} and V_ℓ̄^{β_I} = {v_ℓ̄, v_ℓ̄^1, …, v_ℓ̄^{β_I}}, as depicted in Figure 1. Then, we have the equivalence classes Q_ℓ = {v_ℓ^1, …, v_ℓ^{β_I}} and Q_ℓ̄ = {v_ℓ̄^1, …, v_ℓ̄^{β_I}}. Now, consider the graph G as described above and modify it in the following way: For each pair ℓ, ℓ̄ ∈ L_b, we identify the two vertices v_ℓ, v_ℓ̄ in G and in the blow-up gadget for ℓ, ℓ̄ and merge them together. This results in the graph

G(L_b, β_I) = (V(G) ∪ ⋃_{ℓ ∈ L_b} V_ℓ^{β_I}, E(G) ∪ ⋃_{ℓ ∈ L_b} {{a, b} : a ∈ V_ℓ^{β_I}, b ∈ V_ℓ̄^{β_I}})

An example of such a graph G(L_b, β_I) can be found in Figure 1.

The threshold for the vertex cover is then increased such that β_I + 1 vertices (instead of only one) for every blow-up literal pair can be taken into the solution, i.e. k(β_I) = (β_I + 1) · |L_b|/2 + |L \ L_b|/2 + 2|C|.
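For concreteness, the following sketch (our own; literals are represented as strings and all names are ours) builds the edge set of G(L_b, β_I) and the threshold k(β_I) from a 3Sat instance, following the construction above:

```python
from itertools import combinations

def neg(lit):
    """Negate a literal given as a string: "x1" <-> "!x1"."""
    return lit[1:] if lit.startswith("!") else "!" + lit

def blow_up_vertex_cover(variables, clauses, L_b, beta):
    """Build the edge set of G(L_b, beta) and the threshold k(beta) from a 3Sat instance.

    variables: positive variable names; clauses: 3-tuples of literal strings;
    L_b: a set of literals closed under negation; beta: the blow-up factor.
    Vertices are tuples; edges are stored as ordered pairs for simplicity."""
    edges = set()
    for x in variables:                                     # literal edges {v_l, v_lbar}
        edges.add((("lit", x), ("lit", neg(x))))
    for j, clause in enumerate(clauses):                    # clause triangles + connections
        for l1, l2 in combinations(clause, 2):
            edges.add((("cl", j, l1), ("cl", j, l2)))
        for l in clause:
            edges.add((("cl", j, l), ("lit", l)))
    for x in {l for l in L_b if not l.startswith("!")}:     # blow-up gadgets K_{beta+1,beta+1}
        side_pos = [("lit", x)] + [("copy", x, i) for i in range(beta)]
        side_neg = [("lit", neg(x))] + [("copy", neg(x), i) for i in range(beta)]
        for a in side_pos:
            for b in side_neg:
                edges.add((a, b))
    threshold = (beta + 1) * len(L_b) // 2 + (2 * len(variables) - len(L_b)) // 2 + 2 * len(clauses)
    return edges, threshold

edges, k = blow_up_vertex_cover(["x1", "x2", "x3"], [("!x1", "!x2", "x3")],
                                L_b={"x3", "!x3"}, beta=2)
print(len(edges), k)   # prints: 17 7
```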

Figure 1: The graph G(L_b, β_I) with L_b = {ℓ_3, ℓ̄_3} and β_I = 2 of the instance φ = (ℓ̄_1 ∨ ℓ̄_2 ∨ ℓ_3). The blow-up gadget of the literals ℓ_3 and ℓ̄_3, consisting of the vertex sets V_{ℓ_3}^2 ∪ V_{ℓ̄_3}^2, is outlined with dashed lines.
Claim 11.

For the three choices of dist from Section 4.1 and β_I := |V(G)|, G(L_b, β_I) and k(β_I) describe a blow-up SSP reduction from 3Sat to Vertex Cover.

Proof.

We show that g, (f_I)_{I ∈ ℐ}, and (β_I)_{I ∈ ℐ} defined by

g(I) = (G(L_b, β_I), k(β_I)), f(ℓ) = v_ℓ, and β_I = |V(G)|

describe a blow-up SSP reduction from 3Sat to Π. We start by showing that the reduction is still SSP.

For this, we first make the observation that for a complete bipartite graph K_{β_I+1, β_I+1} between the vertex sets V_ℓ^{β_I} and V_ℓ̄^{β_I}, there are exactly two vertex covers of size β_I + 1: either one takes all v_ℓ^i together with v_ℓ, or all v_ℓ̄^i together with v_ℓ̄ into the vertex cover. Otherwise, there is at least one edge {v, w} with v ∈ V_ℓ^{β_I} and w ∈ V_ℓ̄^{β_I} not covered.

With this observation, it follows that every vertex cover solution still consists of

  1. exactly one of the original vertices v_ℓ and v_ℓ̄ for each pair ℓ, ℓ̄ ∈ L, and

  2. the β_I additional vertices {v_ℓ^i : 1 ≤ i ≤ β_I} of the blow-up gadget if and only if v_ℓ with ℓ ∈ L_b is in the vertex cover solution, and

  3. exactly two of the three vertices v_{ℓ_1}^{c_j}, v_{ℓ_2}^{c_j}, v_{ℓ_3}^{c_j} for each clause c_j ∈ C.

Thus, the original injective SSP mapping f(ℓ) = v_ℓ is still a valid SSP mapping for this reduction.

To show that the blow-up property holds, we have to show that for β_I := |V(G)| and for all solutions S_1, S_2 ∈ 𝒮(g(I)) of g(I),

f(L_b) ∩ S_1 = f(L_b) ∩ S_2 ⟺ dist(S_1, S_2) ≤ β_I. (3)

We begin with analyzing the maximum distances of two solutions S_1, S_2 ∈ 𝒮(g(I)) for the three distance measures if S_1 and S_2 agree on each blow-up gadget. To reach the maximum distance between two solutions S_1 and S_2 for the three distance measures, S_1 includes all the v_ℓ for all ℓ ∈ L \ L_b, while S_2 includes all the opposite v_ℓ̄. Additionally, S_1 includes a different pair v_{ℓ_1}^{c_j}, v_{ℓ_2}^{c_j} for all c_j ∈ C in comparison to S_2. Thus, all v_ℓ̄ with ℓ̄ ∈ L \ L_b and v_{ℓ_3}^{c_j} for all c_j ∈ C have to be added (while deleting v_ℓ and one of v_{ℓ_1}^{c_j}, v_{ℓ_2}^{c_j}). In conclusion, β_I = |V(G)| is an upper bound for the maximal possible κ-addition and κ-deletion distance (|C| + |L \ L_b|/2) as well as for the maximal possible Hamming distance (here one has to count all deletions and additions, such that β_I = |V(G)| ≥ 2|C| + |L \ L_b| ≥ |S_1 Δ S_2| is also sufficient). It follows that f(L_b) ∩ S_1 = f(L_b) ∩ S_2 ⟹ dist(S_1, S_2) ≤ β_I.

We conclude with the situation that S_1 and S_2 do not agree on each blow-up gadget. Then, corresponding to the analysis above, for the κ-addition and κ-deletion distance we reach at least a distance of |V(G)| + 1 > β_I, and for the Hamming distance we reach at least a distance of 2|V(G)| + 2 > β_I. It follows that f(L_b) ∩ S_1 ≠ f(L_b) ∩ S_2 ⟹ dist(S_1, S_2) > β_I.

Clearly β_I, and thus G(L_b, β_I) and k(β_I), are polynomial-time computable. This completes the correctness proof of the reduction.

6 The Issue of Transitivity: Preserving the Blow-up

Since blow-up SSP reductions require more structure, we unfortunately lose transitivity in comparison to normal SSP reductions. However, we would like to reuse an existing blow-up SSP reduction such that we do not need to start every reduction at 3Sat and additionally construct blow-up gadgets. For this, we introduce blow-up preserving SSP reductions, and show that they are transitive and preserve the structure of the blow-up gadgets. The idea is that we only need to show that there is a blow-up SSP reduction for the first reduction in the reduction chain and then the reduction chain can be prolonged by adding further problems, between which there are blow-up preserving SSP reductions.

The idea of a blow-up preserving reduction between a problem Π and a problem Π′ is that we partition the universe of Π′ into three sets: the set of elements which originate in the problem Π, a set of elements U_off which are never part of a solution of the instance of Π′, and a set of elements U_on which are part of every solution of the instance of Π′. Therefore, every solution S in Π has a corresponding solution S′ = f(S) ∪ U_on obtained by applying the injective function f. Since the distance measures that we consider are invariant on injective functions and union, the distance between solutions in problem Π′ is the same as in problem Π. Correspondingly, the blow-up SSP reduction from 3Sat to Π can be extended to Π′.

From a different point of view, a blow-up preserving SSP reduction can be understood as an SSP reduction which “adds” only two kinds of new elements to the universe: those that are trivially contained in every (optimal) solution, and those that are never contained in an (optimal) solution. It is intuitively easy to see that, compared to the old instance, the newly “added” elements cannot influence the term dist(S_1, S_2). Hence the Σ3p-hardness of recoverable robust problems is maintained.

Definition 12 (Blow-Up Preserving SSP Reduction).

Let Π = (ℐ, 𝒰, 𝒮) and Π′ = (ℐ′, 𝒰′, 𝒮′) be two SSP problems. Then, there is a blow-up preserving SSP reduction from Π to Π′ if there exists an SSP reduction (g, (f_I)_{I ∈ ℐ}) such that for all instances I ∈ ℐ the following holds: For all elements u ∈ 𝒰′(g(I)), either u ∈ f_I(𝒰(I)), or u ∉ S′ for all S′ ∈ 𝒮′(g(I)) (i.e. u ∈ U_off), or u ∈ S′ for all S′ ∈ 𝒮′(g(I)) (i.e. u ∈ U_on).
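The partition condition of Definition 12 is easy to check mechanically on explicit instances: every universe element of g(I) must lie in the image of f_I, or in no solution (U_off), or in every solution (U_on). A small sketch (ours, with toy data):

```python
def is_blow_up_preserving(universe_gI, solutions_gI, f_image):
    """Check the partition condition of a blow-up preserving SSP reduction."""
    for u in universe_gI:
        in_image = u in f_image
        in_none = all(u not in S for S in solutions_gI)     # u belongs to U_off
        in_all = all(u in S for S in solutions_gI)          # u belongs to U_on
        if not (in_image or in_none or in_all):
            return False
    return True

# Toy data: "root" is in every solution, "dummy" in none, the rest are embedded elements.
sols = [frozenset({"root", "v_x"}), frozenset({"root", "v_y"})]
print(is_blow_up_preserving({"root", "dummy", "v_x", "v_y"}, sols, f_image={"v_x", "v_y"}))  # True
```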

With this definition of a blow-up preserving SSP reduction, we are able to derive the following important properties.

Theorem 13 (⋆).

Let Π be an SSP-NP-complete problem for which a blow-up SSP reduction from 3Sat exists. Then every problem Π′ which is blow-up preserving SSP reducible from Π is blow-up SSP reducible from 3Sat.

Lemma 14 (⋆).

Blow-up preserving SSP reductions are transitive.

6.1 Blow-up Gadgets for various Problems

In the full version of the paper, we show that for many classical problems, blow-up or blow-up preserving reductions exist. An overview of all the reductions is provided in Figure 2. As a corollary, we obtain Theorem 15.

Theorem 15 (⋆).

The recoverable robust version of the following nominal problems is Σ3p-complete: Sat, 3Sat, Vertex Cover, Dominating Set, Set Cover, Hitting Set, Feedback Vertex/Arc Set, Uncap. Facility Location, p-Center, p-Median, Independent Set, Clique, Subset Sum, Knapsack, Partition, Scheduling, (Un)Directed Hamiltonian Path/Cycle, TSP, k-Vertex Directed Disjoint Path, Steiner Tree

Figure 2: The tree of SSP reductions for all considered problems. Solid edges indicate blow-up SSP reductions and dotted edges indicate blow-up preserving SSP reductions.

7 Conclusion

We have shown that for a large class of NP-complete problems their corresponding recoverable robust problem is Σ3p-complete for several different distance measures. For this, we have introduced two new types of reductions: blow-up SSP reductions and blow-up preserving SSP reductions. Blow-up SSP reductions are the basis to show Σ3p-completeness of a recoverable robust problem by a reduction from 3Sat, and further problems can be appended to the reduction chain by using the transitive blow-up preserving SSP reductions. In particular, we are able to show that 24 recoverable robust problems are Σ3p-complete, and the framework can be applied to further problems.

These results and the framework might be extendible in several different directions. First, the natural question arises whether these results can also be applied to problems that are in NP but not NP-complete, because we are only able to use this result for NP-complete problems. The results by Jackiewicz, Kasperski and Zieliński [23] indicate that this might indeed be the case. Secondly, we have only shown that these results hold for subset search problems. Partition problems or permutation problems are not directly covered by this result. Thus, it is of interest to classify the complexity of these problems as well. Furthermore, it is of interest to apply the framework to more subset search problems, i.e. to show that more problems are blow-up (preserving) SSP reducible from the examined problems.

References

  • [1] Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust Optimization, volume 28 of Princeton Series in Applied Mathematics. Princeton University Press, 2009. doi:10.1515/9781400831050.
  • [2] Dimitris Bertsimas and Melvyn Sim. Robust discrete optimization and network flows. Math. Program., 98(1-3):49–71, 2003. doi:10.1007/S10107-003-0396-4.
  • [3] Matthew Bold and Marc Goerigk. Investigating the recoverable robust single machine scheduling problem under interval uncertainty. Discret. Appl. Math., 313:99–114, 2022. doi:10.1016/j.dam.2022.02.005.
  • [4] Christina Büsing. Recoverable robustness in combinatorial optimization. Cuvillier Verlag, 2011.
  • [5] Christina Büsing. Recoverable robust shortest path problems. Networks, 59(1):181–189, 2012. doi:10.1002/net.20487.
  • [6] Christina Büsing, Sebastian Goderbauer, Arie M. C. A. Koster, and Manuel Kutschka. Formulations and algorithms for the recoverable Γ-robust knapsack problem. EURO J. Comput. Optim., 7(1):15–45, 2019. doi:10.1007/s13675-018-0107-9.
  • [7] Christina Büsing, Arie M. C. A. Koster, and Manuel Kutschka. Recoverable robust knapsacks: Γ-scenarios. In Julia Pahl, Torsten Reiners, and Stefan Voß, editors, Network Optimization - 5th International Conference, INOC 2011, Hamburg, Germany, June 13-16, 2011. Proceedings, volume 6701 of Lecture Notes in Computer Science, pages 583–588. Springer, 2011. doi:10.1007/978-3-642-21527-8_65.
  • [8] Christina Büsing, Arie M. C. A. Koster, and Manuel Kutschka. Recoverable robust knapsacks: the discrete scenario case. Optim. Lett., 5(3):379–392, 2011. doi:10.1007/s11590-011-0307-1.
  • [9] André B. Chassein and Marc Goerigk. On the recoverable robust traveling salesman problem. Optim. Lett., 10(7):1479–1492, 2016. doi:10.1007/s11590-015-0949-5.
  • [10] André B. Chassein, Marc Goerigk, Adam Kasperski, and Pawel Zielinski. On recoverable and two-stage robust selection problems with budgeted uncertainty. Eur. J. Oper. Res., 265(2):423–436, 2018. doi:10.1016/j.ejor.2017.08.013.
  • [11] Mitre Costa Dourado, Dirk Meierling, Lucia Draque Penso, Dieter Rautenbach, Fábio Protti, and Aline Ribeiro de Almeida. Robust recoverable perfect matchings. Networks, 66(3):210–213, 2015. doi:10.1002/net.21624.
  • [12] Dennis Fischer, Tim A. Hartmann, Stefan Lendl, and Gerhard J. Woeginger. An investigation of the recoverable robust assignment problem. In Petr A. Golovach and Meirav Zehavi, editors, 16th International Symposium on Parameterized and Exact Computation, IPEC 2021, September 8-10, 2021, Lisbon, Portugal, volume 214 of LIPIcs, pages 19:1–19:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.IPEC.2021.19.
  • [13] M. R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
  • [14] Marc Goerigk and Michael Hartisch. An Introduction to Robust Combinatorial Optimization, volume 361 of International Series in Operations Research & Management Science. Springer, 2024.
  • [15] Marc Goerigk, Stefan Lendl, and Lasse Wulf. Recoverable robust representatives selection problems with discrete budgeted uncertainty. Eur. J. Oper. Res., 303(2):567–580, 2022. doi:10.1016/j.ejor.2022.03.001.
  • [16] Marc Goerigk, Stefan Lendl, and Lasse Wulf. On the complexity of robust multi-stage problems with discrete recourse. Discret. Appl. Math., 343:355–370, 2024. doi:10.1016/J.DAM.2023.10.018.
  • [17] Christoph Grüne. The complexity classes of hamming distance recoverable robust problems. In José A. Soto and Andreas Wiese, editors, LATIN 2024: Theoretical Informatics - 16th Latin American Symposium, Puerto Varas, Chile, March 18-22, 2024, Proceedings, Part I, volume 14578 of Lecture Notes in Computer Science, pages 321–335. Springer, 2024. doi:10.1007/978-3-031-55598-5_21.
  • [18] Christoph Grüne and Lasse Wulf. Completeness in the polynomial hierarchy for many natural problems in bilevel and robust optimization. CoRR, abs/2311.10540, 2023. doi:10.48550/arXiv.2311.10540.
  • [19] Christoph Grüne and Lasse Wulf. On the complexity of recoverable robust optimization in the polynomial hierarchy. arXiv preprint arXiv:2411.18590, 2024. doi:10.48550/arXiv.2411.18590.
  • [20] Felix Hommelsheim, Nicole Megow, Komal Muluk, and Britta Peis. Recoverable robust optimization with commitment. CoRR, abs/2306.08546, 2023. doi:10.48550/arXiv.2306.08546.
  • [21] Mikita Hradovich, Adam Kasperski, and Pawel Zielinski. Recoverable robust spanning tree problem under interval uncertainty representations. J. Comb. Optim., 34(2):554–573, 2017. doi:10.1007/s10878-016-0089-6.
  • [22] Mikita Hradovich, Adam Kasperski, and Pawel Zielinski. The recoverable robust spanning tree problem with interval costs is polynomially solvable. Optim. Lett., 11(1):17–30, 2017. doi:10.1007/S11590-016-1057-X.
  • [23] Marcel Jackiewicz, Adam Kasperski, and Pawel Zielinski. Computational complexity of the recoverable robust shortest path problem with discrete recourse. CoRR, abs/2403.20000, 2024. doi:10.48550/arXiv.2403.20000.
  • [24] Marcel Jackiewicz, Adam Kasperski, and Pawel Zielinski. Recoverable robust shortest path problem under interval uncertainty representations. CoRR, abs/2401.05715, 2024. doi:10.48550/arXiv.2401.05715.
  • [25] Robert G. Jeroslow. The polynomial hierarchy and a simple model for competitive analysis. Math. Program., 32(2):146–164, 1985. doi:10.1007/BF01586088.
  • [26] Berit Johannes. New classes of complete problems for the second level of the polynomial hierarchy, 2011.
  • [27] Adam Kasperski and Pawel Zielinski. Robust recoverable and two-stage selection problems. Discret. Appl. Math., 233:52–64, 2017. doi:10.1016/j.dam.2017.08.014.
  • [28] Panos Kouvelis and Gang Yu. Robust discrete optimization and its applications, volume 14. Springer Science & Business Media, 2013.
  • [29] Thomas Lachmann, Stefan Lendl, and Gerhard J. Woeginger. A linear time algorithm for the robust recoverable selection problem. Discret. Appl. Math., 303:94–107, 2021. doi:10.1016/j.dam.2020.08.012.
  • [30] Stefan Lendl, Britta Peis, and Veerle Timmermans. Matroid bases with cardinality constraints on the intersection. Math. Program., 194(1):661–684, 2022. doi:10.1007/S10107-021-01642-1.
  • [31] Christian Liebchen, Marco E. Lübbecke, Rolf H. Möhring, and Sebastian Stiller. The concept of recoverable robustness, linear programming recovery, and railway applications. In Ravindra K. Ahuja, Rolf H. Möhring, and Christos D. Zaroliagis, editors, Robust and Online Large-Scale Optimization: Models and Techniques for Transportation Systems, volume 5868 of Lecture Notes in Computer Science, pages 1–27. Springer, 2009. doi:10.1007/978-3-642-05465-5_1.
  • [32] Christos H. Papadimitriou. Computational complexity. Addison-Wesley, 1994.
  • [33] Larry J. Stockmeyer. The polynomial-time hierarchy. Theor. Comput. Sci., 3(1):1–22, 1976. doi:10.1016/0304-3975(76)90061-X.
  • [34] Gerhard J. Woeginger. The trouble with the second quantifier. 4OR, 19(2):157–181, 2021. doi:10.1007/s10288-021-00477-y.