
The Complexity of Computing Second Solutions

Fabian Egidy, University of Würzburg, Germany; Christian Glaßer, University of Würzburg, Germany; Fynn Godau, University of Würzburg, Germany
Abstract

We study the complexity of computing second solutions for NP search problems, i. e., given a problem instance x and a valid solution y, we have to find another valid solution y′ ≠ y.

Our main result shows that for typical NP decision problems, the complexity of computing second solutions is completely determined by the choice of the type of solution (i. e., the specific function problem), but independent of the underlying decision problem. More precisely, we show that for every X ∈ NP that is 1-paddable (a weak form of paddability), different choices of the type of solution lead to different second solution problems, which altogether have the same degree structure as the entire class of NP search problems (FNP). In fact, each degree of difficulty within FNP does occur as a second solution problem for X. This proves that typical NP decision problems have no intrinsic complexity w. r. t. the search for a second solution, but only the specification of the type of solution determines this complexity. This explains the empirical observation that the difficulty of computing second solutions strongly depends on the formulation of the problem.

Moreover, we show that the complexities of a search problem and its second solution variant are independent in the following sense: For all search problems A and B representing two degrees of difficulty, there exists a search problem C such that

  1. C is as difficult as A and

  2. computing second solutions for C is as difficult as B.

Keywords and phrases:
function problems, another solution problem, Turing machines
Funding:
Fabian Egidy: supported by the German Academic Scholarship Foundation.
Copyright and License:
© Fabian Egidy, Christian Glaßer, and Fynn Godau; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Complexity classes; Theory of computation → Turing machines
Editors:
Paweł Gawrychowski, Filip Mazowiecki, and Michał Skrzypczak

1 Introduction

1.1 Motivation

We study the complexity of computing a second solution for an NP search problem when we are given a first solution. NP search problems (or search problems for short) are a fundamental concept in complexity theory, first studied as multivalued functions by Selman [13, 14, 15] and further developed by Fenner et al. [6, 5] and Hemaspaandra et al. [10]. Particularly intensive research has been done on the class of total search problems TFNP [12], where each input is guaranteed to have at least one solution (see [7] for an overview). While the standard setting of search problems asks to compute a first solution, we focus on computing a second solution. This means we are given a problem instance x together with a valid solution y, and have to find another valid solution y′ ≠ y. In particular, we are interested in the extent to which a first solution helps compute a second one.

On the one hand, our investigation is motivated by use cases where users ask for multiple options to choose from (e. g., routes or schedules), or for new solutions different from the previous ones (e. g., generation of exam tasks). On the other hand, the following consideration shows that our investigation is also interesting from a theoretical perspective: It is easy to devise examples where finding an additional solution is simple (because it can be easily derived from a first solution, e. g., reversing the direction of a Hamiltonian cycle). In contrast, for other examples, finding the first few solutions is trivial, but finding additional solutions is hard (e. g., when searching for divisors). This observation prompts us to systematically investigate the relationship between first and second solutions from a complexity theory perspective.

1.2 Previous Work

In the past, questions about the complexity of finding alternative solutions were mostly raised in custom-tailored form for concrete problems, like the “Another Hamiltonian Cycle” problem (see [3] for an overview). Ueda and Nagao [16] consider several specific problems of this shape and employ the general problem scheme “alternative solution for X” based on an intuitive understanding of the base problem X. They investigate the resulting problem for several such problems X (e. g., 3-Dimensional matching and Nonogram).

In our paper, we use the following generic and abstract definition of finding an n-th solution, which was proposed by Yato and Seta [17] and which is applicable to any search problem: given the solutions y_1, …, y_{n−1} for the input instance x, we have to find a new n-th solution y_n (if such a solution exists). This definition yields, for any search problem A, the unique corresponding second solution problem A[1].

1.3 Our Contribution

We analyze how the formulation of a problem influences the complexity of computing second solutions. Our main result in Section 4 shows that this complexity is completely determined by the choice of the type of solution, but independent of the underlying decision problem.

Conclusion 1.

The complexities of a search problem and its second solution variant are independent.

In Corollary 4.11, we show that for any nonempty search problems A and B (representing two arbitrary degrees of difficulty), there exists a search problem C such that:

  1. C is as difficult as A.

  2. C[1] is as difficult as B.

This shows that the complexity of C is independent of the complexity of its second solution problem C[1]. Here, “independent” really is to be understood in a mathematical sense: the complexity of one problem variant provides no information about the complexity of the other. For example, there exists a search problem C just as hard as factorization such that its second solution problem is just as hard as graph isomorphism.

Conclusion 2.

For every 1-paddable X ∈ NP, the corresponding second solution problems have the same degree structure as the entire class FNP. In fact, all possible degrees of difficulty do occur as second solution problems for X.

Conclusion 2 uses the additional assumption of 1-paddability, which is weaker than paddability in the sense of Berman-Hartmanis [2]; hence, natural problems in NP are 1-paddable. The conclusion is based on the following arguments. By Corollary 4.8, for every 1-paddable X ∈ NP and every nonempty search problem B, there exists a search problem C such that:

  1. dom(C) = X.

  2. C[1] is as difficult as B.

Hence, C solves the decision problem X, while its second solution problem C[1] has the same complexity as B. Phrased in terms of machines (cf. Remark 4.7), we construct an NP machine for X such that finding a second accepting path has the same complexity as B. Since any such choice of complexity is possible, we recognize that each 1-paddable X ∈ NP has maximum freedom for the complexity of the second solution problem. As a consequence, the choice of the type of solution has the greatest possible influence on the difficulty of the second solution problem. For example, for the satisfiability problem SAT, there exist search problems C and D both solving SAT such that the second solution problem of C is just as hard as factorization, while that of D is solvable in polynomial time. This demonstrates that NP decision problems have no intrinsic complexity w. r. t. the search for a second solution. Rather, the specification of the type of solution (that is, the corresponding search problem) determines the complexity of computing a second solution, where all levels of complexity are possible. Note that, given P ≠ NP, the analogous statement for first solution problems does not hold: under this assumption, there does not exist a search problem D such that dom(D) = SAT and D is solvable in polynomial time.

Corollary 4.12 restates this result in terms of degree structures, as follows. For every 1-paddable X ∈ NP, all of the following classes share the same degree structure:

  • FNP

  • {A[1] ∣ A ∈ FNP}

  • {A[1] ∣ A ∈ FNP and dom(A) = X}

This shows that the entire variety of degrees of difficulty in FNP reappears in second solution problems; this still applies even if we restrict to any single 1-paddable X ∈ NP and its corresponding second solution problems.

In our proofs, we use artificially defined search problems to achieve the desired degrees of difficulty. Therefore, we do not obtain statements about specific problems. We rather see our results as an objective description and systematic fine-grained analysis of the empirical observation that the difficulty of computing second solutions strongly depends on the formulation of the problem, more precisely the chosen type of solution.

2 Preliminaries

2.1 Basic Definitions

We choose Turing machines (as commonly defined) as the computation model for our algorithms, usually working over the fixed alphabet Σ ≔ {0, 1}. Let ran(f) denote the range of a function f.

Definition 2.1 (inverse of a polynomial-time computable function).

Let f : Σ* → Σ* be a polynomial-time computable, injective, partial function. Then f⁻¹ : Σ* → Σ* ∪ {⊥} is defined as follows:

f⁻¹(y) ≔ x   if f(x) = y
f⁻¹(y) ≔ ⊥   if y ∉ ran(f)

If f⁻¹ is polynomial-time computable, we call f polynomial-time invertible.

In the following, when defining a deterministic polynomial-time algorithm by means of a function definition, we can informally use “⊥” as a function value instead of “n. d.”, which other algorithms can use to continue their computations. For this, we define x⊥ ≔ ⊥ for all x ∈ Σ* as a compact notation in our formulas.

Definition 2.2.

Each A ⊆ Σ* × Σ* is a (partial) multivalued function. We denote its set of function values as A(x) ≔ {y ∣ (x, y) ∈ A}.

Definition 2.3 (refinement, [14]).

Let f ⊆ Σ* × Σ* be a multivalued function. Then, f′ : Σ* → Σ* ∪ {⊥} is a (single-valued) refinement of f if and only if the following holds for all x ∈ Σ*:

  • f′ is total

  • f(x) = ∅ ⟺ f′(x) = ⊥

  • f′(x) = y ≠ ⊥ ⟹ y ∈ f(x)

Each multivalued function is a function problem that poses the task of algorithmically computing any of its function values for a given input. From any function problem, we can always derive a decision problem that poses the question whether an instance has at least one function value:

Definition 2.4.

Let f be a multivalued function. Then, dom(f) ≔ {x ∣ ∃y : y ∈ f(x)}.

Definition 2.5.

We define FNP as follows:

f ∈ FNP :⟺ f ∈ P and there exists a polynomial p such that ∀(x, y) ∈ f : |y| ≤ p(|x|)

Hence, f belongs to FNP if and only if its graph belongs to P and the length of its function values is polynomially bounded. Each f ∈ FNP, which we refer to as a search problem, uniquely defines its corresponding decision problem dom(f) ∈ NP. The class FNP is sometimes also referred to as NPMV_g.

Definition 2.6.

TFNP ≔ {f ∈ FNP ∣ dom(f) = Σ*}.

Definition 2.7 (polynomial-time many-one reduction).

Let A, B ⊆ Σ*. We define:

  • A ≤^p_m B :⟺ there exists a total, polynomial-time computable f : Σ* → Σ* such that
    ∀x ∈ Σ* : x ∈ A ⟺ f(x) ∈ B

  • A ≡^p_m B :⟺ A ≤^p_m B and B ≤^p_m A

Note that from B = Σ* and A ≤^p_m B, it follows that A = Σ*. For function problems, a different kind of reduction is needed: intuitively, a function problem f reduces to g if an efficient algorithm for g would allow us to solve f efficiently as well. This is captured by the following notion, similar to a metric reduction [11]: on input x, a polynomial-time algorithm must find a value z ∈ f(x); we allow it one oracle query y for a value z′ ∈ g(y). The notation M^O shall express which oracle O an oracle algorithm M can access.

Definition 2.8 (metric reduction for function problems, [11]).

Let f and g be function problems.

  • f ≤^p_{1-T} g :⟺ there exists a deterministic polynomial-time oracle algorithm M s. t.
           M asks at most one oracle query and
           for each refinement g′ of g, M^{g′} computes a refinement of f

  • f ≡^p_{1-T} g :⟺ f ≤^p_{1-T} g and g ≤^p_{1-T} f

Note that f ≤^p_{1-T} g implies f ≤^p_T g, where ≤^p_T denotes the polynomial-time Turing reduction with an unbounded number of queries. Therefore, our results also hold for ≤^p_T instead of ≤^p_{1-T}. In our proofs, we will show ≤^p_{1-T} by defining a function or an oracle algorithm for which it is obvious that it can be computed in polynomial time and behaves correctly for each matching oracle. For this, we will informally use g(x) as a return value of an arbitrary refinement g′(x), i. e., as an answer to the oracle query x.
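To fix intuition, the following minimal Python sketch shows the shape such a one-query reduction takes; a refinement of g is modeled as a callable returning a string or None (our stand-in for ⊥), and preprocess/postprocess are hypothetical placeholders, not part of the formal definition.

```python
from typing import Callable, Optional

# A refinement of a multivalued function is modeled as a callable that maps
# an input string to one of its values, or to None (our stand-in for "⊥").
Refinement = Callable[[str], Optional[str]]

def one_query_reduction(x: str, oracle_g: Refinement) -> Optional[str]:
    """Schematic shape of a ≤^p_{1-T} reduction from f to g: polynomial-time
    preprocessing, at most one oracle call, polynomial-time postprocessing."""
    query = preprocess(x)            # hypothetical poly-time step
    answer = oracle_g(query)         # the single allowed oracle query
    if answer is None:               # the oracle reports "no value"
        return None
    return postprocess(x, answer)    # hypothetical poly-time step

# Placeholders; a concrete reduction supplies these two poly-time functions.
def preprocess(x: str) -> str:
    return x

def postprocess(x: str, answer: str) -> str:
    return answer
```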

For any search problem A, we denote the class of equally difficult problems (by our reduction notion ≤^p_{1-T}) as its degree deg(A) ≔ {A′ ∣ A ≡^p_{1-T} A′}. Furthermore, we define the degree structure of a complexity class 𝒞 to be the set of all its degrees, i. e., {deg(A) ∣ A ∈ 𝒞}. We consider two complexity classes 𝒞 and 𝒟 to share a degree structure (w. r. t. ≤^p_{1-T}) if both classes contain at least one element of every degree that appears in either of the two classes, i. e., if {deg(A) ∣ A ∈ 𝒞} = {deg(A) ∣ A ∈ 𝒟} holds.

2.2 Formalism for Additional Solution Problems

We now define a formalism that describes the search for an additional solution. For this, we define the function problem where an instance of f together with n solutions is given as a list encoding, and the task is to find an (n+1)-th solution. For a given f and n, this search problem is uniquely defined. An input is only valid if it contains n valid and pairwise distinct solutions, and we require the answer to be valid and different from all solutions that are already part of the input. Invalid inputs shall not have function values. Let ⟨⋅⟩ denote an appropriate list encoding, and let ε denote the empty word.

Definition 2.9 (additional solution, [17]).

For f ∈ FNP and n ≥ 1, we define:

f[n] ≔ {(⟨x, y_1, …, y_n⟩, y_{n+1}) ∣ ∀i ∈ {1, …, n+1} : [(∀j < i : y_i ≠ y_j) ∧ (x, y_i) ∈ f]}

f[n]_t ≔ f[n] ∪ {(⟨z, y_1, …, y_n⟩, ε) ∣ ¬∀i ∈ {1, …, n} : [(∀j < i : y_i ≠ y_j) ∧ (z, y_i) ∈ f]}   (not an encoding of n valid solutions)
             ∪ {(x, ε) ∣ ¬∃z, y_1, …, y_n : x = ⟨z, y_1, …, y_n⟩}   (not a proper list encoding)

Compared to the definition of f[n], the definition of f[n]_t provides invalid inputs with one trivial and unique solution. Generally, it holds that f[n], f[n]_t ∈ FNP. Additionally, if the existence of n solutions in f implies the existence of an (n+1)-th solution, then f[n]_t ∈ TFNP.
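The following Python sketch mirrors Definition 2.9 purely for illustration; the list encoding ⟨⋅⟩ is abstracted as a Python sequence (improper encodings are not modeled), and the search problem f is represented by a hypothetical polynomial-time decider for its graph.

```python
from typing import Callable, Sequence

# A search problem f in FNP is represented by a decider for its graph:
# graph_f(x, y) is True iff (x, y) is in f.
Graph = Callable[[str, str], bool]

def lists_valid_solutions(graph_f: Graph, x: str, ys: Sequence[str]) -> bool:
    """Check that ys consists of pairwise distinct solutions of f on input x."""
    return len(set(ys)) == len(ys) and all(graph_f(x, y) for y in ys)

def in_f_n(graph_f: Graph, x: str, ys: Sequence[str], y_new: str) -> bool:
    """Membership test for f[n]: the n given solutions together with the
    candidate (n+1)-th solution must be valid and pairwise distinct."""
    return lists_valid_solutions(graph_f, x, list(ys) + [y_new])

def in_f_n_t(graph_f: Graph, x: str, ys: Sequence[str], y_new: str) -> bool:
    """Membership test for f[n]_t: invalid inputs get the unique trivial
    solution epsilon (the empty word); valid inputs behave as in f[n]."""
    if not lists_valid_solutions(graph_f, x, ys):
        return y_new == ""          # the empty word is the only solution
    return in_f_n(graph_f, x, ys, y_new)
```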

2.3 Function Problems as Search Problems in Nondeterministic Machines

Function problems are useful for describing the search for accepting paths in nondeterministic machines. Our motivation for drawing a parallel between the two concepts is that we will allow ourselves to treat function problems and nondeterministic machines as formally different, but intuitively synonymous. To justify this similarity, we define a corresponding function problem for each nondeterministic machine.

Definition 2.10.

Let M be a nondeterministic decision machine. Then, we define

f_M ≔ {(x, r) ∣ r is an accepting path of M(x)}.

For an NP machine M, it holds that f_M ∈ FNP; the function problem’s associated task is the search for accepting paths in this specific machine. Conversely, for each FNP problem f, we can immediately derive a nondeterministic machine that tests one of the possible function values of f(x) on each of its paths on input x.

Definition 2.11.

Let the solutions of f ∈ FNP be length-bounded by the polynomial p. Then, let M_f be the nondeterministic polynomial-time machine that decides dom(f) by guessing and verifying one y ∈ Σ* with |y| ≤ p(|x|) per execution path.

Note that, despite possibly f_{M_g} ≠ g, it holds that f_{M_g} ≡^p_{1-T} g.

3 1-paddability

3.1 Definition

In the proof of the main theorem, we will want to add a special behavior to many of the positive instances of one of the two sets that the construction works on. To know when we can safely apply our special behavior, we will need a subset of positive instances that are both easy to find and easy to recognize. Formally, we want an infinite subset, decidable in polynomial time, whose elements are indexed such that the bijection from index to element is easy to compute in both directions. We call sets that have such a subset 1-paddable. We will justify this choice of name shortly.

Definition 3.1 (1-paddability).

Let A ⊆ Σ*.

  • A is 1-paddable via p :⟺ p : Σ* → Σ* is total and injective, p and p⁻¹ are polynomial-time computable, and ran(p) ⊆ A

  • A is 1-paddable :⟺ there exists a function p such that A is 1-paddable via p

If A is 1-paddable via p, the desired infinite subset is ran(p); in other words, p(x) ∈ A holds for all x. The index of a given y ∈ ran(p) is determined by the polynomial-time computable term x = p⁻¹(y). From the index x, it is easy to return to y via y = p(x). Since the output of p⁻¹(x) tells us whether x ∈ ran(p), it holds that ran(p) ∈ P.
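As a concrete illustration of Definition 3.1, the following Python sketch gives the 1-padding function p(x) = 0x that witnesses 1-paddability of the set A ≔ Σ* ∖ {ε} treated in Proposition 3.5 below; None again plays the role of ⊥.

```python
from typing import Optional

def p(x: str) -> str:
    """1-padding function for A = Sigma* \\ {epsilon}: p(x) = 0x, so ran(p) = 0 Sigma*."""
    return "0" + x

def p_inv(y: str) -> Optional[str]:
    """Inverse of p; returns None (i.e., ⊥) for strings outside ran(p)."""
    return y[1:] if y.startswith("0") else None

def in_ran_p(y: str) -> bool:
    """ran(p) is in P: membership is read off from the output of p_inv."""
    return p_inv(y) is not None
```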

Let “A ≤^p_{1,i} B” denote that there exists a total, injective, polynomial-time computable and polynomial-time invertible function f such that x ∈ A ⟺ f(x) ∈ B. With this reduction notion, we obtain an alternative definition of 1-paddability:

Proposition 3.2.

A is 1-paddable ⟺ Σ* ≤^p_{1,i} A.

Proof.

The right-hand side asks for an f such that x ∈ Σ* ⟺ f(x) ∈ A holds. Since x ∈ Σ* is always true, the requirement can be rewritten as ran(f) ⊆ A. The rest follows immediately from the two definitions.

3.2 Properties

To avoid having to show 1-paddability individually for each set, we want to utilize known results for paddability, which is defined as follows.

Definition 3.3 (paddability [2, 4]).

Let A ⊆ Σ* be a set. A is paddable :⟺ there exists a total, injective function pad : Σ* × Σ* → Σ* such that pad and pad⁻¹ are polynomial-time computable and, for all x, y, it holds that x ∈ A ⟺ pad(x, y) ∈ A. Then pad is called a padding function for A.

Almost all discovered NP-complete problems are known to be paddable. We show all these problems to be 1-paddable as well.

Proposition 3.4.

For every A ≠ ∅: A paddable ⟹ A 1-paddable.

Proof.

Let pad be a padding function for A and let a ∈ A. We define p(x) ≔ pad(a, x). From a ∈ A and a ∈ A ⟺ pad(a, x) ∈ A (a property of the padding function), it follows that p(x) ∈ A for all x. Totality, injectivity, polynomial-time computability, and polynomial-time invertibility all transfer directly from pad. Hence, A is 1-paddable via p.
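A small Python sketch of this argument, under the assumption that we are handed a padding function pad for A, its inverse pad_inv (returning None outside ran(pad)), and a fixed element a ∈ A:

```python
from typing import Callable, Optional, Tuple

Pad = Callable[[str, str], str]                       # assumed padding function
PadInv = Callable[[str], Optional[Tuple[str, str]]]   # its assumed inverse

def one_padding_from_padding(pad: Pad, pad_inv: PadInv, a: str):
    """Build the 1-padding function p(x) = pad(a, x) and its inverse."""
    def p(x: str) -> str:
        return pad(a, x)

    def p_inv(y: str) -> Optional[str]:
        decoded = pad_inv(y)                 # None if y is outside ran(pad)
        if decoded is not None and decoded[0] == a:
            return decoded[1]                # recover x from pad(a, x)
        return None                          # y lies outside ran(p)

    return p, p_inv
```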

The proof implies an analogy between “paddability” and “1-paddability”, which justifies the choice of name for the latter:

paddability: every instance is paddable
1-paddability: at least one positive instance is paddable

However, the converse implication of Proposition 3.4 does not hold.

Proposition 3.5.

Some sets are 1-paddable, but not paddable.

Proof.

Let A ≔ Σ* ∖ {ε}. We show A to be 1-paddable, but not paddable.

  • Assume A to be paddable; then we know A̅ to be paddable as well (using the same padding function). Let A̅ be 1-paddable via p by virtue of Proposition 3.4. We know p to be injective, so p(0) ∈ A̅ and p(1) ∈ A̅ are distinct; a contradiction to A̅ = {ε}, which proves A not to be paddable.

  • We show A to be 1-paddable via p(x) ≔ 0x. Clearly, ran(p) = 0Σ*. The empty word ε is the only element outside of A; since, by definition, ε does not lie in ran(p), it holds that ran(p) ⊆ A. It follows that p is a 1-padding function for A.

The concept of 1-paddability as the existence of a very simple subset is interesting in itself. Analogously to [4, Theorem 7.15], one can show that 1-paddable sets are not sparse. Additionally, 1-paddability differs from P-printability as in [9, 8, 1], as one can show that no set is both 1-paddable and P-printable at the same time.

3.3 1-paddability in Every FNP Equivalence Class

Obviously, there are function problems f ∈ FNP such that dom(f) is not 1-paddable. However, we can demonstrate that each FNP degree contains at least one function problem f′ such that dom(f′) is 1-paddable. In other words, we can fix an arbitrary FNP problem f whose degree of difficulty this f′ will assume.

Proposition 3.6.

For every nonempty f ∈ FNP, there exists an f′ ∈ FNP such that f ≡^p_{1-T} f′, dom(f′) is 1-paddable, and dom(f) ≡^p_m dom(f′).

Proof.

Let f ∈ FNP. We define f′ via:

f′(0x) ≔ f(x)
f′(1x) ≔ {ε}
f′(ε) ≔ {ε}

Obviously, f′ ∈ FNP and dom(f′) is 1-paddable via p(x) ≔ 1x.

  • It holds that f ≤^p_{1-T} f′ via the oracle algorithm that, on input x, returns the answer to oracle query 0x.

  • It holds that f′ ≤^p_{1-T} f via the following oracle algorithm, where x_0 ⋯ x_n with x_0, …, x_n ∈ Σ is the decomposition of the input x into its letters:

    (a) If x_0 = 0, then return the answer to oracle query x_1 ⋯ x_n.

    (b) Otherwise, return ε.

  • It holds that dom(f) ≡^p_m dom(f′) via similar reduction functions.
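The construction and both one-query reductions from this proof can be sketched in Python as follows; purely for illustration, the multivalued function f is modeled as a callable returning its set of values, and oracles return a single value or None for ⊥.

```python
from typing import Callable, Optional, Set

MultiFun = Callable[[str], Set[str]]          # solution sets (illustration only)
Refinement = Callable[[str], Optional[str]]   # oracle: one value or None (⊥)

def f_prime(f: MultiFun) -> MultiFun:
    """The problem f' from Proposition 3.6; dom(f') is 1-paddable via p(x) = 1x."""
    def fp(x: str) -> Set[str]:
        if x == "":
            return {""}              # f'(epsilon) = {epsilon}
        if x[0] == "0":
            return f(x[1:])          # f'(0x) = f(x)
        return {""}                  # f'(1x) = {epsilon}
    return fp

def reduce_f_to_f_prime(x: str, oracle_fp: Refinement) -> Optional[str]:
    """f <= f' (one-query reduction): forward the input with prefix 0."""
    return oracle_fp("0" + x)

def reduce_f_prime_to_f(x: str, oracle_f: Refinement) -> Optional[str]:
    """f' <= f (one-query reduction): only inputs of the form 0x need the oracle."""
    if x != "" and x[0] == "0":
        return oracle_f(x[1:])       # case (a)
    return ""                        # case (b): epsilon solves epsilon and 1x
```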

4 Construction of a Function Problem with Chosen Difficulty for 2nd Solution

Based on the notions discussed so far, we will now work on a construction that takes an A ∈ FNP such that dom(A) is 1-paddable and any B ∈ FNP, and yields a new problem that is similar to A, but whose “second solution” problem assumes exactly the difficulty of B. We can view “difficulty” both in terms of the function problem itself as well as in terms of the corresponding decision problem. First, we work on an intermediary construction. Later, we extend its idea towards the main result.

4.1 Idea

We start with two problems A, B ∈ FNP; we additionally require that dom(A) is 1-paddable via p, so that we have ran(p) as a set of easily decidable instances to work with. To better understand the construction, we will work not directly with the function problems, but with corresponding decision machines that guess one of the solutions on each path, as discussed in Section 2.3. In this way, when referring to A and B, we are actually referring to the machines M_A and M_B from now on. The result of our efforts will be a new (intermediary) machine α such that the corresponding function problem f_α works towards the design goals outlined above. (The problem of finding a second solution in this function problem corresponds to finding a second accepting path in the machine α.)

We must tackle the following task in constructing α: the reduction “B ≤ (2nd solution in α)” must solve a problem B(x) when allowed access to one answer to a “second solution” (or “second accepting path”) problem in α, which we call α[1] in our notation. How can such a reduction algorithm pose an “α[1] query” that helps it solve its “B question”? To be able to ask for a second accepting path, it must already provide a first accepting path itself.

To solve this challenge, we make use of the 1-padding function p, which separates dom(A) into the instances in ran(p), which are easy to decide, and the rest dom(A) ∖ ran(p), which contains all difficult instances. We will make use of the former to precisely reach the complexity of B with α[1].

How can we reach exactly the complexity of B with α[1]?

For instances x ∈ ran(p), we already know x ∈ dom(A) to be true, so we need not consult the machine A(x). Instead, we grant those instances one unique simple accepting path. In parallel to this simple path, we run the computation of B on an instance z. This setup guarantees that any second solution other than the aforementioned simple path must solve an instance of B. Since p is an injective, total, and polynomial-time invertible function, it holds that p⁻¹(ran(p)) = Σ*. From this, we recognize z ≔ p⁻¹(x) to be a good choice, as it encodes every question z to B in one instance x = p(z) of α (such that α solves the question z in either a first or, at the latest, a second accepting path). This provides us with “B ≤ (2nd solution in α)”. For instances in ran(p), the other direction “(2nd solution in α) ≤ B” already holds per construction as well.

How do we treat difficult instances of A?

The instances x ∉ ran(p) need a different kind of treatment, as the answer to whether x ∈ dom(A) is not known to us in advance; rather, we have to execute the machine A(x) to find out. However, we need to maintain the reduction “(2nd solution in α) ≤ B” by ensuring that a solution algorithm for α[1] never needs to find a second accepting path in the machine A. For this reason, we execute two copies of A(x) in parallel, so that such an α[1] algorithm can trivially compute a second solution (even without consulting the oracle B).

In summary, our machine α is sketched in Figure 1, and can be formalized as follows:

Definition 4.1.

Let A, B ∈ FNP such that dom(A) is 1-paddable via p. We define α via:

α(A,B)(x) ≔ {0,1}A(x)                    if x ∉ ran(p)
            {0} ∪ ({1}B(p⁻¹(x)))         if x ∈ ran(p)
Figure 1: Nondeterministic machine corresponding to construction α.
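Read as solution sets rather than as a machine, the construction α can be sketched in Python as follows; the solution sets of A and B are modeled as (hypothetically computable, finite) sets purely to make the case distinction of Definition 4.1 explicit.

```python
from typing import Callable, Optional, Set

MultiFun = Callable[[str], Set[str]]      # solution sets of A and B (illustration)
PadInv = Callable[[str], Optional[str]]   # p^{-1}; returns None outside ran(p)

def alpha(A: MultiFun, B: MultiFun, p_inv: PadInv) -> MultiFun:
    """Solution sets of alpha(A, B) from Definition 4.1."""
    def sol(x: str) -> Set[str]:
        z = p_inv(x)
        if z is None:                                    # x not in ran(p)
            return {b + y for b in "01" for y in A(x)}   # {0,1}A(x)
        return {"0"} | {"1" + y for y in B(z)}           # {0} u {1}B(p^{-1}(x))
    return sol
```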

For a better overview, we now formalize the properties that the construction α satisfies, but we will omit the proof; we instead refer to Theorem 4.6.

Lemma 4.2.

Let A, B ∈ FNP, B ≠ ∅, and dom(A) 1-paddable via p. Then, it holds that:

  1. dom(α(A,B)) = dom(A)

  2. dom(α(A,B)[1]) ≡^p_m dom(B)

  3. α(A,B)[1] ≡^p_{1-T} B

Potential for improvement.

The construction α(A,B) differs too much from A to justify calling it a “rephrasing” of A, as we are unable to show α(A,B) ≡^p_{1-T} A. While α(A,B) ≤^p_{1-T} A is easy to see, a problem emerges for the other direction, i. e., A ≤^p_{1-T} α(A,B): because of our special treatment of instances in ran(p), there are no proofs in the sense of A for those instances in α, as they have the trivial solution 0 instead. For this reason, the newly constructed problem could be easier than A. This problem is illustrated in the following example.

Example 4.3.

Let Pr constitute a TFNP formulation of prime factorization, so Pr has at least one function value for each input. It follows that dom(Pr) = Σ* is 1-paddable via p for p(x) ≔ x. Now, consider the constructed function problem α(Pr, B) for B = ∅ with this specific function p. Since ran(p) = Σ* = dom(Pr) = dom(α(Pr,B)), per construction of α, the output 0 is always a valid first solution and α(Pr,B) is thus easy to solve. Since α(Pr,B) has a polynomial-time algorithm, in this setup, Pr ≤^p_{1-T} α(Pr,B) would imply that Pr has a polynomial-time algorithm as well, but no such algorithm is known.

The given 1-padding function p for dom(A) only returns values for which the decision problem dom(A) is easy to solve, but for those same instances, the function problem A may be hard. For the concrete problem Pr, the reduction Pr ≤^p_{1-T} α(Pr,B) can be achieved via a better choice of p (for example, such that p only chooses multiples of 2); in general, it is not clear whether such a choice of p always exists.

Goal.

We iterate upon our construction α to create a new construction β. There, we want the constructed problem to be very similar to the original problem in the sense of β(A,B) ≡^p_{1-T} A.

Definition 4.4.

Let A ⊆ Σ* be 1-paddable via p. Then, we define:

p_0(x) ≔ p(0x)
p_1(x) ≔ p(1x)

With this definition, ran(p_0) ∪ ran(p_1) ∪ {p(ε)} = ran(p) is a disjoint decomposition, as illustrated in Figure 2. ran(p_0) shall assume the role that ran(p) previously held in α, so ran(p_1) is a new range of values in which we can work towards A ≤^p_{1-T} β(A,B) (which we previously found to be the difficult direction of β(A,B) ≡^p_{1-T} A).

Figure 2: Decomposition of ran(p), a subset of dom(A).

How can we utilize this space towards our goal A ≤^p_{1-T} β(A,B)?

We want questions to A to be answerable by generating accepting paths for instances from ran(p_1). An oracle algorithm that must solve A(x) for an x ∉ ran(p) can already query x in the previous construction α(A,B), as the construction consists of A(x) on such queries. Hence, we only need to deal with instances x ∈ ran(p) in β. Due to ran(p) ⊆ dom(A), we know that A(x) always has at least one accepting path on such instances, and thus we can encode such queries into instances of ran(p_1).

From a technical perspective, we build this encoding by computing the index of an instance x ∈ ran(p_1) using p_1⁻¹, and then interpreting this index as an index in ran(p) using p. Put together, on input x ∈ ran(p_1), we execute the machine A(p(p_1⁻¹(x))). It is easy to see that we reach each value in ran(p) with exactly one input, since p(p_1⁻¹(ran(p_1))) = ran(p). Just like for inputs outside of ran(p), we duplicate this computation of A on two simultaneous branches to leave the difficulty of computing a second solution untouched.

In total, our construction for β can be sketched as in Figure 3, and yields the following formalization:

Definition 4.5.

Let A, B ∈ FNP such that dom(A) is 1-paddable via p. We define β as follows:

β(A,B)(x) ≔ {0,1}A(x)                    if x ∉ ran(p) or x = p(ε)
            {0} ∪ ({1}B(p_0⁻¹(x)))       if x ∈ ran(p_0)
            {0,1}A(p(p_1⁻¹(x)))          if x ∈ ran(p_1)
Figure 3: Nondeterministic machine corresponding to construction β.
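Analogously to α, the three cases of β can be sketched in Python; here p_inv models p⁻¹ (returning None outside ran(p)), and membership in ran(p_0) or ran(p_1) is read off from the first letter of p⁻¹(x), as in Definition 4.4. The solution sets are again modeled as computable finite sets purely for illustration.

```python
from typing import Callable, Optional, Set

MultiFun = Callable[[str], Set[str]]      # solution sets of A and B (illustration)
PadFun = Callable[[str], str]             # the 1-padding function p
PadInv = Callable[[str], Optional[str]]   # p^{-1}; returns None outside ran(p)

def beta(A: MultiFun, B: MultiFun, p: PadFun, p_inv: PadInv) -> MultiFun:
    """Solution sets of beta(A, B) from Definition 4.5."""
    def sol(x: str) -> Set[str]:
        z = p_inv(x)
        if z is None or z == "":                            # x not in ran(p), or x = p(epsilon)
            return {b + y for b in "01" for y in A(x)}
        if z[0] == "0":                                     # x in ran(p0); p0^{-1}(x) = z[1:]
            return {"0"} | {"1" + y for y in B(z[1:])}
        return {b + y for b in "01" for y in A(p(z[1:]))}   # x in ran(p1)
    return sol
```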

4.2 Properties of the Construction

We show that the constructed problem generally has a solution if and only if A has a solution. Furthermore, we show that finding a first solution is equally difficult as solving A, and that finding a second solution (or deciding whether such exists) is equally difficult as solving B (or as deciding whether B has a solution, respectively).

Theorem 4.6.

Let A, B ∈ FNP, B ≠ ∅, and dom(A) 1-paddable via p. Then, it holds that:

  1. dom(β(A,B)) = dom(A)

  2. β(A,B) ≡^p_{1-T} A

  3. dom(β(A,B)[1]) ≡^p_m dom(B)

  4. β(A,B)[1] ≡^p_{1-T} B

Proof.

For the sake of better readability, we write “β” when referring to β(A,B). Each time we decompose a function value of β into its letters, we make use of the fact that, according to the definition of β, each such function value has length at least 1.

Statement 1.

We distinguish the cases from the definition of β, and want to show that β(x) has function values if and only if A(x) has function values.

  • Case 1: x ∉ ran(p) or x = p(ε). Then, per definition, β(x) = {0,1}A(x). If A(x) = ∅, then β(x) = {0,1}∅ = ∅ = A(x). Otherwise, A(x) is not empty, so there exists a y ∈ A(x); it follows that 0y ∈ β(x). In both cases, A(x) = ∅ ⟺ β(x) = ∅, so x ∈ dom(A) ⟺ x ∈ dom(β).

  • Case 2: x ∈ ran(p_0). Then, per definition, β(x) = {0} ∪ {1}B(p_0⁻¹(x)). Since x ∈ ran(p_0) ⊆ ran(p) ⊆ dom(A) always holds, β(x) is required to always have a function value. As 0 ∈ β(x) for all such x, the construction behaves correctly.

  • Case 3: x ∈ ran(p_1). Then, per definition, β(x) = {0,1}A(p(p_1⁻¹(x))). From x ∈ ran(p_1), it follows that the value x′ ≔ p_1⁻¹(x) exists. From the properties of the 1-padding function p, it follows that p(x′) ∈ dom(A), so there exists a y ∈ A(p(x′)) = A(p(p_1⁻¹(x))). Per definition of β, it holds that 0y ∈ β(x); since x ∈ ran(p_1) ⊆ ran(p) ⊆ dom(A) by assumption, this behavior is correct.

Statement 2.

We show β ≡^p_{1-T} A through reductions in both directions.

  • β ≤^p_{1-T} A holds via the following oracle algorithm on input x:

    (a) If x ∉ ran(p) or x = p(ε), then let y be the answer to oracle query x and return 0y.

    (b) If x ∈ ran(p_0), then return 0.

    (c) If x ∈ ran(p_1), then let y be the answer to oracle query p(p_1⁻¹(x)) and return 0y.

    In case (b), 0 is always a function value of β(x) according to its definition. Cases (a) and (c) behave exactly analogously to the definition of β(x). (If the oracle query in case (a) returns ⊥, the algorithm correctly returns the value 0⊥ = ⊥ by definition; since such an oracle answer implies A(x) = ∅, it also holds that {0,1}A(x) = ∅. In case (c), A(p(p_1⁻¹(x))) always returns a function value, see Statement 1, Case 3.)

  • A ≤^p_{1-T} β holds via the following oracle algorithm on input x:

    (a) If x ∉ ran(p), then let y be the answer to oracle query x. If y = ⊥, then return ⊥.

    (b) If x ∈ ran(p), then let y be the answer to oracle query p_1(p⁻¹(x)).

    (c) Let y ≕ y_0 y_1 ⋯ y_n for n ≥ 0. Return y_1 ⋯ y_n.

    In case (a), per definition, β(x) = {0,1}A(x). If A(x) is empty, the oracle answers ⊥ and the algorithm behaves correctly by returning ⊥ as well. Otherwise, the oracle provides a result y_0 y_1 ⋯ y_n with y_0 ∈ {0,1} and y_1 ⋯ y_n ∈ A(x), due to which the latter is the correct output in line (c).

    To argue for case (b), let x′ ≔ p_1(p⁻¹(x)). Clearly, x′ ∈ ran(p_1), so

    β(x′) = {0,1}A(p(p_1⁻¹(x′))) = {0,1}A(p(p_1⁻¹(p_1(p⁻¹(x))))) = {0,1}A(x).

    Due to the precondition that x ∈ ran(p) ⊆ dom(A), it follows that A(x) always has at least one function value. As in case (a), line (c) splits off the first letter, leading to a correct result.
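The second oracle algorithm above, i. e., the direction A ≤^p_{1-T} β, can be sketched in Python as follows; the oracle is modeled as a refinement of β returning a string or None for ⊥.

```python
from typing import Callable, Optional

Refinement = Callable[[str], Optional[str]]   # refinement of beta; None stands for ⊥
PadFun = Callable[[str], str]                 # the 1-padding function p
PadInv = Callable[[str], Optional[str]]       # p^{-1}; returns None outside ran(p)

def reduce_A_to_beta(x: str, oracle_beta: Refinement,
                     p: PadFun, p_inv: PadInv) -> Optional[str]:
    """A <= beta(A, B) with one oracle query, following cases (a)-(c) above."""
    z = p_inv(x)
    if z is None:                      # case (a): x not in ran(p), query x directly
        y = oracle_beta(x)
    else:                              # case (b): x in ran(p), query p_1(p^{-1}(x))
        y = oracle_beta(p("1" + z))
    if y is None:                      # no solution (only possible in case (a))
        return None
    return y[1:]                       # case (c): strip the leading 0/1 bit
```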

Statement 3.

Follows via simple reduction algorithms derived from the algorithms provided in Statement 4.

Statement 4.

We prove the equivalence β[1] ≡^p_{1-T} B by reductions in both directions.

  • β[1] ≤^p_{1-T} B holds via the following oracle algorithm on input x′:

    (a) If x′ is not of the form ⟨x, y⟩, return ⊥. From here on, it holds that x′ = ⟨x, y⟩. Let y ≕ y_0 y_1 ⋯ y_n be decomposed into its letters.

    (b) If (x, y) ∉ β, return ⊥.

    (c) If x ∉ ran(p_0), return (1 − y_0) y_1 ⋯ y_n.

    (d) If x ∈ ran(p_0) and y_0 = 1, return 0.

    (e) If x ∈ ran(p_0) and y_0 = y = 0, then let y′ be the answer to oracle query p_0⁻¹(x) and return 1y′.

    We argue that this algorithm is correct as follows:

    • Lines (a) and (b) check whether the input x′ has the correct form for a “second solution” problem. If not, we know β[1](x′) to be empty, so our algorithm must not return a function value. Otherwise, due to (x, y) ∈ β, we know from here on that y is a correct first solution for β(x).

    • According to the definition of β, flipping the first bit yields a different solution in all cases except x ∈ ran(p_0); hence, line (c) acts accordingly.

    • Now, only the case x ∈ ran(p_0) remains. The known solution y is thus contained in the set β(x) = {0} ∪ {1}B(p_0⁻¹(x)), from which we must find a second solution. In line (d), the case that y is contained in the right-hand side of this union is handled by returning 0.

    • Line (e) treats the opposite case that the solution y is contained in the left-hand side of this union, i. e., y = 0. In this case, there exists a second solution if and only if B(p_0⁻¹(x)) is not empty. If it is empty, the oracle query yields ⊥, so the algorithm returns 1⊥ = ⊥. Otherwise, the algorithm returns 1y′, which is a correct second solution.

    All cases are handled, and the algorithm behaves correctly in each case, so we have shown the reduction β[1] ≤^p_{1-T} B.

  • B ≤^p_{1-T} β[1] holds via the following oracle algorithm on input x:

    (a) Let y be the answer to oracle query ⟨p_0(x), 0⟩. If y = ⊥, return ⊥.

    (b) Otherwise, y = 1 y_1 ⋯ y_n. Return y_1 ⋯ y_n.

    The oracle query ⟨p_0(x), 0⟩ corresponds to asking for a second solution of β(p_0(x)), provided that the first solution 0 is already given. Due to p_0(x) ∈ ran(p_0), it holds that

    β(p_0(x)) = {0} ∪ {1}B(p_0⁻¹(p_0(x))) = {0} ∪ {1}B(x).

    Since the answer to the oracle query must be different from the provided first solution 0, any answer (if one is given) has the form 1y′ with y′ ∈ B(x). Therefore, the assumption in line (b), as well as splitting off the first letter, is correct. If there are no function values in B(x), the oracle answer is ⊥. In this case, the algorithm correctly returns ⊥ in line (a).

With this, we were able to show that the difficulty of finding a second solution is independent of the difficulty of finding a first solution.
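Both oracle algorithms from Statement 4 can be sketched in Python as follows; the list encoding ⟨x, y⟩ is abstracted as a pair (with None for inputs that are not proper encodings), the graph of β is assumed to be given as a polynomial-time decider, and oracles return a string or None for ⊥.

```python
from typing import Callable, Optional, Tuple

GraphBeta = Callable[[str, str], bool]                     # decider for the graph of beta
RefinementB = Callable[[str], Optional[str]]               # refinement of B
RefinementB1 = Callable[[Tuple[str, str]], Optional[str]]  # refinement of beta[1]
PadFun = Callable[[str], str]                              # p_0(x) = p(0x)
PadInv = Callable[[str], Optional[str]]                    # p_0^{-1}; None outside ran(p_0)

def second_solution_via_B(inp: Optional[Tuple[str, str]], graph_beta: GraphBeta,
                          oracle_B: RefinementB, p0_inv: PadInv) -> Optional[str]:
    """beta[1] <= B, following lines (a)-(e) of Statement 4."""
    if inp is None:                               # (a) not a proper encoding
        return None
    x, y = inp
    if not graph_beta(x, y):                      # (b) y is no solution of beta(x)
        return None
    if p0_inv(x) is None:                         # (c) x not in ran(p_0): flip first bit
        return ("1" if y[0] == "0" else "0") + y[1:]
    if y[0] == "1":                               # (d) 0 is then a second solution
        return "0"
    answer = oracle_B(p0_inv(x))                  # (e) y = 0: ask B(p_0^{-1}(x))
    return None if answer is None else "1" + answer

def reduce_B_to_second_solution(x: str, oracle_beta1: RefinementB1,
                                p0: PadFun) -> Optional[str]:
    """B <= beta[1]: query <p_0(x), 0> and strip the leading 1."""
    answer = oracle_beta1((p0(x), "0"))           # the single oracle query
    if answer is None:                            # B(x) is empty
        return None
    return answer[1:]                             # any answer has the form 1y
```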

 Remark 4.7 (interpretation as machines).

According to Definition 2.11, we can interpret the results of Theorem 4.6 as results about finding accepting paths in NP machines.

Let A ∈ NP be 1-paddable and let M_1 be any nondeterministic polynomial-time machine that accepts A. Let M_2 be an arbitrary second nondeterministic polynomial-time machine; we define B to be the language accepted by M_2. Then, there exists a machine M with the following properties:

  1. M decides A, i. e., the same language as M_1.

  2. Finding an accepting path in M is equally difficult as finding an accepting path in M_1.

  3. Testing whether M has a second accepting path is equally difficult as testing whether M_2 accepts.

  4. Finding a second accepting path in M is equally difficult as finding one accepting path in M_2.

Proof.

From Section 2.3, we know that M_{f_{M_1}} ≡^p_m M_1 and M_{f_{M_2}} ≡^p_m M_2. The desired machine M is M_{β(f_{M_1}, f_{M_2})}; its properties follow directly from Theorem 4.6.

Corollary 4.8.

For every 1-paddable X ∈ NP and every nonempty B ∈ FNP, there exists a C ∈ FNP such that dom(C) = X and C[1] ≡^p_{1-T} B.

Proof.

Let M be an NP machine for X. Then, for C ≔ β(f_M, B), it holds that dom(C) = dom(f_M) = X and C[1] ≡^p_{1-T} B by virtue of Theorem 4.6.1 and 4.6.4.

4.3 Properties of the Construction for B inside TFNP

The case B ∈ TFNP grants us the interesting additional property that, given a first solution, finding a second solution in the constructed problem is a total search problem itself (except for invalid inputs).

Corollary 4.9.

For A ∈ FNP with dom(A) 1-paddable and B ∈ TFNP, it holds that β(A,B)[1]_t ∈ TFNP.

Proof.

It is easy to see that dom(β(A,B)[1]_t) ≤^p_m dom(β(A,B)[1]) holds. Together with Theorem 4.6.3 and B ∈ TFNP, we obtain

dom(β(A,B)[1]_t) ≤^p_m dom(β(A,B)[1]) ≡^p_m dom(B) = Σ*,

so it follows that dom(β(A,B)[1]_t) = Σ*.

 Remark 4.10 (interpretation as machines).

Again, we interpret Corollary 4.9 in terms of finding accepting paths in NP machines. Let A ∈ NP be 1-paddable and let M_1 be any NP machine that accepts A. Let M_2 be an arbitrary second nondeterministic polynomial-time machine that accepts every input on at least one path; its accepted language is thus Σ*. Then, there exists a machine M with the properties from Remark 4.7 for which it additionally holds that whenever M accepts, it has at least two accepting paths.

With this, we have shown that, even in the special case of machines that never have exactly one accepting path, finding a second accepting path can still assume any difficulty out of all possible difficulties, that is, out of TFNP.

4.4 Properties of the Construction without dom(A) 1-paddable

How can we apply the results from Theorem 4.6 if we are only given two function problems A and B, but do not know dom(A) to be 1-paddable? By applying Proposition 3.6, we can still reach a weakened conclusion.

Corollary 4.11.

For A, B ∈ FNP with A ≠ ∅ and B ≠ ∅, there exists a C ∈ FNP such that:

  1. dom(C) ≡^p_m dom(A)

  2. C ≡^p_{1-T} A

  3. dom(C[1]) ≡^p_m dom(B)

  4. C[1] ≡^p_{1-T} B

Proof.

Let A′ ∈ FNP with A ≡^p_{1-T} A′, dom(A′) 1-paddable, and dom(A) ≡^p_m dom(A′), as given by Proposition 3.6. The desired problem C is now β(A′, B) according to Theorem 4.6.

When only given these prerequisites, we have to give up the property that C has solutions for exactly those instances for which A has solutions; in other words, we lose control over dom(C). However, we still obtain a problem whose difficulties of finding a first and a second solution adapt exactly to the two provided problems. This implies that every combination of difficulties for first and second solutions can appear in function problems; no possibility can be ruled out. Corollary 4.9 can be transferred in the same way.

4.5 Shared Degree Structure Results

We obtain the following statements, which are formalized by Corollary 4.12:

  • The set of all second solution problems has the same degrees as the entire class FNP.

  • This holds even if we restrict ourselves to second solution problems of search problems that correspond to a single 1-paddable X ∈ NP.

  • Alternatively, we can restrict ourselves to second solution problems of search problems that assume an arbitrary difficulty A ∈ FNP.

The last statement implies that in every FNP degree, one can find a problem whose second solution problem matches an arbitrary difficulty B ∈ FNP.

Corollary 4.12.

For every 1-paddable X ∈ NP and every A ∈ FNP, the following classes share a degree structure w. r. t. ≤^p_{1-T}:

  • FNP

  • {C[1] ∣ C ∈ FNP}

  • {C[1] ∣ C ∈ FNP and dom(C) = X}

  • {C[1] ∣ C ∈ FNP and C ≡^p_{1-T} A}

Proof.

Follows directly from Corollaries 4.8 and 4.11.

5 Conclusion and Open Questions

Our results show that the difficulty of a search problem and its second solution variant are independent, and that second solution problems have the same degree structure as the entire class FNP. Our construction focuses on the problem of finding a second solution when given a first one. Consider the following extensions of this, using the generic notion of solution by Yato and Seta [17]:

  1. Compute an (n+1)-th solution when given n solutions for some fixed n.

  2. Compute an additional solution when given a list of solutions of any length.

  3. Compute all remaining solutions when given n solutions for some fixed n.

Analyzing these extensions would provide a further understanding of how different solutions to a problem relate to each other. Is it possible to translate our results to these more general cases? These extensions could be related to enumeration problems, where one computes a list of all solutions of a problem. Typically, enumeration problems deal with “natural” solutions like satisfying assignments of formulas. Analyzing the above extensions would give some insight into the complexity of enumerating all solutions of arbitrary FNP problems.

In Subsection 4.4, the main result is transferred to the situation where dom(A) is not 1-paddable, but without achieving the equality between the domains of A and the constructed search problem. We are interested in whether the requirement that dom(A) is 1-paddable is necessary for this property, or if a weaker (or even no such) requirement suffices.

References

  • [1] Eric W. Allender and Roy S. Rubinstein. P-Printable Sets. SIAM Journal on Computing, 17(6):1193–1202, December 1988. doi:10.1137/0217075.
  • [2] Leonard C. Berman and Juris Hartmanis. On Isomorphisms and Density of NP and Other Complete Sets. SIAM Journal on Computing, 6(2):305–322, June 1977. doi:10.1137/0206023.
  • [3] Argyrios Deligkas, George B. Mertzios, Paul G. Spirakis, and Viktor Zamaraev. Exact and Approximate Algorithms for Computing a Second Hamiltonian Cycle. LIPIcs, Volume 170, MFCS 2020, 170:27:1–27:13, 2020. doi:10.4230/LIPICS.MFCS.2020.27.
  • [4] Ding-Zhu Du and Ker-I Ko. Theory of Computational Complexity. John Wiley & Sons, Incorporated, Somerset, United States, 2014. doi:10.1002/9781118595091.
  • [5] Stephen Fenner, Frederic Green, Steven Homer, Alan L. Selman, Thomas Thierauf, and Heribert Vollmer. Complements of multivalued functions. Chicago Journal of Theoretical Computer Science, 1999. Article 3 of volume 1999. doi:10.4086/cjtcs.1999.003.
  • [6] Stephen Fenner, Steven Homer, Mitsunori Ogihara, and Alan L. Selman. Oracles that compute values. SIAM Journal on Computing, 26:1043–1065, 1997. doi:10.1137/S0097539793247439.
  • [7] Paul W. Goldberg and Christos H. Papadimitriou. TFNP: An Update. In Algorithms and Complexity, pages 3–9, Cham, 2017. Springer International Publishing. doi:10.1007/978-3-319-57586-5_1.
  • [8] Juris Hartmanis, Neil Immerman, and Vivian Sewelson. Sparse sets in NP-P: EXPTIME versus NEXPTIME. Information and Control, 65(2-3):158–181, May 1985. doi:10.1016/S0019-9958(85)80004-8.
  • [9] Juris Hartmanis and Yaacov Yesha. Computation times of NP sets of different densities. Theoretical Computer Science, 34(1):17–32, January 1984. doi:10.1016/0304-3975(84)90111-7.
  • [10] Lane A. Hemaspaandra, Ashish V. Naik, Mitsunori Ogihara, and Alan L. Selman. Computing solutions uniquely collapses the polynomial hierarchy. SIAM Journal on Computing, 25:697–708, 1996. doi:10.1137/S0097539794268315.
  • [11] Mark W. Krentel. The complexity of optimization problems. Journal of Computer and System Sciences, 36(3):490–509, 1988. doi:10.1016/0022-0000(88)90039-6.
  • [12] Nimrod Megiddo and Christos H. Papadimitriou. On total functions, existence theorems and computational complexity. Theoretical Computer Science, 81(2):317–324, 1991. doi:10.1016/0304-3975(91)90200-L.
  • [13] Alan L. Selman. A survey of one-way functions in complexity theory. Mathematical Systems Theory, 25:203–221, 1992. doi:10.1007/BF01374525.
  • [14] Alan L. Selman. A taxonomy on complexity classes of functions. Journal of Computer and System Sciences, 48(2):357–381, 1994. doi:10.1016/S0022-0000(05)80009-1.
  • [15] Alan L. Selman. Much ado about functions. In Proceedings 11th Conference on Computational Complexity, pages 198–212. IEEE Computer Society Press, 1996. doi:10.1109/CCC.1996.507682.
  • [16] Nobuhisa Ueda and Tadaaki Nagao. NP-completeness Results for NONOGRAM via Parsimonious Reductions. Technical Report TR96-0008, Department of Computer Science, Tokyo Institute of Technology, 1996. URL: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=1bb23460c7f0462d95832bb876ec2ee0e5bc46cf.
  • [17] Takayuki Yato and Takahiro Seta. Complexity and completeness of finding another solution and its application to puzzles. IEICE TRANSACTIONS on Fundamentals, 86-A:1052–1060, May 2003. URL: http://search.ieice.org/bin/summary.php?id=e86-a_5_1052.