The Complexity of Computing Second Solutions
Abstract
We study the complexity of computing second solutions for NP search problems, i. e., given a problem instance and a valid solution, we have to find another valid solution.
Our main result shows that for typical NP decision problems, the complexity of computing second solutions is completely determined by the choice of the type of solution (i. e., the specific function problem), but independent of the underlying decision problem. More precisely, we show that for every NP decision problem that is 1-paddable (a weak form of paddability), different choices of the type of solution lead to different second solution problems, which altogether have the same degree structure as the entire class of NP search problems (FNP). In fact, each degree of difficulty within FNP does occur as a second solution problem for it. This proves that typical NP decision problems have no intrinsic complexity w. r. t. the search for a second solution; only the specification of the type of solution determines this complexity. This explains the empirical observation that the difficulty of computing second solutions strongly depends on the formulation of the problem.
Moreover, we show that the complexities of a search problem and its second solution variant are independent in the following sense: for any two search problems, representing two degrees of difficulty, there exists a search problem such that
1. it is as difficult as the first given problem, and
2. computing second solutions for it is as difficult as the second given problem.
Keywords and phrases: function problems, another solution problem, Turing machines
Funding: Fabian Egidy: supported by the German Academic Scholarship Foundation.
2012 ACM Subject Classification: Theory of computation → Complexity classes; Theory of computation → Turing machines
Editors: Paweł Gawrychowski, Filip Mazowiecki, and Michał Skrzypczak
Series and Publisher: Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
1.1 Motivation
We study the complexity of computing a second solution for an NP search problem when we are given a first solution. NP search problems (or search problems for short) are a fundamental concept in complexity theory, first studied as multivalued functions by Selman [13, 14, 15] and further developed by Fenner et al. [6, 5] and Hemaspaandra et al. [10]. Particularly intensive research has been done on the class of total search problems TFNP [12], where each input is guaranteed to have at least one solution (see [7] for an overview). While the standard setting of search problems asks to compute a first solution, we focus on computing a second solution. This means we are given a problem instance together with a valid solution , and have to find another valid solution . In particular, we are interested in the extent to which a first solution helps compute a second one.
On the one hand, our investigation is motivated by use cases where users ask for multiple options to choose from (e. g., routes or schedules), or for new solutions different from the previous ones (e. g., generation of exam tasks). On the other hand, the following consideration shows that our investigation is also interesting from a theoretical perspective: It is easy to devise examples where finding an additional solution is simple (because it can be easily derived from a first solution, e. g., reversing the direction of a Hamiltonian cycle). In contrast, for other examples, finding the first few solutions is trivial, but finding additional solutions is hard (e. g., when searching for divisors). This observation prompts us to systematically investigate the relationship between first and second solutions from a complexity theory perspective.
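To make the two phenomena concrete, here is a small, hedged illustration (our own toy code, not taken from the paper): reversing a Hamiltonian cycle yields a second solution essentially for free, while finding a further divisor beyond the trivial ones amounts to partial factorization.

```python
# Toy illustration (ours, not the paper's): second solutions can be trivial or hard.

def second_hamiltonian_cycle(cycle):
    """Given a Hamiltonian cycle as a vertex sequence, reversing the direction of
    traversal yields another valid sequence, i.e. a second solution when solutions
    are read as vertex sequences."""
    return [cycle[0]] + list(reversed(cycle[1:]))

def another_divisor(n, known):
    """The divisors 1 and n are trivial first solutions; finding yet another divisor
    of a composite n requires partial factorization.  Trial division, as below,
    takes time exponential in the bit length of n."""
    for d in range(2, n):
        if n % d == 0 and d not in known:
            return d
    return None  # no further divisor exists

print(second_hamiltonian_cycle([0, 1, 2, 3]))  # [0, 3, 2, 1]
print(another_divisor(91, {1, 91}))            # 7
```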
1.2 Previous Work
In the past, questions about the complexity of finding alternative solutions were mostly raised in custom-tailored form for concrete problems, like the “Another Hamiltonian Cycle” problem (see [3] for an overview). Ueda and Nagao [16] consider several specific problems of this shape and employ the general problem scheme “alternative solution for ” based on an intuitive understanding of the base problem . They investigate the resulting problem for several such problems (e. g., 3-Dimensional matching and Nonogram).
In our paper, we use the following generic and abstract definition of finding an additional solution, which was proposed by Yato and Seta [17] and which is applicable to any search problem: given the solutions already known for the input instance, we have to find a new, additional solution (if such a solution exists). This definition yields, for any search problem, the unique corresponding second solution problem.
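As a hedged sketch of this generic definition, assume the search problem is given by a polynomial-time verifier and a length bound on solutions (these names, and the brute-force enumeration, are our illustrations, not the paper's formal definitions):

```python
from itertools import product

def additional_solution(x, known, verify, max_len):
    """Yato/Seta-style task: given an instance x and already known solutions,
    return some further valid solution not among them, or None if none exists.
    The exhaustive enumeration over binary strings is for illustration only."""
    known_set = set(known)
    for length in range(max_len + 1):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if s not in known_set and verify(x, s):
                return s
    return None

# Usage sketch: solutions of a decimal instance x are binary encodings of its divisors.
is_divisor = lambda x, s: s.startswith("1") and int(x) % int(s, 2) == 0
print(additional_solution("6", ["1", "110"], is_divisor, max_len=3))  # "10", i.e. 2
```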
1.3 Our Contribution
We analyze how the formulation of a problem influences the complexity of computing second solutions. Our main result in Section 4 shows that this complexity is completely determined by the choice of the type of solution, but independent of the underlying decision problem.
Conclusion 1.
The complexities of a search problem and its second solution variant are independent.
In Corollary 4.11, we show that for any two nonempty search problems (representing two arbitrary degrees of difficulty), there exists a search problem such that:
1. it is as difficult as the first given problem, and
2. its second solution problem is as difficult as the second given problem.
This shows that the complexity of a search problem is independent of the complexity of its second solution problem. Here, “independent” really is to be understood in a mathematical sense: the complexity of one problem variant provides no information about the complexity of the other. For example, there exists a search problem just as hard as factorization such that its second solution problem is just as hard as graph isomorphism.
Conclusion 2.
For every 1-paddable NP set, the corresponding second solution problems have the same degree structure as the entire class FNP. In fact, all possible degrees of difficulty do occur as second solution problems for it.
Conclusion 2 uses the additional assumption of 1-paddability, which is weaker than paddability in the sense of Berman-Hartmanis [2]; hence, natural problems in NP are 1-paddable. The conclusion is based on the following arguments. By Corollary 4.8, for every 1-paddable set and every nonempty search problem, there exists a search problem such that:
1. its domain is exactly the given 1-paddable set, and
2. its second solution problem is as difficult as the given search problem.
Hence, solves the decision problem , while its second solution problem has the same complexity as . Phrased in terms of machines (cf. Remark 4.7), we construct an NP machine for such that finding a second accepting path has the same complexity as . Since any such choice of complexity is possible, we recognize that each 1-paddable set has maximum freedom for the complexity of the second solution problem. As a consequence, the choice of the type of solution has the greatest possible influence on the difficulty of the second solution problem. For example, for the satisfiability problem SAT, there exist search problems both solving SAT such that the second solution problem of one is just as hard as factorization, while that of the other is solvable in polynomial time. This demonstrates that NP decision problems have no intrinsic complexity w. r. t. the search for a second solution. Rather, the specification of the type of solution (that is, the corresponding search problem) determines the complexity of computing a second solution, where all levels of complexity are possible. Note that, given P ≠ NP, the analogous statement for first solution problems does not hold: under this assumption, there does not exist a search problem such that and is solvable in polynomial time.
Corollary 4.12 restates this result in terms of degree structures, as follows. For every 1-paddable set, all of the following classes share the same degree structure:
- FNP
- the class of all second solution problems of FNP problems
- the class of second solution problems of those FNP problems whose domain is the given 1-paddable set
This shows that the entire variety of degrees of difficulty in FNP reappears in second solution problems; this still applies even if we restrict to any single 1-paddable set and its corresponding second solution problems.
In our proofs, we use artificially defined search problems to achieve the desired degrees of difficulty. Therefore, we do not obtain statements about specific problems. We rather see our results as an objective description and systematic fine-grained analysis of the empirical observation that the difficulty of computing second solutions strongly depends on the formulation of the problem, more precisely the chosen type of solution.
2 Preliminaries
2.1 Basic Definitions
We choose Turing machines (as commonly defined) as the computation model for our algorithms, usually on the fixed alphabet . Let denote the range of .
Definition 2.1 (inverse of a polynomial-time computable function).
Let be a polynomial-time computable, injective, partial function. Then is defined as follows:
If is polynomial-time computable, we call polynomial-time invertible.
In the following, when defining a deterministic polynomial-time algorithm by means of a function definition, we can informally use “” as a function value instead of “”, which other algorithms can use to continue their computations. For this, we define for a compact notation in our formulas.
Definition 2.2.
Each is a (partial) multivalued function. We denote the function values as .
Definition 2.3 (refinement, [14]).
Let be a multivalued function. Then, is a (single-valued) refinement of if and only if the following holds for all :
-
is total
-
-
Each multivalued function is a function problem that poses the task of algorithmically computing any of its function values for a given input. From any function problem, we can always derive a decision problem that poses the question whether an instance has at least one function value:
Definition 2.4.
Let be a multivalued function. Then, .
Definition 2.5.
We define FNP as follows:
Hence, belongs to FNP if and only if its graph belongs to P and the length of function values is polynomially bounded. Each , which we refer to as a search problem, uniquely defines its corresponding decision problem . The class FNP is sometimes also referred to as .
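In symbols, a formulation consistent with this characterization reads as follows (the symbols g, graph(g), and p are our placeholders, not necessarily the paper's notation):

```latex
% Sketch of the FNP characterization paraphrased above; notation is ours.
g \in \mathrm{FNP} \iff
  \operatorname{graph}(g) = \{(x,y) \mid y \in g(x)\} \in \mathrm{P}
  \ \text{ and }\ \exists\,\text{polynomial } p\ \forall x\ \forall y \in g(x)\colon |y| \le p(|x|)
```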
Definition 2.6.
.
Definition 2.7 (polynomial-time many-one reduction).
Let . We define:
-
there exists a total, polynomial-time computable such that
-
Note that from , it follows that . For function problems, a different kind of reduction is needed: intuitively, a function problem reduces to if an efficient algorithm for would allow us to solve efficiently as well. This is captured by the following notion, similar to a metric reduction [11]: on input , a polynomial-time algorithm must find a value ; we allow it one oracle query for a value . The notation shall express which oracle an oracle algorithm can access.
Definition 2.8 (metric reduction for function problems, [11]).
Let and be function problems.
-
s. t.
asks at most one oracle query and
for each refinement -
Note that implies , where denotes polynomial-time Turing reduction with an unbounded number of queries. Therefore, our results also hold for instead of . In our proofs, we will show by defining a function or an oracle algorithm from which it is obvious that it can be computed in polynomial time and behaves correctly for each matching oracle. For this, we will informally use as a return value of an arbitrary refinement , i. e. as an answer to the oracle query .
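For concreteness, the following minimal sketch (our own, with hypothetical helper names) shows the shape of such a one-query reduction: polynomial-time preprocessing, exactly one oracle query to some refinement of the target problem, and polynomial-time postprocessing.

```python
BOTTOM = None  # stands in for the oracle answer "no function value exists"

def preprocess(x):            # stub; assumed polynomial-time computable
    return x

def postprocess(x, answer):   # stub; assumed polynomial-time computable
    return answer

def one_query_reduction(x, oracle):
    """Shape of a metric-style reduction with a single oracle query: the oracle may
    be any refinement of the target function problem and answers BOTTOM on inputs
    without function values.  In this sketch, BOTTOM is simply passed through."""
    answer = oracle(preprocess(x))    # exactly one oracle query
    return BOTTOM if answer is BOTTOM else postprocess(x, answer)
```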
From any search problem , we denote the class of equally difficult problems (by our reduction term ) as its degree . Furthermore, we define the degree structure of a complexity class to be the set of all its degrees, i. e. . We consider two complexity classes and to share a degree structure (w. r. t. ) if both classes contain at least one element of every degree that appears in any of the two classes, i. e. if holds.
2.2 Formalism for Additional Solution Problems
We now define a formalism that describes the search for an additional solution. For this, we define the function problem where an instance of together with solutions are given as a list encoding, and the task is to find an -th solution. For a given and , this search problem is uniquely defined. An input is only valid if it contains valid and pairwise distinct solutions, and we ask the answer to be valid and different from all solutions that are already part of the input. Invalid inputs shall not have function values. Let denote an appropriate list encoding, and let denote the empty word.
Definition 2.9 (additional solution, [17]).
For and , we define:
Compared to the definition of , the definition of provides invalid inputs with one trivial and unique solution. Generally, it holds that . Additionally, it holds that if the existence of solutions in implies the existence of an -th solution, .
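A minimal sketch of this formalism, assuming a polynomial-time verifier for the underlying search problem (the function names and the explicit list representation are our illustrative choices, not the paper's encoding):

```python
def is_valid_input(x, given, verify):
    """An input of the additional-solution problem consists of an instance x and
    pairwise distinct, valid solutions; anything else is an invalid input."""
    return len(set(given)) == len(given) and all(verify(x, s) for s in given)

def is_correct_answer(x, given, s, verify):
    """s answers the additional-solution problem iff the input is valid, s is a
    valid solution of x, and s differs from every solution already given.  Invalid
    inputs have no correct answers; the relaxed variant mentioned above instead
    grants them one trivial, unique answer (e.g. the empty word)."""
    return is_valid_input(x, given, verify) and verify(x, s) and s not in given
```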
2.3 Function Problems as Search Problems in Nondeterministic Machines
Function problems are useful for describing the search for accepting paths in nondeterministic machines. Our motivation for drawing a parallel between the two concepts is that we will allow ourselves to treat function problems and nondeterministic machines as formally different, but intuitively synonymous. To justify this similarity, we define a corresponding function problem for each nondeterministic machine.
Definition 2.10.
Let be a nondeterministic decision machine. Then, we define
For an NP machine , it holds that ; the function problem’s associated task is the search for accepting paths in this specific machine. Conversely, for each FNP problem , we can immediately derive a nondeterministic machine that tests one of the possible function values of on each of its paths on input .
Definition 2.11.
Let the solutions of be length-bounded by polynomial . Then, let be the nondeterministic polynomial-time machine that decides by guessing and verifying one with per execution path.
Note that despite , it holds that .
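A guess-and-verify sketch of one nondeterministic path of this machine (again assuming a polynomial-time verifier and a polynomial length bound; the names are ours):

```python
def machine_path(x, guess, verify, bound):
    """Behaviour of the machine from Definition 2.11 along one nondeterministic
    path: guess a candidate solution of bounded length and accept iff it is a
    function value for x.  The machine accepts x iff some path accepts."""
    return len(guess) <= bound(len(x)) and verify(x, guess)

# e.g. for the divisor problem from before: input "6" is accepted via the path "11" (= 3)
is_divisor = lambda x, s: s.startswith("1") and int(x) % int(s, 2) == 0
print(machine_path("6", "11", is_divisor, bound=lambda n: n + 2))  # True
```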
3 1-paddability
3.1 Definition
In the proof of the main theorem, we will want to add a special behavior to many of the positive instances of one of the two sets that the construction works on. To know when we can safely apply our special behavior, we will need a subset of positive instances that are both easy to find and easy to recognize. Formally, we want an infinite subset, decidable in polynomial time, whose elements are indexed such that the bijection from index to element is easy to compute in both directions. We call sets that have such a subset 1-paddable. We will justify this choice of name shortly.
Definition 3.1 (1-paddability).
Let .
-
is 1-paddable via total and injective, polynomial-time computable, and
-
is 1-paddable there exists a function such that is 1-paddable via
If is 1-paddable via , the desired infinite subset is ; in other words, holds for all . The index of a given is determined by the polynomial-time computable term . From index , it is easy to return to via . Since the output of tells us whether , it holds that .
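A concrete (hypothetical, not from the paper) example of a 1-padding function: for the set of binary strings that contain at least one 1, appending a 1 yields a function whose range and index are easy to compute in both directions.

```python
# Hypothetical 1-padding function f for L = { w in {0,1}* : w contains a '1' }.

def f(w):                    # total, injective, polynomial-time computable
    return w + "1"           # f(w) lies in L for every w

def in_range_of_f(y):        # the range of f is decidable in polynomial time
    return y.endswith("1")

def index_of(y):             # f is polynomial-time invertible on its range
    assert in_range_of_f(y)
    return y[:-1]

y = f("0010")
print(y, in_range_of_f(y), index_of(y))  # 00101 True 0010
```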
Let “” denote that there exists a total, injective, polynomial-time computable and polynomial-time-invertible function such that . With this reduction term, we obtain an alternative definition of 1-paddability:
Proposition 3.2.
is 1-paddable .
Proof.
The right-hand side asks for an such that holds. Since is always true, the requirement can be rewritten as . The rest follows immediately from the two definitions.
3.2 Properties
To avoid having to show 1-paddability individually for each set, we want to utilize known results for paddability, which is defined as follows.
Definition 3.3 (paddability [2, 4]).
Let be a set. It is paddable if and only if there exists a total, injective function such that both the function and its inverse are polynomial-time computable and, for every instance and every padding string, the padded instance belongs to the set if and only if the original instance does. Such a function is then called a padding function for the set.
Almost all discovered NP-complete problems are known to be paddable. We show all these problems to be 1-paddable as well.
Proposition 3.4.
.
Proof.
Let be a padding function for and let . We define . From and (a property of the padding function), it follows that for all . Totality, injectivity, polynomial-time computability and polynomial-time invertibility all transfer directly from . Now, is 1-paddable via .
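The proof idea fits in one line (a sketch with hypothetical names: `pad` is any padding function for the set and `x0` is any fixed positive instance):

```python
def one_padding_from_padding(pad, x0):
    """Sketch of Proposition 3.4: if pad is a padding function for a set S and x0
    is some fixed element of S, then w -> pad(x0, w) is total, injective,
    polynomial-time computable and invertible on its range, and maps every w into
    S, i.e. it is a 1-padding function for S."""
    return lambda w: pad(x0, w)
```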
The proof implies an analogy between “paddability” and “1-paddability”, which justifies the choice of name for the latter:
| notion | intuitive meaning |
|---|---|
| paddability | every instance is paddable |
| 1-paddability | at least one positive instance is paddable |
However, the converse implication of Proposition 3.4 does not hold.
Proposition 3.5.
Some sets are 1-paddable, but not paddable.
Proof.
Let . We show to be 1-paddable, but not paddable.
-
Assume to be paddable; then we know to be paddable as well (using the same padding function). Let be 1-paddable via by virtue of Proposition 3.4. We know to be injective, so and are distinct; a contradiction to , which proves not to be paddable.
-
We show to be 1-paddable via . Clearly, . The empty word is the only element outside of ; since by definition, does not lie in , it holds that . It follows that is a 1-padding function for .
The concept of 1-paddability, i. e. the existence of a very simple subset, is interesting in itself. Analogously to Theorem 7.15 in [4], one can show that 1-paddable sets are not sparse. Additionally, 1-paddability differs from P-printability as defined in [9, 8, 1]: one can show that no set is both 1-paddable and P-printable at the same time.
3.3 1-paddability in Every FNP Equivalence Class
Obviously, there are function problems whose domain is not 1-paddable. However, we can demonstrate that each FNP degree contains at least one function problem whose domain is 1-paddable. In other words, we can fix the difficulty of an arbitrary FNP problem, and this difficulty is assumed by some problem with a 1-paddable domain.
Proposition 3.6.
For every nonempty , there exists an such that , 1-paddable, and .
Proof.
Let . We define via:
Obviously, and is 1-paddable via .
-
It holds that via the oracle algorithm that returns the answer to oracle query .
-
It holds that via the following oracle algorithm, where is the decomposition of the input :
-
(a)
If , then return the answer to oracle query .
-
(b)
Otherwise, return .
-
It holds that via similar reduction functions.
4 Construction of a Function Problem with Chosen Difficulty for Solution
Based on the concepts introduced so far, we will now work on a construction that takes a function problem whose domain is 1-paddable together with an arbitrary second function problem, and yields a new problem that is similar to the first, but whose “second solution” problem assumes exactly the difficulty of the second. We can view “difficulty” both in terms of the function problem itself as well as in terms of the corresponding decision problem. First, we work on an intermediary construction. Later, we extend its idea towards the main result.
4.1 Idea
We start with two problems ; we must also add the requirement that is 1-paddable via such that we have as a set of easily decidable instances to work on. To better understand the construction, we will work not directly with function problems, but with corresponding decision machines that guess one of the solutions on each path, as discussed in Section 2.3. In this way, when referring to and , we are actually referring to the machines and from now on. The result of our efforts will be a new (intermediary) machine such that the corresponding function problem works towards the design goals outlined above. (The problem of finding a second solution in this function problem corresponds to finding a second path in machine .)
We must tackle the following task in constructing : the reduction “ solution in ” must solve a problem when allowed access to one answer to a “second solution” (or “second accepting path”) problem in , which we call in our notation. How can such a reduction algorithm pose an “ query” that helps it solve its “ question”? To be able to ask for a second accepting path, it must already provide a first accepting path itself.
To solve this challenge, we make use of the 1-padding function that separates into instances that are easy to decide, and a rest which contains all difficult instances. We will make use of the former to precisely reach the complexity of with .
How can we reach exactly the complexity of with ?
For instances , we already know to be true, so we need not consult the machine . Instead, we grant those instances one unique simple accepting path. In parallel to this simple path, we run the computation of on an instance . This setup guarantees that any second solution other than the aforementioned simple path must solve a problem in . Since is an injective, total and polynomial-time invertible function, it holds that . From this, we recognize to be a good choice, as it encodes every question to in one instance of (such that solves the question in either a first or, at the latest, a second accepting path). This provides us with “ solution in ”. For instances in , the other direction “ solution in ” is already true per construction as well.
How do we treat difficult instances of ?
The instances need a different kind of treatment, as the answer whether is not known to us in advance; rather, we have to execute machine to find out. However, we need to maintain the reduction “ solution in ” by ensuring that a solution algorithm for never needs to find a second accepting path in machine . For this reason, we execute two copies of in parallel, so that such an algorithm can trivially compute a second solution (even without consulting oracle ).
In summary, our machine is sketched in Figure 1, and can be formalized as follows:
Definition 4.1.
Let such that is 1-paddable via . We define via:
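The following is a hedged sketch, in nondeterministic pseudocode, of the intermediary machine described in the prose above; the stub machines and the 1-padding function are placeholders of our own, and the paper's formal Definition 4.1 may differ in details.

```python
# Hedged sketch (ours) of the intermediary NP machine built from M_A, M_B and a
# 1-padding function f for dom(A).  The stubs make the sketch runnable; any NP
# machines and any 1-padding function would do.

def M_A(x, path): return path == x            # stub machine for A
def M_B(y, path): return path == y + "0"      # stub machine for B
def f(w): return "1" + w                      # stub 1-padding function
def in_range_of_f(x): return x.startswith("1")
def f_inverse(x): return x[1:]

def N(x, path):
    """Behaviour of the intermediary machine on input x along guess `path`."""
    tag, inner = path
    if in_range_of_f(x):
        # easy positive instances: one unique, simple accepting path ...
        if tag == "simple":
            return inner == ""
        # ... and, in parallel, the computation of M_B on the encoded instance
        return tag == "B" and M_B(f_inverse(x), inner)
    # difficult instances: two identical copies of M_A, so that a second accepting
    # path is trivially derivable from a first one (flip the copy tag)
    return tag in ("copy0", "copy1") and M_A(x, inner)

print(N("101", ("simple", "")))   # True: the trivial first accepting path
print(N("101", ("B", "010")))     # True: a further path, one per accepting path of M_B
```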
For a better overview, we now formalize the properties that the construction satisfies, but we will omit the proof; we instead refer to Theorem 4.6.
Lemma 4.2.
Let , and 1-paddable via . Then, it holds that:
-
1.
-
2.
-
3.
Potential for improvement.
The construction differs too much from to justify claiming it to be a “rephrasing” of , as we are unable to show . While is easy to see, for the other direction, i. e. , a problem emerges: because of our special treatment of instances in , there are no proofs in the sense of for those instances in , as they have the trivial solution instead. It is for this reason that the newly constructed problem could be easier than . This problem is illustrated in the following example.
Example 4.3.
Let constitute a TFNP formulation of prime factorization, so has at least one function value for each input. It follows that is 1-paddable via for . Now, consider the constructed function problem for with this specific function . Since , per construction of , output is always a valid first solution and is thus easy to solve. Since has a polynomial-time algorithm, in this setup, would imply that has a polynomial-time algorithm as well, but such an algorithm is not known.
The given 1-padding function for only returns values for which the decision problem is easy to solve, but for those same instances, the function problem may be hard. For the concrete set , the reduction can be achieved via better choice of (for example, such that only chooses multiples of 2); in general, it is not clear whether such a choice of always exists.
Goal.
We iterate upon our construction to create a new construction . There, we want the constructed problem to be very similar to the original problem in the sense of .
Definition 4.4.
Let be 1-paddable via . Then, we define:
With this definition, is a disjoint decomposition, as illustrated in Figure 2. shall assume the role that previously held in , so is a new range of values in which we can work towards (which we previously found to be the difficult reduction for ).
How can we utilize this space towards our goal ?
We want questions to to be answerable by generating accepting paths for instances from . An oracle algorithm that must solve for an can already query in the previous construction , as the construction consists of on such queries. As such, we must only deal with instances in . Due to , we know that always has at least one accepting path on such instances and thus, we can encode such queries in instances of .
From a technical perspective, we build this encoding by computing the index of instances using , and then interpreting this index as an index in using . Put together, on input , we execute the machine . It is easy to see that we reach each value in with exactly one input, since . Just like for inputs outside of , we duplicate this computation of on two simultaneous branches to leave the difficulty of computing a second solution untouched.
In total, our construction for can be sketched as in Figure 3, and yields the following formalization:
Definition 4.5.
Let such that is 1-paddable via . We define as follows:
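The following hedged sketch extends the previous one to the final construction, following the prose of this subsection; splitting the index space by its first letter is our own choice, and the paper's formal Definition 4.5 may differ in details.

```python
# Hedged sketch (ours) of the final machine: the range of the 1-padding function f
# is split into two easily distinguishable halves (here: by the first letter of the
# index).  Stubs as in the previous sketch.

def M_A(x, path): return path == x            # stub machine for A
def M_B(y, path): return path == y + "0"      # stub machine for B
def f(w): return "1" + w                      # stub 1-padding function
def in_range_of_f(x): return x.startswith("1")
def f_inverse(x): return x[1:]

def C(x, path):
    """Behaviour of the final machine on input x along guess `path`."""
    tag, inner = path
    if not in_range_of_f(x):
        # difficult instances: two identical copies of M_A, as before
        return tag in ("copy0", "copy1") and M_A(x, inner)
    w = f_inverse(x)                          # index of x within ran(f)
    if w == "" or w.startswith("0"):
        # first half: behaves like the intermediary construction
        if tag == "simple":
            return inner == ""
        return tag == "B" and M_B(w[1:], inner)
    # second half: run two copies of M_A on the ran(f)-instance encoded by the
    # index, so that first solutions here are exactly as hard as A on ran(f)
    return tag in ("copy0", "copy1") and M_A(f(w[1:]), inner)
```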
4.2 Properties of the Construction
We show that the constructed problem generally has a solution if and only if has a solution. Furthermore, we show that finding a first solution is equally difficult as solving , and that finding a second solution (or deciding whether such exists) is equally difficult as solving (or as deciding whether has a solution, respectively).
Theorem 4.6.
Let , and 1-paddable via . Then, it holds that:
-
1.
-
2.
-
3.
-
4.
Proof.
For the sake of readability, we write “” when referring to . Each time we decompose a function value of into its letters, we make use of the fact that, according to the definition of , each such function value has length at least 1.
Statement 1.
We distinguish the cases from the definition of , and want to show that has function values if and only if has function values.
-
Case 1: . Then, per definition, . If , then . Otherwise, is not empty, so there exists a ; it follows that . In both cases, , so .
-
Case 2: . Then, per definition, . Since always holds, is required to always have a function value. As for all , the construction behaves correctly.
-
Case 3: . Then, per definition, . From it follows that the value exists. From the properties of the 1-padding function , it follows that , so there exists a . Per definition of , it holds that ; since by condition, this behavior is correct.
Statement 2.
We show through reductions in both directions.
-
holds via the following oracle algorithm on input :
-
(a)
If , then let be the answer to oracle query and return .
-
(b)
If , then return 0.
-
(c)
If , then let be the answer to oracle query and return .
In case (b), 0 is always a function value of according to its definition. Cases (a) and (c) behave exactly analogously to the definition of . (Concerning case (a): if the oracle query returns , the algorithm correctly returns the value by definition; since such an oracle answer implies , it also holds that . Concerning case (c): here, always returns a function value; see Statement 1, Case 3.)
-
via the following algorithm on input :
-
(a)
If , then let be the answer to oracle query . If , then return .
-
(b)
If , then let be the answer to oracle query .
-
(c)
Let for . Return .
In case (a), per definition, . If is empty, the oracle answers and the algorithm behaves correctly by returning as well. Otherwise, the oracle provides a result with and , due to which the latter is the correct output in line (c).
To argue for case (b), let . Clearly, , so
Due to the precondition that , it follows that always has at least one function value. Like in case (a), line (c) splits off the first letter, leading to a correct result.
Statement 3.
Follows via simple reduction algorithms derived from the algorithms provided in Statement 4.
Statement 4.
We prove the equivalence by reduction in both directions.
-
via the following oracle algorithm on input :
-
(a)
If , return . From here on, it holds that . Let be decomposed into its letters.
-
(b)
If , return .
-
(c)
If , return .
-
(d)
If and , return .
-
(e)
If and , then let be the answer to oracle query and return .
We argue this algorithm to be correct as follows:
-
–
Lines (a) and (b) check whether input has the correct form for a “second solution” problem. If not, we know to be empty, so our algorithm must not return a function value. Otherwise, due to , we know that is a correct first solution for from here on.
-
–
According to the definition of , flipping the first bit yields a different solution in all cases except ; hence, line (c) acts accordingly.
-
–
Now, only the case remains. The known solution thus is contained in the set , from which we must find a second solution. In line (d), the case that is contained in the right-hand side of this union is handled by returning .
-
–
Line (e) treats the opposite case that solution is contained in the left side of this union, i. e. . In this case, there exists a second solution if and only if is not empty. If it is empty, the oracle query yields , so the algorithm returns . Otherwise, the algorithm returns , which is a correct second solution.
All cases are handled, and the algorithm behaves correctly in each case, so we have shown the reduction .
-
via the following oracle algorithm on input :
-
(a)
Let be the answer to oracle query . If , return .
-
(b)
Otherwise, . Return .
The oracle query corresponds to asking for a second solution in , provided that the first solution is already given. Due to , it holds that
Since the answer to the oracle query must be different from the provided first solution , an answer (if one is given) always has the form . Therefore, the assumption in line (b), as well as the behavior to split off the first letter, are correct. If there are no function values in , the oracle answer is . In this case, the algorithm correctly returns in line (a).
With this, we were able to show that the difficulty of finding a second solution is independent of the difficulty of finding a first solution.
Remark 4.7 (interpretation as machines).
According to Definition 2.11, we can interpret the results of Theorem 4.6 as results about finding accepting paths in NP machines.
Let be 1-paddable and let be any nondeterministic polynomial-time machine that accepts . Let be an arbitrary second nondeterministic polynomial-time machine; we define to be the language accepted by . Then, there exists a machine with the following properties:
-
1.
decides , i. e. the same language as .
-
2.
Finding an accepting path in is equally difficult as finding an accepting path in .
-
3.
Testing whether has a second accepting path is equally difficult as testing whether accepts.
-
4.
Finding a second accepting path in is equally difficult as finding one accepting path in .
Proof.
From Section 2.3, we know that and . The desired machine is ; its properties follow directly from Theorem 4.6.
Corollary 4.8.
For every 1-paddable and every , there exists a such that and .
Proof.
Let be an NP machine for . Then, for , it holds that and by virtue of Theorem 4.6.1 and 4.6.4.
4.3 Properties of the Construction for B inside TFNP
The case grants us the interesting additional property that, given a first solution, finding a second solution in the constructed problem is a total search problem itself (except for invalid inputs).
Corollary 4.9.
For with 1-paddable and , it holds that .
Proof.
Remark 4.10 (interpretation as machines).
Again, we interpret Corollary 4.9 in terms of finding accepting paths in NP machines. Let be 1-paddable and let be any NP machine that accepts . Let be an arbitrary second nondeterministic polynomial-time machine that always accepts on at least one path; its accepted language is thus . Then, there exists a machine with the properties from Remark 4.7 for which it additionally holds that whenever accepts, it has at least two accepting paths.
With this, we have shown that, even in the special case of machines that never have exactly one accepting path, finding a second accepting path can still assume any difficulty out of all possible difficulties, that is, out of TFNP.
4.4 Properties of the Construction without dom(A) 1-paddable
How can we apply the results from Theorem 4.6 if we are only given two function problems A and B, but do not know dom(A) to be 1-paddable? By applying Proposition 3.6, we can still reach a weakened conclusion.
Corollary 4.11.
For and , there exists a such that:
-
1.
-
2.
-
3.
-
4.
Proof.
Let with and 1-paddable such that from Proposition 3.6. The desired problem is now according to Theorem 4.6.
When only given these prerequisites, we have to abstain from the property that has solutions for exactly those instances that has solutions for; in other words, we lose control over . However, we still obtain a problem whose first and second solution difficulties adapt exactly to our two provided problems. This implies that every combination of difficulties for first and second solutions can appear in function problems; no possibility can be ruled out. Corollary 4.9 can be transferred in the same way.
4.5 Shared Degree Structure Results
We obtain the following statements, which are formalized by Corollary 4.12:
-
The set of all second solution problems has the same degrees as the entire class FNP.
-
This holds even if we restrict to second solution problems of search problems that correspond to a single 1-paddable .
-
Alternatively, we can restrict to second solution problems of search problems that assume an arbitrary difficulty .
The last statement implies that in every FNP degree, one can find a problem that matches an arbitrary difficulty for its second solution.
Corollary 4.12.
For every 1-paddable and every , the following classes share a degree structure w. r. t. :
-
FNP
-
-
-
Proof.
5 Conclusion and Open Questions
Our results show that the difficulty of a search problem and its second solution variant are independent, and that second solution problems have the same degree structure as the entire class FNP. Our construction focuses on the problem of finding a second solution when given a first one. Consider the following extensions of this, using the generic notion of solution by Yato and Seta [17]:
-
1.
Compute an -th solution when given solutions for some fixed .
-
2.
Compute an additional solution when given a list of solutions of any length.
-
3.
Compute all remaining solutions when given solutions for some fixed .
Analyzing these extensions would provide a further understanding of how different solutions to a problem relate to each other. Is it possible to translate our results to these more general cases? These extensions could be related to enumeration problems, where one computes a list of all solutions of a problem. Typically, enumeration problems deal with “natural” solutions like satisfying assignments of formulas. Analyzing the above extensions would give some insight into the complexity of enumerating all solutions of arbitrary FNP problems.
In Subsection 4.4, the main result is transferred to the situation where is not 1-paddable, but without achieving the equality between the domains of and the constructed search problem. We are interested in whether the requirement that is 1-paddable is necessary for this property, or if a weaker (or even no such) requirement suffices.
References
- [1] Eric W. Allender and Roy S. Rubinstein. P-Printable Sets. SIAM Journal on Computing, 17(6):1193–1202, December 1988. doi:10.1137/0217075.
- [2] Leonard C. Berman and Juris Hartmanis. On Isomorphisms and Density of NP and Other Complete Sets. SIAM Journal on Computing, 6(2):305–322, June 1977. Publisher: Society for Industrial and Applied Mathematics. doi:10.1137/0206023.
- [3] Argyrios Deligkas, George B. Mertzios, Paul G. Spirakis, and Viktor Zamaraev. Exact and Approximate Algorithms for Computing a Second Hamiltonian Cycle. LIPIcs, Volume 170, MFCS 2020, 170:27:1–27:13, 2020. doi:10.4230/LIPICS.MFCS.2020.27.
- [4] Ding-Zhu Du and Ker-I Ko. Theory of Computational Complexity. John Wiley & Sons, Incorporated, Somerset, United States, 2014. doi:10.1002/9781118595091.
- [5] Stephen Fenner, Frederic Green, Steven Homer, Alan L. Selman, Thomas Thierauf, and Heribert Vollmer. Complements of multivalued functions. Chicago Journal of Theoretical Computer Science, 1999. Article 3 of volume 1999. doi:10.4086/cjtcs.1999.003.
- [6] Stephen Fenner, Steven Homer, Mitsunori Ogihara, and Alan L. Selman. Oracles that compute values. SIAM Journal on Computing, 26:1043–1065, 1997. doi:10.1137/S0097539793247439.
- [7] Paul W. Goldberg and Christos H. Papadimitriou. TFNP: An Update. In Algorithms and Complexity, pages 3–9, Cham, 2017. Springer International Publishing. doi:10.1007/978-3-319-57586-5_1.
- [8] Juris Hartmanis, Neil Immerman, and Vivian Sewelson. Sparse sets in NP-P: EXPTIME versus NEXPTIME. Information and Control, 65(2-3):158–181, May 1985. doi:10.1016/S0019-9958(85)80004-8.
- [9] Juris Hartmanis and Yaacov Yesha. Computation times of NP sets of different densities. Theoretical Computer Science, 34(1):17–32, January 1984. doi:10.1016/0304-3975(84)90111-7.
- [10] Lane A. Hemaspaandra, Ashish V. Naik, Mitsunori Ogihara, and Alan L. Selman. Computing solutions uniquely collapses the polynomial hierarchy. SIAM Journal on Computing, 25:697–708, 1996. doi:10.1137/S0097539794268315.
- [11] Mark W. Krentel. The complexity of optimization problems. Journal of Computer and System Sciences, 36(3):490–509, 1988. doi:10.1016/0022-0000(88)90039-6.
- [12] Nimrod Megiddo and Christos H. Papadimitriou. On total functions, existence theorems and computational complexity. Theoretical Computer Science, 81(2):317–324, 1991. doi:10.1016/0304-3975(91)90200-L.
- [13] Alan L. Selman. A survey of one-way functions in complexity theory. Mathematical Systems Theory, 25:203–221, 1992. doi:10.1007/BF01374525.
- [14] Alan L. Selman. A taxonomy on complexity classes of functions. Journal of Computer and System Sciences, 48(2):357–381, 1994. doi:10.1016/S0022-0000(05)80009-1.
- [15] Alan L. Selman. Much ado about functions. In Proceedings 11th Conference on Computational Complexity, pages 198–212. IEEE Computer Society Press, 1996. doi:10.1109/CCC.1996.507682.
- [16] Nobuhisa Ueda and Tadaaki Nagao. NP-completeness Results for NONOGRAM via Parsimonious Reductions. Technical Report TR96-0008, Department of Computer Science, Tokyo Institute of Technology, 1996. URL: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=1bb23460c7f0462d95832bb876ec2ee0e5bc46cf.
- [17] Takayuki Yato and Takahiro Seta. Complexity and completeness of finding another solution and its application to puzzles. IEICE TRANSACTIONS on Fundamentals, 86-A:1052–1060, May 2003. URL: http://search.ieice.org/bin/summary.php?id=e86-a_5_1052.
