Improved Separation Between Quantum and Classical Computers for Sampling and Functional Tasks
Abstract
This paper furthers existing evidence that quantum computers are capable of computations beyond classical computers. Specifically, we strengthen the collapse of the polynomial hierarchy to the second level if: (i) quantum computers with postselection are as powerful as classical computers with postselection (PostBQP = PostBPP), or (ii) any one of several quantum sampling experiments (BosonSampling, IQP, DQC1) can be approximately performed by a classical computer (contingent on existing assumptions). This last result implies that if any of these experiments' hardness conjectures hold, then quantum computers can implement functions classical computers cannot (FBQP ≠ FBPP) unless the polynomial hierarchy collapses to its 2nd level. These results improve over previous work, which either achieved a collapse only to the third level or was concerned with exact sampling, a physically impractical case.
The workhorse of these results is a new technical complexity-theoretic result which we believe could have value beyond quantum computation. In particular, we prove that if there exists an equivalence between problems solvable with an exact counting oracle and problems solvable with an approximate counting oracle, then the polynomial hierarchy collapses to its second level, indeed to ZPP^NP.
Keywords and phrases: Quantum advantage, Approximate counting, Boson sampling
Funding: Simon C. Marshal: supported by the project NEASQC funded from the European Union's Horizon 2020 research and innovation programme (grant agreement No 951821) and by an unrestricted gift from Google Quantum AI.
2012 ACM Subject Classification: Theory of computation → Complexity classes; Theory of computation → Quantum complexity theory
Acknowledgements: SCM thanks Jan H. Kirchner for their insightful comments.
Editors: Srikanth Srinivasan
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Quantum computers are thought to be poised to outperform classical computers on a number of significant tasks, such as integer factorisation or the simulation of quantum systems. Despite this widely held belief, we have no formal proof that factoring is not solvable in polynomial time on a classical computer, or even that BPP ≠ BQP.
This is perhaps not surprising given the challenge posed by proving unconditional results in complexity theory; famously, the P versus NP question remains unresolved after decades of study. A more realistic approach is to base the hardness of quantum computing on widely accepted complexity-theoretic conjectures.
This approach has so far been quite successful for sampling problems, with a number of papers [3, 8] showing that various quantum sampling experiments would be hard for a classical algorithm unless the polynomial hierarchy collapses to its third level, or else a widely believed conjecture fails. These results are especially interesting as they naturally imply that quantum computers could implement functions classical computers could not (FBQP ≠ FBPP) due to a surprising equivalence between sampling and functional classes [2].
In a similar vein, there has been a body of work showing that various changes to the model make quantum computing vastly more powerful than similarly modified classical computing. A well-known example is PostBQP vs PostBPP, i.e. quantum/classical computing with postselection. Surprisingly, the former is in some sense equivalent to exact counting (PostBQP = PP) [1], while the latter is only equivalent to approximate counting [13]. This disconnect implies that the classes are likely not equal; indeed, if they were equal this would imply a collapse of the polynomial hierarchy to the third level [1].
Both of these research lines provide strong evidence that quantum computing is more powerful than classical computing, but they rely on a number of conjectures and complexity-theoretic conditions. We are therefore motivated to attempt to minimise or remove the need for these conjectures/conditions until we are left with compelling evidence that quantum computing is fundamentally stronger than classical computing.
Hope that weaker conditions might be possible was provided by [10], who showed that if classical computers can exactly sample from any one of a number of quantum supremacy experiments then the polynomial hierarchy would collapse to its second level, an improvement over the existing collapse to the third level. In fact they push the collapse all the way to SBP, inside the second level of the hierarchy. While this result provides insight as to what may be possible, it stops short of our goal, as the proof strategy did not extend to additive approximate sampling, which is arguably the realistic case [3] and is the case relevant for questions such as whether SampBQP = SampBPP.
In this paper, we provide the following new complexity-theoretic result, which we then use to improve the abovementioned results to a 2nd level polynomial hierarchy collapse. In the following theorem, we use BPP^{||NP} to denote BPP^{NP} restricted to only one round of parallel oracle calls.
Theorem 1 (Main).
If P^{#P} = BPP^{||NP} then PH = P^{#P} = ZPP^{NP}.
By showing that the existing collapse results for classical boson sampling, commuting (IQP) circuit sampling and 1 clean qubit sampling can all be modified to show P^{#P} = BPP^{||NP} (we are required to add the parallel requirement to BPP^{NP} for our collapse), we improve the implied collapse of the polynomial hierarchy from the 3rd level to the 2nd. Consequently, this strengthens the evidence that SampBQP ≠ SampBPP. This theorem can also be made to strengthen the hierarchy collapse implied by assuming PostBQP = PostBPP from the third to the second level.
From a purely complexity-theoretic perspective, there is an interesting alternative form of this result in terms of exact and approximate counting. (Exact counting can be defined as exactly counting the number of accepting inputs of some poly-time machine; approximate counting is getting within a multiplicative factor of 2 of the exact count, equivalently to within a 1 + 1/poly(n) factor.)
If the problems solvable with an exact counting oracle are exactly the problems solvable with (one parallel round of) approximate counting queries, then the polynomial hierarchy collapses to ZPP^{NP}.
This naturally gives our main result the following interpretation: if the problems solved with the aid of exact counting can also be solved with approximate counting, then both are contained in ZPP^{NP}. The generality of this interpretation hints that there may be applications of this theorem to counting beyond the quantum theory considered herein. It is also interesting to note that the proof contained herein does not relativise, as we rely on the checkability of P^{#P}-complete problems for a step of our proof.
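To fix notation, the following is our paraphrase of the parenthesised definitions above. For a polynomial-time computable f : {0,1}^n → {0,1}, exact counting asks for the quantity

E(f) = |{x ∈ {0,1}^n : f(x) = 1}|,

while approximate counting only asks for some value Ẽ(f) satisfying

E(f)/2 ≤ Ẽ(f) ≤ 2 · E(f),

a guarantee that standard amplification tightens to any 1 + 1/poly(n) factor. The interpretation above then reads: if every problem solvable with an E-oracle is solvable with an Ẽ-oracle, the polynomial hierarchy collapses to ZPP^{NP}.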
The paper is structured as follows. Section 2 focuses on the proof of our central theorem; Section 3 proves that the polynomial hierarchy collapses to the second level if PostBQP = PostBPP; Section 4 proves our various sampling results. Finally, we close by discussing interesting future research and open questions.
2 Main theorem
In this section we prove our main theorem:
Theorem 2.
If P^{#P} = BPP^{||NP} then PH = P^{#P} = ZPP^{NP}.
We will assume readers are broadly familiar with standard classes such as NP, BPP, #P and PH; where classes are undefined we use the definitions given in the Complexity Zoo [4]. We use the notation BPP^{||NP} for BPP with one set of parallel oracle calls to an NP oracle.
It is useful to first give an informal description of the proof of theorem 2 before we proceed to the formal proof, to give the reader a better idea of the purpose of each lemma. Broadly, the proof relies on the notion of random-reducibility, where a given instance of a problem can be solved by a polynomial time algorithm with access to randomly selected instances of some other problem. If a problem can be randomly reduced to itself we say it is random self-reducible. The first key realisation for Theorem 2 is that a random self-reducible language that is in BPP^{||NP} is itself random reducible to SAT, because composing the random reduction with the BPP^{||NP} algorithm gives rise to a random set of non-adaptive calls to an NP oracle, which meets the definition of random reducibility to SAT. Next, as #P is random self-reducible [9], the antecedent of Theorem 2 implies that P^{#P} is random reducible to SAT. The second key realisation is to notice that the proof by Feigenbaum and Fortnow that SAT cannot be random self-reducible unless it is also in NP/poly ∩ coNP/poly extends beyond self-reducibility; any language that is random reducible to a language in NP is in NP/poly ∩ coNP/poly. By repeating the argument for P^{#P} we show P^{#P} ⊆ NP/poly ∩ coNP/poly. Arvind and Köbler [6] showed any checkable language in NP/poly ∩ coNP/poly is low for BPP^{NP}, which gives us a polynomial hierarchy collapse to BPP^{NP}. The final collapse to ZPP^{NP} comes from using the BPP^{NP} algorithm to produce proofs verifiable with an NP oracle, thereby achieving zero error.
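Schematically, the chain of implications established in this section is:

P^{#P} = BPP^{||NP} ⟹ (Thm 6) P^{#P} is rr to SAT ⟹ (Thm 10) P^{#P} ⊆ NP/poly ∩ coNP/poly ⟹ (Lem 12) PH = BPP^{NP} ⟹ (Lem 13) PH = P^{#P} = ZPP^{NP}.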
We begin by defining random reducibility. A problem is randomly self-reducible if any given instance of the problem can be solved probabilistically by a polynomial time algorithm with access to solved random instances of the same problem. For our purposes we will work with a similar concept: random-reducibility, which is the same except the random instances may be from a different problem. It should be stated that this is different from a “randomized reduction”, which is a randomised Karp reduction. Random reductions are sometimes called 1-locally random reductions to avoid this confusion.
Definition 3 (Random Reducible).
A function f is nonadaptively k(n)-random-reducible to a function g if there are polynomial time computable functions σ and φ with the following two properties.
1. For all n and all x ∈ {0,1}^n, f(x) can probably be solved by φ with the k(n) instances of g selected by σ:
Pr_r [ f(x) = φ(x, r, g(σ(1, x, r)), …, g(σ(k(n), x, r))) ] ≥ 2/3,
where r is drawn uniformly at random from {0,1}^{poly(n)}.
2. For all n, all pairs x_1, x_2 ∈ {0,1}^n and all i ≤ k(n), if r is chosen uniformly at random then σ(i, x_1, r) and σ(i, x_2, r) are identically distributed.
If both conditions hold we say (σ, φ) is a k(n)-random-reduction from f to g.
As we are dealing with no other forms of random reducibility we will just say "f is rr to g" when f is non-adaptively k(n)-random-reducible to g for some polynomial k(n). If a function is rr to itself then we say the function is random self-reducible (rsr). A set is rr to another if its characteristic function is rr to the other's. A set of languages C is rr to a set of languages D if every L ∈ C is rr to some L′ ∈ D.
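To make the definition concrete, here is a minimal Python sketch (our own toy illustration, not drawn from the cited papers) of a random self-reduction for a linear function over GF(2): f(x) = f(x ⊕ r) ⊕ f(r), where each of the two queries on its own is uniformly distributed, matching condition (2), and the combination is always correct, matching condition (1) with zero error.

```python
import secrets

N = 8                                           # instance size in bits
A = [secrets.randbelow(2) for _ in range(N)]    # hidden coefficient vector

def f(x):
    """Target function: inner product of x with A, mod 2 (linear over GF(2))."""
    return sum(a * xi for a, xi in zip(A, x)) % 2

def sigma(i, x, r):
    """Query selector: query 1 is x XOR r, query 2 is r itself.
    Each query alone is uniform regardless of x (condition (2))."""
    return [xi ^ ri for xi, ri in zip(x, r)] if i == 1 else list(r)

def phi(x, r, answers):
    """Combiner: by linearity, f(x) = f(x XOR r) XOR f(r) (condition (1))."""
    return answers[0] ^ answers[1]

x = [1, 0, 1, 1, 0, 0, 1, 0]
r = [secrets.randbelow(2) for _ in range(N)]
assert phi(x, r, [f(sigma(i, x, r)) for i in (1, 2)]) == f(x)
```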
As with probabilistic classes like BPP, we can boost random reducibility to an arbitrarily low (2^{−poly(n)}) failure probability.
Lemma 4 ([9]).
If a function f is non-adaptively k(n)-random-reducible to g, then for every polynomial q, f is non-adaptively poly(n)-random-reducible to g where condition (1) holds for at least a 1 − 2^{−q(n)} fraction of the r's in {0,1}^{poly(n)}.
Remark.
In general, this boosting may not work for a definition of random reducibility dealing with relations instead of functions, as relations may have multiple correct outputs. However, counting problems like #SAT, which have only one correct output for a given input, can be boosted.
Lemma 5 ([9]).
Any #P-complete problem is rsr.
This concludes our definitions; we can now state an intermediate theorem towards our final result.
Theorem 6.
If P^{#P} = BPP^{||NP} then P^{#P} is random reducible to SAT.
Before we prove this, we will provide a number of smaller results. In the following lemma the notation P^{#P[1]} means P with one oracle call to #P.
Lemma 7.
Any language in P^{#P[1]} is random reducible to #SAT.
Proof.
As #P-complete problems are random self-reducible (lemma 5) and only one oracle call is needed, we can use a random reduction of #SAT to give the answer to the oracle call and use the polynomial time algorithm to perform the rest of the computation.
The next lemma captures the intuition that BPP^{||NP} algorithms are "almost" rr to SAT, only missing the random element selection part of the definition (property (2)).
Lemma 8.
For every L in BPP^{||NP} there exist polynomial time computable functions σ_L and φ_L such that
Pr_s [ L(y) = φ_L(y, s, SAT(σ_L(1, y, s)), …, SAT(σ_L(m(n), y, s))) ] ≥ 2/3.
Proof.
If L ∈ BPP^{||NP} then there exists a polynomial time algorithm, M, to decide L which makes only one set of up to m(n) non-adaptive queries to SAT. We assume exactly m(n) calls for simplicity. Let σ_L(i, y, s) run M with random string s until the step involving oracle queries and then output just the i'th query (assuming some arbitrary query ordering). The algorithm for φ_L is now just M using the supplied oracle answers for the queries asked by σ_L in place of directly making its own calls. Since both σ_L and φ_L have access to the same random string they will both select the same oracle calls.
Lemma 8 is useful as it proves that a BPP^{||NP} language fulfils property (1) of the definition of random reducibility to SAT, but it does not establish the random distribution of the queries. The next lemma shows that if a language already randomly reduces to a language in BPP^{||NP} then this randomness condition (condition (2)) is also fulfilled, and thus the language is random reducible to SAT.
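The decomposition in the proof of lemma 8 is mechanical; the following minimal Python sketch (our own, with a toy stand-in machine) shows the idea of splitting one algorithm into a query-producing part σ and an answer-consuming part φ that share the same coins.

```python
import random

def machine(y, rng):
    """Toy stand-in for a BPP^{||NP} machine: randomized preprocessing fixes a
    batch of parallel queries, then postprocessing combines the oracle answers."""
    queries = [(y, rng.getrandbits(16), i) for i in range(3)]
    postprocess = lambda answers: sum(answers) % 2
    return queries, postprocess

def sigma(i, y, seed):
    """Run the machine up to its (single) query step; output only query i."""
    queries, _ = machine(y, random.Random(seed))
    return queries[i]

def phi(y, seed, answers):
    """Re-run the machine on the same coins, substituting the given answers."""
    _, postprocess = machine(y, random.Random(seed))
    return postprocess(answers)

# Because sigma and phi share the random seed, they agree on the queries asked.
assert sigma(0, "x", 7) == machine("x", random.Random(7))[0][0]
```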
Lemma 9.
All languages random reducible to a language in BPP^{||NP} are random reducible to SAT.
Proof.
Let L be a language which is random reducible to some A ∈ BPP^{||NP}. We will demonstrate that L is rr to SAT by writing out the definition of random reducibility to A, then using lemma 8 to substitute the calls to A with a formula using calls to SAT. We can then rearrange this into a format which directly makes calls to SAT, and we show that these calls inherit the randomness property from the random reduction of L to A.
Let us assume the random reduction of L to A on input x ∈ {0,1}^n uses k = k(n) calls. From the definition of random reducibility there exist σ and φ such that

Pr_r [ L(x) = φ(x, r, A(y_1), …, A(y_k)) ] ≥ 5/6,    (1)

where we define y_i = σ(i, x, r) and have used lemma 4 to boost the success probability.

Using lemma 8 we can substitute each A(y_i) with φ_A(y_i, s_i, SAT(σ_A(1, y_i, s_i)), …, SAT(σ_A(m, y_i, s_i))), using a fresh random string s_i for the i'th call to A. In the following we assume each lemma-8 reduction uses m = m(n) calls and has been amplified to failure probability at most 1/(6k):

Pr_{r, s_1, …, s_k} [ L(x) = φ(x, r, φ_A(y_1, s_1, SAT(σ_A(1, y_1, s_1)), …), …, φ_A(y_k, s_k, …, SAT(σ_A(m, y_k, s_k)))) ] ≥ 2/3.    (2)

The failure probability comes from the failure probability of the k lemma-8 reductions (at most k · 1/(6k) = 1/6 in total) combined with the failure probability of the random reduction (at most 1/6).
We can now begin to combine elements of the reduction from L to A with the reduction from A to SAT. We start with σ, which we combine with σ_A to define σ′; informally we can think of running the two polynomial-time algorithms in sequence as one larger polynomial time algorithm. We use "r·s" to denote s appended to r. Define

σ′(⟨i, j⟩, x, r·s_1⋯s_k) = σ_A(j, σ(i, x, r), s_i).

Again we define a new function, φ′, as the combination of two existing functions, φ and φ_A: φ′ feeds the answers of block i into φ_A(y_i, s_i, ·) and the resulting k values into φ. This gives the following simplified equation:

Pr_{r·s_1⋯s_k} [ L(x) = φ′(x, r·s_1⋯s_k, SAT(σ′(⟨1,1⟩, x, r·s_1⋯s_k)), …, SAT(σ′(⟨k,m⟩, x, r·s_1⋯s_k))) ] ≥ 2/3.    (3)
We will now directly check the definitions of σ′ and φ′, and equation (3), against the definition of random-reducible; recall the two conditions of definition 3:
1. For all n and all x ∈ {0,1}^n, L(x) can probably be solved by φ′ with instances of SAT selected by σ′.
2. For all n, all pairs x_1, x_2 ∈ {0,1}^n and all i, if the random string is chosen uniformly at random then σ′(i, x_1, ·) and σ′(i, x_2, ·) are identically distributed.
We can see that these two conditions are met:
1. Equation (3) holds with probability at least 2/3, therefore it is exactly of the form required by condition (1), with km calls to SAT.
2. For all n, all pairs x_1, x_2 ∈ {0,1}^n, all i and j: if r is chosen uniformly at random then σ(i, x_1, r) and σ(i, x_2, r) are identically distributed (as per their definition), therefore σ_A(j, σ(i, x_1, r), s_i) and σ_A(j, σ(i, x_2, r), s_i) are identically distributed too. This is because the only dependence on x is through the identically distributed σ(i, x, r), while s_i is independent of x. This holds for all x_1, x_2.
Thus σ′ and φ′ fulfil the conditions for L to be random reducible to SAT.
Finally, we can proceed with the proof of theorem 6.
Proof of theorem 6.
Toda [15] showed that P^{#P} = P^{#P[1]}, i.e. that a single #P oracle call suffices. Therefore, by the assumption of the theorem, if L ∈ P^{#P} = BPP^{||NP} then L ∈ P^{#P[1]}.
By lemma 7 we know every language in P^{#P[1]} is rr to #SAT. Under the assumption of this theorem, #SAT can be evaluated in (a function version of) BPP^{||NP}, so P^{#P} is rr to a problem computable in BPP^{||NP}. By lemma 9 this means P^{#P} is rr to SAT.
The next result is heavily based on the work of Feigenbaum and Fortnow [9]. They show that if SAT is random self-reducible then SAT is in NP/poly ∩ coNP/poly; while the result we are trying to show is slightly different from this, the method is very similar.
Theorem 10.
Any language that is random-reducible to a language in NP is also in NP/poly ∩ coNP/poly.
Proof.
Let AM^{/poly} be AM with polynomial advice given to Arthur. (This is not the same as AM/poly, as the latter must be an AM language for all advice, whereas the former only needs to be well-behaved for the correct advice.) It turns out that AM^{/poly} = NP/poly and coAM^{/poly} = coNP/poly [9]. We will prove L is in AM^{/poly} ∩ coAM^{/poly}, beginning with AM^{/poly}.
Suppose L is non-adaptively k(n)-random-reducible to A ∈ NP, with the failure probability of the reduction boosted (lemma 4) to at most 1/(9km), where k = k(n) and m = m(n) is a large polynomial fixed below. We will adopt the notation y_{i,j} = σ(i, x, r_j). For instance size n we will give Arthur the advice (μ_1, …, μ_k), where μ_i is the probability that σ(i, x, r) ∈ A for uniformly random r.
As σ(i, x, r) is distributed identically (given uniform random r) for all x, this advice does not depend on the input x.
The proof system will consist of Arthur selecting m random strings, r_1, …, r_m, and passing these to Merlin; these random strings define which queries, y_{i,j}, will be made. Merlin will hand back "transcripts". These transcripts consist of a list in which each entry is either a witness for y_{i,j} ∈ A or the string "NIL" (which is interpreted as the claim that y_{i,j} ∉ A). For each of the random strings r_j we get the transcript T_j = (w_{1,j}, …, w_{k,j}).
Arthur performs three checks:
1. For all (i, j), either w_{i,j} is "NIL" or he checks that w_{i,j} is a valid witness for y_{i,j} ∈ A.
2. Let a_{i,j} be 0 if w_{i,j} is "NIL" and one otherwise. For all j, Arthur checks that φ(x, r_j, a_{1,j}, …, a_{k,j}) = 1, i.e. if Merlin has told the truth then the reduction will accept.
3. For each i, Arthur checks that more than μ_i m − m^{2/3} of the y_{i,1}, …, y_{i,m} have been proved to be in A; if this condition does not hold he rejects.
If these checks all pass, then Arthur accepts. The trick in this proof is this final condition. Over the m random strings, each query has some probability μ_i of being in A with a given variance; by picking m to be a large polynomial we force the relative deviation to be small, so with high probability the honest count lies in the checked range. Since Merlin cannot lie about yes instances (he must prove them), he can only lie about no instances. However, if he lied too much, it would be clear that not enough of the y_{i,j} are proved to be in A, and we could probabilistically detect that he was cheating. To exploit this, m is picked precisely so that Merlin cannot cheat on all m reduction evaluations without exceeding his "lying budget". We will now formally show soundness and completeness.
Completeness. If x ∈ L and Merlin is honest, providing witnesses for all y_{i,j} in A, condition (1) will always hold. Condition (2) holds for all j simultaneously with probability at least 1 − m · 1/(9km) = 1 − 1/(9k) (by the boosted definition of rr). Therefore we just need to show condition (3) holds with high probability.
Let t_{i,j} be 1 if y_{i,j} ∈ A and 0 otherwise, and let S_i = Σ_{j=1}^m t_{i,j}. We defined our advice so that E[t_{i,j}] = μ_i, from which we can derive E[S_i] = μ_i m and Var[S_i] ≤ m/4 (the t_{i,j} are independent over j). We can use these properties and Chebyshev's inequality to bound the probability of failing check (3):

Pr[ S_i ≤ μ_i m − m^{2/3} ] ≤ Pr[ |S_i − μ_i m| ≥ m^{2/3} ] ≤ Var[S_i] / m^{4/3} ≤ 1/(4 m^{1/3}).

The probability of failing any check given x ∈ L is then less than 1/(9k) + k/(4 m^{1/3}), which is less than 1/3 for large enough m. This proves completeness.
Soundness. Suppose x ∉ L; if Arthur accepts then all 3 checks must have passed. If some y_{i,j} is in A but Merlin has provided "NIL" then we say Merlin has lied about y_{i,j}; if check (1) passes Merlin has not lied about any "yes" instances, so we disregard this case. Without any lies, the probability that a given evaluation of the reduction outputs 1 despite x ∉ L is at most 1/(9km), so the probability that any of the m evaluations does so is at most 1/(9k). Outside this low-probability event, Merlin must lie about at least one query in each of the m transcripts to pass check (2), i.e. he must lie at least m times in total.
If Merlin lies m times then there must be an i for which he claims at least m/k of the y_{i,j} are not in A when they are. To satisfy condition (3) for this i, at least μ_i m − m^{2/3} + m/k of the y_{i,j} must then truly be in A, to leave "room" for Merlin to lie m/k times without violating (3). The probability of this is bounded by a Chernoff bound on the random variable S_i, as defined above: taking m large enough that m/k − m^{2/3} ≥ m/(2k),

Pr[ S_i ≥ μ_i m + m/(2k) ] ≤ exp(−m/(2k^2)).

Combining this with the probability that the random reduction fails even with correct oracle answers (at most 1/(9k)) gives an acceptance probability, given x ∉ L, of less than 1/3 for large enough m. This proves soundness.
The previous analysis can just as easily be repeated for coAM^{/poly}, with Merlin proving "no" instances, giving L ∈ AM^{/poly} ∩ coAM^{/poly}. By Feigenbaum and Fortnow [9] we know this equals NP/poly ∩ coNP/poly.
To prove the next theorem we must recall a crucial result by Arvind and Köbler, which concerns checkability [6, 7]. The notion of checkability is somewhat involved, but intuitively it has to do with whether efficient programs that decide a set can themselves be efficiently checked. Since we will just need the checkability of P^{#P}-complete problems to deploy the following theorem, we will not formally define it; instead we refer the reader to [6, 7].
Theorem 11 ([6], Theorem 22).
Checkable sets in NP/poly ∩ coNP/poly are low for BPP^{NP}.
This result is imperative to proceed, as while it is true that P^{#P} ⊆ NP/poly ∩ coNP/poly implies a collapse of the polynomial hierarchy to its third level by Karp–Lipton-style arguments alone, the same arguments do not bring the collapse down to the second level; for that we need checkability.
Lemma 12.
If P^{#P} ⊆ NP/poly ∩ coNP/poly then PH = P^{#P} = BPP^{NP}.
Proof.
By Toda [15], PH ⊆ P^{#P}. If P^{#P} is in NP/poly ∩ coNP/poly then clearly so is any P^{#P}-complete set A; combining this with the existence of P^{#P}-complete checkable sets [7, 6] we know that A is low for BPP^{NP}.
This lowness implies we can put P^{#P} in BPP^{NP} by the following series of inclusions:
P^{#P} = P^{A} ⊆ BPP^{NP^{A}} = BPP^{NP}.
As PH ⊆ P^{#P} (Toda [15]) we get PH ⊆ BPP^{NP}. Since BPP^{NP} ⊆ PH, this gives us the final equality: PH = P^{#P} = BPP^{NP}.
The previous lemma has achieved a collapse of the hierarchy, but only to a bounded-error class; we can collapse to ZPP^{NP} by using the provability of languages in the collapsed hierarchy to make the BPP^{NP} algorithm zero-error.
Lemma 13.
If PH = BPP^{NP} then PH = ZPP^{NP}.
Proof.
Fix some L ∈ PH. As PH has collapsed, L has both a Σ_2 and a Π_2 characterisation, and we can check whether x ∈ L using the BPP^{NP} algorithm. Use the program to determine a proof of this fact: an existential witness for either the Σ_2 verifier (if the algorithm claims x ∈ L) or the Π_2 verifier of the complement (if it claims x ∉ L); witness search is itself a problem in PH and so, under the collapse, lies in BPP^{NP}. We can test either proof with a single further NP oracle call, using the verifiers from either the Σ_2 or the Π_2 characterisation. With probability at least 2/3 the algorithm succeeds in producing a valid proof in polynomial time, and any valid proof proves or disproves x ∈ L. Therefore in polynomial time the algorithm has successfully determined whether x ∈ L with probability at least 2/3 and never incorrectly answers, placing L in ZPP^{NP}.
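The zero-error wrapper has a simple shape, sketched below in schematic Python (ours; bpp_decide, find_certificate and verify are hypothetical stand-ins for the bounded-error decider, the witness search, and the one-oracle-call verification).

```python
def zpp_style_decide(x, bpp_decide, find_certificate, verify):
    """Never answers incorrectly; outputs '?' with small probability instead."""
    guess = bpp_decide(x)               # bounded-error guess (may be wrong)
    cert = find_certificate(x, guess)   # try to certify the guess; may fail
    if cert is not None and verify(x, guess, cert):
        return guess                    # a certified answer is always correct
    return "?"                          # refuse to answer rather than err
```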
This completes the proof of our main theorem which is given below.
Theorem 14.
If P^{#P} = BPP^{||NP} then PH = P^{#P} = ZPP^{NP}.
Summary of proof.
If P^{#P} = BPP^{||NP} then P^{#P} is random reducible to SAT (Theorem 6).
If P^{#P} is random reducible to SAT then P^{#P} is in NP/poly ∩ coNP/poly (Theorem 10).
If P^{#P} ⊆ NP/poly ∩ coNP/poly then PH = P^{#P} = BPP^{NP} (Lemma 12).
If PH = BPP^{NP} then PH = P^{#P} = ZPP^{NP} (Lemma 13).
Remark.
Interestingly, we can perform all of the oracle calls required to produce the proof of x ∈ L (or x ∉ L) in one parallel step; it then only takes one extra oracle call to check this proof. Thus it is possible to answer correctly with probability at least 2/3, or else with "I don't know", after only 2 rounds of parallel oracle calls.
Remark.
Feigenbaum and Fortnow [9] showed their proof applies beyond non-adaptive random self-reducibility: up to logarithmically many adaptive rounds are allowed. For the same reasons, our proof could be extended to logarithmically many rounds of polynomially many parallel oracle calls.
The following theorem formalises the "natural interpretation" of our result given in the introduction.
Theorem 15 (Natural interpretation).
If the problems solvable with an exact counting oracle are exactly the problems solvable with (one parallel round of) approximate counting queries, then the polynomial hierarchy collapses to ZPP^{NP}.
This result becomes easy to show after Theorem 18, so we will prove it at the end of the next section.
3 PostBQP and PostBPP
An immediate consequence of the last section's main theorem is to the question of PostBQP vs PostBPP, i.e. is a poly-time quantum computer with postselection equal to a classical computer with postselection? The best existing answer was given when Aaronson showed PostBQP = PP [1]; thus equality would imply a collapse of the polynomial hierarchy to the third level. However, as PostBPP ⊆ BPP^{||NP} [13, 11], our theorem can be used to improve this collapse to the second level. Before we provide proof of this we formally define PostBQP and the BP operator which makes a complexity class probabilistic.
Definition 16.
PostBQP is the set of languages L for which there exists a uniform family of polynomial width and polynomial depth quantum circuits {C_n} such that for any input x:
1. There is a non-zero probability of measuring the first qubit of C_{|x|}|x⟩ to be |1⟩.
2. If x ∈ L, conditioned on the first qubit being |1⟩, the second qubit is |1⟩ with probability at least 2/3.
3. If x ∉ L, conditioned on the first qubit being |1⟩, the second qubit is |1⟩ with probability at most 1/3.
PostBPP can be defined similarly with classical circuits and an additional input of uniformly random bits.
Definition 17.
Let K be a complexity class. A language L is in BP·K if there exists a language L′ ∈ K and a polynomial p such that for all strings x: if x ∈ L then Pr_{y ∈ {0,1}^{p(|x|)}} [⟨x, y⟩ ∈ L′] ≥ 2/3, and if x ∉ L then Pr_{y ∈ {0,1}^{p(|x|)}} [⟨x, y⟩ ∈ L′] ≤ 1/3.
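For orientation, two standard instances of this operator: BP·P = BPP and BP·NP = AM. Below we will apply it to BPP^{||NP} (lemma 19).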
This covers the necessary definitions and we can proceed with the collapse result of interest.
Theorem 18.
If PostBQP = PostBPP then P^{#P} = ZPP^{NP} and the polynomial hierarchy collapses to the second level.
Proof.
Aaronson showed PostBQP = PP [1], and therefore P^{PostBQP} = P^{PP} = P^{#P}. Under the assumption PostBQP = PostBPP we thus have P^{#P} = P^{PostBPP}.
At this point we recall an interesting lemma from Toda [15] to continue: P^{PP} ⊆ BP·PP.
Clearly PostBPP ⊆ BPP^{||NP} [13, 11], and it can be shown that applying the BP operator to a language in BPP^{||NP} keeps it in BPP^{||NP} (a full proof follows in lemma 19), therefore BP·PostBPP ⊆ BPP^{||NP}.
Since P^{#P} = P^{PP} ⊆ BP·PP and PP = PostBQP = PostBPP, we can conclude P^{#P} ⊆ BP·PostBPP ⊆ BPP^{||NP}. We now have the condition P^{#P} = BPP^{||NP} (the reverse containment always holds), so by theorem 2 we get PH = P^{#P} = ZPP^{NP}, the final result.
Lemma 19.
If L′ is in BPP^{||NP} then so is any language L defined from L′ via the BP operator; that is, BP·BPP^{||NP} = BPP^{||NP}.
Proof.
Fix some L ∈ BP·BPP^{||NP}. By definition 17, there exists L′ ∈ BPP^{||NP} and a polynomial p which define L. On input x, draw y ∈ {0,1}^{p(|x|)} uniformly at random and run the BPP^{||NP} algorithm for L′ on ⟨x, y⟩; this uses only classical randomness plus a single parallel round of NP oracle calls. Therefore L is in BPP^{||NP}, provided the errors are controlled.
Note that majority-vote amplification can be used on both the L′ algorithm and the BP operator's promise to decrease each probability of failure to less than 1/6.
If we combine the random coin flips of both the BP and the BPP^{||NP} parts of the algorithm we get a single failure probability below 1/3, which fits the definition of BPP^{||NP}.
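As a quick numerical sanity check on the amplification step (ours), taking the majority of t independent runs of a procedure with error 1/3 drives the per-layer error below 1/6 already for modest t, so the two layers together stay below 1/3:

```python
import math

def majority_error(eps, t):
    """Exact error of a t-run majority vote when each run errs with prob eps."""
    return sum(math.comb(t, i) * eps ** i * (1 - eps) ** (t - i)
               for i in range(t // 2 + 1, t + 1))

for t in (1, 5, 15):
    e = majority_error(1 / 3, t)
    print(f"t={t:2d}: per-layer error {e:.4f}, union bound for two layers {2 * e:.4f}")
```

For t = 15 the per-layer error is about 0.088, giving a combined failure probability below 1/3 (indeed below 1/5).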
Proof of Theorem 15.
By definition . O’Donnell and Say show [13]. Therefore we can rewrite the antecedent of the Theorem as .
Any problem in can be solved with one query, which can obviously be done in one parallel round of oracle calls, so . This gives (as we already know and by Theorem 18 .
4 Improved Hardness of BosonSampling, IQP, and DQC1
In the original work on the hardness of BosonSampling [3] two cases were considered: the exact case, where the probability that the algorithm samples a given element must be multiplicatively close to the target probability, and the approximate case, which allows additive error in the total variation distance between the sampled and target distributions. For the exact case, classical simulation implied the polynomial hierarchy collapsed to the third level. For the approximate case, a collapse to the 3rd level was also achieved, but subject to additional assumptions (we refer to these as "additional assumptions" henceforth). This separation between multiplicative and additive error also applies to the work on instantaneous quantum polynomial-time (IQP) sampling [8] and sampling from 1 clean qubit (DQC1) circuits [12], albeit with different additional assumptions. It is important to notice that realistic quantum computers are believed not to be able to achieve multiplicative error in sampling, but can achieve additive error [8, 5]. So it is the approximate case that distinguishes quantum from classical computation.
[10] strengthened the collapse for the exact case to SBP (a class within the second level of the polynomial hierarchy) for BosonSampling, IQP and DQC1 (amongst others). However, for fundamental reasons, the proof did not extend to the approximate case.
In this section, we show that efficient classical sampling of the physically relevant approximate case would also collapse the polynomial hierarchy to its second level (PH = ZPP^{NP}), conditional on the existing additional assumptions. In this sense, our work provides the strongest known separation between classical and quantum computation for sampling and functional problems. While our result can also be applied to the exact case, it is not an improvement on [10]'s result there, as the collapse they achieve for exact sampling is already at least as strong, and this application is therefore not relevant.
To apply Theorem 2 and show a second level collapse we require the condition P^{#P} = BPP^{||NP}; however, the proofs contained in [3, 8, 12] only show P^{#P} ⊆ BPP^{NP}. Fortunately, each of the proofs can be easily modified to use only parallel oracle calls, giving P^{#P} ⊆ BPP^{||NP}. Formally proving this fact would require reproducing each proof in detail and would make this paper excessively long. Instead of completely rewriting these proofs we will instead notice that each proof only uses the oracle to approximately count some postselected quantity with Stockmeyer's algorithm [14]. In the next lemma, we show that this computation can be done with parallel oracle calls. This will allow us to argue that classical sampling of BosonSampling, IQP or DQC1 would imply P^{#P} = BPP^{||NP} without formally reproducing each of the proofs.
Lemma 20 (Counting a postselected quantity).
Given a Boolean function f : {0,1}^n → {0,1} and a postselection criterion g : {0,1}^n → {0,1}, let
p = Pr_x [ f(x) = 1 | g(x) = 1 ] = |{x : f(x) = g(x) = 1}| / |{x : g(x) = 1}|.
Then for all ε ≥ 1/poly(n), there exists a BPP^{||NP} machine that approximates p to within a multiplicative factor of 1 + ε.
The proof of lemma 20 is quite simple once it is realised that Stockmeyer's approximate counting can be performed with only parallel oracle calls, which we capture in the next lemma.
Lemma 21 (Approximate counting in parallel).
Given a Boolean function f : {0,1}^n → {0,1}, let
p = |{x : f(x) = 1}| / 2^n.
Then for all ε ≥ 1/poly(n), there exists a BPP^{||NP} machine that approximates p to within a multiplicative factor of 1 + ε.
We will not provide all the steps of this lemma as it is a clear corollary of the version of the proof given by Valiant and Vazirani [16]. (To see this, note that they can fix their random hash vectors at the start of their algorithm and check in parallel whether 1, 2, …, up to polynomially many vectors are sufficient to empty the set.) This lemma is also a consequence of the results of O'Donnell and Say [13], who show that approximate counting can be done in BPP^{||NP}.
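The parallel structure is easy to see in a toy implementation. The following Python sketch (ours; exhaustive search stands in for the NP oracle, and the set S stands in for f^{-1}(1)) fixes all random GF(2) hashes up front and asks every "does anything survive hashing to m bits?" question in one batch; the largest m that still gets majority-yes answers locates |S| up to a constant factor with high probability.

```python
import random

n = 12
S = {x for x in range(2 ** n) if x % 37 == 0}    # stand-in for f^{-1}(1)

def survives(m):
    """One NP-style query: does some element of S hash to 0^m under a fresh
    random GF(2)-linear map {0,1}^n -> {0,1}^m?"""
    rows = [random.getrandbits(n) for _ in range(m)]
    return any(all(bin(r & x).count("1") % 2 == 0 for r in rows) for x in S)

# One parallel round: every (m, trial) query is fixed before any is answered.
votes = {m: sum(survives(m) for _ in range(9)) for m in range(1, n + 1)}
m_star = max((m for m, v in votes.items() if v >= 5), default=0)
print(f"|S| = {len(S)}, estimate ~ 2^{m_star} = {2 ** m_star}")
```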
We now proceed with the proof of lemma 20.
Proof of lemma 20.
Fix a target error ε ≥ 1/poly(n). From the statement of the lemma, p = N_1/N_0, where N_1 = |{x : f(x) = g(x) = 1}| and N_0 = |{x : g(x) = 1}|. By lemma 21 we can approximate N_1 to a multiplicative factor of 1 + ε/3 with high probability. Similarly we can approximate N_0 to within 1 + ε/3 as well. Dividing the first count by the second gives us p within a multiplicative factor of
(1 + ε/3)^2 ≤ 1 + ε (for ε ≤ 3)
with sufficiently high probability. Since each of these two counts can be computed in parallel and then divided for the final answer, we can perform them in one parallel round. Given we only need to decrease each failure probability by a factor of 2, the final algorithm is in BPP^{||NP}.
We first apply the above logic to the BosonSampling results. Aaronson and Arkhipov use one postselection step in lemma 42 of [3]; they then perform one approximate counting step on a quantity derived from this postselection step to prove their main theorem. The structure of this proof fits the format of lemma 20, therefore we can state a parallelised version of their main theorem.
Theorem 22 (Aaronson and Arkhipov with parallel calls).
Let D_A be the probability distribution sampled by a boson computer A. Suppose there exists a classical algorithm C that takes as input a description of A as well as an error bound ε, and that samples from a probability distribution D′_A such that ‖D′_A − D_A‖ ≤ ε, in poly(|A|, 1/ε) time. Then the |GPE|²_± problem is solvable in BPP^{||NP}.
The |GPE|²_± problem (defined in [3]) becomes #P-hard given two conjectures: the Permanent-of-Gaussians Conjecture (PGC) and the Permanent Anti-Concentration Conjecture (PACC). These conjectures capture the belief that the permanent of Gaussian matrices does not concentrate close to zero and is hard to estimate; further justification of these conjectures is available in the original paper [3]. These are the additional assumptions we referred to earlier.
Theorem 23.
Assume the PACC and PGC hold. If there exists a classical algorithm that can sample from the distribution of a boson computer with additive error, then P^{#P} = BPP^{||NP} and the polynomial hierarchy collapses to the second level.
Next, we discuss how the same strategy applies to the IQP case. The proof provided by Bremner, Montanaro and Shepherd uses just Stockmeyer counting to prove the collapse in their theorem 1. By lemma 21 all oracle calls in their algorithm can be done in parallel, and we can convert their theorem 1 to a BPP^{||NP} result. For IQP we do not need to conjecture the anti-concentration of some hard problem. The result rests only on the conjectured average-case hardness of approximately computing either the partition function of the Ising model or a property of low-degree polynomials over finite fields (Conjectures 2 and 3 in [8]).
Theorem 24 (Improved IQP hardness [8]).
Assume either above conjecture is true. If it is possible to classically sample from the output probability distribution of any IQP circuit in polynomial time, up to an error of 1/192 in ℓ1 norm, then P^{#P} = BPP^{||NP}. Hence the polynomial hierarchy would collapse to its second level.
Finally, the same strategy can be applied to improve Morimae's result [12] on the hardness of sampling from the one clean qubit (DQC1) model [10], as it again depends on Stockmeyer counting. The necessary conjecture for Morimae's result is an assumption directly about the average case hardness of the one clean qubit model.
Theorem 25 (Improved one clean qubit hardness [12]).
Assuming Morimae’s conjecture, if there exists a classical algorithm which can output samples from any one clean qubit machine with at most 1/36 error in the norm then there is a algorithm to solve any problem in . Hence the Polynomial Hierarchy would collapse to its second level.
It seems likely that other results will fit the structure we give here, although we provide only these three results.
We finish this section by noting that the above results place the relevant simulation tasks in BPP^{||NP}, so classical impossibility of any of these tasks would imply SampBQP ≠ SampBPP, which would also imply a separation in the functional classes, FBQP ≠ FBPP [2].
Theorem 26.
If any of the sets of conjectures above holds and the polynomial hierarchy does not collapse to its second level, then SampBQP ≠ SampBPP and, equivalently, FBQP ≠ FBPP.
5 Conclusion and Open Problems
This paper has shown that if P^{#P} = BPP^{||NP} then the polynomial hierarchy collapses to its second level. We have connected this result to approximate/exact counting and shown that it improves a number of results demonstrating the separation of quantum and classical computing. The natural next research question is "how low can we go?". [10] extended the result for the multiplicative error case to SBP; perhaps this could also be achieved in the additive error case. We have not used the fact that the ZPP^{NP} algorithm likely only needs two rounds of oracle calls; this could be an avenue to collapsing the hierarchy further.
This hierarchy collapse strengthens claims of quantum supremacy, but as these claims also rest on a number of additional assumptions (e.g. the permanent-of-Gaussians conjectures), diminishing, removing or proving the other assumptions remains one of the central challenges of showing quantum supremacy and proving SampBQP ≠ SampBPP. Alternatively, further work may reveal one of these assumptions to fail, as happened with XQUATH [5].
Our results offer other, more direct, extensions. Of particular interest is whether our main theorem relativises, like previous supremacy results did [3]. Avoiding the use of checkability may be key to proving relativisation, as this is the only step of our proof that does not relativise.
Another open question is how Theorem 2 may be used for other non-quantum purposes, perhaps where approximate and exact counting are being compared. Alternatively, a research line we did not pursue is extending theorem 2. Extensions could be to other checkable rsr languages, perhaps PSPACE- or EXP-complete ones (although these may only be adaptively rsr [9]), or to showing that the equality holds for other elements of the polynomial hierarchy.
References
- [1] Scott Aaronson. Quantum computing, postselection, and probabilistic polynomial-time. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 461(2063):3473–3482, 2005.
- [2] Scott Aaronson. The equivalence of sampling and searching. Theory of Computing Systems, 55(2):281–298, 2014. doi:10.1007/S00224-013-9527-3.
- [3] Scott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 333–342, 2011. doi:10.1145/1993636.1993682.
- [4] Scott Aaronson, Greg Kuperberg, and Christopher Granade. The complexity zoo, 2005.
- [5] Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. A polynomial-time classical algorithm for noisy random circuit sampling. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 945–957, 2023. doi:10.1145/3564246.3585234.
- [6] V. Arvind and Johannes Köbler. New lowness results for ZPP^NP and other complexity classes. Journal of Computer and System Sciences, 65(2):257–277, 2002. doi:10.1006/jcss.2002.1835.
- [7] Manuel Blum and Sampath Kannan. Designing programs that check their work. Journal of the ACM (JACM), 42(1):269–291, 1995. doi:10.1145/200836.200880.
- [8] Michael J Bremner, Ashley Montanaro, and Dan J Shepherd. Average-case complexity versus approximate simulation of commuting quantum computations. Physical review letters, 117(8):080501, 2016.
- [9] Joan Feigenbaum and Lance Fortnow. On the random-self-reducibility of complete sets. Technical Report 90-22, University of Chicago, Computer Science Department, 1990.
- [10] Keisuke Fujii, Hirotada Kobayashi, Tomoyuki Morimae, Harumichi Nishimura, Shuhei Tamate, and Seiichiro Tani. Power of quantum computation with few clean qubits. arXiv preprint, 2015. arXiv:1509.07276.
- [11] Yenjo Han, Lane A Hemaspaandra, and Thomas Thierauf. Threshold computation and cryptographic security. SIAM Journal on Computing, 26(1):59–78, 1997. doi:10.1137/S0097539792240467.
- [12] Tomoyuki Morimae. Hardness of classically sampling the one-clean-qubit model with constant total variation distance error. Physical Review A, 96(4):040302, 2017.
- [13] Ryan O’Donnell and AC Cem Say. The weakness of ctc qubits and the power of approximate counting. ACM Transactions on Computation Theory (TOCT), 10(2):1–22, 2018. doi:10.1145/3196832.
- [14] Larry Stockmeyer. On approximation algorithms for #P. SIAM Journal on Computing, 14(4):849–861, 1985. doi:10.1137/0214060.
- [15] Seinosuke Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20(5):865–877, 1991. doi:10.1137/0220053.
- [16] Leslie G Valiant and Vijay V Vazirani. NP is as easy as detecting unique solutions. In Proceedings of the seventeenth annual ACM symposium on Theory of computing, pages 458–463, 1985. doi:10.1145/22145.22196.
