
Improved Separation Between Quantum and Classical Computers for Sampling and Functional Tasks

Simon C. Marshal (Leiden University, The Netherlands), Scott Aaronson (University of Texas at Austin, TX, USA), Vedran Dunjko (Leiden University, The Netherlands)
Abstract

This paper furthers existing evidence that quantum computers are capable of computations beyond classical computers. Specifically, we strengthen to the second level the collapse of the polynomial hierarchy implied by either of the following: (i) quantum computers with postselection are as powerful as classical computers with postselection (𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯); (ii) any one of several quantum sampling experiments (𝖡𝗈𝗌𝗈𝗇𝖲𝖺𝗆𝗉𝗅𝗂𝗇𝗀, 𝖨𝖰𝖯, 𝖣𝖰𝖢𝟣) can be approximately performed by a classical computer (contingent on existing assumptions). The latter result implies that if any of these experiments' hardness conjectures hold, then quantum computers can implement functions classical computers cannot (𝖥𝖡𝖰𝖯 ≠ 𝖥𝖡𝖯𝖯) unless the polynomial hierarchy collapses to its second level. These results improve on previous work, which either achieved a collapse to the third level or was concerned with exact sampling, a physically impractical case.

The workhorse of these results is a new technical complexity-theoretic result which we believe could have value beyond quantum computation. In particular, we prove that if there exists an equivalence between problems solvable with an exact counting oracle and problems solvable with an approximate counting oracle, then the polynomial hierarchy collapses to its second level, indeed to 𝖹𝖯𝖯^{𝖭𝖯}.

Keywords and phrases:
Quantum advantage, Approximate counting, Boson sampling
Funding:
Simon C. Marshal: supported by the project NEASQC funded from the European Union’s Horizon 2020 research and innovation programme (grant agreement No 951821) and by an unrestricted gift from Google Quantum AI.
Scott Aaronson: supported by a Vannevar Bush Fellowship from the US Department of Defense, the Berkeley NSF-QLCI CIQC Center, and a Simons Investigator Award.
Vedran Dunjko: supported by the European Union’s Horizon Europe program through the ERC CoG BeMAIQuantum (Grant No. 101124342) and the Dutch National Growth Fund (NGF), as part of the Quantum Delta NL program.
Copyright and License:
© Simon C. Marshal, Scott Aaronson, and Vedran Dunjko; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Complexity classes; Theory of computation → Quantum complexity theory
Related Version:
Full Version: https://arxiv.org/pdf/2410.20935
Acknowledgements:
SCM thanks Jan H. Kirchner for their insightful comments.
Editors:
Srikanth Srinivasan

1 Introduction

Quantum computers are thought to be poised to outperform classical computers on a number of significant tasks, such as integer factorisation or simulation of quantum systems. Despite this widely held belief, we have no formal proof that factoring is not solvable in polynomial time on a classical computer, or even that 𝖡𝖰𝖯 ≠ 𝖡𝖯𝖯.

This is perhaps not surprising given the challenge posed by proving unconditional results in complexity theory; famously, the 𝖯 vs 𝖭𝖯 question remains unresolved after decades of study. A more realistic approach is to base the hardness of quantum computing on widely accepted complexity-theoretic conjectures.

This approach has so far been quite successful for sampling problems, with a number of papers [3, 8] showing that various quantum sampling experiments would be hard for a classical algorithm unless the polynomial hierarchy collapses to its third level, or else a widely believed conjecture fails. These results are especially interesting as they naturally imply that quantum computers could implement functions classical computers could not (𝖥𝖡𝖰𝖯 ≠ 𝖥𝖡𝖯𝖯) due to a surprising equivalence between sampling and functional classes [2].

In a similar vein, there has been a body of work showing that various changes to quantum computing make it vastly more powerful than similarly modified classical computing. A well-known example is 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 vs 𝖯𝗈𝗌𝗍𝖡𝖯𝖯, i.e. quantum/classical computing with postselection. Surprisingly, the former is in some sense equivalent to exact counting (𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝖯) [1], while the latter is only equivalent to approximate counting [13]. This disconnect suggests the classes are not equal; indeed, if they were equal, the polynomial hierarchy would collapse to the third level [1].

Both of these research lines provide strong evidence that quantum computing is more powerful than classical computing, but they rely on a number of conjectures and complexity-theoretic conditions. We are therefore motivated to minimise or remove the need for these conjectures and conditions, until we are left with compelling evidence that quantum computing is fundamentally stronger than classical computing.

Hope that weaker conditions might be possible was provided by [10], who showed that if classical computers can exactly sample from any one of a number of quantum supremacy experiments, then the polynomial hierarchy collapses to its second level, an improvement over the existing collapse to the third level. In fact, they push the collapse all the way down to 𝖠𝖬, inside the second level of the hierarchy. While this result provides insight into what may be possible, it stops short of our goal, as the proof strategy does not extend to additively approximate sampling, which is arguably the realistic case [3] and is the case relevant for questions such as 𝖥𝖡𝖰𝖯 =? 𝖥𝖡𝖯𝖯.

In this paper, we provide the following new complexity-theoretic result, which we then use to improve the abovementioned results to a second-level polynomial hierarchy collapse. In the following theorem, we use ∥ to denote only one round of parallel oracle calls.

Theorem 1 (Main).

If 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}.

By showing that the existing collapse results for classical boson sampling, commuting circuit sampling and one-clean-qubit sampling can all be modified to show 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} (we must add the parallel requirement to 𝖡𝖯𝖯^{𝖭𝖯} for our collapse), we improve the implied collapse of the polynomial hierarchy from the third level to the second. Consequently, this strengthens the evidence that 𝖥𝖡𝖰𝖯 ≠ 𝖥𝖡𝖯𝖯. The theorem can also be used to strengthen the hierarchy collapse implied by assuming 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 from the third to the second level.

From a purely complexity-theoretic perspective, there is an interesting alternative form of this result in terms of exact and approximate counting. (Exact counting means exactly counting the number of accepting inputs of some poly-time machine; approximate counting means getting within a multiplicative factor of 2 of the exact count, equivalently to within a 1 + 1/poly(n) factor.)

If 𝖯^{∥𝖤𝗑𝖺𝖼𝗍𝖢𝗈𝗎𝗇𝗍} = 𝖯^{∥𝖠𝗉𝗉𝗋𝗈𝗑𝖢𝗈𝗎𝗇𝗍} then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}.

This naturally gives our main result the following interpretation: if the problems solvable with the aid of exact counting can also be solved with approximate counting, then both are contained in 𝖹𝖯𝖯^{𝖭𝖯}. The generality of this interpretation hints that there may be applications of this theorem to counting beyond the quantum theory considered herein. It is also interesting to note that the proof contained herein does not relativise, as we rely on the checkability of #𝖯 for a step of our proof.
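To see why the factor-2 and 1 + 1/poly(n) notions of approximate counting above coincide: counting the accepting inputs of the AND of k independent copies of a verifier turns a count of N into N^k, and a factor-2 estimate of N^k yields a factor-2^{1/k} estimate of N after taking a k-th root. A small numeric illustration in Python (our own, purely illustrative):

    # A factor-2 estimate of N**k is a factor-2**(1/k) estimate of N
    # after a k-th root, so factor-2 approximate counting of an AND of
    # k copies gives a (1 + O(1/k))-factor estimate of the base count.
    def sharpen(approx_of_Nk, k):
        return approx_of_Nk ** (1.0 / k)

    N = 1_000_000        # the true count (unknown to the algorithm)
    k = 20               # number of AND-ed copies of the verifier
    crude = 2 * N**k     # a worst-case factor-2 overestimate of N**k
    est = sharpen(crude, k)
    assert est / N <= 2 ** (1 / k) + 1e-9   # within a ~1.035 factor of N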

The paper is structured as follows. Section 2 focuses on the proof of our central theorem; Section 3 proves that the polynomial hierarchy collapses to the second level if 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯; Section 4 proves our various sampling results. Finally, we close by discussing interesting future research and open questions.

2 Main theorem

In this section we prove our main theorem:

Theorem 2.

If 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}.

We will assume readers are broadly familiar with classes such as #𝖯; where classes are undefined we use the definitions given in the Complexity Zoo [4]. We use the notation 𝖠^{∥𝖡} for 𝖠 with one set of parallel oracle calls to 𝖡.

It is useful to first give an informal description of the proof of Theorem 2 before we proceed to the formal proof, to give the reader a better idea of the purpose of each lemma. Broadly, the proof relies on the notion of random reducibility, where a given instance of a problem can be solved by a polynomial-time algorithm with access to randomly selected instances of some other problem. If a problem can be randomly reduced to itself we say it is random self-reducible. The first key realisation for Theorem 2 is that a random self-reducible language in 𝖡𝖯𝖯^{∥𝖭𝖯} is itself random reducible to 𝖭𝖯, because running the 𝖡𝖯𝖯^{∥𝖭𝖯} algorithm on a random instance gives rise to a random set of non-adaptive 𝖭𝖯 calls, which meets the definition of random reducibility to 𝖭𝖯. Next, as #𝖯 is random self-reducible [9], the antecedent of Theorem 2 implies that it is random reducible to 𝖭𝖯. The second key realisation is that the proof by Feigenbaum and Fortnow that 𝖭𝖯 cannot be random self-reducible unless it is in 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 extends beyond self-reducibility: any language that is random reducible to 𝖭𝖯 is in 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒. By repeating the argument for 𝖼𝗈𝖭𝖯 we show #𝖯 ⊆ 𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒. Arvind and Köbler [6] showed that any checkable language in 𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 is low for Σ𝟤𝖯, which gives us a polynomial hierarchy collapse to Σ𝟤𝖯. The final collapse to 𝖹𝖯𝖯^{𝖭𝖯} comes from using the 𝖡𝖯𝖯^{∥𝖭𝖯} algorithm to produce proofs verifiable in 𝖯^{𝖭𝖯}, thereby achieving zero error.

We begin by defining random reducibility. A problem is randomly self-reducible if any given instance of the problem can be solved probabilistically by a polynomial-time algorithm with access to solved random instances of the same problem. For our purposes we will work with a similar concept, random reducibility, which is the same except that the random instances may be of a different problem. It should be stated that this is different from a "randomized reduction", which is a randomised Karp reduction. Random reductions are sometimes called 1-locally random reductions to avoid this confusion.

Definition 3 (Random Reducible).

A function f is nonadaptively k(n) random-reducible to a function g if there are polynomial-time computable functions σ and ϕ with the following two properties.

  1. For all n and all x ∈ {0,1}^n, f(x) can, with high probability, be computed by ϕ from the instances g(y) for y selected by σ:

    Pr_r( f(x) = ϕ(x, r, g(σ(1,x,r)), …, g(σ(k,x,r))) ) ≥ 2/3.

  2. For all n, all pairs {x₁, x₂} ⊆ {0,1}^n and all i, if r is chosen uniformly at random then σ(i,x₁,r) and σ(i,x₂,r) are identically distributed.

If both conditions hold we say (σ, ϕ) is a random-reduction from f to g.

As we are dealing with no other forms of random reducibility, we will just say "f is rr to g" when f is non-adaptively k(n) random-reducible to g for some polynomial k. If a function is rr to itself then we say the function is random self-reducible (rsr). A set is rr to another if its characteristic function is rr. A class of languages 𝖠 is rr to 𝖡 if every L ∈ 𝖠 is rr to some L′ ∈ 𝖡.
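As a toy illustration of Definition 3 (our own example, not from [9]): any GF(2)-linear function f(x) = a·x is random self-reducible with k = 2, since f(x) = f(x ⊕ r) ⊕ f(r) and each query, taken on its own, is uniformly distributed:

    import secrets

    N = 64                               # instance length n
    A = secrets.randbits(N)              # hidden coefficient vector a

    def f(x: int) -> int:
        """The target function f(x) = a.x over GF(2) (parity of a AND x)."""
        return bin(A & x).count("1") % 2

    def sigma(i: int, x: int, r: int) -> int:
        """Query selector: each query is uniform when r is uniform,
        regardless of x, satisfying condition (2)."""
        return x ^ r if i == 1 else r

    def phi(x: int, r: int, g1: int, g2: int) -> int:
        """Recombiner: f(x) = f(x ^ r) xor f(r), by linearity."""
        return g1 ^ g2

    x = secrets.randbits(N)
    r = secrets.randbits(N)              # the random string of Definition 3
    assert phi(x, r, f(sigma(1, x, r)), f(sigma(2, x, r))) == f(x)

Here the reduction is errorless, so condition (1) holds with probability 1 ≥ 2/3.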

As with probabilistic classes like 𝖡𝖯𝖯, we can boost random reducibility to an arbitrarily low (e.g. 2^{−n}) failure probability.

Lemma 4 ([9]).

If a function f is non-adaptively k(n) random-reducible to g then, for all polynomials t(n), f is non-adaptively 24·t(n)·k(n) random-reducible to g, where condition (1) holds for at least a 1 − 2^{−t(n)} fraction of the random strings r.

 Remark.

In general, this boosting may not work for a definition of random reducibility dealing with relations instead of functions, as relations may have multiple correct outputs. However, counting problems like 𝖯𝖾𝗋𝗆, which have only one correct output for a given input, can be boosted.

Lemma 5 ([9]).

Any #𝖯-complete function is rsr.
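For concreteness, the classic random self-reduction behind Lemma 5 (Lipton's, for the permanent over a prime field 𝔽_p with p > n + 1) queries the oracle on the uniformly distributed matrices X + tR and interpolates the degree-n polynomial t ↦ perm(X + tR) back at t = 0. A self-contained Python sketch, with a brute-force permanent standing in for the oracle:

    import itertools, random

    P = 10007                                    # prime modulus, p > n + 1

    def perm_mod(M):
        """Brute-force permanent mod P (stands in for the oracle calls)."""
        n = len(M)
        total = 0
        for pi in itertools.permutations(range(n)):
            prod = 1
            for i in range(n):
                prod = prod * M[i][pi[i]] % P
            total = (total + prod) % P
        return total

    def perm_by_random_reduction(X):
        """Compute perm(X) mod P from oracle answers on the uniformly
        random matrices X + t*R, t = 1..n+1, by Lagrange interpolation
        of the degree-n polynomial q(t) = perm(X + t*R) at t = 0."""
        n = len(X)
        R = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
        ts = list(range(1, n + 2))               # n+1 evaluation points
        # Each query matrix X + t*R is uniformly distributed (condition (2)).
        vals = [perm_mod([[(X[i][j] + t * R[i][j]) % P for j in range(n)]
                          for i in range(n)]) for t in ts]
        result = 0
        for a, ya in zip(ts, vals):
            num, den = 1, 1
            for b in ts:
                if b != a:
                    num = num * (-b) % P         # factor (0 - b)
                    den = den * (a - b) % P      # factor (a - b)
            result = (result + ya * num * pow(den, P - 2, P)) % P
        return result

    X = [[random.randrange(P) for _ in range(4)] for _ in range(4)]
    assert perm_by_random_reduction(X) == perm_mod(X)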

This concludes our definitions; we can now state an intermediate theorem towards our final result.

Theorem 6.

If 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖯^{#𝖯} is random reducible to 𝖭𝖯.

Before we prove this, we will provide a number of smaller results. In the following lemma, the notation 𝖠^{𝖡[1]} means 𝖠 with one oracle call to 𝖡.

Lemma 7.

Any language in 𝖯^{#𝖯[1]} is random reducible to #𝖯.

Proof.

As #𝖯-complete functions are random self-reducible and only one oracle call is needed, we can use a random reduction of #𝖯 to answer the oracle call, and use the polynomial-time algorithm ϕ to perform the rest of the 𝖯 computation.

The next lemma captures the intuition that 𝖡𝖯𝖯^{∥𝖭𝖯} algorithms are "almost" rr to 𝖭𝖯, missing only the random instance selection part of the definition (property (2)).

Lemma 8.

For all L in 𝖡𝖯𝖯^{∥𝖭𝖯} there exist polynomial-time computable functions σ_B, ϕ_B such that

Pr_r( L(x) = ϕ_B(x, r, 𝖲𝖠𝖳(σ_B(1,x,r)), …, 𝖲𝖠𝖳(σ_B(m,x,r))) ) ≥ 1 − 2^{−n}.

Proof.

If L ∈ 𝖡𝖯𝖯^{∥𝖭𝖯} then there exists a polynomial-time algorithm A deciding x ∈ L that makes only one set of up to m non-adaptive 𝖭𝖯 queries. We assume the 𝖭𝖯 oracle is 𝖲𝖠𝖳 for simplicity. Let the algorithm for σ_B(i,x,r) run A until the step involving oracle queries and then output just the i'th query (assuming some arbitrary query ordering). The algorithm for ϕ_B is now just the algorithm A, using the answers to the queries produced by σ_B(i,x,r) in place of directly making its own calls. Since both ϕ_B and σ_B have access to the same random string r, they will both select the same oracle calls.

Lemma 8 is useful as it proves that 𝖡𝖯𝖯^{∥𝖭𝖯} fulfils property (1) of the definition of random reducibility to 𝖭𝖯, but it does not prove the random distribution of the σ_B(i,x,r). The next lemma shows that if a language already randomly reduces to a language in 𝖡𝖯𝖯^{∥𝖭𝖯} then this distribution condition (condition (2)) is also fulfilled, and thus the language is random reducible to 𝖭𝖯.
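To make the construction concrete, a schematic Python rendering (ours; run_A_until_queries and finish_A stand for deterministic replays of the given machine A and are left abstract):

    def run_A_until_queries(x, r):
        """Deterministic replay of the given BPP^{||NP} machine A on
        coins r, up to its single parallel query step; returns the list
        of SAT queries A is about to ask. Left abstract here."""
        ...

    def finish_A(x, r, answers):
        """Deterministic replay of A past the query step, with the
        oracle answers supplied. Left abstract here."""
        ...

    def sigma_B(i, x, r):
        return run_A_until_queries(x, r)[i - 1]   # the i'th query

    def phi_B(x, r, *sat_answers):
        return finish_A(x, r, list(sat_answers))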

Lemma 9.

All languages random reducible to a language in 𝖡𝖯𝖯^{∥𝖭𝖯} are random reducible to 𝖭𝖯.

Proof.

Let L be a language which is random reducible to L_B ∈ 𝖡𝖯𝖯^{∥𝖭𝖯}. We will demonstrate that L is rr to 𝖭𝖯 by writing out the definition of random reducibility to L_B, then using Lemma 8 to substitute the calls to L_B with a formula using calls to 𝖲𝖠𝖳. We can then rearrange this into a format which directly makes calls to 𝖲𝖠𝖳, and we show that these calls inherit the randomness property from the random reduction of L to L_B.

Let us assume the random reduction of L to L_B on input x uses k calls. From the definition of random reducibility there exist ϕ and σ such that

Pr_r( L(x) = ϕ(x, r, L_B(σ(1,x,r)), …, L_B(σ(k,x,r))) ) ≥ 1 − 2^{−n}.

Define y_i := σ(i,x,r), so that

Pr_r( L(x) = ϕ(x, r, L_B(y₁), …, L_B(y_k)) ) ≥ 1 − 2^{−n}.

Using Lemma 8 we can substitute each L_B with ϕ_B, σ_B and a random string r_i for the i'th call to L_B. In the following we assume each Lemma-8 reduction uses m 𝖲𝖠𝖳 calls:

Pr_{r,r₁,…,r_k}( L(x) = ϕ(x, r, ϕ_B(y₁, r₁, 𝖲𝖠𝖳(σ_B(1,y₁,r₁)), …, 𝖲𝖠𝖳(σ_B(m,y₁,r₁))), …, ϕ_B(y_k, r_k, 𝖲𝖠𝖳(σ_B(1,y_k,r_k)), …, 𝖲𝖠𝖳(σ_B(m,y_k,r_k)))) ) ≥ 1 − (k+1)·2^{−n}.  (1)

The failure bound 1 − (k+1)·2^{−n} combines the failure probabilities of the k Lemma-8 reductions with the failure probability of the random reduction itself.

We can now begin to combine elements of the reduction from L to L_B with the reduction from L_B to 𝖲𝖠𝖳. We start with ϕ, which we combine with ϕ_B to define ϕ′; informally, running the two polynomial-time algorithms in sequence is itself one larger polynomial-time algorithm.

ϕ′(x, r, r₁, …, r_k, 𝖲𝖠𝖳(σ_B(1,y₁,r₁)), …, 𝖲𝖠𝖳(σ_B(m,y_k,r_k))) := ϕ(x, r, ϕ_B(y₁, r₁, 𝖲𝖠𝖳(σ_B(1,y₁,r₁)), …, 𝖲𝖠𝖳(σ_B(m,y₁,r₁))), …, ϕ_B(y_k, r_k, 𝖲𝖠𝖳(σ_B(1,y_k,r_k)), …, 𝖲𝖠𝖳(σ_B(m,y_k,r_k)))).

We use "x#y" to denote y appended to x. Define 𝐫 := r#r₁#…#r_k.

Again we define a new function, σ′, as the combination of the two existing functions σ and σ_B:

σ′(i, j, x, 𝐫) := σ_B(i, σ(j,x,r), r_j).

This gives the following simplified equation:

Pr_𝐫( L(x) = ϕ′(x, 𝐫, 𝖲𝖠𝖳(σ′(1,1,x,𝐫)), …, 𝖲𝖠𝖳(σ′(m,k,x,𝐫))) ) ≥ 1 − (k+1)·2^{−n}.  (2)

We will now directly check σ′, ϕ′ and equation (2) against the definition of random-reducible; recall the two conditions of Definition 3:

  1. For all n and all x ∈ {0,1}^n, f(x) can, with high probability, be computed by ϕ from the instances g(y) for y selected by σ:

    Pr_r( f(x) = ϕ(x, r, g(σ(1,x,r)), …, g(σ(k,x,r))) ) ≥ 2/3.

  2. For all n, all pairs {x₁, x₂} ⊆ {0,1}^n and all i, if r is chosen uniformly at random then σ(i,x₁,r) and σ(i,x₂,r) are identically distributed.

We can see that these two conditions are met:

  1. For sufficiently large n, 1 − (k+1)·2^{−n} > 2/3, therefore equation (2) is of the form

    Pr_𝐫( L(x) = ϕ′(x, 𝐫, 𝖲𝖠𝖳(σ′(1,1,x,𝐫)), …, 𝖲𝖠𝖳(σ′(m,k,x,𝐫))) ) ≥ 2/3.

  2. For all n, all pairs {x₁, x₂} ⊆ {0,1}^n, and all i and j, if r is chosen uniformly at random then σ(j,x₁,r) and σ(j,x₂,r) are identically distributed (as per their definition); therefore σ′(i,j,x₁,𝐫) and σ′(i,j,x₂,𝐫) are identically distributed too. This is because the only dependence on x is through the identically distributed σ: σ′(i,j,x,𝐫) = σ_B(i, σ(j,x,r), r_j), and this holds for all r_j.

This fulfils the conditions for L to be random reducible to 𝖭𝖯.
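The composition in the proof above is mechanical; schematically, reusing the names from the earlier sketches:

    def sigma_prime(i, j, x, rvec):
        """sigma'(i, j, x, r) = sigma_B(i, sigma(j, x, r), r_j)."""
        r, rs = rvec                        # rvec = (r, [r_1, ..., r_k])
        return sigma_B(i, sigma(j, x, r), rs[j - 1])

    def phi_prime(x, rvec, sat_answers):
        """sat_answers[j-1] holds the m SAT answers for the j'th inner call."""
        r, rs = rvec
        inner = [phi_B(sigma(j, x, r), rs[j - 1], *sat_answers[j - 1])
                 for j in range(1, len(rs) + 1)]
        return phi(x, r, *inner)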

Finally, we can proceed with the proof of Theorem 6.

Proof of theorem 6.

Toda [15] showed that 𝖯𝖧 ⊆ 𝖯^{#𝖯[1]}. Since 𝖡𝖯𝖯^{∥𝖭𝖯} ⊆ 𝖯𝖧 ⊆ 𝖯^{#𝖯[1]} ⊆ 𝖯^{#𝖯}, the assumption 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} gives 𝖯^{#𝖯} = 𝖯^{#𝖯[1]}.

By Lemma 7 and the above equality, 𝖯^{#𝖯} is rr to #𝖯. Under the assumption of this theorem, #𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯}, so 𝖯^{#𝖯} is rr to a language in 𝖡𝖯𝖯^{∥𝖭𝖯}. By Lemma 9 this means 𝖯^{#𝖯} is rr to 𝖭𝖯.

The next result is heavily based on the work of Feigenbaum and Fortnow [9]. They show that if 𝖭𝖯 is random self-reducible then 𝖼𝗈𝖭𝖯 is in 𝖭𝖯/𝗉𝗈𝗅𝗒; while the result we need is slightly different, the method is very similar.

Theorem 10.

Any language that is random-reducible to a language in 𝖭𝖯 is also in 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖭𝖯/𝗉𝗈𝗅𝗒.

Proof.

Let 𝖠𝖬_𝗉𝗈𝗅𝗒 be 𝖠𝖬 with polynomial advice given to Arthur. (This is not the same as 𝖠𝖬/𝗉𝗈𝗅𝗒: the latter must be an 𝖠𝖬 language for all advice, whereas the former only needs to be defined for the correct advice.) It turns out that 𝖠𝖬_𝗉𝗈𝗅𝗒 = 𝖭𝖯/𝗉𝗈𝗅𝗒 and 𝖼𝗈𝖠𝖬_𝗉𝗈𝗅𝗒 = 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 [9]. We will prove L is in 𝖠𝖬_𝗉𝗈𝗅𝗒 ∩ 𝖼𝗈𝖠𝖬_𝗉𝗈𝗅𝗒, beginning with L ∈ 𝖠𝖬_𝗉𝗈𝗅𝗒.

Suppose L is non-adaptively k(n) random-reducible to 𝖲𝖠𝖳 with error probability 2^{−n}. We adopt the notation y_i = σ(i,x,r). For instance size n we give Arthur the advice (p₁, …, p_k), where p_i is the probability that y_i ∈ 𝖲𝖠𝖳:

p_i = Pr_r( 𝖲𝖠𝖳(σ(i,x,r)) = 1 ).

As σ(i,x,r) is identically distributed (over uniform r) for all x, this advice does not depend on the input x.

The proof system consists of Arthur selecting m = 9k³ random strings {r_j}_{j∈[m]} and passing them to Merlin; these random strings define which 𝖲𝖠𝖳 queries y_{1,j}, …, y_{k,j} will be made. Merlin hands back m "transcripts". Each transcript is a list of w_{i,j}, where each w_{i,j} is either a witness for y_{i,j} or the string "NIL" (which is interpreted as the claim that y_{i,j} ∉ 𝖲𝖠𝖳). For each of the m random strings we get the transcript:

Transcript(x, r_j) = (w_{1,j}, w_{2,j}, …, w_{k,j}).

Arthur performs three checks:

  1. For all i, j, either w_{i,j} is "NIL" or Arthur checks that w_{i,j} is a valid witness for y_{i,j} ∈ 𝖲𝖠𝖳.

  2. Let b_{i,j} be 0 if w_{i,j} is "NIL" and 1 otherwise. For all j ∈ [m], Arthur checks that ϕ(x, r_j, b_{1,j}, …, b_{k,j}) = 1, i.e. if Merlin has told the truth then ϕ accepts.

  3. For each i ∈ [k], Arthur checks that more than p_i·m − 2√(km) of the y_{i,j} have been proved to be in 𝖲𝖠𝖳; if this condition does not hold he rejects.

If these checks all pass, then Arthur accepts. The trick in this proof is the final condition. Over the random strings, each y_i has some probability of being in 𝖲𝖠𝖳, with some variance; by picking m large we force this variance to be small, so with high probability an honest count lies within the stated range. Since Merlin cannot lie about "no" instances (a fake witness would fail check (1)), he can only lie by answering "NIL" on "yes" instances. But if he lied too often, too few of the y_{i,j} would be proved to be in 𝖲𝖠𝖳, and check (3) would probabilistically catch him. The value of m is picked precisely so that Merlin cannot cheat on all j without exceeding this "lying budget". We now formally show completeness and soundness.
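Before the formal argument, Arthur's decision procedure in code form (our schematic; queries(x, r) lists the 𝖲𝖠𝖳 queries y_1, …, y_k determined by r, verify checks a 𝖲𝖠𝖳 witness, phi is the reduction's recombiner, and p is the advice):

    import math

    def arthur_accepts(x, rs, transcripts, p, k, phi, verify, queries):
        m = len(rs)
        for rj, wits in zip(rs, transcripts):
            ys = queries(x, rj)
            bs = [0 if w == "NIL" else 1 for w in wits]
            # Check (1): every non-NIL witness must be valid.
            if any(w != "NIL" and not verify(y, w) for y, w in zip(ys, wits)):
                return False
            # Check (2): the reduction must accept on Merlin's answers.
            if phi(x, rj, *bs) != 1:
                return False
        # Check (3): enough proved "yes" answers for every query slot i.
        for i in range(k):
            proved = sum(t[i] != "NIL" for t in transcripts)
            if proved <= p[i] * m - 2 * math.sqrt(k * m):
                return False
        return True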

Completeness. If x ∈ L and Merlin is honest, providing witnesses for all y_{i,j} ∈ 𝖲𝖠𝖳, condition (1) will always hold. Condition (2) holds with probability at least 1 − 2^{−n} (by the definition of rr). Therefore we just need to show condition (3) holds with high probability.

Let Z_{i,j} := 1 if y_{i,j} ∈ 𝖲𝖠𝖳 and 0 otherwise, and Z_i = Σ_{j=1}^m Z_{i,j}. We defined our advice so that p_i = 𝔼[Z_i]/m, from which Var(Z_i) = p_i(1−p_i)·m < m. With these properties, Chebyshev's inequality bounds the probability of failing check (3) for a given i:

Pr( Z_i ≤ p_i·m − 2√(km) )
  ≤ Pr( |Z_i − 𝔼[Z_i]| ≥ 2√(km) )
  ≤ Var(Z_i)/(4km)
  ≤ 1/(4k).

A union bound over the k values of i bounds the probability that check (3) fails by 1/4.

The probability of failing any check given x ∈ L is thus less than 1/4 + 2^{−n}, which is less than 1/3 for large enough n. This proves completeness.

Soundness. Suppose x ∉ L. If Arthur accepts then all three checks must have passed for all j ∈ [m]. If some y_{i,j} is in 𝖲𝖠𝖳 but Merlin has provided w_{i,j} = "NIL", we say Merlin has lied about y_{i,j}; if check (1) passes, Merlin has not lied about any "yes" instances by forging witnesses, so we disregard that case. Without any lies, each of the m reductions fails with probability only 2^{−n}, so except with probability m·2^{−n}, Merlin must lie in all m transcripts for check (2) to pass.

If Merlin lies m times, there must be some i for which he claims at least m/k of the y_{i,j} are not in 𝖲𝖠𝖳 when they are. To nevertheless satisfy condition (3) for this i, at least p_i·m − 2√(km) + m/k of the y_{i,j} must actually be in 𝖲𝖠𝖳, leaving "room" for Merlin to lie m/k times without violating (3). The probability of this is bounded by a Chernoff bound on the random variable Z_i, as defined above.

Pr( Z_i ≥ p_i·m − 2√(km) + m/k )
  = Pr( Z_i − p_i·m ≥ m/k − 2√(km) )
  = Pr( Z_i − p_i·m ≥ 9k² − 6k² )
  = Pr( Z_i − p_i·m ≥ 3k² )
  ≤ e^{−2·(3k²)²/m} = e^{−2·9k⁴/(9k³)} = e^{−2k}
  ≤ 1/(4k).

Combining this with the probability that the random reduction fails even with correct oracle answers (2^{−n} per transcript) gives an acceptance probability, given x ∉ L, of less than m·2^{−n} + 1/(4k) < 1/3 for large enough n. This proves soundness.
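A quick numeric check (ours) of the arithmetic above: with m = 9k³ the slack m/k − 2√(km) equals exactly 3k², and the Chernoff tail e^{−2k} is indeed below 1/(4k):

    import math

    for k in range(1, 50):
        m = 9 * k**3
        slack = m / k - 2 * math.sqrt(k * m)      # 9k^2 - 6k^2 = 3k^2
        assert abs(slack - 3 * k**2) < 1e-6
        assert math.exp(-2 * k) <= 1 / (4 * k)    # tail vs 1/(4k)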

The previous analysis can just as easily be repeated for 𝖼𝗈𝖠𝖬_𝗉𝗈𝗅𝗒, with Merlin proving "no" instances, giving L ∈ 𝖠𝖬_𝗉𝗈𝗅𝗒 ∩ 𝖼𝗈𝖠𝖬_𝗉𝗈𝗅𝗒. By Feigenbaum and Fortnow we know this equals 𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒.

To prove the next theorem we must recall a crucial result by Arvind and Köbler concerning checkability [6, 7]. The notion of checkability is somewhat involved, but intuitively it concerns whether efficient programs that decide a set can themselves be efficiently checked. Since we will only need the checkability of 𝖯𝖯 and #𝖯 to deploy the following theorem, we do not define it formally; instead we refer the reader to [6, 7].

Theorem 11 ([6], Theorem 22).

Checkable sets in 𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 are low for Σ𝟤𝖯.

This result is imperative to proceed: while it is true that 𝖭𝖯 ⊆ (𝖭𝖯 ∩ 𝖼𝗈𝖭𝖯)/𝗉𝗈𝗅𝗒 implies 𝖯𝖧 = 𝖹𝖯𝖯^{𝖭𝖯}, the same is not known for 𝖭𝖯 ⊆ (𝖭𝖯/𝗉𝗈𝗅𝗒) ∩ (𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒).

Lemma 12.

If 𝖯^{#𝖯} ⊆ 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖭𝖯/𝗉𝗈𝗅𝗒 then 𝖯^{#𝖯} = Σ𝟤𝖯 ∩ Π𝟤𝖯.

Proof.

By Toda, 𝖯^{#𝖯} = 𝖯^{𝖯𝖯} [15]. If 𝖯^{𝖯𝖯} is in 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖭𝖯/𝗉𝗈𝗅𝗒 then clearly so is 𝖯𝖯; combining this with the existence of 𝖯𝖯-complete checkable sets [6], we know that 𝖯𝖯 is low for Σ𝟤𝖯.

This lowness lets us place 𝖯^{#𝖯} in Σ𝟤𝖯 by the following chain of inclusions:

𝖯^{#𝖯} = 𝖯^{𝖯𝖯} ⊆ (Σ𝟤𝖯)^{𝖯𝖯} = Σ𝟤𝖯.

As Σ𝟤𝖯 ⊆ 𝖯^{#𝖯} by Toda [15], the inclusion is in fact an equality: Σ𝟤𝖯 = 𝖯^{#𝖯}. As Π𝟤𝖯 ⊆ 𝖯^{#𝖯} as well (Toda [15]), we get Π𝟤𝖯 ⊆ Σ𝟤𝖯, and by taking complements Σ𝟤𝖯 ⊆ Π𝟤𝖯. This gives the final equality:

𝖯^{#𝖯} = Σ𝟤𝖯 = Π𝟤𝖯 = Σ𝟤𝖯 ∩ Π𝟤𝖯.

The previous lemma achieves a collapse to the second level, but we can collapse further to 𝖹𝖯𝖯^{𝖭𝖯} by using the provability of languages in Σ𝟤𝖯 ∩ Π𝟤𝖯 to make the 𝖡𝖯𝖯^{∥𝖭𝖯} algorithm zero-error.

Lemma 13.

All languages in 𝖡𝖯𝖯^{∥𝖭𝖯} ∩ Σ𝟤𝖯 ∩ Π𝟤𝖯 are in 𝖹𝖯𝖯^{𝖭𝖯}.

Proof.

Fix some L ∈ 𝖡𝖯𝖯^{∥𝖭𝖯} ∩ Σ𝟤𝖯 ∩ Π𝟤𝖯. We can check if x ∈ L using the 𝖡𝖯𝖯^{∥𝖭𝖯} algorithm, and use the 𝖡𝖯𝖯^{∥𝖭𝖯} machine to produce a proof of this fact for either the Σ𝟤𝖯 verifier (if x ∈ L) or the Π𝟤𝖯 verifier (if x ∉ L). Either proof can be tested in 𝖯^{𝖭𝖯} using the respective verifier. With probability 1 − 2^{−n} the 𝖡𝖯𝖯^{∥𝖭𝖯} algorithm succeeds in producing a valid proof in polynomial time, and any valid proof correctly proves or disproves x ∈ L. Therefore in polynomial time the algorithm determines whether x ∈ L with probability 1 − 2^{−n} and never answers incorrectly, placing L ∈ 𝖹𝖯𝖯^{𝖭𝖯}.
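Schematically, the zero-error pattern of this proof (our rendering; randomized_prover is the 𝖡𝖯𝖯^{∥𝖭𝖯} machine emitting a membership claim plus a candidate proof, and check_proof is the 𝖯^{𝖭𝖯} verification):

    def zpp_decide(x, randomized_prover, check_proof):
        """Never answers incorrectly; with small probability it only
        says "I don't know"."""
        claim, proof = randomized_prover(x)   # claim: "in L" / "not in L"
        if check_proof(x, claim, proof):      # verified in P^NP
            return claim                      # guaranteed correct
        return "I don't know"                 # probability <= 2^{-n}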

This completes the proof of our main theorem, which we restate below.

Theorem 14.

If 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}.

Summary of proof.

If 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖯^{#𝖯} is random reducible to 𝖭𝖯 (Theorem 6).

If 𝖯^{#𝖯} is random reducible to 𝖭𝖯 then 𝖯^{#𝖯} is in 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖭𝖯/𝗉𝗈𝗅𝗒 (Theorem 10).

If 𝖯^{#𝖯} ⊆ 𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒 ∩ 𝖭𝖯/𝗉𝗈𝗅𝗒 then 𝖯^{#𝖯} = Σ𝟤𝖯 ∩ Π𝟤𝖯 (Lemma 12).

If 𝖯^{#𝖯} = Σ𝟤𝖯 ∩ Π𝟤𝖯 and 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯} (Lemma 13).

 Remark.

Interestingly, we can perform all of the oracle calls required to produce the proof of x ∈ L in one parallel step; it then takes only one extra oracle call to check this proof. Thus it is possible to answer correctly with probability 1 − 2^{−n}, or with "I don't know", after only 2 rounds of parallel 𝖭𝖯 oracle calls.

 Remark.

Feigenbaum and Fortnow [9] showed that their proof applies beyond non-adaptive random self-reducibility: up to logarithmically many adaptive steps are allowed. For the same reasons, our proof extends to logarithmically many rounds of polynomially many parallel oracle calls.

The following theorem formalises the “natural interpretation” of our result given in the intro.

Theorem 15 (Natural interpretation).

If 𝖯^{∥𝖤𝗑𝖺𝖼𝗍𝖢𝗈𝗎𝗇𝗍} = 𝖯^{∥𝖠𝗉𝗉𝗋𝗈𝗑𝖢𝗈𝗎𝗇𝗍} then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}.

This result becomes easy to show after Theorem 18, so we will prove it at the end of the next section.

3 PostBQP and PostBPP

An immediate consequence of the last section's main theorem concerns the question 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 =? 𝖯𝗈𝗌𝗍𝖡𝖯𝖯, i.e. is a poly-time quantum computer with postselection equal to a classical computer with postselection? The best existing answer came from Aaronson's proof that 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝖯 [1], by which equality would imply 𝖯𝖧 = 𝖡𝖯𝖯^{𝖭𝖯} (a collapse to the third level). However, as 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯} [13, 11], our theorem can be used to improve this collapse to the second level. Before we provide the proof, we formally define 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 and the operator 𝖡𝖯· which makes a complexity class probabilistic.

Definition 16.

𝖯𝗈𝗌𝗍𝖡𝖰𝖯 is the set of languages L for which there exists a uniform family of polynomial-width and polynomial-depth quantum circuits {C_n}_{n≥1} such that for any input x:

  1. There is a non-zero probability of measuring the first qubit of C_n|0…0⟩|x⟩ to be |1⟩.

  2. If x ∈ L, conditioned on the first qubit being |1⟩, the second qubit is |1⟩ with probability at least 2/3.

  3. If x ∉ L, conditioned on the first qubit being |1⟩, the second qubit is |1⟩ with probability at most 1/3.

𝖯𝗈𝗌𝗍𝖡𝖯𝖯 can be defined similarly, with classical circuits and an additional input of random bits.

Definition 17.

Let 𝖪 be a complexity class. A language L is in 𝖡𝖯·𝖪 if there exists a language A ∈ 𝖪 and a polynomial p such that for all strings x:

Pr_{r ∈ {0,1}^{p(|x|)}}( x ∈ L ⟺ x#r ∈ A ) ≥ 2/3.

This covers the necessary definitions and we can proceed with the collapse result of interest.

Theorem 18.

If 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯} and the polynomial hierarchy collapses to the second level.

Proof.

Assume 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯. By Toda 𝖯𝖧 ⊆ 𝖯^{#𝖯}, by Aaronson [1] 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝖯, and 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯} [13, 11]. This gives 𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯} and

𝖯^{#𝖯} = 𝖯^{𝖯𝖯} ⊆ 𝖯^{𝖡𝖯𝖯^{∥𝖭𝖯}} ⊆ 𝖯𝖧 ⊆ 𝖯^{#𝖯}.

Therefore 𝖯^{#𝖯} = 𝖯𝖧.

At this point we recall an interesting lemma from Toda [15] to continue:

𝖯𝖯^{𝖯𝖧} ⊆ 𝖡𝖯·𝖯𝖯.

Clearly 𝖯𝖧 ⊆ 𝖯𝖯^{𝖯𝖧}, and it can be shown that if 𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯} then 𝖡𝖯·𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯} (a full proof follows in Lemma 19), therefore

𝖯𝖧 ⊆ 𝖡𝖯·𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯}.

Since 𝖯𝖧 = 𝖯^{#𝖯} and 𝖡𝖯𝖯^{∥𝖭𝖯} ⊆ 𝖯𝖧, we conclude 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯}. This is exactly the condition of Theorem 2, so 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}, the final result.

Lemma 19.

If 𝖯𝖯 is in 𝖡𝖯𝖯^{∥𝖭𝖯} then so is 𝖡𝖯·𝖯𝖯.

Proof.

Fix some L ∈ 𝖡𝖯·𝖯𝖯. By the definition, there exists A ∈ 𝖯𝖯 which defines L. Assume 𝖯𝖯 ⊆ 𝖡𝖯𝖯^{∥𝖭𝖯}; then A ∈ 𝖡𝖯𝖯^{∥𝖭𝖯}, and therefore L ∈ 𝖡𝖯·𝖡𝖯𝖯^{∥𝖭𝖯}.

Note that majority-vote amplification can be applied both to the 𝖡𝖯· operator and to the 𝖡𝖯𝖯^{∥𝖭𝖯} algorithm to decrease each failure probability below 1/6.

Combining the random coin flips of both the 𝖡𝖯· and 𝖡𝖯𝖯^{∥𝖭𝖯} parts of the algorithm then gives a single algorithm with failure probability below 1/3, which fits the definition of 𝖡𝖯𝖯^{∥𝖭𝖯}.
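For completeness, the standard majority-vote amplification used here, sketched in Python (ours; trial is one run of the algorithm with fresh coins, erring with probability below 1/3):

    from collections import Counter

    def amplify(trial, x, reps=81):
        """Majority vote over independent runs; a Chernoff bound drives
        the failure probability below any desired constant, e.g. 1/6."""
        votes = Counter(trial(x) for _ in range(reps))
        return votes.most_common(1)[0][0]

Since the repeated runs are independent, all of their 𝖭𝖯 queries can still be issued together in a single parallel round, so the amplified algorithm remains in 𝖡𝖯𝖯^{∥𝖭𝖯}.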

Theorem 18 provides a quick and simple proof of Theorem 15 (the natural interpretation of our result).

Proof of Theorem 15.

By definition #𝖯 = 𝖤𝗑𝖺𝖼𝗍𝖢𝗈𝗎𝗇𝗍. O'Donnell and Say show 𝖯^{∥𝖠𝗉𝗉𝗋𝗈𝗑𝖢𝗈𝗎𝗇𝗍} = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 [13]. Therefore we can rewrite the antecedent of the theorem as 𝖯^{∥#𝖯} = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯.

Any problem in 𝖯𝖯 can be solved with one #𝖯 query, which is trivially one parallel round of oracle calls, so 𝖯𝖯 ⊆ 𝖯^{∥#𝖯}. This gives 𝖯𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 (as we already know 𝖯𝗈𝗌𝗍𝖡𝖯𝖯 ⊆ 𝖯𝖯), hence 𝖯𝗈𝗌𝗍𝖡𝖰𝖯 = 𝖯𝖯 = 𝖯𝗈𝗌𝗍𝖡𝖯𝖯, and by Theorem 18, 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}.

4 Improved Hardness of 𝗕𝗼𝘀𝗼𝗻𝗦𝗮𝗺𝗽𝗹𝗶𝗻𝗴, 𝗜𝗤𝗣, and 𝗗𝗤𝗖𝟭

In the original work on the hardness of 𝖡𝗈𝗌𝗈𝗇𝖲𝖺𝗆𝗉𝗅𝗂𝗇𝗀 [3], two cases were considered: the exact case, where the probability that the algorithm samples a given element must be multiplicatively close to the target distribution, and the approximate case, which allows additive error in the total variation distance between the sampled and target distributions. For the exact case, classical simulation implies the polynomial hierarchy collapses to the third level. For the approximate case, a collapse to the third level was also achieved, but subject to additional assumptions (we refer to these as "additional assumptions" henceforth). This distinction between multiplicative and additive error also applies to the work on instantaneous quantum polynomial-time sampling (𝖨𝖰𝖯) [8] and sampling from one-clean-qubit circuits (𝖣𝖰𝖢𝟣) [12], albeit with different additional assumptions. It is important to note that realistic quantum computers are believed to be unable to achieve multiplicative error in sampling, but can achieve additive error [8, 5]; so it is the approximate case that distinguishes quantum from classical computation.

[10] strengthened the collapse for the exact case to 𝖯𝖧 = 𝖠𝖬 ∩ 𝖼𝗈𝖠𝖬 (a collapse to the second level of the polynomial hierarchy) for 𝖡𝗈𝗌𝗈𝗇𝖲𝖺𝗆𝗉𝗅𝗂𝗇𝗀, 𝖣𝖰𝖢𝟣 and 𝖨𝖰𝖯 (amongst others). However, for fundamental reasons, the proof did not extend to the approximate case.

In this section, we show that efficient classical sampling in the physically relevant approximate case would also collapse the polynomial hierarchy to its second level (to 𝖹𝖯𝖯^{𝖭𝖯}), conditional on the existing additional assumptions. In this sense, our work provides the strongest known separation between classical and quantum computation for sampling and functional problems. While our result can also be applied to the exact case, it is not an improvement on [10]'s result there, as it is known that 𝖠𝖬 ∩ 𝖼𝗈𝖠𝖬 ⊆ 𝖹𝖯𝖯^{𝖭𝖯}; this application is therefore not relevant.

To apply Theorem 2 and show a second-level 𝖯𝖧 collapse we require the condition 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯}; however, the proofs contained in [3, 8, 12] only show 𝖯^{#𝖯} = 𝖡𝖯𝖯^{𝖭𝖯}. Fortunately, each of the proofs can easily be modified to use only parallel oracle calls, giving 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯}. Formally proving this fact would require reproducing each proof in detail and would make this paper excessively long. Instead of completely rewriting these proofs, we observe that each proof uses the 𝖭𝖯 oracle only to approximately count some postselected quantity with Stockmeyer's algorithm [14]. In the next lemma, we show that this computation can be done with parallel oracle calls. This allows us to argue that classical sampling of 𝖡𝗈𝗌𝗈𝗇𝖲𝖺𝗆𝗉𝗅𝗂𝗇𝗀, 𝖨𝖰𝖯 or 𝖣𝖰𝖢𝟣 would imply 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} without formally reproducing each of the proofs.

Lemma 20 (Counting a postselected quantity).

Given a Boolean function f: {0,1}^n → {0,1} and a postselection criterion h: {0,1}^n → {0,1}, let

p = Pr_{x ∈ {0,1}^n}[ f(x) = 1 | h(x) = 1 ] = Σ_{x ∈ {0,1}^n} f(x)h(x) / Σ_{x ∈ {0,1}^n} h(x).

Then for all g ≥ 1 + 1/poly(n), there exists an 𝖥𝖡𝖯𝖯^{∥𝖭𝖯^{f,h}} machine that approximates p to within a multiplicative factor of g.

The proof of Lemma 20 is straightforward once one realises that Stockmeyer's approximate counting can be carried out with only parallel oracle calls, which we capture in the next lemma.

Lemma 21 (Approximate counting in parallel).

Given a Boolean function f: {0,1}^n → {0,1}, let

p = Pr_{x ∈ {0,1}^n}[ f(x) = 1 ] = 2^{−n} Σ_{x ∈ {0,1}^n} f(x).

Then for all g ≥ 1 + 1/poly(n), there exists an 𝖥𝖡𝖯𝖯^{∥𝖭𝖯^f} machine that approximates p to within a multiplicative factor of g.

We will not spell out all the steps of this lemma, as it is a clear corollary of the version of the proof given by Valiant and Vazirani [16]. (To see this, note that they can fix their random vectors at the start of their algorithm and check in parallel whether 1, 2, up to n vectors suffice to empty the set.) The lemma is also a consequence of the results of O'Donnell and Say [13], who show that approximate counting is in 𝖯𝗈𝗌𝗍𝖡𝖯𝖯, and therefore in 𝖡𝖯𝖯^{∥𝖭𝖯}.
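A sketch of the parallelisation (ours, with a brute-force emptiness test standing in for the 𝖭𝖯 oracle): fix all the random GF(2) parity constraints up front, then ask, for every i simultaneously, whether some accepting input satisfies the first i constraints; the estimate 2^i comes from the largest i whose query is still satisfiable. This gives a constant-factor estimate with good probability, which repetition and the powering trick of Section 1 tighten to 1 + 1/poly(n):

    import random

    def parallel_approx_count(f, n):
        """Estimate |{x in {0,1}^n : f(x)=1}| to within a constant
        factor, with good probability, using ONE parallel round of
        emptiness queries. Brute force stands in for the NP oracle; a
        real reduction would phrase each question as a SAT instance."""
        accepting = [x for x in range(2 ** n) if f(x)]
        # Fix all random affine GF(2) parity constraints at the start.
        hs = [(random.getrandbits(n), random.getrandbits(1))
              for _ in range(n)]

        def meets_first(x, i):   # does x satisfy constraints h_1..h_i?
            return all(bin(a & x).count("1") % 2 == b for a, b in hs[:i])

        # One parallel round: for every i, "is some accepting x consistent
        # with the first i constraints?" -- these queries are non-adaptive.
        answers = [any(meets_first(x, i) for x in accepting)
                   for i in range(n + 1)]
        if not answers[0]:
            return 0             # no accepting inputs at all
        return 2 ** max(i for i in range(n + 1) if answers[i])

    print(parallel_approx_count(lambda x: x % 7 == 0, n=10))  # true count: 147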

We then proceed with the proof of lemma 20.

Proof of lemma 20.

Fix a target error g; by the assumption of the theorem there exists d such that g > 1 + 1/n^d. By Lemma 21 we can approximate Σ_{x∈{0,1}^n} f(x)h(x) to within a multiplicative factor of g′ = 1 + 1/(3n^d) with high probability, and similarly Σ_{x∈{0,1}^n} h(x). Dividing the first estimate by the second gives p to within a multiplicative factor of

g′² = 1 + 2/(3n^d) + 1/(9n^{2d}) ≤ 1 + 2/(3n^d) + 1/(9n^d) = 1 + (7/9)/n^d < 1 + 1/n^d

with sufficiently high probability. Since the two sums can be estimated in parallel and then divided for the final answer, all oracle calls happen in one parallel round. Given we only need to decrease the failure probability by a factor of 2, the final algorithm is in 𝖥𝖡𝖯𝖯^{∥𝖭𝖯^{f,h}}.
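A one-line numeric check (ours) of the error bookkeeping above, i.e. that two estimates each within a factor 1 + 1/(3n^d), one divided by the other, stay within a factor 1 + 1/n^d:

    for nd in [10, 100, 1000]:               # stand-ins for n**d
        g_prime = 1 + 1 / (3 * nd)
        assert g_prime ** 2 < 1 + 1 / nd     # worst case for the ratio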

We first apply the above logic to the 𝖡𝗈𝗌𝗈𝗇𝖲𝖺𝗆𝗉𝗅𝗂𝗇𝗀 results. Aaronson and Arkhipov use one postselection step in Lemma 42 of [3]; they then perform one approximate counting step on a quantity derived from this postselection step to prove their main theorem. This structure fits the format of Lemma 20, so we can state a parallelised version of their main theorem.

Theorem 22 (Aaronson and Arkhipov with parallel calls).

Let 𝒟_A be the probability distribution sampled by a boson computer A. Suppose there exists a classical algorithm C that takes as input a description of A as well as an error bound ε, and that samples from a probability distribution 𝒟′_A such that ∥𝒟′_A − 𝒟_A∥ ≤ ε, in poly(|A|, 1/ε) time. Then the |GPE|²_± problem is solvable in 𝖡𝖯𝖯^{∥𝖭𝖯}.

The |GPE|²_± problem (defined in [3]) becomes #𝖯-hard given two conjectures: the Permanent-of-Gaussians Conjecture (PGC) and the Permanent Anti-Concentration Conjecture (PACC). These conjectures capture the belief that the permanent of a Gaussian matrix does not concentrate close to zero and is hard to estimate; further justification of these conjectures is available in the original paper [3]. These are the additional assumptions we referred to earlier.

Theorem 23.

Assume PACC and PGC hold. If there exists a classical algorithm that can sample the distribution of a boson computer with additive error, then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯} and the polynomial hierarchy collapses to the second level.

Next, we discuss how the same strategy applies to the 𝖨𝖰𝖯 case. The proof provided by Bremner, Montanaro and Shepherd uses just Stockmeyer counting to prove the collapse to 𝖡𝖯𝖯^{𝖭𝖯} in their Theorem 1. By Lemma 21, all 𝖭𝖯 calls in their algorithm can be done in parallel, converting their Theorem 1 into a 𝖡𝖯𝖯^{∥𝖭𝖯} result. For 𝖨𝖰𝖯 we do not need to conjecture the concentration of some hard problem; the result rests only on the conjectured average-case hardness of approximately computing either the partition function of the Ising model or a property of low-degree polynomials over finite fields (Conjectures 2 and 3 in [8]).

Theorem 24 (Improved IQP hardness [8]).

Assume either of the above conjectures is true. If it is possible to classically sample from the output probability distribution of any 𝖨𝖰𝖯 circuit in polynomial time, up to an error of 1/192 in ℓ₁ norm, then 𝖯^{#𝖯} = 𝖹𝖯𝖯^{𝖭𝖯}. Hence the polynomial hierarchy would collapse to its second level.

Finally, the same strategy can be applied to improve Morimae's result [12] on the hardness of sampling from the 𝖣𝖰𝖢𝟣 model [10], as it again depends on Stockmeyer counting. The necessary conjecture for Morimae's result is an assumption directly about the average-case hardness of the one-clean-qubit model.

Theorem 25 (Improved one clean qubit hardness [12]).

Assuming Morimae's conjecture, if there exists a classical algorithm which can output samples from any one-clean-qubit machine with at most 1/36 error in the ℓ₁ norm, then there is a 𝖹𝖯𝖯^{𝖭𝖯} algorithm to solve any problem in 𝖯^{#𝖯}. Hence the polynomial hierarchy would collapse to its second level.

It seems likely that other results will fit the structure we give here, although we provide only these three results.

We finish this section by noting that the above sampling tasks are in 𝖲𝖺𝗆𝗉𝖡𝖰𝖯, so classical impossibility of any of these tasks would imply 𝖲𝖺𝗆𝗉𝖡𝖰𝖯 ≠ 𝖲𝖺𝗆𝗉𝖯, which in turn implies a separation in the functional classes, 𝖥𝖡𝖰𝖯 ≠ 𝖥𝖡𝖯𝖯 [2].

Theorem 26.

If any of the sets of conjectures above holds and the polynomial hierarchy does not collapse to its second level, then 𝖥𝖡𝖰𝖯 ≠ 𝖥𝖡𝖯𝖯 and, equivalently, 𝖲𝖺𝗆𝗉𝖡𝖰𝖯 ≠ 𝖲𝖺𝗆𝗉𝖯.

5 Conclusion and Open Problems

This paper has shown that if 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥𝖭𝖯} then the polynomial hierarchy collapses to its second level. We have connected this result to approximate/exact counting and shown that it improves a number of results demonstrating the separation of quantum and classical computing. The natural next research question is "how low can we go?". [10] brought the multiplicative-error case down to 𝖠𝖬 ∩ 𝖼𝗈𝖠𝖬; perhaps the same could be achieved here. We have also not used the fact that the 𝖹𝖯𝖯^{𝖭𝖯} algorithm only needs two rounds of oracle calls; this could be an avenue to collapsing the hierarchy further.

This hierarchy collapse strengthens claims of quantum supremacy, but as these claims also rest on a number of additional assumptions (e.g. the permanent-of-Gaussians conjecture), diminishing, removing or proving those assumptions remains one of the central challenges in showing quantum supremacy and proving 𝖲𝖺𝗆𝗉𝖡𝖰𝖯 ≠ 𝖲𝖺𝗆𝗉𝖯. Alternatively, further work may reveal one of these assumptions to be false, as happened with XQUATH [5].

Our results offer other, more direct, extensions. Of particular interest is whether our main theorem relativises, as previous supremacy results did [3]. Avoiding the use of the checkability of #𝖯 may be key to proving relativisation, as this is the only step of our proof that does not relativise.

Another open question is how Theorem 2 may be used for other, non-quantum purposes, perhaps wherever approximate and exact counting are being compared. Alternatively, a research line we did not pursue is extending Theorem 2. Extensions could be to other checkable rsr languages, perhaps 𝖯𝖲𝖯𝖠𝖢𝖤 or 𝖤𝖷𝖯 (although these may only be adaptively rsr [9]), or to showing that the equality holds for other levels of the polynomial hierarchy (does 𝖯^{#𝖯} = 𝖡𝖯𝖯^{∥Σ𝗂} imply 𝖯^{#𝖯} = 𝖹𝖯𝖯^{Σ𝗂}?).

References

  • [1] Scott Aaronson. Quantum computing, postselection, and probabilistic polynomial-time. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 461(2063):3473–3482, 2005.
  • [2] Scott Aaronson. The equivalence of sampling and searching. Theory of Computing Systems, 55(2):281–298, 2014. doi:10.1007/S00224-013-9527-3.
  • [3] Scott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 333–342, 2011. doi:10.1145/1993636.1993682.
  • [4] Scott Aaronson, Greg Kuperberg, and Christopher Granade. The complexity zoo, 2005.
  • [5] Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. A polynomial-time classical algorithm for noisy random circuit sampling. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 945–957, 2023. doi:10.1145/3564246.3585234.
  • [6] V. Arvind and Johannes Köbler. New lowness results for ZPP^NP and other complexity classes. Journal of Computer and System Sciences, 65(2):257–277, 2002. doi:10.1006/jcss.2002.1835.
  • [7] Manuel Blum and Sampath Kannan. Designing programs that check their work. Journal of the ACM (JACM), 42(1):269–291, 1995. doi:10.1145/200836.200880.
  • [8] Michael J Bremner, Ashley Montanaro, and Dan J Shepherd. Average-case complexity versus approximate simulation of commuting quantum computations. Physical review letters, 117(8):080501, 2016.
  • [9] Joan Feigenbaum and Lance Fortnow. On the random-self-reducibility of complete sets. Technical Report 90-22, University of Chicago, Computer Science Department, 1990.
  • [10] Keisuke Fujii, Hirotada Kobayashi, Tomoyuki Morimae, Harumichi Nishimura, Shuhei Tamate, and Seiichiro Tani. Power of quantum computation with few clean qubits. arXiv preprint, 2015. arXiv:1509.07276.
  • [11] Yenjo Han, Lane A Hemaspaandra, and Thomas Thierauf. Threshold computation and cryptographic security. SIAM Journal on Computing, 26(1):59–78, 1997. doi:10.1137/S0097539792240467.
  • [12] Tomoyuki Morimae. Hardness of classically sampling the one-clean-qubit model with constant total variation distance error. Physical Review A, 96(4):040302, 2017.
  • [13] Ryan O'Donnell and A. C. Cem Say. The weakness of CTC qubits and the power of approximate counting. ACM Transactions on Computation Theory (TOCT), 10(2):1–22, 2018. doi:10.1145/3196832.
  • [14] Larry Stockmeyer. On approximation algorithms for #P. SIAM Journal on Computing, 14(4):849–861, 1985. doi:10.1137/0214060.
  • [15] Seinosuke Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20(5):865–877, 1991. doi:10.1137/0220053.
  • [16] Leslie G. Valiant and Vijay V. Vazirani. NP is as easy as detecting unique solutions. In Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing, pages 458–463, 1985. doi:10.1145/22145.22196.