Single-Round Proofs of Quantumness from Knowledge Assumptions
Abstract
A proof of quantumness is an efficiently verifiable interactive test that an efficient quantum computer can pass, but all efficient classical computers cannot (under some cryptographic assumption). Such protocols play a crucial role in the certification of quantum devices. Existing single-round protocols based solely on a cryptographic hardness assumption (like asking the quantum computer to factor a large number) require large quantum circuits, whereas multi-round ones use smaller circuits but require experimentally challenging mid-circuit measurements.
In this work, we construct efficient single-round proofs of quantumness based on existing knowledge assumptions. While knowledge assumptions have not been previously considered in this context, we show that they provide a natural basis for separating classical and quantum computation. Our work also helps in understanding the interplay between black-box/white-box reductions and cryptographic assumptions in the design of proofs of quantumness. Specifically, we show that multi-round protocols based on Decisional Diffie-Hellman (DDH) or Learning With Errors (LWE) can be “compiled” into single-round protocols using a knowledge-of-exponent assumption [7] or knowledge-of-lattice-point assumption [36], respectively. We also prove an adaptive hardcore-bit statement for a family of claw-free functions based on DDH, which might be of independent interest.
Keywords and phrases:
Proofs of quantumness, Knowledge assumptions, Learning with errors, Decisional Diffie-Hellman
Funding:
Alexandru Gheorghiu: Knut and Alice Wallenberg Foundation, through the Wallenberg Centre for Quantum Technology (WACQT).
2012 ACM Subject Classification:
Theory of computation → Cryptographic protocols
Acknowledgements:
We thank Alex Lombardi, Urmila Mahadev, Greg Kahanamoku-Meyer, Umesh Vazirani, John Wright, and Tina Zhang for helpful discussions. We are especially grateful to Vinod Vaikuntanathan for suggesting many of these ideas in the early stages of the project.
Funding:
P.A., V.G. and T.M. acknowledge support from the ETH Zurich Quantum Center and the Air Force Office for Scientific Research, grant No. FA9550-19-1-0202.
1 Introduction
Demonstrating quantum advantage, the point where a quantum computer can solve a problem that no existing classical computer can, is both a theoretical and technological challenge. It requires a problem that is plausibly intractable for classical algorithms and which admits an efficient quantum algorithm, ideally one that can be performed with noisy intermediate-scale quantum (NISQ) devices [42]. Additionally, the problem’s solution should be efficiently verifiable by a classical computer. This is necessary if one wishes to have a convincing and scalable way of proving quantum advantage. Currently, there are three main paradigms for demonstrating quantum advantage.
The most straightforward one is to solve a problem which is believed to be classically hard, such as integer factorization. A quantum computer could efficiently solve this task by running Shor’s algorithm [43] and the solution can be efficiently verified by multiplying the factors reported by the prover. But while Shor’s algorithm is efficient in the sense of only requiring a polynomial-size quantum circuit, the actual circuit for any reasonably-sized integer to be factored is too large to implement with NISQ devices [22, 25]. Another approach is based on the classical hardness of sampling problems such as random circuit sampling [12, 6, 44] or boson sampling [1, 48, 37]. While these experiments can be implemented with current hardware, they are not efficiently verifiable.
The work of Brakerski et al. [13] introduced a new approach towards testing for quantum advantage, referred to as a proof of quantumness protocol. Akin to the cryptographic notions of proof or argument systems [24, 15, 23], a proof of quantumness is an interactive protocol between a polynomial-time classical verifier and an ostensibly quantum polynomial-time prover. The verifier issues challenges to the prover and checks the correctness of the prover’s responses. The key feature of such a protocol is that there should exist an efficient quantum strategy that allows the prover to correctly answer the verifier’s challenges with high probability, whereas any efficient classical strategy can only succeed with low probability (under some plausible cryptographic hardness assumption such as the classical intractability of factoring, Decisional Diffie-Hellman (DDH), or the Learning With Errors (LWE) problem).
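Schematically, writing $\langle V, P \rangle$ for the verifier's verdict after interacting with a prover $P$, a proof of quantumness with completeness $c$ and soundness $s < c$ requires (in our notation, with QPT/PPT denoting quantum/classical polynomial time):
$$\exists\, P_Q \in \mathrm{QPT}:\ \Pr\big[\langle V, P_Q \rangle = \mathrm{accept}\big] \geq c, \qquad \forall\, P_C \in \mathrm{PPT}:\ \Pr\big[\langle V, P_C \rangle = \mathrm{accept}\big] \leq s + \mathrm{negl}(\lambda),$$
with a noticeable gap between $c$ and $s$.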
In contrast to other paradigms for proving quantum advantage, the cryptographic proofs of quantumness of Brakerski et al. and follow-up works do not require the quantum prover to break the underlying computational hardness assumption. Instead, they leverage the fact that mid-protocol interaction restricts a classical prover's ability to correctly answer the verifier's subsequent challenges far more severely than it restricts a quantum prover.
The advantage of these protocols over an integer factorization-based test is that they have much smaller circuits than Shor's algorithm, making them potentially more suitable for implementation on near-term devices [30, 29, 49, 26, 35, 4]. However, due to their interactive nature, they require the honest quantum prover to perform mid-circuit measurements for each of the verifier's challenges. Mid-circuit measurements on a subset of qubits are difficult to implement on existing quantum devices without disturbing neighboring qubits and can thus degrade the quality of the remaining computation. An experimental implementation of the two-round protocol by Brakerski et al. [49] directly compared the performance of an ion-trap quantum computer running the protocol with and without mid-circuit measurements, revealing a significant difference. Thus, implementing this proof of quantumness protocol at the scale required for quantum advantage seems especially challenging.[1] It would therefore be desirable to have proofs of quantumness with relatively small quantum circuits and which involve only one round[2] of interaction between the verifier and the prover. Beyond the practical motivation, this would also give us a better understanding of the structure required for demonstrating quantum advantage in a way that is efficiently verifiable.
[1] Note that the instance sizes of the underlying cryptographic problem in [49] are so small that they can easily be broken by a laptop. For a convincing demonstration of quantum advantage, one would have to use instance sizes that cannot be broken even by the fastest classical supercomputers.
[2] Throughout the paper, we use the convention that a one-round protocol refers to a protocol with two messages, one message from the verifier to the prover and one message back.
The recent breakthrough work of Yamakawa and Zhandry made progress in this direction by giving a single-round proof of quantumness protocol in the random oracle model (ROM) [45]. Prior to their work, Brakerski et al. constructed single-round proofs of quantumness in the ROM that additionally required a structured computational assumption, such as factoring or LWE [14]. However, with both of these approaches the circuits that the prover would have to perform are possibly larger than those in existing multi-round proofs of quantumness [13, 29].
Our main result is a single-round proof of quantumness that only requires the same small circuits as an existing multi-round interactive proof of quantumness and is based on knowledge assumptions, a cryptographic idea that turns out to be natural for proofs of quantumness. We achieve this by starting from the two-round protocol of [13] and removing one of the rounds of interaction through the use of a knowledge assumption [17, 40, 16, 36, 32]. As explained in [32], a knowledge assumption is a statement of the form:
“If an algorithm outputs an object of type $X$, it must know a corresponding witness of type $W$, such that the output and the witness are in some relation $R$.”
The rationale behind knowledge assumptions is that certain computational tasks, performed by some probabilistic algorithm $\mathcal{A}$, can only be performed efficiently by following a specific sequence of steps, thus obtaining a series of intermediate values. Informally, we say that $\mathcal{A}$ must have “known” the intermediate values for its specific output. This is made more precise by saying that there exists an efficient extractor that receives as input $\mathcal{A}$'s random coins and outputs (or extracts) the relevant intermediate values of $\mathcal{A}$.
Knowledge assumptions have traditionally been used for the design of protocols that require both extractability (like in proof/argument of knowledge protocols) and succinctness [9, 32]. In our case, we are able to leverage knowledge assumptions to construct efficient, single-round proofs of quantumness. As far as we are aware, this is the first time knowledge assumptions are used in this context. We argue that their application here is natural and in some sense necessary, if one wishes to avoid multi-round interaction or working in the ROM. Intuitively, this is because some aspect of the proof of quantumness protocol has to differentiate between classical and quantum provers. In multi-round protocols, this distinction comes from the fact that classical provers can be rewound, but quantum provers generally cannot. In a ROM-based protocol, the distinction arises from the ability to record classical queries to a random oracle in a way that is not possible quantumly.
In our single-round protocols, the classical-quantum distinction is due to the knowledge assumption: for example, if $f$ is a one-way function with a sparse range (i.e. most values in the co-domain are not in the range of $f$), it is plausible that a classical prover that produces a value in the range of $f$ must have done so by evaluating the function on some input, and must therefore also “know” the corresponding preimage; this can be captured formally as a knowledge assumption [16, 8]. However, a quantum prover can do the following: first prepare the state $\sum_x |x\rangle |f(x)\rangle$, then measure the second register in the computational basis to obtain an image $y$, and measure the first register in the Hadamard basis to “erase” any knowledge of the preimage. Such a prover can be said to produce a value in the range of $f$ without knowledge of a preimage. It is this distinction between classical and quantum computation that our single-round proofs of quantumness exploit. We note that a related idea of oblivious LWE sampling was recently explored in [18]; since a proof of quantumness only requires soundness against classical provers, we do not need to analyse the quantum prover in detail, but it might be interesting to explore the relation between the knowledge assumptions used in our work and the (strong) notion of oblivious LWE sampling proposed in [18]. We discuss the classical-quantum distinction and the impossibility of single-round proofs of quantumness with black-box security reductions in more detail in Section 3.
Organisation
The rest of this work is structured as follows: in Section 1.1, we give a brief overview of the two-round proof of quantumness from [13]. This will form the basis for our single-round proof of quantumness. In Section 1.2, we recall two existing knowledge assumptions from [7, 36] that we use in our protocols. In Section 2, we explain how to make use of these knowledge assumptions to obtain different single-round proofs of quantumness. In Section 3 we argue that a white-box assumption (like a knowledge assumption) seems to be necessary for single-round proofs of quantumness. Finally, in Section 4 and Section 5 we discuss additional related works and open questions. For a formal description of our results and proofs, we refer to the full version of this paper [5].
1.1 A two-round proof of quantumness
To explain our results, we first need to outline the two-round (four-message) proof of quantumness protocol from [13]. At the heart of this protocol is a collection of functions known as Trapdoor Claw-free Functions (TCF). TCFs are a type of collision-resistant hash function – they are 2-to-1 functions for which it should be intractable to find a colliding pair of inputs (known as a claw), given the description of the function. Additionally, the functions are generated with a trapdoor that allows for efficient inversion. The specific TCFs used in [13] require an additional property known as the adaptive hardcore-bit (AHCB) property. Informally, this states that it is not only intractable to find collisions, but given any particular input $x_0$ and its image under the TCF $f$, denoted $y = f(x_0)$, it should be intractable to recover even a single bit of $x_1$, the other input with which $x_0$ forms a claw (i.e. $f(x_1) = f(x_0) = y$).
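Written out informally in the notation of the protocol below (where $d \cdot (x_0 \oplus x_1)$ denotes an inner product mod 2), one convenient formulation of the AHCB property is that for every polynomial-time classical algorithm $\mathcal{A}$ given the description of $f$,
$$\Pr\big[ (x, y, d) \leftarrow \mathcal{A}(f) \,:\ f(x) = y,\ d \neq 0,\ d \cdot (x_0 \oplus x_1) = 0 \bmod 2 \big] \;\leq\; \frac{1}{2} + \mathrm{negl}(\lambda),$$
where $x_0, x_1$ are the two preimages of $y$. Guessing $d$ at random succeeds with probability $\frac{1}{2}$, so the property states that even a prover holding a preimage can do no better than guessing; the precise statement in [13] is more careful, but this version suffices for the discussion here.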
Protocol 1 (informal). The verifier samples a TCF $f$ together with a trapdoor $t$ and sends the description of $f$ to the prover, who replies with an image $y$. The verifier then issues one of the following two challenges, chosen uniformly at random:
(a) Preimage test. The verifier asks the prover for a valid preimage of $y$. Denoting the prover's response as $x$, the verifier accepts iff $f(x) = y$.
(b) Equation test. The verifier asks the prover for a non-zero string $d$ such that $d \cdot (x_0 \oplus x_1) = 0 \pmod 2$, where $x_0, x_1$ are the two preimages of $y$. The verifier uses the trapdoor $t$ to recover $x_0$ and $x_1$ from $y$ and then checks whether the equation is satisfied, accepting if it is and rejecting otherwise.
Given a family of TCFs with the AHCB property, the Brakerski et al. protocol from [13] is described informally in Protocol 1. Brakerski et al. showed that, assuming the AHCB property, no polynomial-time classical prover makes the verifier accept with probability non-negligibly[3] larger than $\frac{3}{4}$ (this is referred to as the soundness of the protocol). At the same time, they gave a simple quantum strategy which would allow the prover to succeed with probability $1$ (referred to as the completeness of the protocol).
[3] A negligible function, denoted by $\mathrm{negl}(\lambda)$, is a function that is $o(\lambda^{-c})$ for every constant $c$.
This honest prover first creates an equal superposition over evaluations of $f$,
$$\frac{1}{\sqrt{|\mathcal{X}|}} \sum_{x \in \mathcal{X}} |x\rangle\, |f(x)\rangle ,$$
where $\mathcal{X}$ denotes the domain of $f$.
The prover then measures the second register, resulting in a value $y$ and collapsing the state in the first register to
$$\frac{1}{\sqrt{2}}\left(|x_0\rangle + |x_1\rangle\right), \qquad \text{where } f(x_0) = f(x_1) = y. \tag{1.1}$$
We can see that if this state is measured in the computational basis, it results in one of the two preimages of $y$, thus providing a valid response to the preimage test. Conversely, if the state is measured in the Hadamard basis (i.e. applying Hadamard gates to all qubits and measuring in the computational basis), the result will be a string $d$ such that $d \cdot (x_0 \oplus x_1) = 0 \pmod 2$, yielding a valid response to the equation test. Thus, a quantum prover implementing this strategy would make the verifier accept with probability 1.
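To see why the Hadamard-basis measurement always yields such a string, apply $H^{\otimes n}$ to the state of Equation 1.1 (a standard calculation):
$$H^{\otimes n}\, \frac{1}{\sqrt{2}}\big(|x_0\rangle + |x_1\rangle\big) = \frac{1}{\sqrt{2^{n+1}}} \sum_{d \in \{0,1\}^n} \Big( (-1)^{d \cdot x_0} + (-1)^{d \cdot x_1} \Big)\, |d\rangle .$$
The amplitude of $|d\rangle$ vanishes unless $(-1)^{d \cdot x_0} = (-1)^{d \cdot x_1}$, i.e. unless $d \cdot (x_0 \oplus x_1) = 0 \pmod 2$, so the measurement outputs a uniformly random $d$ satisfying the equation test (the outcome $d = 0$ occurs only with negligible probability).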
The intuition for why it is classically intractable to succeed in this protocol is that a classical prover that answers both the preimage and equation tests correctly can be rewound so as to obtain both a valid preimage and a valid equation. However, having both would contradict the AHCB property. In contrast, a quantum prover cannot be rewound and can use the state from Equation 1.1 to answer either of the two challenges correctly, but not both at the same time. Communication between the two rounds is crucial for the security proof, as a reduction to the AHCB property requires that the prover holds a preimage and an equation for the same image $y$. This can only be guaranteed if the prover commits to a fixed choice of $y$ before receiving the second challenge.
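This honest strategy can also be simulated classically on toy instances. The following self-contained Python snippet (our own illustration; the toy 2-to-1 function has none of the cryptographic properties of a real TCF) samples the image measurement and the Hadamard-basis outcome from their exact distributions and checks that the sampled $d$ always passes the equation test:

```python
import itertools, random
from collections import defaultdict

N = 4  # number of input bits

def toy_2to1(x):
    """A toy 2-to-1 function: it ignores the first bit, so x and x with its
    first bit flipped always collide (the claw structure is public here,
    unlike for a real TCF)."""
    return x[1:]

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

# Step 1: the uniform superposition over |x>|f(x)>, with the image register
# measured. Each image y is equally likely, and the input register collapses
# to (|x0> + |x1>)/sqrt(2) for the claw (x0, x1) of y.
preimages = defaultdict(list)
for x in itertools.product([0, 1], repeat=N):
    preimages[toy_2to1(x)].append(x)
y = random.choice(list(preimages))
x0, x1 = preimages[y]

# Step 2: Hadamard-basis measurement of the input register. The outcome
# distribution is proportional to |(-1)^(d.x0) + (-1)^(d.x1)|^2, i.e. uniform
# over {d : d.(x0 XOR x1) = 0} and zero elsewhere.
xor = tuple(a ^ b for a, b in zip(x0, x1))
support = [d for d in itertools.product([0, 1], repeat=N) if dot(d, xor) == 0]
d = random.choice(support)

assert dot(d, xor) == 0  # (y, d) is a valid equation-test response
```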
A natural question is whether the equation test on its own is already classically intractable. Unfortunately, simply removing the preimage test breaks the security proof of [13]: the AHCB property says that it is hard to produce an image, an equation, and a preimage together. Therefore, without the preimage test one cannot use a successful classical prover in the proof of quantumness to break the AHCB property, as one does not have access to a preimage.[4] Indeed, it can be shown that no black-box reduction can reduce the security of this “equation-test only” proof of quantumness to LWE ([39], see also Section 3).
[4] We remark here that this is actually not the only problem that arises when removing the preimage test. In fact, Urmila Mahadev observed that for both constructions we consider, there exists a classical winning strategy in the equation test which involves evaluating the function on an extended domain. In the original Brakerski et al. protocol [13], this strategy is excluded by the preimage test. In our protocols, this strategy is excluded by adding another type of test which does not increase the number of rounds of the protocol and is explained in Section 2.
Our results show that a variation of the “equation-test only” proof of quantumness can be made to work, provided we use a knowledge assumption to replace the role of the preimage test. Intuitively, the knowledge assumption can be used to “extract” a preimage from a successful classical prover without the need for an explicit preimage test. Since a knowledge assumption deals with the inner workings of a prover, our result can be interpreted as a white-box reduction from a version of the equation test to the AHCB property.
1.2 Knowledge assumptions
Knowledge assumptions posit the existence of an efficient extractor that is able to produce certain intermediate values that are in some relation with the output of an algorithm. A canonical example is the so-called knowledge of exponent assumption (KEA), introduced by Damgård in [17]. Informally, this says that given a generator $g$ of some multiplicative group $G$, as well as $g^a$ for some random power $a$, the only way to produce a new pair of the form $(h, h^a)$ is by exponentiating the original pair. In other words, any efficient (randomized) algorithm $\mathcal{A}$ which outputs $(h, h^a)$ given $(g, g^a)$ must “know” an exponent $b$ such that $h = g^b$. This is formalized by saying that for every such algorithm $\mathcal{A}$, there exists an efficient extractor $\mathcal{E}$ that, given $(g, g^a)$ and the random coins of $\mathcal{A}$ for which $\mathcal{A}$ outputs $(h, h^a)$, will output (or extract) the exponent $b$ such that $h = g^b$ with overwhelming probability:
Assumption 1 (KEA (informal), from [17]).
Let $\mathcal{A}$ be an efficient probabilistic algorithm which receives as input a generator $g$ of a multiplicative group $G$ and a random group element $g^a$. Then there exists an efficient extractor $\mathcal{E}$, which uses the same input and random coins as $\mathcal{A}$, and such that
$$\Pr\Big[\mathcal{A}(g, g^a) = (h, h^a) \ \wedge\ g^{\mathcal{E}(g, g^a)} \neq h\Big] \leq \mathrm{negl}(\lambda). \tag{1.2}$$
Variations of this assumption have also been considered, such as the $q$-KEA of [7], in which the input to $\mathcal{A}$ is instead of the form $(g, g^{\mathbf{a}})$, where $\mathbf{a}$ is a $q$-dimensional vector of exponents and exponentiation is element-wise.
The rationale behind knowledge of exponent assumptions is that the function that maps $b$ to $(g^b, (g^a)^b)$ has a sparse image (considered as a subset of $G \times G$). Thus, it seems implausible that $\mathcal{A}$ could come up with a valid image simply by cleverly sampling its output from $G \times G$ without exponentiating the input.[5]
[5] An alternative way of coming up with an image would be if $\mathcal{A}$ could determine $a$, given $g$ and $g^a$. However, this would entail solving the discrete logarithm problem, which is believed to be classically intractable.
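As a concrete toy illustration of the extraction guarantee (with an artificially small group and parameters of our own choosing, offering no security), consider the honest exponentiating algorithm together with the extractor that replays its random coins:

```python
import random

P = 2**61 - 1   # a Mersenne prime; the group is the multiplicative group mod P
g = 3           # fixed base, standing in for a generator

def honest_algorithm(g, ga, coins):
    """The 'intended' way to produce a pair (h, h^a): raise both components
    of the input pair (g, g^a) to the same power b."""
    rng = random.Random(coins)
    b = rng.randrange(1, P - 1)          # the intermediate value A 'knows'
    return pow(g, b, P), pow(ga, b, P)   # (g^b, (g^a)^b) = (h, h^a)

def extractor(g, ga, coins):
    """KEA extractor for honest_algorithm: re-running the algorithm's steps
    with the same random coins reveals the exponent b."""
    rng = random.Random(coins)
    return rng.randrange(1, P - 1)

a = random.randrange(1, P - 1)
ga = pow(g, a, P)
coins = random.getrandbits(64)
h, ha = honest_algorithm(g, ga, coins)
b = extractor(g, ga, coins)
assert pow(g, b, P) == h and pow(ga, b, P) == ha  # extraction succeeds
```

The assumption asserts that, up to negligible probability, every efficient classical algorithm producing valid pairs admits such an extractor; the honest algorithm above is simply the strategy the assumption deems to be the only feasible one.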
More recently, similar knowledge assumptions were proposed for lattices. One example, introduced in [36], is known as the knowledge of lattice point assumption (denoted LK-$\epsilon$). This says the following: suppose there is an efficient randomized classical algorithm $\mathcal{A}$ which takes as input a lattice basis $B$ and outputs a point $t$ that is close to the lattice $\Lambda(B)$ (within a fraction $\epsilon$ of the length of the shortest lattice vector). Then $\mathcal{A}$ must “know” the lattice point closest to $t$.
Assumption 2 (LK-$\epsilon$ (informal), from [36]).
Let $\epsilon$ be a fixed constant in $(0, \frac{1}{2})$. Denote by $\lambda_1(\Lambda)$ the norm of the shortest vector in a lattice $\Lambda$. Let $\mathcal{A}$ be an efficient probabilistic algorithm which receives as input a basis $B$ of a lattice $\Lambda$ and outputs a vector $t$ with $\mathrm{dist}(t, \Lambda) \leq \epsilon\, \lambda_1(\Lambda)$. Then there exists an efficient algorithm $\mathcal{E}$ which uses the same input and random coins as $\mathcal{A}$, and outputs a lattice point $v \in \Lambda$ such that
$$\Pr\big[\, \| t - v \| \leq \epsilon\, \lambda_1(\Lambda) \,\big] \geq 1 - \mathrm{negl}(\lambda). \tag{1.3}$$
The motivation for this assumption is similar to that of KEA – the set of points that are close to the lattice (for a suitable choice of $\epsilon$) is sparse in the ambient space, and the only apparent efficient strategy to sample from this sparse set is to pick a random lattice point and perturb it. This lattice knowledge assumption and variants of it have been used to construct efficient FHE and SNARK schemes [19, 27].
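The “intended” sampling strategy, and the corresponding coin-replaying extractor, can be illustrated with the same pattern (again a toy of our own making, with a fixed low-dimensional basis and an arbitrary closeness bound standing in for $\epsilon\,\lambda_1(\Lambda)$):

```python
import random

B = [[7, 1], [2, 9]]   # rows are the basis vectors of a toy 2-dimensional lattice
EPS_BOUND = 1.0        # illustrative stand-in for eps * lambda_1(Lattice(B))

def lattice_point(B, z):
    """The lattice point with integer coefficient vector z (w.r.t. basis B)."""
    return [sum(z[i] * B[i][j] for i in range(len(B))) for j in range(len(B[0]))]

def honest_sampler(B, coins):
    """The 'intended' strategy: perturb a random lattice point slightly."""
    rng = random.Random(coins)
    z = [rng.randrange(-10, 10) for _ in B]
    e = [rng.uniform(-0.5, 0.5) for _ in B[0]]   # small perturbation
    return [vi + ei for vi, ei in zip(lattice_point(B, z), e)]

def extractor(B, coins):
    """LK-style extractor: replay the coins to recover the perturbed point."""
    rng = random.Random(coins)
    z = [rng.randrange(-10, 10) for _ in B]
    return lattice_point(B, z)

coins = random.getrandbits(64)
t = honest_sampler(B, coins)
v = extractor(B, coins)
assert sum((ti - vi) ** 2 for ti, vi in zip(t, v)) ** 0.5 <= EPS_BOUND
```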
2 Main results
Our main result is that combining knowledge assumptions with standard cryptographic assumptions, like LWE or DDH, leads to efficient single-round proof of quantumness protocols. To make our results modular, we first show how to construct a general single-round proof of quantumness from a cryptographic primitive that we call Doubly Extended Extractable Noisy Trapdoor Claw-free Functions (abbreviated e3NTCF). Second, we give two constructions of e3NTCF: one from the DDH and $q$-KEA assumptions, and one from the LWE and LK-$\epsilon$ assumptions.
2.1 Single-round proof of quantumness from e3NTCF
Our starting point is the protocol from [13], which we explained in Section 1.1. Recall that there, one uses a TCF with the AHCB property, and argues that if a classical prover could succeed in the preimage and equation tests, by rewinding we could construct a tuple of a preimage, image, and equation that would contradict the AHCB property.
Now suppose that the TCF family used in this protocol had an additional extractability property: for any classical prover that produces an image $y$, there exists an extractor that produces a corresponding preimage $x$. This is, in essence, a knowledge assumption. With such an extractable TCF, we could simply remove the preimage test from Protocol 1: then, if we had a classical prover that succeeded in this modified protocol, we could use that prover to find an image $y$ and an equation $d$, and use the extractor to find a corresponding preimage $x$, such that the triple $(x, y, d)$ breaks the AHCB property.
Unfortunately, we do not know how to construct such an extractable TCF with AHCB from existing knowledge assumptions such as $q$-KEA or LK-$\epsilon$.[6] The key difficulty here is that the extractability and AHCB properties have to hold simultaneously.
[6] To be precise, we found that using these knowledge assumptions directly on the known NTCF constructions based on DDH and LWE does not work to infer extractability.
One way to circumvent this issue would be to introduce an additional function family $\mathcal{H}$ that is indistinguishable from the TCF family $\mathcal{F}$ (i.e. given a description of a random choice of either kind of function, it is hard to tell which kind of function it is). This function family can be constructed such that it has an extractability property, i.e. if a classical algorithm produces a value in the image of a function $h \in \mathcal{H}$, an extractor can find a preimage (under a standard knowledge assumption such as $q$-KEA). However, $\mathcal{H}$ itself is not a TCF family and has no AHCB property.
We now want to combine the AHCB property of $\mathcal{F}$ with the extractability property of $\mathcal{H}$. For this we leverage that the two function families are computationally hard to distinguish:[7] if we send a description of either an $f \in \mathcal{F}$ or an $h \in \mathcal{H}$ to a classical prover, the prover cannot tell which kind of function it received. This suggests the following protocol.
[7] One may ask here why we do not use computational indistinguishability to directly transfer the extractability property of $\mathcal{H}$ to $\mathcal{F}$, given that the extractor is an efficient algorithm. This has to do with the exact form of the extractability property: successful extraction is only guaranteed under the condition that the algorithm's output is a valid image, which cannot be checked efficiently without the trapdoor. Thus, while we can exploit the computational indistinguishability of $\mathcal{F}$ and $\mathcal{H}$ in our protocol, it is not possible to use computational indistinguishability to infer an analogous extractability property for $\mathcal{F}$.
1. The verifier generates either a description of a TCF $f$ (with a trapdoor $t$) or a description of an extractable function $h$, and sends this function description to the prover.
2. The verifier receives a pair $(y, d)$ from the prover.
- If the verifier sent a TCF $f$, it uses the trapdoor $t$ to recover $x_0$ and $x_1$ such that $f(x_0) = f(x_1) = y$ and accepts iff $d \cdot (x_0 \oplus x_1) = 0$.
- If the verifier sent an extractable function $h$, it accepts iff $y$ is in the image of $h$.
Suppose that a classical prover succeeded in this protocol with high probability. We can use this prover (and the corresponding extractor for the extractable function family) to break the AHCB property of $\mathcal{F}$ as follows: given a description of $f$, run the prover to generate $(y, d)$. Then run the extractor for this prover on input $f$ to generate a preimage $x$. We claim that the triple $(x, y, d)$ violates the AHCB property.
Note that this reduction runs the extractor for the function family $\mathcal{H}$ on an input $f \in \mathcal{F}$, i.e. it runs the extractor on an input for which it was not designed. However, recall that descriptions of $f$ and $h$ are computationally indistinguishable. On the other hand, the extractor as well as the procedure that checks whether the extractor produced a correct preimage are efficient. Therefore, since the protocol ensures that on input $h$, the classical prover produces a valid image of $h$, it follows that when we run the extractor for this classical prover on an input $f$, it will still produce a valid preimage of the prover's output; otherwise, we could distinguish $f$ from $h$.
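In symbols, the hybrid step reads as follows (our informal rendering, with $k$ the function description sent to the prover, $y$ the prover's output, and $x'$ the extractor's output): since checking $k(x') = y$ only requires one function evaluation, a noticeable gap between the two probabilities below would itself constitute an efficient distinguisher between the families, so
$$\Pr_{k \leftarrow \mathcal{F}}\big[\, k(x') = y \,\big] \;\geq\; \Pr_{k \leftarrow \mathcal{H}}\big[\, k(x') = y \,\big] - \mathrm{negl}(\lambda),$$
and the right-hand side is large because a prover that succeeds in the protocol outputs a valid image on input $h$, triggering successful extraction.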
When trying to construct such a pair of function families from DDH and $q$-KEA, we are faced with one additional technical challenge: the only evident construction of an extractable function family works by extending the functions $f: \mathcal{X} \to \mathcal{Y}$ to a larger domain $\mathcal{X}' \supset \mathcal{X}$ (on which $\mathcal{F}$ no longer satisfies the AHCB property) and constructing extractable functions $h: \mathcal{X}' \to \mathcal{Y}$, i.e. the extractability property of $\mathcal{H}$ only holds with respect to the extended domain $\mathcal{X}'$. This means that the extractor may produce an $x' \in \mathcal{X}' \setminus \mathcal{X}$ that also maps to $y$. While this is a valid preimage of $y$, it is not useful for breaking the AHCB property: for that we need a preimage in $\mathcal{X}$.
We therefore introduce a third function family, $\mathcal{G}$, that is computationally indistinguishable from both $\mathcal{H}$ and the extension of $\mathcal{F}$. The functions $g \in \mathcal{G}$ are injective even on the larger domain $\mathcal{X}'$. In our proof of quantumness, the verifier will check that when sent a function $g$, the prover returns an image $y \in g(\mathcal{X})$, i.e. the image must be in the range of the restricted domain $\mathcal{X}$. This essentially forces the prover to evaluate any function it receives only on the restricted domain,[8] since if it evaluated the injective function $g$ on an input $x' \in \mathcal{X}' \setminus \mathcal{X}$, the verifier would reject the resulting image $y = g(x')$.
[8] This also serves to exclude the “extended-domain” attack mentioned in Footnote 4.
In summary, our single-round proof of quantumness relies on a triplet of function families $(\mathcal{F}, \mathcal{G}, \mathcal{H})$ with the following properties (as usual, this primitive depends on a security parameter $\lambda$, which we suppress for readability):
Definition 1 (e3NTCF, informal).
A tuple of function families $(\mathcal{F}, \mathcal{G}, \mathcal{H})$ is called a family of Doubly Extended Extractable Noisy Trapdoor Claw-free Functions (abbreviated e3NTCF) if
1. $\mathcal{F}$ is a TCF family with an AHCB property.
2. $\mathcal{G}$ is an injective trapdoor one-way function family.
3. $\mathcal{H}$ is an extractable one-way function family, in the sense that for any algorithm that takes the description of a function $h \in \mathcal{H}$ as input and outputs a value $y$ in the image of $h$, there exists an extractor which outputs a preimage $x'$ such that $h(x') = y$.
4. The functions are computationally indistinguishable from each other. In other words, given a description of a function from one of the three families, no polynomial-time classical algorithm can determine the function's type with probability non-negligibly greater than $\frac{1}{3}$.
For the formal statement, see [5, Definition 8].
The notion of an e3NTCF function family is an extension of Injective Invariant Noisy TCFs, which were introduced in [38] to derive a protocol for verifying general quantum computations, and which only use the first two types of families, $\mathcal{F}$ and $\mathcal{G}$.
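For concreteness, one possible way to organise this primitive as a programming interface is sketched below; the class and method names are our own illustrative choices, not the formal syntax of [5, Definition 8]:

```python
from abc import ABC, abstractmethod

class E3NTCF(ABC):
    """Hypothetical interface for the e3NTCF primitive of Definition 1."""

    @abstractmethod
    def sample(self, kind: str) -> tuple:
        """Sample (description, trapdoor) from family kind in {"F", "G", "H"};
        descriptions of the three kinds must be computationally
        indistinguishable."""

    @abstractmethod
    def evaluate(self, desc, x):
        """Evaluate the described function on x (restricted domain X, or the
        extended domain X' where applicable)."""

    @abstractmethod
    def invert(self, desc, trapdoor, y):
        """Trapdoor inversion: for F, return the claw (x0, x1); for G, the
        unique preimage."""

    @abstractmethod
    def has_preimage(self, desc, trapdoor, y, extended: bool) -> bool:
        """Decide, using the trapdoor, whether y has a preimage in X
        (extended=False) or in X' (extended=True)."""

    @abstractmethod
    def extract(self, desc, prover_coins):
        """For H: the knowledge extractor guaranteed by the knowledge
        assumption, run on the prover's random coins."""
```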
The preceding discussion naturally leads to the single-round Protocol 2 (see [5, Protocol 4] for the formal protocol), based on an e3NTCF. Our main result is that this protocol is a proof of quantumness, i.e. that an efficient quantum prover can succeed with high probability, but no efficient classical prover can.
Theorem 2 (informal).
Suppose that $(\mathcal{F}, \mathcal{G}, \mathcal{H})$ is an e3NTCF. Then Protocol 2 is a proof of quantumness, i.e.,
1. Completeness. There exists an efficient quantum prover that succeeds with probability $1 - \mathrm{negl}(\lambda)$.
2. Soundness. Every efficient classical prover succeeds with probability at most $\frac{5}{6} + \mathrm{negl}(\lambda)$.
In the theorem, $\lambda$ is the security parameter. As usual, the families implicitly depend on the security parameter, and “efficient” classical or quantum provers are those whose runtime scales polynomially in $\lambda$. For the formal statement, see [5, Theorem 4].
Proof Sketch.
We have already sketched the security proof when we explained why we need to introduce the three different function families $\mathcal{F}$, $\mathcal{G}$ and $\mathcal{H}$. We briefly summarize the role that each of these function families and the associated challenge types $a$, $b$ and $c$ play, and provide a high-level explanation of how the soundness constant is derived.
- The weak image test (challenge type $a$) uses the extractable function family $\mathcal{H}$. This test ensures that for any classical prover in the protocol, there exists an extractor that extracts a preimage to the output $y$ produced by the prover. Note that as discussed above, this preimage might in principle be from the extended domain $\mathcal{X}'$, not just $\mathcal{X}$.
- The strong image test (challenge type $b$) uses the injective function family $\mathcal{G}$. This test, combined with the indistinguishability of the function families $\mathcal{F}$, $\mathcal{G}$ and $\mathcal{H}$, ensures that the prover evaluates any function $f$, $g$, or $h$ only on inputs in the restricted domain $\mathcal{X}$. Furthermore, from this we can prove that for any $y$ that has a preimage in $\mathcal{X}$ under a given TCF function $f$, the extractor will output such a preimage (rather than a different preimage in $\mathcal{X}' \setminus \mathcal{X}$).
- The equation test (challenge type $c$) uses the TCF family $\mathcal{F}$, which has the AHCB property. The verifier checks that the prover's output $(y, d)$ satisfies the equation $d \cdot (x_0 \oplus x_1) = 0$, with $x_0, x_1$ the preimages of $y$. From the strong image test, we also know that the extractor for a successful classical prover will produce a preimage $x \in \mathcal{X}$ to $y$. Together, $(x, y, d)$ would break the AHCB property of $\mathcal{F}$, so no classical prover can win with very high probability.
The soundness factor of $\frac{5}{6}$ arises from the fact that the adaptive hardcore bit property only gives an upper bound of $\frac{1}{2} + \mathrm{negl}(\lambda)$ for the event that the prover fulfills the conditions of all three tests simultaneously, given a function of the family $\mathcal{F}$. Using the computational indistinguishability of the function families $\mathcal{F}$, $\mathcal{G}$ and $\mathcal{H}$, and the relation
$$\Pr[E_c] \leq \Pr[E] + \Pr[\neg E_a] + \Pr[\neg E_b] + \mathrm{negl}(\lambda),$$
we show that the overall success probability of a classical prover in the protocol (with the three challenge types issued uniformly at random) can roughly be bounded as
$$\frac{1}{3}\big(\Pr[E_a] + \Pr[E_b] + \Pr[E_c]\big) \leq \frac{1}{3}\big(2 + \Pr[E]\big) + \mathrm{negl}(\lambda) \leq \frac{5}{6} + \mathrm{negl}(\lambda),$$
where $E_x$ denotes the event that the prover passes the test of type $x \in \{a, b, c\}$, and $E$ denotes the event that the prover succeeds in the equation test while employing a strategy that would succeed in the image tests as well. This corresponds to the event that the prover (together with the extractor) produces a valid preimage-image-equation triple, which cannot occur with probability non-negligibly higher than $\frac{1}{2}$ due to the AHCB property of the function family $\mathcal{F}$.
A key objective in the design of our single-round proofs of quantumness was that the required circuit sizes should not be larger than for multi-round proofs of quantumness. Protocol 2 achieves this: to pass in the protocol, a quantum prover can simply prepare the state $\sum_x |x\rangle |k(x)\rangle$ for whichever function $k$ it is given, measure the first register in the Hadamard basis to get a string $d$, and measure the second register in the computational basis to get an image $y$. These are exactly the same actions as those of an honest prover in the equation test in Protocol 1,[10] except that now the honest prover can perform the entire measurement in one step, without experimentally challenging mid-circuit measurements.
[10] In our constructions of e3NTCF, the functions in $\mathcal{G}$ and $\mathcal{H}$ are essentially as costly to evaluate as those in $\mathcal{F}$, so introducing these additional function families does not increase the demands on an honest prover.
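Putting the pieces together, the verifier's side of Protocol 2 can be sketched against the hypothetical E3NTCF interface given after Definition 1 (again an informal rendering of [5, Protocol 4] in our own notation, not its actual specification):

```python
import random

def run_protocol(e3ntcf, prover) -> bool:
    """One execution of the single-round protocol from the verifier's side;
    `prover` is any callable mapping a function description to a pair (y, d),
    where d is a list of bits."""
    kind = random.choice(["F", "G", "H"])      # three challenge types
    desc, trapdoor = e3ntcf.sample(kind)
    y, d = prover(desc)                        # one message each way

    if kind == "H":                            # weak image test
        return e3ntcf.has_preimage(desc, trapdoor, y, extended=True)
    if kind == "G":                            # strong image test
        return e3ntcf.has_preimage(desc, trapdoor, y, extended=False)

    # kind == "F": equation test, using the trapdoor to recover the claw
    x0, x1 = e3ntcf.invert(desc, trapdoor, y)
    xor = [a ^ b for a, b in zip(x0, x1)]
    return any(d) and sum(di & xi for di, xi in zip(d, xor)) % 2 == 0
```

Note that the prover sends a single message $(y, d)$ and cannot tell from the function description which of the three checks will be applied.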
2.2 Instantiating e3NTCF families from DDH and LWE
We show that e3NTCF can be instantiated either from DDH and $q$-KEA, or from LWE and LK-$\epsilon$. Here, we only state the main results and defer a more detailed discussion of the constructions to [5, Section 4] and [5, Section 6], respectively.
For the LWE-based construction, we already know that the LWE-based TCF from [13] has an AHCB property. We can combine this with suitable injective and extractable one-way functions. In fact, it turns out that for the LWE-based construction, the roles of the injective and extractable functions in Protocol 2 can be played by the same function family, i.e. we only require two distinct families (the TCF family and one family that is simultaneously injective and extractable). This simplifies the analysis somewhat, as we explain in [5, Section 5]. Combined with Theorem 2, we obtain the following:
Theorem 3 (informal).
Assuming the classical intractability of LWE and the lattice knowledge assumption LK-$\epsilon$, there exists a single-round proof of quantumness with completeness $1 - \mathrm{negl}(\lambda)$ and soundness $\frac{5}{6} + \mathrm{negl}(\lambda)$. The circuit sizes for an honest prover in this protocol are identical to the circuit sizes in the 2-round protocol from [13].
While a family of TCFs based on DDH was considered in [29] to construct a 3-round proof of quantumness, it had not been shown that these functions have an AHCB property. We prove this through a lossy sampling technique similar to the proof in [13], showing a reduction from the AHCB property to the matrix $d$-linear assumption [41] (which is implied by standard DDH). We then again augment this TCF family with an injective and an extractable family to get the following result.
Theorem 4 (informal).
Assuming the classical intractability of DDH and the knowledge of exponent assumption $q$-KEA, there exists a single-round proof of quantumness with completeness[11] $1 - \epsilon$ for a small constant $\epsilon > 0$ and soundness $\frac{5}{6} + \mathrm{negl}(\lambda)$. The circuit sizes for an honest prover in this protocol are identical to the circuit sizes in the 3-round protocol from [29].
[11] The reason completeness here is $1 - \epsilon$ instead of $1 - \mathrm{negl}(\lambda)$ is inherited from [29]. There, the quantum prover does not create a superposition over the exact range of the TCF but over a superset. However, it can be shown that the exact range contains at least a $1 - \epsilon$ fraction of the elements in the superset.
3 Impossibility of black-box reductions
Knowledge assumptions are non-falsifiable cryptographic assumptions [40, 20], which makes their use somewhat undesirable. A natural question is whether our main results, single-round proofs of quantumness, can also be achieved using only standard falsifiable (also called game-based) assumptions such as DDH or LWE, or even weaker assumptions like the existence of one-way functions.
As was observed in [39], the security of a single-round proof of quantumness cannot be reduced to a quantumly hard problem like LWE in a black-box manner. The reason for this is simple: if there existed a black-box reduction from a successful classical prover to an algorithm for breaking LWE, we could also apply that reduction to the honest quantum prover in the proof of quantumness. Since the honest quantum prover is required to succeed with high probability, this would give a quantum algorithm for LWE. This rules out the existence of such a black-box reduction.[12]
[12] Note that this argument does not apply to interactive protocols, ROM-based protocols, or protocols that use white-box assumptions like ours: in those cases, one cannot simply run the reduction on the quantum prover, as the reduction may perform operations that only work for classical provers, e.g. rewinding or using knowledge extractors.
An extension of this reasoning suggests that the use of knowledge assumptions is justified even for single-round proofs of quantumness based on computational problems that are quantumly easy: there, a quantum algorithm that breaks the underlying cryptographic assumption is of course not a contradiction, but we can argue that implementing an honest quantum prover in such a protocol is no easier than breaking the assumption outright. Suppose we had a single-round proof of quantumness with a black-box reduction to factoring. Then we could again use that reduction with the honest quantum prover to construct a factoring algorithm whose only quantum component is repeatedly (and independently) running the honest prover. In this sense, implementing the honest prover is as hard as breaking factoring: if we had an honest prover that was much easier to implement than Shor's algorithm, we could use that prover and the black-box reduction to construct a much more practical quantum factoring algorithm. In contrast, in the DDH-based proof of quantumness from [29], and also in our single-round version thereof, implementing the honest quantum prover does not require computing discrete logarithms.
4 Related work
As mentioned in Section 1.1, cryptographic proofs of quantumness were introduced in [13]. Since then, there have been a number of follow-up works aiming to understand the types of cryptographic constructions on which these protocols can be based, as well as how to make them more efficient [30, 14, 35, 26, 4, 31, 39]. So far, only small-scale demonstrations of these protocols have been performed [49, 33], though the hope is that by further reducing the sizes of the quantum circuits and the amount of interaction between the verifier and the prover, these protocols can be performed with NISQ devices.
The only existing constructions of single-round proofs of quantumness (apart from trivial ones like asking the prover to factor a large number) use the random oracle model [14, 45]. In both of these prior works, the quantum circuits required to succeed in these protocols do not have to perform mid-circuit measurements. However, instantiating the random oracle with plausible cryptographic hash functions, like SHA-2 or SHA-3, incurs a substantial overhead: even the most efficient known quantum implementations of SHA-2 and SHA-3 [28] require a considerable number of qubits and a large quantum gate count.
On the other hand, the ROM-based single-round proof of quantumness in [14] only requires a TCF, without the need for the AHCB property. General TCFs have been constructed from a wider variety of cryptographic assumptions, achieving lower asymptotic complexities than known NTCF constructions. Thus, despite the fact that the use of a random oracle introduces a significant overhead, a resource comparison between our LWE-based single-round proof of quantumness and the ROM-based protocol of [14] is not straightforward. A more conclusive answer on which single-round protocol admits the smallest quantum circuit for a demonstration of quantum advantage requires a detailed finite-size analysis, which we leave for future work.
Since our construction of e3NTCFs makes use of extractable one-way functions (EOWFs), it is natural to ask whether we could have used existing results concerning these functions [16, 8, 10]. In particular, in [8], Bitansky et al. prove the existence of a class of EOWFs from the subexponential hardness of LWE against adversaries with bounded auxiliary input, without having to rely on a knowledge assumption. There are at least two obstacles to applying those results to our setting. First, the extractable one-way functions we require have to be computationally indistinguishable from TCFs. It is unclear whether the EOWFs of [8] can be suitably adapted to satisfy this requirement. Second, since one of the motivations for our work is to devise more efficient single-round proofs of quantumness, we would like to ensure that the honest quantum prover's circuits are at most as large as in the multi-round protocols. However, constructing a family of e3NTCFs with the EOWFs in [8] would likely require larger circuits.
Knowledge assumptions in the quantum setting have also been considered in [34] to derive quantum money and quantum lightning schemes. There, the assumptions are required to be sound against quantum adversaries. This introduces some challenges in formalizing the appropriate notion of a quantum knowledge assumption. The subsequent work of [46] also discusses this point. Interestingly, this latter work points out how certain knowledge assumptions can be generically broken by a quantum adversary. In some sense, this is exactly what we exploit to arrive at our single-round proofs of quantumness.
5 Discussion and open problems
We have shown how existing knowledge assumptions, together with standard cryptographic assumptions, lead to efficient single-round proofs of quantumness. Our work opens up several avenues for future research.
One such avenue is the possibility of a white-box proof for a single-round proof of quantumness based only on the classical intractability of, say, LWE. This would circumvent the impossibility discussed in Section 3. As mentioned in the previous section, one possible approach would be to use the extractable one-way functions based on subexponential LWE from [8], with the caveat that this would provide soundness against classical provers with bounded auxiliary input. We leave this as an interesting but challenging open problem.
Currently, our construction necessitates the use of a TCF which satisfies the AHCB property. This is a fairly strong requirement which so far has only been achieved from LWE [13], isogeny-based group actions [3], and, in this work, the DDH assumption. It would be desirable to use TCFs that are not known to have an AHCB property, such as those based on Ring-LWE or Rabin's function ($x \mapsto x^2 \bmod N$), since those require smaller circuits than the TCFs based on LWE or DDH, as outlined in [29, 30]. However, it is unclear if our construction can be modified so as to not require the AHCB property; in the equation test, the prover is allowed to output any valid equation, hence the need for an adaptive hardcore bit. The protocol in [29] is able to circumvent this and use TCFs without an AHCB by introducing an additional round of interaction between the verifier and the prover, leading to a 3-round protocol. Unfortunately, there does not seem to be an obvious way of employing knowledge assumptions to reduce the round complexity of the protocol from [29]. Thus, a fundamentally different approach would be needed to achieve a single-round protocol without relying on the AHCB property. Of course, for the purpose of having more practical protocols, proving an AHCB statement for the TCFs based on Ring-LWE or Rabin's function (even relying on knowledge assumptions) would be equally useful.
We remark that the LWE-based approach is expected to be more efficient than the DDH one. This is because, as is also discussed in [29], the DDH approach requires performing a large number of group exponentiations coherently. It would therefore seem that the DDH approach is less efficient than performing Shor’s algorithm to solve the discrete logarithm problem. However, as [29] points out, the proof of quantumness protocol can benefit from certain optimizations that are not known to be possible for Shor’s algorithm. As an example, the group exponentiation circuits for the DDH-based proof of quantumness need not be reversible, thereby halving the depth and number of gates compared to Shor’s algorithm (see section III.D in [29] for more details).
From existing knowledge assumptions one is only able to prove soundness against classical adversaries, i.e. one is able to bound the success probability of a classical prover, but one cannot make a statement about the internal operations of a successful quantum prover. This is sufficient for a proof of quantumness, but other protocols, such as certifiable randomness generation [13], remote state preparation [21], or verification of quantum computations [38], require soundness against quantum adversaries, i.e. one needs to have at least a partial characterization of any quantum adversary in the protocol. As mentioned, one can derive single-round proofs of quantumness in the random oracle model, and these can be extended to single-round versions of quantum-sound protocols by proving security in the quantum random oracle model [11, 14, 2, 47]. However, with knowledge assumptions, it is unclear what the right quantum-sound analogue would be to derive these more advanced functionalities. As mentioned in the previous section, quantum knowledge assumptions have been considered in [34, 46] to construct quantum money and quantum lightning schemes. These could serve as a useful starting point for constructing single-round quantum protocols for functionalities like single-device randomness expansion. We leave formalizing this for future work.
References
- [1] Scott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 333–342, 2011. doi:10.1145/1993636.1993682.
- [2] Gorjan Alagic, Andrew M Childs, Alex B Grilo, and Shih-Han Hung. Non-interactive classical verification of quantum computation. In Theory of Cryptography Conference, pages 153–180. Springer, 2020. doi:10.1007/978-3-030-64381-2_6.
- [3] Navid Alamati, Giulio Malavolta, and Ahmadreza Rahimi. Candidate trapdoor claw-free functions from group actions with applications to quantum protocols. In Theory of Cryptography Conference, pages 266–293. Springer, 2022. doi:10.1007/978-3-031-22318-1_10.
- [4] Yusuf Alnawakhtha, Atul Mantri, Carl A Miller, and Daochen Wang. Lattice-based quantum advantage from rotated measurements. arXiv preprint, 2022. doi:10.48550/arXiv.2210.10143.
- [5] Petia Arabadjieva, Alexandru Gheorghiu, Victor Gitton, and Tony Metger. Single-round proofs of quantumness from knowledge assumptions, 2024. doi:10.48550/arXiv.2405.15736.
- [6] Frank Arute et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505–510, 2019. doi:10.1038/s41586-019-1666-5.
- [7] Nir Bitansky, Ran Canetti, Alessandro Chiesa, and Eran Tromer. From extractable collision resistance to succinct non-interactive arguments of knowledge, and back again. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS ’12, pages 326–349, New York, NY, USA, 2012. Association for Computing Machinery. doi:10.1145/2090236.2090263.
- [8] Nir Bitansky, Ran Canetti, Omer Paneth, and Alon Rosen. On the existence of extractable one-way functions. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 505–514, 2014. doi:10.1145/2591796.2591859.
- [9] Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Omer Paneth, and Rafail Ostrovsky. Succinct non-interactive arguments via linear interactive proofs. In Theory of Cryptography: 10th Theory of Cryptography Conference, TCC 2013, Tokyo, Japan, March 3-6, 2013. Proceedings, pages 315–333. Springer, 2013. doi:10.1007/s00145-022-09424-4.
- [10] Nir Bitansky, Noa Eizenstadt, and Omer Paneth. Weakly extractable one-way functions. In Theory of Cryptography: 18th International Conference, TCC 2020, Durham, NC, USA, November 16–19, 2020, Proceedings, Part I 18, pages 596–626. Springer, 2020. doi:10.1007/978-3-030-64375-1_21.
- [11] Dan Boneh, Özgür Dagdelen, Marc Fischlin, Anja Lehmann, Christian Schaffner, and Mark Zhandry. Random oracles in a quantum world. In Advances in Cryptology–ASIACRYPT 2011: 17th International Conference on the Theory and Application of Cryptology and Information Security, Seoul, South Korea, December 4-8, 2011. Proceedings 17, pages 41–69. Springer, 2011. doi:10.1007/978-3-642-25385-0_3.
- [12] Adam Bouland, Bill Fefferman, Chinmay Nirkhe, and Umesh Vazirani. On the complexity and verification of quantum random circuit sampling. Nature Physics, 15(2):159–163, 2019. doi:10.1038/s41567-018-0318-2.
- [13] Zvika Brakerski, Paul F. Christiano, Urmila Mahadev, Umesh V. Vazirani, and Thomas Vidick. A cryptographic test of quantumness and certifiable randomness from a single quantum device. In Mikkel Thorup, editor, 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, Paris, France, October 7-9, 2018, pages 320–331. IEEE Computer Society, 2018. doi:10.1109/FOCS.2018.00038.
- [14] Zvika Brakerski, Venkata Koppula, Umesh Vazirani, and Thomas Vidick. Simpler proofs of quantumness, 2020. doi:10.48550/arXiv.2005.04826.
- [15] Gilles Brassard, David Chaum, and Claude Crépeau. Minimum disclosure proofs of knowledge. Journal of computer and system sciences, 37(2):156–189, 1988. doi:10.1016/0022-0000(88)90005-0.
- [16] Ran Canetti and Ronny Ramzi Dakdouk. Towards a theory of extractable functions. In Theory of Cryptography Conference, pages 595–613. Springer, 2009. doi:10.1007/978-3-642-00457-5_35.
- [17] Ivan Damgård. Towards practical public key systems secure against chosen ciphertext attacks. In Joan Feigenbaum, editor, Advances in Cryptology — CRYPTO ’91, pages 445–456, Berlin, Heidelberg, 1992. Springer Berlin Heidelberg. doi:10.1007/3-540-46766-1_36.
- [18] Thomas Debris-Alazard, Pouria Fallahpour, and Damien Stehlé. Quantum oblivious LWE sampling and insecurity of standard model lattice-based snarks. In Bojan Mohar, Igor Shinkar, and Ryan O’Donnell, editors, Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24-28, 2024, pages 423–434. ACM, 2024. doi:10.1145/3618260.3649766.
- [19] Rosario Gennaro, Michele Minelli, Anca Nitulescu, and Michele Orrù. Lattice-based zk-SNARKs from square span programs. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, pages 556–573, New York, NY, USA, 2018. Association for Computing Machinery. doi:10.1145/3243734.3243845.
- [20] Craig Gentry and Daniel Wichs. Separating succinct non-interactive arguments from all falsifiable assumptions. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 99–108, 2011. doi:10.1145/1993636.1993651.
- [21] Alexandru Gheorghiu and Thomas Vidick. Computationally-secure and composable remote state preparation. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1024–1033. IEEE, 2019. doi:10.1109/FOCS.2019.00066.
- [22] Craig Gidney and Martin Ekerå. How to factor 2048-bit RSA integers in 8 hours using 20 million noisy qubits. Quantum, 5:433, 2021. doi:10.22331/q-2021-04-15-433.
- [23] Oded Goldreich and Johan Håstad. On the complexity of interactive proofs with bounded communication. Inf. Process. Lett., 67(4):205–214, 1998. doi:10.1016/S0020-0190(98)00116-1.
- [24] Shafi Goldwasser, Silvio Micali, and Charles Rackoff. The knowledge complexity of interactive proof systems. SIAM J. COMPUT, 18(1):186–208, 1989. doi:10.1145/22145.22178.
- [25] Élie Gouzien and Nicolas Sangouard. Factoring 2048-bit RSA integers in 177 days with 13 436 qubits and a multimode memory. Physical Review Letters, 127(14):140503, 2021. doi:10.1103/PhysRevLett.127.140503.
- [26] Shuichi Hirahara and François Le Gall. Test of quantumness with small-depth quantum circuits. arXiv preprint, 2021. doi:10.48550/arXiv.2105.05500.
- [27] Yuval Ishai, Hang Su, and David J. Wu. Shorter and faster post-quantum designated-verifier zkSNARKs from lattices. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, CCS ’21, pages 212–234, New York, NY, USA, 2021. Association for Computing Machinery. doi:10.1145/3460120.3484572.
- [28] Kyungbae Jang, Sejin Lim, Yujin Oh, Anubhab Baksi, Sumanta Chakraborty, and Hwajeong Seo. Quantum implementation and analysis of SHA-2 and SHA-3. Cryptology ePrint Archive, Paper 2024/513, 2024. URL: https://eprint.iacr.org/2024/513.
- [29] Gregory D. Kahanamoku-Meyer, Soonwon Choi, Umesh V. Vazirani, and Norman Y. Yao. Classically verifiable quantum advantage from a computational bell test. Nature Physics, 18(8):918–924, August 2022. doi:10.1038/s41567-022-01643-7.
- [30] Gregory D Kahanamoku-Meyer and Norman Y Yao. Fast quantum integer multiplication with zero ancillas. arXiv preprint, 2024. doi:10.48550/arXiv.2403.18006.
- [31] Yael Kalai, Alex Lombardi, Vinod Vaikuntanathan, and Lisa Yang. Quantum advantage from any non-local game. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1617–1628, 2023. doi:10.1145/3564246.3585164.
- [32] Thomas Kerber, Aggelos Kiayias, and Markulf Kohlweiss. Composition with knowledge assumptions. In Tal Malkin and Chris Peikert, editors, Advances in Cryptology – CRYPTO 2021, pages 364–393, Cham, 2021. Springer International Publishing. doi:10.1007/978-3-030-84259-8_13.
- [33] Laura Lewis, Daiwei Zhu, Alexandru Gheorghiu, Crystal Noel, Or Katz, Bahaa Harraz, Qingfeng Wang, Andrew Risinger, Lei Feng, Debopriyo Biswas, et al. Experimental implementation of an efficient test of quantumness. Physical Review A, 109(1):012610, 2024. doi:10.1103/PhysRevA.109.012610.
- [34] Jiahui Liu, Hart Montgomery, and Mark Zhandry. Another round of breaking and making quantum money: How to not build it from lattices, and more. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 611–638. Springer, 2023. doi:10.1007/978-3-031-30545-0_21.
- [35] Zhenning Liu and Alexandru Gheorghiu. Depth-efficient proofs of quantumness. Quantum, 6:807, 2022. doi:10.22331/q-2022-09-19-807.
- [36] Jake Loftus, Alexander May, Nigel P. Smart, and Frederik Vercauteren. On cca-secure somewhat homomorphic encryption. In Ali Miri and Serge Vaudenay, editors, Selected Areas in Cryptography, pages 55–72, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. doi:10.1007/978-3-642-28496-0_4.
- [37] Lars S Madsen, Fabian Laudenbach, Mohsen Falamarzi Askarani, Fabien Rortais, Trevor Vincent, Jacob FF Bulmer, Filippo M Miatto, Leonhard Neuhaus, Lukas G Helt, Matthew J Collins, et al. Quantum computational advantage with a programmable photonic processor. Nature, 606(7912):75–81, 2022. doi:10.1038/s41586-022-04725-x.
- [38] Urmila Mahadev. Classical verification of quantum computations, 2018. doi:10.48550/arXiv.1804.01082.
- [39] Tomoyuki Morimae and Takashi Yamakawa. Quantum advantage from one-way functions. arXiv preprint, 2023. doi:10.48550/arXiv.2302.04749.
- [40] Moni Naor. On cryptographic assumptions and challenges. In Annual International Cryptology Conference, pages 96–109. Springer, 2003. doi:10.1007/978-3-540-45146-4_6.
- [41] Moni Naor and Gil Segev. Public-key cryptosystems resilient to key leakage. Cryptology ePrint Archive, Paper 2009/105, 2009. doi:10.1007/978-3-642-03356-8_2.
- [42] John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018. doi:10.22331/q-2018-08-06-79.
- [43] Peter W Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM review, 41(2):303–332, 1999. doi:10.1137/S0097539795293172.
- [44] Yulin Wu, Wan-Su Bao, Sirui Cao, Fusheng Chen, Ming-Cheng Chen, Xiawei Chen, Tung-Hsun Chung, Hui Deng, Yajie Du, Daojin Fan, et al. Strong quantum computational advantage using a superconducting quantum processor. Physical review letters, 127(18):180501, 2021. doi:10.1103/PhysRevLett.127.180501.
- [45] Takashi Yamakawa and Mark Zhandry. Verifiable quantum advantage without structure. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 69–74. IEEE, 2022. doi:10.1109/FOCS54457.2022.00014.
- [46] Mark Zhandry. Quantum money from abelian group actions. arXiv preprint, 2023. doi:10.48550/arXiv.2307.12120.
- [47] Jiayu Zhang. Classical verification of quantum computations in linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 46–57. IEEE, 2022. doi:10.1109/FOCS54457.2022.00012.
- [48] Han-Sen Zhong, Hui Wang, Yu-Hao Deng, Ming-Cheng Chen, Li-Chao Peng, Yi-Han Luo, Jian Qin, Dian Wu, Xing Ding, Yi Hu, et al. Quantum computational advantage using photons. Science, 370(6523):1460–1463, 2020. doi:10.1126/science.abe8770.
- [49] Daiwei Zhu, Gregory D Kahanamoku-Meyer, Laura Lewis, Crystal Noel, Or Katz, Bahaa Harraz, Qingfeng Wang, Andrew Risinger, Lei Feng, Debopriyo Biswas, et al. Interactive cryptographic proofs of quantumness using mid-circuit measurements. Nature Physics, 19(11):1725–1731, 2023. doi:10.1038/s41567-023-02162-9.