New Lower-Bounds for Quantum Computation with Non-Collapsing Measurements
Abstract
Aaronson, Bouland, Fitzsimons and Lee [5] introduced the complexity class PDQP (originally labeled naCQP), an alteration of BQP enhanced with the ability to obtain non-collapsing measurements, samples of quantum states without collapsing them. Although SZK ⊆ PDQP, it still requires Ω(N^{1/4}) queries to solve unstructured search.
We formulate an alternative, equivalent definition of PDQP, which we use to prove the positive weighted adversary lower-bounding method, establishing multiple tighter bounds and a trade-off between queries and non-collapsing measurements. We use the technique to analyze the query complexity of the well-studied majority and element distinctness problems. Additionally, we prove a tight bound on search. Furthermore, we use the lower-bound to explore PDQP under query restrictions, finding that when combined with non-adaptive queries, non-collapsing measurements offer only a limited speed-up in several cases.
Keywords and phrases: Non-collapsing measurements, Quantum lower-bounds, Quantum adversary method
2012 ACM Subject Classification: Theory of computation → Quantum query complexity
Acknowledgements: We want to thank Kunal Marwaha for helpful discussions. We thank the anonymous reviewers for their valuable comments and suggestions.
Funding: Supported by US Department of Energy (grant no. DE-SC0023179) and partially supported by US National Science Foundation (award no. 1954311).
Editors: Srikanth Srinivasan
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
1.1 Background and motivation
The study of the quantum query complexity model has led to a rewarding avenue of research in understanding quantum speed-ups [29, 12, 11, 19]. In this model, one is given access to an input x through a black-box and the goal is to evaluate f(x) for some function f while minimizing the number of quantum queries to x. A variety of tools have been developed to lower-bound quantum query complexity, notably the polynomial method [11], where one uses the fact that a polynomial may describe the amplitudes of a quantum state, the hybrid method [12] and its generalization the adversary method [7, 33], which quantifies the difficulty of distinguishing between inputs with different images. The study and application of these techniques has been instrumental in understanding the complexity class characterizing quantum computation, BQP (Bounded-error Quantum Polynomial-time). Specifically, as one may be given access to x through an oracle O_x, quantum query complexity studies BQP and its relations to other complexity classes with access to O_x.
One of the goals of studying BQP is to characterize the classes of functions which may be efficiently computable once fault-tolerant quantum computers have been realized. The difficulty of this realization and the need to mimic our current hardware has led us to place various restrictions on BQP. One prominent issue, the presence of noise, motivated the study of noise in the quantum query complexity setting, i.e., to introduce noise into the oracle calls. This was explored in [26], where the authors showed that Grover's algorithm loses its speed-up even if only a single qubit is randomly affected by depolarizing noise. Furthermore, [14] defined the class NISQ (Noisy Intermediate-Scale Quantum), the class of problems solved by a BPP algorithm with access to a noisy quantum device, emulating some aspects of today's state of quantum computation. They showed oracle separations between NISQ and both BPP and BQP, and showed that NISQ algorithms cannot solve search as fast as BQP. These results point to the conclusion that the presence of noise considerably weakens the computational power.
Another intriguing limited setting of the query model is the non-adaptive query model [23]. Here we restrict the model to a single round of parallel queries and analyze the necessary width of the round, imposing that the queries are independent of each other. It was shown in [23] that this restriction thwarts the quantum speed-up in solving the search and element distinctness problems, implying that adaptivity is necessary in order to obtain polynomial speed-ups for some problems.
On the other hand, augmenting BQP with a metaphysical ability has also been a fruitful avenue in understanding the power of computation. One of the first considered variations of quantum mechanics was adding post-selection to quantum computation, PostBQP (Post-selected BQP). Post-selection is the ability that, given a quantum state, we may select a partial measurement outcome, regardless of its probability. It was shown that PostBQP = PP [2], establishing an equivalence between a quantum and a classical computational class. However, PP is well above BQP as even QMA, a quantum version of NP, is contained in PP.
One of the theories explaining quantum mechanics is the hidden variable theory, an interpretation where the system is described by a state and a hidden variable which stochastically determines the measurement results. DQP (Dynamical Quantum Polynomial-time) is defined as the set of languages solved by a classical deterministic polynomial-time Turing machine which can process the history of the hidden variables of a quantum algorithm [1]. At the moment, we only know that SZK ⊆ DQP ⊆ EXP and the best known search algorithm uses O(N^{1/3}) queries. However, [5] showed that adjusting DQP to CDQP (Circuit-indifferent DQP), adding a restriction to the hidden variable theory, gives us a lower-bound on search.
A well-studied offspring of DQP is PDQP (Product Dynamical Quantum Polynomial-time), quantum computation empowered with non-collapsing measurements. By non-collapsing measurements, we mean that given an arbitrary state, we sample a measurement result without collapsing the state. We may view this as a simplification of DQP as we are given samples from the history of computation. The class was defined in [5], where the authors proved an Ω(N^{1/4}) lower-bound on search and SZK ⊆ PDQP.
A class related to PDQP is CBQP (Cloning BQP), with the ability to create an unentangled copy of any state. Note that CBQP can simulate a non-collapsing measurement by creating a copy and measuring it. However, the ability to entangle copies together adds considerable power. It was shown that CBQP can solve any hard problem from a large complexity class in polynomial time using a single query and sufficiently many copies [9]. The class is positioned between BPP^SZK and PSPACE [21].
In our work, we revisit PDQP as, except for containing SZK, it is known to be only slightly stronger than BQP, attaining at most a polynomial speed-up over Grover's algorithm. To understand the lower-bound, we must first describe a caveat. In the standard quantum query model, the goal is to minimize the number of queries. However, in models with an additional ability, such as creating a copy or performing a non-collapsing measurement, we must also bound the uses of that ability. In the case of non-collapsing measurements, without minimizing both the queries and the measurements, every problem of size n could be trivially solved using a single query followed by exponentially-many non-collapsing measurements which are repeated until we learn enough information about the input. Furthermore, unrestricted non-collapsing measurements lead to quantum cloning, faster-than-light communication and constant communication complexity [5]. Therefore the measure of complexity is the sum of the number of queries and the number of non-collapsing measurements.
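To make this caveat concrete, the trivial attack can be sketched classically: a single query loads a uniform superposition over (index, value) pairs, and unlimited non-collapsing samples of that state eventually reveal every bit of the input. The simulation below is ours, for illustration only; it treats each non-collapsing measurement as an independent sample of the index register.

```python
import random

def learn_input(x, rng):
    """Keep drawing non-collapsing samples of the (index, value) register
    until every index has been seen, then reassemble the input."""
    n = len(x)
    learned = {}
    samples = 0
    while len(learned) < n:
        i = rng.randrange(n)   # sample the index register
        learned[i] = x[i]      # the value register carries x_i alongside i
        samples += 1
    return [learned[i] for i in range(n)], samples

x = [0, 1, 1, 0, 1, 0, 0, 1]
recovered, samples = learn_input(x, random.Random(0))
```

The expected number of samples grows like n log n (coupon collector), which is why the model only becomes trivial when the measurement count is left uncharged.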
The search lower-bound may lead some to the conclusion that a BQP algorithm with non-collapsing measurements may be simulated by BQP with a squared overhead by running separate computations in parallel. However, this intuition fails as the deferred measurement principle does not hold in PDQP. To explain why, consider the collision problem. Here, assuming n is even, one is given black-box access to a function f : [n] → [n] with the promise that f is either one-to-one or two-to-one, i.e., the number of preimages of any image is either 1 or 2. The PDQP algorithm is as follows. First, we query all the indices in superposition and call the oracle, obtaining,
(1/√n) Σ_{i ∈ [n]} |i⟩|f(i)⟩.
Next, we measure the query output register, collapsing the state and obtaining some value y as our measurement result. We find ourselves with the following state,
(1/√|f⁻¹(y)|) Σ_{i : f(i) = y} |i⟩ ⊗ |y⟩.
By repeating non-collapsing measurements of the index register, we find |f⁻¹(y)| with high probability, solving the problem with both a constant number of queries and a constant number of non-collapsing measurements. On the other hand, without the partial measurement, the algorithm is no longer efficient. In contrast, Θ(n^{1/3}) queries are optimal in BQP [13, 28]. Note that the proof of SZK ⊆ PDQP uses the same partial measurement trick to solve the Statistical Difference problem, which is SZK-complete [27].
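Under the assumption that each non-collapsing measurement behaves as an independent sample of the post-measurement index register, the algorithm above can be mimicked classically; the function names and the two-to-one example are ours.

```python
import random

def pdqp_collision_sketch(f, n, rng):
    """One oracle call prepares sum_i |i>|f(i)>; a partial measurement of
    the value register collapses the index register to the preimage of a
    random y; non-collapsing samples of the index register then reveal
    whether that preimage holds one index or two."""
    y = f(rng.randrange(n))                        # partial measurement outcome
    preimage = [i for i in range(n) if f(i) == y]  # post-measurement support
    seen = {rng.choice(preimage) for _ in range(20)}  # non-collapsing samples
    return "two-to-one" if len(seen) == 2 else "one-to-one"

n = 16
two_to_one = lambda i: i % (n // 2)
verdict = pdqp_collision_sketch(two_to_one, n, random.Random(3))
```

A constant number of samples suffices: a two-element preimage yields two distinct samples except with exponentially small probability.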
When it was introduced, PDQP was referred to as naCQP (non-adaptive Collapse-free Quantum Polynomial-time). The class is a restriction of CQP, the class with the ability to adapt the quantum computation based on the non-collapsing measurement results. The power of this class is unknown as there are no known algorithms which utilize the adaptive feature nor do we have any lower-bounds. We note that the phrase non-adaptive in naCQP is not related to the non-adaptive queries discussed above; naCQP retains the ability to perform a query at any step of the algorithm.
Recently, non-collapsing measurements gained renewed interest. It was shown that the complexity class PDQMA (Product Dynamical Quantum Merlin-Arthur), the quantum counterpart of QMA with the power of non-collapsing measurements, equals NEXP [6, 10]. Inspired by their work and several open directions in the original literature, we revisit the class. There are several settings to study PDQP in. One may focus on adapting unitaries based on non-collapsing measurement results, restricting the query adaptivity or limiting partial measurements. We explore the power of non-collapsing measurements with non-adaptive queries. To avoid any confusion, for the remainder of the paper, any discussion of non-adaptivity will be concerned with queries.
It is important to stress that the point of these adjustments is not to argue for alterations of the postulates of quantum mechanics. Instead, we explore quantum computation and complexity from an alternative vantage point. Therefore, the models we define to encompass the adjustments of BQP may use ideas which are not inherently present in quantum computation. This is unavoidable, as if they could be described solely using the postulates of quantum mechanics, their complexity class would be equal to BQP.
1.2 Results
1.2.1 Lower-bounding methods
We obtain a trade-off between queries and non-collapsing measurements by developing the adversary method lower-bound in our setting. For simplicity, our results are presented using the basic adversary method from [7], but we prove its generalization, the positive weighted adversary method of [8] (see Subsection 7.1). When discussing query complexity, we will use the name of the complexity class as the identifier of the model. We note that in all cases, one recovers the original adversary method by setting the number of non-collapsing measurements to 1, essentially only measuring the end state. The result below exhibits that non-collapsing measurements provide only a limited speed-up over BQP in problems which may be lower-bounded by the positive weighted adversary method.
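For orientation, the original (single-measurement) bound that our method generalizes can be evaluated numerically. The parameters below follow Ambainis's relation-based setup from [7]; the OR-function instantiation is our illustrative choice.

```python
import math

def ambainis_bound(m, m_prime, l, l_prime):
    """Basic (unweighted) adversary lower bound of Ambainis,
    Omega(sqrt(m * m' / (l * l'))), up to constant factors."""
    return math.sqrt(m * m_prime / (l * l_prime))

# Example (ours): the OR function on n bits.
# X = {0^n}, Y = {strings of Hamming weight 1}, R = X x Y.
# Every x in X relates to m = n inputs y, every y to m' = 1 input x,
# and for a fixed index i at most l = l' = 1 related pair differs at i,
# recovering the familiar Omega(sqrt(n)) bound for search.
n = 100
bound = ambainis_bound(m=n, m_prime=1, l=1, l_prime=1)
```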
Result 1 (Adversary method on BQP with non-collapsing measurements, directly from Lemma 18).
Let be a boolean function and an input. Define , . Based on , let be the minimum number of elements for each and respectively and are the most elements where for each and for each and respectively.
Any algorithm solving f in a PDQP setting with queries and non-collapsing measurements satisfies,
We develop a similar trade-off between and in the non-adaptive query setting. Defined in Subsection 5.1, we denote it with the superscript .
Result 2 (Adversary method on non-adaptive BQP with non-collapsing measurements, directly from Lemma 24).
Define the parameters as in Result 1. Any algorithm solving f in a non-adaptive setting with queries and non-collapsing measurements satisfies,
Additionally, we obtain a variation of Result 1 for CBQP by applying a similar proof technique.
1.2.2 Applications of the lower-bounding methods
| Unstructured search | |||
| Majority | |||
| Collision | |||
| Element distinctness | |||
We apply the results above to specific problems in order to draw inferences about PDQP. Specifically, we consider the bounds on search, majority and element distinctness in the settings discussed above. A summary of the bounds is in Table 1. Proofs of the applications of the bounds above may be found in Subsection 7.1.
In the search problem, we are given access to a function f : [n] → {0, 1} and the goal is to decide whether there exists an i such that f(i) = 1. The lower-bound on search in PDQP was one of the main results in [5], but it was not tight. As the method in Result 1 provides us with the same lower-bound, we prove a tighter bound in Lemma 19.
Result 4 (Bounds on search).
We obtain tight bounds for unstructured search. In , the bound is , while in it is .
Next, we consider the majority problem. In this problem, assuming n is odd, we are given x ∈ {0, 1}^n and must decide whether |x| > n/2. As majority may be solved efficiently in PDQP, we obtain an oracle separation between PDQP and BQP.111The lower-bound on search implies a separation between PDQP and BQP, therefore implying this separation. However, we don't expect majority to separate the remaining classes.
Result 5 (Bounds on majority).
We obtain tight bounds for the majority problem. In both and , it is .
While the collision problem has constant bounds in PDQP, it is interesting that the same is not the case for the element distinctness problem. The problem is to decide, given x, whether there exist indices i ≠ j such that x_i = x_j. In BQP, we have a clear connection between the two problems, as their bounds may be derived from each other [15]. However, this is not the case in any PDQP setting. It would be very interesting to characterize what causes this difference.
Result 6 (Bounds on element distinctness).
We obtain bounds for the element distinctness problem. In , the bound is between and , while in it is .
1.3 Proof techniques
We describe the ideas behind our proofs. The key part of this paper is Definition 13, an alternative definition of the PDQP model. It allowed us to prove Lemma 15, enabling us to ignore partial measurements in the proofs of our lower-bounds.
1.3.1 Alternative definition of computation with non-collapsing measurements
Before explaining our alternative definition, we discuss the original definition of the model with non-collapsing measurements from [5]. PDQP is defined as the set of languages solvable by a deterministic Turing machine with one query to an oracle, which runs a quantum circuit step by step. At each step, when the computation is in some intermediate state, the oracle obtains a measurement sample of that state without collapsing it. At the end, the machine receives the samples and processes them. An example of how the oracle runs is in Figure 1. We define an alternative oracle which also inputs a circuit and outputs the samples. However, we explicitly describe the quantum circuit which the alternative oracle runs. The goal is to describe a circuit which outputs the tensor product of each state before the non-collapsing measurement. Therefore, we may obtain the results by measuring the end state.
Assume the input circuit is an alternating combination of unitaries and (potentially empty) partial measurement gates. We create adjusted variants of these gates in the new circuit. The goal is that at every step, we hold copies of the state, and our gates act on the appropriate subset of them, each of which should end in the state present just before its designated non-collapsing measurement. An example of the circuit is in Figure 2. The adjusted unitary is a parallel application of the original over each of the relevant registers separately. The adjusted measurement is more complex as it must ensure that the partial measurement results are the same for all registers in order for them to remain copies of each other. We define it as a gate which post-selects on measuring the same outcome for each register, with an adjusted probability distribution.
Let us explain the probability adjustment. Suppose we are measuring a single qubit in each register and want to ensure that all registers agree. Let the probability of measuring 1 at the given step be p and suppose k registers are measured. If we post-selected on agreement in the usual manner, the probability of measuring all ones would be p^k / (p^k + (1 − p)^k), not p as desired. Therefore we must adjust the probabilities accordingly.222This is the reason why it is difficult to directly relate the class to post-selection. It is important to mention that the adjusted measurement is not a valid quantum operator as its projectors don't sum to the identity. The only measurement which is allowed by quantum mechanics is the measurement of the states at the end of the circuit. This is why PDQP is more powerful than BQP.
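A small numeric sketch of this adjustment, with our own notation: p is the per-copy probability of measuring 1 at the given step and k is the number of registers being measured together.

```python
import math

def naive_agree_prob(p, k):
    """Probability of the all-ones outcome when k independent copies are
    post-selected on agreeing, without any adjustment."""
    return p ** k / (p ** k + (1 - p) ** k)

def adjusted_outcome_prob(p, k):
    """Probability of the all-ones outcome after the rescaling performed by
    the adjusted measurement gate: the joint amplitude sqrt(p)**k is
    rescaled by sqrt(p) / sqrt(p)**k so the outcome occurs with the
    single-copy probability p."""
    joint_amplitude = math.sqrt(p) ** k
    scale = math.sqrt(p) / joint_amplitude
    return (joint_amplitude * scale) ** 2

p, k = 0.3, 5
naive = naive_agree_prob(p, k)          # skewed far away from p
adjusted = adjusted_outcome_prob(p, k)  # the single-copy probability p again
```

The rescaling is exactly what makes the gate invalid as a quantum operation: its "projectors" no longer sum to the identity.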
1.3.2 Obtaining the lower-bounding techniques
Using our alternative definition, we show in Lemma 15 that given two arbitrary states created by the circuit, applying the adjusted measurement cannot decrease the fidelity between them. Our adversary method uses fidelity as the progress measure, implying that we may ignore the effect of partial measurements during our proof. Finally, we obtain a positive weighted adversary method [8] for our model by closely following the original proof while accounting for the presence of copies of states, obtaining Lemma 18.
Our remaining lower-bounds use the lemmata proved above. First, our result on non-adaptive queries with non-collapsing measurements is described in Lemma 22. The proof follows the original adversary method proof for non-adaptive queries [23], replacing the original positive weighted adversary with Lemma 18. Second, our tight lower-bound on search in Lemma 19 is a direct extension of the proof of the search bound with no partial collapsing measurements (Appendix E in [5]), combined with our ability to ignore the effect of partial measurements, which comes directly from Lemma 15.
Additionally, we apply the same idea of using non-standard gates to describe a computational model in order to obtain a lower-bounding technique for quantum computation with copies, CBQP. The techniques remain the same as above, except we allow for entanglement between copies instead of instantly collapsing them.
1.4 Related work
Since the introduction of non-collapsing measurements [5], there have been several papers analyzing the adjustment in other settings. Specifically, we have [4] and [6, 10]. Furthermore, it was shown in [20] that there exists an oracle such that and that relative to a random oracle , . Finally, [16] studied the effect on quantum theory when we may only perform non-collapsing measurements, meaning we may never collapse a quantum state.
Another alteration worth mentioning is adding the power to rewind from a collapsed result back to the state prior to measurement. Denoted RwBQP (Rewindable BQP), it was shown to be equal to CBQP and AdPostBQP (Adaptive PostBQP) [21]. Adaptivity in AdPostBQP means that based on partial measurement outcomes, we adapt the post-selection. Equivalently, RwBQP may be described as PDQP with the additional power to decide whether, once we obtain a non-collapsing measurement, we collapse the state to that sample. This means that adding some form of adaptivity to PDQP and PostBQP makes them equivalent.
Furthermore, PDQP without partial measurements may be viewed as BQP with parallel queries. Various bounds on this setting have been developed, such as for Grover's algorithm [32] and the general adversary method [22].
Closely related to parallel queries is adaptivity, as non-adaptive algorithms have a single parallel query. Adaptivity of queries has been a subject of prior research. The positive weighted adversary method was adjusted for non-adaptive queries [23] and it was shown that total functions depending on n variables require Ω(n) queries to the input [24]. Both results showed that non-adaptivity restricts the speed-ups quantum algorithms may achieve. Related to this, the number of adaptive query rounds was studied in [18], where the authors showed that for any k, the k-fold Forrelation problem can be solved with k adaptive rounds using one query per round, while fewer adaptive rounds would require an exponential number of queries per round.
1.5 Open problems
We list several open questions worth pursuing below.
-
In BQP, the bounds for the collision problem and the element distinctness problem are polynomially related. Our results show that this is not true in PDQP. Is it possible to characterize the reason behind this stark difference? We note that it is something which is not recognizable by any algorithm.
-
All efforts to lower-bound the query complexity of PDQP have used methods derived from BQP. Is it possible to develop lower-bounding techniques which are tailored for non-collapsing measurements? Similarly, is it possible to obtain lower-bounds using an altered version of the polynomial method?333We discuss this in Subsection 7.2.
-
How do PDQP and other metaphysical classes behave under different restrictions? For example, how would PDQP work with noisy computation? Are there restrictions which affect only one of PDQP and BQP?
-
What is the power of PDQP if it can adapt its unitaries based on the non-collapsing measurement results?
-
Are there modifications of PDQP combined with the power of witnesses which are larger than NEXP?
-
There exist oracle separations between and in both ways444 holds by the lower-bound on search, while is by an oracle separation between and [3] and ., but there are other relationships worth exploring. For example, what is the relationship between and advice classes such as ?
2 Preliminaries
We assume familiarity with basic quantum computation concepts; for introductory texts, see [25, 31]. Let us define the notation used in this paper. Unless mentioned otherwise, n specifies the size of an arbitrary input. The set {1, …, n} is denoted by [n]. For a string x, x_i denotes the character in position i and |x| the Hamming weight. When describing a set, we implicitly mean it to be enumerated over [n]; otherwise we indicate a general set enumerated over an arbitrary index set. The symbols U and M denote arbitrary unitary and measurement gates. The Kronecker delta function δ_{i,j} is a boolean function which is 1 if and only if i = j. A tensor product of states is given dedicated notation in order to avoid confusion with other superscripts. The symbol F denotes the fidelity measure, while f denotes an arbitrary boolean function. We list several key properties of fidelity.
-
(Uhlmann’s Theorem [30]) Let ρ and σ be quantum states and |ψ⟩ a fixed purification of ρ. Then F(ρ, σ) = max |⟨ψ|φ⟩|, where the maximization is over purifications |φ⟩ of σ.
-
For any states , .
-
(Multiplicativity) For any states ρ₁, σ₁ and ρ₂, σ₂, F(ρ₁ ⊗ ρ₂, σ₁ ⊗ σ₂) = F(ρ₁, σ₁) · F(ρ₂, σ₂).
-
(Unitary invariance) For any states ρ, σ and an arbitrary unitary U, F(UρU†, UσU†) = F(ρ, σ).
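For pure states these properties reduce to facts about inner products, which can be checked numerically. We use the F = |⟨ψ|φ⟩| convention for pure-state fidelity; if the paper's convention squares this quantity, the checks below are unaffected.

```python
import numpy as np

def fidelity(psi, phi):
    """Fidelity of two pure states, using the F = |<psi|phi>| convention."""
    return abs(np.vdot(psi, phi))

rng = np.random.default_rng(0)

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi, phi = random_state(4), random_state(4)
chi, xi = random_state(3), random_state(3)

# multiplicativity: F(psi (x) chi, phi (x) xi) = F(psi, phi) * F(chi, xi)
lhs = fidelity(np.kron(psi, chi), np.kron(phi, xi))
rhs = fidelity(psi, phi) * fidelity(chi, xi)

# unitary invariance: the same unitary applied to both states
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
invariant = fidelity(q @ psi, q @ phi)
```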
2.1 Quantum Query Model
We will be using the quantum query model. In this model, the goal is to compute f(x) for some boolean function f and an input x. We are given access to x via an oracle O_x. We shall assume that we are working with a phase oracle, defined as follows,
O_x |i⟩ = (−1)^{x_i} |i⟩,
where |i⟩ is a query to the input index i of x and i ∈ [n]. The definition of the quantum query model with non-collapsing measurements is in Section 3. We will use the following lemma.
Lemma 7 (Hybrid argument of [12]).
Let and be states of an algorithm solving search using queries in total after steps without partial measurement where respectively there is no marked element or is the marked element. Then,
The adversary method proof technique develops an upper-bound on the progress an algorithm makes in one step in order to lower-bound the number of necessary steps. We will use a progress measure tracked at each step of the algorithm. In order to develop our positive weighted adversary lower-bound, we will use the notation from [8].
Definition 8 (Weight scheme of [8]).
Consider an arbitrary boolean function . Let , and . A weight scheme is defined using and such that for all and where ,
For a variable , define the weight of a variable and the load of a variable as,
Definition 9 (Weight load of [8]).
Assume a valid weight scheme as in Definition 8 for an arbitrary . Let . Then the maximum -load is defined as,
We define the maximum load of as,
Additionally, we will use the following property from [8].
3 Non-collapsing measurement model
We provide a formal definition of the quantum query model with non-collapsing measurements. We begin with the original model from [5] and follow it with our alteration. At the start, we consider the case without an oracle, which we add at the end.
The original definition from [5] is as follows. Let m be the total number of non-collapsing measurements made and Q be a quantum oracle which takes as an input a quantum circuit, given as an alternating sequence of unitary operators and collapsing measurement operators on the circuit's qubits. Let,
Define the oracle as a quantum oracle which inputs and outputs . If this oracle is called by a machine which cannot hold quantum states, the machine receives the measurement results of (i.e. upon receiving , the state immediately collapses to ).
Definition 11 (Section 2 of [5]).
PDQP is the class of languages which may be recognized by a polynomial-time deterministic Turing machine with one query to the oracle above, with probability of error at most 1/3.
We propose an alternative definition of the computational model by altering the oracle to one running an explicit circuit. The circuit starts with multiple registers, each initialized at zero. We label each register with an index which denotes which step of the original circuit, and hence which intermediate state, it computes. At each step of the original circuit, we apply adjusted gates in parallel to the appropriate registers, creating the intermediate states through parallel computation.
Let be the probability of measuring from and . Let,
| (1) | ||||
| (2) | ||||
| (3) | ||||
| (4) |
We define the alternative oracle as a quantum oracle which inputs the circuit, runs the adjusted circuit and outputs the final state. As with the original oracle, if the machine calling it cannot process quantum states, it receives the measurement results.
Proposition 12.
PDQP is equivalent to the class of languages which may be recognized by a polynomial-time deterministic Turing machine with one query to the alternative oracle, with error probability at most 1/3.
Proof.
By Definition 11, the original oracle defines the class, and we want to show that the alternative oracle recognizes the same languages. As both oracles have the same inputs, we only require that their outputs are sampled from the same distribution. Consider an arbitrary pair of outputs which are the respective measurements of the two oracles' final states, where the relevant state lies on a designated register of the adjusted circuit. As the application of the adjusted gates is in parallel, only the corresponding gates are applied to that register. Therefore we only require that the probability distributions of the measurement operators on the two states are equivalent.
Suppose that for the original oracle, the probability of measuring an arbitrary string at a given step is fixed. For the alternative oracle, we have that the probability of measuring the equivalent outcome is,
As the probability distributions are equivalent, the oracles produce equivalent outputs.
We shall add queries to our model. Let an oracle gate for input x be applied at chosen steps and count the number of queries performed. In the quantum oracle model with non-collapsing measurements, we insert the oracle gate after the unitary and measurement operator of its step. Without loss of generality, we may assume that we don't obtain a non-collapsing measurement at a step if we perform a query at that step. This enables the circuit to choose when to perform queries and non-collapsing measurements. In the adjusted circuit, this means that we apply the oracle gate in parallel over the relevant registers. You may see examples of the circuits being run by the two oracles in Figure 3.
Definition 13.
The query model creates a circuit based on the input , provides it to which runs the circuit . The query complexity of the model is measured by the number of queries (steps of the algorithm which perform a query) and non-collapsing measurements (steps which don’t perform a query) in .
By using the definition above, we find that the query complexity of the two models is equivalent.
Lemma 14.
The query complexity of the original query model is equivalent to the query complexity of the adjusted query model.
Proof.
Suppose that we provide a circuit which performs some number of queries and non-collapsing measurements to the original oracle. By our explanation of the query model above, we know that the alternative oracle simulates the queries by the parallel query gate, obtaining the same query complexity.
On the other hand, suppose we have a circuit that is run by the alternative oracle. Each unitary, measurement and query may be run by the original oracle by only running the gates on the first register, leading to the same complexity.
We note that one obtains that the gate complexity of the models is equal in a similar way. Furthermore, it is important to mention that by going from one circuit to the other under the same input, we adjust the width of the circuits while maintaining equal depth. By the characterization of fidelity using Bhattacharyya coefficients [17], we know that the fidelity between states may not decrease after measurement. We find that the same holds for our adjusted measurement gates, allowing us to restrict the power of partial measurements in this model.
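The cited monotonicity can be checked numerically for a computational-basis measurement on pure states: the Bhattacharyya coefficient of the outcome distributions is never smaller than the fidelity of the states themselves (the random test states are ours).

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi, phi = random_state(8), random_state(8)
state_fidelity = abs(np.vdot(psi, phi))        # F(psi, phi) for pure states
p = np.abs(psi) ** 2                           # outcome distribution of psi
q = np.abs(phi) ** 2                           # outcome distribution of phi
bhattacharyya = float(np.sum(np.sqrt(p * q)))  # classical fidelity of outcomes
```

The inequality follows from the triangle inequality: |Σ ψ_i* φ_i| ≤ Σ |ψ_i||φ_i| = Σ √(p_i q_i).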
Lemma 15.
Let be the pair of states with different query oracles at step of before the application of . Let and . Then,
Proof.
Let . The proof of the statement is done by explicitly defining the states as the sum of pure states whose purification results in the maximal fidelity, per Uhlmann’s theorem. We show that for any such purification, the same purification process increases the fidelity when applied to . This lower-bounds by Uhlmann’s theorem, proving our statement.
Let be an arbitrary integer. We define the state at each register the same due to the multiplicativity of fidelity. As the gate only affects registers, we shall ignore the rest. Each of the registers is the following mixed state,
where , , is the register being measured by and is its associated workspace. The definition of in is such that if we purify it as we do below, we obtain the maximum fidelity as shown by Uhlmann’s theorem. Let , and . Therefore the entire state and its purification may be described as,
Let , be the probability of measuring in , and . We apply to , which adjusts the amplitudes and ensures that the register measures to the same value over all registers. By using the definition of from Line 3,
By using our assumption about , we find,
| (5) | ||||
| (6) | ||||
| (7) | ||||
| (8) | ||||
| (9) | ||||
| (10) | ||||
| (11) |
Lines 5–7 are by our definitions above, Line 8 follows from the bound above, and Line 10 follows by Uhlmann's theorem, where we choose the pair of purifications which maximize the fidelity.
4 Lower-bounds on non-collapsing measurements
In this section we expand the bounds established in [5]. First, we generalize their lower-bound to a wider selection of problems. Next, we show that the search algorithm from [5] is optimal up to logarithmic factors by obtaining a matching lower-bound.
4.1 Weighted adversary method
Our proof closely follows the original proof of the positive weighted adversary method [8]. There are two main adjustments. We employ Lemma 15 to argue that using partial measurements won't affect the progress measure. Additionally, we consider the effect of non-collapsing measurements by using the model in Definition 13. In order to accommodate the second change, we will require the following proposition.
Proposition 16.
For any and where and ,
Proof.
Let . Note that the statement holds when . Therefore let .
Assume . We have,
By the bounds on and the assumption that , we have that , and . As , the statement holds.
Assume . Additionally, without loss of generality, assume is odd. We find,
As and , .
We define the progress measure as,
We bound the progress made at each step using the following lemma.
Lemma 17.
Let f be a boolean function with a valid weight scheme and its associated maximum load. Then the progress made by a PDQP algorithm in each query is bounded by,
Proof.
By the unitary invariance of fidelity, we will ignore the application of unitaries. Similarly, by Lemma 15, we won’t consider partial measurements as their effect may not decrease the progress measure. Therefore, without loss of generality, we assume our states are always pure. Additionally, we assume that during query , we have non-collapsing measurements left and have already performed .
Let denote the state of the algorithm on input at query and denote the same before the application of the oracle. Additionally, let be the state during query of as defined in Definition 13, while is the state in before the query. Using these definitions, we obtain the following identity,
| (12) | ||||
| (13) | ||||
| (14) |
Let where is the query register and is the workspace register associated with that query register. As we have non-collapsing measurements left to perform, an oracle call will affect the copies of the state labeled by and neglect the remaining .
For the sake of simplicity, let,
We find that,
[Equations (15)–(20)]
Line 15 follows from Line 14, Line 16 uses the multiplicativity of fidelity, Line 17 holds as the inner product of two states is bounded by 1, Line 18 applies Proposition 16 with and , and Line 20 follows from Proposition 10 and .
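To make the progress-measure bookkeeping concrete, the following sketch tracks the unweighted progress measure (all weights set to 1, with no non-collapsing or partial measurements modeled) for Grover search on a small instance: it sums the overlaps between the state on the all-zero input and the state on each weight-one input, and checks that no single query moves the measure by more than 2√n. This is an illustrative simulation of the standard adversary setup, not the exact model of Definition 13.

```python
import numpy as np

def grover_state(n, marked, t):
    """State after t Grover iterations; marked=None means the all-zero
    input, whose oracle is the identity."""
    s = np.full(n, 1 / np.sqrt(n))
    psi = s.copy()
    for _ in range(t):
        if marked is not None:
            psi[marked] *= -1.0            # phase oracle
        psi = 2 * s * (s @ psi) - psi      # diffusion 2|s><s| - I
    return psi

def progress(n, t):
    """Unweighted progress measure: sum over the relation pairing the
    all-zero input with each weight-one input of |<psi_0 | psi_j>|."""
    psi0 = grover_state(n, None, t)
    return sum(abs(psi0 @ grover_state(n, j, t)) for j in range(n))

n = 8
W = [progress(n, t) for t in range(3)]          # starts at n
drops = [W[t] - W[t + 1] for t in range(2)]     # per-query decrease
print(W, drops)
```

For n = 8 the measure drops from 8 to about 1 after two queries, while each individual query moves it by at most 2√n, matching the Θ(√n) query bound for search.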
Lemma 17 is sufficient to obtain the bound below.
Lemma 18.
Let be a boolean function with a weight scheme and its associated maximum load . In a setting, the following holds,
Proof.
Assuming success probability of , we have that . By Lemma 17,
4.2 Lower-bound on search
We follow the proof of the search lower-bound without partial measurements from [5] and apply Lemma 15 in order to incorporate partial measurements.
Lemma 19.
Any algorithm solving the search problem in setting using queries and non-collapsing measurements satisfies,
Proof.
Consider a algorithm where . We compare the cases when there are no marked items versus when is a marked item. Let their respective quantum states at step be and and be states of the same algorithm without partial measurement. By Lemma 7, we have,
By summing over non-collapsing measurements and using an averaging argument, there exists an such that,
By our assumption on the number of queries, this norm must be constant. Say that . Let represent all the states before measurement and the distribution of their measurements. In order to notice the presence of a marked item, we require . Therefore,
[Equations (21)–(27)]
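The trade-off captured by Lemma 19 is matched by the Grover-then-sample algorithm of [5]: run roughly n^{1/3} Grover iterations so that each sample sees the marked item with probability about n^{-1/3}, then take roughly n^{1/3} non-collapsing measurements. The sketch below checks this arithmetic numerically, modeling each non-collapsing measurement as an independent sample from the current outcome distribution; the constants and exact parameter choices are illustrative, not taken from the paper.

```python
import math

def grover_hit_prob(n, t):
    """Probability that measuring the state after t Grover iterations
    returns the unique marked item among n."""
    theta = math.asin(1 / math.sqrt(n))
    return math.sin((2 * t + 1) * theta) ** 2

def pdqp_search_success(n, t, k):
    """Success probability with t queries followed by k non-collapsing
    measurements, idealized as k independent samples."""
    p = grover_hit_prob(n, t)
    return 1 - (1 - p) ** k

n = 4096                        # n^(1/3) = 16
t = k = round(n ** (1 / 3))     # ~n^(1/3) queries and samples each
success = pdqp_search_success(n, t, k)
print(success)
```

With t = k ≈ n^{1/3}, the success probability is already near 1, whereas a substantially smaller budget (e.g. t = k = 4 for n = 4096) leaves it far below any constant threshold.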
5 Non-collapsing measurements and non-adaptive queries
5.1 Non-adaptive query model
In this section, we describe the non-adaptive query model and prove its lower-bounds. Informally, the model performs all of its queries at the start of the computation, meaning that they are independent of the input . We obtain a trade-off between and .
The notation and proof closely follows that of [23]. Assume we perform non-adaptive queries. Given an oracle operator for an input , we define the query as,
where . The algorithm may be described as follows,
Assume we perform a non-collapsing measurement after every unitary except . The label describes a variable in the non-adaptive query setting. For example, describes the number of non-collapsing measurements performed. Next, we define a variation of the weight scheme from Definition 8 for non-adaptive queries. The main idea of the proof is lower-bounding the minimum size of the initial parallel query by arguing that a smaller query would require at least two rounds of parallel queries. We use the superscript to denote a variable for the parallel query of size .
Definition 20 (Weight scheme of [23]).
Consider an arbitrary boolean function and an input . We define , and an index-tuple . Therefore .
Let and be the weight variables as in Definition 8 for . Then and are their variations for . We let .
We will use the following lemma from [23].
Lemma 21 (Lemma 2 in [23]).
Let be an input and a weight function. For any and an index-tuple ,
5.2 Weighted adversary method with non-adaptive queries
We prove the results on non-adaptive queries using the technique from [23], resulting in the lemma below.
Lemma 22.
Consider a function with a weight scheme . Then any algorithm with queries and non-collapsing measurements satisfies,
In order to prove Lemma 22, we will first establish the following result, which connects Lemma 18 and weights for non-adaptive inputs.
Lemma 23.
Let be a weight function for . Then is a valid weight function for and any algorithm solving using queries and non-collapsing measurements satisfies,
where .
Proof.
We will prove the following lemma:
Lemma 24.
Let be a valid weight function and such that . Then,
Proof.
Let be such that
[Equation (28)]
We shall show that performing non-adaptive queries would require us to repeat the computation more than once, implying that . By using Lemma 21 and the fact that for any and , we have that,
Therefore we have that,
where we applied our assumption on . Therefore, by Lemma 23, , proving the lemma.
6 Lower-bounding quantum computation with copies
In this section, we apply the ideas from Lemma 18 to , quantum computation which is able to create a copy of any state. Another way to consider is as the computational class representing a world where the no-cloning theorem does not hold. The point of this section is to explicitly showcase the power difference between non-collapsing measurements and copying a state. As most steps in our analysis are identical, we only focus on the differences.
6.1 Computational model with copies
The key difference between and is that the latter has the ability to apply unitaries to the copies after creating them, while must immediately collapse them. As these copies may become entangled, purifying the algorithm in a manner similar to Definition 13, meaning explicitly creating copies of each state in order to analyze it, will require additional registers. Therefore, as we explain below, instead of using registers as in , we will need registers. ( denotes either the number of non-collapsing measurements or copies; which is meant will be clear from context.)
Assume we perform copies in total. Let be the starting register and for , be its copies at step . Without loss of generality, assume we only copy and apply the oracle to . The gate symbolizes a copy gate, which creates a copy, allowing us to apply unitaries across all registers for . Prior to applying the gate, we must ensure that and are copies, meaning that similarly to Definition 13, we apply unitary in parallel to both. Finally, we account for the effect of partial measurement by using a partial measurement gate , ensuring that partial measurement results before applying are the same across all registers. When adding queries to this model, we follow the same steps as in Definition 13.
Finally, as each copy must mimic the register it is copying, we need to add further purification registers for each copy . These registers mimic the process of the register until it has been copied. As each copy may be entangled, each additional copy doubles the number of purification registers. In order to describe copy of the computation, we require registers, including the register . Therefore the total number of registers is . An example of a circuit with the ability to copy, together with its purification, is in Figure 4.
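Under the doubling just described, starting from the single register and doubling the register count with each copy, the total after c copies comes out to 2^c; the closed form is our reading of the recurrence, so treat this as a sketch.

```python
def total_registers(c):
    """Registers needed to purify a computation with c copies, assuming
    each additional copy doubles the register count (starting from the
    single original register)."""
    registers = 1          # the original register
    for _ in range(c):
        registers *= 2     # each copy doubles the count
    return registers

print([total_registers(c) for c in range(5)])
```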
6.2 Adversary lower-bound
We prove a version of Lemma 17 for .
Lemma 25.
Let be a boolean function with a valid weight scheme and its associated maximum load as in Definition 9. Then the progress made by a algorithm each query is bounded by,
where is the number of copies.
Proof.
First, we shall ignore the effect of unitaries and partial measurement by the same arguments and definitions of variables as in Lemma 17. In this case . We obtain the following result,
| (29) | ||||
| (30) | ||||
| (31) |
We find the following relationship between the number of queries and copies.
Lemma 26.
Let be a boolean function with a weight scheme and its associated maximum load . A algorithm with queries and copies satisfies,
7 Bounds on problems with non-collapsing measurements
7.1 Proofs of applications of the lower-bounding techniques
We shall apply Lemmata 18 and 24 to concrete problems in order to compare the two models. First, note that the positive-weighted adversary technique reduces to the basic adversary technique by setting all weights to 1.
Corollary 27.
Let be a boolean function, be two subsets of -inputs and -inputs respectively and a relation . Furthermore, let be the same as in Result 1. Then a query algorithm solving with queries and non-collapsing measurements satisfies,
Furthermore, a query algorithm with non-adaptive queries solving with queries and non-collapsing measurements satisfies,
Proof.
Let us obtain the values for the search, majority, collision and element distinctness problems. For each problem, we shall define the sets and their relation which maximizes while minimizing . In addition, we describe the corresponding algorithms. As throughout the document, we let denote the size of the input. Finally, a summary of the results described below may be found in Table 1.
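For the bookkeeping in each case below, the unweighted bound of Corollary 27 takes the standard Ambainis form √(m·m′/(l·l′)) [7]. A small helper evaluates it; the instantiation shown uses the search values m = n, m′ = 1, l = l′ = 1 derived in the next paragraph (variable names are ours, following the usual adversary notation).

```python
import math

def adversary_bound(m, m_prime, l, l_prime):
    """Unweighted adversary lower bound sqrt(m * m' / (l * l')) [7]."""
    return math.sqrt(m * m_prime / (l * l_prime))

# Search on n bits: every index connects the all-zero input to a
# weight-one input, so m = n, m' = 1, l = l' = 1 -> Omega(sqrt(n)).
n = 1024
print(adversary_bound(n, 1, 1, 1))   # 32.0
```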
Search problem.
We know that the query complexity of the search problem in is due to the algorithm in [5], while our Lemma 19 proves that this algorithm is tight up to logarithmic factors. However, the same technique does not apply to with non-adaptive queries. We may split the inputs for search as follows:
-
Let as if and only if .
-
Let as these are the only values which only differ by 1 character from any input in .
Therefore we find that , as we may choose any index to go from to . Furthermore, , as each of these connections between and occurs on only one index. By Corollary 27, the query complexity of the search problem in the setting with non-adaptive queries is . This is tight, as one may run an algorithm which partitions into sets and queries each separately. Afterwards, we may perform non-collapsing measurements on all partitions at once, which, with , would return a marked item with high probability if one exists.
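The partition-and-sample algorithm above can be sanity-checked in closed form. We model a non-collapsing measurement of a block's (unamplified) post-query state as an independent uniform sample of that block, so each sample of the block holding the marked item hits it with probability 1/(block size); the parameter choices below are illustrative.

```python
def nonadaptive_search_success(n, q, samples_per_block):
    """Partition [n] into q blocks of size n/q, query each block once
    (non-adaptively), then take samples_per_block non-collapsing
    measurements of each block's state.  Each sample of the block
    containing the marked item hits it with probability q/n."""
    block_size = n // q
    p_miss = (1 - 1 / block_size) ** samples_per_block
    return 1 - p_miss

n = 1024
q = 32                                     # sqrt(n) parallel queries
print(nonadaptive_search_success(n, q, 5 * q))
```

With q = √n blocks and a few multiples of √n samples per block, the marked item is found with probability close to 1, while a single sample per block succeeds with probability only about 1/√n.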
Majority problem.
To lower-bound the query complexity of the majority problem, we shall choose sets of inputs whose Hamming weights differ by one. Due to the structure of the problem, this is only possible when the Hamming weight is around . Therefore let , and if and only if . This means that , and . Therefore, by Corollary 27, the lower-bound for is in both cases, regardless of the adaptivity of the queries.
Similarly to the non-adaptive algorithm for search, the best algorithm for the problem queries partitions of the input, each of size . Afterwards, we may perform non-collapsing measurements. By the coupon collector's problem, we will obtain all values with high probability, meaning the complexity of the problem is .
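The coupon-collector step can be quantified: sampling (index, value) pairs uniformly, the expected number of non-collapsing measurements needed to see all n indices is n·H_n ≈ n ln n, which is where the logarithmic factor comes from. The check below uses the exact harmonic-sum formula, a standard coupon-collector fact.

```python
import math

def expected_samples_all_indices(n):
    """Coupon collector: expected number of uniform samples needed to
    observe every one of n indices at least once, equal to n * H_n."""
    return n * sum(1 / i for i in range(1, n + 1))

n = 1000
e = expected_samples_all_indices(n)
print(e, n * math.log(n))    # n*H_n sits just above n*ln(n)
```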
Element Distinctness problem.
To analyze the element distinctness problem, we let be the set of injective functions and the set of functions which are almost injective. By almost injective, we mean that there exists exactly one pair of inputs such that . We may then define the relation between these sets as connecting inputs which differ in exactly one output. Explicitly, if and only if . Thus every injective function may become almost injective by changing any one of its images to an image which already has a preimage. Therefore and . On the other hand, one may only choose one of two indices in order to turn an almost injective function into an injective one by changing a single index. Therefore and . Thus we find that the element distinctness problem in the query models with and without adaptive queries has a and lower-bound respectively. Finally, the algorithm is the same as for non-adaptive search and the majority problem, giving us a query bound of .
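The counts above can be verified by brute force on a small instance. The script below enumerates all functions [n] → [n] for n = 4, takes the injective functions as one side and the almost-injective ones (exactly one collision) as the other, relates two functions when they differ in exactly one position, and checks m = n(n−1), m′ = 2, l = n−1, l′ = 1; the variable names are ours, following the standard adversary notation.

```python
from itertools import product

n = 4
funcs = list(product(range(n), repeat=n))
injective = [f for f in funcs if len(set(f)) == n]
almost = [f for f in funcs if len(set(f)) == n - 1]  # exactly one collision

def related(f, g):
    """f and g differ in exactly one position."""
    return sum(a != b for a, b in zip(f, g)) == 1

m = min(sum(related(f, g) for g in almost) for f in injective)
m_prime = min(sum(related(f, g) for f in injective) for g in almost)
# Per-index loads: related partners whose single difference is index i.
l = max(sum(related(f, g) and f[i] != g[i] for g in almost)
        for f in injective for i in range(n))
l_prime = max(sum(related(f, g) and f[i] != g[i] for f in injective)
              for g in almost for i in range(n))
print(m, m_prime, l, l_prime)
```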
7.2 Polynomial method lower-bounding technique
We applied a lower-bounding technique to , but it is not tight in all situations. This may be due to the fact that the lower-bound establishes a relationship instead of . We note that the lower-bound on search in [5] had the same issue. It would be interesting if one were able to apply the polynomial method to the setting. One approach would be to proceed in the spirit of Definition 13. Let be the polynomial associated with the amplitudes of the state at register . One may attempt to restrict each in two ways. First, for each , one would ensure that builds on , in that omitting the last query from would yield . Second, the polynomial describing the amplitude of the entire state is a product of all , with a caveat. Without partial measurement, . However, after partial measurement, would become a sum of several polynomials, where the superscript signals a measurement result. In order to ensure partial measurement remains consistent across all polynomials, we would require . We note that a list of other open problems we found interesting may be found in Subsection 1.5.
References
- [1] Scott Aaronson. Quantum computing and hidden variables ii: The complexity of sampling histories, 2004. arXiv:quant-ph/0408119.
- [2] Scott Aaronson. Quantum computing, postselection, and probabilistic polynomial-time, 2004. arXiv:quant-ph/0412187.
- [3] Scott Aaronson. Impossibility of succinct quantum proofs for collision-freeness, 2011. arXiv:1101.0403.
- [4] Scott Aaronson. PDQP/qpoly = ALL, 2018. arXiv:1805.08577.
- [5] Scott Aaronson, Adam Bouland, Joseph Fitzsimons, and Mitchell Lee. The space “just above” BQP, 2014. arXiv:1412.6507.
- [6] Scott Aaronson, Sabee Grewal, Vishnu Iyer, Simon C. Marshall, and Ronak Ramachandran. PDQMA = DQMA = NEXP: QMA with hidden variables and non-collapsing measurements, 2024. doi:10.48550/arXiv.2403.02543.
- [7] Andris Ambainis. Quantum lower bounds by quantum arguments, 2000. arXiv:quant-ph/0002066.
- [8] Andris Ambainis. Polynomial degree vs. quantum query complexity, 2004. arXiv:quant-ph/0305028.
- [9] Ning Bao, Adam Bouland, and Stephen P. Jordan. Grover search and the no-signaling principle. Physical Review Letters, 117(12), September 2016. doi:10.1103/physrevlett.117.120501.
- [10] Roozbeh Bassirian and Kunal Marwaha. Superposition detection and QMA with non-collapsing measurements, 2024. doi:10.48550/arXiv.2403.02532.
- [11] Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf. Quantum lower bounds by polynomials, 1998. arXiv:quant-ph/9802049.
- [12] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths and weaknesses of quantum computing. SIAM Journal on Computing, 26(5):1510–1523, October 1997. doi:10.1137/s0097539796300933.
- [13] Gilles Brassard, Peter Høyer, and Alain Tapp. Quantum cryptanalysis of hash and claw-free functions: Invited paper, pages 163–169. Springer Berlin Heidelberg, 1998. doi:10.1007/bfb0054319.
- [14] Sitan Chen, Jordan Cotler, Hsin-Yuan Huang, and Jerry Li. The complexity of NISQ, 2022. doi:10.48550/arXiv.2210.07234.
- [15] Andrew Childs. Lecture notes on quantum algorithms, 2017.
- [16] Vincenzo Fiorentino and Stefan Weigert. A quantum theory with non-collapsing measurements, 2023. arXiv:2303.13411.
- [17] Christopher A. Fuchs and Carlton M. Caves. Mathematical techniques for quantum communication theory, 1996. arXiv:quant-ph/9604001.
- [18] Uma Girish, Makrand Sinha, Avishay Tal, and Kewen Wu. The power of adaptivity in quantum query algorithms, 2023. doi:10.48550/arXiv.2311.16057.
- [19] Lov K. Grover. A fast quantum mechanical algorithm for database search, 1996. arXiv:quant-ph/9605043.
- [20] Henrique Hepp, Murilo V.G. da Silva, and Leandro M. Zatesko. Oracle separations for non-adaptive collapse-free quantum computing. Theoretical Computer Science, 1030:115078, 2025. doi:10.1016/j.tcs.2025.115078.
- [21] Ryo Hiromasa, Akihiro Mizutani, Yuki Takeuchi, and Seiichiro Tani. Rewindable quantum computation and its equivalence to cloning and adaptive postselection. Theory of Computing Systems, 69(1), January 2025. doi:10.1007/s00224-024-10208-5.
- [22] Stacey Jeffery, Frederic Magniez, and Ronald de Wolf. Optimal parallel quantum query algorithms, 2015. arXiv:1309.6116.
- [23] Pascal Koiran, Jürgen Landes, Natacha Portier, and Penghui Yao. Adversary lower bounds for nonadaptive quantum algorithms, 2008. arXiv:0804.1440.
- [24] Ashley Montanaro. Nonadaptive quantum query complexity, 2010. arXiv:1001.0018.
- [25] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2010.
- [26] Ansis Rosmanis. Addendum to “Quantum search with noisy oracle”, 2024. arXiv:2405.11973.
- [27] Amit Sahai and Salil Vadhan. A complete problem for statistical zero knowledge. Cryptology ePrint Archive, Paper 2000/056, 2000. URL: https://eprint.iacr.org/2000/056.
- [28] Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. In The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings., SFCS-02, pages 513–519. IEEE Comput. Soc., 2002. doi:10.1109/sfcs.2002.1181975.
- [29] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484–1509, October 1997. doi:10.1137/s0097539795293172.
- [30] A. Uhlmann. The “transition probability” in the state space of a *-algebra. Reports on Mathematical Physics, 9(2):273–279, 1976. doi:10.1016/0034-4877(76)90060-4.
- [31] John Watrous. The theory of quantum information, 2018.
- [32] Christof Zalka. Grover’s quantum searching algorithm is optimal. Physical Review A, 60(4):2746–2751, October 1999. doi:10.1103/physreva.60.2746.
- [33] Robert Špalek and Mario Szegedy. All quantum adversary methods are equivalent, 2006. arXiv:quant-ph/0409116.
