A Rewriting Theory for Quantum λ-Calculus
Abstract
Quantum lambda calculus has been studied mainly as an idealized programming language – its evaluation essentially corresponds to a deterministic abstract machine. Very little work has been done to develop a rewriting theory for quantum lambda calculus. Recent advances in the theory of probabilistic rewriting give us a way to tackle this task with tools unavailable a decade ago. Our primary focus is standardization and normalization results.
Keywords and phrases: quantum lambda-calculus, probabilistic rewriting, operational semantics, asymptotic normalization, principles of quantum programming languages
2012 ACM Subject Classification: Theory of computation → Quantum computation theory; Theory of computation → Operational semantics; Theory of computation → Equational logic and rewriting; Theory of computation → Linear logic
Funding: This work has been partially funded by the French National Research Agency (ANR) through the projects PPS ANR-19-CE48-0014 and TaQC ANR-22-CE47-0012, and, within the framework of “Plan France 2030”, through the research projects EPIQ ANR-22-PETQ-0007, OQULUS ANR-23-PETQ-0013, HQI-Acquisition ANR-22-PNCQ-0001 and HQI-R&D ANR-22-PNCQ-0002.
Editors: Jörg Endrullis and Sylvain Schmitz
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Quantum computation is a model of computation in which one has access to data coded on the state of objects governed by the laws of quantum physics. Due to the unique nature of quantum mechanics, quantum data exhibits several non-intuitive properties [37]: it is non-duplicable, it can exist in superposition, and reading the memory exhibits a probabilistic behavior. Nonetheless, the mathematical formalization is well-established: the state of a quantum memory and the available manipulations thereof can be expressed within the theory of Hilbert spaces.
Knill’s QRAM model [34] describes a generic interface for interacting with such a quantum memory. The memory is modeled with uniquely identified quantum registers, each holding one quantum bit – also called a qubit. The interface should make it possible to create and discard registers and apply elementary operations on arbitrary registers. These operations consist of unitary gates and measurements. The former are internal, local modifications of the memory state, represented by a quantum circuit, while the latter are the operations for reading the memory. Measurements are probabilistic operations returning a classical bit of information.
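Such an interface can be pictured as a small module signature. The following is a minimal sketch in OCaml; the names reg, init, discard, unitary and measure are our own illustration, not a fixed API:

```ocaml
(* A minimal QRAM-style interface: each register holds one qubit. *)
module type QRAM = sig
  type reg                                  (* uniquely identified register *)
  val init    : unit -> reg                 (* create a fresh qubit in |0>  *)
  val discard : reg -> unit                 (* discard a register           *)
  val unitary : string -> reg list -> unit  (* apply a named unitary gate   *)
  val measure : reg -> bool                 (* read: probabilistic outcome  *)
end
```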
Quantum algorithms are typically designed with a model akin to Knill’s QRAM [37]. A quantum algorithm consists of the description of a sequence of quantum operations, measurements, and classical processing. The control flow is purely classical, and generally, the algorithm’s behavior depends on the results of past measurements. An algorithm, therefore, mixes classical processing and interaction with the quantum memory in potentially arbitrary ways: quantum programming languages should be designed to handle this interplay.
Quantum λ-Calculus and Linear Logic.
In the last 20 years, many proposals for quantum programming languages have emerged [28, 30, 40, 39, 8, 11]. As in the classical setting, the paradigm of higher-order, functional quantum programming languages has been shown to be a fertile playground for the development of well-founded quantum languages aiming at the formal analysis of quantum programs [40, 11].
The quantum λ-calculus of Selinger & Valiron [42] lies arguably at the foundation of the development of higher-order, quantum functional programming languages [14, 13, 38, 12, 35]. Akin to other extensions of lambda-calculus with probabilistic [17, 15, 21] or non-deterministic behavior [16], the quantum lambda calculus extends the regular lambda calculus – the core of functional programming languages – with an interface to manipulate a quantum memory. Due to the properties of the quantum memory, quantum lambda-calculi should handle non-duplicable data and probabilistic behavior.
One of the critical points that a quantum programming language should address is the problem of the non-duplicability of quantum data. In the literature, non-duplicable data is usually captured with tools coming from linear logic. The first approach [42, 38, 12] consists in using types, imposing all functions to be linear by default, and using a special type !A to explicitly pinpoint the duplicable objects of type A. An alternative – untyped – approach [13, 14] considers instead an untyped lambda calculus, augmented with a special term construct “!” (the bang) and validity constraints forbidding the duplication of qubits.
Probabilistic and Infinitary Behavior.
A quantum λ-calculus featuring all of quantum computation should not only permit the manipulation of quantum registers with unitary operations but should also give the possibility to measure them, and retrieve a classical bit of information. As the latter is a probabilistic operation, an operational semantics for a quantum λ-calculus is inherently probabilistic. As in the non-quantum case, probabilistic choice and unbounded recursion yield subtle behaviors.
Fair Coin. Consider the following program, written in a mock-up ML language with quantum features, similar to the language of [42].
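A sketch of its behavior in plain OCaml, simulating the quantum coin classically; the helper meas_h_new, which stands for the quantum expression meas (H new), is our naming:

```ocaml
(* Simulate [meas (H new)]: a fresh qubit in |0>, rotated by H,
   measures to head or tail with probability 1/2 each.          *)
let meas_h_new () = Random.bool ()

let rec omega () = omega ()     (* the usual looping term *)

(* Head: return the identity function.  Tail: diverge. *)
let program () =
  if meas_h_new () then (fun x -> x) else omega ()
```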
For this introduction, we only describe its behavior informally. The term above produces a qubit in state (1/√2)(|0⟩ + |1⟩) by creating a fresh qubit in state |0⟩ (this is the role of new), and applying the Hadamard gate H. (The reader unfamiliar with the notation should not worry, as the formal details are not essential at this point: just retain that the state of our qubit is a superposition of two basis states, which play the role of head and tail. When needed, in Section 3 we provide a brief introduction to the mathematical formalism for quantum computation [37].) Measuring this qubit amounts to flipping a fair coin, with equal probability 1/2 for each outcome. In one case, the program returns the identity function I; otherwise, it diverges – the term Ω stands for the usual, non-terminating looping term.
The program, therefore, uses the quantum memory only once (at the beginning of the run), and it terminates with probability 1/2.
Unbounded Use of Fair Coin.
In the context of probabilistic behavior, unbounded loops might terminate asymptotically: a program may terminate with probability 1, but only at the limit (almost-sure termination). A simple example suffices to illustrate this point.
Consider a quantum process that flips a coin by creating and measuring a fresh qubit. If the result is head, the process stops, outputting a result. If the result is tail, it starts over. In our mock-up ML, the program is
    let rec flip x = if meas (H new) then I else flip x        (1)
After n iterations, the program is in normal form with probability 1 − 2⁻ⁿ. Even if the termination probability is 1, this probability of termination is not reached in a finite number of steps, but only as a limit. The program in Equation 1 is our leading example: we formalize it in Example 4.3.
Operational Semantics of Quantum Programs.
As is customary when dealing with choice effects, the probabilistic behavior is dealt with by fixing an evaluation strategy. Think of tossing a (quantum) coin and duplicating the result, versus tossing the coin twice: this is indeed one key issue at the core of confluence failure in such settings (as observed in [16, 13]). Following the standard approach adopted for functional languages with side effects, the evaluation strategy in quantum λ-calculi such as [42, 38, 12] is a deterministic call-by-value strategy: an argument is reduced to a value before being passed to a function.
One aspect that has seldom been examined, however, is the properties of the general reduction associated with the quantum lambda-calculus: this is the purpose of this paper.
A Rewriting Theory for the Quantum λ-Calculus.
Lambda calculus has a rich, powerful notion of reduction, whose properties are studied in a vast body of literature. Such a general rewriting theory provides a sound framework for reasoning about program transformations, such as compiler optimizations or parallel implementations, and a base on which to reason about program equivalence. The two fundamental operational properties of lambda calculus are confluence and standardization. Confluence guarantees that normal forms are unique; standardization guarantees that if a normal form exists, there is a strategy that is guaranteed to terminate in such a form.
As pioneered by Plotkin [41], standardization allows one to bridge between the general reduction (where program transformations can be studied) and a specific evaluation strategy, which implements the execution of an idealized programming language. Summarizing the situation: for programming languages, there are two kinds of term rewriting, run-time rewriting (“evaluation”) and compile-time rewriting (program transformations).
In the context of quantum lambda-calculi, the only line of research discussing rewriting (rather than fixing a deterministic strategy) has been pursued by Dal Lago, Masini, and Zorzi [14, 13]: working with an untyped quantum lambda-calculus, they establish confluence results (and also a form of standardization, but only for the sub-language without measurement [13] – therefore, without the probabilistic behavior).
In this paper, we study not only confluence but also standardization and normalization results for a quantum λ-calculus featuring measurement, and where β reduction (the engine of λ-calculus) is fully unrestricted. Recent advances in probabilistic and monadic rewriting theory [9, 12, 19, 33, 6, 23, 27, 32] allow us to tackle this task with a formalism and powerful techniques unavailable a decade ago. Still, quantum rewriting is more challenging than probabilistic rewriting, because we need to manage the states of the quantum memory. The design of the language is, therefore, also delicate: we need to control the duplication of qubits while allowing the full power of β-reduction.
Contributions.
We can summarize the contributions of the paper as follows. These are described in more detail in Section 5, once all the necessary material has been set up.
- An untyped quantum lambda-calculus, closely inspired by [14] but re-designed to allow for a more general reduction, now encompassing the full strength of β-reduction; validity constraints make it quantum-compatible.
- The calculus is equipped with a rich operational semantics, which is sound with respect to quantum computation. The general reduction enables arbitrary β-reduction; surface reduction (in the spirit of [43] and other calculi based on Linear Logic) plays the role of an evaluation strategy.
- We study the rewriting theory for the system, proving confluence of the reduction, and standardization.
- We obtain a normalization result that scales to the asymptotic case, defining a normalizing strategy w.r.t. termination at the limit.
Missing proofs and some more technical details are given in the extended version [25].
2 Setting the Scene: the Rewriting Ingredients
This section is devoted to a more detailed (but still informal) discussion of two key elements: the style of λ-calculus we adopt, and what standardization results are about. The calculus is then defined in Section 3, its operational semantics in Section 4; standardization and normalization follow in the subsequent sections.
Untyped Quantum λ-Calculus.
Our quantum calculus is built on top of Simpson’s calculus [43], a variant of untyped λ-calculus inspired by Girard’s Linear Logic [29]. In this choice, we follow [14, 13, 12]. Indeed, the fine control of duplication which it inherits from linear logic makes it an ideal base for quantum computation.
The Bang operator ! plays the role of a marker for non-linear management: duplicability and discardability of resources. Abstraction is refined into linear abstraction λx.M and non-linear abstraction λ!x.M. The latter allows duplication of the argument, which is required to be suspended as a thunk !N, behaving as the !-box of linear logic.
Example 2.1 (duplication, or not).
λ!x.CNOT⟨H x, x⟩ is a valid term, but (λx.CNOT⟨H x, x⟩) new – which would duplicate the qubit created by new – is not. Instead, we can validly write (λ!x.CNOT⟨H x, x⟩) !new, which thunks new and then duplicates it, yielding CNOT⟨H new, new⟩. Notice that this term produces an entangled pair of qubits.
In our paper, as well as in [14, 13, 12], surface reduction (i.e., no reduction is allowed in the scope of the ! operator) is the key ingredient allowing the coexistence of quantum bits with duplication and erasure. Unlike previous work, however, in our setting β-reduction – the engine of λ-calculus – is unconstrained. We prove that only quantum operations need to be surface, making ours a conservative extension of the usual λ-calculus, with its full power.
Call-by-Value… or rather, Call-by-Push-Value.
The reader may have recognized that reduction in our calculus follows the Call-by-Push-Value paradigm, with the Bang operator thunking a computation. In fact, Simpson’s calculus [43] – more precisely, the fragment without linear abstraction – is essentially an untyped version of Call-by-Push-Value, and it has been extensively studied in the Linear Logic literature under the name of Bang calculus [20, 31, 10], as a unifying framework which subsumes both Call-by-Name (CbN) and Call-by-Value (CbV) calculi.
Table 1: Factorization and normalization in three pure λ-calculi.

| | Call-by-name λ-calculus | Call-by-value λ-calculus | Linear λ-calculus |
|---|---|---|---|
| General reduction | →β | →β | →β |
| Evaluation | head (→h) | weak-left (→w) | surface (→s) |
| 1. Factorization | Head: M ↠β N iff M ↠h · ↠¬h N | Weak-left: M ↠β N iff M ↠w · ↠¬w N | Surface: M ↠β N iff M ↠s · ↠¬s N |
| 2. Normalization | Head: M ↠β a hnf iff M ↠h a hnf | Value: M ↠β a value iff M ↠w a value | Surface: M ↠β a snf iff M ↠s a snf |
A Taste of Standardization and Normalization: Pure λ-Calculi.
Termination and confluence concern the existence and the uniqueness of normal forms, which are the results of a computation. Standardization and normalization are concerned with how to compute a given outcome. For example, is there a strategy which guarantees termination, whenever termination is possible? The desired outcome is generally a specific kind of term. In the classical theory of λ-calculus (à la Barendregt), the terms of interest are head normal forms. In the Call-by-Value approach, the terms of computational interest are values.
Classical λ-calculus. The simplest form of standardization is factorization: any reduction sequence can be re-organized so as to first perform specific steps and then everything else. A paradigmatic example is the head factorization theorem of classical λ-calculus (Theorem 11.4.6 in [7]): every β-reduction sequence can be re-organized/factorized so as to first reduce head redexes and then everything else – in symbols, M ↠β N iff M ↠h · ↠¬h N.
A major consequence is head normalization, relating arbitrary reduction with head reduction, w.r.t. head normal forms, the terms of computational interest in classical λ-calculus: a term has a head normal form if and only if head reduction terminates on it.
Plotkin’s Call-by-Value. This kind of result takes its full computational meaning in Plotkin’s Call-by-Value λ-calculus [41]. The terms of interest are here values. Plotkin relates arbitrary reduction (↠β) and the evaluation strategy which only performs weak-left steps (↠w, no reduction in the scope of abstractions), by establishing that M ↠β V for some value V if and only if M ↠w V′ for some value V′. In words: the unconstrained reduction (↠β) returns a value if and only if the evaluation strategy (↠w) returns a value. The proof relies on a factorization: M ↠β N iff M ↠w · ↠¬w N.
Simpson’s pure calculus. Standardization and normalization results have been established by Simpson also for his calculus [43]. Here, the evaluation strategy is surface reduction, i.e. no reduction is allowed in the scope of a ! operator.
Summary. Table 1 summarizes the factorization and normalization results for the three calculi (respectively based on head, weak-left, and surface reduction) which we have discussed.
3 Untyped Quantum λ-Calculus
Quantum lambda-calculus is an idealization of a functional quantum programming language: following Selinger and Valiron [42], it consists of a regular λ-calculus together with specific constructs for manipulating quantum data and quantum operations. One of the problems consists in accommodating the non-duplicability of quantum information: in a typed setting [42] one can rely on a linear type system. In our untyped setting, we instead base our language on Simpson’s λ-calculus [43], extended with constructs corresponding to quantum data and quantum operations.
Due to entanglement, the state of an array of quantum bits cannot be separated into states of individual qubits: the information is non-local. A corollary is that quantum data cannot easily be written inside lambda-terms: unlike Boolean values or natural numbers, one cannot put in the grammar of terms a family of constants standing for all of the possible values a quantum bit could take. A standard procedure [42] relies on an external memory, with register identifiers used as placeholders for qubits inside the lambda-term. As they stand for qubits, these registers are taken as non-duplicable.
In the original quantum lambda-calculus [42], regular free variables of type qubit were used to represent registers. In this work, being untyped, we prefer to consider two kinds of variables: regular term variables, and special variables, called registers and denoted by rₙ with n ∈ ℕ, corresponding to the qubit number n in the quantum memory. The language is also equipped with three term constructs to manipulate quantum information. The first term construct is new, producing the allocation of a fresh qubit (unlike in the original quantum λ-calculus [42], the term new literally evaluates to a qubit). The second term construct is meas(P; M, N), corresponding to a destructive measurement of the qubit P: the evaluation then probabilistically continues as M or N, depending on the measure being 0 or 1. Finally, assuming that the memory comes with a set of built-in unitary gates ranged over by letters U, the term U(r₁, …, rₙ) corresponds to a function applying the gate U to the indicated qubits.
Raw Terms.
Formally, raw terms are built according to the following grammar:

$$M, N, P \;::=\; x \;\mid\; r_n \;\mid\; \lambda x.M \;\mid\; \lambda!x.M \;\mid\; M\,N \;\mid\; \,!M \;\mid\; \mathtt{new} \;\mid\; \mathtt{meas}(P; M, N) \;\mid\; U(M_1,\dots,M_n) \qquad (\text{terms})$$
where x ranges over a countable set of variables, rₙ over a disjoint set of registers, where n ∈ ℕ is called the identifier of the register, and U over a set of built-in n-ary gates. In this paper, we limit the arity to be 1 or 2. Pairs do not appear as a primitive construct, but we adopt the standard encoding, writing ⟨M, N⟩ as sugar for the usual Church-style encoding of pairs. The variable x is bound in both λx.M and λ!x.M. As usual, we silently work modulo α-equivalence. Given a term of shape meas(P; M, N), we call M and N its branches. As usual, the set of free variables of a term M is denoted with FV(M). The set of register identifiers of a term M is denoted with R(M).
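For concreteness, the grammar transcribes into an OCaml datatype; the constructor names are ours, and Meas follows the branching reading given above:

```ocaml
(* Raw terms of the untyped quantum lambda-calculus (a sketch). *)
type term =
  | Var     of string               (* variable x                        *)
  | Reg     of int                  (* register r_n, n its identifier    *)
  | Lam     of string * term        (* linear abstraction \x.M           *)
  | BangLam of string * term        (* non-linear abstraction \!x.M      *)
  | App     of term * term          (* application M N                   *)
  | Bang    of term                 (* thunk !M                          *)
  | New                             (* allocation of a fresh qubit       *)
  | Meas    of term * term * term   (* meas(P; M, N), M and N branches   *)
  | Gate    of string * term list   (* built-in gate U(M1, ..., Mn)      *)
```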
Remark 3.1.
Without any constraints, one could write terms such as ⟨r₀, r₀⟩ or λx.⟨x, x⟩. Both are invalid: the former since a qubit cannot be duplicated, the latter since λ-abstractions are meant to be linear in Simpson’s calculus.
Terms Validity.
To deal with the problem in Remark 3.1, we need to introduce the notions of context and surface context, to speak of occurrences and surface occurrences of subterms.
A context is a term with a hole. We define general contexts C, where the hole can appear anywhere, and surface contexts S, where the hole does not occur in the scope of a ! operator, nor in the branches of a meas. They are generated by the grammars

$$C ::= \langle\cdot\rangle \mid \lambda x.C \mid \lambda!x.C \mid C\,M \mid M\,C \mid \,!C \mid \mathtt{meas}(C; M, N) \mid \mathtt{meas}(P; C, N) \mid \mathtt{meas}(P; M, C) \mid U(\dots, C, \dots) \qquad (\text{contexts})$$

$$S ::= \langle\cdot\rangle \mid \lambda x.S \mid \lambda!x.S \mid S\,M \mid M\,S \mid \mathtt{meas}(S; M, N) \mid U(\dots, S, \dots) \qquad (\text{surface contexts})$$
where ⟨·⟩ denotes the hole of the corresponding context. The notation C⟨M⟩ (or S⟨M⟩) stands for the term where the only occurrence of the hole in C (or S) is replaced with the term M, potentially capturing free variables of M.
Contexts and surface contexts allow us to formalize two notions of occurrence of a subterm M. The pair (C, M) (resp. (S, M)) is an occurrence (resp. surface occurrence) of M in N whenever N = C⟨M⟩ (resp. N = S⟨M⟩). By abuse of notation, we will simply speak of an occurrence of a subterm M in N, the context being implicit.
We can now define a notion of valid terms, agreeing with the quantum principle of no-cloning.
Definition 3.2 (Valid Terms, and Linearity).
A term M is valid whenever:
- no register occurs in M more than once, and every occurrence of a register is surface;
- for every subterm λx.N of M, N is linear in x, i.e. x occurs free exactly once in N and, moreover, this occurrence of x is surface.
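The register conditions of Definition 3.2 can be checked by a single traversal that tracks whether the current position is surface; a sketch over the datatype above, with the same caveats:

```ocaml
(* Every register occurs at most once, and only at surface position
   (outside every !-box and outside the branches of every meas).    *)
let registers_valid (t : term) : bool =
  let seen = Hashtbl.create 16 in
  let rec go surface = function
    | Reg i ->
        surface
        && not (Hashtbl.mem seen i)        (* no duplicated register *)
        && (Hashtbl.add seen i (); true)
    | Var _ | New -> true
    | Lam (_, m) | BangLam (_, m) -> go surface m
    | App (m, n) -> go surface m && go surface n
    | Bang m -> go false m                 (* under a !-box          *)
    | Meas (p, m, n) ->                    (* branches not surface   *)
        go surface p && go false m && go false n
    | Gate (_, args) -> List.for_all (go surface) args
  in
  go true t
```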
Remark 3.3.
The validity conditions for registers and linear variables do not allow us to put registers inside branches. So, for instance, a term such as meas(r₀; U(r₁), r₁) is invalid in our syntax. This function would measure r₀ and perform an action on r₁ based on the result. If one cannot write such a term directly with the constraints we have set on the language, one can however encode the corresponding behavior as follows: (λ!f. f r₁) meas(r₀; !(λy.U(y)), !(λy.y)). The action on the register r₁ is the function f, whose definition is based on the result of the measurement of r₀.
Quantum Operations.
Before diving into the definition of the notion of program, we briefly recall here the mathematical formalism for quantum computation [37].
The basic unit of information in quantum computation is a quantum bit or qubit. The state of a single qubit is a normalized vector of the 2-dimensional Hilbert space ℂ². We denote the standard basis of ℂ² as {|0⟩, |1⟩}, so that the general state of a single qubit can be written as α|0⟩ + β|1⟩, where |α|² + |β|² = 1.
The basic operations on quantum states are unitary operations and measurements. A unitary operation maps an n-qubit state to an n-qubit state, and is described by a unitary 2ⁿ×2ⁿ-matrix. We assume that the language provides a set of built-in unitary operations, including for example the Hadamard gate H and the controlled-not gate CNOT:

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \tag{2}$$
Measurement acts as a projection. When a qubit in state α|0⟩ + β|1⟩ is measured, the observed outcome is a classical bit: either 0 or 1, with probabilities |α|² and |β|², respectively. Moreover, the state of the qubit collapses to |0⟩ (resp. |1⟩) if 0 (resp. 1) was observed.
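For instance, for the qubit prepared by the fair-coin program of the Introduction:

$$H\lvert 0\rangle = \tfrac{1}{\sqrt{2}}(\lvert 0\rangle + \lvert 1\rangle), \qquad p(0) = \left|\tfrac{1}{\sqrt{2}}\right|^2 = \tfrac{1}{2} = p(1),$$

and the state collapses to |0⟩ or |1⟩ accordingly.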
Programs.
In order to associate quantum states to the registers occurring in lambda-terms, we introduce the notion of program, consisting of a quantum memory and a lambda-term. A program is closed under permutation of register identifiers.
The state of one qubit is a normalized vector in ℂ². The state of a quantum memory (also called qubits state in the remainder of the document) consisting of n qubits is a normalized vector q in (ℂ²)⊗ⁿ ≅ ℂ^(2ⁿ), the n-fold Kronecker product of ℂ². The size of q is written |q| = n. We identify a canonical basis element of ℂ^(2ⁿ) with a string of bits of size n, denoted with |b₀ … b_{n−1}⟩. A state q is therefore in general of the form q = Σ_{b ∈ {0,1}ⁿ} α_b |b⟩. If σ is a permutation of {0, …, n−1}, we define σ(q) as the state obtained by permuting the qubits accordingly: σ(q) = Σ_b α_b |b_{σ⁻¹(0)} … b_{σ⁻¹(n−1)}⟩.
A raw program is then a pair (q, M), where q is a qubits state and M is a valid term such that R(M) = {0, …, |q| − 1}. We call |q| the size of (q, M), and we denote it with |(q, M)|. If σ is a permutation over the set {0, …, |q| − 1}, the re-indexing of (q, M) is the pair (σ(q), σ(M)), where σ(M) is M with each occurrence of rᵢ replaced by r_{σ(i)}.
Definition 3.4 (Program).
A program is an equivalence class of raw programs under re-indexing. We identify programs with their representative elements. The set of all programs is denoted with 𝒫.
Example 3.5.
Two raw programs that differ only by a permutation of the register identifiers (with the qubits state permuted accordingly) are equal modulo re-indexing: for instance, (|01⟩, ⟨r₀, r₁⟩) and (|10⟩, ⟨r₁, r₀⟩). In both cases, the qubit in state |0⟩ is the first one in the pair, and the qubit in state |1⟩ the second one. Re-indexing is agnostic with respect to entanglement: the raw program ((1/√2)(|00⟩ + |11⟩), ⟨r₀, r₁⟩) is likewise a re-indexation of the raw program ((1/√2)(|00⟩ + |11⟩), ⟨r₁, r₀⟩); they are two representative elements of the same program.
4 Operational Semantics
The operational semantics of λ-calculus is usually formalized by means of a rewriting system. In the setting of λ-calculus, the rewriting rules are also known as reductions.
Rewriting System.
We recall that a rewriting system is a pair (A, →) consisting of a set A and a binary relation → on A, whose pairs are written a → b and called reduction steps. We denote with →* (resp. →⁼) the transitive-reflexive (resp. reflexive) closure of →. If →₁ and →₂ are binary relations on A, then →₁ · →₂ denotes their composition.
Probabilistic Rewriting.
In order to define the operational semantics of the quantum lambda calculus, we need to formalize probabilistic reduction. We do so by means of a rewrite system over multidistributions, adopting the monadic approach recently developed in the literature of probabilistic rewriting (see e.g. [6, 12, 19, 23]). Reduction is here defined not simply on programs, but on (monadic) structures representing probability distributions over programs, called multidistributions.
The operational semantics of the language is defined by
specifying the probabilistic behavior of programs, then lifting reduction
to multidistributions of programs. Let us recall the notions.
Probability Distributions. Given a countable set Ω, a function μ : Ω → [0, 1] is a probability subdistribution if ‖μ‖ := Σ_{ω∈Ω} μ(ω) ≤ 1 (a distribution if ‖μ‖ = 1). Subdistributions allow us to deal with partial results. We write 𝒟(Ω) for the set of subdistributions on Ω, equipped with the pointwise order on functions: μ ≤ ρ if μ(ω) ≤ ρ(ω) for all ω ∈ Ω. 𝒟(Ω) has a bottom element (the subdistribution 0) and maximal elements (all the distributions).
Multidistributions. We use multidistributions [6] to syntactically represent distributions. A multidistribution on the set 𝒫 of programs is a finite multiset of pairs of the form p·P, with p ∈ (0, 1] and P ∈ 𝒫, written m = [p₁P₁, …, pₙPₙ], such that Σᵢ pᵢ ≤ 1. The set of all multidistributions on 𝒫 is ℳ(𝒫). The sum of two multidistributions is noted m₁ + m₂, and is simply the union of the multisets. The product q·m of a scalar q and a multidistribution m is defined pointwise: q·[p₁P₁, …, pₙPₙ] = [(qp₁)P₁, …, (qpₙ)Pₙ]. We write [P] for the multidistribution [1·P].
4.1 The Rewrite System
(𝒫, ⇒) is the rewrite system where the relation ⇒ is monadically defined in two phases. First, we define one-step reductions from a program to a multidistribution: for example, if P is a program measuring a qubit in state (1/√2)(|0⟩ + |1⟩), then P reduces in one step to a multidistribution [½P₀, ½P₁] over the two possible outcomes. Then, we lift the definition of reduction to a binary relation ⇒ on ℳ(𝒫), in the natural way: reusing P above, [1·P] reduces in one step to [½P₀, ½P₁]. Let us see the details.
I. Programs Reduction.
We first define the reduction of a program to a multidistribution. The operational behavior of programs is given by beta reduction, denoted with →β, and specific rules for handling quantum operations – the corresponding reduction is denoted with →q.
Formally, the relations →β and →q (also called reduction steps) are subsets of 𝒫 × ℳ(𝒫), and are defined in Figure 2 by contextual closure of the root rules ↦β and ↦q given in Figure 1. The relation → is then the union → = →β ∪ →q.
The root rules. They are given in Figure 1. We call redex the term on the left-hand side of a rule. Beta rules come in two flavors: the linear one (b), which does not allow for duplication, and the non-linear one (b!), which possibly duplicates boxes (i.e., terms of shape !M). Quantum rules act on the qubits state, exactly implementing the operations which we informally described in Section 3. Notice that the measurement rule has a probabilistic behaviour. The qubit which has been measured can be discarded (as we actually do).
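A plausible rendering of the root rules of Figure 1 is the following sketch, where the exact bookkeeping of register identifiers is our assumption; we write q for the qubits state and M{N/x} for substitution:

$$(\lambda x.M)\,N \;\mapsto_{b}\; M\{N/x\} \qquad\qquad (\lambda!x.M)\,!N \;\mapsto_{b!}\; M\{N/x\}$$

$$(q,\ \mathtt{new}) \;\mapsto_{q}\; (q \otimes \lvert 0\rangle,\ r_{|q|}) \qquad\qquad (q,\ U(r_{i_1},\dots,r_{i_k})) \;\mapsto_{q}\; (U_{i_1\dots i_k}\,q,\ \langle r_{i_1},\dots,r_{i_k}\rangle)$$

$$(q,\ \mathtt{meas}(r_i; M, N)) \;\mapsto_{q}\; [\;p_0\cdot(q_0, M),\ p_1\cdot(q_1, N)\;]$$

where $U_{i_1\dots i_k}$ applies the gate to the indicated qubits, $p_0$ and $p_1$ are the probabilities of observing 0 and 1 on qubit $i$, and $q_0$, $q_1$ are the corresponding collapsed states with qubit $i$ discarded.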
Contextual Closures. They are defined in Figure 2. Observe that while the β rules are closed under arbitrary contexts C, the quantum rules are restricted to surface contexts S (no reduction in the scope of a ! operator, nor in the branches of a meas). This constraint guarantees that qubits are neither duplicated nor deleted.
II. Lifting.
The lifting of a relation → ⊆ 𝒫 × ℳ(𝒫) to a relation ⇒ on multidistributions is defined in Figure 3. In particular, →β, →q, → lift to ⇒β, ⇒q, ⇒, respectively.
[Figure 1: the root rules – β rules (b), (b!) and quantum rules. Figure 2: the reduction steps – β steps and quantum steps.]
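In this monadic style, the lifting has a compact functional reading. The sketch below, using the mdist type and scale from above, simplifies the one-step relation into a partial function step (the actual relation may be non-deterministic) and implements the strict variant:

```ocaml
(* Strict lifting: every program that can step does step; programs
   in normal form are kept with their weight.  The non-strict
   lifting of Figure 3 may also leave reducible programs untouched. *)
let strict_lift (step : 'a -> 'a mdist option) (m : 'a mdist) : 'a mdist =
  List.concat_map
    (fun (p, a) ->
       match step a with
       | Some d -> scale p d      (* distribute the weight p over d *)
       | None   -> [ (p, a) ])    (* a is in normal form            *)
    m
```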
Reduction Sequences.
A ⇒-sequence (or reduction sequence) from m is a sequence m = m₀, m₁, m₂, … such that mᵢ ⇒ mᵢ₊₁ for every i. We write m ⇒* m′ to indicate the existence of a finite reduction sequence from m to m′, and m ⇒ᵏ m′ to specify the number k of ⇒-steps. Given a program P, the sequence [1·P] = m₀ ⇒ m₁ ⇒ m₂ ⇒ ⋯ naturally models the evaluation of P; each mᵢ expresses the “expected” state of the system after i steps.
Validity.
Validity of terms is preserved:
Proposition 4.2.
If M is a valid term, and M reduces to M′, then M′ is a valid term.

Notice that the restriction of the →q rules to surface contexts is necessary to respect the linearity of quantum computation, avoiding duplication or deletion of qubits.
Examples.
Let us see the definitions at work in a few examples. We first formalize the recursive program from Equation 1. Recall that H is the Hadamard gate, and that H|0⟩ = (1/√2)(|0⟩ + |1⟩).
Example 4.3 (Flipping the quantum coin).
The program in Equation 1 can be written as P = (∅, M), where ∅ is just the empty memory and M encodes the recursive flip function of Equation 1. A reduction from P behaves as follows. The first step is a non-linear β reduction, unfolding the recursion. The reduction of new allocates a fresh qubit in the memory, corresponding to the register r₀. The redex H(r₀) applies the Hadamard gate to that qubit. The last reduction performs the measurement, yielding a probabilistic outcome.
Example 4.4 (Entangled pair).
Let P apply CNOT to the pair ⟨H new, new⟩ (where ⟨·, ·⟩ is sugar for an opportune encoding). This program produces an entangled pair of qubits (notice how CNOT is applied to a pair of registers) and then measures one of the qubits. Let us formalize its behaviour: the reduction allocates two fresh qubits, applies H and CNOT, and the final measurement yields each outcome with probability 1/2.
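The qubits-state computation behind this example is the standard Bell-pair preparation:

$$\mathrm{CNOT}\big((H\lvert 0\rangle)\otimes\lvert 0\rangle\big) \;=\; \mathrm{CNOT}\,\tfrac{1}{\sqrt{2}}(\lvert 00\rangle + \lvert 10\rangle) \;=\; \tfrac{1}{\sqrt{2}}(\lvert 00\rangle + \lvert 11\rangle),$$

so measuring either qubit yields 0 or 1 with probability 1/2, collapsing the other qubit to the same value.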
4.2 Surface Reduction and Surface Normal Forms
So far, we have defined a very liberal notion of reduction, in which β is unrestricted – it can validly be performed even inside a !-box. What shall we adopt as the evaluation strategy?
In the setting of calculi based on linear logic, such as Simpson’s calculus [43] and the Bang calculus [20], the natural candidate is surface reduction: the restriction →s of reduction to surface contexts plays a role akin to that of head reduction in classical λ-calculus, yielding similar factorization and normalization results, which relate →β and →s (as recalled in Table 1). The terms of interest are here surface normal forms (snf), such as !M (whatever the content of the box) or λx.x. They are the analog of values in Plotkin’s Call-by-Value λ-calculus and of head normal forms in classical λ-calculus – such an analogy can indeed be made precise [20, 31, 10], as a consequence of Girard’s translations of the Call-by-Name and Call-by-Value λ-calculi into Linear Logic.
In our setting, surface reduction and surface normal forms (snf) also play a privileged role.
Surface Reduction.
Surface steps →s (Figure 5) are the union of the quantum steps →q together with →sβ, i.e. the closure under surface contexts S of the β rules. A program is a surface normal form (snf) if no surface reduction is possible from it.
A β-step which is not surface is noted →¬s. The liftings of →s and →¬s to relations on multidistributions are denoted ⇒s and ⇒¬s, respectively.
Remark 4.5.
Notice that →¬s steps do not act on the qubits state, since they are β steps.
Strict Lifting.
To guarantee normalization results (Section 7), we will need a stricter form of lifting, noted ⇛ (Figure 4), forcing a reduction step to be performed in each program of the multidistribution in which a redex exists. Clearly ⇛ ⊆ ⇒.
Example 4.6.
We will prove that the strict lifting ⇛s guarantees to reach snf, whenever snf can be reached. This is obviously not the case for ⇒s-sequences: the non-strict lifting allows a reducible program to be kept unchanged, so a ⇒s-sequence may simply never fire an available redex.
On the Interest of Surface Normal Forms.
What is the result of running a quantum program? In general, since computation is probabilistic, the result of executing a program will be a distribution over some outcomes of interest. A natural choice are programs of shape (q, S), with the term S in surface normal form, ensuring that at this point the qubits state is a stable piece of information (it will not further evolve in the computation). Indeed, a program whose term is in snf will no longer modify the qubits state.
Remark 4.7.
Notice instead that a program from which no quantum step is possible is not necessarily done manipulating the quantum memory: further β reductions may unblock further quantum steps. Think of the term (λ!x.CNOT⟨H x, x⟩) !new from Example 2.1, where no quantum step is available until the β redex is fired.
4.3 Sum-up Tables
Let us conclude the section by summarizing the reduction relations at play.
Relations.

| | Definition | Lifted to ℳ(𝒫) | Strict lifting |
|---|---|---|---|
| →β | contextual closure of the β-rules | ⇒β | ⇛β |
| →sβ | closure by surface context of the β-rules | ⇒sβ | ⇛sβ |
| →q | closure by surface context of the q-rules | ⇒q | ⇛q |

Moreover, → = →β ∪ →q and →s = →sβ ∪ →q, with liftings ⇒, ⇒s and strict liftings ⇛, ⇛s.
Reduction Sequences.
| Finite reduction sequences | |
|---|---|
| m ⇒* m′ | there is a ⇒-sequence from m to m′ |
| m ⇒s* m′ | there is a ⇒s-sequence from m to m′ |
| m ⇛s* m′ | there is a ⇛s-sequence from m to m′ |
5 Rewriting Theory for the Quantum λ-Calculus: Overview of the Results
We are now going to study reduction on multidistributions of programs, namely the general reduction ⇒ (corresponding to the lifting of →) and surface reduction ⇒s (corresponding to the lifting of →s), and the relation between the two. Let us discuss each point.
1. The reduction ⇒ allows for unrestricted β reduction. For example, we can rewrite in the scope of a Bang operator (perhaps to optimize the thunked code before copying it several times). We prove that ⇒ is confluent, providing a general framework for rewriting theory. This (very liberal) reduction has a foundational role: it is the setting in which to study the equational theory of the calculus and to analyze program transformations.
2. Surface reduction plays the role of an evaluation strategy, in which however the scheduling (how redexes should be fired) is not fully specified. (This is not only convenient, as it allows for parallel implementation, but it is necessary for standardization [26].) For example, a term may have two surface redexes, enabling two different steps. We will prove (by proving a diamond property) that surface reduction ⇒s is “essentially deterministic”, in the sense that while the choice of the redex to fire is non-deterministic, the order in which such choices are performed is irrelevant to the final result.
3. The two reductions are related by a standardization result (Theorem 6.7), stating that if m ⇒* m′ then m ⇒s* · ⇒¬s* m′. Standardization is the base of normalization results, concerning properties such as “the program terminates with probability p”.
4. We prove that ⇛s is a normalizing strategy for ⇒, namely if m may converge to surface normal form with probability p using the general reduction ⇒, then ⇛s reduction must converge to surface normal form with probability p. Informally: ⇒ converges to snf iff ⇛s converges to snf (corresponding to the last line in Table 1). To formalize and prove such a claim we will need more tools, because probabilistic termination is asymptotic, i.e. it appears as a limit of a possibly infinite reduction. We treat this in Section 7, where we rely on techniques from [2, 23, 24].
6 Confluence and Finitary Standardization
We first recall standard notions which we are going to use.
Confluence, Commutation, and all That (a quick recap).
The relation → is confluent if ←* · →* ⊆ →* · ←*. A stricter form is the diamond property, ← · → ⊆ → · ←, which is well known to imply confluence. Two relations →₁ and →₂ on A commute if ←₁* · →₂* ⊆ →₂* · ←₁*. Confluence and factorization are both commutation properties: a relation is confluent if it commutes with itself. An element a ∈ A is a →-normal form if there is no b such that a → b (written a ↛).
On normalization.
In general, a term may or may not reduce to a normal form, and if it does, not all reduction sequences necessarily lead to a normal form. How do we compute a normal form? This is the problem tackled by normalization: by repeatedly performing only specific steps, a normal form will be computed, provided the term can reduce to one. Intuitively, a normalizing strategy for → is a reduction strategy which, given a term t, is guaranteed to reach a normal form of t, if any exists.
6.1 Surface Reduction has the Diamond Property
In this section, we first prove that surface reduction (in both liftings, ⇒s and ⇛s) has the diamond property

$$\Leftarrow_s \cdot \Rightarrow_s \;\subseteq\; \Rightarrow_s \cdot \Leftarrow_s \tag{Diamond}$$

then we show that ⇒ is confluent.
Here we adapt techniques used in probabilistic rewriting [6, 22, 26]. Proving the diamond property is however significantly harder than in the case of probabilistic λ-calculi, because we also need to take into account the qubits state and the corresponding registers. If a program has two different reductions, we need to join in one step not only the terms, but also their qubits states, working up to re-indexing of the registers (recall that programs are equivalence classes modulo re-indexing, see also Example 3.5). The following is an example, just using the simple construct new. Measurement makes the situation even more delicate.
Example 6.1.
Consider a program with two distinct new-redexes. The following are two different reduction sequences from it, allocating the two fresh qubits in different orders. The two normal forms are the same program (Definition 3.4): the resulting raw programs differ only by a re-indexing of the registers.
The key result is the following version of diamond (commutation). The proof – quite technical – is given in the extended version [25]. Recall that →s = →sβ ∪ →q.
Lemma 6.2 (Pointed Diamond).
Assume P ∈ 𝒫 and that P has two distinct redexes, such that P →s m₁ and P →s m₂. Then there exists m such that m₁ ⇒s m and m₂ ⇒s m. Moreover, no term in m₁ and no term in m₂ is in snf.
From the above result we obtain the diamond property.
Proposition 6.3 (Diamond).
Surface reductions ⇒s and ⇛s have the diamond property.
In its stricter form, the diamond property guarantees that the non-determinism in the choice of the redex is irrelevant – hence the reduction is essentially deterministic. The technical name for this property is Newman’s random descent [36]: no matter the choice of the redex, all reduction sequences behave the same way, i.e. have the same length, and if terminating, they do so in the same normal form. Formalized by Theorem 7.3, we use this fact to establish that ⇛s is a normalizing strategy for ⇒.
6.2 Confluence of ⇒
We modularize the proof of confluence by using a classical technique, the Hindley-Rosen lemma, stating that if →₁ and →₂ are binary relations on the same set A, then their union →₁ ∪ →₂ is confluent if both →₁ and →₂ are confluent, and →₁ and →₂ commute.
Theorem 6.4.
The reduction ⇒ satisfies confluence.
Proof.
The proof that ⇒ = ⇒β ∪ ⇒q is confluent is easily obtained from Lemma 6.2, by using the Hindley-Rosen lemma. We already have most of the elements: ⇒β is confluent because →β is; ⇒q is confluent because it is diamond (Proposition 6.3); ⇒β and ⇒q commute: by Lemma 6.2, we already know that ⇒sβ and ⇒q commute, hence we only need to verify that ⇒¬s and ⇒q commute, which is easily done.
6.3 Surface Standardization
We show that any sequence m ⇒* m′ can be factorized as m ⇒s* · ⇒¬s* m′ (Theorem 6.7). Standardization is proved via the modular technique proposed in [1], which in our notation can be stated as follows:
Lemma 6.5 (Modular Factorization [1]).
⇒* ⊆ ⇒s* · ⇒¬s* if the following conditions hold:
1. ⇒β* ⊆ ⇒sβ* · ⇒¬s* (surface factorization of the β steps), and
2. ⇒¬s · ⇒q ⊆ ⇒q · ⇒¬s* (the quantum steps swap with the non-surface steps).
Condition 1. in Lemma 6.5 is an immediate consequence of Simpson’s surface standardization for his calculus [43], stating that →β* ⊆ →sβ* · →¬s*. Condition 2. in Lemma 6.5 is obtained from the following pointed version:
Lemma 6.6.
M →¬s N and N →q m implies M →q · ⇒¬s m.
Proof.
By induction on the surface context S such that N = S⟨R⟩, where R is the fired quantum redex. We exploit in an essential way the fact that M and N have the same shape. By Lemmas 6.5 and 6.6, we obtain the main result of this section:
Theorem 6.7 (Surface Standardization). If m ⇒* m′, then m ⇒s* · ⇒¬s* m′.
Remark 6.8 (Strict vs non-strict).
Please observe that standardization is stated in terms of the non-strict lifting ⇒s of →s, as ⇛s could reduce more than what is desired. Dually, normalization holds in terms of the strict lifting ⇛s, for the reasons already discussed in Example 4.6.
A Reading of Surface Standardization.
A program in snf will no longer modify the qubits state: intuitively, it has already produced the maximal amount of quantum data that it could possibly do. We can read Surface Standardization as follows. Assume m ⇒* m′, where all the terms in m′ are in snf. Standardization guarantees that surface steps suffice to reach a multidistribution m″ whose programs have the exact same information content as m′:

m ⇒* m′ implies m ⇒s* m″ ⇒¬s* m′.

This is because Theorem 6.7 implies m ⇒s* · ⇒¬s* m′, and from m″ ⇒¬s* m′ we deduce that each element p·(q, S) of m′ must come from an element p·(q, S′) of m″ where S′ is in snf and where the qubits state q (and the associated probability p) are exactly the same.
7 Probabilistic Termination and Asymptotic Normalization
What does it mean for a program to reach surface normal form (snf)? Since measurement makes the reduction probabilistic, we need to give a quantitative answer.
Probabilistic Termination.
The probability that the system described by the multidistribution m is in surface normal form is expressed by a scalar, which we denote p(m), defined as follows: if m = [p₁P₁, …, pₙPₙ], then p(m) = Σ{pᵢ | Pᵢ is in snf}. Let m₀ ⇒ m₁ ⇒ m₂ ⇒ ⋯ be a reduction sequence: p(mₙ) expresses the probability that after n steps the system is in snf. The probability that m₀ reaches snf along the (possibly infinite) reduction sequence is then easily defined as a limit: lim_{n→∞} p(mₙ). We also say that the sequence converges with probability lim_{n→∞} p(mₙ).
Example 7.1 (Recursive coin, cont.).
Consider again Example 4.3. After 4 steps, the program terminates with probability 1/2. After 4 more steps, it terminates with probability 1/2 + 1/4, and so on. At the limit, the reduction sequence converges with probability 1.
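In symbols, writing p(m) as above and counting four steps per round of the coin:

$$p(m_{4n}) \;=\; \sum_{i=1}^{n} 2^{-i} \;=\; 1 - 2^{-n} \;\longrightarrow\; 1 \quad (n \to \infty).$$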
7.1 Accounting for Several Possible Reduction Sequences
Since the reduction of multidistributions is not deterministic, given a multidistribution m, there are several possible reduction sequences from m, and therefore several outcomes (limits) are possible. Following [23], we adopt the following terminology:
Definition 7.2 (Limits).
Given m ∈ ℳ(𝒫), we write:
- m ⇓ p if there exists a ⇒-sequence from m whose limit is p;
- Lim(m) = {p | m ⇓ p} is the set of limits of m;
- p_max(m) denotes the greatest element of Lim(m), if any exists.
Intuitively, p_max(m) is the best result that any ⇒-sequence from m can effectively produce. If the set Lim(m) has a sup but not a greatest element (think of the open interval (0, 1)), it means that in fact no reduction sequence produces the sup as a limit. Notice also that, when a reduction is deterministic, from any m there is only one maximal reduction sequence, and so the set of limits is a singleton, whose element is the greatest. Below we exploit the interplay between different rewriting relations and their limits; it is useful to summarize our notations in Table 2.
Table 2: Convergence (Definition 7.2).

| | |
|---|---|
| m ⇓ p | there is a ⇒-sequence from m which converges with probability p |
| m ⇓s p | there is a ⇒s-sequence from m which converges with probability p |
| m ⇓⇛ p | there is a ⇛s-sequence from m which converges with probability p |
7.2 Asymptotic Normalization
Given a quantum program P, does p_max([P]) exist? If this is the case, can we define a normalizing strategy which is guaranteed to converge to it? The answer is positive. The main result of this section is that such a normalizing strategy does exist, and it is ⇛s. More precisely, we show that any ⇛s-reduction sequence from m converges to the same limit, which is exactly p_max(m). We establish the following results, for an arbitrary m. Theorem 7.3 is a direct – and the most important – consequence of the diamond property of ⇛s. The proof uses both point 1. and point 2. of Lemma 6.2. For Theorem 7.4, the proof relies on an abstract technique from [24].
Theorem 7.3 (Random Descent).
All ⇛s-sequences from m converge to the same limit.
Theorem 7.4 (Asymptotic completeness).
m ⇓ p implies m ⇓⇛ p′, with p′ ≥ p.
Theorem 7.4 states that, for each m, if ⇒ reduction may converge to snf with probability p, then ⇛s reduction must converge to snf with probability (at least) p. Theorem 7.3 states that, for each m, the limit of strict surface reduction (⇛s) from m is unique.
Summing up, the limit p of ⇛s reduction is the best convergence result that any ⇒-sequence from m can produce. Since ⇛s ⊆ ⇒, p is also the greatest element in Lim(m), i.e. p = p_max(m). We have hence proved the following, where item (2.) is the asymptotic analogue of the normalization results in Table 1.
Theorem 7.5 (Asymptotic normalization).
For each m, (1.) the set Lim(m) has a greatest element p_max(m), and (2.) m ⇓⇛ p_max(m).
8 Related Work and Discussion
In this paper, we propose a foundational notion of (untyped) quantum λ-calculus with a general reduction, encompassing the full strength of β-reduction while staying compatible with quantum constraints. We then introduce an evaluation strategy, and derive standardization and confluence results. We finally discuss normalization of programs at the limit.
Related Works.
For quantum λ-calculi without measurement, hence without probabilistic behavior, confluence [13, 5] (and even a special form of standardization [13]) has been studied since early work. When dealing with measurement, the analysis is far more challenging. To our knowledge, only confluence has been studied, in pioneering work by Dal Lago, Masini and Zorzi [14]. Remarkably, in order to deal with probabilistic and asymptotic behavior – well before the advances in probabilistic rewriting from which we profit – the authors introduce a very elaborate technique. Notice that in [14] reduction is non-deterministic, but restricted to surface reduction. In our paper, their result roughly corresponds to the diamond property of ⇒s, together with Theorem 7.3.
No “standard” standardization results (like the classical ones we recall in Table 1) exist in the literature for the quantum setting. Notice that the form of standardization in [13] is a reordering of the (surface) measurement-free reduction steps, so as to perform first beta steps, then quantum steps, in agreement with the idea that a quantum computer consists of a classical device “setting up” a quantum circuit, which is then fed with an input. A similar refinement is also possible for the corresponding fragment of our calculus (namely, the measurement-free one), but it clearly does not scale: think of a term where the argument of a function is guarded by a measurement.
Our term language is close to [14]. How does such a calculus relate to a Call-by-Value λ-calculus such as that of [42]? A first level of answer is that our setting is an untyped λ-calculus; linear abstraction, together with well-forming rules, allows for the management of quantum data. In [42], the same role is fulfilled by the (Linear Logic based) typing system.
Despite these differences, we do expect that our results can be transferred. As already mentioned, the redex (λ!x.M)!N reflects a Call-by-Push-Value mechanism, which in untyped form has been extensively studied in the literature under the name of Bang calculus [20, 31, 10], as a uniform framework to encode both Call-by-Name (CbN) and Call-by-Value (CbV). Semantical but also syntactical properties, including confluence [20, 31] and standardization [24, 3], have been analyzed in the Bang setting, and then transferred via reverse simulation to both CbV and CbN. More precisely, a CbV (resp. CbN) translation maps forth-and-back weak (resp. head) reduction into surface reduction. Surface normal forms are the CbV image of values (and the CbN image of head normal forms). Since the Bang calculus is exactly the fragment of Simpson’s calculus [43] without linear abstraction, one may reasonably expect that our calculus can play a similar role in the quantum setting. It seems however that a back-and-forth translation of CbV (or CbN) will need to encompass types.
References
- [1] Beniamino Accattoli, Claudia Faggian, and Giulio Guerrieri. Factorize factorization. In 29th EACSL Annual Conference on Computer Science Logic, CSL 2021, January 25-28, 2021, Ljubljana, Slovenia (Virtual Conference), volume 183 of LIPIcs, pages 6:1–6:25. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.CSL.2021.6.
- [2] Zena M. Ariola and Stefan Blom. Skew confluence and the lambda calculus with letrec. Annals of Pure and Applied Logic, 117(1):95–168, 2002. doi:10.1016/S0168-0072(01)00104-X.
- [3] Victor Arrial, Giulio Guerrieri, and Delia Kesner. The benefits of diligence. In International Joint Conference on Automated Reasoning, IJCAR 2024, 2024.
- [4] Pablo Arrighi, Alejandro Díaz-Caro, and Benoît Valiron. The vectorial lambda-calculus. Information and Computation, 254:105–139, 2017. doi:10.1016/j.ic.2017.04.001.
- [5] Pablo Arrighi and Gilles Dowek. Lineal: A linear-algebraic lambda-calculus. Logical Methods in Computer Science, 13(1), 2017. doi:10.23638/LMCS-13(1:8)2017.
- [6] Martin Avanzini, Ugo Dal Lago, and Akihisa Yamada. On probabilistic term rewriting. Sci. Comput. Program., 185, 2020. doi:10.1016/J.SCICO.2019.102338.
- [7] Hendrik Pieter Barendregt. The Lambda Calculus – Its Syntax and Semantics, volume 103 of Studies in logic and the foundations of mathematics. North-Holland, 1984.
- [8] Benjamin Bichsel, Maximilian Baader, Timon Gehr, and Martin T. Vechev. Silq: a high-level quantum language with safe uncomputation and intuitive semantics. In Alastair F. Donaldson and Emina Torlak, editors, Proceedings of the 41st ACM SIGPLAN International Conference on Programming Language Design and Implementation, PLDI’20, pages 286–300. ACM, 2020. doi:10.1145/3385412.3386007.
- [9] Olivier Bournez and Florent Garnier. Proving positive almost sure termination under strategies. In Rewriting Techniques and Applications, RTA, pages 357–371, 2006. doi:10.1007/11805618_27.
- [10] Antonio Bucciarelli, Delia Kesner, Alejandro Ríos, and Andrés Viso. The bang calculus revisited. Inf. Comput., 293:105047, 2023. doi:10.1016/J.IC.2023.105047.
- [11] Christophe Chareton, Sébastien Bardin, François Bobot, Valentin Perrelle, and Benoît Valiron. An automated deductive verification framework for circuit-building quantum programs. In Nobuko Yoshida, editor, Proceedings of the 30th European Symposium on Programming Languages and Systems, ESOP 2021, volume 12648 of Lecture Notes in Computer Science, pages 148–177. Springer, 2021. doi:10.1007/978-3-030-72019-3_6.
- [12] Ugo Dal Lago, Claudia Faggian, Benoît Valiron, and Akira Yoshimizu. The geometry of parallelism: classical, probabilistic, and quantum effects. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, pages 833–845. ACM, 2017. doi:10.1145/3009837.
- [13] Ugo Dal Lago, Andrea Masini, and Margherita Zorzi. On a measurement-free quantum lambda calculus with classical control. Math. Struct. Comput. Sci., 19(2):297–335, 2009. doi:10.1017/S096012950800741X.
- [14] Ugo Dal Lago, Andrea Masini, and Margherita Zorzi. Confluence results for a quantum lambda calculus with measurements. Electr. Notes Theor. Comput. Sci., 270(2):251–261, 2011. doi:10.1016/j.entcs.2011.01.035.
- [15] Ugo Dal Lago and Margherita Zorzi. Probabilistic operational semantics for the lambda calculus. RAIRO Theor. Informatics Appl., 46(3):413–450, 2012. doi:10.1051/ita/2012012.
- [16] Ugo de’Liguoro and Adolfo Piperno. Non deterministic extensions of untyped lambda-calculus. Inf. Comput., 122(2):149–177, 1995. doi:10.1006/INCO.1995.1145.
- [17] Alessandra Di Pierro, Chris Hankin, and Herbert Wiklicky. Probabilistic lambda-calculus and quantitative program analysis. J. Log. Comput., 15(2):159–179, 2005. doi:10.1093/LOGCOM/EXI008.
- [18] Alejandro Díaz-Caro, Mauricio Guillermo, Alexandre Miquel, and Benoît Valiron. Realizability in the unitary sphere. In Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS’19, pages 1–13. IEEE, 2019. doi:10.1109/LICS.2019.8785834.
- [19] Alejandro Díaz-Caro and Guido Martinez. Confluence in probabilistic rewriting. Electr. Notes Theor. Comput. Sci., 338:115–131, 2018.
- [20] Thomas Ehrhard and Giulio Guerrieri. The bang calculus: an untyped lambda-calculus generalizing call-by-name and call-by-value. In Proceedings of the 18th International Symposium on Principles and Practice of Declarative Programming (PPDP 2016), pages 174–187. ACM, 2016. doi:10.1145/2967973.2968608.
- [21] Thomas Ehrhard, Michele Pagani, and Christine Tasson. The computational meaning of probabilistic coherence spaces. In Proceedings of the 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011, June 21-24, 2011, Toronto, Ontario, Canada, pages 87–96. IEEE Computer Society, 2011. doi:10.1109/LICS.2011.29.
- [22] Claudia Faggian. Probabilistic rewriting: Normalization, termination, and unique normal forms. In Herman Geuvers, editor, 4th International Conference on Formal Structures for Computation and Deduction, FSCD 2019, June 24-30, 2019, Dortmund, Germany, volume 131 of LIPIcs, pages 19:1–19:25. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.FSCD.2019.19.
- [23] Claudia Faggian. Probabilistic rewriting and asymptotic behaviour: on termination and unique normal forms. Log. Methods Comput. Sci., 18(2), 2022. doi:10.46298/LMCS-18(2:5)2022.
- [24] Claudia Faggian and Giulio Guerrieri. Factorization in call-by-name and call-by-value calculi via linear logic. In Foundations of Software Science and Computation Structures - 24th International Conference, FOSSACS 2021, volume 12650 of Lecture Notes in Computer Science, pages 205–225. Springer, 2021. doi:10.1007/978-3-030-71995-1_11.
- [25] Claudia Faggian, Gaetan Lopez, and Benoît Valiron. A rewriting theory for quantum -calculus. CoRR, abs/2411.14856, 2024. arXiv:2411.14856.
- [26] Claudia Faggian and Simona Ronchi Della Rocca. Lambda calculus and probabilistic computation. In 34th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2019, Vancouver, BC, Canada, June 24-27, 2019, pages 1–13. IEEE, 2019. doi:10.1109/LICS.2019.8785699.
- [27] Francesco Gavazzo and Claudia Faggian. A relational theory of monadic rewriting systems, part I. In 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021, Rome, Italy, June 29 - July 2, 2021, pages 1–14. IEEE, 2021. doi:10.1109/LICS52264.2021.9470633.
- [28] Simon J. Gay. Quantum programming languages: survey and bibliography. Mathematical Structures in Computer Science, 16(4):581–600, 2006. doi:10.1017/S0960129506005378.
- [29] Jean-Yves Girard. Linear logic. Theor. Comput. Sci., 50:1–102, 1987. doi:10.1016/0304-3975(87)90045-4.
- [30] Alexander S. Green, Peter LeFanu Lumsdaine, Neil J. Ross, Peter Selinger, and Benoît Valiron. Quipper: A scalable quantum programming language. In Hans-Juergen Boehm and Cormac Flanagan, editors, Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI’13, pages 333–342. ACM, 2013. doi:10.1145/2491956.2462177.
- [31] Giulio Guerrieri and Giulio Manzonetto. The bang calculus and the two Girard’s translations. In Proceedings Joint International Workshop on Linearity & Trends in Linear Logic and Applications (Linearity-TLLA 2018), volume 292 of EPTCS, pages 15–30, 2019. doi:10.4204/EPTCS.292.2.
- [32] Jan-Christoph Kassing, Florian Frohn, and Jürgen Giesl. From innermost to full almost-sure termination of probabilistic term rewriting. In Naoki Kobayashi and James Worrell, editors, Foundations of Software Science and Computation Structures - 27th International Conference, FoSSaCS 2024, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2024, Luxembourg City, Luxembourg, April 6-11, 2024, Proceedings, Part II, volume 14575 of Lecture Notes in Computer Science, pages 206–228. Springer, 2024. doi:10.1007/978-3-031-57231-9_10.
- [33] Maja H. Kirkeby and Henning Christiansen. Confluence and convergence in probabilistically terminating reduction systems. In Logic-Based Program Synthesis and Transformation - 27th International Symposium, LOPSTR 2017, pages 164–179, 2017. doi:10.1007/978-3-319-94460-9_10.
- [34] Emanuel H. Knill. Conventions for quantum pseudocode. Technical Report LAUR-96-2724, Los Alamos National Laboratory, Los Alamos, New Mexico, US., 1996.
- [35] Dongho Lee, Valentin Perrelle, Benoît Valiron, and Zhaowei Xu. Concrete categorical model of a quantum circuit description language with measurement. In Mikolaj Bojanczyk and Chandra Chekuri, editors, Proceedings of the 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2021, volume 213 of LIPIcs, pages 51:1–51:20, 2021. doi:10.4230/LIPIcs.FSTTCS.2021.51.
- [36] M.H.A. Newman. On theories with a combinatorial definition of equivalence. Annals of Mathematics, 43(2), 1942.
- [37] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2002.
- [38] Michele Pagani, Peter Selinger, and Benoît Valiron. Applying quantitative semantics to higher-order quantum computing. In Suresh Jagannathan and Peter Sewell, editors, Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’14), pages 647–658. ACM, 2014. doi:10.1145/2535838.2535879.
- [39] Luca Paolini, Mauro Piccolo, and Margherita Zorzi. qPCF: higher-order languages and quantum circuits. Journal of Automated Reasoning, 63(4):941–966, 2019. doi:10.1007/s10817-019-09518-y.
- [40] Jennifer Paykin, Robert Rand, and Steve Zdancewic. QWIRE: a core language for quantum circuits. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL’17, pages 846–858. ACM, 2017. doi:10.1145/3009837.3009894.
- [41] Gordon D. Plotkin. Call-by-name, call-by-value and the lambda-calculus. Theor. Comput. Sci., 1(2):125–159, 1975. doi:10.1016/0304-3975(75)90017-1.
- [42] Peter Selinger and Benoît Valiron. A lambda calculus for quantum computation with classical control. Mathematical Structures in Computer Science, 16:527–552, 2006. doi:10.1017/S0960129506005238.
- [43] Alex K. Simpson. Reduction in a linear lambda-calculus with applications to operational semantics. In Term Rewriting and Applications, 16th International Conference (RTA 2005), volume 3467 of Lecture Notes in Computer Science, pages 219–234, 2005. doi:10.1007/978-3-540-32033-3_17.
Appendix A Convention for Garbage Collection
In the definition of programs, we use the convention that the size of the memory is exactly the number of registers manipulated by the term. The memory grows when new qubits are allocated, and shrinks when qubits are read (see Figure 1): the reduction performs garbage collection on the fly.
If this makes it easy to identify identical programs, it makes the proofs a bit cumbersome. We therefore rely, in the proofs, on an equivalent representation where a program can have spurious qubits, as long as they are not entangled with the rest of the memory – i.e., as long as measuring them would not change the state of the registers manipulated by the term. So, for instance, in this model a program whose memory contains an extra, unentangled qubit is the same as the program where that qubit has been discarded.
Appendix B Technical properties
In all proofs we freely use the following closure property, which is immediate by definition of context and surface context.
Fact B.1 (Closure).
1. (Surface closure) If M →sβ m, then S⟨M⟩ →sβ S⟨m⟩, for every surface context S.
2. If M →β m, then C⟨M⟩ →β C⟨m⟩, for every context C.
Surface closure (point 1.) also holds with →q in place of →sβ.
We will also use the following lemma (analogous to substitutivity in [7], p. 54). The proof is straightforward.

Lemma B.2 (Substitutivity).
Assume M →β m. Then, for each term N and variable x: M{N/x} →β m{N/x}.
The converse also holds; it is simply Fact B.1, which can be reformulated as follows.

Fact B.3.
Assume M →β m, and let N be a term that is linear in x. Then N{M/x} →β N{m/x}.
Surface Reduction.
Surface reduction →s has a prominent role. We spell out its definition.
[Table: surface reduction steps – surface beta steps (closure of the β rules under surface contexts S) and (surface) q-steps.]
Appendix C Surface Reduction has the Diamond Property
We obtain the diamond property (Proposition 6.3) from the pointed diamond result, using the following technique (from [26]), which allows us to work pointwise.
Lemma (Pointwise Criterion [26]).
Let →₁, →₂ be relations from programs to multidistributions, and ⇒₁, ⇒₂ their liftings. To prove that ⇒₁, ⇒₂ diamond-commute, i.e.

if m₁ ⇐₁ m ⇒₂ m₂ then there exists m′ s.t. m₁ ⇒₂ m′ and m₂ ⇒₁ m′,

it suffices to prove the property (#) below, stated in terms of a single program P:

(#) if m₁ ⇐₁ P ⇒₂ m₂ then there exists m′ s.t. m₁ ⇒₂ m′ and m₂ ⇒₁ m′.

The same result holds with ⇛ in place of ⇒.
The criterion, together with Lemma 6.2 (point 1.), yields:

Proposition (6.3).
Surface reduction ⇒s has the diamond property. The same holds for ⇛s.
Appendix D Finitary Standardization
Shape Preservation.
We recall a basic but key property of contextual closure. If a step is obtained by closure under a non-empty context of a rule, then it preserves the shape of the term. We say that M and N have the same shape if both terms belong to the same production of the grammar (i.e., both terms are an application, an abstraction, a variable, a register, a term of shape !M, a meas, etc.).
Fact D.1 (Shape preservation).
Assume M = C⟨R⟩ and N = C⟨R′⟩, where R ↦ R′ is a root step, and that the context C is non-empty. Then M and N have the same shape.
An easy-to-verify consequence is the following, stating that non-surface steps (→¬s):
- do not change the quantum memory;
- do not change the shape of the terms.
Notice that the qubits state is unchanged by →¬s steps, since a non-surface step can only be a β step.
Lemma D.2 (Redexes and normal forms preservation).
Assume M →¬s N.
1. M is a redex iff N is a redex. In this case, either both are β-redexes, or both are q-redexes.
2. M is →s-normal if and only if N is →s-normal.
Proof of Lemma 6.6.
Lemma (Lemma 6.6).
M →¬s N and N →q m implies M →q · ⇒¬s m.

Proof.
By induction on the surface context S such that N = S⟨R⟩, where R is the fired quantum redex. We exploit in an essential way the fact that M and N have the same shape (Fact D.1).
Appendix E Asymptotic normalization
Proof Sketch.
The proof of Theorem 7.4 relies on an abstract result from the literature [24], which we reformulate here in our setting:

Lemma E.1 (Asymptotic completeness criterion [24]).
Assume:
- i. s-factorization: if m ⇒* m′, then m ⇒s* · ⇒¬s* m′;
- ii. ¬s-neutrality: m ⇒¬s m′ implies p(m) = p(m′).
Then: m ⇓ p implies m ⇓⇛ p′ with p′ ≥ p.