
A Rewriting Theory for Quantum λ-Calculus

Claudia Faggian (IRIF, CNRS, Université Paris Cité, France), Gaetan Lopez (IRIF, CNRS, Université Paris Cité, France), Benoît Valiron (Université Paris-Saclay, CNRS, CentraleSupélec, ENS Paris-Saclay, Inria, Laboratoire Méthodes Formelles, 91190, Gif-sur-Yvette, France)
Abstract

Quantum lambda calculus has been studied mainly as an idealized programming language – the evaluation essentially corresponds to a deterministic abstract machine. Very little work has been done to develop a rewriting theory for quantum lambda calculus. Recent advances in the theory of probabilistic rewriting give us a way to tackle this task with tools unavailable a decade ago. Our primary focus is on standardization and normalization results.

Keywords and phrases:
quantum lambda-calculus, probabilistic rewriting, operational semantics, asymptotic normalization, principles of quantum programming languages
Copyright and License:
© Claudia Faggian, Gaetan Lopez, and Benoît Valiron; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Quantum computation theory; Theory of computation → Operational semantics; Theory of computation → Equational logic and rewriting; Theory of computation → Linear logic
Related Version:
Extended Version: http://arxiv.org/abs/2411.14856 [25]
Funding:
This work has been partially funded by the French National Research Agency (ANR) by the projects PPS ANR-19-CE48-0014, TaQC ANR-22-CE47-0012 and within the framework of “Plan France 2030”, under the research projects EPIQ ANR-22-PETQ-0007, OQULUS ANR-23-PETQ-0013, HQI-Acquisition ANR-22-PNCQ-0001 and HQI-R&D ANR-22-PNCQ-0002.
Editors:
Jörg Endrullis and Sylvain Schmitz

1 Introduction

Quantum computation is a model of computation in which one has access to data coded on the state of objects governed by the laws of quantum physics. Due to the unique nature of quantum mechanics, quantum data exhibits several non-intuitive properties [37]: it is non-duplicable, it can exist in superposition, and reading the memory exhibits a probabilistic behavior. Nonetheless, the mathematical formalization is well-established: the state of a quantum memory and the available manipulations thereof can be expressed within the theory of Hilbert spaces.

Knill’s QRAM model [34] describes a generic interface for interacting with such a quantum memory. The memory is modeled with uniquely identified quantum registers, each holding one quantum bit – also called a qubit. The interface should make it possible to create and discard registers and apply elementary operations on arbitrary registers. These operations consist of unitary gates and measurements. The former are internal, local modifications of the memory state, represented by a quantum circuit, while the latter are the operations for reading the memory. Measurements are probabilistic operations returning a classical bit of information.

Quantum algorithms are typically designed with a model akin to Knill’s QRAM [37]. A quantum algorithm consists of the description of a sequence of quantum operations, measurements, and classical processing. The control flow is purely classical, and generally, the algorithm’s behavior depends on the results of past measurements. An algorithm, therefore, mixes classical processing and interaction with quantum memory in a potentially arbitrary way: Quantum programming languages should be designed to handle it.

Quantum 𝝀-Calculus and Linear Logics.

In the last 20 years, many proposals for quantum programming languages have emerged [28, 30, 40, 39, 8, 11]. Similar to the classical setting, the paradigm of higher-order, functional quantum programming languages has been shown to be a fertile playground for the development of well-founded, formal quantum languages aiming at the formal analysis of quantum programs [40, 11].

The quantum λ-calculus of Selinger and Valiron [42] lies arguably at the foundation of the development of higher-order, quantum functional programming languages [14, 13, 38, 12, 35]. Akin to other extensions of the lambda calculus with probabilistic [17, 15, 21] or non-deterministic [16] behavior, the quantum lambda calculus extends the regular lambda calculus – the core of functional programming languages – with an interface to manipulate a quantum memory. Due to the properties of the quantum memory, quantum lambda-calculi must handle non-duplicable data and probabilistic behavior.

One of the critical points that a quantum programming language should address is the problem of the non-duplicability of quantum data. In the literature, non-duplicable data is usually captured with tools coming from linear logic. The first approach [42, 38, 12] consists in using types, imposing all functions to be linear by default, and with the use of a special type !A to explicitly pinpoint duplicable objects of type A. An alternative – untyped – approach [13, 14] considers instead an untyped lambda calculus, augmented with a special term construct “!” and validity constraints to forbid the duplication of qubits.

Probabilistic and Infinitary Behavior.

A quantum λ-calculus featuring all of quantum computation should not only permit the manipulation of quantum registers with unitary operations but should also make it possible to measure them, retrieving a classical bit of information. As the latter is a probabilistic operation, an operational semantics for a quantum λ-calculus is inherently probabilistic. As in the non-quantum case, probabilistic choice and unbounded recursion yield subtle behaviors.
Fair Coin.   Consider the following program L, written in a mock-up ML language with quantum features, similar to the language of [42]:

L ≜ 𝚒𝚏 𝚖𝚎𝚊𝚜(H 𝚗𝚎𝚠) 𝚝𝚑𝚎𝚗 I 𝚎𝚕𝚜𝚎 Ω.

For this introduction, we only describe its behavior informally.¹ The term L above produces a qubit in state (√2/2)(|0⟩ + |1⟩) by creating a fresh qubit in state |0⟩ (this is the role of 𝚗𝚎𝚠), and applying the Hadamard gate H. Measuring this qubit amounts to flipping a fair coin with equal probability 1/2. In one case, the program returns the identity function I; otherwise, it diverges – the term Ω stands for the usual, non-terminating looping term.

¹ The reader unfamiliar with the notation should not worry, as the formal details are not essential at this point: just retain that the state of our qubit is a superposition of two (basis) states, which play the role of head and tail. When needed, in Section 3 we provide a brief introduction to the mathematical formalism for quantum computation [37].

The program L, therefore, uses the quantum memory only once (at the beginning of the run of the program), and it terminates with probability 1/2.
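The coin-flip behavior of L can be sketched concretely. The following is a minimal illustration (our own, not the paper's formal semantics): we represent a one-qubit state by its pair of amplitudes, apply H, and read off the measurement probabilities.

```python
import math

def hadamard(state):
    # H sends (a, b) representing a|0> + b|1> to ((a+b)/sqrt(2), (a-b)/sqrt(2))
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probs(state):
    # Born rule: outcome 0 with probability |a|^2, outcome 1 with |b|^2
    a, b = state
    return {0: abs(a) ** 2, 1: abs(b) ** 2}

qubit = hadamard((1.0, 0.0))   # H |0> = (|0> + |1>)/sqrt(2)
probs = measure_probs(qubit)   # a fair coin
p_terminate = probs[0]         # branch 0 returns I; branch 1 runs Omega
```

As expected, both outcomes have probability 1/2, so L terminates with probability 1/2.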
Unbounded Use of Fair Coin.   In the context of probabilistic behavior, unbounded loops might terminate asymptotically: A program may terminate with probability 1, but only at the limit (almost sure termination). A simple example suffices to illustrate this point.

Consider a quantum process R that flips a coin by creating and measuring a fresh qubit. If the result is head, the process stops, outputting I. If the result is tail, it starts over. In our mock-up ML, the program R is

R ≜ 𝚕𝚎𝚝𝚛𝚎𝚌 f x = (𝚒𝚏 (𝚖𝚎𝚊𝚜 x) 𝚝𝚑𝚎𝚗 I 𝚎𝚕𝚜𝚎 f(H 𝚗𝚎𝚠)) 𝚒𝚗 f(H 𝚗𝚎𝚠). (1)

After n iterations, the program R is in normal form with probability 1/2 + 1/2² + ⋯ + 1/2ⁿ. Even if the termination probability is 1, this probability is not reached in any finite number of steps, but only as a limit. The program in Equation 1 is our leading example: we formalize it in Example 4.3.
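The partial sums above can be checked mechanically. A small sketch (the helper name is ours):

```python
def p_terminated(n):
    """Probability that R has reached normal form within n coin flips:
    1/2 + 1/4 + ... + 1/2^n, which equals 1 - 2^(-n)."""
    return sum(2.0 ** -k for k in range(1, n + 1))
```

The sum approaches 1 from below but never attains it for finite n: termination holds only at the limit (almost-sure termination).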

Operational Semantics of Quantum Programs.

As it is customary when dealing with choice effects, the probabilistic behavior is dealt with by fixing an evaluation strategy. Think of tossing a (quantum) coin and duplicating the result, versus tossing the coin twice, which is indeed one key issue at the core of confluence failure in such settings (as observed in [16, 13]). Following the standard approach adopted for functional languages with side effects, the evaluation strategy in quantum λ-calculi such as [42, 38, 12] is a deterministic call-by-value strategy: an argument is reduced to a value before being passed to a function.

One aspect that has seldom been examined, however, is the properties of the general reduction associated with the quantum lambda-calculus: this is the purpose of this paper.

A Rewriting Theory for the Quantum 𝝀-Calculus.

Lambda calculus has a rich, powerful notion of reduction, whose properties are studied by a vast literature. Such a general rewriting theory provides a sound framework for reasoning about program transformations, such as compiler optimizations or parallel implementations, and a base on which to reason about program equivalence. The two fundamental operational properties of the lambda calculus are confluence and standardization. Confluence guarantees that normal forms are unique; standardization guarantees that if a normal form exists, there is a strategy that is guaranteed to reach it.

As pioneered by Plotkin [41], standardization allows one to bridge between the general reduction (where program transformations can be studied) and a specific evaluation strategy, which implements the execution of an idealized programming language. Summarizing the situation, for programming languages there are two kinds of term rewriting: run-time rewriting ("evaluation") and compile-time rewriting (program transformations).

In the context of quantum lambda-calculi, the only line of research discussing rewriting (rather than fixing a deterministic strategy) has been pursued by Dal Lago, Masini, and Zorzi [14, 13]: working with an untyped quantum lambda-calculus, they establish confluence results (and also a form of standardization, but only for the sub-language without measurement [13] – therefore, without the probabilistic behavior).

In this paper, we study not only confluence but also standardization and normalization results for a quantum λ-calculus featuring measurement, and where β reduction (the engine of λ-calculus) is fully unrestricted. Recent advances in probabilistic and monadic rewriting theory [9, 12, 19, 33, 6, 23, 27, 32] allow us to tackle this task with a formalism and powerful techniques unavailable a decade ago. Still, quantum rewriting is more challenging than probabilistic rewriting because we need to manage the states of the quantum memory. The design of the language is, therefore, also delicate: we need to control the duplication of qubits while allowing the full power of β-reduction.

Contributions.

We can summarize the contributions of the paper as follows. These are described in more detail in Section 5, once all the necessary material has been set up.

  • An untyped quantum lambda-calculus, closely inspired by [14] but re-designed to allow for a more general reduction, now encompassing the full strength of β-reduction; validity constraints make it quantum-compatible.

  • The calculus is equipped with a rich operational semantics, which is sound with respect to quantum computation. The general reduction enables arbitrary β-reduction; surface reduction (in the spirit of [43] and other calculi based on Linear Logic) plays the role of an evaluation strategy.

  • We study the rewriting theory for the system, proving confluence of the reduction, and standardization.

  • We obtain a normalization result that scales to the asymptotic case, defining a normalizing strategy w.r.t. termination at the limit.

Missing proofs and some more technical details are given in the extended version [25].

2 Setting the Scene: the Rewriting Ingredients

This section is devoted to a more detailed (but still informal) discussion of two key elements: the style of λ-calculus we adopt, and what standardization results are about. The calculus is then defined in Section 3, its operational semantics in Section 4; standardization and normalization in the following sections.

Untyped Quantum 𝝀-Calculus.

Our quantum calculus is built on top of Simpson’s calculus Λ! [43], a variant of untyped λ-calculus inspired by Girard’s Linear Logic [29]. In this choice, we follow [14, 13, 12]. Indeed, the fine control of duplication which Λ! inherits from linear logic makes it an ideal base for quantum computation.

The Bang operator ! plays the role of a marker for non-linear management: duplicability and discardability of resources. Abstraction is refined into linear abstraction λx.M and non-linear abstraction λ!x.M. The latter allows duplication of the argument, which is required to be suspended as a thunk !N, behaving as the !-box of linear logic.

Example 2.1 (duplication, or not).

(λx.Hx)(𝚗𝚎𝚠) is a valid term, but (λx.⟨x,x⟩)(𝚗𝚎𝚠), which would duplicate the qubit created by 𝚗𝚎𝚠, is not. Instead, we can validly write (λ!x.𝖢𝖭𝖮𝖳⟨Hx,x⟩)(!𝚗𝚎𝚠), which thunks 𝚗𝚎𝚠 and then duplicates it, yielding 𝖢𝖭𝖮𝖳⟨H 𝚗𝚎𝚠, 𝚗𝚎𝚠⟩. Notice that this term produces an entangled pair of qubits.

In our paper, as well as in [14, 13, 12], surface reduction (i.e., no reduction is allowed in the scope of the ! operator) is the key ingredient allowing quantum bits to coexist with duplication and erasing. Unlike previous work, however, in our setting β-reduction – the engine of λ-calculus – is unconstrained. We prove that only quantum operations need to be surface, making ours a conservative extension of the usual λ-calculus, with its full power.

Call-by-Value… or rather, Call-by-Push-Value.

The reader may have recognized that reduction in our calculus follows the Call-by-Push-Value paradigm, with the Bang operator thunking a computation. In fact, Simpson's calculus [43] – more precisely, the fragment without linear abstraction – is essentially an untyped version of Call-by-Push-Value; it has been extensively studied in the Linear Logic literature, also under the name of Bang calculus [20, 31, 10], as a unifying framework which subsumes both Call-by-Name (CbN) and Call-by-Value (CbV) calculi.

Table 1: Summarizing Standard Factorization and Normalization Results.

  Call-by-Name λ-calculus — general reduction →β, evaluation: head reduction →h.
    1. Head factorization: M →β* N iff M →h* · →¬h* N.
    2. Head normalization: M →β* H (hnf) iff M →h* H.
  Call-by-Value λv-calculus — general reduction →βv, evaluation: weak-left reduction →l.
    1. Weak-left factorization: M →βv* N iff M →l* · →¬l* N.
    2. Convergence to a value: M →βv* V (value) iff M →l* V.
  Linear λ!-calculus — general reduction →β!, evaluation: surface reduction →s.
    1. Surface factorization: M →β!* N iff M →s* · →¬s* N.
    2. Surface normalization: M →β!* S (snf) iff M →s* S.

A Taste of Standardization and Normalization: Pure 𝝀-Calculi.

Termination and confluence concern the existence and the uniqueness of normal forms, which are the results of a computation. Standardization and normalization are concerned with how to compute a given outcome. For example, is there a strategy which guarantees termination, if possible? The desired outcome is generally a specific kind of terms. In the classical theory of λ-calculus (à la Barendregt), the terms of interest are head normal forms. In the Call-by-Value approach, the terms of computational interest are values.
Classical λ-calculus.   The simplest form of standardization is factorization: any reduction sequence can be re-organized so as to first perform specific steps and then everything else. A paradigmatic example is the head factorization theorem of classical λ-calculus (Theorem 11.4.6 in [7]): every β-reduction sequence M →β* N can be re-organized/factorized so as to first reduce head redexes and then everything else – in symbols, M →h* · →¬h* N.

A major consequence is head normalization, relating arbitrary β reduction with head reduction, w.r.t. head normal forms, the terms of computational interest in classical λ-calculus. A term M has a head normal form if and only if head reduction terminates:

M →β* H (hnf)  ⟺  M →h* H (hnf)

Plotkin’s Call-by-Value.   This kind of result takes its full computational meaning in Plotkin's Call-by-Value λ-calculus [41]. The terms of interest are here values. Plotkin relates arbitrary β reduction (→βv) and the evaluation strategy →l, which only performs weak-left steps (no reduction in the scope of abstractions), by establishing

M →βv* V (value)  ⟺  M →l* V (value)

In words: the unconstrained reduction (→βv) returns a value if and only if the evaluation strategy (→l) returns a value. The proof relies on a factorization: M →βv* N iff M →l* · →¬l* N.
Simpson’s pure calculus.   Standardization and normalization results have also been established by Simpson for his calculus Λ! [43]. Here, the evaluation strategy is surface reduction, i.e. no reduction is allowed in the scope of a ! operator.
Summary.   Table 1 summarizes the factorization and normalization results for the three calculi (respectively based on →β, →βv, →β!) which we have discussed.

3 Untyped Quantum 𝝀-Calculus

Quantum lambda-calculus is an idealization of a functional quantum programming language: following Selinger and Valiron [42], it consists of a regular λ-calculus together with specific constructs for manipulating quantum data and quantum operations. One of the problems consists in accommodating the non-duplicability of quantum information: in a typed setting [42], one can rely on a linear type system. In our untyped setting, we instead base our language on Simpson's λ-calculus [43], extended with constructs corresponding to quantum data and quantum operations.

Due to entanglement, the state of an array of quantum bits cannot be separated into states of individual qubits: the information is non-local. A corollary is that quantum data cannot easily be written inside lambda-terms: unlike Boolean values or natural numbers, one cannot put in the grammar of terms a family of constants standing for all of the possible values a quantum bit could take. A standard procedure [42] relies on an external memory with register identifiers used as placeholders for qubits inside the lambda-term. As they stand for qubits, these registers are taken as non-duplicable.

In the original quantum lambda-calculus [42], regular free variables of type qubit were used to represent registers. In this work, being untyped, we prefer to consider two kinds of variables: regular term variables, and special variables, called registers and denoted by ri with i ∈ ℕ, corresponding to the qubit number i in the quantum memory. The language is also equipped with three term constructs to manipulate quantum information. The first term construct is 𝚗𝚎𝚠, producing the allocation of a fresh qubit.² The second term construct is 𝚖𝚎𝚊𝚜(ri, M0, M1), corresponding to a destructive measurement of the qubit ri. The evaluation then probabilistically continues as M0 or M1, depending on the measured value being |0⟩ or |1⟩. Finally, assuming that the memory comes with a set of built-in unitary gates ranged over by letters A, B, C, the term UA corresponds to a function applying the gate A to the indicated qubits.

² Unlike in the original quantum λ-calculus [42], the term 𝚗𝚎𝚠 literally evaluates to a qubit.

Raw Terms.

Formally, raw terms M,N,P, are built according to the following grammar.

M, N, P ::= x | !M | λx.M | λ!x.M | MN | ri | UA | 𝚗𝚎𝚠 | 𝚖𝚎𝚊𝚜(P, M, N)   (terms Λq)

where x ranges over a countable set of variables, ri over a disjoint set of registers, where i ∈ ℕ is called the identifier of the register, and UA over a set of built-in n-ary gates. In this paper, we limit the arity n to be 1 or 2. Pairs do not appear as a primitive construct, but we adopt the standard encoding, writing ⟨M,N⟩ as sugar for λf.(fM)N. We also use the shorthand ⟨M1,…,Mn⟩ for ⟨M1,⟨M2,…⟩⟩. The variable x is bound in both λx.P and λ!x.P. As usual, we silently work modulo α-equivalence. Given a term of shape 𝚖𝚎𝚊𝚜(P, M, N), we call M and N its branches. As usual, the set of free variables of a term M is denoted with 𝙵𝚅(M). The set of register identifiers of a term M is denoted with 𝚁𝚎𝚐(M).

 Remark 3.1.

Without any constraints, one could write terms such as ⟨r0, r0⟩ or λx.⟨x,x⟩. Both are invalid: the former since a qubit cannot be duplicated, the latter since λ-abstractions are meant to be linear in Simpson's calculus.

Terms Validity.

To deal with the problem in Remark 3.1, we need to introduce the notions of context and surface context, to speak of occurrences and surface occurrences of subterms.

A context is a term with a hole. We define general contexts, where the hole can appear anywhere, and surface contexts, where the hole occurs neither in the scope of a ! operator nor in the branches of a 𝚖𝚎𝚊𝚜(·,·,·). They are generated by the grammars

C ::= ⟨·⟩ | MC | CM | λx.C | λ!x.C | 𝚖𝚎𝚊𝚜(C, M, N) | 𝚖𝚎𝚊𝚜(M, C, N) | 𝚖𝚎𝚊𝚜(M, N, C) | !C   (contexts)

S ::= ⟨·⟩ | MS | SM | λx.S | λ!x.S | 𝚖𝚎𝚊𝚜(S, M, N)   (surface contexts)

where ⟨·⟩ denotes the hole of the corresponding context. The notation C⟨R⟩ (or S⟨R⟩) stands for the term where the only occurrence of the hole in C (or S) is replaced with the term R, potentially capturing free variables of R.

Contexts and surface contexts allow us to formalize two notions of occurrence of a subterm T. The pair (C, T) (resp. (S, T)) is an occurrence (resp. surface occurrence) of T in M whenever M = C⟨T⟩ (resp. M = S⟨T⟩). By abuse of notation, we will simply speak of an occurrence of a subterm in M, the context being implicit.

We can now define a notion of valid terms, agreeing with the quantum principle of no-cloning.

Definition 3.2 (Valid Terms, and Linearity).

A term M is valid whenever

  • no register occurs in M more than once, and every occurrence of a register is surface;

  • for every subterm λx.P of M, x is linear in P, i.e. x occurs free exactly once in P and, moreover, this occurrence of x is surface.
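Definition 3.2 can be made concrete with a small checker. The sketch below uses a hypothetical tuple encoding of raw terms (our own, not the paper's syntax), marking occurrences as non-surface when they sit under a ! or in the branches of a 𝚖𝚎𝚊𝚜:

```python
# Toy encoding of raw terms: ('var', x) ('reg', i) ('bang', M)
# ('lam', x, M) ('lam!', x, M) ('app', M, N) ('new',) ('meas', P, M, N)

def occurrences(t, surface=True):
    """Yield (kind, name, is_surface) for registers and free variables of t."""
    tag = t[0]
    if tag == 'var':
        yield ('var', t[1], surface)
    elif tag == 'reg':
        yield ('reg', t[1], surface)
    elif tag == 'bang':
        yield from occurrences(t[1], False)        # under !: never surface
    elif tag in ('lam', 'lam!'):
        for k, n, s in occurrences(t[2], surface):
            if not (k == 'var' and n == t[1]):     # t[1] is bound here
                yield (k, n, s)
    elif tag == 'app':
        yield from occurrences(t[1], surface)
        yield from occurrences(t[2], surface)
    elif tag == 'meas':
        yield from occurrences(t[1], surface)
        yield from occurrences(t[2], False)        # branches: never surface
        yield from occurrences(t[3], False)

def linear_lambdas(t):
    """Every linear abstraction binds exactly one surface occurrence."""
    if t[0] == 'lam':
        occs = [(n, s) for k, n, s in occurrences(t[2])
                if k == 'var' and n == t[1]]
        if len(occs) != 1 or not occs[0][1]:
            return False
    return all(linear_lambdas(s) for s in t[1:] if isinstance(s, tuple))

def valid(t):
    regs = [(i, s) for k, i, s in occurrences(t) if k == 'reg']
    ids = [i for i, _ in regs]
    if len(ids) != len(set(ids)) or not all(s for _, s in regs):
        return False                               # registers: at most once, surface
    return linear_lambdas(t)
```

For instance, λx.x is valid, while λx.xx, ⟨r0,r0⟩-style duplication, and !r0 are all rejected.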

 Remark 3.3.

The validity conditions for registers and linear variables do not allow us to put registers inside branches. So, for instance, a term such as

λz.𝚖𝚎𝚊𝚜(r0, z(UA r1), z(UB r1))

is invalid in our syntax. This function would measure r0 and perform an action on r1 based on the result. While one cannot write such a term directly, given the constraints we have set on the language, one can however encode the corresponding behavior as follows:

(λ!f.f z r1) 𝚖𝚎𝚊𝚜(r0, !(λu.λx.u(UA x)), !(λu.λx.u(UB x))).

The action on the register r1 is the function f whose definition is based on the result of the measurement of r0.

Quantum Operations.

Before diving into the definition of the notion of program, we briefly recall here the mathematical formalism for quantum computation [37].

The basic unit of information in quantum computation is a quantum bit, or qubit. The state of a single qubit is a normalized vector of the 2-dimensional Hilbert space ℂ². We denote the standard basis of ℂ² as {|0⟩, |1⟩}, so that the general state of a single qubit can be written as α|0⟩ + β|1⟩, where |α|² + |β|² = 1.

The basic operations on quantum states are unitary operations and measurements. A unitary operation maps an n-qubit state to an n-qubit state, and is described by a unitary 2ⁿ × 2ⁿ matrix. We assume that the language provides a set of built-in unitary operations, including for example the Hadamard gate H and the controlled-not gate 𝖢𝖭𝖮𝖳:

H ≜ (1/√2) [ 1 1 ; 1 −1 ]    𝖢𝖭𝖮𝖳 ≜ [ 1 0 0 0 ; 0 1 0 0 ; 0 0 0 1 ; 0 0 1 0 ]    (2)

Measurement acts as a projection. When a qubit α|0⟩ + β|1⟩ is measured, the observed outcome is a classical bit: either 0 or 1, with probabilities |α|² and |β|², respectively. Moreover, the state of the qubit collapses to |0⟩ (resp. |1⟩) if 0 (resp. 1) was observed.
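As a sanity check of this formalism, the sketch below (our own illustration, not part of the calculus) encodes H and CNOT as plain matrices and implements single-qubit measurement as just described:

```python
import math

s = 1 / math.sqrt(2)

H = [[s, s],
     [s, -s]]

CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(gate, state):
    # Plain matrix-vector product on the amplitude vector.
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

def measure(state):
    """Return {outcome: (probability, collapsed_state)} for one qubit."""
    a, b = state
    out = {}
    if abs(a) > 0:
        out[0] = (abs(a) ** 2, [1.0, 0.0])   # collapse to |0>
    if abs(b) > 0:
        out[1] = (abs(b) ** 2, [0.0, 1.0])   # collapse to |1>
    return out

plus = apply(H, [1.0, 0.0])    # H|0> = (|0> + |1>)/sqrt(2)
outcomes = measure(plus)       # outcomes 0 and 1, each with probability 1/2
```

Note that H is its own inverse, and CNOT exchanges |10⟩ and |11⟩, as the matrix in Equation 2 dictates.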

Programs.

In order to associate quantum states to registers in lambda-terms, we introduce the notion of program, consisting of a quantum memory and a lambda-term of Λq. A program is closed under permutation of register identifiers.

The state of one qubit is a normalized vector in ℋ = ℂ². The state of a quantum memory (also called qubits state in the remainder of the document) consisting of n qubits is a normalized vector 𝚀 ∈ ℋn = (ℂ²)⊗n, the n-fold Kronecker product of ℋ. The size of 𝚀 is written |𝚀| ≜ n. We identify a canonical basis element of ℋn with a string of bits of size n, denoted with |b0…bn−1⟩. A state 𝚀 ∈ ℋn is therefore in general of the form 𝚀 = Σ_{b0,…,bn−1 ∈ {0,1}} α_{b0…bn−1} |b0…bn−1⟩. If σ is a permutation of {0,…,n−1}, we define σ(𝚀) as σ(𝚀) = Σ_{b0,…,bn−1 ∈ {0,1}} α_{b0…bn−1} |b_{σ(0)}…b_{σ(n−1)}⟩.

A raw program 𝐩 is then a pair [𝚀; M], where 𝚀 ∈ ℋn and where M is a valid term such that 𝚁𝚎𝚐(M) = {0,…,n−1}. We call n the size of 𝚀 and we denote it with |𝚀|. If σ is a permutation over the set {0,…,n−1}, the re-indexing σ(𝐩) of 𝐩 is the pair [σ(𝚀); σ(M)], where σ(M) is M with each occurrence of ri replaced by rσ(i).
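Re-indexing can be illustrated on amplitudes directly. In the sketch below (our own encoding, not the paper's), a qubits state is a map from basis bit-strings to amplitudes, and σ permutes the bit positions as in the definition above:

```python
def reindex(sigma, Q):
    """sigma: permutation given as a dict on {0..n-1};
    Q: qubits state as {bitstring: amplitude}.
    Position i of the new basis string holds bit sigma(i)."""
    out = {}
    for bits, amp in Q.items():
        permuted = ''.join(bits[sigma[i]] for i in range(len(bits)))
        out[permuted] = amp
    return out
```

For the swap permutation, |01⟩ becomes |10⟩, while a state supported on |00⟩ and |11⟩ is left unchanged, matching the entangled example below.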

Definition 3.4 (Program).

A program is an equivalence class of raw programs under re-indexing. We identify programs with their representative elements. The set of all programs is denoted with 𝒫.

Example 3.5.

The following two raw programs are equal modulo re-indexing: [|ψ⟩⊗|ϕ⟩; ⟨r0,r1⟩] = [|ϕ⟩⊗|ψ⟩; ⟨r1,r0⟩]. In both cases, |ψ⟩ is the first qubit in the pair and |ϕ⟩ the second one. Re-indexing is agnostic with respect to entanglement, and we also have the raw program [α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩; ⟨r0,r1⟩] being a re-indexing of the raw program [α|00⟩ + γ|01⟩ + β|10⟩ + δ|11⟩; ⟨r1,r0⟩]: they are two representative elements of the same program.

4 Operational Semantics

The operational semantics of λ-calculus is usually formalized by means of a rewriting system. In the setting of λ-calculus, the rewriting rules are also known as reductions.

Rewriting System.

We recall that a rewriting system is a pair (𝒜, →) consisting of a set 𝒜 and a binary relation → on 𝒜 whose pairs are written t → s and called reduction steps. We denote →* (resp. →=) the transitive-reflexive (resp. reflexive) closure of →. We write t ← u if u → t. If →1, →2 are binary relations on 𝒜, then →1·→2 denotes their composition.

Probabilistic Rewriting.

In order to define the operational semantics of the quantum lambda calculus, we need to formalize probabilistic reduction. We do so by means of a rewrite system over multidistributions, adopting the monadic approach recently developed in the literature of probabilistic rewriting (see e.g. [6, 12, 19, 23]). Reduction is here defined not simply on programs, but on (monadic) structures representing probability distributions over programs, called multidistributions.

The operational semantics of the language is defined by specifying the probabilistic behavior of programs, then lifting reduction to multidistributions of programs. Let us recall the notions.
Probability Distributions.  Given a countable set Ω, a function μ : Ω → [0,1] is a probability subdistribution if ‖μ‖ := Σ_{ω∈Ω} μ(ω) ≤ 1 (a distribution if ‖μ‖ = 1). Subdistributions allow us to deal with partial results. We write 𝙳(Ω) for the set of subdistributions on Ω, equipped with the pointwise order on functions: μ ≤ ρ if μ(ω) ≤ ρ(ω) for all ω ∈ Ω. 𝙳(Ω) has a bottom element (the subdistribution 𝟎) and maximal elements (all distributions).
Multidistributions.  We use multidistributions [6] to syntactically represent distributions. A multidistribution 𝚖 = {{qi 𝐩i}}_{i∈I} on the set of programs 𝒫 is a finite multiset of pairs of the form qi 𝐩i, with qi ∈ ]0,1], 𝐩i ∈ 𝒫, and Σ_i qi ≤ 1. The set of all multidistributions on 𝒫 is 𝙼𝙳(𝒫). The sum of two multidistributions is noted +, and is simply the union of the multisets. The product q·𝚖 of a scalar q and a multidistribution 𝚖 is defined pointwise: q·{{pi 𝐩i}}_{i∈I} := {{(q·pi) 𝐩i}}_{i∈I}. We write {{𝐩}} for {{1 𝐩}}.
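These operations are straightforward to realize concretely. A sketch (our own encoding, with programs mocked as strings and exact probabilities as fractions):

```python
from fractions import Fraction

# A multidistribution is a list of (probability, program) pairs,
# read as a multiset; programs are mocked as plain strings here.

def md_sum(m1, m2):
    return m1 + m2                        # multiset union

def md_scale(q, m):
    return [(q * p, prog) for p, prog in m]

def md_mass(m):
    return sum(p for p, _ in m)           # total mass, must be <= 1

m = md_sum(md_scale(Fraction(1, 2), [(Fraction(1), 'p1')]),
           md_scale(Fraction(1, 2), [(Fraction(1, 2), 'p2'),
                                     (Fraction(1, 2), 'p3')]))
# m represents {{ 1/2 p1, 1/4 p2, 1/4 p3 }}
```

Exact rationals avoid the floating-point drift that would otherwise accumulate along long reduction sequences.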

4.1 The Rewrite System 𝓠

𝒬 is the rewrite system (𝙼𝙳(𝒫), ⇒), where the relation ⇒ ⊆ 𝙼𝙳(𝒫) × 𝙼𝙳(𝒫) is monadically defined in two phases. First, we define one-step reductions from a program to a multidistribution. For example, if 𝐩 is the program [(1/√2)(|0⟩+|1⟩); 𝚖𝚎𝚊𝚜(r0, M, N)], the program 𝐩 reduces in one step to {{½[|⟩; M], ½[|⟩; N]}}. Then, we lift the definition of reduction to a binary relation ⇒ on 𝙼𝙳(𝒫), in the natural way. So for instance, reusing 𝐩 above, {{½ 𝐩, ½[𝚀; (λx.xx)F]}} reduces in one step to {{¼[|⟩; M], ¼[|⟩; N], ½[𝚀; FF]}}. Let us see the details.
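The two-phase construction can be sketched as code (our own illustration): a one-step function sends a program to a multidistribution (or None on normal forms), and the lift distributes it pointwise over a multidistribution, mirroring the example above:

```python
from fractions import Fraction

half = Fraction(1, 2)

def step(prog):
    # Hypothetical one-step behaviour of the two programs in the example.
    if prog == 'meas':                        # measurement: two branches
        return [(half, 'M'), (half, 'N')]
    if prog == 'beta':                        # deterministic beta step
        return [(Fraction(1), 'FF')]
    return None                               # normal form

def lift(m):
    out = []
    for p, prog in m:
        succ = step(prog)
        if succ is None:
            out.append((p, prog))             # keep normal forms as they are
        else:
            out += [(p * q, r) for q, r in succ]
    return out

m1 = lift([(half, 'meas'), (half, 'beta')])
# m1 represents {{ 1/4 M, 1/4 N, 1/2 FF }}
```

The total mass is preserved by the lift, as each program's successors carry its probability split among them.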

I. Programs Reduction.

We first define the reduction of a program 𝐩 to a multidistribution. The operational behavior of 𝐩 is given by beta reduction, denoted with →β, and specific rules for handling quantum operations – the corresponding reduction is denoted with →q. Formally, the relations →β and →q (also called reduction steps) are subsets of 𝒫 × 𝙼𝙳(𝒫) and are defined in Figure 2 by contextual closure of the root rules ↦β and ↦q, given in Figure 1. The relation → is then the union →β ∪ →q.
The root rules.   They are given in Figure 1. We call redex the term on the left-hand side of a rule. Beta rules come in two flavors: the linear one (b), which does not allow for duplication, and the non-linear one (!b), which possibly duplicates boxes (i.e. terms of shape !M). Quantum rules act on the qubits state, exactly implementing the operations which we informally described in Section 3. Notice that the rule (m) has a probabilistic behaviour. The qubit which has been measured can be discarded (as we actually do).
Contextual Closures.   They are defined in Figure 2. Observe that while the β rules are closed under arbitrary contexts C, the quantum rules are restricted to surface contexts S (no reduction in the scope of a ! operator, nor in the branches of 𝚖𝚎𝚊𝚜(·,·,·)). These constraints guarantee that qubits are neither duplicated nor deleted.

 Remark 4.1 (Reindexing).

As in [42], reduction is defined on programs, which are equivalence classes. We define the rules on a convenient representative. For example, in Figure 1 rule (u1) reduces the redex UA r0. Modulo re-indexing, the same rule can be applied to any other register.

II. Lifting.

The lifting of a relation →r ⊆ 𝒫 × 𝙼𝙳(𝒫) to a relation ⇒r on multidistributions is defined in Figure 3. In particular, →, →β, →q lift to ⇒, ⇒β, ⇒q, respectively.

β rules:
(b)  [𝚀; (λx.M)N] ↦β {{[𝚀; M{N/x}]}}
(!b) [𝚀; (λ!x.M)!N] ↦β {{[𝚀; M{N/x}]}}

Quantum rules:
(n)  [𝚀; 𝚗𝚎𝚠] ↦q {{[𝚀⊗|0⟩; rn]}}, where |𝚀| = n
(m)  [𝚀; 𝚖𝚎𝚊𝚜(rn, M, N)] ↦q {{|α0|² [𝚀0; M], |α1|² [𝚀1; N]}},
     where 𝚀 = α0 𝚀0⊗|0⟩ + α1 𝚀1⊗|1⟩ and 𝚀 has n+1 qubits
(u1) for A a unary operator: [𝚀; UA r0] ↦q {{[𝚀′; r0]}}, where 𝚀′ = (A⊗𝖨𝖽)𝚀
(u2) for A a binary operator: [𝚀; UA⟨r0,r1⟩] ↦q {{[𝚀′; ⟨r0,r1⟩]}}, where 𝚀′ = (A⊗𝖨𝖽)𝚀

Figure 1: Root rules (↦).
β steps: if [𝚀; M] ↦β {{[𝚀; M′]}}, then [𝚀; C⟨M⟩] →β {{[𝚀; C⟨M′⟩]}}.

Quantum steps: if [𝚀; M] ↦q {{pi [𝚀i; Mi]}}, then [𝚀; S⟨M⟩] →q {{pi [𝚀i; S⟨Mi⟩]}}.

→ := →β ∪ →q

Figure 2: Contextual closure of root rules (→).

Reduction Sequences.

A ⇒-sequence (or reduction sequence) from 𝚖 is a sequence 𝚖 = 𝚖0, 𝚖1, 𝚖2, … such that 𝚖i ⇒ 𝚖i+1 for every i. We write 𝚖0 ⇒* 𝚖 to indicate the existence of a finite reduction sequence from 𝚖0 to 𝚖, and 𝚖0 ⇒k 𝚖 to specify the number k of ⇒-steps. Given a program 𝐩 and 𝚖0 = {{𝐩}}, the sequence 𝚖0 ⇒ 𝚖1 ⇒ ⋯ naturally models the evaluation of 𝐩; each 𝚖k expresses the "expected" state of the system after k steps.

Validity.

Validity of terms is preserved:

Proposition 4.2.

If M is a valid term, and [𝚀; M] → {{pi [𝚀i; Mi]}}, then each Mi is a valid term.

Notice that the restriction of q to surface contexts is necessary to respect the linearity of quantum computation, avoiding duplication or deletion of qubits.

Examples.

Let us see the definitions at work in a few examples. We first formalize the recursive program from Equation 1. Recall that H is the Hadamard gate, and I ≜ λx.x.

Example 4.3 (Flipping the quantum coin).

The program in Equation 1 can be written as 𝐩 ≜ [|⟩; Δ!Δ], where |⟩ is just the empty memory and Δ ≜ λ!x.𝚖𝚎𝚊𝚜(UH 𝚗𝚎𝚠, I, x!x). A reduction from 𝐩 behaves as follows.

{{[|⟩; Δ!Δ]}} ⇒ {{[|⟩; 𝚖𝚎𝚊𝚜(UH 𝚗𝚎𝚠, I, Δ!Δ)]}} ⇒ {{[|0⟩; 𝚖𝚎𝚊𝚜(UH r0, I, Δ!Δ)]}}
⇒ {{[(√2/2)(|0⟩+|1⟩); 𝚖𝚎𝚊𝚜(r0, I, Δ!Δ)]}} ⇒ {{½[|⟩; I], ½[|⟩; Δ!Δ]}}
⇒* {{½[|⟩; I], ¼[|⟩; I], ¼[|⟩; Δ!Δ]}} ⇒ ⋯

Notice that the first step is a non-linear β reduction. The reduction of 𝚗𝚎𝚠 allocates a fresh qubit in the memory, corresponding to the register r0. The redex UH r0 applies the Hadamard gate H to that qubit. The last reduction performs the measurement, yielding a probabilistic outcome.

Example 4.4 (Entangled pair).

Let 𝐩 ≜ [|⟩; 𝚕𝚎𝚝 ⟨x,y⟩ = U𝖢𝖭𝖮𝖳⟨UH 𝚗𝚎𝚠, 𝚗𝚎𝚠⟩ 𝚒𝚗 𝚖𝚎𝚊𝚜(y, I, I) x] (where 𝚕𝚎𝚝 ⟨x,y⟩ = … 𝚒𝚗 … is sugar for an opportune encoding). This program produces an entangled pair of qubits (notice how 𝖢𝖭𝖮𝖳 is applied to a pair of registers) and then measures one of the qubits. Let us formalize its behaviour:

{{𝐩}} ⇒𝗌* {{[√2/2(|0⟩+|1⟩)⊗|0⟩; 𝚕𝚎𝚝 ⟨x,y⟩ = U_𝖢𝖭𝖮𝖳⟨r0,r1⟩ 𝚒𝚗 𝚖𝚎𝚊𝚜(y,I,I) x]}}
⇒𝗌 {{[√2/2 |00⟩ + √2/2 |11⟩; 𝚕𝚎𝚝 ⟨x,y⟩ = ⟨r0,r1⟩ 𝚒𝚗 𝚖𝚎𝚊𝚜(y,I,I) x]}}
⇒𝗌 {{[√2/2 |00⟩ + √2/2 |11⟩; 𝚖𝚎𝚊𝚜(r1,I,I) r0]}} ⇒𝗌 {{1/2 [|0⟩; I r0], 1/2 [|1⟩; I r0]}}

{{𝐩}} ⇒ {{𝐩}}     𝐩 → 𝚖 implies {{𝐩}} ⇒ 𝚖     ({{𝐩i}} ⇒ 𝚖i)_{i∈I} implies {{pi 𝐩i ∣ i∈I}} ⇒ Σ_{i∈I} pi 𝚖i

Figure 3: Lifting of →.

4.2 Surface Reduction and Surface Normal Forms

So far, we have defined a very liberal notion of reduction, in which β is unrestricted – it can validly be performed even inside a !-box. What shall we adopt as an evaluation strategy?

In the setting of calculi based on linear logic, such as Simpson’s calculus [43] and the Bang calculus [20], the natural candidate is surface reduction: the restriction of β to surface contexts (→𝗌β) plays a role akin to that of head reduction in classical λ-calculus, yielding similar factorization and normalization results relating →β and →𝗌β (as recalled in Table 1). The terms of interest here are surface normal forms (snf), such as x or !M. They are the analog of values in Plotkin’s Call-by-Value λ-calculus and of head normal forms in classical λ-calculus – such an analogy can indeed be made precise [20, 31, 10], a consequence of Girard’s translation of the Call-by-Name and Call-by-Value λ-calculi into Linear Logic.

In our setting, surface reduction and surface normal forms (snf) also play a privileged role.

Surface Reduction.

Surface steps →𝗌 ⊆ 𝒫×𝙼𝙳(𝒫) (Figure 5) are the union →q ∪ →𝗌β of the quantum steps together with →𝗌β, i.e. the closure under surface contexts S of the β rules. A program 𝐩 is a surface normal form (snf) if 𝐩 ↛𝗌, i.e. no surface reduction is possible from it.

A →-step which is not surface is noted →¬𝗌. The liftings of →𝗌 and →¬𝗌 to relations on multidistributions are denoted ⇒𝗌 and ⇒¬𝗌, respectively.

 Remark 4.5.

Notice that →¬𝗌 steps do not act on the qubits state, since they are β steps.

Strict Lifting.

To guarantee normalization results (Section 7), we will need a stricter form of lifting, noted ⇛𝗌 (Figure 4), forcing a reduction step to be performed in each program of the multidistribution 𝚛, if a redex exists. Clearly ⇛𝗌 ⊆ ⇒𝗌.

𝐩 ↛𝗌 implies {{𝐩}} ⇛𝗌 {{𝐩}}     𝐩 →𝗌 𝚖 implies {{𝐩}} ⇛𝗌 𝚖     ({{𝐩i}} ⇛𝗌 𝚖i)_{i∈I} implies {{pi 𝐩i ∣ i∈I}} ⇛𝗌 Σ_{i∈I} pi 𝚖i

Figure 4: Strict lifting of →𝗌.
Example 4.6.

We will prove that the strict lifting ⇛𝗌 guarantees that snf is reached, if any exists. This is obviously not the case for arbitrary ⇒𝗌-sequences:

{{1/2 I𝚗𝚎𝚠, 1/2 (λ!x.x!x)!(λ!x.x!x)}} ⇒𝗌 {{1/2 I𝚗𝚎𝚠, 1/2 (λ!x.x!x)!(λ!x.x!x)}} ⇒𝗌 {{1/2 I𝚗𝚎𝚠, 1/2 (λ!x.x!x)!(λ!x.x!x)}} ⇒𝗌 ⋯
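The starvation phenomenon above can be made concrete with a toy scheduler. In the Python sketch below (ours; terms are abstracted to tags, and the reduction of I𝚗𝚎𝚠 to an snf is compressed into one step), a non-strict step may reduce only a chosen subset of the components, so a scheduler that always picks the looping term keeps the probability of snf at 0 forever; a strict step must reduce every reducible component.

```python
def reduce_term(t):
    # 'r0' stands for a surface normal form; 'Omega' loops on itself
    return 'r0' if t == 'I new' else 'Omega'

def nonstrict_step(md, chosen):
    """Reduce only the chosen components, keep the others untouched."""
    return [(p, reduce_term(t) if i in chosen else t)
            for i, (p, t) in enumerate(md)]

def p_snf(md):
    return sum(p for p, t in md if t == 'r0')

md = [(0.5, 'I new'), (0.5, 'Omega')]
for _ in range(100):
    md = nonstrict_step(md, chosen={1})   # always pick the looping term
assert p_snf(md) == 0.0                   # never reaches any snf

md = nonstrict_step(md, chosen={0, 1})    # a strict step reduces both
assert p_snf(md) == 0.5
```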

On the Interest of Surface Normal Forms.

What is the result of running a quantum program? In general, since computation is probabilistic, the result of executing a program will be a distribution over some outcomes of interest. A natural choice is programs of shape 𝐩 = [𝚀;S], with S in surface normal form, ensuring that at this point the qubits state 𝚀 is a stable piece of information (it will not evolve further in the computation). Indeed:

a program 𝐩 ↛𝗌 (i.e. in snf) will no longer modify the qubits state.

 Remark 4.7.

Notice instead that a program 𝐩 ↛q (no quantum step is possible) is not necessarily done manipulating the quantum memory: further β reductions may unblock further quantum steps. Think of (λ!x.𝖢𝖭𝖮𝖳⟨Hx,x⟩)(!𝚗𝚎𝚠) from Example 2.1.

4.3 Sum-up Tables

Let us conclude the section by summarizing the reduction relations at play.

Relations.

⊆ 𝒫×𝙼𝙳(𝒫)   Definition                                  Lifted to 𝙼𝙳(𝒫)×𝙼𝙳(𝒫)   Strict lifting
→β          contextual closure of the β-rules           ⇒β
→𝗌β         closure by surface context of the β-rules   ⇒𝗌β                     ⇛𝗌β
→q          closure by surface context of the q-rules   ⇒q                      ⇛q
→           →β ∪ →q                                     ⇒
→𝗌          →𝗌β ∪ →q                                    ⇒𝗌                      ⇛𝗌
→¬𝗌         → ∖ →𝗌                                      ⇒¬𝗌

Reduction Sequences.

Finite reduction sequences
𝚖 ⇒* 𝚗    there is a ⇒-sequence from 𝚖 to 𝚗
𝚖 ⇒𝗌* 𝚗   there is a ⇒𝗌-sequence from 𝚖 to 𝚗
𝚖 ⇛𝗌* 𝚗   there is a ⇛𝗌-sequence from 𝚖 to 𝚗

5 Rewriting Theory for 𝓠: Overview of the Results

We are now going to study reduction on multidistributions of programs, namely the general reduction (corresponding to the lifting ⇒ of →) and surface reduction (corresponding to the lifting ⇒𝗌 of →𝗌), and the relation between the two. Let us discuss each point.

  1. The reduction ⇒ allows for unrestricted β reduction. For example, we can rewrite in the scope of a bang operator ! (perhaps to optimize the thunked code before copying it several times). We prove that ⇒ is confluent, providing a general framework for rewriting theory. This (very liberal) reduction has a foundational role, as the setting in which to study the equational theory of the calculus and to analyze program transformations.

  2. Surface reduction ⇒𝗌 plays the role of an evaluation strategy, in which however the scheduling (how redexes should be fired) is not fully specified (this is not only convenient, allowing for parallel implementation, but necessary for standardization [26]). For example, 𝐩 = [|⟩; ⟨𝚗𝚎𝚠, H𝚗𝚎𝚠⟩] has two surface redexes, enabling two different steps. We will prove (by proving a diamond property) that surface reduction (⇒𝗌) is “essentially deterministic”: while the choice of the redex to fire is non-deterministic, the order in which such choices are performed is irrelevant to the final result.

  3. The two reductions are related by a standardization result (Theorem 6.7), stating that if 𝚖 ⇒* 𝚗 then 𝚖 ⇒𝗌* ⇒¬𝗌* 𝚗. Standardization is the basis of normalization results, concerning properties such as “program 𝐩 terminates with probability p.”

  4. We prove that ⇛𝗌 is a normalizing strategy for ⇒: if 𝐩 may converge to surface normal form with probability p using the general reduction ⇒, then ⇛𝗌 reduction must converge to surface normal form with probability p. Informally, we can write that 𝚖 ⇓ p implies 𝚖 ⇓̄𝗌 p (corresponding to the last line in Table 1). To formalize and prove such a claim we will need more tools, because probabilistic termination is asymptotic, i.e. it appears as the limit of a possibly infinite reduction. We treat this in Section 7, where we rely on techniques from [2, 23, 24].

6 Confluence and Finitary Standardization

We first recall standard notions which we are going to use.

Confluence, Commutation, and all That (a quick recap).

The relation → is confluent if ←* · →* ⊆ →* · ←*. A stricter form is the diamond property, ← · → ⊆ → · ←, which is well known to imply confluence. Two relations →1 and →2 on A commute if ←1* · →2* ⊆ →2* · ←1*. Confluence and factorization are both commutation properties: a relation is confluent if it commutes with itself.

An element u ∈ 𝒜 is a →-normal form if there is no t such that u → t (written u ↛).

On Normalization.

In general, a term t may or may not reduce to a normal form, and if it does, not all reduction sequences necessarily lead to one. How do we compute a normal form? This is the problem tackled by normalization: by repeatedly performing only specific steps, a normal form will be computed, provided that t can reduce to one. Intuitively, a normalizing strategy for → is a reduction strategy which, given a term t, is guaranteed to reach a normal form, if any exists.

6.1 Surface Reduction has the Diamond Property

In this section, we first prove that surface reduction (𝗌 and 𝗌) has the diamond property:

𝚛 ⇐𝗌 𝚖 ⇒𝗌 𝚜 implies 𝚛 ⇒𝗌 𝚗 ⇐𝗌 𝚜 (for some 𝚗) (Diamond)

then we show that ⇒ is confluent.

Here we adapt techniques used in probabilistic rewriting [6, 22, 26]. Proving the diamond property is however significantly harder than in the case of probabilistic λ-calculi, because we also need to take into account the qubits state and the corresponding registers. If a program 𝐩=[𝚀;M] has two different reductions, we need to join in one step not only the terms, but also their qubits states, working up to re-indexing of the registers (recall that programs are equivalence classes modulo re-indexing; see also Example 3.5). The following is an example, using just the simple construct 𝚗𝚎𝚠. Measurement makes the situation even more delicate.

Example 6.1.

Let 𝐩 = [|⟩; ⟨𝚗𝚎𝚠, (H𝚗𝚎𝚠)⟩]. The following are two different reduction sequences from 𝐩. The two normal forms are the same program (see 3.4). Here, |+⟩ ≔ √2/2(|0⟩+|1⟩).

[|⟩; ⟨𝚗𝚎𝚠, (H𝚗𝚎𝚠)⟩] ⇒𝗌 {{[|0⟩; ⟨r0, (H𝚗𝚎𝚠)⟩]}} ⇒𝗌 {{[|0⟩; ⟨r0, (Hr1)⟩]}} ⇒𝗌 {{[|0⟩⊗|+⟩; ⟨r0, r1⟩]}}
[|⟩; ⟨𝚗𝚎𝚠, (H𝚗𝚎𝚠)⟩] ⇒𝗌 {{[|0⟩; ⟨𝚗𝚎𝚠, (Hr0)⟩]}} ⇒𝗌 {{[|+⟩; ⟨𝚗𝚎𝚠, r0⟩]}} ⇒𝗌 {{[|+⟩⊗|0⟩; ⟨r1, r0⟩]}}
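That the two end programs coincide up to re-indexing can be verified concretely: swapping the two qubits of the second memory yields the first. A small numpy check (our illustration; standard qubit ordering assumed):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

first = np.kron(ket0, plus)    # memory |0> (x) |+> with term <r0, r1>
second = np.kron(plus, ket0)   # memory |+> (x) |0> with term <r1, r0>

# re-indexing r0 <-> r1 corresponds to swapping the two qubits
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
assert np.allclose(SWAP @ second, first)
```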

The key result is the following version of diamond (commutation). The proof – quite technical – is given in the extended version [25]. Recall that ⇛𝗌 ⊆ ⇒𝗌.

Lemma 6.2 (Pointed Diamond).

Assume 𝐩=[𝚀;M] and that M has two distinct redexes b and c, such that 𝐩 →𝗌b 𝚖1 and 𝐩 →𝗌c 𝚖2. Then there exists 𝚗 such that 𝚖1 ⇒𝗌c 𝚗 and 𝚖2 ⇒𝗌b 𝚗. Moreover, no term Mi in 𝚖1={{pi[𝚀i;Mi]}}_{i∈I} and no term Mj in 𝚖2={{pj[𝚀j;Mj]}}_{j∈J} is in snf.

From the above result we obtain the diamond property.

Proposition 6.3 (Diamond).

Surface reductions 𝗌 and 𝗌 have the diamond property.

In its stricter form, the diamond property guarantees that the non-determinism in the choice of the redex is irrelevant – hence the reduction ⇛𝗌 is essentially deterministic. The technical name for this property is Newman’s random descent [36]: no matter the choice of redex, all reduction sequences behave the same way, i.e. they have the same length, and if terminating, they end in the same normal form. We use this fact, formalized by Theorem 7.3, to establish that ⇛𝗌 is a normalizing strategy for ⇒.

6.2 Confluence of

We modularize the proof of confluence by using a classical technique, the Hindley-Rosen lemma: if →1 and →2 are binary relations on the same set 𝒜, then their union →1 ∪ →2 is confluent if both →1 and →2 are confluent, and →1 and →2 commute.

Theorem 6.4.

The reduction ⇒ is confluent.

Proof.

The proof that ⇒ = ⇒β ∪ ⇒q is confluent is easily obtained from Lemma 6.2, using the Hindley-Rosen lemma. We already have most of the elements: ⇒β is confluent, because →β is; ⇒q is confluent, because it has the diamond property (Proposition 6.3); ⇒q and ⇒β commute: by Lemma 6.2, we already know that ⇒q and ⇒𝗌β commute, hence we only need to verify that ⇒q and ⇒¬𝗌β commute, which is easily done.

6.3 Surface Standardization

We show that any ⇒-sequence can be factorized as ⇒𝗌* ⇒¬𝗌* (Theorem 6.7). Standardization is proved via the modular technique proposed in [1], which in our notation can be stated as follows:

Lemma 6.5 (Modular Factorization [1]).

⇒* ⊆ ⇒𝗌* · ⇒¬𝗌* if the following conditions hold:

  1. ⇒β* ⊆ ⇒𝗌β* · ⇒¬𝗌β*, and

  2. ⇒¬𝗌β · ⇒q ⊆ ⇒q · ⇒β.

Condition 1. in Lemma 6.5 is an immediate consequence of Simpson’s surface standardization for the Λ! calculus [43], stating that →β* ⊆ →𝗌β* · →¬𝗌β*. Condition 2. in Lemma 6.5 is obtained from the following pointed version:

Lemma 6.6.

[𝚀;M] →¬𝗌β {{[𝚀;P]}} and [𝚀;P] →q 𝚗 implies [𝚀;M] →q · ⇒β 𝚗.

Proof.

By induction on the context S such that P=S⟨R⟩ and [𝚀;S⟨R⟩] →q {{pi[𝚀i;S⟨Ri⟩]}}=𝚗. We exploit in an essential way the fact that M and P have the same shape.

By Lemmas 6.5 and 6.6, we obtain the main result of this section:

Theorem 6.7 (Surface Standardization).

𝚖 ⇒* 𝚗 implies 𝚖 ⇒𝗌* ⇒¬𝗌* 𝚗.

 Remark 6.8 (Strict vs non-strict).

Please observe that standardization is stated in terms of the non-strict lifting ⇒𝗌 of →𝗌, as ⇛𝗌 could reduce more than what is desired. Dually, normalization holds in terms of the strict lifting ⇛𝗌, for the reasons already discussed in Example 4.6.

A Reading of Surface Standardization.

A program 𝐩 in snf will no longer modify the qubits state. Intuitively, 𝐩 has already produced the maximal amount of quantum data that it possibly could. We can read surface standardization as follows. Assume {{𝐩}} ⇒* 𝚗 where all terms in 𝚗 are in snf (we use metavariables Si, Si′ to indicate terms in snf). Standardization guarantees that surface steps suffice to reach a multidistribution 𝚗′ whose programs have the exact same information content as 𝚗:

{{𝐩}} ⇒* 𝚗 = {{pi[𝚀i;Si]}}_{i∈I}  implies  {{𝐩}} ⇒𝗌* 𝚗′ = {{pi[𝚀i;Si′]}}_{i∈I}.

This is because Theorem 6.7 implies {{𝐩}} ⇒𝗌* 𝚗′ ⇒¬𝗌* 𝚗, and from 𝚗′ ⇒¬𝗌* 𝚗 we deduce that each element pi[𝚀i;Si] in 𝚗 must come from an element pi[𝚀i;Si′] in 𝚗′, where Si′ is in snf and where the qubits state 𝚀i (and the associated probability pi) are exactly the same.

7 Probabilistic Termination and Asymptotic Normalization

What does it mean for a program to reach surface normal form (snf)? Since measurement makes the reduction probabilistic, we need to give a quantitative answer.

Probabilistic Termination.

The probability that the system described by the multidistribution 𝚖={{pi[𝚀i;Mi] ∣ i∈I}} is in surface normal form is expressed by a scalar p = P(𝚖) ∈ [0,1], defined as follows:

P(𝚖) = Σ_{i∈I} qi, where qi = pi if Mi is in snf, and qi = 0 otherwise.

Let 𝐩=[𝚀;M] and 𝚖0={{𝐩}}. Let 𝚖0 ⇒ 𝚖1 ⇒ 𝚖2 ⇒ ⋯ be a reduction sequence; P(𝚖k) expresses the probability that after k steps 𝐩 is in snf. The probability that 𝐩 reaches snf along the (possibly infinite) reduction sequence ⟨𝚖n⟩_{n∈ℕ} is naturally defined as a limit: sup_n{P(𝚖n)}. We also say that the sequence ⟨𝚖n⟩_{n∈ℕ} converges with probability sup_n{P(𝚖n)}.

Example 7.1 (Recursive coin, cont.).

Consider again Example 4.3. After 4 steps, the program terminates with probability 1/2. After 4 more steps, it terminates with probability 1/2 + 1/4, and so on. At the limit, the reduction sequence converges with probability Σ_{k≥1} 1/2^k = 1.
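Numerically, the limit is immediate to check: after k coin rounds the probability of snf is 1 − 2⁻ᵏ, a monotone sequence with supremum 1. A two-line sketch (ours):

```python
# P(m_k) after k coin rounds of the recursive coin (Example 7.1)
probs = [1 - 2**-k for k in range(1, 30)]
assert all(b > a for a, b in zip(probs, probs[1:]))  # strictly increasing
assert 1 - probs[-1] < 1e-8                          # the supremum is 1
```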

7.1 Accounting for Several Possible Reduction Sequences

Since ⇒ is not a deterministic reduction, given a multidistribution 𝚖, there are several possible reduction sequences from 𝚖, and therefore several outcomes (limits) are possible. Following [23], we adopt the following terminology:

Definition 7.2 (Limits).

Given 𝚖, we write:

  • 𝚖 ⇓ p, if there exists a ⇒-sequence ⟨𝚖n⟩_{n∈ℕ} from 𝚖 whose limit is p.

  • 𝙻𝚒𝚖(𝚖,⇒) ≔ {p ∣ 𝚖 ⇓ p} is the set of limits of 𝚖.

  • ⇓(𝚖) denotes the greatest element of 𝙻𝚒𝚖(𝚖,⇒), if any exists.

Intuitively, ⇓(𝐩) is the best result that any ⇒-sequence from 𝐩 can effectively produce. If the set 𝙻𝚒𝚖(𝐩,⇒) has a sup α but no greatest element (think of the half-open interval [0,1)), it means that in fact no reduction sequence can produce α as a limit. Notice also that, when reduction is deterministic, from any 𝐩 there is only one maximal reduction sequence ⟨𝚖n⟩, and so it is always the case that ⇓(𝐩) = sup_n{P(𝚖n)}. Below we exploit the interplay between the different rewriting relations and their limits; our notations are summarized in Table 2.

Table 2: Limit of (possibly infinite) reduction sequences.
Convergence (Definition 7.2)
𝚖 ⇓ p     there is a ⇒-sequence from 𝚖 which converges with probability p
𝚖 ⇓𝗌 p    there is a ⇒𝗌-sequence from 𝚖 which converges with probability p
𝚖 ⇓̄𝗌 p    there is a ⇛𝗌-sequence from 𝚖 which converges with probability p

7.2 Asymptotic Normalization

Given a quantum program 𝐩, does ⇓(𝐩) exist? If so, can we define a normalizing strategy which is guaranteed to converge to ⇓(𝐩)? The answer is positive. The main result of this section is that such a normalizing strategy does exist, and it is ⇛𝗌. More precisely, we show that any ⇛𝗌-reduction sequence from 𝐩 converges to the same limit, which is exactly ⇓(𝐩). We establish the following results, for an arbitrary 𝚖 ∈ 𝙼𝙳(𝒫). Theorem 7.3 is a direct – and the most important – consequence of the diamond property of ⇛𝗌; the proof uses both point 1. and point 2. of Lemma 6.2. For Theorem 7.4, the proof relies on an abstract technique from [24].

Theorem 7.3 (Random Descent).

All ⇛𝗌-sequences from 𝚖 converge to the same limit.

Theorem 7.4 (Asymptotic completeness).

𝚖 ⇓ p implies 𝚖 ⇓̄𝗌 q, with p ≤ q.

Theorem 7.4 states that, for each 𝚖, if ⇒ reduction may converge to snf with probability p, then ⇛𝗌 reduction must converge to snf with probability (at least) p. Theorem 7.3 states that, for each 𝚖, the limit q of strict surface reduction (⇛𝗌) from 𝚖 is unique.

Summing up, the limit q of ⇛𝗌 reduction is the best convergence result that any ⇒-sequence from 𝚖 can produce. Since ⇛𝗌 ⊆ ⇒, q is also the greatest element in 𝙻𝚒𝚖(𝚖,⇒), i.e. ⇓(𝚖) = q. We have hence proved the following, where item (2.) is the asymptotic analogue of the normalization results in Table 1.

Theorem 7.5 (Asymptotic normalization).

For each 𝐩 ∈ 𝒫: (1.) the set 𝙻𝚒𝚖(𝐩,⇒) has a greatest element ⇓(𝐩), and (2.) 𝐩 ⇓̄𝗌 ⇓(𝐩).

8 Related Work and Discussion

In this paper, we propose a foundational notion of (untyped) quantum λ-calculus with a general reduction, encompassing the full strength of β-reduction while staying compatible with quantum constraints. We then introduce an evaluation strategy, and derive standardization and confluence results. We finally discuss normalization of programs at the limit.

Related Work.

For quantum λ-calculi without measurement, hence without probabilistic behavior, confluence [13, 5] (and even a special form of standardization [13]) has been studied since early work. When dealing with measurement, the analysis is far more challenging. To our knowledge, only confluence has been studied, in pioneering work by Dal Lago, Masini and Zorzi [14]. Remarkably, in order to deal with probabilistic and asymptotic behavior, well before the advances in probabilistic rewriting from which we profit, the authors introduce a very elaborate technique. Notice that in [14] reduction is non-deterministic, but restricted to surface reduction. In our paper, their result roughly corresponds to the diamond property of ⇒𝗌, together with Theorem 7.3.

No “standard” standardization results (like the classical ones we recall in Table 1) exist in the literature for the quantum setting. Notice that the form of standardization in [13] is a reordering of the (surface) measurement-free reduction steps, so as to perform first the beta steps, then the quantum steps, in agreement with the idea that a quantum computer consists of a classical device “setting up” a quantum circuit, which is then fed with an input. A similar refinement is also possible for the corresponding fragment of our calculus (namely measurement-free →𝗌), but it clearly does not scale: think of (λx.x)(𝚖𝚎𝚊𝚜(U_H 𝚗𝚎𝚠, M, N)), where the argument of a function is guarded by a measurement.

Our term language is close to that of [14]. How does such a calculus relate to a Call-by-Value λ-calculus such as [42]? A first level of answer is that our setting is an untyped λ-calculus; linear abstraction, together with the well-forming rules, allows for the management of quantum data. In [42], the same role is fulfilled by the (Linear Logic based) typing system.

Despite these differences, we do expect that our results can be transferred. As already mentioned, the redex (λ!x.M)!N reflects a Call-by-Push-Value mechanism, which in untyped form has been extensively studied in the literature under the name of Bang calculus [20, 31, 10], as a uniform framework to encode both Call-by-Name (CbN) and Call-by-Value (CbV). Semantical but also syntactical properties, including confluence [20, 31] and standardization [24, 3], are analyzed in the Bang setting, and then transferred via reverse simulation to both CbV and CbN. More precisely, a CbV (resp. CbN) translation maps forth-and-back weak (resp. head) reduction into surface reduction. Surface normal forms are the CbV image of values (and the CbN image of head normal forms). Since the Bang calculus is exactly the fragment of Simpson’s calculus [43] without linear abstraction, one may reasonably expect that our calculus can play a similar role in the quantum setting. It seems however that a back-and-forth translation of CbV (or CbN) will need to encompass types.

A last line of work worth mentioning is the series of papers based on Lineal [5, 4, 18]. However, these works differ from our approach in that the λ-terms themselves are subject to superposition: the distinction between classical and quantum data in an untyped setting is unclear.

References

  • [1] Beniamino Accattoli, Claudia Faggian, and Giulio Guerrieri. Factorize factorization. In 29th EACSL Annual Conference on Computer Science Logic, CSL 2021, January 25-28, 2021, Ljubljana, Slovenia (Virtual Conference), volume 183 of LIPIcs, pages 6:1–6:25. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.CSL.2021.6.
  • [2] Zena M. Ariola and Stefan Blom. Skew confluence and the lambda calculus with letrec. Annals of Pure and Applied Logic, 117(1):95–168, 2002. doi:10.1016/S0168-0072(01)00104-X.
  • [3] Victor Arrial, Giulio Guerrieri, and Delia Kesner. The benefits of diligence. International Joint Conference on Automated Reasoning, IJCAR 2024,, 2024.
  • [4] Pablo Arrighi, Alejandro Díaz-Caro, and Benoît Valiron. The vectorial lambda-calculus. Information and Computation, 254:105–139, 2017. doi:10.1016/j.ic.2017.04.001.
  • [5] Pablo Arrighi and Gilles Dowek. Lineal: A linear-algebraic lambda-calculus. Logical Methods in Computer Science, 13(1), 2017. doi:10.23638/LMCS-13(1:8)2017.
  • [6] Martin Avanzini, Ugo Dal Lago, and Akihisa Yamada. On probabilistic term rewriting. Sci. Comput. Program., 185, 2020. doi:10.1016/J.SCICO.2019.102338.
  • [7] Hendrik Pieter Barendregt. The Lambda Calculus – Its Syntax and Semantics, volume 103 of Studies in logic and the foundations of mathematics. North-Holland, 1984.
  • [8] Benjamin Bichsel, Maximilian Baader, Timon Gehr, and Martin T. Vechev. Silq: a high-level quantum language with safe uncomputation and intuitive semantics. In Alastair F. Donaldson and Emina Torlak, editors, Proceedings of the 41st ACM SIGPLAN International Conference on Programming Language Design and Implementation, PLDI’20, pages 286–300. ACM, 2020. doi:10.1145/3385412.3386007.
  • [9] Olivier Bournez and Florent Garnier. Proving positive almost sure termination under strategies. In Rewriting Techniques and Applications, RTA, pages 357–371, 2006. doi:10.1007/11805618_27.
  • [10] Antonio Bucciarelli, Delia Kesner, Alejandro Ríos, and Andrés Viso. The bang calculus revisited. Inf. Comput., 293:105047, 2023. doi:10.1016/J.IC.2023.105047.
  • [11] Christophe Chareton, Sébastien Bardin, Franccois Bobot, Valentin Perrelle, and Benoît Valiron. An automated deductive verification framework for circuit-building quantum programs. In Nobuko Yoshida, editor, Proceedings of the 30th European Symposium on Programming Languages and Systems, ESOP 2021, volume 12648 of Lecture Notes in Computer Science, pages 148–177. Springer, 2021. doi:10.1007/978-3-030-72019-3_6.
  • [12] Ugo Dal Lago, Claudia Faggian, Benoît Valiron, and Akira Yoshimizu. The geometry of parallelism: classical, probabilistic, and quantum effects. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, pages 833–845. ACM, 2017. doi:10.1145/3009837.
  • [13] Ugo Dal Lago, Andrea Masini, and Margherita Zorzi. On a measurement-free quantum lambda calculus with classical control. Math. Struct. Comput. Sci., 19(2):297–335, 2009. doi:10.1017/S096012950800741X.
  • [14] Ugo Dal Lago, Andrea Masini, and Margherita Zorzi. Confluence results for a quantum lambda calculus with measurements. Electr. Notes Theor. Comput. Sci., 270(2):251–261, 2011. doi:10.1016/j.entcs.2011.01.035.
  • [15] Ugo Dal Lago and Margherita Zorzi. Probabilistic operational semantics for the lambda calculus. RAIRO Theor. Informatics Appl., 46(3):413–450, 2012. doi:10.1051/ita/2012012.
  • [16] Ugo de’Liguoro and Adolfo Piperno. Non deterministic extensions of untyped lambda-calculus. Inf. Comput., 122(2):149–177, 1995. doi:10.1006/INCO.1995.1145.
  • [17] Alessandra Di Pierro, Chris Hankin, and Herbert Wiklicky. Probabilistic lambda-calculus and quantitative program analysis. J. Log. Comput., 15(2):159–179, 2005. doi:10.1093/LOGCOM/EXI008.
  • [18] Alejandro Díaz-Caro, Mauricio Guillermo, Alexandre Miquel, and Benoît Valiron. Realizability in the unitary sphere. In Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS’19, pages 1–13. IEEE, 2019. doi:10.1109/LICS.2019.8785834.
  • [19] Alejandro Díaz-Caro and Guido Martinez. Confluence in probabilistic rewriting. Electr. Notes Theor. Comput. Sci., 338:115–131, 2018.
  • [20] Thomas Ehrhard and Giulio Guerrieri. The bang calculus: an untyped lambda-calculus generalizing call-by-name and call-by-value. In Proceedings of the 18th International Symposium on Principles and Practice of Declarative Programming (PPDP 2016), pages 174–187. ACM, 2016. doi:10.1145/2967973.2968608.
  • [21] Thomas Ehrhard, Michele Pagani, and Christine Tasson. The computational meaning of probabilistic coherence spaces. In Proceedings of the 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011, June 21-24, 2011, Toronto, Ontario, Canada, pages 87–96. IEEE Computer Society, 2011. doi:10.1109/LICS.2011.29.
  • [22] Claudia Faggian. Probabilistic rewriting: Normalization, termination, and unique normal forms. In Herman Geuvers, editor, 4th International Conference on Formal Structures for Computation and Deduction, FSCD 2019, June 24-30, 2019, Dortmund, Germany, volume 131 of LIPIcs, pages 19:1–19:25. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.FSCD.2019.19.
  • [23] Claudia Faggian. Probabilistic rewriting and asymptotic behaviour: on termination and unique normal forms. Log. Methods Comput. Sci., 18(2), 2022. doi:10.46298/LMCS-18(2:5)2022.
  • [24] Claudia Faggian and Giulio Guerrieri. Factorization in call-by-name and call-by-value calculi via linear logic. In Foundations of Software Science and Computation Structures - 24th International Conference, FOSSACS 2021, volume 12650 of Lecture Notes in Computer Science, pages 205–225. Springer, 2021. doi:10.1007/978-3-030-71995-1_11.
  • [25] Claudia Faggian, Gaetan Lopez, and Benoît Valiron. A rewriting theory for quantum λ-calculus. CoRR, abs/2411.14856, 2024. arXiv:2411.14856.
  • [26] Claudia Faggian and Simona Ronchi Della Rocca. Lambda calculus and probabilistic computation. In 34th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2019, Vancouver, BC, Canada, June 24-27, 2019, pages 1–13. IEEE, 2019. doi:10.1109/LICS.2019.8785699.
  • [27] Francesco Gavazzo and Claudia Faggian. A relational theory of monadic rewriting systems, part I. In 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021, Rome, Italy, June 29 - July 2, 2021, pages 1–14. IEEE, 2021. doi:10.1109/LICS52264.2021.9470633.
  • [28] Simon J. Gay. Quantum programming languages: survey and bibliography. Mathematical Structures in Computer Science, 16(4):581–600, 2006. doi:10.1017/S0960129506005378.
  • [29] Jean-Yves Girard. Linear logic. Theor. Comput. Sci., 50:1–102, 1987. doi:10.1016/0304-3975(87)90045-4.
  • [30] Alexander S. Green, Peter LeFanu Lumsdaine, Neil J. Ross, Peter Selinger, and Benoît Valiron. Quipper: A scalable quantum programming language. In Hans-Juergen Boehm and Cormac Flanagan, editors, Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI’13, pages 333–342. ACM, 2013. doi:10.1145/2491956.2462177.
  • [31] Giulio Guerrieri and Giulio Manzonetto. The bang calculus and the two Girard’s translations. In Proceedings Joint International Workshop on Linearity & Trends in Linear Logic and Applications (Linearity-TLLA 2018), volume 292 of EPTCS, pages 15–30, 2019. doi:10.4204/EPTCS.292.2.
  • [32] Jan-Christoph Kassing, Florian Frohn, and Jürgen Giesl. From innermost to full almost-sure termination of probabilistic term rewriting. In Naoki Kobayashi and James Worrell, editors, Foundations of Software Science and Computation Structures - 27th International Conference, FoSSaCS 2024, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2024, Luxembourg City, Luxembourg, April 6-11, 2024, Proceedings, Part II, volume 14575 of Lecture Notes in Computer Science, pages 206–228. Springer, 2024. doi:10.1007/978-3-031-57231-9_10.
  • [33] Maja H. Kirkeby and Henning Christiansen. Confluence and convergence in probabilistically terminating reduction systems. In Logic-Based Program Synthesis and Transformation - 27th International Symposium, LOPSTR 2017, pages 164–179, 2017. doi:10.1007/978-3-319-94460-9_10.
  • [34] Emanuel H. Knill. Conventions for quantum pseudocode. Technical Report LAUR-96-2724, Los Alamos National Laboratory, Los Alamos, New Mexico, US., 1996.
  • [35] Dongho Lee, Valentin Perrelle, Benoît Valiron, and Zhaowei Xu. Concrete categorical model of a quantum circuit description language with measurement. In Mikolaj Bojanczyk and Chandra Chekuri, editors, Proceedings of the 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2021, volume 213 of LIPIcs, pages 51:1–51:20, 2021. doi:10.4230/LIPIcs.FSTTCS.2021.51.
  • [36] M.H.A. Newman. On theories with a combinatorial definition of equivalence. Annals of Mathematics, 43(2), 1942.
  • [37] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2002.
  • [38] Michele Pagani, Peter Selinger, and Benoît Valiron. Applying quantitative semantics to higher-order quantum computing. In Suresh Jagannathan and Peter Sewell, editors, Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’14), pages 647–658. ACM, 2014. doi:10.1145/2535838.2535879.
  • [39] Luca Paolini, Mauro Piccolo, and Margherita Zorzi. qPCF: higher-order languages and quantum circuits. Journal of Automated Reasoning, 63(4):941–966, 2019. doi:10.1007/s10817-019-09518-y.
  • [40] Jennifer Paykin, Robert Rand, and Steve Zdancewic. QWIRE: a core language for quantum circuits. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL’17, pages 846–858. ACM, 2017. doi:10.1145/3009837.3009894.
  • [41] Gordon D. Plotkin. Call-by-name, call-by-value and the lambda-calculus. Theor. Comput. Sci., 1(2):125–159, 1975. doi:10.1016/0304-3975(75)90017-1.
  • [42] Peter Selinger and Benoît Valiron. A lambda calculus for quantum computation with classical control. Mathematical Structures in Computer Science, 16:527–552, 2006. doi:10.1017/S0960129506005238.
  • [43] Alex K. Simpson. Reduction in a linear lambda-calculus with applications to operational semantics. In Term Rewriting and Applications, 16th International Conference (RTA 2005), volume 3467 of Lecture Notes in Computer Science, pages 219–234, 2005. doi:10.1007/978-3-540-32033-3_17.

Appendix A Convention for Garbage Collection

In the definition of programs, we use the convention that the size of the memory is exactly the number of registers manipulated in the term. The memory will grow when new qubits are allocated, and shrink when qubits are read (see Figure 1): the reduction performs garbage collection on the fly.

While this convention makes it easy to identify identical programs, it makes the proofs a bit cumbersome. We therefore rely, in the proofs, on an equivalent representation, where a program can have spurious qubits, as long as they are not entangled with the rest of the memory – i.e. measuring them would not change the state of the registers manipulated by the term. So for instance, in this model [|0⟩⊗|ψ⟩; r1] is the same as [|ψ⟩; r0].
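The side condition "not entangled with the rest of the memory" can be tested concretely: the global state must factor, equivalently the reduced density matrix of the non-spurious registers must be pure. The following numpy sketch (our illustration, not the paper's formalism) implements this purity test.

```python
import numpy as np

def reduced_is_pure(state, keep):
    """state: vector over n qubits; keep: set of indices of non-spurious qubits."""
    n = int(np.log2(state.size))
    psi = state.reshape([2] * n)
    drop = [q for q in range(n) if q not in keep]
    # flatten: rows indexed by the dropped qubits, columns by the kept ones
    psi = np.moveaxis(psi, drop, list(range(len(drop)))).reshape(2**len(drop), -1)
    rho = psi.conj().T @ psi                 # reduced density matrix of 'keep'
    return np.isclose(np.trace(rho @ rho).real, 1.0)

ket0 = np.array([1.0, 0.0])
psi_prod = np.kron(ket0, np.array([0.6, 0.8]))   # |0> (x) |psi>: factors
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # entangled pair

assert reduced_is_pure(psi_prod, keep={1})       # the spurious |0> is droppable
assert not reduced_is_pure(bell, keep={1})       # entangled: not droppable
```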

Appendix B Technical properties

In all the proofs we freely use the following closure property, which is immediate by the definition of context and surface context.

Fact B.1 (Closure).

1. [𝚀;M] →q {{pi[𝚀i;Mi]}} implies [𝚀;S⟨M⟩] →q {{pi[𝚀i;S⟨Mi⟩]}}

2. [𝚀;M] →β {{[𝚀;M′]}} implies [𝚀;C⟨M⟩] →β {{[𝚀;C⟨M′⟩]}}

Surface closure (point 1.) also holds with β in place of q.

We will also use the following lemma (analogous to substitutivity in [7], p. 54). The proof is straightforward.

Lemma B.2 (Substitutivity).

Assume [𝚀;P] ∈ 𝒫 and [𝚀;P] → {{pi[𝚀i;Pi]}}. Then, for each term N: [𝚀;P{N/x}] → {{pi[𝚀i;Pi{N/x}]}}.

The converse also holds, and it is simply Fact B.1, which can be reformulated as follows.

Fact B.3.

Assume [𝚀;N] ∈ 𝒫, [𝚀;N] →q {{pi[𝚀i;Ni]}}, and let P be a term such that x is linear in P. Then [𝚀;P{N/x}] →q {{pi[𝚀i;P{Ni/x}]}}.

Surface Reduction.

→𝗌 has a prominent role; we spell out its definition.

Surface reduction step: →𝗌 := →𝗌β ∪ →q

Surface β step: [𝚀;M] ↦β {{[𝚀;M′]}} implies [𝚀;S⟨M⟩] →𝗌β {{[𝚀;S⟨M′⟩]}}
(Surface) q-step: [𝚀;M] ↦q {{pi[𝚀i;Mi]}} implies [𝚀;S⟨M⟩] →q {{pi[𝚀i;S⟨Mi⟩]}}

Figure 5: Surface Reduction Steps.

Appendix C Surface Reduction has the Diamond Property

We obtain the diamond property (Proposition 6.3) from the pointed diamond result, using the following technique (from [26]), which allows us to work pointwise.

Lemma (pointwise Criterion (FaggianRonchi19)).

Let a,b𝒫×𝙼𝙳(𝒫) and a,b their lifting. To prove that a,b diamond-commute, i.e.

If 𝐩b𝚖1 and 𝐩a𝚖2, then 𝚛 s.t. 𝚗a𝚛 and 𝚜b𝚛.

it suffices to prove the property (#) below (stated in terms of a single program 𝐩)

(#) If 𝐩b𝚖1 and 𝐩a𝚖2, then 𝚛 s.t. 𝚗a𝚛 and 𝚜b𝚛.

The same result holds with ⤇ in place of ⇒.
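To see the shape of the criterion concretely, here is a small executable sketch (a toy probabilistic abstract rewrite system with hypothetical names, not the quantum calculus): two pointwise relations →_a, →_b from programs to multidistributions, their liftings obtained by reducing every program occurring in a multidistribution, and a check that, as in the conclusion of the criterion, the two sides of the diamond meet when starting from a single program.

```python
from fractions import Fraction

# A multidistribution is a list of (probability, program) pairs.
# Programs here are pairs (m, n) of counters; ->_a acts on m, ->_b on n.
# The toy system and all names are illustrative, not the quantum calculus.

def step_a(t):
    """Pointwise relation ->_a : decrement the first counter (deterministic)."""
    m, n = t
    return [(Fraction(1), (m - 1, n))] if m > 0 else None

def step_b(t):
    """Pointwise relation ->_b : probabilistically decrement the second counter."""
    m, n = t
    if n == 0:
        return None
    return [(Fraction(1, 2), (m, n - 1)), (Fraction(1, 2), (m, max(n - 2, 0)))]

def lift(step, md):
    """Lifting => of a pointwise relation: reduce every reducible program
    in the multidistribution, leave normal forms untouched."""
    out = []
    for p, t in md:
        succ = step(t)
        if succ is None:
            out.append((p, t))
        else:
            out.extend((p * q, u) for q, u in succ)
    return out

def canon(md):
    """Collapse a multidistribution to a distribution, for comparison."""
    acc = {}
    for p, t in md:
        acc[t] = acc.get(t, Fraction(0)) + p
    return sorted(acc.items())

# Starting from a single program p0, close the diamond both ways:
p0 = (1, 2)
m1 = step_b(p0)          # p0 ->_b m1
m2 = step_a(p0)          # p0 ->_a m2
r1 = lift(step_a, m1)    # m1 =>_a r
r2 = lift(step_b, m2)    # m2 =>_b r
assert canon(r1) == canon(r2)
```

Since →_a and →_b act on independent components of the program, the pointwise diamond holds, and the lifted reductions meet in the same multidistribution.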

The criterion together with Lemma 6.2 (Point 1.) yields

Proposition (6.3).

Surface reduction ⇒_𝗌 has the diamond property. The same holds for ⤇_𝗌.

Appendix D Finitary Standardization

Shape Preservation.

We recall a basic but key property of contextual closure: if a step →_γ is obtained by closing a rule under a non-empty context, then it preserves the shape of the term. We say that T and T′ have the same shape if both terms belong to the same production (i.e., both are an application, an abstraction, a variable, a register, a term of shape !P, 𝚗𝚎𝚠, etc.).

Fact D.1 (Shape preservation).

Assume [𝚀 ; M] → {{p_i [𝚀_i ; M_i]}}, with M = C⟨R⟩, M_i = C⟨R_i⟩, and the context C non-empty. Then, for each i, M and M_i have the same shape.

An easy-to-verify consequence is the following, stating that non-surface steps (→_{¬𝗌}):

  • do not change the quantum memory;

  • do not change the shape of the terms.

Notice that the qubit state is unchanged by ¬𝗌 steps, since a ¬𝗌 step can only be a ¬𝗌β step, and β steps do not act on the memory.

Lemma D.2 (Redexes and normal forms preservation).

Assume [𝚀 ; M] →_{¬𝗌β} {{[𝚀 ; M′]}}.

  1. M is a redex iff M′ is a redex. In this case, either both are β-redexes, or both are meas-redexes.

  2. M is 𝗌-normal if and only if M′ is 𝗌-normal.
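A minimal illustration (assuming, as in our calculus, that surface contexts do not enter the scope of an abstraction): a β step fired under a λ is a ¬𝗌β step, and it preserves shape, redexes, and 𝗌-normality.

```latex
% C = \lambda x.\langle\cdot\rangle is a non-empty, non-surface context:
\[
  [\,\mathtt{Q}\;;\;\lambda x.\,((\lambda y.\,y)\,x)\,]
  \;\to_{\neg\mathsf{s}\beta}\;
  [\,\mathtt{Q}\;;\;\lambda x.\,x\,]
\]
% Both terms are abstractions (same shape), neither is itself a redex,
% and both are s-normal, as stated in Lemma D.2.
```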

Proof of Lemma 6.6.

Lemma (6.6).

[𝚀 ; M] →_{¬𝗌β} {{[𝚀 ; P]}} and [𝚀 ; P] →_q 𝚗 implies [𝚀 ; M] →_q ⋅ ⇒_β 𝚗.

Proof.

By induction on the context S such that P = S⟨R⟩ and [𝚀 ; S⟨R⟩] →_q {{p_i [𝚀_i ; S⟨R_i⟩]}} = 𝚗. We exploit in an essential way the fact that M and P have the same shape.

Appendix E Asymptotic normalization

Proof Sketch.

The proof of Theorem 7.4 relies on an abstract result from the literature [24], which here we reformulate in our setting:

Lemma E.1 (Asymptotic completeness criterion [24]).

Assume

  i. 𝗌-factorisation: if 𝚖 ⇒* 𝚗, then 𝚖 ⇒_𝗌* ⋅ ⇒_{¬𝗌}* 𝚗;

  ii. ¬𝗌-neutrality: 𝚖 ⇒_{¬𝗌} 𝚖′ implies P(𝚖) = P(𝚖′).

Then:   𝚖 ⇓ p implies 𝚖 ⇓_𝗌 p.
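The shape of the argument behind the criterion, as a sketch: every finite approximant of the limit probability can be reorganized, by (i), into a surface prefix, which by (ii) loses no probability.

```latex
% For every finite reduction m =>* n realizing P(n) >= p - eps, factorize:
\[
  \mathtt{m} \;\Rightarrow_{\mathsf{s}}^{*}\; \mathtt{r}
             \;\Rightarrow_{\neg\mathsf{s}}^{*}\; \mathtt{n}
  \qquad\text{with}\qquad
  P(\mathtt{r}) \;=\; P(\mathtt{n}) \;\geq\; p - \varepsilon .
\]
% Hence surface reduction alone attains probability arbitrarily close to p.
```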

Proof of Theorem 7.4.

We establish the two items below, and then compose them.

  1. 𝚖 ⇓ p implies 𝚖 ⇓_𝗌 p;

  2. 𝚖 ⇓_𝗌 p implies that the full surface strategy ⤇_𝗌 converges from 𝚖 with probability p′, where p ≤ p′.

Item (1.) holds because surface reduction satisfies both conditions in Lemma E.1: point (i.) holds by the factorisation result (6.7), point (ii.) by Lemma D.2. Item (2.) is immediate.