
A Coinductive Representation of Computable Functions

Alvin Tang, The Australian National University, Canberra, Australia
Dirk Pattinson, The Australian National University, Canberra, Australia
Abstract

We investigate a representation of computable functions as total functions over 2^∞, the set of finite and infinite sequences over {0,1}. In this model, infinite sequences are interpreted as non-terminating computations, whilst finite sequences represent the sum of their digits. We introduce a new definition principle, function space corecursion, that simultaneously generalises minimisation and primitive recursion. This gives rise to the class of computable corecursive functions, which is closed under composition and function space corecursion. We prove that computable corecursive functions represent all partial recursive functions, and show that all computable corecursive functions are indeed computable by translation into the untyped λ-calculus.

Keywords and phrases:
Computability, Coinduction
Copyright and License:
© Alvin Tang and Dirk Pattinson; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Models of computation; Theory of computation → Streaming models; Theory of computation → Recursive functions
Acknowledgements:
The term “hiaton” to describe a token that symbolises a computational step without visible observations was suggested by Thomas Streicher.
Editors:
Corina Cîrstea and Alexander Knapp

1 Introduction

Partial recursive functions as a model of computation intuitively operate on natural numbers, and have therefore been used as a foundation of recursion theory. A notable drawback, however, is that they are partial – function values may be undefined. This complicates their use in formal reasoning, especially given the lack of support for partial functions in many interactive theorem provers [8], and embedding partial (recursive) functions into type theory requires restrictions to assert well-definedness of values [15].

Coprogramming [2, 7] has been proposed as a way to deal with non-terminating computations. The main idea here is that functions progressively reveal an increasing part of their possibly infinite output. There are plenty of models tailored to specific programming paradigms and even type theories to ensure the coherence of corecursive function definitions [13]. Partial recursive functions, on the other hand, appear to be on the opposite side of the spectrum as they do not produce infinite function values – the result of an infinite computation is simply undefined.

Conceptually, the partiality in the theory of recursive functions arises from the minimisation operator. Given a function f, we compute μf by successively evaluating f(0), f(1), …, and return n once we find that f(n) = 0. Nonetheless, once it is known that f(0) > 0, we can be sure that μf must be no less than 1 if it is defined. Using a unary representation of natural numbers, it is already safe to emit the digit 1. This process can then be repeated for n = 2, 3, …, emitting more 1's whenever we find that f(n) > 0. Once we find that f(n) = 0, the computation terminates.

This leads to the idea of regaining totality by representing partial recursive functions as functions that operate on the set of finite and infinite sequences of 1's, where 1^ω = (1,1,1,…) represents the result of an infinite computation, such as the case for μf where f(n) > 0 for all n.

This unary representation of natural numbers, albeit an intuitive coinductive data structure, does not suffice for expressing all computations, at least if we aim for computable functions to be closed under composition. Furthermore, it is necessary to consider what happens when the infinite sequence 1^ω is taken as a function argument. As an example, consider the function even that determines the parity of its argument, where we represent false by the empty sequence and true by any non-empty sequence. In order to be total, the computation of even(1^ω) must, after finitely many steps, either emit the empty sequence as a result and terminate, whence even(1^ω) = false, or output an initial digit 1, which indicates even(1^ω) = true. At this point, even will have only read a finite section of its input, say the first k digits. This entails even(1^ω) = even(1^k) = even(1^{k+1}), which is an obvious contradiction. This demonstrates that even has no representation as a total computable function on the set of finite and infinite sequences of 1's. In fact, this exemplifies that “computable functions are continuous” [20, page 73].

We solve this problem by considering computations over 2^∞, the set of finite and infinite sequences over {0,1}. Here, the digit 0 represents a hiaton that can be read as “we do not know yet”. The computation of even(1^ω) would then be expected to yield the value 0^ω, consistent with the intuition that at any finite stage of computing even(1^ω), the result is as yet unknown.

This provides the starting point of this computation model – considering functions as programs operating on the coinductive data structure of binary sequences. Our main contribution is a new definition principle that we call function space corecursion, allowing us to represent precisely all partial recursive functions. We show that function space corecursion can be used to express both primitive recursion and minimisation. The interpretation of binary sequences is formally defined using mixed induction-coinduction, which is essential for correlating the behaviour of these corecursive functions with that of partial recursive functions.

Analogous to partial recursive functions, the class of computable corecursive functions contains the zero, successor and projection functions, and is closed under composition and function space corecursion. A principal result of this work is the representation theorem establishing the Turing completeness of this model. It comprises the completeness theorem, which proves that every partial recursive function can be represented in this functional model, and the soundness theorem, which asserts that every computable corecursive function is indeed computable, by encoding computable corecursive functions in the untyped λ-calculus.

Whilst this paper focuses on the (co)algebraic foundations of the corecursive model and its expressive power, it opens up a large number of new lines of investigation, some of which are discussed in the conclusion.

Related Work

The interpretation of computable functions over the set 2^∞ is closely related to lazy computation, which intuitively allows partial outputs to be emitted before the end result of the computation is known. More formally, identifying sequences with the same number of 1's leads to the lazy natural numbers, whose computability aspects have been studied in [1, 9]. Function space corecursion, the key principle this paper contributes, is nevertheless novel. Computation on possibly infinite sequences is the domain of type-2 Turing machines [19], but the interpretation is different – each sequence is a representation of a point in a topological space, rather than a trace of an intensional computation process. Infinite-time Turing machines [10] are also different, as they are machine models of computation where ordinal-many steps can be taken.

2 Preliminaries and Notation

We use the standard set-theoretic notation and basic facts about algebras and coalgebras of set-endofunctors that do not go beyond the tutorial on (co)algebras and (co)induction [11]. We write × for the Cartesian product, and + for the coproduct in the category of sets and functions. Our universal algebra is similarly basic and is covered by [4, 6, 18]. In particular, we use algebraic signatures (i.e., sets of function symbols with arities) and write TΣ(X) for the set of terms with variables in X that can be constructed from a signature Σ.

We presuppose familiarity with partial recursive functions (see [16]) that we conceptualise as the least class that contains the constant zero, the unary successor function (suc), all k-ary projections to the ith element (π_i^k), and is closed under primitive recursion as well as minimisation. We write f(x1,…,xk)↓ if f(x1,…,xk) is defined, and f(x1,…,xk)↑ otherwise. To be precise, we take the composition of a partial function f: ℕ^n ⇀ ℕ with a tuple g1,…,gn of k-ary partial recursive functions to be the function f∘⟨g1,…,gn⟩: ℕ^k ⇀ ℕ where f∘⟨g1,…,gn⟩(x1,…,xk) = f(g1(x1,…,xk),…,gn(x1,…,xk)) if all of g1(x1,…,xk),…,gn(x1,…,xk) are defined, and undefined otherwise. Note that even if all gi(x1,…,xk) are defined, f might be undefined at (g1(x1,…,xk),…,gn(x1,…,xk)). In this case, f∘⟨g1,…,gn⟩(x1,…,xk)↑ holds. Similarly, if g: ℕ^k ⇀ ℕ and h: ℕ^{k+2} ⇀ ℕ are partial recursive functions, PR(g,h): ℕ^{k+1} ⇀ ℕ is given by:

PR(g,h)(0, x1,…,xk) = g(x1,…,xk)
PR(g,h)(n+1, x1,…,xk) = h(n, PR(g,h)(n, x1,…,xk), x1,…,xk)

in case all arguments of h above are defined.

A similar caveat applies to minimisation, where μ(f)(x1,…,xk) = n if:

  • f(m, x1,…,xk)↓ and f(m, x1,…,xk) > 0 for all m = 0,…,n−1, and

  • f(n, x1,…,xk) = 0.

Our conventions for the untyped λ-calculus, including the encoding of natural numbers and if-then-else conditionals, follow Barendregt [3]. We write Λ for the set of all untyped λ-terms and denote β-reduction by →_β. Two λ-terms are considered equal if they are β-equal.

The empty sequence is denoted by []. If A is a set, we write A^n for the n-fold Cartesian product of A, A^* for the set of finite sequences with elements in A, and A^ω for the set of infinite sequences over A. The set of finite and infinite sequences with elements in A is denoted by A^∞ = A^* ∪ A^ω. For α ∈ A^*, we write α^n for the n-fold concatenation of α with itself, and α^ω for the infinite sequence ααα… that arises by concatenating α with itself countably many times.

3 Natural Number and Function Representations

To establish the relationship between partial recursive functions and the new corecursive model, the representation of natural numbers and functions operating on them is formally defined with respect to binary sequences.

Definition 1 (Coinductive Representation of Numbers).

We write 2 = {0,1} for the set of binary digits, and ℕ̄ = 2^∞ for the set of finite and infinite sequences over 2 that we call the intensional natural numbers. For α ∈ ℕ̄, we write α↑ if α is infinite, and α↓ if α ∈ {0,1}^* is finite. A finite intensional natural number α = (a0,…,ak) ∈ 2^* represents the number n if a0 + ⋯ + ak = n, and we write α r̄ n in this case. A function f: ℕ̄^k → ℕ̄ represents a partial numeric function f̂: ℕ^k ⇀ ℕ if

  • f(α1,…,αk)↓ iff f̂(n1,…,nk)↓ whenever αi r̄ ni for i = 1,…,k, and

  • f(α1,…,αk) r̄ f̂(n1,…,nk) in this case.

Clearly, every natural number has countably many representations, and a finite sequence represents a terminating computation. We think of infinite sequences as non-terminating computations, but do not identify them with a single symbol or value, as they can be non-terminating in many different ways. The sequence (1,1,1,…) intuitively represents a point at infinity, whereas (0,0,0,…) is a non-terminating computation that never reveals any information.
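
As a concrete reading of Definition 1, the representation relation on finite sequences can be sketched in Haskell, anticipating the list-based encoding used in Section 6. The function name `represents` is our own illustration, not part of the formal development.

```haskell
-- Sketch of the relation r̄ of Definition 1, restricted to finite
-- sequences: a finite binary sequence represents the sum of its digits.
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- the number represented by a finite intensional natural number
represents :: Seq -> Int
represents = length . filter (== One)
```

For example, [One,Zero,One] and [One,One] both represent 2, illustrating that every natural number has countably many representations.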

It is well known that ℕ̄ arises as the final coalgebra for the set-endofunctor FX = 1 + 2 × X (see e.g. [11]). As such, it supports coinductive definitions and proof principles. In particular, we are going to use primitive corecursion in the sequel.

Proposition 2.

Let σ: ℕ̄ → 1 + 2 × ℕ̄ be the structure map of the final coalgebra for FX = 1 + 2 × X and let C be a set. Then, for all ϕ: C → F(C + ℕ̄), there exists a unique f: C → ℕ̄ such that σ ∘ f = F[f, id] ∘ ϕ.

For the proof, see [17]. The unique function f in the proposition above is known as an apomorphism in the literature. Spelling this out, this means that for every pairwise disjoint partition C = C0 ∪ C1 ∪ C2 and all functions f0: C0 → C, f1: C1 → C and f2: C2 → ℕ̄, there is a uniquely defined function u: C → ℕ̄ that satisfies the equations on the left below.

u(c) = 0 : u(f0(c))   if c ∈ C0        add(0:α, β) = 0 : add(α, β)
u(c) = 1 : u(f1(c))   if c ∈ C1        add(1:α, β) = 1 : add(α, β)
u(c) = f2(c)          if c ∈ C2        add([], β) = β

This allows us to define a version of addition for intensional natural numbers by concatenating both sequences by the equations on the right. We note that the addition defined above is extensional in the sense that if arguments just differ in zeros, the same is true for the result.
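
Proposition 2 can be sketched in Haskell, with lazy lists playing the role of the final coalgebra. The names `apo` and `addApo` are our own illustration, not the paper's notation.

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Primitive corecursion (apomorphism): at each step the supplied structure
-- map either stops (Nothing), emits a digit and continues from a new seed
-- (Left), or emits a digit and splices in an already-built sequence (Right).
apo :: (c -> Maybe (Bit, Either c Seq)) -> c -> Seq
apo phi c = case phi c of
  Nothing              -> []
  Just (b, Left c')    -> b : apo phi c'
  Just (b, Right rest) -> b : rest

-- Addition by concatenation, as in the equations on the right above:
-- copy the digits of the first argument, then fall through to the second.
addApo :: Seq -> Seq -> Seq
addApo a b = apo step (a, b)
  where
    step (x : xs, ys) = Just (x, Left (xs, ys))
    step ([], y : ys) = Just (y, Right ys)
    step ([], [])     = Nothing
```

Laziness makes the definition total on infinite inputs as well: any finite prefix of addApo 1^ω β can be computed.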

Figure 1: Commutative diagram for the apomorphism with Fℕ̄ = 1 + 2 × ℕ̄.

More formally, we have the following notion of extensional equivalence.

Definition 3 (Extensional equivalence).

The relation ≡ of extensional equivalence is given as the mixed fixpoint defined by the inductive rules (indicated with single-line fractions) and the coinductive rules (written with double-line fractions) below, where α, β ∈ ℕ̄.

             α ≡ β       α ≡ β          α ≡ β          α ≡ β
  ───────   ─────────   ─────────    ═══════════    ═══════════
  [] ≡ []   0:α ≡ β     α ≡ 0:β      0:α ≡ 0:β      1:α ≡ 1:β

More precisely, extensional equivalence is the mixed fixpoint νR. μS. I(S) ∪ C(R) such that

I(S) = {(α, β) | (α, β) is a conclusion of an inductive rule with premisses in S}

and C(R) is the analogously defined operator using the coinductive rules, where νX.O(X) and μX.O(X) are the greatest and least fixpoints of a monotone operator O.

We say that a function f: ℕ̄^k → ℕ̄ is extensional if f(α1,…,αk) ≡ f(β1,…,βk) whenever αi ≡ βi for i = 1,…,k, and a functional Φ: (ℕ̄^k → ℕ̄) → (ℕ̄^k → ℕ̄) is extensional if Φ(f) is extensional whenever f is extensional.

One way to read this definition is that two intensional natural numbers are extensionally equal if there is a possibly infinite derivation, using both inductive and coinductive rules, where there are only finitely many applications of inductive rules between any two coinductive rules. For finite sequences, extensionally equal numbers represent the same natural numbers.

Proposition 4.

Let α, β ∈ ℕ̄ with α ≡ β. Then, α is finite if and only if β is finite. Moreover, if α r̄ n and β r̄ k for finite α, β ∈ ℕ̄ and n, k ∈ ℕ, then n = k.

Proof.

The sequence α is finite only if the inductive and coinductive rules are applied finitely many times and ultimately the axiom [] ≡ [] is applicable, in which case β is finite as well. The case where β is finite is analogous.

Assuming α ≡ β for finite α, β ∈ ℕ̄, the coinductive rule stipulating that α ≡ β implies 1:α ≡ 1:β is applied finitely many times, and the same number of 1's must be observed in α and β. Hence, α and β represent the same natural number.

For extensional functions, we can therefore use intensional natural numbers interchangeably with any of the corresponding binary sequences.
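
Extensional equivalence is only semi-decidable on infinite sequences, but a fuel-bounded approximation conveys the rule structure. The checker below is our own sketch, not part of the formal development.

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Fuel-bounded check of extensional equivalence: the Zero-stripping
-- clauses mirror the inductive rules, while the []/[] and One/One clauses
-- mirror the axiom and the coinductive rules. Fuel is spent on every
-- step, so the check terminates even on infinite inputs; running out of
-- fuel means "no difference observed so far".
eqExt :: Int -> Seq -> Seq -> Bool
eqExt 0 _ _                 = True
eqExt _ [] []               = True
eqExt n (Zero : a) b        = eqExt (n - 1) a b
eqExt n a (Zero : b)        = eqExt (n - 1) a b
eqExt n (One : a) (One : b) = eqExt (n - 1) a b
eqExt _ _ _                 = False
```

For instance, [Zero,One] and [One,Zero,Zero] are identified (both represent 1), whereas [One] and [] are distinguished after finitely many observations.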

4 Function Space Corecursion

Function space corecursion is a definition principle, similar in shape to primitive recursion, that generalises both primitive recursion and minimisation on intensional natural numbers. The intuition is best explained in comparison to primitive recursion. A definition of f = PR(g,h): ℕ^{k+1} ⇀ ℕ by primitive recursion relies on two auxiliary (or already defined) functions: g: ℕ^k ⇀ ℕ (defining f when the first argument is zero) and h: ℕ^{k+2} ⇀ ℕ (used in the successor case).

Function space corecursion defines a function f of type ℕ̄^k → ℕ̄ in terms of k+1 functions f0,…,fk that are all of arity k. The first function f0 yields the result if the first argument is the empty sequence [] ∈ ℕ̄:

f([], α2,…,αk) = f0([], α2,…,αk).

The remaining functions simultaneously transform all arguments in a recursive call, where suc α = 1:α:

f(suc α1, α2,…,αk) = 0 : f(f1(α1,…,αk),…,fk(α1,…,αk))

The crucial point of function space corecursion, and its main difference to primitive recursion, is that all functions f0,,fk can change in every recursive call. That means, we stipulate the equations

FCR(f0,…,fk)([], α2,…,αk) = f0([], α2,…,αk)
FCR(f0,…,fk)(0:α1, α2,…,αk) = 0 : FCR(f0,…,fk)(α1, α2,…,αk)
FCR(f0,…,fk)(1:α1, α2,…,αk) = 0 : FCR(f0′,…,fk′)(α1′,…,αk′)

where αi′ = fi(α1,…,αk) and f0′,…,fk′ are the changed versions of f0,…,fk. We express this change by postulating that fi′ = Φi(fi) for an operator Φi: (ℕ̄^k → ℕ̄) → (ℕ̄^k → ℕ̄). The functionals Φ0,…,Φk then become parameters of a definition by function space corecursion. The following definition makes this precise in terms of intensional natural numbers.

Definition 5 (Function space corecursion).

Let k > 0 and fix functionals Φ0,…,Φk: (ℕ̄^k → ℕ̄) → (ℕ̄^k → ℕ̄). The function FCR(Φ0,…,Φk): (ℕ̄^k → ℕ̄)^{k+1} → (ℕ̄^k → ℕ̄) is defined by

FCR_Φ f([], α) = f0([], α)
FCR_Φ f(0:α1, α) = 0 : FCR_Φ f(α1, α)
FCR_Φ f(1:α1, α) = 0 : FCR_Φ f′(f1(α1, α),…,fk(α1, α))

where Φ = (Φ0,…,Φk), f = (f0,…,fk), α = (α2,…,αk) and f′ = (f0′,…,fk′) with fj′ = Φj(fj), so that FCR_Φ f′ is again defined by function space corecursion using the functionals Φ0,…,Φk and initial functions f0′,…,fk′.

 Remark 6 (Variations).

There are many different possible ways to vary function space corecursion. Some variations limit the number of functionals, possibly hard-coding the transformation of some of the arguments, or are more liberal about the arities of the fi. One may also allow Φi to depend on all of f0,…,fk, i.e., change the type of Φi to (ℕ̄^k → ℕ̄)^{k+1} → (ℕ̄^k → ℕ̄). Another variation is isotone function space corecursion, defined like function space corecursion but with the last clause (and the name) changed to

ICR_Φ f(1:α1, α) = 1 : ICR_Φ f′(f1(α1, α),…,fk(α1, α))

whilst the remaining clauses remain the same as in Definition 5. This operator is isotone, as it evidently increases the number of 1's in the function value, but it is clearly not complete in the sense of being able to define all partial recursive functions: the constant zero function, for instance, is not definable. A further variation is mixed function space corecursion, which combines plain and isotone function space corecursion. Here we stipulate that:

MCR_Φ f([], β, γ) = f0([], β, γ)
MCR_Φ f(0:α, β, γ) = 0 : MCR_Φ f(α, β, γ)
MCR_Φ f(α, 0:β, γ) = 0 : MCR_Φ f(α, β, γ)
MCR_Φ f(1:α, [], γ) = 1 : MCR_Φ f′(f1(α, [], γ),…,f_{k+2}(α, [], γ))
MCR_Φ f(1:α, 1:β, γ) = 0 : MCR_Φ f′(f1(α, β, γ),…,f_{k+2}(α, β, γ))

The second argument controls whether a non-zero digit will be emitted, and 𝖬𝖢𝖱 generalises both 𝖥𝖢𝖱 and 𝖨𝖢𝖱 by fixing the second argument. We mainly focus on 𝖥𝖢𝖱 as this suffices to represent all partial recursive functions. Nevertheless, all our arguments are easily adapted to 𝖬𝖢𝖱.

We make two simple observations before we turn to (lots of) examples. First, function space corecursion is indeed uniquely defined.

Proposition 7.

The function FCR_Φ f is total and uniquely defined by the equations in Definition 5.

Proof.

We use Proposition 2. Let C = ((ℕ̄^k → ℕ̄) → (ℕ̄^k → ℕ̄))^{k+1} × (ℕ̄^k → ℕ̄)^{k+1} × ℕ̄^k, which is the uncurried domain of FCR. Suppose Φ = (Φ0,…,Φk), f = (f0,…,fk) and f′ = (f0′,…,fk′) where fi′ = Φi(fi). Let ϕ: C → 1 + 2 × (C + ℕ̄) be defined as follows:

ϕ(Φ, f, [], α) = inL(∗)            if f0([], α) = []
ϕ(Φ, f, [], α) = inR(0, inR(α′))   if f0([], α) = 0:α′
ϕ(Φ, f, [], α) = inR(1, inR(α′))   if f0([], α) = 1:α′
ϕ(Φ, f, 0:α1, α) = inR(0, inL(Φ, f, α1, α))
ϕ(Φ, f, 1:α1, α) = inR(0, inL(Φ, f′, α1′,…,αk′))

where α = (α2,…,αk) and αi′ = fi(α1, α) for i = 1,…,k. Proposition 2 now guarantees the unique existence of a function that satisfies the equations stipulated in Definition 5. The proof readily adapts to show the uniqueness of ICR and MCR with a suitably adapted C.

The fact that 0's in the definition of FCR are just passed through and do not affect the computation implies that functions defined using function space corecursion are always extensional (Definition 3).

Proposition 8.

If k > 0 and all of Φ0,…,Φk and f0,…,fk are extensional, then so is FCR_Φ f, where Φ = (Φ0,…,Φk) and f = (f0,…,fk).

Proof.

Let B = {(α, β) | α, β ∈ ℕ̄ and α ≡ β} be the extensional equivalence relation, viewed as a set of pairs. Consider the set

E = B ∪ {(FCR_Φ f α, FCR_Φ f β) | α ≡ β componentwise}

where Φ = (Φ0,…,Φk), f = (f0,…,fk), α = (α1,…,αk) and β = (β1,…,βk). The goal is to prove E ⊆ B. By the Knaster-Tarski theorem [12], it suffices to show E ⊆ μS. I(S) ∪ C(E) with I and C as in Definition 3. Let (γ, δ) ∈ E. The proof goal clearly holds if (γ, δ) ∈ B.

Now consider γ = FCR_Φ f α and δ = FCR_Φ f β with α ≡ β, that is, αi ≡ βi for all i = 1,…,k. We have B = μS. I(S) ∪ C(B), that is, we can perform induction on the inductive part of the extensional equivalence α1 ≡ β1 to show that (γ, δ) ∈ μS. I(S) ∪ C(E), with α1 = b:α1′ and β1 = b:β1′ for b ∈ {0,1} and α1′ ≡ β1′ as base cases, in addition to α1 = β1 = [].

If α1 = β1 = [], we have FCR_Φ f α = f0(α) and FCR_Φ f β = f0(β). By assumption, f0 is extensional, so (f0(α), f0(β)) ∈ E as required.

If α1 = 0:α1′ and β1 = 0:β1′ with α1′ ≡ β1′, we have

(γ, δ) = (0 : FCR_Φ f(α1′, α2,…,αk), 0 : FCR_Φ f(β1′, β2,…,βk)) ∈ C(E)

by definition of FCR and E. Similarly, if α1 = 1:α1′ and β1 = 1:β1′ with α1′ ≡ β1′, we have

(γ, δ) = (0 : FCR_Φ f′(α̃1,…,α̃k), 0 : FCR_Φ f′(β̃1,…,β̃k))

with fi′ = Φi(fi) for i = 0,…,k, α̃i = fi(α1′, α2,…,αk) and β̃i = fi(β1′, β2,…,βk). By definition of FCR as well as the extensionality of the Φi and fi, this implies (γ, δ) ∈ C(E).

If α1 = 0:α1′ with α1′ ≡ β1, we have by induction hypothesis that

(FCR_Φ f(α1′, α2,…,αk), FCR_Φ f(β1, β2,…,βk)) ∈ μS. I(S) ∪ C(E).

Unfolding the definition of FCR, and applying an inductive rule, this also implies

(0 : FCR_Φ f(α1′, α2,…,αk), FCR_Φ f(β1, β2,…,βk)) ∈ μS. I(S) ∪ C(E).

The case where β1 = 0:β1′ is symmetric.

Example 9.

The addition of intensional natural numbers can be defined in two different ways. Using isotone function space corecursion, we stipulate f(α, β) = β, g(α, β) = α and h(α, β) = β. With the identity functionals Φ = Ψ = Γ = Id: (ℕ̄^2 → ℕ̄) → (ℕ̄^2 → ℕ̄) and add = ICR(Φ,Ψ,Γ)(f,g,h), we have:

add([], β) = ICR(Φ,Ψ,Γ)(f,g,h)([], β) = f([], β) = β
add(0:α, β) = ICR(Φ,Ψ,Γ)(f,g,h)(0:α, β) = 0 : ICR(Φ,Ψ,Γ)(f,g,h)(α, β) = 0 : add(α, β)
add(1:α, β) = ICR(Φ,Ψ,Γ)(f,g,h)(1:α, β) = 1 : ICR(Φ,Ψ,Γ)(f,g,h)(g(α,β), h(α,β)) = 1 : add(α, β).

In other words, add is the coinductive version of the usual definition of addition on natural numbers. It resembles the primitive recursive addition function Add: ℕ^2 → ℕ, where Add(0, m) = m and Add(n+1, m) = 1 + Add(n, m), defined by recursion on the first argument.

Alternatively, we can use plain function space corecursion, where occurrences of 1's in the first argument are accumulated in the second argument in every recursive call. That is, we consider Φ = Ψ = Γ = Id as above, f(α, β) = β and g(α, β) = α as before, but put h(α, β) = 1:β instead. Consequently, delay = FCR(Φ,Ψ,Γ)(f,g,h) satisfies the equations:

delay([], β) = f([], β) = β
delay(0:α, β) = 0 : delay(α, β)
delay(1:α, β) = 0 : delay(α, 1:β).

This presents delay as an alternative version of the coinductive addition, defined by a(0, m) = m and a(n+1, m) = a(n, 1+m). In functional programming terms, add is a little more lazy: for any infinite sequence α, we have add(α, β) = α, whereas delay(α, β) = 0^ω.
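
The difference in laziness can be observed directly in Haskell. The definitions below follow the derived equations of Example 9, with the emitted hiatons written explicitly; the names `addL` and `delayL` are our own.

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- isotone version: a One is emitted for every One consumed
addL :: Seq -> Seq -> Seq
addL []         b = b
addL (Zero : a) b = Zero : addL a b
addL (One  : a) b = One  : addL a b

-- plain version: every consumed One is replaced by a hiaton and
-- accumulated in the second argument
delayL :: Seq -> Seq -> Seq
delayL []         b = b
delayL (Zero : a) b = Zero : delayL a b
delayL (One  : a) b = Zero : delayL a (One : b)
```

On the infinite input 1^ω, addL reproduces its first argument, whilst delayL only ever emits hiatons, matching add(α, β) = α versus delay(α, β) = 0^ω.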

The example above is rather trivial as the function parameters are not changing in recursive calls. We had Φi(fi)=fi for all the functionals. The next example makes use of changing parameters.

Example 10.

Consider the functionals Φ, Ψ, Γ where Φ(f)(α, β) = β and Ψ, Γ are arbitrary. Then, for g(α, β) = [] and h(α, β) = f(α, β) = α, we have

FCR(Φ,Ψ,Γ)(f,g,h)([], β) = f([], β) = []
FCR(Φ,Ψ,Γ)(f,g,h)(1:α, β) = 0 : FCR(Φ,Ψ,Γ)(f′,g′,h′)(g(α,β), h(α,β))
= 0 : FCR(Φ,Ψ,Γ)(f′,g′,h′)([], α)
= 0 : f′([], α) = 0:α ≡ α

where f′ = Φ(f), g′ = Ψ(g) and h′ = Γ(h).

In other words, 𝗉𝗋𝖾𝖽(α)=𝖥𝖢𝖱(Φ,Ψ,Γ)(f,g,h)(α,[]) is the predecessor function that removes the first occurrence of 1 from α if it exists. It is intriguing to note that the complexity of 𝗉𝗋𝖾𝖽, unlike its primitive recursive counterpart, is not linear but instead depends on the representation of α.
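
Unfolding the FCR equations of Example 10 yields the following direct Haskell sketch (our own rendering): a hiaton is emitted for every digit consumed, and the first 1, if any, is dropped.

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Predecessor, up to hiatons: pass Zeros through, and replace the first
-- One by a hiaton, returning the rest of the input unchanged.
predS :: Seq -> Seq
predS []         = []
predS (Zero : a) = Zero : predS a
predS (One  : a) = Zero : a
```

Note that predS is total on all of ℕ̄: on 0^ω it simply produces 0^ω, consistent with the intuition that the predecessor of an unknown value is unknown.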

Example 11.

Let f: ℕ̄^{k+1} → ℕ̄ be extensional. Consider the k+2 functionals Φ = (Φ0, Φ1,…,Φ_{k+1}) and functions f̄ = (f0, f1,…,f_{k+1}) of arity k+2 such that:

f0(α) = α2
f1(α) = f(1:α2, α3,…,α_{k+2})
f2(α) = 1:α2
fi(α) = αi for i > 2

where α = (α1,…,α_{k+2}) and Φi(fi) = fi for i = 0,…,k+1.
It follows that:

FCR_Φ f̄(f([], α), [], α) r̄ 0   if f([], α) r̄ 0
FCR_Φ f̄(f([], α), [], α) ≡ FCR_Φ f̄(f([1], α), [1], α)   otherwise

Continuing in the same way, we have:

FCR_Φ f̄(f([1], α), [1], α) r̄ 1   if f([1], α) r̄ 0
FCR_Φ f̄(f([1], α), [1], α) ≡ FCR_Φ f̄(f([1,1], α), [1,1], α)   otherwise

By observing that α2 is incremented in each iteration, we see, and could easily prove by induction, that the function

min_f(α) = FCR_Φ f̄(f([], α), [], α)

satisfies min_f(α) r̄ n whenever:

  • f(1^n, α) r̄ 0,

  • f(1^k, α)↓ for all k < n, and

  • f(1^k, α) represents a non-zero value for all k < n.

If f: ℕ̄^{k+1} → ℕ̄ is a representation of a possibly partial numeric function f̂: ℕ^{k+1} ⇀ ℕ, it follows that min_f is a representation of μf̂, and therefore min_f is extensional by Proposition 8. Using isotone function space corecursion, we obtain a lazy and slightly simpler variant of minimisation, given by

min′_f(α) = ICR_Φ f̄(f([], α), [], α)

where all functionals and functions are the same as above, with the exception of f2(α) = α2.
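
An operational reading of min_f can be sketched in Haskell (our own rendering, not the FCR construction itself, with candidates encoded in unary): probe f at 0, 1, 2, …, emitting a hiaton while the current value is still unknown and a 1 whenever a candidate is rejected, so the output stays productive even when the search diverges.

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- unary encoding of a natural number
enc :: Int -> Seq
enc n = replicate n One

-- Minimisation on representations: the emitted Ones count the rejected
-- candidates, so a terminating run represents the least n with f n ≈ 0,
-- and if no such n exists the output is an infinite sequence.
minS :: (Seq -> Seq) -> Seq
minS f = go 0
  where
    go n = probe (f (enc n))
      where
        probe []         = []                 -- f n represents 0: done
        probe (Zero : a) = Zero : probe a     -- value still unknown: hiaton
        probe (One  : _) = One : go (n + 1)   -- f n is non-zero: next candidate
```

If f(enc n) diverges with hiatons, minS f keeps emitting hiatons forever, matching the expected behaviour of μ on partial functions.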

Our next, last, and most complex example shows how to obtain primitive recursion from function space corecursion.

Example 12.

Assume that r: ℕ̄^{k+2} → ℕ̄ and s: ℕ̄^k → ℕ̄ are given and extensional representations of total numeric functions r̂: ℕ^{k+2} → ℕ and ŝ: ℕ^k → ℕ. We construct a representation of PR(ŝ, r̂), that is, a function ϕ: ℕ̄^{k+1} → ℕ̄. We do this by defining a (k+2)-ary function θ by function space corecursion, and then putting ϕ(α, β) = θ(α, [], β). For the definition of θ, consider the functionals Φ, Ψ, Γ: (ℕ̄^{k+2} → ℕ̄) → (ℕ̄^{k+2} → ℕ̄) given by

Φ(f)(α1, α2, β) = r(α2, f([], pred(α2), β), β),  Ψ(g) = g,  Γ(h) = suc ∘ h

where 𝗌𝗎𝖼(α)=1:α is prefixing of α with 1, and 𝗉𝗋𝖾𝖽 is the predecessor function from Example 10. Now consider the initial functions

f0(α1, α2, β) = s(β),  g0(α1, α2, β) = α1,  h0(_) = []

and let θ = FCR(Φ,Ψ,Γ,Id)(f0, g0, h0, id). If fn+1 = Φ(fn), gn+1 = Ψ(gn) and hn+1 = Γ(hn), whereas id and Id are sequences of k identity functions and functionals, respectively, we have for ϕ(α, β) = FCR(Φ,Ψ,Γ,Id)(f0, g0, h0, id)(α, [], β):

ϕ([], β) = FCR(Φ,Ψ,Γ,Id)(f0, g0, h0, id)([], [], β) = f0([], [], β) = s(β)
ϕ(suc α, β) = FCR(Φ,Ψ,Γ,Id)(f0, g0, h0, id)(suc α, [], β)
≡ FCR(Φ,Ψ,Γ,Id)(f1, g1, h1, id)(α, [], β)
≡ FCR(Φ,Ψ,Γ,Id)(fn+1, gn+1, hn+1, id)([], α, β)
= fn+1([], α, β)
= r(α, fn([], pred(α), β), β) ≡ r(α, ϕ(α, β), β)

where α r̄ n, the second argument of the final calls is a sequence extensionally equal to α, and ≡ denotes extensional equivalence.

In other words, ϕ satisfies the equations of the function PR(ŝ, r̂) defined by primitive recursion, i.e., ϕ([], β) = s(β) and ϕ(suc(α), β) ≡ r(α, ϕ(α, β), β). That is, the function ϕ(α, β) = FCR(Φ,Ψ,Γ,Id)(f0, g0, h0, id)(α, [], β) is a representation of PR(ŝ, r̂).

In the example above, we obtain a representation of PR(ŝ, r̂) because we have assumed that r̂ and ŝ are total. For partial functions, this is not necessarily the case, as f∘⟨g1,…,gn⟩ does not in general represent f̂∘⟨ĝ1,…,ĝn⟩, even if f and the gi represent f̂ and the ĝi, respectively. We return to this point when we show that function space corecursion indeed defines all computable functions.
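
Up to the emitted hiatons, the construction of Example 12 satisfies exactly the primitive-recursion equations, which we can render directly in Haskell. This is an illustrative shorthand of our own, not the FCR construction itself, and it fixes a single parameter sequence.

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Primitive recursion on intensional naturals, up to hiatons:
-- prS s r corresponds to PR(s, r) with one parameter b.
prS :: (Seq -> Seq) -> (Seq -> Seq -> Seq -> Seq) -> Seq -> Seq -> Seq
prS s _ []         b = s b
prS s r (Zero : a) b = Zero : prS s r a b
prS s r (One  : a) b = Zero : r a (prS s r a b) b
```

For instance, with s the identity and r emitting one extra 1 per step, prS computes addition: the number of 1's in the output is the sum of the numbers represented by the two arguments.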

5 Computable Corecursive Functions

The function space corecursion operator offers a lot of flexibility for how the functionals and functions are constructed. To define the class of computable corecursive functions, we need to be precise about how, and in particular with what functionals, the operator is used. Similar to partial recursive functions, computable corecursive functions form an inductively defined class, and every element of this class denotes a k-ary function of type ℕ̄^k → ℕ̄. As for primitive recursive functions, we often identify a k-ary function symbol with an actual function. The challenge in the definition of computable corecursive functions lies precisely in which functionals to permit in definitions by function space corecursion. An operator is permissible if it can be “written” in terms of its arguments and previously defined computable corecursive functions.

The definition of computable corecursive functions makes this precise by representing the functionals Φ in the definition of FCR by terms. To define a functional Φ: (ℕ̄^k → ℕ̄) → (ℕ̄^k → ℕ̄), we need to specify Φ(ϕ) for ϕ: ℕ̄^k → ℕ̄, and to define Φ(ϕ) we need to prescribe Φ(ϕ)(α) for a generic α ∈ ℕ̄^k. In other words, we stipulate that Φ(ϕ)(α) is an expression that uses already defined functions, the function ϕ and the arguments α1,…,αk. That is, we can represent Φ(ϕ)(α), and hence Φ itself, by a term in k variables over a signature that contains symbols for already defined computable corecursive functions, plus an extra k-ary symbol ϕ.

Definition 13 (Computable corecursive functions).

The class of computable corecursive functions is the least class that contains the constant zero function 0 of arity 0, the successor function suc of arity 1, and the projection functions π_i^k of arity k for 1 ≤ i ≤ k, and is closed under

function composition,

i.e., if f is n-ary and g1,…,gn are k-ary, and all of f, g1,…,gn are computable corecursive, then so is f∘⟨g1,…,gn⟩, and

function space corecursion,

that is, if Σ is a set of computable corecursive functions and Φ0,…,Φk ∈ T_{Σ′}({x1,…,xk}), then FCR(Φ0,…,Φk)(f0,…,fk) is a computable corecursive function of arity k, provided that all of f0,…,fk are k-ary and computable corecursive, and Σ′ = Σ ∪ {ϕ} where ϕ ∉ Σ is a fresh k-ary function symbol.

Every k-ary computable corecursive function f is a term that we often identify with its denotation ⟦f⟧: ℕ̄^k → ℕ̄, given as follows. We have ⟦0⟧ = [], ⟦suc⟧(α) = 1:α, ⟦π_i^k⟧(α1,…,αk) = αi and ⟦f∘⟨g1,…,gn⟩⟧(α1,…,αk) = ⟦f⟧(⟦g1⟧(α1,…,αk),…,⟦gn⟧(α1,…,αk)). For function space corecursion, the denotation ⟦Φ⟧ of Φ ∈ T_{Σ′}({x1,…,xk}) is given in the standard way by induction on the structure of Φ, that is

⟦Φ⟧(g)(α) = αi                                   if Φ = xi is a variable
⟦Φ⟧(g)(α) = ⟦f⟧(⟦Φ1⟧(g)(α),…,⟦Φn⟧(g)(α))         if Φ = f(Φ1,…,Φn) and f ∈ Σ
⟦Φ⟧(g)(α) = g(⟦Φ1⟧(g)(α),…,⟦Φk⟧(g)(α))           if Φ = ϕ(Φ1,…,Φk) and ϕ ∈ Σ′∖Σ

where we use f ∈ Σ homonymously as a function symbol and its interpretation, g: ℕ̄^k → ℕ̄ and α = (α1,…,αk) ∈ ℕ̄^k. With this,

⟦FCR(Φ0,…,Φk)(f0,…,fk)⟧ = FCR(⟦Φ0⟧,…,⟦Φk⟧)(⟦f0⟧,…,⟦fk⟧)

where function space corecursion is the same as in Definition 5.

When requiring that Φ ∈ T_{Σ′}({x1,…,xk}), we use already defined n-ary computable corecursive functions f ∈ Σ as n-ary function symbols. Unfolding the definition of the term algebra T_{Σ′}({x1,…,xk}), we have that Φ is either a variable, or of the form σ(Φ1,…,Φn) where σ ∈ Σ′ is a function symbol and all Φi are already defined terms in T_{Σ′}({x1,…,xk}). When interpreting Φ as a functional ⟦Φ⟧: (ℕ̄^k → ℕ̄) → (ℕ̄^k → ℕ̄), we interpret the function symbol ϕ ∈ Σ′∖Σ as the argument of ⟦Φ⟧, and a function symbol f ∈ Σ as ⟦f⟧ (conceived as a function).

We have seen that function space corecursion can simulate both minimisation and primitive recursion. This does not suffice to show that every partial recursive function can be represented by a computable corecursive function: the composition of partial recursive functions differs from the composition of computable corecursive functions, as the latter are total. Consider as an example the constant zero function z0(α) = [] that represents the numeric function ẑ(n) = 0. The unary function f(α) = (0,0,…) represents the totally undefined function f̂ with f̂(n)↑ for all n. It is, however, not the case that z0∘f represents ẑ∘f̂, as z0∘f([])↓ but ẑ∘f̂(0)↑.

To establish a representation theorem, we therefore need the somewhat unnatural manoeuvre of deliberately limiting the definedness of a function in order to represent the composition of partial functions using function space corecursion. The key observation is that the constant zero function ẑ also has a different representation z with z(α) r̄ 0 for any finite sequence α ∈ 2^* and z(α) = (0,0,…) for any infinite sequence α ∈ 2^ω. We start by showing that z is computable corecursive.

Lemma 14.

The function z: ℕ̄ → ℕ̄ defined by z(α) = 0^n if α ∈ 2^n, and z(α) = 0^ω otherwise, is computable corecursive.

Proof.

Let z = FCR(Id, Id)(id, id) where Id: (ℕ̄ → ℕ̄) → (ℕ̄ → ℕ̄) is the identity functional, and id: ℕ̄ → ℕ̄ is the identity function. Then

z([]) = FCR(Id,Id)(id,id)([]) = []
z(0:α) = FCR(Id,Id)(id,id)(0:α) = 0 : FCR(Id,Id)(id,id)(α)
z(1:α) = FCR(Id,Id)(id,id)(1:α) = 0 : FCR(Id,Id)(id,id)(α).

It is evident that z is the denotation of a computable corecursive function: we have z = ⟦FCR(Id, Id)(id, id)⟧, where formally Id = ϕ(x) ∈ T_{Σ′}({x}) is the required term representation, and id is the evidently computable corecursive identity function π_1^1.

Our strategy for showing that the composition of partial recursive functions can again be represented by a computable corecursive function is to prefix the concatenation z(g1(α))⋯z(gn(α)) to the composition f∘⟨g1,…,gn⟩, thus making sure that the composite is undefined (i.e., an infinite sequence) if one of the arguments is undefined.

Lemma 15.

Suppose that f: ℕ̄^n → ℕ̄ and g1,…,gn: ℕ̄^k → ℕ̄ are computable corecursive. If f represents f̂ and each gi represents ĝi, then there is a computable corecursive function h that represents f̂∘⟨ĝ1,…,ĝn⟩.

Proof.

We write αβ for delay(α, β) and put h(α1,…,αk) = z(g1(α)⋯gn(α)) f(g1(α),…,gn(α)), where α = (α1,…,αk). Here, z is as in Lemma 14, and it is clear that h represents f̂∘⟨ĝ1,…,ĝn⟩.

We have now collected all the ingredients for our representation theorem.
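
Lemmas 14 and 15 admit a compact Haskell sketch (our own names; lazy list concatenation stands in for the delay-based concatenation of the proof, which agrees with it up to hiatons).

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Lemma 14: map every digit to a hiaton, so finite inputs represent 0
-- and infinite inputs yield the undefined value 0^ω.
zS :: Seq -> Seq
zS = map (const Zero)

-- Lemma 15 for n = 1: the zS prefix forces the composite to diverge
-- whenever the inner computation g does, since (++) never reaches its
-- second argument when the first is infinite.
compose1 :: (Seq -> Seq) -> (Seq -> Seq) -> Seq -> Seq
compose1 f g a = zS (g a) ++ f (g a)
```

In particular, composing the naive representation of the constant zero function with a diverging g would wrongly terminate, whereas compose1 (const []) g correctly yields 0^ω.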

Theorem 16.

Every partial recursive function has a computable corecursive representation.

Proof.

For total functions f̂ and ĝ1,…,ĝn that are represented by f, g1,…,gn, we have that f∘⟨g1,…,gn⟩ represents f̂∘⟨ĝ1,…,ĝn⟩. It then follows from Example 12 that all primitive recursive functions have a representation. To obtain a representation for all partial recursive functions, we use Kleene's normal form theorem and Example 11, where we implement the composition of partial functions using Lemma 15.

6 Soundness

We have shown that every partial recursive function is represented by the denotation of a computable corecursive function. We could argue that, by Church's thesis, this implies that computable corecursive functions represent the class of all computable functions, as they are intuitively computable. Indeed, it is obvious how to implement an instance of function space corecursion in the Haskell programming language. The following shows an example of how FCR can be defined.

-- finite and infinite lists
data Bit = Zero | One deriving (Eq, Show); type Seq = [Bit]

-- functions and functionals for an instance of FCR with two arguments
type Fn = Seq -> Seq -> Seq
type Fnl = Fn -> Fn

-- function space corecursion for binary functions
fcr2 :: Fnl -> Fnl -> Fnl -> Fn -> Fn -> Fn -> Fn
fcr2 phi psi gamma f g h a b = case a of
  []     -> f [] b
  Zero:a -> Zero:(fcr2 phi psi gamma f g h a b)
  One:a  -> Zero:(fcr2 phi psi gamma f' g' h' a' b') where
    f' = phi f; g' = psi g; h' = gamma h; a' = g a b; b' = h a b

The 𝖽𝖾𝗅𝖺𝗒 and 𝗉𝗋𝖾𝖽 functions in the previous section are then easily defined as follows.

-- delayed addition
delay :: Fn
delay = fcr2 id id id (\a b -> b) (\a b -> a) (\a b -> One:b)
-- auxiliary function to define predecessor
pred' :: Fn
pred' = fcr2 (\f -> \a b -> b) id id (\a b -> a) (\a b -> []) (\a b -> a)
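Evaluating these definitions by hand illustrates the intended semantics. The example values below were traced from the code above and are easily checked in GHCi (the fragment repeats the definitions so that it is self-contained):

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]
type Fn  = Seq -> Seq -> Seq
type Fnl = Fn -> Fn

fcr2 :: Fnl -> Fnl -> Fnl -> Fn -> Fn -> Fn -> Fn
fcr2 phi psi gamma f g h a b = case a of
  []     -> f [] b
  Zero:a -> Zero:(fcr2 phi psi gamma f g h a b)
  One:a  -> Zero:(fcr2 phi psi gamma f' g' h' a' b') where
    f' = phi f; g' = psi g; h' = gamma h; a' = g a b; b' = h a b

delay :: Fn
delay = fcr2 id id id (\a b -> b) (\a b -> a) (\a b -> One:b)

pred' :: Fn
pred' = fcr2 (\f -> \a b -> b) id id (\a b -> a) (\a b -> []) (\a b -> a)

-- 2 + 1 = 3: the Ones of the first argument are shifted into the second,
-- with one value-irrelevant Zero emitted per consumed digit
exDelay :: Seq
exDelay = delay [One, One] [One]   -- [Zero,Zero,One,One,One]

-- pred 2 = 1: the first One of the input is consumed
exPred :: Seq
exPred = pred' [One, One] []       -- [Zero,One]
```

An infinite (undefined) first argument yields an infinite output of Zeros, e.g. delay (repeat One) [] never produces a One.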

While the argument is certainly convincing, we instead show that all computable corecursive functions are λ-definable, starting with the encoding of finite and infinite sequences of digits as untyped λ-terms. In addition to the binary digits, we introduce a blank symbol (B) to mark the end of a binary sequence.

Definition 17.

Define ⌜·⌝ : 2 ∪ {B} → Λ such that ⌜B⌝ = λxyz.x, ⌜0⌝ = λxyz.y and ⌜1⌝ = λxyz.z. A term M encodes a finite binary sequence α = (a0, …, ak−1) whenever M⌜i⌝ = ⌜ai⌝ for all 0 ≤ i < k, and M⌜i⌝ = ⌜B⌝ for i = k. A term M encodes an infinite binary sequence (a0, a1, …) whenever M⌜i⌝ = ⌜ai⌝ for all i.

As mentioned previously, ⌜·⌝ : ℕ → Λ also denotes the mapping from natural numbers to their λ-encodings. Barendregt's formulation [3], or any alternative encoding that provides the basic functions (e.g., predecessor and successor), works in the same way.

Example 18.

The empty sequence [] can be encoded as λx.⌜B⌝ because (λx.⌜B⌝)⌜0⌝ →β ⌜B⌝. An infinite sequence of 1's can be encoded as λx.⌜1⌝. In fact, we can define a concatenation operator 𝖼𝗈𝗇𝗌 by

𝖼𝗈𝗇𝗌 x y ≡ λw. 𝗂𝖿 (𝗂𝗌𝖹𝖾𝗋𝗈 w) 𝗍𝗁𝖾𝗇 x 𝖾𝗅𝗌𝖾 y (𝗉𝗋𝖾𝖽 w)

where 𝗂𝗌𝖹𝖾𝗋𝗈 checks whether w encodes the natural number zero, and 𝗉𝗋𝖾𝖽 decrements a natural number. The 𝗁𝖾𝖺𝖽 and 𝗍𝖺𝗂𝗅 functions are simply λx. x ⌜0⌝ and λxy. x (𝗌𝗎𝖼 y), respectively.
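These three combinators have a direct functional reading: a sequence is its own index function. The following Haskell transcription is ours (the names Sym, SeqF, emptyF, consF, headF and tailF are illustrative, not from the paper):

```haskell
data Sym = B | O | I deriving (Eq, Show)   -- blank, digit 0, digit 1

-- a sequence, represented by the function sending an index to a symbol
type SeqF = Integer -> Sym

emptyF :: SeqF                              -- λx.⌜B⌝
emptyF _ = B

consF :: Sym -> SeqF -> SeqF                -- 𝖼𝗈𝗇𝗌 of Example 18
consF x y w = if w == 0 then x else y (w - 1)

headF :: SeqF -> Sym                        -- λx. x ⌜0⌝
headF s = s 0

tailF :: SeqF -> SeqF                       -- λxy. x (𝗌𝗎𝖼 y)
tailF s w = s (w + 1)
```

For instance, map (consF O (consF I emptyF)) [0, 1, 2] yields [O, I, B], matching the encoding of the two-digit sequence (0, 1).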

To facilitate case distinctions on the binary digits and the blank symbol, let the notation 𝗌𝗐𝗂𝗍𝖼𝗁 W 𝖼𝖺𝗌𝖾𝖡 X 𝖼𝖺𝗌𝖾𝟢 Y 𝖼𝖺𝗌𝖾𝟣 Z be syntactic sugar for W X Y Z. The encodings of the blank symbol and the binary digits effectively act as truth values for the case distinction. For example,

𝗌𝗐𝗂𝗍𝖼𝗁 ⌜B⌝ 𝖼𝖺𝗌𝖾𝖡 X 𝖼𝖺𝗌𝖾𝟢 Y 𝖼𝖺𝗌𝖾𝟣 Z
≡ ⌜B⌝ X Y Z
= (λxyz.x) X Y Z
↠β X.
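The same case distinction can be mimicked in Haskell with the symbols as three-way selectors; Enc, encB, enc0, enc1 and switch3 are names of our own choosing:

```haskell
-- each symbol selects one of three continuations, as in Definition 17
type Enc r = r -> r -> r -> r

encB, enc0, enc1 :: Enc r
encB x _ _ = x   -- ⌜B⌝ = λxyz.x
enc0 _ y _ = y   -- ⌜0⌝ = λxyz.y
enc1 _ _ z = z   -- ⌜1⌝ = λxyz.z

-- switch W caseB X case0 Y case1 Z is literally the application W X Y Z
switch3 :: Enc r -> r -> r -> r -> r
switch3 w = w
```

So switch3 encB x y z reduces to x, exactly as in the reduction displayed above.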
Definition 19 (λ-definability of functions).

A computable corecursive function f : ℕ̄ᵏ → ℕ̄ is λ-definable if there exists F ∈ Λ such that F R1 ⋯ Rk is an encoding of f(α1, …, αk), as per Definition 17, whenever Ri is a λ-encoding of αi. A functional Φ : (ℕ̄ᵏ → ℕ̄) → (ℕ̄ᵏ → ℕ̄) is λ-definable if there exists P ∈ Λ such that P F is an encoding of Φ(f) whenever F is an encoding of f : ℕ̄ᵏ → ℕ̄.

Example 20.

If the functionals Φ0, …, Φk and the functions f0, …, fk are λ-definable, then so is 𝖥𝖢𝖱(Φ0, …, Φk)(f0, …, fk). Suppose P0, …, Pk are λ-encodings of the functionals Φ0, …, Φk, and F0, …, Fk are encodings of the functions f0, …, fk. Devise F as

F ≡ λf P0 ⋯ Pk F0 ⋯ Fk R1 ⋯ Rk.
𝗌𝗐𝗂𝗍𝖼𝗁 (𝗁𝖾𝖺𝖽 R1)
𝖼𝖺𝗌𝖾𝖡 (F0 R1 ⋯ Rk)
𝖼𝖺𝗌𝖾𝟢 (𝖼𝗈𝗇𝗌 ⌜0⌝ (f P0 ⋯ Pk F0 ⋯ Fk (𝗍𝖺𝗂𝗅 R1) R2 ⋯ Rk))
𝖼𝖺𝗌𝖾𝟣 (𝖼𝗈𝗇𝗌 ⌜0⌝ (f P0 ⋯ Pk F0′ ⋯ Fk′ R1′ ⋯ Rk′))

where Fi′ ≡ Pi Fi and Ri′ ≡ Fi (𝗍𝖺𝗂𝗅 R1) R2 ⋯ Rk. It follows that Y F is an encoding of the function 𝖥𝖢𝖱(Φ0, …, Φk)(f0, …, fk), with Y being the Y-combinator.
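The role of the Y-combinator can be replayed in Haskell: Data.Function.fix ties the knot over a non-recursive step function, mirroring Y F. The following sketch, for the binary instance and under the types of the Haskell code in this section, is our own rendering:

```haskell
import Data.Function (fix)

data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]
type Fn  = Seq -> Seq -> Seq
type Fnl = Fn -> Fn

-- one unfolding of function space corecursion, abstracted over the
-- recursive occurrence `rec` (the analogue of the bound variable f in F)
step :: Fnl -> Fnl -> Fnl
     -> (Fn -> Fn -> Fn -> Fn)
     -> (Fn -> Fn -> Fn -> Fn)
step phi psi gamma rec f g h a b = case a of
  []     -> f [] b
  Zero:a -> Zero : rec f g h a b
  One:a  -> Zero : rec (phi f) (psi g) (gamma h) (g a b) (h a b)

-- fcr2 recovered as a fixed point, mirroring Y F in Example 20
fcr2 :: Fnl -> Fnl -> Fnl -> Fn -> Fn -> Fn -> Fn
fcr2 phi psi gamma = fix (step phi psi gamma)

-- delayed addition, as before, now via the fixed-point formulation
delay :: Fn
delay = fcr2 id id id (\a b -> b) (\a b -> a) (\a b -> One:b)
```

The behaviour agrees with the explicitly recursive fcr2 of the previous page; only the recursion has been factored out into fix.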

Theorem 21.

All computable corecursive functions are λ-definable.

Proof.

By induction on the class of computable corecursive functions, replacing the interpretation ⟦·⟧ of Definition 5 with an interpretation ⟦·⟧λ that maps terms to their λ-encodings. The base cases are evident, while the encoding of 𝖥𝖢𝖱 is given in Example 20.

7 Conclusions and Further Work

We have seen how function space corecursion and its variants can be used to represent exactly the class of all computable functions. This (co)algebraic approach offers a mathematically elegant way to represent computer programs. The model avoids the intricacies of state transitions and side effects, such as the tape manipulations found in Turing machines, and it accommodates programming features, such as lazy evaluation, that cannot be reflected in partial recursive functions. Mixed induction-coinduction has been used to interpret the coinductive data structure of binary sequences, providing extensionality and allowing us to relate the data structure to program non-termination.

The main contribution of this paper is a formal investigation of the (co)algebraic foundations of the model, together with a proof of its expressive power relative to existing models of computation. We envision that it opens a number of intriguing research perspectives.

The first open question is whether the standard results of recursion theory can also be established for computable corecursive functions. For example, if ϕ is an enumeration of all computable corecursive functions and f is total and recursive, is there an e such that ϕe ≅ ϕf(e)? The main difference from the standard formulation of the recursion theorem is that for computable corecursive functions, ϕe and ϕf(e) also need to agree on all infinite values (modulo extensional equality ≅), as ≅ does not identify all infinite sequences. Similar comments apply to the second recursion theorem and to the s-m-n theorem. Does the normal form theorem extend to computable corecursive functions?

The second intriguing avenue is to explore an equational calculus for asserting extensional equality of computable corecursive functions. Given that the denotations of computable corecursive functions are total functions on intensional natural numbers, the main barrier to the design of such a calculus has been removed. On the other hand, equality between computable functions is not recursively enumerable; we therefore expect this calculus to be non-wellfounded, similar to the exercise conducted for Turing machines [14]. This question is closely related to that of induction principles for intensional natural numbers that we would employ in a calculus for equality: how can we show that f(α) ≅ g(α) for all intensional natural numbers α? We hypothesise that any such principle will combine induction and coinduction, with reference to comparable systems [5].

Another, orthogonal, angle is the computational complexity of a recursive function. Can we relate the complexity of a function f, that is, the number of steps it takes to evaluate f(α), to the length of the representation of f(α)? To do this, we would move from a unary to a binary coding of natural numbers, while still retaining a hiaton, i.e., a token that symbolises a computational step that has not (yet) yielded an observable result. Are there definition principles that relate the length |f(α)| to the number of steps it takes to compute f(α)? Partial recursive functions are rarely used for computational complexity analysis; we envision that computable corecursive functions can bridge this gap.

Finally, what is the “right” set of computable functions on 2^∞? In this paper, we have identified a class of functions on 2^∞ that represents all partial recursive functions. We have shown that for this purpose the function space corecursion operator 𝖥𝖢𝖱 suffices. This operator is special: for example, when representing μf using 𝖥𝖢𝖱, no output digits are produced until a zero of f has been found, as discussed in Example 11. A small set of functions based on function space corecursion may suffice to achieve Turing completeness, and thus further simplify the foundational theory.

References

  • [1] Samson Abramsky and C.-H. Luke Ong. Full abstraction in the lazy lambda calculus. Inf. Comput., 105(2):159–267, 1993. doi:10.1006/inco.1993.1044.
  • [2] Robert Atkey and Conor McBride. Productive coprogramming with guarded recursion. ACM SIGPLAN Notices, 48(9):197–208, 2013. doi:10.1145/2500365.2500597.
  • [3] Henk Barendregt. The lambda calculus - its syntax and semantics, volume 103 of Studies in logic and the foundations of mathematics. North-Holland, 1985.
  • [4] Michael Barr and Charles Wells. Category Theory for Computing Science. Prentice Hall, 1989.
  • [5] Henning Basold and Helle Hvid Hansen. Well-definedness and observational equivalence for inductive-coinductive programs. Journal of Logic and Computation, 29(4):419–468, 2019. doi:10.1093/logcom/exv091.
  • [6] George Mark Bergman. An Invitation to General Algebra and Universal Constructions. Universitext. Springer, Cham, 2nd edition, 2015. doi:10.1007/978-3-319-11478-1.
  • [7] Jasmin Christian Blanchette, Andrei Popescu, and Dmitriy Traytel. Foundational extensible corecursion: a proof assistant perspective. In Kathleen Fisher and John Reppy, editors, Proc. ICFP 2015, pages 192–204. Association for Computing Machinery, 2015. doi:10.1145/2784731.2784732.
  • [8] Ana Bove, Alexander Krauss, and Matthieu Sozeau. Partiality and recursion in interactive theorem provers — an overview. Mathematical Structures in Computer Science, 26(1):38–88, 2014. doi:10.1017/S0960129514000115.
  • [9] Martín Hötzel Escardó. On lazy natural numbers with applications to computability theory and functional programming. SIGACT News, 24(1):61–67, 1993. doi:10.1145/152992.153008.
  • [10] Joel David Hamkins. A survey of infinite time turing machines. In Jérôme Olivier Durand-Lose and Maurice Margenstern, editors, Proc. MCU 2007, volume 4664 of Lecture Notes in Computer Science, pages 62–71. Springer, 2007. doi:10.1007/978-3-540-74593-8_5.
  • [11] Bart Jacobs and Jan Rutten. A Tutorial on (Co)Algebras and (Co)Induction. EATCS Bulletin, 62:222–259, 1997.
  • [12] Dexter Kozen. Theory of Computation. Texts in Computer Science. Springer, London, 2006. doi:10.1007/1-84628-477-5_7.
  • [13] Rasmus Ejlers Møgelberg. A type theory for productive coprogramming via guarded recursion. In Proc. CSL-LICS 2014, pages 1–10, New York, 2014. Association for Computing Machinery. doi:10.1145/2603088.2603132.
  • [14] Dirk Pattinson and Lutz Schröder. Program Equivalence is Coinductive. In Proc. LICS 2016, pages 337–346, New York, 2016. Association for Computing Machinery. doi:10.1145/2933575.2934506.
  • [15] Anton Setzer. Partial Recursive Functions in Martin-Löf Type Theory. In Proc. CiE 2006, volume 3988 of Lecture Notes in Computer Science, pages 505–515. Springer, 2006. doi:10.1007/11780342_51.
  • [16] Joseph Robert Shoenfield. Recursion Theory. Lecture Notes in Logic. Springer, 1993.
  • [17] Tarmo Uustalu and Varmo Vene. Primitive (Co)Recursion and Course of Value (Co)Iteration, Categorically. INFORMATICA (IMI, Lithuania), 10:5–26, 1999. doi:10.3233/INF-1999-10102.
  • [18] Wolfgang Wechler. Universal Algebra for Computer Scientists, volume 25 of EATCS Monographs on Theoretical Computer Science. Springer, 1992. doi:10.1007/978-3-642-76771-5.
  • [19] Klaus Weihrauch. Computable Analysis. Springer, 2000.
  • [20] Glynn Winskel. The formal semantics of programming languages: an introduction. MIT Press, 1993.