A Coinductive Representation of Computable Functions
Abstract
We investigate a representation of computable functions as total functions over {0,1}^∞, the set of finite and infinite sequences over {0,1}. In this model, infinite sequences are interpreted as non-terminating computations, whilst finite sequences represent the sum of their digits. We introduce a new definition principle, function space corecursion, that simultaneously generalises minimisation and primitive recursion. This defines the class of computable corecursive functions, which is closed under composition and function space corecursion. We prove that computable corecursive functions represent all partial recursive functions, and show that all computable corecursive functions are indeed computable by translation into the untyped λ-calculus.
Keywords and phrases:
Computability, Coinduction
2012 ACM Subject Classification:
Theory of computation → Models of computation; Theory of computation → Streaming models; Theory of computation → Recursive functions
Acknowledgements:
The term “hiaton” to describe a token that symbolises a computational step without visible observations was suggested by Thomas Streicher.
Editors:
Corina Cîrstea and Alexander Knapp
Series and Publisher:
Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Partial recursive functions as a model of computation intuitively operate on natural numbers, and have therefore been used as a foundation of recursion theory. A notable drawback, however, is that they are partial – function values may be undefined. This complicates their use for formal reasoning, especially given the lack of support for partial functions in many interactive theorem provers [8], and the embedding of types into partial (recursive) functions requires restrictions to assert well-definedness of values [15].
Coprogramming [2, 7] has been proposed as a way to deal with non-terminating computations. The main idea here is that functions progressively reveal an increasing part of their possibly infinite output. There are plenty of models tailored to specific programming paradigms and even type theories to ensure the coherence of corecursive function definitions [13]. Partial recursive functions, on the other hand, appear to be on the opposite side of the spectrum as they do not produce infinite function values – the result of an infinite computation is simply undefined.
Conceptually, the partiality in the theory of recursive functions arises from the minimisation operator. Given a function f, we compute μf by successively evaluating f(0), f(1), … and return the first n once we find that f(n) = 0. Nonetheless, once it is known that f(0) ≠ 0, we can be sure μf must be no less than 1 if it is defined. Using a unary representation of natural numbers, it is already safe to emit the digit 1. This process can then be repeated for f(1), f(2), … to emit more 1’s whenever we find that f(n) ≠ 0. Once we find that f(n) = 0, the computation terminates.
This leads to the idea of regaining totality by representing partial recursive functions as total functions that operate on the set of finite and infinite sequences of 1’s, where the infinite sequence 1^ω represents the result of an infinite computation, such as the case for μf where f(n) ≠ 0 for all n.
This unary representation of natural numbers, albeit an intuitive coinductive data structure, would not suffice for expressing all computations, at least if we aim for computable functions to be closed under composition. Furthermore, it is necessary to consider the implications when an infinite sequence is taken as a function argument. As an example, consider the function par that determines the parity of its argument, where we represent 0 by the empty sequence and 1 by any non-empty sequence. In order to be total, the computation of par on the infinite sequence of 1’s must, after finitely many steps, either emit the empty sequence as a result and terminate, indicating parity 0, or output an initial digit, which indicates parity 1. At this point, par will have only read a finite section of its input, say the first k digits. This entails that par returns the same result on every input extending those k digits, in particular on inputs of both parities, which is an obvious contradiction. This demonstrates that par does not have a representation as a total computable function on the set of finite and infinite sequences of 1’s. In fact, this exemplifies that “computable functions are continuous” [20, page 73].
We solve this problem by considering computations over {0,1}^∞, the set of finite and infinite sequences over {0,1}. Here, the digit 0 represents a hiaton that can be read as “we do not know yet”. The computation of par on the infinite sequence of 1’s would then be expected to yield the value 0^ω, consistent with the intuition that at any finite stage of the computation, the result is yet unknown.
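To make the hiaton intuition concrete, here is a minimal Haskell sketch (ours, not from the paper; the name par and the reading of a sequence’s value as its number of 1-digits are assumptions) of a total parity function on such sequences:

```haskell
-- Parity on intensional naturals: the value of a sequence is its number of
-- One digits. Each input digit is answered with a hiaton Zero, so par is
-- productive: on an infinite input it yields the infinite sequence of Zeros.
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

par :: Seq -> Seq
par = go False
  where
    go p []        = if p then [One] else []  -- terminate: emit parity in unary
    go p (Zero:xs) = Zero : go p xs           -- hiaton in, hiaton out
    go p (One:xs)  = Zero : go (not p) xs     -- count the digit, emit a hiaton
```

On [One] this yields [Zero, One], a representation of 1; on an infinite stream of Ones it produces only hiatons, matching the expectation that the result is never known.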
This provides the starting point of this computation model – considering functions as programs operating on the coinductive data structure of binary sequences. Our main contribution is a new definition principle that we call function space corecursion, allowing us to represent precisely all partial recursive functions. We show that function space corecursion can be used to express both primitive recursion and minimisation. The interpretation of binary sequences is formally defined using mixed induction-coinduction, which is essential for correlating the behaviour of these corecursive functions with that of partial recursive functions.
Analogous to partial recursive functions, the class of computable corecursive functions contains the zero, successor and projection functions, and is closed under composition and function space corecursion. A principal result of this work is the representation theorem proving the Turing completeness of this model. This comprises the completeness theorem, which proves that any computable program can be represented in this functional model, and the soundness theorem, which asserts that every computable corecursive function is indeed computable, by encoding them in the untyped λ-calculus.
Whilst this paper focuses on the (co)algebraic foundations of the corecursive model and its expressive power, it opens up a large number of new lines of investigation, some of which are discussed in the conclusion.
Related Work
The interpretation of computable functions over the set {0,1}^∞ is closely related to lazy computation, which intuitively allows partial outputs to be emitted before the end result of the computation is known. More formally, identifying sequences with the same number of 1’s leads to the lazy natural numbers, whose computability aspects have been studied in [1, 9]. Function space corecursion, the key principle this paper contributes, is nevertheless novel. Computation on possibly infinite sequences is the domain of type-2 Turing machines [19], but the interpretation is different – each sequence is a representation of a point in a topological space, rather than a trace of an intensional computation process. Infinite-time Turing machines [10] are also different, as they are machine models of computation where ordinal-many steps can be taken.
2 Preliminaries and Notation
We use standard set-theoretic notation and basic facts about algebras and coalgebras of set-endofunctors that do not go beyond the tutorial on (co)algebras and (co)induction [11]. We write × for the Cartesian product and + for the coproduct in the category of sets and functions. Our universal algebra is similarly basic and is covered by [4, 6, 18]. In particular, we use algebraic signatures (i.e., sets of function symbols with arities) and write T_Σ(V) for the set of terms with variables in V that can be constructed from a signature Σ.
We presuppose familiarity with partial recursive functions (see [16]), which we conceptualise as the least class that contains the constant zero, the unary successor function, all k-ary projections to the i-th element, and is closed under primitive recursion as well as minimisation. We write f(x)↓ if f(x) is defined, and f(x)↑ otherwise. To be precise, we take the composition of a partial function f with a tuple of k-ary partial recursive functions g_1, …, g_n to be the function x ↦ f(g_1(x), …, g_n(x)), defined if all of g_1(x), …, g_n(x) are defined, and otherwise undefined. Note that even if all g_i(x) are defined, f might be undefined at (g_1(x), …, g_n(x)). In this case, f(g_1(x), …, g_n(x))↑ holds. Similarly, if g and h are partial recursive functions, the function f given by primitive recursion satisfies
f(0, y) = g(y) and f(n+1, y) = h(n, f(n, y), y)
in case all arguments of h above are defined.
A similar caveat applies to minimisation, where μf(y) = n if:
- f(i, y)↓ and f(i, y) ≠ 0 for all i < n, and
- f(n, y) = 0.
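As a point of reference for the total functions constructed later, classical minimisation can be sketched in Haskell as a search that simply diverges when no zero exists (the name muClassic is ours):

```haskell
-- Classical (partial) minimisation: returns the least n with f n == 0 and
-- loops forever when there is none, mirroring the undefinedness of the value.
muClassic :: (Integer -> Integer) -> Integer
muClassic f = head [n | n <- [0 ..], f n == 0]
```

The divergence of the search is exactly the partiality that the corecursive model replaces by an infinite output sequence.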
Our conventions for the untyped λ-calculus, including natural numbers and if-then-else conditionals, follow Barendregt’s encoding [3]. We write Λ for the set of all untyped λ-terms and denote β-reduction by →_β. Two λ-terms are considered equal up to β-equality.
The empty sequence is denoted by ε. If A is a set, we write A^k for the k-fold Cartesian product of A, A^* for the set of finite sequences with elements in A, and A^ω for the set of infinite sequences over A. The set of finite and infinite sequences with elements in A is denoted by A^∞. For w ∈ A^*, we write w^k for the k-fold concatenation of w with itself, and w^ω for the infinite sequence that arises by concatenating w with itself countably many times.
3 Natural Number and Function Representations
To establish the relationship between partial recursive functions and the new corecursive model, the representation of natural numbers and functions operating on them is formally defined with respect to binary sequences.
Definition 1 (Coinductive Representation of Numbers).
We write {0,1} for the set of binary digits, and {0,1}^∞ for the set of finite and infinite sequences over {0,1} that we call the intensional natural numbers. For x ∈ {0,1}^∞, we write x↑ if x is infinite, and x↓ if x is finite. A finite intensional natural number represents the number given by the sum of its digits. A function f : ({0,1}^∞)^k → {0,1}^∞ represents a partial numeric function g if
- f(x_1, …, x_k)↓ iff g(n_1, …, n_k)↓ whenever each x_i represents n_i for i = 1, …, k, and
- f(x_1, …, x_k) represents g(n_1, …, n_k) in this case.
Clearly, every natural number has countably many representations, and a finite sequence represents a terminating computation. We think of infinite sequences as non-terminating computations, but do not identify them with a single symbol or value, as they can be non-terminating in many different ways. The sequence 1^ω intuitively represents a point at infinity, whereas 0^ω is a non-terminating computation that never reveals any information.
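Under Definition 1, the number represented by a finite sequence is just its digit sum. A small Haskell helper (the name toNat is ours, a sketch) makes this reading explicit:

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- The number represented by a finite intensional natural: its count of Ones.
-- This is a partial reading: applied to an infinite sequence it diverges,
-- just as the represented computation does not terminate.
toNat :: Seq -> Integer
toNat = fromIntegral . length . filter (== One)
```

For example, [One, One], [Zero, One, Zero, One] and [One, Zero, One] are three of the countably many representations of 2.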
It is well known that {0,1}^∞ arises as the final coalgebra for the set-endofunctor X ↦ 1 + {0,1} × X (see e.g., [11]). As such, it supports coinductive definition and proof principles. In particular, we are going to use primitive corecursion in the sequel.
Proposition 2.
Let out : {0,1}^∞ → 1 + {0,1} × {0,1}^∞ be the structure map of the final coalgebra and let X be a set. Then, for all c : X → 1 + {0,1} × ({0,1}^∞ + X), there exists a unique f : X → {0,1}^∞ such that out ∘ f = (id + id × [id, f]) ∘ c.
For the proof, see [17]. The unique function in the proposition above is known as an apomorphism in the literature. Spelling this out, this means that for every pairwise disjoint partition and all functions , and , there is a uniquely defined function that satisfies the equations on the left below.
This allows us to define a version of addition for intensional natural numbers by concatenating both sequences by the equations on the right. We note that the addition defined above is extensional in the sense that if arguments just differ in zeros, the same is true for the result.
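The concatenating addition can be sketched in Haskell (the name addSeq is ours); it shows why primitive corecursion suffices, as each step inspects only the head of the first argument:

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Addition by concatenation: every digit of the first argument is copied to
-- the output before the second argument is touched, so the definition is
-- productive even when the first argument is infinite.
addSeq :: Seq -> Seq -> Seq
addSeq []      b = b
addSeq (x : a) b = x : addSeq a b
```

Extensionality is visible here: inserting extra 0’s into either argument only inserts extra 0’s into the result.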
More formally, we have the following notion of extensional equivalence.
Definition 3 (Extensional equivalence).
The relation of extensional equivalence is given as the mixed fixpoint defined by the inductive rules (indicated with single-line fractions) and the co-inductive rules (written with double-line fractions) below where .
More precisely, extensional equivalence is the mixed fixpoint such that
and as the analogously defined operator using the coinductive rules, where and are the greatest and least fixpoints of a monotone operator .
We say that a function f is extensional if f(x_1, …, x_k) and f(y_1, …, y_k) are extensionally equivalent whenever x_i and y_i are, for i = 1, …, k, and a functional is extensional if its value is extensional whenever its argument is.
One way to read this definition is that two intensional natural numbers are extensionally equal if there is a possibly infinite derivation, using both inductive and coinductive rules, where there are only finitely many applications of inductive rules between any two coinductive rules. For finite sequences, extensionally equal numbers represent the same natural numbers.
Proposition 4.
Let x, y ∈ {0,1}^∞ be extensionally equivalent. Then, x is finite if and only if y is finite. Moreover, if x and y are finite and represent m and n, respectively, then m = n.
Proof.
The sequence x is finite only if the inductive and coinductive rules are applied finitely many times and ultimately the axiom for the empty sequence is applicable. The argument is analogous when y is finite.
Assuming x and y are extensionally equivalent and finite, the coinductive rule, which relates 1u and 1v whenever u and v are related, is applied finitely many times. The same number of 1’s must be observed in x and y. Hence, x and y must represent the same natural number.
For extensional functions, we can safely use the intensional natural numbers interchangeably with any of the corresponding binary sequences.
4 Function Space Corecursion
Function space corecursion is a definition principle similar to primitive recursion, which allows us to generalise both primitive recursion and minimisation operating on intensional natural numbers. The intuition is best explained in comparison to primitive recursion. A definition of f by primitive recursion relies on two auxiliary (or already defined) functions: g, which defines the value when the first argument is zero, and h, which is used in the successor case.
Function space corecursion defines a function of type in terms of functions that are all of the arity . The first function yields the result if the first argument is the empty sequence :
The remaining functions simultaneously transform all arguments in a recursive call, where :
The crucial point of function space corecursion, and its main difference to primitive recursion, is that all functions can change in every recursive call. That means, we stipulate the equations
where and are the changed versions of . We express this change by postulating that for an operator . The functionals then become parameters of a definition by function space corecursion. The following definition makes this precise in terms of intensional natural numbers.
Definition 5 (Function space corecursion).
Let and fix functionals . We say that the function is defined by
where , , and such that is defined by function space corecursion using the functionals and initial functions .
Remark 6 (Variations).
There are many different possible ways to define function space corecursion. Some variations are to limit the number of functionals, possibly hard-coding the transformation of some of the arguments or being more liberal about the arities of the . It may also allow to depend on all of , i.e., to change the type of to . Another variation is isotone function space corecursion defined like function space corecursion, but with the last line (and the name) changed to
whilst the remaining clauses remain the same as in Definition 5. This operator is isotone, as it evidently increases the number of 1’s in the function value, but it is clearly not complete in the sense of being able to define all partial recursive functions. The constant zero function, for instance, is not definable. Other variations include mixed function space corecursion that combines both plain and isotone function space corecursion. Here we stipulate that:
The second argument controls whether a non-zero digit will be emitted, and generalises both and by fixing the second argument. We mainly focus on as this suffices to represent all partial recursive functions. Nevertheless, all our arguments are easily adapted to .
We make two simple observations before we turn to (lots of) examples. First, function space corecursion is indeed uniquely defined.
Proposition 7.
The function is total and uniquely defined by the equations in Definition 5.
Proof.
We use Proposition 2. Let , which is the uncurried domain of . Suppose , and where . Let be defined as follows:
where and for any . Proposition 2 now guarantees the unique existence of a function that satisfies the equations stipulated in Definition 5. The proof readily adapts to show the uniqueness of and where .
The fact that 0’s are just passed through and do not affect the computation implies that functions defined using function space corecursion are always extensional (Definition 3).
Proposition 8.
If and all of and are extensional, then so is where and .
Proof.
Let be the set of all extensional equivalence relations. Consider the set
where , , and . The goal is to prove . By the Knaster-Tarski theorem [12], it suffices to show with and as in Definition 3. Let . This proof goal clearly holds if .
Now consider and with , that is, for all . We have , that is, we can perform induction on the inductive part of extensional equivalence to show that , with for and as base cases in addition to .
If , we have and . By assumption, is extensional so as required.
If and with , we have
by definition of and . Similarly, if and with , we have
with for , , and . By definition of as well as the extensionality of and , this implies .
If , we have by induction hypothesis that
Unfolding the definition of , and applying an inductive rule, this also implies
The case where is symmetric.
Example 9.
The addition of intensional natural numbers can be defined in two different ways. Using isotone function space corecursion, we stipulate , and . With the identity functionals and , we have:
In other words, this is the coinductive version of the usual definition of natural number addition. It resembles the primitive recursive addition defined by add(0, m) = m and add(n+1, m) = add(n, m) + 1, by recursion on the first argument.
Alternatively, we can use the plain function space corecursion where occurrences of ’s in the first argument are accumulated in the second argument in every recursive call. That is, we consider as above, and as before, but put instead. Consequently, satisfies the equations:
This presents as an alternative version of the coinductive addition, defined by and . In functional programming terms, is a little more lazy, as for any infinite sequence , we have whereas .
The example above is rather trivial as the function parameters do not change in recursive calls: all the functionals were identities. The next example makes use of changing parameters.
Example 10.
Consider the functionals where and are arbitrary. Then, for and we have
where , and .
In other words, this is the predecessor function that removes the first occurrence of 1 from its argument if it exists. It is intriguing to note that the complexity of the predecessor, unlike its primitive recursive counterpart, is not linear but instead depends on the representation of the argument.
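Written directly in Haskell rather than via the corecursion scheme, this predecessor (the name predSeq is ours, a sketch) reads as follows:

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Remove the first One, if any; leading hiatons are passed through unchanged.
-- The number of steps before the One is found depends on the representation
-- of the argument, not only on the number it denotes.
predSeq :: Seq -> Seq
predSeq []         = []
predSeq (Zero : a) = Zero : predSeq a
predSeq (One : a)  = a
```

Because hiatons are copied one at a time, the function is total and productive, yielding 0^ω on the input 0^ω.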
Example 11.
Let be extensional. Consider the functionals and functions such that:
where and
for .
It follows that:
| if | ||||
| otherwise |
Continuing in the same way, we have:
| if | ||||
| otherwise |
By observing that is incremented at each iteration, we see and could easily prove by induction that the function
satisfies the pattern whenever:
-
,
-
for all , and
-
represents a non-zero value for all .
If is a representation of a possibly partial numeric function , it follows that is a representation of , and therefore is extensional by Proposition 8. Using isotone function space corecursion, we obtain a lazy and slightly simpler variant of minimisation, given by
where all functionals and functions are the same as above with the exception of .
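Specialised to an ordinary numeric function, the lazy minimisation of this example can be sketched in Haskell as follows (the name muLazy is ours; we emit one 1 per unsuccessful test, as in the isotone variant):

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- Lazy minimisation: one One is emitted for every n with f n /= 0, so the
-- output represents the least zero of f in unary, and is the infinite
-- sequence of Ones when f has no zero. The search is total and productive.
muLazy :: (Integer -> Integer) -> Seq
muLazy f = go 0
  where
    go n | f n == 0  = []
         | otherwise = One : go (n + 1)
```

In contrast to muClassic-style partial minimisation, a function without a zero yields 1^ω rather than an undefined value.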
Our next, last, and most complex example shows how to obtain primitive recursion from function space corecursion.
Example 12.
Assume that and are given and extensional representations of total numeric functions and . We construct a representation of , that is, a function . We do this by defining a -ary function by function space corecursion, and then putting . For the definition of , consider the functionals given by
where is prefixing of with , and is the predecessor function from Example 10. Now consider the initial functions
and let . If , and , whereas and are sequences of identity functions and functionals, we have :
In other words, it satisfies the equations of the function defined by primitive recursion, i.e., f(0, y) = g(y) and f(n+1, y) = h(n, f(n, y), y). That is, the function is a representation of the primitive recursion of the represented functions.
In the example above, we obtain a representation of as we have assumed that and are total. For partial functions, this is not necessarily true as does not in general represent , even if and represent and , respectively. We return to this when we show that function space corecursion indeed defines all computable functions.
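The shape of the primitive recursion obtained here can be sketched directly in Haskell (our names; a simplification of the construction in the example, with hiatons handled explicitly):

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- primrec z s satisfies the usual equations: the base case z on the empty
-- sequence, and the step function s applied to the predecessor and the
-- recursive value on a One. Hiatons are passed through, and a hiaton is
-- emitted before each step, keeping the function productive on all inputs.
primrec :: Seq -> (Seq -> Seq -> Seq) -> Seq -> Seq
primrec z s = go
  where
    go []         = z
    go (Zero : n) = Zero : go n
    go (One : n)  = Zero : s n (go n)
```

For instance, doubling arises as primrec [] (\_ r -> One : One : r): applied to a representation of 2, the result contains four 1’s, interleaved with the emitted hiatons.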
5 Computable Corecursive Functions
The function space corecursion operator offers a lot of flexibility for how the functionals and functions are constructed. To define the class of computable corecursive functions, we need to be precise about how, and in particular with which functionals, the operator is used. Similar to partial recursive functions, computable corecursive functions form an inductively defined class, and every element of this class denotes a k-ary function of type ({0,1}^∞)^k → {0,1}^∞. As for primitive recursive functions, we often identify a k-ary function symbol with an actual function. The challenge in the definition of computable corecursive functions is precisely which functionals to permit in the definition of function space corecursion. An operator is permissible if it can be “written” in terms of its arguments and previously defined computable corecursive functions.
The definition of computable corecursive functions makes this precise by representing the functionals in the definition of by terms. To define a functional , we need to specify for , and to define we need to prescribe for a generic . In other words, we stipulate that is an expression that uses already defined functions, the function and the arguments . That is, we can represent , and hence itself, by a term in variables over a signature that contains symbols for already defined computable corecursive functions, plus an extra -ary symbol .
Definition 13 (Computable corecursive functions).
The class of computable corecursive functions is the least class that contains the constant zero function of arity , the successor function of arity , the projection functions of arity for and is closed under
- function composition,
-
i.e., if is -ary and are -ary, and all of are computable corecursive, then so is , and
- function space corecursion,
-
that is, if is a set of computable corecursive functions and , then is a computable corecursive function of arity , provided all of are -ary and computably corecursive, and where is a fresh -ary function symbol.
Every -ary computable corecursive function is a term that we often identify with its denotation given as follows. We have , , and . For function space corecursion, the denotation of is given in the standard way by induction on the structure of , that is
where we use homonymously as a function symbol and its interpretation, and . With this,
where function space corecursion is the same as in Definition 5.
When requiring that , we use already defined, -ary computable corecursive functions as -ary function symbols. Unfolding the definition of the term algebra , we have that is either a variable, or is of the form where is a function symbol. All are already defined terms . When interpreting as a functional , we interpret a function symbol as the argument of the interpretation of , and a function symbol as (conceived as a function).
We have seen that function space corecursion can simulate both minimisation and primitive recursion. This does not suffice to show that every partial recursive function can be represented by a computable corecursive function – the composition of partial recursive functions differs from the composition of computable corecursive functions, as the latter are total. Consider as an example the constant zero function, which represents the numeric constant zero, and a unary computable corecursive function that represents the totally undefined numeric function. Composing the two does not represent the composition of the numeric functions: the numeric composite is undefined everywhere, whilst the corecursive composite still returns a representation of zero.
To establish a representation theorem, we therefore need a somewhat unnatural manoeuvre that deliberately limits the definedness of a function in order to represent the composition of partial functions using function space corecursion. The key observation is that the constant zero function also has a different representation that yields a finite sequence on any finite argument and an infinite sequence on any infinite argument. We start by showing that this representation is computable corecursive.
Lemma 14.
The function that replaces every digit of its argument by 0, thereby mapping finite sequences to representations of zero and infinite sequences to infinite sequences, is computable corecursive.
Proof.
Let where is the identity functional, and is the identity function. Then
It is evident that this is the denotation of a computable corecursive function, with the required term representation built from the evidently computable corecursive identity function.
Our strategy for showing that the composition of partial recursive functions can again be represented by a computable corecursive function is to prefix this all-zero sequence to the composition, thus making sure that the composition is undefined (i.e., an infinite sequence) if one of the arguments is undefined.
Lemma 15.
Suppose that and are computably corecursive. If represents and represents , then there is a computably corecursive function that represents .
Proof.
We write for and put . Here, is as in Lemma 14, and it is clear that the resulting function represents the composition.
We have now collected all the ingredients for our representation theorem.
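The idea behind the lemma can be sketched in Haskell (the names zer and comp1 are ours; comp1 is a simplification that composes unary functions only):

```haskell
data Bit = Zero | One deriving (Eq, Show)
type Seq = [Bit]

-- zer replaces every digit by a hiaton: finite inputs become finite all-Zero
-- sequences (representations of 0), infinite inputs stay infinite.
zer :: Seq -> Seq
zer []      = []
zer (_ : a) = Zero : zer a

-- Guarded composition: prefixing zer (h x) to g (h x) adds no Ones, but makes
-- the composite infinite (undefined) whenever the inner result is infinite.
comp1 :: (Seq -> Seq) -> (Seq -> Seq) -> Seq -> Seq
comp1 g h x = zer hx ++ g hx
  where hx = h x
```

With g the constant zero function and h producing an infinite sequence, comp1 g h yields an infinite sequence, exactly the behaviour required for representing the composition of partial functions.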
Theorem 16.
Every partial recursive function has a computable corecursive representation.
Proof.
For total functions and that are represented by , we have that represents . It then follows from Example 12 that all primitive recursive functions have a representation. To obtain a representation for all partial recursive functions, we use Kleene’s normal form theorem and Example 11 where we implement the composition of partial functions using Lemma 15.
6 Soundness
We have shown that every partial recursive function is represented by the denotation of a computable corecursive function. We could argue that, by Church’s thesis, this implies computable corecursive functions represent the class of all computable functions, as they are intuitively computable. Indeed, it is obvious how to implement an instance of function space corecursion in the Haskell programming language. The following shows how the binary instance fcr2 can be defined.
-- finite and infinite lists
data Bit = Zero | One deriving (Eq, Show); type Seq = [Bit]
-- functions and functionals for an instance of FCR with two arguments
type Fn = Seq -> Seq -> Seq
type Fnl = Fn -> Fn
-- function space corecursion for binary functions
fcr2 :: Fnl -> Fnl -> Fnl -> Fn -> Fn -> Fn -> Fn
fcr2 phi psi gamma f g h a b = case a of
  []     -> f [] b
  Zero:a -> Zero : fcr2 phi psi gamma f g h a b
  One:a  -> Zero : fcr2 phi psi gamma f' g' h' a' b'
    where f' = phi f; g' = psi g; h' = gamma h; a' = g a b; b' = h a b
The delayed addition and predecessor functions of the previous section are then easily defined as follows.
-- delayed addition
delay :: Fn
delay = fcr2 id id id (\a b -> b) (\a b -> a) (\a b -> One:b)

-- auxiliary function to define predecessor
pred' :: Fn
pred' = fcr2 (\f -> \a b -> b) id id (\a b -> a) (\a b -> []) (\a b -> a)
While the argument is certainly convincing, we instead show that all computable corecursive functions are λ-representable, starting with the encoding of finite and infinite sequences of digits as untyped λ-terms. In addition to the binary digits, we introduce a blank symbol to mark the end of a binary sequence.
Definition 17.
Define such that , and . A term encodes a finite binary sequence whenever for all , and for . A term encodes an infinite binary sequence whenever for all .
As mentioned previously, this is a mapping from natural numbers to their λ-encoding. Barendregt’s formulation [3] or any alternative encoding that provides the basic functions (e.g., predecessor and successor) works equally well.
Example 18.
The empty sequence can be encoded as because . An infinite sequence of ’s can be encoded as . In fact, we can define a concatenation operator by
where checks whether encodes the natural number zero, and decrements a natural number. The and functions are simply and , respectively.
To facilitate case distinctions between the binary digits and the blank symbol, we introduce syntactic sugar for the case construct. The encodings of the blank symbol and the binary digits effectively function as truth values for the case distinction. For example,
Definition 19 (-definability of functions).
A computable corecursive function is λ-definable if there exists a λ-term that maps encodings of the arguments to the encoding of the function value as per Definition 17. A functional is λ-definable if there exists a λ-term that maps an encoding of a function to an encoding of its image under the functional.
Example 20.
If functionals and functions are λ-definable, then so is the function defined from them by function space corecursion. Suppose are the λ-encodings of the functionals , and are the encodings of the functions . Devise as
where and . It follows that is the encoding of the function, with the fixed-point combinator.
Theorem 21.
All computable corecursive functions are λ-definable.
Proof.
7 Conclusions and Further Work
We have seen how function space corecursion and its variants can be used to represent exactly the class of all computable functions. This (co)algebraic approach offers a mathematically elegant way to represent computer programs. This computation model avoids the intricacies of state transitions and side effects such as tape manipulations found in Turing machines; it also enables certain programming features (e.g., lazy evaluation) that cannot be reflected in partial recursive functions to be introduced. Mixed induction-coinduction has been used for interpreting the coinductive data structure of binary sequences, providing extensionality and allowing us to correlate the data structure to program non-termination.
The major result of this paper has been formally investigating the (co)algebraic foundations of the model, along with proving its expressive power in relation to existing models of computation. It is envisioned to open a number of intriguing research perspectives.
The first open question is: can we establish the standard results of recursion theory also for computable corecursive functions? For example, given an enumeration of all computable corecursive functions and a total recursive transformation of indices, is there a fixed point in the sense of the recursion theorem? The main difference from the standard formulation of the recursion theorem is that for computable corecursive functions, the two functions also need to agree on all infinite values (modulo extensional equality), as extensional equality does not identify all infinite sequences. Similar comments apply to the second recursion theorem and to the s-m-n theorem. Does the normal form theorem extend to computable corecursive functions?
The second intriguing avenue is to explore an equational calculus for asserting extensional equality of computable corecursive functions. Given that the denotation of computable corecursive functions are total functions on intensional natural numbers, the main barrier for the design of an equational calculus has been removed. On the other hand, equality between computable functions is not recursively enumerable. We, therefore, expect this calculus to be non-wellfounded, similar to the exercise conducted for Turing machines [14]. This question is closely related to that of induction principles for intensional natural numbers that we would employ in a calculus for equality. In other words, how can we show that for all intensional natural numbers ? We hypothesise that any principle will combine induction and coinduction with reference to comparable systems [5].
Another orthogonal angle is that of the computational complexity of a recursive function. Can we establish a linkage between the complexity of a function, that is, the number of steps it takes to evaluate it on a given argument, and the length of the representation of the result? To do this, we would move from a unary to a binary coding of natural numbers, and still retain a hiaton, i.e., a token that symbolises a computational step that has not (yet) yielded an observable result. Are there definition principles that relate the length of the output to the number of steps it takes to compute it? Partial recursive functions are rarely used for computational complexity analysis; we envision computable corecursive functions can bridge this gap.
Finally, what is the “right” set of computable functions on {0,1}^∞? In this paper, we have identified a class of functions on {0,1}^∞ that represents all partial recursive functions. We have shown that for this purpose, the function space corecursion operator suffices. This operator is special: for example, when representing minimisation with it, no output digits are produced until a zero of the minimised function has been found, as we have discussed in Example 11. A small set of functions based on function space corecursion may suffice to achieve Turing completeness, and thus further simplify the foundational theory.
References
- [1] Samson Abramsky and C.-H. Luke Ong. Full abstraction in the lazy lambda calculus. Inf. Comput., 105(2):159–267, 1993. doi:10.1006/inco.1993.1044.
- [2] Robert Atkey and Conor McBride. Productive coprogramming with guarded recursion. ACM SIGPLAN Notices, 48(9):197–208, 2013. doi:10.1145/2500365.2500597.
- [3] Henk Barendregt. The lambda calculus - its syntax and semantics, volume 103 of Studies in logic and the foundations of mathematics. North-Holland, 1985.
- [4] Michael Barr and Charles Wells. Category Theory for Computing Science. Prentice Hall, 1989.
- [5] Henning Basold and Helle Hvid Hansen. Well-definedness and observational equivalence for inductive-coinductive programs. Journal of Logic and Computation, 29(4):419–468, 2019. doi:10.1093/logcom/exv091.
- [6] George Mark Bergman. An Invitation to General Algebra and Universal Constructions. Universitext. Springer, Cham, 2nd edition, 2015. doi:10.1007/978-3-319-11478-1.
- [7] Jasmin Christian Blanchette, Andrei Popescu, and Dmitriy Traytel. Foundational extensible corecursion: a proof assistant perspective. In Kathleen Fisher and John Reppy, editors, Proc. ICFP 2015, pages 192–204. Association for Computing Machinery, 2015. doi:10.1145/2784731.2784732.
- [8] Ana Bove, Alexander Krauss, and Matthieu Sozeau. Partiality and recursion in interactive theorem provers — an overview. Mathematical Structures in Computer Science, 26(1):38–88, 2014. doi:10.1017/S0960129514000115.
- [9] Martín Hötzel Escardó. On lazy natural numbers with applications to computability theory and functional programming. SIGACT News, 24(1):61–67, 1993. doi:10.1145/152992.153008.
- [10] Joel David Hamkins. A survey of infinite time Turing machines. In Jérôme Olivier Durand-Lose and Maurice Margenstern, editors, Proc. MCU 2007, volume 4664 of Lecture Notes in Computer Science, pages 62–71. Springer, 2007. doi:10.1007/978-3-540-74593-8_5.
- [11] Bart Jacobs and Jan Rutten. A Tutorial on (Co)Algebras and (Co)Induction. EATCS Bulletin, 62:222–259, 1997.
- [12] Dexter Kozen. Theory of Computation. Texts in Computer Science. Springer, London, 2006. doi:10.1007/1-84628-477-5_7.
- [13] Rasmus Ejlers Møgelberg. A type theory for productive coprogramming via guarded recursion. In Proc. CSL-LICS 2014, pages 1–10, New York, 2014. Association for Computing Machinery. doi:10.1145/2603088.2603132.
- [14] Dirk Pattinson and Lutz Schröder. Program Equivalence is Coinductive. In Proc. LICS 2016, pages 337–346, New York, 2016. Association for Computing Machinery. doi:10.1145/2933575.2934506.
- [15] Anton Setzer. Partial Recursive Functions in Martin-Löf Type Theory. In Proc. CiE 2006, volume 3988 of Lecture Notes in Computer Science, pages 505–515. Springer, 2006. doi:10.1007/11780342_51.
- [16] Joseph Robert Shoenfield. Recursion Theory. Lecture Notes in Logic. Springer, 1993.
- [17] Tarmo Uustalu and Varmo Vene. Primitive (Co)Recursion and Course of Value (Co)Iteration, Categorically. INFORMATICA (IMI, Lithuania), 10:5–26, 1999. doi:10.3233/INF-1999-10102.
- [18] Wolfgang Wechler. Universal Algebra for Computer Scientists, volume 25 of EATCS Monographs on Theoretical Computer Science. Springer, 1992. doi:10.1007/978-3-642-76771-5.
- [19] Klaus Weihrauch. Computable Analysis. Springer, 2000.
- [20] Glynn Winskel. The formal semantics of programming languages: an introduction. MIT Press, 1993.
