Strong Induction Is an Up-To Technique
Abstract
Up-to techniques are enhancements of the coinduction proof principle which, in lattice-theoretic terms, is the dual of induction. What is the dual of coinduction up-to? By means of duality, we illustrate a theory of induction up-to and observe that an elementary proof technique, commonly known as strong induction, is an instance of induction up-to. We also show that, when generalising our theory from lattices to categories, one obtains an enhancement of the induction definition principle known in the literature as comonadic recursion.
Keywords and phrases:
Induction, Coinduction, Up-to Techniques, Induction up-to, Lattices, Algebras
Funding:
Filippo Bonchi: Supported by the Ministero dell’Università e della Ricerca of Italy grant PRIN 2022 PNRR No. P2022HXNSC - RAP (Resource Awareness in Programming).
2012 ACM Subject Classification:
Theory of computation → Proof theory; Theory of computation → Logic and verification
Acknowledgements:
The authors would like to thank Jurriaan Rot for the inspiring discussions.
Funding:
This study was carried out within the National Centre on HPC, Big Data and Quantum Computing - SPOKE 10 (Quantum Computing) and received funding from the European Union Next-GenerationEU - National Recovery and Resilience Plan (NRRP) – MISSION 4 COMPONENT 2, INVESTMENT N. 1.4 – CUP N. I53C22000690001. This research was partly funded by the Advanced Research + Invention Agency (ARIA) Safeguarded AI Programme.
Editors:
Jörg Endrullis and Sylvain Schmitz
1 Introduction
Induction is a fundamental tool frequently used by mathematicians, logicians, and computer scientists without much thought. It includes both a definition and a proof principle. The definition principle allows for the specification of data types, such as natural numbers, lists, or trees, and for the definition of functions on them; the proof principle enables proving properties of such inductively defined structures.
The coinduction proof principle, which is formally the dual of induction, is less familiar. It first emerged in the 1970s [33] in three independent fields: set theory [14], modal logic [40], and concurrency theory [27]. Since then, it has been recognized as a fundamental principle in computer science and has been applied in various contexts [24, 1, 11, 16, 12, 31, 25, 17, 22].
Up-to techniques are enhancements of the coinduction proof principle, originally introduced by Milner in [23] to simplify coinductive arguments. Coinduction up-to has proven useful, if not essential, in numerous proofs about concurrent systems (see [30] for references). It has been used to establish decidability results [9], improve standard automata algorithms [6], and prove the completeness of domains in abstract interpretation [3].
The theory of up-to techniques was initially developed by Sangiorgi [32] and later generalized to the abstract setting of complete lattices by Pous [28, 29]. In particular, Pous introduced the notion of compatible techniques for a given monotone map. Intuitively, these are sound up-to techniques with advantageous composition properties.
A curiosity that may have occurred to many is the following:
Since coinduction is the dual of induction, what is the dual of coinduction up-to?
In this paper, we introduce a theory of induction up-to by simply dualizing the work of Pous [28]. Our main finding is that the well-known principle of strong induction over the set of natural numbers, a principle familiar even to undergraduate students, is an example of an inductive up-to technique.
More precisely, we dualize the notion of -compatible techniques from [28] to that of -cocompatible (Definition 5) up-to techniques, which, as expected, are sound (Theorem 7) and enjoy good composition properties (Proposition 8). We show that, under mild conditions, any proof by coinduction up-to can equivalently be carried out by means of induction up-to, and vice versa (Proposition 11).
For any continuous monotone map f, its down-closure, denoted by f↓, is always f-cocompatible (Corollary 9). We name induction up-to f↓ strong induction since, when f is the monotone map whose least fixed point is the set of natural numbers, induction up-to f↓ coincides with the usual strong induction over the natural numbers (Section 6.1). Unsurprisingly, the same approach can be applied to obtain strong induction on words (Section 6.2) and other inductive data types.
Overall, this shows that induction up-to, and in particular strong induction, provide enhancements for the induction proof principle. At this point, another curiosity arises:
What are enhancements of the induction definition principle?
Intuitively, one can think of recursion schemes as enhancements of the induction definition principle: they ensure that the specification of a recursive function is well-defined.
To formalize this intuition, we exploit the fact that Pous’ theory [28] has a beautiful categorical meaning [5]: when generalizing this theory from lattices to categories, one obtains Bartels’ λ-coinduction [2], an enhancement of the coinduction definition principle that generalizes several specification techniques common in computer science [36, 20], notably the abstract GSOS by Turi and Plotkin [37, 21].
We illustrate that, following the same pattern, the theory of induction up-to generalises to a certain recursion scheme known as comonadic recursion by Capretta, Uustalu, Vene and Pardo [39, 8] (Proposition 18). In particular, strong induction generalises to a scheme known as course-of-value iteration [38, 7] (Proposition 20). These correspondences are summarised in Table 1.
Related Work
An elegant theory of inductive enhancements has recently been introduced by Sangiorgi in [35]. This theory substantially differs from ours in its goals: Sangiorgi aims to enhance the proof methods for those behavioural equivalences and preorders [41, 42], such as trace, failure, and ready, that are defined inductively. These relations can usually be stratified, and the proposed inductive enhancements are functions on relations preserving such stratification. As in the theory of coinductive enhancements [32, 28], the starting notion is that of a (semi-)progression, and the enhancements are up-closures; this is a crucial difference from our inductive up-to techniques, which are down-closures.
Synopsis
We begin our exposition in Section 2 with a simple example of a proof by strong induction for the Fibonacci sequence. We recall the lattice-theoretic understanding of induction and coinduction in Section 3 and the theory of coinductive up-to techniques in Section 4; its dual, the theory of inductive up-to techniques, is illustrated in Section 5. In particular, Section 5.1 links coinduction up-to and induction up-to by means of an involution operator; Example 12 illustrates how, following this link, the coinductive technique of up-to equivalence becomes up-to apartness. Section 6 introduces strong induction as a certain up-to technique. The fact that this generalises strong induction on the natural numbers is not obvious, and its proof is detailed in Section 6.1. Strong induction on words is discussed in Section 6.2. In Section 7, we turn from the proof principle to the definition principle: Section 7.1 quickly recalls the induction definition principle by means of initial algebras; Section 7.2 recalls comonadic recursion and Section 7.3 course-of-value iteration, a special case of comonadic recursion. Finally, Section 7.4 revisits the Fibonacci sequence, recalling that it is defined by means of course-of-value iteration. The appendix contains the missing proofs and some additional material.
Until Section 7, the reader needs only some familiarity with lattice theory. From then on, we also expect familiarity with category theory.
2 Motivating Example: the Fibonacci sequence
Induction is a proof principle that applies to inductively defined structures. For instance, to prove that a predicate P holds for all natural numbers n, one has to find a predicate Q that implies P, that is true for 0 and that, when true for n, is true for n + 1.
(1)
   Q(0)        Q(n) ⇒ Q(n + 1) for all n        Q ⇒ P
   ──────────────────────────────────────────────────
                  P(n) for all n ∈ ℕ
Sometimes, induction might be too weak to prove certain properties. As an example, consider the Fibonacci sequence defined as fib(0) = 1, fib(1) = 1 and fib(n + 2) = fib(n + 1) + fib(n),
and suppose that one would like to prove that fib(n) ≤ 2^n for all n ∈ ℕ. One could start a proof by induction by checking the base case, fib(0) = 1 ≤ 2^0. For the inductive case, one would need to bound fib(n + 1) = fib(n) + fib(n − 1). At this point, one would be stuck because there is no information on fib(n − 1) from the inductive hypothesis.
Strong induction comes to our rescue by allowing a stronger inductive hypothesis. We still need to find a predicate Q that implies P and that holds at 0, but when showing that it is true for n + 1, we may assume that it holds for all k ≤ n.
(2)
   Q(0)        (Q(k) for all k ≤ n) ⇒ Q(n + 1)        Q ⇒ P
   ────────────────────────────────────────────────────────
                     P(n) for all n ∈ ℕ
We conclude the proof of fib(n) ≤ 2^n for all n ∈ ℕ by strong induction. For n = 0, fib(0) = 1 ≤ 2^0. For the inductive step, we assume that fib(k) ≤ 2^k for all k ≤ n, and we seek to bound fib(n + 1). If n + 1 = 1 then fib(1) = 1 ≤ 2^1. Otherwise, if n + 1 ≥ 2, we use the strong inductive hypothesis: fib(n + 1) = fib(n) + fib(n − 1) ≤ 2^n + 2^(n−1) ≤ 2^(n+1).
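The argument can be machine-checked on small instances. A minimal sketch, under two assumptions that the extraction garbled: the convention fib(0) = fib(1) = 1, and the bound fib(n) ≤ 2^n as an illustrative stand-in for the exact property being proved.

```python
def fib(n: int) -> int:
    # Course-of-value recursion: fib(n) uses the two previous values.
    # The convention fib(0) = fib(1) = 1 is an assumption.
    if n <= 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# The bound argued by strong induction above (an illustrative choice):
assert all(fib(n) <= 2 ** n for n in range(20))
```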
We want to draw the reader’s attention to the fact that, here, strong induction is necessary because fib is not, strictly speaking, inductively defined: its value at n + 2 does not depend only on its value at n + 1. We will revisit the relationship between definitions and proofs in Section 7. Until then, we will focus only on the proof principles.
3 Preliminaries and notation
A complete lattice is a partially ordered set (L, ≤) with all least upper bounds ⋁ and greatest lower bounds ⋀; in particular, it has binary joins ∨ and meets ∧, a top element ⊤ and a bottom element ⊥. Henceforth, we use L to range over complete lattices and x, y to range over their elements. One lattice that we will often use is P(X), the power set of a set X, ordered by set inclusion.
Recall that a function f : L → L is said to be a monotone map if it preserves the order: for all x, y ∈ L, if x ≤ y then f(x) ≤ f(y). The identity id and the composition g ∘ f of two monotone maps f and g are monotone. Therefore, if f is a monotone map, then its powers f^n are also monotone, where the functions f^n are defined inductively as
(3)   f^0 = id        f^(n+1) = f ∘ f^n
We will implicitly use the fact that monotone maps form a complete lattice with their natural point-wise order: whenever we write f ≤ g we mean f(x) ≤ g(x) for all x ∈ L.
A monotone map c is an up-closure operator if id ≤ c and c ∘ c ≤ c. It is a down-closure operator if c ≤ id and c ≤ c ∘ c. Particularly relevant to our exposition are the up-closure f↑ and the down-closure f↓ generated by a (co)continuous map f, namely a monotone map preserving arbitrary least upper bounds and greatest lower bounds, respectively:
(4)   f↑(x) = ⋁_{n ∈ ℕ} f^n(x)        f↓(x) = ⋀_{n ∈ ℕ} f^n(x)
Given a monotone map f, the element x is said to be a post-fixed point iff x ≤ f(x); a pre-fixed point iff f(x) ≤ x; a fixed point iff f(x) = x. We write μf and νf for the least and the greatest fixed point. For a monotone map f on a complete lattice L, the Knaster-Tarski fixed point theorem characterises μf as the greatest lower bound of all pre-fixed points of f and νf as the least upper bound of all its post-fixed points:

μf = ⋀ {x ∈ L : f(x) ≤ x}        νf = ⋁ {x ∈ L : x ≤ f(x)}

This immediately leads to the induction and coinduction proof principles, illustrated by the inference rules below, on the left and on the right, respectively [26].
(5)
   f(x) ≤ x    x ≤ y                 y ≤ x    x ≤ f(x)
   ─────────────────                 ─────────────────
        μf ≤ y                            y ≤ νf
The induction proof principle states that in order to prove that μf ≤ y, one should provide an inductive invariant x, namely a pre-fixed point of f, that is below y; dually, the coinduction proof principle states that in order to prove that y ≤ νf, one should provide a coinductive invariant, i.e., a post-fixed point of f, that is above y.
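On a finite lattice, both fixed points, and the induction proof principle itself, can be computed directly. A minimal sketch, assuming the finite powerset lattice over {0,…,5} as a stand-in for an arbitrary complete lattice; Kleene iteration suffices here because the lattice is finite.

```python
# Least and greatest fixed points by iteration on the powerset lattice P(U).
U = frozenset(range(6))

def f(X):
    # A monotone map: adds 0 and the successor (within U) of every element.
    return frozenset({0}) | frozenset(n + 1 for n in X if n + 1 in U)

def lfp(f):
    X = frozenset()      # iterate upwards from bottom
    while f(X) != X:
        X = f(X)
    return X

def gfp(f):
    X = U                # iterate downwards from top
    while f(X) != X:
        X = f(X)
    return X

# Induction proof principle: every pre-fixed point Y (f(Y) <= Y) bounds lfp(f).
Y = U
assert f(Y) <= Y and lfp(f) <= Y
assert lfp(f) == U == gfp(f)
```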
Remark 1.
From this lattice-theoretic perspective, it is easy to see that the coinduction proof principle is simply the dual of induction. Indeed, whenever (L, ≤) is a complete lattice, then so is (L, ≥). Similarly, if f is monotone on (L, ≤), then it is monotone on (L, ≥), and the greatest fixed point of f over (L, ≤) becomes the least fixed point of f over (L, ≥).
We illustrate inductive and coinductive invariants with an example from automata theory.
Example 2 (cf. [6, Remark 2]).
We denote by A* the set of words over an alphabet A; ε denotes the empty word and wa the word obtained by concatenating w ∈ A* with a ∈ A. For a word w, we indicate its length with |w|.
A deterministic automaton on the alphabet A is a triple (X, o, t), where X is a set of states, o : X → {0, 1} is the output function, determining if a state x is accepting (o(x) = 1) or not (o(x) = 0), and t : X → X^A is the transition function which returns the next state t(x)(a), for each letter a ∈ A. Every automaton induces a function L : X → {0, 1}^{A*} defined inductively for all x ∈ X, a ∈ A and w ∈ A* as L(x)(ε) = o(x) and L(x)(aw) = L(t(x)(a))(w). Two states x, y ∈ X are said to be language equivalent, in symbols x ∼ y, iff L(x) = L(y).
Alternatively, ∼ can be defined as the greatest fixed point of a map b on Rel_X = P(X × X), the lattice of relations over X. The function b is defined as
(6)   b(R) = {(x, y) : o(x) = o(y) and (t(x)(a), t(y)(a)) ∈ R for all a ∈ A}
for all R ∈ Rel_X. One can easily check that b is monotone and that νb = ∼. Thanks to this characterisation, one can prove that two states are language equivalent by means of the coinduction proof principle in (5): to show that x ∼ y, it is enough to provide a relation R that is a post-fixed point of b and such that (x, y) ∈ R. Such coinductive invariants are often called bisimulations.
For an example, consider the following deterministic automaton, where final states are overlined and the transition function is represented by labelled arrows. The relation consisting of dashed and dotted lines is a bisimulation witnessing that the states it relates are language equivalent.
(7) |
One can prove language equivalence by means of induction as well: for all x, y ∈ X, the function f_(x,y) is defined as follows.
(8)   f_(x,y)(R) = {(x, y)} ∪ {(t(x')(a), t(y')(a)) : (x', y') ∈ R, a ∈ A}
Note that {(x, y)} above, as well as the outputs in (6), can be read as constant functions: we will sometimes take the liberty of identifying constant functions with the corresponding element in the lattice. Intuitively, μf_(x,y) represents the subset of all pairs of states that are reachable from the pair (x, y). Thus, x ∼ y if and only if all those pairs of states have equal outputs, i.e., if and only if μf_(x,y) is included in the relation of pairs with equal outputs. The latter can be proved by exhibiting a relation that is a pre-fixed point of f_(x,y) and that is included in that relation: the relation formed by the dashed and dotted lines in (7) satisfies this condition.
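Both kinds of invariant are mechanically checkable. A sketch on a small hypothetical two-state automaton (the concrete automaton of diagram (7) was lost in extraction), checking that a relation is a post-fixed point of the map of (6):

```python
states = ['p', 'q']
alphabet = ['a']
o = {'p': True, 'q': True}              # both states are accepting
t = {('p', 'a'): 'q', ('q', 'a'): 'p'}  # the two states swap on 'a'

def b(R):
    # The map of (6): outputs agree and every successor pair stays in R.
    return {(x, y) for x in states for y in states
            if o[x] == o[y]
            and all((t[(x, a)], t[(y, a)]) in R for a in alphabet)}

R = {('p', 'q'), ('q', 'p')}
assert R <= b(R)  # R is a post-fixed point of b: a bisimulation, so p ~ q
```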
4 Coinduction up-to
Coinduction is a technique for proving y ≤ νb for some monotone map b on a lattice by providing a coinductive invariant for b. In many situations, providing such an invariant is far too complicated. Motivated by this fact, Milner [23] introduced enhancements of the coinduction proof principle which are nowadays widely known as up-to techniques. In a nutshell, an up-to technique is an up-closure u. A b-coinductive invariant up-to u is some x such that x ≤ b(u(x)), namely a post-fixed point of b ∘ u. An up-to technique u is said to be sound w.r.t. b if the following coinduction up-to principle holds.
   x ≤ b(u(x))        y ≤ x
   ─────────────────────────  (Coinduction Up-To)
            y ≤ νb
In (5), one has to find an invariant x such that x ≤ b(x). In (Coinduction Up-To), the search for such an x is simplified since it is enough that x ≤ b(u(x)). Since u is an up-closure, b(x) ≤ b(u(x)), which simplifies the task of finding coinductive invariants.
Example 3 (Up-to equivalence).
We continue Example 2 to illustrate a coinductive invariant up-to. We instantiate (Coinduction Up-To) by taking the monotone map to be b from (6) and the up-to technique to be the function e mapping any relation into its equivalence closure. One can check that two states are language equivalent by exhibiting a relation R such that R ⊆ b(e(R)).
Consider for instance the relation R consisting of only the dashed lines in (7). It is easy to see that R ⊆ b(e(R)), but R is not included in b(R). In other words, R is a coinductive invariant up-to e but not a coinductive invariant. In Example 2, to prove language equivalence by means of coinduction, we needed to take the relation consisting of both dashed and dotted lines in (7). With coinduction up-to e, it is thus enough to take only the dashed lines.
Of course, before using an up-to technique, one should prove it to be sound. Since this might be quite challenging, several theories [32, 28, 10, 29, 4] have been introduced for simplifying this task. In this paper we will consider the theory of Pous in [28], which focuses on the notion of compatible techniques: u is b-compatible iff u ∘ b ≤ b ∘ u. The key results in [28] state that compatible techniques are sound and can be nicely composed.
5 Induction up-to
Recall that for applying the induction proof principle in (5), one has to find an x such that f(x) ≤ x. The idea of induction up-to is to simplify this task by weakening such a constraint to f(d(x)) ≤ x for some map d.
Note that if the map d were an up-closure, as in the case of coinduction up-to, then this would only complicate our task by imposing additional constraints; indeed, since id ≤ d, requiring f(d(x)) ≤ x is stronger than requiring f(x) ≤ x.
Example 4.
As expected, the solution consists in considering down-closures. An (inductive) up-to technique is a down-closure d. An f-inductive invariant up-to d is some x such that f(d(x)) ≤ x, namely a pre-fixed point of f ∘ d. An up-to technique is said to be sound w.r.t. f if the following induction up-to principle holds.
   f(d(x)) ≤ x        x ≤ y
   ─────────────────────────  (Induction Up-To)
            μf ≤ y
To prove the soundness of inductive up-to techniques, we consider the dual of the notion of compatible functions from [28].
Definition 5 (Cocompatible map).
Let f, d be two monotone maps. We say that d is cocompatible with f, shortly f-cocompatible, if f ∘ d ≤ d ∘ f.
When d is f-cocompatible, any inductive invariant up-to d gives rise to an inductive invariant.
Proposition 6.
Let d be a down-closure that is cocompatible with some monotone map f. If x is a pre-fixed point for f ∘ d, then d(x) is a pre-fixed point for f.
Proof.
f(d(x)) ≤ f(d(d(x)))        (d down-closure)
        ≤ d(f(d(x)))        (d is f-cocompatible)
        ≤ d(x)              (x is a pre-fixed point of f ∘ d)
Theorem 7.
If a down-closure d is f-cocompatible, then it is sound w.r.t. f.
Proof.
We have to prove the conclusion of (Induction Up-To) assuming its premise. By f(d(x)) ≤ x and Proposition 6, it holds that f(d(x)) ≤ d(x).
Since d(x) ≤ x, as d is a down-closure, and x ≤ y, it holds that d(x) ≤ y.
Thus, by replacing x with d(x) in (5), it holds that μf ≤ y.
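Cocompatibility can be checked by brute force in a finite instance. A sketch, assuming (as in Section 6 below) the successor-style map f(X) = {0} ∪ {n + 1 : n ∈ X}, truncated to the universe {0,…,4}, with d computing the largest initial segment contained in X:

```python
from itertools import chain, combinations

U = range(5)

def f(X):
    return frozenset({0}) | frozenset(n + 1 for n in X if n + 1 in U)

def d(X):
    # The largest initial segment {0,...,k} contained in X (a down-closure).
    return frozenset(n for n in X if all(k in X for k in range(n + 1)))

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Definition 5, f o d <= d o f, holds at every point of this finite lattice:
for X in map(frozenset, subsets(U)):
    assert f(d(X)) <= d(f(X))
```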
It is worth mentioning that the two results above also follow by dualising the theory in [28]. We have reported their proofs since they will be relevant in Section 7. The following result also follows easily from [28]. For the convenience of the reader, we report its proof in Appendix A.
Proposition 8 (The algebra of cocompatible maps).
Let f, d, e be monotone maps and let (d_i)_{i ∈ I} be an I-indexed family of monotone maps.
1. The identity id is f-cocompatible;
2. f is f-cocompatible;
3. If d and e are f-cocompatible, then d ∘ e is f-cocompatible;
4. If d is f-cocompatible then, for all n ∈ ℕ, d^n is f-cocompatible;
5. If d and e are f-cocompatible, then d ∧ e is f-cocompatible;
6. If, for all i ∈ I, d_i is f-cocompatible, then ⋀_{i ∈ I} d_i is f-cocompatible;
7. If d is f-cocompatible, then d↓ is f-cocompatible.
Corollary 9.
Let f be a continuous monotone map. Then, its down-closure f↓ is f-cocompatible.
Proof.
By points 2 and 7 of Proposition 8.
Remark 10.
Note that we have defined up-to techniques to be down-closures, while compatible maps are defined as arbitrary monotone maps. This choice is justified by the fact that monotone maps compose nicely, while down-closures do not. This motivated Pous to introduce up-to techniques in the original theory in [28] as monotone maps rather than up-closures. Here, we preferred to stay with closures, as this simplifies the proofs of Proposition 6 and Theorem 7, which will be relevant in the categorical generalisation illustrated in Section 7.
Note also that restricting to down-closures does not limit the applicability of the theory: indeed, if d is an f-cocompatible monotone map, not necessarily a down-closure, then, by Proposition 8.7, d↓ is also f-cocompatible. Moreover, if f(d(x)) ≤ x, then f(d↓(x)) ≤ f(d(x)) ≤ x,
since d↓ ≤ d. In other words, any proof up-to d is also a proof up-to d↓.
5.1 Relating Coinduction Up-to and Induction Up-to via Involution
Coinduction and induction are equivalent whenever the lattice comes with an involution operator ¬, namely a function on L such that
(9)   ¬¬x = x        x ≤ y implies ¬y ≤ ¬x
for all x, y ∈ L. In this case, for any monotone map f on L, one has that ¬ ∘ f ∘ ¬ is a monotone map. Moreover, it holds that ¬(νf) = μ(¬ ∘ f ∘ ¬).
Such a correspondence lifts to up-to techniques: whenever one can prove y ≤ νf by coinduction up-to u, for some f-compatible technique u, one can equivalently prove μ(¬ ∘ f ∘ ¬) ≤ ¬y by induction up-to ¬ ∘ u ∘ ¬. This is made formal by the following result.
Proposition 11.
Let f, u be monotone maps and x be an element of L.
1. u is an up-closure iff ¬ ∘ u ∘ ¬ is a down-closure;
2. u is f-compatible iff ¬ ∘ u ∘ ¬ is (¬ ∘ f ∘ ¬)-cocompatible;
3. x is an f-coinductive invariant up-to u iff ¬x is a (¬ ∘ f ∘ ¬)-inductive invariant up-to ¬ ∘ u ∘ ¬.
Example 12 (Up-to apartness).
Following the above considerations, one can transform coinduction up-to equivalence in Example 3 into induction up-to apartness. Apartness relations are standard in constructive real analysis and were first axiomatised in [19]: a relation R is an apartness relation if it is
- irreflexive: (x, x) ∉ R for all x;
- symmetric: if (x, y) ∈ R, then (y, x) ∈ R, for all x, y;
- co-transitive: if (x, y) ∈ R, then (x, z) ∈ R or (z, y) ∈ R, for all x, y, z.
The reader can easily check that R is an apartness relation iff ¬R is an equivalence relation, where ¬R indicates the complement of the relation R. Recall from Example 3 the up-closure e mapping any relation into its equivalence closure. By Proposition 11.1, ¬ ∘ e ∘ ¬ is a down-closure: it maps any R into the largest apartness relation contained in R. Since e is b-compatible, by Proposition 11.2, ¬ ∘ e ∘ ¬ is (¬ ∘ b ∘ ¬)-cocompatible. Since Rel_X is a boolean algebra, complementation provides the involution; it is easy to check that ¬ ∘ b ∘ ¬ and ¬ ∘ e ∘ ¬ map any R into the following relations.
With this characterisation, one can see that inductive invariants for this map are exactly those introduced in [15, Definition 2.2]. Our work enhances this proof method with up-to apartness.
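On a finite set, the correspondence between apartness and equivalence relations can be verified exhaustively. A small sketch with an illustrative three-element set:

```python
from itertools import product

X = [0, 1, 2]

def is_apartness(R):
    irreflexive = all((x, x) not in R for x in X)
    symmetric = all((y, x) in R for (x, y) in R)
    cotransitive = all((x, z) in R or (z, y) in R
                       for (x, y) in R for z in X)
    return irreflexive and symmetric and cotransitive

def is_equivalence(R):
    reflexive = all((x, x) in R for x in X)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, w) in R
                     for (x, y) in R for (z, w) in R if y == z)
    return reflexive and symmetric and transitive

def complement(R):
    return {p for p in product(X, X) if p not in R}

# The equivalence identifying 0 and 1; its complement is an apartness.
E = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
assert is_equivalence(E)
assert is_apartness(complement(E))
```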
6 Strong Induction is an up-to technique
This section studies the proof principle given by a particular cocompatible map: the down-closure f↓ of f. Indeed, by Corollary 9, f↓ is f-cocompatible and, by Theorem 7, the following proof principle is always sound.
   f(f↓(x)) ≤ x        x ≤ y
   ──────────────────────────  (Strong Induction)
             μf ≤ y
We call this principle strong induction. Indeed, as we illustrate below, when instantiated to the natural numbers, the above proof principle coincides with the well-known strong induction illustrated in Section 2.
6.1 Strong Induction on natural numbers
We begin by illustrating how (5) generalises standard induction over natural numbers. Consider the lattice P(ℕ) and the monotone map f : P(ℕ) → P(ℕ) defined as
(10)   f(X) = {0} ∪ {n + 1 : n ∈ X}
for all X ⊆ ℕ. The least fixed point of f is the set of natural numbers, μf = ℕ. We take the sets P, Q ⊆ ℕ to be the sets of natural numbers on which the predicates P and Q of Section 2 are true, respectively.
With these choices, set inclusion corresponds to predicate implication: Q ⊆ P iff Q implies P. The least fixed point of f is contained in P iff all natural numbers are contained in P, which means that P holds for all n ∈ ℕ.
Similarly, Q is a pre-fixed point of f iff Q contains 0 and it contains n + 1 for each n ∈ Q, which means that Q holds at 0 and it holds at n + 1 whenever it holds at n.
These considerations show that (1) is a particular instance of (5), when we instantiate it to the map f in (10).
We can now illustrate our main observation: when instantiating (Strong Induction) to the map f in (10) one obtains exactly the strong induction on natural numbers reported in (2). We start by computing the powers of f.
Lemma 13.
For all n ∈ ℕ and all X ⊆ ℕ, f^n(X) = {k ∈ ℕ : k < n} ∪ {k + n : k ∈ X}.
Proof.
By induction on n. For the base case, we have the following derivation.
f^0(X) = X = {k ∈ ℕ : k < 0} ∪ {k + 0 : k ∈ X}        (3)
For the inductive case, we have the following derivation.
f^(n+1)(X) = f(f^n(X))                                          (3)
           = f({k ∈ ℕ : k < n} ∪ {k + n : k ∈ X})               (Ind. Hyp.)
           = {0} ∪ {k + 1 : k < n} ∪ {k + n + 1 : k ∈ X}        (10)
           = {k ∈ ℕ : k < n + 1} ∪ {k + (n + 1) : k ∈ X}
The core of our argument relies on the following result, stating that f↓(X) is the largest initial segment of ℕ, starting from 0, that is a subset of X.
Lemma 14.
For any set X ⊆ ℕ, f↓(X) is characterised as f↓(X) = {n ∈ ℕ : k ∈ X for all k ≤ n}.
Proof.
By definition, f↓(X) = ⋀_{n ∈ ℕ} f^n(X). Thus, by Lemma 13, m ∈ f↓(X) iff, for all n ∈ ℕ, m ∈ {k ∈ ℕ : k < n} ∪ {k + n : k ∈ X}. In short,

(11)   m ∈ f↓(X) iff m − n ∈ X for all n ≤ m.

We use (11) to prove the two inclusions of Lemma 14 separately: if m ∈ f↓(X), then taking n = m − k in (11) gives k ∈ X for all k ≤ m; conversely, if k ∈ X for all k ≤ m, then (11) holds for m.
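Lemma 14 can be tested on a finite truncation of ℕ. A sketch, assuming the characterisation f↓(X) = ⋀_n f^n(X) from (4), with the universe truncated to {0,…,9}:

```python
U = set(range(10))

def f(X):
    # The map of (10), truncated to the finite universe U.
    return {0} | {n + 1 for n in X if n + 1 in U}

def down_closure(X):
    # f↓(X) as the intersection of all iterates f^n(X); finitely many
    # iterations suffice on a finite universe.
    result, Y = set(U), set(X)
    for _ in range(len(U) + 1):
        result &= Y
        Y = f(Y)
    return result

X = {0, 1, 2, 4, 5}
# 3 is missing from X, so the largest initial segment inside X is {0, 1, 2}.
assert down_closure(X) == {0, 1, 2}
```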
Proposition 15.
For any set X ⊆ ℕ, the following are equivalent:
1. f(f↓(X)) ⊆ X;
2. 0 ∈ X and, for all n ∈ ℕ, if k ∈ X for all k ≤ n, then n + 1 ∈ X.
Proof.
The above proposition allows us to easily see that strong induction (2) is induction up-to f↓. The latter is illustrated below on the left; the former is reported on the right.
The correspondence between the two rules mirrors that of induction. The conclusions of the two rules coincide in the same way that they did for induction. For the premise, observe that, by Proposition 15, the condition f(f↓(Q)) ⊆ Q amounts exactly to the base case and the strong inductive step of (2).
6.2 Strong Induction on Words
As expected, one can use strong induction not only on ℕ but on any inductive data type. Below, we illustrate strong induction on A*, the set of words over an alphabet A.
As we did for natural numbers, we need to give a monotone map on P(A*) that gives induction on words, i.e. whose least fixed point is A*. The candidate monotone map f mimics the definition of the monotone map for natural numbers: it maps a set X to the set containing the empty word and all the successors of words in X.
(12)   f(X) = {ε} ∪ {aw : a ∈ A, w ∈ X}
Since the least fixed point of f is A*, the induction principle (5) instantiated to f gives us the usual induction principle on words.
We now turn our attention to (Strong Induction) instantiated with the map f in (12).
What does this mean in practice? To answer this question, the key is to have a handy characterisation of f↓. It resembles the one for natural numbers but, instead of the ordering on natural numbers, we consider the suffix partial ordering of words: v ≤ w iff v is a suffix of w.
The analogue of the interval [0, n] for natural numbers is, then, the set of suffixes of a word. With this, we obtain that the down-closure f↓(X) gives the largest subset of X that is closed under suffixes.
With this result, we can spell out the strong induction principle on words.
Compare this principle with strong induction on natural numbers: consider the singleton alphabet A = {a}. Then, words on A are determined by their length, so A* is in bijection with the natural numbers. Through this bijection, the set of suffixes of a word of length n coincides with the interval [0, n]. This further justifies the name of strong induction.
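The characterisation of the down-closure on words is easy to test on finite sets. A small sketch, with the illustrative alphabet {a, b}:

```python
def suffixes(w):
    # All suffixes of w, including w itself and the empty word.
    return {w[i:] for i in range(len(w) + 1)}

def largest_suffix_closed(X):
    # A word survives iff all of its suffixes already belong to X, so the
    # result is the largest suffix-closed subset of X.
    return {w for w in X if suffixes(w) <= X}

X = {'', 'a', 'ab', 'bb'}
# 'ab' and 'bb' are dropped: their suffix 'b' is missing from X.
assert largest_suffix_closed(X) == {'', 'a'}
```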
7 From Lattices to Categories
Induction is both a proof principle and a definition principle. The latter can be obtained as a generalisation of the former by moving from lattices to categories. As (inductive) up-to techniques are enhancements of the induction proof principle, by means of a similar generalisation, one can obtain enhancements of the induction definition principle. In this section, we illustrate that induction up-to generalises to a recursion scheme known as comonadic recursion [8] and strong induction generalises to course-of-value iteration [38, 7].
7.1 Initial algebras and the induction definition principle
Hereafter, we write × and + for products and coproducts in some category C, ⟨f, g⟩ for the pairing of f and g, and [f, g] for the copairing of f and g. The singleton set is denoted by 1.
Given a functor F on some category C, an F-algebra is a pair (X, a) where X is an object of C and a : F(X) → X is an arrow. Given two F-algebras (X, a) and (Y, b), an algebra morphism f : (X, a) → (Y, b) is an arrow f : X → Y of C such that f ∘ a = b ∘ F(f). An F-algebra (X, a) is said to be initial if for any F-algebra (Y, b), there exists a unique algebra morphism from (X, a) to (Y, b).
Initial algebras give the induction definition principle: in order to specify a morphism from the carrier of the initial algebra to some object Y, it is enough to give an F-algebra on Y.
Remark 16.
When the category C is a complete lattice, a functor on C is simply a monotone map f; an f-algebra is a pre-fixed point for f and an initial f-algebra is a least fixed point. In this perspective, the induction definition principle collapses to the induction proof principle: specifying an arrow from μf to x means exactly proving that μf ≤ x.
Consider Set, the category of sets and functions, and the functor F mapping a set X into 1 + X and a function f into id_1 + f. An initial algebra for F is provided by (ℕ, [zero, succ]), where zero assigns to the unique element of 1 the number 0 and succ assigns to any n ∈ ℕ its successor n + 1. Now, given an F-algebra [b, g] : 1 + Y → Y, one obtains, by initiality, a function h : ℕ → Y, as illustrated below on the left.
The fact that the diagram on the left commutes is expressed by the conditions on the right: h(0) = b and h(n + 1) = g(h(n)). The first condition provides the base case of an inductive definition; the second one provides the inductive case. Note that, in order to define h(n + 1), one uses the value h(n). Sometimes, as for the Fibonacci sequence in Section 2, functions are specified by using not just their value at n but also their values at some smaller numbers. This can be done by enhancing the induction definition principle by means of recursion schemes.
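In Set, the definition principle just recalled is ordinary iteration: an algebra (b, g) on a set Y determines the unique morphism h out of the initial algebra. A minimal sketch:

```python
def fold(base, step, n):
    # The unique algebra morphism h from (N, [zero, succ]) to (Y, [base, step]):
    # h(0) = base and h(n + 1) = step(h(n)).
    x = base
    for _ in range(n):
        x = step(x)
    return x

# The algebra (1, x -> 2 * x) on the integers defines n -> 2**n.
assert fold(1, lambda x: 2 * x, 10) == 1024
```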
7.2 Comonadic Recursion
A comonad on a category C is a functor D together with two natural transformations, the counit ε : D ⇒ Id and the comultiplication δ : D ⇒ D ∘ D, subject to the usual counitality and coassociativity laws. A distributive law of a functor F over the comonad D is a natural transformation λ : F ∘ D ⇒ D ∘ F suitably compatible with ε and δ.
Comonadic recursion exploits a comonad D and a distributive law λ to enhance the induction definition principle. In order to define a morphism from the carrier μF of the initial algebra to Y, rather than specifying an F-algebra on Y, one can specify an F-algebra on D, that is, an arrow a : F(D(Y)) → Y. Indeed, such an a gives rise to

D(a) ∘ λ_{D(Y)} ∘ F(δ_Y) : F(D(Y)) → D(Y)

which is an F-algebra and thus, by initiality of μF, one obtains a morphism μF → D(Y) that can be composed with the counit ε_Y to obtain the desired morphism from μF to Y.
Remark 17.
Following Remark 16, when C is a complete lattice, a comonad is simply a down-closure d; a distributive law witnesses that f ∘ d ≤ d ∘ f, namely that d is cocompatible with f. The algebra a : F(D(Y)) → Y is just an inductive invariant up-to, and the induced algebra is the corresponding inductive invariant provided by Proposition 6: observe that the three arrows in its definition are exactly the three steps in the proof of Proposition 6. The morphism obtained by initiality gives us the proof of Theorem 7. All this justifies the following statement.
Proposition 18.
When C is a complete lattice, comonadic recursion is induction up-to.
7.3 Course-of-Value Iteration
Course-of-value iteration is a recursion scheme that is obtained from comonadic recursion by taking D to be the cofree comonad generated by F. Below, we briefly recall this construction.
Coalgebras are the dual of algebras: a coalgebra for a functor F is a pair (X, c) with c : X → F(X). Given two F-coalgebras (X, c) and (Y, d), a coalgebra morphism f : X → Y is an arrow of C such that F(f) ∘ c = d ∘ f. An F-coalgebra is said to be final if for any F-coalgebra, there exists a unique coalgebra morphism into it.
For an object Y of C, we denote with F_Y the functor mapping an object X into Y × F(X) and an arrow f into id_Y × F(f). Whenever C has a final F_Y-coalgebra for all objects Y, one can define a comonad D on C. The functor D maps an object Y into the carrier of the final F_Y-coalgebra and an arrow into the unique induced final-coalgebra morphism. For all objects Y, the counit ε_Y and the comultiplication δ_Y are obtained from the final coalgebra structure.
Crucially, there exists a distributive law of F over D, denoted by λ, that is defined by finality for all objects Y.
Remark 19.
Following Remarks 16 and 17, when C is a complete lattice, a coalgebra is a post-fixed point and a final coalgebra is a greatest fixed point. The cofree comonad simplifies to the down-closure generated by the monotone map, as defined in (4). The condition on the existence of a final F_Y-coalgebra for all objects Y trivially holds when C is a lattice. The existence of the distributive law simplifies to Corollary 9. All this justifies the following statement.
Proposition 20.
When C is a complete lattice, course-of-value iteration is strong induction.
7.4 Back to Fibonacci
We conclude by illustrating course-of-value iteration in the case where F is the functor introduced in Section 7.1. First, it is convenient to recall some auxiliary ingredients.
For a set A, we write A^∞ for the set of all non-empty finite and infinite sequences over A. For a sequence σ, we write |σ| = ∞ whenever σ is infinite and |σ| = n whenever σ has length n. Appending an element at the end of a word is a convenient notation to avoid confusing an element a with the one-element sequence containing it. For all sequences, we define the head and the tail operations as follows.
The pairing of head and tail forms a coalgebra for the functor F_A. Actually, it is a final coalgebra: for every F_A-coalgebra there exists a unique morphism making the following diagram on the left commute.
Commutation of the diagram is expressed by the conditions on the right: these provide a coinductive definition of the unique morphism.
Since for all sets there exists a final coalgebra, the cofree comonad and the distributive law exist (more details in Appendix B). Most importantly, given a function (i.e., an algebra), one can extend it to a function (i.e., an algebra over the comonad) coinductively defined for all arguments as
(13) |
By initiality, one has a morphism inductively defined
(14) |
that can be composed with the counit to obtain the desired morphism.
The inductive case in (14) can be conveniently rephrased (see Lemma 22 in Appendix B) as
In summary, the extended function records, at each n, the values computed at all smaller numbers.
Intuitively, the value at n + 1 may depend on the value at n, the value at n − 1, and so on.
For a concrete example, take the carrier to be ℕ and the algebra to be the one computing the Fibonacci sequence from the history of earlier values. The reader can easily check that the first computed values are 1, 1, 2, 3, 5, and so on. By composing with the counit, it is clear that one obtains the Fibonacci sequence recalled in Section 2.
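Collapsed to Set and to first-order code, course-of-value iteration is a histomorphism: the algebra receives the whole history of previously computed values instead of only the last one. The sketch below assumes the convention fib(0) = fib(1) = 1 (the paper's exact definition was garbled in extraction):

```python
def course_of_value(alg, n):
    # alg sees the history of earlier results, most recent first.
    history = []
    for _ in range(n + 1):
        history.insert(0, alg(history))
    return history[0]

def fib_alg(history):
    # Base cases when fewer than two earlier values exist.
    if len(history) < 2:
        return 1
    return history[0] + history[1]

assert [course_of_value(fib_alg, n) for n in range(7)] == [1, 1, 2, 3, 5, 8, 13]
```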
8 Conclusions and future work
We have introduced induction up-to by dualising the framework of coinduction up-to from [28]. More precisely, we defined the notion of cocompatible maps (Definition 5) and proved that such maps provide sound up-to techniques (Theorem 7) that can be conveniently composed in various ways (Proposition 8). In particular, for any continuous monotone map f, its down-closure f↓ is always f-cocompatible (Corollary 9). We refer to induction up-to f↓ as strong induction. Our main insight is that the well-known principle of strong induction over the natural numbers is induction up-to f↓ (Section 6.1), where f is the map having ℕ as its least fixed point.
We then demonstrated that, by applying comonadic recursion [39] to lattices, we obtain exactly our theory of induction up-to (Proposition 18), while strong induction corresponds to the lattice-theoretic version of course-of-value iteration [38, 7] (Proposition 20).
We believe that these results shed light on the relationship between the schemes used to define programs and the techniques employed to prove their properties. It is no coincidence that, in Section 2, we required strong induction to prove properties of the Fibonacci sequence, which is defined by course-of-value iteration.
References
- [1] Roberto M. Amadio and Luca Cardelli. Subtyping recursive types. ACM Transactions on Programming Languages and Systems (TOPLAS), 15(4):575–631, 1993. doi:10.1145/155183.155231.
- [2] Falk Bartels. Generalised coinduction. Mathematical Structures in Computer Science, 13(2):321–348, 2003. doi:10.1017/S0960129502003900.
- [3] Filippo Bonchi, Pierre Ganty, Roberto Giacobazzi, and Dusko Pavlovic. Sound up-to techniques and complete abstract domains. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, pages 175–184, 2018. doi:10.1145/3209108.3209169.
- [4] Filippo Bonchi, Barbara König, and Daniela Petrisan. Up-to techniques for behavioural metrics via fibrations. Math. Struct. Comput. Sci., 33(4-5):182–221, 2023. doi:10.1017/s0960129523000166.
- [5] Filippo Bonchi, Daniela Petrisan, Damien Pous, and Jurriaan Rot. A general account of coinduction up-to. Acta Informatica, 54(2):127–190, 2017. doi:10.1007/S00236-016-0271-4.
- [6] Filippo Bonchi and Damien Pous. Checking NFA equivalence with bisimulations up to congruence. In Roberto Giacobazzi and Radhia Cousot, editors, The 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’13, Rome, Italy - January 23 - 25, 2013, pages 457–468. ACM, 2013. doi:10.1145/2429069.2429124.
- [7] Daniela Cancila, Furio Honsell, and Marina Lenisa. Generalized coiteration schemata. Electronic Notes in Theoretical Computer Science, 82(1):76–93, 2003. doi:10.1016/S1571-0661(04)80633-9.
- [8] Venanzio Capretta, Tarmo Uustalu, and Varmo Vene. Recursive coalgebras from comonads. Information and Computation, 204(4):437–468, 2006. doi:10.1016/j.ic.2005.08.005.
- [9] Didier Caucal. Graphes canoniques de graphes algébriques. RAIRO Theor. Informatics Appl., 24:339–352, 1990. doi:10.1051/ita/1990240403391.
- [10] Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Valeria Vignudelli. Up-to techniques for generalized bisimulation metrics. In Josée Desharnais and Radha Jagadeesan, editors, 27th International Conference on Concurrency Theory, CONCUR 2016, August 23-26, 2016, Québec City, Canada, volume 59 of LIPIcs, pages 35:1–35:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPIcs.CONCUR.2016.35.
- [11] Thierry Coquand. Infinite objects in type theory. In International Workshop on Types for Proofs and Programs, pages 62–78. Springer, 1993. doi:10.1007/3-540-58085-9_72.
- [12] Erik P. de Vink and Jan J.M.M. Rutten. Bisimulation for probabilistic transition systems: a coalgebraic approach. Theoretical Computer Science, 221(1):271–293, 1999. doi:10.1016/S0304-3975(99)00035-3.
- [13] Adrien Durier, Daniel Hirschkoff, and Davide Sangiorgi. Divergence and unique solution of equations. Logical Methods in Computer Science, 15, 2019. doi:10.23638/LMCS-15(3:12)2019.
- [14] Marco Forti and Furio Honsell. Set theory with free construction principles. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 10(3):493–522, 1983.
- [15] Herman Geuvers and Bart Jacobs. Relating apartness and bisimulation. Logical Methods in Computer Science, 17, 2021. doi:10.46298/lmcs-17(3:15)2021.
- [16] Ichiro Hasuo, Bart Jacobs, and Ana Sokolova. Generic trace semantics via coinduction. Logical Methods in Computer Science, 3, 2007. doi:10.2168/LMCS-3(4:11)2007.
- [17] Ichiro Hasuo, Shunsuke Shimizu, and Corina Cîrstea. Lattice-theoretic progress measures and coalgebraic model checking. In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 718–732, 2016. doi:10.1145/2837614.2837673.
- [18] Claudio Hermida and Bart Jacobs. Structural induction and coinduction in a fibrational setting. Information and computation, 145(2):107–152, 1998. doi:10.1006/inco.1998.2725.
- [19] Arend Heyting. Zur intuitionistischen axiomatik der projektiven geometrie. Mathematische Annalen, 98(1):491–538, 1928.
- [20] Bart Jacobs, Alexandra Silva, and Ana Sokolova. Trace semantics via determinization. In International Workshop on Coalgebraic Methods in Computer Science, pages 109–129. Springer, 2012. doi:10.1007/978-3-642-32784-1_7.
- [21] Bartek Klin. Bialgebras for structural operational semantics: An introduction. Theor. Comput. Sci., 412(38):5043–5069, 2011. doi:10.1016/j.tcs.2011.03.023.
- [22] Dexter Kozen and Alexandra Silva. Practical coinduction. Mathematical Structures in Computer Science, 27(7):1132–1152, 2017. doi:10.1017/S0960129515000493.
- [23] Robin Milner. Communication and concurrency, volume 84 of PHI Series in computer science. Prentice Hall, 1989.
- [24] Robin Milner and Mads Tofte. Co-induction in relational semantics. Theoretical computer science, 87(1):209–220, 1991. doi:10.1016/0304-3975(91)90033-X.
- [25] Keiko Nakata and Tarmo Uustalu. Resumptions, weak bisimilarity and big-step semantics for while with interactive I/O: An exercise in mixed induction-coinduction. In In Proceedings of SOS 2010, pages 57–75, 2010. doi:10.4204/EPTCS.32.5.
- [26] David Park. Fixpoint induction and proofs of program properties. Machine intelligence, 5, 1969.
- [27] David Park. Concurrency and automata on infinite sequences. In Theoretical Computer Science: 5th GI-Conference Karlsruhe, March 23–25, 1981, pages 167–183. Springer, 2005.
- [28] Damien Pous. Complete lattices and up-to techniques. In Zhong Shao, editor, Programming Languages and Systems, 5th Asian Symposium, APLAS 2007, Singapore, November 29-December 1, 2007, Proceedings, volume 4807 of Lecture Notes in Computer Science, pages 351–366. Springer, 2007. doi:10.1007/978-3-540-76637-7_24.
- [29] Damien Pous. Coinduction all the way up. In Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, pages 307–316, 2016. doi:10.1145/2933575.2934564.
- [30] Damien Pous and Davide Sangiorgi. Enhancements of the bisimulation proof method. In Davide Sangiorgi and Jan J. M. M. Rutten, editors, Advanced Topics in Bisimulation and Coinduction, volume 52 of Cambridge tracts in theoretical computer science, pages 233–289. Cambridge University Press, UK, 2012.
- [31] Jan J.M.M. Rutten. Behavioural differential equations: a coinductive calculus of streams, automata, and power series. Theoretical Computer Science, 308(1):1–53, 2003. doi:10.1016/S0304-3975(02)00895-2.
- [32] Davide Sangiorgi. On the bisimulation proof method. Mathematical Structures in Computer Science, 8(5), 1998. doi:10.1017/S0960129598002527.
- [33] Davide Sangiorgi. On the origins of bisimulation and coinduction. ACM Transactions on Programming Languages and Systems (TOPLAS), 31(4):1–41, 2009. doi:10.1145/1516507.1516510.
- [34] Davide Sangiorgi. Equations, contractions, and unique solutions. ACM Transactions on Computational Logic (TOCL), 18(1):1–30, 2017. doi:10.1145/2971339.
- [35] Davide Sangiorgi. From enhanced coinduction towards enhanced induction. Proceedings of the ACM on Programming Languages, 6(POPL):1–29, 2022. doi:10.1145/3498679.
- [36] Alexandra Silva, Filippo Bonchi, Marcello M. Bonsangue, and Jan J. M. M. Rutten. Generalizing the powerset construction, coalgebraically. In Kamal Lodaya and Meena Mahajan, editors, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2010, December 15-18, 2010, Chennai, India, volume 8 of LIPIcs, pages 272–283. Schloss-Dagstuhl-Leibniz Zentrum für Informatik, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2010. doi:10.4230/LIPIcs.FSTTCS.2010.272.
- [37] Daniele Turi and Gordon Plotkin. Towards a mathematical operational semantics. In Proceedings of Twelfth Annual IEEE Symposium on Logic in Computer Science, pages 280–291. IEEE, 1997. doi:10.1109/LICS.1997.614955.
- [38] Tarmo Uustalu and Varmo Vene. Primitive (co) recursion and course-of-value (co) iteration, categorically. Informatica, 10(1):5–26, 1999. doi:10.3233/INF-1999-10102.
- [39] Tarmo Uustalu, Varmo Vene, and Alberto Pardo. Recursion schemes from comonads. Nordic Journal of Computing, 8(3):366–390, 2001. URL: http://www.cs.helsinki.fi/njc/References/uustaluvp2001:366.html.
- [40] Johan Van Benthem. Modal logic and classical logic. 1983.
- [41] Rob J van Glabbeek. The linear time - branching time spectrum II: The semantics of sequential systems with silent moves extended abstract. In International Conference on Concurrency Theory, pages 66–81. Springer, 1993.
- [42] Rob J Van Glabbeek. The linear time - branching time spectrum I. In Jan A. Bergstra, Alban Ponse, and Scott A. Smolka, editors, Handbook of Process Algebra, pages 3–99. North-Holland / Elsevier, 2001. doi:10.1016/b978-044482830-9/50019-9.
Appendix A Appendix to Section 5
Proof of Proposition 8.
We prove each point in sequence.
-
1.
;
-
2.
;
-
3.
By the following derivation
( is ) ( is and is monotone ) -
4.
By induction.
-
5.
Proving means proving that: for all ,
Since, for all ,
it holds that:
In summary, to prove , we need to prove that, for all , . We can proceed separately:
-
First, :
-
Similarly for .
Since is below both and , we have that
-
-
6.
The proof is analogous to that of Point 5. We need to prove that, for all ,
Thus, it is enough to prove that , . We proceed as follows:
() ( is ) - 7.
Proof of Proposition 11.
We prove the two points separately.
-
1.
Assume that is an up-closure; thus and for all . By (9), and , i.e., and . The reverse implication has the same proof.
-
2.
Observe that for all monotone maps on , it holds that iff iff . Thus
-
3.
By the following derivation
Appendix B Details on Section 7.4
In Section 7.3, we gave the recipe to compute the cofree comonad for an arbitrary functor, and in Section 7.4 we used the cofree comonad for the functor in order to illustrate course-of-value iteration for the natural numbers. For the reader's convenience, in this appendix we illustrate in detail the cofree comonad and the distributive law by simply unfolding the definitions of Section 7.3.
First we illustrate the endofunctor . It maps any set into the set since, as discussed in the main text, is a final coalgebra. For a function , is the unique map from the -coalgebra to the final -coalgebra as illustrated below.
Thus, the function can be coinductively defined as follows.
More explicitly, maps into
We can now illustrate the natural transformations. The counit is given, for all sets , by . The comultiplication is given by the unique morphism from the -coalgebra to the final -coalgebra , as illustrated below.
Thus, the function can be coinductively defined as follows.
Intuitively, is mapped into
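Modelling streams as functions from the naturals — an assumption we make purely for illustration, since the carrier is elided above — the counit and comultiplication just described admit a direct sketch; the names `counit`, `comultiplication`, and `nats` are ours.

```python
def counit(sigma):
    # the counit extracts the first element of the stream
    return sigma(0)

def comultiplication(sigma):
    # the n-th component of the result is the n-th tail of sigma:
    # the stream obtained by dropping the first n elements
    return lambda n: (lambda m: sigma(n + m))

# sanity checks on the stream of natural numbers 0, 1, 2, ...
nats = lambda n: n
tails = comultiplication(nats)
assert counit(nats) == 0
assert tails(3)(0) == 3                   # the third tail starts at 3
assert counit(tails(0)) == counit(nats)   # extracting the 0-th tail's head
```

Applying the counit componentwise to the stream of tails recovers the original stream, an instance of one of the comonad laws.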
So far, we have described the comonad . We can now move to the distributive law. For all sets , the distributive law is given by the unique morphism from the -coalgebra illustrated below on the left to the final -coalgebra .
Thus, the function can be coinductively defined as follows.
Intuitively, maps into the sequence and any into the same sequence .
We conclude this appendix with some computations that allow for the handier characterisation of illustrated in the main text.
Lemma 21.
For all , .
Proof.
By induction on .
Lemma 22.
For all , .