24th International Conference on Rewriting Techniques and Applications (RTA 2013), RTA 2013, June 24-26, 2013, Eindhoven, The Netherlands
RTA 2013
June 24-26, 2013
Eindhoven, The Netherlands
International Conference on Rewriting Techniques and Applications
RTA
http://rewriting.loria.fr/rta/
https://dblp.org/db/conf/rta
Leibniz International Proceedings in Informatics
LIPIcs
https://www.dagstuhl.de/dagpub/1868-8969
https://dblp.org/db/series/lipics
1868-8969
Femke
van Raamsdonk
Femke van Raamsdonk
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
21
2013
978-3-939897-53-8
https://www.dagstuhl.de/dagpub/978-3-939897-53-8
Frontmatter, Table of Contents, Preface, Conference Organization
Frontmatter
Table of Contents
Preface
Conference Organization
i-xiii
Front Matter
Femke
van Raamsdonk
Femke van Raamsdonk
10.4230/LIPIcs.RTA.2013.i
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Pattern Generation by Cellular Automata (Invited Talk)
A one-dimensional cellular automaton is a discrete dynamical system
where a sequence of symbols evolves synchronously according to a local update rule. We discuss simple update rules that make the automaton perform multiplications of numbers by a constant. If the constant and the number base are selected suitably the automaton becomes a universal pattern generator: all finite strings over its state alphabet appear from a finite seed. In particular we consider
the automata that multiply by constants 3 and 3/2 in base 6. We discuss the connections of these automata to some difficult open questions in number theory, and we pose several further questions concerning pattern generation in cellular automata.
cellular automata
pattern generation
Z-numbers
1-3
Invited Talk
Jarkko
Kari
Jarkko Kari
10.4230/LIPIcs.RTA.2013.1
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Husserl and Hilbert on Completeness and Husserl's Term Rewrite-based Theory of Multiplicity (Invited Talk)
Hilbert and Husserl presented axiomatic arithmetic theories in different ways and proposed two different notions of 'completeness' for arithmetic at the turn of the 20th century (1900-1901). The former led to the completion axiom, the latter to completion of rewriting. We look into the latter in comparison with the former. The key notion for understanding the latter is the notion of definite multiplicity or manifold (Mannigfaltigkeit). We show that Husserl's notion of multiplicity can be understood by means of term rewrite theory in a very coherent manner, and that his notion of 'definite' multiplicity can be understood as a relational web (or tissue) structure, the core part of which is a 'convergent' term rewrite proof structure. We examine how Husserl introduced his term rewrite theory in 1901, in the context of a controversy with Hilbert on the notion of completeness, and in the context of solving the justification problem for the use of imaginaries in mathematics, which was an important issue in the foundations of mathematics in that period.
History of term rewrite theory
Husserl
Hilbert
proof theory
Knuth-Bendix completion
4-19
Invited Talk
Mitsuhiro
Okada
Mitsuhiro Okada
10.4230/LIPIcs.RTA.2013.4
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Evidence Normalization in System FC (Invited Talk)
System FC is an explicitly typed language that serves as the target language for Haskell source programs. System FC is based on System F with the addition of erasable but explicit type equality proof witnesses. Equality proof witnesses are generated from type inference performed on source Haskell programs. Such witnesses may be very large objects, which causes performance degradation in later stages of compilation, and makes it hard to debug the results of type inference and subsequent program transformations. In this paper we present an equality proof simplification algorithm, implemented in GHC, which greatly reduces the size of the target System FC programs.
Haskell
type functions
system FC
20-38
Invited Talk
Dimitrios
Vytiniotis
Dimitrios Vytiniotis
Simon
Peyton Jones
Simon Peyton Jones
10.4230/LIPIcs.RTA.2013.20
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Linear Logic and Strong Normalization
Strong normalization for linear logic requires elaborate rewriting techniques. In this paper we give a new presentation of MELL proof nets, without any commutative cut-elimination rule. We show how this feature induces a compact and simple proof of strong normalization, via reducibility candidates. It is the first proof of strong normalization for MELL which does not rely on any form of confluence, and so it smoothly scales up to full linear logic. Moreover, it is an axiomatic proof: more generally, it holds for every set of rewriting rules satisfying three very natural requirements with respect to substitution: commutation with promotion, full composition, and Kesner's IE property. The insight indeed comes from the theory of explicit substitutions, and from looking at the exponentials as a substitution device.
linear logic
proof nets
strong normalization
explicit substitutions
39-54
Regular Paper
Beniamino
Accattoli
Beniamino Accattoli
10.4230/LIPIcs.RTA.2013.39
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
A Combination Framework for Complexity
In this paper we present a combination framework for the automated polynomial complexity analysis of term rewrite systems. The framework covers both derivational and runtime complexity analysis, and is employed as theoretical foundation in the automated complexity tool TCT. We present generalisations of powerful complexity techniques, notably a generalisation of complexity pairs and (weak) dependency pairs. Finally, we also present a novel technique, called dependency graph decomposition, that in the dependency pair setting greatly increases modularity.
program analysis
term rewriting
complexity analysis
automation
55-70
Regular Paper
Martin
Avanzini
Martin Avanzini
Georg
Moser
Georg Moser
10.4230/LIPIcs.RTA.2013.55
T. Arts and J. Giesl. Termination of Term Rewriting using Dependency Pairs. TCS, 236(1-2):133-178, 2000.
M. Avanzini. POP* and Semantic Labeling using SAT. In Proc. of ESSLLI 2008/2009 Student Session, volume 6211 of LNCS, pages 155-166. Springer, 2010.
M. Avanzini, N. Eguchi, and G. Moser. New Order-theoretic Characterisation of the Polytime Computable Functions. 2012. Submitted to TCS.
M. Avanzini and G. Moser. Closing the Gap Between Runtime Complexity and Polytime Computability. In Proc. of 21st RTA, volume 6 of LIPIcs, pages 33-48, 2010.
M. Avanzini and G. Moser. Polynomial Path Orders: A Maximal Model. 2012. Submitted to LMCS. Technical Report available at URL: http://arxiv.org/abs/1209.3793.
M. Avanzini and G. Moser. A Combination Framework for Complexity, Technical Report. CoRR, cs/CC/1302.0973, 2013. Available at URL: http://www.arxiv.org/abs/1302.0973.
M. Avanzini and G. Moser. Tyrolean Complexity Tool: Features and Usage. In Proc. of 24th RTA. LIPIcs, 2013.
F. Baader and T. Nipkow. Term Rewriting and All That. Cambridge University Press, 1998.
G. Bonfante, A. Cichon, J.-Y. Marion, and H. Touzet. Algorithms with Polynomial Interpretation Termination Proof. JFP, 11(1):33-53, 2001.
U. Dal Lago and S. Martini. On Constructor Rewrite Systems and the Lambda-Calculus. In Proc. of 36th ICALP, volume 5556 of LNCS, pages 163-174. Springer, 2009.
J. Giesl, T. Arts, and E. Ohlebusch. Modular Termination Proofs for Rewriting Using Dependency Pairs. JSC, 34:21-58, 2002.
N. Hirokawa and G. Moser. Automated Complexity Analysis Based on the Dependency Pair Method. In Proc. of 4th IJCAR, volume 5195 of LNAI, pages 364-380, 2008.
N. Hirokawa and G. Moser. Complexity, Graphs, and the Dependency Pair Method. In Proc. of 15th LPAR, pages 652-666, 2008.
N. Hirokawa and G. Moser. Automated Complexity Analysis Based on the Dependency Pair Method. 2012. Submitted to IC, available at URL: http://arxiv.org/abs/1102.3129.
D. Hofbauer and C. Lautemann. Termination Proofs and the Length of Derivations. In Proc. of 3rd RTA, volume 355 of LNCS, pages 167-177. Springer, 1989.
D. Hofbauer and J. Waldmann. Termination of String Rewriting with Matrix Interpretations. In Proc. of 17th RTA, volume 4098 of LNCS, pages 328-342. Springer, 2006.
S. Lucas. Fundamentals of Context-Sensitive Rewriting. In Proc. of 22nd SOFSEM, LNCS, pages 405-412. Springer, 1995.
A. Middeldorp, G. Moser, F. Neurauter, J. Waldmann, and H. Zankl. Joint Spectral Radius Theory for Automated Complexity Analysis of Rewrite Systems. In Proc. of 4th CAI, volume 6742 of LNCS, pages 1-20. Springer, 2011.
G. Moser. Proof Theory at Work: Complexity Analysis of Term Rewrite Systems. CoRR, abs/0907.5527, 2009. Habilitation Thesis.
G. Moser, A. Schnabl, and J. Waldmann. Complexity Analysis of Term Rewriting Based on Matrix and Context Dependent Interpretations. In Proc. of the 28th FSTTCS, LIPIcs, pages 304-315, 2008.
L. Noschinski, F. Emmes, and J. Giesl. A Dependency Pair Framework for Innermost Complexity Analysis of Term Rewrite Systems. In Proc. of 23rd CADE, LNAI, pages 422-438. Springer, 2011.
A. Schnabl. Derivational Complexity Analysis Revisited. PhD thesis, University of Innsbruck, 2012. Available at http://cl-informatik.uibk.ac.at/research/.
R. Thiemann. The DP Framework for Proving Termination of Term Rewriting. PhD thesis, University of Aachen, 2007. Available as Technical Report AIB-2007-17.
H. Zankl and M. Korp. Modular Complexity Analysis via Relative Complexity. In Proc. of 21st RTA, volume 6 of LIPIcs, pages 385-400, 2010.
H. Zantema. Termination of Context-Sensitive Rewriting. In Proc. of 8th RTA, volume 1232 of LNCS, pages 172-186. Springer, 1997.
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Tyrolean Complexity Tool: Features and Usage
The Tyrolean Complexity Tool, TCT for short, is an open source complexity analyser for term rewrite systems. Our tool TCT features a majority of the known techniques for the automated characterisation of polynomial complexity of rewrite systems and can investigate derivational and runtime complexity, for full and innermost rewriting. This system description outlines features and provides a short introduction to the usage of TCT.
program analysis
term rewriting
complexity analysis
automation
71-80
Regular Paper
Martin
Avanzini
Martin Avanzini
Georg
Moser
Georg Moser
10.4230/LIPIcs.RTA.2013.71
M. Avanzini, N. Eguchi, and G. Moser. New Order-theoretic Characterisation of the Polytime Computable Functions. In Proc. of 10th Asian Symposium on Programming Languages and Systems, number 7705 in LNCS, pages 280-295. Springer, 2012.
M. Avanzini and G. Moser. Complexity Analysis by Rewriting. In Proc. of 8th FLOPS, volume 4989 of LNCS, pages 130-146. Springer, 2008.
M. Avanzini and G. Moser. Closing the Gap Between Runtime Complexity and Polytime Computability. In Proc. of 21st RTA, volume 6 of LIPIcs, pages 33-48, 2010.
M. Avanzini and G. Moser. Polynomial Path Orders: A Maximal Model. 2012. Submitted to LMCS. Technical Report available at URL: http://arxiv.org/abs/1209.3793.
M. Avanzini and G. Moser. A Combination Framework for Complexity. In Proc. of 24th RTA. LIPIcs, 2013. Technical Report available at URL: http://arxiv.org/abs/1302.0973.
G. Bonfante, A. Cichon, J.-Y. Marion, and H. Touzet. Algorithms with Polynomial Interpretation Termination Proof. JFP, 11(1):33-53, 2001.
U. Dal Lago and S. Martini. Derivational Complexity is an Invariant Cost Model. In Proc. of 1st FOPARA, 2009.
J. Endrullis, J. Waldmann, and H. Zantema. Matrix Interpretations for Proving Termination of Term Rewriting. JAR, 40(2-3):195-220, 2008.
A. Geser, D. Hofbauer, J. Waldmann, and H. Zantema. On Tree Automata that Certify Termination of Left-Linear Term Rewriting Systems. In Proc. of 16th RTA, number 3467 in LNCS, pages 353-367. Springer, 2005.
J. Giesl, R. Thiemann, P. Schneider-Kamp, and S. Falke. Mechanizing and Improving Dependency Pairs. JAR, 37(3):155-203, 2006.
N. Hirokawa and G. Moser. Automated Complexity Analysis Based on the Dependency Pair Method. In Proc. of 4th IJCAR, volume 5195 of LNAI, pages 364-380, 2008.
N. Hirokawa and G. Moser. Automated Complexity Analysis Based on the Dependency Pair Method. 2012. Submitted to IC, available at URL: http://arxiv.org/abs/1102.3129.
D. Hofbauer and C. Lautemann. Termination Proofs and the Length of Derivations. In Proc. of 3rd RTA, volume 355 of LNCS, pages 167-177. Springer, 1989.
A. Koprowski and J. Waldmann. Max/Plus Tree Automata for Termination of Term Rewriting. AC, 19(2):357-392, 2009.
A. Middeldorp, G. Moser, F. Neurauter, J. Waldmann, and H. Zankl. Joint Spectral Radius Theory for Automated Complexity Analysis of Rewrite Systems. In Proc. of 4th CAI, volume 6742 of LNCS, pages 1-20. Springer, 2011.
G. Moser. Proof Theory at Work: Complexity Analysis of Term Rewrite Systems. CoRR, abs/0907.5527, 2009. Habilitation Thesis.
G. Moser and A. Schnabl. Proving Quadratic Derivational Complexities using Context Dependent Interpretations. In Proc. of 19th RTA, volume 5117 of LNCS, pages 276-290. Springer, 2008.
G. Moser, A. Schnabl, and J. Waldmann. Complexity Analysis of Term Rewriting Based on Matrix and Context Dependent Interpretations. In Proc. of 28th FSTTCS, LIPIcs, pages 304-315, 2008.
L. Noschinski, F. Emmes, and J. Giesl. A Dependency Pair Framework for Innermost Complexity Analysis of Term Rewrite Systems. In Proc. of 23rd CADE, LNAI, pages 422-438. Springer, 2011.
J. Waldmann. Polynomially Bounded Matrix Interpretations. In Proc. of 21st RTA, volume 6 of LIPIcs, pages 357-372, 2010.
H. Zankl and M. Korp. Modular Complexity Analysis via Relative Complexity. In Proc. of 21st RTA, volume 6 of LIPIcs, pages 385-400, 2010.
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Abstract Logical Model Checking of Infinite-State Systems Using Narrowing
A concurrent system can be naturally specified as a rewrite theory
R = (Sigma, E, R) where states are elements of the initial algebra of terms modulo E and concurrent transitions are axiomatized by the rewrite rules R. Under simple conditions, narrowing with rules R modulo equations E can be used to symbolically represent the system's state space by means of terms with logical variables. We call this symbolic representation a "logical state space"; it can also be used for model checking of LTL properties. Since in general such a logical state space can be infinite, we propose several abstraction techniques for obtaining either an over-approximation or an under-approximation of the logical state space: (i) a folding abstraction that collapses patterns into more general ones, (ii) an easy-to-check method to define (bisimilar) equational abstractions, and (iii) an iterated bounded model checking method that can detect if a logical state space within a given bound is complete. We also show that folding abstractions can be faithful for safety LTL properties, so that they do not generate any spurious counterexamples. These abstraction methods can be used in combination and, as we illustrate with examples, can be effective in making the logical state space finite. We have implemented these techniques in the Maude system, providing the first narrowing-based LTL model checker we are aware of.
model checking
infinite states
rewrite theories
narrowing
81-96
Regular Paper
Kyungmin
Bae
Kyungmin Bae
Santiago
Escobar
Santiago Escobar
José
Meseguer
José Meseguer
10.4230/LIPIcs.RTA.2013.81
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Compression of Rewriting Systems for Termination Analysis
We adapt the TreeRePair tree compression algorithm and use it as an intermediate step in proving termination of term rewriting systems. We introduce a cost function that approximates the size of constraint systems that specify compatibility of matrix interpretations. We show how to integrate the compression algorithm with the Dependency Pairs transformation. Experiments show that compression reduces running times of constraint solvers, and thus improves the power of automated termination provers.
termination of rewriting
matrix interpretations
constraint solving
tree compression
97-112
Regular Paper
Alexander
Bau
Alexander Bau
Markus
Lohrey
Markus Lohrey
Eric
Nöth
Eric Nöth
Johannes
Waldmann
Johannes Waldmann
10.4230/LIPIcs.RTA.2013.97
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
A Variant of Higher-Order Anti-Unification
We present a rule-based, Huet-style anti-unification algorithm for simply-typed lambda-terms in eta-long beta-normal form, which computes a least general higher-order pattern generalization. For a pair of arbitrary terms of the same type, such a generalization always exists and is unique modulo alpha-equivalence and variable renaming. The algorithm computes it in cubic time within linear space. It has been implemented and the code is freely available.
higher-order anti-unification
higher-order patterns
113-127
Regular Paper
Alexander
Baumgartner
Alexander Baumgartner
Temur
Kutsia
Temur Kutsia
Jordi
Levy
Jordi Levy
Mateu
Villaret
Mateu Villaret
10.4230/LIPIcs.RTA.2013.113
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Over-approximating Descendants by Synchronized Tree Languages
Over-approximating the descendants (successors) of an initial set of terms under a rewrite system is used in verification. The success of such verification methods depends on the quality of the approximation. To get better approximations, we use non-regular languages. We present a procedure that always terminates and computes an over-approximation of the descendants, using synchronized tree-(tuple) languages expressed by logic programs.
rewriting systems
non-regular approximations
logic programming
tree languages
descendants
128-142
Regular Paper
Yohan
Boichut
Yohan Boichut
Jacques
Chabin
Jacques Chabin
Pierre
Réty
Pierre Réty
10.4230/LIPIcs.RTA.2013.128
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Unifying Nominal Unification
Nominal unification has been proven to be quadratic in time and space by two different approaches, both inspired by the Paterson-Wegman linear unification algorithm, but dramatically different in the way nominal and first-order constraints are dealt with.
To handle nominal constraints, Levy and Villaret introduced the notion of replacings, while Calvès and Fernández use permutations and sets of atoms. To deal with structural constraints, the former use multi-equations in a way similar to the Martelli-Montanari algorithm, while the latter mimic Paterson-Wegman.
In this paper we abstract over these two approaches and generalize them into the notion of modality, highlighting the general ideas behind nominal unification. We show that replacings and environments are in fact isomorphic. This isomorphism is of prime importance for proving intricate properties on both sides, and is a step further toward the real complexity of nominal unification.
unification
nominal
alpha-equivalence
term-graph rewriting
143-157
Regular Paper
Christophe
Calvès
Christophe Calvès
10.4230/LIPIcs.RTA.2013.143
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Rewriting with Linear Inferences in Propositional Logic
Linear inferences are sound implications of propositional logic where each variable appears exactly once in the premiss and conclusion. We consider a specific set of these inferences, MS, first studied by Straßburger, corresponding to the logical rules in deep inference proof theory. Despite previous results characterising the individual rules of MS, we show that there is no polynomial-time characterisation of MS, assuming that integers cannot be factorised in polynomial time. We also examine the length of rewrite paths in an extended system MSU that also has unit equations, utilising a notion dubbed trivialisation to reduce the case with units to the case without, amongst other observations on MS-rewriting and the set of linear inferences in general.
proof theory
propositional logic
complexity of proofs
deep inference
158-173
Regular Paper
Anupam
Das
Anupam Das
10.4230/LIPIcs.RTA.2013.158
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Proof Orders for Decreasing Diagrams
We present and compare some well-founded proof orders for decreasing diagrams. These proof orders order a conversion above another conversion if the latter is obtained by filling any peak in the former by a (locally) decreasing diagram. Therefore each such proof order entails the decreasing diagrams technique for proving confluence. The proof orders differ with respect to monotonicity and complexity. Our results are developed in the setting of involutive monoids. We extend these results to obtain a decreasing diagrams technique for confluence modulo.
involutive monoid
confluence modulo
decreasing diagram
proof order
174-189
Regular Paper
Bertram
Felgenhauer
Bertram Felgenhauer
Vincent
van Oostrom
Vincent van Oostrom
10.4230/LIPIcs.RTA.2013.174
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Decidable structures between Church-style and Curry-style
It is well-known that the type-checking and type-inference problems are undecidable for the second order lambda-calculus in Curry-style, although they are decidable for Church-style. What causes this difference in decidability? We examine crucial conditions on terms for the (un)decidability property from the viewpoint of partially typed terms, asking what kinds of type annotations are essential for (un)decidability of type-related problems. It is revealed that there exists an intermediate structure of second order lambda-terms, called the style of hole-application, between Church-style and Curry-style, such that the type-related problems are decidable under this structure. We also extend this idea to the omega-order polymorphic calculus F-omega, and show that the type-checking and type-inference problems then become undecidable.
2nd-order lambda-calculus
type-checking
type-inference
Church-style and Curry-style
190-205
Regular Paper
Ken-etsu
Fujita
Ken-etsu Fujita
Aleksy
Schubert
Aleksy Schubert
10.4230/LIPIcs.RTA.2013.190
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Expressibility in the Lambda Calculus with Mu
We address a problem connected to the unfolding semantics of functional programming languages: give a useful characterization of those infinite lambda-terms that are lambda-letrec-expressible in the sense that they arise as infinite unfoldings of terms in lambda-letrec, the lambda-calculus with letrec. We provide two characterizations, using concepts we introduce for infinite lambda-terms: regularity, strong regularity, and binding–capturing chains.
It turns out that lambda-letrec-expressible infinite lambda-terms
form a proper subclass of the regular infinite lambda-terms.
In this paper we establish these characterizations only for
expressibility in lambda-mu, the lambda-calculus with explicit mu-recursion. We show that for all infinite lambda-terms T the following are equivalent: (i): T is lambda-mu-expressible; (ii): T is strongly regular; (iii): T is regular, and it only has finite binding–capturing chains.
We define regularity and strong regularity for infinite lambda-terms as two different generalizations of regularity for infinite first-order terms: as the existence of only finitely many subterms that are defined as the reducts of two rewrite systems for decomposing lambda-terms. These rewrite systems act on infinite lambda-terms furnished with a bracketed prefix of abstractions for collecting decomposed lambda-abstractions and keeping the terms closed under decomposition. They differ in which vacuous abstractions in the prefix are removed.
lambda-calculus
lambda-calculus with letrec
unfolding semantics
regularity for infinite lambda-terms
binding-capturing chain
206-222
Regular Paper
Clemens
Grabmayer
Clemens Grabmayer
Jan
Rochel
Jan Rochel
10.4230/LIPIcs.RTA.2013.206
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
A Homotopical Completion Procedure with Applications to Coherence of Monoids
One of the most widely used algorithms in rewriting theory is the Knuth-Bendix completion procedure, which starts from a terminating rewriting system and iteratively adds rules to it, trying to produce an equivalent convergent rewriting system. It is in particular used to study presentations of monoids, since normal forms of the rewriting system provide canonical representatives of words modulo the congruence generated by the rules. Here, we are interested in extending this procedure in order to retrieve information about the low-dimensional homotopy properties of a monoid. We therefore consider the notion of coherent presentation, which is a generalization of rewriting systems that keeps track of the cells generated by confluence diagrams. We extend the Knuth-Bendix completion procedure to this setting, resulting in a homotopical completion procedure. It is based on a generalization of Tietze transformations, which are operations that can be iteratively applied to relate any two presentations of the same monoid. We also explain how these transformations can be used to remove useless generators, rules, or confluence diagrams in a coherent presentation, thus leading to a homotopical reduction procedure. Finally, we apply these techniques to the study of some examples coming from representation theory, to compute minimal coherent presentations for them: braid, plactic and Chinese monoids.
higher-dimensional rewriting
presentation of monoid
Knuth-Bendix completion
Tietze transformation
low-dimensional homotopy for monoids
coherence
223-238
Regular Paper
Yves
Guiraud
Yves Guiraud
Philippe
Malbos
Philippe Malbos
Samuel
Mimram
Samuel Mimram
10.4230/LIPIcs.RTA.2013.223
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Extending Abramsky's Lazy Lambda Calculus: (Non)-Conservativity of Embeddings
Our motivation is the question whether the lazy lambda calculus, a pure lambda calculus with the leftmost-outermost rewriting strategy considered under observational semantics, or extensions thereof, provide an adequate model for semantic equivalences in real-world purely functional programming languages, in particular for a pure core language of Haskell. We explore several extensions of the lazy lambda calculus: addition of a seq-operator, addition of data constructors and case-expressions, and their combination, focusing on conservativity of these extensions. In addition to untyped calculi, we study their monomorphically and polymorphically typed versions. For most of the extensions we obtain non-conservativity, which we prove by providing counterexamples. However, we prove conservativity of the extension by data constructors and case in the monomorphically typed scenario.
lazy lambda calculus
contextual semantics
conservativity
239-254
Regular Paper
Manfred
Schmidt-Schauß
Manfred Schmidt-Schauß
Elena
Machkasova
Elena Machkasova
David
Sabel
David Sabel
10.4230/LIPIcs.RTA.2013.239
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Algorithms for Extended Alpha-Equivalence and Complexity
Equality of expressions in lambda-calculi, higher-order programming languages, higher-order programming calculi and process calculi is defined as alpha-equivalence. Permutability of bindings in let-constructs and structural congruence axioms extend alpha-equivalence. We analyse these extended alpha-equivalences and show that there are calculi with polynomial time algorithms, that a multiple-binding "let" may make alpha-equivalence as hard as finding graph-isomorphisms, and that the replication operator in the pi-calculus may lead to an EXPSPACE-hard alpha-equivalence problem.
alpha-equivalence
higher-order calculi
deduction
pi-calculus
255-270
Regular Paper
Manfred
Schmidt-Schauß
Manfred Schmidt-Schauß
Conrad
Rau
Conrad Rau
David
Sabel
David Sabel
10.4230/LIPIcs.RTA.2013.255
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Unification Modulo Nonnested Recursion Schemes via Anchored Semi-Unification
A recursion scheme is an orthogonal rewriting system with rules of the form f(x1,...,xn) -> s. We consider terms to be equivalent if they rewrite to the same redex-free possibly infinite term after infinitary rewriting. For the restriction to the nonnested case, where nested redexes are forbidden, we prove the existence of principal unifiers modulo scheme equivalence. We give an algorithm computing principal unifiers by reducing the problem to a novel fragment of semi-unification we call anchored semi-unification. For anchored semi-unification, we develop a decision algorithm that returns a principal semi-unifier in the positive case.
recursion schemes
semi-unification
infinitary rewriting
271-286
Regular Paper
Gert
Smolka
Gert Smolka
Tobias
Tebbi
Tobias Tebbi
10.4230/LIPIcs.RTA.2013.271
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Formalizing Knuth-Bendix Orders and Knuth-Bendix Completion
We present extensions of our Isabelle Formalization of Rewriting that cover two historically related concepts: the Knuth-Bendix order and the Knuth-Bendix completion procedure.
The former, besides being the first development of its kind in a proof assistant, is based on a generalized version of the Knuth-Bendix order. We compare our version to variants from the literature and show all properties required to certify termination proofs of TRSs.
The latter comprises the formalization of important facts that are related to completion, like Birkhoff's theorem, the critical pair theorem, and a soundness proof of completion, showing that the strict encompassment condition is superfluous for finite runs. As a result, we are able to certify completion proofs.
certification
completion
confluence
termination
287-302
Regular Paper
Christian
Sternagel
Christian Sternagel
René
Thiemann
René Thiemann
10.4230/LIPIcs.RTA.2013.287
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Automatic Decidability: A Schematic Calculus for Theories with Counting Operators
Many verification problems can be reduced to a satisfiability problem
modulo theories. For building satisfiability procedures, the rewriting-based approach uses a general calculus for equational reasoning named paramodulation. Schematic paramodulation, in turn, provides a means of reasoning about the derivations computed by paramodulation. Until now, schematic paramodulation had only been studied for standard paramodulation. We present a schematic paramodulation calculus modulo a fragment of arithmetic, namely the theory of Integer Offsets. This new schematic calculus is used to prove the decidability of the satisfiability problem for some theories equipped with counting operators. We illustrate our theoretical contribution on theories representing extensions of classical data structures, e.g., lists and records. An implementation within the rewriting-based Maude system constitutes a practical contribution. It enables automatic decidability proofs for theories of practical use.
decision procedures
superposition
schematic saturation
303-318
Regular Paper
Elena
Tushkanova
Elena Tushkanova
Christophe
Ringeissen
Christophe Ringeissen
Alain
Giorgetti
Alain Giorgetti
Olga
Kouchnarenko
Olga Kouchnarenko
10.4230/LIPIcs.RTA.2013.303
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Normalized Completion Revisited
Normalized completion (Marché 1996) is a widely applicable and efficient technique for completion modulo theories. If successful, a normalized completion procedure computes a rewrite system that allows one to decide the validity problem using normalized rewriting. In this paper we consider a slightly simplified inference system for finite normalized completion runs. We prove correctness, show faithfulness of critical pair criteria in our setting, and propose a different notion of normalizing pairs. We then show how normalized completion procedures can benefit from AC-termination tools instead of relying on a fixed AC-compatible reduction order. We outline our implementation of this approach in the completion tool mkbtt and present experimental results, including new completions.
term rewriting
completion
319-334
Regular Paper
Sarah
Winkler
Sarah Winkler
Aart
Middeldorp
Aart Middeldorp
10.4230/LIPIcs.RTA.2013.319
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Beyond Peano Arithmetic – Automatically Proving Termination of the Goodstein Sequence
Kirby and Paris (1982) proved in a celebrated paper that a theorem of Goodstein (1944) cannot be established in Peano (1889) arithmetic. We present an encoding of Goodstein's theorem as a termination problem of a finite rewrite system. Using a novel implementation of ordinal interpretations, we are able to automatically prove termination of this system, resulting in the first automatic termination proof for a system whose derivational complexity is not multiple recursive. Our method can also cope with the encoding by Touzet (1998) of the battle of Hercules and Hydra, yet another system which has been out of reach for automated tools, until now.
term rewriting
termination
automation
ordinals
335-351
Regular Paper
Sarah
Winkler
Sarah Winkler
Harald
Zankl
Harald Zankl
Aart
Middeldorp
Aart Middeldorp
10.4230/LIPIcs.RTA.2013.335
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode
Confluence by Decreasing Diagrams – Formalized
This paper presents a formalization of decreasing diagrams in the theorem prover Isabelle. It discusses mechanical proofs showing that any locally decreasing abstract rewrite system is confluent. The valley and the conversion version of decreasing diagrams are considered.
term rewriting
confluence
decreasing diagrams
formalization
352-367
Regular Paper
Harald
Zankl
Harald Zankl
10.4230/LIPIcs.RTA.2013.352
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode