Proceedings of the 21st International Conference on Rewriting Techniques and Applications, RTA 2010, July 11-13, 2010, Edinburgh, Scotland, UK
RTA 2010
July 11-13, 2010
Edinburgh, Scotland, UK
International Conference on Rewriting Techniques and Applications
RTA
http://rewriting.loria.fr/rta/
https://dblp.org/db/conf/rta
Leibniz International Proceedings in Informatics
LIPIcs
https://www.dagstuhl.de/dagpub/1868-8969
https://dblp.org/db/series/lipics
1868-8969
Christopher
Lynch
Christopher Lynch
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
6
2010
978-3-939897-18-7
https://www.dagstuhl.de/dagpub/978-3-939897-18-7
Frontmatter (Titlepage, Table of Contents, Author List, PC List, Reviewer List)
Front matter including table of contents, author list, PC list, and reviewer list.
Table of Contents
Author List
PC List
Reviewer List
i-xii
Front Matter
Christopher
Lynch
Christopher Lynch
10.4230/LIPIcs.RTA.2010.i
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Preface
Preface
Preface
xiii-xiv
Regular Paper
Christopher
Lynch
Christopher Lynch
10.4230/LIPIcs.RTA.2010.XIII
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Automata for Data Words and Data Trees
Data words and data trees appear in verification and XML processing. The term "data" means that positions of the word, or tree, are decorated with elements of an infinite set of data values, such as natural numbers or ASCII strings. This talk is a survey of the various automaton models that have been developed for data words and data trees.
Automata
Data Word
Data Tree
1-4
Regular Paper
Mikolaj
Bojanczyk
Mikolaj Bojanczyk
10.4230/LIPIcs.RTA.2010.1
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Realising Optimal Sharing
Realising Optimal Sharing
Optimal Sharing
5-6
Regular Paper
Vincent
van Oostrom
Vincent van Oostrom
10.4230/LIPIcs.RTA.2010.5
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Automated Confluence Proof by Decreasing Diagrams based on Rule-Labelling
The decreasing diagrams technique (van Oostrom, 1994) can be
widely applied to prove confluence of rewrite systems. To
directly apply the decreasing diagrams technique to prove confluence
of rewrite systems, the rule-labelling heuristic was proposed by van
Oostrom (2008). We show how constraints for ensuring confluence of
term rewriting systems, constructed based on the rule-labelling
heuristic, are encoded as linear arithmetic constraints whose
satisfiability can be checked by external SMT solvers. We point
out an additional constraint, omitted in (van Oostrom, 2008), that is
needed to guarantee the soundness of confluence proofs based on the
rule-labelling heuristic extended to deal with non-right-linear rules.
We also present several extensions of the rule-labelling heuristic
that enlarge the applicability of the technique.
Confluence
Decreasing Diagrams
Automation
Term Rewriting Systems
7-16
Regular Paper
Takahito
Aoto
Takahito Aoto
10.4230/LIPIcs.RTA.2010.7
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Higher-Order (Non-)Modularity
We show that, contrary to the situation in first-order
term rewriting, almost none of the usual properties of
rewriting are modular for higher-order rewriting, irrespective
of the higher-order rewriting format.
We show that for the particular format of simply typed
applicative term rewriting systems modularity of confluence,
normalization, and termination can be recovered
by imposing suitable linearity constraints.
Higher-order rewriting
modularity
termination
normalization
17-32
Regular Paper
Claus
Appel
Claus Appel
Vincent
van Oostrom
Vincent van Oostrom
Jakob Grue
Simonsen
Jakob Grue Simonsen
10.4230/LIPIcs.RTA.2010.17
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Closing the Gap Between Runtime Complexity and Polytime Computability
In earlier work, we have shown
that for confluent TRSs, innermost polynomial runtime complexity
induces polytime computability of the functions defined.
In this paper, we generalise this result to full rewriting; to this
end we exploit graph rewriting. We give a new proof of the adequacy of
graph rewriting for full rewriting that allows for
a precise control of the resources copied. In sum, we
completely describe an implementation of rewriting
on a Turing machine (TM for short). We show that the runtime complexity of
the TRS and the runtime complexity of the TM are polynomially
related.
Our result strengthens the evidence that the complexity of
a rewrite system is truthfully represented through the length
of derivations. Moreover, our result allows the classification of
nondeterministic polytime computation based on runtime
complexity analysis of rewrite systems.
Term rewriting
graph rewriting
complexity analysis
polytime computability
33-48
Regular Paper
Martin
Avanzini
Martin Avanzini
Georg
Moser
Georg Moser
10.4230/LIPIcs.RTA.2010.33
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Abstract Models of Transfinite Reductions
We investigate transfinite reductions in abstract reduction
systems. To this end, we study two abstract models for transfinite
reductions: a metric model generalising the usual metric approach to
infinitary term rewriting and a novel partial order model. For both
models we distinguish between a weak and a strong variant of
convergence as known from infinitary term rewriting. Furthermore, we
introduce an axiomatic model of reductions that is general enough to
cover all of these models of transfinite reductions as well as the
ordinary model of finite reductions. It is shown that, in this
unifying axiomatic model, many basic relations between termination and
confluence properties known from finite reductions still hold. The
introduced models are applied to term rewriting but also to term graph
rewriting. We can show that for both term rewriting as well as for
term graph rewriting the partial order model forms a conservative
extension to the metric model.
Infinitary rewriting
metric
partial order
abstract reduction system
axiomatic
term rewriting
graph rewriting
49-66
Regular Paper
Patrick
Bahr
Patrick Bahr
10.4230/LIPIcs.RTA.2010.49
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Partial Order Infinitary Term Rewriting and Böhm Trees
We investigate an alternative model of infinitary term
rewriting. Instead of a metric, a partial order on terms is employed
to formalise (strong) convergence. We compare this partial order
convergence of orthogonal term rewriting systems to the usual metric
convergence of the corresponding Böhm extensions. The Böhm extension
of a term rewriting system contains additional rules to equate
so-called root-active terms. The core result we present is that
reachability w.r.t. partial order convergence coincides with
reachability w.r.t. metric convergence in the Böhm extension. This
result is used to show that, unlike in the metric model, orthogonal
systems are infinitarily confluent and infinitarily normalising in the
partial order model. Moreover, we obtain, as in the metric model, a
compression lemma. A corollary of this lemma is that reachability
w.r.t. partial order convergence is a conservative extension of
reachability w.r.t. metric convergence.
Infinitary term rewriting
Böhm trees
partial order
confluence
normalisation
67-84
Regular Paper
Patrick
Bahr
Patrick Bahr
10.4230/LIPIcs.RTA.2010.67
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Unique Normal Forms in Infinitary Weakly Orthogonal Rewriting
We present some contributions to the theory of infinitary rewriting
for weakly orthogonal term rewrite systems, in which critical pairs
may occur provided they are trivial.
We show that the infinitary unique normal form property (UNinf)
fails by a simple example of a weakly orthogonal TRS with two
collapsing rules. By translating this example, we show that UNinf
also fails for the infinitary lambda-beta-eta-calculus.
As positive results we obtain the following: Infinitary confluence,
and hence UNinf, holds for weakly orthogonal TRSs that do not contain
collapsing rules. To this end we refine the compression lemma.
Furthermore, we consider the triangle and diamond properties
for infinitary developments in weakly orthogonal TRSs,
by refining an earlier cluster-analysis for the finite case.
Weakly orthogonal term rewrite systems
unique normal form property
infinitary rewriting
infinitary lambda-beta-eta-calculus
85-102
Regular Paper
Joerg
Endrullis
Joerg Endrullis
Clemens
Grabmayer
Clemens Grabmayer
Dimitri
Hendriks
Dimitri Hendriks
Jan Willem
Klop
Jan Willem Klop
Vincent
van Oostrom
Vincent van Oostrom
10.4230/LIPIcs.RTA.2010.85
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
The Undecidability of Type Related Problems in Type-free Style System F
We consider here a number of variations on System F, which are predicative second-order systems whose terms are intermediate between the Curry style and the Church style. The terms here contain the information on where the universal quantifier elimination and introduction in the type inference process must take place, which is similar to Church forms. However, they omit the information on which types are involved in the rules, which is similar to Curry forms. In this paper we prove the undecidability of the type-checking, type inference and typability problems for the system. Moreover, the proof works for the predicative version of the system with finitely stratified polymorphic types. The result includes bounds on Leivant's level numbers for the types used in the instances leading to the undecidability.
Lambda calculus and related systems
type checking
typability
partial type inference
2nd order unification
undecidability
Curry style type system
103-118
Regular Paper
Ken-Etsu
Fujita
Ken-Etsu Fujita
Aleksy
Schubert
Aleksy Schubert
10.4230/LIPIcs.RTA.2010.103
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
On (Un)Soundness of Unravelings
We revisit (un)soundness of transformations of conditional into
unconditional rewrite systems. The focus here is on so-called
unravelings, the simplest and most natural kind of such
transformations, for the class of normal conditional systems without
extra variables. By a systematic and thorough study of existing
counterexamples and of the potential sources of unsoundness we
obtain several new positive and negative results. In particular, we
prove the following new results: Confluence, non-erasingness and
weak left-linearity (of a given conditional system) each guarantee
soundness of the unraveled version w.r.t. the original one. The
latter result substantially extends the only known sufficient
criterion for soundness, namely left-linearity. Furthermore, by
means of counterexamples we refute various other tempting
conjectures about sufficient conditions for soundness.
Conditional rewriting
transformation into unconditional systems
unsoundness
unraveling
119-134
Regular Paper
Karl
Gmeiner
Karl Gmeiner
Bernhard
Gramlich
Bernhard Gramlich
Felix
Schernhammer
Felix Schernhammer
10.4230/LIPIcs.RTA.2010.119
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
A Proof Calculus Which Reduces Syntactic Bureaucracy
In usual proof systems, like the sequent calculus, only a very limited way of combining proofs is available through the tree structure. We present in this paper a logic-independent proof calculus, where proofs can be freely composed by connectives, and prove its basic properties. The main advantage of this proof calculus is that it allows one to avoid certain types of syntactic bureaucracy inherent in all usual proof systems, in particular the sequent calculus. Proofs in this system closely reflect their atomic flow, which traces the behaviour of atoms through structural rules. The general definition is illustrated by the standard deep-inference system for propositional logic, for which there are known rewriting techniques that achieve cut elimination based only on the information in atomic flows.
Logic
Proof theory
Deep Inference
Flow graphs
Proof Systems
Open Deduction
Rewriting
Confluence
Termination
135-150
Regular Paper
Alessio
Guglielmi
Alessio Guglielmi
Tom
Gundersen
Tom Gundersen
Michel
Parigot
Michel Parigot
10.4230/LIPIcs.RTA.2010.135
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
A Rewriting Logic Semantics Approach to Modular Program Analysis
The K framework, based on rewriting logic semantics, provides a powerful logic for defining the semantics of programming languages. While most work in this area has focused on defining an evaluation semantics for a language, it is also possible to define an abstract semantics that can be used for program analysis. Using the SILF language (Hills, Serbanuta and Rosu, 2007), this paper describes one technique for defining such a semantics: policy frameworks. In policy frameworks, an analysis-generic, modular framework is first defined for a language. Individual analyses, called policies, are then defined as extensions of this framework, with each policy defining analysis-specific semantic rules and an annotation language which, in combination with support in the language front-end, allows users to annotate program types and functions with information used during program analysis. Standard term rewriting techniques are used to analyze programs by evaluating them in the policy semantics.
K
rewriting logic semantics
program analysis
151-160
Regular Paper
Mark
Hills
Mark Hills
Grigore
Rosu
Grigore Rosu
10.4230/LIPIcs.RTA.2010.151
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Infinitary Rewriting: Foundations Revisited
Infinitary term rewriting makes it possible to express infinitary terms and infinitary reductions
that converge to them. Two concepts of transfinite reduction, viewed as binary relations,
have been studied in the past: strongly and weakly convergent reductions;
in the last decade research has mostly focused on the former.
Finitary rewriting has a strong connection to the equational theory of its rule set:
if the rewrite system is confluent this (implies consistency of the theory and) gives rise to a semi-decision procedure for the theory,
and if the rewrite system is in addition terminating this becomes a decision procedure. This connection
is the original reason for the study of these properties in rewriting.
For infinitary rewriting there is hardly an established notion of an equational theory.
The reason this issue is not trivial is that such a theory would need to include
some form of "getting to limits", and there are different options one can pursue.
We examine these options here, as well as several alternatives for the notion of reduction relation
and their relationships to these equational theories.
Infinitary rewriting
equivalence
161-176
Regular Paper
Stefan
Kahrs
Stefan Kahrs
10.4230/LIPIcs.RTA.2010.161
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Underspecified computation of normal forms
We show how to compute readings of ambiguous natural language sentences that are minimal in some way. Formally, we consider the problem of computing, given a set C of trees and a rewrite system R, those trees in C that cannot be rewritten into a tree in C. We solve the problem for sets of trees that are described by semantic representations typically used in computational linguistics, and a certain class of rewrite systems that we use to approximate entailment, and show how to compute the irreducible trees efficiently by intersecting tree automata. Our algorithm solves the problem of computing weakest readings that has been open for 25 years in computational linguistics.
Rewrite systems
tree automata
normal forms
computational linguistics
177-192
Regular Paper
Alexander
Koller
Alexander Koller
Stefan
Thater
Stefan Thater
10.4230/LIPIcs.RTA.2010.177
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Order-Sorted Unification with Regular Expression Sorts
We extend first-order order-sorted unification by permitting regular expression sorts for variables and in the domains of function symbols. The set of basic sorts is finite. The obtained signature corresponds to a finite bottom-up hedge automaton. The unification problem in such a theory generalizes some known unification problems. Its unification type is infinitary. We give a complete unification procedure and prove decidability.
Unification
sorts
regular expression
193-208
Regular Paper
Temur
Kutsia
Temur Kutsia
Mircea
Marin
Mircea Marin
10.4230/LIPIcs.RTA.2010.193
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
An Efficient Nominal Unification Algorithm
Nominal unification is an extension of first-order unification where terms can contain binders and unification is performed modulo alpha-equivalence. Here we prove that the existence of nominal unifiers can be decided in quadratic time. First, we reduce nominal unification problems, in linear time, to a sequence of freshness constraints and equalities between atoms, modulo a permutation, using ideas of Paterson and Wegman for first-order unification. Second, we prove that solvability of these reduced problems can be checked in quadratic time. Finally, we point out how, using ideas of Brown and Tarjan for unbalanced merging, we could solve these reduced problems more efficiently.
Nominal logic
unification
209-226
Regular Paper
Jordi
Levy
Jordi Levy
Mateu
Villaret
Mateu Villaret
10.4230/LIPIcs.RTA.2010.209
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Computing Critical Pairs in 2-Dimensional Rewriting Systems
Rewriting systems on words are very useful in the study of monoids. In good cases, they give finite presentations of the monoids, allowing their manipulation by a computer. Even better, when the presentation is confluent and terminating, they provide one with a notion of canonical representative for the elements of the presented monoid. Polygraphs are a higher-dimensional generalization of this notion of presentation, from the setting of monoids to the much more general setting of n-categories. Here, we are interested in proving confluence for polygraphs presenting 2-categories, which can be seen as a generalization of term rewriting systems. For this purpose, we propose an adaptation of the usual algorithm for computing critical pairs. Interestingly, this framework is much richer than term rewriting systems and requires the elaboration of a new theoretical framework for representing critical pairs, based on contexts in compact 2-categories.
Rewriting system
polygraph
presentation of a category
critical pair
unification
confluence
compact 2-category
context
227-242
Regular Paper
Samuel
Mimram
Samuel Mimram
10.4230/LIPIcs.RTA.2010.227
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Polynomial Interpretations over the Reals do not Subsume Polynomial Interpretations over the Integers
Polynomial interpretations are a useful technique for proving termination
of term rewrite systems. They come in various flavors:
polynomial interpretations with real, rational and integer coefficients.
In 2006, Lucas proved that there are rewrite systems that can be shown
polynomially terminating by polynomial interpretations with
real (algebraic)
coefficients, but cannot be shown polynomially terminating using
polynomials with rational coefficients only.
He also proved a similar theorem with respect to the use of
rational coefficients versus integer coefficients.
In this paper we show that polynomial interpretations with real or
rational coefficients do not subsume polynomial interpretations with
integer coefficients, contrary to what is commonly believed.
We further show that polynomial interpretations with real
coefficients subsume polynomial interpretations with rational
coefficients.
Term rewriting
termination
polynomial interpretations
243-258
Regular Paper
Friedrich
Neurauter
Friedrich Neurauter
Aart
Middeldorp
Aart Middeldorp
10.4230/LIPIcs.RTA.2010.243
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Automated Termination Analysis of Java Bytecode by Term Rewriting
We present an automated approach to prove termination of Java Bytecode (JBC) programs by automatically transforming them to term rewrite systems (TRSs). In this way, the numerous techniques and tools developed for TRS termination can now be used for imperative object-oriented languages like Java, which can be compiled into JBC.
Java Bytecode
termination
term rewriting
259-276
Regular Paper
Carsten
Otto
Carsten Otto
Marc
Brockschmidt
Marc Brockschmidt
Christian
von Essen
Christian von Essen
Jürgen
Giesl
Jürgen Giesl
10.4230/LIPIcs.RTA.2010.259
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Declarative Debugging of Missing Answers for Maude
Declarative debugging is a semi-automatic technique that starts
from an incorrect computation and locates a program fragment responsible for the error by building a tree representing this computation and guiding the user through it to find the error. Membership equational logic (MEL) is an equational
logic that in addition to equations allows the statement of membership axioms characterizing the elements of a sort. Rewriting logic is a logic of change that extends MEL by adding rewrite rules, which correspond to transitions between states and can be nondeterministic. In this paper we propose a calculus that allows us to
infer normal forms and least sorts with the equational part, and sets of reachable terms through rules. We use an abbreviation of the proof trees computed with this calculus to build appropriate debugging trees for missing answers (results that are erroneous because they are incomplete), whose adequacy for debugging is proved.
Using these trees we have implemented a declarative debugger for Maude, a high-performance system based on rewriting logic, whose use is illustrated with an example.
Declarative debugging
Maude
Missing answers
Rewriting
277-294
Regular Paper
Adrian
Riesco
Adrian Riesco
Alberto
Verdejo
Alberto Verdejo
Narciso
Marti-Oliet
Narciso Marti-Oliet
10.4230/LIPIcs.RTA.2010.277
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Simulation in the Call-by-Need Lambda-Calculus with letrec
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
Lambda calculus
semantics
contextual equivalence
bisimulation
call-by-need
295-310
Regular Paper
Manfred
Schmidt-Schauss
Manfred Schmidt-Schauss
David
Sabel
David Sabel
Elena
Machkasova
Elena Machkasova
10.4230/LIPIcs.RTA.2010.295
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Weak Convergence and Uniform Normalization in Infinitary Rewriting
We study infinitary term rewriting systems containing finitely many rules. For these, we show that if a weakly convergent reduction is not strongly convergent, it contains a term that reduces to itself in one step (but the step itself need not be part of the reduction). Using this result, we prove
the starkly surprising fact
that for any orthogonal system with finitely many rules, the system is
weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong convergence.
As further corollaries, we derive a number of new results for weakly convergent rewriting: Systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly convergent rewriting in the setting of weak convergence, if systems with finitely many rules are considered. Finally, we give a number of counterexamples showing failure of most of the results when infinite sets of rules are allowed.
Infinitary rewriting
weak convergence
uniform normalization
311-324
Regular Paper
Jakob Grue
Simonsen
Jakob Grue Simonsen
10.4230/LIPIcs.RTA.2010.311
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Certified Subterm Criterion and Certified Usable Rules
In this paper we present our formalization of two important
termination techniques for term rewrite systems: the subterm criterion and
the reduction pair processor in combination with usable rules. For both
techniques we developed executable check functions in the theorem prover
Isabelle/HOL which can certify the correct application of these techniques
in some given termination proof. As there are several variants of usable
rules we designed our check function in such a way that it accepts all
known variants, even those which are not explicitly spelled out in previous
papers.
We integrated our formalization in the publicly available IsaFoR-library.
This led to a significant increase in the power of CeTA, the corresponding
certified termination proof checker that is extracted from IsaFoR.
Term Rewriting
Certification
Termination
Theorem Proving
325-340
Regular Paper
Christian
Sternagel
Christian Sternagel
René
Thiemann
René Thiemann
10.4230/LIPIcs.RTA.2010.325
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Termination of linear bounded term rewriting systems
For the whole class of linear term rewriting systems and for each integer k, we define k-bounded rewriting as a restriction of the usual notion of rewriting.
We show that the k-bounded uniform termination, the k-bounded termination, the inverse k-bounded uniform termination, and the inverse k-bounded termination problems are decidable.
The k-bounded class (BO(k)) is, by definition, the set of linear systems for
which every derivation can be replaced by a k-bounded derivation. In general, for BO(k) systems, the uniform (respectively inverse uniform) k-bounded termination problem is not equivalent to the uniform (respectively inverse uniform) termination problem, and the k-bounded (respectively inverse k-bounded) termination problem is not equivalent to the termination (respectively inverse termination) problem.
This leads us to
define more restricted classes for which these problems are equivalent: the classes BOLP(k) of
k-bounded systems that have the length preservation property. By definition, a system is BOLP(k) if every derivation of length n can be replaced by a k-bounded derivation of length n.
We define the class BOLP of bounded systems that have the length preservation property as the union of all the BOLP(k) classes.
The class BOLP contains (strictly) several already known classes of systems:
the inverse left-basic semi-Thue systems,
the linear growing term rewriting systems, the inverse Linear-Finite-Path-Ordering systems, the strongly bottom-up systems.
Term rewriting
termination
rewriting strategy
341-356
Regular Paper
Irène
Durand
Irène Durand
Géraud
Sénizergues
Géraud Sénizergues
Marc
Sylvestre
Marc Sylvestre
10.4230/LIPIcs.RTA.2010.341
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Polynomially Bounded Matrix Interpretations
Matrix interpretations can be used to bound the derivational complexity of rewrite systems. We present a criterion that completely characterizes matrix interpretations that are polynomially bounded. It includes the method of upper triangular interpretations as a special case, and we prove that the inclusion is strict. The criterion can be expressed as a finite domain constraint system.
It translates to a Boolean constraint system with a size that is polynomial in the dimension of the interpretation. We report on performance of an implementation.
Derivational complexity of rewriting
matrix interpretation
weighted automata
ambiguity of automata
finite domain constraints
357-372
Regular Paper
Johannes
Waldmann
Johannes Waldmann
10.4230/LIPIcs.RTA.2010.357
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Optimizing mkbTT
We describe performance enhancements that have been added to mkbTT, a
modern completion tool combining multi-completion with the use of
termination tools.
Knuth-Bendix completion
termination prover
automated deduction
373-384
Regular Paper
Sarah
Winkler
Sarah Winkler
Haruhiko
Sato
Haruhiko Sato
Aart
Middeldorp
Aart Middeldorp
Masahito
Kurihara
Masahito Kurihara
10.4230/LIPIcs.RTA.2010.373
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Modular Complexity Analysis via Relative Complexity
In this paper we introduce a modular framework which allows one to infer
(feasible) upper bounds on the (derivational) complexity of term rewrite
systems by combining different criteria. All current investigations to
analyze the derivational complexity are based on a single termination
proof, possibly preceded by transformations. We prove that the modular
framework is strictly more powerful than the conventional setting.
Furthermore, the results have been implemented and experiments show
significant gains in power.
Term rewriting
complexity analysis
relative complexity
derivation length
385-400
Regular Paper
Harald
Zankl
Harald Zankl
Martin
Korp
Martin Korp
10.4230/LIPIcs.RTA.2010.385
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Proving Productivity in Infinite Data Structures
For a general class of infinite data structures including streams, binary trees, and the combination of finite and infinite lists, we investigate the notion of productivity. This generalizes stream productivity. We develop a general technique to prove productivity
based on proving context-sensitive termination, by which the power of present termination tools can be exploited. In order to treat cases where the approach does not apply directly, we develop transformations extending the power of the basic approach. We present a tool combining these ingredients that can prove productivity of a wide range of examples fully automatically.
Productivity
infinite data structures
streams
401-416
Regular Paper
Hans
Zantema
Hans Zantema
Matthias
Raffelsieper
Matthias Raffelsieper
10.4230/LIPIcs.RTA.2010.401
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode