8 Search Results for "Bonelli, Eduardo"


Document
Barendregt’s Theory of the λ-Calculus, Refreshed and Formalized

Authors: Adrienne Lancelot, Beniamino Accattoli, and Maxime Vemclefs

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Barendregt’s book on the untyped λ-calculus refines the inconsistent view of β-divergence as representation of the undefined via the key concept of head reduction. In this paper, we put together recent revisitations of some key theorems laid out in Barendregt’s book, and we formalize them in the Abella proof assistant. Our work provides a compact and refreshed presentation of the core of the book. The formalization faithfully mimics pen-and-paper proofs. Two interesting aspects are the manipulation of contexts for the study of contextual equivalence and a formal alternative to the informal trick at work in Takahashi’s proof of the genericity lemma. As a by-product, we obtain an alternative definition of contextual equivalence that does not mention contexts.
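
As a rough illustration of the central notion of head reduction, here is a minimal, self-contained Haskell sketch on de Bruijn-indexed terms (the names Term, shift, subst and headStep are chosen here; the sketch does not reflect the paper's Abella development):

  -- Untyped λ-terms with de Bruijn indices (illustrative representation only).
  data Term = Var Int | Lam Term | App Term Term deriving (Eq, Show)

  -- Standard de Bruijn shifting and capture-avoiding substitution.
  shift :: Int -> Int -> Term -> Term
  shift d c (Var k)   = Var (if k >= c then k + d else k)
  shift d c (Lam t)   = Lam (shift d (c + 1) t)
  shift d c (App t u) = App (shift d c t) (shift d c u)

  subst :: Int -> Term -> Term -> Term
  subst j s (Var k)   = if k == j then s else Var k
  subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
  subst j s (App t u) = App (subst j s t) (subst j s u)

  -- One step of head reduction: contract the head redex, if there is one.
  headStep :: Term -> Maybe Term
  headStep (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
  headStep (App t u)       = fmap (`App` u) (headStep t)
  headStep (Lam t)         = fmap Lam (headStep t)
  headStep (Var _)         = Nothing

  -- Ω = (λx. x x)(λx. x x) head-reduces to itself forever: it has no head
  -- normal form, the paradigmatic "undefined" term of Barendregt's theory.
  omega :: Term
  omega = App delta delta where delta = Lam (App (Var 0) (Var 0))

Iterating headStep on omega loops, whereas a term with a head normal form reaches it in finitely many head steps.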

Cite as

Adrienne Lancelot, Beniamino Accattoli, and Maxime Vemclefs. Barendregt’s Theory of the λ-Calculus, Refreshed and Formalized. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 13:1-13:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{lancelot_et_al:LIPIcs.ITP.2025.13,
  author =	{Lancelot, Adrienne and Accattoli, Beniamino and Vemclefs, Maxime},
  title =	{{Barendregt’s Theory of the \lambda-Calculus, Refreshed and Formalized}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{13:1--13:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.13},
  URN =		{urn:nbn:de:0030-drops-246114},
  doi =		{10.4230/LIPIcs.ITP.2025.13},
  annote =	{Keywords: lambda-calculus, head reduction, equational theory}
}
Document
The Cost of Skeletal Call-By-Need, Smoothly

Authors: Beniamino Accattoli, Francesco Magliocca, Loïc Peyrot, and Claudio Sacerdoti Coen

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
Skeletal call-by-need is an optimization of call-by-need evaluation also known as "fully lazy sharing": when the duplication of a value has to take place, it is first split into "skeleton", which is then duplicated, and "flesh" which is instead kept shared. Here, we provide two cost analyses of skeletal call-by-need. Firstly, we provide a family of terms showing that skeletal call-by-need can be asymptotically exponentially faster than call-by-need in both time and space; it is the first such evidence, to our knowledge. Secondly, we prove that skeletal call-by-need can be implemented efficiently, that is, with bi-linear overhead. This result is obtained by providing a new smooth presentation of ideas by Shivers and Wand for the reconstruction of skeletons, which is then smoothly plugged into the study of an abstract machine following the distillation technique by Accattoli et al.
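
To fix intuitions about the skeleton/flesh split underlying fully lazy sharing, here is a small example (the term and the bracket notation [w ← y z] for a shared binding are chosen here, not taken from the paper): to duplicate the value v = λx. (y z) x, only its skeleton λx. w x is copied, while its flesh, the maximal free subterm y z, stays behind in a single shared binding [w ← y z]. All copies of v then share one occurrence of y z, so any evaluation of y z is performed at most once.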

Cite as

Beniamino Accattoli, Francesco Magliocca, Loïc Peyrot, and Claudio Sacerdoti Coen. The Cost of Skeletal Call-By-Need, Smoothly. In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{accattoli_et_al:LIPIcs.FSCD.2025.5,
  author =	{Accattoli, Beniamino and Magliocca, Francesco and Peyrot, Lo\"{i}c and Sacerdoti Coen, Claudio},
  title =	{{The Cost of Skeletal Call-By-Need, Smoothly}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.5},
  URN =		{urn:nbn:de:0030-drops-236206},
  doi =		{10.4230/LIPIcs.FSCD.2025.5},
  annote =	{Keywords: \lambda-calculus, abstract machines, call-by-need, cost models}
}
Document
Two Decreasing Measures for Simply Typed λ-Terms

Authors: Pablo Barenbaum and Cristian Sottile

Published in: LIPIcs, Volume 260, 8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023)


Abstract
This paper defines two decreasing measures for terms of the simply typed λ-calculus, called the 𝒲-measure and the 𝒯^{𝐦}-measure. A decreasing measure is a function that maps each typable λ-term to an element of a well-founded ordering, in such a way that contracting any β-redex decreases the value of the function, entailing strong normalization. Both measures are defined constructively, relying on an auxiliary calculus, a non-erasing variant of the λ-calculus. In this system, dubbed the λ^{𝐦}-calculus, each β-step creates a "wrapper" containing a copy of the argument that cannot be erased and cannot interact with the context in any other way. Both measures rely crucially on the observation, known to Turing and Prawitz, that contracting a redex cannot create redexes of higher degree, where the degree of a redex is defined as the height of the type of its λ-abstraction. The 𝒲-measure maps each λ-term to a natural number, and it is obtained by evaluating the term in the λ^{𝐦}-calculus and counting the number of remaining wrappers. The 𝒯^{𝐦}-measure maps each λ-term to a structure of nested multisets, where the nesting depth is proportional to the maximum redex degree.
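
A minimal Haskell sketch of the notion of redex degree mentioned above, under one common convention for the height of a simple type (the names Ty, height and exampleDegree are chosen here; the measures themselves are not reproduced):

  -- Simple types over a single base type o.
  data Ty = Base | Arr Ty Ty deriving (Eq, Show)

  -- Height of a type. The degree of a β-redex is the height of the type
  -- of its λ-abstraction.
  height :: Ty -> Int
  height Base      = 1
  height (Arr a b) = 1 + max (height a) (height b)

  -- Example: an abstraction of type (o -> o) -> o yields a redex of degree 3.
  exampleDegree :: Int
  exampleDegree = height (Arr (Arr Base Base) Base)   -- evaluates to 3

By the Turing/Prawitz observation quoted in the abstract, contracting such a redex cannot create redexes of higher degree, which is what allows measures stratified by degree to decrease.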

Cite as

Pablo Barenbaum and Cristian Sottile. Two Decreasing Measures for Simply Typed λ-Terms. In 8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 260, pp. 11:1-11:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{barenbaum_et_al:LIPIcs.FSCD.2023.11,
  author =	{Barenbaum, Pablo and Sottile, Cristian},
  title =	{{Two Decreasing Measures for Simply Typed \lambda-Terms}},
  booktitle =	{8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023)},
  pages =	{11:1--11:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-277-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{260},
  editor =	{Gaboardi, Marco and van Raamsdonk, Femke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2023.11},
  URN =		{urn:nbn:de:0030-drops-179956},
  doi =		{10.4230/LIPIcs.FSCD.2023.11},
  annote =	{Keywords: Lambda Calculus, Rewriting, Termination, Strong Normalization, Simple Types}
}
Document
Reductions in Higher-Order Rewriting and Their Equivalence

Authors: Pablo Barenbaum and Eduardo Bonelli

Published in: LIPIcs, Volume 252, 31st EACSL Annual Conference on Computer Science Logic (CSL 2023)


Abstract
Proof terms are syntactic expressions that represent computations in term rewriting. They were introduced by Meseguer and exploited by van Oostrom and de Vrijer to study equivalence of reductions in (left-linear) first-order term rewriting systems. We study the problem of extending the notion of proof term to higher-order rewriting, which generalizes the first-order setting by allowing terms with binders and higher-order substitution. In previous works that devise proof terms for higher-order rewriting, such as Bruggink’s, it has been noted that the challenge lies in reconciling composition of proof terms and higher-order substitution (β-equivalence). This led Bruggink to reject "nested" composition, other than at the outermost level. In this paper, we propose a notion of higher-order proof term we dub rewrites that supports nested composition. We then define two notions of equivalence on rewrites, namely permutation equivalence and projection equivalence, and show that they coincide.
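
As a rough first-order illustration of proof terms and the two equivalences, in the Meseguer / van Oostrom–de Vrijer style (the rules ρ and μ are chosen here, not taken from the paper): with rules ρ : a → b and μ : c → d, the proof term f(ρ, c) ; f(b, μ) denotes the reduction f(a, c) → f(b, c) → f(b, d), and f(a, μ) ; f(ρ, d) denotes f(a, c) → f(a, d) → f(b, d). Both permute into the single parallel step f(ρ, μ), so the two reductions are permutation equivalent; projection equivalence instead declares two reductions equivalent when projecting each over the other (taking residuals) leaves no remaining work, and the paper shows the two notions coincide for its higher-order rewrites.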

Cite as

Pablo Barenbaum and Eduardo Bonelli. Reductions in Higher-Order Rewriting and Their Equivalence. In 31st EACSL Annual Conference on Computer Science Logic (CSL 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 252, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{barenbaum_et_al:LIPIcs.CSL.2023.8,
  author =	{Barenbaum, Pablo and Bonelli, Eduardo},
  title =	{{Reductions in Higher-Order Rewriting and Their Equivalence}},
  booktitle =	{31st EACSL Annual Conference on Computer Science Logic (CSL 2023)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-264-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{252},
  editor =	{Klin, Bartek and Pimentel, Elaine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2023.8},
  URN =		{urn:nbn:de:0030-drops-174694},
  doi =		{10.4230/LIPIcs.CSL.2023.8},
  annote =	{Keywords: Term Rewriting, Higher-Order Rewriting, Proof terms, Equivalence of Computations}
}
Document
Invited Talk
Strong Bisimulation for Control Operators (Invited Talk)

Authors: Delia Kesner, Eduardo Bonelli, and Andrés Viso

Published in: LIPIcs, Volume 152, 28th EACSL Annual Conference on Computer Science Logic (CSL 2020)


Abstract
The purpose of this paper is to identify programs with control operators whose reduction semantics are in exact correspondence. This is achieved by introducing a relation ≃, defined over a revised presentation of Parigot’s λμ-calculus we dub ΛM. Our result builds on two fundamental ingredients: (1) factorization of λμ-reduction into multiplicative and exponential steps by means of explicit term operators of ΛM, and (2) translation of ΛM-terms into Laurent’s polarized proof-nets (PPN) such that cut-elimination in PPN simulates our calculus. Our proposed relation ≃ is shown to characterize structural equivalence in PPN. Most notably, ≃ is shown to be a strong bisimulation with respect to reduction in ΛM, i.e. two ≃-equivalent terms have the exact same reduction semantics, a result which fails for Regnier’s σ-equivalence in λ-calculus as well as for Laurent’s σ-equivalence in λμ.
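
For reference, the strong bisimulation property claimed for ≃ can be spelled out as follows (standard formulation, not quoted from the paper): if t ≃ u and t → t', then there exists u' with u → u' in exactly one step and t' ≃ u', and symmetrically for steps from u. Thus ≃-equivalent terms match each other's reduction steps one for one, a step-for-step correspondence that, as the abstract notes, fails for Regnier's σ-equivalence in the λ-calculus.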

Cite as

Delia Kesner, Eduardo Bonelli, and Andrés Viso. Strong Bisimulation for Control Operators (Invited Talk). In 28th EACSL Annual Conference on Computer Science Logic (CSL 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 152, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kesner_et_al:LIPIcs.CSL.2020.4,
  author =	{Kesner, Delia and Bonelli, Eduardo and Viso, Andr\'{e}s},
  title =	{{Strong Bisimulation for Control Operators}},
  booktitle =	{28th EACSL Annual Conference on Computer Science Logic (CSL 2020)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-132-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{152},
  editor =	{Fern\'{a}ndez, Maribel and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2020.4},
  URN =		{urn:nbn:de:0030-drops-116473},
  doi =		{10.4230/LIPIcs.CSL.2020.4},
  annote =	{Keywords: Lambda-mu calculus, proof-nets, strong bisimulation}
}
Document
Efficient Type Checking for Path Polymorphism

Authors: Juan Edi, Andrés Viso, and Eduardo Bonelli

Published in: LIPIcs, Volume 69, 21st International Conference on Types for Proofs and Programs (TYPES 2015) (2018)


Abstract
A type system combining type application, constants as types, union types (associative, commutative and idempotent) and recursive types has recently been proposed for statically typing path polymorphism, the ability to define functions that can operate uniformly over recursively specified applicative data structures. A typical pattern such functions resort to is the application pattern x y, which decomposes a compound, in other words any applicative tree structure, into its parts. We study type-checking for this type system in two stages. First we propose algorithms for checking type equivalence and subtyping based on coinductive characterizations of those relations. We then formulate a syntax-directed presentation and prove its equivalence with the original one. This yields a type-checking algorithm which unfortunately has exponential time complexity in the worst case. A second algorithm is then proposed, based on automata techniques, which yields a polynomial-time type-checking algorithm.
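
A small illustration of path polymorphism and of this compound-splitting pattern (surface syntax chosen here, not taken from the paper): a size function written roughly as

  size = λc. case c of { x y ⇒ size x + size y ; w ⇒ 1 }

traverses any applicative data structure uniformly. Applied to the compound (cons a) b, the pattern x y matches with x ↦ cons a and y ↦ b, and recursion continues on both halves; applied to an atom such as the bare constructor cons, only the fallback case w applies.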

Cite as

Juan Edi, Andrés Viso, and Eduardo Bonelli. Efficient Type Checking for Path Polymorphism. In 21st International Conference on Types for Proofs and Programs (TYPES 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 69, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{edi_et_al:LIPIcs.TYPES.2015.6,
  author =	{Edi, Juan and Viso, Andr\'{e}s and Bonelli, Eduardo},
  title =	{{Efficient Type Checking for Path Polymorphism}},
  booktitle =	{21st International Conference on Types for Proofs and Programs (TYPES 2015)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-030-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{69},
  editor =	{Uustalu, Tarmo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TYPES.2015.6},
  URN =		{urn:nbn:de:0030-drops-84761},
  doi =		{10.4230/LIPIcs.TYPES.2015.6},
  annote =	{Keywords: lambda-calculus, pattern matching, path polymorphism, type checking}
}
Document
Optimality and the Linear Substitution Calculus

Authors: Pablo Barenbaum and Eduardo Bonelli

Published in: LIPIcs, Volume 84, 2nd International Conference on Formal Structures for Computation and Deduction (FSCD 2017)


Abstract
We lift the theory of optimal reduction to a decomposition of the lambda calculus known as the Linear Substitution Calculus (LSC). LSC decomposes beta-reduction into finer steps that manipulate substitutions in two distinctive ways: it uses context rules that allow substitutions to act "at a distance" and rewrites modulo a set of equations that allow substitutions to "float" in a term. We propose a notion of redex family obtained by adapting Lévy labels to support these two distinctive features. This is followed by a proof of the finite family developments theorem (FFD). We then apply FFD to prove an optimal reduction theorem for LSC. We also apply FFD to deduce additional novel properties of LSC, namely an algorithm for standardisation by selection and normalisation of a linear call-by-need reduction strategy. All results are proved in the axiomatic setting of Glauert and Khasidashvili's Deterministic Residual Structures.
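
For orientation, the Linear Substitution Calculus is commonly presented with rules acting at a distance over terms with explicit substitutions t[x←u] (a standard presentation, not quoted from this paper):

  dB (β at a distance):      L⟨λx.t⟩ u  →  L⟨t[x←u]⟩     (L a possibly empty list of explicit substitutions)
  ls (linear substitution):  C⟨x⟩[x←u]  →  C⟨u⟩[x←u]     (one occurrence of x is replaced)
  gc (garbage collection):   t[x←u]     →  t              (if x does not occur free in t)

and rewriting is considered modulo structural equations letting substitutions float, for instance (t v)[x←u] ≡ t[x←u] v when x is not free in v. The labels and families studied in the paper are adapted to exactly these two features.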

Cite as

Pablo Barenbaum and Eduardo Bonelli. Optimality and the Linear Substitution Calculus. In 2nd International Conference on Formal Structures for Computation and Deduction (FSCD 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 84, pp. 9:1-9:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{barenbaum_et_al:LIPIcs.FSCD.2017.9,
  author =	{Barenbaum, Pablo and Bonelli, Eduardo},
  title =	{{Optimality and the Linear Substitution Calculus}},
  booktitle =	{2nd International Conference on Formal Structures for Computation and Deduction (FSCD 2017)},
  pages =	{9:1--9:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-047-7},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{84},
  editor =	{Miller, Dale},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2017.9},
  URN =		{urn:nbn:de:0030-drops-77307},
  doi =		{10.4230/LIPIcs.FSCD.2017.9},
  annote =	{Keywords: Rewriting, Lambda Calculus, Explicit Substitutions, Optimal Reduction}
}
Document
Normalisation for Dynamic Pattern Calculi

Authors: Eduardo Bonelli, Delia Kesner, Carlos Lombardi, and Alejandro Rios

Published in: LIPIcs, Volume 15, 23rd International Conference on Rewriting Techniques and Applications (RTA'12) (2012)


Abstract
The Pure Pattern Calculus (PPC) extends the lambda-calculus, as well as the family of algebraic pattern calculi, with first-class patterns; that is, patterns can be passed as arguments, evaluated and returned as results. The notion of matching failure of the PPC not only provides a mechanism to define functions by pattern matching on cases but also supplies PPC with parallel-or-like, non-sequential behaviour. Therefore, devising normalising strategies for PPC to obtain well-behaved implementations turns out to be challenging. This paper focuses on normalising reduction strategies for PPC. We define a (multistep) strategy and show that it is normalising. The strategy generalises the leftmost-outermost strategy for lambda-calculus and is strictly finer than parallel-outermost. The normalisation proof is based on the notion of necessary set of redexes, a generalisation of the notion of needed redex encompassing non-sequential reduction systems.
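
A classical first-order picture of the non-sequential behaviour at stake, and of why single needed redexes do not suffice (Berry's example, not taken from the paper): consider an orthogonal system with the three rules

  f(a, b, x) → c     f(b, x, a) → c     f(x, a, b) → c

For a term f(t1, t2, t3) whose arguments are themselves reducible, no argument position can be chosen in advance as needed: for each position there are instances where reducing it is wasted work while the other two suffice. Normalisation arguments must therefore track a necessary set of redexes, at least one of which is contracted in every reduction to normal form, which is the generalisation of neededness used in the paper.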

Cite as

Eduardo Bonelli, Delia Kesner, Carlos Lombardi, and Alejandro Rios. Normalisation for Dynamic Pattern Calculi. In 23rd International Conference on Rewriting Techniques and Applications (RTA'12). Leibniz International Proceedings in Informatics (LIPIcs), Volume 15, pp. 117-132, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{bonelli_et_al:LIPIcs.RTA.2012.117,
  author =	{Bonelli, Eduardo and Kesner, Delia and Lombardi, Carlos and Rios, Alejandro},
  title =	{{Normalisation for Dynamic Pattern Calculi}},
  booktitle =	{23rd International Conference on Rewriting Techniques and Applications (RTA'12)},
  pages =	{117--132},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-38-5},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{15},
  editor =	{Tiwari, Ashish},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.RTA.2012.117},
  URN =		{urn:nbn:de:0030-drops-34889},
  doi =		{10.4230/LIPIcs.RTA.2012.117},
  annote =	{Keywords: Pattern calculi, reduction strategies, sequentiality, neededness}
}
