LIPIcs, Volume 166

34th European Conference on Object-Oriented Programming (ECOOP 2020)

Event

ECOOP 2020, November 15-17, 2020, Berlin, Germany (Virtual Conference)

Editors

Robert Hirschfeld
  • Hasso Plattner Institute, University of Potsdam, Germany
Tobias Pape
  • Hasso Plattner Institute, University of Potsdam, Germany

Publication Details

  • published at: 2020-11-06
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-154-2
  • DBLP: db/conf/ecoop/ecoop2020

Documents

Document
Complete Volume
LIPIcs, Volume 166, ECOOP 2020, Complete Volume

Authors: Robert Hirschfeld and Tobias Pape


Abstract
LIPIcs, Volume 166, ECOOP 2020, Complete Volume

Cite as

34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 1-906, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@Proceedings{hirschfeld_et_al:LIPIcs.ECOOP.2020,
  title =	{{LIPIcs, Volume 166, ECOOP 2020, Complete Volume}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{1--906},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020},
  URN =		{urn:nbn:de:0030-drops-131566},
  doi =		{10.4230/LIPIcs.ECOOP.2020},
  annote =	{Keywords: LIPIcs, Volume 166, ECOOP 2020, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Robert Hirschfeld and Tobias Pape


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 0:i-0:xxviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{hirschfeld_et_al:LIPIcs.ECOOP.2020.0,
  author =	{Hirschfeld, Robert and Pape, Tobias},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{0:i--0:xxviii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.0},
  URN =		{urn:nbn:de:0030-drops-131572},
  doi =		{10.4230/LIPIcs.ECOOP.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Sound Regular Corecursion in coFJ

Authors: Davide Ancona, Pietro Barbieri, Francesco Dagnino, and Elena Zucca


Abstract
The aim of the paper is to provide solid foundations for a programming paradigm natively supporting the creation and manipulation of cyclic data structures. To this end, we describe coFJ, a Java-like calculus where objects can be infinite and methods are equipped with a codefinition (an alternative body). We provide an abstract semantics of the calculus based on the framework of inference systems with corules. In coFJ with this semantics, FJ recursive methods on finite objects can be extended to infinite objects as well, and behave as desired by the programmer, by specifying a codefinition. We also describe an operational semantics which can be directly implemented in a programming language, and prove the soundness of such semantics with respect to the abstract one.
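
To make the idea concrete, here is a rough Python sketch (not coFJ itself; the decorator, the Node class, and the all_pos example are illustrative assumptions): a recursive call that revisits a node already being processed on a cyclic structure is answered by the codefinition instead of diverging.

def corecursive(codefinition):
    """Make a recursive function total on cyclic data: a call that revisits
    an argument already being processed is answered by the codefinition."""
    def wrap(method):
        pending = set()
        def inner(node):
            if id(node) in pending:
                return codefinition()   # cycle detected: use the alternative body
            pending.add(id(node))
            try:
                return method(node)
            finally:
                pending.discard(id(node))
        return inner
    return wrap

class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

@corecursive(codefinition=lambda: True)
def all_pos(node):
    # Ordinary recursive definition; it terminates on cyclic lists because the
    # decorator supplies the codefinition's result when the cycle closes.
    return node.value > 0 and (node.next is None or all_pos(node.next))

ones = Node(1)
ones.next = ones          # the cyclic (infinite) list [1, 1, 1, ...]
print(all_pos(ones))      # True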

Cite as

Davide Ancona, Pietro Barbieri, Francesco Dagnino, and Elena Zucca. Sound Regular Corecursion in coFJ. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 1:1-1:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{ancona_et_al:LIPIcs.ECOOP.2020.1,
  author =	{Ancona, Davide and Barbieri, Pietro and Dagnino, Francesco and Zucca, Elena},
  title =	{{Sound Regular Corecursion in coFJ}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{1:1--1:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.1},
  URN =		{urn:nbn:de:0030-drops-131582},
  doi =		{10.4230/LIPIcs.ECOOP.2020.1},
  annote =	{Keywords: Operational semantics, coinduction, programming paradigms, regular terms}
}
Document
Perfect Is the Enemy of Good: Best-Effort Program Synthesis

Authors: Hila Peleg and Nadia Polikarpova


Abstract
Program synthesis promises to help software developers with everyday tasks by generating code snippets automatically from input-output examples and other high-level specifications. The conventional wisdom is that a synthesizer must always satisfy the specification exactly. We conjecture that this all-or-nothing paradigm stands in the way of adopting program synthesis as a developer tool: in practice, the user-written specification often contains errors or is simply too hard for the synthesizer to solve within a reasonable time; in these cases, the user is left with a single over-fitted result or, more often than not, no result at all. In this paper we propose a new program synthesis paradigm we call best-effort program synthesis, where the synthesizer returns a ranked list of partially-valid results, i.e. programs that satisfy some part of the specification. To support this paradigm, we develop best-effort enumeration, a new synthesis algorithm that extends a popular program enumeration technique with the ability to accumulate and return multiple partially-valid results with minimal overhead. We implement this algorithm in a tool called BESTER, and evaluate it on 79 synthesis benchmarks from the literature. Contrary to the conventional wisdom, our evaluation shows that BESTER returns useful results even when the specification is flawed or too hard: i) for all benchmarks with an error in the specification, the top three BESTER results contain the correct solution, and ii) for most hard benchmarks, the top three results contain non-trivial fragments of the correct solution. We also performed an exploratory user study, which confirms our intuition that partially-valid results are useful: the study shows that programmers use the output of the synthesizer for comprehension and often incorporate it into their solutions.
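
As a toy illustration of the ranking idea (not the BESTER tool; the candidate pool and scoring below are invented for this sketch), a best-effort enumerator keeps the candidates that satisfy the most examples instead of demanding a perfect match:

CANDIDATES = {
    "x": lambda x: x,
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x * x": lambda x: x * x,
}

def score(fn, examples):
    """Number of input-output examples the candidate satisfies."""
    ok = 0
    for inp, out in examples:
        try:
            if fn(inp) == out:
                ok += 1
        except Exception:
            pass
    return ok

def best_effort_synthesize(examples, top=3):
    ranked = sorted(CANDIDATES,
                    key=lambda name: score(CANDIDATES[name], examples),
                    reverse=True)
    return [(name, score(CANDIDATES[name], examples)) for name in ranked[:top]]

# The last example is wrong (5 should map to 10): an all-or-nothing synthesizer
# returns nothing, but the top partially-valid result is the intended program.
examples = [(1, 2), (3, 6), (4, 8), (5, 11)]
print(best_effort_synthesize(examples))    # [('x * 2', 3), ...]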

Cite as

Hila Peleg and Nadia Polikarpova. Perfect Is the Enemy of Good: Best-Effort Program Synthesis. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 2:1-2:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{peleg_et_al:LIPIcs.ECOOP.2020.2,
  author =	{Peleg, Hila and Polikarpova, Nadia},
  title =	{{Perfect Is the Enemy of Good: Best-Effort Program Synthesis}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{2:1--2:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.2},
  URN =		{urn:nbn:de:0030-drops-131593},
  doi =		{10.4230/LIPIcs.ECOOP.2020.2},
  annote =	{Keywords: Program Synthesis, Programming by Example}
}
Document
Blame for Null

Authors: Abel Nieto, Marianna Rapoport, Gregor Richards, and Ondřej Lhoták


Abstract
Multiple modern programming languages, including Kotlin, Scala, Swift, and C#, have type systems where nullability is explicitly specified in the types. All of the above also need to interoperate with languages where types remain implicitly nullable, like Java. This leads to runtime errors that can manifest in subtle ways. In this paper, we show how to reason about the presence and provenance of such nullability errors using the concept of blame from gradual typing. Specifically, we introduce a calculus, λ_null, where some terms are typed as implicitly nullable and others as explicitly nullable. Just like in the original blame calculus of Wadler and Findler, interactions between both kinds of terms are mediated by casts with attached blame labels, which indicate the origin of errors. On top of λ_null, we then create a second calculus, λ_null^s, which closely models the interoperability between languages with implicit nullability and languages with explicit nullability, such as Java and Scala. Our main result is a theorem that states that nullability errors in λ_null^s can always be blamed on terms with less-precise typing; that is, terms typed as implicitly nullable. By analogy, this would mean that NullPointerExceptions in combined Java/Scala programs are always the result of unsoundness in the Java type system. We summarize our result with the slogan explicitly nullable programs can't be blamed. All our results are formalized in the Coq proof assistant.
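
The following Python sketch (hypothetical names; it mirrors the idea rather than λ_null itself) shows how a cast at the boundary between implicitly nullable and explicitly non-null code can carry a blame label that identifies the origin of a null error:

class BlameError(Exception):
    """Raised when a null value crosses into non-null code; carries the blame label."""
    def __init__(self, label):
        super().__init__(f"null value; blame lies with {label}")

def cast_nonnull(value, label):
    # Mediates the boundary: blames the implicitly nullable side if value is None.
    if value is None:
        raise BlameError(label)
    return value

# "Java-like" code: implicitly nullable, may silently produce None.
def lookup(table, key):
    return table.get(key)

# "Scala-like" code: expects a non-null string.
def shout(s: str) -> str:
    return s.upper()

table = {"a": "hello"}
print(shout(cast_nonnull(lookup(table, "a"), label="lookup at call site 1")))
# shout(cast_nonnull(lookup(table, "b"), label="lookup at call site 2"))
#   -> BlameError: the error is traced to the implicitly nullable lookup, not to shout.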

Cite as

Abel Nieto, Marianna Rapoport, Gregor Richards, and Ondřej Lhoták. Blame for Null. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 3:1-3:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{nieto_et_al:LIPIcs.ECOOP.2020.3,
  author =	{Nieto, Abel and Rapoport, Marianna and Richards, Gregor and Lhot\'{a}k, Ond\v{r}ej},
  title =	{{Blame for Null}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{3:1--3:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.3},
  URN =		{urn:nbn:de:0030-drops-131606},
  doi =		{10.4230/LIPIcs.ECOOP.2020.3},
  annote =	{Keywords: nullability, type systems, blame calculus, gradual typing}
}
Document
Static Race Detection and Mutex Safety and Liveness for Go Programs

Authors: Julia Gabet and Nobuko Yoshida


Abstract
Go is a popular concurrent programming language thanks to its ability to efficiently combine concurrency and systems programming. In Go programs, a number of concurrency bugs can be caused by a mixture of data races and communication problems. In this paper, we develop a theory based on behavioural types to statically detect data races and deadlocks in Go programs. We first specify lock safety/liveness and data race properties over a Go program model, using the happens-before relation defined in the Go memory model. We represent these properties of programs in a μ-calculus model of types, and validate them using type-level model-checking. We then extend the framework to account for Go’s channels, and implement a static verification tool which can detect concurrency errors. This is, to the best of our knowledge, the first static verification framework of this kind for the Go language, uniformly analysing concurrency errors caused by a mix of shared memory accesses and asynchronous message-passing communications.

Cite as

Julia Gabet and Nobuko Yoshida. Static Race Detection and Mutex Safety and Liveness for Go Programs. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 4:1-4:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{gabet_et_al:LIPIcs.ECOOP.2020.4,
  author =	{Gabet, Julia and Yoshida, Nobuko},
  title =	{{Static Race Detection and Mutex Safety and Liveness for Go Programs}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{4:1--4:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.4},
  URN =		{urn:nbn:de:0030-drops-131615},
  doi =		{10.4230/LIPIcs.ECOOP.2020.4},
  annote =	{Keywords: Go language, behavioural types, race detection, happens-before relation, safety, liveness}
}
Document
Reconciling Event Structures with Modern Multiprocessors

Authors: Evgenii Moiseenko, Anton Podkopaev, Ori Lahav, Orestis Melkonian, and Viktor Vafeiadis


Abstract
Weakestmo is a recently proposed memory consistency model that uses event structures to resolve the infamous "out-of-thin-air" problem and to enable efficient compilation to hardware. Nevertheless, this latter property - compilation correctness - has not yet been formally established. This paper closes this gap by establishing correctness of the intended compilation schemes from Weakestmo to a wide range of formal hardware memory models (x86, POWER, ARMv7, ARMv8) in the Coq proof assistant. Our proof is the first that establishes correctness of compilation of an event-structure-based model that forbids "out-of-thin-air" behaviors, as well as the first mechanized compilation proof of a weak memory model supporting sequentially consistent accesses to such a range of hardware platforms. Our compilation proof goes via the recent Intermediate Memory Model (IMM), which we suitably extend with sequentially consistent accesses.

Cite as

Evgenii Moiseenko, Anton Podkopaev, Ori Lahav, Orestis Melkonian, and Viktor Vafeiadis. Reconciling Event Structures with Modern Multiprocessors. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 5:1-5:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{moiseenko_et_al:LIPIcs.ECOOP.2020.5,
  author =	{Moiseenko, Evgenii and Podkopaev, Anton and Lahav, Ori and Melkonian, Orestis and Vafeiadis, Viktor},
  title =	{{Reconciling Event Structures with Modern Multiprocessors}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{5:1--5:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.5},
  URN =		{urn:nbn:de:0030-drops-131622},
  doi =		{10.4230/LIPIcs.ECOOP.2020.5},
  annote =	{Keywords: Weak Memory Consistency, Event Structures, IMM, Weakestmo}
}
Document
Don't Panic! Better, Fewer, Syntax Errors for LR Parsers

Authors: Lukas Diekmann and Laurence Tratt


Abstract
Syntax errors are generally easy to fix for humans, but not for parsers in general nor LR parsers in particular. Traditional "panic mode" error recovery, though easy to implement and applicable to any grammar, often leads to a cascading chain of errors that drown out the original. More advanced error recovery techniques suffer less from this problem but have seen little practical use because their typical performance was seen as poor, their worst case unbounded, and the repairs they reported arbitrary. In this paper we introduce the CPCT+ algorithm, and an implementation of that algorithm, that address these issues. First, CPCT+ reports the complete set of minimum cost repair sequences for a given location, allowing programmers to select the one that best fits their intention. Second, on a corpus of 200,000 real-world syntactically invalid Java programs, CPCT+ is able to repair 98.37%±0.017% of files within a timeout of 0.5s. Finally, CPCT+ uses the complete set of minimum cost repair sequences to reduce the cascading error problem, where incorrect error recovery causes further spurious syntax errors to be identified. Across the test corpus, CPCT+ reports 435,812±473 error locations to the user, reducing the cascading error problem substantially relative to the 981,628±0 error locations reported by panic mode.

Cite as

Lukas Diekmann and Laurence Tratt. Don't Panic! Better, Fewer, Syntax Errors for LR Parsers. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 6:1-6:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{diekmann_et_al:LIPIcs.ECOOP.2020.6,
  author =	{Diekmann, Lukas and Tratt, Laurence},
  title =	{{Don't Panic! Better, Fewer, Syntax Errors for LR Parsers}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{6:1--6:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.6},
  URN =		{urn:nbn:de:0030-drops-131630},
  doi =		{10.4230/LIPIcs.ECOOP.2020.6},
  annote =	{Keywords: Parsing, error recovery, programming languages}
}
Document
K-LLVM: A Relatively Complete Semantics of LLVM IR

Authors: Liyi Li and Elsa L. Gunter


Abstract
LLVM [Lattner and Adve, 2004] is designed for the compile-time, link-time and run-time optimization of programs written in various programming languages. The language supported by LLVM and targeted by modern compilers is LLVM IR [llvm.org, 2018]. In this paper we define K-LLVM, a reference semantics for LLVM IR. To the best of our knowledge, K-LLVM is the most complete formal LLVM IR semantics to date, including all LLVM IR instructions, intrinsic functions in the LLVM documentation and Standard-C library functions that are necessary to execute many LLVM IR programs. Additionally, K-LLVM formulates an abstract machine that executes all LLVM IR instructions. The machine allows us to describe our formal semantics in terms of simulating a conceptual virtual machine that runs LLVM IR programs, including non-deterministic programs. Even though the K-LLVM memory model in this paper is assumed to be a sequentially consistent memory model and does not include all LLVM concurrency memory behaviors, the design of K-LLVM’s data layout allows the K-LLVM abstract machine to execute some LLVM IR programs that previous semantics did not cover, such as the full range of LLVM IR behaviors for the interaction among LLVM IR casting, pointer arithmetic, memory operations and some memory flags (e.g. readonly) of function headers. Additionally, the memory model is modularized in a manner that supports investigating other memory models. To validate K-LLVM, we have implemented it in 𝕂 [Roşu, 2016], which generated an interpreter for LLVM IR. Using this, we ran tests including 1,385 unit test programs and around 3,000 concrete LLVM IR programs, and K-LLVM passed all of them.

Cite as

Liyi Li and Elsa L. Gunter. K-LLVM: A Relatively Complete Semantics of LLVM IR. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 7:1-7:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{li_et_al:LIPIcs.ECOOP.2020.7,
  author =	{Li, Liyi and Gunter, Elsa L.},
  title =	{{K-LLVM: A Relatively Complete Semantics of LLVM IR}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{7:1--7:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.7},
  URN =		{urn:nbn:de:0030-drops-131649},
  doi =		{10.4230/LIPIcs.ECOOP.2020.7},
  annote =	{Keywords: LLVM, formal semantics, K framework, memory model, abstract machine}
}
Document
Space-Efficient Gradual Typing in Coercion-Passing Style

Authors: Yuya Tsuda, Atsushi Igarashi, and Tomoya Tabuchi


Abstract
Herman et al. pointed out that the insertion of run-time checks into a gradually typed program could hamper tail-call optimization and, as a result, worsen the space complexity of the program. To address the problem, they proposed a space-efficient coercion calculus, which was subsequently improved by Siek et al. The semantics of these calculi involves eager composition of run-time checks expressed by coercions to prevent the size of a term from growing. However, it also relies on a nonstandard reduction rule, which does not seem easy to implement. In fact, no compiler implementation of gradually typed languages fully supports the space-efficient semantics faithfully. In this paper, we study coercion-passing style, which Herman et al. have already mentioned, as a technique for straightforward space-efficient implementation of gradually typed languages. A program in coercion-passing style passes "the rest of the run-time checks" around - just like continuation-passing style (CPS), in which "the rest of the computation" is passed around - and (unlike CPS) composes coercions eagerly. We give a formal coercion-passing translation from λS by Siek et al. to λS₁, which is a new calculus of first-class coercions tailored for coercion-passing style, and prove correctness of the translation. We also implement our coercion-passing style transformation for the Grift compiler developed by Kuhlenschmidt et al. Experiments show that stack overflow can be prevented, at the cost of up to 3 times slower execution for most partially typed practical programs.
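
To convey the flavour of coercion-passing style, here is a small Python sketch (the check names and the countdown_sum example are assumptions, not the paper's λS₁ calculus): the pending coercion is threaded through the loop as an extra argument and composed eagerly as a set of primitive checks, so it cannot grow without bound.

CHECKS = {
    "int?": lambda v: isinstance(v, int),
    "nonneg?": lambda v: isinstance(v, int) and v >= 0,
}

def apply_coercion(k, v):
    for name in k:
        if not CHECKS[name](v):
            raise TypeError(f"coercion {name} failed on {v!r}")
    return v

def compose(k1, k2):
    return k1 | k2      # eager composition keeps the pending coercion small

def countdown_sum(n, acc=0, k=frozenset()):
    # Coercion-passing style: the run-time check on the eventual result is
    # passed along (like a continuation) instead of wrapping each call.
    if n == 0:
        return apply_coercion(k, acc)
    return countdown_sum(n - 1, acc + n, compose(k, frozenset({"int?"})))

print(countdown_sum(5))   # 15, checked once by the single composed coercion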

Cite as

Yuya Tsuda, Atsushi Igarashi, and Tomoya Tabuchi. Space-Efficient Gradual Typing in Coercion-Passing Style. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 8:1-8:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{tsuda_et_al:LIPIcs.ECOOP.2020.8,
  author =	{Tsuda, Yuya and Igarashi, Atsushi and Tabuchi, Tomoya},
  title =	{{Space-Efficient Gradual Typing in Coercion-Passing Style}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{8:1--8:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.8},
  URN =		{urn:nbn:de:0030-drops-131658},
  doi =		{10.4230/LIPIcs.ECOOP.2020.8},
  annote =	{Keywords: Gradual typing, coercion calculus, coercion-passing style, dynamic type checking, tail-call optimization}
}
Document
Multiparty Session Programming With Global Protocol Combinators

Authors: Keigo Imai, Rumyana Neykova, Nobuko Yoshida, and Shoji Yuen


Abstract
Multiparty Session Types (MPST) is a typing discipline for communication protocols. It ensures the absence of communication errors and deadlocks for well-typed communicating processes. The state-of-the-art implementations of the MPST theory rely on (1) runtime linearity checks to ensure correct usage of communication channels and (2) external domain-specific languages for specifying and verifying multiparty protocols. To overcome these limitations, we propose a library for programming with global combinators - a set of functions for writing and verifying multiparty protocols in OCaml. Local behaviours for all processes in a protocol are inferred at once from a global combinator. We formalise global combinators and prove sound realisability of global combinators - a well-typed global combinator derives a set of local types, by which typed endpoint programs can ensure type and communication safety. Our approach enables fully-static verification and implementation of the whole protocol, from the protocol specification to the process implementations, to happen in the same language. We compare our implementation to untyped and continuation-passing style implementations, and demonstrate its expressiveness by implementing a plethora of protocols. We show that our library can interoperate with existing libraries and services, implementing the DNS (Domain Name System) protocol and the OAuth (Open Authorization) protocol.
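
The core idea of deriving all local behaviours from a single global description can be sketched in a few lines of Python (an illustrative projection over a linear, choice-free protocol; the role and label names are invented, and this is not the paper's OCaml combinator library):

# A global protocol as a sequence of messages: (sender, receiver, label).
global_protocol = [
    ("client", "server", "login"),
    ("server", "client", "token"),
    ("client", "server", "query"),
    ("server", "client", "result"),
]

def project(protocol, role):
    """Derive one role's local behaviour (its sends and receives) from the global protocol."""
    local = []
    for sender, receiver, label in protocol:
        if sender == role:
            local.append(("send", receiver, label))
        elif receiver == role:
            local.append(("recv", sender, label))
    return local

print(project(global_protocol, "client"))
# [('send', 'server', 'login'), ('recv', 'server', 'token'),
#  ('send', 'server', 'query'), ('recv', 'server', 'result')]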

Cite as

Keigo Imai, Rumyana Neykova, Nobuko Yoshida, and Shoji Yuen. Multiparty Session Programming With Global Protocol Combinators. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 9:1-9:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{imai_et_al:LIPIcs.ECOOP.2020.9,
  author =	{Imai, Keigo and Neykova, Rumyana and Yoshida, Nobuko and Yuen, Shoji},
  title =	{{Multiparty Session Programming With Global Protocol Combinators}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{9:1--9:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.9},
  URN =		{urn:nbn:de:0030-drops-131662},
  doi =		{10.4230/LIPIcs.ECOOP.2020.9},
  annote =	{Keywords: Multiparty Session Types, Communication Protocol, Concurrent and Distributed Programming, OCaml}
}
Document
Pearl
Designing with Static Capabilities and Effects: Use, Mention, and Invariants (Pearl)

Authors: Colin S. Gordon


Abstract
Capabilities (whether object or reference capabilities) are fundamentally tools to restrict effects. Thus static capabilities (object or reference) and effect systems apply different technical machinery to the same core problem of statically restricting or reasoning about effects in programs. Any time two approaches can in principle address the same sets of problems, it becomes important to understand the trade-offs between the approaches and how these trade-offs might interact with the problem at hand. Experts who have worked in these areas tend to find the trade-offs somewhat obvious, having considered them in context before. However, this kind of design discussion is often written down only implicitly as a comparison between two approaches for a specific program reasoning problem, rather than as a discussion of general trade-offs between general classes of techniques. As a result, it is not uncommon to set out to solve a problem with one technique, only to find the other better-suited. We discuss the trade-offs between static capabilities (specifically reference capabilities) and effect systems, articulating the challenges each approach tends to have in isolation, and how these are sometimes mitigated. We also put our discussion in context, by appealing to examples of how these trade-offs were considered in the course of developing prior systems in the area. Along the way, we highlight how seemingly-minor aspects of type systems - weakening/framing and the mere existence of type contexts - play a subtle role in the efficacy of these systems.

Cite as

Colin S. Gordon. Designing with Static Capabilities and Effects: Use, Mention, and Invariants (Pearl). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 10:1-10:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{gordon:LIPIcs.ECOOP.2020.10,
  author =	{Gordon, Colin S.},
  title =	{{Designing with Static Capabilities and Effects: Use, Mention, and Invariants}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{10:1--10:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.10},
  URN =		{urn:nbn:de:0030-drops-131677},
  doi =		{10.4230/LIPIcs.ECOOP.2020.10},
  annote =	{Keywords: Effect systems, reference capabilities, object capabilities}
}
Document
Owicki-Gries Reasoning for C11 RAR

Authors: Sadegh Dalvandi, Simon Doherty, Brijesh Dongol, and Heike Wehrheim


Abstract
Owicki-Gries reasoning for concurrent programs uses Hoare logic together with an interference freedom rule for concurrency. In this paper, we develop a new proof calculus for the C11 RAR memory model (a fragment of C11 with both relaxed and release-acquire accesses) that allows all Owicki-Gries proof rules for compound statements, including non-interference, to remain unchanged. Our proof method features novel assertions specifying thread-specific views on the state of programs. This is combined with a set of Hoare logic rules that describe how these assertions are affected by atomic program steps. We demonstrate the utility of our proof calculus by verifying a number of standard C11 litmus tests and Peterson’s algorithm adapted for C11. Our proof calculus and its application to program verification have been fully mechanised in the theorem prover Isabelle.

Cite as

Sadegh Dalvandi, Simon Doherty, Brijesh Dongol, and Heike Wehrheim. Owicki-Gries Reasoning for C11 RAR. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 11:1-11:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{dalvandi_et_al:LIPIcs.ECOOP.2020.11,
  author =	{Dalvandi, Sadegh and Doherty, Simon and Dongol, Brijesh and Wehrheim, Heike},
  title =	{{Owicki-Gries Reasoning for C11 RAR}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{11:1--11:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.11},
  URN =		{urn:nbn:de:0030-drops-131687},
  doi =		{10.4230/LIPIcs.ECOOP.2020.11},
  annote =	{Keywords: C11, Verification, Hoare logic, Owicki-Gries, Isabelle}
}
Document
A Semantics for the Essence of React

Authors: Magnus Madsen, Ondřej Lhoták, and Frank Tip


Abstract
Traditionally, web applications have been written as HTML pages with embedded JavaScript code that implements dynamic and interactive features by manipulating the Document Object Model (DOM) through a low-level browser API. However, this unprincipled approach leads to code that is brittle, difficult to understand, non-modular, and does not facilitate incremental update of user-interfaces in response to state changes. React is a popular framework for constructing web applications that aims to overcome these problems. React applications are written in a declarative and object-oriented style, and consist of components that are organized in a tree structure. Each component has a set of properties representing input parameters, a state consisting of values that may vary over time, and a render method that declaratively specifies the subcomponents of the component. React’s concept of reconciliation determines the impact of state changes and updates the user-interface incrementally by selective mounting and unmounting of subcomponents. At designated points, the React framework invokes lifecycle hooks that enable programmers to perform actions outside the framework such as acquiring and releasing resources needed by a component. These mechanisms exhibit considerable complexity, but, to our knowledge, no formal specification of React’s semantics exists. This paper presents a small-step operational semantics that captures the essence of React, as a first step towards a long-term goal of developing automatic tools for program understanding, automatic testing, and bug finding for React web applications. To demonstrate that key operations such as mounting, unmounting, and reconciliation terminate, we define the notion of a well-behaved component and prove that well-behavedness is preserved by these operations.
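
The mechanisms the semantics formalizes can be previewed with a small Python sketch (the Component/reconcile API below is invented for illustration and is not the paper's calculus): components render child descriptions from their props, and reconciliation mounts, unmounts, or keeps children by comparing old and new descriptions.

class Component:
    def __init__(self, props):
        self.props, self.state = props, {}
    def render(self):
        # Returns child descriptions as (key, component class, props) triples.
        return []

class Item(Component):
    pass

class TodoList(Component):
    def render(self):
        return [(text, Item, {"text": text}) for text in self.props["todos"]]

def reconcile(mounted, component):
    """Update the dict of mounted children to match component.render()."""
    wanted = {key: (cls, props) for key, cls, props in component.render()}
    for key in list(mounted):
        if key not in wanted:
            print("unmount", key)
            del mounted[key]
    for key, (cls, props) in wanted.items():
        if key in mounted and isinstance(mounted[key], cls):
            mounted[key].props = props        # keep the instance, update its props
        else:
            print("mount", key)
            mounted[key] = cls(props)
    return mounted

root = TodoList({"todos": ["a", "b"]})
children = reconcile({}, root)                # mounts a and b
root.props = {"todos": ["b", "c"]}
children = reconcile(children, root)          # unmounts a, keeps b, mounts c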

Cite as

Magnus Madsen, Ondřej Lhoták, and Frank Tip. A Semantics for the Essence of React. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 12:1-12:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{madsen_et_al:LIPIcs.ECOOP.2020.12,
  author =	{Madsen, Magnus and Lhot\'{a}k, Ond\v{r}ej and Tip, Frank},
  title =	{{A Semantics for the Essence of React}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{12:1--12:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.12},
  URN =		{urn:nbn:de:0030-drops-131697},
  doi =		{10.4230/LIPIcs.ECOOP.2020.12},
  annote =	{Keywords: JavaScript, React, operational semantics, lifecycle, reconciliation}
}
Document
Tool Insights Paper
Test-Case Reduction via Test-Case Generation: Insights from the Hypothesis Reducer (Tool Insights Paper)

Authors: David R. MacIver and Alastair F. Donaldson


Abstract
We describe internal test-case reduction, the method of test-case reduction employed by Hypothesis, a widely-used property-based testing library for Python. The key idea of internal test-case reduction is that instead of applying test-case reduction externally to generated test cases, we apply it internally, to the sequence of random choices made during generation, so that a test case is reduced by continually re-generating smaller and simpler test cases that continue to trigger some property of interest (e.g. a bug in the system under test). This allows for fully generic test-case reduction without any user intervention and without the need to write a specific test-case reducer for a particular application domain. It also significantly mitigates the impact of the test-case validity problem, by ensuring that any reduced test case is one that could in principle have been generated. We describe the rationale behind this approach, explain its implementation in Hypothesis, and present an extensive evaluation comparing its effectiveness with that of several other test-case reducers, including C-Reduce and delta debugging, on applications including Python auto-formatting, C compilers, and the SymPy symbolic math library. Our hope is that these insights into the reduction mechanism employed by Hypothesis will be useful to researchers interested in randomized testing and test-case reduction, as the crux of the approach is fully generic and should be applicable to any random generator of test cases.
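
The following is a deliberately simplified Python sketch of the internal approach (the choice encoding, the interesting property, and the reducer passes are assumptions for illustration; Hypothesis' actual engine is far more sophisticated): the generator replays a recorded sequence of choices, and the reducer shrinks that sequence and re-generates, keeping only sequences whose generated test case still triggers the property.

def generate(choices):
    """Replay a sequence of integer choices to build a test case (a list of ints)."""
    it = iter(choices)
    draw = lambda hi: next(it, 0) % (hi + 1)   # missing choices default to 0
    return [draw(100) for _ in range(draw(10))]

def interesting(test_case):
    # Stand-in for "triggers the bug under investigation".
    return len(test_case) >= 3 and any(v > 50 for v in test_case)

def reduce_choices(choices):
    current = list(choices)
    improved = True
    while improved:
        improved = False
        for i in range(len(current)):
            # Try deleting choice i, then halving it; keep whichever still
            # re-generates an interesting test case.
            for candidate in (current[:i] + current[i + 1:],
                              current[:i] + [current[i] // 2] + current[i + 1:]):
                if candidate != current and interesting(generate(candidate)):
                    current, improved = candidate, True
                    break
            if improved:
                break
    return current

choices = [7, 90, 12, 60, 3, 88, 41, 5]        # as if recorded from a random run
print(generate(choices))                        # the original failing test case
small = reduce_choices(choices)
print(small, "->", generate(small))             # a smaller choice sequence, still failing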

Cite as

David R. MacIver and Alastair F. Donaldson. Test-Case Reduction via Test-Case Generation: Insights from the Hypothesis Reducer (Tool Insights Paper). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 13:1-13:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{maciver_et_al:LIPIcs.ECOOP.2020.13,
  author =	{MacIver, David R. and Donaldson, Alastair F.},
  title =	{{Test-Case Reduction via Test-Case Generation: Insights from the Hypothesis Reducer}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{13:1--13:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.13},
  URN =		{urn:nbn:de:0030-drops-131700},
  doi =		{10.4230/LIPIcs.ECOOP.2020.13},
  annote =	{Keywords: Software testing, test-case reduction}
}
Document
Model-View-Update-Communicate: Session Types Meet the Elm Architecture

Authors: Simon Fowler


Abstract
Session types are a type discipline for communication channel endpoints which allow conformance to protocols to be checked statically. Safely implementing session types requires linearity, usually in the form of a linear type system. Unfortunately, linear typing is difficult to integrate with graphical user interfaces (GUIs), and to date most programs using session types are command line applications. In this paper, we propose the first principled integration of session typing and GUI development by building upon the Model-View-Update (MVU) architecture, pioneered by the Elm programming language. We introduce λMVU, the first formal model of the MVU architecture, and prove it sound. By extending λMVU with commands as found in Elm, along with linearity and model transitions, we show the first formal integration of session typing and GUI programming. We implement our approach in the Links web programming language, and show examples including a two-factor authentication workflow and multi-room chat server.
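
For readers unfamiliar with MVU, the core loop can be sketched in a few lines of Python (an illustrative counter, not λMVU or the Links implementation; the paper's contribution is layering linearity, commands, and session-typed communication on top of this pattern):

from dataclasses import dataclass

@dataclass
class Model:
    count: int = 0

def update(model, message):
    """Pure function: current model + message -> new model."""
    if message == "increment":
        return Model(model.count + 1)
    if message == "decrement":
        return Model(model.count - 1)
    return model

def view(model):
    """Pure rendering; a real view also maps user events to messages."""
    return f"count = {model.count}"

model = Model()
for message in ["increment", "increment", "decrement"]:   # events from the UI
    model = update(model, message)
    print(view(model))       # count = 1, count = 2, count = 1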

Cite as

Simon Fowler. Model-View-Update-Communicate: Session Types Meet the Elm Architecture. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 14:1-14:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{fowler:LIPIcs.ECOOP.2020.14,
  author =	{Fowler, Simon},
  title =	{{Model-View-Update-Communicate: Session Types Meet the Elm Architecture}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{14:1--14:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.14},
  URN =		{urn:nbn:de:0030-drops-131717},
  doi =		{10.4230/LIPIcs.ECOOP.2020.14},
  annote =	{Keywords: Session types, concurrent programming, Model-View-Update}
}
Document
Static Analysis of Shape in TensorFlow Programs

Authors: Sifis Lagouvardos, Julian Dolby, Neville Grech, Anastasios Antoniadis, and Yannis Smaragdakis


Abstract
Machine learning has been widely adopted in diverse science and engineering domains, aided by reusable libraries and quick development patterns. The TensorFlow library is probably the best-known representative of this trend and most users employ the Python API to its powerful back-end. TensorFlow programs are susceptible to several systematic errors, especially in the dynamic typing setting of Python. We present Pythia, a static analysis that tracks the shapes of tensors across Python library calls and warns of several possible mismatches. The key technical aspects are a close modeling of library semantics with respect to tensor shape, and an identification of violations and error-prone patterns. Pythia is powerful enough to statically detect (with 84.62% precision) 11 of the 14 shape-related TensorFlow bugs in the recent Zhang et al. empirical study - an independent slice of real-world bugs.
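
The essence of tracking shapes across library calls can be sketched as follows (a toy abstract domain in Python with invented helper names, not the Pythia tool, which models the real TensorFlow API): each operation maps input shapes to an output shape or reports a mismatch, without running any computation.

import math

def matmul_shape(a, b):
    if a[-1] != b[0]:
        raise TypeError(f"matmul mismatch: {a} x {b}")
    return (a[0], b[1])

def add_shape(a, b):
    if a != b:
        raise TypeError(f"add mismatch: {a} vs {b}")
    return a

def reshape_shape(a, new):
    if math.prod(a) != math.prod(new):
        raise TypeError(f"cannot reshape {a} into {new}")
    return new

# Abstractly "execute" a small model over shapes only.
x = reshape_shape((32, 28, 28), (32, 784))     # a batch of flattened images
w1, b1 = (784, 128), (32, 128)
h = add_shape(matmul_shape(x, w1), b1)         # (32, 128)
print(h)
w2 = (64, 10)
# matmul_shape(h, w2)   # statically reported: matmul mismatch: (32, 128) x (64, 10)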

Cite as

Sifis Lagouvardos, Julian Dolby, Neville Grech, Anastasios Antoniadis, and Yannis Smaragdakis. Static Analysis of Shape in TensorFlow Programs. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 15:1-15:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{lagouvardos_et_al:LIPIcs.ECOOP.2020.15,
  author =	{Lagouvardos, Sifis and Dolby, Julian and Grech, Neville and Antoniadis, Anastasios and Smaragdakis, Yannis},
  title =	{{Static Analysis of Shape in TensorFlow Programs}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{15:1--15:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.15},
  URN =		{urn:nbn:de:0030-drops-131726},
  doi =		{10.4230/LIPIcs.ECOOP.2020.15},
  annote =	{Keywords: Python, TensorFlow, static analysis, Doop, Wala}
}
Document
Value Partitioning: A Lightweight Approach to Relational Static Analysis for JavaScript

Authors: Benjamin Barslev Nielsen and Anders Møller


Abstract
In static analysis of modern JavaScript libraries, relational analysis at key locations is critical to provide sound and useful results. Prior work addresses this challenge by the use of various forms of trace partitioning and syntactic patterns, which is fragile and does not scale well, or by incorporating complex backwards analysis. In this paper, we propose a new lightweight variant of trace partitioning named value partitioning that refines individual abstract values instead of entire abstract states. We describe how this approach can effectively capture important relational properties involving dynamic property accesses, functions with free variables, and predicate functions. Furthermore, we extend an existing JavaScript analyzer with value partitioning and demonstrate experimentally that it is a simple, precise, and efficient alternative to the existing approaches for analyzing widely used JavaScript libraries.

Cite as

Benjamin Barslev Nielsen and Anders Møller. Value Partitioning: A Lightweight Approach to Relational Static Analysis for JavaScript. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 16:1-16:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{nielsen_et_al:LIPIcs.ECOOP.2020.16,
  author =	{Nielsen, Benjamin Barslev and M{\o}ller, Anders},
  title =	{{Value Partitioning: A Lightweight Approach to Relational Static Analysis for JavaScript}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{16:1--16:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.16},
  URN =		{urn:nbn:de:0030-drops-131731},
  doi =		{10.4230/LIPIcs.ECOOP.2020.16},
  annote =	{Keywords: JavaScript, dataflow analysis, abstract interpretation}
}
Document
Static Type Analysis by Abstract Interpretation of Python Programs

Authors: Raphaël Monat, Abdelraouf Ouadjaout, and Antoine Miné


Abstract
Python is an increasingly popular dynamic programming language, particularly used in the scientific community and well-known for its powerful and permissive high-level syntax. Our work aims at statically and automatically detecting type errors. As these type errors are exceptions that can be caught later on, we precisely track all exceptions (raised or caught). We designed a static analysis by abstract interpretation able to infer the possible types of variables, taking into account the full control-flow. It handles both typing paradigms used in Python, nominal and structural, and supports Python’s object model, introspection operators allowing dynamic type testing, dynamic attribute addition, as well as exception handling. We present a flow- and context-sensitive analysis with special domains to support containers (such as lists) and infer type equalities (allowing it to express parametric polymorphism). The analysis is soundly derived by abstract interpretation from a concrete semantics of Python developed by Fromherz et al. Our analysis is designed in a modular way as a set of domains abstracting a concrete collecting semantics. It has been implemented into the MOPSA analysis framework, and leverages external type annotations from the Typeshed project to support the vast standard library. We show that it scales to benchmarks a few thousand lines long, and preliminary results show it is able to analyze a small real-life command-line utility called PathPicker. Compared to previous work, it is sound, while it keeps similar efficiency and precision.
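
A miniature illustration of the underlying idea (flow-sensitive sets of types joined at control-flow merges; the tiny "analysis" below is an invented sketch, not the MOPSA-based analyzer):

def join(env1, env2):
    """Merge two abstract environments: a variable may have the types from either branch."""
    return {v: env1.get(v, set()) | env2.get(v, set())
            for v in set(env1) | set(env2)}

def assign(env, var, typ):
    out = dict(env)
    out[var] = {typ}          # strong update: assignment replaces the old types
    return out

env = {"x": {"int"}}
# if some_condition: y = "hi"  else: y = None
then_env = assign(env, "y", "str")
else_env = assign(env, "y", "NoneType")
env = join(then_env, else_env)
print(env["y"])               # {'str', 'NoneType'}

# A later `x + y` can now be flagged: neither str nor NoneType supports
# addition with an int, so the analysis reports a possible TypeError.
if not env["y"] <= {"int", "float"}:
    print("possible TypeError at x + y")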

Cite as

Raphaël Monat, Abdelraouf Ouadjaout, and Antoine Miné. Static Type Analysis by Abstract Interpretation of Python Programs. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 17:1-17:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{monat_et_al:LIPIcs.ECOOP.2020.17,
  author =	{Monat, Rapha\"{e}l and Ouadjaout, Abdelraouf and Min\'{e}, Antoine},
  title =	{{Static Type Analysis by Abstract Interpretation of Python Programs}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{17:1--17:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.17},
  URN =		{urn:nbn:de:0030-drops-131748},
  doi =		{10.4230/LIPIcs.ECOOP.2020.17},
  annote =	{Keywords: Formal Methods, Static Analysis, Abstract Interpretation, Type Analysis, Dynamic Programming Language, Python Semantics}
}
Document
Reference Mutability for DOT

Authors: Vlastimil Dort and Ondřej Lhoták


Abstract
Reference mutability is a type-based technique for controlling mutation that has been thoroughly studied in Java. We explore how reference mutability interacts with the features of Scala by adding it to the Dependent Object Types (DOT) calculus. Our extension shows how reference mutability can be encoded using existing Scala features such as path-dependent, intersection, and union types. We prove type soundness and the immutability guarantee provided by our calculus.

Cite as

Vlastimil Dort and Ondřej Lhoták. Reference Mutability for DOT. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 18:1-18:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{dort_et_al:LIPIcs.ECOOP.2020.18,
  author =	{Dort, Vlastimil and Lhot\'{a}k, Ond\v{r}ej},
  title =	{{Reference Mutability for DOT}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{18:1--18:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.18},
  URN =		{urn:nbn:de:0030-drops-131755},
  doi =		{10.4230/LIPIcs.ECOOP.2020.18},
  annote =	{Keywords: Reference Mutability, Read-only References, DOT Calculus}
}
Document
Tackling the Awkward Squad for Reactive Programming: The Actor-Reactor Model

Authors: Sam Van den Vonder, Thierry Renaux, Bjarno Oeyen, Joeri De Koster, and Wolfgang De Meuter


Abstract
Reactive programming is a programming paradigm whereby programs are internally represented by a dependency graph, which is used to automatically (re)compute parts of a program whenever its input changes. In practice reactive programming can only be used for some parts of an application: a reactive program is usually embedded in an application that is still written in ordinary imperative languages such as JavaScript or Scala. In this paper we investigate this embedding and we distill "the awkward squad for reactive programming" as three concerns that are essential for real-world software development, but that do not fit within reactive programming. They are related to long-lasting computations, side-effects, and the coordination between imperative and reactive code. To solve these issues we design a new programming model called the Actor-Reactor Model in which programs are split up into a number of actors and reactors. Actors and reactors enforce a strict separation of imperative and reactive code, and they can be composed via a number of composition operators that make use of data streams. We demonstrate the model via our own implementation in a language called Stella.
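
The division of labour can be sketched in Python (the Reactor class, its push interface, and the temperature example are invented for illustration; this is not the Stella language): a reactor is a pure dependency that is re-evaluated whenever a source value changes, while an actor performs imperative work and feeds values into it over a stream.

class Reactor:
    """A tiny dependency: recomputes its output whenever a source changes."""
    def __init__(self, sources, compute):
        self.values = dict(sources)
        self.compute = compute
    def push(self, name, value):
        self.values[name] = value
        return self.compute(self.values)

to_fahrenheit = Reactor({"celsius": 0},
                        lambda v: f"{v['celsius'] * 9 / 5 + 32:.1f} F")

def sensor_actor(readings, reactor):
    # Imperative side: reads a device (here, a list), may block or have
    # side effects, and emits each value into the reactive side.
    for reading in readings:
        print(reactor.push("celsius", reading))

sensor_actor([20, 25, 30], to_fahrenheit)      # 68.0 F, 77.0 F, 86.0 F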

Cite as

Sam Van den Vonder, Thierry Renaux, Bjarno Oeyen, Joeri De Koster, and Wolfgang De Meuter. Tackling the Awkward Squad for Reactive Programming: The Actor-Reactor Model. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 19:1-19:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{vandenvonder_et_al:LIPIcs.ECOOP.2020.19,
  author =	{Van den Vonder, Sam and Renaux, Thierry and Oeyen, Bjarno and De Koster, Joeri and De Meuter, Wolfgang},
  title =	{{Tackling the Awkward Squad for Reactive Programming: The Actor-Reactor Model}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{19:1--19:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.19},
  URN =		{urn:nbn:de:0030-drops-131768},
  doi =		{10.4230/LIPIcs.ECOOP.2020.19},
  annote =	{Keywords: functional reactive programming, reactive programming, reactive streams, actors, reactors}
}
Document
Pearl
A Framework for Resource Dependent EDSLs in a Dependently Typed Language (Pearl)

Authors: Jan de Muijnck-Hughes, Edwin Brady, and Wim Vanderbauwhede


Abstract
Idris' Effects library demonstrates how to embed resource dependent algebraic effect handlers into a dependently typed host language, providing run-time and compile-time based reasoning on type-level resources. Building upon this work, Resources is a framework for realising Embedded Domain Specific Languages (EDSLs) with type systems that contain domain specific substructural properties. Differing from Effects, Resources allows a language’s substructural properties to be encoded within type-level resources that are associated with language variables. Such an association allows for multiple effect instances to be reasoned about autonomically and without explicit type-level declaration. Type-level predicates are used as proof that the language’s substructural properties hold. Several exemplar EDSLs are presented that illustrate our framework’s operation and how dependent types provide correctness-by-construction guarantees that substructural properties of written programs hold.

Cite as

Jan de Muijnck-Hughes, Edwin Brady, and Wim Vanderbauwhede. A Framework for Resource Dependent EDSLs in a Dependently Typed Language (Pearl). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 20:1-20:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{demuijnckhughes_et_al:LIPIcs.ECOOP.2020.20,
  author =	{de Muijnck-Hughes, Jan and Brady, Edwin and Vanderbauwhede, Wim},
  title =	{{A Framework for Resource Dependent EDSLs in a Dependently Typed Language}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{20:1--20:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.20},
  URN =		{urn:nbn:de:0030-drops-131773},
  doi =		{10.4230/LIPIcs.ECOOP.2020.20},
  annote =	{Keywords: Dependent Types, Algebraic Effect Handlers, Domain-Specific Languages, Embedded Domain Specific Languages, Idris, Substructural Type-Systems}
}
Document
Data Consistency in Transactional Storage Systems: A Centralised Semantics

Authors: Shale Xiong, Andrea Cerone, Azalea Raad, and Philippa Gardner


Abstract
We introduce an interleaving operational semantics for describing the client-observable behaviour of atomic transactions on distributed key-value stores. Our semantics builds on abstract states comprising centralised, global key-value stores and partial client views. Using our abstract states, we present operational definitions of well-known consistency models in the literature, and prove them to be equivalent to their existing declarative definitions using abstract executions. We explore two applications of our operational framework: 1) verifying that the COPS replicated database and the Clock-SI partitioned database satisfy their consistency models using trace refinement, and 2) proving invariant properties of client programs.

Cite as

Shale Xiong, Andrea Cerone, Azalea Raad, and Philippa Gardner. Data Consistency in Transactional Storage Systems: A Centralised Semantics. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 21:1-21:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{xiong_et_al:LIPIcs.ECOOP.2020.21,
  author =	{Xiong, Shale and Cerone, Andrea and Raad, Azalea and Gardner, Philippa},
  title =	{{Data Consistency in Transactional Storage Systems: A Centralised Semantics}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{21:1--21:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.21},
  URN =		{urn:nbn:de:0030-drops-131782},
  doi =		{10.4230/LIPIcs.ECOOP.2020.21},
  annote =	{Keywords: Operational Semantics, Consistency Models, Transactions, Distributed Key-value Stores}
}
Document
Experience Report
Putting Randomized Compiler Testing into Production (Experience Report)

Authors: Alastair F. Donaldson, Hugues Evrard, and Paul Thomson


Abstract
We describe our experience over the last 18 months on a compiler testing technology transfer project: taking the GraphicsFuzz research project on randomized metamorphic testing of graphics shader compilers, and building the necessary tooling around it to provide a highly automated process for improving the Khronos Vulkan Conformance Test Suite (CTS) with test cases that expose fuzzer-found compiler bugs, or that plug gaps in test coverage. We present this tooling for test automation - gfauto - in detail, as well as our use of differential coverage and test case reduction as a method for automatically synthesizing tests that fill coverage gaps. We explain the value that GraphicsFuzz has provided in automatically testing the ecosystem of tools for transforming, optimizing and validating Vulkan shaders, and the challenges faced when testing a tool ecosystem rather than a single tool. We discuss practical issues associated with putting automated metamorphic testing into production, related to test case validity, bug de-duplication and floating-point precision, and provide illustrative examples of bugs found during our work.
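
The metamorphic check at the heart of this pipeline is simple to state. The sketch below is only schematic (compileAndRun, transform and reference are placeholders, not gfauto APIs): a semantics-preserving transformation of a test program should leave the observable result unchanged, so any mismatch points at a compiler bug.

  // Schematic metamorphic test: compare a reference program against an equivalent variant.
  def metamorphicCheck[P, O](compileAndRun: P => O, transform: P => P)(reference: P): Option[String] =
    val expected = compileAndRun(reference)
    val actual   = compileAndRun(transform(reference))
    if expected == actual then None
    else Some(s"possible compiler bug: $expected vs $actual")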

Cite as

Alastair F. Donaldson, Hugues Evrard, and Paul Thomson. Putting Randomized Compiler Testing into Production (Experience Report). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 22:1-22:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{donaldson_et_al:LIPIcs.ECOOP.2020.22,
  author =	{Donaldson, Alastair F. and Evrard, Hugues and Thomson, Paul},
  title =	{{Putting Randomized Compiler Testing into Production}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{22:1--22:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.22},
  URN =		{urn:nbn:de:0030-drops-131791},
  doi =		{10.4230/LIPIcs.ECOOP.2020.22},
  annote =	{Keywords: Compilers, metamorphic testing, 3D graphics, experience report}
}
Document
Lifting Sequential Effects to Control Operators

Authors: Colin S. Gordon


Abstract
Sequential effect systems are a class of effect system that exploits information about program order, rather than discarding it as traditional commutative effect systems do. This extra expressive power allows effect systems to reason about behavior over time, capturing properties such as atomicity, unstructured lock ownership, or even general safety properties. While we now understand the essential denotational (categorical) models fairly well, application of these ideas to real software is hampered by the variety of source level control flow constructs and control operators in real languages. We address this new problem by appeal to a classic idea: macro-expression of commonly-used programming constructs in terms of control operators. We give an effect system for a subset of Racket’s tagged delimited control operators, as a lifting of an effect system for a language without direct control operators. This gives the first account of sequential effects in the presence of general control operators. Using this system, we also re-derive the sequential effect system rules for control flow constructs previously shown sound directly, and derive sequential effect rules for new constructs not previously studied in the context of source-level sequential effect systems. This offers a way to directly extend source-level support for sequential effect systems to real programming languages.
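
For readers unfamiliar with the keyword "quantales": a sequential effect system needs, at minimum, an effect for pure code, a sequencing operator that respects program order, and a join over branches. A minimal Scala rendering of that interface, with a simple non-commutative instance, might look as follows (names are ours, and the sketch omits the control-operator machinery that is the paper's actual contribution):

  trait EffectQuantale[E]:
    def unit: E                 // effect of doing nothing
    def seq(a: E, b: E): E      // effect of running a, then b (order matters)
    def join(a: E, b: E): E     // effect of taking either branch

  // Effects as sets of event traces: sequencing concatenates traces, join unions the possibilities.
  given EffectQuantale[Set[List[String]]] with
    def unit: Set[List[String]] = Set(Nil)
    def seq(a: Set[List[String]], b: Set[List[String]]): Set[List[String]] =
      for x <- a; y <- b yield x ++ y
    def join(a: Set[List[String]], b: Set[List[String]]): Set[List[String]] = a union b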

Cite as

Colin S. Gordon. Lifting Sequential Effects to Control Operators. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 23:1-23:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gordon:LIPIcs.ECOOP.2020.23,
  author =	{Gordon, Colin S.},
  title =	{{Lifting Sequential Effects to Control Operators}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{23:1--23:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.23},
  URN =		{urn:nbn:de:0030-drops-131804},
  doi =		{10.4230/LIPIcs.ECOOP.2020.23},
  annote =	{Keywords: Type systems, effect systems, quantales, control operators, delimited continuations}
}
Document
Flow-Sensitive Type-Based Heap Cloning

Authors: Mohamad Barbar, Yulei Sui, and Shiping Chen


Abstract
By respecting program control-flow, flow-sensitive pointer analysis promises more precise results than its flow-insensitive counterpart. However, existing heap abstractions for C and C++ flow-sensitive pointer analyses model the heap by creating a single abstract heap object for each memory allocation. Two runtime heap objects which originate from the same allocation site are imprecisely modelled using one abstract object, which makes them share the same imprecise points-to sets and thus reduces the benefit of analysing heap objects flow-sensitively. On the other hand, equipping flow-sensitive analysis with context-sensitivity, whereby an abstract heap object would be created (cloned) per calling context, can yield a more precise heap model, but at the cost of uncontrollable analysis overhead when analysing larger programs. This paper presents TypeClone, a new type-based heap model for flow-sensitive analysis. Our key insight is to differentiate concrete heap objects lazily using type information at use sites within the program control-flow (e.g., when accessed via pointer dereferencing) for programs which conform to the strict aliasing rules set out by the C and C++ standards. The novelty of TypeClone lies in its lazy heap cloning: an untyped abstract heap object created at an allocation site is killed and replaced with a new object (i.e. a clone), uniquely identified by the type information at its use site, for flow-sensitive points-to propagation. Thus, heap cloning can be performed within a flow-sensitive analysis without the need for context-sensitivity. Moreover, TypeClone supports new kinds of strong updates for flow-sensitive analysis where heap objects are filtered out from imprecise points-to relations at object use sites according to the strict aliasing rules. Our method is neither strictly superior nor inferior to context-sensitive heap cloning, but rather, represents a new dimension that achieves a sweet spot between precision and efficiency. We evaluate our analysis by comparing TypeClone with state-of-the-art sparse flow-sensitive points-to analysis using the 12 largest programs in GNU Coreutils. Our experimental results also confirm that TypeClone is more precise than flow-sensitive pointer analysis and is able to, on average, answer over 15% more alias queries with a no-alias result.
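
Very roughly, the change in heap abstraction is a change of key: from one abstract object per allocation site to one per allocation site and per type observed at a use site. The fragment below only conveys that keying (AllocSite, TypedObj and cloneAt are our names, not the analysis's data structures):

  final case class AllocSite(id: Int)

  // An abstract heap object starts untyped at its allocation site and is replaced ("cloned")
  // by a typed object the first time it is used at a given type.
  final case class TypedObj(site: AllocSite, useType: Option[String])

  def cloneAt(o: TypedObj, useType: String): TypedObj = o.copy(useType = Some(useType))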

Cite as

Mohamad Barbar, Yulei Sui, and Shiping Chen. Flow-Sensitive Type-Based Heap Cloning. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 24:1-24:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{barbar_et_al:LIPIcs.ECOOP.2020.24,
  author =	{Barbar, Mohamad and Sui, Yulei and Chen, Shiping},
  title =	{{Flow-Sensitive Type-Based Heap Cloning}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{24:1--24:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.24},
  URN =		{urn:nbn:de:0030-drops-131819},
  doi =		{10.4230/LIPIcs.ECOOP.2020.24},
  annote =	{Keywords: Heap cloning, type-based analysis, flow-sensitivity}
}
Document
Scala with Explicit Nulls

Authors: Abel Nieto, Yaoyu Zhao, Ondřej Lhoták, Angela Chang, and Justin Pu


Abstract
The Scala programming language makes all reference types implicitly nullable. This is a problem, because null references do not support most operations that do make sense on regular objects, leading to runtime errors. In this paper, we present a modification to the Scala type system that makes nullability explicit in the types. Specifically, we make reference types non-nullable by default, while still allowing for nullable types via union types. We have implemented this design for explicit nulls as a fork of the Dotty (Scala 3) compiler. We evaluate our scheme by migrating a number of Scala libraries to use explicit nulls. Finally, we give a denotational semantics of type nullification, the interoperability layer between Java and Scala with explicit nulls. We show a soundness theorem stating that, for variants of System F_ω that model Java and Scala, nullification preserves values of types.
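
For readers unfamiliar with the feature, the following Scala 3 fragment shows the surface behaviour under the explicit-nulls compiler option (a minimal sketch; the paper's treatment of Java interoperability via nullification is not shown):

  // Compile with explicit nulls enabled (e.g. scalac -Yexplicit-nulls).
  val s: String = "hello"
  // val bad: String = null          // rejected: Null is no longer a subtype of String
  val maybe: String | Null = null    // nullability must be stated explicitly as a union type

  def length(x: String | Null): Int =
    if x == null then 0
    else x.length                    // flow typing narrows x to String in this branch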

Cite as

Abel Nieto, Yaoyu Zhao, Ondřej Lhoták, Angela Chang, and Justin Pu. Scala with Explicit Nulls. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 25:1-25:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{nieto_et_al:LIPIcs.ECOOP.2020.25,
  author =	{Nieto, Abel and Zhao, Yaoyu and Lhot\'{a}k, Ond\v{r}ej and Chang, Angela and Pu, Justin},
  title =	{{Scala with Explicit Nulls}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{25:1--25:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.25},
  URN =		{urn:nbn:de:0030-drops-131821},
  doi =		{10.4230/LIPIcs.ECOOP.2020.25},
  annote =	{Keywords: Scala, Java, nullability, language interoperability, type systems}
}
Document
A Type-Directed Operational Semantics For a Calculus with a Merge Operator

Authors: Xuejing Huang and Bruno C. d. S. Oliveira


Abstract
Calculi with disjoint intersection types and a merge operator provide general mechanisms that can subsume various other features. Such calculi can also encode highly dynamic forms of object composition, which capture common programming patterns in dynamically typed languages (such as JavaScript) in a fully statically typed manner. Unfortunately, unlike many other foundational calculi (such as System F, System F_{< :} or Featherweight Java), recent calculi with the merge operator lack a (direct) operational semantics with standard and expected properties such as determinism and subject-reduction. Furthermore the metatheory for such calculi can only account for terminating programs, which is a significant restriction in practice. This paper proposes a type-directed operational semantics (TDOS) for λ_i^{:}: a calculus with intersection types and a merge operator. The calculus is inspired by two closely related calculi by Dunfield (2014) and Oliveira et al. (2016). Although Dunfield proposes a direct small-step semantics for her calculus, her semantics lacks both determinism and subject-reduction. Using our TDOS we obtain a direct semantics for λ_i^{:} that has both properties. To fully obtain determinism, the λ_i^{:} calculus employs a disjointness restriction proposed in Oliveira et al.’s λ_i calculus. As an added benefit the TDOS approach deals with recursion in a straightforward way, unlike λ_i and subsequent calculi where recursion is problematic. To further relate λ_i^{:} to the calculi by Dunfield and Oliveira et al. we show two results. Firstly, the semantics of λ_i^{:} is sound with respect to Dunfield’s small-step semantics. Secondly, we show that the type system of λ_i^{:} is complete with respect to the λ_i type system. All results have been fully formalized in the Coq theorem prover.
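
For orientation, the disjointness restriction can be read off the typing rule for merges in λ_i-style calculi. In our notation (not verbatim from the paper), writing A * B for "A and B are disjoint", a merge is only accepted at the intersection of its components' types:

  \[
    \frac{\Gamma \vdash e_1 : A \qquad \Gamma \vdash e_2 : B \qquad A * B}
         {\Gamma \vdash e_1 \,,\!,\, e_2 : A \,\&\, B}
  \]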

Cite as

Xuejing Huang and Bruno C. d. S. Oliveira. A Type-Directed Operational Semantics For a Calculus with a Merge Operator. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 26:1-26:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.ECOOP.2020.26,
  author =	{Huang, Xuejing and Oliveira, Bruno C. d. S.},
  title =	{{A Type-Directed Operational Semantics For a Calculus with a Merge Operator}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{26:1--26:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.26},
  URN =		{urn:nbn:de:0030-drops-131832},
  doi =		{10.4230/LIPIcs.ECOOP.2020.26},
  annote =	{Keywords: operational semantics, type systems, intersection types}
}
Document
Row and Bounded Polymorphism via Disjoint Polymorphism

Authors: Ningning Xie, Bruno C. d. S. Oliveira, Xuan Bi, and Tom Schrijvers


Abstract
Polymorphism and subtyping are important features in mainstream OO languages. The most common way to integrate the two is via 𝖥_{< :} style bounded quantification. A closely related mechanism is row polymorphism, which provides an alternative to subtyping, while still enabling many of the same applications. Yet another approach is to have type systems with intersection types and polymorphism. Recent additions to this design space are calculi with disjoint intersection types and disjoint polymorphism. With all these alternatives it is natural to wonder how they are related. This paper provides an answer to this question. We show that disjoint polymorphism can recover forms of both row polymorphism and bounded polymorphism, while retaining key desirable properties, such as type-safety and decidability. Furthermore, we identify the extra power of disjoint polymorphism which enables additional features that cannot be easily encoded in calculi with row polymorphism or bounded quantification alone. Ultimately we expect that our work is useful to inform language designers about the expressive power of those common features, and to simplify implementations and metatheory of feature-rich languages with polymorphism and subtyping.

Cite as

Ningning Xie, Bruno C. d. S. Oliveira, Xuan Bi, and Tom Schrijvers. Row and Bounded Polymorphism via Disjoint Polymorphism. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 27:1-27:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{xie_et_al:LIPIcs.ECOOP.2020.27,
  author =	{Xie, Ningning and Oliveira, Bruno C. d. S. and Bi, Xuan and Schrijvers, Tom},
  title =	{{Row and Bounded Polymorphism via Disjoint Polymorphism}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{27:1--27:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.27},
  URN =		{urn:nbn:de:0030-drops-131846},
  doi =		{10.4230/LIPIcs.ECOOP.2020.27},
  annote =	{Keywords: Intersection types, bounded polymorphism, row polymorphism}
}
Document
A Trusted Infrastructure for Symbolic Analysis of Event-Driven Web Applications

Authors: Gabriela Sampaio, José Fragoso Santos, Petar Maksimović, and Philippa Gardner


Abstract
We introduce a trusted infrastructure for the symbolic analysis of modern event-driven Web applications. This infrastructure consists of reference implementations of the DOM Core Level 1, DOM UI Events, JavaScript Promises and the JavaScript async/await APIs, all underpinned by a simple Core Event Semantics which is sufficiently expressive to describe the event models underlying these APIs. Our reference implementations are trustworthy in that three follow the appropriate standards line-by-line and all are thoroughly tested against the official test-suites, passing all the applicable tests. Using the Core Event Semantics and the reference implementations, we develop JaVerT.Click, a symbolic execution tool for JavaScript that, for the first time, supports reasoning about JavaScript programs that use multiple event-related APIs. We demonstrate the viability of JaVerT.Click by proving both the presence and absence of bugs in real-world JavaScript code.

Cite as

Gabriela Sampaio, José Fragoso Santos, Petar Maksimović, and Philippa Gardner. A Trusted Infrastructure for Symbolic Analysis of Event-Driven Web Applications. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 28:1-28:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{sampaio_et_al:LIPIcs.ECOOP.2020.28,
  author =	{Sampaio, Gabriela and Fragoso Santos, Jos\'{e} and Maksimovi\'{c}, Petar and Gardner, Philippa},
  title =	{{A Trusted Infrastructure for Symbolic Analysis of Event-Driven Web Applications}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{28:1--28:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.28},
  URN =		{urn:nbn:de:0030-drops-131853},
  doi =		{10.4230/LIPIcs.ECOOP.2020.28},
  annote =	{Keywords: Events, DOM, JavaScript, promises, symbolic execution, bug-finding}
}
Document
The Duality of Subtyping

Authors: Bruno C. d. S. Oliveira, Cui Shaobo, and Baber Rehman


Abstract
Subtyping is a concept frequently encountered in many programming languages and calculi. Various forms of subtyping exist for different type system features, including intersection types, union types or bounded quantification. Normally these features are designed independently of each other, without exploiting obvious similarities (or dualities) between features. This paper proposes a novel methodology for designing subtyping relations that exploits duality between features. At the core of our methodology is a generalization of subtyping relations, which we call Duotyping. Duotyping is parameterized by the mode of the relation. One of these modes is the usual subtyping, while another mode is supertyping (the dual of subtyping). Using the mode it is possible to generalize the usual rules of subtyping to account not only for the intended behaviour of one particular language construct, but also of its dual. Duotyping brings multiple benefits, including: shorter specifications and implementations, dual features that come essentially for free, as well as new proof techniques for various properties of subtyping. To evaluate a design based on Duotyping against traditional designs, we formalized various calculi with common OOP features (including union types, intersection types and bounded quantification) in Coq in both styles. Our results show that the metatheory when using Duotyping does not come at a significant cost: the metatheory with Duotyping has similar complexity and size compared to the metatheory for traditional designs. However, we discover new features as duals to well-known features. Furthermore, we also show that Duotyping can significantly simplify transitivity proofs for many of the calculi studied by us.
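
To give a flavour of the mode parameter (notation ours, not the paper's exact rules), write A ≤_m B for the generalized judgment, where mode ⊑ reads "A is a subtype of B" and mode ⊒ reads "A is a supertype of B". A single mode-indexed axiom then covers both extreme types, with ⊤ as the largest type for subtyping and ⊥ playing the dual role for supertyping:

  \[
    \frac{}{\;A \le_m \lceil m \rceil\;}
    \qquad\text{where } \lceil \sqsubseteq \rceil = \top \text{ and } \lceil \sqsupseteq \rceil = \bot,
    \qquad\text{so in particular } A \le_{\sqsubseteq} \top \text{ and } A \le_{\sqsupseteq} \bot.
  \]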

Cite as

Bruno C. d. S. Oliveira, Cui Shaobo, and Baber Rehman. The Duality of Subtyping. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 29:1-29:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{oliveira_et_al:LIPIcs.ECOOP.2020.29,
  author =	{Oliveira, Bruno C. d. S. and Shaobo, Cui and Rehman, Baber},
  title =	{{The Duality of Subtyping}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{29:1--29:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.29},
  URN =		{urn:nbn:de:0030-drops-131864},
  doi =		{10.4230/LIPIcs.ECOOP.2020.29},
  annote =	{Keywords: DuoTyping, OOP, Duality, Subtyping, Supertyping}
}
Document
Safe, Flexible Aliasing with Deferred Borrows

Authors: Chris Fallin


Abstract
In recent years, programming-language support for static memory safety has developed significantly. In particular, borrowing and ownership systems, such as the one pioneered by the Rust language, require the programmer to abide by certain aliasing restrictions but in return guarantee that no unsafe aliasing can ever occur. This allows parallel code to be written, or existing code to be parallelized, safely and easily, and the aliasing restrictions also statically prevent a whole class of bugs such as iterator invalidation. Borrowing is easy to reason about because it matches the intuitive ownership-passing conventions often used in systems languages. Unfortunately, a borrowing-based system can sometimes be too restrictive. Because borrows enforce aliasing rules for their entire lifetimes, they cannot be used to implement some common patterns that pointers would allow. Programs often use pseudo-pointers, such as indices into an array of nodes or objects, instead, which can be error-prone: the program is still memory-safe by construction, but it is not logically memory-safe, because an object access may reach the wrong object. In this work, we propose deferred borrows, which provide the type-safety benefits of borrows without the constraints on usage patterns that they otherwise impose. Deferred borrows work by encapsulating enough state at creation time to perform the actual borrow later, while statically guaranteeing that the eventual borrow will reach the same object it would have otherwise. The static guarantee is made with a path-dependent type tying the deferred borrow to the container (struct, vector, etc.) of the borrowed object. This combines the type-safety of borrowing with the flexibility of traditional pointers, while retaining logical memory-safety.
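
The "pseudo-pointer" hazard that motivates deferred borrows is easy to reproduce in any language. The fragment below is plain illustrative Scala, not Rust and not the paper's mechanism: an index stands in for a pointer, every access stays memory-safe in the language's sense, yet after the container changes the index no longer reaches the object it was meant to name.

  final case class NodeRef(idx: Int)        // an index used as a stand-in for a pointer

  @main def pseudoPointer(): Unit =
    var nodes = Vector("a", "b", "c")
    val r     = NodeRef(2)                  // intended to refer to "c"
    nodes = nodes.patch(1, Nil, 1)          // remove "b"; later elements shift left
    // nodes(r.idx) now fails (out of bounds) or, after further edits, silently reaches
    // a different object: memory-safe by construction, but not logically memory-safe.
    println(nodes)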

Cite as

Chris Fallin. Safe, Flexible Aliasing with Deferred Borrows. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 30:1-30:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fallin:LIPIcs.ECOOP.2020.30,
  author =	{Fallin, Chris},
  title =	{{Safe, Flexible Aliasing with Deferred Borrows}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{30:1--30:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.30},
  URN =		{urn:nbn:de:0030-drops-131878},
  doi =		{10.4230/LIPIcs.ECOOP.2020.30},
  annote =	{Keywords: Rust, type systems, ownership types, borrowing}
}
Document
SCICO Journal-first
Reshape Your Layouts, Not Your Programs: A Safe Language Extension for Better Cache Locality (SCICO Journal-first)

Authors: Alexandros Tasos, Juliana Franco, Sophia Drossopoulou, Tobias Wrigstad, and Susan Eisenbach


Abstract
The vast gap between CPU and RAM speed means that on modern architectures, developers need to carefully consider data placement in memory to exploit spatial and temporal cache locality and use CPU caches effectively. To that extent, developers have devised various strategies regarding data placement; for objects that should be close in memory, a contiguous pool of objects is allocated and then new instances are constructed inside it; an array of objects is clustered into multiple arrays, each holding the values of a specific field of the objects. Such data placements, however, have to be performed manually, hence readability, maintainability, memory safety, and key OO concepts such as encapsulation and object identity need to be sacrificed and the business logic needs to be modified accordingly. We propose a language extension, SHAPES, which aims to offer developers high-level fine-grained control over data placement, whilst retaining memory safety and the look-and-feel of OO. SHAPES extends an OO language with the concepts of pools and layouts: Developers declare pools that contain objects of a specific type and specify the pool’s layout. A layout specifies how objects in a pool are laid out in memory. That is, it dictates how the values of the fields of the pool’s objects are grouped together into clusters. Objects stored in pools behave identically to ordinary, standalone objects; the type system allows the code to be oblivious to the layout being used. This means that the business logic is completely decoupled from any placement concerns and the developer need not deviate from the spirit of OO to better utilise the cache. In this paper, we present the features of SHAPES, as well as the design rationale behind each feature. We then showcase the merit of SHAPES through a sequence of case studies; we claim that, compared to the manual pooling and clustering of objects, we can observe improvement in readability and maintainability, and comparable (i.e., on par or better) performance. We also present SHAPES^h, an OO calculus which models the SHAPES ideas, we formalise the type system, and prove soundness. The SHAPES^h type system uses ideas from Ownership Types [Clarke et al., 2013] and Java Generics [Gosling et al., 2014]: In SHAPES^h, pools are part of the types; SHAPES^h class and type definitions are enriched with pool parameters. Moreover, class pool parameters are enriched with bounds, which is what allows the business logic of SHAPES to be oblivious to the layout being used. SHAPES^h types also enforce pool uniformity and homogeneity. A pool is uniform if it contains objects of the same class only; a pool is homogeneous if the corresponding fields of all its objects point to objects in the same pool. These properties allow for more efficient implementation. For performance considerations, we also designed SHAPES^l, an untyped, unsafe low-level language with no explicit support for objects or pools. We argue that it is possible to translate SHAPES^l into existing low-level intermediate representations, such as LLVM [Lattner and Adve, 2004], present the translation of SHAPES^h into SHAPES^l, and show its soundness. Thus, we expect SHAPES to offer developers more fine-grained control over data placement, without sacrificing memory safety or the OO look-and-feel.
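
The manual clustering that SHAPES automates can be pictured with a two-field example. The Scala below is illustrative only; in SHAPES the code keeps the ordinary object view and the array-of-structures versus structure-of-arrays choice moves into pool and layout declarations instead.

  // Array-of-structures: objects stored whole; a loop over x also drags y through the cache.
  final case class Particle(var x: Double, var y: Double)

  def advanceAoS(ps: Array[Particle], dx: Double): Unit =
    for p <- ps do p.x += dx

  // Structure-of-arrays: each field clustered into its own array; the loop touches only the x cluster.
  final case class Particles(xs: Array[Double], ys: Array[Double])

  def advanceSoA(ps: Particles, dx: Double): Unit =
    for i <- ps.xs.indices do ps.xs(i) += dx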

Cite as

Alexandros Tasos, Juliana Franco, Sophia Drossopoulou, Tobias Wrigstad, and Susan Eisenbach. Reshape Your Layouts, Not Your Programs: A Safe Language Extension for Better Cache Locality (SCICO Journal-first). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 31:1-31:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{tasos_et_al:LIPIcs.ECOOP.2020.31,
  author =	{Tasos, Alexandros and Franco, Juliana and Drossopoulou, Sophia and Wrigstad, Tobias and Eisenbach, Susan},
  title =	{{Reshape Your Layouts, Not Your Programs: A Safe Language Extension for Better Cache Locality}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{31:1--31:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.31},
  URN =		{urn:nbn:de:0030-drops-131887},
  doi =		{10.4230/LIPIcs.ECOOP.2020.31},
  annote =	{Keywords: Cache utilisation, Data representation, Memory safety}
}
Document
SCICO Journal-first
A Big Step from Finite to Infinite Computations (SCICO Journal-first)

Authors: Davide Ancona, Francesco Dagnino, Jurriaan Rot, and Elena Zucca


Abstract
"The known is finite, the unknown infinite" (Thomas Henry Huxley). The behaviour of programs can be described by the final results of computations, and/or their interactions with the context, also seen as observations. For instance, a function call can terminate and return a value, as well as have output effects during its execution. Here, we deal with semantic definitions covering both results and observations. Often, such definitions are provided for finite computations only. Notably, in big-step style, infinite computations are simply not modelled, hence diverging and stuck terms are not distinguished. This becomes even more unsatisfactory if we have observations, since a non-terminating program may have significant infinite behaviour. Recently, examples of big-step semantics modeling divergence have been provided [Davide Ancona et al., 2017; Davide Ancona et al., 2018] by means of generalized inference systems [Davide Ancona et al., 2017; Francesco Dagnino, 2019], which allow corules to control coinduction. Indeed, modeling infinite behaviour by a purely coinductive interpretation of big-step rules would lead to spurious results [Xavier Leroy and Hervé Grall, 2009] and undetermined observation, whereas, by adding appropriate corules, we can correctly get divergence (∞) as the only result, and a uniquely determined observation. This approach has been adopted in [Davide Ancona et al., 2017; Davide Ancona et al., 2018] to design big-step definitions including infinite behaviour for lambda-calculus and a simple imperative Java-like language. However, in such works the designer of the semantics is in charge of finding the appropriate corules, and this is a non-trivial task. In this paper, we show a general construction that extends a given big-step semantics, modeling finite computations, to include infinite behaviour as well, notably by generating appropriate corules. The construction consists of two steps: 1) Starting from a monoid O modeling finite observations (e.g., finite traces), we construct an ω-monoid ⟨O, O_∞⟩ also modeling infinite observations (e.g., infinite traces). The latter structure is a variation of the notion of ω-semigroup [Dominique Perrin and Jean-Eric Pin, 2004], including a mixed product composing a finite with a possibly infinite observation, and an infinite product mapping an infinite sequence of finite observations into a single one (possibly infinite). 2) Starting from an inference system defining a big-step judgment c ⇒ ⟨r, o⟩, with c denoting a configuration, r ∈ R a result, and o ∈ O a finite observation, we construct an inference system with corules defining an extended big-step judgment c ⇒ ⟨r_∞, o_∞⟩ with r_∞ ∈ R_∞ = R+{∞}, and o_∞ ∈ O_∞ a "possibly infinite" observation. The construction generates additional rules for propagating divergence, and corules for introducing divergence in a controlled way. The exact corules added in the construction depend on the type of observations that one starts with. To show the effectiveness of our approach, we provide several instances of the framework, with different kinds of (finite) observations. Finally, we prove a correctness result for the construction.
To this end, we assume the original big-step semantics to be equivalent to (finite sequences of steps in) a reference small-step semantics, and we show that, by applying the construction, we obtain an extended big-step semantics which is still equivalent to the small-step semantics, where we consider possibly infinite sequences of steps. As hypotheses, rather than just equivalence in the finite case (which would not be enough), we assume a set of equivalence conditions between individual big-step rules and the small-step relation. This proof of equivalence holds for deterministic semantics; issues arising in the non-deterministic case and a possible solution are sketched in the conclusion of the full paper.
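
Concretely, step 1 of the construction equips the monoid of finite observations with two further operations; in our notation the signatures are:

  \[
    \begin{array}{ll}
      \cdot : O \times O \to O & \text{finite composition (the monoid operation)}\\
      * : O \times O_\infty \to O_\infty & \text{mixed product: a finite observation followed by a possibly infinite one}\\
      \prod : O^{\omega} \to O_\infty & \text{infinite product of a sequence of finite observations}
    \end{array}
  \]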

Cite as

Davide Ancona, Francesco Dagnino, Jurriaan Rot, and Elena Zucca. A Big Step from Finite to Infinite Computations (SCICO Journal-first). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 32:1-32:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ancona_et_al:LIPIcs.ECOOP.2020.32,
  author =	{Ancona, Davide and Dagnino, Francesco and Rot, Jurriaan and Zucca, Elena},
  title =	{{A Big Step from Finite to Infinite Computations}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{32:1--32:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.32},
  URN =		{urn:nbn:de:0030-drops-131895},
  doi =		{10.4230/LIPIcs.ECOOP.2020.32},
  annote =	{Keywords: Operational semantics, coinduction, infinite behaviour}
}
Document
SCICO Journal-first
Abstracting Gradual References (SCICO Journal-first)

Authors: Matías Toro and Éric Tanter


Abstract
Gradual typing is an effective approach to integrate static and dynamic typing, which supports the smooth transition between both extremes via the (programmer-controlled) precision of type annotations [Jeremy Siek and Walid Taha, 2006; Siek et al., 2015]. Imprecision is normally introduced via the unknown type ?, e.g. function type Int → Bool is more precise than ? → ?, and both more precise than ?. Gradual typing relates types of different precision using consistent type relations, such as type consistency (resp. consistent subtyping), the gradual counterpart of type equality (resp. subtyping). For instance, ? → Int is consistent with Bool → ?. This approach has been applied in a number of settings, such as objects [Jeremy Siek and Walid Taha, 2007], subtyping [Jeremy Siek and Walid Taha, 2007; Ronald Garcia et al., 2016], effects [Bañados Schwerter et al., 2014; Bañados Schwerter et al., 2016], ownership [Ilya Sergey and Dave Clarke, 2012], typestates [Roger Wolff et al., 2011; Ronald Garcia et al., 2014], information-flow typing [Tim Disney and Cormac Flanagan, 2011; Luminous Fennell and Peter Thiemann, 2013; Matías Toro et al., 2018], session types [Igarashi et al., 2017], refinements [Nico Lehmann and Éric Tanter, 2017], set-theoretic types [Castagna and Lanvin, 2017], Hoare logic [Johannes Bader et al., 2018], parametric polymorphism [Amal Ahmed et al., 2011; Ahmed et al., 2017; Ina and Igarashi, 2011; Igarashi et al., 2017; Ningning Xie et al., 2018; Matías Toro et al., 2019], and references [Jeremy Siek and Walid Taha, 2006; Herman et al., 2010; Siek et al., 2015]. In particular, gradual typing for mutable references has seen the elaboration of various possible semantics: invariant references [Jeremy Siek and Walid Taha, 2006], guarded references [Herman et al., 2010], monotonic references [Siek et al., 2015], and permissive references [Siek et al., 2015]. Invariant references are a form of references where reference types are invariant with respect to type consistency. Guarded references admit variance thanks to systematic runtime checks on reference reads and writes; the runtime type of an allocated cell never changes during execution. Guarded references have been formulated in a space-efficient coercion calculus, which ensures that gradual programs do not accumulate unbounded pending checks during execution. Hereafter, we refer to this language as HCC. Monotonic references favor efficiency over flexibility by only allowing reference cells to vary towards more precise types. This allows reference operations in statically-typed regions to safely proceed without any runtime checks. Permissive references are the most flexible approach, in which reference cells can be initialized and updated to any value of any type at any time. These four developments reflect different design decisions with respect to gradual references: is the reference type constructor variant under consistency? Can the programmer specify a precise bound on the static type of a reference, and hence on the corresponding heap cell type? Can the heap cell type evolve its precision at runtime, and if yes, how? There is obviously no absolute answer to these questions, as they reflect different tradeoffs such as in efficiency and precision. This work explores the semantics that results from the application of a systematic methodology to gradualize static type systems.
Currently we can find in the literature two methodologies to gradualize statically-typed languages: Abstracting Gradual Typing (AGT) [Ronald Garcia et al., 2016], and the Gradualizer [Matteo Cimini and Jeremy Siek, 2016]. In this work, we consider the AGT methodology as it naturally scales to auxiliary structures such as a mutable heap. The AGT methodology helps to systematically construct gradually-typed languages by using abstract interpretation [Cousot and Cousot, 1977] at the type level. In brief, AGT interprets gradual types as an abstraction of sets of possible static types, formally captured through a Galois connection. The static semantics of a gradual language are then derived by lifting the semantics of a statically-typed language through this connection, and the dynamic semantics follow by Curry-Howard from proof normalization of the type safety argument. The AGT methodology has been shown to be effective in many contexts: records and subtyping [Ronald Garcia et al., 2016], type-and-effects [Bañados Schwerter et al., 2014; Bañados Schwerter et al., 2016], refinement types [Nico Lehmann and Éric Tanter, 2017; Niki Vazou et al., 2018], set-theoretic and union types [Castagna and Lanvin, 2017; Matías Toro and Éric Tanter, 2017], information-flow typing [Matías Toro et al., 2018], and parametric polymorphism [Matías Toro et al., 2019]. However, this methodology has never been applied to mutable references in isolation. Although Toro et al. [Matías Toro et al., 2018] apply AGT to a language with references, they only gradualize security levels of types (e.g. Ref Int_?), not whole types (e.g. Ref ? is not supported). In this article we answer the following open questions: Which semantics for gradually-typed references follows by systematically applying AGT? Does AGT justify one of the existing approaches, or does it suggest yet another design? Can we recover other semantics for gradual references, and if yes, how? This article first reviews the different existing gradual approaches to mutable references through examples. It then presents the semantics for gradual references that is obtained by applying AGT, and how to accommodate the other semantics. More specifically, this work makes the following contributions: - We present λ_REF~, a gradual language with support for mutable references. We derive λ_REF~ by applying the AGT methodology to a fully-static simple language with mutable references called λ_REF. This is the first application of AGT that focuses on gradually-typed mutable references. - We prove that λ_REF~ satisfies the gradual guarantee of Siek et al. [Siek et al., 2015]. We also present the first formal statement and proof of the conservative extension of the dynamic semantics of the static language [Siek et al., 2015], for a gradual language derived using AGT. - We prove that the derived language, λ_REF~, corresponds to the semantics of guarded references from HCC. Formally, given a λ_REF~ term and its compilation to HCC^+ (an adapted version of HCC extended with conditionals and binary operations) we prove that both terms are bisimilar, and that consequently they either both terminate, both fail, or both diverge. - We observe that λ_REF~ and HCC^+ differ in the order of combination of runtime checks. As a result, HCC is space efficient whereas λ_REF~ is not: we can write programs in λ_REF~ that may accumulate an unbounded number of checks. We formalize the changes needed in the dynamic semantics of λ_REF~ to achieve space efficiency.
This technique to recover space efficiency is in fact independent from mutable references, and is therefore applicable to other gradual languages derived with AGT. - We formally describe how to support other gradual reference semantics in λ_REF~ by presenting λ_REF~^𝗉𝗆, an extension that additionally supports both permissive and monotonic references. Finally, we prove for the first time that monotonic references satisfy the dynamic gradual guarantee, a non-trivial result that requires careful consideration of updates to the store. Additionally, we implemented λ_REF~ as an interactive prototype that displays both typing derivations and reduction traces. All the examples mentioned in this paper are readily available in the online prototype available at https://pleiad.cl/grefs. As a result, this paper sheds further light on the design space of gradual languages with mutable references and contributes to deepening the understanding of the AGT methodology.
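
As a reminder of how imprecision reaches reference types (standard gradual-typing notation, not formulas taken from the paper): consistency is lifted structurally, so Ref ? is consistent with Ref Int, just as ? → Int is consistent with Bool → ? above; the design question the paper answers is which runtime checks such uses must then incur.

  \[
    \frac{A \sim B}{\mathsf{Ref}\ A \sim \mathsf{Ref}\ B}
    \qquad\qquad
    \mathsf{Ref}\ ? \;\sim\; \mathsf{Ref}\ \mathsf{Int}
  \]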

Cite as

Matías Toro and Éric Tanter. Abstracting Gradual References (SCICO Journal-first). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 33:1-33:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{toro_et_al:LIPIcs.ECOOP.2020.33,
  author =	{Toro, Mat{\'\i}as and Tanter, \'{E}ric},
  title =	{{Abstracting Gradual References}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{33:1--33:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.33},
  URN =		{urn:nbn:de:0030-drops-131900},
  doi =		{10.4230/LIPIcs.ECOOP.2020.33},
  annote =	{Keywords: Gradual Typing, Mutable References, Abstract interpretation}
}
