16 Search Results for "Cook, James"


Document
Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure

Authors: Gal Amram, Avi Hayoun, Lior Mizrahi, and Gera Weiss

Published in: LIPIcs, Volume 246, 36th International Symposium on Distributed Computing (DISC 2022)


Abstract
We analyze correctness of implementations of the snapshot data structure in terms of linearizability. We show that such implementations can be verified in polynomial time. Additionally, we identify a set of representative executions for testing and show that the correctness of each of these executions can be validated in linear time. These results present a significant speedup considering that verifying linearizability of implementations of concurrent data structures, in general, is EXPSPACE-complete in the number of program-states, and testing linearizability is NP-complete in the length of the tested execution. The crux of our approach is identifying a class of executions, which we call simple, such that a snapshot implementation is linearizable if and only if all of its simple executions are linearizable. We then divide all possible non-linearizable simple executions into three categories and construct a small automaton that recognizes each category. We describe two implementations (one for verification and one for testing) of an automata-based approach that we develop based on this result and an evaluation that demonstrates significant improvements over existing tools. For verification, we show that restricting a state-of-the-art tool to analyzing only simple executions saves resources and allows the analysis of more complex cases. Specifically, restricting attention to simple executions finds bugs in 27 instances, whereas, without this restriction, we were only able to find 14 of the 30 bugs in the instances we examined. We also show that our technique accelerates testing performance significantly. Specifically, our implementation solves the complete set of 900 problems we generated, whereas the state-of-the-art linearizability testing tool solves only 554 problems.
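For readers unfamiliar with the object being verified: a snapshot object offers per-component update operations and a scan operation that returns a view of all components as if read atomically. The following minimal Python sketch fixes that interface using a single lock, which makes it trivially linearizable; the lock-free algorithms the paper analyzes, and its automata-based checker, are not reproduced here.

import threading

class ToySnapshot:
    """A trivially linearizable snapshot object: one lock makes every
    operation atomic. Real snapshot algorithms avoid locks; this sketch
    only illustrates the update/scan interface whose executions the
    paper checks for linearizability."""

    def __init__(self, num_components):
        self._values = [0] * num_components
        self._lock = threading.Lock()

    def update(self, i, value):
        # Write component i; the operation takes effect atomically at some
        # point between invocation and response (here: while the lock is held).
        with self._lock:
            self._values[i] = value

    def scan(self):
        # Return a consistent view of all components.
        with self._lock:
            return tuple(self._values)

if __name__ == "__main__":
    snap = ToySnapshot(3)
    snap.update(0, 7)
    snap.update(2, 5)
    print(snap.scan())  # (7, 0, 5)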

Cite as

Gal Amram, Avi Hayoun, Lior Mizrahi, and Gera Weiss. Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure. In 36th International Symposium on Distributed Computing (DISC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 246, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{amram_et_al:LIPIcs.DISC.2022.5,
  author =	{Amram, Gal and Hayoun, Avi and Mizrahi, Lior and Weiss, Gera},
  title =	{{Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure}},
  booktitle =	{36th International Symposium on Distributed Computing (DISC 2022)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-255-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{246},
  editor =	{Scheideler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2022.5},
  URN =		{urn:nbn:de:0030-drops-171964},
  doi =		{10.4230/LIPIcs.DISC.2022.5},
  annote =	{Keywords: Snapshot, Linearizability, Verification, Formal Methods}
}
Document
Trading Time and Space in Catalytic Branching Programs

Authors: James Cook and Ian Mertz

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
An m-catalytic branching program (Girard, Koucký, McKenzie 2015) is a set of m distinct branching programs for f which are permitted to share internal (i.e. non-source non-sink) nodes. While originally introduced as a non-uniform analogue to catalytic space, this also gives a natural notion of amortized non-uniform space complexity for f, namely the smallest value |G|/m for an m-catalytic branching program G for f (Potechin 2017). Potechin (2017) showed that every function f has amortized size O(n), witnessed by an m-catalytic branching program where m = 2^(2ⁿ-1). We recreate this result by defining a catalytic algorithm for evaluating polynomials using a large amount of space but O(n) time. This allows us to balance this with previously known algorithms which are efficient with respect to space at the cost of time (Cook, Mertz 2020, 2021). We show that for any ε ≥ 2n^(-1), every function f has an m-catalytic branching program of size O_ε(mn), where m = 2^(2^(ε n)). We similarly recreate an improved result due to Robere and Zuiddam (2021), and show that for d ≤ n and ε ≥ 2d^(-1), the same result holds for m = 2^binom(n, ≤ ε d) as long as f is a degree-d polynomial over 𝔽₂. We also show that for certain classes of functions, m can be reduced to 2^(poly n) while still maintaining linear or quasi-linear amortized size. In the other direction, we bound the necessary length, and by extension the amortized size, of any permutation branching program for an arbitrary function between 3n and 4n-4.
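Restating the two main upper bounds from the abstract in display form (the amortized size of an m-catalytic branching program G for f is |G|/m; reading the binomial notation as a sum of binomial coefficients is the standard convention but is our assumption here):

\[
  \text{For every } \varepsilon \ge 2n^{-1}:\qquad
  \text{size } O_\varepsilon(mn) \ \text{ with } \ m = 2^{\,2^{\varepsilon n}}.
\]
\[
  \text{For } d \le n,\ \varepsilon \ge 2d^{-1},\ \deg_{\mathbb{F}_2}(f) \le d:\qquad
  \text{size } O_\varepsilon(mn) \ \text{ with } \ m = 2^{\binom{n}{\le \varepsilon d}},
  \qquad \binom{n}{\le k} := \sum_{i=0}^{k}\binom{n}{i}.
\]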

Cite as

James Cook and Ian Mertz. Trading Time and Space in Catalytic Branching Programs. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 8:1-8:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cook_et_al:LIPIcs.CCC.2022.8,
  author =	{Cook, James and Mertz, Ian},
  title =	{{Trading Time and Space in Catalytic Branching Programs}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{8:1--8:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.8},
  URN =		{urn:nbn:de:0030-drops-165708},
  doi =		{10.4230/LIPIcs.CCC.2022.8},
  annote =	{Keywords: complexity theory, branching programs, amortized, space complexity, catalytic computation}
}
Document
Characteristic Formulae for Liveness Properties of Non-Terminating CakeML Programs

Authors: Johannes Åman Pohjola, Henrik Rostedt, and Magnus O. Myreen

Published in: LIPIcs, Volume 141, 10th International Conference on Interactive Theorem Proving (ITP 2019)


Abstract
There are useful programs that do not terminate, and yet standard Hoare logics are not able to prove liveness properties about non-terminating programs. This paper shows how a Hoare-like programming logic framework (characteristic formulae) can be extended to enable reasoning about the I/O behaviour of programs that do not terminate. The approach is inspired by transfinite induction rather than coinduction, and does not require non-terminating loops to be productive. This work has been developed in the HOL4 theorem prover and has been integrated into the ecosystem of proof tools surrounding the CakeML programming language.

Cite as

Johannes Åman Pohjola, Henrik Rostedt, and Magnus O. Myreen. Characteristic Formulae for Liveness Properties of Non-Terminating CakeML Programs. In 10th International Conference on Interactive Theorem Proving (ITP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 141, pp. 32:1-32:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{amanpohjola_et_al:LIPIcs.ITP.2019.32,
  author =	{\r{A}man Pohjola, Johannes and Rostedt, Henrik and Myreen, Magnus O.},
  title =	{{Characteristic Formulae for Liveness Properties of Non-Terminating CakeML Programs}},
  booktitle =	{10th International Conference on Interactive Theorem Proving (ITP 2019)},
  pages =	{32:1--32:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-122-1},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{141},
  editor =	{Harrison, John and O'Leary, John and Tolmach, Andrew},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2019.32},
  URN =		{urn:nbn:de:0030-drops-110872},
  doi =		{10.4230/LIPIcs.ITP.2019.32},
  annote =	{Keywords: Program verification, non-termination, liveness, Hoare logic}
}
Document
Binary-Compatible Verification of Filesystems with ACL2

Authors: Mihir Parang Mehta and William R. Cook

Published in: LIPIcs, Volume 141, 10th International Conference on Interactive Theorem Proving (ITP 2019)


Abstract
Filesystems are an essential component of most computer systems. Work on the verification of filesystem functionality has been focused on constructing new filesystems in a manner which simplifies the process of verifying them against specifications. This leaves open the question of whether filesystems already in use are correct at the binary level. This paper introduces LoFAT, a model of the FAT32 filesystem which efficiently implements a subset of the POSIX filesystem operations, and HiFAT, a more abstract model of FAT32 which is simpler to reason about. LoFAT is proved to be correct in terms of refinement of HiFAT, and made executable by enabling the state of the model to be written to and read from FAT32 disk images. EqFAT, an equivalence relation for disk images, considers whether two disk images contain the same directory tree modulo reordering of files and implementation-level details regarding cluster allocation. A suite of co-simulation tests uses EqFAT to compare the operation of existing FAT32 implementations to LoFAT and check the correctness of existing implementations of FAT32 such as the mtools suite of programs and the Linux FAT32 implementation. All models and proofs are formalized and mechanically verified in ACL2.
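As a loose illustration of the equivalence EqFAT captures, the Python sketch below compares two directory trees, represented as lists of (name, entry) pairs in on-disk order, modulo the order in which entries appear. This is only an analogy: the real EqFAT works on FAT32 disk images and additionally abstracts over cluster-allocation details, none of which is modeled here.

def trees_equivalent(a, b):
    """Toy stand-in for an EqFAT-style check: directories (lists of
    (name, entry) pairs in on-disk order) are compared as unordered
    maps from names to entries, recursively. Assumes names are unique
    within each directory."""
    if isinstance(a, list) and isinstance(b, list):
        da, db = dict(a), dict(b)
        if da.keys() != db.keys():
            return False
        return all(trees_equivalent(da[name], db[name]) for name in da)
    # Leaves are plain file contents; compare them directly.
    return a == b

if __name__ == "__main__":
    img1 = [("docs", [("a.txt", "hi"), ("b.txt", "bye")]), ("c.bin", "0001")]
    img2 = [("c.bin", "0001"), ("docs", [("b.txt", "bye"), ("a.txt", "hi")])]
    print(trees_equivalent(img1, img2))  # True: same tree up to reordering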

Cite as

Mihir Parang Mehta and William R. Cook. Binary-Compatible Verification of Filesystems with ACL2. In 10th International Conference on Interactive Theorem Proving (ITP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 141, pp. 25:1-25:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{mehta_et_al:LIPIcs.ITP.2019.25,
  author =	{Mehta, Mihir Parang and Cook, William R.},
  title =	{{Binary-Compatible Verification of Filesystems with ACL2}},
  booktitle =	{10th International Conference on Interactive Theorem Proving (ITP 2019)},
  pages =	{25:1--25:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-122-1},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{141},
  editor =	{Harrison, John and O'Leary, John and Tolmach, Andrew},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2019.25},
  URN =		{urn:nbn:de:0030-drops-110807},
  doi =		{10.4230/LIPIcs.ITP.2019.25},
  annote =	{Keywords: interactive theorem proving, filesystems}
}
Document
Ornaments for Proof Reuse in Coq

Authors: Talia Ringer, Nathaniel Yazdani, John Leo, and Dan Grossman

Published in: LIPIcs, Volume 141, 10th International Conference on Interactive Theorem Proving (ITP 2019)


Abstract
Ornaments express relations between inductive types with the same inductive structure. We implement fully automatic proof reuse for a particular class of ornaments in a Coq plugin, and show how such a tool can give programmers the rewards of using indexed inductive types while automating away many of the costs. The plugin works directly on Coq code; it is the first ornamentation tool for a non-embedded dependently typed language. It is also the first tool to automatically identify ornaments: To lift a function or proof, the user must provide only the source type, the destination type, and the source function or proof. In taking advantage of the mathematical properties of ornaments, our approach produces faster functions and smaller terms than a more general approach to proof reuse in Coq.

Cite as

Talia Ringer, Nathaniel Yazdani, John Leo, and Dan Grossman. Ornaments for Proof Reuse in Coq. In 10th International Conference on Interactive Theorem Proving (ITP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 141, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{ringer_et_al:LIPIcs.ITP.2019.26,
  author =	{Ringer, Talia and Yazdani, Nathaniel and Leo, John and Grossman, Dan},
  title =	{{Ornaments for Proof Reuse in Coq}},
  booktitle =	{10th International Conference on Interactive Theorem Proving (ITP 2019)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-122-1},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{141},
  editor =	{Harrison, John and O'Leary, John and Tolmach, Andrew},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2019.26},
  URN =		{urn:nbn:de:0030-drops-110816},
  doi =		{10.4230/LIPIcs.ITP.2019.26},
  annote =	{Keywords: ornaments, proof reuse, proof automation}
}
Document
Bounded-Depth Frege Complexity of Tseitin Formulas for All Graphs

Authors: Nicola Galesi, Dmitry Itsykson, Artur Riazanov, and Anastasia Sofronova

Published in: LIPIcs, Volume 138, 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)


Abstract
We prove that there is a constant K such that Tseitin formulas for an undirected graph G require proofs of size 2^{tw(G)^{Omega(1/d)}} in depth-d Frege systems for d < (K log n)/(log log n), where tw(G) is the treewidth of G. This extends Håstad's recent lower bound for the grid graph to arbitrary graphs. Furthermore, we prove tightness of our bound up to a multiplicative constant in the top exponent. Namely, we show that if a Tseitin formula for a graph G has size s, then for all large enough d, it has a depth-d Frege proof of size 2^{tw(G)^{O(1/d)}} poly(s). Through this result we settle the question posed by M. Alekhnovich and A. Razborov of showing that the class of Tseitin formulas is quasi-automatizable for resolution.
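In display form, the two bounds read (n and s as in the abstract):

\[
  \text{Lower bound: every depth-}d\text{ Frege proof has size at least } 2^{\,\mathrm{tw}(G)^{\Omega(1/d)}}
  \quad\text{whenever } d < \frac{K\log n}{\log\log n};
\]
\[
  \text{Upper bound: for all large enough } d,\ \text{there is a depth-}d\text{ Frege proof of size }
  2^{\,\mathrm{tw}(G)^{O(1/d)}}\,\mathrm{poly}(s).
\]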

Cite as

Nicola Galesi, Dmitry Itsykson, Artur Riazanov, and Anastasia Sofronova. Bounded-Depth Frege Complexity of Tseitin Formulas for All Graphs. In 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 138, pp. 49:1-49:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{galesi_et_al:LIPIcs.MFCS.2019.49,
  author =	{Galesi, Nicola and Itsykson, Dmitry and Riazanov, Artur and Sofronova, Anastasia},
  title =	{{Bounded-Depth Frege Complexity of Tseitin Formulas for All Graphs}},
  booktitle =	{44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)},
  pages =	{49:1--49:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-117-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{138},
  editor =	{Rossmanith, Peter and Heggernes, Pinar and Katoen, Joost-Pieter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2019.49},
  URN =		{urn:nbn:de:0030-drops-109932},
  doi =		{10.4230/LIPIcs.MFCS.2019.49},
  annote =	{Keywords: Tseitin formula, treewidth, AC0-Frege}
}
Document
Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity

Authors: Lijie Chen and R. Ryan Williams

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We considerably sharpen the known connections between circuit-analysis algorithms and circuit lower bounds, show intriguing equivalences between the analysis of weak circuits and (apparently difficult) circuits, and provide strong new lower bounds for approximately computing Boolean functions with depth-two neural networks and related models.
- We develop approaches to proving THR o THR lower bounds (a notorious open problem) by connecting algorithmic analysis of THR o THR to the provably weaker circuit classes THR o MAJ and MAJ o MAJ, where exponential lower bounds have long been known. More precisely, we show equivalences between algorithmic analysis of THR o THR and these weaker classes. The epsilon-error CAPP problem asks to approximate the acceptance probability of a given circuit to within additive error epsilon; it is the "canonical" derandomization problem. We show:
  - There is a non-trivial (2^n/n^{omega(1)} time) 1/poly(n)-error CAPP algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size MAJ o MAJ circuits.
  - There is a delta > 0 and a non-trivial SAT (delta-error CAPP) algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size THR o MAJ circuits.
  Similar results hold for depth-d linear threshold circuits and depth-d MAJORITY circuits. These equivalences are proved via new simulations of THR circuits by circuits with MAJ gates.
- We strengthen the connection between non-trivial derandomization (non-trivial CAPP algorithms) for a circuit class C and circuit lower bounds against C. Previously, [Ben-Sasson and Viola, ICALP 2014] (following [Williams, STOC 2010]) showed that for any polynomial-size class C closed under projections, non-trivial (2^{n}/n^{omega(1)} time) CAPP for OR_{poly(n)} o AND_{3} o C yields that NEXP does not have C circuits. We apply Probabilistically Checkable Proofs of Proximity in a new way to show that it would suffice to have a non-trivial CAPP algorithm for either XOR_2 o C, AND_2 o C, or OR_2 o C.
- A direct corollary of the first two bullets is that "NEXP does not have THR o THR circuits" would follow from either:
  - a non-trivial delta-error CAPP (or SAT) algorithm for poly(n)-size THR o MAJ circuits, or
  - a non-trivial 1/poly(n)-error CAPP algorithm for poly(n)-size MAJ o MAJ circuits.
- Applying the above machinery, we extend lower bounds for depth-two neural networks and related models [R. Williams, CCC 2018] to weak approximate computations of Boolean functions. For example, for arbitrarily small epsilon > 0, we prove there are Boolean functions f computable in nondeterministic n^{log n} time such that (for infinitely many n) every polynomial-size depth-two neural network N on n inputs (with sign or ReLU activation) must satisfy max_{x in {0,1}^n} |N(x) - f(x)| > 1/2 - epsilon. That is, short linear combinations of ReLU gates fail miserably at computing f to within close precision. Similar results are proved for linear combinations of ACC o THR circuits and linear combinations of low-degree F_p polynomials. These results constitute further progress towards THR o THR lower bounds.
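To make the problem statement concrete, the sketch below estimates the acceptance probability of a circuit (given as a Python predicate) by random sampling. This only illustrates what epsilon-error CAPP asks for; it is a randomized estimator, so it is not a "non-trivial" deterministic algorithm in the paper's sense, and none of the paper's reductions appear here.

import math
import random

def capp_estimate(circuit, n, eps, failure=1e-3):
    """Estimate Pr_x[circuit(x) = 1] over uniform x in {0,1}^n to within
    additive error eps, except with probability at most `failure`, by
    Monte Carlo sampling (sample count from the Hoeffding bound)."""
    samples = math.ceil(math.log(2.0 / failure) / (2.0 * eps * eps))
    hits = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        if circuit(x):
            hits += 1
    return hits / samples

if __name__ == "__main__":
    # Toy "circuit": majority of 15 input bits; its acceptance probability is 1/2.
    majority = lambda bits: sum(bits) > len(bits) // 2
    print(capp_estimate(majority, n=15, eps=0.05))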

Cite as

Lijie Chen and R. Ryan Williams. Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 19:1-19:43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2019.19,
  author =	{Chen, Lijie and Williams, R. Ryan},
  title =	{{Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{19:1--19:43},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.19},
  URN =		{urn:nbn:de:0030-drops-108419},
  doi =		{10.4230/LIPIcs.CCC.2019.19},
  annote =	{Keywords: PCP of Proximity, Circuit Lower Bounds, Derandomization, Threshold Circuits, ReLU}
}
Document
Sherali-Adams Strikes Back

Authors: Ryan O'Donnell and Tselil Schramm

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
Let G be any n-vertex graph whose random walk matrix has its nontrivial eigenvalues bounded in magnitude by 1/sqrt{Delta} (for example, a random graph G of average degree Theta(Delta) typically has this property). We show that the exp(c (log n)/(log Delta))-round Sherali-Adams linear programming hierarchy certifies that the maximum cut in such a G is at most 50.1% (in fact, at most 1/2 + 2^{-Omega(c)}). For example, in random graphs with n^{1.01} edges, O(1) rounds suffice; in random graphs with n * polylog(n) edges, n^{O(1/log log n)} = n^{o(1)} rounds suffice. Our results stand in contrast to the conventional beliefs that linear programming hierarchies perform poorly for max-cut and other CSPs, and that eigenvalue/SDP methods are needed for effective refutation. Indeed, our results imply that constant-round Sherali-Adams can strongly refute random Boolean k-CSP instances with n^{ceil[k/2] + delta} constraints; previously this had only been done with spectral algorithms or the SOS SDP hierarchy.
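The main certification statement of the abstract, written out in display form (Delta as in the eigenvalue bound, c a parameter):

\[
  \text{If every nontrivial eigenvalue of the random walk matrix of } G
  \text{ has magnitude at most } 1/\sqrt{\Delta},
\]
\[
  \text{then } \exp\!\big(c\,\log n / \log \Delta\big)\text{ rounds of Sherali-Adams certify }
  \mathrm{maxcut}(G) \;\le\; \tfrac12 + 2^{-\Omega(c)}.
\]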

Cite as

Ryan O'Donnell and Tselil Schramm. Sherali-Adams Strikes Back. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 8:1-8:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{odonnell_et_al:LIPIcs.CCC.2019.8,
  author =	{O'Donnell, Ryan and Schramm, Tselil},
  title =	{{Sherali - Adams Strikes Back}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{8:1--8:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.8},
  URN =		{urn:nbn:de:0030-drops-108309},
  doi =		{10.4230/LIPIcs.CCC.2019.8},
  annote =	{Keywords: Linear programming, Sherali, Adams, max-cut, graph eigenvalues, Sum-of-Squares}
}
Document
Pearl
Finally, a Polymorphic Linear Algebra Language (Pearl)

Authors: Amir Shaikhha and Lionel Parreaux

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Many different data analytics tasks boil down to linear algebra primitives. In practice, for each different type of workload, data scientists use a particular specialised library. In this paper, we present Pilatus, a polymorphic iterative linear algebra language, applicable to various types of data analytics workloads. The design of this domain-specific language (DSL) is inspired by both mathematics and programming languages: its basic constructs are borrowed from abstract algebra, whereas the key technology behind its polymorphic design uses the tagless final approach (a.k.a. polymorphic embedding/object algebras). This design enables us to change the behaviour of arithmetic operations to express matrix algebra, graph algorithms, logical probabilistic programs, and differentiable programs. Crucially, the polymorphic design of Pilatus allows us to use multi-stage programming and rewrite-based optimisation to recover the performance of specialised code, supporting fixed sized matrices, algebraic optimisations, and fusion.
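The "tagless final" / polymorphic embedding idea underlying Pilatus can be illustrated outside Scala as well. The sketch below (Python, not the paper's implementation) defines one abstract algebra of arithmetic operations and two interpreters; reinterpreting the same program as evaluation or pretty-printing is the mechanism that lets a DSL of this kind swap in matrix, graph, or probabilistic semantics for the same operations.

class Algebra:
    """Abstract semantics for the DSL's operations (a tiny 'object algebra')."""
    def lit(self, v): raise NotImplementedError
    def add(self, a, b): raise NotImplementedError
    def mul(self, a, b): raise NotImplementedError

class Eval(Algebra):
    """Interpret the operations as ordinary arithmetic."""
    def lit(self, v): return v
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b

class Show(Algebra):
    """Interpret the same operations as pretty-printing."""
    def lit(self, v): return str(v)
    def add(self, a, b): return f"({a} + {b})"
    def mul(self, a, b): return f"({a} * {b})"

def program(alg):
    # One program, written once against the abstract algebra.
    return alg.add(alg.mul(alg.lit(2), alg.lit(3)), alg.lit(4))

if __name__ == "__main__":
    print(program(Eval()))  # 10
    print(program(Show()))  # ((2 * 3) + 4)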

Cite as

Amir Shaikhha and Lionel Parreaux. Finally, a Polymorphic Linear Algebra Language (Pearl). In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 25:1-25:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{shaikhha_et_al:LIPIcs.ECOOP.2019.25,
  author =	{Shaikhha, Amir and Parreaux, Lionel},
  title =	{{Finally, a Polymorphic Linear Algebra Language}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{25:1--25:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.25},
  URN =		{urn:nbn:de:0030-drops-108172},
  doi =		{10.4230/LIPIcs.ECOOP.2019.25},
  annote =	{Keywords: Linear Algebra, Domain-Specific Languages, Tagless Final, Polymorphic Embedding, Object Algebra, Multi-Stage Programming, Graph Processing, Probabilistic Programming, Automatic Differentiation}
}
Document
Transient Typechecks Are (Almost) Free

Authors: Richard Roberts, Stefan Marr, Michael Homer, and James Noble

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Transient gradual typing imposes run-time type tests that typically cause a linear slowdown. This performance impact discourages the use of type annotations because adding types to a program makes the program slower. A virtual machine can employ standard just-in-time optimizations to reduce the overhead of transient checks to near zero. These optimizations can give gradually-typed languages performance comparable to state-of-the-art dynamic languages, so programmers can add types to their code without affecting their programs' performance.
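For context on what a "transient" check is: transient gradual typing guards typed code with shallow run-time tests at boundaries (arguments and returns) rather than deep structural checks. The decorator below is an illustrative Python analogue of such a shallow check, not the Grace/Moth machinery evaluated in the paper; the paper's point is that a JIT-compiling virtual machine can eliminate the cost of checks like these almost entirely.

import functools

def transient_check(*arg_types, returns=None):
    """Insert shallow (type-tag only) run-time checks at a call boundary,
    in the spirit of transient gradual typing."""
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(*args):
            for a, t in zip(args, arg_types):
                if not isinstance(a, t):          # shallow check: tag only
                    raise TypeError(f"{fn.__name__}: expected {t.__name__}")
            result = fn(*args)
            if returns is not None and not isinstance(result, returns):
                raise TypeError(f"{fn.__name__}: bad return type")
            return result
        return guarded
    return wrap

@transient_check(int, int, returns=int)
def add(a, b):
    return a + b

if __name__ == "__main__":
    # Each call pays for two argument checks and one return check.
    print(add(2, 3))  # 5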

Cite as

Richard Roberts, Stefan Marr, Michael Homer, and James Noble. Transient Typechecks Are (Almost) Free. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 5:1-5:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{roberts_et_al:LIPIcs.ECOOP.2019.5,
  author =	{Roberts, Richard and Marr, Stefan and Homer, Michael and Noble, James},
  title =	{{Transient Typechecks Are (Almost) Free}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{5:1--5:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.5},
  URN =		{urn:nbn:de:0030-drops-107974},
  doi =		{10.4230/LIPIcs.ECOOP.2019.5},
  annote =	{Keywords: dynamic type checking, gradual types, optional types, Grace, Moth, object-oriented programming}
}
Document
Transferring Obligations Through Synchronizations

Authors: Jafar Hamin and Bart Jacobs

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
One common approach to verifying safety properties of multithreaded programs is to assign appropriate permissions, such as ownership of a heap location, and obligations, such as an obligation to send a message on a channel, to each thread, and to make sure that each thread performs only the actions for which it has permissions and fulfills all of its obligations before it terminates. Although permissions can be transferred through synchronizations from a sender thread, where for example a message is sent or a condition variable is notified, to a receiver thread, where that message or that notification is received, in existing approaches obligations can only be transferred when a thread is forked. In this paper we introduce two mechanisms, one for channels and the other for condition variables, that allow obligations, along with permissions, to be transferred from the sender to the receiver, while ensuring that there is no state in which the transferred obligations are lost, i.e. in which they have been discharged from the sender thread but not yet loaded onto the receiver thread. We show how these mechanisms can be used to modularly verify deadlock-freedom of a number of interesting programs, such as some variations of client-server programs, fair readers-writers locks, and dining philosophers, which cannot be modularly verified without such transfer. We have also encoded the proposed separation logic-based proof rules in the VeriFast program verifier and succeeded in verifying the programs mentioned above.

Cite as

Jafar Hamin and Bart Jacobs. Transferring Obligations Through Synchronizations. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 19:1-19:58, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hamin_et_al:LIPIcs.ECOOP.2019.19,
  author =	{Hamin, Jafar and Jacobs, Bart},
  title =	{{Transferring Obligations Through Synchronizations}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{19:1--19:58},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.19},
  URN =		{urn:nbn:de:0030-drops-108113},
  doi =		{10.4230/LIPIcs.ECOOP.2019.19},
  annote =	{Keywords: Hoare logic, separation logic, modular program verification, synchronization, transferring obligations, deadlock-freedom}
}
Document
Multitier Modules

Authors: Pascal Weisenburger and Guido Salvaneschi

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Multitier programming languages address the complexity of developing distributed systems by abstracting over low-level implementation details such as data representation, serialization, and network protocols. Since the functionalities of different peers can be defined in the same compilation unit, multitier languages do not force developers to modularize software along network boundaries. Unfortunately, combining the code for all tiers into the same compilation unit poses a scalability challenge or forces developers to resort to traditional modularization abstractions that are agnostic to the multitier nature of the language. In this paper, we address this issue with a module system for multitier languages. Our module system supports encapsulating each (cross-peer) functionality and defining it over abstract peer types. As a result, we disentangle modularization and distribution, and we enable the definition of a distributed system as a composition of multitier modules, each representing a subsystem. Our case studies on distributed algorithms and distributed data structures, as well as on the Apache Flink task distribution system, show that multitier modules allow the definition of reusable (abstract) patterns of interaction in distributed software and enable separating the modularization and distribution concerns, properly separating functionalities in distributed systems.

Cite as

Pascal Weisenburger and Guido Salvaneschi. Multitier Modules. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 3:1-3:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{weisenburger_et_al:LIPIcs.ECOOP.2019.3,
  author =	{Weisenburger, Pascal and Salvaneschi, Guido},
  title =	{{Multitier Modules}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{3:1--3:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.3},
  URN =		{urn:nbn:de:0030-drops-107957},
  doi =		{10.4230/LIPIcs.ECOOP.2019.3},
  annote =	{Keywords: Distributed Programming, Multitier Programming, Abstract Peer Types, Placement Types, Module Systems, Scala}
}
Document
Sub-classical Boolean Bunched Logics and the Meaning of Par

Authors: James Brotherston and Jules Villard

Published in: LIPIcs, Volume 41, 24th EACSL Annual Conference on Computer Science Logic (CSL 2015)


Abstract
We investigate intermediate logics between the bunched logics Boolean BI and Classical BI, obtained by combining classical propositional logic with various flavours of Hyland and De Paiva's full intuitionistic linear logic. Thus, in addition to the usual multiplicative conjunction (with its adjoint implication and unit), our logics also feature a multiplicative disjunction (with its adjoint co-implication and unit). The multiplicatives behave "sub-classically", in that disjunction and conjunction are related by a weak distribution principle, rather than by De Morgan equivalence. We formulate a Kripke semantics, covering all our sub-classical bunched logics, in which the multiplicatives are naturally read in terms of resource operations. Our main theoretical result is that validity according to this semantics coincides with provability in a corresponding Hilbert-style proof system. Our logical investigation sheds considerable new light on how one can understand the multiplicative disjunction, better known as linear logic's "par", in terms of resource operations. In particular, and in contrast to the earlier Classical BI, the models of our logics include the heap-like memory models of separation logic, in which disjunction can be interpreted as a property of intersection operations over heaps.
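The weak distribution principle referred to above is, in the usual formulation for such systems (our rendering; the paper's own notation may differ), the linear-distributivity law relating the multiplicative conjunction * and the multiplicative disjunction ⅋:

  A * (B ⅋ C) |- (A * B) ⅋ C

whereas a De Morgan equivalence would additionally identify ⅋ with the dual of * under multiplicative negation, as it is in Classical BI.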

Cite as

James Brotherston and Jules Villard. Sub-classical Boolean Bunched Logics and the Meaning of Par. In 24th EACSL Annual Conference on Computer Science Logic (CSL 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 41, pp. 325-342, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{brotherston_et_al:LIPIcs.CSL.2015.325,
  author =	{Brotherston, James and Villard, Jules},
  title =	{{Sub-classical Boolean Bunched Logics and the Meaning of Par}},
  booktitle =	{24th EACSL Annual Conference on Computer Science Logic (CSL 2015)},
  pages =	{325--342},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-90-3},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{41},
  editor =	{Kreutzer, Stephan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2015.325},
  URN =		{urn:nbn:de:0030-drops-54234},
  doi =		{10.4230/LIPIcs.CSL.2015.325},
  annote =	{Keywords: Bunched logic, linear logic, modal logic, Kripke semantics, model theory}
}
Document
A Theory of Tagged Objects

Authors: Joseph Lee, Jonathan Aldrich, Troy Shaw, and Alex Potanin

Published in: LIPIcs, Volume 37, 29th European Conference on Object-Oriented Programming (ECOOP 2015)


Abstract
Foundational models of object-oriented constructs typically model objects as records with a structural type. However, many object-oriented languages are class-based; statically-typed formal models of these languages tend to sacrifice the foundational nature of the record-based models, and in addition cannot express dynamic class loading or creation. In this paper, we explore how to model statically-typed object-oriented languages that support dynamic class creation using foundational constructs of type theory. We start with an extensible tag construct motivated by type theory, and adapt it to support static reasoning about class hierarchy and the tags supported by each object. The result is a model that better explains the relationship between object-oriented and functional programming paradigms, suggests a useful enhancement to functional programming languages, and paves the way for more expressive statically typed object-oriented languages. In that vein, we describe the design and implementation of the Wyvern language, which leverages our theory.

Cite as

Joseph Lee, Jonathan Aldrich, Troy Shaw, and Alex Potanin. A Theory of Tagged Objects. In 29th European Conference on Object-Oriented Programming (ECOOP 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 37, pp. 174-197, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{lee_et_al:LIPIcs.ECOOP.2015.174,
  author =	{Lee, Joseph and Aldrich, Jonathan and Shaw, Troy and Potanin, Alex},
  title =	{{A Theory of Tagged Objects}},
  booktitle =	{29th European Conference on Object-Oriented Programming (ECOOP 2015)},
  pages =	{174--197},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-86-6},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{37},
  editor =	{Boyland, John Tang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2015.174},
  URN =		{urn:nbn:de:0030-drops-52305},
  doi =		{10.4230/LIPIcs.ECOOP.2015.174},
  annote =	{Keywords: objects, classes, tags, nominal and structural types}
}
Document
Brand Objects for Nominal Typing

Authors: Timothy Jones, Michael Homer, and James Noble

Published in: LIPIcs, Volume 37, 29th European Conference on Object-Oriented Programming (ECOOP 2015)


Abstract
Combinations of structural and nominal object typing in systems such as Scala, Whiteoak, and Unity have focused on extending existing nominal, class-based systems with structural subtyping. The typical rules of nominal typing do not lend themselves to such an extension, resulting in major modifications. Adding object branding to an existing structural system integrates nominal and structural typing without excessively complicating the type system. We have implemented brand objects to explicitly type objects, using existing features of the structurally typed language Grace, along with a static type checker which treats the brands as nominal types. We demonstrate that the brands are useful in an existing implementation of Grace, and provide a formal model of the extension to the language.
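As a rough analogue of the idea (in Python rather than Grace, and without the paper's static type checker): structural typing accepts any object with the right shape, while a brand is an explicit witness an object must additionally carry to pass a nominal check.

class Brand:
    """An explicit nominal tag; only objects sealed with this brand pass its check."""
    def __init__(self, name):
        self.name = name
        self._members = set()

    def seal(self, obj):
        self._members.add(id(obj))
        return obj

    def carries(self, obj):
        return id(obj) in self._members

Cat = Brand("Cat")

class CatShaped:                 # structurally cat-shaped, but not necessarily branded
    def meow(self):
        return "meow"

real_cat = Cat.seal(CatShaped())  # same shape, plus the Cat brand
impostor = CatShaped()

def pet(c):
    assert hasattr(c, "meow")     # structural check: shape only
    assert Cat.carries(c)         # nominal check: brand required
    return c.meow()

if __name__ == "__main__":
    print(pet(real_cat))          # ok
    # pet(impostor)               # would fail the brand (nominal) check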

Cite as

Timothy Jones, Michael Homer, and James Noble. Brand Objects for Nominal Typing. In 29th European Conference on Object-Oriented Programming (ECOOP 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 37, pp. 198-221, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{jones_et_al:LIPIcs.ECOOP.2015.198,
  author =	{Jones, Timothy and Homer, Michael and Noble, James},
  title =	{{Brand Objects for Nominal Typing}},
  booktitle =	{29th European Conference on Object-Oriented Programming (ECOOP 2015)},
  pages =	{198--221},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-86-6},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{37},
  editor =	{Boyland, John Tang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2015.198},
  URN =		{urn:nbn:de:0030-drops-52314},
  doi =		{10.4230/LIPIcs.ECOOP.2015.198},
  annote =	{Keywords: brands, types, structural, nominal, Grace}
}
  • Refine by Author
  • 2 Homer, Michael
  • 2 Noble, James
  • 1 Aldrich, Jonathan
  • 1 Amram, Gal
  • 1 Beyersdorff, Olaf
  • …

  • Refine by Classification
  • 3 Software and its engineering → Formal software verification
  • 2 Theory of computation → Computational complexity and cryptography
  • 2 Theory of computation → Separation logic
  • 1 Computing methodologies → Distributed programming languages
  • 1 Computing methodologies → Linear algebra algorithms
  • …

  • Refine by Keyword
  • 2 Grace
  • 2 Hoare logic
  • 1 AC0-Frege
  • 1 Abstract Peer Types
  • 1 Adams
  • …

  • Refine by Type
  • 16 document

  • Refine by Publication Year
  • 10 2019
  • 4 2015
  • 2 2022
