12 Search Results for "Jhala, Ranjit"


Document
Understanding Concurrency Bugs in Real-World Programs with Kotlin Coroutines

Authors: Bob Brockbernd, Nikita Koval, Arie van Deursen, and Burcu Kulahcioglu Ozkan

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
The Kotlin language has recently become prominent for developing both Android and server-side applications. These programs are typically designed to be fast and responsive, with asynchrony and concurrency at their core. To enable developers to write asynchronous and concurrent code safely and concisely, Kotlin provides built-in coroutines support. However, developers unfamiliar with the coroutines concept may write programs with subtle concurrency bugs and face unexpected program behaviors. Besides the traditional concurrency bug patterns, such as data races and deadlocks, these bugs may exhibit patterns related to the coroutine semantics. Understanding these coroutine-specific bug patterns in real-world Kotlin applications is essential for avoiding common mistakes and writing correct programs. In this paper, we present the first study of real-world concurrency bugs related to Kotlin coroutines. We examined 55 concurrency bug cases selected from 7 popular open-source repositories that use Kotlin coroutines, including IntelliJ IDEA, Firefox, and Ktor, and analyzed their bug characteristics and root causes. We identified common bug patterns related to asynchrony and Kotlin’s coroutine semantics, presenting them with their root causes, misconceptions that led to the bugs, and strategies for their automated detection. Overall, this study provides insight into concurrent programming with Kotlin coroutines and its pitfalls, aiming to shed light on common bug patterns and foster further research and development of concurrency analysis tools for Kotlin programs.
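
To make the coroutine-specific bug class concrete, here is a minimal Kotlin sketch (our own illustration, not an example taken from the paper; the names withdraw and balance are invented): a check-then-act guard is broken by the suspension point between the check and the update, so other coroutines can interleave even on a single-threaded dispatcher.

import kotlinx.coroutines.*

// Shared state guarded only by a check that no longer holds after the suspension point.
var balance = 100

suspend fun withdraw(amount: Int) {
    if (balance >= amount) {          // check
        delay(10)                     // suspension point: other coroutines may run here
        balance -= amount             // act: the guard above may no longer hold
    }
}

fun main() = runBlocking {
    val jobs = List(3) { launch { withdraw(60) } }
    jobs.joinAll()
    println("balance = $balance")     // prints a negative balance despite the guard
}

All three coroutines pass the check before any of them performs the subtraction, so the balance goes negative; this is exactly the kind of unexpected behavior, tied to coroutine semantics rather than to threads, that the study catalogues.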

Cite as

Bob Brockbernd, Nikita Koval, Arie van Deursen, and Burcu Kulahcioglu Ozkan. Understanding Concurrency Bugs in Real-World Programs with Kotlin Coroutines. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{brockbernd_et_al:LIPIcs.ECOOP.2024.8,
  author =	{Brockbernd, Bob and Koval, Nikita and van Deursen, Arie and Ozkan, Burcu Kulahcioglu},
  title =	{{Understanding Concurrency Bugs in Real-World Programs with Kotlin Coroutines}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.8},
  URN =		{urn:nbn:de:0030-drops-208579},
  doi =		{10.4230/LIPIcs.ECOOP.2024.8},
  annote =	{Keywords: Kotlin, coroutines, concurrency, asynchrony, software bugs}
}
Document
InferType: A Compiler Toolkit for Implementing Efficient Constraint-Based Type Inference

Authors: Senxi Li, Tetsuro Yamazaki, and Shigeru Chiba

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Automatic type inference is in demand in modern language development, yet it is a challenging task and appropriate supporting toolkits are lacking. This paper presents InferType, a Java library that helps implement constraint-based type inference. A compiler writer uses InferType’s classes and methods to describe type constraints and typing rules for type inference. InferType then performs constraint solving by translation to the Z3 SMT solver. InferType is also equipped with an optimization technique we developed: it reduces the search space for type variables by pre-computing the structures of those type variables, mitigating the performance bottleneck of constraint solving with deeply nested types. We use InferType to implement type inference for a subset of Python, and conduct experiments to evaluate how the optimization affects the performance of type inference. Our results show that InferType’s optimization can greatly mitigate the performance bottleneck for programs with deeply nested types, and can potentially improve the performance for large nested types.
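
As a rough sketch of what constraint-based type inference involves, the Kotlin toy below (our own code; the types TVar, Infer, solve and the tiny expression language are invented and bear no relation to InferType's Java API) walks an AST, emits equality constraints between types, and then solves them by naive unification. InferType instead lets the compiler writer state such constraints and discharges them via Z3.

// Toy constraint-based type inference: generate equality constraints, solve by unification.
sealed interface Type
object IntTy : Type { override fun toString() = "Int" }
object BoolTy : Type { override fun toString() = "Bool" }
data class TVar(val id: Int) : Type { override fun toString() = "t$id" }

sealed interface Expr
data class Lit(val n: Int) : Expr
data class BoolLit(val b: Boolean) : Expr
data class Add(val l: Expr, val r: Expr) : Expr
data class If(val c: Expr, val t: Expr, val e: Expr) : Expr

class Infer {
    private var fresh = 0
    val constraints = mutableListOf<Pair<Type, Type>>()

    fun typeOf(e: Expr): Type = when (e) {
        is Lit -> IntTy
        is BoolLit -> BoolTy
        is Add -> {
            constraints += typeOf(e.l) to IntTy
            constraints += typeOf(e.r) to IntTy
            IntTy
        }
        is If -> {
            val t = TVar(fresh++)                 // fresh type variable for the whole conditional
            constraints += typeOf(e.c) to BoolTy
            constraints += typeOf(e.t) to t
            constraints += typeOf(e.e) to t
            t
        }
    }
}

fun solve(constraints: List<Pair<Type, Type>>): Map<TVar, Type> {
    val subst = mutableMapOf<TVar, Type>()
    fun resolve(t: Type): Type = if (t is TVar) subst[t]?.let { resolve(it) } ?: t else t
    for ((l, r) in constraints) {
        val a = resolve(l)
        val b = resolve(r)
        when {
            a == b -> {}
            a is TVar -> subst[a] = b
            b is TVar -> subst[b] = a
            else -> error("cannot unify $a with $b")
        }
    }
    return subst
}

fun main() {
    val inf = Infer()
    val t = inf.typeOf(If(BoolLit(true), Lit(1), Add(Lit(2), Lit(3))))
    val subst = solve(inf.constraints)
    println("inferred type: ${if (t is TVar) subst[t] ?: t else t}")   // inferred type: Int
}

The performance issue the paper targets arises when such type variables stand for deeply nested types; InferType's optimization pre-computes their structure to shrink the solver's search space.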

Cite as

Senxi Li, Tetsuro Yamazaki, and Shigeru Chiba. InferType: A Compiler Toolkit for Implementing Efficient Constraint-Based Type Inference. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 23:1-23:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ECOOP.2024.23,
  author =	{Li, Senxi and Yamazaki, Tetsuro and Chiba, Shigeru},
  title =	{{InferType: A Compiler Toolkit for Implementing Efficient Constraint-Based Type Inference}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{23:1--23:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.23},
  URN =		{urn:nbn:de:0030-drops-208728},
  doi =		{10.4230/LIPIcs.ECOOP.2024.23},
  annote =	{Keywords: Domain Specific Languages, Compilation, Static Analysis, Type Inference, Constraint Solving, SMT Solver}
}
Document
Formalizing, Mechanizing, and Verifying Class-Based Refinement Types

Authors: Ke Sun, Di Wang, Sheng Chen, Meng Wang, and Dan Hao

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Refinement types have been extensively used in class-based languages to specify and verify fine-grained logical specifications. Despite the advances in practical aspects such as applicability and usability, two fundamental issues persist. First, the soundness of existing class-based refinement type systems is inadequately explored, casting doubt on their reliability. Second, the expressiveness of existing systems is limited, restricting the depiction of semantic properties related to object-oriented constructs. This work tackles these issues through a systematic framework. We formalize a declarative class-based refinement type calculus (named RFJ) that is expressive and concise. We rigorously develop the soundness meta-theory of this calculus, followed by its mechanization in Coq. Finally, to ensure the calculus’s verifiability, we propose an algorithmic verification approach based on a fragment of first-order logic (named LFJ), and implement this approach as a type checker.

Cite as

Ke Sun, Di Wang, Sheng Chen, Meng Wang, and Dan Hao. Formalizing, Mechanizing, and Verifying Class-Based Refinement Types. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 39:1-39:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sun_et_al:LIPIcs.ECOOP.2024.39,
  author =	{Sun, Ke and Wang, Di and Chen, Sheng and Wang, Meng and Hao, Dan},
  title =	{{Formalizing, Mechanizing, and Verifying Class-Based Refinement Types}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{39:1--39:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.39},
  URN =		{urn:nbn:de:0030-drops-208881},
  doi =		{10.4230/LIPIcs.ECOOP.2024.39},
  annote =	{Keywords: Refinement Types, Program Verification, Object-oriented Programming}
}
Document
CtChecker: A Precise, Sound and Efficient Static Analysis for Constant-Time Programming

Authors: Quan Zhou, Sixuan Dang, and Danfeng Zhang

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Timing channel attacks are emerging as real-world threats to computer security. In cryptographic systems, an effective countermeasure against timing attacks is the constant-time programming discipline. However, strictly enforcing the discipline manually is both time-consuming and error-prone. While various tools exist for analyzing/verifying constant-time programs, they sacrifice at least one feature among precision, soundness and efficiency. In this paper, we build CtChecker, a sound static analysis for constant-time programming. Under the hood, CtChecker uses a static information flow analysis to identify violations of the constant-time discipline. Despite the common wisdom that sound, static information flow analysis lacks precision for real-world applications, we show that by enabling field-sensitivity, context-sensitivity and partial flow-sensitivity, CtChecker reports fewer false positives compared with existing sound tools. Evaluation on real-world cryptographic systems shows that CtChecker analyzes 24K lines of source code in under one minute. Moreover, CtChecker reveals that some repaired code generated by program rewriters that supposedly remove timing channels is still not constant-time.
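
A hedged illustration of the constant-time discipline itself, not of CtChecker's analysis (the function names below are our own): the first comparison branches and returns early on secret data, leaking the length of the matching prefix through timing, while the second touches every byte regardless of the secret, which is the property an information flow analysis like CtChecker's checks statically.

// Secret-dependent early exit: running time depends on how many prefix bytes match.
fun leakyEquals(secret: ByteArray, guess: ByteArray): Boolean {
    if (secret.size != guess.size) return false
    for (i in secret.indices) {
        if (secret[i] != guess[i]) return false   // branch on secret data
    }
    return true
}

// Constant-time variant: accumulate differences without branching on secret data.
fun constantTimeEquals(secret: ByteArray, guess: ByteArray): Boolean {
    if (secret.size != guess.size) return false
    var diff = 0
    for (i in secret.indices) {
        diff = diff or ((secret[i].toInt() xor guess[i].toInt()) and 0xff)
    }
    return diff == 0
}

fun main() {
    val secret = "hunter2".toByteArray()
    println(leakyEquals(secret, "hunter1".toByteArray()))         // false, but timing leaks the prefix length
    println(constantTimeEquals(secret, "hunter2".toByteArray()))  // true, time independent of the contents
}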

Cite as

Quan Zhou, Sixuan Dang, and Danfeng Zhang. CtChecker: A Precise, Sound and Efficient Static Analysis for Constant-Time Programming. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 46:1-46:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{zhou_et_al:LIPIcs.ECOOP.2024.46,
  author =	{Zhou, Quan and Dang, Sixuan and Zhang, Danfeng},
  title =	{{\{CtChecker\}: A Precise, Sound and Efficient Static Analysis for Constant-Time Programming}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{46:1--46:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.46},
  URN =		{urn:nbn:de:0030-drops-208951},
  doi =		{10.4230/LIPIcs.ECOOP.2024.46},
  annote =	{Keywords: Information flow control, static analysis, side channel, constant-time programming}
}
Document
Invariants for One-Counter Automata with Disequality Tests

Authors: Dmitry Chistikov, Jérôme Leroux, Henry Sinclair-Banks, and Nicolas Waldburger

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
We study the reachability problem for one-counter automata in which transitions can carry disequality tests. A disequality test is a guard that prohibits a specified counter value. This reachability problem has been known to be NP-hard and in PSPACE, and characterising its computational complexity has been left as a challenging open question by Almagor, Cohen, Pérez, Shirmohammadi, and Worrell (2020). We reduce the complexity gap, placing the problem into the second level of the polynomial hierarchy, namely into the class coNP^NP. In the presence of both equality and disequality tests, our upper bound is at the third level, P^NP^NP. To prove this result, we show that non-reachability can be witnessed by a pair of invariants (forward and backward). These invariants are almost inductive. They aim to over-approximate only a "core" of the reachability set instead of the entire set. The invariants are also leaky: it is possible to escape the set. We complement this with separate checks as the leaks can only occur in a controlled way.
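
To make the model concrete: a one-counter automaton is a finite automaton with a single non-negative counter, and a disequality test on a transition forbids taking it when the counter equals a specified value. The Kotlin toy below (our own illustration; the Transition and reachable names are invented, and brute-force bounded search has nothing to do with the paper's invariant-based complexity analysis) just spells out the semantics on a tiny instance.

// A transition adds delta to the counter and, if forbidden is set, is disabled
// exactly when the counter equals that value (a disequality test).
data class Transition(val from: Int, val to: Int, val delta: Int, val forbidden: Long? = null)

fun reachable(
    transitions: List<Transition>,
    start: Pair<Int, Long>,
    target: Pair<Int, Long>,
    counterBound: Long
): Boolean {
    val seen = mutableSetOf(start)
    val queue = ArrayDeque(listOf(start))
    while (queue.isNotEmpty()) {
        val (state, counter) = queue.removeFirst()
        if (state == target.first && counter == target.second) return true
        for (t in transitions) {
            if (t.from != state) continue
            if (t.forbidden != null && counter == t.forbidden) continue  // disequality test
            val next = counter + t.delta
            if (next < 0 || next > counterBound) continue                // counter stays non-negative; bound the search
            val config = t.to to next
            if (seen.add(config)) queue.addLast(config)
        }
    }
    return false
}

fun main() {
    val ts = listOf(
        Transition(from = 0, to = 0, delta = 1, forbidden = 3L),  // increment loop, blocked when counter == 3
        Transition(from = 0, to = 1, delta = 0)                   // move to the target state
    )
    println(reachable(ts, 0 to 0L, 1 to 2L, counterBound = 10))   // true
    println(reachable(ts, 0 to 0L, 1 to 5L, counterBound = 10))   // false: the loop is disabled at counter == 3
}

In the second query the counter can never exceed 3, since the incrementing loop is forbidden exactly at that value; the paper's invariants are a way to certify such non-reachability without exploring configurations.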

Cite as

Dmitry Chistikov, Jérôme Leroux, Henry Sinclair-Banks, and Nicolas Waldburger. Invariants for One-Counter Automata with Disequality Tests. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chistikov_et_al:LIPIcs.CONCUR.2024.17,
  author =	{Chistikov, Dmitry and Leroux, J\'{e}r\^{o}me and Sinclair-Banks, Henry and Waldburger, Nicolas},
  title =	{{Invariants for One-Counter Automata with Disequality Tests}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.17},
  URN =		{urn:nbn:de:0030-drops-207898},
  doi =		{10.4230/LIPIcs.CONCUR.2024.17},
  annote =	{Keywords: Inductive invariant, Vector addition system, One-counter automaton}
}
Document
Coinductive Techniques for Checking Satisfiability of Generalized Nested Conditions

Authors: Lara Stoltenow, Barbara König, Sven Schneider, Andrea Corradini, Leen Lambers, and Fernando Orejas

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
We study nested conditions, a generalization of first-order logic to a categorical setting, and provide a tableau-based (semi-decision) procedure for checking (un)satisfiability and finite model generation. This generalizes earlier results on graph conditions. Furthermore we introduce a notion of witnesses, allowing the detection of infinite models in some cases. To ensure completeness, paths in a tableau must be fair, where fairness requires that all parts of a condition are processed eventually. Since the correctness arguments are non-trivial, we rely on coinductive proof methods and up-to techniques that structure the arguments. We distinguish between two types of categories: in categories where all sections are isomorphisms, we obtain a simpler tableau calculus that includes finite model generation; in categories where this requirement does not hold, model generation does not work, but we still obtain a sound and complete calculus.

Cite as

Lara Stoltenow, Barbara König, Sven Schneider, Andrea Corradini, Leen Lambers, and Fernando Orejas. Coinductive Techniques for Checking Satisfiability of Generalized Nested Conditions. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{stoltenow_et_al:LIPIcs.CONCUR.2024.39,
  author =	{Stoltenow, Lara and K\"{o}nig, Barbara and Schneider, Sven and Corradini, Andrea and Lambers, Leen and Orejas, Fernando},
  title =	{{Coinductive Techniques for Checking Satisfiability of Generalized Nested Conditions}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.39},
  URN =		{urn:nbn:de:0030-drops-208113},
  doi =		{10.4230/LIPIcs.CONCUR.2024.39},
  annote =	{Keywords: satisfiability, graph conditions, coinductive techniques, category theory}
}
Document
Entailing Generalization Boosts Enumeration

Authors: Dror Fried, Alexander Nadel, Roberto Sebastiani, and Yogev Shalmon

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Given a combinational circuit Γ with a single output o, AllSAT-CT is the problem of enumerating all solutions of Γ. Recently, we introduced several state-of-the-art AllSAT-CT algorithms based on satisfying generalization, which generalizes a given total Boolean solution to a smaller ternary solution that still satisfies the circuit. We implemented them in our open-source tool HALL. In this work we draw upon recent theoretical works suggesting that utilizing generalization algorithms, which can produce solutions that entail the circuit without satisfying it, may enhance enumeration. After considering the theory and adapting it to our needs, we enrich HALL’s AllSAT-CT algorithms by incorporating several newly implemented generalization schemes and additional SAT solvers. By conducting extensive experiments we show that entailing generalization substantially boosts HALL’s performance and quality (where quality corresponds to the number of reported generalized solutions per instance), with the best results achieved by combining satisfying and entailing generalization.
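
A naive sketch of the underlying idea of satisfying generalization (our own toy; the function allSatGeneralized is invented and resembles none of HALL's SAT-based algorithms): enumerate the total satisfying assignments of a small circuit and shrink each to a ternary cube by dropping inputs whose value cannot flip the output. The paper goes further by also exploiting generalizations that entail the circuit without satisfying it.

typealias Circuit = (List<Boolean>) -> Boolean

fun allSatGeneralized(circuit: Circuit, numInputs: Int): List<Map<Int, Boolean>> {
    val cubes = mutableListOf<Map<Int, Boolean>>()
    for (bits in 0 until (1 shl numInputs)) {
        val total = List(numInputs) { i -> (bits shr i) and 1 == 1 }
        if (!circuit(total)) continue
        if (cubes.any { c -> c.all { (i, v) -> total[i] == v } }) continue   // already covered by an earlier cube
        val cube = total.withIndex().associate { it.index to it.value }.toMutableMap()
        for (i in 0 until numInputs) {
            val saved = cube.remove(i) ?: continue
            val free = (0 until numInputs).filter { it !in cube }
            // Drop input i only if every completion of the free inputs still satisfies the circuit.
            val stillSat = (0 until (1 shl free.size)).all { m ->
                val assignment = MutableList(numInputs) { j -> cube[j] ?: false }
                free.forEachIndexed { k, j -> assignment[j] = (m shr k) and 1 == 1 }
                circuit(assignment)
            }
            if (!stillSat) cube[i] = saved
        }
        cubes += cube
    }
    return cubes
}

fun main() {
    // Circuit o = x0 || (x1 && x2): its 5 total solutions collapse to two generalized cubes.
    val cubes = allSatGeneralized({ x -> x[0] || (x[1] && x[2]) }, numInputs = 3)
    println(cubes)   // [{0=true}, {1=true, 2=true}]
}

The two cubes reported per instance correspond to the "quality" measure mentioned in the abstract: fewer, more general cubes cover the same solution set.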

Cite as

Dror Fried, Alexander Nadel, Roberto Sebastiani, and Yogev Shalmon. Entailing Generalization Boosts Enumeration. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fried_et_al:LIPIcs.SAT.2024.13,
  author =	{Fried, Dror and Nadel, Alexander and Sebastiani, Roberto and Shalmon, Yogev},
  title =	{{Entailing Generalization Boosts Enumeration}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.13},
  URN =		{urn:nbn:de:0030-drops-205351},
  doi =		{10.4230/LIPIcs.SAT.2024.13},
  annote =	{Keywords: Generalization, Minimization, Prime Implicant, AllSAT, SAT, Circuits}
}
Document
Refinements of Futures Past: Higher-Order Specification with Implicit Refinement Types

Authors: Anish Tondwalkar, Matthew Kolosick, and Ranjit Jhala

Published in: LIPIcs, Volume 194, 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
Refinement types decorate types with assertions that enable automatic verification. Like assertions, refinements are limited to binders that are in scope, and hence, cannot express higher-order specifications. Ghost variables circumvent this limitation but are prohibitively tedious to use as the programmer must divine and explicate their values at all call-sites. We introduce Implicit Refinement Types which turn ghost variables into implicit pair and function types, in a way that lets the refinement typechecker automatically synthesize their values at compile time. Implicit Refinement Types further take advantage of refinement type information, allowing them to be used as a lightweight verification tool, rather than merely as a technique to automate programming tasks. We evaluate the utility of Implicit Refinement Types by showing how they enable the modular specification and automatic verification of various higher-order examples including stateful protocols, access control, and resource usage.

Cite as

Anish Tondwalkar, Matthew Kolosick, and Ranjit Jhala. Refinements of Futures Past: Higher-Order Specification with Implicit Refinement Types. In 35th European Conference on Object-Oriented Programming (ECOOP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 194, pp. 18:1-18:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{tondwalkar_et_al:LIPIcs.ECOOP.2021.18,
  author =	{Tondwalkar, Anish and Kolosick, Matthew and Jhala, Ranjit},
  title =	{{Refinements of Futures Past: Higher-Order Specification with Implicit Refinement Types}},
  booktitle =	{35th European Conference on Object-Oriented Programming (ECOOP 2021)},
  pages =	{18:1--18:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-190-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{194},
  editor =	{M{\o}ller, Anders and Sridharan, Manu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2021.18},
  URN =		{urn:nbn:de:0030-drops-140615},
  doi =		{10.4230/LIPIcs.ECOOP.2021.18},
  annote =	{Keywords: Refinement Types, Implicit Parameters, Verification, Dependent Pairs}
}
Document
Artifact
mist: Refinements of Futures Past (Artifact)

Authors: Anish Tondwalkar, Matt Kolosick, and Ranjit Jhala

Published in: DARTS, Volume 7, Issue 2, Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
mist is a tiny language for teaching and experimenting with refinement types, in the style of https://github.com/ucsd-progsys/liquidhaskell. We use it as a platform for experimenting with, and as a demonstration of, implicit refinement types as presented in the ECOOP21 paper Refinements of Futures Past: Higher-Order Specification with Implicit Refinement Types. We start with the parser and AST we use to teach our undergraduate compilers class, and layer upon it a refinement type checker directly translated from the typing rules presented in that paper, which produces constraints that are solved with the liquid-fixpoint horn clause solver. We present source code and binaries for mist in a container image that includes installations of the competing tools we compare to.

Cite as

Anish Tondwalkar, Matt Kolosick, and Ranjit Jhala. mist: Refinements of Futures Past (Artifact). In Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021). Dagstuhl Artifacts Series (DARTS), Volume 7, Issue 2, pp. 3:1-3:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{tondwalkar_et_al:DARTS.7.2.3,
  author =	{Tondwalkar, Anish and Kolosick, Matt and Jhala, Ranjit},
  title =	{{mist: Refinements of Futures Past (Artifact)}},
  pages =	{3:1--3:11},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2021},
  volume =	{7},
  number =	{2},
  editor =	{Tondwalkar, Anish and Kolosick, Matt and Jhala, Ranjit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.7.2.3},
  URN =		{urn:nbn:de:0030-drops-140275},
  doi =		{10.4230/DARTS.7.2.3},
  annote =	{Keywords: Refinement Types, Implicit Parameters, Verification, Dependent Pairs}
}
Document
Language Based Verification Tools for Functional Programs (Dagstuhl Seminar 16131)

Authors: Marco Gaboardi, Suresh Jagannathan, Ranjit Jhala, and Stephanie Weirich

Published in: Dagstuhl Reports, Volume 6, Issue 3 (2016)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 16131 "Language Based Verification Tools for Functional Programs". This seminar is motivated by two converging trends in computing: the increasing reliance on software has led to an increased interest in formal, reliable means of ensuring that programs possess crucial correctness properties, and the adoption of higher-order functional languages has increased dramatically due to the web, multicore and "big data" revolutions. While the research community has studied the problem of language-based verification for imperative and first-order programs for decades – yielding important ideas like Floyd-Hoare Logics, Abstract Interpretation, Model Checking, and Separation Logic – it is only relatively recently that proposals have emerged for language-based verification tools for functional and higher-order programs. These techniques include advanced type systems, contract systems, model checking and program analyses specially tailored to exploit the structure of functional languages. These proposals come from groups based in diverse research communities, attacking the problem from different angles and yielding techniques with complementary strengths. This seminar brought a diverse set of researchers together so that we could: compare the strengths and limitations of different approaches; discuss ways to unify the complementary advantages of different techniques, both conceptually and in tools; share challenging open problems and application areas where verification may be most effective; identify novel ways of using verification techniques for other software engineering tasks such as code search or synthesis; and improve the pedagogy and hence adoption of such techniques.

Cite as

Marco Gaboardi, Suresh Jagannathan, Ranjit Jhala, and Stephanie Weirich. Language Based Verification Tools for Functional Programs (Dagstuhl Seminar 16131). In Dagstuhl Reports, Volume 6, Issue 3, pp. 59-77, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{gaboardi_et_al:DagRep.6.3.59,
  author =	{Gaboardi, Marco and Jagannathan, Suresh and Jhala, Ranjit and Weirich, Stephanie},
  title =	{{Language Based Verification Tools for Functional Programs (Dagstuhl Seminar 16131)}},
  pages =	{59--77},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{3},
  editor =	{Gaboardi, Marco and Jagannathan, Suresh and Jhala, Ranjit and Weirich, Stephanie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.3.59},
  URN =		{urn:nbn:de:0030-drops-61494},
  doi =		{10.4230/DagRep.6.3.59},
  annote =	{Keywords: Functional Programming, Type Systems, Contracts, Dependent Types, Model Checking, Program Analysis}
}
Document
Trust, but Verify: Two-Phase Typing for Dynamic Languages

Authors: Panagiotis Vekris, Benjamin Cosman, and Ranjit Jhala

Published in: LIPIcs, Volume 37, 29th European Conference on Object-Oriented Programming (ECOOP 2015)


Abstract
A key challenge when statically typing so-called dynamic languages is the ubiquity of value-based overloading, where a given function can dynamically reflect upon and behave according to the types of its arguments. Thus, to establish basic types, the analysis must reason precisely about values, but in the presence of higher-order functions and polymorphism, this reasoning itself can require basic types. In this paper we address this chicken-and-egg problem by introducing the framework of two-phased typing. The first "trust" phase performs classical, i.e. flow-, path- and value-insensitive type checking to assign basic types to various program expressions. When the check inevitably runs into "errors" due to value-insensitivity, it wraps problematic expressions with DEAD-casts, which explicate the trust obligations that must be discharged by the second phase. The second phase uses refinement typing, a flow- and path-sensitive analysis, that decorates the first phase's types with logical predicates to track value relationships and thereby verify the casts and establish other correctness properties for dynamically typed languages.
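
A stylized Kotlin rendering of the value-based overloading described above (our own example; the paper works on a dynamically typed core language, so the explicit casts below merely play the role of the DEAD-casts that the trust phase would insert and the refinement phase would then verify):

// negate behaves as integer negation or boolean negation depending on the value of flag.
fun negate(flag: Int, x: Any): Any =
    if (flag != 0) {
        -(x as Int)        // trust phase assumes x is an Int here; the cast is the obligation to discharge
    } else {
        !(x as Boolean)    // trust phase assumes x is a Boolean here
    }

fun main() {
    println(negate(1, 5))     // -5
    println(negate(0, true))  // false
    // negate(1, true) would fail its cast at run time; a refinement relating flag to the type of x
    // lets the second phase rule such calls out statically.
}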

Cite as

Panagiotis Vekris, Benjamin Cosman, and Ranjit Jhala. Trust, but Verify: Two-Phase Typing for Dynamic Languages. In 29th European Conference on Object-Oriented Programming (ECOOP 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 37, pp. 52-75, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{vekris_et_al:LIPIcs.ECOOP.2015.52,
  author =	{Vekris, Panagiotis and Cosman, Benjamin and Jhala, Ranjit},
  title =	{{Trust, but Verify: Two-Phase Typing for Dynamic Languages}},
  booktitle =	{29th European Conference on Object-Oriented Programming (ECOOP 2015)},
  pages =	{52--75},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-86-6},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{37},
  editor =	{Boyland, John Tang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2015.52},
  URN =		{urn:nbn:de:0030-drops-52173},
  doi =		{10.4230/LIPIcs.ECOOP.2015.52},
  annote =	{Keywords: Dynamic Languages, Type Systems, Refinement Types, Intersection Types, Overloading}
}
Document
Scripting Languages and Frameworks: Analysis and Verification (Dagstuhl Seminar 14271)

Authors: Fritz Henglein, Ranjit Jhala, Shriram Krishnamurthi, and Peter Thiemann

Published in: Dagstuhl Reports, Volume 4, Issue 6 (2015)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14271 "Scripting Languages and Frameworks: Analysis and Verification". The seminar brought together a broad spectrum of researchers working on the semantics, analysis and verification of scripting languages. In addition to talks describing the latest problems and research on the key issues, split roughly into the overarching themes of semantics, types, analysis, contracts, languages, and security, the seminar had breakout sessions devoted to crosscutting topics of broad interest across the community, including how to create shared analysis infrastructure, how to think about the semantics of contracts and blame, and the role of soundness in analyzing real-world languages, as well as several "tutorial" sessions explaining various new tools and techniques.

Cite as

Fritz Henglein, Ranjit Jhala, Shriram Krishnamurthi, and Peter Thiemann. Scripting Languages and Frameworks: Analysis and Verification (Dagstuhl Seminar 14271). In Dagstuhl Reports, Volume 4, Issue 6, pp. 84-107, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@Article{henglein_et_al:DagRep.4.6.84,
  author =	{Henglein, Fritz and Jhala, Ranjit and Krishnamurthi, Shriram and Thiemann, Peter},
  title =	{{Scripting Languages and Frameworks: Analysis and Verification (Dagstuhl Seminar 14271)}},
  pages =	{84--107},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2014},
  volume =	{4},
  number =	{6},
  editor =	{Henglein, Fritz and Jhala, Ranjit and Krishnamurthi, Shriram and Thiemann, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.4.6.84},
  URN =		{urn:nbn:de:0030-drops-47816},
  doi =		{10.4230/DagRep.4.6.84},
  annote =	{Keywords: Scripting Languages, Frameworks, Contracts, Types, Analysis, Semantics}
}
  • Refine by Author
  • 5 Jhala, Ranjit
  • 2 Tondwalkar, Anish
  • 1 Brockbernd, Bob
  • 1 Chen, Sheng
  • 1 Chiba, Shigeru

  • Refine by Classification
  • 2 Theory of computation → Program constructs
  • 2 Theory of computation → Program specifications
  • 2 Theory of computation → Program verification
  • 1 Computing methodologies → Concurrent programming languages
  • 1 Mathematics of computing → Solvers

  • Refine by Keyword
  • 4 Refinement Types
  • 2 Contracts
  • 2 Dependent Pairs
  • 2 Implicit Parameters
  • 2 Type Systems

  • Refine by Publication Year
  • 7 2024
  • 2 2021
  • 1 2014
  • 1 2015
  • 1 2016
