Search Results

Documents authored by Jagannathan, Suresh


Document
Morpheus: Automated Safety Verification of Data-Dependent Parser Combinator Programs

Authors: Ashish Mishra and Suresh Jagannathan

Published in: LIPIcs, Volume 263, 37th European Conference on Object-Oriented Programming (ECOOP 2023)


Abstract
Parser combinators are a well-known mechanism for the compositional construction of parsers, and have been shown to be particularly useful in writing parsers for rich grammars with data dependencies and global state. Verifying applications written using them, however, has proven challenging, in large part because of the inherently effectful nature of the parsers being composed and the difficulty of reasoning about the arbitrarily rich data-dependent semantic actions that can be associated with parsing actions. In this paper, we address these challenges by defining a parser combinator framework called Morpheus, equipped with abstractions for defining composable effects tailored for parsing and semantic actions, and a rich specification language used to define safety properties over the constituent parsers comprising a program. Even though its abstractions yield many of the same expressivity benefits as other parser combinator systems, Morpheus is carefully engineered to yield a substantially more tractable automated verification pathway. We demonstrate its utility in verifying a number of realistic, challenging parsing applications, including several that involve non-trivial data-dependent relations.
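
As a rough illustration of the data dependencies the abstract refers to, the OCaml sketch below builds a tiny, self-contained parser combinator library and a length-prefixed parser whose later behaviour depends on a value parsed earlier. It is not Morpheus and uses none of its abstractions or specification language; all names (satisfy, take, length_prefixed) are invented for this example.

(* A tiny parser-combinator sketch (not Morpheus itself); it only
   illustrates the kind of data dependency the paper targets, where a
   value consumed earlier (a length field) determines how much later
   input is consumed. *)

type 'a parser = char list -> ('a * char list) option

let return (x : 'a) : 'a parser = fun input -> Some (x, input)

let ( >>= ) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser =
  fun input ->
    match p input with
    | None -> None
    | Some (x, rest) -> f x rest

(* Consume one character satisfying [pred]. *)
let satisfy pred : char parser = function
  | c :: rest when pred c -> Some (c, rest)
  | _ -> None

let digit : int parser =
  satisfy (fun c -> c >= '0' && c <= '9') >>= fun c ->
  return (Char.code c - Char.code '0')

(* Consume exactly [n] characters; fail if fewer remain. *)
let rec take n : char list parser =
  if n = 0 then return []
  else
    satisfy (fun _ -> true) >>= fun c ->
    take (n - 1) >>= fun cs ->
    return (c :: cs)

(* The data-dependent parser: a one-digit length prefix followed by
   exactly that many payload characters. A safety property of the kind
   Morpheus verifies would relate the parsed length to the number of
   payload characters actually consumed. *)
let length_prefixed : char list parser =
  digit >>= fun n -> take n

let () =
  match length_prefixed ['3'; 'a'; 'b'; 'c'; 'd'] with
  | Some (payload, rest) ->
      Printf.printf "payload has %d chars, %d left over\n"
        (List.length payload) (List.length rest)
  | None -> print_endline "parse error"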

Cite as

Ashish Mishra and Suresh Jagannathan. Morpheus: Automated Safety Verification of Data-Dependent Parser Combinator Programs. In 37th European Conference on Object-Oriented Programming (ECOOP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 263, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{mishra_et_al:LIPIcs.ECOOP.2023.20,
  author =	{Mishra, Ashish and Jagannathan, Suresh},
  title =	{{Morpheus: Automated Safety Verification of Data-Dependent Parser Combinator Programs}},
  booktitle =	{37th European Conference on Object-Oriented Programming (ECOOP 2023)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-281-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{263},
  editor =	{Ali, Karim and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2023.20},
  URN =		{urn:nbn:de:0030-drops-182138},
  doi =		{10.4230/LIPIcs.ECOOP.2023.20},
  annote =	{Keywords: Parsers, Verification, Domain-specific languages, Functional programming, Refinement types, Type systems}
}
Document
Version Control Is for Your Data Too

Authors: Gowtham Kaki, KC Sivaramakrishnan, and Suresh Jagannathan

Published in: LIPIcs, Volume 136, 3rd Summit on Advances in Programming Languages (SNAPL 2019)


Abstract
Programmers regularly use distributed version control systems (DVCS) such as Git to facilitate collaborative software development. The primary purpose of a DVCS is to maintain the integrity of source code in the presence of concurrent, possibly conflicting edits from collaborators. In addition to safely merging concurrent non-conflicting edits, a DVCS extensively tracks source code provenance to help programmers contextualize and resolve conflicts. Provenance also facilitates debugging by letting programmers see diffs between versions and quickly find the edits that introduced an offending conflict (e.g., via git blame). In this paper, we posit that workflows analogous to those of collaborative software development also arise in distributed software execution; we argue that the characteristics that make a DVCS an ideal fit for the former also make it an ideal fit for the latter. Building on this observation, we propose a distributed programming model called carmot that views distributed shared state as an entity evolving in time, manifested as a sequence of persistent versions, and that relies on an explicitly defined merge semantics to reconcile concurrent conflicting versions. We show examples demonstrating how carmot simplifies distributed programming, while also enabling novel workflows integral to modern applications such as blockchains. We also describe a prototype implementation of carmot that we use to evaluate its practicality.
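
To make the explicit merge semantics concrete, the OCaml sketch below shows a three-way merge over two replicated data types, a counter and a set, reconciling concurrent versions against their lowest common ancestor. It does not use carmot's actual API; the module and function names are hypothetical, and the merge rules are the standard ones from the mergeable-replicated-data-type literature.

(* A minimal sketch of the explicit three-way merge semantics the paper
   builds on; this is not carmot's actual API. [merge ~lca v1 v2]
   reconciles two concurrent versions against their lowest common
   ancestor, just as a DVCS merges two branches from their merge base. *)

module Counter = struct
  type t = int

  (* Replay each branch's delta relative to the common ancestor. *)
  let merge ~lca v1 v2 = lca + (v1 - lca) + (v2 - lca)
end

module MergeableSet (Ord : Set.OrderedType) = struct
  module S = Set.Make (Ord)
  type t = S.t

  (* An element survives iff both branches still have it, or one branch
     added it (present in that branch but absent in the ancestor). *)
  let merge ~lca v1 v2 =
    S.union (S.inter v1 v2)
      (S.union (S.diff v1 lca) (S.diff v2 lca))
end

let () =
  (* Two replicas diverge from a counter at 10: one adds 5, the other 3. *)
  let merged = Counter.merge ~lca:10 15 13 in
  Printf.printf "merged counter = %d\n" merged   (* prints 18 *)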

Cite as

Gowtham Kaki, KC Sivaramakrishnan, and Suresh Jagannathan. Version Control Is for Your Data Too. In 3rd Summit on Advances in Programming Languages (SNAPL 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 136, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kaki_et_al:LIPIcs.SNAPL.2019.8,
  author =	{Kaki, Gowtham and Sivaramakrishnan, KC and Jagannathan, Suresh},
  title =	{{Version Control Is for Your Data Too}},
  booktitle =	{3rd Summit on Advances in Programming Languages (SNAPL 2019)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-113-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{136},
  editor =	{Lerner, Benjamin S. and Bod{\'\i}k, Rastislav and Krishnamurthi, Shriram},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SNAPL.2019.8},
  URN =		{urn:nbn:de:0030-drops-105516},
  doi =		{10.4230/LIPIcs.SNAPL.2019.8},
  annote =	{Keywords: replication, distributed systems, version control}
}
Document
Automated Detection of Serializability Violations Under Weak Consistency

Authors: Kartik Nagar and Suresh Jagannathan

Published in: LIPIcs, Volume 118, 29th International Conference on Concurrency Theory (CONCUR 2018)


Abstract
While a number of weak consistency mechanisms have been developed in recent years to improve performance and ensure availability in distributed, replicated systems, ensuring the correctness of transactional applications running on top of such systems remains a difficult and important problem. Serializability is a well-understood correctness criterion for transactional programs; understanding whether applications are serializable when executed in a weakly consistent environment, however, remains a challenging exercise. In this work, we combine a dependency-graph-based characterization of serializability with the framework of abstract executions to develop a fully automated approach for statically finding bounded serializability violations under any weak consistency model. We reduce the problem of checking serializability to the satisfiability of a formula in First-Order Logic (FOL), which allows us to harness the power of existing SMT solvers. We provide rules to automatically construct the FOL encoding from programs written in SQL (allowing loops and conditionals) and to express consistency specifications as FOL formulae. In addition to detecting bounded serializability violations, we also provide two orthogonal schemes to reason about unbounded executions by providing sufficient conditions (again, in the form of FOL formulae) whose satisfiability implies the absence of anomalies in any arbitrary execution. We have applied the proposed technique to TPC-C, a real-world database program with complex application logic, and were able to discover anomalies under Parallel Snapshot Isolation (PSI) and to verify serializability for unbounded executions under Snapshot Isolation (SI), two consistency mechanisms substantially weaker than serializability.
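
The encoding rests on the classical dependency-graph characterization of serializability: an execution is serializable exactly when its graph of write-read, write-write, and read-write dependencies between committed transactions is acyclic. The OCaml sketch below checks that condition dynamically for one concrete execution (the write-skew anomaly); it is only an illustration of the characterization, not the paper's static FOL/SMT encoding, and every identifier in it is invented for the example.

(* A dynamic sketch of the dependency-graph characterization underlying
   the paper's FOL encoding (not the static, SMT-backed analysis itself):
   any cycle found below witnesses a serializability violation. *)

(* A dependency edge between transaction ids; the kind of dependency
   (wr, ww, or rw) is noted in comments where it matters. *)
type edge = { src : int; dst : int }

(* Depth-first search for a cycle among transaction ids. *)
let has_cycle (txns : int list) (edges : edge list) : bool =
  let succs t =
    List.filter_map (fun e -> if e.src = t then Some e.dst else None) edges
  in
  let rec dfs path t =
    List.mem t path || List.exists (dfs (t :: path)) (succs t)
  in
  List.exists (dfs []) txns

let () =
  (* Write skew under snapshot isolation: each transaction reads what the
     other writes, producing two anti-dependency (rw) edges. *)
  let edges = [ { src = 1; dst = 2 }; { src = 2; dst = 1 } ] in
  if has_cycle [ 1; 2 ] edges then
    print_endline "dependency graph has a cycle: execution is not serializable"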

Cite as

Kartik Nagar and Suresh Jagannathan. Automated Detection of Serializability Violations Under Weak Consistency. In 29th International Conference on Concurrency Theory (CONCUR 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 118, pp. 41:1-41:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{nagar_et_al:LIPIcs.CONCUR.2018.41,
  author =	{Nagar, Kartik and Jagannathan, Suresh},
  title =	{{Automated Detection of Serializability Violations Under Weak Consistency}},
  booktitle =	{29th International Conference on Concurrency Theory (CONCUR 2018)},
  pages =	{41:1--41:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-087-3},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{118},
  editor =	{Schewe, Sven and Zhang, Lijun},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2018.41},
  URN =		{urn:nbn:de:0030-drops-95799},
  doi =		{10.4230/LIPIcs.CONCUR.2018.41},
  annote =	{Keywords: Weak Consistency, Serializability, Database Applications}
}
Document
Language Based Verification Tools for Functional Programs (Dagstuhl Seminar 16131)

Authors: Marco Gaboardi, Suresh Jagannathan, Ranjit Jhala, and Stephanie Weirich

Published in: Dagstuhl Reports, Volume 6, Issue 3 (2016)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 16131 "Language Based Verification Tools for Functional Programs". The seminar was motivated by two converging trends in computing: the increasing reliance on software, which has led to increased interest in formal, reliable means of ensuring that programs possess crucial correctness properties, and the dramatic increase in the adoption of higher-order functional languages driven by the web, multicore, and "big data" revolutions. While the research community has studied the problem of language-based verification for imperative and first-order programs for decades, yielding important ideas such as Floyd-Hoare Logics, Abstract Interpretation, Model Checking, and Separation Logic, it is only relatively recently that proposals have emerged for language-based verification tools for functional and higher-order programs. These techniques include advanced type systems, contract systems, model checking, and program analyses specially tailored to exploit the structure of functional languages. These proposals come from groups based in diverse research communities, attacking the problem from different angles and yielding techniques with complementary strengths. This seminar brought a diverse set of researchers together so that we could: compare the strengths and limitations of different approaches; discuss ways to unify the complementary advantages of different techniques, both conceptually and in tools; share challenging open problems and application areas where verification may be most effective; identify novel ways of using verification techniques for other software engineering tasks such as code search or synthesis; and improve the pedagogy, and hence the adoption, of such techniques.

Cite as

Marco Gaboardi, Suresh Jagannathan, Ranjit Jhala, and Stephanie Weirich. Language Based Verification Tools for Functional Programs (Dagstuhl Seminar 16131). In Dagstuhl Reports, Volume 6, Issue 3, pp. 59-77, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{gaboardi_et_al:DagRep.6.3.59,
  author =	{Gaboardi, Marco and Jagannathan, Suresh and Jhala, Ranjit and Weirich, Stephanie},
  title =	{{Language Based Verification Tools for Functional Programs (Dagstuhl Seminar 16131)}},
  pages =	{59--77},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{3},
  editor =	{Gaboardi, Marco and Jagannathan, Suresh and Jhala, Ranjit and Weirich, Stephanie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.3.59},
  URN =		{urn:nbn:de:0030-drops-61494},
  doi =		{10.4230/DagRep.6.3.59},
  annote =	{Keywords: Functional Programming, Type Systems, Contracts, Dependent Types, Model Checking, Program Analysis}
}
Document
Invited Talk
Relational Refinement Types for Higher-Order Shape Transformers (Invited Talk)

Authors: Suresh Jagannathan

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Understanding, discovering, and proving useful properties of sophisticated data structures are central problems in program verification. A particularly challenging exercise for shape analyses involves reasoning about sophisticated shape transformers that preserve the shape of a data structure (e.g., the data structure skeleton is always maintained as a balanced tree) or the relationship among values contained therein (e.g., the in-order relation of the elements of a tree or the parent-child relation of the elements of a heap) across program transformations. In this talk, we consider the specification and verification of such transformers for ML programs. The structural properties preserved by transformers can often be naturally expressed as inductively defined relations over the recursive structure evident in the definitions of the datatypes they manipulate. By carefully augmenting a refinement type system with support for reasoning about structural relations over algebraic datatypes, we realize an expressive yet decidable specification language, capable of capturing useful structural invariants, which can nonetheless be automatically verified using off-the-shelf type checkers and theorem provers. Notably, our technique generalizes to definitions of parametric relations for polymorphic data types, which, in turn, lead to highly composable specifications over higher-order polymorphic shape transformers.
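
The OCaml sketch below gives an executable stand-in for the kind of structural relation the talk describes: the in-order relation of a binary tree and a shape transformer (a right rotation) that preserves it. In the talk's system such an obligation is expressed in a refinement type and discharged statically by a type checker and theorem prover; the code here only checks it at runtime, and its names are illustrative rather than the talk's notation.

(* The relation and the transformer it constrains; in the talk's system
   the relation lives in a refinement type and is verified statically,
   whereas here it is merely checked dynamically. *)

type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

(* The inductively defined in-order relation, read off as a sequence. *)
let rec inorder = function
  | Leaf -> []
  | Node (l, x, r) -> inorder l @ (x :: inorder r)

(* A shape transformer: a right rotation at the root. A relational
   specification would state that it preserves the in-order relation. *)
let rotate_right = function
  | Node (Node (ll, y, lr), x, r) -> Node (ll, y, Node (lr, x, r))
  | t -> t

(* Runtime check of the obligation "rotate_right preserves the in-order
   relation of its argument". *)
let preserves_inorder f t = inorder (f t) = inorder t

let () =
  let t = Node (Node (Leaf, 1, Node (Leaf, 2, Leaf)), 3, Node (Leaf, 4, Leaf)) in
  assert (preserves_inorder rotate_right t);
  print_endline "rotation preserved the in-order relation"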

Cite as

Suresh Jagannathan. Relational Refinement Types for Higher-Order Shape Transformers (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, p. 9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{jagannathan:LIPIcs.FSTTCS.2015.9,
  author =	{Jagannathan, Suresh},
  title =	{{Relational Refinement Types for Higher-Order Shape Transformers}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{9--9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.9},
  URN =		{urn:nbn:de:0030-drops-56406},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.9},
  annote =	{Keywords: Relational Specifications, Inductive and Parametric Relations, Refinement Types, Shape Analysis, Data Structure Verification}
}
Document
Cooking the Books: Formalizing JMM Implementation Recipes

Authors: Gustavo Petri, Jan Vitek, and Suresh Jagannathan

Published in: LIPIcs, Volume 37, 29th European Conference on Object-Oriented Programming (ECOOP 2015)


Abstract
The Java Memory Model (JMM) is intended to characterize the meaning of concurrent Java programs. Because of the model's complexity, however, its definition cannot be easily transplanted into an optimizing Java compiler, even though an important rationale for its design was to ensure that Java compiler optimizations are not unduly hampered by the language's concurrency features. In response, Lea's JSR-133 Cookbook for Compiler Writers, an informal guide to realizing the principles underlying the JMM on different (relaxed-memory) platforms, was developed. The goal of the cookbook is to give compiler writers a relatively simple, yet reasonably efficient, set of reordering-based recipes that satisfy JMM constraints. In this paper, we present the first formalization of the cookbook, providing a semantic basis upon which the relationship between the recipes defined by the cookbook and the guarantees enforced by the JMM can be rigorously established. Notably, one artifact of our investigation is that the rules defined by the cookbook for compiling Java onto Power are inconsistent with the requirements of the JMM, a surprising result, and one that justifies our belief in the need for formally provable definitions to reason about sophisticated (and racy) concurrency patterns in Java and their implementation on modern relaxed-memory hardware. Our formalization enables simulation arguments between an architecture-independent intermediate representation of the kind suggested by Lea and machine abstractions for Power and x86. Moreover, we provide fixes for cookbook recipes that are inconsistent with the behaviors admitted by the target platform, and we prove the correctness of these repairs.

Cite as

Gustavo Petri, Jan Vitek, and Suresh Jagannathan. Cooking the Books: Formalizing JMM Implementation Recipes. In 29th European Conference on Object-Oriented Programming (ECOOP 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 37, pp. 445-469, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{petri_et_al:LIPIcs.ECOOP.2015.445,
  author =	{Petri, Gustavo and Vitek, Jan and Jagannathan, Suresh},
  title =	{{Cooking the Books: Formalizing JMM Implementation Recipes}},
  booktitle =	{29th European Conference on Object-Oriented Programming (ECOOP 2015)},
  pages =	{445--469},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-86-6},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{37},
  editor =	{Boyland, John Tang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2015.445},
  URN =		{urn:nbn:de:0030-drops-52334},
  doi =		{10.4230/LIPIcs.ECOOP.2015.445},
  annote =	{Keywords: Concurrency, Java, Memory Model, Relaxed-Memory}
}