10 Search Results for "Flanagan, Cormac"


Document
Optimal Concolic Dynamic Partial Order Reduction

Authors: Mohammad Hossein Khoshechin Jorshari, Michalis Kokologiannakis, Rupak Majumdar, and Srinidhi Nagendra

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
Stateless model checking (SMC) of software implementations requires exploring both concurrency and data nondeterminism. Unfortunately, most SMC algorithms focus on efficient exploration of concurrency nondeterminism, thereby neglecting an important source of bugs. We present ConDpor, an SMC algorithm for unmodified Java programs that combines optimal dynamic partial order reduction (DPOR) for concurrency nondeterminism with concolic execution for data nondeterminism. ConDpor is sound, complete, optimal, and parametric w.r.t. the memory consistency model. Our experiments confirm that ConDpor is exponentially faster than DPOR with small-domain enumeration. Overall, ConDpor opens the door for efficient exploration of concurrent programs with data nondeterminism.
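
For intuition, here is a small, hypothetical Java sketch (not from the paper) of the kind of program such a tool must handle: the thread schedule is one source of nondeterminism and the input value is another. Enumerating every value of the input from a small domain multiplies the number of explorations, whereas a concolic approach can cover all inputs that drive the same branch with a single path constraint.

// Hypothetical illustration (not from the paper): a program with both
// concurrency nondeterminism (the two threads may interleave in several ways)
// and data nondeterminism (the value of 'input').
public class DataAndScheduleNondeterminism {
    static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        // In an SMC setting 'input' would be treated symbolically; here it is
        // read from the command line just to keep the sketch runnable.
        int input = args.length > 0 ? Integer.parseInt(args[0]) : 0;

        Thread writer = new Thread(() -> shared = 1);
        Thread reader = new Thread(() -> {
            // The branch outcome depends on both the schedule (was the write
            // observed?) and the data (is input > 100?). A concolic checker
            // covers all inputs taking the same branch with one constraint
            // instead of enumerating every value of 'input'.
            if (shared == 1 && input > 100) {
                throw new AssertionError("bug reachable only for some schedules and inputs");
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}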

Cite as

Mohammad Hossein Khoshechin Jorshari, Michalis Kokologiannakis, Rupak Majumdar, and Srinidhi Nagendra. Optimal Concolic Dynamic Partial Order Reduction. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 26:1-26:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{khoshechinjorshari_et_al:LIPIcs.CONCUR.2025.26,
  author =	{Khoshechin Jorshari, Mohammad Hossein and Kokologiannakis, Michalis and Majumdar, Rupak and Nagendra, Srinidhi},
  title =	{{Optimal Concolic Dynamic Partial Order Reduction}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{26:1--26:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.26},
  URN =		{urn:nbn:de:0030-drops-239765},
  doi =		{10.4230/LIPIcs.CONCUR.2025.26},
  annote =	{Keywords: Stateless model checking, dynamic symbolic execution}
}
Document
Partial-Order Reduction Is Hard

Authors: Frédéric Herbreteau, Sarah Larroze-Jardiné, and Igor Walukiewicz

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
The goal of partial-order methods is to accelerate the exploration of concurrent systems by examining only a representative subset of all possible runs. The stateful approach builds a transition system with representative runs, while the stateless method simply enumerates them. The stateless approach may be preferable if the transition system is tree-like; otherwise, the stateful method is more effective. In the last decade, optimality has been a guiding principle for developing stateless partial-order reduction algorithms, and it has without doubt contributed to major progress in the field. In this paper, we ask whether a similar principle can be found for the stateful approach. We show that in stateful exploration, a partial-order algorithm that is polynomially close to optimal cannot exist unless P = NP. The result holds even for acyclic programs with only await instructions. This lower bound justifies a systematic study of heuristics for stateful partial-order reduction. We propose the notion of an IFS oracle as a useful abstraction. The oracle can be used to obtain a very simple optimal stateless algorithm, which can then be adapted into a (non-optimal) stateful algorithm. While the oracle problem is NP-hard in general, we show a simple case in which it can be solved in linear time.
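
As a toy illustration (not from the paper) of the independence that partial-order methods exploit: two threads writing to disjoint variables reach the same final state under either interleaving, so one representative run suffices, whereas two writes to the same variable must be explored in both orders.

// Hypothetical illustration of independent vs. dependent actions.
public class RepresentativeRuns {
    static int x = 0, y = 0;

    public static void main(String[] args) throws InterruptedException {
        // Independent: the threads touch disjoint variables, so both
        // interleavings lead to the same state and a partial-order method
        // needs to explore only one representative run.
        Thread t1 = new Thread(() -> x = 1);
        Thread t2 = new Thread(() -> y = 1);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("independent: x=" + x + ", y=" + y); // always 1, 1

        // Dependent: both threads write the same variable, so the final value
        // depends on the order and a sound reduction must keep both runs.
        Thread t3 = new Thread(() -> x = 2);
        Thread t4 = new Thread(() -> x = 3);
        t3.start(); t4.start();
        t3.join(); t4.join();
        System.out.println("dependent: x=" + x); // 2 or 3, schedule-dependent
    }
}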

Cite as

Frédéric Herbreteau, Sarah Larroze-Jardiné, and Igor Walukiewicz. Partial-Order Reduction Is Hard. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{herbreteau_et_al:LIPIcs.CONCUR.2025.22,
  author =	{Herbreteau, Fr\'{e}d\'{e}ric and Larroze-Jardin\'{e}, Sarah and Walukiewicz, Igor},
  title =	{{Partial-Order Reduction Is Hard}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.22},
  URN =		{urn:nbn:de:0030-drops-239727},
  doi =		{10.4230/LIPIcs.CONCUR.2025.22},
  annote =	{Keywords: Formal verification, Concurrent systems, Partial-order reduction, Complexity}
}
Document
IsaBIL: A Framework for Verifying (In)correctness of Binaries in Isabelle/HOL

Authors: Matt Griffin, Brijesh Dongol, and Azalea Raad

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
This paper presents IsaBIL, a binary analysis framework in Isabelle/HOL that is based on the widely used Binary Analysis Platform (BAP). Specifically, in IsaBIL, we formalise BAP’s intermediate language, called BIL, and integrate it with Hoare logic (to enable proofs of correctness) as well as with incorrectness logic (to enable proofs of incorrectness). IsaBIL inherits the full flexibility of BAP, allowing us to verify binaries for a wide range of languages (C, C++, Rust), toolchains (LLVM, Ghidra), and target architectures (x86, RISC-V), and can also be used when the source code for a binary is unavailable. To make verification tractable, we develop a number of big-step rules that combine BIL’s existing small-step rules at different levels of abstraction to support reuse. We develop high-level reasoning rules for RISC-V instructions (our main target architecture) to further optimise verification. Additionally, we develop Isabelle proof tactics that exploit common patterns in C binaries for RISC-V to discharge large numbers of proof goals (often in the 100s) automatically. IsaBIL includes an Isabelle/ML-based parser for BIL programs, allowing one to automatically generate the associated Isabelle/HOL program locale from a BAP output. Taken together, IsaBIL provides a highly flexible proof environment for program binaries. As case studies, we prove correctness of key examples from the Joint Strike Fighter coding standards and the MITRE database.
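
For readers unfamiliar with the two logics mentioned above, they approximate program behaviour in opposite directions. The following standard background definitions (general folklore, not IsaBIL-specific) state them in LaTeX over a relational semantics ⟦C⟧ of a command C:

% Hoare triple (over-approximation): every state reachable from P satisfies Q.
% \llbracket / \rrbracket are from the stmaryrd package.
\{P\}\; C\; \{Q\} \iff \forall s\, s'.\ P(s) \land (s,s') \in \llbracket C \rrbracket \Rightarrow Q(s')

% Incorrectness triple (under-approximation): every state satisfying Q is
% genuinely reachable from some state satisfying P, so reported bugs are real.
[P]\; C\; [Q] \iff \forall s'.\ Q(s') \Rightarrow \exists s.\ P(s) \land (s,s') \in \llbracket C \rrbracket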

Cite as

Matt Griffin, Brijesh Dongol, and Azalea Raad. IsaBIL: A Framework for Verifying (In)correctness of Binaries in Isabelle/HOL. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 14:1-14:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{griffin_et_al:LIPIcs.ECOOP.2025.14,
  author =	{Griffin, Matt and Dongol, Brijesh and Raad, Azalea},
  title =	{{IsaBIL: A Framework for Verifying (In)correctness of Binaries in Isabelle/HOL}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{14:1--14:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.14},
  URN =		{urn:nbn:de:0030-drops-233070},
  doi =		{10.4230/LIPIcs.ECOOP.2025.14},
  annote =	{Keywords: Binary Analysis Platform, Isabelle/HOL, Hoare Logic, Incorrectness Logic}
}
Document
Pearl/Brave New Idea
Contract Systems Need Domain-Specific Notations (Pearl/Brave New Idea)

Authors: Cameron Moy, Ryan Jung, and Matthias Felleisen

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
Contract systems enable programmers to state specifications and have them enforced at run time. First-order contracts are expressed using ordinary code, while higher-order contracts are expressed using the notation familiar from type systems. Most interface descriptions, though, come with properties that involve not just assertions about single method calls, but entire call chains. Typical contract systems cannot express these specifications concisely. Such specifications demand domain-specific notations. In response, this paper proposes that contract systems abstract over the notation used for stating specifications. It presents an architecture for such a system, some illustrative examples, and an evaluation in terms of common notations from the literature.
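
To make the point concrete, here is a small, hypothetical Java-flavored sketch (the paper's setting is a higher-order contract system, not Java): the property "open must precede read, and read must not follow close" constrains an entire call chain, so it is enforced below with a tiny state machine wrapper rather than with per-call pre- and postconditions.

// Hypothetical sketch: enforcing a call-ordering ("protocol") contract at run
// time with a small state machine, the kind of property that single-call
// pre/postconditions cannot express concisely.
public class ProtocolContract {
    enum State { CLOSED, OPEN }

    interface Source {
        void open();
        String read();
        void close();
    }

    // Wraps any Source and checks the call chain: open -> read* -> close.
    static Source checked(Source inner) {
        return new Source() {
            private State state = State.CLOSED;

            public void open() {
                if (state != State.CLOSED) throw new IllegalStateException("open() called twice");
                state = State.OPEN;
                inner.open();
            }
            public String read() {
                if (state != State.OPEN) throw new IllegalStateException("read() before open() or after close()");
                return inner.read();
            }
            public void close() {
                if (state != State.OPEN) throw new IllegalStateException("close() without open()");
                state = State.CLOSED;
                inner.close();
            }
        };
    }

    public static void main(String[] args) {
        Source s = checked(new Source() {
            public void open() {}
            public String read() { return "data"; }
            public void close() {}
        });
        s.open();
        System.out.println(s.read());
        s.close();
        // s.read(); // would fail the protocol contract: read after close
    }
}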

Cite as

Cameron Moy, Ryan Jung, and Matthias Felleisen. Contract Systems Need Domain-Specific Notations (Pearl/Brave New Idea). In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 42:1-42:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{moy_et_al:LIPIcs.ECOOP.2025.42,
  author =	{Moy, Cameron and Jung, Ryan and Felleisen, Matthias},
  title =	{{Contract Systems Need Domain-Specific Notations}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{42:1--42:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.42},
  URN =		{urn:nbn:de:0030-drops-233348},
  doi =		{10.4230/LIPIcs.ECOOP.2025.42},
  annote =	{Keywords: software contracts, domain-specific languages}
}
Document
A Rewriting Theory for Quantum λ-Calculus

Authors: Claudia Faggian, Gaetan Lopez, and Benoît Valiron

Published in: LIPIcs, Volume 326, 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)


Abstract
Quantum lambda calculus has been studied mainly as an idealized programming language: its evaluation essentially corresponds to a deterministic abstract machine. Very little work has been done to develop a rewriting theory for the quantum lambda calculus. Recent advances in the theory of probabilistic rewriting give us a way to tackle this task with tools that were unavailable a decade ago. Our primary focus is on standardization and normalization results.

Cite as

Claudia Faggian, Gaetan Lopez, and Benoît Valiron. A Rewriting Theory for Quantum λ-Calculus. In 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 326, pp. 47:1-47:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{faggian_et_al:LIPIcs.CSL.2025.47,
  author =	{Faggian, Claudia and Lopez, Gaetan and Valiron, Beno\^{i}t},
  title =	{{A Rewriting Theory for Quantum \lambda-Calculus}},
  booktitle =	{33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)},
  pages =	{47:1--47:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-362-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{326},
  editor =	{Endrullis, J\"{o}rg and Schmitz, Sylvain},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2025.47},
  URN =		{urn:nbn:de:0030-drops-228046},
  doi =		{10.4230/LIPIcs.CSL.2025.47},
  annote =	{Keywords: quantum lambda-calculus, probabilistic rewriting, operational semantics, asymptotic normalization, principles of quantum programming languages}
}
Document
Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning

Authors: Cormac Flanagan and Stephen N. Freund

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Rely-guarantee (RG) logic uses thread interference specifications (relies and guarantees) to reason about the correctness of multithreaded software. Unfortunately, RG logic requires each function postcondition to be "stabilized" or specialized to the behavior of other threads, making it difficult to write function specifications that are reusable at multiple call sites. This paper presents mover logic, which extends RG logic to address this problem via the notion of atomic functions. Atomic functions behave as if they execute serially without interference from concurrent threads, and so they can be assigned more general and reusable specifications that avoid the stabilization requirement of RG logic. Several practical verifiers (Calvin-R, QED, CIVL, Armada, Anchor, etc.) have demonstrated the modularity benefits of atomic function specifications. However, the complexity of these systems and their correctness proofs makes them challenging to understand and extend. Mover logic formalizes the central ideas of reduction in a declarative program logic that provides a foundation for future work in this area.
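
For intuition, here is a hypothetical Java sketch (not from the paper) of the kind of function that is atomic by reduction: the lock acquisition is a right mover, the release is a left mover, and the protected accesses cannot conflict with other threads, so the body behaves as one atomic step and can carry a simple, reusable, sequential-looking specification.

// Hypothetical sketch: a function that is atomic by reduction, so its
// specification can read like a sequential one.
public class Counter {
    private int count = 0;
    private final Object lock = new Object();

    // Informal spec: atomically increments the counter by exactly one.
    // Reduction argument: the acquire is a right mover, the release is a left
    // mover, and 'count' is only accessed with the lock held, so the whole
    // body reduces to a single atomic step.
    public void increment() {
        synchronized (lock) {
            count = count + 1;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 2000: each call behaves as one atomic step
    }
}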

Cite as

Cormac Flanagan and Stephen N. Freund. Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 16:1-16:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{flanagan_et_al:LIPIcs.ECOOP.2024.16,
  author =	{Flanagan, Cormac and Freund, Stephen N.},
  title =	{{Mover Logic: A Concurrent Program Logic for Reduction and Rely-Guarantee Reasoning}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{16:1--16:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.16},
  URN =		{urn:nbn:de:0030-drops-208654},
  doi =		{10.4230/LIPIcs.ECOOP.2024.16},
  annote =	{Keywords: concurrent program verification, reduction, rely-guarantee reasoning, synchronization}
}
Document
SCICO Journal-first
Abstracting Gradual References (SCICO Journal-first)

Authors: Matías Toro and Éric Tanter

Published in: LIPIcs, Volume 166, 34th European Conference on Object-Oriented Programming (ECOOP 2020)


Abstract
Gradual typing is an effective approach to integrate static and dynamic typing, which supports the smooth transition between both extremes via the (programmer-controlled) precision of type annotations [Jeremy Siek and Walid Taha, 2006; Siek et al., 2015]. Imprecision is normally introduced via the unknown type ?, e.g. function type Int → Bool is more precise than ? → ?, and both are more precise than ?. Gradual typing relates types of different precision using consistent type relations, such as type consistency (resp. consistent subtyping), the gradual counterpart of type equality (resp. subtyping). For instance, ? → Int is consistent with Bool → ?. This approach has been applied in a number of settings, such as objects [Jeremy Siek and Walid Taha, 2007], subtyping [Jeremy Siek and Walid Taha, 2007; Ronald Garcia et al., 2016], effects [Bañados Schwerter et al., 2014; Bañados Schwerter et al., 2016], ownership [Ilya Sergey and Dave Clarke, 2012], typestates [Roger Wolff et al., 2011; Ronald Garcia et al., 2014], information-flow typing [Tim Disney and Cormac Flanagan, 2011; Luminous Fennell and Peter Thiemann, 2013; Matías Toro et al., 2018], session types [Igarashi et al., 2017], refinements [Nico Lehmann and Éric Tanter, 2017], set-theoretic types [Castagna and Lanvin, 2017], Hoare logic [Johannes Bader et al., 2018], parametric polymorphism [Amal Ahmed et al., 2011; Ahmed et al., 2017; Ina and Igarashi, 2011; Igarashi et al., 2017; Ningning Xie et al., 2018; Matías Toro et al., 2019], and references [Jeremy Siek and Walid Taha, 2006; Herman et al., 2010; Siek et al., 2015].

In particular, gradual typing for mutable references has seen the elaboration of various possible semantics: invariant references [Jeremy Siek and Walid Taha, 2006], guarded references [Herman et al., 2010], monotonic references [Siek et al., 2015], and permissive references [Siek et al., 2015]. Invariant references are a form of references where reference types are invariant with respect to type consistency. Guarded references admit variance thanks to systematic runtime checks on reference reads and writes; the runtime type of an allocated cell never changes during execution. Guarded references have been formulated in a space-efficient coercion calculus, which ensures that gradual programs do not accumulate unbounded pending checks during execution. Hereafter, we refer to this language as HCC. Monotonic references favor efficiency over flexibility by only allowing reference cells to vary towards more precise types. This allows reference operations in statically-typed regions to safely proceed without any runtime checks. Permissive references are the most flexible approach, in which reference cells can be initialized and updated to any value of any type at any time.

These four developments reflect different design decisions with respect to gradual references: is the reference type constructor variant under consistency? Can the programmer specify a precise bound on the static type of a reference, and hence on the corresponding heap cell type? Can the heap cell type evolve its precision at runtime, and if so, how? There is obviously no absolute answer to these questions, as they reflect different tradeoffs, such as between efficiency and precision. This work explores the semantics that results from the application of a systematic methodology to gradualize static type systems.
Currently, two methodologies for gradualizing statically-typed languages can be found in the literature: Abstracting Gradual Typing (AGT) [Ronald Garcia et al., 2016] and the Gradualizer [Matteo Cimini and Jeremy Siek, 2016]. In this work, we consider the AGT methodology, as it naturally scales to auxiliary structures such as a mutable heap. The AGT methodology helps to systematically construct gradually-typed languages by using abstract interpretation [Cousot and Cousot, 1977] at the type level. In brief, AGT interprets gradual types as an abstraction of sets of possible static types, formally captured through a Galois connection. The static semantics of a gradual language are then derived by lifting the semantics of a statically-typed language through this connection, and the dynamic semantics follow by Curry-Howard from proof normalization of the type safety argument. The AGT methodology has been shown to be effective in many contexts: records and subtyping [Ronald Garcia et al., 2016], type-and-effects [Bañados Schwerter et al., 2014; Bañados Schwerter et al., 2016], refinement types [Nico Lehmann and Éric Tanter, 2017; Niki Vazou et al., 2018], set-theoretic and union types [Castagna and Lanvin, 2017; Matías Toro and Éric Tanter, 2017], information-flow typing [Matías Toro et al., 2018], and parametric polymorphism [Matías Toro et al., 2019]. However, this methodology has never been applied to mutable references in isolation. Although Toro et al. [Matías Toro et al., 2018] apply AGT to a language with references, they only gradualize the security levels of types (e.g. Ref Int_?), not whole types (e.g. Ref ? is not supported).

In this article we answer the following open questions: Which semantics for gradually-typed references follows by systematically applying AGT? Does AGT justify one of the existing approaches, or does it suggest yet another design? Can we recover the other semantics for gradual references, and if so, how? This article first reviews the different existing gradual approaches to mutable references through examples. It then presents the semantics for gradual references that is obtained by applying AGT, and shows how to accommodate the other semantics. More specifically, this work makes the following contributions:
- We present λ_REF~, a gradual language with support for mutable references. We derive λ_REF~ by applying the AGT methodology to λ_REF, a fully-static simple language with mutable references. This is the first application of AGT that focuses on gradually-typed mutable references.
- We prove that λ_REF~ satisfies the gradual guarantee of Siek et al. [Siek et al., 2015]. We also present the first formal statement and proof of the conservative extension of the dynamic semantics of the static language [Siek et al., 2015] for a gradual language derived using AGT.
- We prove that the derived language, λ_REF~, corresponds to the semantics of guarded references from HCC. Formally, given a λ_REF~ term and its compilation to HCC^+ (an adapted version of HCC extended with conditionals and binary operations), we prove that both terms are bisimilar, and that consequently they either both terminate, both fail, or both diverge.
- We observe that λ_REF~ and HCC^+ differ in the order in which runtime checks are combined. As a result, HCC is space efficient whereas λ_REF~ is not: we can write programs in λ_REF~ that may accumulate an unbounded number of checks. We formalize the changes needed in the dynamic semantics of λ_REF~ to achieve space efficiency. This technique for recovering space efficiency is in fact independent of mutable references, and is therefore applicable to other gradual languages derived with AGT.
- We formally describe how to support other gradual reference semantics in λ_REF~ by presenting λ_REF~^𝗉𝗆, an extension that additionally supports both permissive and monotonic references. Finally, we prove for the first time that monotonic references satisfy the dynamic gradual guarantee, a non-trivial result that requires careful consideration of updates to the store.

Additionally, we implemented λ_REF~ as an interactive prototype that displays both typing derivations and reduction traces. All the examples mentioned in this paper are readily available in the online prototype at https://pleiad.cl/grefs. As a result, this paper sheds further light on the design space of gradual languages with mutable references and contributes to deepening the understanding of the AGT methodology.
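
For intuition about guarded references specifically, the following hypothetical Java approximation (the paper works in a λ-calculus, not Java) shows a cell that remembers the type it was allocated at and checks every imprecisely-typed read and write against it at run time, so the runtime type of the cell never changes.

// Hypothetical sketch of the "guarded references" idea: the cell keeps a type
// guard from allocation time and checks reads and writes against it, which is
// roughly how runtime checks guard imprecisely-typed accesses.
public class GuardedRef<T> {
    private final Class<T> guard; // stands in for the cell's allocation type
    private Object contents;      // imprecisely-typed view of the cell

    public GuardedRef(Class<T> guard, T initial) {
        this.guard = guard;
        this.contents = initial;
    }

    // A write through an imprecise ("unknown") view is checked at run time.
    public void setUnknown(Object value) {
        if (!guard.isInstance(value)) {
            throw new ClassCastException("write violates the cell's type guard");
        }
        contents = value;
    }

    // Reads are likewise checked against the guard before being handed back.
    public T get() {
        return guard.cast(contents);
    }

    public static void main(String[] args) {
        GuardedRef<Integer> r = new GuardedRef<>(Integer.class, 0);
        r.setUnknown(42);              // fine: the guard accepts Integers
        System.out.println(r.get());
        try {
            r.setUnknown("hello");     // rejected: the heap cell type never changes
        } catch (ClassCastException e) {
            System.out.println("guard triggered: " + e.getMessage());
        }
    }
}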

Cite as

Matías Toro and Éric Tanter. Abstracting Gradual References (SCICO Journal-first). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 33:1-33:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{toro_et_al:LIPIcs.ECOOP.2020.33,
  author =	{Toro, Mat{\'\i}as and Tanter, \'{E}ric},
  title =	{{Abstracting Gradual References}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{33:1--33:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.33},
  URN =		{urn:nbn:de:0030-drops-131900},
  doi =		{10.4230/LIPIcs.ECOOP.2020.33},
  annote =	{Keywords: Gradual Typing, Mutable References, Abstract interpretation}
}
Document
TLM.open: a SystemC/TLM Frontend for the CADP Verification Toolbox

Authors: Claude Helmstetter

Published in: LITES, Volume 1, Issue 1 (2014). Leibniz Transactions on Embedded Systems, Volume 1, Issue 1


Abstract
SystemC/TLM models, which are C++ programs, allow the simulation of embedded software before low-level hardware descriptions are available, and are used as golden models for hardware verification. The verification of SystemC/TLM models is an important issue, since an error in the model can mislead the system designers or reveal an error in the specifications. An open-source simulator for SystemC/TLM is provided, but there are no tools for formal verification. In order to apply model checking to a SystemC/TLM model, a semantics for standard C++ code and for the specific SystemC/TLM features must be provided. The usual approach relies on translating the SystemC/TLM code into a formal language for which a model checker is available. We propose another approach that eliminates this error-prone translation effort. Given a SystemC/TLM program, the transitions are obtained by executing the original code using g++ and an extended SystemC library, and we ask the user to provide additional functions to store the current model state. These additional functions generally represent less than 20% of the size of the original model, and allow one to apply all CADP verification tools to the SystemC/TLM model itself.
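
The user-provided state-storage functions are the crux of this approach: they let an explicit-state explorer recognize states it has already visited while the model's own code computes the transitions. A rough, hypothetical sketch of that idea (in Java rather than C++/SystemC, and unrelated to the actual TLM.open API):

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: the model author supplies a state-capture function,
// and an explicit-state explorer uses it to detect revisited states while
// executing the model's own code to obtain transitions.
public class StateCaptureSketch {
    // The user-provided part: how to serialize the relevant model state.
    interface StateCapture {
        String snapshot();
    }

    // A toy "model": one counter wrapping around modulo 4.
    static class ToyModel implements StateCapture {
        int counter = 0;
        void step() { counter = (counter + 1) % 4; }
        public String snapshot() { return "counter=" + counter; }
    }

    public static void main(String[] args) {
        ToyModel model = new ToyModel();
        Set<String> visited = new HashSet<>();
        // Execute the model itself and stop once a state repeats.
        while (visited.add(model.snapshot())) {
            model.step();
        }
        System.out.println("explored " + visited.size() + " distinct states");
    }
}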

Cite as

Claude Helmstetter. TLM.open: a SystemC/TLM Frontend for the CADP Verification Toolbox. In LITES, Volume 1, Issue 1 (2014). Leibniz Transactions on Embedded Systems, Volume 1, Issue 1, pp. 02:1-02:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@Article{helmstetter:LITES-v001-i001-a002,
  author =	{Helmstetter, Claude},
  title =	{{TLM.open: a SystemC/TLM Frontend for the CADP Verification Toolbox}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{02:1--02:18},
  ISSN =	{2199-2002},
  year =	{2014},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES-v001-i001-a002},
  URN =		{urn:nbn:de:0030-drops-192441},
  doi =		{10.4230/LITES-v001-i001-a002},
  annote =	{Keywords: Model checking, Verification, Simulation, SystemC, Transactional modeling}
}
Document
09361 Abstracts Collection – Design and Validation of Concurrent Systems

Authors: Cormac Flanagan, Susanne Graf, Madhusan Parthasarathy, and Shaz Quadeer

Published in: Dagstuhl Seminar Proceedings, Volume 9361, Design and Validation of Concurrent Systems (2010)


Abstract
The Dagstuhl Seminar 09361 "Design and Validation of Concurrent Systems" was held in Schloss Dagstuhl – Leibniz Center for Informatics from August 30 to September 4, 2009. During the seminar, participants presented their current research, and ongoing work and open problems were discussed. This paper reviews the seminar topics and goals, and provides abstracts of the presentations given during the seminar. Links to extended abstracts or full papers are provided, if available.

Cite as

Cormac Flanagan, Susanne Graf, Madhusan Parthasarathy, and Shaz Quadeer. 09361 Abstracts Collection – Design and Validation of Concurrent Systems. In Design and Validation of Concurrent Systems. Dagstuhl Seminar Proceedings, Volume 9361, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{flanagan_et_al:DagSemProc.09361.1,
  author =	{Flanagan, Cormac and Graf, Susanne and Parthasarathy, Madhusan and Quadeer, Shaz},
  title =	{{09361 Abstracts Collection – Design and Validation of Concurrent Systems}},
  booktitle =	{Design and Validation of Concurrent Systems},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9361},
  editor =	{Cormac Flanagan and Madhusan Parthasarathy and Shaz Quadeer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09361.1},
  URN =		{urn:nbn:de:0030-drops-25498},
  doi =		{10.4230/DagSemProc.09361.1},
  annote =	{Keywords: Concurrency, Specification, Programming, Verification, Validation, Testing}
}
Document
Verifying Optimistic Concurrency: Prophecy Variables and Backward Reasoning

Authors: Serdar Tasiran, Ali Sezgin, and Shaz Quadeer

Published in: Dagstuhl Seminar Proceedings, Volume 9361, Design and Validation of Concurrent Systems (2010)


Abstract
Several static proof systems have been developed over the years for verifying shared-memory multithreaded programs. These proof systems make use of auxiliary variables to express mutual exclusion or non-interference among shared-variable accesses. Typically, the values of these variables summarize the past of the program execution; consequently, they are known as history variables. Prophecy variables, on the other hand, are the temporal dual of history variables, and their values summarize the future of the program execution. In this paper, we show that prophecy variables are useful for locally constructing proofs of systems with optimistic concurrency. To enable the fullest use of prophecy variables in proof construction, we introduce tressa annotations as the dual of the well-known assert annotations. A tressa claim states a condition for reverse reachability from an end state of the program, much like an assert claim states a condition for forward reachability from the initial state of the program. We present the proof rules and the notion of correctness of a program for two-way reasoning in a static setting: forward in time for assert claims, backward in time for tressa claims. Even though the interaction between the two is non-trivial, the formalization is intuitive and accessible. We demonstrate how to verify implementations based on optimistic concurrency, a programming paradigm that allows conflicts to be handled after they occur. We have incorporated our proof rules into the QED verifier and have used our implementation to verify a handful of small but sophisticated algorithms. Our experience shows that the proof steps and annotations follow the programmer's intuition closely, making the proof itself a natural extension of the implementation.
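
As a hypothetical illustration of the target programming pattern (tressa annotations belong to the QED proof system, not to Java), consider a lock-free counter that optimistically reads, computes, and then detects conflicts afterwards with compareAndSet; the comments mark where a forward assert-style claim and a backward tressa-style claim would naturally sit.

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of optimistic concurrency: conflicts are detected after
// they occur (via compareAndSet) rather than prevented with a lock.
public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int seen = value.get();   // optimistic read
            int updated = seen + 1;   // local computation, no lock held
            // assert-style (forward) claim: 'updated' is one more than a value
            // the counter actually held at some point in the past.
            if (value.compareAndSet(seen, updated)) {
                // tressa-style (backward) claim, informally: on every execution
                // that reaches a successful end state, this CAS is the step at
                // which the increment took effect exactly once.
                return updated;
            }
            // Conflict detected after the fact: another thread moved the
            // counter; retry with a fresh read.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OptimisticCounter c = new OptimisticCounter();
        Runnable work = () -> { for (int i = 0; i < 1000; i++) c.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.value.get()); // always 2000
    }
}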

Cite as

Serdar Tasiran, Ali Sezgin, and Shaz Quadeer. Verifying Optimistic Concurrency: Prophecy Variables and Backward Reasoning. In Design and Validation of Concurrent Systems. Dagstuhl Seminar Proceedings, Volume 9361, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{tasiran_et_al:DagSemProc.09361.2,
  author =	{Tasiran, Serdar and Sezgin, Ali and Quadeer, Shaz},
  title =	{{Verifying Optimistic Concurrency: Prophecy Variables and Backward Reasoning}},
  booktitle =	{Design and Validation of Concurrent Systems},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9361},
  editor =	{Cormac Flanagan and Madhusan Parthasarathy and Shaz Quadeer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09361.2},
  URN =		{urn:nbn:de:0030-drops-24306},
  doi =		{10.4230/DagSemProc.09361.2},
  annote =	{Keywords: Concurrency, Program Verification, Static Analysis}
}
  • Refine by Type
  • 10 Document/PDF
  • 5 Document/HTML

  • Refine by Publication Year
  • 5 2025
  • 1 2024
  • 1 2020
  • 1 2014
  • 2 2010

  • Refine by Author
  • 2 Flanagan, Cormac
  • 2 Quadeer, Shaz
  • 1 Dongol, Brijesh
  • 1 Faggian, Claudia
  • 1 Felleisen, Matthias

  • Refine by Series/Journal
  • 7 LIPIcs
  • 1 LITES
  • 2 DagSemProc

  • Refine by Classification
  • 1 Hardware
  • 1 Hardware → Functional verification
  • 1 Hardware → Hardware validation
  • 1 Hardware → Transaction-level verification
  • 1 Software and its engineering → Concurrent programming languages

  • Refine by Keyword
  • 2 Concurrency
  • 2 Verification
  • 1 Abstract interpretation
  • 1 Binary Analysis Platform
  • 1 Complexity
