4 Search Results for "van de Pol, Jaco"


Document
Validation of QBF Encodings with Winning Strategies

Authors: Irfansha Shaik, Maximilian Heisinger, Martina Seidl, and Jaco van de Pol

Published in: LIPIcs, Volume 271, 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023)


Abstract
When using a QBF solver for solving application problems encoded to quantified Boolean formulas (QBFs), mainly two things can potentially go wrong: (1) the solver could be buggy and return a wrong result or (2) the encoding could be incorrect. To ensure the correctness of solvers, sophisticated fuzzing and testing techniques have been presented. To ultimately trust a solving result, solvers have to provide a proof certificate that can be independently checked. Much less attention, however, has been paid to the question of how to ensure the correctness of encodings. The validation of QBF encodings is particularly challenging because of the variable dependencies introduced by the quantifiers. In contrast to SAT, the solution of a true QBF is not simply a variable assignment, but a winning strategy. For each existential variable x, a winning strategy provides a function that defines how to set x based on the values of the universal variables that precede x in the quantifier prefix. Winning strategies for false formulas are defined dually. In this paper, we provide a tool for validating encodings using winning strategies and interactive game play with a QBF solver. As the representation of winning strategies can get huge, we also introduce validation based on partial winning strategies. Finally, we employ winning strategies for testing if two different encodings of one problem have the same solutions.
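
As a toy illustration of this validation idea (a minimal Python sketch with a hypothetical two-variable formula; the paper's tool works on real QBF encodings and uses interactive game play with a QBF solver), a candidate winning strategy for a true QBF can be checked by substituting its Skolem function for the existential variable and testing the matrix under every universal assignment:

from itertools import product

# Toy QBF (hypothetical example): forall u1, u2. exists x. (u1 or u2) <-> x
universals = ["u1", "u2"]

def matrix(a):
    # Propositional matrix of the toy formula.
    return (a["u1"] or a["u2"]) == a["x"]

def strategy_x(a):
    # Candidate winning strategy: a Skolem function for x over the
    # universal variables that precede x in the prefix.
    return a["u1"] or a["u2"]

def validate(strategy):
    """Check the strategy against every universal assignment (2^n cases)."""
    for values in product([False, True], repeat=len(universals)):
        a = dict(zip(universals, values))
        a["x"] = strategy(a)          # existential move dictated by the strategy
        if not matrix(a):
            return False, a           # counterexample: the strategy loses here
    return True, None

ok, cex = validate(strategy_x)
print("winning strategy" if ok else f"refuted by assignment {cex}")

Enumerating all universal assignments only scales to tiny prefixes; the paper instead validates encodings via game play with a QBF solver and also supports partial winning strategies.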

Cite as

Irfansha Shaik, Maximilian Heisinger, Martina Seidl, and Jaco van de Pol. Validation of QBF Encodings with Winning Strategies. In 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 271, pp. 24:1-24:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{shaik_et_al:LIPIcs.SAT.2023.24,
  author =	{Shaik, Irfansha and Heisinger, Maximilian and Seidl, Martina and van de Pol, Jaco},
  title =	{{Validation of QBF Encodings with Winning Strategies}},
  booktitle =	{26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023)},
  pages =	{24:1--24:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-286-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{271},
  editor =	{Mahajan, Meena and Slivovsky, Friedrich},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2023.24},
  URN =		{urn:nbn:de:0030-drops-184863},
  doi =		{10.4230/LIPIcs.SAT.2023.24},
  annote =	{Keywords: QBF, Validation, Winning Strategy, Equivalence, Certificates}
}
Document
Programming with Purity Reflection: Peaceful Coexistence of Effects, Laziness, and Parallelism

Authors: Magnus Madsen and Jaco van de Pol

Published in: LIPIcs, Volume 263, 37th European Conference on Object-Oriented Programming (ECOOP 2023)


Abstract
We present purity reflection, a programming language feature that enables higher-order functions to inspect the purity of their function arguments and to vary their behavior based on this information. The upshot is that operations on data structures can selectively use lazy and/or parallel evaluation while ensuring that side effects are never lost or re-ordered. The technique builds on a recent Hindley-Milner style type and effect system based on Boolean unification which supports both effect polymorphism and complete type inference. We illustrate that avoiding the so-called 'poisoning problem' is crucial to support purity reflection. We propose several new data structures that use purity reflection to switch between eager and lazy, sequential and parallel evaluation. We propose a DelayList, which is maximally lazy but switches to eager evaluation for impure operations. We also propose a DelayMap, which is maximally lazy in its values, but also exploits eager and parallel evaluation. We implement purity reflection as an extension of the Flix programming language. We present a new effect-aware form of monomorphization that eliminates purity reflection at compile-time. Finally, we evaluate the cost of this new monomorphization on compilation time and on code size, and determine that it is minimal.
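
As a rough runtime analogue of this behaviour (a hypothetical Python sketch; in Flix the purity of the function argument is known statically from the type and effect system and the reflection is eliminated by monomorphization, so no runtime flag exists), a DelayList-style container can stay lazy for pure operations and switch to eager, in-order evaluation for impure ones:

class DelayList:
    """Rough runtime analogue of a purity-reflecting lazy list (hypothetical
    sketch; in Flix the purity of f is inferred statically, not passed as a flag)."""

    def __init__(self, thunks):
        self._thunks = list(thunks)          # zero-argument callables

    @staticmethod
    def of(items):
        # Wrap already-known values in trivial thunks.
        return DelayList([(lambda v=x: v) for x in items])

    def map(self, f, is_pure):
        if is_pure:
            # Pure f: stay maximally lazy by composing thunks; nothing runs yet.
            return DelayList([(lambda t=t: f(t())) for t in self._thunks])
        # Impure f: force eagerly, in order, so effects are never lost or reordered.
        return DelayList.of([f(t()) for t in self._thunks])

    def to_list(self):
        return [t() for t in self._thunks]   # force all remaining thunks

xs = DelayList.of(range(3))
lazy = xs.map(lambda x: x * x, is_pure=True)                        # still unevaluated
eager = xs.map(lambda x: print("effect on", x) or x, is_pure=False) # runs now, in order
print(lazy.to_list())                                               # forces: [0, 1, 4]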

Cite as

Magnus Madsen and Jaco van de Pol. Programming with Purity Reflection: Peaceful Coexistence of Effects, Laziness, and Parallelism. In 37th European Conference on Object-Oriented Programming (ECOOP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 263, pp. 18:1-18:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{madsen_et_al:LIPIcs.ECOOP.2023.18,
  author =	{Madsen, Magnus and van de Pol, Jaco},
  title =	{{Programming with Purity Reflection: Peaceful Coexistence of Effects, Laziness, and Parallelism}},
  booktitle =	{37th European Conference on Object-Oriented Programming (ECOOP 2023)},
  pages =	{18:1--18:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-281-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{263},
  editor =	{Ali, Karim and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2023.18},
  URN =		{urn:nbn:de:0030-drops-182112},
  doi =		{10.4230/LIPIcs.ECOOP.2023.18},
  annote =	{Keywords: type and effect systems, purity reflection, lazy evaluation, parallel evaluation}
}
Document
Invited Talk
Concurrent Algorithms and Data Structures for Model Checking (Invited Talk)

Authors: Jaco van de Pol

Published in: LIPIcs, Volume 140, 30th International Conference on Concurrency Theory (CONCUR 2019)


Abstract
Model checking is a successful method for checking properties on the state space of concurrent, reactive systems. Since it is based on exhaustive search, scaling this method to industrial systems has been a challenge since its conception. Research has focused on clever data structures and algorithms, to reduce the size of the state space or its representation; smart search heuristics, to reveal potential bugs and counterexamples early; and high-performance computing, to deploy the brute-force processing power of clusters of compute servers. The main challenge is to combine these approaches: brute force alone (when implemented carefully) can bring a linear speedup in the number of processors. This is great, since it reduces model-checking times from days to minutes. On the other hand, proper algorithms and data structures can lead to exponential gains. Therefore, the parallelization bonus is only real if we manage to speed up clever algorithms. There are some obstacles though: many linear-time graph algorithms depend on a depth-first exploration order, which is hard to parallelize. Examples include the detection of strongly connected components (SCC) and the nested depth-first search (NDFS) algorithm. Both are used in model checking LTL properties. Symbolic representations, like binary decision diagrams (BDDs), reduce model checking to "pointer-chasing", leading to irregular memory-access patterns. This poses severe challenges for achieving actual speedup in (clusters of) modern multi-core computer architectures. This talk presents some of the solutions found over the last 10 years, which led to the high-performance model checker LTSmin [Gijs Kant et al., 2015]. These include parallel NDFS (based on the PhD thesis of Alfons Laarman [Alfons Laarman, 2014]), the parallel detection of SCCs with concurrent Union-Find (based on the PhD thesis of Vincent Bloemen [Vincent Bloemen, 2019]), and concurrent BDDs (based on the PhD thesis of Tom van Dijk [Tom van Dijk, 2016]). Finally, I will sketch a perspective on moving forward from high-performance model checking to high-performance synthesis algorithms. Examples include parameter synthesis for stochastic and timed systems, and strategy synthesis for (stochastic and timed) games.
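
For reference, the sequential nested depth-first search underlying the parallel variants can be sketched as follows (a textbook-style Python sketch on a toy graph, not the LTSmin implementation); its correctness argument relies on depth-first post-order, which is exactly what makes it hard to parallelize:

# Sequential nested DFS for accepting-cycle detection (textbook version).
def has_accepting_cycle(succ, init, accepting):
    blue, red = set(), set()

    def dfs_red(s, seed):
        # Inner search: look for a cycle back to the accepting seed state.
        red.add(s)
        for t in succ(s):
            if t == seed:
                return True
            if t not in red and dfs_red(t, seed):
                return True
        return False

    def dfs_blue(s):
        # Outer search: when backtracking from an accepting state, start the inner search.
        blue.add(s)
        for t in succ(s):
            if t not in blue and dfs_blue(t):
                return True
        if s in accepting and dfs_red(s, s):
            return True
        return False

    return dfs_blue(init)

# Tiny example graph (hypothetical) with an accepting cycle 1 -> 2 -> 1.
graph = {0: [1], 1: [2], 2: [1]}
print(has_accepting_cycle(lambda s: graph.get(s, []), 0, accepting={1}))  # True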

Cite as

Jaco van de Pol. Concurrent Algorithms and Data Structures for Model Checking (Invited Talk). In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{vandepol:LIPIcs.CONCUR.2019.4,
  author =	{van de Pol, Jaco},
  title =	{{Concurrent Algorithms and Data Structures for Model Checking}},
  booktitle =	{30th International Conference on Concurrency Theory (CONCUR 2019)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-121-4},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{140},
  editor =	{Fokkink, Wan and van Glabbeek, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2019.4},
  URN =		{urn:nbn:de:0030-drops-109066},
  doi =		{10.4230/LIPIcs.CONCUR.2019.4},
  annote =	{Keywords: model checking, parallel algorithms, concurrent datastructures}
}
Document
A Typical Verification Challenge for the GRID

Authors: Jaco van de Pol

Published in: Dagstuhl Seminar Proceedings, Volume 8332, Distributed Verification and Grid Computing (2008)


Abstract
A typical verification challenge for the GRID community is presented. The concrete challenge is to implement a simple recursive algorithm for finding the strongly connected components in a graph. The graph is typically stored in the collective memory of a number of computers, so a distributed algorithm is necessary. The implementation should be efficient and scalable, and separate synchronization and implementation details from the purely algorithmic aspects. In the end, a framework is envisaged for distributed algorithms on very large graphs. This would be useful to explore various alternative algorithmic choices.
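
One candidate for such a simple recursive algorithm is the forward-backward decomposition, which needs only reachability searches (for instance breadth-first) and recursion, so it distributes more naturally than DFS-based SCC algorithms. The following sequential Python sketch of the recursion is an illustration on a toy graph, not a distributed implementation:

from collections import deque

def reachable(start, succ, within):
    """BFS from start, restricted to the vertex set `within`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        v = frontier.popleft()
        for w in succ(v):
            if w in within and w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen

def sccs(vertices, succ, pred):
    """Recursive forward-backward SCC decomposition (sequential sketch;
    each reachability search could itself be distributed or parallelized)."""
    if not vertices:
        return []
    pivot = next(iter(vertices))
    fwd = reachable(pivot, succ, vertices)   # forward-reachable from the pivot
    bwd = reachable(pivot, pred, vertices)   # backward-reachable from the pivot
    scc = fwd & bwd                          # the pivot's SCC
    # Recurse on the three remaining regions; no SCC crosses these cuts.
    return ([scc]
            + sccs(fwd - scc, succ, pred)
            + sccs(bwd - scc, succ, pred)
            + sccs(vertices - fwd - bwd, succ, pred))

# Tiny example (hypothetical): two SCCs, {0, 1, 2} and {3}.
edges  = {0: [1], 1: [2], 2: [0, 3], 3: []}
redges = {0: [2], 1: [0], 2: [1], 3: [2]}
print(sccs(set(edges), lambda v: edges[v], lambda v: redges[v]))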

Cite as

Jaco van de Pol. A Typical Verification Challenge for the GRID. In Distributed Verification and Grid Computing. Dagstuhl Seminar Proceedings, Volume 8332, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{vandepol:DagSemProc.08332.3,
  author =	{van de Pol, Jaco},
  title =	{{A Typical Verification Challenge for the GRID}},
  booktitle =	{Distributed Verification and Grid Computing},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8332},
  editor =	{Bal, Henri E. and Brim, Lubos and Leucker, Martin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08332.3},
  URN =		{urn:nbn:de:0030-drops-16293},
  doi =		{10.4230/DagSemProc.08332.3},
  annote =	{Keywords: Strongly connected components, distributed algorithms, breadth first search}
}
  • Refine by Author
  • 4 van de Pol, Jaco
  • 1 Heisinger, Maximilian
  • 1 Madsen, Magnus
  • 1 Seidl, Martina
  • 1 Shaik, Irfansha

  • Refine by Classification
  • 1 Computing methodologies → Artificial intelligence
  • 1 Software and its engineering → Model checking
  • 1 Theory of computation → Logic and verification
  • 1 Theory of computation → Parallel algorithms
  • 1 Theory of computation → Program semantics

  • Refine by Keyword
  • 1 Certificates
  • 1 Equivalence
  • 1 QBF
  • 1 Strongly connected components
  • 1 Validation

  • Refine by Type
  • 4 document

  • Refine by Publication Year
  • 2 2023
  • 1 2008
  • 1 2019
