28 Search Results for "Tse, David"


Document
Certified Implementability of Global Multiparty Protocols

Authors: Elaine Li and Thomas Wies

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Implementability is the decision problem at the heart of top-down approaches to protocol verification. In this paper, we present a mechanization of a recently proposed precise implementability characterization by Li et al. for a large class of protocols that subsumes many existing formalisms in the literature. Our protocols and implementations model asynchronous communication, and can exhibit infinite behavior. We improve upon their pen-and-paper results by unifying distinct formalisms, simplifying existing proof arguments, elaborating on the construction of canonical implementations, and even uncovering a subtle bug in the semantics for infinite words. As a corollary of our mechanization, we show that the original characterization of implementability applies even to protocols with infinitely many participants. We also contribute a reusable library for reasoning about generic communicating state machines. Our mechanization consists of about 15k lines of Rocq code. We believe that our mechanization can provide the foundation for deductively proving the implementability of protocols beyond the reach of prior work, extracting certified implementations for finite protocols, and investigating implementability under alternative asynchronous communication models.
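
As a concrete (if informal) picture of the communicating state machine model mechanized here, the sketch below simulates two participants exchanging messages over unbounded FIFO channels in Python. The protocol, state names, and helper are invented for illustration and are not taken from the paper's Rocq development.

from collections import deque

# Hypothetical two-participant protocol: p sends "req" to q, q replies "ack".
# Each participant is a finite state machine; channels are unbounded FIFO
# queues, modeling the asynchronous communication described in the abstract.
channels = {("p", "q"): deque(), ("q", "p"): deque()}

# Transitions: state -> (action, (peer, message), next_state);
# "!" means send, "?" means receive.
machines = {
    "p": {"p0": ("!", ("q", "req"), "p1"), "p1": ("?", ("q", "ack"), "p2")},
    "q": {"q0": ("?", ("p", "req"), "q1"), "q1": ("!", ("p", "ack"), "q2")},
}

state = {"p": "p0", "q": "q0"}

def step(who):
    """Fire one enabled transition of participant `who`, if any."""
    trans = machines[who].get(state[who])
    if trans is None:
        return False
    kind, (peer, msg), nxt = trans
    if kind == "!":                          # sends never block (asynchrony)
        channels[(who, peer)].append(msg)
    elif not channels[(peer, who)] or channels[(peer, who)][0] != msg:
        return False                         # receive waits for the message
    else:
        channels[(peer, who)].popleft()
    state[who] = nxt
    return True

while step("p") or step("q"):
    pass
assert state == {"p": "p2", "q": "q2"}       # both participants terminated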

Cite as

Elaine Li and Thomas Wies. Certified Implementability of Global Multiparty Protocols. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 15:1-15:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{li_et_al:LIPIcs.ITP.2025.15,
  author =	{Li, Elaine and Wies, Thomas},
  title =	{{Certified Implementability of Global Multiparty Protocols}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{15:1--15:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.15},
  URN =		{urn:nbn:de:0030-drops-246139},
  doi =		{10.4230/LIPIcs.ITP.2025.15},
  annote =	{Keywords: Asynchronous protocols, communicating state machines, labeled transition systems, infinite semantics, realizability, multiparty session types, choreographies, deadlock freedom}
}
Document
Exploration and Complexity Management in Graph-Based Programming Environments

Authors: Max Boksem and L. Thomas van Binsbergen

Published in: OASIcs, Volume 134, Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025)


Abstract
Programmers often rely on different environments depending on the nature of their tasks. For large-scale software projects, IDEs help manage complexity through structured abstractions like files, modules, and classes, and provide tools for code visualization and navigation. In contrast, exploratory programming tasks - such as data analysis, rapid prototyping, and design space exploration - are better served by interactive environments like REPLs and Notebooks, which support incremental development and immediate feedback. However, these tools tend to prioritize either complexity management or exploration, limiting their effectiveness across contexts. This paper investigates a hybrid graph-based programming environment that bridges these two modes by building on Incremental Graph Code (IGC), a graph-based system for structuring, visualizing, and interacting with source code. We explore how IGC can support both complexity management and exploratory programming through three key features: projectional views for aggregating and navigating interrelated code and documentation, graph-type nodes for encapsulating subgraphs to manage structural complexity, and an exploratory programming view for managing branching executions and promoting experimentation. Together, these features suggest that graph-based environments like IGC can offer a unified platform for both systematic software engineering and dynamic, exploratory development.

Cite as

Max Boksem and L. Thomas van Binsbergen. Exploration and Complexity Management in Graph-Based Programming Environments. In Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025). Open Access Series in Informatics (OASIcs), Volume 134, pp. 6:1-6:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{boksem_et_al:OASIcs.Programming.2025.6,
  author =	{Boksem, Max and van Binsbergen, L. Thomas},
  title =	{{Exploration and Complexity Management in Graph-Based Programming Environments}},
  booktitle =	{Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025)},
  pages =	{6:1--6:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-382-9},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{134},
  editor =	{Edwards, Jonathan and Perera, Roly and Petricek, Tomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Programming.2025.6},
  URN =		{urn:nbn:de:0030-drops-242906},
  doi =		{10.4230/OASIcs.Programming.2025.6},
  annote =	{Keywords: Graph-based Programming Environments, Exploratory Programming, Complexity Management, Incremental Graph Code (IGC), Projectional Views}
}
Document
On the Use of Concept Maps to Improve Student Skills in an Introductory Object-Oriented Analysis and Design Course

Authors: José F. Vélez, A. Belén Moreno, Victoria Ruiz-Parrado, and Ángel Sánchez

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
This paper presents ongoing work on the application of concept maps to teaching an introductory Object-Oriented Analysis and Design course for Computer Science students. There exist previous works that introduce the concept map model in these object-oriented courses. However, these works do not usually go deeply enough into the transition from concept maps to static class diagrams. Although concept maps present some clear advantages when defining the abstractions present in object-oriented software modeling, some drawbacks may also appear when the transformation from these maps to class diagrams is carried out through simplistic rules. In this paper we propose an approach, which is illustrated through a use case, to transition from a concept map to a static class diagram in a more realistic way.
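
To see the kind of simplistic rule the abstract warns about, here is a hypothetical one-to-one transformation in Python: every concept becomes a class and every labeled relation an association. The concept map is invented for illustration.

# Deliberately simplistic concept-map-to-class-diagram transformation.
concept_map = [
    ("Student", "enrolls in", "Course"),
    ("Student", "is a", "Person"),
    ("Course", "has", "Title"),
]

classes, associations = set(), []
for source, relation, target in concept_map:
    classes.update([source, target])
    associations.append((source, relation, target))

# Two pitfalls of the one-to-one rule: "Title" becomes a class although it is
# better modeled as an attribute of Course, and "is a" becomes a plain
# association although it should become inheritance.
print(sorted(classes))
print(associations)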

Cite as

José F. Vélez, A. Belén Moreno, Victoria Ruiz-Parrado, and Ángel Sánchez. On the Use of Concept Maps to Improve Student Skills in an Introductory Object-Oriented Analysis and Design Course. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 3:1-3:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{velez_et_al:OASIcs.ICPEC.2025.3,
  author =	{V\'{e}lez, Jos\'{e} F. and Moreno, A. Bel\'{e}n and Ruiz-Parrado, Victoria and S\'{a}nchez, \'{A}ngel},
  title =	{{On the Use of Concept Maps to Improve Student Skills in an Introductory Object-Oriented Analysis and Design Course}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{3:1--3:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.3},
  URN =		{urn:nbn:de:0030-drops-240337},
  doi =		{10.4230/OASIcs.ICPEC.2025.3},
  annote =	{Keywords: Object-Oriented Programming, Concept Map, Abstraction, Unified Modeling Language (UML), Static Class Diagram}
}
Document
Can Open Large Language Models Catch Vulnerabilities?

Authors: Diogo Gaspar Lopes, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
As Large Language Models (LLMs) become increasingly integrated into secure software development workflows, a critical question remains unanswered: can these models not only detect insecure code but also reliably classify vulnerabilities according to standardized taxonomies? In this work, we conduct a systematic evaluation of three state-of-the-art LLMs - Llama3, Codestral, and Deepseek R1 - using a carefully filtered subset of the Big-Vul dataset annotated with eight representative Common Weakness Enumeration categories. Adopting a closed-world classification setup, we assess each model’s performance in both identifying the presence of vulnerabilities and mapping them to the correct CWE label. Our findings reveal a sharp contrast between high detection rates and markedly poor classification accuracy, with frequent overgeneralization and misclassification. Moreover, we analyze model-specific biases and common failure modes, shedding light on the limitations of current LLMs in performing fine-grained security reasoning. These insights are especially relevant in educational contexts, where LLMs are being adopted as learning aids despite their limitations. A nuanced understanding of their behaviour is essential to prevent the propagation of misconceptions among students. Our results expose key challenges that must be addressed before LLMs can be reliably deployed in security-sensitive environments.
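
A minimal sketch of the closed-world evaluation loop described above. Here `query_model` is a hypothetical wrapper around a local LLM, `dataset` yields (code, is_vulnerable, cwe_label) triples from a Big-Vul-style corpus, and the eight placeholder labels are assumptions for illustration, not the paper's code or its exact label set.

# Closed-world setup: the model must answer with one label from a fixed set.
CWE_LABELS = ["CWE-20", "CWE-119", "CWE-125", "CWE-190",
              "CWE-264", "CWE-362", "CWE-416", "CWE-476"]

def evaluate(dataset, query_model):
    vulnerable = detected = classified = 0
    for code, is_vulnerable, cwe_label in dataset:
        prompt = (
            "Does this code contain a vulnerability? If so, answer with "
            f"exactly one label from {CWE_LABELS}; otherwise answer SAFE.\n"
            + code
        )
        answer = query_model(prompt).strip()
        if not is_vulnerable:
            continue
        vulnerable += 1
        if answer != "SAFE":
            detected += 1                  # detection: vulnerability flagged
            if answer == cwe_label:
                classified += 1            # classification: correct CWE
    # the abstract's "sharp contrast" is a large gap between these two rates
    return detected / vulnerable, classified / vulnerable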

Cite as

Diogo Gaspar Lopes, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque. Can Open Large Language Models Catch Vulnerabilities?. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gasparlopes_et_al:OASIcs.ICPEC.2025.4,
  author =	{Gaspar Lopes, Diogo and Espinha Gasiba, Tiago and Amburi, Sathwik and Pinto-Albuquerque, Maria},
  title =	{{Can Open Large Language Models Catch Vulnerabilities?}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{4:1--4:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.4},
  URN =		{urn:nbn:de:0030-drops-240340},
  doi =		{10.4230/OASIcs.ICPEC.2025.4},
  annote =	{Keywords: Large Language Models (LLMs), Secure Coding, CWE Classification, Machine Learning, Software Vulnerability Detection, Artificial Intelligence, Code Analysis, Big-Vul Dataset}
}
Document
Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education

Authors: Maria Camila Santos Galeano, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
With the increasing integration of large language models (LLMs) into software development and programming education, concerns have emerged about the security of AI-generated code. This study investigates the security of three open source code generation models - Codestral, DeepSeek R1, and LLaMA 3.3 70B - using structured prompts in Python, C, and Java. Some prompts were designed to explicitly trigger known vulnerability patterns, such as unsanitized input handling or unsafe memory operations, in order to assess how each model responds to security-sensitive tasks. The findings reveal recurring issues, including command execution vulnerabilities, insecure memory handling, and insufficient input validation. In response, we propose a set of recommendations for integrating secure prompt design and code auditing practices into developer training. These guidelines aim to help future developers generate safer code and better identify flaws in GenAI-generated output. This work offers an initial analysis of the limitations of GenAI-assisted code generation and provides actionable strategies to support the more secure and responsible use of these tools in professional and educational contexts.
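
One recurring issue class named in the abstract, command execution vulnerabilities, shown as a self-contained Python example written for this listing (not taken from the paper):

import subprocess

def ping_unsafe(host: str) -> int:
    # Vulnerable (CWE-78): `host` is spliced into a shell command, so input
    # like "localhost; rm -rf ~" executes an attacker-controlled command.
    return subprocess.call(f"ping -c 1 {host}", shell=True)

def ping_safe(host: str) -> int:
    # Safer: pass an argument vector and avoid the shell entirely; the input
    # reaches ping as a single argument and cannot inject extra commands.
    return subprocess.call(["ping", "-c", "1", host])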

Cite as

Maria Camila Santos Galeano, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque. Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{santosgaleano_et_al:OASIcs.ICPEC.2025.9,
  author =	{Santos Galeano, Maria Camila and Espinha Gasiba, Tiago and Amburi, Sathwik and Pinto-Albuquerque, Maria},
  title =	{{Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{9:1--9:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.9},
  URN =		{urn:nbn:de:0030-drops-240395},
  doi =		{10.4230/OASIcs.ICPEC.2025.9},
  annote =	{Keywords: Generative AI, Code Security, Programming Education, Prompt Engineering, Secure Coding, Static Analysis}
}
Document
APPROX
Multipass Linear Sketches for Geometric LP-Type Problems

Authors: N. Efe Çekirge, William Gay, and David P. Woodruff

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
LP-type problems such as the Minimum Enclosing Ball (MEB), Linear Support Vector Machine (SVM), Linear Programming (LP), and Semidefinite Programming (SDP) are fundamental combinatorial optimization problems, with many important applications in machine learning, such as classification, bioinformatics, and noisy learning. We study LP-type problems in several streaming and distributed big data models, giving ε-approximation linear sketching algorithms with a focus on the high accuracy regime with low dimensionality d, that is, when d < (1/ε)^0.999. Our main result is an O(ds) pass algorithm with O(s(√d/ε)^{3d/s}) ⋅ poly(d, log (1/ε)) space complexity in words, for any parameter s ∈ [1, d log (1/ε)], to solve ε-approximate LP-type problems of O(d) combinatorial and VC dimension. Notably, by taking s = d log (1/ε), we achieve space complexity polynomial in d and polylogarithmic in 1/ε, presenting exponential improvements in 1/ε over current algorithms. We complement our results by showing lower bounds of (1/ε)^Ω(d) for any 1-pass algorithm solving the (1 + ε)-approximation MEB and linear SVM problems, further motivating our multi-pass approach.
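
For readers unfamiliar with LP-type problems, the Minimum Enclosing Ball instance can be made concrete with the classic Badoiu-Clarkson (1+ε)-approximation below. This is an offline textbook iteration, included only to illustrate the problem; it is not the paper's multipass sketching algorithm.

import numpy as np

def approx_meb(points: np.ndarray, eps: float):
    # Badoiu-Clarkson: repeatedly step the center toward the farthest point;
    # ~1/eps^2 iterations give a (1+eps)-approximate enclosing ball.
    center = points[0].copy()
    for t in range(1, int(np.ceil(1 / eps**2)) + 1):
        farthest = points[np.argmax(np.linalg.norm(points - center, axis=1))]
        center += (farthest - center) / (t + 1)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

points = np.random.default_rng(0).normal(size=(1000, 3))
center, radius = approx_meb(points, eps=0.1)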

Cite as

N. Efe Çekirge, William Gay, and David P. Woodruff. Multipass Linear Sketches for Geometric LP-Type Problems. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 8:1-8:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cekirge_et_al:LIPIcs.APPROX/RANDOM.2025.8,
  author =	{\c{C}ekirge, N. Efe and Gay, William and Woodruff, David P.},
  title =	{{Multipass Linear Sketches for Geometric LP-Type Problems}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{8:1--8:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.8},
  URN =		{urn:nbn:de:0030-drops-243741},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.8},
  annote =	{Keywords: Streaming, sketching, LP-type problems}
}
Document
Linear Time Subsequence and Supersequence Regex Matching

Authors: Antoine Amarilli, Florin Manea, Tina Ringleb, and Markus L. Schmid

Published in: LIPIcs, Volume 345, 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)


Abstract
It is well-known that checking whether a given string w matches a given regular expression r can be done in quadratic time O(|w|⋅|r|) and that this cannot be improved to a truly subquadratic running time of O((|w|⋅|r|)^{1-ε}) assuming the strong exponential time hypothesis (SETH). We study a different matching paradigm where we ask instead whether w has a subsequence that matches r, and show that regex matching in this sense can be solved in linear time O(|w| + |r|). Further, the same holds if we ask for a supersequence. We show that the quantitative variants where we want to compute a longest or shortest subsequence or supersequence of w that matches r can be solved in O(|w|⋅|r|), i.e., asymptotically no worse than classical regex matching; and we show that O(|w| + |r|) is conditionally not possible for these problems. We also investigate these questions with respect to other natural string relations like the infix, prefix, left-extension or extension relation instead of the subsequence and supersequence relation. We further study the complexity of the universal problem where we ask if all subsequences (or supersequences, infixes, prefixes, left-extensions or extensions) of an input string satisfy a given regular expression.
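
The intuition behind the subsequence result can be sketched as follows: when characters of w may be skipped, an NFA's reachable state set only ever grows, so no work is undone. The toy simulation below uses a hand-built NFA for the regex a(b|c)d (an assumption for illustration); it captures the idea but not the paper's O(|w| + |r|) bound.

nfa = {  # state -> {character: set of successor states}
    0: {"a": {1}},
    1: {"b": {2}, "c": {2}},
    2: {"d": {3}},
}
accepting = {3}

def has_matching_subsequence(w: str) -> bool:
    # States are never removed: skipping a character is always allowed, so
    # the reachable set grows monotonically along w.
    reached = {0}
    for ch in w:
        for state in list(reached):
            reached |= nfa.get(state, {}).get(ch, set())
        if reached & accepting:
            return True
    return False

assert has_matching_subsequence("xaxbxd")     # subsequence "abd" matches
assert not has_matching_subsequence("dcba")   # nothing matches a(b|c)d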

Cite as

Antoine Amarilli, Florin Manea, Tina Ringleb, and Markus L. Schmid. Linear Time Subsequence and Supersequence Regex Matching. In 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 345, pp. 9:1-9:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{amarilli_et_al:LIPIcs.MFCS.2025.9,
  author =	{Amarilli, Antoine and Manea, Florin and Ringleb, Tina and Schmid, Markus L.},
  title =	{{Linear Time Subsequence and Supersequence Regex Matching}},
  booktitle =	{50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)},
  pages =	{9:1--9:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-388-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{345},
  editor =	{Gawrychowski, Pawe{\l} and Mazowiecki, Filip and Skrzypczak, Micha{\l}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2025.9},
  URN =		{urn:nbn:de:0030-drops-241162},
  doi =		{10.4230/LIPIcs.MFCS.2025.9},
  annote =	{Keywords: subsequence, supersequence, regular language, regular expression, automata}
}
Document
Omega-Regular Verification and Control for Distributional Specifications in MDPs

Authors: S. Akshay, Ouldouz Neysari, and Ðorđe Žikelić

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
A classical approach to studying Markov decision processes (MDPs) is to view them as state transformers. However, MDPs can also be viewed as distribution transformers, where an MDP under a strategy generates a sequence of probability distributions over MDP states. This view arises in several applications, even as the probabilistic model checking problem becomes much harder compared to the classical state transformer counterpart. It is known that even distributional reachability and safety problems become computationally intractable (Skolem- and positivity-hard). To address this challenge, recent works focused on sound but possibly incomplete methods for verification and control of MDPs under the distributional view. However, existing automated methods are applicable only to distributional reachability, safety and reach-avoidance specifications. In this work, we present the first automated method for verification and control of MDPs with respect to distributional omega-regular specifications. To achieve this, we propose a novel notion of distributional certificates, which are sound and complete proof rules for proving that an MDP under a distributionally memoryless strategy satisfies some distributional omega-regular specification. We then use our distributional certificates to design the first fully automated algorithms for verification and control of MDPs with respect to distributional omega-regular specifications. Our algorithms follow a template-based synthesis approach and provide soundness and relative completeness guarantees, while running in PSPACE. Our prototype implementation demonstrates practical applicability of our algorithms to challenging examples collected from the literature.
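
The distribution-transformer view is easy to state concretely: under a distributionally memoryless strategy, an MDP induces a row-stochastic matrix P, and the object of study is the sequence μ, μP, μP², ... of distributions over states. A toy numpy illustration follows (numbers invented; the paper's distributional certificates are proof rules about such sequences, not this computation):

import numpy as np

P = np.array([[0.9, 0.1, 0.0],    # transition matrix induced by a fixed
              [0.0, 0.5, 0.5],    # distributionally memoryless strategy
              [0.0, 0.0, 1.0]])
mu = np.array([1.0, 0.0, 0.0])    # initial distribution over the 3 states

for _ in range(5):
    mu = mu @ P                   # one distribution-transformer step
    # a distributional safety spec might, e.g., assert mu[2] <= 0.8 here
    print(mu, mu.sum())           # remains a probability distribution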

Cite as

S. Akshay, Ouldouz Neysari, and Ðorđe Žikelić. Omega-Regular Verification and Control for Distributional Specifications in MDPs. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{akshay_et_al:LIPIcs.CONCUR.2025.6,
  author =	{Akshay, S. and Neysari, Ouldouz and \v{Z}ikeli\'{c}, Ðor{\d}e},
  title =	{{Omega-Regular Verification and Control for Distributional Specifications in MDPs}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.6},
  URN =		{urn:nbn:de:0030-drops-239562},
  doi =		{10.4230/LIPIcs.CONCUR.2025.6},
  annote =	{Keywords: MDPs, Distributional objectives, \omega-regularity, Certificates}
}
Document
New Fault Domains for Conformance Testing of Finite State Machines

Authors: Frits Vaandrager and Ivo Melse

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
A fault domain reflects a tester’s assumptions about faults that may occur in an implementation and that need to be detected during testing. A fault domain that has been widely studied in the literature on black-box conformance testing is the class of finite state machines (FSMs) with at most m states. Numerous strategies for generating test suites have been proposed that guarantee fault coverage for this class. These so-called m-complete test suites grow exponentially in m-n, where n is the number of states of the specification, so one can only run them for small values of m-n. But the assumption that m-n is small is not realistic in practice. In his seminal paper from 1964, Hennie raised the challenge to design checking experiments in which the number of states may increase appreciably. In order to solve this long-standing open problem, we propose (much larger) fault domains that capture the assumption that all states in an implementation can be reached by first performing a sequence from some set A (typically a state cover for the specification), followed by k arbitrary inputs, for some small k. The number of states of FSMs in these fault domains grows exponentially in k. We present a sufficient condition for k-A-completeness of test suites with respect to these fault domains. Our condition implies k-A-completeness of two prominent m-complete test suite generation strategies, the Wp and HSI methods. Thus these strategies are complete for much larger fault domains than those for which they were originally designed, and thereby solve Hennie’s challenge. We show that three other prominent m-complete methods (H, SPY and SPYH) do not always generate k-A-complete test suites.
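
In miniature, a test suite exposes a fault when some input word yields different output words on the specification and the implementation (both Mealy machines). The machines and suite below are invented for illustration; they are not the Wp/HSI constructions discussed in the paper.

def run(machine, start, word):
    # Execute a Mealy machine: map each input symbol to an output symbol.
    state, outputs = start, []
    for symbol in word:
        output, state = machine[(state, symbol)]
        outputs.append(output)
    return outputs

spec = {("s0", "a"): ("0", "s1"), ("s0", "b"): ("1", "s0"),
        ("s1", "a"): ("1", "s0"), ("s1", "b"): ("0", "s1")}
# Faulty implementation with an extra state t2 that misbehaves on "b" --
# the kind of fault only exposed after longer access sequences.
impl = {("t0", "a"): ("0", "t1"), ("t0", "b"): ("1", "t0"),
        ("t1", "a"): ("1", "t0"), ("t1", "b"): ("0", "t2"),
        ("t2", "a"): ("1", "t0"), ("t2", "b"): ("1", "t2")}

suite = ["ab", "ba", "abb", "abba"]
for word in suite:
    if run(spec, "s0", word) != run(impl, "t0", word):
        print("fault detected by", word)      # "abb" and "abba" catch it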

Cite as

Frits Vaandrager and Ivo Melse. New Fault Domains for Conformance Testing of Finite State Machines. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 34:1-34:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{vaandrager_et_al:LIPIcs.CONCUR.2025.34,
  author =	{Vaandrager, Frits and Melse, Ivo},
  title =	{{New Fault Domains for Conformance Testing of Finite State Machines}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{34:1--34:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.34},
  URN =		{urn:nbn:de:0030-drops-239843},
  doi =		{10.4230/LIPIcs.CONCUR.2025.34},
  annote =	{Keywords: conformance testing, finite state machines, Mealy machines, apartness, observation tree, fault domains, k-A-complete test suites}
}
Document
An Efficient Data Structure and Algorithm for Long-Match Query in Run-Length Compressed BWT

Authors: Ahsan Sanaullah, Degui Zhi, and Shaojie Zhang

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
String matching problems in bioinformatics typically involve finding exact substring matches between a query and a reference text. Previous formulations often focus on maximum exact matches (MEMs). However, multiple occurrences of substrings of the query in the text that are long enough but not maximal may not be captured by MEMs. Such long matches can be informative, especially when the text is a collection of similar sequences such as genomes. In this paper, we describe a new type of match between a pattern and a text that is not necessarily maximal in the query, but still contains useful matching information: locally maximal exact matches (LEMs). There is usually a large number of LEMs, so we only consider those above some length threshold ℒ. These are referred to as long LEMs. The purpose of long LEMs is to capture substring matches between a query and a text that are not necessarily maximal in the pattern but still long enough to be important. Therefore, efficient algorithms for finding long LEMs are desired for these datasets. However, these datasets are too large to query on traditional string indexes. Fortunately, these datasets are very repetitive. Recently, compressed string indexes that take advantage of the redundancy in the data but retain efficient querying capability have been proposed as a solution. We therefore give an efficient algorithm for computing all the long LEMs of a query and a text in a BWT runs compressed string index. We describe an O(m+occ) expected time algorithm that relies on an O(r) words space string index for outputting all long LEMs of a pattern with respect to a text given the matching statistics of the pattern with respect to the text. Here m is the length of the query, occ is the number of long LEMs outputted, and r is the number of runs in the BWT of the text. The O(r) space string index we describe relies on an adaptation of the move data structure by Nishimoto and Tabei. We are able to support LCP[i] queries in constant time given SA[i]. In other words, we answer PLCP[i] queries in constant time. These PLCP queries enable the efficient long LEM query. Long LEMs may provide useful similarity information between a pattern and a text that MEMs may ignore. This information is particularly useful in pangenome and biobank scale haplotype panel contexts.
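
The object being queried can be pinned down with a direct quadratic-time reading of the definition (for intuition only; the paper's point is computing long LEMs in O(m + occ) time over an O(r)-word index, not by brute force). A LEM is an exact match extendable in neither direction; "long" means length at least L.

def long_lems(query: str, text: str, L: int):
    matches = []
    for i in range(len(query)):
        for j in range(len(text)):
            if query[i] != text[j]:
                continue
            if i > 0 and j > 0 and query[i - 1] == text[j - 1]:
                continue              # not left-maximal: start further left
            length = 0                # extend right as far as possible
            while (i + length < len(query) and j + length < len(text)
                   and query[i + length] == text[j + length]):
                length += 1
            if length >= L:
                matches.append((i, j, length))
    return matches

# e.g. finds the match "ATTAC" (query pos 1, text pos 0, length 5)
print(long_lems("GATTACA", "ATTACGATT", 3))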

Cite as

Ahsan Sanaullah, Degui Zhi, and Shaojie Zhang. An Efficient Data Structure and Algorithm for Long-Match Query in Run-Length Compressed BWT. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{sanaullah_et_al:LIPIcs.WABI.2025.17,
  author =	{Sanaullah, Ahsan and Zhi, Degui and Zhang, Shaojie},
  title =	{{An Efficient Data Structure and Algorithm for Long-Match Query in Run-Length Compressed BWT}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.17},
  URN =		{urn:nbn:de:0030-drops-239433},
  doi =		{10.4230/LIPIcs.WABI.2025.17},
  annote =	{Keywords: BWT, LEM, Long LEM, MEM, Run Length Compressed BWT, Move Data Structure, Pangenome}
}
Document
Unit Types for MiniZinc

Authors: Jip J. Dekker, Jason Nguyen, Peter J. Stuckey, and Guido Tack

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
Discrete optimization models often reason about discrete sets of objects, but discrete optimization solvers only deal with integers. One of the key challenges when building models for discrete optimization problems is avoiding bugs. Because the model only defines constraints, decisions, and an objective that are then run on a solver, bugs in the model can be very difficult to track down. Hence, modelling languages should have strong type systems to detect as many bugs as possible at the modelling level. In this paper, we propose unit types for MiniZinc. Unit types allow us to differentiate between different integers appearing in the model. Almost all integer decisions in models are either about a set of objects or some measurable resource type. Using unit types, we can add more type safety to our models by avoiding confusion of decisions on different resource types. Compared to other programming languages, unit types in our proposal are unusual. MiniZinc models often deal with multiple levels of granularity of the same resource, e.g., scheduling to the minute, but doing resource allocation on the half day, or use an unspecified granularity, e.g., the same job-shop scheduling model could use task durations given in minutes or days. Our proposed unit types also differentiate between coordinate unit types, e.g., the time when an event occurred, and the usual delta unit types, e.g., the time difference between two events. Errors arising from mixing coordinate and delta types can be very challenging to debug, so we extend the type system to track this for us.
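
The coordinate-versus-delta distinction translates directly into ordinary type systems. The Python analogy below (not MiniZinc syntax) rejects adding two time points while allowing point + duration and point - point:

from dataclasses import dataclass

@dataclass(frozen=True)
class Duration:          # delta unit type
    minutes: int
    def __add__(self, other: "Duration") -> "Duration":
        return Duration(self.minutes + other.minutes)

@dataclass(frozen=True)
class Time:              # coordinate unit type
    minutes: int
    def __add__(self, delta: Duration) -> "Time":
        assert isinstance(delta, Duration), "Time + Time is a type error"
        return Time(self.minutes + delta.minutes)
    def __sub__(self, other: "Time") -> Duration:
        return Duration(self.minutes - other.minutes)

start, stop = Time(9 * 60), Time(17 * 60)
shift = stop - start                  # Time - Time  -> Duration
back_at = start + Duration(30)        # Time + Duration -> Time
# start + stop fails the assertion, mirroring a coordinate/delta unit error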

Cite as

Jip J. Dekker, Jason Nguyen, Peter J. Stuckey, and Guido Tack. Unit Types for MiniZinc. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{dekker_et_al:LIPIcs.CP.2025.10,
  author =	{Dekker, Jip J. and Nguyen, Jason and Stuckey, Peter J. and Tack, Guido},
  title =	{{Unit Types for MiniZinc}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.10},
  URN =		{urn:nbn:de:0030-drops-238718},
  doi =		{10.4230/LIPIcs.CP.2025.10},
  annote =	{Keywords: Modelling, Type Safety, Unit Types}
}
Document
Invited Talk
Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs (Invited Talk)

Authors: Matthew L. Daggitt, Wen Kokke, Robert Atkey, Ekaterina Komendantskaya, Natalia Slusarz, and Luca Arnaboldi

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
Neuro-symbolic programs, i.e. programs containing both machine learning components and traditional symbolic code, are becoming increasingly widespread. Finding a general methodology for verifying such programs is challenging due to both the number of different tools involved and the intricate interface between the "neural" and "symbolic" program components. In this paper we present a general decomposition of the neuro-symbolic verification problem into parts, and examine the problem of the embedding gap that occurs when one tries to combine proofs about the neural and symbolic components. To address this problem we then introduce Vehicle - standing as an abbreviation for a "verification condition language" - an intermediate programming language interface between machine learning frameworks, automated theorem provers, and dependently-typed formalisations of neuro-symbolic programs. Vehicle allows users to specify the properties of the neural components of neuro-symbolic programs once, and then safely compile the specification to each interface using a tailored typing and compilation procedure. We give a high-level overview of Vehicle’s overall design, its interfaces and compilation & type-checking procedures, and then demonstrate its utility by formally verifying the safety of a simple autonomous car controlled by a neural network, operating in a stochastic environment with imperfect information.
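
A minimal, purely hypothetical picture of the embedding gap: the user's property lives in problem space, the network consumes normalized vectors, and the specification only becomes a statement about the network via the embedding. Nothing below is Vehicle syntax; the embedding, bounds, and network interface are all invented for illustration.

def embed(distance_m: float, speed_ms: float) -> list[float]:
    # problem space -> network input space (min-max normalization to [0, 1])
    return [distance_m / 100.0, speed_ms / 30.0]

def braking_property(network, distance_m: float, speed_ms: float) -> bool:
    # Problem-space property: "an obstacle closer than 10 m must trigger
    # braking". It only becomes a claim about the network through `embed`,
    # which is exactly the interface a specification language must manage.
    return distance_m >= 10.0 or network(embed(distance_m, speed_ms)) == "brake"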

Cite as

Matthew L. Daggitt, Wen Kokke, Robert Atkey, Ekaterina Komendantskaya, Natalia Slusarz, and Luca Arnaboldi. Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs (Invited Talk). In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 2:1-2:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{daggitt_et_al:LIPIcs.FSCD.2025.2,
  author =	{Daggitt, Matthew L. and Kokke, Wen and Atkey, Robert and Komendantskaya, Ekaterina and Slusarz, Natalia and Arnaboldi, Luca},
  title =	{{Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{2:1--2:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.2},
  URN =		{urn:nbn:de:0030-drops-236172},
  doi =		{10.4230/LIPIcs.FSCD.2025.2},
  annote =	{Keywords: Neural Network Verification, Types, Interactive Theorem Provers}
}
Document
Track A: Algorithms, Complexity and Games
Algorithms for the Diverse-k-SAT Problem: The Geometry of Satisfying Assignments

Authors: Per Austrin, Ioana O. Bercea, Mayank Goswami, Nutan Limaye, and Adarsh Srinivasan

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
Given a k-CNF formula and an integer s ≥ 2, we study algorithms that obtain s solutions to the formula that are as dispersed as possible. For s = 2, this problem of computing the diameter of a k-CNF formula was initiated by Crescenzi and Rossi, who showed strong hardness results even for k = 2. The current best upper bound [Angelsmark and Thapper '04] goes to 4ⁿ as k → ∞. As our first result, we show that this quadratic blow up is not necessary by utilizing the Fast-Fourier transform (FFT) to give a O^*(2ⁿ) time exact algorithm for computing the diameter of any k-CNF formula. For s > 2, the problem was raised in the SAT community (Nadel '11) and several heuristics have been proposed for it, but no algorithms with theoretical guarantees are known. We give exact algorithms using FFT and clique-finding that run in O^*(2^{(s-1)n}) and O^*(s² |Ω_{𝐅}|^{ω ⌈ s/3 ⌉}) respectively, where |Ω_{𝐅}| is the size of the solutions space of the formula 𝐅 and ω is the matrix multiplication exponent. However, current SAT algorithms for finding one solution run in time O^*(2^{ε_{k}n}) for ε_{k} ≈ 1-Θ(1/k), which is much faster than all above run times. As our main result, we analyze two popular SAT algorithms - PPZ (Paturi, Pudlák, Zane '97) and Schöning’s ('02) algorithms, and show that in time poly(s)O^*(2^{ε_{k}n}), they can be used to approximate diameter as well as the dispersion (s > 2) problem. While we need to modify Schöning’s original algorithm for technical reasons, we show that the PPZ algorithm, without any modification, samples solutions in a geometric sense. We believe this geometric sampling property of PPZ may be of independent interest. Finally, we focus on diverse solutions to NP-complete optimization problems, and give bi-approximations running in time poly(s)O^*(2^{ε n}) with ε < 1 for several problems such as Maximum Independent Set, Minimum Vertex Cover, Minimum Hitting Set, Feedback Vertex Set, Multicut on Trees and Interval Vertex Deletion. For all of these problems, all existing exact methods for finding optimal diverse solutions have a runtime with at least an exponential dependence on the number of solutions s. Our methods show that by relaxing to bi-approximations, this dependence on s can be made polynomial.
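
For reference, the PPZ algorithm analyzed here is only a few lines in its one-shot form: fix a random variable order and random coin flips, then set each variable by a unit clause when one forces it, and by the coin otherwise. The sketch below is the standard textbook formulation, not the paper's modified samplers.

import random

def ppz_round(num_vars, clauses):
    # clauses: lists of nonzero ints; a negative literal means negation.
    # Returns a satisfying assignment or None; repeating ~2^{(1-1/k)n} rounds
    # succeeds with high probability on a satisfiable k-CNF.
    assignment = {}
    for var in random.sample(range(1, num_vars + 1), num_vars):
        forced = None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                       # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if unassigned == [var]:
                forced = True                  # unit clause forces var = True
            elif unassigned == [-var]:
                forced = False                 # unit clause forces var = False
        assignment[var] = forced if forced is not None else random.random() < 0.5
    if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
        return assignment
    return None

# Usage: ppz_round(3, [[1, 2, 3], [-1, -2], [2, -3]])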

Cite as

Per Austrin, Ioana O. Bercea, Mayank Goswami, Nutan Limaye, and Adarsh Srinivasan. Algorithms for the Diverse-k-SAT Problem: The Geometry of Satisfying Assignments. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{austrin_et_al:LIPIcs.ICALP.2025.14,
  author =	{Austrin, Per and Bercea, Ioana O. and Goswami, Mayank and Limaye, Nutan and Srinivasan, Adarsh},
  title =	{{Algorithms for the Diverse-k-SAT Problem: The Geometry of Satisfying Assignments}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{14:1--14:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.14},
  URN =		{urn:nbn:de:0030-drops-233916},
  doi =		{10.4230/LIPIcs.ICALP.2025.14},
  annote =	{Keywords: Exponential time algorithms, Satisfiability, k-SAT, PPZ, Sch\"{o}ning, Dispersion, Diversity}
}
Document
Taming and Dissecting Recursions Through Interprocedural Weak Topological Ordering

Authors: Jiawei Yang, Xiao Cheng, Bor-Yuh Evan Chang, Xiapu Luo, and Yulei Sui

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
Abstract interpretation provides a foundational framework for approximating program semantics by interpreting code through abstract domains using semantic functions over ordered sets along a program’s control flow graph (CFG). To facilitate fixpoint computation in abstract interpretation, weak topological ordering (WTO) is an effective strategy for handling loops, as it identifies strategic control points in the CFG where widening and narrowing operations should be applied. However, existing abstract interpreters still face challenges when extending WTO computation in the presence of recursive programs. Computing a precise whole-program WTO requires full context-sensitive analysis which is not scalable for large programs, while context-insensitive analysis introduces spurious cycles that compromise precision. Current approaches either ignore recursion (resulting in unsoundness) or rely on conservative approximations, sacrificing precision by adopting the greatest elements of abstract domains and applying widening at function boundaries without subsequent narrowing refinements. These can lead to undesired results for downstream tasks, such as bug detection. To address the above limitations, we present RecTopo, a new technique to boost the efficiency of precise abstract interpretation in the presence of recursive programs through interprocedural weak topological ordering (IWTO). Rather than pursuing an expensive whole-program WTO analysis, RecTopo employs an on-demand approach that strategically decomposes programs at recursion boundaries and constructs targeted IWTOs for each recursive component. RecTopo dissects and analyzes (nested) recursions through interleaved widening and narrowing operations. This approach enables precise control over interpretation ordering within recursive structures while eliminating spurious recursions through systematic correlation of control flow and call graphs. We implemented RecTopo and evaluated its effectiveness using an assertion-based checking client focused on buffer overflow detection, comparing it against three popular open-source abstract interpreters (IKOS, Clam, CSA). The experiments on 8312 programs from the NIST dataset demonstrate that, on average, RecTopo is 31.99% more precise and achieves a 17.49% higher recall rate compared to three other tools. Moreover, RecTopo exhibits an average precision improvement of 46.51% and a higher recall rate of 32.98% compared to our baselines across ten large open-source projects. Further ablation studies reveal that IWTO reduces spurious widening operations compared to whole-program WTO, resulting in a 12.83% reduction in analysis time.
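
The role of widening at WTO-selected control points can be seen on the interval domain: analyzing i = 0; while i < n: i = i + 1 (with n unknown) without widening would enumerate bounds forever, while widening jumps the unstable bound to infinity so the loop head stabilizes. A generic sketch, not RecTopo's interprocedural machinery:

INF = float("inf")

def widen(old, new):
    # Standard interval widening: any bound that moved goes to infinity.
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

interval = (0, 0)                                # i = 0 at the loop head
while True:
    body = (interval[0] + 1, interval[1] + 1)    # effect of i = i + 1
    joined = (min(interval[0], body[0]), max(interval[1], body[1]))
    widened = widen(interval, joined)
    if widened == interval:
        break                                    # fixpoint reached
    interval = widened
print(interval)                                  # (0, inf)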

Cite as

Jiawei Yang, Xiao Cheng, Bor-Yuh Evan Chang, Xiapu Luo, and Yulei Sui. Taming and Dissecting Recursions Through Interprocedural Weak Topological Ordering. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 34:1-34:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{yang_et_al:LIPIcs.ECOOP.2025.34,
  author =	{Yang, Jiawei and Cheng, Xiao and Chang, Bor-Yuh Evan and Luo, Xiapu and Sui, Yulei},
  title =	{{Taming and Dissecting Recursions Through Interprocedural Weak Topological Ordering}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{34:1--34:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.34},
  URN =		{urn:nbn:de:0030-drops-233265},
  doi =		{10.4230/LIPIcs.ECOOP.2025.34},
  annote =	{Keywords: Abstract interpretation, recursion, weak topological ordering}
}
Document
FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation

Authors: Amber Gorzynski and Alastair F. Donaldson

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
Decompilation is the process of translating compiled code into high-level code. Control flow recovery is a challenging part of the process. "Misdecompilations" can occur, whereby the decompiled code does not accurately represent the semantics of the compiled code, despite it being syntactically valid. This is problematic because it can mislead users who are trying to reason about the program. We present CFG-based program generation: a novel approach to randomised testing that aims to improve the control flow recovery of decompilers. CFG-based program generation involves randomly generating control flow graphs (CFGs) and paths through each graph. Inspired by prior work in the domain of GPU computing, (CFG, path) pairs are "fleshed" into test programs. Each program is decompiled and recompiled. The test oracle verifies whether the actual runtime path through the graph matches the expected path. Any difference in the execution paths after recompilation indicates a possible misdecompilation. A key benefit of this approach is that it is largely independent of the source and target languages in question because it is focused on control flow. The approach is therefore applicable to numerous decompilation settings. The trade-off resulting from the focus on control flow is that misdecompilation bugs that do not relate to control flow (e.g. bugs that involve specific arithmetic operations) are out of scope. We have implemented this approach in FuzzFlesh, an open-source randomised testing tool. FuzzFlesh can be easily configured to target a variety of low-level languages and decompiler toolchains because most of the CFG and path generation process is language-independent. At present, FuzzFlesh supports testing decompilation of Java bytecode, .NET assembly and x86 machine code. In addition to program generation, FuzzFlesh also includes an automated test-case reducer that operates on the CFG rather than the low-level program, which means that it can be applied to any of the target languages. We present a large experimental campaign applying FuzzFlesh to a variety of decompilers, leading to the discovery of 12 previously-unknown bugs across two language formats, six of which have been fixed. We present experiments comparing our generic FuzzFlesh tool to two state-of-the-art decompiler testing tools targeted at specific languages. As expected, the coverage our generic FuzzFlesh tool achieves on a given decompiler is lower than the coverage achieved by a tool specifically designed for the input format of that decompiler. However, due to its focus on control flow, FuzzFlesh is able to cover sections of control flow recovery code that the targeted tools cannot reach, and identify control flow related bugs that the targeted tools miss.
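
The generate-and-flesh loop at the heart of the approach, in miniature: build a random CFG, emit a program that walks it following a list of branch directions, and let the oracle compare the trace observed after the decompile/recompile round trip against the expected one. The encoding below is an illustrative assumption (and emits Python rather than bytecode or assembly).

import random

def random_cfg(num_blocks: int, seed: int):
    rng = random.Random(seed)
    # every block except the exit gets one or two random forward successors
    return {b: rng.sample(range(b + 1, num_blocks), k=min(2, num_blocks - b - 1))
            for b in range(num_blocks - 1)}

cfg = random_cfg(6, seed=42)

def fleshed(directions):
    # The "fleshed" program: walk the CFG, consuming one direction per
    # branch, and record the blocks actually visited.
    trace, block, i = [0], 0, 0
    while cfg.get(block):
        successors = cfg[block]
        block = successors[directions[i] % len(successors)]
        i += 1
        trace.append(block)
    return trace

directions = [random.randrange(2) for _ in range(10)]
expected = fleshed(directions)
# Oracle: decompile and recompile the fleshed program, rerun it, and flag a
# misdecompilation whenever the new trace differs from `expected`.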

Cite as

Amber Gorzynski and Alastair F. Donaldson. FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 13:1-13:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gorzynski_et_al:LIPIcs.ECOOP.2025.13,
  author =	{Gorzynski, Amber and Donaldson, Alastair F.},
  title =	{{FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{13:1--13:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.13},
  URN =		{urn:nbn:de:0030-drops-233062},
  doi =		{10.4230/LIPIcs.ECOOP.2025.13},
  annote =	{Keywords: Decompiler, Reverse Engineering, Control Flow, Software Testing, Fuzzing}
}
  • Refine by Type
  • 28 Document/PDF
  • 23 Document/HTML

  • Refine by Publication Year
  • 23 2025
  • 1 2024
  • 2 2017
  • 1 2016
  • 1 2015

  • Refine by Author
  • 2 Amburi, Sathwik
  • 2 Espinha Gasiba, Tiago
  • 2 Pinto-Albuquerque, Maria
  • 2 Tse, David
  • 1 Akshay, S.
  • Show More...

  • Refine by Series/Journal
  • 20 LIPIcs
  • 5 OASIcs
  • 3 LITES

  • Refine by Classification
  • 3 Security and privacy → Software security engineering
  • 3 Theory of computation → Distributed algorithms
  • 2 Software and its engineering → Formal software verification
  • 2 Software and its engineering → Software development techniques
  • 2 Software and its engineering → Software testing and debugging
  • Show More...

  • Refine by Keyword
  • 2 Secure Coding
  • 1 Abstract interpretation
  • 1 Abstraction
  • 1 Abundance Estimation
  • 1 Android
  • Show More...
