33 Search Results for "Jacobs, Michael"


Document
Fréchet Distance in Unweighted Planar Graphs

Authors: Ivor van der Hoog, Thijs van der Horst, Eva Rotenberg, and Lasse Wulf

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
The Fréchet distance is a distance measure between trajectories in ℝ^d or walks in a graph G. Given constant-time shortest path queries, the discrete Fréchet distance D_G(P, Q) between two walks P and Q can be computed in O(|P|⋅|Q|) time using a dynamic program. Driemel, van der Hoog, and Rotenberg [SoCG'22] show that for weighted planar graphs this approach is likely tight, as there can be no strongly-subquadratic algorithm to compute a 1.01-approximation of D_G(P, Q) unless the Orthogonal Vectors Hypothesis (OVH) fails. Such quadratic-time conditional lower bounds are common to many Fréchet distance variants. However, they can be circumvented by assuming that the input comes from some well-behaved class: there exist (1+ε)-approximations, both in weighted graphs and in ℝ^d, that take near-linear time for c-packed or κ-straight walks in the graph. In ℝ^d there also exists a near-linear time algorithm to compute the Fréchet distance whenever all input edges are long compared to the distance. We consider computing the Fréchet distance in unweighted planar graphs. We show that there exists no strongly-subquadratic 1.25-approximation of the discrete Fréchet distance between two disjoint simple paths in an unweighted planar graph, unless OVH fails. This improves the previous lower bound, both in terms of generality and approximation factor. We subsequently show that adding graph structure circumvents this lower bound: if the graph is a regular tiling with unit-weighted edges, then there exists an Õ((|P|+|Q|)^{1.5})-time algorithm to compute D_G(P, Q). Our result has natural implications in the plane, as it allows us to define a new class of well-behaved curves that facilitate (1+ε)-approximations of their discrete Fréchet distance in subquadratic time.
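
For reference, the O(|P|⋅|Q|) dynamic program mentioned above is the standard recurrence for the discrete Fréchet distance. The Python sketch below is a minimal rendering of that textbook dynamic program, assuming an O(1) distance oracle dist(p, q) (for walks in a graph, precomputed shortest-path distances); it is not the paper's Õ((|P|+|Q|)^{1.5}) algorithm for regular tilings.

def discrete_frechet(P, Q, dist):
    """Discrete Frechet distance between walks (or point sequences) P and Q.

    `dist(p, q)` is assumed to be an O(1) distance oracle, e.g. precomputed
    shortest-path distances in the graph.  Runs in O(|P| * |Q|) time via the
    standard dynamic program.
    """
    n, m = len(P), len(Q)
    INF = float("inf")
    # ca[i][j] = discrete Frechet distance between P[:i+1] and Q[:j+1]
    ca = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1], ca[i - 1][j - 1]), d)
    return ca[n - 1][m - 1]

# Example: points in the plane with Euclidean distance as the oracle.
euclid = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], euclid))  # 1.0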

Cite as

Ivor van der Hoog, Thijs van der Horst, Eva Rotenberg, and Lasse Wulf. Fréchet Distance in Unweighted Planar Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{vanderhoog_et_al:LIPIcs.ESA.2025.24,
  author =	{van der Hoog, Ivor and van der Horst, Thijs and Rotenberg, Eva and Wulf, Lasse},
  title =	{{Fr\'{e}chet Distance in Unweighted Planar Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.24},
  URN =		{urn:nbn:de:0030-drops-244924},
  doi =		{10.4230/LIPIcs.ESA.2025.24},
  annote =	{Keywords: Fr\'{e}chet distance, planar graphs, lower bounds, approximation algorithms}
}
Document
Subtrajectory Clustering and Coverage Maximization in Cubic Time, or Better

Authors: Jacobus Conradi and Anne Driemel

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
Many application areas collect unstructured trajectory data. In subtrajectory clustering, one is interested in finding patterns in this data using a hybrid combination of segmentation and clustering. We analyze two variants of this problem based on the well-known SetCover and CoverageMaximization problems. In both variants the set system is induced by metric balls under the Fréchet distance centered at polygonal curves. Our algorithms focus on improving the running time of the update step of the generic greedy algorithm by means of a careful combination of sweeps through a candidate space. In the first variant, we are given a polygonal curve P of complexity n, a distance threshold Δ, and a complexity bound 𝓁, and the goal is to identify a minimum-size set of center curves 𝒞, where each center curve is of complexity at most 𝓁 and every point p on P is covered. A point p on P is covered if it is part of a subtrajectory π_p of P such that there is a center c ∈ 𝒞 whose Fréchet distance to π_p is at most Δ. We present an approximation algorithm for this problem with a running time of 𝒪((n²𝓁 + √{k_Δ}n^{5/2})log²n), where k_Δ is the size of an optimal solution. The algorithm gives a bicriteria approximation guarantee that relaxes the Fréchet distance threshold by a constant factor and the size of the solution by a factor of 𝒪(log n). The second problem variant asks for the maximum fraction of the input curve P that can be covered using k center curves, where k ≤ n is a parameter to the algorithm. For this variant, our techniques lead to an algorithm with a running time of 𝒪((k+𝓁)n²log²n) and similar approximation guarantees. Note that in both algorithms k, k_Δ ∈ O(n), and hence the running time is cubic, or better if k ≪ n.
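
For context, the generic greedy algorithm whose update step the paper accelerates is ordinary greedy SetCover: repeatedly pick the candidate that covers the most still-uncovered elements. The Python sketch below uses an explicit set system rather than the implicit system of Fréchet balls around candidate center curves used in the paper.

def greedy_set_cover(universe, candidates):
    """Generic greedy SetCover: repeatedly pick the set covering the most
    still-uncovered elements.  `candidates` maps a candidate id to the set of
    elements it covers.  Returns the chosen ids (an O(log n)-approximation).

    In the paper the set system is implicit; the expensive part is exactly the
    "which candidate covers the most uncovered portion of P" update performed
    in each iteration, which the authors speed up with sweeps through a
    candidate space.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("universe is not coverable by the candidates")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Toy usage with an explicit set system.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, sets))  # e.g. ['a', 'c']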

Cite as

Jacobus Conradi and Anne Driemel. Subtrajectory Clustering and Coverage Maximization in Cubic Time, or Better. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 12:1-12:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{conradi_et_al:LIPIcs.ESA.2025.12,
  author =	{Conradi, Jacobus and Driemel, Anne},
  title =	{{Subtrajectory Clustering and Coverage Maximization in Cubic Time, or Better}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{12:1--12:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.12},
  URN =		{urn:nbn:de:0030-drops-244806},
  doi =		{10.4230/LIPIcs.ESA.2025.12},
  annote =	{Keywords: Clustering, Set cover, Fr\'{e}chet distance, Approximation algorithms}
}
Document
A Mechanized First-Order Theory of Algebraic Data Types with Pattern Matching

Authors: Joshua M. Cohen

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Algebraic data types (ADTs) and pattern matching are widely used to write elegant functional programs and to specify program behavior. These constructs are critical to most general-purpose interactive theorem provers (e.g. Lean, Rocq/Coq), first-order SMT-based deductive verifiers (e.g. Dafny, VeriFast), and intermediate verification languages (e.g. Why3). Such features require layers of compilation - in Rocq, pattern matches are compiled to remove nesting, while SMT-based tools further axiomatize ADTs with a first-order specification. However, these critical steps have been omitted from prior formalizations of such toolchains (e.g. MetaRocq). We give the first proved-sound sophisticated pattern matching compiler (based on Maranget’s compilation to decision trees) and first-order axiomatization of ADTs, both based on Why3 implementations. We prove the soundness of exhaustiveness checking, extending pen-and-paper proofs from the literature, and formulate a robustness property with which we find an exhaustiveness-related bug in Why3. We show that many of our proofs could be useful for reasoning about any first-order program verifier supporting ADTs.
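
To make the phrase "first-order axiomatization of ADTs" concrete, here is the usual shape of such an axiomatization for a simple list type with constructors nil and cons. This is a generic illustration in first-order logic, not necessarily Why3's exact encoding.

\forall x\,\ell\,x'\,\ell'.\; cons(x,\ell) = cons(x',\ell') \rightarrow x = x' \wedge \ell = \ell'
\qquad\text{(constructors are injective)}
\forall x\,\ell.\; nil \neq cons(x,\ell)
\qquad\text{(constructors are pairwise disjoint)}
\forall \ell.\; \ell = nil \;\vee\; \exists x\,\ell'.\; \ell = cons(x,\ell')
\qquad\text{(constructors are exhaustive)}

After pattern matches are compiled to a non-nested form, each match can then be expressed as a first-order case distinction justified by the exhaustiveness axiom.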

Cite as

Joshua M. Cohen. A Mechanized First-Order Theory of Algebraic Data Types with Pattern Matching. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cohen:LIPIcs.ITP.2025.5,
  author =	{Cohen, Joshua M.},
  title =	{{A Mechanized First-Order Theory of Algebraic Data Types with Pattern Matching}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.5},
  URN =		{urn:nbn:de:0030-drops-246046},
  doi =		{10.4230/LIPIcs.ITP.2025.5},
  annote =	{Keywords: Pattern Matching Compilation, Algebraic Data Types, First-Order Logic}
}
Document
Verifying Datalog Reasoning with Lean

Authors: Johannes Tantow, Lukas Gerlach, Stephan Mennicke, and Markus Krötzsch

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Datalog is an essential logical rule language with many applications, and modern rule engines compute logical consequences for Datalog with high performance and scalability. While Datalog is rather simple and, in principle, explainable by design, such sophisticated implementations and optimizations are hard to verify. We therefore propose a certificate-based approach to validate results of Datalog reasoners in a formally verified checker for Datalog proofs. Using the proof assistant Lean, we implement such a checker and verify its correctness against direct formalizations of the Datalog semantics. We propose two JSON encodings for Datalog proofs: one using the widely supported Datalog proof trees, and one using directed acyclic graphs for succinctness. To evaluate the practical feasibility and performance of our approach, we validate proofs that we obtain by converting derivation traces of an existing Datalog reasoner into our tool-independent format.
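
The proof-tree certificates encoded in JSON can, in principle, be validated by a small recursive check: the fact at each node must follow from a rule whose body atoms are exactly the facts at the node's children, and leaves must be input facts. The Python sketch below illustrates this idea with simple variable matching; it is only an illustration, not the verified Lean checker developed in the paper.

# Atoms are tuples like ("path", "a", "c"); variables are strings starting "?".

def match(pattern, ground, subst):
    """Extend `subst` so that the (possibly non-ground) atom `pattern` equals
    the ground atom `ground`; return None if impossible."""
    if pattern[0] != ground[0] or len(pattern) != len(ground):
        return None
    subst = dict(subst)
    for p, g in zip(pattern[1:], ground[1:]):
        if p.startswith("?"):
            if subst.setdefault(p, g) != g:
                return None
        elif p != g:
            return None
    return subst

def check(node, rules, facts):
    """node = (conclusion, [child nodes]).  True iff the tree is a valid
    derivation of `conclusion` from `facts` using `rules`."""
    conclusion, children = node
    if not children:
        return conclusion in facts
    child_facts = [c[0] for c in children]
    for head, body in rules:
        if len(body) != len(child_facts):
            continue
        subst = match(head, conclusion, {})
        if subst is None:
            continue
        ok = True
        for b, f in zip(body, child_facts):
            subst = match(b, f, subst)
            if subst is None:
                ok = False
                break
        if ok and all(check(c, rules, facts) for c in children):
            return True
    return False

# Toy example: transitive closure of an edge relation.
rules = [(("path", "?x", "?y"), [("edge", "?x", "?y")]),
         (("path", "?x", "?z"), [("path", "?x", "?y"), ("edge", "?y", "?z")])]
facts = {("edge", "a", "b"), ("edge", "b", "c")}
tree = (("path", "a", "c"),
        [(("path", "a", "b"), [(("edge", "a", "b"), [])]),
         (("edge", "b", "c"), [])])
print(check(tree, rules, facts))  # True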

Cite as

Johannes Tantow, Lukas Gerlach, Stephan Mennicke, and Markus Krötzsch. Verifying Datalog Reasoning with Lean. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 36:1-36:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{tantow_et_al:LIPIcs.ITP.2025.36,
  author =	{Tantow, Johannes and Gerlach, Lukas and Mennicke, Stephan and Kr\"{o}tzsch, Markus},
  title =	{{Verifying Datalog Reasoning with Lean}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{36:1--36:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.36},
  URN =		{urn:nbn:de:0030-drops-246342},
  doi =		{10.4230/LIPIcs.ITP.2025.36},
  annote =	{Keywords: Certifying Algorithms, Datalog, Formal Verification}
}
Document
Human-AI Interaction in Space: Insights from a Mars Analog Mission with the Harmony Large Language Model

Authors: Hippolyte Hilgers, Jean Vanderdonckt, and Radu-Daniel Vatavu

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
The operational complexities of space missions require reliable, context-aware technical assistance for astronauts, especially when technical expertise is not available onboard and communication with Earth is delayed or limited. In this context, Large Language Models present a promising opportunity to augment human capabilities. To this end, we present Harmony, a model designed to provide astronauts with real-time technical assistance, fostering human-AI collaboration during analog missions. We report empirical results from an experiment involving seven analog astronauts that evaluated their user experience with Harmony in both a conventional environment and an isolated, confined, and extreme physical setting at the Mars Desert Research Station over four sessions, and discuss how the Mars analog environment impacted their experience. Our findings reveal the extent to which human-AI interactions evolve across various user experience dimensions and suggest how Harmony can be further adapted to suit extreme environments, with a focus on SpaceCHI.

Cite as

Hippolyte Hilgers, Jean Vanderdonckt, and Radu-Daniel Vatavu. Human-AI Interaction in Space: Insights from a Mars Analog Mission with the Harmony Large Language Model. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hilgers_et_al:OASIcs.SpaceCHI.2025.1,
  author =	{Hilgers, Hippolyte and Vanderdonckt, Jean and Vatavu, Radu-Daniel},
  title =	{{Human-AI Interaction in Space: Insights from a Mars Analog Mission with the Harmony Large Language Model}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{1:1--1:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.1},
  URN =		{urn:nbn:de:0030-drops-239912},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.1},
  annote =	{Keywords: Extreme user experience, Human-AI interaction, Isolated-confined-extreme environment, Interaction design, Large Language Models, Mars Desert Research Station, Space mission, Technical assistance, Technical documentation, User experience}
}
Document
Movement in Low Gravity (MoLo) – LUNA: Biomechanical Modelling to Mitigate Lunar Surface Operation Risks

Authors: David Andrew Green

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
The Artemis programme seeks to develop and test concepts, hardware and approaches to support long-term habitation of the Lunar surface and future missions to Mars. Preparation for the Artemis missions requires determining the tasks to be performed and the functional requirements of those tasks, and, as mission duration extends, whether physiological deconditioning becomes functionally significant, compromising the crew member's ability to perform critical tasks on the surface and/or upon return to Earth. MoLo-LUNA, which leverages the MoLo programme (and several other activities), could become a key supporting activity for LUNA, including validation of the Puppeteer offloading system itself via the creation of a complementary MoLo-LUNA-LAB. Furthermore, the MoLo-LUNA programme could become a key facilitator of simulator suit instrumentation and definition, broader astronaut training activities and mission architecture development, including Artemis mission simulations. By employing a Puppeteer system external to the LUNA chamber hall, MoLo-LUNA would optimise the utilisation and cost-effectiveness of LUNA, and as such represents a critical service to future LUNA stakeholders. Furthermore, MoLo-LUNA would generate a unique data set that can be leveraged to predict deconditioning on the Lunar surface, and thereby optimise functionality and minimise mission risk, including informing the need for, and prescription of, exercise countermeasures on the Lunar surface and in transit. Thus, MoLo-LUNA offers a unique opportunity to position LUNA, and ESA, as a key ongoing provider of evidence to define, optimise and support crewed Artemis surface missions.

Cite as

David Andrew Green. Movement in Low Gravity (MoLo) – LUNA: Biomechanical Modelling to Mitigate Lunar Surface Operation Risks. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 26:1-26:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{green:OASIcs.SpaceCHI.2025.26,
  author =	{Green, David Andrew},
  title =	{{Movement in Low Gravity (MoLo) – LUNA: Biomechanical Modelling to Mitigate Lunar Surface Operation Risks}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{26:1--26:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.26},
  URN =		{urn:nbn:de:0030-drops-240166},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.26},
  annote =	{Keywords: Locomotion, hypogravity, modelling, Lunar}
}
Document
Cutoff Theorems for the Equivalence of Parameterized Quantum Circuits

Authors: Neil J. Ross and Scott Wesley

Published in: LIPIcs, Volume 345, 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)


Abstract
Many promising quantum algorithms in economics, medical science, and material science rely on circuits that are parameterized by a large number of angles. To ensure that these algorithms are efficient, these parameterized circuits must be heavily optimized. However, most quantum circuit optimizers are not verified, so this procedure is known to be error-prone. For this reason, there is growing interest in the design of equivalence checking algorithms for parameterized quantum circuits. In this paper, we define a generalized class of parameterized circuits with arbitrary rotations and show that this problem is decidable for cyclotomic gate sets. We propose a cutoff-based procedure which reduces the problem of verifying the equivalence of parameterized quantum circuits to the problem of verifying the equivalence of finitely many parameter-free quantum circuits. Because the number of parameter-free circuits grows exponentially with the number of parameters, we also propose a probabilistic variant of the algorithm for cases when the number of parameters is intractably large. We show that our techniques extend to equivalence modulo global phase, and describe an efficient angle sampling procedure for cyclotomic gate sets.
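
The reduction described above, from parameterized equivalence to finitely many parameter-free checks, can be illustrated in a much weaker, purely numerical form by instantiating the parameters and comparing the resulting unitaries. The NumPy sketch below uses naive random sampling rather than the paper's cutoff theorems or cyclotomic angle sampling, and it checks exact equality rather than equality modulo global phase.

import numpy as np

def rz(theta):
    """Single-qubit rotation about the Z axis."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def circuit_a(t1, t2):
    return rz(t2) @ rz(t1)        # two consecutive Z-rotations

def circuit_b(t1, t2):
    return rz(t1 + t2)            # one Z-rotation by the summed angle

def probably_equivalent(c1, c2, num_params, samples=20, tol=1e-9):
    # Instantiate the parameters at random angles and compare the resulting
    # parameter-free unitaries entrywise.
    rng = np.random.default_rng(0)
    for _ in range(samples):
        args = rng.uniform(0.0, 2.0 * np.pi, size=num_params)
        if not np.allclose(c1(*args), c2(*args), atol=tol):
            return False
    return True

print(probably_equivalent(circuit_a, circuit_b, num_params=2))   # True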

Cite as

Neil J. Ross and Scott Wesley. Cutoff Theorems for the Equivalence of Parameterized Quantum Circuits. In 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 345, pp. 85:1-85:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ross_et_al:LIPIcs.MFCS.2025.85,
  author =	{Ross, Neil J. and Wesley, Scott},
  title =	{{Cutoff Theorems for the Equivalence of Parameterized Quantum Circuits}},
  booktitle =	{50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)},
  pages =	{85:1--85:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-388-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{345},
  editor =	{Gawrychowski, Pawe{\l} and Mazowiecki, Filip and Skrzypczak, Micha{\l}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2025.85},
  URN =		{urn:nbn:de:0030-drops-241921},
  doi =		{10.4230/LIPIcs.MFCS.2025.85},
  annote =	{Keywords: Quantum Circuits, Parameterized Equivalence Checking}
}
Document
Just Verification of Mutual Exclusion Algorithms

Authors: Rob van Glabbeek, Bas Luttik, and Myrthe S. C. Spronck

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
We verify the correctness of a variety of mutual exclusion algorithms through model checking. We look at algorithms where communication is via shared read/write registers, where those registers can be atomic or non-atomic. For the verification of liveness properties, it is necessary to assume a completeness criterion to eliminate spurious counterexamples. We use justness as completeness criterion. Justness depends on a concurrency relation; we consider several such relations, modelling different assumptions on the working of the shared registers. We present executions demonstrating the violation of correctness properties by several algorithms, and in some cases suggest improvements.
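
As a reminder of the kind of algorithm under study, below is Peterson's classic two-process mutual exclusion algorithm, which communicates only through shared read/write registers. It is shown purely for illustration; whether it is among the algorithms analyzed is not stated in this abstract. The atomicity assumptions made about the shared variables are exactly the kind of assumption the paper models via different concurrency relations.

import threading

# Peterson's two-process mutual exclusion algorithm, communicating only through
# shared read/write "registers" (plain Python variables here).  Under CPython
# the interleaving is effectively sequentially consistent; the weaker register
# behaviours (safe, regular, non-atomic) considered in the paper are not
# simulated by this sketch.
flag = [False, False]
turn = 0
counter = 0

def process(i, iterations=1000):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True                   # announce interest
        turn = j                         # yield priority to the other process
        while flag[j] and turn == j:
            pass                         # busy-wait (entry protocol)
        counter += 1                     # critical section
        flag[i] = False                  # exit protocol

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                           # 2000 if mutual exclusion held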

Cite as

Rob van Glabbeek, Bas Luttik, and Myrthe S. C. Spronck. Just Verification of Mutual Exclusion Algorithms. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{vanglabbeek_et_al:LIPIcs.CONCUR.2025.17,
  author =	{van Glabbeek, Rob and Luttik, Bas and Spronck, Myrthe S. C.},
  title =	{{Just Verification of Mutual Exclusion Algorithms}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.17},
  URN =		{urn:nbn:de:0030-drops-239670},
  doi =		{10.4230/LIPIcs.CONCUR.2025.17},
  annote =	{Keywords: Mutual exclusion, safe registers, regular registers, overlapping reads and writes, atomicity, safety, liveness, starvation freedom, justness, model checking, mCRL2}
}
Document
New Fault Domains for Conformance Testing of Finite State Machines

Authors: Frits Vaandrager and Ivo Melse

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
A fault domain reflects a tester’s assumptions about faults that may occur in an implementation and that need to be detected during testing. A fault domain that has been widely studied in the literature on black-box conformance testing is the class of finite state machines (FSMs) with at most m states. Numerous strategies for generating test suites have been proposed that guarantee fault coverage for this class. These so-called m-complete test suites grow exponentially in m-n, where n is the number of states of the specification, so one can only run them for small values of m-n. But the assumption that m-n is small is not realistic in practice. In his seminal paper from 1964, Hennie raised the challenge to design checking experiments in which the number of states may increase appreciably. In order to solve this long-standing open problem, we propose (much larger) fault domains that capture the assumption that all states in an implementation can be reached by first performing a sequence from some set A (typically a state cover for the specification), followed by k arbitrary inputs, for some small k. The number of states of FSMs in these fault domains grows exponentially in k. We present a sufficient condition for k-A-completeness of test suites with respect to these fault domains. Our condition implies k-A-completeness of two prominent m-complete test suite generation strategies, the Wp and HSI methods. Thus these strategies are complete for much larger fault domains than those for which they were originally designed, and thereby solve Hennie’s challenge. We show that three other prominent m-complete methods (H, SPY and SPYH) do not always generate k-A-complete test suites.
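
The new fault domains are parameterized by a set A of access sequences (typically a state cover of the specification) and a number k of arbitrary trailing inputs. The Python sketch below only enumerates access sequences of that shape, an element of A followed by at most k arbitrary inputs; it is not the Wp or HSI construction, which additionally appends carefully chosen distinguishing suffixes to obtain completeness.

from itertools import product

def access_prefixes(A, inputs, k):
    """All sequences of the form a.w with a in A (e.g. a state cover of the
    specification) and w an arbitrary input word of length at most k.
    """
    prefixes = []
    for a in A:
        for length in range(k + 1):
            for w in product(inputs, repeat=length):
                prefixes.append(tuple(a) + w)
    return prefixes

# Toy example: state cover {epsilon, a} over inputs {a, b}, with k = 2.
for p in access_prefixes([(), ("a",)], ["a", "b"], 2):
    print("".join(p) or "eps")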

Cite as

Frits Vaandrager and Ivo Melse. New Fault Domains for Conformance Testing of Finite State Machines. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 34:1-34:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{vaandrager_et_al:LIPIcs.CONCUR.2025.34,
  author =	{Vaandrager, Frits and Melse, Ivo},
  title =	{{New Fault Domains for Conformance Testing of Finite State Machines}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{34:1--34:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.34},
  URN =		{urn:nbn:de:0030-drops-239843},
  doi =		{10.4230/LIPIcs.CONCUR.2025.34},
  annote =	{Keywords: conformance testing, finite state machines, Mealy machines, apartness, observation tree, fault domains, k-A-complete test suites}
}
Document
PrintTalk: A Language for Constraint-Based 3D Modelling

Authors: Jef Jacobs, Wolfgang De Meuter, and Jens Nicolay

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
Programmatic CAD (PCAD) is an emerging alternative to traditional visual CAD software. However, state-of-the-art PCAD tools have limited or no support for constraints. Consequently, these tools depend solely on parametrisation for variability, reusability, and composition of shapes. This leads to problems such as parameter explosion and leaky compositional abstraction, and prevents a declarative approach to defining spatial patterns (linear, grid, circular, etc.) for the constituents of a composition. This paper describes the design of PrintTalk, a PCAD language that supports 3D modelling by composing shapes and expressing relations between them using first-class constraints. Evaluating PrintTalk against state-of-the-art PCAD tools demonstrates that its expressive abstraction and composition mechanisms facilitate the design and promote the reuse of shapes.
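
To illustrate the contrast between parametrisation and first-class constraints in general terms, one can express a placement declaratively and let a solver compute the coordinates. The Python sketch below is a hypothetical illustration only: it is not PrintTalk syntax, the shapes and constraints are invented for the example, and it uses the z3-solver package as a stand-in constraint solver.

# Hypothetical illustration only (NOT PrintTalk syntax).
from z3 import Real, Solver, sat

ax, ay, bx, by = Real("ax"), Real("ay"), Real("bx"), Real("by")
a_width = 10                     # width of shape A (fixed for the sketch)

s = Solver()
s.add(ax == 0, ay == 0)          # pin shape A at the origin
s.add(bx == ax + a_width + 5)    # B sits 5 units to the right of A
s.add(by == ay)                  # B is vertically aligned with A

assert s.check() == sat
print(s.model())                 # e.g. [ax = 0, ay = 0, bx = 15, by = 0]

With explicit parametrisation, the coordinates of every constituent would instead have to be threaded through the composition by hand.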

Cite as

Jef Jacobs, Wolfgang De Meuter, and Jens Nicolay. PrintTalk: A Language for Constraint-Based 3D Modelling. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 16:1-16:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{jacobs_et_al:LIPIcs.CP.2025.16,
  author =	{Jacobs, Jef and De Meuter, Wolfgang and Nicolay, Jens},
  title =	{{PrintTalk: A Language for Constraint-Based 3D Modelling}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{16:1--16:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.16},
  URN =		{urn:nbn:de:0030-drops-238775},
  doi =		{10.4230/LIPIcs.CP.2025.16},
  annote =	{Keywords: Programmatic 3D Modelling, PCAD, Domain specific language, Constraints}
}
Document
Cancellative Convex Semilattices

Authors: Ana Sokolova and Harald Woracek

Published in: LIPIcs, Volume 342, 11th Conference on Algebra and Coalgebra in Computer Science (CALCO 2025)


Abstract
Convex semilattices are algebras that are at the same time a convex algebra and a semilattice, together with a distributivity axiom. These algebras have attracted attention in recent years as suitable algebras for probability and nondeterminism, in particular by being the Eilenberg-Moore algebras of the monad of nonempty finitely-generated convex subsets of distributions. A convex semilattice is cancellative if the underlying convex algebra is cancellative. Cancellative convex algebras have been characterized by M. H. Stone and by H. Kneser: a convex algebra is cancellative if and only if it is isomorphic to a convex subset of a vector space (with canonical convex algebra operations). We prove an analogous theorem for convex semilattices: a convex semilattice is cancellative if and only if it is isomorphic to a convex subset of a Riesz space, i.e., a lattice-ordered vector space (with canonical convex semilattice operations).
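
For readers unfamiliar with the terminology, the two laws referred to above can be written as follows, where +_r (for r ∈ (0,1)) are the convex-combination operations and ∨ is the semilattice join. This is the standard formulation and may differ cosmetically from the paper's notation.

(x \vee y) +_r z \;=\; (x +_r z) \vee (y +_r z)
\qquad\text{(distributivity axiom of convex semilattices)}
x +_r z \;=\; y +_r z \;\Longrightarrow\; x = y
\qquad\text{(cancellativity of the underlying convex algebra)}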

Cite as

Ana Sokolova and Harald Woracek. Cancellative Convex Semilattices. In 11th Conference on Algebra and Coalgebra in Computer Science (CALCO 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 342, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{sokolova_et_al:LIPIcs.CALCO.2025.12,
  author =	{Sokolova, Ana and Woracek, Harald},
  title =	{{Cancellative Convex Semilattices}},
  booktitle =	{11th Conference on Algebra and Coalgebra in Computer Science (CALCO 2025)},
  pages =	{12:1--12:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-383-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{342},
  editor =	{C\^{i}rstea, Corina and Knapp, Alexander},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CALCO.2025.12},
  URN =		{urn:nbn:de:0030-drops-235714},
  doi =		{10.4230/LIPIcs.CALCO.2025.12},
  annote =	{Keywords: convex semilattice, cancellativity, Riesz space}
}
Document
A Coinductive Representation of Computable Functions

Authors: Alvin Tang and Dirk Pattinson

Published in: LIPIcs, Volume 342, 11th Conference on Algebra and Coalgebra in Computer Science (CALCO 2025)


Abstract
We investigate a representation of computable functions as total functions over 2^∞, the set of finite and infinite sequences over {0,1}. In this model, infinite sequences are interpreted as non-terminating computations whilst finite sequences represent the sum of their digits. We introduce a new definition principle, function space corecursion, that simultaneously generalises minimisation and primitive recursion. This defines the class of computable corecursive functions that is closed under composition and function space corecursion. We prove computable corecursive functions represent all partial recursive functions, and show that all computable corecursive functions are indeed computable by translation into the untyped λ-calculus.
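
The representation itself is easy to picture: a computation yields a finite or infinite bit sequence, where a finite sequence denotes the sum of its digits and an infinite sequence denotes non-termination. The Python sketch below only illustrates this encoding with iterators and a fuel-bounded observer; it does not implement the paper's function-space corecursion principle.

from itertools import repeat

def encode(n):
    """A finite bit sequence whose digits sum to n (here simply n ones)."""
    return iter([1] * n)

def diverge():
    """An infinite bit sequence, representing a non-terminating computation."""
    return repeat(0)

def observe(bits, fuel=1000):
    """Sum the digits of a sequence, giving up (returning None) after `fuel`
    bits: an observer can only ever inspect a finite prefix."""
    total = 0
    for seen, b in enumerate(bits, start=1):
        total += b
        if seen >= fuel:
            return None
    return total

print(observe(encode(5)))    # 5
print(observe(diverge()))    # None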

Cite as

Alvin Tang and Dirk Pattinson. A Coinductive Representation of Computable Functions. In 11th Conference on Algebra and Coalgebra in Computer Science (CALCO 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 342, pp. 7:1-7:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{tang_et_al:LIPIcs.CALCO.2025.7,
  author =	{Tang, Alvin and Pattinson, Dirk},
  title =	{{A Coinductive Representation of Computable Functions}},
  booktitle =	{11th Conference on Algebra and Coalgebra in Computer Science (CALCO 2025)},
  pages =	{7:1--7:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-383-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{342},
  editor =	{C\^{i}rstea, Corina and Knapp, Alexander},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CALCO.2025.7},
  URN =		{urn:nbn:de:0030-drops-235662},
  doi =		{10.4230/LIPIcs.CALCO.2025.7},
  annote =	{Keywords: Computability, Coinduction}
}
Document
DAMA: A Dual Arbitration Mechanism for Mixed-Criticality Applications

Authors: Wafic Lawand and Rodolfo Pellizzoni

Published in: LIPIcs, Volume 335, 37th Euromicro Conference on Real-Time Systems (ECRTS 2025)


Abstract
We discuss hardware resource management in mixed-criticality systems, where requestors may issue latency-critical (LTC) and non-latency-critical (NLTC) requests. LTC requests must adhere to strict latency bounds imposed by safety-critical applications, but timely servicing of NLTC requests is necessary to maximize overall system performance in the average case. In this paper, we address this tradeoff for a shared memory resource by proposing DAMA, a dual arbitration mechanism that imposes an upper bound on the cumulative latency of LTC requests without unduly impacting NLTC performance. DAMA comprises a high-performance arbiter, a real-time arbiter, and a mechanism that constantly monitors the cumulative request latency suffered by each requestor. DAMA primarily executes in high-performance mode and only switches to real-time mode in the rare instances when its monitoring mechanism detects a violation of a task's timing guarantee. We demonstrate the effectiveness of our arbitration scheme by adapting a predictable prefetcher that issues NLTC requests and attaching it to the L1 caches of our cores. We show both formally and experimentally that DAMA provides timing guarantees for LTC requests while processing other NLTC requests. We also demonstrate that, with a negligible overhead of less than 1.5% on the cumulative latency bound of LTC requests, DAMA achieves average performance equivalent to that of a prefetcher whose requests are processed under a high-performance arbitration scheme.
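
The Python sketch below is a highly simplified model of a dual-mode arbiter in the spirit described above: serve requests with a high-performance policy and fall back to a real-time policy whenever a requestor's monitored cumulative latency gets too close to its guaranteed bound. The concrete policies, slack handling and bookkeeping are invented for illustration and are much simpler than the mechanism analyzed in the paper.

class DualModeArbiter:
    def __init__(self, ltc_latency_bounds, slack=2):
        self.bounds = ltc_latency_bounds               # requestor -> cumulative LTC latency bound
        self.acc = {r: 0 for r in ltc_latency_bounds}  # monitored cumulative LTC latency
        self.slack = slack
        self.mode = "high-performance"

    def grant(self, pending):
        """pending: list of (requestor, kind, age) with kind in {'LTC', 'NLTC'}.
        Returns the request granted in this cycle."""
        # Fall back to real-time mode as soon as any LTC budget is nearly exhausted.
        if any(self.acc[r] + self.slack >= self.bounds[r] for r in self.bounds):
            self.mode = "real-time"
        if self.mode == "real-time":
            ltc = [req for req in pending if req[1] == "LTC"]
            choice = max(ltc or pending, key=lambda req: req[2])   # prioritize LTC
        else:
            choice = max(pending, key=lambda req: req[2])          # oldest-first overall
        # Charge one cycle of waiting to every LTC request left pending.
        for req in pending:
            if req[1] == "LTC" and req is not choice:
                self.acc[req[0]] += 1
        return choice

arb = DualModeArbiter({"core0": 10, "core1": 10})
print(arb.grant([("core0", "LTC", 3), ("prefetcher", "NLTC", 7)]))
# -> ('prefetcher', 'NLTC', 7) in high-performance mode; core0's monitored latency grows by 1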

Cite as

Wafic Lawand and Rodolfo Pellizzoni. DAMA: A Dual Arbitration Mechanism for Mixed-Criticality Applications. In 37th Euromicro Conference on Real-Time Systems (ECRTS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 335, pp. 9:1-9:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{lawand_et_al:LIPIcs.ECRTS.2025.9,
  author =	{Lawand, Wafic and Pellizzoni, Rodolfo},
  title =	{{DAMA: A Dual Arbitration Mechanism for Mixed-Criticality Applications}},
  booktitle =	{37th Euromicro Conference on Real-Time Systems (ECRTS 2025)},
  pages =	{9:1--9:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-377-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{335},
  editor =	{Mancuso, Renato},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2025.9},
  URN =		{urn:nbn:de:0030-drops-235875},
  doi =		{10.4230/LIPIcs.ECRTS.2025.9},
  annote =	{Keywords: Real-time Systems, Mixed-criticality Applications, Memory controllers, Prefetchers}
}
Document
The Cost of Skeletal Call-By-Need, Smoothly

Authors: Beniamino Accattoli, Francesco Magliocca, Loïc Peyrot, and Claudio Sacerdoti Coen

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
Skeletal call-by-need is an optimization of call-by-need evaluation also known as "fully lazy sharing": when the duplication of a value has to take place, it is first split into "skeleton", which is then duplicated, and "flesh" which is instead kept shared. Here, we provide two cost analyses of skeletal call-by-need. Firstly, we provide a family of terms showing that skeletal call-by-need can be asymptotically exponentially faster than call-by-need in both time and space; it is the first such evidence, to our knowledge. Secondly, we prove that skeletal call-by-need can be implemented efficiently, that is, with bi-linear overhead. This result is obtained by providing a new smooth presentation of ideas by Shivers and Wand for the reconstruction of skeletons, which is then smoothly plugged into the study of an abstract machine following the distillation technique by Accattoli et al.
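
A one-line example may help picture the skeleton/flesh split. The decomposition below is the standard "maximal free expressions" one and is not taken from the paper.

% Duplicating the value  v = \lambda x.\, x\,(y\,z)  under plain call-by-need also
% copies the subterm  y\,z , even though it does not depend on x.  Fully lazy
% sharing first splits v into skeleton and flesh:
v \;=\; (\lambda x.\, x\,w)\,[\, w \mapsto y\,z \,]
% Only the skeleton  \lambda x.\, x\,w  is duplicated; the flesh  y\,z  remains
% shared through the substitution bound to w.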

Cite as

Beniamino Accattoli, Francesco Magliocca, Loïc Peyrot, and Claudio Sacerdoti Coen. The Cost of Skeletal Call-By-Need, Smoothly. In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{accattoli_et_al:LIPIcs.FSCD.2025.5,
  author =	{Accattoli, Beniamino and Magliocca, Francesco and Peyrot, Lo\"{i}c and Sacerdoti Coen, Claudio},
  title =	{{The Cost of Skeletal Call-By-Need, Smoothly}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.5},
  URN =		{urn:nbn:de:0030-drops-236206},
  doi =		{10.4230/LIPIcs.FSCD.2025.5},
  annote =	{Keywords: \lambda-calculus, abstract machines, call-by-need, cost models}
}
Document
Solving Guarded Domain Equations in Presheaves over Ordinals and Mechanizing It

Authors: Sergei Stepanenko and Amin Timany

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
Constructing solutions to recursive domain equations is a well-known, important problem in the study of programs and programming languages. Mathematically speaking, the problem is finding a fixed point (up to isomorphism) of a suitable functor over a suitable category. A particularly useful instance, inspired by the step-indexing technique, is where the functor is over (a subcategory of) the category of presheaves over the ordinal ω and the functors are locally-contractive, also known as guarded functors. This corresponds to step-indexing over natural numbers. However, for certain problems, e.g., when dealing with infinite non-determinism, one needs to employ trans-finite step-indexing, i.e., consider presheaf categories over higher ordinals. Prior work on trans-finite step-indexing either only considers a very narrow class of functors over a particularly restricted subcategory of presheaves over higher ordinals, or treats the problem very generally working with sheaves over an arbitrary complete Heyting algebra with a well-founded basis. In this paper we present a solution to the guarded domain equations problem over all guarded functors over the category of presheaves over ordinal numbers, as well as its mechanization in the Rocq Prover. As the categories of sheaves and presheaves over ordinals are equivalent, our main contribution is simplifying prior work from the setting of the category of sheaves to the setting of the category of presheaves and mechanizing it - presheaves are more amenable to mechanization in a proof assistant.
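
Concretely, the equations in question have the shape below, where the recursive occurrence of X is guarded, typically by the "later" modality ▶, so that the resulting functor is locally contractive and its fixed point is unique up to isomorphism. The example on the right is only a typical instance, not one taken from the paper.

X \;\cong\; F(\blacktriangleright X)
\qquad\text{for instance}\qquad
D \;\cong\; \blacktriangleright(D \to D)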

Cite as

Sergei Stepanenko and Amin Timany. Solving Guarded Domain Equations in Presheaves over Ordinals and Mechanizing It. In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 33:1-33:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{stepanenko_et_al:LIPIcs.FSCD.2025.33,
  author =	{Stepanenko, Sergei and Timany, Amin},
  title =	{{Solving Guarded Domain Equations in Presheaves over Ordinals and Mechanizing It}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{33:1--33:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.33},
  URN =		{urn:nbn:de:0030-drops-236486},
  doi =		{10.4230/LIPIcs.FSCD.2025.33},
  annote =	{Keywords: Domain Equations, Guarded Fixed Points, Fixed Points, Category Theory, Rocq, Presheaves, Ordinals}
}
  • Refine by Type
  • 33 Document/PDF
  • 29 Document/HTML

  • Refine by Publication Year
  • 28 2025
  • 1 2024
  • 1 2023
  • 2 2022
  • 1 2017

  • Refine by Author
  • 3 van der Hoog, Ivor
  • 2 Rotenberg, Eva
  • 2 Sokolova, Ana
  • 2 van der Horst, Thijs
  • 1 Accattoli, Beniamino
  • Show More...

  • Refine by Series/Journal
  • 25 LIPIcs
  • 5 OASIcs
  • 2 LITES
  • 1 TGDK

  • Refine by Classification
  • 3 Theory of computation → Computational geometry
  • 2 Computer systems organization → Embedded software
  • 2 Computer systems organization → Real-time system architecture
  • 2 Theory of computation → Categorical semantics
  • 2 Theory of computation → Concurrency
  • Show More...

  • Refine by Keyword
  • 4 Fréchet distance
  • 2 Approximation algorithms
  • 2 linear logic
  • 1 (Bi-)simulations
  • 1 Algebraic Data Types
  • Show More...
