22 Search Results for "Graf, Daniel"


Document
Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems

Authors: Bingbing Hu and Adam Polak

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
Most of the known tight lower bounds for dynamic problems are based on the Online Boolean Matrix-Vector Multiplication (OMv) Hypothesis, which is not as well studied and understood as some more popular hypotheses in fine-grained complexity. It would be desirable to base hardness of dynamic problems on a more believable hypothesis. We propose analogues of the OMv Hypothesis for variants of matrix multiplication that are known to be harder than Boolean product in the offline setting, namely: equality, dominance, min-witness, min-max, and bounded monotone min-plus products. These hypotheses are a priori weaker assumptions than the standard (Boolean) OMv Hypothesis and yet we show that they are actually equivalent to it. This establishes the first such fine-grained equivalence class for dynamic problems.
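
To fix intuitions about the hypothesis being generalized, here is a minimal Python sketch of the OMv problem itself (function names are hypothetical): the Boolean matrix M is known in advance, the n query vectors arrive online, and each answer must be produced before the next vector is revealed. The naive algorithm spends O(n²) per vector; the OMv Hypothesis asserts that no algorithm, even after polynomial preprocessing of M, answers all n queries in total time O(n^{3-ε}).

# Illustration of the Online Boolean Matrix-Vector (OMv) problem setup.
# Naive algorithm: answer each query in O(n^2), hence O(n^3) in total.
def omv_naive(M, vector_stream):
    """M: n x n list of 0/1 rows; vector_stream yields n query vectors."""
    n = len(M)
    for v in vector_stream:
        # Boolean product: (Mv)[i] = OR_j (M[i][j] AND v[j]).
        yield [int(any(M[i][j] and v[j] for j in range(n))) for i in range(n)]

# Example: a 2x2 matrix and two online queries.
M = [[1, 0],
     [1, 1]]
for answer in omv_naive(M, iter([[1, 0], [0, 1]])):
    print(answer)   # [1, 1] then [0, 1]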

Cite as

Bingbing Hu and Adam Polak. Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 54:1-54:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hu_et_al:LIPIcs.ESA.2025.54,
  author =	{Hu, Bingbing and Polak, Adam},
  title =	{{Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{54:1--54:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.54},
  URN =		{urn:nbn:de:0030-drops-245228},
  doi =		{10.4230/LIPIcs.ESA.2025.54},
  annote =	{Keywords: Fine-grained complexity, OMv hypothesis, reductions, equivalence class}
}
Document
A Mechanized First-Order Theory of Algebraic Data Types with Pattern Matching

Authors: Joshua M. Cohen

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Algebraic data types (ADTs) and pattern matching are widely used to write elegant functional programs and to specify program behavior. These constructs are critical to most general-purpose interactive theorem provers (e.g. Lean, Rocq/Coq), first-order SMT-based deductive verifiers (e.g. Dafny, VeriFast), and intermediate verification languages (e.g. Why3). Such features require layers of compilation: in Rocq, pattern matches are compiled to remove nesting, while SMT-based tools further axiomatize ADTs with a first-order specification. However, these critical steps have been omitted from prior formalizations of such toolchains (e.g. MetaRocq). We give the first proved-sound sophisticated pattern-matching compiler (based on Maranget's compilation to decision trees) and first-order axiomatization of ADTs, both based on Why3 implementations. We prove the soundness of exhaustiveness checking, extending pen-and-paper proofs from the literature, and formulate a robustness property with which we find an exhaustiveness-related bug in Why3. We show that many of our proofs could be useful for reasoning about any first-order program verifier supporting ADTs.
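
For context on what a first-order axiomatization of ADTs involves, the LaTeX block below records the usual textbook shape of such axioms for a simple list type (constructor injectivity, disjointness, and exhaustiveness). This is a generic illustration, not necessarily the exact axiom set mechanized from Why3.

% Generic shape of a first-order ADT axiomatization, illustrated for
% lists built from nil and cons (not Why3's exact axiom set).
\begin{align*}
&\text{(injectivity)}    && \forall x_1\, l_1\, x_2\, l_2.\;
    \mathit{cons}(x_1, l_1) = \mathit{cons}(x_2, l_2)
    \rightarrow x_1 = x_2 \wedge l_1 = l_2 \\
&\text{(disjointness)}   && \forall x\, l.\; \mathit{nil} \neq \mathit{cons}(x, l) \\
&\text{(exhaustiveness)} && \forall l.\; l = \mathit{nil} \vee
    \exists x\, l'.\; l = \mathit{cons}(x, l')
\end{align*}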

Cite as

Joshua M. Cohen. A Mechanized First-Order Theory of Algebraic Data Types with Pattern Matching. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cohen:LIPIcs.ITP.2025.5,
  author =	{Cohen, Joshua M.},
  title =	{{A Mechanized First-Order Theory of Algebraic Data Types with Pattern Matching}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.5},
  URN =		{urn:nbn:de:0030-drops-246046},
  doi =		{10.4230/LIPIcs.ITP.2025.5},
  annote =	{Keywords: Pattern Matching Compilation, Algebraic Data Types, First-Order Logic}
}
Document
(Can't) Fly Me to the Moon or Mars? Context of Use Analysis Approaches for Space Exploration

Authors: Tilo Mentler

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
Contexts of use are a central concept in human-computer interaction (HCI) research and development. Understanding them in depth is key to usable and acceptable computer-aided solutions, and a particular challenge in connection with space exploration. It is necessary to examine which of the established approaches can be applied here and where methodological adjustments are necessary. This article provides a systematic consideration of three perspectives for understanding space contexts of use: theory and literature, imparted experiential knowledge, and personal experience. Potentials and risks are evaluated. Findings from HCI research in safety-critical contexts and under COVID-19 conditions that can be transferred to space HCI are taken up.

Cite as

Tilo Mentler. (Can't) Fly Me to the Moon or Mars? Context of Use Analysis Approaches for Space Exploration. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 23:1-23:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{mentler:OASIcs.SpaceCHI.2025.23,
  author =	{Mentler, Tilo},
  title =	{{(Can't) Fly Me to the Moon or Mars? Context of Use Analysis Approaches for Space Exploration}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{23:1--23:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.23},
  URN =		{urn:nbn:de:0030-drops-240135},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.23},
  annote =	{Keywords: Context of Use, Experiential Knowledge, Remote Contextual Inquiry}
}
Document
Computational Geometry with Probabilistically Noisy Primitive Operations

Authors: David Eppstein, Michael T. Goodrich, and Vinesh Sridhar

Published in: LIPIcs, Volume 349, 19th International Symposium on Algorithms and Data Structures (WADS 2025)


Abstract
Much prior work has been done on designing computational geometry algorithms that handle input degeneracies, data imprecision, and arithmetic round-off errors. We take a new approach, inspired by the noisy sorting literature, and study computational geometry algorithms subject to noisy Boolean primitive operations in which, e.g., the comparison "is point q above line 𝓁?" returns the wrong answer with some fixed probability. We propose a novel technique called path-guided pushdown random walks that generalizes the results of noisy sorting. We apply this technique to solve point-location, plane-sweep, convex hulls in 2D and 3D, and Delaunay triangulations for noisy primitives in optimal time with high probability.
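
For contrast with the paper's approach, here is a small Python sketch of a noisy sideness primitive and its naive amplification by majority voting; this baseline costs an extra O(log(1/δ)) factor per primitive call, which is exactly the overhead that the paper's path-guided pushdown random walks avoid. The line representation and constants are illustrative assumptions.

# A noisy "is q above line ell?" primitive and naive majority-vote
# amplification. This is NOT the paper's technique; it is the
# repetition baseline it improves upon.
import math
import random

def noisy_above(q, line, p):
    """Sideness test that errs with probability p. line = (a, b) is y = a*x + b."""
    a, b = line
    truth = q[1] > a * q[0] + b
    return truth if random.random() >= p else (not truth)

def amplified_above(q, line, p, delta):
    """Repeat the noisy test and take a majority vote so that the answer
    is wrong with probability at most delta (standard Chernoff bound)."""
    reps = max(1, math.ceil(2 * math.log(1 / delta) / (1 - 2 * p) ** 2))
    votes = sum(noisy_above(q, line, p) for _ in range(reps))
    return votes * 2 > reps

# Point (0, 1) against the line y = 0, 20% noise, failure prob 1e-6:
print(amplified_above((0.0, 1.0), (0.0, 0.0), p=0.2, delta=1e-6))  # almost surely True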

Cite as

David Eppstein, Michael T. Goodrich, and Vinesh Sridhar. Computational Geometry with Probabilistically Noisy Primitive Operations. In 19th International Symposium on Algorithms and Data Structures (WADS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 349, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{eppstein_et_al:LIPIcs.WADS.2025.24,
  author =	{Eppstein, David and Goodrich, Michael T. and Sridhar, Vinesh},
  title =	{{Computational Geometry with Probabilistically Noisy Primitive Operations}},
  booktitle =	{19th International Symposium on Algorithms and Data Structures (WADS 2025)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-398-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{349},
  editor =	{Morin, Pat and Oh, Eunjin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WADS.2025.24},
  URN =		{urn:nbn:de:0030-drops-242552},
  doi =		{10.4230/LIPIcs.WADS.2025.24},
  annote =	{Keywords: Computational geometry, noisy comparisons, random walks}
}
Document
Partial-Order Reduction Is Hard

Authors: Frédéric Herbreteau, Sarah Larroze-Jardiné, and Igor Walukiewicz

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
The goal of partial-order methods is to accelerate the exploration of concurrent systems by examining only a representative subset of all possible runs. The stateful approach builds a transition system with representative runs, while the stateless method simply enumerates them. The stateless approach may be preferable if the transition system is tree-like; otherwise, the stateful method is more effective. In the last decade, optimality has been a guiding principle for developing stateless partial-order reduction algorithms, and has undoubtedly contributed to major progress in the field. In this paper we ask whether a similar principle exists for the stateful approach. We show that in stateful exploration, a partial-order algorithm that is polynomially close to optimal cannot exist unless P = NP. The result holds even for acyclic programs with just await instructions. This lower bound justifies a systematic study of heuristics for stateful partial-order reduction. We propose the notion of an IFS oracle as a useful abstraction. The oracle can be used to obtain a very simple optimal stateless algorithm, which can then be adapted into a non-optimal stateful algorithm. While the oracle problem is NP-hard in general, we show a simple case where it can be solved in linear time.
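
The equivalence that partial-order reduction exploits can be made concrete with a toy computation (a Python sketch, not the paper's algorithms or its IFS oracle): runs that differ only by swapping adjacent independent actions form one Mazurkiewicz trace, and a reduced exploration needs only one representative per trace.

# Partition all interleavings of `actions` into Mazurkiewicz trace
# classes: runs related by swapping adjacent independent actions.
from itertools import permutations

def trace_classes(actions, independent):
    def neighbours(run):
        for i in range(len(run) - 1):
            if (run[i], run[i + 1]) in independent:
                yield run[:i] + (run[i + 1], run[i]) + run[i + 2:]
    runs = set(permutations(actions))
    classes = []
    while runs:
        # Flood-fill one equivalence class starting from any remaining run.
        seed = runs.pop()
        cls, frontier = {seed}, [seed]
        while frontier:
            for nb in neighbours(frontier.pop()):
                if nb in runs:
                    runs.remove(nb)
                    cls.add(nb)
                    frontier.append(nb)
        classes.append(cls)
    return classes

# a and b are independent of each other; c depends on both.
indep = {("a", "b"), ("b", "a")}
print(len(trace_classes(("a", "b", "c"), indep)))  # 4: six runs, four classes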

Cite as

Frédéric Herbreteau, Sarah Larroze-Jardiné, and Igor Walukiewicz. Partial-Order Reduction Is Hard. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{herbreteau_et_al:LIPIcs.CONCUR.2025.22,
  author =	{Herbreteau, Fr\'{e}d\'{e}ric and Larroze-Jardin\'{e}, Sarah and Walukiewicz, Igor},
  title =	{{Partial-Order Reduction Is Hard}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.22},
  URN =		{urn:nbn:de:0030-drops-239727},
  doi =		{10.4230/LIPIcs.CONCUR.2025.22},
  annote =	{Keywords: Formal verification, Concurrent systems, Partial-order reduction, Complexity}
}
Document
Model Checking as Program Verification by Abstract Interpretation

Authors: Paolo Baldan, Roberto Bruni, Francesco Ranzato, and Diletta Rigo

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
Abstract interpretation offers a powerful toolset for static analysis, tackling precision, complexity and state-explosion issues. In the literature, state partitioning abstractions based on (bi)simulation and property-preserving state relations have been successfully applied to abstract model checking. Here, we pursue a different track in which model checking is seen as an instance of program verification. To this purpose, we introduce a suitable language - called MOKA (for MOdel checking as abstract interpretation of 𝖪leene 𝖠lgebras) - which is used to encode temporal formulae as programs. In particular, we show that (universal fragments of) temporal logics, such as ACTL or, more generally, universal μ-calculus can be transformed into MOKA programs. Such programs return all and only the initial states which violate the formula. By applying abstract interpretation to MOKA programs, we pave the way for reusing more general abstractions than partitions as well as for tuning the precision of the abstraction to remove or avoid false alarms. We show how to perform model checking via a program logic that combines under-approximation and abstract interpretation analysis to avoid false alarms. The notion of locally complete abstraction is used to dynamically improve the analysis precision via counterexample-guided domain refinement.
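
As background for casting model checking as an analysis problem, the Python sketch below shows the classical fixpoint computation behind checking the ACTL formula AG p: the initial states violating the formula are exactly those that can reach a state where p fails. This is standard backward reachability, shown only for intuition; it is not the MOKA encoding introduced in the paper.

# Classical check of AG p on an explicit transition system: compute,
# as a least fixpoint, the set of states that can reach a !p state.
def violates_AG(states, transitions, p):
    """transitions: set of (s, t) pairs; p: set of states satisfying p."""
    bad = {s for s in states if s not in p}
    changed = True
    while changed:  # backward reachability of the bad states
        changed = False
        for (s, t) in transitions:
            if t in bad and s not in bad:
                bad.add(s)
                changed = True
    return bad

states = {0, 1, 2}
transitions = {(0, 1), (1, 2)}
print(violates_AG(states, transitions, p={0, 1}))
# {0, 1, 2}: every state can reach state 2, which violates p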

Cite as

Paolo Baldan, Roberto Bruni, Francesco Ranzato, and Diletta Rigo. Model Checking as Program Verification by Abstract Interpretation. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{baldan_et_al:LIPIcs.CONCUR.2025.8,
  author =	{Baldan, Paolo and Bruni, Roberto and Ranzato, Francesco and Rigo, Diletta},
  title =	{{Model Checking as Program Verification by Abstract Interpretation}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.8},
  URN =		{urn:nbn:de:0030-drops-239583},
  doi =		{10.4230/LIPIcs.CONCUR.2025.8},
  annote =	{Keywords: ACTL, \mu-calculus, model checking, abstract interpretation, program analysis, local completeness, abstract interpretation repair, domain refinement, Kleene algebra with tests}
}
Document
New Fault Domains for Conformance Testing of Finite State Machines

Authors: Frits Vaandrager and Ivo Melse

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
A fault domain reflects a tester’s assumptions about faults that may occur in an implementation and that need to be detected during testing. A fault domain that has been widely studied in the literature on black-box conformance testing is the class of finite state machines (FSMs) with at most m states. Numerous strategies for generating test suites have been proposed that guarantee fault coverage for this class. These so-called m-complete test suites grow exponentially in m-n, where n is the number of states of the specification, so one can only run them for small values of m-n. But the assumption that m-n is small is not realistic in practice. In his seminal paper from 1964, Hennie raised the challenge to design checking experiments in which the number of states may increase appreciably. In order to solve this long-standing open problem, we propose (much larger) fault domains that capture the assumption that all states in an implementation can be reached by first performing a sequence from some set A (typically a state cover for the specification), followed by k arbitrary inputs, for some small k. The number of states of FSMs in these fault domains grows exponentially in k. We present a sufficient condition for k-A-completeness of test suites with respect to these fault domains. Our condition implies k-A-completeness of two prominent m-complete test suite generation strategies, the Wp and HSI methods. Thus these strategies are complete for much larger fault domains than those for which they were originally designed, and thereby solve Hennie’s challenge. We show that three other prominent m-complete methods (H, SPY and SPYH) do not always generate k-A-complete test suites.
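
A small Python sketch (illustrative, not the Wp/HSI test-generation machinery) of the access sequences behind these fault domains: every implementation state is assumed reachable by some word from A followed by at most k arbitrary inputs, so the relevant prefixes are exactly A · I^{≤k}.

# Enumerate the prefixes a·w with a in A and w ranging over all input
# words of length at most k.
from itertools import product

def access_prefixes(A, inputs, k):
    for a in A:
        for length in range(k + 1):
            for w in product(inputs, repeat=length):
                yield tuple(a) + w

# Example: state cover A = {eps, "a"} of a 2-state spec, inputs {a, b}, k = 1.
A = [(), ("a",)]
prefixes = sorted(set(access_prefixes(A, ["a", "b"], k=1)))
print(prefixes)  # [(), ('a',), ('a', 'a'), ('a', 'b'), ('b',)]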

Cite as

Frits Vaandrager and Ivo Melse. New Fault Domains for Conformance Testing of Finite State Machines. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 34:1-34:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{vaandrager_et_al:LIPIcs.CONCUR.2025.34,
  author =	{Vaandrager, Frits and Melse, Ivo},
  title =	{{New Fault Domains for Conformance Testing of Finite State Machines}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{34:1--34:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.34},
  URN =		{urn:nbn:de:0030-drops-239843},
  doi =		{10.4230/LIPIcs.CONCUR.2025.34},
  annote =	{Keywords: conformance testing, finite state machines, Mealy machines, apartness, observation tree, fault domains, k-A-complete test suites}
}
Document
PtrHash: Minimal Perfect Hashing at RAM Throughput

Authors: Ragnar Groot Koerkamp

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
Motivation. Given a set K of n keys, a minimal perfect hash function (MPHF) is a collision-free bijective map H_mphf from K to {0, … , n-1}. These functions have uses in databases and search engines, and are used in bioinformatics indexing tools such as Pufferfish (using BBHash) and Piscem (PTHash). PTHash is also used in SSHash, a data structure on k-mers that supports membership queries. PTHash only takes around 5% of the total space of SSHash, and thus trading slightly more space for faster queries is beneficial. Thus, this work presents a (minimal) perfect hash function that prioritizes query throughput first, while also allowing efficient construction for 10⁹ or more elements using 2.4 bits of memory per key.

Contributions. Both PTHash and PHOBIC first map all n keys to n/λ < n buckets. Then, each bucket stores a pilot that controls the final hash value of the keys mapping to it. PtrHash builds on this by using 1) fixed-width (uncompressed) 8-bit pilots, and 2) a construction algorithm similar to Cuckoo hashing to find suitable pilot values. Further, it partitions the keys, so that keys in each part map to their own set of slots. PtrHash 3) uses the same number of buckets and slots for each part, with 4) a single remap table to map intermediate positions ≥ n to < n, 5) encoded using per-cacheline Elias-Fano coding. Lastly, 6) PtrHash supports streaming queries, where we use prefetching to answer a stream of multiple queries more efficiently than one-by-one processing.

Results. With default parameters, PtrHash takes 2.4 bits per key. On 300 million string keys, PtrHash is as fast or faster to build than other MPHFs at a similar size, and at least 2.1× faster to query. When streaming multiple queries, this improves to a 3.3× speedup over the fastest alternative, while also being significantly faster to construct. When using 10⁹ integer keys instead, query times are as low as 12 ns/key when iterating in a for loop, or even down to 8 ns/key when using the streaming approach, just short of the 7.4 ns inverse throughput of random memory accesses.
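
A simplified Python sketch of the query path of such a pilot-based MPHF. The hash mixing, seeds, and parameter names are placeholder assumptions, not PtrHash's actual Rust implementation, and the Cuckoo-style pilot search at construction time is omitted.

# Query path of a pilot-based MPHF: bucket -> 8-bit pilot -> slot,
# remapping the few slots >= n back into [0, n).
import hashlib

def _h(key: bytes, seed: int, mod: int) -> int:
    digest = hashlib.blake2b(seed.to_bytes(8, "little") + key, digest_size=8)
    return int.from_bytes(digest.digest(), "little") % mod

def mphf_query(key: bytes, pilots, remap, n: int, n_slots: int) -> int:
    bucket = _h(key, seed=0, mod=len(pilots))             # bucket of the key
    slot = _h(key, seed=1 + pilots[bucket], mod=n_slots)  # pilot picks the slot
    return slot if slot < n else remap[slot - n]

# With pilots found at construction time (the search for an 8-bit pilot
# per bucket that sends all its keys to free slots, evicting conflicting
# pilots Cuckoo-style when stuck), every key maps to a distinct slot in [0, n).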

Cite as

Ragnar Groot Koerkamp. PtrHash: Minimal Perfect Hashing at RAM Throughput. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 21:1-21:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{grootkoerkamp:LIPIcs.SEA.2025.21,
  author =	{Groot Koerkamp, Ragnar},
  title =	{{PtrHash: Minimal Perfect Hashing at RAM Throughput}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{21:1--21:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.21},
  URN =		{urn:nbn:de:0030-drops-232597},
  doi =		{10.4230/LIPIcs.SEA.2025.21},
  annote =	{Keywords: Minimal perfect hashing, Compressed Data Structures}
}
Document
Blocked Bloom Filters with Choices

Authors: Johanna Elena Schmitz, Jens Zentgraf, and Sven Rahmann

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
Probabilistic filters are approximate set membership data structures that represent a set of keys in small space, and answer set membership queries without false negative answers, but with a certain allowed false positive probability. Such filters are widely used in database systems, networks, storage systems and in biological sequence analysis because of their fast query times and low space requirements. Starting with Bloom filters in the 1970s, many filter data structures have been developed, each with its own advantages and disadvantages, e.g., Blocked Bloom filters, Cuckoo filters, XOR filters, Ribbon filters, and more. We introduce Blocked Bloom filters with choices that work similarly to Blocked Bloom filters, except that for each key there are two (or more) alternative choices of blocks where the key’s information may be stored. When inserting a key, we select the block using a cost function which takes into account the current load and the additional number of bits to be set in the candidate blocks. The result is a filter that partially inherits the advantages of a Blocked Bloom filter, such as the ability to insert keys rapidly online or the ability to slightly overload the filter with only a small penalty to the false positive rate. At the same time, it avoids the major disadvantage of a Blocked Bloom filter, namely the larger space consumption. Our new data structure uses less space at the same false positive rate, or has a lower false positive rate at the same space consumption as a Blocked Bloom filter. We discuss the methodology, cost functions for block selection, engineered implementation, a detailed performance evaluation and use cases in bioinformatics of Blocked Bloom filters with choices, showing that they can be of practical value. The implementation of the evaluated filters and the workflows used are provided via Gitlab at https://gitlab.com/rahmannlab/blowchoc-filters.
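
A minimal Python sketch of the two-choice idea. The cost function below, counting only the additional bits a candidate block would set, is a simplified stand-in for the paper's cost functions, and all parameters are illustrative.

# Blocked Bloom filter with two choices: each key hashes to two
# candidate 512-bit blocks; insertion sets its k bits in the block
# where that costs fewer new bits; a query must probe both candidates.
import hashlib

BLOCK_BITS = 512

def _hashes(key: bytes, seed: int, count: int, mod: int):
    out = []
    for i in range(count):
        d = hashlib.blake2b(bytes([seed, i]) + key, digest_size=8).digest()
        out.append(int.from_bytes(d, "little") % mod)
    return out

class BlowChocSketch:
    def __init__(self, num_blocks: int, k: int = 8):
        self.blocks = [0] * num_blocks  # each block is a 512-bit integer
        self.k = k

    def _candidates(self, key):
        b1, b2 = _hashes(key, seed=0, count=2, mod=len(self.blocks))
        bits = 0
        for pos in _hashes(key, seed=1, count=self.k, mod=BLOCK_BITS):
            bits |= 1 << pos
        return b1, b2, bits

    def insert(self, key):
        b1, b2, bits = self._candidates(key)
        # Cost = number of additional bits the candidate block would set.
        cost = lambda b: bin(bits & ~self.blocks[b]).count("1")
        target = b1 if cost(b1) <= cost(b2) else b2
        self.blocks[target] |= bits

    def contains(self, key):
        b1, b2, bits = self._candidates(key)
        return (self.blocks[b1] & bits) == bits or \
               (self.blocks[b2] & bits) == bits

f = BlowChocSketch(num_blocks=1024)
f.insert(b"ACGT")
print(f.contains(b"ACGT"), f.contains(b"TTTT"))  # True, (almost surely) False

Note that, unlike a plain Blocked Bloom filter, a query must probe both candidate blocks; this is the price paid for the improved space/false-positive trade-off.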

Cite as

Johanna Elena Schmitz, Jens Zentgraf, and Sven Rahmann. Blocked Bloom Filters with Choices. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{schmitz_et_al:LIPIcs.SEA.2025.25,
  author =	{Schmitz, Johanna Elena and Zentgraf, Jens and Rahmann, Sven},
  title =	{{Blocked Bloom Filters with Choices}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.25},
  URN =		{urn:nbn:de:0030-drops-232631},
  doi =		{10.4230/LIPIcs.SEA.2025.25},
  annote =	{Keywords: Probabilistic filter, Bloom filter, power of two choices}
}
Document
Computing Betti Tables and Minimal Presentations of Zero-Dimensional Persistent Homology

Authors: Dmitriy Morozov and Luis Scoccola

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
The Betti tables of a multigraded module encode the grades at which there is an algebraic change in the module. Multigraded modules show up in many areas of pure and applied mathematics, and in particular in topological data analysis, where they are known as persistence modules, and where their Betti tables describe the places at which the homology of filtered simplicial complexes changes. Although Betti tables of singly and bigraded modules are already being used in applications of topological data analysis, their computation in the bigraded case (which relies on an algorithm that is cubic in the size of the filtered simplicial complex) is a bottleneck when working with large datasets. We show that, in the special case of 0-dimensional homology (relevant for clustering and graph classification) Betti tables of bigraded modules can be computed in log-linear time. We also consider the problem of computing minimal presentations, and show that minimal presentations of 0-dimensional persistent homology can be computed in quadratic time, regardless of the grading poset.
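
As background for the zero-dimensional case, the sketch below gives the classical single-parameter routine: 0-dimensional persistent homology of a filtered graph via union-find and the elder rule (when two components merge, the younger one dies). The paper's log-linear and quadratic algorithms concern the bigraded and general poset-graded settings; this baseline is shown only for orientation.

# 0-dim persistence of a filtered graph with union-find and the elder rule.
def zero_dim_persistence(vertex_birth, edges):
    """vertex_birth: list of birth times; edges: (filtration, u, v) triples,
    processed in increasing filtration order. Returns (birth, death) pairs."""
    parent = list(range(len(vertex_birth)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    for t, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge closes a cycle: no 0-dim event
        # Elder rule: the component born later dies at time t.
        old, young = sorted((ru, rv), key=lambda r: vertex_birth[r])
        pairs.append((vertex_birth[young], t))
        parent[young] = old  # the elder root survives, so roots keep min births
    return pairs

# Three vertices born at 0, 1, 2; edges merge them at times 3 and 4.
print(zero_dim_persistence([0, 1, 2], [(3, 0, 1), (4, 1, 2)]))
# [(1, 3), (2, 4)]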

Cite as

Dmitriy Morozov and Luis Scoccola. Computing Betti Tables and Minimal Presentations of Zero-Dimensional Persistent Homology. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 69:1-69:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{morozov_et_al:LIPIcs.SoCG.2025.69,
  author =	{Morozov, Dmitriy and Scoccola, Luis},
  title =	{{Computing Betti Tables and Minimal Presentations of Zero-Dimensional Persistent Homology}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{69:1--69:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.69},
  URN =		{urn:nbn:de:0030-drops-232219},
  doi =		{10.4230/LIPIcs.SoCG.2025.69},
  annote =	{Keywords: Multiparameter persistence, Zero-dimensional homology, Minimal presentation, Betti table}
}
Document
Towards a Coq-verified Chain of Esterel Semantics

Authors: Lionel Rieg and Gérard Berry

Published in: LITES, Volume 10, Issue 1 (2025). Leibniz Transactions on Embedded Systems, Volume 10, Issue 1


Abstract
This article focuses on formally specifying and verifying the chain of formal semantics of the Esterel synchronous programming language using the Coq proof assistant. In particular, in addition to the standard logical (LBS) semantics, constructive semantics (CBS) and constructive state semantics (CSS), we introduce a novel microstep semantics that gets rid of the Must/Can potential function pair of the constructive semantics and can be viewed as an abstract version of Esterel’s circuit semantics used by compilers to generate software code and hardware designs. The article also comes with formal proofs in Coq of the equivalence between the CBS and CSS semantics and of the refinement of the CSS by the microstep semantics, except for the loop construct of Esterel.

Cite as

Lionel Rieg and Gérard Berry. Towards a Coq-verified Chain of Esterel Semantics. In LITES, Volume 10, Issue 1 (2025). Leibniz Transactions on Embedded Systems, Volume 10, Issue 1, pp. 2:1-2:54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{rieg_et_al:LITES.10.1.2,
  author =	{Rieg, Lionel and Berry, G\'{e}rard},
  title =	{{Towards a Coq-verified Chain of Esterel Semantics}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{2:1--2:54},
  ISSN =	{2199-2002},
  year =	{2025},
  volume =	{10},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.10.1.2},
  URN =		{urn:nbn:de:0030-drops-230144},
  doi =		{10.4230/LITES.10.1.2},
  annote =	{Keywords: Esterel programming language, formal verification, Coq proof assistant}
}
Document
Noisy (Binary) Searching: Simple, Fast and Correct

Authors: Dariusz Dereniowski, Aleksander Łukasiewicz, and Przemysław Uznański

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
This work considers the problem of noisy binary search in a sorted array. The noise is modeled by a parameter p that dictates that a comparison can be incorrect with probability p, independently of other queries. We state two types of upper bounds on the number of queries: for the worst-case and for the expected query complexity. The bounds improve the ones known to date, i.e., our algorithms require fewer queries. Additionally, they have simpler statements and work for the full range of parameters. All expected query complexities are tight up to lower-order terms. For the problem where the target prior is uniform over all possible inputs, we provide an algorithm with expected complexity upper bounded by (log₂ n + log₂ δ^{-1} + 3)/I(p), where n is the domain size, 0 ≤ p < 1/2 is the noise ratio, δ > 0 is the failure probability, and I(p) is the information gain function. As a side effect, we resolve some correctness issues in previous work. En route, we also obtain new and improved query complexities for the search problem generalized to arbitrary graphs. This paper continues and improves the lines of research of Burnashev and Zigangirov [Prob. Per. Informatsii, 1974], Ben-Or and Hassidim [FOCS 2008], Gu and Xu [STOC 2023], Emamjomeh-Zadeh et al. [STOC 2016], and Dereniowski et al. [SOSA@SODA 2019].
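
Assuming I(p) denotes the information gain 1 - H(p) of a single noisy comparison (the capacity of a binary symmetric channel with crossover probability p, as is standard in this line of work), the quoted bound can be evaluated directly:

# Evaluate the expected query bound (log2 n + log2(1/delta) + 3) / I(p)
# with I(p) = 1 - H(p), H being the binary entropy function.
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_queries(n, p, delta):
    return (math.log2(n) + math.log2(1 / delta) + 3) / (1 - binary_entropy(p))

# Searching among 2^20 items with 10% comparison noise, 1% failure probability:
print(round(expected_queries(n=2 ** 20, p=0.10, delta=0.01)))  # ~56 queries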

Cite as

Dariusz Dereniowski, Aleksander Łukasiewicz, and Przemysław Uznański. Noisy (Binary) Searching: Simple, Fast and Correct. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{dereniowski_et_al:LIPIcs.STACS.2025.29,
  author =	{Dereniowski, Dariusz and {\L}ukasiewicz, Aleksander and Uzna\'{n}ski, Przemys{\l}aw},
  title =	{{Noisy (Binary) Searching: Simple, Fast and Correct}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.29},
  URN =		{urn:nbn:de:0030-drops-228551},
  doi =		{10.4230/LIPIcs.STACS.2025.29},
  annote =	{Keywords: Graph Algorithms, Noisy Binary Search, Query Complexity, Reliability}
}
Document
The Complexity of Deciding Characteristic Formulae in Van Glabbeek’s Branching-Time Spectrum

Authors: Luca Aceto, Antonis Achilleos, Aggeliki Chalki, and Anna Ingólfsdóttir

Published in: LIPIcs, Volume 326, 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)


Abstract
Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek’s branching-time spectrum. Since characteristic formulae in each of those logics are exactly the satisfiable and prime ones, this article presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.

Cite as

Luca Aceto, Antonis Achilleos, Aggeliki Chalki, and Anna Ingólfsdóttir. The Complexity of Deciding Characteristic Formulae in Van Glabbeek’s Branching-Time Spectrum. In 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 326, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{aceto_et_al:LIPIcs.CSL.2025.26,
  author =	{Aceto, Luca and Achilleos, Antonis and Chalki, Aggeliki and Ing\'{o}lfsd\'{o}ttir, Anna},
  title =	{{The Complexity of Deciding Characteristic Formulae in Van Glabbeek’s Branching-Time Spectrum}},
  booktitle =	{33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)},
  pages =	{26:1--26:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-362-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{326},
  editor =	{Endrullis, J\"{o}rg and Schmitz, Sylvain},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2025.26},
  URN =		{urn:nbn:de:0030-drops-227836},
  doi =		{10.4230/LIPIcs.CSL.2025.26},
  annote =	{Keywords: Characteristic formulae, prime formulae, bisimulation, simulation relations, modal logics, complexity theory, satisfiability}
}
Document
Crash-Tolerant Exploration of Trees by Energy-Sharing Mobile Agents

Authors: Quentin Bramas, Toshimitsu Masuzawa, and Sébastien Tixeuil

Published in: LIPIcs, Volume 324, 28th International Conference on Principles of Distributed Systems (OPODIS 2024)


Abstract
We consider the problem of graph exploration by energy-sharing mobile agents that are subject to crash faults. More precisely, we consider a team of two agents, at most one of which may fail unpredictably, and the considered topology is that of connected acyclic graphs (i.e. trees). We consider both the asynchronous and the synchronous settings, and we provide necessary and sufficient conditions on the agents' energy.

Cite as

Quentin Bramas, Toshimitsu Masuzawa, and Sébastien Tixeuil. Crash-Tolerant Exploration of Trees by Energy-Sharing Mobile Agents. In 28th International Conference on Principles of Distributed Systems (OPODIS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 324, pp. 9:1-9:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bramas_et_al:LIPIcs.OPODIS.2024.9,
  author =	{Bramas, Quentin and Masuzawa, Toshimitsu and Tixeuil, S\'{e}bastien},
  title =	{{Crash-Tolerant Exploration of Trees by Energy-Sharing Mobile Agents}},
  booktitle =	{28th International Conference on Principles of Distributed Systems (OPODIS 2024)},
  pages =	{9:1--9:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-360-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{324},
  editor =	{Bonomi, Silvia and Galletta, Letterio and Rivi\`{e}re, Etienne and Schiavoni, Valerio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2024.9},
  URN =		{urn:nbn:de:0030-drops-225452},
  doi =		{10.4230/LIPIcs.OPODIS.2024.9},
  annote =	{Keywords: Mobile Agents, Distributed Algorithms, Energy sharing}
}
Document
Track A: Algorithms, Complexity and Games
Faster Algorithms for All-Pairs Bounded Min-Cuts

Authors: Amir Abboud, Loukas Georgiadis, Giuseppe F. Italiano, Robert Krauthgamer, Nikos Parotsidis, Ohad Trabelsi, Przemysław Uznański, and Daniel Wolleb-Graf

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
The All-Pairs Min-Cut problem (aka All-Pairs Max-Flow) asks to compute a minimum s-t cut (or just its value) for all pairs of vertices s,t. We study this problem in directed graphs with unit edge/vertex capacities (corresponding to edge/vertex connectivity). Our focus is on the k-bounded case, where the algorithm has to find all pairs with min-cut value less than k, and report only those. The most basic case k = 1 is the Transitive Closure (TC) problem, which can be solved in graphs with n vertices and m edges in time O(mn) combinatorially, and in time O(n^ω), where ω < 2.38 is the matrix-multiplication exponent. These time bounds are conjectured to be optimal. We present new algorithms and conditional lower bounds that advance the frontier for larger k, as follows:
- A randomized algorithm for vertex capacities that runs in time O((nk)^ω). This is only a factor k^ω away from the TC bound, and nearly matches it for all k = n^{o(1)}.
- Two deterministic algorithms for edge capacities (which is more general) that work in DAGs and further report a minimum cut for each pair. The first algorithm is combinatorial (does not involve matrix multiplication) and runs in time O(2^{O(k²)} · mn). The second algorithm can be faster on dense DAGs and runs in time O((k log n)^{4^{k+o(k)}} · n^ω). Previously, Georgiadis et al. [ICALP 2017] could match the TC bound (up to n^{o(1)} factors) only when k = 2, and now our two algorithms match it for all k = o(√(log n)) and k = o(log log n), respectively.
- The first super-cubic lower bound of n^{ω-1-o(1)} k² time under the 4-Clique conjecture, which holds even in the simplest case of DAGs with unit vertex capacities. It improves on the previous (SETH-based) lower bounds even in the unbounded setting k = n. For combinatorial algorithms, our reduction implies an n^{2-o(1)} k² conditional lower bound.
Thus, we identify new settings where the complexity of the problem is (conditionally) higher than that of TC. Our three sets of results are obtained via different techniques. The first one adapts the network coding method of Cheung, Lau, and Leung [SICOMP 2013] to vertex-capacitated digraphs. The second set exploits new insights on the structure of latest cuts together with suitable algebraic tools. The lower bounds arise from a novel reduction of a different structure than the SETH-based constructions.
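
For concreteness, here is the k = 1 base case mentioned above: Transitive Closure asks which pairs have min-cut value at least 1, i.e. reachability. A BFS from every vertex gives the combinatorial O(mn) bound (Python sketch below); the O(n^ω) bound instead repeatedly squares the Boolean adjacency matrix.

# Transitive closure by BFS from every source: O(mn) total.
from collections import deque

def transitive_closure(n, adj):
    """adj: adjacency lists of a directed graph on vertices 0..n-1."""
    reach = [set() for _ in range(n)]
    for s in range(n):
        queue, seen = deque([s]), {s}
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        reach[s] = seen
    return reach

print(transitive_closure(3, {0: [1], 1: [2], 2: []}))
# [{0, 1, 2}, {1, 2}, {2}]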

Cite as

Amir Abboud, Loukas Georgiadis, Giuseppe F. Italiano, Robert Krauthgamer, Nikos Parotsidis, Ohad Trabelsi, Przemysław Uznański, and Daniel Wolleb-Graf. Faster Algorithms for All-Pairs Bounded Min-Cuts. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 7:1-7:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2019.7,
  author =	{Abboud, Amir and Georgiadis, Loukas and Italiano, Giuseppe F. and Krauthgamer, Robert and Parotsidis, Nikos and Trabelsi, Ohad and Uzna\'{n}ski, Przemys{\l}aw and Wolleb-Graf, Daniel},
  title =	{{Faster Algorithms for All-Pairs Bounded Min-Cuts}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{7:1--7:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.7},
  URN =		{urn:nbn:de:0030-drops-105833},
  doi =		{10.4230/LIPIcs.ICALP.2019.7},
  annote =	{Keywords: All-pairs min-cut, k-reachability, network coding, Directed graphs, fine-grained complexity}
}
  • Refine by Type
  • 22 Document/PDF
  • 13 Document/HTML

  • Refine by Publication Year
  • 14 2025
  • 3 2019
  • 2 2018
  • 3 2017

  • Refine by Author
  • 5 Graf, Daniel
  • 3 Bärtschi, Andreas
  • 3 Uznanski, Przemyslaw
  • 3 Uznański, Przemysław
  • 3 Wolleb-Graf, Daniel
  • Show More...

  • Refine by Series/Journal
  • 18 LIPIcs
  • 3 OASIcs
  • 1 LITES

  • Refine by Classification
  • 4 Theory of computation → Design and analysis of algorithms
  • 2 Theory of computation → Computational geometry
  • 2 Theory of computation → Data structures design and analysis
  • 2 Theory of computation → Logic and verification
  • 2 Theory of computation → Modal and temporal logics
  • Show More...

  • Refine by Keyword
  • 2 delivery
  • 2 energy optimization
  • 2 fine-grained complexity
  • 2 mobile agents
  • 1 2-reachability
  • Show More...
