32 Search Results for "Schmidt, Michael"


Document
APPROX
Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment

Authors: Tomer Ezra, Stefano Leonardi, Michał Pawłowski, Matteo Russo, and Seeun William Umboh

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The online joint replenishment problem (JRP) is a fundamental problem in the area of online problems with delay. Over the last decade, several works have studied generalizations of JRP with different cost functions for servicing requests. Most prior works on JRP and its generalizations have focused on the clairvoyant setting. Recently, Touitou [Noam Touitou, 2023] developed a non-clairvoyant framework that provided an O(√(n log n)) upper bound for a wide class of generalized JRP, where n is the number of request types. We advance the study of non-clairvoyant algorithms by providing a simpler, modular framework that matches the competitive ratio established by Touitou for the same class of generalized JRP. Our key insight is to leverage universal algorithms for Set Cover to approximate arbitrary monotone subadditive functions using a simple class of functions termed disjoint. This allows us to reduce the problem to several independent instances of the TCP Acknowledgement problem, for which a simple 2-competitive non-clairvoyant algorithm is known. The modularity of our framework is a major advantage as it allows us to tailor the reduction to specific problems and obtain better competitive ratios. In particular, we obtain tight O(√n)-competitive algorithms for two significant problems: Multi-Level Aggregation and Weighted Symmetric Subadditive Joint Replenishment. We also show that, in contrast, Touitou’s algorithm is Ω(√(n log n))-competitive for both of these problems.
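
The reduction bottoms out in TCP Acknowledgement, where the classic 2-competitive non-clairvoyant rule is: acknowledge exactly when the delay cost accumulated by pending requests reaches the service cost. A minimal discretized sketch of that rule (the event loop, unit acknowledgement cost, and linear delay cost are illustrative assumptions, not part of the paper):

def tcp_ack_schedule(arrivals, ack_cost=1.0, dt=0.01, horizon=10.0):
    """arrivals: sorted packet arrival times; returns acknowledgement times."""
    pending, acks, delay = [], [], 0.0
    i, t = 0, 0.0
    while t < horizon:
        while i < len(arrivals) and arrivals[i] <= t:
            pending.append(arrivals[i])
            i += 1
        delay += len(pending) * dt          # each pending packet pays dt
        if pending and delay >= ack_cost:   # delay has caught up: serve all
            acks.append(round(t, 3))
            pending, delay = [], 0.0
        t += dt
    return acks

print(tcp_ack_schedule([0.0, 0.1, 5.0]))    # two acknowledgements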

Cite as

Tomer Ezra, Stefano Leonardi, Michał Pawłowski, Matteo Russo, and Seeun William Umboh. Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ezra_et_al:LIPIcs.APPROX/RANDOM.2024.12,
  author =	{Ezra, Tomer and Leonardi, Stefano and Paw{\l}owski, Micha{\l} and Russo, Matteo and Umboh, Seeun William},
  title =	{{Universal Optimization for Non-Clairvoyant Subadditive Joint Replenishment}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.12},
  URN =		{urn:nbn:de:0030-drops-210050},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.12},
  annote =	{Keywords: Set Cover, Joint Replenishment, TCP-Acknowledgment, Subadditive Function Approximation, Multi-Level Aggregation}
}
Document
APPROX
Learning-Augmented Maximum Independent Set

Authors: Vladimir Braverman, Prathamesh Dharangutte, Vihan Shah, and Chen Wang

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the Maximum Independent Set (MIS) problem on general graphs within the framework of learning-augmented algorithms. The MIS problem is known to be NP-hard and is also NP-hard to approximate to within a factor of n^(1-δ) for any δ > 0. We show that we can break this barrier in the presence of an oracle obtained through predictions from a machine learning model that answers vertex membership queries for a fixed MIS with probability 1/2 + ε. In the first setting we consider, the oracle can be queried at most once per vertex, and each answer is correct with probability 1/2 + ε. Under this setting, we show an algorithm that obtains an Õ((√Δ)/ε)-approximation in O(m) time where Δ is the maximum degree of the graph. In the second setting, we allow multiple queries to the oracle for a vertex, each of which is correct with probability 1/2 + ε. For this setting, we show an O(1)-approximation algorithm using O(n/ε²) total queries and Õ(m) runtime.
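
The multiple-query setting rests on standard success amplification: an answer that is correct with probability 1/2 + ε can be boosted by a majority vote over O(log(1/δ)/ε²) repetitions. A hedged sketch of that building block (the oracle interface and the toy oracle are assumptions for illustration, not the paper's algorithm):

import math, random

def amplified_membership(oracle, v, eps, delta=0.1):
    """Majority vote over repeated noisy membership queries for vertex v."""
    reps = math.ceil(math.log(1.0 / delta) / (2 * eps * eps))  # Chernoff bound
    votes = sum(1 if oracle(v) else 0 for _ in range(reps))
    return 2 * votes > reps

# Toy oracle: ground truth is (v % 2 == 0), flipped with probability 1/2 - eps.
eps = 0.1
noisy = lambda v: (v % 2 == 0) ^ (random.random() < 0.5 - eps)
print(amplified_membership(noisy, 4, eps))   # True with high probability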

Cite as

Vladimir Braverman, Prathamesh Dharangutte, Vihan Shah, and Chen Wang. Learning-Augmented Maximum Independent Set. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 24:1-24:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{braverman_et_al:LIPIcs.APPROX/RANDOM.2024.24,
  author =	{Braverman, Vladimir and Dharangutte, Prathamesh and Shah, Vihan and Wang, Chen},
  title =	{{Learning-Augmented Maximum Independent Set}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{24:1--24:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.24},
  URN =		{urn:nbn:de:0030-drops-210179},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.24},
  annote =	{Keywords: Learning-augmented algorithms, maximum independent set, graph algorithms}
}
Document
Constrictor: Immutability as a Design Concept

Authors: Elad Kinsbruner, Shachar Itzhaky, and Hila Peleg

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Many object-oriented applications in algorithm design rely on objects never changing during their lifetime. This is often tackled by marking object references as read-only, e.g., using the const keyword in C++. In other languages like Python or Java where such a concept does not exist, programmers rely on best practices that are entirely unenforced. While reliance on best practices is obviously too permissive, const-checking is too restrictive: it is possible for a method to mutate the internal state while still satisfying the property we expect from an "immutable" object in this setting. We would therefore like to enforce the immutability of an object’s abstract state. We check an object’s immutability through a view of its abstract state: for instances of an immutable class, the view does not change when running any of the class’s methods, even if some of the internal state does change. If all methods of a class are verified as non-mutating, we can deem the entire class view-immutable. We present an SMT-based algorithm to check view-immutability, and implement it in our linter/verifier, Constrictor. We evaluate Constrictor on 51 examples of immutability-related design violations. Our evaluation shows that Constrictor is effective at catching a variety of prototypical design violations, and does so in seconds. We also explore Constrictor with two real-world case studies.
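
The property itself is easy to state operationally: fix a view function over the abstract state, run each method, and require the view to be unchanged even when internal fields mutate. Constrictor establishes this statically with SMT; the runtime check below is only a sketch of the property, with all class and function names invented for illustration:

import copy

class CachedNorm:
    """Internally mutating (it memoizes) yet view-immutable."""
    def __init__(self, data):
        self._data = tuple(data)
        self._cache = None                 # internal state that may change
    def norm(self):
        if self._cache is None:
            self._cache = sum(x * x for x in self._data) ** 0.5
        return self._cache
    def view(self):                        # abstract state: the data only
        return self._data

def check_view_immutable(obj, method_names):
    for name in method_names:
        before = copy.deepcopy(obj.view())
        getattr(obj, name)()
        if obj.view() != before:           # view changed: mutation detected
            return False
    return True

print(check_view_immutable(CachedNorm([3, 4]), ["norm"]))   # True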

Cite as

Elad Kinsbruner, Shachar Itzhaky, and Hila Peleg. Constrictor: Immutability as a Design Concept. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 22:1-22:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{kinsbruner_et_al:LIPIcs.ECOOP.2024.22,
  author =	{Kinsbruner, Elad and Itzhaky, Shachar and Peleg, Hila},
  title =	{{Constrictor: Immutability as a Design Concept}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{22:1--22:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.22},
  URN =		{urn:nbn:de:0030-drops-208715},
  doi =		{10.4230/LIPIcs.ECOOP.2024.22},
  annote =	{Keywords: Immutability, Design Enforcement, SMT, Liskov Substitution Principle, Object-oriented Programming}
}
Document
Failure Transparency in Stateful Dataflow Systems

Authors: Aleksey Veresov, Jonas Spenger, Paris Carbone, and Philipp Haller

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Failure transparency enables users to reason about distributed systems at a higher level of abstraction, where complex failure-handling logic is hidden. This is especially true for stateful dataflow systems, which are the backbone of many cloud applications. In particular, this paper focuses on proving failure transparency in Apache Flink, a popular stateful dataflow system. Even though failure transparency is a critical aspect of Apache Flink, to date it has not been formally proven. Showing that the failure transparency mechanism is correct, however, is challenging due to the complexity of the mechanism itself. Nevertheless, this complexity can be effectively hidden behind a failure transparent programming interface. To show that Apache Flink is failure transparent, we model it in small-step operational semantics. Next, we provide a novel definition of failure transparency based on observational explainability, a concept which relates executions according to their observations. Finally, we provide a formal proof of failure transparency for the implementation model; i.e., we prove that the failure-free model correctly abstracts from the failure-related details of the implementation model. We also show liveness of the implementation model under a fair execution assumption. These results are a first step towards a verified stack for stateful dataflow systems.
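
The mechanism whose correctness is at stake can be pictured as checkpoint-and-replay: a deterministic operator snapshots its state, and after a failure restores the snapshot and replays the logged input suffix, so an observer cannot distinguish the run from a failure-free one. A minimal sketch under those assumptions (the names and the single-operator failure model are illustrative; the paper works with a full small-step semantics of Flink-style dataflows):

class Operator:
    def __init__(self):
        self.state, self.log, self.snapshot = 0, [], (0, 0)
    def process(self, x):
        self.log.append(x)
        self.state += x                    # deterministic state update
        return self.state
    def checkpoint(self):
        self.snapshot = (self.state, len(self.log))
    def recover(self):                     # restore, then replay the suffix
        self.state, replay_from = self.snapshot
        for x in self.log[replay_from:]:
            self.state += x

op = Operator()
for x in [1, 2]:
    op.process(x)
op.checkpoint()
op.process(3)
op.recover()                               # simulated crash and recovery
print(op.state)                            # 6, as in the failure-free run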

Cite as

Aleksey Veresov, Jonas Spenger, Paris Carbone, and Philipp Haller. Failure Transparency in Stateful Dataflow Systems. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 42:1-42:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{veresov_et_al:LIPIcs.ECOOP.2024.42,
  author =	{Veresov, Aleksey and Spenger, Jonas and Carbone, Paris and Haller, Philipp},
  title =	{{Failure Transparency in Stateful Dataflow Systems}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{42:1--42:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.42},
  URN =		{urn:nbn:de:0030-drops-208911},
  doi =		{10.4230/LIPIcs.ECOOP.2024.42},
  annote =	{Keywords: Failure transparency, stateful dataflow, operational semantics, checkpoint recovery}
}
Document
A Modular Formalization of Superposition in Isabelle/HOL

Authors: Martin Desharnais, Balazs Toth, Uwe Waldmann, Jasmin Blanchette, and Sophie Tourret

Published in: LIPIcs, Volume 309, 15th International Conference on Interactive Theorem Proving (ITP 2024)


Abstract
Superposition is an efficient proof calculus for reasoning about first-order logic with equality that is implemented in many automatic theorem provers. It works by saturating the given set of clauses and is refutationally complete, meaning that if the set is inconsistent, the saturation will contain a contradiction. In this work, we restructured the completeness proof to cleanly separate the ground (i.e., variable-free) and nonground aspects, and we formalized the result in Isabelle/HOL. We relied on the IsaFoR library for first-order terms and on the Isabelle saturation framework.
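
The saturation loop at the heart of such provers is the given-clause algorithm: repeatedly move a clause from the unprocessed set to the processed set, adding all inferences between it and the processed clauses, and stop if the empty clause appears. The sketch below uses ground resolution as a stand-in inference rule purely for illustration; the formalized calculus is superposition, with its ordering and selection restrictions:

def saturate(clauses, infer):
    processed, unprocessed = [], list(clauses)
    while unprocessed:
        given = unprocessed.pop()
        if given == frozenset():           # empty clause: contradiction
            return "unsat"
        for c in processed:
            for new in infer(given, c):
                if new not in processed and new not in unprocessed:
                    unprocessed.append(new)
        processed.append(given)
    return "saturated"

def ground_resolve(c1, c2):                # stand-in inference rule
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

cnf = [frozenset({"p"}), frozenset({"-p"})]
print(saturate(cnf, ground_resolve))       # unsat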

Cite as

Martin Desharnais, Balazs Toth, Uwe Waldmann, Jasmin Blanchette, and Sophie Tourret. A Modular Formalization of Superposition in Isabelle/HOL. In 15th International Conference on Interactive Theorem Proving (ITP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 309, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{desharnais_et_al:LIPIcs.ITP.2024.12,
  author =	{Desharnais, Martin and Toth, Balazs and Waldmann, Uwe and Blanchette, Jasmin and Tourret, Sophie},
  title =	{{A Modular Formalization of Superposition in Isabelle/HOL}},
  booktitle =	{15th International Conference on Interactive Theorem Proving (ITP 2024)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-337-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{309},
  editor =	{Bertot, Yves and Kutsia, Temur and Norrish, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2024.12},
  URN =		{urn:nbn:de:0030-drops-207401},
  doi =		{10.4230/LIPIcs.ITP.2024.12},
  annote =	{Keywords: Superposition, verification, first-order logic, higher-order logic}
}
Document
Invariants for One-Counter Automata with Disequality Tests

Authors: Dmitry Chistikov, Jérôme Leroux, Henry Sinclair-Banks, and Nicolas Waldburger

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
We study the reachability problem for one-counter automata in which transitions can carry disequality tests. A disequality test is a guard that prohibits a specified counter value. This reachability problem has been known to be NP-hard and in PSPACE, and characterising its computational complexity has been left as a challenging open question by Almagor, Cohen, Pérez, Shirmohammadi, and Worrell (2020). We reduce the complexity gap, placing the problem into the second level of the polynomial hierarchy, namely into the class coNP^NP. In the presence of both equality and disequality tests, our upper bound is at the third level, P^NP^NP. To prove this result, we show that non-reachability can be witnessed by a pair of invariants (forward and backward). These invariants are almost inductive. They aim to over-approximate only a "core" of the reachability set instead of the entire set. The invariants are also leaky: it is possible to escape the set. We complement this with separate checks as the leaks can only occur in a controlled way.
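
To make the model concrete, a one-counter automaton with disequality tests can be explored explicitly once the counter is artificially bounded; the paper's contribution is precisely to reason about reachability without such a bound. A toy search under that assumption (the transition format is invented for illustration):

from collections import deque

def reachable(transitions, start, target, bound):
    """transitions: list of (state, next_state, delta, forbidden_value_or_None)."""
    seen, queue = {start}, deque([start])
    while queue:
        q, c = queue.popleft()
        if (q, c) == target:
            return True
        for (p, p2, d, neq) in transitions:
            if p == q and (neq is None or c != neq):   # disequality guard
                c2 = c + d
                if 0 <= c2 <= bound and (p2, c2) not in seen:
                    seen.add((p2, c2))
                    queue.append((p2, c2))
    return False

# Transition a -> b is blocked while the counter equals 2.
trans = [("a", "a", +1, None), ("a", "b", 0, 2)]
print(reachable(trans, ("a", 0), ("b", 3), bound=10))   # True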

Cite as

Dmitry Chistikov, Jérôme Leroux, Henry Sinclair-Banks, and Nicolas Waldburger. Invariants for One-Counter Automata with Disequality Tests. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chistikov_et_al:LIPIcs.CONCUR.2024.17,
  author =	{Chistikov, Dmitry and Leroux, J\'{e}r\^{o}me and Sinclair-Banks, Henry and Waldburger, Nicolas},
  title =	{{Invariants for One-Counter Automata with Disequality Tests}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.17},
  URN =		{urn:nbn:de:0030-drops-207898},
  doi =		{10.4230/LIPIcs.CONCUR.2024.17},
  annote =	{Keywords: Inductive invariant, Vector addition system, One-counter automaton}
}
Document
A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers

Authors: Maarten Flippo, Konstantin Sidorov, Imko Marijnissen, Jeff Smits, and Emir Demirović

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Proof logging is used to increase trust in the optimality and unsatisfiability claims of solvers. However, to date, no constraint programming solver can practically produce proofs without significantly impacting performance, which hinders mainstream adoption. We address this issue by introducing a novel proof generation framework, together with a CP proof format and proof checker. Our approach is to divide the proof generation into three steps. At runtime, we require the CP solver to only produce a proof sketch, which we call a scaffold. After the solving is done, our proof processor trims and expands the scaffold into a full CP proof, which is subsequently verified. Our framework is agnostic to the solver and the verification approach. Through MiniZinc benchmarks, we demonstrate that with our framework, the overhead of logging during solving is often less than 10%, significantly lower than other approaches, and that our proof processing step can reduce the overall size of the proof by orders of magnitude and by extension the proof checking time. Our results demonstrate that proof logging has the potential to become an integral part of the CP community.
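
The division of labour is the key design point: the solver only logs a cheap scaffold, and all expensive work is deferred to an offline processor and checker. A skeletal sketch of that pipeline (all function bodies are placeholders; the actual scaffold format, trimming, expansion, and checking are defined by the paper's CP proof format):

def solve_and_log(instance):
    answer = "UNSAT"                        # placeholder solver result
    scaffold = ["hint: conflict on c1, c2"] # cheap per-conflict hints
    return answer, scaffold

def expand(hint):
    return ["derive ... (" + hint + ")"]    # placeholder expansion

def process(scaffold):
    # Trim unused hints, then expand the rest into checkable steps.
    return [step for hint in scaffold for step in expand(hint)]

def verify_step(step):
    return True                             # placeholder verification

def check(proof):
    return all(verify_step(s) for s in proof)

answer, scaffold = solve_and_log("model.mzn")
proof = process(scaffold)
print(answer, "certified" if check(proof) else "rejected")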

Cite as

Maarten Flippo, Konstantin Sidorov, Imko Marijnissen, Jeff Smits, and Emir Demirović. A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{flippo_et_al:LIPIcs.CP.2024.11,
  author =	{Flippo, Maarten and Sidorov, Konstantin and Marijnissen, Imko and Smits, Jeff and Demirovi\'{c}, Emir},
  title =	{{A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.11},
  URN =		{urn:nbn:de:0030-drops-206969},
  doi =		{10.4230/LIPIcs.CP.2024.11},
  annote =	{Keywords: proof logging, formal verification, constraint programming}
}
Document
Applying the Safe-And-Complete Framework to Practical Genome Assembly

Authors: Sebastian Schmidt, Santeri Toivonen, Paul Medvedev, and Alexandru I. Tomescu

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
Despite the long history of genome assembly research, there remains a large gap between the theoretical and practical work. There is practical software with little theoretical underpinning of accuracy on one hand and theoretical algorithms which have not been adopted in practice on the other. In this paper we attempt to bridge the gap between theory and practice by showing how the theoretical safe-and-complete framework can be integrated into existing assemblers in order to improve contiguity. The optimal algorithm in this framework, called the omnitig algorithm, has not been used in practice due to its complexity and its lack of robustness to real data. Instead, we pursue a simplified notion of omnitigs (simple omnitigs), giving an efficient algorithm to compute them and demonstrating their safety under certain conditions. We modify two assemblers (wtdbg2 and Flye) by replacing their unitig algorithm with the simple omnitig algorithm. We test our modifications using real HiFi data from the D. melanogaster and the C. elegans genomes. Our modified algorithms lead to a substantial improvement in alignment-based contiguity, with negligible additional computational costs and either no or a small increase in the number of misassemblies.
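
The baseline being replaced is easy to state: unitigs are the maximal non-branching paths of the assembly graph. A minimal sketch on a plain directed graph (the adjacency-list format is assumed for illustration; it ignores isolated cycles, and the paper's simple omnitigs extend these paths further under proven safety conditions):

def unitigs(succ):
    pred = {}
    for u, vs in succ.items():
        for v in vs:
            pred.setdefault(v, []).append(u)
    def internal(v):                   # exactly one way in and one way out
        return len(succ.get(v, [])) == 1 and len(pred.get(v, [])) == 1
    paths = []
    for u in succ:
        if internal(u):
            continue                   # unitigs start at branching nodes
        for v in succ[u]:
            path = [u, v]
            while internal(path[-1]):
                path.append(succ[path[-1]][0])
            paths.append(path)
    return paths

g = {"a": ["b"], "b": ["c"], "c": ["d", "e"], "d": [], "e": []}
print(unitigs(g))   # [['a', 'b', 'c'], ['c', 'd'], ['c', 'e']]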

Cite as

Sebastian Schmidt, Santeri Toivonen, Paul Medvedev, and Alexandru I. Tomescu. Applying the Safe-And-Complete Framework to Practical Genome Assembly. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{schmidt_et_al:LIPIcs.WABI.2024.8,
  author =	{Schmidt, Sebastian and Toivonen, Santeri and Medvedev, Paul and Tomescu, Alexandru I.},
  title =	{{Applying the Safe-And-Complete Framework to Practical Genome Assembly}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.8},
  URN =		{urn:nbn:de:0030-drops-206520},
  doi =		{10.4230/LIPIcs.WABI.2024.8},
  annote =	{Keywords: Genome assembly, Omnitigs, Safe-and-complete framework, graph algorithm, HiFi sequencing data, Assembly evaluation}
}
Document
Reconstructing Rearrangement Phylogenies of Natural Genomes

Authors: Leonard Bohnenkämper, Jens Stoye, and Daniel Dörr

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
We study the classical problem of inferring ancestral genomes from a set of extant genomes under a given phylogeny, known as the Small Parsimony Problem (SPP). Genomes are represented as sequences of oriented markers, organized in one or more linear or circular chromosomes. Any marker may appear in several copies, without restriction on orientation or genomic location, known as the natural genomes model. Evolutionary events along the branches of the phylogeny encompass large scale rearrangements, including segmental inversions, translocations, gain and loss (DCJ-indel model). Even under simpler rearrangement models, such as the classical breakpoint model without duplicates, the SPP is computationally intractable. Nevertheless, the SPP for natural genomes under the DCJ-indel model has been studied recently, with limited success. Here, we improve on that earlier work, giving a highly optimized ILP that is able to solve the SPP for sufficiently small phylogenies and gene families. A notable improvement w.r.t. the previous result is an optimized way of handling both circular and linear chromosomes. This is especially relevant to the SPP, since the chromosomal structure of ancestral genomes is unknown and the solution space for this chromosomal structure is typically large. We benchmark our method on simulated and real data. On simulated phylogenies we observe a considerable performance improvement on problems that include linear chromosomes. And even when the ground truth contains only one circular chromosome per genome, our method outperforms its predecessor due to its optimized handling of the solution space. The practical advantage also becomes visible in an analysis of seven Anopheles taxa.
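
The objective is the classical one: minimize the total evolutionary distance summed over the branches of the phylogeny. In its textbook form, with single characters under Hamming distance, the SPP is solved exactly by Fitch's algorithm, sketched below for contrast; the paper's ILP tackles the much harder variant where leaves are whole genomes and the branch distance is the DCJ-indel distance:

def fitch(tree, leaf_state, node):
    """tree: internal node -> (left, right); leaves carry a single state."""
    if node not in tree:
        return {leaf_state[node]}, 0
    left, right = tree[node]
    sl, cl = fitch(tree, leaf_state, left)
    sr, cr = fitch(tree, leaf_state, right)
    inter = sl & sr
    if inter:
        return inter, cl + cr              # agreement: no event here
    return sl | sr, cl + cr + 1            # disagreement: one change

tree = {"root": ("x", "C"), "x": ("A", "B")}
states = {"A": "g1", "B": "g2", "C": "g1"}
_, cost = fitch(tree, states, "root")
print(cost)   # 1 change suffices on this three-leaf tree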

Cite as

Leonard Bohnenkämper, Jens Stoye, and Daniel Dörr. Reconstructing Rearrangement Phylogenies of Natural Genomes. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bohnenkamper_et_al:LIPIcs.WABI.2024.12,
  author =	{Bohnenk\"{a}mper, Leonard and Stoye, Jens and D\"{o}rr, Daniel},
  title =	{{Reconstructing Rearrangement Phylogenies of Natural Genomes}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.12},
  URN =		{urn:nbn:de:0030-drops-206564},
  doi =		{10.4230/LIPIcs.WABI.2024.12},
  annote =	{Keywords: genome rearrangement, ancestral reconstruction, small parsimony, integer linear programming, double-cut-and-join}
}
Document
The Path-Label Reconciliation (PLR) Dissimilarity Measure for Gene Trees

Authors: Alitzel López Sánchez, José Antonio Ramírez-Rafael, Alejandro Flores-Lamas, Maribel Hernández-Rosales, and Manuel Lafond

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
In this study, we investigate the problem of comparing gene trees reconciled with the same species tree using a novel semi-metric, called the Path-Label Reconciliation (PLR) dissimilarity measure. This approach not only quantifies differences in the topology of reconciled gene trees, but also considers discrepancies in predicted ancestral gene-species maps and speciation/duplication events, offering a refinement of existing metrics such as Robinson-Foulds (RF) and their labeled extensions LRF and ELRF. A tunable parameter α also allows users to adjust the balance between its species map and event labeling components. We show that PLR can be computed in linear time and that it is a semi-metric. We also discuss the diameters of reconciled gene tree measures, which are important in practice for normalization, and provide initial bounds on PLR, LRF, and ELRF. To validate PLR, we simulate reconciliations and perform comparisons with LRF and ELRF. The results show that PLR provides a more evenly distributed range of distances, making it less susceptible to overestimating differences in the presence of small topological changes, while at the same time being computationally efficient. Our findings suggest that the theoretical diameter is rarely reached in practice. The PLR measure advances phylogenetic reconciliation by combining theoretical rigor with practical applicability. Future research will refine its mathematical properties, explore its performance on different tree types, and integrate it with existing bioinformatics tools for large-scale evolutionary analyses. The open source code is available at: https://pypi.org/project/parle/.
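
Only the overall shape of the measure is sketched here, and heavily hedged: a convex combination, weighted by α, of a species-map disagreement term and an event-label disagreement term over the gene-tree nodes. The actual path-based definition, the semi-metric proof, and the diameter-based normalization are in the paper:

def plr_like(nodes, map1, map2, ev1, ev2, alpha):
    """Illustrative alpha-weighted blend of map and event disagreements."""
    map_term = sum(map1[v] != map2[v] for v in nodes)
    ev_term = sum(ev1[v] != ev2[v] for v in nodes)
    return (alpha * map_term + (1 - alpha) * ev_term) / len(nodes)

nodes = ["u", "v"]
print(plr_like(nodes,
               {"u": "S1", "v": "S2"}, {"u": "S1", "v": "S3"},        # maps
               {"u": "dup", "v": "spec"}, {"u": "dup", "v": "spec"},  # events
               alpha=0.5))   # 0.25: one map disagreement over two nodes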

Cite as

Alitzel López Sánchez, José Antonio Ramírez-Rafael, Alejandro Flores-Lamas, Maribel Hernández-Rosales, and Manuel Lafond. The Path-Label Reconciliation (PLR) Dissimilarity Measure for Gene Trees. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{lopezsanchez_et_al:LIPIcs.WABI.2024.20,
  author =	{L\'{o}pez S\'{a}nchez, Alitzel and Ram{\'\i}rez-Rafael, Jos\'{e} Antonio and Flores-Lamas, Alejandro and Hern\'{a}ndez-Rosales, Maribel and Lafond, Manuel},
  title =	{{The Path-Label Reconciliation (PLR) Dissimilarity Measure for Gene Trees}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.20},
  URN =		{urn:nbn:de:0030-drops-206645},
  doi =		{10.4230/LIPIcs.WABI.2024.20},
  annote =	{Keywords: Reconciliation, gene trees, species trees, evolutionary scenarios}
}
Document
Anchorage Accurately Assembles Anchor-Flanked Synthetic Long Reads

Authors: Xiaofei Carl Zang, Xiang Li, Kyle Metcalfe, Tuval Ben-Yehezkel, Ryan Kelley, and Mingfu Shao

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
Modern sequencing technologies allow for the addition of short-sequence tags, known as anchors, to both ends of a captured molecule. Anchors are useful in assembling the full-length sequence of a captured molecule as they can be used to accurately determine the endpoints. One representative of such anchor-enabled technology is LoopSeq Solo, a synthetic long read (SLR) sequencing protocol. LoopSeq Solo also achieves ultra-high sequencing depth and high purity of short reads covering the entire captured molecule. Despite the availability of many assembly methods, constructing full-length sequence from these anchor-enabled, ultra-high coverage sequencing data remains challenging due to the complexity of the underlying assembly graphs and the lack of specific algorithms leveraging anchors. We present Anchorage, a novel assembler that performs anchor-guided assembly for ultra-high-depth sequencing data. Anchorage starts with a k-mer-based approach for precise estimation of molecule lengths. It then formulates the assembly problem as finding an optimal path that connects the two nodes determined by anchors in the underlying compact de Bruijn graph. Optimality is defined as maximizing the minimum node weight along the path while matching the estimated sequence length. Anchorage uses a modified dynamic programming algorithm to efficiently find the optimal path. Through both simulations and real data, we show that Anchorage outperforms existing assembly methods, particularly in the presence of sequencing artifacts. Anchorage fills the gap in assembling anchor-enabled data. We anticipate its broad use as anchor-enabled sequencing technologies become prevalent. Anchorage is freely available at https://github.com/Shao-Group/anchorage; the scripts and documents that can reproduce all experiments in this manuscript are available at https://github.com/Shao-Group/anchorage-test.
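
The stated objective, maximizing the smallest node weight among s-t walks of a prescribed total length, admits a simple dynamic program over (node, consumed length) states. The sketch below is one way to realize that objective on a toy graph; the tool's actual graph construction, length estimation, and scoring are described in the paper:

from functools import lru_cache

def best_bottleneck(succ, weight, length, s, t, L):
    @lru_cache(maxsize=None)
    def dp(v, used):
        used += length[v]
        if used > L:
            return float("-inf")           # walk too long: dead end
        if v == t and used == L:
            return weight[v]               # anchor reached at exact length
        best = float("-inf")
        for w in succ.get(v, []):
            best = max(best, dp(w, used))
        return min(weight[v], best)        # bottleneck along the walk
    return dp(s, 0)

succ = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
weight = {"s": 9, "a": 2, "b": 7, "t": 9}
length = {"s": 1, "a": 1, "b": 1, "t": 1}
print(best_bottleneck(succ, weight, length, "s", "t", 3))   # 7, via b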

Cite as

Xiaofei Carl Zang, Xiang Li, Kyle Metcalfe, Tuval Ben-Yehezkel, Ryan Kelley, and Mingfu Shao. Anchorage Accurately Assembles Anchor-Flanked Synthetic Long Reads. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 22:1-22:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{zang_et_al:LIPIcs.WABI.2024.22,
  author =	{Zang, Xiaofei Carl and Li, Xiang and Metcalfe, Kyle and Ben-Yehezkel, Tuval and Kelley, Ryan and Shao, Mingfu},
  title =	{{Anchorage Accurately Assembles Anchor-Flanked Synthetic Long Reads}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{22:1--22:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.22},
  URN =		{urn:nbn:de:0030-drops-206660},
  doi =		{10.4230/LIPIcs.WABI.2024.22},
  annote =	{Keywords: Genome assembly, de Bruijn graph, synthetic long reads, anchor-guided assembly, LoopSeq}
}
Document
Faster Approximation Schemes for (Constrained) k-Means with Outliers

Authors: Zhen Zhang, Junyu Huang, and Qilong Feng

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Given a set of n points in ℝ^d and two positive integers k and m, the Euclidean k-means with outliers problem aims to remove at most m points, referred to as outliers, and minimize the k-means cost function for the remaining points. Developing algorithms for this problem remains an active area of research due to its prevalence in applications involving noisy data. In this paper, we give a (1+ε)-approximation algorithm that runs in n²d((k+m)ε^{-1})^O(kε^{-1}) time for the problem. When combined with a coreset construction method, the running time of the algorithm can be improved to be linear in n. For the case where k is a constant, this represents the first polynomial-time approximation scheme for the problem: Existing algorithms with the same approximation guarantee run in polynomial time only when both k and m are constants. Furthermore, our approach generalizes to variants of k-means with outliers incorporating additional constraints on instances, such as those related to capacities and fairness.
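
For contrast with the scheme above, the simplest practical baseline for this objective is the k-means-- heuristic of Chawla and Gionis: alternate Lloyd updates with discarding the m farthest points. It illustrates the problem but carries no (1+ε) guarantee, which is exactly what the paper's algorithm provides:

import random

def kmeans_minus_minus(points, k, m, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        dist = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
        cost = lambda p: min(dist(p, c) for c in centers)
        kept = sorted(points, key=cost)[: len(points) - m]   # drop m outliers
        clusters = [[] for _ in range(k)]
        for p in kept:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else random.choice(points) for cl in clusters]
    return centers

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (100, 100)]
print(kmeans_minus_minus(pts, k=2, m=1))   # (100, 100) is typically dropped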

Cite as

Zhen Zhang, Junyu Huang, and Qilong Feng. Faster Approximation Schemes for (Constrained) k-Means with Outliers. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 84:1-84:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{zhang_et_al:LIPIcs.MFCS.2024.84,
  author =	{Zhang, Zhen and Huang, Junyu and Feng, Qilong},
  title =	{{Faster Approximation Schemes for (Constrained) k-Means with Outliers}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{84:1--84:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.84},
  URN =		{urn:nbn:de:0030-drops-206408},
  doi =		{10.4230/LIPIcs.MFCS.2024.84},
  annote =	{Keywords: Approximation algorithms, clustering}
}
Document
On the Relative Efficiency of Dynamic and Static Top-Down Compilation to Decision-DNNF

Authors: Alexis de Colnet

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Top-down compilers of CNF formulas to circuits in decision-DNNF (Decomposable Negation Normal Form) have proved to be useful for model counting. These compilers rely on a common set of techniques including DPLL-style exploration of the set of models, caching of residual formulas, and connected components detection. Differences between compilers lie in the variable selection heuristics and in the additional processing techniques they may use. We investigate, from a theoretical perspective, the ability of top-down compilation algorithms to find small decision-DNNF circuits for two different variable selection strategies. Both strategies are guided by a graph of the CNF formula and are inspired by what is done in practice. The first uses a dynamic graph-partitioning approach while the second works with a static tree decomposition. We show that the dynamic approach performs significantly better than the static approach for some formulas, and that the opposite also holds for other formulas. Our lower bounds are proved despite loose settings where the compilation algorithm is only forced to follow its designed variable selection strategy and where everything else, including the many opportunities for tie-breaking, can be handled non-deterministically.
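
The common skeleton of these compilers can be sketched as a model counter: DPLL-style branching, connected-component decomposition (the decomposable AND nodes of the circuit), and caching of residual formulas, with everything specific to the paper concentrated in the variable selection rule pick. A minimal sketch under those assumptions (clauses are frozensets of signed integer literals; this counts models rather than materializing the decision-DNNF circuit):

def vars_of(clauses):
    return {abs(l) for c in clauses for l in c}

def assign(clauses, v, val):
    lit = v if val else -v
    return frozenset(frozenset(l for l in c if l != -lit)
                     for c in clauses if lit not in c)

def component_of(seed, clauses):
    comp, frontier = {seed}, {seed}
    while frontier:
        vs = vars_of(frontier)
        frontier = {c for c in clauses - comp if vars_of({c}) & vs}
        comp |= frontier
    return frozenset(comp)

def count(clauses, variables, pick, cache=None):
    cache = {} if cache is None else cache
    key = (clauses, frozenset(variables))
    if key in cache:
        return cache[key]
    if frozenset() in clauses:
        res = 0                                    # conflict
    elif not clauses:
        res = 2 ** len(variables)                  # remaining vars are free
    else:
        comp = component_of(next(iter(clauses)), clauses)
        rest = clauses - comp
        if rest:                                   # decomposable AND node
            cv = vars_of(comp)
            res = (count(comp, cv, pick, cache)
                   * count(rest, variables - cv, pick, cache))
        else:                                      # decision node on v
            v = pick(clauses)
            res = sum(count(assign(clauses, v, b), variables - {v},
                            pick, cache) for b in (True, False))
    cache[key] = res
    return res

cnf = frozenset({frozenset({1, 2}), frozenset({3})})
print(count(cnf, vars_of(cnf), lambda cs: min(vars_of(cs))))   # 3 models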

Cite as

Alexis de Colnet. On the Relative Efficiency of Dynamic and Static Top-Down Compilation to Decision-DNNF. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 11:1-11:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{decolnet:LIPIcs.SAT.2024.11,
  author =	{de Colnet, Alexis},
  title =	{{On the Relative Efficiency of Dynamic and Static Top-Down Compilation to Decision-DNNF}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{11:1--11:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.11},
  URN =		{urn:nbn:de:0030-drops-205339},
  doi =		{10.4230/LIPIcs.SAT.2024.11},
  annote =	{Keywords: Knowledge compilation, top-down compilation, decision-DNNF Circuits}
}
Document
Strategy Extraction by Interpolation

Authors: Friedrich Slivovsky

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
In applications, QBF solvers are often required to generate strategies. This typically involves a process known as strategy extraction, where a Boolean circuit encoding a strategy is computed from a proof. It has previously been observed that Craig interpolation in propositional logic can be seen as a special case of QBF strategy extraction. In this paper we explore this connection further and show that, conversely, any strategy for a false QBF corresponds to a sequence of interpolants in its complete (Herbrand) expansion. Inspired by this correspondence, we present a new strategy extraction algorithm for the expansion-based proof system Exp+Res. Its asymptotic running time matches the best known bound of O(mn) for a proof with m lines and n universally quantified variables. We report on experiments comparing this algorithm with a strategy extraction algorithm based on combining partial strategies, as well as with round-based strategy extraction.

Cite as

Friedrich Slivovsky. Strategy Extraction by Interpolation. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{slivovsky:LIPIcs.SAT.2024.28,
  author =	{Slivovsky, Friedrich},
  title =	{{Strategy Extraction by Interpolation}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.28},
  URN =		{urn:nbn:de:0030-drops-205509},
  doi =		{10.4230/LIPIcs.SAT.2024.28},
  annote =	{Keywords: QBF, Expansion, Strategy Extraction, Interpolation}
}
Document
The Platin Multi-Target Worst-Case Analysis Tool

Authors: Emad Jacob Maroun, Eva Dengler, Christian Dietrich, Stefan Hepp, Henriette Herzog, Benedikt Huber, Jens Knoop, Daniel Wiltsche-Prokesch, Peter Puschner, Phillip Raffeck, Martin Schoeberl, Simon Schuster, and Peter Wägemann

Published in: OASIcs, Volume 121, 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)


Abstract
With the increasing number of applications that require reliable runtime guarantees, the relevance of static worst-case analysis tools that can provide such guarantees increases. Using a model of the underlying hardware, these analysis tools determine resource-consumption bounds of application tasks so that given resource budgets, such as the deadlines of real-time tasks, can be met at runtime. This paper presents enhancements to the Platin worst-case analysis tool developed since its original release more than ten years ago. These novelties comprise Platin’s support for new architectures (i.e., ARMv6-M, RISC-V, and AVR) in addition to the previous backends for Patmos and ARMv7-M. Further, Platin now features system-wide analysis methods and annotation support to express system-level constraints. Besides an overview of these enhancements, we evaluate Platin’s accuracy for the two supported architecture implementations, Patmos and RISC-V.

Cite as

Emad Jacob Maroun, Eva Dengler, Christian Dietrich, Stefan Hepp, Henriette Herzog, Benedikt Huber, Jens Knoop, Daniel Wiltsche-Prokesch, Peter Puschner, Phillip Raffeck, Martin Schoeberl, Simon Schuster, and Peter Wägemann. The Platin Multi-Target Worst-Case Analysis Tool. In 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024). Open Access Series in Informatics (OASIcs), Volume 121, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{maroun_et_al:OASIcs.WCET.2024.2,
  author =	{Maroun, Emad Jacob and Dengler, Eva and Dietrich, Christian and Hepp, Stefan and Herzog, Henriette and Huber, Benedikt and Knoop, Jens and Wiltsche-Prokesch, Daniel and Puschner, Peter and Raffeck, Phillip and Schoeberl, Martin and Schuster, Simon and W\"{a}gemann, Peter},
  title =	{{The Platin Multi-Target Worst-Case Analysis Tool}},
  booktitle =	{22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)},
  pages =	{2:1--2:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-346-1},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{121},
  editor =	{Carle, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2024.2},
  URN =		{urn:nbn:de:0030-drops-204704},
  doi =		{10.4230/OASIcs.WCET.2024.2},
  annote =	{Keywords: worst-case resource consumption, WCET, static analysis tool}
}