LIPIcs, Volume 160

18th International Symposium on Experimental Algorithms (SEA 2020)



Event

SEA 2020, June 16-18, 2020, Catania, Italy

Editors

Simone Faro
  • University of Catania, Italy
Domenico Cantone
  • University of Catania, Italy

Publication Details

  • published at: 2020-06-12
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-148-1
  • DBLP: db/conf/wea/sea2020

Documents
Complete Volume
LIPIcs, Volume 160, SEA 2020, Complete Volume

Authors: Simone Faro and Domenico Cantone


Abstract
LIPIcs, Volume 160, SEA 2020, Complete Volume

Cite as

18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 1-366, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Proceedings{faro_et_al:LIPIcs.SEA.2020,
  title =	{{LIPIcs, Volume 160, SEA 2020, Complete Volume}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{1--366},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020},
  URN =		{urn:nbn:de:0030-drops-120730},
  doi =		{10.4230/LIPIcs.SEA.2020},
  annote =	{Keywords: LIPIcs, Volume 160, SEA 2020, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Simone Faro and Domenico Cantone


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{faro_et_al:LIPIcs.SEA.2020.0,
  author =	{Faro, Simone and Cantone, Domenico},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.0},
  URN =		{urn:nbn:de:0030-drops-120741},
  doi =		{10.4230/LIPIcs.SEA.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Invited Talk
Algorithm Engineering for High-Dimensional Similarity Search Problems (Invited Talk)

Authors: Martin Aumüller


Abstract
Similarity search problems in high-dimensional data arise in many areas of computer science such as databases, image analysis, machine learning, and natural language processing. One of the most prominent problems is finding the k nearest neighbors of a data point q ∈ ℝ^d in a large set of data points S ⊂ ℝ^d, under some distance measure such as Euclidean distance. In contrast to lower-dimensional settings, we do not know of worst-case efficient data structures for such search problems in high-dimensional data, i.e., data structures that are faster than a linear scan through the data set. However, there is a rich body of (often heuristic) approaches that solve nearest neighbor search problems much faster than such a scan on many real-world data sets. Out of necessity, the term "solve" here means that these approaches give approximate results that are close to the true k-nearest neighbors. In this talk, we survey recent approaches to nearest neighbor search and related problems. The talk consists of three parts: (1) What makes nearest neighbor search difficult? (2) How do current state-of-the-art algorithms work? (3) What are recent advances regarding similarity search on GPUs, in distributed settings, or in external memory?
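To make the problem statement concrete, here is a minimal brute-force sketch (illustrative only, not from the talk) of the linear scan that the surveyed data structures try to beat; all names and parameters are hypothetical:

import numpy as np

def knn_linear_scan(S: np.ndarray, q: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k points in S closest to q (Euclidean)."""
    dists = np.linalg.norm(S - q, axis=1)  # one distance per data point
    return np.argsort(dists)[:k]           # indices of the k smallest

rng = np.random.default_rng(0)
S = rng.standard_normal((10_000, 128))     # n = 10000 points, d = 128
q = rng.standard_normal(128)
print(knn_linear_scan(S, q, k=5))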

Cite as

Martin Aumüller. Algorithm Engineering for High-Dimensional Similarity Search Problems (Invited Talk). In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 1:1-1:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{aumuller:LIPIcs.SEA.2020.1,
  author =	{Aum\"{u}ller, Martin},
  title =	{{Algorithm Engineering for High-Dimensional Similarity Search Problems}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{1:1--1:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.1},
  URN =		{urn:nbn:de:0030-drops-120751},
  doi =		{10.4230/LIPIcs.SEA.2020.1},
  annote =	{Keywords: Nearest neighbor search, Benchmarking}
}
Invited Talk
Algorithm Engineering for Sorting and Searching, and All That (Invited Talk)

Authors: Stefan Edelkamp


Abstract
We look at several proposals to engineer the set of fundamental searching and sorting algorithms. Aspects include improving the locality of disk and cache accesses, tuning efficiency by reducing the number of branch mispredictions, and reducing the leading constant factors hidden in the Big-Oh notation. These studies in algorithm engineering, in turn, lead to exciting new algorithm designs. On the practical side, we will establish that efficient sorting and searching algorithms work in tight collaboration, as sorting is used for finding duplicates in disk-based search, and heap structures designed for efficient graph search can be exploited in classical and adaptive sorting. We indicate the effects of engineered sorting and searching for combined task and motion planning.

Cite as

Stefan Edelkamp. Algorithm Engineering for Sorting and Searching, and All That (Invited Talk). In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 2:1-2:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{edelkamp:LIPIcs.SEA.2020.2,
  author =	{Edelkamp, Stefan},
  title =	{{Algorithm Engineering for Sorting and Searching, and All That}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{2:1--2:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.2},
  URN =		{urn:nbn:de:0030-drops-120766},
  doi =		{10.4230/LIPIcs.SEA.2020.2},
  annote =	{Keywords: Sorting, Searching, Algorithm Engineering}
}
Invited Talk
Indexing Compressed Text: A Tale of Time and Space (Invited Talk)

Authors: Nicola Prezza


Abstract
Text indexing is a classical algorithmic problem that has been studied for over four decades. The earliest optimal-time solution to the problem, the suffix tree [Weiner, 1973], dates back to 1973 and requires up to two orders of magnitude more space than the text to be stored. In the year 2000, two breakthrough works [Grossi and Vitter, 2000; Ferragina and Manzini, 2000] showed that this space overhead is not necessary: both the index and the text can be stored in a space proportional to the text’s entropy. These contributions had an enormous impact in bioinformatics: nowadays, the two most widely-used DNA aligners employ compressed indexes [Li and Durbin, 2009; Langmead et al., 2009]. In recent years, it became apparent that entropy had reached its limits: modern datasets (for example, collections of thousands of human genomes) are extremely large but very repetitive and, by its very definition, entropy cannot compress repetitive texts [S. Kreft and G. Navarro, 2013]. To overcome this problem, a new generation of indexes based on dictionary compressors (for example, LZ77 and run-length BWT) emerged [S. Kreft and G. Navarro, 2013; Gagie et al., 2020; F. Claude and G. Navarro, 2012], together with generalizations of the indexing problem to labeled graphs [Ferragina et al., 2009; Sirén et al., 2014; Travis Gagie et al., 2017]. This talk is a short and friendly survey of the landmarks of this fascinating path that took us from suffix trees to the most modern compressed indexes on labeled graphs.
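As a toy illustration of why dictionary compressors such as the run-length BWT handle repetitive text so well (this naive construction is for exposition only and is not from the talk):

def bwt(text: str) -> str:
    """Naive Burrows-Wheeler transform: last column of the sorted rotations."""
    t = text + "$"  # unique terminator
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(row[-1] for row in rotations)

def run_lengths(s: str) -> list:
    """Collapse equal-character runs; repetitive inputs yield few runs."""
    runs = []
    for c in s:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return runs

print(run_lengths(bwt("banana" * 10)))  # highly repetitive text -> few runs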

Cite as

Nicola Prezza. Indexing Compressed Text: A Tale of Time and Space (Invited Talk). In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 3:1-3:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{prezza:LIPIcs.SEA.2020.3,
  author =	{Prezza, Nicola},
  title =	{{Indexing Compressed Text: A Tale of Time and Space}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{3:1--3:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.3},
  URN =		{urn:nbn:de:0030-drops-120772},
  doi =		{10.4230/LIPIcs.SEA.2020.3},
  annote =	{Keywords: Compressed Text Indexing}
}
High-Quality Hierarchical Process Mapping

Authors: Marcelo Fonseca Faraj, Alexander van der Grinten, Henning Meyerhenke, Jesper Larsson Träff, and Christian Schulz


Abstract
Partitioning graphs into blocks of roughly equal size such that few edges run between blocks is a frequently needed operation when processing graphs on a parallel computer. When the topology of the distributed system is known, an important task is then to map the blocks of the partition onto the processors such that the overall communication cost is reduced. We present novel multilevel algorithms that integrate graph partitioning and process mapping. Important ingredients of our algorithm include fast label propagation, more localized local search, initial partitioning, as well as a compressed data structure to compute processor distances without storing a distance matrix. Moreover, our algorithms are able to exploit a given hierarchical structure of the distributed system under consideration. Experiments indicate that our algorithms speed up the overall mapping process and, due to the integrated multilevel approach, also find much better solutions in practice. For example, one configuration of our algorithm yields solution quality similar to the previous state-of-the-art for large numbers of partitions while being a factor of 9.3 faster. Compared to Scotch, the currently fastest iterated multilevel mapping algorithm, we obtain 16% better solutions while investing slightly more running time.
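As a hedged sketch of the objective being optimized (names are illustrative, not the paper's): the communication cost of a mapping is the sum, over edges cut by the partition, of the edge weight times the distance between the processors the endpoints are mapped to.

def mapping_cost(edges, proc, proc_dist):
    """edges: (u, v, w) triples; proc[v]: processor of vertex v;
    proc_dist[a][b]: communication distance between processors a and b."""
    return sum(w * proc_dist[proc[u]][proc[v]]
               for u, v, w in edges if proc[u] != proc[v])

edges = [(0, 1, 3), (1, 2, 1), (0, 2, 2)]
proc = {0: 0, 1: 0, 2: 1}          # vertices 0, 1 on processor 0; vertex 2 on 1
proc_dist = [[0, 2], [2, 0]]       # e.g. two nodes in the same rack
print(mapping_cost(edges, proc, proc_dist))  # 1*2 + 2*2 = 6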

Cite as

Marcelo Fonseca Faraj, Alexander van der Grinten, Henning Meyerhenke, Jesper Larsson Träff, and Christian Schulz. High-Quality Hierarchical Process Mapping. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{faraj_et_al:LIPIcs.SEA.2020.4,
  author =	{Faraj, Marcelo Fonseca and van der Grinten, Alexander and Meyerhenke, Henning and Tr\"{a}ff, Jesper Larsson and Schulz, Christian},
  title =	{{High-Quality Hierarchical Process Mapping}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.4},
  URN =		{urn:nbn:de:0030-drops-120782},
  doi =		{10.4230/LIPIcs.SEA.2020.4},
  annote =	{Keywords: Process Mapping, Graph Partitioning, Algorithm Engineering}
}
Probing a Set of Trajectories to Maximize Captured Information

Authors: Sándor P. Fekete, Alexander Hill, Dominik Krupke, Tyler Mayer, Joseph S. B. Mitchell, Ojas Parekh, and Cynthia A. Phillips


Abstract
We study a trajectory analysis problem we call the Trajectory Capture Problem (TCP), in which, for a given input set T of trajectories in the plane, and an integer k ≥ 2, we seek to compute a set of k points ("portals") to maximize the total weight of all subtrajectories of T between pairs of portals. This problem naturally arises in trajectory analysis and summarization. We show that the TCP is NP-hard (even in very special cases) and give some first approximation results. Our main focus is on attacking the TCP with practical algorithm-engineering approaches, including integer linear programming (to solve instances to provable optimality) and local search methods. We study the integrality gap arising from such approaches. We analyze our methods on different classes of data, including benchmark instances that we generate. Our goal is to understand the best performing heuristics, based on both solution time and solution quality. We demonstrate that we are able to compute provably optimal solutions for real-world instances.

Cite as

Sándor P. Fekete, Alexander Hill, Dominik Krupke, Tyler Mayer, Joseph S. B. Mitchell, Ojas Parekh, and Cynthia A. Phillips. Probing a Set of Trajectories to Maximize Captured Information. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{fekete_et_al:LIPIcs.SEA.2020.5,
  author =	{Fekete, S\'{a}ndor P. and Hill, Alexander and Krupke, Dominik and Mayer, Tyler and Mitchell, Joseph S. B. and Parekh, Ojas and Phillips, Cynthia A.},
  title =	{{Probing a Set of Trajectories to Maximize Captured Information}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.5},
  URN =		{urn:nbn:de:0030-drops-120796},
  doi =		{10.4230/LIPIcs.SEA.2020.5},
  annote =	{Keywords: Algorithm engineering, optimization, complexity, approximation, trajectories}
}
Storing Set Families More Compactly with Top ZDDs

Authors: Kotaro Matsuda, Shuhei Denzumi, and Kunihiko Sadakane


Abstract
Zero-suppressed Binary Decision Diagrams (ZDDs) are data structures for representing set families in a compressed form. With ZDDs, many valuable operations on set families can be done in time polynomial in ZDD size. In some cases, however, the size of ZDDs for representing large set families becomes too huge to store them in the main memory. This paper proposes the top ZDD, a novel representation of ZDDs that uses less space than existing ones. The top ZDD extends the top tree, a data structure for compressing trees, to directed acyclic graphs, which it compresses by sharing identical subgraphs. We prove that navigational operations on ZDDs can be done in time poly-logarithmic in ZDD size, and show that there exist set families for which the size of the top ZDD is exponentially smaller than that of the ZDD. We also show experimentally that our top ZDDs are smaller than ZDDs on real data.

Cite as

Kotaro Matsuda, Shuhei Denzumi, and Kunihiko Sadakane. Storing Set Families More Compactly with Top ZDDs. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{matsuda_et_al:LIPIcs.SEA.2020.6,
  author =	{Matsuda, Kotaro and Denzumi, Shuhei and Sadakane, Kunihiko},
  title =	{{Storing Set Families More Compactly with Top ZDDs}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.6},
  URN =		{urn:nbn:de:0030-drops-120809},
  doi =		{10.4230/LIPIcs.SEA.2020.6},
  annote =	{Keywords: top tree, Zero-suppressed Decision Diagram, space-efficient data structure}
}
Fast and Simple Compact Hashing via Bucketing

Authors: Dominik Köppl, Simon J. Puglisi, and Rajeev Raman


Abstract
Compact hash tables store a set S of n key-value pairs, where the keys are from the universe U = {0,…,u-1} and the values are v-bit integers, in close to B(u, n) + nv bits of space, where B(u, n) = log₂ binom(u, n) is the information-theoretic lower bound for representing the set of keys in S, and support the operations insert, delete and lookup on S. Compact hash tables have received significant attention in recent years, and approaches dating back to Cleary [IEEE T. Comput, 1984], as well as more recent ones, have been implemented and used in a number of applications. However, the wins on space usage of these approaches are outweighed by their slowness relative to conventional hash tables. In this paper, we demonstrate that compact hash tables based upon a simple idea of bucketing practically outperform existing compact hash table implementations in terms of memory usage and construction time, and existing fast hash table implementations in terms of memory usage (and sometimes also in terms of construction time). A related notion is that of a compact Hash ID map, which stores a set Ŝ of n keys from U, and implicitly associates each key in Ŝ with a unique value (its ID), chosen by the data structure itself, which is an integer of magnitude O(n), and supports inserts and lookups on Ŝ, while using close to B(u, n) bits. One of our approaches is suitable for use as a compact Hash ID map.
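As a quick worked check of the space bound quoted above (a sketch with hypothetical parameters, not code from the paper):

from math import comb, log2

def B(u: int, n: int) -> float:
    """Information-theoretic lower bound B(u, n) = log2 binom(u, n) in bits."""
    return log2(comb(u, n))

u, n, v = 2**20, 10**4, 8  # hypothetical universe size, key count, value bits
print(f"{B(u, n) / n:.2f} key bits + {v} value bits per entry")  # ~8.15 + 8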

Cite as

Dominik Köppl, Simon J. Puglisi, and Rajeev Raman. Fast and Simple Compact Hashing via Bucketing. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{koppl_et_al:LIPIcs.SEA.2020.7,
  author =	{K\"{o}ppl, Dominik and Puglisi, Simon J. and Raman, Rajeev},
  title =	{{Fast and Simple Compact Hashing via Bucketing}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.7},
  URN =		{urn:nbn:de:0030-drops-120817},
  doi =		{10.4230/LIPIcs.SEA.2020.7},
  annote =	{Keywords: compact hashing, hash table, separate chaining}
}
Effect of Initial Assignment on Local Search Performance for Max Sat

Authors: Daniel Berend and Yochai Twitto


Abstract
In this paper, we explore the correlation between the quality of initial assignments provided to local search heuristics and that of the corresponding final assignments. We restrict our attention to the Max r-Sat problem and to one of the leading local search heuristics - Configuration Checking Local Search (CCLS). We use a tailored version of the Method of Conditional Expectations (MOCE) to generate initial assignments of diverse quality. We show that the correlation in question is significant and long-lasting. Namely, even when we delve deeper into the local search, we are still in the shadow of the initial assignment. Thus, under practical time constraints, the quality of the initial assignment is crucial to the performance of local search heuristics. To demonstrate our point, we improve CCLS by combining it with MOCE. Instead of starting CCLS from random initial assignments, we start it from excellent initial assignments, provided by MOCE. Indeed, it turns out that this kind of initialization provides a significant improvement of this state-of-the-art solver. This improvement becomes more and more significant as the instance grows.
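A minimal sketch of the Method of Conditional Expectations for Max-SAT in its textbook form (the paper uses a tailored version, which may differ): fix the variables one at a time, each time keeping the value that maximizes the expected number of satisfied clauses when the remaining variables are set by fair coin flips.

def cond_expectation(clauses, assign):
    """E[#satisfied clauses]: fixed variables follow assign, the rest are
    independent fair coin flips. Literal v means x_v = True, -v means False."""
    total = 0.0
    for clause in clauses:
        unfixed, satisfied = 0, False
        for lit in clause:
            if abs(lit) in assign:
                if assign[abs(lit)] == (lit > 0):
                    satisfied = True
                    break
            else:
                unfixed += 1
        total += 1.0 if satisfied else 1.0 - 0.5 ** unfixed
    return total

def moce(n_vars, clauses):
    assign = {}
    for v in range(1, n_vars + 1):
        assign[v] = True
        e_true = cond_expectation(clauses, assign)
        assign[v] = False
        assign[v] = e_true > cond_expectation(clauses, assign)
    return assign

clauses = [[1, -2, 3], [-1, 2], [2, 3], [-3]]
a = moce(3, clauses)
print(a, cond_expectation(clauses, a))  # with all variables fixed, the
                                        # expectation equals #satisfied clauses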

Cite as

Daniel Berend and Yochai Twitto. Effect of Initial Assignment on Local Search Performance for Max Sat. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{berend_et_al:LIPIcs.SEA.2020.8,
  author =	{Berend, Daniel and Twitto, Yochai},
  title =	{{Effect of Initial Assignment on Local Search Performance for Max Sat}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.8},
  URN =		{urn:nbn:de:0030-drops-120823},
  doi =		{10.4230/LIPIcs.SEA.2020.8},
  annote =	{Keywords: Combinatorial Optimization, Maximum Satisfiability, Local Search, Probabilistic Algorithms}
}
Enumerating All Subgraphs Under Given Constraints Using Zero-Suppressed Sentential Decision Diagrams

Authors: Yu Nakahata, Masaaki Nishino, Jun Kawahara, and Shin-ichi Minato


Abstract
Subgraph enumeration is a fundamental task in computer science. Since the number of subgraphs can be large, some enumeration algorithms exploit compressed representations for efficiency. One such representation is the Zero-suppressed Binary Decision Diagram (ZDD). ZDDs can represent the set of subgraphs compactly and support several poly-time queries, such as counting and random sampling. Researchers have proposed efficient algorithms to construct ZDDs representing the set of subgraphs under several constraints, which yield fruitful results in many applications. Recently, Zero-suppressed Sentential Decision Diagrams (ZSDDs) have been proposed as variants of ZDDs. ZSDDs can be smaller than ZDDs when representing the same set of subgraphs. However, efficient algorithms to construct ZSDDs are known only for specific types of subgraphs: matchings and paths. We propose a novel framework to construct ZSDDs representing sets of subgraphs under given constraints. Using our framework, we can construct ZSDDs representing several sets of subgraphs such as matchings, paths, cycles, and spanning trees. We bound the size of the constructed ZSDDs by the branch-width of the input graph; this bound is smaller than the corresponding bound for ZDDs, which is in terms of path-width. Experiments show that our methods can construct ZSDDs faster than ZDDs and that the constructed ZSDDs are smaller than ZDDs when representing the same set of subgraphs.

Cite as

Yu Nakahata, Masaaki Nishino, Jun Kawahara, and Shin-ichi Minato. Enumerating All Subgraphs Under Given Constraints Using Zero-Suppressed Sentential Decision Diagrams. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{nakahata_et_al:LIPIcs.SEA.2020.9,
  author =	{Nakahata, Yu and Nishino, Masaaki and Kawahara, Jun and Minato, Shin-ichi},
  title =	{{Enumerating All Subgraphs Under Given Constraints Using Zero-Suppressed Sentential Decision Diagrams}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.9},
  URN =		{urn:nbn:de:0030-drops-120831},
  doi =		{10.4230/LIPIcs.SEA.2020.9},
  annote =	{Keywords: Subgraph, Enumeration, Decision Diagram, Zero-suppressed Sentential Decision Diagram (ZSDD), Top-down construction algorithm}
}
Engineering Exact Quasi-Threshold Editing

Authors: Lars Gottesbüren, Michael Hamann, Philipp Schoch, Ben Strasser, Dorothea Wagner, and Sven Zühlsdorf


Abstract
Quasi-threshold graphs are {C₄, P₄}-free graphs, i.e., they do not contain any cycle or path of four nodes as an induced subgraph. We study the {C₄, P₄}-free editing problem, which is the problem of finding a minimum number of edge insertions or deletions to transform an input graph into a quasi-threshold graph. This problem is NP-hard but fixed-parameter tractable (FPT) in the number of edits by using a branch-and-bound algorithm, and it admits a simple integer linear programming (ILP) formulation. Both methods are also applicable to the general ℱ-free editing problem for any finite set of graphs ℱ. For the FPT algorithm, we introduce a fast heuristic for computing high-quality lower bounds and an improved branching strategy. For the ILP, we engineer several variants of row generation. We evaluate both methods for quasi-threshold editing on a large set of protein similarity graphs. For most instances, our optimizations speed up the FPT algorithm by one to three orders of magnitude. The running time of the ILP, which we solve using Gurobi, improves only slightly. With all optimizations, the FPT algorithm is slightly faster than the ILP, even when listing all solutions. Additionally, we show that for almost all graphs, solutions of the previously proposed quasi-threshold editing heuristic QTM are close to optimal.
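For intuition, the quasi-threshold property itself can be checked by brute force on tiny graphs; the following hypothetical helper (not the paper's algorithm, which solves the much harder editing problem) recognizes an induced C₄ or P₄ directly:

from itertools import combinations, permutations

def is_quasi_threshold(adj):
    """True iff the graph has no induced P4 or C4; adj: vertex -> neighbor set."""
    for quad in combinations(adj, 4):
        for a, b, c, d in permutations(quad):
            if (b in adj[a]) and (c in adj[b]) and (d in adj[c]):
                if c not in adj[a] and d not in adj[b]:
                    # induced P4 if ad is a non-edge, induced C4 if it is an edge
                    return False
    return True

p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path on four vertices
print(is_quasi_threshold(p4))  # False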

Cite as

Lars Gottesbüren, Michael Hamann, Philipp Schoch, Ben Strasser, Dorothea Wagner, and Sven Zühlsdorf. Engineering Exact Quasi-Threshold Editing. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.SEA.2020.10,
  author =	{Gottesb\"{u}ren, Lars and Hamann, Michael and Schoch, Philipp and Strasser, Ben and Wagner, Dorothea and Z\"{u}hlsdorf, Sven},
  title =	{{Engineering Exact Quasi-Threshold Editing}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.10},
  URN =		{urn:nbn:de:0030-drops-120849},
  doi =		{10.4230/LIPIcs.SEA.2020.10},
  annote =	{Keywords: Edge Editing, Integer Linear Programming, FPT algorithm, Quasi-Threshold Editing}
}
Advanced Flow-Based Multilevel Hypergraph Partitioning

Authors: Lars Gottesbüren, Michael Hamann, Sebastian Schlag, and Dorothea Wagner


Abstract
The balanced hypergraph partitioning problem is to partition a hypergraph into k disjoint blocks of bounded size such that the sum of the number of blocks connected by each hyperedge is minimized. We present an improvement to the flow-based refinement framework of KaHyPar-MF, the current state-of-the-art multilevel k-way hypergraph partitioning algorithm for high-quality solutions. Our improvement is based on the recently proposed HyperFlowCutter algorithm for computing bipartitions of unweighted hypergraphs by solving a sequence of incremental maximum flow problems. Since vertices and hyperedges are aggregated during the coarsening phase, refinement algorithms employed in the multilevel setting must be able to handle both weighted hyperedges and weighted vertices - even if the initial input hypergraph is unweighted. We therefore enhance HyperFlowCutter to handle weighted instances and propose a technique for computing maximum flows directly on weighted hypergraphs. We compare the performance of two configurations of our new algorithm with KaHyPar-MF and seven other partitioning algorithms on a comprehensive benchmark set with instances from application areas such as VLSI design, scientific computing, and SAT solving. Our first configuration, KaHyPar-HFC, computes slightly better solutions than KaHyPar-MF using significantly less running time. The second configuration, KaHyPar-HFC*, computes solutions of significantly better quality and is still slightly faster than KaHyPar-MF. Furthermore, in terms of solution quality, both configurations also outperform all other competing partitioners.
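The objective can be stated in a few lines; this sketch uses the standard (λ - 1) formulation (each hyperedge contributes the number of blocks it touches minus one), which differs from the plain sum in the abstract only by the constant number of hyperedges; names are illustrative:

def connectivity_objective(hyperedges, block):
    """Sum over hyperedges of (#blocks the hyperedge touches - 1).
    hyperedges: iterable of vertex lists; block[v]: block id of vertex v."""
    return sum(len({block[v] for v in e}) - 1 for e in hyperedges)

hyperedges = [[0, 1, 2], [2, 3], [0, 3, 4]]
block = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(connectivity_objective(hyperedges, block))  # 0 + 1 + 1 = 2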

Cite as

Lars Gottesbüren, Michael Hamann, Sebastian Schlag, and Dorothea Wagner. Advanced Flow-Based Multilevel Hypergraph Partitioning. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.SEA.2020.11,
  author =	{Gottesb\"{u}ren, Lars and Hamann, Michael and Schlag, Sebastian and Wagner, Dorothea},
  title =	{{Advanced Flow-Based Multilevel Hypergraph Partitioning}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.11},
  URN =		{urn:nbn:de:0030-drops-120859},
  doi =		{10.4230/LIPIcs.SEA.2020.11},
  annote =	{Keywords: Hypergraph Partitioning, Maximum Flows, Refinement}
}
Pattern Discovery in Colored Strings

Authors: Zsuzsanna Lipták, Simon J. Puglisi, and Massimiliano Rossi


Abstract
We consider the problem of identifying patterns of interest in colored strings. A colored string is a string in which each position is colored with one of a finite set of colors. Our task is to find substrings that always occur followed by the same color at the same distance. The problem is motivated by applications in embedded systems verification, in particular, assertion mining. The goal there is to automatically infer properties of the embedded system from the analysis of its simulation traces. We show that the number of interesting patterns is upper-bounded by 𝒪(n²) where n is the length of the string. We introduce a baseline algorithm with 𝒪(n²) running time which identifies all interesting patterns for all colors in the string satisfying certain minimality conditions. When one is interested in patterns related to only one color, we provide an algorithm that identifies patterns in 𝒪(n²log n) time, but is faster than the first algorithm in practice, both on simulated and on real-world patterns.
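To make the pattern notion concrete, here is a quadratic brute-force toy restricted to a single fixed distance d (illustrative only; the paper's algorithms consider all distances and impose minimality conditions):

from collections import defaultdict

def consistent_patterns(s, colors, d):
    """Substrings of s whose every occurrence is followed, d positions after
    the occurrence's end, by one and the same color."""
    seen = defaultdict(set)  # substring -> set of colors observed
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n + 1):
            if j - 1 + d < n:
                seen[s[i:j]].add(colors[j - 1 + d])
    return {p for p, cs in seen.items() if len(cs) == 1}

print(sorted(consistent_patterns("abab", "XYXY", d=1)))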

Cite as

Zsuzsanna Lipták, Simon J. Puglisi, and Massimiliano Rossi. Pattern Discovery in Colored Strings. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{liptak_et_al:LIPIcs.SEA.2020.12,
  author =	{Lipt\'{a}k, Zsuzsanna and Puglisi, Simon J. and Rossi, Massimiliano},
  title =	{{Pattern Discovery in Colored Strings}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.12},
  URN =		{urn:nbn:de:0030-drops-120862},
  doi =		{10.4230/LIPIcs.SEA.2020.12},
  annote =	{Keywords: property testing, suffix tree, pattern mining}
}
Fast and Linear-Time String Matching Algorithms Based on the Distances of q-Gram Occurrences

Authors: Satoshi Kobayashi, Diptarama Hendrian, Ryo Yoshinaka, and Ayumi Shinohara


Abstract
Given a text T of length n and a pattern P of length m, the string matching problem is the task of finding all occurrences of P in T. In this study, we propose an algorithm that solves this problem in O((n + m)q) time by exploiting the distance between two adjacent occurrences of the same q-gram contained in P. We also propose a theoretical improvement of it which runs in O(n + m) time, though it is not necessarily faster in practice. We compare the execution times of our and existing algorithms on various kinds of real and artificial datasets, such as an English text, a genome sequence, and a Fibonacci string. The experimental results show that our algorithm is as fast as the state-of-the-art algorithms in many cases, particularly when a pattern frequently appears in a text.

Cite as

Satoshi Kobayashi, Diptarama Hendrian, Ryo Yoshinaka, and Ayumi Shinohara. Fast and Linear-Time String Matching Algorithms Based on the Distances of q-Gram Occurrences. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kobayashi_et_al:LIPIcs.SEA.2020.13,
  author =	{Kobayashi, Satoshi and Hendrian, Diptarama and Yoshinaka, Ryo and Shinohara, Ayumi},
  title =	{{Fast and Linear-Time String Matching Algorithms Based on the Distances of q-Gram Occurrences}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.13},
  URN =		{urn:nbn:de:0030-drops-120878},
  doi =		{10.4230/LIPIcs.SEA.2020.13},
  annote =	{Keywords: String matching algorithm, text processing}
}
Faster Fully Dynamic Transitive Closure in Practice

Authors: Kathrin Hanauer, Monika Henzinger, and Christian Schulz


Abstract
The fully dynamic transitive closure problem asks to maintain reachability information in a directed graph between arbitrary pairs of vertices, while the graph undergoes a sequence of edge insertions and deletions. The problem has been thoroughly investigated in theory and many specialized algorithms for solving it have been proposed in the last decades. In two large studies [Frigioni et al., 2001; Krommidas and Zaroliagis, 2008], a number of these algorithms have been evaluated experimentally against simple, static algorithms for graph traversal, showing the competitiveness and even superiority of the simple algorithms in practice, except for very dense random graphs or very high ratios of queries. A major drawback of those studies is that only small and mostly randomly generated graphs are considered. In this paper, we engineer new algorithms to maintain all-pairs reachability information which are simple and space-efficient. Moreover, we perform an extensive experimental evaluation on both generated and real-world instances that are several orders of magnitude larger than those in the previous studies. Our results indicate that our new algorithms considerably outperform all state-of-the-art algorithms in practice on all types of input.
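The simple static baseline from the earlier studies is worth keeping in mind: answer each reachability query with a fresh graph traversal. A minimal sketch (illustrative, not the paper's engineered data structures):

from collections import deque

def reaches(adj, s, t):
    """BFS reachability query on the current graph; adj: vertex -> successors."""
    if s == t:
        return True
    seen, queue = {s}, deque([s])
    while queue:
        for v in adj.get(queue.popleft(), ()):
            if v == t:
                return True
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

adj = {0: [1], 1: [2], 2: [0], 3: [1]}
print(reaches(adj, 0, 2), reaches(adj, 2, 3))  # True False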

Cite as

Kathrin Hanauer, Monika Henzinger, and Christian Schulz. Faster Fully Dynamic Transitive Closure in Practice. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hanauer_et_al:LIPIcs.SEA.2020.14,
  author =	{Hanauer, Kathrin and Henzinger, Monika and Schulz, Christian},
  title =	{{Faster Fully Dynamic Transitive Closure in Practice}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.14},
  URN =		{urn:nbn:de:0030-drops-120887},
  doi =		{10.4230/LIPIcs.SEA.2020.14},
  annote =	{Keywords: Dynamic Graph Algorithms, Reachability, Transitive Closure}
}
Concurrent Expandable AMQs on the Basis of Quotient Filters

Authors: Tobias Maier, Peter Sanders, and Robert Williger


Abstract
A quotient filter is a cache-efficient Approximate Membership Query (AMQ) data structure. Depending on the fill degree of the filter, most insertions and queries only need to access one or two consecutive cache lines. This makes quotient filters very fast compared to the more commonly used Bloom filters, which incur multiple independent memory accesses depending on the false positive rate. However, concurrent Bloom filters are easy to implement and can be implemented lock-free, while concurrent quotient filters are not as simple. Usually, concurrent quotient filters work by using an external array of locks, each protecting a region of the table. Accessing this array incurs one additional memory access per operation. We propose a new locking scheme that has no memory overhead. Using this new locking scheme, we achieve 1.6× higher insertion performance and over 2.1× higher query performance than with the common external locking scheme. Another advantage of quotient filters over Bloom filters is that a quotient filter can change its capacity as it fills up. We implement this growing technique for our concurrent quotient filters and adapt it in a way that allows unbounded growing while keeping a bounded false positive rate. We call the resulting data structure a fully expandable quotient filter. Its design is similar to scalable Bloom filters, but we exploit some concepts inherent to quotient filters to improve the space efficiency and the query speed. Additionally, we propose several quotient filter variants aimed at reducing the number of status bits (2-status-bit variant) or simplifying concurrent implementations (linear probing quotient filter). The linear probing quotient filter even leads to a lock-free concurrent filter implementation. This is especially interesting, since we show that any lock-free implementation of other common quotient filter variants would incur significant overheads in the form of additional data fields or multiple passes over the accessed data. The code produced as part of this submission can be found at https://www.github.com/Toobiased/lpqfilter.
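The core quotienting idea is easy to state in code (a hedged sketch with hypothetical parameters, not the paper's implementation): a key's fingerprint is split into a quotient, which selects the home bucket, and a remainder, which is the only part actually stored in the table.

Q, R = 16, 8  # 2^Q buckets, R remainder bits per slot (hypothetical sizes)

def split(fingerprint: int):
    remainder = fingerprint & ((1 << R) - 1)
    quotient = (fingerprint >> R) & ((1 << Q) - 1)
    return quotient, remainder

fp = hash("some key") & ((1 << (Q + R)) - 1)  # (Q+R)-bit fingerprint
q, r = split(fp)
print(f"home bucket {q}, stored remainder {r:#x}")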

Cite as

Tobias Maier, Peter Sanders, and Robert Williger. Concurrent Expandable AMQs on the Basis of Quotient Filters. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 15:1-15:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{maier_et_al:LIPIcs.SEA.2020.15,
  author =	{Maier, Tobias and Sanders, Peter and Williger, Robert},
  title =	{{Concurrent Expandable AMQs on the Basis of Quotient Filters}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.15},
  URN =		{urn:nbn:de:0030-drops-120890},
  doi =		{10.4230/LIPIcs.SEA.2020.15},
  annote =	{Keywords: Quotient filter, Concurrent data structures, Locking}
}
Faster Multi-Modal Route Planning With Bike Sharing Using ULTRA

Authors: Jonas Sauer, Dorothea Wagner, and Tobias Zündorf


Abstract
We study multi-modal route planning in a network comprised of schedule-based public transportation, unrestricted walking, and cycling with bikes available from bike sharing stations. So far this problem has only been considered for scenarios with at most one bike sharing operator, for which MCR is the best known algorithm [Delling et al., 2013]. However, for practical applications, algorithms should be able to distinguish between bike sharing stations of multiple competing bike sharing operators. Furthermore, MCR has recently been outperformed by ULTRA for multi-modal route planning scenarios without bike sharing [Baum et al., 2019]. In this paper, we present two approaches for modeling multi-modal transportation networks with multiple bike sharing operators: The operator-dependent model requires explicit handling of bike sharing stations within the algorithm, which we demonstrate with an adapted version of MCR. In the operator-expanded model, all relevant information is encoded within an expanded network. This allows for applying any multi-modal public transit algorithm without modification, which we show for ULTRA. We proceed by describing an additional preprocessing step called operator pruning, which can be used to accelerate both approaches. We conclude our work with an extensive experimental evaluation on the networks of London, Switzerland, and Germany. Our experiments show that the new preprocessing technique accelerates both approaches significantly, with the fastest algorithm (ULTRA-RAPTOR with operator pruning) being more than an order of magnitude faster than the basic MCR approach. Moreover, the ULTRA preprocessing step also benefits from operator pruning, as its running time is reduced by a factor of 14 to 20.

Cite as

Jonas Sauer, Dorothea Wagner, and Tobias Zündorf. Faster Multi-Modal Route Planning With Bike Sharing Using ULTRA. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 16:1-16:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{sauer_et_al:LIPIcs.SEA.2020.16,
  author =	{Sauer, Jonas and Wagner, Dorothea and Z\"{u}ndorf, Tobias},
  title =	{{Faster Multi-Modal Route Planning With Bike Sharing Using ULTRA}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{16:1--16:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.16},
  URN =		{urn:nbn:de:0030-drops-120905},
  doi =		{10.4230/LIPIcs.SEA.2020.16},
  annote =	{Keywords: Algorithms, Route Planning, Bike Sharing, Public Transportation}
}
Efficient Route Planning with Temporary Driving Bans, Road Closures, and Rated Parking Areas

Authors: Alexander Kleff, Frank Schulz, Jakob Wagenblatt, and Tim Zeitz


Abstract
We study the problem of planning routes in road networks when certain streets or areas are closed at certain times. For heavy vehicles, such areas may be very large since many European countries impose temporary driving bans during the night or on weekends. In this setting, feasible routes may require waiting at parking areas, and several feasible routes with different trade-offs between waiting and driving detours around closed areas may exist. We propose a novel model in which driving and waiting are assigned abstract costs, and waiting costs are location-dependent to reflect the different quality of the parking areas. Our goal is to find Pareto-optimal routes with regard to arrival time at the destination and total cost. We investigate the complexity of the model and determine a necessary constraint on the cost parameters such that the problem is solvable in polynomial time. We present a thoroughly engineered implementation and perform experiments on a production-grade real-world data set. The experiments show that our implementation can answer realistic queries in around a second or less, which makes it feasible for practical application.

Cite as

Alexander Kleff, Frank Schulz, Jakob Wagenblatt, and Tim Zeitz. Efficient Route Planning with Temporary Driving Bans, Road Closures, and Rated Parking Areas. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 17:1-17:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kleff_et_al:LIPIcs.SEA.2020.17,
  author =	{Kleff, Alexander and Schulz, Frank and Wagenblatt, Jakob and Zeitz, Tim},
  title =	{{Efficient Route Planning with Temporary Driving Bans, Road Closures, and Rated Parking Areas}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.17},
  URN =		{urn:nbn:de:0030-drops-120910},
  doi =		{10.4230/LIPIcs.SEA.2020.17},
  annote =	{Keywords: driving bans, realistic road networks, route planning, shortest paths}
}
Space and Time Trade-Off for the k Shortest Simple Paths Problem

Authors: Ali Al Zoobi, David Coudert, and Nicolas Nisse


Abstract
The k shortest simple paths problem (kSSP) asks to compute a set of top-k shortest simple paths from a vertex s to a vertex t in a digraph. Yen (1971) proposed the first algorithm with the best known theoretical complexity of O(kn(m+n log n)) for a digraph with n vertices and m arcs. Since then, the problem has been widely studied from an algorithm engineering perspective, and impressive improvements have been achieved. In particular, Kurz and Mutzel (2016) proposed a sidetracks-based (SB) algorithm which is currently the fastest solution. In this work, we propose two improvements of this algorithm. We first show how to speed up the SB algorithm using dynamic updates of shortest path trees. We performed experiments on some road networks of the 9th DIMACS challenge with up to about half a million nodes and one million arcs. Our computational results show an average speedup by a factor of 1.5 to 2 with a working memory consumption similar to that of SB. We then propose a second algorithm that significantly reduces the working memory at the cost of an increased running time (up to two times slower). Our experiments on the same data set show, on average, a reduction by a factor of 1.5 to 2 of the working memory.
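For readers who want to experiment with the problem itself (not with the SB algorithm), networkx ships a generator for it; a minimal usage sketch on a toy digraph:

import networkx as nx
from itertools import islice

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("a", "t", 1), ("s", "b", 1),
    ("b", "a", 1), ("b", "t", 3), ("a", "b", 1),
])
# Top-3 shortest simple s-t paths, in order of total weight.
for path in islice(nx.shortest_simple_paths(G, "s", "t", weight="weight"), 3):
    print(path)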

Cite as

Ali Al Zoobi, David Coudert, and Nicolas Nisse. Space and Time Trade-Off for the k Shortest Simple Paths Problem. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 18:1-18:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{alzoobi_et_al:LIPIcs.SEA.2020.18,
  author =	{Al Zoobi, Ali and Coudert, David and Nisse, Nicolas},
  title =	{{Space and Time Trade-Off for the k Shortest Simple Paths Problem}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{18:1--18:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.18},
  URN =		{urn:nbn:de:0030-drops-120925},
  doi =		{10.4230/LIPIcs.SEA.2020.18},
  annote =	{Keywords: k shortest simple paths, graph algorithm, space-time trade-off}
}
An Algorithm for the Exact Treedepth Problem

Authors: James Trimble


Abstract
We present a novel algorithm for the minimum-depth elimination tree problem, which is equivalent to the optimal treedepth decomposition problem. Our algorithm makes use of two cheaply-computed lower bound functions to prune the search tree, along with symmetry-breaking and domination rules. We present an empirical study showing that the algorithm outperforms the current state-of-the-art solver (which is based on a SAT encoding) by orders of magnitude on a range of graph classes.
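The recurrence behind minimum-depth elimination trees is compact enough to state as (exponential-time) reference code; a sketch for tiny graphs only, not the paper's pruned search:

from functools import lru_cache

def treedepth(vertices, adj):
    """td(G) = 1 for a single vertex; the max over connected components if G
    is disconnected; otherwise 1 + min over v of td(G - v)."""
    def components(sub):
        comps, left = [], set(sub)
        while left:
            stack = [left.pop()]
            comp = set(stack)
            while stack:
                for w in adj[stack.pop()] & left:
                    left.discard(w)
                    comp.add(w)
                    stack.append(w)
            comps.append(frozenset(comp))
        return comps

    @lru_cache(maxsize=None)
    def td(sub):
        if len(sub) == 1:
            return 1
        comps = components(sub)
        if len(comps) > 1:
            return max(td(c) for c in comps)
        return 1 + min(td(sub - {v}) for v in sub)

    return td(frozenset(vertices))

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path on four vertices
print(treedepth(adj.keys(), adj))  # 3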

Cite as

James Trimble. An Algorithm for the Exact Treedepth Problem. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{trimble:LIPIcs.SEA.2020.19,
  author =	{Trimble, James},
  title =	{{An Algorithm for the Exact Treedepth Problem}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.19},
  URN =		{urn:nbn:de:0030-drops-120938},
  doi =		{10.4230/LIPIcs.SEA.2020.19},
  annote =	{Keywords: Treedepth, Elimination Tree, Graph Algorithms}
}
Algorithms for New Types of Fair Stable Matchings

Authors: Frances Cooper and David Manlove


Abstract
We study the problem of finding "fair" stable matchings in the Stable Marriage problem with Incomplete lists (SMI). For an instance I of SMI there may be many stable matchings, providing significantly different outcomes for the sets of men and women. We introduce two new notions of fairness in SMI. Firstly, a regret-equal stable matching minimises the difference in ranks of a worst-off man and a worst-off woman, among all stable matchings. Secondly, a min-regret sum stable matching minimises the sum of ranks of a worst-off man and a worst-off woman, among all stable matchings. We present two new efficient algorithms to find stable matchings of these types. Firstly, the Regret-Equal Degree Iteration Algorithm finds a regret-equal stable matching in O(d₀ nm) time, where d₀ is the absolute difference in ranks between a worst-off man and a worst-off woman in the man-optimal stable matching, n is the number of men or women, and m is the total length of all preference lists. Secondly, the Min-Regret Sum Algorithm finds a min-regret sum stable matching in O(d_s m) time, where d_s is the difference in the ranks between a worst-off man in each of the woman-optimal and man-optimal stable matchings. Experiments to compare several types of fair optimal stable matchings were conducted and show that the Regret-Equal Degree Iteration Algorithm produces matchings that are competitive with respect to other fairness objectives. On the other hand, existing types of "fair" stable matchings did not provide as close an approximation to regret-equal stable matchings.
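For context, the man-optimal stable matching referenced above is the output of the classic Gale-Shapley proposal algorithm; a sketch with complete lists and illustrative names (the paper works with incomplete lists):

def gale_shapley(men_pref, women_pref):
    """Man-optimal stable matching; *_pref: person -> ordered preference list."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    next_prop = {m: 0 for m in men_pref}
    engaged = {}  # woman -> man
    free = list(men_pref)
    while free:
        m = free.pop()
        w = men_pref[m][next_prop[m]]
        next_prop[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # she trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)           # rejected; he will propose further down
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
M = gale_shapley(men, women)
worst_man = max(men[m].index(w) + 1 for m, w in M.items())
worst_woman = max(women[w].index(m) + 1 for m, w in M.items())
print(M, abs(worst_man - worst_woman), worst_man + worst_woman)

The two printed numbers are the regret-equality and regret-sum values of this particular stable matching (ranks counted 1-based).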

Cite as

Frances Cooper and David Manlove. Algorithms for New Types of Fair Stable Matchings. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 20:1-20:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{cooper_et_al:LIPIcs.SEA.2020.20,
  author =	{Cooper, Frances and Manlove, David},
  title =	{{Algorithms for New Types of Fair Stable Matchings}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{20:1--20:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.20},
  URN =		{urn:nbn:de:0030-drops-120945},
  doi =		{10.4230/LIPIcs.SEA.2020.20},
  annote =	{Keywords: Stable marriage, Algorithms, Optimality, Fair stable matchings, Regret-equality, Min-regret sum}
}
Crystal Structure Prediction via Oblivious Local Search

Authors: Dmytro Antypov, Argyrios Deligkas, Vladimir Gusev, Matthew J. Rosseinsky, Paul G. Spirakis, and Michail Theofilatos


Abstract
We study Crystal Structure Prediction, one of the major problems in computational chemistry. This is essentially a continuous optimization problem for which many different methods, both simple and sophisticated, have been proposed and applied. The simple searching techniques are easy to understand and usually easy to implement, but they can be slow in practice. On the other hand, the more sophisticated approaches perform well in general; however, almost all of them have a large number of parameters that require fine-tuning and, in the majority of cases, chemical expertise is needed in order to properly set them up. In addition, due to the chemical expertise involved in the parameter-tuning, these approaches can be biased towards previously known crystal structures. Our contribution is twofold. Firstly, we formalize the Crystal Structure Prediction problem, alongside several other intermediate problems, from a theoretical computer science perspective. Secondly, we propose an oblivious algorithm for Crystal Structure Prediction that is based on local search. Oblivious means that our algorithm requires minimal knowledge about the composition we are trying to compute a crystal structure for. In addition, our algorithm can be used as an intermediate step by any method. Our experiments show that our algorithms outperform standard basin hopping, a well-studied algorithm for the problem.

Cite as

Dmytro Antypov, Argyrios Deligkas, Vladimir Gusev, Matthew J. Rosseinsky, Paul G. Spirakis, and Michail Theofilatos. Crystal Structure Prediction via Oblivious Local Search. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{antypov_et_al:LIPIcs.SEA.2020.21,
  author =	{Antypov, Dmytro and Deligkas, Argyrios and Gusev, Vladimir and Rosseinsky, Matthew J. and Spirakis, Paul G. and Theofilatos, Michail},
  title =	{{Crystal Structure Prediction via Oblivious Local Search}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.21},
  URN =		{urn:nbn:de:0030-drops-120950},
  doi =		{10.4230/LIPIcs.SEA.2020.21},
  annote =	{Keywords: crystal structure prediction, local search, combinatorial neighborhood}
}
Document
Variable Shift SDD: A More Succinct Sentential Decision Diagram

Authors: Kengo Nakamura, Shuhei Denzumi, and Masaaki Nishino


Abstract
The Sentential Decision Diagram (SDD) is a tractable representation of Boolean functions that strictly subsumes the well-known Ordered Binary Decision Diagram (OBDD). SDDs are attracting much attention because they are more succinct than OBDDs while still having canonical forms and supporting many useful queries and transformations, such as model counting and the Apply operation. In this paper, we propose a more succinct variant of the SDD named the Variable Shift SDD (VS-SDD). The key idea is to create a unique representation for Boolean functions that are equivalent under a specific variable substitution. We show that VS-SDDs are never larger than SDDs, and that there are cases in which the size of a VS-SDD is exponentially smaller than that of an SDD. Moreover, despite this added succinctness, numerous basic operations that are supported in polynomial time on SDDs are also supported in polynomial time on VS-SDDs. Experiments confirm that VS-SDDs are significantly more succinct than SDDs when applied to classical planning instances, where inherent symmetry exists.
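The variable-substitution idea can be illustrated with a toy canonicalization that is independent of the actual SDD machinery: functions that are identical up to a uniform shift of variable indices map to one shared key plus an offset, so a cache could store a single representation for all of them. This is only a conceptual sketch of the sharing principle, not the VS-SDD data structure.

def canonical_shift(cnf):
    # cnf: iterable of clauses, each a set of signed variable indices,
    # e.g. {1, -2} means (x1 OR NOT x2). Returns a shift-invariant key
    # plus the offset needed to recover the original variables.
    variables = {abs(lit) for clause in cnf for lit in clause}
    offset = min(variables) - 1          # make the smallest variable 1
    key = frozenset(
        frozenset(lit - offset if lit > 0 else lit + offset for lit in clause)
        for clause in cnf
    )
    return key, offset

# (x1 OR NOT x2) AND x2 vs. (x5 OR NOT x6) AND x6: same key, offsets 0 and 4.
assert canonical_shift([{1, -2}, {2}])[0] == canonical_shift([{5, -6}, {6}])[0]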

Cite as

Kengo Nakamura, Shuhei Denzumi, and Masaaki Nishino. Variable Shift SDD: A More Succinct Sentential Decision Diagram. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 22:1-22:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{nakamura_et_al:LIPIcs.SEA.2020.22,
  author =	{Nakamura, Kengo and Denzumi, Shuhei and Nishino, Masaaki},
  title =	{{Variable Shift SDD: A More Succinct Sentential Decision Diagram}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{22:1--22:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.22},
  URN =		{urn:nbn:de:0030-drops-120968},
  doi =		{10.4230/LIPIcs.SEA.2020.22},
  annote =	{Keywords: Boolean function, Data structure, Sentential decision diagram}
}
Document
Engineering Fused Lasso Solvers on Trees

Authors: Elias Kuthe and Sven Rahmann


Abstract
The graph fused lasso optimization problem seeks, for a given input signal y=(y_i) on the nodes i∈V of a graph G=(V,E), a reconstructed signal x=(x_i) that is both element-wise close to y in quadratic error and has bounded total variation (sum of absolute differences across edges), thereby favoring regionally constant solutions. An important application is the denoising of spatially correlated data, especially medical images. Current fused lasso solvers for general graph input reduce the problem to an iteration over a series of "one-dimensional" problems (on paths or line graphs), each of which can be solved in linear time. Recently, a direct fused lasso algorithm for tree graphs has been presented, but no implementation of it appears to be available. Here we present a simplified exact algorithm and, additionally, a fast approximation scheme for trees, together with engineered implementations of both. We empirically evaluate their performance on different kinds of trees with distinct degree distributions (simulated trees; spanning trees of road networks, of grid graphs of images, and of social networks). The exact algorithm is very efficient on trees with low node degrees, which covers many naturally arising graphs, while the approximation scheme can perform better on trees with several higher-degree nodes when the desired accuracy is limited to values that are useful in practice.
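For reference, the objective described in the first sentence is commonly written as follows; the factor 1/2 and the regularization weight λ on the total-variation term follow the usual convention, and the paper's exact scaling may differ:

\min_{x \in \mathbb{R}^{V}} \; \frac{1}{2} \sum_{i \in V} (x_i - y_i)^2 \;+\; \lambda \sum_{\{i,j\} \in E} \lvert x_i - x_j \rvert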

Cite as

Elias Kuthe and Sven Rahmann. Engineering Fused Lasso Solvers on Trees. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kuthe_et_al:LIPIcs.SEA.2020.23,
  author =	{Kuthe, Elias and Rahmann, Sven},
  title =	{{Engineering Fused Lasso Solvers on Trees}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.23},
  URN =		{urn:nbn:de:0030-drops-120977},
  doi =		{10.4230/LIPIcs.SEA.2020.23},
  annote =	{Keywords: fused lasso, regularization, tree traversal, cache effects}
}
Document
Finding Structurally and Temporally Similar Trajectories in Graphs

Authors: Roberto Grossi, Andrea Marino, and Shima Moghtasedi


Abstract
The analysis of similar motions in a network provides useful information for applications such as route recommendation. We are interested in algorithms that efficiently retrieve trajectories similar to a given query trajectory. Many studies of this task have focused on extracting the geometric information of trajectories. In this paper we investigate the properties of trajectories moving along the paths of a network. We define a similarity function that makes use of both the temporal aspect of trajectories and the structure of the underlying network. We propose an approximation technique that efficiently retrieves the top-k trajectories most similar to a query trajectory, with acceptable precision. We evaluate our method on real-world networks, and our experimental results show its effectiveness.
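A minimal sketch of top-k retrieval under a pluggable similarity function follows; the Jaccard score over visited nodes is a stand-in for the paper's combined structural-temporal measure, which is not reproduced here, and all names are ours.

import heapq

def jaccard(query_nodes, cand_nodes):
    # Placeholder structural score: overlap of visited network nodes.
    a, b = set(query_nodes), set(cand_nodes)
    return len(a & b) / len(a | b) if a or b else 0.0

def top_k_similar(query, trajectories, k, similarity=jaccard):
    # Rank all candidate trajectories against the query, keep the best k.
    return heapq.nlargest(k, trajectories, key=lambda t: similarity(query, t))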

Cite as

Roberto Grossi, Andrea Marino, and Shima Moghtasedi. Finding Structurally and Temporally Similar Trajectories in Graphs. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 24:1-24:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{grossi_et_al:LIPIcs.SEA.2020.24,
  author =	{Grossi, Roberto and Marino, Andrea and Moghtasedi, Shima},
  title =	{{Finding Structurally and Temporally Similar Trajectories in Graphs}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.24},
  URN =		{urn:nbn:de:0030-drops-120989},
  doi =		{10.4230/LIPIcs.SEA.2020.24},
  annote =	{Keywords: Graph trajectory, approximated similarity, top-k similarity query}
}
Document
Zipping Segment Trees

Authors: Lukas Barth and Dorothea Wagner


Abstract
Stabbing queries in sets of intervals are usually answered using segment trees. A dynamic variant of segment trees was presented by van Kreveld and Overmars, which uses red-black trees for rebalancing operations. This paper presents zipping segment trees: dynamic segment trees based on zip trees, which were recently introduced by Tarjan et al. To facilitate zipping segment trees, we show how to uphold certain segment tree properties during the operations of a zip tree. We present an in-depth experimental evaluation and comparison of dynamic segment trees based on red-black trees, on weight-balanced trees, and on several variants of the novel zipping segment trees. Our results indicate that zipping segment trees perform better than the rotation-based alternatives.
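For readers unfamiliar with zip trees, this sketch shows plain zip-tree insertion with geometric ranks and the unzip (split) operation, which replaces rotations; the segment-tree augmentation from the paper is omitted, and the code follows the standard description by Tarjan et al.

import random

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        # Geometric(1/2) rank: count fair coin flips until tails.
        self.rank = 0
        while random.random() < 0.5:
            self.rank += 1

def unzip(node, key):
    # Split a subtree into (keys < key, keys > key), reusing its nodes.
    if node is None:
        return None, None
    if node.key < key:
        node.right, larger = unzip(node.right, key)
        return node, larger
    smaller, node.left = unzip(node.left, key)
    return smaller, node

def insert(root, x):
    # x replaces the first node on its search path with smaller rank
    # (ties broken by key); the displaced subtree is unzipped around x.
    if root is None:
        return x
    if x.rank > root.rank or (x.rank == root.rank and x.key < root.key):
        x.left, x.right = unzip(root, x.key)
        return x
    if x.key < root.key:
        root.left = insert(root.left, x)
    else:
        root.right = insert(root.right, x)
    return root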

Cite as

Lukas Barth and Dorothea Wagner. Zipping Segment Trees. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 25:1-25:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{barth_et_al:LIPIcs.SEA.2020.25,
  author =	{Barth, Lukas and Wagner, Dorothea},
  title =	{{Zipping Segment Trees}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{25:1--25:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.25},
  URN =		{urn:nbn:de:0030-drops-120990},
  doi =		{10.4230/LIPIcs.SEA.2020.25},
  annote =	{Keywords: Segment Trees, Dynamic Segment Trees, Zip Trees, Data Structures}
}
Document
Fast and Stable Repartitioning of Road Networks

Authors: Valentin Buchhold, Daniel Delling, Dennis Schieferdecker, and Michael Wegner


Abstract
We study the problem of graph partitioning for evolving road networks. While the road network of the world is mostly stable, small updates happen relatively frequently, as can be observed in the OpenStreetMap project (http://www.openstreetmap.org). For various reasons, professional applications demand that the graph partition stay roughly the same over time and that changes be limited to areas where graph updates occur. In this work, we define the problem, present algorithms that satisfy these stability needs, and evaluate our techniques on continental-sized road networks. Besides the stability gains, we show that, when the changes are small and local, running our novel techniques is an order of magnitude faster than running graph partitioning from scratch.
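The stability requirement suggests a solution of roughly the following shape, sketched here under assumed interfaces (the black-box partition_region and all names are ours, not the paper's): vertices far from any update keep their cell, and only cells containing updated vertices are re-partitioned.

def repartition(old_cell, updated_vertices, vertices, partition_region):
    # old_cell: vertex -> cell id. Cells untouched by updates are kept,
    # which bounds how much the partition can drift over time.
    dirty = {old_cell[v] for v in updated_vertices}
    region = [v for v in vertices if old_cell[v] in dirty]
    new_cell = dict(old_cell)
    # Assumes partition_region returns a fresh vertex -> cell assignment
    # for the affected region only (any graph partitioner would do).
    new_cell.update(partition_region(region, len(dirty)))
    return new_cell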

Cite as

Valentin Buchhold, Daniel Delling, Dennis Schieferdecker, and Michael Wegner. Fast and Stable Repartitioning of Road Networks. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{buchhold_et_al:LIPIcs.SEA.2020.26,
  author =	{Buchhold, Valentin and Delling, Daniel and Schieferdecker, Dennis and Wegner, Michael},
  title =	{{Fast and Stable Repartitioning of Road Networks}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{26:1--26:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.26},
  URN =		{urn:nbn:de:0030-drops-121000},
  doi =		{10.4230/LIPIcs.SEA.2020.26},
  annote =	{Keywords: Graph repartitioning, stable partitions, road networks, algorithm engineering}
}
Document
Path Query Data Structures in Practice

Authors: Meng He and Serikzhan Kazi


Abstract
We perform experimental studies of data structures that answer path median, path counting, and path reporting queries on weighted trees. These query problems generalize the well-known range median query problem on arrays, as well as the 2d orthogonal range counting and reporting problems on planar point sets, to tree-structured data. We propose practical realizations of the latest theoretical results on path queries. Our data structures, which use tree extraction, heavy-path decomposition, and wavelet trees, are implemented in both succinct and pointer-based form. Our succinct data structures are further specialized to be plain or entropy-compressed. Through experiments on large data sets, we show that succinct data structures for path queries may be a viable alternative to standard pointer-based realizations in practical scenarios. Compared to naïve approaches that compute the answer by explicitly traversing the query path, our succinct data structures are several times faster for path median queries and perform comparably for path counting and path reporting queries, while being several times more space-efficient. Plain pointer-based realizations of our data structures, requiring a few times more space than the naïve ones, yield up to a 100-fold speed-up over them.
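The naïve baseline mentioned in the abstract walks the query path explicitly; a minimal version for path median, assuming precomputed parent pointers and node depths (our illustrative names), looks like this:

def naive_path_median(u, v, parent, depth, weight):
    # Climb the deeper endpoint until u and v meet at their lowest
    # common ancestor, collecting every node on the query path.
    nodes = []
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u                  # ensure u is the deeper node
        nodes.append(u)
        u = parent[u]
    nodes.append(u)                      # the LCA itself
    weights = sorted(weight[x] for x in nodes)
    return weights[len(weights) // 2]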

Cite as

Meng He and Serikzhan Kazi. Path Query Data Structures in Practice. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 27:1-27:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{he_et_al:LIPIcs.SEA.2020.27,
  author =	{He, Meng and Kazi, Serikzhan},
  title =	{{Path Query Data Structures in Practice}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{27:1--27:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.27},
  URN =		{urn:nbn:de:0030-drops-121012},
  doi =		{10.4230/LIPIcs.SEA.2020.27},
  annote =	{Keywords: path query, path median, path counting, path reporting, weighted tree}
}
