32 Search Results for "Cantone, Domenico"


Volume

LIPIcs, Volume 160

18th International Symposium on Experimental Algorithms (SEA 2020)

SEA 2020, June 16-18, 2020, Catania, Italy

Editors: Simone Faro and Domenico Cantone

Document
Sequence Searching Allowing for Non-Overlapping Adjacent Unbalanced Translocations

Authors: Domenico Cantone, Simone Faro, and Arianna Pavone

Published in: LIPIcs, Volume 172, 20th International Workshop on Algorithms in Bioinformatics (WABI 2020)


Abstract
Unbalanced translocations are among the most frequent chromosomal alterations, accounting for 30% of all losses of heterozygosity, a major genetic event causing inactivation of tumor suppressor genes. Despite their central role in genomic sequence analysis, little attention has been devoted to the problem of matching sequences allowing for this kind of chromosomal alteration. In this paper we investigate the approximate string matching problem when the edit operations are non-overlapping unbalanced translocations of adjacent factors. We first present an 𝒪(nm³)-time and 𝒪(m²)-space algorithm based on dynamic programming. We then improve on this first result with a second solution that makes use of the Directed Acyclic Word Graph of the pattern. In particular, we show that, under the assumptions of equiprobability and independence of characters, our algorithm has an 𝒪(n log²_σ m) average time complexity for an alphabet of size σ, while still maintaining the 𝒪(nm³) time and 𝒪(m²) space complexity in the worst case. To the best of our knowledge, this is the first solution in the literature to the approximate string matching problem allowing for unbalanced translocations of factors.
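
As a concrete reference point, the following toy Python matcher implements one natural reading of the operation described above: a text window matches the pattern if the pattern can be consumed left to right by direct character matches and disjoint swaps of adjacent factors. It is a naive memoized check, not the paper's dynamic-programming or DAWG-based algorithm, and the sample strings are invented.

from functools import lru_cache

def matches_with_translocations(p: str, w: str) -> bool:
    # True if pattern p equals window w up to non-overlapping swaps
    # (translocations) of adjacent, possibly unequal-length factors.
    m = len(p)
    if m != len(w):          # translocations preserve total length
        return False

    @lru_cache(maxsize=None)
    def f(i: int) -> bool:   # can p[i:] be transformed into w[i:]?
        if i == m:
            return True
        if p[i] == w[i] and f(i + 1):
            return True
        # try swapping adjacent factors p[i:i+l1] and p[i+l1:i+l1+l2]
        for l1 in range(1, m - i):
            for l2 in range(1, m - i - l1 + 1):
                if (w[i:i + l2] == p[i + l1:i + l1 + l2]
                        and w[i + l2:i + l1 + l2] == p[i:i + l1]
                        and f(i + l1 + l2)):
                    return True
        return False

    return f(0)

def search(text: str, pattern: str):
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if matches_with_translocations(pattern, text[i:i + m])]

print(search("ttGTACtt", "ACGT"))   # [2]: "GTAC" is "ACGT" with "AC" and "GT" swapped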

Cite as

Domenico Cantone, Simone Faro, and Arianna Pavone. Sequence Searching Allowing for Non-Overlapping Adjacent Unbalanced Translocations. In 20th International Workshop on Algorithms in Bioinformatics (WABI 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 172, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{cantone_et_al:LIPIcs.WABI.2020.19,
  author =	{Cantone, Domenico and Faro, Simone and Pavone, Arianna},
  title =	{{Sequence Searching Allowing for Non-Overlapping Adjacent Unbalanced Translocations}},
  booktitle =	{20th International Workshop on Algorithms in Bioinformatics (WABI 2020)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-161-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{172},
  editor =	{Kingsford, Carl and Pisanti, Nadia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2020.19},
  URN =		{urn:nbn:de:0030-drops-128086},
  doi =		{10.4230/LIPIcs.WABI.2020.19},
  annote =	{Keywords: Text processing, approximate matching, inversions, sequence matching}
}
Document
Complete Volume
LIPIcs, Volume 160, SEA 2020, Complete Volume

Authors: Simone Faro and Domenico Cantone

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
LIPIcs, Volume 160, SEA 2020, Complete Volume

Cite as

18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 1-366, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Proceedings{faro_et_al:LIPIcs.SEA.2020,
  title =	{{LIPIcs, Volume 160, SEA 2020, Complete Volume}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{1--366},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020},
  URN =		{urn:nbn:de:0030-drops-120730},
  doi =		{10.4230/LIPIcs.SEA.2020},
  annote =	{Keywords: LIPIcs, Volume 160, SEA 2020, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Simone Faro and Domenico Cantone

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{faro_et_al:LIPIcs.SEA.2020.0,
  author =	{Faro, Simone and Cantone, Domenico},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.0},
  URN =		{urn:nbn:de:0030-drops-120741},
  doi =		{10.4230/LIPIcs.SEA.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Algorithm Engineering for High-Dimensional Similarity Search Problems (Invited Talk)

Authors: Martin Aumüller

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Similarity search problems in high-dimensional data arise in many areas of computer science such as databases, image analysis, machine learning, and natural language processing. One of the most prominent problems is finding the k nearest neighbors of a data point q ∈ ℝ^d in a large set of data points S ⊂ ℝ^d, under some distance measure such as the Euclidean distance. In contrast to lower-dimensional settings, we do not know of worst-case efficient data structures for such search problems in high-dimensional data, i.e., data structures that are faster than a linear scan through the data set. However, there is a rich body of (often heuristic) approaches that solve nearest neighbor search problems much faster than such a scan on many real-world data sets. By necessity, the term solve here means that these approaches give approximate results that are close to the true k nearest neighbors. In this talk, we survey recent approaches to nearest neighbor search and related problems. The talk consists of three parts: (1) What makes nearest neighbor search difficult? (2) How do current state-of-the-art algorithms work? (3) What are recent advances regarding similarity search on GPUs, in distributed settings, or in external memory?
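
For reference, the linear scan that these data structures try to beat is a few lines of NumPy; this brute-force baseline is exact and is also what the recall of approximate results is measured against. The data here are random stand-ins.

import numpy as np

def knn_linear_scan(S, q, k):
    # exact k-nearest neighbors of q in S by scanning all points
    d2 = ((S - q) ** 2).sum(axis=1)     # squared Euclidean distances
    return np.argsort(d2)[:k]           # indices of the k closest points

rng = np.random.default_rng(0)
S = rng.normal(size=(10_000, 128))      # data set S ⊂ R^d (random stand-in)
q = rng.normal(size=128)                # query point q
print(knn_linear_scan(S, q, k=10))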

Cite as

Martin Aumüller. Algorithm Engineering for High-Dimensional Similarity Search Problems (Invited Talk). In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 1:1-1:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{aumuller:LIPIcs.SEA.2020.1,
  author =	{Aum\"{u}ller, Martin},
  title =	{{Algorithm Engineering for High-Dimensional Similarity Search Problems}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{1:1--1:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.1},
  URN =		{urn:nbn:de:0030-drops-120751},
  doi =		{10.4230/LIPIcs.SEA.2020.1},
  annote =	{Keywords: Nearest neighbor search, Benchmarking}
}
Document
Invited Talk
Algorithm Engineering for Sorting and Searching, and All That (Invited Talk)

Authors: Stefan Edelkamp

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
We look at several proposals to engineer the set of fundamental searching and sorting algorithms. Aspects are improving the locality of disk and cache accesses, tuning efficiency by reducing the number of branch mispredictions, and reducing the leading constant factors hidden in the big-Oh notation. These studies in algorithm engineering, in turn, lead to exciting new algorithm designs. On the practical side, we will establish that efficient sorting and searching algorithms are tightly intertwined, as sorting is used for finding duplicates in disk-based search, and heap structures designed for efficient graph search can be exploited in classical and adaptive sorting. We indicate the effects of engineered sorting and searching on combined task and motion planning.
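
One of the ties mentioned above, duplicate detection by sorting, can be sketched in a few lines: rather than probing a hash table for every generated state, a disk-based search sorts each batch of states and drops runs of equal neighbors. The in-memory toy below stands in for an external-memory merge sort.

def dedup_by_sorting(states):
    # duplicate detection via sorting: sort the batch of generated
    # states, then a single pass drops runs of equal neighbors
    states.sort()
    unique = states[:1]
    for s in states[1:]:
        if s != unique[-1]:
            unique.append(s)
    return unique

print(dedup_by_sorting([5, 3, 5, 1, 3, 3, 9]))   # [1, 3, 5, 9]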

Cite as

Stefan Edelkamp. Algorithm Engineering for Sorting and Searching, and All That (Invited Talk). In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 2:1-2:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{edelkamp:LIPIcs.SEA.2020.2,
  author =	{Edelkamp, Stefan},
  title =	{{Algorithm Engineering for Sorting and Searching, and All That}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{2:1--2:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.2},
  URN =		{urn:nbn:de:0030-drops-120766},
  doi =		{10.4230/LIPIcs.SEA.2020.2},
  annote =	{Keywords: Sorting, Searching, Algorithm Engineering}
}
Document
Invited Talk
Indexing Compressed Text: A Tale of Time and Space (Invited Talk)

Authors: Nicola Prezza

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Text indexing is a classical algorithmic problem that has been studied for over four decades. The earliest optimal-time solution to the problem, the suffix tree [Weiner, 1973], dates back to 1973 and requires up to two orders of magnitude more space than the text to be stored. In the year 2000, two breakthrough works [Grossi and Vitter, 2000; Ferragina and Manzini, 2000] showed that this space overhead is not necessary: both the index and the text can be stored in a space proportional to the text’s entropy. These contributions had an enormous impact in bioinformatics: nowadays, the two most widely-used DNA aligners employ compressed indexes [Li and Durbin, 2009; Langmead et al., 2009]. In recent years, it became apparent that entropy had reached its limits: modern datasets (for example, collections of thousands of human genomes) are extremely large but very repetitive and, by its very definition, entropy cannot compress repetitive texts [S. Kreft and G. Navarro, 2013]. To overcome this problem, a new generation of indexes based on dictionary compressors (for example, LZ77 and run-length BWT) emerged [S. Kreft and G. Navarro, 2013; Gagie et al., 2020; F. Claude and G. Navarro, 2012], together with generalizations of the indexing problem to labeled graphs [Ferragina et al., 2009; Sirén et al., 2014; Travis Gagie et al., 2017]. This talk is a short and friendly survey of the landmarks of this fascinating path that took us from suffix trees to the most modern compressed indexes on labeled graphs.
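
The shift from entropy to dictionary compression is easy to observe directly: the number of runs r in the Burrows-Wheeler transform (the quantity run-length BWT indexes depend on) barely grows when a text is repeated, while the text itself, and its entropy-compressed size, grow linearly. A naive rotation-sorting BWT suffices for a toy demonstration:

def bwt(s: str) -> str:
    # naive Burrows-Wheeler transform via sorted rotations (toy-sized input)
    s += "\x00"                               # unique end-of-string sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def runs(s: str) -> int:
    return 1 + sum(s[i] != s[i - 1] for i in range(1, len(s)))

base = "abracadabra"
print(runs(bwt(base)))          # runs in the BWT of a single copy
print(runs(bwt(base * 50)))     # barely grows for 50 concatenated copies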

Cite as

Nicola Prezza. Indexing Compressed Text: A Tale of Time and Space (Invited Talk). In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 3:1-3:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{prezza:LIPIcs.SEA.2020.3,
  author =	{Prezza, Nicola},
  title =	{{Indexing Compressed Text: A Tale of Time and Space}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{3:1--3:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.3},
  URN =		{urn:nbn:de:0030-drops-120772},
  doi =		{10.4230/LIPIcs.SEA.2020.3},
  annote =	{Keywords: Compressed Text Indexing}
}
Document
High-Quality Hierarchical Process Mapping

Authors: Marcelo Fonseca Faraj, Alexander van der Grinten, Henning Meyerhenke, Jesper Larsson Träff, and Christian Schulz

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Partitioning graphs into blocks of roughly equal size such that few edges run between blocks is a frequently needed operation when processing graphs on a parallel computer. When the topology of the distributed system is known, an important task is then to map the blocks of the partition onto the processors such that the overall communication cost is reduced. We present novel multilevel algorithms that integrate graph partitioning and process mapping. Important ingredients of our algorithm include fast label propagation, more localized local search, initial partitioning, as well as a compressed data structure to compute processor distances without storing a distance matrix. Moreover, our algorithms are able to exploit a given hierarchical structure of the distributed system under consideration. Experiments indicate that our algorithms speed up the overall mapping process and, due to the integrated multilevel approach, also find much better solutions in practice. For example, one configuration of our algorithm yields mapping quality similar to the previous state-of-the-art for large numbers of partitions while being a factor of 9.3 faster. Compared to Scotch, the currently fastest iterated multilevel mapping algorithm, we obtain 16% better solutions while investing slightly more running time.
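
The distance-matrix-free idea can be illustrated with a toy model (the fan-outs and costs below are invented, not the paper's): in a hierarchical machine, the distance between two processing elements is determined by the topmost hierarchy level at which their indices diverge, so a hierarchy description of a few integers replaces an O(P²) matrix.

from math import prod

def pe_distance(a, b, fanouts=(4, 16, 8), costs=(100, 10, 1)):
    # distance between processing elements a and b of a hierarchical
    # machine (here: 4 islands x 16 nodes x 8 cores, invented numbers),
    # computed from the hierarchy description alone instead of an
    # explicit O(P^2) distance matrix
    block = prod(fanouts)            # total number of PEs
    for fanout, cost in zip(fanouts, costs):
        block //= fanout             # size of one group at this level
        if a // block != b // block:
            return cost              # first level where a and b diverge
    return 0                         # same processing element

print(pe_distance(0, 3), pe_distance(0, 9), pe_distance(0, 300))   # 1 10 100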

Cite as

Marcelo Fonseca Faraj, Alexander van der Grinten, Henning Meyerhenke, Jesper Larsson Träff, and Christian Schulz. High-Quality Hierarchical Process Mapping. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{faraj_et_al:LIPIcs.SEA.2020.4,
  author =	{Faraj, Marcelo Fonseca and van der Grinten, Alexander and Meyerhenke, Henning and Tr\"{a}ff, Jesper Larsson and Schulz, Christian},
  title =	{{High-Quality Hierarchical Process Mapping}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.4},
  URN =		{urn:nbn:de:0030-drops-120782},
  doi =		{10.4230/LIPIcs.SEA.2020.4},
  annote =	{Keywords: Process Mapping, Graph Partitioning, Algorithm Engineering}
}
Document
Probing a Set of Trajectories to Maximize Captured Information

Authors: Sándor P. Fekete, Alexander Hill, Dominik Krupke, Tyler Mayer, Joseph S. B. Mitchell, Ojas Parekh, and Cynthia A. Phillips

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
We study a trajectory analysis problem we call the Trajectory Capture Problem (TCP), in which, for a given input set T of trajectories in the plane and an integer k ≥ 2, we seek to compute a set of k points ("portals") to maximize the total weight of all subtrajectories of T between pairs of portals. This problem naturally arises in trajectory analysis and summarization. We show that the TCP is NP-hard (even in very special cases) and give some first approximation results. Our main focus is on attacking the TCP with practical algorithm-engineering approaches, including integer linear programming (to solve instances to provable optimality) and local search methods. We study the integrality gap arising from such approaches. We analyze our methods on different classes of data, including benchmark instances that we generate. Our goal is to understand the best-performing heuristics, based on both solution time and solution quality. We demonstrate that we are able to compute provably optimal solutions for real-world instances.

Cite as

Sándor P. Fekete, Alexander Hill, Dominik Krupke, Tyler Mayer, Joseph S. B. Mitchell, Ojas Parekh, and Cynthia A. Phillips. Probing a Set of Trajectories to Maximize Captured Information. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{fekete_et_al:LIPIcs.SEA.2020.5,
  author =	{Fekete, S\'{a}ndor P. and Hill, Alexander and Krupke, Dominik and Mayer, Tyler and Mitchell, Joseph S. B. and Parekh, Ojas and Phillips, Cynthia A.},
  title =	{{Probing a Set of Trajectories to Maximize Captured Information}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.5},
  URN =		{urn:nbn:de:0030-drops-120796},
  doi =		{10.4230/LIPIcs.SEA.2020.5},
  annote =	{Keywords: Algorithm engineering, optimization, complexity, approximation, trajectories}
}
Document
Storing Set Families More Compactly with Top ZDDs

Authors: Kotaro Matsuda, Shuhei Denzumi, and Kunihiko Sadakane

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Zero-suppressed Binary Decision Diagrams (ZDDs) are data structures for representing set families in a compressed form. With ZDDs, many valuable operations on set families can be done in time polynomial in the ZDD size. In some cases, however, the ZDDs representing large set families become too large to store in main memory. This paper proposes the top ZDD, a novel representation of ZDDs which uses less space than existing ones. The top ZDD extends the top tree, a compressed representation of trees, to directed acyclic graphs, compressing them by sharing identical subgraphs. We prove that navigational operations on ZDDs can be done in time poly-logarithmic in the ZDD size, and show that there exist set families for which the size of the top ZDD is exponentially smaller than that of the ZDD. We also show experimentally that our top ZDDs are smaller than ZDDs on real data.
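
For readers unfamiliar with ZDDs, a minimal sketch of the two reduction rules the paper builds on, node sharing and zero-suppression, is given below; it is plain ZDD machinery, not the top ZDD construction itself.

from functools import lru_cache

unique_table = {}            # hash-consing: identical nodes are stored once

def node(v, lo, hi):
    # reduced ZDD node for variable v; terminal 0 is the empty family,
    # terminal 1 the family containing only the empty set
    if hi == 0:              # zero-suppression rule: drop the node
        return lo
    return unique_table.setdefault((v, lo, hi), (v, lo, hi))

@lru_cache(maxsize=None)
def count(z):
    # number of sets in the family represented by ZDD z
    if isinstance(z, int):
        return z             # terminal: 0 sets or 1 set
    v, lo, hi = z
    return count(lo) + count(hi)

# the family {{1}, {2}, {1, 2}} over variables ordered 1 < 2:
z = node(1, node(2, 0, 1), node(2, 1, 1))
print(count(z))   # 3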

Cite as

Kotaro Matsuda, Shuhei Denzumi, and Kunihiko Sadakane. Storing Set Families More Compactly with Top ZDDs. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{matsuda_et_al:LIPIcs.SEA.2020.6,
  author =	{Matsuda, Kotaro and Denzumi, Shuhei and Sadakane, Kunihiko},
  title =	{{Storing Set Families More Compactly with Top ZDDs}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.6},
  URN =		{urn:nbn:de:0030-drops-120809},
  doi =		{10.4230/LIPIcs.SEA.2020.6},
  annote =	{Keywords: top tree, Zero-suppressed Decision Diagram, space-efficient data structure}
}
Document
Fast and Simple Compact Hashing via Bucketing

Authors: Dominik Köppl, Simon J. Puglisi, and Rajeev Raman

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Compact hash tables store a set S of n key-value pairs, where the keys are from the universe U = {0,…,u-1} and the values are v-bit integers, in close to B(u, n) + nv bits of space, where B(u, n) = log₂ binom(u, n) is the information-theoretic lower bound for representing the set of keys in S, and support the operations insert, delete, and lookup on S. Compact hash tables have received significant attention in recent years, and approaches dating back to Cleary [IEEE T. Comput., 1984], as well as more recent ones, have been implemented and used in a number of applications. However, the space savings of these approaches are outweighed by their slowness relative to conventional hash tables. In this paper, we demonstrate that compact hash tables based upon a simple idea of bucketing practically outperform existing compact hash table implementations in terms of memory usage and construction time, and existing fast hash table implementations in terms of memory usage (and sometimes also in terms of construction time). A related notion is that of a compact Hash ID map, which stores a set Ŝ of n keys from U and implicitly associates each key in Ŝ with a unique value (its ID), chosen by the data structure itself, which is an integer of magnitude O(n), and supports inserts and lookups on Ŝ while using close to B(u, n) bits. One of our approaches is suitable for use as a compact Hash ID map.
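
The space bound is easy to evaluate numerically. The sketch below computes B(u, n) = log₂ binom(u, n) via log-gamma (to avoid huge exact binomials) for invented toy parameters, showing how far below explicit key storage the bound sits:

from math import lgamma, log

def B(u: int, n: int) -> float:
    # information-theoretic lower bound, in bits, for storing an
    # n-element set of keys from a universe of size u: log2(u choose n)
    return (lgamma(u + 1) - lgamma(n + 1) - lgamma(u - n + 1)) / log(2)

u, n, v = 2**32, 10**6, 32         # invented toy parameters
print(B(u, n) / n)                 # ~13.5 bits per key, versus 32 for explicit keys
print((B(u, n) + n * v) / n)       # compact-table target: key bound plus v value bits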

Cite as

Dominik Köppl, Simon J. Puglisi, and Rajeev Raman. Fast and Simple Compact Hashing via Bucketing. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{koppl_et_al:LIPIcs.SEA.2020.7,
  author =	{K\"{o}ppl, Dominik and Puglisi, Simon J. and Raman, Rajeev},
  title =	{{Fast and Simple Compact Hashing via Bucketing}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.7},
  URN =		{urn:nbn:de:0030-drops-120817},
  doi =		{10.4230/LIPIcs.SEA.2020.7},
  annote =	{Keywords: compact hashing, hash table, separate chaining}
}
Document
Effect of Initial Assignment on Local Search Performance for Max Sat

Authors: Daniel Berend and Yochai Twitto

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
In this paper, we explore the correlation between the quality of initial assignments provided to local search heuristics and that of the corresponding final assignments. We restrict our attention to the Max r-Sat problem and to one of the leading local search heuristics - Configuration Checking Local Search (CCLS). We use a tailored version of the Method of Conditional Expectations (MOCE) to generate initial assignments of diverse quality. We show that the correlation in question is significant and long-lasting. Namely, even when we delve deeper into the local search, we are still in the shadow of the initial assignment. Thus, under practical time constraints, the quality of the initial assignment is crucial to the performance of local search heuristics. To demonstrate our point, we improve CCLS by combining it with MOCE. Instead of starting CCLS from random initial assignments, we start it from excellent initial assignments, provided by MOCE. Indeed, it turns out that this kind of initialization provides a significant improvement of this state-of-the-art solver. This improvement becomes more and more significant as the instance grows.
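
A plain (untailored) version of MOCE is short enough to sketch: fix the variables one at a time, always keeping the value with the higher conditional expectation of satisfied clauses, where an unfalsified clause with f free literals is satisfied with probability 1 - 2^(-f). The instance below is invented, and this quadratic-time version ignores the efficiency refinements a tailored implementation would need.

def expected_sat(clauses, assign):
    # E[#satisfied clauses] when each unassigned variable is a fair coin
    total = 0.0
    for clause in clauses:
        if any(abs(l) in assign and (l > 0) == assign[abs(l)] for l in clause):
            total += 1.0                       # already satisfied
        else:
            free = sum(abs(l) not in assign for l in clause)
            total += 1.0 - 0.5 ** free         # 0 if already falsified
    return total

def moce(clauses, num_vars):
    # fix variables one by one, keeping the value with the higher
    # conditional expectation of satisfied clauses
    assign = {}
    for x in range(1, num_vars + 1):
        assign[x] = True
        e_true = expected_sat(clauses, assign)
        assign[x] = False
        assign[x] = e_true >= expected_sat(clauses, assign)
    return assign

clauses = [[1, -2, 3], [-1, 2], [2, 3], [-3, -2, 1]]   # invented instance
a = moce(clauses, 3)
print(a, sum(any((l > 0) == a[abs(l)] for l in c) for c in clauses))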

Cite as

Daniel Berend and Yochai Twitto. Effect of Initial Assignment on Local Search Performance for Max Sat. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{berend_et_al:LIPIcs.SEA.2020.8,
  author =	{Berend, Daniel and Twitto, Yochai},
  title =	{{Effect of Initial Assignment on Local Search Performance for Max Sat}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.8},
  URN =		{urn:nbn:de:0030-drops-120823},
  doi =		{10.4230/LIPIcs.SEA.2020.8},
  annote =	{Keywords: Combinatorial Optimization, Maximum Satisfiability, Local Search, Probabilistic Algorithms}
}
Document
Enumerating All Subgraphs Under Given Constraints Using Zero-Suppressed Sentential Decision Diagrams

Authors: Yu Nakahata, Masaaki Nishino, Jun Kawahara, and Shin-ichi Minato

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Subgraph enumeration is a fundamental task in computer science. Since the number of subgraphs can be large, some enumeration algorithms exploit compressed representations for efficiency. One such representation is the Zero-suppressed Binary Decision Diagram (ZDD). ZDDs can represent sets of subgraphs compactly and support several poly-time queries, such as counting and random sampling. Researchers have proposed efficient algorithms to construct ZDDs representing the set of subgraphs under several constraints, which have yielded fruitful results in many applications. Recently, Zero-suppressed Sentential Decision Diagrams (ZSDDs) have been proposed as variants of ZDDs. ZSDDs can be smaller than ZDDs when representing the same set of subgraphs. However, efficient algorithms to construct ZSDDs are known only for specific types of subgraphs: matchings and paths. We propose a novel framework to construct ZSDDs representing sets of subgraphs under given constraints. Using our framework, we can construct ZSDDs representing several sets of subgraphs, such as matchings, paths, cycles, and spanning trees. We bound the size of the constructed ZSDDs in terms of the branch-width of the input graph, which is smaller than the corresponding bound for ZDDs in terms of path-width. Experiments show that our methods can construct ZSDDs faster than ZDDs and that the constructed ZSDDs are smaller than ZDDs when representing the same set of subgraphs.
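
To make the objects concrete: the code below plainly enumerates one of the subgraph families mentioned above, the matchings of a graph, which is exactly the kind of exponentially large family that ZDDs and ZSDDs store in compressed form. It is a brute-force enumerator, not the paper's ZSDD construction.

def matchings(edges, used=frozenset()):
    # enumerate every matching of the graph given as an edge list
    if not edges:
        yield frozenset()
        return
    (u, v), rest = edges[0], edges[1:]
    yield from matchings(rest, used)                 # exclude edge (u, v)
    if u not in used and v not in used:
        for m in matchings(rest, used | {u, v}):     # include edge (u, v)
            yield m | {(u, v)}

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(list(matchings(cycle4))))   # 7 matchings of the 4-cycle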

Cite as

Yu Nakahata, Masaaki Nishino, Jun Kawahara, and Shin-ichi Minato. Enumerating All Subgraphs Under Given Constraints Using Zero-Suppressed Sentential Decision Diagrams. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{nakahata_et_al:LIPIcs.SEA.2020.9,
  author =	{Nakahata, Yu and Nishino, Masaaki and Kawahara, Jun and Minato, Shin-ichi},
  title =	{{Enumerating All Subgraphs Under Given Constraints Using Zero-Suppressed Sentential Decision Diagrams}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.9},
  URN =		{urn:nbn:de:0030-drops-120831},
  doi =		{10.4230/LIPIcs.SEA.2020.9},
  annote =	{Keywords: Subgraph, Enumeration, Decision Diagram, Zero-suppressed Sentential Decision Diagram (ZSDD), Top-down construction algorithm}
}
Document
Engineering Exact Quasi-Threshold Editing

Authors: Lars Gottesbüren, Michael Hamann, Philipp Schoch, Ben Strasser, Dorothea Wagner, and Sven Zühlsdorf

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Quasi-threshold graphs are {C₄, P₄}-free graphs, i.e., they contain no cycle or path on four nodes as an induced subgraph. We study the {C₄, P₄}-free editing problem, which is the problem of finding a minimum number of edge insertions or deletions that transform an input graph into a quasi-threshold graph. This problem is NP-hard, but it is fixed-parameter tractable (FPT) in the number of edits via a branch-and-bound algorithm, and it admits a simple integer linear programming (ILP) formulation. Both methods are also applicable to the general ℱ-free editing problem for any finite set of graphs ℱ. For the FPT algorithm, we introduce a fast heuristic for computing high-quality lower bounds and an improved branching strategy. For the ILP, we engineer several variants of row generation. We evaluate both methods for quasi-threshold editing on a large set of protein similarity graphs. For most instances, our optimizations speed up the FPT algorithm by one to three orders of magnitude. The running time of the ILP, which we solve using Gurobi, improves only slightly. With all optimizations, the FPT algorithm is slightly faster than the ILP, even when listing all solutions. Additionally, we show that for almost all graphs, solutions of the previously proposed quasi-threshold editing heuristic QTM are close to optimal.
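
The defining property is simple to test by brute force on toy graphs (recognition is in fact possible in linear time): among 4-vertex graphs, an induced subgraph with sorted degree sequence [1, 1, 2, 2] is exactly a P₄ and one with [2, 2, 2, 2] is exactly a C₄. A small sketch:

from itertools import combinations

def is_quasi_threshold(n, edges):
    # brute-force {C4, P4}-free test over all 4-node subsets
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for quad in combinations(range(n), 4):
        degs = sorted(sum(w in adj[u] for w in quad) for u in quad)
        if degs == [1, 1, 2, 2]:      # 3 induced edges: exactly a P4
            return False
        if degs == [2, 2, 2, 2]:      # 4 induced edges: exactly a C4
            return False
    return True

print(is_quasi_threshold(4, [(0, 1), (1, 2), (2, 3)]))           # False: a P4
print(is_quasi_threshold(4, [(0, 1), (0, 2), (0, 3), (1, 2)]))   # True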

Cite as

Lars Gottesbüren, Michael Hamann, Philipp Schoch, Ben Strasser, Dorothea Wagner, and Sven Zühlsdorf. Engineering Exact Quasi-Threshold Editing. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.SEA.2020.10,
  author =	{Gottesb\"{u}ren, Lars and Hamann, Michael and Schoch, Philipp and Strasser, Ben and Wagner, Dorothea and Z\"{u}hlsdorf, Sven},
  title =	{{Engineering Exact Quasi-Threshold Editing}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.10},
  URN =		{urn:nbn:de:0030-drops-120849},
  doi =		{10.4230/LIPIcs.SEA.2020.10},
  annote =	{Keywords: Edge Editing, Integer Linear Programming, FPT algorithm, Quasi-Threshold Editing}
}
Document
Advanced Flow-Based Multilevel Hypergraph Partitioning

Authors: Lars Gottesbüren, Michael Hamann, Sebastian Schlag, and Dorothea Wagner

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
The balanced hypergraph partitioning problem is to partition a hypergraph into k disjoint blocks of bounded size such that the sum of the number of blocks connected by each hyperedge is minimized. We present an improvement to the flow-based refinement framework of KaHyPar-MF, the current state-of-the-art multilevel k-way hypergraph partitioning algorithm for high-quality solutions. Our improvement is based on the recently proposed HyperFlowCutter algorithm for computing bipartitions of unweighted hypergraphs by solving a sequence of incremental maximum flow problems. Since vertices and hyperedges are aggregated during the coarsening phase, refinement algorithms employed in the multilevel setting must be able to handle both weighted hyperedges and weighted vertices - even if the initial input hypergraph is unweighted. We therefore enhance HyperFlowCutter to handle weighted instances and propose a technique for computing maximum flows directly on weighted hypergraphs. We compare the performance of two configurations of our new algorithm with KaHyPar-MF and seven other partitioning algorithms on a comprehensive benchmark set with instances from application areas such as VLSI design, scientific computing, and SAT solving. Our first configuration, KaHyPar-HFC, computes slightly better solutions than KaHyPar-MF using significantly less running time. The second configuration, KaHyPar-HFC*, computes solutions of significantly better quality and is still slightly faster than KaHyPar-MF. Furthermore, in terms of solution quality, both configurations also outperform all other competing partitioners.
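
The objective being minimized is the standard connectivity metric; the sketch below uses the common (λ - 1) form, which differs from the plain sum in the abstract only by the constant number of hyperedges. The instance is invented.

def connectivity_objective(hyperedges, block_of):
    # (lambda - 1) connectivity metric: every hyperedge pays the number
    # of blocks it connects, minus one (0 when it is uncut)
    return sum(len({block_of[v] for v in e}) - 1 for e in hyperedges)

hyperedges = [[0, 1, 2], [2, 3], [3, 4, 5]]            # invented toy instance
block_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}        # bipartition, k = 2
print(connectivity_objective(hyperedges, block_of))    # 1: only [2, 3] is cut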

Cite as

Lars Gottesbüren, Michael Hamann, Sebastian Schlag, and Dorothea Wagner. Advanced Flow-Based Multilevel Hypergraph Partitioning. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.SEA.2020.11,
  author =	{Gottesb\"{u}ren, Lars and Hamann, Michael and Schlag, Sebastian and Wagner, Dorothea},
  title =	{{Advanced Flow-Based Multilevel Hypergraph Partitioning}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.11},
  URN =		{urn:nbn:de:0030-drops-120859},
  doi =		{10.4230/LIPIcs.SEA.2020.11},
  annote =	{Keywords: Hypergraph Partitioning, Maximum Flows, Refinement}
}
  • Refine by Author
  • 4 Cantone, Domenico
  • 4 Wagner, Dorothea
  • 3 Faro, Simone
  • 2 Denzumi, Shuhei
  • 2 Gottesbüren, Lars
  • Show More...

  • Refine by Classification
  • 8 Theory of computation → Design and analysis of algorithms
  • 6 Mathematics of computing → Graph algorithms
  • 3 Theory of computation → Data structures design and analysis
  • 3 Theory of computation → Pattern matching
  • 3 Theory of computation → Shortest paths
  • Show More...

  • Refine by Keyword
  • 2 Algorithm Engineering
  • 2 Algorithms
  • 1 Algorithm engineering
  • 1 Benchmarking
  • 1 Bike Sharing
  • Show More...

  • Refine by Type
  • 31 document
  • 1 volume

  • Refine by Publication Year
  • 31 2020
  • 1 2011
