LIPIcs, Volume 144

27th Annual European Symposium on Algorithms (ESA 2019)



Event

ESA 2019, September 9-11, 2019, Munich/Garching, Germany

Editors

Michael A. Bender
  • Stony Brook University, NY, USA
Ola Svensson
  • EPFL, Lausanne, Switzerland
Grzegorz Herman
  • Jagiellonian University, Kraków, Poland

Publication Details

  • published at: 2019-09-06
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-124-5
  • DBLP: db/conf/esa/esa2019

Documents
Complete Volume
LIPIcs, Volume 144, ESA'19, Complete Volume

Authors: Michael A. Bender, Ola Svensson, and Grzegorz Herman


Abstract
LIPIcs, Volume 144, ESA'19, Complete Volume

Cite as

27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Proceedings{bender_et_al:LIPIcs.ESA.2019,
  title =	{{LIPIcs, Volume 144, ESA'19, Complete Volume}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019},
  URN =		{urn:nbn:de:0030-drops-113004},
  doi =		{10.4230/LIPIcs.ESA.2019},
  annote =	{Keywords: Applied computing, Transportation; Computing methodologies, Algebraic algorithms; Hardware, External storage; Human-centered computing, Graph drawings}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Michael A. Bender, Ola Svensson, and Grzegorz Herman


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 0:i-0:xx, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bender_et_al:LIPIcs.ESA.2019.0,
  author =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.0},
  URN =		{urn:nbn:de:0030-drops-111215},
  doi =		{10.4230/LIPIcs.ESA.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Constant-Factor FPT Approximation for Capacitated k-Median

Authors: Marek Adamczyk, Jarosław Byrka, Jan Marcinkowski, Syed M. Meesum, and Michał Włodarczyk


Abstract
Capacitated k-median is one of the few outstanding optimization problems for which the existence of a polynomial-time constant-factor approximation algorithm remains an open problem. In a series of recent papers, algorithms were obtained that produce solutions violating either the number of facilities or the capacity by a multiplicative factor. However, producing solutions without violations appears to be hard and potentially requires different algorithmic techniques. Notably, when parameterized by the number of facilities k, the problem is also W[2]-hard, making the existence of an exact FPT algorithm unlikely. In this work we provide an FPT-time constant-factor approximation algorithm preserving both the cardinality and the capacity of the facilities. The algorithm runs in time 2^O(k log k) n^O(1) and achieves an approximation ratio of 7+epsilon.
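
To make the constraints concrete, here is a minimal Python sketch (illustrative only, with hypothetical names; not the authors' algorithm) that evaluates a candidate capacitated k-median solution:

def kmedian_cost(dist, clients, open_facilities, k, capacity, assignment):
    """Cost of a capacitated k-median solution, or None if infeasible
    (illustrative sketch). dist[c][f] is the metric distance from client c
    to facility f; assignment maps each client to an open facility."""
    if len(open_facilities) > k:
        return None                       # cardinality violated
    load = {f: 0 for f in open_facilities}
    for c in clients:
        f = assignment[c]
        if f not in load:
            return None                   # client assigned to a closed facility
        load[f] += 1
        if load[f] > capacity:
            return None                   # capacity violated
    return sum(dist[c][assignment[c]] for c in clients)

The point of the paper's result is that both checks are enforced exactly, while the cost is only approximated within 7+epsilon.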

Cite as

Marek Adamczyk, Jarosław Byrka, Jan Marcinkowski, Syed M. Meesum, and Michał Włodarczyk. Constant-Factor FPT Approximation for Capacitated k-Median. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{adamczyk_et_al:LIPIcs.ESA.2019.1,
  author =	{Adamczyk, Marek and Byrka, Jaros{\l}aw and Marcinkowski, Jan and Meesum, Syed M. and W{\l}odarczyk, Micha{\l}},
  title =	{{Constant-Factor FPT Approximation for Capacitated k-Median}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.1},
  URN =		{urn:nbn:de:0030-drops-111225},
  doi =		{10.4230/LIPIcs.ESA.2019.1},
  annote =	{Keywords: K-median, Clustering, Approximation Algorithms, Fixed Parameter Tractability}
}
Fragile Complexity of Comparison-Based Algorithms

Authors: Peyman Afshani, Rolf Fagerberg, David Hammer, Riko Jacob, Irina Kostitsyna, Ulrich Meyer, Manuel Penschuck, and Nodari Sitchinava


Abstract
We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximum number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work and that, with randomization, even a lower fragile complexity is possible.
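
To illustrate the definition (a toy example, not from the paper): the tournament below finds the minimum with every element participating in at most ceil(log2 n) comparisons, whereas a linear scan may charge n-1 comparisons to the minimum itself.

def tournament_min(xs):
    """Minimum via a balanced tournament, tracking per-element comparisons.
    Returns the minimum and the fragile complexity actually incurred."""
    count = [0] * len(xs)                  # comparisons each element took part in
    alive = list(range(len(xs)))
    while len(alive) > 1:
        nxt = []
        for i, j in zip(alive[::2], alive[1::2]):
            count[i] += 1
            count[j] += 1
            nxt.append(i if xs[i] <= xs[j] else j)
        if len(alive) % 2:                 # odd one out advances for free
            nxt.append(alive[-1])
        alive = nxt
    return xs[alive[0]], max(count)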

Cite as

Peyman Afshani, Rolf Fagerberg, David Hammer, Riko Jacob, Irina Kostitsyna, Ulrich Meyer, Manuel Penschuck, and Nodari Sitchinava. Fragile Complexity of Comparison-Based Algorithms. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 2:1-2:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{afshani_et_al:LIPIcs.ESA.2019.2,
  author =	{Afshani, Peyman and Fagerberg, Rolf and Hammer, David and Jacob, Riko and Kostitsyna, Irina and Meyer, Ulrich and Penschuck, Manuel and Sitchinava, Nodari},
  title =	{{Fragile Complexity of Comparison-Based Algorithms}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{2:1--2:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.2},
  URN =		{urn:nbn:de:0030-drops-111235},
  doi =		{10.4230/LIPIcs.ESA.2019.2},
  annote =	{Keywords: Algorithms, comparison based algorithms, lower bounds}
}
Universal Reconfiguration of Facet-Connected Modular Robots by Pivots: The O(1) Musketeers

Authors: Hugo A. Akitaya, Esther M. Arkin, Mirela Damian, Erik D. Demaine, Vida Dujmović, Robin Flatland, Matias Korman, Belen Palop, Irene Parada, André van Renssen, and Vera Sacristán


Abstract
We present the first universal reconfiguration algorithm for transforming a modular robot between any two facet-connected square-grid configurations using pivot moves. More precisely, we show that five extra "helper" modules ("musketeers") suffice to reconfigure the remaining n modules between any two given configurations. Our algorithm uses O(n^2) pivot moves, which is worst-case optimal. Previous reconfiguration algorithms either require less restrictive "sliding" moves, do not preserve facet-connectivity, or, for the setting we consider, could only handle a small subset of configurations defined by a local forbidden pattern. Configurations with the forbidden pattern do have disconnected reconfiguration graphs (discrete configuration spaces), and indeed we show that they can have an exponential number of connected components. But forbidding the local pattern throughout the configuration is far from necessary, as we show that just a constant number of added modules (placed to be freely reconfigurable) suffice for universal reconfigurability. We also classify three different models of natural pivot moves that preserve facet-connectivity, and show separations between these models.
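
For intuition, facet-connectivity of a square-grid configuration is plain 4-neighbor connectivity of the occupied cells; the sketch below checks the invariant that every pivot move must preserve (illustrative only, not the reconfiguration algorithm):

from collections import deque

def facet_connected(modules):
    """True iff the set of grid cells `modules` is connected via shared
    facets (edges of the square grid), checked by breadth-first search."""
    modules = set(modules)
    if not modules:
        return True
    start = next(iter(modules))
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in modules and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(modules)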

Cite as

Hugo A. Akitaya, Esther M. Arkin, Mirela Damian, Erik D. Demaine, Vida Dujmović, Robin Flatland, Matias Korman, Belen Palop, Irene Parada, André van Renssen, and Vera Sacristán. Universal Reconfiguration of Facet-Connected Modular Robots by Pivots: The O(1) Musketeers. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{akitaya_et_al:LIPIcs.ESA.2019.3,
  author =	{Akitaya, Hugo A. and Arkin, Esther M. and Damian, Mirela and Demaine, Erik D. and Dujmovi\'{c}, Vida and Flatland, Robin and Korman, Matias and Palop, Belen and Parada, Irene and van Renssen, Andr\'{e} and Sacrist\'{a}n, Vera},
  title =	{{Universal Reconfiguration of Facet-Connected Modular Robots by Pivots: The O(1) Musketeers}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.3},
  URN =		{urn:nbn:de:0030-drops-111247},
  doi =		{10.4230/LIPIcs.ESA.2019.3},
  annote =	{Keywords: Reconfiguration, geometric algorithm, pivoting squares, modular robots}
}
Constructing Light Spanners Deterministically in Near-Linear Time

Authors: Stephen Alstrup, Søren Dahlgaard, Arnold Filtser, Morten Stöckel, and Christian Wulff-Nilsen


Abstract
Graph spanners are well-studied and widely used both in theory and practice. In a recent breakthrough, Chechik and Wulff-Nilsen [Shiri Chechik and Christian Wulff-Nilsen, 2018] improved the state-of-the-art for light spanners by constructing a (2k-1)(1+epsilon)-spanner with O(n^(1+1/k)) edges and O_epsilon(n^(1/k)) lightness. Soon after, Filtser and Solomon [Arnold Filtser and Shay Solomon, 2016] showed that the classic greedy spanner construction achieves the same bounds. The major drawback of the greedy spanner is its running time of O(mn^(1+1/k)) (which is faster than [Shiri Chechik and Christian Wulff-Nilsen, 2018]). This makes the construction impractical even for graphs of moderate size. Much faster spanner constructions do exist, but they only achieve lightness Omega_epsilon(kn^(1/k)), even when randomization is used. The contribution of this paper is deterministic spanner constructions that are fast and achieve bounds similar to those of the slower state-of-the-art constructions. Our first result is an O_epsilon(n^(2+1/k+epsilon')) time spanner construction which achieves the state-of-the-art bounds. Our second result is an O_epsilon(m + n log n) time construction of a spanner with (2k-1)(1+epsilon) stretch, O(log k * n^(1+1/k)) edges and O_epsilon(log k * n^(1/k)) lightness. This is an exponential improvement in the dependence on k compared to the previous result with such running time. Finally, for the important special case where k=log n, for every constant epsilon>0, we provide an O(m+n^(1+epsilon)) time construction that produces an O(log n)-spanner with O(n) edges and O(1) lightness, which is asymptotically optimal. This is the first known sub-quadratic construction of such a spanner for any k = omega(1). To achieve our constructions, we show a novel deterministic incremental approximate distance oracle. Our new oracle is crucial in our construction, as known randomized dynamic oracles require the assumption of a non-adaptive adversary. This is a strong assumption, which has seen recent attention in prolific venues. Our new oracle allows the order of the edge insertions not to be fixed in advance, which is critical as our spanner algorithm chooses which edges to insert based on the answers to distance queries. We believe our new oracle is of independent interest.
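
For reference, the classic greedy spanner discussed above fits in a few lines; this sketch uses a truncated Dijkstra as the distance oracle (a standard implementation choice for illustration, not the paper's deterministic incremental oracle):

import heapq

def greedy_spanner(n, edges, k):
    """Classic greedy (2k-1)-spanner: scan edges by nondecreasing weight and
    keep an edge only if the current spanner distance between its endpoints
    exceeds (2k-1) times its weight. Slow baseline, for illustration."""
    adj = [[] for _ in range(n)]

    def spanner_dist(s, t, limit):
        # Dijkstra truncated at `limit`; returns inf if dist(s, t) > limit.
        d = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if u == t:
                return du
            if du > d.get(u, float('inf')) or du > limit:
                continue
            for w, v in adj[u]:
                if du + w < d.get(v, float('inf')):
                    d[v] = du + w
                    heapq.heappush(pq, (du + w, v))
        return float('inf')

    spanner = []
    for w, u, v in sorted(edges):          # edges given as (weight, u, v)
        if spanner_dist(u, v, (2 * k - 1) * w) > (2 * k - 1) * w:
            adj[u].append((w, v))
            adj[v].append((w, u))
            spanner.append((w, u, v))
    return spanner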

Cite as

Stephen Alstrup, Søren Dahlgaard, Arnold Filtser, Morten Stöckel, and Christian Wulff-Nilsen. Constructing Light Spanners Deterministically in Near-Linear Time. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alstrup_et_al:LIPIcs.ESA.2019.4,
  author =	{Alstrup, Stephen and Dahlgaard, S{\o}ren and Filtser, Arnold and St\"{o}ckel, Morten and Wulff-Nilsen, Christian},
  title =	{{Constructing Light Spanners Deterministically in Near-Linear Time}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.4},
  URN =		{urn:nbn:de:0030-drops-111255},
  doi =		{10.4230/LIPIcs.ESA.2019.4},
  annote =	{Keywords: Spanners, Light Spanners, Efficient Construction, Deterministic Dynamic Distance Oracle}
}
Repetition Detection in a Dynamic String

Authors: Amihood Amir, Itai Boneh, Panagiotis Charalampopoulos, and Eitan Kondratovsky


Abstract
A string UU for a non-empty string U is called a square. Squares have been well-studied both from a combinatorial and an algorithmic perspective. In this paper, we are the first to consider the problem of maintaining a representation of the squares in a dynamic string S of length at most n. We present an algorithm that updates this representation in n^o(1) time. This representation allows us to report a longest square-substring of S in O(1) time and all square-substrings of S in O(output) time. We achieve this by introducing a novel tool - maintaining prefix-suffix matches of two dynamic strings. We extend the above result to address the problem of maintaining a representation of all runs (maximal repetitions) of the string. Runs are known to capture the periodic structure of a string, and, as an application, we show that our representation of runs allows us to efficiently answer periodicity queries for substrings of a dynamic string. These queries have proven useful in static pattern matching problems and our techniques have the potential of offering solutions to these problems in a dynamic text setting.
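
To fix terminology (illustration only): a square is an immediate repetition UU, and the brute-force enumeration below is the static O(n^3) baseline that a dynamic representation avoids re-running after every edit.

def squares(s):
    """All occurrences (position, square) of substrings of the form UU."""
    n = len(s)
    found = []
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if s[i:i + half] == s[i + half:i + 2 * half]:
                found.append((i, s[i:i + 2 * half]))
    return found

# squares("abaaba") reports (0, "abaaba") with U = "aba" and (2, "aa") with U = "a".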

Cite as

Amihood Amir, Itai Boneh, Panagiotis Charalampopoulos, and Eitan Kondratovsky. Repetition Detection in a Dynamic String. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{amir_et_al:LIPIcs.ESA.2019.5,
  author =	{Amir, Amihood and Boneh, Itai and Charalampopoulos, Panagiotis and Kondratovsky, Eitan},
  title =	{{Repetition Detection in a Dynamic String}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.5},
  URN =		{urn:nbn:de:0030-drops-111265},
  doi =		{10.4230/LIPIcs.ESA.2019.5},
  annote =	{Keywords: string algorithms, dynamic algorithms, squares, repetitions, runs}
}
Longest Common Substring Made Fully Dynamic

Authors: Amihood Amir, Panagiotis Charalampopoulos, Solon P. Pissis, and Jakub Radoszewski


Abstract
Given two strings S and T, each of length at most n, the longest common substring (LCS) problem is to find a longest substring common to S and T. This is a classical problem in computer science with an O(n)-time solution. In the fully dynamic setting, edit operations are allowed in either of the two strings, and the problem is to find an LCS after each edit. We present the first solution to this problem requiring sublinear time in n per edit operation. In particular, we show how to find an LCS after each edit operation in O~(n^(2/3)) time, after O~(n)-time and space preprocessing. This line of research has been recently initiated in a somewhat restricted dynamic variant by Amir et al. [SPIRE 2017]. More specifically, they presented an O~(n)-sized data structure that returns an LCS of the two strings after a single edit operation (that is reverted afterwards) in O~(1) time. At CPM 2018, three papers (Abedin et al., Funakoshi et al., and Urabe et al.) studied analogously restricted dynamic variants of problems on strings. We show that the techniques we develop can be applied to obtain fully dynamic algorithms for all of these variants. The only previously known sublinear-time dynamic algorithms for problems on strings were for maintaining a dynamic collection of strings for comparison queries and for pattern matching, with the most recent advances made by Gawrychowski et al. [SODA 2018] and by Clifford et al. [STACS 2018]. As an intermediate problem we consider computing the solution for a string with a given set of k edits, which leads us, in particular, to answering internal queries on a string. The input to such a query is specified by a substring (or substrings) of a given string. Data structures for answering internal string queries that were proposed by Kociumaka et al. [SODA 2015] and by Gagie et al. [CCCG 2013] are used, along with new ones, based on ingredients such as the suffix tree, heavy-path decomposition, orthogonal range queries, difference covers, and string periodicity.
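
As a point of comparison (not the paper's data structure), the textbook dynamic program below computes an LCS of two static strings in quadratic time; the paper's contribution is to maintain an LCS under edits in ~O(n^(2/3)) time per operation instead of recomputing from scratch.

def lcs(s, t):
    """A longest common substring of s and t by O(|s|*|t|) dynamic programming."""
    best, best_end = 0, 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return s[best_end - best:best_end]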

Cite as

Amihood Amir, Panagiotis Charalampopoulos, Solon P. Pissis, and Jakub Radoszewski. Longest Common Substring Made Fully Dynamic. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{amir_et_al:LIPIcs.ESA.2019.6,
  author =	{Amir, Amihood and Charalampopoulos, Panagiotis and Pissis, Solon P. and Radoszewski, Jakub},
  title =	{{Longest Common Substring Made Fully Dynamic}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{6:1--6:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.6},
  URN =		{urn:nbn:de:0030-drops-111275},
  doi =		{10.4230/LIPIcs.ESA.2019.6},
  annote =	{Keywords: longest common substring, string algorithms, dynamic algorithms}
}
Bilu-Linial Stability, Certified Algorithms and the Independent Set Problem

Authors: Haris Angelidakis, Pranjal Awasthi, Avrim Blum, Vaggos Chatziafratis, and Chen Dan


Abstract
We study the classic Maximum Independent Set problem under the notion of stability introduced by Bilu and Linial (2010): a weighted instance of Independent Set is gamma-stable if it has a unique optimal solution that remains the unique optimal solution under multiplicative perturbations of the weights by a factor of at most gamma >= 1. The goal then is to efficiently recover this "pronounced" optimal solution exactly. In this work, we solve stable instances of Independent Set on several classes of graphs: we improve upon previous results by solving O~(Delta/sqrt(log Delta))-stable instances on graphs of maximum degree Delta, (k - 1)-stable instances on k-colorable graphs and (1 + epsilon)-stable instances on planar graphs (for any fixed epsilon > 0), using both combinatorial techniques as well as LPs and the Sherali-Adams hierarchy. For general graphs, we present a strong lower bound showing that there are no efficient algorithms for O(n^(1/2 - epsilon))-stable instances of Independent Set, assuming the planted clique conjecture. To complement our negative result, we give an algorithm for (epsilon n)-stable instances, for any fixed epsilon > 0. As a by-product of our techniques, we give algorithms as well as lower bounds for stable instances of Node Multiway Cut (a generalization of Edge Multiway Cut), by exploiting its connections to Vertex Cover. Furthermore, we prove a general structural result showing that the integrality gap of convex relaxations of several maximization problems reduces dramatically on stable instances. Moreover, we initiate the study of certified algorithms for Independent Set. The notion of a gamma-certified algorithm was introduced very recently by Makarychev and Makarychev (2018) and it is a class of gamma-approximation algorithms that satisfy one crucial property: the solution returned is optimal for a perturbation of the original instance, where perturbations are again multiplicative up to a factor of gamma >= 1 (hence, such algorithms not only solve gamma-stable instances optimally, but also have guarantees even on unstable instances). Here, we obtain Delta-certified algorithms for Independent Set on graphs of maximum degree Delta, and (1+epsilon)-certified algorithms on planar graphs. Finally, we analyze the algorithm of Berman and Fürer (1994) and prove that it is a ((Delta + 1)/3 + epsilon)-certified algorithm for Independent Set on graphs of maximum degree Delta where all weights are equal to 1.
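
On toy instances, gamma-stability can be verified directly. The sketch below uses the standard reformulation that a unique optimum S* is gamma-stable iff w(S* \ S) > gamma * w(S \ S*) for every other independent set S (the adversary's best perturbation inflates exactly the weights of S \ S*); brute force, exponential time, illustration only.

from itertools import combinations

def is_gamma_stable(n, edges, w, gamma):
    """Check Bilu-Linial gamma-stability of weighted Maximum Independent Set
    on a tiny graph with vertices 0..n-1, edge list `edges`, weights w."""
    E = {frozenset(e) for e in edges}
    indep = [set(S)
             for r in range(n + 1)
             for S in combinations(range(n), r)
             if all(frozenset(p) not in E for p in combinations(S, 2))]
    weight = lambda S: sum(w[v] for v in S)
    star = max(indep, key=weight)          # an optimal independent set
    return all(S == star or weight(star - S) > gamma * weight(S - star)
               for S in indep)             # also fails if the optimum is not unique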

Cite as

Haris Angelidakis, Pranjal Awasthi, Avrim Blum, Vaggos Chatziafratis, and Chen Dan. Bilu-Linial Stability, Certified Algorithms and the Independent Set Problem. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{angelidakis_et_al:LIPIcs.ESA.2019.7,
  author =	{Angelidakis, Haris and Awasthi, Pranjal and Blum, Avrim and Chatziafratis, Vaggos and Dan, Chen},
  title =	{{Bilu-Linial Stability, Certified Algorithms and the Independent Set Problem}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.7},
  URN =		{urn:nbn:de:0030-drops-111288},
  doi =		{10.4230/LIPIcs.ESA.2019.7},
  annote =	{Keywords: Bilu-Linial stability, perturbation resilience, beyond worst-case analysis, Independent Set, Vertex Cover, Multiway Cut}
}
On the Complexity of Anchored Rectangle Packing

Authors: Antonios Antoniadis, Felix Biermeier, Andrés Cristi, Christoph Damerius, Ruben Hoeksma, Dominik Kaaser, Peter Kling, and Lukas Nölke


Abstract
In the Anchored Rectangle Packing (ARP) problem, we are given a set of points P in the unit square [0,1]^2 and seek a maximum-area set of axis-aligned interior-disjoint rectangles S, each of which is anchored at a point p in P. In the most prominent variant - Lower-Left-Anchored Rectangle Packing (LLARP) - rectangles are anchored in their lower-left corner. Freedman [W. T. Tutte (Ed.), 1969] conjectured in 1969 that, if (0,0) in P, then there is an LLARP that covers an area of at least 0.5. Somewhat surprisingly, this conjecture remains open to this day, with the best known result covering an area of 0.091 [Dumitrescu and Tóth, 2015]. Maybe even more surprisingly, it is not known whether LLARP - or any ARP problem with only one anchor - is NP-hard. In this work, we first study the Center-Anchored Rectangle Packing (CARP) problem, where rectangles are anchored in their center. We prove NP-hardness and provide a PTAS. In fact, our PTAS applies to any ARP problem where the anchor lies in the interior of the rectangles. Afterwards, we turn to the LLARP problem and investigate two different resource-augmentation settings: in the first, we allow an epsilon-perturbation of the input P, whereas in the second, we permit an epsilon-overlap between rectangles. For the former setting, we give an algorithm that covers at least as much area as an optimal solution of the original problem. For the latter, we give a (1 - epsilon)-approximation.

Cite as

Antonios Antoniadis, Felix Biermeier, Andrés Cristi, Christoph Damerius, Ruben Hoeksma, Dominik Kaaser, Peter Kling, and Lukas Nölke. On the Complexity of Anchored Rectangle Packing. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{antoniadis_et_al:LIPIcs.ESA.2019.8,
  author =	{Antoniadis, Antonios and Biermeier, Felix and Cristi, Andr\'{e}s and Damerius, Christoph and Hoeksma, Ruben and Kaaser, Dominik and Kling, Peter and N\"{o}lke, Lukas},
  title =	{{On the Complexity of Anchored Rectangle Packing}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.8},
  URN =		{urn:nbn:de:0030-drops-111297},
  doi =		{10.4230/LIPIcs.ESA.2019.8},
  annote =	{Keywords: anchored rectangle, rectangle packing, resource augmentation, PTAS, NP, hardness}
}
Quantum Walk Sampling by Growing Seed Sets

Authors: Simon Apers


Abstract
This work describes a new algorithm for creating a superposition over the edge set of a graph, encoding a quantum sample of the random walk stationary distribution. The algorithm requires a number of quantum walk steps scaling as O~(m^(1/3) delta^(-1/3)), with m the number of edges and delta the random walk spectral gap. It improves on existing strategies by initially growing a classical seed set in the graph, from which a quantum walk is then run. The algorithm leads to a number of improvements: (i) it provides a new bound on the setup cost of quantum walk search algorithms, (ii) it yields a new algorithm for st-connectivity, and (iii) it allows one to create a superposition over the isomorphisms of an n-node graph in time O~(2^(n/3)), surpassing the Omega(2^(n/2)) barrier set by index erasure.

Cite as

Simon Apers. Quantum Walk Sampling by Growing Seed Sets. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{apers:LIPIcs.ESA.2019.9,
  author =	{Apers, Simon},
  title =	{{Quantum Walk Sampling by Growing Seed Sets}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.9},
  URN =		{urn:nbn:de:0030-drops-111300},
  doi =		{10.4230/LIPIcs.ESA.2019.9},
  annote =	{Keywords: Quantum algorithms, Quantum walks, Connectivity, Graph theory}
}
PUFFINN: Parameterless and Universally Fast FInding of Nearest Neighbors

Authors: Martin Aumüller, Tobias Christiani, Rasmus Pagh, and Michael Vesterli


Abstract
We present PUFFINN, a parameterless LSH-based index for solving the k-nearest neighbor problem with probabilistic guarantees. By parameterless we mean that the user is only required to specify the amount of memory the index is supposed to use and the result quality that should be achieved. The index combines several heuristic ideas known in the literature, and small adaptations to the query algorithm make these heuristics rigorous. We perform experiments on real-world and synthetic inputs to evaluate implementation choices and show that the implementation satisfies the quality guarantees while being competitive with other state-of-the-art approaches to nearest neighbor search. We describe a novel synthetic data set that is difficult to solve for almost all existing nearest neighbor search approaches, and for which PUFFINN significantly outperforms previous methods.
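
For background, the sketch below is a minimal random-hyperplane LSH index for cosine similarity, the kind of primitive such indexes build on (function names here are hypothetical; PUFFINN's actual index and its automatic choice of parameters from the memory budget and recall target are considerably more involved):

import random

def build_tables(vectors, n_tables, n_bits):
    """Hash every vector into n_tables tables keyed by the sign pattern of
    n_bits random hyperplanes (classic SimHash for cosine similarity)."""
    d = len(vectors[0])
    tables = []
    for _ in range(n_tables):
        planes = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_bits)]
        buckets = {}
        for idx, v in enumerate(vectors):
            key = tuple(sum(a * b for a, b in zip(p, v)) >= 0 for p in planes)
            buckets.setdefault(key, []).append(idx)
        tables.append((planes, buckets))
    return tables

def candidates(tables, q):
    """Union of all buckets the query q hashes into; these are the points
    whose true distances a k-NN query would then verify."""
    out = set()
    for planes, buckets in tables:
        key = tuple(sum(a * b for a, b in zip(p, q)) >= 0 for p in planes)
        out.update(buckets.get(key, []))
    return out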

Cite as

Martin Aumüller, Tobias Christiani, Rasmus Pagh, and Michael Vesterli. PUFFINN: Parameterless and Universally Fast FInding of Nearest Neighbors. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 10:1-10:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{aumuller_et_al:LIPIcs.ESA.2019.10,
  author =	{Aum\"{u}ller, Martin and Christiani, Tobias and Pagh, Rasmus and Vesterli, Michael},
  title =	{{PUFFINN: Parameterless and Universally Fast FInding of Nearest Neighbors}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{10:1--10:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.10},
  URN =		{urn:nbn:de:0030-drops-111317},
  doi =		{10.4230/LIPIcs.ESA.2019.10},
  annote =	{Keywords: Nearest Neighbor Search, Locality-Sensitive Hashing, Adaptive Similarity Search}
}
Online Multistage Subset Maximization Problems

Authors: Evripidis Bampis, Bruno Escoffier, Kevin Schewior, and Alexandre Teiller


Abstract
Numerous combinatorial optimization problems (knapsack, maximum-weight matching, etc.) can be expressed as subset maximization problems: One is given a ground set N={1,...,n}, a collection F subseteq 2^N of subsets thereof such that the empty set is in F, and an objective (profit) function p: F -> R_+. The task is to choose a set S in F that maximizes p(S). We consider the multistage version (Eisenstat et al., Gupta et al., both ICALP 2014) of such problems: The profit function p_t (and possibly the set of feasible solutions F_t) may change over time. Since in many applications changing the solution is costly, the task becomes to find a sequence of solutions that optimizes the trade-off between good per-time solutions and stable solutions, taking into account an additional similarity bonus. As a similarity measure for two consecutive solutions, we consider either the size of the intersection of the two solutions or the difference of n and the Hamming distance between the two characteristic vectors. We study multistage subset maximization problems in the online setting, that is, p_t (along with possibly F_t) arrives one by one and, upon such an arrival, the online algorithm has to output the corresponding solution without knowledge of the future. We develop general techniques for online multistage subset maximization and thereby characterize those models (given by the type of data evolution and the type of similarity measure) that admit a constant-competitive online algorithm. When no constant competitive ratio is possible, we employ lookahead to circumvent this issue. When a constant competitive ratio is possible, we provide almost matching lower and upper bounds on the best achievable one.

Cite as

Evripidis Bampis, Bruno Escoffier, Kevin Schewior, and Alexandre Teiller. Online Multistage Subset Maximization Problems. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bampis_et_al:LIPIcs.ESA.2019.11,
  author =	{Bampis, Evripidis and Escoffier, Bruno and Schewior, Kevin and Teiller, Alexandre},
  title =	{{Online Multistage Subset Maximization Problems}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.11},
  URN =		{urn:nbn:de:0030-drops-111320},
  doi =		{10.4230/LIPIcs.ESA.2019.11},
  annote =	{Keywords: Multistage optimization, Online algorithms}
}
A Constant Approximation for Colorful k-Center

Authors: Sayan Bandyapadhyay, Tanmay Inamdar, Shreyas Pai, and Kasturi Varadarajan


Abstract
In this paper, we consider the colorful k-center problem, which is a generalization of the well-known k-center problem. Here, we are given red and blue points in a metric space, and a coverage requirement for each color. The goal is to find the smallest radius rho, such that with k balls of radius rho, the desired number of points of each color can be covered. We obtain a constant approximation for this problem in the Euclidean plane. We obtain this result by combining a "pseudo-approximation" algorithm that works in any metric space, and an approximation algorithm that works for a special class of instances in the plane. The latter algorithm uses a novel connection to a certain matching problem in graphs.
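
A brute-force feasibility test makes the objective concrete (exponential in k and for illustration only; as a simplifying assumption, centers are restricted to input points):

from itertools import combinations

def colorful_feasible(points, colors, demand, k, rho, dist):
    """Can some k balls of radius rho, centered at input points, cover at
    least demand[c] points of every color c?"""
    n = len(points)
    for centers in combinations(range(n), k):
        counts = {}
        for i in range(n):
            if min(dist(points[i], points[c]) for c in centers) <= rho:
                counts[colors[i]] = counts.get(colors[i], 0) + 1
        if all(counts.get(c, 0) >= need for c, need in demand.items()):
            return True
    return False

With centers restricted to input points, the optimal radius is then the smallest rho among the O(n^2) pairwise distances for which this test succeeds.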

Cite as

Sayan Bandyapadhyay, Tanmay Inamdar, Shreyas Pai, and Kasturi Varadarajan. A Constant Approximation for Colorful k-Center. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bandyapadhyay_et_al:LIPIcs.ESA.2019.12,
  author =	{Bandyapadhyay, Sayan and Inamdar, Tanmay and Pai, Shreyas and Varadarajan, Kasturi},
  title =	{{A Constant Approximation for Colorful k-Center}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.12},
  URN =		{urn:nbn:de:0030-drops-111336},
  doi =		{10.4230/LIPIcs.ESA.2019.12},
  annote =	{Keywords: Colorful k-center, Euclidean Plane, LP Rounding, Outliers}
}
Parametrized Complexity of Expansion Height

Authors: Ulrich Bauer, Abhishek Rathod, and Jonathan Spreer


Abstract
Deciding whether two simplicial complexes are homotopy equivalent is a fundamental problem in topology, which is famously undecidable. There exists a combinatorial refinement of this concept, called simple-homotopy equivalence: two simplicial complexes are of the same simple-homotopy type if they can be transformed into each other by a sequence of two basic homotopy equivalences, an elementary collapse and its inverse, an elementary expansion. In this article we consider the following related problem: given a 2-dimensional simplicial complex, is there a simple-homotopy equivalence to a 1-dimensional simplicial complex using at most p expansions? We show that the problem, which we call the erasability expansion height, is W[P]-complete in the natural parameter p.

Cite as

Ulrich Bauer, Abhishek Rathod, and Jonathan Spreer. Parametrized Complexity of Expansion Height. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 13:1-13:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bauer_et_al:LIPIcs.ESA.2019.13,
  author =	{Bauer, Ulrich and Rathod, Abhishek and Spreer, Jonathan},
  title =	{{Parametrized Complexity of Expansion Height}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{13:1--13:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.13},
  URN =		{urn:nbn:de:0030-drops-111346},
  doi =		{10.4230/LIPIcs.ESA.2019.13},
  annote =	{Keywords: Simple-homotopy theory, simple-homotopy type, parametrized complexity theory, simplicial complexes, (modified) dunce hat}
}
UnLimited TRAnsfers for Multi-Modal Route Planning: An Efficient Solution

Authors: Moritz Baum, Valentin Buchhold, Jonas Sauer, Dorothea Wagner, and Tobias Zündorf


Abstract
We study a multi-modal route planning scenario consisting of a public transit network and a transfer graph representing a secondary transportation mode (e.g., walking or taxis). The objective is to compute all journeys that are Pareto-optimal with respect to arrival time and the number of required transfers. While various existing algorithms can efficiently compute optimal journeys in either a pure public transit network or a pure transfer graph, combining the two increases running times significantly. As a result, even walking between stops is typically limited by a maximal duration or distance, or by requiring the transfer graph to be transitively closed. To overcome these shortcomings, we propose a novel preprocessing technique called ULTRA (UnLimited TRAnsfers): Given a complete transfer graph (without any limitations, representing an arbitrary non-schedule-based mode of transportation), we compute a small number of transfer shortcuts that are provably sufficient for computing all Pareto-optimal journeys. We demonstrate the practicality of our approach by showing that these transfer shortcuts can be integrated into a variety of state-of-the-art public transit algorithms, establishing the ULTRA-Query algorithm family. Our extensive experimental evaluation shows that ULTRA is able to improve these algorithms from limited to unlimited transfers without sacrificing query speed, yielding the fastest known algorithms for multi-modal routing. This is true not just for walking, but also for other transfer modes such as cycling or driving.
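
The Pareto criterion used here is easy to state in code; a small sketch (not part of ULTRA itself) that filters candidate journeys, given as (arrival time, number of transfers) pairs, down to the Pareto-optimal set:

def pareto_front(journeys):
    """Keep exactly the journeys not dominated by another journey that
    arrives no later with no more transfers (and is strictly better in one)."""
    front, best = [], float('inf')
    for arrival, transfers in sorted(journeys):
        if transfers < best:               # strictly fewer transfers than all earlier arrivals
            front.append((arrival, transfers))
            best = transfers
    return front

# pareto_front([(10, 2), (10, 3), (12, 1), (12, 2)]) == [(10, 2), (12, 1)]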

Cite as

Moritz Baum, Valentin Buchhold, Jonas Sauer, Dorothea Wagner, and Tobias Zündorf. UnLimited TRAnsfers for Multi-Modal Route Planning: An Efficient Solution. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{baum_et_al:LIPIcs.ESA.2019.14,
  author =	{Baum, Moritz and Buchhold, Valentin and Sauer, Jonas and Wagner, Dorothea and Z\"{u}ndorf, Tobias},
  title =	{{UnLimited TRAnsfers for Multi-Modal Route Planning: An Efficient Solution}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.14},
  URN =		{urn:nbn:de:0030-drops-111352},
  doi =		{10.4230/LIPIcs.ESA.2019.14},
  annote =	{Keywords: Algorithms, Optimization, Route Planning, Public Transportation}
}
Streaming and Massively Parallel Algorithms for Edge Coloring

Authors: Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, Marina Knittel, and Hamed Saleh


Abstract
A valid edge-coloring of a graph is an assignment of "colors" to its edges such that no two incident edges receive the same color. The goal is to find a proper coloring that uses few colors. (Note that the maximum degree, Delta, is a trivial lower bound.) In this paper, we revisit this fundamental problem in two models of computation specific to massive graphs, the Massively Parallel Computations (MPC) model and the Graph Streaming model: - Massively Parallel Computation: We give a randomized MPC algorithm that with high probability returns a Delta+O~(Delta^(3/4)) edge coloring in O(1) rounds using O(n) space per machine and O(m) total space. The space per machine can also be further improved to n^(1-Omega(1)) if Delta = n^Omega(1). Our algorithm improves upon a previous result of Harvey et al. [SPAA 2018]. - Graph Streaming: Since the output of edge-coloring is as large as its input, we consider a standard variant of the streaming model where the output is also reported in a streaming fashion. The main challenge is that the algorithm cannot "remember" all the reported edge colors, yet has to output a proper edge coloring using few colors. We give a one-pass O~(n)-space streaming algorithm that always returns a valid coloring and uses 5.44 Delta colors with high probability if the edges arrive in a random order. For adversarial order streams, we give another one-pass O~(n)-space algorithm that requires O(Delta^2) colors.
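
For contrast with the bounds above: the folklore greedy algorithm, which both of the paper's settings improve upon, assigns each edge the smallest color unused at its endpoints and never needs more than 2 * Delta - 1 colors (a minimal sketch):

def greedy_edge_coloring(edges):
    """Proper edge coloring with at most 2*Delta - 1 colors."""
    used = {}                              # vertex -> colors on incident edges
    coloring = {}
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        c = 0
        while c in taken:                  # at most (deg(u)-1) + (deg(v)-1) colors taken
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring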

Cite as

Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, Marina Knittel, and Hamed Saleh. Streaming and Massively Parallel Algorithms for Edge Coloring. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{behnezhad_et_al:LIPIcs.ESA.2019.15,
  author =	{Behnezhad, Soheil and Derakhshan, Mahsa and Hajiaghayi, MohammadTaghi and Knittel, Marina and Saleh, Hamed},
  title =	{{Streaming and Massively Parallel Algorithms for Edge Coloring}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.15},
  URN =		{urn:nbn:de:0030-drops-111361},
  doi =		{10.4230/LIPIcs.ESA.2019.15},
  annote =	{Keywords: Massively Parallel Computation, Streaming, Edge Coloring}
}
Quantum Algorithms for Classical Probability Distributions

Authors: Aleksandrs Belovs


Abstract
We study quantum algorithms working on classical probability distributions. We formulate four different models for accessing a classical probability distribution on a quantum computer, which are derived from previous work on the topic, and study their mutual relationships. Additionally, we prove that quantum query complexity of distinguishing two probability distributions is given by their inverse Hellinger distance, which gives a quadratic improvement over classical query complexity for any pair of distributions. The results are obtained by using the adversary method for state-generating input oracles and for distinguishing probability distributions on input strings.
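
For concreteness, one common normalization of the Hellinger distance (the inverse of which, per the abstract, characterizes the quantum query complexity of distinguishing the two distributions):

import math

def hellinger(p, q):
    """H(p, q) = sqrt(1 - sum_x sqrt(p(x) * q(x))) for distributions given
    as dicts from outcomes to probabilities."""
    bc = sum(math.sqrt(p.get(x, 0.0) * q.get(x, 0.0)) for x in set(p) | set(q))
    return math.sqrt(max(0.0, 1.0 - bc))   # clamp guards against rounding

# hellinger({'a': 1.0}, {'b': 1.0}) == 1.0: perfectly distinguishable.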

Cite as

Aleksandrs Belovs. Quantum Algorithms for Classical Probability Distributions. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 16:1-16:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{belovs:LIPIcs.ESA.2019.16,
  author =	{Belovs, Aleksandrs},
  title =	{{Quantum Algorithms for Classical Probability Distributions}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{16:1--16:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.16},
  URN =		{urn:nbn:de:0030-drops-111370},
  doi =		{10.4230/LIPIcs.ESA.2019.16},
  annote =	{Keywords: quantum query complexity, quantum adversary method, distinguishing probability distributions, Hellinger distance}
}
More Applications of the d-Neighbor Equivalence: Connectivity and Acyclicity Constraints

Authors: Benjamin Bergougnoux and Mamadou Moustapha Kanté


Abstract
In this paper, we design a framework to obtain efficient algorithms for several problems with a global constraint (acyclicity or connectivity) such as Connected Dominating Set, Node Weighted Steiner Tree, Maximum Induced Tree, Longest Induced Path, and Feedback Vertex Set. For all these problems, we obtain 2^O(k) * n^O(1), 2^O(k log(k)) * n^O(1), 2^O(k^2) * n^O(1) and n^O(k) time algorithms parameterized respectively by clique-width, Q-rank-width, rank-width and maximum induced matching width. Our approach simplifies and unifies the known algorithms for each of the parameters and also asymptotically matches the running time of the best algorithms for basic NP-hard problems such as Vertex Cover and Dominating Set. Our framework is based on the d-neighbor equivalence defined in [Bui-Xuan, Telle and Vatshelle, TCS 2013]. The results we obtain highlight the importance and the generalizing power of this equivalence relation on width measures. We also prove that this equivalence relation could be useful for Max Cut: a W[1]-hard problem parameterized by clique-width. For this latter problem, we obtain n^O(k), n^O(k) and n^(2^O(k)) time algorithms parameterized respectively by clique-width, Q-rank-width and rank-width.

Cite as

Benjamin Bergougnoux and Mamadou Moustapha Kanté. More Applications of the d-Neighbor Equivalence: Connectivity and Acyclicity Constraints. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bergougnoux_et_al:LIPIcs.ESA.2019.17,
  author =	{Bergougnoux, Benjamin and Kant\'{e}, Mamadou Moustapha},
  title =	{{More Applications of the d-Neighbor Equivalence: Connectivity and Acyclicity Constraints}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.17},
  URN =		{urn:nbn:de:0030-drops-111383},
  doi =		{10.4230/LIPIcs.ESA.2019.17},
  annote =	{Keywords: connectivity problem, feedback vertex set, d-neighbor equivalence, \{sigma,rho\}-domination, clique-width, rank-width, mim-width}
}
Online Bin Covering with Limited Migration

Authors: Sebastian Berndt, Leah Epstein, Klaus Jansen, Asaf Levin, Marten Maack, and Lars Rohwedder


Abstract
Semi-online models where decisions may be revoked in a limited way have been studied extensively in the last years. This is motivated by the fact that the pure online model is often too restrictive to model real-world applications, where some changes might be allowed. A well-studied measure of the amount of decisions that can be revoked is the migration factor beta: When an object o of size s(o) arrives, the decisions for objects of total size at most beta * s(o) may be revoked. Usually beta should be a constant. This means that a small object only leads to small changes. This measure has been successfully investigated for different, classical problems such as bin packing or makespan minimization. The dual of makespan minimization - the Santa Claus or machine covering problem - has also been studied, whereas the dual of bin packing - the bin covering problem - has not been looked at from such a perspective. In this work, we extensively study the bin covering problem with migration in different scenarios. We develop algorithms both for the static case - where only insertions are allowed - and for the dynamic case, where items may also depart. We also develop lower bounds for these scenarios both for amortized migration and for worst-case migration showing that our algorithms have nearly optimal migration factor and asymptotic competitive ratio (up to an arbitrary small epsilon). We therefore resolve the competitiveness of the bin covering problem with migration.
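
As a baseline, the classical Next Fit rule for online bin covering needs no migration at all and is asymptotically 1/2-competitive; the paper's algorithms spend a bounded migration factor to beat this. A minimal sketch (bins are covered once their content reaches the unit threshold):

def next_fit_covering(items):
    """Online bin covering without migration: pour arriving items into the
    open bin; once its total size reaches 1, it counts as covered and a
    fresh bin is opened. Returns the list of covered bins."""
    covered, open_bin, total = [], [], 0.0
    for size in items:
        open_bin.append(size)
        total += size
        if total >= 1.0:
            covered.append(open_bin)
            open_bin, total = [], 0.0
    return covered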

Cite as

Sebastian Berndt, Leah Epstein, Klaus Jansen, Asaf Levin, Marten Maack, and Lars Rohwedder. Online Bin Covering with Limited Migration. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{berndt_et_al:LIPIcs.ESA.2019.18,
  author =	{Berndt, Sebastian and Epstein, Leah and Jansen, Klaus and Levin, Asaf and Maack, Marten and Rohwedder, Lars},
  title =	{{Online Bin Covering with Limited Migration}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.18},
  URN =		{urn:nbn:de:0030-drops-111391},
  doi =		{10.4230/LIPIcs.ESA.2019.18},
  annote =	{Keywords: online algorithms, dynamic algorithms, competitive ratio, bin covering, migration factor}
}
Computing k-Modal Embeddings of Planar Digraphs

Authors: Juan José Besa, Giordano Da Lozzo, and Michael T. Goodrich


Abstract
Given a planar digraph G and a positive even integer k, an embedding of G in the plane is k-modal, if every vertex of G is incident to at most k pairs of consecutive edges with opposite orientations, i.e., the incoming and the outgoing edges at each vertex are grouped by the embedding into at most k sets of consecutive edges with the same orientation. In this paper, we study the k-Modality problem, which asks for the existence of a k-modal embedding of a planar digraph. This combinatorial problem is at the very core of a variety of constrained embedding questions for planar digraphs and flat clustered networks. First, since the 2-Modality problem can be easily solved in linear time, we consider the general k-Modality problem for any value of k>2 and show that the problem is NP-complete for planar digraphs of maximum degree Delta <= k+3. We relate its computational complexity to that of two notions of planarity for flat clustered networks: Planar Intersection-Link and Planar NodeTrix representations. This allows us to answer in the strongest possible way an open question by Di Giacomo [https://doi.org/10.1007/978-3-319-73915-1_37], concerning the complexity of constructing planar NodeTrix representations of flat clustered networks with small clusters, and to address a research question by Angelini et al. [https://doi.org/10.7155/jgaa.00437], concerning intersection-link representations based on geometric objects that determine complex arrangements. On the positive side, we provide a simple FPT algorithm for partial 2-trees of arbitrary degree, whose running time is exponential in k and linear in the input size. Second, motivated by the recently-introduced planar L-drawings of planar digraphs [https://doi.org/10.1007/978-3-319-73915-1_36], which require the computation of a 4-modal embedding, we focus our attention on k=4. On the algorithmic side, we show a complexity dichotomy for the 4-Modality problem with respect to Delta, by providing a linear-time algorithm for planar digraphs with Delta <= 6. This algorithmic result is based on decomposing the input digraph into its blocks via BC-trees and each of these blocks into its triconnected components via SPQR-trees. In particular, we are able to show that the constraints imposed on the embedding by the rigid triconnected components can be tackled by means of a small set of reduction rules and discover that the algorithmic core of the problem lies in special instances of NAESAT, which we prove to be always NAE-satisfiable - a result of independent interest that improves on Porschen et al. [https://doi.org/10.1007/978-3-540-24605-3_14]. Finally, on the combinatorial side, we consider outerplanar digraphs and show that any such digraph always admits a k-modal embedding with k=4 and that this value of k is best possible for the digraphs in this family.
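
Checking k-modality of a fixed embedding is straightforward once the rotation system is in hand (the hard question studied in the paper is whether some embedding achieves it); a small sketch:

def is_k_modal(rotation_system, k):
    """rotation_system[v] is the cyclic sequence of edge orientations at v,
    '+' for outgoing and '-' for incoming. The embedding is k-modal iff no
    vertex has more than k cyclically consecutive pairs of opposite signs."""
    for orientations in rotation_system.values():
        n = len(orientations)
        switches = sum(1 for i in range(n)
                       if orientations[i] != orientations[(i + 1) % n])
        if switches > k:
            return False
    return True

# A vertex with pattern '+-+-' has 4 switches: fine for k = 4, not for k = 2.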

Cite as

Juan José Besa, Giordano Da Lozzo, and Michael T. Goodrich. Computing k-Modal Embeddings of Planar Digraphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{besa_et_al:LIPIcs.ESA.2019.19,
  author =	{Besa, Juan Jos\'{e} and Da Lozzo, Giordano and Goodrich, Michael T.},
  title =	{{Computing k-Modal Embeddings of Planar Digraphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.19},
  URN =		{urn:nbn:de:0030-drops-111404},
  doi =		{10.4230/LIPIcs.ESA.2019.19},
  annote =	{Keywords: Modal Embeddings, Planarity, Directed Graphs, SPQR trees, NAESAT}
}
Document
Cost Sharing over Combinatorial Domains: Complement-Free Cost Functions and Beyond

Authors: Georgios Birmpas, Evangelos Markakis, and Guido Schäfer


Abstract
We study mechanism design for combinatorial cost sharing models. Imagine that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although there are several works studying specialized versions of such problems, there has been almost no progress for general combinatorial cost sharing domains until recently [S. Dobzinski and S. Ovadia, 2017]. Still, many questions about the interplay between strategyproofness, cost recovery and economic efficiency remain unanswered. The main goal of our work is to further understand this interplay in terms of budget balance and social cost approximation. Towards this, we provide a refinement of cross-monotonicity (which we term trace-monotonicity) that is applicable to iterative mechanisms. The trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations (complementary to a certain extent) of cost functions which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design a scheme of ascending cost sharing mechanisms which is applicable to the combinatorial cost sharing setting with symmetric submodular valuations. Using our first cost function parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, O(1)-budget-balanced and O(H_n)-approximate with respect to the social cost. Further, we show that our mechanism is budget-balanced and H_n-approximate if both the valuations and the cost functions are symmetric submodular; given existing impossibility results, this is best possible. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget balanced by construction, but in general only guarantees a poor social cost approximation of n. We identify conditions under which the mechanism achieves improved social cost approximation guarantees. In particular, we derive improved mechanisms for fundamental cost sharing problems, including Vertex Cover and Set Cover.
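
For orientation, the classic Moulin mechanism that cross-monotonicity powers (and that the paper's trace-monotonicity refines for iterative settings) can be sketched in a few lines; cost_share is a hypothetical cross-monotonic cost-share function supplied by the designer.

    def moulin_mechanism(players, bids, cost_share):
        # Repeatedly drop every player whose current cost share exceeds their
        # bid; cross-monotonicity (shares only grow as players leave) is what
        # makes the resulting mechanism group-strategyproof and budget balanced.
        S = set(players)
        while True:
            drop = {i for i in S if cost_share(S, i) > bids[i]}
            if not drop:
                break
            S -= drop
        return S, {i: cost_share(S, i) for i in S}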

Cite as

Georgios Birmpas, Evangelos Markakis, and Guido Schäfer. Cost Sharing over Combinatorial Domains: Complement-Free Cost Functions and Beyond. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{birmpas_et_al:LIPIcs.ESA.2019.20,
  author =	{Birmpas, Georgios and Markakis, Evangelos and Sch\"{a}fer, Guido},
  title =	{{Cost Sharing over Combinatorial Domains: Complement-Free Cost Functions and Beyond}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.20},
  URN =		{urn:nbn:de:0030-drops-111419},
  doi =		{10.4230/LIPIcs.ESA.2019.20},
  annote =	{Keywords: Approximation Algorithms, Combinatorial Cost Sharing, Mechanism Design, Truthfulness}
}
Document
Efficiently Generating Geometric Inhomogeneous and Hyperbolic Random Graphs

Authors: Thomas Bläsius, Tobias Friedrich, Maximilian Katzmann, Ulrich Meyer, Manuel Penschuck, and Christopher Weyand


Abstract
Hyperbolic random graphs (HRG) and geometric inhomogeneous random graphs (GIRG) are two similar generative network models that were designed to resemble complex real-world networks. In particular, they have a power-law degree distribution with controllable exponent beta, and high clustering that can be controlled via the temperature T. We present the first implementation of an efficient GIRG generator running in expected linear time. Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs. Our resulting implementation is the fastest sequential HRG generator, despite the fact that we support non-zero temperatures. Though non-zero temperatures are crucial for many applications, most existing generators are restricted to T = 0. We also support parallelization, although this is not the focus of this paper. Moreover, we note that our generators draw from the correct probability distribution, i.e., they involve no approximation. Besides the generators themselves, we also provide an efficient algorithm to determine the non-trivial dependency between the average degree of the resulting graph and the input parameters of the GIRG model. This makes it possible to specify the desired expected average degree as input. Moreover, we investigate the differences between HRGs and GIRGs, shedding new light on the nature of the relation between the two models. Although HRGs represent, in a certain sense, a special case of the GIRG model, we find that a straightforward inclusion does not hold in practice. However, the difference is negligible for most use cases.
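
As a baseline for what such a generator produces, the following quadratic-time reference sampler draws from one common GIRG-style parametrization (Pareto weights with exponent beta, max-norm distances on the unit torus, temperature T; constants omitted). It illustrates the model only and is not the paper's linear-time algorithm.

    import random

    def sample_girg(n, beta=2.5, d=2, T=0.5, seed=0):
        rng = random.Random(seed)
        # Power-law weights: P(w > t) = t^-(beta - 1), i.e. density exponent beta.
        w = [(1.0 - rng.random()) ** (-1.0 / (beta - 1)) for _ in range(n)]
        W = sum(w)
        x = [[rng.random() for _ in range(d)] for _ in range(n)]

        def torus_dist(a, b):  # max-norm distance on the unit torus
            return max(min(abs(s - t), 1.0 - abs(s - t)) for s, t in zip(a, b))

        edges = []
        for u in range(n):
            for v in range(u + 1, n):
                dist = max(torus_dist(x[u], x[v]), 1e-12)  # guard coincident points
                p = min(1.0, (w[u] * w[v] / (W * dist ** d)) ** (1.0 / T))
                if rng.random() < p:
                    edges.append((u, v))
        return edges

Replacing the double loop with geometric bucketing is exactly the step that brings the expected running time down to linear.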

Cite as

Thomas Bläsius, Tobias Friedrich, Maximilian Katzmann, Ulrich Meyer, Manuel Penschuck, and Christopher Weyand. Efficiently Generating Geometric Inhomogeneous and Hyperbolic Random Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{blasius_et_al:LIPIcs.ESA.2019.21,
  author =	{Bl\"{a}sius, Thomas and Friedrich, Tobias and Katzmann, Maximilian and Meyer, Ulrich and Penschuck, Manuel and Weyand, Christopher},
  title =	{{Efficiently Generating Geometric Inhomogeneous and Hyperbolic Random Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.21},
  URN =		{urn:nbn:de:0030-drops-111424},
  doi =		{10.4230/LIPIcs.ESA.2019.21},
  annote =	{Keywords: hyperbolic random graphs, geometric inhomogeneous random graph, efficient network generation}
}
Document
Randomized Incremental Construction of Delaunay Triangulations of Nice Point Sets

Authors: Jean-Daniel Boissonnat, Olivier Devillers, Kunal Dutta, and Marc Glisse


Abstract
Randomized incremental construction (RIC) is one of the most important paradigms for building geometric data structures. Clarkson and Shor developed a general theory that led to numerous algorithms that are both simple and efficient in theory and in practice. Randomized incremental constructions are most of the time space and time optimal in the worst-case, as exemplified by the construction of convex hulls, Delaunay triangulations and arrangements of line segments. However, the worst-case scenario occurs rarely in practice and we would like to understand how RIC behaves when the input is nice in the sense that the associated output is significantly smaller than in the worst-case. For example, it is known that the Delaunay triangulations of nicely distributed points on polyhedral surfaces in E^3 has linear complexity, as opposed to a worst-case quadratic complexity. The standard analysis does not provide accurate bounds on the complexity of such cases and we aim at establishing such bounds in this paper. More precisely, we will show that, in the case of nicely distributed points on polyhedral surfaces, the complexity of the usual RIC is O(n log n), which is optimal. In other words, without any modification, RIC nicely adapts to good cases of practical value. Our proofs also work for some other notions of nicely distributed point sets, such as (epsilon, kappa)-samples. Along the way, we prove a probabilistic lemma for sampling without replacement, which may be of independent interest.
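
The RIC paradigm itself is compact: a uniformly random insertion order is the only source of randomness, which is what the refined analysis exploits. A minimal planar sketch using SciPy's incremental Delaunay mode as a stand-in for the textbook point-location machinery:

    import numpy as np
    from scipy.spatial import Delaunay

    def ric_delaunay_2d(points, seed=0):
        # Shuffle once, then insert points one by one; for this sketch we
        # assume the first three shuffled points are not collinear.
        rng = np.random.default_rng(seed)
        pts = np.asarray(points, dtype=float)[rng.permutation(len(points))]
        tri = Delaunay(pts[:3], incremental=True)
        for i in range(3, len(pts)):
            tri.add_points(pts[i:i + 1])
        tri.close()
        return tri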

Cite as

Jean-Daniel Boissonnat, Olivier Devillers, Kunal Dutta, and Marc Glisse. Randomized Incremental Construction of Delaunay Triangulations of Nice Point Sets. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 22:1-22:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{boissonnat_et_al:LIPIcs.ESA.2019.22,
  author =	{Boissonnat, Jean-Daniel and Devillers, Olivier and Dutta, Kunal and Glisse, Marc},
  title =	{{Randomized Incremental Construction of Delaunay Triangulations of Nice Point Sets}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{22:1--22:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.22},
  URN =		{urn:nbn:de:0030-drops-111437},
  doi =		{10.4230/LIPIcs.ESA.2019.22},
  annote =	{Keywords: Randomized incremental construction, Delaunay triangulations, Voronoi diagrams, polyhedral surfaces, probabilistic analysis}
}
Document
Fine-Grained Complexity of k-OPT in Bounded-Degree Graphs for Solving TSP

Authors: Édouard Bonnet, Yoichi Iwata, Bart M. P. Jansen, and Łukasz Kowalik


Abstract
The Traveling Salesman Problem asks to find a minimum-weight Hamiltonian cycle in an edge-weighted complete graph. Local search is a widely-employed strategy for finding good solutions to TSP. A popular neighborhood operator for local search is k-opt, which turns a Hamiltonian cycle C into a new Hamiltonian cycle C' by replacing k edges. We analyze the problem of determining whether the weight of a given cycle can be decreased by a k-opt move. Earlier work has shown that (i) assuming the Exponential Time Hypothesis, there is no algorithm that can detect whether or not a given Hamiltonian cycle C in an n-vertex input can be improved by a k-opt move in time f(k) n^o(k / log k) for any function f, while (ii) it is possible to improve on the brute-force running time of O(n^k) and save linear factors in the exponent. Modern TSP heuristics are very successful at identifying the most promising edges to be used in k-opt moves, and experiments show that very good global solutions can already be reached using only the top-O(1) most promising edges incident to each vertex. This leads to the following question: can improving k-opt moves be found efficiently in graphs of bounded degree? We answer this question in various regimes, presenting new algorithms and conditional lower bounds. We show that the aforementioned ETH lower bound also holds for graphs of maximum degree three, but that in bounded-degree graphs the best improving k-move can be found in time O(n^((23/135+epsilon_k)k)), where lim_{k -> infty} epsilon_k = 0. This improves upon the best-known bounds for general graphs. Due to its practical importance, we devote special attention to the range of k in which improving k-moves in bounded-degree graphs can be found in quasi-linear time. For k <= 7, we give quasi-linear time algorithms for general weights. For k=8 we obtain a quasi-linear time algorithm when the weights are bounded by O(polylog n). On the other hand, based on established fine-grained complexity hypotheses about the impossibility of detecting a triangle in edge-linear time, we prove that the k = 9 case does not admit quasi-linear time algorithms. Hence we fully characterize the values of k for which quasi-linear time algorithms exist for polylogarithmic weights on bounded-degree graphs.
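
For the smallest case k = 2, detecting an improving move is a textbook double loop; the paper's question is how much of this brute force can be avoided for larger k when candidate edges have bounded degree. A minimal sketch (dist is a symmetric distance matrix or dict-of-dicts):

    def improving_2opt_move(tour, dist):
        # Return indices (i, j) of an improving 2-opt move, or None.
        # Removing edges (tour[i], tour[i+1]) and (tour[j], tour[j+1]) and
        # reconnecting improves the tour iff the exchange below is cheaper.
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # same pair of edges, no real move
                a, b, c, d = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    return i, j
        return None

    def apply_2opt(tour, i, j):
        # Reconnect by reversing the segment strictly between the two edges.
        return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]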

Cite as

Édouard Bonnet, Yoichi Iwata, Bart M. P. Jansen, and Łukasz Kowalik. Fine-Grained Complexity of k-OPT in Bounded-Degree Graphs for Solving TSP. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.ESA.2019.23,
  author =	{Bonnet, \'{E}douard and Iwata, Yoichi and Jansen, Bart M. P. and Kowalik, {\L}ukasz},
  title =	{{Fine-Grained Complexity of k-OPT in Bounded-Degree Graphs for Solving TSP}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.23},
  URN =		{urn:nbn:de:0030-drops-111444},
  doi =		{10.4230/LIPIcs.ESA.2019.23},
  annote =	{Keywords: traveling salesman problem, k-OPT, bounded degree}
}
Document
Linear Transformations Between Colorings in Chordal Graphs

Authors: Nicolas Bousquet and Valentin Bartier


Abstract
Let k and d be such that k >= d+2. Consider two k-colorings of a d-degenerate graph G. Can we transform one into the other by recoloring one vertex at each step while maintaining a proper coloring at every step? Cereceda et al. answered that question in the affirmative, and exhibited a recoloring sequence of exponential length. If k=d+2, we know that there exist graphs for which a quadratic number of recolorings is needed. When k=2d+2, there always exists a linear transformation. In this paper, we prove that, as long as k >= d+4, there exists a transformation of length at most f(Delta) * n between any pair of k-colorings of chordal graphs (where Delta denotes the maximum degree of the graph). The proof is constructive and provides a linear-time algorithm that, given two k-colorings c_1, c_2, computes a linear transformation between c_1 and c_2.
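
The reconfiguration model is easy to pin down in code; this sketch checks that a proposed transformation recolors one vertex per step and stays proper throughout (the input conventions are ours, not the paper's).

    def is_proper(graph, coloring):
        # graph: dict vertex -> iterable of neighbours (symmetric adjacency).
        return all(coloring[u] != coloring[v] for u in graph for v in graph[u])

    def verify_transformation(graph, start, target, steps):
        # steps: list of (vertex, new_color) single-vertex recoloring moves.
        c = dict(start)
        if not is_proper(graph, c):
            return False
        for v, color in steps:
            c[v] = color
            if not is_proper(graph, c):
                return False
        return c == dict(target)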

Cite as

Nicolas Bousquet and Valentin Bartier. Linear Transformations Between Colorings in Chordal Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bousquet_et_al:LIPIcs.ESA.2019.24,
  author =	{Bousquet, Nicolas and Bartier, Valentin},
  title =	{{Linear Transformations Between Colorings in Chordal Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.24},
  URN =		{urn:nbn:de:0030-drops-111459},
  doi =		{10.4230/LIPIcs.ESA.2019.24},
  annote =	{Keywords: graph recoloring, chordal graphs}
}
Document
Patching Colors with Tensors

Authors: Cornelius Brand


Abstract
We describe a generic way of exponentially speeding up algorithms which rely on Color-Coding by using the recently introduced technique of Extensor-Coding (Brand, Dell and Husfeldt, STOC 2018). To demonstrate the usefulness of this "patching" of Color-Coding algorithms, we apply it ad hoc to the exponential-space algorithms given in Gutin et al. (Journal Comp. Sys. Sci. 2018) and obtain the fastest known deterministic algorithms for, among others, the k-internal out-branching and k-internal spanning tree problems. To realize these technical advances, we make qualitative progress in a special case of the detection of multilinear monomials in multivariate polynomials: We give the first deterministic fixed-parameter tractable algorithm for the k-multilinear detection problem on a class of arithmetic circuits that may involve cancellations, as long as the computed polynomial is promised to satisfy a certain natural condition. Furthermore, we explore the limitations of using this very approach to speed up algorithms by determining exactly the dimension of a crucial subalgebra of extensors that arises naturally in the instantiation of the technique: It is equal to F_{2k+1}, the kth odd term in the Fibonacci sequence. To determine this dimension, we use tools from the theory of Gröbner bases, and the studied algebraic object may be of independent interest. We note that the asymptotic bound of F_{2k+1} ~~ phi^(2k) = O(2.619^k) curiously coincides with the running time bound on one of the fastest algorithms for the k-path problem based on representative sets due to Fomin et al. (JACM 2016). Here, phi is the golden ratio.
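
The asymptotics quoted for this dimension follow directly from Binet's formula, with \psi = 1 - \varphi the conjugate root (|\psi| < 1):

    F_{2k+1} \;=\; \frac{\varphi^{2k+1} - \psi^{2k+1}}{\sqrt{5}}
            \;=\; \Theta\big(\varphi^{2k+1}\big)
            \;=\; \Theta\big(\varphi \cdot (\varphi^2)^k\big),
    \qquad \varphi^2 = \varphi + 1 \approx 2.618,

hence F_{2k+1} = O(2.619^k), matching the representative-sets running time mentioned above.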

Cite as

Cornelius Brand. Patching Colors with Tensors. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 25:1-25:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{brand:LIPIcs.ESA.2019.25,
  author =	{Brand, Cornelius},
  title =	{{Patching Colors with Tensors}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{25:1--25:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.25},
  URN =		{urn:nbn:de:0030-drops-111467},
  doi =		{10.4230/LIPIcs.ESA.2019.25},
  annote =	{Keywords: Color-Coding, Extensor-Coding, internal out-branching, colorful problems, algebraic algorithms, multilinear detection, deterministic algorithms, exterior algebra}
}
Document
On Geometric Set Cover for Orthants

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Michał Pilipczuk, and Erik Jan van Leeuwen


Abstract
We study SET COVER for orthants: Given a set of points in a d-dimensional Euclidean space and a set of orthants of the form (-infty,p_1] x ... x (-infty,p_d], select a minimum number of orthants so that every point is contained in at least one selected orthant. This problem draws its motivation from applications in multi-objective optimization problems. While for d=2 the problem can be solved in polynomial time, for d>2 no algorithm is known that avoids the enumeration of all size-k subsets of the input to test whether there is a set cover of size k. Our contribution is a precise understanding of the complexity of this problem in any dimension d >= 3, when k is considered a parameter: - For d=3, we give an algorithm with runtime n^O(sqrt{k}), thus avoiding exhaustive enumeration. - For d=3, we prove a tight lower bound of n^Omega(sqrt{k}) (assuming ETH). - For d >= 4, we prove a tight lower bound of n^Omega(k) (assuming ETH). Here n is the size of the set of points plus the size of the set of orthants. The first statement comes as a corollary of a more general result: an algorithm for SET COVER for half-spaces in dimension 3. In particular, we show that given a set of points U in R^3, a set of half-spaces D in R^3, and an integer k, one can decide whether U can be covered by the union of at most k half-spaces from D in time |D|^O(sqrt{k}) * |U|^O(1). We also study approximation for SET COVER for orthants. While in dimension 3 a PTAS can be inferred from existing results, we show that in dimension 4 and larger, there is no 1.05-approximation algorithm with runtime f(k) * n^o(k) for any computable f, where k is the optimum.
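
Coverage by an orthant is just coordinate-wise domination, so the n^O(k) enumeration baseline that the paper improves for d = 3 fits in a few lines of Python:

    from itertools import combinations

    def covers(apex, point):
        # The orthant (-infty, p_1] x ... x (-infty, p_d] contains a point
        # iff the point is dominated coordinate-wise by the apex p.
        return all(x <= p for x, p in zip(point, apex))

    def min_orthant_cover(orthants, points, k):
        # Brute force over all subsets of size at most k: essentially the
        # enumeration that is unavoidable for d >= 4 under ETH.
        for size in range(k + 1):
            for chosen in combinations(orthants, size):
                if all(any(covers(o, q) for o in chosen) for q in points):
                    return chosen
        return None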

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Michał Pilipczuk, and Erik Jan van Leeuwen. On Geometric Set Cover for Orthants. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2019.26,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan},
  title =	{{On Geometric Set Cover for Orthants}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{26:1--26:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.26},
  URN =		{urn:nbn:de:0030-drops-111476},
  doi =		{10.4230/LIPIcs.ESA.2019.26},
  annote =	{Keywords: Set Cover, parameterized complexity, algorithms, Exponential Time Hypothesis}
}
Document
Simpler and Better Algorithms for Minimum-Norm Load Balancing

Authors: Deeparnab Chakrabarty and Chaitanya Swamy


Abstract
Recently, Chakrabarty and Swamy (STOC 2019) introduced the minimum-norm load-balancing problem on unrelated machines, wherein we are given a set J of jobs that need to be scheduled on a set of m unrelated machines, and a monotone, symmetric norm. We seek an assignment sigma: J -> [m] that minimizes the norm of the resulting load vector load_{sigma} in R_+^m, where load_{sigma}(i) is the load on machine i under the assignment sigma. Besides capturing all l_p norms, symmetric norms also capture other norms of interest, including top-l norms and ordered norms. Chakrabarty and Swamy (STOC 2019) give a (38+epsilon)-approximation algorithm for this problem via a general framework they develop for minimum-norm optimization that proceeds by first carefully reducing this problem (in a series of steps) to a problem called min-max ordered load balancing, and then devising a so-called deterministic oblivious LP-rounding algorithm for ordered load balancing. We give a direct and simple (4+epsilon)-approximation algorithm for minimum-norm load balancing, based on rounding a (near-optimal) solution to a novel convex-programming relaxation for the problem. Whereas the natural convex program encoding the minimum-norm load-balancing problem has a large non-constant integrality gap, we show that this issue can be remedied by including a key constraint that bounds the "norm of the job-cost vector." Our techniques also yield an (essentially) 4-approximation for: (a) multi-norm load balancing, wherein we are given multiple monotone symmetric norms, and we seek an assignment respecting a given budget for each norm; (b) the best simultaneous approximation factor achievable for all symmetric norms for a given instance.
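
To see what objective is being approximated, here is a small sketch computing the load vector of an assignment and two members of the symmetric-norm family (top-l and l_p norms); proc is a hypothetical dict of processing times on unrelated machines.

    def load_vector(assignment, proc, m):
        # assignment: dict job -> machine index; proc: dict (job, machine) -> time.
        load = [0.0] * m
        for job, machine in assignment.items():
            load[machine] += proc[job, machine]
        return load

    def top_l_norm(load, l):
        # Sum of the l largest loads; l = 1 gives the makespan, l = m the total load.
        return sum(sorted(load, reverse=True)[:l])

    def lp_norm(load, p):
        return sum(x ** p for x in load) ** (1.0 / p)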

Cite as

Deeparnab Chakrabarty and Chaitanya Swamy. Simpler and Better Algorithms for Minimum-Norm Load Balancing. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 27:1-27:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chakrabarty_et_al:LIPIcs.ESA.2019.27,
  author =	{Chakrabarty, Deeparnab and Swamy, Chaitanya},
  title =	{{Simpler and Better Algorithms for Minimum-Norm Load Balancing}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{27:1--27:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.27},
  URN =		{urn:nbn:de:0030-drops-111488},
  doi =		{10.4230/LIPIcs.ESA.2019.27},
  annote =	{Keywords: Approximation Algorithms}
}
Document
On Computing Centroids According to the p-Norms of Hamming Distance Vectors

Authors: Jiehua Chen, Danny Hermelin, and Manuel Sorge


Abstract
In this paper we consider the p-Norm Hamming Centroid problem, which asks to determine whether some given strings have a centroid with a bound on the p-norm of its Hamming distances to the strings. Specifically, given a set S of strings and a real k, we consider the problem of determining whether there exists a string s^* with (sum_{s in S} d^{p}(s^*,s))^(1/p) <= k, where d(.,.) denotes the Hamming distance metric. This problem has important applications in data clustering and multi-winner committee elections, and is a generalization of the well-known polynomial-time solvable Consensus String (p=1) problem, as well as the NP-hard Closest String (p=infty) problem. Our main result shows that the problem is NP-hard for all fixed rational p > 1, closing the gap for all rational values of p between 1 and infty. Under standard complexity assumptions the reduction also implies that the problem has no 2^o(n+m)-time or 2^o(k^(p/(p+1)))-time algorithm, where m denotes the number of input strings and n denotes the length of each string, for any fixed p > 1. The first bound matches a straightforward brute-force algorithm. The second bound is tight in the sense that for each fixed epsilon > 0, we provide a 2^(k^(p/(p+1)+epsilon))-time algorithm. In the last part of the paper, we complement our hardness result by presenting a fixed-parameter algorithm and a factor-2 approximation algorithm for the problem.
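
The objective is easy to state in code; the brute-force search below is essentially the 2^O(n+m)-time baseline that the lower bound shows cannot be improved substantially for fixed rational p > 1.

    from itertools import product

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def centroid_cost(center, strings, p):
        # ( sum_s d(center, s)^p )^(1/p): the p-norm of the distance vector.
        return sum(hamming(center, s) ** p for s in strings) ** (1.0 / p)

    def best_centroid(strings, p, alphabet="01"):
        # Enumerate all candidate centers of the right length.
        n = len(strings[0])
        return min(product(alphabet, repeat=n),
                   key=lambda c: centroid_cost(c, strings, p))

For p = 1 the coordinate-wise plurality symbol is optimal (the objective decomposes over columns), which is why Consensus String is polynomial-time solvable.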

Cite as

Jiehua Chen, Danny Hermelin, and Manuel Sorge. On Computing Centroids According to the p-Norms of Hamming Distance Vectors. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 28:1-28:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ESA.2019.28,
  author =	{Chen, Jiehua and Hermelin, Danny and Sorge, Manuel},
  title =	{{On Computing Centroids According to the p-Norms of Hamming Distance Vectors}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{28:1--28:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.28},
  URN =		{urn:nbn:de:0030-drops-111495},
  doi =		{10.4230/LIPIcs.ESA.2019.28},
  annote =	{Keywords: Strings, Clustering, Multiwinner Election, Hamming Distance}
}
Document
Non-Cooperative Rational Interactive Proofs

Authors: Jing Chen, Samuel McCauley, and Shikha Singh


Abstract
Interactive-proof games model the scenario where an honest party interacts with powerful but strategic provers, to elicit from them the correct answer to a computational question. Interactive proofs are increasingly used as a framework to design protocols for computation outsourcing. Existing interactive-proof games largely fall into two categories: either as games of cooperation such as multi-prover interactive proofs and cooperative rational proofs, where the provers work together as a team; or as games of conflict such as refereed games, where the provers directly compete with each other in a zero-sum game. Neither of these extremes truly capture the strategic nature of service providers in outsourcing applications. How to design and analyze non-cooperative interactive proofs is an important open problem. In this paper, we introduce a mechanism-design approach to define a multi-prover interactive-proof model in which the provers are rational and non-cooperative - they act to maximize their expected utility given others' strategies. We define a strong notion of backwards induction as our solution concept to analyze the resulting extensive-form game with imperfect information. We fully characterize the complexity of our proof system under different utility gap guarantees. (At a high level, a utility gap of u means that the protocol is robust against provers that may not care about a utility loss of 1/u.) We show, for example, that the power of non-cooperative rational interactive proofs with a polynomial utility gap is exactly equal to the complexity class P^{NEXP}.

Cite as

Jing Chen, Samuel McCauley, and Shikha Singh. Non-Cooperative Rational Interactive Proofs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ESA.2019.29,
  author =	{Chen, Jing and McCauley, Samuel and Singh, Shikha},
  title =	{{Non-Cooperative Rational Interactive Proofs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.29},
  URN =		{urn:nbn:de:0030-drops-111508},
  doi =		{10.4230/LIPIcs.ESA.2019.29},
  annote =	{Keywords: non-cooperative game theory, extensive-form games with imperfect information, refined sequential equilibrium, rational proofs, interactive proofs}
}
Document
Stronger ILPs for the Graph Genus Problem

Authors: Markus Chimani and Tilo Wiedera


Abstract
The minimum genus of a graph is an important question in graph theory and a key ingredient in several graph algorithms. However, its computation is NP-hard and turns out to be hard even in practice. Only recently, the first non-trivial approach - based on SAT and ILP (integer linear programming) models - has been presented, but it is unable to successfully tackle graphs of genus larger than 1 in practice. Herein, we show how to improve the ILP formulation. The crucial ingredients are two-fold. First, we show that instead of modeling rotation schemes explicitly, it suffices to optimize over partitions of the (bidirected) arc set A of the graph. Second, we exploit the cycle structure of the graph, explicitly mapping short closed walks on A to faces in the embedding. Besides the theoretical advantages of our models, we show their practical strength by a thorough experimental evaluation. Contrary to the previous approach, we are able to quickly solve many instances of genus > 1.
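
The face-based view rests on Euler's formula for a connected graph embedded on an orientable surface of genus g:

    V - E + F \;=\; 2 - 2g
    \quad\Longleftrightarrow\quad
    g \;=\; \frac{2 - V + E - F}{2}.

Since V and E are fixed by the input, minimizing the genus amounts to maximizing the number F of faces, which is what mapping short closed walks on the arc set to embedding faces turns into an ILP objective.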

Cite as

Markus Chimani and Tilo Wiedera. Stronger ILPs for the Graph Genus Problem. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 30:1-30:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chimani_et_al:LIPIcs.ESA.2019.30,
  author =	{Chimani, Markus and Wiedera, Tilo},
  title =	{{Stronger ILPs for the Graph Genus Problem}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{30:1--30:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.30},
  URN =		{urn:nbn:de:0030-drops-111511},
  doi =		{10.4230/LIPIcs.ESA.2019.30},
  annote =	{Keywords: algorithm engineering, genus, integer linear programming}
}
Document
Complexity of C_k-Coloring in Hereditary Classes of Graphs

Authors: Maria Chudnovsky, Shenwei Huang, Paweł Rzążewski, Sophie Spirkl, and Mingxian Zhong


Abstract
For a graph F, a graph G is F-free if it does not contain an induced subgraph isomorphic to F. For two graphs G and H, an H-coloring of G is a mapping f:V(G) -> V(H) such that for every edge uv in E(G) it holds that f(u)f(v)in E(H). We are interested in the complexity of the problem H-Coloring, which asks for the existence of an H-coloring of an input graph G. In particular, we consider H-Coloring of F-free graphs, where F is a fixed graph and H is an odd cycle of length at least 5. This problem is closely related to the well known open problem of determining the complexity of 3-Coloring of P_t-free graphs. We show that for every odd k >= 5 the C_k-Coloring problem, even in the precoloring-extension variant, can be solved in polynomial time in P_9-free graphs. On the other hand, we prove that the extension version of C_k-Coloring is NP-complete for F-free graphs whenever some component of F is not a subgraph of a subdivided claw.
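
The homomorphism condition is a one-liner to verify; for C_k-Coloring the target H is a k-cycle. A small sketch (the adjacency-set encoding is ours):

    def cycle(k):
        # C_k as a dict: vertex i is adjacent to its two cyclic neighbours.
        return {i: {(i - 1) % k, (i + 1) % k} for i in range(k)}

    def is_h_coloring(G, H, f):
        # f: dict V(G) -> V(H); valid iff every edge of G maps to an edge of H.
        return all(f[v] in H[f[u]] for u in G for v in G[u])

Here is_h_coloring(G, cycle(5), f) checks a C_5-coloring; the precoloring-extension variant additionally fixes f on a subset of V(G).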

Cite as

Maria Chudnovsky, Shenwei Huang, Paweł Rzążewski, Sophie Spirkl, and Mingxian Zhong. Complexity of C_k-Coloring in Hereditary Classes of Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 31:1-31:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chudnovsky_et_al:LIPIcs.ESA.2019.31,
  author =	{Chudnovsky, Maria and Huang, Shenwei and Rz\k{a}\.{z}ewski, Pawe{\l} and Spirkl, Sophie and Zhong, Mingxian},
  title =	{{Complexity of C\underline{k}-Coloring in Hereditary Classes of Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{31:1--31:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.31},
  URN =		{urn:nbn:de:0030-drops-111529},
  doi =		{10.4230/LIPIcs.ESA.2019.31},
  annote =	{Keywords: homomorphism, hereditary class, computational complexity, forbidden induced subgraph}
}
Document
Consistent Digital Curved Rays and Pseudoline Arrangements

Authors: Jinhee Chun, Kenya Kikuchi, and Takeshi Tokuyama


Abstract
Representing a family of geometric objects in the digital world, where each object is represented by a set of pixels, is a basic problem in graphics and computational geometry. One important criterion is consistency: the intersection pattern of the objects should be consistent with the axioms of Euclidean geometry, e.g., the intersection of two lines should be a single connected component. Previously, the set of linear rays and segments has been considered. In this paper, we extend this theory to families of curved rays going through the origin. We further consider some pseudoline arrangements obtained as unions of such families of rays.

Cite as

Jinhee Chun, Kenya Kikuchi, and Takeshi Tokuyama. Consistent Digital Curved Rays and Pseudoline Arrangements. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 32:1-32:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chun_et_al:LIPIcs.ESA.2019.32,
  author =	{Chun, Jinhee and Kikuchi, Kenya and Tokuyama, Takeshi},
  title =	{{Consistent Digital Curved Rays and Pseudoline Arrangements}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{32:1--32:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.32},
  URN =		{urn:nbn:de:0030-drops-111538},
  doi =		{10.4230/LIPIcs.ESA.2019.32},
  annote =	{Keywords: Computational Geometry, Digital Geometry, Spanning Tree, Graph Drawing}
}
Document
Efficient Approximation Schemes for Uniform-Cost Clustering Problems in Planar Graphs

Authors: Vincent Cohen-Addad, Marcin Pilipczuk, and Michał Pilipczuk


Abstract
We consider the k-Median problem on planar graphs: given an edge-weighted planar graph G, a set of clients C subseteq V(G), a set of facilities F subseteq V(G), and an integer parameter k, the task is to find a set of at most k facilities whose opening minimizes the total connection cost of clients, where each client contributes to the cost with the distance to the closest open facility. We give two new approximation schemes for this problem: - FPT Approximation Scheme: for any epsilon>0, in time 2^{O(k epsilon^{-3} log (k epsilon^{-1}))} * n^O(1) we can compute a solution that has connection cost at most (1+epsilon) times the optimum, with high probability. - Efficient Bicriteria Approximation Scheme: for any epsilon>0, in time 2^{O(epsilon^{-5} log (epsilon^{-1}))} * n^O(1) we can compute a set of at most (1+epsilon)k facilities whose opening yields connection cost at most (1+epsilon) times the optimum connection cost for opening at most k facilities, with high probability. As a direct corollary of the second result we obtain an EPTAS for Uniform Facility Location on planar graphs, with the same running time. Our main technical tool is a new construction of a "coreset for facilities" for k-Median in planar graphs: we show that in polynomial time one can compute a subset of facilities F_0 subseteq F of size k * (log n/epsilon)^O(epsilon^{-3}) with a guarantee that there is a (1+epsilon)-approximate solution contained in F_0.
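
The objective being approximated is a multi-source shortest-path computation; a minimal sketch using networkx (assuming every client can reach some open facility):

    import networkx as nx

    def connection_cost(G, clients, open_facilities, weight="weight"):
        # Each client pays its shortest-path distance to the nearest open
        # facility; summing over clients gives the k-Median objective.
        dist = nx.multi_source_dijkstra_path_length(
            G, set(open_facilities), weight=weight)
        return sum(dist[c] for c in clients)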

Cite as

Vincent Cohen-Addad, Marcin Pilipczuk, and Michał Pilipczuk. Efficient Approximation Schemes for Uniform-Cost Clustering Problems in Planar Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{cohenaddad_et_al:LIPIcs.ESA.2019.33,
  author =	{Cohen-Addad, Vincent and Pilipczuk, Marcin and Pilipczuk, Micha{\l}},
  title =	{{Efficient Approximation Schemes for Uniform-Cost Clustering Problems in Planar Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.33},
  URN =		{urn:nbn:de:0030-drops-111543},
  doi =		{10.4230/LIPIcs.ESA.2019.33},
  annote =	{Keywords: k-Median, Facility Location, Planar Graphs, Approximation Scheme}
}
Document
Improved Bounds for the Excluded-Minor Approximation of Treedepth

Authors: Wojciech Czerwiński, Wojciech Nadara, and Marcin Pilipczuk


Abstract
Treedepth, a more restrictive graph width parameter than treewidth and pathwidth, plays a major role in the theory of sparse graph classes. We show that there exists a constant C such that for all integers a, b >= 2 and every graph G, if the treedepth of G is at least Cab log a, then the treewidth of G is at least a or G contains a subcubic (i.e., of maximum degree at most 3) tree of treedepth at least b as a subgraph. As a direct corollary, we obtain that every graph of treedepth Omega(k^3 log k) is either of treewidth at least k, contains a subdivision of the full binary tree of depth k, or contains a path of length 2^k. This improves the bound of Omega(k^5 log^2 k) of Kawarabayashi and Rossman [SODA 2018]. We also show an application for approximation algorithms of treedepth: given a graph G of treedepth k and treewidth t, one can in polynomial time compute a treedepth decomposition of G of width O(kt log^{3/2} t). This improves upon a bound of O(kt^2 log t) stemming from a tradeoff between known results. The main technical ingredient in our result is a proof that every tree of treedepth d contains a subcubic subtree of treedepth at least d * log_3 ((1+sqrt{5})/2).
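
For reference, treedepth is defined by the recursion implemented below (exponential time, fine for sanity checks on small graphs; the plain adjacency-dict encoding is ours):

    def induced(G, S):
        return {v: {u for u in G[v] if u in S} for v in S}

    def components(G):
        seen, comps = set(), []
        for s in G:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(G[v])
            seen |= comp
            comps.append(comp)
        return comps

    def treedepth(G):
        # td(empty) = 0; td(K_1) = 1; max over connected components;
        # otherwise delete the best root: td(G) = 1 + min_v td(G - v).
        if not G:
            return 0
        if len(G) == 1:
            return 1
        comps = components(G)
        if len(comps) > 1:
            return max(treedepth(induced(G, c)) for c in comps)
        return 1 + min(treedepth(induced(G, set(G) - {v})) for v in G)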

Cite as

Wojciech Czerwiński, Wojciech Nadara, and Marcin Pilipczuk. Improved Bounds for the Excluded-Minor Approximation of Treedepth. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 34:1-34:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{czerwinski_et_al:LIPIcs.ESA.2019.34,
  author =	{Czerwi\'{n}ski, Wojciech and Nadara, Wojciech and Pilipczuk, Marcin},
  title =	{{Improved Bounds for the Excluded-Minor Approximation of Treedepth}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{34:1--34:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.34},
  URN =		{urn:nbn:de:0030-drops-111557},
  doi =		{10.4230/LIPIcs.ESA.2019.34},
  annote =	{Keywords: treedepth, excluded minor}
}
Document
Building a Nest by an Automaton

Authors: Jurek Czyzowicz, Dariusz Dereniowski, and Andrzej Pelc


Abstract
A robot modeled as a deterministic finite automaton has to build a structure from material available to it. The robot navigates in the infinite oriented grid Z x Z. Some cells of the grid are full (contain a brick) and others are empty. The subgraph of the grid induced by full cells, called the field, is initially connected. The (Manhattan) distance between the farthest cells of the field is called its span. The robot starts at a full cell. It can carry at most one brick at a time. At each step it can pick a brick from a full cell, move to an adjacent cell and drop a brick at an empty cell. The aim of the robot is to construct the most compact possible structure composed of all bricks, i.e., a nest. That is, the robot has to move all bricks in such a way that the span of the resulting field is as small as possible. Our main result is the design of a deterministic finite automaton that accomplishes this task and subsequently stops, for every initially connected field, in time O(sz), where s is the span of the initial field and z is the number of bricks. We show that this complexity is optimal.

Cite as

Jurek Czyzowicz, Dariusz Dereniowski, and Andrzej Pelc. Building a Nest by an Automaton. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{czyzowicz_et_al:LIPIcs.ESA.2019.35,
  author =	{Czyzowicz, Jurek and Dereniowski, Dariusz and Pelc, Andrzej},
  title =	{{Building a Nest by an Automaton}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.35},
  URN =		{urn:nbn:de:0030-drops-111564},
  doi =		{10.4230/LIPIcs.ESA.2019.35},
  annote =	{Keywords: finite automaton, plane, grid, construction task, brick, mobile agent, robot}
}
Document
Robustness of Randomized Rumour Spreading

Authors: Rami Daknama, Konstantinos Panagiotou, and Simon Reisser


Abstract
In this work we consider three well-studied broadcast protocols: push, pull and push&pull. A key property of all these models, which is also an important reason for their popularity, is that they are presumed to be very robust, since they are simple, randomized, and, crucially, do not utilize explicitly the global structure of the underlying graph. While sporadic results exist, there has been no systematic theoretical treatment quantifying the robustness of these models. Here we investigate this question with respect to two orthogonal aspects: (adversarial) modifications of the underlying graph and message transmission failures. We explore in particular the following notion of local resilience: beginning with a graph, we investigate up to which fraction of the edges an adversary may delete at each vertex, so that the protocols need significantly more rounds to broadcast the information. Our main findings establish a separation among the three models. On the one hand, pull is robust with respect to all parameters that we consider. On the other hand, push may slow down significantly, even if the adversary is allowed to modify the degrees of the vertices by an arbitrarily small positive fraction only. Finally, push&pull is robust when no message transmission failures are considered, otherwise it may be slowed down. On the technical side, we develop two novel methods for the analysis of randomized rumour spreading protocols. First, we exploit the notion of self-bounding functions to facilitate significantly the round-based analysis: we show that for any graph the variance of the growth of informed vertices is bounded by its expectation, so that concentration results follow immediately. Second, in order to control adversarial modifications of the graph we make use of a powerful tool from extremal graph theory, namely Szemerédi’s Regularity Lemma.
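
For intuition, the push protocol takes a few lines to simulate (assuming a connected graph given as an adjacency dict); pull and push&pull differ only in which side samples the random neighbour.

    import random

    def push_rounds(G, source, seed=0):
        # Each round, every informed vertex pushes the rumour to one
        # uniformly random neighbour; count rounds until all are informed.
        rng = random.Random(seed)
        informed = {source}
        rounds = 0
        while len(informed) < len(G):
            newly = {rng.choice(list(G[v])) for v in informed}
            informed |= newly
            rounds += 1
        return rounds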

Cite as

Rami Daknama, Konstantinos Panagiotou, and Simon Reisser. Robustness of Randomized Rumour Spreading. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{daknama_et_al:LIPIcs.ESA.2019.36,
  author =	{Daknama, Rami and Panagiotou, Konstantinos and Reisser, Simon},
  title =	{{Robustness of Randomized Rumour Spreading}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.36},
  URN =		{urn:nbn:de:0030-drops-111571},
  doi =		{10.4230/LIPIcs.ESA.2019.36},
  annote =	{Keywords: Rumour Spreading, Local Resilience, Robustness, Self-bounding Functions, Szemer\'{e}di’s Regularity Lemma}
}
Document
Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class

Authors: Erik D. Demaine, Timothy D. Goodrich, Kyle Kloster, Brian Lavallee, Quanquan C. Liu, Blair D. Sullivan, Ali Vakilian, and Andrew van der Poel


Abstract
We develop a framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to that class (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to edit a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then lift the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, (l-)Dominating Set, Edge (l-)Dominating Set, and Connected Dominating Set. To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of a few important graph classes (in some cases these are bicriteria algorithms which simultaneously approximate both the number of editing operations and the target parameter of the family). For bounded degeneracy, we obtain an O(r log{n})-approximation and a bicriteria (4,4)-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w}))-approximation, and for bounded pathwidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w} * log n))-approximation. For treedepth 2 (related to bounded expansion), we obtain a 4-approximation. We also prove complementary hardness-of-approximation results assuming P != NP: in particular, these problems are all log-factor inapproximable, except the last which is not approximable below some constant factor 2 (assuming UGC).
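
The framework itself is a three-step pipeline; the skeleton below just names the steps (all three function arguments are hypothetical placeholders for the editing algorithm, the in-class approximation, and the lifting step analysed in the paper).

    def structural_rounding(G, edit_to_class, solve_in_class, lift):
        # 1. Edit G (vertex/edge deletions) into the tractable class.
        edited, deletions = edit_to_class(G)
        # 2. Run the known approximation algorithm inside the class.
        partial_solution = solve_in_class(edited)
        # 3. Lift the solution back to G, paying for the deletions.
        return lift(G, partial_solution, deletions)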

Cite as

Erik D. Demaine, Timothy D. Goodrich, Kyle Kloster, Brian Lavallee, Quanquan C. Liu, Blair D. Sullivan, Ali Vakilian, and Andrew van der Poel. Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{demaine_et_al:LIPIcs.ESA.2019.37,
  author =	{Demaine, Erik D. and Goodrich, Timothy D. and Kloster, Kyle and Lavallee, Brian and Liu, Quanquan C. and Sullivan, Blair D. and Vakilian, Ali and van der Poel, Andrew},
  title =	{{Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.37},
  URN =		{urn:nbn:de:0030-drops-111583},
  doi =		{10.4230/LIPIcs.ESA.2019.37},
  annote =	{Keywords: structural rounding, graph editing, approximation algorithms}
}
Document
Dense Peelable Random Uniform Hypergraphs

Authors: Martin Dietzfelbinger and Stefan Walzer


Abstract
We describe a new family of k-uniform hypergraphs with independent random edges. The hypergraphs have a high probability of being peelable, i.e. to admit no sub-hypergraph of minimum degree 2, even when the edge density (number of edges over vertices) is close to 1. In our construction, the vertex set is partitioned into linearly arranged segments and each edge is incident to random vertices of k consecutive segments. Quite surprisingly, the linear geometry allows our graphs to be peeled “from the outside in”. The density thresholds f_k for peelability of our hypergraphs (f_3 ≈ 0.918, f_4 ≈ 0.977, f_5 ≈ 0.992, …) are well beyond the corresponding thresholds (c_3 ≈ 0.818, c_4 ≈ 0.772, c_5 ≈ 0.702, …) of standard k-uniform random hypergraphs. To get a grip on f_k, we analyse an idealised peeling process on the random weak limit of our hypergraph family. The process can be described in terms of an operator on [0,1]^ℤ and f_k can be linked to thresholds relating to the operator. These thresholds are then tractable with numerical methods. Random hypergraphs underlie the construction of various data structures based on hashing, for instance invertible Bloom filters, perfect hash functions, retrieval data structures, error correcting codes and cuckoo hash tables, where inputs are mapped to edges using hash functions. Frequently, the data structures rely on peelability of the hypergraph or peelability allows for simple linear time algorithms. Memory efficiency is closely tied to edge density while worst and average case query times are tied to maximum and average edge size. To demonstrate the usefulness of our construction, we used our 3-uniform hypergraphs as a drop-in replacement for the standard 3-uniform hypergraphs in a retrieval data structure by Botelho et al. [Fabiano Cupertino Botelho et al., 2013]. This reduces memory usage from 1.23m bits to 1.12m bits (m being the input size) with almost no change in running time. Using k > 3 attains, at small sacrifices in running time, further improvements to memory usage.
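
Peelability is an operational property: repeatedly remove a vertex of degree at most 1 together with its incident edge, if any. The sketch below returns the peeling order and whether the 2-core is empty, which is the success condition for the hashing-based data structures mentioned above.

    def peel(num_vertices, edges):
        # edges: list of vertex tuples (one tuple per hyperedge).
        incident = [[] for _ in range(num_vertices)]
        for e, verts in enumerate(edges):
            for v in verts:
                incident[v].append(e)
        deg = [len(es) for es in incident]
        alive = [True] * len(edges)
        stack = [v for v in range(num_vertices) if deg[v] == 1]
        order = []
        while stack:
            v = stack.pop()
            if deg[v] != 1:
                continue  # its last edge was already peeled
            e = next(i for i in incident[v] if alive[i])
            alive[e] = False
            order.append((v, e))
            for u in edges[e]:
                deg[u] -= 1
                if deg[u] == 1:
                    stack.append(u)
        return order, not any(alive)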

Cite as

Martin Dietzfelbinger and Stefan Walzer. Dense Peelable Random Uniform Hypergraphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{dietzfelbinger_et_al:LIPIcs.ESA.2019.38,
  author =	{Dietzfelbinger, Martin and Walzer, Stefan},
  title =	{{Dense Peelable Random Uniform Hypergraphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.38},
  URN =		{urn:nbn:de:0030-drops-111599},
  doi =		{10.4230/LIPIcs.ESA.2019.38},
  annote =	{Keywords: Random Hypergraphs, Peeling Threshold, 2-Core, Hashing, Retrieval, Succinct Data Structure}
}
Document
Efficient Gauss Elimination for Near-Quadratic Matrices with One Short Random Block per Row, with Applications

Authors: Martin Dietzfelbinger and Stefan Walzer


Abstract
In this paper we identify a new class of sparse near-quadratic random Boolean matrices that have full row rank over F_2 = {0,1} with high probability and can be transformed into echelon form in almost linear time by a simple version of Gauss elimination. The random matrix with dimensions n(1-epsilon) x n is generated as follows: in each row, a block of length L = O((log n)/epsilon) is placed at a random position. The entries outside the block are 0; the entries inside the block are given by fair coin tosses. Sorting the rows according to the positions of the blocks transforms the matrix into a kind of band matrix, on which, as it turns out, Gauss elimination works very efficiently with high probability. For the proof, the effects of Gauss elimination are interpreted as a ("coin-flipping") variant of Robin Hood hashing, whose behaviour can be captured in terms of a simple Markov model from queuing theory. Bounds for expected construction time and high success probability follow from results in this area. They readily extend to larger finite fields in place of F_2. By employing hashing, this matrix family leads to a new implementation of a retrieval data structure, which represents an arbitrary function f: S -> {0,1} for some set S of m = (1-epsilon)n keys. It requires m/(1-epsilon) bits of space, construction takes O(m/epsilon^2) expected time on a word RAM, while queries take O(1/epsilon) time and access only one contiguous segment of O((log m)/epsilon) bits in the representation (O(1/epsilon) consecutive words on a word RAM). The method is readily implemented and highly practical, and it is competitive with state-of-the-art methods. In a more theoretical variant, which works only for unrealistically large S, we can even achieve construction time O(m/epsilon) and query time O(1), accessing O(1) contiguous memory words for a query. By well-established methods, the retrieval data structure leads to efficient constructions of (static) perfect hash functions and (static) Bloom filters with almost optimal space and very local storage access patterns for queries.
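
The elimination step can be pictured on rows stored as integer bitmasks. The sketch below is a generic echelon-form routine over F_2 (assuming rows are given as Python ints with bit j encoding column j); sorting the rows first loosely mirrors the sort-by-block-position step, though the paper's efficiency guarantees come from its randomness assumptions, not from this code:

def echelon_gf2(rows):
    """Try to bring rows (Python ints over F_2) into echelon form.
    Returns (pivot_of, echelon_rows) if the rows are linearly independent,
    or None if some row reduces to zero (rank deficiency)."""
    pivot_of = {}                         # column -> index into `out`
    out = []
    for r in sorted(rows, reverse=True):  # roughly: leading blocks first
        while r:
            col = r.bit_length() - 1      # position of the leading 1
            if col not in pivot_of:
                pivot_of[col] = len(out)
                out.append(r)
                break
            r ^= out[pivot_of[col]]       # cancel the leading 1 (XOR = add in F_2)
        else:
            return None                   # row became zero
    return pivot_of, out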

Cite as

Martin Dietzfelbinger and Stefan Walzer. Efficient Gauss Elimination for Near-Quadratic Matrices with One Short Random Block per Row, with Applications. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 39:1-39:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{dietzfelbinger_et_al:LIPIcs.ESA.2019.39,
  author =	{Dietzfelbinger, Martin and Walzer, Stefan},
  title =	{{Efficient Gauss Elimination for Near-Quadratic Matrices with One Short Random Block per Row, with Applications}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{39:1--39:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.39},
  URN =		{urn:nbn:de:0030-drops-111602},
  doi =		{10.4230/LIPIcs.ESA.2019.39},
  annote =	{Keywords: Random Band Matrix, Gauss Elimination, Retrieval, Hashing, Succinct Data Structure, Randomised Data Structure, Robin Hood Hashing, Bloom Filter}
}
Document
Greedy Strategy Works for k-Center Clustering with Outliers and Coreset Construction

Authors: Hu Ding, Haikuo Yu, and Zixiu Wang


Abstract
We study the problem of k-center clustering with outliers in arbitrary metrics and Euclidean space. Though a number of methods have been developed in the past decades, it is still quite challenging to design a quality-guaranteed algorithm with low complexity for this problem. Our idea is inspired by the greedy method, Gonzalez’s algorithm, for solving the problem of ordinary k-center clustering. Based on some novel observations, we show that this greedy strategy can in fact handle k-center clustering with outliers efficiently, in terms of both clustering quality and time complexity. We further show that the greedy approach yields a small coreset for the problem in doubling metrics, which reduces the time complexity significantly. Our algorithms are easy to implement in practice. We test our method on both synthetic and real datasets. The experimental results suggest that our algorithms can achieve near-optimal solutions with lower running times compared to existing methods.
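
The greedy baseline referenced here is Gonzalez's farthest-point algorithm for ordinary k-center (a 2-approximation); the paper's observations extend this idea to outliers and coresets. A minimal Python sketch of the baseline only:

import math

def gonzalez_k_center(points, k):
    """Gonzalez's farthest-point greedy: a 2-approximation for ordinary
    k-center. Repeatedly open the point farthest from the chosen centers."""
    centers = [points[0]]
    d = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=d.__getitem__)
        centers.append(points[i])
        d = [min(d[j], math.dist(points[j], points[i]))
             for j in range(len(points))]
    return centers, max(d)    # chosen centers and the covering radius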

Cite as

Hu Ding, Haikuo Yu, and Zixiu Wang. Greedy Strategy Works for k-Center Clustering with Outliers and Coreset Construction. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{ding_et_al:LIPIcs.ESA.2019.40,
  author =	{Ding, Hu and Yu, Haikuo and Wang, Zixiu},
  title =	{{Greedy Strategy Works for k-Center Clustering with Outliers and Coreset Construction}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.40},
  URN =		{urn:nbn:de:0030-drops-111613},
  doi =		{10.4230/LIPIcs.ESA.2019.40},
  annote =	{Keywords: k-center clustering, outliers, coreset, doubling metrics, random sampling}
}
Document
Bidirectional Text Compression in External Memory

Authors: Patrick Dinklage, Jonas Ellert, Johannes Fischer, Dominik Köppl, and Manuel Penschuck


Abstract
Bidirectional compression algorithms work by substituting repeated substrings by references that, unlike in the famous LZ77-scheme, can point to either direction. We present such an algorithm that is particularly suited for an external memory implementation. We evaluate it experimentally on large data sets of size up to 128 GiB (using only 16 GiB of RAM) and show that it is significantly faster than all known LZ77 compressors, while producing a roughly similar number of factors. We also introduce an external memory decompressor for texts compressed with any uni- or bidirectional compression scheme.

Cite as

Patrick Dinklage, Jonas Ellert, Johannes Fischer, Dominik Köppl, and Manuel Penschuck. Bidirectional Text Compression in External Memory. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{dinklage_et_al:LIPIcs.ESA.2019.41,
  author =	{Dinklage, Patrick and Ellert, Jonas and Fischer, Johannes and K\"{o}ppl, Dominik and Penschuck, Manuel},
  title =	{{Bidirectional Text Compression in External Memory}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.41},
  URN =		{urn:nbn:de:0030-drops-111624},
  doi =		{10.4230/LIPIcs.ESA.2019.41},
  annote =	{Keywords: text compression, bidirectional parsing, text decompression, external algorithms}
}
Document
Bisection of Bounded Treewidth Graphs by Convolutions

Authors: Eduard Eiben, Daniel Lokshtanov, and Amer E. Mouawad


Abstract
In the Bisection problem, we are given as input an edge-weighted graph G. The task is to find a partition of V(G) into two parts A and B such that ||A| - |B|| <= 1 and the sum of the weights of the edges with one endpoint in A and the other in B is minimized. We show that the complexity of the Bisection problem on trees, and more generally on graphs of bounded treewidth, is intimately linked to the (min, +)-Convolution problem. Here the input consists of two sequences (a[i])^{n-1}_{i = 0} and (b[i])^{n-1}_{i = 0}, and the task is to compute the sequence (c[i])^{n-1}_{i = 0}, where c[k] = min_{i=0,...,k}(a[i] + b[k - i]). In particular, we prove that if (min, +)-Convolution can be solved in O(tau(n)) time, then Bisection of graphs of treewidth t can be solved in time O(8^t t^{O(1)} log n * tau(n)), assuming a tree decomposition of width t is provided as input. Plugging in the naive O(n^2) time algorithm for (min, +)-Convolution yields an O(8^t t^{O(1)} n^2 log n) time algorithm for Bisection. This improves over the (dependence on n of the) O(2^t n^3) time algorithm of Jansen et al. [SICOMP 2005] at the cost of a worse dependence on t. "Conversely", we show that if Bisection can be solved in time O(beta(n)) on edge-weighted trees, then (min, +)-Convolution can be solved in O(beta(n)) time as well. Thus, obtaining a sub-quadratic algorithm for Bisection on trees is extremely challenging, and could even be impossible. On the other hand, for unweighted graphs of treewidth t, by making use of a recent algorithm for Bounded Difference (min, +)-Convolution of Chan and Lewenstein [STOC 2015], we obtain a sub-quadratic algorithm for Bisection with running time O(8^t t^{O(1)} n^{1.864} log n).
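
For reference, the naive quadratic (min, +)-convolution from the abstract is a few lines; any O(tau(n))-time implementation of the same primitive can be substituted into the treewidth reduction. A Python sketch:

def min_plus_convolution(a, b):
    """Naive O(n^2) (min,+)-convolution of two equal-length sequences:
    c[k] = min over i = 0..k of a[i] + b[k-i]."""
    n = len(a)
    assert len(b) == n
    return [min(a[i] + b[k - i] for i in range(k + 1)) for k in range(n)]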

Cite as

Eduard Eiben, Daniel Lokshtanov, and Amer E. Mouawad. Bisection of Bounded Treewidth Graphs by Convolutions. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 42:1-42:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{eiben_et_al:LIPIcs.ESA.2019.42,
  author =	{Eiben, Eduard and Lokshtanov, Daniel and Mouawad, Amer E.},
  title =	{{Bisection of Bounded Treewidth Graphs by Convolutions}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{42:1--42:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.42},
  URN =		{urn:nbn:de:0030-drops-111639},
  doi =		{10.4230/LIPIcs.ESA.2019.42},
  annote =	{Keywords: bisection, convolution, treewidth, fine-grained analysis, hardness in P}
}
Document
Oracle-Based Primal-Dual Algorithms for Packing and Covering Semidefinite Programs

Authors: Khaled Elbassioni and Kazuhisa Makino


Abstract
Packing and covering semidefinite programs (SDPs) appear in natural relaxations of many combinatorial optimization problems, as well as in a number of other applications. Recently, several techniques have been proposed that utilize the particular structure of this class of problems to obtain more efficient algorithms than those offered by general SDP solvers. For certain applications, such as those described in this paper, it may be required to deal with SDPs with exponentially or infinitely many constraints, which are accessible only via an oracle. In this paper, we give an efficient primal-dual algorithm to solve the problem in this case, which is an extension of a logarithmic-potential-based algorithm of Grigoriadis, Khachiyan, Porkolab and Villavicencio (SIAM Journal on Optimization 41 (2001)) for packing/covering linear programs.

Cite as

Khaled Elbassioni and Kazuhisa Makino. Oracle-Based Primal-Dual Algorithms for Packing and Covering Semidefinite Programs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 43:1-43:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{elbassioni_et_al:LIPIcs.ESA.2019.43,
  author =	{Elbassioni, Khaled and Makino, Kazuhisa},
  title =	{{Oracle-Based Primal-Dual Algorithms for Packing and Covering Semidefinite Programs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{43:1--43:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.43},
  URN =		{urn:nbn:de:0030-drops-111642},
  doi =		{10.4230/LIPIcs.ESA.2019.43},
  annote =	{Keywords: Semidefinite programs, packing and covering, logarithmic potential, primal-dual algorithms, approximate solutions}
}
Document
Online Disjoint Set Cover Without Prior Knowledge

Authors: Yuval Emek, Adam Goldbraikh, and Erez Kantor


Abstract
The disjoint set cover (DSC) problem is a fundamental combinatorial optimization problem concerned with partitioning the (hyper)edges of a hypergraph into (pairwise disjoint) clusters so that the number of clusters that cover all nodes is maximized. In its online version, the edges arrive one-by-one and should be assigned to clusters in an irrevocable fashion without knowing the future edges. This paper investigates the competitiveness of online DSC algorithms. Specifically, we develop the first (randomized) online DSC algorithm that guarantees a poly-logarithmic (O(log^{2} n)) competitive ratio without prior knowledge of the hypergraph’s minimum degree. On the negative side, we prove that the competitive ratio of any randomized online DSC algorithm must be at least Omega((log n)/(log log n)) (even if the online algorithm does know the minimum degree in advance), thus establishing the first lower bound on the competitive ratio of randomized online DSC algorithms.

Cite as

Yuval Emek, Adam Goldbraikh, and Erez Kantor. Online Disjoint Set Cover Without Prior Knowledge. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 44:1-44:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{emek_et_al:LIPIcs.ESA.2019.44,
  author =	{Emek, Yuval and Goldbraikh, Adam and Kantor, Erez},
  title =	{{Online Disjoint Set Cover Without Prior Knowledge}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{44:1--44:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.44},
  URN =		{urn:nbn:de:0030-drops-111654},
  doi =		{10.4230/LIPIcs.ESA.2019.44},
  annote =	{Keywords: disjoint set cover, online algorithms, competitive analysis, competitiveness with high probability}
}
Document
Bayesian Generalized Network Design

Authors: Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi


Abstract
We study network coordination problems, as captured by the setting of generalized network design (Emek et al., STOC 2018), in the face of uncertainty resulting from partial information that the network users hold regarding the actions of their peers. This uncertainty is formalized using Alon et al.’s Bayesian ignorance framework (TCS 2012). While the approach of Alon et al. is purely combinatorial, the current paper takes into account computational considerations: Our main technical contribution is the development of (strongly) polynomial time algorithms for local decision making in the face of Bayesian uncertainty.

Cite as

Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi. Bayesian Generalized Network Design. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{emek_et_al:LIPIcs.ESA.2019.45,
  author =	{Emek, Yuval and Kutten, Shay and Lavi, Ron and Shi, Yangguang},
  title =	{{Bayesian Generalized Network Design}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.45},
  URN =		{urn:nbn:de:0030-drops-111660},
  doi =		{10.4230/LIPIcs.ESA.2019.45},
  annote =	{Keywords: approximation algorithms, Bayesian competitive ratio, Bayesian ignorance, generalized network design, diseconomies of scale, energy consumption, smoothness, best response dynamics}
}
Document
Obviously Strategyproof Mechanisms for Machine Scheduling

Authors: Diodato Ferraioli, Adrian Meier, Paolo Penna, and Carmine Ventre


Abstract
Catering to the incentives of people with limited rationality is a challenging research direction that requires novel paradigms to design mechanisms and approximation algorithms. Obviously strategyproof (OSP) mechanisms have recently emerged as the concept of interest to this research agenda. However, the majority of the literature in the area has either highlighted the shortcomings of OSP or focused on the "right" definition rather than on the construction of these mechanisms. We here give the first set of tight results on the approximation guarantee of OSP mechanisms for scheduling related machines. By extending the well-known cycle monotonicity technique, we are able to concentrate on the algorithmic component of OSP mechanisms and provide some novel paradigms for their design.

Cite as

Diodato Ferraioli, Adrian Meier, Paolo Penna, and Carmine Ventre. Obviously Strategyproof Mechanisms for Machine Scheduling. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{ferraioli_et_al:LIPIcs.ESA.2019.46,
  author =	{Ferraioli, Diodato and Meier, Adrian and Penna, Paolo and Ventre, Carmine},
  title =	{{Obviously Strategyproof Mechanisms for Machine Scheduling}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.46},
  URN =		{urn:nbn:de:0030-drops-111677},
  doi =		{10.4230/LIPIcs.ESA.2019.46},
  annote =	{Keywords: Bounded Rationality, Extensive-form Mechanisms, Approximate Mechanism Design}
}
Document
Going Far From Degeneracy

Authors: Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi


Abstract
An undirected graph G is d-degenerate if every subgraph of G has a vertex of degree at most d. By the classical theorem of Erdős and Gallai from 1959, every graph of degeneracy d>1 contains a cycle of length at least d+1. The proof of Erdős and Gallai is constructive and can be turned into a polynomial time algorithm constructing a cycle of length at least d+1. But can we decide in polynomial time whether a graph contains a cycle of length at least d+2? An easy reduction from Hamiltonian Cycle provides a negative answer to this question: deciding whether a graph has a cycle of length at least d+2 is NP-complete. Surprisingly, the complexity of the problem changes drastically when the input graph is 2-connected. In this case we prove that deciding whether G contains a cycle of length at least d+k can be done in time 2^{O(k)}|V(G)|^{O(1)}. In other words, deciding whether a 2-connected n-vertex graph G contains a cycle of length at least d + log n can be done in polynomial time. Similar algorithmic results hold for long paths in graphs. We observe that deciding whether a graph has a path of length at least d+1 is NP-complete. However, we prove that if a graph G is connected, then deciding whether G contains a path of length at least d+k can be done in time 2^{O(k)}n^{O(1)}. We complement these results by showing that the choice of degeneracy as the "above guarantee parameterization" is optimal in the following sense: for any epsilon>0 it is NP-complete to decide whether a connected (2-connected) graph of degeneracy d has a path (cycle) of length at least (1+epsilon)d.
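
As a reminder, the degeneracy d used as the baseline guarantee is itself easy to compute by repeatedly removing a minimum-degree vertex. A simple quadratic-time Python sketch (a bucket queue brings this to linear time):

def degeneracy(adj):
    """Degeneracy via the removal order: repeatedly delete a vertex of
    minimum degree; d is the largest degree seen at deletion time.
    adj: dict vertex -> set of neighbours. Quadratic for simplicity."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        for u in adj[v]:                              # remove v everywhere
            adj[u].discard(v)
        del adj[v]
    return d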

Cite as

Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi. Going Far From Degeneracy. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{fomin_et_al:LIPIcs.ESA.2019.47,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Lokshtanov, Daniel and Panolan, Fahad and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Going Far From Degeneracy}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.47},
  URN =		{urn:nbn:de:0030-drops-111688},
  doi =		{10.4230/LIPIcs.ESA.2019.47},
  annote =	{Keywords: Longest path, longest cycle, fixed-parameter tractability, above guarantee parameterization}
}
Document
Group Activity Selection with Few Agent Types

Authors: Robert Ganian, Sebastian Ordyniak, and C. S. Rahul


Abstract
The Group Activity Selection Problem (GASP) models situations where a group of agents needs to be distributed to a set of activities while taking into account preferences of the agents w.r.t. individual activities and activity sizes. The problem, along with its well-known variants sGASP and gGASP, has previously been studied in the parameterized complexity setting with various parameterizations, such as the number of agents, the number of activities, and the solution size. However, the complexity of the problem parameterized by the number of types of agents, a natural parameter proposed already in the first paper that introduced GASP, has so far remained unexplored. In this paper we establish the complexity map for GASP, sGASP and gGASP when the number of types of agents is the parameter. Our positive results, consisting of one fixed-parameter algorithm and one XP algorithm, rely on a combination of novel Subset Sum machinery (which may be of general interest) and identifying certain compression steps which allow us to focus on solutions that are "acyclic". These algorithms are complemented by matching lower bounds, which, among other things, close a gap to a recently obtained tractability result of Gupta, Roy, Saurabh and Zehavi (2017). In this direction, the techniques used to establish W[1]-hardness of sGASP are of particular interest: as an intermediate step, we use Sidon sequences to show the W[1]-hardness of a highly restricted variant of multi-dimensional Subset Sum, which may find applications in other settings as well.

Cite as

Robert Ganian, Sebastian Ordyniak, and C. S. Rahul. Group Activity Selection with Few Agent Types. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 48:1-48:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{ganian_et_al:LIPIcs.ESA.2019.48,
  author =	{Ganian, Robert and Ordyniak, Sebastian and Rahul, C. S.},
  title =	{{Group Activity Selection with Few Agent Types}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{48:1--48:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.48},
  URN =		{urn:nbn:de:0030-drops-111693},
  doi =		{10.4230/LIPIcs.ESA.2019.48},
  annote =	{Keywords: group activity selection problem, parameterized complexity analysis, multi-agent systems}
}
Document
Optimal Sorting with Persistent Comparison Errors

Authors: Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna


Abstract
We consider the problem of sorting n elements in the case of persistent comparison errors. In this problem, each comparison between two elements can be wrong with some fixed (small) probability p, and comparisons cannot be repeated (Braverman and Mossel, SODA'08). Sorting perfectly in this model is impossible, and the objective is to minimize the dislocation of each element in the output sequence, that is, the difference between its true rank and its position. Existing lower bounds for this problem show that no algorithm can guarantee, with high probability, maximum dislocation and total dislocation better than Omega(log n) and Omega(n), respectively, regardless of its running time. In this paper, we present the first O(n log n)-time sorting algorithm that guarantees both O(log n) maximum dislocation and O(n) total dislocation with high probability. This settles the time complexity of this problem and shows that comparison errors do not increase its computational difficulty: a sequence with the best possible dislocation can be obtained in O(n log n) time and, even without comparison errors, Omega(n log n) time is necessary to guarantee such dislocation bounds. In order to achieve this optimality result, we solve two sub-problems in the persistent comparison errors model, and the respective methods have their own merits for further application. One is how to locate a position in which to insert an element into an almost-sorted sequence having O(log n) maximum dislocation in such a way that the dislocation of the resulting sequence will still be O(log n). The other is how to simultaneously insert m elements into an almost-sorted sequence of m different elements, such that the resulting sequence of 2m elements remains almost sorted.
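
The dislocation measures being optimized are straightforward to evaluate for a given output. An illustrative Python helper (assuming 0-based positions and ranks):

def dislocations(output, true_rank):
    """Max and total dislocation of an output sequence; true_rank[x] is
    the true (0-based) rank of element x, positions are 0-based."""
    d = [abs(true_rank[x] - pos) for pos, x in enumerate(output)]
    return max(d), sum(d)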

Cite as

Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Optimal Sorting with Persistent Comparison Errors. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{geissmann_et_al:LIPIcs.ESA.2019.49,
  author =	{Geissmann, Barbara and Leucci, Stefano and Liu, Chih-Hung and Penna, Paolo},
  title =	{{Optimal Sorting with Persistent Comparison Errors}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.49},
  URN =		{urn:nbn:de:0030-drops-111706},
  doi =		{10.4230/LIPIcs.ESA.2019.49},
  annote =	{Keywords: approximate sorting, comparison errors, persistent errors}
}
Document
Dynamic Dominators and Low-High Orders in DAGs

Authors: Loukas Georgiadis, Konstantinos Giannis, Giuseppe F. Italiano, Aikaterini Karanasiou, and Luigi Laura


Abstract
We consider practical algorithms for maintaining the dominator tree and a low-high order in directed acyclic graphs (DAGs) subject to dynamic operations. Let G be a directed graph with a distinguished start vertex s. The dominator tree D of G is a tree rooted at s, such that a vertex v is an ancestor of a vertex w if and only if all paths from s to w in G include v. The dominator tree is a central tool in program optimization and code generation, and has many applications in other diverse areas including constraint programming, circuit testing, biology, and in algorithms for graph connectivity problems. A low-high order of G is a preorder of D that certifies the correctness of D, and has further applications in connectivity and path-determination problems. We first provide a practical and carefully engineered version of a recent algorithm [ICALP 2017] for maintaining the dominator tree of a DAG through a sequence of edge deletions. The algorithm runs in O(mn) total time and O(m) space, where n is the number of vertices and m is the number of edges before any deletion. In addition, we present a new algorithm that maintains a low-high order of a DAG under edge deletions within the same bounds. Both results extend to the case of reducible graphs (a class that includes DAGs). Furthermore, we present a fully dynamic algorithm for maintaining the dominator tree of a DAG under an intermixed sequence of edge insertions and deletions. Although it does not maintain the O(mn) worst-case bound of the decremental algorithm, our experiments highlight that the fully dynamic algorithm performs very well in practice. Finally, we study the practical efficiency of all our algorithms by conducting an extensive experimental study on real-world and synthetic graphs.
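
To fix what the maintained object encodes: v dominates w precisely when deleting v makes w unreachable from s. A definition-chasing Python sketch in O(V(V+E)) time (purely illustrative; the paper's algorithms are far more efficient):

def dominators(graph, s):
    """Dominator sets straight from the definition: v dominates w iff
    every path from s to w passes through v. graph: dict v -> successors."""
    def reachable(banned):
        seen = {s} - {banned}
        stack = list(seen)
        while stack:
            u = stack.pop()
            for w in graph.get(u, ()):
                if w != banned and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    everyone = reachable(None)
    return {w: {v for v in everyone if w not in reachable(v)}
            for w in everyone}          # dom[w] includes w itself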

Cite as

Loukas Georgiadis, Konstantinos Giannis, Giuseppe F. Italiano, Aikaterini Karanasiou, and Luigi Laura. Dynamic Dominators and Low-High Orders in DAGs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 50:1-50:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{georgiadis_et_al:LIPIcs.ESA.2019.50,
  author =	{Georgiadis, Loukas and Giannis, Konstantinos and Italiano, Giuseppe F. and Karanasiou, Aikaterini and Laura, Luigi},
  title =	{{Dynamic Dominators and Low-High Orders in DAGs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{50:1--50:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.50},
  URN =		{urn:nbn:de:0030-drops-111715},
  doi =		{10.4230/LIPIcs.ESA.2019.50},
  annote =	{Keywords: Connectivity, dominators, low-high orders}
}
Document
On the Hardness and Inapproximability of Recognizing Wheeler Graphs

Authors: Daniel Gibney and Sharma V. Thankachan


Abstract
In recent years several compressed indexes based on variants of the Burrows-Wheeler transformation have been introduced. Some of these are used to index structures far more complex than a single string, as was originally done with the FM-index [Ferragina and Manzini, J. ACM 2005]. As such, there has been an increasing effort to better understand under which conditions such an indexing scheme is possible. This has led to the introduction of Wheeler graphs [Gagie et al., Theor. Comput. Sci., 2017]. Gagie et al. showed that de Bruijn graphs, generalized compressed suffix arrays, and several other BWT-related structures can be represented as Wheeler graphs, and that Wheeler graphs can be indexed in a way which is space efficient. Hence, being able to recognize whether a given graph is a Wheeler graph, or being able to approximate a given graph by a Wheeler graph, could have numerous applications in indexing. Here we resolve the open question of whether there exists an efficient algorithm for recognizing if a given graph is a Wheeler graph. We present the following results:
- The problem of recognizing whether a given graph G=(V,E) is a Wheeler graph is NP-complete for any edge label alphabet of size sigma >= 2, even when G is a DAG. This holds even on a restricted subset of graphs called d-NFAs for d >= 5. This is in contrast to recent results demonstrating that the problem can be solved in polynomial time for d-NFAs where d <= 2. We also show the recognition problem can be solved in linear time for sigma = 1.
- There exists a 2^{e log sigma + O(n + e)} time exact algorithm, where n = |V| and e = |E|. This algorithm relies on graph isomorphism being computable in strictly sub-exponential time.
- We define an optimization variant of the problem called Wheeler Graph Violation, abbreviated WGV, where the aim is to remove the minimum number of edges in order to obtain a Wheeler graph. We show WGV is APX-hard, even when G is a DAG, implying there exists a constant C >= 1 for which there is no C-approximation algorithm (unless P = NP). Also, conditioned on the Unique Games Conjecture, for all C >= 1, it is NP-hard to find a C-approximation.
- We define the Wheeler Subgraph problem, abbreviated WS, where the aim is to find the largest subgraph which is a Wheeler graph (the dual of WGV). In contrast to WGV, we prove that the WS problem is in APX for sigma = O(1).
The above findings suggest that most problems under this theme are computationally difficult. However, we identify a class of graphs for which the recognition problem is polynomial time solvable, raising the open question of which parameters determine this problem’s difficulty.
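
Verifying the Wheeler axioms for a given vertex order is easy; the hardness concerns finding such an order. A brute-force Python checker, using one common statement of the axioms (in-degree-0 vertices come first; for edges (u,v) and (u',v') with labels a and a': a < a' forces v < v', and equal labels with u < u' force v <= v'):

def is_wheeler_order(order, edges):
    """Brute-force check of the Wheeler axioms for a fixed total order.
    order: dict vertex -> position; edges: list of (u, v, label)."""
    heads = {v for _, v, _ in edges}            # vertices with in-degree >= 1
    sources = [order[v] for v in order if v not in heads]
    if sources and heads:
        if max(sources) > min(order[v] for v in heads):
            return False                        # in-degree-0 vertices come first
    for u1, v1, a1 in edges:
        for u2, v2, a2 in edges:
            if a1 < a2 and order[v1] >= order[v2]:
                return False
            if a1 == a2 and order[u1] < order[u2] and order[v1] > order[v2]:
                return False
    return True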

Cite as

Daniel Gibney and Sharma V. Thankachan. On the Hardness and Inapproximability of Recognizing Wheeler Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 51:1-51:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{gibney_et_al:LIPIcs.ESA.2019.51,
  author =	{Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{On the Hardness and Inapproximability of Recognizing Wheeler Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{51:1--51:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.51},
  URN =		{urn:nbn:de:0030-drops-111728},
  doi =		{10.4230/LIPIcs.ESA.2019.51},
  annote =	{Keywords: Burrows–Wheeler transform, string algorithms, suffix trees, NP-completeness}
}
Document
Evaluation of a Flow-Based Hypergraph Bipartitioning Algorithm

Authors: Lars Gottesbüren, Michael Hamann, and Dorothea Wagner


Abstract
In this paper, we propose HyperFlowCutter, an algorithm for balanced hypergraph bipartitioning that is based on minimum S-T hyperedge cuts and maximum flows. It computes a sequence of bipartitions that optimize cut size and balance in the Pareto sense, being able to trade one for the other. HyperFlowCutter builds on the FlowCutter algorithm for partitioning graphs. We propose additional features, such as handling disconnected hypergraphs, novel methods for obtaining starting S,T pairs, as well as an approach to refine a given partition with HyperFlowCutter. Our main contribution is ReBaHFC, a new algorithm which obtains an initial partition with the fast multilevel hypergraph partitioner PaToH and then improves it using HyperFlowCutter as a refinement algorithm. ReBaHFC is able to significantly improve the solution quality of PaToH with little additional running time. The solution quality is only marginally worse than that of the best-performing hypergraph partitioners KaHyPar and hMETIS, while being one order of magnitude faster. Thus ReBaHFC offers a new time-quality trade-off in the current spectrum of hypergraph partitioners. For the special case of perfectly balanced bipartitioning, only the much slower plain HyperFlowCutter yields slightly better solutions than ReBaHFC, while only PaToH is faster than ReBaHFC.

Cite as

Lars Gottesbüren, Michael Hamann, and Dorothea Wagner. Evaluation of a Flow-Based Hypergraph Bipartitioning Algorithm. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 52:1-52:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{gottesburen_et_al:LIPIcs.ESA.2019.52,
  author =	{Gottesb\"{u}ren, Lars and Hamann, Michael and Wagner, Dorothea},
  title =	{{Evaluation of a Flow-Based Hypergraph Bipartitioning Algorithm}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{52:1--52:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.52},
  URN =		{urn:nbn:de:0030-drops-111730},
  doi =		{10.4230/LIPIcs.ESA.2019.52},
  annote =	{Keywords: Hypergraph Partitioning, Maximum Flows, Algorithm Engineering}
}
Document
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack

Authors: Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese


Abstract
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+epsilon)-approximations in f(k,epsilon)n^O(1) time where k is some parameter of the input. The goal is to overcome lower bounds from either of the areas. We obtain the following results on parameterized approximability:
- In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis-parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(log log n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant epsilon>0 and integer k>0, in time f(k,epsilon)n^g(epsilon), either outputs a solution of size at least k/(1+epsilon), or declares that the optimum solution has size less than k.
- In the (2-dimensional) geometric knapsack problem (2DK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2DK with rotations (2DKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2+epsilon [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2DKR.
For all considered problems, getting time f(k,epsilon)n^O(1), rather than f(k,epsilon)n^g(epsilon), would give FPT time f'(k)n^O(1) exact algorithms by setting epsilon=1/(k+1), contradicting W[1]-hardness. Instead, for each fixed epsilon>0, our PASs give (1+epsilon)-approximate solutions in FPT time. For both MISR and 2DKR our techniques also give rise to preprocessing algorithms that take n^g(epsilon) time and return a subset of at most k^g(epsilon) rectangles/items that contains a solution of size at least k/(1+epsilon) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17].
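
For scale, the exact parameterized question behind the MISR result can be decided in n^{O(k)} time by brute force, which W[1]-hardness suggests cannot be improved to f(k)n^{O(1)}. An illustrative Python sketch of the brute force (rectangles as (x1, y1, x2, y2), shared boundaries treated as non-overlapping):

from itertools import combinations

def disjoint(r, s):
    """True iff two axis-parallel rectangles do not overlap."""
    return r[2] <= s[0] or s[2] <= r[0] or r[3] <= s[1] or s[3] <= r[1]

def has_k_independent(rects, k):
    """n^O(k) brute force: do k pairwise non-overlapping rectangles exist?"""
    return any(all(disjoint(a, b) for a, b in combinations(sub, 2))
               for sub in combinations(rects, k))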

Cite as

Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese. Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 53:1-53:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{grandoni_et_al:LIPIcs.ESA.2019.53,
  author =	{Grandoni, Fabrizio and Kratsch, Stefan and Wiese, Andreas},
  title =	{{Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{53:1--53:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.53},
  URN =		{urn:nbn:de:0030-drops-111741},
  doi =		{10.4230/LIPIcs.ESA.2019.53},
  annote =	{Keywords: parameterized approximation, parameterized intractability, independent set of rectangles, geometric knapsack}
}
Document
Packing Cars into Narrow Roads: PTASs for Limited Supply Highway

Authors: Fabrizio Grandoni and Andreas Wiese


Abstract
In the Highway problem, we are given a path with n edges (the highway), and a set of m drivers, each one characterized by a subpath and a budget. For a given assignment of edge prices (the tolls), the highway owner collects from each driver the total price of the associated path when it does not exceed the driver's budget, and zero otherwise. The goal is to choose the prices to maximize the total profit. A PTAS is known for this (strongly NP-hard) problem [Grandoni,Rothvoss-SODA'11, SICOMP'16]. In this paper we study the limited supply generalization of Highway, which incorporates capacity constraints. Here the input also includes a capacity u_e >= 0 for each edge e; we need to select, among the drivers that can afford the required price, a subset such that the number of drivers that use each edge e is at most u_e (and we get profit only from selected drivers). To the best of our knowledge, the only approximation algorithm known for this problem is a folklore O(log m) approximation based on a reduction to the related Unsplittable Flow on a Path problem (UFP). The main result of this paper is a PTAS for limited supply highway. As a second contribution, we study a natural generalization of the problem where each driver i demands a different amount d_i of capacity. Using known techniques, it is not hard to derive a QPTAS for this problem. Here we present a PTAS for the case where drivers have uniform budgets. Finding a PTAS for non-uniform-demand limited supply highway is left as a challenging open problem.

Cite as

Fabrizio Grandoni and Andreas Wiese. Packing Cars into Narrow Roads: PTASs for Limited Supply Highway. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 54:1-54:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{grandoni_et_al:LIPIcs.ESA.2019.54,
  author =	{Grandoni, Fabrizio and Wiese, Andreas},
  title =	{{Packing Cars into Narrow Roads: PTASs for Limited Supply Highway}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{54:1--54:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.54},
  URN =		{urn:nbn:de:0030-drops-111751},
  doi =		{10.4230/LIPIcs.ESA.2019.54},
  annote =	{Keywords: approximation algorithms, pricing problems, highway problem, unsplittable flow on a path}
}
Document
Engineering Negative Cycle Canceling for Wind Farm Cabling

Authors: Sascha Gritzbach, Torsten Ueckerdt, Dorothea Wagner, Franziska Wegner, and Matthias Wolf


Abstract
In a wind farm, turbines convert wind energy into electrical energy. The generation of each turbine is transmitted, possibly via other turbines, to a substation that is connected to the power grid. On every possible interconnection there can be at most one of various different cable types. Each cable type comes with a cost per unit length and with a capacity. Designing a cost-minimal cable layout for a wind farm to feed all turbine production into the power grid is called the Wind Farm Cabling Problem (WCP). We consider a formulation of WCP as a flow problem on a graph where the cost of a flow on an edge is modeled by a step function originating from the cable types. Recently, we presented a proof-of-concept for a negative cycle canceling-based algorithm for WCP [Sascha Gritzbach et al., 2018]. We extend key steps of that heuristic and build a theoretical foundation that explains how this heuristic tackles the problems arising from the special structure of WCP. A thorough experimental evaluation identifies the best setup of the algorithm and compares it to existing methods from the literature, such as Mixed-integer Linear Programming (MILP) and Simulated Annealing (SA). The heuristic runs in a range of half a millisecond to under two minutes on instances with up to 500 turbines. It provides solutions of similar quality to both competitors, even when those are given running times of one hour and one day. When comparing the solution quality after a running time of two seconds, our algorithm outperforms the MILP- and SA-approaches, which allows it to be applied in interactive wind farm planning.
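
The classical primitive behind the heuristic is detecting a negative cycle in a residual network; the paper's contribution lies in making cycle canceling work with step cost functions, which the sketch below does not model. A standard Bellman-Ford-based negative-cycle finder in Python:

def find_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection for a digraph on vertices
    0..n-1 with weighted arcs (u, v, cost). Returns one negative cycle
    as a vertex list, or None if none exists."""
    dist = [0.0] * n          # implicit super-source with 0-cost arcs to all
    pred = [None] * n
    relaxed = None
    for _ in range(n):        # n passes: a relaxation in the last pass
        relaxed = None        # certifies a negative cycle
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                pred[v] = u
                relaxed = v
    if relaxed is None:
        return None
    for _ in range(n):        # walk predecessors until we sit on the cycle
        relaxed = pred[relaxed]
    cycle, v = [relaxed], pred[relaxed]
    while v != relaxed:
        cycle.append(v)
        v = pred[v]
    return cycle[::-1]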

Cite as

Sascha Gritzbach, Torsten Ueckerdt, Dorothea Wagner, Franziska Wegner, and Matthias Wolf. Engineering Negative Cycle Canceling for Wind Farm Cabling. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 55:1-55:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{gritzbach_et_al:LIPIcs.ESA.2019.55,
  author =	{Gritzbach, Sascha and Ueckerdt, Torsten and Wagner, Dorothea and Wegner, Franziska and Wolf, Matthias},
  title =	{{Engineering Negative Cycle Canceling for Wind Farm Cabling}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{55:1--55:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.55},
  URN =		{urn:nbn:de:0030-drops-111766},
  doi =		{10.4230/LIPIcs.ESA.2019.55},
  annote =	{Keywords: Negative Cycle Canceling, Step Cost Function, Wind Farm Planning}
}
Document
Towards Improving Christofides Algorithm for Half-Integer TSP

Authors: Arash Haddadan and Alantha Newman


Abstract
We study the traveling salesman problem (TSP) in the case when the objective function of the subtour linear programming relaxation is minimized by a half-cycle point: x_e in {0,1/2,1} where the half-edges form a 2-factor and the 1-edges form a perfect matching. Such points are sufficient to resolve half-integer TSP in general and they have been conjectured to demonstrate the largest integrality gap for the subtour relaxation. For half-cycle points, the best-known approximation guarantee is 3/2 due to Christofides' famous algorithm. Proving an integrality gap of alpha for the subtour relaxation is equivalent to showing that alpha x can be written as a convex combination of tours, where x is any feasible solution for this relaxation. To beat Christofides' bound, our goal is to show that (3/2 - epsilon)x can be written as a convex combination of tours for some positive constant epsilon. Let y_e = 3/2-epsilon when x_e = 1 and y_e = 3/4 when x_e = 1/2. As a first step towards this goal, our main result is to show that y can be written as a convex combination of tours. In other words, we show that we can save on 1-edges, which has several applications. Among them, it gives an alternative algorithm for the recently studied uniform cover problem. Our main new technique is a procedure to glue tours over proper 3-edge cuts that are tight with respect to x, thus reducing the problem to a base case in which such cuts do not occur.

Cite as

Arash Haddadan and Alantha Newman. Towards Improving Christofides Algorithm for Half-Integer TSP. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 56:1-56:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{haddadan_et_al:LIPIcs.ESA.2019.56,
  author =	{Haddadan, Arash and Newman, Alantha},
  title =	{{Towards Improving Christofides Algorithm for Half-Integer TSP}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{56:1--56:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.56},
  URN =		{urn:nbn:de:0030-drops-111772},
  doi =		{10.4230/LIPIcs.ESA.2019.56},
  annote =	{Keywords: Traveling salesman problem, subtour elimination relaxation, integrality gap, gluing subtours}
}
Document
Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons

Authors: Yael Hitron and Merav Parter


Abstract
We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round, neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Theta(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons where the neuron y is required to fire for t consecutive rounds with probability at least 1-delta, and should stop firing after at most 2t rounds with probability 1-delta, for some input parameter delta in (0,1). Our key result is a construction of a neural timer with O(log log 1/delta) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Omega(min{log log 1/delta, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time.
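
The deterministic Theta(log t) bound matches the natural counter-based solution: O(log t) bits of state suffice for the required behaviour. A toy Python state machine simulating the specification (not a spiking-neuron implementation):

def neural_timer(t, input_spikes):
    """Toy reference behaviour: after a spike from x in round i, output y
    must fire in rounds i+1 .. i+t. A counter with O(log t) bits of state
    realizes this, matching the deterministic Theta(log t) gate bound."""
    counter, output = 0, []
    for spike in input_spikes:       # one boolean per round
        output.append(counter > 0)   # y fires while the countdown is live
        counter = t if spike else max(0, counter - 1)
    return output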

Cite as

Yael Hitron and Merav Parter. Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 57:1-57:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{hitron_et_al:LIPIcs.ESA.2019.57,
  author =	{Hitron, Yael and Parter, Merav},
  title =	{{Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{57:1--57:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.57},
  URN =		{urn:nbn:de:0030-drops-111782},
  doi =		{10.4230/LIPIcs.ESA.2019.57},
  annote =	{Keywords: stochastic neural networks, approximate counting, synchronizer}
}
Document
Triconnected Planar Graphs of Maximum Degree Five are Subhamiltonian

Authors: Michael Hoffmann and Boris Klemz


Abstract
We show that every triconnected planar graph of maximum degree five is subhamiltonian planar. A graph is subhamiltonian planar if it is a subgraph of a Hamiltonian planar graph or, equivalently, if it admits a 2-page book embedding. In fact, our result is stronger because we only require vertices of a separating triangle to have degree at most five, all other vertices may have arbitrary degree. This degree bound is tight: We describe a family of triconnected planar graphs that are not subhamiltonian planar and where every vertex of a separating triangle has degree at most six. Our results improve earlier work by Heath and by Bauernöppel and, independently, Bekos, Gronemann, and Raftopoulou, who showed that planar graphs of maximum degree three and four, respectively, are subhamiltonian planar. The proof is constructive and yields a quadratic time algorithm to obtain a subhamiltonian plane cycle for a given graph. As one of our main tools, which might be of independent interest, we devise an algorithm that, in a given 3-connected plane graph satisfying the above degree bounds, collapses each maximal separating triangle into a single edge such that the resulting graph is biconnected, contains no separating triangle, and no separation pair whose vertices are adjacent.

Cite as

Michael Hoffmann and Boris Klemz. Triconnected Planar Graphs of Maximum Degree Five are Subhamiltonian. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{hoffmann_et_al:LIPIcs.ESA.2019.58,
  author =	{Hoffmann, Michael and Klemz, Boris},
  title =	{{Triconnected Planar Graphs of Maximum Degree Five are Subhamiltonian}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.58},
  URN =		{urn:nbn:de:0030-drops-111797},
  doi =		{10.4230/LIPIcs.ESA.2019.58},
  annote =	{Keywords: Graph drawing, book embedding, Hamiltonian graph, planar graph, bounded degree graph, graph augmentation, computational geometry, SPQR decomposition}
}
Document
Parallel Weighted Random Sampling

Authors: Lorenz Hübschle-Schneider and Peter Sanders


Abstract
Data structures for efficient sampling from a set of weighted items are an important building block of many applications. However, few parallel solutions are known. We close many of these gaps both for shared-memory and distributed-memory machines. We give efficient, fast, and practicable algorithms for sampling single items, k items with/without replacement, permutations, subsets, and reservoirs. We also give improved sequential algorithms for alias table construction and for sampling with replacement. Experiments on shared-memory parallel machines with up to 158 threads show near linear speedups both for construction and queries.
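
For readers unfamiliar with the sequential baseline mentioned above, alias tables support O(1) weighted sampling after O(n) construction. The sketch below is a standard Vose-style construction; the paper's contribution is the parallel and distributed setting, which this background sketch does not touch.

import random

def build_alias(weights):
    # Vose's alias method: O(n) construction, O(1) sampling.
    n = len(weights)
    total = float(sum(weights))
    scaled = [w * n / total for w in weights]
    prob, alias = [1.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to fill bucket s
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias                        # leftover buckets keep prob 1.0

def sample(prob, alias):
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias([1, 2, 3, 4])
counts = [0] * 4
for _ in range(100000):
    counts[sample(prob, alias)] += 1
print(counts)   # roughly proportional to 1 : 2 : 3 : 4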

Cite as

Lorenz Hübschle-Schneider and Peter Sanders. Parallel Weighted Random Sampling. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 59:1-59:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{hubschleschneider_et_al:LIPIcs.ESA.2019.59,
  author =	{H\"{u}bschle-Schneider, Lorenz and Sanders, Peter},
  title =	{{Parallel Weighted Random Sampling}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{59:1--59:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.59},
  URN =		{urn:nbn:de:0030-drops-111800},
  doi =		{10.4230/LIPIcs.ESA.2019.59},
  annote =	{Keywords: categorical distribution, multinoulli distribution, parallel algorithm, alias method, PRAM, communication efficient algorithm, subset sampling, reservoir sampling}
}
Document
External Memory Priority Queues with Decrease-Key and Applications to Graph Algorithms

Authors: John Iacono, Riko Jacob, and Konstantinos Tsakalidis


Abstract
We present priority queues in the external memory model with block size B and main memory size M that, on N elements, support operation Update (a combination of operations Insert and DecreaseKey) in O((1/B) log_{M/B}(N/B)) amortized I/Os and operations ExtractMin and Delete in O(ceil((M^epsilon/B) log_{M/B}(N/B)) log_{M/B}(N/B)) amortized I/Os, for any real epsilon in (0,1), using O((N/B) log_{M/B}(N/B)) blocks. Previous I/O-efficient priority queues either support these operations in O((1/B) log_2(N/B)) amortized I/Os [Kumar and Schwabe, SPDP '96] or support only operations Insert, Delete and ExtractMin in optimal O((1/B) log_{M/B}(N/B)) amortized I/Os, however without supporting DecreaseKey [Fadel et al., TCS '99]. We also present buffered repository trees that, on a multi-set of N elements, support operation Insert in O((1/B) log_{M/B}(N/B)) I/Os and operation Extract on K extracted elements in O(M^epsilon log_{M/B}(N/B) + K/B) amortized I/Os, using O(N/B) blocks. Previous results achieve O((1/B) log_2(N/B)) I/Os and O(log_2(N/B) + K/B) I/Os, respectively [Buchsbaum et al., SODA '00]. Our results imply improved O((E/B) log_{M/B}(E/B)) I/Os for single-source shortest paths, depth-first search and breadth-first search algorithms on massive directed dense graphs (V,E) with E = Omega(V^(1+epsilon)), epsilon > 0, and V = Omega(M), which is equal to the I/O-optimal bound for sorting E values in external memory.
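
As RAM-model background for the Update operation (Insert combined with DecreaseKey), the sketch below shows the standard lazy trick of reinserting an element under its smaller key and discarding stale entries at extraction time. It only illustrates the operation's semantics; making this I/O-efficient is the paper's actual subject and is not attempted here.

import heapq

class LazyPQ:
    # Priority queue with Update = Insert-or-DecreaseKey via lazy reinsertion.
    def __init__(self):
        self.heap = []   # (key, element) pairs, possibly stale
        self.best = {}   # element -> smallest key seen so far

    def update(self, elem, key):
        # Insert elem, or decrease its key if the new key is smaller.
        if key < self.best.get(elem, float("inf")):
            self.best[elem] = key
            heapq.heappush(self.heap, (key, elem))

    def extract_min(self):
        while self.heap:
            key, elem = heapq.heappop(self.heap)
            if self.best.get(elem) == key:   # skip stale entries
                del self.best[elem]
                return key, elem
        raise IndexError("extract_min from empty queue")

pq = LazyPQ()
pq.update("a", 5); pq.update("b", 3); pq.update("a", 1)   # decrease a's key
print(pq.extract_min())   # (1, 'a')
print(pq.extract_min())   # (3, 'b')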

Cite as

John Iacono, Riko Jacob, and Konstantinos Tsakalidis. External Memory Priority Queues with Decrease-Key and Applications to Graph Algorithms. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{iacono_et_al:LIPIcs.ESA.2019.60,
  author =	{Iacono, John and Jacob, Riko and Tsakalidis, Konstantinos},
  title =	{{External Memory Priority Queues with Decrease-Key and Applications to Graph Algorithms}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{60:1--60:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.60},
  URN =		{urn:nbn:de:0030-drops-111817},
  doi =		{10.4230/LIPIcs.ESA.2019.60},
  annote =	{Keywords: priority queues, external memory, graph algorithms, shortest paths, depth-first search, breadth-first search}
}
Document
Shortest Reconfiguration of Perfect Matchings via Alternating Cycles

Authors: Takehiro Ito, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, and Yoshio Okamoto


Abstract
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
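
The elementary fact behind the reconfiguration step is that the symmetric difference of two perfect matchings decomposes into vertex-disjoint alternating cycles, and one step of the sequence flips a single such cycle. The sketch below computes this decomposition (the encoding is our own).

def symmetric_difference_cycles(m1, m2):
    # m1, m2: perfect matchings given as lists of vertex pairs.
    # Returns the cycles of m1 XOR m2; every vertex has degree 0 or 2 there.
    e1, e2 = {frozenset(e) for e in m1}, {frozenset(e) for e in m2}
    adj = {}
    for e in e1 ^ e2:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, cycles = set(), []
    for start in adj:
        if start in seen:
            continue
        cycle, prev, cur = [start], None, start
        while True:
            seen.add(cur)
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            if nxt == start:
                break
            cycle.append(nxt)
            prev, cur = cur, nxt
        cycles.append(cycle)
    return cycles

# Two perfect matchings of a 4-cycle differ in one alternating 4-cycle.
print(symmetric_difference_cycles([(0, 1), (2, 3)], [(1, 2), (3, 0)]))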

Cite as

Takehiro Ito, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, and Yoshio Okamoto. Shortest Reconfiguration of Perfect Matchings via Alternating Cycles. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 61:1-61:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{ito_et_al:LIPIcs.ESA.2019.61,
  author =	{Ito, Takehiro and Kakimura, Naonori and Kamiyama, Naoyuki and Kobayashi, Yusuke and Okamoto, Yoshio},
  title =	{{Shortest Reconfiguration of Perfect Matchings via Alternating Cycles}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{61:1--61:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.61},
  URN =		{urn:nbn:de:0030-drops-111823},
  doi =		{10.4230/LIPIcs.ESA.2019.61},
  annote =	{Keywords: Matching, Combinatorial reconfiguration, Alternating cycles, Combinatorial shortest paths}
}
Document
Closing the Gap for Pseudo-Polynomial Strip Packing

Authors: Klaus Jansen and Malin Rau


Abstract
Two-dimensional packing problems are a fundamental class of optimization problems, and Strip Packing is one of the most natural and famous among them. Indeed, it can be defined in just one sentence: given a set of axis-parallel rectangular items and a strip of bounded width and infinite height, the objective is to find a packing of the items into the strip that minimizes the packing height. We speak of pseudo-polynomial Strip Packing if we consider algorithms whose running time is pseudo-polynomial with respect to the width of the strip. It is known that there is no pseudo-polynomial time algorithm for Strip Packing with a ratio better than 5/4 unless P = NP. The best algorithm so far has a ratio of 4/3 + epsilon. In this paper, we close the gap between the inapproximability result and the currently known algorithms by presenting an algorithm with approximation ratio 5/4 + epsilon. The algorithm relies on a new structural result, which is the main accomplishment of this paper. It states that each optimal solution can be transformed, with bounded loss in the objective, so that it has one of a polynomial number of different forms, thus making the problem tractable by standard techniques, i.e., dynamic programming. To show the conceptual strength of the approach, we extend our result to other problems, e.g., Strip Packing with 90 degree rotations and Contiguous Moldable Task Scheduling, and present algorithms with approximation ratio 5/4 + epsilon for these problems as well.
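
As a point of reference for the problem statement, the classical Next-Fit Decreasing Height shelf heuristic packs items in a few lines. It is a well-known baseline with much weaker guarantees, not the paper's 5/4 + epsilon algorithm, and it assumes every item fits the strip width.

def nfdh(items, W):
    # Next-Fit Decreasing Height: sort by height, fill shelves left to right.
    items = sorted(items, key=lambda wh: wh[1], reverse=True)
    placements, shelf_y, shelf_h, x = [], 0.0, 0.0, 0.0
    for w, h in items:
        if x + w > W:                 # current shelf is full: open a new one
            shelf_y += shelf_h
            x, shelf_h = 0.0, 0.0
        if shelf_h == 0.0:
            shelf_h = h               # tallest (first) item sets shelf height
        placements.append((x, shelf_y, w, h))
        x += w
    return shelf_y + shelf_h, placements

height, placed = nfdh([(4, 3), (3, 3), (2, 2), (2, 2), (1, 1)], W=6)
print(height)    # packing height of the heuristic solution
print(placed)    # (x, y, width, height) per item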

Cite as

Klaus Jansen and Malin Rau. Closing the Gap for Pseudo-Polynomial Strip Packing. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 62:1-62:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{jansen_et_al:LIPIcs.ESA.2019.62,
  author =	{Jansen, Klaus and Rau, Malin},
  title =	{{Closing the Gap for Pseudo-Polynomial Strip Packing}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{62:1--62:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.62},
  URN =		{urn:nbn:de:0030-drops-111831},
  doi =		{10.4230/LIPIcs.ESA.2019.62},
  annote =	{Keywords: Strip Packing, pseudo-polynomial, 90 degree rotation, Contiguous Moldable Task Scheduling}
}
Document
Odd-Cycle Separation for Maximum Cut and Binary Quadratic Optimization

Authors: Michael Jünger and Sven Mallach


Abstract
Solving the NP-hard Maximum Cut or Binary Quadratic Optimization Problem to optimality is important in many applications, including physics, chemistry, neuroscience, and circuit layout. The leading approaches based on linear/semidefinite programming require the separation of so-called odd-cycle inequalities for solving relaxations within their associated branch-and-cut frameworks. In their groundbreaking work, F. Barahona and A.R. Mahjoub gave an informal description of a polynomial-time separation procedure for the odd-cycle inequalities. Since then, the odd-cycle separation problem has broadly been considered solved. However, as we reveal, a straightforward implementation is likely to generate inequalities that are not facet-defining and that have further undesired properties. Here, we present a more detailed analysis, along with enhancements to overcome the associated issues efficiently. In a corresponding experimental study, it turns out that these enhancements are worthwhile and may speed up the solution process significantly.
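
For context, the textbook Barahona-Mahjoub procedure separates odd-cycle inequalities sum_{e in F} x_e - sum_{e in C\F} x_e <= |F| - 1 (F subseteq C, |F| odd) via shortest paths in a bipartite double cover: an inequality is violated iff some cycle C and odd F satisfy sum_{e in F}(1 - x_e) + sum_{e in C\F} x_e < 1. The sketch below implements this straightforward version; as the abstract points out, it can return non-simple walks and non-facet-defining inequalities, which is precisely the issue the paper analyzes.

import heapq

def separate_odd_cycle(n, x):
    # x: dict mapping an edge (u, v) with u < v to its LP value in [0, 1].
    # Double cover: staying in the same layer costs x_e (e not in F),
    # switching layers costs 1 - x_e (e in F).  A shortest (s,0)->(s,1) path
    # is a minimum-weight closed walk using an odd number of F-edges.
    adj = {}
    for (u, v), val in x.items():
        for a, b in ((u, v), (v, u)):
            adj.setdefault((a, 0), []).extend([((b, 0), val), ((b, 1), 1.0 - val)])
            adj.setdefault((a, 1), []).extend([((b, 1), val), ((b, 0), 1.0 - val)])
    best = (float("inf"), None)
    for s in range(n):
        dist, par = {(s, 0): 0.0}, {}
        pq = [(0.0, (s, 0))]
        while pq:
            d, node = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue
            for nxt, w in adj.get(node, []):
                if d + w < dist.get(nxt, float("inf")):
                    dist[nxt], par[nxt] = d + w, node
                    heapq.heappush(pq, (d + w, nxt))
        if dist.get((s, 1), float("inf")) < best[0]:
            walk, node = [s], (s, 1)
            while node != (s, 0):
                node = par[node]
                walk.append(node[0])
            best = (dist[(s, 1)], walk[::-1])
    return best   # weight < 1 witnesses a violated odd-cycle inequality

# A fractional point on a triangle violating the inequality with F = C.
print(separate_odd_cycle(3, {(0, 1): 0.9, (1, 2): 0.9, (0, 2): 0.9}))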

Cite as

Michael Jünger and Sven Mallach. Odd-Cycle Separation for Maximum Cut and Binary Quadratic Optimization. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 63:1-63:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{junger_et_al:LIPIcs.ESA.2019.63,
  author =	{J\"{u}nger, Michael and Mallach, Sven},
  title =	{{Odd-Cycle Separation for Maximum Cut and Binary Quadratic Optimization}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{63:1--63:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.63},
  URN =		{urn:nbn:de:0030-drops-111840},
  doi =		{10.4230/LIPIcs.ESA.2019.63},
  annote =	{Keywords: Maximum cut, Binary quadratic optimization, Integer linear programming}
}
Document
Triangles and Girth in Disk Graphs and Transmission Graphs

Authors: Haim Kaplan, Katharina Klost, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir


Abstract
Let S subset R^2 be a set of n sites, where each s in S has an associated radius r_s > 0. The disk graph D(S) is the undirected graph with vertex set S and an undirected edge between two sites s, t in S if and only if |st| <= r_s + r_t, i.e., if the disks with centers s and t and respective radii r_s and r_t intersect. Disk graphs are used to model sensor networks. Similarly, the transmission graph T(S) is the directed graph with vertex set S and a directed edge from a site s to a site t if and only if |st| <= r_s, i.e., if t lies in the disk with center s and radius r_s. We provide algorithms for detecting (directed) triangles and, more generally, computing the length of a shortest cycle (the girth) in D(S) and in T(S). These problems are notoriously hard in general, but better solutions exist for special graph classes such as planar graphs. We obtain similarly efficient results for disk graphs and for transmission graphs. More precisely, we show that a shortest (Euclidean) triangle in D(S) and in T(S) can be found in O(n log n) expected time, and that the (weighted) girth of D(S) can be found in O(n log n) expected time. For this, we develop new tools for batched range searching that may be of independent interest.
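
To pin down the two graph classes, here is a direct quadratic-time construction of D(S) and T(S) with a naive cubic triangle check; the paper's point is achieving O(n log n) expected time, so this is definition-level background only (the demo radii are chosen arbitrarily).

from itertools import combinations
from math import dist

def disk_graph_edges(sites):
    # sites: list of ((x, y), r); undirected edge iff |st| <= r_s + r_t.
    return {(i, j) for (i, (p, rp)), (j, (q, rq)) in combinations(enumerate(sites), 2)
            if dist(p, q) <= rp + rq}

def transmission_edges(sites):
    # Directed edge s -> t iff |st| <= r_s, i.e. t lies in the disk of s.
    return {(i, j) for i, (p, rp) in enumerate(sites)
            for j, (q, _) in enumerate(sites) if i != j and dist(p, q) <= rp}

def undirected_triangle(n, edges):
    es = set(edges)
    for a, b, c in combinations(range(n), 3):
        if {(a, b), (b, c), (a, c)} <= es:
            return (a, b, c)
    return None

sites = [((0, 0), 1.6), ((1.5, 0), 1.0), ((0.7, 1.2), 1.0)]
E = disk_graph_edges(sites)
print(sorted(E), undirected_triangle(len(sites), E))
print(sorted(transmission_edges(sites)))   # only site 0 has a large radius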

Cite as

Haim Kaplan, Katharina Klost, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Triangles and Girth in Disk Graphs and Transmission Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 64:1-64:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{kaplan_et_al:LIPIcs.ESA.2019.64,
  author =	{Kaplan, Haim and Klost, Katharina and Mulzer, Wolfgang and Roditty, Liam and Seiferth, Paul and Sharir, Micha},
  title =	{{Triangles and Girth in Disk Graphs and Transmission Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{64:1--64:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.64},
  URN =		{urn:nbn:de:0030-drops-111859},
  doi =		{10.4230/LIPIcs.ESA.2019.64},
  annote =	{Keywords: disk graph, transmission graph, triangle, girth}
}
Document
Reliable Hubs for Partially-Dynamic All-Pairs Shortest Paths in Directed Graphs

Authors: Adam Karczmarz and Jakub Łącki


Abstract
We give new partially-dynamic algorithms for the all-pairs shortest paths problem in weighted directed graphs. Most importantly, we give a new deterministic incremental algorithm for the problem that handles updates in O~(m n^(4/3) log(W)/epsilon) total time (where the edge weights are from [1,W]) and explicitly maintains a (1+epsilon)-approximate distance matrix. For a fixed epsilon > 0, this is the first deterministic partially dynamic algorithm for all-pairs shortest paths in directed graphs whose update time is o(n^2) regardless of the number of edges. Furthermore, we also show how to improve the state-of-the-art partially dynamic randomized algorithms for all-pairs shortest paths [Baswana et al. STOC'02, Bernstein STOC'13] from Monte Carlo randomized to Las Vegas randomized without increasing the running time bounds (with respect to the O~(*) notation). Our results are obtained by giving new algorithms for the problem of dynamically maintaining hubs, that is, a set of O~(n/d) vertices which hit a shortest path between each pair of vertices, provided it has hop-length Omega(d). We give new subquadratic deterministic and Las Vegas algorithms for maintenance of hubs under either edge insertions or deletions.
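
The hub notion above rests on a classical hitting-set fact: a uniform random sample of roughly (n/d) ln(n^2) vertices hits every fixed path on at least d vertices with high probability. The toy experiment below illustrates this fact; the sample-size constant is a standard choice, not the paper's, and maintaining hubs dynamically is the actual contribution.

import math, random

def random_hubs(n, d, paths, seed=0):
    # Sample ~ (n/d) * ln(n^2) vertices and check they hit every path
    # of >= d vertices; `paths` stands in for the shortest paths to cover.
    rng = random.Random(seed)
    k = min(n, math.ceil((n / d) * math.log(n * n + 1)))
    hubs = set(rng.sample(range(n), k))
    missed = [p for p in paths if len(p) >= d and not hubs & set(p)]
    return hubs, missed

n, d = 1000, 50
paths = [list(range(s, s + d)) for s in range(0, n - d, 7)]   # toy path set
hubs, missed = random_hubs(n, d, paths)
print(len(hubs), "hubs,", len(missed), "long paths missed")   # expect 0 missed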

Cite as

Adam Karczmarz and Jakub Łącki. Reliable Hubs for Partially-Dynamic All-Pairs Shortest Paths in Directed Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 65:1-65:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{karczmarz_et_al:LIPIcs.ESA.2019.65,
  author =	{Karczmarz, Adam and {\L}\k{a}cki, Jakub},
  title =	{{Reliable Hubs for Partially-Dynamic All-Pairs Shortest Paths in Directed Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{65:1--65:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.65},
  URN =		{urn:nbn:de:0030-drops-111862},
  doi =		{10.4230/LIPIcs.ESA.2019.65},
  annote =	{Keywords: shortest paths, dynamic, incremental, decremental, directed graphs, hubs}
}
Document
Min-Cost Flow in Unit-Capacity Planar Graphs

Authors: Adam Karczmarz and Piotr Sankowski


Abstract
In this paper we give an O~((nm)^(2/3) log C) time algorithm for computing min-cost flow (or min-cost circulation) in unit-capacity planar multigraphs where edge costs are integers bounded by C. For planar multigraphs, this improves upon the best known algorithms for general graphs: the O~(m^(10/7) log C) time algorithm of Cohen et al. [SODA 2017], the O(m^(3/2) log(nC)) time algorithm of Gabow and Tarjan [SIAM J. Comput. 1989], and the O~(sqrt(n) m log C) time algorithm of Lee and Sidford [FOCS 2014]. In particular, our result constitutes the first known fully combinatorial algorithm that breaks the Omega(m^(3/2)) time barrier for the min-cost flow problem in planar graphs. To obtain our result, we first give a very simple scaling algorithm for the unit-capacity min-cost flow problem, based on successive shortest paths, that does not explicitly operate on dual variables. This algorithm also runs in O~(m^(3/2) log C) time for general graphs and, to the best of our knowledge, has not been described before. We subsequently show how to implement this algorithm faster on planar graphs using well-established tools: r-divisions and efficient algorithms for computing (shortest) paths in so-called dense distance graphs.
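
For background, the successive-shortest-paths scheme underlying the simple algorithm mentioned above repeatedly sends one unit of flow along a cheapest residual path. The plain sketch below (Bellman-Ford, no scaling, no dual variables, no planarity; antiparallel edge pairs are not supported) is the textbook baseline on which the paper's algorithm builds.

def min_cost_flow_unit(n, edges, s, t, units):
    # edges: list of (u, v, cost), each with capacity 1.  Sends up to `units`
    # units from s to t via successive cheapest residual paths.
    res, cost = {}, {}
    for u, v, c in edges:
        res[(u, v)] = 1; cost[(u, v)] = c
        res.setdefault((v, u), 0); cost[(v, u)] = -c
    total, flow = 0, 0
    for _ in range(units):
        dist = [float("inf")] * n; par = [None] * n
        dist[s] = 0
        for _ in range(n - 1):             # Bellman-Ford tolerates -cost arcs
            for (u, v), r in res.items():
                if r > 0 and dist[u] + cost[(u, v)] < dist[v]:
                    dist[v] = dist[u] + cost[(u, v)]; par[v] = u
        if dist[t] == float("inf"):
            break                          # no augmenting path left
        v = t
        while v != s:                      # augment one unit along the path
            u = par[v]; res[(u, v)] -= 1; res[(v, u)] += 1; v = u
        total += dist[t]; flow += 1
    return flow, total

edges = [(0, 1, 1), (0, 2, 2), (1, 3, 2), (2, 3, 1), (1, 2, 1)]
print(min_cost_flow_unit(4, edges, 0, 3, 2))   # (2, 6): two unit-cost-3 paths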

Cite as

Adam Karczmarz and Piotr Sankowski. Min-Cost Flow in Unit-Capacity Planar Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 66:1-66:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{karczmarz_et_al:LIPIcs.ESA.2019.66,
  author =	{Karczmarz, Adam and Sankowski, Piotr},
  title =	{{Min-Cost Flow in Unit-Capacity Planar Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{66:1--66:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.66},
  URN =		{urn:nbn:de:0030-drops-111878},
  doi =		{10.4230/LIPIcs.ESA.2019.66},
  annote =	{Keywords: minimum-cost flow, minimum-cost circulation, planar graphs}
}
Document
Global Curve Simplification

Authors: Mees van de Kerkhof, Irina Kostitsyna, Maarten Löffler, Majid Mirzanezhad, and Carola Wenk


Abstract
Due to its many applications, curve simplification is a long-studied problem in computational geometry and adjacent disciplines, such as graphics, geographical information science, etc. Given a polygonal curve P with n vertices, the goal is to find another polygonal curve P' with a smaller number of vertices such that P' is sufficiently similar to P. Quality guarantees of a simplification are usually given in a local sense, bounding the distance between a shortcut and its corresponding section of the curve. In this work we aim to provide a systematic overview of curve simplification problems under global distance measures that bound the distance between P and P'. We consider six different curve distance measures: three variants of the Hausdorff distance and three variants of the Fréchet distance. We also study different restrictions on the choice of vertices for P'. We provide polynomial-time algorithms for some variants of the global curve simplification problem and show NP-hardness for other variants. Through this systematic study we observe, for the first time, some surprising patterns, and suggest directions for future research in this important area.
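
One of the measures studied, the discrete Fréchet distance, has a classical O(nm) dynamic program due to Eiter and Mannila; the sketch below computes it, purely to make one of the distance measures concrete. It is not one of the paper's algorithms.

from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    # Classic O(|P||Q|) DP: a coupling walks forward on both curves and the
    # distance is the minimal achievable maximum pointwise distance.
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0), (3, 0)]
Q = [(0, 1), (1, 1), (3, 1)]       # a coarser curve, shifted up by 1
print(discrete_frechet(P, Q))      # sqrt(2): the middle vertex is the bottleneck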

Cite as

Mees van de Kerkhof, Irina Kostitsyna, Maarten Löffler, Majid Mirzanezhad, and Carola Wenk. Global Curve Simplification. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{vandekerkhof_et_al:LIPIcs.ESA.2019.67,
  author =	{van de Kerkhof, Mees and Kostitsyna, Irina and L\"{o}ffler, Maarten and Mirzanezhad, Majid and Wenk, Carola},
  title =	{{Global Curve Simplification}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.67},
  URN =		{urn:nbn:de:0030-drops-111887},
  doi =		{10.4230/LIPIcs.ESA.2019.67},
  annote =	{Keywords: Curve simplification, Fr\'{e}chet distance, Hausdorff distance}
}
Document
Trace Reconstruction: Generalized and Parameterized

Authors: Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, and Soumyabrata Pal


Abstract
In the beautifully simple-to-state problem of trace reconstruction, the goal is to reconstruct an unknown binary string x given random "traces" of x, where each trace is generated by deleting each coordinate of x independently with probability p < 1. The problem is well studied both when the unknown string is arbitrary and when it is chosen uniformly at random. For both settings, there is still an exponential gap between upper and lower sample complexity bounds, and our understanding of the problem is still surprisingly limited. In this paper, we consider natural parameterizations and generalizations of this problem in an effort to attain a deeper and more comprehensive understanding. Perhaps our most surprising results are: 1) We prove that exp(O(n^(1/4) sqrt(log n))) traces suffice for reconstructing arbitrary matrices. In the matrix version of the problem, each row and column of an unknown sqrt(n) x sqrt(n) matrix is deleted independently with probability p. Our result contrasts with the best known results for sequence reconstruction, where the best known upper bound is exp(O(n^(1/3))). 2) An optimal result for random matrix reconstruction: we show that Theta(log n) traces are necessary and sufficient. This is in contrast to the problem for random sequences, where there is a super-logarithmic lower bound and the best known upper bound is exp(O(log^(1/3) n)). 3) We show that exp(O(k^(1/3) log^(2/3) n)) traces suffice to reconstruct k-sparse strings, providing an improvement over the best known sequence reconstruction results when k = o(n/log^2 n). 4) We show that poly(n) traces suffice if x is k-sparse and we additionally have a "separation" promise, specifically that the indices of 1's in x all differ by Omega(k log n).
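
To make the model concrete, the following lines generate traces of a string, and of a matrix where whole rows and columns are deleted, exactly as described above. Reconstruction itself, the hard direction, is not attempted.

import random

def trace(x, p, rng=random):
    # Delete each coordinate of x independently with probability p.
    return [b for b in x if rng.random() >= p]

def matrix_trace(M, p, rng=random):
    # Delete each row and each column independently with probability p.
    rows = [i for i in range(len(M)) if rng.random() >= p]
    cols = [j for j in range(len(M[0])) if rng.random() >= p]
    return [[M[i][j] for j in cols] for i in rows]

x = [1, 0, 1, 1, 0, 0, 1]
print(trace(x, 0.3))               # a random subsequence of x
M = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
print(matrix_trace(M, 0.3))        # a random row/column submatrix of M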

Cite as

Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, and Soumyabrata Pal. Trace Reconstruction: Generalized and Parameterized. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 68:1-68:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{krishnamurthy_et_al:LIPIcs.ESA.2019.68,
  author =	{Krishnamurthy, Akshay and Mazumdar, Arya and McGregor, Andrew and Pal, Soumyabrata},
  title =	{{Trace Reconstruction: Generalized and Parameterized}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{68:1--68:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.68},
  URN =		{urn:nbn:de:0030-drops-111891},
  doi =		{10.4230/LIPIcs.ESA.2019.68},
  annote =	{Keywords: deletion channel, trace reconstruction, matrix reconstruction}
}
Document
Generalized Assignment via Submodular Optimization with Reserved Capacity

Authors: Ariel Kulik, Kanthi Sarpatwar, Baruch Schieber, and Hadas Shachnai


Abstract
We study a variant of the generalized assignment problem (GAP) with group constraints. An instance of Group GAP is a set I of items, partitioned into L groups, and a set of m uniform (unit-sized) bins. Each item i in I has a size s_i > 0, and a profit p_{i,j} >= 0 if packed in bin j. A group of items is satisfied if all of its items are packed. The goal is to find a feasible packing of a subset of the items in the bins such that the total profit from satisfied groups is maximized. We point to central applications of Group GAP in Video-on-Demand services, mobile Device-to-Device network caching, and base station cooperation in 5G networks. Our main result is a 1/6-approximation algorithm for Group GAP instances where the total size of each group is at most m/2. At the heart of our algorithm lies an interesting derivation of a submodular function from the classic LP formulation of GAP, which facilitates the construction of a high-profit solution utilizing at most half the total bin capacity, while the other half is reserved for later use. In particular, we give an algorithm for submodular maximization subject to a knapsack constraint, which finds a solution of profit at least 1/3 of the optimum, using at most half the knapsack capacity, under mild restrictions on element sizes. Our novel approach of submodular optimization subject to a knapsack with reserved capacity constraint may find applications in solving other group assignment problems.

Cite as

Ariel Kulik, Kanthi Sarpatwar, Baruch Schieber, and Hadas Shachnai. Generalized Assignment via Submodular Optimization with Reserved Capacity. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 69:1-69:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{kulik_et_al:LIPIcs.ESA.2019.69,
  author =	{Kulik, Ariel and Sarpatwar, Kanthi and Schieber, Baruch and Shachnai, Hadas},
  title =	{{Generalized Assignment via Submodular Optimization with Reserved Capacity}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{69:1--69:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.69},
  URN =		{urn:nbn:de:0030-drops-111906},
  doi =		{10.4230/LIPIcs.ESA.2019.69},
  annote =	{Keywords: Group Generalized Assignment Problem, Submodular Maximization, Knapsack Constraints, Approximation Algorithms}
}
Document
Resilient Dictionaries for Randomly Unreliable Memory

Authors: Stefano Leucci, Chih-Hung Liu, and Simon Meierhans


Abstract
We study the problem of designing a dictionary data structure that is resilient to memory corruptions. Our error model is a variation of the faulty RAM model in which, except for a constant amount of definitely reliable memory, each memory word is randomly unreliable with probability p < 1/2, and the locations of the unreliable words are unknown to the algorithm. An adversary observes the whole memory and can, at any time, arbitrarily corrupt (i.e., modify) the contents of one or more unreliable words. Our dictionary has capacity n, stores N < n keys in the optimal O(N) amount of space, supports insertions and deletions in O(log n) amortized time, and allows searching for a key in O(log n) worst-case time. With a global probability of at least 1 - 1/n, all possible search operations are guaranteed to return the correct answer w.r.t. the set of uncorrupted keys. The closest related results are the ones of Finocchi et al. [Irene Finocchi et al., 2009] and Brodal et al. [Brodal et al., 2007] on the faulty RAM model, in which all but O(1) memory is unreliable. There, if an upper bound delta on the number of corruptions is known in advance, all dictionary operations can be implemented in Theta(log n + delta) amortized time, thus trading resiliency for speed as soon as delta = omega(log n). Our construction does not need to know the value of delta in advance and remains fast and effective even when up to a constant fraction of the available memory is corrupted. Our techniques can be immediately extended to implement other data types (e.g., associative containers and priority queues), which can then be used as building blocks in the design of other resilient algorithms. For example, we are able to solve the resilient sorting problem in our model in O(n log n) time.
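
A toy way to see why random fault locations help is replication with majority-vote reads: with 2r+1 replicas at random addresses, an adversary restricted to unreliable words corrupts a majority only with small probability. This is folklore redundancy for illustration, not the paper's dictionary construction.

import random
from collections import Counter

def write_replicated(mem, value, locs):
    for a in locs:
        mem[a] = value

def read_majority(mem, locs):
    # Majority vote over the replicas tolerates a corrupted minority.
    return Counter(mem[a] for a in locs).most_common(1)[0][0]

random.seed(0)
mem = [0] * 10000
locs = random.sample(range(len(mem)), 9)   # 2r + 1 = 9 random replicas
write_replicated(mem, 42, locs)
for a in locs[:3]:                         # adversary corrupts a minority
    mem[a] = 7
print(read_majority(mem, locs))            # 42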

Cite as

Stefano Leucci, Chih-Hung Liu, and Simon Meierhans. Resilient Dictionaries for Randomly Unreliable Memory. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 70:1-70:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{leucci_et_al:LIPIcs.ESA.2019.70,
  author =	{Leucci, Stefano and Liu, Chih-Hung and Meierhans, Simon},
  title =	{{Resilient Dictionaries for Randomly Unreliable Memory}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{70:1--70:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.70},
  URN =		{urn:nbn:de:0030-drops-111911},
  doi =		{10.4230/LIPIcs.ESA.2019.70},
  annote =	{Keywords: resilient dictionary, unreliable memory, faulty RAM}
}
Document
Hermitian Laplacians and a Cheeger Inequality for the Max-2-Lin Problem

Authors: Huan Li, He Sun, and Luca Zanetti


Abstract
We study spectral approaches for the MAX-2-LIN(k) problem, in which we are given a system of m linear equations of the form x_i - x_j ≡ c_{ij} (mod k), and are required to find an assignment to the n variables {x_i} that maximises the total number of satisfied equations. We consider Hermitian Laplacians related to this problem, and prove a Cheeger inequality that relates the smallest eigenvalue of a Hermitian Laplacian to the maximum number of satisfied equations of a MAX-2-LIN(k) instance I. We develop an O~(kn^2) time algorithm that, for any (1-epsilon)-satisfiable instance, produces an assignment satisfying a (1 - O(k) sqrt(epsilon))-fraction of equations. We also present a subquadratic-time algorithm that, when the graph associated with I is an expander, produces an assignment satisfying a (1 - O(k^2) epsilon)-fraction of the equations. Our Cheeger inequality and first algorithm can be seen as generalisations of the Cheeger inequality and algorithm for MAX-CUT developed by Trevisan.
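
The Hermitian Laplacian in question can be written down directly: with omega = e^(2 pi i / k), an equation x_u - x_v ≡ c (mod k) contributes off-diagonal entries -omega^c and its conjugate, plus vertex degrees on the diagonal, so a fully satisfying assignment yields the eigenvector (omega^{x_1}, ..., omega^{x_n}) with eigenvalue 0. The NumPy sketch below builds this matrix and reports its spectrum; normalization details are simplified relative to the paper.

import numpy as np

def hermitian_laplacian(n, k, equations):
    # equations: list of (u, v, c) encoding x_u - x_v = c (mod k).
    # Returns L = D - A with A[u, v] = omega^c and A[v, u] its conjugate.
    omega = np.exp(2j * np.pi / k)
    A = np.zeros((n, n), dtype=complex)
    D = np.zeros((n, n))
    for u, v, c in equations:
        A[u, v] += omega ** c
        A[v, u] += omega ** (-c)
        D[u, u] += 1
        D[v, v] += 1
    return D - A

# A fully satisfiable MAX-2-LIN(3) instance, satisfied by x = (0, 1, 2).
eqs = [(0, 1, -1), (1, 2, -1), (0, 2, -2)]
L = hermitian_laplacian(3, 3, eqs)
lam = np.linalg.eigvalsh(L)        # L is Hermitian, so eigenvalues are real
print(np.round(lam, 6))            # smallest eigenvalue ~ 0: satisfiable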

Cite as

Huan Li, He Sun, and Luca Zanetti. Hermitian Laplacians and a Cheeger Inequality for the Max-2-Lin Problem. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 71:1-71:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{li_et_al:LIPIcs.ESA.2019.71,
  author =	{Li, Huan and Sun, He and Zanetti, Luca},
  title =	{{Hermitian Laplacians and a Cheeger Inequality for the Max-2-Lin Problem}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{71:1--71:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.71},
  URN =		{urn:nbn:de:0030-drops-111926},
  doi =		{10.4230/LIPIcs.ESA.2019.71},
  annote =	{Keywords: Spectral methods, Hermitian Laplacians, the Max-2-Lin problem, Unique Games}
}
Document
Packing Directed Circuits Quarter-Integrally

Authors: Tomáš Masařík, Irene Muzi, Marcin Pilipczuk, Paweł Rzążewski, and Manuel Sorge


Abstract
The celebrated Erdős-Pósa theorem states that every undirected graph that does not admit a family of k vertex-disjoint cycles contains a feedback vertex set (a set of vertices hitting all cycles in the graph) of size O(k log k). A similar statement for directed graphs, long known as Younger's conjecture, was proven in 1996 by Reed, Robertson, Seymour, and Thomas. However, in their proof, the dependency of the size of the feedback vertex set on the size of the vertex-disjoint cycle packing is not elementary. We show that if we compare the size of a minimum feedback vertex set in a directed graph with the quarter-integral cycle packing number, we obtain a polynomial bound. More precisely, we show that if in a directed graph G there is no family of k cycles such that every vertex of G is in at most four of the cycles, then there exists a feedback vertex set in G of size O(k^4). On the way there, we prove a more general result about quarter-integral packing of subgraphs of high directed treewidth: for every pair of positive integers a and b, if a directed graph G has directed treewidth Omega(a^6 b^8 log^2(ab)), then one can find in G a family of a subgraphs, each of directed treewidth at least b, such that every vertex of G is in at most four subgraphs.

Cite as

Tomáš Masařík, Irene Muzi, Marcin Pilipczuk, Paweł Rzążewski, and Manuel Sorge. Packing Directed Circuits Quarter-Integrally. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 72:1-72:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{masarik_et_al:LIPIcs.ESA.2019.72,
  author =	{Masa\v{r}{\'\i}k, Tom\'{a}\v{s} and Muzi, Irene and Pilipczuk, Marcin and Rz\k{a}\.{z}ewski, Pawe{\l} and Sorge, Manuel},
  title =	{{Packing Directed Circuits Quarter-Integrally}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{72:1--72:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.72},
  URN =		{urn:nbn:de:0030-drops-111938},
  doi =		{10.4230/LIPIcs.ESA.2019.72},
  annote =	{Keywords: Directed graphs, graph algorithms, linkage, Erd\H{o}s–P\'{o}sa property}
}
Document
Equal-Subset-Sum Faster Than the Meet-in-the-Middle

Authors: Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, and Karol Węgrzycki


Abstract
In the Equal-Subset-Sum problem, we are given a set S of n integers and the task is to decide whether there exist two disjoint nonempty subsets A, B subseteq S whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in O^*(3^(n/2)) <= O^*(1.7321^n) time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give an O^*(1.7088^n) worst-case Monte Carlo algorithm. This answers a question suggested by Woeginger in his inspirational survey. Additionally, we analyse polynomial-space algorithms for Equal-Subset-Sum. A naive polynomial-space algorithm for Equal-Subset-Sum runs in O^*(3^n) time. With read-only access to exponentially many random bits, we give a randomized algorithm running in O^*(2.6817^n) time and polynomial space.
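
For reference, the O^*(3^(n/2)) meet-in-the-middle baseline that the paper improves can be sketched directly: each element of either half goes to A, to B, or to neither; the halves are merged on the value sum(A) - sum(B); and a match with total difference zero gives two disjoint equal-sum subsets. The sketch below favors clarity over the bookkeeping needed to keep the merge step within the stated time bound.

from itertools import product

def equal_subset_sum_mim(nums):
    # Classic meet-in-the-middle for Equal-Subset-Sum (clarity over speed).
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def assignments(items, offset):
        table = {}
        for signs in product((0, 1, -1), repeat=len(items)):   # 0 = unused
            a = [offset + i for i, s in enumerate(signs) if s == 1]
            b = [offset + i for i, s in enumerate(signs) if s == -1]
            d = sum(nums[i] for i in a) - sum(nums[i] for i in b)
            table.setdefault(d, []).append((a, b))
        return table

    lt = assignments(left, 0)
    rt = assignments(right, half)
    for d, pairs in lt.items():
        for ra, rb in rt.get(-d, []):      # total difference must be zero
            for la, lb in pairs:
                A, B = la + ra, lb + rb    # disjoint by construction
                if A and B:                # both sides must be nonempty
                    return A, B
    return None

# Index sets with equal sums, e.g. {3, 5} and {6, 2}.
print(equal_subset_sum_mim([4, 3, 5, 6, 2]))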

Cite as

Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, and Karol Węgrzycki. Equal-Subset-Sum Faster Than the Meet-in-the-Middle. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 73:1-73:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{mucha_et_al:LIPIcs.ESA.2019.73,
  author =	{Mucha, Marcin and Nederlof, Jesper and Pawlewicz, Jakub and W\k{e}grzycki, Karol},
  title =	{{Equal-Subset-Sum Faster Than the Meet-in-the-Middle}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{73:1--73:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.73},
  URN =		{urn:nbn:de:0030-drops-111946},
  doi =		{10.4230/LIPIcs.ESA.2019.73},
  annote =	{Keywords: Equal-Subset-Sum, Subset-Sum, meet-in-the-middle, enumeration technique, randomized algorithm}
}
Document
Hardness of Bichromatic Closest Pair with Jaccard Similarity

Authors: Rasmus Pagh, Nina Mesing Stausholm, and Mikkel Thorup


Abstract
Consider collections A and B of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from A x B that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity |a cap b|/|a cup b| for (a,b) in A x B. We consider the approximate version of the problem where we are given thresholds j_1 > j_2 and wish to return a pair from A x B that has Jaccard similarity higher than j_2 if there exists a pair in A x B with Jaccard similarity at least j_1. The classic locality sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of Broder et al., solves this problem in O~(n^(2-delta)) time if j_1 >= j_2^(1-delta). In particular, for delta=Omega(1), the approximation ratio j_1/j_2 = 1/j_2^delta increases polynomially in 1/j_2. In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in O(n^(2-Omega(1))) time for j_1/j_2 = 1/j_2^o(1). Specifically, assuming OVC, we prove that for any delta>0 there exists an epsilon>0 such that Bichromatic Closest Pair with Jaccard similarity requires time Omega(n^(2-delta)) for any choice of thresholds j_2 < j_1 < 1-delta, that satisfy j_1 <= j_2^(1-epsilon).
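
The MinHash estimator referenced above rests on one fact: under a random hash ordering of the universe, the probability that two sets share the same minimizer equals their Jaccard similarity, so agreement across independent hash functions estimates it. A minimal sketch, with salted hashing standing in for random permutations:

import random

def minhash_signature(s, seeds):
    # One minimum per salted hash function approximates a random permutation.
    return [min(hash((seed, x)) for x in s) for seed in seeds]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

random.seed(1)
seeds = [random.getrandbits(64) for _ in range(500)]
a = set(range(0, 80))
b = set(range(40, 120))            # true Jaccard = 40 / 120 = 1/3
sa, sb = minhash_signature(a, seeds), minhash_signature(b, seeds)
print(estimate_jaccard(sa, sb))    # close to 0.333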

Cite as

Rasmus Pagh, Nina Mesing Stausholm, and Mikkel Thorup. Hardness of Bichromatic Closest Pair with Jaccard Similarity. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 74:1-74:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{pagh_et_al:LIPIcs.ESA.2019.74,
  author =	{Pagh, Rasmus and Stausholm, Nina Mesing and Thorup, Mikkel},
  title =	{{Hardness of Bichromatic Closest Pair with Jaccard Similarity}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{74:1--74:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.74},
  URN =		{urn:nbn:de:0030-drops-111951},
  doi =		{10.4230/LIPIcs.ESA.2019.74},
  annote =	{Keywords: fine-grained complexity, set similarity search, bichromatic closest pair, jaccard similarity}
}
Document
Compact Oblivious Routing

Authors: Harald Räcke and Stefan Schmid


Abstract
Oblivious routing is an attractive paradigm for large distributed systems in which centralized control and frequent reconfigurations are infeasible or undesired (e.g., costly). Over the last almost 20 years, much progress has been made in devising oblivious routing schemes that guarantee close-to-optimal load, and efficient algorithms for constructing such schemes have been designed. However, a common drawback of existing oblivious routing schemes is that they are not compact: they require large routing tables (of polynomial size), which does not scale. This paper presents the first oblivious routing scheme that guarantees close-to-optimal load and is compact at the same time, requiring routing tables of only polylogarithmic size. Our algorithm maintains the polylogarithmic competitive ratio of existing algorithms and is hence particularly well-suited for emerging large-scale networks.

Cite as

Harald Räcke and Stefan Schmid. Compact Oblivious Routing. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{racke_et_al:LIPIcs.ESA.2019.75,
  author =	{R\"{a}cke, Harald and Schmid, Stefan},
  title =	{{Compact Oblivious Routing}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.75},
  URN =		{urn:nbn:de:0030-drops-111968},
  doi =		{10.4230/LIPIcs.ESA.2019.75},
  annote =	{Keywords: Oblivious Routing, Compact Routing, Competitive Analysis}
}
Document
Geometric Crossing-Minimization - A Scalable Randomized Approach

Authors: Marcel Radermacher and Ignaz Rutter


Abstract
We consider the minimization of edge crossings in geometric drawings of graphs G=(V, E), i.e., in drawings where each edge is depicted as a line segment. The respective decision problem is NP-hard [Daniel Bienstock, 1991]. Crossing minimization, in general, is a popular theoretical research topic; see Vrt'o [Imrich Vrt'o, 2014]. In contrast to theory and the topological setting, the geometric setting has not received a lot of attention in practice. Prior work [Marcel Radermacher et al., 2018] is limited to crossing minimization in geometric graphs with fewer than 200 edges. The described heuristics are based on the primitive operation of moving a single vertex v to its crossing-minimal position, i.e., the position in R^2 that minimizes the number of crossings on edges incident to v. In this paper, we introduce a technique to speed up this computation by a factor of 20. This is necessary but not sufficient to cope with graphs having a few thousand edges. In order to handle larger graphs, we drop the condition that each vertex v has to be moved to its crossing-minimal position and compute a position that is only optimal with respect to a small random subset of the edges. For our theoretical contribution, we consider drawings in which, for each edge uv in E and each position p in R^2 for v, the edge uv has o(|E|) crossings. In this case, we prove that with a random subset of the edges of size Theta(k log k), the co-crossing number of a degree-k vertex v, i.e., the number of edge pairs (uv, e) in E x E that do not cross, can be approximated within an arbitrary but fixed factor delta with high probability. In our experimental evaluation, we show that the randomized approach reduces the number of crossings in graphs with up to 13 000 edges considerably. The evaluation suggests that, depending on the degree distribution, different strategies result in the fewest crossings.
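
The primitive being optimized, the number of crossings of a straight-line drawing, can be counted with orientation tests. The sketch below performs the naive count over all edge pairs under a general-position assumption (no three collinear points); avoiding exactly this quadratic blow-up is what the paper's sampling technique addresses.

from itertools import combinations

def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    # Proper crossing of segments ab and cd (general position assumed).
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def count_crossings(pos, edges):
    cnt = 0
    for (u, v), (w, z) in combinations(edges, 2):
        if len({u, v, w, z}) == 4 and segments_cross(pos[u], pos[v],
                                                     pos[w], pos[z]):
            cnt += 1
    return cnt

pos = {0: (0, 0), 1: (2, 2), 2: (0, 2), 3: (2, 0)}
print(count_crossings(pos, [(0, 1), (2, 3)]))   # the two diagonals cross: 1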

Cite as

Marcel Radermacher and Ignaz Rutter. Geometric Crossing-Minimization - A Scalable Randomized Approach. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 76:1-76:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{radermacher_et_al:LIPIcs.ESA.2019.76,
  author =	{Radermacher, Marcel and Rutter, Ignaz},
  title =	{{Geometric Crossing-Minimization - A Scalable Randomized Approach}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{76:1--76:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.76},
  URN =		{urn:nbn:de:0030-drops-111977},
  doi =		{10.4230/LIPIcs.ESA.2019.76},
  annote =	{Keywords: Geometric Crossing Minimization, Randomization, Approximation, VC-Dimension, Experiments}
}
Document
An Approximate Kernel for Connected Feedback Vertex Set

Authors: M. S. Ramanujan


Abstract
The Feedback Vertex Set problem is a fundamental computational problem which has been the subject of intensive study in various domains of algorithmics. In this problem, one is given an undirected graph G and an integer k as input. The objective is to determine whether at most k vertices can be deleted from G such that the resulting graph is acyclic. The study of preprocessing algorithms for this problem has a long and rich history, culminating in the quadratic kernelization of Thomassé [SODA 2010]. However, it is known that when the solution is required to induce a connected subgraph (such a set is called a connected feedback vertex set), a polynomial kernelization is unlikely to exist and the problem is NP-hard to approximate below a factor of 2 (assuming the Unique Games Conjecture). In this paper, we show that if one is interested in only preserving approximate solutions (even of quality arbitrarily close to the optimum), then there is a drastic improvement in our ability to preprocess this problem. Specifically, we prove that for every fixed 0 < epsilon < 1, graph G, and k in N, the following holds: there is a polynomial time computable graph G' of size k^O(1) such that for every c >= 1, any c-approximate connected feedback vertex set of G' of size at most k is a c * (1+epsilon)-approximate connected feedback vertex set of G. Our result adds to the set of approximate kernelization algorithms introduced by Lokshtanov et al. [STOC 2017]. As a consequence of our main result, we show that Connected Feedback Vertex Set can be approximated within a factor min{OPT^O(1), n^(1-delta)} in polynomial time for some delta > 0.

Cite as

M. S. Ramanujan. An Approximate Kernel for Connected Feedback Vertex Set. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 77:1-77:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{ramanujan:LIPIcs.ESA.2019.77,
  author =	{Ramanujan, M. S.},
  title =	{{An Approximate Kernel for Connected Feedback Vertex Set}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{77:1--77:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.77},
  URN =		{urn:nbn:de:0030-drops-111989},
  doi =		{10.4230/LIPIcs.ESA.2019.77},
  annote =	{Keywords: Parameterized Complexity, Kernelization, Approximation Algorithms}
}
Document
Multicommodity Multicast, Wireless and Fast

Authors: R. Ravi and Oleksandr Rudenko


Abstract
We study rumor spreading in graphs, specifically the multicommodity multicast problem under the wireless model: given source-destination pairs in the graph, one needs to find the fastest schedule to transfer information from each source to the corresponding destination. Under the wireless model, nodes can transmit to any subset of their neighbors in synchronous time steps, as long as they either transmit or receive from at most one transmitter during the same time step. We improve the approximation ratio for this problem from O~(n^(2/3)) to O~(n^((1/2) + epsilon)) on n-node graphs. We also design an algorithm that satisfies p given demand pairs in O(OPT + p) steps, where OPT is the length of an optimal schedule, by reducing it to the well-studied packet routing problem. In the case where the underlying graph is an n-node tree, we improve the previously best-known approximation ratio of O((log n)/(log log n)) to 3. One consequence of our proof is a simple constructive rule for optimal broadcasting in a tree under the widely studied telephone model.
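
The constructive rule for trees alluded to above is, in its classical form, a greedy: an informed vertex calls its children in non-increasing order of their own subtree broadcast times. The sketch below computes the resulting broadcast time bottom-up; the encoding is ours, and no claim is made that this is exactly the paper's rule.

def broadcast_time(tree, root):
    # tree: dict vertex -> list of children.  Telephone model: in each round
    # an informed vertex informs at most one neighbor.  Greedy rule: serve
    # children in non-increasing order of their subtree broadcast times.
    def bt(v):
        child_times = sorted((bt(c) for c in tree.get(v, [])), reverse=True)
        # The i-th child called (1-indexed) finishes at i + its own time.
        return max((i + t for i, t in enumerate(child_times, start=1)), default=0)
    return bt(root)

# A root with three leaf children needs 3 rounds; so does a path of 3 edges.
print(broadcast_time({0: [1, 2, 3]}, 0))             # 3
print(broadcast_time({0: [1], 1: [2], 2: [3]}, 0))   # 3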

Cite as

R. Ravi and Oleksandr Rudenko. Multicommodity Multicast, Wireless and Fast. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 78:1-78:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{ravi_et_al:LIPIcs.ESA.2019.78,
  author =	{Ravi, R. and Rudenko, Oleksandr},
  title =	{{Multicommodity Multicast, Wireless and Fast}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{78:1--78:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.78},
  URN =		{urn:nbn:de:0030-drops-111991},
  doi =		{10.4230/LIPIcs.ESA.2019.78},
  annote =	{Keywords: Rumor, scheduling, broadcast, multicommodity, multicast, optimization algorithms, approximation, packet routing}
}
Document
Recognizing Planar Laman Graphs

Authors: Jonathan Rollin, Lena Schlipf, and André Schulz


Abstract
Laman graphs are the minimally rigid graphs in the plane. We present two algorithms for recognizing planar Laman graphs: a simple algorithm with running time O(n^(3/2)), and a more involved algorithm with running time O(n log^3 n) based on planar network flow algorithms. Both improve upon the previously fastest algorithm for general graphs by Gabow and Westermann [Algorithmica, 7(5-6):465 - 497, 1992], whose running time is O(n sqrt(n log n)). To solve this problem we introduce two algorithms (with the running times stated above) that check whether, for a directed planar graph G, disjoint sets S, T subseteq V(G), and a fixed k, the following connectivity condition holds: for each vertex s in S there are k directed paths from s to T pairwise having only vertex s in common. This variant of connectivity seems interesting in its own right.
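
Laman's counting condition itself is simple: |E| = 2|V| - 3 and every vertex subset V' with |V'| >= 2 induces at most 2|V'| - 3 edges. The brute-force check below, exponential in n, only pins down the definition that the paper's near-linear algorithms test for planar graphs.

from itertools import combinations

def is_laman(n, edges):
    # Brute-force Laman check; exponential in n, for definition only.
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            induced = sum(1 for u, v in edges if u in s and v in s)
            if induced > 2 * k - 3:
                return False
    return True

# K4 minus an edge is Laman (n = 4, 2n - 3 = 5 edges, all subsets sparse).
print(is_laman(4, [(0, 1), (1, 2), (0, 2), (0, 3), (1, 3)]))                  # True
# K4 plus a pendant edge has 7 = 2*5 - 3 edges but K4 violates the subset count.
print(is_laman(5, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 4)]))  # False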

Cite as

Jonathan Rollin, Lena Schlipf, and André Schulz. Recognizing Planar Laman Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 79:1-79:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{rollin_et_al:LIPIcs.ESA.2019.79,
  author =	{Rollin, Jonathan and Schlipf, Lena and Schulz, Andr\'{e}},
  title =	{{Recognizing Planar Laman Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{79:1--79:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.79},
  URN =		{urn:nbn:de:0030-drops-112001},
  doi =		{10.4230/LIPIcs.ESA.2019.79},
  annote =	{Keywords: planar graphs, Laman graphs, network flow, connectivity}
}
Document
Simultaneous Representation of Proper and Unit Interval Graphs

Authors: Ignaz Rutter, Darren Strash, Peter Stumpf, and Michael Vollmer


Abstract
In a confluence of combinatorics and geometry, simultaneous representations provide a way to realize combinatorial objects that share common structure. A standard case in the study of simultaneous representations is the sunflower case where all objects share the same common structure. While the recognition problem for general simultaneous interval graphs - the simultaneous version of arguably one of the most well-studied graph classes - is NP-complete, the complexity of the sunflower case for three or more simultaneous interval graphs is currently open. In this work we settle this question for proper interval graphs. We give an algorithm to recognize simultaneous proper interval graphs in linear time in the sunflower case where we allow any number of simultaneous graphs. Simultaneous unit interval graphs are much more "rigid" and therefore have less freedom in their representation. We show they can be recognized in time O(|V|*|E|) for any number of simultaneous graphs in the sunflower case where G=(V,E) is the union of the simultaneous graphs. We further show that both recognition problems are in general NP-complete if the number of simultaneous graphs is not fixed. The restriction to the sunflower case is in this sense necessary.

Cite as

Ignaz Rutter, Darren Strash, Peter Stumpf, and Michael Vollmer. Simultaneous Representation of Proper and Unit Interval Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 80:1-80:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{rutter_et_al:LIPIcs.ESA.2019.80,
  author =	{Rutter, Ignaz and Strash, Darren and Stumpf, Peter and Vollmer, Michael},
  title =	{{Simultaneous Representation of Proper and Unit Interval Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{80:1--80:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.80},
  URN =		{urn:nbn:de:0030-drops-112013},
  doi =		{10.4230/LIPIcs.ESA.2019.80},
  annote =	{Keywords: Intersection Graphs, Recognition Algorithm, Proper/Unit Interval Graphs, Simultaneous Representations}
}
Document
Correlation Clustering with Same-Cluster Queries Bounded by Optimal Cost

Authors: Barna Saha and Sanjay Subramanian


Abstract
Several clustering frameworks with interactive (semi-supervised) queries have been studied in the past. Recently, clustering with same-cluster queries has become popular. An algorithm in this setting has access to an oracle with full knowledge of an optimal clustering, and the algorithm can ask the oracle queries of the form, "Does the optimal clustering put vertices u and v in the same cluster?" Due to its simplicity, this querying model can easily be implemented in real crowd-sourcing platforms and has attracted a lot of recent work. In this paper, we study the popular correlation clustering problem (Bansal et al., 2002) under the same-cluster querying framework. Given a complete graph G=(V,E) with positive and negative edge labels, the correlation clustering objective is to compute a clustering that minimizes the total number of disagreements, that is, the number of negative intra-cluster edges plus the number of positive inter-cluster edges. In a recent work, Ailon et al. (2018b) provided an algorithm that approximates the correlation clustering objective within a factor of (1+epsilon) using O((k^{14} log{n} log{k})/epsilon^6) queries when the number of clusters, k, is fixed. For many applications, k is not fixed and can grow with |V|. Moreover, the k^{14} dependence of the query complexity renders the algorithm impractical even for datasets with small values of k. In this paper, we take a different approach. Let C_{OPT} be the number of disagreements made by the optimal clustering. We present algorithms for correlation clustering whose error and query bounds are parameterized by C_{OPT} rather than by the number of clusters. Indeed, a good clustering must have small C_{OPT}. Specifically, we present an efficient algorithm that recovers an exact optimal clustering using at most 2C_{OPT} queries, and an efficient algorithm that outputs a 2-approximation using at most C_{OPT} queries. In addition, we show that, under a plausible complexity assumption, no polynomial-time algorithm with o(C_{OPT}) queries achieves an approximation ratio better than 1+alpha, for some absolute constant alpha > 0. Therefore, our first algorithm achieves the optimal query bound up to a factor of 2. We extensively evaluate our methods on several synthetic and real-world datasets using real crowd-sourced oracles. Moreover, we compare our approach against known correlation clustering algorithms that do not perform querying. In all cases, our algorithms exhibit superior performance.
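
The objective being minimized is simple to state in code. A minimal sketch (illustrative names; this evaluates a given clustering and is not the paper's query algorithm):

def disagreements(signs, cluster):
    # signs maps each unordered pair (u, v) with u < v to '+' or '-';
    # cluster maps each vertex to its cluster id. A disagreement is a
    # negative intra-cluster edge or a positive inter-cluster edge.
    bad = 0
    for (u, v), sign in signs.items():
        same = cluster[u] == cluster[v]
        if (sign == '-' and same) or (sign == '+' and not same):
            bad += 1
    return bad

# Triangle labeled +, +, -: every clustering incurs at least one disagreement.
signs = {(0, 1): '+', (0, 2): '+', (1, 2): '-'}
print(disagreements(signs, {0: 0, 1: 0, 2: 0}))  # 1: the '-' edge is intra-cluster

C_{OPT} is then the minimum of this quantity over all clusterings.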

Cite as

Barna Saha and Sanjay Subramanian. Correlation Clustering with Same-Cluster Queries Bounded by Optimal Cost. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 81:1-81:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{saha_et_al:LIPIcs.ESA.2019.81,
  author =	{Saha, Barna and Subramanian, Sanjay},
  title =	{{Correlation Clustering with Same-Cluster Queries Bounded by Optimal Cost}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{81:1--81:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.81},
  URN =		{urn:nbn:de:0030-drops-112020},
  doi =		{10.4230/LIPIcs.ESA.2019.81},
  annote =	{Keywords: Clustering, Approximation Algorithm, Crowdsourcing, Randomized Algorithm}
}
Document
Graph Balancing with Orientation Costs

Authors: Roy Schwartz and Ran Yeheskel


Abstract
Motivated by the classic Generalized Assignment Problem, we consider the Graph Balancing problem in the presence of orientation costs: given an undirected multigraph G=(V,E) equipped with edge weights and orientation costs on the edges, the goal is to find an orientation of the edges that minimizes both the maximum total weight of edges oriented toward any vertex (the makespan) and the total orientation cost. We present a general framework for minimizing makespan in the presence of costs that allows us to: (1) achieve bicriteria approximations for the Graph Balancing problem that capture known previous results (Shmoys-Tardos [Math. Program. '93], Ebenlendr-Krcál-Sgall [Algorithmica '14], and Wang-Sitters [Inf. Process. Lett. '16]); and (2) achieve bicriteria approximations for extensions of the Graph Balancing problem that admit hyperedges and unrelated weights. Our framework is based on a remarkably simple rounding of a strengthened linear relaxation. We complement the above with bicriteria lower bounds, with respect to the linear programming relaxations we use, showing that a loss in the total orientation cost is required if one aims for a makespan approximation better than 2.
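
As a hedged illustration of the bicriteria objective (names are ours; this only evaluates a given orientation and is not the paper's rounding framework):

from collections import defaultdict

def evaluate_orientation(oriented_edges):
    # Each oriented edge is a triple (head, weight, cost): the edge is
    # oriented toward vertex `head`, adds `weight` to that vertex's load,
    # and incurs `cost` for this orientation choice.
    load = defaultdict(float)
    total_cost = 0.0
    for head, weight, cost in oriented_edges:
        load[head] += weight
        total_cost += cost
    makespan = max(load.values(), default=0.0)
    return makespan, total_cost

# Two parallel edges, both oriented toward vertex 1:
print(evaluate_orientation([(1, 3.0, 1.0), (1, 2.0, 0.5)]))  # (5.0, 1.5)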

Cite as

Roy Schwartz and Ran Yeheskel. Graph Balancing with Orientation Costs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 82:1-82:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{schwartz_et_al:LIPIcs.ESA.2019.82,
  author =	{Schwartz, Roy and Yeheskel, Ran},
  title =	{{Graph Balancing with Orientation Costs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{82:1--82:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.82},
  URN =		{urn:nbn:de:0030-drops-112034},
  doi =		{10.4230/LIPIcs.ESA.2019.82},
  annote =	{Keywords: Graph Balancing, Generalized Assignment Problem}
}
Document
FPT-Algorithms for Computing Gromov-Hausdorff and Interleaving Distances Between Trees

Authors: Elena Farahbakhsh Touli and Yusu Wang


Abstract
The Gromov-Hausdorff distance is a natural way to measure the distortion between two metric spaces. However, there has been only limited algorithmic development to compute or approximate this distance. We focus on computing the Gromov-Hausdorff distance between two metric trees. Roughly speaking, a metric tree is a metric space that can be realized by the shortest-path metric on a tree. Any finite tree with positive edge weights can be viewed as a metric tree, where the weight is treated as edge length and the metric is the induced shortest-path metric in the tree. Previously, Agarwal et al. showed that, even for trees with unit edge length, it is NP-hard to approximate the Gromov-Hausdorff distance between them within a factor of 3. In this paper, we present a fixed-parameter tractable (FPT) algorithm that approximates the Gromov-Hausdorff distance between two general metric trees within a multiplicative factor of 14. Interestingly, the development of our algorithm is made possible by a connection between the Gromov-Hausdorff distance for metric trees and the interleaving distance for so-called merge trees. Merge trees arise naturally in practice as a simple yet meaningful topological summary (a variant of Reeb graphs and contour trees), and are of independent interest. It turns out that an exact or approximation algorithm for the interleaving distance leads to an approximation algorithm for the Gromov-Hausdorff distance. One of the key contributions of our work is to re-define the interleaving distance in a way that makes it easier to develop dynamic programming approaches to compute it. We then present a fixed-parameter tractable algorithm to compute the interleaving distance between two merge trees exactly, which ultimately leads to an FPT-algorithm approximating the Gromov-Hausdorff distance between two metric trees. This exact FPT-algorithm for the interleaving distance between merge trees is of independent interest, as it is known that this distance is NP-hard to approximate within a factor of 3, and previously the best known algorithm had an approximation factor of O(sqrt{n}) even for trees with unit edge length.
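
For background on the objects involved (a plain sketch of the "metric tree" definition, not the paper's FPT machinery; names are ours):

from collections import defaultdict

def tree_metric(n, weighted_edges):
    # Realize the shortest-path metric of a positively edge-weighted tree
    # by a depth-first traversal from every vertex: O(n^2) overall.
    adj = defaultdict(list)
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [[0.0] * n for _ in range(n)]
    for s in range(n):
        stack = [(s, -1, 0.0)]
        while stack:
            u, parent, d = stack.pop()
            dist[s][u] = d
            for v, w in adj[u]:
                if v != parent:
                    stack.append((v, u, d + w))
    return dist

# Path 0 --1.0-- 1 --2.0-- 2: the induced metric has d(0, 2) = 3.0.
print(tree_metric(3, [(0, 1, 1.0), (1, 2, 2.0)])[0][2])  # 3.0

The Gromov-Hausdorff distance then measures how far two such metrics are from being isometric.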

Cite as

Elena Farahbakhsh Touli and Yusu Wang. FPT-Algorithms for Computing Gromov-Hausdorff and Interleaving Distances Between Trees. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 83:1-83:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{farahbakhshtouli_et_al:LIPIcs.ESA.2019.83,
  author =	{Farahbakhsh Touli, Elena and Wang, Yusu},
  title =	{{FPT-Algorithms for Computing Gromov-Hausdorff and Interleaving Distances Between Trees}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{83:1--83:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.83},
  URN =		{urn:nbn:de:0030-drops-112048},
  doi =		{10.4230/LIPIcs.ESA.2019.83},
  annote =	{Keywords: Gromov-Hausdorff distance, Interleaving distance, Merge trees}
}
