LIPIcs, Volume 173

28th Annual European Symposium on Algorithms (ESA 2020)




Event

ESA 2020, September 7-9, 2020, Pisa, Italy (Virtual Conference)

Editors

Fabrizio Grandoni
  • IDSIA, USI-SUPSI, Manno, Switzerland
Grzegorz Herman
  • Jagiellonian University, Kraków, Poland
Peter Sanders
  • Karlsruhe Institute of Technology, Germany

Publication Details

  • Published: 2020-08-26
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-162-7
  • DBLP: db/conf/esa/esa2020


Documents

Complete Volume
LIPIcs, Volume 173, ESA 2020, Complete Volume

Authors: Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders


Abstract
LIPIcs, Volume 173, ESA 2020, Complete Volume

Cite as

28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 1-1598, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Proceedings{grandoni_et_al:LIPIcs.ESA.2020,
  title =	{{LIPIcs, Volume 173, ESA 2020, Complete Volume}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{1--1598},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020},
  URN =		{urn:nbn:de:0030-drops-128651},
  doi =		{10.4230/LIPIcs.ESA.2020},
  annote =	{Keywords: LIPIcs, Volume 173, ESA 2020, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 0:i-0:xx, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{grandoni_et_al:LIPIcs.ESA.2020.0,
  author =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.0},
  URN =		{urn:nbn:de:0030-drops-128669},
  doi =		{10.4230/LIPIcs.ESA.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Planar Bichromatic Bottleneck Spanning Trees

Authors: A. Karim Abu-Affash, Sujoy Bhore, Paz Carmi, and Joseph S. B. Mitchell


Abstract
Given a set P of n red and blue points in the plane, a planar bichromatic spanning tree of P is a geometric spanning tree of P such that each edge connects a red and a blue point, and no two edges intersect. In the bottleneck planar bichromatic spanning tree problem, the goal is to find a planar bichromatic spanning tree T such that the length of the longest edge in T is minimized. In this paper, we show that this problem is NP-hard for points in general position. Our main contribution is a polynomial-time (8√2)-approximation algorithm, obtained by showing that any bichromatic spanning tree of bottleneck λ can be converted to a planar bichromatic spanning tree of bottleneck at most 8√2 λ.

Cite as

A. Karim Abu-Affash, Sujoy Bhore, Paz Carmi, and Joseph S. B. Mitchell. Planar Bichromatic Bottleneck Spanning Trees. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 1:1-1:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{abuaffash_et_al:LIPIcs.ESA.2020.1,
  author =	{Abu-Affash, A. Karim and Bhore, Sujoy and Carmi, Paz and Mitchell, Joseph S. B.},
  title =	{{Planar Bichromatic Bottleneck Spanning Trees}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{1:1--1:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.1},
  URN =		{urn:nbn:de:0030-drops-128670},
  doi =		{10.4230/LIPIcs.ESA.2020.1},
  annote =	{Keywords: Approximation Algorithms, Bottleneck Spanning Tree, NP-Hardness}
}
Parallel Batch-Dynamic Trees via Change Propagation

Authors: Umut A. Acar, Daniel Anderson, Guy E. Blelloch, Laxman Dhulipala, and Sam Westrick


Abstract
The dynamic trees problem is to maintain a forest subject to edge insertions and deletions while facilitating queries such as connectivity, path weights, and subtree weights. Dynamic trees are a fundamental building block of a large number of graph algorithms. Although traditionally studied in the single-update setting, dynamic algorithms capable of supporting batches of updates are increasingly relevant today due to the emergence of rapidly evolving dynamic datasets. Since processing updates on a single processor is often unrealistic for large batches of updates, designing parallel batch-dynamic algorithms that achieve provably low span is important for many applications. In this work, we design the first work-efficient parallel batch-dynamic algorithm for dynamic trees that is capable of supporting both path queries and subtree queries, as well as a variety of nonlocal queries. The previous work-efficient dynamic trees of Tseng et al. [ALENEX'19 (2019), pp. 92-106] were only capable of handling subtree queries. To achieve this, we propose a framework for algorithmically dynamizing static round-synchronous algorithms to obtain parallel batch-dynamic algorithms. In our framework, the algorithm designer can apply the technique to any suitably defined static algorithm. We then obtain theoretical guarantees for algorithms in our framework by defining the notion of a computation distance between two executions of the underlying algorithm. Our dynamic trees algorithm is obtained by applying our dynamization framework to the parallel tree contraction algorithm of Miller and Reif [FOCS'85 (1985), pp. 478-489], and then performing a novel analysis of the computation distance of this algorithm under batch updates. We show that k updates can be performed in O(k log(1+n/k)) work in expectation, which matches the algorithm of Tseng et al. while providing support for a substantially larger number of queries and applications.
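
As a quick sanity check on this bound (our arithmetic, not a claim from the paper), note how it interpolates between the single-update and full-batch regimes:

\[
  k = 1:\ O(\log(1+n)) = O(\log n), \qquad k = n:\ O(n \log 2) = O(n),
\]

i.e., a single update costs the familiar O(log n) work of sequential dynamic trees, while a batch of n updates costs O(1) amortized work per update.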

Cite as

Umut A. Acar, Daniel Anderson, Guy E. Blelloch, Laxman Dhulipala, and Sam Westrick. Parallel Batch-Dynamic Trees via Change Propagation. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{acar_et_al:LIPIcs.ESA.2020.2,
  author =	{Acar, Umut A. and Anderson, Daniel and Blelloch, Guy E. and Dhulipala, Laxman and Westrick, Sam},
  title =	{{Parallel Batch-Dynamic Trees via Change Propagation}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.2},
  URN =		{urn:nbn:de:0030-drops-128686},
  doi =		{10.4230/LIPIcs.ESA.2020.2},
  annote =	{Keywords: Dynamic trees, Graph algorithms, Parallel algorithms, Dynamic algorithms}
}
Reconstructing Biological and Digital Phylogenetic Trees in Parallel

Authors: Ramtin Afshar, Michael T. Goodrich, Pedro Matias, and Martha C. Osegueda


Abstract
In this paper, we study the parallel query complexity of reconstructing biological and digital phylogenetic trees from simple queries involving their nodes. This is motivated by computational biology, data protection, and computer security settings, which can be abstracted in terms of two parties: a responder, Alice, who must correctly answer queries of a given type regarding a degree-d tree, T, and a querier, Bob, who issues batches of queries, with each query in a batch being independent of the others, so as to eventually infer the structure of T. We show that a querier can efficiently reconstruct an n-node degree-d tree, T, in a logarithmic number of rounds with a quasilinear number of queries, with high probability, for various types of queries, including relative-distance queries and path queries. Our results are all asymptotically optimal and improve the asymptotic (sequential) query complexity for one of the problems we study. Moreover, through an experimental analysis using both real-world and synthetic data, we provide empirical evidence that our algorithms provide significant parallel speedups while also improving the total query complexities for the problems we study.

Cite as

Ramtin Afshar, Michael T. Goodrich, Pedro Matias, and Martha C. Osegueda. Reconstructing Biological and Digital Phylogenetic Trees in Parallel. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{afshar_et_al:LIPIcs.ESA.2020.3,
  author =	{Afshar, Ramtin and Goodrich, Michael T. and Matias, Pedro and Osegueda, Martha C.},
  title =	{{Reconstructing Biological and Digital Phylogenetic Trees in Parallel}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{3:1--3:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.3},
  URN =		{urn:nbn:de:0030-drops-128696},
  doi =		{10.4230/LIPIcs.ESA.2020.3},
  annote =	{Keywords: Tree Reconstruction, Parallel Algorithms, Privacy, Phylogenetic Trees, Data Structures, Hierarchical Clustering}
}
Kruskal-Based Approximation Algorithm for the Multi-Level Steiner Tree Problem

Authors: Reyan Ahmed, Faryad Darabi Sahneh, Keaton Hamm, Stephen Kobourov, and Richard Spence


Abstract
We study the multi-level Steiner tree problem: a generalization of the Steiner tree problem in graphs where terminals T require varying priority, level, or quality of service. In this problem, we seek to find a minimum-cost tree containing edges of varying rates such that any two terminals u, v with priorities P(u), P(v) are connected using edges of rate min{P(u),P(v)} or better. The case where edge costs are proportional to their rate is approximable to within a constant factor of the optimal solution. For the more general case of non-proportional costs, this problem is hard to approximate within a ratio of c log log n, for some constant c, where n is the number of vertices in the graph. A simple greedy algorithm by Charikar et al., however, provides a min{2(ln |T|+1), 𝓁 ρ}-approximation in this setting, where ρ is an approximation ratio for a heuristic solver for the Steiner tree problem and 𝓁 is the number of priorities or levels (Byrka et al. give a Steiner tree algorithm with ρ ≈ 1.39, for example). In this paper, we describe a natural generalization to the multi-level case of the classical (single-level) Steiner tree approximation algorithm based on Kruskal’s minimum spanning tree algorithm. We prove that this algorithm achieves an approximation ratio at least as good as that of Charikar et al., and it experimentally performs better with respect to the optimum solution. We develop an integer linear programming formulation to compute an exact solution for the multi-level Steiner tree problem with non-proportional edge costs and use it to evaluate the performance of our algorithm on both random graphs and multi-level instances derived from SteinLib.
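
For intuition, the classical single-level building block that the paper generalizes can be sketched in a few lines: compute the metric closure on the terminals and return a minimum spanning tree of it via Kruskal's algorithm, a well-known 2-approximation for Steiner tree. The Python sketch below is our illustration of that subroutine under assumed input conventions (adjacency lists, connected graph); it is not the paper's multi-level algorithm.

import heapq
from itertools import combinations

def dijkstra(adj, src):
    # adj: {u: [(v, weight), ...]} with non-negative weights
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def steiner_mst_2approx(adj, terminals):
    # Metric closure on the terminals, then Kruskal's algorithm with
    # union-find; the resulting tree costs at most twice the optimum.
    # Assumes all terminals lie in one connected component.
    dist = {t: dijkstra(adj, t) for t in terminals}
    edges = sorted((dist[u][v], u, v) for u, v in combinations(terminals, 2))
    parent = {t: t for t in terminals}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree, cost = [], 0
    for w, u, v in edges:
        if find(u) != find(v):
            parent[find(u)] = find(v)
            tree.append((u, v))
            cost += w
    return tree, cost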

Cite as

Reyan Ahmed, Faryad Darabi Sahneh, Keaton Hamm, Stephen Kobourov, and Richard Spence. Kruskal-Based Approximation Algorithm for the Multi-Level Steiner Tree Problem. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 4:1-4:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ahmed_et_al:LIPIcs.ESA.2020.4,
  author =	{Ahmed, Reyan and Sahneh, Faryad Darabi and Hamm, Keaton and Kobourov, Stephen and Spence, Richard},
  title =	{{Kruskal-Based Approximation Algorithm for the Multi-Level Steiner Tree Problem}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{4:1--4:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.4},
  URN =		{urn:nbn:de:0030-drops-128709},
  doi =		{10.4230/LIPIcs.ESA.2020.4},
  annote =	{Keywords: multi-level, Steiner tree, approximation algorithms}
}
Analysis of the Period Recovery Error Bound

Authors: Amihood Amir, Itai Boneh, Michael Itzhaki, and Eitan Kondratovsky


Abstract
The recovery problem takes as input a corrupted text T that was originally periodic, and asks to recover the original period. The algorithm receives T without any information about either the period’s length or the period itself. An algorithm that solves this problem is called a recovery algorithm. In order to make recovery possible, there must be some assumption that not "too many" errors corrupted the initial periodic string. This is called the error bound. In previous recovery algorithms, it was shown that a given error bound of n/((2+ε)p) leads to O(log_{1+ε} n) period candidates that are guaranteed to include the original period, where p is the length of the original period (unknown to the algorithm) and ε > 0 is an arbitrary constant. This paper provides the first analysis of the relationship between the error bound and the number of candidates, as well as an identification of the error parameters that still guarantee recovery. We improve the previously known upper error bound on the number of corruptions, n/((2+ε)p), that yields O(log_{1+ε} n) period candidates. We show how to (1) remove ε from the bound, and (2) relax the error bound to allow more errors while keeping a candidate set of size O(log n). Achieving this relaxation of the previously known upper bound turns out to be quite challenging. To obtain it we provide what is, to our knowledge, the first non-trivial lower bound on the Hamming distance between two periodic strings. This proof leads to an error bound that produces a family of period candidates of size 2log₃ n. We show that this result is tight and further provide a compact representation of the period candidates, which we call the canonic period seed. In addition to providing less restrictive error bounds that guarantee a smaller candidate set, we also provide a hierarchy of more restrictive upper error bounds that asymptotically reduce the size of the potential period candidate set.
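
To make the setup concrete, here is a small brute-force sketch (our illustration, not the paper's algorithm): for each candidate period length p it builds the closest p-periodic string by majority vote per residue class and keeps p whenever the Hamming distance stays within the n/((2+ε)p) bound quoted above.

from collections import Counter

def period_candidates(t, eps=0.1):
    # Keep every period length p whose best p-periodic completion
    # (majority character in each residue class mod p) is within
    # Hamming distance n/((2+eps)p) of t.
    n = len(t)
    candidates = []
    for p in range(1, n // 2 + 1):
        mismatches = 0
        for r in range(p):
            column = t[r::p]
            mismatches += len(column) - Counter(column).most_common(1)[0][1]
        if mismatches <= n / ((2 + eps) * p):
            candidates.append(p)
    return candidates

# period_candidates("abababaxab") -> [2, 4]: the true period 2 is kept,
# and multiples of it may pass the test as well.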

Cite as

Amihood Amir, Itai Boneh, Michael Itzhaki, and Eitan Kondratovsky. Analysis of the Period Recovery Error Bound. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{amir_et_al:LIPIcs.ESA.2020.5,
  author =	{Amir, Amihood and Boneh, Itai and Itzhaki, Michael and Kondratovsky, Eitan},
  title =	{{Analysis of the Period Recovery Error Bound}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.5},
  URN =		{urn:nbn:de:0030-drops-128717},
  doi =		{10.4230/LIPIcs.ESA.2020.5},
  annote =	{Keywords: Period Recovery, Period Recovery Hierarchy, Hamming Distance}
}
Approximation of the Diagonal of a Laplacian’s Pseudoinverse for Complex Network Analysis

Authors: Eugenio Angriman, Maria Predari, Alexander van der Grinten, and Henning Meyerhenke


Abstract
The ubiquity of massive graph data sets in numerous applications requires fast algorithms for extracting knowledge from these data. We are motivated here by three electrical measures for the analysis of large small-world graphs G = (V, E) - i.e., graphs with diameter in O(log |V|), which are abundant in complex network analysis. From a computational point of view, the three measures have in common that their crucial component is the diagonal of the graph Laplacian’s pseudoinverse, L^+. Computing diag(L^+) exactly by pseudoinversion, however, is as expensive as dense matrix multiplication - and the standard tools in practice even require cubic time. Moreover, the pseudoinverse requires quadratic space - hardly feasible for large graphs. Resorting to approximation by, e.g., using the Johnson-Lindenstrauss transform requires the solution of O(log |V| / ε²) Laplacian linear systems to guarantee a relative error, which is still very expensive for large inputs. In this paper, we present a novel approximation algorithm that requires the solution of only one Laplacian linear system. The remaining parts are purely combinatorial - mainly sampling uniform spanning trees, which we relate to diag(L^+) via effective resistances. For small-world networks, our algorithm obtains a ±ε-approximation with high probability, in time that is nearly linear in |E| and quadratic in 1/ε. Another positive aspect of our algorithm is its parallel nature due to independent sampling. We thus provide two parallel implementations of our algorithm: one using OpenMP and one using MPI + OpenMP. In our experiments against the state of the art, our algorithm (i) yields more accurate approximation results for diag(L^+), (ii) is much faster and more memory-efficient, and (iii) obtains good parallel speedups, in particular in the distributed setting.
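
As a point of reference, the exact cubic-time, quadratic-space baseline that the sampling algorithm avoids fits in a few lines of Python with NumPy; the toy edge list is our own example. The snippet also checks the effective-resistance identity r(u,v) = L^+[u,u] + L^+[v,v] - 2L^+[u,v] that links diag(L^+) to the sampled spanning trees.

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # arbitrary small graph
n = 4
L = np.zeros((n, n))  # graph Laplacian L = D - A
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

Lp = np.linalg.pinv(L)   # dense pseudoinversion: the expensive baseline
print(np.diag(Lp))       # the quantity the paper approximates
r_uv = Lp[0, 0] + Lp[1, 1] - 2 * Lp[0, 1]
print(r_uv)              # effective resistance between vertices 0 and 1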

Cite as

Eugenio Angriman, Maria Predari, Alexander van der Grinten, and Henning Meyerhenke. Approximation of the Diagonal of a Laplacian’s Pseudoinverse for Complex Network Analysis. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 6:1-6:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{angriman_et_al:LIPIcs.ESA.2020.6,
  author =	{Angriman, Eugenio and Predari, Maria and van der Grinten, Alexander and Meyerhenke, Henning},
  title =	{{Approximation of the Diagonal of a Laplacian’s Pseudoinverse for Complex Network Analysis}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{6:1--6:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.6},
  URN =		{urn:nbn:de:0030-drops-128723},
  doi =		{10.4230/LIPIcs.ESA.2020.6},
  annote =	{Keywords: Laplacian pseudoinverse, electrical centrality measures, uniform spanning tree, effective resistance, parallel sampling}
}
Cutting Polygons into Small Pieces with Chords: Laser-Based Localization

Authors: Esther M. Arkin, Rathish Das, Jie Gao, Mayank Goswami, Joseph S. B. Mitchell, Valentin Polishchuk, and Csaba D. Tóth


Abstract
Motivated by indoor localization by tripwire lasers, we study the problem of cutting a polygon into small-size pieces using the chords of the polygon. Several versions are considered, depending on the definition of the "size" of a piece. In particular, we consider the area, the diameter, and the radius of the largest inscribed circle as measures of the size of a piece. We also consider different objectives: either minimizing the maximum size of a piece for a given number of chords, or minimizing the number of chords needed to achieve a given size threshold for the pieces. We give hardness results for polygons with holes and approximation algorithms for multiple variants of the problem.

Cite as

Esther M. Arkin, Rathish Das, Jie Gao, Mayank Goswami, Joseph S. B. Mitchell, Valentin Polishchuk, and Csaba D. Tóth. Cutting Polygons into Small Pieces with Chords: Laser-Based Localization. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 7:1-7:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{arkin_et_al:LIPIcs.ESA.2020.7,
  author =	{Arkin, Esther M. and Das, Rathish and Gao, Jie and Goswami, Mayank and Mitchell, Joseph S. B. and Polishchuk, Valentin and T\'{o}th, Csaba D.},
  title =	{{Cutting Polygons into Small Pieces with Chords: Laser-Based Localization}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{7:1--7:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.7},
  URN =		{urn:nbn:de:0030-drops-128736},
  doi =		{10.4230/LIPIcs.ESA.2020.7},
  annote =	{Keywords: Polygon partition, Arrangements, Visibility, Localization}
}
Set Cover with Delay - Clairvoyance Is Not Required

Authors: Yossi Azar, Ashish Chiplunkar, Shay Kutten, and Noam Touitou


Abstract
In most online problems with delay, clairvoyance (i.e., knowing the future delay of a request upon its arrival) is required for polylogarithmic competitiveness. In this paper, we show that this is not the case for set cover with delay (SCD) - specifically, we present the first non-clairvoyant algorithm, which is O(log n log m)-competitive, where n is the number of elements and m is the number of sets. This matches the best known result for the classic online set cover (a special case of non-clairvoyant SCD). Moreover, clairvoyance does not allow for significant improvement - we present lower bounds of Ω(√{log n}) and Ω(√{log m}) for SCD that apply even in the clairvoyant case. In addition, the competitiveness of our algorithm does not depend on the number of requests. Such a guarantee in terms of the size of the universe alone was not previously known even for the clairvoyant case - the only previously known algorithm (due to Carrasco et al.) is clairvoyant, with a competitive ratio that grows with the number of requests. For the special case of vertex cover with delay, we show a simpler, deterministic algorithm that is 3-competitive (and also non-clairvoyant).

Cite as

Yossi Azar, Ashish Chiplunkar, Shay Kutten, and Noam Touitou. Set Cover with Delay - Clairvoyance Is Not Required. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 8:1-8:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{azar_et_al:LIPIcs.ESA.2020.8,
  author =	{Azar, Yossi and Chiplunkar, Ashish and Kutten, Shay and Touitou, Noam},
  title =	{{Set Cover with Delay - Clairvoyance Is Not Required}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{8:1--8:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.8},
  URN =		{urn:nbn:de:0030-drops-128749},
  doi =		{10.4230/LIPIcs.ESA.2020.8},
  annote =	{Keywords: Set Cover, Delay, Clairvoyant}
}
Improved Bounds for Metric Capacitated Covering Problems

Authors: Sayan Bandyapadhyay


Abstract
In the Metric Capacitated Covering (MCC) problem, given a set of balls ℬ in a metric space P with metric d and a capacity parameter U, the goal is to find a minimum-sized subset ℬ' ⊆ ℬ and an assignment of the points in P to the balls in ℬ' such that each point is assigned to a ball that contains it and each ball is assigned at most U points. MCC admits an O(log |P|)-approximation via a greedy algorithm. On the other hand, it is hard to approximate within a factor of o(log |P|), even when the balls may be expanded by a factor of β < 3. Bandyapadhyay et al. [SoCG 2018, DCG 2019] showed that one can obtain an O(1)-approximation for the problem with a 6.47-factor expansion of the balls. An open question left by their work is to narrow the gap between the lower bound of 3 and the upper bound of 6.47. In this work, we show that it is possible to obtain an O(1)-approximation with only a 4.24-factor expansion of the balls. We also show a similar upper bound of 5 for a more general version of MCC, for which the best previously known bound was 9.

Cite as

Sayan Bandyapadhyay. Improved Bounds for Metric Capacitated Covering Problems. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bandyapadhyay:LIPIcs.ESA.2020.9,
  author =	{Bandyapadhyay, Sayan},
  title =	{{Improved Bounds for Metric Capacitated Covering Problems}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.9},
  URN =		{urn:nbn:de:0030-drops-128759},
  doi =		{10.4230/LIPIcs.ESA.2020.9},
  annote =	{Keywords: Capacitated covering, approximation algorithms, bicriteria approximation, LP rounding}
}
Minimum Neighboring Degree Realization in Graphs and Trees

Authors: Amotz Bar-Noy, Keerti Choudhary, Avi Cohen, David Peleg, and Dror Rawitz


Abstract
We study a graph realization problem that pertains to degrees in vertex neighborhoods. The classical problem of degree sequence realizability asks whether a given sequence of n positive integers is equal to the degree sequence of some n-vertex undirected simple graph. While the realizability problem for degree sequences has been well studied for different classes of graphs, there has been relatively little work concerning the realizability of other types of information profiles, such as vertex neighborhood profiles. In this paper we introduce and explore the profile of minimum degrees in vertex neighborhoods, one of the most natural extensions of the classical degree profile to neighborhood degree profiles. Given a graph G = (V,E), the min-degree of a vertex v ∈ V, denoted MinND(v), is given by min{deg(w) ∣ w ∈ N[v]}. Our input is a sequence σ = (d_𝓁^{n_𝓁}, ⋯ , d₁^{n₁}), where d_{i+1} > d_i and each n_i is a positive integer. We provide some necessary and sufficient conditions for σ to be realizable. Furthermore, under the restriction that the realization is acyclic, i.e., a tree or a forest, we provide a full characterization of realizable sequences, along with a corresponding constructive algorithm. We believe our results are a crucial step towards understanding extremal neighborhood degree relations in graphs.
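
The profile itself is straightforward to compute; the following few lines (a Python illustration of the definition, not of the paper's realization algorithms) return MinND(v) for every vertex of a graph given as an adjacency dictionary.

def minnd_profile(adj):
    # adj: {v: set_of_neighbors}; MinND(v) = min{deg(w) | w in N[v]},
    # where N[v] is the closed neighborhood {v} ∪ N(v).
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {v: min(deg[w] for w in {v} | adj[v]) for v in adj}

# Triangle 0-1-2 with a pendant vertex 3 attached to 2:
# minnd_profile({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}})
# -> {0: 2, 1: 2, 2: 1, 3: 1}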

Cite as

Amotz Bar-Noy, Keerti Choudhary, Avi Cohen, David Peleg, and Dror Rawitz. Minimum Neighboring Degree Realization in Graphs and Trees. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{barnoy_et_al:LIPIcs.ESA.2020.10,
  author =	{Bar-Noy, Amotz and Choudhary, Keerti and Cohen, Avi and Peleg, David and Rawitz, Dror},
  title =	{{Minimum Neighboring Degree Realization in Graphs and Trees}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.10},
  URN =		{urn:nbn:de:0030-drops-128765},
  doi =		{10.4230/LIPIcs.ESA.2020.10},
  annote =	{Keywords: Graph realization, neighborhood profile, graph algorithms, degree sequences}
}
Tight Approximation Algorithms for p-Mean Welfare Under Subadditive Valuations

Authors: Siddharth Barman, Umang Bhaskar, Anand Krishna, and Ranjani G. Sundaram


Abstract
We develop polynomial-time algorithms for the fair and efficient allocation of indivisible goods among n agents that have subadditive valuations over the goods. We first consider the Nash social welfare as our objective and design a polynomial-time algorithm that, in the value oracle model, finds an 8n-approximation to the Nash optimal allocation. Subadditive valuations include XOS (fractionally subadditive) and submodular valuations as special cases. Our result, even for the special case of submodular valuations, improves upon the previously best known O(n log n)-approximation ratio of Garg et al. (2020). More generally, we study maximization of p-mean welfare. The p-mean welfare is parameterized by an exponent term p ∈ (-∞, 1] and encompasses a range of welfare functions, such as social welfare (p = 1), Nash social welfare (p → 0), and egalitarian welfare (p → -∞). We give an algorithm that, for subadditive valuations and any given p ∈ (-∞, 1], computes (in the value oracle model and in polynomial time) an allocation with p-mean welfare at least 1/(8n) times the optimal. Further, we show that our approximation guarantees are essentially tight for XOS and, hence, subadditive valuations. We adapt a result of Dobzinski et al. (2010) to show that, under XOS valuations, an O(n^{1-ε}) approximation for the p-mean welfare for any p ∈ (-∞,1] (including the Nash social welfare) requires exponentially many value queries; here, ε > 0 is any fixed constant.
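
For reference, the p-mean welfare of an allocation x = (x_1, ..., x_n) is the generalized mean of the agents' values; the limits below recover the special cases named in the abstract:

\[
  \mathrm{M}_p(x) = \Bigl(\frac{1}{n}\sum_{i=1}^{n} v_i(x_i)^p\Bigr)^{1/p},
  \qquad
  \lim_{p \to 0} \mathrm{M}_p(x) = \Bigl(\prod_{i=1}^{n} v_i(x_i)\Bigr)^{1/n},
  \qquad
  \lim_{p \to -\infty} \mathrm{M}_p(x) = \min_i v_i(x_i).
\]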

Cite as

Siddharth Barman, Umang Bhaskar, Anand Krishna, and Ranjani G. Sundaram. Tight Approximation Algorithms for p-Mean Welfare Under Subadditive Valuations. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{barman_et_al:LIPIcs.ESA.2020.11,
  author =	{Barman, Siddharth and Bhaskar, Umang and Krishna, Anand and Sundaram, Ranjani G.},
  title =	{{Tight Approximation Algorithms for p-Mean Welfare Under Subadditive Valuations}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.11},
  URN =		{urn:nbn:de:0030-drops-128775},
  doi =		{10.4230/LIPIcs.ESA.2020.11},
  annote =	{Keywords: Discrete Fair Division, Nash Social Welfare, Subadditive Valuations, Submodular Valuations}
}
Mincut Sensitivity Data Structures for the Insertion of an Edge

Authors: Surender Baswana, Shiv Gupta, and Till Knollmann


Abstract
Let G = (V,E) be an undirected graph on n vertices with non-negative capacities on its edges. The mincut sensitivity problem for the insertion of an edge is defined as follows. Build a compact data structure for G and a given set S ⊆ V of vertices that, on receiving any edge (x,y) ∈ S×S of positive capacity as query input, can efficiently report the set of all pairs from S×S whose mincut value increases upon insertion of the edge (x,y) into G. The only existing result for this problem concerns a single pair of vertices (Picard and Queyranne, Mathematical Programming Study, 13 (1980), 8-16). We present the following results for the single-source and all-pairs versions of this problem. 1) Single source: Given any designated source vertex s, there exists a data structure of size 𝒪(|S|) that can output all those vertices from S whose mincut value to s increases upon insertion of any given edge. The time taken by the data structure to answer any query is 𝒪(|S|). 2) All-pairs: There exists a data structure of size 𝒪(|S|²) that can output all those pairs of vertices from S×S whose mincut value increases upon insertion of any given edge. The time taken by the data structure to answer any query is 𝒪(k), where k is the number of pairs of vertices whose mincut increases. For both these versions, we also address the problem of reporting the values of the mincuts upon insertion of any given edge. To derive our results, we use interesting insights into the nearest and the farthest mincuts for a pair of vertices. In addition, a crucial result that we establish and use in our data structures is that there exists a directed acyclic graph of 𝒪(n) size that compactly stores the farthest mincuts from all vertices of V to a designated vertex s in the graph. We believe that this result is of independent interest, especially because it complements a previously existing result of Hariharan et al. (STOC 2007) that the nearest mincuts from all vertices of V to s form a laminar family and, hence, can be stored compactly in a tree of 𝒪(n) size.

Cite as

Surender Baswana, Shiv Gupta, and Till Knollmann. Mincut Sensitivity Data Structures for the Insertion of an Edge. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{baswana_et_al:LIPIcs.ESA.2020.12,
  author =	{Baswana, Surender and Gupta, Shiv and Knollmann, Till},
  title =	{{Mincut Sensitivity Data Structures for the Insertion of an Edge}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.12},
  URN =		{urn:nbn:de:0030-drops-128781},
  doi =		{10.4230/LIPIcs.ESA.2020.12},
  annote =	{Keywords: Mincut, Sensitivity, Data Structure}
}
Linear Time LexDFS on Chordal Graphs

Authors: Jesse Beisegel, Ekkehard Köhler, Robert Scheffler, and Martin Strehler


Abstract
Lexicographic Depth First Search (LexDFS) is a special variant of Depth First Search (DFS), introduced by Corneil and Krueger in 2008. While this search has been used in various applications, no general linear-time implementation is known to date, in contrast to other graph searches. In 2014, Köhler and Mouatadid achieved linear running time for computing some special LexDFS orderings of cocomparability graphs. In this paper, we present a linear-time implementation of LexDFS for chordal graphs. Our algorithm even implements the extended version LexDFS^+ and is therefore able to find any LexDFS ordering for this graph class. To the best of our knowledge, this is the first unrestricted linear-time implementation of LexDFS on a non-trivial graph class. In the algorithm we use a search tree computed by Lexicographic Breadth First Search (LexBFS).
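
For readers unfamiliar with the search rule, here is a generic quadratic-time reference implementation in Python (our sketch of the Corneil-Krueger definition; the paper's linear-time algorithm for chordal graphs is not reproduced here).

def lexdfs(adj, start):
    # Each unvisited vertex carries a label: the sequence of visit
    # numbers of its already-visited neighbors, most recent first.
    # The next vertex is an unvisited one with lexicographically
    # largest label, which yields the DFS-like behavior.
    label = {v: () for v in adj}
    label[start] = (len(adj) + 1,)  # sentinel so start is chosen first
    unvisited = set(adj)
    order = []
    for i in range(1, len(adj) + 1):
        v = max(unvisited, key=lambda u: label[u])
        unvisited.remove(v)
        order.append(v)
        for w in adj[v]:
            if w in unvisited:
                label[w] = (i,) + label[w]  # prepend the new visit number
    return order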

Cite as

Jesse Beisegel, Ekkehard Köhler, Robert Scheffler, and Martin Strehler. Linear Time LexDFS on Chordal Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{beisegel_et_al:LIPIcs.ESA.2020.13,
  author =	{Beisegel, Jesse and K\"{o}hler, Ekkehard and Scheffler, Robert and Strehler, Martin},
  title =	{{Linear Time LexDFS on Chordal Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.13},
  URN =		{urn:nbn:de:0030-drops-128790},
  doi =		{10.4230/LIPIcs.ESA.2020.13},
  annote =	{Keywords: LexDFS, chordal graphs, linear time implementation, search trees, LexBFS}
}
Grundy Distinguishes Treewidth from Pathwidth

Authors: Rémy Belmonte, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Yota Otachi


Abstract
Structural graph parameters, such as treewidth, pathwidth, and clique-width, are a central topic of study in parameterized complexity. A main aim of research in this area is to understand the "price of generality" of these widths: as we transition from more restrictive to more general notions, which problems see their complexity status deteriorate from fixed-parameter tractable to intractable? This type of question is by now very well studied, but, somewhat strikingly, the algorithmic frontier between the two (arguably) most central width notions, treewidth and pathwidth, is still not understood: currently, no natural graph problem is known to be W-hard for one but FPT for the other. Indeed, a surprising development of the last few years has been the observation that for many of the most paradigmatic problems, their complexities for the two parameters actually coincide exactly, despite the fact that treewidth is a much more general parameter. It would thus appear that the extra generality of treewidth over pathwidth often comes "for free". Our main contribution in this paper is to uncover the first natural example where this generality comes with a high price. We consider Grundy Coloring, a variation of coloring where one seeks to calculate the worst possible coloring that could be assigned to a graph by a greedy First-Fit algorithm. We show that this well-studied problem is FPT parameterized by pathwidth; however, it becomes significantly harder (W[1]-hard) when parameterized by treewidth. Furthermore, we show that Grundy Coloring makes a second complexity jump for more general widths, as it becomes para-NP-hard for clique-width. Hence, Grundy Coloring nicely captures the complexity trade-offs between the three most well-studied parameters. Completing the picture, we show that Grundy Coloring is FPT parameterized by modular-width.
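
The greedy procedure at the heart of the problem is easy to state in code. The sketch below (ours, in Python; the Grundy computation brute-forces all orders and is only usable on tiny graphs) gives the First-Fit coloring for a fixed order and the Grundy number as the worst outcome over all orders.

from itertools import permutations

def first_fit(adj, order):
    # First-Fit: each vertex receives the smallest color not used
    # by its already-colored neighbors.
    color = {}
    for v in order:
        used = {color[w] for w in adj[v] if w in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    # Worst-case First-Fit value over all vertex orders (exponential).
    return max(first_fit(adj, order) for order in permutations(adj))

# On the path 0-1-2-3 the chromatic number is 2, but
# grundy_number({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}) -> 3
# (e.g., via the order 3, 0, 1, 2).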

Cite as

Rémy Belmonte, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Yota Otachi. Grundy Distinguishes Treewidth from Pathwidth. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{belmonte_et_al:LIPIcs.ESA.2020.14,
  author =	{Belmonte, R\'{e}my and Kim, Eun Jung and Lampis, Michael and Mitsou, Valia and Otachi, Yota},
  title =	{{Grundy Distinguishes Treewidth from Pathwidth}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.14},
  URN =		{urn:nbn:de:0030-drops-128803},
  doi =		{10.4230/LIPIcs.ESA.2020.14},
  annote =	{Keywords: Treewidth, Pathwidth, Clique-width, Grundy Coloring}
}
On the Complexity of BWT-Runs Minimization via Alphabet Reordering

Authors: Jason W. Bentley, Daniel Gibney, and Sharma V. Thankachan


Abstract
The Burrows-Wheeler Transform (BWT) has been an essential tool in text compression and indexing. First introduced in 1994, it went on to provide the backbone for the first encoding of the classic suffix tree data structure in space close to the entropy-based lower bound. Within the last decade, it has seen its role further enhanced with the development of compact suffix trees in space proportional to "r", the number of runs in the BWT. While r would superficially appear to be only a measure of space complexity, it increasingly appears in the time complexity of new algorithms as well. This makes having the smallest possible value of r of growing importance. Interestingly, unlike other popular measures of compression, the parameter r is sensitive to the lexicographic ordering given to the text’s alphabet. Despite several past attempts to exploit this fact, a provably efficient algorithm for finding, or approximating, an alphabet ordering that minimizes r has remained open for years. We help to explain this lack of progress by presenting the first set of results on the computational complexity of minimizing BWT-runs via alphabet reordering. We prove that the decision version of this problem is NP-complete and cannot be solved in time poly(n)⋅2^o(σ) unless the Exponential Time Hypothesis fails, where σ is the size of the alphabet and n is the length of the text. Moreover, we show that the optimization variant is APX-hard. In doing so, we relate two previously disparate topics: the optimal traveling salesperson path of a graph and the number of runs in the BWT of a text. In addition, by drawing on recent results in the field of dictionary compression, we show that an arbitrary alphabet ordering provides an O(log² n)-approximation. Lastly, we provide an optimal linear-time algorithm for the more restricted problem of finding an optimal ordering of a subset of symbols (each occurring only once) under ordering constraints.
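
To make the objective tangible, the following Python sketch (ours; naive rotation-based BWT construction and exhaustive search over orderings, so toy-sized inputs only) counts BWT runs under a given alphabet ranking and brute-forces the best reordering.

from itertools import permutations

def bwt(text, rank):
    # Rotation-based BWT: sort all rotations under the ranking 'rank'
    # and take the last column.
    key = lambda i: [rank[c] for c in text[i:] + text[:i]]
    starts = sorted(range(len(text)), key=key)
    return "".join(text[i - 1] for i in starts)

def runs(s):
    # Number of maximal runs of equal characters.
    return 1 + sum(s[i] != s[i - 1] for i in range(1, len(s)))

def min_bwt_runs(text):
    # Try all sigma! alphabet orderings; exhaustive search in sigma is
    # all we attempt here, in line with the poly(n)*2^o(sigma) ETH
    # lower bound shown in the paper.
    best = None
    for perm in permutations(sorted(set(text))):
        rank = {c: i for i, c in enumerate(perm)}
        r = runs(bwt(text, rank))
        best = r if best is None else min(best, r)
    return best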

Cite as

Jason W. Bentley, Daniel Gibney, and Sharma V. Thankachan. On the Complexity of BWT-Runs Minimization via Alphabet Reordering. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 15:1-15:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bentley_et_al:LIPIcs.ESA.2020.15,
  author =	{Bentley, Jason W. and Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{On the Complexity of BWT-Runs Minimization via Alphabet Reordering}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.15},
  URN =		{urn:nbn:de:0030-drops-128819},
  doi =		{10.4230/LIPIcs.ESA.2020.15},
  annote =	{Keywords: BWT, NP-hardness, APX-hardness}
}
Simulating Population Protocols in Sub-Constant Time per Interaction

Authors: Petra Berenbrink, David Hammer, Dominik Kaaser, Ulrich Meyer, Manuel Penschuck, and Hung Tran


Abstract
We consider the efficient simulation of population protocols. In the population model, we are given a system of n agents modeled as identical finite-state machines. In each step, two agents are selected uniformly at random to interact by updating their states according to a common transition function. We analyze two classes of simulators for this model, both empirically and analytically. First, we consider sequential simulators, which execute one interaction after the other. Key to the performance of these simulators is the data structure storing the agents' states. For our analysis, we consider plain arrays, binary search trees, and a novel Dynamic Alias Table data structure. Second, we consider batch processing to efficiently update the states of multiple independent agents in one step. For many protocols considered in the literature, our simulator requires amortized sub-constant time per interaction and is fast in practice: given a fixed time budget, the implementation of our batched simulator is able to simulate population protocols several orders of magnitude larger than the sequential competitors can, and it can carry out 2^50 interactions among the same number of agents in less than 400 seconds.
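
A minimal sequential simulator, the kind of baseline these data structures accelerate, fits in a dozen lines of Python (our sketch; the toy one-way epidemic protocol is our own example, not from the paper):

import random

def simulate(states, delta, steps, rng=random):
    # One interaction per step: an ordered pair of distinct agents is
    # drawn uniformly at random and updated via the transition function.
    n = len(states)
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.randrange(n - 1)
        if j >= i:
            j += 1  # j uniform over the agents other than i
        states[i], states[j] = delta(states[i], states[j])
    return states

# Toy one-way epidemic: an infected initiator infects the responder.
epidemic = lambda a, b: ("I", "I") if a == "I" else (a, b)
final = simulate(["I"] + ["S"] * 999, epidemic, 100_000)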

Cite as

Petra Berenbrink, David Hammer, Dominik Kaaser, Ulrich Meyer, Manuel Penschuck, and Hung Tran. Simulating Population Protocols in Sub-Constant Time per Interaction. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 16:1-16:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{berenbrink_et_al:LIPIcs.ESA.2020.16,
  author =	{Berenbrink, Petra and Hammer, David and Kaaser, Dominik and Meyer, Ulrich and Penschuck, Manuel and Tran, Hung},
  title =	{{Simulating Population Protocols in Sub-Constant Time per Interaction}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{16:1--16:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.16},
  URN =		{urn:nbn:de:0030-drops-128827},
  doi =		{10.4230/LIPIcs.ESA.2020.16},
  annote =	{Keywords: Population Protocols, Simulation, Random Sampling, Dynamic Alias Table}
}
An Optimal Decentralized (Δ + 1)-Coloring Algorithm

Authors: Daniel Bertschinger, Johannes Lengler, Anders Martinsson, Robert Meier, Angelika Steger, Miloš Trujić, and Emo Welzl


Abstract
Consider the following simple coloring algorithm for a graph on n vertices. Each vertex chooses a color from {1, ..., Δ(G) + 1} uniformly at random. While there exists a conflicted vertex, choose one such vertex uniformly at random and recolor it with a randomly chosen color. This algorithm was introduced by Bhartia et al. [MOBIHOC'16] for channel selection in Wi-Fi networks. We show that this algorithm always converges to a proper coloring in expected O(n log Δ) steps, which is optimal and proves a conjecture of Chakrabarty and de Supinski [SOSA'20].
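
Because the algorithm is fully specified above, it can be simulated directly; here is a short Python rendering (ours) that also counts the recoloring steps, whose expectation the paper bounds by O(n log Δ).

import random

def decentralized_coloring(adj, rng=random):
    delta = max(len(nbrs) for nbrs in adj.values())
    palette = list(range(1, delta + 2))  # colors {1, ..., Delta + 1}
    color = {v: rng.choice(palette) for v in adj}  # initial random coloring
    steps = 0
    while True:
        conflicted = [v for v in adj
                      if any(color[v] == color[w] for w in adj[v])]
        if not conflicted:
            return color, steps
        v = rng.choice(conflicted)      # uniform random conflicted vertex
        color[v] = rng.choice(palette)  # uniform random recolor
        steps += 1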

Cite as

Daniel Bertschinger, Johannes Lengler, Anders Martinsson, Robert Meier, Angelika Steger, Miloš Trujić, and Emo Welzl. An Optimal Decentralized (Δ + 1)-Coloring Algorithm. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 17:1-17:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bertschinger_et_al:LIPIcs.ESA.2020.17,
  author =	{Bertschinger, Daniel and Lengler, Johannes and Martinsson, Anders and Meier, Robert and Steger, Angelika and Truji\'{c}, Milo\v{s} and Welzl, Emo},
  title =	{{An Optimal Decentralized (\Delta + 1)-Coloring Algorithm}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{17:1--17:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.17},
  URN =		{urn:nbn:de:0030-drops-128837},
  doi =		{10.4230/LIPIcs.ESA.2020.17},
  annote =	{Keywords: Decentralized Algorithm, Distributed Computing, Graph Coloring, Randomized Algorithms}
}
Noisy, Greedy and Not so Greedy k-Means++

Authors: Anup Bhattacharya, Jan Eube, Heiko Röglin, and Melanie Schmidt


Abstract
The k-means++ algorithm due to Arthur and Vassilvitskii [2007] has become the most popular seeding method for Lloyd’s algorithm. It samples the first center uniformly at random from the data set and the other k-1 centers iteratively according to D²-sampling, i.e., the probability that a data point becomes the next center is proportional to its squared distance to the closest center chosen so far. k-means++ is known to achieve an approximation factor of 𝒪(log k) in expectation. Already in the original paper on k-means++, Arthur and Vassilvitskii suggested a variation, called the greedy k-means++ algorithm, in which in each iteration multiple possible centers are sampled according to D²-sampling and only the one that decreases the objective the most is chosen as the center for that iteration. They posed it as an open question whether this also leads to an 𝒪(log k)-approximation (or even better). We show that this is not the case by presenting a family of instances on which greedy k-means++ yields only an Ω(𝓁⋅log k)-approximation in expectation, where 𝓁 is the number of possible centers that are sampled in each iteration. Inspired by these negative results, we study a variation of greedy k-means++ that we call the noisy k-means++ algorithm. In this variation, only one center is sampled in every iteration, but not exactly by D²-sampling. Instead, in each iteration an adversary is allowed to change the probabilities arising from D²-sampling individually for each point by a factor between 1-ε₁ and 1+ε₂, for parameters ε₁ ∈ [0,1) and ε₂ ≥ 0. We prove that noisy k-means++ computes an 𝒪(log² k)-approximation in expectation. We use the analysis of noisy k-means++ to design a moderately greedy k-means++ algorithm.
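
For concreteness, plain k-means++ seeding and the greedy variant differ only in how many D²-sampled candidates are drawn per iteration; the Python sketch below (ours, for small point sets given as tuples, assuming more distinct points than centers) covers both via the parameter num_candidates = 𝓁.

import random

def kmeanspp(points, k, num_candidates=1, rng=random):
    # num_candidates = 1 is plain k-means++; num_candidates = ell > 1 is
    # greedy k-means++: draw ell candidates by D^2-sampling and keep the
    # one that decreases the k-means objective the most.
    def d2(p, centers):
        return min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)

    centers = [rng.choice(points)]  # first center uniform at random
    for _ in range(k - 1):
        weights = [d2(p, centers) for p in points]  # D^2-sampling weights
        cands = rng.choices(points, weights=weights, k=num_candidates)
        best = min(cands,
                   key=lambda c: sum(d2(p, centers + [c]) for p in points))
        centers.append(best)
    return centers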

Cite as

Anup Bhattacharya, Jan Eube, Heiko Röglin, and Melanie Schmidt. Noisy, Greedy and Not so Greedy k-Means++. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 18:1-18:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.ESA.2020.18,
  author =	{Bhattacharya, Anup and Eube, Jan and R\"{o}glin, Heiko and Schmidt, Melanie},
  title =	{{Noisy, Greedy and Not so Greedy k-Means++}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{18:1--18:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.18},
  URN =		{urn:nbn:de:0030-drops-128848},
  doi =		{10.4230/LIPIcs.ESA.2020.18},
  annote =	{Keywords: k-means++, greedy, adaptive sampling}
}
An Algorithmic Study of Fully Dynamic Independent Sets for Map Labeling

Authors: Sujoy Bhore, Guangping Li, and Martin Nöllenburg


Abstract
Map labeling is a classical problem in cartography and geographic information systems (GIS) that asks to place labels for area, line, and point features, with the goal of selecting and placing the maximum number of independent, i.e., overlap-free, labels. A practically interesting case is point labeling with axis-parallel rectangular labels of common size. In a fully dynamic setting, at each time step, either a new label appears or an existing label disappears. The challenge is then to maintain a maximum-cardinality subset of pairwise independent labels with sub-linear update time. Motivated by this, we study the maximal independent set (MIS) and maximum independent set (Max-IS) problems on fully dynamic (insertion/deletion model) sets of axis-parallel rectangles of two types: (i) uniform height and width, and (ii) uniform height and arbitrary width; both settings can be modeled as rectangle intersection graphs. We present the first deterministic algorithm for maintaining a MIS (and thus a 4-approximate Max-IS) of a dynamic set of uniform rectangles with amortized sub-logarithmic update time. This breaks the natural barrier of Ω(Δ) update time (where Δ is the maximum degree in the graph) for vertex updates, presented by Assadi et al. (STOC 2018). We continue by investigating Max-IS and provide a series of deterministic dynamic approximation schemes. For uniform rectangles, we first give an algorithm that maintains a 4-approximate Max-IS with O(1) update time. In a subsequent algorithm, we establish the trade-off between approximation quality 2(1+1/k) and update time O(k² log n), for k ∈ ℕ. We conclude with an algorithm that maintains a 2-approximate Max-IS for dynamic sets of unit-height and arbitrary-width rectangles with O(ω log n) update time, where ω is the maximum size of an independent set of rectangles stabbed by any horizontal line. We have implemented our algorithms and report the results of an experimental comparison exploring the trade-off between solution quality and update time for synthetic and real-world map labeling instances.

Cite as

Sujoy Bhore, Guangping Li, and Martin Nöllenburg. An Algorithmic Study of Fully Dynamic Independent Sets for Map Labeling. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 19:1-19:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bhore_et_al:LIPIcs.ESA.2020.19,
  author =	{Bhore, Sujoy and Li, Guangping and N\"{o}llenburg, Martin},
  title =	{{An Algorithmic Study of Fully Dynamic Independent Sets for Map Labeling}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{19:1--19:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.19},
  URN =		{urn:nbn:de:0030-drops-128856},
  doi =		{10.4230/LIPIcs.ESA.2020.19},
  annote =	{Keywords: Independent Sets, Dynamic Algorithms, Rectangle Intersection Graphs, Approximation Algorithms, Experimental Evaluation}
}
Document
Lower Bounds and Approximation Algorithms for Search Space Sizes in Contraction Hierarchies

Authors: Johannes Blum and Sabine Storandt


Abstract
Contraction hierarchies (CH) is a prominent preprocessing-based technique that accelerates the computation of shortest paths in road networks by reducing the search space size of a bidirectional Dijkstra run. To explain the practical success of CH, several theoretical upper bounds for the maximum search space size were derived in previous work. For example, it was shown that in minor-closed graph families search space sizes in 𝒪(√n) can be achieved (with n denoting the number of nodes in the graph), and search space sizes in 𝒪(h log D) in graphs of highway dimension h and diameter D. In this paper, we primarily focus on lower bounds. We prove that the average search space size in a so-called weak CH is in Ω(b_α) for α ≥ 2/3, where b_α is the size of a smallest α-balanced node separator. This discovery allows us to describe the first approximation algorithm for the average search space size. Our new lower bound also shows that the 𝒪(√n) bound for minor-closed graph families is tight. Furthermore, we investigate the relationship of CH to the highway dimension and the skeleton dimension of the graph in more depth, and prove new lower bound and incomparability results. Finally, we discuss how lower bounds for strong CH can be obtained by solving a HittingSet problem defined on a set of carefully chosen subgraphs of the input network.

Cite as

Johannes Blum and Sabine Storandt. Lower Bounds and Approximation Algorithms for Search Space Sizes in Contraction Hierarchies. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 20:1-20:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blum_et_al:LIPIcs.ESA.2020.20,
  author =	{Blum, Johannes and Storandt, Sabine},
  title =	{{Lower Bounds and Approximation Algorithms for Search Space Sizes in Contraction Hierarchies}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{20:1--20:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.20},
  URN =		{urn:nbn:de:0030-drops-128861},
  doi =		{10.4230/LIPIcs.ESA.2020.20},
  annote =	{Keywords: contraction hierarchies, search space size, balanced separator, tree decomposition}
}
Document
The Minimization of Random Hypergraphs

Authors: Thomas Bläsius, Tobias Friedrich, and Martin Schirneck


Abstract
We investigate the maximum-entropy model B_{n,m,p} for random n-vertex, m-edge multi-hypergraphs with expected edge size pn. We show that the expected size of the minimization min(B_{n,m,p}), i.e., the number of inclusion-wise minimal edges of B_{n,m,p}, undergoes a phase transition with respect to m. If m is at most 1/(1-p)^{(1-p)n}, then E[|min(B_{n,m,p})|] is of order Θ(m), while for m ≥ 1/(1-p)^{(1-p+ε)n} for any ε > 0, it is Θ(2^{(H(α) + (1-α) log₂ p) n}/√n). Here, H denotes the binary entropy function and α = - (log_{1-p} m)/n. The result implies that the maximum expected number of minimal edges over all m is Θ((1+p)ⁿ/√n). Our structural findings have algorithmic implications for minimizing an input hypergraph. This has applications in the profiling of relational databases as well as for the Orthogonal Vectors problem studied in fine-grained complexity. We make several technical contributions that are of independent interest in probability theory. First, we improve the Chernoff-Hoeffding theorem on the tail of the binomial distribution. Specifically, we show that for a binomial variable Y ∼ Bin(n,p) and any 0 < x < p, it holds that P[Y ≤ xn] = Θ(2^{-D(x‖p) n}/√n), where D is the binary Kullback-Leibler divergence between Bernoulli distributions. We give explicit upper and lower bounds on the constants hidden in the big-O notation that hold for all n. Second, we establish that the probability of a set of cardinality i being minimal after m i.i.d. maximum-entropy trials exhibits a sharp threshold behavior at i^* = n + log_{1-p} m.
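
The model is easy to simulate, which is handy for observing the phase transition empirically. Below is a minimal Python sketch directly following the definitions above (our naming; the quadratic-time minimization is the naive method, not the paper's algorithmics):

    import random

    def sample_Bnmp(n, m, p):
        # m i.i.d. edges; each vertex joins an edge independently with prob. p,
        # the maximum-entropy distribution with expected edge size p*n
        return [frozenset(v for v in range(n) if random.random() < p)
                for _ in range(m)]

    def minimization(edges):
        # inclusion-wise minimal edges (duplicate edges counted once)
        distinct = set(edges)
        return [e for e in distinct
                if not any(f < e for f in distinct)]  # f < e: proper subset

    print(len(minimization(sample_Bnmp(n=30, m=200, p=0.3))))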

Cite as

Thomas Bläsius, Tobias Friedrich, and Martin Schirneck. The Minimization of Random Hypergraphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.ESA.2020.21,
  author =	{Bl\"{a}sius, Thomas and Friedrich, Tobias and Schirneck, Martin},
  title =	{{The Minimization of Random Hypergraphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.21},
  URN =		{urn:nbn:de:0030-drops-128871},
  doi =		{10.4230/LIPIcs.ESA.2020.21},
  annote =	{Keywords: Chernoff-Hoeffding theorem, maximum entropy, maximization, minimization, phase transition, random hypergraphs}
}
Document
Acyclic, Star and Injective Colouring: A Complexity Picture for H-Free Graphs

Authors: Jan Bok, Nikola Jedličková, Barnaby Martin, Daniël Paulusma, and Siani Smith


Abstract
A k-colouring c of a graph G is a mapping V(G) → {1,2,…,k} such that c(u) ≠ c(v) whenever u and v are adjacent. The corresponding decision problem is Colouring. A colouring is acyclic, star, or injective if any two colour classes induce a forest, a star forest, or a disjoint union of vertices and edges, respectively. Hence, every injective colouring is a star colouring and every star colouring is an acyclic colouring. The corresponding decision problems are Acyclic Colouring, Star Colouring and Injective Colouring (the last problem is also known as L(1,1)-Labelling). A classical complexity result on Colouring is a well-known dichotomy for H-free graphs, which was established twenty years ago (in this context, a graph is H-free if and only if it does not contain H as an induced subgraph). Moreover, this result has led to a large collection of results, which have helped us to better understand the complexity of Colouring. In contrast, there has been no systematic study of the computational complexity of Acyclic Colouring, Star Colouring and Injective Colouring, despite the numerous algorithmic and structural results that have appeared over the years. We initiate such a systematic complexity study and, similarly to the study of Colouring, we use the class of H-free graphs as a testbed. We prove the following results: 1) We give almost complete classifications for the computational complexity of Acyclic Colouring, Star Colouring and Injective Colouring for H-free graphs. 2) If the number of colours k is fixed, that is, not part of the input, we give full complexity classifications for each of the three problems for H-free graphs. From our study we conclude that for fixed k the three problems behave in the same way, but this is no longer true if k is part of the input. To obtain several of our results we prove stronger complexity results that in particular involve the girth of a graph and the class of line graphs.
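
Because all three notions are conditions on pairs of colour classes, a given colouring is easy to verify even though finding one is hard; the Python checker below is a direct transcription of the definitions above (ours, for illustration).

    from itertools import combinations

    def is_forest(vertices, edges):
        # union-find cycle detection
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    def check_colouring(vertices, edges, colour):
        # colour: dict vertex -> colour class; must be a proper colouring
        if any(colour[u] == colour[v] for u, v in edges):
            return None
        classes = {}
        for v in vertices:
            classes.setdefault(colour[v], set()).add(v)
        acyclic = star = injective = True
        for a, b in combinations(classes, 2):
            part = classes[a] | classes[b]
            sub = [(u, v) for u, v in edges if u in part and v in part]
            deg = {v: 0 for v in part}
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if not is_forest(part, sub):
                acyclic = False          # two classes induce a cycle
            if any(deg[u] >= 2 and deg[v] >= 2 for u, v in sub):
                star = False             # some component is not a star
            if any(d >= 2 for d in deg.values()):
                injective = False        # not just isolated vertices and edges
        return {"acyclic": acyclic, "star": star, "injective": injective}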

Cite as

Jan Bok, Nikola Jedličková, Barnaby Martin, Daniël Paulusma, and Siani Smith. Acyclic, Star and Injective Colouring: A Complexity Picture for H-Free Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 22:1-22:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bok_et_al:LIPIcs.ESA.2020.22,
  author =	{Bok, Jan and Jedli\v{c}kov\'{a}, Nikola and Martin, Barnaby and Paulusma, Dani\"{e}l and Smith, Siani},
  title =	{{Acyclic, Star and Injective Colouring: A Complexity Picture for H-Free Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{22:1--22:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.22},
  URN =		{urn:nbn:de:0030-drops-128885},
  doi =		{10.4230/LIPIcs.ESA.2020.22},
  annote =	{Keywords: acyclic colouring, star colouring, injective colouring, H-free, dichotomy}
}
Document
An Algorithmic Weakening of the Erdős-Hajnal Conjecture

Authors: Édouard Bonnet, Stéphan Thomassé, Xuan Thang Tran, and Rémi Watrigant


Abstract
We study the approximability of the Maximum Independent Set (MIS) problem in H-free graphs (that is, graphs which do not admit H as an induced subgraph). As one motivation, we investigate the following conjecture: for every fixed graph H, there exists a constant δ > 0 such that MIS can be n^{1-δ}-approximated in H-free graphs, where n denotes the number of vertices of the input graph. We first prove that a constructive version of the celebrated Erdős-Hajnal conjecture implies ours. We then prove that the set of graphs H satisfying our conjecture is closed under the so-called graph substitution. This, together with the known polynomial-time algorithms for MIS in H-free graphs (e.g. in P₆-free and fork-free graphs), implies that our conjecture holds for many graphs H for which the Erdős-Hajnal conjecture is still open. We then focus on improving the constant δ for some graph classes: we prove that the classical Local Search algorithm provides an OPT^{1-1/t}-approximation in K_{t,t}-free graphs (hence a √{OPT}-approximation in C₄-free graphs), and that, while there is a simple √n-approximation in triangle-free graphs, it cannot be improved to n^{1/4-ε} for any ε > 0 unless NP ⊆ BPP. More generally, we show that there is a constant c such that MIS in graphs of girth γ cannot be n^{c/γ}-approximated. Up to a constant factor in the exponent, this matches the ratio of a known approximation algorithm by Monien and Speckenmeyer, and by Murphy. To the best of our knowledge, this is the first strong (i.e., Ω(n^δ) for some δ > 0) inapproximability result for Maximum Independent Set in a proper hereditary class.
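
The "classical Local Search algorithm" referred to above, in its simplest 2-for-1 form, reads as follows in Python (a generic sketch of the technique, not the paper's analysed variant): start from a greedy maximal independent set and repeatedly trade at most one solution vertex for two compatible outside vertices.

    from itertools import combinations

    def local_search_mis(adj):
        # adj: dict vertex -> set of neighbours
        independent = set()
        for v in adj:                      # greedy maximal starting solution
            if not (adj[v] & independent):
                independent.add(v)
        improved = True
        while improved:
            improved = False
            outside = [v for v in adj if v not in independent]
            for u, w in combinations(outside, 2):
                if w in adj[u]:
                    continue               # u, w must be non-adjacent
                blockers = (adj[u] | adj[w]) & independent
                if len(blockers) <= 1:     # swap at most one vertex for two
                    independent -= blockers
                    independent |= {u, w}
                    improved = True
                    break
        return independent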

Cite as

Édouard Bonnet, Stéphan Thomassé, Xuan Thang Tran, and Rémi Watrigant. An Algorithmic Weakening of the Erdős-Hajnal Conjecture. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bonnet_et_al:LIPIcs.ESA.2020.23,
  author =	{Bonnet, \'{E}douard and Thomass\'{e}, St\'{e}phan and Tran, Xuan Thang and Watrigant, R\'{e}mi},
  title =	{{An Algorithmic Weakening of the Erd\H{o}s-Hajnal Conjecture}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{23:1--23:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.23},
  URN =		{urn:nbn:de:0030-drops-128894},
  doi =		{10.4230/LIPIcs.ESA.2020.23},
  annote =	{Keywords: Approximation, Maximum Independent Set, H-free Graphs, Erd\H{o}s-Hajnal conjecture}
}
Document
Reconfiguration of Spanning Trees with Many or Few Leaves

Authors: Nicolas Bousquet, Takehiro Ito, Yusuke Kobayashi, Haruka Mizuta, Paul Ouvrard, Akira Suzuki, and Kunihiro Wasa


Abstract
Let G be a graph and T₁,T₂ be two spanning trees of G. We say that T₁ can be transformed into T₂ via an edge flip if there exist two edges e ∈ T₁ and f ∈ T₂ such that T₂ = (T₁⧵{e}) ∪ {f}. Since spanning trees form a matroid, one can indeed transform a spanning tree into any other via a sequence of edge flips, as observed in [Takehiro Ito et al., 2011]. We investigate the problem of determining, given two spanning trees T₁,T₂ with an additional property Π, whether there exists an edge flip transformation from T₁ to T₂ keeping property Π all along. First, we show that determining whether there exists a transformation from T₁ to T₂ such that all the trees of the sequence have at most k (for any fixed k ≥ 3) leaves is PSPACE-complete. We then prove that determining whether there exists a transformation from T₁ to T₂ such that all the trees of the sequence have at least k leaves (where k is part of the input) is PSPACE-complete even when restricted to split, bipartite or planar graphs. We complete this result by showing that the problem becomes polynomial-time solvable for cographs, for interval graphs, and when k = n-2.
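
The matroid exchange observation of [Takehiro Ito et al., 2011] is constructive and worth sketching: as long as the current tree differs from T₂, bring in a missing edge f of T₂ and flip out an edge e ∉ T₂ on the unique cycle that f closes. The Python sketch below (ours) produces such an unconstrained flip sequence; maintaining the leaf-count property Π along the way is exactly what the paper shows to be hard.

    from collections import deque

    def tree_path(tree_edges, s, t):
        # BFS path from s to t inside a tree given as frozenset edges
        adj = {}
        for e in tree_edges:
            u, v = tuple(e)
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        prev, queue = {s: None}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        path, v = [], t
        while prev[v] is not None:
            path.append(frozenset((v, prev[v])))
            v = prev[v]
        return path

    def flip_sequence(t1, t2):
        # t1, t2: spanning trees of one graph, as sets of frozenset({u, v})
        current, flips = set(t1), []
        while current != t2:
            f = next(iter(t2 - current))               # edge to bring in
            u, v = tuple(f)
            cycle = tree_path(current, u, v)           # f closes this cycle
            e = next(x for x in cycle if x not in t2)  # safe edge to drop
            current.remove(e)
            current.add(f)
            flips.append((e, f))
        return flips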

Cite as

Nicolas Bousquet, Takehiro Ito, Yusuke Kobayashi, Haruka Mizuta, Paul Ouvrard, Akira Suzuki, and Kunihiro Wasa. Reconfiguration of Spanning Trees with Many or Few Leaves. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bousquet_et_al:LIPIcs.ESA.2020.24,
  author =	{Bousquet, Nicolas and Ito, Takehiro and Kobayashi, Yusuke and Mizuta, Haruka and Ouvrard, Paul and Suzuki, Akira and Wasa, Kunihiro},
  title =	{{Reconfiguration of Spanning Trees with Many or Few Leaves}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.24},
  URN =		{urn:nbn:de:0030-drops-128909},
  doi =		{10.4230/LIPIcs.ESA.2020.24},
  annote =	{Keywords: combinatorial reconfiguration, spanning trees, PSPACE, polynomial-time algorithms}
}
Document
When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation

Authors: Karl Bringmann, Marvin Künnemann, and André Nusser


Abstract
Consider the natural question of how to measure the similarity of curves in the plane by a quantity that is invariant under translations of the curves. Such a measure is justified whenever we aim to quantify the similarity of the curves' shapes rather than their positioning in the plane, e.g., to compare the similarity of handwritten characters. Perhaps the most natural such notion is the (discrete) Fréchet distance under translation. Unfortunately, the algorithmic literature on this problem yields a very pessimistic view: On polygonal curves with n vertices, the fastest algorithm runs in time 𝒪(n^4.667) and cannot be improved below n^{4-o(1)} unless the Strong Exponential Time Hypothesis fails. Can we still obtain an implementation that is efficient on realistic datasets? Spurred by the surprising performance of recent implementations for the Fréchet distance, we perform algorithm engineering for the Fréchet distance under translation. Our solution combines fast, but inexact tools from continuous optimization (specifically, branch-and-bound algorithms for global Lipschitz optimization) with exact, but expensive algorithms from computational geometry (specifically, problem-specific algorithms based on an arrangement construction). We combine these two ingredients to obtain an exact decision algorithm for the Fréchet distance under translation. For the related task of computing the distance value up to a desired precision, we engineer and compare different methods. On a benchmark set involving handwritten characters and route trajectories, our implementation answers a typical query for either task in the range of a few milliseconds up to a second on standard desktop hardware. We believe that our implementation will enable, for the first time, the use of the Fréchet distance under translation in applications, whereas previous algorithmic approaches would have been computationally infeasible. Furthermore, we hope that our combination of continuous optimization and computational geometry will inspire similar approaches for further algorithmic questions.
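
For reference, the fixed-translation core of the problem is the classical quadratic dynamic program for the discrete Fréchet distance, shown below in Python; the engineering effort of the paper concerns the far harder outer minimisation over all translations of one curve, for which repeating this DP naively is hopeless.

    from math import hypot

    def discrete_frechet(P, Q):
        # P, Q: lists of 2D points (tuples); classical O(nm) dynamic program
        n, m = len(P), len(Q)
        d = lambda i, j: hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])
        dp = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                prev = [dp[x][y] for x, y in ((i-1, j), (i, j-1), (i-1, j-1))
                        if x >= 0 and y >= 0]
                dp[i][j] = max(min(prev) if prev else 0.0, d(i, j))
        return dp[-1][-1]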

Cite as

Karl Bringmann, Marvin Künnemann, and André Nusser. When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2020.25,
  author =	{Bringmann, Karl and K\"{u}nnemann, Marvin and Nusser, Andr\'{e}},
  title =	{{When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fr\'{e}chet Distance Under Translation}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.25},
  URN =		{urn:nbn:de:0030-drops-128912},
  doi =		{10.4230/LIPIcs.ESA.2020.25},
  annote =	{Keywords: Fr\'{e}chet Distance, Computational Geometry, Continuous Optimization, Algorithm Engineering}
}
Document
Improved Algorithms for Alternating Matrix Space Isometry: From Theory to Practice

Authors: Peter A. Brooksbank, Yinan Li, Youming Qiao, and James B. Wilson


Abstract
Motivated by testing isomorphism of p-groups, we study the alternating matrix space isometry problem (AltMatSpIso), which asks to decide whether two m-dimensional subspaces of n×n alternating (skew-symmetric if the field is not of characteristic 2) matrices are the same up to a change of basis. Over a finite field 𝔽_p with some prime p≠2, solving AltMatSpIso in time p^O(n+m) is equivalent to testing isomorphism of p-groups of class 2 and exponent p in time polynomial in the group order. The latter problem has long been considered a bottleneck case for the group isomorphism problem. Recently, Li and Qiao presented an average-case algorithm for AltMatSpIso in time p^O(n) when n and m are linearly related (FOCS '17). In this paper, we present an average-case algorithm for AltMatSpIso in time p^O(n+m). Besides removing the restriction on the relation between n and m, our algorithm is considerably simpler, and the average-case analysis is stronger. We then implement our algorithm, with suitable modifications, in Magma. Our experiments indicate that it improves significantly over default (brute-force) algorithms for this problem.

Cite as

Peter A. Brooksbank, Yinan Li, Youming Qiao, and James B. Wilson. Improved Algorithms for Alternating Matrix Space Isometry: From Theory to Practice. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{brooksbank_et_al:LIPIcs.ESA.2020.26,
  author =	{Brooksbank, Peter A. and Li, Yinan and Qiao, Youming and Wilson, James B.},
  title =	{{Improved Algorithms for Alternating Matrix Space Isometry: From Theory to Practice}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{26:1--26:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.26},
  URN =		{urn:nbn:de:0030-drops-128920},
  doi =		{10.4230/LIPIcs.ESA.2020.26},
  annote =	{Keywords: Alternating Matrix Spaces, Average-case Algorithm, p-groups of Class 2 and Exponent p, Magma}
}
Document
Sometimes Reliable Spanners of Almost Linear Size

Authors: Kevin Buchin, Sariel Har-Peled, and Dániel Oláh


Abstract
Reliable spanners can withstand huge failures, even when a linear number of vertices are deleted from the network. In case of failures, some of the remaining vertices of a reliable spanner may no longer admit the spanner property, but this collateral damage is bounded by a fraction of the size of the attack. It is known that Ω(n log n) edges are needed to achieve this strong property, where n is the number of vertices in the network, even in one dimension. Constructions of reliable geometric (1+ε)-spanners, for n points in ℝ^d, are known, where the resulting graph has 𝒪(n log n (log log n)⁶) edges. Here, we show randomized constructions of smaller spanners that have the desired reliability property in expectation or with good probability. The new construction is simple and potentially practical, replacing a hierarchical usage of expanders (which renders the previous constructions impractical) with a simple skip-list-like construction. This results in a 1-spanner, on the line, that has a linear number of edges. Using this, we present a construction of a reliable spanner in ℝ^d with 𝒪(n (log log n)² log log log n) edges.
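
The skip-list-like construction on the line can be sketched in a few lines of Python (our reading of the abstract, construction only; the reliability analysis and the lift to ℝ^d are the substance of the paper): promote each point to the next level with probability 1/2 and connect consecutive survivors on every level. All edges lie on the line, so monotone routing paths have stretch 1, and the expected number of edges is linear.

    import random

    def skip_list_spanner(points):
        # points: distinct coordinates on the line
        edges = set()
        level = sorted(points)
        while len(level) > 1:
            edges.update(zip(level, level[1:]))   # link consecutive points
            level = [p for p in level if random.random() < 0.5]  # promote
        return edges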

Cite as

Kevin Buchin, Sariel Har-Peled, and Dániel Oláh. Sometimes Reliable Spanners of Almost Linear Size. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 27:1-27:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{buchin_et_al:LIPIcs.ESA.2020.27,
  author =	{Buchin, Kevin and Har-Peled, Sariel and Ol\'{a}h, D\'{a}niel},
  title =	{{Sometimes Reliable Spanners of Almost Linear Size}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{27:1--27:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.27},
  URN =		{urn:nbn:de:0030-drops-128934},
  doi =		{10.4230/LIPIcs.ESA.2020.27},
  annote =	{Keywords: Geometric spanners, vertex failures, reliability}
}
Document
New Binary Search Tree Bounds via Geometric Inversions

Authors: Parinya Chalermsook and Wanchote Po Jiamjitrak


Abstract
The long-standing dynamic optimality conjecture postulates the existence of a dynamic binary search tree (BST) that is O(1)-competitive to all other dynamic BSTs. Despite attempts from many groups of researchers, we believe the conjecture is still out of reach. One of the main reasons is the lack of the "right" potential functions for the problem: existing results that prove various consequences of dynamic optimality rely on very different potential function techniques, while proving dynamic optimality requires a single potential function that can be used to derive all these consequences. In this paper, we propose a new potential function, which we call extended (geometric) inversion. Inversion is arguably the most natural potential function principle that has been used in competitive analysis, but it has never been used in the context of BSTs. We use our potential function to derive new results, as well as to streamline and strengthen existing results. First, we show that a broad class of BST algorithms (including Greedy and Splay) are O(1)-competitive to the Move-to-Root algorithm and therefore have the simulation embedding property, a new BST property that was recently introduced and studied by Levy and Tarjan (SODA 2019). This result, besides substantially expanding the list of BST algorithms having this property, gives the first potential function proof of the simulation embedding property for BSTs (thus unifying apparently different kinds of results). Moreover, our analysis is the first in which the costs of two dynamic binary search trees are compared against each other directly and systematically. Second, we use our new potential function to unify and strengthen known BST bounds, e.g., showing that Greedy satisfies the weighted dynamic finger property within a multiplicative factor of (5+o(1)).

Cite as

Parinya Chalermsook and Wanchote Po Jiamjitrak. New Binary Search Tree Bounds via Geometric Inversions. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 28:1-28:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chalermsook_et_al:LIPIcs.ESA.2020.28,
  author =	{Chalermsook, Parinya and Jiamjitrak, Wanchote Po},
  title =	{{New Binary Search Tree Bounds via Geometric Inversions}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{28:1--28:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.28},
  URN =		{urn:nbn:de:0030-drops-128944},
  doi =		{10.4230/LIPIcs.ESA.2020.28},
  annote =	{Keywords: Binary Search Tree, Potential Function, Inversion, Data Structures, Online Algorithms}
}
Document
More on Change-Making and Related Problems

Authors: Timothy M. Chan and Qizheng He


Abstract
Given a set of n integer-valued coin types and a target value t, the well-known change-making problem asks for the minimum number of coins that sum to t, assuming an unlimited number of coins of each type. In the more general all-targets version of the problem, we want the minimum number of coins summing to j, for every j = 0,…,t. The textbook dynamic programming algorithm solves the all-targets problem in O(nt) time. Recently, Chan and He (SOSA'20) described a number of O(t polylog t)-time algorithms for the original (single-target) version of the change-making problem, but not the all-targets version. In this paper, we obtain a number of new results on change-making and related problems: - We present a new algorithm for the all-targets change-making problem with running time Õ(t^{4/3}), improving a previous Õ(t^{3/2})-time algorithm. - We present a very simple Õ(u²+t)-time algorithm for the all-targets change-making problem, where u denotes the maximum coin value. The analysis of the algorithm uses a theorem of Erdős and Graham (1972) on the Frobenius problem. This algorithm can be extended to solve the all-capacities version of the unbounded knapsack problem (for integer item weights bounded by u). - For the original (single-target) coin changing problem, we describe a simple modification of one of Chan and He’s algorithms that runs in Õ(u) time (instead of Õ(t)). - For the original (single-capacity) unbounded knapsack problem, we describe a simple algorithm that runs in Õ(nu) time, improving previous near-u²-time algorithms. - We also observe how one of our ideas implies a new result on the minimum word break problem, an optimization version of a string problem studied by Bringmann et al. (FOCS'17), generalizing change-making (which corresponds to the unary special case).
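
Since every result above is measured against it, the textbook O(nt) dynamic program for the all-targets version is worth spelling out; a minimal Python version (with the convention that unreachable targets get infinity):

    def all_targets_change_making(coins, t):
        # dp[j] = minimum number of coins summing to exactly j,
        # with an unbounded supply of each coin type
        INF = float('inf')
        dp = [0] + [INF] * t
        for j in range(1, t + 1):
            for c in coins:                      # O(n) work per target
                if c <= j and dp[j - c] + 1 < dp[j]:
                    dp[j] = dp[j - c] + 1
        return dp

    print(all_targets_change_making([1, 4, 9], 12))  # last entry: 3 = 4+4+4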

Cite as

Timothy M. Chan and Qizheng He. More on Change-Making and Related Problems. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 29:1-29:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chan_et_al:LIPIcs.ESA.2020.29,
  author =	{Chan, Timothy M. and He, Qizheng},
  title =	{{More on Change-Making and Related Problems}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{29:1--29:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.29},
  URN =		{urn:nbn:de:0030-drops-128958},
  doi =		{10.4230/LIPIcs.ESA.2020.29},
  annote =	{Keywords: Coin changing, knapsack, dynamic programming, Frobenius problem, fine-grained complexity}
}
Document
The Maximum Binary Tree Problem

Authors: Karthekeyan Chandrasekaran, Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, Young-San Lin, and Minshen Zhu


Abstract
We introduce and investigate the approximability of the maximum binary tree problem (MBT) in directed and undirected graphs. The goal in MBT is to find a maximum-sized binary tree in a given graph. MBT is a natural variant of the well-studied longest path problem, since both can be viewed as finding a maximum-sized tree of bounded degree in a given graph. The connection to longest path motivates the study of MBT in directed acyclic graphs (DAGs), since the longest path problem is solvable efficiently in DAGs. In contrast, we show that MBT in DAGs is in fact hard: it has no efficient exp(-O(log n/ log log n))-approximation algorithm under the exponential time hypothesis, where n is the number of vertices in the input graph. In undirected graphs, we show that MBT has no efficient exp(-O(log^0.63 n))-approximation under the exponential time hypothesis. Our inapproximability results rely on self-improving reductions and structural properties of binary trees. We also show constant-factor inapproximability assuming P ≠ NP. In addition to inapproximability results, we present algorithmic results of two different flavors: (1) We design a randomized algorithm that decides in 2^k poly(n) time whether a given directed graph on n vertices contains a binary tree of size k. (2) Motivated by the longest heapable subsequence problem, introduced by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011), which is equivalent to MBT in permutation DAGs, we design efficient algorithms for MBT in bipartite permutation graphs.

Cite as

Karthekeyan Chandrasekaran, Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, Young-San Lin, and Minshen Zhu. The Maximum Binary Tree Problem. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 30:1-30:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.ESA.2020.30,
  author =	{Chandrasekaran, Karthekeyan and Grigorescu, Elena and Istrate, Gabriel and Kulkarni, Shubhang and Lin, Young-San and Zhu, Minshen},
  title =	{{The Maximum Binary Tree Problem}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{30:1--30:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.30},
  URN =		{urn:nbn:de:0030-drops-128967},
  doi =		{10.4230/LIPIcs.ESA.2020.30},
  annote =	{Keywords: maximum binary tree, heapability, inapproximability, fixed-parameter tractability}
}
Document
Single-Source Shortest Paths and Strong Connectivity in Dynamic Planar Graphs

Authors: Panagiotis Charalampopoulos and Adam Karczmarz


Abstract
Efficient algorithms for computing and processing additively weighted Voronoi diagrams on planar graphs have been instrumental in obtaining several recent breakthrough results, most notably the almost-optimal exact distance oracle for planar graphs [Charalampopoulos et al., STOC'19], and subquadratic algorithms for planar diameter [Cabello, SODA'17, Gawrychowski et al., SODA'18]. In this paper, we show how Voronoi diagrams can be useful in obtaining dynamic planar graph algorithms and apply them to classical problems such as dynamic single-source shortest paths and dynamic strongly connected components. First, we give a fully dynamic single-source shortest paths data structure for planar weighted digraphs with Õ(n^{4/5}) worst-case update time and O(log² n) query time. Here, a single update can either change the graph by inserting or deleting an edge, or reset the source s of interest. All known non-trivial planarity-exploiting exact dynamic single-source shortest paths algorithms to date had polynomial query time. Further, note that a data structure with strongly sublinear update time capable of answering distance queries between all pairs of vertices in polylogarithmic time would refute the APSP conjecture [Abboud and Dahlgaard, FOCS'16]. Somewhat surprisingly, the Voronoi diagram based approach we take for single-source shortest paths can also be used in the fully dynamic strongly connected components problem. In particular, we obtain a data structure maintaining a planar digraph under edge insertions and deletions, capable of returning the identifier of the strongly connected component of any query vertex. The worst-case update and query time bounds are the same as for our single-source distance oracle. To the best of our knowledge, this is the first fully dynamic strong-connectivity algorithm achieving both sublinear update time and polylogarithmic query time for an important class of digraphs.

Cite as

Panagiotis Charalampopoulos and Adam Karczmarz. Single-Source Shortest Paths and Strong Connectivity in Dynamic Planar Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 31:1-31:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{charalampopoulos_et_al:LIPIcs.ESA.2020.31,
  author =	{Charalampopoulos, Panagiotis and Karczmarz, Adam},
  title =	{{Single-Source Shortest Paths and Strong Connectivity in Dynamic Planar Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{31:1--31:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.31},
  URN =		{urn:nbn:de:0030-drops-128970},
  doi =		{10.4230/LIPIcs.ESA.2020.31},
  annote =	{Keywords: dynamic graph algorithms, planar graphs, single-source shortest paths, strong connectivity}
}
Document
The Number of Repetitions in 2D-Strings

Authors: Panagiotis Charalampopoulos, Jakub Radoszewski, Wojciech Rytter, Tomasz Waleń, and Wiktor Zuba


Abstract
The notions of periodicity and repetitions in strings, and hence those of runs and squares, naturally extend to two-dimensional strings. We consider two types of repetitions in 2D-strings: 2D-runs and quartics (quartics are a 2D-version of squares in standard strings). Amir et al. introduced 2D-runs, showed that there are 𝒪(n³) of them in an n × n 2D-string, and presented a simple construction giving a lower bound of Ω(n²) for their number (Theoretical Computer Science, 2020). We make a significant step towards closing the gap between these bounds by showing that the number of 2D-runs in an n × n 2D-string is 𝒪(n² log² n). In particular, our bound implies that the 𝒪(n²log n + output) run-time of the algorithm of Amir et al. for computing 2D-runs is also 𝒪(n² log² n). We expect this result to allow for exploiting 2D-runs algorithmically in the area of 2D pattern matching. A quartic is a 2D-string composed of 2 × 2 identical blocks (2D-strings) that was introduced by Apostolico and Brimkov (Theoretical Computer Science, 2000), where by quartics they meant only primitively rooted quartics, i.e. those built of a primitive block. Here our notion of quartics is more general and analogous to that of squares in 1D-strings. Apostolico and Brimkov showed that there are 𝒪(n² log² n) occurrences of primitively rooted quartics in an n × n 2D-string and that this bound is attainable. Consequently, the number of distinct primitively rooted quartics is 𝒪(n² log² n). The straightforward bound for the maximum number of distinct general quartics is 𝒪(n⁴). Here, we prove that the number of distinct general quartics is also 𝒪(n² log² n). This extends the rich combinatorial study of the number of distinct squares in a 1D-string, which was initiated by Fraenkel and Simpson (Journal of Combinatorial Theory, Series A, 1998), to two dimensions. Finally, we show some algorithmic applications of 2D-runs. Specifically, we present algorithms for computing all occurrences of primitively rooted quartics and counting all distinct general quartics in 𝒪(n² log² n) time, which is quasi-linear with respect to the size of the input. The former algorithm is optimal due to the lower bound of Apostolico and Brimkov. The latter can be seen as a continuation of works on enumeration of distinct squares in 1D-strings using runs (Crochemore et al., Theoretical Computer Science, 2014). However, the methods used in 2D are different because of different properties of 2D-runs and quartics.
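
To make the central object concrete: a 2D-string of size 2p × 2q is a quartic exactly when its four p × q quadrants coincide. The naive Python check and occurrence counter below follow this definition directly (brute force, far from the 𝒪(n² log² n) algorithms of the paper):

    def is_quartic(W):
        # W: list of equal-length rows; both dimensions must be even
        r, c = len(W), len(W[0])
        if r % 2 or c % 2:
            return False
        p, q = r // 2, c // 2
        return all(W[i][j] == W[i][j + q] == W[i + p][j] == W[i + p][j + q]
                   for i in range(p) for j in range(q))

    def count_quartic_occurrences(T):
        # enumerate every position and every even block size (brute force)
        n, m = len(T), len(T[0])
        return sum(is_quartic([row[j:j + w] for row in T[i:i + h]])
                   for i in range(n) for j in range(m)
                   for h in range(2, n - i + 1, 2)
                   for w in range(2, m - j + 1, 2))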

Cite as

Panagiotis Charalampopoulos, Jakub Radoszewski, Wojciech Rytter, Tomasz Waleń, and Wiktor Zuba. The Number of Repetitions in 2D-Strings. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{charalampopoulos_et_al:LIPIcs.ESA.2020.32,
  author =	{Charalampopoulos, Panagiotis and Radoszewski, Jakub and Rytter, Wojciech and Wale\'{n}, Tomasz and Zuba, Wiktor},
  title =	{{The Number of Repetitions in 2D-Strings}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.32},
  URN =		{urn:nbn:de:0030-drops-128987},
  doi =		{10.4230/LIPIcs.ESA.2020.32},
  annote =	{Keywords: 2D-run, quartic, run, square}
}
Document
New Bounds on Augmenting Steps of Block-Structured Integer Programs

Authors: Lin Chen, Martin Koutecký, Lei Xu, and Weidong Shi


Abstract
Iterative augmentation has recently emerged as an overarching method for solving Integer Programs (IP) in variable dimension, in stark contrast with the volume and flatness techniques of IP in fixed dimension. Here we consider 4-block n-fold integer programs, which are the most general class considered so far. A 4-block n-fold IP has a constraint matrix which consists of n copies of small matrices A, B, and D, and one copy of C, in a specific block structure. Iterative augmentation methods rely on the so-called Graver basis of the constraint matrix, which constitutes a set of fundamental augmenting steps. All existing algorithms rely on bounding the 𝓁₁- or 𝓁_∞-norm of elements of the Graver basis. Hemmecke et al. [Math. Prog. 2014] showed that 4-block n-fold IP has Graver elements of 𝓁_∞-norm at most 𝒪_FPT(n^{2^{s_D}}), leading to an algorithm with a similar runtime; here, s_D is the number of rows of matrix D and 𝒪_FPT hides a multiplicative factor that depends only on the small matrices A, B, C, and D. However, it remained open whether their bounds are tight, in particular, whether they could be improved to 𝒪_FPT(1), perhaps at least in some restricted cases. We prove that the 𝓁_∞-norm of the Graver elements of 4-block n-fold IP is upper bounded by 𝒪_FPT(n^{s_D}), improving significantly over the previous bound 𝒪_FPT(n^{2^{s_D}}). We also provide a matching lower bound of Ω(n^{s_D}), which even holds for arbitrary non-zero lattice elements, ruling out augmenting algorithms relying on even more restricted notions of augmentation than the Graver basis. We then consider a special case of 4-block n-fold in which C is a zero matrix, called 3-block n-fold IP. We show that while the 𝓁_∞-norm of its Graver elements is Ω(n^{s_D}), there exists a different decomposition into lattice elements whose 𝓁_∞-norm is bounded by 𝒪_FPT(1), which allows us to provide improved upper bounds on the 𝓁_∞-norm of Graver elements for 3-block n-fold IP. The key difference between the respective decompositions is that a Graver basis guarantees a sign-compatible decomposition; this property is critical in applications because it guarantees each step of the decomposition to be feasible. Consequently, our improved upper bounds let us establish faster algorithms for 3-block n-fold IP and 4-block IP, and our lower bounds strongly hint at parameterized hardness of 4-block and even 3-block n-fold IP. Furthermore, we show that 3-block n-fold IP is without loss of generality in the sense that 4-block n-fold IP can be solved in FPT oracle time by taking an algorithm for 3-block n-fold IP as an oracle.

Cite as

Lin Chen, Martin Koutecký, Lei Xu, and Weidong Shi. New Bounds on Augmenting Steps of Block-Structured Integer Programs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 33:1-33:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ESA.2020.33,
  author =	{Chen, Lin and Kouteck\'{y}, Martin and Xu, Lei and Shi, Weidong},
  title =	{{New Bounds on Augmenting Steps of Block-Structured Integer Programs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{33:1--33:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.33},
  URN =		{urn:nbn:de:0030-drops-128994},
  doi =		{10.4230/LIPIcs.ESA.2020.33},
  annote =	{Keywords: Integer Programming, Graver basis, Fixed parameter tractable}
}
Document
Distance Bounds for High Dimensional Consistent Digital Rays and 2-D Partially-Consistent Digital Rays

Authors: Man-Kwun Chiu, Matias Korman, Martin Suderland, and Takeshi Tokuyama


Abstract
We consider the problem of digitizing Euclidean segments. Specifically, we look for a constructive method to connect any two points in ℤ^d. The construction must be consistent (that is, satisfy the natural extension of the Euclidean axioms) while resembling the Euclidean segments as much as possible. Previous work has shown asymptotically tight results in two dimensions with Θ(log N) error, where resemblance between segments is measured with the Hausdorff distance, and N is the L₁ distance between the two points. This construction was considered tight because of an Ω(log N) lower bound that applies to any consistent construction in ℤ². In this paper we observe that the lower bound does not directly extend to higher dimensions. We give an alternative argument showing that any consistent construction in d dimensions must have Ω(log^{1/(d-1)} N) error. We tie the error of a consistent construction in high dimensions to the error of similar weak constructions in two dimensions (constructions for which some points need not satisfy all the axioms). This not only opens the possibility of constructions with o(log N) error in high dimensions, but also opens up an interesting line of research into the tradeoff between the number of axiom violations and the error of the construction. In order to show our lower bound, we also consider a colored variation of the concept of discrepancy of a set of points that we find of independent interest.

Cite as

Man-Kwun Chiu, Matias Korman, Martin Suderland, and Takeshi Tokuyama. Distance Bounds for High Dimensional Consistent Digital Rays and 2-D Partially-Consistent Digital Rays. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 34:1-34:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chiu_et_al:LIPIcs.ESA.2020.34,
  author =	{Chiu, Man-Kwun and Korman, Matias and Suderland, Martin and Tokuyama, Takeshi},
  title =	{{Distance Bounds for High Dimensional Consistent Digital Rays and 2-D Partially-Consistent Digital Rays}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{34:1--34:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.34},
  URN =		{urn:nbn:de:0030-drops-129002},
  doi =		{10.4230/LIPIcs.ESA.2020.34},
  annote =	{Keywords: Consistent Digital Line Segments, Digital Geometry, Discrepancy}
}
Document
Finding Large H-Colorable Subgraphs in Hereditary Graph Classes

Authors: Maria Chudnovsky, Jason King, Michał Pilipczuk, Paweł Rzążewski, and Sophie Spirkl


Abstract
We study the Max Partial H-Coloring problem: given a graph G, find the largest induced subgraph of G that admits a homomorphism into H, where H is a fixed pattern graph without loops. Note that when H is a complete graph on k vertices, the problem reduces to finding the largest induced k-colorable subgraph, which for k = 2 is equivalent (by complementation) to Odd Cycle Transversal. We prove that for every fixed pattern graph H without loops, Max Partial H-Coloring can be solved: - in {P₅,F}-free graphs in polynomial time, whenever F is a threshold graph; - in {P₅,bull}-free graphs in polynomial time; - in P₅-free graphs in time n^𝒪(ω(G)); - in {P₆,1-subdivided claw}-free graphs in time n^𝒪(ω(G)³). Here, n is the number of vertices of the input graph G and ω(G) is the maximum size of a clique in G. Furthermore, by combining the mentioned algorithms for P₅-free and for {P₆,1-subdivided claw}-free graphs with a simple branching procedure, we obtain subexponential-time algorithms for Max Partial H-Coloring in these classes of graphs. Finally, we show that even a restricted variant of Max Partial H-Coloring is NP-hard in the considered subclasses of P₅-free graphs, if we allow loops on H.
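
While finding good solutions is the subject of the paper, feasibility of a solution to Max Partial H-Coloring is trivial to verify, and writing the verifier down fixes the definitions; a Python sketch (our naming) follows. The objective value of a feasible map phi is simply len(phi), the number of retained vertices.

    def verify_partial_h_coloring(g_edges, h_edges, phi):
        # phi: dict mapping a subset of V(G), the retained vertices, to V(H);
        # feasible iff every retained edge of G is mapped to an edge of H
        h_adj = set()
        for a, b in h_edges:
            h_adj.add((a, b))
            h_adj.add((b, a))   # H is undirected (and loopless here)
        return all((phi[u], phi[v]) in h_adj
                   for u, v in g_edges if u in phi and v in phi)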

Cite as

Maria Chudnovsky, Jason King, Michał Pilipczuk, Paweł Rzążewski, and Sophie Spirkl. Finding Large H-Colorable Subgraphs in Hereditary Graph Classes. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 35:1-35:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chudnovsky_et_al:LIPIcs.ESA.2020.35,
  author =	{Chudnovsky, Maria and King, Jason and Pilipczuk, Micha{\l} and Rz\k{a}\.{z}ewski, Pawe{\l} and Spirkl, Sophie},
  title =	{{Finding Large H-Colorable Subgraphs in Hereditary Graph Classes}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{35:1--35:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.35},
  URN =		{urn:nbn:de:0030-drops-129019},
  doi =		{10.4230/LIPIcs.ESA.2020.35},
  annote =	{Keywords: homomorphisms, hereditary graph classes, odd cycle transversal}
}
Document
Compact Oblivious Routing in Weighted Graphs

Authors: Philipp Czerner and Harald Räcke


Abstract
The space requirement of routing tables is an important characteristic of routing schemes. For the cost measure of minimizing the total network load, a variety of results show tradeoffs between stretch and the required size of the routing tables. This paper designs compact routing schemes for the cost measure of congestion, where the goal is to minimize the maximum relative load of a link in the network (the relative load of a link is its traffic divided by its bandwidth). We show that for arbitrary undirected graphs we can obtain oblivious routing strategies with competitive ratio 𝒪̃(1) that have header length 𝒪̃(1), label size 𝒪̃(1), and require routing tables of size 𝒪̃(deg(v)) at each vertex v in the graph. This improves a result of Räcke and Schmid, who proved a similar result for unweighted graphs.

Cite as

Philipp Czerner and Harald Räcke. Compact Oblivious Routing in Weighted Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 36:1-36:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{czerner_et_al:LIPIcs.ESA.2020.36,
  author =	{Czerner, Philipp and R\"{a}cke, Harald},
  title =	{{Compact Oblivious Routing in Weighted Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{36:1--36:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.36},
  URN =		{urn:nbn:de:0030-drops-129024},
  doi =		{10.4230/LIPIcs.ESA.2020.36},
  annote =	{Keywords: Oblivious Routing, Compact Routing, Competitive Analysis}
}
Document
Approximation Algorithms for Clustering with Dynamic Points

Authors: Shichuan Deng, Jian Li, and Yuval Rabani


Abstract
In many classic clustering problems, we seek to sketch a massive data set of n points (a.k.a. clients) in a metric space, by segmenting them into k categories or clusters, each cluster represented concisely by a single point in the metric space (a.k.a. the cluster’s center or its facility). The goal is to find such a sketch that minimizes some objective that depends on the distances between the clients and their respective facilities (the objective is a.k.a. the service cost). Two notable examples are the k-center/k-supplier problem, where the objective is to minimize the maximum distance from any client to its facility, and the k-median problem, where the objective is to minimize the sum over all clients of the distance from the client to its facility. In practical applications of clustering, the data set may evolve over time, reflecting an evolution of the underlying clustering model. Thus, in such applications, a good clustering must simultaneously represent the temporal data set well, but also not change too drastically between time steps. In this paper, we initiate the study of a dynamic version of clustering problems that aims to capture these considerations. In this version there are T time steps, and in each time step t ∈ {1,2,…,T}, the set of clients to be clustered may change, and we can move the k facilities between time steps. The general goal is to minimize certain combinations of the service cost and the facility movement cost, or to minimize one subject to constraints on the other. More specifically, we study two concrete problems in this framework: the Dynamic Ordered k-Median and the Dynamic k-Supplier problem. Our technical contributions are as follows: - We consider the Dynamic Ordered k-Median problem, where the objective is to minimize the weighted sum of ordered distances over all time steps, plus the total cost of moving the facilities between time steps. We present one constant-factor approximation algorithm for T = 2 and another approximation algorithm for fixed T ≥ 3. - We consider the Dynamic k-Supplier problem, where the objective is to minimize the maximum distance from any client to its facility, subject to the constraint that between time steps the maximum distance moved by any facility is no more than a given threshold. When the number of time steps T is 2, we present a simple constant-factor approximation algorithm and a bi-criteria constant-factor approximation algorithm for the outlier version, where some of the clients can be discarded. We also show that it is NP-hard to approximate the problem within any factor for T ≥ 3.

Cite as

Shichuan Deng, Jian Li, and Yuval Rabani. Approximation Algorithms for Clustering with Dynamic Points. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{deng_et_al:LIPIcs.ESA.2020.37,
  author =	{Deng, Shichuan and Li, Jian and Rabani, Yuval},
  title =	{{Approximation Algorithms for Clustering with Dynamic Points}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.37},
  URN =		{urn:nbn:de:0030-drops-129037},
  doi =		{10.4230/LIPIcs.ESA.2020.37},
  annote =	{Keywords: clustering, dynamic points, multi-objective optimization}
}
Document
A Sub-Linear Time Framework for Geometric Optimization with Outliers in High Dimensions

Authors: Hu Ding


Abstract
Many real-world problems can be formulated as geometric optimization problems in high dimensions, especially in the fields of machine learning and data mining. Moreover, we often need to take outliers into account when optimizing the objective functions, and their presence can make the problems much more challenging than their vanilla versions. In this paper, we first study the fundamental minimum enclosing ball (MEB) with outliers problem; partly inspired by the core-set method of Bădoiu and Clarkson, we propose a sub-linear time bi-criteria approximation algorithm based on two novel techniques, the Uniform-Adaptive Sampling method and the Sandwich Lemma. To the best of our knowledge, our result is the first sub-linear time algorithm for MEB with outliers in high dimensions whose sample size (i.e., the number of sampled points) is independent of both the number of input points n and the dimensionality d. Furthermore, we observe that these two techniques can be generalized to deal with a broader range of geometric optimization problems with outliers in high dimensions, including flat fitting, k-center clustering, and SVM with outliers, and thereby achieve sub-linear time algorithms for these problems as well.
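
For background, here is a minimal sketch of the classic core-set iteration of Bădoiu and Clarkson for plain MEB (without outliers), which partly inspires the paper's techniques; it is not the paper's bi-criteria algorithm, and the iteration count is the standard textbook bound.

import math

def meb_badoiu_clarkson(points, eps):
    """(1+eps)-approximate minimum enclosing ball via the classic core-set
    iteration: repeatedly pull the center toward the farthest point.
    About 1/eps^2 iterations suffice for the guarantee."""
    c = list(points[0])
    iters = max(1, math.ceil(1.0 / (eps * eps)))
    for i in range(1, iters + 1):
        far = max(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(c, p)))
        step = 1.0 / (i + 1)
        c = [a + step * (b - a) for a, b in zip(c, far)]
    r = max(math.sqrt(sum((a - b) ** 2 for a, b in zip(c, p))) for p in points)
    return c, r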

Cite as

Hu Ding. A Sub-Linear Time Framework for Geometric Optimization with Outliers in High Dimensions. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ding:LIPIcs.ESA.2020.38,
  author =	{Ding, Hu},
  title =	{{A Sub-Linear Time Framework for Geometric Optimization with Outliers in High Dimensions}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.38},
  URN =		{urn:nbn:de:0030-drops-129045},
  doi =		{10.4230/LIPIcs.ESA.2020.38},
  annote =	{Keywords: minimum enclosing ball, outliers, shape fitting, high dimensions, sub-linear time}
}
Document
Practical Performance of Space Efficient Data Structures for Longest Common Extensions

Authors: Patrick Dinklage, Johannes Fischer, Alexander Herlez, Tomasz Kociumaka, and Florian Kurpicz


Abstract
For a text T[1,n], a Longest Common Extension (LCE) query lce_T(i,j) asks for the length of the longest common prefix of the suffixes T[i,n] and T[j,n], identified by their starting positions 1 ≤ i,j ≤ n. A classic problem in stringology asks to preprocess a static text T[1,n] over an alphabet of size σ so that LCE queries can be answered efficiently on-line. Since its introduction in the 1980s, this problem has found numerous applications: in suffix sorting, edit distance computation, approximate pattern matching, finding regularities, string mining, and many more. Textbook solutions offer O(n) preprocessing time and O(1) query time, but they employ memory-heavy data structures, such as suffix arrays, which in practice are several times bigger than the text itself. Very recently, more space efficient solutions using O(n log σ) bits of total space or even only O(log n) bits of extra space have been proposed: string synchronizing sets [Kempa and Kociumaka, STOC'19, and Birenzwige et al., SODA'20] and in-place fingerprinting [Prezza, SODA'18]. The goal of this article is to present well-engineered implementations of these new solutions and to study their practicality on a commonly agreed text corpus. We show that both perform extremely well in practice, with space consumption of only around 10% of the input size for string synchronizing sets (around 20% for highly repetitive texts), and essentially no extra space for fingerprinting. Interestingly, our experiments also show that both solutions become much faster than naive scanning even when finding common prefixes of moderate length, contradicting a common belief that sophisticated data structures for LCE queries are not competitive with naive approaches [Ilie and Tinta, SPIRE'09].
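
For reference, the naive scanning baseline that such engineered structures are measured against fits in a few lines of Python (0-indexed here; an illustrative sketch):

def lce_naive(T, i, j):
    """Length of the longest common prefix of suffixes T[i:] and T[j:],
    computed by direct scanning: O(result) time, no preprocessing."""
    n, k = len(T), 0
    while i + k < n and j + k < n and T[i + k] == T[j + k]:
        k += 1
    return k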

Cite as

Patrick Dinklage, Johannes Fischer, Alexander Herlez, Tomasz Kociumaka, and Florian Kurpicz. Practical Performance of Space Efficient Data Structures for Longest Common Extensions. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dinklage_et_al:LIPIcs.ESA.2020.39,
  author =	{Dinklage, Patrick and Fischer, Johannes and Herlez, Alexander and Kociumaka, Tomasz and Kurpicz, Florian},
  title =	{{Practical Performance of Space Efficient Data Structures for Longest Common Extensions}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.39},
  URN =		{urn:nbn:de:0030-drops-129050},
  doi =		{10.4230/LIPIcs.ESA.2020.39},
  annote =	{Keywords: text indexing, longest common prefix, space efficient data structures}
}
Document
First-Order Model-Checking in Random Graphs and Complex Networks

Authors: Jan Dreier, Philipp Kuinke, and Peter Rossmanith


Abstract
Complex networks are everywhere. They appear for example in the form of biological networks, social networks, or computer networks and have been studied extensively. Efficient algorithms to solve problems on complex networks play a central role in today’s society. Algorithmic meta-theorems show that many problems can be solved efficiently. Since logic is a powerful tool to model problems, it has been used to obtain very general meta-theorems. In this work, we consider all problems definable in first-order logic and analyze which properties of complex networks allow them to be solved efficiently. The standard mathematical tools for describing complex networks are random graph models. We define a property of random graph models called α-power-law-boundedness. Roughly speaking, a random graph is α-power-law-bounded if it does not admit strong clustering and its degree sequence is bounded by a power-law distribution with exponent at least α (i.e., the fraction of vertices with degree k is roughly O(k^{-α})). We solve the first-order model-checking problem (parameterized by the length of the formula) in almost linear FPT time on random graph models satisfying this property with α ≥ 3. This means in particular that one can solve every problem expressible in first-order logic in almost linear expected time on these random graph models. This includes, for example, preferential attachment graphs, Chung-Lu graphs, configuration graphs, and sparse Erdős-Rényi graphs. Our results match known hardness results and generalize previous tractability results on this topic.
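
As a toy illustration of the degree condition only (not of the model-checking algorithm), one can test a finite degree sequence against the O(k^{-α}) tail bound; the constant C below is a hypothetical knob hiding the big-O.

from collections import Counter

def degree_tail_bounded(degrees, alpha, C=1.0):
    """Finite-sample proxy for the power-law-boundedness condition:
    check that the fraction of vertices with degree k is at most
    C * k^(-alpha) for every occurring degree k >= 1."""
    n = len(degrees)
    counts = Counter(degrees)
    return all(counts[k] / n <= C * k ** (-alpha)
               for k in counts if k >= 1)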

Cite as

Jan Dreier, Philipp Kuinke, and Peter Rossmanith. First-Order Model-Checking in Random Graphs and Complex Networks. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 40:1-40:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dreier_et_al:LIPIcs.ESA.2020.40,
  author =	{Dreier, Jan and Kuinke, Philipp and Rossmanith, Peter},
  title =	{{First-Order Model-Checking in Random Graphs and Complex Networks}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{40:1--40:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.40},
  URN =		{urn:nbn:de:0030-drops-129068},
  doi =		{10.4230/LIPIcs.ESA.2020.40},
  annote =	{Keywords: random graphs, average case analysis, first-order model-checking}
}
Document
Optimally Handling Commitment Issues in Online Throughput Maximization

Authors: Franziska Eberle, Nicole Megow, and Kevin Schewior


Abstract
We consider a fundamental online scheduling problem in which jobs with processing times and deadlines arrive online over time at their release dates. The task is to determine a feasible preemptive schedule on m machines that maximizes the number of jobs that complete before their deadline. Due to strong impossibility results for competitive analysis, it is commonly required that jobs contain some slack ε > 0, which means that the feasible time window for scheduling a job is at least 1+ε times its processing time. In this paper, we answer the question of how to handle commitment requirements, which enforce that a scheduler has to guarantee, at a certain point in time, the completion of admitted jobs. This is highly relevant, e.g., in providing cloud-computing services, as it disallows last-minute rejections of critical tasks. We present the first online algorithm for handling commitment on parallel machines for arbitrary slack ε. When the scheduler must commit upon starting a job, the algorithm is Θ(1/ε)-competitive. Somewhat surprisingly, this is the same optimal performance bound (up to constants) as for scheduling without commitment on a single machine. If commitment decisions must be made before a job’s slack becomes less than a δ-fraction of its size, we prove a competitive ratio of 𝒪(1/(ε - δ)) for 0 < δ < ε. This result nicely interpolates between commitment upon starting a job and commitment upon arrival. For the latter commitment model, it is known that no (randomized) online algorithm admits any bounded competitive ratio.

Cite as

Franziska Eberle, Nicole Megow, and Kevin Schewior. Optimally Handling Commitment Issues in Online Throughput Maximization. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{eberle_et_al:LIPIcs.ESA.2020.41,
  author =	{Eberle, Franziska and Megow, Nicole and Schewior, Kevin},
  title =	{{Optimally Handling Commitment Issues in Online Throughput Maximization}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.41},
  URN =		{urn:nbn:de:0030-drops-129076},
  doi =		{10.4230/LIPIcs.ESA.2020.41},
  annote =	{Keywords: Deadline scheduling, throughput, online algorithms, competitive analysis}
}
Document
A Polynomial Kernel for Line Graph Deletion

Authors: Eduard Eiben and William Lochet


Abstract
The line graph of a graph G is the graph L(G) whose vertex set is the edge set of G, with an edge between e,f ∈ E(G) whenever e and f share an endpoint in G. A graph is called a line graph if it is the line graph of some graph. We study the Line-Graph-Edge Deletion problem, which asks whether we can delete at most k edges from the input graph G such that the resulting graph is a line graph. More precisely, we give a polynomial kernel for Line-Graph-Edge Deletion with O(k⁵) vertices. This answers an open question posed by Falk Hüffner at the Workshop on Kernels (WorKer) in 2013.
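
The line graph construction itself is straightforward to state in code; a short illustrative sketch, brute force over edge pairs:

from itertools import combinations

def line_graph(edges):
    """Build L(G): vertices are the edges of G; two are adjacent iff the
    original edges share an endpoint."""
    edges = [frozenset(e) for e in edges]
    adj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if e & f:  # the two edges share an endpoint in G
            adj[e].add(f)
            adj[f].add(e)
    return adj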

Cite as

Eduard Eiben and William Lochet. A Polynomial Kernel for Line Graph Deletion. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 42:1-42:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{eiben_et_al:LIPIcs.ESA.2020.42,
  author =	{Eiben, Eduard and Lochet, William},
  title =	{{A Polynomial Kernel for Line Graph Deletion}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{42:1--42:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.42},
  URN =		{urn:nbn:de:0030-drops-129088},
  doi =		{10.4230/LIPIcs.ESA.2020.42},
  annote =	{Keywords: Kernelization, line graphs, H-free editing, graph modification problem}
}
Document
Approximate CVP_p in Time 2^{0.802 n}

Authors: Friedrich Eisenbrand and Moritz Venzin


Abstract
We show that a constant factor approximation of the shortest and closest lattice vector problem w.r.t. any 𝓁_p-norm can be computed in time 2^{(0.802 +ε) n}. This matches the currently fastest constant factor approximation algorithm for the shortest vector problem w.r.t. 𝓁₂. To obtain our result, we combine the latter algorithm w.r.t. 𝓁₂ with geometric insights related to coverings.
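
For intuition about the problem (emphatically not the 2^{0.802 n} algorithm), a brute-force CVP_p sketch that enumerates integer coefficients in a small box; exponential in the dimension, with the function name and the box parameter purely illustrative:

from itertools import product

def cvp_bruteforce(basis, target, p=2, box=3):
    """Closest lattice vector to `target` w.r.t. the l_p norm (finite p),
    found by enumerating integer coefficient vectors in [-box, box]^n."""
    n, m = len(basis), len(target)
    best, best_norm = None, float("inf")
    for coeffs in product(range(-box, box + 1), repeat=n):
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(m)]
        norm = sum(abs(v[j] - target[j]) ** p for j in range(m)) ** (1 / p)
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm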

Cite as

Friedrich Eisenbrand and Moritz Venzin. Approximate CVP_p in Time 2^{0.802 n}. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 43:1-43:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{eisenbrand_et_al:LIPIcs.ESA.2020.43,
  author =	{Eisenbrand, Friedrich and Venzin, Moritz},
  title =	{{Approximate CVP\underlinep in Time 2^\{0.802 n\}}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{43:1--43:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.43},
  URN =		{urn:nbn:de:0030-drops-129097},
  doi =		{10.4230/LIPIcs.ESA.2020.43},
  annote =	{Keywords: Shortest and closest vector problem, approximation algorithm, sieving, covering convex bodies}
}
Document
A (1-e^{-1}-ε)-Approximation for the Monotone Submodular Multiple Knapsack Problem

Authors: Yaron Fairstein, Ariel Kulik, Joseph (Seffi) Naor, Danny Raz, and Hadas Shachnai


Abstract
We study the problem of maximizing a monotone submodular function subject to a Multiple Knapsack constraint (SMKP). The input is a set I of items, each associated with a non-negative weight, and a set of bins having arbitrary capacities. Also, we are given a submodular, monotone and non-negative function f over subsets of the items. The objective is to find a subset of items A ⊆ I and a packing of these items in the bins, such that f(A) is maximized. SMKP is a natural extension of both Multiple Knapsack and the problem of monotone submodular maximization subject to a knapsack constraint. Our main result is a nearly optimal polynomial time (1-e^{-1}-ε)-approximation algorithm for the problem, for any ε > 0. Our algorithm relies on a refined analysis of techniques for constrained submodular optimization combined with sophisticated application of tools used in the development of approximation schemes for packing problems.

Cite as

Yaron Fairstein, Ariel Kulik, Joseph (Seffi) Naor, Danny Raz, and Hadas Shachnai. A (1-e^{-1}-ε)-Approximation for the Monotone Submodular Multiple Knapsack Problem. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 44:1-44:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fairstein_et_al:LIPIcs.ESA.2020.44,
  author =	{Fairstein, Yaron and Kulik, Ariel and Naor, Joseph (Seffi) and Raz, Danny and Shachnai, Hadas},
  title =	{{A (1-e^\{-1\}-\epsilon)-Approximation for the Monotone Submodular Multiple Knapsack Problem}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{44:1--44:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.44},
  URN =		{urn:nbn:de:0030-drops-129107},
  doi =		{10.4230/LIPIcs.ESA.2020.44},
  annote =	{Keywords: Submodular Optimization, Multiple Knapsack, Randomized Rounding}
}
Document
Linear Expected Complexity for Directional and Multiplicative Voronoi Diagrams

Authors: Chenglin Fan and Benjamin Raichel


Abstract
While the standard unweighted Voronoi diagram in the plane has linear worst-case complexity, many of its natural generalizations do not. This paper considers two such previously studied generalizations, namely multiplicative and semi Voronoi diagrams. These diagrams both have quadratic worst-case complexity, though here we show that their expected complexity is linear for certain natural randomized inputs. Specifically, we argue that the expected complexity is linear for (1) semi Voronoi diagrams, when the visible direction is randomly sampled, and (2) multiplicative diagrams, when either the weights are sampled from a constant-sized set or, in the more challenging case, the weights are arbitrary but the locations are sampled from a square.
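
The multiplicatively weighted diagram can be rendered by brute force directly from its defining rule, argmin_i w_i ⋅ dist(x, p_i); a minimal sketch with illustrative names:

import math

def mw_voronoi_cell(x, sites, weights):
    """Index of the site owning point x in the multiplicatively weighted
    Voronoi diagram. Evaluating this over a grid of sample points renders
    the diagram (and its quadratic-complexity cells) by brute force."""
    return min(range(len(sites)),
               key=lambda i: weights[i] * math.dist(x, sites[i]))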

Cite as

Chenglin Fan and Benjamin Raichel. Linear Expected Complexity for Directional and Multiplicative Voronoi Diagrams. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 45:1-45:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fan_et_al:LIPIcs.ESA.2020.45,
  author =	{Fan, Chenglin and Raichel, Benjamin},
  title =	{{Linear Expected Complexity for Directional and Multiplicative Voronoi Diagrams}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{45:1--45:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.45},
  URN =		{urn:nbn:de:0030-drops-129111},
  doi =		{10.4230/LIPIcs.ESA.2020.45},
  annote =	{Keywords: Voronoi Diagrams, Expected Complexity, Computational Geometry}
}
Document
Polynomial Time Approximation Schemes for Clustering in Low Highway Dimension Graphs

Authors: Andreas Emil Feldmann and David Saulpic


Abstract
We study clustering problems such as k-Median, k-Means, and Facility Location in graphs of low highway dimension, which is a graph parameter modeling transportation networks. It was previously shown that approximation schemes for these problems exist, which either run in quasi-polynomial time (assuming constant highway dimension) [Feldmann et al. SICOMP 2018] or run in FPT time (parameterized by the number of clusters k, the highway dimension, and the approximation factor) [Becker et al. ESA 2018, Braverman et al. 2020]. In this paper we show that a polynomial-time approximation scheme (PTAS) exists (assuming constant highway dimension). We also show that the considered problems are NP-hard on graphs of highway dimension 1.

Cite as

Andreas Emil Feldmann and David Saulpic. Polynomial Time Approximation Schemes for Clustering in Low Highway Dimension Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 46:1-46:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{feldmann_et_al:LIPIcs.ESA.2020.46,
  author =	{Feldmann, Andreas Emil and Saulpic, David},
  title =	{{Polynomial Time Approximation Schemes for Clustering in Low Highway Dimension Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{46:1--46:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.46},
  URN =		{urn:nbn:de:0030-drops-129129},
  doi =		{10.4230/LIPIcs.ESA.2020.46},
  annote =	{Keywords: Approximation Scheme, Clustering, Highway Dimension}
}
Document
Coresets for the Nearest-Neighbor Rule

Authors: Alejandro Flores-Velazco and David M. Mount


Abstract
Given a training set P of labeled points, the nearest-neighbor rule predicts the class of an unlabeled query point as the label of its closest point in the set. To improve the time and space complexity of classification, a natural question is how to reduce the training set without significantly affecting the accuracy of the nearest-neighbor rule. Nearest-neighbor condensation deals with finding a subset R ⊆ P such that for every point p ∈ P, p’s nearest neighbor in R has the same label as p. This relates to the concept of coresets, which can be broadly defined as subsets of the original set such that an exact result on the coreset corresponds to an approximate result on the original set. However, the guarantees of a coreset hold for any query point, and not only for the points of the training set. This paper introduces the concept of coresets for nearest-neighbor classification. We extend existing criteria used for condensation, and prove sufficient conditions to correctly classify any query point when using these subsets. Additionally, we prove that finding such subsets of minimum cardinality is NP-hard, and propose quadratic-time approximation algorithms with provable upper bounds on the size of their selected subsets. Moreover, we show how to improve one of these algorithms to have subquadratic runtime, making it the first of its kind for condensation.
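
The condensation criterion itself is easy to verify directly; a quadratic-time sketch, assuming Euclidean points and arbitrary tie-breaking (all names illustrative):

import math

def is_consistent_subset(P, labels, R_idx):
    """Check the nearest-neighbor condensation criterion: every p in P
    must have, among the points indexed by R_idx, a nearest neighbor
    carrying p's own label."""
    for i, p in enumerate(P):
        j = min(R_idx, key=lambda r: math.dist(p, P[r]))
        if labels[j] != labels[i]:
            return False
    return True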

Cite as

Alejandro Flores-Velazco and David M. Mount. Coresets for the Nearest-Neighbor Rule. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 47:1-47:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{floresvelazco_et_al:LIPIcs.ESA.2020.47,
  author =	{Flores-Velazco, Alejandro and Mount, David M.},
  title =	{{Coresets for the Nearest-Neighbor Rule}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{47:1--47:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.47},
  URN =		{urn:nbn:de:0030-drops-129138},
  doi =		{10.4230/LIPIcs.ESA.2020.47},
  annote =	{Keywords: coresets, nearest-neighbor rule, classification, nearest-neighbor condensation, training-set reduction, approximate nearest-neighbor, approximation algorithms}
}
Document
Kernelization of Whitney Switches

Authors: Fedor V. Fomin and Petr A. Golovach


Abstract
A fundamental theorem of Whitney from 1933 asserts that 2-connected graphs G and H are 2-isomorphic, or equivalently, their cycle matroids are isomorphic, if and only if G can be transformed into H by a series of operations called Whitney switches. In this paper we consider the quantitative question arising from Whitney’s theorem: given 2-isomorphic graphs, can we transform one into the other by applying at most k Whitney switches? This problem is already NP-complete for cycles, and we investigate its parameterized complexity. We show that the problem admits a kernel of size 𝒪(k), and thus is fixed-parameter tractable when parameterized by k.

Cite as

Fedor V. Fomin and Petr A. Golovach. Kernelization of Whitney Switches. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 48:1-48:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2020.48,
  author =	{Fomin, Fedor V. and Golovach, Petr A.},
  title =	{{Kernelization of Whitney Switches}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{48:1--48:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.48},
  URN =		{urn:nbn:de:0030-drops-129144},
  doi =		{10.4230/LIPIcs.ESA.2020.48},
  annote =	{Keywords: Whitney switch, 2-isomorphism, Parameterized Complexity, kernelization}
}
Document
Subexponential Parameterized Algorithms and Kernelization on Almost Chordal Graphs

Authors: Fedor V. Fomin and Petr A. Golovach


Abstract
We study algorithmic properties of the graph class Chordal-ke, that is, graphs that can be turned into a chordal graph by adding at most k edges or, equivalently, the class of graphs of fill-in at most k. We discover that a number of fundamental intractable optimization problems, parameterized by k, admit subexponential algorithms on graphs from Chordal-ke. While parameterized algorithms for many structural parameters, like vertex cover or treewidth, can be found in the literature, under the Exponential Time Hypothesis (ETH) the existence of subexponential parameterized algorithms for most structural parameters and optimization problems is highly unlikely. This is why we find the algorithmic behavior of the "fill-in parameterization" very unusual. Being intrigued by this behaviour, we identify a large class of optimization problems on Chordal-ke that admit algorithms with the typical running time 2^𝒪(√k log k) ⋅ n^𝒪(1). Examples of problems from this class are finding an independent set of maximum weight, finding a feedback vertex set or an odd cycle transversal of minimum weight, and finding a maximum induced planar subgraph. On the other hand, we show that some fundamental optimization problems, like finding an optimal graph coloring or finding a maximum clique, are FPT on Chordal-ke when parameterized by k but do not admit subexponential-in-k algorithms unless the ETH fails. Besides subexponential time algorithms, the class of Chordal-ke graphs appears to be appealing from the perspective of kernelization (with parameter k). While it is possible to show that most of the weighted variants of optimization problems do not admit polynomial-in-k kernels on Chordal-ke graphs, this does not exclude the existence of Turing kernelization and of kernelization for unweighted graphs. In particular, we construct a polynomial Turing kernel for Weighted Clique on Chordal-ke graphs. For (unweighted) Independent Set we design polynomial kernels on two interesting subclasses of Chordal-ke, namely, Interval-ke and Split-ke graphs.
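
For concreteness, the fill-in of a fixed elimination ordering can be computed directly; a graph belongs to Chordal-ke exactly when some ordering produces at most k fill edges. A minimal sketch with illustrative names:

def fill_in(adj, order):
    """Number of fill edges created by eliminating vertices in `order`:
    when v is eliminated, its not-yet-eliminated neighbors are turned
    into a clique; edges added in the process are the fill edges."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    fill = 0
    for v in order:
        nbrs = [u for u in adj[v] if u in adj]  # not yet eliminated
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
        del adj[v]
    return fill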

Cite as

Fedor V. Fomin and Petr A. Golovach. Subexponential Parameterized Algorithms and Kernelization on Almost Chordal Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 49:1-49:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2020.49,
  author =	{Fomin, Fedor V. and Golovach, Petr A.},
  title =	{{Subexponential Parameterized Algorithms and Kernelization on Almost Chordal Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{49:1--49:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.49},
  URN =		{urn:nbn:de:0030-drops-129157},
  doi =		{10.4230/LIPIcs.ESA.2020.49},
  annote =	{Keywords: Parameterized complexity, structural parameterization, subexponential algorithms, kernelization, chordal graphs, fill-in, independent set, clique, coloring}
}
Document
On the Complexity of Recovering Incidence Matrices

Authors: Fedor V. Fomin, Petr Golovach, Pranabendu Misra, and M. S. Ramanujan


Abstract
The incidence matrix of a graph is a fundamental object naturally appearing in many applications involving graphs, such as social networks, communication networks, or transportation networks. Often, the data collected about the incidence relations can have some slight noise. In this paper, we initiate the study of the computational complexity of recovering incidence matrices of graphs from a binary matrix: given a binary matrix M, decide whether it can be written as the superposition of two binary matrices L and S, where S is the incidence matrix of a graph from a specified graph class, and L is a matrix of (i) small rank or (ii) small (Hamming) weight; further, identify all those graphs whose incidence matrices form part of such a superposition. Here, L represents the noise in the input matrix M. Another motivation for this problem comes from the Matroid Minors project of Geelen, Gerards and Whittle, where perturbed graphic and co-graphic matroids play a prominent role. There, it is expected that a perturbed binary matroid (or its dual) is presented as L+S, where L is a low-rank matrix and S is the incidence matrix of a graph. Here, we address the complexity of constructing such a decomposition. When L is of small rank, we show that the problem is NP-complete, but it can be decided in time (mn)^O(r), where m,n are the dimensions of M and r is an upper bound on the rank of L. When L is of small weight, the problem is solvable in polynomial time (mn)^O(1). Furthermore, in many applications it is desirable to have the list of all possible solutions for further analysis. We show that our algorithms naturally extend to enumeration algorithms for the above two problems, with delays of (mn)^O(r) and (mn)^O(1), respectively, between consecutive outputs.

Cite as

Fedor V. Fomin, Petr Golovach, Pranabendu Misra, and M. S. Ramanujan. On the Complexity of Recovering Incidence Matrices. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2020.50,
  author =	{Fomin, Fedor V. and Golovach, Petr and Misra, Pranabendu and Ramanujan, M. S.},
  title =	{{On the Complexity of Recovering Incidence Matrices}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{50:1--50:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.50},
  URN =		{urn:nbn:de:0030-drops-129164},
  doi =		{10.4230/LIPIcs.ESA.2020.50},
  annote =	{Keywords: Graph Incidence Matrix, Matrix Recovery, Enumeration Algorithm}
}
Document
An Algorithmic Meta-Theorem for Graph Modification to Planarity and FOL

Authors: Fedor V. Fomin, Petr A. Golovach, Giannos Stamoulis, and Dimitrios M. Thilikos


Abstract
In general, a graph modification problem is defined by a graph modification operation ⊠ and a target graph property 𝒫. Typically, the modification operation ⊠ may be vertex removal, edge removal, edge contraction, or edge addition, and the question is, given a graph G and an integer k, whether it is possible to transform G to a graph in 𝒫 after applying the operation ⊠ to G at most k times. This problem has been extensively studied for particular instantiations of ⊠ and 𝒫. In this paper we consider the general property 𝒫_ϕ of being planar and, moreover, being a model of some First-Order Logic sentence ϕ (an FOL-sentence). We call the corresponding meta-problem Graph ⊠-Modification to Planarity and ϕ and prove the following algorithmic meta-theorem: there exists a function f: ℕ² → ℕ such that, for every ⊠ and every FOL sentence ϕ, the Graph ⊠-Modification to Planarity and ϕ problem is solvable in f(k,|ϕ|)⋅n² time. The proof constitutes a hybrid of two different classic techniques in graph algorithms. The first is the irrelevant vertex technique, typically used in the context of Graph Minors to deal with properties such as planarity or surface-embeddability (which are not FOL-expressible); the second is the use of Gaifman’s Locality Theorem, which is the theoretical basis for the meta-algorithmic study of FOL-expressible problems.

Cite as

Fedor V. Fomin, Petr A. Golovach, Giannos Stamoulis, and Dimitrios M. Thilikos. An Algorithmic Meta-Theorem for Graph Modification to Planarity and FOL. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2020.51,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Stamoulis, Giannos and Thilikos, Dimitrios M.},
  title =	{{An Algorithmic Meta-Theorem for Graph Modification to Planarity and FOL}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.51},
  URN =		{urn:nbn:de:0030-drops-129172},
  doi =		{10.4230/LIPIcs.ESA.2020.51},
  annote =	{Keywords: Graph modification Problems, Algorithmic meta-theorems, First Order Logic, Irrelevant vertex technique, Planar graphs, Surface embeddable graphs}
}
Document
A Constant-Factor Approximation for Directed Latency in Quasi-Polynomial Time

Authors: Zachary Friggstad and Chaitanya Swamy


Abstract
We consider the directed minimum latency problem (DirLat), wherein we seek a path P visiting all points (or clients) in a given asymmetric metric starting at a given root node r, so as to minimize the sum of the client waiting times, where the waiting time of a client v is the length of the r-v portion of P. We give the first constant-factor approximation guarantee for DirLat, but in quasi-polynomial time. Previously, a polynomial-time O(log n)-approximation was known [Z. Friggstad et al., 2013], and no better approximation guarantees were known even in quasi-polynomial time. A key ingredient of our result, and our chief technical contribution, is an extension of a recent result of [A. Köhne et al., 2019] showing that the integrality gap of the natural Held-Karp relaxation for asymmetric TSP-Path (ATSPP) is at most a constant, which itself builds on the similar breakthrough result established for asymmetric TSP (ATSP) by Svensson et al. [O. Svensson et al., 2018]. We show that the integrality gap of the Held-Karp relaxation for ATSPP is bounded by a constant even if the cut requirements of the LP relaxation are relaxed from x(δ^{in}(S)) ≥ 1 to x(δ^{in}(S)) ≥ ρ for some constant 1/2 < ρ ≤ 1. We also give a better approximation guarantee for the minimum total-regret problem, where the goal is to find a path P that minimizes the total time that nodes spend in excess of their shortest-path distances from r, which can be cast as a special case of DirLat involving so-called regret metrics.
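
The latency objective itself is simple to evaluate; a minimal sketch, where dist is assumed to be a dict-of-dicts of arc lengths and all names are illustrative:

def path_latency(dist, path):
    """Total latency of visiting `path` = [r, v1, v2, ...]: each client's
    waiting time is the length of the walk's prefix up to it, and the
    objective sums these waiting times over all clients."""
    t, total = 0, 0
    for u, v in zip(path, path[1:]):
        t += dist[u][v]   # arrive at v
        total += t        # v waited t units
    return total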

Cite as

Zachary Friggstad and Chaitanya Swamy. A Constant-Factor Approximation for Directed Latency in Quasi-Polynomial Time. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{friggstad_et_al:LIPIcs.ESA.2020.52,
  author =	{Friggstad, Zachary and Swamy, Chaitanya},
  title =	{{A Constant-Factor Approximation for Directed Latency in Quasi-Polynomial Time}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.52},
  URN =		{urn:nbn:de:0030-drops-129183},
  doi =		{10.4230/LIPIcs.ESA.2020.52},
  annote =	{Keywords: Approximation Algorithms, Directed Latency, TSP}
}
Document
On Compact RAC Drawings

Authors: Henry Förster and Michael Kaufmann


Abstract
We present new bounds for the required area of Right Angle Crossing (RAC) drawings for complete graphs, i.e., drawings where any two crossing edges are perpendicular to each other. First, we improve upon results by Didimo et al. [Walter Didimo et al., 2011] and Di Giacomo et al. [Emilio Di Giacomo et al., 2011] by showing how to compute a RAC drawing with three bends per edge in cubic area. We also show that quadratic area can be achieved when allowing eight bends per edge in general, or with three bends per edge for p-partite graphs. As a counterpart, we prove that in general quadratic area is not sufficient for RAC drawings with three bends per edge.
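
The RAC condition on a single pair of edge segments (e.g., between consecutive bends) reduces to an orientation test plus a dot product; a minimal sketch, exact for integer coordinates, with all names illustrative:

def rac_ok(s1, s2):
    """Return True iff the two open segments either do not properly cross,
    or cross at a right angle (direction vectors with zero dot product),
    as RAC drawings require of every crossing pair."""
    def orient(p, q, r):
        return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    crossing = (orient(s1[0], s1[1], s2[0]) * orient(s1[0], s1[1], s2[1]) < 0 and
                orient(s2[0], s2[1], s1[0]) * orient(s2[0], s2[1], s1[1]) < 0)
    if not crossing:
        return True  # nothing to check
    (ax, ay), (bx, by) = s1
    (cx, cy), (dx, dy) = s2
    return (bx-ax)*(dx-cx) + (by-ay)*(dy-cy) == 0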

Cite as

Henry Förster and Michael Kaufmann. On Compact RAC Drawings. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 53:1-53:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{forster_et_al:LIPIcs.ESA.2020.53,
  author =	{F\"{o}rster, Henry and Kaufmann, Michael},
  title =	{{On Compact RAC Drawings}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{53:1--53:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.53},
  URN =		{urn:nbn:de:0030-drops-129192},
  doi =		{10.4230/LIPIcs.ESA.2020.53},
  annote =	{Keywords: RAC drawings, visualization of dense graphs, compact drawings}
}
Document
Fast Preprocessing for Optimal Orthogonal Range Reporting and Range Successor with Applications to Text Indexing

Authors: Younan Gao, Meng He, and Yakov Nekrich


Abstract
Under the word RAM model, we design three data structures that can be constructed in O(n √{lg n}) time over n points in an n × n grid. The first data structure is an O(n lg^ε n)-word structure supporting orthogonal range reporting in O(lg lg n+k) time, where k denotes the output size and ε is an arbitrarily small constant. The second is an O(n lg lg n)-word structure supporting orthogonal range successor in O(lg lg n) time, while the third is an O(n lg^ε n)-word structure supporting sorted range reporting in O(lg lg n+k) time. The query times of these data structures are optimal when the space costs must be within O(n polylog n) words. Their exact space bounds match those of the best known results achieving the same query times, and the O(n √{lg n}) construction time beats the previous bounds on preprocessing. Previously, among 2d range search structures, only the orthogonal range counting structure of Chan and Pǎtraşcu (SODA 2010) and the linear-space, O(lg^ε n) query time structure for orthogonal range successor by Belazzougui and Puglisi (SODA 2016) could be built in the same O(n √{lg n}) time. Hence our work is the first to achieve the same preprocessing time for optimal orthogonal range reporting and range successor. We also apply our results to improve the construction time of text indexes.
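
For contrast, the query semantics can be stated via naive O(n)-per-query baselines; the structures above trade O(n √{lg n}) preprocessing for exponentially faster queries. One common formulation of range successor is assumed below (lowest point above y with x-coordinate in the range):

def range_report(points, x1, x2, y1, y2):
    """Naive orthogonal range reporting: all points in [x1,x2] x [y1,y2]."""
    return [(x, y) for (x, y) in points if x1 <= x <= x2 and y1 <= y <= y2]

def range_successor(points, x1, x2, y):
    """Naive orthogonal range successor: the point of smallest
    y-coordinate >= y whose x-coordinate lies in [x1, x2], or None."""
    cand = [p for p in points if x1 <= p[0] <= x2 and p[1] >= y]
    return min(cand, key=lambda p: p[1], default=None)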

Cite as

Younan Gao, Meng He, and Yakov Nekrich. Fast Preprocessing for Optimal Orthogonal Range Reporting and Range Successor with Applications to Text Indexing. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 54:1-54:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gao_et_al:LIPIcs.ESA.2020.54,
  author =	{Gao, Younan and He, Meng and Nekrich, Yakov},
  title =	{{Fast Preprocessing for Optimal Orthogonal Range Reporting and Range Successor with Applications to Text Indexing}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{54:1--54:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.54},
  URN =		{urn:nbn:de:0030-drops-129202},
  doi =		{10.4230/LIPIcs.ESA.2020.54},
  annote =	{Keywords: orthogonal range search, geometric data structures, orthogonal range reporting, orthogonal range successor, sorted range reporting, text indexing, word RAM}
}
Document
Dual Half-Integrality for Uncrossable Cut Cover and Its Application to Maximum Half-Integral Flow

Authors: Naveen Garg and Nikhil Kumar


Abstract
Given an edge weighted graph and a forest F, the 2-edge connectivity augmentation problem is to pick a minimum weighted set of edges, E', such that every connected component of E' ∪ F is 2-edge connected. Williamson et al. gave a 2-approximation algorithm (WGMV) for this problem using the primal-dual schema. We show that when edge weights are integral, the WGMV procedure can be modified to obtain a half-integral dual. The 2-edge connectivity augmentation problem has an interesting connection to routing flow in graphs where the union of supply and demand is planar. The half-integrality of the dual leads to a tight 2-approximate max-half-integral-flow min-multicut theorem.
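
The feasibility condition being augmented toward, 2-edge connectivity, can be verified per component with a single DFS (connected and bridgeless); a minimal recursive sketch for small graphs, not the WGMV procedure:

def is_two_edge_connected(n, edges):
    """Check that the graph on vertices 0..n-1 is connected and has no
    bridge (Tarjan low-link DFS). Recursive, so intended for small inputs;
    edge indices are tracked so parallel edges are handled correctly."""
    from collections import defaultdict
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc, low, time = {}, {}, [0]
    has_bridge = [False]
    def dfs(u, parent_edge):
        disc[u] = low[u] = time[0]; time[0] += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])  # back edge
            else:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    has_bridge[0] = True  # tree edge (u,v) is a bridge
    dfs(0, -1)
    return len(disc) == n and not has_bridge[0]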

Cite as

Naveen Garg and Nikhil Kumar. Dual Half-Integrality for Uncrossable Cut Cover and Its Application to Maximum Half-Integral Flow. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 55:1-55:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.ESA.2020.55,
  author =	{Garg, Naveen and Kumar, Nikhil},
  title =	{{Dual Half-Integrality for Uncrossable Cut Cover and Its Application to Maximum Half-Integral Flow}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{55:1--55:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.55},
  URN =		{urn:nbn:de:0030-drops-129214},
  doi =		{10.4230/LIPIcs.ESA.2020.55},
  annote =	{Keywords: Combinatorial Optimization, Multicommodity Flow, Network Design}
}
Document
An Efficient, Practical Algorithm and Implementation for Computing Multiplicatively Weighted Voronoi Diagrams

Authors: Martin Held and Stefan de Lorenzo


Abstract
We present a simple wavefront-like approach for computing multiplicatively weighted Voronoi diagrams of points and straight-line segments in the Euclidean plane. If the input sites may be assumed to be randomly weighted points, then the use of a so-called overlay arrangement [Har-Peled & Raichel, Discrete Comput. Geom. 53:547-568, 2015] makes it possible to achieve an expected runtime complexity of 𝒪(n log⁴ n), while still maintaining the simplicity of our approach. We implemented the full algorithm for weighted points as input sites, based on CGAL. The results of an experimental evaluation of our implementation suggest 𝒪(n log² n) as a practical bound on the runtime. Our algorithm can be extended to handle additive weights in addition to multiplicative weights, and it yields a truly simple 𝒪(n log n) solution to the one-dimensional version of this problem.

Cite as

Martin Held and Stefan de Lorenzo. An Efficient, Practical Algorithm and Implementation for Computing Multiplicatively Weighted Voronoi Diagrams. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 56:1-56:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{held_et_al:LIPIcs.ESA.2020.56,
  author =	{Held, Martin and de Lorenzo, Stefan},
  title =	{{An Efficient, Practical Algorithm and Implementation for Computing Multiplicatively Weighted Voronoi Diagrams}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{56:1--56:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.56},
  URN =		{urn:nbn:de:0030-drops-129224},
  doi =		{10.4230/LIPIcs.ESA.2020.56},
  annote =	{Keywords: Voronoi Diagram, multiplicative weight, additive weight, arc expansion, overlay arrangement, implementation, experiments, CGAL, exact arithmetic}
}
Document
Fully-Dynamic Coresets

Authors: Monika Henzinger and Sagar Kale


Abstract
With input sizes becoming massive, coresets - small yet representative summaries of the input - are more relevant than ever. A weighted set C_w that is a subset of the input is an ε-coreset if the cost of any feasible solution S with respect to C_w is within [1±ε] of the cost of S with respect to the original input. We give a very general technique to compute coresets in the fully-dynamic setting, where input points can be added or deleted. Given a static (i.e., not dynamic) ε-coreset-construction algorithm that runs in time t(n, ε, λ) and computes a coreset of size s(n, ε, λ), where n is the number of input points and 1-λ is the success probability, we give a fully-dynamic algorithm that computes an ε-coreset with worst-case update time O((log n) ⋅ t(s(n, ε/log n, λ/n), ε/log n, λ/n)) (this bound is stated informally), where the success probability is 1-λ. Our technique is a fully-dynamic analog of the merge-and-reduce technique, which is due to Har-Peled and Mazumdar [Har-Peled and Mazumdar, 2004] and is based on a technique of Bentley and Saxe [Jon Louis Bentley and James B. Saxe, 1980], and which applies to the insertion-only setting where points can only be added. Although our space usage is O(n), our technique works in the presence of an adaptive adversary, and we show that Ω(n) space is required when the adversary is adaptive. As a concrete implication of our technique, using the result of Braverman et al. [Braverman et al., 2016], we get fully-dynamic ε-coreset-construction algorithms for k-median and k-means with worst-case update time O(ε^{-2} k² log⁵ n log³ k) and coreset size O(ε^{-2} k log n log² k), ignoring log log n and log(1/ε) factors and assuming that ε = Ω(1/poly(n)) and λ = Ω(1/poly(n)) (which are very weak assumptions made only to make these bounds easy to parse). This results in the first fully-dynamic constant-approximation algorithms for k-median and k-means with update times O(poly(k, log n, ε^{-1})). Specifically, the dependence on k is only quadratic, and the bounds are worst-case. The best previous bound for both problems was amortized O(n log n) by Cohen-Addad et al. [Cohen-Addad et al., 2019] via randomized O(1)-coresets in O(n) space. We also show that under the OMv conjecture [Monika Henzinger et al., 2015], a fully-dynamic (4 - δ)-approximation algorithm for k-means must either have an amortized update time of Ω(k^{1-γ}) or an amortized query time of Ω(k^{2-γ}), where γ > 0 is a constant.
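
The ε-coreset guarantee can be spot-checked straight from its definition; the true definition quantifies over all feasible solutions, while the sketch below checks only sampled ones, with k-median as the example cost (all names illustrative):

import math

def kmedian_cost(S, points, weights):
    """Weighted k-median cost of centers S on a weighted point set."""
    return sum(w * min(math.dist(p, s) for s in S)
               for p, w in zip(points, weights))

def looks_like_eps_coreset(P, C, w, eps, sample_solutions, cost=kmedian_cost):
    """Necessary check of the coreset definition on sampled solutions S:
    cost on (C, w) must lie within [1-eps, 1+eps] times cost on P."""
    for S in sample_solutions:
        full = cost(S, P, [1] * len(P))
        core = cost(S, C, w)
        if not ((1 - eps) * full <= core <= (1 + eps) * full):
            return False
    return True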

Cite as

Monika Henzinger and Sagar Kale. Fully-Dynamic Coresets. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 57:1-57:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{henzinger_et_al:LIPIcs.ESA.2020.57,
  author =	{Henzinger, Monika and Kale, Sagar},
  title =	{{Fully-Dynamic Coresets}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{57:1--57:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.57},
  URN =		{urn:nbn:de:0030-drops-129230},
  doi =		{10.4230/LIPIcs.ESA.2020.57},
  annote =	{Keywords: Clustering, Coresets, Dynamic Algorithms}
}
Document
Dynamic Matching Algorithms in Practice

Authors: Monika Henzinger, Shahbaz Khan, Richard Paul, and Christian Schulz


Abstract
In recent years, significant advances have been made in the design and analysis of fully dynamic maximal matching algorithms. However, these theoretical results have received very little attention from the practical perspective. Few of the algorithms are implemented and tested on real datasets, and their practical potential is far from understood. In this paper, we attempt to bridge the gap between theory and practice that is currently observed for the fully dynamic maximal matching problem. We engineer several algorithms and empirically study those algorithms on an extensive set of dynamic instances.
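
A natural baseline such experiments typically include is the trivial fully dynamic maximal matching: match an inserted edge if both endpoints are free, and greedily rematch freed endpoints after a matched edge is deleted. A minimal sketch, not one of the paper's engineered algorithms:

class NaiveDynamicMatching:
    """Baseline fully dynamic maximal matching with O(degree)-time deletions.
    Invariant: `mate` always encodes a maximal matching of the current graph."""
    def __init__(self):
        self.adj = {}    # vertex -> set of current neighbors
        self.mate = {}   # vertex -> matched partner (absent if free)

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u], self.mate[v] = v, u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:           # a matched edge disappeared
            del self.mate[u], self.mate[v]
            for x in (u, v):                # try to rematch freed endpoints
                self._rematch(x)

    def _rematch(self, x):
        for y in self.adj.get(x, ()):
            if y not in self.mate:
                self.mate[x], self.mate[y] = y, x
                return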

Cite as

Monika Henzinger, Shahbaz Khan, Richard Paul, and Christian Schulz. Dynamic Matching Algorithms in Practice. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 58:1-58:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{henzinger_et_al:LIPIcs.ESA.2020.58,
  author =	{Henzinger, Monika and Khan, Shahbaz and Paul, Richard and Schulz, Christian},
  title =	{{Dynamic Matching Algorithms in Practice}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{58:1--58:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.58},
  URN =		{urn:nbn:de:0030-drops-129243},
  doi =		{10.4230/LIPIcs.ESA.2020.58},
  annote =	{Keywords: Matching, Dynamic Matching, Blossom Algorithm}
}
Document
Finding All Global Minimum Cuts in Practice

Authors: Monika Henzinger, Alexander Noe, Christian Schulz, and Darren Strash


Abstract
We present a practically efficient algorithm that finds all global minimum cuts in huge undirected graphs. Our algorithm uses a multitude of kernelization rules to reduce the graph to a small equivalent instance and then finds all minimum cuts using an optimized version of the algorithm of Nagamochi, Nakao and Ibaraki. In shared memory we are able to find all minimum cuts of graphs with up to billions of edges and millions of minimum cuts in a few minutes. We also give a new linear time algorithm to find the most balanced minimum cuts given as input the representation of all minimum cuts.

Cite as

Monika Henzinger, Alexander Noe, Christian Schulz, and Darren Strash. Finding All Global Minimum Cuts in Practice. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 59:1-59:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{henzinger_et_al:LIPIcs.ESA.2020.59,
  author =	{Henzinger, Monika and Noe, Alexander and Schulz, Christian and Strash, Darren},
  title =	{{Finding All Global Minimum Cuts in Practice}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{59:1--59:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.59},
  URN =		{urn:nbn:de:0030-drops-129255},
  doi =		{10.4230/LIPIcs.ESA.2020.59},
  annote =	{Keywords: Minimum Cut, Graph Algorithm, Algorithm Engineering, Cut Enumeration, Balanced Cut, Global Minimum Cut, Large-scale Graph Analysis}
}
Document
Approximate Turing Kernelization for Problems Parameterized by Treewidth

Authors: Eva-Maria C. Hols, Stefan Kratsch, and Astrid Pieterse


Abstract
We extend the notion of lossy kernelization, introduced by Lokshtanov et al. [STOC 2017], to approximate Turing kernelization. An α-approximate Turing kernel for a parameterized optimization problem is a polynomial-time algorithm that, when given access to an oracle that outputs c-approximate solutions in 𝒪(1) time, obtains an α ⋅ c-approximate solution to the considered problem, using oracle calls on instances of size at most f(k), for some function f that depends only on the parameter. Using this definition, we show that Independent Set parameterized by treewidth 𝓁 has a (1+ε)-approximate Turing kernel with 𝒪(𝓁²/ε) vertices, answering an open question posed by Lokshtanov et al. [STOC 2017]. Furthermore, we give (1+ε)-approximate Turing kernels for the following graph problems parameterized by treewidth: Vertex Cover, Edge Clique Cover, Edge-Disjoint Triangle Packing and Connected Vertex Cover. We generalize the results for Independent Set and Vertex Cover by showing that all graph problems that we call friendly admit (1+ε)-approximate Turing kernels of polynomial size when parameterized by treewidth. We use this to obtain approximate Turing kernels for Vertex-Disjoint H-packing for connected graphs H, Clique Cover, Feedback Vertex Set and Edge Dominating Set.

Cite as

Eva-Maria C. Hols, Stefan Kratsch, and Astrid Pieterse. Approximate Turing Kernelization for Problems Parameterized by Treewidth. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 60:1-60:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{hols_et_al:LIPIcs.ESA.2020.60,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan and Pieterse, Astrid},
  title =	{{Approximate Turing Kernelization for Problems Parameterized by Treewidth}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{60:1--60:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.60},
  URN =		{urn:nbn:de:0030-drops-129261},
  doi =		{10.4230/LIPIcs.ESA.2020.60},
  annote =	{Keywords: Approximation, Turing kernelization, Graph problems, Treewidth}
}
Document
The Fine-Grained Complexity of Median and Center String Problems Under Edit Distance

Authors: Gary Hoppenworth, Jason W. Bentley, Daniel Gibney, and Sharma V. Thankachan


Abstract
We present the first fine-grained complexity results on two classic problems on strings. The first one is the k-Median-Edit-Distance problem, where the input is a collection of k strings, each of length at most n, and the task is to find a new string that minimizes the sum of the edit distances from itself to all other strings in the input. Arising frequently in computational biology, this problem provides an important generalization of edit distance to multiple strings and is similar to the multiple sequence alignment problem in bioinformatics. We demonstrate that for any ε > 0 and k ≥ 2, an O(n^{k-ε}) time solution for the k-Median-Edit-Distance problem over an alphabet of size O(k) refutes the Strong Exponential Time Hypothesis (SETH). This provides the first matching conditional lower bound for the O(n^k) time algorithm established in 1975 by Sankoff. The second problem we study is the k-Center-Edit-Distance problem. Here also, the input is a collection of k strings, each of length at most n. The task is to find a new string that minimizes the maximum edit distance from itself to any other string in the input. We prove that the same conditional lower bound as before holds. Our results also imply new conditional lower bounds for the k-Tree-Alignment and the k-Bottleneck-Tree-Alignment problems studied in phylogenetics.
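
For orientation on the k = 2 case: by the triangle inequality, taking one of the two input strings as the median is optimal, so the problem degenerates to plain edit distance; the quadratic DP below is exactly the k = 2 slice of the O(n^k) Sankoff-style table (an illustration, not the lower-bound construction):

# For k = 2, choosing one input string as the median is optimal (triangle
# inequality), so k-Median-Edit-Distance reduces to plain edit distance.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = list(range(n + 1))                 # dp[j] = distance from a[:0] to b[:j]
    for i in range(1, m + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete a[i-1]
                        dp[j - 1] + 1,      # insert b[j-1]
                        prev_diag + (a[i - 1] != b[j - 1]))  # substitute
            prev_diag = cur
    return dp[n]

assert edit_distance("kitten", "sitting") == 3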

Cite as

Gary Hoppenworth, Jason W. Bentley, Daniel Gibney, and Sharma V. Thankachan. The Fine-Grained Complexity of Median and Center String Problems Under Edit Distance. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 61:1-61:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{hoppenworth_et_al:LIPIcs.ESA.2020.61,
  author =	{Hoppenworth, Gary and Bentley, Jason W. and Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{The Fine-Grained Complexity of Median and Center String Problems Under Edit Distance}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{61:1--61:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.61},
  URN =		{urn:nbn:de:0030-drops-129278},
  doi =		{10.4230/LIPIcs.ESA.2020.61},
  annote =	{Keywords: Edit Distance, Median String, Center String, SETH}
}
Document
Capacitated Sum-Of-Radii Clustering: An FPT Approximation

Authors: Tanmay Inamdar and Kasturi Varadarajan


Abstract
In sum of radii clustering, the input consists of a finite set of points in a metric space. The problem asks to place a set of k balls centered at a subset of the points such that every point is covered by some ball, and the objective is to minimize the sum of radii of these balls. In the capacitated version of the problem, we want to assign each point to a ball containing it, such that no ball is assigned more than U points, where U denotes the capacity. While constant approximations are known for the uncapacitated version of the problem, there is no work on the capacitated version. We make progress on this problem by obtaining a constant approximation using a Fixed Parameter Tractable (FPT) algorithm. In particular, the running time of the algorithm is of the form 2^O(k²) ⋅ n^O(1). As a warm-up for this result, we also give a constant approximation for the uncapacitated sum of radii clustering problem with matroid constraints, thus obtaining the first FPT approximation for this problem.
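
To make the objective concrete, here is a brute-force sketch for the uncapacitated objective (exponential in k, nothing like the paper's FPT algorithm); it uses the standard fact that candidate radii can be restricted to center-to-point distances:

# Exponential brute force for uncapacitated sum-of-radii clustering,
# only to make the objective concrete.
from itertools import combinations
from math import dist, inf

def sum_of_radii(points, k):
    n = len(points)
    balls = [(c, dist(points[c], points[p]))      # (center index, radius)
             for c in range(n) for p in range(n)]
    best = inf
    for chosen in combinations(balls, k):
        covered = all(any(dist(points[p], points[c]) <= r for c, r in chosen)
                      for p in range(n))
        if covered:
            best = min(best, sum(r for _, r in chosen))
    return best

print(sum_of_radii([(0, 0), (1, 0), (10, 0), (11, 0)], k=2))  # 2.0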

Cite as

Tanmay Inamdar and Kasturi Varadarajan. Capacitated Sum-Of-Radii Clustering: An FPT Approximation. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 62:1-62:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{inamdar_et_al:LIPIcs.ESA.2020.62,
  author =	{Inamdar, Tanmay and Varadarajan, Kasturi},
  title =	{{Capacitated Sum-Of-Radii Clustering: An FPT Approximation}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{62:1--62:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.62},
  URN =		{urn:nbn:de:0030-drops-129288},
  doi =		{10.4230/LIPIcs.ESA.2020.62},
  annote =	{Keywords: Sum-of-radii Clustering, Capacitated Clustering}
}
Document
Optimal Polynomial-Time Compression for Boolean Max CSP

Authors: Bart M. P. Jansen and Michał Włodarczyk


Abstract
In the Boolean maximum constraint satisfaction problem - Max CSP(Γ) - one is given a collection of weighted applications of constraints from a finite constraint language Γ, over a common set of variables, and the goal is to assign Boolean values to the variables so that the total weight of satisfied constraints is maximized. There exists a concise dichotomy theorem providing a criterion on Γ for the problem to be polynomial-time solvable and stating that otherwise it becomes NP-hard. We study the NP-hard cases through the lens of kernelization and provide a complete characterization of Max CSP(Γ) with respect to the optimal compression size. Namely, we prove that Max CSP(Γ) parameterized by the number of variables n is either polynomial-time solvable, or there exists an integer d ≥ 2 depending on Γ, such that: 1) An instance of Max CSP(Γ) can be compressed into an equivalent instance with 𝒪(n^d log n) bits in polynomial time, 2) Max CSP(Γ) does not admit such a compression to 𝒪(n^{d-ε}) bits unless NP ⊆ co-NP / poly. Our reductions are based on interpreting constraints as multilinear polynomials combined with the framework of constraint implementations. As another application of our reductions, we reveal tight connections between optimal running times for solving Max CSP(Γ). More precisely, we show that obtaining a running time of the form 𝒪(2^{(1-ε)n}) for particular classes of Max CSPs is as hard as breaching this barrier for Max d-SAT for some d.
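
As a small worked illustration of the "constraints as multilinear polynomials" viewpoint behind the reductions (a standard encoding; the paper's actual constructions are more involved), a weight-w binary OR constraint over Boolean x, y ∈ {0,1} is satisfied unless both variables are 0, so it becomes

\[
  w \cdot \mathbb{1}[x \lor y]
  \;=\; w\bigl(1 - (1-x)(1-y)\bigr)
  \;=\; w\,(x + y - xy),
  \qquad x, y \in \{0,1\},
\]

and an instance of Max CSP(Γ) turns into maximizing a sum of such degree-d multilinear polynomials over {0,1}^n.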

Cite as

Bart M. P. Jansen and Michał Włodarczyk. Optimal Polynomial-Time Compression for Boolean Max CSP. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 63:1-63:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{jansen_et_al:LIPIcs.ESA.2020.63,
  author =	{Jansen, Bart M. P. and W{\l}odarczyk, Micha{\l}},
  title =	{{Optimal Polynomial-Time Compression for Boolean Max CSP}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{63:1--63:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.63},
  URN =		{urn:nbn:de:0030-drops-129297},
  doi =		{10.4230/LIPIcs.ESA.2020.63},
  annote =	{Keywords: constraint satisfaction problem, kernelization, exponential time algorithms}
}
Document
A Linear Fixed Parameter Tractable Algorithm for Connected Pathwidth

Authors: Mamadou Moustapha Kanté, Christophe Paul, and Dimitrios M. Thilikos


Abstract
The graph parameter of pathwidth can be seen as a measure of the topological resemblance of a graph to a path. A popular definition of pathwidth is given in terms of node search where we are given a system of tunnels (represented by a graph) that is contaminated by some infectious substance and we are looking for a search strategy that, at each step, either places a searcher on a vertex or removes a searcher from a vertex and where an edge is cleaned when both endpoints are simultaneously occupied by searchers. It was proved that the minimum number of searchers required for a successful cleaning strategy is equal to the pathwidth of the graph plus one. Two desired characteristics of a cleaning strategy are monotonicity (no recontamination occurs) and connectivity (clean territories always remain connected). Under these two demands, the number of searchers corresponds to a variant of pathwidth called connected pathwidth. We prove that connected pathwidth is fixed parameter tractable, in particular we design a 2^O(k²)⋅n time algorithm that checks whether the connected pathwidth of G is at most k. This resolves an open question by [Dereniowski, Osula, and Rzążewski, Finding small-width connected path-decompositions in polynomial time. Theor. Comput. Sci., 794:85–100, 2019]. For our algorithm, we enrich the typical sequence technique so that it can deal with the connectivity demand. Typical sequences have been introduced in [Bodlaender and Kloks. Efficient and constructive algorithms for the pathwidth and treewidth of graphs. J. Algorithms, 21(2):358–402, 1996] for the design of linear parameterized algorithms for treewidth and pathwidth. While this technique has been later applied to other parameters, none of its advancements was able to deal with the connectivity demand, as it is a "global" demand that concerns an unbounded number of parts of the graph of unbounded size. The proposed extension is based on an encoding of the connectivity property that is quite versatile and may be adapted so as to deliver linear parameterized algorithms for the connected variants of other width parameters as well. An immediate consequence of our result is a 2^O(k²)⋅n time algorithm for the monotone and connected version of the edge search number.

Cite as

Mamadou Moustapha Kanté, Christophe Paul, and Dimitrios M. Thilikos. A Linear Fixed Parameter Tractable Algorithm for Connected Pathwidth. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 64:1-64:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{kante_et_al:LIPIcs.ESA.2020.64,
  author =	{Kant\'{e}, Mamadou Moustapha and Paul, Christophe and Thilikos, Dimitrios M.},
  title =	{{A Linear Fixed Parameter Tractable Algorithm for Connected Pathwidth}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{64:1--64:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.64},
  URN =		{urn:nbn:de:0030-drops-129307},
  doi =		{10.4230/LIPIcs.ESA.2020.64},
  annote =	{Keywords: Graph decompositions, Parameterized algorithms, Typical sequences, Pathwidth, Graph searching}
}
Document
Exploiting c-Closure in Kernelization Algorithms for Graph Problems

Authors: Tomohiro Koana, Christian Komusiewicz, and Frank Sommer


Abstract
A graph is c-closed if every pair of vertices with at least c common neighbors is adjacent. The c-closure of a graph G is the smallest number c such that G is c-closed. Fox et al. [SIAM J. Comput. '20] defined c-closure and investigated it in the context of clique enumeration. We show that c-closure can be applied in kernelization algorithms for several classic graph problems. We show that Dominating Set admits a kernel of size k^𝒪(c), that Induced Matching admits a kernel with 𝒪(c⁷ k⁸) vertices, and that Irredundant Set admits a kernel with 𝒪(c^{5/2} k³) vertices. Our kernelization exploits the fact that c-closed graphs have polynomially-bounded Ramsey numbers, as we show.
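
The c-closure itself is straightforward to compute from the definition: it is one more than the largest common neighborhood over non-adjacent pairs (and 1 for a complete graph). A naive sketch:

# Computing the c-closure directly from the definition: the smallest c such
# that every non-adjacent pair has fewer than c common neighbors.
from itertools import combinations

def c_closure(adj):              # adj: dict vertex -> set of neighbors
    worst = 0
    for u, v in combinations(adj, 2):
        if v not in adj[u]:      # only non-adjacent pairs matter
            worst = max(worst, len(adj[u] & adj[v]))
    return worst + 1

# Path a-b-c: the non-adjacent pair (a, c) shares one neighbor, so c-closure 2.
print(c_closure({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))  # 2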

Cite as

Tomohiro Koana, Christian Komusiewicz, and Frank Sommer. Exploiting c-Closure in Kernelization Algorithms for Graph Problems. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 65:1-65:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{koana_et_al:LIPIcs.ESA.2020.65,
  author =	{Koana, Tomohiro and Komusiewicz, Christian and Sommer, Frank},
  title =	{{Exploiting c-Closure in Kernelization Algorithms for Graph Problems}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{65:1--65:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.65},
  URN =		{urn:nbn:de:0030-drops-129316},
  doi =		{10.4230/LIPIcs.ESA.2020.65},
  annote =	{Keywords: Fixed-parameter tractability, kernelization, c-closure, Dominating Set, Induced Matching, Irredundant Set, Ramsey numbers}
}
Document
Many Visits TSP Revisited

Authors: Łukasz Kowalik, Shaohua Li, Wojciech Nadara, Marcin Smulewicz, and Magnus Wahlström


Abstract
We study the Many Visits TSP problem, where given a number k(v) for each of n cities and pairwise (possibly asymmetric) integer distances, one has to find an optimal tour that visits each city v exactly k(v) times. The currently fastest algorithm is due to Berger, Kozma, Mnich and Vincze [SODA 2019, TALG 2020] and runs in time and space O*(5ⁿ). They also show a polynomial space algorithm running in time O(16^{n+o(n)}). In this work, we show three main results: - A randomized polynomial space algorithm in time O*(2^n D), where D is the maximum distance between two cities. By using standard methods, this results in a (1+ε)-approximation in time O*(2ⁿε^{-1}). Improving the constant 2 in these results would be a major breakthrough, as it would result in improving the O*(2ⁿ)-time algorithm for Directed Hamiltonian Cycle, which is a 50-year-old open problem. - A tight analysis of Berger et al.’s exponential space algorithm, resulting in an O*(4ⁿ) running time bound. - A new polynomial space algorithm, running in time O(7.88ⁿ).
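
For orientation, the classic Held-Karp O*(2ⁿ) dynamic program solves the special case k(v) = 1 (plain asymmetric TSP); the paper's algorithms improve on naive generalizations of this table. A minimal sketch:

# Classic Held-Karp O*(2^n) DP for plain asymmetric TSP (the k(v) = 1 case).
from math import inf

def held_karp(d):                      # d[i][j]: distance matrix; tour starts at 0
    n = len(d)
    dp = [[inf] * n for _ in range(1 << n)]
    dp[1][0] = 0                       # visited = {0}, currently at 0
    for mask in range(1 << n):
        for last in range(n):
            cur = dp[mask][last]
            if cur == inf:
                continue
            for nxt in range(n):
                if not mask & (1 << nxt):
                    nm = mask | (1 << nxt)
                    cand = cur + d[last][nxt]
                    if cand < dp[nm][nxt]:
                        dp[nm][nxt] = cand
    full = (1 << n) - 1
    return min(dp[full][v] + d[v][0] for v in range(1, n))

print(held_karp([[0, 1, 9], [9, 0, 1], [1, 9, 0]]))  # 3, via 0 -> 1 -> 2 -> 0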

Cite as

Łukasz Kowalik, Shaohua Li, Wojciech Nadara, Marcin Smulewicz, and Magnus Wahlström. Many Visits TSP Revisited. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 66:1-66:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{kowalik_et_al:LIPIcs.ESA.2020.66,
  author =	{Kowalik, {\L}ukasz and Li, Shaohua and Nadara, Wojciech and Smulewicz, Marcin and Wahlstr\"{o}m, Magnus},
  title =	{{Many Visits TSP Revisited}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{66:1--66:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.66},
  URN =		{urn:nbn:de:0030-drops-129329},
  doi =		{10.4230/LIPIcs.ESA.2020.66},
  annote =	{Keywords: many visits traveling salesman problem, exponential algorithm}
}
Document
Light Euclidean Spanners with Steiner Points

Authors: Hung Le and Shay Solomon


Abstract
The FOCS'19 paper of Le and Solomon [Hung Le and Shay Solomon, 2019], culminating a long line of research on Euclidean spanners, proves that the lightness (normalized weight) of the greedy (1+ε)-spanner in ℝ^d is Õ(ε^{-d}) for any d = O(1) and any ε = Ω(n^{-1/(d-1)}) (where Õ hides polylogarithmic factors of 1/ε), and also shows the existence of point sets in ℝ^d for which any (1+ε)-spanner must have lightness Ω(ε^{-d}). Given this tight bound on the lightness, a naturally arising question is whether a better lightness bound can be achieved using Steiner points. Our first result is a construction of Steiner spanners in ℝ² with lightness O(ε^{-1} log Δ), where Δ is the spread of the point set. In the regime of Δ ≪ 2^(1/ε), this provides an improvement over the lightness bound of [Hung Le and Shay Solomon, 2019]; this regime of parameters is of practical interest, as point sets arising in real-life applications (e.g., for various random distributions) have polynomially bounded spread, while in spanner applications ε often controls the precision, and it sometimes needs to be much smaller than O(1/log n). Moreover, for spread polynomially bounded in 1/ε, this upper bound provides a quadratic improvement over the non-Steiner bound of [Hung Le and Shay Solomon, 2019]. We then demonstrate that such a light spanner can be constructed in O_ε(n) time for polynomially bounded spread, where O_ε hides a factor of poly(1/ε). Finally, we extend the construction to higher dimensions, proving a lightness upper bound of Õ(ε^{-(d+1)/2} + ε^{-2} log Δ) for any 3 ≤ d = O(1) and any ε = Ω(n^{-1/(d-1)}).
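
For context, the non-Steiner baseline is the greedy (1+ε)-spanner: consider point pairs by increasing distance and add an edge whenever the current spanner stretch is too large. A naive sketch (no Steiner points, no lightness machinery):

# Classic greedy (1+eps)-spanner; naive O(n^2) Dijkstra per pair, sketch only.
from math import dist
import heapq

def greedy_spanner(points, eps):
    n = len(points)
    pairs = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    adj = [[] for _ in range(n)]                    # spanner adjacency

    def spanner_dist(s, t):                         # Dijkstra inside the spanner
        d = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if u == t:
                return du
            if du > d.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                nd = du + w
                if nd < d.get(v, float("inf")):
                    d[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

    for w, i, j in pairs:                           # pairs by increasing length
        if spanner_dist(i, j) > (1 + eps) * w:      # stretch too large: add edge
            adj[i].append((j, w))
            adj[j].append((i, w))
    return adj

spanner = greedy_spanner([(0, 0), (1, 0), (0, 1), (2, 2)], eps=0.5)
print(sum(len(a) for a in spanner) // 2, "edges")   # 4 edges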

Cite as

Hung Le and Shay Solomon. Light Euclidean Spanners with Steiner Points. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 67:1-67:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{le_et_al:LIPIcs.ESA.2020.67,
  author =	{Le, Hung and Solomon, Shay},
  title =	{{Light Euclidean Spanners with Steiner Points}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{67:1--67:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.67},
  URN =		{urn:nbn:de:0030-drops-129331},
  doi =		{10.4230/LIPIcs.ESA.2020.67},
  annote =	{Keywords: Euclidean spanners, Steiner spanners, light spanners}
}
Document
Settling the Relationship Between Wilber’s Bounds for Dynamic Optimality

Authors: Victor Lecomte and Omri Weinstein


Abstract
In FOCS 1986, Wilber proposed two combinatorial lower bounds on the operational cost of any binary search tree (BST) for a given access sequence X ∈ [n]^m. Both bounds play a central role in the ongoing pursuit of the dynamic optimality conjecture (Sleator and Tarjan, 1985), but their relationship remained unknown for more than three decades. We show that Wilber’s Funnel bound dominates his Alternation bound for all X, and give a tight Θ(lg lg n) separation for some X, answering Wilber’s conjecture and an open problem of Iacono, Demaine et al. The main ingredient of the proof is a new symmetric characterization of Wilber’s Funnel bound, which proves that it is invariant under rotations of X. We use this characterization to provide initial indication that the Funnel bound matches the Independent Rectangle bound (Demaine et al., 2009), by proving that when the Funnel bound is constant, IRB_upRect is linear. To the best of our knowledge, our results provide the first progress on Wilber’s conjecture that the Funnel bound is dynamically optimal (1986).
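
A heavily hedged sketch of one common formulation of the Alternation (interleave) bound, up to constant factors and boundary conventions (this is not the paper's symmetric characterization of the Funnel bound): split the key range in half, count alternations of accesses between the two halves, and recurse.

def alternation_bound(X, lo=None, hi=None):
    # X: access sequence of integer keys; recursion over the key range [lo, hi)
    if lo is None:
        lo, hi = min(X), max(X) + 1
    if hi - lo <= 1 or len(X) <= 1:
        return 0
    mid = (lo + hi) // 2
    sides = ["L" if x < mid else "R" for x in X]
    alternations = sum(a != b for a, b in zip(sides, sides[1:]))
    left = [x for x in X if x < mid]
    right = [x for x in X if x >= mid]
    return (alternations
            + alternation_bound(left, lo, mid)
            + alternation_bound(right, mid, hi))

print(alternation_bound([1, 3, 2, 4, 1, 3, 2, 4]))  # 13 for this interleaving pattern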

Cite as

Victor Lecomte and Omri Weinstein. Settling the Relationship Between Wilber’s Bounds for Dynamic Optimality. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 68:1-68:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{lecomte_et_al:LIPIcs.ESA.2020.68,
  author =	{Lecomte, Victor and Weinstein, Omri},
  title =	{{Settling the Relationship Between Wilber’s Bounds for Dynamic Optimality}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{68:1--68:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.68},
  URN =		{urn:nbn:de:0030-drops-129342},
  doi =		{10.4230/LIPIcs.ESA.2020.68},
  annote =	{Keywords: data structures, binary search trees, dynamic optimality, lower bounds}
}
Document
On the Computational Complexity of Linear Discrepancy

Authors: Lily Li and Aleksandar Nikolov


Abstract
Many problems in computer science and applied mathematics require rounding a vector 𝐰 of fractional values lying in the interval [0,1] to a binary vector 𝐱 so that, for a given matrix 𝐀, 𝐀𝐱 is as close to 𝐀𝐰 as possible. For example, this problem arises in LP rounding algorithms used to approximate NP-hard optimization problems and in the design of uniformly distributed point sets for numerical integration. For a given matrix 𝐀, the worst-case error over all choices of 𝐰 incurred by the best possible rounding is measured by the linear discrepancy of 𝐀, a quantity studied in discrepancy theory, and introduced by Lovász, Spencer, and Vesztergombi (EJC, 1986). We initiate the study of the computational complexity of linear discrepancy. Our investigation proceeds in two directions: (1) proving hardness results and (2) finding both exact and approximate algorithms to evaluate the linear discrepancy of certain matrices. For (1), we show that linear discrepancy is NP-hard. Thus we do not expect to find an efficient exact algorithm for the general case. Restricting our attention to matrices with a constant number of rows, we present a poly-time exact algorithm for matrices consisting of a single row and matrices with a constant number of rows and entries of bounded magnitude. We also present an exponential-time approximation algorithm for general matrices, and an algorithm that approximates linear discrepancy to within an exponential factor.
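
Straight from the definition, lindisc(𝐀) = max_{w ∈ [0,1]ⁿ} min_{x ∈ {0,1}ⁿ} ‖𝐀(x−w)‖_∞. The sketch below brute-forces the inner minimum and merely samples the outer maximum, so it yields a lower bound and is illustrative only (the paper shows the exact problem is NP-hard):

# Brute-force/sampled evaluation of linear discrepancy; exponential in n.
from itertools import product
import random

def round_error(A, w):                   # best rounding error for a fixed w
    n = len(w)
    return min(max(abs(sum(row[i] * (x[i] - w[i]) for i in range(n)))
                   for row in A)
               for x in product((0, 1), repeat=n))

def lindisc_lower_bound(A, samples=2000, seed=0):
    rng = random.Random(seed)
    n = len(A[0])
    return max(round_error(A, [rng.random() for _ in range(n)])
               for _ in range(samples))

A = [[1, 1, 1]]                          # single-row case (poly-time per the paper)
print(lindisc_lower_bound(A))            # approaches the true value 1/2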

Cite as

Lily Li and Aleksandar Nikolov. On the Computational Complexity of Linear Discrepancy. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 69:1-69:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{li_et_al:LIPIcs.ESA.2020.69,
  author =	{Li, Lily and Nikolov, Aleksandar},
  title =	{{On the Computational Complexity of Linear Discrepancy}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{69:1--69:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.69},
  URN =		{urn:nbn:de:0030-drops-129352},
  doi =		{10.4230/LIPIcs.ESA.2020.69},
  annote =	{Keywords: discrepancy theory, linear discrepancy, rounding, NP-hardness}
}
Document
Augmenting the Algebraic Connectivity of Graphs

Authors: Bogdan-Adrian Manghiuc, Pan Peng, and He Sun


Abstract
For any undirected graph G = (V,E) and a set E_W of candidate edges with E ∩ E_W = ∅, the (k,γ)-spectral augmentability problem is to find a set F of k edges from E_W with appropriate weighting, such that the algebraic connectivity of the resulting graph H = (V, E ∪ F) is at least γ. Because of a tight connection between the algebraic connectivity and many other graph parameters, including the graph’s conductance and the mixing time of random walks in a graph, maximising the resulting graph’s algebraic connectivity by adding a small number of edges has been studied over the past 15 years, and has many practical applications in network optimisation. In this work we present an approximate and efficient algorithm for the (k,γ)-spectral augmentability problem, and our algorithm runs in almost-linear time under a wide regime of parameters. Our main algorithm is based on the following two novel techniques developed in the paper, which might have applications beyond the (k,γ)-spectral augmentability problem: - We present a fast algorithm for solving a feasibility version of an SDP for the algebraic connectivity maximisation problem from [Ghosh and Boyd, 2006]. Our algorithm is based on the classic primal-dual framework for solving SDP, which in turn uses the multiplicative weight update algorithm. We present a novel approach of unifying SDP constraints of different matrix and vector variables and give a good separation oracle accordingly. - We present an efficient algorithm for the subgraph sparsification problem, and for a wide range of parameters our algorithm runs in almost-linear time, in contrast to the previously best known algorithm running in at least Ω(n²mk) time [Kolla et al., 2010]. Our analysis shows how the randomised BSS framework can be generalised in the setting of subgraph sparsification, and how the potential functions can be applied to approximately keep track of different subspaces.
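
Algebraic connectivity is simply the second-smallest Laplacian eigenvalue, so a naive greedy augmentation baseline (a sanity check, not the paper's SDP-based almost-linear-time algorithm) takes only a few lines with numpy, assumed available:

# lambda_2 and a naive greedy edge-addition baseline.
import numpy as np

def lambda2(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u][u] += w; L[v][v] += w
        L[u][v] -= w; L[v][u] -= w
    return np.linalg.eigvalsh(L)[1]          # eigenvalues come back ascending

def greedy_augment(n, edges, candidates, k):
    edges, candidates = list(edges), list(candidates)
    for _ in range(k):
        best = max(candidates, key=lambda e: lambda2(n, edges + [e]))
        edges.append(best)
        candidates.remove(best)
    return edges, lambda2(n, edges)

path = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
cand = [(0, 2, 1.0), (0, 3, 1.0), (1, 3, 1.0)]
print(greedy_augment(4, path, cand, k=1)[1])  # ~2.0: adding (0,3) closes the 4-cycle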

Cite as

Bogdan-Adrian Manghiuc, Pan Peng, and He Sun. Augmenting the Algebraic Connectivity of Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 70:1-70:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{manghiuc_et_al:LIPIcs.ESA.2020.70,
  author =	{Manghiuc, Bogdan-Adrian and Peng, Pan and Sun, He},
  title =	{{Augmenting the Algebraic Connectivity of Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{70:1--70:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.70},
  URN =		{urn:nbn:de:0030-drops-129367},
  doi =		{10.4230/LIPIcs.ESA.2020.70},
  annote =	{Keywords: Graph sparsification, Algebraic connectivity, Semidefinite programming}
}
Document
Chordless Cycle Packing Is Fixed-Parameter Tractable

Authors: Dániel Marx


Abstract
A chordless cycle or hole in a graph G is an induced cycle of length at least 4. In the Hole Packing problem, a graph G and an integer k are given, and the task is to find (if one exists) a set of k pairwise vertex-disjoint chordless cycles. Our main result is showing that Hole Packing is fixed-parameter tractable (FPT), that is, can be solved in time f(k)n^O(1) for some function f depending only on k.
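
Verifying that a vertex set induces a hole is easy (the induced subgraph must be connected, 2-regular, and have at least four vertices); the hard part the paper addresses is packing k disjoint ones. A small checker:

# Does S induce a chordless cycle (hole) in the graph given by adj?
def induces_hole(adj, S):
    S = set(S)
    if len(S) < 4:
        return False
    if any(len(adj[v] & S) != 2 for v in S):   # induced subgraph must be 2-regular
        return False
    start = next(iter(S))                      # connected + 2-regular = one cycle
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in adj[v] & S:
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen == S

cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(induces_hole(cycle5, range(5)))          # True: C5 is a hole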

Cite as

Dániel Marx. Chordless Cycle Packing Is Fixed-Parameter Tractable. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 71:1-71:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{marx:LIPIcs.ESA.2020.71,
  author =	{Marx, D\'{a}niel},
  title =	{{Chordless Cycle Packing Is Fixed-Parameter Tractable}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{71:1--71:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.71},
  URN =		{urn:nbn:de:0030-drops-129373},
  doi =		{10.4230/LIPIcs.ESA.2020.71},
  annote =	{Keywords: chordal graphs, packing, fixed-parameter tractability}
}
Document
Incompressibility of H-Free Edge Modification Problems: Towards a Dichotomy

Authors: Dániel Marx and R. B. Sandeep


Abstract
Given a graph G and an integer k, the H-free Edge Editing problem is to find whether there exist at most k pairs of vertices in G such that changing the adjacency of the pairs in G results in a graph without any induced copy of H. The existence of polynomial kernels for H-free Edge Editing (that is, whether it is possible to reduce the size of the instance to k^O(1) in polynomial time) received significant attention in the parameterized complexity literature. Nontrivial polynomial kernels are known to exist for some graphs H with at most 4 vertices (e.g., path on 3 or 4 vertices, diamond, paw), but starting from 5 vertices, polynomial kernels are known only if H is either complete or empty. This suggests the conjecture that there is no other H with at least 5 vertices where H-free Edge Editing admits a polynomial kernel. Towards this goal, we obtain a set ℋ of nine 5-vertex graphs such that if for every H ∈ ℋ, H-free Edge Editing is incompressible and the complexity assumption NP ⊈ coNP/poly holds, then H-free Edge Editing is incompressible for every graph H with at least five vertices that is neither complete nor empty. That is, proving incompressibility for these nine graphs would give a complete classification of the kernelization complexity of H-free Edge Editing for every H with at least 5 vertices. We obtain a similar result for H-free Edge Deletion. Here the picture is more complicated due to the existence of another infinite family of graphs H where the problem is trivial (graphs with exactly one edge). We obtain a larger set ℋ of nineteen graphs whose incompressibility would give a complete classification of the kernelization complexity of H-free Edge Deletion for every graph H with at least 5 vertices. Analogous results follow also for the H-free Edge Completion problem by simple complementation.
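
The basic primitive behind such edge modification problems is testing for an induced copy of H, which is brute-forcible for the small fixed H discussed here; a sketch (exponential in |V(H)|):

# Brute-force induced-subgraph-isomorphism test for a small fixed H.
from itertools import combinations, permutations

def has_induced_copy(g_adj, h_adj):
    gv, hv = list(g_adj), list(h_adj)
    for subset in combinations(gv, len(hv)):
        for perm in permutations(subset):
            m = dict(zip(hv, perm))            # candidate embedding of H into G
            if all((m[a] in g_adj[m[b]]) == (a in h_adj[b])
                   for a, b in combinations(hv, 2)):
                return True
    return False

p3 = {0: {1}, 1: {0, 2}, 2: {1}}               # path on 3 vertices
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(has_induced_copy(tri, p3))               # False: a triangle has no induced P3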

Cite as

Dániel Marx and R. B. Sandeep. Incompressibility of H-Free Edge Modification Problems: Towards a Dichotomy. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 72:1-72:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{marx_et_al:LIPIcs.ESA.2020.72,
  author =	{Marx, D\'{a}niel and Sandeep, R. B.},
  title =	{{Incompressibility of H-Free Edge Modification Problems: Towards a Dichotomy}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{72:1--72:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.72},
  URN =		{urn:nbn:de:0030-drops-129383},
  doi =		{10.4230/LIPIcs.ESA.2020.72},
  annote =	{Keywords: incompressibility, edge modification problems, H-free graphs}
}
Document
Approximating k-Connected m-Dominating Sets

Authors: Zeev Nutov


Abstract
A subset S of nodes in a graph G is a k-connected m-dominating set ((k,m)-cds) if the subgraph G[S] induced by S is k-connected and every v ∈ V⧵S has at least m neighbors in S. In the k-Connected m-Dominating Set ((k,m)-CDS) problem the goal is to find a minimum weight (k,m)-cds in a node-weighted graph. For m ≥ k we obtain the following approximation ratios. For general graphs our ratio O(k ln n) improves the previous best ratio O(k² ln n) of [Z. Nutov, 2018] and matches the best known ratio for unit weights of [Z. Zhang et al., 2018]. For unit disk graphs we improve the ratio O(k ln k) of [Z. Nutov, 2018] to min{m/(m-k),k^{2/3}} ⋅ O(ln² k) - this is the first sublinear ratio for the problem, and the first polylogarithmic ratio O(ln² k)/ε when m ≥ (1+ε)k; furthermore, we obtain ratio min{m/(m-k), √k} ⋅ O(ln² k) for uniform weights. These results are obtained by showing the same ratios for the Subset k-Connectivity problem when the set of terminals is an m-dominating set.
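
Verifying the (k,m)-cds property is straightforward given a connectivity routine; the sketch below leans on networkx (assumed available) for k-connectivity. The hard problem the paper addresses is, of course, finding a cheap such S, not verifying one:

# Definition-level check of the (k,m)-cds property using networkx.
import networkx as nx

def is_km_cds(G: nx.Graph, S, k, m):
    S = set(S)
    induced = G.subgraph(S)
    k_connected = len(S) > k and nx.node_connectivity(induced) >= k
    m_dominated = all(len(S & set(G[v])) >= m for v in G if v not in S)
    return k_connected and m_dominated

G = nx.complete_graph(5)
print(is_km_cds(G, {0, 1, 2}, k=2, m=2))   # True: K3 is 2-connected and every
                                           # outside vertex has >= 2 neighbors in S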

Cite as

Zeev Nutov. Approximating k-Connected m-Dominating Sets. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{nutov:LIPIcs.ESA.2020.73,
  author =	{Nutov, Zeev},
  title =	{{Approximating k-Connected m-Dominating Sets}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.73},
  URN =		{urn:nbn:de:0030-drops-129392},
  doi =		{10.4230/LIPIcs.ESA.2020.73},
  annote =	{Keywords: k-connected graph, m-dominating set, approximation algorithm, rooted subset k-connectivity, subset k-connectivity}
}
Document
Full Complexity Classification of the List Homomorphism Problem for Bounded-Treewidth Graphs

Authors: Karolina Okrasa, Marta Piecyk, and Paweł Rzążewski


Abstract
A homomorphism from a graph G to a graph H is an edge-preserving mapping from V(G) to V(H). Let H be a fixed graph with possible loops. In the list homomorphism problem, denoted by LHom(H), we are given a graph G, in which every vertex v is assigned a list L(v) of vertices of H. We ask whether there exists a homomorphism h from G to H, which respects lists L, i.e., for every v ∈ V(G) it holds that h(v) ∈ L(v). The complexity dichotomy for LHom(H) was proven by Feder, Hell, and Huang [JGT 2003]. The authors showed that the problem is polynomial-time solvable if H belongs to the class called bi-arc graphs, and for all other graphs H it is NP-complete. We are interested in the complexity of the LHom(H) problem, parameterized by the treewidth of the input graph. This problem was investigated by Egri, Marx, and Rzążewski [STACS 2018], who obtained tight complexity bounds for the special case of reflexive graphs H, i.e., if every vertex has a loop. In this paper we extend and generalize their results for all relevant graphs H, i.e., those for which the LHom(H) problem is NP-hard. For every such H we find a constant k = k(H), such that the LHom(H) problem on instances G with n vertices and treewidth t - can be solved in time k^t ⋅ n^𝒪(1), provided that G is given along with a tree decomposition of width t, - cannot be solved in time (k-ε)^t ⋅ n^𝒪(1), for any ε > 0, unless the SETH fails. For some graphs H the value of k(H) is much smaller than the trivial upper bound, i.e., |V(H)|. Obtaining matching upper and lower bounds shows that the set of algorithmic tools that we have discovered cannot be extended in order to obtain faster algorithms for LHom(H) in bounded-treewidth graphs. Furthermore, neither the algorithm, nor the proof of the lower bound, is very specific to treewidth. We believe that they can be used for other variants of the LHom(H) problem, e.g. with different parameterizations.
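
For very small instances the definition can be checked by brute force over all list-respecting assignments (exponential in |V(G)|, unlike the k^t ⋅ n^𝒪(1) treewidth algorithm in the paper):

# Brute-force LHom(H) straight from the definition.
from itertools import product

def list_hom_exists(G_edges, H_edges, lists):
    vertices = sorted(lists)                       # vertices of G
    H = set(H_edges) | {(b, a) for a, b in H_edges}
    for choice in product(*(lists[v] for v in vertices)):
        h = dict(zip(vertices, choice))            # candidate list homomorphism
        if all((h[u], h[v]) in H for u, v in G_edges):
            return True
    return False

# Map a triangle into an edge (H = K2): impossible, since K2 is bipartite.
print(list_hom_exists([("a", "b"), ("b", "c"), ("a", "c")],
                      [(0, 1)],
                      {"a": [0, 1], "b": [0, 1], "c": [0, 1]}))  # False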

Cite as

Karolina Okrasa, Marta Piecyk, and Paweł Rzążewski. Full Complexity Classification of the List Homomorphism Problem for Bounded-Treewidth Graphs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 74:1-74:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{okrasa_et_al:LIPIcs.ESA.2020.74,
  author =	{Okrasa, Karolina and Piecyk, Marta and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Full Complexity Classification of the List Homomorphism Problem for Bounded-Treewidth Graphs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{74:1--74:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.74},
  URN =		{urn:nbn:de:0030-drops-129402},
  doi =		{10.4230/LIPIcs.ESA.2020.74},
  annote =	{Keywords: list homomorphisms, fine-grained complexity, SETH, treewidth}
}
Document
Generalizing CGAL Periodic Delaunay Triangulations

Authors: Georg Osang, Mael Rouxel-Labbé, and Monique Teillaud


Abstract
Even though Delaunay originally introduced his famous triangulations in the case of infinite point sets with translational periodicity, software that computes such triangulations in the general case is not yet available, to the best of our knowledge. Combining and generalizing previous work, we present a practical algorithm for computing such triangulations. The algorithm has been implemented and experiments show that its performance is as good as that of the CGAL package, which is restricted to cubic periodicity.

Cite as

Georg Osang, Mael Rouxel-Labbé, and Monique Teillaud. Generalizing CGAL Periodic Delaunay Triangulations. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 75:1-75:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{osang_et_al:LIPIcs.ESA.2020.75,
  author =	{Osang, Georg and Rouxel-Labb\'{e}, Mael and Teillaud, Monique},
  title =	{{Generalizing CGAL Periodic Delaunay Triangulations}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{75:1--75:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.75},
  URN =		{urn:nbn:de:0030-drops-129419},
  doi =		{10.4230/LIPIcs.ESA.2020.75},
  annote =	{Keywords: Delaunay triangulation, lattice, algorithm, software, experiments}
}
Document
Engineering Fast Almost Optimal Algorithms for Bipartite Graph Matching

Authors: Ioannis Panagiotas and Bora Uçar


Abstract
We consider the maximum cardinality matching problem in bipartite graphs. There are a number of exact, deterministic algorithms for this purpose, whose complexities are high in practice. There are randomized approaches for special classes of bipartite graphs. Random 2-out bipartite graphs, where each vertex chooses two neighbors at random from the other side, form one class for which there is an O(m + n log n)-time Monte Carlo algorithm. Regular bipartite graphs, where all vertices have the same degree, form another class for which there is an expected O(m + n log n)-time Las Vegas algorithm. We investigate these two algorithms and turn them into practical heuristics with randomization. Experimental results show that the heuristics are fast and obtain near-optimal matchings. They are also more robust than the state-of-the-art heuristics used in the cardinality matching algorithms, and are generally more useful as initialization routines.
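
A toy version of the pipeline (random 2-out generation followed by plain greedy matching, which already guarantees at least half of the maximum) might look as follows; this is an illustration, not the paper's engineered heuristics:

# Random 2-out bipartite graph + greedy maximal matching, as a warm-up.
import random

def random_2_out(n, rng):
    # Each left vertex picks 2 random right neighbors, and vice versa.
    edges = set()
    for u in range(n):
        for v in rng.sample(range(n), 2):
            edges.add((u, v))
    for v in range(n):
        for u in rng.sample(range(n), 2):
            edges.add((u, v))
    return list(edges)

def greedy_matching(edges):
    left_used, right_used, matching = set(), set(), []
    for u, v in edges:
        if u not in left_used and v not in right_used:
            left_used.add(u); right_used.add(v)
            matching.append((u, v))
    return matching

rng = random.Random(42)
n = 1000
m = greedy_matching(random_2_out(n, rng))
print(f"greedy matched {len(m)}/{n} left vertices")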

Cite as

Ioannis Panagiotas and Bora Uçar. Engineering Fast Almost Optimal Algorithms for Bipartite Graph Matching. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 76:1-76:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{panagiotas_et_al:LIPIcs.ESA.2020.76,
  author =	{Panagiotas, Ioannis and U\c{c}ar, Bora},
  title =	{{Engineering Fast Almost Optimal Algorithms for Bipartite Graph Matching}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{76:1--76:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.76},
  URN =		{urn:nbn:de:0030-drops-129424},
  doi =		{10.4230/LIPIcs.ESA.2020.76},
  annote =	{Keywords: bipartite graphs, matching, randomized algorithm}
}
Document
Efficient Computation of 2-Covers of a String

Authors: Jakub Radoszewski and Juliusz Straszyński


Abstract
Quasiperiodicity is a generalization of periodicity that has been researched for almost 30 years. The notion of cover is the classic variant of quasiperiodicity. A cover of a text T is a string whose occurrences in T cover all positions of T. There are several algorithms computing covers of a text in linear time. In this paper we consider a natural extension of cover. For a text T, we call a pair of strings a 2-cover if they have the same length and their occurrences cover the text T. We give an algorithm that computes all 2-covers of a string of length n in 𝒪(n log n log log n + output) expected time or 𝒪(n log n log² log n / log log log n + output) worst-case time, where output is the size of output. If (X,Y) is a 2-cover of T, then either X is a prefix and Y is a suffix of T, in which case we call (X,Y) a ps-cover, or one of X, Y is a border (that is, both a prefix and a suffix) of T, and then we call (X,Y) a b-cover. A string of length n has up to n ps-covers; we show an algorithm that computes all of them in 𝒪(n log log n) expected time or 𝒪(n log² log n / log log log n) worst-case time. A string of length n can have Θ(n²) non-trivial b-covers; our algorithm can report one b-cover per length (if it exists) or all shortest b-covers in 𝒪(n log n log log n) expected time or 𝒪(n log n log² log n / log log log n) worst-case time. All our algorithms use linear space. The problem in scope can be generalized to λ > 2 equal-length strings, resulting in the notion of λ-cover. Cole et al. (2005) showed that the λ-cover problem is NP-complete. Our algorithms generalize to λ-covers, with (the first component of) the algorithm’s complexity multiplied by n^{λ-2}.
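
Checking a candidate 2-cover from the definition is simple, if slow; the algorithms above beat this naive quadratic verification by a wide margin:

# Verify a candidate 2-cover (X, Y) of T: mark every position lying under
# some occurrence of X or Y and check the whole text is marked.
def occurrences(pattern, text):
    return [i for i in range(len(text) - len(pattern) + 1)
            if text.startswith(pattern, i)]

def is_2_cover(x, y, text):
    if len(x) != len(y):                      # 2-covers are equal-length pairs
        return False
    covered = [False] * len(text)
    for p in (x, y):
        for i in occurrences(p, text):
            for j in range(i, i + len(p)):
                covered[j] = True
    return all(covered)

print(is_2_cover("aba", "bab", "ababab"))     # True
print(is_2_cover("aba", "aab", "ababab"))     # False: position 5 stays uncovered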

Cite as

Jakub Radoszewski and Juliusz Straszyński. Efficient Computation of 2-Covers of a String. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 77:1-77:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{radoszewski_et_al:LIPIcs.ESA.2020.77,
  author =	{Radoszewski, Jakub and Straszy\'{n}ski, Juliusz},
  title =	{{Efficient Computation of 2-Covers of a String}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{77:1--77:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.77},
  URN =		{urn:nbn:de:0030-drops-129432},
  doi =		{10.4230/LIPIcs.ESA.2020.77},
  annote =	{Keywords: quasiperiodicity, cover of a string, 2-cover, lambda-cover}
}
Document
Improved Approximation Algorithm for Set Multicover with Non-Piercing Regions

Authors: Rajiv Raman and Saurabh Ray


Abstract
In the Set Multicover problem, we are given a set system (X,𝒮), where X is a finite ground set, and 𝒮 is a collection of subsets of X. Each element x ∈ X has a non-negative demand d(x). The goal is to pick a smallest cardinality sub-collection 𝒮' of 𝒮 such that each point x ∈ X is covered by at least d(x) sets from 𝒮'. In this paper, we study the set multicover problem for set systems defined by points and non-piercing regions in the plane, which includes disks, pseudodisks, k-admissible regions, squares, unit height rectangles, homothets of convex sets, upward paths on a tree, etc. We give a polynomial time (2+ε)-approximation algorithm for the set multicover problem (P, ℛ), where P is a set of points with demands, and ℛ is a set of non-piercing regions, as well as for the set multicover problem (𝒟, P), where 𝒟 is a set of pseudodisks with demands, and P is a set of points in the plane, which is the hitting set problem with demands.
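
To make demands concrete, here is the classical greedy for abstract set multicover (repeatedly pick the unused set covering the most residual demand); the paper's (2+ε)-approximation for non-piercing regions is geometric and entirely different:

# Classical greedy for abstract Set Multicover (each set used at most once).
def greedy_multicover(demand, sets):
    demand = dict(demand)                  # point -> remaining demand
    unused = set(range(len(sets)))
    picked = []
    while any(d > 0 for d in demand.values()):
        gain = lambda i: sum(1 for p in sets[i] if demand.get(p, 0) > 0)
        best = max(unused, key=gain, default=None)
        if best is None or gain(best) == 0:
            raise ValueError("demands not satisfiable by the given sets")
        unused.remove(best)
        picked.append(best)
        for p in sets[best]:
            if demand.get(p, 0) > 0:
                demand[p] -= 1
    return picked

demand = {"a": 2, "b": 1}
sets = [frozenset({"a", "b"}), frozenset({"a"}), frozenset({"b"})]
print(greedy_multicover(demand, sets))     # [0, 1]: "a" is covered twice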

Cite as

Rajiv Raman and Saurabh Ray. Improved Approximation Algorithm for Set Multicover with Non-Piercing Regions. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 78:1-78:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{raman_et_al:LIPIcs.ESA.2020.78,
  author =	{Raman, Rajiv and Ray, Saurabh},
  title =	{{Improved Approximation Algorithm for Set Multicover with Non-Piercing Regions}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{78:1--78:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.78},
  URN =		{urn:nbn:de:0030-drops-129441},
  doi =		{10.4230/LIPIcs.ESA.2020.78},
  annote =	{Keywords: Approximation algorithms, geometry, Covering}
}
Document
Improved Distance Sensitivity Oracles with Subcubic Preprocessing Time

Authors: Hanlin Ren


Abstract
We consider the problem of building Distance Sensitivity Oracles (DSOs). Given a directed graph G = (V, E) with edge weights in {1, 2, … , M}, we need to preprocess it into a data structure, and answer the following queries: given vertices u,v,x ∈ V, output the length of the shortest path from u to v that does not go through x. Our main result is a simple DSO with Õ(n^2.7233 M²) preprocessing time and O(1) query time. Moreover, if the input graph is undirected, the preprocessing time can be improved to Õ(n^2.6865 M²). Our algorithms are randomized and correct with probability ≥ 1-1/n^c, for a constant c that can be made arbitrarily large. Previously, the best known DSO had Õ(n^2.8729 M) preprocessing time and polylog(n) query time [Chechik and Cohen, STOC'20]. At the core of our DSO is the following observation from [Bernstein and Karger, STOC'09]: if there is a DSO with preprocessing time P and query time Q, then we can construct a DSO with preprocessing time P + Õ(Mn²) ⋅ Q and query time O(1). (Here Õ(⋅) hides polylog(n) factors.)
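
The interface is easy to pin down with the trivial "no preprocessing" DSO, which answers each query by a fresh Dijkstra that skips the failed vertex (the paper's contribution is achieving O(1) queries with subcubic preprocessing):

# Trivial DSO: per-query Dijkstra in G - x.
import heapq

def avoid_one_dijkstra(adj, u, v, x):
    # adj: dict vertex -> list of (neighbor, weight); returns d(u, v) in G - x
    if u == x or v == x:
        return float("inf")
    dist = {u: 0}
    pq = [(0, u)]
    while pq:
        d, a = heapq.heappop(pq)
        if a == v:
            return d
        if d > dist.get(a, float("inf")):
            continue
        for b, w in adj.get(a, ()):
            if b == x:
                continue                   # the failed vertex is skipped
            nd = d + w
            if nd < dist.get(b, float("inf")):
                dist[b] = nd
                heapq.heappush(pq, (nd, b))
    return float("inf")

adj = {"u": [("x", 1), ("a", 2)], "x": [("v", 1)], "a": [("v", 2)]}
print(avoid_one_dijkstra(adj, "u", "v", "x"))  # 4: detour u -> a -> v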

Cite as

Hanlin Ren. Improved Distance Sensitivity Oracles with Subcubic Preprocessing Time. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 79:1-79:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{ren:LIPIcs.ESA.2020.79,
  author =	{Ren, Hanlin},
  title =	{{Improved Distance Sensitivity Oracles with Subcubic Preprocessing Time}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{79:1--79:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.79},
  URN =		{urn:nbn:de:0030-drops-129450},
  doi =		{10.4230/LIPIcs.ESA.2020.79},
  annote =	{Keywords: Graph theory, Failure-prone structures}
}
Document
Fine-Grained Complexity of Regular Expression Pattern Matching and Membership

Authors: Philipp Schepper


Abstract
The currently fastest algorithm for regular expression pattern matching and membership improves the classical O(nm) time algorithm by a factor of about log^{3/2}n. Instead of focussing on general patterns we analyse homogeneous patterns of bounded depth in this work. For them a classification splitting the types into easy (strongly sub-quadratic) and hard (essentially quadratic time under SETH) is known. We take a very fine-grained look at the hard pattern types from this classification and show a dichotomy: few types allow super-poly-logarithmic improvements while the algorithms for the other pattern types can only be improved by a constant number of log-factors, assuming the Formula-SAT Hypothesis.

Cite as

Philipp Schepper. Fine-Grained Complexity of Regular Expression Pattern Matching and Membership. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 80:1-80:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{schepper:LIPIcs.ESA.2020.80,
  author =	{Schepper, Philipp},
  title =	{{Fine-Grained Complexity of Regular Expression Pattern Matching and Membership}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{80:1--80:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.80},
  URN =		{urn:nbn:de:0030-drops-129464},
  doi =		{10.4230/LIPIcs.ESA.2020.80},
  annote =	{Keywords: Fine-Grained Complexity, Regular Expression, Pattern Matching, Dichotomy}
}
Document
Space-Efficient, Fast and Exact Routing in Time-Dependent Road Networks

Authors: Ben Strasser, Dorothea Wagner, and Tim Zeitz


Abstract
We study the problem of computing shortest paths in massive road networks with traffic predictions. Incorporating traffic predictions into routing allows, for example, to avoid commuter traffic congestions. Existing techniques follow a two-phase approach: In a preprocessing step, an index is built. The index depends on the road network and the traffic patterns but not on the path start and end. The latter are the input of the query phase, in which shortest paths are computed. All existing techniques have either large index size, slow query running times, or may compute suboptimal paths. In this work, we introduce CATCHUp (Customizable Approximated Time-dependent Contraction Hierarchies through Unpacking), the first algorithm that simultaneously achieves all three objectives. The core idea of CATCHUp is to store paths instead of travel times at shortcuts. Shortcut travel times are derived lazily from the stored paths. We perform an experimental study on a set of real world instances and compare our approach with state-of-the-art techniques. Our approach achieves the fastest preprocessing, competitive query running times and up to 30 times smaller indexes than competing approaches.
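
The "store paths, derive travel times lazily" idea can be shown in miniature: each shortcut records the two parts it replaces, and evaluating it means unpacking down to original edges and composing their time-dependent delays. A toy sketch with hypothetical names, not the CATCHUp code base:

# Miniature shortcut unpacking: travel times derived lazily from stored paths.
def unpack(edge, shortcuts):
    # shortcuts: shortcut id -> (first part, second part); base edges unpack to themselves
    if edge not in shortcuts:
        return [edge]
    a, b = shortcuts[edge]
    return unpack(a, shortcuts) + unpack(b, shortcuts)

def travel_time(edge, depart, shortcuts, tt_functions):
    # tt_functions: original edge -> function(departure time) -> duration
    t = depart
    for e in unpack(edge, shortcuts):
        t += tt_functions[e](t)            # time-dependent: evaluate at arrival
    return t - depart

shortcuts = {"s": ("ab", "bc")}
tt = {"ab": lambda t: 2.0, "bc": lambda t: 3.0 if t < 8 else 1.0}
print(travel_time("s", 5.0, shortcuts, tt))   # 5.0: 2 + 3, arriving at bc before 8
print(travel_time("s", 7.0, shortcuts, tt))   # 3.0: 2 + 1, arriving at bc at 9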

Cite as

Ben Strasser, Dorothea Wagner, and Tim Zeitz. Space-Efficient, Fast and Exact Routing in Time-Dependent Road Networks. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 81:1-81:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{strasser_et_al:LIPIcs.ESA.2020.81,
  author =	{Strasser, Ben and Wagner, Dorothea and Zeitz, Tim},
  title =	{{Space-Efficient, Fast and Exact Routing in Time-Dependent Road Networks}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{81:1--81:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.81},
  URN =		{urn:nbn:de:0030-drops-129479},
  doi =		{10.4230/LIPIcs.ESA.2020.81},
  annote =	{Keywords: realistic road networks, time-dependent route planning, shortest paths}
}
Document
Improved Prophet Inequalities for Combinatorial Welfare Maximization with (Approximately) Subadditive Agents

Authors: Hanrui Zhang


Abstract
We give a framework for designing prophet inequalities for combinatorial welfare maximization. Instantiated with different parameters, our framework implies (1) an O(log m / log log m)-competitive prophet inequality for subadditive agents, improving over the O(log m) upper bound via item pricing, (2) an O(D log m / log log m)-competitive prophet inequality for D-approximately subadditive agents, where D ∈ {1, … , m-1} measures the maximum number of items that complement each other, and (3) as a byproduct, an O(1)-competitive prophet inequality for submodular or fractionally subadditive (a.k.a. XOS) agents, matching the optimal ratio asymptotically. Our framework is computationally efficient given sample access to the prior and demand queries.
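The paper's framework targets combinatorial welfare; as background, here is a minimal Python sketch of the classical single-item prophet inequality it generalizes: accept the first value exceeding a threshold of half the expected maximum, which guarantees in expectation at least half of the prophet's (offline maximum) value. This is purely illustrative background, not the paper's algorithm; the distributions are arbitrary assumptions.

# Classical single-item prophet inequality with threshold T = E[max]/2,
# checked empirically by Monte Carlo simulation. Illustrative only.

import random

dists = [lambda: random.uniform(0, 1),
         lambda: random.expovariate(1.0),
         lambda: random.uniform(0, 3)]

def trial():
    return [d() for d in dists]

# Estimate E[max] from samples (the framework assumes sample access to the prior).
samples = [trial() for _ in range(100_000)]
threshold = sum(max(s) for s in samples) / len(samples) / 2

def gambler(values, T):
    for v in values:      # values revealed one by one, online
        if v >= T:
            return v      # stop at the first value above the threshold
    return 0.0

fresh = [trial() for _ in range(100_000)]
alg = sum(gambler(s, threshold) for s in fresh) / len(fresh)
opt = sum(max(s) for s in fresh) / len(fresh)
print(f"ALG = {alg:.3f}, OPT = {opt:.3f}, ratio = {alg/opt:.3f}")  # ratio >= 0.5 in expectation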

Cite as

Hanrui Zhang. Improved Prophet Inequalities for Combinatorial Welfare Maximization with (Approximately) Subadditive Agents. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 82:1-82:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhang:LIPIcs.ESA.2020.82,
  author =	{Zhang, Hanrui},
  title =	{{Improved Prophet Inequalities for Combinatorial Welfare Maximization with (Approximately) Subadditive Agents}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{82:1--82:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.82},
  URN =		{urn:nbn:de:0030-drops-129488},
  doi =		{10.4230/LIPIcs.ESA.2020.82},
  annote =	{Keywords: Prophet Inequalities, Combinatorial Welfare Maximization, (Approximate) Subadditivity}
}
Document
On the Approximation Ratio of the k-Opt and Lin-Kernighan Algorithm for Metric and Graph TSP

Authors: Xianghui Zhong


Abstract
The k-Opt and Lin-Kernighan algorithms are two of the most important local search approaches for the Metric TSP. Both start with an arbitrary tour and make local improvements in each step to obtain a shorter tour. We show that for any fixed k ≥ 3 the approximation ratio of the k-Opt algorithm for Metric TSP is O(n^{1/k}). Assuming the Erdős girth conjecture, we prove a matching lower bound of Ω(n^{1/k}). Unconditionally, we obtain matching bounds for k = 3,4,6 and a lower bound of Ω(n^{2/(3k-3)}). Our most general bounds depend on the values of a function from extremal graph theory and are unconditionally tight up to a factor logarithmic in the number of vertices. Moreover, all the upper bounds also apply to a parameterized version of the Lin-Kernighan algorithm with an appropriate parameter. We also show that the approximation ratio of k-Opt for Graph TSP is Ω(log(n)/log log(n)) and O((log(n)/log log(n))^{log₂(9)+ε}) for all ε > 0.
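To illustrate the kind of local search being analysed, here is a minimal Python sketch of the k = 2 case (2-Opt) on a Euclidean instance: start from an arbitrary tour and repeatedly reverse a segment whenever that shortens the tour, stopping at a local optimum. The point set and tolerance are arbitrary; the paper's bounds concern the quality such local optima can guarantee, here for k ≥ 3.

# Minimal 2-Opt local search sketch: reverse tour segments while improving.

import math

def tour_length(tour, dist):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def two_opt(tour, dist):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip j = n-1 when i = 0: those two edges share a vertex.
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b),(c,d) by (a,c),(b,d) if strictly shorter.
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

pts = [(0, 0), (0, 2), (3, 1), (1, 1), (2, 2), (2, 0)]
dist = lambda u, v: math.hypot(pts[u][0] - pts[v][0], pts[u][1] - pts[v][1])
tour = two_opt(list(range(len(pts))), dist)
print(tour, round(tour_length(tour, dist), 3))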

Cite as

Xianghui Zhong. On the Approximation Ratio of the k-Opt and Lin-Kernighan Algorithm for Metric and Graph TSP. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 83:1-83:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhong:LIPIcs.ESA.2020.83,
  author =	{Zhong, Xianghui},
  title =	{{On the Approximation Ratio of the k-Opt and Lin-Kernighan Algorithm for Metric and Graph TSP}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{83:1--83:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.83},
  URN =		{urn:nbn:de:0030-drops-129497},
  doi =		{10.4230/LIPIcs.ESA.2020.83},
  annote =	{Keywords: traveling salesman problem, metric TSP, graph TSP, k-Opt algorithm, Lin-Kernighan algorithm, approximation algorithm, approximation ratio.}
}
