LIPIcs, Volume 87

25th Annual European Symposium on Algorithms (ESA 2017)



Event

ESA 2017, September 4-6, 2017, Vienna, Austria

Editors

Kirk Pruhs
Christian Sohler

Publication Details

  • Published: 2017-09-01
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-049-1
  • DBLP: db/conf/esa/esa2017

Documents

Document
Complete Volume
LIPIcs, Volume 87, ESA'17, Complete Volume

Authors: Kirk Pruhs and Christian Sohler


Abstract
LIPIcs, Volume 87, ESA'17, Complete Volume

Cite as

25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Proceedings{pruhs_et_al:LIPIcs.ESA.2017,
  title =	{{LIPIcs, Volume 87, ESA'17, Complete Volume}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017},
  URN =		{urn:nbn:de:0030-drops-79096},
  doi =		{10.4230/LIPIcs.ESA.2017},
  annote =	{Keywords: Data Structures, Nonnumerical Algorithms and Problems, Optimization, Discrete Mathematics, Mathematical Software, Algorithms, Problem Solving, Control Methods, and Search, Computational Geometry and Object Modeling}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Program Committees, External Reviewers

Authors: Kirk Pruhs and Christian Sohler


Abstract
Front Matter, Table of Contents, Preface, Program Committees, External Reviewers

Cite as

25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 0:i-0:xx, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{pruhs_et_al:LIPIcs.ESA.2017.0,
  author =	{Pruhs, Kirk and Sohler, Christian},
  title =	{{Front Matter, Table of Contents, Preface, Program Committees, External Reviewers}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.0},
  URN =		{urn:nbn:de:0030-drops-78147},
  doi =		{10.4230/LIPIcs.ESA.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Program Committees, External Reviewers}
}
Document
Invited Talk
Sketching for Geometric Problems (Invited Talk)

Authors: David P. Woodruff


Abstract
In this invited talk at the European Symposium on Algorithms (ESA), 2017, I will discuss a tool called sketching, which is a form of data dimensionality reduction, and its applications to several problems in high dimensional geometry. In particular, I will show how to obtain the fastest possible algorithms for fundamental problems such as projection onto a flat, and also study generalizations of projection onto more complicated objects such as the union of flats or subspaces. Some of these problems are just least squares regression problems, with many applications in machine learning, numerical linear algebra, and optimization. I will also discuss low rank approximation, with applications to clustering. Finally I will mention a number of other applications of sketching in machine learning, numerical linear algebra, and optimization.
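
To make the sketch-and-solve paradigm concrete, here is a minimal Python sketch of least squares regression via a dense Gaussian sketch. It is illustrative only: the fastest algorithms discussed in the talk use structured sketches (e.g., Count-Sketch) rather than dense Gaussians, and all names and parameters below are our own.

import numpy as np

def sketched_least_squares(A, b, m, seed=0):
    # Solve min_x ||Ax - b|| approximately: compress the n-row problem
    # to an m-row problem with a random sketching matrix S, then solve
    # the small problem exactly.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # dense Gaussian sketch
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: a 10000 x 20 problem sketched down to 400 rows.
rng = np.random.default_rng(1)
A = rng.standard_normal((10000, 20))
b = rng.standard_normal(10000)
x = sketched_least_squares(A, b, m=400)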

Cite as

David P. Woodruff. Sketching for Geometric Problems (Invited Talk). In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 1:1-1:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{woodruff:LIPIcs.ESA.2017.1,
  author =	{Woodruff, David P.},
  title =	{{Sketching for Geometric Problems}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{1:1--1:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.1},
  URN =		{urn:nbn:de:0030-drops-78848},
  doi =		{10.4230/LIPIcs.ESA.2017.1},
  annote =	{Keywords: dimensionality reduction, low rank approximation, projection, regression, sketching}
}
Document
Permuting and Batched Geometric Lower Bounds in the I/O Model

Authors: Peyman Afshani and Ingo van Duijn


Abstract
We study permuting and batched orthogonal geometric reporting problems in the External Memory Model (EM), assuming indivisibility of the input records. Our main results are twofold. First, we prove a general simulation result that essentially shows that any permutation algorithm (resp. duplicate-removal algorithm) that performs alpha*N/B I/Os (resp. removes a fraction of the existing duplicates) can be simulated with an algorithm that does alpha phases, where each phase reads and writes each element once but uses a block size smaller by a factor of alpha. Second, we prove two lower bounds for batched rectangle stabbing and batched orthogonal range reporting queries. Assuming a short cache, we prove very high lower bounds that are currently not attainable with existing techniques under the tall-cache assumption.

Cite as

Peyman Afshani and Ingo van Duijn. Permuting and Batched Geometric Lower Bounds in the I/O Model. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 2:1-2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{afshani_et_al:LIPIcs.ESA.2017.2,
  author =	{Afshani, Peyman and van Duijn, Ingo},
  title =	{{Permuting and Batched Geometric Lower Bounds in the I/O Model}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{2:1--2:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.2},
  URN =		{urn:nbn:de:0030-drops-78695},
  doi =		{10.4230/LIPIcs.ESA.2017.2},
  annote =	{Keywords: I/O Model, Batched Geometric Queries, Lower Bounds, Permuting}
}
Document
Independent Range Sampling, Revisited

Authors: Peyman Afshani and Zhewei Wei


Abstract
In the independent range sampling (IRS) problem, given an input set P of n points in R^d, the task is to build a data structure such that, given a range R and an integer t >= 1, it returns t points that are uniformly and independently drawn from P cap R. The samples must satisfy inter-query independence, that is, the samples returned by every query must be independent of the samples returned by all previous queries. This problem was first tackled by Hu, Qiao and Tao in 2014, who proposed optimal structures for the one-dimensional dynamic IRS problem in internal memory and the one-dimensional static IRS problem in external memory. In this paper, we study two natural extensions of the independent range sampling problem. In the first extension, we consider the static IRS problem in two and three dimensions in internal memory. We obtain data structures with optimal space-query tradeoffs for 3D halfspace, 3D dominance, and 2D three-sided queries. The second extension considers the weighted IRS problem: each point is associated with a real-valued weight, and given a query range R, a sample is drawn independently such that each point in P cap R is selected with probability proportional to its weight. Walker's alias method is a classic solution to this problem when no query range is specified. We obtain an optimal data structure for the one-dimensional weighted range sampling problem, thereby extending the alias method to allow range queries.
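
For reference, the classic alias method mentioned above (weighted sampling without a range restriction) can be sketched in a few lines of Python; the paper's contribution is to support arbitrary query ranges on top of this idea, which the sketch below does not attempt.

import random

def build_alias(weights):
    # Walker's alias method: O(n) preprocessing, O(1) per sample.
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                    # s's leftover mass is covered by l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias):
    # Draw an index with probability proportional to its weight.
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]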

Cite as

Peyman Afshani and Zhewei Wei. Independent Range Sampling, Revisited. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{afshani_et_al:LIPIcs.ESA.2017.3,
  author =	{Afshani, Peyman and Wei, Zhewei},
  title =	{{Independent Range Sampling, Revisited}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.3},
  URN =		{urn:nbn:de:0030-drops-78592},
  doi =		{10.4230/LIPIcs.ESA.2017.3},
  annote =	{Keywords: data structures, range searching, range sampling, random sampling}
}
Document
Approximate Nearest Neighbor Search Amid Higher-Dimensional Flats

Authors: Pankaj K. Agarwal, Natan Rubin, and Micha Sharir


Abstract
We consider the Approximate Nearest Neighbor (ANN) problem where the input set consists of n k-flats in the Euclidean space R^d, for any fixed parameters k < d, and where, for each query point q, we want to return an input flat whose distance from q is at most (1 + epsilon) times the shortest such distance, where epsilon > 0 is another prespecified parameter. We present an algorithm that achieves this task with n^{k+1}(log(n)/epsilon)^O(1) storage and preprocessing (where the constant of proportionality in the big-O notation depends on d), and can answer a query in O(polylog(n)) time (where the power of the logarithm depends on d and k). In particular, we need only near-quadratic storage to answer ANN queries amidst a set of n lines in any fixed-dimensional Euclidean space. As a by-product, our approach also yields an algorithm, with similar performance bounds, for answering exact nearest neighbor queries amidst k-flats with respect to any polyhedral distance function. Our results are more general, in that they also provide a tradeoff between storage and query time.

Cite as

Pankaj K. Agarwal, Natan Rubin, and Micha Sharir. Approximate Nearest Neighbor Search Amid Higher-Dimensional Flats. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ESA.2017.4,
  author =	{Agarwal, Pankaj K. and Rubin, Natan and Sharir, Micha},
  title =	{{Approximate Nearest Neighbor Search Amid Higher-Dimensional Flats}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{4:1--4:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.4},
  URN =		{urn:nbn:de:0030-drops-78182},
  doi =		{10.4230/LIPIcs.ESA.2017.4},
  annote =	{Keywords: Approximate nearest neighbor search, k-flats, Polyhedral distance functions, Linear programming queries}
}
Document
Output Sensitive Algorithms for Approximate Incidences and Their Applications

Authors: Dror Aiger, Haim Kaplan, and Micha Sharir


Abstract
An epsilon-approximate incidence between a point and some geometric object (line, circle, plane, sphere) occurs when the point and the object lie at distance at most epsilon from each other. Given a set of points and a set of objects, computing the approximate incidences between them is a major step in many database and web-based applications in computer vision and graphics, including robust model fitting, approximate point pattern matching, and estimating the fundamental matrix in epipolar (stereo) geometry. In a typical approximate incidence problem of this sort, we are given a set P of m points in two or three dimensions, a set S of n objects (lines, circles, planes, spheres), and an error parameter epsilon > 0, and our goal is to report all pairs (p,s) in P times S that lie at distance at most epsilon from one another. We present efficient output-sensitive approximation algorithms for quite a few cases, including points and lines or circles in the plane, and points and planes, spheres, lines, or circles in three dimensions. Several of these cases arise in the applications mentioned above. Our algorithms report all pairs at distance <= epsilon, but may also report additional pairs, all of which are guaranteed to be at distance at most alpha*epsilon, for some constant alpha > 1. Our algorithms are based on simple primal and dual grid decompositions and are easy to implement. We note though that (a) the use of duality, which leads to significant improvements in the overhead cost of the algorithms, appears to be novel for problems of this kind; (b) the correct choice of duality in some of these problems is fairly intricate and requires some care; and (c) the correctness and performance analysis of the algorithms (especially in the more advanced versions) is fairly non-trivial. We analyze our algorithms and prove guaranteed upper bounds on their running time and on the "distortion" parameter alpha. We also briefly describe the motivating applications, and show how they can effectively exploit our solutions. The superior theoretical bounds on the performance of our algorithms, and their simplicity, make them ideal tools for these applications. In a series of preliminary experiments (not included in this abstract), we substantiate this impression, and show that our algorithms lead in practice to significantly improved performance of the aforementioned applications.
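
As a rough illustration of the primal-grid idea, here is a hypothetical Python sketch for points in the unit square and non-steep lines y = a*x + b (|a| <= 1): it buckets points into an eps-grid and, for each line, inspects only the cells the line can approach, reporting pairs at vertical distance at most eps (a proxy for Euclidean distance when |a| <= 1). The paper's algorithms are considerably more refined, in particular in their use of duality; the final exact distance check below is our own simplification.

from collections import defaultdict

def approx_incidences(points, lines, eps):
    # Bucket points into an eps-grid over the unit square.
    grid = defaultdict(list)
    for (x, y) in points:
        grid[(int(x / eps), int(y / eps))].append((x, y))
    out = []
    cols = int(1 / eps) + 1
    for (a, b) in lines:                    # line y = a*x + b, |a| <= 1
        for i in range(cols):
            j = int((a * (i * eps) + b) / eps)
            for dj in (-2, -1, 0, 1, 2):    # cells the line can touch here
                for (px, py) in grid.get((i, j + dj), []):
                    if abs(a * px + b - py) <= eps:
                        out.append(((px, py), (a, b)))
    return out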

Cite as

Dror Aiger, Haim Kaplan, and Micha Sharir. Output Sensitive Algorithms for Approximate Incidences and Their Applications. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{aiger_et_al:LIPIcs.ESA.2017.5,
  author =	{Aiger, Dror and Kaplan, Haim and Sharir, Micha},
  title =	{{Output Sensitive Algorithms for Approximate Incidences and Their Applications}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{5:1--5:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.5},
  URN =		{urn:nbn:de:0030-drops-78224},
  doi =		{10.4230/LIPIcs.ESA.2017.5},
  annote =	{Keywords: Approximate incidences, near-neighbor reporting, duality, grid-based approximation}
}
Document
Randomized Contractions for Multiobjective Minimum Cuts

Authors: Hassene Aissi, Ali Ridha Mahjoub, and R. Ravi


Abstract
We show that Karger's randomized contraction method (SODA 93) can be adapted to multiobjective global minimum cut problems with a constant number of edge or node budget constraints to give efficient algorithms. For global minimum cuts with a single edge-budget constraint, our extension of the randomized contraction method has running time tilde{O}(n^3) in an n-node graph, improving upon the best-known randomized algorithm with running time tilde{O}(n^4) due to Armon and Zwick (Algorithmica 2006). Our analysis also gives a new upper bound of O(n^3) for the number of optimal solutions for a single edge-budget min cut problem. For the case of (k-1) edge-budget constraints, the extension of our algorithm saves a logarithmic factor from the best-known randomized running time of O(n^{2k} log^3 n). A main feature of our algorithms is to adaptively choose, at each step, the appropriate cost function used in the random selection of edges to be contracted. For the global min cut problem with a constant number of node budgets, we give a randomized algorithm with running time tilde{O}(n^2), improving the current best deterministic running time of O(n^3) due to Goemans and Soto (SIAM Journal on Discrete Mathematics 2013). Our method also shows that the total number of distinct optimal solutions is bounded by O(n^2), as in the case of global min-cuts. Our algorithm extends to the node-budget constrained global min cut problem excluding a given sink with the same running time and bound on the number of optimal solutions, again improving upon the best-known running time by a factor of O(n). For node-budget constrained problems, our improvements arise from incorporating the idea of merging any infeasible super-nodes that arise during the random contraction process. In contrast to cuts excluding a sink, we note that the node-cardinality constrained min-cut problem containing a given source is strongly NP-hard, using a reduction from graph bisection.
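
For context, here is a minimal Python rendering of Karger's randomized contraction method for the classic (unconstrained, unweighted) global min cut, phrased via a random edge permutation and union-find. The paper's algorithms adapt the edge-selection probabilities to handle budget constraints, which this sketch does not.

import random

def karger_min_cut(n, edges, trials=100):
    # edges: list of (u, v) pairs on vertices 0..n-1 (multigraph).
    best = float('inf')
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        order = edges[:]
        random.shuffle(order)                   # random contraction order
        alive = n
        for (u, v) in order:
            if alive == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:                        # contract edge (u, v)
                parent[ru] = rv
                alive -= 1
        cut = sum(1 for (u, v) in edges if find(u) != find(v))
        best = min(best, cut)
    return best   # correct w.h.p. with Omega(n^2 log n) trials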

Cite as

Hassene Aissi, Ali Ridha Mahjoub, and R. Ravi. Randomized Contractions for Multiobjective Minimum Cuts. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{aissi_et_al:LIPIcs.ESA.2017.6,
  author =	{Aissi, Hassene and Mahjoub, Ali Ridha and Ravi, R.},
  title =	{{Randomized Contractions for Multiobjective Minimum Cuts}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.6},
  URN =		{urn:nbn:de:0030-drops-78686},
  doi =		{10.4230/LIPIcs.ESA.2017.6},
  annote =	{Keywords: minimum cut, multiobjective optimization, budget constraints, graph algorithms, randomized algorithms}
}
Document
Tight Bounds for Online Coloring of Basic Graph Classes

Authors: Susanne Albers and Sebastian Schraink


Abstract
We resolve a number of long-standing open problems in online graph coloring. More specifically, we develop tight lower bounds on the performance of online algorithms for fundamental graph classes. An important contribution is that our bounds also hold for randomized online algorithms, for which hardly any results were known. Technically, we construct lower bounds for chordal graphs. The constructions then allow us to derive results on the performance of randomized online algorithms for the following further graph classes: trees, planar, bipartite, inductive, bounded-treewidth and disk graphs. These results show that the best competitive ratio of both deterministic and randomized online algorithms is Theta(log n), where n is the number of vertices of a graph. Furthermore, we prove that this guarantee cannot be improved if an online algorithm has a lookahead of size O(n/log n) or access to a reordering buffer of size n^(1-epsilon), for any 0 < epsilon <= 1. A consequence of our results is that, for all of the above-mentioned graph classes except bipartite graphs, the natural First Fit coloring algorithm achieves optimal performance, up to constant factors, among deterministic and randomized online algorithms.
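
The First Fit algorithm referred to above is simple to state; a minimal Python sketch, assuming the arrival order and the adjacency lists of already-revealed neighbors are given:

def first_fit_color(vertices, adj):
    # Color vertices in arrival order with the smallest color not
    # already used by a previously colored neighbor.
    color = {}
    for v in vertices:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color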

Cite as

Susanne Albers and Sebastian Schraink. Tight Bounds for Online Coloring of Basic Graph Classes. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{albers_et_al:LIPIcs.ESA.2017.7,
  author =	{Albers, Susanne and Schraink, Sebastian},
  title =	{{Tight Bounds for Online Coloring of Basic Graph Classes}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.7},
  URN =		{urn:nbn:de:0030-drops-78268},
  doi =		{10.4230/LIPIcs.ESA.2017.7},
  annote =	{Keywords: graph coloring, online algorithms, lower bounds, randomization}
}
Document
Combinatorics of Local Search: An Optimal 4-Local Hall's Theorem for Planar Graphs

Authors: Daniel Antunes, Claire Mathieu, and Nabil H. Mustafa


Abstract
Local search for combinatorial optimization problems is becoming a dominant algorithmic paradigm, with several papers using it to resolve long-standing open problems. In this paper, we prove the following "4-local" version of Hall's theorem for planar graphs: given a bipartite planar graph G = (B, R, E) such that |N(B')| >= |B'| for every subset B' of B with |B'| <= 4, there exists a matching of size at least |B|/4 in G; furthermore, this bound is tight. Besides immediately implying improved bounds for several problems studied in previous papers, we find this variant of Hall's theorem to be of independent interest in graph theory.
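
A brute-force checker for the k-local Hall condition makes the statement concrete; this is a hypothetical illustration (exponential in k, but k = 4 here), not part of the paper's proof.

from itertools import combinations

def local_hall_holds(B, adj, k=4):
    # Check |N(B')| >= |B'| for every subset B' of B with |B'| <= k.
    for r in range(1, k + 1):
        for Bp in combinations(B, r):
            neighborhood = set().union(*(adj[b] for b in Bp))
            if len(neighborhood) < r:
                return False
    return True   # for k = 4 and planar G, a matching of size >= |B|/4 exists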

Cite as

Daniel Antunes, Claire Mathieu, and Nabil H. Mustafa. Combinatorics of Local Search: An Optimal 4-Local Hall's Theorem for Planar Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{antunes_et_al:LIPIcs.ESA.2017.8,
  author =	{Antunes, Daniel and Mathieu, Claire and Mustafa, Nabil H.},
  title =	{{Combinatorics of Local Search: An Optimal 4-Local Hall's Theorem for Planar Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.8},
  URN =		{urn:nbn:de:0030-drops-78293},
  doi =		{10.4230/LIPIcs.ESA.2017.8},
  annote =	{Keywords: Planar graphs, Local search, Hall's theorem, Combinatorial optimization, Expansion}
}
Document
In-Place Parallel Super Scalar Samplesort (IPSSSSo)

Authors: Michael Axtmann, Sascha Witt, Daniel Ferizovic, and Peter Sanders


Abstract
We present a sorting algorithm that works in-place, executes in parallel, is cache-efficient, avoids branch mispredictions, and performs O(n log n) work for arbitrary inputs with high probability. The main algorithmic contributions are new ways to make distribution-based algorithms in-place: on the practical side, we use coarse-grained block-based permutations; on the theoretical side, we show how to eliminate the recursion stack. Extensive experiments show that our algorithm IPSSSSo scales well on a variety of multi-core machines. We outperform our closest in-place competitor by a factor of up to 3. Even as a sequential algorithm, we are up to 1.5 times faster than the closest sequential competitor, BlockQuicksort.

Cite as

Michael Axtmann, Sascha Witt, Daniel Ferizovic, and Peter Sanders. In-Place Parallel Super Scalar Samplesort (IPSSSSo). In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{axtmann_et_al:LIPIcs.ESA.2017.9,
  author =	{Axtmann, Michael and Witt, Sascha and Ferizovic, Daniel and Sanders, Peter},
  title =	{{In-Place Parallel Super Scalar Samplesort (IPSSSSo)}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.9},
  URN =		{urn:nbn:de:0030-drops-78542},
  doi =		{10.4230/LIPIcs.ESA.2017.9},
  annote =	{Keywords: shared memory, parallel sorting, in-place algorithm, comparison-based sorting, branch prediction}
}
Document
Online Bin Packing with Cardinality Constraints Resolved

Authors: Janos Balogh, Jozsef Bekesi, Gyorgy Dosa, Leah Epstein, and Asaf Levin


Abstract
Cardinality constrained bin packing or bin packing with cardinality constraints is a basic bin packing problem. In the online version with the parameter k >= 2, items having sizes in (0,1] associated with them are presented one by one to be packed into unit capacity bins, such that the capacities of bins are not exceeded, and no bin receives more than k items. We resolve the online problem in the sense that we prove a lower bound of 2 on the overall asymptotic competitive ratio. This closes the long-standing open problem of finding the value of the best possible overall asymptotic competitive ratio, since an algorithm with absolute competitive ratio 2 is known for every fixed value of k. Additionally, we significantly improve the known lower bounds on the asymptotic competitive ratio for every specific value of k. The novelty of our constructions is based on full adaptivity that creates large gaps between item sizes. Thus, our lower bound inputs do not follow the common practice for online bin packing problems of having an input, known in advance, that consists of batches such that the algorithm needs to be competitive on every prefix of the input. Lastly, we show a lower bound strictly larger than 2 on the asymptotic competitive ratio of the online 2-dimensional vector packing problem, and thus provide for the first time a lower bound larger than 2 on the asymptotic competitive ratio for the vector packing problem in any fixed dimension.
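
For concreteness, the natural online baseline for this problem is First Fit restricted by the cardinality bound; a minimal Python sketch (the paper's results concern lower bounds, not this particular algorithm):

def first_fit_cardinality(items, k):
    # Place each arriving item in the first bin that respects both the
    # capacity 1 and the cardinality bound k; open a new bin otherwise.
    bins = []                       # list of (load, count) pairs
    for s in items:
        for i, (load, cnt) in enumerate(bins):
            if load + s <= 1.0 and cnt < k:
                bins[i] = (load + s, cnt + 1)
                break
        else:
            bins.append((s, 1))
    return bins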

Cite as

Janos Balogh, Jozsef Bekesi, Gyorgy Dosa, Leah Epstein, and Asaf Levin. Online Bin Packing with Cardinality Constraints Resolved. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{balogh_et_al:LIPIcs.ESA.2017.10,
  author =	{Balogh, Janos and Bekesi, Jozsef and Dosa, Gyorgy and Epstein, Leah and Levin, Asaf},
  title =	{{Online Bin Packing with Cardinality Constraints Resolved}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.10},
  URN =		{urn:nbn:de:0030-drops-78514},
  doi =		{10.4230/LIPIcs.ESA.2017.10},
  annote =	{Keywords: Online algorithms, bin packing, cardinality constraints, lower bounds}
}
Document
Modeling and Engineering Constrained Shortest Path Algorithms for Battery Electric Vehicles

Authors: Moritz Baum, Julian Dibbelt, Dorothea Wagner, and Tobias Zündorf


Abstract
We study the problem of computing constrained shortest paths for battery electric vehicles. Since battery capacities are limited, fastest routes are often infeasible. Instead, users are interested in fast routes where the energy consumption does not exceed the battery capacity. For that, drivers can deliberately reduce speed to save energy. Hence, route planning should provide both path and speed recommendations. To tackle the resulting NP-hard optimization problem, previous work trades correctness or accuracy of the underlying model for practical running times. In this work, we present a novel framework to compute optimal constrained shortest paths for electric vehicles that uses more realistic physical models, while taking speed adaptation into account. Careful algorithm engineering makes the approach practical even on large, realistic road networks: We compute optimal solutions in less than a second for typical battery capacities, matching the performance of previous inexact methods. For even faster performance, the approach can easily be extended with heuristics that provide high-quality solutions within milliseconds.
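
A common starting point for such constrained shortest-path problems is bicriteria search with Pareto sets of (time, charge) labels. The Python sketch below is a bare-bones illustration under assumed edge attributes (travel time dt, energy use de), not the paper's engineered algorithm, which additionally models speed adaptation and realistic consumption.

import heapq

def bicriteria_search(graph, s, t, battery):
    # graph[v] is a list of (u, dt, de); node ids are assumed comparable.
    # de may be negative (recuperation), with charge capped at battery.
    labels = {v: [] for v in graph}      # Pareto sets of (time, charge)
    pq = [(0.0, -float(battery), s)]
    while pq:
        time, neg_q, v = heapq.heappop(pq)
        charge = -neg_q
        if any(t2 <= time and q2 >= charge for (t2, q2) in labels[v]):
            continue                      # dominated by a settled label
        labels[v].append((time, charge))
        if v == t:
            return time                   # earliest feasible arrival
        for (u, dt, de) in graph[v]:
            q = min(charge - de, battery)
            if q >= 0:                    # never run the battery dry
                heapq.heappush(pq, (time + dt, -q, u))
    return None                           # t unreachable with this battery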

Cite as

Moritz Baum, Julian Dibbelt, Dorothea Wagner, and Tobias Zündorf. Modeling and Engineering Constrained Shortest Path Algorithms for Battery Electric Vehicles. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{baum_et_al:LIPIcs.ESA.2017.11,
  author =	{Baum, Moritz and Dibbelt, Julian and Wagner, Dorothea and Z\"{u}ndorf, Tobias},
  title =	{{Modeling and Engineering Constrained Shortest Path Algorithms for Battery Electric Vehicles}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.11},
  URN =		{urn:nbn:de:0030-drops-78672},
  doi =		{10.4230/LIPIcs.ESA.2017.11},
  annote =	{Keywords: electric vehicles, constrained shortest paths, algorithm engineering}
}
Document
A Quasi-Polynomial-Time Approximation Scheme for Vehicle Routing on Planar and Bounded-Genus Graphs

Authors: Amariah Becker, Philip N. Klein, and David Saulpic


Abstract
The Capacitated Vehicle Routing problem is a generalization of the Traveling Salesman problem in which a set of clients must be visited by a collection of capacitated tours. Each tour can visit at most Q clients and must start and end at a specified depot. We present the first approximation scheme for Capacitated Vehicle Routing for non-Euclidean metrics. Specifically we give a quasi-polynomial-time approximation scheme for Capacitated Vehicle Routing with fixed capacities on planar graphs. We also show how this result can be extended to bounded-genus graphs and polylogarithmic capacities, as well as to variations of the problem that include multiple depots and charging penalties for unvisited clients.

Cite as

Amariah Becker, Philip N. Klein, and David Saulpic. A Quasi-Polynomial-Time Approximation Scheme for Vehicle Routing on Planar and Bounded-Genus Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{becker_et_al:LIPIcs.ESA.2017.12,
  author =	{Becker, Amariah and Klein, Philip N. and Saulpic, David},
  title =	{{A Quasi-Polynomial-Time Approximation Scheme for Vehicle Routing on Planar and Bounded-Genus Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{12:1--12:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.12},
  URN =		{urn:nbn:de:0030-drops-78781},
  doi =		{10.4230/LIPIcs.ESA.2017.12},
  annote =	{Keywords: Capacitated Vehicle Routing, Approximation Algorithms, Planar Graphs}
}
Document
The Directed Disjoint Shortest Paths Problem

Authors: Kristof Berczi and Yusuke Kobayashi


Abstract
In the k disjoint shortest paths problem (k-DSPP), we are given a graph and k vertex pairs (s_1, t_1), ..., (s_k, t_k), and the objective is to find k pairwise disjoint paths P_1, ..., P_k such that each path P_i is a shortest path from s_i to t_i, if they exist. If the length of each edge is equal to zero, then this problem amounts to the disjoint paths problem, which is one of the well-studied problems in algorithmic graph theory and combinatorial optimization. Eilam-Tzoreff (1998) focused on the case when the length of each edge is positive, and showed that the undirected version of 2-DSPP can be solved in polynomial time. Polynomial solvability of the directed version was posed as an open problem by Eilam-Tzoreff (1998). In this paper, we solve this problem affirmatively, that is, we give the first polynomial-time algorithm for the directed version of 2-DSPP when the length of each edge is positive. Note that the 2 disjoint paths problem in digraphs is NP-hard, which implies that the directed 2-DSPP is NP-hard if the length of each edge can be zero. We extend our result to the case when the instance has two terminal pairs and the number of paths is a fixed constant greater than two. We also show that the undirected k-DSPP and the vertex-disjoint version of the directed k-DSPP can be solved in polynomial time if the input graph is planar and k is a fixed constant.

Cite as

Kristof Berczi and Yusuke Kobayashi. The Directed Disjoint Shortest Paths Problem. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{berczi_et_al:LIPIcs.ESA.2017.13,
  author =	{Berczi, Kristof and Kobayashi, Yusuke},
  title =	{{The Directed Disjoint Shortest Paths Problem}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.13},
  URN =		{urn:nbn:de:0030-drops-78246},
  doi =		{10.4230/LIPIcs.ESA.2017.13},
  annote =	{Keywords: Disjoint paths, shortest path, polynomial time algorithm}
}
Document
Triangle Packing in (Sparse) Tournaments: Approximation and Kernelization

Authors: Stéphane Bessy, Marin Bougeret, and Jocelyn Thiebaut


Abstract
Given a tournament T and a positive integer k, the C_3-Packing-T problem asks if there exist at least k (vertex-)disjoint directed 3-cycles in T. This is the dual problem in tournaments of the classical minimum feedback vertex set problem. Surprisingly, C_3-Packing-T has not received much attention in the literature. We show that it does not admit a PTAS unless P=NP, even if we restrict the considered instances to sparse tournaments, that is, tournaments whose feedback arc set (FAS) is a matching. Focusing on sparse tournaments, we provide a (1+6/(c-1))-approximation algorithm for sparse tournaments having a linear representation where all the backward arcs have "length" at least c. Concerning kernelization, we show that C_3-Packing-T admits a kernel with O(m) vertices, where m is the size of a given feedback arc set. In particular, we derive an O(k)-vertex kernel for C_3-Packing-T when restricted to sparse instances. On the negative side, we show that C_3-Packing-T does not admit a kernel of (total bit) size O(k^{2-epsilon}) unless NP is a subset of coNP/poly. The existence of a kernel with O(k) vertices for general tournaments remains an open question.
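
To fix ideas, a greedy baseline that packs vertex-disjoint directed triangles in a tournament can be sketched as follows; this is a hypothetical illustration with no approximation guarantee claimed, not the paper's algorithm.

def greedy_triangle_packing(n, beats):
    # beats[u][v] is True iff the arc u -> v is in the tournament.
    used = [False] * n
    packing = []
    for a in range(n):
        for b in range(n):
            for c in range(n):
                if used[a] or used[b] or used[c] or len({a, b, c}) < 3:
                    continue
                if beats[a][b] and beats[b][c] and beats[c][a]:
                    packing.append((a, b, c))   # directed 3-cycle found
                    used[a] = used[b] = used[c] = True
    return packing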

Cite as

Stéphane Bessy, Marin Bougeret, and Jocelyn Thiebaut. Triangle Packing in (Sparse) Tournaments: Approximation and Kernelization. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 14:1-14:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bessy_et_al:LIPIcs.ESA.2017.14,
  author =	{Bessy, St\'{e}phane and Bougeret, Marin and Thiebaut, Jocelyn},
  title =	{{Triangle Packing in (Sparse) Tournaments: Approximation and Kernelization}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{14:1--14:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.14},
  URN =		{urn:nbn:de:0030-drops-78622},
  doi =		{10.4230/LIPIcs.ESA.2017.14},
  annote =	{Keywords: Tournament Triangle packing, Feedback arc set, Approximation algorithms, Parameterized algorithms}
}
Document
Improved Algorithm for Dynamic b-Matching

Authors: Sayan Bhattacharya, Manoj Gupta, and Divyarthi Mohan


Abstract
Recently there has been extensive work on maintaining (approximate) maximum matchings in dynamic graphs. We consider a generalisation of this problem known as the maximum b-matching: every node v has a positive integral capacity b_v, and the goal is to maintain an (approximate) maximum-cardinality subset of edges that contains at most b_v edges incident on every node v. The maximum matching problem is a special case of this problem where b_v = 1 for every node v. Bhattacharya, Henzinger and Italiano [ICALP 2015] showed how to maintain an O(1)-approximate maximum b-matching in a graph in O(log^3 n) amortised update time. Their approximation ratio was a large (double-digit) constant. We significantly improve their result in terms of both approximation ratio and update time. Specifically, we design a randomised dynamic algorithm that maintains a (2+epsilon)-approximate maximum b-matching in expected amortised O(1/epsilon^4) update time. Thus, for every constant epsilon in (0, 1), we get expected amortised O(1) update time. Our algorithm generalises the framework of Baswana, Gupta, Sen [FOCS 2011] and Solomon [FOCS 2016] for maintaining a maximal matching in a dynamic graph.
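
In the static setting, the analogue of the guarantee above is that any maximal b-matching is (to our understanding) a 2-approximation of the maximum. A minimal greedy Python sketch; the paper's contribution is maintaining such a guarantee under edge insertions and deletions, which this sketch does not address.

def greedy_b_matching(edges, b):
    # Accept an edge if both endpoints still have residual capacity.
    load = {v: 0 for v in b}
    matching = []
    for (u, v) in edges:
        if load[u] < b[u] and load[v] < b[v]:
            matching.append((u, v))
            load[u] += 1
            load[v] += 1
    return matching   # maximal, hence within a factor 2 of the maximum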

Cite as

Sayan Bhattacharya, Manoj Gupta, and Divyarthi Mohan. Improved Algorithm for Dynamic b-Matching. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 15:1-15:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.ESA.2017.15,
  author =	{Bhattacharya, Sayan and Gupta, Manoj and Mohan, Divyarthi},
  title =	{{Improved Algorithm for Dynamic b-Matching}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.15},
  URN =		{urn:nbn:de:0030-drops-78443},
  doi =		{10.4230/LIPIcs.ESA.2017.15},
  annote =	{Keywords: dynamic data structures, graph algorithms}
}
Document
Fast Dynamic Arrays

Authors: Philip Bille, Anders Roy Christiansen, Mikko Berggren Ettienne, and Inge Li Gørtz


Abstract
We present a highly optimized implementation of tiered vectors, a data structure for maintaining a sequence of n elements supporting access in time O(1) and insertion and deletion in time O(n^e) for e > 0 while using o(n) extra space. We consider several different implementation optimizations in C++ and compare their performance to that of vector and set from the standard library on sequences with up to 10^8 elements. Our fastest implementation uses much less space than set while providing speedups of 40x for access operations compared to set and speedups of 10,000x compared to vector for insertion and deletion operations, while being competitive with both data structures for all other operations.
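
A stripped-down two-level tiered vector conveys the main idea: fixed-capacity blocks give O(1) access by index arithmetic, and an insertion cascades one overflowing element per block. This Python sketch is our own and deliberately simplistic; it uses plain list shifting, whereas the paper's implementation uses circular buffers (making each cascade step O(1), hence O(sqrt(n)) inserts for block size Theta(sqrt(n))) plus many further optimizations.

class TieredVector:
    # All blocks except possibly the last hold exactly `block` elements,
    # so position i lives in blocks[i // block][i % block].
    def __init__(self, block=64):
        self.block = block
        self.blocks = [[]]

    def __getitem__(self, i):           # O(1) access
        return self.blocks[i // self.block][i % self.block]

    def insert(self, i, x):             # cascade one element per block
        bi, off = i // self.block, i % self.block
        carry = x
        for b in self.blocks[bi:]:
            b.insert(off, carry)
            if len(b) <= self.block:
                return
            carry = b.pop()             # overflow moves to the next block
            off = 0
        self.blocks.append([carry])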

Cite as

Philip Bille, Anders Roy Christiansen, Mikko Berggren Ettienne, and Inge Li Gørtz. Fast Dynamic Arrays. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 16:1-16:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bille_et_al:LIPIcs.ESA.2017.16,
  author =	{Bille, Philip and Christiansen, Anders Roy and Ettienne, Mikko Berggren and G{\o}rtz, Inge Li},
  title =	{{Fast Dynamic Arrays}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.16},
  URN =		{urn:nbn:de:0030-drops-78309},
  doi =		{10.4230/LIPIcs.ESA.2017.16},
  annote =	{Keywords: Dynamic Arrays, Tiered Vectors}
}
Document
On the Impact of Singleton Strategies in Congestion Games

Authors: Vittorio Bilò and Cosimo Vinci


Abstract
To what extent does the structure of the players' strategy space influence the efficiency of decentralized solutions in congestion games? In this work, we investigate whether better performance is possible when restricting to load balancing games in which players can only choose among single resources. We consider three different solution concepts, namely, approximate pure Nash equilibria, approximate one-round walks generated by selfish players aiming at minimizing their personal cost, and approximate one-round walks generated by cooperative players aiming at minimizing the marginal increase in the sum of the players' personal costs. The last two concepts can also be interpreted as solutions of simple greedy online algorithms for the related resource selection problem. Under fairly general latency functions on the resources, we show that, for all three types of solutions, better bounds cannot be achieved if players are either weighted or asymmetric. On the positive side, we prove that, under mild assumptions on the latency functions, improvements on the performance of approximate pure Nash equilibria are possible for load balancing games with weighted and symmetric players in the case of identical resources. We also prove lower bounds on the performance of one-round walks in load balancing games with unweighted players and identical resources (in this case, solutions generated by selfish and cooperative players coincide).

Cite as

Vittorio Bilò and Cosimo Vinci. On the Impact of Singleton Strategies in Congestion Games. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.ESA.2017.17,
  author =	{Bil\`{o}, Vittorio and Vinci, Cosimo},
  title =	{{On the Impact of Singleton Strategies in Congestion Games}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.17},
  URN =		{urn:nbn:de:0030-drops-78576},
  doi =		{10.4230/LIPIcs.ESA.2017.17},
  annote =	{Keywords: Congestion games, Nash equilibrium, price of anarchy, online load balancing, greedy algorithms}
}
Document
Tight Lower Bounds for the Complexity of Multicoloring

Authors: Marthe Bonamy, Lukasz Kowalik, Michal Pilipczuk, Arkadiusz Socala, and Marcin Wrochna


Abstract
In the multicoloring problem, also known as (a:b)-coloring or b-fold coloring, we are given a graph G and a set of a colors, and the task is to assign a subset of b colors to each vertex of G so that adjacent vertices receive disjoint color subsets. This natural generalization of the classic coloring problem (the b=1 case) is equivalent to finding a homomorphism to the Kneser graph KG_{a,b}, and gives relaxations approaching the fractional chromatic number. We study the complexity of determining whether a graph has an (a:b)-coloring. Our main result is that this problem does not admit an algorithm with running time f(b) * 2^{o(log b) * n}, for any computable f(b), unless the Exponential Time Hypothesis (ETH) fails. A (b+1)^n * poly(n)-time algorithm due to Nederlof [2008] shows that this is tight. A direct corollary of our result is that the graph homomorphism problem does not admit a 2^{O(n+h)}-time algorithm, where n and h are the numbers of vertices of the input and target graphs, unless ETH fails, even if the target graph is required to be a Kneser graph. This refines the understanding given by the recent lower bound of Cygan et al. [SODA 2016]. The crucial ingredient in our hardness reduction is the usage of detecting matrices of Lindström [Canad. Math. Bull., 1965], which is a combinatorial tool that, to the best of our knowledge, has not yet been used for proving complexity lower bounds. As a side result, we prove that the running times of the algorithms of Abasi et al. [MFCS 2014] and of Gabizon et al. [ESA 2015] for the r-monomial detection problem are optimal under ETH.

Cite as

Marthe Bonamy, Lukasz Kowalik, Michal Pilipczuk, Arkadiusz Socala, and Marcin Wrochna. Tight Lower Bounds for the Complexity of Multicoloring. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bonamy_et_al:LIPIcs.ESA.2017.18,
  author =	{Bonamy, Marthe and Kowalik, Lukasz and Pilipczuk, Michal and Socala, Arkadiusz and Wrochna, Marcin},
  title =	{{Tight Lower Bounds for the Complexity of Multicoloring}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.18},
  URN =		{urn:nbn:de:0030-drops-78527},
  doi =		{10.4230/LIPIcs.ESA.2017.18},
  annote =	{Keywords: multicoloring, Kneser graph homomorphism, ETH lower bound}
}
Document
Exploring the Tractability of the Capped Hose Model

Authors: Thomas Bosman and Neil Olver


Abstract
Robust network design concerns the design of networks to support uncertain or varying traffic patterns. An especially important case is the VPN problem, where the total traffic emanating from any node is bounded, but there are no further constraints on the traffic pattern. Recently, Fréchette et al. [INFOCOM, 2013] studied a generalization of the VPN problem where in addition to these so-called hose constraints, there are individual upper bounds on the demands between pairs of nodes. They motivate their model, give some theoretical results, and propose a heuristic algorithm that performs well on real-world instances. Our theoretical understanding of this model is limited; it is APX-hard in general, but tractable when either the hose constraints or the individual demand bounds are redundant. In this work, we uncover further tractable cases of this model; our main result concerns the case where each terminal needs to communicate only with two others. Our algorithms all involve optimally embedding a certain auxiliary graph into the network, and have a connection to a heuristic suggested by Fréchette et al. for the capped hose model in general.

Cite as

Thomas Bosman and Neil Olver. Exploring the Tractability of the Capped Hose Model. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bosman_et_al:LIPIcs.ESA.2017.19,
  author =	{Bosman, Thomas and Olver, Neil},
  title =	{{Exploring the Tractability of the Capped Hose Model}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.19},
  URN =		{urn:nbn:de:0030-drops-78663},
  doi =		{10.4230/LIPIcs.ESA.2017.19},
  annote =	{Keywords: robust network design, VPN problem}
}
Document
Sampling Geometric Inhomogeneous Random Graphs in Linear Time

Authors: Karl Bringmann, Ralph Keusch, and Johannes Lengler


Abstract
Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks shifted from classic models without geometry, such as Chung-Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs. With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of directly studying hyperbolic random graphs, we introduce a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines), while preserving the qualitative behaviour of hyperbolic random graphs, and we suggest replacing hyperbolic random graphs with this new model in future theoretical studies. We prove the following fundamental structural and algorithmic results on GIRGs. (1) As our main contribution we provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor O(n^0.5). (2) We establish that GIRGs have clustering coefficients in Omega(1), (3) we prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces, and (4) we show how to compress GIRGs using an expected linear number of bits.
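
As a point of reference, a naive quadratic-time GIRG-style sampler looks as follows in Python. The weight distribution, normalization, and parameter names are our own guesses at a 1-dimensional variant; the paper's algorithm avoids the n^2 pair enumeration via geometric bucketing to reach expected linear time.

import random

def sample_girg_naive(n, alpha=1.5, beta=2.5):
    # Power-law weights (Pareto, exponent beta) and uniform positions on
    # a 1-dimensional torus; connect u, v independently with probability
    # roughly min(1, (w_u * w_v / (n * dist))^alpha).
    w = [(1.0 - random.random()) ** (-1.0 / (beta - 1)) for _ in range(n)]
    x = [random.random() for _ in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            d = abs(x[u] - x[v])
            d = min(d, 1.0 - d)                      # torus distance
            p = 1.0 if d == 0 else min(1.0, (w[u] * w[v] / (n * d)) ** alpha)
            if random.random() < p:
                edges.append((u, v))
    return edges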

Cite as

Karl Bringmann, Ralph Keusch, and Johannes Lengler. Sampling Geometric Inhomogeneous Random Graphs in Linear Time. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2017.20,
  author =	{Bringmann, Karl and Keusch, Ralph and Lengler, Johannes},
  title =	{{Sampling Geometric Inhomogeneous Random Graphs in Linear Time}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{20:1--20:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.20},
  URN =		{urn:nbn:de:0030-drops-78396},
  doi =		{10.4230/LIPIcs.ESA.2017.20},
  annote =	{Keywords: real-world networks, random graph models, sampling algorithms, compression algorithms, hyperbolic random graphs}
}
Document
Cache Oblivious Algorithms for Computing the Triplet Distance Between Trees

Authors: Gerth Stølting Brodal and Konstantinos Mampentzidis


Abstract
We study the problem of computing the triplet distance between two rooted unordered trees with n labeled leaves. Introduced by Dobson in 1975, the triplet distance is the number of leaf triples that induce different topologies in the two trees. The current theoretically best algorithm is an O(n log n)-time algorithm by Brodal et al. [SODA 2013]. Recently, Jansson et al. proposed a new algorithm that, while requiring O(n log^3 n) time and thus slower in theory, outperforms the theoretically faster O(n log n) algorithm in practice. Neither algorithm scales to external memory. We present two cache oblivious algorithms that combine the best of both worlds. The first algorithm is for the case when the two input trees are binary trees, and the second is a generalized algorithm for two input trees of arbitrary degree. Analyzed in the RAM model, both algorithms require O(n log n) time, and in the cache oblivious model O(n/B log_{2}(n/M)) I/Os. Their relative simplicity and the fact that they scale to external memory make them achieve the best practical performance. We note that these are the first algorithms that scale to external memory, both in theory and practice, for this problem.
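
A cubic-time reference implementation of the triplet distance is straightforward and useful for testing. The Python sketch below is our own; it assumes rooted trees given by parent pointers with parent[root] == root and depth[root] == 0, and classifies each leaf triple by which pair has the deepest LCA.

from itertools import combinations

def lca(parent, depth, u, v):
    # Walk the deeper endpoint up until the two meet.
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u
        u = parent[u]
    return u

def topology(parent, depth, a, b, c):
    # Return which pair branches off together ('ab', 'ac', 'bc'),
    # or 'star' if the triple is unresolved (non-binary node).
    lcas = {'ab': lca(parent, depth, a, b),
            'ac': lca(parent, depth, a, c),
            'bc': lca(parent, depth, b, c)}
    deepest = max(lcas, key=lambda p: depth[lcas[p]])
    top = depth[lcas[deepest]]
    ties = sum(depth[x] == top for x in lcas.values())
    return deepest if ties == 1 else 'star'

def triplet_distance(t1, t2, leaves):
    # t1, t2 are (parent, depth) pairs; count disagreeing leaf triples.
    return sum(topology(*t1, a, b, c) != topology(*t2, a, b, c)
               for (a, b, c) in combinations(leaves, 3))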

Cite as

Gerth Stølting Brodal and Konstantinos Mampentzidis. Cache Oblivious Algorithms for Computing the Triplet Distance Between Trees. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{stltingbrodal_et_al:LIPIcs.ESA.2017.21,
  author =	{St{\o}lting Brodal, Gerth and Mampentzidis, Konstantinos},
  title =	{{Cache Oblivious Algorithms for Computing the Triplet Distance Between Trees}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.21},
  URN =		{urn:nbn:de:0030-drops-78820},
  doi =		{10.4230/LIPIcs.ESA.2017.21},
  annote =	{Keywords: Phylogenetic tree, tree comparison, triplet distance, cache oblivious algorithm}
}
Document
Online Algorithms for Maximum Cardinality Matching with Edge Arrivals

Authors: Niv Buchbinder, Danny Segev, and Yevgeny Tkach


Abstract
In the adversarial edge arrival model for maximum cardinality matching, edges of an unknown graph are revealed one-by-one in arbitrary order, and must be irrevocably accepted or rejected. Here, the goal of an online algorithm is to maximize the number of accepted edges while maintaining a feasible matching at any point in time. For this model, the standard greedy heuristic is 1/2-competitive; on the other hand, no algorithm that outperforms this ratio is currently known, even for very simple graphs. We present a clean Min-Index framework for devising a family of randomized algorithms, and provide a number of positive and negative results in this context. Among these results, we present a 5/9-competitive algorithm when the underlying graph is a forest, and prove that this ratio is best possible within the Min-Index framework. In addition, we prove a new general upper bound of 2/(3+1/phi^2) ~ 0.5914 on the competitiveness of any algorithm in the edge arrival model. Interestingly, this bound holds even for an easier model in which vertices (along with their adjacent edges) arrive online, and even when the underlying graph is a tree of maximum degree at most 3.
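
For reference, the 1/2-competitive greedy baseline that the paper tries to beat fits in a few lines; this Python sketch assumes the stream delivers edges as hashable vertex pairs.

def greedy_online_matching(edge_stream):
    # Accept an edge iff both endpoints are still unmatched; decisions are
    # irrevocable. The result is a maximal matching, hence at least half
    # the size of a maximum matching.
    matched, matching = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

The paper's Min-Index framework is a randomized refinement of such greedy rules, which is what allows it to go beyond 1/2 on forests.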

Cite as

Niv Buchbinder, Danny Segev, and Yevgeny Tkach. Online Algorithms for Maximum Cardinality Matching with Edge Arrivals. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{buchbinder_et_al:LIPIcs.ESA.2017.22,
  author =	{Buchbinder, Niv and Segev, Danny and Tkach, Yevgeny},
  title =	{{Online Algorithms for Maximum Cardinality Matching with Edge Arrivals}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.22},
  URN =		{urn:nbn:de:0030-drops-78206},
  doi =		{10.4230/LIPIcs.ESA.2017.22},
  annote =	{Keywords: Maximum matching, online algorithms, competitive analysis, primal-dual method}
}
Document
Computing Optimal Homotopies over a Spiked Plane with Polygonal Boundary

Authors: Benjamin Burton, Erin Chambers, Marc van Kreveld, Wouter Meulemans, Tim Ophelders, and Bettina Speckmann


Abstract
Computing optimal deformations between two curves is a fundamental question with various applications, and has recently received much attention in both computational topology and in mathematics in the form of homotopies of disks and annular regions. In this paper, we examine this problem in a geometric setting, where we consider the boundary of a polygonal domain with spikes, point obstacles that can be crossed at an additive cost. We aim to continuously morph from one part of the boundary to another, necessarily passing over all spikes, such that the most expensive intermediate curve is minimized, where the cost of a curve is its geometric length plus the cost of any spikes it crosses. We first investigate the general setting where each spike may have a different cost. For the number of inflection points in an intermediate curve, we present a lower bound that is linear in the number of spikes, even if the domain is convex and the two boundaries for which we seek a morph share an endpoint. We describe a 2-approximation algorithm for the general case, and an optimal algorithm for the case that the two boundaries for which we seek a morph share both endpoints, thereby representing the entire boundary of the domain. We then consider the setting where all spikes have the same unit cost and we describe a polynomial-time exact algorithm. The algorithm combines structural properties of homotopies arising from the geometry with methodology for computing Fréchet distances.

Cite as

Benjamin Burton, Erin Chambers, Marc van Kreveld, Wouter Meulemans, Tim Ophelders, and Bettina Speckmann. Computing Optimal Homotopies over a Spiked Plane with Polygonal Boundary. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{burton_et_al:LIPIcs.ESA.2017.23,
  author =	{Burton, Benjamin and Chambers, Erin and van Kreveld, Marc and Meulemans, Wouter and Ophelders, Tim and Speckmann, Bettina},
  title =	{{Computing Optimal Homotopies over a Spiked Plane with Polygonal Boundary}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.23},
  URN =		{urn:nbn:de:0030-drops-78630},
  doi =		{10.4230/LIPIcs.ESA.2017.23},
  annote =	{Keywords: Fr\'{e}chet distance, polygonal domain, homotopy, geodesic, obstacle}
}
Document
Online Submodular Maximization Problem with Vector Packing Constraint

Authors: T.-H. Hubert Chan, Shaofeng H.-C. Jiang, Zhihao Gavin Tang, and Xiaowei Wu


Abstract
We consider the online vector packing problem, in which we have a d-dimensional knapsack and items u with weight vectors w_u in R_+^d arrive online in arbitrary order. Upon the arrival of an item, the algorithm must decide immediately whether to discard or accept the item into the knapsack. When item u is accepted, w_u(i) units of capacity on dimension i are taken up, for each i in [d]. To satisfy the knapsack constraint, an accepted item can later be disposed of at no cost, but discarded or disposed items cannot be recovered. The objective is to maximize the utility of the accepted items S at the end of the algorithm, which is given by f(S) for some non-negative monotone submodular function f. For any small constant epsilon > 0, we consider the special case in which the weight of an item on every dimension is at most a (1 - epsilon) fraction of the total capacity, and give a polynomial-time deterministic O(k / epsilon^2)-competitive algorithm for the problem, where k is the (column) sparsity of the weight vectors. We also show several (almost) tight hardness results, even when the algorithm is computationally unbounded. We first show that under the epsilon-slack assumption, no deterministic algorithm can obtain any o(k) competitive ratio, and no randomized algorithm can obtain any o(k / log k) competitive ratio. We then show that in the general case (epsilon = 0), no randomized algorithm can obtain any o(k) competitive ratio. In contrast to the (1 + delta) competitive ratio achieved by Kesselheim et al. [STOC 2014] for the problem with random arrival order of items and under a large capacity assumption, we show that in the arbitrary arrival order case, even when |w_u|_infinity is arbitrarily small for all items u, it is impossible to achieve any o(log k / log log k) competitive ratio.

Cite as

T.-H. Hubert Chan, Shaofeng H.-C. Jiang, Zhihao Gavin Tang, and Xiaowei Wu. Online Submodular Maximization Problem with Vector Packing Constraint. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 24:1-24:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{chan_et_al:LIPIcs.ESA.2017.24,
  author =	{Chan, T.-H. Hubert and Jiang, Shaofeng H.-C. and Tang, Zhihao Gavin and Wu, Xiaowei},
  title =	{{Online Submodular Maximization Problem with Vector Packing Constraint}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{24:1--24:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.24},
  URN =		{urn:nbn:de:0030-drops-78190},
  doi =		{10.4230/LIPIcs.ESA.2017.24},
  annote =	{Keywords: Submodular Maximization, Free-disposal, Vector Packing}
}
Document
Faster Approximate Diameter and Distance Oracles in Planar Graphs

Authors: Timothy M. Chan and Dimitrios Skrepetos


Abstract
We present an algorithm that computes a (1+varepsilon)-approximation of the diameter of a weighted, undirected planar graph of n vertices with non-negative edge lengths in O(n log n (log n + (1/varepsilon)^5)) expected time, improving upon the O(n((1/varepsilon)^4 log^4 n + 2^{O(1/varepsilon)}))-time algorithm of Weimann and Yuster [ICALP 2013]. Our algorithm makes two improvements over that result: first and foremost, it replaces the exponential dependency on 1/varepsilon with a polynomial one, by adapting and specializing Cabello's recent abstract-Voronoi-diagram-based technique [SODA 2017] for approximation purposes; second, it shaves off two logarithmic factors by choosing a better sequence of error parameters during the recursion. Moreover, using similar techniques, we improve the (1+varepsilon)-approximate distance oracle of Gu and Xu [ISAAC 2015], first by replacing the exponential dependency on 1/varepsilon in the preprocessing time and space with a polynomial one, and second by removing a logarithmic factor from the preprocessing time.

Cite as

Timothy M. Chan and Dimitrios Skrepetos. Faster Approximate Diameter and Distance Oracles in Planar Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 25:1-25:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{chan_et_al:LIPIcs.ESA.2017.25,
  author =	{Chan, Timothy M. and Skrepetos, Dimitrios},
  title =	{{Faster Approximate Diameter and Distance Oracles in Planar Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{25:1--25:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.25},
  URN =		{urn:nbn:de:0030-drops-78382},
  doi =		{10.4230/LIPIcs.ESA.2017.25},
  annote =	{Keywords: planar graphs, diameter, abstract Voronoi diagrams}
}
Document
Stability and Recovery for Independence Systems

Authors: Vaggos Chatziafratis, Tim Roughgarden, and Jan Vondrak


Abstract
Two genres of heuristics that are frequently reported to perform much better on "real-world" instances than in the worst case are greedy algorithms and local search algorithms. In this paper, we systematically study these two types of algorithms for the problem of maximizing a monotone submodular set function subject to downward-closed feasibility constraints. We consider perturbation-stable instances, in the sense of Bilu and Linial [11], and precisely identify the stability threshold beyond which these algorithms are guaranteed to recover the optimal solution. Byproducts of our work include the first definition of perturbation-stability for non-additive objective functions, and a resolution of the worst-case approximation guarantee of local search in p-extendible systems.
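
As a concrete instance of the greedy heuristic studied here, the sketch below maximizes a coverage function, a canonical monotone submodular objective, under a cardinality constraint (the simplest downward-closed feasibility constraint). The stability results above concern when such heuristics recover the exact optimum rather than merely an approximation; the code is an illustrative baseline, not the paper's analysis.

def greedy_max_coverage(sets, k):
    # Repeatedly pick the set with the largest marginal coverage gain.
    sets = [frozenset(s) for s in sets]
    covered, chosen = set(), []
    for _ in range(k):
        gain, best = max(((len(s - covered), s) for s in sets), key=lambda t: t[0])
        if gain == 0:
            break
        chosen.append(best)
        covered |= best
    return chosen, covered

# Example: greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5, 6}], 2)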

Cite as

Vaggos Chatziafratis, Tim Roughgarden, and Jan Vondrak. Stability and Recovery for Independence Systems. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{chatziafratis_et_al:LIPIcs.ESA.2017.26,
  author =	{Chatziafratis, Vaggos and Roughgarden, Tim and Vondrak, Jan},
  title =	{{Stability and Recovery for Independence Systems}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{26:1--26:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.26},
  URN =		{urn:nbn:de:0030-drops-78423},
  doi =		{10.4230/LIPIcs.ESA.2017.26},
  annote =	{Keywords: Submodular, approximation, stability, Local Search, Greedy, p-systems}
}
Document
On the Complexity of Bounded Context Switching

Authors: Peter Chini, Jonathan Kolberg, Andreas Krebs, Roland Meyer, and Prakash Saivasan


Abstract
Bounded context switching (BCS) is an under-approximate method for finding violations to safety properties in shared-memory concurrent programs. Technically, BCS is a reachability problem that is known to be NP-complete. Our contribution is a parameterized analysis of BCS. The first result is an algorithm that solves BCS when parameterized by the number of context switches (cs) and the size of the memory (m) in O*(m^(cs)2^(cs)). This is achieved by creating instances of the easier problem Shuff which we solve via fast subset convolution. We also present a lower bound for BCS of the form m^o(cs / log(cs)), based on the exponential time hypothesis. Interestingly, the gap is closely related to a conjecture that has been open since FOCS'07. Further, we prove that BCS admits no polynomial kernel. Next, we introduce a measure, called scheduling dimension, that captures the complexity of schedules. We study BCS parameterized by the scheduling dimension (sdim) and show that it can be solved in O*((2m)^(4sdim)4^t), where t is the number of threads. We consider variants of the problem for which we obtain (matching) upper and lower bounds.

Cite as

Peter Chini, Jonathan Kolberg, Andreas Krebs, Roland Meyer, and Prakash Saivasan. On the Complexity of Bounded Context Switching. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 27:1-27:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{chini_et_al:LIPIcs.ESA.2017.27,
  author =	{Chini, Peter and Kolberg, Jonathan and Krebs, Andreas and Meyer, Roland and Saivasan, Prakash},
  title =	{{On the Complexity of Bounded Context Switching}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{27:1--27:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.27},
  URN =		{urn:nbn:de:0030-drops-78730},
  doi =		{10.4230/LIPIcs.ESA.2017.27},
  annote =	{Keywords: Shared memory concurrency, safety verification, fixed-parameter tractability, exponential time hypothesis, bounded context switching}
}
Document
Improved Approximate Rips Filtrations with Shifted Integer Lattices

Authors: Aruni Choudhary, Michael Kerber, and Sharath Raghvendra


Abstract
Rips complexes are important structures for analyzing topological features of metric spaces. Unfortunately, generating these complexes constitutes an expensive task because of a combinatorial explosion in the complex size. For n points in R^d, we present a scheme to construct a 4.24-approximation of the multi-scale filtration of the Rips complex in the L-infinity metric, which extends to an O(d^{0.25})-approximation of the Rips filtration for the Euclidean case. The k-skeleton of the resulting approximation has a total size of n2^{O(d log k)}. The scheme is based on the integer lattice and on the barycentric subdivision of the d-cube.
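
To see where the combinatorial explosion comes from, the naive construction of the k-skeleton at a single scale r is a short Python sketch (in the L-infinity metric, a simplex enters the Rips complex once its diameter is at most r); enumerating candidate simplices is exponential in k, which is the blow-up the approximation scheme is designed to control. In the multi-scale filtration, each simplex would instead be annotated with its diameter as its entry scale.

from itertools import combinations

def rips_skeleton(points, r, k=2):
    # Naive k-skeleton at scale r; exponential in k, for illustration only.
    def dist(p, q):
        return max(abs(a - b) for a, b in zip(p, q))  # L-infinity metric
    n = len(points)
    simplices = {0: [(i,) for i in range(n)]}
    for dim in range(1, k + 1):
        simplices[dim] = [
            s for s in combinations(range(n), dim + 1)
            if all(dist(points[a], points[b]) <= r for a, b in combinations(s, 2))
        ]
    return simplices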

Cite as

Aruni Choudhary, Michael Kerber, and Sharath Raghvendra. Improved Approximate Rips Filtrations with Shifted Integer Lattices. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 28:1-28:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{choudhary_et_al:LIPIcs.ESA.2017.28,
  author =	{Choudhary, Aruni and Kerber, Michael and Raghvendra, Sharath},
  title =	{{Improved Approximate Rips Filtrations with Shifted Integer Lattices}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{28:1--28:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.28},
  URN =		{urn:nbn:de:0030-drops-78259},
  doi =		{10.4230/LIPIcs.ESA.2017.28},
  annote =	{Keywords: Persistent homology, Rips filtrations, Approximation algorithms, Topological Data Analysis}
}
Document
The Sparse Awakens: Streaming Algorithms for Matching Size Estimation in Sparse Graphs

Authors: Graham Cormode, Hossein Jowhari, Morteza Monemizadeh, and S. Muthukrishnan


Abstract
Estimating the size of the maximum matching is a canonical problem in graph analysis, and one that has attracted extensive study over a range of different computational models. We present improved streaming algorithms for approximating the size of the maximum matching in sparse (bounded arboricity) graphs.
* (Insert-Only Streams) We present a one-pass algorithm that takes O(alpha log n) space and approximates the size of the maximum matching in graphs with arboricity alpha within a factor of O(alpha). This improves significantly upon the state-of-the-art tilde{O}(alpha n^{2/3})-space streaming algorithms, and is the first poly-logarithmic space algorithm for this problem.
* (Dynamic Streams) Given a dynamic graph stream (i.e., inserts and deletes) of edges of an underlying alpha-bounded arboricity graph, we present a one-pass algorithm that uses tilde{O}(alpha^{10/3} n^{2/3}) space and returns an O(alpha)-estimator for the size of the maximum matching, on the condition that the number of edge deletions in the stream is bounded by O(alpha n). For this class of inputs, our algorithm improves upon the state-of-the-art tilde{O}(alpha n^{4/5})-space algorithms, where the tilde{O}(.) notation hides factors logarithmic in n.
In contrast to prior work, our results take more advantage of the streaming access to the input and characterize the matching size based on the ordering of the edges in the stream, in addition to the degree distributions and structural properties of the sparse graphs.

Cite as

Graham Cormode, Hossein Jowhari, Morteza Monemizadeh, and S. Muthukrishnan. The Sparse Awakens: Streaming Algorithms for Matching Size Estimation in Sparse Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{cormode_et_al:LIPIcs.ESA.2017.29,
  author =	{Cormode, Graham and Jowhari, Hossein and Monemizadeh, Morteza and Muthukrishnan, S.},
  title =	{{The Sparse Awakens: Streaming Algorithms for Matching Size Estimation in Sparse Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.29},
  URN =		{urn:nbn:de:0030-drops-78499},
  doi =		{10.4230/LIPIcs.ESA.2017.29},
  annote =	{Keywords: streaming algorithms, matching size}
}
Document
Improving TSP Tours Using Dynamic Programming over Tree Decompositions

Authors: Marek Cygan, Lukasz Kowalik, and Arkadiusz Socala


Abstract
Given a traveling salesman problem (TSP) tour H in a graph G, a k-move is an operation that removes k edges from H and adds k edges of G so that a new tour H' is formed. The popular k-opt heuristic for TSP finds a local optimum by starting from an arbitrary tour H and then improving it by a sequence of k-moves. Until 2016, the only known algorithm to find an improving k-move for a given tour was the naive solution running in time O(n^k). At ICALP'16, de Berg, Buchin, Jansen, and Woeginger showed an O(n^{floor(2k/3)+1})-time algorithm. We present an algorithm that runs in O(n^{(1/4 + epsilon_k)k}) time, where lim_{k -> infinity} epsilon_k = 0. It improves over the state of the art for every k >= 5. For the most practically relevant case k=5 we provide a slightly refined algorithm running in O(n^{3.4}) time. We also show that for the case k=4, improving over the O(n^3)-time algorithm of de Berg et al. would be a major breakthrough: an O(n^{3 - epsilon})-time algorithm for any epsilon > 0 would imply an O(n^{3 - delta})-time algorithm for the All Pairs Shortest Paths problem, for some delta > 0.
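
For concreteness, the k=2 case of the improving-move search is the familiar 2-opt step; the naive Theta(n^2) Python sketch below is the baseline flavor that the paper accelerates for larger k (it assumes cities are numbered 0..n-1 and dist is a distance matrix).

def find_improving_2_move(tour, dist):
    # Remove edges (a, b) and (c, d), reconnect as (a, c) and (b, d); this
    # reverses the tour segment between b and c.
    n = len(tour)
    for i in range(n - 1):
        # Skip adjacent edge pairs; when i == 0 the wrap-around edge is adjacent.
        for j in range(i + 2, n - 1 if i == 0 else n):
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
    return None  # the tour is 2-optimal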

Cite as

Marek Cygan, Lukasz Kowalik, and Arkadiusz Socala. Improving TSP Tours Using Dynamic Programming over Tree Decompositions. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{cygan_et_al:LIPIcs.ESA.2017.30,
  author =	{Cygan, Marek and Kowalik, Lukasz and Socala, Arkadiusz},
  title =	{{Improving TSP Tours Using Dynamic Programming over Tree Decompositions}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{30:1--30:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.30},
  URN =		{urn:nbn:de:0030-drops-78539},
  doi =		{10.4230/LIPIcs.ESA.2017.30},
  annote =	{Keywords: TSP, treewidth, local search, XP algorithm, hardness in P}
}
Document
On Minimizing the Makespan When Some Jobs Cannot Be Assigned on the Same Machine

Authors: Syamantak Das and Andreas Wiese


Abstract
We study the classical scheduling problem of assigning jobs to machines in order to minimize the makespan. It is well-studied and admits an EPTAS on identical machines and a (2-1/m)-approximation algorithm on unrelated machines. In this paper we study a variation in which the input jobs are partitioned into bags and no two jobs from the same bag are allowed to be assigned to the same machine. Such a constraint can easily arise, e.g., due to system stability and redundancy considerations. Unfortunately, as we demonstrate in this paper, the techniques behind the above results break down in the presence of these additional constraints. Our first result is a PTAS for the case of identical machines. It enhances the methods from the known (E)PTASs by a finer classification of the input jobs and careful arguments as to why a good schedule exists after enumerating over the large jobs. For unrelated machines, we prove that there can be no (log n)^{1/4-epsilon}-approximation algorithm for the problem for any epsilon > 0, assuming that NP is not contained in ZPTIME(2^{(log n)^{O(1)}}). This holds even in the restricted assignment setting. However, we identify a special case of the latter in which we can do better: if the jobs of each bag share the same set of eligible machines, we give an 8-approximation algorithm. It is based on rounding the LP-relaxation of the problem in phases and adjusting the residual fractional solution after each phase in order to respect the bag constraints.
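
The bag constraint is easy to state in code. The Python sketch below is only an illustrative greedy heuristic (LPT-style list scheduling restricted to machines that do not yet hold a job from the same bag), not the paper's PTAS or LP-rounding algorithm; it assumes every bag contains at most m jobs, since otherwise no feasible schedule exists.

def bag_constrained_lpt(jobs, m):
    # jobs: list of (processing_time, bag_id) pairs; m identical machines.
    loads = [0.0] * m
    bags_on = [set() for _ in range(m)]
    schedule = [[] for _ in range(m)]
    for p, bag in sorted(jobs, reverse=True):      # longest job first
        eligible = [i for i in range(m) if bag not in bags_on[i]]
        i = min(eligible, key=lambda i: loads[i])  # least-loaded eligible machine
        loads[i] += p
        bags_on[i].add(bag)
        schedule[i].append((p, bag))
    return schedule, max(loads)                    # assignment and makespan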

Cite as

Syamantak Das and Andreas Wiese. On Minimizing the Makespan When Some Jobs Cannot Be Assigned on the Same Machine. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{das_et_al:LIPIcs.ESA.2017.31,
  author =	{Das, Syamantak and Wiese, Andreas},
  title =	{{On Minimizing the Makespan When Some Jobs Cannot Be Assigned on the Same Machine}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.31},
  URN =		{urn:nbn:de:0030-drops-78453},
  doi =		{10.4230/LIPIcs.ESA.2017.31},
  annote =	{Keywords: approximation algorithms, scheduling, makespan minimization}
}
Document
Optimal Stopping Rules for Sequential Hypothesis Testing

Authors: Constantinos Daskalakis and Yasushi Kawase


Abstract
Suppose that we are given sample access to an unknown distribution p over n elements and an explicit distribution q over the same n elements. We would like to reject the null hypothesis "p=q" after seeing as few samples as possible when p =/= q, while we never want to reject the null when p=q. Well-known results show that Theta(sqrt{n}/epsilon^2) samples are necessary and sufficient for distinguishing whether p equals q versus p being epsilon-far from q in total variation distance. However, this requires the distinguishing radius epsilon to be fixed prior to deciding how many samples to request. Our goal is instead to design sequential hypothesis testers, i.e., online algorithms that request i.i.d. samples from p and stop as soon as they can confidently reject the hypothesis p=q, without being given a lower bound on the distance between p and q when p =/= q. In particular, we want to minimize the number of samples requested by our tests as a function of the distance between p and q, and if p=q we want the algorithm, with high probability, to never reject the null. Our work is motivated by and addresses the practical challenge of sequential A/B testing in statistics. We show that, when n=2, any sequential hypothesis test must see Omega((1/d_{tv}(p,q)^2) log log (1/d_{tv}(p,q))) samples, with high (constant) probability, before it rejects p=q, where d_{tv}(p,q) is the total variation distance between p and q, which is unknown to the tester. We match the dependence of this lower bound on d_{tv}(p,q) by proposing a sequential tester that rejects p=q from at most O((sqrt{n}/d_{tv}(p,q)^2) log log (1/d_{tv}(p,q))) samples, with high (constant) probability. The Omega(sqrt{n}) dependence on the support size n is also known to be necessary. We similarly provide two-sample sequential hypothesis testers, when sample access is given to both p and q, and discuss applications to sequential A/B testing.
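
For the n=2 case (testing a coin against a known bias q1), a sequential tester in the spirit described above fits in a few lines: check the empirical mean only at doubling epochs, with an epoch-j radius calibrated by Hoeffding's inequality and a union bound so that a true null is rejected with probability below delta. The epoch schedule and constants below are illustrative assumptions, not the paper's exact test; the log log factor in the stopping time comes from the union bound over epochs.

import math, random

def sequential_coin_test(sample, q1, delta=0.05, max_samples=10**7):
    # sample() returns one bit drawn i.i.d. from the unknown distribution p.
    ones, t, j = 0, 0, 0
    while t < max_samples:
        ones += sample()
        t += 1
        if t == 2 ** (j + 1):          # check only at t = 2, 4, 8, ...
            j += 1
            # Hoeffding: P(|mean - q1| > r) <= 2 exp(-2 t r^2). Spending
            # delta / (2 j^2) at epoch j keeps the total false-rejection
            # probability below delta, since the sum of 1/(2 j^2) is below 1.
            radius = math.sqrt(math.log(4.0 * j * j / delta) / (2.0 * t))
            if abs(ones / t - q1) > radius:
                return t               # reject p == q after t samples
    return None                        # never rejected, consistent with p == q

# Example: sequential_coin_test(lambda: random.random() < 0.6, q1=0.5)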

Cite as

Constantinos Daskalakis and Yasushi Kawase. Optimal Stopping Rules for Sequential Hypothesis Testing. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{daskalakis_et_al:LIPIcs.ESA.2017.32,
  author =	{Daskalakis, Constantinos and Kawase, Yasushi},
  title =	{{Optimal Stopping Rules for Sequential Hypothesis Testing}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.32},
  URN =		{urn:nbn:de:0030-drops-78237},
  doi =		{10.4230/LIPIcs.ESA.2017.32},
  annote =	{Keywords: property testing, sequential hypothesis testing, A/B testing}
}
Document
The Online House Numbering Problem: Min-Max Online List Labeling

Authors: William E. Devanny, Jeremy T. Fineman, Michael T. Goodrich, and Tsvi Kopelowitz


Abstract
We introduce and study the online house numbering problem, where houses are added arbitrarily along a road and must be assigned labels to maintain their ordering along the road. The online house numbering problem is related to classic online list labeling problems, except that the optimization goal here is to minimize the maximum number of times that any house is relabeled. We provide several algorithms that achieve interesting tradeoffs between the maximum number of relabels per element and the number of bits used by labels.

Cite as

William E. Devanny, Jeremy T. Fineman, Michael T. Goodrich, and Tsvi Kopelowitz. The Online House Numbering Problem: Min-Max Online List Labeling. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{devanny_et_al:LIPIcs.ESA.2017.33,
  author =	{Devanny, William E. and Fineman, Jeremy T. and Goodrich, Michael T. and Kopelowitz, Tsvi},
  title =	{{The Online House Numbering Problem: Min-Max Online List Labeling}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.33},
  URN =		{urn:nbn:de:0030-drops-78831},
  doi =		{10.4230/LIPIcs.ESA.2017.33},
  annote =	{Keywords: house numbering, list labeling, file maintenance}
}
Document
Temporal Clustering

Authors: Tamal K. Dey, Alfred Rossi, and Anastasios Sidiropoulos


Abstract
We study the problem of clustering sequences of unlabeled point sets taken from a common metric space. Such scenarios arise naturally in applications where a system or process is observed in distinct time intervals, such as biological surveys and contagious disease surveillance. In this more general setting, existing algorithms for classical (i.e., static) clustering problems are no longer applicable. We propose a set of optimization problems which we collectively refer to as temporal clustering. The quality of a solution to a temporal clustering instance can be quantified using three parameters: the number of clusters k, the spatial clustering cost r, and the maximum cluster displacement delta between consecutive time steps. We consider spatial clustering costs which generalize the well-studied k-center, discrete k-median, and discrete k-means objectives of classical clustering problems. We develop new algorithms that achieve trade-offs between the three objectives k, r, and delta. Our upper bounds are complemented by inapproximability results.

Cite as

Tamal K. Dey, Alfred Rossi, and Anastasios Sidiropoulos. Temporal Clustering. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{dey_et_al:LIPIcs.ESA.2017.34,
  author =	{Dey, Tamal K. and Rossi, Alfred and Sidiropoulos, Anastasios},
  title =	{{Temporal Clustering}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.34},
  URN =		{urn:nbn:de:0030-drops-78567},
  doi =		{10.4230/LIPIcs.ESA.2017.34},
  annote =	{Keywords: clustering, multi-objective optimization, dynamic metric spaces, moving point sets, approximation algorithms, hardness of approximation}
}
Document
Pricing Social Goods

Authors: Alon Eden, Tomer Ezra, and Michal Feldman


Abstract
Social goods are goods that grant value not only to their owners but also to the owners' surroundings, be it their families, friends or office mates. The benefit a non-owner derives from the good is affected by many factors, including the type of the good, its availability, and the social status of the non-owner. Depending on the magnitude of the benefit and on the price of the good, a potential buyer might stay away from purchasing the good, hoping to free ride on others' purchases. A revenue-maximizing seller who sells social goods must take these considerations into account when setting prices for the good. The literature on optimal pricing has advanced considerably over the last decade, but little is known about optimal pricing schemes for selling social goods. In this paper, we conduct a systematic study of revenue-maximizing pricing schemes for social goods: we introduce a Bayesian model for this scenario, and devise nearly-optimal pricing schemes for various types of externalities, both for simultaneous sales and for sequential sales.

Cite as

Alon Eden, Tomer Ezra, and Michal Feldman. Pricing Social Goods. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{eden_et_al:LIPIcs.ESA.2017.35,
  author =	{Eden, Alon and Ezra, Tomer and Feldman, Michal},
  title =	{{Pricing Social Goods}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.35},
  URN =		{urn:nbn:de:0030-drops-78717},
  doi =		{10.4230/LIPIcs.ESA.2017.35},
  annote =	{Keywords: Public Goods, Posted Prices, Revenue Maximization, Externalities}
}
Document
Half-Integral Linkages in Highly Connected Directed Graphs

Authors: Katherine Edwards, Irene Muzi, and Paul Wollan


Abstract
We study the half-integral k-Directed Disjoint Paths Problem (1/2 kDDPP) in highly strongly connected digraphs. The integral kDDPP is NP-complete even when restricted to instances where k=2 and the input graph is L-strongly connected, for any L >= 1. We show that when the integrality condition is relaxed to allow each vertex to be used in two paths, the problem becomes efficiently solvable in highly connected digraphs (even with k as part of the input). Specifically, we show that there is an absolute constant c such that for each k >= 2 there exists L(k) such that 1/2 kDDPP is solvable in time O(|V(G)|^c) for an L(k)-strongly connected directed graph G. As the function L(k) grows rather quickly, we also show that 1/2 kDDPP is solvable in time O(|V(G)|^{f(k)}) in (36k^3+2k)-strongly connected directed graphs. We show that for each epsilon < 1, deciding half-integral feasibility of kDDPP instances is NP-complete when k is given as part of the input, even when restricted to graphs with strong connectivity epsilon k.

Cite as

Katherine Edwards, Irene Muzi, and Paul Wollan. Half-Integral Linkages in Highly Connected Directed Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 36:1-36:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{edwards_et_al:LIPIcs.ESA.2017.36,
  author =	{Edwards, Katherine and Muzi, Irene and Wollan, Paul},
  title =	{{Half-Integral Linkages in Highly Connected Directed Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{36:1--36:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.36},
  URN =		{urn:nbn:de:0030-drops-78769},
  doi =		{10.4230/LIPIcs.ESA.2017.36},
  annote =	{Keywords: linkage, directed graph, treewidth}
}
Document
Bounds on the Satisfiability Threshold for Power Law Distributed Random SAT

Authors: Tobias Friedrich, Anton Krohmer, Ralf Rothenberger, Thomas Sauerwald, and Andrew M. Sutton


Abstract
Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. The worst-case hardness of SAT lies at the core of computational complexity theory. The average-case analysis of SAT has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures. Despite a long line of research and substantial progress, nearly all theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a scale-free distribution of the variables, which results in distributions closer to industrial SAT instances. We study random k-SAT on n variables, m = Theta(n) clauses, and a power law distribution on the variable occurrences with exponent beta. We observe a satisfiability threshold at beta = (2k-1)/(k-1). This threshold is tight in the sense that instances with beta <= (2k-1)/(k-1)-epsilon for any constant epsilon > 0 are unsatisfiable with high probability (w.h.p.). For beta >= (2k-1)/(k-1)+epsilon, the picture is reminiscent of the uniform case: instances are satisfiable w.h.p. for sufficiently small constant clause-variable ratios m/n; they are unsatisfiable above a ratio m/n that depends on beta.
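
The instance model is straightforward to generate, which makes the threshold at beta = (2k-1)/(k-1) convenient to explore empirically; for k = 3 it sits at beta = 5/2. The Python sketch below assumes one common convention, variable i drawn with probability proportional to i^(-1/(beta-1)), while the exact normalization varies across papers.

import random

def power_law_ksat(n, m, k=3, beta=2.6, seed=None):
    # m clauses over n variables; variable occurrences follow a power law
    # with exponent beta, and each literal's sign is a fair coin.
    rng = random.Random(seed)
    weights = [i ** (-1.0 / (beta - 1.0)) for i in range(1, n + 1)]
    clauses = []
    for _ in range(m):
        chosen = set()
        while len(chosen) < k:         # k distinct variables per clause
            chosen.add(rng.choices(range(1, n + 1), weights=weights)[0])
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses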

Cite as

Tobias Friedrich, Anton Krohmer, Ralf Rothenberger, Thomas Sauerwald, and Andrew M. Sutton. Bounds on the Satisfiability Threshold for Power Law Distributed Random SAT. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{friedrich_et_al:LIPIcs.ESA.2017.37,
  author =	{Friedrich, Tobias and Krohmer, Anton and Rothenberger, Ralf and Sauerwald, Thomas and Sutton, Andrew M.},
  title =	{{Bounds on the Satisfiability Threshold for Power Law Distributed Random SAT}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.37},
  URN =		{urn:nbn:de:0030-drops-78356},
  doi =		{10.4230/LIPIcs.ESA.2017.37},
  annote =	{Keywords: satisfiability, random structures, random SAT, power law distribution, scale-freeness, phase transitions}
}
Document
An Encoding for Order-Preserving Matching

Authors: Travis Gagie, Giovanni Manzini, and Rossano Venturini


Abstract
Encoding data structures store enough information to answer the queries they are meant to support but not enough to recover their underlying datasets. In this paper we give the first encoding data structure for the challenging problem of order-preserving pattern matching. This problem was introduced only a few years ago but has already attracted significant attention because of its applications in data analysis. Two strings are said to be an order-preserving match if the relative order of their characters is the same: e.g., (4, 1, 3, 2) and (10, 3, 7, 5) are an order-preserving match. We show how, given a string S[1..n] over an arbitrary alphabet of size sigma and a constant c >=1, we can build an O(n log log n)-bit encoding such that later, given a pattern P[1..m] with m >= log^c n, we can return the number of order-preserving occurrences of P in S in O(m) time. Within the same time bound we can also return the starting position of some order-preserving match for P in S (if such a match exists). We prove that our space bound is within a constant factor of optimal if log(sigma) = Omega(log log n); our query time is optimal if log(sigma) = Omega(log n). Our space bound contrasts with the Omega(n log n) bits needed in the worst case to store S itself, an index for order-preserving pattern matching with no restrictions on the pattern length, or an index for standard pattern matching even with restrictions on the pattern length. Moreover, we can build our encoding knowing only how each character compares to O(log^c n) neighbouring characters.
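
The order-preserving matching condition itself is simple to check: two strings of distinct values match iff their rank sequences coincide. A small Python sketch (quadratic ranking is fine for illustration; the encoding above is what makes such queries fast over all windows of S):

def order_pattern(s):
    order = sorted(s)
    return [order.index(x) for x in s]   # rank of each value within s

def op_match(s, t):
    return len(s) == len(t) and order_pattern(s) == order_pattern(t)

assert op_match((4, 1, 3, 2), (10, 3, 7, 5))   # the example from the abstract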

Cite as

Travis Gagie, Giovanni Manzini, and Rossano Venturini. An Encoding for Order-Preserving Matching. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 38:1-38:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{gagie_et_al:LIPIcs.ESA.2017.38,
  author =	{Gagie, Travis and Manzini, Giovanni and Venturini, Rossano},
  title =	{{An Encoding for Order-Preserving Matching}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{38:1--38:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.38},
  URN =		{urn:nbn:de:0030-drops-78726},
  doi =		{10.4230/LIPIcs.ESA.2017.38},
  annote =	{Keywords: Compact data structures, encodings, order-preserving matching}
}
Document
Distance-Preserving Subgraphs of Interval Graphs

Authors: Kshitij Gajjar and Jaikumar Radhakrishnan


Abstract
We consider the problem of finding small distance-preserving subgraphs of undirected, unweighted interval graphs that have k terminal vertices. We show that every interval graph admits a distance-preserving subgraph with O(k log k) branching vertices. We also prove a matching lower bound by exhibiting an interval graph based on bit-reversal permutation matrices. In addition, we show that interval graphs admit subgraphs with O(k) branching vertices that approximate distances up to an additive term of +1.

Cite as

Kshitij Gajjar and Jaikumar Radhakrishnan. Distance-Preserving Subgraphs of Interval Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 39:1-39:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{gajjar_et_al:LIPIcs.ESA.2017.39,
  author =	{Gajjar, Kshitij and Radhakrishnan, Jaikumar},
  title =	{{Distance-Preserving Subgraphs of Interval Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{39:1--39:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.39},
  URN =		{urn:nbn:de:0030-drops-78798},
  doi =		{10.4230/LIPIcs.ESA.2017.39},
  annote =	{Keywords: interval graphs, shortest path, distance-preserving subgraphs, bit-reversal permutation matrix}
}
Document
Dispersion on Trees

Authors: Pawel Gawrychowski, Nadav Krasnopolsky, Shay Mozes, and Oren Weimann


Abstract
In the k-dispersion problem, we need to select k nodes of a given graph so as to maximize the minimum distance between any two chosen nodes. This can be seen as a generalization of the independent set problem, where the goal is to select nodes so that the minimum distance is larger than 1. We design an optimal O(n) time algorithm for the dispersion problem on trees consisting of n nodes, thus improving the previous O(n log n) time solution from 1997. We also consider the weighted case, where the goal is to choose a set of nodes of total weight at least W. We present an O(n log^2 n) algorithm, improving the previous O(n log^4 n) solution. Our solution builds on the search version (where we know the minimum distance lambda between the chosen nodes), for which we present tight Theta(n log n) upper and lower bounds.
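
The search version is easiest to see on a path, where a greedy sweep settles the decision question "can k nodes be chosen with pairwise distance >= lambda?"; the paper's Theta(n log n) bounds concern the substantially harder tree case. A Python sketch of the path case:

def can_disperse_on_path(gaps, k, lam):
    # gaps[i] is the length of the edge between consecutive nodes i and i+1.
    # Greedy: take the first node, then every node at distance >= lam from
    # the last chosen one; on a path this greedy is optimal.
    chosen, since_last = 1, 0.0
    for g in gaps:
        since_last += g
        if since_last >= lam:
            chosen += 1
            since_last = 0.0
    return chosen >= k

# Searching over candidate values of lambda (e.g., by parametric search, as
# in the paper) turns this decision procedure into an optimizer.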

Cite as

Pawel Gawrychowski, Nadav Krasnopolsky, Shay Mozes, and Oren Weimann. Dispersion on Trees. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 40:1-40:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2017.40,
  author =	{Gawrychowski, Pawel and Krasnopolsky, Nadav and Mozes, Shay and Weimann, Oren},
  title =	{{Dispersion on Trees}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{40:1--40:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.40},
  URN =		{urn:nbn:de:0030-drops-78438},
  doi =		{10.4230/LIPIcs.ESA.2017.40},
  annote =	{Keywords: parametric search, dispersion, k-center, dynamic programming}
}
Document
Real-Time Streaming Multi-Pattern Search for Constant Alphabet

Authors: Shay Golan and Ely Porat


Abstract
In the streaming multi-pattern search problem, which is also known as the streaming dictionary matching problem, a set D = {P_1, P_2, ..., P_d} of d patterns (strings over an alphabet Sigma), called the dictionary, is given to be preprocessed. Then, a text T arrives one character at a time and the goal is to report, before the next character arrives, the longest pattern in the dictionary that is a current suffix of T. We prove that for a constant size alphabet, there exists a randomized Monte-Carlo algorithm for the streaming dictionary matching problem that takes constant time per character and uses O(d log m) words of space, where m is the length of the longest pattern in the dictionary. In the case where the alphabet size is not constant, we introduce two new randomized Monte-Carlo algorithms with the following complexities:
* O(log log |Sigma|) time per character in the worst case and O(d log m) words of space.
* O(1/epsilon) time per character in the worst case and O(d |Sigma|^epsilon log m / epsilon) words of space, for any 0 < epsilon <= 1.
These results improve upon the algorithm of [Clifford et al., ESA'15], which uses O(d log m) words of space and takes O(log log (m+d)) time per character.

Cite as

Shay Golan and Ely Porat. Real-Time Streaming Multi-Pattern Search for Constant Alphabet. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{golan_et_al:LIPIcs.ESA.2017.41,
  author =	{Golan, Shay and Porat, Ely},
  title =	{{Real-Time Streaming Multi-Pattern Search for Constant Alphabet}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.41},
  URN =		{urn:nbn:de:0030-drops-78550},
  doi =		{10.4230/LIPIcs.ESA.2017.41},
  annote =	{Keywords: multi-pattern, dictionary, streaming pattern matching, fingerprints}
}
Document
Improved Bounds for 3SUM, k-SUM, and Linear Degeneracy

Authors: Omer Gold and Micha Sharir


Abstract
Given a set of n real numbers, the 3SUM problem is to decide whether there are three of them that sum to zero. Until a recent breakthrough by Gronlund and Pettie [FOCS'14], a simple Theta(n^2)-time deterministic algorithm for this problem was conjectured to be optimal. Over the years many algorithmic problems have been shown to be reducible from the 3SUM problem or its variants, including more generalized forms of the problem, such as k-SUM and k-variate linear degeneracy testing (k-LDT). The conjectured hardness of these problems has become extremely popular for basing conditional lower bounds for numerous algorithmic problems in P. In this paper, we show that the randomized 4-linear decision tree complexity of 3SUM is O(n^{3/2}), and that the randomized (2k-2)-linear decision tree complexity of k-SUM and k-LDT is O(n^{k/2}), for any odd k >= 3. These bounds improve (albeit in a randomized model) the corresponding O(n^{3/2} sqrt{log n}) and O(n^{k/2} sqrt{log n}) decision tree bounds obtained by Gronlund and Pettie. Our technique includes a specialized randomized variant of the fractional cascading data structure. Additionally, we give another deterministic algorithm for 3SUM that runs in O(n^2 log log n / log n) time. The latter bound matches a recent independent bound by Freund [Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use of the word-RAM model.
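
For reference, the classic quadratic algorithm whose conjectured optimality drove this line of work is a dozen lines of Python:

def three_sum(nums):
    # Sort once; for each fixed a[i], scan the suffix with two pointers.
    # Theta(n^2) time overall.
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return a[i], a[lo], a[hi]   # witness triple
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return None   # no three elements sum to zero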

Cite as

Omer Gold and Micha Sharir. Improved Bounds for 3SUM, k-SUM, and Linear Degeneracy. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{gold_et_al:LIPIcs.ESA.2017.42,
  author =	{Gold, Omer and Sharir, Micha},
  title =	{{Improved Bounds for 3SUM, k-SUM, and Linear Degeneracy}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.42},
  URN =		{urn:nbn:de:0030-drops-78364},
  doi =		{10.4230/LIPIcs.ESA.2017.42},
  annote =	{Keywords: 3SUM, k-SUM, Linear Degeneracy, Linear Decision Trees, Fractional Cascading}
}
Document
Profit Sharing and Efficiency in Utility Games

Authors: Sreenivas Gollapudi, Kostas Kollias, Debmalya Panigrahi, and Venetia Pliatsika


Abstract
We study utility games (Vetta, FOCS 2002) where a set of players join teams to produce social utility, and receive individual utility in the form of payments in return. These games have many natural applications in competitive settings such as labor markets, crowdsourcing, etc. The efficiency of such a game depends on the profit sharing mechanism - the rule that maps utility produced by the players to their individual payments. We study three natural and widely used profit sharing mechanisms - egalitarian or equal sharing, marginal gain or value addition when a player joins, and marginal loss or value depletion when a player leaves. For these settings, we give tight bounds on the price of anarchy, thereby allowing comparison between these popular mechanisms from a (worst case) social welfare perspective.

Cite as

Sreenivas Gollapudi, Kostas Kollias, Debmalya Panigrahi, and Venetia Pliatsika. Profit Sharing and Efficiency in Utility Games. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{gollapudi_et_al:LIPIcs.ESA.2017.43,
  author =	{Gollapudi, Sreenivas and Kollias, Kostas and Panigrahi, Debmalya and Pliatsika, Venetia},
  title =	{{Profit Sharing and Efficiency in Utility Games}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.43},
  URN =		{urn:nbn:de:0030-drops-78329},
  doi =		{10.4230/LIPIcs.ESA.2017.43},
  annote =	{Keywords: Price of anarchy, submodular maximization, coverage functions}
}
Document
Improved Guarantees for Vertex Sparsification in Planar Graphs

Authors: Gramoz Goranci, Monika Henzinger, and Pan Peng


Abstract
Graph sparsification aims at compressing large graphs into smaller ones while (approximately) preserving important characteristics of the input graph. In this work we study vertex sparsifiers, i.e., sparsifiers whose goal is to reduce the number of vertices. Given a weighted graph G=(V,E) and a terminal set K with |K|=k, a quality-q vertex cut sparsifier of G is a graph H with K contained in V_H that preserves the value of minimum cuts separating any bipartition of K, up to a factor of q. We show that planar graphs with all the k terminals lying on the same face admit quality-1 vertex cut sparsifiers of size O(k^2) that are also planar. Our result extends to vertex flow and distance sparsifiers. It improves the previous best known bound of O(k^2 2^(2k)) for cut and flow sparsifiers by an exponential factor, and matches an Omega(k^2) lower bound for this class of graphs. We also study vertex reachability sparsifiers for directed graphs. Given a digraph G=(V,E) and a terminal set K, a vertex reachability sparsifier of G is a digraph H=(V_H,E_H) with K contained in V_H that preserves all reachability information among terminal pairs. We introduce the notion of reachability-preserving minors, i.e., we require H to be a minor of G. Among others, for general planar digraphs, we construct reachability-preserving minors of size O(k^2 log^2 k). We complement our upper bound by showing that there exists an infinite family of acyclic planar digraphs such that any reachability-preserving minor must have Omega(k^2) vertices.

Cite as

Gramoz Goranci, Monika Henzinger, and Pan Peng. Improved Guarantees for Vertex Sparsification in Planar Graphs. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{goranci_et_al:LIPIcs.ESA.2017.44,
  author =	{Goranci, Gramoz and Henzinger, Monika and Peng, Pan},
  title =	{{Improved Guarantees for Vertex Sparsification in Planar Graphs}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.44},
  URN =		{urn:nbn:de:0030-drops-78337},
  doi =		{10.4230/LIPIcs.ESA.2017.44},
  annote =	{Keywords: Vertex Sparsification, Graph Sparsification, Planar Graphs, Metric Embedding, Reachability}
}
Document
The Power of Vertex Sparsifiers in Dynamic Graph Algorithms

Authors: Gramoz Goranci, Monika Henzinger, and Pan Peng


Abstract
We introduce a new algorithmic framework for designing dynamic graph algorithms in minor-free graphs, by exploiting the structure of such graphs and a tool called vertex sparsification, which is a way to compress large graphs into small ones that preserve relevant properties among a subset of vertices well, and which has previously been used mainly in the design of approximation algorithms. Using this framework, we obtain a Monte Carlo randomized fully dynamic algorithm for (1 + epsilon)-approximating the energy of electrical flows in n-vertex planar graphs with tilde{O}(r epsilon^{-2}) worst-case update time and tilde{O}((r + n / sqrt{r}) epsilon^{-2}) worst-case query time, for any r larger than some constant. For r=n^{2/3}, this gives tilde{O}(n^{2/3} epsilon^{-2}) update time and tilde{O}(n^{2/3} epsilon^{-2}) query time. We also extend this algorithm to work for minor-free graphs with similar approximation and running time guarantees. Furthermore, we illustrate our framework on the all-pairs max flow and shortest path problems by giving corresponding dynamic algorithms in minor-free graphs with both sublinear update and query times. To the best of our knowledge, our results are the first to systematically establish such a connection between dynamic graph algorithms and vertex sparsification. We also present both upper and lower bounds for maintaining the energy of electrical flows in the incremental subgraph model, where updates consist only of vertex activations, which might be of independent interest.
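The choice r = n^{2/3} in the abstract is exactly the value that balances the update bound against the dominant term of the query bound; a one-line check:

\[
r = \frac{n}{\sqrt{r}} \;\iff\; r^{3/2} = n \;\iff\; r = n^{2/3},
\]

at which point both the update time tilde{O}(r epsilon^{-2}) and the query time tilde{O}((r + n/sqrt{r}) epsilon^{-2}) become tilde{O}(n^{2/3} epsilon^{-2}).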

Cite as

Gramoz Goranci, Monika Henzinger, and Pan Peng. The Power of Vertex Sparsifiers in Dynamic Graph Algorithms. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{goranci_et_al:LIPIcs.ESA.2017.45,
  author =	{Goranci, Gramoz and Henzinger, Monika and Peng, Pan},
  title =	{{The Power of Vertex Sparsifiers in Dynamic Graph Algorithms}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.45},
  URN =		{urn:nbn:de:0030-drops-78460},
  doi =		{10.4230/LIPIcs.ESA.2017.45},
  annote =	{Keywords: Dynamic graph algorithms, electrical flow, minor-free graphs, max flow}
}
Document
Single-Sink Fractionally Subadditive Network Design

Authors: Guru Guruganesh, Jennifer Iglesias, R. Ravi, and Laura Sanita


Abstract
We study a generalization of the Steiner tree problem, where we are given a weighted network G together with a collection of k subsets of its vertices and a root r. We wish to construct a minimum cost network such that the network supports one unit of flow to the root from every node in a subset simultaneously. The network constructed does not need to support flows from all the subsets simultaneously. We settle an open question regarding the complexity of this problem for k=2, and give a 3/2-approximation algorithm that improves over a (trivial) known 2-approximation. Furthermore, we prove some structural results that prevent many well-known techniques from doing better than the known O(log n)-approximation. Despite these obstacles, we conjecture that this problem should have an O(1)-approximation. We also give an approximation result for a variant of the problem where the solution is required to be a path.

Cite as

Guru Guruganesh, Jennifer Iglesias, R. Ravi, and Laura Sanita. Single-Sink Fractionally Subadditive Network Design. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{guruganesh_et_al:LIPIcs.ESA.2017.46,
  author =	{Guruganesh, Guru and Iglesias, Jennifer and Ravi, R. and Sanita, Laura},
  title =	{{Single-Sink Fractionally Subadditive Network Design}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.46},
  URN =		{urn:nbn:de:0030-drops-78581},
  doi =		{10.4230/LIPIcs.ESA.2017.46},
  annote =	{Keywords: Network design, single-commodity flow, approximation algorithms, Steiner tree}
}
Document
Path-Contractions, Edge Deletions and Connectivity Preservation

Authors: Gregory Gutin, M. S. Ramanujan, Felix Reidl, and Magnus Wahlström


Abstract
We study several problems related to graph modification under connectivity constraints from the perspective of parameterized complexity: (Weighted) Biconnectivity Deletion, where we are tasked with deleting k edges while preserving biconnectivity in an undirected graph; Vertex-deletion Preserving Strong Connectivity, where we want to maintain strong connectivity of a digraph while deleting exactly k vertices; and Path-contraction Preserving Strong Connectivity, in which the operation of path contraction on arcs is used instead. The parameterized tractability of this last problem was posed as an open question in [Bang-Jensen and Yeo, Discrete Applied Math 2008], and we answer it here in the negative: both variants of preserving strong connectivity are W[1]-hard. Preserving biconnectivity, on the other hand, turns out to be fixed-parameter tractable (FPT), and we provide an FPT algorithm that solves Weighted Biconnectivity Deletion. Further, we show that the unweighted case even admits a randomized polynomial kernel. All our results provide further interesting data points for the systematic study of connectivity-preservation constraints in the parameterized setting.

Cite as

Gregory Gutin, M. S. Ramanujan, Felix Reidl, and Magnus Wahlström. Path-Contractions, Edge Deletions and Connectivity Preservation. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 47:1-47:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gutin_et_al:LIPIcs.ESA.2017.47,
  author =	{Gutin, Gregory and Ramanujan, M. S. and Reidl, Felix and Wahlstr\"{o}m, Magnus},
  title =	{{Path-Contractions, Edge Deletions and Connectivity Preservation}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{47:1--47:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.47},
  URN =		{urn:nbn:de:0030-drops-78270},
  doi =		{10.4230/LIPIcs.ESA.2017.47},
  annote =	{Keywords: connectivity, strong connectivity, vertex deletion, arc contraction}
}
Document
Dynamic Clustering to Minimize the Sum of Radii

Authors: Monika Henzinger, Dariusz Leniowski, and Claire Mathieu


Abstract
In this paper, we study the problem of opening centers to cluster a set of clients in a metric space so as to minimize the sum of the costs of the centers and of the cluster radii, in a dynamic environment where clients arrive and depart and the solution must be updated efficiently while remaining competitive with respect to the current optimal solution. We call this the dynamic sum-of-radii clustering problem. We present a data structure that maintains a solution whose cost is within a constant factor of the cost of an optimal solution in metric spaces with bounded doubling dimension, and whose worst-case update time is logarithmic in the parameters of the problem.
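To make the objective concrete, the sketch below (ours, purely illustrative, not the paper's data structure) evaluates the cost of a candidate solution in the Euclidean plane: each opened center contributes its opening cost plus the radius of its cluster.

import math

def solution_cost(clusters, opening_cost):
    # clusters: dict mapping each opened center (a point) to the list of
    # clients it serves; a cluster's cost is the center's opening cost
    # plus its radius, the distance to the farthest assigned client.
    total = 0.0
    for center, assigned in clusters.items():
        radius = max((math.dist(center, c) for c in assigned), default=0.0)
        total += opening_cost[center] + radius
    return total

clusters = {(0.0, 0.0): [(1.0, 0.0), (0.0, 2.0)], (5.0, 5.0): [(5.0, 6.0)]}
opening = {(0.0, 0.0): 1.0, (5.0, 5.0): 1.0}
print(solution_cost(clusters, opening))  # (1 + 2) + (1 + 1) = 5.0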

Cite as

Monika Henzinger, Dariusz Leniowski, and Claire Mathieu. Dynamic Clustering to Minimize the Sum of Radii. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 48:1-48:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{henzinger_et_al:LIPIcs.ESA.2017.48,
  author =	{Henzinger, Monika and Leniowski, Dariusz and Mathieu, Claire},
  title =	{{Dynamic Clustering to Minimize the Sum of Radii}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{48:1--48:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.48},
  URN =		{urn:nbn:de:0030-drops-78749},
  doi =		{10.4230/LIPIcs.ESA.2017.48},
  annote =	{Keywords: dynamic algorithm, clustering, approximation, doubling dimension}
}
Document
Shortest Paths in the Plane with Obstacle Violations

Authors: John Hershberger, Neeraj Kumar, and Subhash Suri


Abstract
We study the problem of finding shortest paths in the plane among h convex obstacles, where the path is allowed to pass through (violate) up to k obstacles, for k <= h. Equivalently, the problem is to find shortest paths that become obstacle-free if k obstacles are removed from the input. Given a fixed source point s, we show how to construct a map, called a shortest k-path map, so that all destinations in the same region of the map have the same combinatorial shortest path passing through at most k obstacles. We prove a tight bound of Theta(kn) on the size of this map, and show that it can be computed in O(k^2 n log n) time, where n is the total number of obstacle vertices.

Cite as

John Hershberger, Neeraj Kumar, and Subhash Suri. Shortest Paths in the Plane with Obstacle Violations. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hershberger_et_al:LIPIcs.ESA.2017.49,
  author =	{Hershberger, John and Kumar, Neeraj and Suri, Subhash},
  title =	{{Shortest Paths in the Plane with Obstacle Violations}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.49},
  URN =		{urn:nbn:de:0030-drops-78413},
  doi =		{10.4230/LIPIcs.ESA.2017.49},
  annote =	{Keywords: Shortest paths, Polygonal obstacles, Continuous Dijkstra, Obstacle crossing, Visibility}
}
Document
Contracting a Planar Graph Efficiently

Authors: Jacob Holm, Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, Eva Rotenberg, and Piotr Sankowski


Abstract
We present a data structure that can maintain a simple planar graph under edge contractions in linear total time. The data structure supports adjacency queries and provides access to neighbor lists in O(1) time. Moreover, it can report all the arising self-loops and parallel edges. By applying the data structure, we can achieve optimal running times for decremental bridge detection, 2-edge connectivity, maximal 3-edge connected components, and the problem of finding a unique perfect matching for a static planar graph. Furthermore, we improve the running times of algorithms for several planar graph problems, including decremental 2-vertex and 3-edge connectivity, and we show that using our data structure in a black-box manner, one obtains conceptually simple optimal algorithms for computing MST and 5-coloring in planar graphs.

Cite as

Jacob Holm, Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, Eva Rotenberg, and Piotr Sankowski. Contracting a Planar Graph Efficiently. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 50:1-50:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{holm_et_al:LIPIcs.ESA.2017.50,
  author =	{Holm, Jacob and Italiano, Giuseppe F. and Karczmarz, Adam and Lacki, Jakub and Rotenberg, Eva and Sankowski, Piotr},
  title =	{{Contracting a Planar Graph Efficiently}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{50:1--50:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.50},
  URN =		{urn:nbn:de:0030-drops-78755},
  doi =		{10.4230/LIPIcs.ESA.2017.50},
  annote =	{Keywords: planar graphs, algorithms, data structures, connectivity, coloring}
}
Document
Minimizing Maximum Flow Time on Related Machines via Dynamic Posted Pricing

Authors: Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Clifford Stein


Abstract
We consider a setting where selfish agents want to schedule jobs on related machines. The agent submitting a job picks a server that minimizes a linear combination of the server price and the resulting response time for that job on the selected server. The manager's task is to maintain server prices to (approximately) optimize the maximum response time, which is a measure of social good. We show that the existence of a pricing scheme with certain competitiveness is equivalent to the existence of a monotone immediate-dispatch algorithm. Our main result is a monotone immediate-dispatch algorithm that is O(1)-competitive with respect to the maximum response time.
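For concreteness, the selfish choice rule of an agent in such a model can be sketched as follows; this is our illustration with an equal-weight linear combination, whereas the paper allows a general linear combination of price and response time.

def choose_server(prices, response_time_if_assigned):
    # A selfish agent picks the server minimizing price plus the response
    # time its job would experience on that server.
    return min(prices, key=lambda s: prices[s] + response_time_if_assigned[s])

# Example: the fast server is expensive, the slow server is cheap.
prices = {'fast': 5.0, 'slow': 1.0}
delays = {'fast': 2.0, 'slow': 7.0}
assert choose_server(prices, delays) == 'fast'  # 5 + 2 = 7 beats 1 + 7 = 8

The manager's dynamic prices thus steer selfish choices toward schedules with small maximum response time.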

Cite as

Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Clifford Stein. Minimizing Maximum Flow Time on Related Machines via Dynamic Posted Pricing. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 51:1-51:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{im_et_al:LIPIcs.ESA.2017.51,
  author =	{Im, Sungjin and Moseley, Benjamin and Pruhs, Kirk and Stein, Clifford},
  title =	{{Minimizing Maximum Flow Time on Related Machines via  Dynamic Posted Pricing}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{51:1--51:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.51},
  URN =		{urn:nbn:de:0030-drops-78287},
  doi =		{10.4230/LIPIcs.ESA.2017.51},
  annote =	{Keywords: Posted pricing scheme, online scheduling, related machines, maximum flow time, competitiveness analysis}
}
Document
Finding Axis-Parallel Rectangles of Fixed Perimeter or Area Containing the Largest Number of Points

Authors: Haim Kaplan, Sasanka Roy, and Micha Sharir


Abstract
Let P be a set of n points in the plane in general position, and consider the problem of finding an axis-parallel rectangle with a given perimeter, or area, or diagonal, that encloses the maximum number of points of P. We present an exact algorithm that finds such a rectangle in O(n^{5/2} log n) time, and, for the case of a fixed perimeter or diagonal, we also obtain (i) an improved exact algorithm that runs in O(nk^{3/2} log k) time, and (ii) an approximation algorithm that finds, in O(n+(n/(k epsilon^5))*log^{5/2}(n/k)log((1/epsilon) log(n/k))) time, a rectangle of the given perimeter or diagonal that contains at least (1-epsilon)k points of P, where k is the optimum value. We then show how to turn this algorithm into one that finds, for a given k, an axis-parallel rectangle of smallest perimeter (or area, or diagonal) that contains k points of P. We obtain the first subcubic algorithms for these problems, significantly improving the current state of the art.

Cite as

Haim Kaplan, Sasanka Roy, and Micha Sharir. Finding Axis-Parallel Rectangles of Fixed Perimeter or Area Containing the Largest Number of Points. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 52:1-52:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kaplan_et_al:LIPIcs.ESA.2017.52,
  author =	{Kaplan, Haim and Roy, Sasanka and Sharir, Micha},
  title =	{{Finding Axis-Parallel Rectangles of Fixed Perimeter or Area Containing the Largest Number of Points}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{52:1--52:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.52},
  URN =		{urn:nbn:de:0030-drops-78608},
  doi =		{10.4230/LIPIcs.ESA.2017.52},
  annote =	{Keywords: Computational geometry, geometric optimization, rectangles, perimeter, area}
}
Document
LZ-End Parsing in Linear Time

Authors: Dominik Kempa and Dmitry Kosolobov


Abstract
We present a deterministic algorithm that constructs, in linear time and space, the LZ-End parsing (a variation of LZ77) of a given string over an integer alphabet of polynomially bounded size.
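To make the object being computed concrete: in the Kreft-Navarro definition of LZ-End, each phrase is a longest-possible copy of an earlier substring that ends exactly at a previous phrase boundary, followed by one explicit character. The deliberately naive (roughly cubic-time) reference parser below is our illustration of the definition only, in no way the paper's linear-time algorithm.

def lz_end_parse(s):
    n = len(s)
    phrases, phrase_ends = [], []
    i = 0
    while i < n:
        best = 0  # length of the longest copyable prefix of s[i:]
        for e in phrase_ends:  # the copy's source must end at a phrase end
            for L in range(min(e, n - 1 - i), best, -1):
                if s[i:i + L] == s[e - L:e]:
                    best = L
                    break
        phrases.append(s[i:i + best + 1])  # copy plus one explicit character
        i += best + 1
        phrase_ends.append(i)
    return phrases

print(lz_end_parse('abababa'))  # ['a', 'b', 'aba', 'ba']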

Cite as

Dominik Kempa and Dmitry Kosolobov. LZ-End Parsing in Linear Time. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 53:1-53:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kempa_et_al:LIPIcs.ESA.2017.53,
  author =	{Kempa, Dominik and Kosolobov, Dmitry},
  title =	{{LZ-End Parsing in Linear Time}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{53:1--53:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.53},
  URN =		{urn:nbn:de:0030-drops-78471},
  doi =		{10.4230/LIPIcs.ESA.2017.53},
  annote =	{Keywords: LZ-End, LZ77, construction algorithm, linear time}
}
Document
Combinatorial n-fold Integer Programming and Applications

Authors: Dusan Knop, Martin Koutecky, and Matthias Mnich


Abstract
Many fundamental NP-hard problems can be formulated as integer linear programs (ILPs). A famous algorithm by Lenstra solves ILPs in time that is exponential only in the dimension of the program. That algorithm has therefore become a ubiquitous tool in the design of fixed-parameter algorithms for NP-hard problems, where one wishes to isolate the hardness of a problem in some parameter. However, it was discovered that in many cases using Lenstra's algorithm has two drawbacks: first, the run time of the resulting algorithms is often doubly exponential in the parameter, and second, an ILP formulation in small dimension cannot easily express problems involving many different costs. Inspired by the work of Hemmecke, Onn and Romanchuk [Math. Prog. 2013], we develop a single-exponential algorithm for so-called combinatorial n-fold integer programs, which are remarkably similar to prior ILP formulations for various problems but, unlike them, also allow variable dimension. We then apply our algorithm to a few representative problems such as Closest String, Swap Bribery, and Weighted Set Multicover, and obtain exponential speedups in the dependence on the respective parameters, the input size, or both. Unlike Lenstra's algorithm, which is essentially a bounded search tree algorithm, our result uses the technique of augmenting steps. At its heart is a deep result stating that in combinatorial n-fold IPs the existence of an augmenting step implies the existence of a "local" augmenting step, which can be found using dynamic programming. Our results provide important insight into many problems by showing that they exhibit this phenomenon, and highlight the importance of augmentation techniques.
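For readers unfamiliar with the format: an n-fold integer program, in the sense of Hemmecke, Onn and Romanchuk, constrains its variable vector by a block-structured matrix of the shape

\[
E^{(n)} \;=\;
\begin{pmatrix}
D & D & \cdots & D \\
A & 0 & \cdots & 0 \\
0 & A & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A
\end{pmatrix},
\]

where the top blocks D link all n "bricks" of variables and each block A constrains one brick separately; the combinatorial n-fold programs of the paper further restrict the shape of these blocks.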

Cite as

Dusan Knop, Martin Koutecky, and Matthias Mnich. Combinatorial n-fold Integer Programming and Applications. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 54:1-54:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{knop_et_al:LIPIcs.ESA.2017.54,
  author =	{Knop, Dusan and Koutecky, Martin and Mnich, Matthias},
  title =	{{Combinatorial n-fold Integer Programming and Applications}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{54:1--54:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.54},
  URN =		{urn:nbn:de:0030-drops-78616},
  doi =		{10.4230/LIPIcs.ESA.2017.54},
  annote =	{Keywords: integer programming, closest strings, fixed-parameter algorithms}
}
Document
Local Search Algorithms for Maximum Carpool Matching

Authors: Gilad Kutiel and Dror Rawitz


Abstract
The Maximum Carpool Matching problem is a star packing problem in directed graphs. Formally, given a directed graph G = (V, A), a capacity function c: V -> N, and a weight function w: A -> R^+, a carpool matching is a subset of arcs, M subseteq A, such that every v in V satisfies: (i) d^{in}_M(v) cdot d^{out}_M(v) = 0, (ii) d^{in}_M(v) <= c(v), and (iii) d^{out}_M(v) <= 1. A vertex v for which d^{out}_M(v) = 1 is a passenger, and a vertex for which d^{out}_M(v) = 0 is a driver who has d^{in}_M(v) passengers. In the Maximum Carpool Matching problem the goal is to find a carpool matching M of maximum total weight. The problem arises when designing an online carpool service, such as Zimride, which tries to connect users based on a similarity function. The problem is known to be NP-hard, even in the unweighted and uncapacitated case. The Maximum Group Carpool Matching problem is an extension of Maximum Carpool Matching in which each vertex represents an unsplittable group of passengers. Formally, each vertex u in V has a size s(u) in N, and the constraint d^{in}_M(v) <= c(v) is replaced with sum_{u:(u,v) in M} s(u) <= c(v). We show that Maximum Carpool Matching can be formulated as an unconstrained submodular maximization problem, and thus it admits a 1/2-approximation algorithm. We show that the same formulation does not work for Maximum Group Carpool Matching; nevertheless, we present a local search (1/2 - epsilon)-approximation algorithm for Maximum Group Carpool Matching. For the unweighted variant of both problems, when the maximum possible capacity c_{max} is bounded by a constant, we provide a local search (1/2 + 1/{2c_{max}} - epsilon)-approximation algorithm. We also show that the problem is APX-hard, even if the maximum degree and c_{max} are at most 3.
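Constraints (i)-(iii) translate directly into a validity check; a small sketch of ours, reading an arc (u, v) as "passenger u rides with driver v":

from collections import Counter

def is_carpool_matching(arcs, capacity):
    # arcs: a candidate M as (passenger, driver) pairs;
    # capacity: dict mapping each vertex v to c(v).
    indeg, outdeg = Counter(), Counter()
    for u, v in arcs:
        outdeg[u] += 1
        indeg[v] += 1
    vertices = set(capacity) | set(indeg) | set(outdeg)
    return all(
        indeg[v] * outdeg[v] == 0            # (i) nobody both drives and rides
        and indeg[v] <= capacity.get(v, 0)   # (ii) seats are not exceeded
        and outdeg[v] <= 1                   # (iii) ride in at most one car
        for v in vertices
    )

# Driver 'd' with two seats takes passengers 'a' and 'b'.
assert is_carpool_matching([('a', 'd'), ('b', 'd')], {'a': 0, 'b': 0, 'd': 2})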

Cite as

Gilad Kutiel and Dror Rawitz. Local Search Algorithms for Maximum Carpool Matching. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 55:1-55:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kutiel_et_al:LIPIcs.ESA.2017.55,
  author =	{Kutiel, Gilad and Rawitz, Dror},
  title =	{{Local Search Algorithms for Maximum Carpool Matching}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{55:1--55:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.55},
  URN =		{urn:nbn:de:0030-drops-78210},
  doi =		{10.4230/LIPIcs.ESA.2017.55},
  annote =	{Keywords: approximation algorithms, local search, star packing, submodular maximization}
}
Document
Computing Maximum Agreement Forests without Cluster Partitioning is Folly

Authors: Zhijiang Li and Norbert Zeh


Abstract
Computing a maximum (acyclic) agreement forest (M(A)AF) of a pair of phylogenetic trees is known to be fixed-parameter tractable; the two main techniques are kernelization and depth-bounded search. In theory, kernelization-based algorithms for this problem are not competitive, but they perform remarkably well in practice. We shed light on why this is the case. Our results show that, probably unsurprisingly, the kernel is often much smaller in practice than the theoretical worst case, but not small enough to fully explain the good performance of these algorithms. The key to performance is cluster partitioning, a technique used in almost all fast M(A)AF algorithms. In theory, cluster partitioning does not help: some instances are highly clusterable, others not at all. However, our experiments show that cluster partitioning leads to substantial performance improvements for kernelization-based M(A)AF algorithms. In contrast, kernelizing the individual clusters before solving them using exponential search yields only very modest performance improvements or even hurts performance; for the vast majority of inputs, kernelization leads to no reduction in the maximal cluster size at all. The choice of the algorithm applied to solve individual clusters also significantly impacts performance, even though our limited experiment to evaluate this produced no clear winner; depth-bounded search, exponential search interleaved with kernelization, and an ILP-based algorithm all achieved competitive performance.

Cite as

Zhijiang Li and Norbert Zeh. Computing Maximum Agreement Forests without Cluster Partitioning is Folly. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ESA.2017.56,
  author =	{Li, Zhijiang and Zeh, Norbert},
  title =	{{Computing Maximum Agreement Forests without Cluster Partitioning is Folly}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.56},
  URN =		{urn:nbn:de:0030-drops-78819},
  doi =		{10.4230/LIPIcs.ESA.2017.56},
  annote =	{Keywords: fixed-parameter tractability, agreement forests, hybridization, subtree prune-and-regraft}
}
Document
A Linear-Time Parameterized Algorithm for Node Unique Label Cover

Authors: Daniel Lokshtanov, M. S. Ramanujan, and Saket Saurabh


Abstract
The optimization version of the Unique Label Cover problem is at the heart of the Unique Games Conjecture which has played an important role in the proof of several tight inapproximability results. In recent years, this problem has been also studied extensively from the point of view of parameterized complexity. Chitnis et al. [FOCS 2012, SICOMP 2016] proved that this problem is fixed-parameter tractable (FPT) and Wahlström [SODA 2014] gave an FPT algorithm with an improved parameter dependence. Subsequently, Iwata, Wahlström and Yoshida [SICOMP 2016] proved that the edge version of Unique Label Cover can be solved in linear FPT-time, and they left open the existence of such an algorithm for the node version of the problem. In this paper, we resolve this question by presenting the first linear-time FPT algorithm for Node Unique Label Cover.

Cite as

Daniel Lokshtanov, M. S. Ramanujan, and Saket Saurabh. A Linear-Time Parameterized Algorithm for Node Unique Label Cover. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.ESA.2017.57,
  author =	{Lokshtanov, Daniel and Ramanujan, M. S. and Saurabh, Saket},
  title =	{{A Linear-Time  Parameterized Algorithm for Node Unique Label Cover}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.57},
  URN =		{urn:nbn:de:0030-drops-78152},
  doi =		{10.4230/LIPIcs.ESA.2017.57},
  annote =	{Keywords: Algorithms and data structures, Fixed Parameter Tractability, Unique Label Cover, Linear Time FPT Algorithms.}
}
Document
Dynamic Space Efficient Hashing

Authors: Tobias Maier and Peter Sanders


Abstract
We consider space efficient hash tables that can grow and shrink dynamically and are always highly space efficient, i.e., their space consumption is always close to the lower bound, even while growing and when taking into account storage that is only needed temporarily. None of the traditionally used hash tables have this property. We show how known approaches like linear probing and bucket cuckoo hashing can be adapted to this scenario by subdividing them into many subtables or by using virtual memory overcommitting. However, these rather straightforward solutions suffer from slow amortized insertion times due to frequent reallocation in small increments. Our main result is DySECT (Dynamic Space Efficient Cuckoo Table), which avoids these problems. DySECT consists of many subtables which grow by doubling their size. The resulting inhomogeneity in subtable sizes is equalized by the flexibility available in bucket cuckoo hashing, where each element can go to several buckets, each of which contains several cells. Experiments indicate that DySECT works well with load factors up to 98%, with up to 2.7 times better performance than the next best solution.

Cite as

Tobias Maier and Peter Sanders. Dynamic Space Efficient Hashing. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{maier_et_al:LIPIcs.ESA.2017.58,
  author =	{Maier, Tobias and Sanders, Peter},
  title =	{{Dynamic Space Efficient Hashing}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.58},
  URN =		{urn:nbn:de:0030-drops-78487},
  doi =		{10.4230/LIPIcs.ESA.2017.58},
  annote =	{Keywords: Dynamic data structures, open addressing, closed hashing, cuckoo hashing, space efficiency}
}
Document
Subexponential Parameterized Algorithms for Graphs of Polynomial Growth

Authors: Dániel Marx and Marcin Pilipczuk


Abstract
We show that for a number of parameterized problems for which only 2^{O(k)} n^{O(1)} time algorithms are known on general graphs, subexponential parameterized algorithms with running time 2^{O(k^{1-1/(1+d)} log^2 k)} n^{O(1)} are possible for graphs of polynomial growth with growth rate (degree) d, that is, if we assume that every ball of radius r contains only O(r^d) vertices. The algorithms use the technique of low-treewidth pattern covering, introduced by Fomin et al. [FOCS 2016] for planar graphs; here we show how this strategy can be made to work for graphs of polynomial growth. Formally, we prove that, given a graph G of polynomial growth with growth rate d and an integer k, one can in randomized polynomial time find a subset A of V(G) such that on one hand the treewidth of G[A] is O(k^{1-1/(1+d)} log k), and on the other hand for every set X of vertices of size at most k, the probability that X is a subset of A is 2^{-O(k^{1-1/(1+d)} log^2 k)}. Together with standard dynamic programming techniques on graphs of bounded treewidth, this statement gives subexponential parameterized algorithms for a number of subgraph search problems, such as Long Path or Steiner Tree, in graphs of polynomial growth. We complement the algorithm with an almost tight lower bound for Long Path: unless the Exponential Time Hypothesis fails, no parameterized algorithm with running time 2^{k^{1-1/d-epsilon}}n^{O(1)} is possible for any positive epsilon and any integer d >= 3.
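To make the exponent concrete: for growth rate d = 2, the exponent 1 - 1/(1+d) equals 2/3, so the running time reads

\[
2^{O(k^{2/3} \log^2 k)}\, n^{O(1)},
\]

a genuinely subexponential improvement over the 2^{O(k)} n^{O(1)} bounds known for general graphs.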

Cite as

Dániel Marx and Marcin Pilipczuk. Subexponential Parameterized Algorithms for Graphs of Polynomial Growth. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 59:1-59:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{marx_et_al:LIPIcs.ESA.2017.59,
  author =	{Marx, D\'{a}niel and Pilipczuk, Marcin},
  title =	{{Subexponential Parameterized Algorithms for Graphs of Polynomial Growth}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{59:1--59:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.59},
  URN =		{urn:nbn:de:0030-drops-78162},
  doi =		{10.4230/LIPIcs.ESA.2017.59},
  annote =	{Keywords: polynomial growth, subexponential algorithm, low treewidth pattern covering}
}
Document
Benchmark Graphs for Practical Graph Isomorphism

Authors: Daniel Neuen and Pascal Schweitzer


Abstract
The state-of-the-art solvers for the graph isomorphism problem can readily solve generic instances with tens of thousands of vertices. Indeed, experiments show that on inputs without particular combinatorial structure the algorithms scale almost linearly. In fact, it is non-trivial to create challenging instances for such solvers and the number of difficult benchmark graphs available is quite limited. We describe a construction to efficiently generate small instances for the graph isomorphism problem that are difficult or even infeasible for said solvers. Up to this point the only other available instances posing challenges for isomorphism solvers were certain incidence structures of combinatorial objects (such as projective planes, Hadamard matrices, Latin squares, etc.). Experiments show that starting from 1500 vertices our new instances are several orders of magnitude more difficult on comparable input sizes. More importantly, our method is generic and efficient in the sense that one can quickly create many isomorphism instances on a desired number of vertices. In contrast to this, said combinatorial objects are rare and difficult to generate and with the new construction it is possible to generate an abundance of instances of arbitrary size. Our construction hinges on the multipedes of Gurevich and Shelah and the Cai-Fürer-Immerman gadgets that realize a certain abelian automorphism group and have repeatedly played a role in the context of graph isomorphism. Exploring limits, we also explain that there are group theoretic obstructions to generalizing the construction with non-abelian gadgets.

Cite as

Daniel Neuen and Pascal Schweitzer. Benchmark Graphs for Practical Graph Isomorphism. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{neuen_et_al:LIPIcs.ESA.2017.60,
  author =	{Neuen, Daniel and Schweitzer, Pascal},
  title =	{{Benchmark Graphs for Practical Graph Isomorphism}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{60:1--60:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.60},
  URN =		{urn:nbn:de:0030-drops-78701},
  doi =		{10.4230/LIPIcs.ESA.2017.60},
  annote =	{Keywords: graph isomorphism, benchmark instances, practical solvers}
}
Document
On the Tree Augmentation Problem

Authors: Zeev Nutov


Abstract
In the Tree Augmentation problem we are given a tree T=(V,F) and a set E of edges with positive integer costs {c_e: e in E}. The goal is to augment T by a minimum cost edge set J subseteq E such that T cup J is 2-edge-connected. We obtain the following results. Recently, Adjiashvili [SODA 17] introduced a novel LP for the problem and used it to break the 2-approximation barrier for instances when the maximum cost M of an edge in E is bounded by a constant; his algorithm computes a 1.96418+epsilon approximate solution in time n^{(M/epsilon^2)^{O(1)}}. Using a simpler LP, we achieve ratio 12/7+epsilon in time 2^{O(M/epsilon^2)}. This also gives a ratio better than 2 for logarithmic costs, and not only for constant costs. In addition, we show that (for arbitrary costs) the problem admits ratio 3/2 for trees of diameter <= 7. One of the oldest open questions for the problem is whether for unit costs (when M=1) the standard LP-relaxation, the so-called Cut-LP, has integrality gap less than 2. We resolve this open question by proving that for unit costs the integrality gap of the Cut-LP is at most 28/15 = 2 - 2/15. In addition, we suggest another natural LP-relaxation that is much simpler than the ones in previous work, and prove that it has integrality gap at most 7/4.
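For reference, the standard Cut-LP mentioned above is the natural covering relaxation of Tree Augmentation: every tree edge must be fractionally covered by links whose tree path contains it,

\[
\min \sum_{e \in E} c_e x_e
\quad \text{s.t.} \quad
\sum_{e \in E:\, f \in P_e} x_e \ge 1 \;\; \text{for all } f \in F,
\qquad x \ge 0,
\]

where P_e denotes the set of tree edges on the path in T between the endpoints of e.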

Cite as

Zeev Nutov. On the Tree Augmentation Problem. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{nutov:LIPIcs.ESA.2017.61,
  author =	{Nutov, Zeev},
  title =	{{On the Tree Augmentation Problem}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{61:1--61:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.61},
  URN =		{urn:nbn:de:0030-drops-78345},
  doi =		{10.4230/LIPIcs.ESA.2017.61},
  annote =	{Keywords: Tree augmentation, Logarithmic costs, Approximation algorithm, Half-integral extreme points, Integrality gap}
}
Document
Prize-Collecting TSP with a Budget Constraint

Authors: Alice Paul, Daniel Freund, Aaron Ferber, David B. Shmoys, and David P. Williamson


Abstract
We consider constrained versions of the prize-collecting traveling salesman and the minimum spanning tree problems. The goal is to maximize the number of vertices in the returned tour/tree subject to a bound on the tour/tree cost. We present a 2-approximation algorithm for these problems based on a primal-dual approach. The algorithm relies on finding a threshold value for the dual variable corresponding to the budget constraint in the primal and then carefully constructing a tour/tree that is just within budget. Thereby, we improve the best-known guarantees from 3+epsilon and 2+epsilon for the tree and the tour version, respectively. Our analysis extends to the setting with weighted vertices, in which we want to maximize the total weight of vertices in the tour/tree subject to the same budget constraint.

Cite as

Alice Paul, Daniel Freund, Aaron Ferber, David B. Shmoys, and David P. Williamson. Prize-Collecting TSP with a Budget Constraint. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 62:1-62:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{paul_et_al:LIPIcs.ESA.2017.62,
  author =	{Paul, Alice and Freund, Daniel and Ferber, Aaron and Shmoys, David B. and Williamson, David P.},
  title =	{{Prize-Collecting TSP with a Budget Constraint}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{62:1--62:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.62},
  URN =		{urn:nbn:de:0030-drops-78375},
  doi =		{10.4230/LIPIcs.ESA.2017.62},
  annote =	{Keywords: approximation algorithms, traveling salesman problem}
}
Document
Counting Restricted Homomorphisms via Möbius Inversion over Matroid Lattices

Authors: Marc Roth


Abstract
We present a framework for the complexity classification of parameterized counting problems that can be formulated as the summation over the numbers of homomorphisms from small pattern graphs H_1,...,H_l to a big host graph G with the restriction that the coefficients correspond to evaluations of the Möbius function over the lattice of a graphic matroid. This generalizes the idea of Curticapean, Dell and Marx [STOC 17] who used a result of Lovász stating that the number of subgraph embeddings from a graph H to a graph G can be expressed as such a sum over the lattice of partitions of H. In the first step we introduce what we call graphically restricted homomorphisms that, inter alia, generalize subgraph embeddings as well as locally injective homomorphisms. We provide a complete parameterized complexity dichotomy for counting such homomorphisms, that is, we identify classes of patterns for which the problem is fixed-parameter tractable (FPT), including an algorithm, and prove that all other pattern classes lead to #W[1]-hard problems. The main ingredients of the proof are the complexity classification of linear combinations of homomorphisms due to Curticapean, Dell and Marx [STOC 17] as well as a corollary of Rota's NBC Theorem which states that the sign of the Möbius function over a geometric lattice only depends on the rank of its arguments. We apply the general theorem to the problem of counting locally injective homomorphisms from small pattern graphs to big host graphs yielding a concrete dichotomy criterion. It turns out that - in contrast to subgraph embeddings - counting locally injective homomorphisms has "real" FPT cases, that is, cases that are fixed-parameter tractable but not polynomial time solvable under standard complexity assumptions. To prove this we show in an intermediate step that the subgraph counting problem remains #P-hard when both the pattern and the host graphs are restricted to be trees. We then investigate the more general problem of counting homomorphisms that are injective in the r-neighborhood of every vertex. As those are graphically restricted as well, they can also easily be classified via the general theorem. Finally we show that the dichotomy for counting graphically restricted homomorphisms readily extends to so-called linear combinations.

Cite as

Marc Roth. Counting Restricted Homomorphisms via Möbius Inversion over Matroid Lattices. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{roth:LIPIcs.ESA.2017.63,
  author =	{Roth, Marc},
  title =	{{Counting Restricted Homomorphisms via M\"{o}bius Inversion over Matroid Lattices}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.63},
  URN =		{urn:nbn:de:0030-drops-78311},
  doi =		{10.4230/LIPIcs.ESA.2017.63},
  annote =	{Keywords: homomorphisms, matroids, counting complexity, parameterized complexity, dichotomy theorems}
}
Document
Clustering in Hypergraphs to Minimize Average Edge Service Time

Authors: Ori Rottenstreich, Haim Kaplan, and Avinatan Hassidim


Abstract
We study the problem of clustering the vertices of a weighted hypergraph such that on average the vertices of each edge can be covered by a small number of clusters. This problem has many applications such as for designing medical tests, clustering files on disk servers, and placing network services on servers. The edges of the hypergraph model groups of items that are likely to be needed together, and the optimization criteria which we use can be interpreted as the average delay (or cost) to serve the items of a typical edge. We describe and analyze algorithms for this problem for the case in which the clusters have to be disjoint and for the case where clusters can overlap. The analysis is often subtle and reveals interesting structure and invariants that one can utilize.
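In the disjoint-cluster case, one simple way to read the optimization criterion is that the cost of serving an edge is the number of distinct clusters its vertices touch, averaged over all edges; the paper's delay-based cost model is more refined, so the sketch below (ours) is only a first approximation of the objective.

def average_edge_service_cost(edges, cluster_of):
    # edges: list of hyperedges (iterables of vertices);
    # cluster_of: dict mapping each vertex to its (disjoint) cluster id.
    return sum(len({cluster_of[v] for v in e}) for e in edges) / len(edges)

# Two hyperedges; clustering {x, y | z} serves the first edge from one
# cluster and the second from two, for an average of 1.5.
edges = [('x', 'y'), ('y', 'z')]
print(average_edge_service_cost(edges, {'x': 0, 'y': 0, 'z': 1}))  # 1.5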

Cite as

Ori Rottenstreich, Haim Kaplan, and Avinatan Hassidim. Clustering in Hypergraphs to Minimize Average Edge Service Time. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 64:1-64:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{rottenstreich_et_al:LIPIcs.ESA.2017.64,
  author =	{Rottenstreich, Ori and Kaplan, Haim and Hassidim, Avinatan},
  title =	{{Clustering in Hypergraphs to Minimize Average Edge Service Time}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{64:1--64:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.64},
  URN =		{urn:nbn:de:0030-drops-78777},
  doi =		{10.4230/LIPIcs.ESA.2017.64},
  annote =	{Keywords: Clustering, average cover time, hypergraphs, set cover}
}
Document
K-Dominance in Multidimensional Data: Theory and Applications

Authors: Thomas Schibler and Subhash Suri


Abstract
We study the problem of k-dominance in a set of d-dimensional vectors. We prove bounds on the number of maxima (skyline vectors) under both worst-case and average-case models, perform an experimental evaluation using synthetic and real-world data, and explore an application of the k-dominant skyline to extracting a small set of top-ranked vectors in high dimensions, where full skylines can be unmanageably large.
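The abstract does not spell out the relation, so the following is our assumption based on the commonly used definition (following Chan et al.): a vector u k-dominates v if u is at least as good in some k coordinates and strictly better in at least one of them.

def k_dominates(u, v, k):
    # One standard definition, larger coordinate values assumed better;
    # the paper's exact convention may differ.
    ge = [i for i in range(len(u)) if u[i] >= v[i]]
    return len(ge) >= k and any(u[i] > v[i] for i in ge)

assert k_dominates((5, 4, 1), (3, 3, 9), k=2)      # at least as good in coords 0, 1
assert not k_dominates((5, 4, 1), (3, 3, 9), k=3)  # worse in coord 2

For k < d this relation is not transitive, which is what makes the combinatorics of k-dominant skylines subtle.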

Cite as

Thomas Schibler and Subhash Suri. K-Dominance in Multidimensional Data: Theory and Applications. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 65:1-65:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{schibler_et_al:LIPIcs.ESA.2017.65,
  author =	{Schibler, Thomas and Suri, Subhash},
  title =	{{K-Dominance in Multidimensional Data: Theory and Applications}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{65:1--65:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.65},
  URN =		{urn:nbn:de:0030-drops-78402},
  doi =		{10.4230/LIPIcs.ESA.2017.65},
  annote =	{Keywords: Dominance, skyline, database search, average case analysis, random vectors}
}
Document
New Abilities and Limitations of Spectral Graph Bisection

Authors: Martin R. Schuster and Maciej Liskiewicz


Abstract
Spectral heuristics are well-known, commonly used methods that determine a provably minimal graph bisection or output "fail" when optimality cannot be certified. In this paper we focus on Boppana's algorithm, one of the most prominent methods of this type. It is well known that the algorithm works well in the random planted bisection model - the standard class of graphs for analyzing minimum bisection and related problems. In 2001, Feige and Kilian posed the question of whether Boppana's algorithm works well in the semirandom model of Blum and Spencer. In our paper we answer this question affirmatively. We also show that the algorithm achieves similar performance on graph classes which extend the semirandom model. Since the behavior of Boppana's algorithm on semirandom graphs remained unknown, Feige and Kilian proposed a new semidefinite programming (SDP) based approach and proved that it works on this model. The relationship between the performance of the SDP based algorithm and Boppana's approach was left as an open problem. In this paper we solve the problem completely by proving that the bisection algorithm of Feige and Kilian provides exactly the same results as Boppana's algorithm. As a consequence we get that Boppana's algorithm achieves the optimal threshold for exact cluster recovery in the stochastic block model. On the other hand, we prove some limitations of Boppana's approach: we show that if the density difference between the parameters of the planted bisection model is too small, then the algorithm fails with high probability in the model.

Cite as

Martin R. Schuster and Maciej Liskiewicz. New Abilities and Limitations of Spectral Graph Bisection. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 66:1-66:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{schuster_et_al:LIPIcs.ESA.2017.66,
  author =	{Schuster, Martin R. and Liskiewicz, Maciej},
  title =	{{New Abilities and Limitations of Spectral Graph Bisection}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{66:1--66:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.66},
  URN =		{urn:nbn:de:0030-drops-78658},
  doi =		{10.4230/LIPIcs.ESA.2017.66},
  annote =	{Keywords: Minimum Graph Bisection, Spectral Methods, Convex Programming}
}
Document
A Space-Optimal Grammar Compression

Authors: Yoshimasa Takabatake, Tomohiro I, and Hiroshi Sakamoto


Abstract
A grammar compression is a context-free grammar (CFG) that deterministically derives a single string. For an input string of length N over an alphabet of size sigma, the smallest CFG is O(log N)-approximable in the offline setting and O(log N log^* N)-approximable in the online setting. In addition, an information-theoretic lower bound for representing a CFG in Chomsky normal form with n variables is log(n!/n^sigma) + n + o(n) bits. Although there is an online grammar compression algorithm that directly computes the succinct encoding of its output CFG with an O(log N log^* N) approximation guarantee, the problem of optimizing its working space has remained open. We propose a fully-online algorithm whose working space, in bits, asymptotically matches this lower bound, running in O(N log log n) compression time. In addition, we propose several techniques to boost grammar compression and show their efficiency by computational experiments.
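A grammar compression in this sense is just a straight-line program; in the minimal sketch below (our toy example), each variable in Chomsky normal form derives either a single terminal or a pair of variables, so the grammar derives exactly one string.

grammar = {
    'X1': 'a',
    'X2': 'b',
    'X3': ('X1', 'X2'),   # X3 -> ab
    'X4': ('X3', 'X3'),   # X4 -> abab
    'X5': ('X4', 'X3'),   # X5 -> ababab
}

def expand(symbol, g):
    # Deterministically derive the unique string of a variable.
    rhs = g[symbol]
    if isinstance(rhs, str):
        return rhs
    left, right = rhs
    return expand(left, g) + expand(right, g)

assert expand('X5', grammar) == 'ababab'

Because variables can be reused, a CFG with n variables can derive a string of length exponential in n, which is the source of compression.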

Cite as

Yoshimasa Takabatake, Tomohiro I, and Hiroshi Sakamoto. A Space-Optimal Grammar Compression. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 67:1-67:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{takabatake_et_al:LIPIcs.ESA.2017.67,
  author =	{Takabatake, Yoshimasa and I, Tomohiro and Sakamoto, Hiroshi},
  title =	{{A Space-Optimal Grammar Compression}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{67:1--67:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.67},
  URN =		{urn:nbn:de:0030-drops-78640},
  doi =		{10.4230/LIPIcs.ESA.2017.67},
  annote =	{Keywords: Grammar compression, fully-online algorithm, succinct data structure}
}
Document
Positive-Instance Driven Dynamic Programming for Treewidth

Authors: Hisao Tamaki


Abstract
Consider a dynamic programming scheme for a decision problem in which all subproblems involved are also decision problems. An implementation of such a scheme is positive-instance driven (PID) if it generates positive subproblem instances, but not negative ones, building each on smaller positive instances. We take the dynamic programming scheme due to Bouchitté and Todinca for treewidth computation, which is based on minimal separators and potential maximal cliques, and design a variant (for the decision version of the problem) with a natural PID implementation. The resulting algorithm performs extremely well: it solves a number of standard benchmark instances for which the optimal solutions were not previously known. Incorporating a new heuristic algorithm for detecting safe separators, it also solves all of the 100 public instances posed by the exact treewidth track of PACE 2017, a competition on algorithm implementation. We describe the algorithm and prove its correctness. We also perform an experimental analysis counting the combinatorial structures involved, which gives insight into the advantage of our approach over more conventional ones and points to future directions for theoretical and engineering research on treewidth computation.
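The PID principle is easiest to see on a toy decision problem. The sketch below applies it to subset sum rather than treewidth: instead of filling a full DP table over all (prefix, target) subproblems, it only ever materializes subproblems whose answer is "yes", each built from a smaller positive instance. Tamaki's algorithm applies the same principle to the Bouchitté-Todinca DP over minimal separators and potential maximal cliques.

def subset_sum_pid(items, target):
    # The only positive base instance: the empty subset reaching sum 0.
    reachable = {0}
    for x in items:
        # Each new positive instance s + x is built from a known positive s;
        # unreachable sums (negative instances) are never stored.
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum_pid([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9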

Cite as

Hisao Tamaki. Positive-Instance Driven Dynamic Programming for Treewidth. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 68:1-68:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{tamaki:LIPIcs.ESA.2017.68,
  author =	{Tamaki, Hisao},
  title =	{{Positive-Instance Driven Dynamic Programming for Treewidth}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{68:1--68:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.68},
  URN =		{urn:nbn:de:0030-drops-78804},
  doi =		{10.4230/LIPIcs.ESA.2017.68},
  annote =	{Keywords: treewidth, dynamic programming, minimal separators, potential maximal cliques, positive instances}
}
Document
Exponential Lower Bounds for History-Based Simplex Pivot Rules on Abstract Cubes

Authors: Antonis Thomas


Abstract
The behavior of the simplex algorithm is a widely studied subject. Specifically, the question of the existence of a polynomial pivot rule for the simplex algorithm is of major importance. Here, we give exponential lower bounds for three history-based pivot rules, which decide their next step based on a memory of past steps. In particular, we study Zadeh's least-entered rule, Johnson's least-recently-basic rule, and Cunningham's least-recently-considered (or round-robin) rule. For all of these pivot rules, we give exponential lower bounds on Acyclic Unique Sink Orientations (AUSOs) of the abstract cube. For Johnson's rule our bound is the first superpolynomial one in any context; for Zadeh's rule it is the first one for AUSOs. These two are our main results.
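To illustrate what "history-based" means, here is a minimal sketch of the selection logic of Zadeh's least-entered rule: among the currently improving variables, enter the one that has entered the basis least often so far. The surrounding simplex machinery (computing the improving candidates, performing the pivot) is abstracted away and assumed to be given.

from collections import defaultdict

class LeastEnteredRule:
    def __init__(self):
        self.entered_count = defaultdict(int)   # the rule's memory

    def choose(self, improving_vars):
        """Pick the improving variable entered least often; ties arbitrary."""
        pick = min(improving_vars, key=lambda v: self.entered_count[v])
        self.entered_count[pick] += 1
        return pick

rule = LeastEnteredRule()
print(rule.choose(['x1', 'x2', 'x3']))   # x1 (all counts are zero)
print(rule.choose(['x1', 'x3']))         # x3 (x1 has already entered once)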

Cite as

Antonis Thomas. Exponential Lower Bounds for History-Based Simplex Pivot Rules on Abstract Cubes. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 69:1-69:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{thomas:LIPIcs.ESA.2017.69,
  author =	{Thomas, Antonis},
  title =	{{Exponential Lower Bounds for History-Based Simplex Pivot Rules on Abstract Cubes}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{69:1--69:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.69},
  URN =		{urn:nbn:de:0030-drops-78505},
  doi =		{10.4230/LIPIcs.ESA.2017.69},
  annote =	{Keywords: pivot rule, lower bound, exponential, unique sink orientation, zadeh}
}
Document
Maxent-Stress Optimization of 3D Biomolecular Models

Authors: Michael Wegner, Oskar Taubert, Alexander Schug, and Henning Meyerhenke


Abstract
Knowing a biomolecule's structure is inherently linked to, and a prerequisite for, any detailed understanding of its function. Significant effort has gone into developing technologies for structural characterization. These technologies do not directly provide 3D structures; instead, they typically yield noisy and error-prone distance information between specific entities such as atoms or residues, which has to be translated into consistent 3D models. Here we present an approach to this translation process based on maxent-stress optimization. Our approach extends the original graph-drawing method to the specifics of the new application by introducing additional constraints, confidence values, and algorithmic components. Extensive experiments demonstrate that our approach infers structural models (i.e., sensible 3D coordinates for the molecule's atoms) that correspond well to the distance information, can handle noisy and error-prone data, and is considerably faster than established tools. Our results promise to allow domain scientists nearly interactive structural modeling based on distance constraints.
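As a rough illustration of the underlying model, the sketch below takes gradient steps on a plain maxent-stress objective: pairs with measured distances are pulled toward their targets (the stress term), while all other pairs repel through a logarithmic entropy term. The paper's solver, its constraints, and its confidence weights are considerably more elaborate; the parameter names alpha and step are illustrative choices.

import numpy as np

def maxent_stress_step(pos, targets, alpha=0.01, step=0.05):
    """pos: (n, 3) coordinates; targets: dict {(i, j): distance}, i < j."""
    n = len(pos)
    grad = np.zeros_like(pos)
    measured = set(targets)
    for i in range(n):
        for j in range(i + 1, n):
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff) + 1e-9
            if (i, j) in measured:
                # Stress term: gradient of (dist - d_ij)^2 w.r.t. pos[i].
                g = 2.0 * (dist - targets[(i, j)]) * diff / dist
            else:
                # Entropy term: gradient of -alpha * log(dist) w.r.t. pos[i].
                g = -alpha * diff / dist**2
            grad[i] += g
            grad[j] -= g
    return pos - step * grad

rng = np.random.default_rng(0)
pos = rng.standard_normal((4, 3))
targets = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
for _ in range(200):
    pos = maxent_stress_step(pos, targets)
print(np.linalg.norm(pos[0] - pos[1]))   # should end up near the target 1.0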

Cite as

Michael Wegner, Oskar Taubert, Alexander Schug, and Henning Meyerhenke. Maxent-Stress Optimization of 3D Biomolecular Models. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 70:1-70:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{wegner_et_al:LIPIcs.ESA.2017.70,
  author =	{Wegner, Michael and Taubert, Oskar and Schug, Alexander and Meyerhenke, Henning},
  title =	{{Maxent-Stress Optimization of 3D Biomolecular Models}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{70:1--70:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.70},
  URN =		{urn:nbn:de:0030-drops-78175},
  doi =		{10.4230/LIPIcs.ESA.2017.70},
  annote =	{Keywords: Distance geometry, protein structure determination, 3D graph drawing, maxent-stress optimization}
}
