LIPIcs, Volume 57

24th Annual European Symposium on Algorithms (ESA 2016)



Event

ESA 2016, August 22-24, 2016, Aarhus, Denmark

Editors

Piotr Sankowski
Christos Zaroliagis

Publication Details

  • Published: 2016-08-18
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-015-6
  • DBLP: db/conf/esa/esa2016

Documents
Document
Complete Volume
LIPIcs, Volume 57, ESA'16, Complete Volume

Authors: Piotr Sankowski and Christos Zaroliagis


Abstract
LIPIcs, Volume 57, ESA'16, Complete Volume

Cite as

24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Proceedings{sankowski_et_al:LIPIcs.ESA.2016,
  title =	{{LIPIcs, Volume 57, ESA'16, Complete Volume}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016},
  URN =		{urn:nbn:de:0030-drops-65854},
  doi =		{10.4230/LIPIcs.ESA.2016},
  annote =	{Keywords: Data Structures, Nonnumerical Algorithms and Problems, Optimization, Discrete Mathematics, Mathematical Software, Algorithms, Problem Solving, Control Methods and Search, Computational Geometry and Object Modeling}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Program Committee, External Reviewers

Authors: Piotr Sankowski and Christos Zaroliagis


Abstract
Front Matter, Table of Contents, Preface, Program Committee, External Reviewers

Cite as

24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 0:i-0:xxiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{sankowski_et_al:LIPIcs.ESA.2016.0,
  author =	{Sankowski, Piotr and Zaroliagis, Christos},
  title =	{{Front Matter, Table of Contents, Preface, Program Committee, External Reviewers}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{0:i--0:xxiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.0},
  URN =		{urn:nbn:de:0030-drops-63429},
  doi =		{10.4230/LIPIcs.ESA.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Program Committee, External Reviewers}
}
Document
Invited Talk
2-Connectivity in Directed Graphs (Invited Talk)

Authors: Loukas Georgiadis, Giuseppe F. Italiano, and Nikos Parotsidis


Abstract
We survey some recent results on 2-edge and 2-vertex connectivity problems in directed graphs. Despite being complete analogs of the corresponding notions on undirected graphs, in digraphs 2-vertex and 2-edge connectivity have a much richer and more complicated structure. It is thus not surprising that 2-connectivity problems on directed graphs appear to be more difficult than on undirected graphs. For undirected graphs it has been known for over 40 years how to compute all bridges, articulation points, 2-edge- and 2-vertex-connected components in linear time, by simply using depth-first search. In the case of digraphs, however, the very same problems have been much more challenging and required the development of new tools and techniques.
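
As a concrete reference point for the undirected case the survey contrasts with, here is a minimal Python sketch (our illustration, not code from the paper) of the classic linear-time DFS computation of bridges; it assumes a simple undirected graph given as an adjacency list.

import sys

def bridges(n, adj):
    # Classic low-link bridge finding: disc[u] is the DFS discovery time,
    # low[u] the smallest discovery time reachable from u's subtree via
    # back edges.  Edge (u, v) is a bridge iff low[v] > disc[u].
    sys.setrecursionlimit(max(1000, 2 * n + 10))
    disc = [-1] * n
    low = [0] * n
    out = []
    timer = 0

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        for v in adj[u]:
            if disc[v] == -1:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # nothing below v reaches above u
                    out.append((u, v))
            elif v != parent:          # back edge (assumes no parallel edges)
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, None)
    return out

# Example: in the graph 0-1, 1-2, 2-3, 3-1, only (0, 1) is a bridge:
# bridges(4, [[1], [0, 2, 3], [1, 3], [1, 2]]) -> [(0, 1)]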

Cite as

Loukas Georgiadis, Giuseppe F. Italiano, and Nikos Parotsidis. 2-Connectivity in Directed Graphs (Invited Talk). In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{georgiadis_et_al:LIPIcs.ESA.2016.1,
  author =	{Georgiadis, Loukas and Italiano, Giuseppe F. and Parotsidis, Nikos},
  title =	{{2-Connectivity in Directed Graphs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.1},
  URN =		{urn:nbn:de:0030-drops-63451},
  doi =		{10.4230/LIPIcs.ESA.2016.1},
  annote =	{Keywords: 2-edge and 2-vertex connectivity on directed graphs, graph algorithms, dominator trees}
}
Document
Invited Talk
Algorithms with Provable Guarantees for Clustering (Invited Talk)

Authors: Ola Svensson


Abstract
In this talk, we give an overview of the current best approximation algorithms for fundamental clustering problems, such as k-center, k-median, k-means, and facility location. We focus on recent progress and point out several important open problems. For the uncapacitated versions, a variety of algorithmic methodologies, such as LP-rounding and the primal-dual method, have been applied to a standard linear programming relaxation. This has given a uniform way of addressing these problems, resulting in small constant approximation guarantees. In spite of this impressive progress, it remains a challenging open problem to give tight guarantees. Moreover, this collection of powerful algorithmic techniques is not easily applicable to the capacitated setting. In fact, no simple strong convex relaxation is known for the capacitated versions. As a result, our understanding of these problems is significantly weaker and several fundamental questions remain open.
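
For context, one of the simplest guarantees in this landscape is the classic farthest-first greedy for metric k-center, which is a 2-approximation; the Python sketch below is our illustration of that textbook algorithm (for points in the plane), not material from the talk.

import math

def k_center_greedy(points, k):
    # Gonzalez's farthest-first traversal: repeatedly open the point
    # farthest from the centers chosen so far.  In any metric space this
    # yields a 2-approximation for the k-center objective.
    centers = [points[0]]                      # arbitrary first center
    d = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=d.__getitem__)
        centers.append(points[i])              # open the farthest point
        d = [min(d[j], math.dist(points[j], points[i]))
             for j in range(len(points))]
    return centers, max(d)                     # chosen centers, achieved radius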

Cite as

Ola Svensson. Algorithms with Provable Guarantees for Clustering (Invited Talk). In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{svensson:LIPIcs.ESA.2016.2,
  author =	{Svensson, Ola},
  title =	{{Algorithms with Provable Guarantees for Clustering}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.2},
  URN =		{urn:nbn:de:0030-drops-63435},
  doi =		{10.4230/LIPIcs.ESA.2016.2},
  annote =	{Keywords: Approximation algorithms, clustering}
}
Document
Beating Ratio 0.5 for Weighted Oblivious Matching Problems

Authors: Melika Abolhassani, T.-H. Hubert Chan, Fei Chen, Hossein Esfandiari, MohammadTaghi Hajiaghayi, Hamid Mahini, and Xiaowei Wu


Abstract
We prove the first non-trivial performance ratios strictly above 0.5 for weighted versions of the oblivious matching problem. Even for the unweighted version, since Aronson, Dyer, Frieze, and Suen first proved a non-trivial ratio above 0.5 in the mid-1990s, several attempts were made over the next twenty years to improve this ratio, until Chan, Chen, Wu, and Zhao achieved a significant ratio of 0.523 (SODA 2014). To the best of our knowledge, our work is the first in the literature that considers the node-weighted and edge-weighted versions of the problem in arbitrary graphs (as opposed to bipartite graphs). (1) For arbitrary node weights, we prove that a weighted version of the Ranking algorithm has a ratio strictly above 0.5. We have discovered a new structural property of the Ranking algorithm: if a node has two unmatched neighbors at the end of the algorithm, then it will still be matched even when its rank is demoted to the bottom. This property allows us to form LP constraints for both the node-weighted and the unweighted oblivious matching problems. As a result, we prove that the ratio for the node-weighted case is at least 0.501512. Interestingly, via the structural property we can also slightly improve the ratio for the unweighted case to 0.526823 (from the previous best 0.523166 in SODA 2014). (2) For a bounded number of distinct edge weights, we show that a ratio strictly above 0.5 can be achieved by partitioning edges carefully according to their weights and running the (unweighted) Ranking algorithm on each part. Our analysis is based on a new primal-dual framework known as matching coverage, in which dual feasibility is bypassed. Instead, only dual constraints corresponding to edges in an optimal matching are satisfied. Using this framework we also design and analyze an algorithm for the edge-weighted online bipartite matching problem with free disposal. We prove that for the case of bounded online degrees, the ratio is strictly above 0.5.
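
To make the discussion concrete, here is a hedged Python sketch of the unweighted Ranking algorithm on a general graph, the subroutine the paper's weighted variants build on (our reading of the standard algorithm, not code from the paper).

import random

def ranking_matching(adj):
    # Draw a uniformly random ranking of the vertices, scan them in rank
    # order, and match each still-free vertex to its free neighbor of
    # lowest rank.  adj is an adjacency list of a general graph.
    n = len(adj)
    perm = list(range(n))
    random.shuffle(perm)                  # perm[pos] = vertex at that rank
    rank = [0] * n
    for pos, v in enumerate(perm):
        rank[v] = pos
    match = [None] * n
    for u in perm:                        # increasing rank order
        if match[u] is None:
            free = [v for v in adj[u] if match[v] is None and v != u]
            if free:
                v = min(free, key=lambda w: rank[w])
                match[u], match[v] = v, u
    return match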

Cite as

Melika Abolhassani, T.-H. Hubert Chan, Fei Chen, Hossein Esfandiari, MohammadTaghi Hajiaghayi, Hamid Mahini, and Xiaowei Wu. Beating Ratio 0.5 for Weighted Oblivious Matching Problems. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 3:1-3:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{abolhassani_et_al:LIPIcs.ESA.2016.3,
  author =	{Abolhassani, Melika and Chan, T.-H. Hubert and Chen, Fei and Esfandiari, Hossein and Hajiaghayi, MohammadTaghi and Mahini, Hamid and Wu, Xiaowei},
  title =	{{Beating Ratio 0.5 for Weighted Oblivious Matching Problems}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{3:1--3:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.3},
  URN =		{urn:nbn:de:0030-drops-63443},
  doi =		{10.4230/LIPIcs.ESA.2016.3},
  annote =	{Keywords: Weighted matching, oblivious algorithms, Ranking, linear programming}
}
Document
Outer Common Tangents and Nesting of Convex Hulls in Linear Time and Constant Workspace

Authors: Mikkel Abrahamsen and Bartosz Walczak


Abstract
We describe the first algorithm to compute the outer common tangents of two disjoint simple polygons using linear time and only constant workspace. A tangent of a polygon is a line touching the polygon such that all of the polygon lies on the same side of the line. An outer common tangent of two polygons is a tangent of both polygons such that the polygons lie on the same side of the tangent. Each polygon is given as a read-only array of its corners in cyclic order. The algorithm detects if an outer common tangent does not exist, which is the case if and only if the convex hull of one of the polygons is contained in the convex hull of the other. Otherwise, two corners defining an outer common tangent are returned.
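
As a sanity check for the definition, the predicate below (our brute-force Python reference, far from the paper's linear-time, constant-workspace construction) tests whether the line through two given corners is an outer common tangent using only orientation tests.

def cross(o, a, b):
    # Orientation test: > 0 if o->a->b turns left, < 0 if right, 0 if collinear.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_outer_common_tangent(a, b, P, Q):
    # The line through corners a and b is an outer common tangent iff all
    # corners of both polygons lie (weakly) on one side of it.  Trying all
    # corner pairs this way is quadratic; the paper needs one linear pass.
    sides = [cross(a, b, p) for p in list(P) + list(Q)]
    return all(s >= 0 for s in sides) or all(s <= 0 for s in sides)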

Cite as

Mikkel Abrahamsen and Bartosz Walczak. Outer Common Tangents and Nesting of Convex Hulls in Linear Time and Constant Workspace. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{abrahamsen_et_al:LIPIcs.ESA.2016.4,
  author =	{Abrahamsen, Mikkel and Walczak, Bartosz},
  title =	{{Outer Common Tangents and Nesting of Convex Hulls in Linear Time and Constant Workspace}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.4},
  URN =		{urn:nbn:de:0030-drops-63465},
  doi =		{10.4230/LIPIcs.ESA.2016.4},
  annote =	{Keywords: simple polygon, common tangent, optimal algorithm, constant workspace}
}
Document
Sublinear Distance Labeling

Authors: Stephen Alstrup, Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Ely Porat


Abstract
A distance labeling scheme labels the n nodes of a graph with binary strings such that, given the labels of any two nodes, one can determine the distance in the graph between the two nodes by looking only at the labels. A D-preserving distance labeling scheme only returns precise distances between pairs of nodes that are at distance at least D from each other. In this paper we consider distance labeling schemes for the classical case of unweighted and undirected graphs. We present an O(n/D * log^2(D))-bit D-preserving distance labeling scheme, improving the previous bound by Bollobás et al. [SIAM J. Discrete Math. 2005]. We also give an almost matching lower bound of Omega(n/D). With our D-preserving distance labeling scheme as a building block, we additionally achieve the following results: 1. We present the first distance labeling scheme of size o(n) for sparse graphs (and hence bounded degree graphs). This addresses an open problem by Gavoille et al. [J. Algo. 2004], thereby separating the complexity from distance labeling in general graphs, which requires Omega(n) bits [Moon, Proc. of the Glasgow Math. Association 1965]. 2. For approximate r-additive labeling schemes, which return distances within an additive error of r, we show a scheme of size O(n/r * polylog(r*log(n))/log(n)) for r >= 2. This improves on the current best bound of O(n/r) by Alstrup et al. [SODA 2016] for sub-polynomial r, and is a generalization of a result by Gawrychowski et al. [arXiv preprint 2015] who showed this for r=2.
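
For orientation, the naive scheme below (our Python illustration, not the paper's) labels every node with its full BFS distance row, so a query is answered from the two labels alone; each label costs Theta(n log n) bits, which is exactly the regime the paper's sublinear and D-preserving schemes improve on.

from collections import deque

def bfs_row(adj, s):
    # Single-source BFS distances in an unweighted, undirected graph.
    dist = [-1] * len(adj)
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def make_labels(adj):
    # label(v) = v's entire distance row: Theta(n log n) bits per label.
    # The paper's D-preserving scheme gets O(n/D * log^2(D)) bits by only
    # guaranteeing exact answers for pairs at distance at least D.
    return [bfs_row(adj, v) for v in range(len(adj))]

def query(labels, u, v):
    # Decoding looks only at the two labels, as the model requires.
    return labels[u][v]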

Cite as

Stephen Alstrup, Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Ely Porat. Sublinear Distance Labeling. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{alstrup_et_al:LIPIcs.ESA.2016.5,
  author =	{Alstrup, Stephen and Dahlgaard, S{\o}ren and Knudsen, Mathias B{\ae}k Tejs and Porat, Ely},
  title =	{{Sublinear Distance Labeling}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{5:1--5:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.5},
  URN =		{urn:nbn:de:0030-drops-63479},
  doi =		{10.4230/LIPIcs.ESA.2016.5},
  annote =	{Keywords: Graph labeling schemes, Distance labeling, Graph theory, Sparse graphs}
}
Document
Probabilistic Routing for On-Street Parking Search

Authors: Tobias Arndt, Danijar Hafner, Thomas Kellermeier, Simon Krogmann, Armin Razmjou, Martin S. Krejca, Ralf Rothenberger, and Tobias Friedrich


Abstract
An estimated 30% of urban traffic is caused by search for parking spots [Shoup, 2005]. Suggesting routes along highly probable parking spots could reduce traffic. In this paper, we formalize parking search as a probabilistic problem on a road graph and show that it is NP-complete. We explore heuristics that optimize for the driving duration and the walking distance to the destination. Routes are constrained to reach a certain probability threshold of finding a spot. Empirically estimated probabilities of successful parking attempts are provided by TomTom on a per-street basis. We release these probabilities as a dataset of about 80,000 roads covering the Berlin area. This makes it possible to evaluate parking search algorithms on a real road network with realistic probabilities for the first time. However, for many other areas, parking probabilities are not openly available. Because they are costly to collect, we propose an algorithm that relies on conventional road attributes only. Our experiments show that this algorithm stays within a factor of 1.3 of the baseline in our cost measure. This leads to the conclusion that conventional road attributes may be sufficient to compute reasonably good parking search routes.
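
The probability constraint has a simple closed form if parking attempts on distinct street segments are treated as independent (a modeling assumption on our part, not necessarily the paper's): a route succeeds unless every segment fails, as the Python snippet below shows.

def route_success_probability(segment_probs):
    # P(find a spot) = 1 - prod_e (1 - p_e) under independent per-segment
    # success probabilities p_e.  Routes are constrained to push this
    # quantity above a chosen threshold.
    fail = 1.0
    for p in segment_probs:
        fail *= 1.0 - p
    return 1.0 - fail

# Example: segments with p = 0.2, 0.3, 0.5 give 1 - 0.8*0.7*0.5 = 0.72.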

Cite as

Tobias Arndt, Danijar Hafner, Thomas Kellermeier, Simon Krogmann, Armin Razmjou, Martin S. Krejca, Ralf Rothenberger, and Tobias Friedrich. Probabilistic Routing for On-Street Parking Search. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{arndt_et_al:LIPIcs.ESA.2016.6,
  author =	{Arndt, Tobias and Hafner, Danijar and Kellermeier, Thomas and Krogmann, Simon and Razmjou, Armin and Krejca, Martin S. and Rothenberger, Ralf and Friedrich, Tobias},
  title =	{{Probabilistic Routing for On-Street Parking Search}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.6},
  URN =		{urn:nbn:de:0030-drops-63489},
  doi =		{10.4230/LIPIcs.ESA.2016.6},
  annote =	{Keywords: parking search, on-street parking, probabilistic routing, constrained optimization, dataset}
}
Document
Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths

Authors: Moritz Baum, Thomas Bläsius, Andreas Gemsa, Ignaz Rutter, and Franziska Wegner


Abstract
Isocontours in road networks represent the area that is reachable from a source within a given resource limit. We study the problem of computing accurate isocontours in realistic, large-scale networks. We propose isocontours represented by polygons with a minimum number of segments that separate reachable and unreachable components of the network. Since the resulting problem is not known to be solvable in polynomial time, we introduce several heuristics that run in (almost) linear time and are simple enough to be implemented in practice. A key ingredient is a new practical linear-time algorithm for minimum-link paths in simple polygons. Experiments in a challenging realistic setting show excellent performance of our algorithms in practice, computing near-optimal solutions in a few milliseconds on average, even for long ranges.

Cite as

Moritz Baum, Thomas Bläsius, Andreas Gemsa, Ignaz Rutter, and Franziska Wegner. Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{baum_et_al:LIPIcs.ESA.2016.7,
  author =	{Baum, Moritz and Bl\"{a}sius, Thomas and Gemsa, Andreas and Rutter, Ignaz and Wegner, Franziska},
  title =	{{Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.7},
  URN =		{urn:nbn:de:0030-drops-63498},
  doi =		{10.4230/LIPIcs.ESA.2016.7},
  annote =	{Keywords: isocontours, separating polygons, minimum-link paths}
}
Document
Computing Equilibria in Markets with Budget-Additive Utilities

Authors: Xiaohui Bei, Jugal Garg, Martin Hoefer, and Kurt Mehlhorn


Abstract
We present the first analysis of Fisher markets with buyers that have budget-additive utility functions. Budget-additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. They extend the standard case of linear utilities and have been studied in a variety of other market models. In contrast to the frequently studied CES utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. Our main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods. It relies on a new descending-price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear Fisher markets. We complement this positive result with a number of hardness results for related computational questions. We prove that it is NP-hard to compute a market equilibrium that maximizes social welfare, and it is PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.
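
To pin down the utility class (a definitional sketch on our side, matching the standard meaning of "budget-additive"): buyer i's utility is additive in the allocation up to a satiation cap, then flat.

def budget_additive_utility(values, allocation, cap):
    # u_i(x) = min(c_i, sum_j v_ij * x_ij).  Linear utilities are the
    # special case cap = infinity; a finite cap is the global satiation
    # point the abstract refers to.
    return min(cap, sum(v * x for v, x in zip(values, allocation)))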

Cite as

Xiaohui Bei, Jugal Garg, Martin Hoefer, and Kurt Mehlhorn. Computing Equilibria in Markets with Budget-Additive Utilities. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bei_et_al:LIPIcs.ESA.2016.8,
  author =	{Bei, Xiaohui and Garg, Jugal and Hoefer, Martin and Mehlhorn, Kurt},
  title =	{{Computing Equilibria in Markets with Budget-Additive Utilities}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.8},
  URN =		{urn:nbn:de:0030-drops-63504},
  doi =		{10.4230/LIPIcs.ESA.2016.8},
  annote =	{Keywords: Budget-Additive Utility, Market Equilibrium, Equilibrium Computation}
}
Document
On the Lattice Distortion Problem

Authors: Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz


Abstract
We introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are. I.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to an n^{O(log(n))} factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a 2^{O(n*log(log(n))/log(n))} factor of optimal in polynomial time and within a n^{O(log(n))} factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.
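
In symbols, one standard way to formalize the abstract's "minimal distortion" (our hedged reading, using operator norms; not a quote from the paper):

% For full-rank lattices L_1, L_2, minimize over linear bijections T
% mapping one lattice onto the other the product of operator norms:
\[
  \operatorname{dist}(\mathcal{L}_1, \mathcal{L}_2)
  \;=\;
  \min_{\substack{T\ \text{linear bijection} \\ T(\mathcal{L}_1) = \mathcal{L}_2}}
  \; \lVert T \rVert \cdot \lVert T^{-1} \rVert ,
\]
% which is always at least 1, with equality exactly when T is a scaled
% orthogonal map, i.e., when the lattices are isomorphic.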

Cite as

Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz. On the Lattice Distortion Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.ESA.2016.9,
  author =	{Bennett, Huck and Dadush, Daniel and Stephens-Davidowitz, Noah},
  title =	{{On the Lattice Distortion Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.9},
  URN =		{urn:nbn:de:0030-drops-63519},
  doi =		{10.4230/LIPIcs.ESA.2016.9},
  annote =	{Keywords: lattices, lattice distortion, lattice isomorphism, geometry of numbers, basis reduction}
}
Document
Plurality Consensus in Arbitrary Graphs: Lessons Learned from Load Balancing

Authors: Petra Berenbrink, Tom Friedetzky, Peter Kling, Frederik Mallmann-Trenn, and Chris Wastell


Abstract
We consider plurality consensus in networks of n nodes. Initially, each node has one of k opinions. The nodes execute a (randomized) distributed protocol to agree on the plurality opinion (the opinion initially supported by the most nodes). In certain types of networks the nodes can be quite cheap and simple, and hence one seeks protocols that are not only time efficient but also simple and space efficient. Typically, protocols depend heavily on the employed communication mechanism, which ranges from sequential (only one pair of nodes communicates at any time) to fully parallel (all nodes communicate with all their neighbors at once) and everything in-between. We propose a framework to design protocols for a multitude of communication mechanisms. We introduce protocols that solve the plurality consensus problem and are, with probability 1-o(1), both time and space efficient. Our protocols are based on an interesting relationship between plurality consensus and distributed load balancing. This relationship allows us to design protocols that generalize the state of the art for a large range of problem parameters.
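
For a feel of the setting, here is a toy synchronous round at the fully parallel end of the communication spectrum (our illustration of a generic pull-based dynamic, not the paper's protocol).

import random
from collections import Counter

def pull_round(adj, opinion, samples=2):
    # Each node pulls the opinions of a few random neighbors and adopts
    # the most frequent opinion among the samples and its own (ties broken
    # arbitrarily).  The paper's framework covers a whole range of such
    # mechanisms, from sequential to fully parallel.
    new = list(opinion)
    for u in range(len(adj)):
        if adj[u]:
            seen = [opinion[random.choice(adj[u])] for _ in range(samples)]
            new[u] = Counter(seen + [opinion[u]]).most_common(1)[0][0]
    return new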

Cite as

Petra Berenbrink, Tom Friedetzky, Peter Kling, Frederik Mallmann-Trenn, and Chris Wastell. Plurality Consensus in Arbitrary Graphs: Lessons Learned from Load Balancing. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{berenbrink_et_al:LIPIcs.ESA.2016.10,
  author =	{Berenbrink, Petra and Friedetzky, Tom and Kling, Peter and Mallmann-Trenn, Frederik and Wastell, Chris},
  title =	{{Plurality Consensus in Arbitrary Graphs: Lessons Learned from Load Balancing}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{10:1--10:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.10},
  URN =		{urn:nbn:de:0030-drops-63610},
  doi =		{10.4230/LIPIcs.ESA.2016.10},
  annote =	{Keywords: Plurality Consensus, Distributed Computing, Load Balancing}
}
Document
On the Hardness of Learning Sparse Parities

Authors: Arnab Bhattacharyya, Ameet Gadekar, Suprovat Ghoshal, and Rishi Saket


Abstract
This work investigates the hardness of computing sparse solutions to systems of linear equations over F_2. Consider the k-EvenSet problem: given a homogeneous system of linear equations over F_2 on n variables, decide if there exists a nonzero solution of Hamming weight at most k (i.e., a k-sparse solution). While there is a simple O(n^{k/2})-time algorithm for it, establishing fixed parameter intractability for k-EvenSet has been a notorious open problem. Towards this goal, we show that unless k-Clique can be solved in n^{o(k)} time, k-EvenSet has no polynomial time algorithm when k = omega(log^2(n)). Our work also shows that the non-homogeneous generalization of the problem - which we call k-VectorSum - is W[1]-hard on instances where the number of equations is O(k*log(n)), improving on previous reductions which produced Omega(n) equations. We use the hardness of k-VectorSum as a starting point to prove the result for k-EvenSet, and additionally strengthen the former to show the hardness of approximately learning k-juntas. In particular, we prove that given a system of O(exp(O(k))*log(n)) linear equations, it is W[1]-hard to decide if there is a k-sparse linear form satisfying all the equations, or whether any function on at most k variables (a k-junta) satisfies at most a (1/2 + epsilon)-fraction of the equations, for any constant epsilon > 0. In the setting of computational learning, this shows hardness of approximate non-proper learning of k-parities. In a similar vein, we use the hardness of k-EvenSet to show that for any constant d, unless k-Clique can be solved in n^{o(k)} time, there is no poly(m,n)*2^{o(sqrt{k})} time algorithm to decide whether a given set of m points in F_2^n satisfies: (i) there exists a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the points, or (ii) any non-trivial degree-d polynomial P supported on at most k variables evaluates to zero on approximately a Pr_{z in F_2^n}[P(z) = 0] fraction of the points, i.e., P is fooled by the set of points. Lastly, we study approximation in the sparsity of the solution. Let the Gap-k-VectorSum problem be: given an instance of k-VectorSum of size n, decide if there exists a k-sparse solution, or every solution is of sparsity at least k' = (1+delta_0)k. Assuming the Exponential Time Hypothesis, we show that for some constants c_0, delta_0 > 0 there is no poly(n) time algorithm for Gap-k-VectorSum when k = omega((log log n)^{c_0}).
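
For concreteness, the brute-force check below decides k-EvenSet by trying every support of size at most k (our Python sketch; the abstract's O(n^{k/2})-time algorithm is a meet-in-the-middle refinement of this enumeration).

from itertools import combinations

def has_sparse_solution(rows, n, k):
    # rows: each homogeneous equation encoded as a bitmask over the n
    # variables.  A vector x solves the system iff every row has even
    # overlap with x's support.
    for w in range(1, k + 1):
        for support in combinations(range(n), w):
            x = 0
            for i in support:
                x |= 1 << i
            if all(bin(r & x).count("1") % 2 == 0 for r in rows):
                return True                # nonzero solution of weight <= k
    return False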

Cite as

Arnab Bhattacharyya, Ameet Gadekar, Suprovat Ghoshal, and Rishi Saket. On the Hardness of Learning Sparse Parities. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.ESA.2016.11,
  author =	{Bhattacharyya, Arnab and Gadekar, Ameet and Ghoshal, Suprovat and Saket, Rishi},
  title =	{{On the Hardness of Learning Sparse Parities}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.11},
  URN =		{urn:nbn:de:0030-drops-63628},
  doi =		{10.4230/LIPIcs.ESA.2016.11},
  annote =	{Keywords: Fixed Parameter Tractable, Juntas, Minimum Distance of Code, Pseudorandom Generators}
}
Document
Online Algorithms for Multi-Level Aggregation

Authors: Marcin Bienkowski, Martin Böhm, Jaroslaw Byrka, Marek Chrobak, Christoph Dürr, Lukas Folwarczny, Lukasz Jez, Jiri Sgall, Nguyen Kim Thang, and Pavel Vesely


Abstract
In the Multi-Level Aggregation Problem (MLAP), requests arrive at the nodes of an edge-weighted tree T, and have to be served eventually. A service is defined as a subtree X of T that contains its root. This subtree X serves all requests that are pending in the nodes of X, and the cost of this service is equal to the total weight of X. Each request also incurs waiting cost between its arrival and service times. The objective is to minimize the total waiting cost of all requests plus the total cost of all service subtrees. MLAP is a generalization of some well-studied optimization problems; for example, for trees of depth 1, MLAP is equivalent to the TCP Acknowledgment Problem, while for trees of depth 2, it is equivalent to the Joint Replenishment Problem. Aggregation problems for trees of arbitrary depth arise in multicasting, sensor networks, communication in organization hierarchies, and in supply-chain management. The instances of MLAP associated with these applications are naturally online, in the sense that aggregation decisions need to be made without information about future requests. Constant-competitive online algorithms are known for MLAP with one or two levels. However, it has been open whether there exist constant-competitive online algorithms for trees of depth more than 2. Addressing this open problem, we give the first constant-competitive online algorithm for networks with an arbitrary (fixed) number of levels. The competitive ratio is O(D^4*2^D), where D is the depth of T. The algorithm works for arbitrary waiting cost functions, including the variant with deadlines. We include several additional results in the paper. We show that a standard lower-bound technique for MLAP, based on so-called Single-Phase instances, cannot give super-constant lower bounds (as a function of the tree depth). This result is established by giving an online algorithm with optimal competitive ratio 4 for such instances on arbitrary trees. We also study the MLAP variant when the tree is a path, for which we give a lower bound of 4 on the competitive ratio, improving the lower bound known for general MLAP. We complement this with a matching upper bound for the deadline setting.
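
Since depth-1 MLAP is exactly TCP Acknowledgment, the classic 2-competitive balance rule for that special case gives a feel for the waiting-cost/service-cost tradeoff; the Python sketch below is our illustration of that textbook rule, not the paper's general-depth algorithm.

def tcp_ack_schedule(arrivals, C):
    # Balance rule for depth-1 MLAP (= TCP Acknowledgment): trigger a
    # service the moment the accumulated waiting cost of pending requests,
    # sum over pending arrivals a of (t - a), reaches the service cost C.
    # This online rule is the classic 2-competitive algorithm.
    arrivals = sorted(arrivals)
    services, pending, i = [], [], 0
    while i < len(arrivals) or pending:
        if not pending:
            pending.append(arrivals[i]); i += 1
        t = (C + sum(pending)) / len(pending)    # time waiting cost hits C
        if i < len(arrivals) and arrivals[i] < t:
            pending.append(arrivals[i]); i += 1  # another request arrives first
        else:
            services.append(t); pending = []     # serve and clear pending
    return services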

Cite as

Marcin Bienkowski, Martin Böhm, Jaroslaw Byrka, Marek Chrobak, Christoph Dürr, Lukas Folwarczny, Lukasz Jez, Jiri Sgall, Nguyen Kim Thang, and Pavel Vesely. Online Algorithms for Multi-Level Aggregation. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bienkowski_et_al:LIPIcs.ESA.2016.12,
  author =	{Bienkowski, Marcin and B\"{o}hm, Martin and Byrka, Jaroslaw and Chrobak, Marek and D\"{u}rr, Christoph and Folwarczny, Lukas and Jez, Lukasz and Sgall, Jiri and Kim Thang, Nguyen and Vesely, Pavel},
  title =	{{Online Algorithms for Multi-Level Aggregation}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.12},
  URN =		{urn:nbn:de:0030-drops-63637},
  doi =		{10.4230/LIPIcs.ESA.2016.12},
  annote =	{Keywords: algorithmic aspects of networks, online algorithms, scheduling and resource allocation}
}
Document
Compact and Fast Sensitivity Oracles for Single-Source Distances

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti


Abstract
Let s denote a distinguished source vertex of a non-negatively real-weighted and undirected graph G with n vertices and m edges. In this paper we present two efficient single-source approximate-distance sensitivity oracles, namely compact data structures which are able to quickly report an approximate (by a multiplicative stretch factor) distance from s to any node of G following the failure of any edge in G. More precisely, we first present a sensitivity oracle of size O(n) which is able to report 2-approximate distances from the source in O(1) time. Then, we further develop our construction by building, for any 0<epsilon<1, another sensitivity oracle of size O(n*1/epsilon*log(1/epsilon)) which is able to report a (1+epsilon)-approximate distance from s to any vertex of G in O(log(n)*1/epsilon*log(1/epsilon)) time. Thus, this latter oracle is essentially optimal as far as size and stretch are concerned, and it only asks for a logarithmic query time. Finally, our results are complemented with a space lower bound for the related class of single-source additively-stretched sensitivity oracles, which is helpful to realize the hardness of designing compact oracles of this type.

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti. Compact and Fast Sensitivity Oracles for Single-Source Distances. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.ESA.2016.13,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido},
  title =	{{Compact and Fast Sensitivity Oracles for Single-Source Distances}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.13},
  URN =		{urn:nbn:de:0030-drops-63640},
  doi =		{10.4230/LIPIcs.ESA.2016.13},
  annote =	{Keywords: fault-tolerant shortest-path tree, approximate distance, distance sensitivity oracle}
}
Document
Efficient Algorithms with Asymmetric Read and Write Costs

Authors: Guy E. Blelloch, Jeremy T. Fineman, Phillip B. Gibbons, Yan Gu, and Julian Shun


Abstract
In several emerging technologies for computer memory (main memory), reading is significantly cheaper than writing. Such asymmetry in memory costs poses a fundamentally different model from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M,omega)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access, and where reading from the large memory has unit cost, but writing has cost omega >> 1. For FFT and sorting networks we show a lower bound cost of Omega(omega*n*log_{omega*M}(n)), which indicates that it is not possible to achieve asymptotic improvements with cheaper reads when omega is bounded by a polynomial in M. Moreover, there is an asymptotic gap (of min(omega, log(n)/log(omega*M))) between the cost of sorting networks and comparison sorting in the model. This contrasts with the RAM, and most other models, in which the asymptotic costs are the same. We also show a lower bound for computations on an n*n diamond DAG of Omega(omega*n^2/M) cost, which indicates no asymptotic improvement is achievable with fast reads. However, we show that for the minimum edit distance problem (and related problems), which would seem to be a diamond DAG, we can beat this lower bound with an algorithm with only O(omega*n^2/(M*min(omega^{1/3},M^{1/2}))) cost. To achieve this we make use of a "path sketch" technique that is forbidden in a strict DAG computation. Finally, we show several interesting upper bounds for shortest path problems, minimum spanning trees, and other problems. A common theme in many of the upper bounds is that they require redundant computation and a tradeoff between reads and writes.
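
The model's accounting is simple to state; the one-liner below is our simplification (it charges only large-memory traffic, leaving small-memory operations free, which abstracts away details the paper's model makes precise).

def aram_cost(reads, writes, omega):
    # Cost in the (M, omega)-ARAM: unit cost per read from the large
    # asymmetric memory, cost omega >> 1 per write to it; work inside
    # the small symmetric memory of size M is not charged here.
    return reads + omega * writes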

Cite as

Guy E. Blelloch, Jeremy T. Fineman, Phillip B. Gibbons, Yan Gu, and Julian Shun. Efficient Algorithms with Asymmetric Read and Write Costs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{blelloch_et_al:LIPIcs.ESA.2016.14,
  author =	{Blelloch, Guy E. and Fineman, Jeremy T. and Gibbons, Phillip B. and Gu, Yan and Shun, Julian},
  title =	{{Efficient Algorithms with Asymmetric Read and Write Costs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.14},
  URN =		{urn:nbn:de:0030-drops-63656},
  doi =		{10.4230/LIPIcs.ESA.2016.14},
  annote =	{Keywords: Computational Model, Lower Bounds, Shortest-paths, Non-Volatile Memory, Sorting Networks, Fast Fourier Transform, Diamond DAG, Minimum Spanning Tree}
}
Document
Hyperbolic Random Graphs: Separators and Treewidth

Authors: Thomas Bläsius, Tobias Friedrich, and Anton Krohmer


Abstract
Hyperbolic random graphs share many common properties with complex real-world networks; e.g., small diameter and average distance, large clustering coefficient, and a power-law degree sequence with adjustable exponent beta. Thus, when analyzing algorithms for large networks, potentially more realistic results can be achieved by assuming the input to be a hyperbolic random graph of size n. The worst-case run-time is then replaced by the expected run-time or by bounds that hold with high probability (whp), i.e., with probability 1-O(1/n). Though many structural properties of hyperbolic random graphs have been studied, almost no algorithmic results are known. Divide-and-conquer is an important algorithmic design principle that works particularly well if the instance admits small separators. We show that hyperbolic random graphs in fact have comparatively small separators. More precisely, we show that they can be expected to have balanced separator hierarchies with separators of size O(n^{3/2-beta/2}), O(log n), and O(1) if 2 < beta < 3, beta = 3, and 3 < beta, respectively. We infer that these graphs have whp a treewidth of O(n^{3/2-beta/2}), O(log^2 n), and O(log n), respectively. For 2 < beta < 3, this matches a known lower bound. To demonstrate the usefulness of our results, we give several algorithmic applications.

Cite as

Thomas Bläsius, Tobias Friedrich, and Anton Krohmer. Hyperbolic Random Graphs: Separators and Treewidth. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 15:1-15:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.ESA.2016.15,
  author =	{Bl\"{a}sius, Thomas and Friedrich, Tobias and Krohmer, Anton},
  title =	{{Hyperbolic Random Graphs: Separators and Treewidth}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{15:1--15:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.15},
  URN =		{urn:nbn:de:0030-drops-63667},
  doi =		{10.4230/LIPIcs.ESA.2016.15},
  annote =	{Keywords: hyperbolic random graphs, scale-free networks, power-law graphs, separators, treewidth}
}
Document
Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane

Authors: Thomas Bläsius, Tobias Friedrich, Anton Krohmer, and Sören Laue


Abstract
Hyperbolic geometry appears to be intrinsic in many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in the hyperbolic space. All previous approaches to computing such embeddings require a runtime of Omega(n^2). Our algorithm achieves quasilinear runtime, which makes it the first algorithm that can embed networks with hundreds of thousands of nodes in less than one hour. We demonstrate the performance of our algorithm on artificial and real networks. In all typical metrics, such as log-likelihood and greedy routing, our algorithm discovers embeddings that are very close to the ground truth.

Cite as

Thomas Bläsius, Tobias Friedrich, Anton Krohmer, and Sören Laue. Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.ESA.2016.16,
  author =	{Bl\"{a}sius, Thomas and Friedrich, Tobias and Krohmer, Anton and Laue, S\"{o}ren},
  title =	{{Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.16},
  URN =		{urn:nbn:de:0030-drops-63670},
  doi =		{10.4230/LIPIcs.ESA.2016.16},
  annote =	{Keywords: hyperbolic random graphs, embedding, power-law graphs, hyperbolic plane}
}
Document
Fully Dynamic Spanners with Worst-Case Update Time

Authors: Greg Bodwin and Sebastian Krinninger


Abstract
An alpha-spanner of a graph G is a subgraph H such that H preserves all distances of G within a factor of alpha. In this paper, we give fully dynamic algorithms for maintaining a spanner H of a graph G undergoing edge insertions and deletions with worst-case guarantees on the running time after each update. In particular, our algorithms maintain: - a 3-spanner with ~O(n^{1+1/2}) edges with worst-case update time ~O(n^{3/4}), or - a 5-spanner with ~O(n^{1+1/3}) edges with worst-case update time ~O(n^{5/9}). These size/stretch tradeoffs are best possible (up to logarithmic factors). They can be extended to the weighted setting at very minor cost. Our algorithms are randomized and correct with high probability against an oblivious adversary. We also further extend our techniques to construct a 5-spanner with suboptimal size/stretch tradeoff, but improved worst-case update time. To the best of our knowledge, these are the first dynamic spanner algorithms with sublinear worst-case update time guarantees. Since it is known how to maintain a spanner using small amortized but large worst-case update time [Baswana et al., SODA'08], obtaining algorithms with strong worst-case bounds, as presented in this paper, seems to be the next natural step for this problem.
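
For contrast with the dynamic guarantees above, the static greedy baseline is easy to state; the Python sketch below (our illustration, not the paper's algorithm) builds a (2k-1)-spanner of an unweighted graph with O(n^{1+1/k}) edges, so k = 2 and k = 3 correspond to the 3- and 5-spanner size/stretch tradeoffs discussed above.

from collections import deque

def greedy_spanner(n, edges, k):
    # Static greedy rule: add edge (u, v) only if u and v are currently at
    # distance greater than 2k-1 in the spanner built so far.  The result
    # is a (2k-1)-spanner; its girth exceeds 2k, so it has O(n^{1+1/k}) edges.
    H = [[] for _ in range(n)]
    for u, v in edges:
        # BFS in H from u, truncated at depth 2k-1
        dist = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if dist[x] == 2 * k - 1:
                continue
            for y in H[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        if v not in dist:                 # dist_H(u, v) > 2k-1: keep the edge
            H[u].append(v)
            H[v].append(u)
    return H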

Cite as

Greg Bodwin and Sebastian Krinninger. Fully Dynamic Spanners with Worst-Case Update Time. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bodwin_et_al:LIPIcs.ESA.2016.17,
  author =	{Bodwin, Greg and Krinninger, Sebastian},
  title =	{{Fully Dynamic Spanners with Worst-Case Update Time}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.17},
  URN =		{urn:nbn:de:0030-drops-63688},
  doi =		{10.4230/LIPIcs.ESA.2016.17},
  annote =	{Keywords: Dynamic graph algorithms, spanners}
}
Document
Fixed-Parameter Approximability of Boolean MinCSPs

Authors: Édouard Bonnet, László Egri, and Dániel Marx


Abstract
The minimum unsatisfiability version of a constraint satisfaction problem (CSP) asks for an assignment where the number of unsatisfied constraints is minimum possible, or equivalently, asks for a minimum-size set of constraints whose deletion makes the instance satisfiable. For a finite set Gamma of constraints, we denote by CSP(Gamma) the restriction of the problem where each constraint is from Gamma. The polynomial-time solvability and the polynomial-time approximability of CSP(Gamma) were fully characterized by [Khanna et al. SICOMP 2000]. Here we study the fixed-parameter (FP-) approximability of the problem: given an instance and an integer k, one has to find a solution of size at most g(k) in time f(k)n^{O(1)} if a solution of size at most k exists. We especially focus on the case of constant-factor FP-approximability. Our main result classifies each finite constraint language Gamma into one of three classes: (1) CSP(Gamma) has a constant-factor FP-approximation; (2) CSP(Gamma) has a (constant-factor) FP-approximation if and only if Nearest Codeword has a (constant-factor) FP-approximation; (3) CSP(Gamma) has no FP-approximation, unless FPT=W[P]. We show that problems in the second class do not have constant-factor FP-approximations if both the Exponential-Time Hypothesis (ETH) and the Linear PCP Conjecture (LPC) hold. We also show that such an approximation would imply the existence of an FP-approximation for the k-Densest Subgraph problem with ratio 1-epsilon for any epsilon>0.

Cite as

Édouard Bonnet, László Egri, and Dániel Marx. Fixed-Parameter Approximability of Boolean MinCSPs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bonnet_et_al:LIPIcs.ESA.2016.18,
  author =	{Bonnet, \'{E}douard and Egri, L\'{a}szl\'{o} and Marx, D\'{a}niel},
  title =	{{Fixed-Parameter Approximability of Boolean MinCSPs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.18},
  URN =		{urn:nbn:de:0030-drops-63694},
  doi =		{10.4230/LIPIcs.ESA.2016.18},
  annote =	{Keywords: constraint satisfaction problems, approximability, fixed-parameter tractability}
}
Document
Parameterized Hardness of Art Gallery Problems

Authors: Édouard Bonnet and Tillmann Miltzow


Abstract
Given a simple polygon P on n vertices, two points x,y in P are said to be visible to each other if the line segment between x and y is contained in P. The Point Guard Art Gallery problem asks for a minimum set S such that every point in P is visible from a point in S. The Vertex Guard Art Gallery problem asks for such a set S subset of the vertices of P. A point in the set S is referred to as a guard. For both variants, we rule out an f(k)*n^{o(k/log k)} algorithm, for any computable function f, where k := |S| is the number of guards, unless the Exponential Time Hypothesis fails. These lower bounds almost match the n^{O(k)} algorithms that exist for both problems.

Cite as

Édouard Bonnet and Tillmann Miltzow. Parameterized Hardness of Art Gallery Problems. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 19:1-19:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bonnet_et_al:LIPIcs.ESA.2016.19,
  author =	{Bonnet, \'{E}douard and Miltzow, Tillmann},
  title =	{{Parameterized Hardness of Art Gallery Problems}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{19:1--19:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.19},
  URN =		{urn:nbn:de:0030-drops-63700},
  doi =		{10.4230/LIPIcs.ESA.2016.19},
  annote =	{Keywords: art gallery problem, computational geometry, parameterized complexity, ETH-based lower bound, geometric set cover/hitting set}
}
Document
KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation

Authors: Michele Borassi and Emanuele Natale


Abstract
We present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time |E|^{1/2+o(1)} with high probability, obtaining a significant speedup with respect to the Theta(|E|) worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks, as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the k most central nodes. Furthermore, our analysis is general, and it might be extended to other settings, as well.
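
The basic estimator that the adaptive sampling refines is easy to sketch (our Python illustration of the textbook sampling approach for undirected graphs, not KADABRA itself): sample random node pairs, draw a uniformly random shortest path between them by backward sampling, and count internal vertices.

import random
from collections import deque

def sample_betweenness(adj, samples):
    # Estimate (normalized) betweenness: count[v]/samples approximates the
    # probability that v lies strictly inside a random shortest path.
    # KADABRA adds adaptive stopping rules and faster bidirectional
    # path sampling on top of this basic scheme.
    n = len(adj)
    count = [0.0] * n
    for _ in range(samples):
        s, t = random.sample(range(n), 2)
        dist = [-1] * n; sigma = [0] * n      # distances, shortest-path counts
        dist[s], sigma[s] = 0, 1
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
        if dist[t] == -1:
            continue                          # t unreachable; skip sample
        v = t                                 # walk back, predecessor chosen
        while v != s:                         # proportionally to sigma
            preds = [u for u in adj[v] if dist[u] == dist[v] - 1]
            v = random.choices(preds, weights=[sigma[u] for u in preds])[0]
            if v != s:
                count[v] += 1
    return [c / samples for c in count]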

Cite as

Michele Borassi and Emanuele Natale. KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{borassi_et_al:LIPIcs.ESA.2016.20,
  author =	{Borassi, Michele and Natale, Emanuele},
  title =	{{KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.20},
  URN =		{urn:nbn:de:0030-drops-63712},
  doi =		{10.4230/LIPIcs.ESA.2016.20},
  annote =	{Keywords: Betweenness centrality, shortest path algorithm, graph mining, sampling, network analysis}
}
Document
Separation of Cycle Inequalities for the Periodic Timetabling Problem

Authors: Ralf Borndörfer, Heide Hoppmann, and Marika Karbstein


Abstract
Cycle inequalities play an important role in the polyhedral study of the periodic timetabling problem. We give the first pseudo-polynomial time separation algorithm for cycle inequalities, and we give a rigorous proof for the pseudo-polynomial time separability of the change-cycle inequalities. The efficiency of these cutting planes is demonstrated on real-world instances of the periodic timetabling problem.

Cite as

Ralf Borndörfer, Heide Hoppmann, and Marika Karbstein. Separation of Cycle Inequalities for the Periodic Timetabling Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 21:1-21:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{borndorfer_et_al:LIPIcs.ESA.2016.21,
  author =	{Bornd\"{o}rfer, Ralf and Hoppmann, Heide and Karbstein, Marika},
  title =	{{Separation of Cycle Inequalities for the Periodic Timetabling Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{21:1--21:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.21},
  URN =		{urn:nbn:de:0030-drops-63722},
  doi =		{10.4230/LIPIcs.ESA.2016.21},
  annote =	{Keywords: periodic timetabling, cycle inequalities, separation algorithm}
}
Document
Mapping Polygons to the Grid with Small Hausdorff and Fréchet Distance

Authors: Quirijn W. Bouts, Irina Kostitsyna, Marc van Kreveld, Wouter Meulemans, Willem Sonke, and Kevin Verbeek


Abstract
We show how to represent a simple polygon P by a (pixel-based) grid polygon Q that is simple and whose Hausdorff or Fréchet distance to P is small. For any simple polygon P, a grid polygon exists with constant Hausdorff distance between their boundaries and their interiors. Moreover, we show that with a realistic input assumption we can also realize constant Fréchet distance between the boundaries. We present algorithms accompanying these constructions, heuristics to improve their output while keeping the distance bounds, and experiments to assess the output.

Cite as

Quirijn W. Bouts, Irina Kostitsyna, Marc van Kreveld, Wouter Meulemans, Willem Sonke, and Kevin Verbeek. Mapping Polygons to the Grid with Small Hausdorff and Fréchet Distance. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 22:1-22:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bouts_et_al:LIPIcs.ESA.2016.22,
  author =	{Bouts, Quirijn W. and Kostitsyna, Irina and van Kreveld, Marc and Meulemans, Wouter and Sonke, Willem and Verbeek, Kevin},
  title =	{{Mapping Polygons to the Grid with Small Hausdorff and Fr\'{e}chet Distance}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{22:1--22:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.22},
  URN =		{urn:nbn:de:0030-drops-63738},
  doi =		{10.4230/LIPIcs.ESA.2016.22},
  annote =	{Keywords: grid mapping, Hausdorff distance, Fr\'{e}chet distance, digital geometry}
}
Document
Hitting Set for Hypergraphs of Low VC-dimension

Authors: Karl Bringmann, László Kozma, Shay Moran, and N. S. Narayanaswamy


Abstract
We study the complexity of the Hitting Set problem in set systems (hypergraphs) that avoid certain sub-structures. In particular, we characterize the classical and parameterized complexity of the problem when the Vapnik-Chervonenkis dimension (VC-dimension) of the input is small. VC-dimension is a natural measure of complexity of set systems. Several tractable instances of Hitting Set with a geometric or graph-theoretical flavor are known to have low VC-dimension. In set systems of bounded VC-dimension, Hitting Set is known to admit efficient and almost optimal approximation algorithms (Brönnimann and Goodrich, 1995; Even, Rawitz, and Shahar, 2005; Agarwal and Pan, 2014). In contrast to these approximation results, a low VC-dimension does not necessarily imply tractability in the parameterized sense. In fact, we show that Hitting Set is W[1]-hard already on inputs with VC-dimension 2, even if the VC-dimension of the dual set system is also 2. Thus, Hitting Set is very unlikely to be fixed-parameter tractable even in this arguably simple case. This answers an open question raised by King in 2010. For set systems whose (primal or dual) VC-dimension is 1, we show that Hitting Set is solvable in polynomial time. To bridge the gap in complexity between the classes of inputs with VC-dimension 1 and 2, we use a measure that is more fine-grained than VC-dimension. In terms of this measure, we identify a sharp threshold where the complexity of Hitting Set transitions from polynomial-time solvable to NP-hard. The tractable class that lies just under the threshold is a generalization of Edge Cover, and thus extends the domain of polynomial-time tractability of Hitting Set.
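For reference, the parameter itself is easy to pin down in code: a brute-force, exponential-time computation of the VC-dimension of a finite set system, included as a definitional sketch only (none of the paper's algorithms compute it this way).

from itertools import combinations

def vc_dimension(universe, sets):
    # Largest d such that some d-subset of the universe is shattered,
    # i.e., all 2^d of its subsets arise as intersections with the sets.
    sets = [frozenset(s) for s in sets]
    best = 0
    for size in range(1, len(universe) + 1):
        for cand in map(frozenset, combinations(universe, size)):
            if len({cand & s for s in sets}) == 2 ** size:
                best = size
                break
        else:
            return best  # nothing of this size shattered; stop
    return best

# Intervals over {1,...,4} shatter pairs but never triples, so VC-dim 2:
points = [1, 2, 3, 4]
intervals = [set(range(a, b + 1)) for a in points for b in points if a <= b]
print(vc_dimension(points, intervals))  # 2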

Cite as

Karl Bringmann, László Kozma, Shay Moran, and N. S. Narayanaswamy. Hitting Set for Hypergraphs of Low VC-dimension. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2016.23,
  author =	{Bringmann, Karl and Kozma, L\'{a}szl\'{o} and Moran, Shay and Narayanaswamy, N. S.},
  title =	{{Hitting Set for Hypergraphs of Low VC-dimension}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{23:1--23:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.23},
  URN =		{urn:nbn:de:0030-drops-63749},
  doi =		{10.4230/LIPIcs.ESA.2016.23},
  annote =	{Keywords: hitting set, VC-dimension}
}
Document
New Algorithms, Better Bounds, and a Novel Model for Online Stochastic Matching

Authors: Brian Brubach, Karthik Abinav Sankararaman, Aravind Srinivasan, and Pan Xu


Abstract
Online matching has received significant attention over the last 15 years due to its close connection to Internet advertising. Since the seminal work of Karp, Vazirani, and Vazirani achieves an optimal (1 - 1/e) competitive ratio in the standard adversarial online model, much effort has gone into developing useful online models that incorporate some stochasticity in the arrival process. One such popular model is the "known I.I.D. model", where different customer-types arrive online from a known distribution. We develop algorithms with improved competitive ratios for some basic variants of this model with integral arrival rates, including: (a) the case of general weighted edges, where we improve the best-known ratio of 0.667 due to [Haeupler, Mirrokni and Zadimoghaddam WINE 2011] to 0.705; and (b) the vertex-weighted case, where we improve the 0.7250 ratio of [Jaillet and Lu Math. Oper. Res. 2013] to 0.7299. We also consider two extensions: one is "known I.I.D." with non-integral arrival rates and stochastic rewards; the other is "known I.I.D." b-matching with non-integral arrival rates and stochastic rewards. We present a simple non-adaptive algorithm which works well simultaneously on the two extensions. One of the key ingredients of our improvement is the following (offline) approach to bipartite-matching polytopes with additional constraints. We first add several valid constraints in order to get a good fractional solution f; however, these give us less control over the structure of f. We next remove all these additional constraints and randomly move from f to a feasible point on the matching polytope with all coordinates being from the set {0, 1/k, 2/k, ..., 1} for a chosen integer k. The structure of this solution is inspired by [Jaillet and Lu Math. Oper. Res. 2013] and is tractable for algorithm design and analysis. The appropriate random move preserves many of the removed constraints (approximately [exactly] with high probability [in expectation]). This underlies some of our improvements and, we hope, could be of independent interest.

Cite as

Brian Brubach, Karthik Abinav Sankararaman, Aravind Srinivasan, and Pan Xu. New Algorithms, Better Bounds, and a Novel Model for Online Stochastic Matching. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{brubach_et_al:LIPIcs.ESA.2016.24,
  author =	{Brubach, Brian and Sankararaman, Karthik Abinav and Srinivasan, Aravind and Xu, Pan},
  title =	{{New Algorithms, Better Bounds, and a Novel Model for Online Stochastic Matching}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.24},
  URN =		{urn:nbn:de:0030-drops-63753},
  doi =		{10.4230/LIPIcs.ESA.2016.24},
  annote =	{Keywords: Ad-Allocation, Online Matching, Randomized Algorithms}
}
Document
Solving k-SUM Using Few Linear Queries

Authors: Jean Cardinal, John Iacono, and Aurélien Ooms


Abstract
In the k-SUM problem, we are given n real numbers and must determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < d, where d is the ceiling of k/2. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^2 n) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in ~O(n^{d+8}) time and performs O(n^3 log^2 n) linear queries on the input. Thus, it is possible to have an algorithm with a runtime almost identical (up to the +8) to that of the best known algorithm while, for the first time, the number of queries on the input is bounded by a polynomial independent of k. The O(n^3 log^2 n) bound on the number of linear queries is also tighter than that of any known algorithm solving k-SUM, even when unlimited total time is allowed outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime compared to known algorithms, we deepen the understanding of this canonical problem, which is a cornerstone of complexity-within-P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth ~O(n^3) for the k-SUM problem.
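As a baseline that fixes the problem statement, the exhaustive O(n^k) test is a one-liner in Python; the paper's point is that, in the decision-tree sense, O(n^3 log^2 n) linear queries (signs of subset sums) already suffice. A definitional sketch only:

from itertools import combinations

def k_sum(nums, k, tol=1e-9):
    # Brute-force k-SUM: does any k-subset sum to (approximately) zero?
    return any(abs(sum(c)) <= tol for c in combinations(nums, k))

assert k_sum([3.5, -1.5, -2.0, 7.0], 3)   # 3.5 - 1.5 - 2.0 = 0
assert not k_sum([1.0, 2.0, 4.0], 2)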

Cite as

Jean Cardinal, John Iacono, and Aurélien Ooms. Solving k-SUM Using Few Linear Queries. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{cardinal_et_al:LIPIcs.ESA.2016.25,
  author =	{Cardinal, Jean and Iacono, John and Ooms, Aur\'{e}lien},
  title =	{{Solving k-SUM Using Few Linear Queries}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.25},
  URN =		{urn:nbn:de:0030-drops-63763},
  doi =		{10.4230/LIPIcs.ESA.2016.25},
  annote =	{Keywords: k-SUM problem, linear decision trees, point location, $\varepsilon$-nets}
}
Document
Optimal Staged Self-Assembly of General Shapes

Authors: Cameron Chalk, Eric Martinez, Robert Schweller, Luis Vega, Andrew Winslow, and Tim Wylie


Abstract
We analyze the number of stages, tiles, and bins needed to construct n * n squares and scaled shapes in the staged tile assembly model. In particular, we prove that there exists a staged system with b bins and t tile types that assembles an n * n square using O((log n - tb - t log t)/b^2 + log log b/log t) stages, and that Omega((log n - tb - t log t)/b^2) stages are necessary for almost all n. For a shape S, we prove that O((K(S) - tb - t log t)/b^2 + (log log b)/log t) stages suffice and Omega((K(S) - tb - t log t)/b^2) are necessary for the assembly of a scaled version of S, where K(S) denotes the Kolmogorov complexity of S. Similarly tight bounds are also obtained when more powerful flexible glue functions are permitted. These are the first staged results that hold for all choices of b and t and generalize prior results. The upper bound constructions use a new technique for efficiently converting both sources of system complexity, namely the tile types and the mixing graph, into a "bit string" assembly.

Cite as

Cameron Chalk, Eric Martinez, Robert Schweller, Luis Vega, Andrew Winslow, and Tim Wylie. Optimal Staged Self-Assembly of General Shapes. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 26:1-26:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chalk_et_al:LIPIcs.ESA.2016.26,
  author =	{Chalk, Cameron and Martinez, Eric and Schweller, Robert and Vega, Luis and Winslow, Andrew and Wylie, Tim},
  title =	{{Optimal Staged Self-Assembly of General Shapes}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{26:1--26:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.26},
  URN =		{urn:nbn:de:0030-drops-63776},
  doi =		{10.4230/LIPIcs.ESA.2016.26},
  annote =	{Keywords: Tile self-assembly, 2HAM, aTAM, DNA computing, biocomputing}
}
Document
Homotopy Measures for Representative Trajectories

Authors: Erin Chambers, Irina Kostitsyna, Maarten Löffler, and Frank Staals


Abstract
An important task in trajectory analysis is defining a meaningful representative for a cluster of similar trajectories. Formally defining and computing such a representative r is a challenging problem. We propose and discuss two new definitions, both of which use only the geometry of the input trajectories. The definitions are based on the homotopy area as a measure of similarity between two curves, which is the minimum area swept when deforming one curve into the other. In the first definition we wish to minimize the maximum homotopy area between r and any input trajectory, whereas in the second definition we wish to minimize the sum of the homotopy areas between r and the input trajectories. For both definitions computing an optimal representative is NP-hard. However, for the case of minimizing the sum of the homotopy areas, an optimal representative can be found efficiently for a natural class of restricted inputs, namely, when the arrangement of trajectories forms a directed acyclic graph.

Cite as

Erin Chambers, Irina Kostitsyna, Maarten Löffler, and Frank Staals. Homotopy Measures for Representative Trajectories. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 27:1-27:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chambers_et_al:LIPIcs.ESA.2016.27,
  author =	{Chambers, Erin and Kostitsyna, Irina and L\"{o}ffler, Maarten and Staals, Frank},
  title =	{{Homotopy Measures for Representative Trajectories}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{27:1--27:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.27},
  URN =		{urn:nbn:de:0030-drops-63783},
  doi =		{10.4230/LIPIcs.ESA.2016.27},
  annote =	{Keywords: trajectory analysis, representative trajectory, homotopy area}
}
Document
Optimal Reachability and a Space-Time Tradeoff for Distance Queries in Constant-Treewidth Graphs

Authors: Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Andreas Pavlogiannis


Abstract
We consider data-structures for answering reachability and distance queries on constant-treewidth graphs with n nodes, on the standard RAM computational model with wordsize W = Theta(log n). Our first contribution is a data-structure that, after O(n) preprocessing time, allows (1) pair reachability queries in O(1) time; and (2) single-source reachability queries in O(n/log n) time. This is (asymptotically) optimal and is faster than DFS/BFS when answering more than a constant number of single-source queries. The data-structure uses O(n) space at all times. Our second contribution is a space-time tradeoff data-structure for distance queries. For any epsilon in [1/2, 1], we provide a data-structure with polynomial preprocessing time that allows pair queries in O(n^{1-epsilon} alpha(n)) time, where alpha is the inverse of the Ackermann function, and at all times uses O(n^epsilon) space. The input graph G is not counted in the space complexity.

Cite as

Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Andreas Pavlogiannis. Optimal Reachability and a Space-Time Tradeoff for Distance Queries in Constant-Treewidth Graphs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.ESA.2016.28,
  author =	{Chatterjee, Krishnendu and Ibsen-Jensen, Rasmus and Pavlogiannis, Andreas},
  title =	{{Optimal Reachability and a Space-Time Tradeoff for Distance Queries in Constant-Treewidth Graphs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.28},
  URN =		{urn:nbn:de:0030-drops-63797},
  doi =		{10.4230/LIPIcs.ESA.2016.28},
  annote =	{Keywords: Graph algorithms, Constant-treewidth graphs, Reachability queries, Distance queries}
}
Document
An ILP-based Proof System for the Crossing Number Problem

Authors: Markus Chimani and Tilo Wiedera


Abstract
Formally, approaches based on mathematical programming are able to find provably optimal solutions. However, the demands on a verifiable formal proof are typically much higher than the guarantees we can sensibly attribute to implementations of mathematical programs. We consider this in the context of the crossing number problem, one of the most prominent problems in topological graph theory. The problem asks for the minimum number of edge crossings in any drawing of a given graph. Graph-theoretic proofs for this problem are known to be notoriously hard to obtain. At the same time, proofs even for very specific graphs are often of interest in crossing number research, as they can, e.g., form the basis for inductive proofs. We propose a system to automatically generate a formal proof based on an ILP computation. Such a proof is (relatively) easily verifiable, and does not require the understanding of any complex ILP codes. As such, we hope our proof system may serve as a showcase for the necessary steps and central design goals of how to establish formal proof systems based on mathematical programming formulations.

Cite as

Markus Chimani and Tilo Wiedera. An ILP-based Proof System for the Crossing Number Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 29:1-29:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chimani_et_al:LIPIcs.ESA.2016.29,
  author =	{Chimani, Markus and Wiedera, Tilo},
  title =	{{An ILP-based Proof System for the Crossing Number Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{29:1--29:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.29},
  URN =		{urn:nbn:de:0030-drops-63803},
  doi =		{10.4230/LIPIcs.ESA.2016.29},
  annote =	{Keywords: automatic formal proof, crossing number, integer linear programming}
}
Document
Strategic Contention Resolution with Limited Feedback

Authors: George Christodoulou, Martin Gairing, Sotiris Nikoletseas, Christoforos Raptopoulos, and Paul Spirakis


Abstract
In this paper, we study contention resolution protocols from a game-theoretic perspective. We focus on acknowledgment-based protocols, where a user gets feedback from the channel only when she attempts transmission. In this case she will learn whether her transmission was successful or not. Users that do not transmit will not receive any feedback. We are interested in equilibrium protocols, where no player has an incentive to deviate. The limited feedback makes the design of equilibrium protocols a hard task as best response policies usually have to be modeled as Partially Observable Markov Decision Processes, which are hard to analyze. Nevertheless, we show how to circumvent this for the case of two players and present an equilibrium protocol. For many players, we give impossibility results for a large class of acknowledgment-based protocols, namely age-based and backoff protocols with finite expected finishing time. Finally, we provide an age-based equilibrium protocol, which has infinite expected finishing time, but every player finishes in linear time with high probability.

Cite as

George Christodoulou, Martin Gairing, Sotiris Nikoletseas, Christoforos Raptopoulos, and Paul Spirakis. Strategic Contention Resolution with Limited Feedback. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 30:1-30:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{christodoulou_et_al:LIPIcs.ESA.2016.30,
  author =	{Christodoulou, George and Gairing, Martin and Nikoletseas, Sotiris and Raptopoulos, Christoforos and Spirakis, Paul},
  title =	{{Strategic Contention Resolution with Limited Feedback}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{30:1--30:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.30},
  URN =		{urn:nbn:de:0030-drops-63813},
  doi =		{10.4230/LIPIcs.ESA.2016.30},
  annote =	{Keywords: contention resolution, acknowledgment-based protocols, game theory}
}
Document
Cell-Probe Lower Bounds for Bit Stream Computation

Authors: Raphaël Clifford, Markus Jalsenius, and Benjamin Sach


Abstract
We revisit the complexity of online computation in the cell probe model. We consider a class of problems where we are first given a fixed pattern F of n symbols and then one symbol arrives at a time in a stream. After each symbol has arrived we must output some function of F and the n-length suffix of the arriving stream. Cell probe bounds of Omega(delta lg n/w) have previously been shown for both convolution and Hamming distance in this setting, where delta is the size of a symbol in bits and w in Omega(lg n) is the cell size in bits. However, when delta is a constant, as it is in many natural situations, the existing approaches no longer give us non-trivial bounds. We introduce a lop-sided information transfer proof technique which enables us to prove meaningful lower bounds even for constant size input alphabets. Our new framework is capable of proving amortised cell probe lower bounds of Omega(lg^2 n/(w lg lg n)) time per arriving bit. We demonstrate this technique by showing a new lower bound for a problem known as pattern matching with address errors or the L_2-rearrangement distance problem. This gives the first non-trivial cell probe lower bound for any online problem on bit streams that still holds when the cell size is large.

Cite as

Raphaël Clifford, Markus Jalsenius, and Benjamin Sach. Cell-Probe Lower Bounds for Bit Stream Computation. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 31:1-31:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{clifford_et_al:LIPIcs.ESA.2016.31,
  author =	{Clifford, Rapha\"{e}l and Jalsenius, Markus and Sach, Benjamin},
  title =	{{Cell-Probe Lower Bounds for Bit Stream Computation}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{31:1--31:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.31},
  URN =		{urn:nbn:de:0030-drops-63827},
  doi =		{10.4230/LIPIcs.ESA.2016.31},
  annote =	{Keywords: Cell-probe lower bounds, algorithms, data streaming}
}
Document
Stochastic Streams: Sample Complexity vs. Space Complexity

Authors: Michael Crouch, Andrew McGregor, Gregory Valiant, and David P. Woodruff


Abstract
We address the trade-off between the computational resources needed to process a large data set and the number of samples available from the data set. Specifically, we consider the following abstraction: we receive a potentially infinite stream of IID samples from some unknown distribution D, and are tasked with computing some function f(D). If the stream is observed for time t, how much memory, s, is required to estimate f(D)? We refer to t as the sample complexity and s as the space complexity. The main focus of this paper is investigating the trade-offs between the space and sample complexity. We study these trade-offs for several canonical problems studied in the data stream model: estimating the collision probability, i.e., the second moment of a distribution, deciding if a graph is connected, and approximating the dimension of an unknown subspace. Our results are based on techniques for simulating different classical sampling procedures in this model, emulating random walks given a sequence of IID samples, as well as leveraging a characterization between communication bounded protocols and statistical query algorithms.
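The low-space end of this trade-off is easy to illustrate for the first problem: the collision probability sum_i p(i)^2 can be estimated from pairs of IID samples with O(1) working memory, at the cost of many samples. A minimal Python sketch of that endpoint only; the paper's contribution is the full trade-off curve.

import random

def estimate_collision_prob(sample, t):
    # Two fresh IID draws collide with probability exactly sum_i p(i)^2,
    # so the fraction of colliding pairs is an unbiased estimator.
    hits = 0
    for _ in range(t):
        hits += sample() == sample()
    return hits / t

rng = random.Random(0)
print(estimate_collision_prob(lambda: rng.randint(1, 6), 100000))  # ~1/6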

Cite as

Michael Crouch, Andrew McGregor, Gregory Valiant, and David P. Woodruff. Stochastic Streams: Sample Complexity vs. Space Complexity. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 32:1-32:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{crouch_et_al:LIPIcs.ESA.2016.32,
  author =	{Crouch, Michael and McGregor, Andrew and Valiant, Gregory and Woodruff, David P.},
  title =	{{Stochastic Streams: Sample Complexity vs. Space Complexity}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{32:1--32:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.32},
  URN =		{urn:nbn:de:0030-drops-63838},
  doi =		{10.4230/LIPIcs.ESA.2016.32},
  annote =	{Keywords: data streams, sample complexity, frequency moments, graph connectivity, subspace approximation}
}
Document
Counting Matchings with k Unmatched Vertices in Planar Graphs

Authors: Radu Curticapean


Abstract
We consider the problem of counting matchings in planar graphs. While perfect matchings in planar graphs can be counted by a classical polynomial-time algorithm [Kasteleyn 1961], the problem of counting all matchings (possibly containing unmatched vertices, also known as defects) is known to be #P-complete on planar graphs [Jerrum 1987]. To interpolate between matchings and perfect matchings, we study the parameterized problem of counting matchings with k unmatched vertices in a planar graph G, on input G and k. This setting has a natural interpretation in statistical physics, and it is a special case of counting perfect matchings in k-apex graphs (graphs that become planar after removing k vertices). Starting from a recent #W[1]-hardness proof for counting perfect matchings on k-apex graphs [Curticapean and Xia 2015], we obtain: - Counting matchings with k unmatched vertices in planar graphs is #W[1]-hard. - In contrast, given a plane graph G with s distinguished faces, there is an O(2^s n^3) time algorithm for counting those matchings with k unmatched vertices such that all unmatched vertices lie on the distinguished faces. This implies an f(k,s) n^O(1) time algorithm for counting perfect matchings in k-apex graphs whose apex neighborhood is covered by s faces.

Cite as

Radu Curticapean. Counting Matchings with k Unmatched Vertices in Planar Graphs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 33:1-33:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{curticapean:LIPIcs.ESA.2016.33,
  author =	{Curticapean, Radu},
  title =	{{Counting Matchings with k Unmatched Vertices in Planar Graphs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{33:1--33:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.33},
  URN =		{urn:nbn:de:0030-drops-63847},
  doi =		{10.4230/LIPIcs.ESA.2016.33},
  annote =	{Keywords: counting complexity, parameterized complexity, matchings, planar graphs}
}
Document
On Interference Among Moving Sensors and Related Problems

Authors: Jean-Lou De Carufel, Matthew J. Katz, Matias Korman, André van Renssen, Marcel Roeloffzen, and Shakhar Smorodinsky


Abstract
We show that for any set of n moving points in R^d and any parameter 2 <= k < n, one can select a fixed non-empty subset of the points of size O(k log k), such that the Voronoi diagram of this subset is "balanced" at any given time (i.e., it contains O(n/k) points per cell). We also show that the bound O(k log k) is near optimal even for the one dimensional case in which points move linearly in time. As an application, we show that one can assign communication radii to the sensors of a network of n moving sensors so that at any given time, their interference is O((n log n)^0.5). This is optimal up to an O((log n)^0.5) factor.

Cite as

Jean-Lou De Carufel, Matthew J. Katz, Matias Korman, André van Renssen, Marcel Roeloffzen, and Shakhar Smorodinsky. On Interference Among Moving Sensors and Related Problems. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 34:1-34:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{decarufel_et_al:LIPIcs.ESA.2016.34,
  author =	{De Carufel, Jean-Lou and Katz, Matthew J. and Korman, Matias and van Renssen, Andr\'{e} and Roeloffzen, Marcel and Smorodinsky, Shakhar},
  title =	{{On Interference Among Moving Sensors and Related Problems}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{34:1--34:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.34},
  URN =		{urn:nbn:de:0030-drops-63850},
  doi =		{10.4230/LIPIcs.ESA.2016.34},
  annote =	{Keywords: Range spaces, Voronoi diagrams, moving points, facility location, interference minimization}
}
Document
SimBa: An Efficient Tool for Approximating Rips-Filtration Persistence via Simplicial Batch-Collapse

Authors: Tamal K. Dey, Dayu Shi, and Yusu Wang


Abstract
In topological data analysis, point cloud data P extracted from a metric space is often analyzed by computing the persistence diagram or barcodes of a sequence of Rips complexes built on P, indexed by a scale parameter. Unfortunately, even for input of moderate size, the size of the Rips complex may become prohibitively large as the scale parameter increases. Starting with the Sparse Rips filtration introduced by Sheehy, some existing methods aim to reduce the size of the complex so as to improve the time efficiency as well. However, as we demonstrate, existing approaches still fall short of scaling well, especially for high dimensional data. In this paper, we investigate the advantages and limitations of existing approaches. Based on insights gained from the experiments, we propose an efficient new algorithm, called SimBa, for approximating the persistent homology of Rips filtrations with quality guarantees. Our new algorithm leverages a batch collapse strategy as well as a new sparse Rips-like filtration. We experiment on a variety of low and high dimensional data sets. We show that our strategy achieves a significant size reduction, and our algorithm for approximating Rips filtration persistence is orders of magnitude faster than existing methods in practice.

Cite as

Tamal K. Dey, Dayu Shi, and Yusu Wang. SimBa: An Efficient Tool for Approximating Rips-Filtration Persistence via Simplicial Batch-Collapse. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{dey_et_al:LIPIcs.ESA.2016.35,
  author =	{Dey, Tamal K. and Shi, Dayu and Wang, Yusu},
  title =	{{SimBa: An Efficient Tool for Approximating Rips-Filtration Persistence via Simplicial Batch-Collapse}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.35},
  URN =		{urn:nbn:de:0030-drops-63869},
  doi =		{10.4230/LIPIcs.ESA.2016.35},
  annote =	{Keywords: Rips filtration, Homology groups, Persistence, Topological data analysis}
}
Document
Exponential Time Paradigms Through the Polynomial Time Lens

Authors: Andrew Drucker, Jesper Nederlof, and Rahul Santhanam


Abstract
We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as branching and dynamic programming, and to shed light on the true complexity of various problems. As one instantiation, we model branching using the notion of witness compression, i.e., reducibility to the circuit satisfiability problem parameterized by the number of variables of the circuit. We show this is equivalent to the previously studied notion of `OPP-algorithms', and provide a technique for proving conditional lower bounds for witness compressions via a constructive variant of AND-composition, a notion previously studied in the theory of preprocessing. In the context of parameterized complexity we use this to show that problems such as Pathwidth and Treewidth, and Independent Set parameterized by pathwidth, do not have witness compression unless NP subseteq coNP/poly. Since these problems admit fast fixed parameter tractable algorithms via dynamic programming, this shows that dynamic programming can be stronger than branching, under a standard complexity hypothesis. Our approach has applications outside parameterized complexity as well: for example, we show that if a polynomial time algorithm outputs a maximum independent set of a given planar graph on n vertices with probability exp(-n^{1-epsilon}) for some epsilon>0, then NP subseteq coNP/poly. This negative result dims the prospects for one very natural approach to sub-exponential time algorithms for problems on planar graphs. As two further, more exploratory illustrations of our approach, we model algorithms based on inclusion-exclusion or group algebras via the notion of "parity compression", and we model a subclass of dynamic programming algorithms with the notion of "disjunctive dynamic programming". These models give us a way to naturally classify various parameterized problems with FPT algorithms. In the case of the dynamic programming model, we show that Independent Set parameterized by pathwidth is complete for this model.

Cite as

Andrew Drucker, Jesper Nederlof, and Rahul Santhanam. Exponential Time Paradigms Through the Polynomial Time Lens. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{drucker_et_al:LIPIcs.ESA.2016.36,
  author =	{Drucker, Andrew and Nederlof, Jesper and Santhanam, Rahul},
  title =	{{Exponential Time Paradigms Through the Polynomial Time Lens}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.36},
  URN =		{urn:nbn:de:0030-drops-63871},
  doi =		{10.4230/LIPIcs.ESA.2016.36},
  annote =	{Keywords: exponential time paradigms, branching, dynamic programming, lower bounds}
}
Document
On the Power of Advice and Randomization for Online Bipartite Matching

Authors: Christoph Dürr, Christian Konrad, and Marc Renault


Abstract
While randomized online algorithms have access to a sequence of uniform random bits, deterministic online algorithms with advice have access to a sequence of advice bits, i.e., bits that are set by an all-powerful oracle prior to the processing of the request sequence. Advice bits are at least as helpful as random bits, but how helpful are they? In this work, we investigate the power of advice bits and random bits for online maximum bipartite matching (MBM). The well-known Karp-Vazirani-Vazirani algorithm [Karp, Vazirani and Vazirani 90] is an optimal randomized (1-1/e)-competitive algorithm for MBM that requires access to Theta(n log n) uniform random bits. We show that Omega(log(1/epsilon) n) advice bits are necessary and O(1/epsilon^5 n) sufficient in order to obtain a (1-epsilon)-competitive deterministic advice algorithm. Furthermore, for a large natural class of deterministic advice algorithms, we prove that Omega(log log log n) advice bits are required in order to improve on the 1/2-competitiveness of the best deterministic online algorithm, while it is known that O(log n) bits are sufficient [Böckenhauer, Komm, Královic and Královic 2011]. Lastly, we give a randomized online algorithm that uses cn random bits, for integers c >= 1, and achieves a competitive ratio that approaches 1-1/e very quickly as c increases. For example, if c = 10, then the difference between 1-1/e and the achieved competitive ratio is less than 0.0002.
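For concreteness, the randomized baseline discussed above, the Karp-Vazirani-Vazirani RANKING algorithm, fits in a few lines of Python; a minimal sketch with hypothetical input conventions (the paper's question is how many random or advice bits such a guarantee really needs).

import random

def ranking(offline, arrivals, rng=random.Random(0)):
    # Draw one random ranking of the offline side (Theta(n log n) random
    # bits), then match each arriving online vertex to its highest-ranked
    # free neighbour. (1 - 1/e)-competitive in expectation.
    rank = {v: i for i, v in enumerate(rng.sample(offline, len(offline)))}
    matched, matching = set(), []
    for u, neighbours in arrivals:  # online vertices arrive one by one
        free = [v for v in neighbours if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)
            matched.add(v)
            matching.append((u, v))
    return matching

print(ranking(["a", "b", "c"],
              [("u1", ["a", "b"]), ("u2", ["a"]), ("u3", ["b", "c"])]))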

Cite as

Christoph Dürr, Christian Konrad, and Marc Renault. On the Power of Advice and Randomization for Online Bipartite Matching. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 37:1-37:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{durr_et_al:LIPIcs.ESA.2016.37,
  author =	{D\"{u}rr, Christoph and Konrad, Christian and Renault, Marc},
  title =	{{On the Power of Advice and Randomization for Online Bipartite Matching}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{37:1--37:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.37},
  URN =		{urn:nbn:de:0030-drops-63888},
  doi =		{10.4230/LIPIcs.ESA.2016.37},
  annote =	{Keywords: On-line algorithms, Bipartite matching, Randomization}
}
Document
BlockQuicksort: Avoiding Branch Mispredictions in Quicksort

Authors: Stefan Edelkamp and Armin Weiss


Abstract
Since the work of Kaligosi and Sanders (2006), it is well-known that Quicksort - which is commonly considered one of the fastest in-place sorting algorithms - suffers in an essential way from branch mispredictions. We present a novel approach to address this problem by partially decoupling control from data flow: in order to perform the partitioning, we split the input into blocks of constant size (we propose 128 data elements); then, all elements in one block are compared with the pivot and the outcomes of the comparisons are stored in a buffer. In a second pass, the respective elements are rearranged. By doing so, we avoid conditional branches that depend on comparison outcomes altogether (except in the final Insertionsort). Moreover, we prove that for a static branch predictor the average total number of branch mispredictions is at most epsilon n log n + O(n) for some small epsilon depending on the block size when sorting n elements. Our experimental results are promising: when sorting random integer data, we achieve an increase in speed (number of elements sorted per second) of more than 80% over the GCC implementation of C++ std::sort. For many other types of data and non-random inputs, there is also a significant speedup over std::sort. Only in a few special cases, such as sorted or almost sorted inputs, can std::sort beat our implementation. Moreover, even on random input permutations, our implementation is slightly faster than an implementation of the highly tuned Super Scalar Sample Sort, which uses a linear amount of additional space.
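The core idea can be sketched compactly. The following simplified, one-sided (Lomuto-style) Python variant shows how comparison outcomes become buffered indices rather than branches; the actual BlockQuicksort partitioner is two-sided and in-place in C++, and the misprediction savings only materialize in a compiled language.

BLOCK = 128  # block size proposed in the paper

def block_partition(a, pivot):
    # Move all elements <= pivot to a prefix of the list and return the
    # prefix length. Per block: pass 1 buffers the offsets of the small
    # elements (a data-dependent store, not a data-dependent branch);
    # pass 2 swaps the buffered elements into place.
    store = 0  # current boundary of the "small" prefix
    for start in range(0, len(a), BLOCK):
        block = range(start, min(start + BLOCK, len(a)))
        buf = [i for i in block if a[i] <= pivot]  # pass 1
        for i in buf:                              # pass 2
            a[store], a[i] = a[i], a[store]
            store += 1
    return store

data = [5, 3, 8, 1, 9, 2, 7]
m = block_partition(data, 5)
assert all(x <= 5 for x in data[:m]) and all(x > 5 for x in data[m:])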

Cite as

Stefan Edelkamp and Armin Weiss. BlockQuicksort: Avoiding Branch Mispredictions in Quicksort. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{edelkamp_et_al:LIPIcs.ESA.2016.38,
  author =	{Edelkamp, Stefan and Weiss, Armin},
  title =	{{BlockQuicksort: Avoiding Branch Mispredictions in Quicksort}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.38},
  URN =		{urn:nbn:de:0030-drops-63890},
  doi =		{10.4230/LIPIcs.ESA.2016.38},
  annote =	{Keywords: in-place sorting, Quicksort, branch mispredictions, lean programs}
}
Document
Counting Linear Extensions: Parameterizations by Treewidth

Authors: Eduard Eiben, Robert Ganian, Kustaa Kangas, and Sebastian Ordyniak


Abstract
We consider the #P-complete problem of counting the number of linear extensions of a poset (#LE); a fundamental problem in order theory with applications in a variety of distinct areas. In particular, we study the complexity of #LE parameterized by the well-known decompositional parameter treewidth for two natural graphical representations of the input poset, i.e., the cover and the incomparability graph. Our main result shows that #LE is fixed-parameter intractable parameterized by the treewidth of the cover graph. This resolves an open problem recently posed in the Dagstuhl seminar on Exact Algorithms. On the positive side we show that #LE becomes fixed-parameter tractable parameterized by the treewidth of the incomparability graph.
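To fix notation, the counting problem itself (before any parameterization) admits an n!-time brute force in Python; a definitional sketch only, far removed from the treewidth-based algorithms studied here.

from itertools import permutations

def count_linear_extensions(elements, less_than):
    # less_than: set of ordered pairs (a, b) with a < b in the poset.
    # An extension is a total order of the elements respecting all pairs.
    return sum(
        all(order.index(a) < order.index(b) for a, b in less_than)
        for order in permutations(elements)
    )

assert count_linear_extensions("ab", set()) == 2          # antichain
assert count_linear_extensions("ab", {("a", "b")}) == 1   # chain a < b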

Cite as

Eduard Eiben, Robert Ganian, Kustaa Kangas, and Sebastian Ordyniak. Counting Linear Extensions: Parameterizations by Treewidth. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 39:1-39:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{eiben_et_al:LIPIcs.ESA.2016.39,
  author =	{Eiben, Eduard and Ganian, Robert and Kangas, Kustaa and Ordyniak, Sebastian},
  title =	{{Counting Linear Extensions: Parameterizations by Treewidth}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{39:1--39:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.39},
  URN =		{urn:nbn:de:0030-drops-63903},
  doi =		{10.4230/LIPIcs.ESA.2016.39},
  annote =	{Keywords: Partially ordered sets, Linear extensions, Parameterized Complexity, Structural parameters, Treewidth}
}
Document
A Constant Approximation Algorithm for Scheduling Packets on Line Networks

Authors: Guy Even, Moti Medina, and Adi Rosén


Abstract
In this paper we improve the approximation ratio for the problem of scheduling packets on line networks with bounded buffers with the aim of maximizing the throughput. Each node in the network has a local buffer of bounded size B, and each edge (or link) can transmit a limited number c of packets in every time unit. The input to the problem consists of a set of packet requests, each defined by a source node, a destination node, and a release time. We denote by n the size of the network. A solution for this problem is a schedule that delivers (some of the) packets to their destinations without violating the capacity constraints of the network (buffers or edges). Our goal is to design an efficient algorithm that computes a schedule that maximizes the number of packets that arrive at their respective destinations. We give a randomized approximation algorithm with constant approximation ratio for the case where the buffer-size to link-capacity ratio, B/c, does not depend on the input size. This improves over the previously best result of O(log^* n) [Räcke and Rosén SPAA 2009]. Our improvement is based on a new combinatorial lemma that we prove, stating, roughly speaking, that if packets are allowed to stay put in buffers only a limited number of time steps, 2d, where d is the longest source-destination distance, then the optimal solution decreases by only a constant factor. This claim was not previously known in the integral (unsplittable, zero-one) case, and may find additional applications for routing and scheduling algorithms. While we are not able to give the same improvement for the related problem where packets have hard deadlines, our algorithm does support "soft deadlines". That is, if packets have deadlines, we achieve a constant approximation ratio when the produced solution is allowed to miss deadlines by at most log n time units.

Cite as

Guy Even, Moti Medina, and Adi Rosén. A Constant Approximation Algorithm for Scheduling Packets on Line Networks. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{even_et_al:LIPIcs.ESA.2016.40,
  author =	{Even, Guy and Medina, Moti and Ros\'{e}n, Adi},
  title =	{{A Constant Approximation Algorithm for Scheduling Packets on Line Networks}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.40},
  URN =		{urn:nbn:de:0030-drops-63524},
  doi =		{10.4230/LIPIcs.ESA.2016.40},
  annote =	{Keywords: approximation algorithms, linear programming, randomized rounding, packet scheduling, admission control}
}
Document
Distributed Signaling Games

Authors: Moran Feldman, Moshe Tennenholtz, and Omri Weinstein


Abstract
The study of the algorithmic and computational complexity of designing efficient signaling schemes for mechanisms aiming to optimize social welfare or revenue is a recurring theme in recent computer science literature. In reality, however, information is typically not held by a central authority, but is distributed among multiple sources (third-party "mediators"), a fact that dramatically changes the strategic and combinatorial nature of the signaling problem. In this paper we introduce distributed signaling games, using display advertising as a canonical example for introducing this foundational framework. A distributed signaling game may be a pure coordination game (i.e., a distributed optimization task) or a non-cooperative game. In the context of pure coordination games, we show a wide gap between the computational complexity of the centralized and distributed signaling problems, proving that distributed coordination on revenue-optimal signaling is a much harder problem than its "centralized" counterpart. In the context of non-cooperative games, the outcome generated by the mediators' signals may have a different value to each of them, typically because the auctioneer wishes to align the mediators' incentives with his own by compensating them relative to the marginal benefit of their signals. We design a mechanism for this problem via a novel application of Shapley's value, and show that it possesses a few interesting economic properties.
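Since the compensation mechanism rests on Shapley's value, a tiny exact computation in Python may help fix ideas; this sketches only the classical solution concept, not the paper's mechanism or how it applies the value.

from itertools import permutations

def shapley_values(players, worth):
    # Average each player's marginal contribution over all orderings;
    # exponential time, fine for a handful of players. `worth` maps a
    # frozenset of players to the coalition's value.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += worth(coalition | {p}) - worth(coalition)
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

# Two symmetric mediators whose signals are only useful together:
worth = lambda s: 1.0 if len(s) == 2 else 0.0
print(shapley_values(["m1", "m2"], worth))  # {'m1': 0.5, 'm2': 0.5}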

Cite as

Moran Feldman, Moshe Tennenholtz, and Omri Weinstein. Distributed Signaling Games. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{feldman_et_al:LIPIcs.ESA.2016.41,
  author =	{Feldman, Moran and Tennenholtz, Moshe and Weinstein, Omri},
  title =	{{Distributed Signaling Games}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.41},
  URN =		{urn:nbn:de:0030-drops-63536},
  doi =		{10.4230/LIPIcs.ESA.2016.41},
  annote =	{Keywords: Signaling, display advertising, mechanism design, shapley value}
}
Document
New Algorithms for Maximum Disjoint Paths Based on Tree-Likeness

Authors: Krzysztof Fleszar, Matthias Mnich, and Joachim Spoerhase


Abstract
We study the classical NP-hard problems of finding maximum-size subsets from given sets of k terminal pairs that can be routed via edge-disjoint paths (MaxEDP) or node-disjoint paths (MaxNDP) in a given graph. The approximability of MaxEDP/NDP is currently not well understood; the best known lower bound is Omega(log^{1/2 - epsilon} n), assuming NP not subseteq ZPTIME(n^{poly log n}). This constitutes a significant gap to the best known approximation upper bound of O(n^{1/2}) due to Chekuri et al. (2006), and closing this gap is currently one of the big open problems in approximation algorithms. In their seminal paper, Raghavan and Thompson (Combinatorica, 1987) introduce the technique of randomized rounding for LPs; their technique gives an O(1)-approximation when edges (or nodes) may be used by O(log n/log log n) paths. In this paper, we strengthen the above fundamental results. We provide new bounds formulated in terms of the feedback vertex set number r of a graph, which measures its vertex deletion distance to a forest. In particular, we obtain the following. - For MaxEDP, we give an O(r^{0.5} log^{1.5}(kr))-approximation algorithm. As r <= n, up to logarithmic factors, our result strengthens the best known ratio O(n^{0.5}) due to Chekuri et al. - Further, we show how to route Omega(opt) pairs with congestion O(log(kr)/log log(kr)), strengthening the bound obtained by the classic approach of Raghavan and Thompson. - For MaxNDP, we give an algorithm that computes the optimal answer in time (k+r)^{O(r)} * n. This is a substantial improvement on the run time of 2^k * r^{O(r)} * n, which can be obtained via an algorithm by Scheffler. We complement these positive results by proving that MaxEDP is NP-hard even for r=1, and MaxNDP is W[1]-hard for parameter r. This shows that neither problem is fixed-parameter tractable in r unless FPT = W[1] and that our approximability results are relevant even for very small constant values of r.

Cite as

Krzysztof Fleszar, Matthias Mnich, and Joachim Spoerhase. New Algorithms for Maximum Disjoint Paths Based on Tree-Likeness. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 42:1-42:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{fleszar_et_al:LIPIcs.ESA.2016.42,
  author =	{Fleszar, Krzysztof and Mnich, Matthias and Spoerhase, Joachim},
  title =	{{New Algorithms for Maximum Disjoint Paths Based on Tree-Likeness}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{42:1--42:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.42},
  URN =		{urn:nbn:de:0030-drops-63542},
  doi =		{10.4230/LIPIcs.ESA.2016.42},
  annote =	{Keywords: disjoint paths, approximation algorithms, feedback vertex set}
}
Document
Streaming Property Testing of Visibly Pushdown Languages

Authors: Nathanaël François, Frédéric Magniez, Michel de Rougemont, and Olivier Serre


Abstract
In the context of formal language recognition, we demonstrate the superiority of streaming property testers over standalone streaming algorithms and property testers. Initiated by Feigenbaum et al., a streaming property tester is a streaming algorithm that recognizes a language under the property testing approximation: it must distinguish inputs in the language from those that are epsilon-far from it, while using the smallest possible memory (rather than limiting its number of input queries). Our main result is a streaming epsilon-property tester for visibly pushdown languages (VPL) with memory space poly(log n / epsilon). Our construction proceeds in three steps. First, we simulate a visibly pushdown automaton in one pass, using a stack of small height whose items, however, can be of linear size. In a second step, those items are replaced by small sketches. Those sketches rely on a notion of suffix-sampling that we introduce. This sampling is the key idea for combining the benefits of streaming algorithms and property testers in the third step. Indeed, the last step relies on a (non-streaming) property tester for weighted regular languages based on a previous tester by Alon et al. This tester can directly be used for streaming testing special cases of instances of VPL that are already hard for both streaming algorithms and property testers. We then use it to decide the correctness of completed items, given their sketches, before removing them from the stack.
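
The first step of the construction, a one-pass stack simulation of a visibly pushdown automaton, can be illustrated on a toy visibly pushdown language of well-nested brackets (a simplified stand-in for a general VPA; in the actual tester, stack items are replaced by small sketches).

def one_pass_vpa(stream):
    # Push on call symbols, pop on matching return symbols; internal
    # symbols leave the stack untouched. Accept iff the stack empties.
    match = {")": "(", "]": "["}
    stack = []
    for c in stream:
        if c in "([":
            stack.append(c)
        elif c in ")]":
            if not stack or stack.pop() != match[c]:
                return False
    return not stack

print(one_pass_vpa("(a[b]c)"))  # True
print(one_pass_vpa("(]"))       # False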

Cite as

Nathanaël François, Frédéric Magniez, Michel de Rougemont, and Olivier Serre. Streaming Property Testing of Visibly Pushdown Languages. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 43:1-43:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{francois_et_al:LIPIcs.ESA.2016.43,
  author =	{Fran\c{c}ois, Nathana\"{e}l and Magniez, Fr\'{e}d\'{e}ric and de Rougemont, Michel and Serre, Olivier},
  title =	{{Streaming Property Testing of Visibly Pushdown Languages}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{43:1--43:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.43},
  URN =		{urn:nbn:de:0030-drops-63559},
  doi =		{10.4230/LIPIcs.ESA.2016.43},
  annote =	{Keywords: Streaming Algorithm, Property Testing, Visibly Pushdown Languages}
}
Document
Streaming Pattern Matching with d Wildcards

Authors: Shay Golan, Tsvi Kopelowitz, and Ely Porat


Abstract
In the pattern matching with d wildcards problem we are given a text T of length n and a pattern P of length m that contains d wildcard characters, each denoted by a special symbol '?'. A wildcard character matches any other character. The goal is to establish for each m-length substring of T whether it matches P. In the streaming model variant of the pattern matching with d wildcards problem the text T arrives one character at a time and the goal is to report, before the next character arrives, if the last m characters match P while using only o(m) words of space. In this paper we introduce two new algorithms for the d wildcard pattern matching problem in the streaming model. The first is a randomized Monte Carlo algorithm that is parameterized by a constant 0<=delta<=1. This algorithm uses ~O(d^{1-delta}) amortized time per character and ~O(d^{1+delta}) words of space. The second algorithm, which is used as a black box in the first algorithm, is a randomized Monte Carlo algorithm which uses O(d+log m) worst-case time per character and O(d log m) words of space.
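
Streaming matchers of this kind are typically built on Karp-Rabin fingerprints. The sketch below shows only that basic rolling-fingerprint building block, for a pattern without wildcards; the paper's algorithms add the machinery needed to handle the d wildcard positions in o(m) space.

from collections import deque
import random

def fingerprint_matches(stream, pattern, q=(1 << 61) - 1):
    # Karp-Rabin rolling fingerprint of the last m characters, compared
    # against the pattern's fingerprint after every arrival. The window
    # is kept here only to slide the hash; a genuine o(m)-space
    # streaming matcher must avoid storing it.
    m = len(pattern)
    r = random.randrange(2, q)           # random evaluation point mod prime q
    rm = pow(r, m - 1, q)                # weight of the outgoing character
    ph = 0
    for ch in pattern:
        ph = (ph * r + ord(ch)) % q
    h, window = 0, deque()
    for i, ch in enumerate(stream):
        if len(window) == m:
            h = (h - ord(window.popleft()) * rm) % q
        window.append(ch)
        h = (h * r + ord(ch)) % q
        if len(window) == m:
            yield i, h == ph             # True: last m characters likely match

print(list(fingerprint_matches("abcabc", "abc")))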

Cite as

Shay Golan, Tsvi Kopelowitz, and Ely Porat. Streaming Pattern Matching with d Wildcards. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 44:1-44:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{golan_et_al:LIPIcs.ESA.2016.44,
  author =	{Golan, Shay and Kopelowitz, Tsvi and Porat, Ely},
  title =	{{Streaming Pattern Matching with d Wildcards}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{44:1--44:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.44},
  URN =		{urn:nbn:de:0030-drops-63561},
  doi =		{10.4230/LIPIcs.ESA.2016.44},
  annote =	{Keywords: wildcards, don't-cares, streaming pattern matching, fingerprints}
}
Document
How Hard is it to Find (Honest) Witnesses?

Authors: Isaac Goldstein, Tsvi Kopelowitz, Moshe Lewenstein, and Ely Porat


Abstract
In recent years much effort has been put into developing polynomial-time conditional lower bounds for algorithms and data structures in both static and dynamic settings. Along these lines, we introduce a framework for proving conditional lower bounds based on the well-known 3SUM conjecture. Our framework creates a compact representation of an instance of the 3SUM problem using hashing and domain-specific encoding. This compact representation admits false solutions to the original 3SUM problem instance, which we reveal and eliminate until we find a true solution. In other words, from all witnesses (candidate solutions) we figure out if an honest one (a true solution) exists. This enumeration of witnesses is used to prove conditional lower bounds on reporting problems that generate all witnesses. In turn, these reporting problems are then reduced to various decision problems using special search data structures which are able to enumerate the witnesses while only using solutions to decision variants. Hence, 3SUM-hardness of the decision problems is deduced. We utilize this framework to show conditional lower bounds for several variants of convolutions, matrix multiplication, and string problems. Our framework uses a strong connection between all of these problems and the ability to find witnesses. Specifically, we prove conditional lower bounds for computing partial outputs of convolutions and matrix multiplication for sparse inputs. These problems are inspired by the open question raised by Muthukrishnan 20 years ago. The lower bounds we show rule out the possibility (unless the 3SUM conjecture is false) that almost-linear-time solutions to sparse input-output convolutions or matrix multiplications exist. This is in contrast to standard convolutions and matrix multiplications that have, or are conjectured to have, almost-linear-time solutions. Moreover, we improve upon the conditional lower bounds of Amir et al. for histogram indexing, a problem that has been of much interest recently. The conditional lower bounds we show apply for both reporting and decision variants. For the well-studied decision variant, we show a full tradeoff between preprocessing and query time for every alphabet size > 2. At an extreme, this implies that no solution to this problem exists with subquadratic preprocessing time and ~O(1) query time for every alphabet size > 2, unless the 3SUM conjecture is false. This is in contrast to a recent result by Chan and Lewenstein for a binary alphabet. While these specific applications are used to demonstrate the techniques of our framework, we believe that this novel framework is useful for many other problems as well.
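
For reference, the 3SUM problem underlying the conjecture admits the classic quadratic two-pointer algorithm sketched below; the conjecture asserts that no algorithm beats it by a polynomial factor.

def three_sum(a):
    # Sort, then for each smallest element run the two-pointer scan:
    # O(n^2) total. The 3SUM conjecture says O(n^{2-eps}) is impossible.
    a = sorted(a)
    for i in range(len(a) - 2):
        lo, hi = i + 1, len(a) - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return a[i], a[lo], a[hi]
            lo, hi = (lo + 1, hi) if s < 0 else (lo, hi - 1)
    return None

print(three_sum([-5, 1, 4, 7, -2]))  # (-5, -2, 7)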

Cite as

Isaac Goldstein, Tsvi Kopelowitz, Moshe Lewenstein, and Ely Porat. How Hard is it to Find (Honest) Witnesses?. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{goldstein_et_al:LIPIcs.ESA.2016.45,
  author =	{Goldstein, Isaac and Kopelowitz, Tsvi and Lewenstein, Moshe and Porat, Ely},
  title =	{{How Hard is it to Find (Honest) Witnesses?}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.45},
  URN =		{urn:nbn:de:0030-drops-63575},
  doi =		{10.4230/LIPIcs.ESA.2016.45},
  annote =	{Keywords: 3SUM, convolutions, matrix multiplication, histogram indexing}
}
Document
Incremental Exact Min-Cut in Poly-logarithmic Amortized Update Time

Authors: Gramoz Goranci, Monika Henzinger, and Mikkel Thorup


Abstract
We present a deterministic incremental algorithm for exactly maintaining the size of a minimum cut with ~O(1) amortized time per edge insertion and O(1) query time. This result partially answers an open question posed by Thorup [Combinatorica 2007]. It also stands in sharp contrast to a polynomial conditional lower bound for the fully-dynamic weighted minimum cut problem. Our algorithm is obtained by combining a recent sparsification technique of Kawarabayashi and Thorup [STOC 2015] with an exact incremental algorithm of Henzinger [J. Algorithms, 1997]. We also study space-efficient incremental algorithms for the minimum cut problem. Concretely, we show that there exists an O(n log n/epsilon^2)-space Monte Carlo algorithm that can process a stream of edge insertions starting from an empty graph and, with high probability, maintains a (1+epsilon)-approximation to the minimum cut. The algorithm has ~O(1) amortized update time and constant query time.

Cite as

Gramoz Goranci, Monika Henzinger, and Mikkel Thorup. Incremental Exact Min-Cut in Poly-logarithmic Amortized Update Time. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 46:1-46:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{goranci_et_al:LIPIcs.ESA.2016.46,
  author =	{Goranci, Gramoz and Henzinger, Monika and Thorup, Mikkel},
  title =	{{Incremental Exact Min-Cut in Poly-logarithmic Amortized Update Time}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{46:1--46:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.46},
  URN =		{urn:nbn:de:0030-drops-63584},
  doi =		{10.4230/LIPIcs.ESA.2016.46},
  annote =	{Keywords: Dynamic Graph Algorithms, Minimum Cut, Edge Connectivity}
}
Document
Packing and Covering with Non-Piercing Regions

Authors: Sathish Govindarajan, Rajiv Raman, Saurabh Ray, and Aniket Basu Roy


Abstract
In this paper, we design the first polynomial time approximation schemes for the Set Cover and Dominating Set problems when the underlying sets are non-piercing regions (which include pseudodisks). We show that the local search algorithm that yields PTASs when the regions are disks [Aschner/Katz/Morgenstern/Yuditsky, WALCOM 2013; Gibson/Pirwani, 2005; Mustafa/Raman/Ray, 2015] can be extended to work for non-piercing regions. While such an extension is intuitive and natural, attempts to settle this question have failed even for pseudodisks. The techniques used for analysis when the regions are disks rely heavily on the underlying geometry, and do not extend to topologically defined settings such as pseudodisks. In order to prove our results, we introduce novel techniques that we believe will find applications in other problems. We then consider the Capacitated Region Packing problem. Here, the input consists of a set of points with capacities, and a set of regions. The objective is to pick a maximum cardinality subset of regions so that no point is covered by more regions than its capacity. We show that this problem admits a PTAS when the regions are k-admissible regions (pseudodisks are 2-admissible), and the capacities are bounded. Our result settles a conjecture of Har-Peled (see Conclusion of [Har-Peled, SoCG 2014]) in the affirmative. The conjecture was for a weaker version of the problem, namely when the regions are pseudodisks, the capacities are uniform, and the point set consists of all points in the plane. Finally, we consider the Capacitated Point Packing problem. In this setting, the regions have capacities, and our objective is to find a maximum cardinality subset of points such that no region has more points than its capacity. We show that this problem admits a PTAS when the capacity is unity, extending one of the results of Ene et al. [Ene/Har-Peled/Raichel, SoCG 2012].
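
The local search paradigm in question can be sketched generically (this is the plain combinatorial loop, without the paper's geometric machinery or analysis): start from a feasible solution and repeatedly swap at most t chosen sets for strictly fewer unchosen ones. PTAS analyses run this with t = O(1/epsilon^2) or similar.

from itertools import combinations

def local_search_cover(universe, sets, t=2):
    # Start from the full family; repeatedly swap out up to t chosen sets
    # for strictly fewer unchosen ones while keeping the universe covered.
    def feasible(family):
        covered = set()
        for i in family:
            covered |= sets[i]
        return covered >= universe

    sol = set(range(len(sets)))
    improved = True
    while improved:
        improved = False
        for k in range(1, t + 1):
            for out in combinations(sorted(sol), k):
                outside = [i for i in range(len(sets)) if i not in sol]
                for j in range(k):          # strictly fewer sets come in
                    for inn in combinations(outside, j):
                        cand = (sol - set(out)) | set(inn)
                        if feasible(cand):
                            sol, improved = cand, True
                            break
                    if improved:
                        break
                if improved:
                    break
            if improved:
                break
    return sol

universe = {0, 1, 2, 3}
sets = [{0, 1}, {1, 2}, {2, 3}, {0, 2, 3}]
print(local_search_cover(universe, sets))  # {1, 3}: a cover of size 2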

Cite as

Sathish Govindarajan, Rajiv Raman, Saurabh Ray, and Aniket Basu Roy. Packing and Covering with Non-Piercing Regions. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 47:1-47:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{govindarajan_et_al:LIPIcs.ESA.2016.47,
  author =	{Govindarajan, Sathish and Raman, Rajiv and Ray, Saurabh and Basu Roy, Aniket},
  title =	{{Packing and Covering with Non-Piercing Regions}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{47:1--47:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.47},
  URN =		{urn:nbn:de:0030-drops-63591},
  doi =		{10.4230/LIPIcs.ESA.2016.47},
  annote =	{Keywords: Local Search, Set Cover, Dominating Set, Capacitated Packing, Approximation algorithms}
}
Document
Incremental and Fully Dynamic Subgraph Connectivity For Emergency Planning

Authors: Monika Henzinger and Stefan Neumann


Abstract
During the last 10 years it has become popular to study dynamic graph problems in an emergency planning or sensitivity setting: instead of considering the general fully dynamic problem, we only have to process a single batch update of size d; after the update we have to answer queries. In this paper, we consider the dynamic subgraph connectivity problem with sensitivity d: we are given a graph of which some vertices are activated and some are deactivated. After that we get a single update in which the states of up to d vertices are changed. Then we get a sequence of connectivity queries in the subgraph of activated vertices. We present the first fully dynamic algorithm for this problem, with update and query times only slightly worse than those of the best decremental algorithm. In addition, we present the first incremental algorithm which is tight with respect to the best known conditional lower bound; moreover, the algorithm is simple and we believe it is implementable and efficient in practice.
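
The baseline such a data structure competes against is plain recomputation: flip the d updated states, then answer each query by a graph search over activated vertices. A minimal sketch, assuming adjacency lists and a dict of activation states:

from collections import deque

def connectivity_queries(adj, active, update, queries):
    # Flip the states of the batch-updated vertices, then answer each
    # query by BFS restricted to activated vertices. This is the naive
    # per-query-linear-time baseline the paper's structures must beat.
    state = dict(active)
    for v in update:
        state[v] = not state[v]

    def connected(s, t):
        if not (state.get(s) and state.get(t)):
            return False
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for w in adj.get(u, ()):
                if state.get(w) and w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    return [connected(s, t) for s, t in queries]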

Cite as

Monika Henzinger and Stefan Neumann. Incremental and Fully Dynamic Subgraph Connectivity For Emergency Planning. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 48:1-48:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{henzinger_et_al:LIPIcs.ESA.2016.48,
  author =	{Henzinger, Monika and Neumann, Stefan},
  title =	{{Incremental and Fully Dynamic Subgraph Connectivity For Emergency Planning}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{48:1--48:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.48},
  URN =		{urn:nbn:de:0030-drops-63607},
  doi =		{10.4230/LIPIcs.ESA.2016.48},
  annote =	{Keywords: connectivity, emergency planning, sensitivity}
}
Document
A Combinatorial Approximation Algorithm for Graph Balancing with Light Hyper Edges

Authors: Chien-Chung Huang and Sebastian Ott


Abstract
Makespan minimization in restricted assignment (R | p_{ij} in {p_j, infinity} | C_{max}) is a classical problem in the field of machine scheduling. In a landmark paper, [Lenstra, Shmoys, and Tardos, Math. Progr. 1990] gave a 2-approximation algorithm and proved that the problem cannot be approximated within a factor of 1.5 unless P=NP. The upper and lower bounds have remained essentially unimproved in the intervening 25 years, despite several remarkable recent successes on special cases of the problem. In this paper, we consider a special case called graph balancing with light hyper edges, where heavy jobs can be assigned to at most two machines while light jobs can be assigned to any number of machines. For this case, we present algorithms with approximation ratios strictly better than 2. Specifically, - Two job sizes: Suppose that light jobs have weight w and heavy jobs have weight W, and w < W. We give a 1.5-approximation algorithm (note that the current 1.5 lower bound is established in an even more restrictive setting). Indeed, depending on the specific values of w and W, sometimes our algorithm guarantees sub-1.5 approximation ratios. - Arbitrary job sizes: Suppose that W is the largest given weight, heavy jobs have weights in the range (beta W, W], where 4/7 <= beta < 1, and light jobs have weights in the range (0, beta W]. We present a (5/3 + beta/3)-approximation algorithm. Our algorithms are purely combinatorial, without the need to solve a linear program as required in most other known approaches.

Cite as

Chien-Chung Huang and Sebastian Ott. A Combinatorial Approximation Algorithm for Graph Balancing with Light Hyper Edges. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 49:1-49:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{huang_et_al:LIPIcs.ESA.2016.49,
  author =	{Huang, Chien-Chung and Ott, Sebastian},
  title =	{{A Combinatorial Approximation Algorithm for Graph Balancing with Light Hyper Edges}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{49:1--49:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.49},
  URN =		{urn:nbn:de:0030-drops-63919},
  doi =		{10.4230/LIPIcs.ESA.2016.49},
  annote =	{Keywords: Approximation Algorithms, Machine Scheduling, Graph Balancing, Combinatorial Algorithms}
}
Document
epsilon-Kernel Coresets for Stochastic Points

Authors: Lingxiao Huang, Jian Li, Jeff M. Phillips, and Haitao Wang


Abstract
With the dramatic growth in the number of application domains that generate probabilistic, noisy and uncertain data, there has been an increasing interest in designing algorithms for geometric or combinatorial optimization problems over such data. In this paper, we initiate the study of constructing epsilon-kernel coresets for uncertain points. We consider uncertainty in the existential model, where each point's location is fixed but the point occurs only with a certain probability, and the locational model, where each point has a probability distribution describing its location. An epsilon-kernel coreset approximates the width of a point set in any direction. We consider approximating the expected width (an epsilon-EXP-KERNEL), as well as the probability distribution on the width (an (epsilon, tau)-QUANT-KERNEL) for any direction. We show that there exists a set of O(epsilon^{-(d-1)/2}) deterministic points which approximate the expected width under the existential and locational models, and we provide efficient algorithms for constructing such coresets. We show, however, that it is not always possible to find a subset of the original uncertain points which provides such an approximation; yet, if the existential probability of each point is lower bounded by a constant, an epsilon-EXP-KERNEL is still possible. We also provide efficient algorithms for constructing an (epsilon, tau)-QUANT-KERNEL coreset in nearly linear time. Our techniques utilize or connect to several important notions in probability and geometry, such as Kolmogorov distances, VC uniform convergence and Tukey depth, and may be useful in other geometric optimization problems in stochastic settings. Finally, combining these with known techniques, we show a few applications to approximating the extent of uncertain functions, maintaining extent measures for stochastic moving points and some shape fitting problems under uncertainty.
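
To make the approximated quantity concrete: in a fixed direction, the expected width under the existential model can be computed exactly by sorting projections. A sketch for 2-D points (an illustration, not an algorithm from the paper; outcomes with fewer than two surviving points are counted as width 0):

def expected_width(points, probs, direction):
    # Exact expected directional width in the existential model: sort the
    # projections; the i-th largest attains the maximum with probability
    # p_i * prod(1 - p_j) over all j projecting higher (ties broken by
    # order); symmetrically for the minimum.
    proj = [x * direction[0] + y * direction[1] for x, y in points]
    order = sorted(range(len(points)), key=lambda i: proj[i], reverse=True)
    e_max, none_higher = 0.0, 1.0
    for i in order:
        e_max += proj[i] * probs[i] * none_higher
        none_higher *= 1.0 - probs[i]
    e_min, none_lower = 0.0, 1.0
    for i in reversed(order):
        e_min += proj[i] * probs[i] * none_lower
        none_lower *= 1.0 - probs[i]
    return e_max - e_min

# two points at x=0 and x=1, each alive with prob 1/2: width 1 iff both
print(expected_width([(0, 0), (1, 0)], [0.5, 0.5], (1, 0)))  # 0.25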

Cite as

Lingxiao Huang, Jian Li, Jeff M. Phillips, and Haitao Wang. epsilon-Kernel Coresets for Stochastic Points. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 50:1-50:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{huang_et_al:LIPIcs.ESA.2016.50,
  author =	{Huang, Lingxiao and Li, Jian and Phillips, Jeff M. and Wang, Haitao},
  title =	{{epsilon-Kernel Coresets for Stochastic Points}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{50:1--50:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.50},
  URN =		{urn:nbn:de:0030-drops-63921},
  doi =		{10.4230/LIPIcs.ESA.2016.50},
  annote =	{Keywords: e-kernel, coreset, stochastic point, shape fitting}
}
Document
Every Property Is Testable on a Natural Class of Scale-Free Multigraphs

Authors: Hiro Ito


Abstract
In this paper, we introduce a natural class of multigraphs called hierarchical-scale-free (HSF) multigraphs, and consider constant-time testability on this class. We show that a very wide subclass of HSF is hyperfinite. Based on this result, an algorithm for a deterministic partitioning oracle can be constructed. We conclude by showing that every property is constant-time testable on the above subclass of HSF. This algorithm utilizes findings of Newman and Sohler (STOC 2011). However, their algorithm is based on a bounded-degree model, while it is known that actual scale-free networks usually include hubs, which have very large degrees. HSF is based on scale-free properties and includes such hubs. This is the first universal result of constant-time testability on a class of graphs produced by a model of scale-free networks, and it has the potential to be applicable to a very wide range of scale-free networks.

Cite as

Hiro Ito. Every Property Is Testable on a Natural Class of Scale-Free Multigraphs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 51:1-51:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{ito:LIPIcs.ESA.2016.51,
  author =	{Ito, Hiro},
  title =	{{Every Property Is Testable on a Natural Class of Scale-Free Multigraphs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{51:1--51:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.51},
  URN =		{urn:nbn:de:0030-drops-63937},
  doi =		{10.4230/LIPIcs.ESA.2016.51},
  annote =	{Keywords: constant-time algorithms, scale-free networks, complex networks, isolated cliques, hyperfinite}
}
Document
Explicit Correlation Amplifiers for Finding Outlier Correlations in Deterministic Subquadratic Time

Authors: Matti Karppa, Petteri Kaski, Jukka Kohonen, and Padraig Ó Catháin


Abstract
We derandomize G. Valiant's [J. ACM 62 (2015), Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zig-zag-product expanders of Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002), 157-187]. We say that a function f: {-1,1}^d -> {-1,1}^D is a correlation amplifier with threshold 0 <= tau <= 1, error gamma >= 1, and strength p (an even positive integer) if for all pairs of vectors x, y in {-1,1}^d it holds that (i) |<x,y>| < tau*d implies |<f(x),f(y)>| <= (tau*gamma)^p * D; and (ii) |<x,y>| >= tau*d implies (<x,y>/(gamma*d))^p * D <= <f(x),f(y)> <= (gamma*<x,y>/d)^p * D.
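
The classic, non-explicit correlation amplifier is the p-fold tensor power, since <x^{(p)}, y^{(p)}> = <x,y>^p; the paper's contribution is an explicit, far lower-dimensional substitute built from expanders. A numeric sanity check of the tensor-power identity:

import math
from itertools import product

def tensor_power(x, p):
    # Flattened p-fold tensor power of a +/-1 vector (dimension d^p).
    return [math.prod(t) for t in product(x, repeat=p)]

x = [1, 1, -1, 1]
y = [1, -1, -1, 1]
p = 3
print(sum(a * b for a, b in zip(x, y)) ** p)            # <x,y>^p = 8
print(sum(a * b for a, b in zip(tensor_power(x, p),
                                tensor_power(y, p))))   # also 8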

Cite as

Matti Karppa, Petteri Kaski, Jukka Kohonen, and Padraig Ó Catháin. Explicit Correlation Amplifiers for Finding Outlier Correlations in Deterministic Subquadratic Time. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 52:1-52:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{karppa_et_al:LIPIcs.ESA.2016.52,
  author =	{Karppa, Matti and Kaski, Petteri and Kohonen, Jukka and \'{O} Cath\'{a}in, Padraig},
  title =	{{Explicit Correlation Amplifiers for Finding Outlier Correlations in Deterministic Subquadratic Time}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{52:1--52:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.52},
  URN =		{urn:nbn:de:0030-drops-63944},
  doi =		{10.4230/LIPIcs.ESA.2016.52},
  annote =	{Keywords: correlation, derandomization, outlier, similarity search, expander graph}
}
Document
Faster Worst Case Deterministic Dynamic Connectivity

Authors: Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, and Mikkel Thorup


Abstract
We present a deterministic dynamic connectivity data structure for undirected graphs with worst case update time O(sqrt{(n(log(log(n)))^2)/log(n)}) and constant query time. This improves on the previous best deterministic worst case algorithm of Frederickson (SIAM J. Comput., 1985) and Eppstein, Galil, Italiano, and Nissenzweig (J. ACM, 1997), which had update time O(sqrt{n}). All other algorithms for dynamic connectivity are either randomized (Monte Carlo) or have only amortized performance guarantees.

Cite as

Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, and Mikkel Thorup. Faster Worst Case Deterministic Dynamic Connectivity. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 53:1-53:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kejlbergrasmussen_et_al:LIPIcs.ESA.2016.53,
  author =	{Kejlberg-Rasmussen, Casper and Kopelowitz, Tsvi and Pettie, Seth and Thorup, Mikkel},
  title =	{{Faster Worst Case Deterministic Dynamic Connectivity}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{53:1--53:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.53},
  URN =		{urn:nbn:de:0030-drops-63953},
  doi =		{10.4230/LIPIcs.ESA.2016.53},
  annote =	{Keywords: dynamic graph, spanning tree}
}
Document
Think Eternally: Improved Algorithms for the Temp Secretary Problem and Extensions

Authors: Thomas Kesselheim and Andreas Tönnis


Abstract
The Temp Secretary Problem was recently introduced by [Fiat et al., ESA 2015]. It is a generalization of the Secretary Problem, in which commitments are temporary for a fixed duration. We present a simple online algorithm with improved performance guarantees for cases already considered by [Fiat et al., ESA 2015] and give competitive ratios for new generalizations of the problem. In the classical setting, where candidates have identical contract durations gamma << 1 and we are allowed to hire up to B candidates simultaneously, our algorithm is (1/2 - O(sqrt{gamma}))-competitive. For large B, the bound improves to 1 - O(1/sqrt{B}) - O(sqrt{gamma}). Furthermore, we generalize the problem from cardinality constraints to general packing constraints. We achieve a competitive ratio of 1 - O(sqrt{(1+log(d) + log(B))/B}) - O(sqrt{gamma}), where d is the sparsity of the constraint matrix and B is generalized to the capacity ratio of linear constraints. Additionally, we extend the problem to arbitrary hiring durations. Our algorithmic approach is a relaxation that aggregates all temporal constraints into a non-temporal constraint. Then we apply a linear scaling algorithm that, on every arrival, computes a tentative solution on the input that is known up to this point. This tentative solution uses the non-temporal, relaxed constraints scaled down linearly by the amount of time that has already passed.
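
A loose sketch of the linear-scaling idea just described (not the paper's exact algorithm or analysis; the arrival times and values are hypothetical, and contract durations are ignored, i.e., the gamma << 1 regime):

def temp_secretary(arrivals, B):
    # arrivals: (time in [0,1], value) pairs. On each arrival, solve the
    # prefix problem with capacity scaled down to t*B, and hire the
    # current candidate iff it makes that tentative solution and
    # hiring capacity remains.
    seen, hired = [], []
    for t, value in sorted(arrivals):
        seen.append(value)
        cap = int(t * B)                       # linearly scaled capacity
        tentative = sorted(seen, reverse=True)[:cap]
        if cap > 0 and len(hired) < B and value >= tentative[-1]:
            hired.append((t, value))
    return hired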

Cite as

Thomas Kesselheim and Andreas Tönnis. Think Eternally: Improved Algorithms for the Temp Secretary Problem and Extensions. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kesselheim_et_al:LIPIcs.ESA.2016.54,
  author =	{Kesselheim, Thomas and T\"{o}nnis, Andreas},
  title =	{{Think Eternally: Improved Algorithms for the Temp Secretary Problem and Extensions}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.54},
  URN =		{urn:nbn:de:0030-drops-63966},
  doi =		{10.4230/LIPIcs.ESA.2016.54},
  annote =	{Keywords: Secretary Problem, Online Algorithms, Scheduling Problems}
}
Document
Hardness of Bipartite Expansion

Authors: Subhash Khot and Rishi Saket


Abstract
We study the natural problem of estimating the expansion of subsets of vertices on one side of a bipartite graph. More precisely, given a bipartite graph G(U,V,E) and a parameter beta, the goal is to find a subset V' subseteq V containing a beta fraction of the vertices of V which minimizes the size of N(V'), the neighborhood of V'. This problem, which we call Bipartite Expansion, is a special case of submodular minimization subject to a cardinality constraint, and is also related to other problems in graph partitioning and expansion. Prior to this work, no hardness of approximation was known for Bipartite Expansion. In this paper we show the following strong inapproximability for Bipartite Expansion: for any constants tau, gamma > 0 there is no algorithm which, given a constant beta > 0 and a bipartite graph G(U,V,E), runs in polynomial time and decides whether - (YES case) there is a subset S^* subseteq V s.t. |S^*| >= beta*|V| satisfying |N(S^*)| <= gamma*|U|, or - (NO case) any subset S subseteq V s.t. |S| >= tau*beta*|V| satisfies |N(S)| >= (1 - gamma)*|U|, unless NP subseteq intersect_{epsilon > 0} DTIME(2^{n^epsilon}), i.e., NP has subexponential time algorithms. We note that our hardness result stated above is a vertex expansion analogue of the Small Set (Edge) Expansion Conjecture of Raghavendra and Steurer (2010).
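
On small instances the objective can be evaluated by brute force, which makes the YES/NO gap concrete (exponential time, of course; the point of the paper is that even approximating it is hard):

from itertools import combinations
from math import ceil

def bipartite_expansion(V, E, beta):
    # Minimize |N(V')| over all V' of size ceil(beta * |V|); E contains
    # pairs (u, v) with u on the U side and v on the V side.
    nbrs = {v: set() for v in V}
    for u, v in E:
        nbrs[v].add(u)
    size = ceil(beta * len(V))
    best = None
    for sub in combinations(sorted(V), size):
        neighborhood = set().union(*(nbrs[v] for v in sub))
        if best is None or len(neighborhood) < best[0]:
            best = (len(neighborhood), set(sub))
    return best  # (expansion, an achieving subset)

print(bipartite_expansion(["a", "b", "c"],
                          [(1, "a"), (1, "b"), (2, "b"), (3, "c")], 2/3))
# (2, {'a', 'b'})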

Cite as

Subhash Khot and Rishi Saket. Hardness of Bipartite Expansion. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 55:1-55:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{khot_et_al:LIPIcs.ESA.2016.55,
  author =	{Khot, Subhash and Saket, Rishi},
  title =	{{Hardness of Bipartite Expansion}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{55:1--55:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.55},
  URN =		{urn:nbn:de:0030-drops-63971},
  doi =		{10.4230/LIPIcs.ESA.2016.55},
  annote =	{Keywords: inapproximability, bipartite expansion, PCP, submodular minimization}
}
Document
A Streaming Algorithm for the Undirected Longest Path Problem

Authors: Lasse Kliemann, Christian Schielke, and Anand Srivastav


Abstract
We present the first streaming algorithm for the longest path problem in undirected graphs. The input graph is given as a stream of edges and RAM is limited to only a linear number of edges at a time (linear in the number of vertices n). We prove a per-edge processing time of O(n), where a naive solution would have required Omega(n^2). Moreover, we give a concrete linear upper bound on the number of bits of RAM that are required. On a set of graphs with various structure, we experimentally compare our algorithm with three leading RAM algorithms: Warnsdorf (1823), Pohl-Warnsdorf (1967), and Pongrasz (2012). Although conducting only a small constant number of passes over the input, our algorithm delivers competitive results: with the exception of preferential attachment graphs, we deliver at least 71% of the solution of the best RAM algorithm. The same minimum relative performance of 71% is observed over all graph classes after removing the 10% worst cases. This comparison is meaningful, since for each instance class there is one algorithm that on average delivers at least 84% of a Hamilton path. In some cases we deliver even better results than any of the RAM algorithms.
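
Warnsdorf's rule, the oldest of the RAM heuristics in the comparison, extends a path by always stepping to the unvisited neighbour that itself has the fewest unvisited neighbours; a standard formulation (a sketch, not the paper's streaming algorithm):

def warnsdorf_path(adj, start):
    # Greedily extend the path, always stepping to the unvisited
    # neighbour with the fewest unvisited neighbours (ties arbitrary).
    path, visited = [start], {start}
    while True:
        candidates = [v for v in adj[path[-1]] if v not in visited]
        if not candidates:
            return path
        nxt = min(candidates,
                  key=lambda v: sum(1 for w in adj[v] if w not in visited))
        path.append(nxt)
        visited.add(nxt)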

Cite as

Lasse Kliemann, Christian Schielke, and Anand Srivastav. A Streaming Algorithm for the Undirected Longest Path Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 56:1-56:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kliemann_et_al:LIPIcs.ESA.2016.56,
  author =	{Kliemann, Lasse and Schielke, Christian and Srivastav, Anand},
  title =	{{A Streaming Algorithm for the Undirected Longest Path Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{56:1--56:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.56},
  URN =		{urn:nbn:de:0030-drops-63980},
  doi =		{10.4230/LIPIcs.ESA.2016.56},
  annote =	{Keywords: Streaming Algorithms, Undirected Longest Path Problem, Graph Algorithms, Combinatorial Optimization}
}
Document
A Note On Spectral Clustering

Authors: Pavel Kolev and Kurt Mehlhorn


Abstract
Spectral clustering is a popular and successful approach for partitioning the nodes of a graph into clusters for which the ratio of outside connections compared to the volume (sum of degrees) is small. In order to partition into k clusters, one first computes an approximation of the bottom k eigenvectors of the (normalized) Laplacian of G, uses it to embed the vertices of G into k-dimensional Euclidean space R^k, and then partitions the resulting points via a k-means clustering algorithm. It is an important task for theory to explain the success of spectral clustering. Peng et al. (COLT, 2015) made an important step in this direction. They showed that spectral clustering provably works if the gap between the (k+1)-th and the k-th eigenvalue of the normalized Laplacian is sufficiently large. They proved a structural and an algorithmic result. The algorithmic result needs a considerably stronger gap assumption and does not analyze the standard spectral clustering paradigm; it replaces spectral embedding by heat kernel embedding and k-means clustering by locality sensitive hashing. We extend their work in two directions. Structurally, we improve the quality guarantee for spectral clustering by a factor of k and simultaneously weaken the gap assumption. Algorithmically, we show that the standard paradigm for spectral clustering works. Moreover, it even works with the same gap assumption as required for the structural result.
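
The standard paradigm whose analysis the paper completes can be sketched with dense linear algebra (a minimal sketch assuming numpy, an adjacency matrix with no isolated vertices, and plain Lloyd iterations in place of a production k-means):

import numpy as np

def spectral_clustering(A, k, iters=50, seed=0):
    # Embed vertices by the bottom k eigenvectors of the normalized
    # Laplacian, then cluster the embedded rows with Lloyd's k-means.
    d = A.sum(axis=1)                          # degrees; assumed nonzero
    Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv_sqrt @ A @ Dinv_sqrt
    _, vecs = np.linalg.eigh(L)                # eigenvalues ascending
    X = vecs[:, :k]                            # spectral embedding in R^k
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels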

Cite as

Pavel Kolev and Kurt Mehlhorn. A Note On Spectral Clustering. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kolev_et_al:LIPIcs.ESA.2016.57,
  author =	{Kolev, Pavel and Mehlhorn, Kurt},
  title =	{{A Note On Spectral Clustering}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.57},
  URN =		{urn:nbn:de:0030-drops-63994},
  doi =		{10.4230/LIPIcs.ESA.2016.57},
  annote =	{Keywords: spectral embedding, k-means clustering, power method, gap assumption}
}
Document
On the Fine-Grained Complexity of Rainbow Coloring

Authors: Lukasz Kowalik, Juho Lauri, and Arkadiusz Socala


Abstract
The Rainbow k-Coloring problem asks whether the edges of a given graph can be colored in k colors so that every pair of vertices is connected by a rainbow path, i.e., a path with all edges of different colors. Our main result states that for any k >= 2, there is no algorithm for Rainbow k-Coloring running in time 2^{o(n^{3/2})}, unless ETH fails. Motivated by this negative result we consider two parameterized variants of the problem. In the Subset Rainbow k-Coloring problem, introduced by Chakraborty et al. [STACS 2009, J. Comb. Opt. 2009], we are additionally given a set S of pairs of vertices and we ask if there is a coloring in which all the pairs in S are connected by rainbow paths. We show that Subset Rainbow k-Coloring is FPT when parameterized by |S|. We also study the Maximum Rainbow k-Coloring problem, where we are additionally given an integer q and we ask if there is a coloring in which at least q anti-edges are connected by rainbow paths. We show that the problem is FPT when parameterized by q and has a kernel of size O(q) for every k >= 2, extending the result of Ananth et al. [FSTTCS 2011]. We believe that our techniques used for the lower bounds may shed some light on the complexity of the classical Edge Coloring problem, where it is a major open question if a 2^{O(n)}-time algorithm exists.
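
Whether a given edge colouring rainbow-connects a graph can be checked by exhaustive search over rainbow paths, which conveys why exponential running times are the natural currency here (a brute-force checker, not an algorithm from the paper):

def rainbow_connected(adj, color):
    # color maps frozenset({u, v}) to the edge's colour. Every vertex
    # pair must admit a path with pairwise-distinct edge colours; we
    # search for such paths exhaustively (exponential time).
    def rainbow_path(u, t, used, visited):
        if u == t:
            return True
        for v in adj[u]:
            c = color[frozenset((u, v))]
            if v not in visited and c not in used:
                if rainbow_path(v, t, used | {c}, visited | {v}):
                    return True
        return False

    vs = list(adj)
    return all(rainbow_path(s, t, frozenset(), frozenset({s}))
               for i, s in enumerate(vs) for t in vs[i + 1:])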

Cite as

Lukasz Kowalik, Juho Lauri, and Arkadiusz Socala. On the Fine-Grained Complexity of Rainbow Coloring. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 58:1-58:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kowalik_et_al:LIPIcs.ESA.2016.58,
  author =	{Kowalik, Lukasz and Lauri, Juho and Socala, Arkadiusz},
  title =	{{On the Fine-Grained Complexity of Rainbow Coloring}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{58:1--58:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.58},
  URN =		{urn:nbn:de:0030-drops-64001},
  doi =		{10.4230/LIPIcs.ESA.2016.58},
  annote =	{Keywords: graph coloring, computational complexity, lower bounds, exponential time hypothesis, FPT algorithms}
}
Document
A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter

Authors: Stefan Kratsch


Abstract
In the Vertex Cover problem we are given a graph G=(V,E) and an integer k and have to determine whether there is a set X subseteq V of size at most k such that each edge in E has at least one endpoint in X. The problem can be easily solved in time O^*(2^k), making it fixed-parameter tractable (FPT) with respect to k. While the fastest known algorithm takes only time O^*(1.2738^k), much stronger improvements have been obtained by studying parameters that are smaller than k. Apart from treewidth-related results, the arguably best algorithm for Vertex Cover runs in time O^*(2.3146^p), where p = k - LP(G) is only the excess of the solution size k over the best fractional vertex cover [Lokshtanov et al., TALG 2014]. Since p <= k but k cannot be bounded in terms of p alone, this strictly increases the range of tractable instances. Recently, Garg and Philip (SODA 2016) greatly contributed to understanding the parameterized complexity of the Vertex Cover problem. They prove that 2LP(G) - MM(G) is a lower bound for the vertex cover size of G, where MM(G) is the size of a largest matching of G, and proceed to study parameter l = k - (2LP(G)-MM(G)). They give an algorithm of running time O^*(3^l), proving that Vertex Cover is FPT in l. It can be easily observed that l <= p whereas p cannot be bounded in terms of l alone. We complement the work of Garg and Philip by proving that Vertex Cover admits a randomized polynomial kernelization in terms of l, i.e., an efficient preprocessing to size polynomial in l. This improves over parameter p = k - LP(G) for which this was previously known [Kratsch and Wahlström, FOCS 2012].
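
The parameters in play can be computed directly on small graphs; a sketch assuming networkx for the maximum matching and scipy for the LP (an illustration of the definitions, not the paper's kernelization):

import networkx as nx
from scipy.optimize import linprog

def vc_parameters(G, k):
    # LP(G): optimal fractional vertex cover (minimize sum x_v subject to
    # x_u + x_v >= 1 per edge, 0 <= x_v <= 1). MM(G): maximum matching.
    nodes = list(G.nodes)
    index = {v: i for i, v in enumerate(nodes)}
    A_ub, b_ub = [], []
    for u, v in G.edges:                        # -x_u - x_v <= -1
        row = [0.0] * len(nodes)
        row[index[u]] = row[index[v]] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c=[1.0] * len(nodes), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * len(nodes))
    lp = res.fun
    mm = len(nx.max_weight_matching(G, maxcardinality=True))
    return {"LP": lp, "MM": mm,
            "p": k - lp,                        # Lokshtanov et al. parameter
            "l": k - (2 * lp - mm)}             # Garg-Philip parameter

G = nx.cycle_graph(5)                           # C5: LP = 2.5, MM = 2
print(vc_parameters(G, k=3))                    # p = 0.5, l = 0.0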

Cite as

Stefan Kratsch. A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 59:1-59:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kratsch:LIPIcs.ESA.2016.59,
  author =	{Kratsch, Stefan},
  title =	{{A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{59:1--59:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.59},
  URN =		{urn:nbn:de:0030-drops-64066},
  doi =		{10.4230/LIPIcs.ESA.2016.59},
  annote =	{Keywords: Vertex cover, parameterized complexity, kernelization}
}
Document
The Strongly Stable Roommates Problem

Authors: Adam Kunysz


Abstract
An instance of the strongly stable roommates problem with incomplete lists and ties (SRTI) is an undirected non-bipartite graph G = (V,E), with an adjacency list being a linearly ordered list of ties, which are vertices equally good for a given vertex. Ties are disjoint and may contain one vertex. A matching M is a set of vertex-disjoint edges. An edge {x, y} in E\M is a blocking edge for M if x is either unmatched or strictly prefers y to its current partner in M, and y is either unmatched or strictly prefers x to its current partner in M or is indifferent between them. A matching is strongly stable if there is no blocking edge with respect to it. We present an O(nm) time algorithm for computing a strongly stable matching, where we denote n = |V| and m = |E|. The best previously known solution had running time O(m^2) [Scott, 2005]. We also give a characterisation of the set of all strongly stable matchings. We show that there exists a partial order with O(m) elements representing the set of all strongly stable matchings, and we give an O(nm) algorithm for constructing such a representation. Our algorithms are based on a simple reduction to the bipartite version of the problem.
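
The strong-stability condition can be checked directly from the preference structure. A sketch, where rank[v][u] is u's position in v's list, equal ranks encode ties (lower is better), and M is a partner map (an assumed data layout for illustration, not the paper's algorithm):

def strongly_stable(M, rank, adj):
    # An edge {x, y} not in M blocks M if x strictly gains by switching
    # to y (or is unmatched) and y at least does not lose (or is
    # unmatched). Iterating over all x and all y in adj[x] tests both
    # orientations of every non-matching edge.
    def strictly_prefers(v, new):
        return v not in M or rank[v][new] < rank[v][M[v]]

    def weakly_prefers(v, new):
        return v not in M or rank[v][new] <= rank[v][M[v]]

    for x in adj:
        for y in adj[x]:
            if M.get(x) == y:
                continue
            if strictly_prefers(x, y) and weakly_prefers(y, x):
                return False
    return True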

Cite as

Adam Kunysz. The Strongly Stable Roommates Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 60:1-60:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kunysz:LIPIcs.ESA.2016.60,
  author =	{Kunysz, Adam},
  title =	{{The Strongly Stable Roommates Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{60:1--60:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.60},
  URN =		{urn:nbn:de:0030-drops-64012},
  doi =		{10.4230/LIPIcs.ESA.2016.60},
  annote =	{Keywords: strongly stable matching, stable roommates, rotations, matching theory}
}
Document
Faster External Memory LCP Array Construction

Authors: Juha Kärkkäinen and Dominik Kempa


Abstract
The suffix array, perhaps the most important data structure in modern string processing, needs to be augmented with the longest-common-prefix (LCP) array in many applications. Their construction is often a major bottleneck especially when the data is too big for internal memory. We describe two new algorithms for computing the LCP array from the suffix array in external memory. Experiments demonstrate that the new algorithms are about a factor of two faster than the fastest previous algorithm.
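
For contrast with the external-memory setting of the paper, the standard internal-memory computation of the LCP array from the suffix array is Kasai et al.'s linear-time algorithm:

def lcp_from_sa(text, sa):
    # Kasai et al.: process suffixes in text order, reusing h-1 of the
    # previous match, for O(n) total work. lcp[i] is the longest common
    # prefix of the suffixes at sa[i-1] and sa[i] (lcp[0] = 0).
    n = len(text)
    rank = [0] * n
    for i, s in enumerate(sa):
        rank[s] = i
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            h = max(h - 1, 0)
        else:
            h = 0
    return lcp

text = "banana"
sa = [5, 3, 1, 0, 4, 2]        # suffix array of "banana"
print(lcp_from_sa(text, sa))   # [0, 1, 3, 0, 0, 2]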

Cite as

Juha Kärkkäinen and Dominik Kempa. Faster External Memory LCP Array Construction. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 61:1-61:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{karkkainen_et_al:LIPIcs.ESA.2016.61,
  author =	{K\"{a}rkk\"{a}inen, Juha and Kempa, Dominik},
  title =	{{Faster External Memory LCP Array Construction}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{61:1--61:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.61},
  URN =		{urn:nbn:de:0030-drops-64026},
  doi =		{10.4230/LIPIcs.ESA.2016.61},
  annote =	{Keywords: LCP array, suffix array, external memory algorithms}
}
Document
Almost All Even Yao-Yao Graphs Are Spanners

Authors: Jian Li and Wei Zhan


Abstract
It is an open problem whether Yao-Yao graphs YY_{k} (also known as sparse-Yao graphs) are all spanners when the integer parameter k is large enough. In this paper we show that, for any integer k >= 42, the Yao-Yao graph YY_{2k} is a t_k-spanner, with stretch factor t_k = 6.03 + O(k^{-1}) as k tends to infinity. Our result generalizes the best known result, which asserts that all YY_{6k} are spanners for k >= 6 [Bauer and Damian, SODA'13]. Our proof is also somewhat simpler.
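
For orientation: the first pruning stage, the Yao graph Y_k, keeps one outgoing edge per cone around every point; YY_k then keeps, per cone, only the shortest incoming edge at each point. A sketch of the first stage (the second stage is analogous and omitted):

import math

def yao_graph(points, k):
    # For each point, split the plane into k equal-angle cones and keep a
    # directed edge to the nearest other point inside each cone.
    edges = []
    for i, (px, py) in enumerate(points):
        nearest = {}                            # cone index -> (dist^2, j)
        for j, (qx, qy) in enumerate(points):
            if j == i:
                continue
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k))
            d2 = (qx - px) ** 2 + (qy - py) ** 2
            if cone not in nearest or d2 < nearest[cone][0]:
                nearest[cone] = (d2, j)
        edges.extend((i, j) for _, j in nearest.values())
    return edges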

Cite as

Jian Li and Wei Zhan. Almost All Even Yao-Yao Graphs Are Spanners. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 62:1-62:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{li_et_al:LIPIcs.ESA.2016.62,
  author =	{Li, Jian and Zhan, Wei},
  title =	{{Almost All Even Yao-Yao Graphs Are Spanners}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{62:1--62:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.62},
  URN =		{urn:nbn:de:0030-drops-64033},
  doi =		{10.4230/LIPIcs.ESA.2016.62},
  annote =	{Keywords: Yao-Yao graph, geometric spanner, curved trapezoid}
}
Document
Online Non-Preemptive Scheduling in a Resource Augmentation Model Based on Duality

Authors: Giorgio Lucarelli, Nguyen Kim Thang, Abhinav Srivastav, and Denis Trystram


Abstract
Resource augmentation is a well-established model for analyzing algorithms, particularly in the online setting. It has been successfully used to provide theoretical evidence for several scheduling heuristics with good performance in practice. According to this model, the algorithm is applied to a more powerful environment than that of the adversary. Several types of resource augmentation for scheduling problems have been proposed up to now, including speed augmentation, machine augmentation and, more recently, rejection. In this paper, we present a framework that unifies the various types of resource augmentation. Moreover, it allows generalizing the notion of resource augmentation to other types of resources. Our framework is based on mathematical programming and consists of extending the domain of feasible solutions for the algorithm with respect to the domain of the adversary. This, in turn, allows the natural concept of duality for mathematical programming to be used as a tool for the analysis of the algorithm's performance. As an illustration of the above ideas, we apply this framework and propose a primal-dual algorithm for the online scheduling problem of minimizing the total weighted flow time of jobs on unrelated machines when preemption of jobs is not allowed. This is a representative problem for which no online algorithm with a performance guarantee is known. Specifically, a strong lower bound of Omega(sqrt{n}) exists even for the offline unweighted version of the problem on a single machine. In this paper, we first show a strong negative result even when speed augmentation is used in the online setting. Then, using the generalized framework for resource augmentation and by combining speed augmentation and rejection, we present a (1+epsilon_s)-speed O(1/(epsilon_s epsilon_r))-competitive algorithm if we are allowed to reject jobs whose total weight is an epsilon_r-fraction of the weights of all jobs, for any epsilon_s > 0 and epsilon_r in (0,1). Furthermore, we extend this analysis and propose a (1+epsilon_s)-speed epsilon_r-rejection O(k^{(k+3)/k} / (epsilon_r^{1/k} * epsilon_s^{(k+2)/k}))-competitive algorithm for the more general objective of minimizing the weighted l_k-norm of the flow times of jobs.

Cite as

Giorgio Lucarelli, Nguyen Kim Thang, Abhinav Srivastav, and Denis Trystram. Online Non-Preemptive Scheduling in a Resource Augmentation Model Based on Duality. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 63:1-63:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{lucarelli_et_al:LIPIcs.ESA.2016.63,
  author =	{Lucarelli, Giorgio and Kim Thang, Nguyen and Srivastav, Abhinav and Trystram, Denis},
  title =	{{Online Non-Preemptive Scheduling in a Resource Augmentation Model Based on Duality}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{63:1--63:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.63},
  URN =		{urn:nbn:de:0030-drops-64047},
  doi =		{10.4230/LIPIcs.ESA.2016.63},
  annote =	{Keywords: Online algorithms, Non-preemptive scheduling, Resource augmentation, Primal-dual}
}
Document
Admissible Colourings of 3-Manifold Triangulations for Turaev-Viro Type Invariants

Authors: Clément Maria and Jonathan Spreer


Abstract
Turaev-Viro invariants are amongst the most powerful tools to distinguish 3-manifolds. They are invaluable for mathematical software, but current algorithms to compute them rely on the enumeration of an extremely large set of combinatorial data defined on the triangulation, regardless of the underlying topology of the manifold. In this article, we propose a finer study of these combinatorial data, called admissible colourings, in relation to the cohomology of the manifold. We prove that the set of admissible colourings to be considered is substantially smaller than previously known, by furnishing new upper bounds on its size that are aware of the topology of the manifold. Moreover, we deduce new topology-sensitive enumeration algorithms based on these bounds. The paper provides a theoretical analysis as well as a detailed experimental study of the approach. We give strong experimental evidence on large manifold censuses that our upper bounds are tighter than the previously known ones, and that our algorithms significantly outperform state-of-the-art implementations for computing Turaev-Viro invariants.

Cite as

Clément Maria and Jonathan Spreer. Admissible Colourings of 3-Manifold Triangulations for Turaev-Viro Type Invariants. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 64:1-64:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{maria_et_al:LIPIcs.ESA.2016.64,
  author =	{Maria, Cl\'{e}ment and Spreer, Jonathan},
  title =	{{Admissible Colourings of 3-Manifold Triangulations for Turaev-Viro Type Invariants}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{64:1--64:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.64},
  URN =		{urn:nbn:de:0030-drops-64050},
  doi =		{10.4230/LIPIcs.ESA.2016.64},
  annote =	{Keywords: low-dimensional topology, triangulations of 3-manifolds, cohomology theory, Turaev-Viro invariants, combinatorial algorithms}
}
Document
The Computational Complexity of Genetic Diversity

Authors: Ruta Mehta, Ioannis Panageas, Georgios Piliouras, and Sadra Yazdanbod


Abstract
A key question in biological systems is whether genetic diversity persists in the long run under evolutionary competition, or whether a single dominant genotype emerges. Classic work by [Kalmus, J. of Genetics, 1945] has established that even in simple diploid species (species with chromosome pairs), diversity can be guaranteed as long as the heterozygous individuals (those having different alleles for a gene on the two chromosomes) enjoy a selective advantage. Despite the classic nature of the problem, as we move towards increasingly polymorphic traits (e.g., human blood types), predicting diversity (and its implications) is still not fully understood. Our key contribution is to establish complexity-theoretic hardness results implying that, even in the textbook case of single-locus (gene) diploid models, predicting whether diversity survives given the fitness landscape is algorithmically intractable. Our hardness results are structurally robust along several dimensions, e.g., the choice of parameter distribution, different definitions of stability/persistence, and restriction to typical subclasses of fitness landscapes. Technically, our results exploit connections between game theory, nonlinear dynamical systems, and complexity theory, and establish hardness results for predicting the evolution of a deterministic variant of the well-known multiplicative weights update algorithm in symmetric coordination games; finding one Nash equilibrium is easy in these games. In the process, we characterize stable fixed points of these dynamics using the notions of Nash equilibrium and negative semidefiniteness. This, as well as our hardness results for decision problems in coordination games, may be of independent interest. Finally, we complement our results by establishing that under randomly chosen fitness landscapes, diversity survives with significant probability. The full version of this paper is available at http://arxiv.org/abs/1411.6322.
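
As an illustration only (the matrix and values below are ours, not the paper's), the dynamics in question, the discrete replicator map, a deterministic linear variant of multiplicative weights in a symmetric coordination game, can be simulated in a few lines; with heterozygote advantage, both alleles persist:

import numpy as np

def replicator_step(A, x):
    # One step of the discrete replicator dynamics on allele
    # frequencies x, with symmetric fitness/payoff matrix A:
    # x_i' = x_i * (Ax)_i / (x^T A x).
    fitness = A @ x          # marginal fitness of each allele
    mean = x @ fitness       # mean population fitness
    return x * fitness / mean

# Toy single-locus landscape with heterozygote advantage:
A = np.array([[1.0, 1.2],
              [1.2, 1.0]])
x = np.array([0.9, 0.1])
for _ in range(50):
    x = replicator_step(A, x)
print(x)  # approaches the diverse interior fixed point (0.5, 0.5)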

Cite as

Ruta Mehta, Ioannis Panageas, Georgios Piliouras, and Sadra Yazdanbod. The Computational Complexity of Genetic Diversity. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 65:1-65:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mehta_et_al:LIPIcs.ESA.2016.65,
  author =	{Mehta, Ruta and Panageas, Ioannis and Piliouras, Georgios and Yazdanbod, Sadra},
  title =	{{The Computational Complexity of Genetic Diversity}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{65:1--65:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.65},
  URN =		{urn:nbn:de:0030-drops-64073},
  doi =		{10.4230/LIPIcs.ESA.2016.65},
  annote =	{Keywords: Dynamical Systems, Stability, Complexity, Optimization, Equilibrium}
}
Document
Approximation and Hardness of Token Swapping

Authors: Tillmann Miltzow, Lothar Narins, Yoshio Okamoto, Günter Rote, Antonis Thomas, and Takeaki Uno


Abstract
Given a graph G=(V,E) with V={1,...,n}, we place tokens T_1,...,T_n, one on each vertex. A swap is an exchange of tokens on adjacent vertices. We consider the algorithmic question of finding a shortest sequence of swaps such that token T_i ends up on vertex i. We achieve essentially matching upper and lower bounds, both for exact algorithms and for approximation algorithms. For exact algorithms, we rule out any 2^{o(n)}-time algorithm under the ETH. This is matched by a simple 2^{O(n*log(n))}-time algorithm based on a breadth-first search in an auxiliary graph. We give a general 4-approximation and show APX-hardness. Thus, there is a small constant delta > 1 such that every polynomial-time approximation algorithm has approximation factor at least delta. Our results also hold for a generalized version, where tokens and vertices are colored and each token must go to a vertex with the same color.
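
The exact upper bound comes from breadth-first search in the auxiliary graph whose nodes are token placements (n! of them, hence 2^{O(n log n)} time). A minimal sketch of ours, with 0-indexed vertices and tokens:

from collections import deque

def min_swaps(n, edges, start):
    # BFS in the auxiliary graph whose nodes are token placements
    # (tuples mapping position -> token). There are n! placements,
    # so the running time is 2^{O(n log n)}.
    target = tuple(range(n))
    start = tuple(start)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            return dist[cur]
        for u, v in edges:           # one swap per graph edge
            nxt = list(cur)
            nxt[u], nxt[v] = nxt[v], nxt[u]
            nxt = tuple(nxt)
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return None                      # graph is disconnected

# e.g. min_swaps(3, [(0, 1), (1, 2)], [2, 1, 0]) == 3 on a path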

Cite as

Tillmann Miltzow, Lothar Narins, Yoshio Okamoto, Günter Rote, Antonis Thomas, and Takeaki Uno. Approximation and Hardness of Token Swapping. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 66:1-66:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{miltzow_et_al:LIPIcs.ESA.2016.66,
  author =	{Miltzow, Tillmann and Narins, Lothar and Okamoto, Yoshio and Rote, G\"{u}nter and Thomas, Antonis and Uno, Takeaki},
  title =	{{Approximation and Hardness of Token Swapping}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{66:1--66:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.66},
  URN =		{urn:nbn:de:0030-drops-64084},
  doi =		{10.4230/LIPIcs.ESA.2016.66},
  annote =	{Keywords: token swapping, minimum generator sequence, graph theory, NP-hardness, approximation algorithms}
}
Document
A 7/3-Approximation for Feedback Vertex Sets in Tournaments

Authors: Matthias Mnich, Virginia Vassilevska Williams, and László A. Végh


Abstract
We consider the minimum-weight feedback vertex set problem in tournaments: given a tournament with non-negative vertex weights, remove a minimum-weight set of vertices that intersects all cycles. This problem is NP-hard to solve exactly, and Unique Games-hard to approximate to within a factor better than 2. We present the first 7/3-approximation algorithm for this problem, improving on the previously best-known ratio of 5/2 given by Cai et al. [FOCS 1998, SICOMP 2001].

Cite as

Matthias Mnich, Virginia Vassilevska Williams, and László A. Végh. A 7/3-Approximation for Feedback Vertex Sets in Tournaments. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mnich_et_al:LIPIcs.ESA.2016.67,
  author =	{Mnich, Matthias and Vassilevska Williams, Virginia and V\'{e}gh, L\'{a}szl\'{o} A.},
  title =	{{A 7/3-Approximation for Feedback Vertex Sets in Tournaments}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.67},
  URN =		{urn:nbn:de:0030-drops-64098},
  doi =		{10.4230/LIPIcs.ESA.2016.67},
  annote =	{Keywords: Approximation algorithms, feedback vertex sets, tournaments}
}
Document
Scheduling Distributed Clusters of Parallel Machines: Primal-Dual and LP-based Approximation Algorithms

Authors: Riley Murray, Megan Chao, and Samir Khuller


Abstract
The Map-Reduce computing framework rose to prominence with datasets of such size that dozens of machines on a single cluster were needed for individual jobs. As datasets approach the exabyte scale, a single job may need distributed processing not only on multiple machines, but on multiple clusters. We consider a scheduling problem to minimize the weighted average completion time of n jobs on m distributed clusters of parallel machines. In keeping with the scale of the problems motivating this work, we assume that (1) each job is divided into m "subjobs" and (2) distinct subjobs of a given job may be processed concurrently. When each cluster is a single machine, this is the NP-hard concurrent open shop problem. A clear limitation of such a model is that a serial processing assumption sidesteps the issue of how different tasks of a given subjob might be processed in parallel. Our algorithms explicitly model clusters as pools of resources and effectively overcome this issue. Under a variety of parameter settings, we develop two constant-factor approximation algorithms for this problem. The first algorithm uses an LP relaxation tailored to this problem from prior work. This LP-based algorithm provides strong performance guarantees. Our second algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster. This mapping-based algorithm is combinatorial and extremely fast. These are the first constant-factor approximations for this problem.

Cite as

Riley Murray, Megan Chao, and Samir Khuller. Scheduling Distributed Clusters of Parallel Machines: Primal-Dual and LP-based Approximation Algorithms. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 68:1-68:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{murray_et_al:LIPIcs.ESA.2016.68,
  author =	{Murray, Riley and Chao, Megan and Khuller, Samir},
  title =	{{Scheduling Distributed Clusters of Parallel Machines: Primal-Dual and LP-based Approximation Algorithms}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{68:1--68:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.68},
  URN =		{urn:nbn:de:0030-drops-64104},
  doi =		{10.4230/LIPIcs.ESA.2016.68},
  annote =	{Keywords: approximation algorithms, distributed computing, machine scheduling, LP relaxations, primal-dual algorithms}
}
Document
Finding Large Set Covers Faster via the Representation Method

Authors: Jesper Nederlof


Abstract
The worst-case fastest known algorithm for the Set Cover problem on universes with n elements is still essentially the simple O^*(2^n)-time dynamic programming algorithm, and no non-trivial consequences of an O^*(1.01^n)-time algorithm are known. Motivated by this chasm, we study the following natural question: which instances of Set Cover can we solve faster than by the simple dynamic programming algorithm? Specifically, we give a Monte Carlo algorithm that determines the existence of a set cover of size sigma*n in O^*(2^{(1-Omega(sigma^4))n}) time. Our approach is also applicable to Set Cover instances with exponentially many sets: by reducing the task of finding the chromatic number chi(G) of a given n-vertex graph G to Set Cover in the natural way, we show there is an O^*(2^{(1-Omega(sigma^4))n})-time randomized algorithm that, given integer s = sigma*n, outputs NO if chi(G) > s and YES with constant probability if chi(G) <= s - 1. On a high level, our results are inspired by the "representation method" of Howgrave-Graham and Joux [EUROCRYPT'10] and are obtained by evaluating only a randomly sampled subset of the table entries of a dynamic programming algorithm.
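
For orientation, the baseline the paper accelerates is the classic subset dynamic program below (a minimal sketch of ours; the representation-method speedup evaluates only a random sample of these table entries):

def min_set_cover(n, sets):
    # dp[S] = fewest sets needed to cover the element subset S,
    # with elements 0..n-1 and subsets encoded as bitmasks.
    masks = [sum(1 << e for e in s) for s in sets]
    INF = float('inf')
    dp = [INF] * (1 << n)
    dp[0] = 0
    for S in range(1 << n):
        if dp[S] == INF:
            continue
        for m in masks:              # extend the cover by one set
            T = S | m
            if dp[T] > dp[S] + 1:
                dp[T] = dp[S] + 1
    return dp[(1 << n) - 1]          # O*(2^n) time overall

# e.g. min_set_cover(3, [{0, 1}, {1, 2}, {2}]) == 2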

Cite as

Jesper Nederlof. Finding Large Set Covers Faster via the Representation Method. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 69:1-69:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{nederlof:LIPIcs.ESA.2016.69,
  author =	{Nederlof, Jesper},
  title =	{{Finding Large Set Covers Faster via the Representation Method}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{69:1--69:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.69},
  URN =		{urn:nbn:de:0030-drops-64114},
  doi =		{10.4230/LIPIcs.ESA.2016.69},
  annote =	{Keywords: Set Cover, Exact Exponential Algorithms, Fine-Grained Complexity}
}
Document
Graph Isomorphism for Unit Square Graphs

Authors: Daniel Neuen


Abstract
In the past decades, the Graph Isomorphism Problem has been shown to be solvable in polynomial time for more and more graph classes. An interesting family of graph classes arises from intersection graphs of geometric objects. In this work we show that the Graph Isomorphism Problem for unit square graphs, the intersection graphs of axis-parallel unit squares in the plane, can be solved in polynomial time. Since the recognition problem for this class of graphs is NP-hard, we cannot rely on standard techniques for geometric graphs based on constructing a canonical realization. Instead, we develop new techniques which combine structural insights into the class of unit square graphs with an understanding of the automorphism groups of such graphs. For the latter we introduce a generalization of bounded-degree graphs which is used to capture the main structure of unit square graphs. Using group-theoretic algorithms we obtain sufficient information to solve the isomorphism problem for unit square graphs.

Cite as

Daniel Neuen. Graph Isomorphism for Unit Square Graphs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 70:1-70:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{neuen:LIPIcs.ESA.2016.70,
  author =	{Neuen, Daniel},
  title =	{{Graph Isomorphism for Unit Square Graphs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{70:1--70:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.70},
  URN =		{urn:nbn:de:0030-drops-64120},
  doi =		{10.4230/LIPIcs.ESA.2016.70},
  annote =	{Keywords: graph isomorphism, geometric graphs, unit squares}
}
Document
The Alternating Stock Size Problem and the Gasoline Puzzle

Authors: Alantha Newman, Heiko Röglin, and Johanna Seif


Abstract
Given a set S of integers whose sum is zero, consider the problem of finding a permutation of these integers such that: (i) all prefixes of the ordering are non-negative, and (ii) the maximum value of a prefix sum is minimized. Kellerer et al. referred to this problem as the stock size problem and showed that it can be approximated to within 3/2. They also showed that an approximation ratio of 2 can be achieved via several simple algorithms. We consider a related problem, which we call the alternating stock size problem, where the number of positive and negative integers in the input set S are equal. The problem is the same as above, but we are additionally required to alternate the positive and negative numbers in the output ordering. This problem also has several simple 2-approximations. We show that it can be approximated to within 1.79. Then we show that this problem is closely related to an optimization version of the gasoline puzzle due to Lovász, in which we want to minimize the size of the gas tank necessary to go around the track. We present a 2-approximation for this problem, using a natural linear programming relaxation whose feasible solutions are doubly stochastic matrices. Our novel rounding algorithm is based on a transformation that yields another doubly stochastic matrix with special properties, from which we can extract a suitable permutation.
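
To make the objective concrete, here is a small evaluator (ours) for candidate orderings in the alternating variant; a feasible ordering alternates signs and keeps every prefix sum non-negative, and the objective is the largest prefix sum:

def alt_stock_value(order):
    # Returns the maximum prefix sum of the ordering, or None if the
    # ordering is infeasible (signs fail to alternate, or some prefix
    # sum goes negative). The input integers are assumed to sum to zero.
    if any((order[i] > 0) == (order[i + 1] > 0)
           for i in range(len(order) - 1)):
        return None
    s, best = 0, 0
    for x in order:
        s += x
        if s < 0:
            return None
        best = max(best, s)
    return best

# e.g. alt_stock_value([3, -1, 2, -4]) == 4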

Cite as

Alantha Newman, Heiko Röglin, and Johanna Seif. The Alternating Stock Size Problem and the Gasoline Puzzle. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 71:1-71:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{newman_et_al:LIPIcs.ESA.2016.71,
  author =	{Newman, Alantha and R\"{o}glin, Heiko and Seif, Johanna},
  title =	{{The Alternating Stock Size Problem and the Gasoline Puzzle}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{71:1--71:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.71},
  URN =		{urn:nbn:de:0030-drops-64134},
  doi =		{10.4230/LIPIcs.ESA.2016.71},
  annote =	{Keywords: approximation algorithms, stock size problem, scheduling with non-renewable resources}
}
Document
New Parameterized Algorithms for APSP in Directed Graphs

Authors: Ely Porat, Eduard Shahbazian, and Roei Tov


Abstract
All Pairs Shortest Path (APSP) is a classic problem in graph theory. While for general weighted graphs no algorithm is known that computes APSP in O(n^{3-epsilon}) time (epsilon > 0), using fast matrix multiplication we can compute APSP in O(n^{omega}*log(n)) time (omega < 2.373) for undirected unweighted graphs, and in O(n^{2.5302}) time for directed unweighted graphs. In the current state of affairs, there is a substantial gap between the upper bounds of the problem for undirected and directed graphs, and it has long remained an important open question whether it is possible to close this gap. In this paper we introduce a new parameter that measures the symmetry of directed graphs (i.e., their closeness to undirected graphs), and obtain a new parameterized APSP algorithm for directed unweighted graphs that generalizes Seidel's O(n^{omega}*log(n))-time algorithm for undirected unweighted graphs. Given a directed unweighted graph G, unless it is highly asymmetric, our algorithm computes APSP for G in o(n^{2.5}) time, providing for such graphs a faster APSP algorithm than the state of the art for the problem.
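
For reference, Seidel's algorithm for the undirected unweighted case, which the paper generalizes, fits in a dozen lines of numpy. A minimal sketch of ours, assuming a connected graph given as a 0/1 adjacency matrix with no self-loops (the integer-matrix bookkeeping behind the O(n^{omega}*log(n)) bound is glossed over):

import numpy as np

def seidel_apsp(A):
    # Distance matrix of a connected, undirected, unweighted graph.
    n = len(A)
    Z = A @ A
    B = np.where((A == 1) | (Z > 0), 1, 0)   # the squared graph
    np.fill_diagonal(B, 0)
    if B.sum() == n * (n - 1):               # squared graph complete:
        return 2 * B - A                     # all distances are 1 or 2
    T = seidel_apsp(B)                       # distances in squared graph
    X = T @ A
    deg = A.sum(axis=1)
    # D[i,j] = 2*T[i,j] - 1 exactly when X[i,j] < T[i,j] * deg[j],
    # i.e. when the true distance has odd parity.
    return 2 * T - (X < T * deg)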

Cite as

Ely Porat, Eduard Shahbazian, and Roei Tov. New Parameterized Algorithms for APSP in Directed Graphs. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 72:1-72:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{porat_et_al:LIPIcs.ESA.2016.72,
  author =	{Porat, Ely and Shahbazian, Eduard and Tov, Roei},
  title =	{{New Parameterized Algorithms for APSP in Directed Graphs}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{72:1--72:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.72},
  URN =		{urn:nbn:de:0030-drops-64207},
  doi =		{10.4230/LIPIcs.ESA.2016.72},
  annote =	{Keywords: Graphs, distances, APSP, fast matrix multiplication}
}
Document
Online Budgeted Maximum Coverage

Authors: Dror Rawitz and Adi Rosén


Abstract
We study the Online Budgeted Maximum Coverage (OBMC) problem. Subsets of a weighted ground set U arrive one by one, where each set has a cost. The online algorithm has to select a collection of sets, under the constraint that their cost is at most a given budget. Upon arrival of a set, the algorithm must decide whether to accept or reject it, and it may also drop previously accepted sets (preemption). Rejecting or dropping a set is irrevocable. The goal is to maximize the total weight of the elements covered by the sets in the chosen collection. We present a deterministic 4/(1-r)-competitive algorithm for OBMC, where r is the maximum ratio between the cost of a set and the total budget. Building on that algorithm, we then present a randomized O(1)-competitive algorithm for OBMC. On the other hand, we show that the competitive ratio of any deterministic online algorithm is Omega(1/(sqrt{1-r})). We also give a deterministic O(Delta)-competitive algorithm, where Delta is the maximum weight of a set (given that the minimum element weight is 1), and if the total weight of all elements, w(U), is known in advance, we show that a slight modification of that algorithm is O(min{Delta,sqrt{w(U)}})-competitive. A matching lower bound of Omega(min{Delta,sqrt{w(U)}}) is also given. Prior to the present work, only the unit-cost version of OBMC had been studied in the online setting, yielding a 4-competitive algorithm [Saha, Getoor, 2009]. Finally, our results, including the lower bounds, apply to Removable Online Knapsack, which is the preemptive version of the Online Knapsack problem.
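
For concreteness, the objective and the budget constraint can be stated in a few lines of code (an evaluator of ours, not the online algorithm from the paper):

def coverage_value(chosen, weights, budget):
    # Total weight of the elements covered by the chosen sets,
    # provided their total cost respects the budget; None otherwise.
    # chosen: list of (cost, elements); weights: element -> weight.
    if sum(cost for cost, _ in chosen) > budget:
        return None
    covered = set()
    for _, elements in chosen:
        covered |= set(elements)
    return sum(weights[e] for e in covered)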

Cite as

Dror Rawitz and Adi Rosén. Online Budgeted Maximum Coverage. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 73:1-73:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{rawitz_et_al:LIPIcs.ESA.2016.73,
  author =	{Rawitz, Dror and Ros\'{e}n, Adi},
  title =	{{Online Budgeted Maximum Coverage}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{73:1--73:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.73},
  URN =		{urn:nbn:de:0030-drops-64146},
  doi =		{10.4230/LIPIcs.ESA.2016.73},
  annote =	{Keywords: budgeted coverage, maximum coverage, online algorithms, competitive analysis, removable online knapsack}
}
Document
Min-Sum Scheduling Under Precedence Constraints

Authors: Andreas S. Schulz and José Verschae


Abstract
In many scheduling situations, it is important to consider non-linear functions of job completion times in the objective. This was already recognized by Smith (1956). Recently, the theory community has begun a thorough study of the resulting problems, mostly on single-machine instances for which all permutations of jobs are feasible. However, a typical feature of many scheduling problems is that some jobs can only be processed after others. In this paper, we give the first approximation algorithms for min-sum scheduling with (nonnegative, non-decreasing) non-linear functions and general precedence constraints. In particular, for 1|prec|sum w_j f(C_j), we propose a polynomial-time universal algorithm that performs well for all functions f simultaneously. Its approximation guarantee is at most 2 for all concave functions. We also provide a (non-universal) polynomial-time algorithm for the more general case 1|prec|sum f_j(C_j). The performance guarantee is no worse than 2+epsilon for all concave functions. Our results match the best bounds known for the case of linear functions, a widely studied problem, and considerably extend the results for minimizing sum w_j f(C_j) without precedence constraints.
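
In the standard three-field notation used above, 1|prec|sum w_j f(C_j) asks for a single-machine schedule respecting the precedence order; written out (our formalization):

  min sum_j w_j f(C_j)   s.t.   C_j >= C_i + p_j whenever i precedes j,

where C_j is the completion time of job j in a non-preemptive single-machine schedule (one job at a time, C_j >= p_j), p_j is its processing time, and f is nonnegative and non-decreasing. The universal guarantee means one fixed schedule achieves the stated factor simultaneously for every such concave f.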

Cite as

Andreas S. Schulz and José Verschae. Min-Sum Scheduling Under Precedence Constraints. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 74:1-74:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{schulz_et_al:LIPIcs.ESA.2016.74,
  author =	{Schulz, Andreas S. and Verschae, Jos\'{e}},
  title =	{{Min-Sum Scheduling Under Precedence Constraints}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{74:1--74:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.74},
  URN =		{urn:nbn:de:0030-drops-64157},
  doi =		{10.4230/LIPIcs.ESA.2016.74},
  annote =	{Keywords: scheduling, approximation algorithms, linear programming relaxations, precedence constraints}
}
Document
The Power of Migration for Online Slack Scheduling

Authors: Chris Schwiegelshohn and Uwe Schwiegelshohn


Abstract
We investigate the power of migration in online scheduling on parallel identical machines. Our objective is to maximize the total processing time of accepted jobs. Once we decide to accept a job, we have to complete it before its deadline d, which satisfies d >= (1+epsilon)p + r, where p is the processing time, r the submission time, and the slack epsilon > 0 a system parameter. Typically, the hard case arises for small slack epsilon << 1, i.e., for near-tight deadlines. Without migration, a greedy acceptance policy is known to be an optimal deterministic online algorithm, with a competitive factor of (1+epsilon)/epsilon (DasGupta and Palis, APPROX 2000). Our first contribution is to show that migration does not improve the competitive ratio of the greedy acceptance policy, i.e., the competitive ratio remains (1+epsilon)/epsilon for any number of machines. Our main contribution is a deterministic online algorithm with an almost tight competitive ratio on any number of machines. For a single machine, the competitive factor matches the optimal bound of (1+epsilon)/epsilon of the greedy acceptance policy. The competitive ratio improves with an increasing number of machines. It approaches (1+epsilon) ln((1+epsilon)/epsilon) as the number of machines converges to infinity. This is an exponential improvement over the greedy acceptance policy for small epsilon. Moreover, we show a matching lower bound on the competitive ratio for deterministic algorithms on any number of machines.
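
A minimal non-preemptive reading of the greedy acceptance policy (our simplification; the full model also allows preemption, which this sketch omits): accept a job exactly when some machine can still finish it by its deadline.

def greedy_accept(jobs, m):
    # jobs: list of (r, p, d) sorted by release time r; m machines.
    # Returns the total processing time of accepted jobs.
    free = [0.0] * m                 # time at which each machine idles
    value = 0.0
    for r, p, d in jobs:
        i = min(range(m), key=free.__getitem__)
        start = max(free[i], r)
        if start + p <= d:           # can still meet the deadline
            free[i] = start + p
            value += p
        # otherwise the job is rejected irrevocably
    return value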

Cite as

Chris Schwiegelshohn and Uwe Schwiegelshohn. The Power of Migration for Online Slack Scheduling. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 75:1-75:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{schwiegelshohn_et_al:LIPIcs.ESA.2016.75,
  author =	{Schwiegelshohn, Chris and Schwiegelshohn, Uwe},
  title =	{{The Power of Migration for Online Slack Scheduling}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{75:1--75:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.75},
  URN =		{urn:nbn:de:0030-drops-64162},
  doi =		{10.4230/LIPIcs.ESA.2016.75},
  annote =	{Keywords: Online scheduling, deadlines, preemption with migration, competitive analysis}
}
Document
Sampling-Based Bottleneck Pathfinding with Applications to Fréchet Matching

Authors: Kiril Solovey and Dan Halperin


Abstract
We describe a general probabilistic framework to address a variety of Fréchet-distance optimization problems. Specifically, we are interested in finding minimal bottleneck-paths in d-dimensional Euclidean space between given start and goal points, namely paths that minimize the maximal value over a continuous cost map. We present an efficient and simple sampling-based framework for this problem, which is inspired by, and draws ideas from, techniques for robot motion planning. We extend the framework to handle not only standard bottleneck pathfinding, but also the more demanding case, where the path needs to be monotone in all dimensions. Finally, we provide experimental results of the framework on several types of problems.
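
A minimal sketch of the sampling-based idea (ours; the parameters n and radius are illustrative, and the monotone variant is omitted): sample the space, connect nearby samples, and run a minimax variant of Dijkstra that minimizes the largest cost-map value along the path.

import heapq, math, random

def bottleneck_path_value(cost, start, goal, n=500, radius=0.1):
    # cost: continuous cost map on [0,1]^2, evaluated at the samples.
    pts = [start, goal] + [(random.random(), random.random())
                           for _ in range(n)]
    def neighbours(i):
        x, y = pts[i]
        return [j for j, (a, b) in enumerate(pts)
                if j != i and math.hypot(a - x, b - y) <= radius]
    best = {0: cost(*pts[0])}            # best bottleneck reaching node
    heap = [(best[0], 0)]
    while heap:
        b, i = heapq.heappop(heap)
        if i == 1:
            return b                     # bottleneck value start -> goal
        if b > best.get(i, math.inf):
            continue                     # stale heap entry
        for j in neighbours(i):
            nb = max(b, cost(*pts[j]))   # path maximum, not path length
            if nb < best.get(j, math.inf):
                best[j] = nb
                heapq.heappush(heap, (nb, j))
    return math.inf                      # goal unreachable at this radius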

Cite as

Kiril Solovey and Dan Halperin. Sampling-Based Bottleneck Pathfinding with Applications to Fréchet Matching. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 76:1-76:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{solovey_et_al:LIPIcs.ESA.2016.76,
  author =	{Solovey, Kiril and Halperin, Dan},
  title =	{{Sampling-Based Bottleneck Pathfinding with Applications to Fr\'{e}chet Matching}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{76:1--76:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.76},
  URN =		{urn:nbn:de:0030-drops-64179},
  doi =		{10.4230/LIPIcs.ESA.2016.76},
  annote =	{Keywords: Computational geometry, Fr\'{e}chet distances, sampling-based algorithms, random geometric graphs, bottleneck pathfinding}
}
Document
On the Geodesic Centers of Polygonal Domains

Authors: Haitao Wang


Abstract
In this paper, we study the problem of computing Euclidean geodesic centers of a polygonal domain P of n vertices. We give a necessary condition for a point being a geodesic center. We show that there is at most one geodesic center among all points of P that have topologically-equivalent shortest path maps. This implies that the total number of geodesic centers is bounded by the size of the shortest path map equivalence decomposition of P, which is known to be O(n^{10}). One key observation is a pi-range property on shortest path lengths when points are moving. With these observations, we propose an algorithm that computes all geodesic centers in O(n^{11}*log(n)) time. Previously, an algorithm of O(n^{12+epsilon}) time was known for this problem, for any epsilon > 0.

Cite as

Haitao Wang. On the Geodesic Centers of Polygonal Domains. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 77:1-77:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{wang:LIPIcs.ESA.2016.77,
  author =	{Wang, Haitao},
  title =	{{On the Geodesic Centers of Polygonal Domains}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{77:1--77:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.77},
  URN =		{urn:nbn:de:0030-drops-64189},
  doi =		{10.4230/LIPIcs.ESA.2016.77},
  annote =	{Keywords: geodesic centers, shortest paths, polygonal domains}
}
Document
The Complexity of the k-means Method

Authors: Tim Roughgarden and Joshua R. Wang


Abstract
The k-means method is a widely used technique for clustering points in Euclidean space. While it is extremely fast in practice, its worst-case running time is exponential in the number of data points. We prove that the k-means method can implicitly solve PSPACE-complete problems, providing a complexity-theoretic explanation for its worst-case running time. Our result parallels recent work on the complexity of the simplex method for linear programming.
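
For reference, the k-means method (Lloyd's method) alternates two steps until no center moves; a minimal numpy sketch:

import numpy as np

def kmeans(X, k, iters=100, seed=0):
    # Lloyd's method: assign each point to its nearest center, then
    # move each center to the mean of its cluster; repeat until stable.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0)
                        if (labels == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break                        # reached a local optimum
        centers = new
    return centers, labels

# Each iteration is fast, but the number of iterations can be
# exponential in the number of points in the worst case, which is the
# behavior the paper explains complexity-theoretically.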

Cite as

Tim Roughgarden and Joshua R. Wang. The Complexity of the k-means Method. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 78:1-78:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{roughgarden_et_al:LIPIcs.ESA.2016.78,
  author =	{Roughgarden, Tim and Wang, Joshua R.},
  title =	{{The Complexity of the k-means Method}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{78:1--78:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.78},
  URN =		{urn:nbn:de:0030-drops-64191},
  doi =		{10.4230/LIPIcs.ESA.2016.78},
  annote =	{Keywords: k-means, PSPACE-complete}
}
