eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
0
0
10.4230/LIPIcs.SWAT.2016
article
LIPIcs, Volume 53, SWAT'16, Complete Volume
Pagh, Rasmus
LIPIcs, Volume 53, SWAT'16, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016/LIPIcs.SWAT.2016.pdf
Theory of Computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
0:i
0:xiv
10.4230/LIPIcs.SWAT.2016.0
article
Front Matter, Table of Contents, Preface, Program Committee, Subreviewers
Pagh, Rasmus
Front Matter, Table of Contents, Preface, Program Committee, Subreviewers
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.0/LIPIcs.SWAT.2016.0.pdf
Front Matter
Table of Contents
Preface
Program Committee
Subreviewers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
1:1
1:14
10.4230/LIPIcs.SWAT.2016.1
article
Approximating Connected Facility Location with Lower and Upper Bounds via LP Rounding
Friggstad, Zachary
Rezapour, Mohsen
Salavatipour, Mohammad R.
We consider a lower- and upper-bounded generalization of the classical facility location problem, where each facility has a capacity (upper bound) that limits the number of clients it can serve and a lower bound on the number of clients it must serve if it is opened. We develop an LP rounding framework that exploits a Voronoi diagram-based clustering approach to derive the first bicriteria constant approximation algorithm for this problem with non-uniform lower bounds and uniform upper bounds. This naturally leads to the first LP-based approximation algorithm for the lower-bounded facility location problem (with non-uniform lower bounds).
We also demonstrate the versatility of our framework by extending it to give the first constant approximation algorithm for a connected variant of these problems, in which the facilities are required to be connected as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.1/LIPIcs.SWAT.2016.1.pdf
Facility Location
Approximation Algorithm
LP Rounding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
2:1
2:14
10.4230/LIPIcs.SWAT.2016.2
article
Approximation Algorithms for Node-Weighted Prize-Collecting Steiner Tree Problems on Planar Graphs
Byrka, Jaroslaw
Lewandowski, Mateusz
Moldenhauer, Carsten
We study the prize-collecting version of the node-weighted Steiner tree problem (NWPCST) restricted to planar graphs. We give a new primal-dual Lagrangian-multiplier-preserving (LMP) 3-approximation algorithm for planar NWPCST. We then show a 2.88-approximation which establishes a new best approximation guarantee for planar NWPCST. This is done by combining our LMP algorithm with a threshold rounding technique and utilizing the 2.4-approximation of Berman and Yaroslavtsev [6] for the version without penalties. We also give a primal-dual 4-approximation algorithm for the more general forest version using techniques introduced by Hajiaghayi and Jain [17].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.2/LIPIcs.SWAT.2016.2.pdf
approximation algorithms
Node-Weighted Steiner Tree
primal-dual algorithm
LMP
planar graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
3:1
3:11
10.4230/LIPIcs.SWAT.2016.3
article
A Logarithmic Integrality Gap Bound for Directed Steiner Tree in Quasi-bipartite Graphs
Friggstad, Zachary
Könemann, Jochen
Shadravan, Mohammad
We demonstrate that the integrality gap of the natural cut-based LP relaxation for the directed Steiner tree problem is O(log k) in quasi-bipartite graphs with k terminals. Such instances can be seen to generalize set cover, so the integrality gap analysis is tight up to a constant factor. A novel aspect of our approach is that we use the primal-dual method; a technique that is rarely used in designing approximation algorithms for network design problems in directed graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.3/LIPIcs.SWAT.2016.3.pdf
Approximation algorithm
Primal-Dual algorithm
Directed Steiner tree
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
4:1
4:14
10.4230/LIPIcs.SWAT.2016.4
article
A Linear Kernel for Finding Square Roots of Almost Planar Graphs
Golovach, Petr A.
Kratsch, Dieter
Paulusma, Daniël
Stewart, Anthony
A graph H is a square root of a graph G if G can be obtained from H by the addition of edges between any two vertices in H that are at distance 2 from each other. The Square Root problem is that of deciding whether a given graph admits a square root. We consider this problem for planar graphs in the context of the "distance from triviality" framework. For an integer k, a planar+kv graph is a graph that can be made planar by the removal of at most k vertices. We prove that the generalization of Square Root, in which we are given two subsets of edges prescribed to be in or out of a square root, respectively, has a kernel of size O(k) for planar+kv graphs, when parameterized by k. Our result is based on a new edge reduction rule which, as we shall also show, has a wider applicability for the Square Root problem.
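The squaring operation in the definition above is easy to state concretely. The sketch below (illustrative only, not the paper's kernelization) computes the edge set of H^2, so H is a square root of G exactly when E(G) equals this set:

```python
from itertools import combinations

def graph_square(n, edges):
    """Return the edge set of H^2 for a graph H on vertices 0..n-1.

    H^2 adds an edge between any two vertices at distance exactly 2 in H,
    so H is a square root of G iff E(G) == graph_square(n, E(H)).
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # keep the original edges, then add distance-2 pairs
    sq = {frozenset(e) for e in edges}
    for u, v in combinations(range(n), 2):
        if adj[u] & adj[v]:  # a common neighbour => distance <= 2
            sq.add(frozenset((u, v)))
    return {tuple(sorted(e)) for e in sq}
```

For the path 0-1-2-3, the square adds the chords (0,2) and (1,3); deciding the converse direction (whether a root H exists for a given G) is the hard problem studied in the paper.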
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.4/LIPIcs.SWAT.2016.4.pdf
planar graphs
square roots
linear kernel
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
5:1
5:14
10.4230/LIPIcs.SWAT.2016.5
article
Linear-Time Recognition of Map Graphs with Outerplanar Witness
Mnich, Matthias
Rutter, Ignaz
Schmidt, Jens M.
Map graphs generalize planar graphs and were introduced by Chen, Grigni and Papadimitriou [STOC 1998, J.ACM 2002]. They showed that the problem of recognizing map graphs is in NP by proving the existence of a planar witness graph W. Shortly after, Thorup [FOCS 1998] published a polynomial-time recognition algorithm for map graphs. However, the run time of this algorithm is estimated to be Omega(n^{120}) for n-vertex graphs, and a full description of its details remains unpublished.
We give a new and purely combinatorial algorithm that decides whether a graph G is a map graph having an outerplanar witness W. This is a step towards a first combinatorial recognition algorithm for general map graphs. The algorithm runs in time and space O(n+m). In contrast to Thorup's approach, it computes the witness graph W in the affirmative case.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.5/LIPIcs.SWAT.2016.5.pdf
Algorithms and data structures
map graphs
recognition
planar graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
6:1
6:15
10.4230/LIPIcs.SWAT.2016.6
article
The p-Center Problem in Tree Networks Revisited
Banik, Aritra
Bhattacharya, Binay
Das, Sandip
Kameda, Tsunehiko
Song, Zhao
We present two improved algorithms for the weighted discrete p-center problem for tree networks with n vertices. One of our proposed algorithms runs in O(n*log(n) + p*log^2(n) * log(n/p)) time. For all values of p, our algorithm thus runs as fast as or faster than the most efficient O(n*log^2(n)) time algorithm obtained by applying Cole's [1987] speed-up technique to the algorithm due to Megiddo and Tamir [1983], which has remained unchallenged for nearly 30 years.
Our other algorithm, which is more practical, runs in O(n*log(n) + p^2*log^2(n/p)) time, and when p=O(sqrt(n)) it is faster than Megiddo and Tamir's O(n*log^2(n) * log(log(n))) time algorithm [1983].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.6/LIPIcs.SWAT.2016.6.pdf
Facility location
p-center
parametric search
tree network
sorting network
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
7:1
7:13
10.4230/LIPIcs.SWAT.2016.7
article
A Simple Mergeable Dictionary
Karczmarz, Adam
A mergeable dictionary is a data structure storing a dynamic subset S of a totally ordered set U and supporting predecessor searches in S. Apart from insertions and deletions to S, we can both merge two arbitrarily interleaved dictionaries and split a given dictionary around some pivot x. We present an implementation of a mergeable dictionary matching the optimal amortized logarithmic bounds of Iacono and Özkan [Iacono/Özkan, ICALP'10]. However, our solution is significantly simpler. The proposed data structure can also be generalized to the case when the universe U is dynamic or infinite, thus addressing one issue of [Iacono/Özkan, ICALP'10].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.7/LIPIcs.SWAT.2016.7.pdf
dictionary
mergeable
data structure
merge
split
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
8:1
8:12
10.4230/LIPIcs.SWAT.2016.8
article
Cuckoo Filter: Simplification and Analysis
Eppstein, David
The cuckoo filter data structure of Fan, Andersen, Kaminsky, and Mitzenmacher (CoNEXT 2014) performs the same approximate set operations as a Bloom filter in less memory, with better locality of reference, and adds the ability to delete elements as well as to insert them. However, until now it has lacked theoretical guarantees on its performance. We describe a simplified version of the cuckoo filter using fewer hash function calls per query. With this simplification, we provide the first theoretical performance guarantees on cuckoo filters, showing that they succeed with high probability whenever their fingerprint length is large enough.
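The mechanics behind the abstract can be sketched in a few lines. This is a minimal illustration of the generic cuckoo filter of Fan et al. (not Eppstein's simplified variant); the bucket count, bucket size, fingerprint length, and eviction limit are all illustrative assumptions:

```python
import hashlib
import random

class CuckooFilterSketch:
    """Minimal cuckoo-filter sketch: fingerprints in buckets of size 4.

    Each item has two candidate buckets; the second is derived from the
    first by XOR-ing with a hash of the fingerprint (partial-key cuckoo
    hashing), which is what enables deletion, unlike a Bloom filter.
    """
    def __init__(self, num_buckets=1024, bucket_size=4, fp_bits=12, max_kicks=500):
        assert num_buckets & (num_buckets - 1) == 0  # power of two: XOR trick stays symmetric
        self.m, self.b, self.fp_bits, self.max_kicks = num_buckets, bucket_size, fp_bits, max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, data):
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    def _fingerprint(self, item):
        fp = self._hash(item.encode()) % (2 ** self.fp_bits)
        return fp or 1  # reserve 0 as "absent"

    def _buckets_for(self, item):
        fp = self._fingerprint(item)
        i1 = self._hash(("loc:" + item).encode()) % self.m
        i2 = (i1 ^ self._hash(str(fp).encode())) % self.m
        return fp, i1, i2

    def insert(self, item):
        fp, i1, i2 = self._buckets_for(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        # both buckets full: evict a resident fingerprint and relocate it
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ self._hash(str(fp).encode())) % self.m  # partner bucket of evicted fp
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        return False  # table too full

    def lookup(self, item):
        fp, i1, i2 = self._buckets_for(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item):
        fp, i1, i2 = self._buckets_for(item)
        for i in (i1, i2):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```

The paper's analysis concerns exactly when the eviction loop succeeds with high probability as a function of the fingerprint length; the sketch above just exhibits the insert/lookup/delete interface.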
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.8/LIPIcs.SWAT.2016.8.pdf
approximate set
Bloom filter
cuckoo filter
cuckoo hashing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
9:1
9:14
10.4230/LIPIcs.SWAT.2016.9
article
Randomized Algorithms for Finding a Majority Element
Gawrychowski, Pawel
Suomela, Jukka
Uznanski, Przemyslaw
Given n colored balls, we want to detect if more than n/2 of them have the same color, and if so find one ball with such majority color. We are only allowed to choose two balls and compare their colors, and the goal is to minimize the total number of such operations. A well-known exercise is to show how to find such a ball with only 2n comparisons while using only a logarithmic number of bits for bookkeeping. The resulting algorithm is called the Boyer-Moore majority vote algorithm. It is known that any deterministic method needs 3n/2-2 comparisons in the worst case, and this is tight. However, it is not clear how many comparisons are required if we allow randomization. We construct a randomized algorithm which always correctly finds a ball of the majority color (or detects that there is none) using, with high probability, only 7n/6+o(n) comparisons. We also prove that the expected number of comparisons used by any such randomized method is at least 1.019n.
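The deterministic 2n-comparison baseline mentioned in the abstract is the classic Boyer-Moore majority vote: one candidate-selection pass plus one verification pass. A sketch in the abstract's comparison model (the function names are illustrative):

```python
def majority_candidate(balls, same_color):
    """Boyer-Moore majority vote in the pairwise-comparison model.

    `same_color(a, b)` is the only allowed operation. Returns the index
    of a ball of the majority color, or None if no color covers more
    than half the balls; about 2n comparisons in total.
    """
    candidate, count = None, 0
    # pass 1: maintain a candidate and a counter
    for i in range(len(balls)):
        if count == 0:
            candidate, count = i, 1
        elif same_color(balls[candidate], balls[i]):
            count += 1
        else:
            count -= 1
    if candidate is None:
        return None
    # pass 2: verify that the candidate's color really is a majority
    matches = sum(1 for b in balls if same_color(balls[candidate], b))
    return candidate if 2 * matches > len(balls) else None
```

The paper's contribution is a randomized algorithm beating this (and the deterministic 3n/2-2 lower bound) with 7n/6+o(n) comparisons with high probability; the sketch above is only the textbook reference point.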
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.9/LIPIcs.SWAT.2016.9.pdf
majority
randomized algorithms
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
10:1
10:14
10.4230/LIPIcs.SWAT.2016.10
article
A Framework for Dynamic Parameterized Dictionary Matching
Ganguly, Arnab
Hon, Wing-Kai
Shah, Rahul
Two equal-length strings S and S' are a parameterized-match (p-match) iff there exists a one-to-one function that renames the characters in S to those in S'. Let P be a collection of d patterns of total length n characters that are chosen from an alphabet Sigma of cardinality sigma. The task is to index P such that we can support the following operations.
* search(T): given a text T, report all occurrences <j,P_i> such that there exists a pattern P_i in P that is a p-match with the substring T[j,j+|P_i|-1].
* ins(P_i)/del(P_i): modify the index when a pattern P_i is inserted/deleted.
We present a linear-space index that occupies O(n*log n) bits and supports (i) search(T) in worst-case O(|T|*log^2 n + occ) time, where occ is the number of occurrences reported, and (ii) ins(P_i) and del(P_i) in amortized O(|P_i|*polylog(n)) time.
Then, we present a succinct index that occupies (1+o(1))n*log sigma + O(d*log n) bits and supports (i) search(T) in worst-case O(|T|*log^2 n + occ) time, and (ii) ins(P_i) and del(P_i) in amortized O(|P_i|*polylog(n)) time. We also present results related to the semi-dynamic variant of the problem, where deletion is not allowed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.10/LIPIcs.SWAT.2016.10.pdf
Parameterized Dictionary Indexing
Generalized Suffix Tree
Succinct Data Structures
Sparsification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
11:1
11:14
10.4230/LIPIcs.SWAT.2016.11
article
Efficient Summing over Sliding Windows
Ben Basat, Ran
Einziger, Gil
Friedman, Roy
Kassner, Yaron
This paper considers the problem of maintaining statistic aggregates over the last W elements of a data stream. First, the problem of counting the number of 1's in the last W bits of a binary stream is considered. A lower bound of Omega(1/epsilon + log(W)) memory bits for W*epsilon-additive approximations is derived. This is followed by an algorithm whose memory consumption is O(1/epsilon + log(W)) bits, indicating that the algorithm is optimal and that the bound is tight. Next, the more general problem of maintaining a sum of the last W integers, each in the range of {0, 1, ..., R}, is addressed. The paper shows that approximating the sum within an additive error of RW epsilon can also be done using Theta(1/epsilon + log(W)) bits for epsilon = Omega(1/W). For epsilon = o(1/W), we present a succinct algorithm which uses B(1 + o(1)) bits, where B = Theta(W*log(1/(W*epsilon))) is the derived lower bound. We show that all lower bounds generalize to randomized algorithms as well. All algorithms process new elements and answer queries in O(1) worst-case time.
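For contrast with the paper's Theta(1/epsilon + log W)-bit approximate counters, the exact version of the first problem is the naive O(W)-memory reference below (a baseline sketch, not one of the paper's algorithms):

```python
from collections import deque

class WindowBitSum:
    """Exact sum of the last W bits of a stream, using O(W) memory.

    The paper's point is that allowing a W*epsilon additive error drops
    the memory need from W bits to Theta(1/epsilon + log W) bits; this
    class is the exact baseline such algorithms are measured against.
    """
    def __init__(self, W):
        self.W, self.buf, self.total = W, deque(), 0

    def add(self, bit):
        # append the new bit and expire the one that left the window
        self.buf.append(bit)
        self.total += bit
        if len(self.buf) > self.W:
            self.total -= self.buf.popleft()

    def query(self):
        return self.total
```

Both `add` and `query` run in O(1) worst-case time, matching the time (though of course not the space) guarantee stated in the abstract.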
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.11/LIPIcs.SWAT.2016.11.pdf
Streaming
Statistics
Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
12:1
12:10
10.4230/LIPIcs.SWAT.2016.12
article
Lower Bounds for Approximation Schemes for Closest String
Cygan, Marek
Lokshtanov, Daniel
Pilipczuk, Marcin
Pilipczuk, Michal
Saurabh, Saket
In the Closest String problem one is given a family S of equal-length strings over some fixed alphabet, and the task is to find a string y that minimizes the maximum Hamming distance between y and a string from S. While polynomial-time approximation schemes (PTASes) for this problem have been known for a long time [Li et al.; J. ACM'02], no efficient polynomial-time approximation scheme (EPTAS) has been proposed so far. In this paper, we prove that the existence of an EPTAS for Closest String is in fact unlikely, as it would imply that FPT=W[1], a highly unexpected collapse in the hierarchy of parameterized complexity classes. Our proof also shows that the existence of a PTAS for Closest String with running time f(eps)*n^(o(1/eps)), for any computable function f, would contradict the Exponential Time Hypothesis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.12/LIPIcs.SWAT.2016.12.pdf
closest string
PTAS
efficient PTAS
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
13:1
13:9
10.4230/LIPIcs.SWAT.2016.13
article
Coloring Graphs Having Few Colorings Over Path Decompositions
Björklund, Andreas
Lokshtanov, Marx, and Saurabh [SODA 2011] proved that, unless the Strong Exponential Time Hypothesis (SETH) is false, there is no (k-epsilon)^pw(G)*poly(n) time algorithm for deciding if an n-vertex graph G with pathwidth pw(G) admits a proper vertex coloring with k colors, for any constant epsilon>0. We show here that nevertheless, when k > lfloor Delta/2 rfloor + 1, where Delta is the maximum degree in the graph G, there is a better algorithm, at least when there are few colorings. We present a Monte Carlo algorithm that, given a graph G along with a path decomposition of G with pathwidth pw(G), runs in (lfloor Delta/2 rfloor + 1)^pw(G)*poly(n)*s time and distinguishes between k-colorable graphs having at most s proper k-colorings and non-k-colorable graphs. We also show how to obtain a k-coloring in the same asymptotic running time. Our algorithm does not violate SETH, since high-degree vertices still cost too much and the hardness construction mentioned above uses many of them.
We exploit a new variation of the famous Alon-Tarsi theorem that has an algorithmic advantage over the original form. The original theorem shows that if a graph has an orientation with outdegree less than k at every vertex in which the numbers of odd and even Eulerian subgraphs differ, then the graph is k-colorable; however, there is no known way of efficiently finding such an orientation. Our new form shows that if we instead count another difference of even and odd subgraphs, meeting modular degree constraints picked uniformly at random at every vertex, we have a fair chance of getting a non-zero value if the graph has few k-colorings. Yet every non-k-colorable graph gives a zero difference, so a random set of constraints stands a good chance of being useful for separating the two cases.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.13/LIPIcs.SWAT.2016.13.pdf
Graph vertex coloring
path decomposition
Alon-Tarsi theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
14:1
14:14
10.4230/LIPIcs.SWAT.2016.14
article
Parameterized Algorithms for Recognizing Monopolar and 2-Subcolorable Graphs
Kanj, Iyad
Komusiewicz, Christian
Sorge, Manuel
van Leeuwen, Erik Jan
We consider the recognition problem for two graph classes that generalize split and unipolar graphs, respectively.
First, we consider the recognizability of graphs that admit a monopolar partition: a partition of the vertex set into sets A,B such that G[A] is a disjoint union of cliques and G[B] is an independent set. If in such a partition G[A] is a single clique, then G is a split graph. We show that in
O(2^k * k^3 * (|V(G)| + |E(G)|)) time we can decide whether G admits a monopolar partition
(A,B) where G[A] has at most k cliques. This generalizes the linear-time algorithm for recognizing split graphs corresponding to the case when k=1.
Second, we consider the recognizability of graphs that admit a 2-subcoloring: a partition of the vertex set into sets A,B such that each of G[A] and G[B] is a disjoint union of cliques. If in such a partition G[A] is a single clique, then G is a unipolar graph. We show that in
O(k^(2k+2) * (|V(G)|^2+|V(G)| * |E(G)|)) time we can decide whether G admits a
2-subcoloring (A,B) where G[A] has at most k cliques. This generalizes the polynomial-time algorithm for recognizing unipolar graphs corresponding to the case when k=1.
We also show that in O(4^k) time we can decide whether G admits a 2-subcoloring (A,B) where G[A] and G[B] have at most k cliques in total.
To obtain the first two results above, we formalize a technique, which we dub inductive recognition, that can
be viewed as an adaptation of iterative compression to recognition problems. We believe that the formalization
of this technique will prove useful in general for designing parameterized algorithms for recognition problems.
Finally, we show that, unless the Exponential Time Hypothesis fails, no subexponential-time algorithms for the
above recognition problems exist, and that, unless P=NP, no generic fixed-parameter algorithm exists for the
recognizability of graphs whose vertex set can be bipartitioned such that one part is a disjoint union of k
cliques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.14/LIPIcs.SWAT.2016.14.pdf
vertex-partition problems
monopolar graphs
subcolorings
split graphs
unipolar graphs
fixed-parameter algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
15:1
15:15
10.4230/LIPIcs.SWAT.2016.15
article
On Routing Disjoint Paths in Bounded Treewidth Graphs
Ene, Alina
Mnich, Matthias
Pilipczuk, Marcin
Risteski, Andrej
We study the problem of routing on disjoint paths in bounded treewidth graphs with both edge and node capacities. The input consists of a capacitated graph G and a collection of k source-destination pairs M = (s_1, t_1), ..., (s_k, t_k). The goal is to maximize the number of pairs that can be routed subject to the capacities in the graph. A routing of a subset M' of the pairs is a collection P of paths such that, for each pair (s_i, t_i) in M', there is a path in P connecting s_i to t_i. In the Maximum Edge Disjoint Paths (MaxEDP) problem, the graph G has capacities cap(e) on the edges and a routing P is feasible if each edge e is in at most cap(e) of the paths of P. The Maximum Node Disjoint Paths (MaxNDP) problem is the node-capacitated counterpart of MaxEDP.
In this paper we obtain an O(r^3) approximation for MaxEDP on graphs of treewidth at most r and a matching approximation for MaxNDP on graphs of pathwidth at most r. Our results build on and significantly improve the work by Chekuri et al. [ICALP 2013] who obtained an O(r * 3^r) approximation for MaxEDP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.15/LIPIcs.SWAT.2016.15.pdf
Algorithms and data structures
disjoint paths
treewidth
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
16:1
16:14
10.4230/LIPIcs.SWAT.2016.16
article
Colouring Diamond-free Graphs
Dabrowski, Konrad K.
Dross, François
Paulusma, Daniël
The Colouring problem is that of deciding, given a graph G and an integer k, whether G admits a (proper) k-colouring. For all graphs H up to five vertices, we classify the computational complexity of Colouring for (diamond,H)-free graphs. Our proof combines known results with a proof that the clique-width is bounded for (diamond,P_1+2P_2)-free graphs. Our technique for handling this case is to reduce the graph under consideration to a k-partite graph that has a very specific decomposition. As a by-product of this general technique we are also able to prove boundedness of clique-width for four other new classes of (H_1,H_2)-free graphs. As such, our work also continues a recent systematic study into the (un)boundedness of clique-width of (H_1,H_2)-free graphs, and our five new classes of bounded clique-width reduce the number of open cases from 13 to 8.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.16/LIPIcs.SWAT.2016.16.pdf
colouring
clique-width
diamond-free
graph class
hereditary graph class
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
17:1
17:11
10.4230/LIPIcs.SWAT.2016.17
article
Below All Subsets for Some Permutational Counting Problems
Björklund, Andreas
We show that the two problems of computing the permanent of an n*n matrix of poly(n)-bit integers and counting the number of Hamiltonian cycles in a directed n-vertex multigraph with exp(poly(n)) edges can be reduced to relatively few smaller instances of themselves. In effect, we derive the first deterministic algorithms for these two problems that run in o(2^n) time in the worst case. Classic poly(n)*2^n time algorithms for the two problems have been known since the early 1960s.
Our algorithms run in 2^{n-Omega(sqrt{n/log(n)})} time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.17/LIPIcs.SWAT.2016.17.pdf
Matrix Permanent
Hamiltonian Cycles
Asymmetric TSP
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
18:1
18:14
10.4230/LIPIcs.SWAT.2016.18
article
Extension Complexity, MSO Logic, and Treewidth
Kolman, Petr
Koutecký, Martin
Tiwary, Hans Raj
We consider the convex hull P_phi(G) of all satisfying assignments of a given MSO_2 formula phi on a given graph G. We show that there exists an extended formulation of the polytope P_phi(G) that can be described by f(|phi|,tau)*n inequalities, where n is the number of vertices in G, tau is the treewidth of G and f is a computable function depending only on phi and tau.
In other words, we prove that the extension complexity of P_phi(G) is linear in the size of the graph G, with a constant depending on the treewidth of G and the formula phi. This provides a very general yet very simple meta-theorem about the extension complexity of polytopes related to a wide class of problems and graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.18/LIPIcs.SWAT.2016.18.pdf
Extension Complexity
FPT
Courcelle's Theorem
MSO Logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
19:1
19:14
10.4230/LIPIcs.SWAT.2016.19
article
Optimal Online Escape Path Against a Certificate
Langetepe, Elmar
Kübel, David
More than fifty years ago Bellman asked for the best escape path within a known forest but for an unknown starting position. This deterministic finite path is the shortest path that leads out of a given environment from any starting point. There are some worst case positions where the full path length is required. Up to now such a fixed ultimate optimal escape path for a known shape for any starting position is only known for some special convex shapes (i.e., circles, strips of a given width, fat convex bodies, some isosceles triangles).
Therefore, we introduce a different, simple and intuitive escape path, the so-called certificate path, which makes use of some additional information w.r.t. the starting point s. This escape path depends on the starting position s and takes the distances from s to the outer boundary of the environment into account. Because of this, in the above convex examples the certificate path always (for any position s) leaves the environment earlier than the ultimate escape path.
Next we assume that neither the precise shape of the environment nor the location of the starting point is known, so we have much less information. For a class of environments (convex shapes and shapes with kernel positions) we design an online strategy that always leaves the environment. We show that the path length for leaving the environment is always shorter than 3.318764 times the length of the corresponding certificate path. We also give a lower bound of 3.313126, which shows that for the above class of environments the factor 3.318764 is (almost) tight.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.19/LIPIcs.SWAT.2016.19.pdf
Search games
online algorithms
escape path
competitive analysis
spiral conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
20:1
20:14
10.4230/LIPIcs.SWAT.2016.20
article
Lagrangian Duality based Algorithms in Online Energy-Efficient Scheduling
Kim Thang, Nguyen
We study online scheduling problems in the general energy model of speed scaling with power down. The latter is a combination of the two extensively studied energy models, speed scaling and power down, toward a more realistic one. Due to the limits of current techniques, only a few results are known in the general energy model, in contrast to the large literature on the two earlier ones.
In the paper, we consider a Lagrangian duality based approach to design and analyze algorithms in the general energy model. We show the applicability of the approach to problems which are unlikely to admit a convex relaxation. Specifically, we consider the problem of minimizing energy with a single machine in which jobs arrive online and have to be processed before their deadlines. We present an alpha^alpha-competitive algorithm (whose analysis is tight up to a constant factor) where the energy power function is of the typical form z^alpha + g for constants alpha > 2 and non-negative g. We also consider the problem of minimizing the weighted flow-time plus energy. We give an O(alpha/ln(alpha))-competitive algorithm that matches (up to a constant factor) the currently best known algorithm for this problem in the restricted model of speed scaling.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.20/LIPIcs.SWAT.2016.20.pdf
Online Scheduling
Energy Minimization
Speed Scaling and Power-down
Lagrangian Duality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
21:1
21:15
10.4230/LIPIcs.SWAT.2016.21
article
Online Dominating Set
Boyar, Joan
Eidenbenz, Stephan J.
Favrholdt, Lene M.
Kotrbcik, Michal
Larsen, Kim S.
This paper is devoted to the online dominating set problem and its variants on trees, bipartite, bounded-degree, planar, and general graphs, distinguishing between connected and not necessarily connected graphs. We believe this paper represents the first systematic study of the effect of two limitations of online algorithms: making irrevocable decisions while not knowing the future, and being incremental, i.e., having to maintain solutions to all prefixes of the input. This is quantified through competitive analyses of online algorithms against two optimal algorithms, both knowing the entire input, but only one having to be incremental. We also consider the competitive ratio of the weaker of the two optimal algorithms against the other. In most cases, we obtain tight bounds on the competitive ratios. Our results show that requiring the graphs to be presented in a connected fashion allows the online algorithms to obtain provably better solutions. Furthermore, we get detailed information regarding the significance of the necessary requirement that online algorithms be incremental. In some cases, having to be incremental fully accounts for the online algorithm's disadvantage.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.21/LIPIcs.SWAT.2016.21.pdf
online algorithms
dominating set
competitive analysis
graph classes
connected graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
22:1
22:13
10.4230/LIPIcs.SWAT.2016.22
article
Sorting Under Forbidden Comparisons
Banerjee, Indranil
Richards, Dana
In this paper we study the problem of sorting under forbidden comparisons, where some pairs of elements may not be compared (forbidden pairs). Along with the set of elements V, the input to our problem is a graph G(V, E) whose edges represent the pairs that we can compare in constant time. Given a graph with n vertices and m = binom(n,2) - q edges, we propose the first non-trivial deterministic algorithm, which makes O((q + n)*log(n)) comparisons with a total complexity of O(n^2 + q^(omega/2)), where omega is the exponent in the complexity of matrix multiplication. We also propose a simple randomized algorithm for the problem which makes widetilde O(n^2/sqrt(q+n) + n*sqrt(q)) probes with high probability. When the input graph is random we show that widetilde O(min(n^(3/2), p*n^2)) probes suffice, where p is the edge probability.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.22/LIPIcs.SWAT.2016.22.pdf
Sorting
Random Graphs
Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
23:1
23:12
10.4230/LIPIcs.SWAT.2016.23
article
Total Stability in Stable Matching Games
Gupta, Sushmita
Iwama, Kazuo
Miyazaki, Shuichi
The stable marriage problem (SMP) can be seen as a typical game, where each player wants to obtain the best possible partner by manipulating his/her preference list. Thus the set Q of preference lists submitted to the matching agency may differ from P, the set of true preference lists. In this paper, we study the stability of the stated lists in Q. If Q is not a Nash equilibrium, i.e., if a player can obtain a strictly better partner (with respect to the preference order in P) by changing only his/her list, then from the standpoint of standard game theory, Q is vulnerable. In the case of SMP, however, we need to consider another factor, namely that valid matchings must not include any "blocking pairs" with respect to P. Thus, if the above manipulation by a player introduces blocking pairs, this would prevent the manipulation. Consequently, we say Q is totally stable if either Q is a Nash equilibrium or any attempt at manipulation by a single player causes blocking pairs with respect to P. We study the complexity of testing the total stability of a stated strategy. It is known that this question can be answered in polynomial time if the instance (P,Q) satisfies P=Q. We extend this polynomially solvable class to the general one, in which P and Q may differ arbitrarily.
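The notion of blocking pairs with respect to the true lists P, which is central to this abstract, can be checked directly from the definition; the following small checker is an illustrative sketch (names and structure are our own):

```python
def blocking_pairs(matching, men_pref, women_pref):
    """matching: dict man -> woman (a perfect matching).
    men_pref / women_pref: dict person -> preference list, most
    preferred first (these play the role of the true lists P).
    Returns all pairs (m, w) that block the matching, i.e., m and w
    each strictly prefer the other to their assigned partner."""
    wife = matching
    husband = {w: m for m, w in matching.items()}

    def prefers(pref, person, new, current):
        return pref[person].index(new) < pref[person].index(current)

    blocks = []
    for m in men_pref:
        for w in women_pref:
            if wife[m] != w and prefers(men_pref, m, w, wife[m]) \
                    and prefers(women_pref, w, m, husband[w]):
                blocks.append((m, w))
    return blocks
```

On a two-couple instance, a matching that pairs m2 with w2 although m2 and w1 rank each other first yields the blocking pair (m2, w1), while the unique stable matching yields none; a matching is valid with respect to P exactly when this list is empty.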
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.23/LIPIcs.SWAT.2016.23.pdf
stable matching
Gale-Shapley algorithm
manipulation
stability
Nash equilibrium
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
24:1
24:13
10.4230/LIPIcs.SWAT.2016.24
article
Estimating The Makespan of The Two-Valued Restricted Assignment Problem
Jansen, Klaus
Land, Kati
Maack, Marten
We consider a special case of the scheduling problem on unrelated machines, namely the Restricted Assignment Problem with two different processing times. We show that the configuration LP has an integrality gap of at most 5/3 ~ 1.667 for this problem. This allows us to estimate the optimal makespan within a factor of 5/3, improving upon the previously best known estimation algorithm with ratio 11/6 ~ 1.833 due to Chakrabarty, Khanna, and Li.
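For a concrete view of the Restricted Assignment Problem itself (not of the configuration LP), the makespan of a toy instance can be computed by brute force; the helper name and encoding below are our own illustration:

```python
from itertools import product

def optimal_makespan(jobs, machines):
    """jobs: list of (size, set of eligible machines).
    Tries every assignment of jobs to machines (exponential time, so
    only suitable for tiny instances) and returns the minimum makespan
    over assignments that respect the eligibility restrictions."""
    best = float("inf")
    for assign in product(machines, repeat=len(jobs)):
        if all(m in elig for (_, elig), m in zip(jobs, assign)):
            load = {m: 0 for m in machines}
            for (size, _), m in zip(jobs, assign):
                load[m] += size
            best = min(best, max(load.values()))
    return best
```

In the two-valued special case the sizes come from a set of two numbers; e.g., with sizes 1 and 2 and three jobs restricted as below, the optimum balances the only flexible job onto the less loaded machine.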
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.24/LIPIcs.SWAT.2016.24.pdf
unrelated scheduling
restricted assignment
configuration LP
integrality gap
estimation algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
25:1
25:14
10.4230/LIPIcs.SWAT.2016.25
article
A Plane 1.88-Spanner for Points in Convex Position
Amani, Mahdi
Biniaz, Ahmad
Bose, Prosenjit
De Carufel, Jean-Lou
Maheshwari, Anil
Smid, Michiel
Let S be a set of n points in the plane that is in convex position. For a real number t>1, we say that a point p in S is t-good if for every point q of S, the shortest-path distance between p and q along the boundary of the convex hull of S is at most t times the Euclidean distance between p and q. We prove that any point that is part of (an approximation to) the diameter of S is 1.88-good. Using this, we show how to compute a plane 1.88-spanner of S in O(n) time, assuming that the points of S are given in sorted order along their convex hull. Previously, the best known stretch factor for plane spanners was 1.998 (which, in fact, holds for any point set, i.e., even if it is not in convex position).
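The paper's notion of a t-good point can be checked directly from the definition; the following naive O(n^2) sketch (our own, not the paper's linear-time construction) compares shortest boundary distances along the convex hull with Euclidean distances:

```python
from math import dist

def is_t_good(points, i, t):
    """points: vertices of a convex polygon in boundary order.
    Returns True if, for every other vertex q, the shortest path from
    points[i] to q along the hull boundary is at most t times the
    Euclidean distance |points[i] q|."""
    n = len(points)
    edge = [dist(points[k], points[(k + 1) % n]) for k in range(n)]
    perim = sum(edge)
    # cumulative boundary distance from vertex 0 to each vertex
    pre = [0.0]
    for k in range(n - 1):
        pre.append(pre[-1] + edge[k])
    for j in range(n):
        if j == i:
            continue
        arc = abs(pre[j] - pre[i])
        hull_dist = min(arc, perim - arc)  # shorter way around the hull
        if hull_dist > t * dist(points[i], points[j]):
            return False
    return True
```

On a unit square, each corner reaches the opposite corner along the boundary at distance 2 versus Euclidean distance sqrt(2), so every corner is t-good for t >= sqrt(2) but not below it.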
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.25/LIPIcs.SWAT.2016.25.pdf
points in convex position
plane spanner
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
26:1
26:14
10.4230/LIPIcs.SWAT.2016.26
article
Approximating the Integral Fréchet Distance
Maheshwari, Anil
Sack, Jörg-Rüdiger
Scheffer, Christian
We present a pseudo-polynomial time (1 + epsilon)-approximation algorithm for computing the integral and average Fréchet distance between two given polygonal curves T_1 and T_2. The running time is O(zeta^4 n^4 / epsilon^2), where n is the complexity of T_1 and T_2 and zeta is the maximal ratio of the lengths of any pair of segments from T_1 and T_2.
Furthermore, we give relations between weighted shortest paths inside a single parameter cell C and the monotone free space axis of C. As a result we present a simple construction of weighted shortest paths inside a parameter cell. Additionally, such a shortest path provides an optimal solution for the partial Fréchet similarity of segments for all leash lengths. These two aspects are related to each other and are of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.26/LIPIcs.SWAT.2016.26.pdf
Fréchet distance
partial Fréchet similarity
curve matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
28:1
28:13
10.4230/LIPIcs.SWAT.2016.28
article
A Clustering-Based Approach to Kinetic Closest Pair
Chan, Timothy M.
Rahmati, Zahed
Given a set P of n moving points in fixed dimension d, where the trajectory of each point is a polynomial of degree bounded by some constant, we present a kinetic data structure (KDS) for maintenance of the closest pair on P.
Assuming the closest pair distance is between 1 and Delta over time, our KDS uses O(n log Delta) space and processes O(n^2 beta log Delta log n + n^2 beta log Delta log log Delta) events, each in worst-case time O(log^2 n + log^2 log Delta). Here, beta is an extremely slow-growing function. The locality of the KDS is O(log n + log log Delta). Our closest pair KDS supports insertions and deletions of points; an insertion or deletion takes worst-case time O(log Delta log^2 n + log Delta log^2 log Delta).
Also, we use a similar approach to provide a KDS for the all epsilon-nearest neighbors in R^d.
The complexities of the previous KDSs, for both the closest pair and all epsilon-nearest neighbors, contain polylogarithmic factors whose number of logarithms depends on the dimension d. Assuming Delta is polynomial in n, our KDSs improve upon the previous ones.
Our solutions are based on a kinetic clustering on P. Though we use ideas from the previous clustering KDS by Hershberger, we simplify and improve his work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.28/LIPIcs.SWAT.2016.28.pdf
Kinetic Data Structure
Clustering
Closest Pair
All Nearest Neighbors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
29:1
29:13
10.4230/LIPIcs.SWAT.2016.29
article
Constrained Geodesic Centers of a Simple Polygon
Oh, Eunjin
Son, Wanbin
Ahn, Hee-Kap
For any two points in a simple polygon P, the geodesic distance between them is the length of the shortest path contained in P that connects them. A geodesic center of a set S of sites (points) with respect to P is a point in P that minimizes the geodesic distance to its farthest site. In many realistic facility location problems, however, the facilities are constrained to lie in feasible regions. In this paper, we show how to compute the geodesic centers constrained to a set of line segments or simple polygonal regions contained in P. Our results provide substantial improvements over previous algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.29/LIPIcs.SWAT.2016.29.pdf
Geodesic distance
simple polygons
constrained center
facility location
farthest-point Voronoi diagram
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
30:1
30:12
10.4230/LIPIcs.SWAT.2016.30
article
Time-Space Trade-offs for Triangulating a Simple Polygon
Aronov, Boris
Korman, Matias
Pratt, Simon
van Renssen, André
Roeloffzen, Marcel
An s-workspace algorithm is an algorithm that has read-only access to the values of the input, write-only access to the output, and uses only O(s) additional words of space. We give a randomized s-workspace algorithm for triangulating a simple polygon P of n vertices, for any s up to n. The algorithm runs in O(n^2/s + n(log s)log^5(n/s)) expected time using O(s) variables. In particular, the algorithm runs in O(n^2/s) expected time for most values of s.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.30/LIPIcs.SWAT.2016.30.pdf
simple polygon
triangulation
shortest path
time-space trade-off
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
31:1
31:1
10.4230/LIPIcs.SWAT.2016.31
article
Excluded Grid Theorem: Improved and Simplified (Invited Talk)
Chuzhoy, Julia
One of the key results in Robertson and Seymour's seminal work on graph minors is the Excluded Grid Theorem. The theorem states that there is a function f, such that for every positive integer g, every graph whose treewidth is at least f(g) contains the (g x g)-grid as a minor. This theorem has found many applications in graph theory and algorithms. An important open question is establishing tight bounds on f(g) for which the theorem holds. Robertson and Seymour showed that f(g) = Omega(g^2 log g), and this remains the best known lower bound on f(g). Until recently, the best upper bound was super-exponential in g. In this talk, we will give an overview of a recent sequence of results that has led to the best current upper bound of f(g) = O(g^{19} polylog(g)). We will also survey some connections to algorithms for graph routing problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.31/LIPIcs.SWAT.2016.31.pdf
Graph Minor Theory
Excluded Grid Theorem
Graph Routing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
32:1
32:1
10.4230/LIPIcs.SWAT.2016.32
article
The Complexity Landscape of Fixed-Parameter Directed Steiner Network Problems (Invited Talk)
Marx, Dániel
Given a directed graph G and a list (s_1,t_1), ..., (s_k,t_k) of terminal pairs, the Directed Steiner Network problem asks for a minimum-cost subgraph of G that contains a directed s_i-> t_i path for every 1<= i <= k. Feldman and Ruhl presented an n^{O(k)} time algorithm for the problem, which shows that it is polynomial-time solvable for every fixed number k of demands. There are special cases of the problem that can be solved much more efficiently: for example, the special case Directed Steiner Tree (when we ask for paths from a root r to terminals t_1, ..., t_k) is known to be fixed-parameter tractable parameterized by the number of terminals, that is, algorithms with running time of the form f(k)*n^{O(1)} exist for the problem. On the other hand, the special case Strongly Connected Steiner Subgraph (when we ask for a path from every t_i to every other t_j) is known to be W[1]-hard parameterized by the number of terminals, hence it is unlikely to be fixed-parameter tractable. In the talk, we survey results on parameterized algorithms for special cases of Directed Steiner Network, including a recent complete classification result (joint work with Andreas Feldmann) that systematically explores the complexity landscape of directed Steiner problems to fully understand which special cases are FPT or W[1]-hard.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.32/LIPIcs.SWAT.2016.32.pdf
Directed Steiner Tree
Directed Steiner Network
fixed-parameter tractability
dichotomy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
33:1
33:1
10.4230/LIPIcs.SWAT.2016.33
article
Computation as a Scientific Weltanschauung (Invited Talk)
Papadimitriou, Christos H.
Computation as a mechanical reality is young - almost exactly seventy years of age - and yet the spirit of computation can be traced several millennia back. Any moderately advanced civilization depends on calculation (for inventory, taxation, navigation, land partition, among many others) - our civilization is the first one that is conscious of this reliance.
Computation has also been central to science for centuries. This is most immediately apparent in the case of mathematics: the idea of the algorithm as a mathematical object of some significance was pioneered by Euclid in the 4th century BC, and advanced by Archimedes a century later. But computation plays an important role in virtually all sciences: natural, life, or social. Implicit algorithmic processes are present in the great objects of scientific inquiry - the cell, the universe, the market, the brain - as well as in the models developed by scientists over the centuries for studying them. This brings about a very recent - merely a few decades old - mode of scientific inquiry, which is sometimes referred to as the lens of computation: when students of computation revisit central problems in science from the computational viewpoint, often unexpected progress results. This has happened in statistical physics through the study of phase transitions in terms of the convergence of Markov chain Monte Carlo algorithms, and in quantum mechanics through quantum computing.
This talk will focus on three other manifestations of this phenomenon. Almost a decade ago, ideas and methodologies from computational complexity revealed a subtle conceptual flaw in the solution concept of Nash equilibrium, which lies at the foundations of modern economic thought. In the study of evolution, a new understanding of century-old questions has been achieved through surprisingly algorithmic ideas. Finally, current work in theoretical neuroscience suggests that the algorithmic point of view may be invaluable in the central scientific question of our era, namely understanding how behavior and cognition emerge from the structure and activity of neurons and synapses.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.33/LIPIcs.SWAT.2016.33.pdf
Lens of computation
Nash equilibrium
neuroscience
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-06-22
53
27:1
27:14
10.4230/LIPIcs.SWAT.2016.27
article
Minimizing the Continuous Diameter when Augmenting Paths and Cycles with Shortcuts
De Carufel, Jean-Lou
Grimm, Carsten
Maheshwari, Anil
Smid, Michiel
We seek to augment a geometric network in the Euclidean plane with shortcuts to minimize its continuous diameter, i.e., the largest network distance between any two points on the augmented network. Unlike in the discrete setting where a shortcut connects two vertices and the diameter is measured between vertices, we take all points along the edges of the network into account when placing a shortcut and when measuring distances in the augmented network.
We study this network augmentation problem for paths and cycles. For paths, we determine an optimal shortcut in linear time. For cycles, we show that a single shortcut never decreases the continuous diameter and that two shortcuts always suffice to reduce the continuous diameter. Furthermore, we characterize optimal pairs of shortcuts for convex and non-convex cycles. Finally, we develop a linear time algorithm that produces an optimal pair of shortcuts for convex cycles. Apart from the algorithms, our results extend to rectifiable curves.
Our work reveals some of the underlying challenges that must be overcome when addressing the discrete version of this network augmentation problem, where we minimize the discrete diameter of a network with shortcuts that connect only vertices.
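The continuous diameter of a path augmented with one straight-line shortcut can be estimated numerically by dense sampling along the arclength parameterization; the following brute-force sketch (our own, not the paper's linear-time algorithm) makes the definition concrete:

```python
from math import dist
from itertools import combinations

def continuous_diameter_with_shortcut(path, a, b, samples=200):
    """path: polyline as a list of 2D points; a, b: arclength positions
    of the shortcut's endpoints on the path. Estimates the continuous
    diameter of the path plus the straight-line shortcut by sampling
    points along the path and taking the largest pairwise network
    distance (a numeric sketch, exact only in the sampling limit)."""
    lens = [dist(path[i], path[i + 1]) for i in range(len(path) - 1)]
    total = sum(lens)

    def point_at(s):
        # the point at arclength s along the polyline
        for i, l in enumerate(lens):
            if s <= l:
                (x0, y0), (x1, y1) = path[i], path[i + 1]
                f = s / l
                return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
            s -= l
        return path[-1]

    short = dist(point_at(a), point_at(b))  # Euclidean shortcut length

    def network_dist(x, y):
        # shortest of: along the path, or via the shortcut either way
        return min(abs(x - y),
                   abs(x - a) + short + abs(b - y),
                   abs(x - b) + short + abs(a - y))

    pts = [total * k / (samples - 1) for k in range(samples)]
    return max(network_dist(x, y) for x, y in combinations(pts, 2))
```

For a bent path from (0,0) over (5,5) to (10,0) with a shortcut joining its endpoints, the diametral pair sits symmetrically near the two ends, and the estimate converges to (path length + shortcut length) / 2 as the sampling gets denser.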
https://drops.dagstuhl.de/storage/00lipics/lipics-vol053-swat2016/LIPIcs.SWAT.2016.27/LIPIcs.SWAT.2016.27.pdf
Network Augmentation
Shortcuts
Diameter
Paths
Cycles