eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
0
0
10.4230/LIPIcs.ISAAC.2018
article
LIPIcs, Volume 123, ISAAC'18, Complete Volume
Hsu, Wen-Lian
Lee, Der-Tsai
Liao, Chung-Shou
LIPIcs, Volume 123, ISAAC'18, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018/LIPIcs.ISAAC.2018.pdf
Mathematics of computing, Theory of computation, Data structures design and analysis, Computing methodologies
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
0:i
0:xviii
10.4230/LIPIcs.ISAAC.2018.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Hsu, Wen-Lian
Lee, Der-Tsai
Liao, Chung-Shou
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.0/LIPIcs.ISAAC.2018.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
1:1
1:1
10.4230/LIPIcs.ISAAC.2018.1
article
Going Beyond Traditional Characterizations in the Age of Big Data and Network Sciences (Invited Talk)
Teng, Shang-Hua
1
University of Southern California, Los Angeles, USA
What are efficient algorithms? What are network models? Big Data and Network Sciences have fundamentally challenged the traditional polynomial-time characterization of efficiency and the conventional graph-theoretical characterization of networks.
More than ever before, it is not just desirable, but essential, that efficient algorithms should be scalable. In other words, their complexity should be nearly linear or sub-linear with respect to the problem size. Thus, scalability, not just polynomial-time computability, should be elevated as the central complexity notion for characterizing efficient computation.
For a long time, graphs have been widely used for defining the structure of social and information networks. However, real-world network data and phenomena are much richer and more complex than what can be captured by nodes and edges. Network data are multifaceted, and thus network science requires a new theory, going beyond traditional graph theory, to capture the multifaceted data.
In this talk, I discuss some aspects of these challenges. Using basic tasks in network analysis, social influence modeling, and machine learning as examples, I highlight the role of scalable algorithms and axiomatization in shaping our understanding of "effective solution concepts" in data and network sciences, which need to be both mathematically meaningful and algorithmically efficient.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.1/LIPIcs.ISAAC.2018.1.pdf
scalable algorithms
axiomatization
graph sparsification
local algorithms
advanced sampling
big data
network sciences
machine learning
social influence
beyond graph theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
2:1
2:1
10.4230/LIPIcs.ISAAC.2018.2
article
Approximate Matchings in Massive Graphs via Local Structure (Invited Talk)
Stein, Clifford
1
Columbia University, New York City, USA
Finding a maximum matching is a fundamental algorithmic problem and is fairly well understood in traditional sequential computing models. Some modern applications require that we handle massive graphs and hence we need to consider algorithms in models that do not allow the entire input graph to be held in the memory of one computer, or models in which the graph is evolving over time.
We introduce a new concept called an "Edge Degree Constrained Subgraph (EDCS)", which is a subgraph that is guaranteed to contain a large matching and which can be identified via local conditions. We then show how to use an EDCS to find 1.5-approximate matchings in several different models including MapReduce, streaming and distributed computing. We can also use an EDCS to maintain a 1.5-optimal matching in a dynamic graph.
This work is joint with Sepehr Assadi, Aaron Bernstein, MohammadHossein Bateni, and Vahab Mirrokni.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.2/LIPIcs.ISAAC.2018.2.pdf
matching
dynamic algorithms
parallel algorithms
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
3:1
3:11
10.4230/LIPIcs.ISAAC.2018.3
article
Exploiting Sparsity for Bipartite Hamiltonicity
Björklund, Andreas
1
Department of Computer Science, Lund University, Sweden
We present a Monte Carlo algorithm that detects the presence of a Hamiltonian cycle in an n-vertex undirected bipartite graph of average degree delta >= 3 almost surely and with no false positives, in (2-2^{1-delta})^{n/2} poly(n) time using only polynomial space. With the exception of cubic graphs, this is faster than the best previously known algorithms. Our method combines a variant of Björklund's 2^{n/2} poly(n) time Monte Carlo algorithm for Hamiltonicity detection in bipartite graphs (SICOMP 2014) with a simple fast solution-listing algorithm for very sparse CNF-SAT formulas.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.3/LIPIcs.ISAAC.2018.3.pdf
Hamiltonian cycle
bipartite graph
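To make the problem above concrete, here is an exhaustive n!-time baseline for Hamiltonicity in an undirected graph, against which the paper's (2-2^{1-delta})^{n/2} poly(n) bound is an exponential improvement. This is a naive sketch for intuition only, not the paper's algorithm; the example graphs are our own.

```python
from itertools import permutations

def has_hamiltonian_cycle(adj):
    """Exhaustive n!-time baseline: fix a start vertex and test every
    ordering of the remaining vertices as a candidate cycle."""
    verts = list(adj)
    start = verts[0]
    for perm in permutations(verts[1:]):
        cycle = [start, *perm, start]
        if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
            return True
    return False

# K_{3,3} (bipartite, 3-regular) is Hamiltonian; a star is not.
k33 = {0: [3, 4, 5], 1: [3, 4, 5], 2: [3, 4, 5],
       3: [0, 1, 2], 4: [0, 1, 2], 5: [0, 1, 2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```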
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
4:1
4:13
10.4230/LIPIcs.ISAAC.2018.4
article
Opinion Forming in Erdős-Rényi Random Graph and Expanders
N. Zehmakan, Ahad
1
ETH Zurich, Switzerland
Consider a graph G=(V,E) and an initial configuration where each node is blue or red; in each discrete-time round, all nodes simultaneously update their color to the most frequent color in their neighborhood, and a node keeps its color in case of a tie. We study the behavior of this basic process, called the majority model, on the Erdős-Rényi random graph G_{n,p} and on regular expanders. First we consider the behavior of the majority model on G_{n,p} with an initial random configuration, where each node is blue independently with probability p_b and red otherwise. It is shown that in this setting the process goes through a phase transition at the connectivity threshold, namely (log n)/n. Furthermore, we say a graph G is a lambda-expander if the second-largest absolute eigenvalue of its adjacency matrix is lambda. We prove that for a Delta-regular lambda-expander graph, if lambda/Delta is sufficiently small, then the majority model starting from (1/2-delta)n blue nodes (for an arbitrarily small constant delta>0) results in a fully red configuration in sub-logarithmically many rounds. Roughly speaking, this means the majority model is an "efficient" and "fast" density classifier on regular expanders. As a by-product of our results, we show that regular Ramanujan graphs are asymptotically optimally immune; that is, for an n-node Delta-regular Ramanujan graph, if the initial number of blue nodes is s <= beta n, then the number of blue nodes in the next round is at most cs/Delta for some constants c,beta>0. This settles an open problem by Peleg [Peleg, 2014].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.4/LIPIcs.ISAAC.2018.4.pdf
majority model
random graph
expander graphs
dynamic monopoly
bootstrap percolation
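The synchronous update rule described in the abstract is easy to state in code. A minimal Python sketch of one run of the majority model; the 4-cycle and its colouring are an illustrative toy, not from the paper:

```python
def majority_step(adj, colour):
    """One synchronous round: every node adopts the most frequent
    colour among its neighbours; ties keep the current colour."""
    new = {}
    for v, nbrs in adj.items():
        blue = sum(colour[u] == "blue" for u in nbrs)
        red = len(nbrs) - blue
        if blue > red:
            new[v] = "blue"
        elif red > blue:
            new[v] = "red"
        else:
            new[v] = colour[v]  # tie: keep colour
    return new

def run_majority(adj, colour, max_rounds=100):
    """Iterate until a fixed point (or a round cap, to be safe)."""
    for _ in range(max_rounds):
        nxt = majority_step(adj, colour)
        if nxt == colour:
            return colour
        colour = nxt
    return colour

# Toy example: a 4-cycle with a single blue node converges to all red.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
final = run_majority(c4, {0: "blue", 1: "red", 2: "red", 3: "red"})
```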
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
5:1
5:13
10.4230/LIPIcs.ISAAC.2018.5
article
Colouring (P_r+P_s)-Free Graphs
Klimosová, Tereza
1
Malík, Josef
2
Masarík, Tomás
1
Novotná, Jana
1
Paulusma, Daniël
3
Slívová, Veronika
4
Department of Applied Mathematics, Charles University, Prague, Czech Republic
Czech Technical University in Prague, Czech Republic
Department of Computer Science, Durham University, Durham, UK
Computer Science Institute of Charles University, Prague, Czech Republic
The k-Colouring problem is to decide if the vertices of a graph can be coloured with at most k colours for a fixed integer k such that no two adjacent vertices are coloured alike. If each vertex u must be assigned a colour from a prescribed list L(u) subseteq {1,...,k}, then we obtain the List k-Colouring problem. A graph G is H-free if G does not contain H as an induced subgraph. We continue an extensive study into the complexity of these two problems for H-free graphs. We prove that List 3-Colouring is polynomial-time solvable for (P_2+P_5)-free graphs and for (P_3+P_4)-free graphs. Combining our results with known results yields complete complexity classifications of 3-Colouring and List 3-Colouring on H-free graphs for all graphs H up to seven vertices. We also prove that 5-Colouring is NP-complete for (P_3+P_5)-free graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.5/LIPIcs.ISAAC.2018.5.pdf
vertex colouring
H-free graph
linear forest
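The List k-Colouring problem defined in the abstract can be illustrated with a brute-force checker; this exponential sketch is for intuition only and has nothing to do with the paper's polynomial-time algorithms for (P_2+P_5)-free and (P_3+P_4)-free graphs.

```python
from itertools import product

def list_colouring(edges, lists):
    """Brute-force List Colouring: try every assignment drawn from the
    per-vertex lists; accept if no edge is monochromatic."""
    verts = sorted(lists)
    for choice in product(*(lists[v] for v in verts)):
        colour = dict(zip(verts, choice))
        if all(colour[u] != colour[v] for u, v in edges):
            return colour
    return None

# A triangle with lists {1,2}, {2,3}, {1,3} is list-colourable.
edges = [(0, 1), (1, 2), (0, 2)]
sol = list_colouring(edges, {0: [1, 2], 1: [2, 3], 2: [1, 3]})
```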
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
6:1
6:13
10.4230/LIPIcs.ISAAC.2018.6
article
The Use of a Pruned Modular Decomposition for Maximum Matching Algorithms on Some Graph Classes
Ducoffe, Guillaume
1
Popa, Alexandru
2
ICI – National Institute for Research and Development in Informatics, Bucharest, Romania , The Research Institute of the University of Bucharest ICUB, Bucharest, Romania
University of Bucharest, Bucharest, Romania , ICI – National Institute for Research and Development in Informatics, Bucharest, Romania
We address the following general question: given a graph class C on which we can solve Maximum Matching in (quasi) linear time, does the same hold true for the class of graphs that can be modularly decomposed into C? As a way to answer this question for distance-hereditary graphs and some other superclasses of cographs, we study the combined effect of modular decomposition with a pruning process over the quotient subgraphs. We remove sequentially from all such subgraphs their so-called one-vertex extensions (i.e., pendant, anti-pendant, twin, universal and isolated vertices). Doing so, we obtain a "pruned modular decomposition", that can be computed in quasi linear time. Our main result is that if all the pruned quotient subgraphs have bounded order then a maximum matching can be computed in linear time. The latter result strictly extends a recent framework in (Coudert et al., SODA'18). Our work is the first to explain why the existence of some nice ordering over the modules of a graph, instead of just over its vertices, can help to speed up the computation of maximum matchings on some graph classes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.6/LIPIcs.ISAAC.2018.6.pdf
maximum matching
FPT in P
modular decomposition
pruned graphs
one-vertex extensions
P_4-structure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
7:1
7:12
10.4230/LIPIcs.ISAAC.2018.7
article
A Novel Algorithm for the All-Best-Swap-Edge Problem on Tree Spanners
Bilò, Davide
1
https://orcid.org/0000-0003-3169-4300
Papadopoulos, Kleitos
2
https://orcid.org/0000-0002-7086-0335
Department of Humanities and Social Sciences, University of Sassari, Italy
InSPIRE, Agamemnonos 20, Nicosia, 1041, Cyprus
Given a 2-edge-connected, unweighted, and undirected graph G with n vertices and m edges, a sigma-tree spanner is a spanning tree T of G in which the ratio between the distance in T of any pair of vertices and the corresponding distance in G is upper bounded by sigma. The minimum value of sigma for which T is a sigma-tree spanner of G is also called the stretch factor of T. We address the fault-tolerant scenario in which each edge e of a given tree spanner may temporarily fail and has to be replaced by a best swap edge, i.e., an edge that reconnects T-e at a minimum stretch factor. More precisely, we design an O(n^2) time and space algorithm that computes a best swap edge of every tree edge. Previously, an O(n^2 log^4 n) time and O(n^2 + m log^2 n) space algorithm was known for edge-weighted graphs [Bilò et al., ISAAC 2017]. Although our improvements on both the time and space complexities are only by polylogarithmic factors, we stress that the design of an o(n^2) time and space algorithm would be considered a breakthrough.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.7/LIPIcs.ISAAC.2018.7.pdf
Transient edge failure
best swap edges
tree spanner
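The notions of stretch factor and best swap edge from the abstract can be sketched by brute force; this naive version recomputes all-pairs BFS distances per candidate and is far from the paper's O(n^2) algorithm. The 4-cycle example is our own.

```python
from collections import deque

def adjacency(verts, edges):
    adj = {v: [] for v in verts}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def bfs_dist(adj, s):
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def stretch(verts, graph_edges, tree_edges):
    """Stretch factor of spanning tree T: max over vertex pairs of
    d_T(u,v) / d_G(u,v) (unweighted graphs)."""
    dG = {v: bfs_dist(adjacency(verts, graph_edges), v) for v in verts}
    dT = {v: bfs_dist(adjacency(verts, tree_edges), v) for v in verts}
    return max(dT[u][v] / dG[u][v]
               for u in verts for v in verts if u != v)

def best_swap(verts, graph_edges, tree_edges, e):
    """Brute-force best swap edge for a failed tree edge e: try every
    non-tree edge that reconnects T - e, keep the minimum stretch."""
    rest = [t for t in tree_edges if t != e]
    best = None
    for f in graph_edges:
        if f == e or f in tree_edges:
            continue
        cand = rest + [f]
        if len(bfs_dist(adjacency(verts, cand), verts[0])) == len(verts):
            s = stretch(verts, graph_edges, cand)
            if best is None or s < best[1]:
                best = (f, s)
    return best

# 4-cycle with a Hamiltonian-path tree: stretch 3 (path ends are adjacent in G).
verts = [0, 1, 2, 3]
g_edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
t_edges = [(0, 1), (1, 2), (2, 3)]
```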
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
8:1
8:13
10.4230/LIPIcs.ISAAC.2018.8
article
Efficient Enumeration of Dominating Sets for Sparse Graphs
Kurita, Kazuhiro
1
Wasa, Kunihiro
2
https://orcid.org/0000-0001-9822-6283
Arimura, Hiroki
1
Uno, Takeaki
2
IST, Hokkaido University, Sapporo, Japan
National Institute of Informatics, Tokyo, Japan
A dominating set D of a graph G is a set of vertices such that every vertex of G is in D or has a neighbor in D. Enumeration of minimal dominating sets is one of the central problems in enumeration, since it corresponds to enumeration of minimal hypergraph transversals. Enumeration of dominating sets including non-minimal ones, however, has received much less attention. In this paper, we address enumeration problems for dominating sets in sparse graphs, namely degenerate graphs and graphs with large girth, and propose two algorithms for solving them. The first algorithm enumerates all dominating sets of a k-degenerate graph in O(k) time per solution using O(n + m) space, where n and m are the numbers of vertices and edges in the input graph, respectively. That is, the algorithm is optimal for graphs of constant degeneracy, such as trees, planar graphs, and H-minor-free graphs for any fixed H. The second algorithm enumerates all dominating sets in constant time per solution for input graphs with girth at least nine.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.8/LIPIcs.ISAAC.2018.8.pdf
Enumeration algorithm
polynomial amortized time
dominating set
girth
degeneracy
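The enumeration target (all dominating sets, not only minimal ones) can be made concrete with a brute-force generator; unlike the paper's algorithms, this sketch takes exponential time per graph and is only meant to pin down the definition.

```python
from itertools import combinations

def dominating_sets(adj):
    """Brute force: yield every (not necessarily minimal) dominating
    set, i.e. every vertex subset whose closed neighbourhoods cover V."""
    closed = {v: {v} | set(nbrs) for v, nbrs in adj.items()}
    verts = list(adj)
    for r in range(1, len(verts) + 1):
        for cand in combinations(verts, r):
            if set().union(*(closed[v] for v in cand)) == set(verts):
                yield set(cand)

# Path 0-1-2: the dominating sets are {1}, {0,1}, {0,2}, {1,2}, {0,1,2}.
path3 = {0: [1], 1: [0, 2], 2: [1]}
sets_ = list(dominating_sets(path3))
```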
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
9:1
9:12
10.4230/LIPIcs.ISAAC.2018.9
article
Complexity of Unordered CNF Games
Rahman, Md Lutfar
1
Watson, Thomas
1
The University of Memphis, Memphis, TN, USA
The classic TQBF problem is to determine who has a winning strategy in a game played on a given CNF formula, where the two players alternate turns picking truth values for the variables in a given order, and the winner is determined by whether the CNF gets satisfied. We study variants of this game in which the variables may be played in any order, and each turn consists of picking a remaining variable and a truth value for it.
- For the version where the set of variables is partitioned into two halves and each player may only pick variables from his/her half, we prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for unbounded-width CNFs (Schaefer, STOC 1976).
- For the general unordered version (where each variable can be picked by either player), we also prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for 6-CNFs (Ahlroth and Orponen, MFCS 2012) and PSPACE-complete for positive 11-CNFs (Schaefer, STOC 1976).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.9/LIPIcs.ISAAC.2018.9.pdf
CNF
Games
PSPACE-complete
SAT
Linear Time
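The general unordered game described above can be solved exhaustively on toy instances with memoryless minimax; the +i/-i literal encoding is our own convention, and this sketch says nothing about the paper's complexity classifications.

```python
def solve_game(clauses, n):
    """Unordered CNF game: players alternate picking any unassigned
    variable and a truth value; Satisfier moves first and wins iff the
    CNF ends up satisfied.  Literals are +i / -i for variable i in
    1..n.  Exhaustive minimax, suitable only for tiny instances."""

    def status(assign):
        # 1 = CNF surely satisfied, -1 = surely falsified, 0 = open
        undecided = False
        for clause in clauses:
            sat, alive = False, False
            for lit in clause:
                val = assign.get(abs(lit))
                if val is None:
                    alive = True
                elif (lit > 0) == val:
                    sat = True
            if not sat:
                if not alive:
                    return -1  # this clause can never be satisfied
                undecided = True
        return 0 if undecided else 1

    def play(assign, satisfier_to_move):
        s = status(assign)
        if s != 0:
            return s == 1
        outcomes = (play({**assign, v: b}, not satisfier_to_move)
                    for v in range(1, n + 1) if v not in assign
                    for b in (True, False))
        return any(outcomes) if satisfier_to_move else all(outcomes)

    return play({}, True)
```

For example, on the 1-clause formula (x1 or x2) the Satisfier wins immediately, while on (x1) and (x2) the Falsifier can always zero out whichever variable the Satisfier did not protect.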
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
10:1
10:12
10.4230/LIPIcs.ISAAC.2018.10
article
Half-Duplex Communication Complexity
Hoover, Kenneth
1
Impagliazzo, Russell
1
Mihajlin, Ivan
1
Smal, Alexander V.
2
University of California San Diego, USA
St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences, Russia
Suppose Alice and Bob are communicating in order to compute some function f, but instead of a classical communication channel they have a pair of walkie-talkie devices. They can use some classical communication protocol for f where in each round one player sends a bit and the other one receives it. The question is whether talking via walkie-talkies gives them more power. Using walkie-talkies instead of a classical communication channel allows the players two extra possibilities: to speak simultaneously (but in this case they do not hear each other) and to listen at the same time (but in this case they do not transfer any bits). The motivation for this kind of communication model comes from the study of the KRW conjecture. We show that for some definitions this non-classical communication model is, in fact, more powerful than the classical one, as it allows some functions to be computed in fewer rounds. We also prove lower bounds for these models using both combinatorial and information-theoretic methods.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.10/LIPIcs.ISAAC.2018.10.pdf
communication complexity
half-duplex channel
information theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
11:1
11:12
10.4230/LIPIcs.ISAAC.2018.11
article
On the Complexity of Stable Fractional Hypergraph Matching
Ishizuka, Takashi
1
Kamiyama, Naoyuki
2
Graduate School of Mathematics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
Institute of Mathematics for Industry, Kyushu University, JST, PRESTO, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
We consider the complexity of finding a stable fractional matching in a hypergraphic preference system. Aharoni and Fleiner proved that a stable fractional matching exists in every hypergraphic preference system. Furthermore, Kintali, Poplawski, Rajaraman, Sundaram, and Teng proved that finding one is PPAD-complete. In this paper, we consider the problem for hypergraphic preference systems whose maximum degree is bounded by a constant. The proof by Kintali, Poplawski, Rajaraman, Sundaram, and Teng implies PPAD-completeness when the maximum degree is 5. We prove that (i) the problem remains PPAD-complete even when the maximum degree is 3, and (ii) when the maximum degree is 2, it can be solved in polynomial time. Furthermore, we prove that finding an approximate stable fractional matching in a hypergraphic preference system is PPAD-complete.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.11/LIPIcs.ISAAC.2018.11.pdf
fractional hypergraph matching
stable matching
PPAD-completeness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
12:1
12:13
10.4230/LIPIcs.ISAAC.2018.12
article
Deciding the Closure of Inconsistent Rooted Triples Is NP-Complete
Johnson, Matthew P.
1
Department of Computer Science, Lehman College, Ph.D. Program in Computer Science, The Graduate Center, City University of New York, USA
Interpreting three-leaf binary trees or rooted triples as constraints yields an entailment relation, whereby binary trees satisfying some rooted triples must also thus satisfy others, and thence a closure operator, which is known to be polynomial-time computable. This is extended to inconsistent triple sets by defining that a triple is entailed by such a set if it is entailed by any consistent subset of it.
Determining whether the closure of an inconsistent rooted triple set can be computed in polynomial time was posed as an open problem in the Isaac Newton Institute's "Phylogenetics" program in 2007. It appears (as NC4) in a collection of such open problems maintained by Mike Steel, and it is the last of that collection's five problems concerning computational complexity to have remained open. We resolve the complexity of computing this closure, proving that its decision version is NP-complete.
In the process, we also prove that detecting the existence of any acyclic B-hyperpath (from specified source to destination) is NP-complete, in a significantly narrower special case than the version whose minimization problem was recently proven NP-hard by Ritz et al. This implies it is NP-hard to approximate (our special case of) their minimization problem to within any factor.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.12/LIPIcs.ISAAC.2018.12.pdf
phylogenetic trees
rooted triple entailment
NP-Completeness
directed hypergraphs
acyclic induced subgraphs
computational complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
13:1
13:12
10.4230/LIPIcs.ISAAC.2018.13
article
Computing Vertex-Disjoint Paths in Large Graphs Using MAOs
Preißer, Johanna E.
1
Schmidt, Jens M.
1
Institute of Mathematics, TU Ilmenau, Germany
We consider the problem of computing k in N internally vertex-disjoint paths between special vertex pairs of simple connected graphs. For general vertex pairs, the best deterministic time bound has stood for 42 years at O(min{k,sqrt{n}}m) per pair, using traditional flow-based methods.
The restriction of our vertex pairs comes from the machinery of maximal adjacency orderings (MAOs). Henzinger showed, for every MAO and every 1 <= k <= delta (where delta is the minimum degree of the graph), the existence of k internally vertex-disjoint paths between every pair of the last delta-k+2 vertices of this MAO. Later, Nagamochi generalized this result using the machinery of mixed connectivity. Both results are, however, inherently non-constructive.
We present the first algorithm that computes these k internally vertex-disjoint paths in linear time O(m), which improves the previously best time O(min{k,sqrt{n}}m). Due to the linear running time, this algorithm is suitable for large graphs. The algorithm is simple, works directly on the MAO structure, and completes a long history of purely existential proofs with a constructive method. We extend our algorithm to compute several other path systems and discuss its impact for certifying algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.13/LIPIcs.ISAAC.2018.13.pdf
Computing Disjoint Paths
Large Graphs
Vertex-Connectivity
Linear-Time
Maximal Adjacency Ordering
Maximum Cardinality Search
Big Data
Certifying Algorithm
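A maximal adjacency ordering itself (also known as maximum cardinality search) is simple to compute; a naive O(n^2) Python sketch, with ties broken by dictionary order (the example graph is our own, and this is not the paper's linear-time path-finding algorithm):

```python
def maximal_adjacency_ordering(adj, start):
    """Naive O(n^2) maximum cardinality search: repeatedly append the
    unordered vertex with the most already-ordered neighbours."""
    order, chosen = [start], {start}
    while len(order) < len(adj):
        best = max((v for v in adj if v not in chosen),
                   key=lambda v: sum(u in chosen for u in adj[v]))
        order.append(best)
        chosen.add(best)
    return order

# Small example: vertex 3 is adjacent only to 2, so it must come last.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
order = maximal_adjacency_ordering(adj, 0)
```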
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
14:1
14:13
10.4230/LIPIcs.ISAAC.2018.14
article
An O(n^2 log^2 n) Time Algorithm for Minmax Regret Minsum Sink on Path Networks
Bhattacharya, Binay
1
Higashikawa, Yuya
2
Kameda, Tsunehiko
1
Katoh, Naoki
3
School of Computing Science, Simon Fraser University, Burnaby, Canada
School of Business Administration, University of Hyogo, Kobe, Japan
School of Science and Technology, Kwansei Gakuin University, Sanda, Japan
We model evacuation in emergency situations by dynamic flow in a network. We want to minimize the aggregate evacuation time to an evacuation center (called a sink) on a path network with uniform edge capacities. The evacuees are initially located at the vertices, but their precise numbers are unknown and are given only by upper and lower bounds. Under this assumption, we compute a sink location that minimizes the maximum "regret." We present the first sub-cubic time algorithm in n to solve this problem, where n is the number of vertices. Although we cast our problem as evacuation, our result is exact if the "evacuees" are a fluid-like continuous material, and a good approximation for discrete evacuees.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.14/LIPIcs.ISAAC.2018.14.pdf
Facility location
minsum sink
evacuation problem
minmax regret
dynamic flow path network
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
15:1
15:12
10.4230/LIPIcs.ISAAC.2018.15
article
Computing Optimal Shortcuts for Networks
Garijo, Delia
1
Márquez, Alberto
1
Rodríguez, Natalia
2
Silveira, Rodrigo I.
3
Departamento de Matemática Aplicada I, Universidad de Sevilla, Spain
Departamento de Computación, Universidad de Buenos Aires, Argentina
Departament de Matemàtiques, Universitat Politècnica de Catalunya, Spain
We study augmenting a plane Euclidean network with a segment, called shortcut, to minimize the largest distance between any two points along the edges of the resulting network. Questions of this type have received considerable attention recently, mostly for discrete variants of the problem. We study a fully continuous setting, where all points on the network and the inserted segment must be taken into account. We present the first results on the computation of optimal shortcuts for general networks in this model, together with several results for networks that are paths, restricted to two types of shortcuts: shortcuts with a fixed orientation and simple shortcuts.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.15/LIPIcs.ISAAC.2018.15.pdf
graph augmentation
shortcut
diameter
geometric graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
16:1
16:12
10.4230/LIPIcs.ISAAC.2018.16
article
Algorithmic Channel Design
Avarikioti, Georgia
1
Wang, Yuyi
1
Wattenhofer, Roger
1
ETH Zurich, Switzerland
Payment networks, also known as channels, are among the most promising solutions to the throughput problem of cryptocurrencies. In this paper we study the design of capital-efficient payment networks, in offline as well as online variants. We want to know how to compute an efficient payment network topology, how capital should be assigned to the individual edges, and how to decide which transactions to accept. Towards this end, we present a number of results: basic but generally applicable insights on the one hand, and hardness results and approximation algorithms on the other.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.16/LIPIcs.ISAAC.2018.16.pdf
blockchain
payment channels
layer 2 solution
network design
payment hubs
routing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
17:1
17:12
10.4230/LIPIcs.ISAAC.2018.17
article
Counting Connected Subgraphs with Maximum-Degree-Aware Sieving
Björklund, Andreas
1
Husfeldt, Thore
2
Kaski, Petteri
3
Koivisto, Mikko
4
Department of Computer Science, Lund University, Sweden
BARC, IT University of Copenhagen, Denmark and Lund University, Sweden
Department of Computer Science, Aalto University, Finland
Department of Computer Science, University of Helsinki, Finland
We study the problem of counting the isomorphic occurrences of a k-vertex pattern graph P as a subgraph in an n-vertex host graph G. Our specific interest is on algorithms for subgraph counting that are sensitive to the maximum degree Delta of the host graph.
We define a balancer as a vertex separator of P that can be represented as an intersection of two equal-size vertex subsets whose union is the vertex set of P and both of which induce connected subgraphs of P. Assuming that the pattern graph P is connected and admits a balancer of size b, we present an algorithm that counts the occurrences of P in G in O((2 Delta - 2)^{(k+b)/2} 2^{-b} (n/Delta) k^2 log n) time.
A corollary of our main result is that we can count the number of k-vertex paths in an n-vertex graph in O((2 Delta-2)^{floor[k/2]} n k^2 log n) time, which for all moderately dense graphs with Delta <= n^{1/3} improves on the recent breakthrough work of Curticapean, Dell, and Marx [STOC 2017], who show how to count the isomorphic occurrences of a q-edge pattern graph as a subgraph in an n-vertex host graph in time O(q^q n^{0.17q}) for all large enough q. Another recent result of Brand, Dell, and Husfeldt [STOC 2018] shows that k-vertex paths in a bounded-degree graph can be approximately counted in O(4^kn) time. Our result shows that the exact count can be recovered at least as fast for Delta<10.
Our algorithm is based on the principle of inclusion and exclusion, and can be viewed as a sparsity-sensitive version of the "counting in halves"-approach explored by Björklund, Husfeldt, Kaski, and Koivisto [ESA 2009].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.17/LIPIcs.ISAAC.2018.17.pdf
graph embedding
k-path
subgraph counting
maximum degree
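The counting target in the k-path corollary can be pinned down with an exhaustive counter; this exponential-time sketch (our own, assuming k >= 2) merely fixes the convention that each undirected k-vertex path is counted once as a subgraph.

```python
def count_k_paths(adj, k):
    """Count simple paths on k >= 2 vertices by exhaustive extension;
    each undirected path is found once from each end, hence the //2."""
    def extend(path):
        if len(path) == k:
            return 1
        return sum(extend(path + [w])
                   for w in adj[path[-1]] if w not in path)
    return sum(extend([v]) for v in adj) // 2

# A triangle has three 3-vertex paths; the path on 4 vertices has two.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```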
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
18:1
18:13
10.4230/LIPIcs.ISAAC.2018.18
article
Target Set Selection in Dense Graph Classes
Dvorák, Pavel
1
Knop, Dusan
2
Toufar, Tomás
1
Computer Science Institute, Charles University, Prague, Czech Republic
Algorithmics and Computational Complexity, Faculty IV, TU Berlin, Berlin, Germany and Faculty of Information Technology, Czech Technical University in Prague, Czech Republic
In this paper we study the Target Set Selection problem from a parameterized complexity perspective. Given a graph and a threshold for each vertex, the task is to find a set of vertices (called a target set) to activate at the beginning, which then activates the whole graph during the following iterative process: a vertex outside the active set becomes active if the number of already activated vertices in its neighborhood is at least its threshold.
We give two parameterized algorithms for the special case where each vertex has its threshold set to half the number of its neighbors (the so-called Majority Target Set Selection problem), for parameterizations by the neighborhood diversity and the twin cover number of the input graph.
We complement these results from the negative side. We give a hardness proof for the Majority Target Set Selection problem when parameterized by (a restriction of) the modular-width, a natural generalization of both previous structural parameters. We also show that the Target Set Selection problem with unrestricted thresholds is W[1]-hard when parameterized by the neighborhood diversity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.18/LIPIcs.ISAAC.2018.18.pdf
parameterized complexity
target set selection
dense graphs
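The activation process and the majority-threshold special case can be sketched directly; this brute-force minimum target set search is exponential and purely illustrative (rounding the majority threshold up is our own choice for the sketch).

```python
from itertools import combinations
from math import ceil

def activate(adj, thresholds, seed):
    """Run the iterative process: a vertex becomes active once the
    number of its active neighbours reaches its threshold."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and \
               sum(u in active for u in adj[v]) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def min_majority_target_set(adj):
    """Brute-force smallest target set under majority thresholds
    (here taken as half the degree, rounded up)."""
    thr = {v: ceil(len(adj[v]) / 2) for v in adj}
    verts = list(adj)
    for r in range(len(verts) + 1):
        for seed in combinations(verts, r):
            if len(activate(adj, thr, seed)) == len(verts):
                return set(seed)

# On a path, one endpoint cascades; on K4 (threshold 2) two seeds are needed.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
```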
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
19:1
19:13
10.4230/LIPIcs.ISAAC.2018.19
article
Counting Shortest Two Disjoint Paths in Cubic Planar Graphs with an NC Algorithm
Björklund, Andreas
1
Husfeldt, Thore
2
Department of Computer Science, Lund University, Sweden
BARC, IT University of Copenhagen, Denmark and Lund University, Sweden
Given an undirected graph and two disjoint vertex pairs s_1,t_1 and s_2,t_2, the Shortest two disjoint paths problem (S2DP) asks for the minimum total length of two vertex disjoint paths connecting s_1 with t_1, and s_2 with t_2, respectively.
We show that for cubic planar graphs there are NC algorithms (uniform circuits of polynomial size and polylogarithmic depth) that solve S2DP and, moreover, also output the number of such minimum-length path pairs.
Previously, to the best of our knowledge, no deterministic polynomial time algorithm was known for S2DP in cubic planar graphs with arbitrary placement of the terminals. In contrast, the randomized polynomial time algorithm by Björklund and Husfeldt, ICALP 2014, for general graphs is much slower, is serial in nature, and cannot count the solutions.
Our results are built on an approach by Hirai and Namba, Algorithmica 2017, for a generalisation of S2DP, and fast algorithms for counting perfect matchings in planar graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.19/LIPIcs.ISAAC.2018.19.pdf
Shortest disjoint paths
Cubic planar graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
20:1
20:13
10.4230/LIPIcs.ISAAC.2018.20
article
Data-Compression for Parametrized Counting Problems on Sparse Graphs
Kim, Eun Jung
1
Serna, Maria
2
Thilikos, Dimitrios M.
3
4
Université Paris-Dauphine, PSL Research University, CNRS/LAMSADE, 75016, Paris, France
Computer Science Department & BGSMath, Universitat Politècnica de Catalunya, Barcelona, Spain
AlGCo project-team, LIRMM, Université de Montpellier, CNRS, Montpellier, France
and, Department of Mathematics, National and Kapodistrian University of Athens, Greece
We study the concept of a compactor, which may be seen as a counting analogue of kernelization in parameterized counting complexity. For a function F: Sigma^* -> N and a parameterization kappa: Sigma^* -> N, a compactor (P,M) consists of a polynomial-time computable function P, called the condenser, and a computable function M, called the extractor, such that F = M o P and, for some function s, the condensed instance P(x) has length at most s(kappa(x)) for every input x in Sigma^*. If s is a polynomial function, then the compactor is said to be of polynomial size. Although the study of counting analogues of kernelization is not unprecedented, it has received little attention so far. We study a family of vertex-certified counting problems on graphs that are MSOL-expressible; that is, for an MSOL-formula phi with one free set variable to be interpreted as a vertex subset, we want to count all A subseteq V(G) such that |A|=k and (G,A) models phi. In this paper, we prove that every vertex-certified counting problem on graphs that is MSOL-expressible and treewidth-modulable, when parameterized by k, admits a polynomial-size compactor on H-topological-minor-free graphs with condensing time O(k^2 n^2) and decoding time 2^{O(k)}. This implies the existence of an FPT algorithm with running time O(n^2 k^2) + 2^{O(k)}. All aforementioned complexities are under the Uniform Cost Measure (UCM) model, where numbers can be stored in constant space and arithmetic operations take constant time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.20/LIPIcs.ISAAC.2018.20.pdf
Parameterized counting
compactor
protrusion decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
21:1
21:13
10.4230/LIPIcs.ISAAC.2018.21
article
Planar Maximum Matching: Towards a Parallel Algorithm
Datta, Samir
1
Kulkarni, Raghav
2
Kumar, Ashish
2
Mukherjee, Anish
2
Chennai Mathematical Institute & UMI ReLaX, Chennai, India
Chennai Mathematical Institute, Chennai, India
Perfect matchings in planar graphs have been extensively studied and understood in the context of parallel complexity [P. W. Kasteleyn, 1967; Vijay Vazirani, 1988; Meena Mahajan and Kasturi R. Varadarajan, 2000; Datta et al., 2010; Nima Anari and Vijay V. Vazirani, 2017]. However, corresponding results for maximum matchings have been elusive. We partly bridge this gap by proving:
1) An SPL upper bound for planar bipartite maximum matching search.
2) Planar maximum matching search reduces to planar maximum matching decision.
3) Planar maximum matching count reduces to planar bipartite maximum matching count and planar maximum matching decision.
The first bound improves on the known [Thanh Minh Hoang, 2010] bound of L^{C_=L} and is adaptable to any special bipartite graph class with non-zero circulation such as bounded genus graphs, K_{3,3}-free graphs and K_5-free graphs. Our bounds and reductions non-trivially combine techniques like the Gallai-Edmonds decomposition [L. Lovász and M.D. Plummer, 1986], deterministic isolation [Datta et al., 2010; Samir Datta et al., 2012; Rahul Arora et al., 2016], and the recent breakthroughs in the parallel search for planar perfect matchings [Nima Anari and Vijay V. Vazirani, 2017; Piotr Sankowski, 2018].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.21/LIPIcs.ISAAC.2018.21.pdf
maximum matching
planar graphs
parallel complexity
reductions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
22:1
22:12
10.4230/LIPIcs.ISAAC.2018.22
article
Distributed Approximation Algorithms for the Minimum Dominating Set in K_h-Minor-Free Graphs
Czygrinow, Andrzej
1
Hanckowiak, Michal
2
Wawrzyniak, Wojciech
2
Witkowski, Marcin
2
School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, 85287-1804, USA
Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Poznań, Poland
In this paper we give two distributed approximation algorithms (in the Local model) for the minimum dominating set problem. First, we give a distributed algorithm that finds a dominating set D of size O(gamma(G)) in a graph G with no topological copy of K_h. The algorithm runs in L_h rounds, where L_h is a constant that depends only on h. This procedure can be used to obtain a distributed algorithm that, given epsilon > 0, finds in a graph G with no K_h-minor a dominating set D of size at most (1+epsilon)gamma(G). The second algorithm runs in O(log^* |V(G)|) rounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.22/LIPIcs.ISAAC.2018.22.pdf
Distributed algorithms
minor-closed family of graphs
MDS
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
23:1
23:13
10.4230/LIPIcs.ISAAC.2018.23
article
Proving the Turing Universality of Oritatami Co-Transcriptional Folding
Geary, Cody
1
Meunier, Pierre-Étienne
2
Schabanel, Nicolas
3
Seki, Shinnosuke
4
California Institute of Technology, Pasadena, CA, USA
Maynooth University, Ireland
CNRS, ÉNS de Lyon (LIP, UMR 5668), France and IXXI, U. Lyon, France, http://perso.ens-lyon.fr/nicolas.schabanel/
Oritatami Lab, University of Electro-Communications, Tokyo, Japan, http://www.sseki.lab.uec.ac.jp/
We study the oritatami model for molecular co-transcriptional folding. In oritatami systems, the transcript (the "molecule") folds as it is synthesized (transcribed), according to a local energy optimisation process, which is similar to how actual biomolecules such as RNA fold into complex shapes and functions as they are transcribed. We prove that there is an oritatami system embedding universal computation in the folding process itself.
Our result relies on the development of a generic toolbox, which is easily reusable in future work to design complex functions in oritatami systems. We develop "low-level" tools that make it easy to spread apart the encodings of different "functions" in the transcript, even if they must be applied at the same geometric location in the folding. On top of these low-level tools, we build a programming framework with increasing levels of abstraction, from the encoding of instructions into the transcript up to logical analysis. This framework is similar to the hardware-to-algorithm levels of abstraction in standard algorithm theory. These levels of abstraction allow us to separate the proof of correctness of the global behavior of our system from the proof of correctness of its implementation. Thanks to this framework, we were able to computerise the proof of correctness of the implementation and to produce certificates, in the form of a relatively small number of proof trees that are compact and easily readable and checkable by humans, while encapsulating huge case enumerations. We believe this type of certificate can be generalised to other discrete dynamical systems whose proofs also involve large case enumerations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.23/LIPIcs.ISAAC.2018.23.pdf
Molecular computing
Turing universality
co-transcriptional folding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
24:1
24:13
10.4230/LIPIcs.ISAAC.2018.24
article
Cluster Editing in Multi-Layer and Temporal Graphs
Chen, Jiehua
1
Molter, Hendrik
2
Sorge, Manuel
1
Suchý, Ondrej
3
Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, Poland
Algorithmics and Computational Complexity, Faculty IV, TU Berlin, Berlin, Germany
Department of Theoretical Computer Science, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic
Motivated by the recent rapid growth of research on algorithms for clustering multi-layer and temporal graphs, we study extensions of the classical Cluster Editing problem. In Multi-Layer Cluster Editing we receive a set of graphs on the same vertex set, called layers, and aim to transform all layers into cluster graphs (disjoint unions of cliques) that differ only slightly. More specifically, we want to mark at most d vertices and to transform each layer into a cluster graph using at most k edge additions or deletions per layer so that, if we remove the marked vertices, we obtain the same cluster graph in all layers. In Temporal Cluster Editing we receive a sequence of layers and want to transform each layer into a cluster graph so that consecutive layers differ only slightly. That is, we want to transform each layer into a cluster graph with at most k edge additions or deletions and to mark a distinct set of d vertices in each layer so that every two consecutive layers are the same after removing the vertices marked in the first of the two layers. We study the combinatorial structure of the two problems via their parameterized complexity with respect to the parameters d and k, among others. Despite the similar definitions, the two problems behave quite differently: in particular, Multi-Layer Cluster Editing is fixed-parameter tractable with running time k^{O(k + d)} s^{O(1)} for inputs of size s, whereas Temporal Cluster Editing is W[1]-hard with respect to k even if d = 3.
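The target structure of both problems is a cluster graph, i.e. a graph in which every connected component is a clique. A minimal membership check in Python (the set-based adjacency representation is our own illustrative choice, not the paper's data structures):

```python
def is_cluster_graph(adj):
    """Return True iff the graph is a disjoint union of cliques,
    i.e. every connected component is a clique."""
    seen = set()
    for s in adj:
        if s in seen:
            continue
        # Collect the connected component of s by a simple search.
        comp, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in comp:
                    comp.add(u)
                    stack.append(u)
        seen |= comp
        # A component is a clique iff every vertex sees all other component vertices.
        if any(len(adj[v] & comp) != len(comp) - 1 for v in comp):
            return False
    return True

# Two disjoint triangles form a cluster graph; a path on 3 vertices does not.
triangle2 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_cluster_graph(triangle2), is_cluster_graph(path3))  # True False
```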
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.24/LIPIcs.ISAAC.2018.24.pdf
Cluster Editing
Temporal Graphs
Multi-Layer Graphs
Fixed-Parameter Algorithms
Polynomial Kernels
Parameterized Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
25:1
25:12
10.4230/LIPIcs.ISAAC.2018.25
article
Parameterized Query Complexity of Hitting Set Using Stability of Sunflowers
Bishnu, Arijit
1
Ghosh, Arijit
2
Kolay, Sudeshna
3
Mishra, Gopinath
1
Saurabh, Saket
2
Indian Statistical Institute, Kolkata, India
The Institute of Mathematical Sciences, Chennai, India
Eindhoven University of Technology, Eindhoven, Netherlands
In this paper, we study the query complexity of parameterized decision and optimization versions of Hitting-Set. We also investigate the query complexity of Packing. In doing so, we use generalizations to hypergraphs of an earlier query model, known as BIS, introduced by Beame et al. in ITCS'18. The query models considered are the GPIS and GPISE oracles, used for the decision and optimization versions of the problems, respectively. We use color coding and queries to the oracles to generate subsamples from the hypergraph that retain some structural properties of the original hypergraph. We use the stability of sunflowers in a non-trivial way to do so.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.25/LIPIcs.ISAAC.2018.25.pdf
Query complexity
Hitting set
Parameterized complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
26:1
26:13
10.4230/LIPIcs.ISAAC.2018.26
article
Approximate Minimum-Weight Matching with Outliers Under Translation
Agarwal, Pankaj K.
1
Kaplan, Haim
2
Kipper, Geva
2
Mulzer, Wolfgang
3
https://orcid.org/0000-0002-1948-5840
Rote, Günter
3
https://orcid.org/0000-0002-0351-5945
Sharir, Micha
2
Xiao, Allen
1
Department of Computer Science, Duke University, Durham, NC 27708, USA
School of Computer Science, Tel Aviv University, Tel Aviv 69978, Israel
Institut für Informatik, Freie Universität Berlin, 14195 Berlin, Germany
Our goal is to compare two planar point sets by finding subsets of a given size such that a minimum-weight matching between them has the smallest weight. This can be done by a translation of one set that minimizes the weight of the matching. We give efficient algorithms (a) for finding approximately optimal matchings, when the cost of a matching is the L_p-norm of the tuple of the Euclidean distances between the pairs of matched points, for any p in [1,infty], and (b) for constructing small-size approximate minimization (or matching) diagrams: partitions of the translation space into regions, together with an approximate optimal matching for each region.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.26/LIPIcs.ISAAC.2018.26.pdf
Minimum-weight partial matching
Pattern matching
Approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
27:1
27:12
10.4230/LIPIcs.ISAAC.2018.27
article
New and Improved Algorithms for Unordered Tree Inclusion
Akutsu, Tatsuya
1
Jansson, Jesper
2
Li, Ruiming
1
Takasu, Atsuhiro
3
Tamura, Takeyuki
1
Bioinformatics Center, Institute for Chemical Research, Kyoto University, Kyoto 611-0011, Japan
Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
National Institute of Informatics, Chiyoda-ku, Tokyo, 101-8430, Japan
The tree inclusion problem is, given two node-labeled trees P and T (the "pattern tree" and the "text tree"), to locate every minimal subtree in T (if any) that can be obtained by applying a sequence of node insertion operations to P. Although the ordered tree inclusion problem is solvable in polynomial time, the unordered tree inclusion problem is NP-hard. The currently fastest algorithm for the latter is from 1995 and runs in O(poly(m,n) * 2^{2d}) = O^*(2^{2d}) time, where m and n are the sizes of the pattern and text trees, respectively, and d is the maximum outdegree of the pattern tree. Here, we develop a new algorithm that improves the exponent 2d to d by considering a particular type of ancestor-descendant relationships and applying dynamic programming, thus reducing the time complexity to O^*(2^d). We then study restricted variants of the unordered tree inclusion problem where the number of occurrences of different node labels and/or the input trees' heights are bounded. We show that although the problem remains NP-hard in many such cases, it can be solved in polynomial time for c = 2 and in O^*(1.8^d) time for c = 3 if the leaves of P are distinctly labeled and each label occurs at most c times in T. We also present a randomized O^*(1.883^d)-time algorithm for the case that the heights of P and T are one and two, respectively.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.27/LIPIcs.ISAAC.2018.27.pdf
parameterized algorithms
tree inclusion
unordered trees
dynamic programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
28:1
28:13
10.4230/LIPIcs.ISAAC.2018.28
article
Beyond-Planarity: Turán-Type Results for Non-Planar Bipartite Graphs
Angelini, Patrizio
1
Bekos, Michael A.
1
Kaufmann, Michael
1
Pfister, Maximilian
1
Ueckerdt, Torsten
2
Wilhelm-Schickard-Institut für Informatik, Universität Tübingen, Germany
Fakultät für Informatik, KIT, Karlsruhe, Germany
Beyond-planarity focuses on the study of geometric and topological graphs that are in some sense nearly planar. Here, planarity is relaxed by allowing edge crossings, but only with respect to some local forbidden crossing configurations. Early research dates back to the 1960s (e.g., Avital and Hanani 1966) for extremal problems on geometric graphs, but is also related to graph drawing problems where visual clutter due to edge crossings should be minimized (e.g., Huang et al. 2018).
Most of the literature focuses on Turán-type problems, which ask for the maximum number of edges a beyond-planar graph can have. Here, we study this problem for bipartite topological graphs, considering several types of beyond-planar graphs, namely 1-planar, 2-planar, fan-planar, and RAC graphs. We prove bounds on the number of edges that are tight up to additive constants; some of them are surprising and do not follow the lines of the known results for non-bipartite graphs. Our findings lead to an improvement of the leading constant of the well-known Crossing Lemma for bipartite graphs, as well as to a number of interesting questions on topological graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.28/LIPIcs.ISAAC.2018.28.pdf
Bipartite topological graphs
beyond planarity
density
Crossing Lemma
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
29:1
29:13
10.4230/LIPIcs.ISAAC.2018.29
article
A Dichotomy Result for Cyclic-Order Traversing Games
Chen, Yen-Ting
1
Tsai, Meng-Tsung
1
Tsai, Shi-Chun
1
Department of Computer Science, National Chiao Tung University, Hsinchu City, Taiwan
A traversing game is a two-person game played on a connected undirected simple graph with a source node and a destination node. A pebble is placed on the source node initially and then moves autonomously according to certain rules. Alice is the player who wants to set up a rule for each node determining where the node forwards the pebble when the pebble reaches it, so that the pebble can reach the destination node. Bob is the second player, who tries to thwart Alice's effort by removing edges. Given access to Alice's rules, Bob can remove as many edges as he likes, as long as the source and destination nodes remain connected. If, under Alice's rules, the pebble arrives at the destination node, then Alice wins the traversing game; otherwise, the pebble enters an endless loop without passing through the destination node, and Bob wins. We assume that both Alice and Bob play optimally.
We study the question: when does Alice have a winning strategy? This models a routing-recovery problem in Software-Defined Networking in which some links may be broken. In this paper, we prove a dichotomy result for a class of traversing games, called cyclic-order traversing games. We also give a linear-time algorithm that finds the corresponding winning strategy, if one exists.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.29/LIPIcs.ISAAC.2018.29.pdf
st-planar graphs
biconnectivity
fault-tolerant routing algorithms
software defined network
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
30:1
30:13
10.4230/LIPIcs.ISAAC.2018.30
article
The b-Matching Problem in Distance-Hereditary Graphs and Beyond
Ducoffe, Guillaume
1
Popa, Alexandru
2
ICI – National Institute for Research and Development in Informatics, Bucharest, Romania , The Research Institute of the University of Bucharest ICUB, Bucharest, Romania
University of Bucharest, Bucharest, Romania , ICI – National Institute for Research and Development in Informatics, Bucharest, Romania
We make progress on the fine-grained complexity of Maximum-Cardinality Matching on graphs of bounded clique-width. Quasi-linear-time algorithms for this problem have recently been proposed for the important subclasses of bounded-treewidth graphs (Fomin et al., SODA'17) and graphs of bounded modular-width (Coudert et al., SODA'18). We present such an algorithm for bounded split-width graphs, a broad generalization of graphs of bounded modular-width, of which the distance-hereditary graphs are an interesting subclass. Specifically, we solve Maximum-Cardinality Matching in O((k log^2 k)(m+n) log n) time on graphs with split-width at most k. We stress that the existence of such an algorithm was not even known for distance-hereditary graphs until our work. In doing so, we improve the state of the art (Dragan, WG'97) and answer an open question of (Coudert et al., SODA'18). Our work brings more insight into the relationship between matchings and splits, a.k.a. join operations between two vertex subsets in different connected components. Furthermore, our analysis can be extended to the more general (unit-cost) b-Matching problem. Along the way, we introduce new tools for b-Matching and for dynamic programming over split decompositions, which may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.30/LIPIcs.ISAAC.2018.30.pdf
maximum-cardinality matching
b-matching
FPT in P
split decomposition
distance-hereditary graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
31:1
31:12
10.4230/LIPIcs.ISAAC.2018.31
article
New Algorithms for Edge Induced König-Egerváry Subgraph Based on Gallai-Edmonds Decomposition
Feng, Qilong
1
Tan, Guanlan
1
Zhu, Senmin
1
Fu, Bin
2
Wang, Jianxin
1
School of Information Science and Engineering, Central South University, Changsha, P.R. China
Department of Computer Science, University of Texas-Rio Grande Valley, USA
König-Egerváry graphs form an important graph class that has been studied extensively in graph theory. Much attention has also been paid to König-Egerváry subgraphs and König-Egerváry graph modification problems. In this paper, we focus on one König-Egerváry subgraph problem, called the Maximum Edge Induced König Subgraph problem. By exploiting the classical Gallai-Edmonds decomposition, we establish connections between minimum vertex cover, the Gallai-Edmonds decomposition structure, maximum matching, maximum bisection, and König-Egerváry subgraph structure. We obtain a new structural property of König-Egerváry subgraphs: every graph G=(V, E) has an edge induced König-Egerváry subgraph with at least 2|E|/3 edges. Based on this new structural property, we present an approximation algorithm with ratio 10/7 for the Maximum Edge Induced König Subgraph problem, improving the current best ratio of 5/3. To the best of our knowledge, this paper is the first to establish a connection between the Gallai-Edmonds decomposition and König-Egerváry graphs. Using 2|E|/3 as a lower bound, we define the Edge Induced König Subgraph above lower bound problem, and give a kernel with at most 30k edges for the problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.31/LIPIcs.ISAAC.2018.31.pdf
König-Egerváry graph
Gallai-Edmonds decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
32:1
32:13
10.4230/LIPIcs.ISAAC.2018.32
article
Computing Approximate Statistical Discrepancy
Matheny, Michael
1
Phillips, Jeff M.
1
University of Utah, Salt Lake City, USA
Consider a geometric range space (X, A), where X is the union of a red set R and a blue set B. Let Phi(A) denote the absolute difference between the fraction of red points and the fraction of blue points that fall in the range A. The maximum-discrepancy range is A^* = arg max_{A in (X,A)} Phi(A). Our goal is to find some A^ in (X,A) such that Phi(A^*) - Phi(A^) <= epsilon. We develop general algorithms for this approximation problem for range spaces of bounded VC-dimension, as well as significant improvements for specific geometric range spaces defined by balls, halfspaces, and axis-aligned rectangles. This problem has direct applications in discrepancy evaluation and classification, and we also show an improved reduction to a class of problems in spatial scan statistics.
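The discrepancy function Phi can be computed directly from its definition. A minimal sketch, with ranges represented as membership predicates (an illustrative choice of our own, not the paper's data structures):

```python
def discrepancy(red, blue, in_range):
    """Phi(A): absolute difference between the fraction of red points and the
    fraction of blue points that fall in the range A (given as a predicate)."""
    r = sum(1 for p in red if in_range(p)) / len(red)
    b = sum(1 for p in blue if in_range(p)) / len(blue)
    return abs(r - b)

# An axis-aligned rectangle range [0,1] x [0,1], written as a predicate:
inside = lambda p: 0 <= p[0] <= 1 and 0 <= p[1] <= 1
red = [(0.2, 0.3), (0.5, 0.5), (3.0, 3.0)]
blue = [(2.0, 2.0), (0.1, 0.9)]
print(discrepancy(red, blue, inside))  # |2/3 - 1/2| = 1/6
```

Approximation algorithms of the kind described above avoid evaluating Phi on all ranges; this direct evaluation is only the baseline being approximated.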
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.32/LIPIcs.ISAAC.2018.32.pdf
Scan Statistics
Discrepancy
Rectangles
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
33:1
33:12
10.4230/LIPIcs.ISAAC.2018.33
article
Diversity Maximization in Doubling Metrics
Cevallos, Alfonso
1
https://orcid.org/0000-0001-8622-5830
Eisenbrand, Friedrich
2
https://orcid.org/0000-0001-7928-1076
Morell, Sarah
3
Swiss Federal Institute of Technology (ETH), Switzerland
École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Technische Universität Berlin (TU Berlin), Germany
Diversity maximization is an important geometric optimization problem with many applications in recommender systems, machine learning, and search engines, among others. A typical diversification problem is as follows: given a finite metric space (X,d) and a parameter k in N, find a subset of k elements of X that has maximum diversity. There are many functions that measure diversity. One of the most popular measures, called remote-clique, is the sum of the pairwise distances of the chosen elements. In this paper, we present novel results on three widely used diversity measures: remote-clique, remote-star, and remote-bipartition.
Our main results are polynomial-time approximation schemes for these three diversification problems under the assumption that the metric space is doubling. This setting has been discussed in the recent literature; the existence of such a PTAS, however, was left open.
Our results also hold in the setting where the distances are raised to a fixed power q >= 1, giving rise to further variants of diversity functions, similar in spirit to the variations of clustering problems depending on the power applied to the pairwise distances. Finally, we provide a proof of NP-hardness for remote-clique with squared distances in doubling metric spaces.
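As a toy illustration of the remote-clique measure, one can evaluate the diversity of a chosen subset and run a simple farthest-point-style greedy heuristic. This is purely illustrative and is not the PTAS of the paper:

```python
from itertools import combinations
from math import dist

def remote_clique(points, subset):
    """Remote-clique diversity: sum of pairwise Euclidean distances
    over the chosen elements (given as indices into points)."""
    return sum(dist(points[i], points[j]) for i, j in combinations(subset, 2))

def greedy_diverse(points, k):
    """Naive greedy heuristic: repeatedly add the point maximising its
    total distance to the points chosen so far."""
    chosen = [0]  # start from an arbitrary point
    while len(chosen) < k:
        best = max((i for i in range(len(points)) if i not in chosen),
                   key=lambda i: sum(dist(points[i], points[j]) for j in chosen))
        chosen.append(best)
    return chosen

pts = [(0, 0), (0.1, 0), (1, 0), (0, 1), (1, 1)]
sel = greedy_diverse(pts, 3)
print(sel, remote_clique(pts, sel))  # the near-duplicate point (0.1, 0) is avoided
```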
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.33/LIPIcs.ISAAC.2018.33.pdf
Remote-clique
remote-star
remote-bipartition
doubling dimension
grid rounding
epsilon-nets
polynomial time approximation scheme
facility location
information retrieval
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
34:1
34:12
10.4230/LIPIcs.ISAAC.2018.34
article
On Polynomial Time Constructions of Minimum Height Decision Tree
Bshouty, Nader H.
1
Makhoul, Waseem
1
Department of Computer Science, Technion, Haifa, Israel
A decision tree T in B_m := {0,1}^m is a binary tree where each internal node is labeled with an integer in [m] = {1,2,...,m}, each leaf is labeled with an assignment a in B_m, and each internal node has two outgoing edges labeled with 0 and 1, respectively. Let A subseteq {0,1}^m. We say that T is a decision tree for A if (1) for every a in A there is one leaf of T that is labeled with a, and (2) for every path from the root to a leaf with internal nodes labeled i_1, i_2, ..., i_k in [m], leaf labeled a in A, and edges labeled xi_{i_1}, ..., xi_{i_k} in {0,1}, the assignment a is the only element of A that satisfies a_{i_j} = xi_{i_j} for all j = 1, ..., k.
Our goal is to give a polynomial-time (in n := |A| and m) algorithm that, for an input A subseteq B_m, outputs a decision tree for A of minimum depth. This problem has many applications, including computer vision, group testing, exact learning from membership queries, and game theory.
Arkin et al. and Moshkov [Esther M. Arkin et al., 1998; Mikhail Ju. Moshkov, 2004] gave a polynomial-time (ln |A|)-approximation algorithm (for the depth). The result of Dinur and Steurer [Irit Dinur and David Steurer, 2014] for set cover implies that this problem cannot be approximated within ratio (1-o(1)) ln |A| unless P=NP. Moshkov studied in [Mikhail Ju. Moshkov, 2004; Mikhail Ju. Moshkov, 1982; Mikhail Ju. Moshkov, 1982] the combinatorial measure of the extended teaching dimension of A, ETD(A). He showed that ETD(A) is a lower bound on the depth of any decision tree for A, and then gave an exponential-time (ETD(A)/log ETD(A))-approximation algorithm and a polynomial-time 2(ln 2)ETD(A)-approximation algorithm.
In this paper we further study the measure ETD(A) and a new combinatorial measure, DEN(A), that we call the density of the set A. We show that DEN(A) <= ETD(A)+1. We then give two results. The first is that Moshkov's lower bound ETD(A) on the depth of a decision tree for A is greater than the bounds obtained by the classical techniques used in the literature. The second is a polynomial-time (ln 2)DEN(A)-approximation (and therefore (ln 2)ETD(A)-approximation) algorithm for the depth of a decision tree for A.
We then apply the above results to learning the class of disjunctions of predicates from membership queries [Nader H. Bshouty et al., 2017]. We show that the ETD of this class is bounded from above by the degree d of its Hasse diagram. We then show that Moshkov's algorithm can be run in polynomial time and is a (d/log d)-approximation algorithm. This gives optimal algorithms when the degree is constant, for example when learning axis-parallel rays over a constant-dimensional space.
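For very small instances, the minimum decision-tree depth from the definition above can be computed by brute force over the query bits. The sketch below is purely illustrative (exponential time) and is unrelated to the approximation algorithms of the paper:

```python
def min_depth(A, m):
    """Exact minimum depth of a decision tree distinguishing all strings in A,
    by brute force over the m possible query bits (tiny inputs only)."""
    def solve(S):
        if len(S) <= 1:
            return 0  # a single assignment needs no further queries
        best = float('inf')
        for i in range(m):
            zero = frozenset(a for a in S if a[i] == '0')
            one = S - zero
            if zero and one:  # querying bit i must split S to make progress
                best = min(best, 1 + max(solve(zero), solve(one)))
        return best
    return solve(frozenset(A))

# Four strings over 3 bits: no single query separates them, but two levels do.
print(min_depth({'000', '011', '101', '110'}, 3))  # 2
```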
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.34/LIPIcs.ISAAC.2018.34.pdf
Decision Tree
Minimal Depth
Approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
35:1
35:13
10.4230/LIPIcs.ISAAC.2018.35
article
Improved Algorithms for the Shortest Vector Problem and the Closest Vector Problem in the Infinity Norm
Aggarwal, Divesh
1
Mukhopadhyay, Priyanka
2
Centre for Quantum Technologies and School of Computing, National University of Singapore
Centre for Quantum Technologies, National University of Singapore
Ajtai, Kumar and Sivakumar [Ajtai et al., 2001] gave the first 2^O(n) algorithm for solving the Shortest Vector Problem (SVP) on n-dimensional Euclidean lattices. The algorithm starts with N in 2^O(n) randomly chosen vectors in the lattice and employs a sieving procedure to iteratively obtain shorter lattice vectors, eventually obtaining a shortest non-zero vector. The running time of the sieving procedure is quadratic in N. Subsequent works [Arvind and Joglekar, 2008; Blömer and Naewe, 2009] generalized the algorithm to other norms.
We study this problem for the special but important case of the l_infty norm. We give a new sieving procedure that runs in time linear in N, thereby improving the running time of the algorithm for SVP in the l_infty norm. As in [Ajtai et al., 2002; Blömer and Naewe, 2009], we also extend this algorithm to obtain significantly faster algorithms for approximate versions of the shortest vector problem and the closest vector problem (CVP) in the l_infty norm.
We also show that the heuristic sieving algorithms of Nguyen and Vidick [Nguyen and Vidick, 2008] and Wang et al. [Wang et al., 2011] can be analyzed in the l_infty norm as well. The main technical contribution in this part is to calculate the expected volume of the intersection of a unit ball centred at the origin and another ball of a different radius centred at a uniformly random point on the boundary of the unit ball. This may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.35/LIPIcs.ISAAC.2018.35.pdf
Lattice
Shortest Vector Problem
Closest Vector Problem
l_infty norm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
36:1
36:13
10.4230/LIPIcs.ISAAC.2018.36
article
An Adaptive Version of Brandes' Algorithm for Betweenness Centrality
Bentert, Matthias
1
Dittmann, Alexander
1
Kellerhals, Leon
1
Nichterlein, André
1
Niedermeier, Rolf
1
Algorithmics and Computational Complexity, Faculty IV, TU Berlin, Germany
Betweenness centrality - measuring how many shortest paths pass through a vertex - is one of the most important network analysis concepts for assessing the relative importance of a vertex. The well-known algorithm of Brandes [2001] computes, on an n-vertex and m-edge graph, the betweenness centrality of all vertices in O(nm) worst-case time. In follow-up work, significant empirical speedups were achieved by preprocessing degree-one vertices and by graph partitioning based on cut vertices. We further contribute an algorithmic treatment of degree-two vertices, which turns out to be much richer in mathematical structure than the case of degree-one vertices. Based on these three algorithmic ingredients, we provide a strengthened worst-case running time analysis for betweenness centrality algorithms. More specifically, we prove an adaptive running time bound of O(kn), where k < m is the size of a minimum feedback edge set of the input graph.
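To make the measure concrete, here is a brute-force sketch (not Brandes' algorithm or the adaptive variant above) that computes betweenness centrality by enumerating all shortest paths; it is exponential in general and intended only as a reference for tiny graphs:

```python
from collections import deque
from itertools import permutations

def all_shortest_paths(adj, s, t):
    # Enumerate all shortest s-t paths by BFS over simple paths
    # (exponential in general; fine for tiny illustrative graphs).
    best, found = None, []
    queue = deque([(s, (s,))])
    while queue:
        v, path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        if v == t:
            best = len(path)
            found.append(path)
            continue
        for w in adj[v]:
            if w not in path:
                queue.append((w, path + (w,)))
    return found

def betweenness(adj):
    # bc(v) = sum over ordered pairs (s, t) with s != v != t of the
    # fraction of shortest s-t paths that pass through v.
    bc = {v: 0.0 for v in adj}
    for s, t in permutations(adj, 2):
        paths = all_shortest_paths(adj, s, t)
        if not paths:
            continue
        for v in adj:
            if v not in (s, t):
                bc[v] += sum(v in p for p in paths) / len(paths)
    return bc
```

On the path a-b-c, for example, only the middle vertex b lies on the shortest paths between the other two, so it is the only vertex with non-zero centrality.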
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.36/LIPIcs.ISAAC.2018.36.pdf
network science
social network analysis
centrality measures
shortest paths
tree-like graphs
efficient pre- and postprocessing
FPT in P
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
37:1
37:13
10.4230/LIPIcs.ISAAC.2018.37
article
Algorithms for Coloring Reconfiguration Under Recolorability Constraints
Osawa, Hiroki
1
Suzuki, Akira
1
https://orcid.org/0000-0002-5212-0202
Ito, Takehiro
1
https://orcid.org/0000-0002-9912-6898
Zhou, Xiao
1
Graduate School of Information Sciences, Tohoku University, Japan
Coloring reconfiguration is one of the most well-studied reconfiguration problems. In the problem, we are given two (vertex-)colorings of a graph using at most k colors, and asked to determine whether there exists a transformation between them by recoloring only a single vertex at a time, while maintaining a k-coloring throughout. It is known that this problem is solvable in linear time for any graph if k <= 3, while it is PSPACE-complete for every fixed k >= 4. In this paper, we further investigate the problem from the viewpoint of recolorability constraints, which forbid some pairs of colors to be recolored directly. More specifically, the recolorability constraint is given in terms of an undirected graph R such that each node in R corresponds to a color, and each edge in R represents a pair of colors that can be recolored directly. We give a linear-time algorithm to solve the problem under such a recolorability constraint if R is of maximum degree at most two. In addition, we show that the minimum number of recoloring steps required for a desired transformation can be computed in linear time for a yes-instance. We note that our results generalize the known positive ones for coloring reconfiguration.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.37/LIPIcs.ISAAC.2018.37.pdf
combinatorial reconfiguration
graph algorithm
graph coloring
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
38:1
38:9
10.4230/LIPIcs.ISAAC.2018.38
article
A Cut Tree Representation for Pendant Pairs
Lo, On-Hei S.
1
Schmidt, Jens M.
1
Institut für Mathematik, Technische Universität Ilmenau, Weimarer Strasse 25, D-98693 Ilmenau, Germany
Two vertices v and w of a graph G are called a pendant pair if the maximal number of edge-disjoint paths in G between them is precisely min{d(v),d(w)}, where d denotes the degree function. The importance of pendant pairs stems from the fact that they are the key ingredient in one of the simplest and most widely used algorithms for the minimum cut problem today.
Mader showed in 1974 that every simple graph with minimum degree delta contains Omega(delta^2) pendant pairs; this is the best bound known so far. We improve this result by showing that every simple graph G with minimum degree delta >= 5, or with edge-connectivity lambda >= 4, or with vertex-connectivity kappa >= 3, in fact contains Omega(delta |V|) pendant pairs. We prove that this bound is tight from several perspectives, and that Omega(delta |V|) pendant pairs can be computed efficiently, namely in linear time when a Gomory-Hu tree is given. Our method utilizes a new cut tree representation of graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.38/LIPIcs.ISAAC.2018.38.pdf
Pendant Pairs
Pendant Tree
Maximal Adjacency Ordering
Maximum Cardinality Search
Testing Edge-Connectivity
Gomory-Hu Tree
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
39:1
39:13
10.4230/LIPIcs.ISAAC.2018.39
article
Polyline Drawings with Topological Constraints
Di Giacomo, Emilio
1
https://orcid.org/0000-0002-9794-1928
Eades, Peter
2
Liotta, Giuseppe
1
https://orcid.org/0000-0002-2886-9694
Meijer, Henk
3
Montecchiani, Fabrizio
1
https://orcid.org/0000-0002-0543-8912
Università degli Studi di Perugia, Perugia, Italy
University of Sydney, Sydney, Australia
University College Roosevelt, Middelburg, The Netherlands
Let G be a simple topological graph and let Gamma be a polyline drawing of G. We say that Gamma partially preserves the topology of G if it has the same external boundary, the same rotation system, and the same set of crossings as G. Drawing Gamma fully preserves the topology of G if the planarization of G and the planarization of Gamma have the same planar embedding. We show that if the set of crossing-free edges of G forms a connected spanning subgraph, then G admits a polyline drawing that partially preserves its topology and that has curve complexity at most three (i.e., at most three bends per edge). If, however, the set of crossing-free edges of G is not a connected spanning subgraph, the curve complexity may be Omega(sqrt{n}). Concerning drawings that fully preserve the topology, we show that if G has skewness k, it admits one such drawing with curve complexity at most 2k; for skewness-1 graphs, the curve complexity can be reduced to one, which is a tight bound. We also consider optimal 2-plane graphs and discuss trade-offs between curve complexity and crossing angle resolution of drawings that fully preserve the topology.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.39/LIPIcs.ISAAC.2018.39.pdf
Topological graphs
graph drawing
curve complexity
skewness-k graphs
k-planar graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
40:1
40:13
10.4230/LIPIcs.ISAAC.2018.40
article
Almost Optimal Algorithms for Diameter-Optimally Augmenting Trees
Bilò, Davide
1
https://orcid.org/0000-0003-3169-4300
Department of Humanities and Social Sciences, University of Sassari, Via Roma 151, 07100 Sassari (SS), Italy
We consider the problem of augmenting an n-vertex tree with one shortcut in order to minimize the diameter of the resulting graph. The tree is embedded in an unknown space and we have access to an oracle that, when queried on a pair of vertices u and v, reports the weight of the shortcut (u,v) in constant time. Previously, the problem was solved in O(n^2 log^3 n) time for general weights [Oh and Ahn, ISAAC 2016], in O(n^2 log n) time for trees embedded in a metric space [Große et al., https://arxiv.org/abs/1607.05547], and in O(n log n) time for paths embedded in a metric space [Wang, WADS 2017]. Furthermore, a (1+epsilon)-approximation algorithm running in O(n + 1/epsilon^3) time has been designed for paths embedded in R^d, for constant values of d [Große et al., ICALP 2015].
The contribution of this paper is twofold: we address the problem for trees (not only paths) and we also improve upon all known results. More precisely, we design a time-optimal O(n^2) algorithm for general weights. Moreover, for trees embedded in a metric space, we design (i) an exact O(n log n) time algorithm and (ii) a (1+epsilon)-approximation algorithm that runs in O(n + epsilon^{-1} log epsilon^{-1}) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.40/LIPIcs.ISAAC.2018.40.pdf
Graph diameter
augmentation problem
trees
time-efficient algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
41:1
41:13
10.4230/LIPIcs.ISAAC.2018.41
article
Approximation Algorithms for Facial Cycles in Planar Embeddings
Da Lozzo, Giordano
1
https://orcid.org/0000-0003-2396-5174
Rutter, Ignaz
2
https://orcid.org/0000-0002-3794-4406
Computer Science Department, Roma Tre University, Italy
Department of Computer Science and Mathematics, University of Passau, Germany
Consider the following combinatorial problem: Given a planar graph G and a set of simple cycles C in G, find a planar embedding E of G such that the number of cycles in C that bound a face in E is maximized. This problem, called Max Facial C-Cycles, was first studied by Mutzel and Weiskircher [IPCO '99, http://dx.doi.org/10.1007/3-540-48777-8_27] and then proved NP-hard by Woeginger [Oper. Res. Lett., 2002, http://dx.doi.org/10.1016/S0167-6377(02)00119-0].
We establish a tight border of tractability for Max Facial C-Cycles in biconnected planar graphs by giving conditions under which the problem is NP-hard and showing that strengthening any of these conditions makes the problem polynomial-time solvable. Our main results are approximation algorithms for Max Facial C-Cycles. Namely, we give a 2-approximation for series-parallel graphs and a (4+epsilon)-approximation for biconnected planar graphs. Remarkably, this provides one of the first approximation algorithms for constrained embedding problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.41/LIPIcs.ISAAC.2018.41.pdf
Planar Embeddings
Facial Cycles
Complexity
Approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
42:1
42:13
10.4230/LIPIcs.ISAAC.2018.42
article
An Algorithm for the Maximum Weight Strongly Stable Matching Problem
Kunysz, Adam
1
Institute of Computer Science, University of Wrocław, Poland
An instance of the maximum weight strongly stable matching problem with incomplete lists and ties is an undirected bipartite graph G = (A cup B, E), with an adjacency list being a linearly ordered list of ties, which are vertices equally good for a given vertex. We are also given a weight function w on the set E. An edge (x, y) in E setminus M is a blocking edge for M if by getting matched to each other neither of the vertices x and y would become worse off and at least one of them would become better off. A matching is strongly stable if there is no blocking edge with respect to it. The goal is to compute a strongly stable matching of maximum weight with respect to w.
We give a polyhedral characterisation of the problem and prove that the strongly stable matching polytope is integral. This result implies that the maximum weight strongly stable matching problem can be solved in polynomial time, thereby answering an open question of Gusfield and Irving [Dan Gusfield and Robert W. Irving, 1989]. The main result of this paper is an efficient O(nm log(Wn)) time algorithm for computing a maximum weight strongly stable matching, where n = |V|, m = |E|, and W is the maximum weight of an edge in G. For small edge weights we show that the problem can be solved in O(nm) time. Note that the fastest known algorithm for the unweighted version of the problem runs in O(nm) time [Telikepalli Kavitha et al., 2007]. Our algorithm is based on the rotation structure which was constructed for strongly stable matchings in [Adam Kunysz et al., 2016].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.42/LIPIcs.ISAAC.2018.42.pdf
Stable marriage
Strongly stable matching
Weighted matching
Rotation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
43:1
43:11
10.4230/LIPIcs.ISAAC.2018.43
article
Approximation Algorithm for Vertex Cover with Multiple Covering Constraints
Hong, Eunpyeong
1
Kao, Mong-Jen
2
Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
Department of Computer Science and Information Engineering, National Chung-Cheng University, Chiayi, Taiwan
We consider the vertex cover problem with multiple coverage constraints in hypergraphs. In this problem, we are given a hypergraph G=(V,E) with a maximum edge size f, a cost function w: V -> Z^+, and edge subsets P_1,P_2,...,P_r of E along with covering requirements k_1,k_2,...,k_r for each subset. The objective is to find a minimum cost subset S of V such that, for each edge subset P_i, at least k_i edges of it are covered by S. This problem is a basic yet general form of the classical vertex cover problem and a generalization of the edge-partitioned vertex cover problem considered by Bera et al.
We present a primal-dual algorithm yielding an (f * H_r + H_r)-approximation for this problem, where H_r is the r^{th} harmonic number. This improves over the previous ratio of (3cf log r), where c is a large constant used to ensure a low failure probability for Monte-Carlo randomized algorithms. Compared to the previous result, our algorithm is deterministic and purely combinatorial, meaning that no ellipsoid solver is required for this basic problem. Our result can be seen as a novel reinterpretation of a few classical tight results using the language of LP primal-duality.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.43/LIPIcs.ISAAC.2018.43.pdf
Vertex cover
multiple cover constraints
Approximation algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
44:1
44:13
10.4230/LIPIcs.ISAAC.2018.44
article
Correlation Clustering Generalized
Gleich, David F.
1
Veldt, Nate
2
Wirth, Anthony
3
https://orcid.org/0000-0003-3746-6704
Department of Computer Science, Purdue University, West Lafayette, Indiana, USA
Department of Mathematics, Purdue University, West Lafayette, Indiana, USA
School of Computing and Information Systems, The University of Melbourne, Parkville, Victoria, Australia
We present new results for LambdaCC and MotifCC, two recently introduced variants of the well-studied correlation clustering problem. Both variants are motivated by applications to network analysis and community detection, and have non-trivial approximation algorithms.
We first show that the standard linear programming relaxation of LambdaCC has a Theta(log n) integrality gap for a certain choice of the parameter lambda. This sheds light on previous challenges encountered in obtaining parameter-independent approximation results for LambdaCC. We generalize a previous constant-factor algorithm to provide the best results obtainable via the LP-rounding approach for an extended range of lambda.
MotifCC generalizes correlation clustering to the hypergraph setting. In the case of hyperedges of degree 3 with weights satisfying probability constraints, we improve the best approximation factor from 9 to 8. We show that in general our algorithm gives a 4(k-1)-approximation when hyperedges have maximum degree k and probability weights. We additionally present approximation results for LambdaCC and MotifCC where we restrict to forming only two clusters.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.44/LIPIcs.ISAAC.2018.44.pdf
Correlation Clustering
Approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
45:1
45:12
10.4230/LIPIcs.ISAAC.2018.45
article
Partitioning Vectors into Quadruples: Worst-Case Analysis of a Matching-Based Algorithm
Ficker, Annette M. C.
1
Erlebach, Thomas
2
https://orcid.org/0000-0002-4470-5868
Mihalák, Matús
3
https://orcid.org/0000-0002-1898-607X
Spieksma, Frits C. R.
4
https://orcid.org/0000-0002-2547-3782
Faculty of Economics and Business, KU Leuven, Leuven, Belgium
Department of Informatics, University of Leicester, Leicester, United Kingdom
Department of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
Consider a problem where 4k given vectors need to be partitioned into k clusters of four vectors each. A cluster of four vectors is called a quad, and the cost of a quad is the sum of the component-wise maxima of the four vectors in the quad. The problem is to partition the given 4k vectors into k quads with minimum total cost. We analyze a straightforward matching-based algorithm and prove that this algorithm is a 3/2-approximation algorithm for this problem. We further analyze the performance of this algorithm on a hierarchy of special cases of the problem and prove that, in one particular case, the algorithm is a 5/4-approximation algorithm. Our analysis is tight in all cases except one.
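The objective above can be sketched directly; the following is a minimal illustration of the quad cost and an exhaustive baseline for the optimal partition (not the paper's matching-based 3/2-approximation algorithm), usable only for tiny instances:

```python
from itertools import combinations

def quad_cost(quad):
    # Cost of a quad = sum of the component-wise maxima of its four vectors.
    return sum(max(v[i] for v in quad) for i in range(len(quad[0])))

def min_partition_cost(vectors):
    # Exhaustive optimum over all partitions into quads (tiny inputs only);
    # the matching-based algorithm approximates this within a factor 3/2.
    def solve(remaining):
        if not remaining:
            return 0
        first, rest = remaining[0], remaining[1:]
        best = float('inf')
        for trio in combinations(rest, 3):
            quad = [vectors[i] for i in (first,) + trio]
            left = [i for i in rest if i not in trio]
            best = min(best, quad_cost(quad) + solve(left))
        return best
    return solve(list(range(len(vectors))))
```

For instance, with four copies of (1,0) and four copies of (0,1), grouping the identical vectors gives two quads of cost 1 each, whereas any mixed quad already costs 2.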
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.45/LIPIcs.ISAAC.2018.45.pdf
approximation algorithm
matching
clustering problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
46:1
46:12
10.4230/LIPIcs.ISAAC.2018.46
article
Coresets for Fuzzy K-Means with Applications
Blömer, Johannes
1
Brauer, Sascha
1
Bujna, Kathrin
1
Department of Computer Science, Paderborn University, Paderborn, Germany
The fuzzy K-means problem is a popular generalization of the well-known K-means problem to soft clusterings. We present the first coresets for fuzzy K-means with size linear in the dimension, polynomial in the number of clusters, and poly-logarithmic in the number of points. We show that these coresets can be employed in the computation of a (1+epsilon)-approximation for fuzzy K-means, improving previously presented results. We further show that our coresets can be maintained in an insertion-only streaming setting, where data points arrive one-by-one.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.46/LIPIcs.ISAAC.2018.46.pdf
clustering
fuzzy k-means
coresets
approximation algorithms
streaming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
47:1
47:13
10.4230/LIPIcs.ISAAC.2018.47
article
Streaming Algorithms for Planar Convex Hulls
Farach-Colton, Martín
1
Li, Meng
1
Tsai, Meng-Tsung
2
Department of Computer Science, Rutgers University, Piscataway, USA
Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
Many classical algorithms are known for computing the convex hull of a set of n points in R^2 using O(n) space. For large point sets, whose size exceeds the size of the working space, these algorithms cannot be directly used. The current best streaming algorithm for computing the convex hull is computationally expensive, because it needs to solve a set of linear programs.
In this paper, we propose simpler and faster streaming and W-stream algorithms for computing the convex hull. Our streaming algorithm has small pass complexity, roughly the square root of the current best bound, and it is simpler in the sense that it mainly relies on computing the convex hulls of smaller point sets. Our W-stream algorithms, one deterministic and one randomized, have a nearly-optimal tradeoff between pass complexity and space usage, as we establish by a new unconditional lower bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.47/LIPIcs.ISAAC.2018.47.pdf
Convex Hulls
Streaming Algorithms
Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
48:1
48:13
10.4230/LIPIcs.ISAAC.2018.48
article
Deterministic Treasure Hunt in the Plane with Angular Hints
Bouchard, Sébastien
1
Dieudonné, Yoann
2
Pelc, Andrzej
3
Petit, Franck
1
Sorbonne Université, CNRS, INRIA, LIP6, F-75005 Paris, France
Laboratoire MIS, Université de Picardie Jules Verne, Amiens, France
Département d'informatique, Université du Québec en Outaouais, Gatineau, Canada
A mobile agent equipped with a compass and a measure of length has to find an inert treasure in the Euclidean plane. Both the agent and the treasure are modeled as points. In the beginning, the agent is at a distance at most D>0 from the treasure, but knows neither the distance nor any bound on it. Finding the treasure means getting at distance at most 1 from it. The agent makes a series of moves. Each of them consists of moving straight in a chosen direction for a chosen distance. In the beginning and after each move the agent gets a hint consisting of a positive angle smaller than 2 pi whose vertex is at the current position of the agent and within which the treasure is contained. We investigate the problem of how these hints permit the agent to lower the cost of finding the treasure, using a deterministic algorithm, where the cost is the worst-case total length of the agent's trajectory. It is well known that without any hint the optimal (worst case) cost is Theta(D^2). We show that if all angles given as hints are at most pi, then the cost can be lowered to O(D), which is optimal. If all angles are at most beta, where beta<2 pi is a constant unknown to the agent, then the cost is at most O(D^{2-epsilon}), for some epsilon>0. For both these positive results we present deterministic algorithms achieving the above costs. Finally, if angles given as hints can be arbitrary, smaller than 2 pi, then we show that cost Theta(D^2) cannot be beaten.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.48/LIPIcs.ISAAC.2018.48.pdf
treasure hunt
deterministic algorithm
mobile agent
hint
plane
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
49:1
49:12
10.4230/LIPIcs.ISAAC.2018.49
article
Competitive Searching for a Line on a Line Arrangement
Bouts, Quirijn
1
Castermans, Thom
2
van Goethem, Arthur
2
van Kreveld, Marc
3
Meulemans, Wouter
2
ASML Veldhoven, the Netherlands
TU Eindhoven, the Netherlands
Utrecht University, the Netherlands
We discuss the problem of searching for an unknown line on a known or unknown line arrangement by a searcher S, and show that a search strategy exists that finds the line competitively, that is, with detour factor at most a constant when compared to the situation where S has all knowledge. In the case where S knows all lines but not which one is sought, the strategy is 79-competitive. We also show that it may be necessary to travel on Omega(n) lines to realize a constant competitive ratio. In the case where initially, S does not know any line, but learns about the ones it encounters during the search, we give a 414.2-competitive search strategy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.49/LIPIcs.ISAAC.2018.49.pdf
Competitive searching
line arrangement
detour factor
search strategy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
50:1
50:12
10.4230/LIPIcs.ISAAC.2018.50
article
Stabbing Pairwise Intersecting Disks by Five Points
Har-Peled, Sariel
1
https://orcid.org/0000-0003-2638-9635
Kaplan, Haim
2
Mulzer, Wolfgang
3
https://orcid.org/0000-0002-1948-5840
Roditty, Liam
4
Seiferth, Paul
3
Sharir, Micha
2
Willert, Max
3
Department of Computer Science, University of Illinois, Urbana, IL 61801, USA
School of Computer Science, Tel Aviv University, Tel Aviv 69978, Israel
Institut für Informatik, Freie Universität Berlin, 14195 Berlin, Germany
Department of Computer Science, Bar Ilan University, Ramat Gan 5290002, Israel
Suppose we are given a set D of n pairwise intersecting disks in the plane. A planar point set P stabs D if and only if each disk in D contains at least one point from P. We present a deterministic algorithm that takes O(n) time to find five points that stab D. Furthermore, we give a simple example of 13 pairwise intersecting disks that cannot be stabbed by three points.
This provides a simple - albeit slightly weaker - algorithmic version of a classical result by Danzer that such a set D can always be stabbed by four points.
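The stabbing condition itself is straightforward to verify; a minimal sketch (hypothetical helper, assuming closed disks given as center and radius):

```python
def stabs(points, disks):
    # points stabs disks iff every disk (cx, cy, r) contains at least
    # one point (x, y); containment includes the boundary circle.
    return all(
        any((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for (x, y) in points)
        for (cx, cy, r) in disks
    )
```

Such a checker is useful, e.g., for validating a candidate five-point stabbing set produced by the O(n)-time algorithm on concrete inputs.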
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.50/LIPIcs.ISAAC.2018.50.pdf
Disk graph
piercing set
LP-type problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
51:1
51:12
10.4230/LIPIcs.ISAAC.2018.51
article
Point Location in Incremental Planar Subdivisions
Oh, Eunjin
1
Max Planck Institute for Informatics, Saarbrücken, Germany
We study the point location problem in incremental (possibly disconnected) planar subdivisions, that is, dynamic subdivisions allowing insertions of edges and vertices only. Specifically, we present an O(n log n)-space data structure for this problem that supports queries in O(log^2 n) time and updates in O(log n log log n) amortized time. This is the first result that achieves polylogarithmic query and update times simultaneously in incremental planar subdivisions. Its update time is significantly faster than the update time of the best known data structure for fully-dynamic (possibly disconnected) planar subdivisions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.51/LIPIcs.ISAAC.2018.51.pdf
Dynamic point location
general incremental planar subdivisions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
52:1
52:12
10.4230/LIPIcs.ISAAC.2018.52
article
Convex Partial Transversals of Planar Regions
Keikha, Vahideh
1
van de Kerkhof, Mees
2
van Kreveld, Marc
2
Kostitsyna, Irina
3
Löffler, Maarten
2
Staals, Frank
2
Urhausen, Jérôme
2
Vermeulen, Jordi L.
2
Wiratma, Lionov
4
Dept. of Mathematics and Computer Science, University of Sistan and Baluchestan, Zahedan, Iran
Dept. of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
Dept. of Mathematics and Computer Science, TU Eindhoven, Eindhoven, The Netherlands
Dept. of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands, Dept. of Informatics, Parahyangan Catholic University, Bandung, Indonesia
We consider the problem of testing, for a given set of planar regions R and an integer k, whether there exists a convex shape whose boundary intersects at least k regions of R. We provide polynomial-time algorithms for the case where the regions are disjoint axis-aligned rectangles or disjoint line segments with a constant number of orientations. On the other hand, we show that the problem is NP-hard when the regions are intersecting axis-aligned rectangles or 3-oriented line segments. For several natural intermediate classes of shapes (arbitrary disjoint segments, intersecting 2-oriented segments) the problem remains open.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.52/LIPIcs.ISAAC.2018.52.pdf
computational geometry
algorithms
NP-hardness
convex transversals
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
53:1
53:13
10.4230/LIPIcs.ISAAC.2018.53
article
Extending the Centerpoint Theorem to Multiple Points
Pilz, Alexander
1
https://orcid.org/0000-0002-6059-1821
Schnider, Patrick
2
Institute of Software Technology, Graz University of Technology, Austria
Department of Computer Science, ETH Zurich, Switzerland
The centerpoint theorem is a well-known and widely used result in discrete geometry. It states that for any point set P of n points in R^d, there is a point c, not necessarily from P, such that each halfspace containing c contains at least n/(d+1) points of P. Such a point c is called a centerpoint, and it can be viewed as a generalization of a median to higher dimensions. In other words, a centerpoint can be interpreted as a good representative for the point set P. But what if we allow more than one representative? For example in one-dimensional data sets, often certain quantiles are chosen as representatives instead of the median.
We present a possible extension of the concept of quantiles to higher dimensions. The idea is to find a set Q of (few) points such that every halfspace that contains one point of Q contains a large fraction of the points of P and every halfspace that contains more of Q contains an even larger fraction of P. This setting is comparable to the well-studied concepts of weak epsilon-nets and weak epsilon-approximations, where it is stronger than the former but weaker than the latter. We show that for any point set of size n in R^d and for any positive alpha_1,...,alpha_k where alpha_1 <= alpha_2 <= ... <= alpha_k and for every i,j with i+j <= k+1 we have (d-1)alpha_k + alpha_i + alpha_j <= 1, we can find Q of size k such that each halfspace containing j points of Q contains at least alpha_j n points of P. For two-dimensional point sets we further show that for every alpha and beta with alpha <= beta and alpha+beta <= 2/3 we can find Q with |Q|=3 such that each halfplane containing one point of Q contains at least alpha n of the points of P and each halfplane containing all of Q contains at least beta n points of P. All these results generalize to the setting where P is any mass distribution. For the case where P is a point set in R^2 and |Q|=2, we provide algorithms to find such points in time O(n log^3 n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.53/LIPIcs.ISAAC.2018.53.pdf
centerpoint
point sets
Tukey depth
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
54:1
54:12
10.4230/LIPIcs.ISAAC.2018.54
article
Approximate Query Processing over Static Sets and Sliding Windows
Ben Basat, Ran
1
Jo, Seungbum
2
https://orcid.org/0000-0002-8644-3691
Satti, Srinivasa Rao
3
https://orcid.org/0000-0003-0636-9880
Ugare, Shubham
4
Harvard University, Cambridge, USA
University of Siegen, Germany
Seoul National University, South Korea
IIT Guwahati, Guwahati, India
Indexing of static and dynamic sets is fundamental to a large set of applications such as information retrieval and caching. Denoting the characteristic vector of the set by B, we consider the problem of encoding sets and multisets to support approximate versions of the rank(i) (i.e., computing sum_{j <= i} B[j]) and select(i) (i.e., finding min{p | rank(p) >= i}) queries. We study multiple types of approximations (allowing an error in the query or the result) and present lower bounds and succinct data structures for several variants of the problem. We also extend our model to sliding windows, in which we process a stream of elements and compute suffix sums. This is a generalization of the window summation problem that allows the user to specify the window size at query time. Here, we provide an algorithm that supports updates and queries in constant time while requiring just a (1+o(1)) factor more space than the fixed-window summation algorithms.
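For reference, exact (non-succinct, non-approximate) versions of the two queries as defined above can be sketched as:

```python
def rank(B, i):
    # rank(i) = sum_{j <= i} B[j], i.e. the number of ones in B[0..i].
    return sum(B[:i + 1])

def select(B, i):
    # select(i) = min{p | rank(p) >= i}, defined for i >= 1.
    ones = 0
    for p, bit in enumerate(B):
        ones += bit
        if ones >= i:
            return p
    raise ValueError("B contains fewer than i ones")
```

These linear-space, linear-time baselines are what the succinct data structures in the paper approximate while using far less space.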
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.54/LIPIcs.ISAAC.2018.54.pdf
Streaming
Algorithms
Sliding window
Lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
55:1
55:26
10.4230/LIPIcs.ISAAC.2018.55
article
Multi-Finger Binary Search Trees
Chalermsook, Parinya
1
Goswami, Mayank
2
Kozma, László
3
Mehlhorn, Kurt
4
Saranurak, Thatchaphol
5
Aalto University, Finland
Queens College, City University of New York, USA
TU Eindhoven, The Netherlands
MPI für Informatik, Saarbrücken, Germany
KTH Royal Institute of Technology, Sweden
We study multi-finger binary search trees (BSTs), a far-reaching extension of the classical BST model, with connections to the well-studied k-server problem. Finger search is a popular technique for speeding up BST operations when a query sequence has locality of reference. BSTs with multiple fingers can exploit more general regularities in the input. In this paper we consider the cost of serving a sequence of queries in an optimal (offline) BST with k fingers, a powerful benchmark against which other algorithms can be measured.
We show that the k-finger optimum can be matched by a standard dynamic BST (having a single root-finger) with an O(log{k}) factor overhead. This result is tight for all k, improving the O(k) factor implicit in earlier work. Furthermore, we describe new online BSTs that match this bound up to a (log{k})^{O(1)} factor. Previously only the "one-finger" special case was known to hold for an online BST (Iacono, Langerman, 2016; Cole et al., 2000). Splay trees, assuming their conjectured optimality (Sleator and Tarjan, 1983), would have to match our bounds for all k.
Our online algorithms are randomized and combine techniques developed for the k-server problem with a multiplicative-weights scheme for learning tree metrics. To our knowledge, this is the first time when tools developed for the k-server problem are used in BSTs. As an application of our k-finger results, we show that BSTs can efficiently serve queries that are close to some recently accessed item. This is a (restricted) form of the unified property (Iacono, 2001) that was previously not known to hold for any BST algorithm, online or offline.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.55/LIPIcs.ISAAC.2018.55.pdf
binary search trees
dynamic optimality
finger search
k-server
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
56:1
56:12
10.4230/LIPIcs.ISAAC.2018.56
article
On Counting Oracles for Path Problems
Bezáková, Ivona
1
Searns, Andrew
2
Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA
Rochester Institute of Technology, Rochester, NY, USA
We initiate the study of counting oracles for various path problems in graphs. Distance oracles have gained a lot of attention in recent years, with studies of the underlying space and time tradeoffs. For a given graph G, a distance oracle is a data structure which can be used to answer distance queries for pairs of vertices s,t in V(G). In this work, we extend the setting to answering counting queries: for a pair of vertices s,t, the oracle needs to provide the number of (shortest or all) paths from s to t. We present oracles with O(n^{1.5}) preprocessing time, O(n^{1.5}) space, and O(sqrt{n}) query time for counting shortest paths in planar graphs and for counting all paths in planar directed acyclic graphs. We extend our results to other graphs which admit small balanced separators and present applications where our oracle improves the currently best known running times.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.56/LIPIcs.ISAAC.2018.56.pdf
Counting oracle
Path problems
Shortest paths
Separators
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
57:1
57:13
10.4230/LIPIcs.ISAAC.2018.57
article
Reconstructing Phylogenetic Tree From Multipartite Quartet System
Hirai, Hiroshi
1
Iwamasa, Yuni
1
Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Japan
A phylogenetic tree is a graphical representation of the evolutionary history of a set of taxa, in which the leaves correspond to taxa and the non-leaves correspond to speciations. One of the important problems in phylogenetic analysis is to assemble a global phylogenetic tree from smaller pieces of phylogenetic trees, particularly quartet trees. Quartet Compatibility is the problem of deciding whether there is a phylogenetic tree inducing a given collection of quartet trees, and of constructing such a phylogenetic tree if it exists. It is known that Quartet Compatibility is NP-hard, but only a few polynomial-time solvable subclasses are known.
In this paper, we introduce two novel classes of quartet systems, called complete multipartite quartet systems and full multipartite quartet systems, and present polynomial-time algorithms for Quartet Compatibility for these systems. We also observe that complete/full multipartite quartet systems naturally arise from a limited situation of block-restricted measurement.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.57/LIPIcs.ISAAC.2018.57.pdf
phylogenetic tree
quartet system
reconstruction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
58:1
58:13
10.4230/LIPIcs.ISAAC.2018.58
article
Rectilinear Link Diameter and Radius in a Rectilinear Polygonal Domain
Arseneva, Elena
1
https://orcid.org/0000-0002-5267-4512
Chiu, Man-Kwun
2
Korman, Matias
3
Markovic, Aleksandar
4
Okamoto, Yoshio
5
https://orcid.org/0000-0002-9826-7074
Ooms, Aurélien
6
https://orcid.org/0000-0002-5733-1383
van Renssen, André
7
https://orcid.org/0000-0002-9294-9947
Roeloffzen, Marcel
4
St. Petersburg State University, St. Petersburg, Russia
Institut für Informatik, Freie Universität Berlin, Berlin, Germany
Tufts University, Boston, USA
TU Eindhoven, Eindhoven, The Netherlands
University of Electro-Communications, Tokyo, Japan, RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Université libre de Bruxelles (ULB), Brussels, Belgium
University of Sydney, Sydney, Australia
We study the computation of the diameter and radius under the rectilinear link distance within a rectilinear polygonal domain of n vertices and h holes. We introduce a graph of oriented distances to encode the distance between pairs of points of the domain. This helps us transform the problem so that we can search through the candidates more efficiently. Our algorithm computes both the diameter and the radius in O(min(n^omega, n^2 + nh log h + chi^2)) time, where omega<2.373 denotes the matrix multiplication exponent and chi in Omega(n) cap O(n^2) is the number of edges of the graph of oriented distances. We also provide an alternative algorithm for computing the diameter that runs in O(n^2 log n) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.58/LIPIcs.ISAAC.2018.58.pdf
Rectilinear link distance
polygonal domain
diameter
radius
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
59:1
59:12
10.4230/LIPIcs.ISAAC.2018.59
article
Minimizing Distance-to-Sight in Polygonal Domains
Oh, Eunjin
1
Max Planck Institute for Informatics Saarbrücken, Germany
In this paper, we consider the quickest pair-visibility problem in polygonal domains. Given two points in a polygonal domain with h holes of total complexity n, we want to minimize the maximum distance that the two points travel in order to see each other in the polygonal domain. We present an O(n log^2 n+h^2 log^4 h)-time algorithm for this problem. We show that this running time is almost optimal unless the 3SUM problem can be solved in O(n^{2-epsilon}) time for some epsilon>0.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.59/LIPIcs.ISAAC.2018.59.pdf
Visibility in polygonal domains
shortest path in polygonal domains
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
60:1
60:9
10.4230/LIPIcs.ISAAC.2018.60
article
Partially Walking a Polygon
Aurenhammer, Franz
1
Steinkogler, Michael
1
Klein, Rolf
2
Institute for Theoretical Computer Science, University of Technology, Graz, Austria
Universität Bonn, Institut für Informatik, Bonn, Germany
Deciding two-guard walkability of an n-sided polygon is a well-understood problem. We study the following more general question: How far can two guards reach from a given source vertex while staying mutually visible, in the (more realistic) case that the polygon is not entirely walkable? There can be Theta(n) such maximal walks, and we show how to find all of them in O(n log n) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.60/LIPIcs.ISAAC.2018.60.pdf
Polygon
guard walk
visibility
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
61:1
61:13
10.4230/LIPIcs.ISAAC.2018.61
article
Stabbing Rectangles by Line Segments - How Decomposition Reduces the Shallow-Cell Complexity
Chan, Timothy M.
1
van Dijk, Thomas C.
2
https://orcid.org/0000-0001-6553-7317
Fleszar, Krzysztof
3
https://orcid.org/0000-0002-1129-3289
Spoerhase, Joachim
4
https://orcid.org/0000-0002-2601-6452
Wolff, Alexander
2
https://orcid.org/0000-0001-5872-718X
University of Illinois at Urbana-Champaign, U.S.A.
Universität Würzburg, Germany
Max-Planck-Institut für Informatik, Saarbrücken, Germany
Aalto University, Espoo, Finland, Universität Würzburg, Germany
We initiate the study of the following natural geometric optimization problem. The input is a set of axis-aligned rectangles in the plane. The objective is to find a set of horizontal line segments of minimum total length so that every rectangle is stabbed by some line segment. A line segment stabs a rectangle if it intersects its left and its right boundary. The problem, which we call Stabbing, can be motivated by a resource allocation problem and has applications in geometric network design. To the best of our knowledge, only special cases of this problem have been considered so far.
Stabbing is a weighted geometric set cover problem, which we show to be NP-hard. While for general set cover the best possible approximation ratio is Theta(log n), obtaining better ratios for geometric set cover problems is an important direction in geometric approximation algorithms. Chan et al. [SODA'12] generalized earlier results by Varadarajan [STOC'10] to obtain sub-logarithmic approximation ratios for a broad class of weighted geometric set cover instances that are characterized by having low shallow-cell complexity. The shallow-cell complexity of Stabbing instances, however, can be high, so a direct application of the framework of Chan et al. gives only logarithmic bounds. We still achieve a constant-factor approximation by decomposing general instances into what we call laminar instances, which have low enough complexity.
Our decomposition technique yields constant-factor approximations also for the variant where rectangles can be stabbed by horizontal and vertical segments and for two further geometric set cover problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.61/LIPIcs.ISAAC.2018.61.pdf
Geometric optimization
NP-hard
approximation
shallow-cell complexity
line stabbing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
62:1
62:12
10.4230/LIPIcs.ISAAC.2018.62
article
Impatient Online Matching
Liu, Xingwu
1
Pan, Zhida
2
Wang, Yuyi
3
Wattenhofer, Roger
3
SKL Computer Architecture, ICT, CAS , University of Chinese Academy of Sciences, Beijing, China
Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
ETH Zurich, Switzerland
We investigate the problem of Min-cost Perfect Matching with Delays (MPMD), in which requests are pairwise matched in an online fashion with the objective of minimizing the sum of space cost and time cost. Though linear-MPMD (i.e., the time cost is linear in the delay) has been thoroughly studied in the literature, it does not model well the impatient requests that are common in practice. Thus, we propose convex-MPMD, where time cost functions are convex, capturing situations where the time cost increases faster and faster. Since the existing algorithms for linear-MPMD are no longer competitive, we devise a new deterministic algorithm for convex-MPMD problems. For a large class of convex time cost functions, our algorithm achieves a competitive ratio of O(k) on any k-point uniform metric space. Moreover, our deterministic algorithm is asymptotically optimal, which uncovers a substantial difference between convex-MPMD and linear-MPMD, as the latter allows a deterministic algorithm with constant competitive ratio on any uniform metric space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.62/LIPIcs.ISAAC.2018.62.pdf
online algorithm
online matching
convex function
competitive analysis
lower bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
63:1
63:12
10.4230/LIPIcs.ISAAC.2018.63
article
Extensions of Self-Improving Sorters
Cheng, Siu-Wing
1
https://orcid.org/0000-0002-3557-9935
Yan, Lie
2
HKUST, Hong Kong, China
Hangzhou, China
Ailon et al. (SICOMP 2011) proposed a self-improving sorter that tunes its performance to the unknown input distribution in a training phase. The distribution of the input numbers x_1,x_2,...,x_n must be of the product type, that is, each x_i is drawn independently from an arbitrary distribution D_i, and the D_i's are independent of each other. We study two extensions that relax this requirement. The first extension models hidden classes in the input. We consider the case that numbers in the same class are governed by linear functions of the same hidden random parameter. The second extension considers a hidden mixture of product distributions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.63/LIPIcs.ISAAC.2018.63.pdf
sorting
self-improving algorithms
entropy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
64:1
64:13
10.4230/LIPIcs.ISAAC.2018.64
article
Online Scheduling of Car-Sharing Requests Between Two Locations with Many Cars and Flexible Advance Bookings
Luo, Kelin
1
https://orcid.org/0000-0003-2006-0601
Erlebach, Thomas
2
https://orcid.org/0000-0002-4470-5868
Xu, Yinfeng
1
School of Management, Xi'an Jiaotong University, Xi'an, China
Department of Informatics, University of Leicester, Leicester, United Kingdom
We study an on-line scheduling problem that is motivated by applications such as car-sharing, in which users submit ride requests, and the scheduler aims to accept requests of maximum total profit using k servers (cars). Each ride request specifies the pick-up time and the pick-up location (among two locations, with the other location being the destination). The scheduler has to decide whether or not to accept a request immediately at the time when the request is submitted (booking time). We consider two variants of the problem with respect to constraints on the booking time: In the fixed booking time variant, a request must be submitted a fixed amount of time before the pick-up time. In the variable booking time variant, a request can be submitted at any time during a certain time interval (called the booking horizon) that precedes the pick-up time. We present lower bounds on the competitive ratio for both variants and propose a balanced greedy algorithm (BGA) that achieves the best possible competitive ratio. We prove that, for the fixed booking time variant, BGA is 1.5-competitive if k=3i (i in N) and the fixed booking length is not less than the travel time between the two locations; for the variable booking time variant, BGA is 1.5-competitive if k=3i (i in N) and the length of the booking horizon is less than the travel time between the two locations, and BGA is 5/3-competitive if k=5i (i in N) and the length of the booking horizon is not less than the travel time between the two locations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.64/LIPIcs.ISAAC.2018.64.pdf
Car-sharing system
Competitive analysis
On-line scheduling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
65:1
65:12
10.4230/LIPIcs.ISAAC.2018.65
article
Packing Returning Secretaries
Hoefer, Martin
1
Wilhelmi, Lisa
1
Goethe University Frankfurt/Main, Germany
We study online secretary problems with returns in combinatorial packing domains with n candidates that arrive sequentially over time in random order. The goal is to accept a feasible packing of candidates of maximum total value. In the first variant, each candidate arrives exactly twice. All 2n arrivals occur in random order. We propose a simple 0.5-competitive algorithm that can be combined with arbitrary approximation algorithms for the packing domain, even when the total value of candidates is a subadditive function. For bipartite matching, we obtain an algorithm with competitive ratio at least 0.5721 - o(1) for growing n, and an algorithm with ratio at least 0.5459 for all n >= 1. We extend all algorithms and ratios to k >= 2 arrivals per candidate.
In the second variant, there is a pool of undecided candidates. In each round, a random candidate from the pool arrives. Upon arrival a candidate can be either decided (accept/reject) or postponed (returned into the pool). We mainly focus on minimizing the expected number of postponements when computing an optimal solution. An expected number of Theta(n log n) is always sufficient. For matroids, we show that the expected number can be reduced to O(r log(n/r)), where r <= n/2 is the minimum of the ranks of the matroid and the dual matroid. For bipartite matching, we show a bound of O(r log n), where r is the size of the optimum matching. For general packing, we show a lower bound of Omega(n log log n), even when the size of the optimum is r = Theta(log n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.65/LIPIcs.ISAAC.2018.65.pdf
Secretary Problem
Coupon Collector Problem
Matroids
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
66:1
66:12
10.4230/LIPIcs.ISAAC.2018.66
article
Simple 2^f-Color Choice Dictionaries
Kammer, Frank
1
Sajenko, Andrej
1
THM, University of Applied Sciences Mittelhessen, Germany
A c-color choice dictionary of size n in N is a fundamental data structure in the development of space-efficient algorithms. It stores the colors of n elements and supports operations to get and change the color of an element, as well as an operation choice that returns an arbitrary element of a given color. For an integer f>0 and a constant c=2^f, we present a word-RAM algorithm for a c-color choice dictionary of size n that supports all operations above in constant time and uses only nf+1 bits, which is optimal if all operations have to run in o(n/w) time where w is the word size.
In addition, we extend our choice dictionary by an operation union without using more space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.66/LIPIcs.ISAAC.2018.66.pdf
space efficient
succinct
word RAM
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
67:1
67:12
10.4230/LIPIcs.ISAAC.2018.67
article
Succinct Data Structures for Chordal Graphs
Munro, J. Ian
1
https://orcid.org/0000-0002-7165-7988
Wu, Kaiyu
1
https://orcid.org/0000-0001-7562-1336
Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada
We study the problem of approximate shortest path queries in chordal graphs and give an (n log n + o(n log n))-bit data structure that answers approximate distance queries within an additive constant of 1 in O(1) time.
We study the problem of succinctly storing a static chordal graph to answer adjacency, degree, neighbourhood and shortest path queries. Let G be a chordal graph with n vertices. We design a data structure using the information-theoretically minimal n^2/4 + o(n^2) bits of space to support the following queries:
- whether two vertices u,v are adjacent, in time f(n) for any f(n) in omega(1);
- the degree of a vertex, in O(1) time;
- the vertices adjacent to u, in (f(n))^2 time per neighbour;
- the length of the shortest path from u to v, in O(n f(n)) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.67/LIPIcs.ISAAC.2018.67.pdf
Succinct Data Structure
Chordal Graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
68:1
68:12
10.4230/LIPIcs.ISAAC.2018.68
article
Tree Path Majority Data Structures
Gagie, Travis
1
He, Meng
2
Navarro, Gonzalo
3
CeBiB - Center for Biotechnology and Bioengineering, Chile, School of Computer Science and Telecommunications, Diego Portales University, Chile
Faculty of Computer Science, Dalhousie University, Canada
CeBiB - Center for Biotechnology and Bioengineering, Chile, IMFD - Millennium Institute for Foundational Research on Data, Chile, Dept. of Computer Science, University of Chile, Chile
We present the first solution to tau-majorities on tree paths. Given a tree of n nodes, each with a label from [1..sigma], and a fixed threshold 0<tau<1, such a query specifies two nodes u and v and asks for all the labels that appear more than tau * |P_{uv}| times in the path P_{uv} from u to v, where |P_{uv}| denotes the number of nodes in P_{uv}. Note that the answer to any query is of size up to 1/tau. On a w-bit RAM, we obtain a linear-space data structure with O((1/tau) lg^* n lg lg_w sigma) query time. For any kappa > 1, we can also build a structure that uses O(n lg^{[kappa]} n) space, where lg^{[kappa]} n denotes the function that applies logarithm kappa times to n, and answers queries in time O((1/tau) lg lg_w sigma). The construction time of both structures is O(n lg n). We also describe two succinct-space solutions with the same query time as the linear-space structure. One uses 2nH + 4n + o(n)(H+1) bits, where H <= lg sigma is the entropy of the label distribution, and can be built in O(n lg n) time. The other uses nH + O(n) + o(nH) bits and is built in O(n lg n) time w.h.p.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.68/LIPIcs.ISAAC.2018.68.pdf
Majorities on Trees
Succinct data structures
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
69:1
69:13
10.4230/LIPIcs.ISAAC.2018.69
article
Encoding Two-Dimensional Range Top-k Queries Revisited
Jo, Seungbum
1
https://orcid.org/0000-0002-8644-3691
Satti, Srinivasa Rao
2
https://orcid.org/0000-0003-0636-9880
University of Siegen, Germany
Seoul National University, South Korea
We consider the problem of encoding two-dimensional arrays, whose elements come from a total order, for answering Top-k queries. The aim is to obtain encodings that use space close to the information-theoretic lower bound and that can be constructed efficiently. For 2 x n arrays, we first give upper and lower bounds on space for answering sorted and unsorted 3-sided Top-k queries. For m x n arrays, with m <= n and k <= mn, we obtain an (m lg{(k+1)n choose n} + 4nm(m-1) + o(n))-bit encoding for answering sorted 4-sided Top-k queries. This improves the min{O(mn lg{n}), m^2 lg{(k+1)n choose n} + m lg{m} + o(n)}-bit encoding of Jo et al. [CPM, 2016] when m = o(lg{n}). This is a consequence of a new encoding that encodes a 2 x n array to support sorted 4-sided Top-k queries on it using an additional 4n bits, on top of the encodings that support the Top-k queries on the individual rows. This new encoding is a non-trivial generalization of the encoding of Jo et al. [CPM, 2016], which supports sorted 4-sided Top-2 queries using an additional 3n bits. We also give almost optimal space encodings for 3-sided Top-k queries, and show lower bounds on encodings for 3-sided and 4-sided Top-k queries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.69/LIPIcs.ISAAC.2018.69.pdf
Encoding model
top-k query
range minimum query
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
70:1
70:13
10.4230/LIPIcs.ISAAC.2018.70
article
Longest Unbordered Factor in Quasilinear Time
Kociumaka, Tomasz
1
https://orcid.org/0000-0002-2477-1702
Kundu, Ritu
2
https://orcid.org/0000-0003-1353-4004
Mohamed, Manal
2
https://orcid.org/0000-0002-1435-5051
Pissis, Solon P.
2
https://orcid.org/0000-0002-1445-1932
Institute of Informatics, University of Warsaw, Warsaw, Poland
Department of Informatics, King’s College London, London, UK
A border u of a word w is a proper factor of w occurring both as a prefix and as a suffix. The maximal unbordered factor of w is the longest factor of w which does not have a border. Here we present an algorithm that computes the Longest Unbordered Factor Array of w for general alphabets in O(n log n) time with high probability (or in O(n log n log^2 log n) time deterministically), where n is the length of w. This array specifies the length of the maximal unbordered factor starting at each position of w. This is a major improvement on the running time of the previously best worst-case algorithm, which works in O(n^{1.5}) time for integer alphabets [Gawrychowski et al., 2015].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.70/LIPIcs.ISAAC.2018.70.pdf
longest unbordered factor
factorisation
period
border
strings
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
71:1
71:14
10.4230/LIPIcs.ISAAC.2018.71
article
Packing Sporadic Real-Time Tasks on Identical Multiprocessor Systems
Chen, Jian-Jia
1
https://orcid.org/0000-0001-8114-9760
Bansal, Nikhil
2
Chakraborty, Samarjit
3
https://orcid.org/0000-0002-0503-6235
von der Brüggen, Georg
1
https://orcid.org/0000-0002-8137-3612
Department of Computer Science, TU Dortmund University, Germany
Eindhoven University of Technology, The Netherlands
Technical University of Munich (TUM), Germany
In real-time systems, in addition to functional correctness, recurrent tasks must fulfill timing constraints to ensure the correct behavior of the system. Partitioned scheduling is widely used in real-time systems, i.e., the tasks are statically assigned to processors while ensuring that all timing constraints are met. The decision version of the problem, which is to check whether the deadline constraints of the tasks can be satisfied on a given number of identical processors, is known to be NP-complete in the strong sense. Several studies on this problem are based on approximations involving resource augmentation, i.e., speeding up individual processors. This paper studies another type of resource augmentation, namely allocating additional processors, a topic that has not been explored until recently. We provide polynomial-time algorithms and analysis in which the approximation factors depend upon the input instances. Specifically, the factors are related to the maximum ratio of the period to the relative deadline of a task in the given task set. We also show that these algorithms unfortunately cannot achieve a constant approximation factor in general. Furthermore, we prove that the problem does not admit any asymptotic polynomial-time approximation scheme (APTAS) unless P=NP when the task set has constrained deadlines, i.e., the relative deadline of a task is no more than its period.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.71/LIPIcs.ISAAC.2018.71.pdf
multiprocessor partitioned scheduling
approximation factors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
72:1
72:12
10.4230/LIPIcs.ISAAC.2018.72
article
A Relaxed FPTAS for Chance-Constrained Knapsack
Shabtai, Galia
1
Raz, Danny
2
Shavitt, Yuval
1
School of Electrical Engineering, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel
Faculty of Computer Science, The Technion, Haifa 32000, Israel
The stochastic knapsack problem is a stochastic version of the well-known deterministic knapsack problem, in which some of the input values are random variables. There are several variants of the stochastic problem. In this paper we concentrate on the chance-constrained variant, where item values are deterministic and item sizes are stochastic. The goal is to find a maximum-value allocation subject to the constraint that the overflow probability is at most a given value. Previous work showed a PTAS for the problem for various distributions (Poisson, Exponential, Bernoulli and Normal). Some of these algorithms strictly respect the constraint and some relax it by a factor of (1+epsilon). All of them use Omega(n^{1/epsilon}) time. A very recent work showed an "almost FPTAS" algorithm for Bernoulli distributions with O(poly(n) * quasipoly(1/epsilon)) time.
In this paper we present an FPTAS for normal distributions with a solution that satisfies the chance constraint in a relaxed sense. The normal distribution is particularly important because, by the Berry-Esseen theorem, an algorithm solving the normal distribution also solves, under mild conditions, arbitrary independent distributions. To the best of our knowledge, this is the first (relaxed or non-relaxed) FPTAS for the problem. In fact, our algorithm runs in poly(n/epsilon) time. We achieve the FPTAS by a delicate combination of previous techniques plus a new alternative solution for the non-heavy elements that is based on a non-convex program with a simple structure and an O(n^2 log(n/epsilon)) running time. We believe this part is also interesting in its own right.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.72/LIPIcs.ISAAC.2018.72.pdf
Stochastic knapsack
Chance constraint
Approximation algorithms
Combinatorial optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-12-06
123
73:1
73:12
10.4230/LIPIcs.ISAAC.2018.73
article
Covering Clients with Types and Budgets
Fotakis, Dimitris
1
Gourvès, Laurent
2
Mathieu, Claire
3
Srivastav, Abhinav
4
Yahoo Research-New York, USA & National Technical University of Athens, Greece
Université Paris-Dauphine, PSL University, CNRS, LAMSADE, 75016 Paris, France
CNRS, France
ENS Paris & Université Paris-Dauphine, France
In this paper, we consider a variant of the facility location problem. Imagine a scenario where facilities are categorized into multiple types such as schools, hospitals, post offices, etc., and the cost of connecting a client to a facility is given by the distance between them. Each client has a total budget on the distance she/he is willing to travel. The goal is to open the minimum number of facilities such that the aggregate distance of each client to the multiple types is within her/his budget. This problem closely resembles the set cover and r-domination problems. Here, we study this problem in different settings. Specifically, we present some positive and negative results in the general setting, where no assumption is made on the distance values. Then we show that better results can be achieved when clients and facilities lie in a metric space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol123-isaac2018/LIPIcs.ISAAC.2018.73/LIPIcs.ISAAC.2018.73.pdf
Facility Location
Geometric Set Cover
Local Search