eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
0
0
10.4230/LIPIcs.ICALP.2016
article
LIPIcs, Volume 55, ICALP'16, Complete Volume
Chatzigiannakis, Ioannis
Mitzenmacher, Michael
Rabani, Yuval
Sangiorgi, Davide
LIPIcs, Volume 55, ICALP'16, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016/LIPIcs.ICALP.2016.pdf
Theory of Computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
0:i
0:xliv
10.4230/LIPIcs.ICALP.2016.0
article
Front Matter, Table of Contents, Preface, Organization, List of Authors
Chatzigiannakis, Ioannis
Mitzenmacher, Michael
Rabani, Yuval
Sangiorgi, Davide
Front Matter, Table of Contents, Preface, Organization, List of Authors
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.0/LIPIcs.ICALP.2016.0.pdf
Front Matter
Table of Contents
Preface
Organization
List of Authors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
1:1
1:1
10.4230/LIPIcs.ICALP.2016.1
article
Compute Choice (Invited Talk)
Shah, Devavrat
In this talk, we shall discuss the question of learning a distribution over permutations of n choices based on partial observations. This is central to capturing so-called "choice" in a variety of contexts: understanding the preferences of consumers over a collection of products based on purchasing and browsing data in retail and e-commerce, learning public opinion on a collection of socio-economic issues based on sparse polling data, and deciding a ranking of teams or players based on the outcomes of games. The talk will primarily discuss the relationship between the ability to learn, the nature of the partial information, and the number of available observations. Connections to the classical theory of social choice and behavioral psychology, as well as the modern literature in statistics, learning theory, and operations research, will be discussed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.1/LIPIcs.ICALP.2016.1.pdf
Decision Systems
Learning Distributions
Partial observations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
2:1
2:1
10.4230/LIPIcs.ICALP.2016.2
article
Formally Verifying a Compiler: What Does It Mean, Exactly? (Invited Talk)
Leroy, Xavier
Compilers, and especially optimizing compilers, are complicated programs. Bugs in compilers happen, and can lead to miscompilation: the production of wrong executable code from a correct source program. Miscompilation is documented in the literature and a concern for high-assurance software, as it endangers the guarantees obtained by source-level formal verification of programs.
Compiler verification is a radical solution to the miscompilation problem: by applying program proof to the compiler itself, we can obtain mathematically strong guarantees that the generated executable code is faithful to the semantics of the source program. The state of the art in this line of research is arguably the CompCert verified compiler. This talk will give an overview of this optimizing C compiler and of its formal verification, conducted with the Coq proof assistant.
A formal verification is only as good as the specifications it uses. In other words, verification reduces the problem of trusting a large implementation to that of ensuring that its formal specification enforces the intended correctness properties. In the case of CompCert, the correctness statement that is proved is rather complex, as it involves large operational semantics (for the C language and for the assembly languages of the target architectures) and simulations between these semantics that support both choice refinement and behavior refinement. The talk will review and discuss these elements of the specification, along with some of the accompanying proof principles.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.2/LIPIcs.ICALP.2016.2.pdf
Compilers
Compiler Optimization
Compiler Verification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
3:1
3:1
10.4230/LIPIcs.ICALP.2016.3
article
Hardness of Approximation (Invited Talk)
Khot, Subhash
The talk will present connections between approximability of NP-complete problems, analysis, and geometry, and the role played by the Unique Games Conjecture in facilitating these connections.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.3/LIPIcs.ICALP.2016.3.pdf
NP-completeness
Approximation algorithms
Inapproximability
Probabilistically Checkable Proofs
Discrete Fourier analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
4:1
4:18
10.4230/LIPIcs.ICALP.2016.4
article
Model Checking and Strategy Synthesis for Stochastic Games: From Theory to Practice (Invited Talk)
Kwiatkowska, Marta Z.
Probabilistic model checking is an automatic procedure for establishing if a desired property holds in a probabilistic model, aimed at verifying quantitative probabilistic specifications such as the probability of a critical failure occurring or expected time to termination.
Much progress has been made in recent years in algorithms, tools and applications of probabilistic model checking, as exemplified by the probabilistic model checker PRISM (http://www.prismmodelchecker.org). However, the unstoppable rise of autonomous systems, from robotic assistants to self-driving cars, is placing greater and greater demands on quantitative modelling and verification technologies. To address the challenges of autonomy we need to consider collaborative, competitive and adversarial behaviour, which is naturally modelled using game-theoretic abstractions, enhanced with stochasticity arising from randomisation and uncertainty. This paper gives an overview of quantitative verification and strategy synthesis techniques developed for turn-based stochastic multi-player games, summarising recent advances concerning multi-objective properties and compositional strategy synthesis. The techniques have been implemented in the PRISM-games model checker built as an extension of PRISM.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.4/LIPIcs.ICALP.2016.4.pdf
Quantitative verification
Stochastic games
Temporal logic
Model checking
Strategy synthesis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
5:1
5:14
10.4230/LIPIcs.ICALP.2016.5
article
Fine-Grained Complexity Analysis of Two Classic TSP Variants
de Berg, Mark
Buchin, Kevin
Jansen, Bart M. P.
Woeginger, Gerhard
We analyze two classic variants of the Traveling Salesman Problem using the toolkit of fine-grained complexity.
Our first set of results is motivated by the Bitonic TSP problem: given a set of n points in the plane, compute a shortest tour consisting of two monotone chains. It is a classic dynamic-programming exercise to solve this problem in O(n^2) time. While the near-quadratic dependency of similar dynamic programs for Longest Common Subsequence and Discrete Fréchet Distance has recently been proven to be essentially optimal under the Strong Exponential Time Hypothesis, we show that bitonic tours can be found in subquadratic time. More precisely, we present an algorithm that solves Bitonic TSP in O(n*log^2(n)) time and its bottleneck version in O(n*log^3(n)) time. In the more general Pyramidal TSP problem, the points to be visited are labeled 1, ..., n and the sequence of labels in the solution is required to have at most one local maximum. Our algorithms for the bitonic (bottleneck) TSP problem also work for the Pyramidal TSP problem in the plane.
Our second set of results concerns the popular k-opt heuristic for TSP in the graph setting. More precisely, we study the k-opt decision problem, which asks whether a given tour can be improved by a k-opt move that replaces k edges in the tour by k new edges. A simple algorithm solves k-opt in O(n^k) time for fixed k. For 2-opt, this is easily seen to be optimal. For k = 3 we prove that an algorithm with a runtime of the form ~O(n^{3-epsilon}) exists if and only if All-Pairs Shortest Paths in weighted digraphs has such an algorithm. For general k-opt, it is known that a runtime of f(k)*n^{o(k/log(k))} would contradict the Exponential Time Hypothesis. The results for k = 2, 3 may suggest that the actual time complexity of k-opt is Theta(n^k). We show that this is not the case, by presenting an algorithm that finds the best k-move in O(n^{floor(2k/3)+1}) time for fixed k >= 3. This implies that 4-opt can be solved in O(n^3) time, matching the best-known algorithm for 3-opt. Finally, we show how to beat the quadratic barrier for k = 2 in two important settings, namely for points in the plane and when we want to solve 2-opt repeatedly.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.5/LIPIcs.ICALP.2016.5.pdf
Traveling salesman problem
fine-grained complexity
bitonic tours
k-opt
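The O(n^k) baseline for the k-opt decision problem discussed in the abstract above is easy to see for k = 2: try all pairs of tour edges. A minimal sketch, illustrative only (not code from the paper; function and variable names are our own), for points in the plane:

```python
import math

def best_2opt_move(points, tour):
    """Exhaustive 2-opt search over a tour of 2-D points, O(n^2) time.

    A 2-opt move removes tour edges (tour[i], tour[i+1]) and
    (tour[j], tour[j+1]) and reconnects them as (tour[i], tour[j]) and
    (tour[i+1], tour[j+1]), reversing the segment in between.
    Returns the largest length decrease found and the pair (i, j).
    """
    n = len(tour)
    d = lambda a, b: math.dist(points[a], points[b])
    best_gain, best_move = 0.0, None
    for i in range(n - 1):
        # When i = 0, skip j = n-1: that move touches the same two edges.
        for j in range(i + 2, n - (1 if i == 0 else 0)):
            a, b = tour[i], tour[i + 1]
            c, e = tour[j], tour[(j + 1) % n]
            gain = d(a, b) + d(c, e) - d(a, c) - d(b, e)
            if gain > best_gain + 1e-12:
                best_gain, best_move = gain, (i, j)
    return best_gain, best_move
```

On a unit square visited in the crossing order (0,0), (1,0), (0,1), (1,1), the search finds the move that uncrosses the tour, with gain 2*sqrt(2) - 2.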
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
6:1
6:12
10.4230/LIPIcs.ICALP.2016.6
article
Bicovering: Covering Edges With Two Small Subsets of Vertices
Bhangale, Amey
Gandhi, Rajiv
Hajiaghayi, Mohammad Taghi
Khandekar, Rohit
Kortsarz, Guy
We study the following basic problem called Bi-Covering. Given a graph G(V, E), find two (not necessarily disjoint) sets A subseteq V and B subseteq V such that A union B = V and every edge e belongs either to the graph induced by A or to the graph induced by B. The goal is to minimize max{|A|, |B|}. This is the simplest case of the Channel Allocation problem [Gandhi et al., Networks, 2006]. A solution that outputs V, emptyset gives ratio at most 2. We show that under the Strong Unique Games Conjecture of [Bansal-Khot, FOCS, 2009] there is no (2 - epsilon)-ratio algorithm for the problem, for any constant epsilon > 0.
Given a bipartite graph, Max-Bi-Clique is the problem of finding a largest k*k complete bipartite subgraph. For the Max-Bi-Clique problem, a constant-factor hardness was known under the random 3-SAT hypothesis of Feige [Feige, STOC, 2002] and also under the assumption that NP !subseteq intersection_{epsilon > 0} BPTIME(2^{n^{epsilon}}) [Khot, SIAM J. on Comp., 2011]. It was an open problem in [Ambühl et al., SIAM J. on Comp., 2011] to prove inapproximability of Max-Bi-Clique assuming a weaker conjecture. Our result implies a similar hardness result assuming the Strong Unique Games Conjecture.
On the algorithmic side, we also give better-than-2 approximations for Bi-Covering on numerous special graph classes. In particular, we get a 1.876-approximation for chordal graphs, an exact algorithm for interval graphs, 1 + o(1) for minor-free graphs, 2 - 4*delta/3 for graphs with minimum degree delta*n, 2/(1+delta^2/8) for delta-vertex expanders, 8/5 for split graphs, and 2 - (6/5)*(1/d) for graphs with minimum constant degree d. Our algorithmic results are quite non-trivial. In achieving these results, we use various known structural results about these graph classes, combined with techniques that we develop tailored to obtaining better-than-2 approximations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.6/LIPIcs.ICALP.2016.6.pdf
Bi-covering
Unique Games
Max Bi-clique
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
7:1
7:14
10.4230/LIPIcs.ICALP.2016.7
article
Constant Congestion Routing of Symmetric Demands in Planar Directed Graphs
Chekuri, Chandra
Ene, Alina
Pilipczuk, Marcin
We study the problem of routing symmetric demand pairs in planar digraphs. The input consists of a directed planar graph G = (V, E) and a collection of k source-destination pairs M = {s_1t_1, ..., s_kt_k}. The goal is to maximize the number of pairs that are routed along disjoint paths. A pair s_it_i is routed in the symmetric setting if there is a directed path connecting s_i to t_i and a directed path connecting t_i to s_i. In this paper we obtain a randomized poly-logarithmic approximation with constant congestion for this problem in planar digraphs. The main technical contribution is to show that a planar digraph with directed treewidth h contains a constant congestion crossbar of size Omega(h/polylog(h)).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.7/LIPIcs.ICALP.2016.7.pdf
Disjoint paths
symmetric demands
planar directed graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
8:1
8:13
10.4230/LIPIcs.ICALP.2016.8
article
Quasi-4-Connected Components
Grohe, Martin
We introduce a new decomposition of graphs into quasi-4-connected components, where we call a graph quasi-4-connected if it is 3-connected and its only separations of order 3 separate a single vertex from the rest of the graph. Moreover, we give a cubic-time algorithm computing the decomposition of a given graph.
Our decomposition into quasi-4-connected components refines the well-known decompositions of graphs into biconnected and triconnected components. We relate our decomposition to Robertson and Seymour's theory of tangles by establishing a correspondence between the quasi-4-connected components of a graph and its tangles of order 4.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.8/LIPIcs.ICALP.2016.8.pdf
decompositions
connectivity
tangles
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
9:1
9:14
10.4230/LIPIcs.ICALP.2016.9
article
Subexponential Time Algorithms for Embedding H-Minor Free Graphs
Bodlaender, Hans L.
Nederlof, Jesper
van der Zanden, Tom C.
We establish the complexity of several graph embedding problems: Subgraph Isomorphism, Graph Minor, Induced Subgraph and Induced Minor, when restricted to H-minor free graphs. In each of these problems, we are given a pattern graph P and a host graph G, and want to determine whether P is a subgraph (minor, induced subgraph or induced minor) of G. We show that, for any fixed graph H and epsilon > 0, if P is H-minor free and G has treewidth tw, (Induced) Subgraph can be solved in 2^{O(k^{epsilon}*tw+k/log(k))}*n^{O(1)} time and (Induced) Minor can be solved in 2^{O(k^{epsilon}*tw+tw*log(tw)+k/log(k))}*n^{O(1)} time, where k = |V(P)|.
We also show that this is optimal, in the sense that the existence of an algorithm for one of these problems running in 2^{o(n/log(n))} time would contradict the Exponential Time Hypothesis. This solves an open problem on the complexity of Subgraph Isomorphism for planar graphs.
The key algorithmic insight is that dynamic programming approaches can be sped up by identifying isomorphic connected components in the pattern graph. This technique seems widely applicable, and it appears that there is a relatively unexplored class of problems that share a similar upper and lower bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.9/LIPIcs.ICALP.2016.9.pdf
subgraph isomorphism
graph minors
subexponential time
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
10:1
10:13
10.4230/LIPIcs.ICALP.2016.10
article
Relating Graph Thickness to Planar Layers and Bend Complexity
Durocher, Stephane
Mondal, Debajyoti
The thickness of a graph G = (V, E) with n vertices is the minimum number of planar subgraphs of G whose union is G. A polyline drawing of G in R^2 is a drawing Gamma of G, where each vertex is mapped to a point and each edge is mapped to a polygonal chain. Bend and layer complexities are two important aesthetics of such a drawing. The bend complexity of Gamma is the maximum number of bends per edge in Gamma, and the layer complexity of Gamma is the minimum integer r such that the set of polygonal chains in Gamma can be partitioned into r disjoint sets, where each set corresponds to a planar polyline drawing. Let G be a graph of thickness t. By Fáry’s theorem, if t = 1, then G can be drawn on a single layer with bend complexity 0. A few extensions to higher thickness are known, e.g., if t = 2 (resp., t > 2), then G can be drawn on t layers with bend complexity 2 (resp., 3n + O(1)).
In this paper we present an elegant extension of Fáry's theorem to draw graphs of thickness t > 2. We first prove that thickness-t graphs can be drawn on t layers with 2.25n + O(1) bends per edge. We then develop another technique to draw thickness-t graphs on t layers with reduced bend complexity for small values of t, e.g., for t in {3, 4}, the bend complexity decreases to O(sqrt(n)).
Previously, the bend complexity was not known to be sublinear for t > 2. Finally, we show that graphs with linear arboricity k can be drawn on k layers with bend complexity 3*(k-1)*n/(4k-2).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.10/LIPIcs.ICALP.2016.10.pdf
Graph Drawing
Thickness
Geometric Thickness
Layers
Bends
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
11:1
11:14
10.4230/LIPIcs.ICALP.2016.11
article
Optimal Approximate Matrix Product in Terms of Stable Rank
Cohen, Michael B.
Nelson, Jelani
Woodruff, David P.
We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having m = O(~r/epsilon^2) rows. Here ~r is the maximum stable rank, i.e., the squared ratio of Frobenius and operator norms, of the two matrices being multiplied. This is a quantitative improvement over previous work of [Magen and Zouzias, SODA, 2011] and [Kyrillidis et al., arXiv, 2014] and is also optimal for any oblivious dimensionality-reducing map. Furthermore, due to the black box reliance on the subspace embedding property in our proofs, our theorem can be applied to a much more general class of sketching matrices than what was known before, in addition to achieving better bounds. For example, one can apply our theorem to efficient subspace embeddings such as the Subsampled Randomized Hadamard Transform or sparse subspace embeddings, or even with subspace embedding constructions that may be developed in the future.
Our main theorem, via connections with spectral error matrix multiplication proven in previous work, implies quantitative improvements for approximate least squares regression and low rank approximation, and implies faster low rank approximation for popular kernels in machine learning such as the Gaussian and Sobolev kernels. Our main result has also already been applied to improve dimensionality reduction guarantees for k-means clustering, and also implies new results for nonparametric regression.
Lastly, we point out that the proof of the "BSS" deterministic row-sampling result of [Batson et al., SICOMP, 2012] can be modified to obtain deterministic row-sampling for approximate matrix product in terms of the stable rank of the matrices. The original "BSS" proof was in terms of the rank rather than the stable rank.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.11/LIPIcs.ICALP.2016.11.pdf
subspace embeddings
approximate matrix multiplication
stable rank
regression
low rank approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
12:1
12:14
10.4230/LIPIcs.ICALP.2016.12
article
Approximate Span Programs
Ito, Tsuyoshi
Jeffery, Stacey
Span programs are a model of computation that have been used to design quantum algorithms, mainly in the query model. It is known that for any decision problem, there exists a span program that leads to an algorithm with optimal quantum query complexity; however, finding such an algorithm is generally challenging. In this work, we consider new ways of designing quantum algorithms using span programs. We show how any span program that decides a problem f can also be used to decide "property testing" versions of the function f, or more generally, approximate a quantity called the span program witness size, which is some property of the input related to f. For example, using our techniques, the span program for OR, which can be used to design an optimal algorithm for the OR function, can also be used to design optimal algorithms for: threshold functions, in which we want to decide if the Hamming weight of a string is above a threshold, or far below, given the promise that one of these is true; and approximate counting, in which we want to estimate the Hamming weight of the input up to some desired accuracy. We achieve these results by relaxing the requirement that 1-inputs hit some target exactly in the span program, which could potentially make the design of span programs significantly easier. In addition, we give an exposition of span program structure, which increases the general understanding of this important model. One implication of this is alternative algorithms for estimating the witness size when the phase gap of a certain unitary can be lower bounded. We show how to lower bound this phase gap in certain cases.
As an application, we give the first upper bounds in the adjacency query model on the quantum time complexity of estimating the effective resistance between s and t, R_{s,t}(G). For this problem we obtain ~O((1/epsilon^{3/2})*n*sqrt(R_{s,t}(G))), using O(log(n)) space. In addition, when mu is a lower bound on lambda_2(G), by our phase-gap lower bound we can obtain an upper bound of ~O((1/epsilon)*n*sqrt(R_{s,t}(G)/mu)) for estimating effective resistance, also using O(log(n)) space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.12/LIPIcs.ICALP.2016.12.pdf
Quantum algorithms
span programs
quantum query complexity
effective resistance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
13:1
13:14
10.4230/LIPIcs.ICALP.2016.13
article
Power of Quantum Computation with Few Clean Qubits
Fujii, Keisuke
Kobayashi, Hirotada
Morimae, Tomoyuki
Nishimura, Harumichi
Tamate, Shuhei
Tani, Seiichiro
This paper investigates the power of polynomial-time quantum computation in which only a very limited number of qubits are initially clean in the |0> state, and all the remaining qubits are initially in the totally mixed state. No initializations of qubits are allowed during the computation, nor are intermediate measurements. The main contribution of this paper is to develop unexpectedly strong error-reduction methods for such quantum computations that simultaneously reduce the number of necessary clean qubits. It is proved that any problem solvable by a polynomial-time quantum computation with one-sided bounded error that uses logarithmically many clean qubits is also solvable with exponentially small one-sided error using just two clean qubits, and with polynomially small one-sided error using just one clean qubit. It is further proved in the two-sided-error case that any problem solvable by such a computation with a constant gap between completeness and soundness using logarithmically many clean qubits is also solvable with exponentially small two-sided error using just two clean qubits. If only one clean qubit is available, the problem is again still solvable, with exponentially small error in one of the completeness and soundness and polynomially small error in the other. An immediate consequence is that the Trace Estimation problem defined with fixed constant threshold parameters is complete for BQ_{[1]}P and BQ_{log}P, the classes of problems solvable by polynomial-time quantum computations with completeness 2/3 and soundness 1/3 using just one and logarithmically many clean qubits, respectively. The techniques used for proving the error-reduction results may be of independent interest in themselves, and one of the technical tools can also be used to show the hardness of weak classical simulations of one-clean-qubit computations (i.e., DQC1 computations).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.13/LIPIcs.ICALP.2016.13.pdf
DQC1
quantum computing
complete problems
error reduction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
14:1
14:14
10.4230/LIPIcs.ICALP.2016.14
article
Space-Efficient Error Reduction for Unitary Quantum Computations
Fefferman, Bill
Kobayashi, Hirotada
Yen-Yu Lin, Cedric
Morimae, Tomoyuki
Nishimura, Harumichi
This paper presents a general space-efficient method for error reduction for unitary quantum computation. Consider a polynomial-time quantum computation with completeness c and soundness s, either with or without a witness (corresponding to QMA and BQP, respectively). To convert this computation into a new computation with error at most 2^{-p}, the most space-efficient method known requires extra workspace of O(p*log(1/(c-s))) qubits. This space requirement is too large for scenarios like logarithmic-space quantum computations. This paper shows an error-reduction method for unitary quantum computations (i.e., computations without intermediate measurements) that requires extra workspace of just O(log(p/(c-s))) qubits. This in particular gives the first method of strong amplification for logarithmic-space unitary quantum computations with two-sided bounded error. This also leads to a number of consequences in complexity theory, such as the uselessness of quantum witnesses in bounded-error logarithmic-space unitary quantum computations, the PSPACE upper bound for QMA with exponentially-small completeness-soundness gap, and strong amplification for matchgate computations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.14/LIPIcs.ICALP.2016.14.pdf
space-bounded computation
quantum Merlin-Arthur proof systems
error reduction
quantum computing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
15:1
15:14
10.4230/LIPIcs.ICALP.2016.15
article
Linear Time Algorithm for Quantum 2SAT
Arad, Itai
Santha, Miklos
Sundaram, Aarthi
Zhang, Shengyu
A canonical result about satisfiability theory is that the 2-SAT problem can be solved in linear time, despite the NP-hardness of the 3-SAT problem. In the quantum 2-SAT problem, we are given a family of 2-qubit projectors Q_{ij} on a system of n qubits, and the task is to decide whether the Hamiltonian H = sum Q_{ij} has a 0-eigenvalue, or whether its smallest eigenvalue is larger than 1/n^c for some c = O(1). The problem is not only a natural extension of the classical 2-SAT problem to the quantum case, but is also equivalent to the problem of finding the ground state of 2-local frustration-free Hamiltonians of spin 1/2, a well-studied model believed to capture certain key properties in modern condensed matter physics. While Bravyi has shown that the quantum 2-SAT problem has a classical polynomial-time algorithm, the running time of his algorithm is O(n^4). In this paper we give a classical algorithm with linear running time in the number of local projectors, therefore achieving the best possible complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.15/LIPIcs.ICALP.2016.15.pdf
Quantum SAT
Davis-Putnam Procedure
Linear Time Algorithm
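For contrast with the quantum setting, the classical linear-time 2-SAT algorithm referenced in the abstract above (Aspvall-Plass-Tarjan, via strongly connected components of the implication graph) can be sketched as follows; the clause encoding and all names are our own illustration:

```python
def solve_2sat(n, clauses):
    """Classical 2-SAT in linear time via the implication graph.

    Variables are 0..n-1; a literal is (var, is_true).  Each clause
    (a or b) adds implications (not a -> b) and (not b -> a).  The
    formula is satisfiable iff no variable shares a strongly connected
    component with its negation.  Node 2*v encodes "v true", 2*v+1
    encodes "v false"; XOR with 1 negates a literal.
    """
    adj = [[] for _ in range(2 * n)]
    for (u, pu), (w, pw) in clauses:
        lu = 2 * u + (0 if pu else 1)
        lw = 2 * w + (0 if pw else 1)
        adj[lu ^ 1].append(lw)   # not u -> w
        adj[lw ^ 1].append(lu)   # not w -> u
    # Iterative Tarjan SCC (avoids recursion-depth limits).
    counter = [0]
    idx = [-1] * (2 * n); low = [0] * (2 * n)
    on = [False] * (2 * n); stack = []
    comp = [-1] * (2 * n); ncomp = [0]
    for s in range(2 * n):
        if idx[s] != -1:
            continue
        work = [(s, 0)]
        while work:
            v, pi = work[-1]
            if pi == 0:
                idx[v] = low[v] = counter[0]; counter[0] += 1
                stack.append(v); on[v] = True
            descended = False
            for i in range(pi, len(adj[v])):
                w = adj[v][i]
                if idx[w] == -1:
                    work[-1] = (v, i + 1); work.append((w, 0))
                    descended = True
                    break
                elif on[w]:
                    low[v] = min(low[v], idx[w])
            if descended:
                continue
            if low[v] == idx[v]:
                while True:
                    w = stack.pop(); on[w] = False; comp[w] = ncomp[0]
                    if w == v:
                        break
                ncomp[0] += 1
            work.pop()
            if work:
                p = work[-1][0]
                low[p] = min(low[p], low[v])
    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n)):
        return None
    # Tarjan numbers components in reverse topological order, so the
    # literal with the smaller component index lies on the "sink" side
    # of the implication order and can safely be set to true.
    return [comp[2 * v] < comp[2 * v + 1] for v in range(n)]
```

This runs in time linear in the number of variables plus clauses; the quantum analogue in the paper achieves linearity in the number of local projectors.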
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
16:1
16:13
10.4230/LIPIcs.ICALP.2016.16
article
Optimal Quantum Algorithm for Polynomial Interpolation
Childs, Andrew M.
van Dam, Wim
Hung, Shih-Han
Shparlinski, Igor E.
We consider the number of quantum queries required to determine the coefficients of a degree-d polynomial over F_q. A lower bound shown independently by Kane and Kutin and by Meyer and Pommersheim shows that d/2 + 1/2 quantum queries are needed to solve this problem with bounded error, whereas an algorithm of Boneh and Zhandry shows that d quantum queries are sufficient. We show that the lower bound is achievable: d/2 + 1/2 quantum queries suffice to determine the polynomial with bounded error. Furthermore, we show that d/2 + 1 queries suffice to achieve probability approaching 1 for large q. These upper bounds improve results of Boneh and Zhandry on the insecurity of cryptographic protocols against quantum attacks. We also show that our algorithm’s success probability as a function of the number of queries is precisely optimal. Furthermore, the algorithm can be implemented with gate complexity poly(log(q)) with negligible decrease in the success probability. We end with a conjecture about the quantum query complexity of multivariate polynomial interpolation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.16/LIPIcs.ICALP.2016.16.pdf
Quantum algorithms
query complexity
polynomial interpolation
finite fields
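For comparison, the classical baseline behind the abstract above, that d+1 evaluations determine a degree-d polynomial uniquely, can be sketched for a prime modulus via Lagrange interpolation (an illustrative sketch with our own names; not the quantum algorithm of the paper):

```python
def interpolate_mod_p(points, p):
    """Lagrange interpolation over the prime field F_p.

    Given d+1 pairs (x_i, y_i) with distinct x_i, returns the
    coefficient list [c_0, ..., c_d] of the unique degree-<=d
    polynomial f with f(x) = sum_i c_i * x^i (mod p) through them.
    """
    d = len(points) - 1
    coeffs = [0] * (d + 1)
    for i, (xi, yi) in enumerate(points):
        # Numerator polynomial prod_{j != i} (x - x_j), lowest degree first.
        num = [1]
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # Multiply num by (x - xj): new[k] = num[k-1] - xj*num[k].
            num = [(b - xj * a) % p for a, b in zip(num + [0], [0] + num)]
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p   # modular inverse (Python 3.8+)
        for k in range(len(num)):
            coeffs[k] = (coeffs[k] + scale * num[k]) % p
    return coeffs
```

Each of the d+1 points corresponds to one classical query to the black-box polynomial; the result above shows that roughly d/2 quantum queries already suffice.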
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
17:1
17:15
10.4230/LIPIcs.ICALP.2016.17
article
Lower Bounds for the Approximate Degree of Block-Composed Functions
Thaler, Justin
We describe a new hardness amplification result for point-wise approximation of Boolean functions by low-degree polynomials.
Specifically, for any function f on N bits, define F(x_1,...,x_M) = OMB(f(x_1),...,f(x_M)) to be the function on M*N bits obtained by block-composing f with a function known as ODD-MAX-BIT. We show that, if f requires large degree to approximate to error 2/3 in a certain one-sided sense (captured by a complexity measure known as positive one-sided approximate degree), then F requires large degree to approximate even to error 1-2^{-M}. This generalizes a result of Beigel (Computational Complexity, 1994), who proved an identical result for the special case f=OR.
Unlike related prior work, our result implies strong approximate degree lower bounds even for many functions F that have low threshold degree. Our proof is constructive: we exhibit a solution to the dual of an appropriate linear program capturing the approximate degree of any function. We describe several applications, including improved separations between the complexity classes P^{NP} and PP in both the query and communication complexity settings. Our separations improve on work of Beigel (1994) and Buhrman, Vereshchagin, and de Wolf (CCC, 2007).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.17/LIPIcs.ICALP.2016.17.pdf
approximate degree
one-sided approximate degree
polynomial approx- imations
threshold degree
communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
18:1
18:16
10.4230/LIPIcs.ICALP.2016.18
article
Dynamic Graph Stream Algorithms in o(n) Space
Huang, Zengfeng
Peng, Pan
In this paper we study graph problems in the dynamic streaming model, where the input is defined by a sequence of edge insertions and deletions. As many natural problems require Omega(n) space, where n is the number of vertices, existing work has mainly focused on designing ~O(n)-space algorithms. Although sublinear in the number of edges for dense graphs, this could still be too large for many applications (e.g., when n is huge or the graph is sparse). In this work, we give single-pass algorithms beating this space barrier for two classes of problems. We present o(n)-space algorithms for estimating the number of connected components with additive error epsilon*n and for (1 + epsilon)-approximating the weight of the minimum spanning tree. The latter improves a previous ~O(n)-space algorithm given by Ahn et al. (SODA 2012) for connected graphs with bounded edge weights. We initiate the study of approximate graph property testing in the dynamic streaming model, where we want to distinguish graphs satisfying the property from graphs that are epsilon-far from having the property. We consider the problems of testing k-edge connectivity, k-vertex connectivity, cycle-freeness and bipartiteness (of planar graphs), for which we provide algorithms using roughly ~O(n^{1-epsilon}) space, which is o(n) for any constant epsilon. To complement our algorithms, we present Omega(n^{1-O(epsilon)}) space lower bounds for these problems, which show that such a dependence on epsilon is necessary.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.18/LIPIcs.ICALP.2016.18.pdf
dynamic graph streams
sketching
property testing
minimum spanning tree
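A useful way to see the additive-error goal for connected components is the identity cc(G) = sum over vertices v of 1/|C(v)|, where C(v) is the component containing v: sampling vertices and averaging (suitably truncated) inverse component sizes estimates cc(G)/n within epsilon. The following static-graph sketch verifies the identity itself; it is illustrative only and is not the paper's sketch-based streaming algorithm (the adjacency-dict representation is an assumption).

```python
from collections import deque

def component_sizes(adj):
    """For each vertex of an undirected graph (adjacency dict),
    return the size of its connected component, via BFS."""
    seen, sizes = set(), {}
    for s in adj:
        if s in seen:
            continue
        queue, comp = deque([s]), [s]
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.append(v)
                    queue.append(v)
        for u in comp:
            sizes[u] = len(comp)
    return sizes

def count_components(adj):
    """Identity: cc(G) = sum_v 1 / |component containing v|."""
    sizes = component_sizes(adj)
    return round(sum(1.0 / s for s in sizes.values()))
```

Replacing the exact sum by an average over sampled vertices (with BFS truncated after ~1/epsilon vertices) is what yields additive epsilon*n error.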
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
19:1
19:12
10.4230/LIPIcs.ICALP.2016.19
article
Diameter and k-Center in Sliding Windows
Cohen-Addad, Vincent
Schwiegelshohn, Chris
Sohler, Christian
In this paper we develop streaming algorithms for the diameter problem and the k-center clustering problem in the sliding window model. In this model we are interested in maintaining a solution for the N most recent points of the stream. In the diameter problem we would like to maintain two points whose distance approximates the diameter of the point set in the window. Our algorithm computes a (3 + epsilon)-approximation and uses O(1/epsilon*ln(alpha)) memory cells, where alpha is the ratio of the largest to the smallest distance and is assumed to be known in advance. We also prove that under reasonable assumptions obtaining a (3 - epsilon)-approximation requires Omega(N^{1/3}) space.
For the k-center problem, where the goal is to find k centers that minimize the maximum distance of a point to its nearest center, we obtain a (6 + epsilon)-approximation using O(k/epsilon*ln(alpha)) memory cells and a (4 + epsilon)-approximation for the special case k = 2. We also prove that any algorithm for the 2-center problem that achieves an approximation ratio of less than 4 requires Omega(N^{1/3}) space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.19/LIPIcs.ICALP.2016.19.pdf
Streaming
k-Center
Diameter
Sliding Windows
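For contrast with the O(1/epsilon*ln(alpha))-cell streaming algorithm, the exact window diameter is a simple quadratic-time computation once all N points are stored. A naive baseline sketch (point and window representations are illustrative):

```python
from itertools import combinations
from math import dist

def window_diameter(points, N):
    """Exact diameter of the N most recent points: the maximum pairwise
    Euclidean distance in the window. This is the quantity the sliding-window
    algorithm approximates within a factor (3 + epsilon) in small space."""
    window = points[-N:]
    return max((dist(p, q) for p, q in combinations(window, 2)), default=0.0)
```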
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
20:1
20:14
10.4230/LIPIcs.ICALP.2016.20
article
Approximate Hamming Distance in a Stream
Clifford, Raphaël
Starikovskaya, Tatiana
We consider the problem of computing a (1+epsilon)-approximation of the Hamming distance between a pattern of length n and successive substrings of a stream. We first look at the one-way randomised communication complexity of this problem. We show the following:
- If Alice and Bob both share the pattern and Alice has the first half of the stream and Bob the second half, then there is an O(epsilon^{-4}*log^2(n)) bit randomised one-way communication protocol.
- If Alice has the pattern, Bob the first half of the stream and Charlie the second half, then there is an O(epsilon^{-2}*sqrt(n)*log(n)) bit randomised one-way communication protocol. We then go on to develop small space streaming algorithms for (1 + epsilon)-approximate Hamming distance which give worst case running time guarantees per arriving symbol.
- For binary input alphabets there is an O(epsilon^{-3}*sqrt(n)*log^2(n)) space and O(epsilon^{-2}*log(n)) time streaming
(1 + epsilon)-approximate Hamming distance algorithm.
- For general input alphabets there is an O(epsilon^{-5}*sqrt(n)*log^4(n)) space and O(epsilon^{-4}*log^3(n)) time streaming
(1 + epsilon)-approximate Hamming distance algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.20/LIPIcs.ICALP.2016.20.pdf
Hamming distance
communication complexity
data stream model
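A folklore way to approximate Hamming distance is to sample positions uniformly and rescale; the paper's contribution is achieving much better space and per-symbol time trade-offs in the streaming setting. A minimal sketch of the sampling estimator (not the paper's method; names are illustrative):

```python
import random

def sampled_hamming(a, b, num_samples, rng=random.Random(0)):
    """Unbiased estimate of the Hamming distance between equal-length
    strings a and b: sample positions uniformly with replacement, count
    mismatches, and rescale by n / num_samples."""
    n = len(a)
    hits = sum(a[i] != b[i]
               for i in (rng.randrange(n) for _ in range(num_samples)))
    return hits * n / num_samples
```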
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
21:1
21:14
10.4230/LIPIcs.ICALP.2016.21
article
Price of Competition and Dueling Games
Dehghani, Sina
Hajiaghayi, Mohammad Taghi
Mahini, Hamid
Seddighin, Saeed
We study competition in a general framework introduced by Immorlica, Kalai, Lucier, Moitra, Postlewaite, and Tennenholtz and answer their main open question. Immorlica et al. considered classic optimization problems in terms of competition and introduced a general class of games called dueling games. They model this competition as a zero-sum game, where two players are competing for a user's satisfaction. In their main and most natural game, the ranking duel, a user requests a webpage by submitting a query and players output an ordering over all possible webpages based on the submitted query. The user tends to choose the ordering which displays her requested webpage in a higher rank. Each player's goal is to maximize the probability that her ordering beats that of her opponent and gets the user's attention. Immorlica et al. show that this game leads both players to provide suboptimal search results. However, they leave the following as their main open question: "does competition between algorithms improve or degrade expected performance?" (see the introduction for more quotes). In this paper, we resolve this question for the ranking duel and a more general class of dueling games.
More precisely, we study the quality of orderings in a competition between two players. This game is a zero-sum game, and thus any Nash equilibrium of the game can be described by minimax strategies. Let the value of the user for an ordering be a function of the position of her requested item in the corresponding ordering, and the social welfare for an ordering be the expected value of the corresponding ordering for the user. We propose the price of competition, which is the ratio of the social welfare for the worst minimax strategy to the social welfare obtained by a social planner. Finding the price of competition is another approach to obtaining structural results about Nash equilibria. We use this criterion for analyzing the quality of orderings in the ranking duel. Although Immorlica et al. show that the competition leads to suboptimal strategies, we prove that the quality of minimax results is surprisingly close to that of the optimum solution. In particular, via a novel factor-revealing LP for computing the price of anarchy, we prove that if the value of the user for an ordering is a linear function of its position, then the price of competition is at least 0.612 and bounded above by 0.833. Moreover, we consider the cost minimization version of the problem. We prove that the social cost of the worst minimax strategy is at most 3 times the optimal social cost.
Last but not least, we go beyond linear valuation functions and capture the main challenge for bounding the price of competition for any arbitrary valuation function. We present a principle which states that the lower bound for the price of competition for all 0-1 valuation functions is the same as the lower bound for the price of competition for all possible valuation functions. It is worth mentioning that this principle not only works for the ranking duel but also for all dueling games. This principle says, in any dueling game, the most challenging part of bounding the price of competition is finding a lower bound for 0-1 valuation functions. We leverage this principle to show that the price of competition is at least 0.25 for the generalized ranking duel.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.21/LIPIcs.ICALP.2016.21.pdf
POC
POA
Dueling games
Nash equilibria
sponsored search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
22:1
22:13
10.4230/LIPIcs.ICALP.2016.22
article
Popular Half-Integral Matchings
Kavitha, Telikepalli
In an instance G = (A union B, E) of the stable marriage problem with strict and possibly incomplete preference lists, a matching M is popular if there is no matching M' where the vertices that prefer M' to M outnumber those that prefer M to M'. All stable matchings are popular and there is a simple linear time algorithm to compute a maximum-size popular matching. More generally, what we seek is a min-cost popular matching where we assume there is a cost function c : E -> Q. However there is no polynomial time algorithm currently known for solving this problem. Here we consider the following generalization of a popular matching called a popular half-integral matching: this is a fractional matching ~x = (M_1 + M_2)/2, where M_1 and M_2 are the 0-1 edge incidence vectors of matchings in G, such that ~x satisfies popularity constraints. We show that every popular half-integral matching is equivalent to a stable matching in a larger graph G^*. This allows us to solve the min-cost popular half-integral matching problem in polynomial time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.22/LIPIcs.ICALP.2016.22.pdf
bipartite graphs
stable matchings
fractional matchings
polytopes
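Popularity is decided by a pairwise election between matchings: M' is more popular than M if the vertices preferring M' outnumber those preferring M. A small checker for this vote margin (a sketch with illustrative data structures; being matched is treated as better than being unmatched, as is standard for incomplete lists):

```python
def prefers(prefs, v, new_partner, old_partner):
    """True if vertex v prefers new_partner to old_partner.
    None means unmatched; any partner beats being unmatched."""
    if new_partner == old_partner:
        return False
    if old_partner is None:
        return new_partner is not None
    if new_partner is None:
        return False
    ranking = prefs[v]  # strict preference list, best first
    return ranking.index(new_partner) < ranking.index(old_partner)

def vote_margin(prefs, M, M2):
    """Votes for M2 minus votes for M, over all vertices.
    M and M2 map each matched vertex to its partner.
    M2 is more popular than M iff the margin is positive."""
    margin = 0
    for v in prefs:
        if prefers(prefs, v, M2.get(v), M.get(v)):
            margin += 1
        elif prefers(prefs, v, M.get(v), M2.get(v)):
            margin -= 1
    return margin
```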
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
23:1
23:13
10.4230/LIPIcs.ICALP.2016.23
article
Voronoi Choice Games
Boppana, Meena
Hod, Rani
Mitzenmacher, Michael
Morgan, Tom
We study novel variations of Voronoi games and associated random processes that we call Voronoi choice games. These games provide a rich framework for studying questions regarding the power of small numbers of choices in multi-player, competitive scenarios, and they further lead to many interesting, non-trivial random processes that appear worthy of study.
As an example of the type of problem we study, suppose a group of n miners (or players) are staking land claims through the following process: each miner has m associated points independently and uniformly distributed on an underlying space (such as the unit circle, the unit square, or the unit torus), so the kth miner will have associated points p_{k1}, p_{k2}, ..., p_{km}. We generally here think of m as being a small constant, such as 2. Each miner chooses one of these points as the base point for their claim. Each miner obtains mining rights for the region of the space that is closest to their chosen base; that is, they obtain the Voronoi cell corresponding to their chosen point in the Voronoi diagram of the n chosen points. Each player's goal is simply to maximize the amount of land under their control. What can we say about the players' strategies and the equilibria of such games?
In our main result, we derive bounds on the expected number of pure Nash equilibria for a variation of the 1-dimensional game on the circle where a player owns the arc starting from their point and moving clockwise to the next point. This result uses interesting properties of random arc lengths on circles, and demonstrates the challenges in analyzing these kinds of problems. We also provide several other related results. In particular, for the 1-dimensional game on the circle, we show that a pure Nash equilibrium always exists when each player owns the part of the circle nearest to their point, but it is NP-hard to determine whether a pure Nash equilibrium exists in the variant when each player owns the arc starting from their point clockwise to the next point. This last result, in part, motivates our examination of the random setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.23/LIPIcs.ICALP.2016.23.pdf
Voronoi games
correlated equilibria
power of two choices
Hotelling model
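In the one-sided 1-dimensional variant analyzed in the main result, a player's payoff for fixed base points is just the arc from her point clockwise to the next chosen point. A sketch computing these payoffs (treating increasing coordinates in [0,1) as the clockwise direction is an illustrative convention):

```python
def clockwise_arc_payoffs(points):
    """Payoffs for the one-sided variant on the unit-circumference circle:
    player i, positioned at points[i] in [0, 1), owns the arc from her point
    clockwise (here: in increasing coordinate, wrapping around) to the next
    chosen point. Payoffs sum to 1 for distinct positions."""
    n = len(points)
    if n == 1:
        return [1.0]  # a lone player owns the whole circle
    order = sorted(range(n), key=lambda i: points[i])
    payoffs = [0.0] * n
    for k, i in enumerate(order):
        j = order[(k + 1) % n]  # next point clockwise
        payoffs[i] = (points[j] - points[i]) % 1.0
    return payoffs
```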
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
24:1
24:14
10.4230/LIPIcs.ICALP.2016.24
article
The Complexity of Hex and the Jordan Curve Theorem
Adler, Aviv
Daskalakis, Constantinos
Demaine, Erik D.
The Jordan curve theorem and Brouwer's fixed-point theorem are fundamental problems in topology. We study their computational relationship, showing that a stylized computational version of Jordan's theorem is PPAD-complete, and therefore in a sense computationally equivalent to Brouwer's theorem. As a corollary, our computational result implies that these two theorems directly imply each other mathematically, complementing Maehara's proof that Brouwer implies Jordan [Maehara, 1984]. We then turn to the combinatorial game of Hex, which is related to Jordan's theorem and where the existence of a winner can be used to show Brouwer's theorem [Gale, 1979]. We establish that determining who won an (implicitly encoded) play of Hex is PSPACE-complete by adapting a reduction (due to Goldberg [Goldberg, 2015]) from Quantified Boolean Formula (QBF). As this problem is analogous to evaluating the output of a canonical path-following algorithm for finding a Brouwer fixed point - a task known to be PSPACE-complete [Goldberg/Papadimitriou/Savani, 2013] - we thereby establish a connection between Brouwer, Jordan, and Hex higher in the complexity hierarchy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.24/LIPIcs.ICALP.2016.24.pdf
Jordan
Brouwer
Hex
PPAD
PSPACE
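The PSPACE-hardness concerns implicitly encoded plays; for an explicitly given, completely filled board, the winner is a plain connectivity check, and the Hex theorem guarantees exactly one player has a crossing. A sketch (the side conventions for the two players are assumptions):

```python
from collections import deque

# Hexagonal adjacency on the standard rhombus board.
HEX_NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def hex_winner(board):
    """board: n x n list of 'B'/'W' cells, completely filled.
    Convention here: 'B' wins by connecting the top row to the bottom row.
    By the Hex theorem exactly one player wins a filled board, so if 'B'
    has no crossing, 'W' must have one."""
    n = len(board)

    def connected(player, sources, is_target):
        seen = set(sources)
        queue = deque(sources)
        while queue:
            r, c = queue.popleft()
            if is_target(r, c):
                return True
            for dr, dc in HEX_NEIGHBORS:
                nr, nc = r + dr, c + dc
                if (0 <= nr < n and 0 <= nc < n
                        and (nr, nc) not in seen
                        and board[nr][nc] == player):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False

    top = [(0, c) for c in range(n) if board[0][c] == 'B']
    return 'B' if connected('B', top, lambda r, c: r == n - 1) else 'W'
```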
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
25:1
25:14
10.4230/LIPIcs.ICALP.2016.25
article
Fractals for Kernelization Lower Bounds, With an Application to Length-Bounded Cut Problems
Fluschnik, Till
Hermelin, Danny
Nichterlein, André
Niedermeier, Rolf
Bodlaender et al.'s [Bodlaender/Jansen/Kratsch,2014] cross-composition technique is a popular method for excluding polynomial-size problem kernels for NP-hard parameterized problems. We present a new technique exploiting triangle-based fractal structures for extending the range of applicability of cross-compositions. Our technique makes it possible to prove new no-polynomial-kernel results for a number of problems dealing with length-bounded cuts. Roughly speaking, our new technique combines the advantages of serial and parallel composition. In particular, answering an open question of Golovach and Thilikos [Golovach/Thilikos,2011], we show that, unless NP subseteq coNP/poly, the NP-hard Length-Bounded Edge-Cut problem (delete at most k edges such that the resulting graph has no s-t path of length shorter than l) parameterized by the combination of k and l has no polynomial-size problem kernel. Our framework applies to planar as well as directed variants of the basic problems and also applies to both edge and vertex deletion problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.25/LIPIcs.ICALP.2016.25.pdf
Parameterized complexity
polynomial-time data reduction
cross-compositions
lower bounds
graph modification problems
interdiction problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
26:1
26:14
10.4230/LIPIcs.ICALP.2016.26
article
Kernelization of Cycle Packing with Relaxed Disjointness Constraints
Agrawal, Akanksha
Lokshtanov, Daniel
Majumdar, Diptapriyo
Mouawad, Amer E.
Saurabh, Saket
A key result in the field of kernelization, a subfield of parameterized complexity, states that the classic Disjoint Cycle Packing problem, i.e. finding k vertex disjoint cycles in a given graph G, admits no polynomial kernel unless NP subseteq coNP/poly. However, very little is known about this problem beyond the aforementioned kernelization lower bound (within the parameterized complexity framework). In the hope of clarifying the picture and better understanding the types of "constraints" that separate "kernelizable" from "non-kernelizable" variants of Disjoint Cycle Packing, we investigate two relaxations of the problem. The first variant, which we call Almost Disjoint Cycle Packing, introduces a "global" relaxation parameter t. That is, given a graph G and integers k and t, the goal is to find at least k distinct cycles such that every vertex of G appears in at most t of the cycles. The second variant, Pairwise Disjoint Cycle Packing, introduces a "local" relaxation parameter and we seek at least k distinct cycles such that every two cycles intersect in at most t vertices. While the Pairwise Disjoint Cycle Packing problem admits a polynomial kernel for all t >= 1, the kernelization complexity of Almost Disjoint Cycle Packing reveals an interesting spectrum of upper and lower bounds. In particular, for t = k/c, where c could be a function of k, we obtain a kernel of size O(2^{c^{2}}*k^{7+c}*log^3(k)) whenever c in o(sqrt(k)). Thus the kernel size varies from being sub-exponential when c in o(sqrt(k)), to quasipolynomial when c in o(log^l(k)), l in R_+, and polynomial when c in O(1). We complement these results for Almost Disjoint Cycle Packing by showing that the problem does not admit a polynomial kernel whenever t in O(k^{epsilon}), for any 0 <= epsilon < 1.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.26/LIPIcs.ICALP.2016.26.pdf
parameterized complexity
cycle packing
kernelization
relaxation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
27:1
27:14
10.4230/LIPIcs.ICALP.2016.27
article
The Complexity Landscape of Fixed-Parameter Directed Steiner Network Problems
Feldmann, Andreas Emil
Marx, Dániel
Given a directed graph G and a list (s_1, t_1), ..., (s_k, t_k) of terminal pairs, the Directed Steiner Network problem asks for a minimum-cost subgraph of G that contains a directed s_i -> t_i path for every 1 <= i <= k. The special case Directed Steiner Tree (when we ask for paths from a root r to terminals t_1, ..., t_k) is known to be fixed-parameter tractable parameterized by the number of terminals, while the special case Strongly Connected Steiner Subgraph (when we ask for a path from every t_i to every other t_j) is known to be W[1]-hard parameterized by the number of terminals. We systematically explore the complexity landscape of directed Steiner problems to fully understand which other special cases are FPT or W[1]-hard. Formally, if H is a class of directed graphs, then we look at the special case of Directed Steiner Network where the list (s_1, t_1), ..., (s_k, t_k) of requests forms a directed graph that is a member of H. Our main result is a complete characterization of the classes H resulting in fixed-parameter tractable special cases: we show that if every pattern in H has the combinatorial property of being "transitively equivalent to a bounded-length caterpillar with a bounded number of extra edges," then the problem is FPT, and it is W[1]-hard for every recursively enumerable H not having this property. This complete dichotomy unifies and generalizes the known results showing that Directed Steiner Tree is FPT [Dreyfus and Wagner, Networks 1971], Strongly Connected Steiner Subgraph is W[1]-hard [Guo et al., SIAM J. Discrete Math. 2011], and Directed Steiner Network is solvable in polynomial time for a constant number of terminals [Feldman and Ruhl, SIAM J. Comput. 2006], and moreover reveals a large continent of tractable cases that were not known before.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.27/LIPIcs.ICALP.2016.27.pdf
Directed Steiner Tree
Directed Steiner Network
fixed-parameter tractability
dichotomy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
28:1
28:15
10.4230/LIPIcs.ICALP.2016.28
article
Double-Exponential and Triple-Exponential Bounds for Choosability Problems Parameterized by Treewidth
Marx, Dániel
Mitsou, Valia
Choosability, introduced by Erdős, Rubin, and Taylor [Congr. Number. 1979], is a well-studied concept in graph theory: we say that a graph is c-choosable if for any assignment of a list of c colors to each vertex, there is a proper coloring where each vertex uses a color from its list. We study the complexity of deciding choosability on graphs of bounded treewidth. It follows from earlier work that 3-choosability can be decided in time 2^(2^(O(w)))*n^(O(1)) on graphs of treewidth w. We complement this result by a matching lower bound giving evidence that double-exponential dependence on treewidth may be necessary for the problem: we show that an algorithm with running time 2^(2^(o(w)))*n^(O(1)) would violate the Exponential-Time Hypothesis (ETH). We also consider the optimization problem where the task is to delete the minimum number of vertices to make the graph 4-choosable, and demonstrate that the dependence on treewidth becomes triple-exponential for this problem: it can be solved in time 2^(2^(2^(O(w))))*n^(O(1)) on graphs of treewidth w, but an algorithm with running time 2^(2^(2^(o(w))))*n^(O(1)) would violate ETH.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.28/LIPIcs.ICALP.2016.28.pdf
Parameterized Complexity
List coloring
Treewidth
Lower bounds under ETH
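Choosability's universal quantifier over list assignments can be checked directly on tiny graphs: it suffices to draw lists from a universe of c*n colors, since the union of all lists contains at most c*n colors and renaming colors preserves colorability. A brute-force sketch (illustrative only; exponential in every parameter):

```python
from itertools import combinations, product

def list_colorable(adj, lists):
    """Brute force: does the graph (symmetric adjacency dict) admit a proper
    coloring where each vertex uses a color from its own list?"""
    vertices = list(adj)
    for choice in product(*(lists[v] for v in vertices)):
        color = dict(zip(vertices, choice))
        if all(color[u] != color[v] for u in adj for v in adj[u]):
            return True
    return False

def is_choosable(adj, c):
    """c-choosability by exhausting all assignments of c-color lists drawn
    from a universe of c * n colors, which suffices up to renaming colors."""
    n = len(adj)
    universe = range(c * n)
    vertices = list(adj)
    for assignment in product(combinations(universe, c), repeat=n):
        lists = dict(zip(vertices, assignment))
        if not list_colorable(adj, lists):
            return False
    return True
```

For example, an edge (K_2) is 2-choosable but not 1-choosable: identical singleton lists on both endpoints admit no proper coloring.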
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
29:1
29:15
10.4230/LIPIcs.ICALP.2016.29
article
Do Distributed Differentially-Private Protocols Require Oblivious Transfer?
Goyal, Vipul
Khurana, Dakshita
Mironov, Ilya
Pandey, Omkant
Sahai, Amit
We study the cryptographic complexity of two-party differentially-private protocols for a large natural class of boolean functionalities. Information theoretically, McGregor et al. [FOCS 2010] and Goyal et al. [Crypto 2013] demonstrated several functionalities for which the maximal possible accuracy in the distributed setting is significantly lower than that in the client-server setting. Goyal et al. [Crypto 2013] further showed that "highly accurate" protocols in the distributed setting for any non-trivial functionality in fact imply the existence of one-way functions. However, it has remained an open problem to characterize the exact cryptographic complexity of this class. In particular, we know that semi-honest oblivious transfer helps obtain optimally accurate distributed differential privacy. But we do not know whether the reverse is true. We study the following question: Does the existence of optimally accurate distributed differentially private protocols for any class of functionalities imply the existence of oblivious transfer (or equivalently secure multi-party computation)? We resolve this question in the affirmative for the class of boolean functionalities that contain an XOR embedded on adjacent inputs. We give a reduction from oblivious transfer to:
- Any distributed optimally accurate epsilon-differentially private protocol with epsilon > 0 computing a functionality with a boolean XOR embedded on adjacent inputs.
- Any distributed non-optimally accurate epsilon-differentially private protocol with epsilon > 0, for a constant range of non-optimal accuracies and constant range of values of epsilon, computing a functionality with a boolean XOR embedded on adjacent inputs.
En route to proving these results, we demonstrate a connection between optimally-accurate two-party differentially-private protocols for functions with a boolean XOR embedded on adjacent inputs, and noisy channels, which were shown by Crépeau and Kilian [FOCS 1988] to be sufficient for oblivious transfer.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.29/LIPIcs.ICALP.2016.29.pdf
Oblivious Transfer
Distributed Differential Privacy
Noisy Channels
Weak Noisy Channels
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
30:1
30:14
10.4230/LIPIcs.ICALP.2016.30
article
Functional Commitment Schemes: From Polynomial Commitments to Pairing-Based Accumulators from Simple Assumptions
Libert, Benoît
Ramanna, Somindu C.
Yung, Moti
We formalize a cryptographic primitive called functional commitment (FC) which can be viewed as a generalization of vector commitments (VCs), polynomial commitments and many other special kinds of commitment schemes. A non-interactive functional commitment allows committing to a message in such a way that the committer has the flexibility of only revealing a function of the committed message during the opening phase. We provide constructions for the functionality of linear functions, where messages consist of vectors over some domain and commitments can later be opened to a specific linear function of the vector coordinates. An opening for a function thus generates a witness for the fact that the function indeed evaluates to a given value for the committed message. One security requirement is called function binding and requires that no adversary be able to open a commitment to two different evaluations for the same function.
We propose a construction of functional commitments for linear functions based on constant-size assumptions in composite order groups endowed with a bilinear map. The construction has commitments and openings of constant size (i.e., independent of n or the function description) and is perfectly hiding - the underlying message is information-theoretically hidden. Our security proofs build on the Déjà Q framework of Chase and Meiklejohn (Eurocrypt 2014) and its extension by Wee (TCC 2016) to encryption primitives, thus relying on constant-size subgroup decisional assumptions. We show that FCs for linear functions are sufficiently powerful to solve four open problems. First, they imply polynomial commitments; they then give cryptographic accumulators (i.e., an algebraic hash function which makes it possible to efficiently prove that some input belongs to a hashed set). In particular, specializing our FC construction leads to the first pairing-based polynomial commitments and accumulators for large universes known to achieve security under simple assumptions. We also substantially extend our pairing-based accumulator to handle subset queries, which requires a non-trivial extension of the Déjà Q framework.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.30/LIPIcs.ICALP.2016.30.pdf
Cryptography
commitment schemes
functional commitments
accumulators
provable security
pairing-based
simple assumptions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
31:1
31:14
10.4230/LIPIcs.ICALP.2016.31
article
Block-Wise Non-Malleable Codes
Chandran, Nishanth
Goyal, Vipul
Mukherjee, Pratyay
Pandey, Omkant
Upadhyay, Jalaj
Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS'10), provide the guarantee that if a codeword c of a message m is modified by a tampering function f to c', then c' either decodes to m or to "something unrelated" to m. In recent literature, a lot of focus has been on explicitly constructing such codes against large and natural classes of tampering functions, such as the split-state model, in which the tampering function operates on different parts of the codeword independently.
In this work, we consider a stronger adversarial model called the block-wise tampering model, in which we allow tampering to depend on more than one block: if a codeword consists of two blocks c = (c_1, c_2), then the first tampering function f_1 could produce a tampered part c'_1 = f_1(c_1) and the second tampering function f_2 could produce c'_2 = f_2(c_1, c_2) depending on both c_2 and c_1. The notion similarly extends to multiple blocks, where tampering of block c_i could happen with the knowledge of all c_j for j <= i. We argue this is a natural notion where, for example, the blocks are sent one by one and the adversary must send the tampered block before it gets the next block.
A little thought reveals that it is impossible to construct such codes that are non-malleable (in the standard sense) against such a powerful adversary: indeed, upon receiving the last block, an adversary could decode the entire codeword and then can tamper depending on the message. In light of this impossibility, we consider a natural relaxation called non-malleable codes with replacement which requires the adversary to produce not only related but also a valid codeword in order to succeed. Unfortunately, we show that even this relaxed definition is not achievable in the information-theoretic setting (i.e., when the tampering functions can be unbounded) which implies that we must turn our attention towards computationally bounded adversaries.
As our main result, we show how to construct a block-wise non-malleable code (BNMC) from sub-exponentially hard one-way permutations. We provide an interesting connection between BNMCs and non-malleable commitments. We show that any BNMC can be converted into a non-malleable (w.r.t. opening) commitment scheme. Our techniques, quite surprisingly, give rise to a non-malleable commitment scheme (secure against so-called synchronizing adversaries) in which only the committer sends messages. We believe this result to be of independent interest. In the other direction, we show that any non-interactive non-malleable (w.r.t. opening) commitment can be used to construct a BNMC with only 2 blocks. Unfortunately, such commitment schemes exist only under highly non-standard assumptions (adaptive one-way functions) and hence cannot substitute our main construction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.31/LIPIcs.ICALP.2016.31.pdf
Non-malleable codes
Non-malleable commitments
Block-wise Tampering
Complexity-leveraging
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
32:1
32:14
10.4230/LIPIcs.ICALP.2016.32
article
Provably Secure Virus Detection: Using The Observer Effect Against Malware
Lipton, Richard J.
Ostrovsky, Rafail
Zikas, Vassilis
Protecting software from malware injection is one of the biggest challenges of modern computer science. Despite intensive efforts by the scientific and engineering community, the number of successful attacks continues to increase.
This work takes the first steps toward a provably secure treatment of malware detection. We provide a formal model and cryptographic security definitions of attestation for systems with dynamic memory, and suggest novel provably secure attestation schemes. The key idea underlying our schemes is to use the very insertion of the malware itself to allow the system to detect it. This is, in our opinion, close in spirit to the quantum Observer Effect. The attackers, no matter how clever and no matter when they insert their malware, change the state of the system they are attacking. This fundamental idea can be a game changer. Our system does not rely on heuristics; instead, our scheme enjoys the unique property that it is proved secure in a formal and precise mathematical sense and, with minimal and realistic CPU modifications, achieves strong provable security guarantees. We envision such systems with a formal mathematical security treatment as a venue for new directions in software protection.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.32/LIPIcs.ICALP.2016.32.pdf
Cryptography
Software Attestation
Provable Security
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
33:1
33:15
10.4230/LIPIcs.ICALP.2016.33
article
An Almost Cubic Lower Bound for Depth Three Arithmetic Circuits
Kayal, Neeraj
Saha, Chandan
Tavenas, Sébastien
We show an almost cubic lower bound on the size of any depth three arithmetic circuit computing an explicit multilinear polynomial in n variables over any field. This improves upon the previously known quadratic lower bound by Shpilka and Wigderson [CCC, 1999].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.33/LIPIcs.ICALP.2016.33.pdf
arithmetic circuits
depth-3 circuits
shifted partials
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
34:1
34:14
10.4230/LIPIcs.ICALP.2016.34
article
Boundaries of VP and VNP
Grochow, Joshua A.
Mulmuley, Ketan D.
Qiao, Youming
One fundamental question in the context of the geometric complexity theory approach to the VP vs. VNP conjecture is whether VP = VP-bar, where VP is the class of families of polynomials that can be computed by arithmetic circuits of polynomial degree and size, and VP-bar is the class of families of polynomials that can be approximated infinitesimally closely by arithmetic circuits of polynomial degree and size. The goal of this article is to study the conjecture in (Mulmuley, FOCS 2012) that VP-bar is not contained in VP.
Towards that end, we introduce three degenerations of VP (i.e., sets of points in VP-bar), namely the stable degeneration Stable-VP, the Newton degeneration Newton-VP, and the p-definable one-parameter degeneration VP*. We also introduce analogous degenerations of VNP. We show that Stable-VP subseteq Newton-VP subseteq VP* subseteq VNP, and Stable-VNP = Newton-VNP = VNP* = VNP. The three notions of degenerations and the proof of this result shed light on the problem of separating VP-bar from VP.
Although we do not yet construct explicit candidates for the polynomial families in VP-bar \ VP, we prove results which tell us where not to look for such families. Specifically, we demonstrate that the families in Newton-VP \ VP based on semi-invariants of quivers would have to be nongeneric by showing that, for many finite quivers (including some wild ones), the Newton degeneration of any generic semi-invariant can be computed by a circuit of polynomial size. We also show that the Newton degenerations of perfect matching Pfaffians, monotone arithmetic circuits over the reals, and Schur polynomials have polynomial-size circuits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.34/LIPIcs.ICALP.2016.34.pdf
geometric complexity theory
arithmetic circuit
border complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
35:1
35:14
10.4230/LIPIcs.ICALP.2016.35
article
AC^0 o MOD_2 Lower Bounds for the Boolean Inner Product
Cheraghchi, Mahdi
Grigorescu, Elena
Juba, Brendan
Wimmer, Karl
Xie, Ning
AC^0 o MOD_2 circuits are AC^0 circuits augmented with a layer of parity gates just above the input layer. We study AC^0 o MOD_2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC^0 o MOD_2 circuits of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an ~Omega(n^2) lower bound for the special case of depth-4 AC^0 o MOD_2. Our proof of the depth-4 lower bound employs a new "moment-matching" inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions' values at 0, given that their first d moments match.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.35/LIPIcs.ICALP.2016.35.pdf
Boolean analysis
circuit complexity
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
36:1
36:13
10.4230/LIPIcs.ICALP.2016.36
article
Lower Bounds for Nondeterministic Semantic Read-Once Branching Programs
Cook, Stephen
Edmonds, Jeff
Medabalimi, Venkatesh
Pitassi, Toniann
We prove exponential lower bounds on the size of semantic read-once 3-ary nondeterministic branching programs. Prior to our result, the best lower bounds known were for D-ary branching programs with |D| >= 2^{13}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.36/LIPIcs.ICALP.2016.36.pdf
Branching Programs
Semantic
Non-deterministic
Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
37:1
37:14
10.4230/LIPIcs.ICALP.2016.37
article
Improved Bounds on the Sign-Rank of AC^0
Bun, Mark
Thaler, Justin
The sign-rank of a matrix A with entries in {-1, +1} is the least rank of a real matrix B with A_{ij}*B_{ij} > 0 for all i, j. Razborov and Sherstov (2008) gave the first exponential lower bounds on the sign-rank of a function in AC^0, answering an old question of Babai, Frankl, and Simon (1986). Specifically, they exhibited a matrix A = [F(x,y)]_{x,y} for a specific function F:{-1,1}^n*{-1,1}^n -> {-1,1} in AC^0, such that A has sign-rank exp(Omega(n^{1/3})).
We prove a generalization of Razborov and Sherstov’s result, yielding exponential sign-rank lower bounds for a non-trivial class of functions (that includes the function used by Razborov and Sherstov). As a corollary of our general result, we improve Razborov and Sherstov's lower bound on the sign-rank of AC^0 from exp(Omega(n^{1/3})) to exp(~Omega(n^{2/5})). We also describe several applications to communication complexity, learning theory, and circuit complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.37/LIPIcs.ICALP.2016.37.pdf
Sign-rank
circuit complexity
communication complexity
constant-depth circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
38:1
38:13
10.4230/LIPIcs.ICALP.2016.38
article
On the Sensitivity Conjecture
Tal, Avishay
The sensitivity of a Boolean function f:{0,1}^n -> {0,1} is the maximal number of neighbors a point in the Boolean hypercube has with different f-value. Roughly speaking, the block sensitivity allows one to flip a set of bits (called a block) rather than just one bit, in order to change the value of f. The sensitivity conjecture, posed by Nisan and Szegedy (CC, 1994), states that the block sensitivity, bs(f), is at most polynomial in the sensitivity, s(f), for any Boolean function f. A positive answer to the conjecture would have many consequences, as the block sensitivity is polynomially related to many other complexity measures such as the certificate complexity, the decision tree complexity and the degree. The conjecture is far from being understood, as there is an exponential gap between the known upper and lower bounds relating bs(f) and s(f).
We continue a line of work started by Kenyon and Kutin (Inf. Comput., 2004), studying the l-block sensitivity, bs_l(f), where l bounds the size of sensitive blocks. While for bs_2(f) the picture is well understood with almost matching upper and lower bounds, for bs_3(f) it is not. We show that any development in understanding bs_3(f) in terms of s(f) will have great implications on the original question. Namely, we show that either bs(f) is at most sub-exponential in s(f) (which improves the state of the art upper bounds) or that bs_3(f) >= s(f)^{3-epsilon} for some Boolean functions (which improves the state of the art separations).
We generalize the question of bs(f) versus s(f) to bounded functions f:{0,1}^n -> [0,1] and show an analogue of the result of Kenyon and Kutin: bs_l(f) = O(s(f))^l. Surprisingly, in this case, the bounds are close to being tight. In particular, we construct a bounded function f:{0,1}^n -> [0, 1] with bs(f) >= n/log(n) and s(f) = O(log(n)), a clear counterexample to the sensitivity conjecture for bounded functions.
Finally, we give a new super-quadratic separation between sensitivity and decision tree complexity by constructing Boolean functions with DT(f) >= s(f)^{2.115}. Prior to this work, only quadratic separations, DT(f) = s(f)^2, were known.
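For intuition, both measures can be computed exactly for tiny functions by brute force. The following sketch (our own helper names, not from the paper) computes s(f) and bs(f) for a Boolean function given as a predicate on n-bit bitmasks; the block-sensitivity search is exponential and only meant for very small n:

```python
from functools import lru_cache

def sensitivity(f, n):
    # s(f): max over inputs x of the number of single-bit flips that change f(x)
    return max(sum(f(x) != f(x ^ (1 << i)) for i in range(n))
               for x in range(1 << n))

def block_sensitivity(f, n):
    # bs(f): max over x of the largest family of pairwise disjoint blocks B
    # (nonempty bitmasks) with f(x XOR B) != f(x); exact search, tiny n only
    full = (1 << n) - 1
    best = 0
    for x in range(1 << n):
        flips = [b for b in range(1, full + 1) if f(x ^ b) != f(x)]

        @lru_cache(maxsize=None)
        def pack(avail):
            # max number of disjoint flipping blocks inside the bitmask `avail`
            return max([1 + pack(avail & ~b) for b in flips if b & avail == b],
                       default=0)

        best = max(best, pack(full))
    return best
```

Since every sensitive bit is a block of size one, bs(f) >= s(f) always holds; the conjecture asks about the reverse direction.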
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.38/LIPIcs.ICALP.2016.38.pdf
sensitivity conjecture
decision tree
block sensitivity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
39:1
39:14
10.4230/LIPIcs.ICALP.2016.39
article
Randomization Can Be as Helpful as a Glimpse of the Future in Online Computation
Mikkelsen, Jesper W.
We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. Intuitively, our direct product theorems say that if b bits of advice are needed to ensure a cost of at most t for some problem, then r*b bits of advice are needed to ensure a total cost of at most r*t when solving r independent instances of the problem. Using our direct product theorems, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, we show that
- A paging algorithm needs Omega(n) bits of advice to achieve a competitive ratio better than H_k = Omega(log k), where k is the cache size. Previously, it was only known that Omega(n) bits of advice were necessary to achieve a constant competitive ratio smaller than 5/4.
- Every O(n^{1-epsilon})-competitive vertex coloring algorithm must use Omega(n log n) bits of advice. Previously, it was only known that Omega(n log n) bits of advice were necessary to be optimal.
For certain online problems, including the MTS, k-server, metric matching, paging, list update, and dynamic binary search tree problem, we prove that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic O(log k)-competitive k-server algorithm with sublinear advice if and only if there exists a randomized O(log k)-competitive k-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information theoretical lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP'09].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.39/LIPIcs.ICALP.2016.39.pdf
online algorithms
advice complexity
information theory
randomization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
40:1
40:13
10.4230/LIPIcs.ICALP.2016.40
article
Online Semidefinite Programming
Elad, Noa
Kale, Satyen
Naor, Joseph (Seffi)
We consider semidefinite programming through the lens of online algorithms - what happens if not all input is given at once, but rather iteratively? In what way does it make sense for a semidefinite program to be revealed? We answer these questions by defining a model for online semidefinite programming. This model can be viewed as a generalization of online covering/packing linear programs, and it also captures interesting problems from quantum information theory. We design an online algorithm for semidefinite programming, utilizing the online primal-dual method, achieving a competitive ratio of O(log(n)), where n is the number of matrices in the primal semidefinite program. We also design an algorithm for semidefinite programming with box constraints, achieving a competitive ratio of O(log F*), where F* is a sparsity measure of the semidefinite program. We conclude with an online randomized rounding procedure.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.40/LIPIcs.ICALP.2016.40.pdf
online algorithms
semidefinite programming
primal-dual
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
41:1
41:14
10.4230/LIPIcs.ICALP.2016.41
article
Beating the Harmonic Lower Bound for Online Bin Packing
Heydrich, Sandy
van Stee, Rob
In the online bin packing problem, items of sizes in (0,1] arrive online to be packed into bins of size 1. The goal is to minimize the number of used bins. Harmonic++ achieves a competitive ratio of 1.58889 and belongs to the Super Harmonic framework [Seiden, J. ACM, 2002]; a lower bound of Ramanan et al. shows that within this framework, no competitive ratio below 1.58333 can be achieved [Ramanan et al., J. Algorithms, 1989]. In this paper, we present an online bin packing algorithm with asymptotic performance ratio of 1.5815, which constitutes the first improvement in fifteen years and reduces the gap to the lower bound by roughly 15%.
We make two crucial changes to the Super Harmonic framework. First, some of the decisions of the algorithm will depend on exact sizes of items, instead of only their types. In particular, for item pairs where the size of one item is in (1/3,1/2] and the other is larger than 1/2 (a large item), when deciding whether to pack such a pair together in one bin, our algorithm does not consider their types, but only checks whether their total size is at most 1.
Second, for items with sizes in (1/3,1/2] (medium items), we try to pack the larger items of every type in pairs, while combining the smallest items with large items whenever possible. To do this, we postpone the coloring of medium items (i.e., the decision which items to pack in pairs and which to pack alone) where possible, and later select the smallest ones to be reserved for combining with large items. Additionally, in case such large items arrive early, we pack medium items with them whenever possible. This is a highly unusual idea in the context of Harmonic-like algorithms, which initially seems to preclude analysis (the ratio of items combined with large items is no longer a fixed constant).
For the analysis, we carefully mark medium items depending on how they end up packed, enabling us to add crucial constraints to the linear program used by Seiden. We consider the dual, eliminate all but one variable and then solve it with the ellipsoid method using a separation oracle. Our implementation uses additional algorithmic ideas to determine previously hand set parameters automatically and gives certificates for easy verification of the results.
We give a lower bound of 1.5766 for algorithms like ours. This shows that fundamentally different ideas will be required to make further improvements.
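The exact-size pairing decision described above can be illustrated with a toy online packer: a medium item is combined with an open large-item bin iff their exact sizes sum to at most 1, ignoring item types. This sketch (our own names) shows only that single rule, not the paper's full algorithm:

```python
def pack_online(items):
    # Toy online packer: a medium item (size in (1/3, 1/2]) joins a bin holding
    # a single large item (size > 1/2) iff their EXACT sizes sum to at most 1 --
    # the size-based (rather than type-based) pairing decision described above.
    bins = []  # each bin is a list of item sizes summing to at most 1
    for s in items:
        placed = False
        if 1/3 < s <= 1/2:
            for b in bins:
                if len(b) == 1 and b[0] > 1/2 and b[0] + s <= 1:
                    b.append(s)
                    placed = True
                    break
        if not placed:
            bins.append([s])
    return bins
```

A type-based rule would reject or accept the pair based only on the size classes; deciding by exact sizes lets pairs like 0.625 and 0.375 share a bin.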
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.41/LIPIcs.ICALP.2016.41.pdf
Bin packing
online algorithms
harmonic algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
42:1
42:14
10.4230/LIPIcs.ICALP.2016.42
article
Online Weighted Degree-Bounded Steiner Networks via Novel Online Mixed Packing/Covering
Dehghani, Sina
Ehsani, Soheil
Hajiaghayi, Mohammad Taghi
Liaghat, Vahid
Räcke, Harald
Seddighin, Saeed
We design the first online algorithm with poly-logarithmic competitive ratio for the edge-weighted degree-bounded Steiner forest (EW-DB-SF) problem and its generalized variant. We obtain our result by demonstrating a new generic approach for solving mixed packing/covering integer programs in the online paradigm. In EW-DB-SF, we are given an edge-weighted graph with a degree bound for every vertex. Given a root vertex in advance, we receive a sequence of terminal vertices in an online manner. Upon the arrival of a terminal, we need to augment our solution subgraph to connect the new terminal to the root. The goal is to minimize the total weight of the solution while respecting the degree bounds on the vertices. In the offline setting, edge-weighted degree-bounded Steiner tree (EW-DB-ST) and its many variations have been extensively studied since the early eighties. Unfortunately, recent advancements in online network design problems are inherently difficult to adapt for degree-bounded problems. In particular, it is not known whether the fractional solution obtained by standard primal-dual techniques for mixed packing/covering LPs can be rounded online. In contrast, in this paper we obtain our result by using structural properties of the optimal solution, and reducing the EW-DB-SF problem to an exponential-size mixed packing/covering integer program in which every variable appears only once in covering constraints. We then design a generic integral algorithm for solving this restricted family of IPs.
As mentioned above, we demonstrate a new technique for solving mixed packing/covering integer programs. Define the covering frequency k of a program as the maximum number of covering constraints in which a variable can participate. Let m denote the number of packing constraints. We design an online deterministic integral algorithm with competitive ratio of O(k*log(m)) for the mixed packing/covering integer programs. We prove the tightness of our result by providing a matching lower bound for any randomized algorithm. We note that our solution solely depends on m and k. Indeed, there can be exponentially many variables. Furthermore, our algorithm directly provides an integral solution, even if the integrality gap of the program is unbounded. We believe this technique can be used as an interesting alternative for the standard primal-dual techniques in solving online problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.42/LIPIcs.ICALP.2016.42.pdf
Online
Steiner Tree
Approximation
Competitive ratio
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
43:1
43:13
10.4230/LIPIcs.ICALP.2016.43
article
Carpooling in Social Networks
Fiat, Amos
Karlin, Anna R.
Koutsoupias, Elias
Mathieu, Claire
Zach, Rotem
We consider the online carpool fairness problem of [Fagin and Williams, 1983] in which an online algorithm is presented with a sequence of pairs drawn from a group of n potential drivers. The online algorithm must select one driver from each pair, with the objective of partitioning the driving burden as fairly as possible for all drivers. The unfairness of an online algorithm is a measure of the worst-case deviation between the number of times a person has driven and the number of times they would have driven if life were completely fair.
We introduce a version of the problem in which drivers only carpool with their neighbors in a given social network graph; this is a generalization of the original problem, which corresponds to the social network of the complete graph. We show that for graphs of degree d, the unfairness of deterministic algorithms against adversarial sequences is exactly d/2. For random sequences of edges from planar graph social networks we give a [deterministic] algorithm with logarithmic unfairness (holds more generally for any bounded-genus graph). This does not follow from previous random sequence results in the original model, as we show that restricting the random sequences to sparse social network graphs may increase the unfairness.
A very natural class of randomized online algorithms is that of so-called static algorithms, which preserve the same state distribution over time. Surprisingly, we show that any such algorithm has unfairness ~Theta(sqrt(d)) against oblivious adversaries. This shows that the local random greedy algorithm of [Ajtai et al, 1996] is close to optimal amongst the class of static algorithms. A natural (non-static) algorithm is global random greedy (which acts greedily and breaks ties at random). We improve the lower bound on the competitive ratio from Omega(log^{1/3}(d)) to Omega(log(d)). We also show that the competitive ratio of global random greedy against adaptive adversaries is Omega(d).
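The unfairness measure and the greedy rule can be made concrete with a tiny deterministic simulation (our own helper; ties broken toward the first endpoint of the pair). Each carpool trip assigns a fair share of 1/2 to both participants, and the endpoint who is further behind its fair share drives; this illustrates the measure only, not the paper's worst-case bounds:

```python
def greedy_unfairness(pairs, n):
    # Each trip adds a fair share of 1/2 to both endpoints; the endpoint with
    # the larger deficit (fair share minus trips driven) drives this time.
    # Returns max_i |driven_i - fair_i|, the unfairness of this sequence.
    driven = [0.0] * n
    fair = [0.0] * n
    for u, v in pairs:
        fair[u] += 0.5
        fair[v] += 0.5
        d = u if fair[u] - driven[u] >= fair[v] - driven[v] else v
        driven[d] += 1
    return max(abs(driven[i] - fair[i]) for i in range(n))
```

For example, two trips of the same pair alternate drivers and end perfectly fair, while interleaving a trip with a third driver leaves a deviation of 1/2.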
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.43/LIPIcs.ICALP.2016.43.pdf
Online algorithms
Fairness
Randomized algorithms
Competitive ratio
Carpool problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
44:1
44:14
10.4230/LIPIcs.ICALP.2016.44
article
An Improved Analysis of the ER-SpUD Dictionary Learning Algorithm
Blasiok, Jaroslaw
Nelson, Jelani
In dictionary learning we observe Y = AX + E for some Y in R^{n*p}, A in R^{m*n}, and X in R^{m*p}, where p >= max{n, m}, and typically m >= n. The matrix Y is observed, and A, X, E are unknown. Here E is a "noise" matrix of small norm, and X is column-wise sparse. The matrix A is referred to as a dictionary, and its columns as atoms. Then, given some small number p of samples, i.e. columns of Y, the goal is to learn the dictionary A up to small error, as well as the coefficient matrix X. In applications one could for example think of each column of Y as a distinct image in a database. The motivation is that in many applications data is expected to be sparse when represented by atoms in the "right" dictionary A (e.g. images in the Haar wavelet basis), and the goal is to learn A from the data to then use it for other applications.
Recently, the work of [Spielman/Wang/Wright, COLT'12] proposed the dictionary learning algorithm ER-SpUD with provable guarantees when E = 0 and m = n. That work showed that if X has independent entries with an expected Theta*n non-zeroes per column for 1/n <~ Theta <~ 1/sqrt(n), and with non-zero entries being subgaussian, then for p >~ n^2 log^2 n with high probability ER-SpUD outputs matrices A', X' which equal A, X up to permuting and scaling columns (resp. rows) of A (resp. X). They conjectured that p >~ n log n suffices, which they showed was information theoretically necessary for any algorithm to succeed when Theta =~ 1/n. Significant progress toward showing that p >~ n log^4 n might suffice was later obtained in [Luh/Vu, FOCS'15].
In this work, we show that for a slight variant of ER-SpUD, p >~ n log(n/delta) samples suffice for successful recovery with probability 1 - delta. We also show that without our slight variation made to ER-SpUD, p >~ n^{1.99} samples are required even to learn A, X with a small success probability of 1/poly(n). This resolves the main conjecture of [Spielman/Wang/Wright, COLT'12], and contradicts a result of [Luh/Vu, FOCS'15], which claimed that p >~ n log^4 n guarantees high probability of success for the original ER-SpUD algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.44/LIPIcs.ICALP.2016.44.pdf
dictionary learning
stochastic processes
generic chaining
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
45:1
45:13
10.4230/LIPIcs.ICALP.2016.45
article
Approximation via Correlation Decay When Strong Spatial Mixing Fails
Bezáková, Ivona
Galanis, Andreas
Goldberg, Leslie Ann
Guo, Heng
Stefankovic, Daniel
Approximate counting via correlation decay is the core algorithmic technique used in the sharp delineation of the computational phase transition that arises in the approximation of the partition function of anti-ferromagnetic two-spin models.
Previous analyses of correlation-decay algorithms implicitly depended on the occurrence of strong spatial mixing. This, roughly, means that one uses worst-case analysis of the recursive procedure that creates the sub-instances. In this paper, we develop a new analysis method that is more refined than the worst-case analysis. We take the shape of instances in the computation tree into consideration and we amortise against certain "bad" instances that are created as the recursion proceeds. This enables us to show correlation decay and to obtain an FPTAS even when strong spatial mixing fails.
We apply our technique to the problem of approximately counting independent sets in hypergraphs with degree upper-bound Delta and with a lower bound k on the arity of hyperedges. Liu and Lin gave an FPTAS for k >= 2 and Delta <= 5 (lack of strong spatial mixing was the obstacle preventing this algorithm from being generalised to Delta = 6). Our technique gives a tight result for Delta = 6, showing that there is an FPTAS for k >= 3 and Delta <= 6. The best previously-known approximation scheme for Delta = 6 is the Markov-chain simulation based FPRAS of Bordewich, Dyer and Karpinski, which only works for k >= 8.
Our technique also applies for larger values of k, giving an FPTAS for k >= 1.66 Delta. This bound is not as strong as existing randomised results, for technical reasons that are discussed in the paper. Nevertheless, it gives the first deterministic approximation schemes in this regime. We further demonstrate that in the hypergraph independent set model, approximating the partition function is NP-hard even within the uniqueness regime.
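The counting problem itself is easy to state: a vertex set is independent in a hypergraph iff it contains no hyperedge in full. A brute-force counter (our own sketch, exponential time, tiny instances only) makes the object being approximated concrete:

```python
def count_hypergraph_independent_sets(n, hyperedges):
    # A vertex set S is independent iff no hyperedge is entirely contained in S.
    # Vertices are 0..n-1; sets are enumerated as bitmasks.
    masks = [sum(1 << v for v in e) for e in hyperedges]
    return sum(1 for s in range(1 << n) if all(s & m != m for m in masks))
```

With arity k = 2 this specializes to counting independent sets in an ordinary graph.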
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.45/LIPIcs.ICALP.2016.45.pdf
approximate counting
independent sets in hypergraphs
correlation decay
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
46:1
46:13
10.4230/LIPIcs.ICALP.2016.46
article
A Complexity Trichotomy for Approximately Counting List H-Colourings
Galanis, Andreas
Goldberg, Leslie Ann
Jerrum, Mark
We examine the computational complexity of approximately counting the list H-colourings of a graph. We discover a natural graph-theoretic trichotomy based on the structure of the graph H. If H is an irreflexive bipartite graph or a reflexive complete graph then counting list H-colourings is trivially in polynomial time. Otherwise, if H is an irreflexive bipartite permutation graph or a reflexive proper interval graph then approximately counting list H-colourings is equivalent to #BIS, the problem of approximately counting independent sets in a bipartite graph. This is a well-studied problem that is believed to be of intermediate complexity: it is believed not to have an FPRAS, but also not to be as difficult as approximating the most difficult counting problems in #P. For every other graph H, approximately counting list H-colourings is complete for #P with respect to approximation-preserving reductions (so there is no FPRAS unless NP = RP). Two pleasing features of the trichotomy are (i) it has a natural formulation in terms of hereditary graph classes, and (ii) the proof is largely self-contained and does not require any universal algebra (unlike similar dichotomies in the weighted case). We are able to extend the hardness results to the bounded-degree setting, showing that all hardness results apply to input graphs with maximum degree at most 6.
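A list H-colouring is a homomorphism from G to H that maps each vertex of G into its prescribed list. A brute-force counter (our own illustrative helper, exponential in |V(G)|) makes the counted object concrete:

```python
from itertools import product

def count_list_h_colourings(g_vertices, g_edges, h_adj, lists):
    # h_adj: set of ordered pairs of H-vertices, symmetric, loops allowed;
    # lists: dict mapping each G-vertex to its allowed H-vertices.
    # Counts maps f with f(v) in lists[v] that send every G-edge to an H-edge.
    total = 0
    for assign in product(*(lists[v] for v in g_vertices)):
        f = dict(zip(g_vertices, assign))
        if all((f[u], f[v]) in h_adj for u, v in g_edges):
            total += 1
    return total
```

For the irreflexive bipartite graph H = K2, counting reduces to checking 2-colourability, matching the trivial case of the trichotomy.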
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.46/LIPIcs.ICALP.2016.46.pdf
approximate counting
graph homomorphisms
list colourings
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
47:1
47:14
10.4230/LIPIcs.ICALP.2016.47
article
Parity Separation: A Scientifically Proven Method for Permanent Weight Loss
Curticapean, Radu
Given an edge-weighted graph G, let PerfMatch(G) denote the weighted sum over all perfect matchings M in G, weighting each matching M by the product of weights of edges in M. If G is unweighted, this plainly counts the perfect matchings of G.
In this paper, we introduce parity separation, a new method for reducing PerfMatch to unweighted instances: For graphs G with edge-weights 1 and -1, we construct two unweighted graphs G1 and G2 such that PerfMatch(G) = PerfMatch(G1) - PerfMatch(G2). This yields a novel weight removal technique for counting perfect matchings, in addition to those known from classical #P-hardness proofs. Our technique is based upon the Holant framework and matchgates. We derive the following applications:
Firstly, an alternative #P-completeness proof for counting unweighted perfect matchings.
Secondly, C=P-completeness for deciding whether two given unweighted graphs have the same number of perfect matchings. To the best of our knowledge, this is the first C=P-completeness result for the “equality-testing version” of any natural counting problem that is not already #P-hard under parsimonious reductions.
Thirdly, an alternative tight lower bound for counting unweighted perfect matchings under the counting exponential-time hypothesis #ETH.
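PerfMatch as defined above can be computed by brute force on tiny graphs, which makes the ±1-weight setting easy to experiment with (our own sketch, exponential time; the paper's G1, G2 construction is not reproduced here):

```python
def perf_match(n, weights):
    # Weighted sum over all perfect matchings of an n-vertex graph; `weights`
    # maps frozenset({u, v}) to the edge weight (absent key = no edge).
    def rec(remaining):
        if not remaining:
            return 1
        u = min(remaining)  # match the smallest unmatched vertex
        total = 0
        for v in remaining - {u}:
            w = weights.get(frozenset((u, v)))
            if w is not None:
                total += w * rec(remaining - {u, v})
        return total
    return rec(frozenset(range(n)))
```

On a 4-cycle with all weights 1 this counts the two perfect matchings; flipping one edge weight to -1 makes the signed contributions cancel.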
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.47/LIPIcs.ICALP.2016.47.pdf
perfect matchings
counting complexity
structural complexity
exponential-time hypothesis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
48:1
48:14
10.4230/LIPIcs.ICALP.2016.48
article
On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter
Dahlgaard, Søren
Conditional lower bounds for dynamic graph problems have received a great deal of attention in recent years. While many results are now known for the fully-dynamic case and such bounds often imply worst-case bounds for the partially dynamic setting, it seems much more difficult to prove amortized bounds for incremental and decremental algorithms. In this paper we consider partially dynamic versions of three classic problems in graph theory. Based on popular conjectures we show that:
- No algorithm with amortized update time O(n^{1-epsilon}) exists for incremental or decremental maximum cardinality bipartite matching. This significantly improves on the O(m^{1/2-epsilon}) bound for sparse graphs of Henzinger et al. [STOC'15] and O(n^{1/3-epsilon}) bound of Kopelowitz, Pettie and Porat. Our linear bound also appears more natural. In addition, the result we present separates the node-addition model from the edge insertion model, as an algorithm with total update time O(m*sqrt(n)) exists for the former by Bosek et al. [FOCS'14].
- No algorithm with amortized update time O(m^{1-epsilon}) exists for incremental or decremental maximum flow in directed and weighted sparse graphs. No such lower bound was known for partially dynamic maximum flow previously. Furthermore no algorithm with amortized update time O(n^{1-epsilon}) exists for directed and unweighted graphs or undirected and weighted graphs.
- No algorithm with amortized update time O(n^{1/2-epsilon}) exists for incremental or decremental (4/3 - epsilon')-approximating the diameter of an unweighted graph. We also show a slightly stronger bound if node additions are allowed. The result is then extended to the static case, where we show that no O((n*sqrt(m))^{1-epsilon}) algorithm exists. We also extend the result to the case when an additive error is allowed in the approximation. While our bounds are weaker than the already known bounds of Roditty and Vassilevska Williams [STOC'13], they are based on a weaker conjecture of Abboud et al. [STOC'15] and give the first known reduction from the 3SUM and APSP problems to diameter. Showing an equivalence between APSP and diameter is a major open problem in this area (Abboud et al. [SODA'15]), and thus showing even a weak connection in this direction is of interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.48/LIPIcs.ICALP.2016.48.pdf
Conditional lower bounds
Maximum cardinality matching
Diameter in graphs
Hardness in P
Partially dynamic problems
Maximum flow
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
49:1
49:15
10.4230/LIPIcs.ICALP.2016.49
article
Incremental 2-Edge-Connectivity in Directed Graphs
Georgiadis, Loukas
Italiano, Giuseppe F.
Parotsidis, Nikos
We present an algorithm that can update the 2-edge-connected blocks of a directed graph with n vertices through a sequence of m edge insertions in a total of O(m*n) time. After each insertion, we can answer the following queries in asymptotically optimal time:
- Test in constant time if two query vertices v and w are 2-edge-connected. Moreover, if v and w are not 2-edge-connected, we can produce in constant time a “witness” of this property, by exhibiting an edge that is contained in all paths from v to w or in all paths from w to v.
- Report in O(n) time all the 2-edge-connected blocks of G.
This is the first dynamic algorithm for 2-connectivity problems on directed graphs, and it matches the best known bounds for simpler problems, such as incremental transitive closure.
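The witness characterization above yields a direct static test: v and w are 2-edge-connected iff they are mutually reachable and no single edge lies on all v->w paths or all w->v paths. A brute-force sketch (our own names; useful only as a correctness reference on small digraphs, not the paper's incremental algorithm):

```python
def reachable(adj, s, t, banned=None):
    # DFS reachability in a digraph, optionally skipping one banned edge (u, v)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if (u, v) != banned and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def two_edge_connected(adj, edges, v, w):
    # v, w are 2-edge-connected iff they are mutually reachable and no single
    # edge removal destroys all v->w paths or all w->v paths; a failing edge
    # is exactly the "witness" described in the abstract.
    if not (reachable(adj, v, w) and reachable(adj, w, v)):
        return False
    return all(reachable(adj, v, w, banned=e) and reachable(adj, w, v, banned=e)
               for e in edges)
```

On a directed 3-cycle every edge is such a witness, while a digraph with two edge-disjoint paths in each direction between v and w passes the test.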
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.49/LIPIcs.ICALP.2016.49.pdf
2-edge connectivity on directed graphs
dynamic graph algorithms
incremental algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
50:1
50:13
10.4230/LIPIcs.ICALP.2016.50
article
Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction
Wang, Di
Rao, Satish
Mahoney, Michael W.
In a series of recent breakthroughs, Allen-Zhu and Orecchia [Allen-Zhu/Orecchia, STOC 2015; Allen-Zhu/Orecchia, SODA 2015] leveraged insights from the linear coupling method [Allen-Zhu/Orecchia, arXiv 2014], which is a first-order optimization scheme, to provide improved algorithms for packing and covering linear programs. The result in [Allen-Zhu/Orecchia, STOC 2015] is particularly interesting, as the algorithm for packing LP achieves both width-independence and Nesterov-like acceleration, which was not known to be possible before. Somewhat surprisingly, however, while the dependence of the convergence rate on the error parameter epsilon for packing problems was improved to O(1/epsilon), which corresponds to what accelerated gradient methods are designed to achieve, the dependence for covering problems was only improved to O(1/epsilon^{1.5}), and even that required a different, more complicated algorithm rather than Nesterov-like acceleration. Given the primal-dual connection between packing and covering problems, and since previous algorithms for these very related problems have led to the same epsilon dependence, this discrepancy is surprising, and it leaves open the question of the exact role that the linear coupling is playing in coordinating the complementary gradient and mirror descent steps of the algorithm. In this paper, we clarify these issues, illustrating that the linear coupling method can lead to improved O(1/epsilon) dependence for both packing and covering problems in a unified manner, i.e., with the same algorithm and almost identical analysis. Our main technical result is a novel dimension lifting method that reduces the coordinate-wise diameters of the feasible region for covering LPs, which is the key structural property to enable the same Nesterov-like acceleration as in the case of packing LPs. The technique is of independent interest and may be useful in applying the accelerated linear coupling method to other combinatorial problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.50/LIPIcs.ICALP.2016.50.pdf
Convex optimization
Accelerated gradient descent
Linear program
Approximation algorithm
Packing and covering
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
51:1
51:14
10.4230/LIPIcs.ICALP.2016.51
article
Random-Edge Is Slower Than Random-Facet on Abstract Cubes
Hansen, Thomas Dueholm
Zwick, Uri
Random-Edge and Random-Facet are two very natural randomized pivoting rules for the simplex algorithm. The behavior of Random-Facet is fairly well understood. It performs an expected sub-exponential number of pivoting steps on any linear program, or more generally, on any Acyclic Unique Sink Orientation (AUSO) of an arbitrary polytope, making it the fastest known pivoting rule for the simplex algorithm. The behavior of Random-Edge is much less understood. We show that in the AUSO setting, Random-Edge is slower than Random-Facet. To do that, we construct AUSOs of the n-dimensional hypercube on which Random-Edge performs an expected number of 2^{Omega(sqrt(n*log(n)))} steps. This improves on a 2^{Omega(n^{1/3})} lower bound of Matoušek and Szabó. As Random-Facet performs an expected number of 2^{O(sqrt(n))} steps on any n-dimensional AUSO, this establishes our result. Improving our 2^{Omega(sqrt(n*log(n)))} lower bound seems to require radically new techniques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.51/LIPIcs.ICALP.2016.51.pdf
Linear programming
the Simplex Algorithm
Pivoting rules
Acyclic Unique Sink Orientations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
52:1
52:14
10.4230/LIPIcs.ICALP.2016.52
article
Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time
Mahoney, Michael W.
Rao, Satish
Wang, Di
Zhang, Peng
We study the problem of approximately solving positive linear programs (LPs). This class of LPs models a wide range of fundamental problems in combinatorial optimization and operations research, such as many resource allocation problems, solving non-negative linear systems, computing tomography, single/multi commodity flows on graphs, etc. For the special cases of pure packing or pure covering LPs, a recent result by Allen-Zhu and Orecchia [Allen-Zhu/Orecchia, SODA'15] gives an O˜(1/(epsilon^3))-time parallel algorithm, which breaks the longstanding O˜(1/(epsilon^4)) running time bound of the seminal work of Luby and Nisan [Luby/Nisan, STOC'93].
We present a new parallel algorithm with running time O˜(1/(epsilon^3)) for the more general mixed packing and covering LPs, which improves upon the O˜(1/(epsilon^4))-time algorithm of Young [Young, FOCS'01; Young, arXiv 2014]. Our work leverages ideas from both the optimization-oriented approach [Allen-Zhu/Orecchia, SODA'15; Wang/Mahoney/Mohan/Rao, arXiv 2015] and the more combinatorial approach with phases [Young, FOCS'01; Young, arXiv 2014]. In addition, our algorithm, when directly applied to pure packing or pure covering LPs, gives an improved running time of O˜(1/(epsilon^2)).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.52/LIPIcs.ICALP.2016.52.pdf
Mixed packing and covering
Linear program
Approximation algorithm
Parallel algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
53:1
53:6
10.4230/LIPIcs.ICALP.2016.53
article
Optimization Algorithms for Faster Computational Geometry
Allen-Zhu, Zeyuan
Liao, Zhenyu
Yuan, Yang
We study two fundamental problems in computational geometry: finding the maximum inscribed ball (MaxIB) inside a bounded polyhedron defined by m hyperplanes, and the minimum enclosing ball (MinEB) of a set of n points, both in d-dimensional space. We improve the running time of iterative algorithms on
MaxIB from ~O(m*d*alpha^3/epsilon^3) to ~O(m*d + m*sqrt(d)*alpha/epsilon), a speed-up up to ~O(sqrt(d)*alpha^2/epsilon^2), and
MinEB from ~O(n*d/sqrt(epsilon)) to ~O(n*d + n*sqrt(d)/sqrt(epsilon)), a speed-up up to ~O(sqrt(d)).
Our improvements are based on a novel saddle-point optimization framework. We propose a new algorithm L1L2SPSolver for solving a class of regularized saddle-point problems, and apply a randomized Hadamard space rotation, a technique borrowed from compressive sensing. Interestingly, the motivation for using the Hadamard rotation comes solely from our optimization view and not from the original geometry problem: indeed, it is not immediately clear why MaxIB or MinEB, as a geometric problem, should be easier to solve if we rotate the space by a unitary matrix. We hope that our optimization perspective sheds light on solving other geometric problems as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.53/LIPIcs.ICALP.2016.53.pdf
maximum inscribed balls
minimum enclosing balls
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
54:1
54:15
10.4230/LIPIcs.ICALP.2016.54
article
A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems
Marasevic, Jelena
Stein, Clifford
Zussman, Gil
We study weighted alpha-fair packing problems, that is, the problems of maximizing the objective functions (i) sum_j w_j*x_j^{1-alpha}/(1-alpha) when alpha > 0, alpha != 1 and (ii) sum_j w_j*ln(x_j) when alpha = 1, over linear constraints A*x <= b, x >= 0, where the w_j are positive weights and A and b are non-negative. We consider the distributed computation model that was used for packing linear programs and network utility maximization problems. Under this model, we provide a distributed algorithm for general alpha that converges to an epsilon-approximate solution in time (number of distributed iterations) that has an inverse polynomial dependence on the approximation parameter epsilon and poly-logarithmic dependence on the problem size. This is the first distributed algorithm for weighted alpha-fair packing with poly-logarithmic convergence in the input size. The algorithm uses simple local update rules and is stateless (namely, it allows asynchronous updates, is self-stabilizing, and allows incremental and local adjustments). We also obtain a number of structural results that characterize alpha-fair allocations as the value of alpha is varied. These results deepen our understanding of fairness guarantees in alpha-fair packing allocations, and also provide insight into the behavior of alpha-fair allocations in the asymptotic cases alpha -> 0, alpha -> 1, and alpha -> infinity.
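The objective and constraints above can be made concrete with a short sketch. This is an illustrative evaluator only (the function names are ours, not the paper's); the paper's contribution, the distributed stateless algorithm, is not reproduced here.

```python
import math

def alpha_fair_objective(x, w, alpha):
    """Weighted alpha-fair utility of allocation x, as in the abstract:
    sum_j w_j * x_j^(1-alpha) / (1-alpha) for alpha > 0, alpha != 1,
    and sum_j w_j * ln(x_j) in the case alpha = 1."""
    if alpha == 1:
        return sum(wj * math.log(xj) for wj, xj in zip(w, x))
    return sum(wj * xj ** (1 - alpha) / (1 - alpha) for wj, xj in zip(w, x))

def is_feasible(x, A, b):
    """Check the packing constraints A x <= b, x >= 0 (A, b non-negative)."""
    if any(xj < 0 for xj in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))
```

For alpha = 1 the objective is proportional fairness, and as alpha grows the maximizer approaches max-min fairness, matching the asymptotic cases discussed above.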
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.54/LIPIcs.ICALP.2016.54.pdf
Fairness
distributed and stateless algorithms
resource allocation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
55:1
55:13
10.4230/LIPIcs.ICALP.2016.55
article
All-Pairs Approximate Shortest Paths and Distance Oracle Preprocessing
Sommer, Christian
Given an undirected, unweighted graph G on n nodes, there is an O(n^2*poly log(n))-time algorithm that computes a data structure called distance oracle of size O(n^{5/3}*poly log(n)) answering approximate distance queries in constant time. For nodes at distance d the distance estimate is between d and 2d + 1.
This new distance oracle improves upon the oracles of Patrascu and Roditty (FOCS 2010), Abraham and Gavoille (DISC 2011), and Agarwal and Brighten Godfrey (PODC 2013) in terms of preprocessing time, and upon the oracle of Baswana and Sen (SODA 2004) in terms of stretch. The running time analysis is tight (up to logarithmic factors) due to a recent lower bound of Abboud and Bodwin (STOC 2016).
Techniques include dominating sets, sampling, balls, and spanners, and the main contribution lies in the way these techniques are combined. Perhaps the most interesting aspect from a technical point of view is the application of a spanner without incurring its constant additive stretch penalty.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.55/LIPIcs.ICALP.2016.55.pdf
graph algorithms
data structures
approximate shortest paths
distance oracles
distance labels
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
56:1
56:13
10.4230/LIPIcs.ICALP.2016.56
article
Total Space in Resolution Is at Least Width Squared
Bonacina, Ilario
Given an unsatisfiable k-CNF formula phi we consider two complexity measures in Resolution: width and total space. The width is the minimal W such that there exists a Resolution refutation of phi with clauses of at most W literals. The total space is the minimal size T of a memory used to write down a Resolution refutation of phi where the size of the memory is measured as the total number of literals it can contain. We prove that T = Omega((W - k)^2).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.56/LIPIcs.ICALP.2016.56.pdf
Resolution
width
total space
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
57:1
57:14
10.4230/LIPIcs.ICALP.2016.57
article
Supercritical Space-Width Trade-Offs for Resolution
Berkholz, Christoph
Nordström, Jakob
We show that there are CNF formulas which can be refuted in resolution in both small space and small width, but for which any small-width resolution proof must have space exceeding by far the linear worst-case upper bound. This significantly strengthens the space-width trade-offs in [Ben-Sasson 2009], and provides one more example of trade-offs in the "supercritical" regime above worst case recently identified by [Razborov 2016]. We obtain our results by using Razborov’s new hardness condensation technique and combining it with the space lower bounds in [Ben-Sasson and Nordström 2008].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.57/LIPIcs.ICALP.2016.57.pdf
Proof complexity
resolution
space
width
trade-offs
supercritical
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
58:1
58:14
10.4230/LIPIcs.ICALP.2016.58
article
Deterministic Time-Space Trade-Offs for k-SUM
Lincoln, Andrea
Vassilevska Williams, Virginia
Wang, Joshua R.
Williams, R. Ryan
Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time and space efficient deterministic self-reduction for the k-SUM problem which holds for both models, and has many interesting consequences. To illustrate:
- 3-SUM is in deterministic time O(n^2*lg(lg(n))/lg(n)) and space O(sqrt(n*lg(n)/lg(lg(n)))). In general, any polylogarithmic-factor improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well.
- 3-SUM is in deterministic time O(n^2) and space O(sqrt(n)), derandomizing an algorithm of Wang.
- A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every O(n^{.51})-space algorithm for 3-SUM requires at least n^{2-o(1)} time on the word-RAM.
- For k >= 4, k-SUM is in deterministic O(n^{k-2+2/k}) time and O(sqrt(n)) space.
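As a concrete baseline for the problem statement above, here is the classic quadratic-time 3-SUM routine (sort, then two pointers). It uses O(n) extra space for the sorted copy, so it only illustrates the problem itself, not the paper's low-space self-reduction.

```python
def three_sum(nums):
    """Return indices (i, j, k), i < j < k, with
    nums[i] + nums[j] + nums[k] == 0, or None if no such triple exists.
    Classic O(n^2)-time baseline via sorting plus two pointers."""
    order = sorted(range(len(nums)), key=nums.__getitem__)
    a = [nums[i] for i in order]  # values in sorted order
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return tuple(sorted((order[i], order[lo], order[hi])))
            if s < 0:
                lo += 1  # sum too small: move the lower pointer up
            else:
                hi -= 1  # sum too large: move the upper pointer down
    return None
```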
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.58/LIPIcs.ICALP.2016.58.pdf
3SUM
kSUM
time-space tradeoff
algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
59:1
59:14
10.4230/LIPIcs.ICALP.2016.59
article
Semi-Streaming Algorithms for Annotated Graph Streams
Thaler, Justin
Considerable effort has been devoted to the development of streaming algorithms for analyzing massive graphs. Unfortunately, many results have been negative, establishing that a wide variety of problems require Omega(n^2) space to solve. One of the few bright spots has been the development of semi-streaming algorithms for a handful of graph problems - these algorithms use space O(n*polylog(n)).
In the annotated data streaming model of Chakrabarti et al. [Chakrabarti/Cormode/Goyal/Thaler, ACM Trans. on Alg. 2014], a computationally limited client wants to compute some property of a massive input, but lacks the resources to store even a small fraction of the input, and hence cannot perform the desired computation locally. The client therefore accesses a powerful but untrusted service provider, who not only performs the requested computation, but also proves that the answer is correct.
We consider the notion of semi-streaming algorithms for annotated graph streams (semistreaming annotation schemes for short). These are protocols in which both the client's space usage and the length of the proof are O(n*polylog(n)). We give evidence that semi-streaming annotation schemes represent a more robust solution concept than does the standard semi-streaming model. On the positive side, we give semi-streaming annotation schemes for two dynamic graph problems that are intractable in the standard model: (exactly) counting triangles, and (exactly) computing maximum matchings. The former scheme answers a question of Cormode [Cormode, Problem 47]. On the negative side, we identify for the first time two natural graph problems (connectivity and bipartiteness in a certain edge update model) that can be solved in the standard semi-streaming model, but cannot be solved by annotation schemes of "sub-semi-streaming" cost. That is, these problems are as hard in the annotations model as they are in the standard model.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.59/LIPIcs.ICALP.2016.59.pdf
graph streams
stream verification
annotated data streams
probabilistic proof systems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
60:1
60:14
10.4230/LIPIcs.ICALP.2016.60
article
Randomized Query Complexity of Sabotaged and Composed Functions
Ben-David, Shalev
Kothari, Robin
We study the composition question for bounded-error randomized query complexity: Is R(f circ g) = Omega(R(f)R(g))? We show that inserting a simple function h, whose query complexity is only Theta(log R(g)), in between f and g allows us to prove R(f circ h circ g) = Omega(R(f)R(h)R(g)).
We prove this using a new lower bound measure for randomized query complexity we call randomized sabotage complexity, RS(f). Randomized sabotage complexity has several desirable properties, such as a perfect composition theorem, RS(f circ g) >= RS(f) RS(g), and a composition theorem with randomized query complexity, R(f circ g) = Omega(R(f) RS(g)). It is also a quadratically tight lower bound for total functions and can be quadratically superior to the partition bound, the best known general lower bound for randomized query complexity.
Using this technique we also show implications for lifting theorems in communication complexity. We show that a general lifting theorem from zero-error randomized query to communication complexity implies a similar result for bounded-error algorithms for all total functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.60/LIPIcs.ICALP.2016.60.pdf
Randomized query complexity
decision tree complexity
composition theorem
partition bound
lifting theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
61:1
61:14
10.4230/LIPIcs.ICALP.2016.61
article
Coding for Interactive Communication Correcting Insertions and Deletions
Braverman, Mark
Gelles, Ran
Mao, Jieming
Ostrovsky, Rafail
We consider the question of interactive communication, in which two remote parties perform a computation while their communication channel is (adversarially) noisy. We extend here the discussion into a more general and stronger class of noise, namely, we allow the channel to perform insertions and deletions of symbols. These types of errors may bring the parties "out of sync", so that there is no consensus regarding the current round of the protocol.
In this more general noise model, we obtain the first interactive coding scheme that has a constant rate and tolerates noise rates of up to 1/18 - epsilon. To this end we develop a novel primitive we name edit distance tree code. The edit distance tree code is designed to replace the Hamming distance constraints in Schulman's tree codes (STOC 93), with a stronger edit distance requirement. However, the straightforward generalization of tree codes to edit distance does not seem to yield a primitive that suffices for communication in the presence of synchronization problems. Giving the "right" definition of edit distance tree codes is a main conceptual contribution of this work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.61/LIPIcs.ICALP.2016.61.pdf
Interactive communication
coding
edit distance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
62:1
62:13
10.4230/LIPIcs.ICALP.2016.62
article
Amplifiers for the Moran Process
Galanis, Andreas
Göbel, Andreas
Goldberg, Leslie Ann
Lapinskas, John
Richerby, David
The Moran process, as studied by Lieberman, Hauert and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness r > 1. All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every r > 1, the extinction probability tends to 0 as the number of vertices increases. Strong amplification is a rather surprising property - it means that in such graphs, the fixation probability of a uniformly-placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of r > 1 (independently of n). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified". Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly-amplifying families - superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. We give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars". When the algorithm is run on an n-vertex graph in this family, starting with a uniformly-chosen mutant, the extinction probability is roughly n^{-1/2} (up to logarithmic factors). 
We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of n). Finally, we prove that our analysis of megastars is fairly tight - there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al. A full version [Galanis/Göbel/Goldberg/Lapinskas/Richerby, Preprint] containing detailed proofs is available at http://arxiv.org/abs/1512.05632. Theorem-numbering here matches the full version.
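The process described above is easy to simulate directly. The following sketch (our own illustration, not the paper's machinery) runs one trajectory to fixation or extinction; averaging many runs gives an empirical estimate of the extinction probability.

```python
import random

def moran_process(out_nbrs, r, rng=random):
    """Simulate one run of the Moran process on a strongly connected
    digraph.  out_nbrs[v] lists v's out-neighbours; mutants have
    fitness r > 1, non-mutants fitness 1.  Returns True on fixation
    (all mutants), False on extinction (no mutants)."""
    n = len(out_nbrs)
    mutant = [False] * n
    mutant[rng.randrange(n)] = True  # uniformly random initial mutant
    count = 1
    while 0 < count < n:
        # Pick a reproducing individual with probability proportional
        # to its fitness.
        weights = [r if mutant[v] else 1.0 for v in range(n)]
        v = rng.choices(range(n), weights=weights)[0]
        # Its state is copied to a uniformly random out-neighbour.
        u = rng.choice(out_nbrs[v])
        if mutant[u] != mutant[v]:
            count += 1 if mutant[v] else -1
            mutant[u] = mutant[v]
    return count == n
```

On the complete graph (the well-mixed case) the empirical fixation frequency should approach the known closed form (1 - 1/r)/(1 - 1/r^n), which is a useful sanity check for the simulation.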
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.62/LIPIcs.ICALP.2016.62.pdf
Moran process
randomised algorithm on graphs
evolutionary dynamics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
63:1
63:14
10.4230/LIPIcs.ICALP.2016.63
article
Mixing Time of Markov Chains, Dynamical Systems and Evolution
Panageas, Ioannis
Vishnoi, Nisheeth K.
In this paper we study the mixing time of evolutionary Markov chains over populations of a fixed size (N) in which each individual can be one of m types. These Markov chains have the property that they are guided by a dynamical system from the m-dimensional probability simplex to itself. Roughly, given the current state of the Markov chain, which can be viewed as a probability distribution over the m types, the next state is generated by applying this dynamical system to this distribution, and then sampling from it N times. Many processes in nature, from biology to sociology, are evolutionary and such chains can be used to model them. In this study, the mixing time is of particular interest as it determines the speed of evolution and whether the statistics of the steady state can be efficiently computed. In a recent result [Panageas, Srivastava, Vishnoi, SODA, 2016], it was suggested that the mixing time of such Markov chains is connected to the geometry of this guiding dynamical system. In particular, when the dynamical system has a fixed point which is a global attractor, then the mixing is fast. The limit sets of dynamical systems, however, can exhibit more complex behavior: they could have multiple fixed points that are not necessarily stable, periodic orbits, or even chaos. Such behavior arises in important evolutionary settings such as the dynamics of sexual evolution and that of grammar acquisition. In this paper we prove that the geometry of the dynamical system can also give tight mixing time bounds when the dynamical system has multiple fixed points and periodic orbits. We show that the mixing time continues to remain small in the presence of several unstable fixed points and is exponential in N when there are two or more stable fixed points. As a consequence of our results, we obtain a phase transition result for the mixing time of the sexual/grammar model mentioned above.
We arrive at the conclusion that in the interesting parameter regime for these models, i.e., when there are multiple stable fixed points, the mixing is slow. Our techniques strengthen the connections between Markov chains and dynamical systems and we expect that the tools developed in this paper should have a wider applicability.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.63/LIPIcs.ICALP.2016.63.pdf
Markov chains
Mixing time
Dynamical Systems
Evolutionary dynamics
Language evolution
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
64:1
64:14
10.4230/LIPIcs.ICALP.2016.64
article
Information Cascades on Arbitrary Topologies
Wan, Jun
Xia, Yu
Li, Liang
Moscibroda, Thomas
In this paper, we study information cascades on graphs. In this setting, each node in the graph represents a person. One after another, each person has to take a decision based on a private signal as well as the decisions made by earlier neighboring nodes. Such information cascades commonly occur in practice and have been studied in complete graphs where everyone can overhear the decisions of every other player. It is known that information cascades can be fragile and based on very little information, and that they have a high likelihood of being wrong.
Generalizing the problem to arbitrary graphs reveals interesting insights. In particular, we show that in a random graph G(n,q), for the right value of q, the number of nodes making a wrong decision is logarithmic in n. That is, in the limit for large n, the fraction of players that make a wrong decision tends to zero. This is intriguing because it contrasts with the two natural corner cases: the empty graph (everyone decides independently based on his private signal) and the complete graph (all decisions are heard by all nodes). In both of these cases a constant fraction of nodes make a wrong decision in expectation. Thus, our result shows that while both too little and too much information sharing causes nodes to take wrong decisions, for exactly the right amount of information sharing, asymptotically everyone can be right. We further show that this result in random graphs is asymptotically optimal for any topology, even if nodes follow a globally optimal algorithmic strategy. Based on the analysis of random graphs, we explore how topology impacts global performance and construct an optimal deterministic topology among layer graphs.
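One way to make the model concrete is a toy simulation. As an assumed simplification of the Bayesian decision rule (all names and the majority rule here are ours, not the paper's), each node follows the majority of its earlier-deciding neighbours and falls back on its private signal when they are tied or absent.

```python
import random

def cascade(neighbors, p=0.7, rng=random):
    """Sequential decision-making sketch: nodes 0..n-1 decide in order.
    Ground truth is 1; each private signal is correct with prob. p.
    A node adopts the majority decision of its earlier neighbours,
    breaking ties (including the no-neighbour case) with its own
    signal.  Returns the fraction of wrong (0) decisions."""
    n = len(neighbors)
    decision = [None] * n
    wrong = 0
    for v in range(n):
        signal = 1 if rng.random() < p else 0
        prior = [decision[u] for u in neighbors[v] if u < v]
        ones = sum(prior)
        if ones * 2 > len(prior):
            decision[v] = 1
        elif ones * 2 < len(prior):
            decision[v] = 0
        else:
            decision[v] = signal
        wrong += decision[v] == 0
    return wrong / n
```

With an empty neighbour list for every node this reduces to independent decisions (wrong fraction near 1 - p), while a dense graph reproduces the herding behaviour in which an early wrong run can sweep the population.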
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.64/LIPIcs.ICALP.2016.64.pdf
Information Cascades
Herding Effect
Random Graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
65:1
65:12
10.4230/LIPIcs.ICALP.2016.65
article
Analysing Survey Propagation Guided Decimation on Random Formulas
Hetterich, Samuel
Let vec(theta) be a uniformly distributed random k-SAT formula with n variables and m clauses. For clauses/variables ratio m/n <= r_{k-SAT} ~ 2^k*ln(2) the formula vec(theta) is satisfiable with high probability. However, no efficient algorithm is known to provably find a satisfying assignment beyond m/n ~ 2^k*ln(k)/k with a non-vanishing probability. Non-rigorous statistical mechanics work on k-CNF led to the development of a new efficient "message passing algorithm" called Survey Propagation Guided Decimation [Mézard et al., Science 2002]. Experiments conducted for k=3,4,5 suggest that the algorithm finds satisfying assignments close to r_{k-SAT}. However, in the present paper we prove that the basic version of Survey Propagation Guided Decimation fails to solve random k-SAT formulas efficiently already for m/n = 2^k*(1 + epsilon_k)*ln(k)/k with lim_{k -> infinity} epsilon_k = 0, almost a factor k below r_{k-SAT}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.65/LIPIcs.ICALP.2016.65.pdf
Survey Propagation Guided Decimation
Message Passing Algorithm
Graph Theory
Random k-SAT
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
66:1
66:13
10.4230/LIPIcs.ICALP.2016.66
article
Approximation Algorithms for Aversion k-Clustering via Local k-Median
Gupta, Anupam
Guruganesh, Guru
Schmidt, Melanie
In the aversion k-clustering problem, given a metric space, we want to cluster the points into k clusters. The cost incurred by each point is the distance to the furthest point in its cluster, and the cost of the clustering is the sum of all these per-point-costs. This problem is motivated by questions in generating automatic abstractions of extensive-form games.
We reduce this problem to a "local" k-median problem where each facility has a prescribed radius and can only connect to clients within that radius. Our main result is a constant-factor approximation algorithm for the aversion k-clustering problem via the local k-median problem.
We use a primal-dual approach; our technical contribution is a non-local rounding step which we feel is of broader interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.66/LIPIcs.ICALP.2016.66.pdf
Approximation algorithms
clustering
k-median
primal-dual
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
67:1
67:15
10.4230/LIPIcs.ICALP.2016.67
article
The Non-Uniform k-Center Problem
Chakrabarty, Deeparnab
Goyal, Prachi
Krishnaswamy, Ravishankar
In this paper, we introduce and study the Non-Uniform k-Center (NUkC) problem. Given a finite metric space (X, d) and a collection of balls of radii {r_1 >= ... >= r_k}, the NUkC problem is to find a placement of their centers on the metric space and find the minimum dilation alpha, such that the union of balls of radius alpha*r_i around the i-th center covers all the points in X. This problem naturally arises as a min-max vehicle routing problem with fleets of different speeds, or as a wireless router placement problem with routers of different powers/ranges.
The NUkC problem generalizes the classic k-center problem when all the k radii are the same (which can be assumed to be 1 after scaling). It also generalizes the k-center with outliers (kCwO for short) problem when there are k balls of radius 1 and l balls of radius 0. There are 2-approximation and 3-approximation algorithms known for these problems, respectively; the former is best possible unless P=NP, and the latter has remained unimproved for 15 years.
We first observe that no O(1)-approximation to the optimal dilation is possible unless P=NP, implying that the NUkC problem is more non-trivial than the above two problems. Our main algorithmic result is an (O(1), O(1))-bi-criteria approximation result: we give an O(1)-approximation to the optimal dilation; however, we may open Theta(1) centers of each radius. Our techniques also allow us to prove a simple (uni-criteria) optimal 2-approximation to the kCwO problem, improving upon the long-standing 3-factor. Our main technical contribution is a connection between the NUkC problem and the so-called firefighter problems on trees which have been studied recently in the TCS community. We show NUkC is as hard as the firefighter problem.
While we do not know whether the converse is true, we are able to adapt ideas from recent works [Chalermsook/Chuzhoy, SODA 2010; Adjiashvili/Baggio/Zenklusen, arXiv 2016] in non-trivial ways to obtain our constant-factor bi-criteria approximation.
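For context, the classic (uniform) k-center problem that NUkC generalizes admits a well-known greedy 2-approximation, Gonzalez's farthest-first traversal. The sketch below is a baseline illustration only, not the bi-criteria algorithm of the paper; the Euclidean metric and all names are illustrative choices.

```python
import math

def gonzalez_k_center(points, k):
    """Farthest-first traversal: a classical 2-approximation for
    uniform k-center in any metric space.  Here the metric is
    Euclidean distance over coordinate tuples (an illustrative
    choice); the algorithm itself works for any metric."""
    centers = [points[0]]  # arbitrary first center
    # nearest[j] = distance from points[j] to its closest chosen center
    nearest = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        # pick the point farthest from all chosen centers
        i = max(range(len(points)), key=lambda j: nearest[j])
        centers.append(points[i])
        for j, p in enumerate(points):
            nearest[j] = min(nearest[j], math.dist(p, points[i]))
    # covering radius achieved; provably at most twice the optimum
    return centers, max(nearest)
```

The 2-approximation guarantee follows from the triangle inequality: any solution with smaller radius would have to place two of the chosen farthest points in the same optimal ball.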
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.67/LIPIcs.ICALP.2016.67.pdf
Clustering
k-Center
Approximation Algorithms
Firefighter Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
68:1
68:14
10.4230/LIPIcs.ICALP.2016.68
article
k-Center Clustering Under Perturbation Resilience
Balcan, Maria-Florina
Haghtalab, Nika
White, Colin
The k-center problem is a canonical and long-studied facility location and clustering problem with many applications in both its symmetric and asymmetric forms. Both versions of the problem have tight approximation factors on worst-case instances: a 2-approximation for symmetric k-center and an O(log*(k))-approximation for the asymmetric version. Therefore, to improve on these ratios, one must go beyond the worst case.
In this work, we take this approach and provide strong positive results for both the asymmetric and symmetric k-center problems under a very natural input stability (promise) condition called alpha-perturbation resilience [Bilu and Linial, 2012], which states that the optimal solution does not change under any alpha-factor perturbation to the input distances. We show that by assuming 2-perturbation resilience, the exact solution for the asymmetric k-center problem can be found in polynomial time. To our knowledge, this is the first problem that is hard to approximate to any constant factor in the worst case, yet can be optimally solved in polynomial time under perturbation resilience for a constant value of alpha. Furthermore, we prove our result is tight by showing that symmetric k-center under (2-epsilon)-perturbation resilience is hard unless NP=RP.
This is the first tight result for any problem under perturbation resilience, i.e., this is the first time the exact value of alpha for which the problem switches from being NP-hard to efficiently computable has been found.
Our results illustrate a surprising relationship between symmetric and asymmetric k-center instances under perturbation resilience. Unlike the approximation ratio, for which symmetric k-center is easily solved to a factor of 2 but asymmetric k-center cannot be approximated to any constant factor, both symmetric and asymmetric k-center can be solved optimally under resilience to 2-perturbations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.68/LIPIcs.ICALP.2016.68.pdf
k-center
clustering
perturbation resilience
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
69:1
69:15
10.4230/LIPIcs.ICALP.2016.69
article
Approximation Algorithms for Clustering Problems with Lower Bounds and Outliers
Ahmadian, Sara
Swamy, Chaitanya
We consider clustering problems with non-uniform lower bounds and outliers, and obtain the first approximation guarantees for these problems. We have a set F of facilities with lower bounds {L_i}_{i in F} and a set D of clients located in a common metric space {c(i,j)}_{i,j in F union D}, and bounds k, m. A feasible solution is a pair (S subseteq F, sigma: D -> S union {out}), where sigma specifies the client assignments, such that |S| <= k, |sigma^{-1}(i)| >= L_i for all i in S, and |sigma^{-1}(out)| <= m. In the lower-bounded min-sum-of-radii with outliers (LBkSRO) problem, the objective is to minimize sum_{i in S} max_{j in sigma^{-1}(i)} c(i,j), and in the lower-bounded k-supplier with outliers (LBkSupO) problem, the objective is to minimize max_{i in S} max_{j in sigma^{-1}(i)} c(i,j).
We obtain an approximation factor of 12.365 for LBkSRO, which improves to 3.83 for the non-outlier version (i.e., m = 0). These also constitute the first approximation bounds for the min-sum-of-radii objective when we consider lower bounds and outliers separately. We apply the primal-dual method to the relaxation where we Lagrangify the |S| <= k constraint. The chief technical contribution and novelty of our algorithm is that, departing from the standard paradigm used for such constrained problems, we obtain an O(1)-approximation despite the fact that we do not obtain a Lagrangian-multiplier-preserving algorithm for the Lagrangian relaxation. We believe that our ideas have broader applicability to other clustering problems with outliers as well.
We obtain approximation factors of 5 and 3 respectively for LBkSupO and its non-outlier version. These are the first approximation results for k-supplier with non-uniform lower bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.69/LIPIcs.ICALP.2016.69.pdf
Approximation algorithms
facility-location problems
primal-dual method
Lagrangian relaxation
k-center problems
minimizing sum of radii
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
70:1
70:14
10.4230/LIPIcs.ICALP.2016.70
article
A Duality Based 2-Approximation Algorithm for Maximum Agreement Forest
Schalekamp, Frans
van Zuylen, Anke
van der Ster, Suzanne
We give a 2-approximation algorithm for the Maximum Agreement Forest problem on two rooted binary trees. This NP-hard problem has been studied extensively in the past two decades, since it can be used to compute the Subtree Prune-and-Regraft (SPR) distance between two phylogenetic trees. Our result improves on the very recent 2.5-approximation algorithm due to Shi, Feng, You and Wang (2015). Our algorithm is the first approximation algorithm for this problem that uses LP duality in its analysis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.70/LIPIcs.ICALP.2016.70.pdf
Maximum agreement forest
phylogenetic tree
SPR distance
subtree prune-and-regraft distance
computational biology
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
71:1
71:14
10.4230/LIPIcs.ICALP.2016.71
article
Robust Assignments via Ear Decompositions and Randomized Rounding
Adjiashvili, David
Bindewald, Viktor
Michaels, Dennis
Many real-life planning problems require making a priori decisions before all parameters of the problem have been revealed. An important special case of such problems arises in scheduling and transshipment, where a set of jobs needs to be assigned to an available set of machines or personnel (resources) so that every job is assigned a resource and no two jobs share the same resource. In its nominal form, the resulting computational problem is the assignment problem.
This paper deals with the Robust Assignment Problem (RAP) which models situations in which certain assignments are vulnerable and may become unavailable after the solution has been chosen. The goal is to choose a minimum-cost collection of assignments (edges in the corresponding bipartite graph) so that if any vulnerable edge becomes unavailable, the remaining part of the solution contains an assignment of all jobs.
We develop algorithms and hardness results for RAP and establish several connections to well-known concepts from matching theory, robust optimization, LP-based techniques, and combinations thereof.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.71/LIPIcs.ICALP.2016.71.pdf
robust optimization
matching theory
ear decomposition
randomized rounding
approximation algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
72:1
72:13
10.4230/LIPIcs.ICALP.2016.72
article
Closing the Gap for Makespan Scheduling via Sparsification Techniques
Jansen, Klaus
Klein, Kim-Manuel
Verschae, José
Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of n jobs to a set of m identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a (1 + epsilon)-approximation algorithm with a running time that depends polynomially on 1/epsilon. Furthermore, Chen et al. [Chen/Jansen/Zhang, SODA'13] recently showed that a running time of 2^{(1/epsilon)^{1-delta}} + poly(n) for any delta > 0 would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms has been developed in an attempt to obtain a low dependence on 1/epsilon, the best of which achieves a running time of 2^{~O(1/epsilon^{2})} + O(n*log(n)) [Jansen, SIAM J. Disc. Math. 2010]. In this paper we obtain an algorithm with a running time of 2^{~O(1/epsilon)} + O(n*log(n)), which is tight under ETH up to logarithmic factors in the exponent.
Our main technical contribution is a new structural result on the configuration-IP. More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications to other settings.
In particular, we show how the structure can be applied to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases we obtain an efficient PTAS with running time 2^{~O(1/epsilon)} + poly(n).
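As a point of contrast with the EPTAS results above, the simplest classical baseline for makespan on identical machines is Graham's list scheduling, a (2 - 1/m)-approximation (improving to 4/3 with the longest-processing-time ordering used below). The sketch is illustrative and assumes numeric job sizes; it is not the algorithm of the paper.

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling with the LPT rule: assign each job,
    in decreasing size order, to the currently least-loaded of m
    identical machines.  Returns the resulting makespan and the
    assignment; a classical 4/3-approximation baseline."""
    loads = [(0.0, i) for i in range(m)]  # (current load, machine id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for job in sorted(jobs, reverse=True):  # LPT: largest jobs first
        load, i = heapq.heappop(loads)      # least-loaded machine
        assignment[i].append(job)
        heapq.heappush(loads, (load + job, i))
    makespan = max(load for load, _ in loads)
    return makespan, assignment
```

On jobs [3, 3, 2, 2, 2] with 2 machines, LPT produces makespan 7 while the optimum is 6, illustrating that the 4/3 bound can be nearly tight.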
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.72/LIPIcs.ICALP.2016.72.pdf
scheduling
approximation
PTAS
makespan
ETH
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
73:1
73:14
10.4230/LIPIcs.ICALP.2016.73
article
Constant Approximation for Capacitated k-Median with (1+epsilon)-Capacity Violation
Demirci, Gökalp
Li, Shi
We study the Capacitated k-Median problem for which existing constant-factor approximation algorithms are all pseudo-approximations that violate either the capacities or the upper bound k on the number of open facilities. Using the natural LP relaxation for the problem, one can only hope to get the violation factor down to 2. Li [SODA'16] introduced a novel LP to go beyond the limit of 2 and gave a constant-factor approximation algorithm that opens (1 + epsilon)*k facilities.
We use the configuration LP of Li [SODA'16] to give a constant-factor approximation for the Capacitated k-Median problem in the seemingly harder direction: we violate only the capacities, by a factor of 1 + epsilon. This result settles the problem as far as pseudo-approximation algorithms are concerned.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.73/LIPIcs.ICALP.2016.73.pdf
Approximation Algorithms
Capacitated k-Median
Pseudo Approximation
Capacity Violation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
74:1
74:13
10.4230/LIPIcs.ICALP.2016.74
article
Approximating Directed Steiner Problems via Tree Embedding
Laekhanukit, Bundit
Directed Steiner problems are fundamental problems in Combinatorial Optimization and Theoretical Computer Science. An important problem in this genre is the k-edge connected directed Steiner tree (k-DST) problem. In this problem, we are given a directed graph G on n vertices with edge-costs, a root vertex r, a set of h terminals T and an integer k. The goal is to find a min-cost subgraph H subseteq G that connects r to each terminal t in T by k edge-disjoint r, t-paths. This problem includes as special cases the well-known directed Steiner tree (DST) problem (the case k=1) and the group Steiner tree (GST) problem. Despite having been studied and mentioned many times in the literature, e.g., by Feldman et al. [SODA'09, JCSS'12], by Cheriyan et al. [SODA'12, TALG'14], by Laekhanukit [SODA'14] and in a survey by Kortsarz and Nutov [Handbook of Approximation Algorithms and Metaheuristics], there was no known non-trivial approximation algorithm for k-DST for k >= 2, even in the special case where the input graph is directed acyclic and has a constant number of layers. If the input graph is not acyclic, the complexity status of k-DST is not known even in the very restricted special case where k=2 and h=2.
In this paper, we make progress toward developing a non-trivial approximation algorithm for k-DST. We present an O(D*k^{D-1}*log(n))-approximation algorithm for k-DST on directed acyclic graphs (DAGs) with D layers, which can be extended to the special case of k-DST on general graphs in which an instance has a D-shallow optimal solution, i.e., there exist k edge-disjoint r, t-paths, each of length at most D, for every terminal t in T. For the case k=1 (DST), our algorithm yields an approximation ratio of O(D*log(h)), thus implying an O(log^3(h))-approximation algorithm for DST that runs in quasi-polynomial time (due to the height-reduction of Zelikovsky [Algorithmica'97]). Our algorithm is based on an LP-formulation that allows us to embed a solution into a tree-instance of GST, which does not preserve connectivity. We show, however, that one can randomly extract a solution of k-DST from the tree-instance of GST.
Our algorithm is almost tight when k and D are constants, since the case k=1 and D=3 is NP-hard to approximate to within a factor of O(log(h)), and our algorithm achieves the same approximation ratio for this special case. We also remark that the k^{1/4-epsilon}-hardness instance of k-DST is a DAG with 6 layers, and our algorithm gives an O(k^5*log(n))-approximation for this special case. Consequently, as our algorithm works for general graphs, we obtain an O(D*k^{D-1}*log(n))-approximation algorithm for D-shallow instances of the k-edge-connected directed Steiner subgraph problem, where we wish to connect every pair of terminals by k edge-disjoint paths.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.74/LIPIcs.ICALP.2016.74.pdf
Approximation Algorithms
Network Design
Graph Connectivity
Directed Graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
75:1
75:13
10.4230/LIPIcs.ICALP.2016.75
article
Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median
Friggstad, Zachary
Zhang, Yifeng
Budgeted Red-Blue Median is a generalization of classic k-Median in that there are two sets of facilities, say R and B, that can be used to serve clients located in some metric space. The goal is to open k_r facilities in R and k_b facilities in B, for given bounds k_r, k_b, and connect each client to its nearest open facility in a way that minimizes the total connection cost.
We extend work by Hajiaghayi, Khandekar, and Kortsarz [2012] and show that a multiple-swap local search heuristic can be used to obtain a (5 + epsilon)-approximation for Budgeted Red-Blue Median for any constant epsilon > 0. This improves over their single-swap analysis and beats the previous best approximation guarantee of 8 by Swamy [2014].
We also present a matching lower bound showing that for every p >= 1, there are instances of Budgeted Red-Blue Median with local optimum solutions for the p-swap heuristic whose cost is 5 + Omega(1/p) times the optimum solution cost. Thus, our analysis is tight up to the lower order terms. In particular, for any epsilon > 0 we show the single-swap heuristic admits local optima whose cost can be as bad as 7 - epsilon times the optimum solution cost.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.75/LIPIcs.ICALP.2016.75.pdf
Approximation Algorithms
Local search
Red-Blue Median
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
76:1
76:12
10.4230/LIPIcs.ICALP.2016.76
article
Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices
Bai, Shi
Stehlé, Damien
Wen, Weiqiang
We present a probabilistic polynomial-time reduction from the lattice Bounded Distance Decoding (BDD) problem with parameter 1/( sqrt(2) * gamma) to the unique Shortest Vector Problem (uSVP) with parameter gamma for any gamma > 1 that is polynomial in the lattice dimension n. It improves the BDD to uSVP reductions of [Lyubashevsky and Micciancio, CRYPTO, 2009] and [Liu, Wang, Xu and Zheng, Inf. Process. Lett., 2014], which rely on Kannan's embedding technique. The main ingredient to the improvement is the use of Khot's lattice sparsification [Khot, FOCS, 2003] before resorting to Kannan's embedding, in order to boost the uSVP parameter.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.76/LIPIcs.ICALP.2016.76.pdf
Lattices
Bounded Distance Decoding Problem
Unique Shortest Vector Problem
Sparsification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
77:1
77:13
10.4230/LIPIcs.ICALP.2016.77
article
A Parallel Repetition Theorem for All Entangled Games
Yuen, Henry
The behavior of games repeated in parallel, when played with quantumly entangled players, has received much attention in recent years. Quantum analogues of Raz's classical parallel repetition theorem have been proved for many special classes of games. However, for general entangled games no parallel repetition theorem was known.
We prove that the entangled value of a two-player game G repeated n times in parallel is at most c_G*n^{-1/4}*log(n) for a constant c_G depending on G, provided that the entangled value of G is less than 1. In particular, this gives the first proof that the entangled value of a parallel repeated game must converge to 0 for all games whose entangled value is less than 1. Central to our proof is a combination of both classical and quantum correlated sampling.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.77/LIPIcs.ICALP.2016.77.pdf
parallel repetition
direct product theorems
entangled games
quantum games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
78:1
78:14
10.4230/LIPIcs.ICALP.2016.78
article
Tight Sum-Of-Squares Lower Bounds for Binary Polynomial Optimization Problems
Kurpisz, Adam
Leppänen, Samuli
Mastrolilli, Monaldo
We give two results concerning the power of the Sum-Of-Squares (SoS)/Lasserre hierarchy. For binary polynomial optimization problems of degree 2d with an odd number of variables n, we prove that (n+2d-1)/2 levels of the SoS/Lasserre hierarchy are necessary to provide the exact optimal value. This matches the recent upper bound result by Sakaue, Takeda, Kim and Ito.
Additionally, we study a conjecture by Laurent, who considered the linear representation of a set with no integral points. She showed that the Sherali-Adams hierarchy requires n levels to detect the empty integer hull, and conjectured that the SoS/Lasserre rank for the same problem is n-1. We disprove this conjecture and derive lower and upper bounds for the rank.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.78/LIPIcs.ICALP.2016.78.pdf
SoS/Lasserre hierarchy
lift and project methods
binary polynomial optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
79:1
79:13
10.4230/LIPIcs.ICALP.2016.79
article
Correlation Decay and Tractability of CSPs
Brown-Cohen, Jonah
Raghavendra, Prasad
The algebraic dichotomy conjecture of Bulatov, Krokhin and Jeavons yields an elegant characterization of the complexity of constraint satisfaction problems. Roughly speaking, the characterization asserts that a CSP L is tractable if and only if there exist certain non-trivial operations known as polymorphisms to combine solutions to L to create new ones.
In this work, we study the dynamical system associated with repeated applications of a polymorphism to a distribution over assignments. Specifically, we exhibit a correlation decay phenomenon that makes two variables or groups of variables that are not perfectly correlated become independent after repeated applications of a polymorphism.
We show that this correlation decay phenomenon can be utilized in designing algorithms for CSPs by exhibiting two applications:
1. A simple randomized algorithm to solve linear equations over a prime field, whose analysis crucially relies on correlation decay.
2. A sufficient condition for the simple linear programming relaxation for a 2-CSP to be sound (have no integrality gap) on a given instance.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.79/LIPIcs.ICALP.2016.79.pdf
Constraint Satisfaction
Polymorphisms
Linear Equations
Correlation Decay
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
80:1
80:14
10.4230/LIPIcs.ICALP.2016.80
article
On Percolation and NP-Hardness
Bennett, Huck
Reichman, Daniel
Shinkar, Igor
The edge-percolation and vertex-percolation random graph models start with an arbitrary graph G, and randomly delete edges or vertices of G with some fixed probability. We study the computational hardness of problems whose inputs are obtained by applying percolation to worst-case instances. Specifically, we show that a number of classical NP-hard graph problems remain essentially as hard on percolated instances as they are in the worst case (assuming NP !subseteq BPP). We also prove hardness results for other NP-hard problems such as Constraint Satisfaction Problems, where random deletions are applied to clauses or variables.
We focus on proving the hardness of the Maximum Independent Set problem and the Graph Coloring problem on percolated instances. To show this we establish the robustness of the corresponding parameters alpha(.) and chi(.) to percolation, which may be of independent interest. Given a graph G, let G' be the graph obtained by randomly deleting edges of G. We show that if alpha(G) is small, then alpha(G') remains small with probability at least 0.99. Similarly, we show that if chi(G) is large, then chi(G') remains large with probability at least 0.99.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.80/LIPIcs.ICALP.2016.80.pdf
percolation
NP-hardness
random subgraphs
chromatic number
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
81:1
81:13
10.4230/LIPIcs.ICALP.2016.81
article
Tight Hardness Results for Maximum Weight Rectangles
Backurs, Arturs
Dikkala, Nishanth
Tzamos, Christos
Given n weighted points (positive or negative) in d dimensions, what is the axis-aligned box which maximizes the total weight of the points it contains?
The best known algorithm for this problem is based on a reduction to a related problem, the Weighted Depth problem [Chan, FOCS, 2013], and runs in time O(n^d). It was conjectured [Barbay et al., CCCG, 2013] that this runtime is tight up to subpolynomial factors. We answer this conjecture affirmatively by providing a matching conditional lower bound. We also provide conditional lower bounds for the special case when points are arranged in a grid (a well studied problem known as Maximum Subarray problem) as well as for other related problems.
All our lower bounds are based on assumptions that the best known algorithms for the All-Pairs Shortest Paths problem (APSP) and for the Max-Weight k-Clique problem in edge-weighted graphs are essentially optimal.
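The Maximum Subarray problem mentioned above is solvable in linear time in one dimension by Kadane's classical algorithm; the conditional lower bounds concern its higher-dimensional variants. A minimal sketch (the function name is an illustrative choice):

```python
def max_subarray(a):
    """Kadane's O(n) algorithm for the 1D maximum-sum subarray:
    `cur` is the best sum of a subarray ending at the current
    position, `best` the best sum seen anywhere so far."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # either extend the run or restart at x
        best = max(best, cur)
    return best
```

In two or more dimensions no comparably fast algorithm is known, which is precisely the gap the hardness results address.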
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.81/LIPIcs.ICALP.2016.81.pdf
Maximum Rectangles
Hardness in P
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
82:1
82:11
10.4230/LIPIcs.ICALP.2016.82
article
The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction
Larsen, Kasper Green
Nelson, Jelani
For any n > 1, 0 < epsilon < 1/2, and N > n^C for some constant C > 0, we show the existence of an N-point subset X of l_2^n such that any linear map from X to l_2^m with distortion at most 1 + epsilon must have m = Omega(min{n, epsilon^{-2}*lg(N)}). This improves a lower bound of Alon [Alon, Discrete Math., 1999], in the linear setting, by a lg(1/epsilon) factor. Our lower bound matches the upper bounds provided by the identity matrix and the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, Contemp. Math., 1984].
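For context, the Johnson-Lindenstrauss upper bound referenced here is achieved by a random linear map, e.g. a Gaussian matrix scaled by 1/sqrt(m), which preserves all pairwise l_2 distances up to 1 +- epsilon when m = O(epsilon^{-2} lg N). The sketch below is illustrative (the pure-Python matrix representation and all names are our choices, not from the paper):

```python
import math
import random

def jl_project(points, m, seed=0):
    """Project n-dimensional points down to m dimensions with a
    random Gaussian matrix whose entries are scaled by 1/sqrt(m),
    the standard Johnson-Lindenstrauss construction."""
    rng = random.Random(seed)
    n = len(points[0])
    # m x n matrix with i.i.d. N(0, 1/m) entries
    A = [[rng.gauss(0.0, 1.0) / math.sqrt(m) for _ in range(n)]
         for _ in range(m)]
    # apply the linear map A to every point
    return [[sum(A[i][j] * p[j] for j in range(n)) for i in range(m)]
            for p in points]
```

The lower bound in this paper says that for such a *linear* map, m = Omega(min{n, epsilon^{-2} lg N}) is unavoidable, so this construction is optimal up to constants.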
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.82/LIPIcs.ICALP.2016.82.pdf
dimensionality reduction
lower bounds
Johnson-Lindenstrauss
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
83:1
83:14
10.4230/LIPIcs.ICALP.2016.83
article
Impossibility of Sketching of the 3D Transportation Metric with Quadratic Cost
Andoni, Alexandr
Naor, Assaf
Neiman, Ofer
Transportation cost metrics, also known as the Wasserstein distances W_p, are a natural choice for defining distances between two pointsets, or distributions, and have been applied in numerous fields. From the computational perspective, there has been an intensive research effort toward understanding the W_p metrics over R^k, with work on the W_1 metric (a.k.a. the earth mover's distance) being most successful in terms of theoretical guarantees. However, the W_2 metric, also known as the root-mean-square (RMS) bipartite matching distance, is often a more suitable choice in many application areas, e.g. in graphics. Yet, the geometry of this metric space is currently poorly understood, and efficient algorithms have been elusive. For example, there are no known non-trivial algorithms for nearest-neighbor search or sketching for this metric.
In this paper we take the first step towards explaining the lack of efficient algorithms for the W_2 metric, even over the three-dimensional Euclidean space R^3. We prove that there are no meaningful embeddings of W_2 over R^3 into a wide class of normed spaces, as well as that there are no efficient sketching algorithms for W_2 over R^3 achieving constant approximation. For example, our results imply that: 1) any embedding into L1 must incur a distortion of Omega(sqrt(log(n))) for pointsets of size n equipped with the W_2 metric; and 2) any sketching algorithm of size s must incur Omega(sqrt(log(n))/sqrt(s)) approximation. Our results follow from a more general statement, asserting that W_2 over R^3 contains the 1/2-snowflake of all finite metric spaces with a uniformly bounded distortion. These are the first non-embeddability/non-sketchability results for W_2.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.83/LIPIcs.ICALP.2016.83.pdf
Transportation metric
embedding
snowflake
sketching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
84:1
84:13
10.4230/LIPIcs.ICALP.2016.84
article
Simple Average-Case Lower Bounds for Approximate Near-Neighbor from Isoperimetric Inequalities
Yin, Yitong
We prove an Omega(d/log(sw/nd)) lower bound on the average-case cell-probe complexity of deterministic or Las Vegas randomized algorithms solving the approximate near-neighbor (ANN) problem in d-dimensional Hamming space in the cell-probe model with w-bit cells, using a table of size s. This lower bound matches the highest known worst-case cell-probe lower bound for any static data structure problem.
This average-case cell-probe lower bound is proved in a general framework which relates the cell-probe complexity of ANN to isoperimetric inequalities in the underlying metric space. A tighter connection between ANN lower bounds and isoperimetric inequalities is established by a stronger richness lemma proved by cell-sampling techniques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.84/LIPIcs.ICALP.2016.84.pdf
nearest neighbor search
approximate near-neighbor
cell-probe model
isoperimetric inequality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
85:1
85:14
10.4230/LIPIcs.ICALP.2016.85
article
Quasimetric Embeddings and Their Applications
Mémoli, Facundo
Sidiropoulos, Anastasios
Sridhar, Vijay
We study generalizations of classical metric embedding results to the case of quasimetric spaces; that is, spaces that do not necessarily satisfy symmetry. Quasimetric spaces arise naturally from the shortest-path distances on directed graphs. Perhaps surprisingly, very little is known about low-distortion embeddings for quasimetric spaces.
Random embeddings into ultrametric spaces are arguably one of the most successful geometric tools in the context of algorithm design. We extend this to the quasimetric case as follows. We show that any n-point quasimetric space supported on a graph of treewidth t admits a random embedding into quasiultrametric spaces with distortion O(t*log^2(n)), where quasiultrametrics are a natural generalization of ultrametrics. This result allows us to obtain t*log^{O(1)}(n)-approximation algorithms for the Directed Non-Bipartite Sparsest-Cut and the Directed Multicut problems on n-vertex graphs of treewidth t, with running time polynomial in both n and t.
The above results are obtained by considering a generalization of random partitions to the quasimetric case, which we refer to as random quasipartitions. Using this definition and a construction of [Chuzhoy and Khanna 2009] we derive a polynomial lower bound on the distortion of random embeddings of general quasimetric spaces into quasiultrametric spaces. Finally, we establish a lower bound for embedding the shortest-path quasimetric of a graph G into graphs that exclude G as a minor. This lower bound is used to show that several embedding results from the metric case do not have natural analogues in the quasimetric setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.85/LIPIcs.ICALP.2016.85.pdf
metric embeddings
quasimetrics
outliers
random embeddings
treewidth
Directed Sparsest-Cut
Directed Multicut
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
86:1
86:15
10.4230/LIPIcs.ICALP.2016.86
article
The Landscape of Communication Complexity Classes
Göös, Mika
Pitassi, Toniann
Watson, Thomas
We prove several results which, together with prior work, provide a nearly-complete picture of the relationships among classical communication complexity classes between P and PSPACE, short of proving lower bounds against classes for which no explicit lower bounds were already known. Our article also serves as an up-to-date survey on the state of structural communication complexity.
Among our new results we show that MA !subseteq ZPP^{NP[1]}, that is, Merlin–Arthur proof systems cannot be simulated by zero-sided error randomized protocols with one NP query. Here the class ZPP^{NP[1]} has the property that generalizing it in the slightest ways would make it contain AM intersect coAM, for which it is notoriously open to prove any explicit lower bounds. We also prove that US !subseteq ZPP^{NP[1]}, where US is the class whose canonically complete problem is the variant of set-disjointness where yes-instances are uniquely intersecting. We also prove that US !subseteq coDP, where DP is the class of differences of two NP sets. Finally, we explore an intriguing open issue: are rank-1 matrices inherently more powerful than rectangles in communication complexity? We prove a new separation concerning PP that sheds light on this issue and strengthens some previously known separations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.86/LIPIcs.ICALP.2016.86.pdf
Landscape
communication
complexity
classes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
87:1
87:10
10.4230/LIPIcs.ICALP.2016.87
article
Information Complexity Is Computable
Braverman, Mark
Schneider, Jon
The information complexity of a function f is the minimum amount of information Alice and Bob need to exchange to compute the function f. In this paper we provide an algorithm for approximating the information complexity of an arbitrary function f to within any additive error epsilon > 0, thus resolving an open question as to whether information complexity is computable.
In the process, we give the first explicit upper bound on the rate of convergence of the information complexity of f when restricted to b-bit protocols to the (unrestricted) information complexity of f.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.87/LIPIcs.ICALP.2016.87.pdf
Communication complexity
convergence rate
information complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
88:1
88:14
10.4230/LIPIcs.ICALP.2016.88
article
Rényi Information Complexity and an Information Theoretic Characterization of the Partition Bound
Prabhakaran, Manoj M.
Prabhakaran, Vinod M.
In this work we introduce a new information-theoretic complexity measure for 2-party functions, called Rényi information complexity. It is a lower bound on communication complexity, and has the two leading lower bounds on communication complexity as its natural relaxations: (external) information complexity and the logarithm of partition complexity. These two lower bounds had so far appeared conceptually quite different from each other, but we show that they are both obtained from Rényi information complexity using two different but natural relaxations:
1. The relaxation of Rényi information complexity that yields information complexity is to change the order of Rényi mutual information used in its definition from infinity to 1.
2. The relaxation that connects Rényi information complexity with partition complexity is to replace the protocol transcripts used in the definition of Rényi information complexity with what we term "pseudotranscripts", which omit the interactive nature of a protocol and only require that the probability of any transcript, given inputs x and y to the two parties, factorizes into two terms that depend on x and y separately. While this relaxation yields an apparently different definition than the (log of the) partition function, we show that the two are in fact identical. This gives us a surprising characterization of the partition bound in terms of an information-theoretic quantity.
We also show that if both the above relaxations are simultaneously applied to Rényi information complexity, we obtain a complexity measure that is lower-bounded by the (log of) relaxed partition complexity, a complexity measure introduced by Kerenidis et al. (FOCS 2012). We obtain a sharper connection between (external) information complexity and relaxed partition complexity than Kerenidis et al., using an arguably more direct proof.
Further understanding Rényi information complexity (of various orders) might have consequences for important direct-sum problems in communication complexity, as it lies between communication complexity and information complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.88/LIPIcs.ICALP.2016.88.pdf
Information Complexity
Communication Complexity
Rényi Mutual Information
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
89:1
89:12
10.4230/LIPIcs.ICALP.2016.89
article
On Isoperimetric Profiles and Computational Complexity
Hrubes, Pavel
Yehudayoff, Amir
The isoperimetric profile of a graph is a function that measures, for an integer k, the size of the smallest edge boundary over all sets of vertices of size k. We observe a connection between isoperimetric profiles and computational complexity. We illustrate this connection by an example from communication complexity, but our main result is in algebraic complexity.
We prove a sharp super-polynomial separation between monotone arithmetic circuits and monotone arithmetic branching programs. This shows that the classical simulation of arithmetic circuits by arithmetic branching programs by Valiant, Skyum, Berkowitz, and Rackoff (1983) cannot be improved, as long as it preserves monotonicity.
A key ingredient in the proof is an accurate analysis of the isoperimetric profile of finite full binary trees. We show that the isoperimetric profile of a full binary tree constantly fluctuates between one and almost the depth of the tree.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.89/LIPIcs.ICALP.2016.89.pdf
Monotone computation
separations
communication complexity
isoperimetry
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
90:1
90:14
10.4230/LIPIcs.ICALP.2016.90
article
Tolerant Testers of Image Properties
Berman, Piotr
Murzabulatov, Meiram
Raskhodnikova, Sofya
We initiate a systematic study of tolerant testers of image properties or, equivalently, algorithms that approximate the distance from a given image to the desired property (that is, the smallest fraction of pixels that need to change in the image to ensure that the image satisfies the desired property). Image processing is a particularly compelling area of applications for sublinear-time algorithms and, specifically, property testing. However, for testing algorithms to reach their full potential in image processing, they have to be tolerant, that is, resilient to noise. Prior to this work, only one tolerant testing algorithm for an image property (image partitioning) had been published.
We design efficient approximation algorithms for the following fundamental questions: What fraction of pixels have to be changed in an image so that it becomes a half-plane? a representation of a convex object? a representation of a connected object? More precisely, our algorithms approximate the distance to three basic properties (being a half-plane, convexity, and connectedness) within a small additive error epsilon, after reading a number of pixels polynomial in 1/epsilon and independent of the size of the image. The running time of the testers for half-plane and convexity is also polynomial in 1/epsilon. Tolerant testers for these three properties were not investigated previously. For convexity and connectedness, even the existence of distance approximation algorithms with query complexity independent of the input size is not implied by previous work. (It does not follow from the VC-dimension bounds, since VC dimension of convexity and connectedness, even in two dimensions, depends on the input size. It also does not follow from the existence of non-tolerant testers.)
Our algorithms require very simple access to the input: uniform random samples for the half-plane property and convexity, and samples from uniformly random blocks for connectedness. However, the analysis of the algorithms, especially for convexity, requires many geometric and combinatorial insights. For example, in the analysis of the algorithm for convexity, we define a set of reference polygons P_{epsilon} such that (1) every convex image has a nearby polygon in P_{epsilon} and (2) one can use dynamic programming to quickly compute the smallest empirical distance to a polygon in P_{epsilon}. This construction might be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.90/LIPIcs.ICALP.2016.90.pdf
Computational geometry
convexity
half-plane
connectedness
property testing
tolerant property testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
91:1
91:15
10.4230/LIPIcs.ICALP.2016.91
article
Erasure-Resilient Property Testing
Dixit, Kashyap
Raskhodnikova, Sofya
Thakurta, Abhradeep
Varma, Nithin
Property testers form an important class of sublinear algorithms. In the standard property testing model, an algorithm accesses the input function f:D -> R via an oracle. With very few exceptions, all property testers studied in this model rely on the oracle to provide function values at all queried domain points. However, in many realistic situations, the oracle may be unable to reveal the function values at some domain points due to privacy concerns, or when some of the values get erased by mistake or by an adversary. The testers do not learn anything useful about the property by querying those erased points. Moreover, knowledge of the tester's strategy may enable an adversary to erase some of the values so as to increase the query complexity of the tester arbitrarily or, in some cases, make the tester entirely useless.
In this work, we initiate a study of property testers that are resilient to the presence of adversarially erased function values. An alpha-erasure-resilient epsilon-tester is given parameters alpha, epsilon in (0,1), along with oracle access to a function f such that at most an alpha fraction of function values have been erased. The tester does not know whether a value is erased until it queries the corresponding domain point. The tester has to accept with high probability if there is a way to assign values to the erased points such that the resulting function satisfies the desired property P. It has to reject with high probability if, for every assignment of values to the erased points, the resulting function has to be changed in at least an epsilon-fraction of the non-erased domain points to satisfy P.
We design erasure-resilient property testers for a large class of properties. For some properties, it is possible to obtain erasure-resilient testers by simply using standard testers as a black box. However, there are more challenging properties for which all known testers rely on querying a specific point. If this point is erased, all these testers break. We give efficient erasure-resilient testers for several important classes of such properties of functions including monotonicity, the Lipschitz property, and convexity. Finally, we show a separation between standard testing and erasure-resilient testing. Specifically, we describe a property that can be epsilon-tested with O(1/epsilon) queries in the standard model, whereas testing it in the erasure-resilient model requires a number of queries polynomial in the input size.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.91/LIPIcs.ICALP.2016.91.pdf
Randomized algorithms
property testing
error correction
monotone and Lipschitz functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
92:1
92:12
10.4230/LIPIcs.ICALP.2016.92
article
Towards Tight Lower Bounds for Range Reporting on the RAM
Grønlund, Allan
Larsen, Kasper Green
In the orthogonal range reporting problem, we are to preprocess a set of n points with integer coordinates on a UxU grid. The goal is to support reporting all k points inside an axis-aligned query rectangle. This is one of the most fundamental data structure problems in databases and computational geometry. Despite the importance of the problem, its complexity remains unresolved in the word-RAM.
On the upper bound side, the three best known tradeoffs are all derived by reducing range reporting to a ball-inheritance problem. Ball-inheritance is a problem that essentially encapsulates all previous attempts at solving range reporting in the word-RAM. In this paper we make progress towards closing the gap between the upper and lower bounds for range reporting by proving cell probe lower bounds for ball-inheritance. Our lower bounds are tight for a large range of parameters, excluding any further progress for range reporting using the ball-inheritance reduction.
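For orientation, the queried object itself is easy to state: the following minimal Python sketch (not from the paper, all names illustrative) is the trivial linear-scan baseline that the word-RAM structures discussed above aim to beat.

```python
# Naive baseline for orthogonal range reporting: no preprocessing; a query
# scans all n points of the U x U grid. The paper's data structures trade
# preprocessing space for much faster queries via the ball-inheritance
# reduction; this scan is only the point of comparison.

def report(points, x1, x2, y1, y2):
    """Report all k points inside the axis-aligned rectangle [x1,x2] x [y1,y2]."""
    return [(x, y) for (x, y) in points if x1 <= x <= x2 and y1 <= y <= y2]

points = [(1, 5), (3, 3), (7, 2), (4, 8)]
print(report(points, 1, 4, 3, 7))  # [(1, 5), (3, 3)]
```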
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.92/LIPIcs.ICALP.2016.92.pdf
Data Structures
Lower Bounds
Cell Probe Model
Range Reporting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
93:1
93:15
10.4230/LIPIcs.ICALP.2016.93
article
Data Structure Lower Bounds for Document Indexing Problems
Afshani, Peyman
Nielsen, Jesper Sindahl
We study data structure problems related to document indexing and pattern matching queries. Our main contribution is to show that the pointer machine model of computation can be extremely useful in proving high and unconditional lower bounds that cannot be obtained with current techniques in any other known model of computation. Often our lower bounds match the known space-query time trade-off curve, and in fact, for all the problems considered, there is a very good and reasonable match between our lower bounds and the known upper bounds, at least for some choice of input parameters.
The problems that we consider are set intersection queries (both the reporting variant and the semi-group counting variant), indexing a set of documents for two-pattern queries, or forbidden-pattern queries, or queries with wild-cards, and indexing an input set of gapped-patterns (or two-patterns) to find those matching a document given at the query time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.93/LIPIcs.ICALP.2016.93.pdf
Data Structure Lower Bounds
Pointer Machine
Set Intersection
Pattern Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
94:1
94:14
10.4230/LIPIcs.ICALP.2016.94
article
Proof Complexity Modulo the Polynomial Hierarchy: Understanding Alternation as a Source of Hardness
Chen, Hubie
We present and study a framework in which one can present alternation-based lower bounds on proof length in proof systems for quantified Boolean formulas. A key notion in this framework is that of a proof system ensemble, which is (essentially) a sequence of proof systems in each of which proof checking can be performed within the polynomial hierarchy. We introduce a proof system ensemble called relaxing QU-res, which is based on the established proof system QU-resolution.
Our main results include an exponential separation of the tree-like and general versions of relaxing QU-res, and an exponential lower bound for relaxing QU-res; these are analogs of classical results in propositional proof complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.94/LIPIcs.ICALP.2016.94.pdf
proof complexity
polynomial hierarchy
quantified propositional logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
95:1
95:14
10.4230/LIPIcs.ICALP.2016.95
article
Past, Present, and Infinite Future
Wilke, Thomas
I was supposed to deliver one of the speeches at Wolfgang Thomas's retirement ceremony. Wolfgang had called me on the phone earlier and posed some questions about temporal logic, but I hadn't had good answers at the time. What I decided to do at the ceremony was to take up the conversation again and show how it could have evolved if only I had put more effort into answering his questions. Here is the imaginary conversation with Wolfgang.
The contributions are (1) the first direct translation from counter-free omega-automata into future temporal formulas, (2) a definition of bimachines for omega-words, (3) a translation from arbitrary temporal formulas (including both future and past operators) into counter-free omega-bimachines, and (4) an automata-based proof of separation: every arbitrary temporal formula is equivalent to a boolean combination of pure future, present, and pure past formulas when interpreted in omega-words.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.95/LIPIcs.ICALP.2016.95.pdf
linear-time temporal logic
separation
backward deterministic omega-automata
counter freeness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
96:1
96:13
10.4230/LIPIcs.ICALP.2016.96
article
Thin MSO with a Probabilistic Path Quantifier
Bojanczyk, Mikolaj
This paper is about a variant of MSO on infinite trees where:
- there is a quantifier "zero probability of choosing a path pi in 2^{omega} which makes phi(pi) true";
- the monadic quantifiers range over sets with countable topological closure.
We introduce an automaton model, and show that it captures the logic.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.96/LIPIcs.ICALP.2016.96.pdf
Automata
mso
infinite trees
probabilistic temporal logics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
97:1
97:15
10.4230/LIPIcs.ICALP.2016.97
article
Deciding Piecewise Testable Separability for Regular Tree Languages
Goubault-Larrecq, Jean
Schmitz, Sylvain
The piecewise testable separability problem asks, given two input languages, whether there exists a piecewise testable language that contains the first input language and is disjoint from the second. We prove a general characterisation of piecewise testable separability on languages in a well-quasiorder, in terms of ideals of the ordering. This subsumes the known characterisations in the case of finite words. In the case of finite ranked trees ordered by homeomorphic embedding, we show using effective representations for tree ideals that it entails the decidability of piecewise testable separability when the input languages are regular. A final byproduct is a new proof of the decidability of whether an input regular language of ranked trees is piecewise testable, which was first shown in the unranked case by Bojanczyk, Segoufin, and Straubing [Log. Meth. in Comput. Sci., 8(3:26), 2012].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.97/LIPIcs.ICALP.2016.97.pdf
Well-quasi-order
ideal
tree languages
first-order logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
98:1
98:14
10.4230/LIPIcs.ICALP.2016.98
article
Computation Tree Logic for Synchronization Properties
Chatterjee, Krishnendu
Doyen, Laurent
We present a logic that extends CTL (Computation Tree Logic) with operators that express synchronization properties. A property is synchronized in a system if it holds in all paths of a certain length. The new logic is obtained by using the same path quantifiers and temporal operators as in CTL, but allowing a different order of the quantifiers. This small syntactic variation induces a logic that can express non-regular properties for which known extensions of MSO with equality of path length are undecidable. We show that our variant of CTL is decidable and that the model-checking problem is in Delta_3^P = P^{NP^{NP}}, and is hard for the class of problems solvable in polynomial time using parallel access to an NP oracle. We analogously consider quantifier exchange in extensions of CTL, and we present operators defined using basic operators of CTL* that express the occurrence of infinitely many synchronization points. We show that the model-checking problem remains in Delta_3^P. The distinguishing powers of CTL and of our new logic coincide if the Next operator is allowed in the logics; thus the classical bisimulation quotient can be used for state-space reduction before model checking.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.98/LIPIcs.ICALP.2016.98.pdf
Computation Tree Logic
Synchronization
model-checking
complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
99:1
99:13
10.4230/LIPIcs.ICALP.2016.99
article
Deciding the Topological Complexity of Büchi Languages
Skrzypczak, Michal
Walukiewicz, Igor
We study the topological complexity of languages of Büchi automata on infinite binary trees. We show that such a language is either Borel and WMSO-definable, or Sigma_1^1-complete and not WMSO-definable; moreover it can be algorithmically decided which of the two cases holds. The proof relies on a direct reduction to deciding the winner in a finite game with a regular winning condition.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.99/LIPIcs.ICALP.2016.99.pdf
tree automata
non-determinism
Borel sets
topological complexity
decidability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
100:1
100:13
10.4230/LIPIcs.ICALP.2016.100
article
On the Skolem Problem for Continuous Linear Dynamical Systems
Chonev, Ventsislav
Ouaknine, Joël
Worrell, James
The Continuous Skolem Problem asks whether a real-valued function satisfying a linear differential equation has a zero in a given interval of real numbers. This is a fundamental reachability problem for continuous linear dynamical systems, such as linear hybrid automata and continuous-time Markov chains. Decidability of the problem is currently open; indeed, decidability is open even for the sub-problem in which a zero is sought in a bounded interval. In this paper we show decidability of the bounded problem subject to Schanuel's Conjecture, a unifying conjecture in transcendental number theory. We furthermore analyse the unbounded problem in terms of the frequencies of the differential equation, that is, the imaginary parts of the characteristic roots.
We show that the unbounded problem can be reduced to the bounded problem if there is at most one rationally linearly independent frequency, or if there are two rationally linearly independent frequencies and all characteristic roots are simple. We complete the picture by showing that decidability of the unbounded problem in the case of two (or more) rationally linearly independent frequencies would entail a major new effectiveness result in Diophantine approximation, namely computability of the Diophantine-approximation types of all real algebraic numbers.
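As a point of orientation (this is not the paper's method): when a solution of a linear ODE changes sign on an interval, a zero can be certified and located by bisection; the hard direction, which the decidability question concerns, is certifying the absence of a zero. A hedged sketch on an illustrative instance:

```python
import math

# f(t) = cos(t) - 1/2 satisfies the linear ODE f''' + f' = 0, so asking
# whether f has a zero in [0, 2] is an instance of the bounded Continuous
# Skolem Problem. A sign change gives a cheap certificate of existence.

def f(t):
    return math.cos(t) - 0.5

def zero_in(a, b, tol=1e-9):
    """Bisection, assuming f(a) and f(b) have opposite signs."""
    assert f(a) * f(b) < 0
    while b - a > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid          # sign change persists in [a, mid]
        else:
            a = mid          # sign change lies in [mid, b]
    return (a + b) / 2

print(zero_in(0.0, 2.0))  # ~ pi/3 = 1.0471...
```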
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.100/LIPIcs.ICALP.2016.100.pdf
differential equations
reachability
Baker’s Theorem
Schanuel’s Conjecture
semi-algebraic sets
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
101:1
101:14
10.4230/LIPIcs.ICALP.2016.101
article
Analysing Decisive Stochastic Processes
Bertrand, Nathalie
Bouyer, Patricia
Brihaye, Thomas
Carlier, Pierre
In 2007, Abdulla et al. introduced the elegant concept of decisive Markov chains. Intuitively, decisiveness allows one to lift the good properties of finite Markov chains to infinite Markov chains. For instance, the approximate quantitative reachability problem can be solved for decisive Markov chains (enjoying reasonable effectiveness assumptions), including probabilistic lossy channel systems and probabilistic vector addition systems with states. In this paper, we extend the concept of decisiveness to more general stochastic processes. This extension is non-trivial, as we consider stochastic processes with a potentially continuous set of states and uncountable branching (common features of real-time stochastic processes). This allows us to obtain decidability results for both qualitative and quantitative verification problems on some classes of real-time stochastic processes, including generalized semi-Markov processes and stochastic timed automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.101/LIPIcs.ICALP.2016.101.pdf
Real-time stochastic processes
Decisiveness
Approximation Scheme
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
102:1
102:15
10.4230/LIPIcs.ICALP.2016.102
article
Composition of Stochastic Transition Systems Based on Spans and Couplings
Gburek, Daniel
Baier, Christel
Klüppelholz, Sascha
Conventional approaches for parallel composition of stochastic systems relate probability measures of the individual components in terms of product measures. Such approaches rely on the assumption that components are stochastically independent, which might be too rigid for modeling real-world systems. In this paper, we introduce a parallel-composition operator for stochastic transition systems that is based on couplings of probability measures and does not impose any stochastic assumptions. When composing systems within our framework, the intended dependencies between components can be determined by providing so-called spans and span couplings. We present a congruence result for our operator with respect to a standard notion of bisimilarity and develop a general theory for spans, exploiting deep results from descriptive set theory. As an application of our general approach, we propose a model for stochastic hybrid systems called stochastic hybrid motion automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.102/LIPIcs.ICALP.2016.102.pdf
Stochastic Transition System
Composition
Stochastic Hybrid Motion Automata
Stochastically Independent
Coupling
Span
Bisimulation
Congruence
Po
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
103:1
103:14
10.4230/LIPIcs.ICALP.2016.103
article
On Restricted Nonnegative Matrix Factorization
Chistikov, Dmitry
Kiefer, Stefan
Marusic, Ines
Shirmohammadi, Mahsa
Worrell, James
Nonnegative matrix factorization (NMF) is the problem of decomposing a given nonnegative n*m matrix M into a product of a nonnegative n*d matrix W and a nonnegative d*m matrix H. Restricted NMF requires in addition that the column spaces of M and W coincide.
Finding the minimal inner dimension d is known to be NP-hard, both for NMF and restricted NMF. We show that restricted NMF is closely related to a question about the nature of minimal probabilistic automata, posed by Paz in his seminal 1971 textbook. We use this connection to answer Paz's question negatively, thus falsifying a positive answer claimed in 1974.
Furthermore, we investigate whether a rational matrix M always has a restricted NMF of minimal inner dimension whose factors W and H are also rational. We show that this holds for matrices M of rank at most 3 and we exhibit a rank-4 matrix for which W and H require irrational entries.
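The factorization in question can be made concrete with a tiny hand-worked instance; the sketch below is illustrative only (plain Python, not taken from the paper, and no claim about the paper's constructions):

```python
# Illustrative NMF instance: a nonnegative 3x3 matrix M written as
# M = W * H with W (3x2) and H (2x3) both nonnegative; the inner
# dimension here is d = 2.

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

W = [[1, 0],
     [0, 1],
     [1, 1]]
H = [[1, 2, 0],
     [0, 1, 3]]
M = matmul(W, H)
print(M)  # [[1, 2, 0], [0, 1, 3], [1, 3, 3]]

# Each column of M is a combination of columns of W, so col(M) is always
# contained in col(W); because H has full row rank 2, the two column
# spaces coincide here, making this a *restricted* NMF as well.
assert all(entry >= 0 for row in M for entry in row)
```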
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.103/LIPIcs.ICALP.2016.103.pdf
nonnegative matrix factorization
nonnegative rank
probabilistic automata
labelled Markov chains
minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
104:1
104:12
10.4230/LIPIcs.ICALP.2016.104
article
Proving the Herman-Protocol Conjecture
Bruna, Maria
Grigore, Radu
Kiefer, Stefan
Ouaknine, Joël
Worrell, James
Herman's self-stabilization algorithm, introduced 25 years ago, is a well-studied synchronous randomized protocol for enabling a ring of N processes collectively holding any odd number of tokens to reach a stable state in which a single token remains. Determining the worst-case expected time to stabilization is the central outstanding open problem about this protocol. It is known that there is a constant h such that any initial configuration has expected stabilization time at most hN^2. Ten years ago, McIver and Morgan established a lower bound of 4/27 ~ 0.148 for h, achieved with three equally spaced tokens, and conjectured this to be the optimal value of h. A series of papers over the last decade gradually reduced the upper bound on h, with the present record (achieved in 2014) standing at approximately 0.156. In this paper, we prove McIver and Morgan's conjecture and establish that h = 4/27 is indeed optimal.
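A hedged Monte Carlo sketch of the token dynamics described in the abstract; the coding choices and parameters are illustrative, not taken from the paper. For three equally spaced tokens on a ring of N = 27 processes, the bound h = 4/27 predicts an expected stabilization time of (4/27) * 27^2 = 108 steps.

```python
import random

def herman_steps(N, tokens):
    """Synchronous Herman dynamics: each token independently stays put or
    moves one step clockwise with probability 1/2; tokens landing on the
    same process annihilate in pairs. Returns steps until one token is left."""
    steps = 0
    while len(tokens) > 1:
        moved = [(t + random.randint(0, 1)) % N for t in tokens]
        counts = {}
        for pos in moved:
            counts[pos] = counts.get(pos, 0) + 1
        # a position hit an even number of times loses all its tokens
        tokens = [pos for pos, c in counts.items() if c % 2 == 1]
        steps += 1
    return steps

# McIver-Morgan worst case: three equally spaced tokens, here on N = 27.
random.seed(0)
trials = 2000
est = sum(herman_steps(27, [0, 9, 18]) for _ in range(trials)) / trials
print(f"estimated expected stabilization time: {est:.1f}")  # theory: 108
```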
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.104/LIPIcs.ICALP.2016.104.pdf
randomized protocols
self-stabilization
Lyapunov function
expected time
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
105:1
105:13
10.4230/LIPIcs.ICALP.2016.105
article
A Polynomial-Time Algorithm for Reachability in Branching VASS in Dimension One
Göller, Stefan
Haase, Christoph
Lazic, Ranko
Totzke, Patrick
Branching VASS (BVASS) generalise vector addition systems with states by allowing for special branching transitions that can non-deterministically distribute a counter value between two control states. A run of a BVASS consequently becomes a tree, and reachability asks whether a given configuration is the root of a reachability tree. This paper shows P-completeness of reachability in BVASS in dimension one, the first decidability result known for reachability in a subclass of BVASS. Moreover, we show that coverability and boundedness in BVASS in dimension one are P-complete as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.105/LIPIcs.ICALP.2016.105.pdf
branching vector addition systems
reachability
coverability
boundedness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
106:1
106:14
10.4230/LIPIcs.ICALP.2016.106
article
Reachability in Networks of Register Protocols under Stochastic Schedulers
Bouyer, Patricia
Markey, Nicolas
Randour, Mickael
Sangnier, Arnaud
Stan, Daniel
We study the almost-sure reachability problem in a distributed system obtained as the asynchronous composition of N copies (called processes) of the same automaton (called protocol), that can communicate via a shared register with finite domain. The automaton has two types of transitions: write-transitions update the value of the register, while read-transitions move to a new state depending on the content of the register. Non-determinism is resolved by a stochastic scheduler. Given a protocol, we focus on almost-sure reachability of a target state by one of the processes. The answer to this problem naturally depends on the number N of processes. However, we prove that our setting has a cut-off property: the answer to the almost-sure reachability problem is constant when N is large enough; we then develop an EXPSPACE algorithm deciding whether this constant answer is positive or negative.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.106/LIPIcs.ICALP.2016.106.pdf
Networks of Processes
Parametrized Systems
Stochastic Scheduler
Almost-sure Reachability
Cut-Off Property
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
107:1
107:15
10.4230/LIPIcs.ICALP.2016.107
article
A Program Logic for Union Bounds
Barthe, Gilles
Gaboardi, Marco
Grégoire, Benjamin
Hsu, Justin
Strub, Pierre-Yves
We propose a probabilistic Hoare logic aHL based on the union bound, a tool from basic probability theory. While simple, the union bound is an extremely common tool for analyzing randomized algorithms. In formal verification terms, the union bound allows flexible and compositional reasoning over possible ways an algorithm may go wrong. It also enables a clean separation between reasoning about probabilities and reasoning about events, which are expressed as standard first-order formulas in our logic. Notably, assertions in our logic are non-probabilistic, even though we can conclude probabilistic facts from the judgments.
Our logic can also prove accuracy properties for interactive programs, where the program must produce intermediate outputs as soon as pieces of the input arrive, rather than accessing the entire input at once. This setting also enables adaptivity, where later inputs may depend on earlier intermediate outputs. We show how to prove accuracy for several examples from the differential privacy literature, both interactive and non-interactive.
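The probabilistic fact underlying the logic is just the union bound: the probability that any of several bad events occurs is at most the sum of their individual probabilities. A minimal Monte Carlo sketch (the three events below are made up for illustration; two of them overlap, so the bound is strict):

```python
import random

def union_bound_demo(trials=100_000, seed=1):
    """Empirically compare Pr[any bad event] with the sum of the
    individual probabilities (the union bound's upper estimate).

    Toy events over one uniform draw x in [0,1):
      A1: x < 0.10,  A2: x > 0.95,  A3: x < 0.15  (A3 overlaps A1)
    """
    rng = random.Random(seed)
    any_bad = a1 = a2 = a3 = 0
    for _ in range(trials):
        x = rng.random()
        e1, e2, e3 = x < 0.10, x > 0.95, x < 0.15
        a1 += e1
        a2 += e2
        a3 += e3
        any_bad += (e1 or e2 or e3)
    p_union = any_bad / trials        # true failure probability, ~0.20
    p_sum = (a1 + a2 + a3) / trials   # union-bound estimate, ~0.30
    return p_union, p_sum
```

Compositional reasoning in a union-bound logic amounts to adding such per-event failure probabilities along a proof, exactly as `p_sum` adds them here.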
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.107/LIPIcs.ICALP.2016.107.pdf
Probabilistic Algorithms
Accuracy
Formal Verification
Hoare Logic
Union Bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
108:1
108:13
10.4230/LIPIcs.ICALP.2016.108
article
The Decidable Properties of Subrecursive Functions
Hoyrup, Mathieu
What can be decided or semidecided about a primitive recursive function, given a definition of that function by primitive recursion? What about subrecursive classes other than primitive recursive functions? We provide a complete and explicit characterization of the decidable and semidecidable properties. This characterization uses a variant of Kolmogorov complexity where only programs in a subrecursive programming language are allowed. More precisely, we prove that all the decidable and semidecidable properties can be obtained as combinations of two classes of basic decidable properties: (i) the function takes some particular values on a finite set of inputs, and (ii) every finite part of the function can be compressed to some extent.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.108/LIPIcs.ICALP.2016.108.pdf
Rice theorem
subrecursive class
decidable property
Kolmogorov complexity
compressibility
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
109:1
109:15
10.4230/LIPIcs.ICALP.2016.109
article
Polynomial Time Corresponds to Solutions of Polynomial Ordinary Differential Equations of Polynomial Length: The General Purpose Analog Computer and Computable Analysis Are Two Efficiently Equivalent Models of Computations
Bournez, Olivier
Graça, Daniel S.
Pouly, Amaury
The outcomes of this paper are twofold.
Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class P of languages computable in polynomial time in terms of differential equations with polynomial right-hand side.
This result gives an elegant and simple characterization of P that is purely continuous in both time and space. We believe it is the first time such classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis.
Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like P, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations.
Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog computers or analog models of computation: as a side effect, we get that the General Purpose Analog Computer (GPAC), introduced by Claude Shannon in 1941, is provably equivalent to Turing machines at both the computability and the complexity level, a fact that had never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both at a computability and at a computational complexity level.
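As a minimal numerical illustration of the objects involved, the sketch below integrates an ordinary differential equation with a polynomial right-hand side using Euler's method. The example y' = y (a degree-1 polynomial right-hand side, exact solution e^t) is chosen for this sketch only; it illustrates polynomial ODEs, not the paper's characterization of P.

```python
def euler_poly_ode(rhs, y0, t_end, steps):
    """Integrate y' = rhs(y) on [0, t_end] with Euler's explicit method.

    `rhs` is assumed to be a polynomial in y, the class of right-hand
    sides considered in the GPAC / polynomial-ODE setting.
    """
    y = y0
    h = t_end / steps
    for _ in range(steps):
        y = y + h * rhs(y)  # one Euler step of size h
    return y

# y' = y with y(0) = 1 has exact solution y(t) = e^t, so y(1) = e.
approx_e = euler_poly_ode(lambda y: y, 1.0, 1.0, 100_000)
```

With 100,000 steps the Euler approximation of e is accurate to roughly 1e-5; the paper's point is that the *length* of such solution curves, not wall-clock step counts, is the right complexity measure.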
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.109/LIPIcs.ICALP.2016.109.pdf
Analog Models of Computation
Continuous-Time Models of Computation
Computable Analysis
Implicit Complexity
Computational Complexity
Ordinary Differential Equations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
110:1
110:14
10.4230/LIPIcs.ICALP.2016.110
article
Algorithmic Complexity for the Realization of an Effective Subshift By a Sofic
Sablik, Mathieu
Schraudner, Michael
Realization of d-dimensional effective subshifts as projective sub-actions of d + d'-dimensional sofic subshifts for d' >= 1 is now well known [Hochman, 2009; Durand/Romashchenko/Shen, 2012; Aubrun/Sablik, 2013]. In this paper we are interested in qualitative aspects of this realization. We introduce a new topological conjugacy invariant for effective subshifts, the speed of convergence, with a view to exhibiting algorithmic properties of these subshifts, in contrast to the usual framework that focuses on undecidable properties.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.110/LIPIcs.ICALP.2016.110.pdf
Subshift
computability
time complexity
space complexity
tilings
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
111:1
111:13
10.4230/LIPIcs.ICALP.2016.111
article
On Word and Frontier Languages of Unsafe Higher-Order Grammars
Asada, Kazuyuki
Kobayashi, Naoki
Higher-order grammars are an extension of regular and context-free grammars, where nonterminals may take parameters. They have been extensively studied in the 1980s, and restudied recently in the context of model checking and program verification. We show that the class of unsafe order-(n+1) word languages coincides with the class of frontier languages of unsafe order-n tree languages. We use intersection types for transforming an order-(n+1) word grammar to a corresponding order-n tree grammar. The result has been proved for safe languages by Damm in 1982, but it has been open for unsafe languages, to our knowledge. Various known results on higher-order grammars can be obtained as almost immediate corollaries of our result.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.111/LIPIcs.ICALP.2016.111.pdf
intersection types
higher-order grammars
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
112:1
112:14
10.4230/LIPIcs.ICALP.2016.112
article
The Schützenberger Product for Syntactic Spaces
Gehrke, Mai
Petrisan, Daniela
Reggio, Luca
Starting from Boolean algebras of languages closed under quotients and using duality theoretic insights, we derive the notion of Boolean spaces with internal monoids as recognisers for arbitrary formal languages of finite words over finite alphabets. This leads to recognisers and syntactic spaces in a setting that is well-suited for applying tools from Stone duality as applied in semantics.
The main focus of the paper is the development of topo-algebraic constructions pertinent to the treatment of languages given by logic formulas. In particular, using the standard semantic view of quantification as projection, we derive a notion of Schützenberger product for Boolean spaces with internal monoids. This makes heavy use of the Vietoris construction - and its dual functor - which is central to the coalgebraic treatment of classical modal logic.
We show that the unary Schützenberger product for spaces yields a recogniser for the language of all models of the formula EXISTS x.phi(x), when applied to a recogniser for the language of all models of phi(x). Further, we generalise global and local versions of the theorems of Schützenberger and Reutenauer characterising the languages recognised by the binary Schützenberger product.
Finally, we provide an equational characterisation of Boolean algebras obtained by local Schützenberger product with the one element space based on an Egli-Milner type condition on generalised factorisations of ultrafilters on words.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.112/LIPIcs.ICALP.2016.112.pdf
Stone duality and Stone-Cech compactification
Vietoris hyperspace construction
logic on words
algebraic language theory beyond the regular setting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
113:1
113:14
10.4230/LIPIcs.ICALP.2016.113
article
Logic of Local Inference for Contextuality in Quantum Physics and Beyond
Kishida, Kohei
Contextuality in quantum physics provides a key resource for quantum information and computation. The topological approach in [Abramsky and Brandenburger, New J. Phys., 2011; Abramsky et al., CSL 2015] characterizes contextuality as "global inconsistency" coupled with "local consistency", revealing it to be a phenomenon also found in many other fields. This has yielded a logical method of detecting and proving the "global inconsistency" part of contextuality. Our goal is to capture the other, "local consistency" part, which requires a novel approach to logic that is sensitive to the topology of contexts. To achieve this, we formulate a logic of local inference by using context-sensitive theories and models in regular categories. This provides a uniform framework for local consistency, and lays a foundation for high-level methods of detecting, proving, and moreover using contextuality as computational resource.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.113/LIPIcs.ICALP.2016.113.pdf
Contextuality
quantum mechanics
regular category
regular logic
separated presheaf
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
114:1
114:14
10.4230/LIPIcs.ICALP.2016.114
article
Minimizing Resources of Sweeping and Streaming String Transducers
Baschenis, Félix
Gauwin, Olivier
Muscholl, Anca
Puppis, Gabriele
We consider minimization problems for natural parameters of word transducers: the number of passes performed by two-way transducers and the number of registers used by streaming transducers. We show how to compute in ExpSpace the minimum number of passes needed to implement a transduction given as a sweeping transducer, and we provide effective constructions of transducers of (worst-case optimal) doubly exponential size. We then consider streaming transducers where concatenations of registers are forbidden in the register updates. Based on a correspondence between the number of passes of sweeping transducers and the number of registers of equivalent concatenation-free streaming transducers, we derive a minimization procedure for the number of registers of concatenation-free streaming transducers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.114/LIPIcs.ICALP.2016.114.pdf
word transducers
streaming
2-way
sweeping transducers
minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
115:1
115:12
10.4230/LIPIcs.ICALP.2016.115
article
A Linear Acceleration Theorem for 2D Cellular Automata on All Complete Neighborhoods
Grandjean, Anaël
Poupet, Victor
Linear acceleration theorems are known for most computational models. Although such results have been proved for two-dimensional cellular automata working on specific neighborhoods, no general construction was known. We present here a technique of linear acceleration for all two-dimensional languages recognized by cellular automata working on complete neighborhoods.
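A minimal sketch of the basic object, a synchronous 2D cellular automaton update over a given neighborhood. The OR-spreading rule and the grid below are made-up examples; a complete neighborhood, as in the abstract, is one whose offsets let information propagate in every direction.

```python
def step(grid, neighborhood, rule):
    """One synchronous update of a 2D cellular automaton.

    `neighborhood` is a list of (dy, dx) offsets; `rule` maps the tuple
    of neighbor states (in that order) to the new cell state. Cells
    outside the finite grid are treated as the quiescent state 0.
    """
    h, w = len(grid), len(grid[0])

    def cell(y, x):
        return grid[y][x] if 0 <= y < h and 0 <= x < w else 0

    return [[rule(tuple(cell(y + dy, x + dx) for dy, dx in neighborhood))
             for x in range(w)] for y in range(h)]

# Example: von Neumann neighborhood with an OR rule, so a live state
# spreads to its orthogonal neighbors at each step.
VON_NEUMANN = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
or_rule = lambda states: int(any(states))
```

Acceleration constructions pack several such steps into one by enlarging the state set; this sketch only fixes what a single step on a chosen neighborhood means.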
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.115/LIPIcs.ICALP.2016.115.pdf
2D Cellular automata
linear acceleration
language recognition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
116:1
116:12
10.4230/LIPIcs.ICALP.2016.116
article
New Interpretation and Generalization of the Kameda-Weiner Method
Tamm, Hellis
We present a reinterpretation of the Kameda-Weiner method of finding a minimal nondeterministic finite automaton (NFA) of a language, in terms of atoms of the language. We introduce a method to generate NFAs from a set of languages, and show that the Kameda-Weiner method is a special case of it. Our method provides a unified view of the construction of several known NFAs, including the canonical residual finite state automaton and the atomaton of the language.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.116/LIPIcs.ICALP.2016.116.pdf
Nondeterministic finite automata
NFA minimization
Kameda-Weiner method
atoms of regular languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
117:1
117:14
10.4230/LIPIcs.ICALP.2016.117
article
Nesting Depth of Operators in Graph Database Queries: Expressiveness vs. Evaluation Complexity
Praveen, M.
Srivathsan, B.
Designing query languages for graph structured data is an active field of research, where expressiveness and efficient algorithms for query evaluation are conflicting goals. To better handle dynamically changing data, recent work has been done on designing query languages that can compare values stored in the graph database, without hard coding the values in the query. The main idea is to allow variables in the query and bind the variables to values when evaluating the query. For query languages that bind variables only once, query evaluation is usually NP-complete. There are query languages that allow binding inside the scope of Kleene star operators, which can themselves be in the scope of bindings and so on. Uncontrolled nesting of binding and iteration within one another results in query evaluation being PSPACE-complete.
We define a way to syntactically control the nesting depth of iterated bindings, and study how this affects expressiveness and efficiency of query evaluation. The result is an infinite, syntactically defined hierarchy of expressions. We prove that the corresponding language hierarchy is strict.
Given an expression in the hierarchy, we prove that it is undecidable to check if there is a language equivalent expression at lower levels. We prove that evaluating a query based on an expression at level i can be done in level i of the polynomial time hierarchy. Satisfiability of quantified Boolean formulas can be reduced to query evaluation; we study the relationship between alternations in Boolean quantifiers and the depth of nesting of iterated bindings.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.117/LIPIcs.ICALP.2016.117.pdf
graphs with data
regular data path queries
expressiveness
query evaluation
complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
118:1
118:15
10.4230/LIPIcs.ICALP.2016.118
article
A Hierarchy of Local Decision
Feuilloley, Laurent
Fraigniaud, Pierre
Hirvonen, Juho
We extend the notion of distributed decision in the framework of distributed network computing, inspired by recent results on so-called distributed graph automata. We show that, by using distributed decision mechanisms based on the interaction between a prover and a disprover, the size of the certificates distributed to the nodes for certifying a given network property can be drastically reduced. For instance, we prove that a minimum spanning tree (MST) can be certified with O(log(n))-bit certificates in n-node graphs, with just one interaction between the prover and the disprover, while it is known that certifying an MST requires Omega(log^2(n))-bit certificates if only the prover can act. The improvement can even be exponential for some simple graph properties.
For instance, it is known that certifying the existence of a nontrivial automorphism requires Omega(n^2) bits if only the prover can act. We show that there is a protocol with two interactions between the prover and the disprover enabling certification of a nontrivial automorphism with O(log(n))-bit certificates. These results are achieved by defining and analysing a local hierarchy of decision which generalizes the classical notions of proof-labelling schemes and locally checkable proofs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.118/LIPIcs.ICALP.2016.118.pdf
Distributed Network Computing
Distributed Algorithm
Distributed Decision
Locality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
119:1
119:14
10.4230/LIPIcs.ICALP.2016.119
article
Constraint Satisfaction Problems for Reducts of Homogeneous Graphs
Bodirsky, Manuel
Martin, Barnaby
Pinsker, Michael
Pongrácz, András
For n >= 3, let (H_n, E) denote the n-th Henson graph, i.e., the unique countable homogeneous graph whose finite induced subgraphs are exactly those finite graphs that do not embed the complete graph on n vertices. We show that for all structures Gamma with domain H_n whose relations are first-order definable in (H_n, E), the constraint satisfaction problem for Gamma is either in P or is NP-complete.
We moreover show a similar complexity dichotomy for all structures whose relations are first-order definable in a homogeneous graph whose reflexive closure is an equivalence relation.
Together with earlier results, in particular for the random graph, this completes the complexity classification of constraint satisfaction problems of structures first-order definable in countably infinite homogeneous graphs: all such problems are either in P or NP-complete.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.119/LIPIcs.ICALP.2016.119.pdf
Constraint Satisfaction
Homogeneous Graphs
Computational Complexity
Universal Algebra
Ramsey Theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
120:1
120:13
10.4230/LIPIcs.ICALP.2016.120
article
Sensitivity of Counting Queries
Arapinis, Myrto
Figueira, Diego
Gaboardi, Marco
In the context of statistical databases, the release of accurate statistical information about the collected data often puts at risk the privacy of the individual contributors. The goal of differential privacy is to maximise the utility of a query while protecting the individual records in the database. A natural way to achieve differential privacy is to add statistical noise to the result of the query.
In this context, a mechanism for releasing statistical information is thus a trade-off between utility and privacy. In order to balance these two "conflicting" requirements, privacy preserving mechanisms calibrate the added noise to the so-called sensitivity of the query, and thus a precise estimate of the sensitivity of the query is necessary to determine the amplitude of the noise to be added.
In this paper, we initiate a systematic study of sensitivity of counting queries over relational databases. We first observe that the sensitivity of a Relational Algebra query with counting is not computable in general, and that while the sensitivity of Conjunctive Queries with counting is computable, it becomes unbounded as soon as the query includes a join. We then consider restricted classes of databases (databases with constraints), and study the problem of computing the sensitivity of a query given such constraints. We are able to establish bounds on the sensitivity of counting conjunctive queries over constrained databases. The kinds of constraints studied here are functional dependencies and cardinality dependencies. The latter is a natural generalisation of functional dependencies that allows us to provide tight bounds on the sensitivity of counting conjunctive queries.
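The standard noise-calibration recipe the abstract refers to can be sketched for the simplest case: a join-free counting query has sensitivity 1 (one record changes the count by at most 1), so Laplace noise with scale 1/epsilon suffices. The data and predicate below are made-up examples.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_count(records, predicate, epsilon, seed=0):
    """Release a differentially private count.

    A counting query without joins has sensitivity 1: adding or removing
    one record changes the true count by at most 1, so Laplace noise with
    scale sensitivity/epsilon = 1/epsilon calibrates the release.
    """
    rng = random.Random(seed)
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

ages = [25, 37, 41, 18, 52, 33]          # toy database
released = noisy_count(ages, lambda a: a > 30, epsilon=0.5)
```

The paper's difficulty starts exactly where this sketch stops: once the query contains a join, the sensitivity is no longer 1 and may be unbounded without database constraints.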
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.120/LIPIcs.ICALP.2016.120.pdf
Differential privacy
sensitivity
relational algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
121:1
121:15
10.4230/LIPIcs.ICALP.2016.121
article
The Complexity of Rational Synthesis
Condurache, Rodica
Filiot, Emmanuel
Gentilini, Raffaella
Raskin, Jean-François
We study the computational complexity of the cooperative and non-cooperative rational synthesis problems, as introduced by Kupferman, Vardi and co-authors. We provide tight results for most of the classical omega-regular objectives, and show how to solve those problems optimally.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.121/LIPIcs.ICALP.2016.121.pdf
Non-zero sum games
reactive synthesis
omega-regular objectives
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
122:1
122:14
10.4230/LIPIcs.ICALP.2016.122
article
On the Complexity of Grammar-Based Compression over Fixed Alphabets
Casel, Katrin
Fernau, Henning
Gaspers, Serge
Gras, Benjamin
Schmid, Markus L.
It is shown that the shortest-grammar problem remains NP-complete if the alphabet is fixed and has a size of at least 24 (which settles an open question). On the other hand, this problem can be solved in polynomial time if the number of nonterminals is bounded, which is shown by encoding the problem as a problem on graphs with interval structure. Furthermore, we present an O(3^n) exact exponential-time algorithm, based on dynamic programming. Similar results are also given for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side (thus, investigating the impact of the "hierarchical depth" on the complexity of the shortest-grammar problem).
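In grammar-based compression, a string is represented by a straight-line program: an acyclic grammar in which every nonterminal has exactly one rule and which derives exactly one string. A minimal sketch of the decompression direction, with a made-up example grammar:

```python
def expand_slp(rules, start):
    """Expand a straight-line program into the string it derives.

    `rules` maps each nonterminal to the list of symbols on its single
    right-hand side; symbols absent from `rules` are terminals. Memoizing
    each nonterminal keeps expansion linear in the output size.
    """
    cache = {}

    def expand(sym):
        if sym not in rules:  # terminal symbol
            return sym
        if sym not in cache:
            cache[sym] = "".join(expand(s) for s in rules[sym])
        return cache[sym]

    return expand(start)

# Hypothetical example grammar: S -> AA, A -> BB, B -> ab
rules = {"S": ["A", "A"], "A": ["B", "B"], "B": ["a", "b"]}
```

The shortest-grammar problem is the hard inverse direction: given the string, find a smallest such set of rules.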
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.122/LIPIcs.ICALP.2016.122.pdf
Grammar-Based Compression
Straight-Line Programs
NP-Completeness
Exact Exponential Time Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
123:1
123:14
10.4230/LIPIcs.ICALP.2016.123
article
The Complexity of Downward Closure Comparisons
Zetzsche, Georg
The downward closure of a language is the set of all (not necessarily contiguous) subwords of its members. It is well-known that the downward closure of every language is regular. Moreover, recent results show that downward closures are computable for quite powerful system models.
One advantage of abstracting a language by its downward closure is that then equivalence and inclusion become decidable. In this work, we study the complexity of these two problems. More precisely, we consider the following decision problems: Given languages K and L from classes C and D, respectively, does the downward closure of K include (equal) that of L?
These problems are investigated for finite automata, one-counter automata, context-free grammars, and reversal-bounded counter automata. For each combination, we prove a completeness result either for fixed or for arbitrary alphabets. Moreover, for Petri net languages, we show that both problems are Ackermann-hard and for higher-order pushdown automata of order k, we prove hardness for complements of nondeterministic k-fold exponential time.
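For intuition, here is a brute-force sketch of the subword order and of downward closures of *finite* languages (for the infinite languages handled by the system models above, the closure is regular and needs automata constructions instead; this enumeration is illustrative only):

```python
from itertools import combinations

def is_subword(u, w):
    """True iff u is a (not necessarily contiguous) subword of w."""
    it = iter(w)
    # `c in it` advances the iterator until c is found, so matches
    # must occur left to right.
    return all(c in it for c in u)

def downward_closure(language):
    """Downward closure of a finite language, as an explicit set of words."""
    closed = set()
    for w in language:
        for k in range(len(w) + 1):
            for idx in combinations(range(len(w)), k):
                closed.add("".join(w[i] for i in idx))
    return closed
```

On explicit sets, the inclusion and equivalence questions of the abstract become plain set comparisons; the paper's subject is their complexity when the languages are given by automata or grammars.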
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.123/LIPIcs.ICALP.2016.123.pdf
Downward closures
Complexity
Inclusion
Equivalence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
124:1
124:9
10.4230/LIPIcs.ICALP.2016.124
article
Anti-Powers in Infinite Words
Fici, Gabriele
Restivo, Antonio
Silva, Manuel
Zamboni, Luca Q.
In combinatorics of words, a concatenation of k consecutive equal blocks is called a power of order k. In this paper we take a different point of view and define an anti-power of order k as a concatenation of k consecutive pairwise distinct blocks of the same length. As a main result, we show that every infinite word contains powers of any order or anti-powers of any order. That is, the existence of powers or anti-powers is an unavoidable regularity. Indeed, we prove a stronger result, which relates the density of anti-powers to the existence of a factor that occurs with arbitrary exponent. From these results, we derive that anti-powers of every order start at every position of an aperiodic uniformly recurrent word. We further show that any infinite word avoiding anti-powers of order 3 is ultimately periodic, and that there exist aperiodic words avoiding anti-powers of order 4. We also show that there exist aperiodic recurrent words avoiding anti-powers of order 6, and leave open the question whether there exist aperiodic recurrent words avoiding anti-powers of order k for k=4,5.
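The definition is easy to make executable for finite prefixes; a minimal checker for whether an anti-power of order k starts at a given position (the test words below are made-up examples):

```python
def has_anti_power(word, k, start=0):
    """Check whether an anti-power of order k starts at position `start`.

    An anti-power of order k is a concatenation of k consecutive blocks
    of the same length that are pairwise distinct; we try every block
    length that fits inside the (finite) word.
    """
    n = len(word)
    length = 1
    while start + k * length <= n:
        blocks = [word[start + i * length : start + (i + 1) * length]
                  for i in range(k)]
        if len(set(blocks)) == k:  # pairwise distinct
            return True
        length += 1
    return False
```

On a periodic word like "ababab" no order-3 anti-power starts at position 0, in line with the result that avoiding order-3 anti-powers forces ultimate periodicity.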
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.124/LIPIcs.ICALP.2016.124.pdf
infinite word
anti-power
unavoidable regularity
avoidability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
125:1
125:14
10.4230/LIPIcs.ICALP.2016.125
article
On Equivalence and Uniformisation Problems for Finite Transducers
Filiot, Emmanuel
Jecker, Ismaël
Löding, Christof
Winter, Sarah
Transductions are binary relations of finite words. For rational transductions, i.e., transductions defined by finite transducers, the inclusion, equivalence and sequential uniformisation problems are known to be undecidable. In this paper, we investigate stronger variants of inclusion, equivalence and sequential uniformisation, based on a general notion of transducer resynchronisation, and show their decidability. We also investigate the classes of finite-valued rational transductions and deterministic rational transductions, which are known to have a decidable equivalence problem. We show that sequential uniformisation is also decidable for them.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.125/LIPIcs.ICALP.2016.125.pdf
Transducers
Equivalence
Uniformisation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
126:1
126:13
10.4230/LIPIcs.ICALP.2016.126
article
The Bridge Between Regular Cost Functions and Omega-Regular Languages
Colcombet, Thomas
Fijalkow, Nathanaël
In this paper, we exhibit a one-to-one correspondence between omega-regular languages and a subclass of regular cost functions over finite words, called omega-regular like cost functions. This bridge between the two models allows one to readily import classical results such as the last appearance record or the McNaughton-Safra constructions to the realm of regular cost functions. In combination with game theoretic techniques, this also yields a simple description of an optimal procedure of history-determinisation for cost automata, a central result in the theory of regular cost functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.126/LIPIcs.ICALP.2016.126.pdf
Theory of Regular Cost Functions
Automata with Counters
Cost Automata
Quantitative Extensions of Automata
Determinisation of Automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
127:1
127:14
10.4230/LIPIcs.ICALP.2016.127
article
Solutions of Word Equations Over Partially Commutative Structures
Diekert, Volker
Jez, Artur
Kufleitner, Manfred
We give NSPACE(n*log(n)) algorithms solving the following decision problems. Satisfiability: Is the given equation over a free partially commutative monoid with involution (resp. a free partially commutative group) solvable? Finiteness: Are there only finitely many solutions of such an equation? PSPACE algorithms with worse complexities for the first problem are known, but so far, a PSPACE algorithm for the second problem was out of reach. Our results are much stronger: Given such an equation, its solutions form an EDT0L language effectively representable in NSPACE(n*log(n)). In particular, we give an effective description of the set of all solutions for equations with constraints in free partially commutative monoids and groups.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.127/LIPIcs.ICALP.2016.127.pdf
Word equations
EDT0L language
trace monoid
right-angled Artin group
partial commutation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
128:1
128:13
10.4230/LIPIcs.ICALP.2016.128
article
The Taming of the Semi-Linear Set
Chistikov, Dmitry
Haase, Christoph
Semi-linear sets, which are rational subsets of the monoid (Z^d,+), have numerous applications in theoretical computer science. Although semi-linear sets are usually given implicitly, by formulas in Presburger arithmetic or by other means, the effect of Boolean operations on semi-linear sets in terms of the size of description has primarily been studied for explicit representations. In this paper, we develop a framework suitable for implicitly presented semi-linear sets, in which the size of a semi-linear set is characterized by its norm—the maximal magnitude of a generator.
We put together a toolbox of operations and decompositions for semi-linear sets which gives bounds in terms of the norm (as opposed to just the bit-size of the description), a unified presentation, and simplified proofs. This toolbox, in particular, provides exponentially better bounds for the complement and set-theoretic difference. We also obtain bounds on unambiguous decompositions and, as an application of the toolbox, settle the complexity of the equivalence problem for exponent-sensitive commutative grammars.
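For concreteness, a linear set is determined by a base vector and a finite set of period vectors, and a semi-linear set is a finite union of linear sets. The sketch below enumerates a bounded portion of a linear set and computes the norm in the abstract's sense (maximal magnitude of a generator); the base and periods are made-up examples.

```python
from itertools import product

def linear_set(base, periods, coeff_bound):
    """Enumerate base + c_1*p_1 + ... + c_m*p_m for 0 <= c_i < coeff_bound.

    The full linear set takes all c_i >= 0; the bound just truncates the
    enumeration for illustration.
    """
    dim = len(base)
    out = set()
    for coeffs in product(range(coeff_bound), repeat=len(periods)):
        v = tuple(base[j] + sum(c * p[j] for c, p in zip(coeffs, periods))
                  for j in range(dim))
        out.add(v)
    return out

def norm(base, periods):
    """Maximal magnitude of a generator, the size measure of the abstract."""
    return max(abs(x) for v in [base, *periods] for x in v)
```

The paper's toolbox bounds how operations like complement and difference blow up this norm, which can be exponentially smaller than the bit-size of an explicit representation.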
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.128/LIPIcs.ICALP.2016.128.pdf
semi-linear sets
convex polyhedra
triangulations
integer linear programming
commutative grammars
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
129:1
129:14
10.4230/LIPIcs.ICALP.2016.129
article
Characterizing Classes of Regular Languages Using Prefix Codes of Bounded Synchronization Delay
Diekert, Volker
Walter, Tobias
In this paper we continue a classical work of Schützenberger on codes with bounded synchronization delay. He was interested in characterizing those regular languages where the groups in the syntactic monoid belong to a variety H. He allowed operations on the language side which are union, intersection, concatenation and a modified Kleene-star involving a mapping of a prefix code of bounded synchronization delay to a group G in H, but no complementation. In our notation this leads to the language classes SD_G(A^{infinity}) and SD_H(A^{infinity}). Our main result shows that SD_H(A^{infinity}) always corresponds to the languages having syntactic monoids where all subgroups are in H. Schützenberger showed this only for varieties H containing all Abelian groups. Our method shows the general result for all H directly on finite and infinite words. Furthermore, we introduce the notion of local Rees extensions, which refers to a simple type of classical Rees extensions. We give a decomposition of a monoid in terms of its groups and local Rees extensions. This gives a somewhat similar, but simpler, decomposition than Rhodes' synthesis theorem. Moreover, we need only a singly exponential number of operations. Finally, our decomposition yields an answer to a question in a recent paper of Almeida and Klíma about varieties that are closed under Rees extensions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.129/LIPIcs.ICALP.2016.129.pdf
formal language
synchronization delay
variety
Rees extension
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
130:1
130:13
10.4230/LIPIcs.ICALP.2016.130
article
An Optimal Dual Fault Tolerant Reachability Oracle
Choudhary, Keerti
Let G=(V,E) be a directed graph with n vertices and m edges, and let s in V be any designated source vertex. We address the problem of reporting reachability information from s under two vertex failures. We show that it is possible to compute in polynomial time an O(n)-size data structure that, for any query vertex v and any pair of failed vertices f_1, f_2, answers in O(1) time whether or not there exists a path from s to v in G\{f_1,f_2}.
For the simpler case of a single vertex failure, such a data structure can be obtained using the dominator tree from the celebrated work of Lengauer and Tarjan [TOPLAS 1979, Vol. 1]. However, no efficient data structure was previously known for handling more than one failure. In addition, we present a labeling scheme with O(log^3(n))-bit labels such that for any f_1, f_2, v in V, it is possible to determine in poly-logarithmic time whether v is reachable from s in G\{f_1,f_2} using only the labels of f_1, f_2 and v.
Our data structure can also be seen as an efficient mechanism for verifying double-dominators: for any given x, y, v in V we can determine in O(1) time whether the pair (x,y) is a double-dominator of v. Previously, the best known method for this problem used the dominator chain, which allowed verifying double-dominators of only a single vertex.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.130/LIPIcs.ICALP.2016.130.pdf
Fault tolerant
Directed graph
Reachability oracle
Labeling scheme
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
131:1
131:14
10.4230/LIPIcs.ICALP.2016.131
article
Graph Minors for Preserving Terminal Distances Approximately - Lower and Upper Bounds
Cheung, Yun Kuen
Goranci, Gramoz
Henzinger, Monika
Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed.
We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Omega(k^2) / Omega(k^{5/4}) / Omega(k^{6/5}) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not reduce the lower bound to a constant.
We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs excluding a fixed minor, and bounded-treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k^2) non-terminals, and any planar graph admits a minor with 1 + epsilon distortion and ~O((k/epsilon)^2) non-terminals.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.131/LIPIcs.ICALP.2016.131.pdf
Distance Approximating Minor
Graph Minor
Graph Compression
Vertex Sparsification
Metric Embedding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
132:1
132:16
10.4230/LIPIcs.ICALP.2016.132
article
Distance Labeling Schemes for Trees
Alstrup, Stephen
Gørtz, Inge Li
Halvorsen, Esben Bistrup
Porat, Ely
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes.
A lower bound by Gavoille et al. [Gavoille et al., J. Alg., 2004] and an upper bound by Peleg [Peleg, J. Graph Theory, 2000] establish that labels must use Theta(log^2(n)) bits. Gavoille et al. [Gavoille et al., ESA, 2001] show that for very small approximate stretch, labels use Theta(log(n) log(log(n))) bits. Several other papers investigate various variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003].
We improve the known upper and lower bounds of exact distance labeling by showing that 1/4*log^2(n) bits are needed and that 1/2*log^2(n) bits are sufficient. We also give (1 + epsilon)-stretch labeling schemes using Theta(log(n)) bits for constant epsilon > 0. (1 + epsilon)-stretch labeling schemes with polylogarithmic label size have previously been established for doubling dimension graphs by Talwar [Talwar, STOC, 2004].
In addition, we present matching upper and lower bounds for distance labeling for caterpillars, showing that labels must have size 2*log(n) - Theta(log(log(n))). For simple paths with k nodes and edge weights in [1,n], we show that labels must have size (k - 1)/k*log(n) + Theta(log(k)).
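As a point of comparison, the following toy scheme (our own illustration, not the paper's 1/2*log^2(n)-bit construction) labels each tree node with its root path; the distance between any two nodes is then computable from their labels alone, at the cost of labels of O(depth*log(n)) bits.

```python
# Toy exact distance labeling for trees: the label of v is the root -> v
# path, so dist(u, v) = depth(u) + depth(v) - 2 * depth(LCA), where the
# LCA is recovered as the common prefix of the two labels.
def make_labels(tree, root):
    labels, stack = {root: (root,)}, [root]
    while stack:
        u = stack.pop()
        for v in tree[u]:
            if v not in labels:
                labels[v] = labels[u] + (v,)
                stack.append(v)
    return labels

def distance(lu, lv):
    # length of the common prefix = depth(LCA) + 1
    common = 0
    for a, b in zip(lu, lv):
        if a != b:
            break
        common += 1
    return (len(lu) - common) + (len(lv) - common)

tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
L = make_labels(tree, 0)
assert distance(L[3], L[4]) == 2
assert distance(L[3], L[2]) == 3
```

The point of the schemes discussed above is to compress such labels to Theta(log^2(n)) bits while still answering distance queries from the labels alone.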
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.132/LIPIcs.ICALP.2016.132.pdf
Distributed computing
Distance labeling
Graph theory
Routing
Trees
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
133:1
133:15
10.4230/LIPIcs.ICALP.2016.133
article
Near Optimal Adjacency Labeling Schemes for Power-Law Graphs
Petersen, Casper
Rotbart, Noy
Simonsen, Jakob Grue
Wulff-Nilsen, Christian
An adjacency labeling scheme labels the n nodes of a graph with bit strings in a way that allows the adjacency of two nodes to be determined from their labels alone. Though many graph families have been meticulously studied for this problem, a non-trivial labeling scheme for the important family of power-law graphs has yet to be obtained. This family is particularly useful for social and web networks, as their underlying graphs are typically modelled as power-law graphs. Using simple strategies and a careful selection of a parameter, we show upper bounds for such labeling schemes of ~O(sqrt^{alpha}(n)) for power-law graphs with coefficient alpha, as well as nearly matching lower bounds. We also show two relaxations that allow for a label of logarithmic size, and extend the upper-bound technique to produce an improved distance labeling scheme for power-law graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.133/LIPIcs.ICALP.2016.133.pdf
Labeling schemes
Power-law graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
134:1
134:15
10.4230/LIPIcs.ICALP.2016.134
article
On the Resiliency of Randomized Routing Against Multiple Edge Failures
Chiesa, Marco
Gurtov, Andrei
Madry, Aleksander
Mitrovic, Slobodan
Nikolaevskiy, Ilya
Shapira, Michael
Shenker, Scott
We study the Static-Routing-Resiliency problem, motivated by routing on the Internet: Given a graph G = (V,E), a unique destination vertex d, and an integer constant c > 0, does there exist a static and destination-based routing scheme such that the correct delivery of packets from any source s to the destination d is guaranteed so long as (1) no more than c edges fail and (2) there exists a physical path from s to d? We approach this problem by relating the edge-connectivity of a graph, i.e., the minimum number of edges whose deletion partitions G, to its resiliency. Following the success of randomized routing algorithms in dealing with a variety of problems (e.g., Valiant load balancing in the network design problem), we study randomized routing algorithms for the Static-Routing-Resiliency problem. For any k-connected graph, we show a surprisingly simple randomized algorithm whose expected number of hops is O(|V|k) if at most k-1 edges fail, which reduces to O(|V|) if only a fraction t of the links fail (where t < 1 is a constant). Furthermore, our algorithm is deterministic if the routing does not encounter any failed link.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.134/LIPIcs.ICALP.2016.134.pdf
Randomized
Routing
Resilience
Connectivity
Arborescenses
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
135:1
135:13
10.4230/LIPIcs.ICALP.2016.135
article
Partition Bound Is Quadratically Tight for Product Distributions
Harsha, Prahladh
Jain, Rahul
Radhakrishnan, Jaikumar
Let f: {0,1}^n*{0,1}^n -> {0,1} be a 2-party function. For every product distribution mu on {0,1}^n*{0,1}^n, we show that
CC^{mu}_{0.49}(f) = O((log(prt_{1/8}(f))*log(log(prt_{1/8}(f))))^2),
where CC^{mu}_{epsilon}(f) is the distributional communication complexity of f with error at most epsilon under the distribution mu and prt_{1/8}(f) is the partition bound of f, as defined by Jain and Klauck [Proc. 25th CCC, 2010]. We also prove a similar bound in terms of IC_{1/8}(f), the information complexity of f, namely,
CC^{mu}_{0.49}(f) = O((IC_{1/8}(f)*log(IC_{1/8}(f)))^2).
The latter bound was recently and independently established by Kol [Proc. 48th STOC, 2016] using a different technique.
We show a similar result for query complexity under product distributions. Let g: {0,1}^n -> {0,1} be a function. For every bit-wise product distribution mu on {0,1}^n, we show that
QC^{mu}_{0.49}(g) = O((log(qprt_{1/8}(g))*log(log(qprt_{1/8}(g))))^2),
where QC^{mu}_{epsilon}(g) is the distributional query complexity of g with error at most epsilon under the distribution mu and qprt_{1/8}(g) is the query partition bound of the function g.
Partition bounds were introduced (in both communication complexity and query complexity models) to provide LP-based lower bounds for randomized communication complexity and randomized query complexity. Our results demonstrate that these lower bounds are polynomially tight for product distributions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.135/LIPIcs.ICALP.2016.135.pdf
partition bound
product distribution
communication complexity
query complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
136:1
136:14
10.4230/LIPIcs.ICALP.2016.136
article
Efficient Plurality Consensus, Or: the Benefits of Cleaning up from Time to Time
Berenbrink, Petra
Friedetzky, Tom
Giakkoupis, George
Kling, Peter
Plurality consensus considers a network of n nodes, each having one of k opinions. Nodes execute a (randomized) distributed protocol with the goal that all nodes adopt the plurality (the opinion initially supported by the most nodes). Communication is realized via the Gossip (or random phone call) model. A major open question has been whether there is a protocol for the complete graph that converges (w.h.p.) in polylogarithmic time and uses only polylogarithmic memory per node (local memory). We answer this question affirmatively.
We propose two protocols that need only mild assumptions on the bias in favor of the plurality. As an example of our results, consider the complete graph and an arbitrarily small constant multiplicative bias in favor of the plurality. Our first protocol achieves plurality consensus in O(log(k)*log(log(n))) rounds using log(k) + Theta(log(log(k))) bits of local memory. Our second protocol achieves plurality consensus in O(log(n)*log(log(n))) rounds using only log(k) + 4 bits of local memory. This disproves a conjecture by Becchetti et al. (SODA'15) implying that any protocol with local memory log(k)+O(1) has worst-case runtime Omega(k). We provide similar bounds for much weaker bias assumptions. At the heart of our protocols lies an undecided state, an idea introduced by Angluin et al. (Distributed Computing'08).
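To illustrate the undecided-state idea mentioned above, here is a simplified asynchronous pairwise sketch of our own (not the paper's protocols or its Gossip round model): a node meeting a conflicting opinion becomes undecided, and an undecided node adopts the next decided opinion it encounters.

```python
import random

UNDECIDED = None

def step(opinions, rng):
    # one pairwise interaction: node u contacts a random other node v
    u, v = rng.sample(range(len(opinions)), 2)
    if opinions[u] is UNDECIDED:
        if opinions[v] is not UNDECIDED:
            opinions[u] = opinions[v]      # undecided node adopts v's opinion
    elif opinions[v] is not UNDECIDED and opinions[v] != opinions[u]:
        opinions[u] = UNDECIDED            # conflicting opinions: become undecided

def run(opinions, rng, max_steps=1_000_000):
    for t in range(max_steps):
        if UNDECIDED not in opinions and len(set(opinions)) == 1:
            return opinions[0], t          # consensus reached
        step(opinions, rng)
    return None, max_steps

rng = random.Random(1)
opinions = [0] * 60 + [1] * 40             # opinion 0 holds a constant bias
winner, steps = run(opinions, rng)
assert winner in (0, 1)                    # a single run may still elect the minority
```

With a multiplicative bias the plurality wins w.h.p., but a single small run is no guarantee, which is why the assertion only checks that consensus was reached.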
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.136/LIPIcs.ICALP.2016.136.pdf
plurality consensus
voting
majority
distributed
gossip
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
137:1
137:14
10.4230/LIPIcs.ICALP.2016.137
article
Fast, Robust, Quantizable Approximate Consensus
Charron-Bost, Bernadette
Függer, Matthias
Nowak, Thomas
We introduce a new class of distributed algorithms for the approximate consensus problem in dynamic rooted networks, which we call amortized averaging algorithms. They are deduced from ordinary averaging algorithms by adding a value-gathering phase before each value update. This results in a drastic drop in decision times, from being exponential in the number n of processes to being polynomial under the assumption that each process knows n. In particular, the amortized midpoint algorithm is the first algorithm that achieves a linear decision time in dynamic rooted networks with an optimal contraction rate of 1/2 at each update step.
We then show robustness of the amortized midpoint algorithm under violation of network assumptions: it degrades gracefully if communication graphs are non-rooted from time to time, or if the number of processes is wrongly estimated. Finally, we prove that the amortized midpoint algorithm behaves well if processes can store and send only quantized values, rendering it well-suited for the design of dynamic networked systems. As a corollary we obtain that the 2-set consensus problem is solvable in linear time in any dynamic rooted network model.
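For intuition, the following sketch runs the plain (non-amortized) midpoint rule on a static complete graph, under our own simplifications; the paper's contribution is the amortized variant, which prepends a value-gathering phase to each update.

```python
def midpoint_round(values, neighbours):
    # synchronous update: each process moves to the midpoint of the extreme
    # values it heard (its own value included); contraction rate 1/2
    new = []
    for i, x in enumerate(values):
        heard = [values[j] for j in neighbours[i]] + [x]
        new.append((min(heard) + max(heard)) / 2)
    return new

# complete communication graph on 4 processes
nbrs = {i: [j for j in range(4) if j != i] for i in range(4)}
vals = [0.0, 1.0, 4.0, 9.0]
for _ in range(30):
    vals = midpoint_round(vals, nbrs)
assert max(vals) - min(vals) < 1e-9   # epsilon-agreement
```

On the complete graph every process hears all values, so a single round already contracts to the global midpoint; on sparser rooted graphs the convergence is what the amortized algorithm accelerates.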
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.137/LIPIcs.ICALP.2016.137.pdf
approximate consensus
dynamic networks
averaging algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
138:1
138:14
10.4230/LIPIcs.ICALP.2016.138
article
Leader Election in Unreliable Radio Networks
Ghaffari, Mohsen
Newport, Calvin
The dual graph model describes a radio network that contains both reliable and unreliable links. In recent years, this model has received significant attention from the distributed algorithms community [Kuhn/Lynch/Newport/Oshman/Richa, PODC 2010; Censor-Hillel/Gilbert/Kuhn/Lynch/Newport, Dist. Comp. 2014; Ghaffari/Haeupler/Lynch/Newport, DISC 2012; Ghaffari/Lynch/Newport, PODC 2013; Ghaffari/Kantor/Lynch/Newport, PODC 2014; Newport, DISC 2014; Ahmadi/Ghodselahi/Kuhn/Molla, OPODIS 2015; Lynch/Newport, PODC 2015]. Due to results in [Ghaffari/Lynch/Newport, PODC 2013], it is known that leader election plays a key role in enabling efficient computation in this difficult setting: a leader can synchronize the network in such a manner that most problems can be subsequently solved in time similar to the classical radio network model that lacks unreliable links. The feasibility of efficient leader election in the dual graph model, however, was left as an important open question. In this paper, we answer this question. In more detail, we prove new upper and lower bound results that characterize the complexity of leader election in this setting. By doing so, we reveal a surprising dichotomy: (1) under the assumption that the network size n is in the range 1 to N, where N is a large upper bound on the maximum possible network size (e.g., the ID space), leader election is fundamentally hard, requiring ~Omega(sqrt(N)) rounds to solve in the worst case; (2) under the assumption that n is in the range 2 to N, however, the problem can be solved in only ~O(D) rounds, for network diameter D, matching the lower bound for leader election in the standard radio network model (within log factors) [Ghaffari/Haeupler, SODA 2013].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.138/LIPIcs.ICALP.2016.138.pdf
Radio Networks
Leader Election
Unreliability
Randomized Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
139:1
139:14
10.4230/LIPIcs.ICALP.2016.139
article
Faster Deterministic Communication in Radio Networks
Czumaj, Artur
Davies, Peter
In this paper we improve the deterministic complexity of two fundamental communication primitives in the classical model of ad-hoc radio networks with unknown topology: broadcasting and wake-up. We consider an unknown radio network, in which nodes have no prior knowledge of the network topology and know only the size of the network n, the maximum in-degree Delta of any node, and the eccentricity D of the network.
For such networks, we first give an algorithm for wake-up, in both directed and undirected networks, based on the existence of small universal synchronizers. This algorithm runs in O((min{n,D*Delta}*log(n)*log(Delta))/(log(log(Delta)))) time, improving over the previous best O(n*log^2(n))-time result across all ranges of parameters, but particularly when maximum in-degree is small.
Next, we introduce a new combinatorial framework of block synchronizers and prove the existence of such objects of low size. Using this framework, we design a new deterministic algorithm for the fundamental problem of broadcasting, running in O(n*log(D)*log(log((D*Delta)/n))) time. This is the fastest known algorithm for this problem, improving upon the O(n*log(n)*log(log(n)))-time algorithm of De Marco (2010) and the O(n*log^2(D))-time algorithm due to Czumaj and Rytter (2003), the previous fastest results for directed networks, and is the first to come within a log-logarithmic factor of the Omega(n*log(D)) lower bound due to Clementi et al. (2003).
Our results also have direct implications for the fastest deterministic leader election and clock synchronization algorithms in both directed and undirected radio networks, tasks that are commonly used as building blocks for more complex procedures.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.139/LIPIcs.ICALP.2016.139.pdf
Radio networks
Communication networks
Broadcasting
Wake-Up
Deterministic algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
140:1
140:14
10.4230/LIPIcs.ICALP.2016.140
article
Networks of Complements
Babaioff, Moshe
Blumrosen, Liad
Nisan, Noam
We consider a network of sellers, each selling a single product, where the graph structure represents pair-wise complementarities between products. We study how the network structure affects revenue and social welfare of equilibria of the pricing game between the sellers. We prove positive and negative results, both of "Price of Anarchy" and of "Price of Stability" type, for special families of graphs (paths, cycles) as well as more general ones (trees, graphs). We describe best-reply dynamics that converge to non-trivial equilibrium in several families of graphs, and we use these dynamics to prove the existence of approximately-efficient equilibria.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.140/LIPIcs.ICALP.2016.140.pdf
Complements
Pricing
Networks
Game Theory
Price of Stability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
141:1
141:14
10.4230/LIPIcs.ICALP.2016.141
article
House Markets with Matroid and Knapsack Constraints
Krysta, Piotr
Zhang, Jinshan
The classical online bipartite matching problem and its generalizations are central algorithmic optimization problems. A second, related line of research, in the area of algorithmic mechanism design, concerns the broad class of house allocation or assignment problems. We introduce a single framework that unifies and generalizes these two streams of models. Our generalizations allow for arbitrary matroid constraints or knapsack constraints at every object in the allocation problem. We design and analyze approximation algorithms and truthful mechanisms for this framework. Our algorithms have the best possible approximation guarantees for most of the special instantiations of this framework, and are strong generalizations of the previously known results.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.141/LIPIcs.ICALP.2016.141.pdf
Algorithmic mechanism design; Approximation algorithms; Matching under preferences; Matroid and knapsack constraints
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
142:1
142:13
10.4230/LIPIcs.ICALP.2016.142
article
Reservation Exchange Markets for Internet Advertising
Goel, Gagan
Leonardi, Stefano
Mirrokni, Vahab
Nikzad, Afshin
Paes-Leme, Renato
The Internet display advertising industry follows two main business models. One model is based on direct deals between publishers and advertisers, who sign legal contracts containing terms of fulfillment for a future inventory. The second model is a spot market based on auctioning page views in real time on advertising exchange (AdX) platforms such as DoubleClick's Ad Exchange, RightMedia, or AppNexus. These exchanges play the role of intermediaries who sell items (e.g., page views) on behalf of a seller (e.g., a publisher) to buyers (e.g., advertisers) on the opposite side of the market. The computational and economic issues arising in this second model have been extensively investigated in recent times.
In this work, we consider a third emerging model called reservation exchange market. A reservation exchange is a two-sided market between buyer orders for blocks of advertisers' impressions and seller orders for blocks of publishers' page views. The goal is to match seller orders to buyer orders while providing the right incentives to both sides. In this work we first describe the important features of mechanisms for efficient reservation exchange markets. We then address the algorithmic problems of designing revenue sharing schemes to provide a fair division between sellers of the revenue collected from buyers.
A major conceptual contribution of this work is in showing that even though clinching ascending auctions and VCG mechanisms achieve the same outcome from a buyer's perspective, from the perspective of revenue sharing among sellers, clinching ascending auctions are much more informative than VCG auctions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.142/LIPIcs.ICALP.2016.142.pdf
Reservation Markets
Internet Advertising
Two-sided Markets
Clinching Auction
Envy-free allocations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
143:1
143:13
10.4230/LIPIcs.ICALP.2016.143
article
Competitive Analysis of Constrained Queueing Systems
Im, Sungjin
Kulkarni, Janardhan
Munagala, Kamesh
We consider the classical problem of constrained queueing (or switched networks): There is a set of N queues to which unit sized packets arrive. The queues are interdependent, so that at any time step, only a subset of the queues can be activated. One packet from each activated queue can be transmitted, and leaves the system. The set of feasible subsets that can be activated, denoted S, is downward closed and is known in advance. The goal is to find a scheduling policy that minimizes average delay (or flow time) of the packets. The constrained queueing problem models several practical settings including packet transmission in wireless networks and scheduling cross-bar switches.
In this paper, we study this problem using competitive analysis: the packet arrivals can be adversarial and the scheduling policy only uses information about packets currently queued in the system. We present an online algorithm that, for any epsilon > 0, has average flow time at most O(R^2/epsilon^3*OPT+NR) when given (1+epsilon) speed, i.e., the ability to schedule (1+epsilon) packets on average per time step. Here, R is the maximum number of queues that can be simultaneously scheduled, and OPT is the average flow time of the optimal policy. This asymptotic competitive ratio O(R^3/epsilon^3) improves upon the previous O(N/epsilon^2), which was obtained in the context of multi-dimensional scheduling [Im/Kulkarni/Munagala, FOCS 2015]. In the fully general model, where N can be exponentially larger than R, this is an exponential improvement. The algorithm presented in this paper is based on makespan estimates and is very different from the algorithm of [Im/Kulkarni/Munagala, FOCS 2015], which is a variation of the Max-Weight algorithm. Further, our policy is myopic, meaning that scheduling decisions at any step are based only on the current composition of the queues. We finally show that speed augmentation is necessary to achieve any bounded competitive ratio.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.143/LIPIcs.ICALP.2016.143.pdf
Online scheduling
Average flow time
Switch network
Adversarial
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
144:1
144:12
10.4230/LIPIcs.ICALP.2016.144
article
The Linear Voting Model
Cooper, Colin
Rivera, Nicolás
We study voting models on graphs. Initially, the vertices of a given graph hold some opinion. Over time, the opinions on the vertices change through interactions between graph neighbours. Under suitable conditions the system evolves to a state in which all vertices have the same opinion. In this work, we consider a new model of voting, called the Linear Voting Model. This model can be seen as a generalization of several models of voting, including, among others, pull voting and push voting. One advantage of our model is that, even though it is very general, it has a rich structure making the analysis tractable. In particular, we are able to solve the basic question about voting, namely the probability that a certain opinion wins the poll, and furthermore, under appropriate conditions, we are able to bound the expected time until some opinion wins.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.144/LIPIcs.ICALP.2016.144.pdf
Voter model
Interacting particles
Randomized algorithm
Probabilistic voting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
145:1
145:13
10.4230/LIPIcs.ICALP.2016.145
article
Discordant Voting Processes on Finite Graphs
Cooper, Colin
Dyer, Martin
Frieze, Alan
Rivera, Nicolás
We consider an asynchronous voting process on graphs which we call discordant voting, and which can be described as follows. Initially each vertex holds one of two opinions, red or blue say. Neighbouring vertices with different opinions interact pairwise. After an interaction both vertices have the same colour. The quantity of interest is T, the time to reach consensus, i.e. the number of interactions needed for all vertices to have the same colour.
An edge whose endpoint colours differ (i.e. one vertex is coloured red and the other one blue) is said to be discordant. A vertex is discordant if it is incident with a discordant edge. In discordant voting, all interactions are based on discordant edges. Because the voting process is asynchronous, there are several ways to update the colours of the interacting vertices.
- Push: Pick a random discordant vertex and push its colour to a random discordant neighbour.
- Pull: Pick a random discordant vertex and pull the colour of a random discordant neighbour.
- Oblivious: Pick a random endpoint of a random discordant edge and push the colour to the other end point.
We show that ET, the expected time to reach consensus, depends strongly on the underlying graph and the update rule. For connected graphs on n vertices and an initial half-red, half-blue colouring, the following hold. For oblivious voting, ET = n^2/4, independent of the underlying graph. For the complete graph K_n, the push protocol has ET = Theta(n*log(n)), whereas the pull protocol has ET = Theta(2^n). For the cycle C_n, all three protocols have ET = Theta(n^2). For the star graph, however, the pull protocol has ET = O(n^2), whereas the push protocol is slower with ET = Theta(n^2*log(n)).
The wide variation in ET for the pull protocol is to be contrasted with the well known model of synchronous pull voting, for which ET = O(n) on many classes of expanders.
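The three update rules above can be simulated directly; the sketch below (our own illustration, not code from the paper) drives each of them from the list of discordant edges.

```python
import random

def run_discordant(n, edges, rule, rng):
    colour = [i % 2 for i in range(n)]    # initial half red (0), half blue (1)
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    t = 0
    while True:
        # directed discordant pairs: each discordant edge appears twice
        disc = [(u, v) for u in range(n) for v in adj[u] if colour[u] != colour[v]]
        if not disc:
            return t                       # consensus after t interactions
        if rule == "oblivious":
            u, v = rng.choice(disc)        # random endpoint of a random discordant edge
            colour[v] = colour[u]
        else:
            u = rng.choice(sorted({x for x, _ in disc}))   # random discordant vertex
            v = rng.choice([w for w in adj[u] if colour[w] != colour[u]])
            if rule == "push":
                colour[v] = colour[u]      # push own colour to a discordant neighbour
            else:
                colour[u] = colour[v]      # pull the neighbour's colour
        t += 1

rng = random.Random(42)
cycle = [(i, (i + 1) % 8) for i in range(8)]
t = run_discordant(8, cycle, "oblivious", rng)
assert t >= 4                              # at least n/2 recolourings are needed
```

Every interaction recolours exactly one vertex, so from a balanced start at least n/2 interactions are needed, which the final assertion checks on the 8-cycle.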
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.145/LIPIcs.ICALP.2016.145.pdf
Distributed consensus
Voter model
Interacting particles
Randomized algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
146:1
146:15
10.4230/LIPIcs.ICALP.2016.146
article
Bounds on the Voter Model in Dynamic Networks
Berenbrink, Petra
Giakkoupis, George
Kermarrec, Anne-Marie
Mallmann-Trenn, Frederik
In the voter model, each node of a graph has an opinion, and in every round each node independently chooses a random neighbour and adopts its opinion. We are interested in the consensus time, which is the first point in time where all nodes have the same opinion. We consider dynamic graphs in which the edges are rewired in every round (by an adversary), giving rise to the graph sequence G_1, G_2, ..., where we assume that G_i has conductance at least phi_i. We assume that the degrees of nodes do not change over time, as one can show that the consensus time can otherwise become super-exponential. In the case of a sequence of d-regular graphs, we obtain asymptotically tight results. Even for some static graphs, such as the cycle, our results improve the state of the art. Here we show that the expected number of rounds until all nodes have the same opinion is bounded by O(m/(d_{min}*phi)) for any graph with m edges, conductance phi, and degrees at least d_{min}. In addition, we consider a biased dynamic voter model, where each opinion i is associated with a probability P_i, and when a node chooses a neighbour with that opinion, it adopts opinion i with probability P_i (otherwise the node keeps its current opinion). We show, for any regular dynamic graph, that if there is an epsilon > 0 difference between the highest and second highest opinion probabilities, and at least Omega(log(n)) nodes initially have the opinion with the highest probability, then all nodes adopt that opinion w.h.p. We obtain a bound on the convergence time, which becomes O(log(n)/phi) for static graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.146/LIPIcs.ICALP.2016.146.pdf
Voting
Distributed Computing
Conductance
Dynamic Graphs
Consensus
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
147:1
147:15
10.4230/LIPIcs.ICALP.2016.147
article
Bootstrap Percolation on Geometric Inhomogeneous Random Graphs
Koch, Christoph
Lengler, Johannes
Geometric inhomogeneous random graphs (GIRGs) are a model for scale-free networks with underlying geometry. We study bootstrap percolation on these graphs, which is a process modelling the spread of an infection of vertices starting within a (small) local region. We show that the process exhibits a phase transition in terms of the initial infection rate in this region. We determine the speed of the process in the supercritical case, up to lower order terms, and show that its evolution is fundamentally influenced by the underlying geometry. For vertices with given position and expected degree, we determine the infection time up to lower order terms. Finally, we show how this knowledge can be used to contain the infection locally by removing relatively few edges from the graph. This is the first time that the role of geometry on bootstrap percolation is analysed mathematically for geometric scale-free networks.
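The infection process itself is simple to state; the following sketch (our own, on an arbitrary adjacency list rather than a GIRG) infects a vertex once at least `threshold` of its neighbours are infected.

```python
from collections import deque

def bootstrap_percolation(adj, initially_infected, threshold=2):
    # a vertex becomes infected once >= threshold of its neighbours are infected
    infected = set(initially_infected)
    hits = {v: 0 for v in adj}             # infected-neighbour counters
    queue = deque(infected)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in infected:
                hits[v] += 1
                if hits[v] >= threshold:
                    infected.add(v)
                    queue.append(v)
    return infected

# 4-cycle: two antipodal seeds infect everything; a single seed spreads nowhere
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
assert bootstrap_percolation(adj, {0, 3}) == {0, 1, 2, 3}
assert bootstrap_percolation(adj, {0}) == {0}
```

The phase transition studied above is in the density of the initial seed set within a local region, which this threshold rule makes concrete.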
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.147/LIPIcs.ICALP.2016.147.pdf
Geometric inhomogeneous random graphs
scale-free network
bootstrap percolation
localised infection process
metastability threshold
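The infection process studied above can be run on any graph. The sketch below implements plain round-based r-neighbour bootstrap percolation, recording each vertex's infection time; the GIRG geometry itself is not reproduced, and the function name and the K5 example are illustrative only:

```python
def bootstrap_infection_times(adj, seeds, r=2):
    """Round-based r-neighbour bootstrap percolation: an uninfected vertex
    becomes infected in the first round in which at least r of its
    neighbours are already infected. Returns vertex -> infection round."""
    infected = {v: 0 for v in seeds}           # seeds are infected at round 0
    infected_nbrs = {v: 0 for v in range(len(adj))}
    frontier = list(seeds)
    rnd = 0
    while frontier:
        rnd += 1
        # newly infected vertices raise their uninfected neighbours' counters
        for u in frontier:
            for w in adj[u]:
                if w not in infected:
                    infected_nbrs[w] += 1
        frontier = [w for w in range(len(adj))
                    if w not in infected and infected_nbrs[w] >= r]
        for w in frontier:
            infected[w] = rnd
    return infected

# complete graph K5 with a local seed of two vertices
k5 = [[u for u in range(5) if u != v] for v in range(5)]
times = bootstrap_infection_times(k5, [0, 1], r=2)
```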
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
148:1
148:15
10.4230/LIPIcs.ICALP.2016.148
article
Sublinear-Space Bounded-Delay Enumeration for Massive Network Analytics: Maximal Cliques
Conte, Alessio
Grossi, Roberto
Marino, Andrea
Versari, Luca
Due to the sheer size of real-world networks, delay and space become quite relevant measures for the cost of enumeration in network analytics. This paper presents efficient algorithms for listing maximal cliques in networks, providing the first sublinear-space bounds with guaranteed delay per enumerated clique, thus comparing favorably with the known literature.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.148/LIPIcs.ICALP.2016.148.pdf
Enumeration algorithms
maximal cliques
network mining and analytics
reverse search
space efficiency
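The paper's sublinear-space algorithm is not reproduced here; for context, the classical Bron-Kerbosch procedure with pivoting below enumerates the same maximal cliques, though without the delay and space guarantees the paper establishes:

```python
def bron_kerbosch(adj, R=frozenset(), P=None, X=frozenset()):
    """Classical Bron-Kerbosch enumeration with pivoting. `adj` maps each
    vertex to its set of neighbours; yields every maximal clique exactly
    once as a frozenset of vertices."""
    if P is None:
        P = frozenset(adj)        # initially, all vertices are candidates
    if not P and not X:
        yield R                   # R cannot be extended: it is maximal
        return
    # pivot with most candidate neighbours prunes the most branches
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in P - adj[pivot]:
        yield from bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v])
        P = P - {v}
        X = X | {v}

# small example: a triangle {0,1,2} with a pendant edge {2,3}
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = sorted(sorted(c) for c in bron_kerbosch(g))
```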
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
149:1
149:14
10.4230/LIPIcs.ICALP.2016.149
article
On the Size and the Approximability of Minimum Temporally Connected Subgraphs
Axiotis, Kyriakos
Fotakis, Dimitris
We consider temporal graphs with discrete time labels and investigate the size and the approximability of minimum temporally connected spanning subgraphs. We present a family of minimally connected temporal graphs with n vertices and Omega(n^2) edges, thus resolving an open question of (Kempe, Kleinberg, Kumar, JCSS 64, 2002) about the existence of sparse temporal connectivity certificates. Next, we consider the problem of computing a minimum weight subset of temporal edges that preserve connectivity of a given temporal graph either from a given vertex r (r-MTC problem) or among all vertex pairs (MTC problem). We show that the approximability of r-MTC is closely related to the approximability of Directed Steiner Tree and that r-MTC can be solved in polynomial time if the underlying graph has bounded treewidth. We also show that the best approximation ratio for MTC is at least O(2^{log^{1-epsilon}(n)}) and at most O(min{n^{1+epsilon}, (Delta*M)^{2/3+epsilon}}), for any constant epsilon > 0, where M is the number of temporal edges and Delta is the maximum degree of the underlying graph. Furthermore, we prove that the unweighted version of MTC is APX-hard and that MTC is efficiently solvable in trees and 2-approximable in cycles.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.149/LIPIcs.ICALP.2016.149.pdf
Temporal Graphs
Temporal Connectivity
Approximation Algorithms
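Temporal reachability from a root r, the feasibility notion behind the r-MTC problem, can be checked with a single pass over the time-labelled edges in nondecreasing label order. The sketch below assumes temporal paths use strictly increasing labels; function and variable names are illustrative:

```python
def earliest_arrival(temporal_edges, source):
    """Earliest-arrival times from `source` in a temporal graph given as
    undirected (u, v, t) triples with discrete labels. An edge labelled t
    is usable only if its endpoint was reached strictly before t (strictly
    increasing labels along a path). Returns vertex -> earliest arrival."""
    arrival = {source: 0}
    for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):
        au = arrival.get(u, float("inf"))
        av = arrival.get(v, float("inf"))
        if au < t and t < av:
            arrival[v] = t
        if av < t and t < au:
            arrival[u] = t
    return arrival

# a temporal path 0 -> 1 -> 2 -> 3 plus one redundant late edge
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 2, 5)]
arrival = earliest_arrival(edges, 0)
```

A candidate subset of temporal edges is r-temporally connected exactly when this computation, run on the subset alone, reaches every vertex.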
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2016-08-23
55
150:1
150:13
10.4230/LIPIcs.ICALP.2016.150
article
Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem
Doerr, Benjamin
Künnemann, Marvin
The cryptogenography problem, introduced by Brody, Jakobsen, Scheder, and Winkler (ITCS 2014), is to collaboratively leak a piece of information known to only one member of a group (i) without revealing who was the origin of this information and (ii) without any private communication, neither during the process nor before. Despite several deep structural results, even the smallest case of leaking one bit of information present at one of two players is not well understood. Brody et al. gave a 2-round protocol enabling the two players to succeed with probability 1/3 and showed the hardness result that no protocol can give a success probability of more than 3/8.
In this work, we show that neither bound is tight. Our new hardness result, obtained by a different application of the concavity method used also in the previous work, states that a success probability better than 0.3672 is not possible. Using both theoretical and numerical approaches, we improve the lower bound to 0.3384, that is, we give a protocol leading to this success probability. To ease the design of new protocols, we prove an equivalent formulation of the cryptogenography problem as a solitaire vector splitting game. Via an automated game tree search, we find good strategies for this game. We then translate the splits occurring in these strategies into inequalities relating position values and use an LP solver to find an optimal solution for these inequalities. This gives slightly better game values, but more importantly, also a more compact representation of the protocol and a way to easily verify the claimed quality of the protocol.
Unfortunately, even the smallest protocol we found that beats the previous 1/3 success probability requires 16 rounds of communication. The protocol achieving the bound of 0.3384 consists, even in a compact representation, of 18248 game states. These numbers suggest that the task of finding good protocols for the cryptogenography problem, as well as understanding their structure, is harder than the simple problem formulation suggests.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol055-icalp2016/LIPIcs.ICALP.2016.150/LIPIcs.ICALP.2016.150.pdf
randomized protocols
anonymous communication
computer-aided proofs
solitaire games
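The cryptogenography game tree itself is far too large to reproduce (18248 states even in compact form), but the memoised exhaustive game-tree search the abstract describes can be illustrated on a toy stand-in; the sketch below uses normal-play Nim rather than the paper's vector splitting game:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(piles):
    """Memoised exhaustive game-tree search: True iff the player to move
    wins normal-play Nim from the sorted pile tuple `piles`."""
    if not any(piles):
        return False  # previous player took the last object and won
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            child = tuple(sorted(piles[:i] + (p - take,) + piles[i + 1:]))
            if not wins(child):
                return True  # some move leaves the opponent losing
    return False
```

Sorting each child position canonicalises symmetric states, so the memo table stays small; by the Sprague-Grundy theory, `wins(piles)` is False exactly when the XOR of the pile sizes is 0.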