LIPIcs, Volume 55

43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)




Event

ICALP 2016, July 11-15, 2016, Rome, Italy

Editors

Ioannis Chatzigiannakis
Michael Mitzenmacher
Yuval Rabani
Davide Sangiorgi

Publication Details

  • Published: 2016-08-23
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-013-2
  • DBLP: db/conf/icalp/icalp2016

Documents
Document
Complete Volume
LIPIcs, Volume 55, ICALP'16, Complete Volume

Authors: Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi


Abstract
LIPIcs, Volume 55, ICALP'16, Complete Volume

Cite as

LIPIcs, Volume 55, ICALP'16, Complete Volume. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Proceedings{chatzigiannakis_et_al:LIPIcs.ICALP.2016,
  title =	{{LIPIcs, Volume 55, ICALP'16, Complete Volume}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016},
  URN =		{urn:nbn:de:0030-drops-65844},
  doi =		{10.4230/LIPIcs.ICALP.2016},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Organization, List of Authors

Authors: Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi


Abstract
Front Matter, Table of Contents, Preface, Organization, List of Authors

Cite as

Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi. Front Matter, Table of Contents, Preface, Organization, List of Authors. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 0:i-0:xliv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chatzigiannakis_et_al:LIPIcs.ICALP.2016.0,
  author =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  title =	{{Front Matter, Table of Contents, Preface, Organization, List of Authors}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{0:i--0:xliv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.0},
  URN =		{urn:nbn:de:0030-drops-61917},
  doi =		{10.4230/LIPIcs.ICALP.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Organization, List of Authors}
}
Document
Invited Talk
Compute Choice (Invited Talk)

Authors: Devavrat Shah


Abstract
In this talk, we shall discuss the question of learning a distribution over permutations of n choices based on partial observations. This is central to capturing the so-called "choice" in a variety of contexts: understanding the preferences of consumers over a collection of products based on purchasing and browsing data in retail and e-commerce, learning public opinion on a collection of socio-economic issues based on sparse polling data, and deciding a ranking of teams or players based on the outcomes of games. The talk will primarily discuss the relationship between the ability to learn, the nature of the partial information, and the number of available observations. Connections to the classical theory of social choice and behavioral psychology, as well as to the modern literature in statistics, learning theory and operations research, will be discussed.

Cite as

Devavrat Shah. Compute Choice (Invited Talk). In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{shah:LIPIcs.ICALP.2016.1,
  author =	{Shah, Devavrat},
  title =	{{Compute Choice}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.1},
  URN =		{urn:nbn:de:0030-drops-63374},
  doi =		{10.4230/LIPIcs.ICALP.2016.1},
  annote =	{Keywords: Decision Systems, Learning Distributions, Partial observations}
}
Document
Invited Talk
Formally Verifying a Compiler: What Does It Mean, Exactly? (Invited Talk)

Authors: Xavier Leroy


Abstract
Compilers, and especially optimizing compilers, are complicated programs. Bugs in compilers happen, and can lead to miscompilation: the production of wrong executable code from a correct source program. Miscompilation is documented in the literature and a concern for high-assurance software, as it endangers the guarantees obtained by source-level formal verification of programs. Compiler verification is a radical solution to the miscompilation problem: by applying program proof to the compiler itself, we can obtain mathematically strong guarantees that the generated executable code is faithful to the semantics of the source program. The state of the art in this line of research is arguably the CompCert verified compiler. This talk will give an overview of this optimizing C compiler and of its formal verification, conducted with the Coq proof assistant. A formal verification is only as good as the specifications it uses. In other words, verification reduces the problem of trusting a large implementation to that of ensuring that its formal specification enforces the intended correctness properties. In the case of CompCert, the correctness statement that is proved is rather complex, as it involves large operational semantics (for the C language and for the assembly languages of the target architectures) and simulations between these semantics that support both choice refinement and behavior refinement. The talk will review and discuss these elements of the specification, along with some of the accompanying proof principles.

Cite as

Xavier Leroy. Formally Verifying a Compiler: What Does It Mean, Exactly? (Invited Talk). In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{leroy:LIPIcs.ICALP.2016.2,
  author =	{Leroy, Xavier},
  title =	{{Formally Verifying a Compiler: What Does It Mean, Exactly?}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.2},
  URN =		{urn:nbn:de:0030-drops-63384},
  doi =		{10.4230/LIPIcs.ICALP.2016.2},
  annote =	{Keywords: Compilers, Compiler Optimization, Compiler Verification}
}
Document
Invited Talk
Hardness of Approximation (Invited Talk)

Authors: Subhash Khot


Abstract
The talk will present connections between approximability of NP-complete problems, analysis, and geometry, and the role played by the Unique Games Conjecture in facilitating these connections.

Cite as

Subhash Khot. Hardness of Approximation (Invited Talk). In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{khot:LIPIcs.ICALP.2016.3,
  author =	{Khot, Subhash},
  title =	{{Hardness of Approximation}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.3},
  URN =		{urn:nbn:de:0030-drops-63395},
  doi =		{10.4230/LIPIcs.ICALP.2016.3},
  annote =	{Keywords: NP-completeness, Approximation algorithms, Inapproximability, Probabilistically Checkable Proofs, Discrete Fourier analysis}
}
Document
Invited Talk
Model Checking and Strategy Synthesis for Stochastic Games: From Theory to Practice (Invited Talk)

Authors: Marta Z. Kwiatkowska


Abstract
Probabilistic model checking is an automatic procedure for establishing if a desired property holds in a probabilistic model, aimed at verifying quantitative probabilistic specifications such as the probability of a critical failure occurring or expected time to termination. Much progress has been made in recent years in algorithms, tools and applications of probabilistic model checking, as exemplified by the probabilistic model checker PRISM (http://www.prismmodelchecker.org). However, the unstoppable rise of autonomous systems, from robotic assistants to self-driving cars, is placing greater and greater demands on quantitative modelling and verification technologies. To address the challenges of autonomy we need to consider collaborative, competitive and adversarial behaviour, which is naturally modelled using game-theoretic abstractions, enhanced with stochasticity arising from randomisation and uncertainty. This paper gives an overview of quantitative verification and strategy synthesis techniques developed for turn-based stochastic multi-player games, summarising recent advances concerning multi-objective properties and compositional strategy synthesis. The techniques have been implemented in the PRISM-games model checker built as an extension of PRISM.

Cite as

Marta Z. Kwiatkowska. Model Checking and Strategy Synthesis for Stochastic Games: From Theory to Practice (Invited Talk). In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kwiatkowska:LIPIcs.ICALP.2016.4,
  author =	{Kwiatkowska, Marta Z.},
  title =	{{Model Checking and Strategy Synthesis for Stochastic Games: From Theory to Practice}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.4},
  URN =		{urn:nbn:de:0030-drops-62285},
  doi =		{10.4230/LIPIcs.ICALP.2016.4},
  annote =	{Keywords: Quantitative verification, Stochastic games, Temporal logic, Model checking, Strategy synthesis}
}
Document
Fine-Grained Complexity Analysis of Two Classic TSP Variants

Authors: Mark de Berg, Kevin Buchin, Bart M. P. Jansen, and Gerhard Woeginger


Abstract
We analyze two classic variants of the Traveling Salesman Problem using the toolkit of fine-grained complexity. Our first set of results is motivated by the Bitonic TSP problem: given a set of n points in the plane, compute a shortest tour consisting of two monotone chains. It is a classic dynamic-programming exercise to solve this problem in O(n^2) time. While the near-quadratic dependency of similar dynamic programs for Longest Common Subsequence and Discrete Fréchet Distance has recently been proven to be essentially optimal under the Strong Exponential Time Hypothesis, we show that bitonic tours can be found in subquadratic time. More precisely, we present an algorithm that solves bitonic TSP in O(n*log^2(n)) time and its bottleneck version in O(n*log^3(n)) time. In the more general pyramidal TSP problem, the points to be visited are labeled 1, ..., n and the sequence of labels in the solution is required to have at most one local maximum. Our algorithms for the bitonic (bottleneck) TSP problem also work for the pyramidal TSP problem in the plane. Our second set of results concerns the popular k-opt heuristic for TSP in the graph setting. More precisely, we study the k-opt decision problem, which asks whether a given tour can be improved by a k-opt move that replaces k edges in the tour by k new edges. A simple algorithm solves k-opt in O(n^k) time for fixed k. For 2-opt, this is easily seen to be optimal. For k = 3 we prove that an algorithm with a runtime of the form ~O(n^{3-epsilon}) exists if and only if All-Pairs Shortest Paths in weighted digraphs has such an algorithm. For general k-opt, it is known that a runtime of f(k)*n^{o(k/log(k))} would contradict the Exponential Time Hypothesis. The results for k = 2, 3 may suggest that the actual time complexity of k-opt is Theta(n^k). We show that this is not the case, by presenting an algorithm that finds the best k-move in O(n^{floor(2k/3)+1}) time for fixed k >= 3. This implies that 4-opt can be solved in O(n^3) time, matching the best-known algorithm for 3-opt. Finally, we show how to beat the quadratic barrier for k = 2 in two important settings, namely for points in the plane and when we want to solve 2-opt repeatedly.
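
The quadratic baseline mentioned above is the textbook dynamic program for bitonic tours. The following is a minimal sketch of that classic O(n^2) exercise (not the paper's O(n*log^2(n)) algorithm), assuming points with distinct x-coordinates:

import math

def bitonic_tsp(points):
    # Classic O(n^2) dynamic program for bitonic TSP: the textbook baseline
    # that the paper improves to subquadratic time.
    pts = sorted(points)
    n = len(pts)
    if n < 2:
        return 0.0
    d = lambda i, j: math.dist(pts[i], pts[j])
    # B[i][j] (i < j): shortest bitonic path from pts[i] to pts[j]
    # that visits all of pts[0..j] exactly once.
    B = [[math.inf] * n for _ in range(n)]
    B[0][1] = d(0, 1)
    for j in range(2, n):
        for i in range(j - 1):                 # extend the right endpoint: pts[j-1] -> pts[j]
            B[i][j] = B[i][j - 1] + d(j - 1, j)
        # pts[j] attaches to some earlier point k on the other chain
        B[j - 1][j] = min(B[k][j - 1] + d(k, j) for k in range(j - 1))
    return B[n - 2][n - 1] + d(n - 2, n - 1)   # close the tour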

Cite as

Mark de Berg, Kevin Buchin, Bart M. P. Jansen, and Gerhard Woeginger. Fine-Grained Complexity Analysis of Two Classic TSP Variants. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{deberg_et_al:LIPIcs.ICALP.2016.5,
  author =	{de Berg, Mark and Buchin, Kevin and Jansen, Bart M. P. and Woeginger, Gerhard},
  title =	{{Fine-Grained Complexity Analysis of Two Classic TSP Variants}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.5},
  URN =		{urn:nbn:de:0030-drops-62770},
  doi =		{10.4230/LIPIcs.ICALP.2016.5},
  annote =	{Keywords: Traveling salesman problem, fine-grained complexity, bitonic tours, k-opt}
}
Document
Bicovering: Covering Edges With Two Small Subsets of Vertices

Authors: Amey Bhangale, Rajiv Gandhi, Mohammad Taghi Hajiaghayi, Rohit Khandekar, and Guy Kortsarz


Abstract
We study the following basic problem called Bi-Covering. Given a graph G(V, E), find two (not necessarily disjoint) sets A subseteq V and B subseteq V such that A union B = V and every edge e belongs to either the graph induced by A or to the graph induced by B. The goal is to minimize max{|A|, |B|}. This is the simplest case of the Channel Allocation problem [Gandhi et al., Networks, 2006]. A solution that outputs V, emptyset gives ratio at most 2. We show that under the Strong Unique Games Conjecture of [Bansal-Khot, FOCS, 2009] there is no (2 - epsilon)-ratio algorithm for the problem, for any constant epsilon > 0. Given a bipartite graph, Max-bi-clique is the problem of finding the largest k*k complete bipartite subgraph. For the Max-bi-clique problem, a constant factor hardness was known under the random 3-SAT hypothesis of Feige [Feige, STOC, 2002] and also under the assumption that NP !subseteq intersection_{epsilon > 0} BPTIME(2^{n^{epsilon}}) [Khot, SIAM J. on Comp., 2011]. It was an open problem in [Ambühl et al., SIAM J. on Comp., 2011] to prove inapproximability of Max-bi-clique assuming a weaker conjecture. Our result implies a similar hardness result assuming the Strong Unique Games Conjecture. On the algorithmic side, we also give better-than-2 approximations for Bi-Covering on numerous special graph classes. In particular, we get a 1.876-approximation for chordal graphs, an exact algorithm for interval graphs, 1 + o(1) for minor-free graphs, 2 - 4*delta/3 for graphs with minimum degree delta*n, 2/(1+delta^2/8) for delta-vertex expanders, 8/5 for split graphs, and 2 - (6/5)*1/d for graphs with constant minimum degree d. Our algorithmic results are quite non-trivial. In achieving these results, we use various known structural results about the graphs, combined with techniques that we develop tailored to getting better-than-2 approximations.
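
To make the objective concrete, here is a small brute-force checker and solver for Bi-Covering exactly as defined above; the exponential enumeration only pins down the definition and has nothing to do with the paper's approximation algorithms.

from itertools import product

def is_bicover(edges, A, B):
    # Bi-Covering condition: every edge lies entirely inside A or entirely inside B.
    A, B = set(A), set(B)
    return all((u in A and v in A) or (u in B and v in B) for u, v in edges)

def best_bicover(vertices, edges):
    # Try all assignments of each vertex to A, to B, or to both (so A union B = V),
    # minimizing max(|A|, |B|). Only viable on tiny instances.
    best = None
    for assign in product(range(3), repeat=len(vertices)):
        A = {v for v, a in zip(vertices, assign) if a in (0, 2)}
        B = {v for v, a in zip(vertices, assign) if a in (1, 2)}
        if is_bicover(edges, A, B):
            cost = max(len(A), len(B))
            if best is None or cost < best[0]:
                best = (cost, A, B)
    return best

# Example: a 4-cycle. The trivial solution A = V, B = {} has cost 4;
# the optimum has cost 3, e.g. A = {1, 2, 3}, B = {3, 4, 1}.
print(best_bicover([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))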

Cite as

Amey Bhangale, Rajiv Gandhi, Mohammad Taghi Hajiaghayi, Rohit Khandekar, and Guy Kortsarz. Bicovering: Covering Edges With Two Small Subsets of Vertices. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bhangale_et_al:LIPIcs.ICALP.2016.6,
  author =	{Bhangale, Amey and Gandhi, Rajiv and Hajiaghayi, Mohammad Taghi and Khandekar, Rohit and Kortsarz, Guy},
  title =	{{Bicovering: Covering Edges With Two Small Subsets of Vertices}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{6:1--6:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.6},
  URN =		{urn:nbn:de:0030-drops-62728},
  doi =		{10.4230/LIPIcs.ICALP.2016.6},
  annote =	{Keywords: Bi-covering, Unique Games, Max Bi-clique}
}
Document
Constant Congestion Routing of Symmetric Demands in Planar Directed Graphs

Authors: Chandra Chekuri, Alina Ene, and Marcin Pilipczuk


Abstract
We study the problem of routing symmetric demand pairs in planar digraphs. The input consists of a directed planar graph G = (V, E) and a collection of k source-destination pairs M = {s_1t_1, ..., s_kt_k}. The goal is to maximize the number of pairs that are routed along disjoint paths. A pair s_it_i is routed in the symmetric setting if there is a directed path connecting s_i to t_i and a directed path connecting t_i to s_i. In this paper we obtain a randomized poly-logarithmic approximation with constant congestion for this problem in planar digraphs. The main technical contribution is to show that a planar digraph with directed treewidth h contains a constant congestion crossbar of size Omega(h/polylog(h)).

Cite as

Chandra Chekuri, Alina Ene, and Marcin Pilipczuk. Constant Congestion Routing of Symmetric Demands in Planar Directed Graphs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chekuri_et_al:LIPIcs.ICALP.2016.7,
  author =	{Chekuri, Chandra and Ene, Alina and Pilipczuk, Marcin},
  title =	{{Constant Congestion Routing of Symmetric Demands in Planar Directed Graphs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.7},
  URN =		{urn:nbn:de:0030-drops-62737},
  doi =		{10.4230/LIPIcs.ICALP.2016.7},
  annote =	{Keywords: Disjoint paths, symmetric demands, planar directed graph}
}
Document
Quasi-4-Connected Components

Authors: Martin Grohe


Abstract
We introduce a new decomposition of graphs into quasi-4-connected components, where we call a graph quasi-4-connected if it is 3-connected and it only has separations of order 3 that separate a single vertex from the rest of the graph. Moreover, we give a cubic-time algorithm computing the decomposition of a given graph. Our decomposition into quasi-4-connected components refines the well-known decompositions of graphs into biconnected and triconnected components. We relate our decomposition to Robertson and Seymour's theory of tangles by establishing a correspondence between the quasi-4-connected components of a graph and its tangles of order 4.

Cite as

Martin Grohe. Quasi-4-Connected Components. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{grohe:LIPIcs.ICALP.2016.8,
  author =	{Grohe, Martin},
  title =	{{Quasi-4-Connected Components}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.8},
  URN =		{urn:nbn:de:0030-drops-62740},
  doi =		{10.4230/LIPIcs.ICALP.2016.8},
  annote =	{Keywords: decompositions, connectivity, tangles}
}
Document
Subexponential Time Algorithms for Embedding H-Minor Free Graphs

Authors: Hans L. Bodlaender, Jesper Nederlof, and Tom C. van der Zanden


Abstract
We establish the complexity of several graph embedding problems: Subgraph Isomorphism, Graph Minor, Induced Subgraph and Induced Minor, when restricted to H-minor free graphs. In each of these problems, we are given a pattern graph P and a host graph G, and want to determine whether P is a subgraph (minor, induced subgraph or induced minor) of G. We show that, for any fixed graph H and epsilon > 0, if P is H-minor free and G has treewidth tw, (induced) subgraph can be solved in 2^{O(k^{epsilon}*tw+k/log(k))}*n^{O(1)} time and (induced) minor can be solved in 2^{O(k^{epsilon}*tw+tw*log(tw)+k/log(k))}*n^{O(1)} time, where k = |V(P)|. We also show that this is optimal, in the sense that the existence of an algorithm for one of these problems running in 2^{o(n/log(n))} time would contradict the Exponential Time Hypothesis. This solves an open problem on the complexity of Subgraph Isomorphism for planar graphs. The key algorithmic insight is that dynamic programming approaches can be sped up by identifying isomorphic connected components in the pattern graph. This technique seems widely applicable, and it appears that there is a relatively unexplored class of problems that share similar upper and lower bounds.

Cite as

Hans L. Bodlaender, Jesper Nederlof, and Tom C. van der Zanden. Subexponential Time Algorithms for Embedding H-Minor Free Graphs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bodlaender_et_al:LIPIcs.ICALP.2016.9,
  author =	{Bodlaender, Hans L. and Nederlof, Jesper and van der Zanden, Tom C.},
  title =	{{Subexponential Time Algorithms for Embedding H-Minor Free Graphs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.9},
  URN =		{urn:nbn:de:0030-drops-62756},
  doi =		{10.4230/LIPIcs.ICALP.2016.9},
  annote =	{Keywords: subgraph isomorphism, graph minors, subexponential time}
}
Document
Relating Graph Thickness to Planar Layers and Bend Complexity

Authors: Stephane Durocher and Debajyoti Mondal


Abstract
The thickness of a graph G = (V, E) with n vertices is the minimum number of planar subgraphs of G whose union is G. A polyline drawing of G in R^2 is a drawing Gamma of G, where each vertex is mapped to a point and each edge is mapped to a polygonal chain. Bend and layer complexities are two important aesthetics of such a drawing. The bend complexity of Gamma is the maximum number of bends per edge in Gamma, and the layer complexity of Gamma is the minimum integer r such that the set of polygonal chains in Gamma can be partitioned into r disjoint sets, where each set corresponds to a planar polyline drawing. Let G be a graph of thickness t. By Fáry’s theorem, if t = 1, then G can be drawn on a single layer with bend complexity 0. A few extensions to higher thickness are known, e.g., if t = 2 (resp., t > 2), then G can be drawn on t layers with bend complexity 2 (resp., 3n + O(1)). In this paper we present an elegant extension of Fáry's theorem to draw graphs of thickness t > 2. We first prove that thickness-t graphs can be drawn on t layers with 2.25n + O(1) bends per edge. We then develop another technique to draw thickness-t graphs on t layers with reduced bend complexity for small values of t, e.g., for t in {3, 4}, the bend complexity decreases to O(sqrt(n)). Previously, the bend complexity was not known to be sublinear for t > 2. Finally, we show that graphs with linear arboricity k can be drawn on k layers with bend complexity 3*(k-1)*n/(4k-2).

Cite as

Stephane Durocher and Debajyoti Mondal. Relating Graph Thickness to Planar Layers and Bend Complexity. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 10:1-10:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{durocher_et_al:LIPIcs.ICALP.2016.10,
  author =	{Durocher, Stephane and Mondal, Debajyoti},
  title =	{{Relating Graph Thickness to Planar Layers and Bend Complexity}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.10},
  URN =		{urn:nbn:de:0030-drops-62767},
  doi =		{10.4230/LIPIcs.ICALP.2016.10},
  annote =	{Keywords: Graph Drawing, Thickness, Geometric Thickness, Layers, Bends}
}
Document
Optimal Approximate Matrix Product in Terms of Stable Rank

Authors: Michael B. Cohen, Jelani Nelson, and David P. Woodruff


Abstract
We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having m = O(~r/epsilon^2) rows. Here ~r is the maximum stable rank, i.e., the squared ratio of Frobenius and operator norms, of the two matrices being multiplied. This is a quantitative improvement over previous work of [Magen and Zouzias, SODA, 2011] and [Kyrillidis et al., arXiv, 2014] and is also optimal for any oblivious dimensionality-reducing map. Furthermore, due to the black box reliance on the subspace embedding property in our proofs, our theorem can be applied to a much more general class of sketching matrices than what was known before, in addition to achieving better bounds. For example, one can apply our theorem to efficient subspace embeddings such as the Subsampled Randomized Hadamard Transform or sparse subspace embeddings, or even to subspace embedding constructions that may be developed in the future. Our main theorem, via connections with spectral error matrix multiplication proven in previous work, implies quantitative improvements for approximate least squares regression and low rank approximation, and implies faster low rank approximation for popular kernels in machine learning such as the Gaussian and Sobolev kernels. Our main result has also already been applied to improve dimensionality reduction guarantees for k-means clustering, and also implies new results for nonparametric regression. Lastly, we point out that the proof of the "BSS" deterministic row-sampling result of [Batson et al., SICOMP, 2012] can be modified to obtain deterministic row-sampling for approximate matrix product in terms of the stable rank of the matrices. The original "BSS" proof was in terms of the rank rather than the stable rank.
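
As a toy illustration of the setup (not the paper's construction), the sketch below uses a plain Gaussian map, one standard example of a subspace embedding; the abstract's point is that any sketch with the subspace embedding property and m = O(~r/epsilon^2) rows works, including faster ones such as the SRHT or sparse embeddings.

import numpy as np

def stable_rank(M):
    # Squared ratio of Frobenius to operator norm, as in the abstract.
    return np.linalg.norm(M, 'fro') ** 2 / np.linalg.norm(M, 2) ** 2

def sketched_product(A, B, m, seed=0):
    # Approximate A^T B by (SA)^T (SB) with an m-row Gaussian sketch S.
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(m, A.shape[0])) / np.sqrt(m)
    return (S @ A).T @ (S @ B)

# Toy check: sketch size driven by the stable rank, spectral error measured.
A = np.linalg.qr(np.random.default_rng(1).normal(size=(2000, 20)))[0]
r = stable_rank(A)                          # ~20 for this orthonormal A
approx = sketched_product(A, A, m=int(8 * r / 0.5 ** 2))
err = np.linalg.norm(approx - A.T @ A, 2)   # spectral-norm error; small for this m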

Cite as

Michael B. Cohen, Jelani Nelson, and David P. Woodruff. Optimal Approximate Matrix Product in Terms of Stable Rank. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.ICALP.2016.11,
  author =	{Cohen, Michael B. and Nelson, Jelani and Woodruff, David P.},
  title =	{{Optimal Approximate Matrix Product in Terms of Stable Rank}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.11},
  URN =		{urn:nbn:de:0030-drops-62788},
  doi =		{10.4230/LIPIcs.ICALP.2016.11},
  annote =	{Keywords: subspace embeddings, approximate matrix multiplication, stable rank, regression, low rank approximation}
}
Document
Approximate Span Programs

Authors: Tsuyoshi Ito and Stacey Jeffery


Abstract
Span programs are a model of computation that has been used to design quantum algorithms, mainly in the query model. It is known that for any decision problem there exists a span program that leads to an algorithm with optimal quantum query complexity; however, finding such an algorithm is generally challenging. In this work, we consider new ways of designing quantum algorithms using span programs. We show how any span program that decides a problem f can also be used to decide "property testing" versions of the function f, or more generally, to approximate a quantity called the span program witness size, which is some property of the input related to f. For example, using our techniques, the span program for OR, which can be used to design an optimal algorithm for the OR function, can also be used to design optimal algorithms for: threshold functions, in which we want to decide if the Hamming weight of a string is above a threshold, or far below, given the promise that one of these is true; and approximate counting, in which we want to estimate the Hamming weight of the input up to some desired accuracy. We achieve these results by relaxing the requirement that 1-inputs hit some target exactly in the span program, which could potentially make the design of span programs significantly easier. In addition, we give an exposition of span program structure, which increases the general understanding of this important model. One implication of this is alternative algorithms for estimating the witness size when the phase gap of a certain unitary can be lower bounded. We show how to lower bound this phase gap in certain cases. As an application, we give the first upper bounds in the adjacency query model on the quantum time complexity of estimating the effective resistance between s and t, R_{s,t}(G). For this problem we obtain ~O(1/epsilon^{3/2}*n*sqrt(R_{s,t}(G))), using O(log(n)) space. In addition, when mu is a lower bound on lambda_2(G), by our phase gap lower bound we can obtain an upper bound of ~O(1/epsilon*n*sqrt(R_{s,t}(G)/mu)) for estimating effective resistance, also using O(log(n)) space.

Cite as

Tsuyoshi Ito and Stacey Jeffery. Approximate Span Programs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{ito_et_al:LIPIcs.ICALP.2016.12,
  author =	{Ito, Tsuyoshi and Jeffery, Stacey},
  title =	{{Approximate Span Programs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.12},
  URN =		{urn:nbn:de:0030-drops-62956},
  doi =		{10.4230/LIPIcs.ICALP.2016.12},
  annote =	{Keywords: Quantum algorithms, span programs, quantum query complexity, effective resistance}
}
Document
Power of Quantum Computation with Few Clean Qubits

Authors: Keisuke Fujii, Hirotada Kobayashi, Tomoyuki Morimae, Harumichi Nishimura, Shuhei Tamate, and Seiichiro Tani


Abstract
This paper investigates the power of polynomial-time quantum computation in which only a very limited number of qubits are initially clean in the |0> state, and all the remaining qubits are initially in the totally mixed state. No initializations of qubits are allowed during the computation, nor are intermediate measurements. The main contribution of this paper is to develop unexpectedly strong error-reduction methods for such quantum computations that simultaneously reduce the number of necessary clean qubits. It is proved that any problem solvable by a polynomial-time quantum computation with one-sided bounded error that uses logarithmically many clean qubits is also solvable with exponentially small one-sided error using just two clean qubits, and with polynomially small one-sided error using just one clean qubit. It is further proved in the two-sided-error case that any problem solvable by such a computation with a constant gap between completeness and soundness using logarithmically many clean qubits is also solvable with exponentially small two-sided error using just two clean qubits. If only one clean qubit is available, the problem is again still solvable with exponentially small error in one of the completeness and soundness and with polynomially small error in the other. An immediate consequence is that the Trace Estimation problem defined with fixed constant threshold parameters is complete for BQ_{[1]}P and BQ_{log}P, the classes of problems solvable by polynomial-time quantum computations with completeness 2/3 and soundness 1/3 using just one and logarithmically many clean qubits, respectively. The techniques used for proving the error-reduction results may be of independent interest in themselves, and one of the technical tools can also be used to show the hardness of weak classical simulations of one-clean-qubit computations (i.e., DQC1 computations).

Cite as

Keisuke Fujii, Hirotada Kobayashi, Tomoyuki Morimae, Harumichi Nishimura, Shuhei Tamate, and Seiichiro Tani. Power of Quantum Computation with Few Clean Qubits. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fujii_et_al:LIPIcs.ICALP.2016.13,
  author =	{Fujii, Keisuke and Kobayashi, Hirotada and Morimae, Tomoyuki and Nishimura, Harumichi and Tamate, Shuhei and Tani, Seiichiro},
  title =	{{Power of Quantum Computation with Few Clean Qubits}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.13},
  URN =		{urn:nbn:de:0030-drops-62960},
  doi =		{10.4230/LIPIcs.ICALP.2016.13},
  annote =	{Keywords: DQC1, quantum computing, complete problems, error reduction}
}
Document
Space-Efficient Error Reduction for Unitary Quantum Computations

Authors: Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura


Abstract
This paper presents a general space-efficient method for error reduction for unitary quantum computation. Consider a polynomial-time quantum computation with completeness c and soundness s, either with or without a witness (corresponding to QMA and BQP, respectively). To convert this computation into a new computation with error at most 2^{-p}, the most space-efficient method known requires extra workspace of O(p*log(1/(c-s))) qubits. This space requirement is too large for scenarios like logarithmic-space quantum computations. This paper shows an error-reduction method for unitary quantum computations (i.e., computations without intermediate measurements) that requires extra workspace of just O(log(p/(c-s))) qubits. This in particular gives the first method of strong amplification for logarithmic-space unitary quantum computations with two-sided bounded error. This also leads to a number of consequences in complexity theory, such as the uselessness of quantum witnesses in bounded-error logarithmic-space unitary quantum computations, the PSPACE upper bound for QMA with exponentially-small completeness-soundness gap, and strong amplification for matchgate computations.

Cite as

Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura. Space-Efficient Error Reduction for Unitary Quantum Computations. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.ICALP.2016.14,
  author =	{Fefferman, Bill and Kobayashi, Hirotada and Yen-Yu Lin, Cedric and Morimae, Tomoyuki and Nishimura, Harumichi},
  title =	{{Space-Efficient Error Reduction for Unitary Quantum Computations}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.14},
  URN =		{urn:nbn:de:0030-drops-62975},
  doi =		{10.4230/LIPIcs.ICALP.2016.14},
  annote =	{Keywords: space-bounded computation, quantum Merlin-Arthur proof systems, error reduction, quantum computing}
}
Document
Linear Time Algorithm for Quantum 2SAT

Authors: Itai Arad, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang


Abstract
A canonical result about satisfiability theory is that the 2-SAT problem can be solved in linear time, despite the NP-hardness of the 3-SAT problem. In the quantum 2-SAT problem, we are given a family of 2-qubit projectors Q_{ij} on a system of n qubits, and the task is to decide whether the Hamiltonian H = sum Q_{ij} has a 0-eigenvalue, or whether its smallest eigenvalue is larger than 1/n^c for some c = O(1). The problem is not only a natural extension of the classical 2-SAT problem to the quantum case, but is also equivalent to the problem of finding the ground state of 2-local frustration-free Hamiltonians of spin 1/2, a well-studied model believed to capture certain key properties in modern condensed matter physics. While Bravyi has shown that the quantum 2-SAT problem has a classical polynomial-time algorithm, the running time of his algorithm is O(n^4). In this paper we give a classical algorithm with linear running time in the number of local projectors, thereby achieving the best possible complexity.
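
For contrast with the quantum problem, the classical linear-time 2-SAT result cited in the first sentence is realized by the standard implication-graph/SCC algorithm; a compact sketch (Kosaraju's two-pass variant) follows.

from collections import defaultdict

def solve_2sat(n, clauses):
    # Variables are 1..n; a literal is +v or -v; a clause (a, b) means "a OR b".
    lit = lambda x: 2 * (abs(x) - 1) + (x < 0)   # literal -> node id
    neg = lambda u: u ^ 1                        # node of the negated literal
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:                         # (a or b) == (not a -> b) and (not b -> a)
        for u, v in ((neg(lit(a)), lit(b)), (neg(lit(b)), lit(a))):
            graph[u].append(v)
            rgraph[v].append(u)
    order, seen = [], [False] * (2 * n)
    def dfs1(u):                                 # pass 1: record finish order on G
        seen[u] = True
        for v in graph[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(2 * n):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * (2 * n)
    def dfs2(u, c):                              # pass 2: label SCCs on the reverse graph
        comp[u] = c
        for v in rgraph[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):                    # SCCs come out in topological order
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    if any(comp[2 * i] == comp[2 * i + 1] for i in range(n)):
        return None                              # x equivalent to not-x: unsatisfiable
    # x is true iff not-x precedes x in the topological order of implications.
    return [comp[2 * i + 1] < comp[2 * i] for i in range(n)]

# Example: (x1 or x2) and (not x1 or x2) forces x2 = True.
assert solve_2sat(2, [(1, 2), (-1, 2)])[1] is True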

Cite as

Itai Arad, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang. Linear Time Algorithm for Quantum 2SAT. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{arad_et_al:LIPIcs.ICALP.2016.15,
  author =	{Arad, Itai and Santha, Miklos and Sundaram, Aarthi and Zhang, Shengyu},
  title =	{{Linear Time Algorithm for Quantum 2SAT}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.15},
  URN =		{urn:nbn:de:0030-drops-62795},
  doi =		{10.4230/LIPIcs.ICALP.2016.15},
  annote =	{Keywords: Quantum SAT, Davis-Putnam Procedure, Linear Time Algorithm}
}
Document
Optimal Quantum Algorithm for Polynomial Interpolation

Authors: Andrew M. Childs, Wim van Dam, Shih-Han Hung, and Igor E. Shparlinski


Abstract
We consider the number of quantum queries required to determine the coefficients of a degree-d polynomial over F_q. A lower bound shown independently by Kane and Kutin and by Meyer and Pommersheim shows that d/2 + 1/2 quantum queries are needed to solve this problem with bounded error, whereas an algorithm of Boneh and Zhandry shows that d quantum queries are sufficient. We show that the lower bound is achievable: d/2 + 1/2 quantum queries suffice to determine the polynomial with bounded error. Furthermore, we show that d/2 + 1 queries suffice to achieve probability approaching 1 for large q. These upper bounds improve results of Boneh and Zhandry on the insecurity of cryptographic protocols against quantum attacks. We also show that our algorithm’s success probability as a function of the number of queries is precisely optimal. Furthermore, the algorithm can be implemented with gate complexity poly(log(q)) with negligible decrease in the success probability. We end with a conjecture about the quantum query complexity of multivariate polynomial interpolation.
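
The classical baseline implicit here is that d+1 evaluations determine a degree-d polynomial. The sketch below does Lagrange interpolation over F_q for a prime q (a simplifying assumption so inverses are just modular inverses); the paper's result is that roughly half that many quantum queries suffice.

def poly_mul_linear(p, c, q):
    # Multiply polynomial p (coefficients low-order first) by (x - c), mod q.
    out = [0] * (len(p) + 1)
    for k, a in enumerate(p):
        out[k] = (out[k] - c * a) % q
        out[k + 1] = (out[k + 1] + a) % q
    return out

def lagrange_interp_mod(points, q):
    # Recover the coefficients of the unique polynomial of degree < len(points)
    # through the given (x, y) pairs over F_q, q prime. Low-order first.
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj, q)
                denom = denom * (xi - xj) % q
        scale = yi * pow(denom, -1, q) % q      # modular inverse (Python 3.8+)
        coeffs = [(c + scale * b) % q for c, b in zip(coeffs, basis)]
    return coeffs

# Example over F_7: three evaluations recover f(x) = 3 + 2x + x^2.
pts = [(x, (3 + 2 * x + x * x) % 7) for x in (0, 1, 2)]
assert lagrange_interp_mod(pts, 7) == [3, 2, 1]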

Cite as

Andrew M. Childs, Wim van Dam, Shih-Han Hung, and Igor E. Shparlinski. Optimal Quantum Algorithm for Polynomial Interpolation. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 16:1-16:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{childs_et_al:LIPIcs.ICALP.2016.16,
  author =	{Childs, Andrew M. and van Dam, Wim and Hung, Shih-Han and Shparlinski, Igor E.},
  title =	{{Optimal Quantum Algorithm for Polynomial Interpolation}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.16},
  URN =		{urn:nbn:de:0030-drops-62985},
  doi =		{10.4230/LIPIcs.ICALP.2016.16},
  annote =	{Keywords: Quantum algorithms, query complexity, polynomial interpolation, finite fields}
}
Document
Lower Bounds for the Approximate Degree of Block-Composed Functions

Authors: Justin Thaler


Abstract
We describe a new hardness amplification result for point-wise approximation of Boolean functions by low-degree polynomials. Specifically, for any function f on N bits, define F(x_1,...,x_M) = OMB(f(x_1),...,f(x_M)) to be the function on M*N bits obtained by block-composing f with a function known as ODD-MAX-BIT. We show that, if f requires large degree to approximate to error 2/3 in a certain one-sided sense (captured by a complexity measure known as positive one-sided approximate degree), then F requires large degree to approximate even to error 1-2^{-M}. This generalizes a result of Beigel (Computational Complexity, 1994), who proved an identical result for the special case f=OR. Unlike related prior work, our result implies strong approximate degree lower bounds even for many functions F that have low threshold degree. Our proof is constructive: we exhibit a solution to the dual of an appropriate linear program capturing the approximate degree of any function. We describe several applications, including improved separations between the complexity classes P^{NP} and PP in both the query and communication complexity settings. Our separations improve on work of Beigel (1994) and Buhrman, Vereshchagin, and de Wolf (CCC, 2007).

Cite as

Justin Thaler. Lower Bounds for the Approximate Degree of Block-Composed Functions. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{thaler:LIPIcs.ICALP.2016.17,
  author =	{Thaler, Justin},
  title =	{{Lower Bounds for the Approximate Degree of Block-Composed Functions}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.17},
  URN =		{urn:nbn:de:0030-drops-63133},
  doi =		{10.4230/LIPIcs.ICALP.2016.17},
  annote =	{Keywords: approximate degree, one-sided approximate degree, polynomial approximations, threshold degree, communication complexity}
}
Document
Dynamic Graph Stream Algorithms in o(n) Space

Authors: Zengfeng Huang and Pan Peng


Abstract
In this paper we study graph problems in the dynamic streaming model, where the input is defined by a sequence of edge insertions and deletions. As many natural problems require Omega(n) space, where n is the number of vertices, existing work has mainly focused on designing ~O(n) space algorithms. Although sublinear in the number of edges for dense graphs, this could still be too large for many applications (e.g., when n is huge or the graph is sparse). In this work, we give single-pass algorithms beating this space barrier for two classes of problems. We present o(n) space algorithms for estimating the number of connected components with additive error epsilon*n and for (1 + epsilon)-approximating the weight of the minimum spanning tree. The latter improves the previous ~O(n) space algorithm given by Ahn et al. (SODA 2012) for connected graphs with bounded edge weights. We initiate the study of approximate graph property testing in the dynamic streaming model, where we want to distinguish graphs satisfying the property from graphs that are epsilon-far from having the property. We consider the problems of testing k-edge connectivity, k-vertex connectivity, cycle-freeness and bipartiteness (of planar graphs), for which we provide algorithms using roughly ~O(n^{1-epsilon}) space, which is o(n) for any constant epsilon > 0. To complement our algorithms, we present Omega(n^{1-O(epsilon)}) space lower bounds for these problems, which show that such a dependence on epsilon is necessary.
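
For intuition about the first result, the following offline sampling estimator (in the spirit of the classic Chazelle-Rubinfeld-Trevisan component counter, not the paper's dynamic-stream sketch) approximates the number of connected components to additive error about epsilon*n:

import random
from collections import deque

def estimate_components(adj, eps, seed=0):
    # adj: adjacency lists of an n-vertex graph. Each vertex of a component of
    # size s contributes 1/s, so each component contributes 1 in expectation;
    # truncating BFS at ~2/eps vertices charges at most ~eps*n/2 additive error,
    # and sampling controls the rest.
    rng = random.Random(seed)
    n = len(adj)
    cap = max(2, int(2 / eps))
    samples = int(4 / eps ** 2)
    total = 0.0
    for _ in range(samples):
        u = rng.randrange(n)
        seen, queue = {u}, deque([u])
        while queue and len(seen) < cap:
            for v in adj[queue.popleft()]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        total += 1 / len(seen) if len(seen) < cap else 0.0
    return n * total / samples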

Cite as

Zengfeng Huang and Pan Peng. Dynamic Graph Stream Algorithms in o(n) Space. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{huang_et_al:LIPIcs.ICALP.2016.18,
  author =	{Huang, Zengfeng and Peng, Pan},
  title =	{{Dynamic Graph Stream Algorithms in o(n) Space}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.18},
  URN =		{urn:nbn:de:0030-drops-62801},
  doi =		{10.4230/LIPIcs.ICALP.2016.18},
  annote =	{Keywords: dynamic graph streams, sketching, property testing, minimum spanning tree}
}
Document
Diameter and k-Center in Sliding Windows

Authors: Vincent Cohen-Addad, Chris Schwiegelshohn, and Christian Sohler


Abstract
In this paper we develop streaming algorithms for the diameter problem and the k-center clustering problem in the sliding window model. In this model we are interested in maintaining a solution for the N most recent points of the stream. In the diameter problem we would like to maintain two points whose distance approximates the diameter of the point set in the window. Our algorithm computes a (3 + epsilon)-approximation and uses O(1/epsilon*ln(alpha)) memory cells, where alpha is the ratio of the largest and smallest distance and is assumed to be known in advance. We also prove that under reasonable assumptions obtaining a (3 - epsilon)-approximation requires Omega(N^{1/3}) space. For the k-center problem, where the goal is to find k centers that minimize the maximum distance of a point to its nearest center, we obtain a (6 + epsilon)-approximation using O(k/epsilon*ln(alpha)) memory cells and a (4 + epsilon)-approximation for the special case k = 2. We also prove that any algorithm for the 2-center problem that achieves an approximation ratio of less than 4 requires Omega(N^{1/3}) space.
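
A naive exact baseline makes the model concrete: buffer the N most recent points and recompute the diameter on demand, which costs O(N) memory and O(N^2) time per query; the paper's point is to get within a (3 + epsilon) factor using only O(1/epsilon*ln(alpha)) memory cells. A minimal sketch of the baseline:

from collections import deque
from math import dist

class SlidingWindowDiameter:
    # Exact but memory-hungry baseline for the sliding-window diameter problem.
    def __init__(self, N):
        self.window = deque(maxlen=N)   # the oldest point expires automatically
    def insert(self, p):
        self.window.append(p)
    def diameter(self):
        pts = list(self.window)
        return max((dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]),
                   default=0.0)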

Cite as

Vincent Cohen-Addad, Chris Schwiegelshohn, and Christian Sohler. Diameter and k-Center in Sliding Windows. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{cohenaddad_et_al:LIPIcs.ICALP.2016.19,
  author =	{Cohen-Addad, Vincent and Schwiegelshohn, Chris and Sohler, Christian},
  title =	{{Diameter and k-Center in Sliding Windows}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.19},
  URN =		{urn:nbn:de:0030-drops-63401},
  doi =		{10.4230/LIPIcs.ICALP.2016.19},
  annote =	{Keywords: Streaming, k-Center, Diameter, Sliding Windows}
}
Document
Approximate Hamming Distance in a Stream

Authors: Raphaël Clifford and Tatiana Starikovskaya


Abstract
We consider the problem of computing a (1+epsilon)-approximation of the Hamming distance between a pattern of length n and successive substrings of a stream. We first look at the one-way randomised communication complexity of this problem. We show the following:
- If Alice and Bob both share the pattern and Alice has the first half of the stream and Bob the second half, then there is an O(epsilon^{-4}*log^2(n)) bit randomised one-way communication protocol.
- If Alice has the pattern, Bob the first half of the stream and Charlie the second half, then there is an O(epsilon^{-2}*sqrt(n)*log(n)) bit randomised one-way communication protocol.
We then go on to develop small-space streaming algorithms for (1 + epsilon)-approximate Hamming distance which give worst-case running time guarantees per arriving symbol:
- For binary input alphabets there is an O(epsilon^{-3}*sqrt(n)*log^2(n)) space and O(epsilon^{-2}*log(n)) time streaming (1 + epsilon)-approximate Hamming distance algorithm.
- For general input alphabets there is an O(epsilon^{-5}*sqrt(n)*log^4(n)) space and O(epsilon^{-4}*log^3(n)) time streaming (1 + epsilon)-approximate Hamming distance algorithm.
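
As a point of reference (and emphatically not the paper's algorithm, which must handle every alignment of the pattern against the stream at once), sampling positions estimates the Hamming distance between two fixed equal-length strings to additive error epsilon*n; note this is weaker than the relative (1 + epsilon) guarantee above.

import math
import random

def approx_hamming(x, y, eps, delta=0.1, seed=0):
    # Sample s = O(log(1/delta)/eps^2) positions; Chernoff bounds give an
    # estimate of the mismatch count within eps*len(x) with probability 1-delta.
    n = len(x)
    s = math.ceil(math.log(1 / delta) / eps ** 2)
    rng = random.Random(seed)
    hits = sum(x[i] != y[i] for i in (rng.randrange(n) for _ in range(s)))
    return n * hits / s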

Cite as

Raphaël Clifford and Tatiana Starikovskaya. Approximate Hamming Distance in a Stream. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 20:1-20:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{clifford_et_al:LIPIcs.ICALP.2016.20,
  author =	{Clifford, Rapha\"{e}l and Starikovskaya, Tatiana},
  title =	{{Approximate Hamming Distance in a Stream}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{20:1--20:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.20},
  URN =		{urn:nbn:de:0030-drops-62992},
  doi =		{10.4230/LIPIcs.ICALP.2016.20},
  annote =	{Keywords: Hamming distance, communication complexity, data stream model}
}
Document
Price of Competition and Dueling Games

Authors: Sina Dehghani, Mohammad Taghi Hajiaghayi, Hamid Mahini, and Saeed Seddighin


Abstract
We study competition in a general framework introduced by Immorlica, Kalai, Lucier, Moitra, Postlewaite, and Tennenholtz and answer their main open question. Immorlica et al. considered classic optimization problems in terms of competition and introduced a general class of games called dueling games. They model this competition as a zero-sum game, where two players are competing for a user's satisfaction. In their main and most natural game, the ranking duel, a user requests a webpage by submitting a query and players output an ordering over all possible webpages based on the submitted query. The user tends to choose the ordering which displays her requested webpage in a higher rank. The goal of both players is to maximize the probability that her ordering beats that of her opponent and gets the user's attention. Immorlica et al. show that this game directs both players to provide suboptimal search results. However, they leave the following as their main open question: "does competition between algorithms improve or degrade expected performance?" (see the introduction for more quotes) In this paper, we resolve this question for the ranking duel and a more general class of dueling games. More precisely, we study the quality of orderings in a competition between two players. This game is a zero-sum game, and thus any Nash equilibrium of the game can be described by minimax strategies. Let the value of the user for an ordering be a function of the position of her requested item in the corresponding ordering, and the social welfare for an ordering be the expected value of the corresponding ordering for the user. We propose the price of competition, which is the ratio of the social welfare for the worst minimax strategy to the social welfare obtained by a social planner. Finding the price of competition is another approach to obtaining structural results about Nash equilibria. We use this criterion for analyzing the quality of orderings in the ranking duel. Although Immorlica et al. show that the competition leads to suboptimal strategies, we prove that the quality of minimax results is surprisingly close to that of the optimum solution. In particular, via a novel factor-revealing LP for computing the price of anarchy, we prove that if the value of the user for an ordering is a linear function of its position, then the price of competition is at least 0.612 and bounded above by 0.833. Moreover, we consider the cost minimization version of the problem. We prove that the social cost of the worst minimax strategy is at most 3 times the optimal social cost. Last but not least, we go beyond linear valuation functions and capture the main challenge for bounding the price of competition for any arbitrary valuation function. We present a principle which states that the lower bound for the price of competition for all 0-1 valuation functions is the same as the lower bound for the price of competition for all possible valuation functions. It is worth mentioning that this principle not only works for the ranking duel but also for all dueling games. This principle says that in any dueling game, the most challenging part of bounding the price of competition is finding a lower bound for 0-1 valuation functions. We leverage this principle to show that the price of competition is at least 0.25 for the generalized ranking duel.
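Because the game is zero-sum, minimax strategies such as those analyzed above can in principle be computed by a linear program over the payoff matrix; the generic sketch below (a plain minimax LP assuming scipy is available, not the authors' factor-revealing LP) makes this concrete.

import numpy as np
from scipy.optimize import linprog

def minimax_strategy(A):
    # Row player's minimax strategy for the zero-sum game with payoff
    # matrix A (rows maximize): maximize v subject to x^T A >= v in
    # every column, sum(x) = 1, x >= 0. Decision vector z = (x, v).
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                               # linprog minimizes, so -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (x^T A)_j <= 0
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                          # sum of the x_i equals 1
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[-1]                # mixed strategy, game value

x, v = minimax_strategy(np.array([[0.0, 1.0], [1.0, 0.0]]))
print(x, v)  # roughly [0.5, 0.5] with value 0.5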

Cite as

Sina Dehghani, Mohammad Taghi Hajiaghayi, Hamid Mahini, and Saeed Seddighin. Price of Competition and Dueling Games. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{dehghani_et_al:LIPIcs.ICALP.2016.21,
  author =	{Dehghani, Sina and Hajiaghayi, Mohammad Taghi and Mahini, Hamid and Seddighin, Saeed},
  title =	{{Price of Competition and Dueling Games}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.21},
  URN =		{urn:nbn:de:0030-drops-63009},
  doi =		{10.4230/LIPIcs.ICALP.2016.21},
  annote =	{Keywords: POC, POA, Dueling games, Nash equilibria, sponsored search}
}
Document
Popular Half-Integral Matchings

Authors: Telikepalli Kavitha


Abstract
In an instance G = (A union B, E) of the stable marriage problem with strict and possibly incomplete preference lists, a matching M is popular if there is no matching M' where the vertices that prefer M' to M outnumber those that prefer M to M'. All stable matchings are popular, and there is a simple linear-time algorithm to compute a maximum-size popular matching. More generally, what we seek is a min-cost popular matching, where we assume there is a cost function c : E -> Q. However, there is no polynomial-time algorithm currently known for solving this problem. Here we consider the following generalization of a popular matching called a popular half-integral matching: this is a fractional matching ~x = (M_1 + M_2)/2, where M_1 and M_2 are the 0-1 edge incidence vectors of matchings in G, such that ~x satisfies popularity constraints. We show that every popular half-integral matching is equivalent to a stable matching in a larger graph G^*. This allows us to solve the min-cost popular half-integral matching problem in polynomial time.
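Since every stable matching is popular, the classic Gale-Shapley proposal algorithm already outputs one popular matching; the sketch below handles the textbook complete-list case (a simplification of the incomplete-list setting treated in the paper).

from collections import deque

def gale_shapley(a_prefs, b_prefs):
    # Proposer-optimal stable matching; a_prefs[a] is a's strict
    # preference list over the B side, and symmetrically for b_prefs.
    rank = {b: {a: r for r, a in enumerate(lst)} for b, lst in b_prefs.items()}
    next_choice = {a: 0 for a in a_prefs}
    match = {}                      # maps b -> its current partner a
    free = deque(a_prefs)
    while free:
        a = free.popleft()
        b = a_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if b not in match:
            match[b] = a
        elif rank[b][a] < rank[b][match[b]]:
            free.append(match[b])   # b trades up; old partner is free
            match[b] = a
        else:
            free.append(a)          # b rejects a; a proposes again later
    return {a: b for b, a in match.items()}

print(gale_shapley({'x': ['u', 'v'], 'y': ['u', 'v']},
                   {'u': ['y', 'x'], 'v': ['x', 'y']}))  # {'y': 'u', 'x': 'v'}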

Cite as

Telikepalli Kavitha. Popular Half-Integral Matchings. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 22:1-22:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kavitha:LIPIcs.ICALP.2016.22,
  author =	{Kavitha, Telikepalli},
  title =	{{Popular Half-Integral Matchings}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{22:1--22:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.22},
  URN =		{urn:nbn:de:0030-drops-63011},
  doi =		{10.4230/LIPIcs.ICALP.2016.22},
  annote =	{Keywords: bipartite graphs, stable matchings, fractional matchings, polytopes}
}
Document
Voronoi Choice Games

Authors: Meena Boppana, Rani Hod, Michael Mitzenmacher, and Tom Morgan


Abstract
We study novel variations of Voronoi games and associated random processes that we call Voronoi choice games. These games provide a rich framework for studying questions regarding the power of small numbers of choices in multi-player, competitive scenarios, and they further lead to many interesting, non-trivial random processes that appear worthy of study. As an example of the type of problem we study, suppose a group of n miners (or players) is staking land claims through the following process: each miner has m associated points independently and uniformly distributed on an underlying space (such as the unit circle, the unit square, or the unit torus), so the kth miner will have associated points p_{k1}, p_{k2}, ..., p_{km}. Here we generally think of m as a small constant, such as 2. Each miner chooses one of these points as the base point for their claim. Each miner obtains mining rights for the area of the underlying space that is closest to their chosen base; that is, they obtain the Voronoi cell corresponding to their chosen point in the Voronoi diagram of the n chosen points. Each player's goal is simply to maximize the amount of land under their control. What can we say about the players' strategies and the equilibria of such games? In our main result, we derive bounds on the expected number of pure Nash equilibria for a variation of the 1-dimensional game on the circle where a player owns the arc starting from their point and moving clockwise to the next point. This result uses interesting properties of random arc lengths on circles, and demonstrates the challenges in analyzing these kinds of problems. We also provide several other related results. In particular, for the 1-dimensional game on the circle, we show that a pure Nash equilibrium always exists when each player owns the part of the circle nearest to their point, but it is NP-hard to determine whether a pure Nash equilibrium exists in the variant when each player owns the arc starting from their point clockwise to the next point. This last result, in part, motivates our examination of the random setting.
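On tiny instances the clockwise-arc variant can be explored directly; the brute-force sketch below (exhaustive search over all m^n choice profiles, purely illustrative given the NP-hardness result mentioned above) lists the pure Nash equilibria of a small example.

from itertools import product

def arc_payoffs(points):
    # Each player owns the arc from their point clockwise to the next
    # chosen point on the unit circle; positions lie in [0, 1) and are
    # assumed distinct with n >= 2, so the arc lengths sum to 1.
    order = sorted(range(len(points)), key=lambda i: points[i])
    pay = [0.0] * len(points)
    for j, i in enumerate(order):
        nxt = points[order[(j + 1) % len(order)]]
        pay[i] = (nxt - points[i]) % 1.0
    return pay

def is_pure_nash(choices, options):
    # choices[k] indexes into options[k] (player k's m candidate points);
    # verify that no player gains by a unilateral switch.
    pts = [options[k][choices[k]] for k in range(len(options))]
    base = arc_payoffs(pts)
    for k in range(len(options)):
        for alt in range(len(options[k])):
            if alt != choices[k]:
                trial = pts[:]
                trial[k] = options[k][alt]
                if arc_payoffs(trial)[k] > base[k] + 1e-12:
                    return False
    return True

options = [(0.05, 0.60), (0.25, 0.80), (0.40, 0.95)]  # n = 3 miners, m = 2
print([c for c in product(*(range(len(o)) for o in options))
       if is_pure_nash(list(c), options)])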

Cite as

Meena Boppana, Rani Hod, Michael Mitzenmacher, and Tom Morgan. Voronoi Choice Games. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 23:1-23:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{boppana_et_al:LIPIcs.ICALP.2016.23,
  author =	{Boppana, Meena and Hod, Rani and Mitzenmacher, Michael and Morgan, Tom},
  title =	{{Voronoi Choice Games}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{23:1--23:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.23},
  URN =		{urn:nbn:de:0030-drops-63022},
  doi =		{10.4230/LIPIcs.ICALP.2016.23},
  annote =	{Keywords: Voronoi games, correlated equilibria, power of two choices, Hotelling model}
}
Document
The Complexity of Hex and the Jordan Curve Theorem

Authors: Aviv Adler, Constantinos Daskalakis, and Erik D. Demaine


Abstract
The Jordan curve theorem and Brouwer's fixed-point theorem are fundamental results in topology. We study their computational relationship, showing that a stylized computational version of Jordan's theorem is PPAD-complete, and therefore in a sense computationally equivalent to Brouwer's theorem. As a corollary, our computational result implies that these two theorems directly imply each other mathematically, complementing Maehara's proof that Brouwer implies Jordan [Maehara, 1984]. We then turn to the combinatorial game of Hex, which is related to Jordan's theorem and where the existence of a winner can be used to show Brouwer's theorem [Gale, 1979]. We establish that determining who won an (implicitly encoded) play of Hex is PSPACE-complete by adapting a reduction (due to Goldberg [Goldberg, 2015]) from Quantified Boolean Formula (QBF). As this problem is analogous to evaluating the output of a canonical path-following algorithm for finding a Brouwer fixed point - a task known to be PSPACE-complete [Goldberg/Papadimitriou/Savani, 2013] - we thereby establish a connection between Brouwer, Jordan and Hex higher in the complexity hierarchy.
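Note the contrast with explicitly given boards: once a play of Hex is written out as a filled board, deciding the winner is a simple connectivity check under the hexagonal adjacency, as in the sketch below (the PSPACE-hardness above applies to implicitly encoded plays, not to this easy explicit case).

from collections import deque

NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]  # hex adjacency

def player1_wins(board):
    # board[r][c] in {1, 2} is a completely filled n x n Hex board;
    # player 1 wins iff their stones connect row 0 to row n-1. On any
    # filled board exactly one player wins (the Hex theorem, the
    # combinatorial fact relating Hex to Brouwer).
    n = len(board)
    queue = deque((0, c) for c in range(n) if board[0][c] == 1)
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in NBRS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n and board[nr][nc] == 1
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False  # equivalently, player 2 connects left to right

print(player1_wins([[1, 2], [2, 1]]))  # False: player 2's stones connect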

Cite as

Aviv Adler, Constantinos Daskalakis, and Erik D. Demaine. The Complexity of Hex and the Jordan Curve Theorem. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 24:1-24:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{adler_et_al:LIPIcs.ICALP.2016.24,
  author =	{Adler, Aviv and Daskalakis, Constantinos and Demaine, Erik D.},
  title =	{{The Complexity of Hex and the Jordan Curve Theorem}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{24:1--24:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.24},
  URN =		{urn:nbn:de:0030-drops-63032},
  doi =		{10.4230/LIPIcs.ICALP.2016.24},
  annote =	{Keywords: Jordan, Brouwer, Hex, PPAD, PSPACE}
}
Document
Fractals for Kernelization Lower Bounds, With an Application to Length-Bounded Cut Problems

Authors: Till Fluschnik, Danny Hermelin, André Nichterlein, and Rolf Niedermeier


Abstract
Bodlaender et al.'s [Bodlaender/Jansen/Kratsch, 2014] cross-composition technique is a popular method for excluding polynomial-size problem kernels for NP-hard parameterized problems. We present a new technique exploiting triangle-based fractal structures for extending the range of applicability of cross-compositions. Our technique makes it possible to prove new no-polynomial-kernel results for a number of problems dealing with length-bounded cuts. Roughly speaking, our new technique combines the advantages of serial and parallel composition. In particular, answering an open question of Golovach and Thilikos [Golovach/Thilikos, 2011], we show that, unless NP subseteq coNP/poly, the NP-hard Length-Bounded Edge-Cut problem (delete at most k edges such that the resulting graph has no s-t path of length shorter than l) parameterized by the combination of k and l has no polynomial-size problem kernel. Our framework applies to planar as well as directed variants of the basic problems, and to both edge and vertex deletion problems.
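To make the parameterization concrete, the sketch below decides tiny Length-Bounded Edge-Cut instances by brute force over all subsets of at most k edges (exponential time on an undirected graph; it illustrates the problem definition only, not the kernelization machinery).

from collections import deque
from itertools import combinations

def short_path_exists(adj, s, t, removed, l):
    # BFS hop distances from s, ignoring edges in `removed` (stored as
    # frozensets {u, v}); report whether some s-t path has length < l.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if frozenset((u, v)) not in removed and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return t in dist and dist[t] < l

def length_bounded_edge_cut(adj, s, t, k, l):
    # Is there a set of at most k edges whose deletion leaves no s-t
    # path of length shorter than l? Try every candidate cut.
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    return any(not short_path_exists(adj, s, t, set(cut), l)
               for r in range(k + 1)
               for cut in combinations(edges, r))

adj = {'s': ['a', 'b'], 'a': ['s', 't'], 'b': ['s', 't'], 't': ['a', 'b']}
print(length_bounded_edge_cut(adj, 's', 't', k=2, l=3))  # True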

Cite as

Till Fluschnik, Danny Hermelin, André Nichterlein, and Rolf Niedermeier. Fractals for Kernelization Lower Bounds, With an Application to Length-Bounded Cut Problems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{fluschnik_et_al:LIPIcs.ICALP.2016.25,
  author =	{Fluschnik, Till and Hermelin, Danny and Nichterlein, Andr\'{e} and Niedermeier, Rolf},
  title =	{{Fractals for Kernelization Lower Bounds, With an Application to Length-Bounded Cut Problems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.25},
  URN =		{urn:nbn:de:0030-drops-63049},
  doi =		{10.4230/LIPIcs.ICALP.2016.25},
  annote =	{Keywords: Parameterized complexity, polynomial-time data reduction, cross-compositions, lower bounds, graph modification problems, interdiction problems}
}
Document
Kernelization of Cycle Packing with Relaxed Disjointness Constraints

Authors: Akanksha Agrawal, Daniel Lokshtanov, Diptapriyo Majumdar, Amer E. Mouawad, and Saket Saurabh


Abstract
A key result in the field of kernelization, a subfield of parameterized complexity, states that the classic Disjoint Cycle Packing problem, i.e., finding k vertex-disjoint cycles in a given graph G, admits no polynomial kernel unless NP subseteq coNP/poly. However, very little is known about this problem beyond the aforementioned kernelization lower bound (within the parameterized complexity framework). In the hope of clarifying the picture and better understanding the types of "constraints" that separate "kernelizable" from "non-kernelizable" variants of Disjoint Cycle Packing, we investigate two relaxations of the problem. The first variant, which we call Almost Disjoint Cycle Packing, introduces a "global" relaxation parameter t. That is, given a graph G and integers k and t, the goal is to find at least k distinct cycles such that every vertex of G appears in at most t of the cycles. The second variant, Pairwise Disjoint Cycle Packing, introduces a "local" relaxation parameter and we seek at least k distinct cycles such that every two cycles intersect in at most t vertices. While the Pairwise Disjoint Cycle Packing problem admits a polynomial kernel for all t >= 1, the kernelization complexity of Almost Disjoint Cycle Packing reveals an interesting spectrum of upper and lower bounds. In particular, for t = k/c, where c could be a function of k, we obtain a kernel of size O(2^{c^{2}}*k^{7+c}*log^3(k)) whenever c in o(sqrt(k)). Thus, the kernel size varies from being sub-exponential when c in o(sqrt(k)), to quasi-polynomial when c in o(log^l(k)), l in R_+, and polynomial when c in O(1). We complement these results for Almost Disjoint Cycle Packing by showing that the problem does not admit a polynomial kernel whenever t in O(k^{epsilon}), for any 0 <= epsilon < 1.

Cite as

Akanksha Agrawal, Daniel Lokshtanov, Diptapriyo Majumdar, Amer E. Mouawad, and Saket Saurabh. Kernelization of Cycle Packing with Relaxed Disjointness Constraints. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{agrawal_et_al:LIPIcs.ICALP.2016.26,
  author =	{Agrawal, Akanksha and Lokshtanov, Daniel and Majumdar, Diptapriyo and Mouawad, Amer E. and Saurabh, Saket},
  title =	{{Kernelization of Cycle Packing with Relaxed Disjointness Constraints}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.26},
  URN =		{urn:nbn:de:0030-drops-63053},
  doi =		{10.4230/LIPIcs.ICALP.2016.26},
  annote =	{Keywords: parameterized complexity, cycle packing, kernelization, relaxation}
}
Document
The Complexity Landscape of Fixed-Parameter Directed Steiner Network Problems

Authors: Andreas Emil Feldmann and Dániel Marx


Abstract
Given a directed graph G and a list (s_1, t_1), ..., (s_k, t_k) of terminal pairs, the Directed Steiner Network problem asks for a minimum-cost subgraph of G that contains a directed s_i -> t_i path for every 1 <= i <= k. The special case Directed Steiner Tree (when we ask for paths from a root r to terminals t_1, ..., t_k) is known to be fixed-parameter tractable parameterized by the number of terminals, while the special case Strongly Connected Steiner Subgraph (when we ask for a path from every t_i to every other t_j) is known to be W[1]-hard parameterized by the number of terminals. We systematically explore the complexity landscape of directed Steiner problems to fully understand which other special cases are FPT or W[1]-hard. Formally, if H is a class of directed graphs, then we look at the special case of Directed Steiner Network where the list (s_1, t_1), ..., (s_k, t_k) of requests forms a directed graph that is a member of H. Our main result is a complete characterization of the classes H resulting in fixed-parameter tractable special cases: we show that if every pattern in H has the combinatorial property of being "transitively equivalent to a bounded-length caterpillar with a bounded number of extra edges," then the problem is FPT, and it is W[1]-hard for every recursively enumerable H not having this property. This complete dichotomy unifies and generalizes the known results showing that Directed Steiner Tree is FPT [Dreyfus and Wagner, Networks 1971], Strongly Connected Steiner Subgraph is W[1]-hard [Guo et al., SIAM J. Discrete Math. 2011], and Directed Steiner Network is solvable in polynomial time for a constant number of terminals [Feldman and Ruhl, SIAM J. Comput. 2006], and moreover reveals a large continent of tractable cases that were not known before.

Cite as

Andreas Emil Feldmann and Dániel Marx. The Complexity Landscape of Fixed-Parameter Directed Steiner Network Problems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{feldmann_et_al:LIPIcs.ICALP.2016.27,
  author =	{Feldmann, Andreas Emil and Marx, D\'{a}niel},
  title =	{{The Complexity Landscape of Fixed-Parameter Directed Steiner Network Problems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.27},
  URN =		{urn:nbn:de:0030-drops-63060},
  doi =		{10.4230/LIPIcs.ICALP.2016.27},
  annote =	{Keywords: Directed Steiner Tree, Directed Steiner Network, fixed-parameter tractability, dichotomy}
}
Document
Double-Exponential and Triple-Exponential Bounds for Choosability Problems Parameterized by Treewidth

Authors: Dániel Marx and Valia Mitsou


Abstract
Choosability, introduced by Erdős, Rubin, and Taylor [Congr. Numer. 1979], is a well-studied concept in graph theory: we say that a graph is c-choosable if for any assignment of a list of c colors to each vertex, there is a proper coloring where each vertex uses a color from its list. We study the complexity of deciding choosability on graphs of bounded treewidth. It follows from earlier work that 3-choosability can be decided in time 2^(2^(O(w)))*n^(O(1)) on graphs of treewidth w. We complement this result with a matching lower bound giving evidence that double-exponential dependence on treewidth may be necessary for the problem: we show that an algorithm with running time 2^(2^(o(w)))*n^(O(1)) would violate the Exponential-Time Hypothesis (ETH). We also consider the optimization problem where the task is to delete the minimum number of vertices to make the graph 4-choosable, and demonstrate that the dependence on treewidth becomes triple-exponential for this problem: it can be solved in time 2^(2^(2^(O(w))))*n^(O(1)) on graphs of treewidth w, but an algorithm with running time 2^(2^(2^(o(w))))*n^(O(1)) would violate ETH.

Cite as

Dániel Marx and Valia Mitsou. Double-Exponential and Triple-Exponential Bounds for Choosability Problems Parameterized by Treewidth. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 28:1-28:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{marx_et_al:LIPIcs.ICALP.2016.28,
  author =	{Marx, D\'{a}niel and Mitsou, Valia},
  title =	{{Double-Exponential and Triple-Exponential Bounds for Choosability Problems Parameterized by Treewidth}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{28:1--28:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.28},
  URN =		{urn:nbn:de:0030-drops-63078},
  doi =		{10.4230/LIPIcs.ICALP.2016.28},
  annote =	{Keywords: Parameterized Complexity, List coloring, Treewidth, Lower bounds under ETH}
}
Document
Do Distributed Differentially-Private Protocols Require Oblivious Transfer?

Authors: Vipul Goyal, Dakshita Khurana, Ilya Mironov, Omkant Pandey, and Amit Sahai


Abstract
We study the cryptographic complexity of two-party differentially-private protocols for a large natural class of boolean functionalities. Information theoretically, McGregor et al. [FOCS 2010] and Goyal et al. [Crypto 2013] demonstrated several functionalities for which the maximal possible accuracy in the distributed setting is significantly lower than that in the client-server setting. Goyal et al. [Crypto 2013] further showed that "highly accurate" protocols in the distributed setting for any non-trivial functionality in fact imply the existence of one-way functions. However, it has remained an open problem to characterize the exact cryptographic complexity of this class. In particular, we know that semi-honest oblivious transfer helps obtain optimally accurate distributed differential privacy. But we do not know whether the reverse is true. We study the following question: Does the existence of optimally accurate distributed differentially private protocols for any class of functionalities imply the existence of oblivious transfer (or equivalently secure multi-party computation)? We resolve this question in the affirmative for the class of boolean functionalities that contain an XOR embedded on adjacent inputs. We give a reduction from oblivious transfer to:
- Any distributed optimally accurate epsilon-differentially private protocol with epsilon > 0 computing a functionality with a boolean XOR embedded on adjacent inputs.
- Any distributed non-optimally accurate epsilon-differentially private protocol with epsilon > 0, for a constant range of non-optimal accuracies and a constant range of values of epsilon, computing a functionality with a boolean XOR embedded on adjacent inputs.
En route to proving these results, we demonstrate a connection between optimally accurate two-party differentially-private protocols for functions with a boolean XOR embedded on adjacent inputs, and noisy channels, which were shown by Crépeau and Kilian [FOCS 1988] to be sufficient for oblivious transfer.
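For readers less familiar with the accuracy notions involved, the single-bit epsilon-differentially-private primitive underlying such trade-offs is easy to state; the sketch below shows standard randomized response with debiasing (a textbook mechanism, not one of the paper's protocols).

import math
import random

def randomized_response(bit, eps):
    # Report the true bit with probability p = e^eps / (1 + e^eps) and
    # flip it otherwise; the output distributions on neighboring inputs
    # differ by a factor of at most e^eps, i.e. this is eps-DP.
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < p else 1 - bit

def debias_mean(reports, eps):
    # E[report] = p*mu + (1-p)*(1-mu), so invert to estimate the mean mu.
    p = math.exp(eps) / (1 + math.exp(eps))
    r = sum(reports) / len(reports)
    return (r - (1 - p)) / (2 * p - 1)

data = [1] * 700 + [0] * 300
noisy = [randomized_response(b, eps=1.0) for b in data]
print(debias_mean(noisy, eps=1.0))  # close to the true mean 0.7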

Cite as

Vipul Goyal, Dakshita Khurana, Ilya Mironov, Omkant Pandey, and Amit Sahai. Do Distributed Differentially-Private Protocols Require Oblivious Transfer?. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{goyal_et_al:LIPIcs.ICALP.2016.29,
  author =	{Goyal, Vipul and Khurana, Dakshita and Mironov, Ilya and Pandey, Omkant and Sahai, Amit},
  title =	{{Do Distributed Differentially-Private Protocols Require Oblivious Transfer?}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.29},
  URN =		{urn:nbn:de:0030-drops-63080},
  doi =		{10.4230/LIPIcs.ICALP.2016.29},
  annote =	{Keywords: Oblivious Transfer, Distributed Differential Privacy, Noisy Channels, Weak Noisy Channels}
}
Document
Functional Commitment Schemes: From Polynomial Commitments to Pairing-Based Accumulators from Simple Assumptions

Authors: Benoît Libert, Somindu C. Ramanna, and Moti Yung


Abstract
We formalize a cryptographic primitive called functional commitment (FC), which can be viewed as a generalization of vector commitments (VCs), polynomial commitments, and many other special kinds of commitment schemes. A non-interactive functional commitment allows committing to a message in such a way that the committer has the flexibility of only revealing a function of the committed message during the opening phase. We provide constructions for the functionality of linear functions, where messages consist of vectors over some domain and commitments can later be opened to a specific linear function of the vector coordinates. An opening for a function thus generates a witness for the fact that the function indeed evaluates to a given value for the committed message. One security requirement is called function binding and requires that no adversary be able to open a commitment to two different evaluations for the same function. We propose a construction of functional commitment for linear functions based on constant-size assumptions in composite-order groups endowed with a bilinear map. The construction has commitments and openings of constant size (i.e., independent of n or the function description) and is perfectly hiding - the underlying message is information-theoretically hidden. Our security proofs build on the Déjà Q framework of Chase and Meiklejohn (Eurocrypt 2014) and its extension by Wee (TCC 2016) to encryption primitives, thus relying on constant-size subgroup decisional assumptions. We show that FCs for linear functions are sufficiently powerful to solve four open problems. First, they imply polynomial commitments; they then give cryptographic accumulators (i.e., an algebraic hash function which makes it possible to efficiently prove that some input belongs to a hashed set). In particular, specializing our FC construction leads to the first pairing-based polynomial commitments and accumulators for large universes known to achieve security under simple assumptions. We also substantially extend our pairing-based accumulator to handle subset queries, which requires a non-trivial extension of the Déjà Q framework.

Cite as

Benoît Libert, Somindu C. Ramanna, and Moti Yung. Functional Commitment Schemes: From Polynomial Commitments to Pairing-Based Accumulators from Simple Assumptions. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{libert_et_al:LIPIcs.ICALP.2016.30,
  author =	{Libert, Beno\^{i}t and Ramanna, Somindu C. and Yung, Moti},
  title =	{{Functional Commitment Schemes: From Polynomial Commitments to Pairing-Based Accumulators from Simple Assumptions}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{30:1--30:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.30},
  URN =		{urn:nbn:de:0030-drops-63096},
  doi =		{10.4230/LIPIcs.ICALP.2016.30},
  annote =	{Keywords: Cryptography, commitment schemes, functional commitments, accumulators, provable security, pairing-based, simple assumptions.}
}
Document
Block-Wise Non-Malleable Codes

Authors: Nishanth Chandran, Vipul Goyal, Pratyay Mukherjee, Omkant Pandey, and Jalaj Upadhyay


Abstract
Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS'10), provide the guarantee that if a codeword c of a message m is modified by a tampering function f to c', then c' either decodes to m or to "something unrelated" to m. In the recent literature, a lot of focus has been on explicitly constructing such codes against large and natural classes of tampering functions, such as the split-state model, in which the tampering function operates on different parts of the codeword independently. In this work, we consider a stronger adversarial model called the block-wise tampering model, in which we allow tampering to depend on more than one block: if a codeword consists of two blocks c = (c_1, c_2), then the first tampering function f_1 could produce a tampered part c'_1 = f_1(c_1) and the second tampering function f_2 could produce c'_2 = f_2(c_1, c_2) depending on both c_1 and c_2. The notion similarly extends to multiple blocks, where tampering of block c_i could happen with the knowledge of all c_j for j <= i. We argue this is a natural notion where, for example, the blocks are sent one by one and the adversary must send the tampered block before it gets the next block. A little thought reveals that it is impossible to construct such codes that are non-malleable (in the standard sense) against such a powerful adversary: indeed, upon receiving the last block, an adversary could decode the entire codeword and then tamper depending on the message. In light of this impossibility, we consider a natural relaxation called non-malleable codes with replacement, which requires the adversary to produce not only a related but also a valid codeword in order to succeed. Unfortunately, we show that even this relaxed definition is not achievable in the information-theoretic setting (i.e., when the tampering functions can be unbounded), which implies that we must turn our attention towards computationally bounded adversaries. As our main result, we show how to construct a block-wise non-malleable code (BNMC) from sub-exponentially hard one-way permutations. We provide an interesting connection between BNMCs and non-malleable commitments. We show that any BNMC can be converted into a non-malleable (w.r.t. opening) commitment scheme. Our techniques, quite surprisingly, give rise to a non-malleable commitment scheme (secure against so-called synchronizing adversaries) in which only the committer sends messages. We believe this result to be of independent interest. In the other direction, we show that any non-interactive non-malleable (w.r.t. opening) commitment can be used to construct a BNMC with only 2 blocks. Unfortunately, such commitment schemes exist only under highly non-standard assumptions (adaptive one-way functions) and hence cannot substitute for our main construction.

Cite as

Nishanth Chandran, Vipul Goyal, Pratyay Mukherjee, Omkant Pandey, and Jalaj Upadhyay. Block-Wise Non-Malleable Codes. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{chandran_et_al:LIPIcs.ICALP.2016.31,
  author =	{Chandran, Nishanth and Goyal, Vipul and Mukherjee, Pratyay and Pandey, Omkant and Upadhyay, Jalaj},
  title =	{{Block-Wise Non-Malleable Codes}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.31},
  URN =		{urn:nbn:de:0030-drops-63102},
  doi =		{10.4230/LIPIcs.ICALP.2016.31},
  annote =	{Keywords: Non-malleable codes, Non-malleable commitments, Block-wise Tampering, Complexity-leveraging}
}
Document
Provably Secure Virus Detection: Using The Observer Effect Against Malware

Authors: Richard J. Lipton, Rafail Ostrovsky, and Vassilis Zikas


Abstract
Protecting software from malware injection is one of the biggest challenges of modern computer science. Despite intensive efforts by the scientific and engineering community, the number of successful attacks continues to increase. This work takes first steps towards a provably secure treatment of malware detection. We provide a formal model and cryptographic security definitions of attestation for systems with dynamic memory, and suggest novel provably secure attestation schemes. The key idea underlying our schemes is to use the very insertion of the malware itself to allow the system to detect it. This is, in our opinion, close in spirit to the quantum Observer Effect. The attackers, no matter how clever, no matter when they insert their malware, change the state of the system they are attacking. This fundamental idea can be a game changer. Our system does not rely on heuristics; instead, our scheme enjoys the unique property that it is proved secure in a formal and precise mathematical sense, and it achieves strong provable security guarantees with minimal and realistic CPU modification. We envision such systems with a formal mathematical security treatment as a venue for new directions in software protection.

Cite as

Richard J. Lipton, Rafail Ostrovsky, and Vassilis Zikas. Provably Secure Virus Detection: Using The Observer Effect Against Malware. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{lipton_et_al:LIPIcs.ICALP.2016.32,
  author =	{Lipton, Richard J. and Ostrovsky, Rafail and Zikas, Vassilis},
  title =	{{Provably Secure Virus Detection: Using The Observer Effect Against Malware}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.32},
  URN =		{urn:nbn:de:0030-drops-63113},
  doi =		{10.4230/LIPIcs.ICALP.2016.32},
  annote =	{Keywords: Cryptography, Software Attestation, Provable Security}
}
Document
An Almost Cubic Lower Bound for Depth Three Arithmetic Circuits

Authors: Neeraj Kayal, Chandan Saha, and Sébastien Tavenas


Abstract
We show an almost cubic lower bound on the size of any depth three arithmetic circuit computing an explicit multilinear polynomial in n variables over any field. This improves upon the previously known quadratic lower bound by Shpilka and Wigderson [CCC, 1999].

Cite as

Neeraj Kayal, Chandan Saha, and Sébastien Tavenas. An Almost Cubic Lower Bound for Depth Three Arithmetic Circuits. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{kayal_et_al:LIPIcs.ICALP.2016.33,
  author =	{Kayal, Neeraj and Saha, Chandan and Tavenas, S\'{e}bastien},
  title =	{{An Almost Cubic Lower Bound for Depth Three Arithmetic Circuits}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.33},
  URN =		{urn:nbn:de:0030-drops-63126},
  doi =		{10.4230/LIPIcs.ICALP.2016.33},
  annote =	{Keywords: arithmetic circuits, depth-3 circuits, shifted partials}
}
Document
Boundaries of VP and VNP

Authors: Joshua A. Grochow, Ketan D. Mulmuley, and Youming Qiao


Abstract
One fundamental question in the context of the geometric complexity theory approach to the VP vs. VNP conjecture is whether VP = !VP, where VP is the class of families of polynomials that can be computed by arithmetic circuits of polynomial degree and size, and !VP is the class of families of polynomials that can be approximated infinitesimally closely by arithmetic circuits of polynomial degree and size. The goal of this article is to study the conjecture in (Mulmuley, FOCS 2012) that !VP is not contained in VP. Towards that end, we introduce three degenerations of VP (i.e., sets of points in !VP), namely the stable degeneration Stable-VP, the Newton degeneration Newton-VP, and the p-definable one-parameter degeneration VP*. We also introduce analogous degenerations of VNP. We show that Stable-VP subseteq Newton-VP subseteq VP* subseteq VNP, and Stable-VNP = Newton-VNP = VNP* = VNP. The three notions of degenerations and the proof of this result shed light on the problem of separating !VP from VP. Although we do not yet construct explicit candidates for the polynomial families in !VP \ VP, we prove results which tell us where not to look for such families. Specifically, we demonstrate that the families in Newton-VP \ VP based on semi-invariants of quivers would have to be non-generic, by showing that, for many finite quivers (including some wild ones), the Newton degeneration of any generic semi-invariant can be computed by a circuit of polynomial size. We also show that the Newton degenerations of perfect matching Pfaffians, monotone arithmetic circuits over the reals, and Schur polynomials have polynomial-size circuits.

Cite as

Joshua A. Grochow, Ketan D. Mulmuley, and Youming Qiao. Boundaries of VP and VNP. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{grochow_et_al:LIPIcs.ICALP.2016.34,
  author =	{Grochow, Joshua A. and Mulmuley, Ketan D. and Qiao, Youming},
  title =	{{Boundaries of VP and VNP}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.34},
  URN =		{urn:nbn:de:0030-drops-63147},
  doi =		{10.4230/LIPIcs.ICALP.2016.34},
  annote =	{Keywords: geometric complexity theory, arithmetic circuit, border complexity}
}
Document
AC^0 o MOD_2 Lower Bounds for the Boolean Inner Product

Authors: Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, and Ning Xie


Abstract
AC^0 o MOD_2 circuits are AC^0 circuits augmented with a layer of parity gates just above the input layer. We study AC^0 o MOD_2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC^0 o MOD_2 circuits of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an ~Omega(n^2) lower bound for the special case of depth-4 AC^0 o MOD_2. Our proof of the depth-4 lower bound employs a new "moment-matching" inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions' values at 0, given that their first d moments match.
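Both the hard function and the circuit model are easy to state explicitly; the sketch below defines the inner product function and evaluates a toy AC^0 o MOD_2 circuit encoded as parity sets feeding an arbitrary top function (an illustrative encoding of the model, not part of the lower-bound proof).

def inner_product_mod2(x, y):
    # IP_n(x, y) = <x, y> mod 2, the target of the lower bounds above.
    return sum(a & b for a, b in zip(x, y)) & 1

def eval_ac0_mod2(parity_sets, top, bits):
    # Bottom layer: one parity gate per set of input indices; the gate
    # outputs then feed the AC^0 part, modeled here as a function `top`.
    gate_outputs = [sum(bits[i] for i in s) & 1 for s in parity_sets]
    return top(gate_outputs)

# Toy circuit: parities z0 = x0 XOR x1 and z1 = x2, topped by an OR gate.
print(eval_ac0_mod2([{0, 1}, {2}], lambda z: int(any(z)), [1, 1, 0]))  # 0
print(inner_product_mod2([1, 0, 1], [1, 1, 1]))  # (1 + 0 + 1) mod 2 = 0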

Cite as

Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, and Ning Xie. AC^0 o MOD_2 Lower Bounds for the Boolean Inner Product. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{cheraghchi_et_al:LIPIcs.ICALP.2016.35,
  author =	{Cheraghchi, Mahdi and Grigorescu, Elena and Juba, Brendan and Wimmer, Karl and Xie, Ning},
  title =	{{AC^0 o MOD\underline2 Lower Bounds for the Boolean Inner Product}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.35},
  URN =		{urn:nbn:de:0030-drops-63150},
  doi =		{10.4230/LIPIcs.ICALP.2016.35},
  annote =	{Keywords: Boolean analysis, circuit complexity, lower bounds}
}
Document
Lower Bounds for Nondeterministic Semantic Read-Once Branching Programs

Authors: Stephen Cook, Jeff Edmonds, Venkatesh Medabalimi, and Toniann Pitassi


Abstract
We prove exponential lower bounds on the size of semantic read-once 3-ary nondeterministic branching programs. Prior to our result, the best that was known was for D-ary branching programs with |D| >= 2^{13}.

Cite as

Stephen Cook, Jeff Edmonds, Venkatesh Medabalimi, and Toniann Pitassi. Lower Bounds for Nondeterministic Semantic Read-Once Branching Programs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 36:1-36:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{cook_et_al:LIPIcs.ICALP.2016.36,
  author =	{Cook, Stephen and Edmonds, Jeff and Medabalimi, Venkatesh and Pitassi, Toniann},
  title =	{{Lower Bounds for Nondeterministic Semantic Read-Once Branching Programs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{36:1--36:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.36},
  URN =		{urn:nbn:de:0030-drops-63166},
  doi =		{10.4230/LIPIcs.ICALP.2016.36},
  annote =	{Keywords: Branching Programs, Semantic, Non-deterministic, Lower Bounds}
}
Document
Improved Bounds on the Sign-Rank of AC^0

Authors: Mark Bun and Justin Thaler


Abstract
The sign-rank of a matrix A with entries in {-1, +1} is the least rank of a real matrix B with A_{ij}*B_{ij} > 0 for all i, j. Razborov and Sherstov (2008) gave the first exponential lower bounds on the sign-rank of a function in AC^0, answering an old question of Babai, Frankl, and Simon (1986). Specifically, they exhibited a matrix A = [F(x,y)]_{x,y} for a specific function F : {-1,1}^n x {-1,1}^n -> {-1,1} in AC^0, such that A has sign-rank exp(Omega(n^{1/3})). We prove a generalization of Razborov and Sherstov's result, yielding exponential sign-rank lower bounds for a non-trivial class of functions (that includes the function used by Razborov and Sherstov). As a corollary of our general result, we improve Razborov and Sherstov's lower bound on the sign-rank of AC^0 from exp(Omega(n^{1/3})) to exp(~Omega(n^{2/5})). We also describe several applications to communication complexity, learning theory, and circuit complexity.

Cite as

Mark Bun and Justin Thaler. Improved Bounds on the Sign-Rank of AC^0. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 37:1-37:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{bun_et_al:LIPIcs.ICALP.2016.37,
  author =	{Bun, Mark and Thaler, Justin},
  title =	{{Improved Bounds on the Sign-Rank of AC^0}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{37:1--37:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.37},
  URN =		{urn:nbn:de:0030-drops-63173},
  doi =		{10.4230/LIPIcs.ICALP.2016.37},
  annote =	{Keywords: Sign-rank, circuit complexity, communication complexity, constant-depth circuits}
}
Document
On the Sensitivity Conjecture

Authors: Avishay Tal


Abstract
The sensitivity of a Boolean function f : {0,1}^n -> {0,1} is the maximal number of neighbors a point in the Boolean hypercube has with different f-value. Roughly speaking, the block sensitivity allows flipping a set of bits (called a block) rather than just one bit in order to change the value of f. The sensitivity conjecture, posed by Nisan and Szegedy (CC, 1994), states that the block sensitivity, bs(f), is at most polynomial in the sensitivity, s(f), for any Boolean function f. A positive answer to the conjecture would have many consequences, as the block sensitivity is polynomially related to many other complexity measures such as the certificate complexity, the decision tree complexity and the degree. The conjecture is far from being understood, as there is an exponential gap between the known upper and lower bounds relating bs(f) and s(f). We continue a line of work started by Kenyon and Kutin (Inf. Comput., 2004), studying the l-block sensitivity, bs_l(f), where l bounds the size of sensitive blocks. While for bs_2(f) the picture is well understood, with almost matching upper and lower bounds, for bs_3(f) it is not. We show that any development in understanding bs_3(f) in terms of s(f) will have great implications for the original question. Namely, we show that either bs(f) is at most sub-exponential in s(f) (which improves the state-of-the-art upper bounds) or that bs_3(f) >= s(f)^{3-epsilon} for some Boolean functions (which improves the state-of-the-art separations). We generalize the question of bs(f) versus s(f) to bounded functions f : {0,1}^n -> [0,1] and show an analogous result to that of Kenyon and Kutin: bs_l(f) = O(s(f))^l. Surprisingly, in this case, the bounds are close to being tight. In particular, we construct a bounded function f : {0,1}^n -> [0,1] with bs(f) >= n/log(n) and s(f) = O(log(n)), a clear counterexample to the sensitivity conjecture for bounded functions. Finally, we give a new super-quadratic separation between sensitivity and decision tree complexity by constructing Boolean functions with DT(f) >= s(f)^{2.115}. Prior to this work, only quadratic separations, DT(f) = s(f)^2, were known.
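Both measures in the conjecture can be computed exactly by brute force for very small n, which is useful for building intuition about the gap; the sketch below does so in exponential time (an illustrative tool, obviously unrelated to the asymptotic results above).

def sensitivity(f, n):
    # s(f): max over all 2^n inputs x (as bitmasks) of the number of
    # single-bit flips that change the value of f.
    return max(sum(f(x) != f(x ^ (1 << i)) for i in range(n))
               for x in range(1 << n))

def block_sensitivity(f, n):
    # bs(f): max number of pairwise-disjoint blocks B (bitmasks) with
    # f(x ^ B) != f(x); exact exhaustive set packing, tiny n only.
    best = 0
    for x in range(1 << n):
        blocks = [b for b in range(1, 1 << n) if f(x ^ b) != f(x)]
        def pack(i, used, depth):
            nonlocal best
            best = max(best, depth)
            for j in range(i, len(blocks)):
                if blocks[j] & used == 0:
                    pack(j + 1, used | blocks[j], depth + 1)
        pack(0, 0, 0)
    return best

or_f = lambda x: int(x != 0)  # OR of the n input bits
print(sensitivity(or_f, 4), block_sensitivity(or_f, 4))  # 4 4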

Cite as

Avishay Tal. On the Sensitivity Conjecture. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 38:1-38:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{tal:LIPIcs.ICALP.2016.38,
  author =	{Tal, Avishay},
  title =	{{On the Sensitivity Conjecture}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{38:1--38:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.38},
  URN =		{urn:nbn:de:0030-drops-63184},
  doi =		{10.4230/LIPIcs.ICALP.2016.38},
  annote =	{Keywords: sensitivity conjecture, decision tree, block sensitivity}
}
Document
Randomization Can Be as Helpful as a Glimpse of the Future in Online Computation

Authors: Jesper W. Mikkelsen


Abstract
We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. Intuitively, our direct product theorems say that if b bits of advice are needed to ensure a cost of at most t for some problem, then r*b bits of advice are needed to ensure a total cost of at most r*t when solving r independent instances of the problem. Using our direct product theorems, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, we show that:
- A paging algorithm needs Omega(n) bits of advice to achieve a competitive ratio better than H_k = Omega(log k), where k is the cache size. Previously, it was only known that Omega(n) bits of advice were necessary to achieve a constant competitive ratio smaller than 5/4.
- Every O(n^{1-epsilon})-competitive vertex coloring algorithm must use Omega(n log n) bits of advice. Previously, it was only known that Omega(n log n) bits of advice were necessary to be optimal.
For certain online problems, including the MTS, k-server, metric matching, paging, list update, and dynamic binary search tree problems, we prove that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic O(log k)-competitive k-server algorithm with sublinear advice if and only if there exists a randomized O(log k)-competitive k-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information-theoretic lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP'09].

Cite as

Jesper W. Mikkelsen. Randomization Can Be as Helpful as a Glimpse of the Future in Online Computation. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mikkelsen:LIPIcs.ICALP.2016.39,
  author =	{Mikkelsen, Jesper W.},
  title =	{{Randomization Can Be as Helpful as a Glimpse of the Future in Online Computation}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{39:1--39:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.39},
  URN =		{urn:nbn:de:0030-drops-63199},
  doi =		{10.4230/LIPIcs.ICALP.2016.39},
  annote =	{Keywords: online algorithms, advice complexity, information theory, randomization}
}
Document
Online Semidefinite Programming

Authors: Noa Elad, Satyen Kale, and Joseph (Seffi) Naor


Abstract
We consider semidefinite programming through the lens of online algorithms: what happens if not all input is given at once, but rather iteratively? In what way does it make sense for a semidefinite program to be revealed? We answer these questions by defining a model for online semidefinite programming. This model can be viewed as a generalization of online covering/packing linear programs, and it also captures interesting problems from quantum information theory. We design an online algorithm for semidefinite programming, utilizing the online primal-dual method and achieving a competitive ratio of O(log(n)), where n is the number of matrices in the primal semidefinite program. We also design an algorithm for semidefinite programming with box constraints, achieving a competitive ratio of O(log F*), where F* is a sparsity measure of the semidefinite program. We conclude with an online randomized rounding procedure.
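For orientation, one natural covering form of a semidefinite program in this setting is (a schematic in our notation; the paper's precise online revelation model is more refined):

\[
  \min\; c^{\top} x
  \quad \text{s.t.} \quad
  \sum_{i=1}^{n} x_i A_i \succeq B,
  \qquad x \ge 0,
\]

where, e.g., the constraint matrices are positive semidefinite; in an online covering setting the constraints tighten over time and the algorithm may only increase the variables x_i.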

Cite as

Noa Elad, Satyen Kale, and Joseph (Seffi) Naor. Online Semidefinite Programming. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 40:1-40:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{elad_et_al:LIPIcs.ICALP.2016.40,
  author =	{Elad, Noa and Kale, Satyen and Naor, Joseph (Seffi)},
  title =	{{Online Semidefinite Programming}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{40:1--40:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.40},
  URN =		{urn:nbn:de:0030-drops-63205},
  doi =		{10.4230/LIPIcs.ICALP.2016.40},
  annote =	{Keywords: online algorithms, semidefinite programming, primal-dual}
}
Document
Beating the Harmonic Lower Bound for Online Bin Packing

Authors: Sandy Heydrich and Rob van Stee


Abstract
In the online bin packing problem, items of sizes in (0,1] arrive online to be packed into bins of size 1. The goal is to minimize the number of used bins. Harmonic++ achieves a competitive ratio of 1.58889 and belongs to the Super Harmonic framework [Seiden, J. ACM, 2002]; a lower bound of Ramanan et al. shows that within this framework, no competitive ratio below 1.58333 can be achieved [Ramanan et al., J. Algorithms, 1989]. In this paper, we present an online bin packing algorithm with an asymptotic performance ratio of 1.5815, which constitutes the first improvement in fifteen years and reduces the gap to the lower bound by roughly 15%. We make two crucial changes to the Super Harmonic framework. First, some of the decisions of the algorithm will depend on exact sizes of items, instead of only their types. In particular, for item pairs where the size of one item is in (1/3,1/2] and the other is larger than 1/2 (a large item), when deciding whether to pack such a pair together in one bin, our algorithm does not consider their types, but only checks whether their total size is at most 1. Second, for items with sizes in (1/3,1/2] (medium items), we try to pack the larger items of every type in pairs, while combining the smallest items with large items whenever possible. To do this, we postpone the coloring of medium items (i.e., the decision which items to pack in pairs and which to pack alone) where possible, and later select the smallest ones to be reserved for combining with large items. Additionally, in case such large items arrive early, we pack medium items with them whenever possible. This is a highly unusual idea in the context of Harmonic-like algorithms, and it initially seems to preclude analysis (the fraction of items combined with large items is no longer a fixed constant). For the analysis, we carefully mark medium items depending on how they end up packed, enabling us to add crucial constraints to the linear program used by Seiden. We consider the dual, eliminate all but one variable, and then solve it with the ellipsoid method using a separation oracle. Our implementation uses additional algorithmic ideas to determine previously hand-set parameters automatically, and it gives certificates for easy verification of the results. We give a lower bound of 1.5766 for algorithms like ours. This shows that fundamentally different ideas will be required to make further improvements.
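The exact-size pairing test is simple to state in code; the sketch below is ours (illustrative names), not the paper's implementation.

def can_pair(medium_size, large_size):
    # A medium item has size in (1/3, 1/2], a large item size in (1/2, 1].
    # The pair goes into one bin iff the two items actually fit together,
    # i.e., the test uses exact sizes rather than item types.
    assert 1/3 < medium_size <= 1/2 and 1/2 < large_size <= 1
    return medium_size + large_size <= 1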

Cite as

Sandy Heydrich and Rob van Stee. Beating the Harmonic Lower Bound for Online Bin Packing. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{heydrich_et_al:LIPIcs.ICALP.2016.41,
  author =	{Heydrich, Sandy and van Stee, Rob},
  title =	{{Beating the Harmonic Lower Bound for Online Bin Packing}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.41},
  URN =		{urn:nbn:de:0030-drops-63214},
  doi =		{10.4230/LIPIcs.ICALP.2016.41},
  annote =	{Keywords: Bin packing, online algorithms, harmonic algorithm}
}
Document
Online Weighted Degree-Bounded Steiner Networks via Novel Online Mixed Packing/Covering

Authors: Sina Dehghani, Soheil Ehsani, Mohammad Taghi Hajiaghayi, Vahid Liaghat, Harald Räcke, and Saeed Seddighin


Abstract
We design the first online algorithm with poly-logarithmic competitive ratio for the edge-weighted degree-bounded Steiner forest (EW-DB-SF) problem and its generalized variant. We obtain our result by demonstrating a new generic approach for solving mixed packing/covering integer programs in the online paradigm. In EW-DB-SF, we are given an edge-weighted graph with a degree bound for every vertex. Given a root vertex in advance, we receive a sequence of terminal vertices in an online manner. Upon the arrival of a terminal, we need to augment our solution subgraph to connect the new terminal to the root. The goal is to minimize the total weight of the solution while respecting the degree bounds on the vertices. In the offline setting, edge-weighted degree-bounded Steiner tree (EW-DB-ST) and its many variations have been extensively studied since the early eighties. Unfortunately, the recent advancements in online network design problems are inherently difficult to adapt to degree-bounded problems. In particular, it is not known whether the fractional solution obtained by standard primal-dual techniques for mixed packing/covering LPs can be rounded online. In contrast, in this paper we obtain our result by using structural properties of the optimal solution, and reducing the EW-DB-SF problem to an exponential-size mixed packing/covering integer program in which every variable appears only once in the covering constraints. We then design a generic integral algorithm for solving this restricted family of IPs. As mentioned above, we demonstrate a new technique for solving mixed packing/covering integer programs. Define the covering frequency k of a program as the maximum number of covering constraints in which a variable can participate, and let m denote the number of packing constraints. We design an online deterministic integral algorithm with a competitive ratio of O(k*log(m)) for mixed packing/covering integer programs. We prove the tightness of our result by providing a matching lower bound for any randomized algorithm. We note that our solution depends solely on m and k; indeed, there can be exponentially many variables. Furthermore, our algorithm directly provides an integral solution, even if the integrality gap of the program is unbounded. We believe this technique can serve as an interesting alternative to the standard primal-dual techniques in solving online problems.
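Schematically (our notation, not the paper's), the integer programs in question have the form

\[
  \text{find } x \in \{0,1\}^{N}
  \quad \text{s.t.} \quad
  P x \le p \;\;(m \text{ packing constraints}),
  \qquad
  C x \ge c \;\;(\text{covering constraints}),
\]

where each variable appears in at most k covering constraints (covering frequency k); the competitive ratio O(k*log(m)) depends only on k and m, never on the number of variables N.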

Cite as

Sina Dehghani, Soheil Ehsani, Mohammad Taghi Hajiaghayi, Vahid Liaghat, Harald Räcke, and Saeed Seddighin. Online Weighted Degree-Bounded Steiner Networks via Novel Online Mixed Packing/Covering. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 42:1-42:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{dehghani_et_al:LIPIcs.ICALP.2016.42,
  author =	{Dehghani, Sina and Ehsani, Soheil and Hajiaghayi, Mohammad Taghi and Liaghat, Vahid and R\"{a}cke, Harald and Seddighin, Saeed},
  title =	{{Online Weighted Degree-Bounded Steiner Networks via Novel Online Mixed Packing/Covering}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{42:1--42:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.42},
  URN =		{urn:nbn:de:0030-drops-63221},
  doi =		{10.4230/LIPIcs.ICALP.2016.42},
  annote =	{Keywords: Online, Steiner Tree, Approximation, Competitive ratio}
}
Document
Carpooling in Social Networks

Authors: Amos Fiat, Anna R. Karlin, Elias Koutsoupias, Claire Mathieu, and Rotem Zach


Abstract
We consider the online carpool fairness problem of [Fagin and Williams, 1983], in which an online algorithm is presented with a sequence of pairs drawn from a group of n potential drivers. The online algorithm must select one driver from each pair, with the objective of partitioning the driving burden as fairly as possible for all drivers. The unfairness of an online algorithm is a measure of the worst-case deviation between the number of times a person has driven and the number of times they would have driven if life were completely fair. We introduce a version of the problem in which drivers only carpool with their neighbors in a given social network graph; this is a generalization of the original problem, which corresponds to the social network of the complete graph. We show that for graphs of degree d, the unfairness of deterministic algorithms against adversarial sequences is exactly d/2. For random sequences of edges from planar graph social networks we give a [deterministic] algorithm with logarithmic unfairness (this holds more generally for any bounded-genus graph). This does not follow from previous random sequence results in the original model, as we show that restricting the random sequences to sparse social network graphs may increase the unfairness. A very natural class of randomized online algorithms is that of so-called static algorithms, which preserve the same state distribution over time. Surprisingly, we show that any such algorithm has unfairness ~Theta(sqrt(d)) against oblivious adversaries. This shows that the local random greedy algorithm of [Ajtai et al., 1996] is close to optimal amongst the class of static algorithms. A natural (non-static) algorithm is global random greedy, which acts greedily and breaks ties at random. We improve the lower bound on the competitive ratio from Omega(log^{1/3}(d)) to Omega(log(d)). We also show that the competitive ratio of global random greedy against adaptive adversaries is Omega(d).
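For concreteness, a minimal sketch (ours, not the paper's code) of the global random greedy rule: on each arriving pair, the driver with the smaller driving count so far drives, with ties broken uniformly at random.

import random

def global_random_greedy(pairs, n):
    # drives[v] counts how many times driver v has driven so far.
    drives = [0] * n
    for u, v in pairs:  # each pair is an edge of the social network
        if drives[u] < drives[v]:
            driver = u
        elif drives[v] < drives[u]:
            driver = v
        else:
            driver = random.choice((u, v))
        drives[driver] += 1
    return drives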

Cite as

Amos Fiat, Anna R. Karlin, Elias Koutsoupias, Claire Mathieu, and Rotem Zach. Carpooling in Social Networks. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 43:1-43:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{fiat_et_al:LIPIcs.ICALP.2016.43,
  author =	{Fiat, Amos and Karlin, Anna R. and Koutsoupias, Elias and Mathieu, Claire and Zach, Rotem},
  title =	{{Carpooling in Social Networks}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{43:1--43:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.43},
  URN =		{urn:nbn:de:0030-drops-63234},
  doi =		{10.4230/LIPIcs.ICALP.2016.43},
  annote =	{Keywords: Online algorithms, Fairness, Randomized algorithms, Competitive ratio, Carpool problem}
}
Document
An Improved Analysis of the ER-SpUD Dictionary Learning Algorithm

Authors: Jaroslaw Blasiok and Jelani Nelson


Abstract
In dictionary learning we observe Y = AX + E for some Y in R^{n*p}, A in R^{m*n}, and X in R^{m*p}, where p >= max{n, m}, and typically m >= n. The matrix Y is observed, and A, X, E are unknown. Here E is a "noise" matrix of small norm, and X is column-wise sparse. The matrix A is referred to as a dictionary, and its columns as atoms. Then, given some small number p of samples, i.e., columns of Y, the goal is to learn the dictionary A up to small error, as well as the coefficient matrix X. In applications one could for example think of each column of Y as a distinct image in a database. The motivation is that in many applications data is expected to be sparse when represented by atoms in the "right" dictionary A (e.g., images in the Haar wavelet basis), and the goal is to learn A from the data to then use it for other applications. Recently, the work of [Spielman/Wang/Wright, COLT'12] proposed the dictionary learning algorithm ER-SpUD with provable guarantees when E = 0 and m = n. That work showed that if X has independent entries with an expected Theta*n non-zeroes per column for 1/n <~ Theta <~ 1/sqrt(n), and with non-zero entries being subgaussian, then for p >~ n^2 log^2 n, with high probability ER-SpUD outputs matrices A', X' which equal A, X up to permuting and scaling columns (resp. rows) of A (resp. X). They conjectured that p >~ n log n suffices, which they showed was information-theoretically necessary for any algorithm to succeed when Theta =~ 1/n. Significant progress toward showing that p >~ n log^4 n might suffice was later obtained in [Luh/Vu, FOCS'15]. In this work, we show that for a slight variant of ER-SpUD, p >~ n log(n/delta) samples suffice for successful recovery with probability 1 - delta. We also show that without our slight variation made to ER-SpUD, p >~ n^{1.99} samples are required even to learn A, X with a small success probability of 1/poly(n). This resolves the main conjecture of [Spielman/Wang/Wright, COLT'12], and contradicts a result of [Luh/Vu, FOCS'15], which claimed that p >~ n log^4 n guarantees high probability of success for the original ER-SpUD algorithm.
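A minimal sketch of the generative model in the noiseless square case (E = 0, m = n), with a Bernoulli-Gaussian X as one common subgaussian instantiation; all names and distribution choices here are ours:

import numpy as np

def dictionary_learning_instance(n, p, theta, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))           # square dictionary (m = n)
    support = rng.random((n, p)) < theta      # each entry nonzero w.p. theta
    X = np.where(support, rng.standard_normal((n, p)), 0.0)
    Y = A @ X                                 # observed samples; E = 0 here
    return Y, A, X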

Cite as

Jaroslaw Blasiok and Jelani Nelson. An Improved Analysis of the ER-SpUD Dictionary Learning Algorithm. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{blasiok_et_al:LIPIcs.ICALP.2016.44,
  author =	{Blasiok, Jaroslaw and Nelson, Jelani},
  title =	{{An Improved Analysis of the ER-SpUD Dictionary Learning Algorithm}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.44},
  URN =		{urn:nbn:de:0030-drops-63246},
  doi =		{10.4230/LIPIcs.ICALP.2016.44},
  annote =	{Keywords: dictionary learning, stochastic processes, generic chaining}
}
Document
Approximation via Correlation Decay When Strong Spatial Mixing Fails

Authors: Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Daniel Stefankovic


Abstract
Approximate counting via correlation decay is the core algorithmic technique used in the sharp delineation of the computational phase transition that arises in the approximation of the partition function of anti-ferromagnetic two-spin models. Previous analyses of correlation-decay algorithms implicitly depended on the occurrence of strong spatial mixing. This, roughly, means that one uses worst-case analysis of the recursive procedure that creates the sub-instances. In this paper, we develop a new analysis method that is more refined than the worst-case analysis. We take the shape of instances in the computation tree into consideration and we amortise against certain "bad" instances that are created as the recursion proceeds. This enables us to show correlation decay and to obtain an FPTAS even when strong spatial mixing fails. We apply our technique to the problem of approximately counting independent sets in hypergraphs with degree upper-bound Delta and with a lower bound k on the arity of hyperedges. Liu and Lin gave an FPTAS for k >= 2 and Delta <= 5 (lack of strong spatial mixing was the obstacle preventing this algorithm from being generalised to Delta = 6). Our technique gives a tight result for Delta = 6, showing that there is an FPTAS for k >= 3 and Delta <= 6. The best previously-known approximation scheme for Delta = 6 is the Markov-chain simulation based FPRAS of Bordewich, Dyer and Karpinski, which only works for k >= 8. Our technique also applies for larger values of k, giving an FPTAS for k >= 1.66 Delta. This bound is not as strong as existing randomised results, for technical reasons that are discussed in the paper. Nevertheless, it gives the first deterministic approximation schemes in this regime. We further demonstrate that in the hypergraph independent set model, approximating the partition function is NP-hard even within the uniqueness regime.

Cite as

Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Daniel Stefankovic. Approximation via Correlation Decay When Strong Spatial Mixing Fails. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 45:1-45:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bezakova_et_al:LIPIcs.ICALP.2016.45,
  author =	{Bez\'{a}kov\'{a}, Ivona and Galanis, Andreas and Goldberg, Leslie Ann and Guo, Heng and Stefankovic, Daniel},
  title =	{{Approximation via Correlation Decay When Strong Spatial Mixing Fails}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{45:1--45:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.45},
  URN =		{urn:nbn:de:0030-drops-63257},
  doi =		{10.4230/LIPIcs.ICALP.2016.45},
  annote =	{Keywords: approximate counting, independent sets in hypergraphs, correlation decay}
}
Document
A Complexity Trichotomy for Approximately Counting List H-Colourings

Authors: Andreas Galanis, Leslie Ann Goldberg, and Mark Jerrum


Abstract
We examine the computational complexity of approximately counting the list H-colourings of a graph. We discover a natural graph-theoretic trichotomy based on the structure of the graph H. If H is an irreflexive bipartite graph or a reflexive complete graph then counting list H-colourings is trivially in polynomial time. Otherwise, if H is an irreflexive bipartite permutation graph or a reflexive proper interval graph then approximately counting list H-colourings is equivalent to #BIS, the problem of approximately counting independent sets in a bipartite graph. This is a well-studied problem which is believed to be of intermediate complexity - it is believed that it does not have an FPRAS, but that it is not as difficult as approximating the most difficult counting problems in #P. For every other graph H, approximately counting list H-colourings is complete for #P with respect to approximation-preserving reductions (so there is no FPRAS unless NP = RP). Two pleasing features of the trichotomy are (i) it has a natural formulation in terms of hereditary graph classes, and (ii) the proof is largely self-contained and does not require any universal algebra (unlike similar dichotomies in the weighted case). We are able to extend the hardness results to the bounded-degree setting, showing that all hardness results apply to input graphs with maximum degree at most 6.

Cite as

Andreas Galanis, Leslie Ann Goldberg, and Mark Jerrum. A Complexity Trichotomy for Approximately Counting List H-Colourings. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{galanis_et_al:LIPIcs.ICALP.2016.46,
  author =	{Galanis, Andreas and Goldberg, Leslie Ann and Jerrum, Mark},
  title =	{{A Complexity Trichotomy for Approximately Counting List H-Colourings}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.46},
  URN =		{urn:nbn:de:0030-drops-63262},
  doi =		{10.4230/LIPIcs.ICALP.2016.46},
  annote =	{Keywords: approximate counting, graph homomorphisms, list colourings}
}
Document
Parity Separation: A Scientifically Proven Method for Permanent Weight Loss

Authors: Radu Curticapean


Abstract
Given an edge-weighted graph G, let PerfMatch(G) denote the weighted sum over all perfect matchings M in G, weighting each matching M by the product of weights of edges in M. If G is unweighted, this plainly counts the perfect matchings of G. In this paper, we introduce parity separation, a new method for reducing PerfMatch to unweighted instances: For graphs G with edge-weights 1 and -1, we construct two unweighted graphs G1 and G2 such that PerfMatch(G) = PerfMatch(G1) - PerfMatch(G2). This yields a novel weight removal technique for counting perfect matchings, in addition to those known from classical #P-hardness proofs. Our technique is based upon the Holant framework and matchgates. We derive the following applications: Firstly, an alternative #P-completeness proof for counting unweighted perfect matchings. Secondly, C=P-completeness for deciding whether two given unweighted graphs have the same number of perfect matchings. To the best of our knowledge, this is the first C=P-completeness result for the “equality-testing version” of any natural counting problem that is not already #P-hard under parsimonious reductions. Thirdly, an alternative tight lower bound for counting unweighted perfect matchings under the counting exponential-time hypothesis #ETH.
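To fix the definition, here is a brute-force evaluation of PerfMatch (exponential time, purely illustrative; names are ours):

def perf_match(vertices, weight):
    # weight maps frozenset({u, v}) -> edge weight; absent keys mean no edge.
    # Returns the sum, over all perfect matchings M, of the product of
    # the weights of the edges in M.
    if not vertices:
        return 1
    vs = sorted(vertices)
    u, rest = vs[0], vs[1:]
    total = 0
    for v in rest:
        e = frozenset((u, v))
        if e in weight:
            total += weight[e] * perf_match(set(rest) - {v}, weight)
    return total

# Example: the 4-cycle on {1,2,3,4} with all weights 1 has two perfect
# matchings, so the call below returns 2.
c4 = {frozenset(e): 1 for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(perf_match({1, 2, 3, 4}, c4))  # -> 2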

Cite as

Radu Curticapean. Parity Separation: A Scientifically Proven Method for Permanent Weight Loss. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{curticapean:LIPIcs.ICALP.2016.47,
  author =	{Curticapean, Radu},
  title =	{{Parity Separation: A Scientifically Proven Method for Permanent Weight Loss}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.47},
  URN =		{urn:nbn:de:0030-drops-63279},
  doi =		{10.4230/LIPIcs.ICALP.2016.47},
  annote =	{Keywords: perfect matchings, counting complexity, structural complexity, exponential-time hypothesis}
}
Document
On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter

Authors: Søren Dahlgaard


Abstract
Conditional lower bounds for dynamic graph problems have received a great deal of attention in recent years. While many results are now known for the fully-dynamic case, and such bounds often imply worst-case bounds for the partially dynamic setting, it seems much more difficult to prove amortized bounds for incremental and decremental algorithms. In this paper we consider partially dynamic versions of three classic problems in graph theory. Based on popular conjectures we show that: - No algorithm with amortized update time O(n^{1-epsilon}) exists for incremental or decremental maximum cardinality bipartite matching. This significantly improves on the O(m^{1/2-epsilon}) bound for sparse graphs of Henzinger et al. [STOC'15] and the O(n^{1/3-epsilon}) bound of Kopelowitz, Pettie and Porat. Our linear bound also appears more natural. In addition, the result we present separates the node-addition model from the edge-insertion model, as an algorithm with total update time O(m*sqrt(n)) exists for the former by Bosek et al. [FOCS'14]. - No algorithm with amortized update time O(m^{1-epsilon}) exists for incremental or decremental maximum flow in directed and weighted sparse graphs. No such lower bound was previously known for partially dynamic maximum flow. Furthermore, no algorithm with amortized update time O(n^{1-epsilon}) exists for directed and unweighted graphs or undirected and weighted graphs. - No algorithm with amortized update time O(n^{1/2-epsilon}) exists for incrementally or decrementally (4/3 - epsilon')-approximating the diameter of an unweighted graph. We also show a slightly stronger bound if node additions are allowed. The result is then extended to the static case, where we show that no O((n*sqrt(m))^{1-epsilon}) algorithm exists. We also extend the result to the case where an additive error is allowed in the approximation. While our bounds are weaker than the already known bounds of Roditty and Vassilevska Williams [STOC'13], they are based on a weaker conjecture of Abboud et al. [STOC'15] and constitute the first known reduction from the 3SUM and APSP problems to diameter. Showing an equivalence between APSP and diameter is a major open problem in this area (Abboud et al. [SODA'15]), and thus showing even a weak connection in this direction is of interest.

Cite as

Søren Dahlgaard. On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 48:1-48:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{dahlgaard:LIPIcs.ICALP.2016.48,
  author =	{Dahlgaard, S{\o}ren},
  title =	{{On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{48:1--48:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.48},
  URN =		{urn:nbn:de:0030-drops-63289},
  doi =		{10.4230/LIPIcs.ICALP.2016.48},
  annote =	{Keywords: Conditional lower bounds, Maximum cardinality matching, Diameter in graphs, Hardness in P, Partially dynamic problems, Maximum flow}
}
Document
Incremental 2-Edge-Connectivity in Directed Graphs

Authors: Loukas Georgiadis, Giuseppe F. Italiano, and Nikos Parotsidis


Abstract
We present an algorithm that can update the 2-edge-connected blocks of a directed graph with n vertices through a sequence of m edge insertions in a total of O(m*n) time. After each insertion, we can answer the following queries in asymptotically optimal time: - Test in constant time if two query vertices v and w are 2-edge-connected. Moreover, if v and w are not 2-edge-connected, we can produce in constant time a “witness” of this property, by exhibiting an edge that is contained in all paths from v to w or in all paths from w to v. - Report in O(n) time all the 2-edge-connected blocks of G. This is the first dynamic algorithm for 2-connectivity problems on directed graphs, and it matches the best known bounds for simpler problems, such as incremental transitive closure.
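As a slow but simple reference point for the query semantics (ours, not the paper's algorithm): v and w are 2-edge-connected iff no single edge lies on all v->w paths or on all w->v paths, which a brute-force check can test by deleting one edge at a time.

from collections import deque

def reachable(adj, s, t, banned):
    # BFS from s to t in the graph minus the single banned edge.
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for x in adj.get(u, ()):
            if (u, x) != banned and x not in seen:
                seen.add(x)
                queue.append(x)
    return False

def two_edge_connected(adj, v, w):
    # Assumes v and w are strongly connected to begin with. Returns
    # (True, None), or (False, e) where edge e is a witness lying on all
    # v->w paths or on all w->v paths.
    for u in adj:
        for x in adj[u]:
            e = (u, x)
            if not reachable(adj, v, w, e) or not reachable(adj, w, v, e):
                return False, e
    return True, None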

Cite as

Loukas Georgiadis, Giuseppe F. Italiano, and Nikos Parotsidis. Incremental 2-Edge-Connectivity in Directed Graphs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 49:1-49:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{georgiadis_et_al:LIPIcs.ICALP.2016.49,
  author =	{Georgiadis, Loukas and Italiano, Giuseppe F. and Parotsidis, Nikos},
  title =	{{Incremental 2-Edge-Connectivity in Directed Graphs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{49:1--49:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.49},
  URN =		{urn:nbn:de:0030-drops-63292},
  doi =		{10.4230/LIPIcs.ICALP.2016.49},
  annote =	{Keywords: 2-edge connectivity on directed graphs, dynamic graph algorithms, incremental algorithms}
}
Document
Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction

Authors: Di Wang, Satish Rao, and Michael W. Mahoney


Abstract
In a series of recent breakthroughs, Allen-Zhu and Orecchia [Allen-Zhu/Orecchia, STOC 2015; Allen-Zhu/Orecchia, SODA 2015] leveraged insights from the linear coupling method [Allen-Zhu/Orecchia, arXiv 2014], which is a first-order optimization scheme, to provide improved algorithms for packing and covering linear programs. The result in [Allen-Zhu/Orecchia, STOC 2015] is particularly interesting, as the algorithm for packing LPs achieves both width-independence and Nesterov-like acceleration, which was not known to be possible before. Somewhat surprisingly, however, while the dependence of the convergence rate on the error parameter epsilon for packing problems was improved to O(1/epsilon), which corresponds to what accelerated gradient methods are designed to achieve, the dependence for covering problems was only improved to O(1/epsilon^{1.5}), and even that required a different, more complicated algorithm, rather than one following from Nesterov-like acceleration. Given the primal-dual connection between packing and covering problems, and since previous algorithms for these closely related problems have led to the same epsilon dependence, this discrepancy is surprising, and it leaves open the question of the exact role that linear coupling plays in coordinating the complementary gradient and mirror descent steps of the algorithm. In this paper, we clarify these issues, illustrating that the linear coupling method can lead to an improved O(1/epsilon) dependence for both packing and covering problems in a unified manner, i.e., with the same algorithm and almost identical analysis. Our main technical result is a novel dimension-lifting method that reduces the coordinate-wise diameters of the feasible region for covering LPs, which is the key structural property enabling the same Nesterov-like acceleration as in the case of packing LPs. The technique is of independent interest and may be useful in applying the accelerated linear coupling method to other combinatorial problems.
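For reference, the packing and covering LPs in question form the standard primal-dual pair (A non-negative; our normalization):

\[
  \text{Packing: } \max \{\, \mathbf{1}^{\top} x : A x \le \mathbf{1},\; x \ge 0 \,\}
  \qquad
  \text{Covering: } \min \{\, \mathbf{1}^{\top} y : A^{\top} y \ge \mathbf{1},\; y \ge 0 \,\} .
\]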

Cite as

Di Wang, Satish Rao, and Michael W. Mahoney. Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.ICALP.2016.50,
  author =	{Wang, Di and Rao, Satish and Mahoney, Michael W.},
  title =	{{Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{50:1--50:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.50},
  URN =		{urn:nbn:de:0030-drops-63308},
  doi =		{10.4230/LIPIcs.ICALP.2016.50},
  annote =	{Keywords: Convex optimization, Accelerated gradient descent, Linear program, Approximation algorithm, Packing and covering}
}
Document
Random-Edge Is Slower Than Random-Facet on Abstract Cubes

Authors: Thomas Dueholm Hansen and Uri Zwick


Abstract
Random-Edge and Random-Facet are two very natural randomized pivoting rules for the simplex algorithm. The behavior of Random-Facet is fairly well understood. It performs an expected sub-exponential number of pivoting steps on any linear program, or more generally, on any Acyclic Unique Sink Orientation (AUSO) of an arbitrary polytope, making it the fastest known pivoting rule for the simplex algorithm. The behavior of Random-Edge is much less understood. We show that in the AUSO setting, Random-Edge is slower than Random-Facet. To do that, we construct AUSOs of the n-dimensional hypercube on which Random-Edge performs an expected number of 2^{Omega(sqrt(n*log(n)))} steps. This improves on a 2^{Omega(n^{1/3})} lower bound of Matoušek and Szabó. As Random-Facet performs an expected number of 2^{O(sqrt(n))} steps on any n-dimensional AUSO, this establishes our result. Improving our 2^{Omega(sqrt(n*log(n)))} lower bound seems to require radically new techniques.

Cite as

Thomas Dueholm Hansen and Uri Zwick. Random-Edge Is Slower Than Random-Facet on Abstract Cubes. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{hansen_et_al:LIPIcs.ICALP.2016.51,
  author =	{Hansen, Thomas Dueholm and Zwick, Uri},
  title =	{{Random-Edge Is Slower Than Random-Facet on Abstract Cubes}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.51},
  URN =		{urn:nbn:de:0030-drops-63316},
  doi =		{10.4230/LIPIcs.ICALP.2016.51},
  annote =	{Keywords: Linear programming, the Simplex Algorithm, Pivoting rules, Acyclic Unique Sink Orientations}
}
Document
Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time

Authors: Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang


Abstract
We study the problem of approximately solving positive linear programs (LPs). This class of LPs models a wide range of fundamental problems in combinatorial optimization and operations research, such as many resource allocation problems, solving non-negative linear systems, computed tomography, single/multi-commodity flows on graphs, etc. For the special cases of pure packing or pure covering LPs, a recent result by Allen-Zhu and Orecchia [Allen-Zhu/Orecchia, SODA'15] gives an O˜(1/(epsilon^3))-time parallel algorithm, which breaks the longstanding O˜(1/(epsilon^4)) running time bound from the seminal work of Luby and Nisan [Luby/Nisan, STOC'93]. We present a new parallel algorithm with running time O˜(1/(epsilon^3)) for the more general mixed packing and covering LPs, which improves upon the O˜(1/(epsilon^4))-time algorithm of Young [Young, FOCS'01; Young, arXiv 2014]. Our work leverages ideas from both the optimization-oriented approach [Allen-Zhu/Orecchia, SODA'15; Wang/Mahoney/Mohan/Rao, arXiv 2015] and the more combinatorial, phase-based approach [Young, FOCS'01; Young, arXiv 2014]. In addition, our algorithm, when directly applied to pure packing or pure covering LPs, gives an improved running time of O˜(1/(epsilon^2)).

Cite as

Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang. Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mahoney_et_al:LIPIcs.ICALP.2016.52,
  author =	{Mahoney, Michael W. and Rao, Satish and Wang, Di and Zhang, Peng},
  title =	{{Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^\{-3\}) Time}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.52},
  URN =		{urn:nbn:de:0030-drops-63335},
  doi =		{10.4230/LIPIcs.ICALP.2016.52},
  annote =	{Keywords: Mixed packing and covering, Linear program, Approximation algorithm, Parallel algorithm}
}
Document
Optimization Algorithms for Faster Computational Geometry

Authors: Zeyuan Allen-Zhu, Zhenyu Liao, and Yang Yuan


Abstract
We study two fundamental problems in computational geometry: finding the maximum inscribed ball (MaxIB) inside a bounded polyhedron defined by m hyperplanes, and the minimum enclosing ball (MinEB) of a set of n points, both in d-dimensional space. We improve the running time of iterative algorithms for MaxIB from ~O(m*d*alpha^3/epsilon^3) to ~O(m*d + m*sqrt(d)*alpha/epsilon), a speed-up of up to ~O(sqrt(d)*alpha^2/epsilon^2), and for MinEB from ~O(n*d/sqrt(epsilon)) to ~O(n*d + n*sqrt(d)/sqrt(epsilon)), a speed-up of up to ~O(sqrt(d)). Our improvements are based on a novel saddle-point optimization framework. We propose a new algorithm L1L2SPSolver for solving a class of regularized saddle-point problems, and apply a randomized Hadamard space rotation, a technique borrowed from compressive sensing. Interestingly, the motivation for using the Hadamard rotation comes solely from our optimization view and not from the original geometry problem: indeed, it is not immediately clear why MaxIB or MinEB, as geometric problems, should be easier to solve if we rotate the space by a unitary matrix. We hope that our optimization perspective sheds light on solving other geometric problems as well.

Cite as

Zeyuan Allen-Zhu, Zhenyu Liao, and Yang Yuan. Optimization Algorithms for Faster Computational Geometry. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 53:1-53:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{allenzhu_et_al:LIPIcs.ICALP.2016.53,
  author =	{Allen-Zhu, Zeyuan and Liao, Zhenyu and Yuan, Yang},
  title =	{{Optimization Algorithms for Faster Computational Geometry}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{53:1--53:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.53},
  URN =		{urn:nbn:de:0030-drops-63325},
  doi =		{10.4230/LIPIcs.ICALP.2016.53},
  annote =	{Keywords: maximum inscribed balls, minimum enclosing balls, approximation algorithms}
}
Document
A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems

Authors: Jelena Marasevic, Clifford Stein, and Gil Zussman


Abstract
We study weighted alpha-fair packing problems, that is, the problems of maximizing the objective functions (i) sum_j w_j*x_j^{1-alpha}/(1-alpha) when alpha > 0, alpha != 1, and (ii) sum_j w_j*ln(x_j) when alpha = 1, over linear constraints A*x <= b, x >= 0, where the w_j are positive weights and A and b are non-negative. We consider the distributed computation model that was used for packing linear programs and network utility maximization problems. Under this model, we provide a distributed algorithm for general alpha that converges to an epsilon-approximate solution in time (number of distributed iterations) that has an inverse polynomial dependence on the approximation parameter epsilon and poly-logarithmic dependence on the problem size. This is the first distributed algorithm for weighted alpha-fair packing with poly-logarithmic convergence in the input size. The algorithm uses simple local update rules and is stateless (namely, it allows asynchronous updates, is self-stabilizing, and allows incremental and local adjustments). We also obtain a number of structural results that characterize alpha-fair allocations as the value of alpha is varied. These results deepen our understanding of fairness guarantees in alpha-fair packing allocations, and also provide insight into the behavior of alpha-fair allocations in the asymptotic cases alpha -> 0, alpha -> 1, and alpha -> infinity.
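A direct transcription of the two objectives (a sketch; vectorized with numpy, names ours):

import numpy as np

def alpha_fair_objective(x, w, alpha):
    # Weighted alpha-fair utility of an allocation x > 0 with weights w > 0.
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    if alpha == 1:
        return float(np.sum(w * np.log(x)))   # proportional fairness
    return float(np.sum(w * x ** (1 - alpha) / (1 - alpha)))

# alpha -> 0 recovers weighted total throughput, while alpha -> infinity
# approaches max-min fairness, matching the asymptotic cases above.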

Cite as

Jelena Marasevic, Clifford Stein, and Gil Zussman. A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 54:1-54:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{marasevic_et_al:LIPIcs.ICALP.2016.54,
  author =	{Marasevic, Jelena and Stein, Clifford and Zussman, Gil},
  title =	{{A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{54:1--54:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.54},
  URN =		{urn:nbn:de:0030-drops-63344},
  doi =		{10.4230/LIPIcs.ICALP.2016.54},
  annote =	{Keywords: Fairness, distributed and stateless algorithms, resource allocation}
}
Document
All-Pairs Approximate Shortest Paths and Distance Oracle Preprocessing

Authors: Christian Sommer


Abstract
Given an undirected, unweighted graph G on n nodes, there is an O(n^2*poly log(n))-time algorithm that computes a data structure called distance oracle of size O(n^{5/3}*poly log(n)) answering approximate distance queries in constant time. For nodes at distance d the distance estimate is between d and 2d + 1. This new distance oracle improves upon the oracles of Patrascu and Roditty (FOCS 2010), Abraham and Gavoille (DISC 2011), and Agarwal and Brighten Godfrey (PODC 2013) in terms of preprocessing time, and upon the oracle of Baswana and Sen (SODA 2004) in terms of stretch. The running time analysis is tight (up to logarithmic factors) due to a recent lower bound of Abboud and Bodwin (STOC 2016). Techniques include dominating sets, sampling, balls, and spanners, and the main contribution lies in the way these techniques are combined. Perhaps the most interesting aspect from a technical point of view is the application of a spanner without incurring its constant additive stretch penalty.

Cite as

Christian Sommer. All-Pairs Approximate Shortest Paths and Distance Oracle Preprocessing. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 55:1-55:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{sommer:LIPIcs.ICALP.2016.55,
  author =	{Sommer, Christian},
  title =	{{All-Pairs Approximate Shortest Paths and Distance Oracle Preprocessing}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{55:1--55:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.55},
  URN =		{urn:nbn:de:0030-drops-63354},
  doi =		{10.4230/LIPIcs.ICALP.2016.55},
  annote =	{Keywords: graph algorithms, data structures, approximate shortest paths, distance oracles, distance labels}
}
Document
Total Space in Resolution Is at Least Width Squared

Authors: Ilario Bonacina


Abstract
Given an unsatisfiable k-CNF formula phi we consider two complexity measures in Resolution: width and total space. The width is the minimal W such that there exists a Resolution refutation of phi with clauses of at most W literals. The total space is the minimal size T of a memory used to write down a Resolution refutation of phi where the size of the memory is measured as the total number of literals it can contain. We prove that T = Omega((W - k)^2).

Cite as

Ilario Bonacina. Total Space in Resolution Is at Least Width Squared. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 56:1-56:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bonacina:LIPIcs.ICALP.2016.56,
  author =	{Bonacina, Ilario},
  title =	{{Total Space in Resolution Is at Least Width Squared}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{56:1--56:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.56},
  URN =		{urn:nbn:de:0030-drops-62273},
  doi =		{10.4230/LIPIcs.ICALP.2016.56},
  annote =	{Keywords: Resolution, width, total space}
}
Document
Supercritical Space-Width Trade-Offs for Resolution

Authors: Christoph Berkholz and Jakob Nordström


Abstract
We show that there are CNF formulas which can be refuted in resolution in both small space and small width, but for which any small-width resolution proof must have space exceeding by far the linear worst-case upper bound. This significantly strengthens the space-width trade-offs in [Ben-Sasson 2009], and provides one more example of trade-offs in the "supercritical" regime above worst case recently identified by [Razborov 2016]. We obtain our results by using Razborov’s new hardness condensation technique and combining it with the space lower bounds in [Ben-Sasson and Nordström 2008].

Cite as

Christoph Berkholz and Jakob Nordström. Supercritical Space-Width Trade-Offs for Resolution. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{berkholz_et_al:LIPIcs.ICALP.2016.57,
  author =	{Berkholz, Christoph and Nordstr\"{o}m, Jakob},
  title =	{{Supercritical Space-Width Trade-Offs for Resolution}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.57},
  URN =		{urn:nbn:de:0030-drops-62266},
  doi =		{10.4230/LIPIcs.ICALP.2016.57},
  annote =	{Keywords: Proof complexity, resolution, space, width, trade-offs, supercritical}
}
Document
Deterministic Time-Space Trade-Offs for k-SUM

Authors: Andrea Lincoln, Virginia Vassilevska Williams, Joshua R. Wang, and R. Ryan Williams


Abstract
Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time- and space-efficient deterministic self-reduction for the k-SUM problem which holds for both models, and has many interesting consequences. To illustrate: - 3-SUM is in deterministic time O(n^2*lg(lg(n))/lg(n)) and space O(sqrt(n*lg(n)/lg(lg(n)))). In general, any polylogarithmic-factor improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well. - 3-SUM is in deterministic time O(n^2) and space O(sqrt(n)), derandomizing an algorithm of Wang. - A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every O(n^{.51})-space algorithm for 3-SUM requires at least n^{2-o(1)} time on the word-RAM. - For k >= 4, k-SUM is in deterministic O(n^{k-2+2/k}) time and O(sqrt(n)) space.
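For orientation, the classical deterministic quadratic-time baseline that these trade-offs refine is the textbook sort-and-two-pointer method (not the paper's self-reduction):

def three_sum(nums):
    # Returns a triple of elements (at distinct indices) summing to zero,
    # or None. O(n^2) time after sorting, but Omega(n) words of space;
    # the paper's self-reduction targets the same problem with far less space.
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return a[i], a[lo], a[hi]
            elif s < 0:
                lo += 1
            else:
                hi -= 1
    return None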

Cite as

Andrea Lincoln, Virginia Vassilevska Williams, Joshua R. Wang, and R. Ryan Williams. Deterministic Time-Space Trade-Offs for k-SUM. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{lincoln_et_al:LIPIcs.ICALP.2016.58,
  author =	{Lincoln, Andrea and Vassilevska Williams, Virginia and Wang, Joshua R. and Williams, R. Ryan},
  title =	{{Deterministic Time-Space Trade-Offs for k-SUM}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.58},
  URN =		{urn:nbn:de:0030-drops-62250},
  doi =		{10.4230/LIPIcs.ICALP.2016.58},
  annote =	{Keywords: 3SUM, kSUM, time-space tradeoff, algorithm}
}
Document
Semi-Streaming Algorithms for Annotated Graph Streams

Authors: Justin Thaler


Abstract
Considerable effort has been devoted to the development of streaming algorithms for analyzing massive graphs. Unfortunately, many results have been negative, establishing that a wide variety of problems require Omega(n^2) space to solve. One of the few bright spots has been the development of semi-streaming algorithms for a handful of graph problems - these algorithms use space O(n*polylog(n)). In the annotated data streaming model of Chakrabarti et al. [Chakrabarti/Cormode/Goyal/Thaler, ACM Trans. on Alg. 2014], a computationally limited client wants to compute some property of a massive input, but lacks the resources to store even a small fraction of the input, and hence cannot perform the desired computation locally. The client therefore accesses a powerful but untrusted service provider, who not only performs the requested computation, but also proves that the answer is correct. We consider the notion of semi-streaming algorithms for annotated graph streams (semistreaming annotation schemes for short). These are protocols in which both the client's space usage and the length of the proof are O(n*polylog(n)). We give evidence that semi-streaming annotation schemes represent a more robust solution concept than does the standard semi-streaming model. On the positive side, we give semi-streaming annotation schemes for two dynamic graph problems that are intractable in the standard model: (exactly) counting triangles, and (exactly) computing maximum matchings. The former scheme answers a question of Cormode [Cormode, Problem 47]. On the negative side, we identify for the first time two natural graph problems (connectivity and bipartiteness in a certain edge update model) that can be solved in the standard semi-streaming model, but cannot be solved by annotation schemes of "sub-semi-streaming" cost. That is, these problems are as hard in the annotations model as they are in the standard model.
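To make the certified quantity concrete, the following naive (insert-only) triangle counter stores the entire graph, i.e., far more than semi-streaming space; that storage is exactly what the prover's annotations let the client avoid. This is only a sketch of the quantity being verified, not the paper's protocol, and it ignores edge deletions for simplicity.

```python
# Naive exact triangle counting over a stream of edge insertions.
# Stores the whole graph (far beyond semi-streaming space); shown only
# to pin down the statistic that the annotation scheme certifies.
from itertools import combinations

def count_triangles(edge_stream):
    adj = {}
    for u, v in edge_stream:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for u, v, w in combinations(sorted(adj), 3)
               if v in adj[u] and w in adj[u] and w in adj[v])

assert count_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]) == 1
```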

Cite as

Justin Thaler. Semi-Streaming Algorithms for Annotated Graph Streams. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 59:1-59:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{thaler:LIPIcs.ICALP.2016.59,
  author =	{Thaler, Justin},
  title =	{{Semi-Streaming Algorithms for Annotated Graph Streams}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{59:1--59:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.59},
  URN =		{urn:nbn:de:0030-drops-62247},
  doi =		{10.4230/LIPIcs.ICALP.2016.59},
  annote =	{Keywords: graph streams, stream verification, annotated data streams, probabilistic proof systems}
}
Document
Randomized Query Complexity of Sabotaged and Composed Functions

Authors: Shalev Ben-David and Robin Kothari


Abstract
We study the composition question for bounded-error randomized query complexity: Is R(f circ g) = Omega(R(f)R(g))? We show that inserting a simple function h, whose query complexity is only Theta(log R(g)), in between f and g allows us to prove R(f circ h circ g) = Omega(R(f)R(h)R(g)). We prove this using a new lower bound measure for randomized query complexity we call randomized sabotage complexity, RS(f). Randomized sabotage complexity has several desirable properties, such as a perfect composition theorem, RS(f circ g) >= RS(f) RS(g), and a composition theorem with randomized query complexity, R(f circ g) = Omega(R(f) RS(g)). It is also a quadratically tight lower bound for total functions and can be quadratically superior to the partition bound, the best known general lower bound for randomized query complexity. Using this technique we also show implications for lifting theorems in communication complexity. We show that a general lifting theorem from zero-error randomized query to communication complexity implies a similar result for bounded-error algorithms for all total functions.
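As a reminder of what composition means in this setting, the sketch below evaluates a composed function (f circ g): the input is split into disjoint blocks, g is applied to each block, and f is applied to the vector of results. The particular choices of f, g, and block length are arbitrary stand-ins, not the functions studied in the paper.

```python
# Function composition as used in query complexity: (f o g) on n*m bits
# applies g to each of n disjoint m-bit blocks and feeds the results to f.
def compose(f, g, block_len):
    def fg(bits):
        blocks = [bits[i:i + block_len] for i in range(0, len(bits), block_len)]
        return f([g(b) for b in blocks])
    return fg

OR  = lambda xs: int(any(xs))
AND = lambda xs: int(all(xs))
or_of_ands = compose(OR, AND, 2)   # OR of three ANDs on a 6-bit input
assert or_of_ands([0, 1, 1, 1, 0, 0]) == 1
```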

Cite as

Shalev Ben-David and Robin Kothari. Randomized Query Complexity of Sabotaged and Composed Functions. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{shalev_et_al:LIPIcs.ICALP.2016.60,
  author =	{Ben-David, Shalev and Kothari, Robin},
  title =	{{Randomized Query Complexity of Sabotaged and Composed Functions}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{60:1--60:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.60},
  URN =		{urn:nbn:de:0030-drops-62233},
  doi =		{10.4230/LIPIcs.ICALP.2016.60},
  annote =	{Keywords: Randomized query complexity, decision tree complexity, composition theorem, partition bound, lifting theorem}
}
Document
Coding for Interactive Communication Correcting Insertions and Deletions

Authors: Mark Braverman, Ran Gelles, Jieming Mao, and Rafail Ostrovsky


Abstract
We consider the question of interactive communication, in which two remote parties perform a computation while their communication channel is (adversarially) noisy. We extend here the discussion into a more general and stronger class of noise, namely, we allow the channel to perform insertions and deletions of symbols. These types of errors may bring the parties "out of sync", so that there is no consensus regarding the current round of the protocol. In this more general noise model, we obtain the first interactive coding scheme that has a constant rate and tolerates noise rates of up to 1/18 - epsilon. To this end we develop a novel primitive we name edit distance tree code. The edit distance tree code is designed to replace the Hamming distance constraints in Schulman's tree codes (STOC 93), with a stronger edit distance requirement. However, the straightforward generalization of tree codes to edit distance does not seem to yield a primitive that suffices for communication in the presence of synchronization problems. Giving the "right" definition of edit distance tree codes is a main conceptual contribution of this work.
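For background, the edit distance that replaces Hamming distance in these tree codes is the classical Levenshtein distance; the standard O(|s|*|t|) dynamic program below computes it. This is textbook material, not the paper's construction.

```python
# Classical Levenshtein edit distance via dynamic programming: the metric
# that edit distance tree codes must keep large between codewords, in place
# of the Hamming distance used by Schulman's tree codes.
def edit_distance(s, t):
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1,                            # insertion
                          d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))   # substitution
    return d[m][n]

assert edit_distance("0110", "110") == 1
```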

Cite as

Mark Braverman, Ran Gelles, Jieming Mao, and Rafail Ostrovsky. Coding for Interactive Communication Correcting Insertions and Deletions. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{braverman_et_al:LIPIcs.ICALP.2016.61,
  author =	{Braverman, Mark and Gelles, Ran and Mao, Jieming and Ostrovsky, Rafail},
  title =	{{Coding for Interactive Communication Correcting Insertions and Deletions}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{61:1--61:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.61},
  URN =		{urn:nbn:de:0030-drops-61981},
  doi =		{10.4230/LIPIcs.ICALP.2016.61},
  annote =	{Keywords: Interactive communication, coding, edit distance}
}
Document
Amplifiers for the Moran Process

Authors: Andreas Galanis, Andreas Göbel, Leslie Ann Goldberg, John Lapinskas, and David Richerby


Abstract
The Moran process, as studied by Lieberman, Hauert and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness r > 1. All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every r > 1, the extinction probability tends to 0 as the number of vertices increases. Strong amplification is a rather surprising property - it means that in such graphs, the fixation probability of a uniformly-placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of r > 1 (independently of n). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified". Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly-amplifying families - superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. We give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars". When the algorithm is run on an n-vertex graph in this family, starting with a uniformly-chosen mutant, the extinction probability is roughly n^{-1/2} (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of n). Finally, we prove that our analysis of megastars is fairly tight - there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al. A full version [Galanis/Göbel/Goldberg/Lapinskas/Richerby, Preprint] containing detailed proofs is available at http://arxiv.org/abs/1512.05632. Theorem-numbering here matches the full version.
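The process is simple to simulate directly from the description above; the sketch below runs the Moran process on a small directed graph until fixation or extinction. The particular graph, fitness value, and trial count are illustrative choices only.

```python
# Direct simulation of the Moran process: a uniformly random initial mutant
# of fitness r > 1; each step picks an individual with probability
# proportional to fitness and copies its state to a uniform out-neighbour.
import random

def moran(out_neighbours, r, rng=random):
    n = len(out_neighbours)
    mutant = {rng.randrange(n)}                     # uniformly-placed mutant
    while 0 < len(mutant) < n:
        weights = [r if v in mutant else 1.0 for v in range(n)]
        v = rng.choices(range(n), weights=weights)[0]
        u = rng.choice(out_neighbours[v])
        if v in mutant:
            mutant.add(u)
        else:
            mutant.discard(u)
    return len(mutant) == n                         # True = fixation

# Estimate the fixation probability on a directed 4-cycle with r = 1.5.
cycle = [[1], [2], [3], [0]]
est = sum(moran(cycle, 1.5) for _ in range(1000)) / 1000
```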

Cite as

Andreas Galanis, Andreas Göbel, Leslie Ann Goldberg, John Lapinskas, and David Richerby. Amplifiers for the Moran Process. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 62:1-62:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{galanis_et_al:LIPIcs.ICALP.2016.62,
  author =	{Galanis, Andreas and G\"{o}bel, Andreas and Goldberg, Leslie Ann and Lapinskas, John and Richerby, David},
  title =	{{Amplifiers for the Moran Process}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{62:1--62:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.62},
  URN =		{urn:nbn:de:0030-drops-62227},
  doi =		{10.4230/LIPIcs.ICALP.2016.62},
  annote =	{Keywords: Moran process, randomised algorithm on graphs, evolutionary dynamics}
}
Document
Mixing Time of Markov Chains, Dynamical Systems and Evolution

Authors: Ioannis Panageas and Nisheeth K. Vishnoi


Abstract
In this paper we study the mixing time of evolutionary Markov chains over populations of a fixed size (N) in which each individual can be one of m types. These Markov chains have the property that they are guided by a dynamical system from the m-dimensional probability simplex to itself. Roughly, given the current state of the Markov chain, which can be viewed as a probability distribution over the m types, the next state is generated by applying this dynamical system to this distribution, and then sampling from it N times. Many processes in nature, from biology to sociology, are evolutionary and such chains can be used to model them. In this study, the mixing time is of particular interest as it determines the speed of evolution and whether the statistics of the steady state can be efficiently computed. In a recent result [Panageas, Srivastava, Vishnoi, SODA, 2016], it was suggested that the mixing time of such Markov chains is connected to the geometry of this guiding dynamical system. In particular, when the dynamical system has a fixed point which is a global attractor, then the mixing is fast. The limit sets of dynamical systems, however, can exhibit more complex behavior: they could have multiple fixed points that are not necessarily stable, periodic orbits, or even chaos. Such behavior arises in important evolutionary settings such as the dynamics of sexual evolution and that of grammar acquisition. In this paper we prove that the geometry of the dynamical system can also give tight mixing time bounds when the dynamical system has multiple fixed points and periodic orbits. We show that the mixing time continues to remain small in the presence of several unstable fixed points and is exponential in N when there are two or more stable fixed points. As a consequence of our results, we obtain a phase transition result for the mixing time of the sexual/grammar model mentioned above. We arrive at the conclusion that in the interesting parameter regime for these models, i.e., when there are multiple stable fixed points, the mixing is slow. Our techniques strengthen the connections between Markov chains and dynamical systems and we expect that the tools developed in this paper should have a wider applicability.
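A single step of such a chain is easy to state in code: form the empirical distribution of the current population, push it through the guiding map, and resample N individuals. In the sketch below, the quadratic map on the simplex is an arbitrary stand-in for the guiding dynamical system, not a model from the paper.

```python
# One step of an evolutionary Markov chain guided by a map f on the simplex:
# current counts -> empirical distribution p -> f(p) -> N fresh samples.
import random
from collections import Counter

def step(counts, f, N, rng=random):
    m = len(counts)
    p = [c / N for c in counts]
    q = f(p)                                    # guiding dynamical system
    draws = rng.choices(range(m), weights=q, k=N)
    c = Counter(draws)
    return [c.get(i, 0) for i in range(m)]

def f(p):   # stand-in map: proportional to p_i^2, renormalised
    sq = [x * x for x in p]
    s = sum(sq)
    return [x / s for x in sq]

state = [60, 40]            # N = 100 individuals, m = 2 types
state = step(state, f, 100)
```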

Cite as

Ioannis Panageas and Nisheeth K. Vishnoi. Mixing Time of Markov Chains, Dynamical Systems and Evolution. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{panageas_et_al:LIPIcs.ICALP.2016.63,
  author =	{Panageas, Ioannis and Vishnoi, Nisheeth K.},
  title =	{{Mixing Time of Markov Chains, Dynamical Systems and Evolution}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.63},
  URN =		{urn:nbn:de:0030-drops-62213},
  doi =		{10.4230/LIPIcs.ICALP.2016.63},
  annote =	{Keywords: Markov chains, Mixing time, Dynamical Systems, Evolutionary dynamics, Language evolution}
}
Document
Information Cascades on Arbitrary Topologies

Authors: Jun Wan, Yu Xia, Liang Li, and Thomas Moscibroda


Abstract
In this paper, we study information cascades on graphs. In this setting, each node in the graph represents a person. One after another, each person has to take a decision based on a private signal as well as the decisions made by earlier neighboring nodes. Such information cascades commonly occur in practice and have been studied in complete graphs where everyone can overhear the decisions of every other player. It is known that information cascades can be fragile and based on very little information, and that they have a high likelihood of being wrong. Generalizing the problem to arbitrary graphs reveals interesting insights. In particular, we show that in a random graph G(n,q), for the right value of q, the number of nodes making a wrong decision is logarithmic in n. That is, in the limit for large n, the fraction of players that make a wrong decision tends to zero. This is intriguing because it contrasts to the two natural corner cases: empty graph (everyone decides independently based on his private signal) and complete graph (all decisions are heard by all nodes). In both of these cases a constant fraction of nodes make a wrong decision in expectation. Thus, our result shows that while both too little and too much information sharing causes nodes to take wrong decisions, for exactly the right amount of information sharing, asymptotically everyone can be right. We further show that this result in random graphs is asymptotically optimal for any topology, even if nodes follow a globally optimal algorithmic strategy. Based on the analysis of random graphs, we explore how topology impacts global performance and construct an optimal deterministic topology among layer graphs.
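The sketch below simulates a cascade on an arbitrary graph under one deliberately simplified decision rule: follow the majority of earlier neighbours' decisions and break ties with one's own private signal, which is correct independently with probability p > 1/2. This rule is an illustrative assumption, not necessarily the decision model analysed in the paper.

```python
# Toy cascade simulation: nodes decide one after another, each seeing only
# the decisions of its earlier neighbours. Majority-follow with
# tie-breaking by one's own signal is a simplified stand-in decision rule.
import random

def cascade(neighbours, p, rng=random):
    truth = 1
    decisions = {}
    for v in range(len(neighbours)):
        signal = truth if rng.random() < p else 1 - truth
        prior = [decisions[u] for u in neighbours[v] if u in decisions]
        ones = sum(prior)
        if ones * 2 > len(prior):
            decisions[v] = 1
        elif ones * 2 < len(prior):
            decisions[v] = 0
        else:
            decisions[v] = signal              # tie: trust own signal
    return sum(d != truth for d in decisions.values())  # wrong decisions

complete = [list(range(v)) for v in range(50)]  # everyone overhears everyone
wrong = cascade(complete, p=0.7)
```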

Cite as

Jun Wan, Yu Xia, Liang Li, and Thomas Moscibroda. Information Cascades on Arbitrary Topologies. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 64:1-64:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{wan_et_al:LIPIcs.ICALP.2016.64,
  author =	{Wan, Jun and Xia, Yu and Li, Liang and Moscibroda, Thomas},
  title =	{{Information Cascades on Arbitrary Topologies}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{64:1--64:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.64},
  URN =		{urn:nbn:de:0030-drops-63417},
  doi =		{10.4230/LIPIcs.ICALP.2016.64},
  annote =	{Keywords: Information Cascades, Herding Effect, Random Graphs}
}
Document
Analysing Survey Propagation Guided Decimation on Random Formulas

Authors: Samuel Hetterich


Abstract
Let vec(theta) be a uniformly distributed random k-SAT formula with n variables and m clauses. For clauses/variables ratio m/n <= r_{k-SAT} ~ 2^k*ln(2) the formula vec(theta) is satisfiable with high probability. However, no efficient algorithm is known to provably find a satisfying assignment beyond m/n ~ 2^k*ln(k)/k with a non-vanishing probability. Non-rigorous statistical mechanics work on k-CNF led to the development of a new efficient "message passing algorithm" called Survey Propagation Guided Decimation [Mézard et al., Science 2002]. Experiments conducted for k=3,4,5 suggest that the algorithm finds satisfying assignments close to r_{k-SAT}. However, in the present paper we prove that the basic version of Survey Propagation Guided Decimation fails to solve random k-SAT formulas efficiently already for m/n = 2^k*(1 + epsilon_k)*ln(k)/k with lim_{k -> infinity} epsilon_k = 0, almost a factor k below r_{k-SAT}.
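The input model is easy to make concrete: the sketch below draws a random k-SAT formula with n variables and m clauses, under the common convention of k distinct variables per clause with independent random signs. SP-guided decimation itself, a message-passing procedure, is well beyond this snippet.

```python
# One common convention for a uniformly random k-SAT formula: each clause
# picks k distinct variables and negates each independently. Literals are
# encoded as +/- (variable index), DIMACS-style.
import random

def random_ksat(n, m, k, rng=random):
    formula = []
    for _ in range(m):
        variables = rng.sample(range(1, n + 1), k)
        clause = [v if rng.random() < 0.5 else -v for v in variables]
        formula.append(clause)
    return formula

phi = random_ksat(n=100, m=400, k=3)   # m/n = 4, near the 3-SAT threshold
```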

Cite as

Samuel Hetterich. Analysing Survey Propagation Guided Decimation on Random Formulas. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 65:1-65:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{hetterich:LIPIcs.ICALP.2016.65,
  author =	{Hetterich, Samuel},
  title =	{{Analysing Survey Propagation Guided Decimation on Random Formulas}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{65:1--65:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.65},
  URN =		{urn:nbn:de:0030-drops-62197},
  doi =		{10.4230/LIPIcs.ICALP.2016.65},
  annote =	{Keywords: Survey Propagation Guided Decimation, Message Passing Algorithm, Graph Theory, Random k-SAT}
}
Document
Approximation Algorithms for Aversion k-Clustering via Local k-Median

Authors: Anupam Gupta, Guru Guruganesh, and Melanie Schmidt


Abstract
In the aversion k-clustering problem, given a metric space, we want to cluster the points into k clusters. The cost incurred by each point is the distance to the furthest point in its cluster, and the cost of the clustering is the sum of all these per-point costs. This problem is motivated by questions in generating automatic abstractions of extensive-form games. We reduce this problem to a "local" k-median problem where each facility has a prescribed radius and can only connect to clients within that radius. Our main result is a constant-factor approximation algorithm for the aversion k-clustering problem via the local k-median problem. We use a primal-dual approach; our technical contribution is a non-local rounding step which we feel is of broader interest.
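The objective is easy to state in code: the sketch below evaluates the aversion cost of a given clustering under a given metric. It only evaluates solutions; the paper's primal-dual algorithm is not reproduced here.

```python
# Aversion k-clustering objective: each point pays the distance to the
# furthest point in its own cluster; the clustering pays the sum.
def aversion_cost(clusters, d):
    total = 0.0
    for C in clusters:
        for i in C:
            total += max(d(i, j) for j in C)
    return total

pts = [0.0, 1.0, 5.0, 6.0]
d = lambda i, j: abs(pts[i] - pts[j])
assert aversion_cost([[0, 1], [2, 3]], d) == 4.0   # each point pays 1
```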

Cite as

Anupam Gupta, Guru Guruganesh, and Melanie Schmidt. Approximation Algorithms for Aversion k-Clustering via Local k-Median. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 66:1-66:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.ICALP.2016.66,
  author =	{Gupta, Anupam and Guruganesh, Guru and Schmidt, Melanie},
  title =	{{Approximation Algorithms for Aversion k-Clustering via Local k-Median}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{66:1--66:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.66},
  URN =		{urn:nbn:de:0030-drops-62180},
  doi =		{10.4230/LIPIcs.ICALP.2016.66},
  annote =	{Keywords: Approximation algorithms, clustering, k-median, primal-dual}
}
Document
The Non-Uniform k-Center Problem

Authors: Deeparnab Chakrabarty, Prachi Goyal, and Ravishankar Krishnaswamy


Abstract
In this paper, we introduce and study the Non-Uniform k-Center (NUkC) problem. Given a finite metric space (X, d) and a collection of balls of radii {r_1 >= ... >= r_k}, the NUkC problem is to find a placement of their centers on the metric space and find the minimum dilation alpha, such that the union of balls of radius alpha*r_i around the i-th center covers all the points in X. This problem naturally arises as a min-max vehicle routing problem with fleets of different speeds, or as a wireless router placement problem with routers of different powers/ranges. The NUkC problem generalizes the classic k-center problem, which is the case when all the k radii are the same (and can be assumed to be 1 after scaling). It also generalizes the k-center with outliers (kCwO for short) problem, the case when there are k balls of radius 1 and l balls of radius 0. There are 2-approximation and 3-approximation algorithms known for these problems respectively; the former is best possible unless P=NP and the latter has remained unimproved for 15 years. We first observe that no O(1)-approximation to the optimal dilation is possible unless P=NP, implying that the NUkC problem is strictly harder to approximate than the above two problems. Our main algorithmic result is an (O(1), O(1))-bi-criteria approximation: we give an O(1)-approximation to the optimal dilation, but we may open Theta(1) centers of each radius. Our techniques also allow us to prove a simple (uni-criteria), optimal 2-approximation to the kCwO problem, improving upon the long-standing 3-factor. Our main technical contribution is a connection between the NUkC problem and the so-called firefighter problems on trees, which have been studied recently in the TCS community. We show NUkC is as hard as the firefighter problem. While we don't know if the converse is true, we are able to adapt ideas from recent works [Chalermsook/Chuzhoy, SODA 2010; Adjiashvili/Baggio/Zenklusen, arXiv 2016] in non-trivial ways to obtain our constant-factor bi-criteria approximation.
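For a fixed placement of the centers, the optimal dilation has a closed form: alpha equals the maximum over points x of the minimum over i of d(x, c_i)/r_i. The sketch below evaluates it, assuming all radii are positive (radius-0 balls, as in kCwO, need the obvious special-casing).

```python
# Dilation of a fixed NUkC placement: the smallest alpha for which the
# balls of radius alpha * r_i around the centres cover every point.
def dilation(points, centres, radii, d):
    return max(min(d(x, c) / r for c, r in zip(centres, radii))
               for x in points)

X = [0.0, 2.0, 7.0, 9.0]
d = lambda a, b: abs(a - b)
alpha = dilation(X, centres=[1.0, 8.0], radii=[2.0, 1.0], d=d)
assert alpha == 1.0   # points 7.0 and 9.0 need the full radius around 8.0
```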

Cite as

Deeparnab Chakrabarty, Prachi Goyal, and Ravishankar Krishnaswamy. The Non-Uniform k-Center Problem. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 67:1-67:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chakrabarty_et_al:LIPIcs.ICALP.2016.67,
  author =	{Chakrabarty, Deeparnab and Goyal, Prachi and Krishnaswamy, Ravishankar},
  title =	{{The Non-Uniform k-Center Problem}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{67:1--67:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.67},
  URN =		{urn:nbn:de:0030-drops-62178},
  doi =		{10.4230/LIPIcs.ICALP.2016.67},
  annote =	{Keywords: Clustering, k-Center, Approximation Algorithms, Firefighter Problem}
}
Document
k-Center Clustering Under Perturbation Resilience

Authors: Maria-Florina Balcan, Nika Haghtalab, and Colin White


Abstract
The k-center problem is a canonical and long-studied facility location and clustering problem with many applications in both its symmetric and asymmetric forms. Both versions of the problem have tight approximation factors on worst-case instances: a 2-approximation for symmetric k-center and an O(log*(k))-approximation for the asymmetric version. Therefore to improve on these ratios, one must go beyond the worst case. In this work, we take this approach and provide strong positive results both for the asymmetric and symmetric k-center problems under a very natural input stability (promise) condition called alpha-perturbation resilience [Bilu and Linial, 2012], which states that the optimal solution does not change under any alpha-factor perturbation to the input distances. We show that by assuming 2-perturbation resilience, the exact solution for the asymmetric k-center problem can be found in polynomial time. To our knowledge, this is the first problem that is hard to approximate to any constant factor in the worst case, yet can be optimally solved in polynomial time under perturbation resilience for a constant value of alpha. Furthermore, we prove our result is tight by showing symmetric k-center under (2-epsilon)-perturbation resilience is hard unless NP=RP. This is the first tight result for any problem under perturbation resilience, i.e., this is the first time the exact value of alpha for which the problem switches from being NP-hard to efficiently computable has been found. Our results illustrate a surprising relationship between symmetric and asymmetric k-center instances under perturbation resilience. Unlike approximation ratio, for which symmetric k-center is easily solved to a factor of 2 but asymmetric k-center cannot be approximated to any constant factor, both symmetric and asymmetric k-center can be solved optimally under resilience to 2-perturbations.

Cite as

Maria-Florina Balcan, Nika Haghtalab, and Colin White. k-Center Clustering Under Perturbation Resilience. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{balcan_et_al:LIPIcs.ICALP.2016.68,
  author =	{Balcan, Maria-Florina and Haghtalab, Nika and White, Colin},
  title =	{{k-Center Clustering Under Perturbation Resilience}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.68},
  URN =		{urn:nbn:de:0030-drops-62160},
  doi =		{10.4230/LIPIcs.ICALP.2016.68},
  annote =	{Keywords: k-center, clustering, perturbation resilience}
}
Document
Approximation Algorithms for Clustering Problems with Lower Bounds and Outliers

Authors: Sara Ahmadian and Chaitanya Swamy


Abstract
We consider clustering problems with non-uniform lower bounds and outliers, and obtain the first approximation guarantees for these problems. We have a set F of facilities with lower bounds {L_i}_{i in F} and a set D of clients located in a common metric space {c(i,j)}_{i,j in F union D}, and bounds k, m. A feasible solution is a pair (S subseteq F, sigma: D -> S union {out}), where sigma specifies the client assignments, such that |S| <= k, |sigma^{-1}(i)| >= L_i for all i in S, and |sigma^{-1}(out)| <= m. In the lower-bounded min-sum-of-radii with outliers (LBkSRO) problem, the objective is to minimize sum_{i in S} max_{j in sigma^{-1}(i)} c(i,j), and in the lower-bounded k-supplier with outliers (LBkSupO) problem, the objective is to minimize max_{i in S} max_{j in sigma^{-1}(i)} c(i,j). We obtain an approximation factor of 12.365 for LBkSRO, which improves to 3.83 for the non-outlier version (i.e., m = 0). These also constitute the first approximation bounds for the min-sum-of-radii objective when we consider lower bounds and outliers separately. We apply the primal-dual method to the relaxation where we Lagrangify the |S| <= k constraint. The chief technical contribution and novelty of our algorithm is that, departing from the standard paradigm used for such constrained problems, we obtain an O(1)-approximation despite the fact that we do not obtain a Lagrangian-multiplier-preserving algorithm for the Lagrangian relaxation. We believe that our ideas have broader applicability to other clustering problems with outliers as well. We obtain approximation factors of 5 and 3 respectively for LBkSupO and its non-outlier version. These are the first approximation results for k-supplier with non-uniform lower bounds.
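The feasibility conditions and both objectives translate directly into code; the sketch below checks a candidate solution (S, sigma) and evaluates the LBkSRO and LBkSupO costs. It is a checker only, not the approximation algorithm.

```python
# Feasibility and objectives for lower-bounded clustering with outliers.
# sigma maps each client to an open facility or to 'out'.
def feasible(S, sigma, L, k, m):
    load = {i: 0 for i in S}
    out = 0
    for j, i in sigma.items():
        if i == 'out':
            out += 1
        elif i in load:
            load[i] += 1
        else:
            return False            # client assigned to a closed facility
    return len(S) <= k and out <= m and all(load[i] >= L[i] for i in S)

def lbksro_cost(S, sigma, c):       # sum over open i of cluster i's radius
    return sum(max((c(i, j) for j, f in sigma.items() if f == i), default=0.0)
               for i in S)

def lbksupo_cost(S, sigma, c):      # max connection cost over assigned clients
    return max((c(i, j) for j, i in sigma.items() if i != 'out'), default=0.0)

S = {'f1'}
sigma = {'c1': 'f1', 'c2': 'f1', 'c3': 'out'}
c = lambda i, j: 1.0
assert feasible(S, sigma, L={'f1': 2}, k=1, m=1)
assert lbksro_cost(S, sigma, c) == 1.0 and lbksupo_cost(S, sigma, c) == 1.0
```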

Cite as

Sara Ahmadian and Chaitanya Swamy. Approximation Algorithms for Clustering Problems with Lower Bounds and Outliers. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 69:1-69:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{ahmadian_et_al:LIPIcs.ICALP.2016.69,
  author =	{Ahmadian, Sara and Swamy, Chaitanya},
  title =	{{Approximation Algorithms for Clustering Problems with Lower Bounds and Outliers}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{69:1--69:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.69},
  URN =		{urn:nbn:de:0030-drops-62153},
  doi =		{10.4230/LIPIcs.ICALP.2016.69},
  annote =	{Keywords: Approximation algorithms, facility-location problems, primal-dual method, Lagrangian relaxation, k-center problems, minimizing sum of radii}
}
Document
A Duality Based 2-Approximation Algorithm for Maximum Agreement Forest

Authors: Frans Schalekamp, Anke van Zuylen, and Suzanne van der Ster


Abstract
We give a 2-approximation algorithm for the Maximum Agreement Forest problem on two rooted binary trees. This NP-hard problem has been studied extensively in the past two decades, since it can be used to compute the Subtree Prune-and-Regraft (SPR) distance between two phylogenetic trees. Our result improves on the very recent 2.5-approximation algorithm due to Shi, Feng, You and Wang (2015). Our algorithm is the first approximation algorithm for this problem that uses LP duality in its analysis.

Cite as

Frans Schalekamp, Anke van Zuylen, and Suzanne van der Ster. A Duality Based 2-Approximation Algorithm for Maximum Agreement Forest. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 70:1-70:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{schalekamp_et_al:LIPIcs.ICALP.2016.70,
  author =	{Schalekamp, Frans and van Zuylen, Anke and van der Ster, Suzanne},
  title =	{{A Duality Based 2-Approximation Algorithm for Maximum Agreement Forest}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{70:1--70:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.70},
  URN =		{urn:nbn:de:0030-drops-62149},
  doi =		{10.4230/LIPIcs.ICALP.2016.70},
  annote =	{Keywords: Maximum agreement forest, phylogenetic tree, SPR distance, subtree prune-and-regraft distance, computational biology}
}
Document
Robust Assignments via Ear Decompositions and Randomized Rounding

Authors: David Adjiashvili, Viktor Bindewald, and Dennis Michaels


Abstract
Many real-life planning problems require making a priori decisions before all parameters of the problem have been revealed. An important special case of such problems arises in scheduling and transshipment, where a set of jobs needs to be assigned to the available set of machines or personnel (resources) such that every job is assigned a resource and no two jobs share the same resource. In its nominal form, the resulting computational problem is the assignment problem. This paper deals with the Robust Assignment Problem (RAP), which models situations in which certain assignments are vulnerable and may become unavailable after the solution has been chosen. The goal is to choose a minimum-cost collection of assignments (edges in the corresponding bipartite graph) so that if any vulnerable edge becomes unavailable, the remaining part of the solution still contains an assignment of all jobs. We develop algorithms and hardness results for RAP and establish several connections to well-known concepts from matching theory, robust optimization, LP-based techniques, and combinations thereof.
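The robustness requirement is straightforward to verify on small instances: delete each vulnerable edge in turn and test whether the remaining chosen edges still match every job. The sketch below does this with a plain augmenting-path bipartite matching (Kuhn's algorithm); it is a checker, not the paper's construction.

```python
# Robustness check for RAP: after removing any single vulnerable edge,
# the chosen edge set must still contain a matching covering all jobs.
def has_job_perfect_matching(jobs, edges):
    adj = {j: [r for (jj, r) in edges if jj == j] for j in jobs}
    match = {}                                  # resource -> job

    def augment(j, seen):
        for r in adj[j]:
            if r not in seen:
                seen.add(r)
                if r not in match or augment(match[r], seen):
                    match[r] = j
                    return True
        return False

    return all(augment(j, set()) for j in jobs)

def is_robust(jobs, chosen, vulnerable):
    return all(has_job_perfect_matching(jobs, [e for e in chosen if e != v])
               for v in vulnerable)

jobs = ['j1', 'j2']
chosen = [('j1', 'r1'), ('j1', 'r2'), ('j2', 'r2'), ('j2', 'r3')]
assert is_robust(jobs, chosen, vulnerable=[('j1', 'r1')])
```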

Cite as

David Adjiashvili, Viktor Bindewald, and Dennis Michaels. Robust Assignments via Ear Decompositions and Randomized Rounding. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 71:1-71:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{adjiashvili_et_al:LIPIcs.ICALP.2016.71,
  author =	{Adjiashvili, David and Bindewald, Viktor and Michaels, Dennis},
  title =	{{Robust Assignments via Ear Decompositions and Randomized Rounding}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{71:1--71:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.71},
  URN =		{urn:nbn:de:0030-drops-62133},
  doi =		{10.4230/LIPIcs.ICALP.2016.71},
  annote =	{Keywords: robust optimization, matching theory, ear decomposition, randomized rounding, approximation algorithm}
}
Document
Closing the Gap for Makespan Scheduling via Sparsification Techniques

Authors: Klaus Jansen, Kim-Manuel Klein, and José Verschae


Abstract
Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of n jobs to a set of m identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a (1 + epsilon)-approximation algorithm with a running time that depends polynomially on 1/epsilon. Furthermore, Chen et al. [Chen/Jansen/Zhang, SODA'13] recently showed that a running time of 2^{(1/epsilon)^{1-delta}} + poly(n) for any delta > 0 would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms has been developed that try to obtain low dependencies on 1/epsilon, the best of which achieves a running time of 2^{~O(1/epsilon^{2})} + O(n*log(n)) [Jansen, SIAM J. Disc. Math. 2010]. In this paper we obtain an algorithm with a running time of 2^{~O(1/epsilon)} + O(n*log(n)), which is tight under ETH up to logarithmic factors on the exponent. Our main technical contribution is a new structural result on the configuration-IP. More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications to other settings. In particular, we show how the structure can be applied to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases we obtain an efficient PTAS with running time 2^{~O(1/epsilon)} + poly(n).
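For orientation, the sketch below evaluates the makespan produced by Graham's classical LPT rule, a (4/3 - 1/(3m))-approximation; it serves only as a baseline reference point, since the paper's EPTAS relies on the configuration-IP structure and is not reproduced here.

```python
# Graham's LPT rule: sort jobs by decreasing processing time and assign
# each to the currently least-loaded machine. Makespan = maximum load.
import heapq

def lpt_makespan(jobs, m):
    loads = [(0.0, i) for i in range(m)]      # (load, machine) min-heap
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)

assert lpt_makespan([4, 3, 2, 1], m=2) == 5.0   # optimal here: {4,1} and {3,2}
```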

Cite as

Klaus Jansen, Kim-Manuel Klein, and José Verschae. Closing the Gap for Makespan Scheduling via Sparsification Techniques. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 72:1-72:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ICALP.2016.72,
  author =	{Jansen, Klaus and Klein, Kim-Manuel and Verschae, Jos\'{e}},
  title =	{{Closing the Gap for Makespan Scheduling via Sparsification Techniques}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{72:1--72:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.72},
  URN =		{urn:nbn:de:0030-drops-62122},
  doi =		{10.4230/LIPIcs.ICALP.2016.72},
  annote =	{Keywords: scheduling, approximation, PTAS, makespan, ETH}
}
Document
Constant Approximation for Capacitated k-Median with (1+epsilon)-Capacity Violation

Authors: Gökalp Demirci and Shi Li


Abstract
We study the Capacitated k-Median problem for which existing constant-factor approximation algorithms are all pseudo-approximations that violate either the capacities or the upper bound k on the number of open facilities. Using the natural LP relaxation for the problem, one can only hope to get the violation factor down to 2. Li [SODA'16] introduced a novel LP to go beyond the limit of 2 and gave a constant-factor approximation algorithm that opens (1 + epsilon)*k facilities. We use the configuration LP of Li [SODA'16] to give a constant-factor approximation for the Capacitated k-Median problem in a seemingly harder configuration: we violate only the capacities by 1 + epsilon. This result settles the problem as far as pseudo-approximation algorithms are concerned.

Cite as

Gökalp Demirci and Shi Li. Constant Approximation for Capacitated k-Median with (1+epsilon)-Capacity Violation. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{demirci_et_al:LIPIcs.ICALP.2016.73,
  author =	{Demirci, G\"{o}kalp and Li, Shi},
  title =	{{Constant Approximation for Capacitated k-Median with (1+epsilon)-Capacity Violation}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.73},
  URN =		{urn:nbn:de:0030-drops-62112},
  doi =		{10.4230/LIPIcs.ICALP.2016.73},
  annote =	{Keywords: Approximation Algorithms, Capacitated k-Median, Pseudo Approximation, Capacity Violation}
}
Document
Approximating Directed Steiner Problems via Tree Embedding

Authors: Bundit Laekhanukit


Abstract
Directed Steiner problems are fundamental problems in Combinatorial Optimization and Theoretical Computer Science. An important problem in this genre is the k-edge connected directed Steiner tree (k-DST) problem. In this problem, we are given a directed graph G on n vertices with edge-costs, a root vertex r, a set of h terminals T and an integer k. The goal is to find a min-cost subgraph H subseteq G that connects r to each terminal t in T by k edge-disjoint r,t-paths. This problem includes as special cases the well-known directed Steiner tree (DST) problem (the case k=1) and the group Steiner tree (GST) problem. Despite having been studied and mentioned many times in the literature, e.g., by Feldman et al. [SODA'09, JCSS'12], by Cheriyan et al. [SODA'12, TALG'14], by Laekhanukit [SODA'14] and in a survey by Kortsarz and Nutov [Handbook of Approximation Algorithms and Metaheuristics], there was no known non-trivial approximation algorithm for k-DST for k >= 2, even in the special case where the input graph is directed acyclic and has a constant number of layers. If the input graph is not acyclic, the complexity status of k-DST is not known even for the very restricted special case where k=2 and h=2. In this paper, we make progress toward developing a non-trivial approximation algorithm for k-DST. We present an O(D*k^{D-1}*log(n))-approximation algorithm for k-DST on directed acyclic graphs (DAGs) with D layers, which can be extended to a special case of k-DST on "general graphs" when an instance has a D-shallow optimal solution, i.e., there exist k edge-disjoint r,t-paths, each of length at most D, for every terminal t in T. For the case k=1 (DST), our algorithm yields an approximation ratio of O(D*log(h)), thus implying an O(log^3(h))-approximation algorithm for DST that runs in quasi-polynomial time (due to the height-reduction of Zelikovsky [Algorithmica'97]). Our algorithm is based on an LP-formulation that allows us to embed a solution to a tree-instance of GST, which does not preserve connectivity. We show, however, that one can randomly extract a solution of k-DST from the tree-instance of GST. Our algorithm is almost tight when k and D are constants, since the case k=1 and D=3 is NP-hard to approximate to within a factor of O(log(h)), and our algorithm achieves the same approximation ratio for this special case. We also remark that the k^{1/4-epsilon}-hardness instance of k-DST is a DAG with 6 layers, and our algorithm gives an O(k^5*log(n))-approximation for this special case. Consequently, as our algorithm works for general graphs, we obtain an O(D*k^{D-1}*log(n))-approximation algorithm for a D-shallow instance of the k edge-connected directed Steiner subgraph problem, where we wish to connect every pair of terminals by k edge-disjoint paths.
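The connectivity requirement itself can be checked with a max-flow computation: by Menger's theorem, r has k edge-disjoint paths to t in H exactly when the maximum r-t flow with unit edge capacities is at least k. The sketch below implements this check with plain BFS augmentation (Edmonds-Karp); it verifies candidate solutions and is not the approximation algorithm.

```python
# Number of edge-disjoint r-t paths in a directed multigraph H, computed as
# a unit-capacity max flow with BFS augmenting paths (Edmonds-Karp).
from collections import deque

def edge_disjoint_paths(edges, r, t):
    cap = {}
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)              # residual arc
    adj = {}
    for u, v in cap:
        adj.setdefault(u, set()).add(v)
    flow = 0
    while True:
        parent = {r: None}
        q = deque([r])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:           # augment by one unit
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

H = [('r', 'a'), ('r', 'b'), ('a', 't'), ('b', 't')]
assert edge_disjoint_paths(H, 'r', 't') == 2
```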

Cite as

Bundit Laekhanukit. Approximating Directed Steiner Problems via Tree Embedding. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 74:1-74:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{laekhanukit:LIPIcs.ICALP.2016.74,
  author =	{Laekhanukit, Bundit},
  title =	{{Approximating Directed Steiner Problems via Tree Embedding}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{74:1--74:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.74},
  URN =		{urn:nbn:de:0030-drops-62100},
  doi =		{10.4230/LIPIcs.ICALP.2016.74},
  annote =	{Keywords: Approximation Algorithms, Network Design, Graph Connectivity, Directed Graph}
}
Document
Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median

Authors: Zachary Friggstad and Yifeng Zhang


Abstract
Budgeted Red-Blue Median is a generalization of classic k-Median in that there are two sets of facilities, say R and B, that can be used to serve clients located in some metric space. The goal is to open k_r facilities in R and k_b facilities in B for some given bounds k_r, k_b and connect each client to their nearest open facility in a way that minimizes the total connection cost. We extend work by Hajiaghayi, Khandekar, and Kortsarz [2012] and show that a multiple-swap local search heuristic can be used to obtain a (5 + epsilon)-approximation for Budgeted Red-Blue Median for any constant epsilon > 0. This is an improvement over their single-swap analysis and beats the previous best approximation guarantee of 8 by Swamy [2014]. We also present a matching lower bound showing that for every p >= 1, there are instances of Budgeted Red-Blue Median with local optimum solutions for the p-swap heuristic whose cost is 5 + Omega(1/p) times the optimum solution cost. Thus, our analysis is tight up to the lower order terms. In particular, for any epsilon > 0 we show the single-swap heuristic admits local optima whose cost can be as bad as 7 - epsilon times the optimum solution cost.

Cite as

Zachary Friggstad and Yifeng Zhang. Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 75:1-75:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{friggstad_et_al:LIPIcs.ICALP.2016.75,
  author =	{Friggstad, Zachary and Zhang, Yifeng},
  title =	{{Tight Analysis of a Multiple-Swap Heuristic for Budgeted Red-Blue Median}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{75:1--75:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.75},
  URN =		{urn:nbn:de:0030-drops-62094},
  doi =		{10.4230/LIPIcs.ICALP.2016.75},
  annote =	{Keywords: Approximation Algorithms, Local search, Red-Blue Median}
}
Document
Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices

Authors: Shi Bai, Damien Stehlé, and Weiqiang Wen


Abstract
We present a probabilistic polynomial-time reduction from the lattice Bounded Distance Decoding (BDD) problem with parameter 1/(sqrt(2) * gamma) to the unique Shortest Vector Problem (uSVP) with parameter gamma for any gamma > 1 that is polynomial in the lattice dimension n. It improves the BDD to uSVP reductions of [Lyubashevsky and Micciancio, CRYPTO, 2009] and [Liu, Wang, Xu and Zheng, Inf. Process. Lett., 2014], which rely on Kannan's embedding technique. The main ingredient of the improvement is the use of Khot's lattice sparsification [Khot, FOCS, 2003] before resorting to Kannan's embedding, in order to boost the uSVP parameter.

Cite as

Shi Bai, Damien Stehlé, and Weiqiang Wen. Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 76:1-76:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bai_et_al:LIPIcs.ICALP.2016.76,
  author =	{Bai, Shi and Stehl\'{e}, Damien and Wen, Weiqiang},
  title =	{{Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{76:1--76:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.76},
  URN =		{urn:nbn:de:0030-drops-62085},
  doi =		{10.4230/LIPIcs.ICALP.2016.76},
  annote =	{Keywords: Lattices, Bounded Distance Decoding Problem, Unique Shortest Vector Problem, Sparsification}
}
Document
A Parallel Repetition Theorem for All Entangled Games

Authors: Henry Yuen


Abstract
The behavior of games repeated in parallel, when played with quantumly entangled players, has received much attention in recent years. Quantum analogues of Raz's classical parallel repetition theorem have been proved for many special classes of games. However, for general entangled games no parallel repetition theorem was known. We prove that the entangled value of a two-player game G repeated n times in parallel is at most c_G*n^{-1/4}*log(n) for a constant c_G depending on G, provided that the entangled value of G is less than 1. In particular, this gives the first proof that the entangled value of a parallel repeated game must converge to 0 for all games whose entangled value is less than 1. Central to our proof is a combination of both classical and quantum correlated sampling.

Cite as

Henry Yuen. A Parallel Repetition Theorem for All Entangled Games. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 77:1-77:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{yuen:LIPIcs.ICALP.2016.77,
  author =	{Yuen, Henry},
  title =	{{A Parallel Repetition Theorem for All Entangled Games}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{77:1--77:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.77},
  URN =		{urn:nbn:de:0030-drops-62073},
  doi =		{10.4230/LIPIcs.ICALP.2016.77},
  annote =	{Keywords: parallel repetition, direct product theorems, entangled games, quantum games}
}
Document
Tight Sum-Of-Squares Lower Bounds for Binary Polynomial Optimization Problems

Authors: Adam Kurpisz, Samuli Leppänen, and Monaldo Mastrolilli


Abstract
We give two results concerning the power of the Sum-Of-Squares (SoS)/Lasserre hierarchy. For binary polynomial optimization problems of degree 2d and an odd number of variables n, we prove that (n+2d-1)/2 levels of the SoS/Lasserre hierarchy are necessary to provide the exact optimal value. This matches the recent upper bound result by Sakaue, Takeda, Kim and Ito. Additionally, we study a conjecture by Laurent, who considered the linear representation of a set with no integral points. She showed that the Sherali-Adams hierarchy requires n levels to detect the empty integer hull, and conjectured that the SoS/Lasserre rank for the same problem is n-1. We disprove this conjecture and derive lower and upper bounds for the rank.

Cite as

Adam Kurpisz, Samuli Leppänen, and Monaldo Mastrolilli. Tight Sum-Of-Squares Lower Bounds for Binary Polynomial Optimization Problems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 78:1-78:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kurpisz_et_al:LIPIcs.ICALP.2016.78,
  author =	{Kurpisz, Adam and Lepp\"{a}nen, Samuli and Mastrolilli, Monaldo},
  title =	{{Tight Sum-Of-Squares Lower Bounds for Binary Polynomial Optimization Problems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{78:1--78:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.78},
  URN =		{urn:nbn:de:0030-drops-63368},
  doi =		{10.4230/LIPIcs.ICALP.2016.78},
  annote =	{Keywords: SoS/Lasserre hierarchy, lift and project methods, binary polynomial optimization}
}
Document
Correlation Decay and Tractability of CSPs

Authors: Jonah Brown-Cohen and Prasad Raghavendra


Abstract
The algebraic dichotomy conjecture of Bulatov, Krokhin and Jeavons yields an elegant characterization of the complexity of constraint satisfaction problems. Roughly speaking, the characterization asserts that a CSP L is tractable if and only if there exist certain non-trivial operations known as polymorphisms to combine solutions to L to create new ones. In this work, we study the dynamical system associated with repeated applications of a polymorphism to a distribution over assignments. Specifically, we exhibit a correlation decay phenomenon under which two variables, or groups of variables, that are not perfectly correlated become independent after repeated applications of a polymorphism. We show that this correlation decay phenomenon can be utilized in designing algorithms for CSPs by exhibiting two applications: 1. A simple randomized algorithm to solve linear equations over a prime field, whose analysis crucially relies on correlation decay. 2. A sufficient condition for the simple linear programming relaxation for a 2-CSP to be sound (have no integrality gap) on a given instance.
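
The first application can be made concrete on a toy instance. The sketch below is not code from the paper; it assumes the standard ternary affine polymorphism (x, y, z) -> x - y + z mod p that makes linear equations over F_p tractable, and checks that applying it coordinate-wise to solutions always yields another solution.

# Minimal sketch: the affine polymorphism x - y + z (mod p) maps any three
# solutions of a linear system over F_p to another solution.
import random

p = 7
# System over F_7: x0 + 2*x1 = 3 and 3*x1 + x2 = 5 (mod 7).
A = [(1, 2, 0), (0, 3, 1)]
b = [3, 5]

def satisfies(x):
    return all(sum(a * xi for a, xi in zip(row, x)) % p == bi
               for row, bi in zip(A, b))

# Enumerate all solutions by brute force (fine for 3 variables over F_7).
solutions = [(x0, x1, x2) for x0 in range(p) for x1 in range(p)
             for x2 in range(p) if satisfies((x0, x1, x2))]

for _ in range(5):
    u, v, w = (random.choice(solutions) for _ in range(3))
    combined = tuple((ui - vi + wi) % p for ui, vi, wi in zip(u, v, w))
    assert satisfies(combined)  # the combined assignment is again a solution
print(f"{len(solutions)} solutions; x - y + z preserved all sampled triples")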

Cite as

Jonah Brown-Cohen and Prasad Raghavendra. Correlation Decay and Tractability of CSPs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 79:1-79:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{browncohen_et_al:LIPIcs.ICALP.2016.79,
  author =	{Brown-Cohen, Jonah and Raghavendra, Prasad},
  title =	{{Correlation Decay and Tractability of CSPs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{79:1--79:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.79},
  URN =		{urn:nbn:de:0030-drops-62064},
  doi =		{10.4230/LIPIcs.ICALP.2016.79},
  annote =	{Keywords: Constraint Satisfaction, Polymorphisms, Linear Equations, Correlation Decay}
}
Document
On Percolation and NP-Hardness

Authors: Huck Bennett, Daniel Reichman, and Igor Shinkar


Abstract
The edge-percolation and vertex-percolation random graph models start with an arbitrary graph G, and randomly delete edges or vertices of G with some fixed probability. We study the computational hardness of problems whose inputs are obtained by applying percolation to worst-case instances. Specifically, we show that a number of classical NP-hard graph problems remain essentially as hard on percolated instances as they are in the worst-case (assuming NP !subseteq BPP). We also prove hardness results for other NP-hard problems such as Constraint Satisfaction Problems, where random deletions are applied to clauses or variables. We focus on proving the hardness of the Maximum Independent Set problem and the Graph Coloring problem on percolated instances. To show this we establish the robustness of the corresponding parameters alpha(.) and chi(.) to percolation, which may be of independent interest. Given a graph G, let G' be the graph obtained by randomly deleting edges of G. We show that if alpha(G) is small, then alpha(G') remains small with probability at least 0.99. Similarly, we show that if chi(G) is large, then chi(G') remains large with probability at least 0.99.
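
The following toy experiment is illustrative only (the paper's results concern worst-case instances and asymptotic parameters): it applies edge percolation with deletion probability 1/2 to a small random graph and compares the independence number alpha before and after, by brute force.

# Edge percolation on a tiny graph, with alpha computed by exhaustive search.
import itertools, random

def alpha(n, edges):
    """Independence number by brute force; only viable for tiny n."""
    for k in range(n, 0, -1):
        for S in itertools.combinations(range(n), k):
            if not any(u in S and v in S for u, v in edges):
                return k
    return 0

n = 10
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.5]
percolated = [e for e in edges if random.random() < 0.5]  # keep each edge w.p. 1/2

print("alpha(G)  =", alpha(n, edges))
print("alpha(G') =", alpha(n, percolated))  # deleting edges can only increase alpha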

Cite as

Huck Bennett, Daniel Reichman, and Igor Shinkar. On Percolation and NP-Hardness. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bennett_et_al:LIPIcs.ICALP.2016.80,
  author =	{Bennett, Huck and Reichman, Daniel and Shinkar, Igor},
  title =	{{On Percolation and NP-Hardness}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.80},
  URN =		{urn:nbn:de:0030-drops-62056},
  doi =		{10.4230/LIPIcs.ICALP.2016.80},
  annote =	{Keywords: percolation, NP-hardness, random subgraphs, chromatic number}
}
Document
Tight Hardness Results for Maximum Weight Rectangles

Authors: Arturs Backurs, Nishanth Dikkala, and Christos Tzamos


Abstract
Given n weighted points (positive or negative) in d dimensions, what is the axis-aligned box which maximizes the total weight of the points it contains? The best known algorithm for this problem is based on a reduction to a related problem, the Weighted Depth problem [Chan, FOCS, 2013], and runs in time O(n^d). It was conjectured [Barbay et al., CCCG, 2013] that this runtime is tight up to subpolynomial factors. We answer this conjecture affirmatively by providing a matching conditional lower bound. We also provide conditional lower bounds for the special case when points are arranged in a grid (a well studied problem known as Maximum Subarray problem) as well as for other related problems. All our lower bounds are based on assumptions that the best known algorithms for the All-Pairs Shortest Paths problem (APSP) and for the Max-Weight k-Clique problem in edge-weighted graphs are essentially optimal.
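
For the grid special case, the classical cubic-time algorithm for Maximum Subarray is easy to state. The sketch below is a standard textbook implementation (not from the paper): fix a pair of rows and run 1D Kadane on the column sums, giving O(n^3) for an n x n grid; the paper's conditional lower bounds address precisely whether running times of this kind can be improved.

# Maximum-weight axis-aligned subarray of a 2D grid in O(rows^2 * cols) time.
def max_subarray_2d(grid):
    n_rows, n_cols = len(grid), len(grid[0])
    best = float("-inf")
    for top in range(n_rows):
        col_sums = [0] * n_cols
        for bottom in range(top, n_rows):
            for j in range(n_cols):
                col_sums[j] += grid[bottom][j]
            # 1D Kadane over the compressed row of column sums.
            cur = float("-inf")
            for s in col_sums:
                cur = s if cur < 0 else cur + s
                best = max(best, cur)
    return best

grid = [[ 1, -2,  3],
        [-4,  5, -6],
        [ 7, -8,  9]]
print(max_subarray_2d(grid))  # 9: here the single cell (2,2) beats every larger box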

Cite as

Arturs Backurs, Nishanth Dikkala, and Christos Tzamos. Tight Hardness Results for Maximum Weight Rectangles. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 81:1-81:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{backurs_et_al:LIPIcs.ICALP.2016.81,
  author =	{Backurs, Arturs and Dikkala, Nishanth and Tzamos, Christos},
  title =	{{Tight Hardness Results for Maximum Weight Rectangles}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{81:1--81:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.81},
  URN =		{urn:nbn:de:0030-drops-62040},
  doi =		{10.4230/LIPIcs.ICALP.2016.81},
  annote =	{Keywords: Maximum Rectangles, Hardness in P}
}
Document
The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction

Authors: Kasper Green Larsen and Jelani Nelson


Abstract
For any n > 1, 0 < epsilon < 1/2, and N > n^C for some constant C > 0, we show the existence of an N-point subset X of l_2^n such that any linear map from X to l_2^m with distortion at most 1 + epsilon must have m = Omega(min{n, epsilon^{-2}*lg(N)}). This improves a lower bound of Alon [Alon, Discre. Mathem., 1999], in the linear setting, by a lg(1/epsilon) factor. Our lower bound matches the upper bounds provided by the identity matrix and the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, Contem. Mathem., 1984].
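
A quick empirical companion to the statement (assuming numpy; an illustration, not the paper's construction): project N points down to m ~ epsilon^{-2} * lg(N) dimensions with a random Gaussian matrix, which realizes the matching upper bound, and check the pairwise distortion.

# Random Gaussian projection: all pairwise distances preserved to 1 +/- eps
# with high probability over the draw of P, as the JL lemma promises.
import numpy as np

rng = np.random.default_rng(0)
n, N, eps = 200, 100, 0.5
m = int(np.ceil(8 * np.log(N) / eps**2))  # 8 is a common safe constant choice

X = rng.normal(size=(N, n))
P = rng.normal(size=(m, n)) / np.sqrt(m)  # the linear JL map
Y = X @ P.T

worst = 0.0
for i in range(N):
    for j in range(i + 1, N):
        d_orig = np.linalg.norm(X[i] - X[j])
        d_proj = np.linalg.norm(Y[i] - Y[j])
        worst = max(worst, abs(d_proj / d_orig - 1))
print(f"m = {m}, worst relative distortion = {worst:.3f} (target {eps})")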

Cite as

Kasper Green Larsen and Jelani Nelson. The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 82:1-82:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{larsen_et_al:LIPIcs.ICALP.2016.82,
  author =	{Larsen, Kasper Green and Nelson, Jelani},
  title =	{{The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{82:1--82:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.82},
  URN =		{urn:nbn:de:0030-drops-62032},
  doi =		{10.4230/LIPIcs.ICALP.2016.82},
  annote =	{Keywords: dimensionality reduction, lower bounds, Johnson-Lindenstrauss}
}
Document
Impossibility of Sketching of the 3D Transportation Metric with Quadratic Cost

Authors: Alexandr Andoni, Assaf Naor, and Ofer Neiman


Abstract
Transportation cost metrics, also known as the Wasserstein distances W_p, are a natural choice for defining distances between two pointsets, or distributions, and have been applied in numerous fields. From the computational perspective, there has been an intensive research effort for understanding the W_p metrics over R^k, with work on the W_1 metric (a.k.a. earth mover's distance) being most successful in terms of theoretical guarantees. However, the W_2 metric, also known as the root-mean square (RMS) bipartite matching distance, is often a more suitable choice in many application areas, e.g. in graphics. Yet, the geometry of this metric space is currently poorly understood, and efficient algorithms have been elusive. For example, there are no known non-trivial algorithms for nearest-neighbor search or sketching for this metric. In this paper we take the first step towards explaining the lack of efficient algorithms for the W_2 metric, even over the three-dimensional Euclidean space R^3. We prove that there are no meaningful embeddings of W_2 over R^3 into a wide class of normed spaces, as well as that there are no efficient sketching algorithms for W_2 over R^3 achieving constant approximation. For example, our results imply that: 1) any embedding into L1 must incur a distortion of Omega(sqrt(log(n))) for pointsets of size n equipped with the W_2 metric; and 2) any sketching algorithm of size s must incur Omega(sqrt(log(n))/sqrt(s)) approximation. Our results follow from a more general statement, asserting that W_2 over R^3 contains the 1/2-snowflake of all finite metric spaces with a uniformly bounded distortion. These are the first non-embeddability/non-sketchability results for W_2.
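
The metric under study is easy to compute exactly for small point sets. The sketch below (assuming numpy and scipy; it evaluates the object of study, not anything from the paper) computes W_2 between two equal-size 3D point sets via a min-cost perfect matching on squared distances.

# W_2 (RMS bipartite matching distance) between uniform distributions on two
# equal-size point sets in R^3, computed exactly with the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2(A, B):
    # cost[i, j] = squared Euclidean distance between A[i] and B[j]
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)  # min-cost perfect matching
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(1)
A = rng.uniform(size=(50, 3))
B = rng.uniform(size=(50, 3))
print(f"W_2(A, B) = {w2(A, B):.4f}")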

Cite as

Alexandr Andoni, Assaf Naor, and Ofer Neiman. Impossibility of Sketching of the 3D Transportation Metric with Quadratic Cost. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 83:1-83:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{andoni_et_al:LIPIcs.ICALP.2016.83,
  author =	{Andoni, Alexandr and Naor, Assaf and Neiman, Ofer},
  title =	{{Impossibility of Sketching of the 3D Transportation Metric with Quadratic Cost}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{83:1--83:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.83},
  URN =		{urn:nbn:de:0030-drops-62028},
  doi =		{10.4230/LIPIcs.ICALP.2016.83},
  annote =	{Keywords: Transportation metric, embedding, snowflake, sketching}
}
Document
Simple Average-Case Lower Bounds for Approximate Near-Neighbor from Isoperimetric Inequalities

Authors: Yitong Yin


Abstract
We prove an Omega(d/log(sw/nd)) lower bound for the average-case cell-probe complexity of deterministic or Las Vegas randomized algorithms solving the approximate near-neighbor (ANN) problem in d-dimensional Hamming space in the cell-probe model with w-bit cells, using a table of size s. This lower bound matches the highest worst-case cell-probe lower bounds known for any static data structure problem. This average-case cell-probe lower bound is proved in a general framework which relates the cell-probe complexity of ANN to isoperimetric inequalities in the underlying metric space. A tighter connection between ANN lower bounds and isoperimetric inequalities is established by a stronger richness lemma proved by cell-sampling techniques.
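
To get a feel for the trade-off, the snippet below plugs representative, made-up parameters into the Omega(d/log(sw/nd)) bound (an illustration, not a computation from the paper). Note the expression is only meaningful once the table is large enough to store the input, i.e. sw > nd.

# Evaluating the Omega(d / log(sw/nd)) trade-off for a few table sizes.
import math

d, n = 128, 2**20
w = 64                         # cell size in bits
base = n * d // w              # s*w = n*d: the table barely stores the input
for factor in [2, 16, 1024]:   # give the table `factor` times that much space
    s = factor * base
    t = d / math.log2(s * w / (n * d))
    print(f"s = {s:>12} cells: probe lower bound ~ {t:.1f} (up to constants)")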

Cite as

Yitong Yin. Simple Average-Case Lower Bounds for Approximate Near-Neighbor from Isoperimetric Inequalities. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 84:1-84:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{yin:LIPIcs.ICALP.2016.84,
  author =	{Yin, Yitong},
  title =	{{Simple Average-Case Lower Bounds for Approximate Near-Neighbor from Isoperimetric Inequalities}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{84:1--84:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.84},
  URN =		{urn:nbn:de:0030-drops-62010},
  doi =		{10.4230/LIPIcs.ICALP.2016.84},
  annote =	{Keywords: nearest neighbor search, approximate near-neighbor, cell-probe model, isoperimetric inequality}
}
Document
Quasimetric Embeddings and Their Applications

Authors: Facundo Mémoli, Anastasios Sidiropoulos, and Vijay Sridhar


Abstract
We study generalizations of classical metric embedding results to the case of quasimetric spaces; that is, spaces that do not necessarily satisfy symmetry. Quasimetric spaces arise naturally from the shortest-path distances on directed graphs. Perhaps surprisingly, very little is known about low-distortion embeddings for quasimetric spaces. Random embeddings into ultrametric spaces are arguably one of the most successful geometric tools in the context of algorithm design. We extend this to the quasimetric case as follows. We show that any n-point quasimetric space supported on a graph of treewidth t admits a random embedding into quasiultrametric spaces with distortion O(t*log^2(n)), where quasiultrametrics are a natural generalization of ultrametrics. This result allows us to obtain t*log^{O(1)}(n)-approximation algorithms for the Directed Non-Bipartite Sparsest-Cut and the Directed Multicut problems on n-vertex graphs of treewidth t, with running time polynomial in both n and t. The above results are obtained by considering a generalization of random partitions to the quasimetric case, which we refer to as random quasipartitions. Using this definition and a construction of [Chuzhoy and Khanna 2009] we derive a polynomial lower bound on the distortion of random embeddings of general quasimetric spaces into quasiultrametric spaces. Finally, we establish a lower bound for embedding the shortest-path quasimetric of a graph G into graphs that exclude G as a minor. This lower bound is used to show that several embedding results from the metric case do not have natural analogues in the quasimetric setting.

Cite as

Facundo Mémoli, Anastasios Sidiropoulos, and Vijay Sridhar. Quasimetric Embeddings and Their Applications. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 85:1-85:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{memoli_et_al:LIPIcs.ICALP.2016.85,
  author =	{M\'{e}moli, Facundo and Sidiropoulos, Anastasios and Sridhar, Vijay},
  title =	{{Quasimetric Embeddings and Their Applications}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{85:1--85:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.85},
  URN =		{urn:nbn:de:0030-drops-62007},
  doi =		{10.4230/LIPIcs.ICALP.2016.85},
  annote =	{Keywords: metric embeddings, quasimetrics, outliers, random embeddings, treewidth, Directed Sparsest-Cut, Directed Multicut}
}
Document
The Landscape of Communication Complexity Classes

Authors: Mika Göös, Toniann Pitassi, and Thomas Watson


Abstract
We prove several results which, together with prior work, provide a nearly-complete picture of the relationships among classical communication complexity classes between P and PSPACE, short of proving lower bounds against classes for which no explicit lower bounds were already known. Our article also serves as an up-to-date survey on the state of structural communication complexity. Among our new results we show that MA !subseteq ZPP^{NP[1]}, that is, Merlin–Arthur proof systems cannot be simulated by zero-sided error randomized protocols with one NP query. Here the class ZPP^{NP[1]} has the property that generalizing it in the slightest ways would make it contain AM intersect coAM, for which it is notoriously open to prove any explicit lower bounds. We also prove that US !subseteq ZPP^{NP[1]}, where US is the class whose canonically complete problem is the variant of set-disjointness where yes-instances are uniquely intersecting. We also prove that US !subseteq coDP, where DP is the class of differences of two NP sets. Finally, we explore an intriguing open issue: are rank-1 matrices inherently more powerful than rectangles in communication complexity? We prove a new separation concerning PP that sheds light on this issue and strengthens some previously known separations.

Cite as

Mika Göös, Toniann Pitassi, and Thomas Watson. The Landscape of Communication Complexity Classes. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 86:1-86:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{goos_et_al:LIPIcs.ICALP.2016.86,
  author =	{G\"{o}\"{o}s, Mika and Pitassi, Toniann and Watson, Thomas},
  title =	{{The Landscape of Communication Complexity Classes}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{86:1--86:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.86},
  URN =		{urn:nbn:de:0030-drops-61990},
  doi =		{10.4230/LIPIcs.ICALP.2016.86},
  annote =	{Keywords: Landscape, communication, complexity, classes}
}
Document
Information Complexity Is Computable

Authors: Mark Braverman and Jon Schneider


Abstract
The information complexity of a function f is the minimum amount of information Alice and Bob need to exchange to compute the function f. In this paper we provide an algorithm for approximating the information complexity of an arbitrary function f to within any additive error epsilon > 0, thus resolving an open question as to whether information complexity is computable. In the process, we give the first explicit upper bound on the rate of convergence of the information complexity of f when restricted to b-bit protocols to the (unrestricted) information complexity of f.

Cite as

Mark Braverman and Jon Schneider. Information Complexity Is Computable. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 87:1-87:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{braverman_et_al:LIPIcs.ICALP.2016.87,
  author =	{Braverman, Mark and Schneider, Jon},
  title =	{{Information Complexity Is Computable}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{87:1--87:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.87},
  URN =		{urn:nbn:de:0030-drops-62203},
  doi =		{10.4230/LIPIcs.ICALP.2016.87},
  annote =	{Keywords: Communication complexity, convergence rate, information complexity}
}
Document
Rényi Information Complexity and an Information Theoretic Characterization of the Partition Bound

Authors: Manoj M. Prabhakaran and Vinod M. Prabhakaran


Abstract
In this work we introduce a new information-theoretic complexity measure for 2-party functions, called Rényi information complexity. It is a lower-bound on communication complexity, and has the two leading lower-bounds on communication complexity as its natural relaxations: (external) information complexity and logarithm of partition complexity. These two lower-bounds had so far appeared conceptually quite different from each other, but we show that they are both obtained from Rényi information complexity using two different, but natural relaxations: 1. The relaxation of Rényi information complexity that yields information complexity is to change the order of Rényi mutual information used in its definition from infinity to 1. 2. The relaxation that connects Rényi information complexity with partition complexity is to replace protocol transcripts used in the definition of Rényi information complexity with what we term "pseudotranscripts", which omits the interactive nature of a protocol, but only requires that the probability of any transcript given inputs x and y to the two parties, factorizes into two terms which depend on x and y separately. While this relaxation yields an apparently different definition than (log of) partition function, we show that the two are in fact identical. This gives us a surprising characterization of the partition bound in terms of an information-theoretic quantity. We also show that if both the above relaxations are simultaneously applied to Rényi information complexity, we obtain a complexity measure that is lower-bounded by the (log of) relaxed partition complexity, a complexity measure introduced by Kerenidis et al. (FOCS 2012). We obtain a sharper connection between (external) information complexity and relaxed partition complexity than Kerenidis et al., using an arguably more direct proof. Further understanding Rényi information complexity (of various orders) might have consequences for important direct-sum problems in communication complexity, as it lies between communication complexity and information complexity.

Cite as

Manoj M. Prabhakaran and Vinod M. Prabhakaran. Rényi Information Complexity and an Information Theoretic Characterization of the Partition Bound. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 88:1-88:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{prabhakaran_et_al:LIPIcs.ICALP.2016.88,
  author =	{Prabhakaran, Manoj M. and Prabhakaran, Vinod M.},
  title =	{{R\'{e}nyi Information Complexity and an Information Theoretic Characterization of the Partition Bound}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{88:1--88:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.88},
  URN =		{urn:nbn:de:0030-drops-61970},
  doi =		{10.4230/LIPIcs.ICALP.2016.88},
  annote =	{Keywords: Information Complexity, Communication Complexity, R\'{e}nyi Mutual Information}
}
Document
On Isoperimetric Profiles and Computational Complexity

Authors: Pavel Hrubes and Amir Yehudayoff


Abstract
The isoperimetric profile of a graph is a function that measures, for an integer k, the size of the smallest edge boundary over all sets of vertices of size k. We observe a connection between isoperimetric profiles and computational complexity. We illustrate this connection by an example from communication complexity, but our main result is in algebraic complexity. We prove a sharp super-polynomial separation between monotone arithmetic circuits and monotone arithmetic branching programs. This shows that the classical simulation of arithmetic circuits by arithmetic branching programs by Valiant, Skyum, Berkowitz, and Rackoff (1983) cannot be improved, as long as it preserves monotonicity. A key ingredient in the proof is an accurate analysis of the isoperimetric profile of finite full binary trees. We show that the isoperimetric profile of a full binary tree constantly fluctuates between one and almost the depth of the tree.
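
The fluctuation is already visible on small trees. The brute-force computation below (illustrative only; the paper's analysis is asymptotic) evaluates the isoperimetric profile of the depth-3 full binary tree by minimizing the edge boundary over all k-vertex subsets.

# Isoperimetric profile of a small full binary tree by exhaustive search.
import itertools

depth = 3  # full binary tree with 2^(depth+1) - 1 = 15 vertices
n = 2 ** (depth + 1) - 1
edges = [(v, 2 * v + c) for v in range(n) for c in (1, 2) if 2 * v + c < n]

def boundary(S):
    # number of edges with exactly one endpoint in S
    return sum(1 for u, v in edges if (u in S) != (v in S))

for k in range(1, n + 1):
    profile = min(boundary(set(S)) for S in itertools.combinations(range(n), k))
    print(f"k = {k:2d}: min edge boundary = {profile}")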

Cite as

Pavel Hrubes and Amir Yehudayoff. On Isoperimetric Profiles and Computational Complexity. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 89:1-89:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{hrubes_et_al:LIPIcs.ICALP.2016.89,
  author =	{Hrubes, Pavel and Yehudayoff, Amir},
  title =	{{On Isoperimetric Profiles and Computational Complexity}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{89:1--89:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.89},
  URN =		{urn:nbn:de:0030-drops-61964},
  doi =		{10.4230/LIPIcs.ICALP.2016.89},
  annote =	{Keywords: Monotone computation, separations, communication complexity, isoperimetry}
}
Document
Tolerant Testers of Image Properties

Authors: Piotr Berman, Meiram Murzabulatov, and Sofya Raskhodnikova


Abstract
We initiate a systematic study of tolerant testers of image properties or, equivalently, algorithms that approximate the distance from a given image to the desired property (that is, the smallest fraction of pixels that need to change in the image to ensure that the image satisfies the desired property). Image processing is a particularly compelling area of applications for sublinear-time algorithms and, specifically, property testing. However, for testing algorithms to reach their full potential in image processing, they have to be tolerant, which allows them to be resilient to noise. Prior to this work, only one tolerant testing algorithm for an image property (image partitioning) has been published. We design efficient approximation algorithms for the following fundamental questions: What fraction of pixels have to be changed in an image so that it becomes a half-plane? a representation of a convex object? a representation of a connected object? More precisely, our algorithms approximate the distance to three basic properties (being a half-plane, convexity, and connectedness) within a small additive error epsilon, after reading a number of pixels polynomial in 1/epsilon and independent of the size of the image. The running time of the testers for half-plane and convexity is also polynomial in 1/epsilon. Tolerant testers for these three properties were not investigated previously. For convexity and connectedness, even the existence of distance approximation algorithms with query complexity independent of the input size is not implied by previous work. (It does not follow from the VC-dimension bounds, since VC dimension of convexity and connectedness, even in two dimensions, depends on the input size. It also does not follow from the existence of non-tolerant testers.) Our algorithms require very simple access to the input: uniform random samples for the half-plane property and convexity, and samples from uniformly random blocks for connectedness. However, the analysis of the algorithms, especially for convexity, requires many geometric and combinatorial insights. For example, in the analysis of the algorithm for convexity, we define a set of reference polygons P_{epsilon} such that (1) every convex image has a nearby polygon in P_{epsilon} and (2) one can use dynamic programming to quickly compute the smallest empirical distance to a polygon in P_{epsilon}. This construction might be of independent interest.
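
The sampling access pattern can be illustrated as follows. This is a simplified sketch, not the paper's algorithm: it estimates the distance from an image to one fixed half-plane by uniform sampling, whereas the paper's testers approximate the distance to the nearest half-plane with comparable sample complexity.

# Estimate the fraction of pixels disagreeing with a fixed half-plane,
# using O(1/eps^2) uniformly random pixel samples (Chernoff-style count).
import random

n, eps = 512, 0.05
image = [[1 if x + y < n else 0 for x in range(n)] for y in range(n)]
image[10][10] = 0  # corrupt a few pixels
image[20][30] = 0

def halfplane(x, y):               # the fixed reference half-plane
    return 1 if x + y < n else 0

samples = int(4 / eps**2)
disagree = 0
for _ in range(samples):
    x, y = random.randrange(n), random.randrange(n)
    disagree += image[y][x] != halfplane(x, y)
print(f"estimated distance ~ {disagree / samples:.4f} (+/- {eps})")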

Cite as

Piotr Berman, Meiram Murzabulatov, and Sofya Raskhodnikova. Tolerant Testers of Image Properties. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{berman_et_al:LIPIcs.ICALP.2016.90,
  author =	{Berman, Piotr and Murzabulatov, Meiram and Raskhodnikova, Sofya},
  title =	{{Tolerant Testers of Image Properties}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{90:1--90:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.90},
  URN =		{urn:nbn:de:0030-drops-61959},
  doi =		{10.4230/LIPIcs.ICALP.2016.90},
  annote =	{Keywords: Computational geometry, convexity, half-plane, connectedness, property testing, tolerant property testing}
}
Document
Erasure-Resilient Property Testing

Authors: Kashyap Dixit, Sofya Raskhodnikova, Abhradeep Thakurta, and Nithin Varma


Abstract
Property testers form an important class of sublinear algorithms. In the standard property testing model, an algorithm accesses the input function f:D -> R via an oracle. With very few exceptions, all property testers studied in this model rely on the oracle to provide function values at all queried domain points. However, in many realistic situations, the oracle may be unable to reveal the function values at some domain points due to privacy concerns, or when some of the values get erased by mistake or by an adversary. The testers do not learn anything useful about the property by querying those erased points. Moreover, the knowledge of a tester may enable an adversary to erase some of the values so as to increase the query complexity of the tester arbitrarily or, in some cases, make the tester entirely useless. In this work, we initiate a study of property testers that are resilient to the presence of adversarially erased function values. An alpha-erasure-resilient epsilon-tester is given parameters alpha, epsilon in (0,1), along with oracle access to a function f such that at most an alpha fraction of function values have been erased. The tester does not know whether a value is erased until it queries the corresponding domain point. The tester has to accept with high probability if there is a way to assign values to the erased points such that the resulting function satisfies the desired property P. It has to reject with high probability if, for every assignment of values to the erased points, the resulting function has to be changed in at least an epsilon-fraction of the non-erased domain points to satisfy P. We design erasure-resilient property testers for a large class of properties. For some properties, it is possible to obtain erasure-resilient testers by simply using standard testers as a black box. However, there are more challenging properties for which all known testers rely on querying a specific point. If this point is erased, all these testers break. We give efficient erasure-resilient testers for several important classes of such properties of functions including monotonicity, the Lipschitz property, and convexity. Finally, we show a separation between standard testing and erasure-resilient testing. Specifically, we describe a property that can be epsilon-tested with O(1/epsilon) queries in the standard model, whereas testing it in the erasure-resilient model requires a number of queries polynomial in the input size.

Cite as

Kashyap Dixit, Sofya Raskhodnikova, Abhradeep Thakurta, and Nithin Varma. Erasure-Resilient Property Testing. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 91:1-91:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{dixit_et_al:LIPIcs.ICALP.2016.91,
  author =	{Dixit, Kashyap and Raskhodnikova, Sofya and Thakurta, Abhradeep and Varma, Nithin},
  title =	{{Erasure-Resilient Property Testing}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{91:1--91:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.91},
  URN =		{urn:nbn:de:0030-drops-61947},
  doi =		{10.4230/LIPIcs.ICALP.2016.91},
  annote =	{Keywords: Randomized algorithms, property testing, error correction, monotone and Lipschitz functions}
}
Document
Towards Tight Lower Bounds for Range Reporting on the RAM

Authors: Allan Grønlund and Kasper Green Larsen


Abstract
In the orthogonal range reporting problem, we are to preprocess a set of n points with integer coordinates on a UxU grid. The goal is to support reporting all k points inside an axis-aligned query rectangle. This is one of the most fundamental data structure problems in databases and computational geometry. Despite the importance of the problem, its complexity remains unresolved in the word-RAM. On the upper bound side, the three best known tradeoffs are all derived by reducing range reporting to a ball-inheritance problem. Ball-inheritance is a problem that essentially encapsulates all previous attempts at solving range reporting in the word-RAM. In this paper we make progress towards closing the gap between the upper and lower bounds for range reporting by proving cell probe lower bounds for ball-inheritance. Our lower bounds are tight for a large range of parameters, excluding any further progress for range reporting using the ball-inheritance reduction.
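
For contrast with the ball-inheritance structures, here is the simplest baseline (not from the paper): sort the points by x once, then answer a query with a binary search on the x-range followed by a linear filter on y. The question the paper studies is how much better word-RAM structures can do.

# Naive orthogonal range reporting: O(log n + points-in-x-slab) per query.
import bisect

points = [(3, 7), (1, 4), (6, 2), (5, 9), (8, 5), (2, 8)]
points.sort()                        # sort by x-coordinate
xs = [x for x, _ in points]

def report(x1, x2, y1, y2):
    lo, hi = bisect.bisect_left(xs, x1), bisect.bisect_right(xs, x2)
    return [(x, y) for x, y in points[lo:hi] if y1 <= y <= y2]

print(report(2, 6, 3, 9))  # -> [(2, 8), (3, 7), (5, 9)]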

Cite as

Allan Grønlund and Kasper Green Larsen. Towards Tight Lower Bounds for Range Reporting on the RAM. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 92:1-92:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{grnlund_et_al:LIPIcs.ICALP.2016.92,
  author =	{Gr{\o}nlund, Allan and Larsen, Kasper Green},
  title =	{{Towards Tight Lower Bounds for Range Reporting on the RAM}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{92:1--92:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.92},
  URN =		{urn:nbn:de:0030-drops-61936},
  doi =		{10.4230/LIPIcs.ICALP.2016.92},
  annote =	{Keywords: Data Structures, Lower Bounds, Cell Probe Model, Range Reporting}
}
Document
Data Structure Lower Bounds for Document Indexing Problems

Authors: Peyman Afshani and Jesper Sindahl Nielsen


Abstract
We study data structure problems related to document indexing and pattern matching queries and our main contribution is to show that the pointer machine model of computation can be extremely useful in proving high and unconditional lower bounds that cannot be obtained in any other known model of computation with the current techniques. Often our lower bounds match the known space-query time trade-off curve and in fact for all the problems considered, there is a very good and reasonable match between our lower bounds and the known upper bounds, at least for some choice of input parameters. The problems that we consider are set intersection queries (both the reporting variant and the semi-group counting variant), indexing a set of documents for two-pattern queries, or forbidden-pattern queries, or queries with wild-cards, and indexing an input set of gapped-patterns (or two-patterns) to find those matching a document given at the query time.

Cite as

Peyman Afshani and Jesper Sindahl Nielsen. Data Structure Lower Bounds for Document Indexing Problems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{afshani_et_al:LIPIcs.ICALP.2016.93,
  author =	{Afshani, Peyman and Nielsen, Jesper Sindahl},
  title =	{{Data Structure Lower Bounds for Document Indexing Problems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{93:1--93:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.93},
  URN =		{urn:nbn:de:0030-drops-61923},
  doi =		{10.4230/LIPIcs.ICALP.2016.93},
  annote =	{Keywords: Data Structure Lower Bounds, Pointer Machine, Set Intersection, Pattern Matching}
}
Document
Proof Complexity Modulo the Polynomial Hierarchy: Understanding Alternation as a Source of Hardness

Authors: Hubie Chen


Abstract
We present and study a framework in which one can present alternation-based lower bounds on proof length in proof systems for quantified Boolean formulas. A key notion in this framework is that of proof system ensemble, which is (essentially) a sequence of proof systems where, for each, proof checking can be performed in the polynomial hierarchy. We introduce a proof system ensemble called relaxing QU-res which is based on the established proof system QU-resolution. Our main results include an exponential separation of the tree-like and general versions of relaxing QU-res, and an exponential lower bound for relaxing QU-res; these are analogs of classical results in propositional proof complexity.

Cite as

Hubie Chen. Proof Complexity Modulo the Polynomial Hierarchy: Understanding Alternation as a Source of Hardness. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 94:1-94:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chen:LIPIcs.ICALP.2016.94,
  author =	{Chen, Hubie},
  title =	{{Proof Complexity Modulo the Polynomial Hierarchy: Understanding Alternation as a Source of Hardness}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{94:1--94:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.94},
  URN =		{urn:nbn:de:0030-drops-62290},
  doi =		{10.4230/LIPIcs.ICALP.2016.94},
  annote =	{Keywords: proof complexity, polynomial hierarchy, quantified propositional logic}
}
Document
Past, Present, and Infinite Future

Authors: Thomas Wilke


Abstract
I was supposed to deliver one of the speeches at Wolfgang Thomas's retirement ceremony. Wolfgang had called me on the phone earlier and posed some questions about temporal logic, but I hadn't had good answers at the time. What I decided to do at the ceremony was to take up the conversation again and show how it could have evolved if only I had put more effort into answering his questions. Here is the imaginary conversation with Wolfgang. The contributions are (1) the first direct translation from counter-free omega-automata into future temporal formulas, (2) a definition of bimachines for omega-words, (3) a translation from arbitrary temporal formulas (including both future and past operators) into counter-free omega-bimachines, and (4) an automata-based proof of separation: every arbitrary temporal formula is equivalent to a boolean combination of pure future, present, and pure past formulas when interpreted in omega-words.

Cite as

Thomas Wilke. Past, Present, and Infinite Future. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 95:1-95:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{wilke:LIPIcs.ICALP.2016.95,
  author =	{Wilke, Thomas},
  title =	{{Past, Present, and Infinite Future}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{95:1--95:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.95},
  URN =		{urn:nbn:de:0030-drops-62306},
  doi =		{10.4230/LIPIcs.ICALP.2016.95},
  annote =	{Keywords: linear-time temporal logic, separation, backward deterministic omega-automata, counter freeness}
}
Document
Thin MSO with a Probabilistic Path Quantifier

Authors: Mikolaj Bojanczyk


Abstract
This paper is about a variant of MSO on infinite trees where: - there is a quantifier "zero probability of choosing a path pi in 2^{omega} which makes phi(pi) true"; - the monadic quantifiers range over sets with countable topological closure. We introduce an automaton model, and show that it captures the logic.

Cite as

Mikolaj Bojanczyk. Thin MSO with a Probabilistic Path Quantifier. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 96:1-96:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bojanczyk:LIPIcs.ICALP.2016.96,
  author =	{Bojanczyk, Mikolaj},
  title =	{{Thin MSO with a Probabilistic Path Quantifier}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{96:1--96:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.96},
  URN =		{urn:nbn:de:0030-drops-62315},
  doi =		{10.4230/LIPIcs.ICALP.2016.96},
  annote =	{Keywords: Automata, mso, infinite trees, probabilistic temporal logics}
}
Document
Deciding Piecewise Testable Separability for Regular Tree Languages

Authors: Jean Goubault-Larrecq and Sylvain Schmitz


Abstract
The piecewise testable separability problem asks, given two input languages, whether there exists a piecewise testable language that contains the first input language and is disjoint from the second. We prove a general characterisation of piecewise testable separability on languages in a well-quasiorder, in terms of ideals of the ordering. This subsumes the known characterisations in the case of finite words. In the case of finite ranked trees ordered by homeomorphic embedding, we show using effective representations for tree ideals that it entails the decidability of piecewise testable separability when the input languages are regular. A final byproduct is a new proof of the decidability of whether an input regular language of ranked trees is piecewise testable, which was first shown in the unranked case by Bojanczyk, Segoufin, and Straubing [Log. Meth. in Comput. Sci., 8(3:26), 2012].

Cite as

Jean Goubault-Larrecq and Sylvain Schmitz. Deciding Piecewise Testable Separability for Regular Tree Languages. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 97:1-97:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{goubaultlarrecq_et_al:LIPIcs.ICALP.2016.97,
  author =	{Goubault-Larrecq, Jean and Schmitz, Sylvain},
  title =	{{Deciding Piecewise Testable Separability for Regular Tree Languages}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{97:1--97:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.97},
  URN =		{urn:nbn:de:0030-drops-62321},
  doi =		{10.4230/LIPIcs.ICALP.2016.97},
  annote =	{Keywords: Well-quasi-order, ideal, tree languages, first-order logic}
}
Document
Computation Tree Logic for Synchronization Properties

Authors: Krishnendu Chatterjee and Laurent Doyen


Abstract
We present a logic that extends CTL (Computation Tree Logic) with operators that express synchronization properties. A property is synchronized in a system if it holds in all paths of a certain length. The new logic is obtained by using the same path quantifiers and temporal operators as in CTL, but allowing a different order of the quantifiers. This small syntactic variation induces a logic that can express non-regular properties for which known extensions of MSO with equality of path length are undecidable. We show that our variant of CTL is decidable and that the model-checking problem is in Delta_3^P = P^{NP^{NP}}, and is hard for the class of problems solvable in polynomial time using parallel access to an NP oracle. We analogously consider quantifier exchange in extensions of CTL, and we present operators defined using basic operators of CTL* that express the occurrence of infinitely many synchronization points. We show that the model-checking problem remains in Delta_3^P. The distinguishing power of CTL and of our new logic coincide if the Next operator is allowed in the logics, thus the classical bisimulation quotient can be used for state-space reduction before model checking.

Cite as

Krishnendu Chatterjee and Laurent Doyen. Computation Tree Logic for Synchronization Properties. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 98:1-98:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.ICALP.2016.98,
  author =	{Chatterjee, Krishnendu and Doyen, Laurent},
  title =	{{Computation Tree Logic for Synchronization Properties}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{98:1--98:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.98},
  URN =		{urn:nbn:de:0030-drops-62334},
  doi =		{10.4230/LIPIcs.ICALP.2016.98},
  annote =	{Keywords: Computation Tree Logic, Synchronization, model-checking, complexity}
}
Document
Deciding the Topological Complexity of Büchi Languages

Authors: Michal Skrzypczak and Igor Walukiewicz


Abstract
We study the topological complexity of languages of Büchi automata on infinite binary trees. We show that such a language is either Borel and WMSO-definable, or Sigma_1^1-complete and not WMSO-definable; moreover it can be algorithmically decided which of the two cases holds. The proof relies on a direct reduction to deciding the winner in a finite game with a regular winning condition.

Cite as

Michal Skrzypczak and Igor Walukiewicz. Deciding the Topological Complexity of Büchi Languages. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 99:1-99:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{skrzypczak_et_al:LIPIcs.ICALP.2016.99,
  author =	{Skrzypczak, Michal and Walukiewicz, Igor},
  title =	{{Deciding the Topological Complexity of B\"{u}chi Languages}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{99:1--99:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.99},
  URN =		{urn:nbn:de:0030-drops-62346},
  doi =		{10.4230/LIPIcs.ICALP.2016.99},
  annote =	{Keywords: tree automata, non-determinism, Borel sets, topological complexity, decidability}
}
Document
On the Skolem Problem for Continuous Linear Dynamical Systems

Authors: Ventsislav Chonev, Joël Ouaknine, and James Worrell


Abstract
The Continuous Skolem Problem asks whether a real-valued function satisfying a linear differential equation has a zero in a given interval of real numbers. This is a fundamental reachability problem for continuous linear dynamical systems, such as linear hybrid automata and continuous-time Markov chains. Decidability of the problem is currently open; indeed, decidability is open even for the sub-problem in which a zero is sought in a bounded interval. In this paper we show decidability of the bounded problem subject to Schanuel's Conjecture, a unifying conjecture in transcendental number theory. We furthermore analyse the unbounded problem in terms of the frequencies of the differential equation, that is, the imaginary parts of the characteristic roots. We show that the unbounded problem can be reduced to the bounded problem if there is at most one rationally linearly independent frequency, or if there are two rationally linearly independent frequencies and all characteristic roots are simple. We complete the picture by showing that decidability of the unbounded problem in the case of two (or more) rationally linearly independent frequencies would entail a major new effectiveness result in Diophantine approximation, namely computability of the Diophantine-approximation types of all real algebraic numbers.
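
A numeric heuristic makes the problem concrete, though it decides nothing; sampling can miss tangential zeros, which is precisely why exact decidability is delicate and is what the paper addresses. The sketch below searches a bounded interval for a sign change of an exponential polynomial (here f(t) = e^{-t} cos(5t) + 0.1, a solution of a third-order linear ODE with characteristic roots 0 and -1 +/- 5i; the instance is made up for illustration).

# Heuristic zero search on a bounded interval by dense sampling.
import math

def f(t):
    return math.exp(-t) * math.cos(5 * t) + 0.1

a, b, steps = 0.0, 3.0, 10_000
ts = [a + (b - a) * i / steps for i in range(steps + 1)]
for t0, t1 in zip(ts, ts[1:]):
    if f(t0) == 0 or f(t0) * f(t1) < 0:
        print(f"sign change in [{t0:.4f}, {t1:.4f}] -> f has a zero there")
        break
else:
    print("no sign change found (inconclusive: a zero may still exist)")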

Cite as

Ventsislav Chonev, Joël Ouaknine, and James Worrell. On the Skolem Problem for Continuous Linear Dynamical Systems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 100:1-100:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chonev_et_al:LIPIcs.ICALP.2016.100,
  author =	{Chonev, Ventsislav and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{On the Skolem Problem for Continuous Linear Dynamical Systems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{100:1--100:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.100},
  URN =		{urn:nbn:de:0030-drops-62357},
  doi =		{10.4230/LIPIcs.ICALP.2016.100},
  annote =	{Keywords: differential equations, reachability, Baker’s Theorem, Schanuel’s Conjecture, semi-algebraic sets}
}
Document
Analysing Decisive Stochastic Processes

Authors: Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye, and Pierre Carlier


Abstract
In 2007, Abdulla et al. introduced the elegant concept of decisive Markov chains. Intuitively, decisiveness allows one to lift the good properties of finite Markov chains to infinite Markov chains. For instance, the approximate quantitative reachability problem can be solved for decisive Markov chains (enjoying reasonable effectiveness assumptions), including probabilistic lossy channel systems and probabilistic vector addition systems with states. In this paper, we extend the concept of decisiveness to more general stochastic processes. This extension is non-trivial, as we consider stochastic processes with a potentially continuous set of states and uncountable branching (common features of real-time stochastic processes). This allows us to obtain decidability results for both qualitative and quantitative verification problems on some classes of real-time stochastic processes, including generalized semi-Markov processes and stochastic timed automata.
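
A minimal sketch of the kind of step-wise approximation scheme that decisiveness enables, run here on a toy finite Markov chain (finite chains are trivially decisive); the chain, target and tolerance are hypothetical, and the code follows the general unfolding idea rather than any construction specific to this paper.

# gambler's ruin on {0,...,5} with absorbing boundaries
P = {i: {i - 1: 0.5, i + 1: 0.5} for i in range(1, 5)}
P[0], P[5] = {0: 1.0}, {5: 1.0}
F, DEAD = {5}, {0}                     # target, and states from which F is unreachable

def approx_reach(start, eps=1e-6):
    dist = {start: 1.0}                # probability mass on not-yet-settled states
    p_reach = p_dead = 0.0
    while 1.0 - p_reach - p_dead > eps:    # decisiveness drives this gap to 0
        nxt = {}
        for s, m in dist.items():
            for t, pr in P[s].items():
                nxt[t] = nxt.get(t, 0.0) + m * pr
        dist = {}
        for s, m in nxt.items():
            if s in F:
                p_reach += m
            elif s in DEAD:
                p_dead += m
            else:
                dist[s] = m
    return p_reach                     # the true probability lies in [p_reach, p_reach + eps]

print(approx_reach(2))                 # tends to 2/5 for this walk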

Cite as

Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye, and Pierre Carlier. Analysing Decisive Stochastic Processes. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 101:1-101:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bertrand_et_al:LIPIcs.ICALP.2016.101,
  author =	{Bertrand, Nathalie and Bouyer, Patricia and Brihaye, Thomas and Carlier, Pierre},
  title =	{{Analysing Decisive Stochastic Processes}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{101:1--101:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.101},
  URN =		{urn:nbn:de:0030-drops-62362},
  doi =		{10.4230/LIPIcs.ICALP.2016.101},
  annote =	{Keywords: Real-time stochastic processes, Decisiveness, Approximation Scheme}
}
Document
Composition of Stochastic Transition Systems Based on Spans and Couplings

Authors: Daniel Gburek, Christel Baier, and Sascha Klüppelholz


Abstract
Conventional approaches for parallel composition of stochastic systems relate probability measures of the individual components in terms of product measures. Such approaches rely on the assumption that components interact in a stochastically independent manner, which might be too rigid for modeling real-world systems. In this paper, we introduce a parallel-composition operator for stochastic transition systems that is based on couplings of probability measures and does not impose any stochastic assumptions. When composing systems within our framework, the intended dependencies between components can be determined by providing so-called spans and span couplings. We present a congruence result for our operator with respect to a standard notion of bisimilarity and develop a general theory for spans, exploiting deep results from descriptive set theory. As an application of our general approach, we propose a model for stochastic hybrid systems called stochastic hybrid motion automata.

Cite as

Daniel Gburek, Christel Baier, and Sascha Klüppelholz. Composition of Stochastic Transition Systems Based on Spans and Couplings. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 102:1-102:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{gburek_et_al:LIPIcs.ICALP.2016.102,
  author =	{Gburek, Daniel and Baier, Christel and Kl\"{u}ppelholz, Sascha},
  title =	{{Composition of Stochastic Transition Systems Based on Spans and Couplings}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{102:1--102:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.102},
  URN =		{urn:nbn:de:0030-drops-62379},
  doi =		{10.4230/LIPIcs.ICALP.2016.102},
  annote =	{Keywords: Stochastic Transition System, Composition, Stochastic Hybrid Motion Automata, Stochastically Independent, Coupling, Span, Bisimulation, Congruence, Po}
}
Document
On Restricted Nonnegative Matrix Factorization

Authors: Dmitry Chistikov, Stefan Kiefer, Ines Marusic, Mahsa Shirmohammadi, and James Worrell


Abstract
Nonnegative matrix factorization (NMF) is the problem of decomposing a given nonnegative n*m matrix M into a product of a nonnegative n*d matrix W and a nonnegative d*m matrix H. Restricted NMF requires in addition that the column spaces of M and W coincide. Finding the minimal inner dimension d is known to be NP-hard, both for NMF and restricted NMF. We show that restricted NMF is closely related to a question about the nature of minimal probabilistic automata, posed by Paz in his seminal 1971 textbook. We use this connection to answer Paz's question negatively, thus falsifying a positive answer claimed in 1974. Furthermore, we investigate whether a rational matrix M always has a restricted NMF of minimal inner dimension whose factors W and H are also rational. We show that this holds for matrices M of rank at most 3 and we exhibit a rank-4 matrix for which W and H require irrational entries.
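
A toy instance with made-up numbers showing what the restricted condition means: beyond nonnegativity of W and H, the column space of M must coincide with that of W, which the snippet checks via matrix ranks.

import numpy as np

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
H = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
M = W @ H                              # a nonnegative 3x3 matrix of rank 2

assert (M >= 0).all() and (W >= 0).all() and (H >= 0).all()
rank = np.linalg.matrix_rank
# col(M) is always contained in col(W) when M = W H; the column spaces
# coincide exactly when both have the same dimension as their sum:
assert rank(M) == rank(W) == rank(np.hstack([M, W])) == 2
print("restricted NMF with inner dimension", W.shape[1])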

Cite as

Dmitry Chistikov, Stefan Kiefer, Ines Marusic, Mahsa Shirmohammadi, and James Worrell. On Restricted Nonnegative Matrix Factorization. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 103:1-103:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chistikov_et_al:LIPIcs.ICALP.2016.103,
  author =	{Chistikov, Dmitry and Kiefer, Stefan and Marusic, Ines and Shirmohammadi, Mahsa and Worrell, James},
  title =	{{On Restricted Nonnegative Matrix Factorization}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{103:1--103:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.103},
  URN =		{urn:nbn:de:0030-drops-62389},
  doi =		{10.4230/LIPIcs.ICALP.2016.103},
  annote =	{Keywords: nonnegative matrix factorization, nonnegative rank, probabilistic automata, labelled Markov chains, minimization}
}
Document
Proving the Herman-Protocol Conjecture

Authors: Maria Bruna, Radu Grigore, Stefan Kiefer, Joël Ouaknine, and James Worrell


Abstract
Herman's self-stabilization algorithm, introduced 25 years ago, is a well-studied synchronous randomized protocol for enabling a ring of N processes collectively holding any odd number of tokens to reach a stable state in which a single token remains. Determining the worst-case expected time to stabilization is the central outstanding open problem about this protocol. It is known that there is a constant h such that any initial configuration has expected stabilization time at most hN^2. Ten years ago, McIver and Morgan established a lower bound of 4/27 ≈ 0.148 for h, achieved with three equally-spaced tokens, and conjectured this to be the optimal value of h. A series of papers over the last decade gradually reduced the upper bound on h, with the present record (achieved in 2014) standing at approximately 0.156. In this paper, we prove McIver and Morgan's conjecture and establish that h = 4/27 is indeed optimal.
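
A Monte-Carlo sketch of the protocol itself, illustrating the quantity being bounded (this simulation plays no role in the proof; ring size and sample count are arbitrary): each token independently moves one step clockwise with probability 1/2 per synchronous round, and tokens colliding on a process annihilate in pairs.

import random

def stabilization_time(n, tokens):
    tokens, steps = set(tokens), 0
    while len(tokens) > 1:
        counts = {}
        for p in tokens:                       # each token moves with probability 1/2
            q = (p + 1) % n if random.random() < 0.5 else p
            counts[q] = counts.get(q, 0) + 1
        tokens = {p for p, c in counts.items() if c % 2 == 1}   # pairs annihilate
        steps += 1
    return steps

n = 27
start = [0, n // 3, 2 * n // 3]                # three equally spaced tokens
avg = sum(stabilization_time(n, start) for _ in range(2000)) / 2000
print(avg, "vs (4/27) * n^2 =", 4 / 27 * n * n)   # empirical mean approaches 108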

Cite as

Maria Bruna, Radu Grigore, Stefan Kiefer, Joël Ouaknine, and James Worrell. Proving the Herman-Protocol Conjecture. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 104:1-104:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bruna_et_al:LIPIcs.ICALP.2016.104,
  author =	{Bruna, Maria and Grigore, Radu and Kiefer, Stefan and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{Proving the Herman-Protocol Conjecture}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{104:1--104:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.104},
  URN =		{urn:nbn:de:0030-drops-62393},
  doi =		{10.4230/LIPIcs.ICALP.2016.104},
  annote =	{Keywords: randomized protocols, self-stabilization, Lyapunov function, expected time}
}
Document
A Polynomial-Time Algorithm for Reachability in Branching VASS in Dimension One

Authors: Stefan Göller, Christoph Haase, Ranko Lazic, and Patrick Totzke


Abstract
Branching VASS (BVASS) generalise vector addition systems with states by allowing for special branching transitions that can non-deterministically distribute a counter value between two control states. A run of a BVASS consequently becomes a tree, and reachability is to decide whether a given configuration is the root of a reachability tree. This paper shows P-completeness of reachability in BVASS in dimension one; this is the first known decidability result for reachability in a subclass of BVASS. Moreover, we show that coverability and boundedness in BVASS in dimension one are P-complete as well.

Cite as

Stefan Göller, Christoph Haase, Ranko Lazic, and Patrick Totzke. A Polynomial-Time Algorithm for Reachability in Branching VASS in Dimension One. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 105:1-105:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{goller_et_al:LIPIcs.ICALP.2016.105,
  author =	{G\"{o}ller, Stefan and Haase, Christoph and Lazic, Ranko and Totzke, Patrick},
  title =	{{A Polynomial-Time Algorithm for Reachability in Branching VASS in Dimension One}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{105:1--105:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.105},
  URN =		{urn:nbn:de:0030-drops-62409},
  doi =		{10.4230/LIPIcs.ICALP.2016.105},
  annote =	{Keywords: branching vector addition systems, reachability, coverability, boundedness}
}
Document
Reachability in Networks of Register Protocols under Stochastic Schedulers

Authors: Patricia Bouyer, Nicolas Markey, Mickael Randour, Arnaud Sangnier, and Daniel Stan


Abstract
We study the almost-sure reachability problem in a distributed system obtained as the asynchronous composition of N copies (called processes) of the same automaton (called protocol), that can communicate via a shared register with finite domain. The automaton has two types of transitions: write-transitions update the value of the register, while read-transitions move to a new state depending on the content of the register. Non-determinism is resolved by a stochastic scheduler. Given a protocol, we focus on almost-sure reachability of a target state by one of the processes. The answer to this problem naturally depends on the number N of processes. However, we prove that our setting has a cut-off property: the answer to the almost-sure reachability problem is constant when N is large enough; we then develop an EXPSPACE algorithm deciding whether this constant answer is positive or negative.

Cite as

Patricia Bouyer, Nicolas Markey, Mickael Randour, Arnaud Sangnier, and Daniel Stan. Reachability in Networks of Register Protocols under Stochastic Schedulers. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 106:1-106:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bouyer_et_al:LIPIcs.ICALP.2016.106,
  author =	{Bouyer, Patricia and Markey, Nicolas and Randour, Mickael and Sangnier, Arnaud and Stan, Daniel},
  title =	{{Reachability in Networks of Register Protocols under Stochastic Schedulers}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{106:1--106:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.106},
  URN =		{urn:nbn:de:0030-drops-62416},
  doi =		{10.4230/LIPIcs.ICALP.2016.106},
  annote =	{Keywords: Networks of Processes, Parametrized Systems, Stochastic Scheduler, Almost-sure Reachability, Cut-Off Property}
}
Document
A Program Logic for Union Bounds

Authors: Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub


Abstract
We propose a probabilistic Hoare logic aHL based on the union bound, a tool from basic probability theory. While the union bound is simple, it is an extremely common tool for analyzing randomized algorithms. In formal verification terms, the union bound allows flexible and compositional reasoning over possible ways an algorithm may go wrong. It also enables a clean separation between reasoning about probabilities and reasoning about events, which are expressed as standard first-order formulas in our logic. Notably, assertions in our logic are non-probabilistic, even though we can conclude probabilistic facts from the judgments. Our logic can also prove accuracy properties for interactive programs, where the program must produce intermediate outputs as soon as pieces of the input arrive, rather than accessing the entire input at once. This setting also enables adaptivity, where later inputs may depend on earlier intermediate outputs. We show how to prove accuracy for several examples from the differential privacy literature, both interactive and non-interactive.
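
A numeric illustration of the union-bound reasoning the logic internalizes, on the Laplace-accuracy example typical of the differential-privacy literature (k, epsilon and beta are hypothetical parameters): each of k noisy answers deviates by more than t = ln(1/beta)/epsilon with probability beta, so the probability that any answer is bad is at most k*beta.

import math, random

k, eps, beta = 10, 1.0, 0.01
t = math.log(1 / beta) / eps     # each Laplace(1/eps) noise exceeds t with probability beta

def some_query_bad():
    # a Laplace sample is an exponential with rate eps carrying a random sign
    noises = (random.expovariate(eps) * random.choice((-1, 1)) for _ in range(k))
    return any(abs(x) > t for x in noises)

runs = 20000
freq = sum(some_query_bad() for _ in range(runs)) / runs
print(f"Pr[some answer off by more than {t:.2f}] ~ {freq:.4f} <= {k * beta}")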

Cite as

Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub. A Program Logic for Union Bounds. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 107:1-107:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{barthe_et_al:LIPIcs.ICALP.2016.107,
  author =	{Barthe, Gilles and Gaboardi, Marco and Gr\'{e}goire, Benjamin and Hsu, Justin and Strub, Pierre-Yves},
  title =	{{A Program Logic for Union Bounds}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{107:1--107:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.107},
  URN =		{urn:nbn:de:0030-drops-62425},
  doi =		{10.4230/LIPIcs.ICALP.2016.107},
  annote =	{Keywords: Probabilistic Algorithms, Accuracy, Formal Verification, Hoare Logic, Union Bound}
}
Document
The Decidable Properties of Subrecursive Functions

Authors: Mathieu Hoyrup


Abstract
What can be decided or semidecided about a primitive recursive function, given a definition of that function by primitive recursion? What about subrecursive classes other than primitive recursive functions? We provide a complete and explicit characterization of the decidable and semidecidable properties. This characterization uses a variant of Kolmogorov complexity where only programs in a subrecursive programming language are allowed. More precisely, we prove that all the decidable and semidecidable properties can be obtained as combinations of two classes of basic decidable properties: (i) the function takes some particular values on a finite set of inputs, and (ii) every finite part of the function can be compressed to some extent.

Cite as

Mathieu Hoyrup. The Decidable Properties of Subrecursive Functions. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 108:1-108:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{hoyrup:LIPIcs.ICALP.2016.108,
  author =	{Hoyrup, Mathieu},
  title =	{{The Decidable Properties of Subrecursive Functions}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{108:1--108:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.108},
  URN =		{urn:nbn:de:0030-drops-62435},
  doi =		{10.4230/LIPIcs.ICALP.2016.108},
  annote =	{Keywords: Rice theorem, subrecursive class, decidable property, Kolmogorov complexity, compressibility}
}
Document
Polynomial Time Corresponds to Solutions of Polynomial Ordinary Differential Equations of Polynomial Length: The General Purpose Analog Computer and Computable Analysis Are Two Efficiently Equivalent Models of Computations

Authors: Olivier Bournez, Daniel S. Graça, and Amaury Pouly


Abstract
The outcomes of this paper are twofold.

Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class P of languages computable in polynomial time in terms of differential equations with polynomial right-hand side. This result gives a purely continuous (time and space), elegant and simple characterization of P. We believe it is the first time such classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis. Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like P, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations.

Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog models of computation: as a side effect, we get that the 1941 General Purpose Analog Computer (GPAC) of Claude Shannon is provably equivalent to Turing machines both at the computability and complexity level, a fact that has never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both at a computability and at a computational complexity level.
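
A toy polynomial initial value problem of the kind a GPAC generates, with a crude Euler integrator that also accumulates the length of the solution curve, since length rather than elapsed time is the resource the characterization charges for; the system and step size below are hypothetical.

import math

def simulate(T=math.pi, h=1e-5):
    y1, y2, length = 0.0, 1.0, 0.0        # y1' = y2, y2' = -y1, y(0) = (0, 1)
    for _ in range(int(T / h)):
        d1, d2 = y2, -y1                  # polynomial right-hand side
        length += h * math.hypot(d1, d2)  # accumulate length of the solution curve
        y1, y2 = y1 + h * d1, y2 + h * d2
    return y1, y2, length

y1, y2, L = simulate()
print(f"y(pi) ~ ({y1:.3f}, {y2:.3f}), curve length ~ {L:.3f}")   # ~ (0, -1) and pi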

Cite as

Olivier Bournez, Daniel S. Graça, and Amaury Pouly. Polynomial Time Corresponds to Solutions of Polynomial Ordinary Differential Equations of Polynomial Length: The General Purpose Analog Computer and Computable Analysis Are Two Efficiently Equivalent Models of Computations. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 109:1-109:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bournez_et_al:LIPIcs.ICALP.2016.109,
  author =	{Bournez, Olivier and Gra\c{c}a, Daniel S. and Pouly, Amaury},
  title =	{{Polynomial Time Corresponds to Solutions of Polynomial Ordinary Differential Equations of Polynomial Length: The General Purpose Analog Computer and Computable Analysis Are Two Efficiently Equivalent Models of Computations}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{109:1--109:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.109},
  URN =		{urn:nbn:de:0030-drops-62445},
  doi =		{10.4230/LIPIcs.ICALP.2016.109},
  annote =	{Keywords: Analog Models of Computation, Continuous-Time Models of Computation, Computable Analysis, Implicit Complexity, Computational Complexity, Ordinary Diff}
}
Document
Algorithmic Complexity for the Realization of an Effective Subshift By a Sofic

Authors: Mathieu Sablik and Michael Schraudner


Abstract
Realization of d-dimensional effective subshifts as projective sub-actions of (d+d')-dimensional sofic subshifts for d' >= 1 is now well known [Hochman, 2009; Durand/Romashchenko/Shen, 2012; Aubrun/Sablik, 2013]. In this paper we are interested in qualitative aspects of this realization. We introduce a new topological conjugacy invariant for effective subshifts, the speed of convergence, with a view to exhibiting algorithmic properties of these subshifts, in contrast to the usual framework that focuses on undecidable properties.

Cite as

Mathieu Sablik and Michael Schraudner. Algorithmic Complexity for the Realization of an Effective Subshift By a Sofic. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 110:1-110:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{sablik_et_al:LIPIcs.ICALP.2016.110,
  author =	{Sablik, Mathieu and Schraudner, Michael},
  title =	{{Algorithmic Complexity for the Realization of an Effective Subshift By a Sofic}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{110:1--110:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.110},
  URN =		{urn:nbn:de:0030-drops-62454},
  doi =		{10.4230/LIPIcs.ICALP.2016.110},
  annote =	{Keywords: Subshift, computability, time complexity, space complexity, tilings}
}
Document
On Word and Frontier Languages of Unsafe Higher-Order Grammars

Authors: Kazuyuki Asada and Naoki Kobayashi


Abstract
Higher-order grammars are an extension of regular and context-free grammars, where nonterminals may take parameters. They were extensively studied in the 1980s, and have been restudied recently in the context of model checking and program verification. We show that the class of unsafe order-(n+1) word languages coincides with the class of frontier languages of unsafe order-n tree languages. We use intersection types for transforming an order-(n+1) word grammar to a corresponding order-n tree grammar. The result has been proved for safe languages by Damm in 1982, but it has been open for unsafe languages, to our knowledge. Various known results on higher-order grammars can be obtained as almost immediate corollaries of our result.
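
The result relates word languages to frontier (yield) languages of tree languages one order lower. A hypothetical mini-example at the bottom of the hierarchy: a regular tree grammar S -> f(a, S, b) | e whose trees have the non-regular frontier language a^n b^n.

def frontier(t):
    # the frontier (yield) of a tree: its leaf labels read left to right
    if isinstance(t, str):
        return t
    return "".join(frontier(c) for c in t)

def trees(depth):
    # derivations of the regular tree grammar S -> f(a, S, b) | e,
    # with the leaf e read as the empty word
    yield ""
    if depth > 0:
        for t in trees(depth - 1):
            yield ("a", t, "b")

print(sorted((frontier(t) for t in trees(3)), key=len))
# ['', 'ab', 'aabb', 'aaabbb']: frontiers form the word language a^n b^n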

Cite as

Kazuyuki Asada and Naoki Kobayashi. On Word and Frontier Languages of Unsafe Higher-Order Grammars. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 111:1-111:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{asada_et_al:LIPIcs.ICALP.2016.111,
  author =	{Asada, Kazuyuki and Kobayashi, Naoki},
  title =	{{On Word and Frontier Languages of Unsafe Higher-Order Grammars}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{111:1--111:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.111},
  URN =		{urn:nbn:de:0030-drops-62469},
  doi =		{10.4230/LIPIcs.ICALP.2016.111},
  annote =	{Keywords: intersection types, higher-order grammars}
}
Document
The Schützenberger Product for Syntactic Spaces

Authors: Mai Gehrke, Daniela Petrisan, and Luca Reggio


Abstract
Starting from Boolean algebras of languages closed under quotients and using duality-theoretic insights, we derive the notion of Boolean spaces with internal monoids as recognisers for arbitrary formal languages of finite words over finite alphabets. This leads to recognisers and syntactic spaces in a setting that is well-suited for applying tools from Stone duality as applied in semantics. The main focus of the paper is the development of topo-algebraic constructions pertinent to the treatment of languages given by logic formulas. In particular, using the standard semantic view of quantification as projection, we derive a notion of Schützenberger product for Boolean spaces with internal monoids. This makes heavy use of the Vietoris construction (and its dual functor), which is central to the coalgebraic treatment of classical modal logic. We show that the unary Schützenberger product for spaces yields a recogniser for the language of all models of the formula EXISTS x.phi(x), when applied to a recogniser for the language of all models of phi(x). Further, we generalise global and local versions of the theorems of Schützenberger and Reutenauer characterising the languages recognised by the binary Schützenberger product. Finally, we provide an equational characterisation of Boolean algebras obtained by local Schützenberger product with the one element space based on an Egli-Milner type condition on generalised factorisations of ultrafilters on words.

Cite as

Mai Gehrke, Daniela Petrisan, and Luca Reggio. The Schützenberger Product for Syntactic Spaces. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 112:1-112:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{gehrke_et_al:LIPIcs.ICALP.2016.112,
  author =	{Gehrke, Mai and Petrisan, Daniela and Reggio, Luca},
  title =	{{The Sch\"{u}tzenberger Product for Syntactic Spaces}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{112:1--112:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.112},
  URN =		{urn:nbn:de:0030-drops-62474},
  doi =		{10.4230/LIPIcs.ICALP.2016.112},
  annote =	{Keywords: Stone duality and Stone-Cech compactification, Vietoris hyperspace construction, logic on words, algebraic language theory beyond the regular setting}
}
Document
Logic of Local Inference for Contextuality in Quantum Physics and Beyond

Authors: Kohei Kishida


Abstract
Contextuality in quantum physics provides a key resource for quantum information and computation. The topological approach in [Abramsky and Brandenburger, New J. Phys., 2011; Abramsky et al., CSL 2015] characterizes contextuality as "global inconsistency" coupled with "local consistency", revealing it to be a phenomenon also found in many other fields. This has yielded a logical method of detecting and proving the "global inconsistency" part of contextuality. Our goal is to capture the other, "local consistency" part, which requires a novel approach to logic that is sensitive to the topology of contexts. To achieve this, we formulate a logic of local inference by using context-sensitive theories and models in regular categories. This provides a uniform framework for local consistency, and lays a foundation for high-level methods of detecting, proving, and moreover using contextuality as a computational resource.

Cite as

Kohei Kishida. Logic of Local Inference for Contextuality in Quantum Physics and Beyond. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 113:1-113:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kishida:LIPIcs.ICALP.2016.113,
  author =	{Kishida, Kohei},
  title =	{{Logic of Local Inference for Contextuality in Quantum Physics and Beyond}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{113:1--113:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.113},
  URN =		{urn:nbn:de:0030-drops-62484},
  doi =		{10.4230/LIPIcs.ICALP.2016.113},
  annote =	{Keywords: Contextuality, quantum mechanics, regular category, regular logic, separated presheaf}
}
Document
Minimizing Resources of Sweeping and Streaming String Transducers

Authors: Félix Baschenis, Olivier Gauwin, Anca Muscholl, and Gabriele Puppis


Abstract
We consider minimization problems for natural parameters of word transducers: the number of passes performed by two-way transducers and the number of registers used by streaming transducers. We show how to compute in ExpSpace the minimum number of passes needed to implement a transduction given as a sweeping transducer, and we provide effective constructions of transducers of (worst-case optimal) doubly exponential size. We then consider streaming transducers where concatenations of registers are forbidden in the register updates. Based on a correspondence between the number of passes of sweeping transducers and the number of registers of equivalent concatenation-free streaming transducers, we derive a minimization procedure for the number of registers of concatenation-free streaming transducers.
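
A toy one-register streaming transducer (hypothetical example) to make the register model concrete: reversal is computed by the single update r := a . r, which never concatenates two registers and is therefore concatenation-free in the above sense; the same transduction is naturally realized by a sweeping transducer that emits its output on a right-to-left pass, hinting at the passes-versus-registers correspondence.

def reverse_sst(word):
    r = ""                 # the single register
    for a in word:
        r = a + r          # concatenation-free update: letter . register
    return r               # output function: the register content

print(reverse_sst("icalp"))  # -> 'placi'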

Cite as

Félix Baschenis, Olivier Gauwin, Anca Muscholl, and Gabriele Puppis. Minimizing Resources of Sweeping and Streaming String Transducers. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 114:1-114:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{baschenis_et_al:LIPIcs.ICALP.2016.114,
  author =	{Baschenis, F\'{e}lix and Gauwin, Olivier and Muscholl, Anca and Puppis, Gabriele},
  title =	{{Minimizing Resources of Sweeping and Streaming String Transducers}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{114:1--114:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.114},
  URN =		{urn:nbn:de:0030-drops-62496},
  doi =		{10.4230/LIPIcs.ICALP.2016.114},
  annote =	{Keywords: word transducers, streaming, 2-way, sweeping transducers, minimization}
}
Document
A Linear Acceleration Theorem for 2D Cellular Automata on All Complete Neighborhoods

Authors: Anaël Grandjean and Victor Poupet


Abstract
Linear acceleration theorems are known for most computational models. Although such results have been proved for two-dimensional cellular automata working on specific neighborhoods, no general construction was known. We present here a technique of linear acceleration for all two-dimensional languages recognized by cellular automata working on complete neighborhoods.

Cite as

Anaël Grandjean and Victor Poupet. A Linear Acceleration Theorem for 2D Cellular Automata on All Complete Neighborhoods. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 115:1-115:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{grandjean_et_al:LIPIcs.ICALP.2016.115,
  author =	{Grandjean, Ana\"{e}l and Poupet, Victor},
  title =	{{A Linear Acceleration Theorem for 2D Cellular Automata on All Complete Neighborhoods}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{115:1--115:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.115},
  URN =		{urn:nbn:de:0030-drops-62502},
  doi =		{10.4230/LIPIcs.ICALP.2016.115},
  annote =	{Keywords: 2D Cellular automata, linear acceleration, language recognition}
}
Document
New Interpretation and Generalization of the Kameda-Weiner Method

Authors: Hellis Tamm


Abstract
We present a reinterpretation of the Kameda-Weiner method of finding a minimal nondeterministic finite automaton (NFA) of a language, in terms of atoms of the language. We introduce a method to generate NFAs from a set of languages, and show that the Kameda-Weiner method is a special case of it. Our method provides a unified view of the construction of several known NFAs, including the canonical residual finite state automaton and the atomaton of the language.
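
A small exploration of the atoms underlying the reinterpretation (hypothetical example, using a bounded-length search): words are classified by which left quotients of the language contain them, and each distinct classification that occurs is an atom. The language here is (a+b)*b, given by its minimal DFA.

from itertools import product

# minimal DFA for L = (a+b)*b: state 1 iff the last letter read was b
delta = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}
states, accepting = (0, 1), {1}

def signature(w):
    # the set of states whose language (a left quotient of L) contains w
    sig = set()
    for q0 in states:
        q = q0
        for a in w:
            q = delta[(q, a)]
        if q in accepting:
            sig.add(q0)
    return frozenset(sig)

atoms = {}
for n in range(5):                      # bounded-length exploration of words
    for w in map("".join, product("ab", repeat=n)):
        atoms.setdefault(signature(w), w)
for sig, witness in sorted(atoms.items(), key=lambda kv: len(kv[1])):
    print(sorted(sig), "shortest witness:", repr(witness))
# three atoms: {1} <-> the empty word, {0,1} <-> words ending in b, {} <-> the rest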

Cite as

Hellis Tamm. New Interpretation and Generalization of the Kameda-Weiner Method. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 116:1-116:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{tamm:LIPIcs.ICALP.2016.116,
  author =	{Tamm, Hellis},
  title =	{{New Interpretation and Generalization of the Kameda-Weiner Method}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{116:1--116:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.116},
  URN =		{urn:nbn:de:0030-drops-62518},
  doi =		{10.4230/LIPIcs.ICALP.2016.116},
  annote =	{Keywords: Nondeterministic finite automata, NFA minimization, Kameda-Weiner method, atoms of regular languages}
}
Document
Nesting Depth of Operators in Graph Database Queries: Expressiveness vs. Evaluation Complexity

Authors: M. Praveen and B. Srivathsan


Abstract
Designing query languages for graph structured data is an active field of research, where expressiveness and efficient algorithms for query evaluation are conflicting goals. To better handle dynamically changing data, recent work has been done on designing query languages that can compare values stored in the graph database, without hard coding the values in the query. The main idea is to allow variables in the query and bind the variables to values when evaluating the query. For query languages that bind variables only once, query evaluation is usually NP-complete. There are query languages that allow binding inside the scope of Kleene star operators, which can themselves be in the scope of bindings and so on. Uncontrolled nesting of binding and iteration within one another results in query evaluation being PSPACE-complete. We define a way to syntactically control the nesting depth of iterated bindings, and study how this affects expressiveness and efficiency of query evaluation. The result is an infinite, syntactically defined hierarchy of expressions. We prove that the corresponding language hierarchy is strict. Given an expression in the hierarchy, we prove that it is undecidable to check if there is a language equivalent expression at lower levels. We prove that evaluating a query based on an expression at level i can be done in level i of the polynomial time hierarchy. Satisfiability of quantified Boolean formulas can be reduced to query evaluation; we study the relationship between alternations in Boolean quantifiers and the depth of nesting of iterated bindings.

Cite as

M. Praveen and B. Srivathsan. Nesting Depth of Operators in Graph Database Queries: Expressiveness vs. Evaluation Complexity. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 117:1-117:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{praveen_et_al:LIPIcs.ICALP.2016.117,
  author =	{Praveen, M. and Srivathsan, B.},
  title =	{{Nesting Depth of Operators in Graph Database Queries: Expressiveness vs. Evaluation Complexity}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{117:1--117:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.117},
  URN =		{urn:nbn:de:0030-drops-62520},
  doi =		{10.4230/LIPIcs.ICALP.2016.117},
  annote =	{Keywords: graphs with data, regular data path queries, expressiveness, query evaluation, complexity}
}
Document
A Hierarchy of Local Decision

Authors: Laurent Feuilloley, Pierre Fraigniaud, and Juho Hirvonen


Abstract
We extend the notion of distributed decision in the framework of distributed network computing, inspired by recent results on so-called distributed graph automata. We show that, by using distributed decision mechanisms based on the interaction between a prover and a disprover, the size of the certificates distributed to the nodes for certifying a given network property can be drastically reduced. For instance, we prove that minimum spanning tree can be certified with O(log(n))-bit certificates in n-node graphs, with just one interaction between the prover and the disprover, while it is known that certifying MST requires Omega(log^2(n))-bit certificates if only the prover can act. The improvement can even be exponential for some simple graph properties. For instance, it is known that certifying the existence of a nontrivial automorphism requires Omega(n^2) bits if only the prover can act. We show that there is a protocol with two interactions between the prover and the disprover that certifies nontrivial automorphism with O(log(n))-bit certificates. These results are achieved by defining and analysing a local hierarchy of decision which generalizes the classical notions of proof-labelling schemes and locally checkable proofs.
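
A minimal sketch of the baseline these results generalize, a proof-labelling scheme in which only the prover acts (toy instance with hypothetical data): parent pointers are certified to form a tree by O(log n)-bit distance labels, and each node checks only its own and its parent's label. On the cyclic pointer structure below, the constraints dist(1) = dist(2) + 1 and dist(2) = dist(1) + 1 are contradictory, so the bounded search finds no certificate that passes.

from itertools import product

def locally_accepts(parent, dist, root):
    # each node only checks its own certificate against its parent's
    for v in parent:
        if v == root:
            if parent[v] != v or dist[v] != 0:
                return False
        elif dist[v] != dist[parent[v]] + 1:   # distance must drop by exactly 1
            return False
    return True

tree = {0: 0, 1: 0, 2: 0, 3: 1}          # parent pointers forming a tree rooted at 0
print(locally_accepts(tree, {0: 0, 1: 1, 2: 1, 3: 2}, 0))        # True

cyclic = {0: 0, 1: 2, 2: 1, 3: 1}        # nodes 1 and 2 point at each other
print(any(locally_accepts(cyclic, dict(zip(cyclic, ds)), 0)
          for ds in product(range(4), repeat=4)))                # False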

Cite as

Laurent Feuilloley, Pierre Fraigniaud, and Juho Hirvonen. A Hierarchy of Local Decision. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 118:1-118:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{feuilloley_et_al:LIPIcs.ICALP.2016.118,
  author =	{Feuilloley, Laurent and Fraigniaud, Pierre and Hirvonen, Juho},
  title =	{{A Hierarchy of Local Decision}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{118:1--118:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.118},
  URN =		{urn:nbn:de:0030-drops-62536},
  doi =		{10.4230/LIPIcs.ICALP.2016.118},
  annote =	{Keywords: Distributed Network Computing, Distributed Algorithm, Distributed Decision, Locality}
}
Document
Constraint Satisfaction Problems for Reducts of Homogeneous Graphs

Authors: Manuel Bodirsky, Barnaby Martin, Michael Pinsker, and András Pongrácz


Abstract
For n >= 3, let (Hn, E) denote the n-th Henson graph, i.e., the unique countable homogeneous graph with exactly those finite graphs as induced subgraphs that do not embed the complete graph on n vertices. We show that for all structures Gamma with domain Hn whose relations are first-order definable in (Hn, E) the constraint satisfaction problem for Gamma is either in P or is NP-complete. We moreover show a similar complexity dichotomy for all structures whose relations are first-order definable in a homogeneous graph whose reflexive closure is an equivalence relation. Together with earlier results, in particular for the random graph, this completes the complexity classification of constraint satisfaction problems of structures first-order definable in countably infinite homogeneous graphs: all such problems are either in P or NP-complete.

Cite as

Manuel Bodirsky, Barnaby Martin, Michael Pinsker, and András Pongrácz. Constraint Satisfaction Problems for Reducts of Homogeneous Graphs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 119:1-119:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bodirsky_et_al:LIPIcs.ICALP.2016.119,
  author =	{Bodirsky, Manuel and Martin, Barnaby and Pinsker, Michael and Pongr\'{a}cz, Andr\'{a}s},
  title =	{{Constraint Satisfaction Problems for Reducts of Homogeneous Graphs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{119:1--119:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.119},
  URN =		{urn:nbn:de:0030-drops-62543},
  doi =		{10.4230/LIPIcs.ICALP.2016.119},
  annote =	{Keywords: Constraint Satisfaction, Homogeneous Graphs, Computational Complexity, Universal Algebra, Ramsey Theory}
}
Document
Sensitivity of Counting Queries

Authors: Myrto Arapinis, Diego Figueira, and Marco Gaboardi


Abstract
In the context of statistical databases, the release of accurate statistical information about the collected data often puts at risk the privacy of the individual contributors. The goal of differential privacy is to maximise the utility of a query while protecting the individual records in the database. A natural way to achieve differential privacy is to add statistical noise to the result of the query. In this context, a mechanism for releasing statistical information is thus a trade-off between utility and privacy. In order to balance these two "conflicting" requirements, privacy preserving mechanisms calibrate the added noise to the so-called sensitivity of the query, and thus a precise estimate of the sensitivity of the query is necessary to determine the amplitude of the noise to be added. In this paper, we initiate a systematic study of sensitivity of counting queries over relational databases. We first observe that the sensitivity of a Relational Algebra query with counting is not computable in general, and that while the sensitivity of Conjunctive Queries with counting is computable, it becomes unbounded as soon as the query includes a join. We then consider restricted classes of databases (databases with constraints), and study the problem of computing the sensitivity of a query given such constraints. We are able to establish bounds on the sensitivity of counting conjunctive queries over constrained databases. The kind of constraints studied here are: functional dependencies and cardinality dependencies. The latter is a natural generalisation of functional dependencies that allows us to provide tight bounds on the sensitivity of counting conjunctive queries.
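
A toy computation over made-up data contrasting the sensitivity of a plain count with that of a count over a join: removing one record changes the former by at most 1 but can change the latter by the size of the other relation, the unboundedness phenomenon that motivates the constrained classes studied here. The code measures the change at one fixed database, a local proxy for the global notion of sensitivity.

def count_R(db):
    return len(db["R"])

def count_join(db):
    # |{(x, y, z) : R(x, y) and S(y, z)}|
    return sum(1 for (x, y) in db["R"] for (y2, z) in db["S"] if y == y2)

def sensitivity_at(q, db):
    # largest change in q when a single record is removed from db
    base, worst = q(db), 0
    for rel, tuples in db.items():
        for i in range(len(tuples)):
            smaller = dict(db)
            smaller[rel] = tuples[:i] + tuples[i + 1:]
            worst = max(worst, abs(base - q(smaller)))
    return worst

db = {"R": [(i, "k") for i in range(100)], "S": [("k", j) for j in range(100)]}
print(sensitivity_at(count_R, db))       # 1
print(sensitivity_at(count_join, db))    # 100, and it grows with the database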

Cite as

Myrto Arapinis, Diego Figueira, and Marco Gaboardi. Sensitivity of Counting Queries. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 120:1-120:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{arapinis_et_al:LIPIcs.ICALP.2016.120,
  author =	{Arapinis, Myrto and Figueira, Diego and Gaboardi, Marco},
  title =	{{Sensitivity of Counting Queries}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{120:1--120:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.120},
  URN =		{urn:nbn:de:0030-drops-62552},
  doi =		{10.4230/LIPIcs.ICALP.2016.120},
  annote =	{Keywords: Differential privacy, sensitivity, relational algebra}
}
Document
The Complexity of Rational Synthesis

Authors: Rodica Condurache, Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin


Abstract
We study the computational complexity of the cooperative and non-cooperative rational synthesis problems, as introduced by Kupferman, Vardi and co-authors. We provide tight results for most of the classical omega-regular objectives, and show how to solve those problems optimally.

Cite as

Rodica Condurache, Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin. The Complexity of Rational Synthesis. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 121:1-121:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{condurache_et_al:LIPIcs.ICALP.2016.121,
  author =	{Condurache, Rodica and Filiot, Emmanuel and Gentilini, Raffaella and Raskin, Jean-Fran\c{c}ois},
  title =	{{The Complexity of Rational Synthesis}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{121:1--121:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.121},
  URN =		{urn:nbn:de:0030-drops-62565},
  doi =		{10.4230/LIPIcs.ICALP.2016.121},
  annote =	{Keywords: Non-zero sum games, reactive synthesis, omega-regular objectives}
}
Document
On the Complexity of Grammar-Based Compression over Fixed Alphabets

Authors: Katrin Casel, Henning Fernau, Serge Gaspers, Benjamin Gras, and Markus L. Schmid


Abstract
It is shown that the shortest-grammar problem remains NP-complete if the alphabet is fixed and has a size of at least 24 (which settles an open question). On the other hand, this problem can be solved in polynomial time if the number of nonterminals is bounded, which is shown by encoding the problem as a problem on graphs with interval structure. Furthermore, we present an O(3^n) exact exponential-time algorithm, based on dynamic programming. Similar results are also given for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side (thus investigating the impact of the "hierarchical depth" on the complexity of the shortest-grammar problem).
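
To make the objects concrete: a straight-line program for a string is a grammar deriving exactly that string, and its size is the total length of all right-hand sides. The sketch below builds one by greedily replacing the most frequent digram, in the spirit of Re-Pair; it illustrates the problem and is not one of the paper's algorithms.

from collections import Counter

def slp(text):
    seq, rules, fresh = list(text), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs or pairs.most_common(1)[0][1] < 2:
            break
        pair = pairs.most_common(1)[0][0]
        nt = f"N{fresh}"; fresh += 1
        rules[nt] = pair                      # new rule nt -> pair
        out, i = [], 0
        while i < len(seq):                   # replace non-overlapping occurrences
            if tuple(seq[i:i + 2]) == pair:
                out.append(nt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules

start, rules = slp("abababab")
size = len(start) + sum(len(rhs) for rhs in rules.values())
print(start, rules, "grammar size:", size)    # ['N1','N1'], N0 -> ab, N1 -> N0 N0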

Cite as

Katrin Casel, Henning Fernau, Serge Gaspers, Benjamin Gras, and Markus L. Schmid. On the Complexity of Grammar-Based Compression over Fixed Alphabets. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 122:1-122:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{casel_et_al:LIPIcs.ICALP.2016.122,
  author =	{Casel, Katrin and Fernau, Henning and Gaspers, Serge and Gras, Benjamin and Schmid, Markus L.},
  title =	{{On the Complexity of Grammar-Based Compression over Fixed Alphabets}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{122:1--122:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.122},
  URN =		{urn:nbn:de:0030-drops-62570},
  doi =		{10.4230/LIPIcs.ICALP.2016.122},
  annote =	{Keywords: Grammar-Based Compression, Straight-Line Programs, NP-Completeness, Exact Exponential Time Algorithms}
}
Document
The Complexity of Downward Closure Comparisons

Authors: Georg Zetzsche


Abstract
The downward closure of a language is the set of all (not necessarily contiguous) subwords of its members. It is well-known that the downward closure of every language is regular. Moreover, recent results show that downward closures are computable for quite powerful system models. One advantage of abstracting a language by its downward closure is that then equivalence and inclusion become decidable. In this work, we study the complexity of these two problems. More precisely, we consider the following decision problems: Given languages K and L from classes C and D, respectively, does the downward closure of K include (equal) that of L? These problems are investigated for finite automata, one-counter automata, context-free grammars, and reversal-bounded counter automata. For each combination, we prove a completeness result either for fixed or for arbitrary alphabets. Moreover, for Petri net languages, we show that both problems are Ackermann-hard and for higher-order pushdown automata of order k, we prove hardness for complements of nondeterministic k-fold exponential time.
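
For finite languages the definitions can be executed directly (hypothetical mini-instance): the downward closure is the set of all scattered subwords, and inclusion or equivalence of closures becomes a finite comparison. The hard cases studied in the paper concern infinite languages given by automata or grammars, where the closure is regular but not handed to us explicitly.

from itertools import combinations

def subwords(w):
    # all scattered (not necessarily contiguous) subwords of w
    return {"".join(c) for n in range(len(w) + 1) for c in combinations(w, n)}

def downward_closure(lang):
    return set().union(*(subwords(w) for w in lang))

K, L = {"abc", "ba"}, {"abc"}
print(downward_closure(L) <= downward_closure(K))   # inclusion holds: True
print(downward_closure(K) == downward_closure(L))   # equivalence fails: 'ba' is missing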

Cite as

Georg Zetzsche. The Complexity of Downward Closure Comparisons. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 123:1-123:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{zetzsche:LIPIcs.ICALP.2016.123,
  author =	{Zetzsche, Georg},
  title =	{{The Complexity of Downward Closure Comparisons}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{123:1--123:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.123},
  URN =		{urn:nbn:de:0030-drops-62589},
  doi =		{10.4230/LIPIcs.ICALP.2016.123},
  annote =	{Keywords: Downward closures, Complexity, Inclusion, Equivalence}
}
Document
Anti-Powers in Infinite Words

Authors: Gabriele Fici, Antonio Restivo, Manuel Silva, and Luca Q. Zamboni


Abstract
In combinatorics of words, a concatenation of k consecutive equal blocks is called a power of order k. In this paper we take a different point of view and define an anti-power of order k as a concatenation of k consecutive pairwise distinct blocks of the same length. As a main result, we show that every infinite word contains powers of any order or anti-powers of any order. That is, the existence of powers or anti-powers is an unavoidable regularity. Indeed, we prove a stronger result, which relates the density of anti-powers to the existence of a factor that occurs with arbitrary exponent. From these results, we derive that anti-powers of every order start at every position of an aperiodic uniformly recurrent word. We further show that any infinite word avoiding anti-powers of order 3 is ultimately periodic, and that there exist aperiodic words avoiding anti-powers of order 4. We also show that there exist aperiodic recurrent words avoiding anti-powers of order 6, and leave open the question whether there exist aperiodic recurrent words avoiding anti-powers of order k for k=4,5.
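
The definition is easy to operationalize on finite prefixes; here is a hedged Python sketch (names are ours, and the paper's results concern infinite words, which this brute-force check cannot decide).

def has_antipower_at(w, pos, k, m):
    """Does an anti-power of order k with block length m start at w[pos]?"""
    if pos + k * m > len(w):
        return False
    blocks = [w[pos + i * m : pos + (i + 1) * m] for i in range(k)]
    return len(set(blocks)) == k          # k pairwise distinct blocks

def has_antipower(w, k):
    """Search all positions and block lengths in a finite word w."""
    return any(has_antipower_at(w, p, k, m)
               for m in range(1, len(w) // k + 1)
               for p in range(len(w) - k * m + 1))

print(has_antipower("aabbab", 3))   # True: blocks 'aa', 'bb', 'ab'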

Cite as

Gabriele Fici, Antonio Restivo, Manuel Silva, and Luca Q. Zamboni. Anti-Powers in Infinite Words. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 124:1-124:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{fici_et_al:LIPIcs.ICALP.2016.124,
  author =	{Fici, Gabriele and Restivo, Antonio and Silva, Manuel and Zamboni, Luca Q.},
  title =	{{Anti-Powers in Infinite Words}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{124:1--124:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.124},
  URN =		{urn:nbn:de:0030-drops-62599},
  doi =		{10.4230/LIPIcs.ICALP.2016.124},
  annote =	{Keywords: infinite word, anti-power, unavoidable regularity, avoidability}
}
Document
On Equivalence and Uniformisation Problems for Finite Transducers

Authors: Emmanuel Filiot, Ismaël Jecker, Christof Löding, and Sarah Winter


Abstract
Transductions are binary relations of finite words. For rational transductions, i.e., transductions defined by finite transducers, the inclusion, equivalence and sequential uniformisation problems are known to be undecidable. In this paper, we investigate stronger variants of inclusion, equivalence and sequential uniformisation, based on a general notion of transducer resynchronisation, and show their decidability. We also investigate the classes of finite-valued rational transductions and deterministic rational transductions, which are known to have a decidable equivalence problem. We show that sequential uniformisation is also decidable for them.

Cite as

Emmanuel Filiot, Ismaël Jecker, Christof Löding, and Sarah Winter. On Equivalence and Uniformisation Problems for Finite Transducers. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 125:1-125:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{filiot_et_al:LIPIcs.ICALP.2016.125,
  author =	{Filiot, Emmanuel and Jecker, Isma\"{e}l and L\"{o}ding, Christof and Winter, Sarah},
  title =	{{On Equivalence and Uniformisation Problems for Finite Transducers}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{125:1--125:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.125},
  URN =		{urn:nbn:de:0030-drops-62605},
  doi =		{10.4230/LIPIcs.ICALP.2016.125},
  annote =	{Keywords: Transducers, Equivalence, Uniformisation}
}
Document
The Bridge Between Regular Cost Functions and Omega-Regular Languages

Authors: Thomas Colcombet and Nathanaël Fijalkow


Abstract
In this paper, we exhibit a one-to-one correspondence between omega-regular languages and a subclass of regular cost functions over finite words, called omega-regular like cost functions. This bridge between the two models allows one to readily import classical results such as the last appearance record or the McNaughton-Safra constructions to the realm of regular cost functions. In combination with game theoretic techniques, this also yields a simple description of an optimal procedure of history-determinisation for cost automata, a central result in the theory of regular cost functions.

Cite as

Thomas Colcombet and Nathanaël Fijalkow. The Bridge Between Regular Cost Functions and Omega-Regular Languages. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 126:1-126:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{colcombet_et_al:LIPIcs.ICALP.2016.126,
  author =	{Colcombet, Thomas and Fijalkow, Nathana\"{e}l},
  title =	{{The Bridge Between Regular Cost Functions and Omega-Regular Languages}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{126:1--126:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.126},
  URN =		{urn:nbn:de:0030-drops-62619},
  doi =		{10.4230/LIPIcs.ICALP.2016.126},
  annote =	{Keywords: Theory of Regular Cost Functions, Automata with Counters, Costautomata, Quantitative Extensions of Automata, Determinisation of Automata}
}
Document
Solutions of Word Equations Over Partially Commutative Structures

Authors: Volker Diekert, Artur Jez, and Manfred Kufleitner


Abstract
We give NSPACE(n*log(n)) algorithms solving the following decision problems. Satisfiability: Is the given equation over a free partially commutative monoid with involution (resp. a free partially commutative group) solvable? Finiteness: Are there only finitely many solutions of such an equation? PSPACE algorithms with worse complexities for the first problem are known, but so far, a PSPACE algorithm for the second problem had been out of reach. Our results are much stronger: Given such an equation, its solutions form an EDT0L language effectively representable in NSPACE(n*log(n)). In particular, we give an effective description of the set of all solutions for equations with constraints in free partially commutative monoids and groups.

Cite as

Volker Diekert, Artur Jez, and Manfred Kufleitner. Solutions of Word Equations Over Partially Commutative Structures. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 127:1-127:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{diekert_et_al:LIPIcs.ICALP.2016.127,
  author =	{Diekert, Volker and Jez, Artur and Kufleitner, Manfred},
  title =	{{Solutions of Word Equations Over Partially Commutative Structures}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{127:1--127:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.127},
  URN =		{urn:nbn:de:0030-drops-62624},
  doi =		{10.4230/LIPIcs.ICALP.2016.127},
  annote =	{Keywords: Word equations, EDT0L language, trace monoid, right-angled Artin group, partial commutation}
}
Document
The Taming of the Semi-Linear Set

Authors: Dmitry Chistikov and Christoph Haase


Abstract
Semi-linear sets, which are rational subsets of the monoid (Z^d,+), have numerous applications in theoretical computer science. Although semi-linear sets are usually given implicitly, by formulas in Presburger arithmetic or by other means, the effect of Boolean operations on semi-linear sets in terms of the size of description has primarily been studied for explicit representations. In this paper, we develop a framework suitable for implicitly presented semi-linear sets, in which the size of a semi-linear set is characterized by its norm—the maximal magnitude of a generator. We put together a toolbox of operations and decompositions for semi-linear sets which gives bounds in terms of the norm (as opposed to just the bit-size of the description), a unified presentation, and simplified proofs. This toolbox, in particular, provides exponentially better bounds for the complement and set-theoretic difference. We also obtain bounds on unambiguous decompositions and, as an application of the toolbox, settle the complexity of the equivalence problem for exponent-sensitive commutative grammars.
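
For concreteness, a linear set L(b; p_1,...,p_k) = {b + n_1*p_1 + ... + n_k*p_k : n_i >= 0}, and a semi-linear set is a finite union of linear sets. The sketch below (a toy enumeration under our own naming, not the paper's toolbox) makes the norm, i.e., the maximal magnitude of a generator, tangible.

from itertools import product

def linear_set(base, periods, bound):
    """Members of L(base; periods) with all coefficients n_i <= bound."""
    pts = set()
    for coeffs in product(range(bound + 1), repeat=len(periods)):
        x = tuple(base[j] + sum(c * p[j] for c, p in zip(coeffs, periods))
                  for j in range(len(base)))
        pts.add(x)
    return pts

def norm(base, periods):
    """Maximal absolute value occurring in the base or period vectors."""
    return max(abs(x) for v in [base] + periods for x in v)

pts = linear_set((1, 0), [(2, 1), (0, 3)], bound=2)
print(sorted(pts)[:5], "norm =", norm((1, 0), [(2, 1), (0, 3)]))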

Cite as

Dmitry Chistikov and Christoph Haase. The Taming of the Semi-Linear Set. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 128:1-128:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{chistikov_et_al:LIPIcs.ICALP.2016.128,
  author =	{Chistikov, Dmitry and Haase, Christoph},
  title =	{{The Taming of the Semi-Linear Set}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{128:1--128:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.128},
  URN =		{urn:nbn:de:0030-drops-62636},
  doi =		{10.4230/LIPIcs.ICALP.2016.128},
  annote =	{Keywords: semi-linear sets, convex polyhedra, triangulations, integer linear programming, commutative grammars}
}
Document
Characterizing Classes of Regular Languages Using Prefix Codes of Bounded Synchronization Delay

Authors: Volker Diekert and Tobias Walter


Abstract
In this paper we continue a classical work of Schützenberger on codes with bounded synchronization delay. He was interested in characterizing those regular languages where the groups in the syntactic monoid belong to a variety H. He allowed operations on the language side, namely union, intersection, concatenation, and a modified Kleene star involving a mapping of a prefix code of bounded synchronization delay to a group G in H, but no complementation. In our notation this leads to the language classes SD_G(A^{infinity}) and SD_H(A^{infinity}). Our main result shows that SD_H(A^{infinity}) always corresponds to the languages having syntactic monoids where all subgroups are in H. Schützenberger showed this only for varieties H that contain Abelian groups. Our method shows the general result for all H directly on finite and infinite words. Furthermore, we introduce the notion of local Rees extensions, which refers to a simple type of classical Rees extensions. We give a decomposition of a monoid in terms of its groups and local Rees extensions. This yields a decomposition that is somewhat similar to, but simpler than, the one in Rhodes' synthesis theorem. Moreover, we need only a singly exponential number of operations. Finally, our decomposition yields an answer to a question in a recent paper of Almeida and Klíma about varieties that are closed under Rees extensions.

Cite as

Volker Diekert and Tobias Walter. Characterizing Classes of Regular Languages Using Prefix Codes of Bounded Synchronization Delay. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 129:1-129:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{diekert_et_al:LIPIcs.ICALP.2016.129,
  author =	{Diekert, Volker and Walter, Tobias},
  title =	{{Characterizing Classes of Regular Languages Using Prefix Codes of Bounded Synchronization Delay}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{129:1--129:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.129},
  URN =		{urn:nbn:de:0030-drops-62642},
  doi =		{10.4230/LIPIcs.ICALP.2016.129},
  annote =	{Keywords: formal language, synchronization delay, variety, Rees extension}
}
Document
An Optimal Dual Fault Tolerant Reachability Oracle

Authors: Keerti Choudhary


Abstract
Let G=(V,E) be an n-vertex, m-edge directed graph. Let s in V be any designated source vertex. We address the problem of reporting the reachability information from s under two vertex failures. We show that it is possible to compute in polynomial time an O(n) size data structure that for any query vertex v, and any pair of failed vertices f_1, f_2, answers in O(1) time whether or not there exists a path from s to v in G\{f_1,f_2}. For the simpler case of a single vertex failure, such a data structure can be obtained using the dominator tree from the celebrated work of Lengauer and Tarjan [TOPLAS 1979, Vol. 1]. However, no efficient data structure was known in the past for handling more than one failure. In addition, we present a labeling scheme with O(log^3(n))-bit size labels such that for any f_1, f_2, v in V, it is possible to determine in poly-logarithmic time if v is reachable from s in G\{f_1,f_2} using only the labels of f_1, f_2 and v. Our data structure can also be seen as an efficient mechanism for verifying double-dominators: for any given x, y, v in V, we can determine in O(1) time if the pair (x,y) is a double-dominator of v. Previously, the best known method for this problem used dominator chains, from which the double-dominators of only a single vertex could be verified.
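
The trivial baseline that the O(n)-size oracle improves on is a fresh graph search per query, O(m) per query instead of O(1). A hedged Python sketch (our own naming) that pins down the query semantics:

from collections import deque

def reachable_avoiding(adj, s, v, f1, f2):
    """Is there an s -> v path in the directed graph adj avoiding f1 and f2?"""
    if s in (f1, f2) or v in (f1, f2):
        return False
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == v:
            return True
        for w in adj.get(u, ()):
            if w not in seen and w not in (f1, f2):
                seen.add(w)
                q.append(w)
    return False

adj = {"s": ["a", "b"], "a": ["v"], "b": ["v"]}
print(reachable_avoiding(adj, "s", "v", "a", "x"))   # True, via b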

Cite as

Keerti Choudhary. An Optimal Dual Fault Tolerant Reachability Oracle. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 130:1-130:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{choudhary:LIPIcs.ICALP.2016.130,
  author =	{Choudhary, Keerti},
  title =	{{An Optimal Dual Fault Tolerant Reachability Oracle}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{130:1--130:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.130},
  URN =		{urn:nbn:de:0030-drops-62659},
  doi =		{10.4230/LIPIcs.ICALP.2016.130},
  annote =	{Keywords: Fault tolerant, Directed graph, Reachability oracle, Labeling scheme}
}
Document
Graph Minors for Preserving Terminal Distances Approximately - Lower and Upper Bounds

Authors: Yun Kuen Cheung, Gramoz Goranci, and Monika Henzinger


Abstract
Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Omega(k^2) / Omega(k^{5/4}) / Omega(k^{6/5}) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help bring the lower bound down to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k^2) non-terminals, and any planar graph admits a minor with 1 + epsilon distortion and ~O((k/epsilon)^2) non-terminals.

Cite as

Yun Kuen Cheung, Gramoz Goranci, and Monika Henzinger. Graph Minors for Preserving Terminal Distances Approximately - Lower and Upper Bounds. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 131:1-131:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{cheung_et_al:LIPIcs.ICALP.2016.131,
  author =	{Cheung, Yun Kuen and Goranci, Gramoz and Henzinger, Monika},
  title =	{{Graph Minors for Preserving Terminal Distances Approximately - Lower and Upper Bounds}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{131:1--131:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.131},
  URN =		{urn:nbn:de:0030-drops-62675},
  doi =		{10.4230/LIPIcs.ICALP.2016.131},
  annote =	{Keywords: Distance Approximating Minor, Graph Minor, Graph Compression, Vertex Sparsification, Metric Embedding}
}
Document
Distance Labeling Schemes for Trees

Authors: Stephen Alstrup, Inge Li Gørtz, Esben Bistrup Halvorsen, and Ely Porat


Abstract
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille et al., J. Alg., 2004] and an upper bound by Peleg [Peleg, J. Graph Theory, 2000] establish that labels must use Theta(log^2(n)) bits. Gavoille et al. [Gavoille et al., ESA, 2001] show that for very small approximate stretch, labels use Theta(log(n) log(log(n))) bits. Several other papers investigate various variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that 1/4*log^2(n) bits are needed and that 1/2*log^2(n) bits are sufficient. We also give (1 + epsilon)-stretch labeling schemes using Theta(log(n)) bits for constant epsilon > 0. (1 + epsilon)-stretch labeling schemes with polylogarithmic label size have previously been established for doubling dimension graphs by Talwar [Talwar, STOC, 2004]. In addition, we present matching upper and lower bounds for distance labeling for caterpillars, showing that labels must have size 2*log(n) - Theta(log(log(n))). For simple paths with k nodes and edge weights in [1,n], we show that labels must have size (k - 1)/k*log(n) + Theta(log(k)).
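
For intuition, here is a hedged sketch of the folklore separator-based scheme that already achieves O(log^2(n))-bit labels, which the result above sharpens to 1/2*log^2(n): each node stores, for every centroid on its centroid-decomposition path, that centroid's id and its distance to it (O(log n) entries of O(log n) bits each), and the distance between two nodes is the minimum of d(u,c)+d(c,v) over common centroids c. Function names are ours; this is not the paper's construction.

from collections import deque

def bfs_dist(adj, src, alive):
    """Hop distances from src within the vertex set alive."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w in alive and w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def centroid(adj, alive):
    """A vertex whose removal leaves components of size <= |alive|/2."""
    start = next(iter(alive))
    parent, order, seen, stack = {start: None}, [], {start}, [start]
    while stack:
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w in alive and w not in seen:
                seen.add(w); parent[w] = u; stack.append(w)
    size = {u: 1 for u in alive}
    for u in reversed(order):
        if parent[u] is not None:
            size[parent[u]] += size[u]
    for u in order:
        heaviest = len(alive) - size[u]
        for w in adj[u]:
            if w in alive and parent.get(w) == u:
                heaviest = max(heaviest, size[w])
        if heaviest <= len(alive) // 2:
            return u

def build_labels(adj, alive, labels):
    """Recursive centroid decomposition; mutates alive, fills labels."""
    if not alive:
        return
    c = centroid(adj, alive)
    for u, d in bfs_dist(adj, c, alive).items():
        labels[u].append((c, d))          # store (centroid id, distance)
    alive.discard(c)
    while alive:
        comp = set(bfs_dist(adj, next(iter(alive)), alive))
        build_labels(adj, comp, labels)
        alive -= comp

def dist_from_labels(lu, lv):
    """Exact tree distance from the two labels alone."""
    dv = dict(lv)
    return min(d + dv[c] for c, d in lu if c in dv)

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # the path 0-1-2-3
labels = {v: [] for v in adj}
build_labels(adj, set(adj), labels)
print(dist_from_labels(labels[0], labels[3]))   # 3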

Cite as

Stephen Alstrup, Inge Li Gørtz, Esben Bistrup Halvorsen, and Ely Porat. Distance Labeling Schemes for Trees. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 132:1-132:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{alstrup_et_al:LIPIcs.ICALP.2016.132,
  author =	{Alstrup, Stephen and G{\o}rtz, Inge Li and Halvorsen, Esben Bistrup and Porat, Ely},
  title =	{{Distance Labeling Schemes for Trees}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{132:1--132:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.132},
  URN =		{urn:nbn:de:0030-drops-62661},
  doi =		{10.4230/LIPIcs.ICALP.2016.132},
  annote =	{Keywords: Distributed computing, Distance labeling, Graph theory, Routing, Trees}
}
Document
Near Optimal Adjacency Labeling Schemes for Power-Law Graphs

Authors: Casper Petersen, Noy Rotbart, Jakob Grue Simonsen, and Christian Wulff-Nilsen


Abstract
An adjacency labeling scheme labels the n nodes of a graph with bit strings in a way that allows, given the labels of two nodes, to determine adjacency based only on those bit strings. Though many graph families have been meticulously studied for this problem, a non-trivial labeling scheme for the important family of power-law graphs has yet to be obtained. This family is particularly useful for social and web networks as their underlying graphs are typically modelled as power-law graphs. Using simple strategies and a careful selection of a parameter, we show upper bounds for such labeling schemes of ~O(n^{1/alpha}) for power-law graphs with coefficient alpha, as well as nearly matching lower bounds. We also show two relaxations that allow for a label of logarithmic size, and extend the upper-bound technique to produce an improved distance labeling scheme for power-law graphs.
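
To see what an adjacency labeling scheme looks like in code, here is a hedged toy (our own naming; not the paper's construction and far from its bounds): each node's label stores its id plus the ids of its neighbours that come earlier in a fixed degree ordering, so adjacency is decidable from two labels alone. In power-law graphs most vertices have low degree, which keeps typical labels short.

def make_labels(adj):
    """Label = (node id, ids of neighbours earlier in the (degree, id) order)."""
    rank = {v: (len(adj[v]), v) for v in adj}
    return {v: (v, sorted(w for w in adj[v] if rank[w] < rank[v]))
            for v in adj}

def adjacent(label_u, label_v):
    """Decide adjacency from the two labels alone."""
    u, down_u = label_u
    v, down_v = label_v
    return v in down_u or u in down_v

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
L = make_labels(adj)
print(adjacent(L[0], L[2]), adjacent(L[1], L[3]))   # True False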

Cite as

Casper Petersen, Noy Rotbart, Jakob Grue Simonsen, and Christian Wulff-Nilsen. Near Optimal Adjacency Labeling Schemes for Power-Law Graphs. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 133:1-133:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{petersen_et_al:LIPIcs.ICALP.2016.133,
  author =	{Petersen, Casper and Rotbart, Noy and Simonsen, Jakob Grue and Wulff-Nilsen, Christian},
  title =	{{Near Optimal Adjacency Labeling Schemes for Power-Law Graphs}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{133:1--133:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.133},
  URN =		{urn:nbn:de:0030-drops-62684},
  doi =		{10.4230/LIPIcs.ICALP.2016.133},
  annote =	{Keywords: Labeling schemes, Power-law graphs}
}
Document
On the Resiliency of Randomized Routing Against Multiple Edge Failures

Authors: Marco Chiesa, Andrei Gurtov, Aleksander Madry, Slobodan Mitrovic, Ilya Nikolaevskiy, Michael Shapira, and Scott Shenker


Abstract
We study the Static-Routing-Resiliency problem, motivated by routing on the Internet: Given a graph G = (V,E), a unique destination vertex d, and an integer constant c > 0, does there exist a static and destination-based routing scheme such that the correct delivery of packets from any source s to the destination d is guaranteed so long as (1) no more than c edges fail and (2) there exists a physical path from s to d? We begin by relating the edge-connectivity of a graph, i.e., the minimum number of edges whose deletion partitions G, to its resiliency. Following the success of randomized routing algorithms in dealing with a variety of problems (e.g., Valiant load balancing in the network design problem), we then study randomized routing algorithms for the Static-Routing-Resiliency problem. For any k-connected graph, we present a surprisingly simple randomized algorithm whose expected number of hops is O(|V|k) if at most k-1 edges fail, which reduces to O(|V|) if only a fraction t of the links fail (where t < 1 is a constant). Furthermore, our algorithm is deterministic if the routing does not encounter any failed link.
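
A hedged toy capturing why randomization helps: a packet that always takes a uniformly random surviving link eventually reaches d whenever a path survives, with no per-packet state. The sketch below (our own naming) makes no claim to the paper's O(|V|k) expected hop bounds.

import random

def route(adj, s, d, failed, max_hops=10**6):
    """Walk from s toward d over undirected links, skipping failed edges."""
    u, hops = s, 0
    while u != d and hops < max_hops:
        alive = [w for w in adj[u] if frozenset((u, w)) not in failed]
        if not alive:
            return None                     # u is completely cut off
        u = random.choice(alive)
        hops += 1
    return hops if u == d else None

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # 2-connected 4-cycle
failed = {frozenset((0, 1))}
print(route(adj, 0, 3, failed))   # some finite hop count, via node 2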

Cite as

Marco Chiesa, Andrei Gurtov, Aleksander Madry, Slobodan Mitrovic, Ilya Nikolaevskiy, Michael Shapira, and Scott Shenker. On the Resiliency of Randomized Routing Against Multiple Edge Failures. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 134:1-134:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{chiesa_et_al:LIPIcs.ICALP.2016.134,
  author =	{Chiesa, Marco and Gurtov, Andrei and Madry, Aleksander and Mitrovic, Slobodan and Nikolaevskiy, Ilya and Shapira, Michael and Shenker, Scott},
  title =	{{On the Resiliency of Randomized Routing Against Multiple Edge Failures}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{134:1--134:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.134},
  URN =		{urn:nbn:de:0030-drops-62692},
  doi =		{10.4230/LIPIcs.ICALP.2016.134},
  annote =	{Keywords: Randomized, Routing, Resilience, Connectivity, Arborescenses}
}
Document
Partition Bound Is Quadratically Tight for Product Distributions

Authors: Prahladh Harsha, Rahul Jain, and Jaikumar Radhakrishnan


Abstract
Let f: {0,1}^n*{0,1}^n -> {0,1} be a 2-party function. For every product distribution mu on {0,1}^n*{0,1}^n, we show that CC^{mu}_{0.49}(f) = O((log(prt_{1/8}(f))*log(log(prt_{1/8}(f))))^2), where CC^{mu}_{epsilon}(f) is the distributional communication complexity of f with error at most epsilon under the distribution mu and prt_{1/8}(f) is the partition bound of f, as defined by Jain and Klauck [Proc. 25th CCC, 2010]. We also prove a similar bound in terms of IC_{1/8}(f), the information complexity of f, namely, CC^{mu}_{0.49}(f) = O((IC_{1/8}(f)*log(IC_{1/8}(f)))^2). The latter bound was recently and independently established by Kol [Proc. 48th STOC, 2016] using a different technique. We show a similar result for query complexity under product distributions. Let g: {0,1}^n -> {0,1} be a function. For every bit-wise product distribution mu on {0,1}^n, we show that QC^{mu}_{0.49}(g) = O((log(qprt_{1/8}(g))*log(log(qprt_{1/8}(g))))^2), where QC^{mu}_{epsilon}(g) is the distributional query complexity of g with error at most epsilon under the distribution mu and qprt_{1/8}(g) is the query partition bound of the function g. Partition bounds were introduced (in both communication complexity and query complexity models) to provide LP-based lower bounds for randomized communication complexity and randomized query complexity. Our results demonstrate that these lower bounds are polynomially tight for product distributions.

Cite as

Prahladh Harsha, Rahul Jain, and Jaikumar Radhakrishnan. Partition Bound Is Quadratically Tight for Product Distributions. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 135:1-135:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{harsha_et_al:LIPIcs.ICALP.2016.135,
  author =	{Harsha, Prahladh and Jain, Rahul and Radhakrishnan, Jaikumar},
  title =	{{Partition Bound Is Quadratically Tight for Product Distributions}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{135:1--135:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.135},
  URN =		{urn:nbn:de:0030-drops-62708},
  doi =		{10.4230/LIPIcs.ICALP.2016.135},
  annote =	{Keywords: partition bound, product distribution, communication complexity, query complexity}
}
Document
Efficient Plurality Consensus, Or: the Benefits of Cleaning up from Time to Time

Authors: Petra Berenbrink, Tom Friedetzky, George Giakkoupis, and Peter Kling


Abstract
Plurality consensus considers a network of n nodes, each having one of k opinions. Nodes execute a (randomized) distributed protocol with the goal that all nodes adopt the plurality (the opinion initially supported by the most nodes). Communication is realized via the Gossip (or random phone call) model. A major open question has been whether there is a protocol for the complete graph that converges (w.h.p.) in polylogarithmic time and uses only polylogarithmic memory per node (local memory). We answer this question affirmatively. We propose two protocols that need only mild assumptions on the bias in favor of the plurality. As an example of our results, consider the complete graph and an arbitrarily small constant multiplicative bias in favor of the plurality. Our first protocol achieves plurality consensus in O(log(k)*log(log(n))) rounds using log(k) + Theta(log(log(k))) bits of local memory. Our second protocol achieves plurality consensus in O(log(n)*log(log(n))) rounds using only log(k) + 4 bits of local memory. This disproves a conjecture by Becchetti et al. (SODA'15) implying that any protocol with local memory log(k)+O(1) has worst-case runtime Omega(k). We provide similar bounds for much weaker bias assumptions. At the heart of our protocols lies an undecided state, an idea introduced by Angluin et al. (Distributed Computing'08).
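
The undecided state mentioned above is easy to illustrate. Below is a hedged sequential simulation of the dynamics of Angluin et al. (not the paper's exact protocols; names and parameters are ours): an agent that samples a conflicting opinion becomes undecided, and an undecided agent adopts the next decided opinion it samples.

import random
from collections import Counter

UNDECIDED = None

def step(opinions):
    """One interaction: agent i samples a uniformly random agent j."""
    i, j = random.randrange(len(opinions)), random.randrange(len(opinions))
    a, b = opinions[i], opinions[j]
    if a is UNDECIDED and b is not UNDECIDED:
        opinions[i] = b                     # adopt the sampled opinion
    elif a is not UNDECIDED and b is not UNDECIDED and a != b:
        opinions[i] = UNDECIDED             # conflict -> become undecided

def simulate(initial, rounds=200_000):
    opinions = list(initial)
    for _ in range(rounds):
        step(opinions)
    return Counter(opinions)

# Initial constant multiplicative bias in favour of opinion 'A'.
print(simulate(["A"] * 60 + ["B"] * 40))    # typically converges to all-'A'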

Cite as

Petra Berenbrink, Tom Friedetzky, George Giakkoupis, and Peter Kling. Efficient Plurality Consensus, Or: the Benefits of Cleaning up from Time to Time. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 136:1-136:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


@InProceedings{berenbrink_et_al:LIPIcs.ICALP.2016.136,
  author =	{Berenbrink, Petra and Friedetzky, Tom and Giakkoupis, George and Kling, Peter},
  title =	{{Efficient Plurality Consensus, Or: the Benefits of Cleaning up from Time to Time}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{136:1--136:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.136},
  URN =		{urn:nbn:de:0030-drops-62711},
  doi =		{10.4230/LIPIcs.ICALP.2016.136},
  annote =	{Keywords: plurality consensus, voting, majority, distributed, gossip}
}
Document
Fast, Robust, Quantizable Approximate Consensus

Authors: Bernadette Charron-Bost, Matthias Függer, and Thomas Nowak


Abstract
We introduce a new class of distributed algorithms for the approximate consensus problem in dynamic rooted networks, which we call amortized averaging algorithms. They are deduced from ordinary averaging algorithms by adding a value-gathering phase before each value update. This results in a drastic drop in decision times, from being exponential in the number n of processes to being polynomial under the assumption that each process knows n. In particular, the amortized midpoint algorithm is the first algorithm that achieves a linear decision time in dynamic rooted networks with an optimal contraction rate of 1/2 at each update step. We then show robustness of the amortized midpoint algorithm under violation of network assumptions: it gracefully degrades if communication graphs are occasionally non-rooted, or if the number of processes is wrongly estimated. Finally, we prove that the amortized midpoint algorithm behaves well if processes can store and send only quantized values, rendering it well-suited for the design of dynamic networked systems. As a