LIPIcs, Volume 80

44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)




Event

ICALP 2017, July 10-14, 2017, Warsaw, Poland

Editors

Ioannis Chatzigiannakis
Piotr Indyk
Fabian Kuhn
Anca Muscholl

Publication Details

  • published at: 2017-07-07
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-041-5
  • DBLP: db/conf/icalp/icalp2017


Documents

Document
Complete Volume
LIPIcs, Volume 80, ICALP'17, Complete Volume

Authors: Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl


Abstract
LIPIcs, Volume 80, ICALP'17, Complete Volume

Cite as

44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Proceedings{chatzigiannakis_et_al:LIPIcs.ICALP.2017,
  title =	{{LIPIcs, Volume 80, ICALP'17, Complete Volume}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017},
  URN =		{urn:nbn:de:0030-drops-75107},
  doi =		{10.4230/LIPIcs.ICALP.2017},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Organization, List of Authors

Authors: Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl


Abstract
Front Matter, Table of Contents, Preface, Organization, List of Authors

Cite as

44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 0:i-0:xlii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chatzigiannakis_et_al:LIPIcs.ICALP.2017.0,
  author =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  title =	{{Front Matter, Table of Contents, Preface, Organization, List of Authors}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{0:i--0:xlii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.0},
  URN =		{urn:nbn:de:0030-drops-73663},
  doi =		{10.4230/LIPIcs.ICALP.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Organization, List of Authors}
}
Document
Invited Talk
Orbit-Finite Sets and Their Algorithms (Invited Talk)

Authors: Mikolaj Bojanczyk


Abstract
An introduction to orbit-finite sets, which are sets that are infinite enough to describe interesting examples, yet finite enough to have algorithms running on them. The notion of orbit-finiteness is illustrated with the example of register automata, an automaton model dealing with infinite alphabets.
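
As a toy illustration of the register-automaton model mentioned above (our own example, not taken from the talk), the following sketch simulates a nondeterministic one-register automaton that accepts exactly the data words in which some letter occurs twice; note that the set of reachable configurations stays small even though the alphabet is infinite.

# Toy simulation of a nondeterministic one-register automaton over an
# infinite alphabet (here: arbitrary Python values as data letters).
# The automaton guesses a letter to store and accepts if it is seen again.
# Configurations are pairs (state, register); we track the reachable set.

def accepts_repeated_letter(word):
    EMPTY = object()                         # register not yet written
    configs = {("init", EMPTY)}
    for letter in word:
        nxt = set()
        for state, reg in configs:
            if state == "init":
                nxt.add(("init", EMPTY))     # skip this letter
                nxt.add(("stored", letter))  # or store it in the register
            elif state == "stored":
                if letter == reg:
                    nxt.add(("accept", reg))  # saw the stored letter again
                else:
                    nxt.add(("stored", reg))
            else:                             # accepting sink state
                nxt.add((state, reg))
        configs = nxt
    return any(state == "accept" for state, _ in configs)

assert accepts_repeated_letter(["a", "b", "a"])
assert not accepts_repeated_letter(["a", "b", "c"])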

Cite as

Mikolaj Bojanczyk. Orbit-Finite Sets and Their Algorithms (Invited Talk). In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bojanczyk:LIPIcs.ICALP.2017.1,
  author =	{Bojanczyk, Mikolaj},
  title =	{{Orbit-Finite Sets and Their Algorithms}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.1},
  URN =		{urn:nbn:de:0030-drops-74295},
  doi =		{10.4230/LIPIcs.ICALP.2017.1},
  annote =	{Keywords: Orbit-finite sets, sets with atoms, data words, register automata}
}
Document
Invited Talk
Efficient Algorithms for Graph-Related Problems in Computer-Aided Verification (Invited Talk)

Authors: Monika Henzinger


Abstract
Fundamental algorithmic problems that lie at the core of many applications in the formal verification and analysis of systems can be described as graph-related algorithmic problems. Nodes in these problems are of one of two (or three) types, giving rise to a game-theoretic viewpoint: player one nodes are under the control of the algorithm that wants to accomplish a goal, player two nodes are under the control of a worst-case adversary that tries to prevent player one from achieving her goal, and random nodes are under the control of a random process that is oblivious to the goal of player one. A graph containing only player one and random nodes is called a Markov Decision Process; a graph containing only player one and player two nodes is called a game graph. A variety of goals on these graphs are of interest, the simplest being whether a fixed set of nodes can be reached. The algorithmic question is then whether there is a strategy for player one to achieve her goal from a given starting node. In this talk we give an overview of a variety of goals that are interesting in computer-aided verification and present upper and (conditional) lower bounds on the time complexity of deciding whether a winning strategy for player one exists.
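
For the simplest goal mentioned above, reachability on a game graph (player one and player two nodes only, no random nodes), the winning region of player one can be computed with the classic backward attractor fixpoint. The sketch below is a generic textbook formulation in Python, offered only as background; the graph encoding and names are ours, and it is not one of the specialized algorithms of the talk.

# Backward attractor computation for a reachability game.
# nodes: dict node -> "P1" or "P2"; edges: dict node -> list of successors.
# Returns the set of nodes from which player one can force a visit to `target`.

def reachability_winning_region(nodes, edges, target):
    win = set(target)
    preds = {v: [] for v in nodes}
    for u in nodes:
        for v in edges[u]:
            preds[v].append(u)
    count = {v: len(edges[v]) for v in nodes}   # for P2: successors not yet in win
    queue = list(win)
    while queue:
        v = queue.pop()
        for u in preds[v]:
            if u in win:
                continue
            if nodes[u] == "P1":
                win.add(u); queue.append(u)     # P1 can pick the edge into win
            else:
                count[u] -= 1
                if count[u] == 0:               # every P2 move leads into win
                    win.add(u); queue.append(u)
    return win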

Cite as

Monika Henzinger. Efficient Algorithms for Graph-Related Problems in Computer-Aided Verification (Invited Talk). In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{henzinger:LIPIcs.ICALP.2017.2,
  author =	{Henzinger, Monika},
  title =	{{Efficient Algorithms for Graph-Related Problems in Computer-Aided Verification}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.2},
  URN =		{urn:nbn:de:0030-drops-75054},
  doi =		{10.4230/LIPIcs.ICALP.2017.2},
  annote =	{Keywords: Computer-aided Verification, Game Theory, Markov Decision Process}
}
Document
Invited Talk
Local Computation Algorithms (Invited Talk)

Authors: Ronitt Rubinfeld


Abstract
Consider a setting in which inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input? We survey recent work in the model of local computation algorithms which, for a given input, support queries by a user for the values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. In this talk, we describe results on a variety of problems for which sublinear time and space local computation algorithms have been developed, with special focus on finding maximal independent sets and sparse spanning graphs.
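
As a simple illustration of the model (not one of the specific constructions surveyed in the talk), one can answer "is vertex v in the maximal independent set?" locally by fixing a random rank per vertex and emulating the greedy MIS in rank order; only vertices reachable through chains of decreasing rank are ever explored. The graph encoding below is an assumption made for the example.

import random

# Local computation of membership in the greedy maximal independent set
# defined by a random vertex ranking: v is in the MIS iff no neighbor of
# smaller rank is in the MIS.  Only a local neighborhood of v is explored.

class LocalMIS:
    def __init__(self, adj, seed=0):
        self.adj = adj                        # dict: vertex -> list of neighbors
        self.rng = random.Random(seed)
        self.rank = {}                        # lazily assigned random ranks
        self.memo = {}

    def _rank(self, v):
        if v not in self.rank:
            self.rank[v] = self.rng.random()
        return self.rank[v]

    def in_mis(self, v):
        if v not in self.memo:
            self.memo[v] = not any(
                self._rank(u) < self._rank(v) and self.in_mis(u)
                for u in self.adj[v]
            )
        return self.memo[v]

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
oracle = LocalMIS(graph)
print(sorted(v for v in graph if oracle.in_mis(v)))   # some maximal independent set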

Cite as

Ronitt Rubinfeld. Local Computation Algorithms (Invited Talk). In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{rubinfeld:LIPIcs.ICALP.2017.3,
  author =	{Rubinfeld, Ronitt},
  title =	{{Local Computation Algorithms}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.3},
  URN =		{urn:nbn:de:0030-drops-75068},
  doi =		{10.4230/LIPIcs.ICALP.2017.3},
  annote =	{Keywords: Massive Data Sets, Approximate Solutions, Maximal Independent Set, Sparse Spanning Graphs}
}
Document
Invited Talk
Fast and Powerful Hashing Using Tabulation (Invited Talk)

Authors: Mikkel Thorup


Abstract
Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical. Here we survey recent results on how simple hashing schemes based on tabulation provide unexpectedly strong guarantees. Simple tabulation hashing dates back to Zobrist [1970]. Keys are viewed as consisting of c characters, and we have precomputed character tables h_1,...,h_c mapping characters to random hash values. A key x=(x_1,...,x_c) is hashed to h_1[x_1] xor h_2[x_2] xor ... xor h_c[x_c]. This scheme is very fast with character tables in cache. While simple tabulation is not even 4-independent, it does provide many of the guarantees that are normally obtained via higher independence, e.g., for linear probing and Cuckoo hashing. Next we consider twisted tabulation, where one character is "twisted" with some simple operations. The resulting hash function has powerful distributional properties: Chernoff-Hoeffding type tail bounds and a very small bias for min-wise hashing. Finally, we consider double tabulation, where we compose two simple tabulation functions, applying one to the output of the other, and show that this yields very high independence in the classic framework of Carter and Wegman [1977]. In fact, w.h.p., for a given set of size proportional to that of the space consumed, double tabulation gives fully-random hashing. While these tabulation schemes are all easy to implement and use, their analysis is not. This keynote talk surveys results from the papers in the reference list.
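
A minimal sketch of simple tabulation hashing as just described, viewing 32-bit keys as c = 4 byte characters; the table size and output width are choices made for this example, not prescribed by the talk.

import random

# Simple tabulation: fill c random character tables once, then hash a key
# by XOR-ing the table entries indexed by its characters.

C, BITS = 4, 32                               # 4 byte-characters, 32-bit hashes
rng = random.Random(42)
tables = [[rng.getrandbits(BITS) for _ in range(256)] for _ in range(C)]

def simple_tabulation(key):
    h = 0
    for i in range(C):
        char = (key >> (8 * i)) & 0xFF        # i-th byte of the key
        h ^= tables[i][char]
    return h

print(hex(simple_tabulation(0xDEADBEEF)))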

Cite as

Mikkel Thorup. Fast and Powerful Hashing Using Tabulation (Invited Talk). In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 4:1-4:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{thorup:LIPIcs.ICALP.2017.4,
  author =	{Thorup, Mikkel},
  title =	{{Fast and Powerful Hashing Using Tabulation}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{4:1--4:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.4},
  URN =		{urn:nbn:de:0030-drops-75074},
  doi =		{10.4230/LIPIcs.ICALP.2017.4},
  annote =	{Keywords: Hashing, Randomized Algorithms}
}
Document
Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps

Authors: Roksana Baleshzar, Deeparnab Chakrabarty, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and C. Seshadhri


Abstract
We study the problem of testing unateness of functions f:{0,1}^d -> R. We give an O((d/epsilon) * log(d/epsilon))-query nonadaptive tester and an O(d/epsilon)-query adaptive tester and show that both testers are optimal for a fixed distance parameter epsilon. Previously known unateness testers worked only for Boolean functions, and their query complexity had worse dependence on the dimension both for the adaptive and the nonadaptive case. Moreover, no lower bounds for testing unateness were known. We generalize our results to obtain optimal unateness testers for functions f:[n]^d -> R. Our results establish that adaptivity helps with testing unateness of real-valued functions on domains of the form {0,1}^d and, more generally, [n]^d. This stands in contrast to the situation for monotonicity testing where there is no adaptivity gap for functions f:[n]^d -> R.
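
For intuition only (this is not the paper's tester and gives no tolerance or optimality guarantees): a function f on {0,1}^d is unate when, along every coordinate, it is either nondecreasing or nonincreasing, so observing both an increase and a decrease on axis-parallel edges in the same coordinate certifies a violation. A naive sampler based on this observation:

import random

# Naive sampler: for each coordinate, look at random edges of the hypercube in
# that direction and record whether f ever increases and ever decreases there.
# Seeing both in one coordinate witnesses that f is not unate.

def naive_unateness_check(f, d, samples=200, seed=0):
    rng = random.Random(seed)
    for i in range(d):
        up = down = False
        for _ in range(samples):
            x = [rng.randint(0, 1) for _ in range(d)]
            lo, hi = list(x), list(x)
            lo[i], hi[i] = 0, 1
            diff = f(tuple(hi)) - f(tuple(lo))
            up |= diff > 0
            down |= diff < 0
        if up and down:
            return False        # violation found in coordinate i
    return True                 # no violation seen (f may still be far from unate)

f = lambda x: x[0] - 2 * x[1] + 3 * x[2]       # unate: monotone in each coordinate
print(naive_unateness_check(f, 3))             # True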

Cite as

Roksana Baleshzar, Deeparnab Chakrabarty, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and C. Seshadhri. Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{baleshzar_et_al:LIPIcs.ICALP.2017.5,
  author =	{Baleshzar, Roksana and Chakrabarty, Deeparnab and Pallavoor, Ramesh Krishnan S. and Raskhodnikova, Sofya and Seshadhri, C.},
  title =	{{Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.5},
  URN =		{urn:nbn:de:0030-drops-74844},
  doi =		{10.4230/LIPIcs.ICALP.2017.5},
  annote =	{Keywords: Property testing, unate and monotone functions}
}
Document
Sublinear Random Access Generators for Preferential Attachment Graphs

Authors: Guy Even, Reut Levi, Moti Medina, and Adi Rosén


Abstract
We consider the problem of sampling from a distribution on graphs, specifically when the distribution is defined by an evolving graph model, and consider the time, space and randomness complexities of such samplers. In the standard approach, the whole graph is chosen randomly according to the randomized evolving process, stored in full, and then queries on the sampled graph are answered by simply accessing the stored graph. This may require prohibitive amounts of time, space and random bits, especially when only a small number of queries are actually issued. Instead, we propose to generate the graph on-the-fly, in response to queries, and therefore to require amounts of time, space, and random bits which are a function of the actual number of queries. We focus on two random graph models: the Barabási-Albert Preferential Attachment model (BA-graphs) and the random recursive tree model. We give on-the-fly generation algorithms for both models. With probability 1-1/poly(n), each and every query is answered in polylog(n) time, and the increase in space and the number of random bits consumed by any single query are both polylog(n), where n denotes the number of vertices in the graph. Our results show that, although the BA random graph model is defined by a sequential process, efficient random access to the graph's nodes is possible. In addition to the conceptual contribution, efficient on-the-fly generation of random graphs can serve as a tool for the efficient simulation of sublinear algorithms over large BA-graphs, and the efficient estimation of their performance on such graphs.
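
For contrast with the on-the-fly generators of the paper, the standard sequential preferential-attachment process looks as follows (a common formulation with a simplified seeding convention; generating and storing the whole graph up front is exactly the costly baseline the paper avoids).

import random

# Standard sequential Barabasi-Albert process: each new vertex attaches to m
# distinct existing vertices, chosen (approximately) proportionally to degree
# by sampling uniformly from `targets`, a list holding one slot per edge
# endpoint (plus one initial slot for each of the m seed vertices).

def barabasi_albert(n, m, seed=0):
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))                  # seed vertices 0..m-1
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))   # degree-biased choice
        for u in chosen:
            edges.append((v, u))
            targets.extend([v, u])            # record both endpoints
    return edges

print(len(barabasi_albert(1000, 3)))          # (1000 - 3) * 3 = 2991 edges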

Cite as

Guy Even, Reut Levi, Moti Medina, and Adi Rosén. Sublinear Random Access Generators for Preferential Attachment Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{even_et_al:LIPIcs.ICALP.2017.6,
  author =	{Even, Guy and Levi, Reut and Medina, Moti and Ros\'{e}n, Adi},
  title =	{{Sublinear Random Access Generators for Preferential Attachment Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{6:1--6:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.6},
  URN =		{urn:nbn:de:0030-drops-74242},
  doi =		{10.4230/LIPIcs.ICALP.2017.6},
  annote =	{Keywords: local computation algorithms, preferential attachment graphs, random recursive trees, sublinear algorithms}
}
Document
Sublinear Time Estimation of Degree Distribution Moments: The Degeneracy Connection

Authors: Talya Eden, Dana Ron, and C. Seshadhri


Abstract
We revisit the classic problem of estimating the degree distribution moments of an undirected graph. Consider an undirected graph G=(V,E) with n (non-isolated) vertices, and define (for s > 0) mu_s = (1/n) * sum_{v in V} d_v^s. Our aim is to estimate mu_s within a multiplicative error of (1+epsilon) (for a given approximation parameter epsilon>0) in sublinear time. We consider the sparse graph model that allows access to: uniform random vertices, queries for the degree of any vertex, and queries for a neighbor of any vertex. For the case of s=1 (the average degree), \widetilde{O}(\sqrt{n}) queries suffice for any constant epsilon (Feige, SICOMP 06 and Goldreich-Ron, RSA 08). Gonen-Ron-Shavitt (SIDMA 11) extended this result to all integral s > 0, by designing an algorithm that performs \widetilde{O}(n^{1-1/(s+1)}) queries. (Strictly speaking, their algorithm approximates the number of star-subgraphs of a given size, but a slight modification gives an algorithm for moments.) We design a new, significantly simpler algorithm for this problem. In the worst case, it exactly matches the bounds of Gonen-Ron-Shavitt, and has a much simpler proof. More importantly, the running time of this algorithm is connected to the degeneracy of G. This is (essentially) the maximum density of an induced subgraph. For the family of graphs with degeneracy at most alpha, it has a query complexity of \widetilde{O}\left(\frac{n^{1-1/s}}{\mu_s^{1/s}} \Big(\alpha^{1/s} + \min\{\alpha,\mu_s^{1/s}\}\Big)\right) = \widetilde{O}(n^{1-1/s}\alpha/\mu_s^{1/s}). Thus, for the class of bounded degeneracy graphs (which includes all minor-closed families and preferential attachment graphs), we can estimate the average degree in \widetilde{O}(1) queries, and can estimate the variance of the degree distribution in \widetilde{O}(\sqrt{n}) queries. This is a major improvement over the previous worst-case bounds. Our key insight is in designing an estimator for mu_s that has low variance when G does not have large dense subgraphs.
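
The obvious baseline that the paper improves on can be sketched in a few lines: sample vertices uniformly, query their degrees, and average the s-th powers. This naive estimator is unbiased but may have very large variance when a few vertices have huge degree, which is exactly the issue the degeneracy-aware estimator addresses. The query interface below is an assumption made for illustration.

import random

# Naive estimator for mu_s = (1/n) * sum_v d_v^s using only uniform vertex
# samples and degree queries.  Unbiased, but its variance can be huge when a
# few vertices have very large degree.

def naive_moment_estimate(num_vertices, degree_query, s, samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        v = rng.randrange(num_vertices)       # uniform random vertex
        total += degree_query(v) ** s
    return total / samples

# Tiny example: a star on 101 vertices (center has degree 100); true mu_2 = 100.
degrees = [100] + [1] * 100
print(naive_moment_estimate(len(degrees), lambda v: degrees[v], s=2, samples=5000))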

Cite as

Talya Eden, Dana Ron, and C. Seshadhri. Sublinear Time Estimation of Degree Distribution Moments: The Degeneracy Connection. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 7:1-7:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{eden_et_al:LIPIcs.ICALP.2017.7,
  author =	{Eden, Talya and Ron, Dana and Seshadhri, C.},
  title =	{{Sublinear Time Estimation of Degree Distribution Moments: The Degeneracy Connection}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{7:1--7:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.7},
  URN =		{urn:nbn:de:0030-drops-73747},
  doi =		{10.4230/LIPIcs.ICALP.2017.7},
  annote =	{Keywords: Sublinear algorithms, Degree distribution, Graph moments}
}
Document
Near-Optimal Closeness Testing of Discrete Histogram Distributions

Authors: Ilias Diakonikolas, Daniel M. Kane, and Vladimir Nikishkin


Abstract
We investigate the problem of testing the equivalence between two discrete histograms. A k-histogram over [n] is a probability distribution that is piecewise constant over some set of k intervals over [n]. Histograms have been extensively studied in computer science and statistics. Given a set of samples from two k-histogram distributions p, q over [n], we want to distinguish (with high probability) between the cases that p = q and ||p - q||_1 >= epsilon. The main contribution of this paper is a new algorithm for this testing problem and a nearly matching information-theoretic lower bound. Specifically, the sample complexity of our algorithm matches our lower bound up to a logarithmic factor, improving on previous work by polynomial factors in the relevant parameters. Our algorithmic approach applies in a more general setting and yields improved sample upper bounds for testing closeness of other structured distributions as well.

Cite as

Ilias Diakonikolas, Daniel M. Kane, and Vladimir Nikishkin. Near-Optimal Closeness Testing of Discrete Histogram Distributions. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{diakonikolas_et_al:LIPIcs.ICALP.2017.8,
  author =	{Diakonikolas, Ilias and Kane, Daniel M. and Nikishkin, Vladimir},
  title =	{{Near-Optimal Closeness Testing of Discrete Histogram Distributions}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.8},
  URN =		{urn:nbn:de:0030-drops-74937},
  doi =		{10.4230/LIPIcs.ICALP.2017.8},
  annote =	{Keywords: distribution testing, histograms, closeness testing}
}
Document
Deleting and Testing Forbidden Patterns in Multi-Dimensional Arrays

Authors: Omri Ben-Eliezer, Simon Korman, and Daniel Reichman


Abstract
Analyzing multi-dimensional data is a fundamental problem in various areas of computer science. As the amount of data is often huge, it is desirable to obtain sublinear time algorithms to understand local properties of the data. We focus on the natural problem of testing pattern freeness: given a large d-dimensional array A and a fixed d-dimensional pattern P over a finite alphabet Gamma, we say that A is P-free if it does not contain a copy of the forbidden pattern P as a consecutive subarray. The distance of A to P-freeness is the fraction of the entries of A that need to be modified to make it P-free. For any epsilon > 0 and any large enough pattern P over any alphabet - other than a very small set of exceptional patterns - we design a tolerant tester that distinguishes between the case that the distance is at least epsilon and the case that the distance is at most a_d epsilon, with query complexity and running time c_d epsilon^{-1}, where a_d < 1 and c_d depend only on the dimension d. These testers only need to access uniformly random blocks of samples from the input A. To analyze the testers we establish several combinatorial results, including the following d-dimensional modification lemma, which might be of independent interest: For any large enough d-dimensional pattern P over any alphabet (excluding a small set of exceptional patterns for the binary case), and any d-dimensional array A containing a copy of P, one can delete this copy by modifying one of its locations without creating new P-copies in A. Our results address an open question of Fischer and Newman, who asked whether there exist efficient testers for properties related to tight substructures in multi-dimensional structured data.
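
As a point of reference for the sublinear testers, the exhaustive check for d = 2 below decides exact P-freeness in time proportional to the array size times the pattern size; it is the brute-force baseline, not the paper's tester (the array encoding is ours).

# Exhaustive check that a 2-D array A contains no copy of the pattern P as a
# consecutive (contiguous) subarray.  A and P are lists of equal-length rows.

def is_pattern_free(A, P):
    n, m = len(A), len(A[0])
    p, q = len(P), len(P[0])
    for i in range(n - p + 1):
        for j in range(m - q + 1):
            if all(A[i + a][j + b] == P[a][b] for a in range(p) for b in range(q)):
                return False            # found a copy of P at offset (i, j)
    return True

A = [[0, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
print(is_pattern_free(A, [[1, 1]]))     # False: row 1 contains the pattern
print(is_pattern_free(A, [[1, 1, 1]]))  # True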

Cite as

Omri Ben-Eliezer, Simon Korman, and Daniel Reichman. Deleting and Testing Forbidden Patterns in Multi-Dimensional Arrays. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{beneliezer_et_al:LIPIcs.ICALP.2017.9,
  author =	{Ben-Eliezer, Omri and Korman, Simon and Reichman, Daniel},
  title =	{{Deleting and Testing Forbidden Patterns in Multi-Dimensional Arrays}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.9},
  URN =		{urn:nbn:de:0030-drops-74427},
  doi =		{10.4230/LIPIcs.ICALP.2017.9},
  annote =	{Keywords: Property testing, Sublinear algorithms, Pattern matching}
}
Document
On the Value of Penalties in Time-Inconsistent Planning

Authors: Susanne Albers and Dennis Kraft


Abstract
People tend to behave inconsistently over time due to an inherent present bias. As this may impair performance, social and economic settings need to be adapted accordingly. Common tools to reduce the impact of time-inconsistent behavior are penalties and prohibition. Such tools are called commitment devices. In recent work Kleinberg and Oren [EC, 2014] connect the design of a prohibition-based commitment device to a combinatorial problem in which edges are removed from a task graph G with n nodes. However, this problem is NP-hard to approximate within a ratio less than n^(1/2)/3 [Albers and Kraft, WINE, 2016]. To address this issue, we propose a penalty-based commitment device that does not delete edges, but raises their cost. The benefits of our approach are twofold. On the conceptual side, we show that penalties are up to 1/beta times more efficient than prohibition, where 0 < beta <= 1 parameterizes the present bias. On the computational side, we improve approximability by presenting a 2-approximation algorithm for allocating penalties. To complement this result, we prove that optimal penalties are NP-hard to approximate within a ratio of 1.08192.
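
The underlying present-bias model of Kleinberg and Oren can be made concrete with a short simulation: at each node the agent pays the next edge at full cost but discounts all remaining cost by beta, so it greedily follows the edge minimizing c(v,w) + beta * dist(w,t). The sketch below illustrates that model only; the graph, helper names, and conventions are ours, and it does not implement the paper's penalty-allocation algorithm.

import heapq

def shortest_dists_to(target, radj):
    # Dijkstra on the reversed graph: true remaining cost to `target`.
    dist = {target: 0.0}
    heap = [(0.0, target)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for u, w in radj.get(v, []):
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist

def present_biased_walk(adj, source, target, beta):
    # Agent repeatedly takes the edge minimizing c(v, w) + beta * dist(w, target).
    radj = {}
    for v, nbrs in adj.items():
        for w, c in nbrs:
            radj.setdefault(w, []).append((v, c))
    dist = shortest_dists_to(target, radj)
    path, v = [source], source
    while v != target:
        v, _ = min(((w, c + beta * dist[w]) for w, c in adj[v]), key=lambda t: t[1])
        path.append(v)
    return path

# With beta = 1 the agent is rational; with strong bias it prefers the cheap
# first edge even though the overall route becomes more expensive.
adj = {"s": [("a", 1.0), ("b", 3.0)], "a": [("t", 8.0)], "b": [("t", 2.0)], "t": []}
print(present_biased_walk(adj, "s", "t", beta=1.0))   # ['s', 'b', 't']
print(present_biased_walk(adj, "s", "t", beta=0.2))   # ['s', 'a', 't']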

Cite as

Susanne Albers and Dennis Kraft. On the Value of Penalties in Time-Inconsistent Planning. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 10:1-10:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{albers_et_al:LIPIcs.ICALP.2017.10,
  author =	{Albers, Susanne and Kraft, Dennis},
  title =	{{On the Value of Penalties in Time-Inconsistent Planning}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{10:1--10:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.10},
  URN =		{urn:nbn:de:0030-drops-73876},
  doi =		{10.4230/LIPIcs.ICALP.2017.10},
  annote =	{Keywords: approximation algorithms, behavioral economics, commitment devices, computational complexity, time-inconsistent preferences}
}
Document
Efficient Approximations for the Online Dispersion Problem

Authors: Jing Chen, Bo Li, and Yingkai Li


Abstract
The dispersion problem has been widely studied in computational geometry and facility location, and is closely related to the packing problem. The goal is to locate n points (e.g., facilities or persons) in a k-dimensional polytope, so that they are far away from each other and from the boundary of the polytope. In many real-world scenarios, however, the points arrive and depart at different times, and decisions must be made without knowing future events. Therefore we study, for the first time in the literature, the online dispersion problem in Euclidean space. There are two natural objectives when time is involved: the all-time worst-case (ATWC) problem tries to maximize the minimum distance that ever appears at any time; and the cumulative distance (CD) problem tries to maximize the integral of the minimum distance throughout the whole time interval. Interestingly, the online problems are highly non-trivial even on a segment. For cumulative distance, this remains the case even when the problem is time-dependent but offline, with all the arrival and departure times given in advance. For the online ATWC problem on a segment, we construct a deterministic polynomial-time algorithm which is (2 ln 2 + epsilon)-competitive, where epsilon>0 can be arbitrarily small and the algorithm's running time is polynomial in 1/epsilon. We show this algorithm is actually optimal. For the same problem in a square, we provide a 1.591-competitive algorithm and a lower bound of 1.183. Furthermore, for arbitrary k-dimensional polytopes with k>=2, we provide a 2/(1-epsilon)-competitive algorithm and a lower bound of 7/6. All our lower bounds come from the structure of the online problems and hold even when computational complexity is not a concern. Interestingly, for the offline CD problem in arbitrary k-dimensional polytopes, we provide a polynomial-time black-box reduction to the online ATWC problem, and the resulting competitive ratio increases by a factor of at most 2. Our techniques also apply to online dispersion problems with different boundary conditions.

Cite as

Jing Chen, Bo Li, and Yingkai Li. Efficient Approximations for the Online Dispersion Problem. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2017.11,
  author =	{Chen, Jing and Li, Bo and Li, Yingkai},
  title =	{{Efficient Approximations for the Online Dispersion Problem}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.11},
  URN =		{urn:nbn:de:0030-drops-74002},
  doi =		{10.4230/LIPIcs.ICALP.2017.11},
  annote =	{Keywords: dispersion, online algorithms, geometric optimization, packing, competitive algorithms}
}
Document
Online Covering with Sum of ℓ_q-Norm Objectives

Authors: Viswanath Nagarajan and Xiangkun Shen


Abstract
We consider fractional online covering problems with ℓ_q-norm objectives. The problem of interest is of the form min{ f(x) : Ax >= 1, x >= 0 }, where f(x) is the weighted sum of ℓ_q-norms and A is a non-negative matrix. The rows of A (i.e., covering constraints) arrive online over time. We provide an online O(log d + log p)-competitive algorithm, where p is the maximum-to-minimum ratio of the entries of A and d is the row sparsity of A. This is based on the online primal-dual framework, where we use the dual of the above convex program. Our result expands the class of convex objectives that admit good online algorithms: prior results required a monotonicity condition on the objective which is not satisfied here. This result is nearly tight even for the linear special case. As direct applications, we obtain (i) improved online algorithms for non-uniform buy-at-bulk network design and (ii) the first online algorithm for throughput maximization under ℓ_q-norm edge capacities.
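
For the linear special case mentioned above (online fractional covering with a linear objective), the classic primal-dual multiplicative update conveys the flavor of the framework: when a covering constraint arrives unsatisfied, boost the variables it contains until it is covered. The sketch below is that textbook scheme for 0/1 constraint coefficients, shown for intuition; it is not the paper's ℓ_q-norm algorithm.

# Online fractional set cover (0/1 coefficients): minimize sum_j c_j * x_j
# subject to constraints "sum_{j in S} x_j >= 1" arriving online.  While the
# new constraint is unsatisfied, each variable in it is updated as
# x_j <- x_j * (1 + 1/c_j) + 1/(|S| * c_j), the classic multiplicative-update
# rule behind O(log d)-competitive fractional covering.

class OnlineFractionalCover:
    def __init__(self, costs):
        self.c = costs                         # positive costs c_j
        self.x = [0.0] * len(costs)

    def arrive(self, S):                       # S: set of variable indices
        while sum(self.x[j] for j in S) < 1.0:
            for j in S:
                self.x[j] = self.x[j] * (1.0 + 1.0 / self.c[j]) + 1.0 / (len(S) * self.c[j])
        return list(self.x)

solver = OnlineFractionalCover([1.0, 1.0, 2.0])
print(solver.arrive({0, 2}))                   # covers the first constraint
print(solver.arrive({1}))                      # forces x_1 up to at least 1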

Cite as

Viswanath Nagarajan and Xiangkun Shen. Online Covering with Sum of ℓ_q-Norm Objectives. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 12:1-12:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{nagarajan_et_al:LIPIcs.ICALP.2017.12,
  author =	{Nagarajan, Viswanath and Shen, Xiangkun},
  title =	{{Online Covering with Sum of $\ell_q$-Norm Objectives}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{12:1--12:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.12},
  URN =		{urn:nbn:de:0030-drops-73839},
  doi =		{10.4230/LIPIcs.ICALP.2017.12},
  annote =	{Keywords: online algorithm, covering/packing problem, convex, buy-at-bulk, throughput maximization}
}
Document
Dynamic Beats Fixed: On Phase-Based Algorithms for File Migration

Authors: Marcin Bienkowski, Jaroslaw Byrka, and Marcin Mucha


Abstract
In this paper, we construct a deterministic 4-competitive algorithm for the online file migration problem, beating the currently best, 20-year-old, 4.086-competitive MTLM algorithm by Bartal et al. (SODA 1997). Like MTLM, our algorithm also operates in phases, but it adapts their lengths dynamically depending on the geometry of requests seen so far. The improvement was obtained by carefully analyzing a linear model (factor-revealing LP) of a single phase of the algorithm. We also show that if an online algorithm operates in phases of fixed length and the adversary is able to modify the graph between phases, no algorithm can beat the competitive ratio of 4.086.

Cite as

Marcin Bienkowski, Jaroslaw Byrka, and Marcin Mucha. Dynamic Beats Fixed: On Phase-Based Algorithms for File Migration. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bienkowski_et_al:LIPIcs.ICALP.2017.13,
  author =	{Bienkowski, Marcin and Byrka, Jaroslaw and Mucha, Marcin},
  title =	{{Dynamic Beats Fixed: On Phase-Based Algorithms for File Migration}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.13},
  URN =		{urn:nbn:de:0030-drops-73942},
  doi =		{10.4230/LIPIcs.ICALP.2017.13},
  annote =	{Keywords: file migration, factor-revealing linear programs, online algorithms, competitive analysis}
}
Document
The Infinite Server Problem

Authors: Christian Coester, Elias Koutsoupias, and Philip Lazos


Abstract
We study a variant of the k-server problem, the infinite server problem, in which infinitely many servers reside initially at a particular point of the metric space and serve a sequence of requests. In the framework of competitive analysis, we show a surprisingly tight connection between this problem and the (h,k)-server problem, in which an online algorithm with k servers competes against an offline algorithm with h servers. Specifically, we show that the infinite server problem has bounded competitive ratio if and only if the (h,k)-server problem has bounded competitive ratio for some k=O(h). We give a lower bound of 3.146 for the competitive ratio of the infinite server problem, which implies the same lower bound for the (h,k)-server problem even when k>>h and holds also for the line metric; the previously known bounds were 2.4 for general metric spaces and 2 for the line. For weighted trees and layered graphs we obtain upper bounds, although they depend on the depth. Of particular interest is the infinite server problem on the line, which we show to be equivalent to the seemingly easier case in which all requests are in a fixed bounded interval away from the original position of the servers. This is a special case of a more general reduction from arbitrary metric spaces to bounded subspaces. Unfortunately, classical approaches (double coverage and generalizations, work function algorithm, balancing algorithms) fail even for this special case.

Cite as

Christian Coester, Elias Koutsoupias, and Philip Lazos. The Infinite Server Problem. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{coester_et_al:LIPIcs.ICALP.2017.14,
  author =	{Coester, Christian and Koutsoupias, Elias and Lazos, Philip},
  title =	{{The Infinite Server Problem}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.14},
  URN =		{urn:nbn:de:0030-drops-74563},
  doi =		{10.4230/LIPIcs.ICALP.2017.14},
  annote =	{Keywords: Online Algorithms, k-Server, Resource Augmentation}
}
Document
Quantum Automata Cannot Detect Biased Coins, Even in the Limit

Authors: Guy Kindler and Ryan O'Donnell


Abstract
Aaronson and Drucker (2011) asked whether there exists a quantum finite automaton that can distinguish fair coin tosses from biased ones by spending significantly more time in accepting states, on average, given an infinite sequence of tosses. We answer this question negatively.

Cite as

Guy Kindler and Ryan O'Donnell. Quantum Automata Cannot Detect Biased Coins, Even in the Limit. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 15:1-15:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kindler_et_al:LIPIcs.ICALP.2017.15,
  author =	{Kindler, Guy and O'Donnell, Ryan},
  title =	{{Quantum Automata Cannot Detect Biased Coins, Even in the Limit}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{15:1--15:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.15},
  URN =		{urn:nbn:de:0030-drops-73995},
  doi =		{10.4230/LIPIcs.ICALP.2017.15},
  annote =	{Keywords: quantum automata}
}
Document
A New Holant Dichotomy Inspired by Quantum Computation

Authors: Miriam Backens


Abstract
Holant problems are a framework for the analysis of counting complexity problems on graphs. This framework is simultaneously general enough to encompass many counting problems on graphs and specific enough to allow the derivation of dichotomy results, partitioning all problems into those which are in FP and those which are #P-hard. The Holant framework is based on the theory of holographic algorithms, which was originally inspired by concepts from quantum computation, but this connection appears not to have been explored before. Here, we employ quantum information theory to explain existing results in a concise way and to derive a dichotomy for a new family of problems, which we call Holant^+. This family sits in between the known families of Holant^*, for which a full dichotomy is known, and Holant^c, for which only a restricted dichotomy is known. Using knowledge from entanglement theory -- both previously existing work and new results of our own -- we prove a full dichotomy theorem for Holant^+, which is very similar to the restricted Holant^c dichotomy and may thus be a stepping stone to a full dichotomy for that family.

Cite as

Miriam Backens. A New Holant Dichotomy Inspired by Quantum Computation. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 16:1-16:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{backens:LIPIcs.ICALP.2017.16,
  author =	{Backens, Miriam},
  title =	{{A New Holant Dichotomy Inspired by Quantum Computation}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{16:1--16:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.16},
  URN =		{urn:nbn:de:0030-drops-74383},
  doi =		{10.4230/LIPIcs.ICALP.2017.16},
  annote =	{Keywords: computational complexity, counting complexity, Holant, dichotomy, entanglement}
}
Document
Efficient Quantum Algorithms for Simulating Lindblad Evolution

Authors: Richard Cleve and Chunhao Wang


Abstract
We consider the natural generalization of the Schrodinger equation to Markovian open system dynamics: the so-called Lindblad equation. We give a quantum algorithm for simulating the evolution of an n-qubit system for time t within precision epsilon. If the Lindbladian consists of poly(n) operators that can each be expressed as a linear combination of poly(n) tensor products of Pauli operators then the gate cost of our algorithm is O(t polylog(t/epsilon) poly(n)). We also obtain similar bounds for the cases where the Lindbladian consists of local operators, and where the Lindbladian consists of sparse operators. This is remarkable in light of evidence that we provide indicating that the above efficiency is impossible to attain by first expressing Lindblad evolution as Schrodinger evolution on a larger system and tracing out the ancillary system: the cost of such a reduction incurs an efficiency overhead of O(t^2/epsilon) even before the Hamiltonian evolution simulation begins. Instead, the approach of our algorithm is to use a novel variation of the "linear combinations of unitaries" construction that pertains to channels.
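
For reference, the Lindblad equation in its standard textbook form, for a density matrix rho, Hamiltonian H, and Lindblad (jump) operators L_j (units with hbar = 1), is:

\frac{\mathrm{d}\rho}{\mathrm{d}t}
  = -\,i\,[H,\rho]
  + \sum_{j} \Big( L_j \rho L_j^{\dagger}
      - \tfrac{1}{2}\,\big\{ L_j^{\dagger} L_j,\ \rho \big\} \Big)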

Cite as

Richard Cleve and Chunhao Wang. Efficient Quantum Algorithms for Simulating Lindblad Evolution. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{cleve_et_al:LIPIcs.ICALP.2017.17,
  author =	{Cleve, Richard and Wang, Chunhao},
  title =	{{Efficient Quantum Algorithms for Simulating Lindblad Evolution}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.17},
  URN =		{urn:nbn:de:0030-drops-74776},
  doi =		{10.4230/LIPIcs.ICALP.2017.17},
  annote =	{Keywords: quantum algorithms, open quantum systems, Lindblad simulation}
}
Document
Controlled Quantum Amplification

Authors: Catalin Dohotaru and Peter Høyer


Abstract
We propose a new framework for turning quantum search algorithms that decide into quantum algorithms for finding a solution. Suppose we are given an abstract quantum search algorithm A that can determine whether a target g exists or not. We give a general construction of another operator U that both determines and finds the target, whenever one exists. Our amplification method at most doubles the cost over using A, has little overhead, and works by controlling the evolution of A. This is the first known general framework addressing the open question of turning abstract quantum search algorithms into quantum algorithms for finding a solution. We next apply the framework to random walks. We develop a new classical algorithm and a new quantum algorithm for finding a unique marked element. Our new random walk finds a unique marked element using H update operations and 1/eps checking operations. Here H is the hitting time, and eps is the probability that the stationary distribution of the walk is in the marked state. Our classical walk is derived via quantum arguments. Our new quantum algorithm finds a unique marked element using H^(1/2) update operations and 1/eps^(1/2) checking operations, up to logarithmic factors. This is the first known quantum algorithm that is simultaneously quadratically faster in both parameters. We also show that the framework can simulate Grover's quantum search algorithm, amplitude amplification, Szegedy's quantum walks, and quantum interpolated walks.
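
Since the framework is stated to simulate Grover's search and amplitude amplification, a tiny state-vector simulation of Grover's algorithm (plain NumPy; the function and parameters below are ours and unrelated to the paper's operator U) may help fix intuition for the gap between deciding that a target exists and actually finding it.

import numpy as np

# State-vector simulation of Grover search for a single marked item among N.
# The oracle flips the sign of the marked amplitude; the diffusion operator
# reflects the state about the uniform superposition (2|s><s| - I).

def grover(N, marked, iterations=None):
    state = np.full(N, 1.0 / np.sqrt(N))
    if iterations is None:
        iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1.0                  # oracle reflection
        state = 2.0 * state.mean() - state     # diffusion step
    return np.argmax(state ** 2), np.max(state ** 2)

index, prob = grover(N=1024, marked=123)
print(index, round(prob, 3))                   # 123 with probability close to 1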

Cite as

Catalin Dohotaru and Peter Høyer. Controlled Quantum Amplification. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 18:1-18:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{dohotaru_et_al:LIPIcs.ICALP.2017.18,
  author =	{Dohotaru, Catalin and H{\o}yer, Peter},
  title =	{{Controlled Quantum Amplification}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{18:1--18:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.18},
  URN =		{urn:nbn:de:0030-drops-74893},
  doi =		{10.4230/LIPIcs.ICALP.2017.18},
  annote =	{Keywords: Quantum algorithms, quantum walks, random walks, quantum search}
}
Document
Approximating Language Edit Distance Beyond Fast Matrix Multiplication: Ultralinear Grammars Are Where Parsing Becomes Hard!

Authors: Rajesh Jayaram and Barna Saha


Abstract
In 1975, a breakthrough result of L. Valiant showed that parsing context free grammars can be reduced to Boolean matrix multiplication, resulting in a running time of O(n^omega) for parsing, where omega <= 2.373 is the exponent of fast matrix multiplication and n is the string length. Recently, Abboud, Backurs and V. Williams (FOCS 2015) demonstrated that this is likely optimal; moreover, a combinatorial o(n^3) algorithm is unlikely to exist for the general parsing problem. The language edit distance problem is a significant generalization of the parsing problem, which computes the minimum edit distance of a given string (using insertions, deletions, and substitutions) to any valid string in the language, and has received significant attention both in theory and practice since the seminal work of Aho and Peterson in 1972. Clearly, the lower bound for parsing rules out any algorithm running in o(n^omega) time that can return a nontrivial multiplicative approximation of the language edit distance problem. Furthermore, combinatorial algorithms with cubic running time or algorithms that use fast matrix multiplication are often not desirable in practice. To break this n^omega hardness barrier, in this paper we study additive approximation algorithms for language edit distance. We provide two explicit combinatorial algorithms to obtain a string with minimum edit distance with performance dependencies on either the number of non-linear productions, k^*, or the number of nested non-linear productions, k, used in the optimal derivation. Explicitly, we give an additive O(k^* gamma) approximation in time O(|G|(n^2 + (n/gamma)^3)) and an additive O(k gamma) approximation in time O(|G|(n^2 + n^3/gamma^2)), where |G| is the grammar size and n is the string length. In particular, we obtain tight approximations for an important subclass of context free grammars known as ultralinear grammars, for which k and k^* are naturally bounded. Interestingly, we show that the same conditional lower bound for parsing context free grammars holds for the class of ultralinear grammars as well, clearly marking the boundary where parsing becomes hard!
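
The O(n^3) combinatorial baseline underlying both parsing and language edit distance is CYK parsing of a grammar in Chomsky normal form; a compact version is below (the toy grammar and encoding are ours, for illustration only).

# CYK recognition for a context-free grammar in Chomsky normal form.
# unary:  dict terminal -> set of nonterminals A with a rule A -> terminal
# binary: list of rules (A, B, C) meaning A -> B C

def cyk(word, unary, binary, start="S"):
    n = len(word)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = set(unary.get(ch, set()))
    for span in range(2, n + 1):                       # the O(n^3) triple loop
        for i in range(n - span + 1):
            for split in range(1, span):
                left, right = table[i][split], table[i + split][span - split]
                for A, B, C in binary:
                    if B in left and C in right:
                        table[i][span].add(A)
    return start in table[0][n]

# Toy CNF grammar for the language { a^k b^k : k >= 1 }:
#   S -> A X | A B,   X -> S B,   A -> a,   B -> b
unary = {"a": {"A"}, "b": {"B"}}
binary = [("S", "A", "X"), ("S", "A", "B"), ("X", "S", "B")]
print(cyk("aabb", unary, binary))   # True
print(cyk("aab", unary, binary))    # False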

Cite as

Rajesh Jayaram and Barna Saha. Approximating Language Edit Distance Beyond Fast Matrix Multiplication: Ultralinear Grammars Are Where Parsing Becomes Hard!. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 19:1-19:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{jayaram_et_al:LIPIcs.ICALP.2017.19,
  author =	{Jayaram, Rajesh and Saha, Barna},
  title =	{{Approximating Language Edit Distance Beyond Fast Matrix Multiplication: Ultralinear Grammars Are Where Parsing Becomes Hard!}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{19:1--19:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.19},
  URN =		{urn:nbn:de:0030-drops-74548},
  doi =		{10.4230/LIPIcs.ICALP.2017.19},
  annote =	{Keywords: Approximation, Edit Distance, Dynamic Programming, Context Free Grammar, Hardness}
}
Document
Conditional Lower Bounds for All-Pairs Max-Flow

Authors: Robert Krauthgamer and Ohad Trabelsi


Abstract
We provide evidence that computing the maximum flow value between every pair of nodes in a directed graph on n nodes, m edges, and capacities in the range [1..n], which we call the All-Pairs Max-Flow problem, cannot be solved in time that is significantly faster (i.e., by a polynomial factor) than O(n^2 m). Since a single maximum st-flow in such graphs can be solved in time \tilde{O}(m\sqrt{n}) [Lee and Sidford, FOCS 2014], we conclude that the all-pairs version might require time equivalent to \tilde\Omega(n^{3/2}) computations of maximum st-flow, which strongly separates the directed case from the undirected one. Moreover, if maximum st-flow can be solved in time \tilde{O}(m), then a runtime equivalent to \tilde\Omega(n^2) computations is needed. This is in contrast to a conjecture of Lacki, Nussbaum, Sankowski, and Wulf-Nilsen [FOCS 2012] that All-Pairs Max-Flow in general graphs can be solved faster than the time of O(n^2) computations of maximum st-flow. Specifically, we show that in sparse graphs G=(V,E,w), if one can compute the maximum st-flow from every s in an input set of sources S\subseteq V to every t in an input set of sinks T\subseteq V in time O((|S||T|m)^{1-epsilon}), for some |S|, |T|, and a constant epsilon>0, then MAX-CNF-SAT (maximum satisfiability of conjunctive normal form formulas) with n' variables and m' clauses can be solved in time {m'}^{O(1)} 2^{(1-delta)n'} for a constant delta(epsilon)>0, a problem for which not even 2^{n'}/poly(n') algorithms are known. Such a runtime for MAX-CNF-SAT would in particular refute the Strong Exponential Time Hypothesis (SETH). Hence, we improve the lower bound of Abboud, Vassilevska-Williams, and Yu [STOC 2015], who showed that for every fixed epsilon>0 and |S|=|T|=O(\sqrt{n}), if the above problem can be solved in time O(n^{3/2-epsilon}), then some incomparable (and intuitively weaker) conjecture is false. Furthermore, a larger lower bound than ours would imply strictly super-linear time for the maximum st-flow problem, which would be an amazing breakthrough. In addition, we show that All-Pairs Max-Flow in uncapacitated networks with every edge-density m=m(n) cannot be computed in time significantly faster than O(mn), even for acyclic networks. The gap to the fastest known algorithm by Cheung, Lau, and Leung [FOCS 2011] is a factor of O(m^{omega-1}/n), and for acyclic networks it is O(n^{omega-1}), where omega is the matrix multiplication exponent.
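
The baseline against which these bounds are phrased is simply one single-pair max-flow computation per ordered pair of nodes; with networkx (assumed available here, using its maximum_flow_value routine) this is a few lines, and the conditional lower bounds say that for directed capacitated graphs one cannot do substantially better than such n^2 flow computations.

import itertools
import networkx as nx

# Naive All-Pairs Max-Flow: one max-flow computation per ordered pair (s, t).

def all_pairs_max_flow(G):
    values = {}
    for s, t in itertools.permutations(G.nodes, 2):
        values[(s, t)] = nx.maximum_flow_value(G, s, t, capacity="capacity")
    return values

G = nx.DiGraph()
G.add_edge("a", "b", capacity=3)
G.add_edge("b", "c", capacity=2)
G.add_edge("a", "c", capacity=1)
print(all_pairs_max_flow(G)[("a", "c")])   # 3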

Cite as

Robert Krauthgamer and Ohad Trabelsi. Conditional Lower Bounds for All-Pairs Max-Flow. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 20:1-20:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{krauthgamer_et_al:LIPIcs.ICALP.2017.20,
  author =	{Krauthgamer, Robert and Trabelsi, Ohad},
  title =	{{Conditional Lower Bounds for All-Pairs Max-Flow}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{20:1--20:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.20},
  URN =		{urn:nbn:de:0030-drops-74264},
  doi =		{10.4230/LIPIcs.ICALP.2017.20},
  annote =	{Keywords: Conditional lower bounds, Hardness in P, All-Pairs Maximum Flow, Strong Exponential Time Hypothesis}
}
Document
On the Fine-Grained Complexity of One-Dimensional Dynamic Programming

Authors: Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider


Abstract
In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of n data items together with weights for every pair of the items, the task is to determine a subsequence S minimizing the total weight of the pairs adjacent in S. A large number of natural problems can be formulated as LWS problems, yielding obvious O(n^2)-time solutions. In many interesting instances, the O(n^2)-many weights can be succinctly represented. Yet, except for near-linear-time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations have been known before. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem: Given an LWS instantiation we identify a highly parallel core problem that is subquadratically equivalent. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms, as the case may be. More specifically, we prove subquadratic equivalences between the following pairs of problems (an LWS instantiation and the corresponding core problem): a low-rank version of LWS and minimum inner product; finding the longest chain of nested boxes and vector domination; and a coin change problem, which is closely related to the knapsack problem, and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem. Furthermore, we revisit some of the LWS instantiations which are known to be solvable in near-linear time and explain their easiness in terms of the easiness of the corresponding core problems.
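For reference, a minimal sketch of the generic quadratic-time dynamic program that such LWS formulations yield, written for an arbitrary pairwise weight function w(i, j); the convention that the subsequence starts at the first item and ends at the last one, as well as the toy weight function, are illustrative assumptions rather than details from the paper.

# Generic O(n^2) dynamic program for the Least-Weight Subsequence (LWS) problem.
# f[j] = minimum total weight of a subsequence starting at item 0 and ending at item j.
def lws(n, w):
    INF = float("inf")
    f = [INF] * n
    f[0] = 0.0
    for j in range(1, n):
        f[j] = min(f[i] + w(i, j) for i in range(j))
    return f[n - 1]

# Toy instance: stepping from item i to item j costs (j - i) squared.
print(lws(6, lambda i, j: (j - i) ** 2))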

Cite as

Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the Fine-Grained Complexity of One-Dimensional Dynamic Programming. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kunnemann_et_al:LIPIcs.ICALP.2017.21,
  author =	{K\"{u}nnemann, Marvin and Paturi, Ramamohan and Schneider, Stefan},
  title =	{{On the Fine-Grained Complexity of One-Dimensional Dynamic Programming}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.21},
  URN =		{urn:nbn:de:0030-drops-74688},
  doi =		{10.4230/LIPIcs.ICALP.2017.21},
  annote =	{Keywords: Least-Weight Subsequence, SETH, Fine-Grained Complexity, Knapsack, Subquadratic Algorithms}
}
Document
On Problems Equivalent to (min,+)-Convolution

Authors: Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk


Abstract
In recent years, significant progress has been made in explaining the apparent hardness of improving over naive solutions for many fundamental polynomially solvable problems. This progress has come in the form of conditional lower bounds: reductions from problems assumed to be hard, such as 3SUM, All-Pairs Shortest Paths, SAT, and Orthogonal Vectors. In the (min,+)-convolution problem, the goal is to compute a sequence c, where c[k] = min_i a[i]+b[k-i], given sequences a and b. This can easily be done in O(n^2) time, but no O(n^{2-eps}) algorithm is known for any eps > 0. In this paper we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. As a first step, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. (min,+)-convolution has been used as a building block in algorithms for many problems, notably problems in stringology. It has also already appeared as an ad hoc hardness assumption. We investigate some of these connections and provide new reductions and other results.
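For concreteness, a minimal sketch of the naive quadratic algorithm for (min,+)-convolution as defined above; the example sequences are arbitrary.

# Naive O(n^2) computation of the (min,+)-convolution c[k] = min_i a[i] + b[k-i].
def min_plus_convolution(a, b):
    n = len(a)
    assert len(b) == n
    c = [float("inf")] * n
    for k in range(n):
        for i in range(k + 1):
            c[k] = min(c[k], a[i] + b[k - i])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # prints [0, 1, 3]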

Cite as

Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk. On Problems Equivalent to (min,+)-Convolution. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cygan_et_al:LIPIcs.ICALP.2017.22,
  author =	{Cygan, Marek and Mucha, Marcin and Wegrzycki, Karol and Wlodarczyk, Michal},
  title =	{{On Problems Equivalent to (min,+)-Convolution}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.22},
  URN =		{urn:nbn:de:0030-drops-74216},
  doi =		{10.4230/LIPIcs.ICALP.2017.22},
  annote =	{Keywords: fine-grained complexity, knapsack, conditional lower bounds, (min,+)-convolution, subquadratic equivalence}
}
Document
On Finding the Jaccard Center

Authors: Marc Bury and Chris Schwiegelshohn


Abstract
We initiate the study of finding the Jaccard center of a given collection N of sets. For two sets X,Y, the Jaccard index is defined as |X\cap Y|/|X\cup Y| and the corresponding distance is 1-|X\cap Y|/|X\cup Y|. The Jaccard center is a set C minimizing the maximum distance to any set of N. We show that the problem is NP-hard to solve exactly, and that it admits a PTAS while no FPTAS can exist unless P = NP. Furthermore, we show that the problem is fixed-parameter tractable in the maximum Hamming norm between the Jaccard center and any input set. Our algorithms are based on a compression technique similar in spirit to coresets for the Euclidean 1-center problem. In addition, we also show that, contrary to the previously studied median problem by Chierichetti et al. (SODA 2010), the continuous version of the Jaccard center problem admits a simple polynomial time algorithm.
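To make the objective concrete, a minimal sketch of the Jaccard distance and of the max-distance cost that the Jaccard center minimizes; the collection below is a toy example, and evaluating a single candidate this way is of course not the paper's PTAS.

# Jaccard distance between finite sets, and the objective minimized by the Jaccard center.
def jaccard_distance(x, y):
    if not x and not y:
        return 0.0
    return 1.0 - len(x & y) / len(x | y)

def center_cost(c, collection):
    # Maximum Jaccard distance from a candidate center c to any set in the collection.
    return max(jaccard_distance(c, s) for s in collection)

N = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
print(center_cost({2, 3, 4}, N))  # 0.5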

Cite as

Marc Bury and Chris Schwiegelshohn. On Finding the Jaccard Center. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bury_et_al:LIPIcs.ICALP.2017.23,
  author =	{Bury, Marc and Schwiegelshohn, Chris},
  title =	{{On Finding the Jaccard Center}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.23},
  URN =		{urn:nbn:de:0030-drops-73769},
  doi =		{10.4230/LIPIcs.ICALP.2017.23},
  annote =	{Keywords: Clustering, 1-Center, Jaccard}
}
Document
The Polytope-Collision Problem

Authors: Shaull Almagor, Joël Ouaknine, and James Worrell


Abstract
The Orbit Problem consists of determining, given a matrix A in R^{d x d} and vectors x,y in R^d, whether there exists n in N such that A^n x = y. This problem was shown to be decidable in a seminal work of Kannan and Lipton in the 1980s. Subsequently, Kannan and Lipton noted that the Orbit Problem becomes considerably harder when the target y is replaced with a subspace of R^d. Recently, it was shown that the problem is decidable for vector-space targets of dimension at most three, followed by another development showing that the problem is in PSPACE for polytope targets of dimension at most three. In this work, we take a dual look at the problem, and consider the case where the initial vector x is replaced with a polytope P_1, and the target is a polytope P_2. Then, the question is whether there exists n in N such that A^n P_1 intersects P_2. We show that the problem can be decided in PSPACE for dimension at most three. As in previous works, decidability in the case of higher dimensions is left open, as the problem is known to be at least as hard as long-standing number-theoretic open problems. Our proof begins by formulating the problem as the satisfiability of a parametrized family of sentences in the existential first-order theory of real-closed fields. Then, after removing quantifiers, we are left with instances of simultaneous positivity of sums of exponentials. Using techniques from transcendental number theory, and separation bounds on algebraic numbers, we are able to solve such instances in PSPACE.
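To illustrate the underlying question (and only that: checking finitely many n is not a decision procedure, which is the whole point of the Kannan-Lipton line of work), a minimal sketch that tests whether A^n x = y for n up to a fixed bound, using exact rational arithmetic; the matrix and vectors are an illustrative example.

from fractions import Fraction

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

# Bounded check of the Orbit Problem: is A^n x = y for some n <= max_n?
def orbit_hits_target(A, x, y, max_n):
    v = list(x)
    for n in range(max_n + 1):
        if v == list(y):
            return n
        v = matvec(A, v)
    return None

A = [[Fraction(0), Fraction(-1)], [Fraction(1), Fraction(0)]]   # rotation by 90 degrees
x, y = [Fraction(1), Fraction(0)], [Fraction(-1), Fraction(0)]
print(orbit_hits_target(A, x, y, max_n=10))  # 2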

Cite as

Shaull Almagor, Joël Ouaknine, and James Worrell. The Polytope-Collision Problem. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 24:1-24:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{almagor_et_al:LIPIcs.ICALP.2017.24,
  author =	{Almagor, Shaull and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{The Polytope-Collision Problem}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{24:1--24:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.24},
  URN =		{urn:nbn:de:0030-drops-74521},
  doi =		{10.4230/LIPIcs.ICALP.2017.24},
  annote =	{Keywords: linear dynamical systems, orbit problem, algebraic algorithms}
}
Document
Dynamic Time Warping and Geometric Edit Distance: Breaking the Quadratic Barrier

Authors: Omer Gold and Micha Sharir


Abstract
Dynamic Time Warping (DTW) and Geometric Edit Distance (GED) are basic similarity measures between curves or general temporal sequences (e.g., time series) that are represented as sequences of points in some metric space (X, dist). The DTW and GED measures are used extensively in various fields of computer science and computational biology; consequently, the tasks of computing these measures are among the core problems in P. Despite extensive efforts to find more efficient algorithms, the best-known algorithms for computing the DTW or GED between two sequences of points in X = R^d are long-standing dynamic programming algorithms that require quadratic runtime, even for the one-dimensional case d = 1, which is perhaps the most commonly used in practice. In this paper, we break the nearly 50-year-old quadratic time bound for computing DTW or GED between two sequences of n points in R, by presenting deterministic algorithms that run in O(n^2 log log log n / log log n) time. Our algorithms can be extended to work also in higher-dimensional spaces R^d, for any constant d, when the underlying distance metric dist is polyhedral (e.g., L_1, L_infty).
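For reference, a minimal sketch of the textbook quadratic-time dynamic program for DTW on one-dimensional sequences, i.e., the long-standing baseline the paper improves on; the local distance function and example inputs are illustrative.

# Textbook O(n*m) dynamic program for Dynamic Time Warping (DTW).
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # advance in a only
                D[i][j - 1],      # advance in b only
                D[i - 1][j - 1],  # advance in both
            )
    return D[n][m]

print(dtw([1, 2, 3, 4], [1, 3, 4]))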

Cite as

Omer Gold and Micha Sharir. Dynamic Time Warping and Geometric Edit Distance: Breaking the Quadratic Barrier. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gold_et_al:LIPIcs.ICALP.2017.25,
  author =	{Gold, Omer and Sharir, Micha},
  title =	{{Dynamic Time Warping and Geometric Edit Distance: Breaking the Quadratic Barrier}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.25},
  URN =		{urn:nbn:de:0030-drops-73820},
  doi =		{10.4230/LIPIcs.ICALP.2017.25},
  annote =	{Keywords: Dynamic Time Warping, Geometric Edit Distance, Time Series, Points Matching, Geometric Matching}
}
Document
Efficient Construction of Probabilistic Tree Embeddings

Authors: Guy E. Blelloch, Yan Gu, and Yihan Sun


Abstract
In this paper we describe an algorithm that embeds a graph metric (V,d_G) on an undirected weighted graph G=(V,E) into a distribution of tree metrics (T,D_T) such that for every pair u,v in V, d_G(u,v) <= d_T(u,v) and E_T[d_T(u,v)] <= O(log n) d_G(u,v). Such embeddings have proved highly useful in designing fast approximation algorithms, as many hard problems on graphs are easy to solve on tree instances. For a graph with n vertices and m edges, our algorithm runs in O(m log n) time with high probability, which improves the previous upper bound of O(m log^3 n) shown by Mendel et al. in 2009. The key component of our algorithm is a new approximate single-source shortest-path algorithm, which implements the priority queue with a new data structure, the bucket-tree structure. The algorithm has three properties: it requires only linear time in the number of edges of the input graph; the computed distances have the distance-preserving property; and when computing the shortest paths to the k nearest vertices from the source, it only needs to visit these vertices and their edge lists. These properties are essential to guarantee the correctness and the stated work bound. Using this shortest-path algorithm, we show how to generate an intermediate structure, the approximate dominance sequences of the input graph, in O(m log n) time, and further propose a simple yet efficient algorithm to convert this sequence to a tree embedding in O(n log n) time, both with high probability. Combining the three subroutines gives the stated work bound of the algorithm. We also show a new application of probabilistic tree embeddings: they can be used to accelerate the construction of a series of approximate distance oracles.

Cite as

Guy E. Blelloch, Yan Gu, and Yihan Sun. Efficient Construction of Probabilistic Tree Embeddings. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{blelloch_et_al:LIPIcs.ICALP.2017.26,
  author =	{Blelloch, Guy E. and Gu, Yan and Sun, Yihan},
  title =	{{Efficient Construction of Probabilistic Tree Embeddings}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.26},
  URN =		{urn:nbn:de:0030-drops-75034},
  doi =		{10.4230/LIPIcs.ICALP.2017.26},
  annote =	{Keywords: Graph Algorithm, Metric Embeddings, Probabilistic Tree Embeddings, Single-source Shortest-paths}
}
Document
Approximating Partition Functions of Bounded-Degree Boolean Counting Constraint Satisfaction Problems

Authors: Andreas Galanis, Leslie Ann Goldberg, and Kuan Yang


Abstract
We study the complexity of approximate counting Constraint Satisfaction Problems (#CSPs) in a bounded-degree setting. Specifically, given a Boolean constraint language Gamma and a degree bound Delta, we study the complexity of #CSP_Delta(Gamma), which is the problem of counting satisfying assignments to CSP instances with constraints from Gamma and whose variables can appear at most Delta times. Our main result shows that: (i) if every function in Gamma is affine, then #CSP_Delta(Gamma) is in FP for all Delta; (ii) otherwise, if every function in Gamma is in a class called IM_2, then for all sufficiently large Delta, #CSP_Delta(Gamma) is equivalent under approximation-preserving (AP) reductions to the counting problem #BIS (the problem of counting independent sets in bipartite graphs); (iii) otherwise, for all sufficiently large Delta, it is NP-hard to approximate the number of satisfying assignments of an instance of #CSP_Delta(Gamma), even within an exponential factor. Our result extends previous results, which apply only in the so-called "conservative" case.

Cite as

Andreas Galanis, Leslie Ann Goldberg, and Kuan Yang. Approximating Partition Functions of Bounded-Degree Boolean Counting Constraint Satisfaction Problems. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{galanis_et_al:LIPIcs.ICALP.2017.27,
  author =	{Galanis, Andreas and Goldberg, Leslie Ann and Yang, Kuan},
  title =	{{Approximating Partition Functions of Bounded-Degree Boolean Counting Constraint Satisfaction Problems}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.27},
  URN =		{urn:nbn:de:0030-drops-74099},
  doi =		{10.4230/LIPIcs.ICALP.2017.27},
  annote =	{Keywords: Constraint Satisfaction, Approximate Counting}
}
Document
Inapproximability of the Independent Set Polynomial Below the Shearer Threshold

Authors: Andreas Galanis, Leslie Ann Goldberg, and Daniel Stefankovic


Abstract
We study the problem of approximately evaluating the independent set polynomial of bounded-degree graphs at a point lambda or, equivalently, the problem of approximating the partition function of the hard-core model with activity lambda on graphs G of max degree D. For lambda>0, breakthrough results of Weitz and Sly established a computational transition from easy to hard at lambda_c(D)=(D-1)^(D-1)/(D-2)^D, which coincides with the tree uniqueness phase transition from statistical physics. For lambda<0, the evaluation of the independent set polynomial is connected to the conditions of the Lovasz Local Lemma. Shearer identified the threshold lambda*(D)=(D-1)^(D-1)/D^D as the maximum value p such that every family of events with failure probability at most p and whose dependency graph has max degree D has nonempty intersection. Very recently, Patel and Regts, and Harvey et al. have independently designed FPTASes for approximating the partition function whenever |lambda|<lambda*(D). Our main result establishes for the first time a computational transition at the Shearer threshold. We show that for all D>=3, for all lambda<-lambda*(D), it is NP-hard to approximate the partition function on graphs of maximum degree D, even within an exponential factor. Thus, our result, combined with the FPTASes for lambda>-lambda*(D), establishes a phase transition for negative activities. In fact, we now have the following picture for the problem of approximating the partition function with activity lambda on graphs G of max degree D. 1. For -lambda*(D)<lambda<lambda_c(D), the problem admits an FPTAS. 2. For lambda<-lambda*(D) or lambda>lambda_c(D), the problem is NP-hard. Rather than the tree uniqueness threshold of the positive case, the phase transition for negative activities corresponds to the existence of zeros for the partition function of the tree below -lambda*(D).
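To make the quantity being approximated concrete, a minimal brute-force sketch that evaluates the independent set polynomial (the hard-core partition function) Z_G(lambda), i.e., the sum over independent sets I of lambda^{|I|}, on a small graph; it runs in exponential time and is for illustration only.

from itertools import combinations

# Brute-force evaluation of the independent set polynomial of a small graph.
def independence_polynomial(vertices, edges, lam):
    edge_set = {frozenset(e) for e in edges}
    total = 0.0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                total += lam ** r
    return total

# 4-cycle at lambda = 1: the empty set, 4 singletons, and 2 opposite pairs give 7.
print(independence_polynomial([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))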

Cite as

Andreas Galanis, Leslie Ann Goldberg, and Daniel Stefankovic. Inapproximability of the Independent Set Polynomial Below the Shearer Threshold. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 28:1-28:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{galanis_et_al:LIPIcs.ICALP.2017.28,
  author =	{Galanis, Andreas and Goldberg, Leslie Ann and Stefankovic, Daniel},
  title =	{{Inapproximability of the Independent Set Polynomial Below the Shearer Threshold}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{28:1--28:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.28},
  URN =		{urn:nbn:de:0030-drops-73962},
  doi =		{10.4230/LIPIcs.ICALP.2017.28},
  annote =	{Keywords: approximate counting, independent set polynomial, Shearer threshold}
}
Document
The Complexity of Holant Problems over Boolean Domain with Non-Negative Weights

Authors: Jiabao Lin and Hanpin Wang


Abstract
The Holant problem is a general framework to study the computational complexity of counting problems. We prove a complexity dichotomy theorem for Holant problems over the Boolean domain with non-negative weights. It is the first complete Holant dichotomy where the constraint functions are not necessarily symmetric. Holant problems are in fact read-twice #CSPs. Intuitively, some #CSPs that are #P-hard become tractable when restricted to read-twice instances. To capture them, we introduce the Block-rank-one condition. It turns out that the condition leads to a clear separation. If a function set F satisfies the condition, then F is of affine type or product type. Otherwise, (a) Holant(F) is #P-hard; or (b) every function in F is a tensor product of functions of arity at most 2; or (c) F is transformable to a product type by some real orthogonal matrix. Holographic transformations play an important role in both the hardness proof and the characterization of tractability.

Cite as

Jiabao Lin and Hanpin Wang. The Complexity of Holant Problems over Boolean Domain with Non-Negative Weights. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 29:1-29:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ICALP.2017.29,
  author =	{Lin, Jiabao and Wang, Hanpin},
  title =	{{The Complexity of Holant Problems over Boolean Domain with Non-Negative Weights}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{29:1--29:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.29},
  URN =		{urn:nbn:de:0030-drops-73846},
  doi =		{10.4230/LIPIcs.ICALP.2017.29},
  annote =	{Keywords: counting complexity, dichotomy, Holant, #CSP}
}
Document
Polynomial-Time Rademacher Theorem, Porosity and Randomness

Authors: Alex Galicki


Abstract
The main result of this paper is a polynomial time version of Rademacher's theorem. We show that if z is p-random, then every polynomial time computable Lipschitz function f:R^n->R is differentiable at z. This is a generalization of the main result of [Nies, STACS2014]. To prove our main result, we introduce and study a new notion, p-porosity, and prove several results of independent interest. In particular, we characterize p-porosity in terms of polynomial time computable martingales and we show that p-randomness in R^n is invariant under polynomial time computable linear isometries.

Cite as

Alex Galicki. Polynomial-Time Rademacher Theorem, Porosity and Randomness. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 30:1-30:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{galicki:LIPIcs.ICALP.2017.30,
  author =	{Galicki, Alex},
  title =	{{Polynomial-Time Rademacher Theorem, Porosity and Randomness}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{30:1--30:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.30},
  URN =		{urn:nbn:de:0030-drops-74033},
  doi =		{10.4230/LIPIcs.ICALP.2017.30},
  annote =	{Keywords: Rademacher, porosity, p-randomness, differentiability}
}
Document
A QPTAS for the General Scheduling Problem with Identical Release Dates

Authors: Antonios Antoniadis, Ruben Hoeksma, Julie Meißner, José Verschae, and Andreas Wiese


Abstract
The General Scheduling Problem (GSP) generalizes scheduling problems with sum of cost objectives such as weighted flow time and weighted tardiness. Given a set of jobs with processing times, release dates, and job dependent cost functions, we seek to find a minimum cost preemptive schedule on a single machine. The best known algorithm for this problem and also for weighted flow time/tardiness is an O(log log P)-approximation (where P denotes the range of the job processing times), while the best lower bound shows only strong NP-hardness. When release dates are identical there is also a gap: the problem remains strongly NP-hard and the best known approximation algorithm has a ratio of e+\epsilon (running in quasi-polynomial time). We reduce the latter gap by giving a QPTAS if the numbers in the input are quasi-polynomially bounded, ruling out the existence of an APX-hardness proof unless NP\subseteq DTIME(2^polylog(n)). Our techniques are based on the QPTAS known for the UFP-Cover problem, a particular case of GSP where we must pick a subset of intervals (jobs) on the real line with associated heights and costs. If an interval is selected, its height will help cover a given demand on any point contained within the interval. We reduce our problem to a generalization of UFP-Cover and use a sophisticated divide-and-conquer procedure with interdependent non-symmetric subproblems. We also present a pseudo-polynomial time approximation scheme for two variants of UFP-Cover. For the case of agreeable intervals we give an algorithm based on a new dynamic programming approach which might be useful for other problems of this type. The second one is a resource augmentation setting where we are allowed to slightly enlarge each interval.

Cite as

Antonios Antoniadis, Ruben Hoeksma, Julie Meißner, José Verschae, and Andreas Wiese. A QPTAS for the General Scheduling Problem with Identical Release Dates. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{antoniadis_et_al:LIPIcs.ICALP.2017.31,
  author =	{Antoniadis, Antonios and Hoeksma, Ruben and Mei{\ss}ner, Julie and Verschae, Jos\'{e} and Wiese, Andreas},
  title =	{{A QPTAS for the General Scheduling Problem with Identical Release Dates}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.31},
  URN =		{urn:nbn:de:0030-drops-74575},
  doi =		{10.4230/LIPIcs.ICALP.2017.31},
  annote =	{Keywords: Generalized Scheduling, QPTAS, Unsplittable Flows}
}
Document
Improved Algorithms for MST and Metric-TSP Interdiction

Authors: André Linhares and Chaitanya Swamy


Abstract
We consider the MST-interdiction problem: given a multigraph G = (V, E), edge weights {w_e >= 0}_{e in E}, interdiction costs {c_e >= 0}_{e in E}, and an interdiction budget B >= 0, the goal is to remove a subset R of edges of total interdiction cost at most B so as to maximize the w-weight of an MST of G-R:=(V,E-R). Our main result is a 4-approximation algorithm for this problem. This improves upon the previous-best 14-approximation [Zenklusen, FOCS 2015]. Notably, our analysis is also significantly simpler and cleaner than the one in [Zenklusen, FOCS 2015]. Whereas Zenklusen uses a greedy algorithm with an involved analysis to extract a good interdiction set from an over-budget set, we utilize a generalization of knapsack called the tree knapsack problem that nicely captures the key combinatorial aspects of this "extraction problem." We prove a simple, yet strong, LP-relative approximation bound for tree knapsack, which leads to our improved guarantees for MST interdiction. Our algorithm and analysis are nearly tight, as we show that one cannot achieve an approximation ratio better than 3 relative to the upper bound used in our analysis (and the one in [Zenklusen, FOCS 2015]). Our guarantee for MST-interdiction yields an 8-approximation for metric-TSP interdiction (improving over the 28-approximation in [Zenklusen, FOCS 2015]). We also show that maximum-spanning-tree interdiction is at least as hard to approximate as the minimization version of densest-k-subgraph.

Cite as

André Linhares and Chaitanya Swamy. Improved Algorithms for MST and Metric-TSP Interdiction. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{linhares_et_al:LIPIcs.ICALP.2017.32,
  author =	{Linhares, Andr\'{e} and Swamy, Chaitanya},
  title =	{{Improved Algorithms for MST and Metric-TSP Interdiction}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.32},
  URN =		{urn:nbn:de:0030-drops-74552},
  doi =		{10.4230/LIPIcs.ICALP.2017.32},
  annote =	{Keywords: Approximation algorithms, interdiction problems, LP-rounding algorithms, iterative rounding, tree-knapsack problem, supermodular functions}
}
Document
Reordering Buffer Management with a Logarithmic Guarantee in General Metric Spaces

Authors: Matthias Kohler and Harald Räcke


Abstract
In the reordering buffer management problem a sequence of requests arrives online in a finite metric space and has to be processed by a single server. This server is equipped with a request buffer of size k and can decide at each point in time which request from its buffer to serve next. Servicing a request is done simply by moving the server to the location of the request. The goal is to process all requests while minimizing the total distance that the server travels inside the metric space. In this paper we present a deterministic algorithm for the reordering buffer management problem that achieves a competitive ratio of O(log Delta + min{log n, log k}) in a finite metric space of n points and aspect ratio Delta. This is the first algorithm that works for general metric spaces and has just a logarithmic dependency on the relevant parameters. The guarantee is memory-robust, i.e., the competitive ratio decreases only slightly when the buffer size of the optimum is increased to h=(1+\epsilon)k. For memory-robust guarantees our bounds are close to optimal.
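To make the model concrete, a minimal sketch of the natural "serve the nearest buffered request" heuristic; this is only a toy baseline for illustration, not the paper's algorithm, and in general it has a much worse competitive ratio.

# Toy "serve the nearest buffered request" heuristic for reordering buffer management.
def greedy_reordering_buffer(requests, k, dist, start):
    buffer, pos, total = [], start, 0.0

    def serve_nearest():
        nonlocal pos, total
        nxt = min(buffer, key=lambda q: dist(pos, q))
        buffer.remove(nxt)
        total += dist(pos, nxt)
        pos = nxt

    for r in requests:
        buffer.append(r)
        if len(buffer) > k:   # buffer is full, so some request must be served now
            serve_nearest()
    while buffer:             # flush the remaining buffered requests
        serve_nearest()
    return total

# Requests are points on a line; the metric is the absolute difference.
print(greedy_reordering_buffer([5, 1, 6, 2, 7], k=2, dist=lambda x, y: abs(x - y), start=0))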

Cite as

Matthias Kohler and Harald Räcke. Reordering Buffer Management with a Logarithmic Guarantee in General Metric Spaces. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 33:1-33:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kohler_et_al:LIPIcs.ICALP.2017.33,
  author =	{Kohler, Matthias and R\"{a}cke, Harald},
  title =	{{Reordering Buffer Management with a Logarithmic Guarantee in General Metric Spaces}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{33:1--33:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.33},
  URN =		{urn:nbn:de:0030-drops-73882},
  doi =		{10.4230/LIPIcs.ICALP.2017.33},
  annote =	{Keywords: Online algorithms, reordering buffer, metric spaces, scheduling}
}
Document
Correlated Rounding of Multiple Uniform Matroids and Multi-Label Classification

Authors: Shahar Chen, Dotan Di Castro, Zohar Karnin, Liane Lewin-Eytan, Joseph (Seffi) Naor, and Roy Schwartz


Abstract
We introduce correlated randomized dependent rounding where, given multiple points y^1,...,y^n in some polytope P\subseteq [0,1]^k, the goal is to simultaneously round each y^i to some integral z^i in P while preserving both marginal values and expected distances between the points. In addition to being a natural question in its own right, the correlated randomized dependent rounding problem is motivated by multi-label classification applications that arise in machine learning, e.g., classification of web pages, semantic tagging of images, and functional genomics. The results of this work can be summarized as follows: (1) we present an algorithm for solving the correlated randomized dependent rounding problem in uniform matroids while losing only a factor of O(log k) in the distances (k is the size of the ground set); (2) we introduce a novel multi-label classification problem, the metric multi-labeling problem, which captures the above applications. We present a (true) O(log k)-approximation for the general case of metric multi-labeling and a tight 2-approximation for the special case where there is no limit on the number of labels that can be assigned to an object.

Cite as

Shahar Chen, Dotan Di Castro, Zohar Karnin, Liane Lewin-Eytan, Joseph (Seffi) Naor, and Roy Schwartz. Correlated Rounding of Multiple Uniform Matroids and Multi-Label Classification. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2017.34,
  author =	{Chen, Shahar and Di Castro, Dotan and Karnin, Zohar and Lewin-Eytan, Liane and Naor, Joseph (Seffi) and Schwartz, Roy},
  title =	{{Correlated Rounding of Multiple Uniform Matroids and Multi-Label Classification}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.34},
  URN =		{urn:nbn:de:0030-drops-74612},
  doi =		{10.4230/LIPIcs.ICALP.2017.34},
  annote =	{Keywords: approximation algorithms, randomized rounding, dependent rounding, metric labeling, classification}
}
Document
When the Optimum is also Blind: a New Perspective on Universal Optimization

Authors: Marek Adamczyk, Fabrizio Grandoni, Stefano Leonardi, and Michal Wlodarczyk


Abstract
Consider the following variant of the set cover problem. We are given a universe U={1,...,n} and a collection of subsets C = {S_1,...,S_m}, where each S_i is a subset of U. For every element u from U we need to find a set phi(u) from the collection C such that u belongs to phi(u). Once we construct and fix the mapping phi from U to C, a subset X of the universe U is revealed, and we need to cover all elements of X with exactly phi(X), that is, {phi(u)}_{u in X}. The goal is to find a mapping such that the cover phi(X) is as cheap as possible. This is an example of a universal problem, where the solution has to be created before the actual instance to deal with is revealed. Such problems appear naturally in settings where we need to optimize under uncertainty and it may actually be too expensive to begin computing a good solution only once the input starts being revealed. A rich body of work was devoted to investigating such problems under the regime of worst-case analysis, i.e., measuring how good the solution is by the worst-case ratio: universal solution for a given instance vs. optimum solution for the same instance. As the universal solution is significantly more constrained, such a worst-case ratio is typically quite large. One way to make the problem less vulnerable to such extreme worst cases is to assume that the instance for which we will have to create a solution is drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. Here the bounds obtained are indeed smaller than in the worst-case-ratio model. But even in this case we still compare apples to oranges, as no universal solution is able to construct the optimum solution for every possible instance. What if we compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do? We show that under this viewpoint, but still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For example, for the set cover problem we obtain an H_n-approximation, which matches the approximation ratio from the classic deterministic setup. Moreover, we show this for all possible probability distributions over U that have a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. Our result is based on rounding a proper configuration IP that captures the optimal universal solution, and using tools from submodular optimization. The same basic approach leads to improved approximation algorithms for other related problems, including Vertex Cover, Edge Cover, Directed Steiner Tree, Multicut, and Facility Location.
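To illustrate the model only (not the configuration-IP-based algorithm of the paper), here is a minimal sketch in which the universal mapping phi is obtained from a classic greedy set cover of the whole universe, and the cost of a revealed subset X is the total cost of the distinct sets {phi(u) : u in X}; the instance, names, and cost values are a toy example.

# Toy universal set cover: fix phi up front via greedy set cover, then price any revealed X.
def greedy_universal_mapping(universe, sets, cost):
    phi, uncovered = {}, set(universe)
    while uncovered:
        # Classic greedy: pick the set with the best cost per newly covered element.
        best = min((name for name in sets if sets[name] & uncovered),
                   key=lambda name: cost[name] / len(sets[name] & uncovered))
        for u in sets[best] & uncovered:
            phi[u] = best
        uncovered -= sets[best]
    return phi

def universal_cost(phi, cost, X):
    return sum(cost[name] for name in {phi[u] for u in X})

sets = {"S1": frozenset({1, 2, 3}), "S2": frozenset({3, 4}), "S3": frozenset({4, 5})}
cost = {"S1": 2.0, "S2": 1.0, "S3": 1.5}
phi = greedy_universal_mapping({1, 2, 3, 4, 5}, sets, cost)
print(phi, universal_cost(phi, cost, {2, 4}))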

Cite as

Marek Adamczyk, Fabrizio Grandoni, Stefano Leonardi, and Michal Wlodarczyk. When the Optimum is also Blind: a New Perspective on Universal Optimization. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 35:1-35:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{adamczyk_et_al:LIPIcs.ICALP.2017.35,
  author =	{Adamczyk, Marek and Grandoni, Fabrizio and Leonardi, Stefano and Wlodarczyk, Michal},
  title =	{{When the Optimum is also Blind: a New Perspective on Universal Optimization}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{35:1--35:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.35},
  URN =		{urn:nbn:de:0030-drops-74436},
  doi =		{10.4230/LIPIcs.ICALP.2017.35},
  annote =	{Keywords: approximation algorithms, stochastic optimization, submodularity}
}
Document
Reusable Garbled Deterministic Finite Automata from Learning With Errors

Authors: Shweta Agrawal and Ishaan Preet Singh


Abstract
We provide a single-key functional encryption scheme for Deterministic Finite Automata (DFA). The secret key of our scheme is associated with a DFA M, and a ciphertext is associated with an input x of arbitrary length. The decryptor learns M(x) and nothing else. The ciphertext and key sizes achieved by our scheme are optimal – the size of the public parameters is independent of the size of the machine or data being encrypted, the secret key size depends only on the machine size and the ciphertext size depends only on the input size. Our scheme achieves full functional encryption in the “private index model”, namely the entire input x is hidden (as against x being public and a single bit b being hidden). Our single key FE scheme can be compiled with symmetric key encryption to yield reusable garbled DFAs for arbitrary size inputs, that achieves machine and input privacy along with reusability under a strong simulation based definition of security. We generalize this to a functional encryption scheme for Turing machines TMFE which has short public parameters that are independent of the size of the machine or the data being encrypted, short function keys, and input-specific decryption time. However, the ciphertext of our construction is large and depends on the worst case running time of the Turing machine (but not its description size). These provide the first FE schemes that support unbounded length inputs, allow succinct public and function keys and rely on LWE. Our construction relies on a new and arguably natural notion of decomposable functional encryption which may be of independent interest.

Cite as

Shweta Agrawal and Ishaan Preet Singh. Reusable Garbled Deterministic Finite Automata from Learning With Errors. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 36:1-36:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{agrawal_et_al:LIPIcs.ICALP.2017.36,
  author =	{Agrawal, Shweta and Singh, Ishaan Preet},
  title =	{{Reusable Garbled Deterministic Finite Automata from Learning With Errors}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{36:1--36:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.36},
  URN =		{urn:nbn:de:0030-drops-75014},
  doi =		{10.4230/LIPIcs.ICALP.2017.36},
  annote =	{Keywords: Functional Encryption, Learning With Errors, Deterministic Finite Automata, Garbled DFA}
}
Document
Round-Preserving Parallel Composition of Probabilistic-Termination Cryptographic Protocols

Authors: Ran Cohen, Sandro Coretti, Juan Garay, and Vassilis Zikas


Abstract
An important benchmark for multi-party computation protocols (MPC) is their round complexity. For several important MPC tasks, (tight) lower bounds on the round complexity are known. However, for some of these tasks, such as broadcast, the lower bounds can be circumvented when the termination round of every party is not a priori known, and simultaneous termination is not guaranteed. Protocols with this property are called probabilistic-termination (PT) protocols. Running PT protocols in parallel affects the round complexity of the resulting protocol in somewhat unexpected ways. For instance, an execution of m protocols with constant expected round complexity might take O(log m) rounds to complete. In a seminal work, Ben-Or and El-Yaniv (Distributed Computing '03) developed a technique for parallel execution of arbitrarily many broadcast protocols, while preserving expected round complexity. More recently, Cohen et al. (CRYPTO '16) devised a framework for universal composition of PT protocols, and provided the first composable parallel-broadcast protocol with a simulation-based proof. These constructions crucially rely on the fact that broadcast is "privacy free," and do not generalize to arbitrary protocols in a straightforward way. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel, without increasing the round complexity. In this paper we tackle this question and provide both feasibility and infeasibility results. We construct a round-preserving protocol compiler, secure against a dishonest minority of actively corrupted parties, that compiles arbitrary protocols into a protocol realizing their parallel composition, while having black-box access to the underlying protocols. Furthermore, we prove that the same cannot be achieved, using known techniques, given only black-box access to the functionalities realized by the protocols, unless merely security against semi-honest corruptions is required, for which case we provide a protocol.

Cite as

Ran Cohen, Sandro Coretti, Juan Garay, and Vassilis Zikas. Round-Preserving Parallel Composition of Probabilistic-Termination Cryptographic Protocols. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.ICALP.2017.37,
  author =	{Cohen, Ran and Coretti, Sandro and Garay, Juan and Zikas, Vassilis},
  title =	{{Round-Preserving Parallel Composition of Probabilistic-Termination Cryptographic Protocols}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.37},
  URN =		{urn:nbn:de:0030-drops-74124},
  doi =		{10.4230/LIPIcs.ICALP.2017.37},
  annote =	{Keywords: Cryptographic protocols, secure multi-party computation, broadcast.}
}
Document
Cryptanalysis of Indistinguishability Obfuscations of Circuits over GGH13

Authors: Daniel Apon, Nico Döttling, Sanjam Garg, and Pratyay Mukherjee


Abstract
Annihilation attacks, introduced in the work of Miles, Sahai, and Zhandry (CRYPTO 2016), are a class of polynomial-time attacks against several candidate indistinguishability obfuscation (IO) schemes, built from Garg, Gentry, and Halevi (EUROCRYPT 2013) multilinear maps. In this work, we provide a general efficiently-testable property for two single-input branching programs, called partial inequivalence, which we show is sufficient for our variant of annihilation attacks on several obfuscation constructions based on GGH13 multilinear maps. We give examples of pairs of natural NC1 circuits which, when processed via Barrington's Theorem, yield pairs of branching programs that are partially inequivalent. As a consequence, we are also able to show that examples of "bootstrapping circuits" (albeit somewhat artificially crafted ones), which are used to obtain obfuscations for all circuits given an obfuscator for NC1 circuits, in certain settings also yield partially inequivalent branching programs. Prior to our work, no attacks on any obfuscation constructions for these settings were known.

Cite as

Daniel Apon, Nico Döttling, Sanjam Garg, and Pratyay Mukherjee. Cryptanalysis of Indistinguishability Obfuscations of Circuits over GGH13. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{apon_et_al:LIPIcs.ICALP.2017.38,
  author =	{Apon, Daniel and D\"{o}ttling, Nico and Garg, Sanjam and Mukherjee, Pratyay},
  title =	{{Cryptanalysis of Indistinguishability Obfuscations of Circuits over GGH13}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.38},
  URN =		{urn:nbn:de:0030-drops-73814},
  doi =		{10.4230/LIPIcs.ICALP.2017.38},
  annote =	{Keywords: Obfuscation, Multilinear Maps, Cryptanalysis.}
}
Document
Non-Uniform Attacks Against Pseudoentropy

Authors: Krzysztof Pietrzak and Maciej Skorski


Abstract
De, Trevisan and Tulsiani [CRYPTO 2010] show that every distribution over n-bit strings which has constant statistical distance to uniform (e.g., the output of a pseudorandom generator mapping n-1 to n bit strings), can be distinguished from the uniform distribution with advantage epsilon by a circuit of size O( 2^n epsilon^2). We generalize this result, showing that a distribution which has less than k bits of min-entropy, can be distinguished from any distribution with k bits of delta-smooth min-entropy with advantage epsilon by a circuit of size O(2^k epsilon^2/delta^2). As a special case, this implies that any distribution with support at most 2^k (e.g., the output of a pseudoentropy generator mapping k to n bit strings) can be distinguished from any given distribution with min-entropy k+1 with advantage epsilon by a circuit of size O(2^k epsilon^2). Our result thus shows that pseudoentropy distributions face basically the same non-uniform attacks as pseudorandom distributions.

Cite as

Krzysztof Pietrzak and Maciej Skorski. Non-Uniform Attacks Against Pseudoentropy. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 39:1-39:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{pietrzak_et_al:LIPIcs.ICALP.2017.39,
  author =	{Pietrzak, Krzysztof and Skorski, Maciej},
  title =	{{Non-Uniform Attacks Against Pseudoentropy}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{39:1--39:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.39},
  URN =		{urn:nbn:de:0030-drops-74738},
  doi =		{10.4230/LIPIcs.ICALP.2017.39},
  annote =	{Keywords: pseudoentropy, non-uniform attacks}
}
Document
Interactive Oracle Proofs with Constant Rate and Query Complexity

Authors: Eli Ben-Sasson, Alessandro Chiesa, Ariel Gabizon, Michael Riabzev, and Nicholas Spooner


Abstract
We study interactive oracle proofs (IOPs) [BCS16,RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are: 1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity. 2. Reed-Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity. 3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity. For all the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08,Din07]. Also, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but sublinear (and super-constant) query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much "lighter" because we do not rely on any automorphisms of the code. We obtain our results by proving and combining "IOP-analogues" of tools underlying numerous IPs and PCPs: * Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing proof length by an additive factor that is exponential in the verifier's randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear. * Sublinear sumcheck. The sumcheck protocol [LFKN92] is an IP that enables the verifier to check the sum of values of a low-degree multi-variate polynomial on an exponentially-large hypercube, but the verifier's running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic). Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.

Cite as

Eli Ben-Sasson, Alessandro Chiesa, Ariel Gabizon, Michael Riabzev, and Nicholas Spooner. Interactive Oracle Proofs with Constant Rate and Query Complexity. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bensasson_et_al:LIPIcs.ICALP.2017.40,
  author =	{Ben-Sasson, Eli and Chiesa, Alessandro and Gabizon, Ariel and Riabzev, Michael and Spooner, Nicholas},
  title =	{{Interactive Oracle Proofs with Constant Rate and Query Complexity}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.40},
  URN =		{urn:nbn:de:0030-drops-74713},
  doi =		{10.4230/LIPIcs.ICALP.2017.40},
  annote =	{Keywords: probabilistically checkable proofs, interactive proofs, proof composition, sumcheck}
}
Document
Dynamic Parameterized Problems and Algorithms

Authors: Josh Alman, Matthias Mnich, and Virginia Vassilevska Williams


Abstract
Fixed-parameter algorithms and kernelization are two powerful methods to solve NP-hard problems. Yet, so far those algorithms have been largely restricted to static inputs. In this paper we provide fixed-parameter algorithms and kernelizations for fundamental NP-hard problems with dynamic inputs. We consider a variety of parameterized graph and hitting set problems which are known to have f(k)n^{1+o(1)} time algorithms on inputs of size n, and we consider the question of whether there is a data structure that supports small updates (such as edge/vertex/set/element insertions and deletions) with an update time of g(k)n^{o(1)}; such an update time would be essentially optimal. Update and query times independent of n are particularly desirable. Among many other results, we show that Feedback Vertex Set and k-Path admit dynamic algorithms with f(k) log^{O(1)} n update and query times for some function f depending on the solution size k only. We complement our positive results by several conditional and unconditional lower bounds. For example, we show that unlike their undirected counterparts, Directed Feedback Vertex Set and Directed k-Path do not admit dynamic algorithms with n^{o(1)} update and query times even for constant solution sizes k <= 3, assuming popular hardness hypotheses. We also show that unconditionally, in the cell probe model, Directed Feedback Vertex Set cannot be solved with update time that is purely a function of k.
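
A minimal sketch of the dynamic setting (not the paper's data structure, which achieves f(k) log^{O(1)} n update time): the naive baseline below supports edge insertions/deletions and answers a Feedback Vertex Set query by brute-force recomputation, which is exactly the per-query cost the paper's algorithms avoid.

# Naive baseline (illustration only): dynamic interface for
# "does the current graph have a feedback vertex set of size <= k?"
from itertools import combinations

class NaiveDynamicFVS:
    def __init__(self, vertices, k):
        self.V = set(vertices)
        self.k = k
        self.E = set()

    def insert(self, u, v):
        self.E.add(frozenset((u, v)))

    def delete(self, u, v):
        self.E.discard(frozenset((u, v)))

    def _acyclic(self, removed):
        # Union-Find cycle check on the graph with 'removed' vertices deleted.
        parent = {v: v for v in self.V - removed}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for e in self.E:
            u, v = tuple(e)
            if u in removed or v in removed:
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    def query(self):
        for r in range(self.k + 1):
            for S in combinations(self.V, r):
                if self._acyclic(set(S)):
                    return True
        return False

D = NaiveDynamicFVS(range(4), k=1)
for u, v in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    D.insert(u, v)
print(D.query())   # True: deleting vertex 0, 1, or 2 breaks the triangle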

Cite as

Josh Alman, Matthias Mnich, and Virginia Vassilevska Williams. Dynamic Parameterized Problems and Algorithms. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{alman_et_al:LIPIcs.ICALP.2017.41,
  author =	{Alman, Josh and Mnich, Matthias and Vassilevska Williams, Virginia},
  title =	{{Dynamic Parameterized Problems and Algorithms}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.41},
  URN =		{urn:nbn:de:0030-drops-74419},
  doi =		{10.4230/LIPIcs.ICALP.2017.41},
  annote =	{Keywords: Dynamic algorithms, fixed-parameter algorithms}
}
Document
Decremental Data Structures for Connectivity and Dominators in Directed Graphs

Authors: Loukas Georgiadis, Thomas Dueholm Hansen, Giuseppe F. Italiano, Sebastian Krinninger, and Nikos Parotsidis


Abstract
We introduce a new dynamic data structure for maintaining the strongly connected components (SCCs) of a directed graph (digraph) under edge deletions, so as to answer a rich repertoire of connectivity queries. Our main technical contribution is a decremental data structure that supports sensitivity queries of the form "are u and v strongly connected in the graph G \ w?", for any triple of vertices u, v, w, while G undergoes deletions of edges. Our data structure processes a sequence of edge deletions in a digraph with n vertices in O(m n log n) total time and O(n^2 log n) space, where m is the number of edges before any deletion, and answers the above queries in constant time. We can leverage our data structure to obtain decremental data structures for many more types of queries within the same time and space complexity; for instance, edge-related queries, such as testing whether two query vertices u and v are strongly connected in G \ e, for some query edge e. As another important application of our decremental data structure, we provide the first nontrivial algorithm for maintaining the dominator tree of a flow graph under edge deletions. We present an algorithm that processes a sequence of edge deletions in a flow graph in O(m n log n) total time and O(n^2 log n) space. For reducible flow graphs we provide an O(mn)-time and O(m + n)-space algorithm. We give a conditional lower bound that provides evidence that these running times may be tight up to subpolynomial factors.
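
To make the sensitivity-query semantics concrete, here is a naive reference check in Python (two BFS passes per query on the graph with w removed); it illustrates only the query, not the paper's O(mn log n)-total-time decremental structure.

# Naive check of "are u and v strongly connected in G \ w?" (illustration only).
from collections import deque

def reachable(adj, s, t, banned):
    if s == banned or t == banned:
        return False
    seen, Q = {s}, deque([s])
    while Q:
        x = Q.popleft()
        if x == t:
            return True
        for y in adj.get(x, ()):
            if y != banned and y not in seen:
                seen.add(y)
                Q.append(y)
    return False

def strongly_connected_without(edges, u, v, w):
    adj, radj = {}, {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        radj.setdefault(b, []).append(a)
    # u and v are strongly connected iff v is reachable from u and vice versa.
    return reachable(adj, u, v, w) and reachable(radj, u, v, w)

E = [(1, 2), (2, 3), (3, 1), (3, 4)]
print(strongly_connected_without(E, 1, 2, 3))  # False: the only cycle uses vertex 3
print(strongly_connected_without(E, 1, 2, 4))  # True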

Cite as

Loukas Georgiadis, Thomas Dueholm Hansen, Giuseppe F. Italiano, Sebastian Krinninger, and Nikos Parotsidis. Decremental Data Structures for Connectivity and Dominators in Directed Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 42:1-42:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{georgiadis_et_al:LIPIcs.ICALP.2017.42,
  author =	{Georgiadis, Loukas and Dueholm Hansen, Thomas and Italiano, Giuseppe F. and Krinninger, Sebastian and Parotsidis, Nikos},
  title =	{{Decremental Data Structures for Connectivity and Dominators in Directed Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{42:1--42:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.42},
  URN =		{urn:nbn:de:0030-drops-74455},
  doi =		{10.4230/LIPIcs.ICALP.2017.42},
  annote =	{Keywords: dynamic graph algorithms, decremental algorithms, dominator tree, strong connectivity under failures}
}
Document
General Bounds for Incremental Maximization

Authors: Aaron Bernstein, Yann Disser, and Martin Groß


Abstract
We propose a theoretical framework to capture incremental solutions to cardinality constrained maximization problems. The defining characteristic of our framework is that the cardinality/support of the solution is bounded by a value k in N that grows over time, and we allow the solution to be extended one element at a time. We investigate the best-possible competitive ratio of such an incremental solution, i.e., the worst ratio over all k between the incremental solution after k steps and an optimum solution of cardinality k. We define a large class of problems that contains many important cardinality constrained maximization problems like maximum matching, knapsack, and packing/covering problems. We provide a general 2.618-competitive incremental algorithm for this class of problems, and show that no algorithm can have competitive ratio below 2.18 in general. In the second part of the paper, we focus on the inherently incremental greedy algorithm that increases the objective value as much as possible in each step. This algorithm is known to be 1.58-competitive for submodular objective functions, but it has unbounded competitive ratio for the class of incremental problems mentioned above. We define a relaxed submodularity condition for the objective function, capturing problems like maximum (weighted) (b-)matching and a variant of the maximum flow problem. We show that the greedy algorithm has competitive ratio (exactly) 2.313 for the class of problems that satisfy this relaxed submodularity condition. Note that our upper bounds on the competitive ratios translate to approximation ratios for the underlying cardinality constrained problems.
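
The following Python sketch illustrates the inherently incremental greedy algorithm discussed above on a hypothetical toy objective (weighted coverage, which is not one of the paper's examples): at every step it adds the element that increases the objective the most, so the solutions for k = 1, 2, ... are nested.

# Incremental greedy: grow one element at a time, snapshotting each prefix.
def incremental_greedy(universe, objective, k_max):
    solution, snapshots = set(), []
    for _ in range(k_max):
        candidates = [e for e in universe if e not in solution]
        if not candidates:
            break
        best = max(candidates, key=lambda e: objective(solution | {e}))
        solution.add(best)
        snapshots.append(set(solution))       # nested solutions for k = 1, 2, ...
    return snapshots

# Toy instance: weighted coverage (monotone submodular, so greedy is well
# behaved here; the paper's focus is on more general objective classes).
sets = {'a': {1, 2}, 'b': {2, 3, 4}, 'c': {4, 5}}
weight = {1: 1, 2: 1, 3: 5, 4: 1, 5: 1}

def coverage(chosen):
    covered = set().union(*(sets[s] for s in chosen)) if chosen else set()
    return sum(weight[x] for x in covered)

print(incremental_greedy(sets, coverage, 3))
# [{'b'}, {'a', 'b'}, {'a', 'b', 'c'}]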

Cite as

Aaron Bernstein, Yann Disser, and Martin Groß. General Bounds for Incremental Maximization. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bernstein_et_al:LIPIcs.ICALP.2017.43,
  author =	{Bernstein, Aaron and Disser, Yann and Gro{\ss}, Martin},
  title =	{{General Bounds for Incremental Maximization}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.43},
  URN =		{urn:nbn:de:0030-drops-74650},
  doi =		{10.4230/LIPIcs.ICALP.2017.43},
  annote =	{Keywords: incremental optimization, maximization problems, greedy algorithm, competitive analysis, cardinality constraint}
}
Document
Deterministic Partially Dynamic Single Source Shortest Paths in Weighted Graphs

Authors: Aaron Bernstein


Abstract
In this paper we consider the decremental single-source shortest paths (SSSP) problem, where, given a graph G and a source node s, the goal is to maintain shortest distances between s and all other nodes in G under a sequence of online adversarial edge deletions. In their seminal work, Even and Shiloach [JACM 1981] presented an exact solution to the problem in unweighted graphs with only O(mn) total update time over all edge deletions. Their classic algorithm was the state of the art for the decremental SSSP problem for three decades, even when approximate shortest paths are allowed. The first improvement over the Even-Shiloach algorithm was given by Bernstein and Roditty [SODA 2011], who for the case of an unweighted and undirected graph presented a (1+epsilon)-approximate algorithm with constant query time and a total update time of O(n^{2+o(1)}). This work triggered a series of new results, culminating in a recent breakthrough of Henzinger, Krinninger and Nanongkai [FOCS 2014], who presented a (1+epsilon)-approximate algorithm for undirected weighted graphs whose total update time is near linear: O(m^{1+o(1)} log(W)), where W is the ratio of the heaviest to the lightest edge weight in the graph. In that paper, they posed the question of derandomizing their result as a major open problem. Until very recently, all known improvements over the Even-Shiloach algorithm were randomized and required the assumption of a non-adaptive adversary. In STOC 2016, Bernstein and Chechik showed the first deterministic algorithm to go beyond O(mn) total update time: the algorithm is also (1+epsilon)-approximate, and has total update time O~(n^2). In SODA 2017, the same authors presented an algorithm with total update time O~(mn^{3/4}). However, both algorithms are restricted to undirected, unweighted graphs. We present the first deterministic algorithm for weighted undirected graphs to go beyond the O(mn) bound. The total update time is O~(n^2 log(W)).
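
For orientation, the naive baseline for decremental SSSP simply recomputes a shortest-path tree from s after every deletion, as in the Python sketch below; the algorithms discussed above are designed precisely to beat this per-deletion recomputation in total update time.

# Naive decremental SSSP baseline: rerun Dijkstra after each deletion.
import heapq

def dijkstra(adj, s):
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
print(dijkstra(adj, 0))            # {0: 0, 1: 2, 2: 3}
adj[1] = []                        # adversarially delete the edge (1, 2)
print(dijkstra(adj, 0))            # {0: 0, 1: 2, 2: 5}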

Cite as

Aaron Bernstein. Deterministic Partially Dynamic Single Source Shortest Paths in Weighted Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bernstein:LIPIcs.ICALP.2017.44,
  author =	{Bernstein, Aaron},
  title =	{{Deterministic Partially Dynamic Single Source Shortest Paths in Weighted Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.44},
  URN =		{urn:nbn:de:0030-drops-74013},
  doi =		{10.4230/LIPIcs.ICALP.2017.44},
  annote =	{Keywords: Shortest Paths, Dynamic Algorithms, Deterministic, Weighted Graph}
}
Document
Testing Core Membership in Public Goods Economies

Authors: Greg Bodwin


Abstract
This paper develops a recent line of economic theory seeking to understand public goods economies using methods of topological analysis. Our first main result is a very clean characterization of the economy's core (the standard solution concept in public goods). Specifically, we prove that a point is in the core iff it is Pareto efficient, individually rational, and the set of points it dominates is path connected. While this structural theorem has a few interesting implications in economic theory, the main focus of the second part of this paper is on a particular algorithmic application that demonstrates its utility. Since the 1960s, economists have looked for an efficient computational process that decides whether or not a given point is in the core. All known algorithms so far run in exponential time (except in some artificially restricted settings). By heavily exploiting our new structure, we propose a new algorithm for testing core membership whose computational bottleneck is the solution of O(n) convex optimization problems on the utility function governing the economy. It is fairly natural to assume that convex optimization should be feasible, as it is needed even for very basic economic computational tasks such as testing Pareto efficiency. Nevertheless, even without this assumption, our work implies for the first time that core membership can be efficiently tested on (e.g.) utility functions that admit "nice" analytic expressions, or that appropriately defined epsilon-approximate versions of the problem are tractable (by using modern black-box epsilon-approximate convex optimization algorithms).

Cite as

Greg Bodwin. Testing Core Membership in Public Goods Economies. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bodwin:LIPIcs.ICALP.2017.45,
  author =	{Bodwin, Greg},
  title =	{{Testing Core Membership in Public Goods Economies}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.45},
  URN =		{urn:nbn:de:0030-drops-74910},
  doi =		{10.4230/LIPIcs.ICALP.2017.45},
  annote =	{Keywords: Algorithmic Game Theory, Economics, Algorithms, Public Goods, Coalitional Stability}
}
Document
Revenue Maximization in Stackelberg Pricing Games: Beyond the Combinatorial Setting

Authors: Toni Böhnlein, Stefan Kratsch, and Oliver Schaudt


Abstract
In a Stackelberg Pricing Game a distinguished player, the leader, chooses prices for a set of items, and the other players, the followers, each seek to buy a minimum-cost feasible subset of the items. The goal of the leader is to maximize her revenue, which is determined by the sold items and their prices. Most previously studied cases of such games can be captured by a combinatorial model where we have a base set of items, some with fixed prices, some priceable, and constraints on the subsets that are feasible for each follower. In this combinatorial setting, Briest et al. and Balcan et al. independently showed that the maximum revenue can be approximated to a factor of H_k ~ log(k), where k is the number of priceable items. Our results are twofold. First, we strongly generalize the model by letting the follower minimize any continuous function plus a linear term over any compact subset of R^n_{>=0}; the coefficients (or prices) in the linear term are chosen by the leader and determine her revenue. In particular, this includes the fundamental case of linear programs. We give a tight lower bound on the revenue of the leader, generalizing the results of Briest et al. and Balcan et al. In addition, we prove that it is strongly NP-hard to decide whether the optimum revenue exceeds the lower bound by an arbitrarily small factor. Second, we study the parameterized complexity of computing the optimal revenue with respect to the number k of priceable items. In the combinatorial setting, given an efficient algorithm for optimal follower solutions, the maximum revenue can be found by enumerating the 2^k subsets of priceable items and computing optimal prices via a result of Briest et al., giving time O(2^k |I|^c), where |I| is the input size. Our main result here is a W[1]-hardness proof for the case where the followers minimize a linear program, ruling out running time f(k)|I|^c unless FPT = W[1] and ruling out time |I|^{o(k)} under the Exponential-Time Hypothesis.

Cite as

Toni Böhnlein, Stefan Kratsch, and Oliver Schaudt. Revenue Maximization in Stackelberg Pricing Games: Beyond the Combinatorial Setting. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bohnlein_et_al:LIPIcs.ICALP.2017.46,
  author =	{B\"{o}hnlein, Toni and Kratsch, Stefan and Schaudt, Oliver},
  title =	{{Revenue Maximization in Stackelberg Pricing Games: Beyond the Combinatorial Setting}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.46},
  URN =		{urn:nbn:de:0030-drops-73771},
  doi =		{10.4230/LIPIcs.ICALP.2017.46},
  annote =	{Keywords: Algorithmic pricing, Stackelberg games, Approximation algorithms, Revenue maximization, Parameterized complexity}
}
Document
Online Market Intermediation

Authors: Yiannis Giannakopoulos, Elias Koutsoupias, and Philip Lazos


Abstract
We study a dynamic market setting where an intermediary interacts with an unknown large sequence of agents that can be either sellers or buyers: their identities, as well as the sequence length n, are decided in an adversarial, online way. Each agent is interested in trading a single item, and all items in the market are identical. The intermediary has some prior, incomplete knowledge of the agents' values for the items: all seller values are independently drawn from the same distribution F_S, and all buyer values from F_B. The two distributions may differ, and we make common regularity assumptions, namely that F_B is MHR and F_S is log-concave. We focus on online, posted-price mechanisms, and analyse two objectives: that of maximizing the intermediary's profit and that of maximizing the social welfare, under a competitive analysis benchmark. First, on the negative side, for general agent sequences we prove tight competitive ratios of Theta(sqrt(n)) and Theta(ln n), respectively for the two objectives. On the other hand, under the extra assumption that the intermediary knows some bound alpha on the ratio between the number of sellers and buyers, we design asymptotically optimal online mechanisms with competitive ratios of 1+o(1) and 4, respectively. Additionally, we study the model where the number of items that can be stored in stock throughout the execution is bounded, in which case the competitive ratio for the profit is improved to O(ln n).

Cite as

Yiannis Giannakopoulos, Elias Koutsoupias, and Philip Lazos. Online Market Intermediation. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{giannakopoulos_et_al:LIPIcs.ICALP.2017.47,
  author =	{Giannakopoulos, Yiannis and Koutsoupias, Elias and Lazos, Philip},
  title =	{{Online Market Intermediation}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.47},
  URN =		{urn:nbn:de:0030-drops-74815},
  doi =		{10.4230/LIPIcs.ICALP.2017.47},
  annote =	{Keywords: optimal auctions, bilateral trade, sequential auctions, online algorithms, competitive analysis}
}
Document
Tight Lower Bounds for Multiplicative Weights Algorithmic Families

Authors: Nick Gravin, Yuval Peres, and Balasubramanian Sivan


Abstract
We study the fundamental problem of prediction with expert advice and develop regret lower bounds for a large family of algorithms for this problem. We develop simple adversarial primitives that lend themselves to various combinations leading to sharp lower bounds for many algorithmic families. We use these primitives to show that the classic Multiplicative Weights Algorithm (MWA) has a regret of (T*ln(k)/2)^{0.5}, where T is the time horizon and k is the number of experts, thereby completely closing the gap between upper and lower bounds. We further show a regret lower bound of (2/3)*(T*ln(k)/2)^{0.5} for a much more general family of algorithms than MWA, where the learning rate can be arbitrarily varied over time, or even picked from arbitrary distributions over time. We also use our primitives to construct adversaries in the geometric horizon setting for MWA to precisely characterize the regret at 0.391/delta^{0.5} for the case of 2 experts and a lower bound of (1/2)*(ln(k)/(2*delta))^{0.5} for the case of an arbitrary number of experts k (here delta is the probability that the game ends in any given round).
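
For reference, a minimal implementation of the Multiplicative Weights Algorithm for prediction with expert advice is given below (Python, with a standard tuning of the learning rate that yields regret at most (T*ln(k)/2)^{0.5} for losses in [0, 1]); the loss sequence here is random rather than one of the adversarial sequences constructed in the paper, so the realized regret is far below the worst-case bound.

# Multiplicative Weights / Hedge for prediction with expert advice.
import math, random

def mwa_regret(loss_rounds, eta):
    k = len(loss_rounds[0])
    w = [1.0] * k
    alg_loss, cum = 0.0, [0.0] * k
    for losses in loss_rounds:                    # losses are in [0, 1]
        total = sum(w)
        probs = [wi / total for wi in w]
        alg_loss += sum(p * l for p, l in zip(probs, losses))
        for i, l in enumerate(losses):
            cum[i] += l
            w[i] *= math.exp(-eta * l)            # multiplicative update
    return alg_loss - min(cum)                    # expected regret vs. best expert

random.seed(0)
T, k = 10000, 4
rounds = [[random.random() for _ in range(k)] for _ in range(T)]
eta = math.sqrt(8 * math.log(k) / T)              # tuning for the (T*ln(k)/2)^{0.5} bound
print(mwa_regret(rounds, eta), math.sqrt(T * math.log(k) / 2))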

Cite as

Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Tight Lower Bounds for Multiplicative Weights Algorithmic Families. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 48:1-48:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gravin_et_al:LIPIcs.ICALP.2017.48,
  author =	{Gravin, Nick and Peres, Yuval and Sivan, Balasubramanian},
  title =	{{Tight Lower Bounds for Multiplicative Weights Algorithmic Families}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{48:1--48:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.48},
  URN =		{urn:nbn:de:0030-drops-74997},
  doi =		{10.4230/LIPIcs.ICALP.2017.48},
  annote =	{Keywords: Multiplicative Weights, Lower Bounds, Adversarial Primitives}
}
Document
The Power of Shared Randomness in Uncertain Communication

Authors: Badih Ghazi and Madhu Sudan


Abstract
In a recent work (Ghazi et al., SODA 2016), the authors with Komargodski and Kothari initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function that is being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty, in other words, under the promise that f=g) has public-coin communication at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown. However, an important question that was left open is related to the power that public randomness brings to uncertain communication. Can Alice and Bob achieve efficient communication amid uncertainty without using public randomness? And how powerful are public-coin protocols in overcoming uncertainty? Motivated by these two questions: - We prove the first separation between private-coin uncertain communication and public-coin uncertain communication. Namely, we exhibit a function class for which the communication in the standard model and the public-coin uncertain communication are O(1) while the private-coin uncertain communication is a growing function of n (the length of the inputs). This lower bound (proved with respect to the uniform distribution) is in sharp contrast with the case of public-coin uncertain communication which was shown by the previous work to be within a constant factor from the certain communication. This lower bound also implies the first separation between public-coin uncertain communication and deterministic uncertain communication. Interestingly, we also show that if Alice and Bob imperfectly share a sequence of random bits (a setup weaker than public randomness), then achieving a constant blow-up in communication is still possible. - We improve the lower-bound of the previous work on public-coin uncertain communication. Namely, we exhibit a function class and a distribution (with mutual information I approx n) for which the one-way certain communication is k bits but the one-way public-coin uncertain communication is at least Omega(sqrt{k}*sqrt{I}) bits. Our proofs introduce new problems in the standard communication complexity model and prove lower bounds for these problems. Both the problems and the lower bound techniques may be of general interest.

Cite as

Badih Ghazi and Madhu Sudan. The Power of Shared Randomness in Uncertain Communication. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ICALP.2017.49,
  author =	{Ghazi, Badih and Sudan, Madhu},
  title =	{{The Power of Shared Randomness in Uncertain Communication}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.49},
  URN =		{urn:nbn:de:0030-drops-74871},
  doi =		{10.4230/LIPIcs.ICALP.2017.49},
  annote =	{Keywords: randomness, uncertainty, communication, imperfectly shared randomness, lower bounds}
}
Document
Separation of AC^0[oplus] Formulas and Circuits

Authors: Benjamin Rossman and Srikanth Srinivasan


Abstract
This paper gives the first separation between the power of formulas and circuits of equal depth in the AC^0[oplus] basis (unbounded fan-in AND, OR, NOT and MOD_2 gates). We show, for all d(n) <= O(log n/log log n), that there exist polynomial-size depth-d circuits that are not equivalent to depth-d formulas of size n^{o(d)} (moreover, this is optimal in that n^{o(d)} cannot be improved to n^{O(d)}). This result is obtained by a combination of new lower and upper bounds for Approximate Majorities, the class of Boolean functions {0,1}^n to {0,1} that agree with the Majority function on a 3/4 fraction of inputs. AC^0[oplus] formula lower bound: We show that every depth-d AC^0[oplus] formula of size s has a (1/8)-error polynomial approximation over F_2 of degree O((log s)/d)^{d-1}. This strengthens a classic O(log s)^{d-1} degree approximation for circuits due to Razborov. Since the Majority function has approximate degree Theta(sqrt(n)), this result implies an exp(Omega(d n^{1/2(d-1)})) lower bound on the depth-d AC^0[oplus] formula size of all Approximate Majority functions for all d(n) <= O(log n). Monotone AC^0 circuit upper bound: For all d(n) <= O(log n/log log n), we give a randomized construction of depth-d monotone AC^0 circuits (without NOT or MOD_2 gates) of size exp(O(n^{1/2(d-1)})) that compute an Approximate Majority function. This strengthens a construction of formulas of size exp(O(d n^{1/2(d-1)})) due to Amano.

Cite as

Benjamin Rossman and Srikanth Srinivasan. Separation of AC^0[oplus] Formulas and Circuits. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{rossman_et_al:LIPIcs.ICALP.2017.50,
  author =	{Rossman, Benjamin and Srinivasan, Srikanth},
  title =	{{Separation of AC^0\lbrackoplus\rbrack Formulas and Circuits}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{50:1--50:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.50},
  URN =		{urn:nbn:de:0030-drops-73904},
  doi =		{10.4230/LIPIcs.ICALP.2017.50},
  annote =	{Keywords: circuit complexity, lower bounds, approximate majority, polynomial method}
}
Document
Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers

Authors: Chengyu Lin and Shengyu Zhang


Abstract
The Sensitivity Conjecture and the Log-rank Conjecture are among the most important and challenging problems in concrete complexity. Incidentally, the Sensitivity Conjecture is known to hold for monotone functions, and so is the Log-rank Conjecture for f(x and y) and f(x xor y) with monotone functions f, where "and" and "xor" denote bit-wise AND and XOR, respectively. In this paper, we extend these results to functions f which alternate values a relatively small number of times on any monotone path from 0^n to 1^n. These results deepen our understanding of the two conjectures and contribute to the recent line of research on functions with small alternating numbers.
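
As a small illustration (not from the paper), the Python sketch below computes the alternating number of a Boolean function by dynamic programming over the hypercube: the maximum number of value changes along any monotone path from 0^n to 1^n that flips one bit at a time.

# Alternating number of f: {0,1}^n -> {0,1}, inputs encoded as bitmasks.
def alternating_number(f, n):
    best = {0: 0}                              # bitmask -> max alternations so far
    for x in sorted(range(1, 1 << n), key=lambda m: bin(m).count('1')):
        vals = []
        for i in range(n):
            if x & (1 << i):
                y = x ^ (1 << i)               # predecessor with bit i cleared
                vals.append(best[y] + (1 if f(y) != f(x) else 0))
        best[x] = max(vals)
    return best[(1 << n) - 1]

parity = lambda x: bin(x).count('1') % 2       # alternates at every step
and3   = lambda x: int(x == (1 << 3) - 1)      # monotone AND on 3 bits
print(alternating_number(parity, 3))           # 3
print(alternating_number(and3, 3))             # 1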

Cite as

Chengyu Lin and Shengyu Zhang. Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 51:1-51:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ICALP.2017.51,
  author =	{Lin, Chengyu and Zhang, Shengyu},
  title =	{{Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{51:1--51:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.51},
  URN =		{urn:nbn:de:0030-drops-74045},
  doi =		{10.4230/LIPIcs.ICALP.2017.51},
  annote =	{Keywords: Analysis of Boolean functions, Sensitivity Conjecture, Log-rank Conjecture, Alternating Number}
}
Document
Randomized Communication vs. Partition Number

Authors: Mika Göös, T. S. Jayram, Toniann Pitassi, and Thomas Watson


Abstract
We show that randomized communication complexity can be superlogarithmic in the partition number of the associated communication matrix, and we obtain near-optimal randomized lower bounds for the Clique vs. Independent Set problem. These results strengthen the deterministic lower bounds obtained in prior work (Goos, Pitassi, and Watson, FOCS 2015). One of our main technical contributions states that information complexity when the cost is measured with respect to only 1-inputs (or only 0-inputs) is essentially equivalent to information complexity with respect to all inputs.

Cite as

Mika Göös, T. S. Jayram, Toniann Pitassi, and Thomas Watson. Randomized Communication vs. Partition Number. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{goos_et_al:LIPIcs.ICALP.2017.52,
  author =	{G\"{o}\"{o}s, Mika and Jayram, T. S. and Pitassi, Toniann and Watson, Thomas},
  title =	{{Randomized Communication vs. Partition Number}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.52},
  URN =		{urn:nbn:de:0030-drops-74861},
  doi =		{10.4230/LIPIcs.ICALP.2017.52},
  annote =	{Keywords: communication complexity, partition number, information complexity}
}
Document
Approximate Bounded Indistinguishability

Authors: Andrej Bogdanov and Christopher Williamson


Abstract
Two distributions over n-bit strings are (k,delta)-wise indistinguishable if no statistical test that observes k of the n bits can tell the two distributions apart with advantage better than delta. Motivated by secret sharing and cryptographic leakage resilience, we study the existence of pairs of distributions that are (k, delta)-wise indistinguishable, but can be distinguished by some function f of suitably low complexity. We prove bounds tight up to constants when f is the OR function, and tight up to logarithmic factors when f is a read-once uniform AND of ORs formula, extending previous works that address the perfect indistinguishability case delta = 0. We also give an elementary proof of the following result in approximation theory: if p is a univariate degree-k polynomial such that |p(x)| <= 1 for all |x| <= 1 and p(1) = 1, then l1(p) >= 2^{Omega(p'(1)/k)}, where l1(p) denotes the sum of the absolute values of p's coefficients. A more general statement was proved by Servedio, Tan, and Thaler (2012) using complex-analytic methods. As a secondary contribution, we derive new threshold weight lower bounds for bounded-depth AND-OR formulas.
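
A brute-force checker of the (k, delta)-wise indistinguishability notion, feasible only for tiny n, is sketched below in Python; the example pair (uniform over even-parity vs. odd-parity 3-bit strings) is perfectly 2-wise indistinguishable yet told apart by the XOR of all bits, a classical example in the spirit of, though different from, the OR distinguishers studied in the paper.

# Largest statistical distance between k-bit marginals of two distributions.
from itertools import combinations

def kwise_distance(p, q, n, k):
    def marginal(dist, coords):
        m = {}
        for s, pr in dist.items():
            key = tuple(s[i] for i in coords)
            m[key] = m.get(key, 0.0) + pr
        return m
    worst = 0.0
    for coords in combinations(range(n), k):
        mp, mq = marginal(p, coords), marginal(q, coords)
        keys = set(mp) | set(mq)
        sd = 0.5 * sum(abs(mp.get(t, 0.0) - mq.get(t, 0.0)) for t in keys)
        worst = max(worst, sd)
    return worst

even = {s: 0.25 for s in ('000', '011', '101', '110')}
odd  = {s: 0.25 for s in ('001', '010', '100', '111')}
print(kwise_distance(even, odd, 3, 2))   # 0.0  -> (2, 0)-wise indistinguishable
print(kwise_distance(even, odd, 3, 3))   # 1.0  -> fully distinguishable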

Cite as

Andrej Bogdanov and Christopher Williamson. Approximate Bounded Indistinguishability. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 53:1-53:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bogdanov_et_al:LIPIcs.ICALP.2017.53,
  author =	{Bogdanov, Andrej and Williamson, Christopher},
  title =	{{Approximate Bounded Indistinguishability}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{53:1--53:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.53},
  URN =		{urn:nbn:de:0030-drops-74671},
  doi =		{10.4230/LIPIcs.ICALP.2017.53},
  annote =	{Keywords: pseudorandomness, polynomial approximation, secret sharing}
}
Document
Finding Detours is Fixed-Parameter Tractable

Authors: Ivona Bezáková, Radu Curticapean, Holger Dell, and Fedor V. Fomin


Abstract
We consider the following natural "above guarantee" parameterization of the classical longest path problem: For given vertices s and t of a graph G, and an integer k, the longest detour problem asks for an (s,t)-path in G that is at least k longer than a shortest (s,t)-path. Using insights into structural graph theory, we prove that the longest detour problem is fixed-parameter tractable (FPT) on undirected graphs and actually even admits a single-exponential algorithm, that is, one of running time exp(O(k)) * poly(n). This matches (up to the base of the exponential) the best algorithms for finding a path of length at least k. Furthermore, we study a related problem, exact detour, that asks whether a graph G contains an (s,t)-path that is exactly k longer than a shortest (s,t)-path. For this problem, we obtain a randomized algorithm with running time about 2.746^k * poly(n), and a deterministic algorithm with running time about 6.745^k * poly(n), showing that this problem is FPT as well. Our algorithms for the exact detour problem apply to both undirected and directed graphs.
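
To fix the problem statement, the Python sketch below solves longest detour on tiny graphs by brute force (exponential time, in contrast to the paper's exp(O(k)) * poly(n) algorithm): it checks whether some simple (s,t)-path is at least k longer than a shortest (s,t)-path.

# Brute-force longest detour check (illustration only; tiny graphs).
from collections import deque

def has_detour(adj, s, t, k):
    # Shortest (s,t)-distance by BFS.
    dist = {s: 0}
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                Q.append(v)
    target = dist[t] + k
    # Enumerate simple paths by DFS, looking for one of length >= target.
    def dfs(u, length, seen):
        if u == t and length >= target:
            return True
        return any(dfs(v, length + 1, seen | {v})
                   for v in adj.get(u, ()) if v not in seen)
    return dfs(s, 0, {s})

# Undirected 4-cycle (given as a symmetric adjacency list): both simple
# 0 -> 2 paths have length 2.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(has_detour(cycle, 0, 2, 0))   # True
print(has_detour(cycle, 0, 2, 1))   # False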

Cite as

Ivona Bezáková, Radu Curticapean, Holger Dell, and Fedor V. Fomin. Finding Detours is Fixed-Parameter Tractable. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 54:1-54:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bezakova_et_al:LIPIcs.ICALP.2017.54,
  author =	{Bez\'{a}kov\'{a}, Ivona and Curticapean, Radu and Dell, Holger and Fomin, Fedor V.},
  title =	{{Finding Detours is Fixed-Parameter Tractable}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{54:1--54:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.54},
  URN =		{urn:nbn:de:0030-drops-74790},
  doi =		{10.4230/LIPIcs.ICALP.2017.54},
  annote =	{Keywords: longest path, fixed-parameter tractable algorithms, above-guarantee parameterization, graph minors}
}
Document
Further Approximations for Demand Matching: Matroid Constraints and Minor-Closed Graphs

Authors: Sara Ahmadian and Zachary Friggstad


Abstract
We pursue a study of the Generalized Demand Matching problem, a common generalization of the b-Matching and Knapsack problems. Here, we are given a graph with vertex capacities, edge profits, and asymmetric demands on the edges. The goal is to find a maximum-profit subset of edges so that the demands of the chosen edges do not violate the vertex capacities. This problem is APX-hard and constant-factor approximations are already known. Our main results fall into two categories. First, using iterated relaxation and various filtering strategies, we show with an efficient rounding algorithm that if an additional matroid structure M is given and we further only allow sets that are independent in M, the natural LP relaxation has an integrality gap of at most 25/3. This can be further improved in various special cases; for example, we improve over the 15-approximation for the previously-studied Coupled Placement problem [Korupolu et al. 2014] by giving a 7-approximation. Using similar techniques, we show that the problem of computing a minimum-cost base in M satisfying vertex capacities admits a (1,3)-bicriteria approximation: the cost is at most the optimum and the capacities are violated by a factor of at most 3. This improves over the previous (1,4)-approximation in the special case that M is the graphic matroid over the given graph [Fukunaga and Nagamochi, 2009]. Second, we show Demand Matching admits a polynomial-time approximation scheme in graphs that exclude a fixed minor. If all demands are polynomially-bounded integers, this is somewhat easy using dynamic programming in bounded-treewidth graphs. Our main technical contribution is a sparsification lemma that allows us to scale the demands of some items to be used in a more intricate dynamic programming algorithm, followed by some randomized rounding to filter our scaled-demand solution to one whose original demands satisfy all constraints.

Cite as

Sara Ahmadian and Zachary Friggstad. Further Approximations for Demand Matching: Matroid Constraints and Minor-Closed Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 55:1-55:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{ahmadian_et_al:LIPIcs.ICALP.2017.55,
  author =	{Ahmadian, Sara and Friggstad, Zachary},
  title =	{{Further Approximations for Demand Matching: Matroid Constraints and Minor-Closed Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{55:1--55:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.55},
  URN =		{urn:nbn:de:0030-drops-74600},
  doi =		{10.4230/LIPIcs.ICALP.2017.55},
  annote =	{Keywords: Approximation Algorithms, Column-Restricted Packing, Demand Matching, Matroids, Planar Graphs}
}
Document
Covering Vectors by Spaces: Regular Matroids

Authors: Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, and Saket Saurabh


Abstract
We consider the problem of covering a set of vectors of a given finite-dimensional linear space (vector space) by a subspace generated by a set of vectors of minimum size. Specifically, we study the Space Cover problem, where we are given a matrix M and a subset of its columns T; the task is to find a minimum set F of columns of M disjoint from T such that the linear span of F contains all vectors of T. This is a fundamental problem arising in different domains, such as coding theory, machine learning, and graph algorithms. We give a parameterized algorithm with running time 2^{O(k)} ||M||^{O(1)} solving this problem in the case when M is a totally unimodular matrix over the rationals, where k is the size of F. In other words, we show that the problem is fixed-parameter tractable parameterized by the rank of the covering subspace. The algorithm is "asymptotically optimal" for the following reasons. Choice of matrices: Vector matroids corresponding to totally unimodular matrices over the rationals are exactly the regular matroids. It is known that for matrices corresponding to a more general class of matroids, namely binary matroids, the problem becomes W[1]-hard when parameterized by k. Choice of the parameter: The problem is NP-hard even if |T|=3 on matrix representations of a subclass of regular matroids, namely cographic matroids. Thus for a stronger parameterization, like by the size of T, the problem becomes intractable. Running time: The exponential dependence in the running time of our algorithm cannot be asymptotically improved unless the Exponential Time Hypothesis (ETH) fails. Our algorithm exploits the classical decomposition theorem of Seymour for regular matroids.
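
A brute-force reference for the Space Cover problem is sketched below in Python/numpy (exponential in |F|, unlike the 2^{O(k)} ||M||^{O(1)} algorithm above); the small 0/1 matrix in the example is illustrative only.

# Smallest set F of columns of M, disjoint from T, whose span contains T.
import numpy as np
from itertools import combinations

def space_cover(M, T):
    others = [j for j in range(M.shape[1]) if j not in T]
    target = M[:, sorted(T)]
    for r in range(len(others) + 1):
        for F in combinations(others, r):
            cols = M[:, list(F)] if F else np.zeros((M.shape[0], 0))
            rank_F = np.linalg.matrix_rank(cols) if F else 0
            # span(F) contains every column of T iff appending T keeps the rank.
            if np.linalg.matrix_rank(np.hstack([cols, target])) == rank_F:
                return list(F)
    return None

# Illustrative 0/1 matrix: column 2 equals column 0 + column 1.
M = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.]])
print(space_cover(M, T={2}))   # [0, 1]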

Cite as

Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, and Saket Saurabh. Covering Vectors by Spaces: Regular Matroids. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 56:1-56:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ICALP.2017.56,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Lokshtanov, Daniel and Saurabh, Saket},
  title =	{{Covering Vectors by Spaces: Regular Matroids}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{56:1--56:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.56},
  URN =		{urn:nbn:de:0030-drops-73865},
  doi =		{10.4230/LIPIcs.ICALP.2017.56},
  annote =	{Keywords: regular matroids, spanning set, parameterized complexity}
}
Document
Linear Kernels for Edge Deletion Problems to Immersion-Closed Graph Classes

Authors: Archontia C. Giannopoulou, Michal Pilipczuk, Jean-Florent Raymond, Dimitrios M. Thilikos, and Marcin Wrochna


Abstract
Suppose F is a finite family of graphs. We consider the following meta-problem, called F-Immersion Deletion: given a graph G and an integer k, decide whether the deletion of at most k edges of G can result in a graph that does not contain any graph from F as an immersion. This problem is a close relative of the F-Minor Deletion problem studied by Fomin et al. [FOCS 2012], where one deletes vertices in order to remove all minor models of graphs from F. We prove that whenever all graphs from F are connected and at least one graph of F is planar and subcubic, then the F-Immersion Deletion problem admits: (i) a constant-factor approximation algorithm running in time O(m^3 n^3 log m); (ii) a linear kernel that can be computed in time O(m^4 n^3 log m); and (iii) an O(2^{O(k)} + m^4 n^3 log m)-time fixed-parameter algorithm, where n and m count the vertices and edges of the input graph. Our findings mirror those of Fomin et al. [FOCS 2012], who obtained similar results for F-Minor Deletion, under the assumption that at least one graph from F is planar. An important difference is that we are able to obtain a linear kernel for F-Immersion Deletion, while the exponent of the kernel of Fomin et al. depends heavily on the family F. In fact, this dependence is unavoidable under plausible complexity assumptions, as proven by Giannopoulou et al. [ICALP 2015]. This reveals that the kernelization complexity of F-Immersion Deletion is quite different from that of F-Minor Deletion.

Cite as

Archontia C. Giannopoulou, Michal Pilipczuk, Jean-Florent Raymond, Dimitrios M. Thilikos, and Marcin Wrochna. Linear Kernels for Edge Deletion Problems to Immersion-Closed Graph Classes. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{giannopoulou_et_al:LIPIcs.ICALP.2017.57,
  author =	{Giannopoulou, Archontia C. and Pilipczuk, Michal and Raymond, Jean-Florent and Thilikos, Dimitrios M. and Wrochna, Marcin},
  title =	{{Linear Kernels for Edge Deletion Problems to Immersion-Closed Graph Classes}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.57},
  URN =		{urn:nbn:de:0030-drops-73891},
  doi =		{10.4230/LIPIcs.ICALP.2017.57},
  annote =	{Keywords: Kernelization, Approximation, Immersion, Protrusion, Tree-cut width}
}
Document
k-Distinct In- and Out-Branchings in Digraphs

Authors: Gregory Gutin, Felix Reidl, and Magnus Wahlström


Abstract
An out-branching and an in-branching of a digraph D are called k-distinct if each of them has k arcs absent in the other. Bang-Jensen, Saurabh and Simonsen (2016) proved that the problem of deciding whether a strongly connected digraph D has k-distinct out-branching and in-branching is fixed-parameter tractable (FPT) when parameterized by k. They asked whether the problem remains FPT when extended to arbitrary digraphs. Bang-Jensen and Yeo (2008) asked whether the same problem is FPT when the out-branching and in-branching have the same root. By linking the two problems with the problem of whether a digraph has an out-branching with at least k leaves (a leaf is a vertex of out-degree zero), we first solve the problem of Bang-Jensen and Yeo (2008). We then develop a new digraph decomposition called the rooted cut decomposition and using it we prove that the problem of Bang-Jensen et al. (2016) is FPT for all digraphs. We believe that the rooted cut decomposition will be useful for obtaining other results on digraphs.

Cite as

Gregory Gutin, Felix Reidl, and Magnus Wahlström. k-Distinct In- and Out-Branchings in Digraphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 58:1-58:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gutin_et_al:LIPIcs.ICALP.2017.58,
  author =	{Gutin, Gregory and Reidl, Felix and Wahlstr\"{o}m, Magnus},
  title =	{{k-Distinct In- and Out-Branchings in Digraphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{58:1--58:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.58},
  URN =		{urn:nbn:de:0030-drops-73788},
  doi =		{10.4230/LIPIcs.ICALP.2017.58},
  annote =	{Keywords: Digraphs, Branchings, Decompositions, FPT algorithms}
}
Document
Fast Regression with an l_infty Guarantee

Authors: Eric Price, Zhao Song, and David P. Woodruff


Abstract
Sketching has emerged as a powerful technique for speeding up problems in numerical linear algebra, such as regression. In the overconstrained regression problem, one is given an n x d matrix A, with n >> d, as well as an n x 1 vector b, and one wants to find a vector x minimizing the residual error ||Ax-b||_2; let x^* denote this minimizer. Using the sketch-and-solve paradigm, one first computes S*A and S*b for a randomly chosen matrix S, then outputs x' = (SA)^+ Sb, where ^+ denotes the pseudoinverse, so as to minimize ||SAx' - Sb||_2. The sketch-and-solve paradigm gives a bound on ||x'-x^*||_2 when A is well-conditioned. Our main result is that, when S is the subsampled randomized Fourier/Hadamard transform, the error x' - x^* behaves as if it lies in a "random" direction within this bound: for any fixed direction a in R^d, we have with probability 1 - d^{-c} that (1) <a, x'-x^*> <= ||a||_2 ||x'-x^*||_2 / d^{1/2-gamma} up to constant factors, where c, gamma > 0 are arbitrary constants. This implies ||x'-x^*||_infty is a factor d^{1/2-gamma} smaller than ||x'-x^*||_2. It also gives a better bound on the generalization of x' to new examples: if rows of A correspond to examples and columns to features, then our result gives a better bound for the error introduced by sketch-and-solve when classifying fresh examples. We show that not all oblivious subspace embeddings S satisfy these properties. In particular, we give counterexamples showing that matrices based on Count-Sketch or leverage score sampling do not satisfy these properties. We also provide lower bounds, both on how small ||x'-x^*||_2 can be, and for our new guarantee (1), showing that the subsampled randomized Fourier/Hadamard transform is nearly optimal. Our lower bound on ||x'-x^*||_2 shows that there is an O(1/epsilon) separation in the dimension of the optimal oblivious subspace embedding required for outputting an x' for which ||x'-x^*||_2 <= epsilon ||Ax^*-b||_2 * ||A^+||_2, compared to the dimension of the optimal oblivious subspace embedding required for outputting an x' for which ||Ax'-b||_2 <= (1+epsilon)||Ax^*-b||_2; that is, the former problem requires dimension Omega(d/epsilon^2) while the latter problem can be solved with dimension O(d/epsilon). This explains why known upper bounds on the dimensions of these two variants of regression have differed in prior work.
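
The sketch-and-solve paradigm itself is easy to demonstrate numerically; the Python/numpy snippet below uses a Gaussian sketch as a stand-in (the paper's l_infty guarantee is proved specifically for the subsampled randomized Fourier/Hadamard transform, so this only illustrates the paradigm and the quantities involved, not the theorem).

# Sketch and solve: x' = (SA)^+ (Sb), compared against the exact solution x^*.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 4000, 20, 400                  # n >> d, sketch dimension m
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]           # exact least squares
S = rng.standard_normal((m, n)) / np.sqrt(m)            # oblivious (Gaussian) sketch
x_prime = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]  # sketch and solve

err = x_prime - x_star
# If the error points in a "random" direction, the l_infty norm should be
# roughly ||err||_2 / sqrt(d) rather than comparable to ||err||_2.
print(np.linalg.norm(err, np.inf), np.linalg.norm(err) / np.sqrt(d))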

Cite as

Eric Price, Zhao Song, and David P. Woodruff. Fast Regression with an $\ell_\infty$ Guarantee. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 59:1-59:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{price_et_al:LIPIcs.ICALP.2017.59,
  author =	{Price, Eric and Song, Zhao and Woodruff, David P.},
  title =	{{Fast Regression with an \$ell\underlineinfty\$ Guarantee}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{59:1--59:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.59},
  URN =		{urn:nbn:de:0030-drops-74488},
  doi =		{10.4230/LIPIcs.ICALP.2017.59},
  annote =	{Keywords: Linear regression, Count-Sketch, Gaussians, Leverage scores, ell\underlineinfty-guarantee}
}
Document
Embeddings of Schatten Norms with Applications to Data Streams

Authors: Yi Li and David P. Woodruff


Abstract
Given an n×d matrix A, its Schatten-p norm, p >= 1, is defined as |A|_p = (sum_{i=1}^{rank(A)} sigma_i(A)^p)^{1/p}, where sigma_i(A) is the i-th largest singular value of A. These norms have been studied in functional analysis in the context of non-commutative L_p-spaces, and recently in data stream and linear sketching models of computation. Basic questions on the relations between these norms, such as their embeddability, are still open. Specifically, given a set of matrices A_1, ..., A_poly(nd) in R^{n x d}, suppose we want to construct a linear map L such that L(A_i) in R^{n' x d'} for each i, where n' < n and d' < d, and further, |A_i|_p <= |L(A_i)|_q <= D_{p,q}|A_i|_p for a given approximation factor D_{p,q} and real number q >= 1. Then how large do n' and d' need to be as a function of D_{p,q}? We nearly resolve this question for every p, q >= 1, for the case where L(A_i) can be expressed as R*A_i*S, where R and S are arbitrary matrices that are allowed to depend on A_1, ..., A_poly(nd), that is, L(A_i) can be implemented by left and right matrix multiplication. Namely, for every p, q >= 1, we provide nearly matching upper and lower bounds on the size of n' and d' as a function of D_{p,q}. Importantly, our upper bounds are oblivious, meaning that R and S do not depend on the A_i, while our lower bounds hold even if R and S depend on the A_i. As an application of our upper bounds, we answer a recent open question of Blasiok et al. about space-approximation trade-offs for the Schatten 1-norm, showing that in a data stream it is possible to estimate the Schatten-1 norm up to a factor of D >= 1 using O~(min(n, d)^2/D^4) space.
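
As a quick illustration of the objects involved (purely illustrative, not the paper's construction), the following NumPy sketch computes Schatten-p norms from singular values and forms a two-sided map L(A) = R*A*S with random R and S; the dimensions and the choice of R and S below are assumptions made only for this example and do not achieve the paper's distortion bounds.

import numpy as np

def schatten(A, p):
    """Schatten-p norm: the l_p norm of the vector of singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(1)
n, d, n2, d2 = 100, 80, 30, 25      # original and reduced dimensions (illustrative)
A = rng.standard_normal((n, d))

# An oblivious two-sided map L(A) = R @ A @ S with random R and S.
# This only shows the *form* of the embeddings studied in the paper.
R = rng.standard_normal((n2, n)) / np.sqrt(n2)
S = rng.standard_normal((d, d2)) / np.sqrt(d2)
LA = R @ A @ S

for p in (1, 2, 4):
    print(f"Schatten-{p}: |A| = {schatten(A, p):.3f}   |L(A)| = {schatten(LA, p):.3f}")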

Cite as

Yi Li and David P. Woodruff. Embeddings of Schatten Norms with Applications to Data Streams. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ICALP.2017.60,
  author =	{Li, Yi and Woodruff, David P.},
  title =	{{Embeddings of Schatten Norms with Applications to Data Streams}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{60:1--60:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.60},
  URN =		{urn:nbn:de:0030-drops-73726},
  doi =		{10.4230/LIPIcs.ICALP.2017.60},
  annote =	{Keywords: data stream algorithms, embeddings, matrix norms, sketching}
}
Document
On Fast Decoding of High-Dimensional Signals from One-Bit Measurements

Authors: Vasileios Nakos


Abstract
In the problem of one-bit compressed sensing, the goal is to find a delta-close estimation of a k-sparse vector x in R^n given the signs of the entries of y = Phi x, where Phi is called the measurement matrix. For the one-bit compressed sensing problem, previous work [Plan, 2013][Gopi, 2013] achieved Theta(delta^{-2} k log(n/k)) and O~(delta^{-1} k log(n/k)) measurements, respectively, but the decoding time was Omega(n k log(n/k)). In this paper, using tools and techniques developed in the context of two-stage group testing and streaming algorithms, we contribute towards the direction of sub-linear decoding time. We give a variety of schemes for the different versions of one-bit compressed sensing, such as the for-each and for-all versions, and for support recovery; all of these have at most a log k overhead in the number of measurements and poly(k, log n) decoding time, which is an exponential improvement over previous work in terms of the dependence on n.
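
The measurement model can be illustrated with a few lines of NumPy (a toy sketch, not the paper's scheme): it generates sign measurements y = sign(Phi x) of a k-sparse x and recovers the support with a naive correlation decoder, which runs in time linear in n rather than the poly(k, log n) decoding time obtained in the paper; all sizes below are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
n, k, m = 1000, 5, 400           # ambient dimension, sparsity, number of measurements (illustrative)

# A k-sparse unit-norm signal x.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)

# One-bit measurements: only the signs of Phi @ x are observed.
Phi = rng.standard_normal((m, n))
y = np.sign(Phi @ x)

# Naive decoder: correlate the observed sign pattern with the columns of Phi
# and keep the k largest scores as the estimated support.
scores = Phi.T @ y / m
est_support = np.argsort(-np.abs(scores))[:k]

print("true support     :", sorted(support.tolist()))
print("recovered support:", sorted(est_support.tolist()))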

Cite as

Vasileios Nakos. On Fast Decoding of High-Dimensional Signals from One-Bit Measurements. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{nakos:LIPIcs.ICALP.2017.61,
  author =	{Nakos, Vasileios},
  title =	{{On Fast Decoding of High-Dimensional Signals from One-Bit Measurements}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{61:1--61:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.61},
  URN =		{urn:nbn:de:0030-drops-74887},
  doi =		{10.4230/LIPIcs.ICALP.2017.61},
  annote =	{Keywords: one-bit compressed sensing, sparse recovery, heavy hitters, dyadic trick, combinatorial group testing}
}
Document
String Inference from Longest-Common-Prefix Array

Authors: Juha Kärkkäinen, Marcin Piatkowski, and Simon J. Puglisi


Abstract
The suffix array, perhaps the most important data structure in modern string processing, is often augmented with the longest common prefix (LCP) array, which stores the lengths of the LCPs for lexicographically adjacent suffixes of a string. Together the two arrays are roughly equivalent to the suffix tree, with the LCP array representing the tree shape. In order to better understand the combinatorics of LCP arrays, we consider the problem of inferring a string from an LCP array, i.e., determining whether a given array of integers is a valid LCP array, and if it is, reconstructing some string or all strings with that LCP array. There are recent studies of inferring a string from a suffix tree shape but using significantly more information (in the form of suffix links) than is available in the LCP array. We provide two main results. (1) We describe two algorithms for inferring strings from an LCP array when we allow a generalized form of LCP array defined for a multiset of cyclic strings: a linear time algorithm for a binary alphabet and a general algorithm with polynomial time complexity for a constant alphabet size. (2) We prove that determining whether a given integer array is a valid LCP array is NP-complete when we require more restricted forms of LCP array defined for a single cyclic or non-cyclic string or a multiset of non-cyclic strings. The result holds whether or not the alphabet is restricted to be binary. In combination, the two results show that the generalized form of LCP array for a multiset of cyclic strings is fundamentally different from the other more restricted forms.
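
For readers less familiar with the object being inferred, the following short Python sketch (unrelated to the paper's algorithms) builds the suffix array of a string by naive sorting and then the LCP array of lexicographically adjacent suffixes.

def suffix_and_lcp_arrays(s):
    """Naive O(n^2 log n) construction; fine for illustrating the definitions."""
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])          # suffix array
    def lcp_len(i, j):
        length = 0
        while i + length < n and j + length < n and s[i + length] == s[j + length]:
            length += 1
        return length
    # lcp[i] = LCP length of the i-th and (i-1)-th suffixes in sorted order (lcp[0] = 0).
    lcp = [0] + [lcp_len(sa[i - 1], sa[i]) for i in range(1, n)]
    return sa, lcp

sa, lcp = suffix_and_lcp_arrays("banana")
print("suffix array:", sa)    # [5, 3, 1, 0, 4, 2]
print("LCP array   :", lcp)   # [0, 1, 3, 0, 0, 2]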

Cite as

Juha Kärkkäinen, Marcin Piatkowski, and Simon J. Puglisi. String Inference from Longest-Common-Prefix Array. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 62:1-62:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{karkkainen_et_al:LIPIcs.ICALP.2017.62,
  author =	{K\"{a}rkk\"{a}inen, Juha and Piatkowski, Marcin and Puglisi, Simon J.},
  title =	{{String Inference from Longest-Common-Prefix Array}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{62:1--62:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.62},
  URN =		{urn:nbn:de:0030-drops-74989},
  doi =		{10.4230/LIPIcs.ICALP.2017.62},
  annote =	{Keywords: LCP array, string inference, BWT, suffix array, suffix tree, NP-hardness}
}
Document
Neighborhood Complexity and Kernelization for Nowhere Dense Classes of Graphs

Authors: Kord Eickmeyer, Archontia C. Giannopoulou, Stephan Kreutzer, O-joung Kwon, Michal Pilipczuk, Roman Rabinovich, and Sebastian Siebertz


Abstract
We prove that whenever G is a graph from a nowhere dense graph class C, and A is a subset of vertices of G, then the number of subsets of A that are realized as intersections of A with r-neighborhoods of vertices of G is at most f(r,eps)|A|^(1+eps), where r is any positive integer, eps is any positive real, and f is a function that depends only on the class C. This yields a characterization of nowhere dense classes of graphs in terms of neighborhood complexity, which answers a question posed by [Reidl et al., CoRR, 2016]. As an algorithmic application of the above result, we show that for every fixed integer r, the parameterized Distance-r Dominating Set problem admits an almost linear kernel on any nowhere dense graph class. This proves a conjecture posed by [Drange et al., STACS 2016], and shows that the limit of parameterized tractability of Distance-r Dominating Set on subgraph-closed graph classes lies exactly on the boundary between nowhere denseness and somewhere denseness.
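
The quantity bounded by the theorem can be computed directly by brute force on small instances; the sketch below (an illustration only, not part of the paper) counts the distinct traces N_r[v] ∩ A over all vertices v of a small example graph.

from collections import deque

def r_neighborhood(adj, v, r):
    """Vertices at distance at most r from v, found by BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def neighborhood_complexity(adj, A, r):
    """Number of distinct intersections of A with r-neighborhoods of vertices."""
    traces = {frozenset(r_neighborhood(adj, v, r) & A) for v in adj}
    return len(traces)

# A path on 8 vertices (a very sparse, hence nowhere dense, example).
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
A = {0, 3, 6}
print(neighborhood_complexity(adj, A, r=1))   # number of distinct traces of A at radius 1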

Cite as

Kord Eickmeyer, Archontia C. Giannopoulou, Stephan Kreutzer, O-joung Kwon, Michal Pilipczuk, Roman Rabinovich, and Sebastian Siebertz. Neighborhood Complexity and Kernelization for Nowhere Dense Classes of Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{eickmeyer_et_al:LIPIcs.ICALP.2017.63,
  author =	{Eickmeyer, Kord and Giannopoulou, Archontia C. and Kreutzer, Stephan and Kwon, O-joung and Pilipczuk, Michal and Rabinovich, Roman and Siebertz, Sebastian},
  title =	{{Neighborhood Complexity and Kernelization for Nowhere Dense Classes of Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.63},
  URN =		{urn:nbn:de:0030-drops-74288},
  doi =		{10.4230/LIPIcs.ICALP.2017.63},
  annote =	{Keywords: Graph Structure Theory, Nowhere Dense Graphs, Parameterized Complexity, Kernelization, Dominating Set}
}
Document
Additive Spanners and Distance Oracles in Quadratic Time

Authors: Mathias Bæk Tejs Knudsen


Abstract
Let G be an unweighted, undirected graph. An additive k-spanner of G is a subgraph H that approximates all distances between pairs of nodes up to an additive error of +k, that is, it satisfies d_H(u,v) <= d_G(u,v)+k for all nodes u,v, where d is the shortest path distance. We give a deterministic algorithm that constructs an additive O(1)-spanner with O(n^(4/3)) edges in O(n^2) time. This should be compared with the randomized Monte Carlo algorithm by Woodruff [ICALP 2010] giving an additive 6-spanner with O(n^(4/3)log^3 n) edges in expected time O(n^2 log^2 n). An (alpha,beta)-approximate distance oracle for G is a data structure that supports the following distance queries between pairs of nodes in G. Given two nodes u, v it can in constant time compute a distance estimate d' that satisfies d <= d' <= alpha d + beta where d is the distance between u and v in G. Sommer [ICALP 2016] gave a randomized Monte Carlo (2,1)-distance oracle of size O(n^(5/3) polylog n) in expected time O(n^2 polylog n). As an application of the additive O(1)-spanner we improve the construction by Sommer [ICALP 2016] and give a Las Vegas (2,1)-distance oracle of size O(n^(5/3)) in time O(n^2). This also implies an algorithm that in O(n^2) time gives approximate distance for all pairs of nodes in G improving on the O(n^2 log n) algorithm by Baswana and Kavitha [SICOMP 2010].
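
The additive-spanner guarantee itself is easy to verify by brute force on small graphs; the following Python sketch (an illustration of the definition, not of the paper's O(n^2)-time construction) checks d_H(u,v) <= d_G(u,v) + k for all pairs via BFS.

from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_additive_spanner(adj_G, adj_H, k):
    """Check d_H(u,v) <= d_G(u,v) + k for every pair (small graphs only)."""
    for u in adj_G:
        dG, dH = bfs_dist(adj_G, u), bfs_dist(adj_H, u)
        for v in dG:
            if dH.get(v, float("inf")) > dG[v] + k:
                return False
    return True

# G is a 4-cycle, H is G minus one edge; the resulting path is an additive 2-spanner of the cycle.
adj_G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
adj_H = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_additive_spanner(adj_G, adj_H, 2))   # True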

Cite as

Mathias Bæk Tejs Knudsen. Additive Spanners and Distance Oracles in Quadratic Time. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 64:1-64:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{knudsen:LIPIcs.ICALP.2017.64,
  author =	{Knudsen, Mathias B{\ae}k Tejs},
  title =	{{Additive Spanners and Distance Oracles in Quadratic Time}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{64:1--64:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.64},
  URN =		{urn:nbn:de:0030-drops-73924},
  doi =		{10.4230/LIPIcs.ICALP.2017.64},
  annote =	{Keywords: graph algorithms, data structures, additive spanners, distance oracles}
}
Document
Finding, Hitting and Packing Cycles in Subexponential Time on Unit Disk Graphs

Authors: Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi


Abstract
We give algorithms with running time 2^{O({\sqrt{k}\log{k}})} n^{O(1)} for the following problems. Given an n-vertex unit disk graph G and an integer k, decide whether G contains (i) a path on exactly/at least k vertices, (ii) a cycle on exactly k vertices, (iii) a cycle on at least k vertices, (iv) a feedback vertex set of size at most k, and (v) a set of k pairwise vertex disjoint cycles. For the first three problems, no subexponential time parameterized algorithms were previously known. For the remaining two problems, our algorithms significantly outperform the previously best known parameterized algorithms that run in time 2^{O(k^{0.75}\log{k})} n^{O(1)}. Our algorithms are based on a new kind of tree decompositions of unit disk graphs where the separators can have size up to k^{O(1)} and there exists a solution that crosses every separator at most O(\sqrt{k}) times. The running times of our algorithms are optimal up to the log{k} factor in the exponent, assuming the Exponential Time Hypothesis.
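
For orientation, the input model can be generated in a few lines (an illustration only, not related to the paper's algorithms): a unit disk graph on a set of points in the plane has an edge between two points exactly when their Euclidean distance is at most 1; the point sample below is an arbitrary choice for the example.

import math
import random

def unit_disk_graph(points):
    """Adjacency lists: edge between two points iff their Euclidean distance is at most 1."""
    n = len(points)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= 1.0:
                adj[i].append(j)
                adj[j].append(i)
    return adj

random.seed(3)
pts = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(15)]
print(unit_disk_graph(pts))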

Cite as

Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi. Finding, Hitting and Packing Cycles in Subexponential Time on Unit Disk Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 65:1-65:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ICALP.2017.65,
  author =	{Fomin, Fedor V. and Lokshtanov, Daniel and Panolan, Fahad and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Finding, Hitting and Packing Cycles in Subexponential Time on Unit Disk Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{65:1--65:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.65},
  URN =		{urn:nbn:de:0030-drops-73937},
  doi =		{10.4230/LIPIcs.ICALP.2017.65},
  annote =	{Keywords: Longest Cycle, Cycle Packing, Feedback Vertex Set, Unit Disk Graph, Parameterized Complexity}
}
Document
A Polynomial-Time Randomized Reduction from Tournament Isomorphism to Tournament Asymmetry

Authors: Pascal Schweitzer


Abstract
The paper develops a new technique to extract a characteristic subset from a random source that repeatedly samples from a set of elements. Here a characteristic subset is a set that when containing an element contains all elements that have the same probability. With this technique at hand the paper looks at the special case of the tournament isomorphism problem that stands in the way towards a polynomial-time algorithm for the graph isomorphism problem. Noting that there is a reduction from the automorphism (asymmetry) problem to the isomorphism problem, a reduction in the other direction is nevertheless not known and remains a thorny open problem. Applying the new technique, we develop a randomized polynomial-time Turing-reduction from the tournament isomorphism problem to the tournament automorphism problem. This is the first such reduction for any kind of combinatorial object not known to have a polynomial-time solvable isomorphism problem.

Cite as

Pascal Schweitzer. A Polynomial-Time Randomized Reduction from Tournament Isomorphism to Tournament Asymmetry. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{schweitzer:LIPIcs.ICALP.2017.66,
  author =	{Schweitzer, Pascal},
  title =	{{A Polynomial-Time Randomized Reduction from Tournament Isomorphism to Tournament Asymmetry}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.66},
  URN =		{urn:nbn:de:0030-drops-74928},
  doi =		{10.4230/LIPIcs.ICALP.2017.66},
  annote =	{Keywords: graph isomorphism, asymmetry, tournaments, randomized reductions}
}
Document
A (1+epsilon)-Approximation for Unsplittable Flow on a Path in Fixed-Parameter Running Time

Authors: Andreas Wiese


Abstract
Unsplittable Flow on a Path (UFP) is a well-studied problem. It arises in many different settings such as bandwidth allocation, scheduling, and caching. We are given a path with capacities on the edges and a set of tasks, each of which is described by a start vertex, an end vertex, and a demand. The goal is to select as many tasks as possible such that the total demand of the selected tasks using each edge does not exceed the capacity of this edge. The problem admits a QPTAS, and the best known polynomial time result is a (2+epsilon)-approximation. As we prove in this paper, the problem is intractable for fixed-parameter algorithms since it is W[1]-hard. A PTAS seems difficult to construct. However, we show that if we combine the paradigms of approximation algorithms and fixed-parameter tractability we can break the mentioned boundaries. We show that on instances with |OPT| = k we can compute a (1+epsilon)-approximation in time 2^{O(k log k)} n^{O_epsilon(1)} log(u_max) (where u_max is the maximum edge capacity). To obtain this algorithm we develop new insights for UFP and enrich a recent dynamic programming framework for the problem. Our results yield a PTAS for (unweighted) UFP instances where |OPT| is at most O(log n/log log n), and they imply that the problem does not admit an EPTAS, unless W[1] = FPT.

Cite as

Andreas Wiese. A (1+epsilon)-Approximation for Unsplittable Flow on a Path in Fixed-Parameter Running Time. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 67:1-67:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{wiese:LIPIcs.ICALP.2017.67,
  author =	{Wiese, Andreas},
  title =	{{A (1+epsilon)-Approximation for Unsplittable Flow on a Path in Fixed-Parameter Running Time}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{67:1--67:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.67},
  URN =		{urn:nbn:de:0030-drops-74154},
  doi =		{10.4230/LIPIcs.ICALP.2017.67},
  annote =	{Keywords: Combinatorial optimization, Approximation algorithms, Fixed-parameter algorithms, Unsplittable Flow on a Path}
}
Document
Linear-Time Kernelization for Feedback Vertex Set

Authors: Yoichi Iwata


Abstract
In this paper, we give an algorithm that, given an undirected graph G of m edges and an integer k, computes a graph G' and an integer k' in O(k^4 m) time such that (1) the size of the graph G' is O(k^2), (2) k' \leq k, and (3) G has a feedback vertex set of size at most k if and only if G' has a feedback vertex set of size at most k'. This is the first linear-time polynomial-size kernel for Feedback Vertex Set. The size of our kernel is 2k^2+k vertices and 4k^2 edges, which is smaller than the previous best of 4k^2 vertices and 8k^2 edges. Thus, we improve the size and the running time simultaneously. We note that under the assumption of NP \not\subseteq coNP/poly, Feedback Vertex Set does not admit an O(k^{2-\epsilon})-size kernel for any \epsilon>0. Our kernel exploits k-submodular relaxation, which is a recently developed technique for obtaining efficient FPT algorithms for various problems. The dual of k-submodular relaxation of Feedback Vertex Set can be seen as a half-integral variant of A-path packing, and to obtain the linear-time complexity, we give an efficient augmenting-path algorithm for this problem. We believe that this combinatorial algorithm is of independent interest. A solver based on the proposed method won first place in the 1st Parameterized Algorithms and Computational Experiments (PACE) challenge.

Cite as

Yoichi Iwata. Linear-Time Kernelization for Feedback Vertex Set. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{iwata:LIPIcs.ICALP.2017.68,
  author =	{Iwata, Yoichi},
  title =	{{Linear-Time Kernelization for Feedback Vertex Set}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.68},
  URN =		{urn:nbn:de:0030-drops-74301},
  doi =		{10.4230/LIPIcs.ICALP.2017.68},
  annote =	{Keywords: FPT Algorithms, Kernelization, Path Packing, Half-integrality}
}
Document
Exact Algorithms via Multivariate Subroutines

Authors: Serge Gaspers and Edward J. Lee


Abstract
We consider the family of Phi-Subset problems, where the input consists of an instance I of size N over a universe U_I of size n, and the task is to check whether the universe contains a subset with property Phi (e.g., Phi could be the property of being a feedback vertex set of size at most k for the input graph). Our main tool is a simple randomized algorithm which solves Phi-Subset in time (1+b-(1/c))^n N^{O(1)}, provided that there is an algorithm for the Phi-Extension problem with running time b^{n-|X|} c^k N^{O(1)}. Here, the input for Phi-Extension is an instance I of size N over a universe U_I of size n, a subset X \subseteq U_I, and an integer k, and the task is to check whether there is a set Y with X \subseteq Y \subseteq U_I and |Y \ X| <= k with property Phi. We derandomize this algorithm at the cost of increasing the running time by a subexponential factor in n, and we adapt it to the enumeration setting where we need to enumerate all subsets of the universe with property Phi. This generalizes the results of Fomin et al. [STOC 2016], who proved the case where b=1. As case studies, we use these results to design faster deterministic algorithms for: - checking whether a graph has a feedback vertex set of size at most k, - enumerating all minimal feedback vertex sets, - enumerating all minimal vertex covers of size at most k, and - enumerating all minimal 3-hitting sets. We obtain these results by deriving new b^{n-|X|} c^k N^{O(1)}-time algorithms for the corresponding Phi-Extension problems (or their enumeration variants). In some cases this is done by adapting the analysis of an existing algorithm, and in other cases by designing a new algorithm. Our analyses are based on Measure and Conquer, but the value to minimize, 1+b-(1/c), is unconventional and requires non-convex optimization.

Cite as

Serge Gaspers and Edward J. Lee. Exact Algorithms via Multivariate Subroutines. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 69:1-69:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gaspers_et_al:LIPIcs.ICALP.2017.69,
  author =	{Gaspers, Serge and Lee, Edward J.},
  title =	{{Exact Algorithms via Multivariate Subroutines}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{69:1--69:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.69},
  URN =		{urn:nbn:de:0030-drops-74251},
  doi =		{10.4230/LIPIcs.ICALP.2017.69},
  annote =	{Keywords: enumeration algorithms, exponential time algorithms, feedback vertex set, hitting set}
}
Document
Exploring the Complexity of Layout Parameters in Tournaments and Semi-Complete Digraphs

Authors: Florian Barbero, Christophe Paul, and Michal Pilipczuk


Abstract
A simple digraph is semi-complete if for any two of its vertices u and v, at least one of the arcs (u,v) and (v,u) is present. We study the complexity of computing two layout parameters of semi-complete digraphs: cutwidth and optimal linear arrangement (OLA). We prove that: - Both parameters are NP-hard to compute and the known exact and parameterized algorithms for them have essentially optimal running times, assuming the Exponential Time Hypothesis. - The cutwidth parameter admits a quadratic Turing kernel, whereas it does not admit any polynomial kernel unless coNP/poly contains NP. By contrast, OLA admits a linear kernel. These results essentially complete the complexity analysis of computing cutwidth and OLA on semi-complete digraphs. Our techniques can also be used to analyze the sizes of minimal obstructions for having small cutwidth under the induced subdigraph relation.

Cite as

Florian Barbero, Christophe Paul, and Michal Pilipczuk. Exploring the Complexity of Layout Parameters in Tournaments and Semi-Complete Digraphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 70:1-70:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{barbero_et_al:LIPIcs.ICALP.2017.70,
  author =	{Barbero, Florian and Paul, Christophe and Pilipczuk, Michal},
  title =	{{Exploring the Complexity of Layout Parameters in Tournaments and Semi-Complete Digraphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{70:1--70:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.70},
  URN =		{urn:nbn:de:0030-drops-74537},
  doi =		{10.4230/LIPIcs.ICALP.2017.70},
  annote =	{Keywords: cutwidth, OLA, tournament, semi-complete digraph}
}
Document
Packing Cycles Faster Than Erdos-Posa

Authors: Daniel Lokshtanov, Amer E. Mouawad, Saket Saurabh, and Meirav Zehavi


Abstract
The Cycle Packing problem asks whether a given undirected graph G=(V,E) contains k vertex-disjoint cycles. Since the publication of the classic Erdos-Posa theorem in 1965, this problem has received significant scientific attention in the fields of Graph Theory and Algorithm Design. In particular, this problem is one of the first problems studied in the framework of Parameterized Complexity. The non-uniform fixed-parameter tractability of Cycle Packing follows from the Robertson–Seymour theorem, a fact already observed by Fellows and Langston in the 1980s. In 1994, Bodlaender showed that Cycle Packing can be solved in time 2^{O(k^2)}|V| using exponential space. In case a solution exists, Bodlaender's algorithm also outputs a solution (in the same time). It has later become common knowledge that Cycle Packing admits a 2^{O(k\log^2 k)}|V|-time (deterministic) algorithm using exponential space, which is a consequence of the Erdos-Posa theorem. Nowadays, the design of this algorithm is given as an exercise in textbooks on Parameterized Complexity. Yet, no algorithm that runs in time 2^{o(k\log^2 k)}|V|^{O(1)}, beating the bound 2^{O(k\log^2 k)}\cdot |V|^{O(1)}, has been found. In light of this, it seems natural to ask whether the 2^{O(k\log^2 k)}|V|^{O(1)} bound is essentially optimal. In this paper, we answer this question negatively by developing a 2^{O(k\log^2 k/\log\log k)}|V|-time (deterministic) algorithm for Cycle Packing. In case a solution exists, our algorithm also outputs a solution (in the same time). Moreover, apart from beating the known bound, our algorithm runs in time linear in |V|, and its space complexity is polynomial in the input size.

Cite as

Daniel Lokshtanov, Amer E. Mouawad, Saket Saurabh, and Meirav Zehavi. Packing Cycles Faster Than Erdos-Posa. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 71:1-71:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.ICALP.2017.71,
  author =	{Lokshtanov, Daniel and Mouawad, Amer E. and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Packing Cycles Faster Than Erdos-Posa}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{71:1--71:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.71},
  URN =		{urn:nbn:de:0030-drops-73857},
  doi =		{10.4230/LIPIcs.ICALP.2017.71},
  annote =	{Keywords: Parameterized Complexity, Graph Algorithms, Cycle Packing, Erd\"{o}s-P\'{o}sa Theorem}
}
Document
An Efficient Strongly Connected Components Algorithm in the Fault Tolerant Model

Authors: Surender Baswana, Keerti Choudhary, and Liam Roditty


Abstract
In this paper we study the problem of maintaining the strongly connected components of a graph in the presence of failures. In particular, we show that given a directed graph G=(V,E) with n=|V| and m=|E|, and an integer value k\geq 1, there is an algorithm that computes in O(2^{k} n log^2 n) time, for any set F of size at most k, the strongly connected components of the graph G\F. The running time of our algorithm is almost optimal since the time for outputting the SCCs of G\F is at least \Omega(n). The algorithm uses a data structure that is computed in a preprocessing phase in polynomial time and is of size O(2^{k} n^2). Our result is obtained using a new observation on the relation between strongly connected components (SCCs) and reachability. More specifically, one of the main building blocks in our result is a restricted variant of the problem in which we only compute strongly connected components that intersect a certain path. Restricting our attention to a path allows us to implicitly compute reachability between the path vertices and the rest of the graph in time that depends logarithmically rather than linearly on the size of the path. This new observation alone, however, is not enough, since we need to find an efficient way to represent the strongly connected components using paths. For this purpose we use a mixture of new ideas and classical techniques such as the heavy path decomposition of Sleator and Tarjan and the classical Depth-First-Search algorithm. Although these are by now standard techniques, we are not aware of any usage of them in the context of dynamic maintenance of SCCs. Therefore, we expect that our new insights and mixture of new and old techniques will be of independent interest.
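
To make the maintained object concrete, the sketch below shows only the trivial baseline that the paper's data structure improves upon: recomputing the SCCs of G \ F from scratch by naive forward/backward reachability intersection, which takes O(nm) time per failure set; the dictionary-based graph encoding is an assumption made for the example.

def reach(adj, src):
    """Set of vertices reachable from src (iterative DFS)."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def sccs_after_failures(adj, failed_edges):
    """SCCs of G \\ F via naive forward/backward reachability intersection.

    adj maps every vertex to its list of out-neighbors; failed_edges is a set of (u, v) pairs."""
    fwd = {u: [v for v in vs if (u, v) not in failed_edges] for u, vs in adj.items()}
    bwd = {u: [] for u in adj}
    for u, vs in fwd.items():
        for v in vs:
            bwd[v].append(u)
    remaining, comps = set(adj), []
    while remaining:
        v = next(iter(remaining))
        comp = reach(fwd, v) & reach(bwd, v)   # vertices mutually reachable with v
        comps.append(comp)
        remaining -= comp
    return comps

# A directed 4-cycle; deleting one edge breaks it into four singleton SCCs.
adj = {0: [1], 1: [2], 2: [3], 3: [0]}
print(sccs_after_failures(adj, failed_edges={(3, 0)}))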

Cite as

Surender Baswana, Keerti Choudhary, and Liam Roditty. An Efficient Strongly Connected Components Algorithm in the Fault Tolerant Model. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 72:1-72:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{baswana_et_al:LIPIcs.ICALP.2017.72,
  author =	{Baswana, Surender and Choudhary, Keerti and Roditty, Liam},
  title =	{{An Efficient  Strongly Connected Components Algorithm in the Fault Tolerant Model}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{72:1--72:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.72},
  URN =		{urn:nbn:de:0030-drops-74168},
  doi =		{10.4230/LIPIcs.ICALP.2017.72},
  annote =	{Keywords: Fault tolerant, Directed graph, Strongly connected components}
}
Document
Preserving Distances in Very Faulty Graphs

Authors: Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams


Abstract
Preservers and additive spanners are sparse (hence cheap to store) subgraphs that preserve the distances between given pairs of nodes exactly or with some small additive error, respectively. Since real-world networks are prone to failures, it makes sense to study fault-tolerant versions of the above structures. This turns out to be a surprisingly difficult task. For every small but arbitrary set of edge or vertex failures, the preservers and spanners need to contain replacement paths around the faulted set. Unfortunately, the complexity of the interaction between replacement paths blows up significantly, even from 1 to 2 faults, and the structure of optimal preservers and spanners is poorly understood. In particular, no nontrivial bounds for preservers and additive spanners are known when the number of faults is bigger than 2. Even the answer to the following innocent question is completely unknown: what is the worst-case size of a preserver for a single pair of nodes in the presence of f edge faults? There are no super-linear lower bounds, nor subquadratic upper bounds, for f > 2. In this paper we make substantial progress on this and other fundamental questions: - We present the first truly sub-quadratic size fault-tolerant single-pair preserver in unweighted (possibly directed) graphs: for any n-node graph and any fixed number f of faults, O~(f n^{2-1/2^f}) size suffices. Our result also generalizes to the single-source (all targets) case, and can be used to build new fault-tolerant additive spanners (for all pairs). - The size of the above single-pair preserver grows to O(n^2) for increasing f. We show that this is necessary even in undirected unweighted graphs, and even if one allows for a small additive error: if one aims at size O(n^{2-eps}) for eps > 0, then the additive error has to be Omega(eps f). This surprisingly matches known upper bounds in the literature. - For weighted graphs, we provide matching upper and lower bounds for the single-pair case. Namely, the size of the preserver is Theta(n^2) for f > 1 in both directed and undirected graphs, while for f=1 the size is Theta(n) in undirected graphs. For directed graphs, we have a superlinear upper bound and a matching lower bound. Most of our lower bounds extend to the distance oracle setting, where rather than a subgraph we ask for any compact data structure.

Cite as

Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams. Preserving Distances in Very Faulty Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bodwin_et_al:LIPIcs.ICALP.2017.73,
  author =	{Bodwin, Greg and Grandoni, Fabrizio and Parter, Merav and Vassilevska Williams, Virginia},
  title =	{{Preserving Distances in Very Faulty Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.73},
  URN =		{urn:nbn:de:0030-drops-74906},
  doi =		{10.4230/LIPIcs.ICALP.2017.73},
  annote =	{Keywords: Fault Tolerance, shortest paths, replacement paths}
}
Document
All-Pairs 2-Reachability in O(n^w log n) Time

Authors: Loukas Georgiadis, Daniel Graf, Giuseppe F. Italiano, Nikos Parotsidis, and Przemyslaw Uznanski


Abstract
In the 2-reachability problem we are given a directed graph G and we wish to determine whether there are two (edge or vertex) disjoint paths from u to v, for a given pair of vertices u and v. In this paper, we present an algorithm that computes 2-reachability information for all pairs of vertices in O(n^w log n) time, where n is the number of vertices and w is the matrix multiplication exponent. Hence, we show that the running time of all-pairs 2-reachability is only within a log factor of transitive closure. Moreover, our algorithm produces a witness (i.e., a separating edge or a separating vertex) for all pairs of vertices for which 2-reachability does not hold. By processing these witnesses, we can compute all the edge- and vertex-dominator trees of G in O(n^2) additional time, which in turn enables us to answer various connectivity queries in O(1) time. For instance, we can test in constant time if there is a path from u to v avoiding an edge e, for any pair of query vertices u and v and any query edge e, or if there is a path from u to v avoiding a vertex w, for any query vertices u, v, and w.
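
For a single pair, (edge) 2-reachability can be decided with two rounds of augmenting-path search under unit edge capacities; the sketch below (a per-pair illustration, not the paper's all-pairs matrix-multiplication algorithm) checks whether two edge-disjoint u->v paths exist.

from collections import deque

def two_edge_disjoint_paths(adj, s, t):
    """True iff there are two edge-disjoint s->t paths (unit-capacity max flow >= 2)."""
    cap, nbrs = {}, {}
    for u, vs in adj.items():
        nbrs.setdefault(u, set())
        for v in vs:
            nbrs.setdefault(v, set())
            nbrs[u].add(v); nbrs[v].add(u)
            cap[(u, v)] = cap.get((u, v), 0) + 1
            cap.setdefault((v, u), 0)          # residual (reverse) edge
    def augment():                             # one round of BFS augmentation
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                break
            for v in nbrs[u]:
                if cap.get((u, v), 0) > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return False
        v = t
        while v != s:                          # push one unit of flow back along the path found
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        return True
    return augment() and augment()

# Two edge-disjoint 0->3 paths exist (0-1-3 and 0-2-3); for 0->4 the edge (3,4) is a separating edge.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(two_edge_disjoint_paths(adj, 0, 3))   # True
print(two_edge_disjoint_paths(adj, 0, 4))   # False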

Cite as

Loukas Georgiadis, Daniel Graf, Giuseppe F. Italiano, Nikos Parotsidis, and Przemyslaw Uznanski. All-Pairs 2-Reachability in O(n^w log n) Time. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 74:1-74:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{georgiadis_et_al:LIPIcs.ICALP.2017.74,
  author =	{Georgiadis, Loukas and Graf, Daniel and Italiano, Giuseppe F. and Parotsidis, Nikos and Uznanski, Przemyslaw},
  title =	{{All-Pairs 2-Reachability in O(n^w log n) Time}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{74:1--74:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.74},
  URN =		{urn:nbn:de:0030-drops-74510},
  doi =		{10.4230/LIPIcs.ICALP.2017.74},
  annote =	{Keywords: 2-reachability, All Dominator Trees, Directed Graphs, Boolean Matrix Multiplication}
}
Document
Edge-Orders

Authors: Lena Schlipf and Jens M. Schmidt


Abstract
Canonical orderings and their relatives such as st-numberings have been used as a key tool in algorithmic graph theory for the last decades. Recently, a unifying concept behind all these orders has been identified that links them to well-known graph decompositions into parts that have a prescribed vertex-connectivity. Despite extensive interest in canonical orderings, no analogue of this unifying concept is known for edge-connectivity. In this paper, we establish such a concept named edge-orders and show how to compute (1,1)-edge-orders of 2-edge-connected graphs as well as (2,1)-edge-orders of 3-edge-connected graphs in linear time. While the former can be seen as the edge-variants of st-numberings, the latter are the edge-variants of Mondshein sequences and non-separating ear decompositions. The methods that we use for obtaining such edge-orders differ considerably in almost all details from the ones used for their vertex-counterparts, as different graph-theoretic constructions are used in the inductive proof and standard reductions from edge- to vertex-connectivity are bound to fail. As a first application, we consider the famous Edge-Independent Spanning Tree Conjecture, which asserts that every k-edge-connected graph contains k rooted spanning trees that are pairwise edge-independent. We illustrate the impact of the above edge-orders by deducing algorithms that construct 2 and 3 edge-independent spanning trees of 2- and 3-edge-connected graphs, the latter of which improves the best known running time from O(n^2) to linear time.

Cite as

Lena Schlipf and Jens M. Schmidt. Edge-Orders. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{schlipf_et_al:LIPIcs.ICALP.2017.75,
  author =	{Schlipf, Lena and Schmidt, Jens M.},
  title =	{{Edge-Orders}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.75},
  URN =		{urn:nbn:de:0030-drops-74078},
  doi =		{10.4230/LIPIcs.ICALP.2017.75},
  annote =	{Keywords: edge-order, st-edge-order, canonical ordering, edge-independent spanning tree, Mondshein sequence, linear time}
}
Document
Relaxations of Graph Isomorphism

Authors: Laura Mancinska, David E. Roberson, Robert Samal, Simone Severini, and Antonios Varvitsiotis


Abstract
We introduce a nonlocal game that captures and extends the notion of graph isomorphism. This game can be won in the classical case if and only if the two input graphs are isomorphic. Thus, by considering quantum strategies we are able to define the notion of quantum isomorphism. We also consider the case of more general non-signalling strategies, and show that such a strategy exists if and only if the graphs are fractionally isomorphic. We prove several necessary conditions for quantum isomorphism, including cospectrality, and provide a construction for producing pairs of non-isomorphic graphs that are quantum isomorphic. We then show that both classical and quantum isomorphism can be reformulated as feasibility programs over the completely positive and completely positive semidefinite cones, respectively. This leads us to consider relaxations of (quantum) isomorphism arrived at by relaxing the cone to either the doubly nonnegative (DNN) or positive semidefinite (PSD) cones. We show that DNN-isomorphism is equivalent to the previously defined notion of graph equivalence, a polynomial-time decidable relation that is related to coherent algebras. We also show that PSD-isomorphism implies several types of cospectrality, and that it is equivalent to cospectrality for connected 1-walk-regular graphs. Finally, we show that all of the above-mentioned relations form a strict hierarchy of weaker and weaker relations, with non-signalling/fractional isomorphism being the weakest. The techniques used are an interesting mix of algebra, combinatorics, and quantum information.
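
For orientation, fractional isomorphism (used above without definition) has a standard linear-programming characterization, not specific to this paper: graphs with adjacency matrices A and B are fractionally isomorphic exactly when the following feasibility program has a solution; requiring D to be a permutation matrix instead recovers exact isomorphism, which is the sense in which this is a relaxation.

\exists\, D \in \mathbb{R}^{n \times n}: \qquad A D = D B, \qquad D\mathbf{1} = \mathbf{1}, \qquad D^{\mathsf{T}}\mathbf{1} = \mathbf{1}, \qquad D \geq 0 .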

Cite as

Laura Mancinska, David E. Roberson, Robert Samal, Simone Severini, and Antonios Varvitsiotis. Relaxations of Graph Isomorphism. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{mancinska_et_al:LIPIcs.ICALP.2017.76,
  author =	{Mancinska, Laura and Roberson, David E. and Samal, Robert and Severini, Simone and Varvitsiotis, Antonios},
  title =	{{Relaxations of Graph Isomorphism}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.76},
  URN =		{urn:nbn:de:0030-drops-74697},
  doi =		{10.4230/LIPIcs.ICALP.2017.76},
  annote =	{Keywords: graph isomorphism, quantum information, semidefinite programming}
}
Document
Honest Signaling in Zero-Sum Games Is Hard, and Lying Is Even Harder

Authors: Aviad Rubinstein


Abstract
We prove that, assuming the exponential time hypothesis, finding an epsilon-approximately optimal symmetric signaling scheme in a two-player zero-sum game requires quasi-polynomial time. This is tight by [Cheng et al., FOCS'15] and resolves an open question of [Dughmi, FOCS'14]. We also prove that finding a multiplicative approximation is NP-hard. Furthermore, we introduce a new model in which a dishonest signaler may publicly commit to using one scheme, but post signals according to a different scheme. For this model, we prove that even finding a (1-2^{-n})-approximately optimal scheme is NP-hard.

Cite as

Aviad Rubinstein. Honest Signaling in Zero-Sum Games Is Hard, and Lying Is Even Harder. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 77:1-77:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{rubinstein:LIPIcs.ICALP.2017.77,
  author =	{Rubinstein, Aviad},
  title =	{{Honest Signaling in Zero-Sum Games Is Hard, and Lying Is Even Harder}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{77:1--77:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.77},
  URN =		{urn:nbn:de:0030-drops-74057},
  doi =		{10.4230/LIPIcs.ICALP.2017.77},
  annote =	{Keywords: Signaling, Zero-sum Games, Algorithmic Game Theory, birthday repetition}
}
Document
A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs

Authors: Pasin Manurangsi and Prasad Raghavendra


Abstract
A (k x l)-birthday repetition G^{k x l} of a two-prover game G is a game in which the two provers are sent random sets of questions from G of sizes k and l respectively. These two sets are sampled independently and uniformly among all sets of questions of those particular sizes. We prove the following birthday repetition theorem: when G satisfies some mild conditions, val(G^{k x l}) decreases exponentially in Omega(kl/n), where n is the total number of questions. Our result positively resolves an open question posed by Aaronson, Impagliazzo and Moshkovitz [Aaronson et al., CCC, 2014]. As an application of our birthday repetition theorem, we obtain new fine-grained inapproximability results for dense CSPs. Specifically, we establish a tight trade-off between running time and approximation ratio by showing conditional lower bounds, integrality gaps and approximation algorithms; in particular, for any sufficiently large i and for every k >= 2, we show the following: - We exhibit an O(q^{1/i})-approximation algorithm for dense Max k-CSPs with alphabet size q via O_k(i) levels of the Sherali-Adams relaxation. - Through our birthday repetition theorem, we obtain an integrality gap of q^{1/i} for the Omega_k(i / polylog i)-level Lasserre relaxation for fully-dense Max k-CSP. - Assuming that there is a constant epsilon > 0 such that Max 3SAT cannot be approximated to within a factor of (1 - epsilon) of the optimum in sub-exponential time, our birthday repetition theorem implies that any algorithm that approximates fully-dense Max k-CSP to within a q^{1/i} factor takes (nq)^{Omega_k(i / polylog i)} time, almost tightly matching our algorithmic result. As a corollary of our algorithm for dense Max k-CSP, we give a new approximation algorithm for Densest k-Subhypergraph, a generalization of Densest k-Subgraph to hypergraphs. When the input hypergraph is O(1)-uniform and the optimal k-subhypergraph has constant density, our algorithm finds a k-subhypergraph of density Omega(n^{-1/i}) in time n^{O(i)} for any integer i > 0.

Cite as

Pasin Manurangsi and Prasad Raghavendra. A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 78:1-78:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{manurangsi_et_al:LIPIcs.ICALP.2017.78,
  author =	{Manurangsi, Pasin and Raghavendra, Prasad},
  title =	{{A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{78:1--78:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.78},
  URN =		{urn:nbn:de:0030-drops-74638},
  doi =		{10.4230/LIPIcs.ICALP.2017.78},
  annote =	{Keywords: Birthday Repetition, Constraint Satisfaction Problems, Linear Program}
}
Document
Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis

Authors: Pasin Manurangsi


Abstract
The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small set of vertices whose expansion is almost zero and one in which all small sets of vertices have expansion almost one. In this work, we prove conditional inapproximability results for the following graph problems based on this hypothesis: - Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete bipartite subgraph of G with the maximum number of edges. We show that, assuming SSEH and that NP != BPP, no polynomial-time algorithm gives an n^{1 - epsilon}-approximation for MEB for every constant epsilon > 0. - Maximum Balanced Biclique (MBB): given a bipartite graph G, find a balanced complete bipartite subgraph of G with the maximum number of vertices. Similarly to MEB, we prove an n^{1 - epsilon} inapproximability ratio for MBB for every epsilon > 0, assuming SSEH and that NP != BPP. - Minimum k-Cut: given a weighted graph G, find a set of edges with minimum total weight whose removal splits the graph into k components. We prove that this problem is NP-hard to approximate to within a (2 - epsilon) factor of the optimum for every epsilon > 0, assuming SSEH. The ratios in our results are essentially tight since trivial algorithms give an n-approximation to both MEB and MBB, and 2-approximation algorithms are known for Minimum k-Cut [Saran and Vazirani, SIAM J. Comput., 1995]. Our first two results are proved by combining a technique developed by Raghavendra, Steurer and Tulsiani [Raghavendra et al., CCC, 2012] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [Bansal and Khot, FOCS, 2009], whereas our last result is shown via an elementary reduction.

Cite as

Pasin Manurangsi. Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 79:1-79:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{manurangsi:LIPIcs.ICALP.2017.79,
  author =	{Manurangsi, Pasin},
  title =	{{Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{79:1--79:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.79},
  URN =		{urn:nbn:de:0030-drops-75004},
  doi =		{10.4230/LIPIcs.ICALP.2017.79},
  annote =	{Keywords: Hardness of Approximation, Small Set Expansion Hypothesis}
}
Document
On the Bit Complexity of Sum-of-Squares Proofs

Authors: Prasad Raghavendra and Benjamin Weitz


Abstract
It has often been claimed in recent papers that one can find, via the Ellipsoid algorithm, a degree d Sum-of-Squares proof if one exists. In a recent paper, Ryan O'Donnell notes this widely quoted claim is not necessarily true. He presents an example of a polynomial system with bounded coefficients that admits low-degree proofs of non-negativity, but these proofs necessarily involve numbers with an exponential number of bits, causing the Ellipsoid algorithm to take exponential time. In this paper we obtain both positive and negative results on the bit complexity of SoS proofs. First, we propose a sufficient condition on a polynomial system that implies a bound on the coefficients in an SoS proof. We demonstrate that this sufficient condition is applicable for common use-cases of the SoS algorithm, such as Max-CSP, Balanced Separator, Max-Clique, Max-Bisection, and Unit-Vector constraints. On the negative side, O'Donnell asked whether every polynomial system containing Boolean constraints admits proofs of polynomial bit complexity. We answer this question in the negative, giving a counterexample system and a non-negative polynomial which has a degree-two SoS proof, but no SoS proof with small coefficients below degree sqrt(n).

Cite as

Prasad Raghavendra and Benjamin Weitz. On the Bit Complexity of Sum-of-Squares Proofs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 80:1-80:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{raghavendra_et_al:LIPIcs.ICALP.2017.80,
  author =	{Raghavendra, Prasad and Weitz, Benjamin},
  title =	{{On the Bit Complexity of Sum-of-Squares Proofs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{80:1--80:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.80},
  URN =		{urn:nbn:de:0030-drops-73804},
  doi =		{10.4230/LIPIcs.ICALP.2017.80},
  annote =	{Keywords: Sum-of-Squares, Combinatorial Optimization, Proof Complexity}
}
Document
The Dependent Doors Problem: An Investigation into Sequential Decisions without Feedback

Authors: Amos Korman and Yoav Rodeh


Abstract
We introduce the dependent doors problem as an abstraction for situations in which one must perform a sequence of possibly dependent decisions, without receiving feedback information on the effectiveness of previously made actions. Informally, the problem considers a set of d doors that are initially closed, and the aim is to open all of them as fast as possible. To open a door, the algorithm knocks on it and it might open or not according to some probability distribution. This distribution may depend on which other doors are currently open, as well as on which other doors were open during each of the previous knocks on that door. The algorithm aims to minimize the expected time until all doors open. Crucially, it must act at any time without knowing whether or which other doors have already opened. In this work, we focus on scenarios where dependencies between doors are both positively correlated and acyclic. The fundamental distribution of a door describes the probability it opens in the best of conditions (with respect to other doors being open or closed). We show that if in two configurations of d doors corresponding doors share the same fundamental distribution, then these configurations have the same optimal running time up to a universal constant, no matter what the dependencies between the doors are and what the distributions are. We also identify algorithms that are optimal up to a universal constant factor. For the case in which all doors share the same fundamental distribution we additionally provide a simpler algorithm, and a formula to calculate its running time. We furthermore analyse the price of lacking feedback for several configurations governed by standard fundamental distributions. In particular, we show that the price is logarithmic in d for memoryless doors, but can potentially grow to be linear in d for other distributions. We then turn our attention to investigate precise bounds. Even for the case of two doors, identifying the optimal sequence is an intriguing combinatorial question. Here, we study the case of two cascading memoryless doors. That is, the first door opens on each knock independently with probability p_1. The second door can only open if the first door is open, in which case it will open on each knock independently with probability p_2. We solve this problem almost completely by identifying algorithms that are optimal up to an additive term of 1.
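
As a small illustration of the two cascading memoryless doors discussed at the end of the abstract, the following Python sketch (our own toy code, not taken from the paper) plays a fixed, feedback-free knocking sequence and estimates its expected opening time by Monte Carlo simulation:

import random

def simulate_cascading_doors(knock_sequence, p1, p2, rng=random):
    # Door 1 opens on each knock independently with probability p1; door 2 can
    # only open once door 1 is open, and then opens on each knock independently
    # with probability p2.  The strategy gets no feedback, so it simply follows
    # knock_sequence (a list of 1s and 2s).  Returns the number of knocks until
    # both doors are open, or None if the sequence ends first.
    open1 = open2 = False
    for t, door in enumerate(knock_sequence, start=1):
        if door == 1 and not open1:
            open1 = rng.random() < p1
        elif door == 2 and open1 and not open2:
            open2 = rng.random() < p2
        if open1 and open2:
            return t
    return None

# Estimate the expected time of the simple alternating strategy 1,2,1,2,...
# for p1 = p2 = 0.5 (purely illustrative; not claimed to be optimal).
sequence = [1, 2] * 200
runs = 10000
print(sum(simulate_cascading_doors(sequence, 0.5, 0.5) or len(sequence) for _ in range(runs)) / runs)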

Cite as

Amos Korman and Yoav Rodeh. The Dependent Doors Problem: An Investigation into Sequential Decisions without Feedback. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 81:1-81:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{korman_et_al:LIPIcs.ICALP.2017.81,
  author =	{Korman, Amos and Rodeh, Yoav},
  title =	{{The Dependent Doors Problem: An Investigation into Sequential Decisions without Feedback}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{81:1--81:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.81},
  URN =		{urn:nbn:de:0030-drops-73738},
  doi =		{10.4230/LIPIcs.ICALP.2017.81},
  annote =	{Keywords: No Feedback, Sequential Decisions, Probabilistic Environment, Exploration and Exploitation, Golden Ratio}
}
Document
A Tight Lower Bound for the Capture Time of the Cops and Robbers Game

Authors: Sebastian Brandt, Yuval Emek, Jara Uitto, and Roger Wattenhofer


Abstract
For the game of Cops and Robbers, it is known that in 1-cop-win graphs, the cop can capture the robber in O(n) time, and that there exist graphs in which this capture time is tight. When k >= 2, a simple counting argument shows that in k-cop-win graphs, the capture time is at most O(n^{k + 1}); however, no non-trivial lower bounds were previously known. Indeed, in their 2011 book, Bonato and Nowakowski ask whether this upper bound can be improved. In this paper, the question of Bonato and Nowakowski is answered in the negative, proving that the O(n^{k + 1}) bound is asymptotically tight for any constant k >= 2. This yields a surprising gap in the capture time complexities between the 1-cop and the 2-cop cases.

Cite as

Sebastian Brandt, Yuval Emek, Jara Uitto, and Roger Wattenhofer. A Tight Lower Bound for the Capture Time of the Cops and Robbers Game. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 82:1-82:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{brandt_et_al:LIPIcs.ICALP.2017.82,
  author =	{Brandt, Sebastian and Emek, Yuval and Uitto, Jara and Wattenhofer, Roger},
  title =	{{A Tight Lower Bound for the Capture Time of the Cops and Robbers Game}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{82:1--82:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.82},
  URN =		{urn:nbn:de:0030-drops-74134},
  doi =		{10.4230/LIPIcs.ICALP.2017.82},
  annote =	{Keywords: cops and robbers, capture time, lower bound}
}
Document
Stochastic Control via Entropy Compression

Authors: Dimitris Achlioptas, Fotis Iliopoulos, and Nikos Vlassis


Abstract
Consider an agent trying to bring a system to an acceptable state by repeated probabilistic action. Several recent works on algorithmizations of the Lovász Local Lemma (LLL) can be seen as establishing sufficient conditions for the agent to succeed. Here we study whether such stochastic control is also possible in a noisy environment, where both the process of state-observation and the process of state-evolution are subject to adversarial perturbation (noise). The introduction of noise causes the tools developed for LLL algorithmization to break down since the key LLL ingredient, the sparsity of the causality (dependence) relationship, no longer holds. To overcome this challenge we develop a new analysis where entropy plays a central role, both to measure the rate at which progress towards an acceptable state is made and the rate at which noise undoes this progress. The end result is a sufficient condition that allows a smooth tradeoff between the intensity of the noise and the amenability of the system, recovering an asymmetric LLL condition in the noiseless case.

Cite as

Dimitris Achlioptas, Fotis Iliopoulos, and Nikos Vlassis. Stochastic Control via Entropy Compression. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 83:1-83:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{achlioptas_et_al:LIPIcs.ICALP.2017.83,
  author =	{Achlioptas, Dimitris and Iliopoulos, Fotis and Vlassis, Nikos},
  title =	{{Stochastic Control via Entropy Compression}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{83:1--83:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.83},
  URN =		{urn:nbn:de:0030-drops-74279},
  doi =		{10.4230/LIPIcs.ICALP.2017.83},
  annote =	{Keywords: Stochastic Control, Lovasz Local Lemma}
}
Document
Approximation Strategies for Generalized Binary Search in Weighted Trees

Authors: Dariusz Dereniowski, Adrian Kosowski, Przemyslaw Uznanski, and Mengchuan Zou


Abstract
We consider the following generalization of the binary search problem. A search strategy is required to locate an unknown target node t in a given tree T. Upon querying a node v of the tree, the strategy receives as a reply an indication of the connected component of T\{v} containing the target t. The cost of querying each node is given by a known non-negative weight function, and the considered objective is to minimize the total query cost for a worst-case choice of the target. Designing an optimal strategy for a weighted tree search instance is known to be strongly NP-hard, in contrast to the unweighted variant of the problem which can be solved optimally in linear time. Here, we show that weighted tree search admits a quasi-polynomial time approximation scheme (QPTAS): for any 0 < epsilon < 1, there exists a (1+epsilon)-approximation strategy with a computation time of n^{O(log n / epsilon^2)}. Thus, the problem is not APX-hard, unless NP is contained in DTIME(n^{O(log n)}). By applying a generic reduction, we obtain as a corollary that the studied problem admits a polynomial-time O(sqrt(log n))-approximation. This improves previous tilde-O(log n)-approximation approaches, where the tilde-O-notation disregards O(poly log log n)-factors.
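
The query model is easy to make concrete. The following Python sketch (our own illustration, with hypothetical names) implements the oracle reply for a tree given as an adjacency dictionary: querying a node v returns the neighbour of v that identifies the connected component of T\{v} containing the target:

from collections import deque

def query_reply(adj, v, target):
    # adj: dict mapping each node to the set of its neighbours in the tree T.
    # Returns None if v is the target, and otherwise the neighbour of v whose
    # component of T minus v contains the target (thereby identifying that component).
    if v == target:
        return None
    for u in adj[v]:
        seen, queue = {v, u}, deque([u])     # explore the component entered via u
        while queue:
            x = queue.popleft()
            if x == target:
                return u
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
    raise ValueError("target not in tree")

# Path 1 - 2 - 3 - 4: querying node 2 with target 4 points towards neighbour 3.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(query_reply(adj, 2, 4))   # -> 3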

Cite as

Dariusz Dereniowski, Adrian Kosowski, Przemyslaw Uznanski, and Mengchuan Zou. Approximation Strategies for Generalized Binary Search in Weighted Trees. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 84:1-84:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dereniowski_et_al:LIPIcs.ICALP.2017.84,
  author =	{Dereniowski, Dariusz and Kosowski, Adrian and Uznanski, Przemyslaw and Zou, Mengchuan},
  title =	{{Approximation Strategies for Generalized Binary Search in Weighted Trees}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{84:1--84:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.84},
  URN =		{urn:nbn:de:0030-drops-74507},
  doi =		{10.4230/LIPIcs.ICALP.2017.84},
  annote =	{Keywords: Approximation Algorithm, Adaptive Algorithm, Graph Search, Binary Search, Vertex Ranking, Trees}
}
Document
Tighter Hard Instances for PPSZ

Authors: Pavel Pudlák, Dominik Scheder, and Navid Talebanfard


Abstract
We construct uniquely satisfiable k-CNF formulas that are hard for the PPSZ algorithm, the currently best known algorithm solving k-SAT. This algorithm tries to generate a satisfying assignment by picking one variable at a time in a random order and attempting to derive its value using some inference heuristic and otherwise assigning a random value. The "weak PPSZ" checks all subformulas of a given size to derive a value and the "strong PPSZ" runs resolution with width bounded by some given function. Firstly, we construct graph instances on which "weak PPSZ" has savings of at most (2 + epsilon)/k; the saving of an algorithm on an input formula with n variables is the largest gamma such that the algorithm succeeds (i.e. finds a satisfying assignment) with probability at least 2^{- (1 - gamma) n}. Since PPSZ (both weak and strong) is known to have savings of at least (pi^2 + o(1))/6k, this is optimal up to the constant factor. In particular, for k=3, our upper bound is 2^{0.333... n}, which is fairly close to the lower bound 2^{0.386... n} of Hertli [SIAM J. Comput.'14]. We also construct instances based on linear systems over F_2 for which strong PPSZ has savings of at most O(log(k)/k). This is only a log(k) factor away from the optimal bound. Our constructions improve the previous savings upper bound of O((log^2(k))/k) due to Chen et al. [SODA'13].
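
For orientation, here is a heavily simplified Python sketch of one run of a PPSZ-style procedure (our own illustration; the inference step below is plain unit-clause propagation, which is weaker than the bounded-size-subformula check of weak PPSZ or the bounded-width resolution of strong PPSZ):

import random

def ppsz_style_run(clauses, n, rng=random):
    # One randomized pass: process the variables in a uniformly random order;
    # set a variable to a forced value if some clause has become unit on it,
    # otherwise flip a fair coin.  clauses is a list of clauses, each a list of
    # non-zero ints (+v or -v literals).  Returns a satisfying assignment or None.
    assignment = {}

    def value(lit):
        v = assignment.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def forced(var):
        # Unit-clause inference: some clause is reduced to the single literal +var/-var.
        for clause in clauses:
            if any(value(l) is True for l in clause):
                continue
            unassigned = [l for l in clause if value(l) is None]
            if len(unassigned) == 1 and abs(unassigned[0]) == var:
                return unassigned[0] > 0
        return None

    for var in rng.sample(range(1, n + 1), n):
        guess = forced(var)
        assignment[var] = guess if guess is not None else (rng.random() < 0.5)

    if all(any(value(l) is True for l in clause) for clause in clauses):
        return assignment
    return None

# Toy CNF (x1 or x2) and (not x1 or x2); repeat runs until one succeeds.
cnf = [[1, 2], [-1, 2]]
result = None
while result is None:
    result = ppsz_style_run(cnf, 2)
print(result)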

Cite as

Pavel Pudlák, Dominik Scheder, and Navid Talebanfard. Tighter Hard Instances for PPSZ. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 85:1-85:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{pudlak_et_al:LIPIcs.ICALP.2017.85,
  author =	{Pudl\'{a}k, Pavel and Scheder, Dominik and Talebanfard, Navid},
  title =	{{Tighter Hard Instances for PPSZ}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{85:1--85:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.85},
  URN =		{urn:nbn:de:0030-drops-74144},
  doi =		{10.4230/LIPIcs.ICALP.2017.85},
  annote =	{Keywords: k-SAT, Strong Exponential Time Hypothesis, PPSZ, Resolution}
}
Document
Subspace Designs Based on Algebraic Function Fields

Authors: Venkatesan Guruswami, Chaoping Xing, and Chen Yuan


Abstract
Subspace designs are a (large) collection of high-dimensional subspaces {H_i} of F_q^m such that for any low-dimensional subspace W, only a small number of subspaces from the collection have non-trivial intersection with W; more precisely, the sum of dimensions of W cap H_i is at most some parameter L. The notion was put forth by Guruswami and Xing (STOC'13) with applications to list decoding variants of Reed-Solomon and algebraic-geometric codes, and later also used for explicit rank-metric codes with optimal list decoding radius. Guruswami and Kopparty (FOCS'13, Combinatorica'16) gave an explicit construction of subspace designs with near-optimal parameters. This construction was based on polynomials and has close connections to folded Reed-Solomon codes, and required large field size (specifically q >= m). Forbes and Guruswami (RANDOM'15) used this construction to give explicit constant degree "dimension expanders" over large fields, and noted that subspace designs are a powerful tool in linear-algebraic pseudorandomness. Here, we construct subspace designs over any field, at the expense of a modest worsening of the bound L on total intersection dimension. Our approach is based on a (non-trivial) extension of the polynomial-based construction to algebraic function fields, and instantiating the approach with cyclotomic function fields. Plugging in our new subspace designs in the construction of Forbes and Guruswami yields dimension expanders over F^n for any field F, with logarithmic degree and expansion guarantee for subspaces of dimension Omega(n/(log(log(n)))).

Cite as

Venkatesan Guruswami, Chaoping Xing, and Chen Yuan. Subspace Designs Based on Algebraic Function Fields. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 86:1-86:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.ICALP.2017.86,
  author =	{Guruswami, Venkatesan and Xing, Chaoping and Yuan, Chen},
  title =	{{Subspace Designs Based on Algebraic Function Fields}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{86:1--86:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.86},
  URN =		{urn:nbn:de:0030-drops-73712},
  doi =		{10.4230/LIPIcs.ICALP.2017.86},
  annote =	{Keywords: Subspace Design, Dimension Expander, List Decoding}
}
Document
Bipartite Perfect Matching in Pseudo-Deterministic NC

Authors: Shafi Goldwasser and Ofer Grossman


Abstract
We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses poly(n) processors, poly(log n) depth, poly(log n) random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also find a pseudo-deterministic NC algorithm for constructing a depth first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC and a novel set of weight assignments which in combination enable isolating a unique matching in a graph. We then show a way to use pseudo-deterministic algorithms to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use the technique to show an RNC algorithm for constructing a depth first search (DFS) tree using only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), and a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art. Furthermore, we prove that a positive resolution of the decision question NC = RNC would imply an NC algorithm for finding a bipartite perfect matching and a DFS tree. This is not implied by previous randomized NC search algorithms for finding bipartite perfect matching, but is implied by the existence of a pseudo-deterministic NC search algorithm.

Cite as

Shafi Goldwasser and Ofer Grossman. Bipartite Perfect Matching in Pseudo-Deterministic NC. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 87:1-87:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{goldwasser_et_al:LIPIcs.ICALP.2017.87,
  author =	{Goldwasser, Shafi and Grossman, Ofer},
  title =	{{Bipartite Perfect Matching in Pseudo-Deterministic NC}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{87:1--87:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.87},
  URN =		{urn:nbn:de:0030-drops-74824},
  doi =		{10.4230/LIPIcs.ICALP.2017.87},
  annote =	{Keywords: Parallel Algorithms, Pseudo-determinism, RNC, Perfect Matching}
}
Document
A Linear Lower Bound for Incrementing a Space-Optimal Integer Representation in the Bit-Probe Model

Authors: Mikhail Raskin


Abstract
We present the first linear lower bound for the number of bits required to be accessed in the worst case to increment an integer in an arbitrary space-optimal binary representation. The best previously known lower bound was logarithmic. It is known that a logarithmic number of read bits in the worst case is enough to increment some of the integer representations that use one bit of redundancy; therefore, we show an exponential gap between space-optimal and redundant counters. Our proof is based on considering the increment procedure for a space-optimal counter as a permutation and calculating its parity. For every space-optimal counter, the permutation must be odd, and implementing an odd permutation requires reading at least half the bits in the worst case. The combination of these two observations explains why the worst-case space-optimal problem is substantially different from both the average-case approach with a constant expected number of reads and almost-space-optimal representations with a logarithmic number of reads in the worst case.
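
The parity argument is easy to see on the standard n-bit binary counter: its increment map is a single cycle of length 2^n, hence an odd permutation. The Python sketch below (our own illustration of the parity computation only; the paper's argument covers arbitrary space-optimal representations) derives the parity from the cycle decomposition:

def increment_permutation_parity(n):
    # Parity of the map x -> (x + 1) mod 2^n on n-bit strings, computed from its
    # cycle decomposition: a cycle of length k contributes k - 1 transpositions.
    size = 1 << n
    seen = [False] * size
    transpositions = 0
    for start in range(size):
        if seen[start]:
            continue
        x = start
        length = 0
        while not seen[x]:
            seen[x] = True
            x = (x + 1) % size
            length += 1
        transpositions += length - 1
    return 'odd' if transpositions % 2 else 'even'

print([increment_permutation_parity(n) for n in range(1, 6)])   # all 'odd'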

Cite as

Mikhail Raskin. A Linear Lower Bound for Incrementing a Space-Optimal Integer Representation in the Bit-Probe Model. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 88:1-88:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{raskin:LIPIcs.ICALP.2017.88,
  author =	{Raskin, Mikhail},
  title =	{{A Linear Lower Bound for Incrementing a Space-Optimal Integer Representation in the Bit-Probe Model}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{88:1--88:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.88},
  URN =		{urn:nbn:de:0030-drops-74105},
  doi =		{10.4230/LIPIcs.ICALP.2017.88},
  annote =	{Keywords: binary counter, data structure, integer representation, bit-probe model, lower bound}
}
Document
Rerouting Flows When Links Fail

Authors: Jannik Matuschke, S. Thomas McCormick, and Gianpaolo Oriolo


Abstract
We introduce and investigate reroutable flows, a robust version of network flows in which link failures can be mitigated by rerouting the affected flow. Given a capacitated network, a path flow is reroutable if after failure of an arbitrary arc, we can reroute the interrupted flow from the tail of that arc to the sink, without modifying the flow that is not affected by the failure. Similar types of restoration, which are often termed "local", were previously investigated in the context of network design, such as min-cost capacity planning. In this paper, our interest is in computing maximum flows under this robustness assumption. An important new feature of our model, distinguishing it from existing max robust flow models, is that no flow can get lost in the network. We also study a tightening of reroutable flows, called strictly reroutable flows, making more restrictive assumptions on the capacities available for rerouting. For both variants, we devise a reroutable-flow equivalent of an s-t-cut and show that the corresponding max flow/min cut gap is bounded by 2. It turns out that a strictly reroutable flow of maximum value can be found using a compact LP formulation, whereas the problem of finding a maximum reroutable flow is NP-hard, even when all capacities are in {1, 2}. However, the tightening can be used to get a 2-approximation for reroutable flows. This ratio is tight in general networks, but we show that in the case of unit capacities, every reroutable flow can be transformed into a strictly reroutable flow of the same value. While it is NP-hard to compute a maximal integral flow even for unit capacities, we devise a surprisingly simple combinatorial algorithm that finds a half-integral strictly reroutable flow of value 1, or certifies that no such solution exists. Finally, we also give a hardness result for the case of multiple arc failures.

Cite as

Jannik Matuschke, S. Thomas McCormick, and Gianpaolo Oriolo. Rerouting Flows When Links Fail. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 89:1-89:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{matuschke_et_al:LIPIcs.ICALP.2017.89,
  author =	{Matuschke, Jannik and McCormick, S. Thomas and Oriolo, Gianpaolo},
  title =	{{Rerouting Flows When Links Fail}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{89:1--89:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.89},
  URN =		{urn:nbn:de:0030-drops-74466},
  doi =		{10.4230/LIPIcs.ICALP.2017.89},
  annote =	{Keywords: network flows, network interdiction, robust optimization}
}
Document
The Parameterized Complexity of Positional Games

Authors: Édouard Bonnet, Serge Gaspers, Antonin Lambilliotte, Stefan Rümmele, and Abdallah Saffidine


Abstract
We study the parameterized complexity of several positional games. Our main result is that Short Generalized Hex is W[1]-complete parameterized by the number of moves. This solves an open problem from Downey and Fellows’ influential list of open problems from 1999. Previously, the problem was thought of as a natural candidate for AW[*]-completeness. Our main tool is a new fragment of first-order logic where universally quantified variables only occur in inequalities. We show that model-checking on arbitrary relational structures for a formula in this fragment is W[1]-complete when parameterized by formula size. We also consider a general framework where a positional game is represented as a hypergraph and two players alternately pick vertices. In a Maker-Maker game, the first player to have picked all the vertices of some hyperedge wins the game. In a Maker-Breaker game, the first player wins if she picks all the vertices of some hyperedge, and the second player wins otherwise. In an Enforcer-Avoider game, the first player wins if the second player picks all the vertices of some hyperedge, and the second player wins otherwise. Short Maker-Maker, Short Maker-Breaker, and Short Enforcer-Avoider are respectively AW[*]-, W[1]-, and co-W[1]-complete parameterized by the number of moves. This suggests a rough parameterized complexity categorization into positional games that are complete for the first level of the W-hierarchy when the winning condition only depends on which vertices one player has been able to pick, but AW[*]-complete when it depends on which vertices both players have picked. However, some positional games with highly structured board and winning configurations are fixed-parameter tractable. We give another example of such a game, Short k-Connect, which is fixed-parameter tractable when parameterized by the number of moves.
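
The hypergraph framework described above is concrete enough to state in a few lines of Python; the sketch below (our own toy code, with hypothetical names) checks the Maker-Breaker winning condition for a given sequence of alternating picks:

def maker_breaker_winner(hyperedges, moves):
    # Players alternately pick fresh vertices, Maker moving first (even indices
    # of moves).  Maker wins as soon as she has picked every vertex of some
    # hyperedge; if that never happens, Breaker wins.
    maker = set()
    for i, v in enumerate(moves):
        if i % 2 == 0:                      # Maker's turn
            maker.add(v)
            if any(set(edge) <= maker for edge in hyperedges):
                return 'Maker'
    return 'Breaker'

# Toy example: Maker claims the hyperedge {1, 2, 3} on her third move.
print(maker_breaker_winner([[1, 2, 3], [3, 4, 5]], [1, 4, 2, 5, 3]))   # -> 'Maker'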

Cite as

Édouard Bonnet, Serge Gaspers, Antonin Lambilliotte, Stefan Rümmele, and Abdallah Saffidine. The Parameterized Complexity of Positional Games. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.ICALP.2017.90,
  author =	{Bonnet, \'{E}douard and Gaspers, Serge and Lambilliotte, Antonin and R\"{u}mmele, Stefan and Saffidine, Abdallah},
  title =	{{The Parameterized Complexity of Positional Games}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{90:1--90:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.90},
  URN =		{urn:nbn:de:0030-drops-74941},
  doi =		{10.4230/LIPIcs.ICALP.2017.90},
  annote =	{Keywords: Hex, Maker-Maker games, Maker-Breaker games, Enforcer-Avoider games, parameterized complexity theory}
}
Document
Directed Hamiltonicity and Out-Branchings via Generalized Laplacians

Authors: Andreas Björklund, Petteri Kaski, and Ioannis Koutis


Abstract
We are motivated by a tantalizing open question in exact algorithms: can we detect whether an n-vertex directed graph G has a Hamiltonian cycle in time significantly less than 2^n? We present new randomized algorithms that improve upon several previous works: 1. We show that for any constant 0<lambda<1 and prime p we can count the Hamiltonian cycles modulo p^((1-lambda)n/(3p)) in expected time less than c^n for a constant c<2 that depends only on p and lambda. Such an algorithm was previously known only for the case of counting modulo two [Björklund and Husfeldt, FOCS 2013]. 2. We show that we can detect a Hamiltonian cycle in O^*(3^(n-alpha(G))) time and polynomial space, where alpha(G) is the size of the maximum independent set in G. In particular, this yields an O^*(3^(n/2)) time algorithm for bipartite directed graphs, which is faster than the exponential-space algorithm in [Cygan et al., STOC 2013]. Our algorithms are based on the algebraic combinatorics of "incidence assignments" that we can capture through evaluation of determinants of Laplacian-like matrices, inspired by the Matrix-Tree Theorem for directed graphs. In addition to the novel algorithms for directed Hamiltonicity, we use the Matrix-Tree Theorem to derive simple algebraic algorithms for detecting out-branchings. Specifically, we give an O^*(2^k)-time randomized algorithm for detecting out-branchings with at least k internal vertices, improving upon the algorithms of [Zehavi, ESA 2015] and [Björklund et al., ICALP 2015]. We also present an algebraic algorithm for the directed k-Leaf problem, based on a non-standard monomial detection problem.
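
For readers unfamiliar with the directed Matrix-Tree Theorem that the abstract builds on, here is a short Python/NumPy sketch (our own illustration, using one common convention: the determinant of the Laplacian D_in - A with the root's row and column deleted counts the spanning out-branchings rooted there):

import numpy as np

def count_out_branchings(n, arcs, root):
    # Count spanning out-branchings rooted at `root` (all arcs directed away from
    # the root) via the directed Matrix-Tree Theorem.  For large graphs an exact
    # integer determinant routine should replace the floating-point one used here.
    A = np.zeros((n, n))
    for u, v in arcs:                  # arc u -> v
        A[u, v] += 1
    L = np.diag(A.sum(axis=0)) - A     # column sums of A are the in-degrees
    keep = [i for i in range(n) if i != root]
    return round(np.linalg.det(L[np.ix_(keep, keep)]))

# The directed 3-cycle 0 -> 1 -> 2 -> 0 has exactly one out-branching per root.
print(count_out_branchings(3, [(0, 1), (1, 2), (2, 0)], root=0))   # -> 1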

Cite as

Andreas Björklund, Petteri Kaski, and Ioannis Koutis. Directed Hamiltonicity and Out-Branchings via Generalized Laplacians. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 91:1-91:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2017.91,
  author =	{Bj\"{o}rklund, Andreas and Kaski, Petteri and Koutis, Ioannis},
  title =	{{Directed Hamiltonicity and Out-Branchings via Generalized Laplacians}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{91:1--91:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.91},
  URN =		{urn:nbn:de:0030-drops-74208},
  doi =		{10.4230/LIPIcs.ICALP.2017.91},
  annote =	{Keywords: counting, directed Hamiltonicity, graph Laplacian, independent set, k-internal out-branching}
}
Document
Improved Hardness for Cut, Interdiction, and Firefighter Problems

Authors: Euiwoong Lee


Abstract
We study variants of the classic s-t cut problem and prove the following improved hardness results assuming the Unique Games Conjecture (UGC). * For Length-Bounded Cut and Shortest Path Interdiction, we show that both problems are hard to approximate within any constant factor, even if we allow bicriteria approximation. If we want to cut vertices or the graph is directed, our hardness ratio for Length-Bounded Cut matches the best approximation ratio up to a constant. Previously, the best hardness ratio was 1.1377 for Length-Bounded Cut and 2 for Shortest Path Interdiction. * For any constant k >= 2 and epsilon > 0, we show that Directed Multicut with k source-sink pairs is hard to approximate within a factor k - epsilon. This matches the trivial k-approximation algorithm. By a simple reduction, our result for k = 2 implies that Directed Multiway Cut with two terminals (also known as s-t Bicut) is hard to approximate within a factor 2 - epsilon, matching the trivial 2-approximation algorithm. * Assuming a variant of the UGC (implied by another variant due to Bansal and Khot), we prove that it is hard to approximate Resource Minimization Fire Containment within any constant factor. Previously, the best hardness ratio was 2. For directed layered graphs with b layers, our hardness ratio Omega(log b) matches the best approximation algorithm. Our results are based on a general method of converting an integrality gap instance to a length-control dictatorship test for variants of the s-t cut problem, which may be useful for other problems.

Cite as

Euiwoong Lee. Improved Hardness for Cut, Interdiction, and Firefighter Problems. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 92:1-92:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lee:LIPIcs.ICALP.2017.92,
  author =	{Lee, Euiwoong},
  title =	{{Improved Hardness for Cut, Interdiction, and Firefighter Problems}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{92:1--92:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.92},
  URN =		{urn:nbn:de:0030-drops-74854},
  doi =		{10.4230/LIPIcs.ICALP.2017.92},
  annote =	{Keywords: length bounded cut, shortest path interdiction, multicut, firefighter, unique games conjecture}
}
Document
Subspace-Invariant AC^0 Formulas

Authors: Benjamin Rossman


Abstract
The n-variable PARITY function is computable (by a well-known recursive construction) by AC^0 formulas of depth d+1 and leaf size n * 2^{dn^{1/d}}. These formulas are seen to possess a certain symmetry: they are syntactically invariant under the subspace P of even-weight elements in {0,1}^n, which acts (as a group) on formulas by toggling negations on input literals. In this paper, we prove a 2^{d(n^{1/d}-1)} lower bound on the size of syntactically P-invariant depth d+1 formulas for PARITY. Quantitatively, this beats the best 2^{Omega(d(n^{1/d}-1))} lower bound in the non-invariant setting.
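
As a point of reference, the depth-2 building block of the recursive construction mentioned in the first sentence writes the parity of m bits as a DNF over the 2^{m-1} odd-weight assignments (a standard identity, not specific to this paper):

\[
  \mathrm{PARITY}(x_1,\dots,x_m)
  \;=\;
  \bigvee_{\substack{a \in \{0,1\}^m \\ a_1 \oplus \cdots \oplus a_m = 1}}
  \;\bigwedge_{i=1}^{m} \bigl(x_i \leftrightarrow a_i\bigr).
\]

Each disjunct fixes all m inputs, so the formula has 2^{m-1} terms of m literals each; iterating this over d levels of blocks of fan-in roughly n^{1/d}, and merging adjacent AND/OR levels, gives depth-(d+1) formulas of the leaf size quoted above.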

Cite as

Benjamin Rossman. Subspace-Invariant AC^0 Formulas. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 93:1-93:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{rossman:LIPIcs.ICALP.2017.93,
  author =	{Rossman, Benjamin},
  title =	{{Subspace-Invariant AC^0 Formulas}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{93:1--93:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.93},
  URN =		{urn:nbn:de:0030-drops-74235},
  doi =		{10.4230/LIPIcs.ICALP.2017.93},
  annote =	{Keywords: lower bounds, size-depth tradeoff, parity, symmetry in computation}
}
Document
On the Complexity of Quantified Integer Programming

Authors: Dmitry Chistikov and Christoph Haase


Abstract
Quantified integer programming is the problem of deciding assertions of the form Q_k x_k ... forall x_2 exists x_1 : A * x >= c where vectors of variables x_k, ..., x_1 form the vector x, all variables are interpreted over N (alternatively, over Z), and A and c are a matrix and vector over Z of appropriate sizes. We show in this paper that quantified integer programming with alternation depth k is complete for the kth level of the polynomial hierarchy.
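
A tiny example (ours, purely for illustration) with alternation depth 2 shows how the quantifier prefix matters:

\[
  \forall x_2\,\exists x_1:\;
  \begin{pmatrix} 1 & -1 \end{pmatrix}
  \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \ge (1),
  \qquad\text{i.e.}\qquad
  \forall x_2\,\exists x_1:\; x_1 - x_2 \ge 1 .
\]

Over N this assertion is true (take x_1 = x_2 + 1), whereas the prefix exists x_1 forall x_2 makes it false, since no fixed x_1 can exceed every x_2.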

Cite as

Dmitry Chistikov and Christoph Haase. On the Complexity of Quantified Integer Programming. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 94:1-94:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chistikov_et_al:LIPIcs.ICALP.2017.94,
  author =	{Chistikov, Dmitry and Haase, Christoph},
  title =	{{On the Complexity of Quantified Integer Programming}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{94:1--94:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.94},
  URN =		{urn:nbn:de:0030-drops-75024},
  doi =		{10.4230/LIPIcs.ICALP.2017.94},
  annote =	{Keywords: integer programming, semi-linear sets, Presburger arithmetic, quantifier elimination}
}
Document
Word Equations in Nondeterministic Linear Space

Authors: Artur Jez


Abstract
Satisfiability of word equations is an important problem in the intersection of formal languages and algebra: Given two sequences consisting of letters and variables, we are to decide whether there is a substitution for the variables that turns this equation into a true equality of strings. The computational complexity of this problem remains unknown, with the best lower and upper bounds being, respectively, NP and PSPACE. Recently, the novel technique of recompression was applied to this problem, simplifying the known proofs and lowering the space complexity to (nondeterministic) O(n log n). In this paper we show that satisfiability of word equations is in nondeterministic linear space; thus the language of satisfiable word equations is context-sensitive. We use the known recompression-based algorithm and additionally employ Huffman coding for letters. The proof, however, uses analysis of how the fragments of the equation depend on each other as well as a new strategy for nondeterministic choices of the algorithm, which uses several new ideas to limit the space occupied by the letters.
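
For readers who have not seen word equations before, a standard toy example (not taken from the paper) over the alphabet {a, b} with a single variable X is:

\[
  X\,ab \;=\; ab\,X
  \qquad\Longrightarrow\qquad
  X \in (ab)^{*},
\]

since two words commute exactly when they are powers of a common word and ab is primitive; deciding whether any solution exists at all is the satisfiability problem studied here.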

Cite as

Artur Jez. Word Equations in Nondeterministic Linear Space. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 95:1-95:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{jez:LIPIcs.ICALP.2017.95,
  author =	{Jez, Artur},
  title =	{{Word Equations in Nondeterministic Linear Space}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{95:1--95:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.95},
  URN =		{urn:nbn:de:0030-drops-74089},
  doi =		{10.4230/LIPIcs.ICALP.2017.95},
  annote =	{Keywords: Word equations, string unification, context-sensitive languages, space efficient computations, linear space}
}
Document
Solutions of Twisted Word Equations, EDT0L Languages, and Context-Free Groups

Authors: Volker Diekert and Murray Elder


Abstract
We prove that the full solution set of a twisted word equation with regular constraints is an EDT0L language. It follows that the set of solutions to equations with rational constraints in a context-free group (= finitely generated virtually free group) in reduced normal forms is EDT0L. We can also decide whether or not the solution set is finite, which was an open problem. Moreover, this can all be done in PSPACE. Our results generalize the work by Lohrey and Senizergues (ICALP 2006) and Dahmani and Guirardel (J. of Topology 2010) with respect to complexity and with respect to expressive power. Both papers show that satisfiability is decidable, but neither gave any concrete complexity bound. Our results concern all solutions, and give, in some sense, the "optimal" formal language characterization.

Cite as

Volker Diekert and Murray Elder. Solutions of Twisted Word Equations, EDT0L Languages, and Context-Free Groups. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 96:1-96:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{diekert_et_al:LIPIcs.ICALP.2017.96,
  author =	{Diekert, Volker and Elder, Murray},
  title =	{{Solutions of Twisted Word Equations, EDT0L Languages, and Context-Free Groups}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{96:1--96:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.96},
  URN =		{urn:nbn:de:0030-drops-73976},
  doi =		{10.4230/LIPIcs.ICALP.2017.96},
  annote =	{Keywords: Twisted word equation, EDT0L, virtually free group, context-free group}
}
Document
Pumping Lemma for Higher-order Languages

Authors: Kazuyuki Asada and Naoki Kobayashi


Abstract
We study a pumping lemma for the word/tree languages generated by higher-order grammars. Pumping lemmas are known up to order-2 word languages (i.e., for regular/context-free/indexed languages), and have been used to show that a given language does not belong to the classes of regular/context-free/indexed languages. We prove a pumping lemma for word/tree languages of arbitrary orders, modulo a conjecture that a higher-order version of Kruskal's tree theorem holds. We also show that the conjecture indeed holds for the order-2 case, which yields a pumping lemma for order-2 tree languages and order-3 word languages.

Cite as

Kazuyuki Asada and Naoki Kobayashi. Pumping Lemma for Higher-order Languages. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 97:1-97:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{asada_et_al:LIPIcs.ICALP.2017.97,
  author =	{Asada, Kazuyuki and Kobayashi, Naoki},
  title =	{{Pumping Lemma for Higher-order Languages}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{97:1--97:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.97},
  URN =		{urn:nbn:de:0030-drops-74323},
  doi =		{10.4230/LIPIcs.ICALP.2017.97},
  annote =	{Keywords: pumping lemma, higher-order grammars, Kruskal's tree theorem}
}
Document
A Strategy for Dynamic Programs: Start over and Muddle Through

Authors: Samir Datta, Anish Mukherjee, Thomas Schwentick, Nils Vortmeier, and Thomas Zeume


Abstract
A strategy for constructing dynamic programs is introduced that utilises periodic computation of auxiliary data from scratch and the ability to maintain a query for a limited number of change steps. It is established that if some program can maintain a query for log n change steps after an AC^1-computable initialisation, it can be maintained by a first-order dynamic program as well, i.e., in DynFO. As an application, it is shown that decision and optimisation problems defined by monadic second-order (MSO) and guarded second-order logic (GSO) formulas are in DynFO, if only change sequences that produce graphs of bounded treewidth are allowed. To establish this result, Feferman-Vaught-type composition theorems for MSO and GSO are established that might be useful in their own right.

Cite as

Samir Datta, Anish Mukherjee, Thomas Schwentick, Nils Vortmeier, and Thomas Zeume. A Strategy for Dynamic Programs: Start over and Muddle Through. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 98:1-98:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{datta_et_al:LIPIcs.ICALP.2017.98,
  author =	{Datta, Samir and Mukherjee, Anish and Schwentick, Thomas and Vortmeier, Nils and Zeume, Thomas},
  title =	{{A Strategy for Dynamic Programs: Start over and Muddle Through}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{98:1--98:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.98},
  URN =		{urn:nbn:de:0030-drops-74470},
  doi =		{10.4230/LIPIcs.ICALP.2017.98},
  annote =	{Keywords: dynamic complexity, treewidth, monadic second order logic}
}
Document
Definability by Horn Formulas and Linear Time on Cellular Automata

Authors: Nicolas Bacquey, Etienne Grandjean, and Frédéric Olive


Abstract
We establish an exact logical characterization of linear time complexity of cellular automata of dimension d, for any fixed d: a set of pictures of dimension d belongs to this complexity class iff it is definable in existential second-order logic restricted to monotonic Horn formulas with built-in successor function and d+1 first-order variables. This logical characterization is optimal modulo an open problem in parallel complexity. Furthermore, its proof provides a systematic method for transforming an inductive formula defining some problem into a cellular automaton that computes it in linear time.

Cite as

Nicolas Bacquey, Etienne Grandjean, and Frédéric Olive. Definability by Horn Formulas and Linear Time on Cellular Automata. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 99:1-99:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bacquey_et_al:LIPIcs.ICALP.2017.99,
  author =	{Bacquey, Nicolas and Grandjean, Etienne and Olive, Fr\'{e}d\'{e}ric},
  title =	{{Definability by Horn Formulas and Linear Time on Cellular Automata}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{99:1--99:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.99},
  URN =		{urn:nbn:de:0030-drops-74174},
  doi =		{10.4230/LIPIcs.ICALP.2017.99},
  annote =	{Keywords: picture languages, linear time, cellular automata of any dimension, local induction, descriptive complexity, second-order logic, horn formulas, logic}
}
Document
Asynchronous Distributed Automata: A Characterization of the Modal Mu-Fragment

Authors: Fabian Reiter


Abstract
We establish the equivalence between a class of asynchronous distributed automata and a small fragment of least fixpoint logic, when restricted to finite directed graphs. More specifically, the logic we consider is (a variant of) the fragment of the modal mu-calculus that allows least fixpoints but forbids greatest fixpoints. The corresponding automaton model uses a network of identical finite-state machines that communicate in an asynchronous manner and whose state diagram must be acyclic except for self-loops. Exploiting the connection with logic, we also prove that the expressive power of those machines is independent of whether or not messages can be lost.

Cite as

Fabian Reiter. Asynchronous Distributed Automata: A Characterization of the Modal Mu-Fragment. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 100:1-100:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{reiter:LIPIcs.ICALP.2017.100,
  author =	{Reiter, Fabian},
  title =	{{Asynchronous Distributed Automata: A Characterization of the Modal Mu-Fragment}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{100:1--100:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.100},
  URN =		{urn:nbn:de:0030-drops-73695},
  doi =		{10.4230/LIPIcs.ICALP.2017.100},
  annote =	{Keywords: finite automata, distributed computing, modal logic, mu-calculus}
}
Document
A Counterexample to Thiagarajan's Conjecture on Regular Event Structures

Authors: Jérémie Chalopin and Victor Chepoi


Abstract
We provide a counterexample to a conjecture by Thiagarajan (1996 and 2002) that regular prime event structures correspond exactly to those obtained as unfoldings of finite 1-safe Petri nets. The same counterexample is used to disprove a closely related conjecture by Badouel, Darondeau, and Raoult (1999) that domains of regular event structures with bounded natural-cliques are recognizable by finite trace automata. Event structures, trace automata, and Petri nets are fundamental models in concurrency theory. There exist nice interpretations of these structures as combinatorial and geometric objects and both conjectures can be reformulated in this framework. Namely, the domains of prime event structures correspond exactly to pointed median graphs; from a geometric point of view, these domains are in bijection with pointed CAT(0) cube complexes. A necessary condition for both conjectures to be true is that domains of respective regular event structures admit a regular nice labeling. To disprove these conjectures, we describe a regular event domain (with bounded natural-cliques) that does not admit a regular nice labeling. Our counterexample is derived from an example by Wise (1996 and 2007) of a nonpositively curved square complex whose universal cover is a CAT(0) square complex containing a particular plane with an aperiodic tiling.

Cite as

Jérémie Chalopin and Victor Chepoi. A Counterexample to Thiagarajan's Conjecture on Regular Event Structures. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 101:1-101:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chalopin_et_al:LIPIcs.ICALP.2017.101,
  author =	{Chalopin, J\'{e}r\'{e}mie and Chepoi, Victor},
  title =	{{A Counterexample to Thiagarajan's Conjecture on Regular Event Structures}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{101:1--101:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.101},
  URN =		{urn:nbn:de:0030-drops-74192},
  doi =		{10.4230/LIPIcs.ICALP.2017.101},
  annote =	{Keywords: Discrete event structures, Trace automata, Median graphs and CAT(0) cube Complexes, Unfoldings and universal covers}
}
Document
*-Liftings for Differential Privacy

Authors: Gilles Barthe, Thomas Espitau, Justin Hsu, Tetsuya Sato, and Pierre-Yves Strub


Abstract
Recent developments in formal verification have identified approximate liftings (also known as approximate couplings) as a clean, compositional abstraction for proving differential privacy. There are two styles of definitions for this construction. Earlier definitions require the existence of one or more witness distributions, while a recent definition by Sato uses universal quantification over all sets of samples. These notions have different strengths and weaknesses: the universal version is more general than the existential ones, but the existential versions enjoy more precise composition principles. We propose a novel, existential version of approximate lifting, called *-lifting, and show that it is equivalent to Sato's construction for discrete probability measures. Our work unifies all known notions of approximate lifting, giving cleaner properties, more general constructions, and more precise composition theorems for both styles of lifting, enabling richer proofs of differential privacy. We also clarify the relation between existing definitions of approximate lifting, and generalize our constructions to approximate liftings based on f-divergences.

Cite as

Gilles Barthe, Thomas Espitau, Justin Hsu, Tetsuya Sato, and Pierre-Yves Strub. *-Liftings for Differential Privacy. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 102:1-102:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{barthe_et_al:LIPIcs.ICALP.2017.102,
  author =	{Barthe, Gilles and Espitau, Thomas and Hsu, Justin and Sato, Tetsuya and Strub, Pierre-Yves},
  title =	{{*-Liftings for Differential Privacy}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{102:1--102:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.102},
  URN =		{urn:nbn:de:0030-drops-74358},
  doi =		{10.4230/LIPIcs.ICALP.2017.102},
  annote =	{Keywords: Differential Privacy, Probabilistic Couplings, Formal Verification}
}
Document
Bisimulation Metrics for Weighted Automata

Authors: Borja Balle, Pascale Gourdeau, and Prakash Panangaden


Abstract
We develop a new bisimulation (pseudo)metric for weighted finite automata (WFA) that generalizes Boreale's linear bisimulation relation. Our metrics are induced by seminorms on the state space of WFA. Our development is based on spectral properties of sets of linear operators. In particular, the joint spectral radius of the transition matrices of WFA plays a central role. We also study continuity properties of the bisimulation pseudometric, establish an undecidability result for computing the metric, and give a preliminary account of applications to spectral learning of weighted automata.
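Since the joint spectral radius of the transition matrices is central to the metrics above, a small numerical illustration may help. The sketch below computes the standard finite-horizon upper bound (the maximum spectral norm over all length-t products, raised to the power 1/t); the two toy matrices and the horizons are made-up example inputs, and this is not the construction used in the paper.

import itertools
import numpy as np

def jsr_upper_bound(matrices, t):
    """Crude upper bound on the joint spectral radius of a finite set of
    matrices: the maximum spectral norm over all length-t products, raised
    to the power 1/t. The bound converges to the JSR as t grows, but the
    cost is exponential in t; this is only an illustration."""
    best = 0.0
    for choice in itertools.product(matrices, repeat=t):
        prod = np.linalg.multi_dot(choice) if t > 1 else choice[0]
        best = max(best, np.linalg.norm(prod, 2))
    return best ** (1.0 / t)

if __name__ == "__main__":
    # Toy transition matrices of a 2-state weighted automaton (example data).
    A = np.array([[0.5, 0.3], [0.0, 0.4]])
    B = np.array([[0.2, 0.0], [0.6, 0.1]])
    for t in (1, 2, 4, 8):
        print(t, jsr_upper_bound([A, B], t))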

Cite as

Borja Balle, Pascale Gourdeau, and Prakash Panangaden. Bisimulation Metrics for Weighted Automata. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 103:1-103:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{balle_et_al:LIPIcs.ICALP.2017.103,
  author =	{Balle, Borja and Gourdeau, Pascale and Panangaden, Prakash},
  title =	{{Bisimulation Metrics for Weighted Automata}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{103:1--103:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.103},
  URN =		{urn:nbn:de:0030-drops-73959},
  doi =		{10.4230/LIPIcs.ICALP.2017.103},
  annote =	{Keywords: weighted automata, bisimulation, metrics, spectral theory, learning}
}
Document
On the Metric-Based Approximate Minimization of Markov Chains

Authors: Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, and Radu Mardare


Abstract
We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al., we show that optimal approximations always exist; show that the problem can be solved as a bilinear program; and prove that its threshold problem is in PSPACE and NP-hard. Finally, we present an approach inspired by expectation maximization techniques that provides suboptimal solutions. Experiments suggest that our method gives a practical approach that outperforms the bilinear program implementation run on state-of-the-art bilinear solvers.

Cite as

Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, and Radu Mardare. On the Metric-Based Approximate Minimization of Markov Chains. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 104:1-104:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bacci_et_al:LIPIcs.ICALP.2017.104,
  author =	{Bacci, Giovanni and Bacci, Giorgio and Larsen, Kim G. and Mardare, Radu},
  title =	{{On the Metric-Based Approximate Minimization of Markov Chains}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{104:1--104:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.104},
  URN =		{urn:nbn:de:0030-drops-73675},
  doi =		{10.4230/LIPIcs.ICALP.2017.104},
  annote =	{Keywords: Behavioral distances, Probabilistic Models, Automata Minimization}
}
Document
Expressiveness of Probabilistic Modal Logics, Revisited

Authors: Nathanaël Fijalkow, Bartek Klin, and Prakash Panangaden


Abstract
Labelled Markov processes are probabilistic versions of labelled transition systems. In general, the state space of a labelled Markov process may be a continuum. Logical characterizations of probabilistic bisimulation and simulation were given by Desharnais et al. These results hold for systems defined on analytic state spaces and assume that there are countably many labels in the case of bisimulation and finitely many labels in the case of simulation. In this paper, we first revisit these results by giving simpler and more streamlined proofs. In particular, our proof for simulation has the same structure as the one for bisimulation, relying on a new result of a topological nature. This departs from the known proof for this result, which uses domain theory techniques and falls out of a theory of approximation of Labelled Markov processes. Both our proofs assume the presence of countably many labels. We investigate the necessity of this assumption, and show that the logical characterization of bisimulation may fail when there are uncountably many labels. However, with a stronger assumption on the transition functions (continuity instead of just measurability), we can regain the logical characterization result, for arbitrarily many labels. These new results arose from a new game-theoretic way of understanding probabilistic simulation and bisimulation.

Cite as

Nathanaël Fijalkow, Bartek Klin, and Prakash Panangaden. Expressiveness of Probabilistic Modal Logics, Revisited. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 105:1-105:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{fijalkow_et_al:LIPIcs.ICALP.2017.105,
  author =	{Fijalkow, Nathana\"{e}l and Klin, Bartek and Panangaden, Prakash},
  title =	{{Expressiveness of Probabilistic Modal Logics, Revisited}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{105:1--105:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.105},
  URN =		{urn:nbn:de:0030-drops-73683},
  doi =		{10.4230/LIPIcs.ICALP.2017.105},
  annote =	{Keywords: probabilistic modal logic, probabilistic bisimulation, probabilistic simulation}
}
Document
Emptiness of Zero Automata Is Decidable

Authors: Mikolaj Bojanczyk, Hugo Gimbert, and Edon Kelmendi


Abstract
Zero automata are a probabilistic extension of parity automata on infinite trees. The satisfiability of a certain probabilistic variant of MSO, called TMSO+zero, reduces to the emptiness problem for zero automata. We introduce a variant of zero automata called nonzero automata. We prove that for every zero automaton there is an equivalent nonzero automaton of quadratic size and the emptiness problem of nonzero automata is decidable, with complexity co-NP. These results imply that TMSO+zero has decidable satisfiability.

Cite as

Mikolaj Bojanczyk, Hugo Gimbert, and Edon Kelmendi. Emptiness of Zero Automata Is Decidable. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 106:1-106:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2017.106,
  author =	{Bojanczyk, Mikolaj and Gimbert, Hugo and Kelmendi, Edon},
  title =	{{Emptiness of Zero Automata Is Decidable}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{106:1--106:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.106},
  URN =		{urn:nbn:de:0030-drops-74745},
  doi =		{10.4230/LIPIcs.ICALP.2017.106},
  annote =	{Keywords: tree automata, probabilistic automata, monadic second-order logic}
}
Document
Characterizing Definability in Decidable Fixpoint Logics

Authors: Michael Benedikt, Pierre Bourhis, and Michael Vanden Boom


Abstract
We look at characterizing which formulas are expressible in rich decidable logics such as guarded fixpoint logic, unary negation fixpoint logic, and guarded negation fixpoint logic. We consider semantic characterizations of definability, as well as effective characterizations. Our algorithms revolve around a finer analysis of the tree-model property and a refinement of the method of moving back-and-forth between relational logics and logics over trees.

Cite as

Michael Benedikt, Pierre Bourhis, and Michael Vanden Boom. Characterizing Definability in Decidable Fixpoint Logics. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 107:1-107:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{benedikt_et_al:LIPIcs.ICALP.2017.107,
  author =	{Benedikt, Michael and Bourhis, Pierre and Vanden Boom, Michael},
  title =	{{Characterizing Definability in Decidable Fixpoint Logics}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{107:1--107:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.107},
  URN =		{urn:nbn:de:0030-drops-74062},
  doi =		{10.4230/LIPIcs.ICALP.2017.107},
  annote =	{Keywords: Guarded logics, bisimulation, definability, automata}
}
Document
Conservative Extensions in Guarded and Two-Variable Fragments

Authors: Jean Christoph Jung, Carsten Lutz, Mauricio Martel, Thomas Schneider, and Frank Wolter


Abstract
We investigate the decidability and computational complexity of (deductive) conservative extensions in fragments of first-order logic (FO), with a focus on the two-variable fragment FO2 and the guarded fragment GF. We prove that conservative extensions are undecidable in any FO fragment that contains FO2 or GF (even the three-variable fragment thereof), and that they are decidable and 2ExpTime-complete in the intersection GF2 of FO2 and GF.

Cite as

Jean Christoph Jung, Carsten Lutz, Mauricio Martel, Thomas Schneider, and Frank Wolter. Conservative Extensions in Guarded and Two-Variable Fragments. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 108:1-108:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{jung_et_al:LIPIcs.ICALP.2017.108,
  author =	{Jung, Jean Christoph and Lutz, Carsten and Martel, Mauricio and Schneider, Thomas and Wolter, Frank},
  title =	{{Conservative Extensions in Guarded and Two-Variable Fragments}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{108:1--108:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.108},
  URN =		{urn:nbn:de:0030-drops-74647},
  doi =		{10.4230/LIPIcs.ICALP.2017.108},
  annote =	{Keywords: Conservative Extensions, Decidable Fragments of First-Order Logic, Computational Complexity}
}
Document
Models and Termination of Proof Reduction in the lambda Pi-Calculus Modulo Theory

Authors: Gilles Dowek


Abstract
We define a notion of model for the lambda Pi-calculus modulo theory and prove a soundness theorem. We then define a notion of super-consistency and prove that proof reduction terminates in the lambda Pi-calculus modulo any super-consistent theory. In this way, we prove the termination of proof reduction in several theories, including Simple type theory and the Calculus of constructions.

Cite as

Gilles Dowek. Models and Termination of Proof Reduction in the lambda Pi-Calculus Modulo Theory. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 109:1-109:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dowek:LIPIcs.ICALP.2017.109,
  author =	{Dowek, Gilles},
  title =	{{Models and Termination of Proof Reduction in the lambda Pi-Calculus Modulo Theory}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{109:1--109:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.109},
  URN =		{urn:nbn:de:0030-drops-73919},
  doi =		{10.4230/LIPIcs.ICALP.2017.109},
  annote =	{Keywords: model, proof reduction, Simple type theory, Calculus of constructions}
}
Document
Proof Complexity Meets Algebra

Authors: Albert Atserias and Joanna Ochremiak


Abstract
We analyse how the standard reductions between constraint satisfaction problems affect their proof complexity. We show that, for the most studied propositional and semi-algebraic proof systems, the classical constructions of pp-interpretability, homomorphic equivalence and addition of constants to a core preserve the proof complexity of the CSP. As a result, for those proof systems, the classes of constraint languages for which small unsatisfiability certificates exist can be characterised algebraically. We illustrate our results with a gap theorem saying that a constraint language either has resolution refutations of bounded width, or does not have bounded-depth Frege refutations of subexponential size. The former holds exactly for the widely studied class of constraint languages of bounded width. This class is also known to coincide with the class of languages with Sums-of-Squares refutations of sublinear degree, a fact for which we provide an alternative proof. We hence ask whether there exists a natural proof system that behaves well with respect to reductions and at the same time admits small refutations beyond bounded width. We give an example of such a proof system by showing that bounded-degree Lovasz-Schrijver satisfies both requirements.

Cite as

Albert Atserias and Joanna Ochremiak. Proof Complexity Meets Algebra. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 110:1-110:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{atserias_et_al:LIPIcs.ICALP.2017.110,
  author =	{Atserias, Albert and Ochremiak, Joanna},
  title =	{{Proof Complexity Meets Algebra}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{110:1--110:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.110},
  URN =		{urn:nbn:de:0030-drops-74956},
  doi =		{10.4230/LIPIcs.ICALP.2017.110},
  annote =	{Keywords: Constraint Satisfaction Problem, Proof Complexity, Reductions, Gap Theorems}
}
Document
A Circuit-Based Approach to Efficient Enumeration

Authors: Antoine Amarilli, Pierre Bourhis, Louis Jachiet, and Stefan Mengel


Abstract
We study the problem of enumerating the satisfying valuations of a circuit while bounding the delay, i.e., the time needed to compute each successive valuation. We focus on the class of structured d-DNNF circuits originally introduced in knowledge compilation, a sub-area of artificial intelligence. We propose an algorithm for these circuits that enumerates valuations with linear preprocessing and delay linear in the Hamming weight of each valuation. Moreover, valuations of constant Hamming weight can be enumerated with linear preprocessing and constant delay. Our results yield a framework for efficient enumeration that applies to all problems whose solutions can be compiled to structured d-DNNFs. In particular, we use it to recapture classical results in database theory, for factorized database representations and for MSO evaluation. This gives an independent proof of constant-delay enumeration for MSO formulae with first-order free variables on bounded-treewidth structures.
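For intuition about enumerating the satisfying valuations of a circuit, here is a minimal Python sketch over a toy nested-tuple encoding of a decomposable NNF circuit. It only illustrates the compilation target: it ignores structuredness, determinism, and the delay and Hamming-weight guarantees established in the paper, and the encoding itself is an assumption of this example.

def enumerate_valuations(node):
    """Yield satisfying valuations (dicts var -> bool) of a circuit in
    negation normal form given as nested tuples:
      ('var', x)        -- literal x is true
      ('neg', x)        -- literal x is false
      ('and', c1, c2)   -- decomposable conjunction (disjoint variables)
      ('or', c1, c2)    -- disjunction
    Decomposability lets the 'and' case combine child valuations by simple
    union. Non-deterministic 'or' nodes may yield duplicates; d-DNNFs rule
    this out, which is part of why they support efficient enumeration."""
    kind = node[0]
    if kind == 'var':
        yield {node[1]: True}
    elif kind == 'neg':
        yield {node[1]: False}
    elif kind == 'and':
        for left in enumerate_valuations(node[1]):
            for right in enumerate_valuations(node[2]):
                merged = dict(left)
                merged.update(right)   # variable sets are disjoint
                yield merged
    elif kind == 'or':
        yield from enumerate_valuations(node[1])
        yield from enumerate_valuations(node[2])

if __name__ == "__main__":
    # (x AND y) OR (x AND NOT z): a toy decomposable circuit.
    circuit = ('or', ('and', ('var', 'x'), ('var', 'y')),
                     ('and', ('var', 'x'), ('neg', 'z')))
    # Each printed valuation fixes only the variables its branch constrains.
    for valuation in enumerate_valuations(circuit):
        print(valuation)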

Cite as

Antoine Amarilli, Pierre Bourhis, Louis Jachiet, and Stefan Mengel. A Circuit-Based Approach to Efficient Enumeration. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 111:1-111:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{amarilli_et_al:LIPIcs.ICALP.2017.111,
  author =	{Amarilli, Antoine and Bourhis, Pierre and Jachiet, Louis and Mengel, Stefan},
  title =	{{A Circuit-Based Approach to Efficient Enumeration}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{111:1--111:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.111},
  URN =		{urn:nbn:de:0030-drops-74626},
  doi =		{10.4230/LIPIcs.ICALP.2017.111},
  annote =	{Keywords: circuits, constant-delay, enumeration, d-DNNFs, MSO}
}
Document
Automata-Based Stream Processing

Authors: Rajeev Alur, Konstantinos Mamouras, and Caleb Stanford


Abstract
We propose an automata-theoretic framework for modularly expressing computations on streams of data. With weighted automata as a starting point, we identify three key features that are useful for an automaton model for stream processing: expressing the regular decomposition of streams whose data items are elements of a complex type (e.g., tuple of values), allowing the hierarchical nesting of several different kinds of aggregations, and specifying modularly the parallel execution and combination of various subcomputations. The combination of these features leads to subtle efficiency considerations that concern the interaction between nondeterminism, hierarchical nesting, and parallelism. We identify a syntactic restriction where the nondeterminism is unambiguous and parallel subcomputations synchronize their outputs. For automata satisfying these restrictions, we show that there is a space- and time-efficient streaming evaluation algorithm. We also prove that when these restrictions are relaxed, the evaluation problem becomes inherently computationally expensive.
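As a loose illustration of the third feature, the modular parallel composition of streaming subcomputations (and not of the paper's automaton model or its quantitative regular expressions), the sketch below feeds one input stream to two running aggregations and combines their synchronized outputs; the combinators and data are invented for this example.

import itertools

def running_sum(stream):
    # Streaming aggregation: emit the sum of items seen so far.
    total = 0
    for item in stream:
        total += item
        yield total

def running_max(stream):
    # Streaming aggregation: emit the maximum item seen so far.
    best = None
    for item in stream:
        best = item if best is None else max(best, item)
        yield best

def combine(stream, *aggregators, combiner):
    """Feed one input stream to several sub-aggregations in parallel and
    combine their synchronized outputs item by item."""
    copies = itertools.tee(stream, len(aggregators))
    outputs = [agg(copy) for agg, copy in zip(aggregators, copies)]
    for values in zip(*outputs):
        yield combiner(*values)

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5]
    # Combine two parallel aggregations over the same stream.
    for out in combine(iter(data), running_sum, running_max,
                       combiner=lambda s, m: (s, m)):
        print(out)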

Cite as

Rajeev Alur, Konstantinos Mamouras, and Caleb Stanford. Automata-Based Stream Processing. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 112:1-112:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{alur_et_al:LIPIcs.ICALP.2017.112,
  author =	{Alur, Rajeev and Mamouras, Konstantinos and Stanford, Caleb},
  title =	{{Automata-Based Stream Processing}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{112:1--112:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.112},
  URN =		{urn:nbn:de:0030-drops-74720},
  doi =		{10.4230/LIPIcs.ICALP.2017.112},
  annote =	{Keywords: weighted automata, Quantitative Regular Expressions, stream processing}
}
Document
On Reversible Transducers

Authors: Luc Dartois, Paulin Fournier, Ismaël Jecker, and Nathan Lhote


Abstract
Deterministic two-way transducers define the robust class of regular functions which is, among other good properties, closed under composition. However, the best known algorithms for composing two-way transducers cause a double exponential blow-up in the size of the inputs. In this paper, we introduce a class of transducers for which the composition has polynomial complexity. It is the class of reversible transducers, for which the computation steps can be reversed deterministically. While in the one-way setting this class is not very expressive, we prove that any two-way transducer can be made reversible through a single exponential blow-up. As a consequence, we prove that the composition of two-way transducers can be done with a single exponential blow-up in the number of states. A uniformization of a relation is a function that has the same domain and is included in the original relation. Our main result actually states that we can uniformize any non-deterministic two-way transducer by a reversible transducer with a single exponential blow-up, improving the known result by de Souza, which has quadruple exponential complexity. As a side result, our construction also gives a quadratic transformation from copyless streaming string transducers to two-way transducers, improving on the previous exponential bound.

Cite as

Luc Dartois, Paulin Fournier, Ismaël Jecker, and Nathan Lhote. On Reversible Transducers. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 113:1-113:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dartois_et_al:LIPIcs.ICALP.2017.113,
  author =	{Dartois, Luc and Fournier, Paulin and Jecker, Isma\"{e}l and Lhote, Nathan},
  title =	{{On Reversible Transducers}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{113:1--113:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.113},
  URN =		{urn:nbn:de:0030-drops-74491},
  doi =		{10.4230/LIPIcs.ICALP.2017.113},
  annote =	{Keywords: Transducers, reversibility, two-way, uniformization}
}
Document
Which Classes of Origin Graphs Are Generated by Transducers

Authors: Mikolaj Bojanczyk, Laure Daviaud, Bruno Guillon, and Vincent Penelle


Abstract
We study various models of transducers equipped with origin information. We consider the semantics of these models as particular graphs, called origin graphs, and we characterise the families of such graphs recognised by streaming string transducers.

Cite as

Mikolaj Bojanczyk, Laure Daviaud, Bruno Guillon, and Vincent Penelle. Which Classes of Origin Graphs Are Generated by Transducers. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 114:1-114:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2017.114,
  author =	{Bojanczyk, Mikolaj and Daviaud, Laure and Guillon, Bruno and Penelle, Vincent},
  title =	{{Which Classes of Origin Graphs Are Generated by Transducers}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{114:1--114:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.114},
  URN =		{urn:nbn:de:0030-drops-73984},
  doi =		{10.4230/LIPIcs.ICALP.2017.114},
  annote =	{Keywords: Streaming String Transducers, Origin Semantics, String-to-String Transductions, MSO Definability}
}
Document
Continuity and Rational Functions

Authors: Michaël Cadilhac, Olivier Carton, and Charles Paperman


Abstract
A word-to-word function is continuous for a class of languages V if its inverse maps V languages to V. This notion provides a basis for an algebraic study of transducers, and was integral to the characterization of the sequential transducers computable in some circuit complexity classes. Here, we report on the decidability of continuity for functional transducers and some standard classes of regular languages. Previous algebraic studies of transducers have focused on the structure of the underlying input automaton, disregarding the output. We propose a comparison of the two algebraic approaches through two questions: When are the automaton structure and the continuity properties related, and when does continuity propagate to superclasses?

Cite as

Michaël Cadilhac, Olivier Carton, and Charles Paperman. Continuity and Rational Functions. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 115:1-115:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cadilhac_et_al:LIPIcs.ICALP.2017.115,
  author =	{Cadilhac, Micha\"{e}l and Carton, Olivier and Paperman, Charles},
  title =	{{Continuity and Rational Functions}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{115:1--115:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.115},
  URN =		{urn:nbn:de:0030-drops-74583},
  doi =		{10.4230/LIPIcs.ICALP.2017.115},
  annote =	{Keywords: Transducers, rational functions, language varieties, continuity}
}
Document
A Universal Ordinary Differential Equation

Authors: Olivier Bournez and Amaury Pouly


Abstract
An astonishing fact was established by Lee A. Rubel (1981): there exists a fixed non-trivial fourth-order polynomial differential algebraic equation (DAE) such that for any positive continuous function phi on the reals, and for any positive continuous function epsilon(t), it has a C^infinity solution with | y(t) - phi(t) | < epsilon(t) for all t. Lee A. Rubel provided an explicit example of such a polynomial DAE. Other examples of universal DAEs have later been proposed by other authors. However, while these results may seem very surprising, their proofs are quite simple and are frustrating for a computability theorist, or for people interested in modelling systems in the experimental sciences. First, the involved notion of universality is far from the usual notions of universality in computability theory. In particular, the proofs heavily rely on the fact that the constructed DAE does not have a unique solution for given initial data. This is very different from the usual notions of universality, where one would expect a clear, unambiguous notion of evolution for given initial data, for example as in computability theory. Second, the proofs usually rely on solutions that are piecewise defined. Hence they cannot be analytic, while analyticity is often a key expected property in the experimental sciences. Third, the proofs of these results can be interpreted as showing that (fourth-order) polynomial differential algebraic equations are too loose a model compared to classical ordinary differential equations. In particular, one may challenge whether the result is really a universality result. The question of whether one can require the solution that approximates phi to be the unique solution for given initial data is a well-known open problem [Rubel 1981, page 2], [Boshernitzan 1986, Conjecture 6.2]. In this article, we solve it and show that Rubel's statement holds for polynomial ordinary differential equations (ODEs); since polynomial ODEs have a unique solution for given initial data, this positively answers Rubel's open problem. More precisely, we show that there exists a fixed polynomial ODE such that for any phi and epsilon(t) there exists some initial condition that yields a solution that is epsilon-close to phi at all times. The proof uses ordinary differential equation programming. We believe it sheds some light on computability theory for continuous-time models of computation. It also demonstrates that ordinary differential equations are indeed universal in the sense of Rubel and hence suffer from the same problem as DAEs for modelling: a single equation is capable of modelling any phenomenon with arbitrary precision, meaning that trying to fit a model based on polynomial DAEs or ODEs is too general (if it has a sufficient dimension).

Cite as

Olivier Bournez and Amaury Pouly. A Universal Ordinary Differential Equation. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 116:1-116:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bournez_et_al:LIPIcs.ICALP.2017.116,
  author =	{Bournez, Olivier and Pouly, Amaury},
  title =	{{A Universal Ordinary Differential Equation}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{116:1--116:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.116},
  URN =		{urn:nbn:de:0030-drops-74335},
  doi =		{10.4230/LIPIcs.ICALP.2017.116},
  annote =	{Keywords: Ordinary Differential Equations, Universal Differential Equations, Analog Models of Computation, Continuous-Time Models of Computation, Computability}
}
Document
Regular Separability of Parikh Automata

Authors: Lorenzo Clemente, Wojciech Czerwinski, Slawomir Lasota, and Charles Paperman


Abstract
We investigate a subclass of languages recognized by vector addition systems, namely languages of nondeterministic Parikh automata. While the regularity problem (is the language of a given automaton regular?) is undecidable for this model, we surprisingly show decidability of the regular separability problem: given two Parikh automata, is there a regular language that contains one of them and is disjoint from the other? We supplement this result by proving undecidability of the same problem already for languages of visibly one counter automata.

Cite as

Lorenzo Clemente, Wojciech Czerwinski, Slawomir Lasota, and Charles Paperman. Regular Separability of Parikh Automata. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 117:1-117:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{clemente_et_al:LIPIcs.ICALP.2017.117,
  author =	{Clemente, Lorenzo and Czerwinski, Wojciech and Lasota, Slawomir and Paperman, Charles},
  title =	{{Regular Separability of Parikh Automata}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{117:1--117:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.117},
  URN =		{urn:nbn:de:0030-drops-74971},
  doi =		{10.4230/LIPIcs.ICALP.2017.117},
  annote =	{Keywords: Regular separability problem, Parikh automata, integer vector addition systems, visible one counter automata, decidability, undecidability}
}
Document
An Efficient Algorithm to Decide Periodicity of b-Recognisable Sets Using MSDF Convention

Authors: Bernard Boigelot, Isabelle Mainz, Victor Marsault, and Michel Rigo


Abstract
Given an integer base b>1, a set of integers is represented in base b by a language over {0,1,...,b-1}. The set is said to be b-recognisable if its representation is a regular language. It is known that eventually periodic sets are b-recognisable in every base b, and Cobham's theorem implies the converse: no other set is b-recognisable in every base b. We are interested in deciding whether a b-recognisable set of integers (given as a finite automaton) is eventually periodic. Honkala showed in 1986 that this problem is decidable, and recent developments give efficient decision algorithms. However, these algorithms only work when the integers are written with the least significant digit first. In this work, we consider the natural order of digits (Most Significant Digit First) and give a quasi-linear algorithm to solve the problem in this case.
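As a concrete reminder of how base-b representations are processed most-significant-digit first (this is background, not the paper's decision algorithm), the sketch below recognises the purely periodic set {n : n mod p = r} by tracking the value of the scanned prefix modulo p; the parameters b, p, r and the helper names are example assumptions.

def msdf_accepts(digits, b, p, r):
    """Read a base-b representation most-significant-digit first and decide
    membership in the purely periodic set {n : n mod p == r}. The only state
    needed is the value of the scanned prefix modulo p, which is exactly the
    kind of finite-state information a b-recognising automaton keeps."""
    state = 0
    for d in digits:
        if not 0 <= d < b:
            raise ValueError("digit out of range")
        state = (state * b + d) % p
    return state == r

def to_msdf_digits(n, b):
    # Base-b digits of n, most significant first (example helper).
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % b)
        n //= b
    return digits[::-1]

if __name__ == "__main__":
    # Multiples of 3 written in base 2, read MSDF.
    for n in range(10):
        print(n, msdf_accepts(to_msdf_digits(n, 2), b=2, p=3, r=0))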

Cite as

Bernard Boigelot, Isabelle Mainz, Victor Marsault, and Michel Rigo. An Efficient Algorithm to Decide Periodicity of b-Recognisable Sets Using MSDF Convention. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 118:1-118:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{boigelot_et_al:LIPIcs.ICALP.2017.118,
  author =	{Boigelot, Bernard and Mainz, Isabelle and Marsault, Victor and Rigo, Michel},
  title =	{{An Efficient Algorithm to Decide Periodicity of b-Recognisable Sets Using MSDF Convention}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{118:1--118:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.118},
  URN =		{urn:nbn:de:0030-drops-74317},
  doi =		{10.4230/LIPIcs.ICALP.2017.118},
  annote =	{Keywords: integer-base systems, automata, recognisable sets, periodic sets}
}
Document
Polynomial-Space Completeness of Reachability for Succinct Branching VASS in Dimension One

Authors: Diego Figueira, Ranko Lazic, Jérôme Leroux, Filip Mazowiecki, and Grégoire Sutre


Abstract
Whether the reachability problem for branching vector addition systems, or equivalently the provability problem for multiplicative exponential linear logic, is decidable has been a long-standing open question. The one-dimensional case is a generalisation of the extensively studied one-counter nets, and it was recently shown to be polynomial-time complete provided counter updates are given in unary. Our main contribution is to determine the complexity when the encoding is binary: polynomial-space complete.

Cite as

Diego Figueira, Ranko Lazic, Jérôme Leroux, Filip Mazowiecki, and Grégoire Sutre. Polynomial-Space Completeness of Reachability for Succinct Branching VASS in Dimension One. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 119:1-119:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{figueira_et_al:LIPIcs.ICALP.2017.119,
  author =	{Figueira, Diego and Lazic, Ranko and Leroux, J\'{e}r\^{o}me and Mazowiecki, Filip and Sutre, Gr\'{e}goire},
  title =	{{Polynomial-Space Completeness of Reachability for Succinct Branching VASS in Dimension One}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{119:1--119:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.119},
  URN =		{urn:nbn:de:0030-drops-74374},
  doi =		{10.4230/LIPIcs.ICALP.2017.119},
  annote =	{Keywords: branching vector addition systems, reachability problem}
}
Document
Satisfiability and Model Checking for the Logic of Sub-Intervals under the Homogeneity Assumption

Authors: Laura Bozzelli, Alberto Molinari, Angelo Montanari, Adriano Peron, and Pietro Sala


Abstract
In this paper, we investigate the finite satisfiability and model checking problems for the logic D of the sub-interval relation under the homogeneity assumption, which constrains a proposition letter to hold over an interval if and only if it holds over all its points. First, we prove that the satisfiability problem for D, over finite linear orders, is PSPACE-complete; then, we show that its model checking problem, over finite Kripke structures, is PSPACE-complete as well.

Cite as

Laura Bozzelli, Alberto Molinari, Angelo Montanari, Adriano Peron, and Pietro Sala. Satisfiability and Model Checking for the Logic of Sub-Intervals under the Homogeneity Assumption. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 120:1-120:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bozzelli_et_al:LIPIcs.ICALP.2017.120,
  author =	{Bozzelli, Laura and Molinari, Alberto and Montanari, Angelo and Peron, Adriano and Sala, Pietro},
  title =	{{Satisfiability and Model Checking for the Logic of Sub-Intervals under the Homogeneity Assumption}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{120:1--120:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.120},
  URN =		{urn:nbn:de:0030-drops-74703},
  doi =		{10.4230/LIPIcs.ICALP.2017.120},
  annote =	{Keywords: Interval Temporal Logic, Satisfiability, Model Checking, Decidability, Computational Complexity}
}
Document
Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes

Authors: Raphaël Berthon, Mickael Randour, and Jean-François Raskin


Abstract
The beyond worst-case synthesis problem was introduced recently by Bruyère et al. [BFRR14]: it aims at building system controllers that provide strict worst-case performance guarantees against an antagonistic environment while ensuring higher expected performance against a stochastic model of the environment. Our work extends the framework of [Bruyère/Filiot/Randour/Raskin, STACS 2014] and follow-up papers, which focused on quantitative objectives, by addressing the case of omega-regular conditions encoded as parity objectives, a natural way to represent functional requirements of systems. We build strategies that satisfy a main parity objective on all plays, while ensuring a secondary one with sufficient probability. This setting raises new challenges in comparison to quantitative objectives, as one cannot easily mix different strategies without endangering the functional properties of the system. We establish that, for all variants of this problem, deciding the existence of a strategy lies in NP and in coNP, the same complexity class as classical parity games. Hence, our framework provides additional modeling power while staying in the same complexity class.

Cite as

Raphaël Berthon, Mickael Randour, and Jean-François Raskin. Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 121:1-121:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{berthon_et_al:LIPIcs.ICALP.2017.121,
  author =	{Berthon, Rapha\"{e}l and Randour, Mickael and Raskin, Jean-Fran\c{c}ois},
  title =	{{Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{121:1--121:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.121},
  URN =		{urn:nbn:de:0030-drops-74360},
  doi =		{10.4230/LIPIcs.ICALP.2017.121},
  annote =	{Keywords: Markov decision processes, parity objectives, beyond worst-case synthesis}
}
Document
Synchronizability of Communicating Finite State Machines is not Decidable

Authors: Alain Finkel and Etienne Lozes


Abstract
A system of communicating finite state machines is synchronizable if its send trace semantics, i.e. the set of sequences of sendings it can perform, is the same when its communications are FIFO asynchronous and when they are just rendez-vous synchronizations. This property was claimed to be decidable in several conference and journal papers for either mailboxes or peer-to-peer communications, thanks to a form of small model property. In this paper, we show that this small model property holds neither for mailbox communications nor for peer-to-peer communications; therefore, the decidability of synchronizability becomes an open question. We close this question for peer-to-peer communications, and we show that synchronizability is actually undecidable. We show that synchronizability is decidable if the topology of communications is an oriented ring. We also show that, in this case, synchronizability implies the absence of unspecified receptions and orphan messages, and the channel-recognizability of the reachability set.

Cite as

Alain Finkel and Etienne Lozes. Synchronizability of Communicating Finite State Machines is not Decidable. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 122:1-122:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{finkel_et_al:LIPIcs.ICALP.2017.122,
  author =	{Finkel, Alain and Lozes, Etienne},
  title =	{{Synchronizability of Communicating Finite State Machines is not Decidable}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{122:1--122:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.122},
  URN =		{urn:nbn:de:0030-drops-74020},
  doi =		{10.4230/LIPIcs.ICALP.2017.122},
  annote =	{Keywords: verification, distributed system, asynchronous communications, choreographies}
}
Document
Admissiblity in Concurrent Games

Authors: Nicolas Basset, Gilles Geeraerts, Jean-François Raskin, and Ocan Sankur


Abstract
In this paper, we study the notion of admissibility for randomised strategies in concurrent games. Intuitively, an admissible strategy is one where the player plays 'as well as possible', because there is no other strategy that dominates it, i.e., that wins (almost surely) against a superset of adversarial strategies. We prove that admissible strategies always exist in concurrent games, and we characterise them precisely. Then, when the objectives of the players are omega-regular, we show how to perform assume-admissible synthesis, i.e., how to compute admissible strategies that win (almost surely) under the hypothesis that the other players play admissible strategies only.

Cite as

Nicolas Basset, Gilles Geeraerts, Jean-François Raskin, and Ocan Sankur. Admissiblity in Concurrent Games. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 123:1-123:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{basset_et_al:LIPIcs.ICALP.2017.123,
  author =	{Basset, Nicolas and Geeraerts, Gilles and Raskin, Jean-Fran\c{c}ois and Sankur, Ocan},
  title =	{{Admissiblity in Concurrent Games}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{123:1--123:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.123},
  URN =		{urn:nbn:de:0030-drops-74765},
  doi =		{10.4230/LIPIcs.ICALP.2017.123},
  annote =	{Keywords: Multi-player games, admissibility, concurrent games, randomized strategies}
}
Document
Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs

Authors: Karl Bringmann, Thomas Dueholm Hansen, and Sebastian Krinninger


Abstract
We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with n nodes and m edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time ~O(m^{3/4} n^{3/2}), which gives the first improvement over Megiddo's ~O(n^3) algorithm [JACM'83] for sparse graphs. (We use the notation ~O(.) to hide factors that are polylogarithmic in n.) We further demonstrate how to obtain both an algorithm with running time n^3 / 2^{Omega(sqrt(log n))} on general graphs and an algorithm with running time ~O(n) on constant-treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest.
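For background on the problem itself (this is the classical parametric-search baseline, not the algorithm of the paper), the minimum cost-to-time ratio equals the smallest lambda for which the edge weights cost - lambda * time admit no negative cycle. The sketch below binary-searches over lambda and uses Bellman-Ford for negative-cycle detection; the graph encoding, tolerance, and iteration count are assumptions of this example.

def has_negative_cycle(n, edges, weight):
    """Bellman-Ford negative-cycle detection on a directed graph with nodes
    0..n-1; edges is a list of (u, v, cost, time) and weight maps an edge's
    cost and time to a scalar. Starting all distances at 0 simulates a
    virtual source that reaches every node."""
    dist = [0.0] * n
    for _ in range(n):
        changed = False
        for (u, v, cost, time) in edges:
            w = weight(cost, time)
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False
    return changed

def min_ratio_cycle_value(n, edges, lo=-1e6, hi=1e6, iters=60):
    """Binary search for the smallest lambda such that edge weights
    cost - lambda * time admit no negative cycle; that threshold is the
    minimum cost-to-time ratio over all cycles (assuming positive times)."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if has_negative_cycle(n, edges, lambda c, t: c - mid * t):
            hi = mid      # some cycle has ratio below mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    # Toy graph: a 3-cycle of cost 6 and time 4 (ratio 1.5) and a
    # 2-cycle of cost 5 and time 2 (ratio 2.5); edges are (u, v, cost, time).
    edges = [(0, 1, 2, 1), (1, 2, 2, 2), (2, 0, 2, 1),
             (0, 3, 3, 1), (3, 0, 2, 1)]
    print(round(min_ratio_cycle_value(4, edges), 3))   # expect about 1.5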

Cite as

Karl Bringmann, Thomas Dueholm Hansen, and Sebastian Krinninger. Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 124:1-124:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2017.124,
  author =	{Bringmann, Karl and Dueholm Hansen, Thomas and Krinninger, Sebastian},
  title =	{{Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{124:1--124:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.124},
  URN =		{urn:nbn:de:0030-drops-74398},
  doi =		{10.4230/LIPIcs.ICALP.2017.124},
  annote =	{Keywords: quantitative verification and synthesis, parametric search, shortest paths, negative cycle detection}
}
Document
Simple Greedy Algorithms for Fundamental Multidimensional Graph Problems

Authors: Vittorio Bilò, Ioannis Caragiannis, Angelo Fanelli, Michele Flammini, and Gianpiero Monaco


Abstract
We revisit fundamental problems in undirected and directed graphs, such as the problems of computing spanning trees, shortest paths, Steiner trees, and spanning arborescences of minimum cost. We assume that there are d different cost functions associated with the edges of the input graph and seek solutions to the resulting multidimensional graph problems so that the p-norm of the different costs of the solution is minimized. We present combinatorial algorithms that achieve very good approximations for this objective. The main advantage of our algorithms is their simplicity: they are as simple as the classical combinatorial graph algorithms of Dijkstra and Kruskal, or the greedy algorithm for matroids.
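To illustrate the "as simple as Kruskal" flavour on a multidimensional instance (a hedged sketch, not necessarily the authors' algorithm or its approximation guarantee), the code below scalarises each edge's d-dimensional cost vector by its p-norm and then runs plain Kruskal; the edge encoding and parameters are invented for this example.

def pnorm(vector, p):
    # p-norm of a d-dimensional cost vector.
    return sum(abs(x) ** p for x in vector) ** (1.0 / p)

def kruskal_pnorm(n, edges, p):
    """Kruskal's algorithm on nodes 0..n-1 where each edge is
    (u, v, cost_vector) and edges are sorted by the p-norm of their own cost
    vector. Returns the chosen edges. This per-edge scalarisation is only an
    illustrative greedy baseline for the multidimensional objective."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for (u, v, cost) in sorted(edges, key=lambda e: pnorm(e[2], p)):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, cost))
    return tree

if __name__ == "__main__":
    # Toy graph with d = 2 cost dimensions.
    edges = [(0, 1, (1, 4)), (1, 2, (2, 1)), (0, 2, (3, 3)), (2, 3, (1, 1))]
    tree = kruskal_pnorm(4, edges, p=2)
    total = [sum(c[i] for (_, _, c) in tree) for i in range(2)]
    print(tree, "total cost vector:", total, "p-norm:", round(pnorm(total, 2), 3))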

Cite as

Vittorio Bilò, Ioannis Caragiannis, Angelo Fanelli, Michele Flammini, and Gianpiero Monaco. Simple Greedy Algorithms for Fundamental Multidimensional Graph Problems. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 125:1-125:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bilo_et_al:LIPIcs.ICALP.2017.125,
  author =	{Bil\`{o}, Vittorio and Caragiannis, Ioannis and Fanelli, Angelo and Flammini, Michele and Monaco, Gianpiero},
  title =	{{Simple Greedy Algorithms for Fundamental Multidimensional Graph Problems}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{125:1--125:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.125},
  URN =		{urn:nbn:de:0030-drops-74669},
  doi =		{10.4230/LIPIcs.ICALP.2017.125},
  annote =	{Keywords: multidimensional graph problems, matroids, shortest paths, Steiner trees, arborescences}
}
Document
Stochastic k-Server: How Should Uber Work?

Authors: Sina Dehghani, Soheil Ehsani, MohammadTaghi Hajiaghayi, Vahid Liaghat, and Saeed Seddighin


Abstract
In this paper we study a stochastic variant of the celebrated k-server problem. In the k-server problem, we are required to minimize the total movement of k servers that are serving an online sequence of t requests in a metric. In the stochastic setting we are given t independent distributions <P_1, P_2, ..., P_t> in advance, and at every time step i a request is drawn from P_i. Designing the optimal online algorithm in such a setting is NP-hard, therefore the emphasis of our work is on designing an approximately optimal online algorithm. We first show a structural characterization for a certain class of non-adaptive online algorithms. We prove that in general metrics, the best of such algorithms has a cost no worse than three times that of the optimal online algorithm. Next, we present an integer program that finds the optimal algorithm of this class for any arbitrary metric. Finally, by rounding the solution of the linear relaxation of this program, we present an online algorithm for the stochastic k-server problem with an approximation factor of 3 in the line and circle metrics and a factor of O(log n) in general metrics. In this way, we achieve an approximation factor that is independent of k, the number of servers. Moreover, we define the Uber problem, motivated by the extraordinary growth of online network transportation services. In the Uber problem, each demand consists of two points, a source and a destination, in the metric. Serving a demand is to move a server to its source and then to its destination. The objective is again minimizing the total movement of the k given servers. It is not hard to show that given an alpha-approximation algorithm for the k-server problem, we can obtain a max{3, alpha}-approximation algorithm for the Uber problem. Motivated by the fact that demands are usually highly correlated with time (e.g., what day of the week or what time of day the demand arrives), we study the stochastic Uber problem. Using our results for stochastic k-server, we can obtain a 3-approximation algorithm for the stochastic Uber problem in line and circle metrics, and an O(log n)-approximation algorithm for a general metric of size n. Furthermore, we extend our results to the correlated setting, where the probability of a request arriving at a certain point depends not only on the time step but also on the previously arrived requests.
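The cost model can be illustrated with a few lines of Python: k servers on the line serve requests drawn from per-step distributions, and the objective charges the total movement. The greedy "move the nearest server" rule below is only a naive baseline for illustration, not the LP-rounding algorithm of the paper; all parameters are invented.

```python
import random

# Toy baseline illustrating the cost model only (not the paper's algorithm).

def greedy_k_server(servers, requests):
    servers = list(servers)
    total_movement = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total_movement += abs(servers[i] - r)
        servers[i] = r                     # serve r with the closest server
    return total_movement

random.seed(0)
# t independent distributions P_1, ..., P_t; here uniform over rotating intervals.
requests = [random.uniform(10 * (i % 3), 10 * (i % 3) + 1) for i in range(20)]
print(round(greedy_k_server([0.0, 15.0, 30.0], requests), 2))
```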

Cite as

Sina Dehghani, Soheil Ehsani, MohammadTaghi Hajiaghayi, Vahid Liaghat, and Saeed Seddighin. Stochastic k-Server: How Should Uber Work?. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 126:1-126:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dehghani_et_al:LIPIcs.ICALP.2017.126,
  author =	{Dehghani, Sina and Ehsani, Soheil and Hajiaghayi, MohammadTaghi and Liaghat, Vahid and Seddighin, Saeed},
  title =	{{Stochastic k-Server: How Should Uber Work?}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{126:1--126:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.126},
  URN =		{urn:nbn:de:0030-drops-74806},
  doi =		{10.4230/LIPIcs.ICALP.2017.126},
  annote =	{Keywords: k-server, stochastic, competitive ratio, online algorithm, Uber}
}
Document
Multiple Source Dual Fault Tolerant BFS Trees

Authors: Manoj Gupta and Shahbaz Khan


Abstract
Let G=(V,E) be a graph with n vertices and m edges, with a designated set of sigma sources S subseteq V. A fault tolerant subgraph for a graph problem maintains a sparse subgraph H=(V,E') of G with E' subseteq E, such that for any set F of k failures, the solution for the graph problem on G\F is maintained in its subgraph H\F. We address the problem of maintaining a fault tolerant subgraph for computing the Breadth First Search (BFS) tree of the graph from a single source s in V (referred to as k FT-BFS) or multiple sources S subseteq V (referred to as k FT-MBFS). We simply refer to them as FT-BFS (or FT-MBFS) for k=1, and dual FT-BFS (or dual FT-MBFS) for k=2. The k FT-BFS problem was first studied by Parter and Peleg [ESA13]. They designed an algorithm to compute an FT-BFS subgraph of size O(n^{3/2}). Further, they showed how their algorithm can be easily extended to FT-MBFS, requiring O(sigma^{1/2}n^{3/2}) space. They also presented matching lower bounds for these results. The result was later extended to solve dual FT-BFS by Parter [PODC15], requiring O(n^{5/3}) space, again with matching lower bounds. However, their result was limited to edge failures in undirected graphs and involved a very complex analysis. Moreover, their solution does not seem to be directly extendable to the dual FT-MBFS problem. We present a similar algorithm to solve the dual FT-BFS problem with a much simpler analysis. Moreover, our algorithm also works for vertex failures and directed graphs, and can be easily extended to handle the dual FT-MBFS problem using O(sigma^{1/3}n^{5/3}) space, matching the lower bound described by Parter [PODC15]. The key difference in our approach is a much simpler classification of path interactions, which formed the basis of the analysis by Parter [PODC15].
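The following Python sketch only checks the FT-BFS requirement for a single edge failure (k = 1); it is a brute-force verifier of the definition, not the subgraph construction studied in the paper.

```python
from collections import deque

# For every failed edge f, distances from s in H \ {f} must equal those in G \ {f}.

def bfs_dist(adj, s, banned=frozenset()):
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if (u, v) in banned or (v, u) in banned or v in dist:
                continue
            dist[v] = dist[u] + 1
            q.append(v)
    return dist

def is_ft_bfs(n, edges_G, edges_H, s):
    adj_G = {v: [] for v in range(n)}
    adj_H = {v: [] for v in range(n)}
    for u, v in edges_G:
        adj_G[u].append(v); adj_G[v].append(u)
    for u, v in edges_H:
        adj_H[u].append(v); adj_H[v].append(u)
    inf = float("inf")
    for f in edges_G:                       # every single edge failure
        dG = bfs_dist(adj_G, s, banned={f})
        dH = bfs_dist(adj_H, s, banned={f})
        if any(dH.get(v, inf) != dG.get(v, inf) for v in range(n)):
            return False
    return True

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]    # the 4-cycle is trivially its own FT-BFS subgraph
print(is_ft_bfs(4, cycle, cycle, s=0))      # True
```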

Cite as

Manoj Gupta and Shahbaz Khan. Multiple Source Dual Fault Tolerant BFS Trees. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 127:1-127:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.ICALP.2017.127,
  author =	{Gupta, Manoj and Khan, Shahbaz},
  title =	{{Multiple Source Dual Fault Tolerant BFS Trees}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{127:1--127:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.127},
  URN =		{urn:nbn:de:0030-drops-74184},
  doi =		{10.4230/LIPIcs.ICALP.2017.127},
  annote =	{Keywords: BFS, fault-tolerant, graph, algorithms, data-structures}
}
Document
Near-Optimal Induced Universal Graphs for Bounded Degree Graphs

Authors: Mikkel Abrahamsen, Stephen Alstrup, Jacob Holm, Mathias Bæk Tejs Knudsen, and Morten Stöckel


Abstract
A graph U is an induced universal graph for a family F of graphs if every graph in F is a vertex-induced subgraph of U. We give upper and lower bounds on the size of induced universal graphs for the family of graphs with n vertices of maximum degree D. Our new bounds improve several previous results, except for the special cases where D is either near-constant or almost n/2. For constant even D, Butler [Graphs and Combinatorics 2009] has shown O(n^(D/2)), and recently Alon and Nenadov [SODA 2017] showed the same bound for constant odd D. For constant D, Butler also gave a matching lower bound. For general graphs, which corresponds to D = n, Alon [Geometric and Functional Analysis, to appear] proved the existence of an induced universal graph with (1+o(1)) * 2^((n-1)/2) vertices, leading to a smaller constant than in the previously best known bound of 16 * 2^(n/2) by Alstrup, Kaplan, Thorup, and Zwick [STOC 2015]. In this paper we give a lower bound of binom(floor(n/2), floor(D/2)) * n^(-O(1)) and an upper bound of binom(floor(n/2), floor(D/2)) * 2^(O(sqrt(D log D) * log(n/D))), where the upper bound is the main contribution. The proof that it is an induced universal graph relies on a randomized argument. We also give a deterministic upper bound of O(n^k / (k-1)!). These upper bounds are the best known when D <= n/2 - tilde-Omega(n^(3/4)) and either D is even and D = omega(1) or D is odd and D = omega(log n/log log n). In this range we improve asymptotically on the previous best known results by Butler [Graphs and Combinatorics 2009], Esperet, Labourel, and Ochem [IPL 2008], Adjiashvili and Rotbart [ICALP 2014], Alon and Nenadov [SODA 2017], and Alon [Geometric and Functional Analysis, to appear].
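For very small instances, the definition can be checked by brute force. The Python sketch below tests whether a host graph U is an induced universal graph for all n-vertex graphs of maximum degree at most D; it is exponential and purely illustrative, unrelated to the constructions behind the bounds above.

```python
from itertools import combinations, permutations

# Exponential brute-force check of the induced-universality definition (tiny instances only).

def all_graphs(n, max_deg):
    pairs = list(combinations(range(n), 2))
    for mask in range(2 ** len(pairs)):
        E = {pairs[i] for i in range(len(pairs)) if mask >> i & 1}
        if all(sum(v in e for e in E) <= max_deg for v in range(n)):
            yield E

def contains_induced(U_verts, U_edges, F_edges, n):
    U = {frozenset(e) for e in U_edges}
    for subset in combinations(U_verts, n):
        for perm in permutations(subset):
            if all((frozenset((perm[i], perm[j])) in U) == ((i, j) in F_edges)
                   for i in range(n) for j in range(i + 1, n)):
                return True
    return False

def is_induced_universal(U_verts, U_edges, n, max_deg):
    return all(contains_induced(U_verts, U_edges, F, n) for F in all_graphs(n, max_deg))

# The path on 3 vertices hosts both 2-vertex graphs (an edge and a non-edge).
P3 = [(0, 1), (1, 2)]
print(is_induced_universal(range(3), P3, n=2, max_deg=1))  # True
```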

Cite as

Mikkel Abrahamsen, Stephen Alstrup, Jacob Holm, Mathias Bæk Tejs Knudsen, and Morten Stöckel. Near-Optimal Induced Universal Graphs for Bounded Degree Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 128:1-128:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{abrahamsen_et_al:LIPIcs.ICALP.2017.128,
  author =	{Abrahamsen, Mikkel and Alstrup, Stephen and Holm, Jacob and Knudsen, Mathias B{\ae}k Tejs and St\"{o}ckel, Morten},
  title =	{{Near-Optimal Induced Universal Graphs for Bounded Degree Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{128:1--128:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.128},
  URN =		{urn:nbn:de:0030-drops-74114},
  doi =		{10.4230/LIPIcs.ICALP.2017.128},
  annote =	{Keywords: Adjacency labeling schemes, Bounded degree graphs, Induced universal graphs, Distributed computing}
}
Document
Universal Framework for Wireless Scheduling Problems

Authors: Eyjólfur I. Ásgeirsson, Magnús M. Halldórsson, and Tigran Tonoyan


Abstract
An overarching issue in the resource management of wireless networks is assessing their capacity: how much communication can be achieved in a network, utilizing all the available tools (power control, scheduling, routing, channel assignment, and rate adjustment)? We propose the first framework for approximation algorithms in the physical model that addresses these questions in full, including rate control. The approximations obtained are doubly logarithmic in the link length and rate diversity. Where previous bounds are known, this gives an exponential improvement. A key contribution is showing that the complex interference relationship of the physical model can be simplified, at a small cost, into a novel type of amenable conflict graph. We also show that the approximation obtained is provably the best possible for any conflict graph formulation.
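For reference, the feasibility condition of the physical (SINR) model underlying the framework can be stated directly: a set of simultaneously scheduled links is feasible if every receiver's signal-to-interference-plus-noise ratio is at least a threshold beta. The Python check below is a plain transcription of that condition; the path-loss exponent alpha, the threshold beta, the noise term, and the example links are illustrative values only.

```python
import math

# Direct SINR feasibility check for a set of links in the physical model.

def sinr_feasible(links, alpha=3.0, beta=1.0, noise=0.1):
    """links: list of (sender_pos, receiver_pos, power) with 2D positions."""
    def received(power, p, q):
        return power / math.dist(p, q) ** alpha
    for i, (s_i, r_i, P_i) in enumerate(links):
        signal = received(P_i, s_i, r_i)
        interference = sum(received(P_j, s_j, r_i)
                           for j, (s_j, _, P_j) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

far = [((0, 0), (1, 0), 1.0), ((100, 0), (101, 0), 1.0)]
near = [((0, 0), (1, 0), 1.0), ((0.5, 0), (1.5, 0), 1.0)]
print(sinr_feasible(far), sinr_feasible(near))  # True False
```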

Cite as

Eyjólfur I. Ásgeirsson, Magnús M. Halldórsson, and Tigran Tonoyan. Universal Framework for Wireless Scheduling Problems. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 129:1-129:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{asgeirsson_et_al:LIPIcs.ICALP.2017.129,
  author =	{\'{A}sgeirsson, Eyj\'{o}lfur I. and Halld\'{o}rsson, Magn\'{u}s M. and Tonoyan, Tigran},
  title =	{{Universal Framework for Wireless Scheduling Problems}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{129:1--129:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.129},
  URN =		{urn:nbn:de:0030-drops-74228},
  doi =		{10.4230/LIPIcs.ICALP.2017.129},
  annote =	{Keywords: Wireless, Scheduling, Physical Model, Approximation framework}
}
Document
Streaming Communication Protocols

Authors: Lucas Boczkowski, Iordanis Kerenidis, and Frédéric Magniez


Abstract
We define the Streaming Communication model, which combines the main aspects of communication complexity and streaming. Input arrives as a stream, spread between several agents across a network. Each agent has a bounded memory, which can be updated upon receiving a new bit or a message from another agent. First, we provide tight tradeoffs between the necessary resources, i.e., communication between agents and memory, for some of the canonical problems from communication complexity, by developing a strong general lower bound technique. Second, we analyze the Approximate Matching problem and show that the complexity of this problem (i.e., the achievable approximation ratio) in the one-way variant of our model differs strictly from both the streaming complexity and the one-way communication complexity of the problem.

Cite as

Lucas Boczkowski, Iordanis Kerenidis, and Frédéric Magniez. Streaming Communication Protocols. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 130:1-130:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{boczkowski_et_al:LIPIcs.ICALP.2017.130,
  author =	{Boczkowski, Lucas and Kerenidis, Iordanis and Magniez, Fr\'{e}d\'{e}ric},
  title =	{{Streaming Communication Protocols}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{130:1--130:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.130},
  URN =		{urn:nbn:de:0030-drops-74404},
  doi =		{10.4230/LIPIcs.ICALP.2017.130},
  annote =	{Keywords: Networks, Communication Complexity, Streaming Algorithms}
}
Document
Testable Bounded Degree Graph Properties Are Random Order Streamable

Authors: Morteza Monemizadeh, S. Muthukrishnan, Pan Peng, and Christian Sohler


Abstract
We study which property testing and sublinear time algorithms can be transformed into graph streaming algorithms for random order streams. Our main result is that for bounded degree graphs, any property that is constant-query testable in the adjacency list model can be tested with constant space in a single pass over a random order stream. Our result is obtained by estimating the distribution of local neighborhoods of the vertices on a random order graph stream using constant space. We then show that our approach can also be applied to constant time approximation algorithms for bounded degree graphs in the adjacency list model: as an example, we obtain a constant-space single-pass random order streaming algorithm for approximating the size of a maximum matching with additive error epsilon n (where n is the number of nodes). Our result establishes for the first time that a large class of sublinear algorithms can be simulated in random order streams, while Omega(n) space is needed for many graph streaming problems under adversarial orders.

Cite as

Morteza Monemizadeh, S. Muthukrishnan, Pan Peng, and Christian Sohler. Testable Bounded Degree Graph Properties Are Random Order Streamable. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 131:1-131:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{monemizadeh_et_al:LIPIcs.ICALP.2017.131,
  author =	{Monemizadeh, Morteza and Muthukrishnan, S. and Peng, Pan and Sohler, Christian},
  title =	{{Testable Bounded Degree Graph Properties Are Random Order Streamable}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{131:1--131:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.131},
  URN =		{urn:nbn:de:0030-drops-74782},
  doi =		{10.4230/LIPIcs.ICALP.2017.131},
  annote =	{Keywords: Graph streaming algorithms, graph property testing, constant-time approximation algorithms}
}
Document
Deterministic Graph Exploration with Advice

Authors: Barun Gorain and Andrzej Pelc


Abstract
We consider the task of graph exploration. An n-node graph has unlabeled nodes, and all ports at any node of degree d are arbitrarily numbered 0, ..., d-1. A mobile agent has to visit all nodes and stop. The exploration time is the number of edge traversals. We consider the problem of how much knowledge the agent has to have a priori, in order to explore the graph in a given time, using a deterministic algorithm. This a priori information (advice) is provided to the agent by an oracle, in the form of a binary string, whose length is called the size of advice. We consider two types of oracles. The instance oracle knows the entire instance of the exploration problem, i.e., the port-numbered map of the graph and the starting node of the agent in this map. The map oracle knows the port-numbered map of the graph but does not know the starting node of the agent. What is the minimum size of advice that must be given to the agent by each of these oracles, so that the agent explores the graph in a given time? We first consider exploration in polynomial time, and determine the exact minimum size of advice to achieve it. This size is log(log(log(n))) - Theta(1), for both types of oracles. When advice is large, there are two natural time thresholds: Theta(n^2) for a map oracle, and Theta(n) for an instance oracle, which can be achieved with sufficiently large advice. We show that, with a map oracle, time Theta(n^2) cannot be improved in general, regardless of the size of advice. We also show that the smallest size of advice to achieve this time is larger than n^delta, for any delta < 1/3. For an instance oracle, advice of size O(n*log(n)) is enough to achieve time O(n). We show that, with any advice of size o(n*log(n)), the time of exploration must be at least n^epsilon, for any epsilon < 2, and with any advice of size O(n), the time must be Omega(n^2). We also investigate the minimum advice sufficient for fast exploration of Hamiltonian graphs.
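One simple way to see that an instance oracle with O(n log n) bits of advice allows exploration in O(n) time is the trivial DFS-route encoding sketched below in Python: the oracle, knowing the map and the starting node, encodes a depth-first walk as a sequence of port numbers, and the agent blindly follows it, traversing each DFS tree edge twice. This is a baseline only; the paper's constructions and, in particular, its lower bounds are much finer.

```python
# Baseline instance-oracle scheme: advice = the port sequence of a DFS walk.

def dfs_port_advice(ports, start):
    """ports[u] lists the neighbours of u in port order; returns the port sequence."""
    advice, visited = [], {start}

    def explore(u):
        for p, v in enumerate(ports[u]):
            if v not in visited:
                visited.add(v)
                advice.append(p)                   # go down to v
                explore(v)
                advice.append(ports[v].index(u))   # port at v leading back to u
    explore(start)
    return advice                                  # length 2(n-1): each tree edge twice

def follow(ports, start, advice):
    u, seen = start, {start}
    for p in advice:
        u = ports[u][p]
        seen.add(u)
    return seen

ports = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # a 4-cycle, arbitrary port numbering
advice = dfs_port_advice(ports, 0)
print(len(follow(ports, 0, advice)) == 4)              # True: every node visited
```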

Cite as

Barun Gorain and Andrzej Pelc. Deterministic Graph Exploration with Advice. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 132:1-132:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gorain_et_al:LIPIcs.ICALP.2017.132,
  author =	{Gorain, Barun and Pelc, Andrzej},
  title =	{{Deterministic Graph Exploration with Advice}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{132:1--132:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.132},
  URN =		{urn:nbn:de:0030-drops-73701},
  doi =		{10.4230/LIPIcs.ICALP.2017.132},
  annote =	{Keywords: algorithm, graph, exploration, mobile agent, advice}
}
Document
Combinatorial Secretary Problems with Ordinal Information

Authors: Martin Hoefer and Bojana Kodric


Abstract
The secretary problem is a classic model for online decision making. Recently, combinatorial extensions such as matroid or matching secretary problems have become an important tool to study algorithmic problems in dynamic markets. Here the decision maker must know the numerical value of each arriving element, which can be a demanding informational assumption. In this paper, we initiate the study of combinatorial secretary problems with ordinal information, in which the decision maker only needs to be aware of a preference order consistent with the values of arrived elements. The goal is to design online algorithms with small competitive ratios. For a variety of combinatorial problems, such as bipartite matching, general packing LPs, and independent set with bounded local independence number, we design new algorithms that obtain constant competitive ratios. For the matroid secretary problem, we observe that many existing algorithms for special matroid structures maintain their competitive ratios even in the ordinal model. In these cases, the restriction to ordinal information does not represent any additional obstacle. Moreover, we show that ordinal variants of the submodular matroid secretary problems can be solved using algorithms for the linear versions by extending a result of [Feldman and Zenklusen, 2015]. In contrast, we provide a lower bound of Omega(sqrt(n)/log(n)) for algorithms that are oblivious to the matroid structure, where n is the total number of elements. This contrasts with an upper bound of O(log n) in the cardinal model, and it shows that the technique of thresholding is not sufficient for good algorithms in the ordinal model.
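As a warm-up for the ordinal setting, note that the classical single-choice secretary rule already works with purely ordinal information: observe roughly n/e elements, then accept the first element that beats everything seen so far. The Python simulation below only illustrates this flavour; it is not one of the combinatorial algorithms from the paper.

```python
import random

# Classical 1/e secretary rule, using comparisons (ordinal information) only.

def ordinal_secretary(ranks):
    """ranks[i] is the i-th arrival's rank among all n elements (1 = best);
    the rule only compares ranks, never cardinal values."""
    n = len(ranks)
    sample = max(1, int(n / 2.718281828))
    best_seen = min(ranks[:sample])
    for r in ranks[sample:]:
        if r < best_seen:          # strictly better than everything observed
            return r
    return ranks[-1]               # forced to accept the last arrival

random.seed(1)
trials, wins = 10000, 0
for _ in range(trials):
    order = list(range(1, 101))
    random.shuffle(order)
    wins += ordinal_secretary(order) == 1
print(wins / trials)               # roughly 1/e ~ 0.37
```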

Cite as

Martin Hoefer and Bojana Kodric. Combinatorial Secretary Problems with Ordinal Information. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 133:1-133:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hoefer_et_al:LIPIcs.ICALP.2017.133,
  author =	{Hoefer, Martin and Kodric, Bojana},
  title =	{{Combinatorial Secretary Problems with Ordinal Information}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{133:1--133:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.133},
  URN =		{urn:nbn:de:0030-drops-74594},
  doi =		{10.4230/LIPIcs.ICALP.2017.133},
  annote =	{Keywords: Secretary Problem, Matroid Secretary, Ordinal Information, Online Algorithms}
}
Document
Selling Complementary Goods: Dynamics, Efficiency and Revenue

Authors: Moshe Babaioff, Liad Blumrosen, and Noam Nisan


Abstract
We consider a price competition between two sellers of perfect-complement goods. Each seller posts a price for the good it sells, but the demand is determined according to the sum of prices. This is a classic model by Cournot (1838), who showed that in this setting a monopoly that sells both goods is better for the society than two competing sellers. We show that non-trivial pure Nash equilibria always exist in this game. We also quantify Cournot's observation with respect to both the optimal welfare and the monopoly revenue. We then prove a series of mostly negative results regarding the convergence of best response dynamics to equilibria in such games.

Cite as

Moshe Babaioff, Liad Blumrosen, and Noam Nisan. Selling Complementary Goods: Dynamics, Efficiency and Revenue. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 134:1-134:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{babaioff_et_al:LIPIcs.ICALP.2017.134,
  author =	{Babaioff, Moshe and Blumrosen, Liad and Nisan, Noam},
  title =	{{Selling Complementary Goods: Dynamics, Efficiency and Revenue}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{134:1--134:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.134},
  URN =		{urn:nbn:de:0030-drops-74757},
  doi =		{10.4230/LIPIcs.ICALP.2017.134},
  annote =	{Keywords: Complements, Pricing, Networks, Game Theory, Price of Stability}
}
Document
Saving Critical Nodes with Firefighters is FPT

Authors: Jayesh Choudhari, Anirban Dasgupta, Neeldhara Misra, and M. S. Ramanujan


Abstract
We consider the problem of firefighting to save a critical subset of nodes. The firefighting game is a turn-based game played on a graph, where the fire spreads to vertices in a breadth-first manner from a source, and firefighters can be placed on yet unburnt vertices on alternate rounds to block the fire. In this work, we consider the problem of saving a critical subset of nodes from catching fire, given a total budget on the number of firefighters. We show that the problem is para-NP-hard when parameterized by the size of the critical set, and that it is fixed-parameter tractable on general graphs when parameterized by the number of firefighters. We also demonstrate improved running times on trees and establish that the problem is unlikely to admit a polynomial kernelization (even when restricted to trees). Our work is the first to exploit the connection between the firefighting problem and the notions of important separators and tight separator sequences. Finally, we consider the spreading model of the firefighting game, a closely related problem, and show that the problem of saving a critical set parameterized by the number of firefighters is W[2]-hard, which contrasts with our FPT result for the non-spreading model.
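A tiny simulator of the (non-spreading) firefighting game helps fix the rules: in each round the scheduled firefighters are placed on unburnt vertices, then the fire spreads to all unprotected neighbours. The Python sketch below only evaluates a fixed placement schedule; deciding whether a saving schedule exists at all is the hard problem addressed by the paper.

```python
# Evaluate one placement schedule in the non-spreading firefighting game.

def saves_critical_set(adj, source, critical, schedule):
    """schedule[t] lists the vertices protected in round t (0-indexed)."""
    burning, protected = {source}, set()
    t = 0
    while True:
        for v in (schedule[t] if t < len(schedule) else []):
            if v not in burning:
                protected.add(v)
        frontier = {w for v in burning for w in adj[v]
                    if w not in burning and w not in protected}
        if not frontier:
            break
        burning |= frontier
        t += 1
    return not (burning & set(critical))

# Path 0-1-2-3-4, fire at 0: protecting vertex 2 in round 0 saves {3, 4}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(saves_critical_set(adj, source=0, critical=[3, 4], schedule=[[2]]))  # True
```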

Cite as

Jayesh Choudhari, Anirban Dasgupta, Neeldhara Misra, and M. S. Ramanujan. Saving Critical Nodes with Firefighters is FPT. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 135:1-135:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{choudhari_et_al:LIPIcs.ICALP.2017.135,
  author =	{Choudhari, Jayesh and Dasgupta, Anirban and Misra, Neeldhara and Ramanujan, M. S.},
  title =	{{Saving Critical Nodes with Firefighters is FPT}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{135:1--135:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.135},
  URN =		{urn:nbn:de:0030-drops-74968},
  doi =		{10.4230/LIPIcs.ICALP.2017.135},
  annote =	{Keywords: firefighting, cuts, FPT, kernelization}
}
Document
On the Transformation Capability of Feasible Mechanisms for Programmable Matter

Authors: Othon Michail, George Skretas, and Paul G. Spirakis


Abstract
In this work, we study theoretical models of programmable matter systems. The systems under consideration consist of spherical modules, kept together by magnetic forces and able to perform two minimal mechanical operations (or movements): rotate around a neighbor and slide over a line. In terms of modeling, there are n nodes arranged in a 2-dimensional grid and forming some initial shape. The goal is for the initial shape A to be transformed into some target shape B by a sequence of movements. Most of the paper focuses on transformability questions, i.e., whether it is in principle feasible to transform a given shape into another. We first consider the case in which only rotation is available to the nodes. Our main result is that deciding whether two given shapes A and B can be transformed into each other is in P. We then keep the restriction to rotation only and additionally require that the nodes maintain global connectivity throughout the transformation. We prove that the corresponding transformability question is in PSPACE and study the problem of determining the minimum seeds that can make otherwise infeasible transformations feasible. Next, we allow both rotations and slidings and prove universality: any two connected shapes A, B with the same number of nodes can be transformed into each other without breaking connectivity. The worst-case number of movements of the generic strategy is Theta(n^2). We improve this to O(n) parallel time by a pipelining strategy, and prove optimality of both by matching lower bounds. We next turn our attention to distributed transformations. The nodes are now distributed processes able to perform communicate-compute-move rounds. We provide distributed algorithms for a general type of transformation.

Cite as

Othon Michail, George Skretas, and Paul G. Spirakis. On the Transformation Capability of Feasible Mechanisms for Programmable Matter. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 136:1-136:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{michail_et_al:LIPIcs.ICALP.2017.136,
  author =	{Michail, Othon and Skretas, George and Spirakis, Paul G.},
  title =	{{On the Transformation Capability of Feasible Mechanisms for Programmable Matter}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{136:1--136:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.136},
  URN =		{urn:nbn:de:0030-drops-74341},
  doi =		{10.4230/LIPIcs.ICALP.2017.136},
  annote =	{Keywords: programmable matter, transformation, reconfigurable robotics, shape formation, complexity, distributed algorithms}
}
Document
Distributed Monitoring of Network Properties: The Power of Hybrid Networks

Authors: Robert Gmyr, Kristian Hinnenthal, Christian Scheideler, and Christian Sohler


Abstract
We initiate the study of network monitoring algorithms in a class of hybrid networks in which the nodes are connected by an external network and an internal network ("external" and "internal" being short for externally and internally controlled). While the external network lies outside of the control of the nodes (or in our case, the monitoring protocol running in them) and might be exposed to continuous changes, the internal network is fully under the control of the nodes. As an example, consider a group of users with mobile devices having access to the cell phone infrastructure. While the network formed by the WiFi connections of the devices is an external network (as its structure is not necessarily under the control of the monitoring protocol), the connections between the devices via the cell phone infrastructure represent an internal network (as they can be controlled by the monitoring protocol). Our goal is to continuously monitor properties of the external network with the help of the internal network. We present scalable distributed algorithms that efficiently monitor the number of edges, the average node degree, the clustering coefficient, the bipartiteness, and the weight of a minimum spanning tree. Their performance bounds demonstrate that monitoring the external network state with the help of an internal network can be done much more efficiently than just using the external network, as is usually done in the literature.

Cite as

Robert Gmyr, Kristian Hinnenthal, Christian Scheideler, and Christian Sohler. Distributed Monitoring of Network Properties: The Power of Hybrid Networks. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 137:1-137:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gmyr_et_al:LIPIcs.ICALP.2017.137,
  author =	{Gmyr, Robert and Hinnenthal, Kristian and Scheideler, Christian and Sohler, Christian},
  title =	{{Distributed Monitoring of Network Properties: The Power of Hybrid Networks}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{137:1--137:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.137},
  URN =		{urn:nbn:de:0030-drops-73750},
  doi =		{10.4230/LIPIcs.ICALP.2017.137},
  annote =	{Keywords: Network Monitoring, Hybrid Networks, Overlay Networks}
}
Document
Randomized Rumor Spreading Revisited

Authors: Benjamin Doerr and Anatolii Kostrygin


Abstract
We develop a simple and generic method to analyze randomized rumor spreading processes in fully connected networks. In contrast to all previous works, which heavily exploit the precise definition of the process under investigation, we only need to understand the probability and the covariance of the events that uninformed nodes become informed. This universality allows us to easily analyze the classic push, pull, and push-pull protocols both in their pure version and in several variations such as messages failing with constant probability or nodes calling a random number of others each round. Some dynamic models can be analyzed as well, e.g., when the network is a G(n,p) random graph sampled independently each round [Clementi et al. (ESA 2013)]. Despite this generality, our method determines the expected rumor spreading time precisely up to additive constants, which is more precise than almost all previous works. We also prove tail bounds showing that a deviation from the expectation by more than r rounds occurs with probability at most exp(-Omega(r)). We further use our method to discuss the common assumption that nodes can answer any number of incoming calls. We observe that the restriction that only one call can be answered leads to a significant increase of the runtime of the push-pull protocol. In particular, the double logarithmic end phase of the process now takes logarithmic time. This also increases the message complexity from the asymptotically optimal Theta(n*log(log(n))) [Karp, Shenker, Schindelhauer, Vöcking (FOCS 2000)] to Theta(n*log(n)). We propose a simple variation of the push-pull protocol that reverts back to the double logarithmic end phase and thus to the Theta(n*log(log(n))) message complexity.
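The push, pull, and push-pull protocols analysed above are easy to simulate on the complete graph; the Python sketch below does exactly that (synchronous rounds, one uniformly random partner per node per round) and reports the number of rounds until everyone is informed. It is a plain simulation, not the analysis technique of the paper.

```python
import math, random

# Simulate the classic rumor spreading protocols on the complete graph.

def spread(n, protocol, seed=0):
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True
    rounds = 0
    while not all(informed):
        newly = []
        for u in range(n):
            v = rng.randrange(n - 1)
            v += v >= u                                  # uniform partner != u
            if protocol in ("push", "push-pull") and informed[u]:
                newly.append(v)                          # caller pushes the rumor
            if protocol in ("pull", "push-pull") and informed[v]:
                newly.append(u)                          # caller pulls the rumor
        for w in newly:
            informed[w] = True
        rounds += 1
    return rounds

n = 2000
for proto in ("push", "pull", "push-pull"):
    print(proto, spread(n, proto), "rounds; log2(n) =", round(math.log2(n), 1))
```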

Cite as

Benjamin Doerr and Anatolii Kostrygin. Randomized Rumor Spreading Revisited. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 138:1-138:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{doerr_et_al:LIPIcs.ICALP.2017.138,
  author =	{Doerr, Benjamin and Kostrygin, Anatolii},
  title =	{{Randomized Rumor Spreading Revisited}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{138:1--138:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.138},
  URN =		{urn:nbn:de:0030-drops-73798},
  doi =		{10.4230/LIPIcs.ICALP.2017.138},
  annote =	{Keywords: Epidemic algorithm, rumor spreading, dynamic graph}
}
Document
Randomized Load Balancing on Networks with Stochastic Inputs

Authors: Leran Cai and Thomas Sauerwald


Abstract
Iterative load balancing algorithms for indivisible tokens have been studied intensively in the past. Complementing previous worst-case analyses, we study an average-case scenario where the load inputs are drawn from a fixed probability distribution. For cycles, tori, hypercubes and expanders, we obtain almost matching upper and lower bounds on the discrepancy, the difference between the maximum and the minimum load. Our bounds hold for a variety of probability distributions including the uniform and binomial distribution but also distributions with unbounded range such as the Poisson and geometric distribution. For graphs with slow convergence like cycles and tori, our results demonstrate a substantial difference between the convergence in the worst- and average-case. An important ingredient in our analysis is a new upper bound on the t-step transition probability of a general Markov chain, which is derived by invoking the evolving set process.
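As a toy illustration of the average-case setting, the Python sketch below draws Binomial initial loads on a cycle and runs a deliberately simple indivisible-token balancing rule (move one token across an edge whenever the load difference is at least 2), tracking the discrepancy over the rounds. The rule is chosen for brevity and is not the exact protocol analysed in the paper.

```python
import random

# Toy indivisible-token balancing on a cycle with random (Binomial) initial loads.

def balance_on_cycle(loads, rounds):
    n = len(loads)
    discrepancy = [max(loads) - min(loads)]
    for _ in range(rounds):
        for u in range(n):
            v = (u + 1) % n
            if loads[u] - loads[v] >= 2:
                loads[u] -= 1; loads[v] += 1
            elif loads[v] - loads[u] >= 2:
                loads[v] -= 1; loads[u] += 1
        discrepancy.append(max(loads) - min(loads))
    return discrepancy

random.seed(42)
loads = [sum(random.random() < 0.5 for _ in range(20)) for _ in range(64)]  # Binomial(20, 1/2)
disc = balance_on_cycle(loads, rounds=200)
print("discrepancy: initial", disc[0], "-> after 200 rounds", disc[-1])
```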

Cite as

Leran Cai and Thomas Sauerwald. Randomized Load Balancing on Networks with Stochastic Inputs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 139:1-139:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.ICALP.2017.139,
  author =	{Cai, Leran and Sauerwald, Thomas},
  title =	{{Randomized Load Balancing on Networks with Stochastic Inputs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{139:1--139:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.139},
  URN =		{urn:nbn:de:0030-drops-74839},
  doi =		{10.4230/LIPIcs.ICALP.2017.139},
  annote =	{Keywords: random walks, randomized algorithms, parallel computing}
}
Document
Opinion Dynamics in Networks: Convergence, Stability and Lack of Explosion

Authors: Tung Mai, Ioannis Panageas, and Vijay V. Vazirani


Abstract
Inspired by the work of Kempe et al. [Kempe, Kleinberg, Oren, Slivkins, EC 2013], we introduce and analyze a model of opinion formation; the update rule of our dynamics is a simplified version of the one in [Kempe, Kleinberg, Oren, Slivkins, EC 2013]. We assume that the population is partitioned into types whose interaction pattern is specified by a graph. Interaction leads to population mass moving from types of smaller mass to those of bigger mass. We show that starting uniformly at random over all population vectors on the simplex, our dynamics converges pointwise with probability one to an independent set. This settles an open problem of [Kempe, Kleinberg, Oren, Slivkins, EC 2013], as applicable to our dynamics. We believe that our techniques can be used to settle the open problem for the Kempe et al. dynamics as well. Next, we extend the model of Kempe et al. by introducing the notion of birth and death of types, with the interaction graph evolving appropriately. Birth of types is determined by a Bernoulli process and types die when their population mass is less than epsilon (a parameter). We show that if the births are infrequent, then there are long periods of "stability" in which there is no population mass that moves. Finally we show that even if births are frequent and "stability" is not attained, the total number of types does not explode: it remains logarithmic in 1/epsilon.

Cite as

Tung Mai, Ioannis Panageas, and Vijay V. Vazirani. Opinion Dynamics in Networks: Convergence, Stability and Lack of Explosion. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 140:1-140:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{mai_et_al:LIPIcs.ICALP.2017.140,
  author =	{Mai, Tung and Panageas, Ioannis and Vazirani, Vijay V.},
  title =	{{Opinion Dynamics in Networks: Convergence, Stability and Lack of Explosion}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{140:1--140:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.140},
  URN =		{urn:nbn:de:0030-drops-74440},
  doi =		{10.4230/LIPIcs.ICALP.2017.140},
  annote =	{Keywords: Opinion Dynamics, Convergence, Jacobian, Center-stable Manifold}
}
Document
Hardness of Computing and Approximating Predicates and Functions with Leaderless Population Protocols

Authors: Amanda Belleville, David Doty, and David Soloveichik


Abstract
Population protocols are a distributed computing model appropriate for describing massive numbers of agents with very limited computational power (finite automata in this paper), such as sensor networks or programmable chemical reaction networks in synthetic biology. A population protocol is said to require a leader if every valid initial configuration contains a single agent in a special "leader" state that helps to coordinate the computation. Although the class of predicates and functions computable with probability 1 (stable computation) is the same whether a leader is required or not (semilinear functions and predicates), it is not known whether a leader is necessary for fast computation. Due to the large number of agents n (synthetic molecular systems routinely have trillions of molecules), efficient population protocols are generally defined as those computing in (parallel) time polylogarithmic in n. We consider population protocols that start in leaderless initial configurations, and the computation is regarded as finished when the population protocol reaches a configuration from which a different output is no longer reachable. In this setting we show that a wide class of functions and predicates computable by population protocols are not efficiently computable (they require at least linear time), nor are some linear functions even efficiently approximable. It requires at least linear time for a population protocol even to approximate division by a constant or subtraction (or any linear function with a coefficient outside of N), in the sense that for sufficiently small gamma > 0, the output of a sublinear time protocol can stabilize outside the interval f(m) (1 +/- gamma) on infinitely many inputs m. In a complementary positive result, we show that with a sufficiently large value of gamma, a population protocol can approximate any linear f with nonnegative rational coefficients, within approximation factor gamma, in O(log n) time. We also show that it requires linear time to exactly compute a wide range of semilinear functions (e.g., f(m)=m if m is even and 2m if m is odd) and predicates (e.g., parity, equality).
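A minimal simulator makes the model and the notion of parallel time concrete. The Python sketch below runs the simplest leaderless protocol, the one-way epidemic that stably computes the predicate "some agent starts in state 1", and reports interactions divided by n as parallel time; the functions and predicates covered by the paper's lower bounds are of course far richer.

```python
import random

# Simulate the one-way epidemic, a two-state leaderless population protocol.

def epidemic_parallel_time(n, num_ones, seed=0):
    rng = random.Random(seed)
    state = [1] * num_ones + [0] * (n - num_ones)
    rng.shuffle(state)
    interactions = 0
    while num_ones > 0 and not all(state):
        i, j = rng.sample(range(n), 2)      # the scheduler picks a random pair
        if state[i] or state[j]:            # transition rule: (1, x) -> (1, 1)
            state[i] = state[j] = 1
        interactions += 1
    return interactions / n                 # parallel time

n = 1000
print("parallel time to spread a single 1:", round(epidemic_parallel_time(n, 1), 2))
```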

Cite as

Amanda Belleville, David Doty, and David Soloveichik. Hardness of Computing and Approximating Predicates and Functions with Leaderless Population Protocols. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 141:1-141:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{belleville_et_al:LIPIcs.ICALP.2017.141,
  author =	{Belleville, Amanda and Doty, David and Soloveichik, David},
  title =	{{Hardness of Computing and Approximating Predicates and Functions with Leaderless Population Protocols}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{141:1--141:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.141},
  URN =		{urn:nbn:de:0030-drops-75044},
  doi =		{10.4230/LIPIcs.ICALP.2017.141},
  annote =	{Keywords: population protocol, time lower bound, stable computation}
}
