Document
Complete Volume
Authors:
Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, and Stefano Leonardi
Abstract
LIPIcs, Volume 132, ICALP'19, Complete Volume
Cite as
46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@Proceedings{baier_et_al:LIPIcs.ICALP.2019,
title = {{LIPIcs, Volume 132, ICALP'19, Complete Volume}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019},
URN = {urn:nbn:de:0030-drops-108644},
doi = {10.4230/LIPIcs.ICALP.2019},
annote = {Keywords: Theory of computation}
}
Document
Front Matter
Authors:
Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, and Stefano Leonardi
Abstract
Front Matter, Table of Contents, Preface, Conference Organization
Cite as
46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 0:i-0:xxxviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{baier_et_al:LIPIcs.ICALP.2019.0,
author = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
title = {{Front Matter, Table of Contents, Preface, Conference Organization}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {0:i--0:xxxviii},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.0},
URN = {urn:nbn:de:0030-drops-105765},
doi = {10.4230/LIPIcs.ICALP.2019.0},
annote = {Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Authors:
Michal Feldman
Abstract
We study combinatorial auctions with interdependent valuations. In such settings, every agent has a private signal, and every agent has a valuation function that depends on the private signals of all the agents. Interdependent valuations capture settings where agents lack information to determine their own valuations. Examples include auctions for artwork or oil drilling rights. For single item auctions, under some restrictive conditions (the so-called single-crossing condition), full welfare can be achieved. However, in general, there are strong impossibility results on welfare maximization in the interdependent setting. This is in contrast to settings where agents are aware of their own valuations, where the optimal welfare can always be obtained by an incentive compatible mechanism.
Motivated by these impossibility results, we study welfare maximization for interdependent valuations through the lens of approximation. We introduce two valuation properties that enable positive results. The first is a relaxed, parameterized version of single crossing; the second is a submodularity condition over the signals. We obtain a host of approximation guarantees under these two notions for various scenarios.
Related publications: [Alon Eden et al., 2018; Alon Eden et al., 2019]
Cite as
Michal Feldman. Auction Design under Interdependent Values (Invited Talk). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{feldman:LIPIcs.ICALP.2019.1,
author = {Feldman, Michal},
title = {{Auction Design under Interdependent Values}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {1:1--1:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.1},
URN = {urn:nbn:de:0030-drops-105778},
doi = {10.4230/LIPIcs.ICALP.2019.1},
annote = {Keywords: Combinatorial auctions, Interdependent values, Welfare approximation}
}
Document
Invited Talk
Authors:
Martin Grohe
Abstract
Deciding if two graphs are isomorphic, or equivalently, computing the symmetries of a graph, is a fundamental algorithmic problem. It has many interesting applications, and it is one of the few natural problems in the class NP whose complexity status is still unresolved. Three years ago, Babai (STOC 2016) gave a quasi-polynomial time isomorphism algorithm. Despite this breakthrough, the question of whether a polynomial-time algorithm exists remains wide open.
Related to the isomorphism problem is the problem of determining the similarity between graphs. Variations of this problem are known as robust graph isomorphism or graph matching (the latter in the machine learning and computer vision literature). This problem is significantly harder than the isomorphism problem, both from a complexity-theoretic and from a practical point of view, but for many applications it is the more relevant problem.
My talk will be a survey of recent progress on the isomorphism and on the similarity problem. I will focus on generic algorithmic strategies (as opposed to algorithms tailored towards specific graph classes) that have proved to be useful and interesting in various contexts, both theoretical and practical.
Cite as
Martin Grohe. Symmetry and Similarity (Invited Talk). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{grohe:LIPIcs.ICALP.2019.2,
author = {Grohe, Martin},
title = {{Symmetry and Similarity}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {2:1--2:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.2},
URN = {urn:nbn:de:0030-drops-105787},
doi = {10.4230/LIPIcs.ICALP.2019.2},
annote = {Keywords: Graph Isomorphism, Graph Similarity, Graph Matching}
}
Document
Invited Talk
Authors:
Ola Svensson
Abstract
The matching problem is one of our favorite benchmark problems. Work on it has contributed to the development of many core concepts of computer science, including the equation of efficiency with polynomial time computation in the groundbreaking work by Edmonds in 1965.
However, half a century later, we still do not have full understanding of the complexity of the matching problem in several models of computation such as parallel, online, and streaming algorithms. In this talk we survey some of the major challenges and report some recent progress.
Cite as
Ola Svensson. Approximately Good and Modern Matchings (Invited Talk). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{svensson:LIPIcs.ICALP.2019.3,
author = {Svensson, Ola},
title = {{Approximately Good and Modern Matchings}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {3:1--3:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.3},
URN = {urn:nbn:de:0030-drops-105797},
doi = {10.4230/LIPIcs.ICALP.2019.3},
annote = {Keywords: Algorithms, Matchings, Computational Complexity}
}
Document
Invited Talk
Authors:
Frits Vaandrager
Abstract
Automata learning is emerging as an effective technique for obtaining state machine models of software and hardware systems. I will present an overview of recent work in which we used active automata learning to find standard violations and security vulnerabilities in implementations of network protocols such as TCP and SSH. Also, I will discuss applications of automata learning to support refactoring of legacy control software and identifying job patterns in manufacturing systems. As a guiding theme in my presentation, I will show how Galois connections (adjunctions) help us to scale the application of learning algorithms to practical problems.
Cite as
Frits Vaandrager. Automata Learning and Galois Connections (Invited Talk). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{vaandrager:LIPIcs.ICALP.2019.4,
author = {Vaandrager, Frits},
title = {{Automata Learning and Galois Connections}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {4:1--4:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.4},
URN = {urn:nbn:de:0030-drops-105800},
doi = {10.4230/LIPIcs.ICALP.2019.4},
annote = {Keywords: Automaton Learning, Model Learning, Protocol Verification, Applications of Automata Learning, Galois Connections}
}
Document
Invited Talk
Authors:
Mihalis Yannakakis
Abstract
Many problems from a wide variety of areas can be formulated mathematically as the problem of computing a fixed point of a suitable given multivariate function. Examples include a variety of problems from game theory, economics, optimization, stochastic analysis, verification, and others. In some problems there is a unique fixed point (for example if the function is a contraction); in others there may be multiple fixed points and any one of them is an acceptable solution; while in other cases the desired object is a specific fixed point (for example the least fixed point or greatest fixed point of a monotone function). In this talk we will discuss several types of fixed point computation problems, their complexity, and some of the common themes that have emerged: classes of problems for which there are efficient algorithms, and other classes for which there seem to be serious obstacles.
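As a small worked illustration of the simplest case mentioned above, a contraction map has a unique fixed point and plain iteration converges to it from any starting point (Banach's fixed-point theorem). The sketch below uses a hypothetical two-dimensional contraction chosen only for illustration; it is not an example from the talk.

# Minimal sketch: fixed-point iteration for a contraction map on R^2.
# The map below (a made-up example) has Lipschitz constant 1/2, so by
# Banach's theorem iteration converges to its unique fixed point.

def f(x, y):
    return (0.5 * y + 1.0, 0.5 * x)

def fixed_point(f, x0, y0, tol=1e-12, max_iter=1000):
    x, y = x0, y0
    for _ in range(max_iter):
        nx, ny = f(x, y)
        if abs(nx - x) + abs(ny - y) < tol:
            return nx, ny
        x, y = nx, ny
    return x, y

print(fixed_point(f, 0.0, 0.0))   # converges to (4/3, 2/3)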
Cite as
Mihalis Yannakakis. Fixed Point Computation Problems and Facets of Complexity (Invited Talk). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{yannakakis:LIPIcs.ICALP.2019.5,
author = {Yannakakis, Mihalis},
title = {{Fixed Point Computation Problems and Facets of Complexity}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {5:1--5:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.5},
URN = {urn:nbn:de:0030-drops-105812},
doi = {10.4230/LIPIcs.ICALP.2019.5},
annote = {Keywords: Fixed Point, Polynomial Time Algorithm, Computational Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Scott Aaronson, Alexandru Cojocaru, Alexandru Gheorghiu, and Elham Kashefi
Abstract
Blind delegation protocols allow a client to delegate a computation to a server so that the server learns nothing about the input to the computation apart from its size. For the specific case of quantum computation we know, from work over the past decade, that blind delegation protocols can achieve information-theoretic security (provided the client and the server exchange some amount of quantum information). In this paper we prove, provided certain complexity-theoretic conjectures are true, that the power of information-theoretically secure blind delegation protocols for quantum computation (ITS-BQC protocols) is constrained in a number of ways.
In the first part of our paper we provide some indication that ITS-BQC protocols for delegating polynomial-time quantum computations in which the client and the server interact only classically are unlikely to exist. We first show that having such a protocol in which the client and the server exchange O(n^d) bits of communication implies that BQP subset MA/O(n^d). We conjecture that this containment is unlikely by proving that there exists an oracle relative to which BQP not subset MA/O(n^d). We then show that if an ITS-BQC protocol exists in which the client and the server interact only classically and which allows the client to delegate quantum sampling problems to the server (such as BosonSampling) then there exist non-uniform circuits of size 2^{n - Omega(n/log(n))}, making polynomially-sized queries to an NP^{NP} oracle, for computing the permanent of an n x n matrix.
The second part of our paper concerns ITS-BQC protocols in which the client and the server engage in one round of quantum communication and then exchange polynomially many classical messages. First, we provide a complexity-theoretic upper bound on the types of functions that could be delegated in such a protocol by showing that they must be contained in QCMA/qpoly cap coQCMA/qpoly. Then, we show that having such a protocol for delegating NP-hard functions implies coNP^{NP^{NP}} subseteq NP^{NP^{PromiseQMA}}.
Cite as
Scott Aaronson, Alexandru Cojocaru, Alexandru Gheorghiu, and Elham Kashefi. Complexity-Theoretic Limitations on Blind Delegated Quantum Computation. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{aaronson_et_al:LIPIcs.ICALP.2019.6,
author = {Aaronson, Scott and Cojocaru, Alexandru and Gheorghiu, Alexandru and Kashefi, Elham},
title = {{Complexity-Theoretic Limitations on Blind Delegated Quantum Computation}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {6:1--6:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.6},
URN = {urn:nbn:de:0030-drops-105826},
doi = {10.4230/LIPIcs.ICALP.2019.6},
annote = {Keywords: Quantum cryptography, Complexity theory, Delegated quantum computation, Computing on encrypted data}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Amir Abboud, Loukas Georgiadis, Giuseppe F. Italiano, Robert Krauthgamer, Nikos Parotsidis, Ohad Trabelsi, Przemysław Uznański, and Daniel Wolleb-Graf
Abstract
The All-Pairs Min-Cut problem (aka All-Pairs Max-Flow) asks to compute a minimum s-t cut (or just its value) for all pairs of vertices s,t. We study this problem in directed graphs with unit edge/vertex capacities (corresponding to edge/vertex connectivity). Our focus is on the k-bounded case, where the algorithm has to find all pairs with min-cut value less than k, and report only those. The most basic case k=1 is the Transitive Closure (TC) problem, which can be solved in graphs with n vertices and m edges in time O(mn) combinatorially, and in time O(n^{omega}) where omega<2.38 is the matrix-multiplication exponent. These time bounds are conjectured to be optimal.
We present new algorithms and conditional lower bounds that advance the frontier for larger k, as follows:
- A randomized algorithm for vertex capacities that runs in time O((nk)^{omega}). This is only a factor k^omega away from the TC bound, and nearly matches it for all k=n^{o(1)}.
- Two deterministic algorithms for edge capacities (the more general setting) that work in DAGs and further report a minimum cut for each pair. The first algorithm is combinatorial (does not involve matrix multiplication) and runs in time O(2^{O(k^2)} * mn). The second algorithm can be faster on dense DAGs and runs in time O((k log n)^{4^{k+o(k)}} * n^{omega}). Previously, Georgiadis et al. [ICALP 2017] could match the TC bound (up to n^{o(1)} factors) only when k=2, and now our two algorithms match it for all k=o(sqrt{log n}) and k=o(log log n).
- The first super-cubic lower bound of n^{omega-1-o(1)} k^2 time under the 4-Clique conjecture, which holds even in the simplest case of DAGs with unit vertex capacities. It improves on the previous (SETH-based) lower bounds even in the unbounded setting k=n. For combinatorial algorithms, our reduction implies an n^{2-o(1)} k^2 conditional lower bound. Thus, we identify new settings where the complexity of the problem is (conditionally) higher than that of TC.
Our three sets of results are obtained via different techniques. The first one adapts the network coding method of Cheung, Lau, and Leung [SICOMP 2013] to vertex-capacitated digraphs. The second set exploits new insights on the structure of latest cuts together with suitable algebraic tools. The lower bounds arise from a novel reduction of a different structure than the SETH-based constructions.
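For concreteness, the k=1 base case mentioned above, Transitive Closure, asks for all reachable pairs, and the combinatorial O(mn) bound corresponds to essentially one BFS per source, as in the minimal sketch below. This is only the baseline the abstract compares against, not any of the new algorithms.

from collections import deque

def transitive_closure(n, edges):
    # Adjacency lists of a directed graph on vertices 0..n-1.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    reach = [set() for _ in range(n)]
    for s in range(n):            # one BFS per source: O(m+n) each, O(mn) overall
        seen = [False] * n
        seen[s] = True
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        reach[s] = {v for v in range(n) if seen[v] and v != s}
    return reach

# reach[s] is the set of vertices t with min-cut value >= 1, i.e. t reachable from s.
print(transitive_closure(4, [(0, 1), (1, 2), (3, 0)]))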
Cite as
Amir Abboud, Loukas Georgiadis, Giuseppe F. Italiano, Robert Krauthgamer, Nikos Parotsidis, Ohad Trabelsi, Przemysław Uznański, and Daniel Wolleb-Graf. Faster Algorithms for All-Pairs Bounded Min-Cuts. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 7:1-7:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{abboud_et_al:LIPIcs.ICALP.2019.7,
author = {Abboud, Amir and Georgiadis, Loukas and Italiano, Giuseppe F. and Krauthgamer, Robert and Parotsidis, Nikos and Trabelsi, Ohad and Uzna\'{n}ski, Przemys{\l}aw and Wolleb-Graf, Daniel},
title = {{Faster Algorithms for All-Pairs Bounded Min-Cuts}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {7:1--7:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.7},
URN = {urn:nbn:de:0030-drops-105833},
doi = {10.4230/LIPIcs.ICALP.2019.7},
annote = {Keywords: All-pairs min-cut, k-reachability, network coding, Directed graphs, fine-grained complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Amir Abboud
Abstract
This paper points at a connection between certain (classical) fine-grained reductions and the question: Do quantum algorithms offer an advantage for problems whose (classical) best solution is via dynamic programming?
A remarkable recent result of Ambainis et al. [SODA 2019] indicates that the answer is positive for some fundamental problems such as Set-Cover and Travelling Salesman. They design a quantum O^*(1.728^n) time algorithm whereas the dynamic programming O^*(2^n) time algorithms are conjectured to be classically optimal. In this paper, fine-grained reductions are extracted from their algorithms giving the first lower bounds for problems in P that are based on the intriguing Set-Cover Conjecture (SeCoCo) of Cygan et al. [CCC 2010].
In particular, the SeCoCo implies:
- a super-linear Omega(n^{1.08}) lower bound for 3-SUM on n integers,
- an Omega(n^{k/(c_k)-epsilon}) lower bound for k-SUM on n integers and k-Clique on n-node graphs, for any integer k >= 3, where c_k <= log_2{k}+1.4427.
While far from being tight, these lower bounds are significantly stronger than what is known to follow from the Strong Exponential Time Hypothesis (SETH); the well-known n^{Omega(k)} ETH-based lower bounds for k-Clique and k-SUM are vacuous when k is constant.
Going in the opposite direction, this paper observes that some "sequential" problems with previously known fine-grained reductions to a "parallelizable" core also enjoy quantum speedups over their classical dynamic programming solutions. Examples include RNA Folding and Least-Weight Subsequence.
Cite as
Amir Abboud. Fine-Grained Reductions and Quantum Speedups for Dynamic Programming. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{abboud:LIPIcs.ICALP.2019.8,
author = {Abboud, Amir},
title = {{Fine-Grained Reductions and Quantum Speedups for Dynamic Programming}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {8:1--8:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.8},
URN = {urn:nbn:de:0030-drops-105846},
doi = {10.4230/LIPIcs.ICALP.2019.8},
annote = {Keywords: Fine-Grained Complexity, Set-Cover, 3-SUM, k-Clique, k-SUM, Dynamic Programming, Quantum Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mikkel Abrahamsen, Panos Giannopoulos, Maarten Löffler, and Günter Rote
Abstract
We study the following separation problem: Given a collection of colored objects in the plane, compute a shortest "fence" F, i.e., a union of curves of minimum total length, that separates every two objects of different colors. Two objects are separated if F contains a simple closed curve that has one object in the interior and the other in the exterior. We refer to the problem as GEOMETRIC k-CUT, where k is the number of different colors, as it can be seen as a geometric analogue to the well-studied multicut problem on graphs. We first give an O(n^4 log^3 n)-time algorithm that computes an optimal fence for the case where the input consists of polygons of two colors and n corners in total. We then show that the problem is NP-hard for the case of three colors. Finally, we give a (2 - 4/(3k))-approximation algorithm.
Cite as
Mikkel Abrahamsen, Panos Giannopoulos, Maarten Löffler, and Günter Rote. Geometric Multicut. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{abrahamsen_et_al:LIPIcs.ICALP.2019.9,
author = {Abrahamsen, Mikkel and Giannopoulos, Panos and L\"{o}ffler, Maarten and Rote, G\"{u}nter},
title = {{Geometric Multicut}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {9:1--9:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.9},
URN = {urn:nbn:de:0030-drops-105850},
doi = {10.4230/LIPIcs.ICALP.2019.9},
annote = {Keywords: multicut, clustering, Steiner tree}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Peyman Afshani, Casper Benjamin Freksen, Lior Kamma, and Kasper Green Larsen
Abstract
Multiplication is one of the most fundamental computational problems, yet its true complexity remains elusive. The best known upper bound, very recently proved by Harvey and van der Hoeven (2019), shows that two n-bit numbers can be multiplied via a boolean circuit of size O(n lg n). In this work, we prove that if a central conjecture in the area of network coding is true, then any constant degree boolean circuit for multiplication must have size Omega(n lg n), thus almost completely settling the complexity of multiplication circuits. We additionally revisit classic conjectures in circuit complexity, due to Valiant, and show that the network coding conjecture also implies one of Valiant’s conjectures.
Cite as
Peyman Afshani, Casper Benjamin Freksen, Lior Kamma, and Kasper Green Larsen. Lower Bounds for Multiplication via Network Coding. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 10:1-10:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{afshani_et_al:LIPIcs.ICALP.2019.10,
author = {Afshani, Peyman and Freksen, Casper Benjamin and Kamma, Lior and Larsen, Kasper Green},
title = {{Lower Bounds for Multiplication via Network Coding}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {10:1--10:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.10},
URN = {urn:nbn:de:0030-drops-105861},
doi = {10.4230/LIPIcs.ICALP.2019.10},
annote = {Keywords: Circuit Complexity, Circuit Lower Bounds, Multiplication, Network Coding, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Akanksha Agrawal, Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Prafullkumar Tale
Abstract
A graph G is contractible to a graph H if there is a set X subseteq E(G), such that G/X is isomorphic to H. Here, G/X is the graph obtained from G by contracting all the edges in X. For a family of graphs F, the F-Contraction problem takes as input a graph G on n vertices, and the objective is to output the largest integer t, such that G is contractible to a graph H in F, where |V(H)|=t. When F is the family of paths, then the corresponding F-Contraction problem is called Path Contraction. The problem Path Contraction admits a simple algorithm running in time 2^n * n^{O(1)}. In spite of the deceptive simplicity of the problem, beating the 2^n * n^{O(1)} bound for Path Contraction seems quite challenging. In this paper, we design an exact exponential time algorithm for Path Contraction that runs in time 1.99987^n * n^{O(1)}. We also define a problem called 3-Disjoint Connected Subgraphs, and design an algorithm for it that runs in time 1.88^n * n^{O(1)}. The above algorithm is used as a sub-routine in our algorithm for Path Contraction.
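To make the contraction terminology concrete, the following sketch builds G/X for a given edge set X (by merging the connected components spanned by X, dropping self-loops and merging parallel edges) and checks whether the result is a path. It is just the definitional check, not the 1.99987^n * n^{O(1)} algorithm of the paper.

from collections import deque

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def contract(n, edges, X):
    # Union-find over the components spanned by X; each component is one vertex of G/X.
    parent = list(range(n))
    for u, v in X:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
    verts = sorted({find(parent, v) for v in range(n)})
    quotient_edges = set()
    for u, v in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                       # drop self-loops, merge parallel edges
            quotient_edges.add((min(ru, rv), max(ru, rv)))
    return verts, quotient_edges

def is_path(verts, edges):
    # A simple graph is a path iff it is connected, has |V|-1 edges and maximum degree <= 2.
    if len(edges) != len(verts) - 1:
        return False
    adj = {v: [] for v in verts}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    if any(len(nbrs) > 2 for nbrs in adj.values()):
        return False
    seen, queue = {verts[0]}, deque([verts[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(verts)

# Example: contracting edge (1,2) of the 4-cycle 0-1-2-3-0 yields a triangle, not a path.
verts, qedges = contract(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [(1, 2)])
print(is_path(verts, qedges))   # False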
Cite as
Akanksha Agrawal, Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Prafullkumar Tale. Path Contraction Faster Than 2^n. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 11:1-11:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{agrawal_et_al:LIPIcs.ICALP.2019.11,
author = {Agrawal, Akanksha and Fomin, Fedor V. and Lokshtanov, Daniel and Saurabh, Saket and Tale, Prafullkumar},
title = {{Path Contraction Faster Than 2^n}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {11:1--11:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.11},
URN = {urn:nbn:de:0030-drops-105874},
doi = {10.4230/LIPIcs.ICALP.2019.11},
annote = {Keywords: path contraction, exact exponential time algorithms, graph algorithms, enumerating connected sets, 3-disjoint connected subgraphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Noga Alon, Shiri Chechik, and Sarel Cohen
Abstract
In this work we derandomize two central results in graph algorithms, replacement paths and distance sensitivity oracles (DSOs) matching in both cases the running time of the randomized algorithms.
For the replacement paths problem, let G = (V,E) be a directed unweighted graph with n vertices and m edges and let P be a shortest path from s to t in G. The replacement paths problem is to find for every edge e in P the shortest path from s to t avoiding e. Roditty and Zwick [ICALP 2005] obtained a randomized algorithm with running time of O~(m sqrt{n}). Here we provide the first deterministic algorithm for this problem, with the same O~(m sqrt{n}) time. Due to matching conditional lower bounds of Williams et al. [FOCS 2010], our deterministic combinatorial algorithm for the replacement paths problem is optimal up to polylogarithmic factors (unless the long standing bound of O~(mn) for the combinatorial boolean matrix multiplication can be improved). This also implies a deterministic algorithm for the second simple shortest path problem in O~(m sqrt{n}) time, and a deterministic algorithm for the k-simple shortest paths problem in O~(k m sqrt{n}) time (for any integer constant k > 0).
For the problem of distance sensitivity oracles, let G = (V,E) be a directed graph with real-edge weights. An f-Sensitivity Distance Oracle (f-DSO) gets as input the graph G=(V,E) and a parameter f, preprocesses it into a data-structure, such that given a query (s,t,F) with s,t in V and F subseteq E cup V, |F| <=f being a set of at most f edges or vertices (failures), the query algorithm efficiently computes the distance from s to t in the graph G \ F (i.e., the distance from s to t in the graph G after removing from it the failing edges and vertices F).
For weighted graphs with real edge weights, Weimann and Yuster [FOCS 2010] presented several randomized f-DSOs. In particular, they presented a combinatorial f-DSO with O~(mn^{4-alpha}) preprocessing time and subquadratic O~(n^{2-2(1-alpha)/f}) query time, giving a tradeoff between preprocessing and query time for every value of 0 < alpha < 1. We derandomize this result and present a combinatorial deterministic f-DSO with the same asymptotic preprocessing and query time.
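As a point of reference for the replacement paths definition above, the trivial deterministic baseline recomputes a shortest path once per edge of P after deleting that edge; a minimal sketch for unweighted directed graphs follows. It is purely illustrative and in general much slower than the O~(m sqrt{n}) algorithms discussed in the abstract.

from collections import deque

def bfs_path(n, adj, s, t, banned=None):
    # Shortest s->t path by BFS, optionally avoiding one banned directed edge; None if no path.
    prev = [-1] * n
    seen = [False] * n
    seen[s] = True
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v in adj[u]:
            if banned is not None and (u, v) == banned:
                continue
            if not seen[v]:
                seen[v] = True
                prev[v] = u
                queue.append(v)
    if not seen[t]:
        return None
    path, v = [], t
    while v != -1:
        path.append(v)
        v = prev[v]
    return path[::-1]

def replacement_paths(n, edges, s, t):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    P = bfs_path(n, adj, s, t)
    if P is None:
        return {}
    # For every edge e on P, recompute a shortest s->t path avoiding e (value None if none exists).
    return {(P[i], P[i + 1]): bfs_path(n, adj, s, t, banned=(P[i], P[i + 1]))
            for i in range(len(P) - 1)}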
Cite as
Noga Alon, Shiri Chechik, and Sarel Cohen. Deterministic Combinatorial Replacement Paths and Distance Sensitivity Oracles. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{alon_et_al:LIPIcs.ICALP.2019.12,
author = {Alon, Noga and Chechik, Shiri and Cohen, Sarel},
title = {{Deterministic Combinatorial Replacement Paths and Distance Sensitivity Oracles}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {12:1--12:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.12},
URN = {urn:nbn:de:0030-drops-105882},
doi = {10.4230/LIPIcs.ICALP.2019.12},
annote = {Keywords: replacement paths, distance sensitivity oracles, derandomization}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Bertie Ancona, Monika Henzinger, Liam Roditty, Virginia Vassilevska Williams, and Nicole Wein
Abstract
The diameter, radius and eccentricities are natural graph parameters. While these problems have been studied extensively, there are no known dynamic algorithms for them beyond the ones that follow from trivial recomputation after each update or from solving dynamic All-Pairs Shortest Paths (APSP), which is very computationally intensive. This is the situation for dynamic approximation algorithms as well, and even if only edge insertions or edge deletions need to be supported.
This paper provides a comprehensive study of the dynamic approximation of Diameter, Radius and Eccentricities, providing both conditional lower bounds, and new algorithms whose bounds are optimal under popular hypotheses in fine-grained complexity. Some of the highlights include:
- Under popular hardness hypotheses, there can be no significantly better fully dynamic approximation algorithms than recomputing the answer after each update, or maintaining full APSP.
- Nearly optimal partially dynamic (incremental/decremental) algorithms can be achieved via efficient reductions to (incremental/decremental) maintenance of Single-Source Shortest Paths. For instance, a nearly (3/2+epsilon)-approximation to Diameter in directed or undirected n-vertex, m-edge graphs can be maintained decrementally in total time m^{1+o(1)}sqrt{n}/epsilon^2. This nearly matches the static 3/2-approximation algorithm for the problem that is known to be conditionally optimal.
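For reference, ecc(v) is the largest shortest-path distance out of v, the diameter is the maximum eccentricity and the radius the minimum. The trivial recompute-from-scratch baseline for unweighted graphs, one BFS per vertex, is sketched below; this is the static computation the abstract compares against, not a dynamic algorithm.

from collections import deque

def eccentricities(n, adj):
    # adj: adjacency lists of a connected, unweighted, undirected graph on 0..n-1.
    ecc = [0] * n
    for s in range(n):                      # one BFS per vertex
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ecc[s] = max(dist)
    return ecc

adj = [[1], [0, 2], [1, 3], [2]]            # a path on 4 vertices
ecc = eccentricities(4, adj)
print(max(ecc), min(ecc))                   # diameter 3, radius 2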
Cite as
Bertie Ancona, Monika Henzinger, Liam Roditty, Virginia Vassilevska Williams, and Nicole Wein. Algorithms and Hardness for Diameter in Dynamic Graphs. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{ancona_et_al:LIPIcs.ICALP.2019.13,
author = {Ancona, Bertie and Henzinger, Monika and Roditty, Liam and Williams, Virginia Vassilevska and Wein, Nicole},
title = {{Algorithms and Hardness for Diameter in Dynamic Graphs}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {13:1--13:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.13},
URN = {urn:nbn:de:0030-drops-105891},
doi = {10.4230/LIPIcs.ICALP.2019.13},
annote = {Keywords: fine-grained complexity, graph algorithms, dynamic algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alexandr Andoni, Clifford Stein, and Peilin Zhong
Abstract
Many modern parallel systems, such as MapReduce, Hadoop and Spark, can be modeled well by the MPC model. The MPC model captures coarse-grained computation on large data well: data is distributed to processors, each of which has a sublinear (in the input data) amount of memory, and we alternate between rounds of computation and rounds of communication, where each machine can communicate an amount of data as large as the size of its memory. This model is stronger than the classical PRAM model, and it is an intriguing question to design algorithms whose running time is smaller than in the PRAM model.
In this paper, we study two fundamental problems, 2-edge connectivity and 2-vertex connectivity (biconnectivity). PRAM algorithms which run in O(log n) time have been known for many years. We give algorithms using roughly log diameter rounds in the MPC model. Our main results are, for an n-vertex, m-edge graph of diameter D and bi-diameter D', 1) an O(log D log log_{m/n} n) parallel time 2-edge connectivity algorithm, 2) an O(log D log^2 log_{m/n} n + log D' log log_{m/n} n) parallel time biconnectivity algorithm, where the bi-diameter D' is the largest cycle length over all the vertex pairs in the same biconnected component. Our results are fully scalable, meaning that the memory per processor can be O(n^{delta}) for arbitrary constant delta>0, and the total memory used is linear in the problem size. Our 2-edge connectivity algorithm achieves the same parallel time as the connectivity algorithm of [Andoni et al., 2018]. We also show an Omega(log D') conditional lower bound for the biconnectivity problem.
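To fix ideas about the first of the two problems: a connected undirected graph is 2-edge connected exactly when it contains no bridge, and bridges can be found sequentially in linear time by the classic DFS low-link routine sketched below. This is the sequential baseline only and says nothing about how the paper parallelizes the computation in the MPC model.

import sys

def bridges(n, edges):
    # Classic DFS low-link computation: a DFS tree edge (u,v) is a bridge iff low[v] > disc[u].
    # Recursive, so only suitable for small graphs as written.
    sys.setrecursionlimit(10000)
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [-1] * n
    low = [0] * n
    out = []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    out.append(edges[i])
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return out

# Two triangles joined by a single edge: that edge is the unique bridge.
print(bridges(6, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]))   # [(2, 3)]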
Cite as
Alexandr Andoni, Clifford Stein, and Peilin Zhong. Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{andoni_et_al:LIPIcs.ICALP.2019.14,
author = {Andoni, Alexandr and Stein, Clifford and Zhong, Peilin},
title = {{Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {14:1--14:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.14},
URN = {urn:nbn:de:0030-drops-105906},
doi = {10.4230/LIPIcs.ICALP.2019.14},
annote = {Keywords: parallel algorithms, biconnectivity, 2-edge connectivity, the MPC model}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alexandr Andoni, Tal Malkin, and Negev Shekel Nosatzki
Abstract
We study the problem of discrete distribution testing in the two-party setting. For example, in the standard closeness testing problem, Alice and Bob each have t samples from, respectively, distributions a and b over [n], and they need to test whether a=b or a,b are epsilon-far (in the l_1 distance). This is in contrast to the well-studied one-party case, where the tester has unrestricted access to samples of both distributions. Despite being a natural constraint in applications, the two-party setting has previously evaded attention.
We address two fundamental aspects of the two-party setting: 1) what is the communication complexity, and 2) can it be accomplished securely, without Alice and Bob learning extra information about each other’s input. Besides closeness testing, we also study the independence testing problem, where Alice and Bob have t samples from distributions a and b respectively, which may be correlated; the question is whether a,b are independent or epsilon-far from being independent. Our contribution is three-fold: 1) We show how to gain communication efficiency given more samples, beyond the information-theoretic bound on t. The gain is polynomially better than what one would obtain via adapting one-party algorithms. 2) We prove tightness of our trade-off for closeness testing, as well as that independence testing requires tight Omega(sqrt{m}) communication for an unbounded number of samples. These lower bounds are of independent interest as, to the best of our knowledge, these are the first 2-party communication lower bounds for testing problems, where the inputs are a set of i.i.d. samples. 3) We define the concept of secure distribution testing, and provide secure versions of the above protocols with an overhead that is only polynomial in the security parameter.
Cite as
Alexandr Andoni, Tal Malkin, and Negev Shekel Nosatzki. Two Party Distribution Testing: Communication and Security. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 15:1-15:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{andoni_et_al:LIPIcs.ICALP.2019.15,
author = {Andoni, Alexandr and Malkin, Tal and Nosatzki, Negev Shekel},
title = {{Two Party Distribution Testing: Communication and Security}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {15:1--15:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.15},
URN = {urn:nbn:de:0030-drops-105916},
doi = {10.4230/LIPIcs.ICALP.2019.15},
annote = {Keywords: distribution testing, communication complexity, security}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Srinivasan Arunachalam, Sourav Chakraborty, Troy Lee, Manaswi Paraashar, and Ronald de Wolf
Abstract
We present two new results about exact learning by quantum computers. First, we show how to exactly learn a k-Fourier-sparse n-bit Boolean function from O(k^{1.5}(log k)^2) uniform quantum examples for that function. This improves over the bound of Theta~(kn) uniformly random classical examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang’s lemma for sparse Boolean functions. Second, we show that if a concept class {C} can be exactly learned using Q quantum membership queries, then it can also be learned using O ({Q^2}/{log Q} * log|C|) classical membership queries. This improves the previous-best simulation result (Servedio-Gortler, SICOMP'04) by a log Q-factor.
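For readers less familiar with the sparsity notion used above: a Boolean function f: {0,1}^n -> {-1,1} is k-Fourier-sparse if at most k of its 2^n Fourier coefficients f^(S) = 2^{-n} sum_x f(x) (-1)^{sum_{i in S} x_i} are nonzero. The brute-force classical computation of the full spectrum is sketched below purely as an illustration of the definition; it has no bearing on the quantum sample complexity results of the paper.

from itertools import product

def fourier_spectrum(f, n):
    # f maps a tuple in {0,1}^n to +1 or -1; returns {S: f_hat(S)} over all subsets S,
    # where S is encoded as a bitmask and f_hat(S) = 2^{-n} * sum_x f(x) * (-1)^{|S & x|}.
    spectrum = {}
    points = list(product((0, 1), repeat=n))
    for S in range(1 << n):
        total = 0
        for x in points:
            parity = sum(x[i] for i in range(n) if (S >> i) & 1) % 2
            total += f(x) * (-1 if parity else 1)
        spectrum[S] = total / (1 << n)
    return spectrum

# Example: f(x) = (-1)^(x0 xor x1) is 1-Fourier-sparse; only S = {0,1} (bitmask 3) is nonzero.
f = lambda x: -1 if (x[0] ^ x[1]) else 1
spec = fourier_spectrum(f, 3)
print({S: c for S, c in spec.items() if abs(c) > 1e-12})   # {3: 1.0}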
Cite as
Srinivasan Arunachalam, Sourav Chakraborty, Troy Lee, Manaswi Paraashar, and Ronald de Wolf. Two New Results About Quantum Exact Learning. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{arunachalam_et_al:LIPIcs.ICALP.2019.16,
author = {Arunachalam, Srinivasan and Chakraborty, Sourav and Lee, Troy and Paraashar, Manaswi and de Wolf, Ronald},
title = {{Two New Results About Quantum Exact Learning}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {16:1--16:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.16},
URN = {urn:nbn:de:0030-drops-105929},
doi = {10.4230/LIPIcs.ICALP.2019.16},
annote = {Keywords: quantum computing, exact learning, analysis of Boolean functions, Fourier sparse Boolean functions}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sepehr Assadi and Shay Solomon
Abstract
Maximal independent set (MIS), maximal matching (MM), and (Delta+1)-(vertex) coloring in graphs of maximum degree Delta are among the most prominent algorithmic graph theory problems. They are all solvable by a simple linear-time greedy algorithm and up until very recently this constituted the state-of-the-art. In SODA 2019, Assadi, Chen, and Khanna gave a randomized algorithm for (Delta+1)-coloring that runs in O~(n sqrt{n}) time, which even for moderately dense graphs is sublinear in the input size. The work of Assadi et al., however, contained a spoiler for MIS and MM: neither problem provably admits a sublinear-time algorithm in general graphs. In this work, we dig deeper into the possibility of achieving sublinear-time algorithms for MIS and MM.
The neighborhood independence number of a graph G, denoted by beta(G), is the size of the largest independent set in the neighborhood of any vertex. We identify beta(G) as the "right" parameter to measure the runtime of MIS and MM algorithms: Although graphs of bounded neighborhood independence may be very dense (clique is one example), we prove that carefully chosen variants of greedy algorithms for MIS and MM run in O(n beta(G)) and O(n log{n} * beta(G)) time respectively on any n-vertex graph G. We complement this positive result by observing that a simple extension of the lower bound of Assadi et al. implies that Omega(n beta(G)) time is also necessary for any algorithm for either problem for all values of beta(G) from 1 to Theta(n). We note that our algorithm for MIS is deterministic while for MM we use randomization, which we prove is unavoidable: any deterministic algorithm for MM requires Omega(n^2) time even for beta(G) = 2.
Graphs with bounded neighborhood independence, already for constant beta = beta(G), constitute a rich family of possibly dense graphs, including line graphs, proper interval graphs, unit-disk graphs, claw-free graphs, and graphs of bounded growth. Our results suggest that even though MIS and MM do not admit sublinear-time algorithms in general graphs, one can still solve both problems in sublinear time for a wide range of beta(G) << n.
Finally, by observing that the lower bound of Omega(n sqrt{n}) time for (Delta+1)-coloring due to Assadi et al. applies to graphs of (small) constant neighborhood independence, we unveil an intriguing separation between the time complexity of MIS and MM, and that of (Delta+1)-coloring: while the time complexity of MIS and MM is strictly higher than that of (Delta+1) coloring in general graphs, the exact opposite relation holds for graphs with small neighborhood independence.
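The simple linear-time greedy algorithms mentioned at the start of the abstract are single passes over the vertices (for MIS) or the edges (for MM); a minimal sketch of these classical baselines is given below. The beta(G)-sensitive variants designed in the paper refine this greedy approach and are not reproduced here.

def greedy_mis(n, adj):
    # One pass over vertices: take v unless one of its neighbors was already taken.
    in_mis = [False] * n
    blocked = [False] * n
    for v in range(n):
        if not blocked[v]:
            in_mis[v] = True
            for u in adj[v]:
                blocked[u] = True
    return [v for v in range(n) if in_mis[v]]

def greedy_mm(edges):
    # One pass over edges: take (u, v) if both endpoints are still unmatched.
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

adj = [[1, 2], [0, 2], [0, 1, 3], [2]]                # a triangle 0-1-2 with pendant vertex 3
print(greedy_mis(4, adj))                             # [0, 3]
print(greedy_mm([(0, 1), (0, 2), (1, 2), (2, 3)]))    # [(0, 1), (2, 3)]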
Cite as
Sepehr Assadi and Shay Solomon. When Algorithms for Maximal Independent Set and Maximal Matching Run in Sublinear Time. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 17:1-17:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{assadi_et_al:LIPIcs.ICALP.2019.17,
author = {Assadi, Sepehr and Solomon, Shay},
title = {{When Algorithms for Maximal Independent Set and Maximal Matching Run in Sublinear Time}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {17:1--17:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.17},
URN = {urn:nbn:de:0030-drops-105931},
doi = {10.4230/LIPIcs.ICALP.2019.17},
annote = {Keywords: Maximal Independent Set, Maximal Matching, Sublinear-Time Algorithms, Bounded Neighborhood Independence}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Pranjal Awasthi, Ainesh Bakshi, Maria-Florina Balcan, Colin White, and David P. Woodruff
Abstract
In this work, we study the k-median and k-means clustering problems when the data is distributed across many servers and can contain outliers. While there has been a lot of work on these problems for worst-case instances, we focus on gaining a finer understanding through the lens of beyond worst-case analysis. Our main motivation is the following: for many applications such as clustering proteins by function or clustering communities in a social network, there is some unknown target clustering, and the hope is that running a k-median or k-means algorithm will produce clusterings which are close to matching the target clustering. Worst-case results can guarantee constant factor approximations to the optimal k-median or k-means objective value, but not closeness to the target clustering.
Our first result is a distributed algorithm which returns a near-optimal clustering assuming a natural notion of stability, namely, approximation stability [Awasthi and Balcan, 2014], even when a constant fraction of the data are outliers. The communication complexity is O~(sk+z) where s is the number of machines, k is the number of clusters, and z is the number of outliers. Next, we show this amount of communication cannot be improved even in the setting when the input satisfies various non-worst-case assumptions. We give a matching Omega(sk+z) lower bound on the communication required both for approximating the optimal k-means or k-median cost up to any constant, and for returning a clustering that is close to the target clustering in Hamming distance. These lower bounds hold even when the data satisfies approximation stability or other common notions of stability, and the cluster sizes are balanced. Therefore, Omega(sk+z) is a communication bottleneck, even for real-world instances.
Cite as
Pranjal Awasthi, Ainesh Bakshi, Maria-Florina Balcan, Colin White, and David P. Woodruff. Robust Communication-Optimal Distributed Clustering Algorithms. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{awasthi_et_al:LIPIcs.ICALP.2019.18,
author = {Awasthi, Pranjal and Bakshi, Ainesh and Balcan, Maria-Florina and White, Colin and Woodruff, David P.},
title = {{Robust Communication-Optimal Distributed Clustering Algorithms}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {18:1--18:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.18},
URN = {urn:nbn:de:0030-drops-105942},
doi = {10.4230/LIPIcs.ICALP.2019.18},
annote = {Keywords: robust distributed clustering, communication complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kyriakos Axiotis and Christos Tzamos
Abstract
One of the most fundamental problems in Computer Science is the Knapsack problem. Given a set of n items with different weights and values, it asks to pick the most valuable subset whose total weight is below a capacity threshold T. Despite its wide applicability in various areas in Computer Science, Operations Research, and Finance, the best known running time for the problem is O(T n). The main result of our work is an improved algorithm running in time O(TD), where D is the number of distinct weights. Previously, faster runtimes for Knapsack were only possible when both weights and values are bounded by M and V respectively, running in time O(nMV) [Pisinger, 1999]. In comparison, our algorithm implies a bound of O(n M^2) without any dependence on V, or O(n V^2) without any dependence on M. Additionally, for the unbounded Knapsack problem, we provide an algorithm running in time O(M^2) or O(V^2). Both our algorithms match recent conditional lower bounds shown for the Knapsack problem [Marek Cygan et al., 2017; Marvin Künnemann et al., 2017].
We also initiate a systematic study of general capacitated dynamic programming, of which Knapsack is a core problem. This problem asks to compute the maximum weight path of length k in an edge- or node-weighted directed acyclic graph. In a graph with m edges, these problems are solvable by dynamic programming in time O(k m), and we explore under which conditions the dependence on k can be eliminated. We identify large classes of graphs where this is possible and apply our results to obtain linear time algorithms for the problem of k-sparse Delta-separated sequences. The main technical innovation behind our results is identifying and exploiting concavity that appears in relaxations and subproblems of the tasks we consider.
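For orientation, the O(Tn) baseline that the paper improves upon is the textbook dynamic program over capacities; a minimal 0/1 Knapsack sketch with integer weights is shown below. The paper's O(TD) algorithm, which exploits the number D of distinct weights and concavity in the subproblems, is considerably more involved and is not reproduced here.

def knapsack(weights, values, T):
    # dp[c] = best total value achievable with total weight at most c.
    # Classic O(T * n) time, O(T) space 0/1 Knapsack.
    dp = [0] * (T + 1)
    for w, v in zip(weights, values):
        for c in range(T, w - 1, -1):        # iterate capacities downwards for the 0/1 version
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[T]

print(knapsack([3, 4, 2], [30, 50, 15], 6))   # picks weights 4 and 2 for value 65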
Cite as
Kyriakos Axiotis and Christos Tzamos. Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 19:1-19:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{axiotis_et_al:LIPIcs.ICALP.2019.19,
author = {Axiotis, Kyriakos and Tzamos, Christos},
title = {{Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {19:1--19:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.19},
URN = {urn:nbn:de:0030-drops-105952},
doi = {10.4230/LIPIcs.ICALP.2019.19},
annote = {Keywords: Knapsack, Fine-Grained Complexity, Dynamic Programming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Yair Bartal, Nova Fandina, and Ofer Neiman
Abstract
A tree cover of a metric space (X,d) is a collection of trees, so that every pair x,y in X has a low distortion path in one of the trees. If it has the stronger property that every point x in X has a single tree with low distortion paths to all other points, we call this a Ramsey tree cover. Tree covers and Ramsey tree covers have been studied by [Yair Bartal et al., 2005; Anupam Gupta et al., 2004; T-H. Hubert Chan et al., 2005; Gupta et al., 2006; Mendel and Naor, 2007], and have found several important algorithmic applications, e.g. routing and distance oracles. The union of trees in a tree cover also serves as a special type of spanner that can be decomposed into a few trees with low distortion paths contained in a single tree; such spanners for Euclidean pointsets were presented by [S. Arya et al., 1995].
In this paper we devise efficient algorithms to construct tree covers and Ramsey tree covers for general, planar and doubling metrics. We pay particular attention to the desirable case of distortion close to 1, and study what can be achieved when the number of trees is small. In particular, our work shows a large separation between what can be achieved by tree covers vs. Ramsey tree covers.
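To make the tree cover definition concrete, the following small sketch (ours, not from the paper) computes the distortion achieved by a given collection of trees over a finite metric, assuming each tree is non-contracting; the 4-cycle instance and all names are illustrative.

from collections import defaultdict

def tree_distances(n, tree_edges):
    # All-pairs distances in one tree via DFS from every vertex (fine for small n).
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [[float("inf")] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0.0
        stack = [s]
        while stack:
            u = stack.pop()
            for v, w in adj[u]:
                if dist[s][v] == float("inf"):
                    dist[s][v] = dist[s][u] + w
                    stack.append(v)
    return dist

def tree_cover_distortion(metric, trees):
    # For every pair take the best stretch offered by any tree, then report the worst pair.
    n = len(metric)
    per_tree = [tree_distances(n, t) for t in trees]
    return max(min(d[x][y] / metric[x][y] for d in per_tree)
               for x in range(n) for y in range(x + 1, n))

# Shortest-path metric of a unit-weight 4-cycle, covered by two path trees.
metric = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
trees = [[(0, 1, 1), (1, 2, 1), (2, 3, 1)], [(1, 2, 1), (2, 3, 1), (3, 0, 1)]]
print(tree_cover_distortion(metric, trees))   # 1.0: every pair has an exact path in some tree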
Cite as
Yair Bartal, Nova Fandina, and Ofer Neiman. Covering Metric Spaces by Few Trees. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 20:1-20:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bartal_et_al:LIPIcs.ICALP.2019.20,
author = {Bartal, Yair and Fandina, Nova and Neiman, Ofer},
title = {{Covering Metric Spaces by Few Trees}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {20:1--20:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.20},
URN = {urn:nbn:de:0030-drops-105967},
doi = {10.4230/LIPIcs.ICALP.2019.20},
annote = {Keywords: tree cover, Ramsey tree cover, probabilistic hierarchical family}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Giulia Bernardini, Paweł Gawrychowski, Nadia Pisanti, Solon P. Pissis, and Giovanna Rosone
Abstract
An elastic-degenerate (ED) string is a sequence of n sets of strings of total length N, which was recently proposed to model a set of similar sequences. The ED string matching (EDSM) problem is to find all occurrences of a pattern of length m in an ED text. The EDSM problem has recently received some attention in the combinatorial pattern matching community, and an O(nm^{1.5} sqrt{log m} + N)-time algorithm is known [Aoyama et al., CPM 2018]. The standard assumption in the prior work on this question is that N is substantially larger than both n and m, and thus we would like to have a linear dependency on N. Under this assumption, the natural open problem is whether we can decrease the 1.5 exponent in the time complexity, similarly to the related (but, to the best of our knowledge, not equivalent) word break problem [Backurs and Indyk, FOCS 2016].
Our starting point is a conditional lower bound for the EDSM problem. We use the popular combinatorial Boolean matrix multiplication (BMM) conjecture stating that there is no truly subcubic combinatorial algorithm for BMM [Abboud and Williams, FOCS 2014]. By designing an appropriate reduction we show that a combinatorial algorithm solving the EDSM problem in O(nm^{1.5-epsilon} + N) time, for any epsilon>0, refutes this conjecture. Of course, the notion of combinatorial algorithms is not clearly defined, so our reduction should be understood as an indication that decreasing the exponent requires fast matrix multiplication.
Two standard tools used in algorithms on strings are string periodicity and fast Fourier transform. Our main technical contribution is that we successfully combine these tools with fast matrix multiplication to design a non-combinatorial O(nm^{1.381} + N)-time algorithm for EDSM. To the best of our knowledge, we are the first to do so.
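As a point of reference only, the brute-force check below expands an ED text in every possible way and searches each expansion for the pattern; it is exponential in n, bears no relation to the algorithms discussed above, and the toy ED text is made up.

from itertools import product

def edsm_bruteforce(ed_text, pattern):
    # ed_text: list of lists of strings; an expansion picks one string per set.
    hits = []
    for choice in product(*ed_text):
        expansion = "".join(choice)
        if pattern in expansion:
            hits.append(expansion)
    return hits

ed_text = [["A"], ["C", "G", ""], ["ATT", "T"]]   # n = 3 sets; "" models a deletable segment
print(edsm_bruteforce(ed_text, "CAT"))   # ['ACATT']
print(edsm_bruteforce(ed_text, "GAT"))   # ['AGATT']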
Cite as
Giulia Bernardini, Paweł Gawrychowski, Nadia Pisanti, Solon P. Pissis, and Giovanna Rosone. Even Faster Elastic-Degenerate String Matching via Fast Matrix Multiplication. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bernardini_et_al:LIPIcs.ICALP.2019.21,
author = {Bernardini, Giulia and Gawrychowski, Pawe{\l} and Pisanti, Nadia and Pissis, Solon P. and Rosone, Giovanna},
title = {{Even Faster Elastic-Degenerate String Matching via Fast Matrix Multiplication}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {21:1--21:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.21},
URN = {urn:nbn:de:0030-drops-105973},
doi = {10.4230/LIPIcs.ICALP.2019.21},
annote = {Keywords: string algorithms, pattern matching, elastic-degenerate string, matrix multiplication, fast Fourier transform}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič
Abstract
We study the problem of approximating the value of the matching polynomial on graphs with edge parameter gamma, where gamma takes arbitrary values in the complex plane.
When gamma is a positive real, Jerrum and Sinclair showed that the problem admits an FPRAS on general graphs. For general complex values of gamma, Patel and Regts, building on methods developed by Barvinok, showed that the problem admits an FPTAS on graphs of maximum degree Delta as long as gamma is not a negative real number less than or equal to -1/(4(Delta-1)). Our first main result completes the picture for the approximability of the matching polynomial on bounded degree graphs. We show that for all Delta >= 3 and all real gamma less than -1/(4(Delta-1)), the problem of approximating the value of the matching polynomial on graphs of maximum degree Delta with edge parameter gamma is #P-hard.
We then explore whether the maximum degree parameter can be replaced by the connective constant. Sinclair et al. showed that for positive real gamma it is possible to approximate the value of the matching polynomial using a correlation decay algorithm on graphs with bounded connective constant (and potentially unbounded maximum degree). We first show that this result does not extend in general in the complex plane; in particular, the problem is #P-hard on graphs with bounded connective constant for a dense set of gamma values on the negative real axis. Nevertheless, we show that the result does extend for any complex value gamma that does not lie on the negative real axis. Our analysis accounts for complex values of gamma using geodesic distances in the complex plane in the metric defined by an appropriate density function.
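On a tiny graph the quantity above can be evaluated directly from its definition; the sketch below sums gamma^{|M|} over all matchings M by brute force and is meant only to illustrate what is being approximated, not the techniques of the paper.

from itertools import combinations

def matching_polynomial_value(edges, gamma):
    # Sum gamma^{|M|} over all matchings M (edge subsets that are pairwise vertex-disjoint).
    total = 0.0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):
                total += gamma ** r
    return total

triangle = [(0, 1), (1, 2), (0, 2)]               # matchings: the empty one and 3 single edges
print(matching_polynomial_value(triangle, 1.0))   # 4.0
print(matching_polynomial_value(triangle, -0.5))  # 1 + 3*(-0.5) = -0.5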
Cite as
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič. The Complexity of Approximating the Matching Polynomial in the Complex Plane. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 22:1-22:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bezakova_et_al:LIPIcs.ICALP.2019.22,
author = {Bez\'{a}kov\'{a}, Ivona and Galanis, Andreas and Goldberg, Leslie Ann and \v{S}tefankovi\v{c}, Daniel},
title = {{The Complexity of Approximating the Matching Polynomial in the Complex Plane}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {22:1--22:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.22},
URN = {urn:nbn:de:0030-drops-105983},
doi = {10.4230/LIPIcs.ICALP.2019.22},
annote = {Keywords: matchings, partition function, correlation decay, connective constant}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Therese Biedl and Philipp Kindermann
Abstract
It is well known that every planar graph G has a Tutte path, i.e., a path P such that any component of G-P has at most three attachment points on P. However, it was only recently shown that such Tutte paths can be found in polynomial time. In this paper, we give a new proof that 3-connected planar graphs have Tutte paths, which leads to a linear-time algorithm to find Tutte paths. Furthermore, our Tutte path has special properties: it visits all exterior vertices, all components of G-P have exactly three attachment points, and we can assign distinct representatives to them that are interior vertices. Finally, our running-time bound is slightly stronger; we can bound it in terms of the degrees of the faces that are incident to P. This allows us to find some applications of Tutte paths (such as binary spanning trees and 2-walks) in linear time as well.
Cite as
Therese Biedl and Philipp Kindermann. Finding Tutte Paths in Linear Time. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{biedl_et_al:LIPIcs.ICALP.2019.23,
author = {Biedl, Therese and Kindermann, Philipp},
title = {{Finding Tutte Paths in Linear Time}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {23:1--23:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.23},
URN = {urn:nbn:de:0030-drops-105991},
doi = {10.4230/LIPIcs.ICALP.2019.23},
annote = {Keywords: planar graph, Tutte path, Hamiltonian path, 2-walk, linear time}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Andreas Björklund, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi
Abstract
A few years ago, Alon et al. [ISMB 2008] gave a simple randomized O((2e)^km epsilon^{-2})-time exponential-space algorithm to approximately compute the number of paths on k vertices in a graph G up to a multiplicative error of 1 +/- epsilon. Shortly afterwards, Alon and Gutner [IWPEC 2009, TALG 2010] gave a deterministic exponential-space algorithm with running time (2e)^{k+O(log^3k)}m log n whenever epsilon^{-1}=k^{O(1)}. Recently, Brand et al. [STOC 2018] provided a speed-up at the cost of reintroducing randomization. Specifically, they gave a randomized O(4^km epsilon^{-2})-time exponential-space algorithm. In this article, we revisit the algorithm by Alon and Gutner. We modify the foundation of their work, and with a novel twist, obtain the following results.
- We present a deterministic 4^{k+O(sqrt{k}(log^2k+log^2 epsilon^{-1}))}m log n-time polynomial-space algorithm. This matches the running time of the best known deterministic polynomial-space algorithm for deciding whether a given graph G has a path on k vertices.
- Additionally, we present a randomized 4^{k+O(log k(log k + log epsilon^{-1}))}m log n-time polynomial-space algorithm. While Brand et al. make non-trivial use of exterior algebra, our algorithm is very simple; we only make elementary use of the probabilistic method.
Thus, the algorithm by Brand et al. runs in time 4^{k+o(k)}m whenever epsilon^{-1}=2^{o(k)}, while our deterministic and randomized algorithms run in time 4^{k+o(k)}m log n whenever epsilon^{-1}=2^{o(k^{1/4})} and epsilon^{-1}=2^{o(k/(log k))}, respectively. Prior to our work, no 2^{O(k)}n^{O(1)}-time polynomial-space algorithm was known. Additionally, our approach is embeddable in the classic framework of divide-and-color, hence it immediately extends to approximate counting of graphs of bounded treewidth; in comparison, Brand et al. note that their approach is limited to graphs of bounded pathwidth.
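As a small correctness reference only (exponential in general and unrelated to the algorithms above), the following sketch counts paths on k vertices exactly by exhaustive search, which makes the counted quantity concrete on toy inputs.

def count_k_paths(adj, k):
    # adj: dict mapping each vertex to the set of its neighbours (undirected graph).
    def extend(path, visited):
        if len(path) == k:
            return 1
        total = 0
        for w in adj[path[-1]]:
            if w not in visited:
                visited.add(w)
                path.append(w)
                total += extend(path, visited)
                path.pop()
                visited.remove(w)
        return total
    ordered = sum(extend([v], {v}) for v in adj)
    return ordered if k == 1 else ordered // 2    # each path is found once from each end

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(count_k_paths(cycle4, 3))   # 4 paths on 3 vertices in a 4-cycle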
Cite as
Andreas Björklund, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Approximate Counting of k-Paths: Deterministic and in Polynomial Space. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2019.24,
author = {Bj\"{o}rklund, Andreas and Lokshtanov, Daniel and Saurabh, Saket and Zehavi, Meirav},
title = {{Approximate Counting of k-Paths: Deterministic and in Polynomial Space}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {24:1--24:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.24},
URN = {urn:nbn:de:0030-drops-106001},
doi = {10.4230/LIPIcs.ICALP.2019.24},
annote = {Keywords: parameterized complexity, approximate counting, \{k\}-Path}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Andreas Björklund and Ryan Williams
Abstract
We show that the permanent of an n x n matrix over any finite ring of r <= n elements can be computed with a deterministic 2^{n-Omega(n/r)} time algorithm. This improves on a Las Vegas algorithm running in expected 2^{n-Omega(n/(r log r))} time, implicit in [Björklund, Husfeldt, and Lyckberg, IPL 2017]. For the permanent over the integers of a 0/1-matrix with exactly d ones per row and column, we provide a deterministic 2^{n-Omega(n/d^{3/4})} time algorithm. This improves on a 2^{n-Omega(n/d)} time algorithm in [Cygan and Pilipczuk, ICALP 2013]. We also show that the number of Hamiltonian cycles in an n-vertex directed graph of average degree delta can be computed by a deterministic 2^{n-Omega(n/delta)} time algorithm. This improves on a Las Vegas algorithm running in expected 2^{n-Omega(n/poly(delta))} time in [Björklund, Kaski, and Koutis, ICALP 2017].
A key tool in our approach is a reduction from computing the permanent to listing pairs of dissimilar vectors from two sets of vectors, i.e., vectors over a finite set that differ in each coordinate, building on an observation of [Bax and Franklin, Algorithmica 2002]. We propose algorithms that can be used both to derandomise the construction of Bax and Franklin and to efficiently list dissimilar pairs, using several algorithmic tools. We also give a simple randomised algorithm resulting in Monte Carlo algorithms within the same time bounds.
Our new fast algorithms for listing dissimilar vector pairs from two sets of vectors are inspired by recent algorithms for detecting and counting orthogonal vectors by [Abboud, Williams, and Yu, SODA 2015] and [Chan and Williams, SODA 2016].
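For scale, the classical exact baseline is Ryser's O(2^n poly(n))-time formula for the permanent; the sketch below implements it for tiny matrices and is not the listing-based method of the paper.

from itertools import combinations

def permanent_ryser(A):
    # Ryser: perm(A) = (-1)^n * sum over nonempty column sets S of (-1)^{|S|} * prod_i sum_{j in S} A[i][j].
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent_ryser([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))   # 6, i.e. 3! for the all-ones 3x3 matrix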
Cite as
Andreas Björklund and Ryan Williams. Computing Permanents and Counting Hamiltonian Cycles by Listing Dissimilar Vectors. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2019.25,
author = {Bj\"{o}rklund, Andreas and Williams, Ryan},
title = {{Computing Permanents and Counting Hamiltonian Cycles by Listing Dissimilar Vectors}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {25:1--25:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.25},
URN = {urn:nbn:de:0030-drops-106018},
doi = {10.4230/LIPIcs.ICALP.2019.25},
annote = {Keywords: permanent, Hamiltonian cycle, orthogonal vectors}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Andreas Björklund, Petteri Kaski, and Ryan Williams
Abstract
We consider the problem of finding solutions to systems of polynomial equations over a finite field. Lokshtanov et al. [SODA'17] recently obtained the first worst-case algorithms that beat exhaustive search for this problem. In particular for degree-d equations modulo two in n variables, they gave an O^*(2^{(1-1/(5d))n}) time algorithm, and for the special case d=2 they gave an O^*(2^{0.876n}) time algorithm.
We modify their approach in a way that improves these running times to O^*(2^{(1-1/(2.7d))n}) and O^*(2^{0.804n}), respectively. In particular, our latter bound, which holds for all systems of quadratic equations modulo 2, comes close to the O^*(2^{0.792n}) expected time bound of an algorithm empirically found to hold for random equation systems in Bardet et al. [J. Complexity, 2013]. Our improvement involves three observations:
1) The Valiant-Vazirani lemma can be used to reduce the solution-finding problem to that of counting solutions modulo 2.
2) The monomials in the probabilistic polynomials used in this solution-counting modulo 2 have a special form that we exploit to obtain better bounds on their number than in Lokshtanov et al. [SODA'17].
3) The problem of solution-counting modulo 2 can be "embedded" in a smaller instance of the original problem, which enables us to apply the algorithm as a subroutine to itself.
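For concreteness, the exhaustive-search baseline that these bounds improve upon simply tries all 2^n assignments; the sketch below does this for a made-up quadratic system over GF(2) and also reports the parity of the number of solutions.

from itertools import product

def solve_gf2_system(polys, n):
    # Each polynomial is given as a function mapping an n-bit tuple to 0 or 1.
    return [x for x in product((0, 1), repeat=n) if all(p(x) == 0 for p in polys)]

# Toy system in variables x0, x1, x2:  x0*x1 + x2 = 0  and  x0 + x1 + 1 = 0.
polys = [lambda x: (x[0] * x[1] + x[2]) % 2,
         lambda x: (x[0] + x[1] + 1) % 2]
sols = solve_gf2_system(polys, 3)
print(sols)            # [(0, 1, 0), (1, 0, 0)]
print(len(sols) % 2)   # parity of the number of solutions: 0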
Cite as
Andreas Björklund, Petteri Kaski, and Ryan Williams. Solving Systems of Polynomial Equations over GF(2) by a Parity-Counting Self-Reduction. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2019.26,
author = {Bj\"{o}rklund, Andreas and Kaski, Petteri and Williams, Ryan},
title = {{Solving Systems of Polynomial Equations over GF(2) by a Parity-Counting Self-Reduction}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {26:1--26:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.26},
URN = {urn:nbn:de:0030-drops-106023},
doi = {10.4230/LIPIcs.ICALP.2019.26},
annote = {Keywords: equation systems, polynomial method}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu
Abstract
We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence in terms of m and n and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m}+poly(r))*poly(log m,log n,B,epsilon^{-1}), with B an upper bound on the trace-norm of all input matrices. In particular the complexity depends only polylogarithmically on n and polynomially on r.
We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: Given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find in time sqrt{m}*poly(log m,log n,r,epsilon^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle from statistical mechanics.
As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on its dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
Cite as
Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu. Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{brandao_et_al:LIPIcs.ICALP.2019.27,
author = {Brand\~{a}o, Fernando G. S. L. and Kalev, Amir and Li, Tongyang and Lin, Cedric Yen-Yu and Svore, Krysta M. and Wu, Xiaodi},
title = {{Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {27:1--27:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.27},
URN = {urn:nbn:de:0030-drops-106036},
doi = {10.4230/LIPIcs.ICALP.2019.27},
annote = {Keywords: quantum algorithms, semidefinite program, convex optimization}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alex B. Grilo
Abstract
The importance of being able to verify quantum computation delegated to remote servers increases with the recent development of quantum technologies. In some of the proposed protocols for this task, a client delegates her quantum computation to non-communicating servers in multiple rounds of communication. In this work, we propose the first protocol in which the client delegates her quantum computation to two servers in a single round of communication. Another advantage of our protocol is that it is conceptually simpler than previous protocols. The parameters of our protocol also make it possible to prove security even if the servers are allowed to communicate, provided they respect the plausible assumption that information cannot propagate faster than the speed of light, making it the first relativistic protocol for quantum computation.
Cite as
Alex B. Grilo. A Simple Protocol for Verifiable Delegation of Quantum Computation in One Round. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 28:1-28:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{grilo:LIPIcs.ICALP.2019.28,
author = {Grilo, Alex B.},
title = {{A Simple Protocol for Verifiable Delegation of Quantum Computation in One Round}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {28:1--28:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.28},
URN = {urn:nbn:de:0030-drops-106044},
doi = {10.4230/LIPIcs.ICALP.2019.28},
annote = {Keywords: quantum computation, quantum cryptography, delegation of quantum computation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Raimundo Briceño, Andrei A. Bulatov, Víctor Dalmau, and Benoît Larose
Abstract
The Constraint Satisfaction Problem (CSP) and its counting counterpart appear under different guises in many areas of mathematics, computer science, statistical physics, and elsewhere. Their structural and algorithmic properties have been shown to play a crucial role in many of those applications. For instance, topological properties of the solution set, such as connectedness, are related to the hardness of CSPs over random structures. In approximate counting and statistical physics, where CSPs emerge in the form of spin systems, mixing properties and the uniqueness of Gibbs measures have been heavily exploited for approximating partition functions or the free energy of spin systems. Additionally, for decision CSPs, structural properties of the relational structures involved - like, for example, dismantlability - and their logical characterizations have been instrumental for determining the complexity and other properties of the problem.
In spite of the great diversity of those features, there are some eerie similarities between them. These were observed and made more precise in the case of graph homomorphisms by Brightwell and Winkler, who showed that the structural property of dismantlability of the target graph, the connectedness of the set of homomorphisms, good mixing properties of the corresponding spin system, and the uniqueness of the Gibbs measure are all equivalent. In this paper we go a step further and demonstrate similar connections for arbitrary CSPs. This requires a much deeper understanding of dismantling and of the structure of the solution space in the case of relational structures, and new refined concepts of mixing introduced by Briceño. In addition, we develop properties related to the study of valid extensions of a given partially defined homomorphism, an approach that turns out to be novel even in the graph case. We also add to the mix the combinatorial property of finite duality and its logic counterpart, FO-definability, studied by Larose, Loten, and Tardif.
Cite as
Raimundo Briceño, Andrei A. Bulatov, Víctor Dalmau, and Benoît Larose. Dismantlability, Connectedness, and Mixing in Relational Structures. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{briceno_et_al:LIPIcs.ICALP.2019.29,
author = {Brice\~{n}o, Raimundo and Bulatov, Andrei A. and Dalmau, V{\'\i}ctor and Larose, Beno\^{i}t},
title = {{Dismantlability, Connectedness, and Mixing in Relational Structures}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {29:1--29:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.29},
URN = {urn:nbn:de:0030-drops-106059},
doi = {10.4230/LIPIcs.ICALP.2019.29},
annote = {Keywords: relational structure, constraint satisfaction problem, homomorphism, mixing properties, Gibbs measure}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mark Bun, Nikhil S. Mande, and Justin Thaler
Abstract
The communication class UPP^{cc} is a communication analog of the Turing Machine complexity class PP. It is characterized by a matrix-analytic complexity measure called sign-rank (also called dimension complexity), and is essentially the most powerful communication class against which we know how to prove lower bounds.
For a communication problem f, let f wedge f denote the function that evaluates f on two disjoint inputs and outputs the AND of the results. We exhibit a communication problem f with UPP^{cc}(f)= O(log n), and UPP^{cc}(f wedge f) = Theta(log^2 n). This is the first result showing that UPP communication complexity can increase by more than a constant factor under intersection. We view this as a first step toward showing that UPP^{cc}, the class of problems with polylogarithmic-cost UPP communication protocols, is not closed under intersection.
Our result shows that the function class consisting of intersections of two majorities on n bits has dimension complexity n^{Omega(log n)}. This matches an upper bound of Klivans, O'Donnell, and Servedio (FOCS 2002), who used it to give a quasipolynomial-time algorithm for PAC learning intersections of polylogarithmically many majorities. Hence, fundamentally new techniques will be needed to learn this class of functions in polynomial time.
Cite as
Mark Bun, Nikhil S. Mande, and Justin Thaler. Sign-Rank Can Increase Under Intersection. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bun_et_al:LIPIcs.ICALP.2019.30,
author = {Bun, Mark and Mande, Nikhil S. and Thaler, Justin},
title = {{Sign-Rank Can Increase Under Intersection}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {30:1--30:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.30},
URN = {urn:nbn:de:0030-drops-106067},
doi = {10.4230/LIPIcs.ICALP.2019.30},
annote = {Keywords: Sign rank, dimension complexity, communication complexity, learning theory}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Angel A. Cantu, Austin Luchsinger, Robert Schweller, and Tim Wylie
Abstract
Traditionally, computation within self-assembly models is hard to conceal because the self-assembly process generates a crystalline assembly whose computational history is inherently part of the structure itself. With no way to remove information from the computation, this computational model offers a unique problem: how can computational input and computation be hidden while still computing and reporting the final output? Designing such systems is inherently motivated by privacy concerns in biomedical computing and applications in cryptography.
In this paper we propose the problem of performing "covert computation" within tile self-assembly, which seeks to design self-assembly systems that "conceal" both the input and the computational history of the computations they perform. We achieve these results within the growth-only restricted abstract tile assembly model (aTAM) with positive and negative interactions. We show that general-case covert computation is possible by implementing a set of basic covert logic gates capable of simulating any circuit (functionally complete). To further motivate the study of covert computation, we apply our new framework to resolve an outstanding complexity question; we use our covert circuitry to show that the unique assembly verification problem within the growth-only aTAM with negative interactions is coNP-complete.
Cite as
Angel A. Cantu, Austin Luchsinger, Robert Schweller, and Tim Wylie. Covert Computation in Self-Assembled Circuits. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cantu_et_al:LIPIcs.ICALP.2019.31,
author = {Cantu, Angel A. and Luchsinger, Austin and Schweller, Robert and Wylie, Tim},
title = {{Covert Computation in Self-Assembled Circuits}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {31:1--31:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.31},
URN = {urn:nbn:de:0030-drops-106075},
doi = {10.4230/LIPIcs.ICALP.2019.31},
annote = {Keywords: self-assembly, covert circuits}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Igor Carboni Oliveira
Abstract
We introduce randomized time-bounded Kolmogorov complexity (rKt), a natural extension of Levin’s notion [Leonid A. Levin, 1984] of Kolmogorov complexity. A string w of low rKt complexity can be decompressed from a short representation via a time-bounded algorithm that outputs w with high probability.
This complexity measure gives rise to a decision problem over strings: MrKtP (The Minimum rKt Problem). We explore ideas from pseudorandomness to prove that MrKtP and its variants cannot be solved in randomized quasi-polynomial time. This exhibits a natural string compression problem that is provably intractable, even for randomized computations. Our techniques also imply that there is no n^{1 - epsilon}-approximate algorithm for MrKtP running in randomized quasi-polynomial time.
Complementing this lower bound, we observe connections between rKt, the power of randomness in computing, and circuit complexity. In particular, we present the first hardness magnification theorem for a natural problem that is unconditionally hard against a strong model of computation.
Cite as
Igor Carboni Oliveira. Randomness and Intractability in Kolmogorov Complexity. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{oliveira:LIPIcs.ICALP.2019.32,
author = {Oliveira, Igor Carboni},
title = {{Randomness and Intractability in Kolmogorov Complexity}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {32:1--32:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.32},
URN = {urn:nbn:de:0030-drops-106087},
doi = {10.4230/LIPIcs.ICALP.2019.32},
annote = {Keywords: computational complexity, randomness, circuit lower bounds, Kolmogorov complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shantanav Chakraborty, András Gilyén, and Stacey Jeffery
Abstract
We apply the framework of block-encodings, introduced by Low and Chuang (under the name standard-form), to the study of quantum machine learning algorithms and derive general results that are applicable to a variety of input models, including sparse matrix oracles and matrices stored in a data structure. We develop several tools within the block-encoding framework, such as singular value estimation of a block-encoded matrix, and quantum linear system solvers using block-encodings. The presented results give new techniques for Hamiltonian simulation of non-sparse matrices, which could be relevant for certain quantum chemistry applications, and which in turn imply an exponential improvement in the dependence on precision in quantum linear systems solvers for non-sparse matrices.
In addition, we develop a technique of variable-time amplitude estimation, based on Ambainis' variable-time amplitude amplification technique, which we are also able to apply within the framework.
As applications, we design the following algorithms: (1) a quantum algorithm for the quantum weighted least squares problem, exhibiting a 6-th power improvement in the dependence on the condition number and an exponential improvement in the dependence on the precision over the previous best algorithm of Kerenidis and Prakash; (2) the first quantum algorithm for the quantum generalized least squares problem; and (3) quantum algorithms for estimating electrical-network quantities, including effective resistance and dissipated power, improving upon previous work.
Cite as
Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The Power of Block-Encoded Matrix Powers: Improved Regression Techniques via Faster Hamiltonian Simulation. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{chakraborty_et_al:LIPIcs.ICALP.2019.33,
author = {Chakraborty, Shantanav and Gily\'{e}n, Andr\'{a}s and Jeffery, Stacey},
title = {{The Power of Block-Encoded Matrix Powers: Improved Regression Techniques via Faster Hamiltonian Simulation}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {33:1--33:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.33},
URN = {urn:nbn:de:0030-drops-106092},
doi = {10.4230/LIPIcs.ICALP.2019.33},
annote = {Keywords: Quantum algorithms, Hamiltonian simulation, Quantum machine learning}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Jérémie Chalopin, Victor Chepoi, Shay Moran, and Manfred K. Warmuth
Abstract
We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections enable us to export results from geometry to machine learning. Our first main result is based on a geometric construction by H. Tracy Hall (2004) of a partial shelling of the cross-polytope which cannot be extended. We use it to derive a maximum class of VC dimension 3 that has no corners. This refutes several previous works in machine learning from the past 11 years. In particular, it implies that the previous constructions of optimal unlabeled compression schemes for maximum classes are erroneous.
On the positive side we present a new construction of an optimal unlabeled compression scheme for maximum classes. We leave as open whether our unlabeled compression scheme extends to ample (a.k.a. lopsided or extremal) classes, which represent a natural and far-reaching generalization of maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the 1-skeletons of associated cubical complexes.
Cite as
Jérémie Chalopin, Victor Chepoi, Shay Moran, and Manfred K. Warmuth. Unlabeled Sample Compression Schemes and Corner Peelings for Ample and Maximum Classes. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{chalopin_et_al:LIPIcs.ICALP.2019.34,
author = {Chalopin, J\'{e}r\'{e}mie and Chepoi, Victor and Moran, Shay and Warmuth, Manfred K.},
title = {{Unlabeled Sample Compression Schemes and Corner Peelings for Ample and Maximum Classes}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {34:1--34:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.34},
URN = {urn:nbn:de:0030-drops-106105},
doi = {10.4230/LIPIcs.ICALP.2019.34},
annote = {Keywords: VC-dimension, sample compression, Sauer-Shelah-Perles lemma, Sandwich lemma, maximum class, ample/extremal class, corner peeling, unique sink orientation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Arkadev Chattopadhyay, Yuval Filmus, Sajin Koroth, Or Meir, and Toniann Pitassi
Abstract
We prove a new query-to-communication lifting theorem for randomized protocols, with inner product as the gadget. This allows us to use a much smaller gadget, leading to a more efficient lifting. Prior to this work, such a theorem was known only for deterministic protocols, due to Chattopadhyay et al. [Arkadev Chattopadhyay et al., 2017] and Wu et al. [Xiaodi Wu et al., 2017]. The only query-to-communication lifting result for randomized protocols, due to Göös, Pitassi and Watson [Mika Göös et al., 2017], used the much larger indexing gadget.
Our proof also provides a unified treatment of randomized and deterministic lifting. Most existing proofs of deterministic lifting theorems use a measure of information known as thickness. In contrast, Göös, Pitassi and Watson [Mika Göös et al., 2017] used blockwise min-entropy as a measure of information. Our proof uses the blockwise min-entropy framework to prove lifting theorems in both settings in a unified way.
Cite as
Arkadev Chattopadhyay, Yuval Filmus, Sajin Koroth, Or Meir, and Toniann Pitassi. Query-To-Communication Lifting for BPP Using Inner Product. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 35:1-35:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{chattopadhyay_et_al:LIPIcs.ICALP.2019.35,
author = {Chattopadhyay, Arkadev and Filmus, Yuval and Koroth, Sajin and Meir, Or and Pitassi, Toniann},
title = {{Query-To-Communication Lifting for BPP Using Inner Product}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {35:1--35:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.35},
URN = {urn:nbn:de:0030-drops-106110},
doi = {10.4230/LIPIcs.ICALP.2019.35},
annote = {Keywords: lifting theorems, inner product, BPP Lifting, Deterministic Lifting}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Xue Chen and Eric Price
Abstract
We consider the problem of locating a signal whose frequencies are "off grid" and clustered in a narrow band. Given noisy sample access to a function g(t) with Fourier spectrum in a narrow range [f_0 - Delta, f_0 + Delta], how accurately is it possible to identify f_0? We present generic conditions on g that allow for efficient, accurate estimates of the frequency. We then show bounds on these conditions for k-Fourier-sparse signals that imply recovery of f_0 to within Delta + O~(k^3) from samples on [-1, 1]. This improves upon the best previous bound of O(Delta + O~(k^5))^{1.5}. We also show that no algorithm can do better than Delta + O~(k^2).
In the process we provide a new O~(k^3) bound on the ratio between the maximum and average value of continuous k-Fourier-sparse signals, which has independent application.
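As a naive point of comparison only (not the estimator analyzed in the paper), the dominant frequency of equispaced noisy samples can be estimated by locating the peak of the FFT; the sketch below does this for a made-up signal sampled on [-1, 1] using numpy.

import numpy as np

def estimate_frequency(samples, duration):
    # Return the frequency of the largest FFT bin; the resolution is 1/duration.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=duration / len(samples))
    return freqs[np.argmax(spectrum)]

rng = np.random.default_rng(0)
n, duration, f0 = 1024, 2.0, 17.3
t = np.linspace(-1.0, 1.0, n, endpoint=False)
g = np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)   # noisy test signal
print(estimate_frequency(g, duration))   # close to 17.3 (within the 0.5 bin resolution)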
Cite as
Xue Chen and Eric Price. Estimating the Frequency of a Clustered Signal. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 36:1-36:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{chen_et_al:LIPIcs.ICALP.2019.36,
author = {Chen, Xue and Price, Eric},
title = {{Estimating the Frequency of a Clustered Signal}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {36:1--36:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.36},
URN = {urn:nbn:de:0030-drops-106128},
doi = {10.4230/LIPIcs.ICALP.2019.36},
annote = {Keywords: sublinear algorithms, Fourier transform}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kuan Cheng, Zhengzhong Jin, Xin Li, and Ke Wu
Abstract
Document exchange and error correcting codes are two fundamental problems regarding communications. In the first problem, Alice and Bob each hold a string, and the goal is for Alice to send a short sketch to Bob, so that Bob can recover Alice’s string. In the second problem, Alice sends a message with some redundant information to Bob through a channel that can add adversarial errors, and the goal is for Bob to correctly recover the message despite the errors. In both problems, an upper bound is placed on the number of errors between the two strings or that the channel can add, and a major goal is to minimize the size of the sketch or the redundant information. In this paper we focus on deterministic document exchange protocols and binary error correcting codes.
Both problems have been studied extensively. In the case of Hamming errors (i.e., bit substitutions) and bit erasures, we have explicit constructions with asymptotically optimal parameters. However, other error types are still rather poorly understood. In a recent work [Kuan Cheng et al., 2018], the authors constructed explicit deterministic document exchange protocols and binary error correcting codes for edit errors with almost optimal parameters. Unfortunately, the constructions in [Kuan Cheng et al., 2018] do not work for other common errors such as block transpositions.
In this paper, we generalize the constructions in [Kuan Cheng et al., 2018] to handle a much larger class of errors. These include bursts of insertions and deletions, as well as block transpositions. Specifically, we consider document exchange and error correcting codes where the total number of block insertions, block deletions, and block transpositions is at most k <= alpha n/log n for some constant 0<alpha<1. In addition, the total number of bits inserted and deleted by the first two kinds of operations is at most t <= beta n for some constant 0<beta<1, where n is the length of Alice’s string or message. We construct explicit, deterministic document exchange protocols with sketch size O((k log n + t) log^2(n/(k log n + t))) and an explicit binary error correcting code with O(k log n log log log n + t) redundant bits. As a comparison, the information-theoretic optimum for both problems is Theta(k log n + t). As far as we know, previously there were no known explicit deterministic document exchange protocols in this case, and the best known binary code needs Omega(n) redundant bits even to correct just one block transposition [L. J. Schulman and D. Zuckerman, 1999].
Cite as
Kuan Cheng, Zhengzhong Jin, Xin Li, and Ke Wu. Block Edit Errors with Transpositions: Deterministic Document Exchange Protocols and Almost Optimal Binary Codes. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cheng_et_al:LIPIcs.ICALP.2019.37,
author = {Cheng, Kuan and Jin, Zhengzhong and Li, Xin and Wu, Ke},
title = {{Block Edit Errors with Transpositions: Deterministic Document Exchange Protocols and Almost Optimal Binary Codes}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {37:1--37:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.37},
URN = {urn:nbn:de:0030-drops-106137},
doi = {10.4230/LIPIcs.ICALP.2019.37},
annote = {Keywords: Deterministic document exchange, error correcting code, block edit error}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Siu-Wing Cheng and Yuchen Mao
Abstract
Asadpour, Feige, and Saberi proved that the integrality gap of the configuration LP for the restricted max-min allocation problem is at most 4. However, their proof does not give a polynomial-time approximation algorithm. A lot of effort has been devoted to designing an efficient algorithm whose approximation ratio can match this upper bound for the integrality gap. In ICALP 2018, we presented a (6 + delta)-approximation algorithm where delta can be any positive constant, and there is still a gap of roughly 2. In this paper, we narrow the gap significantly by proposing a (4+delta)-approximation algorithm where delta can be any positive constant. The approximation ratio is with respect to the optimal value of the configuration LP, and the running time is poly(m,n) * n^{poly(1/delta)} where n is the number of players and m is the number of resources. We also improve the upper bound for the integrality gap of the configuration LP to 3 + 21/26 ≈ 3.808.
Cite as
Siu-Wing Cheng and Yuchen Mao. Restricted Max-Min Allocation: Approximation and Integrality Gap. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 38:1-38:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cheng_et_al:LIPIcs.ICALP.2019.38,
author = {Cheng, Siu-Wing and Mao, Yuchen},
title = {{Restricted Max-Min Allocation: Approximation and Integrality Gap}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {38:1--38:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.38},
URN = {urn:nbn:de:0030-drops-106143},
doi = {10.4230/LIPIcs.ICALP.2019.38},
annote = {Keywords: fair allocation, configuration LP, approximation, integrality gap}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mahdi Cheraghchi, Valentine Kabanets, Zhenjian Lu, and Dimitrios Myrisiotis
Abstract
The Minimum Circuit Size Problem (MCSP) asks if a given truth table of a Boolean function f can be computed by a Boolean circuit of size at most theta, for a given parameter theta. We improve several circuit lower bounds for MCSP, using pseudorandom generators (PRGs) that are local; a PRG is called local if its output bit strings, when viewed as the truth table of a Boolean function, can be computed by a Boolean circuit of small size. We get new and improved lower bounds for MCSP that almost match the best-known lower bounds against several circuit models. Specifically, we show that computing MCSP, on functions with a truth table of length N, requires
- N^{3-o(1)}-size de Morgan formulas, improving the recent N^{2-o(1)} lower bound by Hirahara and Santhanam (CCC, 2017),
- N^{2-o(1)}-size formulas over an arbitrary basis or general branching programs (no non-trivial lower bound was known for MCSP against these models), and
- 2^{Omega (N^{1/(d+2.01)})}-size depth-d AC^0 circuits, improving the superpolynomial lower bound by Allender et al. (SICOMP, 2006).
The AC^0 lower bound stated above matches the best-known AC^0 lower bound (for PARITY) up to a small additive constant in the depth. Also, for the special case of depth-2 circuits (i.e., CNFs or DNFs), we get an almost optimal lower bound of 2^{N^{1-o(1)}} for MCSP.
Cite as
Mahdi Cheraghchi, Valentine Kabanets, Zhenjian Lu, and Dimitrios Myrisiotis. Circuit Lower Bounds for MCSP from Local Pseudorandom Generators. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cheraghchi_et_al:LIPIcs.ICALP.2019.39,
author = {Cheraghchi, Mahdi and Kabanets, Valentine and Lu, Zhenjian and Myrisiotis, Dimitrios},
title = {{Circuit Lower Bounds for MCSP from Local Pseudorandom Generators}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {39:1--39:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.39},
URN = {urn:nbn:de:0030-drops-106156},
doi = {10.4230/LIPIcs.ICALP.2019.39},
annote = {Keywords: minimum circuit size problem (MCSP), circuit lower bounds, pseudorandom generators (PRGs), local PRGs, de Morgan formulas, branching programs, constant depth circuits}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Eden Chlamtáč, Michael Dinitz, and Thomas Robinson
Abstract
A t-spanner of a graph G is a subgraph H in which all distances are preserved up to a multiplicative t factor. A classical result of Althöfer et al. is that for every integer k and every graph G, there is a (2k-1)-spanner of G with at most O(n^{1+1/k}) edges. But for some settings the more interesting notion is not the number of edges, but the degrees of the nodes. This spurred interest in and study of spanners with small maximum degree. However, this is not necessarily a robust enough objective: we would like spanners that not only have small maximum degree, but also have "few" nodes of "large" degree. To interpolate between these two extremes, in this paper we initiate the study of graph spanners with respect to the l_p-norm of their degree vector, thus simultaneously modeling the number of edges (the l_1-norm) and the maximum degree (the l_{infty}-norm). We give precise upper bounds for all ranges of p and stretch t: we prove that the greedy (2k-1)-spanner has l_p-norm at most max(O(n), O(n^{(k+p)/(kp)})), and that this bound is tight (assuming the Erdős girth conjecture). We also study universal lower bounds, allowing us to give "generic" guarantees on the approximation ratio of the greedy algorithm which generalize and interpolate between the known approximations for the l_1 and l_{infty} norm. Finally, we show that at least in some situations, the l_p norm behaves fundamentally differently from l_1 or l_{infty}: there are regimes (p=2 and stretch 3 in particular) where the greedy spanner has a provably superior approximation to the generic guarantee.
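Since the results above concern the greedy (2k-1)-spanner, here is a minimal sketch of that classical construction for unweighted graphs (scan the edges and keep one only if its endpoints are currently more than 2k-1 hops apart in the spanner built so far); the helper names and the toy K4 instance are ours.

from collections import deque, defaultdict

def greedy_spanner(edges, k):
    t = 2 * k - 1
    adj = defaultdict(set)

    def spanner_dist_at_most(u, v, cap):
        # BFS from u in the current spanner, cut off beyond distance cap.
        dist = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if dist[x] >= cap:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    if y == v:
                        return True
                    q.append(y)
        return False

    spanner = []
    for u, v in edges:
        if not spanner_dist_at_most(u, v, t):
            spanner.append((u, v))
            adj[u].add(v)
            adj[v].add(u)
    return spanner   # has girth > 2k, hence O(n^{1+1/k}) edges in general

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(greedy_spanner(K4, 2))   # stretch 3: [(0, 1), (0, 2), (0, 3)], a star spanning K4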
Cite as
Eden Chlamtáč, Michael Dinitz, and Thomas Robinson. The Norms of Graph Spanners. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{chlamtac_et_al:LIPIcs.ICALP.2019.40,
author = {Chlamt\'{a}\v{c}, Eden and Dinitz, Michael and Robinson, Thomas},
title = {{The Norms of Graph Spanners}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {40:1--40:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.40},
URN = {urn:nbn:de:0030-drops-106163},
doi = {10.4230/LIPIcs.ICALP.2019.40},
annote = {Keywords: spanners, approximations}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Vincent Cohen-Addad and Jason Li
Abstract
We study the complexity of the classic capacitated k-median and k-means problems parameterized by the number of centers, k. These problems are notoriously difficult since the best known approximation bound for high-dimensional Euclidean space and general metric spaces is Theta(log k), and it remains a major open problem whether a constant-factor approximation exists.
We show that there exists a (3+epsilon)-approximation algorithm for the capacitated k-median and a (9+epsilon)-approximation algorithm for the capacitated k-means problem in general metric spaces whose running times are f(epsilon,k) n^{O(1)}. For Euclidean inputs of arbitrary dimension, we give a (1+epsilon)-approximation algorithm for both problems with a similar running time. This is a significant improvement over the (7+epsilon)-approximation of Adamczyk et al. for k-median in general metric spaces and the (69+epsilon)-approximation of Xu et al. for Euclidean k-means.
Cite as
Vincent Cohen-Addad and Jason Li. On the Fixed-Parameter Tractability of Capacitated Clustering. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cohenaddad_et_al:LIPIcs.ICALP.2019.41,
author = {Cohen-Addad, Vincent and Li, Jason},
title = {{On the Fixed-Parameter Tractability of Capacitated Clustering}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {41:1--41:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.41},
URN = {urn:nbn:de:0030-drops-106171},
doi = {10.4230/LIPIcs.ICALP.2019.41},
annote = {Keywords: approximation algorithms, fixed-parameter tractability, capacitated, k-median, k-means, clustering, core-sets, Euclidean}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Vincent Cohen-Addad, Anupam Gupta, Amit Kumar, Euiwoong Lee, and Jason Li
Abstract
We investigate the fine-grained complexity of approximating the classical k-Median/k-Means clustering problems in general metric spaces. We show how to improve the approximation factors to (1+2/e+epsilon) and (1+8/e+epsilon) respectively, using algorithms that run in fixed-parameter time. Moreover, we show that we cannot do better in FPT time, modulo recent complexity-theoretic conjectures.
Cite as
Vincent Cohen-Addad, Anupam Gupta, Amit Kumar, Euiwoong Lee, and Jason Li. Tight FPT Approximations for k-Median and k-Means. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 42:1-42:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cohenaddad_et_al:LIPIcs.ICALP.2019.42,
author = {Cohen-Addad, Vincent and Gupta, Anupam and Kumar, Amit and Lee, Euiwoong and Li, Jason},
title = {{Tight FPT Approximations for k-Median and k-Means}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {42:1--42:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.42},
URN = {urn:nbn:de:0030-drops-106182},
doi = {10.4230/LIPIcs.ICALP.2019.42},
annote = {Keywords: approximation algorithms, fixed-parameter tractability, k-median, k-means, clustering, core-sets}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, and Philipp Loick
Abstract
In the group testing problem we aim to identify a small number of infected individuals within a large population. We avail ourselves of a procedure that can test a group of multiple individuals, with the test result coming out positive iff at least one individual in the group is infected. With all tests conducted in parallel, what is the least number of tests required to identify the status of all individuals? In a recent test design [Aldridge et al. 2016] the individuals are assigned to test groups randomly, with every individual joining an equal number of groups. We pinpoint the sharp threshold for the number of tests required in this randomised design so that it is information-theoretically possible to infer the infection status of every individual. Moreover, we analyse two efficient inference algorithms. These results settle conjectures from [Aldridge et al. 2014, Johnson et al. 2019].
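As a small illustration of the randomised design described above (every individual joins the same number of test groups, and a group tests positive iff it contains at least one infected individual), here is a Python sketch of the model; the parameter names are ours and the snippet only simulates the design and the outcomes, it performs no inference.

import random

def simulate_group_testing(n, k, num_tests, delta, seed=0):
    """Randomised (near-)regular design: each of the n individuals joins `delta` of the
    `num_tests` groups chosen uniformly at random; a test is positive iff its group
    contains one of the k infected individuals."""
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), k))
    groups = [set() for _ in range(num_tests)]
    for individual in range(n):
        for t in rng.sample(range(num_tests), delta):
            groups[t].add(individual)
    outcomes = [any(x in infected for x in group) for group in groups]
    return infected, groups, outcomes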
Cite as
Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, and Philipp Loick. Information-Theoretic and Algorithmic Thresholds for Group Testing. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cojaoghlan_et_al:LIPIcs.ICALP.2019.43,
author = {Coja-Oghlan, Amin and Gebhard, Oliver and Hahn-Klimroth, Max and Loick, Philipp},
title = {{Information-Theoretic and Algorithmic Thresholds for Group Testing}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {43:1--43:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.43},
URN = {urn:nbn:de:0030-drops-106196},
doi = {10.4230/LIPIcs.ICALP.2019.43},
annote = {Keywords: Group testing problem, phase transitions, information theory, efficient algorithms, sharp threshold, Bayesian inference}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Thomas Colcombet, Joël Ouaknine, Pavel Semukhin, and James Worrell
Abstract
We consider the Membership and the Half-Space Reachability problems for matrices in dimensions two and three. Our first main result is that the Membership Problem is decidable for finitely generated sub-semigroups of the Heisenberg group over rational numbers. Furthermore, we prove two decidability results for the Half-Space Reachability Problem. Namely, we show that this problem is decidable for sub-semigroups of GL(2,Z) and of the Heisenberg group over rational numbers.
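For readers unfamiliar with the Heisenberg group over the rationals, the following Python sketch (illustrative only, with exact rational arithmetic) represents its elements as upper unitriangular 3x3 matrices and multiplies them; the Membership Problem above asks whether a target matrix can be written as a product of given generator matrices of this kind.

from fractions import Fraction

def heisenberg(a, b, c):
    """Element of the Heisenberg group over Q: the upper unitriangular matrix
    [[1, a, c], [0, 1, b], [0, 0, 1]] with rational entries a, b, c."""
    F = Fraction
    return [[F(1), F(a), F(c)], [F(0), F(1), F(b)], [F(0), F(0), F(1)]]

def matmul(X, Y):
    """3x3 matrix product; products of Heisenberg elements stay in the group."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

g1, g2 = heisenberg(1, 0, 0), heisenberg(0, 1, Fraction(1, 2))
print(matmul(g1, g2))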
Cite as
Thomas Colcombet, Joël Ouaknine, Pavel Semukhin, and James Worrell. On Reachability Problems for Low-Dimensional Matrix Semigroups. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 44:1-44:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{colcombet_et_al:LIPIcs.ICALP.2019.44,
author = {Colcombet, Thomas and Ouaknine, Jo\"{e}l and Semukhin, Pavel and Worrell, James},
title = {{On Reachability Problems for Low-Dimensional Matrix Semigroups}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {44:1--44:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.44},
URN = {urn:nbn:de:0030-drops-106209},
doi = {10.4230/LIPIcs.ICALP.2019.44},
annote = {Keywords: membership problem, half-space reachability problem, matrix semigroups, Heisenberg group, general linear group}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Graham Cormode, Jacques Dark, and Christian Konrad
Abstract
We consider the maximal and maximum independent set problems in three models of graph streams:
- In the edge model we see a stream of edges which collectively define a graph; this model is well-studied for a variety of problems. We show that the space complexity for a one-pass streaming algorithm to find a maximal independent set is quadratic (i.e. we must store all edges). We further show that it is not much easier if we only require approximate maximality. This contrasts strongly with the other two vertex-based models, where one can greedily find an exact solution in only the space needed to store the independent set (see the sketch after this list).
- In the "explicit" vertex model, the input stream is a sequence of vertices making up the graph. Every vertex arrives along with its incident edges that connect to previously arrived vertices. Various graph problems require substantially less space to solve in this setting than in edge-arrival streams. We show that every one-pass c-approximation streaming algorithm for maximum independent set (MIS) on explicit vertex streams requires Omega({n^2}/{c^6}) bits of space, where n is the number of vertices of the input graph. It is already known that Theta~({n^2}/{c^2}) bits of space are necessary and sufficient in the edge arrival model (Halldórsson et al. 2012), thus the MIS problem is not significantly easier to solve under the explicit vertex arrival order assumption. Our result is proved via a reduction from a new multi-party communication problem closely related to pointer jumping.
- In the "implicit" vertex model, the input stream consists of a sequence of objects, one per vertex. The algorithm is equipped with a function that maps pairs of objects to the presence or absence of edges, thus defining the graph. This model captures, for example, geometric intersection graphs such as unit disc graphs. Our final set of results consists of several improved upper and lower bounds for interval and square intersection graphs, in both explicit and implicit streams. In particular, we show a gap between the hardness of the explicit and implicit vertex models for interval graphs.
Cite as
Graham Cormode, Jacques Dark, and Christian Konrad. Independent Sets in Vertex-Arrival Streams. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{cormode_et_al:LIPIcs.ICALP.2019.45,
author = {Cormode, Graham and Dark, Jacques and Konrad, Christian},
title = {{Independent Sets in Vertex-Arrival Streams}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {45:1--45:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.45},
URN = {urn:nbn:de:0030-drops-106212},
doi = {10.4230/LIPIcs.ICALP.2019.45},
annote = {Keywords: streaming algorithms, independent set size, lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mina Dalirrooyfard, Virginia Vassilevska Williams, Nikhil Vyas, Nicole Wein, Yinzhan Xu, and Yuancheng Yu
Abstract
We study fundamental graph parameters such as the Diameter and Radius in directed graphs, when distances are measured using a somewhat unorthodox but natural measure: the distance between u and v is the minimum of the shortest path distances from u to v and from v to u. The center node in a graph under this measure can for instance represent the optimal location for a hospital to ensure the fastest medical care for everyone, as one can either go to the hospital, or a doctor can be sent to help.
By computing All-Pairs Shortest Paths, all pairwise distances and thus the parameters we study can be computed exactly in O~(mn) time for directed graphs on n vertices, m edges and nonnegative edge weights. Furthermore, this time bound is tight under the Strong Exponential Time Hypothesis [Roditty-Vassilevska W. STOC 2013] so it is natural to study how well these parameters can be approximated in O(mn^{1-epsilon}) time for constant epsilon>0. Abboud, Vassilevska Williams, and Wang [SODA 2016] gave a polynomial factor approximation for Diameter and Radius, as well as a constant factor approximation for both problems in the special case where the graph is a DAG. We greatly improve upon these bounds by providing the first constant factor approximations for Diameter, Radius and the related Eccentricities problem in general graphs. Additionally, we provide a hierarchy of algorithms for Diameter that gives a time/accuracy trade-off.
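For concreteness, the following Python sketch computes the min-distance d(u,v) = min(d(u -> v), d(v -> u)) and the resulting Diameter and Radius by brute force from all-pairs shortest paths; it is only a definitional O(n^3) baseline, far from the O(mn^{1-epsilon})-time approximation algorithms of the paper, and the input format is our own choice.

def min_distance_diameter_radius(n, edges):
    """edges: list of directed (u, v, w) with non-negative weights, vertices 0..n-1.
    Returns (Diameter, Radius) under d(u, v) = min(d(u -> v), d(v -> u))."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):                      # Floyd-Warshall all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    md = [[min(d[i][j], d[j][i]) for j in range(n)] for i in range(n)]
    ecc = [max(row) for row in md]          # eccentricity of each vertex under min-distance
    return max(ecc), min(ecc)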
Cite as
Mina Dalirrooyfard, Virginia Vassilevska Williams, Nikhil Vyas, Nicole Wein, Yinzhan Xu, and Yuancheng Yu. Approximation Algorithms for Min-Distance Problems. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{dalirrooyfard_et_al:LIPIcs.ICALP.2019.46,
author = {Dalirrooyfard, Mina and Williams, Virginia Vassilevska and Vyas, Nikhil and Wein, Nicole and Xu, Yinzhan and Yu, Yuancheng},
title = {{Approximation Algorithms for Min-Distance Problems}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {46:1--46:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.46},
URN = {urn:nbn:de:0030-drops-106223},
doi = {10.4230/LIPIcs.ICALP.2019.46},
annote = {Keywords: fine-grained complexity, graph algorithms, diameter, radius, eccentricities}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mina Dalirrooyfard, Virginia Vassilevska Williams, Nikhil Vyas, and Nicole Wein
Abstract
Some of the most fundamental and well-studied graph parameters are the Diameter (the largest shortest paths distance) and Radius (the smallest distance for which a "center" node can reach all other nodes). The natural and important ST-variant considers two subsets S and T of the vertex set and lets the ST-diameter be the maximum distance between a node in S and a node in T, and the ST-radius be the minimum distance for a node of S to reach all nodes of T. The bichromatic variant is the special case in which S and T partition the vertex set.
In this paper we present a comprehensive study of the approximability of ST and Bichromatic Diameter, Radius, and Eccentricities, and variants, in graphs with and without directions and weights. We give the first nontrivial approximation algorithms for most of these problems, including time/accuracy trade-off upper and lower bounds. We show that nearly all of our obtained bounds are tight under the Strong Exponential Time Hypothesis (SETH), or the related Hitting Set Hypothesis.
For instance, for Bichromatic Diameter in undirected weighted graphs with m edges, we present an O~(m^{3/2}) time 5/3-approximation algorithm, and show that under SETH, neither the running time, nor the approximation factor can be significantly improved while keeping the other unchanged.
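In symbols (our notation, merely restating the prose definitions above), the ST-variants are
\[
  D_{ST}(G) = \max_{s \in S,\ t \in T} d(s,t), \qquad
  R_{ST}(G) = \min_{s \in S} \max_{t \in T} d(s,t),
\]
and the bichromatic versions are the special case in which S and T partition the vertex set, i.e. $S \cup T = V$ and $S \cap T = \emptyset$.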
Cite as
Mina Dalirrooyfard, Virginia Vassilevska Williams, Nikhil Vyas, and Nicole Wein. Tight Approximation Algorithms for Bichromatic Graph Diameter and Related Problems. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 47:1-47:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{dalirrooyfard_et_al:LIPIcs.ICALP.2019.47,
author = {Dalirrooyfard, Mina and Williams, Virginia Vassilevska and Vyas, Nikhil and Wein, Nicole},
title = {{Tight Approximation Algorithms for Bichromatic Graph Diameter and Related Problems}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {47:1--47:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.47},
URN = {urn:nbn:de:0030-drops-106238},
doi = {10.4230/LIPIcs.ICALP.2019.47},
annote = {Keywords: approximation algorithms, fine-grained complexity, diameter, radius, eccentricities}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ran Duan, Ce Jin, and Hongxun Wu
Abstract
In this paper, we present an improved algorithm for the All Pairs Non-decreasing Paths (APNP) problem on weighted simple digraphs, which has running time O~(n^{(3 + omega)/2}) = O~(n^{2.686}). Here n is the number of vertices, and omega < 2.373 is the exponent of time complexity of fast matrix multiplication [Williams 2012, Le Gall 2014]. This matches the current best upper bound for (max, min)-matrix product [Duan, Pettie 2009], which is reducible to APNP. Thus, further improvement for APNP will imply a faster algorithm for (max, min)-matrix product. The previous best upper bound for APNP on weighted digraphs was O~(n^{(3 + (3 - omega)/(omega + 1) + omega)/2}) = O~(n^{2.78}) [Duan, Gu, Zhang 2018]. We also show an O~(n^2) time algorithm for APNP in undirected simple graphs, which is optimal up to logarithmic factors.
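To clarify the matrix product that APNP is tied to, here is a cubic-time Python sketch of the (max, min)-product; it only spells out the operation (the abstract notes that this product is reducible to APNP, so faster APNP algorithms transfer to it), and is of course not the O~(n^{(3+omega)/2})-time algorithm of the paper.

def max_min_product(A, B):
    """(max, min)-matrix product: C[i][j] = max over k of min(A[i][k], B[k][j]).
    Cubic-time definition-level baseline; A is n x m, B is m x p, given as nested lists."""
    n, m, p = len(A), len(A[0]), len(B[0])
    assert m == len(B)
    return [[max(min(A[i][k], B[k][j]) for k in range(m)) for j in range(p)]
            for i in range(n)]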
Cite as
Ran Duan, Ce Jin, and Hongxun Wu. Faster Algorithms for All Pairs Non-Decreasing Paths Problem. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 48:1-48:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{duan_et_al:LIPIcs.ICALP.2019.48,
author = {Duan, Ran and Jin, Ce and Wu, Hongxun},
title = {{Faster Algorithms for All Pairs Non-Decreasing Paths Problem}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {48:1--48:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.48},
URN = {urn:nbn:de:0030-drops-106241},
doi = {10.4230/LIPIcs.ICALP.2019.48},
annote = {Keywords: graph optimization, matrix multiplication, non-decreasing paths}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Guillaume Ducoffe
Abstract
Given an n-vertex m-edge graph G with non-negative edge-weights, a shortest cycle of G is one minimizing the sum of the weights on its edges. The girth of G is the weight of such a shortest cycle. We obtain several new approximation algorithms for computing the girth of weighted graphs:
- For any graph G with polynomially bounded integer weights, we present a deterministic algorithm that computes, in O~(n^{5/3}+m)-time, a cycle of weight at most twice the girth of G. This matches both the approximation factor and - almost - the running time of the best known subquadratic-time approximation algorithm for the girth of unweighted graphs.
- Then, we turn our algorithm into a deterministic (2+epsilon)-approximation for graphs with arbitrary non-negative edge-weights, at the price of a slightly worse running time of O~(n^{5/3} polylog(1/epsilon)+m). To do so, we introduce a generic method for obtaining a polynomial-factor approximation of the girth in subquadratic time, which may be of independent interest.
- Finally, if we assume that the adjacency lists are sorted, then we can remove the dependency on the number m of edges. Namely, we can transform our algorithms into an O~(n^{5/3})-time randomized 4-approximation for graphs with non-negative edge-weights. This can be derandomized, thereby leading to an O~(n^{5/3})-time deterministic 4-approximation for graphs with polynomially bounded integer weights, and an O~(n^{5/3} polylog(1/epsilon))-time deterministic (4+epsilon)-approximation for graphs with non-negative edge-weights.
To the best of our knowledge, these are the first known subquadratic-time approximation algorithms for computing the girth of weighted graphs.
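For contrast with the subquadratic approximation algorithms above, the following Python sketch computes the girth of a weighted graph exactly: the shortest cycle through an edge (u,v) has weight w(u,v) plus the weight of the shortest u-v path avoiding that edge, so one Dijkstra run per edge suffices. The input format is our own choice, and the running time is far above the O~(n^{5/3}+m) bounds discussed above.

import heapq

def dijkstra(adj, src, skip_edge):
    """Distances from src in the graph with the single undirected edge skip_edge removed."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist.get(x, float("inf")):
            continue
        for y, w in adj[x]:
            if {x, y} == set(skip_edge):
                continue
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(pq, (d + w, y))
    return dist

def exact_girth(n, edges):
    """Weight of a shortest cycle of the simple graph given by (u, v, w) triples."""
    adj = {x: [] for x in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = float("inf")
    for u, v, w in edges:
        best = min(best, w + dijkstra(adj, u, (u, v)).get(v, float("inf")))
    return best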
Cite as
Guillaume Ducoffe. Faster Approximation Algorithms for Computing Shortest Cycles on Weighted Graphs. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 49:1-49:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{ducoffe:LIPIcs.ICALP.2019.49,
author = {Ducoffe, Guillaume},
title = {{Faster Approximation Algorithms for Computing Shortest Cycles on Weighted Graphs}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {49:1--49:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.49},
URN = {urn:nbn:de:0030-drops-106254},
doi = {10.4230/LIPIcs.ICALP.2019.49},
annote = {Keywords: girth, weighted graphs, approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Josep Díaz, Lefteris Kirousis, Sofia Kokonezi, and John Livieratos
Abstract
We call domain any arbitrary subset of a Cartesian power of the set {0,1} when we think of it as reflecting abstract rationality restrictions on vectors of two-valued judgments on a number of issues. In Computational Social Choice Theory, and in particular in the theory of judgment aggregation, a domain is called a possibility domain if it admits a non-dictatorial aggregator, i.e. if for some k there exists a unanimous (idempotent) function F: D^k -> D which is not a projection function. We prove that a domain is a possibility domain if and only if there is a propositional formula of a certain syntactic form, sometimes called an integrity constraint, whose set of satisfying truth assignments, or models, comprise the domain. We call possibility integrity constraints the formulas of the specific syntactic type we define. Given a possibility domain D, we show how to construct a possibility integrity constraint for D efficiently, i.e., in polynomial time in the size of the domain. We also show how to recognize, in time linear in the size of the input formula, whether a given formula is a possibility integrity constraint. Finally, we prove the analogous results for local possibility domains, i.e. domains that admit an aggregator which is not a projection function, even when restricted to any given issue. Our result falls in the realm of classical results that give syntactic characterizations of logical relations that have certain closure properties, like e.g. the result that logical relations component-wise closed under logical AND are precisely the models of Horn formulas. However, our techniques draw from results in judgment aggregation theory as well as from results about propositional formulas and logical relations.
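The definitions above can be checked mechanically for small domains. The following Python sketch (our own illustration; the example domain and the issue-wise majority aggregator are chosen by us, not taken from the paper) verifies that a candidate F: D^k -> D is a unanimous aggregator and that it is not a projection, i.e. that D is a possibility domain.

from itertools import product

def is_aggregator(F, D, k):
    """F maps every k-tuple of domain elements back into D and is unanimous: F(x, ..., x) = x."""
    D = {tuple(x) for x in D}
    closed = all(tuple(F(*rows)) in D for rows in product(D, repeat=k))
    unanimous = all(tuple(F(*((x,) * k))) == x for x in D)
    return closed and unanimous

def is_projection(F, D, k):
    """F is dictatorial if it always returns its d-th argument, for some fixed d."""
    D = {tuple(x) for x in D}
    return any(all(tuple(F(*rows)) == rows[d] for rows in product(D, repeat=k))
               for d in range(k))

maj = lambda *rows: tuple(int(sum(col) >= 2) for col in zip(*rows))  # issue-wise majority, k = 3
D = [(0, 0), (0, 1), (1, 0)]                                         # illustrative domain
print(is_aggregator(maj, D, 3), is_projection(maj, D, 3))            # True False

Here the issue-wise majority of three judgment vectors witnesses that this particular domain is a possibility domain.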
Cite as
Josep Díaz, Lefteris Kirousis, Sofia Kokonezi, and John Livieratos. Algorithmically Efficient Syntactic Characterization of Possibility Domains. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{diaz_et_al:LIPIcs.ICALP.2019.50,
author = {D{\'\i}az, Josep and Kirousis, Lefteris and Kokonezi, Sofia and Livieratos, John},
title = {{Algorithmically Efficient Syntactic Characterization of Possibility Domains}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {50:1--50:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.50},
URN = {urn:nbn:de:0030-drops-106269},
doi = {10.4230/LIPIcs.ICALP.2019.50},
annote = {Keywords: collective decision making, computational social choice, judgment aggregation, logical relations, algorithm complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Julian Dörfler, Christian Ikenmeyer, and Greta Panova
Abstract
Geometric Complexity Theory as initiated by Mulmuley and Sohoni in two papers (SIAM J Comput 2001, 2008) aims to separate algebraic complexity classes via representation theoretic multiplicities in coordinate rings of specific group varieties. We provide the first toy setting in which a separation can be achieved for a family of polynomials via these multiplicities.
Mulmuley and Sohoni’s papers also conjecture that the vanishing behavior of multiplicities would be sufficient to separate complexity classes (so-called occurrence obstructions). The existence of such strong occurrence obstructions was disproven in 2016 in two successive papers, Ikenmeyer-Panova (Adv. Math.) and Bürgisser-Ikenmeyer-Panova (J. AMS). This raises the question whether separating group varieties via representation theoretic multiplicities is stronger than separating them via occurrences. We provide the first finite settings where a separation via multiplicities can be achieved, while the separation via occurrences is provably impossible. These settings are surprisingly simple and natural: We study the variety of products of homogeneous linear forms (the so-called Chow variety) and the variety of polynomials of bounded border Waring rank (i.e. a higher secant variety of the Veronese variety).
As a side result we prove a slight generalization of Hermite’s reciprocity theorem, which proves Foulkes' conjecture for a new infinite family of cases.
Cite as
Julian Dörfler, Christian Ikenmeyer, and Greta Panova. On Geometric Complexity Theory: Multiplicity Obstructions Are Stronger Than Occurrence Obstructions. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{dorfler_et_al:LIPIcs.ICALP.2019.51,
author = {D\"{o}rfler, Julian and Ikenmeyer, Christian and Panova, Greta},
title = {{On Geometric Complexity Theory: Multiplicity Obstructions Are Stronger Than Occurrence Obstructions}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {51:1--51:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.51},
URN = {urn:nbn:de:0030-drops-106276},
doi = {10.4230/LIPIcs.ICALP.2019.51},
annote = {Keywords: Algebraic complexity theory, geometric complexity theory, Waring rank, plethysm coefficients, occurrence obstructions, multiplicity obstructions}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Talya Eden, Dana Ron, and Will Rosenbaum
Abstract
In this paper, we revisit the problem of sampling edges in an unknown graph G = (V, E) from a distribution that is (pointwise) almost uniform over E. We consider the case where there is some a priori upper bound on the arboricity of G. Given query access to a graph G over n vertices and of average degree d and arboricity at most alpha, we design an algorithm that performs O((alpha/d) * (log^3 n)/epsilon) queries in expectation and returns an edge in the graph such that every edge e in E is sampled with probability (1 +/- epsilon)/m. The algorithm performs two types of queries: degree queries and neighbor queries. We show that the upper bound is tight (up to poly-logarithmic factors and the dependence on epsilon), as Omega(alpha/d) queries are necessary for the easier task of sampling edges from any distribution over E that is close to uniform in total variation distance. We also prove that even if G is a tree (i.e., alpha = 1 so that alpha/d = Theta(1)), Omega((log n)/(loglog n)) queries are necessary to sample an edge from any distribution that is pointwise close to uniform, thus establishing that a poly(log n) factor is necessary for constant alpha. Finally, we show how our algorithm can be applied to obtain a new result on approximately counting subgraphs, based on the recent work of Assadi, Kapralov, and Khanna (ITCS, 2019).
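For orientation in the query model used above (degree and neighbour queries), here is a simple rejection-sampling baseline in Python that returns a uniformly random edge given an upper bound d_max on the maximum degree; it is not the arboricity-based algorithm of the paper, and its expected number of queries degrades badly on skewed degree sequences, which is the regime the paper addresses.

import random

def sample_edge(n, degree, neighbor, d_max, rng=random):
    """degree(v) and neighbor(v, j) are the query oracles.  Each attempt picks a uniform
    vertex v and an index j < d_max and accepts if j < degree(v); every directed edge is
    produced with probability 1/(n * d_max) per attempt, so the returned edge is uniform."""
    while True:
        v = rng.randrange(n)
        j = rng.randrange(d_max)
        if j < degree(v):
            return (v, neighbor(v, j))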
Cite as
Talya Eden, Dana Ron, and Will Rosenbaum. The Arboricity Captures the Complexity of Sampling Edges. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{eden_et_al:LIPIcs.ICALP.2019.52,
author = {Eden, Talya and Ron, Dana and Rosenbaum, Will},
title = {{The Arboricity Captures the Complexity of Sampling Edges}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {52:1--52:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.52},
URN = {urn:nbn:de:0030-drops-106287},
doi = {10.4230/LIPIcs.ICALP.2019.52},
annote = {Keywords: sampling, graph algorithms, arboricity, sublinear-time algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alina Ene and Huy L. Nguyen
Abstract
We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, 1 - 1/e - epsilon approximation, using (1/epsilon)^{O(1/epsilon^4)} n log^2{n} function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints but it leads to Omega(n^2) running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
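For context, the classical density-greedy baseline for this problem fits in a few lines of Python (the value oracle f, the cost function and the budget are the assumed interface of the example); variants of this rule are known to give only constant-factor guarantees and use up to O(n^2) value-oracle calls, far from the nearly-optimal guarantee and the n log^2 n evaluations discussed above.

def greedy_knapsack_submodular(f, items, cost, budget):
    """Density greedy for monotone submodular f under a knapsack constraint:
    repeatedly add the affordable item with the best marginal-value-per-cost ratio,
    then return the better of the greedy set and the best single affordable item."""
    S, spent, remaining = set(), 0.0, set(items)
    while remaining:
        best, best_ratio = None, 0.0
        for x in remaining:
            if spent + cost(x) <= budget:
                ratio = (f(S | {x}) - f(S)) / cost(x)
                if ratio > best_ratio:
                    best, best_ratio = x, ratio
        if best is None:
            break
        S.add(best)
        spent += cost(best)
        remaining.remove(best)
    affordable = [x for x in items if cost(x) <= budget]
    best_single = max(affordable, key=lambda x: f({x}), default=None)
    return {best_single} if best_single is not None and f({best_single}) > f(S) else S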
Cite as
Alina Ene and Huy L. Nguyen. A Nearly-Linear Time Algorithm for Submodular Maximization with a Knapsack Constraint. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 53:1-53:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{ene_et_al:LIPIcs.ICALP.2019.53,
author = {Ene, Alina and Nguyen, Huy L.},
title = {{A Nearly-Linear Time Algorithm for Submodular Maximization with a Knapsack Constraint}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {53:1--53:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.53},
URN = {urn:nbn:de:0030-drops-106290},
doi = {10.4230/LIPIcs.ICALP.2019.53},
annote = {Keywords: submodular maximization, knapsack constraint, fast algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alina Ene and Huy L. Nguyen
Abstract
We consider fast algorithms for monotone submodular maximization subject to a matroid constraint. We assume that the matroid is given as input in an explicit form, and the goal is to obtain the best possible running times for important matroids. We develop a new algorithm for a general matroid constraint with a 1 - 1/e - epsilon approximation that achieves a fast running time provided we have a fast data structure for maintaining an approximately maximum weight base in the matroid through a sequence of decrease weight operations. We construct such data structures for graphic matroids and partition matroids, and we obtain the first algorithms for these classes of matroids that achieve a nearly-optimal, 1 - 1/e - epsilon approximation, using a nearly-linear number of function evaluations and arithmetic operations.
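As a small illustration of the object the data structure above maintains, the following Python sketch computes a maximum-weight base of a partition matroid outright (from each part, keep the heaviest elements up to the part's capacity); the interface is ours, and the paper's contribution is maintaining such a base approximately under a sequence of decrease-weight operations rather than recomputing it.

import heapq

def max_weight_base_partition(parts, weight):
    """parts: iterable of (elements, capacity) pairs defining a partition matroid;
    weight: element -> weight.  A maximum-weight base takes, from each part,
    the `capacity` heaviest elements."""
    base = []
    for elements, capacity in parts:
        base.extend(heapq.nlargest(capacity, elements, key=weight))
    return base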
Cite as
Alina Ene and Huy L. Nguyen. Towards Nearly-Linear Time Algorithms for Submodular Maximization with a Matroid Constraint. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 54:1-54:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{ene_et_al:LIPIcs.ICALP.2019.54,
author = {Ene, Alina and Nguyen, Huy L.},
title = {{Towards Nearly-Linear Time Algorithms for Submodular Maximization with a Matroid Constraint}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {54:1--54:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.54},
URN = {urn:nbn:de:0030-drops-106303},
doi = {10.4230/LIPIcs.ICALP.2019.54},
annote = {Keywords: submodular maximization, matroid constraints, fast running times}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Massimo Equi, Roberto Grossi, Veli Mäkinen, and Alexandru I. Tomescu
Abstract
Exact string matching in labeled graphs is the problem of searching for paths of a graph G=(V,E) such that the concatenation of their node labels is equal to the given pattern string P[1..m]. This basic problem can be found at the heart of more complex operations on variation graphs in computational biology, of query operations in graph databases, and of analysis operations in heterogeneous networks.
We prove a conditional lower bound stating that, for any constant epsilon>0, an O(|E|^{1 - epsilon} m)-time, or an O(|E| m^{1 - epsilon})-time algorithm for exact string matching in graphs, with node labels and patterns drawn from a binary alphabet, cannot be achieved unless the Strong Exponential Time Hypothesis (SETH) is false. This holds even if restricted to undirected graphs with maximum node degree two, i.e. to zig-zag matching in bidirectional strings, or to deterministic directed acyclic graphs whose nodes have maximum sum of indegree and outdegree three. These restricted cases make the lower bound stricter than what can be directly derived from related bounds on regular expression matching (Backurs and Indyk, FOCS'16). In fact, our bounds are tight in the sense that lowering the degree or the alphabet size yields linear-time solvable problems.
An interesting corollary is that exact and approximate matching are equally hard (quadratic time) in graphs under SETH. In comparison, the same problems restricted to strings have linear-time vs quadratic-time solutions, respectively (approximate pattern matching having also a matching SETH lower bound (Backurs and Indyk, STOC'15)).
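The quadratic upper bound that the lower bound above matches is straightforward to state in code: the following Python sketch decides exact matching by a BFS over (node, matched-prefix-length) states and runs in O(|E| * m) time; the input encoding (a dict of node labels and a list of directed edges) is our own choice for the example.

from collections import deque

def occurs(labels, edges, pattern):
    """True iff some path spells `pattern`.  State (v, i) means a path ending at node v
    has matched pattern[:i]; each edge is relaxed at most m times, giving O(|E| * m)."""
    m = len(pattern)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    start = [(v, 1) for v in labels if labels[v] == pattern[0]]
    if m == 1 and start:
        return True
    seen, queue = set(start), deque(start)
    while queue:
        v, i = queue.popleft()
        for w in adj.get(v, []):
            if labels[w] == pattern[i]:
                if i + 1 == m:
                    return True
                if (w, i + 1) not in seen:
                    seen.add((w, i + 1))
                    queue.append((w, i + 1))
    return False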
Cite as
Massimo Equi, Roberto Grossi, Veli Mäkinen, and Alexandru I. Tomescu. On the Complexity of String Matching for Graphs. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 55:1-55:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{equi_et_al:LIPIcs.ICALP.2019.55,
author = {Equi, Massimo and Grossi, Roberto and M\"{a}kinen, Veli and Tomescu, Alexandru I.},
title = {{On the Complexity of String Matching for Graphs}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {55:1--55:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.55},
URN = {urn:nbn:de:0030-drops-106314},
doi = {10.4230/LIPIcs.ICALP.2019.55},
annote = {Keywords: exact pattern matching, graph query, graph search, labeled graphs, string matching, string search, strong exponential time hypothesis, heterogeneous networks, variation graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
John Fearnley, Spencer Gordon, Ruta Mehta, and Rahul Savani
Abstract
The complexity class CLS was proposed by Daskalakis and Papadimitriou in 2011 to understand the complexity of important NP search problems that admit both path-following and potential-optimizing algorithms. Here we identify a subclass of CLS - called UniqueEOPL - that applies a more specific combinatorial principle that guarantees unique solutions. We show that UniqueEOPL contains several important problems such as the P-matrix Linear Complementarity Problem, finding a Fixed Point of Contraction Maps, and solving Unique Sink Orientations (USOs). UniqueEOPL seems to be a proper subclass of CLS and looks more likely to be the right class for the problems of interest. We identify a problem - closely related to solving contraction maps and USOs - that is complete for UniqueEOPL. Our results also give the fastest randomised algorithm for P-matrix LCP.
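One of the member problems, finding a fixed point of a contraction map, has a folklore algorithmic side that is easy to sketch in Python (Banach iteration; the stopping rule below is our own choice); the interest of the paper is in pinning down the problem's complexity-class membership, not in this procedure.

def contraction_fixed_point(f, x0, c, eps=1e-9, max_iter=10**6):
    """For a contraction f with |f(x) - f(y)| <= c * |x - y|, c < 1, iterating f converges
    to the unique fixed point; stop once the step size certifies an error below eps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) <= eps * (1 - c):
            return fx
        x = fx
    return x

print(contraction_fixed_point(lambda x: 0.5 * x + 1.0, 0.0, 0.5))  # converges to the fixed point 2.0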
Cite as
John Fearnley, Spencer Gordon, Ruta Mehta, and Rahul Savani. Unique End of Potential Line. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 56:1-56:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{fearnley_et_al:LIPIcs.ICALP.2019.56,
author = {Fearnley, John and Gordon, Spencer and Mehta, Ruta and Savani, Rahul},
title = {{Unique End of Potential Line}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {56:1--56:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.56},
URN = {urn:nbn:de:0030-drops-106327},
doi = {10.4230/LIPIcs.ICALP.2019.56},
annote = {Keywords: P-matrix linear complementarity problem, unique sink orientation, contraction map, TFNP, total search problems, continuous local search}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Miron Ficak, Marcin Kozik, Miroslav Olšák, and Szymon Stankiewicz
Abstract
In one of the most actively studied versions of the Constraint Satisfaction Problem, a CSP is defined by a relational structure called a template. In the decision version of the problem the goal is to determine whether a structure given on input admits a homomorphism into this template. Two recent independent results of Bulatov [FOCS'17] and Zhuk [FOCS'17] state that each finite template defines a CSP which is either tractable or NP-complete.
In a recent paper Brakensiek and Guruswami [SODA'18] proposed an extension of the CSP framework. This extension, called Promise Constraint Satisfaction Problem, includes many naturally occurring computational questions, e.g. approximate coloring, that cannot be cast as CSPs. A PCSP is a combination of two CSPs defined by two similar templates; the computational question is to distinguish a YES instance of the first one from a NO instance of the second.
The computational complexity of many PCSPs remains unknown. Even the case of Boolean templates (solved for CSP by Schaefer [STOC'78]) remains wide open. The main result of Brakensiek and Guruswami [SODA'18] shows that Boolean PCSPs exhibit a dichotomy (PTIME vs. NPC) when "all the clauses are symmetric and allow for negation of variables". In this paper we remove the "allow for negation of variables" assumption from the theorem. The "symmetric" assumption means that changing the order of variables in a constraint does not change its satisfiability. The "negation of variables" means that both of the templates share a relation which can be used to effectively negate Boolean variables.
The main result of this paper establishes a dichotomy for all symmetric Boolean templates. The tractability cases of our theorem and of the theorem of Brakensiek and Guruswami are almost identical. The main difference, and the main contribution of this work, is the new reason for hardness and the reasoning proving the split.
Cite as
Miron Ficak, Marcin Kozik, Miroslav Olšák, and Szymon Stankiewicz. Dichotomy for Symmetric Boolean PCSPs. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 57:1-57:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{ficak_et_al:LIPIcs.ICALP.2019.57,
author = {Ficak, Miron and Kozik, Marcin and Ol\v{s}\'{a}k, Miroslav and Stankiewicz, Szymon},
title = {{Dichotomy for Symmetric Boolean PCSPs}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {57:1--57:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.57},
URN = {urn:nbn:de:0030-drops-106339},
doi = {10.4230/LIPIcs.ICALP.2019.57},
annote = {Keywords: promise constraint satisfaction problem, PCSP, algebraic approach}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Yuval Filmus, Lianna Hambardzumyan, Hamed Hatami, Pooya Hatami, and David Zuckerman
Abstract
The seminal result of Kahn, Kalai and Linial shows that a coalition of O(n/(log n)) players can bias the outcome of any Boolean function {0,1}^n -> {0,1} with respect to the uniform measure. We extend their result to arbitrary product measures on {0,1}^n, by combining their argument with a completely different argument that handles very biased input bits.
We view this result as a step towards proving a conjecture of Friedgut, which states that Boolean functions on the continuous cube [0,1]^n (or, equivalently, on {1,...,n}^n) can be biased using coalitions of o(n) players. This is the first step taken in this direction since Friedgut proposed the conjecture in 2004.
Russell, Saks and Zuckerman extended the result of Kahn, Kalai and Linial to multi-round protocols, showing that when the number of rounds is o(log^* n), a coalition of o(n) players can bias the outcome with respect to the uniform measure. We extend this result as well to arbitrary product measures on {0,1}^n.
The argument of Russell et al. relies on the fact that a coalition of o(n) players can boost the expectation of any Boolean function from epsilon to 1-epsilon with respect to the uniform measure. This fails for general product distributions, as the example of the AND function with respect to mu_{1-1/n} shows. Instead, we use a novel boosting argument alongside a generalization of our first result to arbitrary finite ranges.
Cite as
Yuval Filmus, Lianna Hambardzumyan, Hamed Hatami, Pooya Hatami, and David Zuckerman. Biasing Boolean Functions and Collective Coin-Flipping Protocols over Arbitrary Product Distributions. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 58:1-58:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{filmus_et_al:LIPIcs.ICALP.2019.58,
author = {Filmus, Yuval and Hambardzumyan, Lianna and Hatami, Hamed and Hatami, Pooya and Zuckerman, David},
title = {{Biasing Boolean Functions and Collective Coin-Flipping Protocols over Arbitrary Product Distributions}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {58:1--58:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.58},
URN = {urn:nbn:de:0030-drops-106340},
doi = {10.4230/LIPIcs.ICALP.2019.58},
annote = {Keywords: Boolean function analysis, coin flipping}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi
Abstract
Perturbed graphic matroids are binary matroids that can be obtained from a graphic matroid by adding noise of small rank. More precisely, an r-rank perturbed graphic matroid M is a binary matroid that can be represented in the form I + P, where I is the incidence matrix of some graph and P is a binary matrix of rank at most r. Such matroids naturally appear in a number of theoretical and applied settings. The main motivation behind our work is an attempt to understand which parameterized algorithms for various problems on graphs could be lifted to perturbed graphic matroids.
We study the parameterized complexity of a natural generalization (for matroids) of the following fundamental problems on graphs: Steiner Tree and Multiway Cut. In this generalization, called the Space Cover problem, we are given a binary matroid M with a ground set E, a set of terminals T subseteq E, and a non-negative integer k. The task is to decide whether T can be spanned by a subset of E \ T of size at most k.
We prove that on graphic matroid perturbations, for every fixed r, Space Cover is fixed-parameter tractable parameterized by k. On the other hand, the problem becomes W[1]-hard when parameterized by r+k+|T| and it is NP-complete for r <= 2 and |T|<= 2.
On cographic matroids, which are the duals of graphic matroids, Space Cover generalizes another fundamental and well-studied problem, namely Multiway Cut. We show that on the duals of perturbed graphic matroids the Space Cover problem is fixed-parameter tractable parameterized by r+k.
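The definition of an r-rank perturbed graphic matroid can be made concrete with a short Python/NumPy sketch that builds its binary representation: the vertex-edge incidence matrix of a graph XORed, over GF(2), with a binary perturbation matrix P of rank at most r. The function is purely illustrative (it does not check the rank of P) and is not an algorithm from the paper.

import numpy as np

def perturbed_graphic_representation(n, edges, P):
    """n vertices, edges as (u, v) pairs, P a binary n x |edges| matrix of rank <= r.
    Returns the GF(2) matrix I + P, where I is the vertex-edge incidence matrix."""
    I = np.zeros((n, len(edges)), dtype=np.uint8)
    for j, (u, v) in enumerate(edges):
        I[u, j] = I[v, j] = 1
    return I ^ np.asarray(P, dtype=np.uint8)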
Cite as
Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Covering Vectors by Spaces in Perturbed Graphic Matroids and Their Duals. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 59:1-59:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{fomin_et_al:LIPIcs.ICALP.2019.59,
author = {Fomin, Fedor V. and Golovach, Petr A. and Lokshtanov, Daniel and Saurabh, Saket and Zehavi, Meirav},
title = {{Covering Vectors by Spaces in Perturbed Graphic Matroids and Their Duals}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {59:1--59:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.59},
URN = {urn:nbn:de:0030-drops-106351},
doi = {10.4230/LIPIcs.ICALP.2019.59},
annote = {Keywords: Binary matroids, perturbed graphic matroids, spanning set, parameterized complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi
Abstract
Bidimensionality is the most common technique to design subexponential-time parameterized algorithms on special classes of graphs, particularly planar graphs. The core engine behind it is a combinatorial lemma of Robertson, Seymour and Thomas that states that every planar graph either has a sqrt{k} x sqrt{k}-grid as a minor, or its treewidth is O(sqrt{k}). However, bidimensionality theory cannot be extended directly to several well-known classes of geometric graphs like unit disk or map graphs. This is mainly due to the presence of large cliques in these classes of graphs. Nevertheless, a relaxation of this lemma has been proven useful for unit disk graphs. Inspired by this, we prove a new decomposition lemma for map graphs, the intersection graphs of finitely many simply-connected and interior-disjoint regions of the Euclidean plane. Informally, our lemma states the following. For any map graph G, there exists a collection (U_1,...,U_t) of cliques of G with the following property: G either contains a sqrt{k} x sqrt{k}-grid as a minor, or it admits a tree decomposition where every bag is the union of O(sqrt{k}) cliques in the above collection.
The new lemma appears to be a handy tool in the design of subexponential parameterized algorithms on map graphs. We demonstrate its usability by designing algorithms on map graphs with running time 2^{O(sqrt{k} log k)} * n^{O(1)} for Connected Planar F-Deletion (which encompasses problems such as Feedback Vertex Set and Vertex Cover). Obtaining subexponential algorithms for Longest Cycle/Path and Cycle Packing is more challenging. We have to construct tree decompositions with more powerful properties and to prove sublinear bounds on the number of ways an optimum solution could "cross" bags in these decompositions.
For Longest Cycle/Path, these are the first subexponential-time parameterized algorithms on map graphs. For Feedback Vertex Set and Cycle Packing, we improve upon the known 2^{O(k^{0.75} log k)} * n^{O(1)}-time algorithms on map graphs.
Cite as
Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi. Decomposition of Map Graphs with Applications. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 60:1-60:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{fomin_et_al:LIPIcs.ICALP.2019.60,
author = {Fomin, Fedor V. and Lokshtanov, Daniel and Panolan, Fahad and Saurabh, Saket and Zehavi, Meirav},
title = {{Decomposition of Map Graphs with Applications}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {60:1--60:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.60},
URN = {urn:nbn:de:0030-drops-106366},
doi = {10.4230/LIPIcs.ICALP.2019.60},
annote = {Keywords: Longest Cycle, Cycle Packing, Feedback Vertex Set, Map Graphs, FPT}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Tobias Friedrich and Ralf Rothenberger
Abstract
Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. Its worst-case hardness lies at the core of computational complexity theory, for example in the form of NP-hardness and the (Strong) Exponential Time Hypothesis. In practice however, SAT instances can often be solved efficiently. This contradicting behavior has spawned interest in the average-case analysis of SAT and has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures.
Despite a long line of research and substantial progress, most theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a non-uniform distribution of the variables, which can result in distributions closer to industrial SAT instances.
We study satisfiability thresholds of non-uniform random 2-SAT with n variables and m clauses and with an arbitrary probability distribution (p_i)_{i in [n]} with p_1 >= p_2 >= ... >= p_n > 0 over the n variables. We show for p_1^2 = Theta(sum_{i=1}^n p_i^2) that the asymptotic satisfiability threshold is at m = Theta((1 - sum_{i=1}^n p_i^2)/(p_1 * (sum_{i=2}^n p_i^2)^{1/2})) and that it is coarse. For p_1^2 = o(sum_{i=1}^n p_i^2) we show that there is a sharp satisfiability threshold at m = (sum_{i=1}^n p_i^2)^{-1}. This result generalizes the seminal works by Chvatal and Reed [FOCS 1992] and by Goerdt [JCSS 1996].
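The threshold behaviour described above can be explored empirically. The following sketch is not from the paper; the heavy-tailed distribution, the brute-force satisfiability check, and all function names are illustrative assumptions. It samples a non-uniform random 2-SAT instance from a given distribution (p_i) and tests instances around the predicted scale m = (sum_{i=1}^n p_i^2)^{-1} for small n.
```python
# Illustrative only: sample a non-uniform random 2-SAT instance and test
# satisfiability by brute force (the distribution and parameters are ours).
import itertools
import random

def sample_2sat(n, m, p, rng):
    """m clauses; a clause is a pair of literals, a literal is (variable, sign)."""
    clauses = []
    for _ in range(m):
        v1 = rng.choices(range(n), weights=p, k=1)[0]
        v2 = v1
        while v2 == v1:                      # two distinct variables per clause
            v2 = rng.choices(range(n), weights=p, k=1)[0]
        clauses.append(((v1, rng.random() < 0.5), (v2, rng.random() < 0.5)))
    return clauses

def satisfiable(n, clauses):
    """Exponential brute force; fine for the small n used here."""
    return any(all(a[v1] == s1 or a[v2] == s2 for (v1, s1), (v2, s2) in clauses)
               for a in itertools.product([False, True], repeat=n))

rng = random.Random(0)
n = 12
p = [1.0 / (i + 1) for i in range(n)]
p = [x / sum(p) for x in p]                  # a heavy-tailed variable distribution
m_star = 1.0 / sum(x * x for x in p)         # threshold scale from the theorem
for c in (0.5, 1.0, 2.0):
    m = int(c * m_star)
    print(f"m = {m}: satisfiable = {satisfiable(n, sample_2sat(n, m, p, rng))}")
```
The brute-force check only keeps the sketch self-contained; for larger n one would use the linear-time implication-graph algorithm for 2-SAT.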
Cite as
Tobias Friedrich and Ralf Rothenberger. The Satisfiability Threshold for Non-Uniform Random 2-SAT. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{friedrich_et_al:LIPIcs.ICALP.2019.61,
author = {Friedrich, Tobias and Rothenberger, Ralf},
title = {{The Satisfiability Threshold for Non-Uniform Random 2-SAT}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {61:1--61:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.61},
URN = {urn:nbn:de:0030-drops-106372},
doi = {10.4230/LIPIcs.ICALP.2019.61},
annote = {Keywords: random SAT, satisfiability threshold, sharpness, non-uniform distribution, 2-SAT}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ankit Garg, Nikhil Gupta, Neeraj Kayal, and Chandan Saha
Abstract
The determinant polynomial Det_n(x) of degree n is the determinant of an n x n matrix of formal variables. A polynomial f is equivalent to Det_n(x) over a field F if there exists an A in GL(n^2,F) such that f = Det_n(A * x). Determinant equivalence test over F is the following algorithmic task: Given black-box access to an f in F[x], check if f is equivalent to Det_n(x) over F, and if so then output a transformation matrix A in GL(n^2,F). In (Kayal, STOC 2012), a randomized polynomial-time determinant equivalence test was given over F = C. However, to our knowledge, the complexity of the problem over finite fields and over Q was not well understood.
In this work, we give a randomized poly(n, log |F|) time determinant equivalence test over finite fields F (under mild restrictions on the characteristic and size of F). Over Q, we give an efficient randomized reduction from factoring square-free integers to determinant equivalence test for quadratic forms (i.e., the n=2 case), assuming GRH. This shows that designing a polynomial-time determinant equivalence test over Q is a challenging task. Nevertheless, we show that determinant equivalence test over Q is decidable: For bounded n, there is a randomized polynomial-time determinant equivalence test over Q with access to an oracle for integer factoring. Moreover, for any n, there is a randomized polynomial-time algorithm that takes as input black-box access to an f in Q[x] and, if f is equivalent to Det_n over Q, returns an A in GL(n^2,L) such that f = Det_n(A * x), where L is an extension field of Q and [L : Q] <= n.
The above algorithms over finite fields and over Q are obtained by giving a polynomial-time randomized reduction from determinant equivalence test to another problem, namely the full matrix algebra isomorphism problem. We also show a reduction in the converse direction which is efficient if n is bounded. These reductions, which hold over any F (under mild restrictions on the characteristic and size of F), establish a close connection between the complexity of the two problems. This then leads to our results via applications of known results on the full algebra isomorphism problem over finite fields (Rónyai, STOC 1987 and Rónyai, J. Symb. Comput. 1990) and over Q (Ivanyos et al., Journal of Algebra 2012 and Babai et al., Mathematics of Computation 1990).
Cite as
Ankit Garg, Nikhil Gupta, Neeraj Kayal, and Chandan Saha. Determinant Equivalence Test over Finite Fields and over Q. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 62:1-62:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{garg_et_al:LIPIcs.ICALP.2019.62,
author = {Garg, Ankit and Gupta, Nikhil and Kayal, Neeraj and Saha, Chandan},
title = {{Determinant Equivalence Test over Finite Fields and over Q}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {62:1--62:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.62},
URN = {urn:nbn:de:0030-drops-106382},
doi = {10.4230/LIPIcs.ICALP.2019.62},
annote = {Keywords: Determinant equivalence test, full matrix algebra isomorphism, Lie algebra}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Naveen Garg, Anupam Gupta, Amit Kumar, and Sahil Singla
Abstract
We consider the online problem of scheduling jobs on identical machines, where jobs have precedence constraints. We are interested in the demanding setting where the job sizes are not known up-front, but are revealed only upon completion (the non-clairvoyant setting). Such precedence-constrained scheduling problems routinely arise in map-reduce and large-scale optimization. For minimizing the total weighted completion time, we give a constant-competitive algorithm. For total weighted flow-time, we give an O(1/epsilon^2)-competitive algorithm under (1+epsilon)-speed augmentation and a natural "no-surprises" assumption on release dates of jobs (which we show is necessary in this context).
Our algorithm proceeds by assigning virtual rates to all waiting jobs, including the ones which are dependent on other uncompleted jobs. We then use these virtual rates to decide on the actual rates of minimal jobs (i.e., jobs which do not have dependencies and hence are eligible to run). Interestingly, the virtual rates are obtained by allocating time in a fair manner, using an Eisenberg-Gale-type convex program (which we can solve optimally using a primal-dual scheme). The optimality condition of this convex program allows us to give dual-fitting proofs more easily, without having to guess and hand-craft the duals. This idea of using fair virtual rates may have broader applicability in scheduling problems.
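For intuition about the non-clairvoyant model only (this is not the authors' algorithm, which computes virtual rates via an Eisenberg-Gale-type convex program), here is a minimal simulation in which the minimal jobs share m identical machines equally; the processor-sharing rule, the toy instance, and all names are our own assumptions.
```python
# Minimal sketch of the non-clairvoyant model: job sizes are only used to
# advance the simulation, never to make scheduling decisions.
def weighted_completion_time(sizes, weights, preds, m):
    n = len(sizes)
    remaining, done = list(sizes), [False] * n
    t, objective = 0.0, 0.0
    while not all(done):
        # minimal jobs: not finished and all predecessors finished
        frontier = [j for j in range(n)
                    if not done[j] and all(done[p] for p in preds[j])]
        rate = min(1.0, m / len(frontier))        # equal processor sharing
        dt = min(remaining[j] / rate for j in frontier)
        t += dt
        for j in frontier:
            remaining[j] -= rate * dt
            if remaining[j] <= 1e-12:
                done[j] = True
                objective += weights[j] * t
    return objective

# A chain 0 -> 1 plus an independent job 2, on a single machine.
print(weighted_completion_time(sizes=[2.0, 1.0, 3.0], weights=[1.0, 2.0, 1.0],
                               preds=[[], [0], []], m=1))
```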
Cite as
Naveen Garg, Anupam Gupta, Amit Kumar, and Sahil Singla. Non-Clairvoyant Precedence Constrained Scheduling. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{garg_et_al:LIPIcs.ICALP.2019.63,
author = {Garg, Naveen and Gupta, Anupam and Kumar, Amit and Singla, Sahil},
title = {{Non-Clairvoyant Precedence Constrained Scheduling}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {63:1--63:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.63},
URN = {urn:nbn:de:0030-drops-106394},
doi = {10.4230/LIPIcs.ICALP.2019.63},
annote = {Keywords: Online algorithms, Scheduling, Primal-Dual analysis, Nash welfare}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Dmitry Gavinsky, Troy Lee, Miklos Santha, and Swagato Sanyal
Abstract
For any relation f subseteq {0,1}^n x S and any partial Boolean function g:{0,1}^m -> {0,1,*}, we show that R_{1/3}(f o g^n) in Omega(R_{4/9}(f) * sqrt{R_{1/3}(g)}), where R_epsilon(*) stands for the bounded-error randomized query complexity with error at most epsilon, and f o g^n subseteq ({0,1}^m)^n x S denotes the composition of f with n instances of g.
The new composition theorem is optimal, at least for the general case of relational problems: A relation f_0 and a partial Boolean function g_0 are constructed, such that R_{4/9}(f_0) in Theta(sqrt{n}), R_{1/3}(g_0) in Theta(n) and R_{1/3}(f_0 o g_0^n) in Theta(n).
The theorem is proved via introducing a new complexity measure, max-conflict complexity, denoted by bar{chi}(*). Its investigation shows that bar{chi}(g) in Omega(sqrt{R_{1/3}(g)}) for any partial Boolean function g and R_{1/3}(f o g^n) in Omega(R_{4/9}(f) * bar{chi}(g)) for any relation f, which readily implies the composition statement. It is further shown that bar{chi}(g) is always at least as large as the sabotage complexity of g.
Cite as
Dmitry Gavinsky, Troy Lee, Miklos Santha, and Swagato Sanyal. A Composition Theorem for Randomized Query Complexity via Max-Conflict Complexity. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 64:1-64:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{gavinsky_et_al:LIPIcs.ICALP.2019.64,
author = {Gavinsky, Dmitry and Lee, Troy and Santha, Miklos and Sanyal, Swagato},
title = {{A Composition Theorem for Randomized Query Complexity via Max-Conflict Complexity}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {64:1--64:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.64},
URN = {urn:nbn:de:0030-drops-106407},
doi = {10.4230/LIPIcs.ICALP.2019.64},
annote = {Keywords: query complexity, lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Paul W. Goldberg and Alexandros Hollender
Abstract
The Hairy Ball Theorem states that every continuous tangent vector field on an even-dimensional sphere must have a zero. We prove that the associated computational problem of computing an approximate zero is PPAD-complete. We also give a FIXP-hardness result for the general exact computation problem.
In order to show that this problem lies in PPAD, we provide new results on multiple-source variants of End-of-Line, the canonical PPAD-complete problem. In particular, finding an approximate zero of a Hairy Ball vector field on an even-dimensional sphere reduces to a 2-source End-of-Line problem. If the domain is changed to be the torus of genus g >= 2 instead (where the Hairy Ball Theorem also holds), then the problem reduces to a 2(g-1)-source End-of-Line problem.
These multiple-source End-of-Line results are of independent interest and provide new tools for showing membership in PPAD. In particular, we use them to provide the first full proof of PPAD-completeness for the Imbalance problem defined by Beame et al. in 1998.
Cite as
Paul W. Goldberg and Alexandros Hollender. The Hairy Ball Problem is PPAD-Complete. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 65:1-65:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{goldberg_et_al:LIPIcs.ICALP.2019.65,
author = {Goldberg, Paul W. and Hollender, Alexandros},
title = {{The Hairy Ball Problem is PPAD-Complete}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {65:1--65:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.65},
URN = {urn:nbn:de:0030-drops-106416},
doi = {10.4230/LIPIcs.ICALP.2019.65},
annote = {Keywords: Computational Complexity, TFNP, PPAD, End-of-Line}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alexander Golovnev, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, and Avishay Tal
Abstract
Minimum Circuit Size Problem (MCSP) asks to decide if a given truth table of an n-variate boolean function has circuit complexity less than a given parameter s. We prove that MCSP is hard for constant-depth circuits with mod p gates, for any prime p >= 2 (the circuit class AC^0[p]). Namely, we show that MCSP requires d-depth AC^0[p] circuits of size at least exp(N^{0.49/d}), where N=2^n is the size of an input truth table of an n-variate boolean function. Our circuit lower bound proof shows that MCSP can solve the coin problem: distinguish uniformly random N-bit strings from those generated using independent samples from a biased random coin which is 1 with probability 1/2+N^{-0.49}, and 0 otherwise. Solving the coin problem with such parameters is known to require exponentially large AC^0[p] circuits. Moreover, this also implies that MAJORITY is computable by a non-uniform AC^0 circuit of polynomial size that also has MCSP-oracle gates. The latter has a few other consequences for the complexity of MCSP, e.g., we get that any boolean function in NC^1 (i.e., computable by a polynomial-size formula) can also be computed by a non-uniform polynomial-size AC^0 circuit with MCSP-oracle gates.
Cite as
Alexander Golovnev, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, and Avishay Tal. AC^0[p] Lower Bounds Against MCSP via the Coin Problem. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 66:1-66:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{golovnev_et_al:LIPIcs.ICALP.2019.66,
author = {Golovnev, Alexander and Ilango, Rahul and Impagliazzo, Russell and Kabanets, Valentine and Kolokolova, Antonina and Tal, Avishay},
title = {{AC^0\lbrackp\rbrack Lower Bounds Against MCSP via the Coin Problem}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {66:1--66:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.66},
URN = {urn:nbn:de:0030-drops-106422},
doi = {10.4230/LIPIcs.ICALP.2019.66},
annote = {Keywords: Minimum Circuit Size Problem (MCSP), circuit lower bounds, AC0\lbrackp\rbrack, coin problem, hybrid argument, MKTP, biased random boolean functions}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Anupam Gupta, Guru Guruganesh, Binghui Peng, and David Wajc
Abstract
We study the minimum-cost metric perfect matching problem under online i.i.d. arrivals. We are given a fixed metric with a server at each of the points, and then requests arrive online, each drawn independently from a known probability distribution over the points. Each request has to be matched to a free server, with cost equal to the distance. The goal is to minimize the expected total cost of the matching.
Such stochastic arrival models have been widely studied for the maximization variants of the online matching problem; however, the only known result for the minimization problem is a tight O(log n)-competitiveness for the random-order arrival model. This is in contrast with the adversarial model, where an optimal competitive ratio of O(log n) has long been conjectured and remains a tantalizing open question.
In this paper, we show that the i.i.d. model admits substantially better algorithms: our main result is an O((log log log n)^2)-competitive algorithm in this model, implying a strict separation between the i.i.d. model and the adversarial and random-order models. Along the way we give a 9-competitive algorithm for the line and tree metrics - the first O(1)-competitive algorithm for any non-trivial arrival model for these much-studied metrics.
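As a point of comparison for the model (and not the algorithm of the paper), the following sketch implements the naive greedy rule on the line metric: each i.i.d. request is matched to the nearest free server. All names, the metric, and the instance are illustrative assumptions.
```python
# Naive greedy baseline for online metric matching under i.i.d. arrivals.
import random

def greedy_online_matching(servers, points, probs, rng):
    """Match each arriving request to its nearest free server (line metric)."""
    free, total_cost = list(servers), 0.0
    for _ in range(len(servers)):                # one request per server
        r = rng.choices(points, weights=probs, k=1)[0]
        s = min(free, key=lambda loc: abs(loc - r))
        free.remove(s)
        total_cost += abs(s - r)
    return total_cost

rng = random.Random(1)
servers = list(range(8))                         # servers at 0, 1, ..., 7
uniform = [1 / len(servers)] * len(servers)      # requests drawn uniformly from the same points
print(greedy_online_matching(servers, servers, uniform, rng))
```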
Cite as
Anupam Gupta, Guru Guruganesh, Binghui Peng, and David Wajc. Stochastic Online Metric Matching. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{gupta_et_al:LIPIcs.ICALP.2019.67,
author = {Gupta, Anupam and Guruganesh, Guru and Peng, Binghui and Wajc, David},
title = {{Stochastic Online Metric Matching}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {67:1--67:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.67},
URN = {urn:nbn:de:0030-drops-106430},
doi = {10.4230/LIPIcs.ICALP.2019.67},
annote = {Keywords: stochastic, online, online matching, metric matching}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Venkatesan Guruswami, Lingfei Jin, and Chaoping Xing
Abstract
Local Reconstruction Codes (LRCs) allow for recovery from a small number of erasures in a local manner based on just a few other codeword symbols. They have emerged as the codes of choice for large scale distributed storage systems due to the very efficient repair of failed storage nodes in the typical scenario of a single or few nodes failing, while also offering fault tolerance against worst-case scenarios with more erasures. A maximally recoverable (MR) LRC offers the best possible blend of such local and global fault tolerance, guaranteeing recovery from all erasure patterns which are information-theoretically correctable given the presence of local recovery groups. In an (n,r,h,a)-LRC, the n codeword symbols are partitioned into r disjoint groups, each of which includes a local parity checks capable of locally correcting a erasures. The codeword symbols further obey h heavy (global) parity checks. Such a code is maximally recoverable if it can correct all patterns of a erasures per local group plus up to h additional erasures anywhere in the codeword. This property amounts to linear independence of all such subsets of columns of the parity check matrix.
MR LRCs have received much attention recently, with many explicit constructions covering different regimes of parameters. Unfortunately, all known constructions require a large field size that is exponential in h or a, and it is of interest to obtain MR LRCs of minimal possible field size. In this work, we develop an approach based on function fields to construct MR LRCs. Our method recovers, and in most parameter regimes improves, the field size of previous approaches. For instance, for the case of small r << epsilon log n and large h >= Omega(n^{1-epsilon}), we improve the field size from roughly n^h to n^{epsilon h}. For the case of a=1 (one local parity check), we improve the field size quadratically from r^{h(h+1)} to r^{h floor((h+1)/2)} for some range of r. The improvements are modest, but more importantly are obtained in a unified manner via a promising new idea.
Cite as
Venkatesan Guruswami, Lingfei Jin, and Chaoping Xing. Constructions of Maximally Recoverable Local Reconstruction Codes via Function Fields. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{guruswami_et_al:LIPIcs.ICALP.2019.68,
author = {Guruswami, Venkatesan and Jin, Lingfei and Xing, Chaoping},
title = {{Constructions of Maximally Recoverable Local Reconstruction Codes via Function Fields}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {68:1--68:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.68},
URN = {urn:nbn:de:0030-drops-106449},
doi = {10.4230/LIPIcs.ICALP.2019.68},
annote = {Keywords: Erasure codes, Algebraic constructions, Linear algebra, Locally Repairable Codes, Explicit constructions}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Yassine Hamoudi and Frédéric Magniez
Abstract
In this paper we provide new quantum algorithms with polynomial speed-up for a range of problems for which no such results were known, or we improve previous algorithms. First, we consider the approximation of the frequency moments F_k of order k >= 3 in the multi-pass streaming model with updates (turnstile model). We design a P-pass quantum streaming algorithm with memory M satisfying a tradeoff of P^2 M = O~(n^{1-2/k}), whereas the best classical algorithm requires P M = Theta(n^{1-2/k}). Then, we study the problem of estimating the number m of edges and the number t of triangles given query access to an n-vertex graph. We describe optimal quantum algorithms that perform O~(sqrt{n}/m^{1/4}) and O~(sqrt{n}/t^{1/6} + m^{3/4}/sqrt{t}) queries respectively. This is a quadratic speed-up compared to the classical complexity of these problems.
For this purpose we develop a new quantum paradigm that we call Quantum Chebyshev’s inequality. Namely we demonstrate that, in a certain model of quantum sampling, one can approximate with relative error the mean of any random variable with a number of quantum samples that is linear in the ratio of the square root of the variance to the mean. Classically the dependence is quadratic. Our algorithm subsumes a previous result of Montanaro [Montanaro, 2015]. This new paradigm is based on a refinement of the Amplitude Estimation algorithm of Brassard et al. [Brassard et al., 2002] and of previous quantum algorithms for the mean estimation problem. We show that this speed-up is optimal, and we identify another common model of quantum sampling where it cannot be obtained. Finally, we develop a new technique called "variable-time amplitude estimation" that reduces the dependence of our algorithm on the sample preparation time.
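For orientation, the classical counterpart of the statement above follows directly from Chebyshev's inequality: roughly (sigma/mu)^2/(eps^2 * delta) independent samples suffice to estimate the mean mu with relative error eps and failure probability delta, i.e., the dependence on sigma/mu is quadratic. The sketch below is our own illustration of this classical baseline; the sampler and all names are assumptions, and it is not the quantum algorithm of the paper.
```python
# Classical Chebyshev-based mean estimation: sample count quadratic in sigma/mu.
import math
import random

def chebyshev_mean_estimate(sample, sigma_over_mu, eps, delta, rng):
    """P(|estimate - mu| >= eps*mu) <= sigma^2/(n*eps^2*mu^2) <= delta for this n."""
    n = math.ceil(sigma_over_mu ** 2 / (eps ** 2 * delta))
    return sum(sample(rng) for _ in range(n)) / n, n

rng = random.Random(0)
# Exponential random variable with mean 2, so sigma/mu = 1.
estimate, n = chebyshev_mean_estimate(lambda r: r.expovariate(0.5),
                                      sigma_over_mu=1.0, eps=0.1, delta=0.05, rng=rng)
print(f"{n} samples, estimate {estimate:.3f} (true mean 2)")
```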
Cite as
Yassine Hamoudi and Frédéric Magniez. Quantum Chebyshev’s Inequality and Applications. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 69:1-69:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{hamoudi_et_al:LIPIcs.ICALP.2019.69,
author = {Hamoudi, Yassine and Magniez, Fr\'{e}d\'{e}ric},
title = {{Quantum Chebyshev’s Inequality and Applications}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {69:1--69:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.69},
URN = {urn:nbn:de:0030-drops-106458},
doi = {10.4230/LIPIcs.ICALP.2019.69},
annote = {Keywords: Quantum algorithms, approximation algorithms, sublinear-time algorithms, Monte Carlo method, streaming algorithms, subgraph counting}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Samuel Haney, Mehraneh Liaee, Bruce M. Maggs, Debmalya Panigrahi, Rajmohan Rajaraman, and Ravi Sundaram
Abstract
We initiate the algorithmic study of retracting a graph into a cycle in the graph, which seeks a mapping of the graph vertices to the cycle vertices so as to minimize the maximum stretch of any edge, subject to the constraint that the restriction of the mapping to the cycle is the identity map. This problem has its roots in the rich theory of retraction of topological spaces, and has strong ties to well-studied metric embedding problems such as minimum bandwidth and 0-extension. Our first result is an O(min{k, sqrt{n}})-approximation for retracting any graph on n nodes to a cycle with k nodes. We also show a surprising connection to Sperner’s Lemma that rules out the possibility of improving this result using certain natural convex relaxations of the problem. Nevertheless, if the problem is restricted to planar graphs, we show that we can overcome these integrality gaps by giving an optimal combinatorial algorithm, which is the technical centerpiece of the paper. Building on our planar graph algorithm, we also obtain a constant-factor approximation algorithm for retraction of points in the Euclidean plane to a uniform cycle.
Cite as
Samuel Haney, Mehraneh Liaee, Bruce M. Maggs, Debmalya Panigrahi, Rajmohan Rajaraman, and Ravi Sundaram. Retracting Graphs to Cycles. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 70:1-70:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{haney_et_al:LIPIcs.ICALP.2019.70,
author = {Haney, Samuel and Liaee, Mehraneh and Maggs, Bruce M. and Panigrahi, Debmalya and Rajaraman, Rajmohan and Sundaram, Ravi},
title = {{Retracting Graphs to Cycles}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {70:1--70:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.70},
URN = {urn:nbn:de:0030-drops-106462},
doi = {10.4230/LIPIcs.ICALP.2019.70},
annote = {Keywords: Graph algorithms, Graph embedding, Planar graphs, Approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Falko Hegerfeld and Stefan Kratsch
Abstract
In the fundamental Maximum Matching problem the task is to find a maximum-cardinality set of pairwise disjoint edges in a given undirected graph. The fastest algorithm for this problem, due to Micali and Vazirani, runs in time O(sqrt{n}m) and has stood unbeaten since 1980. It is complemented by faster, often linear-time, algorithms for various special graph classes. Moreover, there are fast parameterized algorithms, e.g., time O(km log n) relative to tree-width k, which outperform O(sqrt{n}m) when the parameter is sufficiently small.
We show that the Micali-Vazirani algorithm, and in fact any algorithm following the phase framework of Hopcroft and Karp, is adaptive to beneficial input structure. We exhibit several graph classes for which such algorithms run in linear time O(n+m). More strongly, we show that they run in time O(sqrt{k}m) for graphs that are k vertex deletions away from any of several such classes, without explicitly computing an optimal or approximate deletion set; before, most such bounds were at least Omega(km). Thus, any phase-based matching algorithm with linear-time phases obliviously interpolates between linear time for k=O(1) and the worst case of O(sqrt{n}m) when k=Theta(n). We complement our findings by proving that the phase framework by itself still allows Omega(sqrt{n}) phases, and hence time Omega(sqrt{n}m), even on paths, cographs, and bipartite chain graphs.
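For readers unfamiliar with the phase framework mentioned above, the sketch below shows it in its simplest (bipartite, Hopcroft-Karp) form: each phase performs one BFS that layers the graph from the free left vertices and one DFS pass that augments along layered alternating paths. This is only an illustration of the framework, not the general-graph Micali-Vazirani algorithm analyzed in the paper; the function names are ours.
```python
# Phase-based bipartite matching (Hopcroft-Karp style), for illustration only.
from collections import deque

def hopcroft_karp(adj, n_left, n_right):
    INF = float("inf")
    match_l, match_r = [-1] * n_left, [-1] * n_right
    dist = [0] * n_left

    def bfs():                                    # layer the graph from free left vertices
        q = deque()
        for u in range(n_left):
            dist[u] = 0 if match_l[u] == -1 else INF
            if match_l[u] == -1:
                q.append(u)
        reachable_free = False
        while q:
            u = q.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    reachable_free = True         # an augmenting path exists
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return reachable_free

    def dfs(u):                                   # augment along the BFS layers
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF
        return False

    matching = 0
    while bfs():                                  # each iteration is one phase
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u):
                matching += 1
    return matching

# Left vertices 0,1 and right vertices 0,1 with edges 0-0, 0-1, 1-1.
print(hopcroft_karp({0: [0, 1], 1: [1]}, n_left=2, n_right=2))   # -> 2
```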
Cite as
Falko Hegerfeld and Stefan Kratsch. On Adaptive Algorithms for Maximum Matching. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 71:1-71:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{hegerfeld_et_al:LIPIcs.ICALP.2019.71,
author = {Hegerfeld, Falko and Kratsch, Stefan},
title = {{On Adaptive Algorithms for Maximum Matching}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {71:1--71:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.71},
URN = {urn:nbn:de:0030-drops-106477},
doi = {10.4230/LIPIcs.ICALP.2019.71},
annote = {Keywords: Matchings, Adaptive Analysis, Parameterized Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Pavel Hrubeš, Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao, and Amir Yehudayoff
Abstract
There are various notions of balancing set families that appear in combinatorics and computer science. For example, a family of proper non-empty subsets S_1,...,S_k subset [n] is balancing if for every subset X subset {1,2,...,n} of size n/2, there is an i in [k] so that |S_i cap X| = |S_i|/2. We extend and simplify the framework developed by Hegedűs for proving lower bounds on the size of balancing set families. We prove that if n=2p for a prime p, then k >= p. For arbitrary values of n, we show that k >= n/2 - o(n).
We then exploit the connection between balancing families and depth-2 threshold circuits. This connection helps resolve a question raised by Kulikov and Podolskii on the fan-in of depth-2 majority circuits computing the majority function on n bits. We show that any depth-2 threshold circuit that computes the majority on n bits has at least one gate with fan-in at least n/2 - o(n). We also prove a sharp lower bound on the fan-in of depth-2 threshold circuits computing a specific weighted threshold function.
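The balancing property defined above can be checked directly for small instances. The following brute-force verifier is our own illustration (exponential in n, names ours), intended only for experimenting with small families.
```python
# Brute-force check of the balancing property for small n (illustration only).
from itertools import combinations

def is_balancing(n, family):
    """family: proper non-empty subsets of {1,...,n}; n must be even."""
    assert n % 2 == 0 and all(0 < len(S) < n for S in family)
    sets = [frozenset(S) for S in family]
    for X in combinations(range(1, n + 1), n // 2):
        X = set(X)
        if not any(2 * len(S & X) == len(S) for S in sets):
            return False                      # this X is split evenly by no S_i
    return True

print(is_balancing(4, [{1, 2}, {1, 3}, {1, 4}]))   # True
print(is_balancing(4, [{1, 2}]))                   # False: X = {1, 2} is not split evenly
```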
Cite as
Pavel Hrubeš, Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao, and Amir Yehudayoff. Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 72:1-72:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{hrubes_et_al:LIPIcs.ICALP.2019.72,
author = {Hrube\v{s}, Pavel and Natarajan Ramamoorthy, Sivaramakrishnan and Rao, Anup and Yehudayoff, Amir},
title = {{Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {72:1--72:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.72},
URN = {urn:nbn:de:0030-drops-106487},
doi = {10.4230/LIPIcs.ICALP.2019.72},
annote = {Keywords: Balancing sets, depth-2 threshold circuits, polynomials, majority, weighted thresholds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Zhiyi Huang and Xue Zhu
Abstract
We introduce an (epsilon, delta)-jointly differentially private algorithm for packing problems. Our algorithm not only achieves the optimal trade-off between the privacy parameter epsilon and the minimum supply requirement (up to logarithmic factors), but is also scalable in the sense that the running time is linear in the number of agents n. Previous algorithms either run in cubic time in n, or require a minimum supply per resource that is sqrt{n} times larger than the best possible.
Cite as
Zhiyi Huang and Xue Zhu. Scalable and Jointly Differentially Private Packing. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 73:1-73:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{huang_et_al:LIPIcs.ICALP.2019.73,
author = {Huang, Zhiyi and Zhu, Xue},
title = {{Scalable and Jointly Differentially Private Packing}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {73:1--73:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.73},
URN = {urn:nbn:de:0030-drops-106498},
doi = {10.4230/LIPIcs.ICALP.2019.73},
annote = {Keywords: Joint differential privacy, packing, scalable algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Klaus Jansen and Lars Rohwedder
Abstract
Graph Balancing is the problem of orienting the edges of a weighted multigraph so as to minimize the maximum weighted in-degree. Since the introduction of the problem, the best known algorithm achieves an approximation ratio of 1.75, and it is based on rounding a linear program with this exact integrality gap. It is also known that there is no (1.5 - epsilon)-approximation algorithm, unless P=NP. Can we do better than 1.75?
We prove that a different LP formulation, the configuration LP, has a strictly smaller integrality gap. Graph Balancing was the last one in a group of related problems from the literature for which it was open whether the configuration LP is stronger than previous, simple LP relaxations. We base our proof on a local search approach that has been applied successfully to the more general Restricted Assignment problem, which in turn is a prominent special case of makespan minimization on unrelated machines. With a number of technical novelties we are able to obtain a bound of 1.749 for the case of Graph Balancing. It is not clear whether the local search algorithm we present terminates in polynomial time, which means that the bound is non-constructive. However, it is strong evidence that a better approximation algorithm is possible using the configuration LP, and it allows the optimum to be estimated within a factor better than 1.75.
A particularly interesting aspect of our techniques is the way we handle small edges in the local search. We manage to exploit the configuration constraints enforced on small edges in the LP. This may be of interest to other problems such as Restricted Assignment as well.
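To make the objective concrete, the brute-force routine below computes the exact optimum of a tiny Graph Balancing instance by trying all edge orientations. It is only an illustration of the problem statement (names and the example are ours), not the configuration-LP or local-search machinery of the paper.
```python
# Exact Graph Balancing by enumerating all 2^m orientations (tiny instances only).
from itertools import product

def graph_balancing_opt(n, edges):
    """edges: list of (u, v, weight); each edge is oriented towards u or towards v."""
    best = float("inf")
    for choice in product((0, 1), repeat=len(edges)):
        indeg = [0.0] * n
        for (u, v, w), c in zip(edges, choice):
            indeg[v if c else u] += w
        best = min(best, max(indeg))
    return best

# Unit-weight triangle: orienting it as a directed cycle gives optimum 1.
print(graph_balancing_opt(3, [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]))   # 1.0
```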
Cite as
Klaus Jansen and Lars Rohwedder. Local Search Breaks 1.75 for Graph Balancing. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 74:1-74:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{jansen_et_al:LIPIcs.ICALP.2019.74,
author = {Jansen, Klaus and Rohwedder, Lars},
title = {{Local Search Breaks 1.75 for Graph Balancing}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {74:1--74:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.74},
URN = {urn:nbn:de:0030-drops-106501},
doi = {10.4230/LIPIcs.ICALP.2019.74},
annote = {Keywords: graph, approximation algorithm, scheduling, integrality gap, local search}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Klaus Jansen, Alexandra Lassota, and Lars Rohwedder
Abstract
We study an important case of ILPs max {c^T x | Ax = b, l <= x <= u, x in Z^{nt}} with n * t variables and lower and upper bounds l, u in Z^{nt}. In n-fold ILPs, non-zero entries only appear in the first r rows of the matrix A and in small blocks of size s x t along the diagonal underneath. Despite this restriction, many optimization problems can be expressed in this form. It is known that n-fold ILPs can be solved in FPT time with respect to the parameters s, r, and Delta, where Delta is the greatest absolute value of an entry in A. The state-of-the-art technique is a local search algorithm that repeatedly moves in an improving direction. Both the number of iterations and the search for such an improving direction take time Omega(n), leading to a quadratic running time in n. We introduce a technique based on Color Coding, which allows us to compute these improving directions in logarithmic time after a single initialization step. This leads to the first algorithm for n-fold ILPs with a running time that is near-linear in the number nt of variables, namely (rs Delta)^{O(r^2 s + s^2)} L^2 * nt log^{O(1)}(nt), where L is the encoding length of the largest integer in the input. In contrast to the algorithms in recent literature, we do not need to solve the LP relaxation in order to handle unbounded variables. Instead, we give a structural lemma to introduce appropriate bounds. If, on the other hand, we are given such an LP solution, the running time can be decreased by a factor of L.
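For readers who want to see the block structure, the small helper below (our own illustration; the use of numpy and the toy blocks are assumptions) assembles an n-fold constraint matrix from an r x t top block and an s x t diagonal block.
```python
# Assemble the n-fold constraint matrix described above.
import numpy as np

def nfold_matrix(top_block, diag_block, n):
    """top_block: r x t, diag_block: s x t; returns the (r + n*s) x (n*t) matrix."""
    top = np.hstack([top_block] * n)                      # r linking rows over all blocks
    diag = np.kron(np.eye(n, dtype=int), diag_block)      # s x t blocks on the diagonal
    return np.vstack([top, diag])

A1 = np.array([[1, 1]])        # r = 1, t = 2
A2 = np.array([[1, -1]])       # s = 1, t = 2
print(nfold_matrix(A1, A2, 3))
# [[ 1  1  1  1  1  1]
#  [ 1 -1  0  0  0  0]
#  [ 0  0  1 -1  0  0]
#  [ 0  0  0  0  1 -1]]
```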
Cite as
Klaus Jansen, Alexandra Lassota, and Lars Rohwedder. Near-Linear Time Algorithm for n-fold ILPs via Color Coding. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 75:1-75:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{jansen_et_al:LIPIcs.ICALP.2019.75,
author = {Jansen, Klaus and Lassota, Alexandra and Rohwedder, Lars},
title = {{Near-Linear Time Algorithm for n-fold ILPs via Color Coding}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {75:1--75:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.75},
URN = {urn:nbn:de:0030-drops-106518},
doi = {10.4230/LIPIcs.ICALP.2019.75},
annote = {Keywords: Near-Linear Time Algorithm, n-fold ILP, Color Coding}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ce Jin
Abstract
The 0-1 knapsack problem is an important NP-hard problem that admits fully polynomial-time approximation schemes (FPTASs). Previously, the fastest FPTAS, by Chan (2018), computes a (1+epsilon)-approximation in O~(n + (1/epsilon)^{12/5}) time, where O~ hides polylogarithmic factors. In this paper we present an improved algorithm running in O~(n + (1/epsilon)^{9/4}) time, with only a (1/epsilon)^{1/4} gap from the quadratic conditional lower bound based on (min,+)-convolution. Our improvement comes from a multi-level extension of Chan’s number-theoretic construction, and a greedy lemma that reduces unnecessary computation spent on cheap items.
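For context, the classical profit-scaling FPTAS (the textbook baseline that the results above improve on asymptotically) can be sketched in a few lines: profits are rounded down to multiples of K = eps * p_max / n and a min-weight-per-profit dynamic program is run on the scaled values, giving a (1-eps)-approximation in O(n^3/eps) time. This is not the paper's algorithm; the function name and example are ours.
```python
# Textbook profit-scaling FPTAS for 0-1 knapsack (baseline, not this paper's algorithm).
def knapsack_fptas(items, capacity, eps):
    """items: list of (profit, weight); returns a profit value >= (1 - eps) * OPT."""
    n = len(items)
    pmax = max(p for p, w in items if w <= capacity)
    K = eps * pmax / n                                   # scaling factor
    scaled = [int(p // K) for p, _ in items]
    P = sum(scaled)
    INF = float("inf")
    minw = [0.0] + [INF] * P                             # minw[q] = least weight reaching scaled profit q
    for (p, w), q_i in zip(items, scaled):
        for q in range(P, q_i - 1, -1):
            if minw[q - q_i] + w < minw[q]:
                minw[q] = minw[q - q_i] + w
    best_q = max(q for q in range(P + 1) if minw[q] <= capacity)
    return best_q * K

print(knapsack_fptas([(60, 10), (100, 20), (120, 30)], capacity=50, eps=0.1))   # 220.0
```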
Cite as
Ce Jin. An Improved FPTAS for 0-1 Knapsack. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{jin:LIPIcs.ICALP.2019.76,
author = {Jin, Ce},
title = {{An Improved FPTAS for 0-1 Knapsack}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {76:1--76:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.76},
URN = {urn:nbn:de:0030-drops-106527},
doi = {10.4230/LIPIcs.ICALP.2019.76},
annote = {Keywords: approximation algorithms, knapsack, subset sum}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Vladimir Kolmogorov
Abstract
A Valued Constraint Satisfaction Problem (VCSP) provides a common framework that can express a wide range of discrete optimization problems. A VCSP instance is given by a finite set of variables, a finite domain of labels, and an objective function to be minimized. This function is represented as a sum of terms where each term depends on a subset of the variables. To obtain different classes of optimization problems, one can restrict all terms to come from a fixed set Gamma of cost functions, called a language.
Recent breakthrough results have established a complete complexity classification of such classes with respect to the language Gamma: if all cost functions in Gamma satisfy a certain algebraic condition then all Gamma-instances can be solved in polynomial time, otherwise the problem is NP-hard. Unfortunately, testing this condition for a given language Gamma is known to be NP-hard. We thus study exponential algorithms for this meta-problem. We show that the tractability condition of a finite-valued language Gamma can be tested in O(sqrt[3]{3}^{|D|} * poly(size(Gamma))) time, where D is the domain of Gamma and poly(*) is some fixed polynomial. We also obtain a matching lower bound under the Strong Exponential Time Hypothesis (SETH). More precisely, we prove that for any constant delta < 1 there is no O(sqrt[3]{3}^{delta |D|})-time algorithm, assuming that SETH holds.
Cite as
Vladimir Kolmogorov. Testing the Complexity of a Valued CSP Language. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 77:1-77:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{kolmogorov:LIPIcs.ICALP.2019.77,
author = {Kolmogorov, Vladimir},
title = {{Testing the Complexity of a Valued CSP Language}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {77:1--77:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.77},
URN = {urn:nbn:de:0030-drops-106531},
doi = {10.4230/LIPIcs.ICALP.2019.77},
annote = {Keywords: Valued Constraint Satisfaction Problems, Exponential time algorithms, Exponential Time Hypothesis}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mrinal Kumar, Rafael Oliveira, and Ramprasad Saptharishi
Abstract
We show that any n-variate polynomial computable by a syntactically multilinear circuit of size poly(n) can be computed by a depth-4 syntactically multilinear (Sigma Pi Sigma Pi) circuit of size at most exp(O(sqrt{n log n})). For degree d = omega(n/log n), this improves upon the upper bound of exp(O(sqrt{d} log n)) obtained by Tavenas [Sébastien Tavenas, 2015] for general circuits, and is known to be asymptotically optimal in the exponent when d < n^{epsilon} for a small enough constant epsilon. Our upper bound matches the lower bound of exp(Omega(sqrt{n log n})) proved by Raz and Yehudayoff [Ran Raz and Amir Yehudayoff, 2009], and thus cannot be improved further in the exponent. Our results hold over all fields and also generalize to circuits of small individual degree.
More generally, we show that an n-variate polynomial computable by a syntactically multilinear circuit of size poly(n) can be computed by a syntactically multilinear circuit of product-depth Delta of size at most exp(O(Delta * (n/log n)^{1/Delta} * log n)). It follows from the lower bounds of Raz and Yehudayoff [Ran Raz and Amir Yehudayoff, 2009] that in general, for constant Delta, the exponent in this upper bound is tight and cannot be improved to o((n/log n)^{1/Delta} * log n).
Cite as
Mrinal Kumar, Rafael Oliveira, and Ramprasad Saptharishi. Towards Optimal Depth Reductions for Syntactically Multilinear Circuits. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 78:1-78:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{kumar_et_al:LIPIcs.ICALP.2019.78,
author = {Kumar, Mrinal and Oliveira, Rafael and Saptharishi, Ramprasad},
title = {{Towards Optimal Depth Reductions for Syntactically Multilinear Circuits}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {78:1--78:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.78},
URN = {urn:nbn:de:0030-drops-106548},
doi = {10.4230/LIPIcs.ICALP.2019.78},
annote = {Keywords: arithmetic circuits, multilinear circuits, depth reduction, lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Adam Kurpisz
Abstract
We introduce a method for proving bounds on the SoS rank based on Boolean Function Analysis and Approximation Theory. We apply our technique to improve upon existing results, thus making progress towards answering several open questions.
We consider two questions by Laurent. First, determining the SoS rank of the linear representation of the set with no integral points. We prove that the SoS rank is between ceil(n/2) and ceil(n/2 + sqrt{n log 2n}). Second, proving bounds on the SoS rank for the instance of the Min Knapsack problem. We show that the SoS rank is at least Omega(sqrt{n}) and at most ceil((n + 4 ceil(sqrt{n}))/2). Finally, we consider the question by Bienstock regarding the instance of the Set Cover problem. For this problem we prove an SoS rank lower bound of Omega(sqrt{n}).
Cite as
Adam Kurpisz. Sum-Of-Squares Bounds via Boolean Function Analysis. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 79:1-79:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{kurpisz:LIPIcs.ICALP.2019.79,
author = {Kurpisz, Adam},
title = {{Sum-Of-Squares Bounds via Boolean Function Analysis}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {79:1--79:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.79},
URN = {urn:nbn:de:0030-drops-106556},
doi = {10.4230/LIPIcs.ICALP.2019.79},
annote = {Keywords: SoS certificate, SoS rank, hypercube optimization, semidefinite programming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
William Kuszmaul
Abstract
Dynamic time warping distance (DTW) is a widely used distance measure between time series, with applications in areas such as speech recognition and bioinformatics. The best known algorithms for computing DTW run in near quadratic time, and conditional lower bounds prohibit the existence of significantly faster algorithms.
The lower bounds do not prevent a faster algorithm for the important special case in which the DTW is small, however. For an arbitrary metric space Sigma with distances normalized so that the smallest non-zero distance is one, we present an algorithm which computes dtw(x, y) for two strings x and y over Sigma in time O(n * dtw(x, y)). When dtw(x, y) is small, this represents a significant speedup over the standard quadratic-time algorithm.
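For reference, the quadratic baseline that the low-distance algorithm above improves upon is the standard dynamic program; a minimal version over an arbitrary point distance is sketched below (the function name and example are ours).
```python
# Standard O(n*m) dynamic program for dynamic time warping (the quadratic baseline).
def dtw(x, y, dist):
    INF = float("inf")
    n, m = len(x), len(y)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(x[i - 1], y[j - 1]) + min(D[i - 1][j],      # stay on y[j-1]
                                                     D[i][j - 1],      # stay on x[i-1]
                                                     D[i - 1][j - 1])  # advance both
    return D[n][m]

print(dtw([0, 0, 1, 2], [0, 1, 1, 2, 2], dist=lambda a, b: abs(a - b)))   # 0.0
```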
Using our low-distance regime algorithm as a building block, we also present an approximation algorithm which computes dtw(x, y) within a factor of O(n^epsilon) in time O~(n^{2 - epsilon}) for 0 < epsilon < 1. The algorithm allows for the strings x and y to be taken over an arbitrary well-separated tree metric with logarithmic depth and at most exponential aspect ratio. Notably, any polynomial-size metric space can be efficiently embedded into such a tree metric with logarithmic expected distortion. Extending our techniques further, we also obtain the first approximation algorithm for edit distance to work with characters taken from an arbitrary metric space, providing an n^epsilon-approximation in time O~(n^{2 - epsilon}), with high probability.
Finally, we turn our attention to the relationship between edit distance and dynamic time warping distance. We prove a reduction from computing edit distance over an arbitrary metric space to computing DTW over the same metric space, except with an added null character (whose distance to a letter l is defined to be the edit-distance insertion cost of l). Applying our reduction to a conditional lower bound of Bringmann and Künnemann pertaining to edit distance over {0, 1}, we obtain a conditional lower bound for computing DTW over a three letter alphabet (with distances of zero and one). This improves on a previous result of Abboud, Backurs, and Williams, who gave a conditional lower bound for DTW over an alphabet of size five.
With a similar approach, we also prove a reduction from computing edit distance (over generalized Hamming Space) to computing longest-common-subsequence length (LCS) over an alphabet with an added null character. Surprisingly, this means that one can recover conditional lower bounds for LCS directly from those for edit distance, which was not previously thought to be the case.
Cite as
William Kuszmaul. Dynamic Time Warping in Strongly Subquadratic Time: Algorithms for the Low-Distance Regime and Approximate Evaluation. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 80:1-80:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{kuszmaul:LIPIcs.ICALP.2019.80,
author = {Kuszmaul, William},
title = {{Dynamic Time Warping in Strongly Subquadratic Time: Algorithms for the Low-Distance Regime and Approximate Evaluation}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {80:1--80:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.80},
URN = {urn:nbn:de:0030-drops-106568},
doi = {10.4230/LIPIcs.ICALP.2019.80},
annote = {Keywords: dynamic time warping, edit distance, approximation algorithm, tree metrics}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Bingkai Lin
Abstract
Given an n-vertex bipartite graph I=(S,U,E), the goal of the set cover problem is to find a minimum-sized subset of S such that every vertex in U is adjacent to some vertex of this subset. It is NP-hard to approximate set cover to within a (1-o(1))ln n factor [I. Dinur and D. Steurer, 2014]. If we use the size of the optimum solution k as the parameter, then it can be solved in n^{k+o(1)} time [Eisenbrand and Grandoni, 2004]. A natural question is: can we approximate set cover to within an o(ln n) factor in n^{k-epsilon} time?
In a recent breakthrough result[Karthik et al., 2018], Karthik, Laekhanukit and Manurangsi showed that assuming the Strong Exponential Time Hypothesis (SETH), for any computable function f, no f(k)* n^{k-epsilon}-time algorithm can approximate set cover to a factor below (log n)^{1/poly(k,e(epsilon))} for some function e.
This paper presents a simple gap-producing reduction which, given a set cover instance I=(S,U,E) and two integers k < h <= (1-o(1)) sqrt[k]{log |S|/log log |S|}, outputs a new set cover instance I'=(S,U',E') with |U'| = |U|^{h^k} * |S|^{O(1)} in |U|^{h^k} * |S|^{O(1)} time such that
- if I has a k-sized solution, then so does I';
- if I has no k-sized solution, then every solution of I' must contain at least h vertices.
Setting h = (1-o(1)) sqrt[k]{log |S|/log log |S|}, we show that assuming SETH, for any computable function f, no f(k) * n^{k-epsilon}-time algorithm can distinguish between a set cover instance with a k-sized solution and one whose minimum solution size is at least (1-o(1)) * sqrt[k]{(log n)/(log log n)}. This improves the result in [Karthik et al., 2018].
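For comparison with the n^{k+o(1)}-time parameterized baseline mentioned at the beginning of the abstract, the brute-force decision procedure is simply an enumeration of all k-subsets of S; a small sketch follows (the instance and names are ours).
```python
# Illustrative n^k baseline: decide whether I = (S, U, E) has a k-sized set cover.
from itertools import combinations

def has_k_cover(S, U, E, k):
    """E: dict mapping each s in S to the set of elements of U it covers."""
    universe = set(U)
    for subset in combinations(S, k):
        if set().union(*(E[s] for s in subset)) >= universe:
            return True
    return False

S = ["s1", "s2", "s3"]
U = [1, 2, 3, 4]
E = {"s1": {1, 2}, "s2": {2, 3}, "s3": {3, 4}}
print(has_k_cover(S, U, E, 2))   # True: {s1, s3} covers U
print(has_k_cover(S, U, E, 1))   # False
```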
Cite as
Bingkai Lin. A Simple Gap-Producing Reduction for the Parameterized Set Cover Problem. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 81:1-81:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{lin:LIPIcs.ICALP.2019.81,
author = {Lin, Bingkai},
title = {{A Simple Gap-Producing Reduction for the Parameterized Set Cover Problem}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {81:1--81:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.81},
URN = {urn:nbn:de:0030-drops-106573},
doi = {10.4230/LIPIcs.ICALP.2019.81},
annote = {Keywords: set cover, FPT inapproximability, gap-producing reduction, (n, k)-universal set}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Jannik Matuschke, Ulrike Schmidt-Kraepelin, and José Verschae
Abstract
The min-cost matching problem suffers from being very sensitive to small changes of the input. Even in a simple setting, e.g., when the costs come from the metric on the line, adding two nodes to the input might change the optimal solution completely. On the other hand, one expects that small changes in the input should incur only small changes to the constructed solutions, measured as the number of modified edges. We introduce a two-stage model where we study the trade-off between quality and robustness of solutions. In the first stage we are given a set of nodes in a metric space and we must compute a perfect matching. In the second stage 2k new nodes appear and we must adapt the solution to a perfect matching for the new instance.
We say that an algorithm is (alpha,beta)-robust if the solutions constructed in both stages are alpha-approximate with respect to min-cost perfect matchings, and if the number of edges deleted from the first stage matching is at most beta k. Hence, alpha measures the quality of the algorithm and beta its robustness. In this setting we aim to balance both measures by deriving algorithms for constant alpha and beta. We show that there exists an algorithm that is (3,1)-robust for any metric if one knows the number 2k of arriving nodes in advance. For the case that k is unknown the situation is significantly more involved. We study this setting under the metric on the line and devise a (10,2)-robust algorithm that constructs a solution with a recursive structure that carefully balances cost and redundancy.
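As a small operational reading of the definition above (an illustration only, not the paper's algorithms), the Python sketch below checks the (alpha, beta)-robustness conditions for a given pair of matchings; the optimal costs opt1 and opt2 are assumed to be supplied, and all names are ours.

def is_alpha_beta_robust(M1, M2, opt1, opt2, cost, k, alpha, beta):
    """M1, M2: sets of edges (frozensets of two nodes) forming the two perfect matchings.
    opt1, opt2: costs of min-cost perfect matchings of the first and second instance.
    cost: dict mapping an edge to its cost; k: 2k nodes arrive in the second stage."""
    quality = (sum(cost[e] for e in M1) <= alpha * opt1 and
               sum(cost[e] for e in M2) <= alpha * opt2)
    robustness = len(M1 - M2) <= beta * k  # edges deleted from the first-stage matching
    return quality and robustness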
Cite as
Jannik Matuschke, Ulrike Schmidt-Kraepelin, and José Verschae. Maintaining Perfect Matchings at Low Cost. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 82:1-82:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{matuschke_et_al:LIPIcs.ICALP.2019.82,
author = {Matuschke, Jannik and Schmidt-Kraepelin, Ulrike and Verschae, Jos\'{e}},
title = {{Maintaining Perfect Matchings at Low Cost}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {82:1--82:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.82},
URN = {urn:nbn:de:0030-drops-106582},
doi = {10.4230/LIPIcs.ICALP.2019.82},
annote = {Keywords: matchings, robust optimization, approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Arturo I. Merino and José A. Soto
Abstract
We study the minimum weight basis problem on a matroid when the elements' weights are uncertain. For each element we only know a set of possible values (an uncertainty area) that contains its real weight. In some cases there exist bases that are uniformly optimal, that is, they are minimum weight bases for every possible weight function obeying the uncertainty areas. In other cases, computing such a basis is not possible unless we perform some queries for the exact value of some elements.
Our main result is a polynomial time algorithm for the following problem. Given a matroid with uncertainty areas and a query cost function on its elements, find the set of elements of minimum total cost that we need to simultaneously query such that, no matter their revelation, the resulting instance admits a uniformly optimal basis. We also provide combinatorial characterizations of all uniformly optimal bases, when one exists; and of all sets of queries that can be performed so that after revealing the corresponding weights the resulting instance admits a uniformly optimal basis.
Cite as
Arturo I. Merino and José A. Soto. The Minimum Cost Query Problem on Matroids with Uncertainty Areas. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 83:1-83:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{merino_et_al:LIPIcs.ICALP.2019.83,
author = {Merino, Arturo I. and Soto, Jos\'{e} A.},
title = {{The Minimum Cost Query Problem on Matroids with Uncertainty Areas}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {83:1--83:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.83},
URN = {urn:nbn:de:0030-drops-106592},
doi = {10.4230/LIPIcs.ICALP.2019.83},
annote = {Keywords: Minimum spanning tree, matroids, uncertainty, queries}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ian Mertz, Toniann Pitassi, and Yuanhao Wei
Abstract
We obtain a streamlined proof of an important result by Alekhnovich and Razborov [Michael Alekhnovich and Alexander A. Razborov, 2008], showing that it is hard to automatize both tree-like and general Resolution. Under a different assumption than in [Michael Alekhnovich and Alexander A. Razborov, 2008], our simplified proof gives improved bounds: we show, under ETH, that these proof systems are not automatizable in time n^{f(n)} whenever f(n) = o(log^{1/7 - epsilon} log n) for any epsilon > 0. Previously, non-automatizability was known only for f(n) = O(1). Our proof also extends fairly straightforwardly to prove similar hardness results for PCR and Res(r).
Cite as
Ian Mertz, Toniann Pitassi, and Yuanhao Wei. Short Proofs Are Hard to Find. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 84:1-84:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{mertz_et_al:LIPIcs.ICALP.2019.84,
author = {Mertz, Ian and Pitassi, Toniann and Wei, Yuanhao},
title = {{Short Proofs Are Hard to Find}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {84:1--84:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.84},
URN = {urn:nbn:de:0030-drops-106605},
doi = {10.4230/LIPIcs.ICALP.2019.84},
annote = {Keywords: automatizability, Resolution, SAT solvers, proof complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Eyal Mizrachi, Roy Schwartz, Joachim Spoerhase, and Sumedha Uniyal
Abstract
Motivated by applications in machine learning, such as subset selection and data summarization, we consider the problem of maximizing a monotone submodular function subject to mixed packing and covering constraints. We present a tight approximation algorithm that for any constant epsilon > 0 achieves a guarantee of 1-(1/e)-epsilon while violating only the covering constraints by a multiplicative factor of 1-epsilon. Our algorithm is based on a novel enumeration method, which, unlike previously known enumeration techniques, can handle both packing and covering constraints. We extend the above main result by additionally handling a matroid independence constraint as well as finding (approximate) Pareto set optimal solutions when multiple submodular objectives are present. Finally, we propose a novel and purely combinatorial dynamic programming approach. While this approach does not give tight bounds, it yields deterministic and, in some special cases, considerably faster algorithms. For example, for the well-studied special case of only packing constraints (Kulik et al. [Math. Oper. Res. '13] and Chekuri et al. [FOCS '10]), we are able to present the first deterministic non-trivial approximation algorithm. We believe our new combinatorial approach might be of independent interest.
Cite as
Eyal Mizrachi, Roy Schwartz, Joachim Spoerhase, and Sumedha Uniyal. A Tight Approximation for Submodular Maximization with Mixed Packing and Covering Constraints. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 85:1-85:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{mizrachi_et_al:LIPIcs.ICALP.2019.85,
author = {Mizrachi, Eyal and Schwartz, Roy and Spoerhase, Joachim and Uniyal, Sumedha},
title = {{A Tight Approximation for Submodular Maximization with Mixed Packing and Covering Constraints}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {85:1--85:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.85},
URN = {urn:nbn:de:0030-drops-106610},
doi = {10.4230/LIPIcs.ICALP.2019.85},
annote = {Keywords: submodular function, approximation algorithm, covering, packing}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Benjamin Moseley
Abstract
This paper considers scheduling on identical machines. The scheduling objective considered in this paper generalizes most scheduling minimization problems. In the problem, there are n jobs and each job j is associated with a monotonically increasing function g_j. The goal is to design a schedule that minimizes sum_{j in [n]} g_{j}(C_j) where C_j is the completion time of job j in the schedule. An O(1)-approximation is known for the single machine case. On multiple machines, this paper shows that if the scheduler is required to be either non-migratory or non-preemptive then any algorithm has an unbounded approximation ratio. Using preemption and migration, this paper gives an O(log log nP)-approximation on multiple machines, the first such result in this setting. These results imply the first non-trivial positive results for several special cases of the problem considered, such as throughput minimization and tardiness.
Natural linear programs known for the problem have a poor integrality gap. The results are obtained by strengthening a natural linear program for the problem with a set of covering inequalities we call job cover inequalities. This linear program is rounded to an integral solution by building on quasi-uniform sampling and rounding techniques.
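As a quick illustration of how the objective sum_j g_j(C_j) defined above specializes (our toy example, not from the paper): choosing g_j(C) = w_j * C gives weighted completion time, and g_j(C) = max(0, C - d_j) gives tardiness.

def general_cost(completion, g):
    """Objective of the abstract: sum_j g_j(C_j) over the completion times C_j."""
    return sum(g[j](completion[j]) for j in completion)

completion = {"j1": 4, "j2": 9}   # completion times of two jobs
weights = {"j1": 2, "j2": 1}
due = {"j1": 5, "j2": 6}

weighted_completion = {j: (lambda C, w=weights[j]: w * C) for j in completion}
tardiness = {j: (lambda C, d=due[j]: max(0, C - d)) for j in completion}

print(general_cost(completion, weighted_completion))  # 2*4 + 1*9 = 17
print(general_cost(completion, tardiness))            # 0 + 3 = 3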
Cite as
Benjamin Moseley. Scheduling to Approximate Minimization Objectives on Identical Machines. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 86:1-86:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{moseley:LIPIcs.ICALP.2019.86,
author = {Moseley, Benjamin},
title = {{Scheduling to Approximate Minimization Objectives on Identical Machines}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {86:1--86:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.86},
URN = {urn:nbn:de:0030-drops-106621},
doi = {10.4230/LIPIcs.ICALP.2019.86},
annote = {Keywords: Scheduling, LP rounding, Approximation Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Nabil H. Mustafa
Abstract
Given a set system (X, R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an epsilon-net for (X, R) of size O(d/epsilon log 1/epsilon). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal, and since then there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems.
In this paper, we consider the following natural algorithm to compute an epsilon-net: start with an initial random sample N. Iteratively, as long as N is not an epsilon-net for R, pick any unhit set S in R (say, given by an Oracle), and add O(1) randomly chosen points from S to N.
We prove that the above algorithm computes, in expectation, epsilon-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/epsilon) calls to the Oracle. In particular, this implies that computing optimal-sized epsilon-nets is as easy as computing an unhit set in the given set system.
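The iterative algorithm described above is easy to write down; the Python sketch below is a minimal illustration for a set system given explicitly as a list of ranges, with a brute-force scan standing in for the Oracle (the constant c, the initial sample size, and all names are our own choices, not the paper's).

import random

def eps_net_via_unhit_oracle(points, ranges, eps, c=2, seed=0):
    """Grow an epsilon-net for the set system (points, ranges) as in the abstract:
    start from a random sample and, while some eps-heavy range is unhit,
    add c = O(1) random points of such a range."""
    rng = random.Random(seed)
    threshold = eps * len(points)
    net = set(rng.sample(points, min(len(points), max(1, int(1 / eps)))))

    def unhit_range():
        # Brute-force stand-in for the Oracle: any eps-heavy range missed by net.
        for r in ranges:
            if len(r) >= threshold and not (r & net):
                return r
        return None

    r = unhit_range()
    while r is not None:
        net.update(rng.sample(sorted(r), min(c, len(r))))
        r = unhit_range()
    return net

# Example: points on a line, ranges = "long" intervals of consecutive integers.
pts = list(range(100))
rngs = [set(range(i, i + 25)) for i in range(0, 76, 5)]
print(len(eps_net_via_unhit_oracle(pts, rngs, eps=0.25)))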
Cite as
Nabil H. Mustafa. Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 87:1-87:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{mustafa:LIPIcs.ICALP.2019.87,
author = {Mustafa, Nabil H.},
title = {{Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {87:1--87:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.87},
URN = {urn:nbn:de:0030-drops-106632},
doi = {10.4230/LIPIcs.ICALP.2019.87},
annote = {Keywords: epsilon-nets, Geometric Set Systems}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Joseph (Seffi) Naor, Seeun William Umboh, and David P. Williamson
Abstract
The Weighted Tree Augmentation problem (WTAP) is a fundamental problem in network design. In this paper, we consider this problem in the online setting. We are given an n-vertex spanning tree T and an additional set L of edges (called links) with costs. Then, terminal pairs arrive one-by-one and our task is to maintain a low-cost subset of links F such that every terminal pair that has arrived so far is 2-edge-connected in T cup F. This online problem was first studied by Gupta, Krishnaswamy and Ravi (SICOMP 2012) who used it as a subroutine for the online survivable network design problem. They gave a deterministic O(log^2 n)-competitive algorithm and showed an Omega(log n) lower bound on the competitive ratio of randomized algorithms. The case when T is a path is also interesting: it is exactly the online interval set cover problem, which also captures as a special case the parking permit problem studied by Meyerson (FOCS 2005). The contribution of this paper is to give tight results for online weighted tree and path augmentation problems. The main result of this work is a deterministic O(log n)-competitive algorithm for online WTAP, which is tight up to constant factors.
Cite as
Joseph (Seffi) Naor, Seeun William Umboh, and David P. Williamson. Tight Bounds for Online Weighted Tree Augmentation. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 88:1-88:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{naor_et_al:LIPIcs.ICALP.2019.88,
author = {Naor, Joseph (Seffi) and Umboh, Seeun William and Williamson, David P.},
title = {{Tight Bounds for Online Weighted Tree Augmentation}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {88:1--88:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.88},
URN = {urn:nbn:de:0030-drops-106647},
doi = {10.4230/LIPIcs.ICALP.2019.88},
annote = {Keywords: Online algorithms, competitive analysis, tree augmentation, network design}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Merav Parter and Eylon Yogev
Abstract
Short cycle decomposition is an edge partitioning of an unweighted graph into edge-disjoint short cycles, plus a small number of extra edges not in any cycle. This notion was introduced by Chu et al. [FOCS'18] as a fundamental tool for graph sparsification and sketching. Clearly, it is most desirable to have a fast algorithm for partitioning the edges into cycles that are as short as possible, while omitting few edges.
The most naïve procedure for such decomposition runs in time O(m * n) and partitions the edges into O(log n)-length edge-disjoint cycles plus at most 2n edges. Chu et al. improved the running time considerably to m^{1+o(1)}, while increasing both the length of the cycles and the number of omitted edges by a factor of n^{o(1)}. Even more recently, Liu-Sachdeva-Yu [SODA'19] showed that for every constant delta in (0,1] there is an O(m * n^{delta})-time algorithm that provides, w.h.p., cycles of length O(log n)^{1/delta} and O(n) extra edges.
In this paper, we significantly improve upon these bounds. We first show an m^{1+o(1)}-time deterministic algorithm for computing a nearly optimal cycle decomposition, i.e., with cycle length O(log^2 n) and an extra subset of O(n log n) edges not in any cycle. This algorithm is based on a reduction to low-congestion cycle covers, introduced by the authors in [SODA'19].
We also provide a simple deterministic algorithm that computes edge-disjoint cycles of length 2^{1/epsilon} with n^{1+epsilon} * 2^{1/epsilon} extra edges, for every epsilon in (0,1]. Combining this with Liu-Sachdeva-Yu [SODA'19] gives a linear time randomized algorithm for computing cycles of length poly(log n) and O(n) extra edges, for every n-vertex graph with n^{1+1/delta} edges for some constant delta.
These decomposition algorithms lead to improvements in all the algorithmic applications of Chu et al. as well as to new distributed constructions.
Cite as
Merav Parter and Eylon Yogev. Optimal Short Cycle Decomposition in Almost Linear Time. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 89:1-89:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{parter_et_al:LIPIcs.ICALP.2019.89,
author = {Parter, Merav and Yogev, Eylon},
title = {{Optimal Short Cycle Decomposition in Almost Linear Time}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {89:1--89:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.89},
URN = {urn:nbn:de:0030-drops-106653},
doi = {10.4230/LIPIcs.ICALP.2019.89},
annote = {Keywords: Cycle decomposition, low-congestion cycle cover, graph sparsification}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Konstantinos Panagiotou and Matija Pasch
Abstract
In the last two decades the study of random instances of constraint satisfaction problems (CSPs) has flourished across several disciplines, including computer science, mathematics and physics. The diversity of the developed methods, on the rigorous and non-rigorous side, has led to major advances regarding both the theoretical as well as the applied viewpoints. The two most popular types of such CSPs are the Erdős-Rényi and the random regular CSPs.
Based on a ceteris paribus approach in terms of the density evolution equations known from statistical physics, we focus on a specific prominent class of problems of the latter type, the so-called occupation problems. The regular r-in-k occupation problems form a basis of this class. By now, out of these CSPs only the satisfiability threshold - the largest degree for which the problem asymptotically admits a solution - for the 1-in-k occupation problem has been rigorously established. In the present work we take a general approach towards a systematic analysis of occupation problems. In particular, we discover a surprising and explicit connection between the 2-in-k occupation problem satisfiability threshold and the determination of contraction coefficients, an important quantity in information theory measuring the loss of information that occurs when communicating through a noisy channel. We present methods to facilitate the computation of these coefficients and use them to establish explicitly the threshold for the 2-in-k occupation problem for k=4. Based on this result, for general k >= 5 we formulate a conjecture that pins down the exact value of the corresponding coefficient, which, if true, is shown to determine the threshold in all these cases.
Cite as
Konstantinos Panagiotou and Matija Pasch. Satisfiability Thresholds for Regular Occupation Problems. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{panagiotou_et_al:LIPIcs.ICALP.2019.90,
author = {Panagiotou, Konstantinos and Pasch, Matija},
title = {{Satisfiability Thresholds for Regular Occupation Problems}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {90:1--90:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.90},
URN = {urn:nbn:de:0030-drops-106665},
doi = {10.4230/LIPIcs.ICALP.2019.90},
annote = {Keywords: Constraint satisfaction problem, replica symmetric, contraction coefficient, first moment, second moment, small subgraph conditioning}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Akbar Rafiey, Arash Rafiey, and Thiago Santos
Abstract
Given two (di)graphs G, H and a cost function c: V(G) x V(H) -> Q_{>= 0} cup {+infty}, in the minimum cost homomorphism problem, MinHOM(H), we are interested in finding a homomorphism f: V(G) -> V(H) (a.k.a. an H-coloring) that minimizes sum_{v in V(G)} c(v,f(v)). The complexity of exact minimization of this problem is well understood [Pavol Hell and Arash Rafiey, 2012], and the class of digraphs H for which MinHOM(H) is polynomial-time solvable is a small subset of all digraphs.
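As a concrete reading of the objective (an illustration of the definition only, not of the paper's algorithms), the Python sketch below checks that a map f is an H-coloring of G and evaluates sum_v c(v, f(v)); graphs are given as edge sets, and all names are ours.

def minhom_cost(G_edges, H_edges, c, f):
    """Return sum of c(v, f(v)) if f is a homomorphism from G to H, else None.
    G_edges, H_edges: iterables of directed edges (u, v); c: dict (v, h) -> cost;
    f: dict mapping each vertex of G to a vertex of H."""
    H = set(H_edges)
    if any((f[u], f[v]) not in H for (u, v) in G_edges):  # edges must map to edges
        return None
    return sum(c[(v, h)] for v, h in f.items())

# Tiny example: G and H are directed 2-cycles; mapping 0 -> a, 1 -> b costs 1 + 1 = 2.
G = [(0, 1), (1, 0)]
H = [("a", "b"), ("b", "a")]
cost = {(0, "a"): 1, (0, "b"): 5, (1, "a"): 5, (1, "b"): 1}
print(minhom_cost(G, H, cost, {0: "a", 1: "b"}))  # 2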
In this paper, we consider the approximation of MinHOM within a constant factor. In terms of digraphs, MinHOM(H) is not approximable if H contains a digraph asteroidal triple (DAT). We take a major step toward a dichotomy classification of approximable cases. We give a dichotomy classification for approximating MinHOM(H) when H is a graph (i.e. a symmetric digraph). For digraphs, we provide constant factor approximation algorithms for two important classes of digraphs, namely bi-arc digraphs (digraphs with a conservative semi-lattice polymorphism or min-ordering), and k-arc digraphs (digraphs with an extended min-ordering). Specifically, we show that:
- Dichotomy for Graphs: MinHOM(H) has a 2|V(H)|-approximation algorithm if the graph H admits a conservative majority polymorphism (i.e. H is a bi-arc graph); otherwise, it is inapproximable;
- MinHOM(H) has a |V(H)|^2-approximation algorithm if H is a bi-arc digraph;
- MinHOM(H) has a |V(H)|^2-approximation algorithm if H is a k-arc digraph.
In conclusion, we show the importance of these results and provide insights for achieving a dichotomy classification of approximable cases. Our constant factors depend on the size of H. However, the implementation of our algorithms provides a much better approximation ratio. It remains open to classify the digraphs H for which MinHOM(H) admits a constant factor approximation algorithm that is independent of |V(H)|.
Cite as
Akbar Rafiey, Arash Rafiey, and Thiago Santos. Toward a Dichotomy for Approximation of H-Coloring. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 91:1-91:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{rafiey_et_al:LIPIcs.ICALP.2019.91,
author = {Rafiey, Akbar and Rafiey, Arash and Santos, Thiago},
title = {{Toward a Dichotomy for Approximation of H-Coloring}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {91:1--91:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.91},
URN = {urn:nbn:de:0030-drops-106678},
doi = {10.4230/LIPIcs.ICALP.2019.91},
annote = {Keywords: Approximation algorithms, minimum cost homomorphism, randomized rounding}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Venkatesan Guruswami and Andrii Riazanov
Abstract
We say a subset C subseteq {1,2,...,k}^n is a k-hash code (also called k-separated) if for every subset of k codewords from C, there exists a coordinate where all these codewords have distinct values. Understanding the largest possible rate (in bits), defined as (log_2 |C|)/n, of a k-hash code is a classical problem. It arises in two equivalent contexts: (i) the smallest size possible for a perfect hash family that maps a universe of N elements into {1,2,...,k}, and (ii) the zero-error capacity for decoding with lists of size less than k for a certain combinatorial channel.
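The definition above can be checked directly by brute force; the Python sketch below does so for tiny codes (exponential in |C|, purely illustrative, and not part of the paper).

from itertools import combinations

def is_k_hash_code(C, k):
    """C: list of equal-length tuples over {1,...,k}. The code is k-separated if every
    k codewords admit a coordinate on which they take k pairwise distinct values."""
    n = len(C[0])
    return all(any(len({w[i] for w in group}) == k for i in range(n))
               for group in combinations(C, k))

print(is_k_hash_code([(1, 1), (2, 2), (3, 1)], 3))  # True: coordinate 0 separates
print(is_k_hash_code([(1, 1), (2, 1), (1, 2)], 3))  # False: no separating coordinate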
A general upper bound of k!/k^{k-1} on the rate of a k-hash code (in the limit of large n) was obtained by Fredman and Komlós in 1984 for any k >= 4. While better bounds have been obtained for k=4, their original bound has remained the best known for each k >= 5. In this work, we present a method to obtain the first improvement to the Fredman-Komlós bound for every k >= 5, and we apply this method to give explicit numerical bounds for k=5, 6.
Cite as
Venkatesan Guruswami and Andrii Riazanov. Beating Fredman-Komlós for Perfect k-Hashing. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 92:1-92:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{guruswami_et_al:LIPIcs.ICALP.2019.92,
author = {Guruswami, Venkatesan and Riazanov, Andrii},
title = {{Beating Fredman-Koml\'{o}s for Perfect k-Hashing}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {92:1--92:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.92},
URN = {urn:nbn:de:0030-drops-106687},
doi = {10.4230/LIPIcs.ICALP.2019.92},
annote = {Keywords: Coding theory, perfect hashing, hash family, graph entropy, zero-error information theory}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Thomas Sauerwald and Luca Zanetti
Abstract
We establish and generalise several bounds for various random walk quantities including the mixing time and the maximum hitting time. Unlike previous analyses, our derivations are based on rather intuitive notions of local expansion properties which allow us to capture the progress the random walk makes through t-step probabilities.
We apply our framework to dynamically changing graphs, where the set of vertices is fixed while the set of edges changes in each round. For random walks on dynamic connected graphs for which the stationary distribution does not change over time, we show that their behaviour is in a certain sense similar to that on static graphs. For example, we show that the mixing and hitting times of any sequence of d-regular connected graphs are O(n^2), generalising a well-known result for static graphs. We also provide refined bounds depending on the isoperimetric dimension of the graph, again matching known results for static graphs. Finally, we investigate properties of random walks on dynamic graphs that are not always connected: we relate their convergence to stationarity to the spectral properties of an average of transition matrices and provide some examples that demonstrate strong discrepancies between static and dynamic graphs.
Cite as
Thomas Sauerwald and Luca Zanetti. Random Walks on Dynamic Graphs: Mixing Times, Hitting Times, and Return Probabilities. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{sauerwald_et_al:LIPIcs.ICALP.2019.93,
author = {Sauerwald, Thomas and Zanetti, Luca},
title = {{Random Walks on Dynamic Graphs: Mixing Times, Hitting Times, and Return Probabilities}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {93:1--93:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.93},
URN = {urn:nbn:de:0030-drops-106696},
doi = {10.4230/LIPIcs.ICALP.2019.93},
annote = {Keywords: random walks, dynamic graphs, hitting times}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang
Abstract
We consider algorithms with access to an unknown matrix M in F^{n x d} via matrix-vector products, namely, the algorithm chooses vectors v^1, ..., v^q, and observes Mv^1, ..., Mv^q. Here the v^i can be randomized as well as chosen adaptively as a function of Mv^1, ..., Mv^{i-1}. Motivated by applications of sketching in distributed computation, linear algebra, and streaming models, as well as connections to areas such as communication complexity and property testing, we initiate the study of the number q of queries needed to solve various fundamental problems. We study problems in three broad categories: linear algebra, statistics, and graph problems. For example, we consider the number of queries required to approximate the rank, trace, maximum eigenvalue, and norms of a matrix M; to compute the AND/OR/Parity of each column or row of M; to decide whether there are identical columns or rows in M or whether M is symmetric, diagonal, or unitary; or to compute whether a graph defined by M is connected or triangle-free. We also show separations for algorithms that are allowed to obtain matrix-vector products only by querying vectors on the right, versus algorithms that can query vectors on both the left and the right. We also show separations depending on the underlying field over which the matrix-vector product occurs. For graph problems, we show separations depending on the form of the matrix used to represent the graph (bipartite adjacency versus signed edge-vertex incidence matrix).
Surprisingly, this fundamental model does not appear to have been studied on its own, and we believe a thorough investigation of problems in this model would be beneficial to a number of different application areas.
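To make the access model concrete, here is a toy Python sketch (ours, not the paper's): the hidden matrix is visible only through right matrix-vector products, and a single query with e_i - e_j decides whether columns i and j are identical.

import numpy as np

def make_oracle(M):
    """Expose a hidden matrix M only through the right matrix-vector product v -> Mv."""
    return lambda v: M @ v

def columns_equal(oracle, n_cols, i, j):
    """One query: M(e_i - e_j) is the zero vector iff columns i and j of M coincide."""
    v = np.zeros(n_cols)
    v[i], v[j] = 1.0, -1.0
    return not np.any(oracle(v))

M = np.array([[1.0, 2.0, 1.0],
              [0.0, 3.0, 0.0]])
oracle = make_oracle(M)
print(columns_equal(oracle, 3, 0, 2))  # True
print(columns_equal(oracle, 3, 0, 1))  # False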
Cite as
Xiaoming Sun, David P. Woodruff, Guang Yang, and Jialin Zhang. Querying a Matrix Through Matrix-Vector Products. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 94:1-94:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{sun_et_al:LIPIcs.ICALP.2019.94,
author = {Sun, Xiaoming and Woodruff, David P. and Yang, Guang and Zhang, Jialin},
title = {{Querying a Matrix Through Matrix-Vector Products}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {94:1--94:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.94},
URN = {urn:nbn:de:0030-drops-106709},
doi = {10.4230/LIPIcs.ICALP.2019.94},
annote = {Keywords: Communication complexity, linear algebra, sketching}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mikkel Thorup, Or Zamir, and Uri Zwick
Abstract
We consider word RAM data structures for maintaining ordered sets of integers whose select and rank operations are allowed to return approximate results, i.e., return ranks, or items whose ranks, that differ by less than Delta from the exact answer, where Delta=Delta(n) is an error parameter. Related to approximate select and rank is approximate (one-dimensional) nearest-neighbor. A special case of approximate select queries is approximate min queries. Data structures that support approximate min operations are known as approximate heaps (priority queues). Related to approximate heaps are soft heaps, which are approximate heaps with a different notion of approximation.
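The error model itself is easy to illustrate (this sketch is ours and only demonstrates the approximation notion; it is a static toy, not one of the paper's word RAM structures): keeping every Delta-th element of a sorted array already answers rank queries with additive error less than Delta.

import bisect

class ApproxRank:
    """Static toy structure: rank answers with additive error < delta via a delta-spaced sample."""

    def __init__(self, sorted_items, delta):
        self.delta = delta
        self.sample = sorted_items[::delta]  # every delta-th element, still sorted

    def approx_rank(self, x):
        # If r is the exact rank of x, the returned k*delta satisfies r <= k*delta <= r + delta - 1.
        return bisect.bisect_left(self.sample, x) * self.delta

items = list(range(0, 1000, 3))   # a sorted set of integers
ar = ApproxRank(items, delta=10)
print(ar.approx_rank(500))        # 170, while the exact rank is 167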
We prove the optimality of all the data structures presented, either through matching cell-probe lower bounds, or through equivalences to well-studied static problems. For approximate select, rank, and nearest-neighbor operations we get matching cell-probe lower bounds. We prove an equivalence between approximate min operations, i.e., approximate heaps, and the static partitioning problem. Finally, we prove an equivalence between soft heaps and the classical sorting problem, on a smaller number of items.
Our results have many interesting and unexpected consequences. It turns out that approximation greatly speeds up some of these operations, while others are almost unaffected. In particular, while select and rank have identical operation times, both in comparison-based and word RAM implementations, an interesting separation emerges between the approximate versions of these operations in the word RAM model. Approximate select is much faster than approximate rank. It also turns out that approximate min is exponentially faster than the more general approximate select. Next, we show that implementing soft heaps is harder than implementing approximate heaps. The relation between them corresponds to the relation between sorting and partitioning.
Finally, as an interesting byproduct, we observe that a combination of known techniques yields a deterministic word RAM algorithm for (exactly) sorting n items in O(n log log_w n) time, where w is the word length. Even for the easier problem of finding duplicates, the best previous deterministic bound was O(min{n log log n,n log_w n}). Our new unifying bound is an improvement when w is sufficiently large compared with n.
Cite as
Mikkel Thorup, Or Zamir, and Uri Zwick. Dynamic Ordered Sets with Approximate Queries, Approximate Heaps and Soft Heaps. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 95:1-95:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{thorup_et_al:LIPIcs.ICALP.2019.95,
author = {Thorup, Mikkel and Zamir, Or and Zwick, Uri},
title = {{Dynamic Ordered Sets with Approximate Queries, Approximate Heaps and Soft Heaps}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {95:1--95:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.95},
URN = {urn:nbn:de:0030-drops-106712},
doi = {10.4230/LIPIcs.ICALP.2019.95},
annote = {Keywords: Order queries, word RAM, lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Thomas Watson
Abstract
We provide a complete picture of the extent to which amplification of success probability is possible for randomized algorithms having access to one NP oracle query, in the settings of two-sided, one-sided, and zero-sided error. We generalize this picture to amplifying one-query algorithms with q-query algorithms, and we show our inclusions are tight for relativizing techniques.
Cite as
Thomas Watson. Amplification with One NP Oracle Query. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 96:1-96:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{watson:LIPIcs.ICALP.2019.96,
author = {Watson, Thomas},
title = {{Amplification with One NP Oracle Query}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {96:1--96:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.96},
URN = {urn:nbn:de:0030-drops-106726},
doi = {10.4230/LIPIcs.ICALP.2019.96},
annote = {Keywords: Amplification, NP, oracle, query}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
David P. Woodruff and Guang Yang
Abstract
In a k-party communication problem, the k players with inputs x_1, x_2, ..., x_k, respectively, want to evaluate a function f(x_1, x_2, ..., x_k) using as little communication as possible. We consider the message-passing model, in which the inputs are partitioned in an arbitrary, possibly worst-case manner, among a smaller number t of players (t<k). The t-player communication cost of computing f can only be smaller than the k-player communication cost, since the t players can trivially simulate the k-player protocol. But how much smaller can it be? We study deterministic and randomized protocols in the one-way model, and provide separations for product input distributions, which are optimal for low error probability protocols. We also provide much stronger separations when the input distribution is non-product.
A key application of our results is in proving lower bounds for data stream algorithms. In particular, we give an optimal Omega(epsilon^{-2} log(N) log log(mM)) bits of space lower bound for the fundamental problem of (1 +/- epsilon)-approximating the number |x|_0 of non-zero entries of an n-dimensional vector x after m updates each of magnitude M, and with success probability >= 2/3, in a strict turnstile stream. Our result matches the best known upper bound when epsilon >= 1/polylog(mM). It also improves on the prior Omega(epsilon^{-2} log(mM)) lower bound and separates the complexity of approximating L_0 from approximating the p-norm L_p for p bounded away from 0, since the latter has an O(epsilon^{-2} log(mM)) bit upper bound.
Cite as
David P. Woodruff and Guang Yang. Separating k-Player from t-Player One-Way Communication, with Applications to Data Streams. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 97:1-97:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{woodruff_et_al:LIPIcs.ICALP.2019.97,
author = {Woodruff, David P. and Yang, Guang},
title = {{Separating k-Player from t-Player One-Way Communication, with Applications to Data Streams}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {97:1--97:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.97},
URN = {urn:nbn:de:0030-drops-106733},
doi = {10.4230/LIPIcs.ICALP.2019.97},
annote = {Keywords: Communication complexity, multi-player communication, one-way communication, streaming complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Chaoping Xing and Chen Yuan
Abstract
Locally recoverable codes are a class of block codes with an additional property called locality. A locally recoverable code with locality r can recover a symbol by reading at most r other symbols. Recently, it was discovered by several authors that a q-ary optimal locally recoverable code, i.e., a locally recoverable code achieving the Singleton-type bound, can have length much bigger than q+1. In this paper, we present both the upper bound and the lower bound on the length of optimal locally recoverable codes. Our lower bound improves the best known result in [Yuan Luo et al., 2018] for all distance d >= 7. This result builds on the observation that a parity-check matrix with the Vandermonde structure produces an optimal locally recoverable code if it satisfies a certain expansion property for subsets of F_q. To our surprise, this expansion property is then shown to be equivalent to a well-studied problem in extremal graph theory. Our upper bound is derived by a refined analysis of the arguments of Theorem 3.3 in [Venkatesan Guruswami et al., 2018].
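A minimal illustration of locality (our toy construction, nowhere near the optimal codes studied in the paper): append one parity symbol per group of r message symbols, so that any single erased symbol is recoverable from the at most r other symbols of its group.

def encode_with_locality(message, r, q):
    """Append one parity per group of r symbols so each group sums to 0 mod q."""
    codeword, groups = [], []
    for start in range(0, len(message), r):
        block = message[start:start + r]
        block = block + [(-sum(block)) % q]
        groups.append(list(range(len(codeword), len(codeword) + len(block))))
        codeword.extend(block)
    return codeword, groups

def recover(codeword, groups, erased_pos, q):
    """Recover one erased position from the other symbols of its group (locality <= r)."""
    group = next(g for g in groups if erased_pos in g)
    return (-sum(codeword[i] for i in group if i != erased_pos)) % q

msg = [3, 1, 4, 1, 5, 2]
cw, groups = encode_with_locality(msg, r=3, q=7)
print(recover(cw, groups, erased_pos=1, q=7) == cw[1])  # True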
Cite as
Chaoping Xing and Chen Yuan. Construction of Optimal Locally Recoverable Codes and Connection with Hypergraph. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 98:1-98:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{xing_et_al:LIPIcs.ICALP.2019.98,
author = {Xing, Chaoping and Yuan, Chen},
title = {{Construction of Optimal Locally Recoverable Codes and Connection with Hypergraph}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {98:1--98:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.98},
URN = {urn:nbn:de:0030-drops-106745},
doi = {10.4230/LIPIcs.ICALP.2019.98},
annote = {Keywords: Locally Repairable Codes, Hypergraph}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Joran van Apeldoorn and András Gilyén
Abstract
Following the first paper on quantum algorithms for SDP-solving by Brandão and Svore [Brandão and Svore, 2017] in 2016, rapid developments have been made on quantum optimization algorithms. In this paper we improve and generalize all prior quantum algorithms for SDP-solving and give a simpler and unified framework.
We take a new perspective on quantum SDP-solvers and introduce several new techniques. One of these is the quantum operator input model, which generalizes the different input models used in previous work, and essentially any other reasonable input model. This new model assumes that the input matrices are embedded in a block of a unitary operator. In this model we give an O~((sqrt{m}+sqrt{n} gamma) alpha gamma^4) algorithm, where n is the size of the matrices, m is the number of constraints, gamma is the reciprocal of the scale-invariant relative precision parameter, and alpha is a normalization factor of the input matrices. In particular, for the standard sparse-matrix access, the above result gives a quantum algorithm where alpha=s. We also improve on recent results of Brandão et al. [Fernando G. S. L. Brandão et al., 2018], who consider the special case when the input matrices are proportional to mixed quantum states that one can query. For this model Brandão et al. [Fernando G. S. L. Brandão et al., 2018] showed that the dependence on n can be replaced by a polynomial dependence on both the rank and the trace of the input matrices. We remove the dependence on the rank and hence require only a dependence on the trace of the input matrices.
After we obtain these results we apply them to a few different problems, the most notable of which is the problem of shadow tomography, recently introduced by Aaronson [Aaronson, 2018]. Here we simultaneously improve both the sample and computational complexity of the previous best results. Finally, we prove a new Omega~(sqrt{m} alpha gamma) lower bound for solving LPs and SDPs in the quantum operator model, which also implies a lower bound for the model of Brandão et al. [Fernando G. S. L. Brandão et al., 2018].
Cite as
Joran van Apeldoorn and András Gilyén. Improvements in Quantum SDP-Solving with Applications. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 99:1-99:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{vanapeldoorn_et_al:LIPIcs.ICALP.2019.99,
author = {van Apeldoorn, Joran and Gily\'{e}n, Andr\'{a}s},
title = {{Improvements in Quantum SDP-Solving with Applications}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {99:1--99:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.99},
URN = {urn:nbn:de:0030-drops-106750},
doi = {10.4230/LIPIcs.ICALP.2019.99},
annote = {Keywords: quantum algorithms, semidefinite programming, shadow tomography}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Bader Abu Radi and Orna Kupferman
Abstract
While many applications of automata in formal methods can use nondeterministic automata, some applications, most notably synthesis, need deterministic or good-for-games automata. The latter are nondeterministic automata that can resolve their nondeterministic choices in a way that only depends on the past. The minimization problems for nondeterministic and for deterministic Büchi and co-Büchi word automata are PSPACE-complete and NP-complete, respectively. We describe a polynomial minimization algorithm for good-for-games co-Büchi word automata with transition-based acceptance, that is, a run is accepting if it traverses a set of designated transitions only finitely often. Our algorithm is based on a sequence of transformations we apply to the automaton, on top of which a minimal quotient automaton is defined.
Cite as
Bader Abu Radi and Orna Kupferman. Minimizing GFG Transition-Based Automata (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 100:1-100:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{aburadi_et_al:LIPIcs.ICALP.2019.100,
author = {Abu Radi, Bader and Kupferman, Orna},
title = {{Minimizing GFG Transition-Based Automata}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {100:1--100:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.100},
URN = {urn:nbn:de:0030-drops-106761},
doi = {10.4230/LIPIcs.ICALP.2019.100},
annote = {Keywords: Minimization, Deterministic co-B\"{u}chi Automata}
}
Document
Extended Abstract
Authors:
Mohamed-Amine Baazizi, Dario Colazzo, Giorgio Ghelli, and Carlo Sartiani
Abstract
In this paper we present the first JSON type system that makes it possible to infer a schema by adopting different levels of precision/succinctness for different parts of the dataset, under user control. This feature lets the data analyst obtain detailed schemas for the parts of the data of greater interest, while a more succinct schema is provided for the other parts. The decision can be revised as many times as needed, so that the schema can be explored gradually, moving the focus to different parts of the collection, without reprocessing the data and by only performing type rewriting operations on the most precise schema.
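The precision/succinctness trade-off can be illustrated with a small inference routine (ours, purely illustrative; the paper defines a full type system and rewriting operations): the same value yields a detailed record type or a collapsed summary depending on the requested precision.

import json

def infer_type(value, precise=True):
    """Infer a tiny schema: 'precise' keeps the per-field structure of objects,
    otherwise objects collapse to a succinct {field: "Any"} summary."""
    if isinstance(value, dict):
        if precise:
            return {k: infer_type(v, precise) for k, v in value.items()}
        return {k: "Any" for k in value}
    if isinstance(value, list):
        kinds = {json.dumps(infer_type(v, precise), sort_keys=True) for v in value}
        return [json.loads(t) for t in sorted(kinds)]   # union of element types
    return type(value).__name__                         # "str", "int", "bool", ...

doc = {"name": "ada", "tags": ["a", "b"], "address": {"city": "x", "zip": 1}}
print(infer_type(doc, precise=True))   # detailed nested record type
print(infer_type(doc, precise=False))  # succinct one-level summary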
Cite as
Mohamed-Amine Baazizi, Dario Colazzo, Giorgio Ghelli, and Carlo Sartiani. A Type System for Interactive JSON Schema Inference (Extended Abstract). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 101:1-101:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{baazizi_et_al:LIPIcs.ICALP.2019.101,
author = {Baazizi, Mohamed-Amine and Colazzo, Dario and Ghelli, Giorgio and Sartiani, Carlo},
title = {{A Type System for Interactive JSON Schema Inference}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {101:1--101:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.101},
URN = {urn:nbn:de:0030-drops-106774},
doi = {10.4230/LIPIcs.ICALP.2019.101},
annote = {Keywords: JSON, type systems, interactive inference}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Nikhil Balaji, Stefan Kiefer, Petr Novotný, Guillermo A. Pérez, and Mahsa Shirmohammadi
Abstract
Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation that is naturally associated with the MDP. At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXPTIME-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. To obtain this main result, we develop several stepping stones that yield results of independent interest. For instance, we show that it is EXPTIME-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators. We also provide new complexity results for the bounded halting problem in linear-update counter machines.
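For reference, the recurrence in question is the textbook finite-horizon value iteration sketched below in Python (a standard sketch, not the paper's construction); note that with the horizon n given in binary this loop performs exponentially many iterations in the input size, which is what makes the complexity question delicate.

def value_iteration(states, actions, P, reward, n):
    """Finite-horizon value iteration.
    actions[s]: available actions; P[s][a]: list of (probability, next_state);
    reward[s][a]: immediate payoff. Returns V (maximal expected n-step payoff from
    each state) and policy, where policy[t][s] is optimal with t+1 steps remaining."""
    V = {s: 0.0 for s in states}
    policy = []
    for _ in range(n):
        newV, rule = {}, {}
        for s in states:
            best_a, best_val = None, float("-inf")
            for a in actions[s]:
                val = reward[s][a] + sum(p * V[s2] for p, s2 in P[s][a])
                if val > best_val:
                    best_a, best_val = a, val
            newV[s], rule[s] = best_val, best_a
        V, policy = newV, policy + [rule]
    return V, policy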
Cite as
Nikhil Balaji, Stefan Kiefer, Petr Novotný, Guillermo A. Pérez, and Mahsa Shirmohammadi. On the Complexity of Value Iteration (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 102:1-102:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{balaji_et_al:LIPIcs.ICALP.2019.102,
author = {Balaji, Nikhil and Kiefer, Stefan and Novotn\'{y}, Petr and P\'{e}rez, Guillermo A. and Shirmohammadi, Mahsa},
title = {{On the Complexity of Value Iteration}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {102:1--102:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.102},
URN = {urn:nbn:de:0030-drops-106782},
doi = {10.4230/LIPIcs.ICALP.2019.102},
annote = {Keywords: Markov decision processes, Value iteration, Formal verification}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Pablo Barceló, Chih-Duo Hong, Xuan-Bach Le, Anthony W. Lin, and Reino Niskanen
Abstract
Monadic decomposability - the ability to determine whether a formula in a given logical theory can be decomposed into a boolean combination of monadic formulas - is a powerful tool for devising a decision procedure for a given logical theory. In this paper, we revisit a classical decision problem in automata theory: given a regular (a.k.a. synchronized rational) relation, determine whether it is recognizable, i.e., whether it has a monadic decomposition (that is, a representation as a boolean combination of cartesian products of regular languages). Regular relations are expressive formalisms which, using an appropriate string encoding, can capture relations definable in Presburger Arithmetic. In fact, their expressive power coincides with that of relations definable in a universal automatic structure; equivalently, those definable by finite set interpretations in WS1S (Weak Second Order Theory of One Successor). Determining whether a regular relation is recognizable was known to be decidable (and in exponential time for binary relations), but its precise complexity has hitherto remained open. Our main contribution is to fully settle the complexity of this decision problem by developing new techniques employing infinite Ramsey theory. The complexity for DFA (resp. NFA) representations of regular relations is shown to be NLOGSPACE-complete (resp. PSPACE-complete).
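To illustrate the notion of recognizability used above (a toy example only, not the paper's decision procedure), the following Python snippet checks by brute force that the regular relation R = {(x, y) : |x| + |y| is even} over a unary alphabet coincides with a boolean combination of cartesian products of regular languages, namely (Even x Even) union (Odd x Odd).

from itertools import product

def R(x, y):                        # the relation itself: |x| + |y| is even
    return (len(x) + len(y)) % 2 == 0

def decomposition(x, y):            # a boolean combination of products of regular languages
    even = lambda w: len(w) % 2 == 0
    return (even(x) and even(y)) or (not even(x) and not even(y))

words = ["a" * k for k in range(6)]
assert all(R(x, y) == decomposition(x, y) for x, y in product(words, repeat=2))
print("R agrees with its monadic decomposition on all tested pairs")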
Cite as
Pablo Barceló, Chih-Duo Hong, Xuan-Bach Le, Anthony W. Lin, and Reino Niskanen. Monadic Decomposability of Regular Relations (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 103:1-103:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{barcelo_et_al:LIPIcs.ICALP.2019.103,
author = {Barcel\'{o}, Pablo and Hong, Chih-Duo and Le, Xuan-Bach and Lin, Anthony W. and Niskanen, Reino},
title = {{Monadic Decomposability of Regular Relations}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {103:1--103:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.103},
URN = {urn:nbn:de:0030-drops-106790},
doi = {10.4230/LIPIcs.ICALP.2019.103},
annote = {Keywords: Transducers, Automata, Synchronized Rational Relations, Ramsey Theory, Variable Independence, Automatic Structures}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Pablo Barceló, Diego Figueira, and Miguel Romero
Abstract
We study the boundedness problem for unions of conjunctive regular path queries with inverses (UC2RPQs). This is the problem of, given a UC2RPQ, checking whether it is equivalent to a union of conjunctive queries (UCQ). We show the problem to be ExpSpace-complete, thus coinciding with the complexity of containment for UC2RPQs. As a corollary, when a UC2RPQ is bounded, it is equivalent to a UCQ of at most triple-exponential size, and in fact we show that this bound is optimal. We also study better behaved classes of UC2RPQs, namely acyclic UC2RPQs of bounded thickness, and strongly connected UCRPQs, whose boundedness problem is, respectively, PSpace-complete and Pi_2^P-complete. Most upper bounds exploit results on limitedness for distance automata, in particular extending the model with alternation and two-wayness, which may be of independent interest.
Cite as
Pablo Barceló, Diego Figueira, and Miguel Romero. Boundedness of Conjunctive Regular Path Queries (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 104:1-104:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{barcelo_et_al:LIPIcs.ICALP.2019.104,
author = {Barcel\'{o}, Pablo and Figueira, Diego and Romero, Miguel},
title = {{Boundedness of Conjunctive Regular Path Queries}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {104:1--104:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.104},
URN = {urn:nbn:de:0030-drops-106803},
doi = {10.4230/LIPIcs.ICALP.2019.104},
annote = {Keywords: regular path queries, boundedness, limitedness, distance automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Paul C. Bell
Abstract
We consider the computability and complexity of decision questions for Probabilistic Finite Automata (PFA) with sub-exponential ambiguity. We show that the emptiness problem for non-strict cut-points of polynomially ambiguous PFA remains undecidable even when the input word is over a bounded language and all PFA transition matrices are commutative. In doing so, we introduce a new technique based upon the Turakainen construction of a PFA from a weighted finite automaton, which can be used to generate PFA of lower dimensions and of subexponential ambiguity. We also study freeness/injectivity problems for polynomially ambiguous PFA and examine the border of decidability and tractability in various cases.
Cite as
Paul C. Bell. Polynomially Ambiguous Probabilistic Automata on Restricted Languages (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 105:1-105:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{bell:LIPIcs.ICALP.2019.105,
author = {Bell, Paul C.},
title = {{Polynomially Ambiguous Probabilistic Automata on Restricted Languages}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {105:1--105:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.105},
URN = {urn:nbn:de:0030-drops-106814},
doi = {10.4230/LIPIcs.ICALP.2019.105},
annote = {Keywords: Probabilistic finite automata, ambiguity, undecidability, bounded language, emptiness}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Mikołaj Bojańczyk, Sandra Kiefer, and Nathan Lhote
Abstract
String-to-string MSO interpretations are like Courcelle’s MSO transductions, except that a single output position can be represented using a tuple of input positions instead of just a single input position. In particular, the output length is polynomial in the input length, as opposed to MSO transductions, which have output of linear length. We show that string-to-string MSO interpretations are exactly the polyregular functions. The latter class has various characterisations, one of which is that it consists of the string-to-string functions recognised by pebble transducers.
Our main result implies the surprising fact that string-to-string MSO interpretations are closed under composition.
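A standard illustration of a function with polynomial-size output (not taken from this paper) is the squaring function mapping w to |w| concatenated copies of w; the nested loops below mimic a two-pebble transducer, and the output length |w|^2 reflects the fact that an output position corresponds to a pair of input positions.

def squaring(w: str) -> str:
    """Map w to |w| concatenated copies of w (output length |w|^2)."""
    out = []
    for _outer in range(len(w)):    # outer pebble: which copy is being produced
        for letter in w:            # inner head: re-reads the whole input
            out.append(letter)
    return "".join(out)

assert squaring("abc") == "abcabcabc"
assert len(squaring("abcd")) == len("abcd") ** 2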
Cite as
Mikołaj Bojańczyk, Sandra Kiefer, and Nathan Lhote. String-to-String Interpretations With Polynomial-Size Output (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 106:1-106:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2019.106,
author = {Boja\'{n}czyk, Miko{\l}aj and Kiefer, Sandra and Lhote, Nathan},
title = {{String-to-String Interpretations With Polynomial-Size Output}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {106:1--106:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.106},
URN = {urn:nbn:de:0030-drops-106821},
doi = {10.4230/LIPIcs.ICALP.2019.106},
annote = {Keywords: MSO, interpretations, pebble transducers, polyregular functions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Paul Brunet and Alexandra Silva
Abstract
Nominal automata are a widely studied class of automata designed to recognise languages over infinite alphabets. In this paper, we present a Kleene theorem for nominal automata by providing a syntax to denote regular nominal languages. We use regular expressions with explicit binders for creation and destruction of names, and pinpoint an exact property of these expressions - namely memory-finiteness - that identifies a subclass of expressions denoting exactly the regular nominal languages.
Cite as
Paul Brunet and Alexandra Silva. A Kleene Theorem for Nominal Automata (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 107:1-107:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{brunet_et_al:LIPIcs.ICALP.2019.107,
author = {Brunet, Paul and Silva, Alexandra},
title = {{A Kleene Theorem for Nominal Automata}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {107:1--107:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.107},
URN = {urn:nbn:de:0030-drops-106834},
doi = {10.4230/LIPIcs.ICALP.2019.107},
annote = {Keywords: Kleene Theorem, Nominal automata, Bracket Algebra}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Titouan Carette, Emmanuel Jeandel, Simon Perdrix, and Renaud Vilmart
Abstract
There exist several graphical languages for quantum information processing, like quantum circuits, ZX-Calculus, ZW-Calculus, etc. Each of these languages forms a dagger-symmetric monoidal category (dagger-SMC) and comes with an interpretation functor to the dagger-SMC of (finite-dimensional) Hilbert spaces. In recent years, one of the main achievements of the categorical approach to quantum mechanics has been to provide several equational theories for most of these graphical languages, making them complete for various fragments of pure quantum mechanics.
We address the question of the extension of these languages beyond pure quantum mechanics, in order to reason on mixed states and general quantum operations, i.e. completely positive maps. Intuitively, such an extension relies on the axiomatisation of a discard map which allows one to get rid of a quantum system, an operation which is not allowed in pure quantum mechanics.
We introduce a new construction, the discard construction, which transforms any dagger-symmetric monoidal category into a symmetric monoidal category equipped with a discard map. Roughly speaking, this construction consists in making any isometry causal.
Using this construction we provide an extension for several graphical languages that we prove to be complete for general quantum operations. However, this construction fails for some fringe cases like Clifford+T quantum mechanics, as the category does not have enough isometries.
Cite as
Titouan Carette, Emmanuel Jeandel, Simon Perdrix, and Renaud Vilmart. Completeness of Graphical Languages for Mixed States Quantum Mechanics (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 108:1-108:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{carette_et_al:LIPIcs.ICALP.2019.108,
author = {Carette, Titouan and Jeandel, Emmanuel and Perdrix, Simon and Vilmart, Renaud},
title = {{Completeness of Graphical Languages for Mixed States Quantum Mechanics}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {108:1--108:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.108},
URN = {urn:nbn:de:0030-drops-106844},
doi = {10.4230/LIPIcs.ICALP.2019.108},
annote = {Keywords: Quantum Computing, Quantum Categorical Mechanics, Category Theory, Mixed States, Completely Positive Maps}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Katrin Casel, Joel D. Day, Pamela Fleischmann, Tomasz Kociumaka, Florin Manea, and Markus L. Schmid
Abstract
We investigate the locality number, a recently introduced structural parameter for strings (with applications in pattern matching with variables), and its connection to two important graph-parameters, cutwidth and pathwidth. These connections allow us to show that computing the locality number is NP-hard but fixed-parameter tractable (when the locality number or the alphabet size is treated as a parameter), and can be approximated with ratio O(sqrt{log{opt}} log n). As a by-product, we also relate cutwidth via the locality number to pathwidth, which is of independent interest, since it improves the best currently known approximation algorithm for cutwidth. In addition to these main results, we also consider the possibility of greedy-based approximation algorithms for the locality number.
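As a concrete reading of the definition (a brute-force sketch, exponential in the alphabet size and unrelated to the approximation algorithms of the paper), the locality number of a word is the minimum, over all orderings in which its letters are marked, of the maximum number of marked blocks arising during the marking process.

from itertools import permutations

def blocks(marked):
    """Number of maximal runs of marked positions."""
    return sum(1 for i, m in enumerate(marked)
               if m and (i == 0 or not marked[i - 1]))

def locality_number(w):
    best = None
    for order in permutations(set(w)):           # a candidate marking sequence
        marked = [False] * len(w)
        worst = 0
        for letter in order:                     # mark all occurrences of the letter at once
            for i, c in enumerate(w):
                if c == letter:
                    marked[i] = True
            worst = max(worst, blocks(marked))
        best = worst if best is None else min(best, worst)
    return best

print(locality_number("abab"))      # 2
print(locality_number("abcacb"))    # 2: whichever letter is marked first already creates two blocks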
Cite as
Katrin Casel, Joel D. Day, Pamela Fleischmann, Tomasz Kociumaka, Florin Manea, and Markus L. Schmid. Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 109:1-109:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{casel_et_al:LIPIcs.ICALP.2019.109,
author = {Casel, Katrin and Day, Joel D. and Fleischmann, Pamela and Kociumaka, Tomasz and Manea, Florin and Schmid, Markus L.},
title = {{Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {109:1--109:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.109},
URN = {urn:nbn:de:0030-drops-106858},
doi = {10.4230/LIPIcs.ICALP.2019.109},
annote = {Keywords: Graph and String Parameters, NP-Completeness, Approximation Algorithms}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Laura Ciobanu and Murray Elder
Abstract
We show that the full set of solutions to systems of equations and inequations in a hyperbolic group, with or without torsion, as shortlex geodesic words, is an EDT0L language whose specification can be computed in NSPACE(n^2 log n) for the torsion-free case and NSPACE(n^4 log n) for the torsion case. Our work combines deep geometric results by Rips, Sela, Dahmani and Guirardel on decidability of existential theories of hyperbolic groups, work of computer scientists including Plandowski, Jeż, Diekert and others on PSPACE algorithms to solve equations in free monoids and groups using compression, and an intricate language-theoretic analysis.
The present work gives an essentially optimal formal language description for all solutions in all hyperbolic groups, and an explicit and surprisingly low space complexity to compute them.
Cite as
Laura Ciobanu and Murray Elder. Solutions Sets to Systems of Equations in Hyperbolic Groups Are EDT0L in PSPACE (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 110:1-110:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{ciobanu_et_al:LIPIcs.ICALP.2019.110,
author = {Ciobanu, Laura and Elder, Murray},
title = {{Solutions Sets to Systems of Equations in Hyperbolic Groups Are EDT0L in PSPACE}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {110:1--110:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.110},
URN = {urn:nbn:de:0030-drops-106867},
doi = {10.4230/LIPIcs.ICALP.2019.110},
annote = {Keywords: Hyperbolic group, Existential theory, EDT0L language, PSPACE}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Ugo Dal Lago, Francesco Gavazzo, and Akira Yoshimizu
Abstract
We introduce a new form of logical relation which, in the spirit of metric relations, allows us to assign to each pair of programs a quantity measuring their distance, rather than a boolean value standing for their being equivalent. The novelty of differential logical relations consists in measuring the distance between terms not (necessarily) by a numerical value, but by a mathematical object which somehow reflects the interactive complexity, i.e. the type, of the compared terms. We exemplify this concept in the simply-typed lambda-calculus, and show a form of soundness theorem. We also see how ordinary logical relations and metric relations can be seen as instances of differential logical relations. Finally, we show that differential logical relations can be organised in a cartesian closed category, in contrast to metric relations, which are well known not to have such a structure, but only that of a monoidal closed category.
Cite as
Ugo Dal Lago, Francesco Gavazzo, and Akira Yoshimizu. Differential Logical Relations, Part I: The Simply-Typed Case (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 111:1-111:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{dallago_et_al:LIPIcs.ICALP.2019.111,
author = {Dal Lago, Ugo and Gavazzo, Francesco and Yoshimizu, Akira},
title = {{Differential Logical Relations, Part I: The Simply-Typed Case}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {111:1--111:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.111},
URN = {urn:nbn:de:0030-drops-106879},
doi = {10.4230/LIPIcs.ICALP.2019.111},
annote = {Keywords: Logical Relations, lambda-Calculus, Program Equivalence, Semantics}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Anuj Dawar, Erich Grädel, and Wied Pakusa
Abstract
Invertible map equivalences are approximations of graph isomorphism that refine the well-known Weisfeiler-Leman method. They are parameterized by a number k and a set Q of primes. The intuition is that two equivalent graphs G equiv^IM_{k, Q} H cannot be distinguished by means of partitioning the set of k-tuples in both graphs with respect to any linear-algebraic operator acting on vector spaces over fields of characteristic p, for any p in Q. These equivalences first appeared in the study of rank logic, but in fact they can be used to delimit the expressive power of any extension of fixed-point logic with linear-algebraic operators. We define LA^{k}(Q), an infinitary logic with k variables and all linear-algebraic operators over finite vector spaces of characteristic p in Q, and show that equiv^IM_{k, Q} is the natural notion of elementary equivalence for this logic. The logic LA^{omega}(Q) = Union_{k in omega} LA^{k}(Q) is then a natural upper bound on the expressive power of any extension of fixed-point logics by means of Q-linear-algebraic operators.
By means of a new and much deeper algebraic analysis of a generalized variant, for any prime p, of the CFI-structures due to Cai, Fürer, and Immerman, we prove that, as long as Q is not the set of all primes, there is no k such that equiv^IM_{k, Q} is the same as isomorphism. It follows that there are polynomial-time properties of graphs which are not definable in LA^{omega}(Q), which implies that no extension of fixed-point logic with linear-algebraic operators can capture PTIME, unless it includes such operators for all prime characteristics. Our analysis requires substantial algebraic machinery, including a homogeneity property of CFI-structures and Maschke’s Theorem, an important result from the representation theory of finite groups.
Cite as
Anuj Dawar, Erich Grädel, and Wied Pakusa. Approximations of Isomorphism and Logics with Linear-Algebraic Operators (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 112:1-112:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{dawar_et_al:LIPIcs.ICALP.2019.112,
author = {Dawar, Anuj and Gr\"{a}del, Erich and Pakusa, Wied},
title = {{Approximations of Isomorphism and Logics with Linear-Algebraic Operators}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {112:1--112:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.112},
URN = {urn:nbn:de:0030-drops-106887},
doi = {10.4230/LIPIcs.ICALP.2019.112},
annote = {Keywords: Finite Model Theory, Graph Isomorphism, Descriptive Complexity, Algebra}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Holger Dell, Marc Roth, and Philip Wellnitz
Abstract
Conjunctive queries select and are expected to return certain tuples from a relational database. We study the potentially easier problem of counting all selected tuples, rather than enumerating them. In particular, we are interested in the problem’s parameterized and data complexity, where the query is considered to be small or even fixed, and the database is considered to be large. We identify two structural parameters for conjunctive queries that capture their inherent complexity: The dominating star size and the linked matching number. If the dominating star size of a conjunctive query is large, then we show that counting solution tuples to the query is at least as hard as counting dominating sets, which yields a fine-grained complexity lower bound under the Strong Exponential Time Hypothesis (SETH) as well as a #W[2]-hardness result in parameterized complexity. Moreover, if the linked matching number of a conjunctive query is large, then we show that the structure of the query is so rich that arbitrary queries up to a certain size can be encoded into it; in the language of parameterized complexity, this essentially establishes a #A[2]-completeness result.
Using ideas stemming from Lovász (1967), we lift complexity results from the class of conjunctive queries to arbitrary existential or universal formulas that might contain inequalities and negations on constraints over the free variables. As a consequence, we obtain a complexity classification that refines and generalizes previous results of Chen, Durand, and Mengel (ToCS 2015; ICDT 2015; PODS 2016) for conjunctive queries and of Curticapean and Marx (FOCS 2014) for the subgraph counting problem. Our proof also relies on graph minors, and we show a strengthening of the Excluded-Grid-Theorem which might be of independent interest: If the linked matching number (and thus the treewidth) is large, then not only can we find a large grid somewhere in the graph, but we can find a large grid whose diagonal has disjoint paths leading into an assumed node-well-linked set.
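For concreteness, the counting problem itself can be stated by the following brute-force Python sketch (exponential in the query size, and of course not one of the paper's fine-grained algorithms): count the assignments to the free variables for which some assignment to the existentially quantified variables satisfies every atom of the query. The query and database encodings are our own.

from itertools import product

def satisfies(env, atoms, db):
    return all(tuple(env[v] for v in vs) in db[rel] for rel, vs in atoms)

def count_answers(free_vars, exist_vars, atoms, db):
    domain = sorted({c for rel in db.values() for t in rel for c in t})
    count = 0
    for free in product(domain, repeat=len(free_vars)):
        env = dict(zip(free_vars, free))
        if any(satisfies({**env, **dict(zip(exist_vars, ext))}, atoms, db)
               for ext in product(domain, repeat=len(exist_vars))):
            count += 1
    return count

# Q(x) := exists y. E(x, y) and E(y, x), over a small directed graph
graph = {"E": {(1, 2), (2, 1), (2, 3)}}
print(count_answers(["x"], ["y"], [("E", ["x", "y"]), ("E", ["y", "x"])], graph))   # 2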
Cite as
Holger Dell, Marc Roth, and Philip Wellnitz. Counting Answers to Existential Questions (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 113:1-113:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{dell_et_al:LIPIcs.ICALP.2019.113,
author = {Dell, Holger and Roth, Marc and Wellnitz, Philip},
title = {{Counting Answers to Existential Questions}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {113:1--113:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.113},
URN = {urn:nbn:de:0030-drops-106894},
doi = {10.4230/LIPIcs.ICALP.2019.113},
annote = {Keywords: Conjunctive queries, graph homomorphisms, counting complexity, parameterized complexity, fine-grained complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Dani Dorfman, Haim Kaplan, and Uri Zwick
Abstract
We present an improved exponential time algorithm for Energy Games, and hence also for Mean Payoff Games. The running time of the new algorithm is O(min(m n W, m n 2^{n/2} log W)), where n is the number of vertices, m is the number of edges, and the edge weights are integers of absolute value at most W. For small values of W, the algorithm matches the performance of the pseudopolynomial time algorithm of Brim et al. on which it is based. For W >= n2^{n/2}, the new algorithm is faster than the algorithm of Brim et al. and is currently the fastest deterministic algorithm for Energy Games and Mean Payoff Games. The new algorithm is obtained by introducing a technique of forecasting repetitive actions performed by the algorithm of Brim et al., along with the use of an edge-weight scaling technique.
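As background, here is a minimal Python sketch of the basic value-iteration idea underlying the algorithm of Brim et al. (not the accelerated algorithm of this paper): compute, for every vertex, the least initial credit with which the energy player can keep the accumulated weight nonnegative forever, treating values beyond a coarse safe cap as infinite. The ownership convention, the cap, and the toy game below are our own assumptions.

INF = float("inf")

def energy_values(vertices, owner, edges):
    """owner[v] is "player0" (keeps the energy nonnegative) or "player1" (the opponent);
    edges is a list of (source, target, weight); every vertex has an outgoing edge."""
    n = len(vertices)
    W = max(abs(w) for _, _, w in edges)
    cap = n * W                                   # coarse safe bound on finite credits
    succ = {v: [(u, w) for s, u, w in edges if s == v] for v in vertices}
    f = {v: 0 for v in vertices}                  # least fixpoint computed by Kleene iteration
    changed = True
    while changed:
        changed = False
        for v in vertices:
            needs = [max(0, f[u] - w) for u, w in succ[v]]
            best = min(needs) if owner[v] == "player0" else max(needs)
            if best > cap:
                best = INF
            if best > f[v]:
                f[v] = best
                changed = True
    return f

# From "a", credit 3 suffices (loop on the +1 cycle a -> b -> a); from "c" no finite credit does.
verts = ["a", "b", "c"]
owner = {"a": "player0", "b": "player1", "c": "player1"}
edges = [("a", "b", -3), ("a", "c", -1), ("b", "a", 4), ("c", "c", -2)]
print(energy_values(verts, owner, edges))         # {'a': 3, 'b': 0, 'c': inf}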
Cite as
Dani Dorfman, Haim Kaplan, and Uri Zwick. A Faster Deterministic Exponential Time Algorithm for Energy Games and Mean Payoff Games (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 114:1-114:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{dorfman_et_al:LIPIcs.ICALP.2019.114,
author = {Dorfman, Dani and Kaplan, Haim and Zwick, Uri},
title = {{A Faster Deterministic Exponential Time Algorithm for Energy Games and Mean Payoff Games}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {114:1--114:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.114},
URN = {urn:nbn:de:0030-drops-106909},
doi = {10.4230/LIPIcs.ICALP.2019.114},
annote = {Keywords: Energy Games, Mean Payoff Games, Scaling}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Kousha Etessami, Emanuel Martinov, Alistair Stewart, and Mihalis Yannakakis
Abstract
We give polynomial time algorithms for deciding almost-sure and limit-sure reachability in Branching Concurrent Stochastic Games (BCSGs). These are a class of infinite-state imperfect-information stochastic games that generalize both finite-state concurrent stochastic reachability games ([L. de Alfaro et al., 2007]) and branching simple stochastic reachability games ([K. Etessami et al., 2018]).
Cite as
Kousha Etessami, Emanuel Martinov, Alistair Stewart, and Mihalis Yannakakis. Reachability for Branching Concurrent Stochastic Games (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 115:1-115:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{etessami_et_al:LIPIcs.ICALP.2019.115,
author = {Etessami, Kousha and Martinov, Emanuel and Stewart, Alistair and Yannakakis, Mihalis},
title = {{Reachability for Branching Concurrent Stochastic Games}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {115:1--115:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.115},
URN = {urn:nbn:de:0030-drops-106917},
doi = {10.4230/LIPIcs.ICALP.2019.115},
annote = {Keywords: stochastic games, multi-type branching processes, concurrent games, minimax-polynomial equations, reachability, almost-sure, limit-sure}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Marie Fortin
Abstract
We show that over the class of linear orders with additional binary relations satisfying some monotonicity conditions, monadic first-order logic has the three-variable property. This generalizes (and gives a new proof of) several known results, including the fact that monadic first-order logic has the three-variable property over linear orders, as well as over (R,<,+1), and answers some open questions mentioned in a paper by Antonopoulos, Hunter, Raza, and Worrell [FoSSaCS 2015]. Our proof is based on a translation of monadic first-order logic formulas into formulas of a star-free variant of Propositional Dynamic Logic, which are in turn easily expressible in monadic first-order logic with three variables.
Cite as
Marie Fortin. FO = FO^3 for Linear Orders with Monotone Binary Relations (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 116:1-116:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{fortin:LIPIcs.ICALP.2019.116,
author = {Fortin, Marie},
title = {{FO = FO^3 for Linear Orders with Monotone Binary Relations}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {116:1--116:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.116},
URN = {urn:nbn:de:0030-drops-106923},
doi = {10.4230/LIPIcs.ICALP.2019.116},
annote = {Keywords: first-order logic, three-variable property, propositional dynamic logic}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Martin Grohe and Sandra Kiefer
Abstract
The Weisfeiler-Leman (WL) dimension of a graph is a measure for the inherent descriptive complexity of the graph. While originally derived from a combinatorial graph isomorphism test called the Weisfeiler-Leman algorithm, the WL dimension can also be characterised in terms of the number of variables that is required to describe the graph up to isomorphism in first-order logic with counting quantifiers.
It is known that the WL dimension is upper-bounded for all graphs that exclude some fixed graph as a minor [M. Grohe, 2017]. However, the bounds that can be derived from this general result are astronomical. Only recently, it was proved that the WL dimension of planar graphs is at most 3 [S. Kiefer et al., 2017].
In this paper, we prove that the WL dimension of graphs embeddable in a surface of Euler genus g is at most 4g+3. For the WL dimension of graphs embeddable in an orientable surface of Euler genus g, our approach yields an upper bound of 2g + 3.
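For intuition, the following Python sketch implements 1-dimensional Weisfeiler-Leman (colour refinement); the paper concerns the k-dimensional generalisation, whose required dimension is what the bounds above control, but the refinement loop already shows the basic mechanism.

def colour_refinement(adj):
    """adj: dict vertex -> set of neighbours; returns the stable colouring."""
    colour = {v: 0 for v in adj}                   # start with a uniform colouring
    while True:
        # new colour of v = old colour of v together with the multiset of neighbours' old colours
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {v: palette[signature[v]] for v in adj}
        if len(set(refined.values())) == len(set(colour.values())):
            return refined                         # no further refinement: colouring is stable
        colour = refined

path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}      # a path on four vertices
print(colour_refinement(path))                     # endpoints get one colour, inner vertices another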
Cite as
Martin Grohe and Sandra Kiefer. A Linear Upper Bound on the Weisfeiler-Leman Dimension of Graphs of Bounded Genus (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 117:1-117:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{grohe_et_al:LIPIcs.ICALP.2019.117,
author = {Grohe, Martin and Kiefer, Sandra},
title = {{A Linear Upper Bound on the Weisfeiler-Leman Dimension of Graphs of Bounded Genus}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {117:1--117:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.117},
URN = {urn:nbn:de:0030-drops-106931},
doi = {10.4230/LIPIcs.ICALP.2019.117},
annote = {Keywords: Weisfeiler-Leman algorithm, finite-variable logic, isomorphism testing, planar graphs, bounded genus}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Mehran Hosseini, Joël Ouaknine, and James Worrell
Abstract
We consider the problem of deciding termination of single-path while loops with integer variables, affine updates, and affine guard conditions. The question is whether such a loop terminates on all integer initial values. This problem is known to be decidable for the subclass of loops whose update matrices are diagonalisable, but the general case has remained open since being conjectured decidable by Tiwari in 2004. In this paper we show that termination is decidable for arbitrary update matrices, confirming Tiwari’s conjecture. For the class of loops considered in this paper, the question of deciding termination on a specific initial value is a longstanding open problem in number theory. The key to our decision procedure is in showing how to circumvent the difficulties inherent in deciding termination on a fixed initial value.
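To make the setting concrete (this is only an illustration of the loop shape, not a decision procedure), a single-path affine loop has the form "while B x <= b: x := A x + c" over integer vectors; the Python sketch below simulates one initial value for a bounded number of steps, whereas the paper decides termination over all integer initial values.

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def simulate(A, c, B, b, x0, max_steps=1000):
    """Run "while B x <= b: x := A x + c" from x0 for at most max_steps iterations."""
    x = list(x0)
    for step in range(max_steps):
        if not all(lhs <= rhs for lhs, rhs in zip(mat_vec(B, x), b)):
            return "terminates after " + str(step) + " iterations"
        x = [xi + ci for xi, ci in zip(mat_vec(A, x), c)]
    return "no verdict within the step bound"

# while x1 >= 0:  (x1, x2) := (x1 + x2, x2 - 1)   -- terminates on every integer start
A = [[1, 1], [0, 1]]
c = [0, -1]
B = [[-1, 0]]                                      # -x1 <= 0 encodes the guard x1 >= 0
b = [0]
print(simulate(A, c, B, b, [5, 3]))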
Cite as
Mehran Hosseini, Joël Ouaknine, and James Worrell. Termination of Linear Loops over the Integers (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 118:1-118:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{hosseini_et_al:LIPIcs.ICALP.2019.118,
author = {Hosseini, Mehran and Ouaknine, Jo\"{e}l and Worrell, James},
title = {{Termination of Linear Loops over the Integers}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {118:1--118:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.118},
URN = {urn:nbn:de:0030-drops-106940},
doi = {10.4230/LIPIcs.ICALP.2019.118},
annote = {Keywords: Program Verification, Loop Termination, Linear Integer Programs, Affine While Loops}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Stefan Kiefer, Richard Mayr, Mahsa Shirmohammadi, and Patrick Totzke
Abstract
We study countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset F of states infinitely often. A question left open by T.P. Hill in 1979 [Theodore Preston Hill, 1979] is whether there always exist epsilon-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. We provide a negative answer to this question by constructing a non-trivial counterexample. On the other hand, we show that Markov strategies with only 1 bit of extra memory are sufficient.
Cite as
Stefan Kiefer, Richard Mayr, Mahsa Shirmohammadi, and Patrick Totzke. Büchi Objectives in Countable MDPs (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 119:1-119:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{kiefer_et_al:LIPIcs.ICALP.2019.119,
author = {Kiefer, Stefan and Mayr, Richard and Shirmohammadi, Mahsa and Totzke, Patrick},
title = {{B\"{u}chi Objectives in Countable MDPs}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {119:1--119:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.119},
URN = {urn:nbn:de:0030-drops-106959},
doi = {10.4230/LIPIcs.ICALP.2019.119},
annote = {Keywords: Markov decision processes}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Christof Löding and Anton Pirogov
Abstract
Determinization of Büchi automata is a long-known difficult problem, and after the seminal result of Safra, who developed the first asymptotically optimal construction from Büchi into Rabin automata, much work went into improving, simplifying, or avoiding Safra’s construction. A different, less known determinization construction was proposed by Muller and Schupp. The two types of constructions share some similarities but their precise relationship was still unclear. In this paper, we shed some light on this relationship by proposing a construction from nondeterministic Büchi to deterministic parity automata that subsumes both constructions: Our construction leaves some freedom in the choice of the successor states of the deterministic automaton, and by instantiating these choices in different ways, one obtains as particular cases the construction of Safra and the construction of Muller and Schupp. The basis is a correspondence between structures that are encoded in the macrostates of the determinization procedures - Safra trees on one hand, and levels of the split-tree, which underlies the Muller and Schupp construction, on the other hand. Our construction also allows for mixing the mentioned constructions, and opens up new directions for the development of heuristics.
Cite as
Christof Löding and Anton Pirogov. Determinization of Büchi Automata: Unifying the Approaches of Safra and Muller-Schupp (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 120:1-120:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{loding_et_al:LIPIcs.ICALP.2019.120,
author = {L\"{o}ding, Christof and Pirogov, Anton},
title = {{Determinization of B\"{u}chi Automata: Unifying the Approaches of Safra and Muller-Schupp}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {120:1--120:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.120},
URN = {urn:nbn:de:0030-drops-106963},
doi = {10.4230/LIPIcs.ICALP.2019.120},
annote = {Keywords: B\"{u}chi automata, determinization, parity automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Antonio Molina Lovett and Jeffrey Shallit
Abstract
The permutation language P_n consists of all words that are permutations of a fixed alphabet of size n. Using divide-and-conquer, we construct a regular expression R_n that specifies P_n. We then give explicit bounds for the length of R_n, which we find to be 4^{n}n^{-(lg n)/4+Theta(1)}, and use these bounds to show that R_n has minimum size over all regular expressions specifying P_n.
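One natural way to realise the divide-and-conquer idea (a sketch only; the paper's precise construction and its tight size analysis are the actual contribution) is to observe that a permutation of S consists of a permutation of some subset T with |T| = ceil(|S|/2) followed by a permutation of S\T, giving the recursion implemented below.

from itertools import combinations, permutations
import re

def perm_regex(letters):
    """A regular expression (using + for union) for the permutations of the given letters."""
    letters = tuple(sorted(letters))
    if len(letters) == 1:
        return letters[0]
    half = (len(letters) + 1) // 2
    alternatives = []
    for first in combinations(letters, half):      # letters used in the first half
        rest = tuple(x for x in letters if x not in first)
        alternatives.append("(" + perm_regex(first) + ")(" + perm_regex(rest) + ")")
    return "+".join(alternatives)

r3 = perm_regex("abc")
print(r3)
# sanity check with Python's regex engine (translating + into |)
pattern = re.compile("^(" + r3.replace("+", "|") + ")$")
assert all(pattern.match("".join(p)) for p in permutations("abc"))
assert not pattern.match("aab") and not pattern.match("ab")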
Cite as
Antonio Molina Lovett and Jeffrey Shallit. Optimal Regular Expressions for Permutations (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 121:1-121:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{molinalovett_et_al:LIPIcs.ICALP.2019.121,
author = {Molina Lovett, Antonio and Shallit, Jeffrey},
title = {{Optimal Regular Expressions for Permutations}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {121:1--121:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.121},
URN = {urn:nbn:de:0030-drops-106978},
doi = {10.4230/LIPIcs.ICALP.2019.121},
annote = {Keywords: regular expressions, lower bounds, divide-and-conquer}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Anca Muscholl and Gabriele Puppis
Abstract
In this paper we provide a positive answer to a question left open by Alur and Deshmukh in 2011 by showing that equivalence of finite-valued copyless streaming string transducers is decidable.
Cite as
Anca Muscholl and Gabriele Puppis. Equivalence of Finite-Valued Streaming String Transducers Is Decidable (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 122:1-122:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{muscholl_et_al:LIPIcs.ICALP.2019.122,
author = {Muscholl, Anca and Puppis, Gabriele},
title = {{Equivalence of Finite-Valued Streaming String Transducers Is Decidable}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {122:1--122:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.122},
URN = {urn:nbn:de:0030-drops-106988},
doi = {10.4230/LIPIcs.ICALP.2019.122},
annote = {Keywords: String transducers, equivalence, Ehrenfeucht conjecture}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Lê Thành Dũng Nguyễn and Pierre Pradic
Abstract
We introduce a new approach to implicit complexity in linear logic, inspired by functional database query languages and using recent developments in effective denotational semantics of polymorphism. We give the first sub-polynomial upper bound in a type system with impredicative polymorphism; adding restrictions on quantifiers yields a characterization of logarithmic space, for which extensional completeness is established via descriptive complexity.
Cite as
Lê Thành Dũng Nguyễn and Pierre Pradic. From Normal Functors to Logarithmic Space Queries (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 123:1-123:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{nguyen_et_al:LIPIcs.ICALP.2019.123,
author = {Nguy\~{ê}n, L\^{e} Th\`{a}nh D\~{u}ng and Pradic, Pierre},
title = {{From Normal Functors to Logarithmic Space Queries}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {123:1--123:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.123},
URN = {urn:nbn:de:0030-drops-106994},
doi = {10.4230/LIPIcs.ICALP.2019.123},
annote = {Keywords: coherence spaces, elementary linear logic, semantic evaluation}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Matthieu Picantin
Abstract
We develop an effective and natural approach to interpret any semigroup admitting a special language of greedy normal forms as an automaton semigroup, namely the semigroup generated by a Mealy automaton encoding the behaviour of such a language of greedy normal forms under one-sided multiplication. The framework embraces many of the well-known classes of (automatic) semigroups: free semigroups, free commutative semigroups, trace or divisibility monoids, braid or Artin-Tits or Krammer or Garside monoids, Baumslag-Solitar semigroups, etc. Like plactic monoids or Chinese monoids, some neither left- nor right-cancellative automatic semigroups are also investigated, as well as some residually finite variations of the bicyclic monoid. It provides what appears to be the first known connection from a class of automatic semigroups to a class of automaton semigroups. It is worth noting that "being an automatic semigroup" and "being an automaton semigroup" become dual properties in a very automata-theoretical sense. Quadratic rewriting systems and associated tilings appear as the cornerstone of our construction.
Cite as
Matthieu Picantin. Automatic Semigroups vs Automaton Semigroups (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 124:1-124:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{picantin:LIPIcs.ICALP.2019.124,
author = {Picantin, Matthieu},
title = {{Automatic Semigroups vs Automaton Semigroups}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {124:1--124:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.124},
URN = {urn:nbn:de:0030-drops-107004},
doi = {10.4230/LIPIcs.ICALP.2019.124},
annote = {Keywords: Mealy machine, semigroup, rewriting system, automaticity, self-similarity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Jean-Éric Pin and Christophe Reutenauer
Abstract
Let p be a prime number and let G_p be the variety of all languages recognised by a finite p-group. We give a construction process of all G_p-preserving functions from a free monoid to a free group. Our result follows from a new noncommutative generalization of Mahler’s theorem on interpolation series, a celebrated result of p-adic analysis.
Cite as
Jean-Éric Pin and Christophe Reutenauer. A Mahler’s Theorem for Word Functions (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 125:1-125:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{pin_et_al:LIPIcs.ICALP.2019.125,
author = {Pin, Jean-\'{E}ric and Reutenauer, Christophe},
title = {{A Mahler’s Theorem for Word Functions}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {125:1--125:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.125},
URN = {urn:nbn:de:0030-drops-107019},
doi = {10.4230/LIPIcs.ICALP.2019.125},
annote = {Keywords: group languages, interpolation series, pro-p metric, regularity preserving}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Thomas Place and Marc Zeitoun
Abstract
We investigate the star-free closure, which associates to a class of languages its closure under Boolean operations and marked concatenation. We prove that the star-free closure of any finite class and of any class of group languages with decidable separation (plus mild additional properties) has decidable separation. We actually show decidability of a stronger property, called covering. This generalizes many results on the subject in a unified framework. A key ingredient is that star-free closure coincides with another closure operator where Kleene stars are also allowed in restricted contexts.
Cite as
Thomas Place and Marc Zeitoun. On All Things Star-Free (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 126:1-126:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{place_et_al:LIPIcs.ICALP.2019.126,
author = {Place, Thomas and Zeitoun, Marc},
title = {{On All Things Star-Free}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {126:1--126:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.126},
URN = {urn:nbn:de:0030-drops-107028},
doi = {10.4230/LIPIcs.ICALP.2019.126},
annote = {Keywords: Regular languages, separation problem, star-free closure, group languages}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Martin Raszyk, David Basin, and Dmitriy Traytel
Abstract
Every nondeterministic finite-state automaton is equivalent to a deterministic finite-state automaton. This result does not extend to finite-state transducers - finite-state automata equipped with a one-way output tape. There is a strict hierarchy of functions accepted by one-way deterministic finite-state transducers (1DFTs), one-way nondeterministic finite-state transducers (1NFTs), and two-way nondeterministic finite-state transducers (2NFTs), whereas the two-way deterministic finite-state transducers (2DFTs) accept the same family of functions as their nondeterministic counterparts (2NFTs).
We define multi-head one-way deterministic finite-state transducers (mh-1DFTs) as a natural extension of 1DFTs. These transducers have multiple one-way reading heads that move asynchronously over the input word. Our main result is that mh-1DFTs can deterministically express any function defined by a one-way nondeterministic finite-state transducer. Of independent interest, we formulate the all-suffix regular matching problem, which is the problem of deciding for each suffix of an input word whether it belongs to a regular language. As part of our proof, we show that an mh-1DFT can solve all-suffix regular matching, which has applications, e.g., in runtime verification.
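To make the all-suffix regular matching problem concrete, the following small Python sketch (a deliberately naive reference, not the paper's transducer construction, and with a made-up DFA encoding) decides for every suffix of an input word whether it belongs to the regular language of a given DFA by simply re-running the DFA from each position:

    # Naive reference for all-suffix regular matching: for each suffix w[i:],
    # decide membership in the language of a DFA. The DFA encoding below
    # (delta as a dict keyed by (state, symbol)) is an assumption of this sketch.
    def suffix_matches(w, start, accepting, delta):
        """Return a list b with b[i] True iff the suffix w[i:] is accepted."""
        result = []
        for i in range(len(w) + 1):      # include the empty suffix
            state = start
            for c in w[i:]:
                state = delta.get((state, c))
                if state is None:        # missing transition means reject
                    break
            result.append(state in accepting)
        return result

    # Example: the language of words ending with the letter 'b'.
    delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
             ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'}
    print(suffix_matches("abab", 'q0', {'q1'}, delta))
    # [True, True, True, True, False]: every nonempty suffix of "abab" ends in 'b'.

The point of the result above is that a single left-to-right pass of a multi-head deterministic transducer can produce the same answer bit for every suffix, whereas this naive reference re-reads the word from each position.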
Cite as
Martin Raszyk, David Basin, and Dmitriy Traytel. From Nondeterministic to Multi-Head Deterministic Finite-State Transducers (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 127:1-127:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{raszyk_et_al:LIPIcs.ICALP.2019.127,
author = {Raszyk, Martin and Basin, David and Traytel, Dmitriy},
title = {{From Nondeterministic to Multi-Head Deterministic Finite-State Transducers}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {127:1--127:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.127},
URN = {urn:nbn:de:0030-drops-107037},
doi = {10.4230/LIPIcs.ICALP.2019.127},
annote = {Keywords: Formal languages, Nondeterminism, Multi-head automata, Finite transducers}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Pierre-Alain Reynier and Didier Villevalois
Abstract
Transducers extend finite-state automata with outputs, and describe transformations from strings to strings. Sequential transducers, which have a deterministic behaviour with respect to their input, are of particular interest. However, unlike finite-state automata, not every transducer can be made sequential. The seminal work of Choffrut characterises, amongst the functional one-way transducers, the ones that admit an equivalent sequential transducer.
In this work, we extend the results of Choffrut to the class of transducers that produce their output string by simultaneously adding, at each transition, a string on the left and a string on the right of the string produced so far. We call these string-to-context transducers. We obtain a multiple characterisation of the functional string-to-context transducers admitting an equivalent sequential one, based on a Lipschitz property of the function realised by the transducer, and on a pattern (a new twinning property). Lastly, we prove that, given a string-to-context transducer, determining whether there exists an equivalent sequential one is in coNP.
Cite as
Pierre-Alain Reynier and Didier Villevalois. Sequentiality of String-to-Context Transducers (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 128:1-128:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{reynier_et_al:LIPIcs.ICALP.2019.128,
author = {Reynier, Pierre-Alain and Villevalois, Didier},
title = {{Sequentiality of String-to-Context Transducers}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {128:1--128:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.128},
URN = {urn:nbn:de:0030-drops-107042},
doi = {10.4230/LIPIcs.ICALP.2019.128},
annote = {Keywords: Transducers, Sequentiality, Twinning Property, Two-Way Transducers}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Sylvain Schmitz
Abstract
The reachability problem in lossy counter machines is the best-known ACKERMANN-complete problem and has been used to establish most of the ACKERMANN-hardness statements in the literature. This, however, hides a complexity gap when the number of counters is fixed. We close this gap and prove F_d-completeness for machines with d counters, which provides the first known uncontrived problems complete for the fast-growing complexity classes at levels 3 < d < omega. To this end, we develop an approach based on antichain factorisations of bad sequences and on analysing the length of controlled antichains.
Cite as
Sylvain Schmitz. The Parametric Complexity of Lossy Counter Machines (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 129:1-129:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{schmitz:LIPIcs.ICALP.2019.129,
author = {Schmitz, Sylvain},
title = {{The Parametric Complexity of Lossy Counter Machines}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {129:1--129:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.129},
URN = {urn:nbn:de:0030-drops-107056},
doi = {10.4230/LIPIcs.ICALP.2019.129},
annote = {Keywords: Counter machine, well-structured system, well-quasi-order, antichain, fast-growing complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Henning Urbat and Stefan Milius
Abstract
We establish an Eilenberg-type correspondence for data languages, i.e. languages over an infinite alphabet. More precisely, we prove that there is a bijective correspondence between varieties of languages recognized by orbit-finite nominal monoids and pseudovarieties of such monoids. This is the first result of this kind for data languages. Our approach makes use of nominal Stone duality and a recent category theoretic generalization of Birkhoff-type theorems that we instantiate here for the category of nominal sets. In addition, we prove an axiomatic characterization of weak pseudovarieties as those classes of orbit-finite monoids that can be specified by sequences of nominal equations, which provides a nominal version of a classical theorem of Eilenberg and Schützenberger.
Cite as
Henning Urbat and Stefan Milius. Varieties of Data Languages (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 130:1-130:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{urbat_et_al:LIPIcs.ICALP.2019.130,
author = {Urbat, Henning and Milius, Stefan},
title = {{Varieties of Data Languages}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {130:1--130:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.130},
URN = {urn:nbn:de:0030-drops-107063},
doi = {10.4230/LIPIcs.ICALP.2019.130},
annote = {Keywords: Nominal sets, Stone duality, Algebraic language theory, Data languages}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Eleni C. Akrida, George B. Mertzios, Sotiris Nikoletseas, Christoforos Raptopoulos, Paul G. Spirakis, and Viktor Zamaraev
Abstract
Temporal graphs are used to abstractly model real-life networks that are inherently dynamic in nature, in the sense that the network structure undergoes discrete changes over time. Given a static underlying graph G=(V,E), a temporal graph on G is a sequence of snapshots {G_t=(V,E_t) subseteq G: t in N}, one for each time step t >= 1. In this paper we study stochastic temporal graphs, i.e. stochastic processes G={G_t subseteq G: t in N} whose random variables are the snapshots of a temporal graph on G. A natural feature of stochastic temporal graphs, which can be observed in various real-life scenarios, is a memory effect in the appearance probabilities of particular edges; that is, the probability that an edge e in E appears at time step t depends on its appearance (or absence) at the previous k steps. We study the hierarchy of models memory-k, k >= 0, which address this memory effect in an edge-centric network evolution: every edge of G has its own probability distribution for its appearance over time, independently of all other edges. Clearly, for every k >= 1, memory-(k-1) is a special case of memory-k. However, we make a clear distinction between the values k=0 ("no memory") and k >= 1 ("some memory"), as our results indicate that in some cases these models exhibit a fundamentally different computational behavior for these values of k. For every k >= 0 we investigate the computational complexity of two naturally related, but fundamentally different, temporal path (or journey) problems: Minimum Arrival and Best Policy. In the first problem we are looking for the expected arrival time of a foremost journey between two designated vertices s, y. In the second one we are looking for the expected arrival time of the best policy for actually choosing a particular s-y journey. We present a detailed investigation of the computational landscape of both problems for the different values of the memory k. Among other results we prove that, surprisingly, Minimum Arrival is strictly harder than Best Policy; in fact, for k=0, Minimum Arrival is #P-hard while Best Policy is solvable in O(n^2) time.
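As a minimal worked example for the memoryless case k=0 (an illustrative calculation, not a result taken from the paper), suppose a journey waits at an endpoint of a single edge e that appears in each time step independently with probability p_e; its waiting time is geometric, so in LaTeX:

    % Memory-0 illustration: the first appearance time T_e of an edge e with
    % per-step appearance probability p_e is geometrically distributed, hence
    \[
      \mathbb{E}[T_e] \;=\; \sum_{t \ge 1} t \, p_e (1-p_e)^{t-1} \;=\; \frac{1}{p_e}.
    \]
    % For the fixed journey that follows a prescribed s-y path edge by edge and
    % waits at each intermediate vertex, linearity of expectation gives an
    % expected arrival time of \sum_{i} 1/p_{e_i} over the path's edges.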
Cite as
Eleni C. Akrida, George B. Mertzios, Sotiris Nikoletseas, Christoforos Raptopoulos, Paul G. Spirakis, and Viktor Zamaraev. How Fast Can We Reach a Target Vertex in Stochastic Temporal Graphs?. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 131:1-131:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{akrida_et_al:LIPIcs.ICALP.2019.131,
author = {Akrida, Eleni C. and Mertzios, George B. and Nikoletseas, Sotiris and Raptopoulos, Christoforos and Spirakis, Paul G. and Zamaraev, Viktor},
title = {{How Fast Can We Reach a Target Vertex in Stochastic Temporal Graphs?}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {131:1--131:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.131},
URN = {urn:nbn:de:0030-drops-107071},
doi = {10.4230/LIPIcs.ICALP.2019.131},
annote = {Keywords: Temporal network, stochastic temporal graph, temporal path, #P-hard problem, polynomial-time approximation scheme}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Matthias Bonne and Keren Censor-Hillel
Abstract
This paper provides an in-depth study of the fundamental problems of finding small subgraphs in distributed dynamic networks.
While some problems are trivially easy to handle, such as detecting a triangle that emerges after an edge insertion, we show that, perhaps somewhat surprisingly, other problems exhibit a wide range of complexities in terms of the trade-offs between their round and bandwidth complexities.
In the case of triangles, which are only affected by the topology of the immediate neighborhood, some end results are:
- The bandwidth complexity of 1-round dynamic triangle detection or listing is Theta(1).
- The bandwidth complexity of 1-round dynamic triangle membership listing is Theta(1) for node/edge deletions, Theta(n^{1/2}) for edge insertions, and Theta(n) for node insertions.
- The bandwidth complexity of 1-round dynamic triangle membership detection is Theta(1) for node/edge deletions, O(log n) for edge insertions, and Theta(n) for node insertions.
Most of our upper and lower bounds are tight. Additionally, we provide almost always tight upper and lower bounds for larger cliques.
Cite as
Matthias Bonne and Keren Censor-Hillel. Distributed Detection of Cliques in Dynamic Networks. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 132:1-132:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{bonne_et_al:LIPIcs.ICALP.2019.132,
author = {Bonne, Matthias and Censor-Hillel, Keren},
title = {{Distributed Detection of Cliques in Dynamic Networks}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {132:1--132:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.132},
URN = {urn:nbn:de:0030-drops-107082},
doi = {10.4230/LIPIcs.ICALP.2019.132},
annote = {Keywords: distributed computing, subgraph detection, dynamic graphs}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Ioannis Caragiannis and Angelo Fanelli
Abstract
We consider the problem of the existence of natural improvement dynamics leading to approximate pure Nash equilibria, with a reasonably small approximation factor, and the problem of bounding the efficiency of such equilibria in the fundamental framework of weighted congestion games with polynomial latencies of degree at most d >= 1. In this work, by exploiting a simple technique, we first show that the game always admits a d-approximate potential function. This implies that every sequence of d-approximate improvement moves by the players always leads the game to a d-approximate pure Nash equilibrium. As a corollary, we also obtain that, under mild assumptions on the structure of the players' strategies, the game always admits a constant approximate potential function. Second, by using a simple potential function argument, we show that the game always admits a (d+delta)-approximate pure Nash equilibrium, with delta in [0,1], whose cost is 2/(1+delta) times the cost of an optimal state.
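For reference, the standard definitions behind the abstract can be written out in LaTeX (textbook notation for weighted congestion games with polynomial latencies and approximate pure Nash equilibria; this recap is a modelling assumption of this note, not text from the paper):

    % Weighted congestion games with polynomial latencies of degree at most d:
    \[
      \ell_e(x) \;=\; \sum_{j=0}^{d} a_{e,j}\, x^{j}, \quad a_{e,j} \ge 0,
      \qquad
      c_i(s) \;=\; \sum_{e \in s_i} w_i\,
        \ell_e\Big(\sum_{j \,:\, e \in s_j} w_j\Big).
    \]
    % A state s is a rho-approximate pure Nash equilibrium when no unilateral
    % deviation improves a player's cost by more than a factor rho:
    \[
      c_i(s) \;\le\; \rho \cdot c_i(s_i', s_{-i})
      \quad \text{for every player } i \text{ and every strategy } s_i'.
    \]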
Cite as
Ioannis Caragiannis and Angelo Fanelli. On Approximate Pure Nash Equilibria in Weighted Congestion Games with Polynomial Latencies. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 133:1-133:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{caragiannis_et_al:LIPIcs.ICALP.2019.133,
author = {Caragiannis, Ioannis and Fanelli, Angelo},
title = {{On Approximate Pure Nash Equilibria in Weighted Congestion Games with Polynomial Latencies}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {133:1--133:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.133},
URN = {urn:nbn:de:0030-drops-107095},
doi = {10.4230/LIPIcs.ICALP.2019.133},
annote = {Keywords: Congestion games, approximate pure Nash equilibrium, potential functions, approximate price of stability}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Arnaud Casteigts, Joseph G. Peters, and Jason Schoeters
Abstract
Let G=(V,E) be an undirected graph on n vertices and lambda:E -> 2^{N} a mapping that assigns to every edge a non-empty set of positive integer labels. These labels can be seen as discrete times when the edge is present. Such a labeled graph {G}=(G,lambda) is said to be temporally connected if a path exists with non-decreasing times from every vertex to every other vertex. In a seminal paper, Kempe, Kleinberg, and Kumar (STOC 2000) asked whether, given such a temporal graph, a sparse subset of edges can always be found whose labels suffice to preserve temporal connectivity - a temporal spanner. Axiotis and Fotakis (ICALP 2016) answered negatively by exhibiting a family of Theta(n^2)-dense temporal graphs which admit no temporal spanner of density o(n^2). The natural question is then whether sparse temporal spanners always exist in some classes of dense graphs.
In this paper, we answer this question affirmatively by showing that if the underlying graph G is a complete graph, then one can always find temporal spanners of density O(n log n). The best previously known result for complete graphs was that spanners of density binom{n}{2} - floor(n/4) = O(n^2) always exist. Our result is the first positive answer on the existence of o(n^2)-sparse spanners in adversarial instances of temporal graphs since the original question by Kempe et al., focusing here on complete graphs. The proofs are constructive and can be directly adapted into an algorithm.
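The temporal connectivity notion used above can be checked directly with a short earliest-arrival computation; the Python sketch below (a generic illustration, not the spanner construction of the paper) computes, for one source, the earliest time each vertex is reachable along a journey with non-decreasing labels, and declares the labeled graph temporally connected if every source reaches every vertex:

    from math import inf

    def earliest_arrival(n, labeled_edges, source):
        """Earliest arrival times from `source` on vertices 0..n-1, where
        labeled_edges is a list of (u, v, t): undirected edge {u, v} present at
        time t. Journeys use non-decreasing times, as in the definition above."""
        arrival = [inf] * n
        arrival[source] = 0
        occurrences = sorted(labeled_edges, key=lambda e: e[2])
        changed = True
        while changed:                    # fixpoint handles equal-time chains
            changed = False
            for u, v, t in occurrences:
                if arrival[u] <= t and t < arrival[v]:
                    arrival[v] = t; changed = True
                if arrival[v] <= t and t < arrival[u]:
                    arrival[u] = t; changed = True
        return arrival

    def temporally_connected(n, labeled_edges):
        return all(a < inf for s in range(n)
                           for a in earliest_arrival(n, labeled_edges, s))

    # A 4-cycle whose edges appear twice (times 1..4 and again 5..8):
    edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4),
             (0, 1, 5), (1, 2, 6), (2, 3, 7), (3, 0, 8)]
    print(temporally_connected(4, edges))   # True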
Cite as
Arnaud Casteigts, Joseph G. Peters, and Jason Schoeters. Temporal Cliques Admit Sparse Spanners. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 134:1-134:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{casteigts_et_al:LIPIcs.ICALP.2019.134,
author = {Casteigts, Arnaud and Peters, Joseph G. and Schoeters, Jason},
title = {{Temporal Cliques Admit Sparse Spanners}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {134:1--134:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.134},
URN = {urn:nbn:de:0030-drops-107108},
doi = {10.4230/LIPIcs.ICALP.2019.134},
annote = {Keywords: Dynamic networks, Temporal graphs, Temporal connectivity, Sparse spanners}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Keren Censor-Hillel and Mikaël Rabie
Abstract
In this paper, we investigate a distributed maximal independent set (MIS) reconfiguration problem, in which there are two maximal independent sets for which every node is given its membership status, and the nodes need to communicate with their neighbors in order to find a reconfiguration schedule that switches from the first MIS to the second. Such a schedule is a list of independent sets that is restricted by forbidding two neighbors to change their membership status at the same step. In addition, these independent sets should provide some covering guarantee.
We show that obtaining an actual MIS (and even a 3-dominating set) in each intermediate step is impossible. However, we provide efficient solutions when the intermediate sets are only required to be independent and 4-dominating, which is almost always possible, as we fully characterize.
Consequently, our goal is to pin down the tradeoff between the possible length of the schedule and the number of communication rounds. We prove that a constant-length schedule can be found in O(MIS+R32) rounds, where MIS is the complexity of finding an MIS in a worst-case graph and R32 is the complexity of finding a (3,2)-ruling set. For bounded-degree graphs, this is O(log^* n) rounds, and we show that this is necessary. On the other extreme, we show that with a constant number of rounds we can find a linear-length schedule.
Cite as
Keren Censor-Hillel and Mikaël Rabie. Distributed Reconfiguration of Maximal Independent Sets. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 135:1-135:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{censorhillel_et_al:LIPIcs.ICALP.2019.135,
author = {Censor-Hillel, Keren and Rabie, Mika\"{e}l},
title = {{Distributed Reconfiguration of Maximal Independent Sets}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {135:1--135:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.135},
URN = {urn:nbn:de:0030-drops-107111},
doi = {10.4230/LIPIcs.ICALP.2019.135},
annote = {Keywords: distributed graph algorithms, reconfiguration, maximal independent set}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Aris Anagnostopoulos, Ilan R. Cohen, Stefano Leonardi, and Jakub Łącki
Abstract
Exploring large-scale networks is a time-consuming and expensive task which is usually carried out in a complex and uncertain environment. A crucial aspect of network exploration is the development of suitable strategies that decide which nodes and edges to probe at each stage of the process.
To model this process, we introduce the stochastic graph exploration problem. The input is an undirected graph G=(V,E) with a source vertex s, stochastic edge costs drawn from a distribution pi_e, e in E, and rewards on vertices of maximum value R. The goal is to find a set F of edges of total cost at most B such that the subgraph of G induced by F is connected, contains s, and maximizes the total reward. This problem generalizes the stochastic knapsack problem and other stochastic probing problems recently studied.
Our focus is on the development of efficient nonadaptive strategies that are competitive against the optimal adaptive strategy. A major challenge is the fact that the problem has an Omega(n) adaptivity gap even on a tree of n vertices. This is in sharp contrast with the O(1) adaptivity gap of the stochastic knapsack problem, which is a special case of our problem. We circumvent this negative result by showing that O(log nR) resource augmentation suffices to obtain an O(1) approximation on trees and an O(log nR) approximation on general graphs. To achieve this result, we reduce stochastic graph exploration to a memoryless process - the minesweeper problem - which assigns to every edge a probability that the process terminates when the edge is probed. For this problem, which is interesting in its own right, we present an optimal polynomial-time algorithm on trees and an O(log nR) approximation for general graphs.
We also study the setting in which the maximum cost of an edge is a logarithmic fraction of the budget. We show that, under this condition, there exist polynomial-time oblivious strategies that use a budget of (1+epsilon)B and whose adaptivity gaps on trees and general graphs are 1+epsilon and 8+epsilon, respectively. Finally, we provide additional results on the structure and the complexity of nonadaptive and adaptive strategies.
Cite as
Aris Anagnostopoulos, Ilan R. Cohen, Stefano Leonardi, and Jakub Łącki. Stochastic Graph Exploration. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 136:1-136:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{anagnostopoulos_et_al:LIPIcs.ICALP.2019.136,
author = {Anagnostopoulos, Aris and Cohen, Ilan R. and Leonardi, Stefano and {\L}\k{a}cki, Jakub},
title = {{Stochastic Graph Exploration}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {136:1--136:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.136},
URN = {urn:nbn:de:0030-drops-107122},
doi = {10.4230/LIPIcs.ICALP.2019.136},
annote = {Keywords: stochastic optimization, graph exploration, approximation algorithms}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Jurek Czyzowicz, Konstantinos Georgiou, Ryan Killick, Evangelos Kranakis, Danny Krizanc, Manuel Lafond, Lata Narayanan, Jaroslav Opatrny, and Sunil Shende
Abstract
Consider two robots that start at the origin of the infinite line in search of an exit at an unknown location on the line. The robots can collaborate in the search, but can only communicate if they arrive at the same location at exactly the same time, i.e. they use the so-called face-to-face communication model. The group search time is defined as the worst-case time, as a function of d (the distance of the exit from the origin), needed for both robots to reach the exit. It has long been known that for a single robot traveling at unit speed, the search time is at least 9d - o(d); a simple doubling strategy achieves this time bound. It was shown recently in [Chrobak et al., 2015] that k >= 2 robots traveling at unit speed also require at least 9d group search time.
We investigate energy-time trade-offs in group search by two robots, where the energy loss experienced by a robot traveling a distance x at constant speed s is given by s^2 x, as motivated by energy consumption models in physics and engineering. Specifically, we consider the problem of minimizing the total energy used by the robots, under the constraints that the search time is at most a multiple c of the distance d and the speed of the robots is bounded by b. The motivation for this study is that, when the robots must complete the search in time 9d with maximum speed one (b=1, c=9), a single robot requires at least 9d energy, while for two robots, all previously proposed algorithms consume at least 28d/3 energy.
When the robots have bounded memory and can use only a constant number of fixed speeds, we generalize an algorithm described in [Baeza-Yates and Schott, 1995; Chrobak et al., 2015] to obtain a family of algorithms parametrized by pairs of b,c values that can solve the problem for the entire spectrum of these pairs for which the problem is solvable. In particular, for each such pair, we determine optimal (and in some cases nearly optimal) algorithms inducing the lowest possible energy consumption.
We also propose a novel search algorithm that simultaneously achieves search time 9d and consumes energy 8.42588d. Our result shows that two robots can search on the line in optimal time 9d while consuming less total energy than a single robot within the same search time. Our algorithm uses robots that have unbounded memory, and a finite number of dynamically computed speeds. It can be generalized for any c, b with cb=9, and consumes energy 8.42588b^2d.
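The 9d benchmark quoted above comes from the classical doubling (zig-zag) strategy for a single unit-speed robot; the Python sketch below (a folklore strategy included only to illustrate that benchmark, not the two-robot algorithms of the paper) simulates it and shows the search-time-to-distance ratio approaching 9 at worst-case exit positions:

    def doubling_search_time(x):
        """Time for a unit-speed robot starting at 0 to reach an exit at signed
        position x != 0, using the classical doubling strategy: go to +1, back
        to 0, to -2, back to 0, to +4, back to 0, ... until the exit is crossed."""
        t, side, radius = 0.0, 1, 1.0
        while True:
            # Outbound leg from the origin to side*radius.
            if (side > 0 and 0 < x <= radius) or (side < 0 and -radius <= x < 0):
                return t + abs(x)          # exit found on this leg
            t += 2 * radius                # out to the turning point and back
            side, radius = -side, 2 * radius

    # Worst-case exits sit just beyond a turning point; the ratio tends to 9.
    for i in range(1, 8):
        x = 4.0 ** i + 1e-9                # just beyond the turning point at +4^i
        print(i, round(doubling_search_time(x) / x, 4))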
Cite as
Jurek Czyzowicz, Konstantinos Georgiou, Ryan Killick, Evangelos Kranakis, Danny Krizanc, Manuel Lafond, Lata Narayanan, Jaroslav Opatrny, and Sunil Shende. Energy Consumption of Group Search on a Line. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 137:1-137:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{czyzowicz_et_al:LIPIcs.ICALP.2019.137,
author = {Czyzowicz, Jurek and Georgiou, Konstantinos and Killick, Ryan and Kranakis, Evangelos and Krizanc, Danny and Lafond, Manuel and Narayanan, Lata and Opatrny, Jaroslav and Shende, Sunil},
title = {{Energy Consumption of Group Search on a Line}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {137:1--137:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.137},
URN = {urn:nbn:de:0030-drops-107138},
doi = {10.4230/LIPIcs.ICALP.2019.137},
annote = {Keywords: Evacuation, Exit, Line, Face-to-face Communication, Robots, Search}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Argyrios Deligkas, John Fearnley, Themistoklis Melissourgos, and Paul G. Spirakis
Abstract
We study the problem of finding an exact solution to the consensus halving problem. While recent work has shown that the approximate version of this problem is PPA-complete [Filos-Ratsikas and Goldberg, 2018; Filos-Ratsikas and Goldberg, 2018], we show that the exact version is much harder. Specifically, finding a solution with n agents and n cuts is FIXP-hard, and deciding whether there exists a solution with fewer than n cuts is ETR-complete. We also give a QPTAS for the case where each agent’s valuation is a polynomial.
Along the way, we define a new complexity class BU, which captures all problems that can be reduced to solving an instance of the Borsuk-Ulam problem exactly. We show that FIXP subseteq BU subseteq TFETR and that LinearBU = PPA, where LinearBU is the subclass of BU in which the Borsuk-Ulam instance is specified by a linear arithmetic circuit.
Cite as
Argyrios Deligkas, John Fearnley, Themistoklis Melissourgos, and Paul G. Spirakis. Computing Exact Solutions of Consensus Halving and the Borsuk-Ulam Theorem. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 138:1-138:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{deligkas_et_al:LIPIcs.ICALP.2019.138,
author = {Deligkas, Argyrios and Fearnley, John and Melissourgos, Themistoklis and Spirakis, Paul G.},
title = {{Computing Exact Solutions of Consensus Halving and the Borsuk-Ulam Theorem}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {138:1--138:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.138},
URN = {urn:nbn:de:0030-drops-107141},
doi = {10.4230/LIPIcs.ICALP.2019.138},
annote = {Keywords: PPA, FIXP, ETR, consensus halving, circuit, reduction, complexity class}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Stefan Dobrev, Lata Narayanan, Jaroslav Opatrny, and Denis Pankratov
Abstract
We consider the problem of finding a treasure at an unknown point of an n-dimensional infinite grid, n >= 3, by initially collocated finite automaton agents (scouts/robots). Recently, the problem has been well characterized for 2 dimensions for deterministic as well as randomized agents, both in synchronous and semi-synchronous models [S. Brandt et al., 2018; Y. Emek et al., 2015]. It has been conjectured that n+1 randomized agents are necessary to solve this problem in the n-dimensional grid [L. Cohen et al., 2017]. In this paper we disprove the conjecture in a strong sense: we show that three randomized synchronous agents suffice to explore an n-dimensional grid for any n. Our algorithm is optimal in terms of the number of agents. Our key insight is that a constant number of finite automaton agents can, by their positions and movements, implement a stack, which can store the path being explored. We also show how to implement our algorithm using: four randomized semi-synchronous agents; four deterministic synchronous agents; or five deterministic semi-synchronous agents.
We give a different algorithm that uses 4 deterministic semi-synchronous agents for the 3-dimensional grid. This is provably optimal and, surprisingly, matches the result for 2 dimensions. For n >= 4, the time complexity of the solutions mentioned above is exponential in the distance D of the treasure from the starting point of the agents. We show that in the deterministic case, one additional agent brings the time down to a polynomial. Finally, we focus on algorithms that never venture much beyond the distance D. We describe an algorithm that uses O(sqrt{n}) semi-synchronous deterministic agents that never go beyond 2D, as well as show that any algorithm using 3 synchronous deterministic agents in 3 dimensions, if it exists, must travel to distance Omega(D^{3/2}) from the origin.
Cite as
Stefan Dobrev, Lata Narayanan, Jaroslav Opatrny, and Denis Pankratov. Exploration of High-Dimensional Grids by Finite Automata. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 139:1-139:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{dobrev_et_al:LIPIcs.ICALP.2019.139,
author = {Dobrev, Stefan and Narayanan, Lata and Opatrny, Jaroslav and Pankratov, Denis},
title = {{Exploration of High-Dimensional Grids by Finite Automata}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {139:1--139:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.139},
URN = {urn:nbn:de:0030-drops-107153},
doi = {10.4230/LIPIcs.ICALP.2019.139},
annote = {Keywords: Multi-agent systems, finite state machines, high-dimensional grids, robot exploration, randomized agents, semi-synchronous and synchronous agents}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Yuval Emek, Shay Kutten, Ron Lavi, and William K. Moses Jr.
Abstract
Addressing a fundamental problem in programmable matter, we present the first deterministic algorithm to elect a unique leader in a system of connected amoebots assuming only that amoebots are initially contracted. Previous algorithms either used randomization, made various assumptions (shapes with no holes, or known shared chirality), or elected several co-leaders in some cases.
Some of the building blocks we introduce in constructing the algorithm are of independent interest, especially the procedure we present for reaching common chirality among the amoebots. Given the leader election algorithm and the chirality-agreement building block, it is known that various tasks in programmable matter can be performed or improved.
The main idea of the new algorithm is to exploit the amoebots' ability to move, which previous leader election algorithms did not use.
Cite as
Yuval Emek, Shay Kutten, Ron Lavi, and William K. Moses Jr.. Deterministic Leader Election in Programmable Matter. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 140:1-140:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{emek_et_al:LIPIcs.ICALP.2019.140,
author = {Emek, Yuval and Kutten, Shay and Lavi, Ron and Moses Jr., William K.},
title = {{Deterministic Leader Election in Programmable Matter}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {140:1--140:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.140},
URN = {urn:nbn:de:0030-drops-107169},
doi = {10.4230/LIPIcs.ICALP.2019.140},
annote = {Keywords: programmable matter, geometric amoebot model, leader election}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Thomas Erlebach, Frank Kammer, Kelin Luo, Andrej Sajenko, and Jakob T. Spooner
Abstract
A temporal graph is a graph whose edge set can change over time. We only require that the edge set in each time step forms a connected graph. The temporal exploration problem asks for a temporal walk that starts at a given vertex, moves over at most one edge in each time step, visits all vertices, and reaches the last unvisited vertex as early as possible. We show in this paper that every temporal graph with n vertices can be explored in O(n^{1.75}) time steps provided that either the degree of the graph is bounded in each step or the temporal walk is allowed to make two moves per step. This result is interesting because it breaks the lower bound of Omega(n^2) steps that holds for the worst-case exploration time if only one move per time step is allowed and the graph in each step can have arbitrary degree. We complement this main result with a logarithmic inapproximability result and a proof that for sparse temporal graphs (i.e., temporal graphs with O(n) edges in the underlying graph) making O(1) moves per time step can improve the worst-case exploration time by at most a constant factor.
Cite as
Thomas Erlebach, Frank Kammer, Kelin Luo, Andrej Sajenko, and Jakob T. Spooner. Two Moves per Time Step Make a Difference. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 141:1-141:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{erlebach_et_al:LIPIcs.ICALP.2019.141,
author = {Erlebach, Thomas and Kammer, Frank and Luo, Kelin and Sajenko, Andrej and Spooner, Jakob T.},
title = {{Two Moves per Time Step Make a Difference}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {141:1--141:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.141},
URN = {urn:nbn:de:0030-drops-107176},
doi = {10.4230/LIPIcs.ICALP.2019.141},
annote = {Keywords: Temporal Graph Exploration, Algorithmic Graph Theory, NP-Complete Problem}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Mohsen Ghaffari and Ali Sayyadi
Abstract
We present a constant-time randomized distributed algorithm in the congested clique model that computes an O(alpha)-vertex-coloring, with high probability. Here, alpha denotes the arboricity of the graph, which is, roughly speaking, the edge-density of the densest subgraph. The congested clique is a well-studied model of synchronous message passing for distributed computing with all-to-all communication: per round, each node can send one O(log n)-bit message to each other node. Our O(1)-round algorithm settles the randomized round complexity of the O(alpha)-coloring problem. We also explain that a similar method can provide a constant-time randomized algorithm for decomposing the graph into O(alpha) edge-disjoint forests, so long as alpha <= n^{1-o(1)}.
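The paper's algorithm runs in O(1) rounds of the congested clique, but the underlying combinatorial fact that O(alpha) colors suffice can be illustrated with a simple centralized Python sketch (folklore, not the distributed algorithm of the paper): a graph of arboricity alpha has degeneracy at most 2*alpha - 1, so greedy coloring along a degeneracy ordering uses at most 2*alpha colors.

    import heapq

    def degeneracy_coloring(adj):
        """Greedy coloring along a degeneracy ordering.
        adj: dict mapping each vertex to a set of neighbours.
        Uses at most degeneracy+1 <= 2*arboricity colors (folklore bound)."""
        # Build the ordering by repeatedly removing a minimum-degree vertex.
        deg = {v: len(adj[v]) for v in adj}
        heap = [(d, v) for v, d in deg.items()]
        heapq.heapify(heap)
        removed, order = set(), []
        while heap:
            d, v = heapq.heappop(heap)
            if v in removed or d != deg[v]:
                continue                   # stale heap entry
            removed.add(v)
            order.append(v)
            for u in adj[v]:
                if u not in removed:
                    deg[u] -= 1
                    heapq.heappush(heap, (deg[u], u))
        # Color in reverse order: each vertex then has at most `degeneracy`
        # already-colored neighbours, so the smallest free color is small.
        color = {}
        for v in reversed(order):
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color

    # Small example: a triangle with a pendant vertex (degeneracy 2).
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(degeneracy_coloring(adj))        # uses at most 3 colors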
Cite as
Mohsen Ghaffari and Ali Sayyadi. Distributed Arboricity-Dependent Graph Coloring via All-to-All Communication. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 142:1-142:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{ghaffari_et_al:LIPIcs.ICALP.2019.142,
author = {Ghaffari, Mohsen and Sayyadi, Ali},
title = {{Distributed Arboricity-Dependent Graph Coloring via All-to-All Communication}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {142:1--142:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.142},
URN = {urn:nbn:de:0030-drops-107187},
doi = {10.4230/LIPIcs.ICALP.2019.142},
annote = {Keywords: Distributed Computing, Message Passing Algorithms, Graph Coloring, Arboricity, Congested Clique Model, Randomized Algorithms}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Siddharth Gupta, Adrian Kosowski, and Laurent Viennot
Abstract
For fixed h >= 2, we consider the task of adding to a graph G a set of weighted shortcut edges on the same vertex set, such that the length of a shortest h-hop path between any pair of vertices in the augmented graph is exactly the same as the original distance between these vertices in G. A set of shortcut edges with this property is called an exact h-hopset and may be applied in processing distance queries on graph G. In particular, a 2-hopset directly corresponds to a distributed distance oracle known as a hub labeling. In this work, we explore centralized distance oracles based on 3-hopsets and demonstrate their advantages in several practical scenarios. In particular, for graphs of constant highway dimension, and more generally for graphs of constant skeleton dimension, we show that 3-hopsets require exponentially fewer shortcuts per node than any previously described distance oracle, and also offer a speedup in query time when compared to simple oracles based on a direct application of 2-hopsets. Finally, we consider the problem of computing a minimum-size h-hopset (for any h >= 2) for a given graph G, showing a polylogarithmic-factor approximation for the case of unique shortest path graphs. When h=3, for a given bound on the space used by the distance oracle, we provide a construction of a hopset achieving a polylogarithmic approximation in both space and query time compared to the optimal 3-hopset oracle with that space bound.
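Since a 2-hopset is exactly a hub labeling, the distance query it supports is easy to state in code; the Python sketch below (a generic hub-labeling query with hand-picked toy labels, not code or data from the paper) returns the exact distance whenever the labels satisfy the 2-hop cover property, i.e. some shortest u-v path contains a hub common to both labels:

    from math import inf

    def hub_query(labels_u, labels_v):
        """2-hopset / hub-labeling distance query.
        labels_x: dict mapping hub -> d(x, hub). The minimum over common hubs
        equals the true distance whenever the 2-hop cover property holds."""
        best = inf
        for h, duh in labels_u.items():
            dhv = labels_v.get(h)
            if dhv is not None:
                best = min(best, duh + dhv)
        return best

    # Toy example: unit-weight path a-b-c-d with hubs chosen by hand.
    labels = {
        'a': {'a': 0, 'b': 1},
        'b': {'b': 0},
        'c': {'b': 1, 'c': 0},
        'd': {'b': 2, 'c': 1, 'd': 0},
    }
    print(hub_query(labels['a'], labels['d']))   # 3, via the common hub 'b'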
Cite as
Siddharth Gupta, Adrian Kosowski, and Laurent Viennot. Exploiting Hopsets: Improved Distance Oracles for Graphs of Constant Highway Dimension and Beyond. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 143:1-143:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{gupta_et_al:LIPIcs.ICALP.2019.143,
author = {Gupta, Siddharth and Kosowski, Adrian and Viennot, Laurent},
title = {{Exploiting Hopsets: Improved Distance Oracles for Graphs of Constant Highway Dimension and Beyond}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {143:1--143:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.143},
URN = {urn:nbn:de:0030-drops-107199},
doi = {10.4230/LIPIcs.ICALP.2019.143},
annote = {Keywords: Hopsets, Distance Oracles, Graph Algorithms, Data Structures}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Bernhard Haeupler, Fabian Kuhn, Anders Martinsson, Kalina Petrova, and Pascal Pfister
Abstract
A classical multi-agent fence patrolling problem asks: What is the maximum length L of a line fence that k agents with maximum speeds v_1,..., v_k can patrol if each point on the line needs to be visited at least once every unit of time? It is easy to see that L = alpha sum_{i=1}^k v_i for some efficiency alpha in [1/2,1). After a series of works [Czyzowicz et al., 2011; Dumitrescu et al., 2014; Kawamura and Kobayashi, 2015; Kawamura and Soejima, 2015] giving better and better efficiencies, it was conjectured by Kawamura and Soejima [Kawamura and Soejima, 2015] that the best possible efficiency approaches 2/3. No upper bounds on the efficiency below 1 were known.
We prove the first such upper bounds and tightly bound the optimal efficiency in terms of the minimum speed ratio s = v_{max}/v_{min} and the number of agents k. Our bounds of alpha <= 1/(1 + 1/s) and alpha <= 1 - 1/(sqrt(k)+1) imply that in order to achieve efficiency 1 - epsilon, at least k >= Omega(epsilon^{-2}) agents with a speed ratio of s >= Omega(epsilon^{-1}) are necessary. Guided by our upper bounds, we construct a scheme whose efficiency approaches 1, disproving the conjecture stated above. Our scheme asymptotically matches our upper bounds in terms of the maximal speed difference and the number of agents used.
A variation of the fence patrolling problem considers a circular fence instead and asks for its circumference to be maximized. We consider the unidirectional case of this variation, where all agents are only allowed to move in one direction, say clockwise. At first, a strategy yielding L = max_{r in [k]} r * v_r, where v_1 >= v_2 >= ... >= v_k, was conjectured to be optimal by Czyzowicz et al. [Czyzowicz et al., 2011]. This was proven not to be the case by constructions that, for specific numbers of agents only, improve L marginally. We give a general construction that yields L = (1/(33 log_e log_2 k)) sum_{i=1}^k v_i for any set of agents, which in particular for the speeds 1, 1/2, ..., 1/k diverges as k -> infty, thus affirmatively resolving a conjecture by Kawamura and Soejima [Kawamura and Soejima, 2015].
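The way the two upper bounds on the efficiency alpha translate into the stated requirements on k and s is a short calculation; in LaTeX (a worked restatement of the abstract's own bounds, nothing beyond them):

    % To achieve efficiency alpha >= 1 - epsilon under the two stated bounds:
    \[
      1-\varepsilon \;\le\; \alpha \;\le\; 1 - \frac{1}{\sqrt{k}+1}
      \;\Longrightarrow\;
      \frac{1}{\sqrt{k}+1} \le \varepsilon
      \;\Longrightarrow\;
      k \ge \Big(\tfrac{1}{\varepsilon}-1\Big)^{2} = \Omega(\varepsilon^{-2}),
    \]
    \[
      1-\varepsilon \;\le\; \alpha \;\le\; \frac{1}{1+1/s} = \frac{s}{s+1}
      \;\Longrightarrow\;
      \frac{1}{s+1} \le \varepsilon
      \;\Longrightarrow\;
      s \ge \tfrac{1}{\varepsilon}-1 = \Omega(\varepsilon^{-1}).
    \]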
Cite as
Bernhard Haeupler, Fabian Kuhn, Anders Martinsson, Kalina Petrova, and Pascal Pfister. Optimal Strategies for Patrolling Fences. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 144:1-144:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{haeupler_et_al:LIPIcs.ICALP.2019.144,
author = {Haeupler, Bernhard and Kuhn, Fabian and Martinsson, Anders and Petrova, Kalina and Pfister, Pascal},
title = {{Optimal Strategies for Patrolling Fences}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {144:1--144:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.144},
URN = {urn:nbn:de:0030-drops-107202},
doi = {10.4230/LIPIcs.ICALP.2019.144},
annote = {Keywords: multi-agent systems, patrolling algorithms}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Manish Purohit
Abstract
We consider the matroid coflow scheduling problem, where each job is comprised of a set of flows and the family of sets that can be scheduled at any time form a matroid. Our main result is a polynomial-time algorithm that yields a 2-approximation for the objective of minimizing the weighted completion time. This result is tight assuming P != NP. As a by-product we also obtain the first (2+epsilon)-approximation algorithm for the preemptive concurrent open shop scheduling problem.
Cite as
Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Manish Purohit. Matroid Coflow Scheduling. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 145:1-145:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{im_et_al:LIPIcs.ICALP.2019.145,
author = {Im, Sungjin and Moseley, Benjamin and Pruhs, Kirk and Purohit, Manish},
title = {{Matroid Coflow Scheduling}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {145:1--145:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.145},
URN = {urn:nbn:de:0030-drops-107213},
doi = {10.4230/LIPIcs.ICALP.2019.145},
annote = {Keywords: Coflow Scheduling, Concurrent Open Shop, Matroid Scheduling}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Amos Korman and Yoav Rodeh
Abstract
Assume that a treasure is placed in one of M boxes according to a known distribution and that k searchers are searching for it in parallel during T rounds. We study the question of how to incentivize selfish players so that group performance is maximized. Here, this is measured by the success probability, namely, the probability that at least one player finds the treasure. We focus on congestion policies C(l) that specify the reward that a player receives if it is one of l players that (simultaneously) find the treasure for the first time. Our main technical contribution is proving that the exclusive policy, in which C(1)=1 and C(l)=0 for l>1, yields a price of anarchy of (1-(1-1/k)^k)^{-1}, and that this is the best possible price among all symmetric reward mechanisms. For this policy we also have an explicit description of a symmetric equilibrium, which is in some sense unique and, moreover, enjoys the best success probability among all symmetric profiles. For general congestion policies, we show how to find, in polynomial time and for any theta>0, a symmetric multiplicative (1+theta)(1+C(k))-equilibrium.
Together with an appropriate reward policy, a central entity can suggest that players play a particular equilibrium profile. As our main conceptual contribution, we advocate the use of symmetric equilibria for such purposes. We argue that, besides being fair, symmetric equilibria can also be highly robust to crashes of players. Indeed, in many cases, despite the fact that some small fraction of players crash (or refuse to participate), symmetric equilibria remain efficient in terms of their group performance and, at the same time, serve as approximate equilibria. We show that this principle holds for a class of games, which we call monotonously scalable games. This applies in particular to our search game, assuming the natural sharing policy, in which C(l)=1/l. For the exclusive policy, this general result does not hold, but we show that the symmetric equilibrium is nevertheless robust under mild assumptions.
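The price-of-anarchy expression for the exclusive policy has a clean limit; the short Python check below (a direct evaluation of the formula stated in the abstract, not additional analysis from the paper) shows (1-(1-1/k)^k)^{-1} increasing toward e/(e-1), roughly 1.582, as k grows:

    from math import e

    def exclusive_policy_poa(k):
        """Price of anarchy of the exclusive policy for k searchers,
        as stated above: (1 - (1 - 1/k)^k)^(-1)."""
        return 1.0 / (1.0 - (1.0 - 1.0 / k) ** k)

    for k in (1, 2, 5, 10, 100, 10**6):
        print(k, round(exclusive_policy_poa(k), 6))
    print("limit e/(e-1) =", round(e / (e - 1), 6))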
Cite as
Amos Korman and Yoav Rodeh. Multi-Round Cooperative Search Games with Multiple Players. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 146:1-146:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
@InProceedings{korman_et_al:LIPIcs.ICALP.2019.146,
author = {Korman, Amos and Rodeh, Yoav},
title = {{Multi-Round Cooperative Search Games with Multiple Players}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {146:1--146:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.146},
URN = {urn:nbn:de:0030-drops-107227},
doi = {10.4230/LIPIcs.ICALP.2019.146},
annote = {Keywords: Algorithmic Mechanism Design, Parallel Algorithms, Collaborative Search, Fault-Tolerance, Price of Anarchy, Price of Stability, Symmetric Equilibria}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Dariusz R. Kowalski and Miguel A. Mosteiro
Abstract
Counting the number of nodes in Anonymous Dynamic Networks is enticing from an algorithmic perspective: an important computation in a restricted platform with promising applications. Starting with Michail, Chatzigiannakis, and Spirakis [Michail et al., 2013], a flurry of papers improved the running-time guarantees from doubly exponential to polynomial [Dariusz R. Kowalski and Miguel A. Mosteiro, 2018]. There is a common theme across all those works: a distinguished node is assumed to be present, because Counting cannot be solved deterministically without at least one.
In the present work we study challenging questions that naturally follow: how to count efficiently with more than one distinguished node, and how to count without any distinguished node. More importantly, what is the minimal information needed about these distinguished nodes, and what is the best we can aim for (count precision, stochastic guarantees, etc.) without any? We present negative and positive results answering these questions. To the best of our knowledge, this is the first work that addresses them.
Cite as
Dariusz R. Kowalski and Miguel A. Mosteiro. Polynomial Anonymous Dynamic Distributed Computing Without a Unique Leader. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 147:1-147:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{kowalski_et_al:LIPIcs.ICALP.2019.147,
author = {Kowalski, Dariusz R. and Mosteiro, Miguel A.},
title = {{Polynomial Anonymous Dynamic Distributed Computing Without a Unique Leader}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {147:1--147:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.147},
URN = {urn:nbn:de:0030-drops-107239},
doi = {10.4230/LIPIcs.ICALP.2019.147},
annote = {Keywords: Anonymous Dynamic Networks, Counting, distributed algorithms}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Frederik Mallmann-Trenn, Yannic Maus, and Dominik Pajak
Abstract
We study a process of averaging in a distributed system with noisy communication. Each of the agents in the system starts with some value, and the goal of each agent is to compute the average of all the initial values. In each round, one pair of agents is drawn uniformly at random from the whole population; the two agents communicate with each other, and each updates its local value based on its own value and the received message. The communication is noisy: whenever an agent sends a value v, the receiving agent receives v + N, where N is a zero-mean Gaussian random variable. The two quality measures of interest are (i) the total sum of squares TSS(t), which measures the sum of squared distances from the average load to the initial average, and (ii) bar{phi}(t), which measures the sum of squared distances from the average load to the running average (the average at time t).
It is known that the simple averaging protocol - in which an agent sends its current value and sets its new value to the average of the received value and its current value - eventually converges to a state where bar{phi}(t) is small. It has been observed that, due to the noise, TSS(t) eventually diverges, and previous research - mostly in control theory - has focused on showing eventual convergence with respect to the running average. We obtain the first probabilistic bounds on the convergence time of bar{phi}(t) and precise bounds on the drift of TSS(t), showing that although TSS(t) eventually diverges, for a wide and interesting range of parameters TSS(t) stays small for a number of rounds that is polynomial in the number of agents. Our results extend to the synchronous setting and to settings where the agents are restricted to discrete values and perform rounding.
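As a concrete illustration of the process described above, the following minimal Python sketch simulates the noisy pairwise-averaging protocol and tracks both potentials; the noise level sigma, the initial values, and all function names are assumptions made for illustration, not the paper's setup.

import random

def noisy_averaging(values, rounds, sigma):
    """Simulate pairwise averaging with Gaussian channel noise.
    In each round a uniformly random pair exchanges noisy copies of their
    values, and each agent sets its value to the average of its own value and
    the received (noisy) one. Returns trajectories of
    TSS(t): sum of squared distances to the initial average, and
    phi(t): sum of squared distances to the running (current) average."""
    n = len(values)
    x = list(values)
    initial_avg = sum(x) / n
    tss, phi = [], []
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)
        msg_i = x[i] + random.gauss(0.0, sigma)  # what agent j receives from i
        msg_j = x[j] + random.gauss(0.0, sigma)  # what agent i receives from j
        x[i] = (x[i] + msg_j) / 2.0
        x[j] = (x[j] + msg_i) / 2.0
        running_avg = sum(x) / n
        tss.append(sum((v - initial_avg) ** 2 for v in x))
        phi.append(sum((v - running_avg) ** 2 for v in x))
    return tss, phi

if __name__ == "__main__":
    random.seed(0)
    start = [random.uniform(0, 100) for _ in range(50)]
    tss, phi = noisy_averaging(start, rounds=5000, sigma=1.0)
    print("final TSS:", tss[-1], "final phi:", phi[-1])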
Cite as
Frederik Mallmann-Trenn, Yannic Maus, and Dominik Pajak. Noidy Conmunixatipn: On the Convergence of the Averaging Population Protocol. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 148:1-148:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{mallmanntrenn_et_al:LIPIcs.ICALP.2019.148,
author = {Mallmann-Trenn, Frederik and Maus, Yannic and Pajak, Dominik},
title = {{Noidy Conmunixatipn: On the Convergence of the Averaging Population Protocol}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {148:1--148:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.148},
URN = {urn:nbn:de:0030-drops-107240},
doi = {10.4230/LIPIcs.ICALP.2019.148},
annote = {Keywords: population protocols, noisy communication, distributed averaging}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Shunhao Oh, Anuja Meetoo Appavoo, and Seth Gilbert
Abstract
Bandit-style algorithms have been studied extensively in stochastic and adversarial settings. Such algorithms have been shown to be useful in multiplayer settings, e.g., to solve the wireless network selection problem, which can be formulated as an adversarial bandit problem. A leading bandit algorithm for the adversarial setting is EXP3. However, network behavior is often repetitive, with user density and network conditions following regular patterns. Bandit algorithms like EXP3 fail to provide good guarantees for such periodic behavior. A major reason is that these algorithms compete against fixed-action policies, which is ineffective in a periodic setting.
In this paper, we define a periodic bandit setting and periodic regret as a better performance measure for this type of setting. Instead of comparing an algorithm's performance to fixed-action policies, we aim to be competitive with policies that play arms according to some set of possible periodic patterns F (for example, all possible periodic functions with periods 1, 2, ..., P). We propose Periodic EXP4, a computationally efficient variant of the EXP4 algorithm for periodic settings. With K arms, T time steps, and each periodic pattern in F of length at most P, we show that the periodic regret obtained by Periodic EXP4 is at most O(sqrt(PKT log K + KT log |F|)). We also prove a lower bound of Omega(sqrt(PKT + KT (log |F|)/(log K))) for the periodic setting, showing that this is optimal up to logarithmic factors. As an example, we focus on the wireless network selection problem. Through simulation, we show that Periodic EXP4 learns the periodic pattern over time, adapts to changes in a dynamic environment, and far outperforms EXP3.
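For context, here is a minimal Python sketch of the standard EXP3 baseline that Periodic EXP4 is compared against; the learning rate, the toy periodic reward model, and all names are illustrative assumptions, and the paper's Periodic EXP4 construction (running EXP4 over the periodic-pattern class F) is not reproduced here.

import math
import random

def exp3(reward_fn, K, T, gamma=0.1):
    """Standard EXP3 for adversarial bandits with rewards in [0, 1].
    reward_fn(t, arm) returns the reward of pulling `arm` at time t.
    EXP3 competes against the best fixed arm, which is exactly why it
    struggles in periodic environments like the toy one below."""
    weights = [1.0] * K
    total = 0.0
    for t in range(T):
        s = sum(weights)
        probs = [(1 - gamma) * w / s + gamma / K for w in weights]
        arm = random.choices(range(K), weights=probs, k=1)[0]
        x = reward_fn(t, arm)
        total += x
        xhat = x / probs[arm]                       # importance-weighted reward estimate
        weights[arm] *= math.exp(gamma * xhat / K)  # exponential-weights update
    return total

if __name__ == "__main__":
    # Toy periodic environment (assumption): the rewarding arm rotates with
    # period 3, the regime where fixed-action benchmarks are weak.
    K, P = 5, 3
    reward = lambda t, a: 1.0 if a == (t % P) else 0.0
    random.seed(0)
    print("EXP3 cumulative reward:", exp3(reward, K, T=3000))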
Cite as
Shunhao Oh, Anuja Meetoo Appavoo, and Seth Gilbert. Periodic Bandits and Wireless Network Selection. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 149:1-149:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{oh_et_al:LIPIcs.ICALP.2019.149,
author = {Oh, Shunhao and Appavoo, Anuja Meetoo and Gilbert, Seth},
title = {{Periodic Bandits and Wireless Network Selection}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {149:1--149:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.149},
URN = {urn:nbn:de:0030-drops-107251},
doi = {10.4230/LIPIcs.ICALP.2019.149},
annote = {Keywords: multi-armed bandits, wireless network selection, periodicity in environment}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Christian Scheideler and Alexander Setzer
Abstract
We consider the problem of transforming a given graph G_s into a desired graph G_t by applying a minimum number of primitives from a particular set of local graph transformation primitives. These primitives are local in the sense that each node can apply them based on local knowledge and by affecting only its 1-neighborhood. Although the specific set of primitives we consider makes it possible to transform any (weakly) connected graph into any other (weakly) connected graph consisting of the same nodes, they cannot disconnect the graph or introduce new nodes, making them ideal in the context of supervised overlay network transformations. We prove that computing a minimum sequence of primitive applications for arbitrary G_s and G_t is NP-hard, even in a centralized setting, and we conjecture that this holds for any set of local graph transformation primitives satisfying the aforementioned properties. On the other hand, we show that the problem admits a polynomial-time algorithm with a constant approximation ratio.
Cite as
Christian Scheideler and Alexander Setzer. On the Complexity of Local Graph Transformations. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 150:1-150:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{scheideler_et_al:LIPIcs.ICALP.2019.150,
author = {Scheideler, Christian and Setzer, Alexander},
title = {{On the Complexity of Local Graph Transformations}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {150:1--150:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.150},
URN = {urn:nbn:de:0030-drops-107266},
doi = {10.4230/LIPIcs.ICALP.2019.150},
annote = {Keywords: Graph transformations, NP-hardness, approximation algorithms}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Authors:
Daniel Schmand, Marc Schröder, and Alexander Skopalik
Abstract
We study a two-sided network investment game consisting of two sets of players, called providers and users. The game proceeds in two stages. In the first stage, providers aim to maximize their profit by investing in bandwidth of cloud computing services. The investments of the providers yield a set of usable services for the users. In the second stage, each user wants to process a task and therefore selects a bundle of services so as to minimize the total processing time. We assume that the total processing time is separable over the chosen services and that the processing time of each service depends on the utilization of the service and the installed bandwidth. We provide insights into how competition between providers affects the total costs of the users and show that, when analyzing the set of subgame perfect Nash equilibria, every game on a series-parallel graph can be reduced to an equivalent single-edge game.
Cite as
Daniel Schmand, Marc Schröder, and Alexander Skopalik. Network Investment Games with Wardrop Followers. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 151:1-151:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
@InProceedings{schmand_et_al:LIPIcs.ICALP.2019.151,
author = {Schmand, Daniel and Schr\"{o}der, Marc and Skopalik, Alexander},
title = {{Network Investment Games with Wardrop Followers}},
booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
pages = {151:1--151:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-109-2},
ISSN = {1868-8969},
year = {2019},
volume = {132},
editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.151},
URN = {urn:nbn:de:0030-drops-107272},
doi = {10.4230/LIPIcs.ICALP.2019.151},
annote = {Keywords: Network Investment Game, Wardrop Equilibrium, Subgame Perfect Nash Equilibrium}
}