eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
0
0
10.4230/LIPIcs.ICALP.2019
article
LIPIcs, Volume 132, ICALP'19, Complete Volume
Baier, Christel
1
Chatzigiannakis, Ioannis
2
Flocchini, Paola
3
Leonardi, Stefano
2
TU Dresden, Germany
Sapienza University of Rome, Italy
University of Ottawa, Canada
LIPIcs, Volume 132, ICALP'19, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019/LIPIcs.ICALP.2019.pdf
Theory of computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
0:i
0:xxxviii
10.4230/LIPIcs.ICALP.2019.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Baier, Christel
1
Chatzigiannakis, Ioannis
2
Flocchini, Paola
3
Leonardi, Stefano
2
TU Dresden, Germany
Sapienza University of Rome, Italy
University of Ottawa, Canada
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.0/LIPIcs.ICALP.2019.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
1:1
1:1
10.4230/LIPIcs.ICALP.2019.1
article
Auction Design under Interdependent Values (Invited Talk)
Feldman, Michal
1
Blavatnik School of Computer Science, Tel-Aviv University, Israel
We study combinatorial auctions with interdependent valuations. In such settings, every agent has a private signal, and every agent has a valuation function that depends on the private signals of all the agents. Interdependent valuations capture settings where agents lack information to determine their own valuations. Examples include auctions for artwork or oil drilling rights. For single-item auctions, under certain restrictive conditions (the so-called single-crossing condition), full welfare can be achieved. However, in general, there are strong impossibility results on welfare maximization in the interdependent setting. This is in contrast to settings where agents are aware of their own valuations, where the optimal welfare can always be obtained by an incentive compatible mechanism.
Motivated by these impossibility results, we study welfare maximization for interdependent valuations through the lens of approximation. We introduce two valuation properties that enable positive results. The first is a relaxed, parameterized version of single crossing; the second is a submodularity condition over the signals. We obtain a host of approximation guarantees under these two notions for various scenarios.
Related publications: [Alon Eden et al., 2018; Alon Eden et al., 2019]
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.1/LIPIcs.ICALP.2019.1.pdf
Combinatorial auctions
Interdependent values
Welfare approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
2:1
2:1
10.4230/LIPIcs.ICALP.2019.2
article
Symmetry and Similarity (Invited Talk)
Grohe, Martin
1
https://orcid.org/0000-0002-0292-9142
RWTH Aachen University, Lehrstuhl Informatik 7, Ahornstr. 55, 52074 Aachen, Germany
Deciding if two graphs are isomorphic, or equivalently, computing the symmetries of a graph, is a fundamental algorithmic problem. It has many interesting applications, and it is one of the few natural problems in the class NP whose complexity status is still unresolved. Three years ago, Babai (STOC 2016) gave a quasi-polynomial time isomorphism algorithm. Despite this breakthrough, the question of a polynomial-time algorithm remains wide open.
Related to the isomorphism problem is the problem of determining the similarity between graphs. Variations of this problem are known as robust graph isomorphism or graph matching (the latter in the machine learning and computer vision literature). This problem is significantly harder than the isomorphism problem, both from a complexity-theoretic and from a practical point of view, but for many applications it is the more relevant problem.
My talk will be a survey of recent progress on the isomorphism and the similarity problem. I will focus on generic algorithmic strategies (as opposed to algorithms tailored towards specific graph classes) that have proved to be useful and interesting in various contexts, both theoretical and practical.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.2/LIPIcs.ICALP.2019.2.pdf
Graph Isomorphism
Graph Similarity
Graph Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
3:1
3:1
10.4230/LIPIcs.ICALP.2019.3
article
Approximately Good and Modern Matchings (Invited Talk)
Svensson, Ola
1
https://orcid.org/0000-0003-2997-1372
EPFL, Lausanne, Switzerland
The matching problem is one of our favorite benchmark problems. Work on it has contributed to the development of many core concepts of computer science, including the equation of efficiency with polynomial time computation in the groundbreaking work by Edmonds in 1965.
However, half a century later, we still do not have a full understanding of the complexity of the matching problem in several models of computation, such as parallel, online, and streaming algorithms. In this talk we survey some of the major challenges and report on some recent progress.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.3/LIPIcs.ICALP.2019.3.pdf
Algorithms
Matchings
Computational Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
4:1
4:1
10.4230/LIPIcs.ICALP.2019.4
article
Automata Learning and Galois Connections (Invited Talk)
Vaandrager, Frits
1
https://orcid.org/0000-0003-3955-1910
Department of Software Science, Radboud University, The Netherlands
Automata learning is emerging as an effective technique for obtaining state machine models of software and hardware systems. I will present an overview of recent work in which we used active automata learning to find standard violations and security vulnerabilities in implementations of network protocols such as TCP and SSH. Also, I will discuss applications of automata learning to support refactoring of legacy control software and identifying job patterns in manufacturing systems. As a guiding theme in my presentation, I will show how Galois connections (adjunctions) help us to scale the application of learning algorithms to practical problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.4/LIPIcs.ICALP.2019.4.pdf
Automaton Learning
Model Learning
Protocol Verification
Applications of Automata Learning
Galois Connections
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
5:1
5:1
10.4230/LIPIcs.ICALP.2019.5
article
Fixed Point Computation Problems and Facets of Complexity (Invited Talk)
Yannakakis, Mihalis
1
Department of Computer Science, Columbia University, 455 Computer Science Building, 1214 Amsterdam Avenue, New York, NY 10027, USA
Many problems from a wide variety of areas can be formulated mathematically as the problem of computing a fixed point of a suitable given multivariate function. Examples include a variety of problems from game theory, economics, optimization, stochastic analysis, verification, and others. In some problems there is a unique fixed point (for example if the function is a contraction); in others there may be multiple fixed points and any one of them is an acceptable solution; while in other cases the desired object is a specific fixed point (for example the least fixed point or greatest fixed point of a monotone function). In this talk we will discuss several types of fixed point computation problems, their complexity, and some of the common themes that have emerged: classes of problems for which there are efficient algorithms, and other classes for which there seem to be serious obstacles.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.5/LIPIcs.ICALP.2019.5.pdf
Fixed Point
Polynomial Time Algorithm
Computational Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
6:1
6:13
10.4230/LIPIcs.ICALP.2019.6
article
Complexity-Theoretic Limitations on Blind Delegated Quantum Computation
Aaronson, Scott
1
Cojocaru, Alexandru
2
Gheorghiu, Alexandru
3
2
https://orcid.org/0000-0001-6225-7168
Kashefi, Elham
2
4
Department of Computer Science, University of Texas at Austin, USA
School of Informatics, University of Edinburgh, UK
Department of Computing and Mathematical Sciences, California Institute of Technology, USA
CNRS LIP6, Université Pierre et Marie Curie, Paris, France
Blind delegation protocols allow a client to delegate a computation to a server so that the server learns nothing about the input to the computation apart from its size. For the specific case of quantum computation we know, from work over the past decade, that blind delegation protocols can achieve information-theoretic security (provided the client and the server exchange some amount of quantum information). In this paper we prove, provided certain complexity-theoretic conjectures are true, that the power of information-theoretically secure blind delegation protocols for quantum computation (ITS-BQC protocols) is constrained in a number of ways.
In the first part of our paper we provide some indication that ITS-BQC protocols for delegating polynomial-time quantum computations in which the client and the server interact only classically are unlikely to exist. We first show that having such a protocol in which the client and the server exchange O(n^d) bits of communication implies that BQP subset MA/O(n^d). We conjecture that this containment is unlikely by proving that there exists an oracle relative to which BQP not subset MA/O(n^d). We then show that if an ITS-BQC protocol exists in which the client and the server interact only classically and which allows the client to delegate quantum sampling problems to the server (such as BosonSampling), then there exist non-uniform circuits of size 2^{n - Omega(n/log(n))}, making polynomially sized queries to an NP^{NP} oracle, for computing the permanent of an n x n matrix.
The second part of our paper concerns ITS-BQC protocols in which the client and the server engage in one round of quantum communication and then exchange polynomially many classical messages. First, we provide a complexity-theoretic upper bound on the types of functions that could be delegated in such a protocol by showing that they must be contained in QCMA/qpoly cap coQCMA/qpoly. Then, we show that having such a protocol for delegating NP-hard functions implies coNP^{NP^{NP}} subseteq NP^{NP^{PromiseQMA}}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.6/LIPIcs.ICALP.2019.6.pdf
Quantum cryptography
Complexity theory
Delegated quantum computation
Computing on encrypted data
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
7:1
7:15
10.4230/LIPIcs.ICALP.2019.7
article
Faster Algorithms for All-Pairs Bounded Min-Cuts
Abboud, Amir
1
Georgiadis, Loukas
2
Italiano, Giuseppe F.
3
Krauthgamer, Robert
4
Parotsidis, Nikos
5
Trabelsi, Ohad
4
Uznański, Przemysław
6
Wolleb-Graf, Daniel
7
IBM Almaden Research Center, California, USA
University of Ioannina, Greece
LUISS University, Rome, Italy
Weizmann Institute of Science, Israel
University of Copenhagen, Denmark
University of Wrocław, Poland
ETH Zürich, Switzerland
The All-Pairs Min-Cut problem (aka All-Pairs Max-Flow) asks to compute a minimum s-t cut (or just its value) for all pairs of vertices s,t. We study this problem in directed graphs with unit edge/vertex capacities (corresponding to edge/vertex connectivity). Our focus is on the k-bounded case, where the algorithm has to find all pairs with min-cut value less than k, and report only those. The most basic case k=1 is the Transitive Closure (TC) problem, which can be solved in graphs with n vertices and m edges in time O(mn) combinatorially, and in time O(n^{omega}) where omega<2.38 is the matrix-multiplication exponent. These time bounds are conjectured to be optimal.
We present new algorithms and conditional lower bounds that advance the frontier for larger k, as follows:
- A randomized algorithm for vertex capacities that runs in time {O}((nk)^{omega}). This is only a factor k^omega away from the TC bound, and nearly matches it for all k=n^{o(1)}.
- Two deterministic algorithms for edge capacities (a more general setting) that work in DAGs and further report a minimum cut for each pair. The first algorithm is combinatorial (does not involve matrix multiplication) and runs in time {O}(2^{{O}(k^2)}* mn). The second algorithm can be faster on dense DAGs and runs in time {O}((k log n)^{4^{k+o(k)}}* n^{omega}). Previously, Georgiadis et al. [ICALP 2017] could match the TC bound (up to n^{o(1)} factors) only when k=2; our two algorithms now match it for all k=o(sqrt{log n}) and k=o(log log n), respectively.
- The first super-cubic lower bound of n^{omega-1-o(1)} k^2 time under the 4-Clique conjecture, which holds even in the simplest case of DAGs with unit vertex capacities. It improves on the previous (SETH-based) lower bounds even in the unbounded setting k=n. For combinatorial algorithms, our reduction implies an n^{2-o(1)} k^2 conditional lower bound. Thus, we identify new settings where the complexity of the problem is (conditionally) higher than that of TC.
Our three sets of results are obtained via different techniques. The first one adapts the network coding method of Cheung, Lau, and Leung [SICOMP 2013] to vertex-capacitated digraphs. The second set exploits new insights on the structure of latest cuts together with suitable algebraic tools. The lower bounds arise from a novel reduction of a different structure than the SETH-based constructions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.7/LIPIcs.ICALP.2019.7.pdf
All-pairs min-cut
k-reachability
network coding
Directed graphs
fine-grained complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
8:1
8:13
10.4230/LIPIcs.ICALP.2019.8
article
Fine-Grained Reductions and Quantum Speedups for Dynamic Programming
Abboud, Amir
1
IBM Almaden Research Center, San Jose, California, USA
This paper points at a connection between certain (classical) fine-grained reductions and the question: Do quantum algorithms offer an advantage for problems whose (classical) best solution is via dynamic programming?
A remarkable recent result of Ambainis et al. [SODA 2019] indicates that the answer is positive for some fundamental problems such as Set-Cover and Travelling Salesman. They design a quantum O^*(1.728^n) time algorithm, whereas the dynamic programming O^*(2^n) time algorithms are conjectured to be classically optimal. In this paper, fine-grained reductions are extracted from their algorithms, giving the first lower bounds for problems in P that are based on the intriguing Set-Cover Conjecture (SeCoCo) of Cygan et al. [CCC 2010].
In particular, the SeCoCo implies:
- a super-linear Omega(n^{1.08}) lower bound for 3-SUM on n integers,
- an Omega(n^{k/(c_k)-epsilon}) lower bound for k-SUM on n integers and k-Clique on n-node graphs, for any integer k >= 3, where c_k <= log_2{k}+1.4427.
While far from being tight, these lower bounds are significantly stronger than what is known to follow from the Strong Exponential Time Hypothesis (SETH); the well-known n^{Omega(k)} ETH-based lower bounds for k-Clique and k-SUM are vacuous when k is constant.
Going in the opposite direction, this paper observes that some "sequential" problems with previously known fine-grained reductions to a "parallelizable" core also enjoy quantum speedups over their classical dynamic programming solutions. Examples include RNA Folding and Least-Weight Subsequence.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.8/LIPIcs.ICALP.2019.8.pdf
Fine-Grained Complexity
Set-Cover
3-SUM
k-Clique
k-SUM
Dynamic Programming
Quantum Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
9:1
9:15
10.4230/LIPIcs.ICALP.2019.9
article
Geometric Multicut
Abrahamsen, Mikkel
1
https://orcid.org/0000-0003-2734-4690
Giannopoulos, Panos
2
Löffler, Maarten
3
Rote, Günter
4
https://orcid.org/0000-0002-0351-5945
BARC, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen, Denmark
giCenter, Department of Computer Science, City University of London, EC1V 0HB, London, UK
Department of Information and Computing Sciences, Utrecht University, The Netherlands
Institut für Informatik, Freie Universität Berlin, Takustraße 9, 14195 Berlin, Germany
We study the following separation problem: Given a collection of colored objects in the plane, compute a shortest "fence" F, i.e., a union of curves of minimum total length, that separates every two objects of different colors. Two objects are separated if F contains a simple closed curve that has one object in the interior and the other in the exterior. We refer to the problem as GEOMETRIC k-CUT, where k is the number of different colors, as it can be seen as a geometric analogue to the well-studied multicut problem on graphs. We first give an O(n^4 log^3 n)-time algorithm that computes an optimal fence for the case where the input consists of polygons of two colors and n corners in total. We then show that the problem is NP-hard for the case of three colors. Finally, we give a (2-4/(3k))-approximation algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.9/LIPIcs.ICALP.2019.9.pdf
multicut
clustering
Steiner tree
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
10:1
10:12
10.4230/LIPIcs.ICALP.2019.10
article
Lower Bounds for Multiplication via Network Coding
Afshani, Peyman
1
Freksen, Casper Benjamin
1
Kamma, Lior
1
Larsen, Kasper Green
1
Computer Science Department, Aarhus University, Denmark
Multiplication is one of the most fundamental computational problems, yet its true complexity remains elusive. The best known upper bound, very recently proved by Harvey and van der Hoeven (2019), shows that two n-bit numbers can be multiplied via a boolean circuit of size O(n lg n). In this work, we prove that if a central conjecture in the area of network coding is true, then any constant degree boolean circuit for multiplication must have size Omega(n lg n), thus almost completely settling the complexity of multiplication circuits. We additionally revisit classic conjectures in circuit complexity, due to Valiant, and show that the network coding conjecture also implies one of Valiant’s conjectures.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.10/LIPIcs.ICALP.2019.10.pdf
Circuit Complexity
Circuit Lower Bounds
Multiplication
Network Coding
Fine-Grained Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
11:1
11:13
10.4230/LIPIcs.ICALP.2019.11
article
Path Contraction Faster Than 2^n
Agrawal, Akanksha
1
Fomin, Fedor V.
2
Lokshtanov, Daniel
3
Saurabh, Saket
4
2
Tale, Prafullkumar
5
Ben-Gurion University of the Negev, Beersheba, Israel
University of Bergen, Bergen, Norway
University of California Santa Barbara, Santa Barbara, California
Institute of Mathematical Sciences, HBNI and UMI ReLaX Chennai, India
Institute of Mathematical Sciences, HBNI, Chennai, India
A graph G is contractible to a graph H if there is a set X subseteq E(G), such that G/X is isomorphic to H. Here, G/X is the graph obtained from G by contracting all the edges in X. For a family of graphs F, the F-Contraction problem takes as input a graph G on n vertices, and the objective is to output the largest integer t, such that G is contractible to a graph H in F, where |V(H)|=t. When F is the family of paths, then the corresponding F-Contraction problem is called Path Contraction. The problem Path Contraction admits a simple algorithm running in time 2^n * n^{O(1)}. In spite of the deceptive simplicity of the problem, beating the 2^n * n^{O(1)} bound for Path Contraction seems quite challenging. In this paper, we design an exact exponential time algorithm for Path Contraction that runs in time 1.99987^n * n^{O(1)}. We also define a problem called 3-Disjoint Connected Subgraphs, and design an algorithm for it that runs in time 1.88^n * n^{O(1)}. The above algorithm is used as a sub-routine in our algorithm for Path Contraction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.11/LIPIcs.ICALP.2019.11.pdf
path contraction
exact exponential time algorithms
graph algorithms
enumerating connected sets
3-disjoint connected subgraphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
12:1
12:14
10.4230/LIPIcs.ICALP.2019.12
article
Deterministic Combinatorial Replacement Paths and Distance Sensitivity Oracles
Alon, Noga
1
2
Chechik, Shiri
3
Cohen, Sarel
3
Department of Mathematics, Princeton University, Princeton, NJ 08544, USA
Schools of Mathematics and Computer Science, Tel Aviv University, Tel Aviv 69978, Israel
Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv 69978, Israel
In this work we derandomize two central results in graph algorithms, replacement paths and distance sensitivity oracles (DSOs) matching in both cases the running time of the randomized algorithms.
For the replacement paths problem, let G = (V,E) be a directed unweighted graph with n vertices and m edges and let P be a shortest path from s to t in G. The replacement paths problem is to find, for every edge e in P, the shortest path from s to t avoiding e. Roditty and Zwick [ICALP 2005] obtained a randomized algorithm with running time of O~(m sqrt{n}). Here we provide the first deterministic algorithm for this problem, with the same O~(m sqrt{n}) time. Due to matching conditional lower bounds of Williams et al. [FOCS 2010], our deterministic combinatorial algorithm for the replacement paths problem is optimal up to polylogarithmic factors (unless the long-standing bound of O~(mn) for combinatorial boolean matrix multiplication can be improved). This also implies a deterministic algorithm for the second simple shortest path problem in O~(m sqrt{n}) time, and a deterministic algorithm for the k-simple shortest paths problem in O~(k m sqrt{n}) time (for any integer constant k > 0).
For the problem of distance sensitivity oracles, let G = (V,E) be a directed graph with real-edge weights. An f-Sensitivity Distance Oracle (f-DSO) gets as input the graph G=(V,E) and a parameter f, preprocesses it into a data-structure, such that given a query (s,t,F) with s,t in V and F subseteq E cup V, |F| <=f being a set of at most f edges or vertices (failures), the query algorithm efficiently computes the distance from s to t in the graph G \ F (i.e., the distance from s to t in the graph G after removing from it the failing edges and vertices F).
For weighted graphs with real edge weights, Weimann and Yuster [FOCS 2010] presented several randomized f-DSOs. In particular, they presented a combinatorial f-DSO with O~(mn^{4-alpha}) preprocessing time and subquadratic O~(n^{2-2(1-alpha)/f}) query time, giving a tradeoff between preprocessing and query time for every value of 0 < alpha < 1. We derandomize this result and present a combinatorial deterministic f-DSO with the same asymptotic preprocessing and query time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.12/LIPIcs.ICALP.2019.12.pdf
replacement paths
distance sensitivity oracles
derandomization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
13:1
13:14
10.4230/LIPIcs.ICALP.2019.13
article
Algorithms and Hardness for Diameter in Dynamic Graphs
Ancona, Bertie
1
Henzinger, Monika
2
Roditty, Liam
3
Williams, Virginia Vassilevska
1
Wein, Nicole
1
MIT, Cambridge, MA, USA
University of Vienna, Austria
Bar Ilan University, Ramat Gan, Israel
The diameter, radius and eccentricities are natural graph parameters. While these problems have been studied extensively, there are no known dynamic algorithms for them beyond the ones that follow from trivial recomputation after each update or from solving dynamic All-Pairs Shortest Paths (APSP), which is very computationally intensive. The same is true for dynamic approximation algorithms, even when only edge insertions or only edge deletions need to be supported.
This paper provides a comprehensive study of the dynamic approximation of Diameter, Radius and Eccentricities, providing both conditional lower bounds, and new algorithms whose bounds are optimal under popular hypotheses in fine-grained complexity. Some of the highlights include:
- Under popular hardness hypotheses, there can be no significantly better fully dynamic approximation algorithms than recomputing the answer after each update, or maintaining full APSP.
- Nearly optimal partially dynamic (incremental/decremental) algorithms can be achieved via efficient reductions to (incremental/decremental) maintenance of Single-Source Shortest Paths. For instance, a nearly (3/2+epsilon)-approximation to Diameter in directed or undirected n-vertex, m-edge graphs can be maintained decrementally in total time m^{1+o(1)}sqrt{n}/epsilon^2. This nearly matches the static 3/2-approximation algorithm for the problem that is known to be conditionally optimal.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.13/LIPIcs.ICALP.2019.13.pdf
fine-grained complexity
graph algorithms
dynamic algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
14:1
14:16
10.4230/LIPIcs.ICALP.2019.14
article
Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity
Andoni, Alexandr
1
Stein, Clifford
1
Zhong, Peilin
1
Columbia University, New York City, NY, USA
Many modern parallel systems, such as MapReduce, Hadoop and Spark, can be modeled well by the MPC model. The MPC model captures coarse-grained computation on large data well: data is distributed to processors, each of which has a sublinear (in the input data) amount of memory, and we alternate between rounds of computation and rounds of communication, where each machine can communicate an amount of data as large as the size of its memory. This model is stronger than the classical PRAM model, and it is an intriguing question to design algorithms whose running time is smaller than in the PRAM model.
In this paper, we study two fundamental problems, 2-edge connectivity and 2-vertex connectivity (biconnectivity). PRAM algorithms which run in O(log n) time have been known for many years. We give algorithms using roughly log-diameter rounds in the MPC model. Our main results are, for an n-vertex, m-edge graph of diameter D and bi-diameter D': 1) an O(log D log log_{m/n} n) parallel time 2-edge connectivity algorithm, 2) an O(log D log^2 log_{m/n}n+log D'log log_{m/n}n) parallel time biconnectivity algorithm, where the bi-diameter D' is the largest cycle length over all the vertex pairs in the same biconnected component. Our results are fully scalable, meaning that the memory per processor can be O(n^{delta}) for an arbitrary constant delta>0, and the total memory used is linear in the problem size. Our 2-edge connectivity algorithm achieves the same parallel time as the connectivity algorithm of [Andoni et al., 2018]. We also show an Omega(log D') conditional lower bound for the biconnectivity problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.14/LIPIcs.ICALP.2019.14.pdf
parallel algorithms
biconnectivity
2-edge connectivity
the MPC model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
15:1
15:16
10.4230/LIPIcs.ICALP.2019.15
article
Two Party Distribution Testing: Communication and Security
Andoni, Alexandr
1
Malkin, Tal
1
Nosatzki, Negev Shekel
1
Columbia University, New York City, NY, USA
We study the problem of discrete distribution testing in the two-party setting. For example, in the standard closeness testing problem, Alice and Bob each have t samples from, respectively, distributions a and b over [n], and they need to test whether a=b or a,b are epsilon-far (in the l_1 distance). This is in contrast to the well-studied one-party case, where the tester has unrestricted access to samples of both distributions. Despite being a natural constraint in applications, the two-party setting has previously evaded attention.
We address two fundamental aspects of the two-party setting: 1) what is the communication complexity, and 2) can it be accomplished securely, without Alice and Bob learning extra information about each other’s input. Besides closeness testing, we also study the independence testing problem, where Alice and Bob have t samples from distributions a and b respectively, which may be correlated; the question is whether a,b are independent or epsilon-far from being independent. Our contribution is three-fold: 1) We show how to gain communication efficiency given more samples, beyond the information-theoretic bound on t. The gain is polynomially better than what one would obtain via adapting one-party algorithms. 2) We prove tightness of our trade-off for closeness testing, as well as that independence testing requires a tight Omega(sqrt{m}) communication bound for an unbounded number of samples. These lower bounds are of independent interest as, to the best of our knowledge, they are the first two-party communication lower bounds for testing problems where the inputs are sets of i.i.d. samples. 3) We define the concept of secure distribution testing, and provide secure versions of the above protocols with an overhead that is only polynomial in the security parameter.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.15/LIPIcs.ICALP.2019.15.pdf
distribution testing
communication complexity
security
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
16:1
16:15
10.4230/LIPIcs.ICALP.2019.16
article
Two New Results About Quantum Exact Learning
Arunachalam, Srinivasan
1
Chakraborty, Sourav
2
Lee, Troy
3
Paraashar, Manaswi
2
de Wolf, Ronald
4
Center for Theoretical Physics, MIT, Cambridge, MA, USA
Indian Statistical Institute, Kolkata, India
Centre for Quantum Software and Information, School of Software, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
QuSoft, CWI and University of Amsterdam, The Netherlands
We present two new results about exact learning by quantum computers. First, we show how to exactly learn a k-Fourier-sparse n-bit Boolean function from O(k^{1.5}(log k)^2) uniform quantum examples for that function. This improves over the bound of Theta~(kn) uniformly random classical examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang’s lemma for sparse Boolean functions. Second, we show that if a concept class {C} can be exactly learned using Q quantum membership queries, then it can also be learned using O ({Q^2}/{log Q} * log|C|) classical membership queries. This improves the previous-best simulation result (Servedio-Gortler, SICOMP'04) by a log Q-factor.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.16/LIPIcs.ICALP.2019.16.pdf
quantum computing
exact learning
analysis of Boolean functions
Fourier sparse Boolean functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
17:1
17:17
10.4230/LIPIcs.ICALP.2019.17
article
When Algorithms for Maximal Independent Set and Maximal Matching Run in Sublinear Time
Assadi, Sepehr
1
Solomon, Shay
2
Department of Computer Science, Princeton University, NJ, USA
School of Electrical Engineering, Tel Aviv University, Israel
Maximal independent set (MIS), maximal matching (MM), and (Delta+1)-(vertex) coloring in graphs of maximum degree Delta are among the most prominent algorithmic graph theory problems. They are all solvable by a simple linear-time greedy algorithm, and until very recently this constituted the state-of-the-art. In SODA 2019, Assadi, Chen, and Khanna gave a randomized algorithm for (Delta+1)-coloring that runs in O~(n sqrt{n}) time, which even for moderately dense graphs is sublinear in the input size. The work of Assadi et al. however contained a spoiler for MIS and MM: provably, neither problem admits a sublinear-time algorithm in general graphs. In this work, we dig deeper into the possibility of achieving sublinear-time algorithms for MIS and MM.
The neighborhood independence number of a graph G, denoted by beta(G), is the size of the largest independent set in the neighborhood of any vertex. We identify beta(G) as the "right" parameter to measure the runtime of MIS and MM algorithms: although graphs of bounded neighborhood independence may be very dense (a clique is one example), we prove that carefully chosen variants of greedy algorithms for MIS and MM run in O(n beta(G)) and O(n log{n} * beta(G)) time respectively on any n-vertex graph G. We complement this positive result by observing that a simple extension of the lower bound of Assadi et al. implies that Omega(n beta(G)) time is also necessary for any algorithm for either problem, for all values of beta(G) from 1 to Theta(n). We note that our algorithm for MIS is deterministic, while for MM we use randomization, which we prove is unavoidable: any deterministic algorithm for MM requires Omega(n^2) time even for beta(G) = 2.
Graphs with bounded neighborhood independence, already for constant beta = beta(G), constitute a rich family of possibly dense graphs, including line graphs, proper interval graphs, unit-disk graphs, claw-free graphs, and graphs of bounded growth. Our results suggest that even though MIS and MM do not admit sublinear-time algorithms in general graphs, one can still solve both problems in sublinear time for a wide range of beta(G) << n.
Finally, by observing that the lower bound of Omega(n sqrt{n}) time for (Delta+1)-coloring due to Assadi et al. applies to graphs of (small) constant neighborhood independence, we unveil an intriguing separation between the time complexity of MIS and MM, and that of (Delta+1)-coloring: while the time complexity of MIS and MM is strictly higher than that of (Delta+1) coloring in general graphs, the exact opposite relation holds for graphs with small neighborhood independence.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.17/LIPIcs.ICALP.2019.17.pdf
Maximal Independent Set
Maximal Matching
Sublinear-Time Algorithms
Bounded Neighborhood Independence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
18:1
18:16
10.4230/LIPIcs.ICALP.2019.18
article
Robust Communication-Optimal Distributed Clustering Algorithms
Awasthi, Pranjal
1
Bakshi, Ainesh
2
Balcan, Maria-Florina
2
White, Colin
2
Woodruff, David P.
2
Rutgers University, Piscataway, NJ, USA
Carnegie Mellon University, Pittsburgh, PA, USA
In this work, we study the k-median and k-means clustering problems when the data is distributed across many servers and can contain outliers. While there has been a lot of work on these problems for worst-case instances, we focus on gaining a finer understanding through the lens of beyond worst-case analysis. Our main motivation is the following: for many applications such as clustering proteins by function or clustering communities in a social network, there is some unknown target clustering, and the hope is that running a k-median or k-means algorithm will produce clusterings which are close to matching the target clustering. Worst-case results can guarantee constant factor approximations to the optimal k-median or k-means objective value, but not closeness to the target clustering.
Our first result is a distributed algorithm which returns a near-optimal clustering assuming a natural notion of stability, namely, approximation stability [Awasthi and Balcan, 2014], even when a constant fraction of the data are outliers. The communication complexity is O~(sk+z) where s is the number of machines, k is the number of clusters, and z is the number of outliers. Next, we show this amount of communication cannot be improved even in the setting when the input satisfies various non-worst-case assumptions. We give a matching Omega(sk+z) lower bound on the communication required both for approximating the optimal k-means or k-median cost up to any constant, and for returning a clustering that is close to the target clustering in Hamming distance. These lower bounds hold even when the data satisfies approximation stability or other common notions of stability, and the cluster sizes are balanced. Therefore, Omega(sk+z) is a communication bottleneck, even for real-world instances.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.18/LIPIcs.ICALP.2019.18.pdf
robust distributed clustering
communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
19:1
19:13
10.4230/LIPIcs.ICALP.2019.19
article
Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms
Axiotis, Kyriakos
1
Tzamos, Christos
2
MIT, Cambridge, MA, USA
University of Wisconsin-Madison, USA
One of the most fundamental problems in Computer Science is the Knapsack problem. Given a set of n items with different weights and values, it asks to pick the most valuable subset whose total weight is below a capacity threshold T. Despite its wide applicability in various areas in Computer Science, Operations Research, and Finance, the best known running time for the problem is O(T n). The main result of our work is an improved algorithm running in time O(TD), where D is the number of distinct weights. Previously, faster runtimes for Knapsack were only possible when both weights and values are bounded by M and V respectively, running in time O(nMV) [Pisinger, 1999]. In comparison, our algorithm implies a bound of O(n M^2) without any dependence on V, or O(n V^2) without any dependence on M. Additionally, for the unbounded Knapsack problem, we provide an algorithm running in time O(M^2) or O(V^2). Both our algorithms match recent conditional lower bounds shown for the Knapsack problem [Marek Cygan et al., 2017; Marvin Künnemann et al., 2017].
We also initiate a systematic study of general capacitated dynamic programming, of which Knapsack is a core problem. This problem asks to compute the maximum weight path of length k in an edge- or node-weighted directed acyclic graph. In a graph with m edges, these problems are solvable by dynamic programming in time O(k m), and we explore under which conditions the dependence on k can be eliminated. We identify large classes of graphs where this is possible and apply our results to obtain linear time algorithms for the problem of k-sparse Delta-separated sequences. The main technical innovation behind our results is identifying and exploiting concavity that appears in relaxations and subproblems of the tasks we consider.
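For context, the following is a minimal sketch of the classical O(Tn) dynamic program that this paper's O(TD) algorithm improves on; it is a textbook baseline, not the paper's algorithm, and all names are illustrative:

```python
def knapsack(items, T):
    """Classical O(T*n) DP baseline for 0/1 Knapsack.

    items: list of (weight, value) pairs; T: capacity threshold.
    Returns the maximum total value of a subset with total weight <= T.
    """
    best = [0] * (T + 1)  # best[c] = max value achievable within capacity c
    for w, v in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(T, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[T]

print(knapsack([(3, 4), (2, 3), (4, 5)], 5))  # -> 7 (items of weight 3 and 2)
```

The paper's contribution can be read against this baseline: the inner loop runs once per item, so collapsing the n items into D distinct weights is what yields O(TD).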
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.19/LIPIcs.ICALP.2019.19.pdf
Knapsack
Fine-Grained Complexity
Dynamic Programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
20:1
20:16
10.4230/LIPIcs.ICALP.2019.20
article
Covering Metric Spaces by Few Trees
Bartal, Yair
1
Fandina, Nova
1
Neiman, Ofer
2
Department of Computer Science, Hebrew University of Jerusalem, Israel
Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
A tree cover of a metric space (X,d) is a collection of trees, so that every pair x,y in X has a low distortion path in one of the trees. If it has the stronger property that every point x in X has a single tree with low distortion paths to all other points, we call this a Ramsey tree cover. Tree covers and Ramsey tree covers have been studied by [Yair Bartal et al., 2005; Anupam Gupta et al., 2004; T-H. Hubert Chan et al., 2005; Gupta et al., 2006; Mendel and Naor, 2007], and have found several important algorithmic applications, e.g. routing and distance oracles. The union of trees in a tree cover also serves as a special type of spanner, one that can be decomposed into a few trees, with each low distortion path contained in a single tree; such spanners for Euclidean pointsets were presented by [S. Arya et al., 1995].
In this paper we devise efficient algorithms to construct tree covers and Ramsey tree covers for general, planar and doubling metrics. We pay particular attention to the desirable case of distortion close to 1, and study what can be achieved when the number of trees is small. In particular, our work shows a large separation between what can be achieved by tree covers vs. Ramsey tree covers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.20/LIPIcs.ICALP.2019.20.pdf
tree cover
Ramsey tree cover
probabilistic hierarchical family
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
21:1
21:15
10.4230/LIPIcs.ICALP.2019.21
article
Even Faster Elastic-Degenerate String Matching via Fast Matrix Multiplication
Bernardini, Giulia
1
Gawrychowski, Paweł
2
Pisanti, Nadia
3
4
Pissis, Solon P.
5
Rosone, Giovanna
3
Department of Informatics, Systems and Communication, University of Milano - Bicocca, Italy
Institute of Computer Science, University of Wrocław, Poland
Department of Computer Science, University of Pisa, Italy
ERABLE Team, INRIA, France
CWI, Amsterdam, The Netherlands
An elastic-degenerate (ED) string is a sequence of n sets of strings of total length N, which was recently proposed to model a set of similar sequences. The ED string matching (EDSM) problem is to find all occurrences of a pattern of length m in an ED text. The EDSM problem has recently received some attention in the combinatorial pattern matching community, and an O(nm^{1.5}sqrt{log m} + N)-time algorithm is known [Aoyama et al., CPM 2018]. The standard assumption in the prior work on this question is that N is substantially larger than both n and m, and thus we would like to have a linear dependency on N. Under this assumption, the natural open problem is whether we can decrease the 1.5 exponent in the time complexity, similarly as in the related (but, to the best of our knowledge, not equivalent) word break problem [Backurs and Indyk, FOCS 2016].
Our starting point is a conditional lower bound for the EDSM problem. We use the popular combinatorial Boolean matrix multiplication (BMM) conjecture stating that there is no truly subcubic combinatorial algorithm for BMM [Abboud and Williams, FOCS 2014]. By designing an appropriate reduction we show that a combinatorial algorithm solving the EDSM problem in O(nm^{1.5-epsilon} + N) time, for any epsilon>0, refutes this conjecture. Of course, the notion of combinatorial algorithms is not clearly defined, so our reduction should be understood as an indication that decreasing the exponent requires fast matrix multiplication.
Two standard tools used in algorithms on strings are string periodicity and fast Fourier transform. Our main technical contribution is that we successfully combine these tools with fast matrix multiplication to design a non-combinatorial O(nm^{1.381} + N)-time algorithm for EDSM. To the best of our knowledge, we are the first to do so.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.21/LIPIcs.ICALP.2019.21.pdf
string algorithms
pattern matching
elastic-degenerate string
matrix multiplication
fast Fourier transform
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
22:1
22:13
10.4230/LIPIcs.ICALP.2019.22
article
The Complexity of Approximating the Matching Polynomial in the Complex Plane
Bezáková, Ivona
1
Galanis, Andreas
2
Goldberg, Leslie Ann
2
Štefankovič, Daniel
3
Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA
Department of Computer Science, University of Oxford, UK
Department of Computer Science, University of Rochester, Rochester, NY, USA
We study the problem of approximating the value of the matching polynomial on graphs with edge parameter gamma, where gamma takes arbitrary values in the complex plane.
When gamma is a positive real, Jerrum and Sinclair showed that the problem admits an FPRAS on general graphs. For general complex values of gamma, Patel and Regts, building on methods developed by Barvinok, showed that the problem admits an FPTAS on graphs of maximum degree Delta as long as gamma is not a negative real number less than or equal to -1/(4(Delta-1)). Our first main result completes the picture for the approximability of the matching polynomial on bounded degree graphs. We show that for all Delta >= 3 and all real gamma less than -1/(4(Delta-1)), the problem of approximating the value of the matching polynomial on graphs of maximum degree Delta with edge parameter gamma is #P-hard.
We then explore whether the maximum degree parameter can be replaced by the connective constant. Sinclair et al. showed that for positive real gamma it is possible to approximate the value of the matching polynomial using a correlation decay algorithm on graphs with bounded connective constant (and potentially unbounded maximum degree). We first show that this result does not extend in general in the complex plane; in particular, the problem is #P-hard on graphs with bounded connective constant for a dense set of gamma values on the negative real axis. Nevertheless, we show that the result does extend for any complex value gamma that does not lie on the negative real axis. Our analysis accounts for complex values of gamma using geodesic distances in the complex plane in the metric defined by an appropriate density function.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.22/LIPIcs.ICALP.2019.22.pdf
matchings
partition function
correlation decay
connective constant
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
23:1
23:14
10.4230/LIPIcs.ICALP.2019.23
article
Finding Tutte Paths in Linear Time
Biedl, Therese
1
https://orcid.org/0000-0002-9003-3783
Kindermann, Philipp
2
https://orcid.org/0000-0001-5764-7719
David R. Cheriton School of Computer Science, University of Waterloo, Canada
Lehrstuhl für Informatik I, Universität Würzburg, Germany
It is well-known that every planar graph has a Tutte path, i.e., a path P such that any component of G-P has at most three attachment points on P. However, it was only recently shown that such Tutte paths can be found in polynomial time. In this paper, we give a new proof that 3-connected planar graphs have Tutte paths, which leads to a linear-time algorithm to find Tutte paths. Furthermore, our Tutte path has special properties: it visits all exterior vertices, all components of G-P have exactly three attachment points, and we can assign distinct representatives to them that are interior vertices. Finally, our running time bound is slightly stronger; we can bound it in terms of the degrees of the faces that are incident to P. This allows us to find some applications of Tutte paths (such as binary spanning trees and 2-walks) in linear time as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.23/LIPIcs.ICALP.2019.23.pdf
planar graph
Tutte path
Hamiltonian path
2-walk
linear time
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
24:1
24:15
10.4230/LIPIcs.ICALP.2019.24
article
Approximate Counting of k-Paths: Deterministic and in Polynomial Space
Björklund, Andreas
1
Lokshtanov, Daniel
2
Saurabh, Saket
3
Zehavi, Meirav
4
Lund University, Lund, Sweden
University of California, Santa Barbara, USA
The Institute of Mathematical Sciences, HBNI, Chennai, India
Ben-Gurion University, Beersheba, Israel
A few years ago, Alon et al. [ISMB 2008] gave a simple randomized O((2e)^km epsilon^{-2})-time exponential-space algorithm to approximately compute the number of paths on k vertices in a graph G up to a multiplicative error of 1 +/- epsilon. Shortly afterwards, Alon and Gutner [IWPEC 2009, TALG 2010] gave a deterministic exponential-space algorithm with running time (2e)^{k+O(log^3k)}m log n whenever epsilon^{-1}=k^{O(1)}. Recently, Brand et al. [STOC 2018] provided a speed-up at the cost of reintroducing randomization. Specifically, they gave a randomized O(4^km epsilon^{-2})-time exponential-space algorithm. In this article, we revisit the algorithm by Alon and Gutner. We modify the foundation of their work, and with a novel twist, obtain the following results.
- We present a deterministic 4^{k+O(sqrt{k}(log^2k+log^2 epsilon^{-1}))}m log n-time polynomial-space algorithm. This matches the running time of the best known deterministic polynomial-space algorithm for deciding whether a given graph G has a path on k vertices.
- Additionally, we present a randomized 4^{k+O(log k(log k + log epsilon^{-1}))}m log n-time polynomial-space algorithm. While Brand et al. make non-trivial use of exterior algebra, our algorithm is very simple; we only make elementary use of the probabilistic method.
Thus, the algorithm by Brand et al. runs in time 4^{k+o(k)}m whenever epsilon^{-1}=2^{o(k)}, while our deterministic and randomized algorithms run in time 4^{k+o(k)}m log n whenever epsilon^{-1}=2^{o(k^{1/4})} and epsilon^{-1}=2^{o(k/(log k))}, respectively. Prior to our work, no 2^{O(k)}n^{O(1)}-time polynomial-space algorithm was known. Additionally, our approach is embeddable in the classic framework of divide-and-color, hence it immediately extends to approximate counting of graphs of bounded treewidth; in comparison, Brand et al. note that their approach is limited to graphs of bounded pathwidth.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.24/LIPIcs.ICALP.2019.24.pdf
parameterized complexity
approximate counting
k-Path
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
25:1
25:14
10.4230/LIPIcs.ICALP.2019.25
article
Computing Permanents and Counting Hamiltonian Cycles by Listing Dissimilar Vectors
Björklund, Andreas
1
Williams, Ryan
2
Department of Computer Science, Lund University, Sweden
Department of Electrical Engineering and Computer Science & CSAIL, MIT, Cambridge, MA, USA
We show that the permanent of an n x n matrix over any finite ring of r <= n elements can be computed with a deterministic 2^{n-Omega(n/r)} time algorithm. This improves on a Las Vegas algorithm running in expected 2^{n-Omega(n/(r log r))} time, implicit in [Björklund, Husfeldt, and Lyckberg, IPL 2017]. For the permanent over the integers of a 0/1-matrix with exactly d ones per row and column, we provide a deterministic 2^{n-Omega(n/d^{3/4})} time algorithm. This improves on a 2^{n-Omega(n/d)} time algorithm in [Cygan and Pilipczuk, ICALP 2013]. We also show that the number of Hamiltonian cycles in an n-vertex directed graph of average degree delta can be computed by a deterministic 2^{n-Omega(n/delta)} time algorithm. This improves on a Las Vegas algorithm running in expected 2^{n-Omega(n/poly(delta))} time in [Björklund, Kaski, and Koutis, ICALP 2017].
A key tool in our approach is a reduction from computing the permanent to listing pairs of dissimilar vectors from two sets of vectors, i.e., vectors over a finite set that differ in each coordinate, building on an observation of [Bax and Franklin, Algorithmica 2002]. We propose algorithms that can be used both to derandomise the construction of Bax and Franklin, and efficiently list dissimilar pairs using several algorithmic tools. We also give a simple randomised algorithm resulting in Monte Carlo algorithms within the same time bounds.
Our new fast algorithms for listing dissimilar vector pairs from two sets of vectors are inspired by recent algorithms for detecting and counting orthogonal vectors by [Abboud, Williams, and Yu, SODA 2015] and [Chan and Williams, SODA 2016].
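To make the listing subroutine concrete, here is a hypothetical brute-force sketch of listing dissimilar pairs, i.e., pairs of vectors that differ in every coordinate; this quadratic-time baseline is only for illustration, as the paper's algorithms solve the task faster:

```python
from itertools import product

def dissimilar_pairs(A, B):
    """Naively list all pairs (a, b) with a in A, b in B that differ
    in every coordinate. Quadratic in |A|*|B|; the fast listing
    algorithms in the paper replace this brute force."""
    return [(a, b) for a, b in product(A, B)
            if all(x != y for x, y in zip(a, b))]

A = [(0, 1), (1, 1)]
B = [(1, 0), (0, 0)]
print(dissimilar_pairs(A, B))  # [((0, 1), (1, 0)), ((1, 1), (0, 0))]
```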
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.25/LIPIcs.ICALP.2019.25.pdf
permanent
Hamiltonian cycle
orthogonal vectors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
26:1
26:13
10.4230/LIPIcs.ICALP.2019.26
article
Solving Systems of Polynomial Equations over GF(2) by a Parity-Counting Self-Reduction
Björklund, Andreas
1
Kaski, Petteri
2
Williams, Ryan
3
Department of Computer Science, Lund University, Sweden
Department of Computer Science, Aalto University, Finland
Department of Electrical Engineering and Computer Science & CSAIL, MIT, Cambridge, MA, USA
We consider the problem of finding solutions to systems of polynomial equations over a finite field. Lokshtanov et al. [SODA'17] recently obtained the first worst-case algorithms that beat exhaustive search for this problem. In particular for degree-d equations modulo two in n variables, they gave an O^*(2^{(1-1/(5d))n}) time algorithm, and for the special case d=2 they gave an O^*(2^{0.876n}) time algorithm.
We modify their approach in a way that improves these running times to O^*(2^{(1-1/(2.7d))n}) and O^*(2^{0.804n}), respectively. In particular, our latter bound - which holds for all systems of quadratic equations modulo 2 - comes close to the O^*(2^{0.792n}) expected time bound of an algorithm empirically found to hold for random equation systems in Bardet et al. [J. Complexity, 2013]. Our improvement involves three observations:
1) The Valiant-Vazirani lemma can be used to reduce the solution-finding problem to that of counting solutions modulo 2.
2) The monomials in the probabilistic polynomials used in this solution-counting modulo 2 have a special form that we exploit to obtain better bounds on their number than in Lokshtanov et al. [SODA'17].
3) The problem of solution-counting modulo 2 can be "embedded" in a smaller instance of the original problem, which enables us to apply the algorithm as a subroutine to itself.
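To illustrate the quantity at the heart of observation 1, here is a hypothetical exhaustive-search sketch of solution counting modulo 2 over GF(2); this O^*(2^n) brute force is the baseline the paper's algorithms beat, and the polynomial encoding is only an illustrative choice:

```python
from itertools import product

def count_solutions_mod2(polys, n):
    """Count modulo 2 the assignments in GF(2)^n satisfying p = 0 for
    every polynomial p. Each polynomial is a list of monomials; a
    monomial is a tuple of variable indices, with () the constant 1.
    Exhaustive O*(2^n) baseline, beaten by the paper's algorithms."""
    parity = 0
    for x in product((0, 1), repeat=n):
        # A polynomial vanishes iff its monomials sum to 0 mod 2.
        if all(sum(all(x[i] for i in mono) for mono in p) % 2 == 0
               for p in polys):
            parity ^= 1
    return parity

# x0*x1 + x0 + 1 = 0 has the single solution x0=1, x1=0, so parity 1.
print(count_solutions_mod2([[(0, 1), (0,), ()]], 2))  # -> 1
```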
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.26/LIPIcs.ICALP.2019.26.pdf
equation systems
polynomial method
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
27:1
27:14
10.4230/LIPIcs.ICALP.2019.27
article
Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning
Brandão, Fernando G. S. L.
1
Kalev, Amir
2
Li, Tongyang
2
Lin, Cedric Yen-Yu
2
Svore, Krysta M.
3
Wu, Xiaodi
2
Institute of Quantum Information and Matter, California Institute of Technology, USA
Joint Center for Quantum Information and Computer Science, University of Maryland, USA
Station Q, Quantum Architectures and Computation Group, Microsoft Research, USA
We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence on m and n, and a quadratic improvement over previous quantum algorithms (when m ~~ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m}+poly(r))*poly(log m, log n, B, epsilon^{-1}), with B an upper bound on the trace-norm of all input matrices. In particular, the complexity depends only polylogarithmically on n and polynomially on r.
We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: Given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find in time sqrt{m}*poly(log m,log n,r,epsilon^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle from statistical mechanics.
As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on its dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.27/LIPIcs.ICALP.2019.27.pdf
quantum algorithms
semidefinite program
convex optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
28:1
28:13
10.4230/LIPIcs.ICALP.2019.28
article
A Simple Protocol for Verifiable Delegation of Quantum Computation in One Round
Grilo, Alex B.
1
2
CWI, Amsterdam, The Netherlands
QuSoft, Amsterdam, The Netherlands
The importance of being able to verify quantum computation delegated to remote servers increases with the recent development of quantum technologies. In some of the proposed protocols for this task, a client delegates her quantum computation to non-communicating servers in multiple rounds of communication. In this work, we propose the first protocol where the client delegates her quantum computation to two servers in a single round of communication. Another advantage of our protocol is that it is conceptually simpler than previous protocols. The parameters of our protocol also make it possible to prove security even if the servers are allowed to communicate, under the plausible assumption that information cannot be propagated faster than the speed of light, making it the first relativistic protocol for quantum computation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.28/LIPIcs.ICALP.2019.28.pdf
quantum computation
quantum cryptography
delegation of quantum computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
29:1
29:15
10.4230/LIPIcs.ICALP.2019.29
article
Dismantlability, Connectedness, and Mixing in Relational Structures
Briceño, Raimundo
1
Bulatov, Andrei A.
2
Dalmau, Víctor
3
Larose, Benoît
4
School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel
School of Computing Science, Simon Fraser University, Canada
Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
LACIM, Université du Québec a Montréal, Montréal, Canada
The Constraint Satisfaction Problem (CSP) and its counting counterpart appear under different guises in many areas of mathematics, computer science, statistical physics, and elsewhere. Their structural and algorithmic properties have been shown to play a crucial role in many of those applications. For instance, topological properties of the solution set, such as connectedness, are related to the hardness of CSPs over random structures. In approximate counting and statistical physics, where CSPs emerge in the form of spin systems, mixing properties and the uniqueness of Gibbs measures have been heavily exploited for approximating partition functions or the free energy of spin systems. Additionally, for decision CSPs, structural properties of the relational structures involved - like, for example, dismantlability - and their logical characterizations have been instrumental for determining the complexity and other properties of the problem.
In spite of the great diversity of those features, there are some eerie similarities between them. These were observed and made more precise in the case of graph homomorphisms by Brightwell and Winkler, who showed that the structural property of dismantlability of the target graph, the connectedness of the set of homomorphisms, good mixing properties of the corresponding spin system, and the uniqueness of the Gibbs measure are all equivalent. In this paper we go a step further and demonstrate similar connections for arbitrary CSPs. This requires a much deeper understanding of dismantling and the structure of the solution space in the case of relational structures, and new refined concepts of mixing introduced by Briceño. In addition, we develop properties related to the study of valid extensions of a given partially defined homomorphism, an approach that turns out to be novel even in the graph case. We also add to the mix the combinatorial property of finite duality and its logic counterpart, FO-definability, studied by Larose, Loten, and Tardif.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.29/LIPIcs.ICALP.2019.29.pdf
relational structure
constraint satisfaction problem
homomorphism
mixing properties
Gibbs measure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
30:1
30:14
10.4230/LIPIcs.ICALP.2019.30
article
Sign-Rank Can Increase Under Intersection
Bun, Mark
1
2
Mande, Nikhil S.
3
Thaler, Justin
3
Simons Institute for the Theory of Computing, Berkeley, CA, USA
Boston University, MA, USA
Georgetown University, Washington, DC, USA
The communication class UPP^{cc} is a communication analog of the Turing Machine complexity class PP. It is characterized by a matrix-analytic complexity measure called sign-rank (also called dimension complexity), and is essentially the most powerful communication class against which we know how to prove lower bounds.
For a communication problem f, let f wedge f denote the function that evaluates f on two disjoint inputs and outputs the AND of the results. We exhibit a communication problem f with UPP^{cc}(f)= O(log n), and UPP^{cc}(f wedge f) = Theta(log^2 n). This is the first result showing that UPP communication complexity can increase by more than a constant factor under intersection. We view this as a first step toward showing that UPP^{cc}, the class of problems with polylogarithmic-cost UPP communication protocols, is not closed under intersection.
Our result shows that the function class consisting of intersections of two majorities on n bits has dimension complexity n^{Omega(log n)}. This matches an upper bound of (Klivans, O'Donnell, and Servedio, FOCS 2002), who used it to give a quasipolynomial time algorithm for PAC learning intersections of polylogarithmically many majorities. Hence, fundamentally new techniques will be needed to learn this class of functions in polynomial time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.30/LIPIcs.ICALP.2019.30.pdf
Sign rank
dimension complexity
communication complexity
learning theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
31:1
31:14
10.4230/LIPIcs.ICALP.2019.31
article
Covert Computation in Self-Assembled Circuits
Cantu, Angel A.
1
Luchsinger, Austin
1
Schweller, Robert
1
Wylie, Tim
1
Department of Computer Science, University of Texas - Rio Grande Valley, USA
Traditionally, computation within self-assembly models is hard to conceal because the self-assembly process generates a crystalline assembly whose computational history is inherently part of the structure itself. With no way to remove information from the computation, this computational model offers a unique problem: how can computational input and computation be hidden while still computing and reporting the final output? Designing such systems is inherently motivated by privacy concerns in biomedical computing and applications in cryptography.
In this paper we introduce the problem of performing "covert computation" within tile self-assembly: designing self-assembly systems that "conceal" both the input and the computational history of the computations they perform. We achieve these results within the growth-only restricted abstract tile assembly model (aTAM) with positive and negative interactions. We show that general-case covert computation is possible by implementing a set of basic covert logic gates capable of simulating any circuit (functionally complete). To further motivate the study of covert computation, we apply our new framework to resolve an outstanding complexity question; we use our covert circuitry to show that the unique assembly verification problem within the growth-only aTAM with negative interactions is coNP-complete.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.31/LIPIcs.ICALP.2019.31.pdf
self-assembly
covert circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
32:1
32:14
10.4230/LIPIcs.ICALP.2019.32
article
Randomness and Intractability in Kolmogorov Complexity
Oliveira, Igor Carboni
1
Department of Computer Science, University of Oxford, UK
We introduce randomized time-bounded Kolmogorov complexity (rKt), a natural extension of Levin’s notion [Leonid A. Levin, 1984] of Kolmogorov complexity. A string w of low rKt complexity can be decompressed from a short representation via a time-bounded algorithm that outputs w with high probability.
This complexity measure gives rise to a decision problem over strings: MrKtP (The Minimum rKt Problem). We explore ideas from pseudorandomness to prove that MrKtP and its variants cannot be solved in randomized quasi-polynomial time. This exhibits a natural string compression problem that is provably intractable, even for randomized computations. Our techniques also imply that there is no n^{1 - epsilon}-approximate algorithm for MrKtP running in randomized quasi-polynomial time.
Complementing this lower bound, we observe connections between rKt, the power of randomness in computing, and circuit complexity. In particular, we present the first hardness magnification theorem for a natural problem that is unconditionally hard against a strong model of computation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.32/LIPIcs.ICALP.2019.32.pdf
computational complexity
randomness
circuit lower bounds
Kolmogorov complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
33:1
33:14
10.4230/LIPIcs.ICALP.2019.33
article
The Power of Block-Encoded Matrix Powers: Improved Regression Techniques via Faster Hamiltonian Simulation
Chakraborty, Shantanav
1
Gilyén, András
2
Jeffery, Stacey
2
QuIC, Université libre de Bruxelles, Belgium
QuSoft/CWI, The Netherlands
We apply the framework of block-encodings, introduced by Low and Chuang (under the name standard-form), to the study of quantum machine learning algorithms and derive general results that are applicable to a variety of input models, including sparse matrix oracles and matrices stored in a data structure. We develop several tools within the block-encoding framework, such as singular value estimation of a block-encoded matrix, and quantum linear system solvers using block-encodings. The presented results give new techniques for Hamiltonian simulation of non-sparse matrices, which could be relevant for certain quantum chemistry applications, and which in turn imply an exponential improvement in the dependence on precision in quantum linear systems solvers for non-sparse matrices.
In addition, we develop a technique of variable-time amplitude estimation, based on Ambainis' variable-time amplitude amplification technique, which we are also able to apply within the framework.
As applications, we design the following algorithms: (1) a quantum algorithm for the quantum weighted least squares problem, exhibiting a 6-th power improvement in the dependence on the condition number and an exponential improvement in the dependence on the precision over the previous best algorithm of Kerenidis and Prakash; (2) the first quantum algorithm for the quantum generalized least squares problem; and (3) quantum algorithms for estimating electrical-network quantities, including effective resistance and dissipated power, improving upon previous work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.33/LIPIcs.ICALP.2019.33.pdf
Quantum algorithms
Hamiltonian simulation
Quantum machine learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
34:1
34:15
10.4230/LIPIcs.ICALP.2019.34
article
Unlabeled Sample Compression Schemes and Corner Peelings for Ample and Maximum Classes
Chalopin, Jérémie
1
https://orcid.org/0000-0002-2988-8969
Chepoi, Victor
2
https://orcid.org/0000-0002-0481-7312
Moran, Shay
3
https://orcid.org/0000-0002-8662-2737
Warmuth, Manfred K.
4
CNRS, Aix-Marseille Université, Université de Toulon, LIS, Marseille, France
Aix-Marseille Université, CNRS, Université de Toulon, LIS, Marseille, France
Department of Computer Science, Princeton University, Princeton, USA
Computer Science Department, University of California, Santa Cruz, USA
We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections enable us to export results from geometry to machine learning. Our first main result is based on a geometric construction by H. Tracy Hall (2004) of a partial shelling of the cross-polytope which cannot be extended. We use it to derive a maximum class of VC dimension 3 that has no corners. This refutes several previous works in machine learning from the past 11 years. In particular, it implies that the previous constructions of optimal unlabeled compression schemes for maximum classes are erroneous.
On the positive side we present a new construction of an optimal unlabeled compression scheme for maximum classes. We leave open whether our unlabeled compression scheme extends to ample (a.k.a. lopsided or extremal) classes, which represent a natural and far-reaching generalization of maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the 1-skeletons of associated cubical complexes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.34/LIPIcs.ICALP.2019.34.pdf
VC-dimension
sample compression
Sauer-Shelah-Perles lemma
Sandwich lemma
maximum class
ample/extremal class
corner peeling
unique sink orientation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
35:1
35:15
10.4230/LIPIcs.ICALP.2019.35
article
Query-To-Communication Lifting for BPP Using Inner Product
Chattopadhyay, Arkadev
1
Filmus, Yuval
2
https://orcid.org/0000-0002-1739-0872
Koroth, Sajin
3
https://orcid.org/0000-0002-7989-1963
Meir, Or
3
https://orcid.org/0000-0001-5031-0750
Pitassi, Toniann
4
https://orcid.org/0000-0003-0832-2760
School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India
Department of Computer Science, Technion Israel Institute of Technology, Haifa, Israel
Department of Computer Science, University of Haifa, Haifa, Israel
Department of Computer Science, University of Toronto, Canada
We prove a new query-to-communication lifting for randomized protocols, with inner product as gadget. This allows us to use a much smaller gadget, leading to a more efficient lifting. Prior to this work, such a theorem was known only for deterministic protocols, due to Chattopadhyay et al. [Arkadev Chattopadhyay et al., 2017] and Wu et al. [Xiaodi Wu et al., 2017]. The only query-to-communication lifting result for randomized protocols, due to Göös, Pitassi and Watson [Mika Göös et al., 2017], used the much larger indexing gadget.
Our proof also provides a unified treatment of randomized and deterministic lifting. Most existing proofs of deterministic lifting theorems use a measure of information known as thickness. In contrast, Göös, Pitassi and Watson [Mika Göös et al., 2017] used blockwise min-entropy as a measure of information. Our proof uses the blockwise min-entropy framework to prove lifting theorems in both settings in a unified way.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.35/LIPIcs.ICALP.2019.35.pdf
lifting theorems
inner product
BPP Lifting
Deterministic Lifting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
36:1
36:13
10.4230/LIPIcs.ICALP.2019.36
article
Estimating the Frequency of a Clustered Signal
Chen, Xue
1
Price, Eric
2
Northwestern University, Evanston, IL, USA
The University of Texas at Austin, USA
We consider the problem of locating a signal whose frequencies are "off grid" and clustered in a narrow band. Given noisy sample access to a function g(t) with Fourier spectrum in a narrow range [f_0 - Delta, f_0 + Delta], how accurately is it possible to identify f_0? We present generic conditions on g that allow for efficient, accurate estimates of the frequency. We then show bounds on these conditions for k-Fourier-sparse signals that imply recovery of f_0 to within Delta + O~(k^3) from samples on [-1, 1]. This improves upon the best previous bound of O(Delta + O~(k^5))^{1.5}. We also show that no algorithm can do better than Delta + O~(k^2).
In the process we provide a new O~(k^3) bound on the ratio between the maximum and average value of continuous k-Fourier-sparse signals, which has independent application.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.36/LIPIcs.ICALP.2019.36.pdf
sublinear algorithms
Fourier transform
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
37:1
37:15
10.4230/LIPIcs.ICALP.2019.37
article
Block Edit Errors with Transpositions: Deterministic Document Exchange Protocols and Almost Optimal Binary Codes
Cheng, Kuan
1
Jin, Zhengzhong
1
Li, Xin
1
Wu, Ke
1
Department of Computer Science, Johns Hopkins University, USA
Document exchange and error correcting codes are two fundamental problems regarding communications. In the first problem, Alice and Bob each hold a string, and the goal is for Alice to send a short sketch to Bob, so that Bob can recover Alice’s string. In the second problem, Alice sends a message with some redundant information to Bob through a channel that can add adversarial errors, and the goal is for Bob to correctly recover the message despite the errors. In both problems, an upper bound is placed on the number of errors, either between the two strings or added by the channel, and a major goal is to minimize the size of the sketch or the redundant information. In this paper we focus on deterministic document exchange protocols and binary error correcting codes.
Both problems have been studied extensively. In the case of Hamming errors (i.e., bit substitutions) and bit erasures, we have explicit constructions with asymptotically optimal parameters. However, other error types are still rather poorly understood. In a recent work [Kuan Cheng et al., 2018], the authors constructed explicit deterministic document exchange protocols and binary error correcting codes for edit errors with almost optimal parameters. Unfortunately, the constructions in [Kuan Cheng et al., 2018] do not work for other common errors such as block transpositions.
In this paper, we generalize the constructions in [Kuan Cheng et al., 2018] to handle a much larger class of errors. These include bursts of insertions and deletions, as well as block transpositions. Specifically, we consider document exchange and error correcting codes where the total number of block insertions, block deletions, and block transpositions is at most k <= alpha n/log n for some constant 0<alpha<1. In addition, the total number of bits inserted and deleted by the first two kinds of operations is at most t <= beta n for some constant 0<beta<1, where n is the length of Alice’s string or message. We construct explicit, deterministic document exchange protocols with sketch size O((k log n + t) log^2(n/(k log n + t))) and explicit binary error correcting codes with O(k log n log log log n + t) redundant bits. As a comparison, the information-theoretic optimum for both problems is Theta(k log n + t). As far as we know, previously there are no known explicit deterministic document exchange protocols in this case, and the best known binary code needs Omega(n) redundant bits even to correct just one block transposition [L. J. Schulman and D. Zuckerman, 1999].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.37/LIPIcs.ICALP.2019.37.pdf
Deterministic document exchange
error correcting code
block edit error
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
38:1
38:13
10.4230/LIPIcs.ICALP.2019.38
article
Restricted Max-Min Allocation: Approximation and Integrality Gap
Cheng, Siu-Wing
1
https://orcid.org/0000-0002-3557-9935
Mao, Yuchen
1
https://orcid.org/0000-0002-1075-344X
Department of Computer Science and Engineering, HKUST, Hong Kong
Asadpour, Feige, and Saberi proved that the integrality gap of the configuration LP for the restricted max-min allocation problem is at most 4. However, their proof does not give a polynomial-time approximation algorithm. A lot of effort has been devoted to designing an efficient algorithm whose approximation ratio can match this upper bound for the integrality gap. In ICALP 2018, we presented a (6 + delta)-approximation algorithm where delta can be any positive constant, leaving a gap of roughly 2. In this paper, we narrow the gap significantly by proposing a (4+delta)-approximation algorithm where delta can be any positive constant. The approximation ratio is with respect to the optimal value of the configuration LP, and the running time is poly(m,n) * n^{poly(1/delta)} where n is the number of players and m is the number of resources. We also improve the upper bound for the integrality gap of the configuration LP to 3 + 21/26 ≈ 3.808.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.38/LIPIcs.ICALP.2019.38.pdf
fair allocation
configuration LP
approximation
integrality gap
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
39:1
39:14
10.4230/LIPIcs.ICALP.2019.39
article
Circuit Lower Bounds for MCSP from Local Pseudorandom Generators
Cheraghchi, Mahdi
1
https://orcid.org/0000-0001-8957-0306
Kabanets, Valentine
2
Lu, Zhenjian
2
Myrisiotis, Dimitrios
1
Department of Computing, Imperial College London, London, UK
School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
The Minimum Circuit Size Problem (MCSP) asks if a given truth table of a Boolean function f can be computed by a Boolean circuit of size at most theta, for a given parameter theta. We improve several circuit lower bounds for MCSP, using pseudorandom generators (PRGs) that are local; a PRG is called local if its output bit strings, when viewed as the truth table of a Boolean function, can be computed by a Boolean circuit of small size. We get new and improved lower bounds for MCSP that almost match the best-known lower bounds against several circuit models. Specifically, we show that computing MCSP, on functions with a truth table of length N, requires
- N^{3-o(1)}-size de Morgan formulas, improving the recent N^{2-o(1)} lower bound by Hirahara and Santhanam (CCC, 2017),
- N^{2-o(1)}-size formulas over an arbitrary basis or general branching programs (no non-trivial lower bound was known for MCSP against these models), and
- 2^{Omega (N^{1/(d+2.01)})}-size depth-d AC^0 circuits, improving the superpolynomial lower bound by Allender et al. (SICOMP, 2006).
The AC^0 lower bound stated above matches the best-known AC^0 lower bound (for PARITY) up to a small additive constant in the depth. Also, for the special case of depth-2 circuits (i.e., CNFs or DNFs), we get an almost optimal lower bound of 2^{N^{1-o(1)}} for MCSP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.39/LIPIcs.ICALP.2019.39.pdf
minimum circuit size problem (MCSP)
circuit lower bounds
pseudorandom generators (PRGs)
local PRGs
de Morgan formulas
branching programs
constant depth circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
40:1
40:15
10.4230/LIPIcs.ICALP.2019.40
article
The Norms of Graph Spanners
Chlamtáč, Eden
1
Dinitz, Michael
2
Robinson, Thomas
1
Ben Gurion University of the Negev, Beersheva, Israel
Johns Hopkins University, Baltimore, MD, USA
A t-spanner of a graph G is a subgraph H in which all distances are preserved up to a multiplicative t factor. A classical result of Althöfer et al. is that for every integer k and every graph G, there is a (2k-1)-spanner of G with at most O(n^{1+1/k}) edges. But for some settings the more interesting notion is not the number of edges, but the degrees of the nodes. This spurred interest in and study of spanners with small maximum degree. However, this is not necessarily a robust enough objective: we would like spanners that not only have small maximum degree, but also have "few" nodes of "large" degree. To interpolate between these two extremes, in this paper we initiate the study of graph spanners with respect to the l_p-norm of their degree vector, thus simultaneously modeling the number of edges (the l_1-norm) and the maximum degree (the l_{infty}-norm). We give precise upper bounds for all ranges of p and stretch t: we prove that the greedy (2k-1)-spanner has l_p-norm of at most max(O(n), O(n^{(k+p)/(kp)})), and that this bound is tight (assuming the Erdős girth conjecture). We also study universal lower bounds, allowing us to give "generic" guarantees on the approximation ratio of the greedy algorithm which generalize and interpolate between the known approximations for the l_1 and l_{infty} norm. Finally, we show that at least in some situations, the l_p norm behaves fundamentally differently from l_1 or l_{infty}: there are regimes (p=2 and stretch 3 in particular) where the greedy spanner has a provably superior approximation to the generic guarantee.
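The greedy (2k-1)-spanner analyzed in this abstract is the classical construction of Althöfer et al.; the following editorial sketch (function names and graph representation are illustrative, not taken from the paper) shows the core rule: scan edges by increasing weight and keep an edge only when the spanner's current distance between its endpoints already exceeds (2k-1) times its weight.

```python
# Greedy (2k-1)-spanner (Althöfer et al.): process edges in increasing
# weight order; keep an edge (u, v, w) only if the spanner built so far
# has no u-v path of length at most (2k-1) * w.
import heapq

def greedy_spanner(n, edges, k):
    """edges: list of (w, u, v) tuples; returns the kept edges."""
    t = 2 * k - 1
    adj = [[] for _ in range(n)]  # spanner adjacency: (neighbor, weight)
    spanner = []
    for w, u, v in sorted(edges):
        if dijkstra(adj, u, v, t * w) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((w, u, v))
    return spanner

def dijkstra(adj, s, goal, cutoff):
    """Shortest s-goal distance in the partial spanner, pruned at cutoff."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")) or d > cutoff:
            continue  # stale queue entry, or beyond the pruning cutoff
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")) and nd <= cutoff:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

By construction every discarded edge is already spanned within stretch 2k-1, which is exactly the invariant the paper's l_p-norm bounds are proved against.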
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.40/LIPIcs.ICALP.2019.40.pdf
spanners
approximations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
41:1
41:14
10.4230/LIPIcs.ICALP.2019.41
article
On the Fixed-Parameter Tractability of Capacitated Clustering
Cohen-Addad, Vincent
1
Li, Jason
2
CNRS & Sorbonne Université, Paris, France
Carnegie Mellon University, Pittsburgh, PA, USA
We study the complexity of the classic capacitated k-median and k-means problems parameterized by the number of centers, k. These problems are notoriously difficult since the best known approximation bound for high dimensional Euclidean space and general metric spaces is Theta(log k), and it remains a major open problem whether a constant-factor approximation exists.
We show that there exists a (3+epsilon)-approximation algorithm for the capacitated k-median and a (9+epsilon)-approximation algorithm for the capacitated k-means problem in general metric spaces whose running times are f(epsilon,k) n^{O(1)}. For Euclidean inputs of arbitrary dimension, we give a (1+epsilon)-approximation algorithm for both problems with a similar running time. This is a significant improvement over the (7+epsilon)-approximation of Adamczyk et al. for k-median in general metric spaces and the (69+epsilon)-approximation of Xu et al. for Euclidean k-means.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.41/LIPIcs.ICALP.2019.41.pdf
approximation algorithms
fixed-parameter tractability
capacitated
k-median
k-means
clustering
core-sets
Euclidean
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
42:1
42:14
10.4230/LIPIcs.ICALP.2019.42
article
Tight FPT Approximations for k-Median and k-Means
Cohen-Addad, Vincent
1
Gupta, Anupam
2
Kumar, Amit
3
Lee, Euiwoong
4
Li, Jason
2
CNRS & Sorbonne Université, Paris, France
Carnegie Mellon University, Pittsburgh, PA, USA
IIT Delhi, India
New York University, NY, USA
We investigate the fine-grained complexity of approximating the classical k-Median/k-Means clustering problems in general metric spaces. We show how to improve the approximation factors to (1+2/e+epsilon) and (1+8/e+epsilon) respectively, using algorithms that run in fixed-parameter time. Moreover, we show that we cannot do better in FPT time, modulo recent complexity-theoretic conjectures.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.42/LIPIcs.ICALP.2019.42.pdf
approximation algorithms
fixed-parameter tractability
k-median
k-means
clustering
core-sets
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
43:1
43:14
10.4230/LIPIcs.ICALP.2019.43
article
Information-Theoretic and Algorithmic Thresholds for Group Testing
Coja-Oghlan, Amin
1
Gebhard, Oliver
1
Hahn-Klimroth, Max
1
Loick, Philipp
1
Goethe University, Frankfurt, Germany
In the group testing problem we aim to identify a small number of infected individuals within a large population. We avail ourselves of a procedure that can test a group of multiple individuals, with the test result coming out positive iff at least one individual in the group is infected. With all tests conducted in parallel, what is the least number of tests required to identify the status of all individuals? In a recent test design [Aldridge et al. 2016] the individuals are assigned to test groups randomly, with every individual joining an equal number of groups. We pinpoint the sharp threshold for the number of tests required in this randomised design so that it is information-theoretically possible to infer the infection status of every individual. Moreover, we analyse two efficient inference algorithms. These results settle conjectures from [Aldridge et al. 2014, Johnson et al. 2019].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.43/LIPIcs.ICALP.2019.43.pdf
Group testing problem
phase transitions
information theory
efficient algorithms
sharp threshold
Bayesian inference
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
44:1
44:15
10.4230/LIPIcs.ICALP.2019.44
article
On Reachability Problems for Low-Dimensional Matrix Semigroups
Colcombet, Thomas
1
https://orcid.org/0000-0001-6529-6963
Ouaknine, Joël
2
3
https://orcid.org/0000-0003-0031-9356
Semukhin, Pavel
3
https://orcid.org/0000-0002-7547-6391
Worrell, James
3
https://orcid.org/0000-0001-8151-2443
IRIF, CNRS, Université Paris Diderot, France
The Max Planck Institute for Software Systems, Saarbrücken, Germany
Department of Computer Science, University of Oxford, United Kingdom
We consider the Membership and the Half-Space Reachability problems for matrices in dimensions two and three. Our first main result is that the Membership Problem is decidable for finitely generated sub-semigroups of the Heisenberg group over rational numbers. Furthermore, we prove two decidability results for the Half-Space Reachability Problem. Namely, we show that this problem is decidable for sub-semigroups of GL(2,Z) and of the Heisenberg group over rational numbers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.44/LIPIcs.ICALP.2019.44.pdf
membership problem
half-space reachability problem
matrix semigroups
Heisenberg group
general linear group
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
45:1
45:14
10.4230/LIPIcs.ICALP.2019.45
article
Independent Sets in Vertex-Arrival Streams
Cormode, Graham
1
https://orcid.org/0000-0002-0698-0922
Dark, Jacques
1
Konrad, Christian
2
https://orcid.org/0000-0003-1802-4011
University of Warwick, UK
University of Bristol, UK
We consider the maximal and maximum independent set problems in three models of graph streams:
- In the edge model we see a stream of edges which collectively define a graph; this model is well-studied for a variety of problems. We show that the space complexity for a one-pass streaming algorithm to find a maximal independent set is quadratic (i.e. we must store all edges). We further show that it is not much easier if we only require approximate maximality. This contrasts strongly with the other two vertex-based models, where one can greedily find an exact solution in only the space needed to store the independent set.
- In the "explicit" vertex model, the input stream is a sequence of vertices making up the graph. Every vertex arrives along with its incident edges that connect to previously arrived vertices. Various graph problems require substantially less space to solve in this setting than in edge-arrival streams. We show that every one-pass c-approximation streaming algorithm for maximum independent set (MIS) on explicit vertex streams requires Omega(n^2/c^6) bits of space, where n is the number of vertices of the input graph. It is already known that Theta~(n^2/c^2) bits of space are necessary and sufficient in the edge arrival model (Halldórsson et al. 2012), thus the MIS problem is not significantly easier to solve under the explicit vertex arrival order assumption. Our result is proved via a reduction from a new multi-party communication problem closely related to pointer jumping.
- In the "implicit" vertex model, the input stream consists of a sequence of objects, one per vertex. The algorithm is equipped with a function that maps pairs of objects to the presence or absence of edges, thus defining the graph. This model captures, for example, geometric intersection graphs such as unit disc graphs. Our final set of results consists of several improved upper and lower bounds for interval and square intersection graphs, in both explicit and implicit streams. In particular, we show a gap between the hardness of the explicit and implicit vertex models for interval graphs.
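The greedy exact computation of a maximal independent set in vertex-arrival streams, mentioned in the first bullet as needing only the space of the independent set itself, can be sketched as follows (an editorial illustration; the stream representation is an assumption, not the paper's formalism).

```python
def streaming_maximal_is(vertex_stream):
    """Explicit vertex-arrival stream: each item is (v, earlier_neighbors),
    where earlier_neighbors lists v's neighbors among already-seen
    vertices. Greedily keep v iff none of those neighbors was kept.
    Only the independent set itself is ever stored."""
    independent = set()
    for v, earlier_neighbors in vertex_stream:
        if independent.isdisjoint(earlier_neighbors):
            independent.add(v)
    return independent
```

The resulting set is maximal by construction: any vertex left out was rejected because a neighbor was already in the set, which is exactly the contrast the abstract draws with the quadratic-space lower bound in the edge model.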
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.45/LIPIcs.ICALP.2019.45.pdf
streaming algorithms
independent set size
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
46:1
46:14
10.4230/LIPIcs.ICALP.2019.46
article
Approximation Algorithms for Min-Distance Problems
Dalirrooyfard, Mina
1
Williams, Virginia Vassilevska
1
Vyas, Nikhil
1
Wein, Nicole
1
Xu, Yinzhan
1
Yu, Yuancheng
1
MIT, Cambridge, MA, USA
We study fundamental graph parameters such as the Diameter and Radius in directed graphs, when distances are measured using a somewhat unorthodox but natural measure: the distance between u and v is the minimum of the shortest path distances from u to v and from v to u. The center node in a graph under this measure can for instance represent the optimal location for a hospital to ensure the fastest medical care for everyone, as one can either go to the hospital, or a doctor can be sent to help.
By computing All-Pairs Shortest Paths, all pairwise distances and thus the parameters we study can be computed exactly in O~(mn) time for directed graphs on n vertices, m edges and nonnegative edge weights. Furthermore, this time bound is tight under the Strong Exponential Time Hypothesis [Roditty-Vassilevska W. STOC 2013] so it is natural to study how well these parameters can be approximated in O(mn^{1-epsilon}) time for constant epsilon>0. Abboud, Vassilevska Williams, and Wang [SODA 2016] gave a polynomial factor approximation for Diameter and Radius, as well as a constant factor approximation for both problems in the special case where the graph is a DAG. We greatly improve upon these bounds by providing the first constant factor approximations for Diameter, Radius and the related Eccentricities problem in general graphs. Additionally, we provide a hierarchy of algorithms for Diameter that gives a time/accuracy trade-off.
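The min-distance measure is simple to state in code. As an editorial illustration (not the paper's algorithm, whose point is to beat this exhaustive computation approximately), here is the brute-force baseline: compute All-Pairs Shortest Paths with Floyd-Warshall, then take min-distance eccentricities.

```python
def min_distance_parameters(n, weighted_edges):
    """Exact O(n^3) baseline. The min-distance between u and v is
    min(d(u, v), d(v, u)); returns (diameter, radius) under this measure.
    weighted_edges: directed (u, v, w) triples with nonnegative w."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        d[u][v] = min(d[u][v], w)
    # Floyd-Warshall all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # eccentricity of u = max over v of min(d(u,v), d(v,u))
    ecc = [max(min(d[u][v], d[v][u]) for v in range(n)) for u in range(n)]
    return max(ecc), min(ecc)
```

The node attaining the radius here is the "center" of the hospital example: the party that minimizes the worst-case one-way travel in either direction.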
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.46/LIPIcs.ICALP.2019.46.pdf
fine-grained complexity
graph algorithms
diameter
radius
eccentricities
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
47:1
47:15
10.4230/LIPIcs.ICALP.2019.47
article
Tight Approximation Algorithms for Bichromatic Graph Diameter and Related Problems
Dalirrooyfard, Mina
1
Williams, Virginia Vassilevska
1
Vyas, Nikhil
1
Wein, Nicole
1
MIT, Cambridge, MA, USA
Some of the most fundamental and well-studied graph parameters are the Diameter (the largest shortest paths distance) and Radius (the smallest distance for which a "center" node can reach all other nodes). The natural and important ST-variant considers two subsets S and T of the vertex set and lets the ST-diameter be the maximum distance between a node in S and a node in T, and the ST-radius be the minimum distance for a node of S to reach all nodes of T. The bichromatic variant is the special case in which S and T partition the vertex set.
In this paper we present a comprehensive study of the approximability of ST and Bichromatic Diameter, Radius, and Eccentricities, and variants, in graphs with and without directions and weights. We give the first nontrivial approximation algorithms for most of these problems, including time/accuracy trade-off upper and lower bounds. We show that nearly all of our obtained bounds are tight under the Strong Exponential Time Hypothesis (SETH), or the related Hitting Set Hypothesis.
For instance, for Bichromatic Diameter in undirected weighted graphs with m edges, we present an O~(m^{3/2}) time 5/3-approximation algorithm, and show that under SETH, neither the running time, nor the approximation factor can be significantly improved while keeping the other unchanged.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.47/LIPIcs.ICALP.2019.47.pdf
approximation algorithms
fine-grained complexity
diameter
radius
eccentricities
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
48:1
48:13
10.4230/LIPIcs.ICALP.2019.48
article
Faster Algorithms for All Pairs Non-Decreasing Paths Problem
Duan, Ran
1
Jin, Ce
1
Wu, Hongxun
1
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
In this paper, we present an improved algorithm for the All Pairs Non-decreasing Paths (APNP) problem on weighted simple digraphs, which has running time O~(n^{(3 + omega)/2}) = O~(n^{2.686}). Here n is the number of vertices, and omega < 2.373 is the exponent of fast matrix multiplication [Williams 2012, Le Gall 2014]. This matches the current best upper bound for the (max, min)-matrix product [Duan, Pettie 2009], which is reducible to APNP. Thus, any further improvement for APNP will imply a faster algorithm for the (max, min)-matrix product. The previous best upper bound for APNP on weighted digraphs was O~(n^{(3 + (3 - omega)/(omega + 1) + omega)/2}) = O~(n^{2.78}) [Duan, Gu, Zhang 2018]. We also show an O~(n^2) time algorithm for APNP in undirected simple graphs, which is also optimal up to logarithmic factors.
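The (max, min)-matrix product mentioned in the abstract has a direct cubic-time definition, sketched here for reference (the reduction to APNP is the paper's contribution and is not reproduced):

```python
def max_min_product(A, B):
    """(max, min)-matrix product: C[i][j] = max over k of min(A[i][k], B[k][j]).
    Naive O(n^3) evaluation; the abstract notes this product reduces to APNP,
    so any APNP speedup would speed this up as well."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(min(A[i][k], B[k][j]) for k in range(m))
             for j in range(p)]
            for i in range(n)]
```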
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.48/LIPIcs.ICALP.2019.48.pdf
graph optimization
matrix multiplication
non-decreasing paths
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
49:1
49:13
10.4230/LIPIcs.ICALP.2019.49
article
Faster Approximation Algorithms for Computing Shortest Cycles on Weighted Graphs
Ducoffe, Guillaume
1
2
3
National Institute for Research and Development in Informatics, Romania
The Research Institute of the University of Bucharest ICUB, Romania
University of Bucharest, Romania
Given an n-vertex m-edge graph G with non-negative edge-weights, a shortest cycle of G is one minimizing the sum of the weights on its edges. The girth of G is the weight of such a shortest cycle. We obtain several new approximation algorithms for computing the girth of weighted graphs:
- For any graph G with polynomially bounded integer weights, we present a deterministic algorithm that computes, in O~(n^{5/3}+m) time, a cycle of weight at most twice the girth of G. This matches both the approximation factor and - almost - the running time of the best known subquadratic-time approximation algorithm for the girth of unweighted graphs.
- Then, we turn our algorithm into a deterministic (2+epsilon)-approximation for graphs with arbitrary non-negative edge-weights, at the price of a slightly worse running time of O~(n^{5/3}polylog(1/epsilon)+m). For that, we introduce a generic method for obtaining a polynomial-factor approximation of the girth in subquadratic time, which may be of independent interest.
- Finally, if we assume that the adjacency lists are sorted, then we can get rid of the dependency on the number m of edges. Namely, we can transform our algorithms into an O~(n^{5/3})-time randomized 4-approximation for graphs with non-negative edge-weights. This can be derandomized, thereby leading to an O~(n^{5/3})-time deterministic 4-approximation for graphs with polynomially bounded integer weights, and an O~(n^{5/3}polylog(1/epsilon))-time deterministic (4+epsilon)-approximation for graphs with non-negative edge-weights.
To the best of our knowledge, these are the first known subquadratic-time approximation algorithms for computing the girth of weighted graphs.
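For contrast with these subquadratic approximations, a folklore exact baseline computes the girth of a non-negatively weighted undirected graph by noting that every shortest cycle uses some edge {u,v}, so the girth equals the minimum over edges of w(u,v) plus the shortest u-v distance avoiding that edge. A sketch (adjacency lists map each node to (neighbor, weight) pairs):

```python
import heapq

def dijkstra(adj, src, banned):
    """Shortest distances from src, never traversing the undirected
    edge given as the two-element set `banned`."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if {u, v} == banned:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def girth(adj):
    """Exact girth for non-negative edge weights: min over edges {u,v}
    of w(u,v) + dist(u,v) in the graph without that edge.  This runs one
    Dijkstra per edge, i.e. far above quadratic time -- the baseline the
    paper's subquadratic approximation algorithms avoid."""
    best = float("inf")
    for u in adj:
        for v, w in adj[u]:
            if u < v:  # consider each undirected edge once
                d = dijkstra(adj, u, {u, v}).get(v, float("inf"))
                best = min(best, w + d)
    return best
```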
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.49/LIPIcs.ICALP.2019.49.pdf
girth
weighted graphs
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
50:1
50:13
10.4230/LIPIcs.ICALP.2019.50
article
Algorithmically Efficient Syntactic Characterization of Possibility Domains
Díaz, Josep
1
Kirousis, Lefteris
2
1
https://orcid.org/0000-0002-4912-8959
Kokonezi, Sofia
2
https://orcid.org/0000-0002-4580-6150
Livieratos, John
2
https://orcid.org/0000-0001-6409-4286
Computer Science Department, Universitat Politècnica de Catalunya, Barcelona
Department of Mathematics, National and Kapodistrian University of Athens
We call domain any arbitrary subset of a Cartesian power of the set {0,1} when we think of it as reflecting abstract rationality restrictions on vectors of two-valued judgments on a number of issues. In Computational Social Choice Theory, and in particular in the theory of judgment aggregation, a domain is called a possibility domain if it admits a non-dictatorial aggregator, i.e. if for some k there exists a unanimous (idempotent) function F: D^k -> D which is not a projection function. We prove that a domain is a possibility domain if and only if there is a propositional formula of a certain syntactic form, sometimes called an integrity constraint, whose set of satisfying truth assignments, or models, comprises the domain. We call possibility integrity constraints the formulas of the specific syntactic type we define. Given a possibility domain D, we show how to construct a possibility integrity constraint for D efficiently, i.e., in polynomial time in the size of the domain. We also show how to distinguish formulas that are possibility integrity constraints in linear time in the size of the input formula. Finally, we prove the analogous results for local possibility domains, i.e. domains that admit an aggregator which is not a projection function, even when restricted to any given issue. Our result falls in the realm of classical results that give syntactic characterizations of logical relations that have certain closure properties, like e.g. the result that logical relations component-wise closed under logical AND are precisely the models of Horn formulas. However, our techniques draw from results in judgment aggregation theory as well as from results about propositional formulas and logical relations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.50/LIPIcs.ICALP.2019.50.pdf
collective decision making
computational social choice
judgment aggregation
logical relations
algorithm complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
51:1
51:14
10.4230/LIPIcs.ICALP.2019.51
article
On Geometric Complexity Theory: Multiplicity Obstructions Are Stronger Than Occurrence Obstructions
Dörfler, Julian
1
Ikenmeyer, Christian
2
Panova, Greta
3
4
Saarland University, Saarbrücken, Germany
Max Planck Institute for Software Systems, Saarbrücken, Germany
University of Southern California, Los Angeles, CA, USA
University of Pennsylvania, Philadelphia, PA, USA
Geometric Complexity Theory as initiated by Mulmuley and Sohoni in two papers (SIAM J Comput 2001, 2008) aims to separate algebraic complexity classes via representation theoretic multiplicities in coordinate rings of specific group varieties. We provide the first toy setting in which a separation can be achieved for a family of polynomials via these multiplicities.
Mulmuley and Sohoni’s papers also conjecture that the vanishing behavior of multiplicities would be sufficient to separate complexity classes (so-called occurrence obstructions). The existence of such strong occurrence obstructions has been recently disproven in 2016 in two successive papers, Ikenmeyer-Panova (Adv. Math.) and Bürgisser-Ikenmeyer-Panova (J. AMS). This raises the question whether separating group varieties via representation theoretic multiplicities is stronger than separating them via occurrences. We provide first finite settings where a separation via multiplicities can be achieved, while the separation via occurrences is provably impossible. These settings are surprisingly simple and natural: We study the variety of products of homogeneous linear forms (the so-called Chow variety) and the variety of polynomials of bounded border Waring rank (i.e. a higher secant variety of the Veronese variety).
As a side result we prove a slight generalization of Hermite’s reciprocity theorem, which proves Foulkes' conjecture for a new infinite family of cases.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.51/LIPIcs.ICALP.2019.51.pdf
Algebraic complexity theory
geometric complexity theory
Waring rank
plethysm coefficients
occurrence obstructions
multiplicity obstructions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
52:1
52:14
10.4230/LIPIcs.ICALP.2019.52
article
The Arboricity Captures the Complexity of Sampling Edges
Eden, Talya
1
Ron, Dana
1
Rosenbaum, Will
2
Tel Aviv University, Tel Aviv, Israel
Max Planck Institute for Informatics, Saarbrücken, Germany
In this paper, we revisit the problem of sampling edges in an unknown graph G = (V, E) from a distribution that is (pointwise) almost uniform over E. We consider the case where there is some a priori upper bound on the arboricity of G. Given query access to a graph G over n vertices, of average degree d and arboricity at most alpha, we design an algorithm that performs O((alpha/d) * (log^3 n)/epsilon) queries in expectation and returns an edge in the graph such that every edge e in E is sampled with probability (1 +/- epsilon)/m. The algorithm performs two types of queries: degree queries and neighbor queries. We show that the upper bound is tight (up to poly-logarithmic factors and the dependence on epsilon), as Omega(alpha/d) queries are necessary for the easier task of sampling edges from any distribution over E that is close to uniform in total variation distance. We also prove that even if G is a tree (i.e., alpha = 1, so that alpha/d = Theta(1)), Omega((log n)/(loglog n)) queries are necessary to sample an edge from any distribution that is pointwise close to uniform, thus establishing that a poly(log n) factor is necessary for constant alpha. Finally, we show how our algorithm can be applied to obtain a new result on approximately counting subgraphs, based on the recent work of Assadi, Kapralov, and Khanna (ITCS, 2019).
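For comparison, a folklore rejection-sampling baseline (not the paper's algorithm) already yields exactly uniform edge samples from degree and neighbor queries, but its query cost degrades on skewed degree sequences, which is the regime the arboricity-based approach targets:

```python
import random

def sample_edge(adj, rng=random):
    """Folklore baseline: draw a uniform vertex v, accept it with
    probability deg(v)/d_max, then return a uniform neighbor.  Each
    directed edge is emitted with probability 1/(n * d_max), i.e. the
    output is exactly uniform over directed edges.  Assumes d_max is
    known (here computed once up front); the expected number of
    rejections grows with d_max / d_avg, which can be large on skewed
    graphs."""
    nodes = list(adj)
    d_max = max(len(adj[v]) for v in nodes)
    while True:
        v = rng.choice(nodes)
        if rng.random() < len(adj[v]) / d_max:
            return v, rng.choice(adj[v])
```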
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.52/LIPIcs.ICALP.2019.52.pdf
sampling
graph algorithms
arboricity
sublinear-time algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
53:1
53:12
10.4230/LIPIcs.ICALP.2019.53
article
A Nearly-Linear Time Algorithm for Submodular Maximization with a Knapsack Constraint
Ene, Alina
1
Nguyen, Huy L.
2
Department of Computer Science, Boston University, MA, USA
College of Computer and Information Science, Northeastern University, Boston, MA, USA
We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, 1 - 1/e - epsilon approximation, using (1/epsilon)^{O(1/epsilon^4)} n log^2 n function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints, but it leads to Omega(n^2) running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.53/LIPIcs.ICALP.2019.53.pdf
submodular maximization
knapsack constraint
fast algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
54:1
54:14
10.4230/LIPIcs.ICALP.2019.54
article
Towards Nearly-Linear Time Algorithms for Submodular Maximization with a Matroid Constraint
Ene, Alina
1
Nguyen, Huy L.
2
Department of Computer Science, Boston University, MA, USA
College of Computer and Information Science, Northeastern University, Boston, MA, USA
We consider fast algorithms for monotone submodular maximization subject to a matroid constraint. We assume that the matroid is given as input in an explicit form, and the goal is to obtain the best possible running times for important matroids. We develop a new algorithm for a general matroid constraint with a 1 - 1/e - epsilon approximation that achieves a fast running time provided we have a fast data structure for maintaining an approximately maximum weight base in the matroid through a sequence of decrease weight operations. We construct such data structures for graphic matroids and partition matroids, and we obtain the first algorithms for these classes of matroids that achieve a nearly-optimal, 1 - 1/e - epsilon approximation, using a nearly-linear number of function evaluations and arithmetic operations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.54/LIPIcs.ICALP.2019.54.pdf
submodular maximization
matroid constraints
fast running times
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
55:1
55:15
10.4230/LIPIcs.ICALP.2019.55
article
On the Complexity of String Matching for Graphs
Equi, Massimo
1
Grossi, Roberto
2
Mäkinen, Veli
1
Tomescu, Alexandru I.
1
Department of Computer Science, University of Helsinki, Finland
Dipartimento di Informatica, Università di Pisa, Italy
Exact string matching in labeled graphs is the problem of searching for paths of a graph G=(V,E) such that the concatenation of their node labels is equal to the given pattern string P[1..m]. This basic problem can be found at the heart of more complex operations on variation graphs in computational biology, of query operations in graph databases, and of analysis operations in heterogeneous networks.
We prove a conditional lower bound stating that, for any constant epsilon>0, an O(|E|^{1 - epsilon} m)-time, or an O(|E| m^{1 - epsilon})-time algorithm for exact string matching in graphs, with node labels and patterns drawn from a binary alphabet, cannot be achieved unless the Strong Exponential Time Hypothesis (SETH) is false. This holds even if restricted to undirected graphs with maximum node degree two, i.e. to zig-zag matching in bidirectional strings, or to deterministic directed acyclic graphs whose nodes have maximum sum of indegree and outdegree three. These restricted cases make the lower bound stricter than what can be directly derived from related bounds on regular expression matching (Backurs and Indyk, FOCS'16). In fact, our bounds are tight in the sense that lowering the degree or the alphabet size yields linear-time solvable problems.
An interesting corollary is that exact and approximate matching are equally hard (quadratic time) in graphs under SETH. In comparison, the same problems restricted to strings have linear-time vs quadratic-time solutions, respectively (approximate pattern matching having also a matching SETH lower bound (Backurs and Indyk, STOC'15)).
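The quadratic upper bound that these lower bounds match is a textbook O(|E| m) dynamic program: scan the pattern once, maintaining the set of nodes at which the current prefix can end. A sketch (occurrences may revisit nodes, i.e. they are walks):

```python
def graph_string_match(labels, adj, P):
    """True iff some walk in the node-labeled graph spells the pattern P.
    labels: node -> character; adj: node -> list of successor nodes.
    Each pattern character costs one pass over the edges, giving the
    O(|E| * m) bound that the SETH lower bound shows is essentially
    optimal."""
    # Nodes at which the length-1 prefix of P can end.
    active = {v for v in labels if labels[v] == P[0]}
    for ch in P[1:]:
        # Extend every current occurrence by one edge.
        active = {v for u in active for v in adj[u] if labels[v] == ch}
    return bool(active)
```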
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.55/LIPIcs.ICALP.2019.55.pdf
exact pattern matching
graph query
graph search
labeled graphs
string matching
string search
strong exponential time hypothesis
heterogeneous networks
variation graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
56:1
56:15
10.4230/LIPIcs.ICALP.2019.56
article
Unique End of Potential Line
Fearnley, John
1
Gordon, Spencer
2
Mehta, Ruta
3
Savani, Rahul
1
University of Liverpool, UK
California Institute of Technology, Pasadena, CA, USA
University of Illinois at Urbana-Champaign, IL, USA
The complexity class CLS was proposed by Daskalakis and Papadimitriou in 2011 to understand the complexity of important NP search problems that admit both path-following and potential-optimizing algorithms. Here we identify a subclass of CLS - called UniqueEOPL - that applies a more specific combinatorial principle guaranteeing unique solutions. We show that UniqueEOPL contains several important problems, such as the P-matrix Linear Complementarity Problem, finding a Fixed Point of Contraction Maps, and solving Unique Sink Orientations (USOs). UniqueEOPL seems to be a proper subclass of CLS and looks more likely to be the right class for the problems of interest. We identify a problem - closely related to solving contraction maps and USOs - that is complete for UniqueEOPL. Our results also give the fastest randomised algorithm for P-matrix LCP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.56/LIPIcs.ICALP.2019.56.pdf
P-matrix linear complementarity problem
unique sink orientation
contraction map
TFNP
total search problems
continuous local search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
57:1
57:12
10.4230/LIPIcs.ICALP.2019.57
article
Dichotomy for Symmetric Boolean PCSPs
Ficak, Miron
1
https://orcid.org/0000-0003-3104-6354
Kozik, Marcin
1
https://orcid.org/0000-0002-1839-4824
Olšák, Miroslav
2
Stankiewicz, Szymon
1
https://orcid.org/0000-0003-2235-4849
Theoretical Computer Science Department, Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
Department of Algebra, Charles University, Prague, Czech Republic
In one of the most actively studied versions of the Constraint Satisfaction Problem, a CSP is defined by a relational structure called a template. In the decision version of the problem, the goal is to determine whether a structure given on input admits a homomorphism into this template. Two recent independent results of Bulatov [FOCS'17] and Zhuk [FOCS'17] state that each finite template defines a CSP that is either tractable or NP-complete.
In a recent paper Brakensiek and Guruswami [SODA'18] proposed an extension of the CSP framework. This extension, called Promise Constraint Satisfaction Problem, includes many naturally occurring computational questions, e.g. approximate coloring, that cannot be cast as CSPs. A PCSP is a combination of two CSPs defined by two similar templates; the computational question is to distinguish a YES instance of the first one from a NO instance of the second.
The computational complexity of many PCSPs remains unknown. Even the case of Boolean templates (solved for CSP by Schaefer [STOC'78]) remains wide open. The main result of Brakensiek and Guruswami [SODA'18] shows that Boolean PCSPs exhibit a dichotomy (PTIME vs. NPC) when "all the clauses are symmetric and allow for negation of variables". In this paper we remove the "allow for negation of variables" assumption from the theorem. The "symmetric" assumption means that changing the order of variables in a constraint does not change its satisfiability. The "negation of variables" means that both of the templates share a relation which can be used to effectively negate Boolean variables.
The main result of this paper establishes a dichotomy for all symmetric Boolean templates. The tractability cases of our theorem and of the theorem of Brakensiek and Guruswami are almost identical. The main difference, and the main contribution of this work, is the new reason for hardness and the reasoning proving the split.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.57/LIPIcs.ICALP.2019.57.pdf
promise constraint satisfaction problem
PCSP
algebraic approach
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
58:1
58:13
10.4230/LIPIcs.ICALP.2019.58
article
Biasing Boolean Functions and Collective Coin-Flipping Protocols over Arbitrary Product Distributions
Filmus, Yuval
1
https://orcid.org/0000-0002-1739-0872
Hambardzumyan, Lianna
2
Hatami, Hamed
2
https://orcid.org/0000-0002-4732-434X
Hatami, Pooya
3
https://orcid.org/0000-0001-7928-8008
Zuckerman, David
3
Computer Science Department, Technion, Haifa, Israel
School of Computer Science, McGill University, Montreal, QC, Canada
Department of Computer Science, UT Austin, Austin, TX, USA
The seminal result of Kahn, Kalai and Linial shows that a coalition of O(n/(log n)) players can bias the outcome of any Boolean function {0,1}^n -> {0,1} with respect to the uniform measure. We extend their result to arbitrary product measures on {0,1}^n, by combining their argument with a completely different argument that handles very biased input bits.
We view this result as a step towards proving a conjecture of Friedgut, which states that Boolean functions on the continuous cube [0,1]^n (or, equivalently, on {1,...,n}^n) can be biased using coalitions of o(n) players. This is the first step taken in this direction since Friedgut proposed the conjecture in 2004.
Russell, Saks and Zuckerman extended the result of Kahn, Kalai and Linial to multi-round protocols, showing that when the number of rounds is o(log^* n), a coalition of o(n) players can bias the outcome with respect to the uniform measure. We extend this result as well to arbitrary product measures on {0,1}^n.
The argument of Russell et al. relies on the fact that a coalition of o(n) players can boost the expectation of any Boolean function from epsilon to 1-epsilon with respect to the uniform measure. This fails for general product distributions, as the example of the AND function with respect to mu_{1-1/n} shows. Instead, we use a novel boosting argument alongside a generalization of our first result to arbitrary finite ranges.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.58/LIPIcs.ICALP.2019.58.pdf
Boolean function analysis
coin flipping
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
59:1
59:13
10.4230/LIPIcs.ICALP.2019.59
article
Covering Vectors by Spaces in Perturbed Graphic Matroids and Their Duals
Fomin, Fedor V.
1
Golovach, Petr A.
1
Lokshtanov, Daniel
2
Saurabh, Saket
3
Zehavi, Meirav
4
Department of Informatics, University of Bergen, Norway
Department of Computer Science, University of California Santa Barbara, USA
The Institute of Mathematical Sciences, HBNI, Chennai, India
Ben-Gurion University, Israel
Perturbed graphic matroids are binary matroids that can be obtained from a graphic matroid by adding noise of small rank. More precisely, an r-rank perturbed graphic matroid M is a binary matroid that can be represented in the form I + P, where I is the incidence matrix of some graph and P is a binary matrix of rank at most r. Such matroids naturally appear in a number of theoretical and applied settings. The main motivation behind our work is an attempt to understand which parameterized algorithms for various problems on graphs could be lifted to perturbed graphic matroids.
We study the parameterized complexity of a natural generalization (for matroids) of the following fundamental problems on graphs: Steiner Tree and Multiway Cut. In this generalization, called the Space Cover problem, we are given a binary matroid M with a ground set E, a set of terminals T subseteq E, and a non-negative integer k. The task is to decide whether T can be spanned by a subset of E \ T of size at most k.
We prove that on graphic matroid perturbations, for every fixed r, Space Cover is fixed-parameter tractable parameterized by k. On the other hand, the problem becomes W[1]-hard when parameterized by r+k+|T| and it is NP-complete for r <= 2 and |T|<= 2.
On cographic matroids, which are the duals of graphic matroids, Space Cover generalizes another fundamental and well-studied problem, namely Multiway Cut. We show that on the duals of perturbed graphic matroids the Space Cover problem is fixed-parameter tractable parameterized by r+k.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.59/LIPIcs.ICALP.2019.59.pdf
Binary matroids
perturbed graphic matroids
spanning set
parameterized complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
60:1
60:15
10.4230/LIPIcs.ICALP.2019.60
article
Decomposition of Map Graphs with Applications
Fomin, Fedor V.
1
Lokshtanov, Daniel
2
Panolan, Fahad
1
Saurabh, Saket
3
Zehavi, Meirav
4
University of Bergen, Norway
University of California, Santa Barbara, USA
The Institute of Mathematical Sciences, HBNI, Chennai, India
Ben-Gurion University of the Negev, Beer-Sheva, Israel
Bidimensionality is the most common technique to design subexponential-time parameterized algorithms on special classes of graphs, particularly planar graphs. The core engine behind it is a combinatorial lemma of Robertson, Seymour and Thomas that states that every planar graph either has a sqrt{k} x sqrt{k}-grid as a minor, or its treewidth is O(sqrt{k}). However, bidimensionality theory cannot be extended directly to several well-known classes of geometric graphs like unit disk or map graphs. This is mainly due to the presence of large cliques in these classes of graphs. Nevertheless, a relaxation of this lemma has been proven useful for unit disk graphs. Inspired by this, we prove a new decomposition lemma for map graphs, the intersection graphs of finitely many simply-connected and interior-disjoint regions of the Euclidean plane. Informally, our lemma states the following. For any map graph G, there exists a collection (U_1,...,U_t) of cliques of G with the following property: G either contains a sqrt{k} x sqrt{k}-grid as a minor, or it admits a tree decomposition where every bag is the union of O(sqrt{k}) cliques in the above collection.
The new lemma appears to be a handy tool in the design of subexponential parameterized algorithms on map graphs. We demonstrate its usability by designing algorithms on map graphs with running time 2^{O(sqrt{k} log k)} * n^{O(1)} for Connected Planar F-Deletion (that encompasses problems such as Feedback Vertex Set and Vertex Cover). Obtaining subexponential algorithms for Longest Cycle/Path and Cycle Packing is more challenging. We have to construct tree decompositions with more powerful properties and to prove sublinear bounds on the number of ways an optimum solution could "cross" bags in these decompositions.
For Longest Cycle/Path, these are the first subexponential-time parameterized algorithms on map graphs. For Feedback Vertex Set and Cycle Packing, we improve upon the known 2^{O(k^{0.75} log k)} * n^{O(1)}-time algorithms on map graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.60/LIPIcs.ICALP.2019.60.pdf
Longest Cycle
Cycle Packing
Feedback Vertex Set
Map Graphs
FPT
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
61:1
61:14
10.4230/LIPIcs.ICALP.2019.61
article
The Satisfiability Threshold for Non-Uniform Random 2-SAT
Friedrich, Tobias
1
https://orcid.org/0000-0003-0076-6308
Rothenberger, Ralf
1
https://orcid.org/0000-0002-4133-2437
Algorithm Engineering Group, Hasso Plattner Institute, University of Potsdam, Germany
Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. Its worst-case hardness lies at the core of computational complexity theory, for example in the form of NP-hardness and the (Strong) Exponential Time Hypothesis. In practice however, SAT instances can often be solved efficiently. This contradicting behavior has spawned interest in the average-case analysis of SAT and has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures.
Despite a long line of research and substantial progress, most theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a non-uniform distribution of the variables, which can result in distributions closer to industrial SAT instances.
We study satisfiability thresholds of non-uniform random 2-SAT with n variables and m clauses and with an arbitrary probability distribution (p_i)_{i in [n]} with p_1 >= p_2 >= ... >= p_n > 0 over the n variables. We show for p_1^2 = Theta(sum_{i=1}^n p_i^2) that the asymptotic satisfiability threshold is at m = Theta((1 - sum_{i=1}^n p_i^2)/(p_1 * (sum_{i=2}^n p_i^2)^{1/2})) and that it is coarse. For p_1^2 = o(sum_{i=1}^n p_i^2) we show that there is a sharp satisfiability threshold at m = (sum_{i=1}^n p_i^2)^{-1}. This result generalizes the seminal works by Chvatal and Reed [FOCS 1992] and by Goerdt [JCSS 1996].
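As a sanity check, the sharp-threshold formula from the abstract recovers the classical uniform 2-SAT threshold m = n of Chvatal-Reed and Goerdt:

```python
def sharp_threshold(p):
    """Sharp satisfiability threshold m = (sum_i p_i^2)^(-1) from the
    abstract, valid in the regime p_1^2 = o(sum_i p_i^2)."""
    return 1.0 / sum(pi * pi for pi in p)

# Uniform distribution p_i = 1/n gives sum_i p_i^2 = 1/n, hence m = n,
# matching the classical 2-SAT threshold of one clause per variable.
n = 1000
assert abs(sharp_threshold([1.0 / n] * n) - n) < 1e-6
```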
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.61/LIPIcs.ICALP.2019.61.pdf
random SAT
satisfiability threshold
sharpness
non-uniform distribution
2-SAT
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
62:1
62:15
10.4230/LIPIcs.ICALP.2019.62
article
Determinant Equivalence Test over Finite Fields and over Q
Garg, Ankit
1
Gupta, Nikhil
2
Kayal, Neeraj
1
Saha, Chandan
2
Microsoft Research India, Bangalore, India
Department of Computer Science and Automation, Indian Institute of Science, India
The determinant polynomial Det_n(x) of degree n is the determinant of an n x n matrix of formal variables. A polynomial f is equivalent to Det_n(x) over a field F if there exists an A in GL(n^2,F) such that f = Det_n(A * x). Determinant equivalence test over F is the following algorithmic task: given black-box access to an f in F[x], check whether f is equivalent to Det_n(x) over F, and if so, output a transformation matrix A in GL(n^2,F). In (Kayal, STOC 2012), a randomized polynomial-time determinant equivalence test was given over F = C. But, to our knowledge, the complexity of the problem over finite fields and over Q was not well understood.
In this work, we give a randomized poly(n, log |F|) time determinant equivalence test over finite fields F (under mild restrictions on the characteristic and size of F). Over Q, we give an efficient randomized reduction from factoring square-free integers to determinant equivalence test for quadratic forms (i.e. the n=2 case), assuming GRH. This shows that designing a polynomial-time determinant equivalence test over Q is a challenging task. Nevertheless, we show that determinant equivalence test over Q is decidable: for bounded n, there is a randomized polynomial-time determinant equivalence test over Q with access to an oracle for integer factoring. Moreover, for any n, there is a randomized polynomial-time algorithm that takes as input black-box access to an f in Q[x] and, if f is equivalent to Det_n over Q, returns an A in GL(n^2,L) such that f = Det_n(A * x), where L is an extension field of Q and [L : Q] <= n.
The above algorithms over finite fields and over Q are obtained by giving a polynomial-time randomized reduction from determinant equivalence test to another problem, namely the full matrix algebra isomorphism problem. We also show a reduction in the converse direction which is efficient if n is bounded. These reductions, which hold over any F (under mild restrictions on the characteristic and size of F), establish a close connection between the complexity of the two problems. This then leads to our results via applications of known results on the full algebra isomorphism problem over finite fields (Rónyai, STOC 1987 and Rónyai, J. Symb. Comput. 1990) and over Q (Ivanyos et al., Journal of Algebra 2012 and Babai et al., Mathematics of Computation 1990).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.62/LIPIcs.ICALP.2019.62.pdf
Determinant equivalence test
full matrix algebra isomorphism
Lie algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
63:1
63:14
10.4230/LIPIcs.ICALP.2019.63
article
Non-Clairvoyant Precedence Constrained Scheduling
Garg, Naveen
1
Gupta, Anupam
2
Kumar, Amit
1
Singla, Sahil
3
Computer Science and Engineering Department, Indian Institute of Technology, Delhi, India
Computer Science Department, Carnegie Mellon University, USA
Princeton University and Institute for Advanced Study, USA
We consider the online problem of scheduling jobs on identical machines, where jobs have precedence constraints. We are interested in the demanding setting where the job sizes are not known up-front but are revealed only upon completion (the non-clairvoyant setting). Such precedence-constrained scheduling problems routinely arise in map-reduce and large-scale optimization. For minimizing the total weighted completion time, we give a constant-competitive algorithm. And for total weighted flow-time, we give an O(1/epsilon^2)-competitive algorithm under (1+epsilon)-speed augmentation and a natural "no-surprises" assumption on release dates of jobs (which we show is necessary in this context).
Our algorithm proceeds by assigning virtual rates to all waiting jobs, including the ones which are dependent on other uncompleted jobs. We then use these virtual rates to decide on the actual rates of minimal jobs (i.e., jobs which do not have dependencies and hence are eligible to run). Interestingly, the virtual rates are obtained by allocating time in a fair manner, using an Eisenberg-Gale-type convex program (which we can solve optimally using a primal-dual scheme). The optimality condition of this convex program allows us to show dual-fitting proofs more easily, without having to guess and hand-craft the duals. This idea of using fair virtual rates may have broader applicability in scheduling problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.63/LIPIcs.ICALP.2019.63.pdf
Online algorithms
Scheduling
Primal-Dual analysis
Nash welfare
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
64:1
64:13
10.4230/LIPIcs.ICALP.2019.64
article
A Composition Theorem for Randomized Query Complexity via Max-Conflict Complexity
Gavinsky, Dmitry
1
Lee, Troy
2
Santha, Miklos
3
4
5
Sanyal, Swagato
6
Institute of Mathematics, Czech Academy of Sciences, 115 67 Žitna 25, Praha 1, Czech Republic
Centre for Quantum Software and Information, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
CNRS, IRIF, Université de Paris, 75205 Paris, France
Centre for Quantum Technologies, National University of Singapore, Singapore 117543
MajuLab, UMI 3654, Singapore
Indian Institute of Technology Kharagpur, India
For any relation f subseteq {0,1}^n x S and any partial Boolean function g:{0,1}^m -> {0,1,*}, we show that R_{1/3}(f o g^n) in Omega(R_{4/9}(f) * sqrt{R_{1/3}(g)}) , where R_epsilon(*) stands for the bounded-error randomized query complexity with error at most epsilon, and f o g^n subseteq ({0,1}^m)^n x S denotes the composition of f with n instances of g.
The new composition theorem is optimal, at least, for the general case of relational problems: A relation f_0 and a partial Boolean function g_0 are constructed, such that R_{4/9}(f_0) in Theta(sqrt n), R_{1/3}(g_0)in Theta(n) and R_{1/3}(f_0 o g_0^n) in Theta(n).
The theorem is proved via introducing a new complexity measure, max-conflict complexity, denoted by bar{chi}(*). Its investigation shows that bar{chi}(g) in Omega(sqrt{R_{1/3}(g)}) for any partial Boolean function g and R_{1/3}(f o g^n) in Omega(R_{4/9}(f) * bar{chi}(g)) for any relation f, which readily implies the composition statement. It is further shown that bar{chi}(g) is always at least as large as the sabotage complexity of g.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.64/LIPIcs.ICALP.2019.64.pdf
query complexity
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
65:1
65:14
10.4230/LIPIcs.ICALP.2019.65
article
The Hairy Ball Problem is PPAD-Complete
Goldberg, Paul W.
1
https://orcid.org/0000-0002-5436-7890
Hollender, Alexandros
1
https://orcid.org/0000-0001-5255-9349
Department of Computer Science, University of Oxford, United Kingdom
The Hairy Ball Theorem states that every continuous tangent vector field on an even-dimensional sphere must have a zero. We prove that the associated computational problem of computing an approximate zero is PPAD-complete. We also give a FIXP-hardness result for the general exact computation problem.
In order to show that this problem lies in PPAD, we provide new results on multiple-source variants of End-of-Line, the canonical PPAD-complete problem. In particular, finding an approximate zero of a Hairy Ball vector field on an even-dimensional sphere reduces to a 2-source End-of-Line problem. If the domain is changed to be the torus of genus g >= 2 instead (where the Hairy Ball Theorem also holds), then the problem reduces to a 2(g-1)-source End-of-Line problem.
These multiple-source End-of-Line results are of independent interest and provide new tools for showing membership in PPAD. In particular, we use them to provide the first full proof of PPAD-completeness for the Imbalance problem defined by Beame et al. in 1998.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.65/LIPIcs.ICALP.2019.65.pdf
Computational Complexity
TFNP
PPAD
End-of-Line
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
66:1
66:15
10.4230/LIPIcs.ICALP.2019.66
article
AC^0[p] Lower Bounds Against MCSP via the Coin Problem
Golovnev, Alexander
1
Ilango, Rahul
2
Impagliazzo, Russell
3
Kabanets, Valentine
4
Kolokolova, Antonina
5
Tal, Avishay
6
Harvard University, Cambridge, USA
Rutgers University, New Brunswick, USA
University of California San Diego, USA
Simon Fraser University, Burnaby, Canada
Memorial University of Newfoundland, St. John’s, Canada
Stanford University, USA
The Minimum Circuit Size Problem (MCSP) asks whether a given truth table of an n-variate boolean function has circuit complexity less than a given parameter s. We prove that MCSP is hard for constant-depth circuits with mod p gates, for any prime p >= 2 (the circuit class AC^0[p]). Namely, we show that MCSP requires d-depth AC^0[p] circuits of size at least exp(N^{0.49/d}), where N=2^n is the size of an input truth table of an n-variate boolean function. Our circuit lower bound proof shows that MCSP can solve the coin problem: distinguish uniformly random N-bit strings from those generated using independent samples from a biased random coin which is 1 with probability 1/2+N^{-0.49}, and 0 otherwise. Solving the coin problem with such parameters is known to require exponentially large AC^0[p] circuits. Moreover, this also implies that MAJORITY is computable by a non-uniform AC^0 circuit of polynomial size that also has MCSP-oracle gates. The latter has a few other consequences for the complexity of MCSP, e.g., we get that any boolean function in NC^1 (i.e., computable by a polynomial-size formula) can also be computed by a non-uniform polynomial-size AC^0 circuit with MCSP-oracle gates.
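To make the coin-problem parameters concrete, here is a toy threshold distinguisher; the threshold choice (halfway between the fair and biased rates) is ours for illustration, while the abstract's point is that MCSP itself can play this role inside AC^0[p].

```python
def coin_distinguisher(bits, N):
    """Toy test for the coin problem with bias N^{-0.49}: guess
    "biased" iff the fraction of ones exceeds the midpoint between
    the fair rate 1/2 and the biased rate 1/2 + N^{-0.49}."""
    threshold = 0.5 + 0.5 * N ** (-0.49)
    return sum(bits) / len(bits) > threshold
```

For N = 256 the threshold is roughly 0.533, so a string with 50% ones is rejected while one with 60% ones is accepted.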
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.66/LIPIcs.ICALP.2019.66.pdf
Minimum Circuit Size Problem (MCSP)
circuit lower bounds
AC0[p]
coin problem
hybrid argument
MKTP
biased random boolean functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
67:1
67:14
10.4230/LIPIcs.ICALP.2019.67
article
Stochastic Online Metric Matching
Gupta, Anupam
1
Guruganesh, Guru
2
Peng, Binghui
3
Wajc, David
1
Carnegie Mellon University, Pittsburgh, PA, USA
Google Research, United States
Tsinghua University, China
We study the minimum-cost metric perfect matching problem under online i.i.d. arrivals. We are given a fixed metric with a server at each of the points, and then requests arrive online, each drawn independently from a known probability distribution over the points. Each request has to be matched to a free server, with cost equal to the distance. The goal is to minimize the expected total cost of the matching.
Such stochastic arrival models have been widely studied for the maximization variants of the online matching problem; however, the only known result for the minimization problem is a tight O(log n)-competitiveness for the random-order arrival model. This is in contrast with the adversarial model, where an optimal competitive ratio of O(log n) has long been conjectured and remains a tantalizing open question.
In this paper, we show that the i.i.d. model admits substantially better algorithms: our main result is an O((log log log n)^2)-competitive algorithm in this model, implying a strict separation between the i.i.d. model and the adversarial and random-order models. Along the way we give a 9-competitive algorithm for line and tree metrics - the first O(1)-competitive algorithm for any non-trivial arrival model for these much-studied metrics.
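As a concrete baseline (not the algorithm of the paper), the natural greedy rule for metric matching on the line matches each arriving request to the nearest free server; the helper below (names ours) evaluates its total cost on a fixed arrival sequence.

```python
def greedy_online_matching(servers, requests):
    """Naive greedy baseline for online metric matching on the line:
    match each arriving request to the nearest currently free server
    (ties broken toward the leftmost server) and pay the distance."""
    free = sorted(servers)
    cost = 0.0
    for r in requests:
        s = min(free, key=lambda p: abs(p - r))  # nearest free server
        free.remove(s)
        cost += abs(s - r)
    return cost
```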
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.67/LIPIcs.ICALP.2019.67.pdf
stochastic
online
online matching
metric matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
68:1
68:14
10.4230/LIPIcs.ICALP.2019.68
article
Constructions of Maximally Recoverable Local Reconstruction Codes via Function Fields
Guruswami, Venkatesan
1
Jin, Lingfei
2
3
4
https://orcid.org/0000-0002-1523-880X
Xing, Chaoping
5
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
Shanghai Institute of Intelligent Electronics & Systems, Shanghai, China
Shanghai Blockchain Engineering Research Center, Fudan University, Shanghai 200433, China
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
Local Reconstruction Codes (LRCs) allow for recovery from a small number of erasures in a local manner based on just a few other codeword symbols. They have emerged as the codes of choice for large-scale distributed storage systems due to the very efficient repair of failed storage nodes in the typical scenario of a single or few nodes failing, while also offering fault tolerance against worst-case scenarios with more erasures. A maximally recoverable (MR) LRC offers the best possible blend of such local and global fault tolerance, guaranteeing recovery from all erasure patterns which are information-theoretically correctable given the presence of local recovery groups. In an (n,r,h,a)-LRC, the n codeword symbols are partitioned into disjoint local groups of size r, each of which includes a local parity checks capable of locally correcting a erasures. The codeword symbols further obey h heavy (global) parity checks. Such a code is maximally recoverable if it can correct all patterns of a erasures per local group plus up to h additional erasures anywhere in the codeword. This property amounts to linear independence of all such subsets of columns of the parity check matrix.
MR LRCs have received much attention recently, with many explicit constructions covering different regimes of parameters. Unfortunately, all known constructions require a large field size that is exponential in h or a, and it is of interest to obtain MR LRCs of minimal possible field size. In this work, we develop an approach based on function fields to construct MR LRCs. Our method recovers, and in most parameter regimes improves, the field size of previous approaches. For instance, for the case of small r << epsilon log n and large h >= Omega(n^{1-epsilon}), we improve the field size from roughly n^h to n^{epsilon h}. For the case of a=1 (one local parity check), we improve the field size quadratically from r^{h(h+1)} to r^{h floor[(h+1)/2]} for some range of r. The improvements are modest, but more importantly are obtained in a unified manner via a promising new idea.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.68/LIPIcs.ICALP.2019.68.pdf
Erasure codes
Algebraic constructions
Linear algebra
Locally Repairable Codes
Explicit constructions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
69:1
69:16
10.4230/LIPIcs.ICALP.2019.69
article
Quantum Chebyshev’s Inequality and Applications
Hamoudi, Yassine
1
https://orcid.org/0000-0002-3762-0612
Magniez, Frédéric
1
https://orcid.org/0000-0003-2384-9026
Université de Paris, IRIF, CNRS, F-75013 Paris, France
In this paper we provide new quantum algorithms with polynomial speed-up for a range of problems for which no such results were known, and improve upon previous algorithms for others. First, we consider the approximation of the frequency moments F_k of order k >= 3 in the multi-pass streaming model with updates (turnstile model). We design a P-pass quantum streaming algorithm with memory M satisfying a tradeoff of P^2 M = O~(n^{1-2/k}), whereas the best classical algorithm requires P M = Theta(n^{1-2/k}). Then, we study the problem of estimating the number m of edges and the number t of triangles given query access to an n-vertex graph. We describe optimal quantum algorithms that perform O~(sqrt{n}/m^{1/4}) and O~(sqrt{n}/t^{1/6} + m^{3/4}/sqrt{t}) queries respectively. This is a quadratic speed-up compared to the classical complexity of these problems.
For this purpose we develop a new quantum paradigm that we call Quantum Chebyshev’s inequality. Namely we demonstrate that, in a certain model of quantum sampling, one can approximate with relative error the mean of any random variable with a number of quantum samples that is linear in the ratio of the square root of the variance to the mean. Classically the dependence is quadratic. Our algorithm subsumes a previous result of Montanaro [Montanaro, 2015]. This new paradigm is based on a refinement of the Amplitude Estimation algorithm of Brassard et al. [Brassard et al., 2002] and of previous quantum algorithms for the mean estimation problem. We show that this speed-up is optimal, and we identify another common model of quantum sampling where it cannot be obtained. Finally, we develop a new technique called "variable-time amplitude estimation" that reduces the dependence of our algorithm on the sample preparation time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.69/LIPIcs.ICALP.2019.69.pdf
Quantum algorithms
approximation algorithms
sublinear-time algorithms
Monte Carlo method
streaming algorithms
subgraph counting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
70:1
70:15
10.4230/LIPIcs.ICALP.2019.70
article
Retracting Graphs to Cycles
Haney, Samuel
1
Liaee, Mehraneh
2
Maggs, Bruce M.
1
3
Panigrahi, Debmalya
1
Rajaraman, Rajmohan
2
Sundaram, Ravi
2
Duke University, Durham, NC, USA
Northeastern University, Boston, MA, USA
Akamai Technologies, Cambridge, MA, USA
We initiate the algorithmic study of retracting a graph into a cycle in the graph, which seeks a mapping of the graph vertices to the cycle vertices so as to minimize the maximum stretch of any edge, subject to the constraint that the restriction of the mapping to the cycle is the identity map. This problem has its roots in the rich theory of retraction of topological spaces, and has strong ties to well-studied metric embedding problems such as minimum bandwidth and 0-extension. Our first result is an O(min{k, sqrt{n}})-approximation for retracting any graph on n nodes to a cycle with k nodes. We also show a surprising connection to Sperner’s Lemma that rules out the possibility of improving this result using certain natural convex relaxations of the problem. Nevertheless, if the problem is restricted to planar graphs, we show that we can overcome these integrality gaps by giving an optimal combinatorial algorithm, which is the technical centerpiece of the paper. Building on our planar graph algorithm, we also obtain a constant-factor approximation algorithm for retraction of points in the Euclidean plane to a uniform cycle.
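The objective being minimized can be made concrete: given a mapping of graph vertices to positions on a k-node cycle (with cycle vertices fixed by the identity constraint), the cost is the maximum cycle-distance spanned by any edge. A small evaluator, with names of our choosing:

```python
def retraction_stretch(graph_edges, cycle_len, mapping):
    """Evaluate the retraction objective: the maximum stretch of any
    graph edge under `mapping`, a dict from vertices to positions
    0..cycle_len-1, with distance measured along the cycle."""
    def cyc_dist(a, b):
        d = abs(a - b) % cycle_len
        return min(d, cycle_len - d)  # shorter way around the cycle
    return max(cyc_dist(mapping[u], mapping[v]) for u, v in graph_edges)
```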
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.70/LIPIcs.ICALP.2019.70.pdf
Graph algorithms
Graph embedding
Planar graphs
Approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
71:1
71:16
10.4230/LIPIcs.ICALP.2019.71
article
On Adaptive Algorithms for Maximum Matching
Hegerfeld, Falko
1
Kratsch, Stefan
1
Humboldt-Universität zu Berlin, Germany
In the fundamental Maximum Matching problem the task is to find a maximum cardinality set of pairwise disjoint edges in a given undirected graph. The fastest algorithm for this problem, due to Micali and Vazirani, runs in time O(sqrt{n}m) and stands unbeaten since 1980. It is complemented by faster, often linear-time, algorithms for various special graph classes. Moreover, there are fast parameterized algorithms, e.g., time O(km log n) relative to tree-width k, which outperform O(sqrt{n}m) when the parameter is sufficiently small.
We show that the Micali-Vazirani algorithm, and in fact any algorithm following the phase framework of Hopcroft and Karp, is adaptive to beneficial input structure. We exhibit several graph classes for which such algorithms run in linear time O(n+m). More strongly, we show that they run in time O(sqrt{k}m) for graphs that are k vertex deletions away from any of several such classes, without explicitly computing an optimal or approximate deletion set; before, most such bounds were at least Omega(km). Thus, any phase-based matching algorithm with linear-time phases obliviously interpolates between linear time for k=O(1) and the worst case of O(sqrt{n}m) when k=Theta(n). We complement our findings by proving that the phase framework by itself still allows Omega(sqrt{n}) phases, and hence time Omega(sqrt{n}m), even on paths, cographs, and bipartite chain graphs.
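For the bipartite case, the phase framework discussed above is exactly the classical Hopcroft-Karp algorithm: each phase runs a BFS that layers the graph by shortest alternating path length, then a DFS that augments along a maximal set of vertex-disjoint shortest augmenting paths. A compact sketch:

```python
from collections import deque

def hopcroft_karp(adj, n_left, n_right):
    """Maximum bipartite matching via the Hopcroft-Karp phase
    framework. adj[u] lists the right-side neighbors of left vertex u."""
    INF = float("inf")
    match_l = [-1] * n_left   # match_l[u] = right partner of u, or -1
    match_r = [-1] * n_right  # match_r[v] = left partner of v, or -1

    def bfs():
        # Layer left vertices by shortest alternating path from a free vertex.
        dist = [INF] * n_left
        q = deque(u for u in range(n_left) if match_l[u] == -1)
        for u in q:
            dist[u] = 0
        found = False
        while q:
            u = q.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    found = True  # a shortest augmenting path exists
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist, found

    def dfs(u, dist):
        # Augment along a shortest alternating path starting at u.
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w, dist)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF  # dead end: prune u for the rest of this phase
        return False

    matching = 0
    while True:
        dist, found = bfs()
        if not found:
            return matching
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u, dist):
                matching += 1
```

Each phase takes linear time here, which is exactly the setting in which the adaptive bounds of the paper apply.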
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.71/LIPIcs.ICALP.2019.71.pdf
Matchings
Adaptive Analysis
Parameterized Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
72:1
72:14
10.4230/LIPIcs.ICALP.2019.72
article
Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits
Hrubeš, Pavel
1
Natarajan Ramamoorthy, Sivaramakrishnan
2
Rao, Anup
2
Yehudayoff, Amir
3
Institute of Mathematics of ASCR, Prague
Paul G. Allen School of Computer Science & Engineering, University of Washington, USA
Department of Mathematics, Technion-IIT, Haifa, Israel
There are various notions of balancing set families that appear in combinatorics and computer science. For example, a family of proper non-empty subsets S_1,...,S_k subset [n] is balancing if for every subset X subset [n] of size n/2, there is an i in [k] so that |S_i cap X| = |S_i|/2. We extend and simplify the framework developed by Hegedűs for proving lower bounds on the size of balancing set families. We prove that if n=2p for a prime p, then k >= p. For arbitrary values of n, we show that k >= n/2 - o(n).
We then exploit the connection between balancing families and depth-2 threshold circuits. This connection helps resolve a question raised by Kulikov and Podolskii on the fan-in of depth-2 majority circuits computing the majority function on n bits. We show that any depth-2 threshold circuit that computes the majority on n bits has at least one gate with fan-in at least n/2 - o(n). We also prove a sharp lower bound on the fan-in of depth-2 threshold circuits computing a specific weighted threshold function.
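The balancing property is easy to test exhaustively for small n, which gives a handy sanity check on the definition and the bounds above (helper names ours; sets are Python `set`s over {1,...,n}):

```python
from itertools import combinations

def is_balancing(n, family):
    """Brute-force check of the balancing property: for every subset X
    of [n] of size n/2, some S in the family satisfies
    |S intersect X| = |S|/2."""
    assert n % 2 == 0
    for X in combinations(range(1, n + 1), n // 2):
        Xs = set(X)
        if not any(len(S & Xs) * 2 == len(S) for S in family):
            return False
    return True
```

For n = 4 (so p = 2 and the lower bound is k >= 2), a family of two sets can already be balancing, while a single set cannot.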
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.72/LIPIcs.ICALP.2019.72.pdf
Balancing sets
depth-2 threshold circuits
polynomials
majority
weighted thresholds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
73:1
73:12
10.4230/LIPIcs.ICALP.2019.73
article
Scalable and Jointly Differentially Private Packing
Huang, Zhiyi
1
Zhu, Xue
1
The University of Hong Kong
We introduce an (epsilon, delta)-jointly differentially private algorithm for packing problems. Our algorithm not only achieves the optimal trade-off between the privacy parameter epsilon and the minimum supply requirement (up to logarithmic factors), but is also scalable in the sense that the running time is linear in the number of agents n. Previous algorithms either run in cubic time in n, or require a minimum supply per resource that is sqrt{n} times larger than the best possible.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.73/LIPIcs.ICALP.2019.73.pdf
Joint differential privacy
packing
scalable algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
74:1
74:14
10.4230/LIPIcs.ICALP.2019.74
article
Local Search Breaks 1.75 for Graph Balancing
Jansen, Klaus
1
Rohwedder, Lars
1
Department of Computer Science, Christian-Albrechts-Universität, Kiel, Germany
Graph Balancing is the problem of orienting the edges of a weighted multigraph so as to minimize the maximum weighted in-degree. Since the introduction of the problem, the best known algorithm achieves an approximation ratio of 1.75; it is based on rounding a linear program with this exact integrality gap. It is also known that there is no (1.5 - epsilon)-approximation algorithm unless P=NP. Can we do better than 1.75?
We prove that a different LP formulation, the configuration LP, has a strictly smaller integrality gap. Graph Balancing was the last in a group of related problems from the literature for which it was open whether the configuration LP is stronger than previous, simple LP relaxations. We base our proof on a local search approach that has been applied successfully to the more general Restricted Assignment problem, which in turn is a prominent special case of makespan minimization on unrelated machines. With a number of technical novelties we are able to obtain a bound of 1.749 for the case of Graph Balancing. It is not clear whether the local search algorithm we present terminates in polynomial time, which means that the bound is non-constructive. However, it is strong evidence that a better approximation algorithm is possible using the configuration LP, and it allows the optimum to be estimated within a factor better than 1.75.
A particularly interesting aspect of our techniques is the way we handle small edges in the local search. We manage to exploit the configuration constraints enforced on small edges in the LP. This may be of interest to other problems such as Restricted Assignment as well.
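For tiny instances the Graph Balancing objective can be computed exactly by enumerating all 2^m orientations, which is useful for checking small integrality-gap examples by hand (a brute-force sketch, not the paper's algorithm; names ours):

```python
from itertools import product

def best_orientation(edges, n):
    """Brute-force Graph Balancing: edges is a list of (u, v, weight)
    triples of a multigraph on vertices 0..n-1. Try both orientations
    of every edge and return the minimum achievable maximum weighted
    in-degree."""
    best = float("inf")
    for choice in product((0, 1), repeat=len(edges)):
        indeg = [0] * n
        for (u, v, w), c in zip(edges, choice):
            indeg[v if c == 0 else u] += w  # c picks the head of the edge
        best = min(best, max(indeg))
    return best
```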
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.74/LIPIcs.ICALP.2019.74.pdf
graph
approximation algorithm
scheduling
integrality gap
local search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
75:1
75:13
10.4230/LIPIcs.ICALP.2019.75
article
Near-Linear Time Algorithm for n-fold ILPs via Color Coding
Jansen, Klaus
1
Lassota, Alexandra
1
Rohwedder, Lars
1
Department of Computer Science, Kiel University, Kiel, Germany
We study an important case of ILPs, max {c^T x | Ax = b, l <= x <= u, x in Z^{nt}}, with n * t variables and lower and upper bounds l, u in Z^{nt}. In n-fold ILPs non-zero entries only appear in the first r rows of the matrix A and in small blocks of size s x t along the diagonal underneath. Despite this restriction many optimization problems can be expressed in this form. It is known that n-fold ILPs can be solved in FPT time with respect to the parameters s, r, and Delta, where Delta is the greatest absolute value of an entry in A. The state-of-the-art technique is a local search algorithm that repeatedly moves in an improving direction. Both the number of iterations and the search for such an improving direction take time Omega(n), leading to a quadratic running time in n. We introduce a technique based on Color Coding, which allows us to compute these improving directions in logarithmic time after a single initialization step. This leads to the first algorithm for n-fold ILPs with a running time that is near-linear in the number nt of variables, namely (rs Delta)^{O(r^2 s + s^2)} L^2 * nt log^{O(1)}(nt), where L is the encoding length of the largest integer in the input. In contrast to the algorithms in recent literature, we do not need to solve the LP relaxation in order to handle unbounded variables. Instead, we give a structural lemma to introduce appropriate bounds. If, on the other hand, we are given such an LP solution, the running time can be decreased by a factor of L.
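The n-fold block structure described above can be assembled explicitly; the helper below (names ours) builds the constraint matrix A from the r x t top block, repeated n times horizontally, and the s x t block placed n times along the diagonal underneath:

```python
def nfold_matrix(top, diag, n):
    """Assemble an n-fold constraint matrix: `top` is the r x t block
    repeated across the first r rows, `diag` the s x t block repeated
    n times along the diagonal. Matrices are lists of row lists."""
    r, t = len(top), len(top[0])
    rows = [row * n for row in top]  # top block repeated n times
    for i in range(n):
        for row in diag:
            # zero-padding places the block at column offset i * t
            rows.append([0] * (i * t) + row + [0] * ((n - 1 - i) * t))
    return rows
```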
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.75/LIPIcs.ICALP.2019.75.pdf
Near-Linear Time Algorithm
n-fold ILP
Color Coding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
76:1
76:14
10.4230/LIPIcs.ICALP.2019.76
article
An Improved FPTAS for 0-1 Knapsack
Jin, Ce
1
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
The 0-1 knapsack problem is an important NP-hard problem that admits fully polynomial-time approximation schemes (FPTASs). Previously, the fastest FPTAS, due to Chan (2018), with approximation factor 1+epsilon, ran in O~(n + (1/epsilon)^{12/5}) time, where O~ hides polylogarithmic factors. In this paper we present an improved algorithm running in O~(n + (1/epsilon)^{9/4}) time, with only a (1/epsilon)^{1/4} gap from the quadratic conditional lower bound based on (min,+)-convolution. Our improvement comes from a multi-level extension of Chan’s number-theoretic construction, and a greedy lemma that reduces unnecessary computation spent on cheap items.
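For context, the classical profit-scaling FPTAS - the baseline that Chan's construction and this paper's algorithm speed up, not the improved algorithm itself - can be sketched as follows:

```python
def knapsack_fptas(items, capacity, eps):
    """Classic profit-scaling FPTAS for 0-1 knapsack. items is a list
    of (profit, weight) pairs. Scale profits down by K = eps*pmax/n,
    run the exact DP over scaled profit values, and return a value
    that is at least (1 - eps) times the optimum."""
    n = len(items)
    pmax = max(p for p, w in items)
    K = eps * pmax / n  # scaling factor; DP table has O(n^2/eps) cells
    scaled = [(int(p / K), w) for p, w in items]
    P = sum(p for p, _ in scaled)
    INF = float("inf")
    # minw[q] = minimum weight achieving scaled profit exactly q
    minw = [0.0] + [INF] * P
    for p, w in scaled:
        for q in range(P, p - 1, -1):  # iterate downward: 0-1 choice
            if minw[q - p] + w < minw[q]:
                minw[q] = minw[q - p] + w
    best = max(q for q in range(P + 1) if minw[q] <= capacity)
    return best * K
```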
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.76/LIPIcs.ICALP.2019.76.pdf
approximation algorithms
knapsack
subset sum
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
77:1
77:12
10.4230/LIPIcs.ICALP.2019.77
article
Testing the Complexity of a Valued CSP Language
Kolmogorov, Vladimir
1
Institute of Science and Technology Austria, Klosterneuburg, Austria
A Valued Constraint Satisfaction Problem (VCSP) provides a common framework that can express a wide range of discrete optimization problems. A VCSP instance is given by a finite set of variables, a finite domain of labels, and an objective function to be minimized. This function is represented as a sum of terms where each term depends on a subset of the variables. To obtain different classes of optimization problems, one can restrict all terms to come from a fixed set Gamma of cost functions, called a language.
Recent breakthrough results have established a complete complexity classification of such classes with respect to language Gamma: if all cost functions in Gamma satisfy a certain algebraic condition then all Gamma-instances can be solved in polynomial time, otherwise the problem is NP-hard. Unfortunately, testing this condition for a given language Gamma is known to be NP-hard. We thus study exponential algorithms for this meta-problem. We show that the tractability condition of a finite-valued language Gamma can be tested in O(sqrt[3]{3}^{|D|}* poly(size(Gamma))) time, where D is the domain of Gamma and poly(*) is some fixed polynomial. We also obtain a matching lower bound under the Strong Exponential Time Hypothesis (SETH). More precisely, we prove that for any constant delta<1 there is no O(sqrt[3]{3}^{delta|D|}) algorithm, assuming that SETH holds.
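The VCSP instance format described above is easy to state in code; the brute-force minimizer below (names ours) illustrates it, though the paper of course concerns the complexity of testing a language, not solving single instances:

```python
from itertools import product

def vcsp_min(num_vars, domain, terms):
    """Brute-force minimization of a VCSP objective. Each term is a
    pair (scope, cost_function), where scope is a tuple of variable
    indices and cost_function maps a tuple of labels to a cost."""
    best = float("inf")
    for assignment in product(domain, repeat=num_vars):
        val = sum(f(tuple(assignment[i] for i in scope))
                  for scope, f in terms)
        best = min(best, val)
    return best
```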
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.77/LIPIcs.ICALP.2019.77.pdf
Valued Constraint Satisfaction Problems
Exponential time algorithms
Exponential Time Hypothesis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
78:1
78:15
10.4230/LIPIcs.ICALP.2019.78
article
Towards Optimal Depth Reductions for Syntactically Multilinear Circuits
Kumar, Mrinal
1
Oliveira, Rafael
1
Saptharishi, Ramprasad
2
University of Toronto, Canada
Tata Institute of Fundamental Research
We show that any n-variate polynomial computable by a syntactically multilinear circuit of size poly(n) can be computed by a depth-4 syntactically multilinear (Sigma Pi Sigma Pi) circuit of size at most exp(O(sqrt{n log n})). For degree d = omega(n/log n), this improves upon the upper bound of exp(O(sqrt{d} log n)) obtained by Tavenas [Sébastien Tavenas, 2015] for general circuits, and is known to be asymptotically optimal in the exponent when d < n^{epsilon} for a small enough constant epsilon. Our upper bound matches the lower bound of exp(Omega(sqrt{n log n})) proved by Raz and Yehudayoff [Ran Raz and Amir Yehudayoff, 2009], and thus cannot be improved further in the exponent. Our results hold over all fields and also generalize to circuits of small individual degree.
More generally, we show that an n-variate polynomial computable by a syntactically multilinear circuit of size poly(n) can be computed by a syntactically multilinear circuit of product-depth Delta of size at most exp(O(Delta * (n/log n)^{1/Delta} * log n)). It follows from the lower bounds of Raz and Yehudayoff [Ran Raz and Amir Yehudayoff, 2009] that in general, for constant Delta, the exponent in this upper bound is tight and cannot be improved to o((n/log n)^{1/Delta} * log n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.78/LIPIcs.ICALP.2019.78.pdf
arithmetic circuits
multilinear circuits
depth reduction
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
79:1
79:15
10.4230/LIPIcs.ICALP.2019.79
article
Sum-Of-Squares Bounds via Boolean Function Analysis
Kurpisz, Adam
1
ETH Zürich, Department of Mathematics, Rämistrasse 101, 8092 Zürich, Switzerland
We introduce a method for proving bounds on the SoS rank based on Boolean Function Analysis and Approximation Theory. We apply our technique to improve upon existing results, thus making progress towards answering several open questions.
We consider two questions posed by Laurent. First, determining the SoS rank of the linear representation of the set with no integral points. We prove that the SoS rank is between ceil[n/2] and ceil[n/2 + sqrt{n log 2n}]. Second, bounding the SoS rank for an instance of the Min Knapsack problem. We show that the SoS rank is at least Omega(sqrt{n}) and at most ceil[(n + 4 ceil[sqrt{n}])/2]. Finally, we consider the question by Bienstock regarding an instance of the Set Cover problem. For this problem we prove an SoS rank lower bound of Omega(sqrt{n}).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.79/LIPIcs.ICALP.2019.79.pdf
SoS certificate
SoS rank
hypercube optimization
semidefinite programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
80:1
80:15
10.4230/LIPIcs.ICALP.2019.80
article
Dynamic Time Warping in Strongly Subquadratic Time: Algorithms for the Low-Distance Regime and Approximate Evaluation
Kuszmaul, William
1
Massachusetts Institute of Technology, Cambridge, USA
Dynamic time warping distance (DTW) is a widely used distance measure between time series, with applications in areas such as speech recognition and bioinformatics. The best known algorithms for computing DTW run in near quadratic time, and conditional lower bounds prohibit the existence of significantly faster algorithms.
The lower bounds do not prevent a faster algorithm for the important special case in which the DTW is small, however. For an arbitrary metric space Sigma with distances normalized so that the smallest non-zero distance is one, we present an algorithm which computes dtw(x, y) for two strings x and y over Sigma in time O(n * dtw(x, y)). When dtw(x, y) is small, this represents a significant speedup over the standard quadratic-time algorithm.
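The standard quadratic-time algorithm referred to above is a simple dynamic program; the following minimal Python sketch is our illustration of that baseline, not the paper's O(n * dtw(x, y)) low-distance algorithm:

```python
def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Standard O(len(x) * len(y)) dynamic program for DTW distance."""
    n, m = len(x), len(y)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(x[i - 1], y[j - 1])
            # a warping step may advance in x, in y, or in both
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]
```

Note that warping repeats characters at zero cost, e.g. dtw([1, 2, 2, 3], [1, 2, 3]) = 0, which is exactly what distinguishes DTW from edit distance in the reductions discussed below.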
Using our low-distance regime algorithm as a building block, we also present an approximation algorithm which computes dtw(x, y) within a factor of O(n^epsilon) in time O~(n^{2 - epsilon}) for 0 < epsilon < 1. The algorithm allows for the strings x and y to be taken over an arbitrary well-separated tree metric with logarithmic depth and at most exponential aspect ratio. Notably, any polynomial-size metric space can be efficiently embedded into such a tree metric with logarithmic expected distortion. Extending our techniques further, we also obtain the first approximation algorithm for edit distance to work with characters taken from an arbitrary metric space, providing an n^epsilon-approximation in time O~(n^{2 - epsilon}), with high probability.
Finally, we turn our attention to the relationship between edit distance and dynamic time warping distance. We prove a reduction from computing edit distance over an arbitrary metric space to computing DTW over the same metric space, except with an added null character (whose distance to a letter l is defined to be the edit-distance insertion cost of l). Applying our reduction to a conditional lower bound of Bringmann and Künnemann pertaining to edit distance over {0, 1}, we obtain a conditional lower bound for computing DTW over a three letter alphabet (with distances of zero and one). This improves on a previous result of Abboud, Backurs, and Williams, who gave a conditional lower bound for DTW over an alphabet of size five.
With a similar approach, we also prove a reduction from computing edit distance (over generalized Hamming Space) to computing longest-common-subsequence length (LCS) over an alphabet with an added null character. Surprisingly, this means that one can recover conditional lower bounds for LCS directly from those for edit distance, which was not previously thought to be the case.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.80/LIPIcs.ICALP.2019.80.pdf
dynamic time warping
edit distance
approximation algorithm
tree metrics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
81:1
81:15
10.4230/LIPIcs.ICALP.2019.81
article
A Simple Gap-Producing Reduction for the Parameterized Set Cover Problem
Lin, Bingkai
1
2
https://orcid.org/0000-0002-3444-6380
National Institute of Informatics, Tokyo, Japan
Nanjing University, Nanjing, China
Given an n-vertex bipartite graph I=(S,U,E), the goal of the set cover problem is to find a minimum-sized subset of S such that every vertex in U is adjacent to some vertex of this subset. It is NP-hard to approximate set cover to within a (1-o(1))ln n factor [I. Dinur and D. Steurer, 2014]. If we use the size k of the optimum solution as the parameter, then it can be solved in n^{k+o(1)} time [Eisenbrand and Grandoni, 2004]. A natural question is: can we approximate set cover to within an o(ln n) factor in n^{k-epsilon} time?
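For orientation, the naive parameterized algorithm simply tries all k-subsets of S, which already runs in roughly |S|^k time; the cited n^{k+o(1)} algorithm is more refined. A toy Python sketch of the naive check:

```python
from itertools import combinations

def has_k_cover(S_neighbors, U, k):
    """Naive |S|^k-time check for a k-sized set cover.

    S_neighbors maps each vertex of S to its neighborhood in U."""
    names = list(S_neighbors)
    for choice in combinations(names, k):
        covered = set().union(*(S_neighbors[s] for s in choice))
        if U <= covered:  # every vertex of U is adjacent to the choice
            return True
    return False
```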
In a recent breakthrough result [Karthik et al., 2018], Karthik, Laekhanukit and Manurangsi showed that assuming the Strong Exponential Time Hypothesis (SETH), for any computable function f, no f(k)* n^{k-epsilon}-time algorithm can approximate set cover to a factor below (log n)^{1/poly(k,e(epsilon))} for some function e.
This paper presents a simple gap-producing reduction which, given a set cover instance I=(S,U,E) and two integers k < h <= (1-o(1)) sqrt[k]{log |S|/log log |S|}, outputs a new set cover instance I'=(S,U',E') with |U'|=|U|^{h^k}|S|^{O(1)} in |U|^{h^k} * |S|^{O(1)} time such that
- if I has a k-sized solution, then so does I';
- if I has no k-sized solution, then every solution of I' must contain at least h vertices.
Setting h=(1-o(1)) sqrt[k]{log |S|/log log |S|}, we show that assuming SETH, for any computable function f, no f(k)* n^{k-epsilon}-time algorithm can distinguish between a set cover instance with a k-sized solution and one whose minimum solution size is at least (1-o(1)) * sqrt[k]{(log n)/(log log n)}. This improves on the result of [Karthik et al., 2018].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.81/LIPIcs.ICALP.2019.81.pdf
set cover
FPT inapproximability
gap-producing reduction
(n,k)-universal set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
82:1
82:14
10.4230/LIPIcs.ICALP.2019.82
article
Maintaining Perfect Matchings at Low Cost
Matuschke, Jannik
1
Schmidt-Kraepelin, Ulrike
2
Verschae, José
3
Research Center for Operations Management, KU Leuven, Leuven, Belgium
Institute of Software Engineering and Theoretical Computer Science, TU Berlin, Berlin, Germany
Institute of Engineering Sciences, Universidad de O'Higgins, Rancagua, Chile
The min-cost matching problem suffers from being very sensitive to small changes of the input. Even in a simple setting, e.g., when the costs come from the metric on the line, adding two nodes to the input might change the optimal solution completely. On the other hand, one expects that small changes in the input should incur only small changes in the constructed solutions, measured as the number of modified edges. We introduce a two-stage model in which we study the trade-off between quality and robustness of solutions. In the first stage we are given a set of nodes in a metric space and we must compute a perfect matching. In the second stage 2k new nodes appear and we must adapt the solution to a perfect matching for the new instance.
We say that an algorithm is (alpha,beta)-robust if the solutions constructed in both stages are alpha-approximate with respect to min-cost perfect matchings, and if the number of edges deleted from the first stage matching is at most beta k. Hence, alpha measures the quality of the algorithm and beta its robustness. In this setting we aim to balance both measures by deriving algorithms for constant alpha and beta. We show that there exists an algorithm that is (3,1)-robust for any metric if one knows the number 2k of arriving nodes in advance. For the case that k is unknown the situation is significantly more involved. We study this setting under the metric on the line and devise a (10,2)-robust algorithm that constructs a solution with a recursive structure that carefully balances cost and redundancy.
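For intuition on the line-metric setting, recall the classical fact that a minimum-cost perfect matching of an even number of points on the line pairs up consecutive points in sorted order. The following Python sketch (our illustration of that benchmark, not the paper's robust algorithm) computes this first-stage optimum:

```python
def min_cost_matching_on_line(points):
    """Min-cost perfect matching on the line metric: sort and pair
    consecutive points (classical exchange-argument optimum)."""
    assert len(points) % 2 == 0, "a perfect matching needs an even number of points"
    p = sorted(points)
    pairs = [(p[i], p[i + 1]) for i in range(0, len(p), 2)]
    cost = sum(b - a for a, b in pairs)
    return pairs, cost
```

The sensitivity mentioned above is visible here: inserting two new points can shift which points are consecutive, changing almost every pair.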
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.82/LIPIcs.ICALP.2019.82.pdf
matchings
robust optimization
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
83:1
83:14
10.4230/LIPIcs.ICALP.2019.83
article
The Minimum Cost Query Problem on Matroids with Uncertainty Areas
Merino, Arturo I.
1
https://orcid.org/0000-0002-1728-6936
Soto, José A.
1
https://orcid.org/0000-0003-2219-8401
Dept. of Mathematical Engineering and CMM, Universidad de Chile & UMI-CNRS 2807, Santiago, Chile
We study the minimum weight basis problem on a matroid whose elements' weights are uncertain. For each element we only know a set of possible values (an uncertainty area) that contains its real weight. In some cases there exist bases that are uniformly optimal, that is, bases that are minimum weight bases for every possible weight function obeying the uncertainty areas. In other cases, computing such a basis is not possible unless we perform some queries for the exact values of some elements.
Our main result is a polynomial time algorithm for the following problem. Given a matroid with uncertainty areas and a query cost function on its elements, find the set of elements of minimum total cost that we need to simultaneously query such that, no matter the revealed values, the resulting instance admits a uniformly optimal basis. We also provide combinatorial characterizations of all uniformly optimal bases, when one exists, and of all sets of queries that can be performed so that after revealing the corresponding weights the resulting instance admits a uniformly optimal basis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.83/LIPIcs.ICALP.2019.83.pdf
Minimum spanning tree
matroids
uncertainty
queries
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
84:1
84:16
10.4230/LIPIcs.ICALP.2019.84
article
Short Proofs Are Hard to Find
Mertz, Ian
1
Pitassi, Toniann
1
2
Wei, Yuanhao
3
University of Toronto, Canada
Institute for Advanced Study, Princeton, NJ, USA
Carnegie Mellon University, Pittsburgh, PA, USA
We obtain a streamlined proof of an important result by Alekhnovich and Razborov [Michael Alekhnovich and Alexander A. Razborov, 2008], showing that it is hard to automatize both tree-like and general Resolution. Under a different assumption than that of [Michael Alekhnovich and Alexander A. Razborov, 2008], our simplified proof gives improved bounds: we show under ETH that these proof systems are not automatizable in time n^f(n), whenever f(n) = o(log^{1/7 - epsilon} log n) for any epsilon > 0. Previously, non-automatizability was only known for f(n) = O(1). Our proof also extends fairly straightforwardly to prove similar hardness results for PCR and Res(r).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.84/LIPIcs.ICALP.2019.84.pdf
automatizability
Resolution
SAT solvers
proof complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
85:1
85:15
10.4230/LIPIcs.ICALP.2019.85
article
A Tight Approximation for Submodular Maximization with Mixed Packing and Covering Constraints
Mizrachi, Eyal
1
Schwartz, Roy
1
Spoerhase, Joachim
2
https://orcid.org/0000-0002-2601-6452
Uniyal, Sumedha
2
Computer Science Department, Technion, Haifa 32000, Israel
Department of Computer Science, Aalto University, Espoo, Finland
Motivated by applications in machine learning, such as subset selection and data summarization, we consider the problem of maximizing a monotone submodular function subject to mixed packing and covering constraints. We present a tight approximation algorithm that for any constant epsilon > 0 achieves a guarantee of 1-(1/e)-epsilon while violating only the covering constraints by a multiplicative factor of 1-epsilon. Our algorithm is based on a novel enumeration method which, unlike previously known enumeration techniques, can handle both packing and covering constraints. We extend the above main result by additionally handling a matroid independence constraint as well as finding (approximate) Pareto-optimal solutions when multiple submodular objectives are present. Finally, we propose a novel and purely combinatorial dynamic programming approach. While this approach does not give tight bounds, it yields deterministic and in some special cases also considerably faster algorithms. For example, for the well-studied special case of only packing constraints (Kulik et al. [Math. Oper. Res. `13] and Chekuri et al. [FOCS `10]), we are able to present the first deterministic non-trivial approximation algorithm. We believe our new combinatorial approach might be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.85/LIPIcs.ICALP.2019.85.pdf
submodular function
approximation algorithm
covering
packing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
86:1
86:14
10.4230/LIPIcs.ICALP.2019.86
article
Scheduling to Approximate Minimization Objectives on Identical Machines
Moseley, Benjamin
1
2
Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA
Relational AI, Berkeley, CA, USA
This paper considers scheduling on identical machines. The scheduling objective considered in this paper generalizes most scheduling minimization problems. In the problem, there are n jobs and each job j is associated with a monotonically increasing function g_j. The goal is to design a schedule that minimizes sum_{j in [n]} g_{j}(C_j), where C_j is the completion time of job j in the schedule. An O(1)-approximation is known for the single-machine case. On multiple machines, this paper shows that if the scheduler is required to be either non-migratory or non-preemptive then any algorithm has an unbounded approximation ratio. Using preemption and migration, this paper gives an O(log log nP)-approximation on multiple machines, the first non-trivial result for this setting. These results imply the first non-trivial positive results for several special cases of the problem considered, such as throughput minimization and tardiness.
Natural linear programs known for the problem have a poor integrality gap. The results are obtained by strengthening a natural linear program for the problem with a set of covering inequalities we call job cover inequalities. This linear program is rounded to an integral solution by building on quasi-uniform sampling and rounding techniques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.86/LIPIcs.ICALP.2019.86.pdf
Scheduling
LP rounding
Approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
87:1
87:12
10.4230/LIPIcs.ICALP.2019.87
article
Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set
Mustafa, Nabil H.
1
Université Paris-Est, Laboratoire d'Informatique Gaspard-Monge, ESIEE Paris, France
Given a set system (X, R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an epsilon-net for (X, R) of size O(d/epsilon log 1/epsilon). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal, and since then there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems.
In this paper, we consider the following natural algorithm to compute an epsilon-net: start with an initial random sample N. Iteratively, as long as N is not an epsilon-net for R, pick any unhit set S in R (say, given by an Oracle), and add O(1) randomly chosen points from S to N.
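On an explicit finite set system, the iterative algorithm above can be sketched in a few lines of Python. This is a toy illustration of ours: the Oracle is simulated by scanning for any unhit eps-heavy range, and the initial sample size is an arbitrary choice.

```python
import random

def compute_eps_net(points, ranges, eps, sample_per_round=2, seed=0):
    """Iteratively build an eps-net: start from a random sample, and while
    some eps-heavy range is unhit, add a few random points from it."""
    rng = random.Random(seed)
    n = len(points)
    # initial random sample (size heuristic chosen for illustration)
    net = set(rng.sample(points, min(n, max(1, int(1 / eps)))))
    while True:
        # simulated Oracle: any eps-heavy range disjoint from the net
        unhit = next((r for r in ranges
                      if len(r) >= eps * n and not (r & net)), None)
        if unhit is None:
            return net
        # add O(1) randomly chosen points from the unhit range
        net.update(rng.sample(sorted(unhit), min(sample_per_round, len(unhit))))
```

Termination is easy to see: every round adds at least one new point from an unhit range, so at most |X| rounds occur; the abstract's result is that in expectation far fewer Oracle calls suffice.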
We prove that the above algorithm computes, in expectation, epsilon-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/epsilon) calls to the Oracle. In particular, this implies that computing optimal-sized epsilon-nets is as easy as computing an unhit set in the given set system.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.87/LIPIcs.ICALP.2019.87.pdf
epsilon-nets
Geometric Set Systems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
88:1
88:14
10.4230/LIPIcs.ICALP.2019.88
article
Tight Bounds for Online Weighted Tree Augmentation
Naor, Joseph (Seffi)
1
Umboh, Seeun William
2
https://orcid.org/0000-0001-6984-4007
Williamson, David P.
3
https://orcid.org/0000-0002-2884-0058
Technion, Haifa, Israel
The University of Sydney, Australia
Cornell University, Ithaca, NY, USA
The Weighted Tree Augmentation problem (WTAP) is a fundamental problem in network design. In this paper, we consider this problem in the online setting. We are given an n-vertex spanning tree T and an additional set L of edges (called links) with costs. Then, terminal pairs arrive one-by-one and our task is to maintain a low-cost subset of links F such that every terminal pair that has arrived so far is 2-edge-connected in T cup F. This online problem was first studied by Gupta, Krishnaswamy and Ravi (SICOMP 2012) who used it as a subroutine for the online survivable network design problem. They gave a deterministic O(log^2 n)-competitive algorithm and showed an Omega(log n) lower bound on the competitive ratio of randomized algorithms. The case when T is a path is also interesting: it is exactly the online interval set cover problem, which also captures as a special case the parking permit problem studied by Meyerson (FOCS 2005). The contribution of this paper is to give tight results for online weighted tree and path augmentation problems. The main result of this work is a deterministic O(log n)-competitive algorithm for online WTAP, which is tight up to constant factors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.88/LIPIcs.ICALP.2019.88.pdf
Online algorithms
competitive analysis
tree augmentation
network design
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
89:1
89:14
10.4230/LIPIcs.ICALP.2019.89
article
Optimal Short Cycle Decomposition in Almost Linear Time
Parter, Merav
1
Yogev, Eylon
2
Weizmann IS, Rehovot, Israel
Technion, Haifa, Israel
Short cycle decomposition is an edge partitioning of an unweighted graph into edge-disjoint short cycles, plus a small number of extra edges not in any cycle. This notion was introduced by Chu et al. [FOCS'18] as a fundamental tool for graph sparsification and sketching. Clearly, it is most desirable to have a fast algorithm for partitioning the edges into as short as possible cycles, while omitting few edges.
The most naïve procedure for such decomposition runs in time O(m * n) and partitions the edges into O(log n)-length edge-disjoint cycles plus at most 2n edges. Chu et al. improved the running time considerably to m^{1+o(1)}, while increasing both the length of the cycles and the number of omitted edges by a factor of n^{o(1)}. Even more recently, Liu-Sachdeva-Yu [SODA'19] showed that for every constant delta in (0,1] there is an O(m * n^{delta})-time algorithm that provides, w.h.p., cycles of length O(log n)^{1/delta} and O(n) extra edges.
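To make the object concrete, the following Python sketch of ours partitions the edges of a simple graph into edge-disjoint cycles plus leftover edges, by pruning degree-1 vertices (whose edges can never lie on a cycle) and then walking until a cycle closes. It carries no cycle-length guarantee, unlike the procedures discussed here.

```python
from collections import deque

def peel_cycles(edges):
    """Partition edges into edge-disjoint cycles plus extra edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    cycles, extras = [], []

    def remove_edge(u, w):
        adj[u].discard(w)
        adj[w].discard(u)

    def prune():
        # repeatedly strip degree-1 vertices; their edges become extras
        q = deque(u for u in adj if len(adj[u]) == 1)
        while q:
            u = q.popleft()
            if len(adj[u]) != 1:
                continue  # stale queue entry
            w = next(iter(adj[u]))
            remove_edge(u, w)
            extras.append((u, w))
            if len(adj[w]) == 1:
                q.append(w)

    prune()
    while True:
        start = next((u for u in adj if adj[u]), None)
        if start is None:
            return cycles, extras
        # after pruning every non-isolated vertex has degree >= 2,
        # so this walk must revisit a vertex and close a cycle
        path, pos = [start], {start: 0}
        while True:
            u = path[-1]
            w = next(iter(adj[u]))
            remove_edge(u, w)
            if w in pos:
                cycles.append(path[pos[w]:])
                break
            pos[w] = len(path)
            path.append(w)
        # edges walked before entering the cycle also become extras
        extras.extend(zip(path[:pos[w] + 1], path[1:pos[w] + 1]))
        prune()
```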
In this paper, we significantly improve upon these bounds. We first show an m^{1+o(1)}-time deterministic algorithm for computing nearly optimal cycle decomposition, i.e., with cycle length O(log^2 n) and an extra subset of O(n log n) edges not in any cycle. This algorithm is based on a reduction to low-congestion cycle covers, introduced by the authors in [SODA'19].
We also provide a simple deterministic algorithm that computes edge-disjoint cycles of length 2^{1/epsilon} with n^{1+epsilon} * 2^{1/epsilon} extra edges, for every epsilon in (0,1]. Combining this with Liu-Sachdeva-Yu [SODA'19] gives a linear time randomized algorithm for computing cycles of length poly(log n) and O(n) extra edges, for every n-vertex graph with n^{1+1/delta} edges for some constant delta.
These decomposition algorithms lead to improvements in all the algorithmic applications of Chu et al. as well as to new distributed constructions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.89/LIPIcs.ICALP.2019.89.pdf
Cycle decomposition
low-congestion cycle cover
graph sparsification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
90:1
90:14
10.4230/LIPIcs.ICALP.2019.90
article
Satisfiability Thresholds for Regular Occupation Problems
Panagiotou, Konstantinos
1
Pasch, Matija
1
LMU München, Germany
In the last two decades the study of random instances of constraint satisfaction problems (CSPs) has flourished across several disciplines, including computer science, mathematics and physics. The diversity of the developed methods, on the rigorous and non-rigorous side, has led to major advances regarding both the theoretical as well as the applied viewpoints. The two most popular types of such CSPs are the Erdős-Rényi and the random regular CSPs.
Based on a ceteris paribus approach in terms of the density evolution equations known from statistical physics, we focus on a specific prominent class of problems of the latter type, the so-called occupation problems. The regular r-in-k occupation problems form the basis of this class. So far, among these CSPs, the satisfiability threshold - the largest degree for which the problem asymptotically admits a solution - has been rigorously established only for the 1-in-k occupation problem. In the present work we take a general approach towards a systematic analysis of occupation problems. In particular, we discover a surprising and explicit connection between the satisfiability threshold of the 2-in-k occupation problem and the determination of contraction coefficients, an important quantity in information theory measuring the loss of information that occurs when communicating through a noisy channel. We present methods to facilitate the computation of these coefficients and use them to establish explicitly the threshold for the 2-in-k occupation problem for k=4. Based on this result, for general k >= 5 we formulate a conjecture that pins down the exact value of the corresponding coefficient, which, if true, is shown to determine the threshold in all these cases.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.90/LIPIcs.ICALP.2019.90.pdf
Constraint satisfaction problem
replica symmetric
contraction coefficient
first moment
second moment
small subgraph conditioning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
91:1
91:16
10.4230/LIPIcs.ICALP.2019.91
article
Toward a Dichotomy for Approximation of H-Coloring
Rafiey, Akbar
1
https://orcid.org/0000-0003-1619-3997
Rafiey, Arash
2
3
Santos, Thiago
2
Department of Computing Science, Simon Fraser University, Burnaby, Canada
Indiana State University, Terre Haute, IN, USA
Simon Fraser University, Burnaby, Canada
Given two (di)graphs G, H and a cost function c:V(G) x V(H) -> Q_{>= 0} cup {+infty}, in the minimum cost homomorphism problem, MinHOM(H), we are interested in finding a homomorphism f:V(G) -> V(H) (a.k.a. an H-coloring) that minimizes sum_{v in V(G)} c(v,f(v)). The complexity of exact minimization of this problem is well understood [Pavol Hell and Arash Rafiey, 2012], and the class of digraphs H for which MinHOM(H) is polynomial time solvable is a small subset of all digraphs.
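For small symmetric (graph) instances, the minimized quantity can be computed exactly by brute force; the following Python sketch of ours, exponential in |V(G)|, enumerates all maps and keeps the homomorphisms:

```python
from itertools import product

def min_cost_hom(G_edges, G_verts, H_edges, cost):
    """Brute-force MinHOM(H) for undirected graphs: try every map
    V(G) -> V(H), keep the edge-preserving ones, minimize total cost.
    Returns None when no homomorphism exists."""
    H_verts = sorted({x for e in H_edges for x in e})
    # symmetric closure: treat each H-edge in both directions
    Hset = {tuple(e) for e in H_edges} | {tuple(reversed(e)) for e in H_edges}
    best = None
    for img in product(H_verts, repeat=len(G_verts)):
        f = dict(zip(G_verts, img))
        if all((f[u], f[v]) in Hset for u, v in G_edges):
            c = sum(cost(v, f[v]) for v in G_verts)
            best = c if best is None else min(best, c)
    return best
```

For example, mapping a triangle to a triangle forces a proper 3-coloring, so with cost(v, h) = h every homomorphism costs 0 + 1 + 2 = 3.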
In this paper, we consider the approximation of MinHOM within a constant factor. In terms of digraphs, MinHOM(H) is not approximable if H contains a digraph asteroidal triple (DAT). We take a major step toward a dichotomy classification of approximable cases. We give a dichotomy classification for approximating the MinHOM(H) when H is a graph (i.e. symmetric digraph). For digraphs, we provide constant factor approximation algorithms for two important classes of digraphs, namely bi-arc digraphs (digraphs with a conservative semi-lattice polymorphism or min-ordering), and k-arc digraphs (digraphs with an extended min-ordering). Specifically, we show that:
- Dichotomy for Graphs: MinHOM(H) has a 2|V(H)|-approximation algorithm if the graph H admits a conservative majority polymorphism (i.e. H is a bi-arc graph); otherwise, it is inapproximable;
- MinHOM(H) has a |V(H)|^2-approximation algorithm if H is a bi-arc digraph;
- MinHOM(H) has a |V(H)|^2-approximation algorithm if H is a k-arc digraph.
In conclusion, we show the importance of these results and provide insights for achieving a dichotomy classification of approximable cases. Our constant factors depend on the size of H; however, the implementation of our algorithms provides a much better approximation ratio. It remains open to classify the digraphs H for which MinHOM(H) admits a constant factor approximation algorithm that is independent of |V(H)|.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.91/LIPIcs.ICALP.2019.91.pdf
Approximation algorithms
minimum cost homomorphism
randomized rounding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
92:1
92:14
10.4230/LIPIcs.ICALP.2019.92
article
Beating Fredman-Komlós for Perfect k-Hashing
Guruswami, Venkatesan
1
https://orcid.org/0000-0001-7926-3396
Riazanov, Andrii
1
Computer Science Department, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, USA, 15213
We say a subset C subseteq {1,2,...,k}^n is a k-hash code (also called k-separated) if for every subset of k codewords from C, there exists a coordinate where all these codewords have distinct values. Understanding the largest possible rate (in bits), defined as (log_2 |C|)/n, of a k-hash code is a classical problem. It arises in two equivalent contexts: (i) the smallest size possible for a perfect hash family that maps a universe of N elements into {1,2,...,k}, and (ii) the zero-error capacity for decoding with lists of size less than k for a certain combinatorial channel.
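The k-separation property is easy to state as a direct check; the following Python sketch of ours (exponential in |C|, intended only for toy instances) verifies it straight from the definition:

```python
from itertools import combinations

def is_k_hash_code(C, k):
    """Check k-separation: every k codewords of C must have some
    coordinate on which they take k pairwise distinct values."""
    if len(C) < k:
        return True  # vacuously separated
    n = len(C[0])
    for group in combinations(C, k):
        if not any(len({w[i] for w in group}) == k for i in range(n)):
            return False
    return True
```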
A general upper bound of k!/k^{k-1} on the rate of a k-hash code (in the limit of large n) was obtained by Fredman and Komlós in 1984 for any k >= 4. While better bounds have been obtained for k=4, their original bound has remained the best known for each k >= 5. In this work, we present a method to obtain the first improvement to the Fredman-Komlós bound for every k >= 5, and we apply this method to give explicit numerical bounds for k=5, 6.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.92/LIPIcs.ICALP.2019.92.pdf
Coding theory
perfect hashing
hash family
graph entropy
zero-error information theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
93:1
93:15
10.4230/LIPIcs.ICALP.2019.93
article
Random Walks on Dynamic Graphs: Mixing Times, Hitting Times, and Return Probabilities
Sauerwald, Thomas
1
Zanetti, Luca
1
Department of Computer Science and Technology, University of Cambridge, United Kingdom
We establish and generalise several bounds for various random walk quantities including the mixing time and the maximum hitting time. Unlike previous analyses, our derivations are based on rather intuitive notions of local expansion properties which allow us to capture the progress the random walk makes through t-step probabilities.
We apply our framework to dynamically changing graphs, where the set of vertices is fixed while the set of edges changes in each round. For random walks on dynamic connected graphs for which the stationary distribution does not change over time, we show that their behaviour is in a certain sense similar to static graphs. For example, we show that the mixing and hitting times of any sequence of d-regular connected graphs are O(n^2), generalising a well-known result for static graphs. We also provide refined bounds depending on the isoperimetric dimension of the graph, again matching known results for static graphs. Finally, we investigate properties of random walks on dynamic graphs that are not always connected: we relate their convergence to stationarity to the spectral properties of an average of transition matrices and provide some examples that demonstrate strong discrepancies between static and dynamic graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.93/LIPIcs.ICALP.2019.93.pdf
random walks
dynamic graphs
hitting times
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
94:1
94:16
10.4230/LIPIcs.ICALP.2019.94
article
Querying a Matrix Through Matrix-Vector Products
Sun, Xiaoming
1
2
Woodruff, David P.
3
Yang, Guang
4
5
Zhang, Jialin
1
2
CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
Carnegie Mellon University, Pittsburgh, PA, US
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Conflux, Beijing, China
We consider algorithms with access to an unknown matrix M in F^{n x d} via matrix-vector products, namely, the algorithm chooses vectors v^1, ..., v^q, and observes Mv^1, ..., Mv^q. Here the v^i can be randomized as well as chosen adaptively as a function of Mv^1, ..., Mv^{i-1}. Motivated by applications of sketching in distributed computation, linear algebra, and streaming models, as well as connections to areas such as communication complexity and property testing, we initiate the study of the number q of queries needed to solve various fundamental problems. We study problems in three broad categories: linear algebra, statistics problems, and graph problems. For example, we consider the number of queries required to approximate the rank, trace, maximum eigenvalue, and norms of a matrix M; to compute the AND/OR/Parity of each column or row of M; to decide whether there are identical columns or rows in M, or whether M is symmetric, diagonal, or unitary; or to decide whether a graph defined by M is connected or triangle-free. We also show separations for algorithms that are allowed to obtain matrix-vector products only by querying vectors on the right, versus algorithms that can query vectors on both the left and the right. We also show separations depending on the underlying field in which the matrix-vector product occurs. For graph problems, we show separations depending on the form of the matrix used to represent the graph (bipartite adjacency versus signed edge-vertex incidence matrix).
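As a concrete instance of this query model, Hutchinson's classical estimator approximates the trace of M using only matrix-vector products with random sign vectors. This is our illustration of the model, not a result of the paper:

```python
import random

def hutchinson_trace(matvec, n, q=50, seed=0):
    """Estimate tr(M) from q matrix-vector queries: E[v^T M v] = tr(M)
    when the entries of v are independent uniform +/-1 signs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(q):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        mv = matvec(v)                      # the only access to M
        total += sum(vi * mvi for vi, mvi in zip(v, mv))  # v^T M v
    return total / q
```

For a diagonal matrix the estimate is exact for every sample, since v^T M v = sum_i M_{ii} v_i^2 = tr(M) whenever each v_i is +/-1.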
Surprisingly, this fundamental model does not appear to have been studied on its own, and we believe a thorough investigation of problems in this model would be beneficial to a number of different application areas.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.94/LIPIcs.ICALP.2019.94.pdf
Communication complexity
linear algebra
sketching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
95:1
95:13
10.4230/LIPIcs.ICALP.2019.95
article
Dynamic Ordered Sets with Approximate Queries, Approximate Heaps and Soft Heaps
Thorup, Mikkel
1
Zamir, Or
2
Zwick, Uri
2
Department of Computer Science, University of Copenhagen, Denmark
Blavatnik School of Computer Science, Tel Aviv University, Israel
We consider word RAM data structures for maintaining ordered sets of integers whose select and rank operations are allowed to return approximate results, i.e., a rank, or an item whose rank, differs by less than Delta from the exact answer, where Delta=Delta(n) is an error parameter. Related to approximate select and rank is approximate (one-dimensional) nearest-neighbor. A special case of approximate select queries are approximate min queries. Data structures that support approximate min operations are known as approximate heaps (priority queues). Related to approximate heaps are soft heaps, which are approximate heaps with a different notion of approximation.
We prove the optimality of all the data structures presented, either through matching cell-probe lower bounds, or through equivalences to well studied static problems. For approximate select, rank, and nearest-neighbor operations we get matching cell-probe lower bounds. We prove an equivalence between approximate min operations, i.e., approximate heaps, and the static partitioning problem. Finally, we prove an equivalence between soft heaps and the classical sorting problem, on a smaller number of items.
Our results have many interesting and unexpected consequences. It turns out that approximation greatly speeds up some of these operations, while others are almost unaffected. In particular, while select and rank have identical operation times, both in comparison-based and word RAM implementations, an interesting separation emerges between the approximate versions of these operations in the word RAM model. Approximate select is much faster than approximate rank. It also turns out that approximate min is exponentially faster than the more general approximate select. Next, we show that implementing soft heaps is harder than implementing approximate heaps. The relation between them corresponds to the relation between sorting and partitioning.
Finally, as an interesting byproduct, we observe that a combination of known techniques yields a deterministic word RAM algorithm for (exactly) sorting n items in O(n log log_w n) time, where w is the word length. Even for the easier problem of finding duplicates, the best previous deterministic bound was O(min{n log log n,n log_w n}). Our new unifying bound is an improvement when w is sufficiently large compared with n.
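As a toy illustration of Delta-approximate rank and select, one can store every Delta-th element of a sorted list. This is a static, sampling-based sketch of the approximation guarantee only, not the paper's dynamic word-RAM structures:

```python
import bisect

class ApproxRank:
    # Static sketch of Delta-approximate rank/select over a sorted list:
    # keep every Delta-th element; answers differ from the truth by < Delta.
    def __init__(self, sorted_items, delta):
        self.delta = delta
        self.samples = sorted_items[::delta]  # items of rank 0, delta, 2*delta, ...

    def rank(self, x):
        # number of sampled items < x, scaled back up by delta
        return self.delta * bisect.bisect_left(self.samples, x)

    def select(self, r):
        # an item whose true rank differs from r by less than delta
        return self.samples[min(r // self.delta, len(self.samples) - 1)]

items = list(range(0, 1000, 2))   # 500 sorted even numbers
ar = ApproxRank(items, delta=8)
```

Space drops by a factor of Delta while every answer stays within Delta of the truth; the interesting question settled in the paper is how much *time* approximation saves in the dynamic word RAM setting.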
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.95/LIPIcs.ICALP.2019.95.pdf
Order queries
word RAM
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
96:1
96:13
10.4230/LIPIcs.ICALP.2019.96
article
Amplification with One NP Oracle Query
Watson, Thomas
1
University of Memphis, Memphis, TN, USA
We provide a complete picture of the extent to which amplification of success probability is possible for randomized algorithms having access to one NP oracle query, in the settings of two-sided, one-sided, and zero-sided error. We generalize this picture to amplifying one-query algorithms with q-query algorithms, and we show our inclusions are tight for relativizing techniques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.96/LIPIcs.ICALP.2019.96.pdf
Amplification
NP
oracle
query
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
97:1
97:14
10.4230/LIPIcs.ICALP.2019.97
article
Separating k-Player from t-Player One-Way Communication, with Applications to Data Streams
Woodruff, David P.
1
Yang, Guang
2
3
Carnegie Mellon University, Pittsburgh, PA, USA
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Conflux, Beijing, China
In a k-party communication problem, the k players with inputs x_1, x_2, ..., x_k, respectively, want to evaluate a function f(x_1, x_2, ..., x_k) using as little communication as possible. We consider the message-passing model, in which the inputs are partitioned in an arbitrary, possibly worst-case manner, among a smaller number t of players (t<k). The t-player communication cost of computing f can only be smaller than the k-player communication cost, since the t players can trivially simulate the k-player protocol. But how much smaller can it be? We study deterministic and randomized protocols in the one-way model, and provide separations for product input distributions, which are optimal for low error probability protocols. We also provide much stronger separations when the input distribution is non-product.
A key application of our results is in proving lower bounds for data stream algorithms. In particular, we give an optimal Omega(epsilon^{-2} log(n) log log(mM)) bits of space lower bound for the fundamental problem of (1 +/- epsilon)-approximating the number |x|_0 of non-zero entries of an n-dimensional vector x after m updates each of magnitude M, with success probability >= 2/3, in a strict turnstile stream. Our result matches the best known upper bound when epsilon >= 1/polylog(mM). It also improves on the prior Omega(epsilon^{-2} log(mM)) lower bound and separates the complexity of approximating L_0 from that of approximating the p-norm L_p for p bounded away from 0, since the latter has an O(epsilon^{-2} log(mM))-bit upper bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.97/LIPIcs.ICALP.2019.97.pdf
Communication complexity
multi-player communication
one-way communication
streaming complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
98:1
98:13
10.4230/LIPIcs.ICALP.2019.98
article
Construction of Optimal Locally Recoverable Codes and Connection with Hypergraph
Xing, Chaoping
1
https://orcid.org/0000-0002-1257-1033
Yuan, Chen
2
https://orcid.org/0000-0002-3730-8397
School of Electronics, Information and Electrical Engineering, Shanghai Jiaotong University, Shanghai, 200240, P. R. China
Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
Locally recoverable codes are a class of block codes with an additional property called locality. A locally recoverable code with locality r can recover a symbol by reading at most r other symbols. Recently, it was discovered by several authors that a q-ary optimal locally recoverable code, i.e., a locally recoverable code achieving the Singleton-type bound, can have length much bigger than q+1. In this paper, we present both upper and lower bounds on the length of optimal locally recoverable codes. Our lower bound improves the best known result in [Yuan Luo et al., 2018] for all distances d >= 7. This result is built on an observation about parity-check matrices equipped with the Vandermonde structure. It turns out that a parity-check matrix with the Vandermonde structure produces an optimal locally recoverable code if it satisfies a certain expansion property for subsets of F_q. To our surprise, this expansion property is then shown to be equivalent to a well-studied problem in extremal graph theory. Our upper bound is derived by a refined analysis of the arguments of Theorem 3.3 in [Venkatesan Guruswami et al., 2018].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.98/LIPIcs.ICALP.2019.98.pdf
Locally Repairable Codes
Hypergraph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
99:1
99:15
10.4230/LIPIcs.ICALP.2019.99
article
Improvements in Quantum SDP-Solving with Applications
van Apeldoorn, Joran
1
Gilyén, András
1
QuSoft, CWI, The Netherlands
Following the first paper on quantum algorithms for SDP-solving by Brandão and Svore [Brandão and Svore, 2017] in 2016, rapid developments have been made on quantum optimization algorithms. In this paper we improve and generalize all prior quantum algorithms for SDP-solving and give a simpler and unified framework.
We take a new perspective on quantum SDP-solvers and introduce several new techniques. One of these is the quantum operator input model, which generalizes the different input models used in previous work, and essentially any other reasonable input model. This new model assumes that the input matrices are embedded in a block of a unitary operator. In this model we give an O~((sqrt{m}+sqrt{n}gamma)alpha gamma^4) algorithm, where n is the size of the matrices, m is the number of constraints, gamma is the reciprocal of the scale-invariant relative precision parameter, and alpha is a normalization factor of the input matrices. In particular, for the standard sparse-matrix access, the above result gives a quantum algorithm with alpha=s. We also improve on recent results of Brandão et al. [Fernando G. S. L. Brandão et al., 2018], who consider the special case when the input matrices are proportional to mixed quantum states that one can query. For this model Brandão et al. [Fernando G. S. L. Brandão et al., 2018] showed that the dependence on n can be replaced by a polynomial dependence on both the rank and the trace of the input matrices. We remove the dependence on the rank and hence require only a dependence on the trace of the input matrices.
After obtaining these results, we apply them to a few different problems. The most notable of these is the problem of shadow tomography, recently introduced by Aaronson [Aaronson, 2018]. Here we simultaneously improve both the sample and the computational complexity of the previous best results. Finally, we prove a new Omega~(sqrt{m}alpha gamma) lower bound for solving LPs and SDPs in the quantum operator model, which also implies a lower bound for the model of Brandão et al. [Fernando G. S. L. Brandão et al., 2018].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.99/LIPIcs.ICALP.2019.99.pdf
quantum algorithms
semidefinite programming
shadow tomography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
100:1
100:16
10.4230/LIPIcs.ICALP.2019.100
article
Minimizing GFG Transition-Based Automata (Track B: Automata, Logic, Semantics, and Theory of Programming)
Abu Radi, Bader
1
Kupferman, Orna
1
School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel
While many applications of automata in formal methods can use nondeterministic automata, some applications, most notably synthesis, need deterministic or good-for-games automata. The latter are nondeterministic automata that can resolve their nondeterministic choices in a way that only depends on the past. The minimization problems for nondeterministic and deterministic Büchi and co-Büchi word automata are PSPACE-complete and NP-complete, respectively. We describe a polynomial minimization algorithm for good-for-games co-Büchi word automata with transition-based acceptance; that is, a run is accepting if it traverses a set of designated transitions only finitely often. Our algorithm is based on a sequence of transformations we apply to the automaton, on top of which a minimal quotient automaton is defined.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.100/LIPIcs.ICALP.2019.100.pdf
Minimization
Deterministic co-Büchi Automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
101:1
101:13
10.4230/LIPIcs.ICALP.2019.101
article
A Type System for Interactive JSON Schema Inference (Extended Abstract)
Baazizi, Mohamed-Amine
1
Colazzo, Dario
2
Ghelli, Giorgio
3
Sartiani, Carlo
4
Sorbonne Université, CNRS, LIP6 UMR 7606, Paris, France
Université Paris-Dauphine, PSL, LAMSADE, France
Dipartimento di Informatica, Università di Pisa, Italy
DIMIE, Università della Basilicata - Potenza, Italy
In this paper we present the first JSON type system that makes it possible to infer a schema by adopting different levels of precision/succinctness for different parts of the dataset, under user control. This feature lets the data analyst obtain detailed schemas for the parts of the data of greater interest, while a more succinct schema is provided for the other parts. The decision can be changed as many times as needed, in order to explore the schema gradually, moving the focus to different parts of the collection, without reprocessing the data and by performing only type rewriting operations on the most precise schema.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.101/LIPIcs.ICALP.2019.101.pdf
JSON
type systems
interactive inference
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
102:1
102:15
10.4230/LIPIcs.ICALP.2019.102
article
On the Complexity of Value Iteration (Track B: Automata, Logic, Semantics, and Theory of Programming)
Balaji, Nikhil
1
Kiefer, Stefan
1
Novotný, Petr
2
https://orcid.org/0000-0002-5026-4392
Pérez, Guillermo A.
3
https://orcid.org/0000-0002-1200-4952
Shirmohammadi, Mahsa
4
5
University of Oxford, UK
Masaryk University, Brno, Czech Republic
University of Antwerp, Belgium
CNRS, Paris, France
IRIF, Paris, France
Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation naturally associated with the MDP. At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXPTIME-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. To obtain this main result, we develop several stepping stones that yield results of independent interest. For instance, we show that it is EXPTIME-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators. We also provide new complexity results for the bounded halting problem in linear-update counter machines.
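The recurrence behind value iteration can be sketched as follows on a hypothetical two-state MDP (the state names, rewards, and transition probabilities are our toy choices, not from the paper). Note that this naive loop performs n iterations, i.e., time exponential in the binary encoding of n, which is precisely the regime the paper's EXPTIME-completeness result addresses:

```python
def value_iteration(states, actions, P, reward, n):
    # V_0 = 0;  V_{t+1}(s) = max_a [ r(s,a) + sum_{s'} P(s,a,s') * V_t(s') ]
    # Returns V_n and, for each of the n steps, an optimal action per state.
    V = {s: 0.0 for s in states}
    policy = []
    for _ in range(n):
        newV, step = {}, {}
        for s in states:
            vals = {a: reward[s][a] + sum(p * V[t] for t, p in P[s][a].items())
                    for a in actions[s]}
            step[s] = max(vals, key=vals.get)
            newV[s] = vals[step[s]]
        V = newV
        policy.insert(0, step)  # policy[0] is used when n steps remain
    return V, policy

# A hypothetical 2-state MDP: in s0, "stay" earns 1 forever, while "go"
# forgoes one step's reward to reach s1, which earns 2 forever.
states = ["s0", "s1"]
actions = {"s0": ["stay", "go"], "s1": ["stay"]}
reward = {"s0": {"stay": 1.0, "go": 0.0}, "s1": {"stay": 2.0}}
P = {"s0": {"stay": {"s0": 1.0}, "go": {"s1": 1.0}},
     "s1": {"stay": {"s1": 1.0}}}

V, policy = value_iteration(states, actions, P, reward, n=5)
```

On this toy instance the horizon-5 optimum from s0 is to move immediately (payoff 0 + 4*2 = 8 beats 5*1 = 5), and the returned policy reflects that.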
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.102/LIPIcs.ICALP.2019.102.pdf
Markov decision processes
Value iteration
Formal verification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
103:1
103:14
10.4230/LIPIcs.ICALP.2019.103
article
Monadic Decomposability of Regular Relations (Track B: Automata, Logic, Semantics, and Theory of Programming)
Barceló, Pablo
1
2
https://orcid.org/0000-0003-2293-2653
Hong, Chih-Duo
3
Le, Xuan-Bach
3
Lin, Anthony W.
4
https://orcid.org/0000-0003-4715-5096
Niskanen, Reino
3
https://orcid.org/0000-0002-2210-1481
Department of Computer Science, University of Chile, Santiago, Chile
IMFD, Santiago, Chile
Department of Computer Science, University of Oxford, UK
Technische Universität Kaiserslautern, Germany
Monadic decomposability - the ability to determine whether a formula in a given logical theory can be decomposed into a boolean combination of monadic formulas - is a powerful tool for devising a decision procedure for a given logical theory. In this paper, we revisit a classical decision problem in automata theory: given a regular (a.k.a. synchronized rational) relation, determine whether it is recognizable, i.e., whether it has a monadic decomposition (that is, a representation as a boolean combination of cartesian products of regular languages). Regular relations are expressive formalisms which, using an appropriate string encoding, can capture relations definable in Presburger Arithmetic. In fact, their expressive power coincides with that of relations definable in a universal automatic structure; equivalently, those definable by finite set interpretations in WS1S (Weak Second-Order Theory of One Successor). Determining whether a regular relation is recognizable was known to be decidable (and in exponential time for binary relations), but its precise complexity had hitherto remained open. Our main contribution is to fully settle the complexity of this decision problem by developing new techniques employing infinite Ramsey theory. The complexity for DFA (resp. NFA) representations of regular relations is shown to be NLOGSPACE-complete (resp. PSPACE-complete).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.103/LIPIcs.ICALP.2019.103.pdf
Transducers
Automata
Synchronized Rational Relations
Ramsey Theory
Variable Independence
Automatic Structures
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
104:1
104:15
10.4230/LIPIcs.ICALP.2019.104
article
Boundedness of Conjunctive Regular Path Queries (Track B: Automata, Logic, Semantics, and Theory of Programming)
Barceló, Pablo
1
2
https://orcid.org/0000-0003-2293-2653
Figueira, Diego
3
Romero, Miguel
4
Department of Computer Science, University of Chile, Santiago, Chile
IMFD, Santiago, Chile
CNRS & LaBRI, Talence, France
Department of Computer Science, University of Oxford, Oxford, UK
We study the boundedness problem for unions of conjunctive regular path queries with inverses (UC2RPQs). This is the problem of, given a UC2RPQ, checking whether it is equivalent to a union of conjunctive queries (UCQ). We show the problem to be ExpSpace-complete, thus coinciding with the complexity of containment for UC2RPQs. As a corollary, when a UC2RPQ is bounded, it is equivalent to a UCQ of at most triple-exponential size, and in fact we show that this bound is optimal. We also study better behaved classes of UC2RPQs, namely acyclic UC2RPQs of bounded thickness, and strongly connected UCRPQs, whose boundedness problem is, respectively, PSpace-complete and Pi_2^P-complete. Most upper bounds exploit results on limitedness for distance automata, in particular extending the model with alternation and two-wayness, which may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.104/LIPIcs.ICALP.2019.104.pdf
regular path queries
boundedness
limitedness
distance automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
105:1
105:14
10.4230/LIPIcs.ICALP.2019.105
article
Polynomially Ambiguous Probabilistic Automata on Restricted Languages (Track B: Automata, Logic, Semantics, and Theory of Programming)
Bell, Paul C.
1
https://orcid.org/0000-0003-2620-635X
Department of Computer Science, Byrom Street, Liverpool John Moores University, Liverpool, L3-3AF, UK
We consider the computability and complexity of decision questions for Probabilistic Finite Automata (PFA) with sub-exponential ambiguity. We show that the emptiness problem for non-strict cut-points of polynomially ambiguous PFA remains undecidable even when the input word is over a bounded language and all PFA transition matrices are commutative. In doing so, we introduce a new technique based upon the Turakainen construction of a PFA from a weighted finite automaton, which can be used to generate PFA of lower dimensions and of sub-exponential ambiguity. We also study freeness/injectivity problems for polynomially ambiguous PFA and study the border of decidability and tractability for various cases.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.105/LIPIcs.ICALP.2019.105.pdf
Probabilistic finite automata
ambiguity
undecidability
bounded language
emptiness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
106:1
106:14
10.4230/LIPIcs.ICALP.2019.106
article
String-to-String Interpretations With Polynomial-Size Output (Track B: Automata, Logic, Semantics, and Theory of Programming)
Bojańczyk, Mikołaj
1
Kiefer, Sandra
2
Lhote, Nathan
1
Institute of Informatics, University of Warsaw, Poland
Department of Computer Science, RWTH Aachen University, Germany
String-to-string MSO interpretations are like Courcelle’s MSO transductions, except that a single output position can be represented using a tuple of input positions instead of just a single input position. In particular, the output length is polynomial in the input length, as opposed to MSO transductions, which have output of linear length. We show that string-to-string MSO interpretations are exactly the polyregular functions. The latter class has various characterisations, one of which is that it consists of the string-to-string functions recognised by pebble transducers.
Our main result implies the surprising fact that string-to-string MSO interpretations are closed under composition.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.106/LIPIcs.ICALP.2019.106.pdf
MSO
interpretations
pebble transducers
polyregular functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
107:1
107:13
10.4230/LIPIcs.ICALP.2019.107
article
A Kleene Theorem for Nominal Automata (Track B: Automata, Logic, Semantics, and Theory of Programming)
Brunet, Paul
1
https://orcid.org/0000-0002-9762-6872
Silva, Alexandra
1
https://orcid.org/0000-0001-5014-9784
University College London, UK
Nominal automata are a widely studied class of automata designed to recognise languages over infinite alphabets. In this paper, we present a Kleene theorem for nominal automata by providing a syntax to denote regular nominal languages. We use regular expressions with explicit binders for creation and destruction of names and pinpoint an exact property of these expressions - namely memory-finiteness - identifying a subclass of expressions denoting exactly regular nominal languages.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.107/LIPIcs.ICALP.2019.107.pdf
Kleene Theorem
Nominal automata
Bracket Algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
108:1
108:15
10.4230/LIPIcs.ICALP.2019.108
article
Completeness of Graphical Languages for Mixed States Quantum Mechanics (Track B: Automata, Logic, Semantics, and Theory of Programming)
Carette, Titouan
1
Jeandel, Emmanuel
1
https://orcid.org/0000-0001-7236-2906
Perdrix, Simon
1
https://orcid.org/0000-0002-1808-2409
Vilmart, Renaud
1
Université de Lorraine, CNRS, Inria, LORIA, F 54000 Nancy, France
There exist several graphical languages for quantum information processing, like quantum circuits, ZX-Calculus, ZW-Calculus, etc. Each of these languages forms a dagger-symmetric monoidal category (dagger-SMC) and comes with an interpretation functor to the dagger-SMC of (finite-dimensional) Hilbert spaces. In recent years, one of the main achievements of the categorical approach to quantum mechanics has been to provide several equational theories for most of these graphical languages, making them complete for various fragments of pure quantum mechanics.
We address the question of extending these languages beyond pure quantum mechanics, in order to reason about mixed states and general quantum operations, i.e., completely positive maps. Intuitively, such an extension relies on the axiomatisation of a discard map, which allows one to get rid of a quantum system, an operation that is not allowed in pure quantum mechanics.
We introduce a new construction, the discard construction, which transforms any dagger-symmetric monoidal category into a symmetric monoidal category equipped with a discard map. Roughly speaking, this construction consists in making any isometry causal.
Using this construction we provide an extension for several graphical languages that we prove to be complete for general quantum operations. However, this construction fails for some fringe cases like Clifford+T quantum mechanics, as the category does not have enough isometries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.108/LIPIcs.ICALP.2019.108.pdf
Quantum Computing
Quantum Categorical Mechanics
Category Theory
Mixed States
Completely Positive Maps
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
109:1
109:16
10.4230/LIPIcs.ICALP.2019.109
article
Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number (Track B: Automata, Logic, Semantics, and Theory of Programming)
Casel, Katrin
1
Day, Joel D.
2
https://orcid.org/0000-0003-0738-9816
Fleischmann, Pamela
3
https://orcid.org/0000-0002-1531-7970
Kociumaka, Tomasz
4
5
https://orcid.org/0000-0002-2477-1702
Manea, Florin
3
https://orcid.org/0000-0001-6094-3324
Schmid, Markus L.
6
https://orcid.org/0000-0001-5137-1504
Hasso Plattner Institute, University of Potsdam, Germany
Department of Computer Science, Loughborough University, UK
Department of Computer Science, Kiel University, Germany
Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel
Institute of Informatics, University of Warsaw, Poland
Trier University, Germany
We investigate the locality number, a recently introduced structural parameter for strings (with applications in pattern matching with variables), and its connection to two important graph-parameters, cutwidth and pathwidth. These connections allow us to show that computing the locality number is NP-hard but fixed-parameter tractable (when the locality number or the alphabet size is treated as a parameter), and can be approximated with ratio O(sqrt{log{opt}} log n). As a by-product, we also relate cutwidth via the locality number to pathwidth, which is of independent interest, since it improves the best currently known approximation algorithm for cutwidth. In addition to these main results, we also consider the possibility of greedy-based approximation algorithms for the locality number.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.109/LIPIcs.ICALP.2019.109.pdf
Graph and String Parameters
NP-Completeness
Approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
110:1
110:15
10.4230/LIPIcs.ICALP.2019.110
article
Solutions Sets to Systems of Equations in Hyperbolic Groups Are EDT0L in PSPACE (Track B: Automata, Logic, Semantics, and Theory of Programming)
Ciobanu, Laura
1
https://orcid.org/0000-0002-9451-1471
Elder, Murray
2
https://orcid.org/0000-0002-2438-3945
Heriot-Watt University, Edinburgh EH14 4AS, Scotland
University of Technology Sydney, Ultimo NSW 2007, Australia
We show that the full set of solutions to systems of equations and inequations in a hyperbolic group, with or without torsion, as shortlex geodesic words, is an EDT0L language whose specification can be computed in NSPACE(n^2 log n) for the torsion-free case and NSPACE(n^4 log n) for the torsion case. Our work combines deep geometric results by Rips, Sela, Dahmani and Guirardel on decidability of existential theories of hyperbolic groups, work of computer scientists including Plandowski, Jeż, Diekert and others on PSPACE algorithms to solve equations in free monoids and groups using compression, and an intricate language-theoretic analysis.
The present work gives an essentially optimal formal-language description of all solutions in all hyperbolic groups, and an explicit and surprisingly low space complexity to compute them.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.110/LIPIcs.ICALP.2019.110.pdf
Hyperbolic group
Existential theory
EDT0L language
PSPACE
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
111:1
111:14
10.4230/LIPIcs.ICALP.2019.111
article
Differential Logical Relations, Part I: The Simply-Typed Case (Track B: Automata, Logic, Semantics, and Theory of Programming)
Dal Lago, Ugo
1
2
Gavazzo, Francesco
3
Yoshimizu, Akira
2
University of Bologna, Italy
INRIA Sophia Antipolis, France
IMDEA Software Institute, Spain
We introduce a new form of logical relation which, in the spirit of metric relations, allows us to assign each pair of programs a quantity measuring their distance, rather than a boolean value standing for their being equivalent. The novelty of differential logical relations consists in measuring the distance between terms not (necessarily) by a numerical value, but by a mathematical object which somehow reflects the interactive complexity, i.e. the type, of the compared terms. We exemplify this concept in the simply-typed lambda-calculus, and show a form of soundness theorem. We also see how ordinary logical relations and metric relations can be seen as instances of differential logical relations. Finally, we show that differential logical relations can be organised in a cartesian closed category, unlike metric relations, which are well known not to have such a structure, but only that of a monoidal closed category.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.111/LIPIcs.ICALP.2019.111.pdf
Logical Relations
lambda-Calculus
Program Equivalence
Semantics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
112:1
112:14
10.4230/LIPIcs.ICALP.2019.112
article
Approximations of Isomorphism and Logics with Linear-Algebraic Operators (Track B: Automata, Logic, Semantics, and Theory of Programming)
Dawar, Anuj
1
Grädel, Erich
2
Pakusa, Wied
2
University of Cambridge, UK
RWTH Aachen University, Germany
Invertible map equivalences are approximations of graph isomorphism that refine the well-known Weisfeiler-Leman method. They are parameterized by a number k and a set Q of primes. The intuition is that two equivalent graphs G equiv^IM_{k, Q} H cannot be distinguished by means of partitioning the set of k-tuples in both graphs with respect to any linear-algebraic operator acting on vector spaces over fields of characteristic p, for any p in Q. These equivalences first appeared in the study of rank logic, but in fact they can be used to delimit the expressive power of any extension of fixed-point logic with linear-algebraic operators. We define LA^{k}(Q), an infinitary logic with k variables and all linear-algebraic operators over finite vector spaces of characteristic p in Q, and show that equiv^IM_{k, Q} is the natural notion of elementary equivalence for this logic. The logic LA^{omega}(Q) = Union_{k in omega} LA^{k}(Q) is then a natural upper bound on the expressive power of any extension of fixed-point logics by means of Q-linear-algebraic operators.
By means of a new and much deeper algebraic analysis of a generalized variant, for any prime p, of the CFI-structures due to Cai, Fürer, and Immerman, we prove that, as long as Q is not the set of all primes, there is no k such that equiv^IM_{k, Q} is the same as isomorphism. It follows that there are polynomial-time properties of graphs which are not definable in LA^{omega}(Q), which implies that no extension of fixed-point logic with linear-algebraic operators can capture PTIME, unless it includes such operators for all prime characteristics. Our analysis requires substantial algebraic machinery, including a homogeneity property of CFI-structures and Maschke’s Theorem, an important result from the representation theory of finite groups.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.112/LIPIcs.ICALP.2019.112.pdf
Finite Model Theory
Graph Isomorphism
Descriptive Complexity
Algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
113:1
113:15
10.4230/LIPIcs.ICALP.2019.113
article
Counting Answers to Existential Questions (Track B: Automata, Logic, Semantics, and Theory of Programming)
Dell, Holger
1
https://orcid.org/0000-0001-8955-0786
Roth, Marc
1
https://orcid.org/0000-0003-3159-9418
Wellnitz, Philip
2
https://orcid.org/0000-0002-6482-8478
Cluster of Excellence (MMCI), Saarland Informatics Campus (SIC), Saarbrücken, Germany
Max Planck Institute for Informatics, Saarland Informatics Campus (SIC), Saarbrücken, Germany
Conjunctive queries select and are expected to return certain tuples from a relational database. We study the potentially easier problem of counting all selected tuples, rather than enumerating them. In particular, we are interested in the problem’s parameterized and data complexity, where the query is considered to be small or even fixed, and the database is considered to be large. We identify two structural parameters for conjunctive queries that capture their inherent complexity: The dominating star size and the linked matching number. If the dominating star size of a conjunctive query is large, then we show that counting solution tuples to the query is at least as hard as counting dominating sets, which yields a fine-grained complexity lower bound under the Strong Exponential Time Hypothesis (SETH) as well as a #W[2]-hardness result in parameterized complexity. Moreover, if the linked matching number of a conjunctive query is large, then we show that the structure of the query is so rich that arbitrary queries up to a certain size can be encoded into it; in the language of parameterized complexity, this essentially establishes a #A[2]-completeness result.
Using ideas stemming from Lovász (1967), we lift complexity results from the class of conjunctive queries to arbitrary existential or universal formulas that might contain inequalities and negations on constraints over the free variables. As a consequence, we obtain a complexity classification that refines and generalizes previous results of Chen, Durand, and Mengel (ToCS 2015; ICDT 2015; PODS 2016) for conjunctive queries and of Curticapean and Marx (FOCS 2014) for the subgraph counting problem. Our proof also relies on graph minors, and we show a strengthening of the Excluded-Grid-Theorem which might be of independent interest: If the linked matching number (and thus the treewidth) is large, then not only can we find a large grid somewhere in the graph, but we can find a large grid whose diagonal has disjoint paths leading into an assumed node-well-linked set.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.113/LIPIcs.ICALP.2019.113.pdf
Conjunctive queries
graph homomorphisms
counting complexity
parameterized complexity
fine-grained complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
114:1
114:14
10.4230/LIPIcs.ICALP.2019.114
article
A Faster Deterministic Exponential Time Algorithm for Energy Games and Mean Payoff Games (Track B: Automata, Logic, Semantics, and Theory of Programming)
Dorfman, Dani
1
Kaplan, Haim
1
Zwick, Uri
1
Blavatnik School of Computer Science, Tel Aviv University, Israel
We present an improved exponential time algorithm for Energy Games, and hence also for Mean Payoff Games. The running time of the new algorithm is O(min(m n W, m n 2^{n/2} log W)), where n is the number of vertices, m is the number of edges, and the edge weights are integers of absolute value at most W. For small values of W, the algorithm matches the performance of the pseudopolynomial time algorithm of Brim et al. on which it is based. For W >= n 2^{n/2}, the new algorithm is faster than the algorithm of Brim et al. and is currently the fastest deterministic algorithm for Energy Games and Mean Payoff Games. The new algorithm is obtained by introducing a technique for forecasting repetitive actions performed by the algorithm of Brim et al., along with the use of an edge-weight scaling technique.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.114/LIPIcs.ICALP.2019.114.pdf
Energy Games
Mean Payoff Games
Scaling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
115:1
115:14
10.4230/LIPIcs.ICALP.2019.115
article
Reachability for Branching Concurrent Stochastic Games (Track B: Automata, Logic, Semantics, and Theory of Programming)
Etessami, Kousha
1
Martinov, Emanuel
1
Stewart, Alistair
2
Yannakakis, Mihalis
3
School of Informatics, University of Edinburgh, UK
Department of Computer Science, University of Southern California, Los Angeles, CA, USA
Department of Computer Science, Columbia University, New York City, NY, USA
We give polynomial time algorithms for deciding almost-sure and limit-sure reachability in Branching Concurrent Stochastic Games (BCSGs). These are a class of infinite-state imperfect-information stochastic games that generalize both finite-state concurrent stochastic reachability games ([L. de Alfaro et al., 2007]) and branching simple stochastic reachability games ([K. Etessami et al., 2018]).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.115/LIPIcs.ICALP.2019.115.pdf
stochastic games
multi-type branching processes
concurrent games
minimax-polynomial equations
reachability
almost-sure
limit-sure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
116:1
116:13
10.4230/LIPIcs.ICALP.2019.116
article
FO = FO^3 for Linear Orders with Monotone Binary Relations (Track B: Automata, Logic, Semantics, and Theory of Programming)
Fortin, Marie
1
LSV, CNRS & ENS Paris-Saclay, Université Paris-Saclay, France
We show that over the class of linear orders with additional binary relations satisfying some monotonicity conditions, monadic first-order logic has the three-variable property. This generalizes (and gives a new proof of) several known results, including the fact that monadic first-order logic has the three-variable property over linear orders, as well as over (R,<,+1), and answers some open questions mentioned in a paper by Antonopoulos, Hunter, Raza, and Worrell [FoSSaCS 2015]. Our proof is based on a translation of monadic first-order logic formulas into formulas of a star-free variant of Propositional Dynamic Logic, which are in turn easily expressible in monadic first-order logic with three variables.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.116/LIPIcs.ICALP.2019.116.pdf
first-order logic
three-variable property
propositional dynamic logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
117:1
117:15
10.4230/LIPIcs.ICALP.2019.117
article
A Linear Upper Bound on the Weisfeiler-Leman Dimension of Graphs of Bounded Genus (Track B: Automata, Logic, Semantics, and Theory of Programming)
Grohe, Martin
1
Kiefer, Sandra
1
RWTH Aachen University, Aachen, Germany
The Weisfeiler-Leman (WL) dimension of a graph is a measure for the inherent descriptive complexity of the graph. While originally derived from a combinatorial graph isomorphism test called the Weisfeiler-Leman algorithm, the WL dimension can also be characterised in terms of the number of variables that is required to describe the graph up to isomorphism in first-order logic with counting quantifiers.
It is known that the WL dimension is upper-bounded for all graphs that exclude some fixed graph as a minor [M. Grohe, 2017]. However, the bounds that can be derived from this general result are astronomic. Only recently, it was proved that the WL dimension of planar graphs is at most 3 [S. Kiefer et al., 2017].
In this paper, we prove that the WL dimension of graphs embeddable in a surface of Euler genus g is at most 4g+3. For the WL dimension of graphs embeddable in an orientable surface of Euler genus g, our approach yields an upper bound of 2g + 3.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.117/LIPIcs.ICALP.2019.117.pdf
Weisfeiler-Leman algorithm
finite-variable logic
isomorphism testing
planar graphs
bounded genus
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
118:1
118:13
10.4230/LIPIcs.ICALP.2019.118
article
Termination of Linear Loops over the Integers (Track B: Automata, Logic, Semantics, and Theory of Programming)
Hosseini, Mehran
1
Ouaknine, Joël
2
1
Worrell, James
1
Department of Computer Science, University of Oxford, UK
Max Planck Institute for Software Systems, Germany
We consider the problem of deciding termination of single-path while loops with integer variables, affine updates, and affine guard conditions. The question is whether such a loop terminates on all integer initial values. This problem is known to be decidable for the subclass of loops whose update matrices are diagonalisable, but the general case has remained open since being conjectured decidable by Tiwari in 2004. In this paper we show decidability of determining termination for arbitrary update matrices, confirming Tiwari’s conjecture. For the class of loops considered in this paper, the question of deciding termination on a specific initial value is a longstanding open problem in number theory. The key to our decision procedure is in showing how to circumvent the difficulties inherent in deciding termination on a fixed initial value.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.118/LIPIcs.ICALP.2019.118.pdf
Program Verification
Loop Termination
Linear Integer Programs
Affine While Loops
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
119:1
119:14
10.4230/LIPIcs.ICALP.2019.119
article
Büchi Objectives in Countable MDPs (Track B: Automata, Logic, Semantics, and Theory of Programming)
Kiefer, Stefan
1
Mayr, Richard
2
Shirmohammadi, Mahsa
3
4
Totzke, Patrick
5
University of Oxford, UK
University of Edinburgh, UK
CNRS, Paris, France
IRIF, Paris, France
University of Liverpool, UK
We study countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset F of states infinitely often. A question left open by T.P. Hill in 1979 [Theodore Preston Hill, 1979] is whether there always exist epsilon-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. We provide a negative answer to this question by constructing a non-trivial counterexample. On the other hand, we show that Markov strategies with only 1 bit of extra memory are sufficient.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.119/LIPIcs.ICALP.2019.119.pdf
Markov decision processes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
120:1
120:13
10.4230/LIPIcs.ICALP.2019.120
article
Determinization of Büchi Automata: Unifying the Approaches of Safra and Muller-Schupp (Track B: Automata, Logic, Semantics, and Theory of Programming)
Löding, Christof
1
Pirogov, Anton
1
https://orcid.org/0000-0002-5077-7497
RWTH Aachen University, Ahornstr. 55, 52074 Aachen, Germany
Determinization of Büchi automata is a long-known difficult problem, and after the seminal result of Safra, who developed the first asymptotically optimal construction from Büchi into Rabin automata, much work went into improving, simplifying, or avoiding Safra’s construction. A different, less known determinization construction was proposed by Muller and Schupp. The two types of constructions share some similarities but their precise relationship was still unclear. In this paper, we shed some light on this relationship by proposing a construction from nondeterministic Büchi to deterministic parity automata that subsumes both constructions: Our construction leaves some freedom in the choice of the successor states of the deterministic automaton, and by instantiating these choices in different ways, one obtains as particular cases the construction of Safra and the construction of Muller and Schupp. The basis is a correspondence between structures that are encoded in the macrostates of the determinization procedures - Safra trees on one hand, and levels of the split-tree, which underlies the Muller and Schupp construction, on the other hand. Our construction also allows for mixing the mentioned constructions, and opens up new directions for the development of heuristics.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.120/LIPIcs.ICALP.2019.120.pdf
Büchi automata
determinization
parity automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
121:1
121:12
10.4230/LIPIcs.ICALP.2019.121
article
Optimal Regular Expressions for Permutations (Track B: Automata, Logic, Semantics, and Theory of Programming)
Molina Lovett, Antonio
1
https://orcid.org/0000-0002-1890-9517
Shallit, Jeffrey
1
https://orcid.org/0000-0003-1197-3820
University of Waterloo, Canada
The permutation language P_n consists of all words that are permutations of a fixed alphabet of size n. Using divide-and-conquer, we construct a regular expression R_n that specifies P_n. We then give explicit bounds for the length of R_n, which we find to be 4^{n}n^{-(lg n)/4+Theta(1)}, and use these bounds to show that R_n has minimum size over all regular expressions specifying P_n.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.121/LIPIcs.ICALP.2019.121.pdf
regular expressions
lower bounds
divide-and-conquer
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
122:1
122:15
10.4230/LIPIcs.ICALP.2019.122
article
Equivalence of Finite-Valued Streaming String Transducers Is Decidable (Track B: Automata, Logic, Semantics, and Theory of Programming)
Muscholl, Anca
1
Puppis, Gabriele
2
LaBRI, University of Bordeaux, France
CNRS, LaBRI, Bordeaux, France
In this paper we provide a positive answer to a question left open by Alur and Deshmukh in 2011 by showing that equivalence of finite-valued copyless streaming string transducers is decidable.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.122/LIPIcs.ICALP.2019.122.pdf
String transducers
equivalence
Ehrenfeucht conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
123:1
123:15
10.4230/LIPIcs.ICALP.2019.123
article
From Normal Functors to Logarithmic Space Queries (Track B: Automata, Logic, Semantics, and Theory of Programming)
Nguyễn, Lê Thành Dũng
1
https://orcid.org/0000-0002-6900-5577
Pradic, Pierre
2
3
LIPN, UMR 7030 CNRS, Université Paris 13, Sorbonne Paris Cité, France
ENS de Lyon, Université de Lyon, LIP, France
University of Warsaw, Faculty of Mathematics, Informatics and Mechanics, Poland
We introduce a new approach to implicit complexity in linear logic, inspired by functional database query languages and using recent developments in effective denotational semantics of polymorphism. We give the first sub-polynomial upper bound in a type system with impredicative polymorphism; adding restrictions on quantifiers yields a characterization of logarithmic space, for which extensional completeness is established via descriptive complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.123/LIPIcs.ICALP.2019.123.pdf
coherence spaces
elementary linear logic
semantic evaluation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
124:1
124:15
10.4230/LIPIcs.ICALP.2019.124
article
Automatic Semigroups vs Automaton Semigroups (Track B: Automata, Logic, Semantics, and Theory of Programming)
Picantin, Matthieu
1
https://orcid.org/0000-0002-7149-1770
IRIF UMR 8243 CNRS & Univ Paris Diderot, 75013 Paris, France
We develop an effective and natural approach to interpret any semigroup admitting a special language of greedy normal forms as an automaton semigroup, namely the semigroup generated by a Mealy automaton encoding the behaviour of such a language of greedy normal forms under one-sided multiplication. The framework embraces many of the well-known classes of (automatic) semigroups: free semigroups, free commutative semigroups, trace or divisibility monoids, braid or Artin - Tits or Krammer or Garside monoids, Baumslag - Solitar semigroups, etc. Some automatic semigroups that are neither left- nor right-cancellative, such as plactic monoids or Chinese monoids, are also investigated, as well as some residually finite variations of the bicyclic monoid. This provides what appears to be the first known connection from a class of automatic semigroups to a class of automaton semigroups. It is worth noting that "being an automatic semigroup" and "being an automaton semigroup" become dual properties in a very automata-theoretical sense. Quadratic rewriting systems and associated tilings appear as the cornerstone of our construction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.124/LIPIcs.ICALP.2019.124.pdf
Mealy machine
semigroup
rewriting system
automaticity
self-similarity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
125:1
125:13
10.4230/LIPIcs.ICALP.2019.125
article
A Mahler’s Theorem for Word Functions (Track B: Automata, Logic, Semantics, and Theory of Programming)
Pin, Jean-Éric
1
Reutenauer, Christophe
2
IRIF, Université Paris Denis Diderot, CNRS - Case 7014 - F-75205 Paris Cedex 13, France
Mathématiques, Université du Québec à Montréal, CP 8888, succ. Centre Ville, Canada H3C 3P8
Let p be a prime number and let G_p be the variety of all languages recognised by a finite p-group. We give a construction process of all G_p-preserving functions from a free monoid to a free group. Our result follows from a new noncommutative generalization of Mahler’s theorem on interpolation series, a celebrated result of p-adic analysis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.125/LIPIcs.ICALP.2019.125.pdf
group languages
interpolation series
pro-p metric
regularity preserving
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
126:1
126:14
10.4230/LIPIcs.ICALP.2019.126
article
On All Things Star-Free (Track B: Automata, Logic, Semantics, and Theory of Programming)
Place, Thomas
1
Zeitoun, Marc
2
Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400, Talence and IUF, France
Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400, Talence, France
We investigate the star-free closure, which associates to a class of languages its closure under Boolean operations and marked concatenation. We prove that the star-free closure of any finite class and of any class of group languages with decidable separation (plus mild additional properties) has decidable separation. We actually show decidability of a stronger property, called covering. This generalizes many results on the subject in a unified framework. A key ingredient is that star-free closure coincides with another closure operator where Kleene stars are also allowed in restricted contexts.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.126/LIPIcs.ICALP.2019.126.pdf
Regular languages
separation problem
star-free closure
group languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
127:1
127:14
10.4230/LIPIcs.ICALP.2019.127
article
From Nondeterministic to Multi-Head Deterministic Finite-State Transducers (Track B: Automata, Logic, Semantics, and Theory of Programming)
Raszyk, Martin
1
Basin, David
1
Traytel, Dmitriy
1
Department of Computer Science, ETH Zürich, Universitätstrasse 6, 8092, Switzerland
Every nondeterministic finite-state automaton is equivalent to a deterministic finite-state automaton. This result does not extend to finite-state transducers - finite-state automata equipped with a one-way output tape. There is a strict hierarchy of functions accepted by one-way deterministic finite-state transducers (1DFTs), one-way nondeterministic finite-state transducers (1NFTs), and two-way nondeterministic finite-state transducers (2NFTs), whereas the two-way deterministic finite-state transducers (2DFTs) accept the same family of functions as their nondeterministic counterparts (2NFTs).
We define multi-head one-way deterministic finite-state transducers (mh-1DFTs) as a natural extension of 1DFTs. These transducers have multiple one-way reading heads that move asynchronously over the input word. Our main result is that mh-1DFTs can deterministically express any function defined by a one-way nondeterministic finite-state transducer. Of independent interest, we formulate the all-suffix regular matching problem, which is the problem of deciding for each suffix of an input word whether it belongs to a regular language. As part of our proof, we show that an mh-1DFT can solve all-suffix regular matching, which has applications, e.g., in runtime verification.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.127/LIPIcs.ICALP.2019.127.pdf
Formal languages
Nondeterminism
Multi-head automata
Finite transducers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
128:1
128:14
10.4230/LIPIcs.ICALP.2019.128
article
Sequentiality of String-to-Context Transducers (Track B: Automata, Logic, Semantics, and Theory of Programming)
Reynier, Pierre-Alain
1
Villevalois, Didier
1
Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
Transducers extend finite state automata with outputs, and describe transformations from strings to strings. Sequential transducers, which have a deterministic behaviour regarding their input, are of particular interest. However, unlike finite-state automata, not every transducer can be made sequential. The seminal work of Choffrut characterises, amongst the functional one-way transducers, the ones that admit an equivalent sequential transducer.
In this work, we extend the results of Choffrut to the class of transducers that produce their output string by adding simultaneously, at each transition, a string on the left and a string on the right of the string produced so far. We call them the string-to-context transducers. We obtain a multiple characterisation of the functional string-to-context transducers admitting an equivalent sequential one, based on a Lipschitz property of the function realised by the transducer, and on a pattern (a new twinning property). Last, we prove that given a string-to-context transducer, determining whether there exists an equivalent sequential one is in coNP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.128/LIPIcs.ICALP.2019.128.pdf
Transducers
Sequentiality
Twinning Property
Two-Way Transducers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
129:1
129:15
10.4230/LIPIcs.ICALP.2019.129
article
The Parametric Complexity of Lossy Counter Machines (Track B: Automata, Logic, Semantics, and Theory of Programming)
Schmitz, Sylvain
1
2
https://orcid.org/0000-0002-4101-4308
LSV, ENS Paris Saclay & CNRS, Université Paris-Saclay, France
IUF, France
The reachability problem in lossy counter machines is the best-known ACKERMANN-complete problem and has been used to establish most of the ACKERMANN-hardness statements in the literature. This hides however a complexity gap when the number of counters is fixed. We close this gap and prove F_d-completeness for machines with d counters, which provides the first known uncontrived problems complete for the fast-growing complexity classes at levels 3 < d < omega. To this end, we develop an approach based on antichain factorisations of bad sequences and on analysing the length of controlled antichains.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.129/LIPIcs.ICALP.2019.129.pdf
Counter machine
well-structured system
well-quasi-order
antichain
fast-growing complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
130:1
130:14
10.4230/LIPIcs.ICALP.2019.130
article
Varieties of Data Languages (Track B: Automata, Logic, Semantics, and Theory of Programming)
Urbat, Henning
1
Milius, Stefan
1
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
We establish an Eilenberg-type correspondence for data languages, i.e. languages over an infinite alphabet. More precisely, we prove that there is a bijective correspondence between varieties of languages recognized by orbit-finite nominal monoids and pseudovarieties of such monoids. This is the first result of this kind for data languages. Our approach makes use of nominal Stone duality and a recent category theoretic generalization of Birkhoff-type theorems that we instantiate here for the category of nominal sets. In addition, we prove an axiomatic characterization of weak pseudovarieties as those classes of orbit-finite monoids that can be specified by sequences of nominal equations, which provides a nominal version of a classical theorem of Eilenberg and Schützenberger.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.130/LIPIcs.ICALP.2019.130.pdf
Nominal sets
Stone duality
Algebraic language theory
Data languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
131:1
131:14
10.4230/LIPIcs.ICALP.2019.131
article
How Fast Can We Reach a Target Vertex in Stochastic Temporal Graphs?
Akrida, Eleni C.
1
https://orcid.org/0000-0002-1126-1623
Mertzios, George B.
2
https://orcid.org/0000-0001-7182-585X
Nikoletseas, Sotiris
3
Raptopoulos, Christoforos
3
https://orcid.org/0000-0002-9837-2632
Spirakis, Paul G.
1
4
https://orcid.org/0000-0001-5396-3749
Zamaraev, Viktor
2
https://orcid.org/0000-0001-5755-4141
Department of Computer Science, University of Liverpool, UK
Department of Computer Science, Durham University, UK
Computer Engineering & Informatics Department, University of Patras, and CTI, Greece
Computer Engineering & Informatics Department, University of Patras, Greece
Temporal graphs are used to abstractly model real-life networks that are inherently dynamic in nature, in the sense that the network structure undergoes discrete changes over time. Given a static underlying graph G=(V,E), a temporal graph on G is a sequence of snapshots {G_t=(V,E_t) subseteq G: t in N}, one for each time step t >= 1. In this paper we study stochastic temporal graphs, i.e. stochastic processes G={G_t subseteq G: t in N} whose random variables are the snapshots of a temporal graph on G. A natural feature of stochastic temporal graphs which can be observed in various real-life scenarios is a memory effect in the appearance probabilities of particular edges; that is, the probability that an edge e in E appears at time step t depends on its appearance (or absence) at the previous k steps. In this paper we study the hierarchy of models memory-k, k >= 0, which address this memory effect in an edge-centric network evolution: every edge of G has its own probability distribution for its appearance over time, independently of all other edges. Clearly, for every k >= 1, memory-(k-1) is a special case of memory-k. However, in this paper we make a clear distinction between the values k=0 ("no memory") and k >= 1 ("some memory"), as in some cases these models exhibit a fundamentally different computational behavior for these values of k, as our results indicate. For every k >= 0 we investigate the computational complexity of two naturally related, but fundamentally different, temporal path (or journey) problems: Minimum Arrival and Best Policy. In the first problem we are looking for the expected arrival time of a foremost journey between two designated vertices s and y. In the second one we are looking for the expected arrival time of the best policy for actually choosing a particular s-y journey. We present a detailed investigation of the computational landscape of both problems for the different values of memory k.
Among other results we prove that, surprisingly, Minimum Arrival is strictly harder than Best Policy; in fact, for k=0, Minimum Arrival is #P-hard while Best Policy is solvable in O(n^2) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.131/LIPIcs.ICALP.2019.131.pdf
Temporal network
stochastic temporal graph
temporal path
#P-hard problem
polynomial-time approximation scheme
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
132:1
132:15
10.4230/LIPIcs.ICALP.2019.132
article
Distributed Detection of Cliques in Dynamic Networks
Bonne, Matthias
1
Censor-Hillel, Keren
1
Department of Computer Science, Technion, Haifa, Israel
This paper provides an in-depth study of the fundamental problems of finding small subgraphs in distributed dynamic networks.
While some problems are trivially easy to handle, such as detecting a triangle that emerges after an edge insertion, we show that, perhaps somewhat surprisingly, other problems exhibit a wide range of complexities in terms of the trade-offs between their round and bandwidth complexities.
In the case of triangles, which are only affected by the topology of the immediate neighborhood, some end results are:
- The bandwidth complexity of 1-round dynamic triangle detection or listing is Theta(1).
- The bandwidth complexity of 1-round dynamic triangle membership listing is Theta(1) for node/edge deletions, Theta(n^{1/2}) for edge insertions, and Theta(n) for node insertions.
- The bandwidth complexity of 1-round dynamic triangle membership detection is Theta(1) for node/edge deletions, O(log n) for edge insertions, and Theta(n) for node insertions.
Most of our upper and lower bounds are tight. Additionally, we provide almost always tight upper and lower bounds for larger cliques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.132/LIPIcs.ICALP.2019.132.pdf
distributed computing
subgraph detection
dynamic graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
133:1
133:12
10.4230/LIPIcs.ICALP.2019.133
article
On Approximate Pure Nash Equilibria in Weighted Congestion Games with Polynomial Latencies
Caragiannis, Ioannis
1
Fanelli, Angelo
2
University of Patras & CTI "Diophantus", Patras, Greece
CNRS (UMR-6211), Caen, France
We consider the problem of the existence of natural improvement dynamics leading to approximate pure Nash equilibria, with a reasonably small approximation, and the problem of bounding the efficiency of such equilibria in the fundamental framework of weighted congestion games with polynomial latencies of degree at most d >= 1. In this work, by exploiting a simple technique, we first show that the game always admits a d-approximate potential function. This implies that every sequence of d-approximate improvement moves by the players always leads the game to a d-approximate pure Nash equilibrium. As a corollary, we also obtain that, under mild assumptions on the structure of the players' strategies, the game always admits a constant approximate potential function. Second, by using a simple potential function argument, we show that the game always admits a (d+delta)-approximate pure Nash equilibrium, with delta in [0,1], whose cost is 2/(1+delta) times the cost of an optimal state.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.133/LIPIcs.ICALP.2019.133.pdf
Congestion games
approximate pure Nash equilibrium
potential functions
approximate price of stability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
134:1
134:14
10.4230/LIPIcs.ICALP.2019.134
article
Temporal Cliques Admit Sparse Spanners
Casteigts, Arnaud
1
https://orcid.org/0000-0002-7819-7013
Peters, Joseph G.
2
https://orcid.org/0000-0002-2475-8145
Schoeters, Jason
1
https://orcid.org/0000-0001-7257-5426
LaBRI, Université de Bordeaux, CNRS, Bordeaux INP, France
School of Computing Science, Simon Fraser University, Canada
Let G=(V,E) be an undirected graph on n vertices and lambda:E -> 2^{N} a mapping that assigns to every edge a non-empty set of positive integer labels. These labels can be seen as discrete times when the edge is present. Such a labeled graph {G}=(G,lambda) is said to be temporally connected if a path exists with non-decreasing times from every vertex to every other vertex. In a seminal paper, Kempe, Kleinberg, and Kumar (STOC 2000) asked whether, given such a temporal graph, a sparse subset of edges can always be found whose labels suffice to preserve temporal connectivity - a temporal spanner. Axiotis and Fotakis (ICALP 2016) answered negatively by exhibiting a family of Theta(n^2)-dense temporal graphs which admit no temporal spanner of density o(n^2). The natural question is then whether sparse temporal spanners always exist in some classes of dense graphs.
In this paper, we answer this question affirmatively, by showing that if the underlying graph G is a complete graph, then one can always find temporal spanners of density O(n log n). The best known result for complete graphs so far was that spanners of density binom{n}{2}- floor[n/4] = O(n^2) always exist. Our result is the first positive answer as to the existence of o(n^2) sparse spanners in adversarial instances of temporal graphs since the original question by Kempe et al., focusing here on complete graphs. The proofs are constructive and directly adaptable as an algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.134/LIPIcs.ICALP.2019.134.pdf
Dynamic networks
Temporal graphs
Temporal connectivity
Sparse spanners
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
135:1
135:14
10.4230/LIPIcs.ICALP.2019.135
article
Distributed Reconfiguration of Maximal Independent Sets
Censor-Hillel, Keren
1
Rabie, Mikaël
2
3
Department of Computer Science, Technion, Israel
IRIF, Université de Paris, France
Aalto University, Finland
In this paper, we investigate a distributed maximal independent set (MIS) reconfiguration problem, in which there are two maximal independent sets for which every node is given its membership status, and the nodes need to communicate with their neighbors in order to find a reconfiguration schedule that switches from the first MIS to the second. Such a schedule is a list of independent sets that is restricted by forbidding two neighbors to change their membership status at the same step. In addition, these independent sets should provide some covering guarantee.
We show that obtaining an actual MIS (and even a 3-dominating set) in each intermediate step is impossible. However, we provide efficient solutions when the intermediate sets are only required to be independent and 4-dominating, which is almost always possible, as we fully characterize.
Consequently, our goal is to pin down the tradeoff between the possible length of the schedule and the number of communication rounds. We prove that a constant length schedule can be found in O(MIS+R32) rounds, where MIS is the complexity of finding an MIS in a worst-case graph and R32 is the complexity of finding a (3,2)-ruling set. For bounded degree graphs, this is O(log^*n) rounds and we show that it is necessary. On the other extreme, we show that with a constant number of rounds we can find a linear length schedule.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.135/LIPIcs.ICALP.2019.135.pdf
distributed graph algorithms
reconfiguration
maximal independent set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
136:1
136:14
10.4230/LIPIcs.ICALP.2019.136
article
Stochastic Graph Exploration
Anagnostopoulos, Aris
1
Cohen, Ilan R.
2
Leonardi, Stefano
1
Łącki, Jakub
3
Sapienza University of Rome, Italy
CWI, Amsterdam, The Netherlands
Google Research, New York, USA
Exploring large-scale networks is a time consuming and expensive task which is usually operated in a complex and uncertain environment. A crucial aspect of network exploration is the development of suitable strategies that decide which nodes and edges to probe at each stage of the process.
To model this process, we introduce the stochastic graph exploration problem. The input is an undirected graph G=(V,E) with a source vertex s, stochastic edge costs drawn from a distribution pi_e, e in E, and rewards on vertices of maximum value R. The goal is to find a set F of edges of total cost at most B such that the subgraph of G induced by F is connected, contains s, and maximizes the total reward. This problem generalizes the stochastic knapsack problem and other stochastic probing problems recently studied.
Our focus is on the development of efficient nonadaptive strategies that are competitive against the optimal adaptive strategy. A major challenge is the fact that the problem has an Omega(n) adaptivity gap even on a tree of n vertices. This is in sharp contrast with the O(1) adaptivity gap of the stochastic knapsack problem, which is a special case of our problem. We circumvent this negative result by showing that O(log nR) resource augmentation suffices to obtain an O(1) approximation on trees and an O(log nR) approximation on general graphs. To achieve this result, we reduce stochastic graph exploration to a memoryless process - the minesweeper problem - which assigns to every edge a probability that the process terminates when the edge is probed. For this problem, which is interesting in its own right, we present an optimal polynomial time algorithm on trees and an O(log nR) approximation for general graphs.
We study also the problem in which the maximum cost of an edge is a logarithmic fraction of the budget. We show that under this condition, there exist polynomial-time oblivious strategies that use 1+epsilon budget, whose adaptivity gaps on trees and general graphs are 1+epsilon and 8+epsilon, respectively. Finally, we provide additional results on the structure and the complexity of nonadaptive and adaptive strategies.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.136/LIPIcs.ICALP.2019.136.pdf
stochastic optimization
graph exploration
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
137:1
137:15
10.4230/LIPIcs.ICALP.2019.137
article
Energy Consumption of Group Search on a Line
Czyzowicz, Jurek
1
Georgiou, Konstantinos
2
Killick, Ryan
3
Kranakis, Evangelos
3
Krizanc, Danny
4
Lafond, Manuel
5
Narayanan, Lata
6
Opatrny, Jaroslav
6
Shende, Sunil
7
Université du Québec en Outaouais, Gatineau, Québec, Canada
Department of Mathematics, Ryerson University, Toronto, Ontario, Canada
School of Computer Science, Carleton University, Ottawa, Ontario, Canada
Department of Mathematics & Comp. Sci., Wesleyan University, Middletown, CT, USA
Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
Department of Comp. Sci. and Software Eng., Concordia University, Montreal, Québec, Canada
Department of Computer Science, Rutgers University, Camden, NJ, USA
Consider two robots that start at the origin of the infinite line in search of an exit at an unknown location on the line. The robots can collaborate in the search, but can only communicate if they arrive at the same location at exactly the same time, i.e. they use the so-called face-to-face communication model. The group search time is defined as the worst-case time as a function of d, the distance of the exit from the origin, when both robots can reach the exit. It has long been known that for a single robot traveling at unit speed, the search time is at least 9d - o(d); a simple doubling strategy achieves this time bound. It was shown recently in [Chrobak et al., 2015] that k >= 2 robots traveling at unit speed also require at least 9d group search time.
We investigate energy-time trade-offs in group search by two robots, where the energy loss experienced by a robot traveling a distance x at constant speed s is given by s^2 x, as motivated by energy consumption models in physics and engineering. Specifically, we consider the problem of minimizing the total energy used by the robots, under the constraints that the search time is at most a multiple c of the distance d and the speed of the robots is bounded by b. Motivation for this study is that for the case when robots must complete the search in 9d time with maximum speed one (b=1; c=9), a single robot requires at least 9d energy, while for two robots, all previously proposed algorithms consume at least 28d/3 energy.
When the robots have bounded memory and can use only a constant number of fixed speeds, we generalize an algorithm described in [Baeza-Yates and Schott, 1995; Chrobak et al., 2015] to obtain a family of algorithms parametrized by pairs of b,c values that can solve the problem for the entire spectrum of these pairs for which the problem is solvable. In particular, for each such pair, we determine optimal (and in some cases nearly optimal) algorithms inducing the lowest possible energy consumption.
We also propose a novel search algorithm that simultaneously achieves search time 9d and consumes energy 8.42588d. Our result shows that two robots can search on the line in optimal time 9d while consuming less total energy than a single robot within the same search time. Our algorithm uses robots that have unbounded memory, and a finite number of dynamically computed speeds. It can be generalized for any c, b with cb=9, and consumes energy 8.42588b^2d.
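As context for the 9d bound cited in this abstract: for a single unit-speed robot it is achieved by the classic doubling (zig-zag) strategy. A minimal illustrative sketch of that strategy (code and names are mine, not from the paper):

```python
def doubling_search_time(d, side):
    """Time for a single unit-speed robot to reach an exit at distance d
    on the given side (+1 or -1) of the origin, using the classic doubling
    strategy: walk to +1, return to the origin, walk to -2, return, +4, ..."""
    t = 0.0
    i = 0
    while True:
        direction = 1 if i % 2 == 0 else -1
        reach = 2 ** i
        if direction == side and reach >= d:
            return t + d          # exit found on the way out
        t += 2 * reach            # full excursion: out and back to the origin
        i += 1

# Worst case is an exit placed just past a turn point; the ratio time/d
# then approaches 9, matching the 9d - o(d) lower bound in the abstract.
ratios = [doubling_search_time(2 ** i * 1.001, +1) / (2 ** i * 1.001)
          for i in range(2, 20, 2)]
```

The ratios climb monotonically toward 9 while never reaching it, which is the sense in which the doubling strategy is 9-competitive.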
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.137/LIPIcs.ICALP.2019.137.pdf
Evacuation
Exit
Line
Face-to-face Communication
Robots
Search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
138:1
138:14
10.4230/LIPIcs.ICALP.2019.138
article
Computing Exact Solutions of Consensus Halving and the Borsuk-Ulam Theorem
Deligkas, Argyrios
1
2
Fearnley, John
1
Melissourgos, Themistoklis
1
https://orcid.org/0000-0002-9867-6257
Spirakis, Paul G.
1
3
https://orcid.org/0000-0001-5396-3749
Department of Computer Science, University of Liverpool, Liverpool, UK
Leverhulme Research Centre for Functional Materials Design, Liverpool, UK
Computer Engineering and Informatics Department, University of Patras, Patras, Greece
We study the problem of finding an exact solution to the consensus halving problem. While recent work has shown that the approximate version of this problem is PPA-complete [Filos-Ratsikas and Goldberg, 2018; Filos-Ratsikas and Goldberg, 2018], we show that the exact version is much harder. Specifically, finding a solution with n agents and n cuts is FIXP-hard, and deciding whether there exists a solution with fewer than n cuts is ETR-complete. We also give a QPTAS for the case where each agent’s valuation is a polynomial.
Along the way, we define a new complexity class BU, which captures all problems that can be reduced to solving an instance of the Borsuk-Ulam problem exactly. We show that FIXP subseteq BU subseteq TFETR and that LinearBU = PPA, where LinearBU is the subclass of BU in which the Borsuk-Ulam instance is specified by a linear arithmetic circuit.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.138/LIPIcs.ICALP.2019.138.pdf
PPA
FIXP
ETR
consensus halving
circuit
reduction
complexity class
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
139:1
139:16
10.4230/LIPIcs.ICALP.2019.139
article
Exploration of High-Dimensional Grids by Finite Automata
Dobrev, Stefan
1
Narayanan, Lata
2
Opatrny, Jaroslav
2
Pankratov, Denis
2
Institute of Mathematics, Slovak Academy of Sciences, Bratislava, Slovakia
Department of CSSE, Concordia University, Montreal, Canada
We consider the problem of finding a treasure at an unknown point of an n-dimensional infinite grid, n >= 3, by initially collocated finite automaton agents (scouts/robots). Recently, the problem has been well characterized for 2 dimensions for deterministic as well as randomized agents, both in synchronous and semi-synchronous models [S. Brandt et al., 2018; Y. Emek et al., 2015]. It has been conjectured that n+1 randomized agents are necessary to solve this problem in the n-dimensional grid [L. Cohen et al., 2017]. In this paper we disprove the conjecture in a strong sense: we show that three randomized synchronous agents suffice to explore an n-dimensional grid for any n. Our algorithm is optimal in terms of the number of agents. Our key insight is that a constant number of finite automaton agents can, by their positions and movements, implement a stack, which can store the path being explored. We also show how to implement our algorithm using: four randomized semi-synchronous agents; four deterministic synchronous agents; or five deterministic semi-synchronous agents.
We give a different algorithm that uses 4 deterministic semi-synchronous agents for the 3-dimensional grid. This is provably optimal, and surprisingly, matches the result for 2 dimensions. For n >= 4, the time complexity of the solutions mentioned above is exponential in the distance D of the treasure from the starting point of the agents. We show that in the deterministic case, one additional agent brings the time down to a polynomial. Finally, we focus on algorithms that never venture much beyond the distance D. We describe an algorithm that uses O(sqrt{n}) semi-synchronous deterministic agents that never go beyond 2D, as well as show that any algorithm using 3 synchronous deterministic agents in 3 dimensions, if it exists, must travel a distance Omega(D^{3/2}) from the origin.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.139/LIPIcs.ICALP.2019.139.pdf
Multi-agent systems
finite state machines
high-dimensional grids
robot exploration
randomized agents
semi-synchronous and synchronous agents
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
140:1
140:14
10.4230/LIPIcs.ICALP.2019.140
article
Deterministic Leader Election in Programmable Matter
Emek, Yuval
1
Kutten, Shay
1
Lavi, Ron
1
Moses Jr., William K.
1
https://orcid.org/0000-0002-4533-7593
Faculty of Industrial Engineering and Management, Technion - IIT, Haifa, Israel
Addressing a fundamental problem in programmable matter, we present the first deterministic algorithm to elect a unique leader in a system of connected amoebots assuming only that amoebots are initially contracted. Previous algorithms either used randomization, made various assumptions (shapes with no holes, or known shared chirality), or elected several co-leaders in some cases.
Some of the building blocks we introduce in constructing the algorithm are of interest by themselves, especially the procedure we present for reaching common chirality among the amoebots. Given the leader election and the chirality agreement building block, it is known that various tasks in programmable matter can be performed or improved.
The main idea of the new algorithm is the usage of the ability of the amoebots to move, which previous leader election algorithms have not used.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.140/LIPIcs.ICALP.2019.140.pdf
programmable matter
geometric amoebot model
leader election
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
141:1
141:14
10.4230/LIPIcs.ICALP.2019.141
article
Two Moves per Time Step Make a Difference
Erlebach, Thomas
1
https://orcid.org/0000-0002-4470-5868
Kammer, Frank
2
https://orcid.org/0000-0002-2662-3471
Luo, Kelin
3
https://orcid.org/0000-0003-2006-0601
Sajenko, Andrej
2
https://orcid.org/0000-0001-5946-8087
Spooner, Jakob T.
1
https://orcid.org/0000-0003-3816-6308
Department of Informatics, University of Leicester, Leicester, England
THM, University of Applied Sciences Mittelhessen, Giessen, Germany
School of Management, Xi'an Jiaotong University, Xianning West Road, Xi'an, China
A temporal graph is a graph whose edge set can change over time. We only require that the edge set in each time step forms a connected graph. The temporal exploration problem asks for a temporal walk that starts at a given vertex, moves over at most one edge in each time step, visits all vertices, and reaches the last unvisited vertex as early as possible. We show in this paper that every temporal graph with n vertices can be explored in O(n^{1.75}) time steps provided that either the degree of the graph is bounded in each step or the temporal walk is allowed to make two moves per step. This result is interesting because it breaks the lower bound of Omega(n^2) steps that holds for the worst-case exploration time if only one move per time step is allowed and the graph in each step can have arbitrary degree. We complement this main result by a logarithmic inapproximability result and a proof that for sparse temporal graphs (i.e., temporal graphs with O(n) edges in the underlying graph) making O(1) moves per time step can improve the worst-case exploration time at most by a constant factor.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.141/LIPIcs.ICALP.2019.141.pdf
Temporal Graph Exploration
Algorithmic Graph Theory
NP-Complete Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
142:1
142:14
10.4230/LIPIcs.ICALP.2019.142
article
Distributed Arboricity-Dependent Graph Coloring via All-to-All Communication
Ghaffari, Mohsen
1
Sayyadi, Ali
2
ETH Zurich, Switzerland
Sharif University of Technology, Iran
We present a constant-time randomized distributed algorithm in the congested clique model that computes an O(alpha)-vertex-coloring, with high probability. Here, alpha denotes the arboricity of the graph, which is, roughly speaking, the edge-density of the densest subgraph. The congested clique is a well-studied model of synchronous message passing for distributed computing with all-to-all communication: in each round, each node can send one O(log n)-bit message to each other node. Our O(1)-round algorithm settles the randomized round complexity of the O(alpha)-coloring problem. We also explain that a similar method can provide a constant-time randomized algorithm for decomposing the graph into O(alpha) edge-disjoint forests, so long as alpha <= n^{1-o(1)}.
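For intuition on why arboricity alpha permits O(alpha) colors (independently of the paper's distributed algorithm): a graph of arboricity alpha always contains a vertex of degree at most 2*alpha - 1, so greedy coloring in reverse degeneracy order uses at most 2*alpha colors. A sequential illustrative sketch (my own, not the paper's method):

```python
def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex. For a graph of arboricity
    alpha, the removed vertex always has degree <= 2*alpha - 1."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

def greedy_color(adj):
    """Color vertices in reverse degeneracy order: each vertex sees at most
    2*alpha - 1 already-colored neighbors, so <= 2*alpha colors suffice."""
    order = degeneracy_order(adj)
    color = {}
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color
```

For example, a cycle (arboricity 2) is properly colored with at most 3 colors by this procedure.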
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.142/LIPIcs.ICALP.2019.142.pdf
Distributed Computing
Message Passing Algorithms
Graph Coloring
Arboricity
Congested Clique Model
Randomized Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
143:1
143:15
10.4230/LIPIcs.ICALP.2019.143
article
Exploiting Hopsets: Improved Distance Oracles for Graphs of Constant Highway Dimension and Beyond
Gupta, Siddharth
1
Kosowski, Adrian
2
Viennot, Laurent
2
Ben-Gurion University of the Negev, Israel
Inria, Paris, France
For fixed h >= 2, we consider the task of adding to a graph G a set of weighted shortcut edges on the same vertex set, such that the length of a shortest h-hop path between any pair of vertices in the augmented graph is exactly the same as the original distance between these vertices in G. A set of shortcut edges with this property is called an exact h-hopset and may be applied in processing distance queries on graph G. In particular, a 2-hopset directly corresponds to a distributed distance oracle known as a hub labeling. In this work, we explore centralized distance oracles based on 3-hopsets and display their advantages in several practical scenarios. In particular, for graphs of constant highway dimension, and more generally for graphs of constant skeleton dimension, we show that 3-hopsets require exponentially fewer shortcuts per node than any previously described distance oracle, and also offer a speedup in query time when compared to simple oracles based on a direct application of 2-hopsets. Finally, we consider the problem of computing a minimum-size h-hopset (for any h >= 2) for a given graph G, showing a polylogarithmic-factor approximation for the case of unique shortest path graphs. When h=3, for a given bound on the space used by the distance oracle, we provide a construction of a hopset achieving a polylogarithmic approximation in both space and query time compared to the optimal 3-hopset oracle given the space bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.143/LIPIcs.ICALP.2019.143.pdf
Hopsets
Distance Oracles
Graph Algorithms
Data Structures
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
144:1
144:13
10.4230/LIPIcs.ICALP.2019.144
article
Optimal Strategies for Patrolling Fences
Haeupler, Bernhard
1
Kuhn, Fabian
2
Martinsson, Anders
3
Petrova, Kalina
3
Pfister, Pascal
3
Carnegie Mellon University, Pittsburgh, PA, USA
University of Freiburg, Germany
ETH Zurich, Switzerland
A classical multi-agent fence patrolling problem asks: What is the maximum length L of a line fence that k agents with maximum speeds v_1,..., v_k can patrol if each point on the line needs to be visited at least once every unit of time. It is easy to see that L = alpha sum_{i=1}^k v_i for some efficiency alpha in [1/2,1). After a series of works [Czyzowicz et al., 2011; Dumitrescu et al., 2014; Kawamura and Kobayashi, 2015; Kawamura and Soejima, 2015] giving better and better efficiencies, it was conjectured by Kawamura and Soejima [Kawamura and Soejima, 2015] that the best possible efficiency approaches 2/3. No upper bounds on the efficiency below 1 were known.
We prove the first such upper bounds and tightly bound the optimal efficiency in terms of the minimum speed ratio s = v_{max}/v_{min} and the number of agents k. Our bounds of alpha <= 1/(1 + 1/s) and alpha <= 1 - 1/(sqrt{k}+1) imply that in order to achieve efficiency 1 - epsilon, at least k >= Omega(epsilon^{-2}) agents with a speed ratio of s >= Omega(epsilon^{-1}) are necessary. Guided by our upper bounds, we construct a scheme whose efficiency approaches 1, disproving the conjecture stated above. Our scheme asymptotically matches our upper bounds in terms of the maximal speed difference and the number of agents used.
A variation of the fence patrolling problem considers a circular fence instead and asks for its circumference to be maximized. We consider the unidirectional case of this variation, where all agents are only allowed to move in one direction, say clockwise. At first, a strategy yielding L = max_{r in [k]} r * v_r where v_1 >= v_2 >= ... >= v_k was conjectured to be optimal by Czyzowicz et al. [Czyzowicz et al., 2011]. This was proven not to be the case by giving constructions for only specific numbers of agents with marginal improvements of L. We give a general construction that yields L = 1/(33 log_e log_2(k)) * sum_{i=1}^k v_i for any set of agents, which in particular for the case of speeds 1, 1/2, ..., 1/k diverges as k -> infty, thus resolving a conjecture by Kawamura and Soejima [Kawamura and Soejima, 2015] affirmatively.
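The two upper bounds on the efficiency alpha stated in this abstract can be checked numerically; a small sketch (illustrative only, function names are mine):

```python
from math import sqrt

def speed_ratio_bound(s):
    """Upper bound alpha <= 1/(1 + 1/s), where s = v_max/v_min."""
    return 1.0 / (1.0 + 1.0 / s)

def agent_count_bound(k):
    """Upper bound alpha <= 1 - 1/(sqrt(k) + 1) for k agents."""
    return 1.0 - 1.0 / (sqrt(k) + 1.0)

# Reaching efficiency 1 - eps forces s >= (1 - eps)/eps and
# k >= ((1 - eps)/eps)^2, i.e. the Omega(eps^-1) speed ratio and
# Omega(eps^-2) agent count stated in the abstract. For eps = 0.01:
eps = 0.01
s_min = (1 - eps) / eps          # 99: speed_ratio_bound(99) = 0.99
k_min = ((1 - eps) / eps) ** 2   # 9801: agent_count_bound(9801) = 0.99
```

Both bounds evaluate to exactly 1 - eps at these parameter values, showing the thresholds are tight for the stated inequalities.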
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.144/LIPIcs.ICALP.2019.144.pdf
multi-agent systems
patrolling algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
145:1
145:13
10.4230/LIPIcs.ICALP.2019.145
article
Matroid Coflow Scheduling
Im, Sungjin
1
Moseley, Benjamin
2
Pruhs, Kirk
3
Purohit, Manish
4
University of California at Merced, USA
Carnegie Mellon University, Pittsburgh, PA, USA
University of Pittsburgh, PA, USA
Google, Mountain View, CA, USA
We consider the matroid coflow scheduling problem, where each job is composed of a set of flows and the family of sets that can be scheduled at any time forms a matroid. Our main result is a polynomial-time algorithm that yields a 2-approximation for the objective of minimizing the weighted completion time. This result is tight assuming P != NP. As a by-product we also obtain the first (2+epsilon)-approximation algorithm for the preemptive concurrent open shop scheduling problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.145/LIPIcs.ICALP.2019.145.pdf
Coflow Scheduling
Concurrent Open Shop
Matroid Scheduling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
146:1
146:14
10.4230/LIPIcs.ICALP.2019.146
article
Multi-Round Cooperative Search Games with Multiple Players
Korman, Amos
1
https://orcid.org/0000-0001-8652-9228
Rodeh, Yoav
2
https://orcid.org/0000-0002-7224-6451
Université de Paris, IRIF, CNRS, F-75013 Paris, France
Ort Braude College, Karmiel, Israel
Assume that a treasure is placed in one of M boxes according to a known distribution and that k searchers are searching for it in parallel during T rounds. We study the question of how to incentivize selfish players so that group performance would be maximized. Here, this is measured by the success probability, namely, the probability that at least one player finds the treasure. We focus on congestion policies C(l) that specify the reward that a player receives if it is one of l players that (simultaneously) find the treasure for the first time. Our main technical contribution is proving that the exclusive policy, in which C(1)=1 and C(l)=0 for l>1, yields a price of anarchy of (1-(1-1/k)^k)^{-1}, and that this is the best possible price among all symmetric reward mechanisms. For this policy we also have an explicit description of a symmetric equilibrium, which is in some sense unique, and moreover enjoys the best success probability among all symmetric profiles. For general congestion policies, we show how to find in polynomial time, for any theta>0, a symmetric multiplicative (1+theta)(1+C(k))-equilibrium.
Together with an appropriate reward policy, a central entity can suggest players to play a particular profile at equilibrium. As our main conceptual contribution, we advocate the use of symmetric equilibria for such purposes. Besides being fair, we argue that symmetric equilibria can also become highly robust to crashes of players. Indeed, in many cases, despite the fact that some small fraction of players crash (or refuse to participate), symmetric equilibria remain efficient in terms of their group performances and, at the same time, serve as approximate equilibria. We show that this principle holds for a class of games, which we call monotonously scalable games. This applies in particular to our search game, assuming the natural sharing policy, in which C(l)=1/l. For the exclusive policy, this general result does not hold, but we show that the symmetric equilibrium is nevertheless robust under mild assumptions.
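The price-of-anarchy expression for the exclusive policy is easy to evaluate; a small sketch (illustrative code, names are mine):

```python
import math

def poa_exclusive(k):
    """Price of anarchy of the exclusive policy (C(1)=1, C(l)=0 for l>1)
    with k searchers, as given in the abstract: (1 - (1 - 1/k)^k)^(-1)."""
    return 1.0 / (1.0 - (1.0 - 1.0 / k) ** k)

# (1 - 1/k)^k increases toward 1/e, so the price of anarchy increases
# toward e/(e - 1), roughly 1.582, as the number of searchers grows.
limit = math.e / (math.e - 1)
```

For instance, two searchers give a price of anarchy of 4/3, and the value approaches the limit e/(e-1) for large k.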
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.146/LIPIcs.ICALP.2019.146.pdf
Algorithmic Mechanism Design
Parallel Algorithms
Collaborative Search
Fault-Tolerance
Price of Anarchy
Price of Stability
Symmetric Equilibria
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
147:1
147:15
10.4230/LIPIcs.ICALP.2019.147
article
Polynomial Anonymous Dynamic Distributed Computing Without a Unique Leader
Kowalski, Dariusz R.
1
2
Mosteiro, Miguel A.
3
Department of Computer Science, University of Liverpool, UK
SWPS University of Social Sciences and Humanities, Warsaw, Poland
Computer Science Department, Pace University, New York, NY, USA
Counting the number of nodes in Anonymous Dynamic Networks is enticing from an algorithmic perspective: an important computation in a restricted platform with promising applications. Starting with Michail, Chatzigiannakis, and Spirakis [Michail et al., 2013], a flurry of papers sped up the running time guarantees from doubly-exponential to polynomial [Dariusz R. Kowalski and Miguel A. Mosteiro, 2018]. There is a common theme across all those works: a distinguished node is assumed to be present, because Counting cannot be solved deterministically without at least one.
In the present work we study challenging questions that naturally follow: how to efficiently count with more than one distinguished node, or how to count without any distinguished node. More importantly, what is the minimal information needed about these distinguished nodes, and what is the best we can aim for (count precision, stochastic guarantees, etc.) without any? We present negative and positive results to answer these questions. To the best of our knowledge, this is the first work that addresses them.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.147/LIPIcs.ICALP.2019.147.pdf
Anonymous Dynamic Networks
Counting
distributed algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
148:1
148:16
10.4230/LIPIcs.ICALP.2019.148
article
Noidy Conmunixatipn: On the Convergence of the Averaging Population Protocol
Mallmann-Trenn, Frederik
1
Maus, Yannic
2
Pajak, Dominik
3
4
MIT, CSAIL, Cambridge, MA, US
Department of Computer Science, Technion, Haifa, Israel
Faculty of Fundamental Problems of Technology, Wroclaw University of Science and Technology, Poland
Tooploox, Wroclaw, Poland
We study a process of averaging in a distributed system with noisy communication. Each of the agents in the system starts with some value and the goal of each agent is to compute the average of all the initial values. In each round, one pair of agents is drawn uniformly at random from the whole population, communicates with each other and each of these two agents updates their local value based on their own value and the received message. The communication is noisy and whenever an agent sends any value v, the receiving agent receives v+N, where N is a zero-mean Gaussian random variable. The two quality measures of interest are (i) the total sum of squares TSS(t), which measures the sum of square distances from the average load to the initial average and (ii) bar{phi}(t), which measures the sum of square distances from the average load to the running average (average at time t).
It is known that the simple averaging protocol - in which an agent sends its current value and sets its new value to the average of the received value and its current value - converges eventually to a state where bar{phi}(t) is small. It has been observed that TSS(t), due to the noise, eventually diverges and previous research - mostly in control theory - has focused on showing eventual convergence w.r.t. the running average. We obtain the first probabilistic bounds on the convergence time of bar{phi}(t) and precise bounds on the drift of TSS(t) that show that although TSS(t) eventually diverges, for a wide and interesting range of parameters, TSS(t) stays small for a number of rounds that is polynomial in the number of agents. Our results extend to the synchronous setting and settings where the agents are restricted to discrete values and perform rounding.
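The averaging update described in this abstract is straightforward to simulate; a toy sketch (parameters and names are mine, purely illustrative):

```python
import random

def noisy_average_round(vals, sigma):
    """One round of the protocol: a uniformly random pair communicates,
    each message is corrupted by zero-mean Gaussian noise, and each agent
    sets its value to the average of its own and the received value."""
    i, j = random.sample(range(len(vals)), 2)
    msg_i = vals[i] + random.gauss(0.0, sigma)  # what agent j receives
    msg_j = vals[j] + random.gauss(0.0, sigma)  # what agent i receives
    vals[i] = (vals[i] + msg_j) / 2.0
    vals[j] = (vals[j] + msg_i) / 2.0

def phi(vals):
    """Sum of squared distances to the current (running) average,
    i.e. the bar{phi}(t) quality measure from the abstract."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)
```

With sigma = 0 this is standard gossip averaging and phi shrinks geometrically; with small noise, phi settles near a small noise-dependent level rather than at zero, which mirrors the convergence behavior the paper quantifies.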
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.148/LIPIcs.ICALP.2019.148.pdf
population protocols
noisy communication
distributed averaging
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
149:1
149:15
10.4230/LIPIcs.ICALP.2019.149
article
Periodic Bandits and Wireless Network Selection
Oh, Shunhao
1
Appavoo, Anuja Meetoo
1
Gilbert, Seth
1
Department of Computer Science, National University of Singapore
Bandit-style algorithms have been studied extensively in stochastic and adversarial settings. Such algorithms have been shown to be useful in multiplayer settings, e.g. to solve the wireless network selection problem, which can be formulated as an adversarial bandit problem. A leading bandit algorithm for the adversarial setting is EXP3. However, network behavior is often repetitive, where user density and network behavior follow regular patterns. Bandit algorithms, like EXP3, fail to provide good guarantees for periodic behaviors. A major reason is that these algorithms compete against fixed-action policies, which is ineffective in a periodic setting.
In this paper, we define a periodic bandit setting, and periodic regret as a better performance measure for this type of setting. Instead of comparing an algorithm’s performance to fixed-action policies, we aim to be competitive with policies that play arms under some set of possible periodic patterns F (for example, all possible periodic functions with periods 1, 2, ..., P). We propose Periodic EXP4, a computationally efficient variant of the EXP4 algorithm for periodic settings. With K arms, T time steps, and where each periodic pattern in F is of length at most P, we show that the periodic regret obtained by Periodic EXP4 is at most O(sqrt{PKT log K + KT log |F|}). We also prove a lower bound of Omega(sqrt{PKT + KT {log |F|}/{log K}}) for the periodic setting, showing that this is optimal within logarithmic factors. As an example, we focus on the wireless network selection problem. Through simulation, we show that Periodic EXP4 learns the periodic pattern over time, adapts to changes in a dynamic environment, and far outperforms EXP3.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.149/LIPIcs.ICALP.2019.149.pdf
multi-armed bandits
wireless network selection
periodicity in environment
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
150:1
150:14
10.4230/LIPIcs.ICALP.2019.150
article
On the Complexity of Local Graph Transformations
Scheideler, Christian
1
https://orcid.org/0000-0002-5278-528X
Setzer, Alexander
1
Paderborn University, Germany
We consider the problem of transforming a given graph G_s into a desired graph G_t by applying a minimum number of primitives from a particular set of local graph transformation primitives. These primitives are local in the sense that each node can apply them based on local knowledge and by affecting only its 1-neighborhood. Although the specific set of primitives we consider makes it possible to transform any (weakly) connected graph into any other (weakly) connected graph consisting of the same nodes, the primitives cannot disconnect the graph or introduce new nodes into the graph, making them ideal in the context of supervised overlay network transformations. We prove that computing a minimum sequence of primitive applications (even in a centralized setting) for arbitrary G_s and G_t is NP-hard, which we conjecture to hold for any set of local graph transformation primitives satisfying the aforementioned properties. On the other hand, we show that this problem admits a polynomial time algorithm with a constant approximation ratio.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.150/LIPIcs.ICALP.2019.150.pdf
Graph transformations
NP-hardness
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-07-04
132
151:1
151:14
10.4230/LIPIcs.ICALP.2019.151
article
Network Investment Games with Wardrop Followers
Schmand, Daniel
1
https://orcid.org/0000-0001-7776-3426
Schröder, Marc
2
https://orcid.org/0000-0002-0048-2826
Skopalik, Alexander
3
https://orcid.org/0000-0002-4950-8708
Goethe University Frankfurt, Germany
RWTH Aachen University, Germany
University of Twente, Netherlands
We study a two-sided network investment game consisting of two sets of players, called providers and users. The game proceeds in two stages. In the first stage, providers aim to maximize their profit by investing in bandwidth of cloud computing services. The investments of the providers yield a set of usable services for the users. In the second stage, each user wants to process a task and therefore selects a bundle of services so as to minimize the total processing time. We assume the total processing time to be separable over the chosen services and the processing time of each service to depend on the utilization of the service and the installed bandwidth. We provide insights on how competition between providers affects the total costs of the users and show that, when analyzing the set of subgame perfect Nash equilibria, every game on a series-parallel graph can be reduced to an equivalent single-edge game.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol132-icalp2019/LIPIcs.ICALP.2019.151/LIPIcs.ICALP.2019.151.pdf
Network Investment Game
Wardrop Equilibrium
Subgame Perfect Nash Equilibrium