LIPIcs, Volume 168

47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)




Event

ICALP 2020, July 8-11, 2020, Saarbrücken, Germany (Virtual Conference)

Editors

Artur Czumaj
  • University of Warwick, UK
Anuj Dawar
  • University of Cambridge, UK
Emanuela Merelli
  • University of Camerino, Italy

Publication Details

  • Published: 2020-06-29
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-138-2
  • DBLP: db/conf/icalp/icalp2020

Documents

Document
Complete Volume
LIPIcs, Volume 168, ICALP 2020, Complete Volume

Authors: Artur Czumaj, Anuj Dawar, and Emanuela Merelli


Abstract
LIPIcs, Volume 168, ICALP 2020, Complete Volume

Cite as

47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 1-2446, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Proceedings{czumaj_et_al:LIPIcs.ICALP.2020,
  title =	{{LIPIcs, Volume 168, ICALP 2020, Complete Volume}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{1--2446},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020},
  URN =		{urn:nbn:de:0030-drops-124067},
  doi =		{10.4230/LIPIcs.ICALP.2020},
  annote =	{Keywords: LIPIcs, Volume 168, ICALP 2020, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Artur Czumaj, Anuj Dawar, and Emanuela Merelli


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 0:i-0:xxxvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{czumaj_et_al:LIPIcs.ICALP.2020.0,
  author =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{0:i--0:xxxvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.0},
  URN =		{urn:nbn:de:0030-drops-124075},
  doi =		{10.4230/LIPIcs.ICALP.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
An Incentive Analysis of Some Bitcoin Fee Designs (Invited Talk)

Authors: Andrew Chi-Chih Yao


Abstract
In the Bitcoin system, miners are incentivized to join the system and validate transactions through fees paid by the users. A simple "pay your bid" auction has been employed to determine the transaction fees. Recently, Lavi, Sattath and Zohar [Lavi et al., 2019] proposed an alternative fee design, called the monopolistic price (MP) mechanism, aimed at improving the revenue for the miners. Although MP is not strictly incentive compatible (IC), they studied how close to IC the mechanism is for iid distributions, and conjectured that it is nearly IC asymptotically based on extensive simulations and some analysis. In this paper, we prove that the MP mechanism is nearly incentive compatible for any iid distribution as the number of users grows large. This holds true with respect to other attacks such as splitting bids. We also prove a conjecture in [Lavi et al., 2019] that MP dominates the RSOP auction in revenue (originally defined in [Goldberg et al., 2006] for digital goods). These results lend support to MP as a Bitcoin fee design candidate. Additionally, we explore some possible intrinsic correlations between incentive compatibility and revenue in general.
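
As a rough illustration of the monopolistic price (MP) idea discussed above (a sketch under our reading of [Lavi et al., 2019]; the function name and example are ours): the miner posts a single price that maximizes price times the number of bids at or above it, and every accepted user pays that price.

def monopolistic_price_revenue(bids):
    """Sketch: revenue of the monopolistic price (MP) mechanism.
    The miner picks a single price p equal to one of the bids; every user
    whose bid is at least p pays exactly p, and p is chosen to maximize
    p * |{i : bid_i >= p}|."""
    bids = sorted(bids, reverse=True)
    # If p is the (k+1)-th highest bid, exactly k+1 users pay p.
    return max((k + 1) * b for k, b in enumerate(bids))

# Example: bids 5, 4, 4, 1 -> best price is 4, three users pay, revenue 12.
print(monopolistic_price_revenue([5, 4, 4, 1]))  # 12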

Cite as

Andrew Chi-Chih Yao. An Incentive Analysis of Some Bitcoin Fee Designs (Invited Talk). In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 1:1-1:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{yao:LIPIcs.ICALP.2020.1,
  author =	{Yao, Andrew Chi-Chih},
  title =	{{An Incentive Analysis of Some Bitcoin Fee Designs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{1:1--1:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.1},
  URN =		{urn:nbn:de:0030-drops-124087},
  doi =		{10.4230/LIPIcs.ICALP.2020.1},
  annote =	{Keywords: Bitcoin, blockchain, incentive compatibility, maximum revenue, mechanism design}
}
Document
Invited Talk
Sketching Graphs and Combinatorial Optimization (Invited Talk)

Authors: Robert Krauthgamer


Abstract
Graph-sketching algorithms summarize an input graph G in a manner that suffices to later answer (perhaps approximately) one or more optimization problems on G, like distances, cuts, and matchings. Two famous examples are the Gomory-Hu tree, which represents all the minimum st-cuts in a graph G using a tree on the same vertex set V(G); and the cut-sparsifier of Benczúr and Karger, which is a sparse graph (often a reweighted subgraph) that approximates every cut in G within factor 1±ε. Another genre of these problems limits the queries to designated terminal vertices, denoted T ⊆ V(G), and the sketch size depends on |T| instead of |V(G)|. The talk will survey this topic, particularly cut and flow problems such as the three examples above. Currently, most known sketches are based on a graph representation, often called edge and vertex sparsification, which leaves room for potential improvements like smaller storage by using another representation, and faster running time to answer a query. These algorithms employ a host of techniques, ranging from combinatorial methods, like graph partitioning and edge or vertex sampling, to standard tools in data-stream algorithms and in sparse recovery. There are also several lower bounds known, either combinatorial (for the graph representation) or based on communication complexity and information theory. Many of the recent efforts focus on characterizing the tradeoff between accuracy and sketch size, yet many intriguing and very accessible problems are still open, and I will describe them in the talk.

Cite as

Robert Krauthgamer. Sketching Graphs and Combinatorial Optimization (Invited Talk). In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{krauthgamer:LIPIcs.ICALP.2020.2,
  author =	{Krauthgamer, Robert},
  title =	{{Sketching Graphs and Combinatorial Optimization}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.2},
  URN =		{urn:nbn:de:0030-drops-124090},
  doi =		{10.4230/LIPIcs.ICALP.2020.2},
  annote =	{Keywords: Sketching, edge sparsification, vertex sparsification, Gomory-Hu tree, mimicking networks, graph sampling, succinct data structures}
}
Document
Invited Talk
How to Play in Infinite MDPs (Invited Talk)

Authors: Stefan Kiefer, Richard Mayr, Mahsa Shirmohammadi, Patrick Totzke, and Dominik Wojtczak


Abstract
Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with finite state space it is known that for a wide range of objectives there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, optimal strategies may not exist, and optimal or ε-optimal strategies may require (possibly infinite) memory. In this paper we consider qualitative objectives: reachability, safety, (co-)Büchi, and other parity objectives. We aim at giving an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs.

Cite as

Stefan Kiefer, Richard Mayr, Mahsa Shirmohammadi, Patrick Totzke, and Dominik Wojtczak. How to Play in Infinite MDPs (Invited Talk). In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 3:1-3:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kiefer_et_al:LIPIcs.ICALP.2020.3,
  author =	{Kiefer, Stefan and Mayr, Richard and Shirmohammadi, Mahsa and Totzke, Patrick and Wojtczak, Dominik},
  title =	{{How to Play in Infinite MDPs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{3:1--3:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.3},
  URN =		{urn:nbn:de:0030-drops-124103},
  doi =		{10.4230/LIPIcs.ICALP.2020.3},
  annote =	{Keywords: Markov decision processes}
}
Document
Track A: Algorithms, Complexity and Games
Scheduling Lower Bounds via AND Subset Sum

Authors: Amir Abboud, Karl Bringmann, Danny Hermelin, and Dvir Shabtay


Abstract
Given N instances (X_1,t_1),…,(X_N,t_N) of Subset Sum, the AND Subset Sum problem asks to determine whether all of these instances are yes-instances; that is, whether each set of integers X_i has a subset that sums up to the target integer t_i. We prove that this problem cannot be solved in time Õ((N ⋅ t_max)^{1-ε}), for t_max = max_i t_i and any ε > 0, assuming the ∀ ∃ Strong Exponential Time Hypothesis (∀∃-SETH). We then use this result to exclude Õ(n+P_max⋅n^{1-ε})-time algorithms for several scheduling problems on n jobs with maximum processing time P_max, assuming ∀∃-SETH. These include classical problems such as 1||∑ w_jU_j, the problem of minimizing the total weight of tardy jobs on a single machine, and P₂||∑ U_j, the problem of minimizing the number of tardy jobs on two identical parallel machines.
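
For concreteness, a brute-force Python sketch of the AND Subset Sum problem as defined above (exponential time, purely illustrative; names are ours):

from itertools import combinations

def subset_sum(X, t):
    """Naive check: does some subset of X sum to t? (exponential time)"""
    return any(sum(c) == t for r in range(len(X) + 1)
               for c in combinations(X, r))

def and_subset_sum(instances):
    """AND Subset Sum: are all N instances (X_i, t_i) yes-instances?"""
    return all(subset_sum(X, t) for X, t in instances)

# Example with N = 2 instances; both are yes-instances, so the answer is True.
print(and_subset_sum([([3, 5, 7], 12), ([1, 2, 4], 6)]))  # True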

Cite as

Amir Abboud, Karl Bringmann, Danny Hermelin, and Dvir Shabtay. Scheduling Lower Bounds via AND Subset Sum. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2020.4,
  author =	{Abboud, Amir and Bringmann, Karl and Hermelin, Danny and Shabtay, Dvir},
  title =	{{Scheduling Lower Bounds via AND Subset Sum}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.4},
  URN =		{urn:nbn:de:0030-drops-124119},
  doi =		{10.4230/LIPIcs.ICALP.2020.4},
  annote =	{Keywords: SETH, fine grained complexity, Subset Sum, scheduling}
}
Document
Track A: Algorithms, Complexity and Games
On the Fine-Grained Complexity of Parity Problems

Authors: Amir Abboud, Shon Feller, and Oren Weimann


Abstract
We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e. if the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack. A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic-equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly-harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.

Cite as

Amir Abboud, Shon Feller, and Oren Weimann. On the Fine-Grained Complexity of Parity Problems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2020.5,
  author =	{Abboud, Amir and Feller, Shon and Weimann, Oren},
  title =	{{On the Fine-Grained Complexity of Parity Problems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.5},
  URN =		{urn:nbn:de:0030-drops-124127},
  doi =		{10.4230/LIPIcs.ICALP.2020.5},
  annote =	{Keywords: All-pairs shortest paths, Fine-grained complexity, Diameter, Distance product, Min-plus convolution, Parity problems}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Streaming Algorithms for Submodular Maximization with Cardinality Constraints

Authors: Naor Alaluf, Alina Ene, Moran Feldman, Huy L. Nguyen, and Andrew Suh


Abstract
We study the problem of maximizing a non-monotone submodular function subject to a cardinality constraint in the streaming model. Our main contributions are two single-pass (semi-)streaming algorithms that use Õ(k)⋅poly(1/ε) memory, where k is the size constraint. At the end of the stream, both our algorithms post-process their data structures using any offline algorithm for submodular maximization, and obtain a solution whose approximation guarantee is α/(1+α)-ε, where α is the approximation of the offline algorithm. If we use an exact (exponential time) post-processing algorithm, this leads to a (1/2-ε)-approximation (which is nearly optimal). If we post-process with the algorithm of [Niv Buchbinder and Moran Feldman, 2019], which achieves the state-of-the-art offline approximation guarantee of α = 0.385, we obtain a 0.2779-approximation in polynomial time, improving over the previously best polynomial-time approximation of 0.1715 due to [Feldman et al., 2018]. One of our algorithms is combinatorial and enjoys fast update and overall running times. Our other algorithm is based on the multilinear extension, enjoys an improved space complexity, and can be made deterministic in some settings of interest.
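
As a quick check of the constants quoted above (our arithmetic; the additive ε is omitted), the stated guarantee α/(1+α) evaluates to
\[
  \left.\frac{\alpha}{1+\alpha}\right|_{\alpha=1} = \frac{1}{2},
  \qquad
  \left.\frac{\alpha}{1+\alpha}\right|_{\alpha=0.385} = \frac{0.385}{1.385} \approx 0.278.
\]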

Cite as

Naor Alaluf, Alina Ene, Moran Feldman, Huy L. Nguyen, and Andrew Suh. Optimal Streaming Algorithms for Submodular Maximization with Cardinality Constraints. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{alaluf_et_al:LIPIcs.ICALP.2020.6,
  author =	{Alaluf, Naor and Ene, Alina and Feldman, Moran and Nguyen, Huy L. and Suh, Andrew},
  title =	{{Optimal Streaming Algorithms for Submodular Maximization with Cardinality Constraints}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.6},
  URN =		{urn:nbn:de:0030-drops-124137},
  doi =		{10.4230/LIPIcs.ICALP.2020.6},
  annote =	{Keywords: Submodular maximization, streaming algorithms, cardinality constraint}
}
Document
Track A: Algorithms, Complexity and Games
Dynamic Averaging Load Balancing on Cycles

Authors: Dan Alistarh, Giorgi Nadiradze, and Amirmojtaba Sabour


Abstract
We consider the following dynamic load-balancing process: given an underlying graph G with n nodes, in each step t≥ 0, one unit of load is created, and placed at a randomly chosen graph node. In the same step, the chosen node picks a random neighbor, and the two nodes balance their loads by averaging them. We are interested in the expected gap between the minimum and maximum loads at nodes as the process progresses, and its dependence on n and on the graph structure. Variants of the above graphical balanced allocation process have been studied previously by Peres, Talwar, and Wieder [Peres et al., 2015], and by Sauerwald and Sun [Sauerwald and Sun, 2015]. These authors left as open the question of characterizing the gap in the case of cycle graphs in the dynamic case, where weights are created during the algorithm’s execution. For this case, the only known upper bound is of 𝒪(n log n), following from a majorization argument due to [Peres et al., 2015], which analyzes a related graphical allocation process. In this paper, we provide an upper bound of 𝒪 (√n log n) on the expected gap of the above process for cycles of length n. We introduce a new potential analysis technique, which enables us to bound the difference in load between k-hop neighbors on the cycle, for any k ≤ n/2. We complement this with a "gap covering" argument, which bounds the maximum value of the gap by bounding its value across all possible subsets of a certain structure, and recursively bounding the gaps within each subset. We provide analytical and experimental evidence that our upper bound on the gap is tight up to a logarithmic factor.
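
A minimal simulation of the process described above may help fix the model (our code; the parameters are arbitrary): each step places one unit of load at a uniformly random node of the cycle, which then averages its load with a random neighbour, and we track the max-min gap.

import random

def simulate_cycle_averaging(n, steps, seed=0):
    """Simulate dynamic averaging load balancing on the cycle C_n and
    return the final gap between the maximum and minimum loads."""
    random.seed(seed)
    load = [0.0] * n
    for _ in range(steps):
        i = random.randrange(n)                        # node receiving one unit of load
        load[i] += 1.0
        j = random.choice([(i - 1) % n, (i + 1) % n])  # random cycle neighbour
        avg = (load[i] + load[j]) / 2.0                # the two nodes average their loads
        load[i] = load[j] = avg
    return max(load) - min(load)

print(simulate_cycle_averaging(n=100, steps=10_000))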

Cite as

Dan Alistarh, Giorgi Nadiradze, and Amirmojtaba Sabour. Dynamic Averaging Load Balancing on Cycles. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{alistarh_et_al:LIPIcs.ICALP.2020.7,
  author =	{Alistarh, Dan and Nadiradze, Giorgi and Sabour, Amirmojtaba},
  title =	{{Dynamic Averaging Load Balancing on Cycles}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.7},
  URN =		{urn:nbn:de:0030-drops-124142},
  doi =		{10.4230/LIPIcs.ICALP.2020.7},
  annote =	{Keywords: Algorithms, Load Balancing}
}
Document
Track A: Algorithms, Complexity and Games
Asynchronous Majority Dynamics in Preferential Attachment Trees

Authors: Maryam Bahrani, Nicole Immorlica, Divyarthi Mohan, and S. Matthew Weinberg


Abstract
We study information aggregation in networks where agents make binary decisions (labeled incorrect or correct). Agents initially form independent private beliefs about the better decision, which is correct with probability 1/2+δ. The dynamics we consider are asynchronous (each round, a single agent updates their announced decision) and non-Bayesian (agents simply copy the majority announcements among their neighbors, tie-breaking in favor of their private signal). Our main result proves that when the network is a tree formed according to the preferential attachment model [Barabási and Albert, 1999], with high probability, the process stabilizes in a correct majority within O(n log n/log log n) rounds. We extend our results to other tree structures, including balanced M-ary trees for any M.

Cite as

Maryam Bahrani, Nicole Immorlica, Divyarthi Mohan, and S. Matthew Weinberg. Asynchronous Majority Dynamics in Preferential Attachment Trees. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bahrani_et_al:LIPIcs.ICALP.2020.8,
  author =	{Bahrani, Maryam and Immorlica, Nicole and Mohan, Divyarthi and Weinberg, S. Matthew},
  title =	{{Asynchronous Majority Dynamics in Preferential Attachment Trees}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.8},
  URN =		{urn:nbn:de:0030-drops-124156},
  doi =		{10.4230/LIPIcs.ICALP.2020.8},
  annote =	{Keywords: Opinion Dynamics, Information Cascades, Preferential Attachment, Majority Dynamics, non-Bayesian Asynchronous Learning, Stochastic Processes}
}
Document
Track A: Algorithms, Complexity and Games
The Power of Many Samples in Query Complexity

Authors: Andrew Bassilakis, Andrew Drucker, Mika Göös, Lunjia Hu, Weiyun Ma, and Li-Yang Tan


Abstract
The randomized query complexity 𝖱(f) of a boolean function f: {0,1}ⁿ → {0,1} is famously characterized (via Yao’s minimax) by the least number of queries needed to distinguish a distribution 𝒟₀ over 0-inputs from a distribution 𝒟₁ over 1-inputs, maximized over all pairs (𝒟₀,𝒟₁). We ask: Does this task become easier if we allow query access to infinitely many samples from either 𝒟₀ or 𝒟₁? We show the answer is no: There exists a hard pair (𝒟₀,𝒟₁) such that distinguishing 𝒟₀^∞ from 𝒟₁^∞ requires Θ(𝖱(f)) many queries. As an application, we show that for any composed function f∘g we have 𝖱(f∘g) ≥ Ω(fbs(f)𝖱(g)) where fbs denotes fractional block sensitivity.

Cite as

Andrew Bassilakis, Andrew Drucker, Mika Göös, Lunjia Hu, Weiyun Ma, and Li-Yang Tan. The Power of Many Samples in Query Complexity. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bassilakis_et_al:LIPIcs.ICALP.2020.9,
  author =	{Bassilakis, Andrew and Drucker, Andrew and G\"{o}\"{o}s, Mika and Hu, Lunjia and Ma, Weiyun and Tan, Li-Yang},
  title =	{{The Power of Many Samples in Query Complexity}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.9},
  URN =		{urn:nbn:de:0030-drops-124163},
  doi =		{10.4230/LIPIcs.ICALP.2020.9},
  annote =	{Keywords: Query complexity, Composition theorems}
}
Document
Track A: Algorithms, Complexity and Games
Medians in Median Graphs and Their Cube Complexes in Linear Time

Authors: Laurine Bénéteau, Jérémie Chalopin, Victor Chepoi, and Yann Vaxès


Abstract
The median of a set of vertices P of a graph G is the set of all vertices x of G minimizing the sum of distances from x to all vertices of P. In this paper, we present a linear time algorithm to compute medians in median graphs, improving over the existing quadratic time algorithm. We also present a linear time algorithm to compute medians in the 𝓁₁-cube complexes associated with median graphs. Median graphs constitute the principal class of graphs investigated in metric graph theory and have a rich geometric and combinatorial structure. Our algorithm is based on the majority rule characterization of medians in median graphs and on a fast computation of parallelism classes of edges (Θ-classes or hyperplanes) via Lexicographic Breadth First Search (LexBFS). To prove the correctness of our algorithm, we show that any LexBFS ordering of the vertices of G satisfies the following fellow traveler property of independent interest: the parents of any two adjacent vertices of G are also adjacent.
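
For reference, the quadratic-time baseline that the paper improves on can be sketched directly from the definition (our code; plain BFS over an unweighted graph given as an adjacency dict):

from collections import deque

def bfs_distances(adj, source):
    """Unweighted single-source distances by breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def median_set(adj, P):
    """All vertices x minimising the sum of distances from x to P.
    This is the quadratic-time baseline that the paper improves to linear
    time on median graphs."""
    cost = {}
    for x in adj:
        d = bfs_distances(adj, x)
        cost[x] = sum(d[p] for p in P)
    best = min(cost.values())
    return {x for x, c in cost.items() if c == best}

# Example on a path (paths and trees are median graphs): the median of
# P = {0, 2, 3} in the path 0-1-2-3 is the single vertex 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(median_set(path, [0, 2, 3]))  # {2}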

Cite as

Laurine Bénéteau, Jérémie Chalopin, Victor Chepoi, and Yann Vaxès. Medians in Median Graphs and Their Cube Complexes in Linear Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{beneteau_et_al:LIPIcs.ICALP.2020.10,
  author =	{B\'{e}n\'{e}teau, Laurine and Chalopin, J\'{e}r\'{e}mie and Chepoi, Victor and Vax\`{e}s, Yann},
  title =	{{Medians in Median Graphs and Their Cube Complexes in Linear Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{10:1--10:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.10},
  URN =		{urn:nbn:de:0030-drops-124171},
  doi =		{10.4230/LIPIcs.ICALP.2020.10},
  annote =	{Keywords: Median Graph, CAT(0) Cube Complex, Median Problem, Linear Time Algorithm, LexBFS}
}
Document
Track A: Algorithms, Complexity and Games
Graph Coloring via Degeneracy in Streaming and Other Space-Conscious Models

Authors: Suman K. Bera, Amit Chakrabarti, and Prantar Ghosh


Abstract
We study the problem of coloring a given graph using a small number of colors in several well-established models of computation for big data. These include the data streaming model, the general graph query model, the massively parallel communication (MPC) model, and the CONGESTED-CLIQUE and the LOCAL models of distributed computation. On the one hand, we give algorithms with sublinear complexity, for the appropriate notion of complexity in each of these models. Our algorithms color a graph G using κ(G)⋅(1+o(1)) colors, where κ(G) is the degeneracy of G: this parameter is closely related to the arboricity α(G). As a function of κ(G) alone, our results are close to best possible, since the optimal number of colors is κ(G)+1. For several classes of graphs, including real-world "big graphs," our results improve upon the number of colors used by the various (Δ(G)+1)-coloring algorithms known for these models, where Δ(G) is the maximum degree in G, since Δ(G) ⩾ κ(G) and can in fact be arbitrarily larger than κ(G). On the other hand, we establish certain lower bounds indicating that sublinear algorithms probably cannot go much further. In particular, we prove that any randomized coloring algorithm that uses at most κ(G)+O(1) colors would require Ω(n²) storage in the one pass streaming model, and Ω(n²) many queries in the general graph query model, where n is the number of vertices in the graph. These lower bounds hold even when the value of κ(G) is known in advance; at the same time, our upper bounds do not require κ(G) to be given in advance.
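
To make the role of degeneracy concrete, here is the standard offline greedy routine (our sketch, not one of the paper's space-conscious algorithms): computing a degeneracy ordering and colouring against it uses at most κ(G)+1 colours.

def degeneracy_ordering(adj):
    """Repeatedly remove a remaining vertex of minimum degree; the largest
    degree seen at removal time is the degeneracy kappa(G)."""
    deg = {v: len(adj[v]) for v in adj}
    order, kappa = [], 0
    while deg:
        v = min(deg, key=deg.get)
        kappa = max(kappa, deg[v])
        order.append(v)
        del deg[v]
        for u in adj[v]:
            if u in deg:
                deg[u] -= 1
    return order, kappa

def degeneracy_coloring(adj):
    """Greedy colouring in reverse removal order: every vertex sees at most
    kappa already-coloured neighbours, so kappa+1 colours always suffice."""
    order, kappa = degeneracy_ordering(adj)
    color = {}
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(kappa + 1) if c not in used)
    return color, kappa

# Example: the 5-cycle has degeneracy 2, so at most 3 colours are used.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(degeneracy_coloring(cycle5))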

Cite as

Suman K. Bera, Amit Chakrabarti, and Prantar Ghosh. Graph Coloring via Degeneracy in Streaming and Other Space-Conscious Models. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 11:1-11:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bera_et_al:LIPIcs.ICALP.2020.11,
  author =	{Bera, Suman K. and Chakrabarti, Amit and Ghosh, Prantar},
  title =	{{Graph Coloring via Degeneracy in Streaming and Other Space-Conscious Models}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{11:1--11:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.11},
  URN =		{urn:nbn:de:0030-drops-124182},
  doi =		{10.4230/LIPIcs.ICALP.2020.11},
  annote =	{Keywords: Data streaming, Graph coloring, Sublinear algorithms, Massively parallel communication, Distributed algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Improved Bounds for Matching in Random-Order Streams

Authors: Aaron Bernstein


Abstract
We study the problem of computing an approximate maximum cardinality matching in the semi-streaming model when edges arrive in a random order. In the semi-streaming model, the edges of the input graph G = (V,E) are given as a stream e₁, …, e_m, and the algorithm is allowed to make a single pass over this stream while using O(n polylog(n)) space (m = |E| and n = |V|). If the order of edges is adversarial, a simple single-pass greedy algorithm yields a 1/2-approximation in O(n) space; achieving a better approximation in adversarial streams remains an elusive open question. A line of recent work shows that one can improve upon the 1/2-approximation if the edges of the stream arrive in a random order. The state of the art for this model is two-fold: Assadi et al. [SODA 2019] show how to compute a 2/3(∼.66)-approximate matching, but the space requirement is O(n^1.5 polylog(n)). Very recently, Farhadi et al. [SODA 2020] presented an algorithm with the desired space usage of O(n polylog(n)), but a worse approximation ratio of 6/11(∼.545), or 3/5(=.6) in bipartite graphs. In this paper, we present an algorithm that computes a 2/3(∼.66)-approximate matching using only O(n log(n)) space, improving upon both results above. We also note that for adversarial streams, a lower bound of Kapralov [SODA 2013] shows that any algorithm that achieves a 1-1/e(∼.63)-approximation requires n^{1+Ω(1/log log(n))} space. Our result for random-order streams is the first to go beyond the adversarial-order lower bound, thus establishing that computing a maximum matching is provably easier in random-order streams.
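
For context, the 1/2-approximate adversarial-order baseline mentioned above is simply single-pass greedy; a minimal sketch (ours):

def greedy_streaming_matching(edge_stream):
    """Single-pass greedy matching: keep an edge iff both endpoints are still
    free. Uses O(n) space and is 1/2-approximate on adversarial streams."""
    matched, matching = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example stream (a path a-b-c-d): greedy keeps only 1 of the 2 edges of the
# maximum matching, illustrating the factor-1/2 loss.
print(greedy_streaming_matching([("b", "c"), ("a", "b"), ("c", "d")]))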

Cite as

Aaron Bernstein. Improved Bounds for Matching in Random-Order Streams. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 12:1-12:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bernstein:LIPIcs.ICALP.2020.12,
  author =	{Bernstein, Aaron},
  title =	{{Improved Bounds for Matching in Random-Order Streams}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{12:1--12:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.12},
  URN =		{urn:nbn:de:0030-drops-124194},
  doi =		{10.4230/LIPIcs.ICALP.2020.12},
  annote =	{Keywords: Graph Algorithms, Sublinear Algorithms, Matching, Streaming}
}
Document
Track A: Algorithms, Complexity and Games
An Optimal Algorithm for Online Multiple Knapsack

Authors: Marcin Bienkowski, Maciej Pacut, and Krzysztof Piecuch


Abstract
In the online multiple knapsack problem, an algorithm faces a stream of items, and each item has to be either rejected or stored irrevocably in one of n bins (knapsacks) of equal size. The gain of an algorithm is equal to the sum of sizes of accepted items and the goal is to maximize the total gain. So far, for this natural problem, the best solution was the 0.5-competitive algorithm FirstFit (the result holds for any n ≥ 2). We present the first algorithm that beats this ratio, achieving the competitive ratio of 1/(1+ln(2))-O(1/n) ≈ 0.5906 - O(1/n). Our algorithm is deterministic and optimal up to lower-order terms, as the upper bound of 1/(1+ln(2)) for randomized solutions was given previously by Cygan et al. [TOCS 2016].
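
The 0.5-competitive baseline FirstFit referenced above can be sketched as follows (our code; we normalise the bins to unit capacity and item sizes to (0, 1]):

def first_fit_multiple_knapsack(items, n_bins):
    """FirstFit baseline for online multiple knapsack: place each arriving
    item into the first bin (of unit capacity) where it fits, otherwise
    reject it. Returns the total gain (sum of sizes of accepted items)."""
    free = [1.0] * n_bins
    gain = 0.0
    for size in items:              # items arrive online, sizes in (0, 1]
        for b in range(n_bins):
            if size <= free[b] + 1e-12:
                free[b] -= size
                gain += size
                break               # accepted; otherwise the item is rejected
    return gain

print(first_fit_multiple_knapsack([0.6, 0.5, 0.5, 0.4], n_bins=2))  # 2.0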

Cite as

Marcin Bienkowski, Maciej Pacut, and Krzysztof Piecuch. An Optimal Algorithm for Online Multiple Knapsack. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bienkowski_et_al:LIPIcs.ICALP.2020.13,
  author =	{Bienkowski, Marcin and Pacut, Maciej and Piecuch, Krzysztof},
  title =	{{An Optimal Algorithm for Online Multiple Knapsack}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.13},
  URN =		{urn:nbn:de:0030-drops-124207},
  doi =		{10.4230/LIPIcs.ICALP.2020.13},
  annote =	{Keywords: online knapsack, multiple knapsacks, bin packing, competitive analysis}
}
Document
Track A: Algorithms, Complexity and Games
Space Efficient Construction of Lyndon Arrays in Linear Time

Authors: Philip Bille, Jonas Ellert, Johannes Fischer, Inge Li Gørtz, Florian Kurpicz, J. Ian Munro, and Eva Rotenberg


Abstract
Given a string S of length n, its Lyndon array identifies for each suffix S[i..n] the next lexicographically smaller suffix S[j..n], i.e. the minimal index j > i with S[i..n] ≻ S[j..n]. Apart from its plain (n log₂ n)-bit array representation, the Lyndon array can also be encoded as a succinct parentheses sequence that requires only 2n bits of space. While linear time construction algorithms for both representations exist, it has previously been unknown if the same time bound can be achieved with less than Ω(n lg n) bits of additional working space. We show that, in fact, o(n) additional bits are sufficient to compute the succinct 2n-bit version of the Lyndon array in linear time. For the plain (n log₂ n)-bit version, we only need 𝒪(1) additional words to achieve linear time. Our space efficient construction algorithm makes the Lyndon array more accessible as a fundamental data structure in applications like full-text indexing.
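
To illustrate the object being built, here is a naive quadratic-time computation of the "next lexicographically smaller suffix" array straight from the definition above (our sketch; the paper's algorithms achieve linear time in small working space):

def lyndon_array_naive(S):
    """For each position i (0-indexed), return the smallest j > i such that
    the suffix S[j:] is lexicographically smaller than S[i:], or len(S) if no
    such suffix exists. Quadratic time, straight from the definition."""
    n = len(S)
    nss = []
    for i in range(n):
        j = next((j for j in range(i + 1, n) if S[j:] < S[i:]), n)
        nss.append(j)
    return nss

print(lyndon_array_naive("banana"))  # [1, 3, 3, 5, 5, 6]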

Cite as

Philip Bille, Jonas Ellert, Johannes Fischer, Inge Li Gørtz, Florian Kurpicz, J. Ian Munro, and Eva Rotenberg. Space Efficient Construction of Lyndon Arrays in Linear Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bille_et_al:LIPIcs.ICALP.2020.14,
  author =	{Bille, Philip and Ellert, Jonas and Fischer, Johannes and G{\o}rtz, Inge Li and Kurpicz, Florian and Munro, J. Ian and Rotenberg, Eva},
  title =	{{Space Efficient Construction of Lyndon Arrays in Linear Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.14},
  URN =		{urn:nbn:de:0030-drops-124211},
  doi =		{10.4230/LIPIcs.ICALP.2020.14},
  annote =	{Keywords: String algorithms, string suffixes, succinct data structures, Lyndon word, Lyndon array, nearest smaller values, nearest smaller suffixes}
}
Document
Track A: Algorithms, Complexity and Games
New Fault Tolerant Subset Preservers

Authors: Greg Bodwin, Keerti Choudhary, Merav Parter, and Noa Shahar


Abstract
Fault tolerant distance preservers are sparse subgraphs that preserve distances between given pairs of nodes under edge or vertex failures. In this paper, we present the first non-trivial constructions of subset distance preservers, which preserve all distances among a subset of nodes S, that can handle either an edge or a vertex fault.
- For an n-vertex undirected weighted graph or weighted DAG G = (V,E) and S ⊆ V, we present a construction of a subset preserver with Õ(|S|n) edges that is resilient to a single fault. In the single pair case (|S| = 2), the bound improves to O(n). We further provide a nearly-matching lower bound of Ω(|S|n) in either setting, and we show that the same lower bound holds conditionally even if attention is restricted to unweighted graphs.
- For an n-vertex directed unweighted graph G = (V,E) and r ∈ V, S ⊆ V, we present a construction of a preserver of distances in {r} × S with Õ(n^{4/3} |S|^{5/6}) edges that is resilient to a single fault. In the case |S| = 1 the bound improves to O(n^{4/3}), and for this case we provide another matching conditional lower bound.
- For an n-vertex directed weighted graph G = (V, E) and r ∈ V, S ⊆ V, we present a construction of a preserver of distances in {r} × S with Õ(n^{3/2} |S|^{3/4}) edges that is resilient to a single vertex fault. (It was proved in [Greg Bodwin et al., 2017] that the bound improves to O(n^{3/2}) when |S| = 1, and that this is conditionally tight.)

Cite as

Greg Bodwin, Keerti Choudhary, Merav Parter, and Noa Shahar. New Fault Tolerant Subset Preservers. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 15:1-15:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bodwin_et_al:LIPIcs.ICALP.2020.15,
  author =	{Bodwin, Greg and Choudhary, Keerti and Parter, Merav and Shahar, Noa},
  title =	{{New Fault Tolerant Subset Preservers}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{15:1--15:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.15},
  URN =		{urn:nbn:de:0030-drops-124222},
  doi =		{10.4230/LIPIcs.ICALP.2020.15},
  annote =	{Keywords: Subset Preservers, Distances, Fault-tolerance}
}
Document
Track A: Algorithms, Complexity and Games
Bridge-Depth Characterizes Which Structural Parameterizations of Vertex Cover Admit a Polynomial Kernel

Authors: Marin Bougeret, Bart M. P. Jansen, and Ignasi Sau


Abstract
We study the kernelization complexity of structural parameterizations of the Vertex Cover problem. Here, the goal is to find a polynomial-time preprocessing algorithm that can reduce any instance (G,k) of the Vertex Cover problem to an equivalent one, whose size is polynomial in the size of a pre-determined complexity parameter of G. A long line of previous research deals with parameterizations based on the number of vertex deletions needed to reduce G to a member of a simple graph class ℱ, such as forests, graphs of bounded tree-depth, and graphs of maximum degree two. We set out to find the most general graph classes ℱ for which Vertex Cover parameterized by the vertex-deletion distance of the input graph to ℱ, admits a polynomial kernelization. We give a complete characterization of the minor-closed graph families ℱ for which such a kernelization exists. We introduce a new graph parameter called bridge-depth, and prove that a polynomial kernelization exists if and only if ℱ has bounded bridge-depth. The proof is based on an interesting connection between bridge-depth and the size of minimal blocking sets in graphs, which are vertex sets whose removal decreases the independence number.

Cite as

Marin Bougeret, Bart M. P. Jansen, and Ignasi Sau. Bridge-Depth Characterizes Which Structural Parameterizations of Vertex Cover Admit a Polynomial Kernel. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 16:1-16:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bougeret_et_al:LIPIcs.ICALP.2020.16,
  author =	{Bougeret, Marin and Jansen, Bart M. P. and Sau, Ignasi},
  title =	{{Bridge-Depth Characterizes Which Structural Parameterizations of Vertex Cover Admit a Polynomial Kernel}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{16:1--16:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.16},
  URN =		{urn:nbn:de:0030-drops-124238},
  doi =		{10.4230/LIPIcs.ICALP.2020.16},
  annote =	{Keywords: vertex cover, parameterized complexity, polynomial kernel, structural parameterization, bridge-depth}
}
Document
Track A: Algorithms, Complexity and Games
The Complexity of Promise SAT on Non-Boolean Domains

Authors: Alex Brandts, Marcin Wrochna, and Stanislav Živný


Abstract
While 3-SAT is NP-hard, 2-SAT is solvable in polynomial time. Austrin, Guruswami, and Håstad [FOCS'14/SICOMP'17] proved a result known as "(2+ε)-SAT is NP-hard". They showed that the problem of distinguishing k-CNF formulas that are g-satisfiable (i.e. some assignment satisfies at least g literals in every clause) from those that are not even 1-satisfiable is NP-hard if g/k < 1/2 and is in P otherwise. We study a generalisation of SAT on arbitrary finite domains, with clauses that are disjunctions of unary constraints, and establish analogous behaviour. Thus we give a dichotomy for a natural fragment of promise constraint satisfaction problems (PCSPs) on arbitrary finite domains.

Cite as

Alex Brandts, Marcin Wrochna, and Stanislav Živný. The Complexity of Promise SAT on Non-Boolean Domains. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 17:1-17:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{brandts_et_al:LIPIcs.ICALP.2020.17,
  author =	{Brandts, Alex and Wrochna, Marcin and \v{Z}ivn\'{y}, Stanislav},
  title =	{{The Complexity of Promise SAT on Non-Boolean Domains}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.17},
  URN =		{urn:nbn:de:0030-drops-124241},
  doi =		{10.4230/LIPIcs.ICALP.2020.17},
  annote =	{Keywords: promise constraint satisfaction, PCSP, polymorphisms, algebraic approach, label cover}
}
Document
Track A: Algorithms, Complexity and Games
A Simple Dynamization of Trapezoidal Point Location in Planar Subdivisions

Authors: Milutin Brankovic, Nikola Grujic, André van Renssen, and Martin P. Seybold


Abstract
We study how to dynamize the Trapezoidal Search Tree (TST) - a well known randomized point location structure for planar subdivisions of kinetic line segments. Our approach naturally extends incremental leaf-level insertions to recursive methods and allows adaptation for the online setting. The dynamization carries over to the Trapezoidal Search DAG (TSD), which has linear size and logarithmic point location costs with high probability. On a set S of non-crossing segments, each TST update performs expected 𝒪(log²|S|) operations and each TSD update performs expected 𝒪(log |S|) operations. We demonstrate the practicality of our method with an open-source implementation, based on the Computational Geometry Algorithms Library, and experiments on the update performance.

Cite as

Milutin Brankovic, Nikola Grujic, André van Renssen, and Martin P. Seybold. A Simple Dynamization of Trapezoidal Point Location in Planar Subdivisions. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{brankovic_et_al:LIPIcs.ICALP.2020.18,
  author =	{Brankovic, Milutin and Grujic, Nikola and van Renssen, Andr\'{e} and Seybold, Martin P.},
  title =	{{A Simple Dynamization of Trapezoidal Point Location in Planar Subdivisions}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.18},
  URN =		{urn:nbn:de:0030-drops-124253},
  doi =		{10.4230/LIPIcs.ICALP.2020.18},
  annote =	{Keywords: Dynamization, Trapezoidal Search Tree, Trapezoidal Search DAG, Backward Analysis, Point Location, Planar Subdivision, Treap, Order-maintenance}
}
Document
Track A: Algorithms, Complexity and Games
Faster Minimization of Tardy Processing Time on a Single Machine

Authors: Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz


Abstract
This paper is concerned with the 1||∑ p_jU_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also a very important problem from a theoretical point of view as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well-known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial time algorithms. The fastest known pseudo-polynomial time algorithm for the problem is the famous Lawler and Moore algorithm which runs in O(P ⋅ n) time, where P is the total processing time of all n jobs in the input. This algorithm was developed in the late 1960s and has yet to be improved to date. In this paper we develop two new algorithms for 1||∑ p_jU_j, each improving on Lawler and Moore’s algorithm in a different scenario:
- Our first algorithm runs in Õ(P^{7/4}) time, and outperforms Lawler and Moore’s algorithm in instances where n = ω̃(P^{3/4}).
- Our second algorithm runs in Õ(min{P ⋅ D_#, P + D}) time, where D_# is the number of different due dates in the instance, and D is the sum of all different due dates. This algorithm improves on Lawler and Moore’s algorithm when n = ω̃(D_#) or n = ω̃(D/P). Further, it extends the known Õ(P) algorithm for the single due date special case of 1||∑ p_jU_j in a natural way.
Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max,min)-convolution which is interesting in its own right.
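
For orientation, a textbook pseudo-polynomial dynamic program for 1||∑ p_jU_j in the spirit of Lawler and Moore (our sketch, O(n⋅P) time; not one of the faster algorithms of the paper): process jobs in earliest-due-date order and track which totals of on-time processing are achievable.

def min_total_tardy_processing(jobs):
    """Pseudo-polynomial DP sketch for 1||sum p_j U_j, O(n*P) time.
    jobs: list of (p_j, d_j). Returns the minimum total processing time of
    tardy jobs, i.e. P minus the largest achievable on-time load."""
    jobs = sorted(jobs, key=lambda pd: pd[1])   # earliest-due-date order
    P = sum(p for p, _ in jobs)
    achievable = {0}                            # on-time loads achievable so far
    for p, d in jobs:
        # Job (p, d) can be added on time iff it completes by its due date.
        achievable |= {t + p for t in achievable if t + p <= d}
    return P - max(achievable)

# Example: jobs (2,3) and (4,6) can both run on time, job (3,5) is tardy.
print(min_total_tardy_processing([(2, 3), (3, 5), (4, 6)]))  # 3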

Cite as

Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz. Faster Minimization of Tardy Processing Time on a Single Machine. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2020.19,
  author =	{Bringmann, Karl and Fischer, Nick and Hermelin, Danny and Shabtay, Dvir and Wellnitz, Philip},
  title =	{{Faster Minimization of Tardy Processing Time on a Single Machine}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.19},
  URN =		{urn:nbn:de:0030-drops-124269},
  doi =		{10.4230/LIPIcs.ICALP.2020.19},
  annote =	{Keywords: Weighted number of tardy jobs, sumsets, convolutions}
}
Document
Track A: Algorithms, Complexity and Games
Fréchet Distance for Uncertain Curves

Authors: Kevin Buchin, Chenglin Fan, Maarten Löffler, Aleksandr Popov, Benjamin Raichel, and Marcel Roeloffzen


Abstract
In this paper we study a wide range of variants for computing the (discrete and continuous) Fréchet distance between uncertain curves. We define an uncertain curve as a sequence of uncertainty regions, where each region is a disk, a line segment, or a set of points. A realisation of a curve is a polyline connecting one point from each region. Given an uncertain curve and a second (certain or uncertain) curve, we seek to compute the lower and upper bound Fréchet distance, which are the minimum and maximum Fréchet distance for any realisations of the curves. We prove that both problems are NP-hard for the continuous Fréchet distance, and the upper bound problem remains hard for the discrete Fréchet distance. In contrast, the lower bound discrete Fréchet distance can be computed in polynomial time using dynamic programming. Furthermore, we show that computing the expected discrete or continuous Fréchet distance is #P-hard when the uncertainty regions are modelled as point sets or line segments. On the positive side, we argue that in any constant dimension there is an FPTAS for the lower bound problem when Δ/δ is polynomially bounded, where δ is the Fréchet distance and Δ bounds the diameter of the regions. We then argue there is a near-linear-time 3-approximation for the decision problem when the regions are convex and roughly δ-separated. Finally, we study the setting with Sakoe-Chiba bands, restricting the alignment of the two curves, and give polynomial-time algorithms for upper bound and expected (discrete) Fréchet distance for point-set-modelled uncertainty regions.
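
As background for the uncertain variants above, the classical dynamic program for the discrete Fréchet distance between two certain curves (our sketch) is:

from math import dist  # Euclidean distance, Python 3.8+

def discrete_frechet(P, Q):
    """Classical O(|P|*|Q|) DP for the discrete Frechet distance between two
    certain curves given as lists of points. The uncertain versions studied
    in the paper optimise this quantity over realisations of the curves."""
    n, m = len(P), len(Q)
    INF = float("inf")
    ca = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            else:
                prev = min(ca[i - 1][j] if i > 0 else INF,
                           ca[i][j - 1] if j > 0 else INF,
                           ca[i - 1][j - 1] if i > 0 and j > 0 else INF)
                ca[i][j] = max(prev, d)
    return ca[n - 1][m - 1]

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (2, 1)]))  # ~1.414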

Cite as

Kevin Buchin, Chenglin Fan, Maarten Löffler, Aleksandr Popov, Benjamin Raichel, and Marcel Roeloffzen. Fréchet Distance for Uncertain Curves. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{buchin_et_al:LIPIcs.ICALP.2020.20,
  author =	{Buchin, Kevin and Fan, Chenglin and L\"{o}ffler, Maarten and Popov, Aleksandr and Raichel, Benjamin and Roeloffzen, Marcel},
  title =	{{Fr\'{e}chet Distance for Uncertain Curves}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.20},
  URN =		{urn:nbn:de:0030-drops-124276},
  doi =		{10.4230/LIPIcs.ICALP.2020.20},
  annote =	{Keywords: Curves, Uncertainty, Fr\'{e}chet Distance, Hardness}
}
Document
Track A: Algorithms, Complexity and Games
Counting Homomorphisms in Plain Exponential Time

Authors: Andrei A. Bulatov and Amineh Dadsetan


Abstract
In the counting Graph Homomorphism problem (#GraphHom) the question is: Given graphs G,H, find the number of homomorphisms from G to H. This problem is generally #P-complete, moreover, Cygan et al. proved that unless the Exponential Time Hypothesis fails there is no algorithm that solves this problem in time O(|V(H)|^o(|V(G)|)). This, however, does not rule out the possibility that faster algorithms exist for restricted problems of this kind. Wahlström proved that #GraphHom can be solved in plain exponential time, that is, in time O((2k+1)^(|V(G)|+|V(H)|) poly(|V(H)|,|V(G)|)) provided H has clique width k. We generalize this result to a larger class of graphs, and also identify several other graph classes that admit a plain exponential algorithm for #GraphHom.
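
By way of illustration, the brute-force algorithm that plain exponential time improves upon enumerates all |V(H)|^{|V(G)|} mappings (our sketch, for undirected graphs):

from itertools import product

def count_homomorphisms(G_vertices, G_edges, H_vertices, H_edges):
    """Brute-force #GraphHom: count maps f: V(G) -> V(H) with f(u)f(v) in E(H)
    for every edge uv of G. Runs in time |V(H)|^|V(G)| up to polynomial factors."""
    H_adj = {(u, v) for u, v in H_edges} | {(v, u) for u, v in H_edges}
    count = 0
    for assignment in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, assignment))
        if all((f[u], f[v]) in H_adj for u, v in G_edges):
            count += 1
    return count

# Example: the homomorphisms from the triangle K_3 to K_3 are exactly the
# 3! = 6 bijections, so the count is 6.
print(count_homomorphisms([0, 1, 2], [(0, 1), (1, 2), (0, 2)],
                          ["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))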

Cite as

Andrei A. Bulatov and Amineh Dadsetan. Counting Homomorphisms in Plain Exponential Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bulatov_et_al:LIPIcs.ICALP.2020.21,
  author =	{Bulatov, Andrei A. and Dadsetan, Amineh},
  title =	{{Counting Homomorphisms in Plain Exponential Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.21},
  URN =		{urn:nbn:de:0030-drops-124287},
  doi =		{10.4230/LIPIcs.ICALP.2020.21},
  annote =	{Keywords: graph homomorphisms, plain exponential time, clique width}
}
Document
Track A: Algorithms, Complexity and Games
From Holant to Quantum Entanglement and Back

Authors: Jin-Yi Cai, Zhiguo Fu, and Shuai Shao


Abstract
Holant problems are intimately connected with quantum theory as tensor networks. We first use techniques from Holant theory to derive new and improved results for quantum entanglement theory. We discover two particular entangled states, |Ψ₆⟩ of 6 qubits and |Ψ₈⟩ of 8 qubits, that have extraordinary closure properties in terms of the Bell property. Then we use entanglement properties of constraint functions to derive a new complexity dichotomy for all real-valued Holant problems containing a signature of odd arity. The signatures need not be symmetric, and no auxiliary signatures are assumed.

Cite as

Jin-Yi Cai, Zhiguo Fu, and Shuai Shao. From Holant to Quantum Entanglement and Back. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 22:1-22:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.ICALP.2020.22,
  author =	{Cai, Jin-Yi and Fu, Zhiguo and Shao, Shuai},
  title =	{{From Holant to Quantum Entanglement and Back}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{22:1--22:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.22},
  URN =		{urn:nbn:de:0030-drops-124298},
  doi =		{10.4230/LIPIcs.ICALP.2020.22},
  annote =	{Keywords: Holant problem, Quantum entanglement, SLOCC equivalence, Bell property}
}
Document
Track A: Algorithms, Complexity and Games
Counting Perfect Matchings and the Eight-Vertex Model

Authors: Jin-Yi Cai and Tianyu Liu


Abstract
We study the approximation complexity of the partition function of the eight-vertex model on general 4-regular graphs. For the first time, we relate the approximability of the eight-vertex model to the complexity of approximately counting perfect matchings, a central open problem in this field. Our results extend those in [Jin-Yi Cai et al., 2018]. In a region of the parameter space where no previous approximation complexity was known, we show that approximating the partition function is at least as hard as approximately counting perfect matchings via approximation-preserving reductions. In another region of the parameter space, which is larger than the region previously known to admit a Fully Polynomial Randomized Approximation Scheme (FPRAS), we show that computing the partition function can be reduced to counting perfect matchings (which is valid for both exact and approximate counting). Moreover, we give a complete characterization of nonnegatively weighted (not necessarily planar) 4-ary matchgates, which has been open for several years. The key ingredient of our proof is a geometric lemma. We also identify a region of the parameter space where approximating the partition function on planar 4-regular graphs is feasible but on general 4-regular graphs is equivalent to approximately counting perfect matchings. To the best of our knowledge, these are the first problems that exhibit this dichotomic behavior between the planar and the nonplanar settings in approximate counting.

Cite as

Jin-Yi Cai and Tianyu Liu. Counting Perfect Matchings and the Eight-Vertex Model. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.ICALP.2020.23,
  author =	{Cai, Jin-Yi and Liu, Tianyu},
  title =	{{Counting Perfect Matchings and the Eight-Vertex Model}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{23:1--23:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.23},
  URN =		{urn:nbn:de:0030-drops-124301},
  doi =		{10.4230/LIPIcs.ICALP.2020.23},
  annote =	{Keywords: Approximate complexity, the eight-vertex model, counting perfect matchings}
}
Document
Track A: Algorithms, Complexity and Games
Roundtrip Spanners with (2k-1) Stretch

Authors: Ruoxu Cen, Ran Duan, and Yong Gu


Abstract
A roundtrip spanner of a directed graph G is a subgraph of G preserving roundtrip distances approximately for all pairs of vertices. Despite extensive research, there is still a small stretch gap between roundtrip spanners in directed graphs and spanners in undirected graphs. For a directed graph with real edge weights in [1,W], we first propose a new deterministic algorithm that constructs a roundtrip spanner with (2k-1) stretch and O(k n^(1+1/k) log (nW)) edges for every integer k > 1, and then remove the dependence of the size on W to give a roundtrip spanner with (2k-1) stretch and O(k n^(1+1/k) log n) edges. While keeping the number of edges small, our result improves upon the previous (2k+ε)-stretch roundtrip spanners in directed graphs [Roditty, Thorup, Zwick'02; Zhu, Lam'18], and almost matches the undirected (2k-1)-spanner with O(n^(1+1/k)) edges [Althöfer et al. '93] when k is a constant, which is optimal under the Erdős girth conjecture.
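
For context, the quantity a roundtrip spanner must preserve is the roundtrip distance d(u,v) + d(v,u). The sketch below computes it directly with two Dijkstra runs; it only illustrates the metric and is not the spanner construction from the paper. The adjacency-list format is an assumption (every vertex must appear as a key).

```python
import heapq

def dijkstra(adj, source):
    """Standard Dijkstra on a weighted digraph adj[u] = [(v, w), ...]."""
    dist = {u: float("inf") for u in adj}
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def roundtrip_distance(adj, u, v):
    """Roundtrip distance d(u,v) + d(v,u): the quantity a (2k-1)-stretch
    roundtrip spanner must preserve up to that factor."""
    return dijkstra(adj, u)[v] + dijkstra(adj, v)[u]
```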

Cite as

Ruoxu Cen, Ran Duan, and Yong Gu. Roundtrip Spanners with (2k-1) Stretch. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 24:1-24:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{cen_et_al:LIPIcs.ICALP.2020.24,
  author =	{Cen, Ruoxu and Duan, Ran and Gu, Yong},
  title =	{{Roundtrip Spanners with (2k-1) Stretch}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{24:1--24:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.24},
  URN =		{urn:nbn:de:0030-drops-124313},
  doi =		{10.4230/LIPIcs.ICALP.2020.24},
  annote =	{Keywords: Graph theory, Deterministic algorithm, Roundtrip spanners}
}
Document
Track A: Algorithms, Complexity and Games
New Extremal Bounds for Reachability and Strong-Connectivity Preservers Under Failures

Authors: Diptarka Chakraborty and Keerti Choudhary


Abstract
In this paper, we consider the question of computing sparse subgraphs of any input directed graph G = (V,E) on n vertices and m edges that preserve reachability and/or strong connectivity structures.
- We show an O(n+min{|P|√n, n√|P|}) bound on the size of a subgraph that is a 1-fault-tolerant reachability preserver for a given vertex-pair set P ⊆ V × V, i.e., it preserves reachability between any pair of vertices in P under a single edge (or vertex) failure. Our result is a significant improvement over the previous best O(n |P|) bound obtained as a corollary of the single-source reachability preserver construction. We prove our upper bound by exploiting the special structure of a single-fault-tolerant reachability preserver for any one pair, and then considering the interaction among such structures for different pairs.
- On the lower bound side, we show that a 2-fault-tolerant reachability preserver for a vertex-pair set P ⊆ V×V of size Ω(n^ε), for any arbitrarily small ε > 0, requires at least Ω(n^(1+ε/8)) edges. This refutes the existence of linear-sized dual-fault-tolerant reachability preservers for any polynomial-sized vertex-pair set.
- We also present the first sub-quadratic bound of at most Õ(k 2^k n^(2-1/k)) on the size of strong-connectivity preservers of directed graphs under k failures. To the best of our knowledge, no non-trivial bound for this problem was previously known for general k. We get our result by adopting the color-coding technique of Alon, Yuster, and Zwick [JACM'95].

Cite as

Diptarka Chakraborty and Keerti Choudhary. New Extremal Bounds for Reachability and Strong-Connectivity Preservers Under Failures. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chakraborty_et_al:LIPIcs.ICALP.2020.25,
  author =	{Chakraborty, Diptarka and Choudhary, Keerti},
  title =	{{New Extremal Bounds for Reachability and Strong-Connectivity Preservers Under Failures}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.25},
  URN =		{urn:nbn:de:0030-drops-124327},
  doi =		{10.4230/LIPIcs.ICALP.2020.25},
  annote =	{Keywords: Preservers, Strong-connectivity, Reachability, Fault-tolerant, Graph sparsification}
}
Document
Track A: Algorithms, Complexity and Games
Matrices of Optimal Tree-Depth and Row-Invariant Parameterized Algorithm for Integer Programming

Authors: Timothy F. N. Chan, Jacob W. Cooper, Martin Koutecký, Daniel Král', and Kristýna Pekárková


Abstract
A long line of research on fixed parameter tractability of integer programming culminated with showing that integer programs with n variables and a constraint matrix with tree-depth d and largest entry Δ are solvable in time g(d,Δ) poly(n) for some function g, i.e., fixed parameter tractable when parameterized by tree-depth d and Δ. However, the tree-depth of a constraint matrix depends on the positions of its non-zero entries and thus does not reflect its geometric structure. In particular, tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another with a smaller dual tree-depth. We prove that the branch-depth of the matroid defined by the columns of the constraint matrix is equal to the minimum tree-depth of a row-equivalent matrix. We also design a fixed parameter algorithm parameterized by an integer d and the entry complexity of an input matrix that either outputs a matrix with the smallest dual tree-depth that is row-equivalent to the input matrix or outputs that there is no matrix with dual tree-depth at most d that is row-equivalent to the input matrix. Finally, we use these results to obtain a fixed parameter algorithm for integer programming parameterized by the branch-depth of the input constraint matrix and the entry complexity. The parameterization by branch-depth cannot be replaced by the more permissive notion of branch-width.

Cite as

Timothy F. N. Chan, Jacob W. Cooper, Martin Koutecký, Daniel Král', and Kristýna Pekárková. Matrices of Optimal Tree-Depth and Row-Invariant Parameterized Algorithm for Integer Programming. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.ICALP.2020.26,
  author =	{Chan, Timothy F. N. and Cooper, Jacob W. and Kouteck\'{y}, Martin and Kr\'{a}l', Daniel and Pek\'{a}rkov\'{a}, Krist\'{y}na},
  title =	{{Matrices of Optimal Tree-Depth and Row-Invariant Parameterized Algorithm for Integer Programming}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.26},
  URN =		{urn:nbn:de:0030-drops-124339},
  doi =		{10.4230/LIPIcs.ICALP.2020.26},
  annote =	{Keywords: Matroid algorithms, width parameters, integer programming, fixed parameter tractability, branch-width, branch-depth}
}
Document
Track A: Algorithms, Complexity and Games
Dynamic Longest Common Substring in Polylogarithmic Time

Authors: Panagiotis Charalampopoulos, Paweł Gawrychowski, and Karol Pokorski


Abstract
The longest common substring problem consists in finding a longest string that appears as a (contiguous) substring of two input strings. We consider the dynamic variant of this problem, in which we are to maintain two dynamic strings S and T, each of length at most n, that undergo substitutions of letters, in order to be able to return a longest common substring after each substitution. Recently, Amir et al. [ESA 2019] presented a solution for this problem that needs only 𝒪̃(n^(2/3)) time per update. This brought the challenge of determining whether there exists a faster solution with polylogarithmic update time, or whether (as is the case for other dynamic problems) we should expect a polynomial (conditional) lower bound. We answer this question by designing a significantly faster algorithm that processes each substitution in amortized log^𝒪(1) n time with high probability. Our solution relies on exploiting the local consistency of the parsing of a collection of dynamic strings due to Gawrychowski et al. [SODA 2018], and on maintaining two dynamic trees with labeled bicolored leaves, so that after each update we can report a pair of nodes, one from each tree, of maximum combined weight, which have at least one common leaf-descendant of each color. We complement this with a lower bound of Ω(log n/ log log n) for the update time of any polynomial-size data structure that maintains the LCS of two dynamic strings, even allowing amortization and randomization.
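
For reference, the static longest common substring can be recomputed from scratch with the classical quadratic-time dynamic program sketched below; the point of the paper is to maintain this value under single-letter substitutions in polylogarithmic time rather than paying quadratic time per update. This is only the naive baseline, not the paper's data structure.

```python
def longest_common_substring(S, T):
    """Classical O(|S|*|T|) DP: dp[i][j] is the length of the longest common
    suffix of S[:i] and T[:j]; the answer is the maximum entry."""
    n, m = len(S), len(T)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if S[i - 1] == T[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

# Example: longest_common_substring("dynamic", "amortized") == 2  ("am")
```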

Cite as

Panagiotis Charalampopoulos, Paweł Gawrychowski, and Karol Pokorski. Dynamic Longest Common Substring in Polylogarithmic Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{charalampopoulos_et_al:LIPIcs.ICALP.2020.27,
  author =	{Charalampopoulos, Panagiotis and Gawrychowski, Pawe{\l} and Pokorski, Karol},
  title =	{{Dynamic Longest Common Substring in Polylogarithmic Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.27},
  URN =		{urn:nbn:de:0030-drops-124340},
  doi =		{10.4230/LIPIcs.ICALP.2020.27},
  annote =	{Keywords: string algorithms, dynamic algorithms, longest common substring}
}
Document
Track A: Algorithms, Complexity and Games
Improved Black-Box Constructions of Composable Secure Computation

Authors: Rohit Chatterjee, Xiao Liang, and Omkant Pandey


Abstract
We close the gap between black-box and non-black-box constructions of composable secure multiparty computation in the plain model under the minimal assumption of semi-honest oblivious transfer. The notion of protocol composition we target is angel-based security, or more precisely, security with super-polynomial helpers. In this notion, both the simulator and the adversary are given access to an oracle called an angel that can perform some predefined super-polynomial time task. Angel-based security maintains the attractive properties of the universal composition framework while providing meaningful security guarantees in complex environments without having to trust anyone. Angel-based security can be achieved using non-black-box constructions in max(R_OT,Õ(log n)) rounds where R_OT is the round-complexity of semi-honest oblivious transfer. However, the current best known black-box constructions under the same assumption require max(R_OT,Õ(log² n)) rounds. If R_OT is a constant, the gap between non-black-box and black-box constructions can be a multiplicative factor of log n. We close this gap by presenting a max(R_OT,Õ(log n))-round black-box construction. We achieve this result by constructing constant-round 1-1 CCA-secure commitments assuming only black-box access to one-way functions.

Cite as

Rohit Chatterjee, Xiao Liang, and Omkant Pandey. Improved Black-Box Constructions of Composable Secure Computation. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.ICALP.2020.28,
  author =	{Chatterjee, Rohit and Liang, Xiao and Pandey, Omkant},
  title =	{{Improved Black-Box Constructions of Composable Secure Computation}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.28},
  URN =		{urn:nbn:de:0030-drops-124351},
  doi =		{10.4230/LIPIcs.ICALP.2020.28},
  annote =	{Keywords: Secure Multi-Party Computation, Black-Box, Composable, Non-Malleable}
}
Document
Track A: Algorithms, Complexity and Games
Simplifying and Unifying Replacement Paths Algorithms in Weighted Directed Graphs

Authors: Shiri Chechik and Moran Nechushtan


Abstract
In the replacement paths (RP) problem we are given a graph G and a shortest path P between two nodes s and t. The goal is to find, for every edge e ∈ P, a shortest path from s to t that avoids e. The first result of this paper is a simple reduction from the RP problem to the problem of computing shortest cycles for all nodes on a shortest path. Using this simple reduction we unify and greatly simplify two state-of-the-art solutions for two different well-studied variants of the RP problem. In the first variant (algebraic) we show that by using at most n queries to the Yuster-Zwick distance oracle [FOCS 2005], one can solve the RP problem for a given directed graph with integer edge weights in the range [-M,M] in Õ(M n^ω) time. This improves the running time of the state-of-the-art algorithm of Vassilevska Williams [SODA 2011] by a factor of log⁶ n. In the second variant (planar) we show that by using the algorithm of Klein for the multiple-source shortest paths problem (MSSP) [SODA 2005] one can solve the RP problem for directed planar graphs with non-negative edge weights in O(n log n) time. This matches the state-of-the-art algorithm of Wulff-Nilsen [SODA 2010], but with an arguably much simpler algorithm and analysis.
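
To fix the problem definition, the naive baseline below solves RP by rerunning Dijkstra once per edge of P with that edge banned; the paper's reductions are about avoiding exactly this per-edge recomputation. The adjacency-list format and helper names are illustrative assumptions.

```python
import heapq

def dijkstra(adj, s, banned=frozenset()):
    """Dijkstra from s on adj[u] = [(v, w), ...], skipping edges in `banned`."""
    dist = {u: float("inf") for u in adj}
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if (u, v) in banned:
                continue
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def replacement_paths_naive(adj, path, s, t):
    """For each edge e on the given shortest s-t path, return the length of a
    shortest s-t path avoiding e: one Dijkstra per edge of P, i.e. |P| single-
    source computations -- the trivial baseline both variants improve upon."""
    return {(u, v): dijkstra(adj, s, banned={(u, v)})[t]
            for u, v in zip(path, path[1:])}
```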

Cite as

Shiri Chechik and Moran Nechushtan. Simplifying and Unifying Replacement Paths Algorithms in Weighted Directed Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 29:1-29:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2020.29,
  author =	{Chechik, Shiri and Nechushtan, Moran},
  title =	{{Simplifying and Unifying Replacement Paths Algorithms in Weighted Directed Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{29:1--29:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.29},
  URN =		{urn:nbn:de:0030-drops-124365},
  doi =		{10.4230/LIPIcs.ICALP.2020.29},
  annote =	{Keywords: Fault tolerance, Distance oracle, Planar graph}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear Algorithms and Lower Bounds for Metric TSP Cost Estimation

Authors: Yu Chen, Sampath Kannan, and Sanjeev Khanna


Abstract
We consider the problem of designing sublinear time algorithms for estimating the cost of a minimum metric traveling salesman (TSP) tour. Specifically, given access to an n × n distance matrix D that specifies pairwise distances between n points, the goal is to estimate the TSP cost by performing only sublinear (in the size of D) queries. For the closely related problem of estimating the weight of a metric minimum spanning tree (MST), it is known that for any ε > 0, there exists an Õ(n/ε^O(1)) time algorithm that returns a (1 + ε)-approximate estimate of the MST cost. This result immediately implies an Õ(n/ε^O(1)) time algorithm to estimate the TSP cost to within a (2 + ε) factor for any ε > 0. However, no o(n²) time algorithms are known to approximate metric TSP to a factor that is strictly better than 2. On the other hand, there were also no known barriers that rule out the existence of (1 + ε)-approximate estimation algorithms for metric TSP with Õ(n) time for any fixed ε > 0. In this paper, we make progress on both algorithms and lower bounds for estimating metric TSP cost. On the algorithmic side, we first consider the graphic TSP problem where the metric D corresponds to shortest path distances in a connected unweighted undirected graph. We show that there exists an Õ(n) time algorithm that estimates the cost of graphic TSP to within a factor of (2-ε₀) for some ε₀ > 0. This is the first sublinear cost estimation algorithm for graphic TSP that achieves an approximation factor less than 2. We also consider another well-studied special case of metric TSP, namely, (1,2)-TSP where all distances are either 1 or 2, and give an Õ(n^1.5) time algorithm to estimate the optimal cost to within a factor of 1.625. Our estimation algorithms for graphic TSP as well as for (1,2)-TSP naturally lend themselves to Õ(n) space streaming algorithms that give an 11/6-approximation for graphic TSP and a 1.625-approximation for (1,2)-TSP. These results motivate the natural question of whether, analogously to metric MST, for any ε > 0, (1 + ε)-approximate estimates can be obtained for graphic TSP and (1,2)-TSP using Õ(n) queries. We answer this question in the negative: there exists an ε₀ > 0 such that any algorithm that estimates the cost of graphic TSP (respectively, (1,2)-TSP) to within a (1 + ε₀)-factor necessarily requires Ω(n²) queries. This lower bound result highlights a sharp separation between the metric MST and metric TSP problems. Similarly to many classical approximation algorithms for TSP, our sublinear time estimation algorithms utilize subroutines for estimating the size of a maximum matching in the underlying graph. We show that this is not merely an artifact of our approach, and that for any ε > 0, any algorithm that estimates the cost of graphic TSP or (1,2)-TSP to within a (1 + ε)-factor can also be used to estimate the size of a maximum matching in a bipartite graph to within an ε n additive error. This connection allows us to translate known lower bounds for matching size estimation in various models to similar lower bounds for metric TSP cost estimation.
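
The (2 + ε)-factor baseline above comes from the standard relation MST(D) ≤ TSP(D) ≤ 2·MST(D) for any metric. The sketch below makes that reduction explicit using an exact Prim computation, which takes O(n²) time and is therefore not sublinear; the paper's contribution is performing the estimation with sublinearly many queries and beating factor 2.

```python
import heapq

def mst_weight(D):
    """Prim's algorithm on a full n x n metric distance matrix D."""
    n = len(D)
    in_tree = [False] * n
    best = [float("inf")] * n
    best[0] = 0.0
    total = 0.0
    pq = [(0.0, 0)]
    while pq:
        w, u = heapq.heappop(pq)
        if in_tree[u]:
            continue
        in_tree[u] = True
        total += w
        for v in range(n):
            if not in_tree[v] and D[u][v] < best[v]:
                best[v] = D[u][v]
                heapq.heappush(pq, (D[u][v], v))
    return total

def tsp_estimate_from_mst(D):
    """Since MST(D) <= TSP(D) <= 2 * MST(D) in any metric, 2 * MST(D)
    over-estimates the optimal tour cost by at most a factor of 2."""
    return 2.0 * mst_weight(D)
```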

Cite as

Yu Chen, Sampath Kannan, and Sanjeev Khanna. Sublinear Algorithms and Lower Bounds for Metric TSP Cost Estimation. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 30:1-30:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2020.30,
  author =	{Chen, Yu and Kannan, Sampath and Khanna, Sanjeev},
  title =	{{Sublinear Algorithms and Lower Bounds for Metric TSP Cost Estimation}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{30:1--30:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.30},
  URN =		{urn:nbn:de:0030-drops-124372},
  doi =		{10.4230/LIPIcs.ICALP.2020.30},
  annote =	{Keywords: sublinear algorithms, TSP, streaming algorithms, query complexity}
}
Document
Track A: Algorithms, Complexity and Games
Computational Complexity of the α-Ham-Sandwich Problem

Authors: Man-Kwun Chiu, Aruni Choudhary, and Wolfgang Mulzer


Abstract
The classic Ham-Sandwich theorem states that for any d measurable sets in ℝ^d, there is a hyperplane that bisects them simultaneously. An extension by Bárány, Hubard, and Jerónimo [DCG 2008] states that if the sets are convex and well-separated, then for any given α₁, … , α_d ∈ [0, 1], there is a unique oriented hyperplane that cuts off a respective fraction α₁, … , α_d from each set. Steiger and Zhao [DCG 2010] proved a discrete analogue of this theorem, which we call the α-Ham-Sandwich theorem. They gave an algorithm to find the hyperplane in time O(n (log n)^{d-3}), where n is the total number of input points. The computational complexity of this search problem in high dimensions is open, quite unlike the complexity of the Ham-Sandwich problem, which is now known to be PPA-complete (Filos-Ratsikas and Goldberg [STOC 2019]). Recently, Fearnley, Gordon, Mehta, and Savani [ICALP 2019] introduced a new sub-class of CLS (Continuous Local Search) called Unique End-of-Potential Line (UEOPL). This class captures problems in CLS that have unique solutions. We show that for the α-Ham-Sandwich theorem, the search problem of finding the dividing hyperplane lies in UEOPL. This gives the first non-trivial containment of the problem in a complexity class and places it in the company of classic search problems such as finding the fixed point of a contraction map, the unique sink orientation problem and the P-matrix linear complementarity problem.

Cite as

Man-Kwun Chiu, Aruni Choudhary, and Wolfgang Mulzer. Computational Complexity of the α-Ham-Sandwich Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 31:1-31:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chiu_et_al:LIPIcs.ICALP.2020.31,
  author =	{Chiu, Man-Kwun and Choudhary, Aruni and Mulzer, Wolfgang},
  title =	{{Computational Complexity of the \alpha-Ham-Sandwich Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{31:1--31:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.31},
  URN =		{urn:nbn:de:0030-drops-124382},
  doi =		{10.4230/LIPIcs.ICALP.2020.31},
  annote =	{Keywords: Ham-Sandwich Theorem, Computational Complexity, Continuous Local Search}
}
Document
Track A: Algorithms, Complexity and Games
Existence and Complexity of Approximate Equilibria in Weighted Congestion Games

Authors: George Christodoulou, Martin Gairing, Yiannis Giannakopoulos, Diogo Poças, and Clara Waldmann


Abstract
We study the existence of approximate pure Nash equilibria (α-PNE) in weighted atomic congestion games with polynomial cost functions of maximum degree d. Previously it was known that d-approximate equilibria always exist, while nonexistence was established only for small constants, namely for 1.153-PNE. We improve significantly upon this gap, proving that such games in general do not have Θ̃(√d)-approximate PNE, which provides the first super-constant lower bound. Furthermore, we provide a black-box gap-introducing method of combining such nonexistence results with a specific circuit gadget, in order to derive NP-completeness of the decision version of the problem. In particular, deploying this technique we are able to show that deciding whether a weighted congestion game has an Õ(√d)-PNE is NP-complete. Previous hardness results were known only for the special case of exact equilibria and arbitrary cost functions. The circuit gadget is of independent interest and it allows us to also prove hardness for a variety of problems related to the complexity of PNE in congestion games. For example, we demonstrate that the question of existence of α-PNE in which a certain set of players plays a specific strategy profile is NP-hard for any α < 3^(d/2), even for unweighted congestion games. Finally, we study the existence of approximate equilibria in weighted congestion games with general (nondecreasing) costs, as a function of the number of players n. We show that n-PNE always exist, matched by an almost tight nonexistence bound of Θ̃(n) which we can again transform into an NP-completeness proof for the decision problem.

Cite as

George Christodoulou, Martin Gairing, Yiannis Giannakopoulos, Diogo Poças, and Clara Waldmann. Existence and Complexity of Approximate Equilibria in Weighted Congestion Games. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{christodoulou_et_al:LIPIcs.ICALP.2020.32,
  author =	{Christodoulou, George and Gairing, Martin and Giannakopoulos, Yiannis and Po\c{c}as, Diogo and Waldmann, Clara},
  title =	{{Existence and Complexity of Approximate Equilibria in Weighted Congestion Games}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.32},
  URN =		{urn:nbn:de:0030-drops-124392},
  doi =		{10.4230/LIPIcs.ICALP.2020.32},
  annote =	{Keywords: Atomic congestion games, existence of equilibria, pure Nash equilibria, approximate equilibria, hardness of equilibria}
}
Document
Track A: Algorithms, Complexity and Games
On Packing Low-Diameter Spanning Trees

Authors: Julia Chuzhoy, Merav Parter, and Zihan Tan


Abstract
Edge connectivity of a graph is one of the most fundamental graph-theoretic concepts. The celebrated tree packing theorem of Tutte and Nash-Williams from 1961 states that every k-edge connected graph G contains a collection 𝒯 of ⌊k/2⌋ edge-disjoint spanning trees, which we refer to as a tree packing; the diameter of the tree packing 𝒯 is the largest diameter of any tree in 𝒯. A desirable property of a tree packing for leveraging the high connectivity of a graph in distributed communication networks is that its diameter is low. Yet, despite extensive research in this area, it is still unclear how to compute a tree packing of a low-diameter graph G whose diameter is sublinear in |V(G)|, or, alternatively, how to show that such a packing does not exist. In this paper, we provide the first non-trivial upper and lower bounds on the diameter of tree packings. We start by showing that, for every k-edge connected n-vertex graph G of diameter D, there is a tree packing 𝒯 containing Ω(k) trees, of diameter O((101k log n)^D), with edge-congestion at most 2. Karger’s edge sampling technique demonstrates that if G is a k-edge connected graph and G[p] is a subgraph of G obtained by sampling each edge of G independently with probability p = Θ(log n/k), then with high probability G[p] is connected. We extend this result to show that the diameter of G[p] is bounded by O(k^(D(D+1)/2)) with high probability. This immediately gives a tree packing of Ω(k/log n) edge-disjoint trees of diameter at most O(k^(D(D+1)/2)). We also show that these two results are nearly tight for graphs with a small diameter: there are k-edge connected graphs of diameter 2D such that any packing of k/α trees with edge-congestion η contains at least one tree of diameter Ω((k/(2α η D))^D), for any k, α and η. Additionally, we show that if, for every pair u,v of vertices of a given graph G, there is a collection of k edge-disjoint paths connecting u to v, of length at most D each, then we can efficiently compute a tree packing of size k, diameter O(D log n), and edge-congestion O(log n). Finally, we provide several applications of low-diameter tree packings in the distributed settings of network optimization and secure computation.
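
The Karger-style sampling step referred to above is easy to state in code: keep each edge independently with probability p = Θ(log n / k) and check that the result is connected. The constant c and the union-find connectivity check below are illustrative assumptions; the new diameter bound on the sampled subgraph is the paper's contribution and is not reproduced here.

```python
import math
import random

def sample_subgraph(n, edges, k, c=3.0, seed=0):
    """Keep each edge independently with p = c * ln(n) / k (Karger-style
    sampling for a k-edge-connected graph); returns the sampled edge list."""
    rng = random.Random(seed)
    p = min(1.0, c * math.log(n) / k)
    return [e for e in edges if rng.random() < p]

def is_connected(n, edges):
    """Union-find connectivity check for the sampled subgraph."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in range(n)}) == 1
```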

Cite as

Julia Chuzhoy, Merav Parter, and Zihan Tan. On Packing Low-Diameter Spanning Trees. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chuzhoy_et_al:LIPIcs.ICALP.2020.33,
  author =	{Chuzhoy, Julia and Parter, Merav and Tan, Zihan},
  title =	{{On Packing Low-Diameter Spanning Trees}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.33},
  URN =		{urn:nbn:de:0030-drops-124405},
  doi =		{10.4230/LIPIcs.ICALP.2020.33},
  annote =	{Keywords: Spanning tree, packing, edge-connectivity}
}
Document
Track A: Algorithms, Complexity and Games
Online Two-Dimensional Load Balancing

Authors: Ilan Cohen, Sungjin Im, and Debmalya Panigrahi


Abstract
In this paper, we consider the problem of assigning 2-dimensional vector jobs to identical machines online so as to minimize the maximum load on any dimension of any machine. For an arbitrary number of dimensions d, this problem is known as vector scheduling, and recent research has established the optimal competitive ratio as O((log d)/(log log d)) (Im et al. FOCS 2015, Azar et al. SODA 2018). However, these results do not shed light on the situation for a small number of dimensions, particularly d = 2, which is of practical interest. In this case, a trivial analysis shows that the classic list scheduling greedy algorithm (sketched after this list) has a competitive ratio of 3. We show the following improvements over this baseline:
- We give an improved, and tight, analysis of the list scheduling algorithm establishing a competitive ratio of 8/3 for two dimensions.
- If the value of opt is known, we improve the competitive ratio to 9/4 using a variant of the classic best fit algorithm for two dimensions.
- For any fixed number of dimensions, we design an algorithm that is provably the best possible against a fractional optimum solution. This algorithm provides a proof of concept that we can simulate the optimal algorithm online up to the integrality gap of the natural LP relaxation of the problem.
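
The sketch below implements one natural reading of the greedy list scheduling rule for two dimensions: assign each arriving job to the machine whose maximum coordinate load after the assignment is smallest. The exact greedy criterion and tie-breaking analysed in the paper may differ; this is only meant to make the online setting concrete.

```python
def list_schedule_2d(jobs, m):
    """Greedy online assignment of 2-dimensional jobs (x, y) to m identical
    machines: each job goes to the machine minimising the resulting maximum
    coordinate load.  Returns the assignment and the final makespan."""
    loads = [[0.0, 0.0] for _ in range(m)]
    assignment = []
    for x, y in jobs:
        i = min(range(m),
                key=lambda j: max(loads[j][0] + x, loads[j][1] + y))
        loads[i][0] += x
        loads[i][1] += y
        assignment.append(i)
    makespan = max(max(l) for l in loads)
    return assignment, makespan

# Example: list_schedule_2d([(1, 0), (0, 1), (1, 1)], 2)
```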

Cite as

Ilan Cohen, Sungjin Im, and Debmalya Panigrahi. Online Two-Dimensional Load Balancing. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 34:1-34:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.ICALP.2020.34,
  author =	{Cohen, Ilan and Im, Sungjin and Panigrahi, Debmalya},
  title =	{{Online Two-Dimensional Load Balancing}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{34:1--34:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.34},
  URN =		{urn:nbn:de:0030-drops-124415},
  doi =		{10.4230/LIPIcs.ICALP.2020.34},
  annote =	{Keywords: Online algorithms, scheduling, multi-dimensional}
}
Document
Track A: Algorithms, Complexity and Games
Conditionally Optimal Approximation Algorithms for the Girth of a Directed Graph

Authors: Mina Dalirrooyfard and Virginia Vassilevska Williams


Abstract
The girth is one of the most basic graph parameters, and its computation has been studied for many decades. Under widely believed fine-grained assumptions, computing the girth exactly is known to require mn^{1-o(1)} time, both in sparse and dense m-edge, n-node graphs, motivating the search for fast approximations. Fast, good-quality approximation algorithms for undirected graphs have been known for decades. For the girth in directed graphs, until recently the only constant factor approximation algorithms ran in O(n^ω) time, where ω < 2.373 is the matrix multiplication exponent. These algorithms have two drawbacks: (1) they only offer an improvement over the mn running time for dense graphs, and (2) the current fast matrix multiplication methods are impractical. The first constant factor approximation algorithm that runs in O(mn^{1-ε}) time for ε > 0 and all sparsities m was only recently obtained by Chechik et al. [STOC 2020]; it is also combinatorial. It is known that a better than 2-approximation algorithm for the girth in dense directed unweighted graphs needs n^{3-o(1)} time unless one uses fast matrix multiplication. Meanwhile, the best known approximation factor for a combinatorial algorithm running in O(mn^{1-ε}) time (by Chechik et al.) is 3. Is the true answer 2 or 3? The main result of this paper is a (conditionally) tight approximation algorithm for directed graphs. First, we show that under a popular hardness assumption, any algorithm, even one that exploits fast matrix multiplication, would need to take at least mn^{1-o(1)} time for some sparsity m if it achieves a (2-ε)-approximation for any ε > 0. Second, we give a 2-approximation algorithm for the girth of unweighted graphs running in Õ(mn^{3/4}) time, and a (2+ε)-approximation algorithm (for any ε > 0) that works in weighted graphs and runs in Õ(m√ n) time. Our algorithms are combinatorial. We also obtain a (4+ε)-approximation of the girth running in Õ(mn^{√2-1}) time, improving upon the previous best Õ(m√n) running time by Chechik et al. Finally, we consider the computation of roundtrip spanners. We obtain a (5+ε)-approximate roundtrip spanner on Õ(n^{1.5}/ε²) edges in Õ(m√n/ε²) time. This improves upon the previous approximation factor (8+ε) of Chechik et al. for the same running time.
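
For orientation, the exact girth of an unweighted digraph can be computed with one BFS per vertex in O(n(n+m)) time, as sketched below; the mn^{1-o(1)} conditional lower bound mentioned above says exact algorithms cannot do much better, which is why the paper targets fast approximations instead. The adjacency-list-by-index representation is an assumption.

```python
from collections import deque

def girth_directed_unweighted(adj):
    """Exact girth of an unweighted digraph (adj[u] = list of successors)
    via one BFS per start vertex; returns float('inf') if G is acyclic.
    The shortest cycle through s has length min over edges (u, s) of
    dist(s, u) + 1, so taking the minimum over all s gives the girth."""
    n = len(adj)
    best = float("inf")
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        dq = deque([s])
        while dq:
            u = dq.popleft()
            for v in adj[u]:
                if v == s:                      # closed a cycle through s
                    best = min(best, dist[u] + 1)
                elif dist[v] == -1:
                    dist[v] = dist[u] + 1
                    dq.append(v)
    return best

# Example: a directed triangle has girth 3.
# girth_directed_unweighted([[1], [2], [0]]) == 3
```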

Cite as

Mina Dalirrooyfard and Virginia Vassilevska Williams. Conditionally Optimal Approximation Algorithms for the Girth of a Directed Graph. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dalirrooyfard_et_al:LIPIcs.ICALP.2020.35,
  author =	{Dalirrooyfard, Mina and Vassilevska Williams, Virginia},
  title =	{{Conditionally Optimal Approximation Algorithms for the Girth of a Directed Graph}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.35},
  URN =		{urn:nbn:de:0030-drops-124421},
  doi =		{10.4230/LIPIcs.ICALP.2020.35},
  annote =	{Keywords: Shortest cycle, Girth, Graph algorithms, Approximation algorithms, Fine-grained complexity, Roundtrip Spanner}
}
Document
Track A: Algorithms, Complexity and Games
Symmetric Arithmetic Circuits

Authors: Anuj Dawar and Gregory Wilsenach


Abstract
We introduce symmetric arithmetic circuits, i.e. arithmetic circuits with a natural symmetry restriction. In the context of circuits computing polynomials defined on a matrix of variables, such as the determinant or the permanent, the restriction amounts to requiring that the shape of the circuit is invariant under row and column permutations of the matrix. We establish unconditional, nearly exponential, lower bounds on the size of any symmetric circuit for computing the permanent over any field of characteristic other than 2. In contrast, we show that there are polynomial-size symmetric circuits for computing the determinant over fields of characteristic zero.

Cite as

Anuj Dawar and Gregory Wilsenach. Symmetric Arithmetic Circuits. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 36:1-36:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dawar_et_al:LIPIcs.ICALP.2020.36,
  author =	{Dawar, Anuj and Wilsenach, Gregory},
  title =	{{Symmetric Arithmetic Circuits}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{36:1--36:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.36},
  URN =		{urn:nbn:de:0030-drops-124430},
  doi =		{10.4230/LIPIcs.ICALP.2020.36},
  annote =	{Keywords: arithmetic circuits, symmetric arithmetic circuits, Boolean circuits, symmetric circuits, permanent, determinant, counting width, Weisfeiler-Leman dimension, Cai-F\"{u}rer-Immerman constructions}
}
Document
Track A: Algorithms, Complexity and Games
An Efficient PTAS for Stochastic Load Balancing with Poisson Jobs

Authors: Anindya De, Sanjeev Khanna, Huan Li, and Hesam Nikpey


Abstract
We give the first polynomial-time approximation scheme (PTAS) for the stochastic load balancing problem when the job sizes follow Poisson distributions. This improves upon the 2-approximation algorithm due to Goel and Indyk (FOCS'99). Moreover, our approximation scheme is an efficient PTAS that has a running time doubly exponential in 1/ε but nearly linear in n, where n is the number of jobs and ε is the target error. Previously, a PTAS (not efficient) was only known for jobs that obey exponential distributions (Goel and Indyk, FOCS'99). Our algorithm relies on several probabilistic ingredients including some (seemingly) new results on scaling and the so-called "focusing effect" of the maximum of Poisson random variables, which might be of independent interest.
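
As a small aid to intuition: for a fixed assignment, the total size on a machine is itself Poisson distributed (a sum of independent Poissons is Poisson), so the objective E[max_i load_i] is easy to estimate by simulation. The sketch below (using NumPy, an illustrative choice) only evaluates that objective for a given assignment; the paper's PTAS is about choosing the assignment itself.

```python
import numpy as np

def expected_makespan_mc(machine_rates, trials=100_000, seed=0):
    """Monte-Carlo estimate of the expected makespan for a fixed assignment:
    machine_rates[i] is the sum of the Poisson rates of the jobs placed on
    machine i, so machine i's load is Poisson(machine_rates[i])."""
    rng = np.random.default_rng(seed)
    loads = rng.poisson(lam=machine_rates,
                        size=(trials, len(machine_rates)))
    return loads.max(axis=1).mean()

# Example: expected_makespan_mc([3.0, 3.0, 4.0])
```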

Cite as

Anindya De, Sanjeev Khanna, Huan Li, and Hesam Nikpey. An Efficient PTAS for Stochastic Load Balancing with Poisson Jobs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 37:1-37:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{de_et_al:LIPIcs.ICALP.2020.37,
  author =	{De, Anindya and Khanna, Sanjeev and Li, Huan and Nikpey, Hesam},
  title =	{{An Efficient PTAS for Stochastic Load Balancing with Poisson Jobs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{37:1--37:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.37},
  URN =		{urn:nbn:de:0030-drops-124449},
  doi =		{10.4230/LIPIcs.ICALP.2020.37},
  annote =	{Keywords: Efficient PTAS, Makespan Minimization, Scheduling, Stochastic Load Balancing, Poisson Distribution}
}
Document
Track A: Algorithms, Complexity and Games
Tree Polymatrix Games Are PPAD-Hard

Authors: Argyrios Deligkas, John Fearnley, and Rahul Savani


Abstract
We prove that it is PPAD-hard to compute a Nash equilibrium in a tree polymatrix game with twenty actions per player. This is the first PPAD-hardness result for a game with a constant number of actions per player where the interaction graph is acyclic. Along the way we show PPAD-hardness for finding an ε-fixed point of a 2D-LinearFIXP instance, when ε is any constant less than (√2 - 1)/2 ≈ 0.2071. This lifts the hardness regime from polynomially small approximations in k dimensions to constant approximations in two dimensions, and our constant is substantial when compared to the trivial upper bound of 0.5.

Cite as

Argyrios Deligkas, John Fearnley, and Rahul Savani. Tree Polymatrix Games Are PPAD-Hard. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 38:1-38:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{deligkas_et_al:LIPIcs.ICALP.2020.38,
  author =	{Deligkas, Argyrios and Fearnley, John and Savani, Rahul},
  title =	{{Tree Polymatrix Games Are PPAD-Hard}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{38:1--38:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.38},
  URN =		{urn:nbn:de:0030-drops-124458},
  doi =		{10.4230/LIPIcs.ICALP.2020.38},
  annote =	{Keywords: Nash Equilibria, Polymatrix Games, PPAD, Brouwer Fixed Points}
}
Document
Track A: Algorithms, Complexity and Games
Spectral Sparsification via Bounded-Independence Sampling

Authors: Dean Doron, Jack Murtagh, Salil Vadhan, and David Zuckerman


Abstract
We give a deterministic, nearly logarithmic-space algorithm for mild spectral sparsification of undirected graphs. Given a weighted, undirected graph G on n vertices described by a binary string of length N, an integer k ≤ log n and an error parameter ε > 0, our algorithm runs in space Õ(k log(N w_max/w_min)) where w_max and w_min are the maximum and minimum edge weights in G, and produces a weighted graph H with Õ(n^(1+2/k)/ε²) edges that spectrally approximates G, in the sense of Spielman and Teng [Spielman and Teng, 2004], up to an error of ε. Our algorithm is based on a new bounded-independence analysis of Spielman and Srivastava’s effective-resistance-based edge sampling algorithm [Spielman and Srivastava, 2011] and uses results from recent work on space-bounded Laplacian solvers [Jack Murtagh et al., 2017]. In particular, we demonstrate an inherent tradeoff (via upper and lower bounds) between the amount of (bounded) independence used in the edge sampling algorithm, denoted by k above, and the resulting sparsity that can be achieved.
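
For background, the randomized effective-resistance sampling of Spielman and Srivastava that the paper derandomizes can be sketched in a few lines of NumPy: sample edges with probability proportional to weight times effective resistance and reweight them. The constants, the pseudo-inverse computation, and the (u, v, w) edge encoding below are assumptions for the sake of a small runnable demo; the paper's algorithm is deterministic, uses bounded independence, and runs in nearly logarithmic space, none of which this sketch attempts.

```python
import numpy as np

def spectral_sparsify(n, weighted_edges, eps=0.5, seed=0):
    """Monte-Carlo sketch of effective-resistance edge sampling
    (Spielman-Srivastava style).  weighted_edges is a list of (u, v, w)."""
    rng = np.random.default_rng(seed)
    # Build the weighted Laplacian and its pseudo-inverse (fine for demos).
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lpinv = np.linalg.pinv(L)
    # Leverage scores: w_e * effective resistance of e.
    scores = []
    for u, v, w in weighted_edges:
        chi = np.zeros(n); chi[u], chi[v] = 1.0, -1.0
        scores.append(w * chi @ Lpinv @ chi)
    scores = np.array(scores)
    probs = scores / scores.sum()
    # Draw q samples and reweight so the sparsifier is unbiased.
    q = int(np.ceil(8 * n * np.log(n) / eps ** 2))
    sparsifier = {}
    for idx in rng.choice(len(weighted_edges), size=q, p=probs):
        u, v, w = weighted_edges[idx]
        key = (u, v)
        sparsifier[key] = sparsifier.get(key, 0.0) + w / (q * probs[idx])
    return sparsifier
```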

Cite as

Dean Doron, Jack Murtagh, Salil Vadhan, and David Zuckerman. Spectral Sparsification via Bounded-Independence Sampling. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 39:1-39:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{doron_et_al:LIPIcs.ICALP.2020.39,
  author =	{Doron, Dean and Murtagh, Jack and Vadhan, Salil and Zuckerman, David},
  title =	{{Spectral Sparsification via Bounded-Independence Sampling}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{39:1--39:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.39},
  URN =		{urn:nbn:de:0030-drops-124462},
  doi =		{10.4230/LIPIcs.ICALP.2020.39},
  annote =	{Keywords: Spectral sparsification, Derandomization, Space complexity}
}
Document
Track A: Algorithms, Complexity and Games
Hard Problems on Random Graphs

Authors: Jan Dreier, Henri Lotze, and Peter Rossmanith


Abstract
Many graph properties are expressible in first order logic. Whether a graph contains a clique or a dominating set of size k are two examples. Parameterized by the solution size, the first problem is W[1]-complete and the second W[2]-complete, meaning that both of them are hard in the worst case. If we look at both problems from the perspective of average-case complexity, the picture changes. Clique can be solved in expected FPT time on uniformly distributed graphs of size n, while this is not clear for Dominating Set. We show that it is indeed unlikely that Dominating Set can be solved efficiently on random graphs: if it can, then every first-order expressible graph property can be solved in expected FPT time, too. Furthermore, this remains true when we consider random graphs with an arbitrary constant edge probability. We identify a very simple problem on random matrices that is equally hard to solve on average: given a square boolean matrix, are there k rows whose logical AND is the zero vector? The related Even Set problem, on the other hand, turns out to be efficiently solvable on random instances, while it is known to be hard in the worst case.
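
The matrix problem from the abstract is easy to state in code; the brute-force checker below tries all C(n, k) row subsets, with rows encoded as integer bitmasks (an illustrative choice). The paper's point is the average-case hardness of this question, not this naive solver.

```python
from functools import reduce
from itertools import combinations

def has_k_rows_with_zero_and(rows, k):
    """Given the rows of a boolean matrix as integer bitmasks, decide whether
    some k rows have bitwise AND equal to the all-zero vector."""
    return any(reduce(lambda a, b: a & b, combo) == 0
               for combo in combinations(rows, k))

# Example: rows 0b110 and 0b001 AND to 0, so k = 2 succeeds.
# has_k_rows_with_zero_and([0b110, 0b001, 0b111], 2) == True
```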

Cite as

Jan Dreier, Henri Lotze, and Peter Rossmanith. Hard Problems on Random Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dreier_et_al:LIPIcs.ICALP.2020.40,
  author =	{Dreier, Jan and Lotze, Henri and Rossmanith, Peter},
  title =	{{Hard Problems on Random Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.40},
  URN =		{urn:nbn:de:0030-drops-124477},
  doi =		{10.4230/LIPIcs.ICALP.2020.40},
  annote =	{Keywords: random graphs, average-case complexity, first-order model checking}
}
Document
Track A: Algorithms, Complexity and Games
A Scaling Algorithm for Weighted f-Factors in General Graphs

Authors: Ran Duan, Haoqing He, and Tianyi Zhang


Abstract
We study the maximum weight perfect f-factor problem on any general simple graph G = (V,E,ω) with positive integral edge weights ω, and n = |V|, m = |E|. Given a function f : V → ℕ_+ on the vertices, a perfect f-factor is a generalized matching in which every vertex u is matched by exactly f(u) different edges. The previous best results on this problem have running time O(m f(V)) [Gabow 2018] or Õ(W (f(V))^{2.373}) [Gabow and Sankowski 2013], where W is the maximum edge weight and f(V) = ∑_{u ∈ V} f(u). In this paper, we present a scaling algorithm for this problem with running time Õ(m n^{2/3} log W). Previously this bound was only known for bipartite graphs [Gabow and Tarjan 1989]. The advantage is that the running time is independent of f(V), and consequently it breaks the Ω(mn) barrier for large f(V) even for the unweighted f-factor problem in general graphs.
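
To pin down the object being optimized, the checker below verifies that a chosen edge set is a perfect f-factor, i.e. that every vertex u is covered by exactly f(u) chosen edges. It is a definition check only; computing a maximum-weight such factor is the hard part the paper addresses.

```python
from collections import Counter

def is_perfect_f_factor(vertices, edge_subset, f):
    """Return True iff every vertex u is incident to exactly f(u) edges
    of edge_subset (edges given as pairs of vertices)."""
    degree = Counter()
    for u, v in edge_subset:
        degree[u] += 1
        degree[v] += 1
    return all(degree[u] == f(u) for u in vertices)

# Example: a triangle is a perfect f-factor for f(u) = 2 on its 3 vertices.
# is_perfect_f_factor([0, 1, 2], [(0, 1), (1, 2), (2, 0)], lambda u: 2) == True
```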

Cite as

Ran Duan, Haoqing He, and Tianyi Zhang. A Scaling Algorithm for Weighted f-Factors in General Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 41:1-41:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{duan_et_al:LIPIcs.ICALP.2020.41,
  author =	{Duan, Ran and He, Haoqing and Zhang, Tianyi},
  title =	{{A Scaling Algorithm for Weighted f-Factors in General Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{41:1--41:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.41},
  URN =		{urn:nbn:de:0030-drops-124487},
  doi =		{10.4230/LIPIcs.ICALP.2020.41},
  annote =	{Keywords: Scaling Algorithm, f-Factors, General Graphs}
}
Document
Track A: Algorithms, Complexity and Games
The Outer Limits of Contention Resolution on Matroids and Connections to the Secretary Problem

Authors: Shaddin Dughmi


Abstract
Contention resolution schemes have proven to be a useful and unifying abstraction for a variety of constrained optimization problems, in both offline and online arrival models. Much of prior work restricts attention to product distributions for the input set of elements, and studies contention resolution for increasingly general packing constraints, both offline and online. In this paper, we instead focus on generalizing the input distribution, restricting attention to matroid constraints in both the offline and online random arrival models. In particular, we study contention resolution when the input set is arbitrarily distributed, and may exhibit positive and/or negative correlations between elements. We characterize the distributions for which offline contention resolution is possible, and establish some of their basic closure properties. Our characterization can be interpreted as a distributional generalization of the matroid covering theorem. For the online random arrival model, we show that contention resolution is intimately tied to the secretary problem via two results. First, we show that a competitive algorithm for the matroid secretary problem implies that online contention resolution is essentially as powerful as offline contention resolution for matroids, so long as the algorithm is given the input distribution. Second, we reduce the matroid secretary problem to the design of an online contention resolution scheme of a particular form.

Cite as

Shaddin Dughmi. The Outer Limits of Contention Resolution on Matroids and Connections to the Secretary Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dughmi:LIPIcs.ICALP.2020.42,
  author =	{Dughmi, Shaddin},
  title =	{{The Outer Limits of Contention Resolution on Matroids and Connections to the Secretary Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.42},
  URN =		{urn:nbn:de:0030-drops-124496},
  doi =		{10.4230/LIPIcs.ICALP.2020.42},
  annote =	{Keywords: Contention Resolution, Secretary Problems, Matroids}
}
Document
Track A: Algorithms, Complexity and Games
Extending Partial 1-Planar Drawings

Authors: Eduard Eiben, Robert Ganian, Thekla Hamm, Fabian Klute, and Martin Nöllenburg


Abstract
Algorithmic extension problems of partial graph representations such as planar graph drawings or geometric intersection representations are of growing interest in topological graph theory and graph drawing. In such an extension problem, we are given a tuple (G,H,ℋ) consisting of a graph G, a connected subgraph H of G and a drawing ℋ of H, and the task is to extend ℋ into a drawing of G while maintaining some desired property of the drawing, such as planarity. In this paper we study the problem of extending partial 1-planar drawings, which are drawings in the plane that allow each edge to have at most one crossing. In addition we consider the subclass of IC-planar drawings, which are 1-planar drawings with independent crossings. Recognizing 1-planar graphs as well as IC-planar graphs is NP-complete and the NP-completeness easily carries over to the extension problem. Therefore, our focus lies on establishing the tractability of such extension problems in a weaker sense than polynomial-time tractability. Here, we show that both problems are fixed-parameter tractable when parameterized by the number of edges missing from H, i.e., the edge deletion distance between H and G. The second part of the paper then turns to a more powerful parameterization which is based on measuring the vertex+edge deletion distance between the partial and complete drawing, i.e., the minimum number of vertices and edges that need to be deleted to obtain H from G.

Cite as

Eduard Eiben, Robert Ganian, Thekla Hamm, Fabian Klute, and Martin Nöllenburg. Extending Partial 1-Planar Drawings. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 43:1-43:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{eiben_et_al:LIPIcs.ICALP.2020.43,
  author =	{Eiben, Eduard and Ganian, Robert and Hamm, Thekla and Klute, Fabian and N\"{o}llenburg, Martin},
  title =	{{Extending Partial 1-Planar Drawings}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{43:1--43:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.43},
  URN =		{urn:nbn:de:0030-drops-124509},
  doi =		{10.4230/LIPIcs.ICALP.2020.43},
  annote =	{Keywords: Extension problems, 1-planarity, parameterized algorithms}
}
Document
Track A: Algorithms, Complexity and Games
How to Hide a Clique?

Authors: Uriel Feige and Vadim Grinberg


Abstract
In the well-known planted clique problem, a clique (or alternatively, an independent set) of size k is planted at random in an Erdős-Rényi random G(n, p) graph, and the goal is to design an algorithm that finds the maximum clique (or independent set) in the resulting graph. We introduce a variation on this problem, where instead of planting the clique at random, the clique is planted by an adversary who attempts to make it difficult to find the maximum clique in the resulting graph. We show that for the standard setting of the parameters of the problem, namely a clique of size k = √n planted in a random G(n, 1/2) graph, the known polynomial-time algorithms can be extended (in a non-trivial way) to work also in the adversarial setting. In contrast, we show that for other natural settings of the parameters, such as planting an independent set of size k = n/2 in a G(n, p) graph with p = n^{-1/2}, there is no polynomial-time algorithm that finds an independent set of size k, unless NP has randomized polynomial-time algorithms.
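
A small sketch of the random (non-adversarial) variant described above, planting a clique of size k = ⌈√n⌉ in G(n, 1/2); the adjacency-set representation and the helper name are choices made here, not taken from the paper.

import math
import random

def planted_clique_instance(n, seed=0):
    """Sample G(n, 1/2) and plant a clique on a random vertex subset of size ceil(sqrt(n))."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 0.5:           # each edge independently with probability 1/2
                adj[u].add(v)
                adj[v].add(u)
    k = math.ceil(math.sqrt(n))
    planted = rng.sample(range(n), k)
    for u in planted:                        # turn the chosen subset into a clique
        for v in planted:
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    return adj, set(planted)

In the adversarial model studied in the paper, the planted subset is instead chosen by an adversary after seeing the random graph, which is exactly what this sketch does not capture.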

Cite as

Uriel Feige and Vadim Grinberg. How to Hide a Clique?. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 44:1-44:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{feige_et_al:LIPIcs.ICALP.2020.44,
  author =	{Feige, Uriel and Grinberg, Vadim},
  title =	{{How to Hide a Clique?}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{44:1--44:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.44},
  URN =		{urn:nbn:de:0030-drops-124517},
  doi =		{10.4230/LIPIcs.ICALP.2020.44},
  annote =	{Keywords: planted clique, semi-random model, Lovasz theta function, random graphs}
}
Document
Track A: Algorithms, Complexity and Games
Sampling Arbitrary Subgraphs Exactly Uniformly in Sublinear Time

Authors: Hendrik Fichtenberger, Mingze Gao, and Pan Peng


Abstract
We present a simple sublinear-time algorithm for sampling an arbitrary subgraph H exactly uniformly from a graph G, to which the algorithm has access by performing the following types of queries: (1) uniform vertex queries, (2) degree queries, (3) neighbor queries, (4) pair queries and (5) edge sampling queries. The query complexity and running time of our algorithm are Õ(min{m, (m^ρ(H))/#H}) and Õ((m^ρ(H))/#H), respectively, where ρ(H) is the fractional edge-cover number of H and #H is the number of copies of H in G. For any clique on r vertices, i.e., H = K_r, our algorithm is almost optimal as any algorithm that samples an H from any distribution that has Ω(1) total probability mass on the set of all copies of H must perform Ω(min{m, (m^ρ(H))/(#H⋅(cr)^r)}) queries. Together with the query and time complexities of the (1±ε)-approximation algorithm for the number of subgraphs H by Assadi et al. [Sepehr Assadi et al., 2018] and the lower bound by Eden and Rosenbaum [Eden and Rosenbaum, 2018] for approximately counting cliques, our results suggest that in our query model, approximately counting cliques is "equivalent to" exactly uniformly sampling cliques, in the sense that the query and time complexities of exactly uniform sampling and randomized approximate counting are within a polylogarithmic factor of each other. This stands in interesting contrast to an analogous relation between approximate counting and almost uniform sampling for self-reducible problems in the polynomial-time regime by Jerrum, Valiant and Vazirani [Jerrum et al., 1986].

Cite as

Hendrik Fichtenberger, Mingze Gao, and Pan Peng. Sampling Arbitrary Subgraphs Exactly Uniformly in Sublinear Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 45:1-45:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fichtenberger_et_al:LIPIcs.ICALP.2020.45,
  author =	{Fichtenberger, Hendrik and Gao, Mingze and Peng, Pan},
  title =	{{Sampling Arbitrary Subgraphs Exactly Uniformly in Sublinear Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{45:1--45:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.45},
  URN =		{urn:nbn:de:0030-drops-124526},
  doi =		{10.4230/LIPIcs.ICALP.2020.45},
  annote =	{Keywords: Graph sampling, Graph algorithms, Sublinear-time algorithms}
}
Document
Track A: Algorithms, Complexity and Games
A Water-Filling Primal-Dual Algorithm for Approximating Non-Linear Covering Problems

Authors: Andrés Fielbaum, Ignacio Morales, and José Verschae


Abstract
Obtaining strong linear relaxations of capacitated covering problems constitutes a significant technical challenge even for simple settings. For one of the most basic cases, the Knapsack-Cover (Min-Knapsack) problem, the relaxation based on knapsack-cover inequalities has an integrality gap of 2. These inequalities are exploited in more general problems, many of which admit primal-dual approximation algorithms. Inspired by problems from power and transport systems, we introduce a general setting in which items can be taken fractionally to cover a given demand. The cost incurred by an item is given by an arbitrary non-decreasing function of the chosen fraction. We generalize the knapsack-cover inequalities to this setting and use them to obtain a (2+ε)-approximate primal-dual algorithm. Our procedure has a natural interpretation as a bucket-filling algorithm which effectively overcomes the difficulties implied by having different slopes in the cost functions. More precisely, when a higher segment of an item has a low slope, it helps to increase the priority of lower segments. We also present a rounding algorithm with an approximation guarantee of 2. We generalize our algorithm to the Unsplittable Flow-Cover problem on a line, also for the setting of fractional items with non-linear costs. For this problem we obtain a (4+ε)-approximation algorithm in polynomial time, almost matching the 4-approximation algorithm known for the classical setting.
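
For reference, the classical knapsack-cover inequalities for the 0/1 Min-Knapsack problem with demand D and item sizes u_i can be stated as follows (standard form, spelled out here for readability; the paper generalizes them to fractionally selectable items with non-linear costs):

\text{for every } A \subseteq N \text{ with residual demand } D(A) := D - \sum_{i \in A} u_i > 0:
\qquad \sum_{i \in N \setminus A} \min\{u_i,\, D(A)\}\, x_i \;\ge\; D(A).

Truncating each size at the residual demand D(A) is what brings the integrality gap of the relaxation down to 2.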

Cite as

Andrés Fielbaum, Ignacio Morales, and José Verschae. A Water-Filling Primal-Dual Algorithm for Approximating Non-Linear Covering Problems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fielbaum_et_al:LIPIcs.ICALP.2020.46,
  author =	{Fielbaum, Andr\'{e}s and Morales, Ignacio and Verschae, Jos\'{e}},
  title =	{{A Water-Filling Primal-Dual Algorithm for Approximating Non-Linear Covering Problems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.46},
  URN =		{urn:nbn:de:0030-drops-124531},
  doi =		{10.4230/LIPIcs.ICALP.2020.46},
  annote =	{Keywords: Knapsack-Cover Inequalities, Non-Linear Knapsack-Cover, Primal-Dual, Water-Filling Algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Scattering and Sparse Partitions, and Their Applications

Authors: Arnold Filtser


Abstract
A partition 𝒫 of a weighted graph G is (σ,τ,Δ)-sparse if every cluster has diameter at most Δ, and every ball of radius Δ/σ intersects at most τ clusters. Similarly, 𝒫 is (σ,τ,Δ)-scattering if, instead of balls, we require that every shortest path of length at most Δ/σ intersects at most τ clusters. Given a graph G that admits a (σ,τ,Δ)-sparse partition for all Δ > 0, Jia et al. [STOC05] constructed a solution for the Universal Steiner Tree problem (and also Universal TSP) with stretch O(τσ²log_τ n). Given a graph G that admits a (σ,τ,Δ)-scattering partition for all Δ > 0, we construct a solution for the Steiner Point Removal problem with stretch O(τ³σ³). We then construct sparse and scattering partitions for various graph families, obtaining many new results for the Universal Steiner Tree and Steiner Point Removal problems.
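
A brute-force checker for the (σ,τ,Δ)-sparse condition defined above (illustrative only; it runs Dijkstra from every vertex, so it is only meant for small instances, and the adjacency-list encoding is a choice made here, not the paper's):

import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in a weighted graph given as {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def is_sparse_partition(adj, cluster_of, sigma, tau, delta):
    """Check that every cluster has (weak) diameter at most delta and that every ball
    of radius delta/sigma intersects at most tau clusters."""
    for u in adj:
        dist = dijkstra(adj, u)
        ball = {cluster_of[v] for v, d in dist.items() if d <= delta / sigma}
        if len(ball) > tau:
            return False
        for v, d in dist.items():
            if cluster_of[v] == cluster_of[u] and d > delta:
                return False
    return True

Checking the scattering condition instead would enumerate shortest paths of length at most Δ/σ and count the clusters each of them touches.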

Cite as

Arnold Filtser. Scattering and Sparse Partitions, and Their Applications. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 47:1-47:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{filtser:LIPIcs.ICALP.2020.47,
  author =	{Filtser, Arnold},
  title =	{{Scattering and Sparse Partitions, and Their Applications}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{47:1--47:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.47},
  URN =		{urn:nbn:de:0030-drops-124547},
  doi =		{10.4230/LIPIcs.ICALP.2020.47},
  annote =	{Keywords: Scattering partitions, sparse partitions, sparse covers, Steiner point removal, Universal Steiner tree, Universal TSP}
}
Document
Track A: Algorithms, Complexity and Games
Approximate Nearest Neighbor for Curves - Simple, Efficient, and Deterministic

Authors: Arnold Filtser, Omrit Filtser, and Matthew J. Katz


Abstract
In the (1+ε,r)-approximate near-neighbor problem for curves (ANNC) under some similarity measure δ, the goal is to construct a data structure for a given set 𝒞 of curves that supports approximate near-neighbor queries: Given a query curve Q, if there exists a curve C ∈ 𝒞 such that δ(Q,C)≤ r, then return a curve C' ∈ 𝒞 with δ(Q,C') ≤ (1+ε)r. There exists an efficient reduction from the (1+ε)-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve C ∈ 𝒞 with δ(Q,C) ≤ (1+ε)⋅δ(Q,C^*), where C^* is the curve of 𝒞 most similar to Q. Given a set 𝒞 of n curves, each consisting of m points in d dimensions, we construct a data structure for ANNC that uses n⋅ O(1/ε)^{md} storage space and has O(md) query time (for a query curve of length m), where the similarity measure between two curves is their discrete Fréchet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is k ≪ m, and obtain essentially the same storage and query bounds as above, except that m is replaced by k. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
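
The discrete Fréchet distance used above admits a standard O(nm) dynamic program; a sketch for intuition (this is the distance computation only, not the paper's near-neighbor data structure):

import math

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polygonal curves given as lists of points."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], cost)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1], ca[i - 1][j - 1]), cost)
    return ca[n - 1][m - 1]

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0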

Cite as

Arnold Filtser, Omrit Filtser, and Matthew J. Katz. Approximate Nearest Neighbor for Curves - Simple, Efficient, and Deterministic. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 48:1-48:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{filtser_et_al:LIPIcs.ICALP.2020.48,
  author =	{Filtser, Arnold and Filtser, Omrit and Katz, Matthew J.},
  title =	{{Approximate Nearest Neighbor for Curves - Simple, Efficient, and Deterministic}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{48:1--48:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.48},
  URN =		{urn:nbn:de:0030-drops-124555},
  doi =		{10.4230/LIPIcs.ICALP.2020.48},
  annote =	{Keywords: polygonal curves, Fr\'{e}chet distance, dynamic time warping, approximation algorithms, (asymmetric) approximate nearest neighbor, range counting}
}
Document
Track A: Algorithms, Complexity and Games
Computation of Hadwiger Number and Related Contraction Problems: Tight Lower Bounds

Authors: Fedor V. Fomin, Daniel Lokshtanov, Ivan Mihajlin, Saket Saurabh, and Meirav Zehavi


Abstract
We prove that the Hadwiger number of an n-vertex graph G (the maximum size of a clique minor in G) cannot be computed in time n^o(n), unless the Exponential Time Hypothesis (ETH) fails. This resolves a well-known open question in the area of exact exponential algorithms. The technique developed for resolving the Hadwiger number problem has a wider applicability. We use it to rule out the existence of n^o(n)-time algorithms (up to ETH) for a large class of computational problems concerning edge contractions in graphs.

Cite as

Fedor V. Fomin, Daniel Lokshtanov, Ivan Mihajlin, Saket Saurabh, and Meirav Zehavi. Computation of Hadwiger Number and Related Contraction Problems: Tight Lower Bounds. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 49:1-49:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ICALP.2020.49,
  author =	{Fomin, Fedor V. and Lokshtanov, Daniel and Mihajlin, Ivan and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Computation of Hadwiger Number and Related Contraction Problems: Tight Lower Bounds}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{49:1--49:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.49},
  URN =		{urn:nbn:de:0030-drops-124568},
  doi =		{10.4230/LIPIcs.ICALP.2020.49},
  annote =	{Keywords: Hadwiger Number, Exponential-Time Hypothesis, Exact Algorithms, Edge Contraction Problems}
}
Document
Track A: Algorithms, Complexity and Games
Node-Max-Cut and the Complexity of Equilibrium in Linear Weighted Congestion Games

Authors: Dimitris Fotakis, Vardis Kandiros, Thanasis Lianeas, Nikos Mouzakis, Panagiotis Patsilinakos, and Stratis Skoulakis


Abstract
In this work, we seek a more refined understanding of the complexity of local optimum computation for Max-Cut and pure Nash equilibrium (PNE) computation for congestion games with weighted players and linear latency functions. We show that computing a PNE of linear weighted congestion games is PLS-complete either for very restricted strategy spaces, namely when player strategies are paths on a series-parallel network with a single origin and destination, or for very restricted latency functions, namely when the latency on each resource is equal to the congestion. Our results reveal a remarkable gap regarding the complexity of PNE in congestion games with weighted and unweighted players, since in the case of unweighted players, a PNE can be easily computed by either a simple greedy algorithm (for series-parallel networks) or any better-response dynamics (when the latency is equal to the congestion). To obtain the latter result, we first need to show that computing a local optimum of a natural restriction of Max-Cut, which we call Node-Max-Cut, is PLS-complete. In Node-Max-Cut, the input graph is vertex-weighted and the weight of each edge is equal to the product of the weights of its endpoints. Due to the very restricted nature of Node-Max-Cut, the reduction requires a careful combination of new gadgets with ideas and techniques from previous work. We also show how to compute efficiently a (1+ε)-approximate equilibrium for Node-Max-Cut, if the number of different vertex weights is constant.
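
In Node-Max-Cut the weight of an edge {u,v} is thus the product w_u ⋅ w_v of its endpoints' weights, and a solution is a local optimum when no single vertex can switch sides and strictly increase the cut weight. A tiny better-response sketch (illustrative; the paper's point is the PLS-hardness of this search, and this dynamics may take exponentially many steps):

def cut_value(edges, weights, side):
    """Cut weight when the weight of edge {u, v} is weights[u] * weights[v]."""
    return sum(weights[u] * weights[v] for u, v in edges if side[u] != side[v])

def better_response(edges, weights, side):
    """Flip single vertices as long as a flip strictly increases the cut weight."""
    improved = True
    while improved:
        improved = False
        for u in list(side):
            before = cut_value(edges, weights, side)
            side[u] ^= 1                      # try moving u to the other side
            if cut_value(edges, weights, side) > before:
                improved = True
            else:
                side[u] ^= 1                  # revert the flip
    return side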

Cite as

Dimitris Fotakis, Vardis Kandiros, Thanasis Lianeas, Nikos Mouzakis, Panagiotis Patsilinakos, and Stratis Skoulakis. Node-Max-Cut and the Complexity of Equilibrium in Linear Weighted Congestion Games. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 50:1-50:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fotakis_et_al:LIPIcs.ICALP.2020.50,
  author =	{Fotakis, Dimitris and Kandiros, Vardis and Lianeas, Thanasis and Mouzakis, Nikos and Patsilinakos, Panagiotis and Skoulakis, Stratis},
  title =	{{Node-Max-Cut and the Complexity of Equilibrium in Linear Weighted Congestion Games}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{50:1--50:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.50},
  URN =		{urn:nbn:de:0030-drops-124573},
  doi =		{10.4230/LIPIcs.ICALP.2020.50},
  annote =	{Keywords: PLS-completeness, Local-Max-Cut, Weighted Congestion Games, Equilibrium Computation}
}
Document
Track A: Algorithms, Complexity and Games
The Online Min-Sum Set Cover Problem

Authors: Dimitris Fotakis, Loukas Kavouras, Grigorios Koumoutsos, Stratis Skoulakis, and Manolis Vardas


Abstract
We consider the online Min-Sum Set Cover (MSSC) problem, a natural and intriguing generalization of the classical list update problem. In Online MSSC, the algorithm maintains a permutation on n elements based on subsets S₁, S₂, … arriving online. The algorithm serves each set S_t upon arrival, using its current permutation π_t, incurring an access cost equal to the position of the first element of S_t in π_t. Then, the algorithm may update its permutation to π_{t+1}, incurring a moving cost equal to the Kendall tau distance of π_t to π_{t+1}. The objective is to minimize the total access and moving cost for serving the entire sequence. We consider the r-uniform version, where each S_t has cardinality r. List update is the special case where r = 1. We obtain tight bounds on the competitive ratio of deterministic online algorithms for MSSC against a static adversary that serves the entire sequence by a single permutation. First, we show a lower bound of (r+1)(1-r/(n+1)) on the competitive ratio. Then, we consider several natural generalizations of successful list update algorithms and show that they fail to achieve any interesting competitive guarantee. On the positive side, we obtain an O(r)-competitive deterministic algorithm using ideas from online learning and the multiplicative weight updates (MWU) algorithm. Furthermore, we consider efficient algorithms. We propose a memoryless online algorithm, called Move-All-Equally, which is inspired by the Double Coverage algorithm for the k-server problem. We show that its competitive ratio is Ω(r²) and 2^{O(√{log n ⋅ log r})}, and conjecture that it is f(r)-competitive. We also compare Move-All-Equally against the dynamic optimal solution and obtain (almost) tight bounds by showing that it is Ω(r √n)- and O(r^{3/2} √n)-competitive.
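
The two cost terms in the objective are straightforward to compute directly; a small sketch (the list encoding of a permutation, with earlier positions being cheaper, is a choice made here):

def access_cost(perm, request):
    """Position (1-indexed) of the first element of the requested set in the permutation."""
    return min(perm.index(e) for e in request) + 1

def kendall_tau(perm_a, perm_b):
    """Moving cost: number of element pairs ordered differently by the two permutations."""
    pos = {e: i for i, e in enumerate(perm_b)}
    n = len(perm_a)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[perm_a[i]] > pos[perm_a[j]])

# Serving {c, d} with permutation [a, b, c, d] costs 3; reordering to [c, d, a, b] costs 4.
print(access_cost(["a", "b", "c", "d"], {"c", "d"}))            # 3
print(kendall_tau(["a", "b", "c", "d"], ["c", "d", "a", "b"]))  # 4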

Cite as

Dimitris Fotakis, Loukas Kavouras, Grigorios Koumoutsos, Stratis Skoulakis, and Manolis Vardas. The Online Min-Sum Set Cover Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 51:1-51:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fotakis_et_al:LIPIcs.ICALP.2020.51,
  author =	{Fotakis, Dimitris and Kavouras, Loukas and Koumoutsos, Grigorios and Skoulakis, Stratis and Vardas, Manolis},
  title =	{{The Online Min-Sum Set Cover Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{51:1--51:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.51},
  URN =		{urn:nbn:de:0030-drops-124582},
  doi =		{10.4230/LIPIcs.ICALP.2020.51},
  annote =	{Keywords: Online Algorithms, Competitive Analysis, Min-Sum Set Cover}
}
Document
Track A: Algorithms, Complexity and Games
Efficient Diagonalization of Symmetric Matrices Associated with Graphs of Small Treewidth

Authors: Martin Fürer, Carlos Hoppen, and Vilmar Trevisan


Abstract
Let M = (m_{ij}) be a symmetric matrix of order n and let G be the graph with vertex set {1,…,n} such that distinct vertices i and j are adjacent if and only if m_{ij} ≠ 0. We introduce a dynamic programming algorithm that finds a diagonal matrix that is congruent to M. If G is given with a tree decomposition 𝒯 of width k, then this can be done in time O(k|𝒯| + k² n), where |𝒯| denotes the number of nodes in 𝒯.
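
For intuition, here is a dense O(n³) sketch of congruent diagonalization by symmetric elimination, assuming for simplicity that every pivot encountered is nonzero (the paper's algorithm instead works along a width-k tree decomposition and handles zero pivots):

def congruent_diagonal(M):
    """Return a diagonal matrix congruent to the symmetric matrix M (i.e. P M P^T for some
    invertible P) by applying every row operation together with the matching column operation.
    Simplifying assumption: every pivot A[k][k] encountered is nonzero."""
    A = [row[:] for row in M]              # work on a copy
    n = len(A)
    for k in range(n):
        pivot = A[k][k]
        if pivot == 0:
            raise ValueError("zero pivot: the general algorithm needs an extra exchange step")
        for i in range(k + 1, n):
            factor = A[i][k] / pivot
            for j in range(n):             # row_i -= factor * row_k
                A[i][j] -= factor * A[k][j]
            for j in range(n):             # col_i -= factor * col_k, preserving congruence
                A[j][i] -= factor * A[j][k]
    return A

print(congruent_diagonal([[2.0, 1.0], [1.0, 3.0]]))  # [[2.0, 0.0], [0.0, 2.5]]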

Cite as

Martin Fürer, Carlos Hoppen, and Vilmar Trevisan. Efficient Diagonalization of Symmetric Matrices Associated with Graphs of Small Treewidth. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 52:1-52:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{furer_et_al:LIPIcs.ICALP.2020.52,
  author =	{F\"{u}rer, Martin and Hoppen, Carlos and Trevisan, Vilmar},
  title =	{{Efficient Diagonalization of Symmetric Matrices Associated with Graphs of Small Treewidth}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{52:1--52:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.52},
  URN =		{urn:nbn:de:0030-drops-124590},
  doi =		{10.4230/LIPIcs.ICALP.2020.52},
  annote =	{Keywords: Treewidth, Diagonalization, Eigenvalues}
}
Document
Track A: Algorithms, Complexity and Games
Counting Solutions to Random CNF Formulas

Authors: Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Kuan Yang


Abstract
We give the first efficient algorithm to approximately count the number of solutions in the random k-SAT model when the density of the formula scales exponentially with k. The best previous counting algorithm was due to Montanari and Shah and was based on the correlation decay method, which works up to densities (1+o_k(1))(2log k)/k, the Gibbs uniqueness threshold for the model. Instead, our algorithm harnesses a recent technique by Moitra to work for random formulas with much higher densities. The main challenge in our setting is to account for the presence of high-degree variables whose marginal distributions are hard to control and which cause significant correlations within the formula.

Cite as

Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Kuan Yang. Counting Solutions to Random CNF Formulas. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 53:1-53:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{galanis_et_al:LIPIcs.ICALP.2020.53,
  author =	{Galanis, Andreas and Goldberg, Leslie Ann and Guo, Heng and Yang, Kuan},
  title =	{{Counting Solutions to Random CNF Formulas}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{53:1--53:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.53},
  URN =		{urn:nbn:de:0030-drops-124603},
  doi =		{10.4230/LIPIcs.ICALP.2020.53},
  annote =	{Keywords: random CNF formulas, approximate counting}
}
Document
Track A: Algorithms, Complexity and Games
Robust Algorithms for TSP and Steiner Tree

Authors: Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi


Abstract
Robust optimization is a widely studied area in operations research, where the algorithm takes as input a range of values and outputs a single solution that performs well for the entire range. Specifically, a robust algorithm aims to minimize regret, defined as the maximum difference between the solution’s cost and that of an optimal solution in hindsight once the input has been realized. For graph problems in P, such as shortest path and minimum spanning tree, robust polynomial-time algorithms that obtain a constant approximation on regret are known. In this paper, we study robust algorithms for minimizing regret in NP-hard graph optimization problems, and give constant approximations on regret for the classical traveling salesman and Steiner tree problems.
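
In symbols, writing 𝒟 for the set of admissible cost realizations (the "range of values" above), the regret of a solution SOL is the following; this is the standard notation for regret minimization, spelled out here for readability rather than quoted from the paper.

\mathrm{regret}(\mathrm{SOL}) \;=\; \max_{d \in \mathcal{D}} \Bigl( \mathrm{cost}_d(\mathrm{SOL}) - \mathrm{cost}_d(\mathrm{OPT}_d) \Bigr),
\qquad \text{where } \mathrm{OPT}_d \in \arg\min_{S} \mathrm{cost}_d(S).

A constant approximation on regret then means that the returned solution's regret is within a constant factor of the best achievable regret.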

Cite as

Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi. Robust Algorithms for TSP and Steiner Tree. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 54:1-54:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ganesh_et_al:LIPIcs.ICALP.2020.54,
  author =	{Ganesh, Arun and Maggs, Bruce M. and Panigrahi, Debmalya},
  title =	{{Robust Algorithms for TSP and Steiner Tree}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{54:1--54:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.54},
  URN =		{urn:nbn:de:0030-drops-124619},
  doi =		{10.4230/LIPIcs.ICALP.2020.54},
  annote =	{Keywords: Robust optimization, Steiner tree, traveling salesman problem}
}
Document
Track A: Algorithms, Complexity and Games
Cryptographic Reverse Firewalls for Interactive Proof Systems

Authors: Chaya Ganesh, Bernardo Magri, and Daniele Venturi


Abstract
We study interactive proof systems (IPSes) in a strong adversarial setting where the machines of honest parties might be corrupted and under control of the adversary. Our aim is to answer the following, seemingly paradoxical, questions: - Can Peggy convince Vic of the veracity of an NP statement, without leaking any information about the witness even in case Vic is malicious and Peggy does not trust her computer? - Can we avoid that Peggy fools Vic into accepting false statements, even if Peggy is malicious and Vic does not trust her computer? At EUROCRYPT 2015, Mironov and Stephens-Davidowitz introduced cryptographic reverse firewalls (RFs) as an attractive approach to tackling such questions. Intuitively, an RF for Peggy/Vic is an external party that sits between Peggy/Vic and the outside world and whose purpose is to sanitize Peggy’s/Vic’s incoming and outgoing messages in the face of subversion of her/his computer, e.g., in order to destroy subliminal channels. In this paper, we put forward several natural security properties for RFs in the concrete setting of IPSes. As our main contribution, we construct efficient RFs for different IPSes derived from a large class of Sigma protocols that we call malleable. A nice feature of our design is that it is completely transparent, in the sense that our RFs can be directly applied to already deployed IPSes, without the need to re-implement them.

Cite as

Chaya Ganesh, Bernardo Magri, and Daniele Venturi. Cryptographic Reverse Firewalls for Interactive Proof Systems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 55:1-55:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ganesh_et_al:LIPIcs.ICALP.2020.55,
  author =	{Ganesh, Chaya and Magri, Bernardo and Venturi, Daniele},
  title =	{{Cryptographic Reverse Firewalls for Interactive Proof Systems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{55:1--55:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.55},
  URN =		{urn:nbn:de:0030-drops-124621},
  doi =		{10.4230/LIPIcs.ICALP.2020.55},
  annote =	{Keywords: Subversion, Algorithm substitution attacks, Cryptographic reverse firewalls, Interactive proofs, Zero knowledge}
}
Document
Track A: Algorithms, Complexity and Games
Robust Algorithms Under Adversarial Injections

Authors: Paritosh Garg, Sagar Kale, Lars Rohwedder, and Ola Svensson


Abstract
In this paper, we study streaming and online algorithms in the context of randomness in the input. For several problems, a random order of the input sequence - as opposed to the worst-case order - appears to be a necessary evil in order to prove satisfying guarantees. However, algorithmic techniques that work under this assumption tend to be vulnerable to even small changes in the distribution. For this reason, we propose a new adversarial injections model, in which the input is ordered randomly, but an adversary may inject misleading elements at arbitrary positions. We believe that studying algorithms under this much weaker assumption can lead to new insights and, in particular, more robust algorithms. We investigate two classical combinatorial-optimization problems in this model: Maximum matching and cardinality constrained monotone submodular function maximization. Our main technical contribution is a novel streaming algorithm for the latter that computes a 0.55-approximation. While the algorithm itself is clean and simple, an involved analysis shows that it emulates a subdivision of the input stream which can be used to greatly limit the power of the adversary.

Cite as

Paritosh Garg, Sagar Kale, Lars Rohwedder, and Ola Svensson. Robust Algorithms Under Adversarial Injections. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 56:1-56:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.ICALP.2020.56,
  author =	{Garg, Paritosh and Kale, Sagar and Rohwedder, Lars and Svensson, Ola},
  title =	{{Robust Algorithms Under Adversarial Injections}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{56:1--56:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.56},
  URN =		{urn:nbn:de:0030-drops-124632},
  doi =		{10.4230/LIPIcs.ICALP.2020.56},
  annote =	{Keywords: Streaming algorithm, adversary, submodular maximization, matching}
}
Document
Track A: Algorithms, Complexity and Games
Minimum Cut in O(m log² n) Time

Authors: Paweł Gawrychowski, Shay Mozes, and Oren Weimann


Abstract
We give a randomized algorithm that finds a minimum cut in an undirected weighted m-edge n-vertex graph G with high probability in O(m log² n) time. This is the first improvement to Karger’s celebrated O(m log³ n) time algorithm from 1996. Our main technical contribution is a deterministic O(m log n) time algorithm that, given a spanning tree T of G, finds a minimum cut of G that 2-respects (cuts two edges of) T.

Cite as

Paweł Gawrychowski, Shay Mozes, and Oren Weimann. Minimum Cut in O(m log² n) Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2020.57,
  author =	{Gawrychowski, Pawe{\l} and Mozes, Shay and Weimann, Oren},
  title =	{{Minimum Cut in O(m log² n) Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.57},
  URN =		{urn:nbn:de:0030-drops-124646},
  doi =		{10.4230/LIPIcs.ICALP.2020.57},
  annote =	{Keywords: Minimum cut, Minimum 2-respecting cut}
}
Document
Track A: Algorithms, Complexity and Games
Sparse Recovery for Orthogonal Polynomial Transforms

Authors: Anna Gilbert, Albert Gu, Christopher Ré, Atri Rudra, and Mary Wootters


Abstract
In this paper we consider the following sparse recovery problem. We have query access to a vector 𝐱 ∈ ℝ^N such that x̂ = 𝐅 𝐱 is k-sparse (or nearly k-sparse) for some orthogonal transform 𝐅. The goal is to output an approximation (in an 𝓁₂ sense) to x̂ in sublinear time. This problem has been well-studied in the special case that 𝐅 is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time O(k ⋅ polylog N). However, for transforms 𝐅 other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms - running in time poly(k log(N)) - for solving the sparse recovery problem for orthogonal transforms 𝐅 that arise from orthogonal polynomials. More precisely, our algorithm works for any 𝐅 that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the k-sparse sparse recovery problem to the 1-sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials. Frequently, sparse FFT algorithms are described as implementing such a reduction; however, the technical details of such works are quite specific to the Fourier transform and moreover the actual implementations of these algorithms do not use the 1-sparse algorithm as a black box. In this work we give a reduction that works for a broad class of orthogonal polynomial families, and which uses any 1-sparse recovery algorithm as a black box.

Cite as

Anna Gilbert, Albert Gu, Christopher Ré, Atri Rudra, and Mary Wootters. Sparse Recovery for Orthogonal Polynomial Transforms. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 58:1-58:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gilbert_et_al:LIPIcs.ICALP.2020.58,
  author =	{Gilbert, Anna and Gu, Albert and R\'{e}, Christopher and Rudra, Atri and Wootters, Mary},
  title =	{{Sparse Recovery for Orthogonal Polynomial Transforms}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{58:1--58:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.58},
  URN =		{urn:nbn:de:0030-drops-124653},
  doi =		{10.4230/LIPIcs.ICALP.2020.58},
  annote =	{Keywords: Orthogonal polynomials, Jacobi polynomials, sublinear algorithms, sparse recovery}
}
Document
Track A: Algorithms, Complexity and Games
Hitting Long Directed Cycles Is Fixed-Parameter Tractable

Authors: Alexander Göke, Dániel Marx, and Matthias Mnich


Abstract
In the Directed Long Cycle Hitting Set problem we are given a directed graph G, and the task is to find a set S of at most k vertices/arcs such that G-S has no cycle of length longer than ℓ. We show that the problem can be solved in time 2^O(ℓ^6 + ℓ k^3 log k + k^5 log k log ℓ) ⋅ n^O(1), that is, it is fixed-parameter tractable (FPT) parameterized by k and ℓ. This algorithm can be seen as a far-reaching generalization of the fixed-parameter tractability of Mixed Graph Feedback Vertex Set [Bonsma and Lokshtanov WADS 2011], which is already a common generalization of the fixed-parameter tractability of (undirected) Feedback Vertex Set and the Directed Feedback Vertex Set problems, two classic results in parameterized algorithms. The algorithm requires significant insights into the structure of graphs without directed cycles of length longer than ℓ and can be seen as an exact version of the approximation algorithm following from the Erdős-Pósa property for long cycles in directed graphs proved by Kreutzer and Kawarabayashi [STOC 2015].

Cite as

Alexander Göke, Dániel Marx, and Matthias Mnich. Hitting Long Directed Cycles Is Fixed-Parameter Tractable. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 59:1-59:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{goke_et_al:LIPIcs.ICALP.2020.59,
  author =	{G\"{o}ke, Alexander and Marx, D\'{a}niel and Mnich, Matthias},
  title =	{{Hitting Long Directed Cycles Is Fixed-Parameter Tractable}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{59:1--59:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.59},
  URN =		{urn:nbn:de:0030-drops-124664},
  doi =		{10.4230/LIPIcs.ICALP.2020.59},
  annote =	{Keywords: Directed graphs, directed feedback vertex set, circumference}
}
Document
Track A: Algorithms, Complexity and Games
On the Central Levels Problem

Authors: Petr Gregor, Ondřej Mička, and Torsten Mütze


Abstract
The central levels problem asserts that the subgraph of the (2m+1)-dimensional hypercube induced by all bitstrings with at least m+1-𝓁 many 1s and at most m+𝓁 many 1s, i.e., the vertices in the middle 2𝓁 levels, has a Hamilton cycle for any m ≥ 1 and 1 ≤ 𝓁 ≤ m+1. This problem was raised independently by Savage, by Gregor and Škrekovski, and by Shen and Williams, and it is a common generalization of the well-known middle levels problem, namely the case 𝓁 = 1, and classical binary Gray codes, namely the case 𝓁 = m+1. In this paper we present a general constructive solution of the central levels problem. Our results also imply the existence of optimal cycles through any sequence of 𝓁 consecutive levels in the n-dimensional hypercube for any n ≥ 1 and 1 ≤ 𝓁 ≤ n+1. Moreover, extending an earlier construction by Streib and Trotter, we construct a Hamilton cycle through the n-dimensional hypercube, n≥ 2, that contains the symmetric chain decomposition constructed by Greene and Kleitman in the 1970s, and we provide a loopless algorithm for computing the corresponding Gray code.
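
The extreme case 𝓁 = m+1 above is the classical binary reflected Gray code, which the direct formula i ↦ i ⊕ (i ≫ 1) generates; a sketch of that classical case only (the central-levels construction for general 𝓁 is the paper's contribution):

def reflected_gray_code(n):
    """All n-bit strings in binary reflected Gray code order: consecutive strings
    (cyclically) differ in exactly one bit, i.e. a Hamilton cycle in the n-cube."""
    return [format(i ^ (i >> 1), "0{}b".format(n)) for i in range(1 << n)]

print(reflected_gray_code(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']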

Cite as

Petr Gregor, Ondřej Mička, and Torsten Mütze. On the Central Levels Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 60:1-60:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gregor_et_al:LIPIcs.ICALP.2020.60,
  author =	{Gregor, Petr and Mi\v{c}ka, Ond\v{r}ej and M\"{u}tze, Torsten},
  title =	{{On the Central Levels Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{60:1--60:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.60},
  URN =		{urn:nbn:de:0030-drops-124678},
  doi =		{10.4230/LIPIcs.ICALP.2020.60},
  annote =	{Keywords: Gray code, Hamilton cycle, hypercube, middle levels, symmetric chain decomposition}
}
Document
Track A: Algorithms, Complexity and Games
Linearly Representable Submodular Functions: An Algebraic Algorithm for Minimization

Authors: Rohit Gurjar and Rajat Rathi


Abstract
A set function f : 2^E → ℝ on the subsets of a set E is called submodular if it satisfies a natural diminishing returns property: for any S ⊆ E and x,y ∉ S, we have f(S ∪ {x,y}) - f(S ∪ {y}) ≤ f(S ∪ {x}) - f(S). The submodular minimization problem asks for the minimum value a given submodular function takes. We give an algebraic algorithm for this problem for a special class of submodular functions that are "linearly representable". It is known that every submodular function f can be decomposed into a sum of two monotone submodular functions, i.e., there exist two non-decreasing submodular functions f₁,f₂ such that f(S) = f₁(S) + f₂(E ⧵ S) for each S ⊆ E. Our class consists of those submodular functions f for which each of f₁ and f₂ is a sum of k rank functions on families of subspaces of 𝔽ⁿ, for some field 𝔽. Our algebraic algorithm for this class of functions can be parallelized, and thus puts the problem of finding the minimizing set in the complexity class randomized NC. Further, we derandomize our algorithm so that it needs only O(log²(kn|E|)) many random bits. We also give reductions from two combinatorial optimization problems to linearly representable submodular minimization, and thus obtain such parallel algorithms for these problems. These problems are (i) covering a directed graph by k a-arborescences and (ii) packing k branchings with given root sets in a directed graph.
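
A brute-force check of the diminishing-returns condition stated above, exponential in |E| and purely for illustration (the encoding of f as a Python callable on frozensets is a choice made here):

from itertools import chain, combinations

def is_submodular(E, f):
    """Check f(S + {x,y}) - f(S + {y}) <= f(S + {x}) - f(S) for all S and distinct x, y not in S."""
    ground = list(E)
    subsets = chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))
    for S in map(frozenset, subsets):
        outside = [x for x in ground if x not in S]
        for x in outside:
            for y in outside:
                if x == y:
                    continue
                if f(S | {x, y}) - f(S | {y}) > f(S | {x}) - f(S):
                    return False
    return True

# The rank function of the uniform matroid U_{2,4}, f(S) = min(|S|, 2), is submodular.
print(is_submodular({1, 2, 3, 4}, lambda S: min(len(S), 2)))  # True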

Cite as

Rohit Gurjar and Rajat Rathi. Linearly Representable Submodular Functions: An Algebraic Algorithm for Minimization. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 61:1-61:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gurjar_et_al:LIPIcs.ICALP.2020.61,
  author =	{Gurjar, Rohit and Rathi, Rajat},
  title =	{{Linearly Representable Submodular Functions: An Algebraic Algorithm for Minimization}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{61:1--61:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.61},
  URN =		{urn:nbn:de:0030-drops-124687},
  doi =		{10.4230/LIPIcs.ICALP.2020.61},
  annote =	{Keywords: Submodular Minimization, Parallel Algorithms, Derandomization, Algebraic Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
d-To-1 Hardness of Coloring 3-Colorable Graphs with O(1) Colors

Authors: Venkatesan Guruswami and Sai Sandeep


Abstract
The d-to-1 conjecture of Khot asserts that it is NP-hard to satisfy an ε fraction of constraints of a satisfiable d-to-1 Label Cover instance, for arbitrarily small ε > 0. We prove that the d-to-1 conjecture for any fixed d implies the hardness of coloring a 3-colorable graph with C colors for arbitrarily large integers C. Earlier, the hardness of O(1)-coloring 4-colorable graphs was known under the 2-to-1 conjecture, which is the strongest in the family of d-to-1 conjectures, and the hardness for 3-colorable graphs was known under a certain "fish-shaped" variant of the 2-to-1 conjecture.

Cite as

Venkatesan Guruswami and Sai Sandeep. d-To-1 Hardness of Coloring 3-Colorable Graphs with O(1) Colors. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 62:1-62:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.ICALP.2020.62,
  author =	{Guruswami, Venkatesan and Sandeep, Sai},
  title =	{{d-To-1 Hardness of Coloring 3-Colorable Graphs with O(1) Colors}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{62:1--62:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.62},
  URN =		{urn:nbn:de:0030-drops-124694},
  doi =		{10.4230/LIPIcs.ICALP.2020.62},
  annote =	{Keywords: graph coloring, hardness of approximation}
}
Document
Track A: Algorithms, Complexity and Games
Feasible Interpolation for Polynomial Calculus and Sums-Of-Squares

Authors: Tuomas Hakoniemi


Abstract
We prove that both Polynomial Calculus and Sums-of-Squares proof systems admit a strong form of feasible interpolation property for sets of polynomial equality constraints. Precisely, given two sets P(x,z) and Q(y,z) of equality constraints, a refutation Π of P(x,z) ∪ Q(y,z), and any assignment a to the variables z, one can find a refutation of P(x,a) or a refutation of Q(y,a) in time polynomial in the length of the bit-string encoding the refutation Π. For Sums-of-Squares we rely on the use of Boolean axioms, but for Polynomial Calculus we do not assume their presence.

Cite as

Tuomas Hakoniemi. Feasible Interpolation for Polynomial Calculus and Sums-Of-Squares. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{hakoniemi:LIPIcs.ICALP.2020.63,
  author =	{Hakoniemi, Tuomas},
  title =	{{Feasible Interpolation for Polynomial Calculus and Sums-Of-Squares}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.63},
  URN =		{urn:nbn:de:0030-drops-124707},
  doi =		{10.4230/LIPIcs.ICALP.2020.63},
  annote =	{Keywords: Proof Complexity, Feasible Interpolation, Sums-of-Squares, Polynomial Calculus}
}
Document
Track A: Algorithms, Complexity and Games
Active Learning a Convex Body in Low Dimensions

Authors: Sariel Har-Peled, Mitchell Jones, and Saladi Rahul


Abstract
Consider a set P ⊆ ℝ^d of n points, and a convex body C provided via a separation oracle. The task at hand is to decide for each point of P if it is in C using as few oracle queries as possible. We show that one can solve this problem in two and three dimensions using O(⬡_P log n) queries, where ⬡_P is the largest subset of points of P in convex position. In 2D, we provide an algorithm which efficiently generates these adaptive queries. Furthermore, we show that in two dimensions one can solve this problem using O(⊚(P,C) log² n) oracle queries, where ⊚(P,C) is a lower bound on the minimum number of queries that any algorithm for this specific instance requires. Finally, we consider other variations on the problem, such as deciding, with as few queries as possible, whether C contains all points of P. As an application of the above, we show that the discrete geometric median of a point set P in ℝ² can be computed in O(n log² n (log n log log n + ⬡(P))) expected time.
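The separation-oracle model above can be made concrete with a small sketch (an illustration of the query model only, not the paper's adaptive algorithm): disk_oracle is a hypothetical oracle for a disk, and the classifier reuses each returned separating halfplane to label further points of P without spending extra queries, which is the kind of saving that adaptive query strategies exploit.

```python
import math

def disk_oracle(center, radius, q):
    """Hypothetical separation oracle for a disk: returns ('in', None) if q is inside,
    otherwise ('out', (a, b)) where the halfplane a.x > b contains q but no point of C."""
    dx, dy = q[0] - center[0], q[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist <= radius:
        return 'in', None
    a = (dx / dist, dy / dist)                         # unit normal pointing towards q
    b = a[0] * center[0] + a[1] * center[1] + radius   # supporting line of the disk
    return 'out', (a, b)

def classify(points, oracle):
    """Label every point as inside/outside C, reusing separating halfplanes to save queries."""
    labels, queries = {}, 0
    for p in points:
        if p in labels:
            continue
        status, halfplane = oracle(p)
        queries += 1
        labels[p] = (status == 'in')
        if halfplane is not None:                      # the certificate rules out other points for free
            a, b = halfplane
            for q in points:
                if q not in labels and a[0] * q[0] + a[1] * q[1] > b:
                    labels[q] = False
    return labels, queries

pts = [(-3.0, 0.0), (-2.5, 0.5), (0.2, 0.1), (3.0, 3.0)]
print(classify(pts, lambda q: disk_oracle((0.0, 0.0), 1.0, q)))   # labels all four points with 3 queries
```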

Cite as

Sariel Har-Peled, Mitchell Jones, and Saladi Rahul. Active Learning a Convex Body in Low Dimensions. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 64:1-64:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{harpeled_et_al:LIPIcs.ICALP.2020.64,
  author =	{Har-Peled, Sariel and Jones, Mitchell and Rahul, Saladi},
  title =	{{Active Learning a Convex Body in Low Dimensions}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{64:1--64:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.64},
  URN =		{urn:nbn:de:0030-drops-124711},
  doi =		{10.4230/LIPIcs.ICALP.2020.64},
  annote =	{Keywords: Approximation algorithms, computational geometry, separation oracles, active learning}
}
Document
Track A: Algorithms, Complexity and Games
Node-Connectivity Terminal Backup, Separately-Capacitated Multiflow, and Discrete Convexity

Authors: Hiroshi Hirai and Motoki Ikeda


Abstract
The terminal backup problems [Anshelevich and Karagiozova, 2011] form a class of network design problems: Given an undirected graph with a requirement on terminals, the goal is to find a minimum cost subgraph satisfying the connectivity requirement. The node-connectivity terminal backup problem requires a terminal to connect other terminals with a number of node-disjoint paths. It is not known whether this problem is NP-hard or tractable. Fukunaga (2016) gave a 4/3-approximation algorithm based on an LP-rounding scheme using a general LP-solver. In this paper, we develop a combinatorial algorithm for the relaxed LP to find a half-integral optimal solution in O(mlog (mUA)⋅ MF(kn,m+k²n)) time, where m is the number of edges, k is the number of terminals, A is the maximum edge-cost, U is the maximum edge-capacity, and MF(n',m') is the time complexity of a max-flow algorithm in a network with n' nodes and m' edges. The algorithm implies that the 4/3-approximation algorithm for the node-connectivity terminal backup problem can also be implemented efficiently. For the design of the algorithm, we explore a connection between the node-connectivity terminal backup problem and a new type of multiflow, called a separately-capacitated multiflow. We show a min-max theorem which extends the Lovász-Cherkassky theorem to the node-capacity setting. Our results build on discrete convex analysis for the node-connectivity terminal backup problem.

Cite as

Hiroshi Hirai and Motoki Ikeda. Node-Connectivity Terminal Backup, Separately-Capacitated Multiflow, and Discrete Convexity. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 65:1-65:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{hirai_et_al:LIPIcs.ICALP.2020.65,
  author =	{Hirai, Hiroshi and Ikeda, Motoki},
  title =	{{Node-Connectivity Terminal Backup, Separately-Capacitated Multiflow, and Discrete Convexity}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{65:1--65:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.65},
  URN =		{urn:nbn:de:0030-drops-124725},
  doi =		{10.4230/LIPIcs.ICALP.2020.65},
  annote =	{Keywords: terminal backup problem, node-connectivity, separately-capacitated multiflow, discrete convex analysis}
}
Document
Track A: Algorithms, Complexity and Games
A Dichotomy for Bounded Degree Graph Homomorphisms with Nonnegative Weights

Authors: Artem Govorov, Jin-Yi Cai, and Martin Dyer


Abstract
We consider the complexity of counting weighted graph homomorphisms defined by a symmetric matrix A. Each symmetric matrix A defines a graph homomorphism function Z_A(⋅), also known as the partition function. Dyer and Greenhill [Martin E. Dyer and Catherine S. Greenhill, 2000] established a complexity dichotomy of Z_A(⋅) for symmetric {0, 1}-matrices A, and they further proved that its #P-hardness part also holds for bounded degree graphs. Bulatov and Grohe [Andrei Bulatov and Martin Grohe, 2005] extended the Dyer-Greenhill dichotomy to nonnegative symmetric matrices A. However, their hardness proof requires graphs of arbitrarily large degree, and whether the bounded degree part of the Dyer-Greenhill dichotomy can be extended has been an open problem for 15 years. We resolve this open problem and prove that for nonnegative symmetric A, either Z_A(G) is in polynomial time for all graphs G, or it is #P-hard for bounded degree (and simple) graphs G. We further extend the complexity dichotomy to include nonnegative vertex weights. Additionally, we prove that the #P-hardness part of the dichotomy by Goldberg et al. [Leslie A. Goldberg et al., 2010] for Z_A(⋅) also holds for simple graphs, where A is any real symmetric matrix.
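For concreteness, the partition function whose complexity is being classified can be spelled out by a brute-force evaluator (exponential in the number of vertices, purely illustrative): Z_A(G) sums, over all maps σ from V(G) to the rows of A, the product of A[σ(u)][σ(v)] over the edges {u,v}.

```python
from itertools import product

def partition_function(A, vertices, edges):
    """Brute-force Z_A(G): sum over all maps sigma of prod over edges (u,v) of A[sigma(u)][sigma(v)]."""
    q = len(A)
    total = 0
    for sigma in product(range(q), repeat=len(vertices)):
        spin = dict(zip(vertices, sigma))
        weight = 1
        for u, v in edges:
            weight *= A[spin[u]][spin[v]]
        total += weight
    return total

# With A = [[0, 1], [1, 0]], Z_A(G) counts proper 2-colourings: a triangle has none.
print(partition_function([[0, 1], [1, 0]], ["a", "b", "c"],
                         [("a", "b"), ("b", "c"), ("a", "c")]))   # 0
```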

Cite as

Artem Govorov, Jin-Yi Cai, and Martin Dyer. A Dichotomy for Bounded Degree Graph Homomorphisms with Nonnegative Weights. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 66:1-66:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{govorov_et_al:LIPIcs.ICALP.2020.66,
  author =	{Govorov, Artem and Cai, Jin-Yi and Dyer, Martin},
  title =	{{A Dichotomy for Bounded Degree Graph Homomorphisms with Nonnegative Weights}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{66:1--66:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.66},
  URN =		{urn:nbn:de:0030-drops-124733},
  doi =		{10.4230/LIPIcs.ICALP.2020.66},
  annote =	{Keywords: Graph homomorphism, Complexity dichotomy, Counting problems}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear-Space Lexicographic Depth-First Search for Bounded Treewidth Graphs and Planar Graphs

Authors: Taisuke Izumi and Yota Otachi


Abstract
The lexicographic depth-first search (Lex-DFS) is one of the first basic graph problems studied in the context of space-efficient algorithms. It was shown independently by Asano et al. [ISAAC 2014] and Elmasry et al. [STACS 2015] that Lex-DFS admits polynomial-time algorithms that run with O(n)-bit working memory, where n is the number of vertices in the graph. Lex-DFS is known to be P-complete under logspace reduction, and giving or ruling out polynomial-time sublinear-space algorithms for Lex-DFS on general graphs is quite challenging. In this paper, we study Lex-DFS on graphs of bounded treewidth. We first show that given a tree decomposition of width O(n^(1-ε)) with ε > 0, Lex-DFS can be solved in sublinear space. We then complement this result by presenting a space-efficient algorithm that can compute, for w ≤ √n, a tree decomposition of width O(w √nlog n) or correctly decide that the graph has treewidth more than w. This algorithm itself is of independent interest as the first space-efficient algorithm for computing a tree decomposition of moderate (small but non-constant) width. By combining these results, we show in particular that graphs of treewidth O(n^(1/2 - ε)) for some ε > 0 admit a polynomial-time sublinear-space algorithm for Lex-DFS. We also show that planar graphs admit a polynomial-time algorithm with O(n^(1/2+ε))-bit working memory for Lex-DFS.
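As a reference point, the linear-space Lex-DFS that the space-efficient algorithms emulate is the ordinary depth-first search that always descends into the smallest-numbered unvisited neighbour; a minimal sketch (assuming an adjacency-list input) follows.

```python
def lex_dfs(adj, root):
    """Lexicographic DFS: among unvisited neighbours, always descend into the smallest one."""
    visited, order = set(), []

    def visit(u):
        visited.add(u)
        order.append(u)
        for v in sorted(adj[u]):          # the lexicographic rule
            if v not in visited:
                visit(v)

    visit(root)
    return order

# Triangle 0-1-2: from 0, Lex-DFS visits 1 before 2 even though 2 is also adjacent to 0.
print(lex_dfs({0: [2, 1], 1: [0, 2], 2: [1, 0]}, root=0))   # [0, 1, 2]
```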

Cite as

Taisuke Izumi and Yota Otachi. Sublinear-Space Lexicographic Depth-First Search for Bounded Treewidth Graphs and Planar Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 67:1-67:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{izumi_et_al:LIPIcs.ICALP.2020.67,
  author =	{Izumi, Taisuke and Otachi, Yota},
  title =	{{Sublinear-Space Lexicographic Depth-First Search for Bounded Treewidth Graphs and Planar Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{67:1--67:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.67},
  URN =		{urn:nbn:de:0030-drops-124745},
  doi =		{10.4230/LIPIcs.ICALP.2020.67},
  annote =	{Keywords: depth-first search, space complexity, treewidth}
}
Document
Track A: Algorithms, Complexity and Games
Scheduling in the Random-Order Model

Authors: Susanne Albers and Maximilian Janke


Abstract
Makespan minimization on identical machines is a fundamental problem in online scheduling. The goal is to assign a sequence of jobs to m identical parallel machines so as to minimize the maximum completion time of any job. Already in the 1960s, Graham showed that Greedy is (2-1/m)-competitive [Graham, 1966]. The best deterministic online algorithm currently known achieves a competitive ratio of 1.9201 [Fleischer and Wahl, 2000]. No deterministic online strategy can obtain a competitiveness smaller than 1.88 [Rudin III, 2001]. In this paper, we study online makespan minimization in the popular random-order model, where the jobs of a given input arrive as a random permutation. It is known that Greedy does not attain a competitive factor asymptotically smaller than 2 in this setting [Osborn and Torng, 2008]. We present the first improved performance guarantees. Specifically, we develop a deterministic online algorithm that achieves a competitive ratio of 1.8478. The result relies on a new analysis approach. We identify a set of properties that a random permutation of the input jobs satisfies with high probability. Then we conduct a worst-case analysis of our algorithm, for the respective class of permutations. The analysis implies that the stated competitiveness holds not only in expectation but with high probability. Moreover, it provides mathematical evidence that job sequences leading to higher performance ratios are extremely rare, pathological inputs. We complement the results by lower bounds for the random-order model. We show that no deterministic online algorithm can achieve a competitive ratio smaller than 4/3. Moreover, no deterministic online algorithm can attain a competitiveness smaller than 3/2 with high probability.
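For orientation, Graham's Greedy mentioned above is plain list scheduling: each arriving job is placed on a currently least-loaded machine. The following sketch shows only that classical baseline; it is not the 1.8478-competitive algorithm of the paper.

```python
import heapq

def greedy_makespan(jobs, m):
    """Graham's list scheduling: put each arriving job on a currently least-loaded machine."""
    heap = [(0.0, i) for i in range(m)]    # (current load, machine index)
    heapq.heapify(heap)
    makespan = 0.0
    for p in jobs:                         # jobs arrive online; the order matters
        load, i = heapq.heappop(heap)
        load += p
        makespan = max(makespan, load)
        heapq.heappush(heap, (load, i))
    return makespan

print(greedy_makespan([2, 3, 4, 6, 2, 2], m=3))   # 8.0
```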

Cite as

Susanne Albers and Maximilian Janke. Scheduling in the Random-Order Model. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 68:1-68:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{albers_et_al:LIPIcs.ICALP.2020.68,
  author =	{Albers, Susanne and Janke, Maximilian},
  title =	{{Scheduling in the Random-Order Model}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{68:1--68:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.68},
  URN =		{urn:nbn:de:0030-drops-124750},
  doi =		{10.4230/LIPIcs.ICALP.2020.68},
  annote =	{Keywords: Scheduling, makespan minimization, online algorithm, competitive analysis, lower bound, random-order}
}
Document
Track A: Algorithms, Complexity and Games
Online Algorithms for Weighted Paging with Predictions

Authors: Zhihao Jiang, Debmalya Panigrahi, and Kevin Sun


Abstract
In this paper, we initiate the study of the weighted paging problem with predictions. This continues the recent line of work in online algorithms with predictions, particularly that of Lykouris and Vassilvitski (ICML 2018) and Rohatgi (SODA 2020) on unweighted paging with predictions. We show that unlike unweighted paging, neither a fixed lookahead nor knowledge of the next request for every page is sufficient information for an algorithm to overcome existing lower bounds in weighted paging. However, a combination of the two, which we call the strong per request prediction (SPRP) model, suffices to give a 2-competitive algorithm. We also explore the question of gracefully degrading algorithms with increasing prediction error, and give both upper and lower bounds for a set of natural measures of prediction error.

Cite as

Zhihao Jiang, Debmalya Panigrahi, and Kevin Sun. Online Algorithms for Weighted Paging with Predictions. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 69:1-69:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{jiang_et_al:LIPIcs.ICALP.2020.69,
  author =	{Jiang, Zhihao and Panigrahi, Debmalya and Sun, Kevin},
  title =	{{Online Algorithms for Weighted Paging with Predictions}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{69:1--69:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.69},
  URN =		{urn:nbn:de:0030-drops-124761},
  doi =		{10.4230/LIPIcs.ICALP.2020.69},
  annote =	{Keywords: Online algorithms, paging}
}
Document
Track A: Algorithms, Complexity and Games
Popular Matchings with One-Sided Bias

Authors: Telikepalli Kavitha


Abstract
Let G = (A ∪ B,E) be a bipartite graph where A consists of agents or main players and B consists of jobs or secondary players. Every vertex has a strict ranking of its neighbors. A matching M is popular if for any matching N, the number of vertices that prefer M to N is at least the number that prefer N to M. Popular matchings always exist in G since every stable matching is popular. A matching M is A-popular if for any matching N, the number of agents (i.e., vertices in A) that prefer M to N is at least the number of agents that prefer N to M. Unlike popular matchings, A-popular matchings need not exist in a given instance G, and there is a simple linear time algorithm to decide if G admits an A-popular matching and to compute one, if so. We consider the problem of deciding if G admits a matching that is both popular and A-popular and finding one, if so. We call such matchings fully popular. A fully popular matching is useful when A is the more important side - so along with overall popularity, we would like to maintain "popularity within the set A". A fully popular matching is not necessarily a min-size/max-size popular matching, and all known polynomial time algorithms for popular matching problems compute either min-size or max-size popular matchings. Here we give a linear time algorithm for the fully popular matching problem; thus our result identifies a new tractable subclass of popular matchings.
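The popularity comparison underlying these definitions is easy to state in code. The sketch below is a direct transcription of the definition (not the paper's algorithm): given two matchings M and N as partner maps, it computes the margin (#voters preferring M) minus (#voters preferring N); M is popular iff this margin is non-negative against every N, and A-popular if the same holds when the voters range only over A.

```python
def prefers(pref, mate_m, mate_n):
    """True iff this vertex strictly prefers its partner under M to its partner under N."""
    if mate_m == mate_n:
        return False
    if mate_m is None:
        return False                     # being unmatched never beats being matched
    if mate_n is None:
        return True                      # being matched always beats being unmatched
    return pref.index(mate_m) < pref.index(mate_n)

def popularity_margin(prefs, M, N, voters):
    """(#voters preferring M) - (#voters preferring N); restrict voters to A for A-popularity."""
    margin = 0
    for v in voters:
        margin += prefers(prefs[v], M.get(v), N.get(v)) - prefers(prefs[v], N.get(v), M.get(v))
    return margin

prefs = {"a1": ["b1", "b2"], "a2": ["b1", "b2"], "b1": ["a1", "a2"], "b2": ["a1", "a2"]}
M = {"a1": "b1", "b1": "a1", "a2": "b2", "b2": "a2"}
N = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}
print(popularity_margin(prefs, M, N, prefs))         # 0: a1, b1 prefer M while a2, b2 prefer N
print(popularity_margin(prefs, M, N, ["a1", "a2"]))  # 0 again, counting only the A side
```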

Cite as

Telikepalli Kavitha. Popular Matchings with One-Sided Bias. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 70:1-70:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kavitha:LIPIcs.ICALP.2020.70,
  author =	{Kavitha, Telikepalli},
  title =	{{Popular Matchings with One-Sided Bias}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{70:1--70:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.70},
  URN =		{urn:nbn:de:0030-drops-124774},
  doi =		{10.4230/LIPIcs.ICALP.2020.70},
  annote =	{Keywords: Bipartite graphs, Stable matchings, Gale-Shapley algorithm, LP-duality}
}
Document
Track A: Algorithms, Complexity and Games
Obviously Strategyproof Single-Minded Combinatorial Auctions

Authors: Bart de Keijzer, Maria Kyropoulou, and Carmine Ventre


Abstract
We consider the setting of combinatorial auctions when the agents are single-minded and have no contingent reasoning skills. We are interested in mechanisms that provide the right incentives to these imperfectly rational agents, and therefore focus our attention on obviously strategyproof (OSP) mechanisms. These mechanisms require that at each point during the execution where an agent is queried to communicate information, it should be "obvious" for the agent what strategy to adopt in order to maximise her utility. In this paper we study the potential of OSP mechanisms with respect to the approximability of the optimal social welfare. We consider two cases depending on whether the desired bundles of the agents are known or unknown to the mechanism. For the case of known-bundle single-minded agents we show that OSP can actually be as powerful as (plain) strategyproofness (SP). In particular, we show that we can implement the very same algorithm used for SP to achieve a √m-approximation of the optimal social welfare with an OSP mechanism, m being the total number of items. Restricting our attention to declaration domains with two values, we provide a 2-approximate OSP mechanism, and prove that this approximation bound is tight. We also present a randomised mechanism that is universally OSP and achieves a finite approximation of the optimal social welfare for the case of finite domains of arbitrary size. This mechanism also provides a bounded approximation ratio when the valuations lie in a bounded interval (even if the declaration domain is infinitely large). For the case of unknown-bundle single-minded agents, we show how we can achieve an approximation ratio equal to the size of the largest desired set, in an OSP way. We remark that this is the first known application of OSP to multi-dimensional settings, i.e., settings where agents have to declare more than one parameter. Our results paint a rather positive picture regarding the power of OSP mechanisms in this context, particularly for known-bundle single-minded agents. All our results are constructive, and even though some known strategyproof algorithms are used, implementing them in an OSP way is a non-trivial task.
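For context, a natural candidate for the √m-approximation algorithm alluded to above is the classical greedy rule for single-minded bidders (in the style of Lehmann, O'Callaghan, and Shoham): process bids in non-increasing order of value/√(bundle size) and accept a bid whenever its bundle is still fully available. The sketch below shows only this allocation rule; turning it into an OSP extensive-form mechanism is the contribution of the paper and is not reproduced here.

```python
import math

def greedy_single_minded(bids):
    """Greedy allocation for single-minded bidders: rank by value / sqrt(|bundle|),
    accept a bid whenever its desired bundle is still fully available."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][0] / math.sqrt(len(bids[i][1])),
                   reverse=True)
    allocated, winners = set(), []
    for i in order:
        value, bundle = bids[i]
        if allocated.isdisjoint(bundle):
            winners.append(i)
            allocated |= set(bundle)
    return winners

bids = [(5.0, {1, 2, 3, 4}), (3.0, {1}), (3.0, {2})]   # (value, desired bundle)
print(greedy_single_minded(bids))                      # [1, 2]
```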

Cite as

Bart de Keijzer, Maria Kyropoulou, and Carmine Ventre. Obviously Strategyproof Single-Minded Combinatorial Auctions. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 71:1-71:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dekeijzer_et_al:LIPIcs.ICALP.2020.71,
  author =	{de Keijzer, Bart and Kyropoulou, Maria and Ventre, Carmine},
  title =	{{Obviously Strategyproof Single-Minded Combinatorial Auctions}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{71:1--71:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.71},
  URN =		{urn:nbn:de:0030-drops-124781},
  doi =		{10.4230/LIPIcs.ICALP.2020.71},
  annote =	{Keywords: OSP Mechanisms, Extensive-form Mechanisms, Single-minded Combinatorial Auctions, Greedy algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Knapsack Secretary with Bursty Adversary

Authors: Thomas Kesselheim and Marco Molinaro


Abstract
The random-order or secretary model is one of the most popular beyond-worst-case models for online algorithms. While this model avoids the pessimism of the traditional adversarial model, in practice we cannot expect the input to be presented in perfectly random order. This has motivated research on "best of both worlds" algorithms (with good performance on both purely stochastic and purely adversarial inputs), or, even better, on inputs that are a mix of both stochastic and adversarial parts. Unfortunately, the latter seems much harder to achieve and very few results of this type are known. Towards advancing our understanding of designing such robust algorithms, we propose a random-order model with bursts of adversarial time steps. The assumption of burstiness of unexpected patterns is reasonable in many contexts, since changes (e.g. a spike in the demand for a good) are often triggered by a common external event. We then consider the Knapsack Secretary problem in this model: there is a knapsack of size k (e.g., the available quantity of a good), and in each of the n time steps an item comes with its value and size in [0,1] and the algorithm needs to make an irrevocable decision whether to accept or reject the item. We design an algorithm that gives an approximation of 1 - Õ(Γ/k) when the adversarial time steps can be covered by Γ ≥ √k intervals of size Õ(n/k). In particular, setting Γ = √k gives a (1 - O((ln² k)/√k))-approximation that is resistant to up to a (ln k)/√k-fraction of the items being adversarial, which is almost optimal even in the absence of adversarial items. Also, setting Γ = Ω̃(k) gives a constant approximation that is resistant to up to a constant fraction of items being adversarial. While the algorithm is a simple "primal" one, it does not possess the crucial symmetry properties exploited in the traditional analyses. The strategy of our analysis is more robust and significantly different from previous ones, and we hope it can be useful for other beyond-worst-case models.

Cite as

Thomas Kesselheim and Marco Molinaro. Knapsack Secretary with Bursty Adversary. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 72:1-72:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kesselheim_et_al:LIPIcs.ICALP.2020.72,
  author =	{Kesselheim, Thomas and Molinaro, Marco},
  title =	{{Knapsack Secretary with Bursty Adversary}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{72:1--72:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.72},
  URN =		{urn:nbn:de:0030-drops-124798},
  doi =		{10.4230/LIPIcs.ICALP.2020.72},
  annote =	{Keywords: Beyond worst-case, secretary problem, random order, online algorithms, knapsack}
}
Document
Track A: Algorithms, Complexity and Games
The Iteration Number of Colour Refinement

Authors: Sandra Kiefer and Brendan D. McKay


Abstract
The Colour Refinement procedure and its generalisation to higher dimensions, the Weisfeiler-Leman algorithm, are central subroutines in approaches to the graph isomorphism problem. In an iterative fashion, Colour Refinement computes a colouring of the vertices of its input graph. A trivial upper bound on the iteration number of Colour Refinement on graphs of order n is n-1. We show that this bound is tight. More precisely, we prove via explicit constructions that there are infinitely many graphs G on which Colour Refinement takes |G|-1 iterations to stabilise. Modifying the infinite families that we present, we show that for every natural number n ≥ 10, there are graphs on n vertices on which Colour Refinement requires at least n-2 iterations to reach stabilisation.
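A compact implementation of the procedure whose iteration number is analysed above (the stabilisation test and names are ours): each round replaces a vertex's colour by the pair (old colour, sorted multiset of neighbours' old colours), and the process stops once the partition is no longer refined.

```python
def colour_refinement(adj):
    """Iterate Colour Refinement until the vertex partition stops getting finer;
    returns the stable colouring and the number of refining iterations used."""
    colour = {v: 0 for v in adj}
    iterations = 0
    while True:
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v]))) for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {v: palette[signature[v]] for v in adj}
        if len(set(refined.values())) == len(set(colour.values())):
            return colour, iterations            # no further refinement: the colouring is stable
        colour = refined
        iterations += 1

# Path on four vertices: endpoints and inner vertices are separated after one refining round.
print(colour_refinement({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))   # ({0: 0, 1: 1, 2: 1, 3: 0}, 1)
```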

Cite as

Sandra Kiefer and Brendan D. McKay. The Iteration Number of Colour Refinement. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 73:1-73:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kiefer_et_al:LIPIcs.ICALP.2020.73,
  author =	{Kiefer, Sandra and McKay, Brendan D.},
  title =	{{The Iteration Number of Colour Refinement}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{73:1--73:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.73},
  URN =		{urn:nbn:de:0030-drops-124801},
  doi =		{10.4230/LIPIcs.ICALP.2020.73},
  annote =	{Keywords: Colour Refinement, iteration number, Weisfeiler-Leman algorithm, quantifier depth}
}
Document
Track A: Algorithms, Complexity and Games
Towards Optimal Set-Disjointness and Set-Intersection Data Structures

Authors: Tsvi Kopelowitz and Virginia Vassilevska Williams


Abstract
In the online set-disjointness problem the goal is to preprocess a family of sets ℱ, so that given two sets S,S' ∈ ℱ, one can quickly establish whether the two sets are disjoint or not. If N = ∑_{S ∈ ℱ} |S|, then let N^p be the preprocessing time and let N^q be the query time. The most efficient known combinatorial algorithm is a generalization of an algorithm by Cohen and Porat [TCS'10] which has a tradeoff curve of p+q = 2. Kopelowitz, Pettie, and Porat [SODA'16] showed that, based on the 3SUM hypothesis, there is a conditional lower bound curve of p+2q ≥ 2. Thus, the current state-of-the-art exhibits a large gap. The online set-intersection problem is the reporting version of the online set-disjointness problem, and given a query, the goal is to report all of the elements in the intersection. When considering algorithms with N^p preprocessing time and N^q +O(op) query time, where op is the size of the output, the combinatorial algorithm for online set-disjointness can be extended to solve online set-intersection with a tradeoff curve of p+q = 2. Kopelowitz, Pettie, and Porat [SODA'16] showed that, assuming the 3SUM hypothesis, for 0 ≤ q ≤ 2/3 this curve is tight. However, for 2/3 ≤ q < 1 there is no known lower bound. In this paper we close both gaps by showing the following: - For online set-disjointness we design an algorithm whose runtime, assuming ω = 2 (where ω is the exponent in the fastest matrix multiplication algorithm), matches the lower bound curve of Kopelowitz et al., for q ≤ 1/3. We then complement the new algorithm by a matching conditional lower bound for q > 1/3 which is based on a natural hypothesis on the time required to detect a triangle in an unbalanced tripartite graph. Remarkably, even if ω > 2, the algorithm matches the lower bound curve of Kopelowitz et al. for p≥ 1.73688 and q ≤ 0.13156. - For set-intersection, we prove a conditional lower bound that matches the combinatorial upper bound curve for q≥ 1/2 which is based on a hypothesis on the time required to enumerate all triangles in an unbalanced tripartite graph. - Finally, we design algorithms for detecting and enumerating triangles in unbalanced tripartite graphs which match the lower bounds of the corresponding hypotheses, assuming ω = 2.
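To make the preprocessing/query tradeoff concrete, here is a hedged sketch in the spirit of the classical heavy/light approach (attributed above to Cohen and Porat), not the new algorithm of this paper: disjointness of two large ("heavy") sets is read off a precomputed table, while any query touching a small set is answered by scanning that set.

```python
def preprocess(family, threshold):
    """Store all sets; precompute disjointness for every pair of 'heavy' sets."""
    sets = [frozenset(s) for s in family]
    heavy = {i for i, s in enumerate(sets) if len(s) >= threshold}
    table = {(i, j): sets[i].isdisjoint(sets[j]) for i in heavy for j in heavy}
    return sets, heavy, table

def query_disjoint(ds, i, j):
    sets, heavy, table = ds
    if i in heavy and j in heavy:
        return table[(i, j)]                             # table lookup for heavy/heavy pairs
    small, big = (i, j) if len(sets[i]) <= len(sets[j]) else (j, i)
    return all(x not in sets[big] for x in sets[small])  # scan the smaller (light) set

ds = preprocess([{1, 2, 3}, {3, 4}, {5, 6, 7, 8}], threshold=3)
print(query_disjoint(ds, 0, 1), query_disjoint(ds, 1, 2))   # False True
```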

Cite as

Tsvi Kopelowitz and Virginia Vassilevska Williams. Towards Optimal Set-Disjointness and Set-Intersection Data Structures. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 74:1-74:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kopelowitz_et_al:LIPIcs.ICALP.2020.74,
  author =	{Kopelowitz, Tsvi and Vassilevska Williams, Virginia},
  title =	{{Towards Optimal Set-Disjointness and Set-Intersection Data Structures}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{74:1--74:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.74},
  URN =		{urn:nbn:de:0030-drops-124813},
  doi =		{10.4230/LIPIcs.ICALP.2020.74},
  annote =	{Keywords: Set-disjointness data structures, Triangle detection, Triangle enumeration, Fine-grained complexity, Fast matrix multiplication}
}
Document
Track A: Algorithms, Complexity and Games
Kinetic Geodesic Voronoi Diagrams in a Simple Polygon

Authors: Matias Korman, André van Renssen, Marcel Roeloffzen, and Frank Staals


Abstract
We study the geodesic Voronoi diagram of a set S of n linearly moving sites inside a static simple polygon P with m vertices. We identify all events where the structure of the Voronoi diagram changes, bound the number of such events, and then develop a kinetic data structure (KDS) that maintains the geodesic Voronoi diagram as the sites move. To this end, we first analyze how often a single bisector, defined by two sites, or a single Voronoi center, defined by three sites, can change. For both these structures we prove that the number of such changes is at most O(m³), and that this is tight in the worst case. Moreover, we develop compact, responsive, local, and efficient kinetic data structures for both structures. Our data structures use linear space and process a worst-case optimal number of events. Our bisector KDS handles each event in O(log m) time, and our Voronoi center KDS handles each event in O(log² m) time. Both structures can be extended to efficiently support updating the movement of the sites as well. Using these data structures as building blocks we obtain a compact KDS for maintaining the full geodesic Voronoi diagram.

Cite as

Matias Korman, André van Renssen, Marcel Roeloffzen, and Frank Staals. Kinetic Geodesic Voronoi Diagrams in a Simple Polygon. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 75:1-75:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{korman_et_al:LIPIcs.ICALP.2020.75,
  author =	{Korman, Matias and van Renssen, Andr\'{e} and Roeloffzen, Marcel and Staals, Frank},
  title =	{{Kinetic Geodesic Voronoi Diagrams in a Simple Polygon}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{75:1--75:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.75},
  URN =		{urn:nbn:de:0030-drops-124820},
  doi =		{10.4230/LIPIcs.ICALP.2020.75},
  annote =	{Keywords: kinetic data structure, simple polygon, geodesic voronoi diagram}
}
Document
Track A: Algorithms, Complexity and Games
Polytopes, Lattices, and Spherical Codes for the Nearest Neighbor Problem

Authors: Thijs Laarhoven


Abstract
We study locality-sensitive hash methods for the nearest neighbor problem for the angular distance, focusing on the approach of first projecting down onto a random low-dimensional subspace, and then partitioning the projected vectors according to the Voronoi cells induced by a well-chosen spherical code. This approach generalizes and interpolates between the fast but asymptotically suboptimal hyperplane hashing of Charikar [STOC 2002], and asymptotically optimal but practically often slower hash families of e.g. Andoni - Indyk [FOCS 2006], Andoni - Indyk - Nguyen - Razenshteyn [SODA 2014] and Andoni - Indyk - Laarhoven - Razenshteyn - Schmidt [NIPS 2015]. We set up a framework for analyzing the performance of any spherical code in this context, and we provide results for various codes appearing in the literature, such as those related to regular polytopes and root lattices. Similar to hyperplane hashing, and unlike e.g. cross-polytope hashing, our analysis of collision probabilities and query exponents is exact and does not hide any order terms which vanish only for large d, thus facilitating an easier parameter selection in practical applications. For the two-dimensional case, we analytically derive closed-form expressions for arbitrary spherical codes, and we show that the equilateral triangle is optimal, achieving a better performance than the two-dimensional analogues of hyperplane and cross-polytope hashing. In three and four dimensions, we numerically find that the tetrahedron and 5-cell (the 3-simplex and 4-simplex) and the 16-cell (the 4-orthoplex) achieve the best query exponents, while in five or more dimensions orthoplices appear to outperform regular simplices, as well as the root lattice families A_k and D_k in terms of minimizing the query exponent. We provide lower bounds based on spherical caps, and we predict that in higher dimensions, larger spherical codes exist which outperform orthoplices in terms of the query exponent, and we argue why using the D_k root lattices will likely lead to better results in practice as well (compared to using cross-polytopes), due to a better trade-off between the asymptotic query exponent and the concrete costs of hashing.
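As a reference for the "fast but asymptotically suboptimal" endpoint of the framework, hyperplane hashing maps a vector to the sign pattern of a few random projections, so that vectors at small angular distance collide with good probability. A minimal sketch (parameters chosen arbitrarily for illustration):

```python
import random

def random_hyperplanes(k, d, rng):
    return [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]

def hyperplane_hash(v, hyperplanes):
    """Charikar-style hash: the sign pattern of v against k random hyperplanes."""
    return tuple(1 if sum(a * x for a, x in zip(h, v)) >= 0 else 0 for h in hyperplanes)

rng = random.Random(0)
H = random_hyperplanes(k=6, d=8, rng=rng)
u = [rng.gauss(0.0, 1.0) for _ in range(8)]
v = [x + 0.1 * rng.gauss(0.0, 1.0) for x in u]          # small angular perturbation of u
print(hyperplane_hash(u, H) == hyperplane_hash(v, H))   # True with good probability
```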

Cite as

Thijs Laarhoven. Polytopes, Lattices, and Spherical Codes for the Nearest Neighbor Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{laarhoven:LIPIcs.ICALP.2020.76,
  author =	{Laarhoven, Thijs},
  title =	{{Polytopes, Lattices, and Spherical Codes for the Nearest Neighbor Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.76},
  URN =		{urn:nbn:de:0030-drops-124834},
  doi =		{10.4230/LIPIcs.ICALP.2020.76},
  annote =	{Keywords: (approximate) nearest neighbor problem, spherical codes, polytopes, lattices, locality-sensitive hashing (LSH)}
}
Document
Track A: Algorithms, Complexity and Games
Deterministic Sparse Fourier Transform with an 𝓁_{∞} Guarantee

Authors: Yi Li and Vasileios Nakos


Abstract
In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of x ∈ ℂⁿ and design a recovery algorithm such that the output of the algorithm approximates x̂, the Discrete Fourier Transform (DFT) of x. The randomized case is by now well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains O(k² log^(-1) k ⋅ log^5.5 n) samples and a similar runtime with the 𝓁₂/𝓁₁ guarantee. We focus on the stronger 𝓁_∞/𝓁₁ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows. 1) We find a deterministic collection of O(k² log n) samples for the 𝓁_∞/𝓁₁ recovery in time O(nk log² n), and a deterministic collection of O(k² log² n) samples for the 𝓁_∞/𝓁₁ sparse recovery in time O(k² log³n). 2) We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein’s inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of the incoherent matrix. Our algorithms are nearly sample-optimal, since a lower bound of Ω(k² + k log n) is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of Ω(k² log n/ log k) is known for incoherent matrices.

Cite as

Yi Li and Vasileios Nakos. Deterministic Sparse Fourier Transform with an 𝓁_{∞} Guarantee. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 77:1-77:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ICALP.2020.77,
  author =	{Li, Yi and Nakos, Vasileios},
  title =	{{Deterministic Sparse Fourier Transform with an 𝓁\underline\{∞\} Guarantee}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{77:1--77:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.77},
  URN =		{urn:nbn:de:0030-drops-124844},
  doi =		{10.4230/LIPIcs.ICALP.2020.77},
  annote =	{Keywords: Fourier sparse recovery, derandomization, incoherent matrices}
}
Document
Track A: Algorithms, Complexity and Games
Faster Random k-CNF Satisfiability

Authors: Andrea Lincoln and Adam Yedidia


Abstract
We describe an algorithm to solve the problem of Boolean CNF-Satisfiability when the input formula is chosen randomly. We build upon the algorithms of Schöning (1999) and Dantsin et al. (2002). The Schöning algorithm works by trying many possible random assignments, and for each one searching systematically in the neighborhood of that assignment for a satisfying solution. Previous algorithms for this problem run in time O(2^(n (1- Ω(1)/k))). Our improvement is simple: we count how many clauses are satisfied by each randomly sampled assignment, and only search in the neighborhoods of assignments with abnormally many satisfied clauses. We show that assignments like these are significantly more likely to be near a satisfying assignment. This improvement saves a factor of 2^(n Ω(lg² k)/k), resulting in an overall runtime of O(2^(n (1- Ω(lg² k)/k))) for random k-SAT.
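The idea described above can be illustrated with a toy solver (illustrative only; the thresholds, walk lengths, and analysis of the paper are not reproduced): sample random assignments, keep only those that satisfy unusually many clauses, and run a Schöning-style random walk from those promising starts.

```python
import random

def satisfied(clauses, a):
    """Number of clauses satisfied by assignment a (a maps variable -> bool; literals are +/- ints)."""
    return sum(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

def schoening_walk(clauses, a, steps, rng):
    """Local search: repeatedly flip a variable of a random unsatisfied clause."""
    a = dict(a)
    for _ in range(steps):
        unsat = [c for c in clauses if not any(a[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return a
        var = abs(rng.choice(rng.choice(unsat)))
        a[var] = not a[var]
    return None

def biased_restart_solver(clauses, n, tries, threshold, steps, rng):
    for _ in range(tries):
        a = {v: rng.random() < 0.5 for v in range(1, n + 1)}
        if satisfied(clauses, a) >= threshold:       # only search near unusually good samples
            result = schoening_walk(clauses, a, steps, rng)
            if result is not None:
                return result
    return None

clauses = [(1, 2, 3), (-1, 2, 4), (1, -2, -3), (-1, -2, 4), (2, 3, -4), (-2, -3, -4), (1, 3, 4)]
print(biased_restart_solver(clauses, n=4, tries=500, threshold=6, steps=50,
                            rng=random.Random(1)))   # a satisfying assignment, or None if every try fails
```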

Cite as

Andrea Lincoln and Adam Yedidia. Faster Random k-CNF Satisfiability. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 78:1-78:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{lincoln_et_al:LIPIcs.ICALP.2020.78,
  author =	{Lincoln, Andrea and Yedidia, Adam},
  title =	{{Faster Random k-CNF Satisfiability}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{78:1--78:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.78},
  URN =		{urn:nbn:de:0030-drops-124857},
  doi =		{10.4230/LIPIcs.ICALP.2020.78},
  annote =	{Keywords: Random k-SAT, Average-Case, Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Succinct Filters for Sets of Unknown Sizes

Authors: Mingmou Liu, Yitong Yin, and Huacheng Yu


Abstract
The membership problem asks to maintain a set S ⊆ [u], supporting insertions and membership queries, i.e., testing if a given element is in the set. A data structure that computes exact answers is called a dictionary. When a (small) false positive rate ε is allowed, the data structure is called a filter. The space usages of the standard dictionaries or filters usually depend on the upper bound on the size of S, while the actual set can be much smaller. Pagh, Segev and Wieder [Pagh et al., 2013] were the first to study filters with varying space usage based on the current |S|. They showed that, in order to match the space with the current set size n = |S|, any filter data structure must use (1-o(1))n(log(1/ε)+(1-O(ε))log log n) bits, in contrast to the well-known lower bound of N log(1/ε) bits, where N is an upper bound on |S|. They also presented a data structure with almost optimal space of (1+o(1))n(log(1/ε)+O(log log n)) bits provided that n > u^0.001, with expected amortized constant insertion time and worst-case constant lookup time. In this work, we present a filter data structure with improvements in two aspects: - it has constant worst-case time for all insertions and lookups with high probability; - it uses space (1+o(1))n(log (1/ε)+log log n) bits when n > u^0.001, achieving optimal leading constant for all ε = o(1). We also present a dictionary that uses (1+o(1))nlog(u/n) bits of space, matching the optimal space in terms of the current size, and performs all operations in constant time with high probability.
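For readers unfamiliar with filters, a standard Bloom filter (not the data structure of this paper) makes the object concrete: it answers membership with false positive rate about ε using roughly log₂(1/ε) hash evaluations per key, but its size must be provisioned for an upper bound on |S| rather than adapting to the current size n, which is exactly the gap the paper addresses.

```python
import hashlib, math

class BloomFilter:
    def __init__(self, capacity, eps):
        self.m = max(1, math.ceil(-capacity * math.log(eps) / (math.log(2) ** 2)))  # number of bits
        self.k = max(1, round(-math.log(eps) / math.log(2)))                        # number of hash functions
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all((self.bits[pos // 8] >> (pos % 8)) & 1 for pos in self._positions(key))

bf = BloomFilter(capacity=1000, eps=0.01)
bf.add("alice")
print("alice" in bf, "bob" in bf)   # True, and False except with probability about 0.01
```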

Cite as

Mingmou Liu, Yitong Yin, and Huacheng Yu. Succinct Filters for Sets of Unknown Sizes. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 79:1-79:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ICALP.2020.79,
  author =	{Liu, Mingmou and Yin, Yitong and Yu, Huacheng},
  title =	{{Succinct Filters for Sets of Unknown Sizes}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{79:1--79:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.79},
  URN =		{urn:nbn:de:0030-drops-124867},
  doi =		{10.4230/LIPIcs.ICALP.2020.79},
  annote =	{Keywords: Bloom filters, Data structures, Approximate set membership, Dictionaries}
}
Document
Track A: Algorithms, Complexity and Games
A (2 + ε)-Factor Approximation Algorithm for Split Vertex Deletion

Authors: Daniel Lokshtanov, Pranabendu Misra, Fahad Panolan, Geevarghese Philip, and Saket Saurabh


Abstract
In the Split Vertex Deletion (SVD) problem, the input is an n-vertex undirected graph G and a weight function w: V(G) → ℕ, and the objective is to find a minimum weight subset S of vertices such that G-S is a split graph (i.e., there is a bipartition of V(G-S) = C ⊎ I such that C is a clique and I is an independent set in G-S). This problem is a special case of 5-Hitting Set and consequently, there is a simple factor-5 approximation algorithm for it. On the negative side, it is easy to show that the problem does not admit a polynomial time (2-δ)-approximation algorithm, for any fixed δ > 0, unless the Unique Games Conjecture fails. We start by giving a simple quasipolynomial time (n^O(log n)) factor 2-approximation algorithm for SVD using the notion of clique-independent set separating collection. Thus, on the one hand SVD admits a factor 2-approximation in quasipolynomial time, and on the other hand this approximation factor cannot be improved assuming UGC. It naturally leads to the following question: Can SVD be 2-approximated in polynomial time? In this work we almost close this gap and prove that for any ε > 0, there is an n^O(log 1/ε)-time 2(1+ε)-approximation algorithm.

Cite as

Daniel Lokshtanov, Pranabendu Misra, Fahad Panolan, Geevarghese Philip, and Saket Saurabh. A (2 + ε)-Factor Approximation Algorithm for Split Vertex Deletion. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 80:1-80:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.ICALP.2020.80,
  author =	{Lokshtanov, Daniel and Misra, Pranabendu and Panolan, Fahad and Philip, Geevarghese and Saurabh, Saket},
  title =	{{A (2 + \epsilon)-Factor Approximation Algorithm for Split Vertex Deletion}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{80:1--80:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.80},
  URN =		{urn:nbn:de:0030-drops-124879},
  doi =		{10.4230/LIPIcs.ICALP.2020.80},
  annote =	{Keywords: Approximation Algorithms, Graph Algorithms, Split Vertex Deletion}
}
Document
Track A: Algorithms, Complexity and Games
Near Optimal Algorithm for the Directed Single Source Replacement Paths Problem

Authors: Shiri Chechik and Ofer Magen


Abstract
In the Single Source Replacement Paths (SSRP) problem we are given a graph G = (V, E), and a shortest paths tree K̂ rooted at a node s, and the goal is to output for every node t ∈ V and for every edge e in K̂ the length of the shortest path from s to t avoiding e. We present an Õ(m√n + n²) time randomized combinatorial algorithm for unweighted directed graphs. Previously, such a bound was known in the directed case only for the seemingly easier replacement path problem, where both the source and the target nodes are fixed. Our new upper bound for this problem matches the existing conditional combinatorial lower bounds. Hence, (assuming these conditional lower bounds) our result is essentially optimal and completes the picture of the SSRP problem in the combinatorial setting. Our algorithm naturally extends to the case of small, rational edge weights. In the full version of the paper, we strengthen the existing conditional lower bounds in this case by showing that any O(mn^(1/2-ε)) time (combinatorial or algebraic) algorithm for some fixed ε > 0 yields a truly sub-cubic algorithm for the weighted All Pairs Shortest Paths problem (previously such a bound was known only for the combinatorial setting).
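To pin down the output of SSRP, here is the naive unweighted baseline (not the paper's algorithm): for each edge e of the shortest-path tree, delete e and rerun BFS from s. This costs Θ(nm) overall, which is what the Õ(m√n + n²) bound improves upon.

```python
from collections import deque

def bfs_dist(adj, s, banned_edge=None):
    """Unweighted shortest-path distances from s, ignoring one (directed) edge if given."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if (u, v) == banned_edge or v in dist:
                continue
            dist[v] = dist[u] + 1
            queue.append(v)
    return dist

def ssrp_naive(adj, s, tree_edges):
    """For every shortest-path-tree edge e: distances from s in G - e (a missing key means unreachable)."""
    return {e: bfs_dist(adj, s, banned_edge=e) for e in tree_edges}

adj = {0: [1, 2], 1: [2], 2: [3], 3: []}               # directed, unweighted
print(ssrp_naive(adj, 0, tree_edges=[(0, 1), (0, 2), (2, 3)]))
```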

Cite as

Shiri Chechik and Ofer Magen. Near Optimal Algorithm for the Directed Single Source Replacement Paths Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 81:1-81:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2020.81,
  author =	{Chechik, Shiri and Magen, Ofer},
  title =	{{Near Optimal Algorithm for the Directed Single Source Replacement Paths Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{81:1--81:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.81},
  URN =		{urn:nbn:de:0030-drops-124886},
  doi =		{10.4230/LIPIcs.ICALP.2020.81},
  annote =	{Keywords: Fault tolerance, Replacement Paths, Combinatorial algorithms, Conditional lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Quantum Distributed Complexity of Set Disjointness on a Line

Authors: Frédéric Magniez and Ashwin Nayak


Abstract
Given x,y ∈ {0,1}ⁿ, Set Disjointness consists in deciding whether x_i = y_i = 1 for some index i ∈ [n]. We study the problem of computing this function in a distributed computing scenario in which the inputs x and y are given to the processors at the two extremities of a path of length d. Each vertex of the path has a quantum processor that can communicate with each of its neighbours by exchanging O(log n) qubits per round. We are interested in the number of rounds required for computing Set Disjointness with constant probability bounded away from 1/2. We call this problem "Set Disjointness on a Line". Set Disjointness on a Line was introduced by Le Gall and Magniez [Le Gall and Magniez, 2018] for proving lower bounds on the quantum distributed complexity of computing the diameter of an arbitrary network in the CONGEST model. However, they were only able to provide a lower bound when the local memory used by the processors on the intermediate vertices of the path is severely limited. More precisely, their bound applies only when the local memory of each intermediate processor consists of O(log n) qubits. In this work, we prove an unconditional lower bound of Ω̃(∛{n d²} + √n) rounds for Set Disjointness on a Line with d + 1 processors. This is the first non-trivial lower bound when there is no restriction on the memory used by the processors. The result gives us a new lower bound of Ω̃ (∛{nδ²} + √n) on the number of rounds required for computing the diameter δ of any n-node network with quantum messages of size O(log n) in the CONGEST model. We draw a connection between the distributed computing scenario above and a new model of query complexity. In this model, an algorithm computing a bi-variate function f (such as Set Disjointness) has access to the inputs x and y through two separate oracles 𝒪_x and 𝒪_y, respectively. The restriction is that the algorithm is required to alternately make d queries to 𝒪_x and d queries to 𝒪_y, with input-independent computation in between queries. The model reflects a "switching delay" of d queries between a "round" of queries to x and the following "round" of queries to y. The technique we use for deriving the round lower bound for Set Disjointness on a Line also applies to this query model. We provide an algorithm for Set Disjointness in this query model with query complexity that matches the round lower bound stated above, up to a polylogarithmic factor. In this sense, the round lower bound we show for Set Disjointness on a Line is optimal.

Cite as

Frédéric Magniez and Ashwin Nayak. Quantum Distributed Complexity of Set Disjointness on a Line. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 82:1-82:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{magniez_et_al:LIPIcs.ICALP.2020.82,
  author =	{Magniez, Fr\'{e}d\'{e}ric and Nayak, Ashwin},
  title =	{{Quantum Distributed Complexity of Set Disjointness on a Line}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{82:1--82:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.82},
  URN =		{urn:nbn:de:0030-drops-124892},
  doi =		{10.4230/LIPIcs.ICALP.2020.82},
  annote =	{Keywords: Quantum distributed computing, Set Disjointness, communication complexity, query complexity}
}
Document
Track A: Algorithms, Complexity and Games
Can Verifiable Delay Functions Be Based on Random Oracles?

Authors: Mohammad Mahmoody, Caleb Smith, and David J. Wu


Abstract
Boneh, Bonneau, Bünz, and Fisch (CRYPTO 2018) recently introduced the notion of a verifiable delay function (VDF). VDFs are functions that take a long sequential time T to compute, but whose outputs y := Eval(x) can be efficiently verified (possibly given a proof π) in time t ≪ T (e.g., t = poly(λ, log T) where λ is the security parameter). The first security requirement on a VDF, called uniqueness, is that no polynomial-time algorithm can find a convincing proof π' that verifies for an input x and a different output y' ≠ y. The second security requirement, called sequentiality, is that no polynomial-time algorithm running in time σ < T for some parameter σ (e.g., σ = T^{1/10}) can compute y, even with poly(T,λ) many parallel processors. Starting from the work of Boneh et al., there are now multiple constructions of VDFs from various algebraic assumptions. In this work, we study whether VDFs can be constructed from ideal hash functions in a black-box way, as modeled in the random oracle model (ROM). In the ROM, we measure the running time by the number of oracle queries and the sequentiality by the number of rounds of oracle queries. We rule out two classes of constructions of VDFs in the ROM: - We show that VDFs satisfying perfect uniqueness (i.e., VDFs where no different convincing solution y' ≠ y exists) cannot be constructed in the ROM. More formally, we give an attacker that finds the solution y in ≈ t rounds of queries, asking only poly(T) queries in total. - We also rule out tight verifiable delay functions in the ROM. Tight verifiable delay functions, recently studied by Döttling, Garg, Malavolta, and Vasudevan (ePrint Report 2019), require sequentiality for σ ≈ T-T^ρ for some constant 0 < ρ < 1. More generally, our lower bound also applies to proofs of sequential work (i.e., VDFs without the uniqueness property), even in the private verification setting, and sequentiality σ > T-(T)/(2t) for a concrete verification time t.
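To fix the syntax (Eval, Verify) and the role of the sequential parameter T, here is a toy random-oracle-style sketch in Python, with SHA-256 standing in for the oracle. This is not a VDF: verification simply recomputes, so it lacks the fast verification (t ≪ T) that the definition requires; it only illustrates the interface.

import hashlib

def eval_vdf(x: bytes, T: int) -> bytes:
    """Sequential computation: T chained oracle (hash) queries."""
    y = x
    for _ in range(T):
        y = hashlib.sha256(y).digest()
    return y

def verify_vdf(x: bytes, y: bytes, T: int) -> bool:
    """Placeholder verifier; a real VDF verifies in time poly(lambda, log T)."""
    return eval_vdf(x, T) == y

assert verify_vdf(b"input", eval_vdf(b"input", 1000), 1000)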

Cite as

Mohammad Mahmoody, Caleb Smith, and David J. Wu. Can Verifiable Delay Functions Be Based on Random Oracles?. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{mahmoody_et_al:LIPIcs.ICALP.2020.83,
  author =	{Mahmoody, Mohammad and Smith, Caleb and Wu, David J.},
  title =	{{Can Verifiable Delay Functions Be Based on Random Oracles?}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.83},
  URN =		{urn:nbn:de:0030-drops-124907},
  doi =		{10.4230/LIPIcs.ICALP.2020.83},
  annote =	{Keywords: verifiable delay function, lower bound, random oracle model}
}
Document
Track A: Algorithms, Complexity and Games
On the Two-Dimensional Knapsack Problem for Convex Polygons

Authors: Arturo Merino and Andreas Wiese


Abstract
We study the two-dimensional geometric knapsack problem for convex polygons. Given a set of weighted convex polygons and a square knapsack, the goal is to select the most profitable subset of the given polygons that fits non-overlappingly into the knapsack. We allow the polygons to be rotated by arbitrary angles. We present a quasi-polynomial time O(1)-approximation algorithm for the general case and a polynomial time O(1)-approximation algorithm if all input polygons are triangles, both assuming polynomially bounded integral input data. Also, we give a quasi-polynomial time algorithm that computes a solution of optimal weight under resource augmentation, i.e., we allow the size of the knapsack to be increased by a factor of 1+δ for some δ > 0 but compare ourselves with the optimal solution for the original knapsack. To the best of our knowledge, these are the first results for two-dimensional geometric knapsack in which the input objects are more general than axis-parallel rectangles or circles and in which the input polygons can be rotated by arbitrary angles.
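A small helper, not from the paper, illustrating the role of arbitrary rotations: it checks whether a single convex polygon fits into a square knapsack of side L after being rotated by a given angle, by comparing the rotated bounding box against the knapsack.

import math

def fits_after_rotation(vertices, angle, L):
    c, s = math.cos(angle), math.sin(angle)
    xs = [c * x - s * y for x, y in vertices]
    ys = [s * x + c * y for x, y in vertices]
    return (max(xs) - min(xs)) <= L and (max(ys) - min(ys)) <= L

# A unit square rotated by 45 degrees needs a knapsack of side sqrt(2).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert not fits_after_rotation(square, math.pi / 4, 1.0)
assert fits_after_rotation(square, math.pi / 4, 1.5)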

Cite as

Arturo Merino and Andreas Wiese. On the Two-Dimensional Knapsack Problem for Convex Polygons. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 84:1-84:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{merino_et_al:LIPIcs.ICALP.2020.84,
  author =	{Merino, Arturo and Wiese, Andreas},
  title =	{{On the Two-Dimensional Knapsack Problem for Convex Polygons}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{84:1--84:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.84},
  URN =		{urn:nbn:de:0030-drops-124916},
  doi =		{10.4230/LIPIcs.ICALP.2020.84},
  annote =	{Keywords: Approximation algorithms, geometric knapsack problem, polygons, rotation}
}
Document
Track A: Algorithms, Complexity and Games
Proportionally Fair Clustering Revisited

Authors: Evi Micha and Nisarg Shah


Abstract
In this work, we study fairness in centroid clustering. In this problem, k cluster centers must be placed given n points in a metric space, and the cost to each point is its distance to the nearest cluster center. Recent work of Chen et al. [Chen et al., 2019] introduces the notion of a proportionally fair clustering, in which no group of at least n/k points can find a new cluster center which provides lower cost to each member of the group. They propose a greedy capture algorithm which provides a 1+√2 approximation of proportional fairness for any metric space, and derive generalization bounds for learning proportionally fair clustering from samples in the case where a cluster center can only be placed at one of finitely many given locations in the metric space. We focus on the case where cluster centers can be placed anywhere in the (usually infinite) metric space. In the case of the L² distance metric over ℝ^t, we show that the approximation ratio of greedy capture improves to 2. We also show that this is due to a special property of the L² distance; for the L¹ and L^∞ distances, the approximation ratio remains 1+√2. We provide universal lower bounds which apply to all algorithms. We also consider metric spaces defined on graphs. For trees, we show that an exact proportionally fair clustering always exists and provide an efficient algorithm to find one. The corresponding question for general graphs remains open. Finally, we show that for the L² distance, checking whether a proportionally fair clustering exists and implementing greedy capture over an infinite metric space are NP-hard problems, but (approximately) solvable in special cases. We also derive generalization bounds which show that an approximately proportionally fair clustering for a large number of points can be learned from a small number of samples. Our work advances the understanding of proportional fairness in clustering, and points out many avenues for future work.
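A brute-force fairness check, written by us for illustration only: given the points, the opened centers, and a finite set of candidate center locations, it reports whether some candidate would be preferred by a coalition of at least ⌈n/k⌉ points, i.e., whether exact proportional fairness is violated over those candidates.

import math

def violates_proportionality(points, centers, candidates, k):
    n = len(points)
    need = math.ceil(n / k)
    d = math.dist
    for c in candidates:
        happier = sum(1 for p in points
                      if d(p, c) < min(d(p, s) for s in centers))
        if happier >= need:
            return True
    return False

points = [(0, 0), (0, 1), (10, 0), (10, 1)]
# With one center far from the left group, the left pair (n/k = 2 points)
# would all prefer a candidate at (0, 0.5): proportionality is violated.
assert violates_proportionality(points, centers=[(10, 0.5)],
                                candidates=[(0, 0.5)], k=2)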

Cite as

Evi Micha and Nisarg Shah. Proportionally Fair Clustering Revisited. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 85:1-85:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{micha_et_al:LIPIcs.ICALP.2020.85,
  author =	{Micha, Evi and Shah, Nisarg},
  title =	{{Proportionally Fair Clustering Revisited}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{85:1--85:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.85},
  URN =		{urn:nbn:de:0030-drops-124923},
  doi =		{10.4230/LIPIcs.ICALP.2020.85},
  annote =	{Keywords: Fairness, Clustering, Facility location}
}
Document
Track A: Algorithms, Complexity and Games
Breaking the Barrier of 2 for the Storage Allocation Problem

Authors: Tobias Mömke and Andreas Wiese


Abstract
Packing problems are an important class of optimization problems. Probably the most well-known problem of this type is knapsack, and many generalizations of it have been studied in the literature, such as Two-dimensional Geometric Knapsack (2DKP) and Unsplittable Flow on a Path (UFP). For the latter two problems, the first polynomial time approximation algorithms with approximation ratios better than 2 were presented recently [Gálvez et al., FOCS 2017][Grandoni et al., STOC 2018]. In this paper we break the barrier of 2 for the Storage Allocation Problem (SAP), a problem which combines properties of 2DKP and UFP. In SAP, we are given a path with capacitated edges and a set of tasks where each task has a start vertex, an end vertex, a size, and a profit. We seek to select the most profitable set of tasks that we can draw as non-overlapping rectangles underneath the capacity profile of the edges, where the height of each rectangle equals the size of the corresponding task. SAP appears naturally in settings of allocating resources like memory, bandwidth, etc. where each request needs a contiguous portion of the resource. The best known polynomial time approximation algorithm for SAP has an approximation ratio of 2+ε [Mömke and Wiese, ICALP 2015] and no better quasi-polynomial time algorithm is known. We present a polynomial time (63/32+ε) < 1.969-approximation algorithm for the important case of uniform edge capacities and a quasi-polynomial time (1.997+ε)-approximation algorithm for non-uniform quasi-polynomially bounded edge capacities. Key to our results are building blocks consisting of stair-blocks, jammed tasks, and boxes that we use to construct profitable solutions and which allow us to compute solutions of these types efficiently. Finally, using our techniques we show that under slight resource augmentation we can obtain even approximation ratios of 3/2+ε in polynomial time and 1+ε in quasi-polynomial time, both for arbitrary edge capacities.
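To make the SAP constraints concrete, here is an illustrative feasibility checker (ours, not the paper's). A drawn task is a tuple (start, end, size, height): it occupies the rectangle [start, end) × [height, height + size), must stay below the capacity of every edge it spans, and must not overlap other drawn tasks.

def sap_feasible(capacities, drawn):
    for (s1, e1, sz1, h1) in drawn:
        if any(h1 + sz1 > capacities[e] for e in range(s1, e1)):
            return False  # task exceeds the capacity profile on some edge
    for i, (s1, e1, sz1, h1) in enumerate(drawn):
        for (s2, e2, sz2, h2) in drawn[i + 1:]:
            if s1 < e2 and s2 < e1 and h1 < h2 + sz2 and h2 < h1 + sz1:
                return False  # two rectangles overlap
    return True

caps = [3, 3, 2]  # capacities of the three edges of a path on four vertices
# Second task spans edge 2 where height 2 + size 1 = 3 exceeds capacity 2.
assert sap_feasible(caps, [(0, 2, 2, 0), (0, 3, 1, 2)]) is False
assert sap_feasible(caps, [(0, 2, 2, 0), (2, 3, 1, 0)])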

Cite as

Tobias Mömke and Andreas Wiese. Breaking the Barrier of 2 for the Storage Allocation Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 86:1-86:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{momke_et_al:LIPIcs.ICALP.2020.86,
  author =	{M\"{o}mke, Tobias and Wiese, Andreas},
  title =	{{Breaking the Barrier of 2 for the Storage Allocation Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{86:1--86:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.86},
  URN =		{urn:nbn:de:0030-drops-124931},
  doi =		{10.4230/LIPIcs.ICALP.2020.86},
  annote =	{Keywords: Approximation Algorithms, Resource Allocation, Dynamic Programming}
}
Document
Track A: Algorithms, Complexity and Games
On the Complexity of Zero Gap MIP*

Authors: Hamoon Mousavi, Seyed Sajjad Nezhadi, and Henry Yuen


Abstract
The class MIP^* is the set of languages decidable by multiprover interactive proofs with quantum entangled provers. It was recently shown by Ji, Natarajan, Vidick, Wright and Yuen that MIP^* is equal to RE, the set of recursively enumerable languages. In particular this shows that the complexity of approximating the quantum value of a non-local game G is equivalent to the complexity of the Halting problem. In this paper we investigate the complexity of deciding whether the quantum value of a non-local game G is exactly 1. This problem corresponds to a complexity class that we call zero gap MIP^*, denoted by MIP₀^*, where there is no promise gap between the verifier’s acceptance probabilities in the YES and NO cases. We prove that MIP₀^* extends beyond the first level of the arithmetical hierarchy (which includes RE and its complement coRE), and in fact is equal to Π₂⁰, the class of languages that can be decided by quantified formulas of the form ∀ y ∃ z R(x,y,z). Combined with the previously known result that MIP₀^{co} (the commuting operator variant of MIP₀^*) is equal to coRE, our result further highlights the fascinating connection between various models of quantum multiprover interactive proofs and different classes in computability theory.

Cite as

Hamoon Mousavi, Seyed Sajjad Nezhadi, and Henry Yuen. On the Complexity of Zero Gap MIP*. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 87:1-87:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{mousavi_et_al:LIPIcs.ICALP.2020.87,
  author =	{Mousavi, Hamoon and Nezhadi, Seyed Sajjad and Yuen, Henry},
  title =	{{On the Complexity of Zero Gap MIP*}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{87:1--87:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.87},
  URN =		{urn:nbn:de:0030-drops-124940},
  doi =		{10.4230/LIPIcs.ICALP.2020.87},
  annote =	{Keywords: Quantum Complexity, Multiprover Interactive Proofs, Computability Theory}
}
Document
Track A: Algorithms, Complexity and Games
Hypergraph Isomorphism for Groups with Restricted Composition Factors

Authors: Daniel Neuen


Abstract
We consider the isomorphism problem for hypergraphs taking as input two hypergraphs over the same set of vertices V and a permutation group Γ over domain V, and asking whether there is a permutation γ ∈ Γ that proves the two hypergraphs to be isomorphic. We show that for input groups, all of whose composition factors are isomorphic to a subgroup of the symmetric group on d points, this problem can be solved in time (n+m)^O((log d)^c) for some absolute constant c where n denotes the number of vertices and m the number of hyperedges. In particular, this gives the currently fastest isomorphism test for hypergraphs in general. The previous best algorithm for the above problem due to Schweitzer and Wiebking (STOC 2019) runs in time n^O(d)m^O(1). As an application of this result, we obtain, for example, an algorithm testing isomorphism of graphs excluding K_{3,h} as a minor in time n^O((log h)^c). In particular, this gives an isomorphism test for graphs of Euler genus at most g running in time n^O((log g)^c).
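As a reference point for the problem statement, a brute-force check (exponential, for tiny instances only, written by us): hypergraphs are sets of frozensets over the vertices 0..n-1 and the group Γ is given as an explicit list of permutations.

from itertools import permutations

def hypergraph_iso(H1, H2, Gamma):
    for gamma in Gamma:
        image = {frozenset(gamma[v] for v in edge) for edge in H1}
        if image == H2:
            return True
    return False

V = range(3)
Gamma = [tuple(p) for p in permutations(V)]   # the full symmetric group S_3
H1 = {frozenset({0, 1}), frozenset({1, 2})}
H2 = {frozenset({0, 1}), frozenset({0, 2})}
assert hypergraph_iso(H1, H2, Gamma)          # swapping vertices 0 and 1 works
assert not hypergraph_iso(H1, {frozenset({0, 1, 2})}, Gamma)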

Cite as

Daniel Neuen. Hypergraph Isomorphism for Groups with Restricted Composition Factors. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 88:1-88:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{neuen:LIPIcs.ICALP.2020.88,
  author =	{Neuen, Daniel},
  title =	{{Hypergraph Isomorphism for Groups with Restricted Composition Factors}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{88:1--88:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.88},
  URN =		{urn:nbn:de:0030-drops-124959},
  doi =		{10.4230/LIPIcs.ICALP.2020.88},
  annote =	{Keywords: graph isomorphism, groups with restricted composition factors, hypergraphs, bounded genus graphs}
}
Document
Track A: Algorithms, Complexity and Games
On Solving (Non)commutative Weighted Edmonds' Problem

Authors: Taihei Oki


Abstract
In this paper, we consider computing the degree of the Dieudonné determinant of a polynomial matrix A = A_l + A_{l-1} s + ⋯ + A₀ s^l, where each A_d is a linear symbolic matrix, i.e., entries of A_d are affine functions in symbols x₁, …, x_m over a field K. This problem is a natural "weighted analog" of Edmonds' problem, which is to compute the rank of a linear symbolic matrix. Regarding x₁, …, x_m as commutative or noncommutative, two different versions of weighted and unweighted Edmonds' problems can be considered. Deterministic polynomial-time algorithms are unknown for commutative Edmonds' problem and have been proposed recently for noncommutative Edmonds' problem. The main contribution of this paper is to establish a deterministic polynomial-time reduction from (non)commutative weighted Edmonds' problem to unweighted Edmonds' problem. Our reduction makes use of the discrete Legendre conjugacy between the integer sequences of the maximum degree of minors of A and the rank of linear symbolic matrices obtained from the coefficient matrices of A. Combined with algorithms for noncommutative Edmonds' problem, our reduction yields the first deterministic polynomial-time algorithm for noncommutative weighted Edmonds' problem with polynomial bit-length bounds. We also give a reduction of the degree computation of quasideterminants and its application to the degree computation of noncommutative rational functions.
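A tiny commutative example of the quantity in question, assuming SymPy is available: the degree in s of the determinant of a polynomial matrix A(s) with symbolic entries. The noncommutative analogue (the degree of the Dieudonné determinant) is what the paper computes.

import sympy as sp

s, x1, x2 = sp.symbols("s x1 x2")
A = sp.Matrix([[x1 + x2 * s, s**2],
               [1,           x1 * s]])
det = sp.expand(A.det())        # x1**2*s + x1*x2*s**2 - s**2
print(sp.degree(det, s))        # degree in s is 2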

Cite as

Taihei Oki. On Solving (Non)commutative Weighted Edmonds' Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 89:1-89:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{oki:LIPIcs.ICALP.2020.89,
  author =	{Oki, Taihei},
  title =	{{On Solving (Non)commutative Weighted Edmonds' Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{89:1--89:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.89},
  URN =		{urn:nbn:de:0030-drops-124963},
  doi =		{10.4230/LIPIcs.ICALP.2020.89},
  annote =	{Keywords: skew fields, Edmonds' problem, Dieudonn\'{e} determinant, degree computation, Smith - McMillan form, matrix expansion, discrete Legendre conjugacy}
}
Document
Track A: Algorithms, Complexity and Games
A General Stabilization Bound for Influence Propagation in Graphs

Authors: Pál András Papp and Roger Wattenhofer


Abstract
We study the stabilization time of a wide class of processes on graphs, in which each node can only switch its state if it is motivated to do so by at least a (1+λ)/2 fraction of its neighbors, for some 0 < λ < 1. Two examples of such processes are well-studied dynamically changing colorings in graphs: in majority processes, nodes switch to the most frequent color in their neighborhood, while in minority processes, nodes switch to the least frequent color in their neighborhood. We describe a non-elementary function f(λ), and we show that in the sequential model, the worst-case stabilization time of these processes can completely be characterized by f(λ). More precisely, we prove that for any ε > 0, O(n^(1+f(λ)+ε)) is an upper bound on the stabilization time of any proportional majority/minority process, and we also show that there are graph constructions where stabilization indeed takes Ω(n^(1+f(λ)-ε)) steps.
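A toy sequential simulation of such a proportional majority process (illustrative only, with one fixed update schedule): a node switches color when strictly more than a (1+lam)/2 fraction of its neighbours hold the other color, and we count switches until no node wants to switch.

def majority_process_steps(adj, colors, lam):
    colors = dict(colors)
    steps = 0
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            other = 1 - colors[v]
            if sum(colors[u] == other for u in nbrs) > (1 + lam) / 2 * len(nbrs):
                colors[v] = other
                steps += 1
                changed = True
    return steps, colors

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # triangle plus a pendant vertex
steps, final = majority_process_steps(adj, {0: 0, 1: 1, 2: 1, 3: 0}, lam=0.2)
print(steps, final)   # 2 switches, then everyone holds color 1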

Cite as

Pál András Papp and Roger Wattenhofer. A General Stabilization Bound for Influence Propagation in Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 90:1-90:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{papp_et_al:LIPIcs.ICALP.2020.90,
  author =	{Papp, P\'{a}l Andr\'{a}s and Wattenhofer, Roger},
  title =	{{A General Stabilization Bound for Influence Propagation in Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{90:1--90:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.90},
  URN =		{urn:nbn:de:0030-drops-124978},
  doi =		{10.4230/LIPIcs.ICALP.2020.90},
  annote =	{Keywords: Minority process, Majority process}
}
Document
Track A: Algorithms, Complexity and Games
Network-Aware Strategies in Financial Systems

Authors: Pál András Papp and Roger Wattenhofer


Abstract
We study the incentives of banks in a financial network, where the network consists of debt contracts and credit default swaps (CDSs) between banks. One of the most important questions in such a system is the problem of deciding which of the banks are in default, and how much of their liabilities these banks can pay. We study the payoff and preferences of the banks in the different solutions to this problem. We also introduce a more refined model which allows assigning priorities to payment obligations; this provides a more expressive and realistic model of real-life financial systems, while it always ensures the existence of a solution. The main focus of the paper is an analysis of the actions that a single bank can execute in a financial system in order to influence the outcome to its advantage. We show that removing an incoming debt, or donating funds to another bank can result in a single new solution that is strictly more favorable to the acting bank. We also show that increasing the bank’s external funds or modifying the priorities of outgoing payments cannot introduce a more favorable new solution into the system, but may allow the bank to remove some unfavorable solutions, or to increase its recovery rate. Finally, we show how the actions of two banks in a simple financial system can result in classical game theoretic situations like the prisoner’s dilemma or the dollar auction, demonstrating the wide expressive capability of the financial system model.
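A minimal clearing computation for the debt-only special case of such a model (no CDSs, no priorities; purely illustrative and not the paper's construction): starting from full payment and iterating the monotone update converges to the maximal clearing vector, where each bank pays the minimum of what it owes and its external funds plus incoming payments.

def clearing_vector(external, liabilities, iters=1000):
    n = len(external)
    owed = [sum(row) for row in liabilities]   # total owed by each bank
    pay = owed[:]                              # start from full payment
    for _ in range(iters):
        incoming = [sum(pay[j] * liabilities[j][i] / owed[j]
                        for j in range(n) if owed[j]) for i in range(n)]
        pay = [min(owed[i], external[i] + incoming[i]) for i in range(n)]
    return pay

# Bank 0 owes 2 to bank 1, bank 1 owes 1 to bank 0, nobody has external funds:
# the maximal clearing vector is [1, 1], so bank 0 is in default.
print(clearing_vector(external=[0, 0], liabilities=[[0, 2], [1, 0]]))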

Cite as

Pál András Papp and Roger Wattenhofer. Network-Aware Strategies in Financial Systems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 91:1-91:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{papp_et_al:LIPIcs.ICALP.2020.91,
  author =	{Papp, P\'{a}l Andr\'{a}s and Wattenhofer, Roger},
  title =	{{Network-Aware Strategies in Financial Systems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{91:1--91:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.91},
  URN =		{urn:nbn:de:0030-drops-124988},
  doi =		{10.4230/LIPIcs.ICALP.2020.91},
  annote =	{Keywords: Financial network, credit default swap, creditor priority, clearing problem, prisoner’s dilemma, dollar auction}
}
Document
Track A: Algorithms, Complexity and Games
Nondeterministic and Randomized Boolean Hierarchies in Communication Complexity

Authors: Toniann Pitassi, Morgan Shirley, and Thomas Watson


Abstract
We investigate the power of randomness in two-party communication complexity. In particular, we study the model where the parties can make a constant number of queries to a function with an efficient one-sided-error randomized protocol. The complexity classes defined by this model comprise the Randomized Boolean Hierarchy, which is analogous to the Boolean Hierarchy but defined with one-sided-error randomness instead of nondeterminism. Our techniques connect the Nondeterministic and Randomized Boolean Hierarchies, and we provide a complete picture of the relationships among complexity classes within and across these two hierarchies. In particular, we prove that the Randomized Boolean Hierarchy does not collapse, and we prove a query-to-communication lifting theorem for all levels of the Nondeterministic Boolean Hierarchy and use it to resolve an open problem stated in the paper by Halstenberg and Reischuk (CCC 1988) which initiated the study of this hierarchy.

Cite as

Toniann Pitassi, Morgan Shirley, and Thomas Watson. Nondeterministic and Randomized Boolean Hierarchies in Communication Complexity. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 92:1-92:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{pitassi_et_al:LIPIcs.ICALP.2020.92,
  author =	{Pitassi, Toniann and Shirley, Morgan and Watson, Thomas},
  title =	{{Nondeterministic and Randomized Boolean Hierarchies in Communication Complexity}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{92:1--92:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.92},
  URN =		{urn:nbn:de:0030-drops-124992},
  doi =		{10.4230/LIPIcs.ICALP.2020.92},
  annote =	{Keywords: Boolean hierarchies, lifting theorems, query complexity}
}
Document
Track A: Algorithms, Complexity and Games
A Spectral Bound on Hypergraph Discrepancy

Authors: Aditya Potukuchi


Abstract
Let ℋ be a t-regular hypergraph on n vertices and m edges. Let M be the m × n incidence matrix of ℋ and let λ = max_{v ∈ 𝟏^⟂} ‖Mv‖/‖v‖. We show that the discrepancy of ℋ is O(√t + λ). As a corollary, for every t, the discrepancy of a random t-regular hypergraph with n vertices and m ≥ n edges is almost surely O(√t) as n grows. The proof also gives a polynomial time algorithm that takes a hypergraph as input and outputs a coloring with the above guarantee.
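For reference, the discrepancy being bounded here is the following quantity, computed below by brute force for tiny hypergraphs (our illustration): the minimum over ±1 colourings of the maximum absolute colour sum over the edges.

from itertools import product

def discrepancy(n, edges):
    best = float("inf")
    for colouring in product((-1, 1), repeat=n):
        best = min(best, max(abs(sum(colouring[v] for v in e)) for e in edges))
    return best

# Three edges of size 3 over four vertices; odd edges force discrepancy >= 1.
print(discrepancy(4, [{0, 1, 2}, {1, 2, 3}, {0, 2, 3}]))   # 1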

Cite as

Aditya Potukuchi. A Spectral Bound on Hypergraph Discrepancy. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 93:1-93:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{potukuchi:LIPIcs.ICALP.2020.93,
  author =	{Potukuchi, Aditya},
  title =	{{A Spectral Bound on Hypergraph Discrepancy}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{93:1--93:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.93},
  URN =		{urn:nbn:de:0030-drops-125002},
  doi =		{10.4230/LIPIcs.ICALP.2020.93},
  annote =	{Keywords: Hypergraph discrepancy, Spectral methods, Beck-Fiala conjecture}
}
Document
Track A: Algorithms, Complexity and Games
Faster Dynamic Range Mode

Authors: Bryce Sandlund and Yinzhan Xu


Abstract
In the dynamic range mode problem, we are given a sequence a of length bounded by N and asked to support element insertion, deletion, and queries for the most frequent element of a contiguous subsequence of a. In this work, we devise a deterministic data structure that handles each operation in worst-case Õ(N^0.655994) time, thus breaking the O(N^{2/3}) per-operation time barrier for this problem. The data structure is achieved by combining the ideas in Williams and Xu (SODA 2020) for batch range mode with a novel data structure variant of the Min-Plus product.
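A naive baseline for the same operations (ours, for orientation only): O(1) updates and O(r-l) queries. The point of the paper is that much better worst-case trade-offs are possible, below the previous O(N^{2/3}) per-operation bound.

from collections import Counter

class NaiveRangeMode:
    def __init__(self, seq):
        self.a = list(seq)
    def insert(self, i, x):
        self.a.insert(i, x)
    def delete(self, i):
        del self.a[i]
    def mode(self, l, r):                  # most frequent element of a[l:r]
        return Counter(self.a[l:r]).most_common(1)[0][0]

rm = NaiveRangeMode([1, 2, 2, 3, 2, 1])
assert rm.mode(0, 6) == 2
rm.delete(2); rm.delete(1)                 # remove two of the 2's
assert rm.mode(0, 4) == 1                  # the sequence is now [1, 3, 2, 1]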

Cite as

Bryce Sandlund and Yinzhan Xu. Faster Dynamic Range Mode. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 94:1-94:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{sandlund_et_al:LIPIcs.ICALP.2020.94,
  author =	{Sandlund, Bryce and Xu, Yinzhan},
  title =	{{Faster Dynamic Range Mode}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{94:1--94:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.94},
  URN =		{urn:nbn:de:0030-drops-125018},
  doi =		{10.4230/LIPIcs.ICALP.2020.94},
  annote =	{Keywords: Range Mode, Min-Plus Product}
}
Document
Track A: Algorithms, Complexity and Games
An FPT-Algorithm for Recognizing k-Apices of Minor-Closed Graph Classes

Authors: Ignasi Sau, Giannos Stamoulis, and Dimitrios M. Thilikos


Abstract
Let 𝒢 be a graph class. We say that a graph G is a k-apex of 𝒢 if G contains a set S of at most k vertices such that G⧵S belongs to 𝒢. We prove that if 𝒢 is minor-closed, then there is an algorithm that either returns a set S certifying that G is a k-apex of 𝒢 or reports that such a set does not exist, in 2^{poly(k)}n³ time. Here poly is a polynomial function whose degree depends on the maximum size of a minor-obstruction of 𝒢, i.e., of a minor-minimal graph not belonging to 𝒢. In the special case where 𝒢 excludes some apex graph as a minor, we give an alternative algorithm running in 2^{poly(k)}n² time.
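A brute-force reference for the k-apex question, instantiated with the minor-closed class of planar graphs and assuming NetworkX is available (illustrative only; this runs in n^O(k) time, whereas the paper gives a 2^{poly(k)}n³ FPT algorithm).

from itertools import combinations
import networkx as nx

def is_k_apex_of_planar(G, k):
    for size in range(k + 1):
        for S in combinations(G.nodes, size):
            H = G.subgraph(set(G.nodes) - set(S))
            if nx.check_planarity(H)[0]:
                return True
    return False

K5 = nx.complete_graph(5)   # K_5 is non-planar, but deleting any vertex leaves K_4
assert not is_k_apex_of_planar(K5, 0)
assert is_k_apex_of_planar(K5, 1)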

Cite as

Ignasi Sau, Giannos Stamoulis, and Dimitrios M. Thilikos. An FPT-Algorithm for Recognizing k-Apices of Minor-Closed Graph Classes. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 95:1-95:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{sau_et_al:LIPIcs.ICALP.2020.95,
  author =	{Sau, Ignasi and Stamoulis, Giannos and Thilikos, Dimitrios M.},
  title =	{{An FPT-Algorithm for Recognizing k-Apices of Minor-Closed Graph Classes}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{95:1--95:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.95},
  URN =		{urn:nbn:de:0030-drops-125027},
  doi =		{10.4230/LIPIcs.ICALP.2020.95},
  annote =	{Keywords: Graph modification problems, irrelevant vertex technique, graph minors, parameterized algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Contraction: A Unified Perspective of Correlation Decay and Zero-Freeness of 2-Spin Systems

Authors: Shuai Shao and Yuxin Sun


Abstract
We study complex zeros of the partition function of 2-spin systems, viewed as a multivariate polynomial in terms of the edge interaction parameters and the uniform external field. We obtain new zero-free regions in which all these parameters are complex-valued. Crucially based on the zero-freeness, we are able to extend the existence of correlation decay to these complex regions from real parameters. As a consequence, we obtain an FPTAS for computing the partition function of 2-spin systems on graphs of bounded degree for these parameter settings. We introduce the contraction property as a unified sufficient condition to devise FPTAS via either Weitz’s algorithm or Barvinok’s algorithm. Our main technical contribution is a very simple but general approach to extend any real parameter of which the 2-spin system exhibits correlation decay to its complex neighborhood where the partition function is zero-free and correlation decay still exists. This result formally establishes the inherent connection between two distinct notions of phase transition for 2-spin systems: the existence of correlation decay and the zero-freeness of the partition function via a unified perspective, contraction.
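For orientation, a brute-force evaluation of a 2-spin partition function on a tiny graph, in one common parameterization (edge weights β for (0,0) edges, γ for (1,1) edges, 1 for mixed edges, and external field λ per spin-1 vertex); this is our illustration of the object whose zeros and correlation decay the paper studies.

from itertools import product

def partition_function(n, edges, beta, gamma, lam):
    Z = 0
    for sigma in product((0, 1), repeat=n):
        w = lam ** sum(sigma)
        for u, v in edges:
            if sigma[u] == sigma[v]:
                w *= beta if sigma[u] == 0 else gamma
        Z += w
    return Z

# A single edge: Z = beta + 2*lambda + gamma*lambda^2 = 1.2 + 2 + 0.8 = 4.
print(partition_function(2, [(0, 1)], beta=1.2, gamma=0.8, lam=1.0))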

Cite as

Shuai Shao and Yuxin Sun. Contraction: A Unified Perspective of Correlation Decay and Zero-Freeness of 2-Spin Systems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 96:1-96:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{shao_et_al:LIPIcs.ICALP.2020.96,
  author =	{Shao, Shuai and Sun, Yuxin},
  title =	{{Contraction: A Unified Perspective of Correlation Decay and Zero-Freeness of 2-Spin Systems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{96:1--96:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.96},
  URN =		{urn:nbn:de:0030-drops-125036},
  doi =		{10.4230/LIPIcs.ICALP.2020.96},
  annote =	{Keywords: 2-Spin system, Correlation decay, Zero-freeness, Phase transition, Contraction}
}
Document
Track A: Algorithms, Complexity and Games
Quasi-Majority Functional Voting on Expander Graphs

Authors: Nobutaka Shimizu and Takeharu Shiraga


Abstract
Consider a distributed graph where each vertex holds one of two distinct opinions. In this paper, we are interested in synchronous voting processes where each vertex updates its opinion according to a predefined common local updating rule. For example, each vertex adopts the majority opinion among 1) itself and two randomly picked neighbors in best-of-two or 2) three randomly picked neighbors in best-of-three. Previous works intensively studied specific rules including best-of-two and best-of-three individually. In this paper, we generalize and extend previous work on best-of-two and best-of-three on expander graphs by proposing a new model, quasi-majority functional voting. This new model contains best-of-two and best-of-three as special cases. We show that, on expander graphs with sufficiently large initial bias, any quasi-majority functional voting reaches consensus within O(log n) steps with high probability. Moreover, we show that, for any initial opinion configuration, any quasi-majority functional voting on expander graphs with higher expansion (e.g., the Erdős-Rényi graph G(n,p) with p = Ω(1/√n)) reaches consensus within O(log n) steps with high probability. Furthermore, we show that the consensus time of best-of-(2k+1) is O(log n/log k) for k = o(n/log n).
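A toy synchronous simulation of best-of-three, one special case of quasi-majority functional voting (our illustration; here the three neighbours are sampled independently with replacement, and all names are ours).

import random

def best_of_three_rounds(adj, opinions, max_rounds=10_000, seed=0):
    rng = random.Random(seed)
    for t in range(max_rounds):
        if len(set(opinions.values())) == 1:
            return t                       # consensus reached after t rounds
        opinions = {v: int(sum(opinions[rng.choice(adj[v])] for _ in range(3)) >= 2)
                    for v in adj}
    return max_rounds

n = 20
adj = {v: [u for u in range(n) if u != v] for v in range(n)}   # complete graph K_20
start = {v: int(v < 14) for v in range(n)}                     # initial bias 14 : 6
print(best_of_three_rounds(adj, start))                        # consensus in a few rounds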

Cite as

Nobutaka Shimizu and Takeharu Shiraga. Quasi-Majority Functional Voting on Expander Graphs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 97:1-97:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{shimizu_et_al:LIPIcs.ICALP.2020.97,
  author =	{Shimizu, Nobutaka and Shiraga, Takeharu},
  title =	{{Quasi-Majority Functional Voting on Expander Graphs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{97:1--97:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.97},
  URN =		{urn:nbn:de:0030-drops-125042},
  doi =		{10.4230/LIPIcs.ICALP.2020.97},
  annote =	{Keywords: Distributed voting, consensus problem, expander graph, Markov chain}
}
Document
Track A: Algorithms, Complexity and Games
Property Testing of LP-Type Problems

Authors: Rogers Epstein and Sandeep Silwal


Abstract
Given query access to a set of constraints S, we wish to quickly check if some objective function φ subject to these constraints is at most a given value k. We approach this problem using the framework of property testing, where our goal is to distinguish the case φ(S) ≤ k from the case that at least an ε fraction of the constraints in S need to be removed for φ(S) ≤ k to hold. We restrict our attention to the case where (S,φ) is an LP-Type problem; LP-Type problems form a rich family of combinatorial optimization problems with an inherent geometric structure. By utilizing a simple sampling procedure which has been used previously to study these problems, we are able to create property testers for any LP-Type problem whose query complexities are independent of the number of constraints. To the best of our knowledge, this is the first work that connects the area of LP-Type problems and property testing in a systematic way. Our results include property testers for a variety of LP-Type problems, both new ones and problems that have been studied previously, such as a tight upper bound on the query complexity of testing clusterability with one cluster, considered by Alon, Dar, Parnas, and Ron (FOCS 2000). We also supply a corresponding tight lower bound for this problem and other LP-Type problems using geometric constructions.
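A sketch of the sampling-based approach, instantiated with perhaps the simplest LP-type problem: φ(S) is the length of the shortest interval containing a set S of points on the line. Sample O(1/ε) constraints, solve on the sample, and accept iff the sampled optimum is at most k; since removing constraints cannot increase φ, yes-instances are never rejected. The constants and names below are ours, not the paper's.

import random

def test_interval_length(points, k, eps, seed=0):
    rng = random.Random(seed)
    sample = [rng.choice(points) for _ in range(int(4 / eps) + 1)]
    return max(sample) - min(sample) <= k

yes_instance = [0.0] * 90 + [0.5] * 10   # phi = 0.5 <= k: always accepted
far_instance = [0.0] * 50 + [5.0] * 50   # half the points must go: far from yes
print(test_interval_length(yes_instance, k=1.0, eps=0.1))   # True
print(test_interval_length(far_instance, k=1.0, eps=0.1))   # False (w.h.p.)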

Cite as

Rogers Epstein and Sandeep Silwal. Property Testing of LP-Type Problems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 98:1-98:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{epstein_et_al:LIPIcs.ICALP.2020.98,
  author =	{Epstein, Rogers and Silwal, Sandeep},
  title =	{{Property Testing of LP-Type Problems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{98:1--98:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.98},
  URN =		{urn:nbn:de:0030-drops-125056},
  doi =		{10.4230/LIPIcs.ICALP.2020.98},
  annote =	{Keywords: property testing, LP-Type problems, random sampling}
}
Document
Track A: Algorithms, Complexity and Games
Lower Bounds for Dynamic Distributed Task Allocation

Authors: Hsin-Hao Su and Nicole Wein


Abstract
We study the problem of distributed task allocation in multi-agent systems. Suppose there is a collection of agents, a collection of tasks, and a demand vector, which specifies the number of agents required to perform each task. The goal of the agents is to cooperatively allocate themselves to the tasks to satisfy the demand vector. We study the dynamic version of the problem where the demand vector changes over time. Here, the goal is to minimize the switching cost, which is the number of agents that change tasks in response to a change in the demand vector. The switching cost is an important metric since changing tasks may incur significant overhead. We study a mathematical formalization of the above problem introduced by Su, Su, Dornhaus, and Lynch [Su et al., 2017], which can be reformulated as a question of finding a low distortion embedding from symmetric difference to Hamming distance. In this model it is trivial to prove that the switching cost is at least 2. We present the first non-trivial lower bounds for the switching cost, by giving lower bounds of 3 and 4 for different ranges of the parameters.
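To illustrate the metric in question (our example, not the paper's construction): the switching cost between two task assignments is the Hamming distance between the assignment vectors, and it is always at least half the symmetric difference of the corresponding demand multisets, since each switching agent changes two task counts by one each.

from collections import Counter

def switching_cost(assign_old, assign_new):
    return sum(a != b for a, b in zip(assign_old, assign_new))

def demand_distance(assign_old, assign_new):
    diff = Counter(assign_old) - Counter(assign_new)
    diff += Counter(assign_new) - Counter(assign_old)
    return sum(diff.values())

old = ["t1", "t1", "t2", "t3"]
new = ["t1", "t2", "t2", "t2"]
# Prints 2 and 4: the cost meets the lower bound of 4/2 = 2.
print(switching_cost(old, new), demand_distance(old, new))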

Cite as

Hsin-Hao Su and Nicole Wein. Lower Bounds for Dynamic Distributed Task Allocation. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 99:1-99:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{su_et_al:LIPIcs.ICALP.2020.99,
  author =	{Su, Hsin-Hao and Wein, Nicole},
  title =	{{Lower Bounds for Dynamic Distributed Task Allocation}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{99:1--99:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.99},
  URN =		{urn:nbn:de:0030-drops-125063},
  doi =		{10.4230/LIPIcs.ICALP.2020.99},
  annote =	{Keywords: distributed task allocation, combinatorics, lower bounds, multi-agent systems, low-distortion embedding, dynamic algorithms, biological distributed algorithms}
}
Document
Track A: Algorithms, Complexity and Games
On the Degree of Boolean Functions as Polynomials over ℤ_m

Authors: Xiaoming Sun, Yuan Sun, Jiaheng Wang, Kewen Wu, Zhiyu Xia, and Yufan Zheng


Abstract
Polynomial representations of Boolean functions over various rings such as ℤ and ℤ_m have been studied since Minsky and Papert (1969). From then on, they have been employed in a large variety of areas including communication complexity, circuit complexity, learning theory, coding theory and so on. For any integer m ≥ 2, each Boolean function has a unique multilinear polynomial representation over the ring ℤ_m. The degree of such a polynomial is called the modulo-m degree, denoted deg_m(⋅). In this paper, we investigate lower bounds on the modulo-m degree of Boolean functions. When m = p^k (k ≥ 1) for some prime p, we give a tight lower bound deg_m(f) ≥ k(p-1) for any non-degenerate function f:{0,1}ⁿ → {0,1}, provided that n is sufficiently large. When m contains two different prime factors p and q, we give a nearly optimal lower bound deg_m(f) ≥ n/(2+1/(p-1)+1/(q-1)) for any symmetric function f:{0,1}ⁿ → {0,1}.
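The modulo-m degree can be computed directly for small n via Möbius inversion: the coefficient of ∏_{i∈S} x_i in the unique multilinear representation over ℤ_m is Σ_{T⊆S} (-1)^{|S|-|T|} f(1_T) mod m. A brute-force illustration (ours):

from itertools import combinations

def deg_mod_m(f, n, m):
    deg = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            c = 0
            for sub in range(size + 1):
                for T in combinations(S, sub):
                    x = tuple(1 if i in T else 0 for i in range(n))
                    c += (-1) ** (size - sub) * f(x)
            if c % m != 0:           # coefficient of prod_{i in S} x_i is nonzero mod m
                deg = max(deg, size)
    return deg

AND3 = lambda x: x[0] & x[1] & x[2]
XOR3 = lambda x: x[0] ^ x[1] ^ x[2]
print(deg_mod_m(AND3, 3, 6), deg_mod_m(XOR3, 3, 2), deg_mod_m(XOR3, 3, 4))   # 3 1 2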

Cite as

Xiaoming Sun, Yuan Sun, Jiaheng Wang, Kewen Wu, Zhiyu Xia, and Yufan Zheng. On the Degree of Boolean Functions as Polynomials over ℤ_m. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 100:1-100:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ICALP.2020.100,
  author =	{Sun, Xiaoming and Sun, Yuan and Wang, Jiaheng and Wu, Kewen and Xia, Zhiyu and Zheng, Yufan},
  title =	{{On the Degree of Boolean Functions as Polynomials over \mathbb{Z}\_m}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{100:1--100:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.100},
  URN =		{urn:nbn:de:0030-drops-125070},
  doi =		{10.4230/LIPIcs.ICALP.2020.100},
  annote =	{Keywords: Boolean function, polynomial, modular degree, Ramsey theory}
}
Document
Track A: Algorithms, Complexity and Games
On Quasipolynomial Multicut-Mimicking Networks and Kernelization of Multiway Cut Problems

Authors: Magnus Wahlström


Abstract
We show the existence of an exact mimicking network of k^O(log k) edges for minimum multicuts over a set of terminals in an undirected graph, where k is the total capacity of the terminals. Furthermore, if Small Set Expansion has an approximation algorithm with a ratio slightly better than Θ(log n), then a mimicking network of quasipolynomial size can be computed in polynomial time. As a consequence of the latter, several problems would have quasipolynomial kernels, including Edge Multiway Cut, Group Feedback Edge Set for an arbitrary group, 0-Extension for integer-weighted metrics, and Edge Multicut parameterized by the solution and the number of cut requests. The result works via a combination of the matroid-based irrelevant edge approach used in the kernel for s-Multiway Cut with a recursive decomposition and sparsification of the graph along sparse cuts. The main technical contribution is a matroid-based marking procedure that we can show will mark all non-irrelevant edges, assuming that the graph is sufficiently densely connected. The only part of the result that is not currently constructive and polynomial-time computable is the detection of such sparse cuts. This is the first progress on the kernelization of Multiway Cut problems since the kernel for s-Multiway Cut for constant value of s (Kratsch and Wahlström, FOCS 2012).

Cite as

Magnus Wahlström. On Quasipolynomial Multicut-Mimicking Networks and Kernelization of Multiway Cut Problems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 101:1-101:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{wahlstrom:LIPIcs.ICALP.2020.101,
  author =	{Wahlstr\"{o}m, Magnus},
  title =	{{On Quasipolynomial Multicut-Mimicking Networks and Kernelization of Multiway Cut Problems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{101:1--101:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.101},
  URN =		{urn:nbn:de:0030-drops-125082},
  doi =		{10.4230/LIPIcs.ICALP.2020.101},
  annote =	{Keywords: Multiway Cut, Kernelization, Small Set Expansion, Mimicking Networks}
}
Document
Track A: Algorithms, Complexity and Games
Hardness of Equations over Finite Solvable Groups Under the Exponential Time Hypothesis

Authors: Armin Weiß


Abstract
Goldmann and Russell (2002) initiated the study of the complexity of the equation satisfiability problem in finite groups by showing that it is in 𝖯 for nilpotent groups while it is 𝖭𝖯-complete for non-solvable groups. Since then, several results have appeared showing that the problem can be solved in polynomial time in certain solvable groups of Fitting length two. In this work, we present the first lower bounds for the equation satisfiability problem in finite solvable groups: under the assumption of the exponential time hypothesis, we show that it cannot be in 𝖯 for any group of Fitting length at least four and for certain groups of Fitting length three. Moreover, the same hardness result applies to the equation identity problem.
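For concreteness, a brute-force satisfiability check for a single equation over a small finite group (here S_3 represented by permutation tuples); this is our toy reference for the problem whose complexity the paper studies, not an algorithm from the paper.

from itertools import permutations, product

def compose(p, q):                     # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
identity = (0, 1, 2)

def satisfiable(word, num_vars):
    """word: list of ('var', i) or ('const', g); asks whether word == identity."""
    for assignment in product(S3, repeat=num_vars):
        g = identity
        for kind, val in word:
            g = compose(g, assignment[val] if kind == "var" else val)
        if g == identity:
            return True
    return False

# x * c * x = e with c a 3-cycle is satisfiable (take x = c, since c^3 = e),
# but with c a transposition it is not (transpositions have no square roots in S_3).
print(satisfiable([("var", 0), ("const", (1, 2, 0)), ("var", 0)], 1))   # True
print(satisfiable([("var", 0), ("const", (1, 0, 2)), ("var", 0)], 1))   # False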

Cite as

Armin Weiß. Hardness of Equations over Finite Solvable Groups Under the Exponential Time Hypothesis. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 102:1-102:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{wei:LIPIcs.ICALP.2020.102,
  author =	{Wei{\ss}, Armin},
  title =	{{Hardness of Equations over Finite Solvable Groups Under the Exponential Time Hypothesis}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{102:1--102:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.102},
  URN =		{urn:nbn:de:0030-drops-125093},
  doi =		{10.4230/LIPIcs.ICALP.2020.102},
  annote =	{Keywords: equations in groups, solvable groups, exponential time hypothesis}
}
Document
Track A: Algorithms, Complexity and Games
Graph Isomorphism in Quasipolynomial Time Parameterized by Treewidth

Authors: Daniel Wiebking


Abstract
We extend Babai’s quasipolynomial-time graph isomorphism test (STOC 2016) and develop a quasipolynomial-time algorithm for the multiple-coset isomorphism problem. The algorithm for the multiple-coset isomorphism problem allows us to exploit graph decompositions of the given input graphs within Babai’s group-theoretic framework. We use it to develop a graph isomorphism test that runs in time n^polylog(k), where n is the number of vertices, k is the minimum treewidth of the given graphs, and polylog(k) is some polynomial in log(k). Our result generalizes Babai’s quasipolynomial-time graph isomorphism test.

Cite as

Daniel Wiebking. Graph Isomorphism in Quasipolynomial Time Parameterized by Treewidth. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 103:1-103:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{wiebking:LIPIcs.ICALP.2020.103,
  author =	{Wiebking, Daniel},
  title =	{{Graph Isomorphism in Quasipolynomial Time Parameterized by Treewidth}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{103:1--103:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.103},
  URN =		{urn:nbn:de:0030-drops-125106},
  doi =		{10.4230/LIPIcs.ICALP.2020.103},
  annote =	{Keywords: Graph isomorphism, canonization, treewidth, hypergraphs}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Inapproximability for Steiner Orientation by Gap Amplification

Authors: Michał Włodarczyk


Abstract
In the k-Steiner Orientation problem, we are given a mixed graph, that is, with both directed and undirected edges, and a set of k terminal pairs. The goal is to find an orientation of the undirected edges that maximizes the number of terminal pairs for which there is a path from the source to the sink. The problem is known to be W[1]-hard when parameterized by k and hard to approximate up to some constant for FPT algorithms assuming Gap-ETH. On the other hand, no approximation factor better than 𝒪(k) is known. We show that k-Steiner Orientation is unlikely to admit an approximation algorithm with any constant factor, even within FPT running time. To obtain this result, we construct a self-reduction via a hashing-based gap amplification technique, which turns out to be useful even outside of the FPT paradigm. Precisely, we rule out any approximation factor of the form (log k)^o(1) for FPT algorithms (assuming FPT ≠ W[1]) and (log n)^o(1) for purely polynomial-time algorithms (assuming that the class W[1] does not admit randomized FPT algorithms). This constitutes a novel inapproximability result for polynomial-time algorithms obtained via tools from FPT theory. Moreover, we prove k-Steiner Orientation to belong to W[1], which entails W[1]-completeness of (log k)^o(1)-approximation for k-Steiner Orientation. This provides an example of a natural approximation task that is complete in a parameterized complexity class. Finally, we apply our technique to the maximization version of directed multicut - Max (k,p)-Directed Multicut - where we are given a directed graph, k terminal pairs, and a budget p. The goal is to maximize the number of separated terminal pairs by removing p edges. We present a simple proof that the problem admits no FPT approximation with factor 𝒪(k^(1/2 - ε)) (assuming FPT ≠ W[1]) and no polynomial-time approximation with ratio 𝒪(|E(G)|^(1/2 - ε)) (assuming NP ⊈ co-RP).
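A brute-force reference for k-Steiner Orientation, assuming NetworkX is available (exponential in the number of undirected edges; illustrative only): try every orientation and count terminal pairs (s, t) with a directed s → t path.

from itertools import product
import networkx as nx

def best_orientation(directed_edges, undirected_edges, pairs):
    best = 0
    for choice in product((0, 1), repeat=len(undirected_edges)):
        G = nx.DiGraph(directed_edges)
        for (u, v), c in zip(undirected_edges, choice):
            G.add_edge(u, v) if c == 0 else G.add_edge(v, u)
        best = max(best, sum(nx.has_path(G, s, t) for s, t in pairs
                             if s in G and t in G))
    return best

# Two terminal pairs competing for one undirected edge {1, 2}: only one can win.
print(best_orientation([(0, 1), (2, 3)], [(1, 2)], [(0, 3), (2, 1)]))   # 1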

Cite as

Michał Włodarczyk. Parameterized Inapproximability for Steiner Orientation by Gap Amplification. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 104:1-104:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{wlodarczyk:LIPIcs.ICALP.2020.104,
  author =	{W{\l}odarczyk, Micha{\l}},
  title =	{{Parameterized Inapproximability for Steiner Orientation by Gap Amplification}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{104:1--104:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.104},
  URN =		{urn:nbn:de:0030-drops-125110},
  doi =		{10.4230/LIPIcs.ICALP.2020.104},
  annote =	{Keywords: approximation algorithms, fixed-parameter tractability, hardness of approximation, gap amplification}
}
Document
Track A: Algorithms, Complexity and Games
Near-Optimal Algorithm for Constructing Greedy Consensus Tree

Authors: Hongxun Wu


Abstract
In biology, phylogenetic trees are important tools for describing evolutionary relations, but various data sources may result in conflicting phylogenetic trees. To summarize these conflicting phylogenetic trees, consensus tree methods take k conflicting phylogenetic trees (each with n leaves) as input and output a single phylogenetic tree as consensus. Among the consensus tree methods, a widely used method is the greedy consensus tree. The previous fastest algorithms for constructing a greedy consensus tree have time complexity Õ(kn^1.5) [Gawrychowski, Landau, Sung, Weimann 2018] and Õ(k²n) [Sung 2019] respectively. In this paper, we improve the running time to Õ(kn). Since k input trees have Θ(kn) nodes in total, our algorithm is optimal up to polylogarithmic factors.
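
As a rough illustration of what a greedy consensus is (a toy sketch under our own tree encoding, nothing like the paper's Õ(kn) algorithm), the snippet below takes each input tree as a collection of clusters (leaf sets of internal nodes), orders all clusters by how many input trees contain them, and greedily keeps each cluster that is compatible with everything kept so far; ties are broken arbitrarily.

from collections import Counter

def greedy_consensus_clusters(trees):
    # `trees` is a list of input trees, each encoded as an iterable of
    # clusters (frozensets of leaf labels). Two clusters are compatible
    # iff they are disjoint or one contains the other.
    freq = Counter(c for t in trees for c in t)
    def compatible(a, b):
        return a <= b or b <= a or not (a & b)
    kept = []
    for c, _ in freq.most_common():  # decreasing frequency, ties arbitrary
        if all(compatible(c, d) for d in kept):
            kept.append(c)
    return kept

t1 = [frozenset('ab'), frozenset('abc')]
t2 = [frozenset('ab'), frozenset('abd')]
t3 = [frozenset('cd'), frozenset('acd')]
print(greedy_consensus_clusters([t1, t2, t3]))  # the cluster {a,b} is kept first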

Cite as

Hongxun Wu. Near-Optimal Algorithm for Constructing Greedy Consensus Tree. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 105:1-105:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{wu:LIPIcs.ICALP.2020.105,
  author =	{Wu, Hongxun},
  title =	{{Near-Optimal Algorithm for Constructing Greedy Consensus Tree}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{105:1--105:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.105},
  URN =		{urn:nbn:de:0030-drops-125122},
  doi =		{10.4230/LIPIcs.ICALP.2020.105},
  annote =	{Keywords: phylogenetic trees, greedy consensus trees, splay tree}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Decision Problems in Information Theory

Authors: Mahmoud Abo Khamis, Phokion G. Kolaitis, Hung Q. Ngo, and Dan Suciu


Abstract
Constraints on entropies are considered to be the laws of information theory. Even though the pursuit of their discovery has been a central theme of research in information theory, the algorithmic aspects of constraints on entropies remain largely unexplored. Here, we initiate an investigation of decision problems about constraints on entropies by placing several different such problems into levels of the arithmetical hierarchy. We establish the following results on checking the validity over all almost-entropic functions: first, validity of a Boolean information constraint arising from a monotone Boolean formula is co-recursively enumerable; second, validity of "tight" conditional information constraints is in Π⁰₃. Furthermore, under some restrictions, validity of conditional information constraints "with slack" is in Σ⁰₂, and validity of information inequality constraints involving max is Turing equivalent to validity of information inequality constraints (with no max involved). We also prove that the classical implication problem for conditional independence statements is co-recursively enumerable.
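
For orientation, a tiny valid instance of the implication problem for conditional independence statements (our own standard example, not one from the paper): if I(X;Y|Z) = 0 and I(X;Z) = 0, then I(X;Y) = 0, since the chain rule gives I(X;YZ) = I(X;Z) + I(X;Y|Z) = 0 and 0 ≤ I(X;Y) ≤ I(X;YZ). Deciding the validity of such implications over all distributions is the classical implication problem shown here to be co-recursively enumerable.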

Cite as

Mahmoud Abo Khamis, Phokion G. Kolaitis, Hung Q. Ngo, and Dan Suciu. Decision Problems in Information Theory. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 106:1-106:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{abokhamis_et_al:LIPIcs.ICALP.2020.106,
  author =	{Abo Khamis, Mahmoud and Kolaitis, Phokion G. and Ngo, Hung Q. and Suciu, Dan},
  title =	{{Decision Problems in Information Theory}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{106:1--106:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.106},
  URN =		{urn:nbn:de:0030-drops-125137},
  doi =		{10.4230/LIPIcs.ICALP.2020.106},
  annote =	{Keywords: Information theory, decision problems, arithmetical hierarchy, entropic functions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Invariants for Continuous Linear Dynamical Systems

Authors: Shaull Almagor, Edon Kelmendi, Joël Ouaknine, and James Worrell


Abstract
Continuous linear dynamical systems are used extensively in mathematics, computer science, physics, and engineering to model the evolution of a system over time. A central technique for certifying safety properties of such systems is by synthesising inductive invariants. This is the task of finding a set of states that is closed under the dynamics of the system and is disjoint from a given set of error states. In this paper we study the problem of synthesising inductive invariants that are definable in o-minimal expansions of the ordered field of real numbers. In particular, assuming Schanuel’s conjecture in transcendental number theory, we establish effective synthesis of o-minimal invariants in the case of semi-algebraic error sets. Without using Schanuel’s conjecture, we give a procedure for synthesising o-minimal invariants that contain all but a bounded initial segment of the orbit and are disjoint from a given semi-algebraic error set. We further prove that effective synthesis of semi-algebraic invariants that contain the whole orbit is at least as hard as a certain open problem in transcendental number theory.
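
A minimal worked example of the invariant notion (our own one-dimensional toy, not taken from the paper): for the system dx/dt = -x with initial point x(0) = 1 and semi-algebraic error set E = {x : x ≥ 2}, the semi-algebraic set I = {x : 0 ≤ x ≤ 1} is an inductive invariant, since it contains the initial point, orbits that start in I remain in I (as x(t) = x(0)e^{-t}), and I is disjoint from E.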

Cite as

Shaull Almagor, Edon Kelmendi, Joël Ouaknine, and James Worrell. Invariants for Continuous Linear Dynamical Systems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 107:1-107:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{almagor_et_al:LIPIcs.ICALP.2020.107,
  author =	{Almagor, Shaull and Kelmendi, Edon and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{Invariants for Continuous Linear Dynamical Systems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{107:1--107:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.107},
  URN =		{urn:nbn:de:0030-drops-125141},
  doi =		{10.4230/LIPIcs.ICALP.2020.107},
  annote =	{Keywords: Invariants, continuous linear dynamical systems, continuous Skolem problem, safety, o-minimality}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Higher-Order Cryptography

Authors: Boaz Barak, Raphaëlle Crubillé, and Ugo Dal Lago


Abstract
Type-two constructions abound in cryptography: adversaries for encryption and authentication schemes, if active, are modeled as algorithms having access to oracles, i.e. as second-order algorithms. But how about making cryptographic schemes themselves higher-order? This paper gives an answer to this question, by first describing why higher-order cryptography is interesting as an object of study, then showing how the concept of probabilistic polynomial time algorithm can be generalized so as to encompass algorithms of order strictly higher than two, and finally proving some positive and negative results about the existence of higher-order cryptographic primitives, namely authentication schemes and pseudorandom functions.
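
The order terminology can be pictured with plain function types. Below is a loose sketch in Python type hints (our own naming; the paper works with probabilistic polynomial-time functionals, which this does not capture): order 1 is an ordinary function on strings, order 2 is an adversary with oracle access, and order 3 and above is where "higher-order" schemes live.

from typing import Callable

# Order 1: an ordinary function on bitstrings, e.g. a keyed primitive.
Oracle = Callable[[bytes], bytes]

# Order 2: an adversary, i.e. an algorithm with oracle access; it takes
# an order-1 object and returns a verdict.
Adversary = Callable[[Oracle], bool]

# Order 3: a "higher-order" scheme in the sense discussed here would
# itself take order-2 objects (or higher) as inputs.
HigherOrderScheme = Callable[[Adversary], bytes]

def toy_adversary(oracle: Oracle) -> bool:
    # A toy order-2 adversary: one query, one bit checked.
    return oracle(b"probe")[0] % 2 == 0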

Cite as

Boaz Barak, Raphaëlle Crubillé, and Ugo Dal Lago. On Higher-Order Cryptography. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 108:1-108:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{barak_et_al:LIPIcs.ICALP.2020.108,
  author =	{Barak, Boaz and Crubill\'{e}, Rapha\"{e}lle and Dal Lago, Ugo},
  title =	{{On Higher-Order Cryptography}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{108:1--108:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.108},
  URN =		{urn:nbn:de:0030-drops-125153},
  doi =		{10.4230/LIPIcs.ICALP.2020.108},
  annote =	{Keywords: Higher-order computation, probabilistic computation, game semantics, cryptography}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Cost Automata, Safe Schemes, and Downward Closures

Authors: David Barozzini, Lorenzo Clemente, Thomas Colcombet, and Paweł Parys


Abstract
Higher-order recursion schemes are an expressive formalism used to define languages of possibly infinite ranked trees. They extend regular and context-free grammars, and are equivalent to simply typed λY-calculus and collapsible pushdown automata. In this work we prove, under a syntactical constraint called safety, decidability of the model-checking problem for recursion schemes against properties defined by alternating B-automata, an extension of alternating parity automata for infinite trees with a boundedness acceptance condition. We then exploit this result to show how to compute downward closures of languages of finite trees recognized by safe recursion schemes.

Cite as

David Barozzini, Lorenzo Clemente, Thomas Colcombet, and Paweł Parys. Cost Automata, Safe Schemes, and Downward Closures. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 109:1-109:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{barozzini_et_al:LIPIcs.ICALP.2020.109,
  author =	{Barozzini, David and Clemente, Lorenzo and Colcombet, Thomas and Parys, Pawe{\l}},
  title =	{{Cost Automata, Safe Schemes, and Downward Closures}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{109:1--109:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.109},
  URN =		{urn:nbn:de:0030-drops-125169},
  doi =		{10.4230/LIPIcs.ICALP.2020.109},
  annote =	{Keywords: Cost logics, cost automata, downward closures, higher-order recursion schemes, safe recursion schemes}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Sensitive Instances of the Constraint Satisfaction Problem

Authors: Libor Barto, Marcin Kozik, Johnson Tan, and Matt Valeriote


Abstract
We investigate the impact of modifying the constraining relations of a Constraint Satisfaction Problem (CSP) instance, with a fixed template, on the set of solutions of the instance. More precisely we investigate sensitive instances: an instance of the CSP is called sensitive, if removing any tuple from any constraining relation invalidates some solution of the instance. Equivalently, one could require that every tuple from any one of its constraints extends to a solution of the instance. Clearly, any non-trivial template has instances which are not sensitive. Therefore we follow the direction proposed (in the context of strict width) by Feder and Vardi in [Feder and Vardi, 1999] and require that only the instances produced by a local consistency checking algorithm are sensitive. In the language of the algebraic approach to the CSP we show that a finite idempotent algebra 𝔸 has a k+2 variable near unanimity term operation if and only if any instance that results from running the (k, k+1)-consistency algorithm on an instance over 𝔸² is sensitive. A version of our result, without idempotency but with the sensitivity condition holding in a variety of algebras, settles a question posed by G. Bergman about systems of projections of algebras that arise from some subalgebra of a finite product of algebras. Our results hold for infinite (albeit in the case of 𝔸 idempotent) algebras as well and exhibit a surprising similarity to the strict width k condition proposed by Feder and Vardi. Both conditions can be characterized by the existence of a near unanimity operation, but the arities of the operations differ by 1.
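
The sensitivity condition is easy to test by brute force on toy instances. The sketch below (our own encoding and names, purely illustrative and unrelated to the consistency algorithms studied in the paper) checks whether every tuple of every constraint extends to a solution.

from itertools import product

def solutions(domain, constraints):
    # All assignments (tuples over `domain`) satisfying every constraint.
    # A constraint is (scope, relation): scope is a tuple of variable
    # indices, relation a set of allowed tuples.
    n = 1 + max(v for scope, _ in constraints for v in scope)
    return [a for a in product(domain, repeat=n)
            if all(tuple(a[v] for v in scope) in rel for scope, rel in constraints)]

def is_sensitive(domain, constraints):
    # Sensitive: every tuple of every constraint extends to a full solution
    # (equivalently, removing any tuple invalidates some solution).
    sols = solutions(domain, constraints)
    return all(any(tuple(a[v] for v in scope) == t for a in sols)
               for scope, rel in constraints for t in rel)

# Toy instance over {0,1}: equality on (x0,x1) and equality on (x1,x2).
eq = {(0, 0), (1, 1)}
print(is_sensitive({0, 1}, [((0, 1), eq), ((1, 2), eq)]))  # True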

Cite as

Libor Barto, Marcin Kozik, Johnson Tan, and Matt Valeriote. Sensitive Instances of the Constraint Satisfaction Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 110:1-110:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{barto_et_al:LIPIcs.ICALP.2020.110,
  author =	{Barto, Libor and Kozik, Marcin and Tan, Johnson and Valeriote, Matt},
  title =	{{Sensitive Instances of the Constraint Satisfaction Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{110:1--110:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.110},
  URN =		{urn:nbn:de:0030-drops-125176},
  doi =		{10.4230/LIPIcs.ICALP.2020.110},
  annote =	{Keywords: Constraint satisfaction problem, bounded width, local consistency, near unanimity operation, loop lemma}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Complexity of Bounded Context Switching with Dynamic Thread Creation

Authors: Pascal Baumann, Rupak Majumdar, Ramanathan S. Thinniyam, and Georg Zetzsche


Abstract
Dynamic networks of concurrent pushdown systems (DCPS) are a theoretical model for multi-threaded recursive programs with shared global state and dynamic creation of threads. The (global) state reachability problem for DCPS is undecidable in general, but Atig et al. (2009) showed that it becomes decidable, and is in 2EXPSPACE, when each thread is restricted to a fixed number of context switches. The best known lower bound for the problem is EXPSPACE-hardness, which holds already when each thread is a finite-state machine and runs atomically to completion (i.e., does not switch contexts). In this paper, we close the gap by showing that state reachability is 2EXPSPACE-hard already with only one context switch. Interestingly, state reachability analysis is in EXPSPACE both for pushdown threads without context switches as well as for finite-state threads with arbitrary context switches. Thus, recursive threads together with a single context switch provide an exponential advantage. Our proof techniques are of independent interest for 2EXPSPACE-hardness results. We introduce transducer-defined Petri nets, a succinct representation for Petri nets, and show that coverability is 2EXPSPACE-hard for this model. To show 2EXPSPACE-hardness, we present a modified version of Lipton’s simulation of counter machines by Petri nets, where the net programs can make explicit recursive procedure calls up to a bounded depth.

Cite as

Pascal Baumann, Rupak Majumdar, Ramanathan S. Thinniyam, and Georg Zetzsche. The Complexity of Bounded Context Switching with Dynamic Thread Creation. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 111:1-111:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{baumann_et_al:LIPIcs.ICALP.2020.111,
  author =	{Baumann, Pascal and Majumdar, Rupak and Thinniyam, Ramanathan S. and Zetzsche, Georg},
  title =	{{The Complexity of Bounded Context Switching with Dynamic Thread Creation}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{111:1--111:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.111},
  URN =		{urn:nbn:de:0030-drops-125187},
  doi =		{10.4230/LIPIcs.ICALP.2020.111},
  annote =	{Keywords: Dynamic thread creation, Bounded context switching, Asynchronous Programs, Safety verification, State reachability, Petri nets, Complexity, Succinctness, Counter Programs}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Two Variable Logic with Ultimately Periodic Counting

Authors: Michael Benedikt, Egor V. Kostylev, and Tony Tan


Abstract
We consider the extension of FO² with quantifiers that state that the number of elements where a formula holds should belong to a given ultimately periodic set. We show that both satisfiability and finite satisfiability of the logic are decidable. We also show that the spectrum of any sentence is definable in Presburger arithmetic. In the process we present several refinements to the "biregular graph method". In this method, decidability issues concerning two-variable logics are reduced to questions about Presburger definability of integer vectors associated with partitioned graphs, where nodes in a partition satisfy certain constraints on their in- and out-degrees.
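
Recall (our phrasing of the standard definition) that a set S ⊆ ℕ is ultimately periodic if there are a threshold t and a period p > 0 such that n ∈ S ⟺ n + p ∈ S for all n ≥ t. For instance, S = {2} ∪ {5 + 3k : k ≥ 0} is ultimately periodic with threshold 5 and period 3, so "the number of elements satisfying φ lies in S" is a quantifier of the kind considered here.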

Cite as

Michael Benedikt, Egor V. Kostylev, and Tony Tan. Two Variable Logic with Ultimately Periodic Counting. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 112:1-112:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{benedikt_et_al:LIPIcs.ICALP.2020.112,
  author =	{Benedikt, Michael and Kostylev, Egor V. and Tan, Tony},
  title =	{{Two Variable Logic with Ultimately Periodic Counting}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{112:1--112:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.112},
  URN =		{urn:nbn:de:0030-drops-125197},
  doi =		{10.4230/LIPIcs.ICALP.2020.112},
  annote =	{Keywords: Presburger Arithmetic, Two-variable logic}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Single-Use Automata and Transducers for Infinite Alphabets

Authors: Mikołaj Bojańczyk and Rafał Stefański


Abstract
Our starting point are register automata for data words, in the style of Kaminski and Francez. We study the effects of the single-use restriction, which says that a register is emptied immediately after being used. We show that under the single-use restriction, the theory of automata for data words becomes much more robust. The main results are: (a) five different machine models are equivalent as language acceptors, including one-way and two-way single-use register automata; (b) one can recover some of the algebraic theory of languages over finite alphabets, including a version of the Krohn-Rhodes Theorem; (c) there is also a robust theory of transducers, with four equivalent models, including two-way single use transducers and a variant of streaming string transducers for data words. These results are in contrast with automata for data words without the single-use restriction, where essentially all models are pairwise non-equivalent.

Cite as

Mikołaj Bojańczyk and Rafał Stefański. Single-Use Automata and Transducers for Infinite Alphabets. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 113:1-113:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2020.113,
  author =	{Boja\'{n}czyk, Miko{\l}aj and Stefa\'{n}ski, Rafa{\l}},
  title =	{{Single-Use Automata and Transducers for Infinite Alphabets}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{113:1--113:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.113},
  URN =		{urn:nbn:de:0030-drops-125200},
  doi =		{10.4230/LIPIcs.ICALP.2020.113},
  annote =	{Keywords: Automata, semigroups, data words, orbit-finite sets}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Weakly-Unambiguous Parikh Automata and Their Link to Holonomic Series

Authors: Alin Bostan, Arnaud Carayol, Florent Koechlin, and Cyril Nicaud


Abstract
We investigate the connection between properties of formal languages and properties of their generating series, with a focus on the class of holonomic power series. We first prove a strong version of a conjecture by Castiglione and Massazza: weakly-unambiguous Parikh automata are equivalent to unambiguous two-way reversal bounded counter machines, and their multivariate generating series are holonomic. We then show that the converse is not true: we construct a language whose generating series is algebraic (thus holonomic), but which is inherently weakly-ambiguous as a Parikh automaton language. Finally, we prove an effective decidability result for the inclusion problem for weakly-unambiguous Parikh automata, and provide an upper bound on its complexity.
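
To fix the terminology with a standard example (ours, not the construction from the paper): the length generating series of a language L over an alphabet Σ is ∑_n |L ∩ Σⁿ| zⁿ; for the Dyck language of well-nested brackets, the decomposition D = ε | (D)D yields f(z) = 1 + z²f(z)², so the series is algebraic and hence holonomic.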

Cite as

Alin Bostan, Arnaud Carayol, Florent Koechlin, and Cyril Nicaud. Weakly-Unambiguous Parikh Automata and Their Link to Holonomic Series. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 114:1-114:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bostan_et_al:LIPIcs.ICALP.2020.114,
  author =	{Bostan, Alin and Carayol, Arnaud and Koechlin, Florent and Nicaud, Cyril},
  title =	{{Weakly-Unambiguous Parikh Automata and Their Link to Holonomic Series}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{114:1--114:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.114},
  URN =		{urn:nbn:de:0030-drops-125212},
  doi =		{10.4230/LIPIcs.ICALP.2020.114},
  annote =	{Keywords: generating series, holonomicity, ambiguity, reversal bounded counter machine, Parikh automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On the Size of Finite Rational Matrix Semigroups

Authors: Georgina Bumpus, Christoph Haase, Stefan Kiefer, Paul-Ioan Stoienescu, and Jonathan Tanner


Abstract
Let n be a positive integer and M a set of rational n × n-matrices such that M generates a finite multiplicative semigroup. We show that any matrix in the semigroup is a product of matrices in M whose length is at most 2^{n (2 n + 3)} g(n)^{n+1} ∈ 2^{O(n² log n)}, where g(n) is the maximum order of finite groups over rational n × n-matrices. This result implies algorithms with an elementary running time for deciding finiteness of weighted automata over the rationals and for deciding reachability in affine integer vector addition systems with states with the finite monoid property.
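
The object in question can be explored directly on toy instances. The sketch below (our illustration only, unrelated to the length bound or its proof) computes the closure of a set of rational generator matrices under products, which terminates exactly when the generated semigroup is finite.

from fractions import Fraction

def matmul(a, b):
    # Multiply two square matrices given as tuples of tuples of Fractions.
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def generated_semigroup(generators):
    # Closure of the generators under multiplication; terminates only if
    # the generated multiplicative semigroup is finite.
    gens = [tuple(tuple(Fraction(x) for x in row) for row in g) for g in generators]
    elements = set(gens)
    frontier = list(elements)
    while frontier:
        new = []
        for a in frontier:
            for g in gens:
                p = matmul(a, g)
                if p not in elements:
                    elements.add(p)
                    new.append(p)
        frontier = new
    return elements

# Toy generators: a 90-degree rotation and a reflection, which generate
# the dihedral group of order 8, a finite matrix semigroup.
rot = [[0, -1], [1, 0]]
ref = [[1, 0], [0, -1]]
print(len(generated_semigroup([rot, ref])))  # 8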

Cite as

Georgina Bumpus, Christoph Haase, Stefan Kiefer, Paul-Ioan Stoienescu, and Jonathan Tanner. On the Size of Finite Rational Matrix Semigroups. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 115:1-115:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bumpus_et_al:LIPIcs.ICALP.2020.115,
  author =	{Bumpus, Georgina and Haase, Christoph and Kiefer, Stefan and Stoienescu, Paul-Ioan and Tanner, Jonathan},
  title =	{{On the Size of Finite Rational Matrix Semigroups}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{115:1--115:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.115},
  URN =		{urn:nbn:de:0030-drops-125226},
  doi =		{10.4230/LIPIcs.ICALP.2020.115},
  annote =	{Keywords: Matrix semigroups, Burnside problem, weighted automata, vector addition systems}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Rational Subsets of Baumslag-Solitar Groups

Authors: Michaël Cadilhac, Dmitry Chistikov, and Georg Zetzsche


Abstract
We consider the rational subset membership problem for Baumslag-Solitar groups. These groups form a prominent class in the area of algorithmic group theory, and they were recently identified as an obstacle for understanding the rational subsets of GL(2,ℚ). We show that rational subset membership for Baumslag-Solitar groups BS(1,q) with q ≥ 2 is decidable and PSPACE-complete. To this end, we introduce a word representation of the elements of BS(1,q): their pointed expansion (PE), an annotated q-ary expansion. Seeing subsets of BS(1,q) as word languages, this leads to a natural notion of PE-regular subsets of BS(1,q): these are the subsets of BS(1,q) whose sets of PE are regular languages. Our proof shows that every rational subset of BS(1,q) is PE-regular. Since the class of PE-regular subsets of BS(1,q) is well-equipped with closure properties, we obtain further applications of these results. Our results imply that (i) emptiness of Boolean combinations of rational subsets is decidable, (ii) membership to each fixed rational subset of BS(1,q) is decidable in logarithmic space, and (iii) it is decidable whether a given rational subset is recognizable. In particular, it is decidable whether a given finitely generated subgroup of BS(1,q) has finite index.

Cite as

Michaël Cadilhac, Dmitry Chistikov, and Georg Zetzsche. Rational Subsets of Baumslag-Solitar Groups. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 116:1-116:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{cadilhac_et_al:LIPIcs.ICALP.2020.116,
  author =	{Cadilhac, Micha\"{e}l and Chistikov, Dmitry and Zetzsche, Georg},
  title =	{{Rational Subsets of Baumslag-Solitar Groups}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{116:1--116:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.116},
  URN =		{urn:nbn:de:0030-drops-125238},
  doi =		{10.4230/LIPIcs.ICALP.2020.116},
  annote =	{Keywords: Rational subsets, Baumslag-Solitar groups, decidability, regular languages, pointed expansion}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Polynomial Recursive Sequences

Authors: Michaël Cadilhac, Filip Mazowiecki, Charles Paperman, Michał Pilipczuk, and Géraud Sénizergues


Abstract
We study the expressive power of polynomial recursive sequences, a nonlinear extension of the well-known class of linear recursive sequences. These sequences arise naturally in the study of nonlinear extensions of weighted automata, where (non)expressiveness results translate to class separations. A typical example of a polynomial recursive sequence is b_n = n!. Our main result is that the sequence u_n = nⁿ is not polynomial recursive.
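
To make the example from the abstract concrete (our own encoding of the standard system; details of the formalism may differ from the paper), n! is polynomial recursive via an auxiliary sequence that carries the counter, whereas the paper's main result is that no such finite system of polynomial recurrences exists for nⁿ.

def poly_recursive_factorial(n):
    # b_n = n! via the polynomial recursive system
    #   b_0 = 1, c_0 = 1,  b_{k+1} = b_k * c_k,  c_{k+1} = c_k + 1,
    # i.e. each next value is a polynomial in the current values.
    b, c = 1, 1
    for _ in range(n):
        b, c = b * c, c + 1
    return b

print([poly_recursive_factorial(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]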

Cite as

Michaël Cadilhac, Filip Mazowiecki, Charles Paperman, Michał Pilipczuk, and Géraud Sénizergues. On Polynomial Recursive Sequences. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 117:1-117:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{cadilhac_et_al:LIPIcs.ICALP.2020.117,
  author =	{Cadilhac, Micha\"{e}l and Mazowiecki, Filip and Paperman, Charles and Pilipczuk, Micha{\l} and S\'{e}nizergues, G\'{e}raud},
  title =	{{On Polynomial Recursive Sequences}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{117:1--117:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.117},
  URN =		{urn:nbn:de:0030-drops-125240},
  doi =		{10.4230/LIPIcs.ICALP.2020.117},
  annote =	{Keywords: recursive sequences, expressive power, weighted automata, higher-order pushdown automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
A Recipe for Quantum Graphical Languages

Authors: Titouan Carette and Emmanuel Jeandel


Abstract
Different graphical calculi have been proposed to represent quantum computation. First the ZX-calculus [Coecke and Duncan, 2011], followed by the ZW-calculus [Hadzihasanovic, 2015] and then the ZH-calculus [Backens and Kissinger, 2018]. One may wonder whether new ZX-like calculi will continue to be proposed forever. This article answers this question negatively. All these languages share a common core structure we call Z^*-algebras. We classify Z^*-algebras up to isomorphism in two-dimensional Hilbert spaces and show that they are all variations of the aforementioned calculi. We do the same for linear relations and show that the calculus of [Bonchi et al., 2017] is essentially the unique one.

Cite as

Titouan Carette and Emmanuel Jeandel. A Recipe for Quantum Graphical Languages. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 118:1-118:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{carette_et_al:LIPIcs.ICALP.2020.118,
  author =	{Carette, Titouan and Jeandel, Emmanuel},
  title =	{{A Recipe for Quantum Graphical Languages}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{118:1--118:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.118},
  URN =		{urn:nbn:de:0030-drops-125250},
  doi =		{10.4230/LIPIcs.ICALP.2020.118},
  annote =	{Keywords: Categorical Quantum Mechanics, Quantum Computing, Category Theory}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On the Power of Ordering in Linear Arithmetic Theories

Authors: Dmitry Chistikov and Christoph Haase


Abstract
We study the problems of deciding whether a relation definable by a first-order formula in linear rational or linear integer arithmetic with an order relation is definable in the absence of the order relation. Over the integers, this problem was shown decidable by Choffrut and Frigeri [Discret. Math. Theor. C., 12(1), pp. 21 - 38, 2010], albeit with non-elementary time complexity. Our contribution is to establish a full geometric characterisation of those sets definable without order, which in turn enables us to prove coNP-completeness of this problem over the rationals and to establish an elementary upper bound over the integers. We also provide a complementary Π₂^P lower bound for the integer case that holds even in a fixed dimension. This lower bound is obtained by showing that universality for ultimately periodic sets, i.e., semilinear sets in dimension one, is Π₂^P-hard, which resolves an open problem of Huynh [Elektron. Inf.verarb. Kybern., 18(6), pp. 291 - 338, 1982].
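
Two quick examples framing the question (ours, not from the paper): over ℤ, the congruence {x : x ≡ 0 (mod 2)} is definable without order as ∃y (x = y + y), whereas the set {x : x ≥ 0} is definable with the order but, by a classical argument, by no order-free formula of linear integer arithmetic.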

Cite as

Dmitry Chistikov and Christoph Haase. On the Power of Ordering in Linear Arithmetic Theories. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 119:1-119:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{chistikov_et_al:LIPIcs.ICALP.2020.119,
  author =	{Chistikov, Dmitry and Haase, Christoph},
  title =	{{On the Power of Ordering in Linear Arithmetic Theories}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{119:1--119:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.119},
  URN =		{urn:nbn:de:0030-drops-125265},
  doi =		{10.4230/LIPIcs.ICALP.2020.119},
  annote =	{Keywords: logical definability, linear arithmetic theories, semi linear sets, ultimately periodic sets, numerical semigroups}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Post Correspondence Problem and Equalisers for Certain Free Group and Monoid Morphisms

Authors: Laura Ciobanu and Alan D. Logan


Abstract
A marked free monoid morphism is a morphism for which the image of each generator starts with a different letter, and immersions are the analogous maps in free groups. We show that the (simultaneous) PCP is decidable for immersions of free groups, and provide an algorithm to compute bases for the sets, called equalisers, on which the immersions take the same values. We also answer a question of Stallings about the rank of the equaliser. Analogous results are proven for marked morphisms of free monoids.
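
A small example of the objects involved (our own, only to fix the definitions): the free monoid morphisms g, h : {a,b}* → {a,b}* with g(a) = a, g(b) = bb and h(a) = a, h(b) = b are both marked, since in each case the images of a and b start with different letters, and their equaliser {w : g(w) = h(w)} is the submonoid {a}*, generated by a single element.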

Cite as

Laura Ciobanu and Alan D. Logan. The Post Correspondence Problem and Equalisers for Certain Free Group and Monoid Morphisms. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 120:1-120:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{ciobanu_et_al:LIPIcs.ICALP.2020.120,
  author =	{Ciobanu, Laura and Logan, Alan D.},
  title =	{{The Post Correspondence Problem and Equalisers for Certain Free Group and Monoid Morphisms}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{120:1--120:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.120},
  URN =		{urn:nbn:de:0030-drops-125271},
  doi =		{10.4230/LIPIcs.ICALP.2020.120},
  annote =	{Keywords: Post Correspondence Problem, marked map, immersion, free group, free monoid}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Timed Games and Deterministic Separability

Authors: Lorenzo Clemente, Sławomir Lasota, and Radosław Piórkowski


Abstract
We study a generalisation of Büchi-Landweber games to the timed setting. The winning condition is specified by a non-deterministic timed automaton with epsilon transitions, and only Player I can elapse time. We show that for a fixed number of clocks and maximal numerical constant available to Player II, it is decidable whether she has a winning timed controller using these resources. More interestingly, we also show that the problem remains decidable even when the maximal numerical constant is not specified in advance, which is an important technical novelty not present in previous literature on timed games. We complement these two decidability results by showing undecidability when the number of clocks available to Player II is not fixed. As an application of timed games, and our main motivation to study them, we show that they can be used to solve the deterministic separability problem for nondeterministic timed automata with epsilon transitions. This is a novel decision problem about timed automata which has not been studied before. We show that separability is decidable when the number of clocks of the separating automaton is fixed and the maximal constant is not. Whether separability is decidable without bounding the number of clocks of the separator remains an interesting open problem.

Cite as

Lorenzo Clemente, Sławomir Lasota, and Radosław Piórkowski. Timed Games and Deterministic Separability. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 121:1-121:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{clemente_et_al:LIPIcs.ICALP.2020.121,
  author =	{Clemente, Lorenzo and Lasota, S{\l}awomir and Pi\'{o}rkowski, Rados{\l}aw},
  title =	{{Timed Games and Deterministic Separability}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{121:1--121:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.121},
  URN =		{urn:nbn:de:0030-drops-125282},
  doi =		{10.4230/LIPIcs.ICALP.2020.121},
  annote =	{Keywords: Timed automata, separability problems, timed games}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Dynamic Complexity of Reachability: How Many Changes Can We Handle?

Authors: Samir Datta, Pankaj Kumar, Anish Mukherjee, Anuj Tawari, Nils Vortmeier, and Thomas Zeume


Abstract
In 2015, it was shown that reachability for arbitrary directed graphs can be updated by first-order formulas after inserting or deleting single edges. Later, in 2018, this was extended to changes of size (log n)/(log log n), where n is the size of the graph. Changes of polylogarithmic size can be handled when majority quantifiers may also be used. In this paper we extend these results by showing that, for changes of polylogarithmic size, first-order update formulas suffice for maintaining (1) undirected reachability, and (2) directed reachability under insertions. For classes of directed graphs for which efficient parallel algorithms can compute non-zero circulation weights, reachability can be maintained with update formulas that may use "modulo 2" quantifiers under changes of polylogarithmic size. Examples for these classes include the class of planar graphs and graphs with bounded treewidth. The latter is shown here. As the logics we consider cannot maintain reachability under changes of larger sizes, our results are optimal with respect to the size of the changes.

Cite as

Samir Datta, Pankaj Kumar, Anish Mukherjee, Anuj Tawari, Nils Vortmeier, and Thomas Zeume. Dynamic Complexity of Reachability: How Many Changes Can We Handle?. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 122:1-122:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{datta_et_al:LIPIcs.ICALP.2020.122,
  author =	{Datta, Samir and Kumar, Pankaj and Mukherjee, Anish and Tawari, Anuj and Vortmeier, Nils and Zeume, Thomas},
  title =	{{Dynamic Complexity of Reachability: How Many Changes Can We Handle?}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{122:1--122:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.122},
  URN =		{urn:nbn:de:0030-drops-125291},
  doi =		{10.4230/LIPIcs.ICALP.2020.122},
  annote =	{Keywords: Dynamic complexity, reachability, complex changes}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Strahler Number of a Parity Game

Authors: Laure Daviaud, Marcin Jurdziński, and K. S. Thejaswini


Abstract
The Strahler number of a rooted tree is the largest height of a perfect binary tree that is its minor. The Strahler number of a parity game is proposed to be defined as the smallest Strahler number of the tree of any of its attractor decompositions. It is proved that parity games can be solved in quasi-linear space and in time that is polynomial in the number of vertices n and linear in (d/(2k))^k, where d is the number of priorities and k is the Strahler number. This complexity is quasi-polynomial because the Strahler number is at most logarithmic in the number of vertices. The proof is based on a new construction of small Strahler-universal trees. It is shown that the Strahler number of a parity game is a robust, and hence arguably natural, parameter: it coincides with its alternative version based on trees of progress measures and - remarkably - with the register number defined by Lehtinen (2018). It follows that parity games can be solved in quasi-linear space and in time that is polynomial in the number of vertices and linear in (d/(2k))^k, where k is the register number. This significantly improves the running times and space achieved for parity games of bounded register number by Lehtinen (2018) and by Parys (2020). The running time of the algorithm based on small Strahler-universal trees yields a novel trade-off k ⋅ lg(d/k) = O(log n) between the two natural parameters that measure the structural complexity of a parity game, which allows solving parity games in polynomial time. This includes as special cases the asymptotic settings of those parameters covered by the results of Calude, Jain, Khoussainov, Li, and Stephan (2017), of Jurdziński and Lazić (2017), and of Lehtinen (2018), and it significantly extends the range of such settings, for example to d = 2^O(√{lg n}) and k = O(√{lg n}).
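
For reference, the Strahler number of a rooted tree can be computed with the classical bottom-up recurrence. The sketch below (our encoding of trees as nested lists, with the convention that a single node has Strahler number 1) is only meant to fix that definition; it says nothing about parity games or attractor decompositions.

def strahler(tree):
    # Strahler number of a rooted tree given as nested lists of children:
    # a leaf has value 1; an internal node takes the maximum of its
    # children's values, plus one if that maximum is attained at least twice.
    if not tree:
        return 1
    vals = sorted((strahler(child) for child in tree), reverse=True)
    return vals[0] + 1 if len(vals) > 1 and vals[0] == vals[1] else vals[0]

leaf = []
print(strahler(leaf))                          # 1
print(strahler([leaf, leaf]))                  # 2
print(strahler([[leaf, leaf], leaf]))          # 2
print(strahler([[leaf, leaf], [leaf, leaf]]))  # 3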

Cite as

Laure Daviaud, Marcin Jurdziński, and K. S. Thejaswini. The Strahler Number of a Parity Game. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 123:1-123:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{daviaud_et_al:LIPIcs.ICALP.2020.123,
  author =	{Daviaud, Laure and Jurdzi\'{n}ski, Marcin and Thejaswini, K. S.},
  title =	{{The Strahler Number of a Parity Game}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{123:1--123:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.123},
  URN =		{urn:nbn:de:0030-drops-125304},
  doi =		{10.4230/LIPIcs.ICALP.2020.123},
  annote =	{Keywords: parity game, attractor decomposition, progress measure, universal tree, Strahler number}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On the Structure of Solution Sets to Regular Word Equations

Authors: Joel D. Day and Florin Manea


Abstract
For quadratic word equations, there exists an algorithm based on rewriting rules which generates a directed graph describing all solutions to the equation. For regular word equations - those for which each variable occurs at most once on each side of the equation - we investigate the properties of this graph, such as bounds on its diameter, size, and DAG-width, as well as providing some insights into symmetries in its structure. As a consequence, we obtain a combinatorial proof that the problem of deciding whether a regular word equation has a solution is in NP.
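
For instance (our own toy example), the equation xay ≐ yax, in which each of the variables x and y occurs exactly once on each side, is a regular word equation; it is satisfied, e.g., by x = y = ε and by x = y = a.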

Cite as

Joel D. Day and Florin Manea. On the Structure of Solution Sets to Regular Word Equations. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 124:1-124:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{day_et_al:LIPIcs.ICALP.2020.124,
  author =	{Day, Joel D. and Manea, Florin},
  title =	{{On the Structure of Solution Sets to Regular Word Equations}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{124:1--124:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.124},
  URN =		{urn:nbn:de:0030-drops-125314},
  doi =		{10.4230/LIPIcs.ICALP.2020.124},
  annote =	{Keywords: Quadratic Word Equations, Regular Word Equations, String Solving, NP}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
From Linear to Additive Cellular Automata

Authors: Alberto Dennunzio, Enrico Formenti, Darij Grinberg, and Luciano Margara


Abstract
This paper proves the decidability of several important properties of additive cellular automata over finite abelian groups. First of all, we prove that equicontinuity and sensitivity to initial conditions are decidable for a nontrivial subclass of additive cellular automata, namely, the linear cellular automata over 𝕂ⁿ, where 𝕂 is the ring ℤ/mℤ. The proof of this last result required proving a general result on the powers of matrices over a commutative ring, which is of interest in its own right. Then, we extend the decidability result concerning sensitivity and equicontinuity to the whole class of additive cellular automata over a finite abelian group, and for such a class we also prove the decidability of topological transitivity and of all the properties (such as ergodicity) that are equivalent to it. Finally, a decidable characterization of injectivity and surjectivity for additive cellular automata over a finite abelian group is provided in terms of the injectivity and surjectivity of an associated linear cellular automaton over 𝕂ⁿ.
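
As a reminder of what linearity over ℤ/mℤ means, here is a toy one-dimensional sketch with our own local rule and periodic boundary (not the decision procedures of the paper): the next state of each cell is a fixed ℤ/mℤ-linear combination of its neighbourhood.

def linear_ca_step(config, coeffs, m):
    # One step of a 1-D linear cellular automaton over Z/mZ with periodic
    # boundary: cell i becomes sum_j coeffs[j] * config[i + j - r] mod m,
    # where r = len(coeffs) // 2 is the neighbourhood radius.
    n, r = len(config), len(coeffs) // 2
    return [sum(c * config[(i + j - r) % n] for j, c in enumerate(coeffs)) % m
            for i in range(n)]

# Rule-90-style example over Z/2Z: each cell becomes the XOR of its neighbours.
config = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(4):
    print(config)
    config = linear_ca_step(config, coeffs=[1, 0, 1], m=2)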

Cite as

Alberto Dennunzio, Enrico Formenti, Darij Grinberg, and Luciano Margara. From Linear to Additive Cellular Automata. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 125:1-125:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{dennunzio_et_al:LIPIcs.ICALP.2020.125,
  author =	{Dennunzio, Alberto and Formenti, Enrico and Grinberg, Darij and Margara, Luciano},
  title =	{{From Linear to Additive Cellular Automata}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{125:1--125:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.125},
  URN =		{urn:nbn:de:0030-drops-125321},
  doi =		{10.4230/LIPIcs.ICALP.2020.125},
  annote =	{Keywords: Cellular Automata, Decidability, Symbolic Dynamics}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Complexity of Knapsack Problems in Wreath Products

Authors: Michael Figelius, Moses Ganardi, Markus Lohrey, and Georg Zetzsche


Abstract
We prove new complexity results for computational problems in certain wreath products of groups and (as an application) for free solvable groups. For a finitely generated group we study the so-called power word problem (does a given expression u₁^{k₁} … u_d^{k_d}, where u₁, …, u_d are words over the group generators and k₁, …, k_d are binary encoded integers, evaluate to the group identity?) and the knapsack problem (does a given equation u₁^{x₁} … u_d^{x_d} = v, where u₁, …, u_d, v are words over the group generators and x₁, …, x_d are variables, have a solution in the natural numbers?). We prove that the power word problem for wreath products of the form G ≀ ℤ with G nilpotent, and for iterated wreath products of free abelian groups, belongs to TC⁰. As an application of the latter, the power word problem for free solvable groups is in TC⁰. On the other hand, we show that for wreath products G ≀ ℤ, where G is a so-called uniformly strongly efficiently non-solvable group (which form a large subclass of non-solvable groups), the power word problem is coNP-hard. For the knapsack problem we show NP-completeness for iterated wreath products of free abelian groups, and hence for free solvable groups. Moreover, the knapsack problem for every wreath product G ≀ ℤ, where G is uniformly efficiently non-solvable, is Σ₂^p-hard.
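
For orientation, a toy knapsack instance in the group ℤ with generator a (our own example, far simpler than the wreath products treated here): the equation a^{x₁} (a²)^{x₂} = a⁵ asks for natural numbers with x₁ + 2x₂ = 5, which has, e.g., the solution x₁ = 1, x₂ = 2.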

Cite as

Michael Figelius, Moses Ganardi, Markus Lohrey, and Georg Zetzsche. The Complexity of Knapsack Problems in Wreath Products. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 126:1-126:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{figelius_et_al:LIPIcs.ICALP.2020.126,
  author =	{Figelius, Michael and Ganardi, Moses and Lohrey, Markus and Zetzsche, Georg},
  title =	{{The Complexity of Knapsack Problems in Wreath Products}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{126:1--126:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.126},
  URN =		{urn:nbn:de:0030-drops-125339},
  doi =		{10.4230/LIPIcs.ICALP.2020.126},
  annote =	{Keywords: algorithmic group theory, knapsack, wreath product}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Adversarial Stackelberg Value in Quantitative Games

Authors: Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin


Abstract
In this paper, we study the notion of adversarial Stackelberg value for two-player non-zero sum games played on bi-weighted graphs with the mean-payoff and the discounted sum functions. The adversarial Stackelberg value of Player 0 is the largest value that Player 0 can obtain when announcing her strategy to Player 1, who in turn responds with any of his best responses. For the mean-payoff function, we show that the adversarial Stackelberg value is not always achievable but ε-optimal strategies exist. We show how to compute this value and prove that the associated threshold problem is in NP. For the discounted sum payoff function, we draw a link with the target discounted sum problem which explains why the problem is difficult to solve for this payoff function. We also provide solutions to related gap problems.
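
In symbols (our paraphrase of the definition stated above), writing BR₁(σ₀) for the set of best responses of Player 1 to a strategy σ₀ of Player 0 and Payoff₀ for her payoff (mean-payoff or discounted sum), the adversarial Stackelberg value is ASV = sup_{σ₀} inf_{σ₁ ∈ BR₁(σ₀)} Payoff₀(σ₀, σ₁); that this value need not be achieved corresponds to the outer supremum not being attained.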

Cite as

Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin. The Adversarial Stackelberg Value in Quantitative Games. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 127:1-127:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{filiot_et_al:LIPIcs.ICALP.2020.127,
  author =	{Filiot, Emmanuel and Gentilini, Raffaella and Raskin, Jean-Fran\c{c}ois},
  title =	{{The Adversarial Stackelberg Value in Quantitative Games}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{127:1--127:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.127},
  URN =		{urn:nbn:de:0030-drops-125348},
  doi =		{10.4230/LIPIcs.ICALP.2020.127},
  annote =	{Keywords: Non-zero sum games, reactive synthesis, adversarial Stackelberg}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Topology of Local Computing in Networks

Authors: Pierre Fraigniaud and Ami Paz


Abstract
Modeling distributed computing in a way enabling the use of formal methods is a challenge that has been approached from different angles, among which two techniques emerged at the turn of the century: protocol complexes, and directed algebraic topology. In both cases, the considered computational model generally assumes communication via shared objects (typically a shared memory consisting of a collection of read-write registers), or message-passing enabling direct communication between any pair of processes. Our paper is concerned with network computing, where the processes are located at the nodes of a network, and communicate by exchanging messages along the edges of that network (only neighboring processes can communicate directly). Applying the topological approach for verification in network computing is a considerable challenge, mainly because the presence of identifiers assigned to the nodes yields protocol complexes whose size grows exponentially with the size of the underlying network. However, many of the problems studied in this context are of local nature, and their definitions do not depend on the identifiers or on the size of the network. We leverage this independence in order to meet the above challenge, and present local protocol complexes, whose sizes do not depend on the size of the network. As an application of the design of "compacted" protocol complexes, we reformulate the celebrated lower bound of Ω(log^*n) rounds for 3-coloring the n-node ring, in the algebraic topology framework.

Cite as

Pierre Fraigniaud and Ami Paz. The Topology of Local Computing in Networks. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 128:1-128:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fraigniaud_et_al:LIPIcs.ICALP.2020.128,
  author =	{Fraigniaud, Pierre and Paz, Ami},
  title =	{{The Topology of Local Computing in Networks}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{128:1--128:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.128},
  URN =		{urn:nbn:de:0030-drops-125358},
  doi =		{10.4230/LIPIcs.ICALP.2020.128},
  annote =	{Keywords: Distributed computing, distributed graph algorithms, combinatorial topology}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Complexity of Verifying Loop-Free Programs as Differentially Private

Authors: Marco Gaboardi, Kobbi Nissim, and David Purser


Abstract
We study the problem of verifying differential privacy for loop-free programs with probabilistic choice. Programs in this class can be seen as randomized Boolean circuits, which we will use as a formal model to answer two different questions: first, deciding whether a program satisfies a prescribed level of privacy; second, approximating the privacy parameters a program realizes. We show that the problem of deciding whether a program satisfies ε-differential privacy is coNP^#P-complete. In fact, this is the case when either the input domain or the output range of the program is large. Further, we show that deciding whether a program is (ε,δ)-differentially private is coNP^#P-hard, and in coNP^#P for small output domains, but always in coNP^{#P^#P}. Finally, we show that the problem of approximating the level of differential privacy is both NP-hard and coNP-hard. These results complement previous results by Murtagh and Vadhan [Jack Murtagh and Salil P. Vadhan, 2016] showing that deciding the optimal composition of differentially private components is #P-complete, and that approximating the optimal composition of differentially private components is in P.
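
As a concrete (if trivial) instance of the decision question, consider randomized response, a loop-free probabilistic program over a single Boolean input. The Python sketch below, a brute-force toy rather than the coNP^#P procedure studied in the paper (helper names are invented for illustration), computes the exact ε the program realizes by comparing its output distributions on the two neighbouring inputs.

from math import log

# Randomized response: report the true bit with probability 3/4, flip it otherwise.
def output_distribution(secret_bit):
    return {secret_bit: 0.75, 1 - secret_bit: 0.25}

# Exact epsilon realized by the mechanism:
#   max over outputs o and neighbouring inputs x, x' of ln(Pr[M(x)=o] / Pr[M(x')=o]).
def exact_epsilon():
    eps = 0.0
    for x, x_prime in [(0, 1), (1, 0)]:
        dist_x, dist_xp = output_distribution(x), output_distribution(x_prime)
        for o in (0, 1):
            eps = max(eps, log(dist_x[o] / dist_xp[o]))
    return eps

print(exact_epsilon())   # ln(3) ≈ 1.0986, so this toy program is ln(3)-differentially private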

Cite as

Marco Gaboardi, Kobbi Nissim, and David Purser. The Complexity of Verifying Loop-Free Programs as Differentially Private. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 129:1-129:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gaboardi_et_al:LIPIcs.ICALP.2020.129,
  author =	{Gaboardi, Marco and Nissim, Kobbi and Purser, David},
  title =	{{The Complexity of Verifying Loop-Free Programs as Differentially Private}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{129:1--129:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.129},
  URN =		{urn:nbn:de:0030-drops-125362},
  doi =		{10.4230/LIPIcs.ICALP.2020.129},
  annote =	{Keywords: differential privacy, program verification, probabilistic programs}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Logical Characterisation of Hybrid Conformance

Authors: Maciej Gazda and Mohammad Reza Mousavi


Abstract
Logical characterisation of a behavioural equivalence relation precisely specifies the set of formulae that are preserved and reflected by the relation. Such characterisations have been studied extensively for exact semantics on discrete models, such as bisimulations for labelled transition systems and Kripke structures, but to a much lesser extent for approximate relations, in particular in the context of hybrid systems. We present what is, to our knowledge, the first characterisation result for approximate notions of hybrid refinement and hybrid conformance involving tolerance thresholds in both time and value. Since the notion of conformance in this setting is approximate, any characterisation will unavoidably involve a notion of relaxation, denoting how the specification formulae should be relaxed in order to hold for the implementation. We also show that an existing relaxation scheme on Metric Temporal Logic, used for preservation results in this setting, is not tight enough to provide a characterisation of either hybrid conformance or refinement. The characterisation result, while interesting in its own right, paves the way to more applied research, as our notion of hybrid conformance underlies a formal model-based technique for the verification of cyber-physical systems.

Cite as

Maciej Gazda and Mohammad Reza Mousavi. Logical Characterisation of Hybrid Conformance. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 130:1-130:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gazda_et_al:LIPIcs.ICALP.2020.130,
  author =	{Gazda, Maciej and Mousavi, Mohammad Reza},
  title =	{{Logical Characterisation of Hybrid Conformance}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{130:1--130:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.130},
  URN =		{urn:nbn:de:0030-drops-125377},
  doi =		{10.4230/LIPIcs.ICALP.2020.130},
  annote =	{Keywords: Logical Characterisation, Metric Temporal Logic, Conformance, Behavioural Equivalence, Hybrid Systems, Relaxation}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Hrushovski’s Encoding and ω-Categorical CSP Monsters

Authors: Pierre Gillibert, Julius Jonušas, Michael Kompatscher, Antoine Mottet, and Michael Pinsker


Abstract
We produce a class of ω-categorical structures with finite signature by applying a model-theoretic construction - a refinement of an encoding due to Hrushovski - to ω-categorical structures in a possibly infinite signature. We show that the encoded structures retain desirable algebraic properties of the original structures, but that the constraint satisfaction problems (CSPs) associated with these structures can be badly behaved in terms of computational complexity. This method allows us to systematically generate ω-categorical templates whose CSPs are complete for a variety of complexity classes of arbitrarily high complexity, and ω-categorical templates that show that membership in any given complexity class cannot be expressed by a set of identities on the polymorphisms. It moreover enables us to prove that recent results about the relevance of topology on polymorphism clones of ω-categorical structures also apply for CSP templates, i.e., structures in a finite language. Finally, we obtain a concrete algebraic criterion which could constitute a description of the delineation between tractability and NP-hardness in the dichotomy conjecture for first-order reducts of finitely bounded homogeneous structures.

Cite as

Pierre Gillibert, Julius Jonušas, Michael Kompatscher, Antoine Mottet, and Michael Pinsker. Hrushovski’s Encoding and ω-Categorical CSP Monsters. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 131:1-131:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gillibert_et_al:LIPIcs.ICALP.2020.131,
  author =	{Gillibert, Pierre and Jonu\v{s}as, Julius and Kompatscher, Michael and Mottet, Antoine and Pinsker, Michael},
  title =	{{Hrushovski’s Encoding and \omega-Categorical CSP Monsters}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{131:1--131:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.131},
  URN =		{urn:nbn:de:0030-drops-125387},
  doi =		{10.4230/LIPIcs.ICALP.2020.131},
  annote =	{Keywords: Constraint satisfaction problem, complexity, polymorphism, pointwise convergence topology, height 1 identity, \omega-categoricity, orbit growth}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Descriptive Complexity on Non-Polish Spaces II

Authors: Mathieu Hoyrup


Abstract
This article is a study of the descriptive complexity of subsets of represented spaces. Two competing measures of descriptive complexity are available. The first one is topological and measures how complex it is to obtain a set from open sets using boolean operations. The second one measures how complex it is to test membership in the set; we call it symbolic complexity because it measures the complexity of the symbolic representation of the set. While topological and symbolic complexity are equivalent on countably-based spaces, they differ on more general spaces. Our investigation is aimed at explaining this difference and strongly suggests that it is related to the well-known mismatch between topological and sequential aspects of topological spaces.

Cite as

Mathieu Hoyrup. Descriptive Complexity on Non-Polish Spaces II. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 132:1-132:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{hoyrup:LIPIcs.ICALP.2020.132,
  author =	{Hoyrup, Mathieu},
  title =	{{Descriptive Complexity on Non-Polish Spaces II}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{132:1--132:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.132},
  URN =		{urn:nbn:de:0030-drops-125395},
  doi =		{10.4230/LIPIcs.ICALP.2020.132},
  annote =	{Keywords: Represented space, Computable analysis, Descriptive set theory, Scott topology}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Decidability of Time-Bounded Reachability in CTMDPs

Authors: Rupak Majumdar, Mahmoud Salamati, and Sadegh Soudjani


Abstract
We consider the time-bounded reachability problem for continuous-time Markov decision processes. We show that the problem is decidable subject to Schanuel’s conjecture. Our decision procedure relies on the structure of optimal policies and the conditional decidability (under Schanuel’s conjecture) of the theory of reals extended with exponential and trigonometric functions over bounded domains. We further show that any unconditional decidability result would imply unconditional decidability of the bounded continuous Skolem problem, or equivalently, the problem of checking if an exponential polynomial has a non-tangential zero in a bounded interval. We note that the latter problems are also decidable subject to Schanuel’s conjecture, but finding unconditional decision procedures remains a longstanding open problem.

Cite as

Rupak Majumdar, Mahmoud Salamati, and Sadegh Soudjani. On Decidability of Time-Bounded Reachability in CTMDPs. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 133:1-133:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{majumdar_et_al:LIPIcs.ICALP.2020.133,
  author =	{Majumdar, Rupak and Salamati, Mahmoud and Soudjani, Sadegh},
  title =	{{On Decidability of Time-Bounded Reachability in CTMDPs}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{133:1--133:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.133},
  URN =		{urn:nbn:de:0030-drops-125408},
  doi =		{10.4230/LIPIcs.ICALP.2020.133},
  annote =	{Keywords: CTMDP, Time bounded reachability, Continuous Skolem Problem, Schanuel’s Conjecture}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
When Is a Bottom-Up Deterministic Tree Translation Top-Down Deterministic?

Authors: Sebastian Maneth and Helmut Seidl


Abstract
We consider two natural subclasses of deterministic top-down tree-to-tree transducers, namely linear and uniform-copying transducers. For both classes we show that it is decidable whether the translation of a transducer with look-ahead can be realized by a transducer without look-ahead. The transducers constructed in this way may still make use of inspection, i.e., have an additional tree automaton restricting the domain. We provide a second procedure which decides whether inspection can be removed and, if so, constructs an equivalent transducer without inspection. The construction relies on a fixpoint algorithm that determines inspection requirements and on dedicated earliest normal forms for linear as well as uniform-copying transducers, which can be constructed in polynomial time. As a consequence, equivalence of these transducers can be decided in polynomial time. Applying these results to deterministic bottom-up transducers, we obtain that it is decidable whether or not their translations can be realized by deterministic uniform-copying top-down transducers without look-ahead (but with inspection), or with neither look-ahead nor inspection.

Cite as

Sebastian Maneth and Helmut Seidl. When Is a Bottom-Up Deterministic Tree Translation Top-Down Deterministic?. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 134:1-134:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{maneth_et_al:LIPIcs.ICALP.2020.134,
  author =	{Maneth, Sebastian and Seidl, Helmut},
  title =	{{When Is a Bottom-Up Deterministic Tree Translation Top-Down Deterministic?}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{134:1--134:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.134},
  URN =		{urn:nbn:de:0030-drops-125416},
  doi =		{10.4230/LIPIcs.ICALP.2020.134},
  annote =	{Keywords: Top-Down Tree Transducers, Earliest Transformation, Linear Transducers, Uniform-copying Transducers, Removal of Look-ahead, Removal of Inspection}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Implicit Automata in Typed λ-Calculi I: Aperiodicity in a Non-Commutative Logic

Authors: Lê Thành Dũng Nguyễn and Pierre Pradic


Abstract
We give a characterization of star-free languages in a λ-calculus with support for non-commutative affine types (in the sense of linear logic), via the algebraic characterization of the former using aperiodic monoids. When the type system is made commutative, we show that we get regular languages instead. A key ingredient in our approach, one that it shares with higher-order model checking, is the use of Church encodings for inputs and outputs. Our result is, to our knowledge, the first use of non-commutativity in implicit computational complexity.
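
The role of Church encodings can be previewed outside the typed λ-calculus itself. In the Python sketch below, a toy unrelated to the paper's affine type system (all names are invented for illustration), a string over {a,b} is encoded as the function that folds a pair of letter actions over it, and a deterministic automaton for the star-free language Σ*aΣ* is run by supplying its transition functions.

def church(word):
    """Church-style encoding: a word becomes the fold of per-letter actions."""
    def encoded(step_a, step_b, init):
        state = init
        for c in word:
            state = step_a(state) if c == 'a' else step_b(state)
        return state
    return encoded

# Deterministic automaton for "the word contains at least one a" (a star-free language).
step_a = lambda q: 1      # once an 'a' has been read, stay in the accepting state
step_b = lambda q: q      # reading a 'b' does not change the state
accept = lambda q: q == 1

print(accept(church("bbb")(step_a, step_b, 0)))   # False
print(accept(church("bab")(step_a, step_b, 0)))   # True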

Cite as

Lê Thành Dũng Nguyễn and Pierre Pradic. Implicit Automata in Typed λ-Calculi I: Aperiodicity in a Non-Commutative Logic. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 135:1-135:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{nguyen_et_al:LIPIcs.ICALP.2020.135,
  author =	{Nguy\~{ê}n, L\^{e} Th\`{a}nh D\~{u}ng and Pradic, Pierre},
  title =	{{Implicit Automata in Typed \lambda-Calculi I: Aperiodicity in a Non-Commutative Logic}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{135:1--135:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.135},
  URN =		{urn:nbn:de:0030-drops-125426},
  doi =		{10.4230/LIPIcs.ICALP.2020.135},
  annote =	{Keywords: Church encodings, ordered linear types, star-free languages}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Computing Measures of Weak-MSO Definable Sets of Trees

Authors: Damian Niwiński, Marcin Przybyłko, and Michał Skrzypczak


Abstract
This work addresses the problem of computing measures of recognisable sets of infinite trees. An algorithm is provided to compute the probability measure of a tree language recognisable by a weak alternating automaton, or equivalently definable in weak monadic second-order logic. The measure is either the uniform coin-flipping measure or, more generally, a measure generated by a branching stochastic process. The class of tree languages under consideration, although smaller than the class of all regular tree languages, comprises in particular the languages definable in the alternation-free μ-calculus or in the temporal logic CTL. Thus, the new algorithm may enhance the toolbox of probabilistic model checking.

Cite as

Damian Niwiński, Marcin Przybyłko, and Michał Skrzypczak. Computing Measures of Weak-MSO Definable Sets of Trees. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 136:1-136:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{niwinski_et_al:LIPIcs.ICALP.2020.136,
  author =	{Niwi\'{n}ski, Damian and Przyby{\l}ko, Marcin and Skrzypczak, Micha{\l}},
  title =	{{Computing Measures of Weak-MSO Definable Sets of Trees}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{136:1--136:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.136},
  URN =		{urn:nbn:de:0030-drops-125430},
  doi =		{10.4230/LIPIcs.ICALP.2020.136},
  annote =	{Keywords: infinite trees, weak alternating automata, coin-flipping measure}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Finite Sequentiality of Finitely Ambiguous Max-Plus Tree Automata

Authors: Erik Paul


Abstract
We show that the finite sequentiality problem is decidable for finitely ambiguous max-plus tree automata. A max-plus tree automaton is a weighted tree automaton over the max-plus semiring. A max-plus tree automaton is called finitely ambiguous if the number of accepting runs on every tree is bounded by a global constant. The finite sequentiality problem asks whether, for a given max-plus tree automaton, there exist finitely many deterministic max-plus tree automata whose pointwise maximum is equivalent to the given automaton.
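
For orientation on the semiring, the Python sketch below computes the value that a small max-plus word automaton (a one-dimensional stand-in for the tree case, with invented transition weights) assigns to an input: semiring addition is max, semiring multiplication is ordinary addition, and the automaton's value on a word is the maximum accumulated weight over its runs.

NEG_INF = float('-inf')   # the zero element of the max-plus semiring

def automaton_weight(word, initial, final, matrices):
    """Max over all runs of the summed transition weights, plus entry/exit weights."""
    n = len(initial)
    vec = list(initial)
    for letter in word:
        M = matrices[letter]
        vec = [max(vec[k] + M[k][j] for k in range(n)) for j in range(n)]
    return max(v + f for v, f in zip(vec, final))

# Two-state toy automaton over {a, b} with made-up transition weights.
matrices = {
    'a': [[1, 0], [NEG_INF, 2]],
    'b': [[0, NEG_INF], [NEG_INF, 1]],
}
initial = [0, NEG_INF]    # start in state 0 with weight 0
final   = [NEG_INF, 0]    # accept only in state 1

print(automaton_weight("aab", initial, final, matrices))   # 3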

Cite as

Erik Paul. Finite Sequentiality of Finitely Ambiguous Max-Plus Tree Automata. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 137:1-137:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{paul:LIPIcs.ICALP.2020.137,
  author =	{Paul, Erik},
  title =	{{Finite Sequentiality of Finitely Ambiguous Max-Plus Tree Automata}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{137:1--137:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.137},
  URN =		{urn:nbn:de:0030-drops-125447},
  doi =		{10.4230/LIPIcs.ICALP.2020.137},
  annote =	{Keywords: Weighted Tree Automata, Max-Plus Tree Automata, Finite Sequentiality, Decidability, Finite Ambiguity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Skolem-Hardness and Saturation Points in Markov Decision Processes

Authors: Jakob Piribauer and Christel Baier


Abstract
The Skolem problem and the related Positivity problem for linear recurrence sequences are outstanding number-theoretic problems whose decidability has been open for many decades. In this paper, the inherent mathematical difficulty of a series of optimization problems on Markov decision processes (MDPs) is shown by a reduction from the Positivity problem to the associated decision problems; as an immediate consequence, these problems are also at least as hard as the Skolem problem. The optimization problems under consideration are two non-classical variants of the stochastic shortest path problem (SSPP) in terms of expected partial or conditional accumulated weights, the optimization of the conditional value-at-risk for accumulated weights, and two problems addressing the long-run satisfaction of path properties, namely the optimization of long-run probabilities of regular co-safety properties and the model-checking problem of the logic frequency-LTL. To prove the Positivity-hardness, and hence Skolem-hardness, of the latter two problems, a new auxiliary path measure, called weighted long-run frequency, is introduced and the Positivity-hardness of the corresponding decision problem is shown as an intermediate step. For the partial and conditional SSPP on MDPs with non-negative weights and for the optimization of long-run probabilities of constrained reachability properties (a U b), solutions are known that rely on the identification of a bound on the accumulated weight or the number of consecutive visits to certain states, called a saturation point, beyond which optimal schedulers behave memorylessly. In this paper, it is shown that the optimization of the conditional value-at-risk for the classical SSPP and of weighted long-run frequencies on MDPs with non-negative weights can also be solved in pseudo-polynomial time by exploiting the existence of a saturation point. As a consequence, one obtains the decidability of the qualitative model-checking problem of a frequency-LTL formula that is not included in the fragments with known solutions.
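
To ground the number-theoretic side, the Python sketch below (purely illustrative, with invented names) unfolds a linear recurrence sequence and checks for a zero term (Skolem) or a negative term (Positivity) up to a finite bound; the open decidability questions concern all indices rather than any fixed bound, which is exactly what a finite check cannot settle.

def unfold(coeffs, init, n_terms):
    """Linear recurrence u_n = coeffs[0]*u_{n-1} + ... + coeffs[k-1]*u_{n-k}."""
    u = list(init)
    while len(u) < n_terms:
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-len(coeffs):]))))
    return u

def bounded_skolem(coeffs, init, bound):
    """Is there an index n < bound with u_n = 0?  (Only a finite check.)"""
    return any(x == 0 for x in unfold(coeffs, init, bound))

def bounded_positivity(coeffs, init, bound):
    """Is u_n >= 0 for all n < bound?  (Only a finite check.)"""
    return all(x >= 0 for x in unfold(coeffs, init, bound))

# Fibonacci-like example: u_n = u_{n-1} + u_{n-2} with u_0 = 2, u_1 = -1.
print(unfold([1, 1], [2, -1], 8))              # [2, -1, 1, 0, 1, 1, 2, 3]
print(bounded_skolem([1, 1], [2, -1], 8))      # True: u_3 = 0
print(bounded_positivity([1, 1], [2, -1], 8))  # False: u_1 = -1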

Cite as

Jakob Piribauer and Christel Baier. On Skolem-Hardness and Saturation Points in Markov Decision Processes. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 138:1-138:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{piribauer_et_al:LIPIcs.ICALP.2020.138,
  author =	{Piribauer, Jakob and Baier, Christel},
  title =	{{On Skolem-Hardness and Saturation Points in Markov Decision Processes}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{138:1--138:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.138},
  URN =		{urn:nbn:de:0030-drops-125455},
  doi =		{10.4230/LIPIcs.ICALP.2020.138},
  annote =	{Keywords: Markov decision process, Skolem problem, stochastic shortest path, conditional expectation, conditional value-at-risk, model checking, frequency-LTL}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Power of a Single Qubit: Two-Way Quantum Finite Automata and the Word Problem

Authors: Zachary Remscrim


Abstract
The two-way finite automaton with quantum and classical states (2QCFA), defined by Ambainis and Watrous, is a model of quantum computation whose quantum part is extremely limited; however, as they showed, 2QCFA are surprisingly powerful: a 2QCFA with a single qubit can recognize, with bounded error, the language L_{eq} = {a^m b^m : m ∈ ℕ} in expected polynomial time and the language L_{pal} = {w ∈ {a,b}^* : w is a palindrome} in expected exponential time. We further demonstrate the power of 2QCFA by showing that they can recognize the word problems of many groups. In particular, 2QCFA with a single qubit and algebraic number transition amplitudes can recognize, with bounded error, the word problem of any finitely generated virtually abelian group in expected polynomial time, as well as the word problems of a large class of linear groups in expected exponential time. This latter class (properly) includes all groups with context-free word problem. We also exhibit results for 2QCFA with any constant number of qubits. As a corollary, we obtain a direct improvement on the original Ambainis and Watrous result by showing that L_{eq} can be recognized by a 2QCFA with better parameters. As a further corollary, we show that 2QCFA can recognize certain non-context-free languages in expected polynomial time. In a companion paper, we prove matching lower bounds, thereby showing that the class of languages recognizable with bounded error by a 2QCFA in expected subexponential time is properly contained in the class of languages recognizable with bounded error by a 2QCFA in expected exponential time.

Cite as

Zachary Remscrim. The Power of a Single Qubit: Two-Way Quantum Finite Automata and the Word Problem. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 139:1-139:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{remscrim:LIPIcs.ICALP.2020.139,
  author =	{Remscrim, Zachary},
  title =	{{The Power of a Single Qubit: Two-Way Quantum Finite Automata and the Word Problem}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{139:1--139:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.139},
  URN =		{urn:nbn:de:0030-drops-125468},
  doi =		{10.4230/LIPIcs.ICALP.2020.139},
  annote =	{Keywords: finite automata, quantum, word problem of a group}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Hardness Results for Constant-Free Pattern Languages and Word Equations

Authors: Aleksi Saarela


Abstract
We study constant-free versions of the inclusion problem of pattern languages and the satisfiability problem of word equations. The inclusion problem of pattern languages is known to be undecidable for both erasing and nonerasing pattern languages, but decidable for constant-free erasing pattern languages. We prove that it is undecidable for constant-free nonerasing pattern languages. The satisfiability problem of word equations is known to be in PSPACE and NP-hard. We prove that the nonperiodic satisfiability problem of constant-free word equations is NP-hard. Additionally, we prove a polynomial-time reduction from the satisfiability problem of word equations to the problem of deciding whether a given constant-free equation has a solution morphism α such that α(xy) ≠ α(yx) for given variables x and y.
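
As a brute-force illustration of the satisfiability problem (nothing like the PSPACE algorithm mentioned above; all names below are invented for illustration), the Python sketch enumerates substitutions of bounded length for a constant-free equation. The extra condition appearing in the reduction is that some solution morphism α satisfies α(xy) ≠ α(yx); for the toy equation xy = yx no solution can, since the equation itself forces equality.

from itertools import product

def words_up_to(alphabet, max_len):
    """All words over `alphabet` of length 0..max_len, including the empty word."""
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            yield ''.join(tup)

def substitute(pattern, assignment):
    return ''.join(assignment[v] for v in pattern)

def bounded_solutions(lhs, rhs, variables, alphabet='ab', max_len=2):
    """Brute-force search for solution morphisms whose images have length <= max_len."""
    candidates = list(words_up_to(alphabet, max_len))
    for images in product(candidates, repeat=len(variables)):
        alpha = dict(zip(variables, images))
        if substitute(lhs, alpha) == substitute(rhs, alpha):
            yield alpha

# The constant-free equation x y = y x is satisfiable (e.g. x = y = "a") ...
sols = list(bounded_solutions('xy', 'yx', 'xy'))
print(any(s['x'] != '' or s['y'] != '' for s in sols))                     # True

# ... but no solution meets the requirement alpha(xy) != alpha(yx).
print(any(substitute('xy', s) != substitute('yx', s) for s in sols))       # False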

Cite as

Aleksi Saarela. Hardness Results for Constant-Free Pattern Languages and Word Equations. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 140:1-140:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{saarela:LIPIcs.ICALP.2020.140,
  author =	{Saarela, Aleksi},
  title =	{{Hardness Results for Constant-Free Pattern Languages and Word Equations}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{140:1--140:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.140},
  URN =		{urn:nbn:de:0030-drops-125472},
  doi =		{10.4230/LIPIcs.ICALP.2020.140},
  annote =	{Keywords: Combinatorics on words, pattern language, word equation}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Bisimulation Equivalence of Pushdown Automata Is Ackermann-Complete

Authors: Wenbo Zhang, Qiang Yin, Huan Long, and Xian Xu


Abstract
Deciding bisimulation equivalence of two pushdown automata is one of the most fundamental problems in formal verification. Though Sénizergues established decidability of this problem in 1998, it has taken a long time to understand its complexity: the problem was proven to be non-elementary in 2013, and only recently, Jančar and Schmitz showed that it has an Ackermann upper bound. We improve the lower bound to Ackermann-hard, and thus close the complexity gap.

Cite as

Wenbo Zhang, Qiang Yin, Huan Long, and Xian Xu. Bisimulation Equivalence of Pushdown Automata Is Ackermann-Complete. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 141:1-141:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.ICALP.2020.141,
  author =	{Zhang, Wenbo and Yin, Qiang and Long, Huan and Xu, Xian},
  title =	{{Bisimulation Equivalence of Pushdown Automata Is Ackermann-Complete}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{141:1--141:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.141},
  URN =		{urn:nbn:de:0030-drops-125482},
  doi =		{10.4230/LIPIcs.ICALP.2020.141},
  annote =	{Keywords: PDA, Bisimulation, Equivalence checking}
}
