LIPIcs, Volume 229

49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)




Event

ICALP 2022, July 4-8, 2022, Paris, France

Editors

Mikołaj Bojańczyk
  • University of Warsaw, Poland
Emanuela Merelli
  • University of Camerino, Italy
David P. Woodruff
  • Carnegie Mellon University, PA, USA

Publication Details

  • published at: 2022-06-28
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-235-8
  • DBLP: db/conf/icalp/icalp2022

Documents

Complete Volume
LIPIcs, Volume 229, ICALP 2022, Complete Volume

Authors: Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff


Abstract
LIPIcs, Volume 229, ICALP 2022, Complete Volume

Cite as

49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 1-2516, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@Proceedings{bojanczyk_et_al:LIPIcs.ICALP.2022,
  title =	{{LIPIcs, Volume 229, ICALP 2022, Complete Volume}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{1--2516},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022},
  URN =		{urn:nbn:de:0030-drops-163400},
  doi =		{10.4230/LIPIcs.ICALP.2022},
  annote =	{Keywords: LIPIcs, Volume 229, ICALP 2022, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 0:i-0:xxxvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2022.0,
  author =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{0:i--0:xxxvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.0},
  URN =		{urn:nbn:de:0030-drops-163417},
  doi =		{10.4230/LIPIcs.ICALP.2022.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Invited Talk
Towards a Theory of Algorithmic Proof Complexity (Invited Talk)

Authors: Albert Atserias


Abstract
A possibly unexpected by-product of the mathematical study of the lengths of proofs, as is done in the field of propositional proof complexity, is, I claim, that it may lead to new polynomial-time algorithms. To explain this, I will first recall the origins of proof complexity as a field, and then explain why some of the recent progress in it could lead to some new algorithms.

Cite as

Albert Atserias. Towards a Theory of Algorithmic Proof Complexity (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{atserias:LIPIcs.ICALP.2022.1,
  author =	{Atserias, Albert},
  title =	{{Towards a Theory of Algorithmic Proof Complexity}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{1:1--1:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.1},
  URN =		{urn:nbn:de:0030-drops-163423},
  doi =		{10.4230/LIPIcs.ICALP.2022.1},
  annote =	{Keywords: proof complexity, logic, computational complexity}
}
Invited Talk
Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning (Invited Talk)

Authors: Constantinos Daskalakis


Abstract
Machine Learning has recently made significant advances in challenges such as speech and image recognition, automatic translation, and text generation, much of that progress being fueled by the success of gradient descent-based optimization methods in computing local optima of non-convex objectives. From robustifying machine learning models against adversarial attacks to causal inference, training generative models, multi-robot interactions, and learning in strategic environments, many outstanding challenges in Machine Learning lie at its interface with Game Theory. On this front, however, gradient-descent based optimization methods have been less successful. Here, the role of single-objective optimization is played by equilibrium computation, but gradient-descent based methods commonly fail to find equilibria, and even computing local approximate equilibria has remained daunting. We shed light on these challenges through a combination of learning-theoretic, complexity-theoretic, game-theoretic and topological techniques, presenting obstacles and opportunities for Machine Learning and Game Theory going forward. I will assume no Deep Learning background for this talk and present results from joint works with S. Skoulakis and M. Zampetakis [Daskalakis et al., 2021] as well as with N. Golowich and K. Zhang [Daskalakis et al., 2022].

Cite as

Constantinos Daskalakis. Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{daskalakis:LIPIcs.ICALP.2022.2,
  author =	{Daskalakis, Constantinos},
  title =	{{Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.2},
  URN =		{urn:nbn:de:0030-drops-163431},
  doi =		{10.4230/LIPIcs.ICALP.2022.2},
  annote =	{Keywords: Deep Learning, Multi-Agent (Reinforcement) Learning, Game Theory, Nonconvex Optimization, PPAD}
}
Invited Talk
Some New (And Old) Results on Contention Resolution (Invited Talk)

Authors: Leslie Ann Goldberg


Abstract
This is an extended abstract of my talk at ICALP 2022, based on joint work with John Lapinskas.

Cite as

Leslie Ann Goldberg. Some New (And Old) Results on Contention Resolution (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 3:1-3:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{goldberg:LIPIcs.ICALP.2022.3,
  author =	{Goldberg, Leslie Ann},
  title =	{{Some New (And Old) Results on Contention Resolution}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{3:1--3:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.3},
  URN =		{urn:nbn:de:0030-drops-163444},
  doi =		{10.4230/LIPIcs.ICALP.2022.3},
  annote =	{Keywords: contention resolution, multiple access channel, randomised algorithms}
}
Invited Talk
The Manifold Joys of Sampling (Invited Talk)

Authors: Yin Tat Lee and Santosh S. Vempala


Abstract
We survey recent progress and many open questions in the field of sampling high-dimensional distributions, with specific focus on sampling with non-Euclidean metrics.

Cite as

Yin Tat Lee and Santosh S. Vempala. The Manifold Joys of Sampling (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{lee_et_al:LIPIcs.ICALP.2022.4,
  author =	{Lee, Yin Tat and Vempala, Santosh S.},
  title =	{{The Manifold Joys of Sampling}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.4},
  URN =		{urn:nbn:de:0030-drops-163459},
  doi =		{10.4230/LIPIcs.ICALP.2022.4},
  annote =	{Keywords: Sampling, Diffusion, Optimization, High Dimension}
}
Invited Talk
Streaming and Sketching Complexity of CSPs: A Survey (Invited Talk)

Authors: Madhu Sudan


Abstract
In this survey we describe progress over the last decade or so in understanding the complexity of solving constraint satisfaction problems (CSPs) approximately in the streaming and sketching models of computation. After surveying some of the results, we give some sketches of the proofs and, in particular, try to explain why there is a tight dichotomy result for sketching algorithms working in the subpolynomial space regime.

Cite as

Madhu Sudan. Streaming and Sketching Complexity of CSPs: A Survey (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{sudan:LIPIcs.ICALP.2022.5,
  author =	{Sudan, Madhu},
  title =	{{Streaming and Sketching Complexity of CSPs: A Survey}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.5},
  URN =		{urn:nbn:de:0030-drops-163460},
  doi =		{10.4230/LIPIcs.ICALP.2022.5},
  annote =	{Keywords: Streaming algorithms, Sketching algorithms, Dichotomy, Communication Complexity}
}
Invited Talk
A Brief Tour in Twin-Width (Invited Talk)

Authors: Stéphan Thomassé


Abstract
This is an introduction to the notion of twin-width, with emphasis on how it interacts with first-order model checking and enumerative combinatorics. Even though approximating twin-width remains a challenge in general graphs, it is now well understood for ordered graphs, where bounded twin-width coincides with many other complexity gaps. For instance, classes of graphs with linear FO-model checking, small classes, or NIP classes are exactly bounded twin-width classes. Some other applications of twin-width are also presented.

Cite as

Stéphan Thomassé. A Brief Tour in Twin-Width (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 6:1-6:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{thomasse:LIPIcs.ICALP.2022.6,
  author =	{Thomass\'{e}, St\'{e}phan},
  title =	{{A Brief Tour in Twin-Width}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{6:1--6:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.6},
  URN =		{urn:nbn:de:0030-drops-163473},
  doi =		{10.4230/LIPIcs.ICALP.2022.6},
  annote =	{Keywords: Twin-width, matrices, ordered graphs, enumerative combinatorics, model theory, algorithms, computational complexity, Ramsey theory}
}
Track A: Algorithms, Complexity and Games
Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems

Authors: Amir Abboud, Vincent Cohen-Addad, Euiwoong Lee, and Pasin Manurangsi


Abstract
We study several questions related to diversifying search results. We give improved approximation algorithms in each of the following problems, together with some lower bounds. 1) We give a polynomial-time approximation scheme (PTAS) for a diversified search ranking problem [Nikhil Bansal et al., 2010] whose objective is to minimize the discounted cumulative gain. Our PTAS runs in time n^{2^O(log(1/ε)/ε)} ⋅ m^O(1) where n denotes the number of elements in the database and m denotes the number of constraints. Complementing this result, we show that no PTAS can run in time f(ε) ⋅ (nm)^{2^o(1/ε)} assuming Gap-ETH, and therefore our running time is nearly tight. Both our upper and lower bounds answer open questions from [Nikhil Bansal et al., 2010]. 2) We next consider the Max-Sum Dispersion problem, whose objective is to select k out of n elements from a database that maximize the dispersion, which is defined as the sum of the pairwise distances under a given metric. We give a quasipolynomial-time approximation scheme (QPTAS) for the problem which runs in time n^{O_ε(log n)}. This improves upon previously known polynomial-time algorithms with approximation ratio 0.5 [Refael Hassin et al., 1997; Allan Borodin et al., 2017]. Furthermore, we observe that reductions from previous work rule out approximation schemes that run in n^õ_ε(log n) time assuming ETH. 3) Finally, we consider a generalization of Max-Sum Dispersion called Max-Sum Diversification. In addition to the sum of pairwise distances, the objective also includes another function f. For a monotone submodular function f, we give a quasipolynomial-time algorithm with approximation ratio arbitrarily close to (1-1/e). This improves upon the best polynomial-time algorithm, which has approximation ratio 0.5 [Allan Borodin et al., 2017]. Furthermore, the (1-1/e) factor is also tight, as achieving a better-than-(1-1/e) approximation is NP-hard [Uriel Feige, 1998].
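
For intuition, the 0.5-approximation baseline referenced above (the classical greedy algorithm of Hassin et al.) can be stated in a few lines. The sketch below is a generic rendering of that classical greedy, not the QPTAS of this paper; the distance oracle dist and the assumption that k is even are illustrative.

import itertools

def greedy_max_sum_dispersion(points, k, dist):
    # Classical 0.5-approximation for Max-Sum Dispersion (k assumed even):
    # repeatedly add the farthest remaining pair of elements.
    selected, remaining = set(), set(points)
    for _ in range(k // 2):
        u, v = max(itertools.combinations(remaining, 2), key=lambda p: dist(*p))
        selected.update((u, v))
        remaining.difference_update((u, v))
    return selected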

Cite as

Amir Abboud, Vincent Cohen-Addad, Euiwoong Lee, and Pasin Manurangsi. Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2022.7,
  author =	{Abboud, Amir and Cohen-Addad, Vincent and Lee, Euiwoong and Manurangsi, Pasin},
  title =	{{Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.7},
  URN =		{urn:nbn:de:0030-drops-163481},
  doi =		{10.4230/LIPIcs.ICALP.2022.7},
  annote =	{Keywords: Approximation Algorithms, Complexity, Data Mining, Diversification}
}
Track A: Algorithms, Complexity and Games
Round-Optimal Lattice-Based Threshold Signatures, Revisited

Authors: Shweta Agrawal, Damien Stehlé, and Anshu Yadav


Abstract
Threshold signature schemes enable distribution of the signature issuing capability to multiple users, to mitigate the threat of signing key compromise. Though a classic primitive, these signatures have witnessed a surge of interest in recent times due to their relevance to modern applications like blockchains and cryptocurrencies. In this work, we study round-optimal threshold signatures in the post-quantum regime and improve the only known lattice-based construction by Boneh et al. [CRYPTO'18] as follows: - Efficiency. We reduce the amount of noise flooding used in the construction from 2^Ω(λ) down to √Q, where Q is the bound on the number of generated signatures and λ is the security parameter. By using lattice hardness assumptions over polynomial rings, this allows us to decrease the signature bit-lengths from Õ(λ³) to Õ(λ), bringing them significantly closer to practice. Our improvement relies on a careful analysis using Rényi divergence rather than statistical distance in the security proof. - Instantiation. The construction of Boneh et al. requires a standard signature scheme to be evaluated homomorphically. To instantiate this, we provide a homomorphism-friendly variant of Lyubashevsky’s signature [EUROCRYPT '12] which achieves low circuit depth by being "rejection-free" and uses an optimal, moderate noise flooding of √Q, matching the above. - Towards Adaptive Security. The construction of Boneh et al. satisfies only selective security, where all the corrupted parties must be announced before any signing query is made. We improve this in two ways: in the Random Oracle Model, we obtain partial adaptivity where signing queries can be made before the corrupted parties are announced but the set of corrupted parties must be announced all at once. In the standard model, we obtain full adaptivity, where parties can be corrupted at any time, but this construction is in a weaker pre-processing model where signers must be provided correlated randomness of length proportional to the number of signatures, in an offline preprocessing phase.

Cite as

Shweta Agrawal, Damien Stehlé, and Anshu Yadav. Round-Optimal Lattice-Based Threshold Signatures, Revisited. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{agrawal_et_al:LIPIcs.ICALP.2022.8,
  author =	{Agrawal, Shweta and Stehl\'{e}, Damien and Yadav, Anshu},
  title =	{{Round-Optimal Lattice-Based Threshold Signatures, Revisited}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.8},
  URN =		{urn:nbn:de:0030-drops-163491},
  doi =		{10.4230/LIPIcs.ICALP.2022.8},
  annote =	{Keywords: Post-Quantum Cryptography, Lattices, Threshold Signatures}
}
Track A: Algorithms, Complexity and Games
Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras

Authors: Josh Alman and Dean Hirsch


Abstract
We design the first efficient sensitivity oracles and dynamic algorithms for a variety of parameterized problems. Our main approach is to modify the algebraic coding technique from static parameterized algorithm design, which had not previously been used in a dynamic context. We particularly build off of the "extensor coding" method of Brand, Dell and Husfeldt [STOC'18], employing properties of the exterior algebra over different fields. For the k-Path detection problem for directed graphs, it is known that no efficient dynamic algorithm exists (under popular assumptions from fine-grained complexity). We circumvent this by designing an efficient sensitivity oracle, which preprocesses a directed graph on n vertices in 2^k poly(k) n^{ω+o(1)} time, such that, given 𝓁 updates (mixing edge insertions and deletions, and vertex deletions) to that input graph, it can decide in time 𝓁² 2^k poly(k) and with high probability, whether the updated graph contains a path of length k. We also give a deterministic sensitivity oracle requiring 4^k poly(k) n^{ω+o(1)} preprocessing time and 𝓁² 2^{ω k + o(k)} query time, and obtain a randomized sensitivity oracle for the task of approximately counting the number of k-paths. For k-Path detection in undirected graphs, we obtain a randomized sensitivity oracle with O(1.66^k n³) preprocessing time and O(𝓁³ 1.66^k) query time, and a better bound for undirected bipartite graphs. In addition, we present the first fully dynamic algorithms for a variety of problems: k-Partial Cover, m-Set k-Packing, t-Dominating Set, d-Dimensional k-Matching, and Exact k-Partial Cover. For example, for k-Partial Cover we show a randomized dynamic algorithm with 2^k poly(k) polylog(n) update time, and a deterministic dynamic algorithm with 4^k poly(k) polylog(n) update time. Finally, we show how our techniques can be adapted to deal with natural variants on these problems where additional constraints are imposed on the solutions.

Cite as

Josh Alman and Dean Hirsch. Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 9:1-9:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{alman_et_al:LIPIcs.ICALP.2022.9,
  author =	{Alman, Josh and Hirsch, Dean},
  title =	{{Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{9:1--9:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.9},
  URN =		{urn:nbn:de:0030-drops-163504},
  doi =		{10.4230/LIPIcs.ICALP.2022.9},
  annote =	{Keywords: sensitivity oracles, k-path, dynamic algorithms, parameterized algorithms, set packing, partial cover, exterior algebra, extensor, algebraic algorithms}
}
Track A: Algorithms, Complexity and Games
Low-Degree Polynomials Extract From Local Sources

Authors: Omar Alrabiah, Eshan Chattopadhyay, Jesse Goodman, Xin Li, and João Ribeiro


Abstract
We continue a line of work on extracting random bits from weak sources that are generated by simple processes. We focus on the model of locally samplable sources, where each bit in the source depends on a small number of (hidden) uniformly random input bits. Also known as local sources, this model was introduced by De and Watson (TOCT 2012) and Viola (SICOMP 2014), and is closely related to sources generated by AC⁰ circuits and bounded-width branching programs. In particular, extractors for local sources also work for sources generated by these classical computational models. Despite being introduced a decade ago, little progress has been made on improving the entropy requirement for extracting from local sources. The current best explicit extractors require entropy n^{1/2}, and follow via a reduction to affine extractors. To start, we prove a barrier showing that one cannot hope to improve this entropy requirement via a black-box reduction of this form. In particular, new techniques are needed. In our main result, we seek to answer whether low-degree polynomials (over 𝔽₂) hold potential for breaking this barrier. We answer this question in the positive, and fully characterize the power of low-degree polynomials as extractors for local sources. More precisely, we show that a random degree-r polynomial is a low-error extractor for n-bit local sources with min-entropy Ω(r(n log n)^{1/r}), and we show that this is tight. Our result leverages several new ingredients, which may be of independent interest. Our existential result relies on a new reduction from local sources to a more structured family, known as local non-oblivious bit-fixing sources. To show its tightness, we prove a "local version" of a structural result by Cohen and Tal (RANDOM 2015), which relies on a new "low-weight" Chevalley-Warning theorem.
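
As a toy illustration of the extractor family characterized above (not the paper's analysis), the sketch below samples a uniformly random degree-r polynomial over 𝔽₂ and evaluates it on a source sample; enumerating all monomials of degree at most r is only feasible for small n and r.

import itertools, random

def random_low_degree_poly(n, r, rng=random):
    # A uniformly random n-variate polynomial over GF(2) of degree at most r,
    # represented by the set of monomials (tuples of variable indices) with coefficient 1.
    monomials = [m for d in range(r + 1) for m in itertools.combinations(range(n), d)]
    return {m for m in monomials if rng.getrandbits(1)}

def evaluate(poly, x):
    # Evaluate on a bit vector x in {0,1}^n; the empty monomial is the constant term.
    return sum(all(x[i] for i in m) for m in poly) % 2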

Cite as

Omar Alrabiah, Eshan Chattopadhyay, Jesse Goodman, Xin Li, and João Ribeiro. Low-Degree Polynomials Extract From Local Sources. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{alrabiah_et_al:LIPIcs.ICALP.2022.10,
  author =	{Alrabiah, Omar and Chattopadhyay, Eshan and Goodman, Jesse and Li, Xin and Ribeiro, Jo\~{a}o},
  title =	{{Low-Degree Polynomials Extract From Local Sources}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.10},
  URN =		{urn:nbn:de:0030-drops-163519},
  doi =		{10.4230/LIPIcs.ICALP.2022.10},
  annote =	{Keywords: Randomness extractors, local sources, samplable sources, AC⁰ circuits, branching programs, low-degree polynomials, Chevalley-Warning}
}
Track A: Algorithms, Complexity and Games
Decremental Matching in General Graphs

Authors: Sepehr Assadi, Aaron Bernstein, and Aditi Dudeja


Abstract
We consider the problem of maintaining an approximate maximum integral matching in a dynamic graph G, while the adversary makes changes to the edges of the graph. The goal is to maintain a (1+ε)-approximate maximum matching for constant ε > 0, while minimizing the update time. In the fully dynamic setting, where both edge insertions and deletions are allowed, Gupta and Peng (see [Manoj Gupta and Richard Peng, 2013]) gave an algorithm for this problem with an update time of O(√m/ε²). Motivated by the fact that the O_ε(√m) barrier is hard to overcome (see Henzinger, Krinninger, Nanongkai, and Saranurak [Henzinger et al., 2015]; Kopelowitz, Pettie, and Porat [Kopelowitz et al., 2016]), we study this problem in the decremental model, where the adversary is only allowed to delete edges. Recently, Bernstein, Probst-Gutenberg, and Saranurak (see [Bernstein et al., 2020]) gave an O(poly(log n/ε)) update time decremental algorithm for this problem in bipartite graphs. However, beating O(√m) update time remained an open problem for general graphs. In this paper, we bridge the gap between bipartite and general graphs, by giving an O_ε(poly(log n)) update time algorithm that maintains a (1+ε)-approximate maximum integral matching under adversarial deletions. Our algorithm is randomized, but works against an adaptive adversary. Together with the work of Grandoni, Leonardi, Sankowski, Schwiegelshohn, and Solomon [Fabrizio Grandoni et al., 2019] who give an O_ε(1) update time algorithm for general graphs in the incremental (insertion-only) model, our result essentially completes the picture for partially dynamic matching.

Cite as

Sepehr Assadi, Aaron Bernstein, and Aditi Dudeja. Decremental Matching in General Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 11:1-11:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{assadi_et_al:LIPIcs.ICALP.2022.11,
  author =	{Assadi, Sepehr and Bernstein, Aaron and Dudeja, Aditi},
  title =	{{Decremental Matching in General Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{11:1--11:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.11},
  URN =		{urn:nbn:de:0030-drops-163528},
  doi =		{10.4230/LIPIcs.ICALP.2022.11},
  annote =	{Keywords: Dynamic algorithms, matching, primal-dual algorithms}
}
Track A: Algorithms, Complexity and Games
Near-Optimal Algorithms for Stochastic Online Bin Packing

Authors: Nikhil Ayyadevara, Rajni Dabas, Arindam Khan, and K. V. N. Sreenivas


Abstract
We study the online bin packing problem under two stochastic settings. In the bin packing problem, we are given n items with sizes in (0,1] and the goal is to pack them into the minimum number of unit-sized bins. First, we study bin packing under the i.i.d. model, where item sizes are sampled independently and identically from a distribution in (0,1]. Both the distribution and the total number of items are unknown. The items arrive one by one, their sizes are revealed upon arrival, and they must be packed immediately and irrevocably in bins of size 1. We provide a simple meta-algorithm that takes an offline α-asymptotic approximation algorithm and provides a polynomial-time (α + ε)-competitive algorithm for online bin packing under the i.i.d. model, where ε > 0 is a small constant. Using the AFPTAS for offline bin packing, we thus provide a linear time (1+ε)-competitive algorithm for online bin packing under the i.i.d. model, thus settling the problem. We then study the random-order model, where an adversary specifies the items, but the order of arrival of items is drawn uniformly at random from the set of all permutations of the items. Kenyon’s seminal result [SODA'96] showed that the Best-Fit algorithm has a competitive ratio of at most 3/2 in the random-order model, and conjectured the ratio to be ≈ 1.15. However, it has been a long-standing open problem to break the barrier of 3/2 even for special cases. Recently, Albers et al. [Algorithmica'21] showed an improvement to 5/4 competitive ratio in the special case when all the item sizes are greater than 1/3. For this special case, we settle the analysis by showing that Best-Fit has a competitive ratio of 1. We also make further progress by breaking the barrier of 3/2 for the 3-Partition problem, a notoriously hard special case of bin packing, where all item sizes lie in (1/4,1/2].
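
For reference, the Best-Fit rule analyzed in the random-order results above is the classical online heuristic sketched below; this is a generic textbook rendering, not the meta-algorithm for the i.i.d. model described in the paper.

def best_fit(items):
    # Online Best-Fit: place each arriving item into the feasible bin with the
    # least remaining capacity; open a new unit-sized bin if no bin fits it.
    bins = []  # remaining capacities of open bins
    for size in items:
        feasible = [i for i, cap in enumerate(bins) if cap >= size]
        if feasible:
            i = min(feasible, key=lambda j: bins[j])
            bins[i] -= size
        else:
            bins.append(1.0 - size)
    return len(bins)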

Cite as

Nikhil Ayyadevara, Rajni Dabas, Arindam Khan, and K. V. N. Sreenivas. Near-Optimal Algorithms for Stochastic Online Bin Packing. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{ayyadevara_et_al:LIPIcs.ICALP.2022.12,
  author =	{Ayyadevara, Nikhil and Dabas, Rajni and Khan, Arindam and Sreenivas, K. V. N.},
  title =	{{Near-Optimal Algorithms for Stochastic Online Bin Packing}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.12},
  URN =		{urn:nbn:de:0030-drops-163532},
  doi =		{10.4230/LIPIcs.ICALP.2022.12},
  annote =	{Keywords: Bin Packing, 3-Partition Problem, Online Algorithms, Random Order Arrival, IID model, Best-Fit Algorithm}
}
Track A: Algorithms, Complexity and Games
Competitive Vertex Recoloring

Authors: Yossi Azar, Chay Machluf, Boaz Patt-Shamir, and Noam Touitou


Abstract
Motivated by placement of jobs in physical machines, we introduce and analyze the problem of online recoloring, or online disengagement. In this problem, we are given a set of n weighted vertices and a k-coloring of the vertices (vertices represent jobs, and colors represent physical machines). Edges, representing conflicts between jobs, are inserted in an online fashion. After every edge insertion, the algorithm must output a proper k-coloring of the vertices. The cost of a recoloring is the sum of weights of vertices whose color changed. Our aim is to minimize the competitive ratio of the algorithm, i.e., the ratio between the cost paid by the online algorithm and the cost paid by an optimal, offline algorithm. We consider a couple of polynomially-solvable coloring variants. Specifically, for 2-coloring bipartite graphs we present an O(log n)-competitive deterministic algorithm and an Ω(log n) lower bound on the competitive ratio of randomized algorithms. For (Δ+1)-coloring, we present tight bounds of Θ(Δ) and Θ(logΔ) on the competitive ratios of deterministic and randomized algorithms, respectively (where Δ denotes the maximum degree). We also consider a dynamic case which allows edge deletions as well as insertions. All our algorithms are applicable to the case where vertices are weighted and the cost of recoloring a vertex is its weight. All our lower bounds hold even in the unweighted case.
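
To make the online model concrete, the toy routine below maintains a proper coloring under edge insertions by greedily recoloring one endpoint whenever a conflict arises (assuming num_colors ≥ Δ+1 so a free color always exists). It only illustrates the setting; it is not one of the competitive algorithms of the paper, and it ignores vertex weights.

def insert_edge(colors, adj, u, v, num_colors):
    # Insert edge (u, v) into the adjacency dict adj; if the coloring becomes
    # improper, recolor u with the smallest color unused by its neighbours.
    # Returns the number of recolored vertices (0 or 1).
    adj[u].add(v)
    adj[v].add(u)
    if colors[u] != colors[v]:
        return 0
    used = {colors[w] for w in adj[u]}
    colors[u] = next(c for c in range(num_colors) if c not in used)
    return 1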

Cite as

Yossi Azar, Chay Machluf, Boaz Patt-Shamir, and Noam Touitou. Competitive Vertex Recoloring. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 13:1-13:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{azar_et_al:LIPIcs.ICALP.2022.13,
  author =	{Azar, Yossi and Machluf, Chay and Patt-Shamir, Boaz and Touitou, Noam},
  title =	{{Competitive Vertex Recoloring}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{13:1--13:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.13},
  URN =		{urn:nbn:de:0030-drops-163542},
  doi =		{10.4230/LIPIcs.ICALP.2022.13},
  annote =	{Keywords: coloring with recourse, anti-affinity constraints}
}
Track A: Algorithms, Complexity and Games
Smoothed Analysis of the Komlós Conjecture

Authors: Nikhil Bansal, Haotian Jiang, Raghu Meka, Sahil Singla, and Makrand Sinha


Abstract
The well-known Komlós conjecture states that given n vectors in ℝ^d with Euclidean norm at most one, there always exists a ± 1 coloring such that the 𝓁_∞ norm of the signed-sum vector is a constant independent of n and d. We prove this conjecture in a smoothed analysis setting where the vectors are perturbed by adding a small Gaussian noise and when the number of vectors n = ω(d log d). The dependence of n on d is the best possible even in a completely random setting. Our proof relies on a weighted second moment method, where instead of considering uniformly random colorings, we apply the second moment method on an implicit distribution on colorings obtained by applying the Gram-Schmidt walk algorithm to a suitable set of vectors. The main technical idea is to use various properties of these colorings, including subgaussianity, to control the second moment.
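
For reference, the conjecture discussed above can be stated formally as follows (this merely restates the first sentence of the abstract):

\[
  \exists K > 0 \;\; \forall n, d \in \mathbb{N} \;\; \forall v_1, \dots, v_n \in \mathbb{R}^d \text{ with } \|v_i\|_2 \le 1:
  \qquad
  \min_{\varepsilon \in \{-1,+1\}^n} \Bigl\| \sum_{i=1}^n \varepsilon_i v_i \Bigr\|_\infty \le K.
\]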

Cite as

Nikhil Bansal, Haotian Jiang, Raghu Meka, Sahil Singla, and Makrand Sinha. Smoothed Analysis of the Komlós Conjecture. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 14:1-14:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{bansal_et_al:LIPIcs.ICALP.2022.14,
  author =	{Bansal, Nikhil and Jiang, Haotian and Meka, Raghu and Singla, Sahil and Sinha, Makrand},
  title =	{{Smoothed Analysis of the Koml\'{o}s Conjecture}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{14:1--14:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.14},
  URN =		{urn:nbn:de:0030-drops-163556},
  doi =		{10.4230/LIPIcs.ICALP.2022.14},
  annote =	{Keywords: Koml\'{o}s conjecture, smoothed analysis, weighted second moment method, subgaussian coloring}
}
Track A: Algorithms, Complexity and Games
Minimum+1 (s,t)-cuts and Dual Edge Sensitivity Oracle

Authors: Surender Baswana, Koustav Bhanja, and Abhyuday Pandey


Abstract
Let G be a directed multi-graph on n vertices and m edges with a designated source vertex s and a designated sink vertex t. We study the (s,t)-cuts of capacity minimum+1 and, as an important application of them, give a solution to the dual edge sensitivity for (s,t)-mincuts - reporting the (s,t)-mincut upon failure or addition of any pair of edges. Picard and Queyranne [Mathematical Programming Studies, 13(1):8-16, 1980] showed that there exists a directed acyclic graph (DAG) that compactly stores all minimum (s,t)-cuts of G. This structure also acts as an oracle for the single edge sensitivity of minimum (s,t)-cut. Dinitz and Nutov [STOC, pages 509-518, 1995] showed that there exists an 𝒪(n) size 2-level cactus model that stores all global cuts of capacity minimum+1. However, for minimum+1 (s,t)-cuts, no such compact structures exist to date. We present the following structural and algorithmic results on minimum+1 (s,t)-cuts. 1) There exists a pair of DAGs of size O(m) that compactly store all minimum+1 (s,t)-cuts of G. Each minimum+1 (s,t)-cut appears as an (s,t)-cut in one of the two DAGs and is 3-transversal - it intersects any path in the DAG at most thrice. 2) There exists an O(n²) size data structure that, given a pair of vertices {u,v} which are not separated by an (s,t)-mincut, can determine in 𝒪(1) time if there exists a minimum+1 (s,t)-cut, say (A,B), such that {s,u} ⊆ A and {v,t} ⊆ B; the corresponding cut can be reported in 𝒪(|B|) time. 3) There exists an O(n²) size data structure that solves the dual edge sensitivity problem for (s,t)-mincuts. It takes 𝒪(1) time to report the value of a resulting (s,t)-mincut (A,B) and 𝒪(|B|) time to report the cut. 4) For the data structure problems addressed in (2) and (3) above, we also provide a matching conditional lower bound. We establish a close relationship among three seemingly unrelated problems – the all-pairs directed reachability problem, the dual edge sensitivity problem for (s,t)-mincuts, and 2 × 2 maximum flow. Assuming the directed reachability hypothesis, this leads to Ω(n²) lower bounds on the space for the latter two problems.

Cite as

Surender Baswana, Koustav Bhanja, and Abhyuday Pandey. Minimum+1 (s,t)-cuts and Dual Edge Sensitivity Oracle. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 15:1-15:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{baswana_et_al:LIPIcs.ICALP.2022.15,
  author =	{Baswana, Surender and Bhanja, Koustav and Pandey, Abhyuday},
  title =	{{Minimum+1 (s,t)-cuts and Dual Edge Sensitivity Oracle}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{15:1--15:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.15},
  URN =		{urn:nbn:de:0030-drops-163566},
  doi =		{10.4230/LIPIcs.ICALP.2022.15},
  annote =	{Keywords: mincut, maxflow, fault tolerant}
}
Track A: Algorithms, Complexity and Games
Counting and Enumerating Optimum Cut Sets for Hypergraph k-Partitioning Problems for Fixed k

Authors: Calvin Beideman, Karthekeyan Chandrasekaran, and Weihang Wang


Abstract
We consider the problem of enumerating optimal solutions for two hypergraph k-partitioning problems - namely, Hypergraph-k-Cut and Minmax-Hypergraph-k-Partition. The input in hypergraph k-partitioning problems is a hypergraph G = (V, E) with positive hyperedge costs along with a fixed positive integer k. The goal is to find a partition of V into k non-empty parts (V₁, V₂, …, V_k) - known as a k-partition - so as to minimize an objective of interest. 1) If the objective of interest is the maximum cut value of the parts, then the problem is known as Minmax-Hypergraph-k-Partition. A subset of hyperedges is a minmax-k-cut-set if it is the subset of hyperedges crossing an optimum k-partition for Minmax-Hypergraph-k-Partition. 2) If the objective of interest is the total cost of hyperedges crossing the k-partition, then the problem is known as Hypergraph-k-Cut. A subset of hyperedges is a min-k-cut-set if it is the subset of hyperedges crossing an optimum k-partition for Hypergraph-k-Cut. We give the first polynomial bound on the number of minmax-k-cut-sets and a polynomial-time algorithm to enumerate all of them in hypergraphs for every fixed k. Our technique is strong enough to also enable an n^{O(k)}p-time deterministic algorithm to enumerate all min-k-cut-sets in hypergraphs, thus improving on the previously known n^{O(k²)}p-time deterministic algorithm, where n is the number of vertices and p is the size of the hypergraph. The correctness analysis of our enumeration approach relies on a structural result that is a strong and unifying generalization of known structural results for Hypergraph-k-Cut and Minmax-Hypergraph-k-Partition. We believe that our structural result is likely to be of independent interest in the theory of hypergraphs (and graphs).

Cite as

Calvin Beideman, Karthekeyan Chandrasekaran, and Weihang Wang. Counting and Enumerating Optimum Cut Sets for Hypergraph k-Partitioning Problems for Fixed k. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{beideman_et_al:LIPIcs.ICALP.2022.16,
  author =	{Beideman, Calvin and Chandrasekaran, Karthekeyan and Wang, Weihang},
  title =	{{Counting and Enumerating Optimum Cut Sets for Hypergraph k-Partitioning Problems for Fixed k}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.16},
  URN =		{urn:nbn:de:0030-drops-163578},
  doi =		{10.4230/LIPIcs.ICALP.2022.16},
  annote =	{Keywords: hypergraphs, k-partitioning, counting, enumeration}
}
Track A: Algorithms, Complexity and Games
Finding Monotone Patterns in Sublinear Time, Adaptively

Authors: Omri Ben-Eliezer, Shoham Letzter, and Erik Waingarten


Abstract
We investigate adaptive sublinear algorithms for finding monotone patterns in sequential data. Given fixed 2 ≤ k ∈ ℕ and ε > 0, consider the problem of finding a length-k increasing subsequence in a sequence f : [n] → ℝ, provided that f is ε-far from free of such subsequences. It was shown by Ben-Eliezer et al. [FOCS 2019] that the non-adaptive query complexity of the above task is Θ((log n)^⌊log₂ k⌋). In this work, we break the non-adaptive lower bound, presenting an adaptive algorithm for this problem which makes O(log n) queries. This is optimal, matching the classical Ω(log n) adaptive lower bound by Fischer [Inf. Comp. 2004] for monotonicity testing (which corresponds to the case k = 2). Equivalently, our result implies that testing whether a sequence decomposes into k monotone subsequences can be done with O(log n) queries.
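
As a point of comparison, with full access to f the length of the longest increasing subsequence can be computed exactly in O(n log n) time by patience sorting; the sketch below is this standard full-access baseline, not the sublinear-query algorithm of the paper.

import bisect

def has_increasing_subsequence(f, k):
    # Return True iff f contains a strictly increasing subsequence of length k.
    # tails[i] is the smallest possible last element of an increasing subsequence
    # of length i + 1 seen so far (patience sorting).
    tails = []
    for x in f:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
        if len(tails) >= k:
            return True
    return False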

Cite as

Omri Ben-Eliezer, Shoham Letzter, and Erik Waingarten. Finding Monotone Patterns in Sublinear Time, Adaptively. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 17:1-17:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{beneliezer_et_al:LIPIcs.ICALP.2022.17,
  author =	{Ben-Eliezer, Omri and Letzter, Shoham and Waingarten, Erik},
  title =	{{Finding Monotone Patterns in Sublinear Time, Adaptively}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{17:1--17:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.17},
  URN =		{urn:nbn:de:0030-drops-163586},
  doi =		{10.4230/LIPIcs.ICALP.2022.17},
  annote =	{Keywords: property testing, monotone patterns, monotone decomposition, adaptivity}
}
Track A: Algorithms, Complexity and Games
Deciding Twin-Width at Most 4 Is NP-Complete

Authors: Pierre Bergé, Édouard Bonnet, and Hugues Déprés


Abstract
We show that determining if an n-vertex graph has twin-width at most 4 is NP-complete, and requires time 2^Ω(n/log n) unless the Exponential-Time Hypothesis fails. Along the way, we give an elementary proof that n-vertex graphs subdivided at least 2 log n times have twin-width at most 4. We also show how to encode trigraphs H (2-edge colored graphs involved in the definition of twin-width) into graphs G, in the sense that every d-sequence (sequence of vertex contractions witnessing that the twin-width is at most d) of G inevitably creates H as an induced subtrigraph, whereas there exists a partial d-sequence that actually goes from G to H. We believe that these facts and their proofs can be of independent interest.

Cite as

Pierre Bergé, Édouard Bonnet, and Hugues Déprés. Deciding Twin-Width at Most 4 Is NP-Complete. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 18:1-18:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{berge_et_al:LIPIcs.ICALP.2022.18,
  author =	{Berg\'{e}, Pierre and Bonnet, \'{E}douard and D\'{e}pr\'{e}s, Hugues},
  title =	{{Deciding Twin-Width at Most 4 Is NP-Complete}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{18:1--18:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.18},
  URN =		{urn:nbn:de:0030-drops-163595},
  doi =		{10.4230/LIPIcs.ICALP.2022.18},
  annote =	{Keywords: Twin-width, lower bounds}
}
Track A: Algorithms, Complexity and Games
Memoryless Worker-Task Assignment with Polylogarithmic Switching Cost

Authors: Aaron Berger, William Kuszmaul, Adam Polak, Jonathan Tidor, and Nicole Wein


Abstract
We study the basic problem of assigning memoryless workers to tasks with dynamically changing demands. Given a set of w workers and a multiset T ⊆ [t] of |T| = w tasks, a memoryless worker-task assignment function is any function ϕ that assigns the workers [w] to the tasks T based only on the current value of T. The assignment function ϕ is said to have switching cost at most k if, for every task multiset T, changing the contents of T by one task changes ϕ(T) by at most k worker assignments. The goal of memoryless worker-task assignment is to construct an assignment function with the smallest possible switching cost. In past work, the problem of determining the optimal switching cost has been posed as an open question. There are no known sub-linear upper bounds, and after considerable effort, the best known lower bound remains 4 (ICALP 2020). We show that it is possible to achieve polylogarithmic switching cost. We give a construction via the probabilistic method that achieves switching cost O(log w log (wt)) and an explicit construction that achieves switching cost polylog (wt). We also prove a super-constant lower bound on switching cost: we show that for any value of w, there exists a value of t for which the optimal switching cost is w. Thus it is not possible to achieve a switching cost that is sublinear strictly as a function of w. Finally, we present an application of the worker-task assignment problem to a metric embeddings problem. In particular, we use our results to give the first low-distortion embedding from sparse binary vectors into low-dimensional Hamming space.
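
To illustrate the definitions (and not the constructions of the paper), the toy assignment function below gives worker i the i-th task in sorted order; replacing a single task can then force up to w workers to switch, which is exactly the linear behaviour the polylogarithmic constructions avoid.

def sorted_assignment(tasks):
    # Toy memoryless assignment: worker i is assigned the i-th task in sorted order.
    return {worker: task for worker, task in enumerate(sorted(tasks))}

def switching_cost(tasks_before, tasks_after, phi=sorted_assignment):
    # Number of workers whose assigned task changes when T changes by one task.
    a, b = phi(tasks_before), phi(tasks_after)
    return sum(a[w] != b[w] for w in a)

For example, under this rule switching_cost([1, 2, 3, 4], [2, 3, 4, 5]) returns 4: replacing the smallest task by a new largest one shifts every order statistic, so all four workers change assignment.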

Cite as

Aaron Berger, William Kuszmaul, Adam Polak, Jonathan Tidor, and Nicole Wein. Memoryless Worker-Task Assignment with Polylogarithmic Switching Cost. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{berger_et_al:LIPIcs.ICALP.2022.19,
  author =	{Berger, Aaron and Kuszmaul, William and Polak, Adam and Tidor, Jonathan and Wein, Nicole},
  title =	{{Memoryless Worker-Task Assignment with Polylogarithmic Switching Cost}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{19:1--19:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.19},
  URN =		{urn:nbn:de:0030-drops-163608},
  doi =		{10.4230/LIPIcs.ICALP.2022.19},
  annote =	{Keywords: Distributed Task Allocation, Metric Embeddings, Probabilistic Method}
}
Track A: Algorithms, Complexity and Games
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary

Authors: Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun


Abstract
Designing efficient dynamic graph algorithms against an adaptive adversary is a major goal in the field of dynamic graph algorithms and has witnessed many exciting recent developments in, e.g., dynamic matching (Wajc STOC'20) and decremental shortest paths (Chuzhoy and Khanna STOC'19). Compared to other graph primitives (e.g. spanning trees and matchings), designing such algorithms for graph spanners and (more broadly) graph sparsifiers poses a unique challenge since no fast deterministic algorithm is known even for the static computation, and there is no known way to adjust the output slowly (known as "small recourse/replacements"). This paper presents the first non-trivial efficient adaptive algorithms for maintaining many sparsifiers against an adaptive adversary. Specifically, we present algorithms that maintain 1) a polylog(n)-spanner of size Õ(n) in polylog(n) amortized update time, 2) an O(k)-approximate cut sparsifier of size Õ(n) in Õ(n^{1/k}) amortized update time, and 3) a polylog(n)-approximate spectral sparsifier in polylog(n) amortized update time. Our bounds are the first non-trivial ones even when only the recourse is concerned. Our results hold even against a stronger adversary, who can access the random bits previously used by the algorithms, and the amortized update time of all algorithms can be made worst-case by paying sub-polynomial factors. Our spanner result resolves an open question by Ahmed et al. (2019) and our results and techniques imply additional improvements over existing results, including (i) answering open questions about decremental single-source shortest paths by Chuzhoy and Khanna (STOC'19) and Gutenberg and Wulff-Nilsen (SODA'20), implying a nearly-quadratic time algorithm for approximating minimum-cost unit-capacity flow and (ii) de-amortizing a result of Abraham et al. (FOCS'16) for dynamic spectral sparsifiers. Our results are based on two novel techniques. The first technique is a generic black-box reduction that allows us to assume that the graph is initially an expander with almost uniform-degree and, more importantly, stays as an almost uniform-degree expander while undergoing only edge deletions. The second technique is called proactive resampling: here we constantly re-sample parts of the input graph so that, independent of an adversary’s computational power, a desired structure of the underlying graph can be always maintained. Despite its simplicity, the analysis of this sampling scheme is far from trivial, because the adversary can potentially create dependencies between the random choices used by the algorithm. We believe these two techniques could be useful for developing other adaptive algorithms.

Cite as

Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bernstein_et_al:LIPIcs.ICALP.2022.20,
  author =	{Bernstein, Aaron and van den Brand, Jan and Probst Gutenberg, Maximilian and Nanongkai, Danupon and Saranurak, Thatchaphol and Sidford, Aaron and Sun, He},
  title =	{{Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.20},
  URN =		{urn:nbn:de:0030-drops-163611},
  doi =		{10.4230/LIPIcs.ICALP.2022.20},
  annote =	{Keywords: dynamic graph algorithm, adaptive adversary, spanner, sparsifier}
}
Document
Track A: Algorithms, Complexity and Games
Fast Sampling via Spectral Independence Beyond Bounded-Degree Graphs

Authors: Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič


Abstract
Spectral independence is a recently-developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal O(n log n) sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using L^p-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of L^p-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the L^p-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph G(n,d/n), where previously known algorithms ran in time n^{O(log d)} or applied only to large d. We refine these algorithmic bounds significantly, and develop fast, nearly linear-time algorithms based on Glauber dynamics that apply to all constant d, throughout the uniqueness regime.
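For concreteness, the Glauber dynamics for the hard-core model (sampling independent sets weighted by λ^{|I|}) is the single-site update chain analysed by this line of work; a minimal sketch follows, with illustrative names and an arbitrary starting state.

import random

def glauber_hardcore(adj, lam, steps, seed=0):
    """Single-site Glauber dynamics for the hard-core model on a graph.
    adj: dict vertex -> set of neighbours; lam: fugacity lambda > 0.
    Returns the independent set reached after the given number of steps."""
    rng = random.Random(seed)
    vertices = list(adj)
    independent = set()                       # start from the empty set
    for _ in range(steps):
        v = rng.choice(vertices)              # pick a uniformly random vertex
        independent.discard(v)
        blocked = any(u in independent for u in adj[v])
        # if no neighbour is occupied, occupy v with probability lam/(1+lam)
        if not blocked and rng.random() < lam / (1.0 + lam):
            independent.add(v)
    return independent

# usage: a 4-cycle at fugacity 1.0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(glauber_hardcore(adj, lam=1.0, steps=10_000))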

Cite as

Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič. Fast Sampling via Spectral Independence Beyond Bounded-Degree Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bezakova_et_al:LIPIcs.ICALP.2022.21,
  author =	{Bez\'{a}kov\'{a}, Ivona and Galanis, Andreas and Goldberg, Leslie Ann and \v{S}tefankovi\v{c}, Daniel},
  title =	{{Fast Sampling via Spectral Independence Beyond Bounded-Degree Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.21},
  URN =		{urn:nbn:de:0030-drops-163622},
  doi =		{10.4230/LIPIcs.ICALP.2022.21},
  annote =	{Keywords: Hard-core model, Random graphs, Markov chains}
}
Document
Track A: Algorithms, Complexity and Games
Deterministic Sensitivity Oracles for Diameter, Eccentricities and All Pairs Distances

Authors: Davide Bilò, Keerti Choudhary, Sarel Cohen, Tobias Friedrich, and Martin Schirneck


Abstract
We construct data structures for extremal and pairwise distances in directed graphs in the presence of transient edge failures. Henzinger et al. [ITCS 2017] initiated the study of fault-tolerant (sensitivity) oracles for the diameter and vertex eccentricities. We extend this line of work with a special focus on space efficiency. We present several new data structures, among them the first fault-tolerant eccentricity oracle for dual failures in subcubic space. We further prove lower bounds that show limits to approximation vs. space and diameter vs. space trade-offs for fault-tolerant oracles. They highlight key differences between data structures for undirected and directed graphs. Initially, our oracles are randomized, leaning on a sampling technique frequently used in sensitivity analysis. Building on the work of Alon, Chechik, and Cohen [ICALP 2019] as well as Karthik and Parter [SODA 2021], we develop a hierarchical framework to derandomize fault-tolerant data structures. We first apply it to our own diameter and eccentricity oracles and then show its versatility by derandomizing algorithms from the literature: the distance sensitivity oracle of Ren [JCSS 2022] and the Single-Source Replacement Path algorithm of Chechik and Magen [ICALP 2020]. This way, we obtain the first deterministic distance sensitivity oracle with subcubic preprocessing time.
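A sampling lemma frequently used in this area (stated here for orientation; the paper's precise use may differ) is the replacement-path hitting argument:

\[
\text{If each vertex is sampled independently with probability } p = \min\{1,\; c\ln n/\ell\},\ \text{then for any fixed path on at least } \ell \text{ vertices,}
\]
\[
\Pr[\text{the path avoids the sample}] \;\le\; (1-p)^{\ell} \;\le\; e^{-p\ell} \;\le\; n^{-c},
\]

so a union bound over polynomially many replacement paths leaves all of them hit with high probability; derandomizing exactly this kind of sampling is what the hierarchical framework addresses.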

Cite as

Davide Bilò, Keerti Choudhary, Sarel Cohen, Tobias Friedrich, and Martin Schirneck. Deterministic Sensitivity Oracles for Diameter, Eccentricities and All Pairs Distances. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bilo_et_al:LIPIcs.ICALP.2022.22,
  author =	{Bil\`{o}, Davide and Choudhary, Keerti and Cohen, Sarel and Friedrich, Tobias and Schirneck, Martin},
  title =	{{Deterministic Sensitivity Oracles for Diameter, Eccentricities and All Pairs Distances}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.22},
  URN =		{urn:nbn:de:0030-drops-163633},
  doi =		{10.4230/LIPIcs.ICALP.2022.22},
  annote =	{Keywords: derandomization, diameter, eccentricity, fault-tolerant data structure, sensitivity oracle, space lower bound}
}
Document
Track A: Algorithms, Complexity and Games
Hodge Decomposition and General Laplacian Solvers for Embedded Simplicial Complexes

Authors: Mitchell Black and Amir Nayyeri


Abstract
We describe a nearly-linear time algorithm to solve the linear system L₁x = b parameterized by the first Betti number of the complex, where L₁ is the 1-Laplacian of a simplicial complex K that is a subcomplex of a collapsible complex X linearly embedded in ℝ³. Our algorithm generalizes the work of Black et al. [SODA2022] that solved the same problem but required that K have trivial first homology. Our algorithm works for complexes K with arbitrary first homology with running time that is nearly-linear with respect to the size of the complex and polynomial with respect to the first Betti number. The key to our solver is a new algorithm for computing the Hodge decomposition of 1-chains of K in nearly-linear time. Additionally, our algorithm implies a nearly quadratic solver and nearly quadratic Hodge decomposition for the 1-Laplacian of any simplicial complex K embedded in ℝ³, as K can always be expanded to a collapsible embedded complex of quadratic complexity.
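For orientation, the combinatorial Hodge decomposition that the solver computes splits every real 1-chain into three orthogonal parts; in standard notation with boundary maps \(\partial_1, \partial_2\) (not taken from the paper's text):

\[
C_1(K;\mathbb{R}) \;=\; \operatorname{im}\partial_2 \;\oplus\; \operatorname{im}\partial_1^{\top} \;\oplus\; \ker L_1,
\qquad
L_1 \;=\; \partial_1^{\top}\partial_1 + \partial_2\partial_2^{\top},
\]

so a 1-chain \(x\) splits as \(x = \partial_2 a + \partial_1^{\top} b + h\) with \(h\) harmonic (\(L_1 h = 0\)), and the dimension of the harmonic part equals the first Betti number of \(K\).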

Cite as

Mitchell Black and Amir Nayyeri. Hodge Decomposition and General Laplacian Solvers for Embedded Simplicial Complexes. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 23:1-23:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{black_et_al:LIPIcs.ICALP.2022.23,
  author =	{Black, Mitchell and Nayyeri, Amir},
  title =	{{Hodge Decomposition and General Laplacian Solvers for Embedded Simplicial Complexes}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{23:1--23:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.23},
  URN =		{urn:nbn:de:0030-drops-163641},
  doi =		{10.4230/LIPIcs.ICALP.2022.23},
  annote =	{Keywords: Computational Topology, Laplacian solvers, Combinatorial Laplacian, Hodge decomposition, Parameterized Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Reconstructing Decision Trees

Authors: Guy Blanc, Jane Lange, and Li-Yang Tan


Abstract
We give the first reconstruction algorithm for decision trees: given queries to a function f that is opt-close to a size-s decision tree, our algorithm provides query access to a decision tree T where: - T has size S := s^O((log s)²/ε³); - dist(f,T) ≤ O(opt)+ε; - Every query to T is answered with poly((log s)/ε)⋅ log n queries to f and in poly((log s)/ε)⋅ n log n time. This yields a tolerant tester that distinguishes functions that are close to size-s decision trees from those that are far from size-S decision trees. The polylogarithmic dependence on s in the efficiency of our tester is exponentially smaller than that of existing testers. Since decision tree complexity is well known to be related to numerous other boolean function properties, our results also provide a new algorithm for reconstructing and testing these properties.

Cite as

Guy Blanc, Jane Lange, and Li-Yang Tan. Reconstructing Decision Trees. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 24:1-24:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{blanc_et_al:LIPIcs.ICALP.2022.24,
  author =	{Blanc, Guy and Lange, Jane and Tan, Li-Yang},
  title =	{{Reconstructing Decision Trees}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{24:1--24:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.24},
  URN =		{urn:nbn:de:0030-drops-163653},
  doi =		{10.4230/LIPIcs.ICALP.2022.24},
  annote =	{Keywords: Property reconstruction, property testing, tolerant testing, decision trees}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear-Round Parallel Matroid Intersection

Authors: Joakim Blikstad


Abstract
Despite a lot of recent progress in obtaining faster sequential matroid intersection algorithms, the fastest parallel poly(n)-query algorithm was still the straightforward O(n)-round parallel implementation of Edmonds' augmenting paths algorithm from the 1960s. Very recently, Chakrabarty-Chen-Khanna [FOCS'21] showed the lower bound that any, possibly randomized, parallel matroid intersection algorithm making poly(n) rank-queries requires Ω̃(n^{1/3}) rounds of adaptivity. They ask, as an open question, whether the lower bound can be improved to Ω̃(n), or whether there can be sublinear-round, poly(n)-query algorithms for matroid intersection. We resolve this open problem by presenting the first sublinear-round parallel matroid intersection algorithms. Perhaps surprisingly, we break the Õ(n) barrier not only in the rank-oracle model, but also in the weaker independence-oracle model. Our rank-query algorithm guarantees O(n^{3/4}) rounds of adaptivity, while the independence-query algorithm uses O(n^{7/8}) rounds of adaptivity, both making a total of poly(n) queries.

Cite as

Joakim Blikstad. Sublinear-Round Parallel Matroid Intersection. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{blikstad:LIPIcs.ICALP.2022.25,
  author =	{Blikstad, Joakim},
  title =	{{Sublinear-Round Parallel Matroid Intersection}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.25},
  URN =		{urn:nbn:de:0030-drops-163662},
  doi =		{10.4230/LIPIcs.ICALP.2022.25},
  annote =	{Keywords: Matroid Intersection, Combinatorial Optimization, Parallel Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Privately Estimating Graph Parameters in Sublinear Time

Authors: Jeremiah Blocki, Elena Grigorescu, and Tamalika Mukherjee


Abstract
We initiate a systematic study of algorithms that are both differentially-private and run in sublinear time for several problems in which the goal is to estimate natural graph parameters. Our main result is a differentially-private (1+ρ)-approximation algorithm for the problem of computing the average degree of a graph, for every ρ > 0. The running time of the algorithm is roughly the same (for sparse graphs) as its non-private version proposed by Goldreich and Ron (Sublinear Algorithms, 2005). We also obtain the first differentially-private sublinear-time approximation algorithms for the maximum matching size and the minimum vertex cover size of a graph. An overarching technique we employ is the notion of coupled global sensitivity of randomized algorithms. Related variants of this notion of sensitivity have been used in the literature in ad-hoc ways. Here we formalize the notion and develop it as a unifying framework for privacy analysis of randomized approximation algorithms.
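To fix ideas, the following toy sketch combines degree sampling with the standard Laplace mechanism; it is NOT the Goldreich-Ron estimator nor the paper's coupled-global-sensitivity analysis, and the sampling rate, sensitivity bound, and function names are illustrative assumptions.

import random

def private_average_degree_toy(degrees, eps, sample_size, seed=0):
    """Toy illustration only: estimate the average degree from a vertex sample
    and add Laplace noise scaled to a crude sensitivity bound."""
    rng = random.Random(seed)
    n = len(degrees)
    sample = [degrees[rng.randrange(n)] for _ in range(sample_size)]
    estimate = sum(sample) / sample_size
    # crude (heuristic) sensitivity bound: one extra edge changes each endpoint's
    # degree by 1, hence the sample mean by at most 2/sample_size.
    sensitivity = 2.0 / sample_size
    # difference of two Exp(eps/sensitivity) variables is Laplace(sensitivity/eps)
    noise = rng.expovariate(eps / sensitivity) - rng.expovariate(eps / sensitivity)
    return estimate + noise

# usage: degrees of a small graph
print(private_average_degree_toy([2, 3, 3, 2, 4, 2], eps=1.0, sample_size=4))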

Cite as

Jeremiah Blocki, Elena Grigorescu, and Tamalika Mukherjee. Privately Estimating Graph Parameters in Sublinear Time. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{blocki_et_al:LIPIcs.ICALP.2022.26,
  author =	{Blocki, Jeremiah and Grigorescu, Elena and Mukherjee, Tamalika},
  title =	{{Privately Estimating Graph Parameters in Sublinear Time}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.26},
  URN =		{urn:nbn:de:0030-drops-163674},
  doi =		{10.4230/LIPIcs.ICALP.2022.26},
  annote =	{Keywords: differential privacy, sublinear time, graph algorithms}
}
Document
Track A: Algorithms, Complexity and Games
The Complexity of Finding Fair Many-To-One Matchings

Authors: Niclas Boehmer and Tomohiro Koana


Abstract
We analyze the (parameterized) computational complexity of "fair" variants of bipartite many-to-one matching, where each vertex from the "left" side is matched to exactly one vertex and each vertex from the "right" side may be matched to multiple vertices. We want to find a "fair" matching, in which each vertex from the right side is matched to a "fair" set of vertices. Assuming that each vertex from the left side has one color modeling its attribute, we study two fairness criteria. In one of them, we deem a vertex set fair if for any two colors, the difference between the numbers of their occurrences does not exceed a given threshold. Fairness is relevant when finding many-to-one matchings between students and colleges, voters and constituencies, and applicants and firms. Here colors may model sociodemographic attributes, party memberships, and qualifications, respectively. We show that finding a fair many-to-one matching is NP-hard even for three colors and maximum degree five. Our main contribution is the design of fixed-parameter tractable algorithms with respect to the number of vertices on the right side. Our algorithms make use of a variety of techniques including color coding. At the core lie integer linear programs encoding Hall-like conditions. To establish the correctness of our integer programs, we prove a new separation result, inspired by Frank's separation theorem [Frank, Discrete Math. 1982], which may also be of independent interest. We further obtain complete complexity dichotomies regarding the number of colors and the maximum degree of each side.
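As a concrete rendering of the first fairness criterion (notation mine, not taken verbatim from the paper): with a binary variable \(x_{u,v}\) for each edge \(\{u,v\}\), \(\mathrm{col}(u)\) the color of left vertex \(u\), and threshold \(t\),

\[
\sum_{v:\,\{u,v\}\in E} x_{u,v} = 1 \ \ \forall u,
\qquad
n_{v,c} = \sum_{\substack{u:\, \mathrm{col}(u)=c \\ \{u,v\}\in E}} x_{u,v},
\qquad
|n_{v,c} - n_{v,c'}| \le t \ \ \forall v,\ \forall c \ne c',
\qquad
x_{u,v} \in \{0,1\},
\]

i.e. every left vertex is matched exactly once and, at every right vertex, the color counts of the matched set differ pairwise by at most \(t\).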

Cite as

Niclas Boehmer and Tomohiro Koana. The Complexity of Finding Fair Many-To-One Matchings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 27:1-27:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{boehmer_et_al:LIPIcs.ICALP.2022.27,
  author =	{Boehmer, Niclas and Koana, Tomohiro},
  title =	{{The Complexity of Finding Fair Many-To-One Matchings}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{27:1--27:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.27},
  URN =		{urn:nbn:de:0030-drops-163680},
  doi =		{10.4230/LIPIcs.ICALP.2022.27},
  annote =	{Keywords: Graph theory, polynomial-time algorithms, NP-hardness, FPT, ILP, color coding, submodular and supermodular functions, algorithmic fairness}
}
Document
Track A: Algorithms, Complexity and Games
Factoring and Pairings Are Not Necessary for IO: Circular-Secure LWE Suffices

Authors: Zvika Brakerski, Nico Döttling, Sanjam Garg, and Giulio Malavolta


Abstract
We construct indistinguishability obfuscation (iO) solely under circular-security properties of encryption schemes based on the Learning with Errors (LWE) problem. Circular-security assumptions were used before to construct (non-leveled) fully-homomorphic encryption (FHE), but our assumption is stronger and requires circular randomness-leakage-resilience. In contrast with prior works, this assumption can be conjectured to be post-quantum secure, yielding the first provably secure iO construction that is (plausibly) post-quantum secure. Our work follows the high-level outline of the recent work of Gay and Pass [STOC 2021], who showed a way to remove the heuristic step from the homomorphic-encryption based iO approach of Brakerski, Döttling, Garg, and Malavolta [EUROCRYPT 2020]. They thus obtain a construction proved secure under a circular security assumption on natural homomorphic encryption schemes - specifically, they use homomorphic encryption schemes based on LWE and DCR, respectively. In this work we show how to remove the DCR assumption and remain with a scheme based on the circular security of LWE alone. Along the way we relax some of the requirements in the Gay-Pass blueprint and thus obtain a scheme that is secure under a different assumption. Specifically, we do not require security in the presence of a key-cycle, but rather only in the presence of a key-randomness cycle. An additional contribution of our work is to point out a problem in one of the building blocks used by many iO candidates, including all existing provably secure post-quantum candidates: namely, the transformation from exponentially-efficient iO (XiO) of Lin, Pass, Seth and Telang [PKC 2016]. We show why their transformation inherently falls short of achieving the desired goal, and then rectify this situation by showing that shallow XiO (i.e. one where the obfuscator is depth-bounded) does translate to iO using LWE.

Cite as

Zvika Brakerski, Nico Döttling, Sanjam Garg, and Giulio Malavolta. Factoring and Pairings Are Not Necessary for IO: Circular-Secure LWE Suffices. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{brakerski_et_al:LIPIcs.ICALP.2022.28,
  author =	{Brakerski, Zvika and D\"{o}ttling, Nico and Garg, Sanjam and Malavolta, Giulio},
  title =	{{Factoring and Pairings Are Not Necessary for IO: Circular-Secure LWE Suffices}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.28},
  URN =		{urn:nbn:de:0030-drops-163699},
  doi =		{10.4230/LIPIcs.ICALP.2022.28},
  annote =	{Keywords: Cryptography, Obfuscation}
}
Document
Track A: Algorithms, Complexity and Games
Characterization of Matrices with Bounded Graver Bases and Depth Parameters and Applications to Integer Programming

Authors: Marcin Briański, Martin Koutecký, Daniel Král', Kristýna Pekárková, and Felix Schröder


Abstract
An intensive line of research on fixed parameter tractability of integer programming is focused on exploiting the relation between the sparsity of a constraint matrix A and the norm of the elements of its Graver basis. In particular, integer programming is fixed parameter tractable when parameterized by the primal tree-depth and the entry complexity of A, and when parameterized by the dual tree-depth and the entry complexity of A; both of these parameterizations imply that A is sparse, in particular, that the number of its non-zero entries is linear in the number of columns or rows, respectively. We study preconditioners that transform a given matrix into an equivalent sparse matrix, when one exists, and provide structural results characterizing the existence of a sparse equivalent matrix in terms of the structural properties of the associated column matroid. In particular, our results imply that the 𝓁₁-norm of the Graver basis is bounded by a function of the maximum 𝓁₁-norm of a circuit of A. We use our results to design a parameterized algorithm that constructs a matrix equivalent to an input matrix A that has small primal/dual tree-depth and entry complexity, if such an equivalent matrix exists. Our results yield parameterized algorithms for integer programming when parameterized by the 𝓁₁-norm of the Graver basis of the constraint matrix, when parameterized by the 𝓁₁-norm of the circuits of the constraint matrix, when parameterized by the smallest primal tree-depth and entry complexity of a matrix equivalent to the constraint matrix, and when parameterized by the smallest dual tree-depth and entry complexity of a matrix equivalent to the constraint matrix.
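For reference, the standard definitions behind these parameters (restated here, not quoted from the paper): the Graver basis of an integer matrix \(A\) is

\[
\mathcal{G}(A) \;=\; \bigl\{\, g \in \ker_{\mathbb{Z}}(A)\setminus\{0\} \;:\; g \text{ is } \sqsubseteq\text{-minimal} \,\bigr\},
\]

where \(u \sqsubseteq v\) (the conformal order) means \(u_i v_i \ge 0\) and \(|u_i| \le |v_i|\) for every coordinate \(i\); the circuits of \(A\) are the support-minimal nonzero integer kernel elements with coprime entries. The norm parameters in question are then the maximum \(\ell_1\)-norm over \(\mathcal{G}(A)\) and over the circuits of \(A\), respectively.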

Cite as

Marcin Briański, Martin Koutecký, Daniel Král', Kristýna Pekárková, and Felix Schröder. Characterization of Matrices with Bounded Graver Bases and Depth Parameters and Applications to Integer Programming. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 29:1-29:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{brianski_et_al:LIPIcs.ICALP.2022.29,
  author =	{Bria\'{n}ski, Marcin and Kouteck\'{y}, Martin and Kr\'{a}l', Daniel and Pek\'{a}rkov\'{a}, Krist\'{y}na and Schr\"{o}der, Felix},
  title =	{{Characterization of Matrices with Bounded Graver Bases and Depth Parameters and Applications to Integer Programming}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{29:1--29:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.29},
  URN =		{urn:nbn:de:0030-drops-163702},
  doi =		{10.4230/LIPIcs.ICALP.2022.29},
  annote =	{Keywords: Integer programming, width parameters, matroids, Graver basis, tree-depth, fixed parameter tractability}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems? To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,… , x_k} #{y : ϕ(x_1,… ,x_k,y)} where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula φ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories:
- optimizable to exactness in time O(m^{k-δ}),
- an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}),
- a (fixed) constant-factor approximation in time O(m^{k-δ}), or
- a nm^ε-approximation in time O(m^{k-f(ε)}).
We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20). As general trends for the problems we consider, we observe: (1) Exact optimizability has a simple algebraic characterization, (2) only few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Track A: Algorithms, Complexity and Games
Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis


Abstract
We present new exact and approximation algorithms for 0-1-Knapsack and Unbounded Knapsack:
- Exact Algorithm for 0-1-Knapsack: 0-1-Knapsack has known algorithms running in time Õ(n + min{n ⋅ OPT, n ⋅ W, OPT², W²}) [Bellman '57], where n is the number of items, W is the weight budget, and OPT is the optimal profit. We present an algorithm running in time Õ(n + (W + OPT)^{1.5}). This improves the running time in case n, W, OPT are roughly equal.
- Exact Algorithm for Unbounded Knapsack: Unbounded Knapsack has known algorithms running in time Õ(n + min{n ⋅ p_max, n ⋅ w_max, p_max², w_max²}) [Axiotis, Tzamos '19, Jansen, Rohwedder '19, Chan, He '22], where n is the number of items, w_max is the largest weight of any item, and p_max is the largest profit of any item. We present an algorithm running in time Õ(n + (p_max + w_max)^{1.5}), giving a similar improvement as for 0-1-Knapsack.
- Approximating Unbounded Knapsack with Resource Augmentation: Unbounded Knapsack has a known FPTAS with running time Õ(min{n/ε, n + 1/ε²}) [Jansen, Kraft '18]. We study weak approximation algorithms, which approximate the optimal profit but are allowed to overshoot the weight constraint (i.e. resource augmentation). We present the first approximation scheme for Unbounded Knapsack in this setting, achieving running time Õ(n + 1/ε^{1.5}). Along the way, we also give a simpler FPTAS with lower order improvement in the standard setting.
For all of these problem settings the previously known results had matching conditional lower bounds. We avoid these lower bounds in the approximation setting by allowing resource augmentation, and in the exact setting by analyzing the time complexity in terms of weight and profit parameters (instead of only weight or only profit parameters). Our algorithms can be seen as reductions to Min-Plus-Convolution on monotone sequences with bounded entries. These structured instances of Min-Plus-Convolution can be solved in time O(n^{1.5}) [Chi, Duan, Xie, Zhang '22] (in contrast to the conjectured n^{2-o(1)} lower bound for the general case). We complement our results by showing reductions in the opposite direction, that is, we show that achieving our results with the constant 1.5 replaced by any constant < 2 implies subquadratic algorithms for Min-Plus-Convolution on monotone sequences with bounded entries.
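To make the reduction target concrete, here is the quadratic-time baseline for (max,+)-convolution (equivalent to min-plus up to negation) together with its textbook use for merging Unbounded-Knapsack profit tables; the structured instances exploited above are exactly such monotone, bounded-entry tables. Names and the usage example are illustrative, not from the paper.

def max_plus_convolution(a, b):
    """Naive O(n*m) (max,+)-convolution: c[k] = max_{i+j=k} a[i] + b[j]."""
    n, m = len(a), len(b)
    c = [float("-inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            if a[i] + b[j] > c[i + j]:
                c[i + j] = a[i] + b[j]
    return c

def unbounded_knapsack_table(weights, profits, W):
    """P[w] = best profit achievable with total weight at most w (monotone table)."""
    P = [0] * (W + 1)
    for w in range(1, W + 1):
        best = P[w - 1]                     # monotonicity: budget w is at least as good as w-1
        for wt, pr in zip(weights, profits):
            if wt <= w and P[w - wt] + pr > best:
                best = P[w - wt] + pr
        P[w] = best
    return P

# usage: merge the profit tables of two disjoint item groups by (max,+)-convolution
A = unbounded_knapsack_table([2, 3], [3, 4], 6)
B = unbounded_knapsack_table([1], [1], 6)
print(max_plus_convolution(A, B)[:7])      # best profit for combined weight budget 0..6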

Cite as

Karl Bringmann and Alejandro Cassis. Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 31:1-31:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.31,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{31:1--31:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.31},
  URN =		{urn:nbn:de:0030-drops-163727},
  doi =		{10.4230/LIPIcs.ICALP.2022.31},
  annote =	{Keywords: Knapsack, Approximation Schemes, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Track A: Algorithms, Complexity and Games
Improved Sublinear-Time Edit Distance for Preprocessed Strings

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos


Abstract
We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both string(s) are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results:
- After preprocessing one string in time n^{1+o(1)}, we can solve (k, k ⋅ n^o(1))-gap edit distance in time (n/k + k) ⋅ n^o(1).
- After preprocessing both strings separately in time n^{1+o(1)}, we can solve (k, k ⋅ n^o(1))-gap edit distance in time k ⋅ n^o(1).
Both results improve upon the previously best known results, with respect to either the gap, the query time, or the preprocessing time. Our algorithms build on the framework by Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm by Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts in their algorithm by faster and simpler solutions which exploit the preprocessing.
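As a baseline for the gap problem, deciding exactly whether the edit distance is at most k takes O(nk) time via the classical banded dynamic program (Ukkonen-style); a minimal sketch, independent of the sublinear machinery above:

def edit_distance_at_most_k(s, t, k):
    """Return True iff edit_distance(s, t) <= k, using a band of width 2k+1
    around the main diagonal of the standard DP table (O(|s|*k) time)."""
    n, m = len(s), len(t)
    if abs(n - m) > k:
        return False
    INF = k + 1                                       # values above k are capped
    prev = {j: j for j in range(0, min(m, k) + 1)}    # row 0 of the DP, banded
    for i in range(1, n + 1):
        cur = {}
        lo, hi = max(0, i - k), min(m, i + k)
        for j in range(lo, hi + 1):
            if j == 0:
                best = i
            else:
                sub = prev.get(j - 1, INF) + (s[i - 1] != t[j - 1])
                ins = cur.get(j - 1, INF) + 1
                dele = prev.get(j, INF) + 1
                best = min(sub, ins, dele)
            cur[j] = min(best, INF)
        prev = cur
    return prev.get(m, INF) <= k

# usage
print(edit_distance_at_most_k("kitten", "sitting", 3))   # True (distance is 3)
print(edit_distance_at_most_k("kitten", "sitting", 2))   # False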

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos. Improved Sublinear-Time Edit Distance for Preprocessed Strings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.32,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and Nakos, Vasileios},
  title =	{{Improved Sublinear-Time Edit Distance for Preprocessed Strings}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.32},
  URN =		{urn:nbn:de:0030-drops-163734},
  doi =		{10.4230/LIPIcs.ICALP.2022.32},
  annote =	{Keywords: Edit Distance, Property Testing, Preprocessing, Precision Sampling}
}
Document
Track A: Algorithms, Complexity and Games
Polynomial Delay Algorithm for Minimal Chordal Completions

Authors: Caroline Brosse, Vincent Limouzy, and Arnaud Mary


Abstract
Motivated by the problem of enumerating all tree decompositions of a graph, we consider in this article the problem of listing all the minimal chordal completions of a graph. Carmeli et al. [Carmeli et al., 2020] (PODS 2017) proved that all minimal chordal completions, or equivalently all proper tree decompositions, of a graph can be listed in incremental polynomial time using exponential space. The total running time of their algorithm is quadratic in the number of solutions, and the existence of an algorithm whose complexity depends only linearly on the number of solutions remained open. We close this question by providing a polynomial delay algorithm for the problem which, moreover, uses polynomial space. Our algorithm relies on Proximity Search, a framework recently introduced by Conte and Uno [Conte and Uno, 2019] (STOC 2019) which has proven powerful for obtaining polynomial delay algorithms, but which generally requires exponential space. In order to obtain a polynomial space algorithm for our problem, we introduce a new general method called canonical path reconstruction to design polynomial delay and polynomial space algorithms based on proximity search.

Cite as

Caroline Brosse, Vincent Limouzy, and Arnaud Mary. Polynomial Delay Algorithm for Minimal Chordal Completions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 33:1-33:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{brosse_et_al:LIPIcs.ICALP.2022.33,
  author =	{Brosse, Caroline and Limouzy, Vincent and Mary, Arnaud},
  title =	{{Polynomial Delay Algorithm for Minimal Chordal Completions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{33:1--33:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.33},
  URN =		{urn:nbn:de:0030-drops-163740},
  doi =		{10.4230/LIPIcs.ICALP.2022.33},
  annote =	{Keywords: Graph Algorithm, Algorithmic Enumeration, Minimal chordal completions}
}
Document
Track A: Algorithms, Complexity and Games
Unique Assembly Verification in Two-Handed Self-Assembly

Authors: David Caballero, Timothy Gomez, Robert Schweller, and Tim Wylie


Abstract
One of the most fundamental and well-studied problems in tile self-assembly is the Unique Assembly Verification (UAV) problem. This algorithmic problem asks whether a given tile system uniquely assembles a specific assembly. The complexity of this problem in the 2-Handed Assembly Model (2HAM) at a constant temperature has been a long-standing open problem since the model was introduced. Previously, only membership in the class coNP was known, and that the problem is in P if the temperature is one (τ = 1). The problem is known to be hard for many generalizations of the model, such as allowing one step into the third dimension or allowing the temperature of the system to be a variable, but the most fundamental version has remained open. In this paper, we prove that the UAV problem in the 2HAM is coNP-hard even with a small constant temperature (τ = 2), finally settling the complexity of this problem (open since 2013). Further, this result proves that UAV in the staged self-assembly model is coNP-complete with a single bin and stage (open since 2007), and that UAV in the q-tile model is also coNP-complete (open since 2004). We reduce from Monotone Planar 3-SAT with Neighboring Variable Pairs, a special case of 3-SAT recently proven to be NP-hard. We accompany this reduction with a positive result showing that UAV is solvable in polynomial time under the promise that the given target assembly has a tree-shaped bond graph, i.e., contains no cycles. We provide an 𝒪(n⁵) algorithm for UAV on tree-bonded assemblies when the temperature is fixed to 2, and an 𝒪(n⁵ log τ) time algorithm when the temperature is part of the input.
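The positive result hinges on the target assembly's bond graph being a tree; checking that promise is itself straightforward, e.g. with a union-find cycle test. The sketch below is only this preliminary check (illustrative names), not the paper's 𝒪(n⁵) verification algorithm.

def bond_graph_is_tree(num_tiles, bonds):
    """Return True iff the bond graph (tiles as vertices, bonds as edges)
    is connected and acyclic, i.e. a tree."""
    parent = list(range(num_tiles))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in bonds:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this bond closes a cycle
        parent[ru] = rv
    return len(bonds) == num_tiles - 1      # acyclic + n-1 edges implies connected

# usage: four tiles bonded in a path vs. in a cycle
print(bond_graph_is_tree(4, [(0, 1), (1, 2), (2, 3)]))          # True  (a path)
print(bond_graph_is_tree(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False (a cycle)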

Cite as

David Caballero, Timothy Gomez, Robert Schweller, and Tim Wylie. Unique Assembly Verification in Two-Handed Self-Assembly. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 34:1-34:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{caballero_et_al:LIPIcs.ICALP.2022.34,
  author =	{Caballero, David and Gomez, Timothy and Schweller, Robert and Wylie, Tim},
  title =	{{Unique Assembly Verification in Two-Handed Self-Assembly}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{34:1--34:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.34},
  URN =		{urn:nbn:de:0030-drops-163751},
  doi =		{10.4230/LIPIcs.ICALP.2022.34},
  annote =	{Keywords: self-assembly, unique assembly verification, 2-handed assembly model}
}
Document
Track A: Algorithms, Complexity and Games
Pairwise Reachability Oracles and Preservers Under Failures

Authors: Diptarka Chakraborty, Kushagra Chatterjee, and Keerti Choudhary


Abstract
In this paper, we consider reachability oracles and reachability preservers for directed graphs/networks prone to edge/node failures. Let G = (V, E) be a directed graph on n nodes, and P ⊆ V × V be a set of vertex pairs in G. We present the first non-trivial constructions of single and dual fault-tolerant pairwise reachability oracles with constant query time. Furthermore, we provide extremal bounds for sparse fault-tolerant reachability preservers, resilient to two or more failures. Prior to this work, such oracles and reachability preservers were widely studied for the special scenario of single-source and all-pairs settings. However, for the scenario of arbitrary pairs, no prior (non-trivial) results were known for dual (or more) failures, except those implied from the single-source setting. One of the main questions is whether it is possible to beat the O(n|P|) size bound (derived from the single-source setting) for reachability oracles and preservers for dual failures (or the O(2^k n|P|) bound for k failures). We answer this question affirmatively. Below we summarize our contributions.
- For an n-vertex directed graph G = (V, E) and P ⊆ V × V, we present a construction of an O(n√|P|)-sized dual fault-tolerant pairwise reachability oracle with constant query time. We further provide a matching (up to the word size) lower bound of Ω(n√|P|) on the size (in bits) of the oracle for the dual fault setting, thereby proving that our oracle is (near-)optimal.
- Next, we provide a construction of an O(n + min{|P|√n, n√|P|})-sized oracle with O(1) query time, resilient to single node/edge failure. In particular, for |P| bounded by O(√n) this yields an oracle of just O(n) size. We complement the upper bound with a lower bound of Ω(n^{2/3}|P|^{1/2}) (in bits), refuting the possibility of a linear-sized oracle for P of size ω(n^{2/3}).
- We also present a construction of O(n^{4/3}|P|^{1/3})-sized pairwise reachability preservers resilient to dual edge/vertex failures. Previously, such preservers were known to exist only under single failure and had O(n + min{|P|√n, n√|P|}) size [Chakraborty and Choudhary, ICALP'20]. We also show a lower bound of Ω(n√|P|) edges on the size of dual fault-tolerant reachability preservers, thereby providing a sharp gap between single and dual fault-tolerant reachability preservers for |P| = o(n).
- Finally, we provide a generic pairwise reachability preserver construction that provides an o(2^k n|P|)-sized subgraph resilient to k failures, for any k ≥ 1. Before this work, we only knew of an O(2^k n|P|) bound implied from the single-source setting [Baswana, Choudhary, and Roditty, STOC'16].
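For contrast with the compact oracles above, the trivial baseline answers a single-failure pairwise reachability query by re-running a graph search that skips the failed edge; this linear-time-per-query sketch (illustrative code, not from the paper) is exactly what constant-query-time data structures avoid.

from collections import deque

def reaches_avoiding(adj, s, t, failed_edge):
    """BFS from s to t in a directed graph, skipping one failed edge (u, v).
    adj: dict u -> iterable of out-neighbours of u."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if (u, v) == failed_edge or v in seen:
                continue
            seen.add(v)
            queue.append(v)
    return False

# usage: s -> a -> t with a shortcut s -> t; either single edge failure leaves a path
adj = {"s": ["a", "t"], "a": ["t"]}
print(reaches_avoiding(adj, "s", "t", failed_edge=("s", "t")))   # True, via a
print(reaches_avoiding(adj, "s", "t", failed_edge=("a", "t")))   # True, via the direct edge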

Cite as

Diptarka Chakraborty, Kushagra Chatterjee, and Keerti Choudhary. Pairwise Reachability Oracles and Preservers Under Failures. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chakraborty_et_al:LIPIcs.ICALP.2022.35,
  author =	{Chakraborty, Diptarka and Chatterjee, Kushagra and Choudhary, Keerti},
  title =	{{Pairwise Reachability Oracles and Preservers Under Failures}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.35},
  URN =		{urn:nbn:de:0030-drops-163768},
  doi =		{10.4230/LIPIcs.ICALP.2022.35},
  annote =	{Keywords: Fault-tolerant, Reachability Oracle, Reachability Preservers, Graph sparsification, Lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Separations Between Combinatorial Measures for Transitive Functions

Authors: Sourav Chakraborty, Chandrima Kayal, and Manaswi Paraashar


Abstract
The role of symmetry in Boolean functions f:{0, 1}ⁿ → {0, 1} has been extensively studied in complexity theory. For example, symmetric functions, that is, functions that are invariant under the action of 𝖲_n, form an important class of functions in the study of Boolean functions. A function f:{0, 1}ⁿ → {0, 1} is called transitive (or weakly-symmetric) if there exists a transitive subgroup 𝖦 of 𝖲_n such that f is invariant under the action of 𝖦. In other words, the value of the function remains unchanged even after the input bits of f are moved around according to some permutation σ ∈ 𝖦. Understanding various complexity measures of transitive functions has been a rich area of research for the past few decades. This work studies transitive functions in light of several combinatorial measures. The question that we address in this paper is: what are the maximum separations between various pairs of combinatorial measures for transitive functions? Such a study for general Boolean functions has been going on for many years. Aaronson et al. (STOC, 2021) have nicely compiled the current best-known results for general Boolean functions. But before this paper, no such systematic study had been done for the case of transitive functions. Separations between a pair of combinatorial measures are shown by constructing interesting functions that demonstrate the separation. Over the past three decades, various interesting classes of functions have been designed for this purpose. In this context, one of the celebrated classes of functions is that of "pointer functions". Ambainis et al. (JACM, 2017) constructed several functions, which are modifications of the pointer function in Göös et al. (SICOMP, 2018 / FOCS, 2015), to demonstrate the separation between various pairs of measures. In the last few years, pointer functions have been used to show separations between various other pairs of measures (e.g., Mukhopadhyay et al. (FSTTCS, 2015), Ben-David et al. (ITCS, 2017), Göös et al. (ToCT, 2018 / ICALP, 2017)). However, the pointer functions themselves are not transitive. Based on the various kinds of pointer functions, we construct new transitive functions, which we use to demonstrate similar separations between various pairs of combinatorial measures as demonstrated by the original pointer functions. Our construction of transitive functions depends crucially on the construction of particular classes of transitive groups whose actions, though involved, help to preserve certain structural features of the input strings. The transitive groups we construct may be of independent interest in other areas of mathematics and theoretical computer science. We summarize the current knowledge of relations between various combinatorial measures of transitive functions in a table similar to the table compiled by Aaronson et al. (STOC, 2021) for general functions.

Cite as

Sourav Chakraborty, Chandrima Kayal, and Manaswi Paraashar. Separations Between Combinatorial Measures for Transitive Functions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 36:1-36:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chakraborty_et_al:LIPIcs.ICALP.2022.36,
  author =	{Chakraborty, Sourav and Kayal, Chandrima and Paraashar, Manaswi},
  title =	{{Separations Between Combinatorial Measures for Transitive Functions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{36:1--36:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.36},
  URN =		{urn:nbn:de:0030-drops-163779},
  doi =		{10.4230/LIPIcs.ICALP.2022.36},
  annote =	{Keywords: Transitive functions, Combinatorial complexity of Boolean functions}
}
Document
Track A: Algorithms, Complexity and Games
Approximating k-Edge-Connected Spanning Subgraphs via a Near-Linear Time LP Solver

Authors: Parinya Chalermsook, Chien-Chung Huang, Danupon Nanongkai, Thatchaphol Saranurak, Pattara Sukprasert, and Sorrachai Yingchareonthawornchai


Abstract
In the k-edge-connected spanning subgraph (kECSS) problem, our goal is to compute a minimum-cost sub-network that is resilient against up to k link failures: Given an n-node m-edge graph with a cost function on the edges, our goal is to compute a minimum-cost k-edge-connected spanning subgraph. This NP-hard problem generalizes the minimum spanning tree problem and is the "uniform case" of a much broader class of survivable network design problems (SNDP). A factor of two has remained the best approximation ratio for polynomial-time algorithms for the whole class of SNDP, even for the special case of 2ECSS. The fastest 2-approximation algorithm is, however, rather slow, taking O(mnk) time [Khuller, Vishkin, STOC'92]. A faster time complexity of O(n²) can be obtained, but with a higher approximation guarantee of (2k-1) [Gabow, Goemans, Williamson, IPCO'93]. Our main contribution is an algorithm that (1+ε)-approximates the optimal fractional solution in Õ(m/ε²) time (independent of k), which can be turned into a (2+ε)-approximation algorithm that runs in time Õ(m/ε² + k²n^{1.5}/ε²) for (integral) kECSS; this improves the running time of the aforementioned results while keeping the approximation ratio arbitrarily close to a factor of two.
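The fractional relaxation being (1+ε)-approximated is the standard cut LP for kECSS (a textbook formulation, restated here for context):

\[
\min \sum_{e \in E} c_e x_e
\quad \text{s.t.} \quad
\sum_{e \in \delta(S)} x_e \ge k \ \ \forall\, \emptyset \ne S \subsetneq V,
\qquad
0 \le x_e \le 1 \ \ \forall e \in E,
\]

where \(\delta(S)\) denotes the edges crossing the cut \((S, V\setminus S)\); rounding a near-optimal fractional solution is what yields the (2+ε)-approximate integral subgraph.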

Cite as

Parinya Chalermsook, Chien-Chung Huang, Danupon Nanongkai, Thatchaphol Saranurak, Pattara Sukprasert, and Sorrachai Yingchareonthawornchai. Approximating k-Edge-Connected Spanning Subgraphs via a Near-Linear Time LP Solver. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chalermsook_et_al:LIPIcs.ICALP.2022.37,
  author =	{Chalermsook, Parinya and Huang, Chien-Chung and Nanongkai, Danupon and Saranurak, Thatchaphol and Sukprasert, Pattara and Yingchareonthawornchai, Sorrachai},
  title =	{{Approximating k-Edge-Connected Spanning Subgraphs via a Near-Linear Time LP Solver}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.37},
  URN =		{urn:nbn:de:0030-drops-163785},
  doi =		{10.4230/LIPIcs.ICALP.2022.37},
  annote =	{Keywords: Approximation Algorithms, Data Structures}
}
Document
Track A: Algorithms, Complexity and Games
Polylogarithmic Sketches for Clustering

Authors: Moses Charikar and Erik Waingarten


Abstract
Given n points in 𝓁_p^d, we consider the problem of partitioning points into k clusters with associated centers. The cost of a clustering is the sum of p-th powers of distances of points to their cluster centers. For p ∈ [1,2], we design sketches of size poly(log(nd),k,1/ε) such that the cost of the optimal clustering can be estimated to within factor 1+ε, despite the fact that the compressed representation does not contain enough information to recover the cluster centers or the partition into clusters. This leads to a streaming algorithm for estimating the clustering cost with space poly(log(nd),k,1/ε). We also obtain a distributed memory algorithm, where the n points are arbitrarily partitioned amongst m machines, each of which sends information to a central party who then computes an approximation of the clustering cost. Prior to this work, no such streaming or distributed-memory algorithm was known with sublinear dependence on d for p ∈ [1,2).
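Concretely, the quantity whose value the sketch preserves is, for a point set \(X \subset \ell_p^d\) and a center set \(C = \{c_1,\dots,c_k\}\),

\[
\mathrm{cost}_p(X, C) \;=\; \sum_{x \in X} \min_{1 \le j \le k} \lVert x - c_j \rVert_p^{\,p},
\]

and the guarantee is a \((1\pm\varepsilon)\) estimate of \(\min_C \mathrm{cost}_p(X,C)\) from a summary of size \(\mathrm{poly}(\log(nd), k, 1/\varepsilon)\), even though the summary does not determine the optimal centers or partition.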

Cite as

Moses Charikar and Erik Waingarten. Polylogarithmic Sketches for Clustering. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 38:1-38:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{charikar_et_al:LIPIcs.ICALP.2022.38,
  author =	{Charikar, Moses and Waingarten, Erik},
  title =	{{Polylogarithmic Sketches for Clustering}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{38:1--38:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.38},
  URN =		{urn:nbn:de:0030-drops-163793},
  doi =		{10.4230/LIPIcs.ICALP.2022.38},
  annote =	{Keywords: sketching, clustering}
}
Document
Track A: Algorithms, Complexity and Games
Approximation Algorithms for Interdiction Problem with Packing Constraints

Authors: Lin Chen, Xiaoyu Wu, and Guochuan Zhang


Abstract
We study a bilevel optimization problem which is a zero-sum Stackelberg game. In this problem, there are two players, a leader and a follower, who pick items from a common set. The leader and the follower each have their own (multi-dimensional) budget. Each item is associated with a profit, which is the same for the leader and the follower, and consumes the leader’s (follower’s) budget if it is selected by the leader (follower). The leader and the follower select items in a sequential way: First, the leader selects items within the leader’s budget. Then the follower selects items from the remaining items within the follower’s budget. The goal of the leader is to minimize the maximum profit that the follower can obtain. Let s_A and s_B be the dimensions of the leader’s and follower’s budgets, respectively. A special case of our problem is the bilevel knapsack problem studied by Caprara et al. [SIAM Journal on Optimization, 2014], where s_A = s_B = 1. We consider the general problem and obtain an (s_B+ε)-approximation algorithm when s_A and s_B are both constant. In particular, if s_B = 1, our algorithm implies a PTAS for the bilevel knapsack problem, which is the first 𝒪(1)-approximation algorithm for this problem. We complement our result by showing that there does not exist any (4/3-ε)-approximation algorithm even if s_A = 1 and s_B = 2. We also consider a variant of our problem with resource augmentation when s_A and s_B are both part of the input. We obtain an 𝒪(1)-approximation algorithm with 𝒪(1)-resource augmentation; that is, we give an algorithm that returns a solution which exceeds the given leader’s budget by an 𝒪(1) factor, and the objective value achieved by the solution is 𝒪(1) times the optimal objective value that respects the leader’s budget.
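
The following brute-force Python sketch is only meant to make the min-max objective concrete in the one-dimensional case s_A = s_B = 1 (the bilevel knapsack setting); it enumerates all subsets, so it is exponential-time and is not the approximation algorithm of the paper, and all names and the toy instance are hypothetical.

from itertools import chain, combinations

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def follower_best(items, profit, f_cost, f_budget):
    # follower maximizes total profit subject to its (1-dimensional) budget
    best = 0
    for sub in powerset(items):
        if sum(f_cost[i] for i in sub) <= f_budget:
            best = max(best, sum(profit[i] for i in sub))
    return best

def leader_best(items, profit, l_cost, f_cost, l_budget, f_budget):
    # leader removes items within its budget to minimize the follower's best profit
    best = None
    for sub in powerset(items):
        if sum(l_cost[i] for i in sub) <= l_budget:
            remaining = [i for i in items if i not in sub]
            val = follower_best(remaining, profit, f_cost, f_budget)
            if best is None or val < best:
                best = val
    return best

# hypothetical toy instance with three items
profit = [6, 5, 4]
l_cost = [2, 1, 3]
f_cost = [1, 2, 2]
print(leader_best(range(3), profit, l_cost, f_cost, l_budget=2, f_budget=3))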

Cite as

Lin Chen, Xiaoyu Wu, and Guochuan Zhang. Approximation Algorithms for Interdiction Problem with Packing Constraints. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2022.39,
  author =	{Chen, Lin and Wu, Xiaoyu and Zhang, Guochuan},
  title =	{{Approximation Algorithms for Interdiction Problem with Packing Constraints}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.39},
  URN =		{urn:nbn:de:0030-drops-163806},
  doi =		{10.4230/LIPIcs.ICALP.2022.39},
  annote =	{Keywords: Bilevel Integer Programming, Interdiction Constraints, Knapsack}
}
Document
Track A: Algorithms, Complexity and Games
Online Weighted Cardinality Joint Replenishment Problem with Delay

Authors: Ryder Chen, Jahanvi Khatkar, and Seeun William Umboh


Abstract
We study a generalization of the classic Online Joint Replenishment Problem (JRP) with Delays that we call the Online Weighted Cardinality JRP with Delays. The JRP is an extensively studied inventory management problem wherein requests for different item types arrive at various points in time. A request is served by ordering its corresponding item type. The cost of serving a set of requests depends on the item types ordered. Furthermore, each request incurs a delay penalty while it is left unserved. The objective is to minimise the total service and delay costs. In the Weighted Cardinality JRP, each item type has a positive weight and the cost of ordering is a non-decreasing, concave function of the total weight of the item types ordered. This problem was first considered in the offline setting by Cheung et al. (2015), but nothing was previously known in the online setting. Our main result is a deterministic, constant-competitive algorithm for this problem.
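
As a small illustration of the objective (not of the competitive algorithm itself), the sketch below evaluates the total ordering plus delay cost of a given order schedule; it assumes, purely for simplicity, a linear delay penalty and a user-supplied concave ordering cost, and all names and data are hypothetical.

def jrp_cost(requests, orders, weight, order_cost):
    # requests: list of (arrival_time, item_type)
    # orders:   list of (order_time, set_of_item_types)
    # weight:   dict item_type -> positive weight
    # order_cost: non-decreasing, concave function of the total ordered weight
    total = 0.0
    for t, items in orders:
        total += order_cost(sum(weight[i] for i in items))          # service cost
    for arrival, item in requests:
        serve_times = [t for t, items in orders if t >= arrival and item in items]
        if not serve_times:
            raise ValueError("request never served")
        total += min(serve_times) - arrival                          # linear delay penalty (assumption)
    return total

# hypothetical toy schedule
weight = {"a": 1.0, "b": 2.0}
orders = [(3.0, {"a"}), (5.0, {"a", "b"})]
requests = [(1.0, "a"), (2.0, "b")]
print(jrp_cost(requests, orders, weight, order_cost=lambda w: 1.0 + w ** 0.5))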

Cite as

Ryder Chen, Jahanvi Khatkar, and Seeun William Umboh. Online Weighted Cardinality Joint Replenishment Problem with Delay. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 40:1-40:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2022.40,
  author =	{Chen, Ryder and Khatkar, Jahanvi and Umboh, Seeun William},
  title =	{{Online Weighted Cardinality Joint Replenishment Problem with Delay}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{40:1--40:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.40},
  URN =		{urn:nbn:de:0030-drops-163815},
  doi =		{10.4230/LIPIcs.ICALP.2022.40},
  annote =	{Keywords: Online Algorithms, Delay, Joint Replenishment Problem}
}
Document
Track A: Algorithms, Complexity and Games
Limitations of Local Quantum Algorithms on Random MAX-k-XOR and Beyond

Authors: Chi-Ning Chou, Peter J. Love, Juspreet Singh Sandhu, and Jonathan Shi


Abstract
We introduce a notion of generic local algorithm, which strictly generalizes existing frameworks of local algorithms such as factors of i.i.d. by capturing local quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA). Motivated by a question of Farhi et al. [arXiv:1910.08187, 2019], we then show limitations of generic local algorithms, including QAOA, on random instances of constraint satisfaction problems (CSPs). Specifically, we show that any generic local algorithm whose assignment to a vertex depends only on a local neighborhood with o(n) other vertices (such as the QAOA at depth less than εlog(n)) cannot approximate Boolean CSPs arbitrarily well if the problem satisfies a geometric property from statistical physics called the coupled overlap-gap property (OGP) [Chen et al., Annals of Probability, 47(3), 2019]. We show that the random MAX-k-XOR problem has this property when k ≥ 4 is even by extending the corresponding result for diluted k-spin glasses. Our concentration lemmas confirm a conjecture of Brandao et al. [arXiv:1812.04170, 2018] asserting that the landscape independence of QAOA extends to logarithmic depth; in other words, for every fixed choice of QAOA angle parameters, the algorithm at logarithmic depth performs almost equally well on almost all instances. One of these lemmas is a strengthening of McDiarmid’s inequality, applicable when the random variables have a highly biased distribution, and may be of independent interest.

Cite as

Chi-Ning Chou, Peter J. Love, Juspreet Singh Sandhu, and Jonathan Shi. Limitations of Local Quantum Algorithms on Random MAX-k-XOR and Beyond. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 41:1-41:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{chou_et_al:LIPIcs.ICALP.2022.41,
  author =	{Chou, Chi-Ning and Love, Peter J. and Sandhu, Juspreet Singh and Shi, Jonathan},
  title =	{{Limitations of Local Quantum Algorithms on Random MAX-k-XOR and Beyond}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{41:1--41:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.41},
  URN =		{urn:nbn:de:0030-drops-163822},
  doi =		{10.4230/LIPIcs.ICALP.2022.41},
  annote =	{Keywords: Quantum Algorithms, Spin Glasses, Hardness of Approximation, Local Algorithms, Concentration Inequalities, Overlap Gap Property}
}
Document
Track A: Algorithms, Complexity and Games
Fully-Dynamic α + 2 Arboricity Decompositions and Implicit Colouring

Authors: Aleksander B. G. Christiansen and Eva Rotenberg


Abstract
The arboricity α of a graph is the smallest number of forests necessary to cover its edges, and an arboricity decomposition of a graph is a decomposition of its edges into forests. The best near-linear time algorithm for arboricity decomposition guarantees at most α+2 forests if the graph has arboricity α (Blumenstock and Fischer [Markus Blumenstock and Frank Fischer, 2020]). In this paper, we study arboricity decomposition for dynamic graphs, that is, graphs that are subject to insertions and deletions of edges. We give an algorithm that, provided the arboricity of the dynamic graph never exceeds α, maintains an α+2 arboricity decomposition of the graph in poly(log n, α) update time, thus matching the number of forests currently obtainable in near-linear time for static (non-changing) graphs. Our construction goes via dynamic bounded out-degree orientations, and we present a fully-dynamic algorithm that explicitly orients the edges of the dynamic graph, such that no vertex has an out-degree exceeding ⌊(1+ε)α⌋ + 2. Our algorithm is deterministic and has a worst-case update time of O(ε^{-6}α² log³ n). The state-of-the-art explicit, deterministic, worst-case algorithm for bounded out-degree orientations maintains a β⋅α + log_β n out-orientation in O(β²α² + βα log_β n) time [Tsvi Kopelowitz et al., 2014]. As a consequence, we get an algorithm that maintains an implicit vertex colouring with 4⋅2^α colours, in amortised poly-log n update time, and with O(α log n) worst-case query time. Thus, at the expense of log n-factors in the update time, we improve on the number of colours from 2^O(α) to O(2^α) compared to the state-of-the-art for implicit dynamic colouring [Monika Henzinger et al., 2020].
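
For readers unfamiliar with the object being maintained, the following static Python sketch produces some arboricity decomposition by greedily assigning each edge to the first forest in which it closes no cycle (using union-find); it only illustrates the definition, is not the paper's dynamic algorithm, and carries no α+2 guarantee.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                                     # would close a cycle
        self.parent[ra] = rb
        return True

def greedy_forest_decomposition(n, edges):
    forests = []                                             # each entry: (DSU, list of edges)
    for u, v in edges:
        for dsu, forest_edges in forests:
            if dsu.union(u, v):                              # edge fits in this forest
                forest_edges.append((u, v))
                break
        else:                                                # open a new forest
            dsu = DSU(n)
            dsu.union(u, v)
            forests.append((dsu, [(u, v)]))
    return [edge_list for _, edge_list in forests]

# toy usage: a triangle plus a pendant edge needs two forests
print(greedy_forest_decomposition(4, [(0, 1), (1, 2), (2, 0), (0, 3)]))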

Cite as

Aleksander B. G. Christiansen and Eva Rotenberg. Fully-Dynamic α + 2 Arboricity Decompositions and Implicit Colouring. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{christiansen_et_al:LIPIcs.ICALP.2022.42,
  author =	{Christiansen, Aleksander B. G. and Rotenberg, Eva},
  title =	{{Fully-Dynamic \alpha + 2 Arboricity Decompositions and Implicit Colouring}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.42},
  URN =		{urn:nbn:de:0030-drops-163835},
  doi =		{10.4230/LIPIcs.ICALP.2022.42},
  annote =	{Keywords: Dynamic graphs, bounded arboricity, graph colouring, data structures}
}
Document
Track A: Algorithms, Complexity and Games
Expander Random Walks: The General Case and Limitations

Authors: Gil Cohen, Dor Minzer, Shir Peleg, Aaron Potechin, and Amnon Ta-Shma


Abstract
Cohen, Peri and Ta-Shma [Gil Cohen et al., 2021] considered the following question: Assume the vertices of an expander graph are labelled by ±1. What "test" functions f : {±1}^t → {±1} can or cannot distinguish t independent samples from those obtained by a random walk? [Gil Cohen et al., 2021] considered only balanced labellings, and proved that for all symmetric functions the distinguishability goes down to zero with the spectral gap λ of the expander G. In addition, [Gil Cohen et al., 2021] showed that functions computable by AC⁰ circuits are fooled by expanders with vanishing spectral expansion. We continue the study of this question. We generalize the result to all labellings, not merely balanced ones. We also improve the upper bound on the error of symmetric functions. More importantly, we give a matching lower bound and show a symmetric function whose distinguishability goes down to zero with λ but not with t. Moreover, we prove a lower bound on the error of functions in AC⁰; in particular, we prove that a random walk on expanders with constant spectral gap does not fool AC⁰.

Cite as

Gil Cohen, Dor Minzer, Shir Peleg, Aaron Potechin, and Amnon Ta-Shma. Expander Random Walks: The General Case and Limitations. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.ICALP.2022.43,
  author =	{Cohen, Gil and Minzer, Dor and Peleg, Shir and Potechin, Aaron and Ta-Shma, Amnon},
  title =	{{Expander Random Walks: The General Case and Limitations}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.43},
  URN =		{urn:nbn:de:0030-drops-163849},
  doi =		{10.4230/LIPIcs.ICALP.2022.43},
  annote =	{Keywords: Expander Graphs, Random Walks, Lower Bounds}
}
Document
Track A: Algorithms, Complexity and Games
LCC and LDC: Tailor-Made Distance Amplification and a Refined Separation

Authors: Gil Cohen and Tal Yankovitz


Abstract
The Alon-Edmonds-Luby distance amplification procedure (FOCS 1995) is an algorithm that transforms a code with vanishing distance to a code with constant distance. AEL was invoked by Kopparty, Meir, Ron-Zewi, and Saraf (J. ACM 2017) for obtaining their state-of-the-art LDCs, LCCs and LTCs. Cohen and Yankovitz (CCC 2021) devised a procedure that can amplify inverse-polynomial distances, exponentially extending the regime of distances that can be amplified by AEL. However, the improved procedure only works for LDCs and assumes rate 1-1/(poly log n). In this work we devise a distance amplification procedure for LCCs with inverse-polynomial distances even for vanishing rate 1/(poly log log n). For LDCs, we obtain a more modest improvement and require rate 1-1/(poly log log n). Thus, the tables have turned and it is now LCCs that can be better amplified. Our key idea for accomplishing this, deviating from prior work, is to tailor the distance amplification procedure to the code at hand. Our second result concerns the relation between linear LDCs and LCCs. We prove the existence of linear LDCs that are not LCCs, qualitatively extending a separation by Kaufman and Viderman (RANDOM 2010).

Cite as

Gil Cohen and Tal Yankovitz. LCC and LDC: Tailor-Made Distance Amplification and a Refined Separation. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 44:1-44:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.ICALP.2022.44,
  author =	{Cohen, Gil and Yankovitz, Tal},
  title =	{{LCC and LDC: Tailor-Made Distance Amplification and a Refined Separation}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{44:1--44:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.44},
  URN =		{urn:nbn:de:0030-drops-163858},
  doi =		{10.4230/LIPIcs.ICALP.2022.44},
  annote =	{Keywords: Locally Correctable Codes, Locally Decodable Codes, Distance Amplifications}
}
Document
Track A: Algorithms, Complexity and Games
Metastability of the Potts Ferromagnet on Random Regular Graphs

Authors: Amin Coja-Oghlan, Andreas Galanis, Leslie Ann Goldberg, Jean Bernoulli Ravelomanana, Daniel Štefankovič, and Eric Vigoda


Abstract
We study the performance of Markov chains for the q-state ferromagnetic Potts model on random regular graphs. While the cases of the grid and the complete graph are by now well-understood, the case of random regular graphs has resisted a detailed analysis and, in fact, even analysing the properties of the Potts distribution has remained elusive. It is conjectured that the performance of Markov chains is dictated by metastability phenomena, i.e., the presence of "phases" (clusters) in the sample space where Markov chains with local update rules, such as the Glauber dynamics, are bound to take exponential time to escape, and therefore cause slow mixing. The phases that are believed to drive these metastability phenomena in the case of the Potts model emerge as local, rather than global, maxima of the so-called Bethe functional, and previous approaches of analysing these phases based on optimisation arguments fall short of the task. Our first contribution is to detail the emergence of the metastable phases for the q-state Potts model on the d-regular random graph for all integers q,d ≥ 3, and establish that for an interval of temperatures, delineated by the uniqueness and a broadcasting threshold on the d-regular tree, the two phases coexist. The proofs are based on a conceptual connection between spatial properties and the structure of the Potts distribution on the random regular graph, rather than complicated moment calculations. This significantly refines earlier results by Helmuth, Jenssen, and Perkins who had established phase coexistence for a small interval around the so-called ordered-disordered threshold (via different arguments) that applied for large q and d ≥ 5. Based on our new structural understanding of the model, we obtain various algorithmic consequences. We first complement recent fast mixing results for Glauber dynamics by Blanca and Gheissari below the uniqueness threshold, showing an exponential lower bound on the mixing time above the uniqueness threshold. Then, we obtain tight results even for the non-local and more elaborate Swendsen-Wang chain, where we establish slow mixing/metastability for the whole interval of temperatures where the chain is conjectured to mix slowly on the random regular graph. The key is to bound the conductance of the chains using a random graph "planting" argument combined with delicate bounds on random-graph percolation.
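
As background for the chain under discussion, here is a minimal Python sketch of single-site Glauber dynamics for the q-state ferromagnetic Potts model at inverse temperature β on an arbitrary graph: a uniformly random vertex is resampled with probability proportional to exp(β · number of neighbours of each colour). This is only the textbook chain whose mixing behaviour the paper analyses, not any part of its proofs, and the parameter choices below are arbitrary.

import math, random
from collections import Counter

def glauber_step(graph, colors, q, beta):
    # graph: adjacency lists; one update of the Glauber dynamics for the Potts model
    v = random.randrange(len(graph))
    counts = Counter(colors[u] for u in graph[v])
    weights = [math.exp(beta * counts[c]) for c in range(q)]
    colors[v] = random.choices(range(q), weights=weights)[0]

def run_glauber(graph, q, beta, steps, seed=None):
    random.seed(seed)
    colors = [random.randrange(q) for _ in range(len(graph))]
    for _ in range(steps):
        glauber_step(graph, colors, q, beta)
    return colors

# toy usage on a triangle with q = 3 colours (hypothetical parameters)
print(run_glauber([[1, 2], [0, 2], [0, 1]], q=3, beta=1.0, steps=1000, seed=0))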

Cite as

Amin Coja-Oghlan, Andreas Galanis, Leslie Ann Goldberg, Jean Bernoulli Ravelomanana, Daniel Štefankovič, and Eric Vigoda. Metastability of the Potts Ferromagnet on Random Regular Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 45:1-45:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{cojaoghlan_et_al:LIPIcs.ICALP.2022.45,
  author =	{Coja-Oghlan, Amin and Galanis, Andreas and Goldberg, Leslie Ann and Ravelomanana, Jean Bernoulli and \v{S}tefankovi\v{c}, Daniel and Vigoda, Eric},
  title =	{{Metastability of the Potts Ferromagnet on Random Regular Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{45:1--45:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.45},
  URN =		{urn:nbn:de:0030-drops-163865},
  doi =		{10.4230/LIPIcs.ICALP.2022.45},
  annote =	{Keywords: Markov chains, sampling, random regular graph, Potts model}
}
Document
Track A: Algorithms, Complexity and Games
On Computing the k-Shortcut Fréchet Distance

Authors: Jacobus Conradi and Anne Driemel


Abstract
The Fréchet distance is a popular measure of dissimilarity for polygonal curves. It is defined as a min-max formulation that considers all direction-preserving continuous bijections of the two curves. Because of its susceptibility to noise, Driemel and Har-Peled introduced the shortcut Fréchet distance in 2012, where one is allowed to take shortcuts along one of the curves, similar to the edit distance for sequences. We analyse the parameterized version of this problem, where the number of shortcuts is bounded by a parameter k. The corresponding decision problem can be stated as follows: Given two polygonal curves T and B of at most n vertices, a parameter k and a distance threshold δ, is it possible to introduce k shortcuts along B such that the Fréchet distance of the resulting curve and the curve T is at most δ? We study this problem for polygonal curves in the plane. We provide a complexity analysis for this problem with the following results: (i) assuming the Exponential Time Hypothesis (ETH), there exists no algorithm with running time bounded by n^o(k); (ii) there exists a decision algorithm with running time in O(k n^{2k+2} log n). In contrast, we also show that efficient approximate decider algorithms are possible, even when k is large. We present a (3+ε)-approximate decider algorithm with running time in O(k n² log² n) for fixed ε. In addition, we can show that, if k is a constant and the two curves are c-packed for some constant c, then the approximate decider algorithm runs in near-linear time.

Cite as

Jacobus Conradi and Anne Driemel. On Computing the k-Shortcut Fréchet Distance. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 46:1-46:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{conradi_et_al:LIPIcs.ICALP.2022.46,
  author =	{Conradi, Jacobus and Driemel, Anne},
  title =	{{On Computing the k-Shortcut Fr\'{e}chet Distance}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{46:1--46:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.46},
  URN =		{urn:nbn:de:0030-drops-163875},
  doi =		{10.4230/LIPIcs.ICALP.2022.46},
  annote =	{Keywords: Fr\'{e}chet distance, Partial similarity, Conditional lower bounds, Approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Algorithms for Geometric Steiner Forest

Authors: Artur Czumaj, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý


Abstract
We consider an important generalization of the Steiner tree problem, the Steiner forest problem, in the Euclidean plane: the input is a multiset X ⊆ ℝ², partitioned into k color classes C₁, C₂, …, Cₖ ⊆ X. The goal is to find a minimum-cost Euclidean graph G such that every color class Cᵢ is connected in G. We study this Steiner forest problem in the streaming setting, where the stream consists of insertions and deletions of points to X. Each input point x ∈ X arrives with its color color(x) ∈ [k], and as usual for dynamic geometric streams, the input is restricted to the discrete grid {0, …, Δ}². We design a single-pass streaming algorithm that uses poly(k ⋅ log Δ) space and time, and estimates the cost of an optimal Steiner forest solution within ratio arbitrarily close to the famous Euclidean Steiner ratio α₂ (currently 1.1547 ≤ α₂ ≤ 1.214). This approximation guarantee matches the state-of-the-art bound for streaming Steiner tree, i.e., when k = 1. Our approach relies on a novel combination of streaming techniques, like sampling and linear sketching, with the classical Arora-style dynamic-programming framework for geometric optimization problems, which usually requires large memory and has so far not been applied in the streaming setting. We complement our streaming algorithm for the Steiner forest problem with simple arguments showing that any finite approximation requires Ω(k) bits of space.

Cite as

Artur Czumaj, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Streaming Algorithms for Geometric Steiner Forest. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 47:1-47:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{czumaj_et_al:LIPIcs.ICALP.2022.47,
  author =	{Czumaj, Artur and Jiang, Shaofeng H.-C. and Krauthgamer, Robert and Vesel\'{y}, Pavel},
  title =	{{Streaming Algorithms for Geometric Steiner Forest}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{47:1--47:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.47},
  URN =		{urn:nbn:de:0030-drops-163880},
  doi =		{10.4230/LIPIcs.ICALP.2022.47},
  annote =	{Keywords: Steiner forest, streaming, sublinear algorithms, dynamic programming}
}
Document
Track A: Algorithms, Complexity and Games
Improved Reconstruction of Random Geometric Graphs

Authors: Varsha Dani, Josep Díaz, Thomas P. Hayes, and Cristopher Moore


Abstract
Embedding graphs in a geographical or latent space, i.e. inferring locations for vertices in Euclidean space or on a smooth manifold or submanifold, is a common task in network analysis, statistical inference, and graph visualization. We consider the classic model of random geometric graphs where n points are scattered uniformly in a square of area n, and two points have an edge between them if and only if their Euclidean distance is less than r. The reconstruction problem then consists of inferring the vertex positions, up to the symmetries of the square, given only the adjacency matrix of the resulting graph. We give an algorithm that, if r = n^α for α > 0, with high probability reconstructs the vertex positions with a maximum error of O(n^β), where β = 1/2-(4/3)α, as long as α < 3/8; once α ≥ 3/8 we have β = 0 and the error becomes O(√{log n}). This improves over earlier results, which were unable to reconstruct with error less than r. Our method estimates Euclidean distances using a hybrid of graph distances and short-range estimates based on the number of common neighbors. We extend our results to the surface of the sphere in ℝ³ and to hypercubes in any constant dimension.
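
The model itself is easy to instantiate; the sketch below (a hypothetical helper, standard library only) samples n points uniformly in a square of area n and connects two points whenever their Euclidean distance is below r, exactly as in the abstract. Reconstructing the positions from the resulting adjacency matrix is the hard part addressed by the paper.

import math, random

def random_geometric_graph(n, r, seed=None):
    rng = random.Random(seed)
    side = math.sqrt(n)                      # square of area n, as in the abstract
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < r:
                adj[i][j] = adj[j][i] = True
    return pts, adj

# toy usage: radius r = n^alpha for a hypothetical alpha = 0.3
pts, adj = random_geometric_graph(200, r=200 ** 0.3, seed=0)
print(sum(row.count(True) for row in adj) // 2, "edges")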

Cite as

Varsha Dani, Josep Díaz, Thomas P. Hayes, and Cristopher Moore. Improved Reconstruction of Random Geometric Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 48:1-48:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{dani_et_al:LIPIcs.ICALP.2022.48,
  author =	{Dani, Varsha and D{\'\i}az, Josep and Hayes, Thomas P. and Moore, Cristopher},
  title =	{{Improved Reconstruction of Random Geometric Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{48:1--48:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.48},
  URN =		{urn:nbn:de:0030-drops-163897},
  doi =		{10.4230/LIPIcs.ICALP.2022.48},
  annote =	{Keywords: Reconstruction algorithm, distances in RGG, d-dimensional hypercube, 3 dimensional sphere}
}
Document
Track A: Algorithms, Complexity and Games
Improved Approximation Algorithms for Dyck Edit Distance and RNA Folding

Authors: Debarati Das, Tomasz Kociumaka, and Barna Saha


Abstract
The Dyck language, which consists of well-balanced sequences of parentheses, is one of the most fundamental context-free languages. The Dyck edit distance quantifies the number of edits (character insertions, deletions, and substitutions) required to make a given length-n parenthesis sequence well-balanced. RNA Folding involves a similar problem, where a closing parenthesis can match an opening parenthesis of the same type irrespective of their ordering. For example, in RNA Folding, both () and )( are valid matches, whereas the Dyck language only allows () as a match. Both of these problems have been studied extensively in the literature. Using fast matrix multiplication, it is possible to compute their exact solutions in time O(n^2.687) (Chi, Duan, Xie, Zhang, STOC'22), and a (1+ε)-multiplicative approximation is known with a running time of Ω(n^2.372). The impracticality of fast matrix multiplication often makes combinatorial algorithms much more desirable. Unfortunately, it is known that the problems of (exactly) computing the Dyck edit distance and the folding distance are at least as hard as Boolean matrix multiplication. Thus, they are unlikely to admit truly subcubic-time combinatorial algorithms. In terms of fast approximation algorithms that are combinatorial in nature, the state of the art for Dyck edit distance is an O(log n)-factor approximation algorithm that runs in near-linear time (Saha, FOCS'14), whereas for RNA Folding only an ε n-additive approximation in Õ(n²/ε) time (Saha, FOCS'17) is known. In this paper, we make substantial improvements to the state of the art for Dyck edit distance (with any number of parenthesis types). We design a constant-factor approximation algorithm that runs in Õ(n^1.971) time (the first constant-factor approximation in subquadratic time). Moreover, we develop a (1+ε)-factor approximation algorithm running in Õ(n²/ε) time, which improves upon the earlier additive approximation. Finally, we design a (3+ε)-approximation that takes Õ(nd/ε) time, where d ≥ 1 is an upper bound on the sought distance. As for RNA folding, for any s ≥ 1, we design a factor-s approximation algorithm that runs in O(n+(n/s)³) time. To the best of our knowledge, this is the first nontrivial approximation algorithm for RNA Folding that can go below the n² barrier. All our algorithms are combinatorial in nature.
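
For context, a classical exact approach to the maximization version of RNA Folding is the cubic interval dynamic program below (a Nussinov-style DP, assumed here as background rather than taken from the paper); it maximizes the number of non-crossing matched pairs under the relaxed matching rule from the abstract, where an opening and a closing bracket of the same type match in either order. It is a textbook baseline, not the paper's subquadratic or sub-n² approximation algorithms.

def matches(a, b):
    # same bracket type, one opening and one closing, in either order
    pairs = {('(', ')'), (')', '('), ('[', ']'), (']', '['), ('{', '}'), ('}', '{')}
    return (a, b) in pairs

def max_folding_pairs(s):
    n = len(s)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]         # dp[i][j]: max pairs in s[i..j]
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            best = dp[i + 1][j]              # leave s[i] unmatched
            for k in range(i + 1, j + 1):
                if matches(s[i], s[k]):
                    inner = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + inner + right)
            dp[i][j] = best
    return dp[0][n - 1]

# toy usage: ")((" followed by ")" admits two matched pairs under the relaxed rule
print(max_folding_pairs(")(()"))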

Cite as

Debarati Das, Tomasz Kociumaka, and Barna Saha. Improved Approximation Algorithms for Dyck Edit Distance and RNA Folding. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 49:1-49:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{das_et_al:LIPIcs.ICALP.2022.49,
  author =	{Das, Debarati and Kociumaka, Tomasz and Saha, Barna},
  title =	{{Improved Approximation Algorithms for Dyck Edit Distance and RNA Folding}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{49:1--49:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.49},
  URN =		{urn:nbn:de:0030-drops-163902},
  doi =		{10.4230/LIPIcs.ICALP.2022.49},
  annote =	{Keywords: Dyck Edit Distance, RNA Folding, String Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
New Additive Approximations for Shortest Paths and Cycles

Authors: Mingyang Deng, Yael Kirkpatrick, Victor Rong, Virginia Vassilevska Williams, and Ziqian Zhong


Abstract
This paper considers additive approximation algorithms for All-Pairs Shortest Paths (APSP) and Shortest Cycle in undirected unweighted graphs. The results are as follows: - We obtain the first +2-approximation algorithm for APSP in n-vertex graphs that improves upon the Õ(n^{7/3})-time algorithm of Dor, Halperin and Zwick (SICOMP'00). The new algorithm runs in Õ(n^2.29) time and is obtained via a reduction to Min-Plus product of bounded difference matrices. - We obtain the first additive approximation scheme for Shortest Cycle, generalizing the approximation algorithms of Itai and Rodeh (SICOMP'78) and Roditty and Vassilevska W. (SODA'12). For every integer r ≥ 0, we give an Õ(n + n^{2+r}/m^r)-time algorithm that returns a +(2r+1)-approximate shortest cycle in any n-vertex, m-edge graph.

Cite as

Mingyang Deng, Yael Kirkpatrick, Victor Rong, Virginia Vassilevska Williams, and Ziqian Zhong. New Additive Approximations for Shortest Paths and Cycles. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 50:1-50:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{deng_et_al:LIPIcs.ICALP.2022.50,
  author =	{Deng, Mingyang and Kirkpatrick, Yael and Rong, Victor and Vassilevska Williams, Virginia and Zhong, Ziqian},
  title =	{{New Additive Approximations for Shortest Paths and Cycles}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{50:1--50:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.50},
  URN =		{urn:nbn:de:0030-drops-163919},
  doi =		{10.4230/LIPIcs.ICALP.2022.50},
  annote =	{Keywords: Fine-grained Complexity, Additive Approximation}
}
Document
Track A: Algorithms, Complexity and Games
One-Pass Additive-Error Subset Selection for 𝓁_p Subspace Approximation

Authors: Amit Deshpande and Rameshwar Pratap


Abstract
We consider the problem of subset selection for 𝓁_p subspace approximation, that is, to efficiently find a small subset of data points such that solving the problem optimally for this subset gives a good approximation to solving the problem optimally for the original input. Previously known subset selection algorithms based on volume sampling and adaptive sampling [Deshpande and Varadarajan, 2007], for the general case of p ∈ [1, ∞), require multiple passes over the data. In this paper, we give a one-pass subset selection with an additive approximation guarantee for 𝓁_p subspace approximation, for any p ∈ [1, ∞). Earlier subset selection algorithms that give a one-pass multiplicative (1+ε) approximation work only in special cases. Cohen et al. [Michael B. Cohen et al., 2017] give a one-pass subset selection that offers a multiplicative (1+ε) approximation guarantee for the special case of 𝓁₂ subspace approximation. Mahabadi et al. [Sepideh Mahabadi et al., 2020] give a one-pass noisy subset selection with a (1+ε) approximation guarantee for 𝓁_p subspace approximation when p ∈ {1, 2}. Our subset selection algorithm gives a weaker, additive approximation guarantee, but it works for any p ∈ [1, ∞).

Cite as

Amit Deshpande and Rameshwar Pratap. One-Pass Additive-Error Subset Selection for 𝓁_p Subspace Approximation. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{deshpande_et_al:LIPIcs.ICALP.2022.51,
  author =	{Deshpande, Amit and Pratap, Rameshwar},
  title =	{{One-Pass Additive-Error Subset Selection for 𝓁\underlinep Subspace Approximation}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.51},
  URN =		{urn:nbn:de:0030-drops-163924},
  doi =		{10.4230/LIPIcs.ICALP.2022.51},
  annote =	{Keywords: Subspace approximation, streaming algorithms, low-rank approximation, adaptive sampling, volume sampling, subset selection}
}
Document
Track A: Algorithms, Complexity and Games
Set Membership with Two Classical and Quantum Bit Probes

Authors: Shyam S. Dhamapurkar, Shubham Vivek Pawar, and Jaikumar Radhakrishnan


Abstract
We study the classical and quantum bit-probe versions of the static set membership problem : Given a subset S (|S| ≤ n) of a universe 𝒰 (|𝒰| = m ≫ n), represent it as a binary string in memory so that the query "Is x in S?" (x ∈ 𝒰) can be answered by making at most t probes into the string. Let s_{A}(m,n,t) denote the minimum length of the bit string in any scheme that solves this static set membership problem. We show that for n ≥ 4, s_A(m,n,t = 2) = 𝒪(m^{1-1/(n-1)}) if n ≡ 0 (mod 3); 𝒪(m^{1-1/n}) if n ≡ 1,2 (mod 3); and 𝒪(m^{6/7}) if n = 8,9. These bounds are shown using a common scheme that is based on a graph-theoretic observation on orienting the edges of a graph of high girth. For all n ≥ 4, these bounds substantially improve on the previous best bounds known for this problem, some of which required elaborate constructions [Mirza Galib Anwarul Husain Baig and Deepanjan Kesh, 2020]. Our schemes are explicit. A lower bound of the form s_A(m,n,2) = Ω(m^{1-1/⌊{n/4}⌋}) was known for this problem. We show an improved lower bound of s_A(m,n,2) = Ω(m^{1-2/(n+3)}); this bound was previously known only for n = 3,5 [Mirza Galib Anwarul Husain Baig and Deepanjan Kesh, 2020; Mirza Galib Anwarul Husain Baig et al., 2019; Mirza Galib Anwarul Husain Baig and Deepanjan Kesh, 2018; Mirza Galib Anwarul Husain Baig et al., 2019; Mirza Galib Anwarul Husain Baig and Deepanjan Kesh, 2020]. We consider the quantum version of the problem, where access to the bit-string b ∈ {0,1}^s is provided in the form of a quantum oracle that performs the transformation 𝒪_b: |i⟩ ↦ (-1)^{b_i} |i⟩. Let s_{QA}(m,n,2) denote the minimum length of the bit string that solves the above set membership problem in the quantum model (with adaptive queries but no error). We show that for all n ≤ m^{1/8}, we have s_{QA}(m,n,2) = 𝒪(m^{7/8}). This upper bound makes crucial use of Nash-Williams’ theorem [Diestel, 2005] for decomposing a graph into forests. This result is significant because, prior to this work, it was not known if quantum schemes yield any advantage over classical schemes. We also consider schemes that make a small number of quantum non-adaptive probes. In particular, we show that the space required in this case, s_{QN}(m,n = 2,t = 2) = O(√m) and s_{QN}(m,n = 2,t = 3) = O(m^{1/3}); in contrast, it is known that two non-adaptive classical probes yield no savings. Our quantum schemes are simple and use only the fact that the XOR of two bits of memory can be computed using just one quantum query to the oracle.
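
To fix ideas about the trade-off being studied, the trivial baseline is t = 1 probe with s = m bits: store the characteristic vector of S and probe the bit of the queried element; the abstract's schemes show that with t = 2 probes far shorter strings suffice. The hypothetical sketch below implements only this trivial baseline, not any of the paper's schemes.

def encode_characteristic(universe_size, S):
    # trivial scheme: s = m bits, one probe per query
    bits = [0] * universe_size
    for x in S:
        bits[x] = 1
    return bits

def member_query(bits, x):
    return bits[x] == 1          # a single bit probe

# toy usage with a hypothetical universe of size m = 16
bits = encode_characteristic(16, {2, 3, 11})
print(member_query(bits, 3), member_query(bits, 5))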

Cite as

Shyam S. Dhamapurkar, Shubham Vivek Pawar, and Jaikumar Radhakrishnan. Set Membership with Two Classical and Quantum Bit Probes. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 52:1-52:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{dhamapurkar_et_al:LIPIcs.ICALP.2022.52,
  author =	{Dhamapurkar, Shyam S. and Pawar, Shubham Vivek and Radhakrishnan, Jaikumar},
  title =	{{Set Membership with Two Classical and Quantum Bit Probes}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{52:1--52:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.52},
  URN =		{urn:nbn:de:0030-drops-163932},
  doi =		{10.4230/LIPIcs.ICALP.2022.52},
  annote =	{Keywords: set membership problem, bit probe complexity, graphs with high girth, quantum data structure}
}
Document
Track A: Algorithms, Complexity and Games
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets

Authors: Ming Ding, Rasmus Kyng, Maximilian Probst Gutenberg, and Peng Zhang


Abstract
We study linear equations in combinatorial Laplacians of k-dimensional simplicial complexes (k-complexes), a natural generalization of graph Laplacians. Combinatorial Laplacians play a crucial role in homology and are a central tool in topology. Beyond this, they have various applications in data analysis and physical modeling problems. It is known that nearly-linear time solvers exist for graph Laplacians. However, nearly-linear time solvers for combinatorial Laplacians are only known for restricted classes of complexes. This paper shows that linear equations in combinatorial Laplacians of 2-complexes are as hard to solve as general linear equations. More precisely, for any constant c ≥ 1, if we can solve linear equations in combinatorial Laplacians of 2-complexes up to high accuracy in time Õ((# of nonzero coefficients)^c), then we can solve general linear equations with polynomially bounded integer coefficients and condition numbers up to high accuracy in time Õ((# of nonzero coefficients)^c). We prove this by a nearly-linear time reduction from general linear equations to combinatorial Laplacians of 2-complexes. Our reduction preserves the sparsity of the problem instances up to poly-logarithmic factors.

Cite as

Ming Ding, Rasmus Kyng, Maximilian Probst Gutenberg, and Peng Zhang. Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 53:1-53:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ICALP.2022.53,
  author =	{Ding, Ming and Kyng, Rasmus and Gutenberg, Maximilian Probst and Zhang, Peng},
  title =	{{Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{53:1--53:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.53},
  URN =		{urn:nbn:de:0030-drops-163945},
  doi =		{10.4230/LIPIcs.ICALP.2022.53},
  annote =	{Keywords: Simplicial Complexes, Combinatorial Laplacians, Linear Equations, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions

Authors: Ming Ding, Rasmus Kyng, and Peng Zhang


Abstract
We give a nearly-linear time reduction that encodes any linear program as a 2-commodity flow problem with only a small blow-up in size. Under mild assumptions similar to those employed by modern fast solvers for linear programs, our reduction causes only a polylogarithmic multiplicative increase in the size of the program, and runs in nearly-linear time. Our reduction applies to high-accuracy approximation algorithms and exact algorithms. Given an approximate solution to the 2-commodity flow problem, we can extract a solution to the linear program in linear time with only a polynomial factor increase in the error. This implies that any algorithm that solves the 2-commodity flow problem can solve linear programs in essentially the same time. Given a directed graph with edge capacities and two source-sink pairs, the goal of the 2-commodity flow problem is to maximize the sum of the flows routed between the two source-sink pairs subject to edge capacities and flow conservation. A 2-commodity flow problem can be formulated as a linear program, which can be solved to high accuracy in almost the current matrix multiplication time (Cohen-Lee-Song JACM'21). Our reduction shows that linear programs can be approximately solved, to high accuracy, using 2-commodity flow as well. Our proof follows the outline of Itai’s polynomial-time reduction of a linear program to a 2-commodity flow problem (JACM’78). Itai’s reduction shows that exactly solving 2-commodity flow and exactly solving linear programming are polynomial-time equivalent. We improve Itai’s reduction to nearly preserve the problem representation size in each step. In addition, we establish an error bound for approximately solving each intermediate problem in the reduction, and show that the accumulated error is polynomially bounded. We remark that our reduction does not run in strongly polynomial time and that it is open whether 2-commodity flow and linear programming are equivalent in strongly polynomial time.

Cite as

Ming Ding, Rasmus Kyng, and Peng Zhang. Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 54:1-54:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ICALP.2022.54,
  author =	{Ding, Ming and Kyng, Rasmus and Zhang, Peng},
  title =	{{Two-Commodity Flow Is Equivalent to Linear Programming Under Nearly-Linear Time Reductions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{54:1--54:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.54},
  URN =		{urn:nbn:de:0030-drops-163950},
  doi =		{10.4230/LIPIcs.ICALP.2022.54},
  annote =	{Keywords: Two-Commodity Flow Problems, Linear Programming, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
High-Probability List-Recovery, and Applications to Heavy Hitters

Authors: Dean Doron and Mary Wootters


Abstract
An error correcting code 𝒞 : Σ^k → Σⁿ is efficiently list-recoverable from input list size 𝓁 if for any sets ℒ₁, …, ℒ_n ⊆ Σ of size at most 𝓁, one can efficiently recover the list ℒ = {x ∈ Σ^k : ∀ j ∈ [n], 𝒞(x)_j ∈ ℒ_j}. While list-recovery has been well-studied in error correcting codes, all known constructions with "efficient" algorithms are not efficient in the parameter 𝓁. In this work, motivated by applications in algorithm design and pseudorandomness, we study list-recovery with the goal of obtaining a good dependence on 𝓁. We make a step towards this goal by obtaining it in the weaker case where we allow a randomized encoding map and a small failure probability, and where the input lists are derived from unions of codewords. As an application of our construction, we give a data structure for the heavy hitters problem in the strict turnstile model that, for some parameter regimes, obtains stronger guarantees than known constructions.
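
The definition of list-recovery can be made executable by brute force over all messages, as in the hypothetical sketch below; it is exponential in the message length and serves only to pin down the object for which the paper constructs efficient randomized codes.

def brute_force_list_recover(codewords, input_lists):
    # codewords:   dict mapping message x to its codeword C(x), a length-n sequence
    # input_lists: list of n sets L_1, ..., L_n over the alphabet
    # returns {x : for every position j, C(x)[j] is in L_j}, as in the abstract
    return {
        x for x, cw in codewords.items()
        if all(symbol in input_lists[j] for j, symbol in enumerate(cw))
    }

# toy usage with a hypothetical three-word code of block length 3
codewords = {"x1": "abc", "x2": "abd", "x3": "zzz"}
lists = [{"a"}, {"b"}, {"c", "d"}]
print(brute_force_list_recover(codewords, lists))   # {'x1', 'x2'}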

Cite as

Dean Doron and Mary Wootters. High-Probability List-Recovery, and Applications to Heavy Hitters. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 55:1-55:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{doron_et_al:LIPIcs.ICALP.2022.55,
  author =	{Doron, Dean and Wootters, Mary},
  title =	{{High-Probability List-Recovery, and Applications to Heavy Hitters}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{55:1--55:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.55},
  URN =		{urn:nbn:de:0030-drops-163961},
  doi =		{10.4230/LIPIcs.ICALP.2022.55},
  annote =	{Keywords: List recoverable codes, Heavy Hitters, high-dimensional expanders}
}
Document
Track A: Algorithms, Complexity and Games
Almost Optimal Bounds for Sublinear-Time Sampling of k-Cliques in Bounded Arboricity Graphs

Authors: Talya Eden, Dana Ron, and Will Rosenbaum


Abstract
Counting and sampling small subgraphs are fundamental algorithmic tasks. Motivated by the need to handle massive datasets efficiently, recent theoretical work has examined the problems in the sublinear time regime. In this work, we consider the problem of sampling a k-clique in a graph from an almost uniform distribution. Specifically, the algorithm should output each k-clique with probability (1±ε)/n_k, where n_k denotes the number of k-cliques in the graph and ε is a given approximation parameter. To this end, the algorithm may perform degree, neighbor, and pair queries. We focus on the class of graphs with arboricity at most α, and prove that the query complexity of the problem is Θ^*(min{nα, max{((nα)^{k/2}/n_k)^{1/(k-1)}, nα^{k-1}/n_k}}), where n is the number of vertices in the graph, and Θ^*(⋅) suppresses dependencies on (log n/ε)^{O(k)}. Our upper bound is based on defining a special auxiliary graph H_k, such that sampling edges almost uniformly in H_k translates to sampling k-cliques almost uniformly in the original graph G. We then build on a known edge-sampling algorithm (Eden, Ron and Rosenbaum, ICALP'19) to sample edges in H_k. The challenge is simulating queries to H_k while being given query access only to G. Our lower bound follows from a construction of a family of graphs with arboricity α such that in each graph there are n_k k-cliques, where one of these cliques is "hidden" and hence hard to sample.
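
The query model from the abstract is easy to make explicit; the hypothetical Python wrapper below exposes exactly the degree, neighbour and pair queries (with a query counter), which is the interface a sublinear-time sampler in this model is restricted to. It is not the paper's sampling algorithm.

class QueryGraph:
    # sublinear-time access model: degree, neighbour and pair queries only
    def __init__(self, adjacency):            # adjacency: list of neighbour lists
        self.adj = adjacency
        self.adj_sets = [set(nbrs) for nbrs in adjacency]
        self.queries = 0
    def degree(self, v):
        self.queries += 1
        return len(self.adj[v])
    def neighbor(self, v, i):
        self.queries += 1                      # i-th neighbour of v (0-indexed)
        return self.adj[v][i]
    def pair(self, u, v):
        self.queries += 1                      # is {u, v} an edge?
        return v in self.adj_sets[u]

# toy usage on a path 1 - 0 - 2
G = QueryGraph([[1, 2], [0], [0]])
print(G.degree(0), G.neighbor(0, 1), G.pair(1, 2), G.queries)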

Cite as

Talya Eden, Dana Ron, and Will Rosenbaum. Almost Optimal Bounds for Sublinear-Time Sampling of k-Cliques in Bounded Arboricity Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 56:1-56:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{eden_et_al:LIPIcs.ICALP.2022.56,
  author =	{Eden, Talya and Ron, Dana and Rosenbaum, Will},
  title =	{{Almost Optimal Bounds for Sublinear-Time Sampling of k-Cliques in Bounded Arboricity Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{56:1--56:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.56},
  URN =		{urn:nbn:de:0030-drops-163974},
  doi =		{10.4230/LIPIcs.ICALP.2022.56},
  annote =	{Keywords: sublinear time algorithms, graph algorithms, cliques, arboricity, uniform sampling}
}
Document
Track A: Algorithms, Complexity and Games
On Sampling Symmetric Gibbs Distributions on Sparse Random Graphs and Hypergraphs

Authors: Charilaos Efthymiou


Abstract
In this paper, we present a novel, polynomial-time algorithm for approximate sampling from symmetric Gibbs distributions on sparse random graphs and hypergraphs. The symmetric distributions we consider here include some important distributions on spin-systems and spin-glasses: the q-state antiferromagnetic Potts model for q ≥ 2, including the (hyper)graph Ising model and random colourings; the uniform distribution over the Not-All-Equal solutions of a random k-SAT formula; and the spin-glass distribution called the k-spin model, i.e., the "diluted" version of the well-known Sherrington-Kirkpatrick model. Spin-glasses give rise to very intricate distributions which are also studied in mathematics, in neural computation, computational biology and many other areas. To our knowledge, this is the first rigorously analysed efficient sampling algorithm for spin-glasses which operates in a non-trivial range of the parameters of the distribution. We present what we believe to be an elegant sampling algorithm. Our algorithm is unique in its approach and does not belong to any of the well-known families of sampling algorithms. We derive it by investigating the power and the limits of the approach that was introduced in [Efthymiou: SODA 2012] and combine it, in a novel way, with powerful notions from the Cavity method. Specifically, for a symmetric Gibbs distribution μ on the random (hyper)graph whose parameters are within an appropriate range, our sampling algorithm has the following properties: with probability 1-o(1) over the instances of the input (hyper)graph, it generates a configuration which is distributed within total variation distance n^{-Ω(1)} from μ. The time complexity is O((n log n)²), where n is the size of the input (hyper)graph. We make notable progress regarding impressive predictions of physicists relating phase transitions of Gibbs distributions with the efficiency of the corresponding sampling algorithms. For most (if not all) of the cases we consider here, our algorithm outperforms by far any other sampling algorithm in terms of the permitted range of the parameters of the Gibbs distributions. The use of notions and ideas from the Cavity method provides new insight into the sampling problem. Our results imply that there is a lot of potential for further exploiting the Cavity method for algorithmic design.

Cite as

Charilaos Efthymiou. On Sampling Symmetric Gibbs Distributions on Sparse Random Graphs and Hypergraphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 57:1-57:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{efthymiou:LIPIcs.ICALP.2022.57,
  author =	{Efthymiou, Charilaos},
  title =	{{On Sampling Symmetric Gibbs Distributions on Sparse Random Graphs and Hypergraphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{57:1--57:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.57},
  URN =		{urn:nbn:de:0030-drops-163982},
  doi =		{10.4230/LIPIcs.ICALP.2022.57},
  annote =	{Keywords: spin-system, spin-glass, sparse random (hyper)graph, approximate sampling, efficient algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Testability and Local Certification of Monotone Properties in Minor-Closed Classes

Authors: Louis Esperet and Sergey Norin


Abstract
The main problem in the area of graph property testing is to understand which graph properties are testable, which means that with constantly many queries to any input graph G, a tester can decide with good probability whether G satisfies the property, or is far from satisfying the property. Testable properties are well understood in the dense model and in the bounded degree model, but little is known in sparse graph classes when graphs are allowed to have unbounded degree. This is the setting of the sparse model. We prove that for any proper minor-closed class 𝒢, any monotone property (i.e., any property that is closed under taking subgraphs) is testable for graphs from 𝒢 in the sparse model. This extends a result of Czumaj and Sohler (FOCS'19), who proved it for monotone properties with finitely many forbidden subgraphs. Our result implies for instance that for any integers k and t, k-colorability of K_t-minor free graphs is testable in the sparse model. Elek recently proved that monotone properties of bounded degree graphs from minor-closed classes that are closed under disjoint union can be verified by an approximate proof labeling scheme in constant time. We show that the assumption of bounded degree can be omitted from his result as well.

Cite as

Louis Esperet and Sergey Norin. Testability and Local Certification of Monotone Properties in Minor-Closed Classes. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 58:1-58:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{esperet_et_al:LIPIcs.ICALP.2022.58,
  author =	{Esperet, Louis and Norin, Sergey},
  title =	{{Testability and Local Certification of Monotone Properties in Minor-Closed Classes}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{58:1--58:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.58},
  URN =		{urn:nbn:de:0030-drops-163997},
  doi =		{10.4230/LIPIcs.ICALP.2022.58},
  annote =	{Keywords: Property testing, sparse model, local certification, minor-closed classes}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Submodular Maximization Under Matroid Constraints

Authors: Moran Feldman, Paul Liu, Ashkan Norouzi-Fard, Ola Svensson, and Rico Zenklusen


Abstract
Recent progress in (semi-)streaming algorithms for monotone submodular function maximization has led to tight results for a simple cardinality constraint. However, current techniques fail to give a similar understanding for natural generalizations, including matroid constraints. This paper aims to close this gap. For a single matroid of rank k (i.e., any solution has cardinality at most k), our main results are:
- A single-pass streaming algorithm that uses Õ(k) memory and achieves an approximation guarantee of 0.3178.
- A multi-pass streaming algorithm that uses Õ(k) memory and achieves an approximation guarantee of (1-1/e - ε) by taking a constant (depending on ε) number of passes over the stream.
This improves on the previously best approximation guarantees of 1/4 and 1/2 for single-pass and multi-pass streaming algorithms, respectively. In fact, our multi-pass streaming algorithm is tight in that any algorithm with a better guarantee than 1/2 must make several passes through the stream and any algorithm that beats our guarantee of 1-1/e must make linearly many passes (as well as an exponential number of value oracle queries). Moreover, we show how the approach we use for multi-pass streaming can be further strengthened if the elements of the stream arrive in uniformly random order, implying an improved result for p-matchoid constraints.

Cite as

Moran Feldman, Paul Liu, Ashkan Norouzi-Fard, Ola Svensson, and Rico Zenklusen. Streaming Submodular Maximization Under Matroid Constraints. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 59:1-59:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{feldman_et_al:LIPIcs.ICALP.2022.59,
  author =	{Feldman, Moran and Liu, Paul and Norouzi-Fard, Ashkan and Svensson, Ola and Zenklusen, Rico},
  title =	{{Streaming Submodular Maximization Under Matroid Constraints}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{59:1--59:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.59},
  URN =		{urn:nbn:de:0030-drops-164007},
  doi =		{10.4230/LIPIcs.ICALP.2022.59},
  annote =	{Keywords: Submodular maximization, streaming, matroid, random order}
}
Document
Track A: Algorithms, Complexity and Games
(Re)packing Equal Disks into Rectangle

Authors: Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, and Meirav Zehavi


Abstract
The problem of packing equal disks (or circles) into a rectangle is a fundamental geometric problem. (By a packing here we mean an arrangement of disks in a rectangle without overlapping.) We consider the following algorithmic generalization of the equal disk packing problem. In this problem, for a given packing of equal disks into a rectangle, the question is whether, by changing the positions of a small number of disks, we can allocate space for packing more disks. More formally, in the repacking problem, for a given set of n equal disks packed into a rectangle and integers k and h, we ask whether it is possible, by changing the positions of at most h disks, to pack n+k disks. Thus, the problem of packing equal disks is the special case of our problem with n = h = 0. While the computational complexity of packing equal disks into a rectangle remains open, we prove that the repacking problem is NP-hard already for h = 0. Our main algorithmic contribution is an algorithm that solves the repacking problem in time (h+k)^𝒪(h+k)⋅|I|^𝒪(1), where |I| is the input size. That is, the problem is fixed-parameter tractable parameterized by k and h.
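
As a minimal illustration of the feasibility notion underlying the problem (not of the fixed-parameter algorithm), the Python sketch below checks whether a given placement of equal disks is a valid packing inside a W x H rectangle: every disk lies fully inside the rectangle and no two disks overlap. The function name, the coordinate representation and the radius parameter are assumptions of this example.

from math import dist

def is_valid_packing(centers, W, H, r=1.0, eps=1e-9):
    """Check that disks of radius r centred at `centers` fit inside the W x H
    rectangle without overlapping (touching is allowed)."""
    for (x, y) in centers:
        if not (r - eps <= x <= W - r + eps and r - eps <= y <= H - r + eps):
            return False          # this disk sticks out of the rectangle
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if dist(centers[i], centers[j]) < 2 * r - eps:
                return False      # disks i and j overlap
    return True

# Two unit disks side by side in a 4 x 2 rectangle: valid.
print(is_valid_packing([(1, 1), (3, 1)], W=4, H=2))   # True
# Overlapping placement: invalid.
print(is_valid_packing([(1, 1), (2, 1)], W=4, H=2))   # False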

Cite as

Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, and Meirav Zehavi. (Re)packing Equal Disks into Rectangle. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 60:1-60:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{fomin_et_al:LIPIcs.ICALP.2022.60,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Inamdar, Tanmay and Zehavi, Meirav},
  title =	{{(Re)packing Equal Disks into Rectangle}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{60:1--60:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.60},
  URN =		{urn:nbn:de:0030-drops-164011},
  doi =		{10.4230/LIPIcs.ICALP.2022.60},
  annote =	{Keywords: circle packing, unit disks, parameterized complexity, fixed-parameter tractability}
}
Document
Track A: Algorithms, Complexity and Games
Faster Cut Sparsification of Weighted Graphs

Authors: Sebastian Forster and Tijn de Vos


Abstract
A cut sparsifier is a reweighted subgraph that maintains the weights of the cuts of the original graph up to a multiplicative factor of (1±ε). This paper considers computing cut sparsifiers of weighted graphs of size O(n log(n)/ε²). Our algorithm computes such a sparsifier in time O(m⋅min(α(n)log(m/n), log(n))), both for graphs with polynomially bounded and unbounded integer weights, where α(⋅) is the functional inverse of Ackermann’s function. This improves upon the state of the art by Benczúr and Karger (SICOMP 2015), which takes O(m log²(n)) time. For unbounded weights, this directly gives the best known result for cut sparsification. Together with preprocessing by an algorithm of Fung et al. (SICOMP 2019), this also gives the best known result for polynomially-weighted graphs. Consequently, this implies the fastest approximate min-cut algorithm, both for graphs with polynomial and unbounded weights. In particular, we show that it is possible to adapt the state-of-the-art algorithm of Fung et al. for unweighted graphs to weighted graphs, by letting the partial maximum spanning forest (MSF) packing take the place of the Nagamochi-Ibaraki (NI) forest packing. MSF packings have previously been used by Abraham et al. (FOCS 2016) in the dynamic setting, and are defined as follows: an M-partial MSF packing of G is a set ℱ = {F₁, … , F_M}, where F_i is a maximum spanning forest in G⧵ ⋃_{j = 1}^{i-1}F_j. Our method for computing (a sufficient estimate of) the MSF packing is the bottleneck in the running time of our sparsification algorithm.
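
The M-partial MSF packing defined above has a direct operational reading: repeatedly extract a maximum spanning forest from the remaining edges. The Python sketch below does this naively with Kruskal's algorithm on an edge list; it only illustrates the definition, whereas the paper computes (a sufficient estimate of) the packing much faster.

def max_spanning_forest(n, edges):
    """Kruskal on edges sorted by decreasing weight; edges are (u, v, w), 0 <= u, v < n."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for (u, v, w) in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v, w))
    return forest

def partial_msf_packing(n, edges, M):
    """F_1, ..., F_M where F_i is a maximum spanning forest of G minus F_1, ..., F_{i-1}.
    Assumes distinct (u, v, w) triples."""
    remaining, packing = list(edges), []
    for _ in range(M):
        F = max_spanning_forest(n, remaining)
        packing.append(F)
        used = set(F)
        remaining = [e for e in remaining if e not in used]
    return packing

edges = [(0, 1, 5), (1, 2, 4), (0, 2, 3), (2, 3, 2), (1, 3, 1)]
for i, F in enumerate(partial_msf_packing(4, edges, M=2), start=1):
    print(f"F_{i} =", F)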

Cite as

Sebastian Forster and Tijn de Vos. Faster Cut Sparsification of Weighted Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 61:1-61:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{forster_et_al:LIPIcs.ICALP.2022.61,
  author =	{Forster, Sebastian and de Vos, Tijn},
  title =	{{Faster Cut Sparsification of Weighted Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{61:1--61:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.61},
  URN =		{urn:nbn:de:0030-drops-164029},
  doi =		{10.4230/LIPIcs.ICALP.2022.61},
  annote =	{Keywords: Cut Sparsification, Graph Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Social Distancing Network Creation

Authors: Tobias Friedrich, Hans Gawendowicz, Pascal Lenzner, and Anna Melnichenko


Abstract
During a pandemic people have to find a trade-off between meeting others and staying safely at home. While meeting others is pleasant, it also increases the risk of infection. We consider this dilemma by introducing a game-theoretic network creation model in which selfish agents can form bilateral connections. They benefit from network neighbors, but at the same time, they want to maximize their distance to all other agents. This models the inherent conflict that social distancing rules impose on the behavior of selfish agents in a social network. Besides addressing this familiar issue, our model can be seen as the inverse to the well-studied Network Creation Game by Fabrikant et al. [PODC 2003], where agents aim at being as central as possible in the created network. Thus, our work is in line with studies that compare minimization problems with their maximization versions. We look at two variants of network creation governed by social distancing. In the first variant, there are no restrictions on the connections being formed. We characterize optimal and equilibrium networks, and we derive asymptotically tight bounds on the Price of Anarchy and Price of Stability. The second variant is a generalization of the model that allows restrictions on the connections that can be formed. As our main result, we prove that Swap-Maximal Routing-Cost Spanning Trees, an efficiently computable weaker variant of Maximum Routing-Cost Spanning Trees, actually resemble equilibria for a significant range of the parameter space. Moreover, we give almost tight bounds on the Price of Anarchy and Price of Stability. These results imply that, compared to the well-studied inverse models, under social distancing the agents' selfish behavior has a significantly stronger impact on the quality of the equilibria, i.e., it allows socially much worse stable states.

Cite as

Tobias Friedrich, Hans Gawendowicz, Pascal Lenzner, and Anna Melnichenko. Social Distancing Network Creation. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 62:1-62:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{friedrich_et_al:LIPIcs.ICALP.2022.62,
  author =	{Friedrich, Tobias and Gawendowicz, Hans and Lenzner, Pascal and Melnichenko, Anna},
  title =	{{Social Distancing Network Creation}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{62:1--62:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.62},
  URN =		{urn:nbn:de:0030-drops-164038},
  doi =		{10.4230/LIPIcs.ICALP.2022.62},
  annote =	{Keywords: Algorithmic Game Theory, Equilibrium Existence, Price of Anarchy, Network Creation Game, Social Distancing, Maximization vs. Minimization Problems}
}
Document
Track A: Algorithms, Complexity and Games
Approximating Observables Is as Hard as Counting

Authors: Andreas Galanis, Daniel Štefankovič, and Eric Vigoda


Abstract
We study the computational complexity of estimating local observables for Gibbs distributions. A simple combinatorial example is the average size of an independent set in a graph. A recent work of Galanis et al. (2021) established NP-hardness of approximating the average size of an independent set utilizing hardness of the corresponding optimization problem and the related phase transition behavior. We instead consider settings where the underlying optimization problem is easily solvable. Our main contribution is to classify the complexity of approximating a wide class of observables via a generic reduction from approximate counting to the problem of estimating local observables. The key idea is to use the observables to interpolate the counting problem. Using this new approach, we are able to study observables on bipartite graphs where the underlying optimization problem is easy but the counting problem is believed to be hard. The most well-studied class of graphs excluded from previous hardness results is that of bipartite graphs. We establish hardness for estimating the average size of the independent set in bipartite graphs of maximum degree 6; more generally, we show tight hardness results for general vertex-edge observables for antiferromagnetic 2-spin systems on bipartite graphs. Our techniques go beyond 2-spin systems, and for the ferromagnetic Potts model we establish hardness of approximating the number of monochromatic edges in the same region as known hardness of approximate counting results.
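
To make the observable concrete, the Python sketch below computes the exact average size of a uniformly random independent set of a small graph by enumeration; the hardness results above concern approximating this quantity on large (bipartite) graphs, where such brute force is hopeless. Names and the toy example are assumptions of this illustration.

def average_independent_set_size(n, edges):
    """Average size of an independent set drawn uniformly from all independent sets
    of the graph on vertices 0..n-1 (brute force, exponential in n)."""
    adj = {(u, v) for (u, v) in edges} | {(v, u) for (u, v) in edges}
    total_sets = total_size = 0
    for mask in range(1 << n):
        S = [v for v in range(n) if mask >> v & 1]
        if all((x, y) not in adj for i, x in enumerate(S) for y in S[i + 1:]):
            total_sets += 1
            total_size += len(S)
    return total_size / total_sets

# Path on 3 vertices: independent sets are {}, {0}, {1}, {2}, {0,2}, so the average is 1.0.
print(average_independent_set_size(3, [(0, 1), (1, 2)]))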

Cite as

Andreas Galanis, Daniel Štefankovič, and Eric Vigoda. Approximating Observables Is as Hard as Counting. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 63:1-63:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{galanis_et_al:LIPIcs.ICALP.2022.63,
  author =	{Galanis, Andreas and \v{S}tefankovi\v{c}, Daniel and Vigoda, Eric},
  title =	{{Approximating Observables Is as Hard as Counting}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{63:1--63:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.63},
  URN =		{urn:nbn:de:0030-drops-164047},
  doi =		{10.4230/LIPIcs.ICALP.2022.63},
  annote =	{Keywords: Approximate Counting, Averages, Phase Transitions, Random Structures}
}
Document
Track A: Algorithms, Complexity and Games
The Decision Problem for Perfect Matchings in Dense Hypergraphs

Authors: Luyining Gan and Jie Han


Abstract
Given 1 ≤ 𝓁 < k and δ ≥ 0, let PM(k,𝓁,δ) be the decision problem for the existence of perfect matchings in n-vertex k-uniform hypergraphs with minimum 𝓁-degree at least δ binom(n-𝓁,k-𝓁). For k ≥ 3, the decision problem in general k-uniform hypergraphs, equivalently PM(k,𝓁,0), is one of Karp’s 21 NP-complete problems. Moreover, for k ≥ 3, a reduction of Szymańska showed that PM(k, 𝓁, δ) is NP-complete for δ < 1-(1-1/k)^{k-𝓁}. A breakthrough by Keevash, Knox and Mycroft [STOC '13] resolved this problem for 𝓁 = k-1 by showing that PM(k, k-1, δ) is in P for δ > 1/k. Based on their result for 𝓁 = k-1, Keevash, Knox and Mycroft conjectured that PM(k, 𝓁, δ) is in P for every δ > 1-(1-1/k)^{k-𝓁}. In this paper it is shown that this decision problem for perfect matchings can be reduced to the study of the minimum 𝓁-degree condition forcing the existence of fractional perfect matchings. That is, we hopefully solve the "computational complexity" aspect of the problem by reducing it to a well-known extremal problem in hypergraph theory. In particular, together with existing results on fractional perfect matchings, this solves the conjecture of Keevash, Knox and Mycroft for 𝓁 ≥ 0.4k.
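
The minimum 𝓁-degree is the quantity that drives the definition of PM(k,𝓁,δ); for orientation, the Python sketch below computes it by brute force for a small k-uniform hypergraph. This helper is purely illustrative and not part of the paper: deg(S) counts the edges containing the 𝓁-set S, and the minimum is taken over all 𝓁-sets.

from itertools import combinations

def min_l_degree(n, edges, l):
    """Minimum l-degree of a k-uniform hypergraph on vertices 0..n-1;
    edges is an iterable of k-element sets, deg(S) = #edges containing S."""
    E = [frozenset(e) for e in edges]
    return min(sum(1 for e in E if set(S) <= e) for S in combinations(range(n), l))

# Complete 3-uniform hypergraph on 5 vertices: every pair lies in n - 2 = 3 edges.
K5_3 = list(combinations(range(5), 3))
print(min_l_degree(5, K5_3, l=2))   # 3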

Cite as

Luyining Gan and Jie Han. The Decision Problem for Perfect Matchings in Dense Hypergraphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 64:1-64:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{gan_et_al:LIPIcs.ICALP.2022.64,
  author =	{Gan, Luyining and Han, Jie},
  title =	{{The Decision Problem for Perfect Matchings in Dense Hypergraphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{64:1--64:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.64},
  URN =		{urn:nbn:de:0030-drops-164057},
  doi =		{10.4230/LIPIcs.ICALP.2022.64},
  annote =	{Keywords: Computational Complexity, Perfect Matching, Hypergraph}
}
Document
Track A: Algorithms, Complexity and Games
Fully Functional Parameterized Suffix Trees in Compact Space

Authors: Arnab Ganguly, Rahul Shah, and Sharma V. Thankachan


Abstract
Two equal length strings are a parameterized match (p-match) iff there exists a one-to-one function that renames the symbols in one string to those in the other. The Parameterized Suffix Tree (PST) [Baker, STOC' 93] is a fundamental data structure that handles various string matching problems under this setting. The PST of a text T[1,n] over an alphabet Σ of size σ takes O(n log n) bits of space. It can report any entry in (parameterized) (i) suffix array, (ii) inverse suffix array, and (iii) longest common prefix (LCP) array in O(1) time. Given any pattern P as a query, a position i in T is an occurrence iff T[i,i+|P|-1] and P are a p-match. The PST can count the number of occurrences of P in T in time O(|P| log σ) and then report each occurrence in time proportional to that of accessing a suffix array entry. An important question is, can we obtain a compressed version of PST that takes space close to the text’s size of n log σ bits and still support all three functionalities mentioned earlier? In SODA' 17, Ganguly et al. answered this question partially by presenting an O(n log σ) bit index that can support (parameterized) suffix array and inverse suffix array operations in O(log n) time. However, the compression of the (parameterized) LCP array and the possibility of faster suffix array and inverse suffix array queries in compact space were left open. In this work, we obtain a compact representation of the (parameterized) LCP array. With this result, in conjunction with three new (parameterized) suffix array representations, we obtain the first set of PST representations in o(n log n) bits (when log σ = o(log n)) as follows. Here ε > 0 is an arbitrarily small constant.
- Space O(n log σ) bits and query time O(log_σ^ε n);
- Space O(n log σ log log_σ n) bits and query time O(log log_σ n); and
- Space O(n log σ log^ε_σ n) bits and query time O(1).
The first trade-off is an improvement over Ganguly et al.’s result, whereas our third trade-off matches the optimal time performance of Baker’s PST while squeezing the space by a factor roughly log_σ n. We highlight that our trade-offs match the space-and-time bounds of the best-known compressed text indexes for exact pattern matching and further improvement is highly unlikely.
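
The p-match relation itself is easy to check directly, as in the Python sketch below, which builds the renaming bijection greedily in one pass; the contribution of the paper is, of course, the compact index, not this check. The function name is an assumption of the example.

def p_match(s, t):
    """True iff s and t are a parameterized match, i.e. some one-to-one renaming of
    symbols maps s to t."""
    if len(s) != len(t):
        return False
    fwd, bwd = {}, {}
    for a, b in zip(s, t):
        if fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
            return False
    return True

print(p_match("xyxzy", "abacb"))   # True: x -> a, y -> b, z -> c
print(p_match("xyx", "abb"))       # False: x would need to map to both a and b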

Cite as

Arnab Ganguly, Rahul Shah, and Sharma V. Thankachan. Fully Functional Parameterized Suffix Trees in Compact Space. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 65:1-65:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{ganguly_et_al:LIPIcs.ICALP.2022.65,
  author =	{Ganguly, Arnab and Shah, Rahul and Thankachan, Sharma V.},
  title =	{{Fully Functional Parameterized Suffix Trees in Compact Space}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{65:1--65:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.65},
  URN =		{urn:nbn:de:0030-drops-164061},
  doi =		{10.4230/LIPIcs.ICALP.2022.65},
  annote =	{Keywords: Data Structures, Suffix Trees, String Algorithms, Compression}
}
Document
Track A: Algorithms, Complexity and Games
The Fine-Grained Complexity of Graph Homomorphism Parameterized by Clique-Width

Authors: Robert Ganian, Thekla Hamm, Viktoriia Korchemna, Karolina Okrasa, and Kirill Simonov


Abstract
The generic homomorphism problem, which asks whether an input graph G admits a homomorphism into a fixed target graph H, has been widely studied in the literature. In this article, we provide a fine-grained complexity classification of the running time of the homomorphism problem with respect to the clique-width of G (denoted cw) for virtually all choices of H under the Strong Exponential Time Hypothesis. In particular, we identify a property of H called the signature number s(H) and show that for each H, the homomorphism problem can be solved in time O^*(s(H)^cw). Crucially, we then show that this algorithm can be used to obtain essentially tight upper bounds. Specifically, we provide a reduction that yields matching lower bounds for each H that is either a projective core or a graph admitting a factorization with additional properties - allowing us to cover all possible target graphs under long-standing conjectures.
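
For reference, deciding whether G admits a homomorphism into H can always be done by brute force in time roughly |V(H)|^|V(G)|; the point of the paper is to replace the exponent |V(G)| by the clique-width of G, with s(H) as the base. The Python sketch below is only this brute-force baseline, under the assumed input format of vertex lists and undirected edge lists.

from itertools import product

def has_homomorphism(G_vertices, G_edges, H_vertices, H_edges):
    """Brute-force test for a map f: V(G) -> V(H) with f(u)f(v) in E(H) for every
    edge uv of G (both graphs undirected and loopless)."""
    H_adj = {frozenset(e) for e in H_edges}
    idx = {v: i for i, v in enumerate(G_vertices)}
    for f in product(H_vertices, repeat=len(G_vertices)):
        if all(frozenset((f[idx[u]], f[idx[v]])) in H_adj for (u, v) in G_edges):
            return True
    return False

# A homomorphism into K_3 is exactly a proper 3-colouring, so C_5 -> K_3 exists.
C5_edges = [(i, (i + 1) % 5) for i in range(5)]
K3_edges = [(0, 1), (1, 2), (0, 2)]
print(has_homomorphism(list(range(5)), C5_edges, [0, 1, 2], K3_edges))   # True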

Cite as

Robert Ganian, Thekla Hamm, Viktoriia Korchemna, Karolina Okrasa, and Kirill Simonov. The Fine-Grained Complexity of Graph Homomorphism Parameterized by Clique-Width. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 66:1-66:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{ganian_et_al:LIPIcs.ICALP.2022.66,
  author =	{Ganian, Robert and Hamm, Thekla and Korchemna, Viktoriia and Okrasa, Karolina and Simonov, Kirill},
  title =	{{The Fine-Grained Complexity of Graph Homomorphism Parameterized by Clique-Width}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{66:1--66:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.66},
  URN =		{urn:nbn:de:0030-drops-164076},
  doi =		{10.4230/LIPIcs.ICALP.2022.66},
  annote =	{Keywords: homomorphism, clique-width, fine-grained complexity}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear Dynamic Interval Scheduling (On One or Multiple Machines)

Authors: Paweł Gawrychowski and Karol Pokorski


Abstract
We revisit the complexity of the classical Interval Scheduling in the dynamic setting. In this problem, the goal is to maintain a set of intervals under insertions and deletions and report the size of the maximum size subset of pairwise disjoint intervals after each update. Nontrivial approximation algorithms are known for this problem, for both the unweighted and weighted versions [Henzinger, Neumann, Wiese, SoCG 2020]. Surprisingly, it was not known if the general exact version admits an exact solution working in sublinear time, that is, without recomputing the answer after each update. Our first contribution is a structure for Dynamic Interval Scheduling with amortized 𝒪̃(n^{1/3}) update time. Then, building on the ideas used for the case of one machine, we design a sublinear solution for any constant number of machines: we describe a structure for Dynamic Interval Scheduling on m ≥ 2 machines with amortized 𝒪̃(n^{1 - 1/m}) update time. We complement the above results by considering Dynamic Weighted Interval Scheduling on one machine, that is maintaining (the weight of) the maximum weight subset of pairwise disjoint intervals. We show an almost linear lower bound (conditioned on the hardness of Minimum Weight k-Clique) for the update/query time of any structure for this problem. Hence, in the weighted case one should indeed seek approximate solutions.
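
For orientation, the static (offline) version of the unweighted problem is solved by the classical earliest-end-time greedy shown below; the paper's contribution is maintaining this value under insertions and deletions in sublinear time per update, which the sketch does not attempt.

def max_disjoint_intervals(intervals):
    """Size of a maximum set of pairwise disjoint half-open intervals [s, e):
    classic greedy by earliest end time."""
    count, last_end = 0, float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:
            count += 1
            last_end = e
    return count

print(max_disjoint_intervals([(1, 4), (2, 3), (3, 5), (5, 7)]))   # 3: (2,3), (3,5), (5,7)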

Cite as

Paweł Gawrychowski and Karol Pokorski. Sublinear Dynamic Interval Scheduling (On One or Multiple Machines). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 67:1-67:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2022.67,
  author =	{Gawrychowski, Pawe{\l} and Pokorski, Karol},
  title =	{{Sublinear Dynamic Interval Scheduling (On One or Multiple Machines)}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{67:1--67:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.67},
  URN =		{urn:nbn:de:0030-drops-164086},
  doi =		{10.4230/LIPIcs.ICALP.2022.67},
  annote =	{Keywords: interval scheduling, dynamic problems, data structures, greedy algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Galloping in Fast-Growth Natural Merge Sorts

Authors: Elahe Ghasemi, Vincent Jugé, and Ghazal Khalighinejad


Abstract
We study the impact of sub-array merging routines on merge-based sorting algorithms. More precisely, we focus on the galloping sub-routine that TimSort uses to merge monotonic (non-decreasing) sub-arrays, hereafter called runs, and on the impact on the number of element comparisons performed if one uses this sub-routine instead of a naive merging routine. The efficiency of TimSort and of similar sorting algorithms has often been explained by using the notion of runs and the associated run-length entropy. Here, we focus on the related notion of dual runs, which was introduced in the 1990s, and the associated dual run-length entropy. We prove, for this complexity measure, results that are similar to those already known when considering standard run-induced measures: in particular, TimSort requires only 𝒪(n + n log(σ)) element comparisons to sort arrays of length n with σ distinct values. In order to do so, we introduce new notions of fast- and middle-growth for natural merge sorts (i.e., algorithms based on merging runs). By using these notions, we prove that several merge sorting algorithms, provided that they use TimSort’s galloping sub-routine for merging runs, are as efficient as TimSort at sorting arrays with low run-induced or dual-run-induced complexities.
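
A self-contained Python sketch of the galloping (exponential-then-binary) search that TimSort's merging routine is built on: it locates, within a sorted run, the number of elements strictly smaller than a key using a number of comparisons logarithmic in that answer, which is what makes merging cheap when few distinct values or long dual runs are present. This illustrates the sub-routine only, not the paper's analysis; the helper name is an assumption.

from bisect import bisect_left

def gallop_left(run, key, start=0):
    """Index of the first element of the sorted list `run`, at or after `start`,
    that is >= key: doubling steps followed by binary search on the bracketed range."""
    hi = 1
    while start + hi < len(run) and run[start + hi] < key:
        hi *= 2                                   # galloping phase
    lo = start + hi // 2
    hi = min(start + hi, len(run))
    return bisect_left(run, key, lo, hi)          # binary-search phase

run = [1, 1, 2, 2, 2, 5, 8, 8, 9]
print(gallop_left(run, 5))   # 5, since five elements of `run` are < 5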

Cite as

Elahe Ghasemi, Vincent Jugé, and Ghazal Khalighinejad. Galloping in Fast-Growth Natural Merge Sorts. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 68:1-68:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{ghasemi_et_al:LIPIcs.ICALP.2022.68,
  author =	{Ghasemi, Elahe and Jug\'{e}, Vincent and Khalighinejad, Ghazal},
  title =	{{Galloping in Fast-Growth Natural Merge Sorts}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{68:1--68:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.68},
  URN =		{urn:nbn:de:0030-drops-164098},
  doi =		{10.4230/LIPIcs.ICALP.2022.68},
  annote =	{Keywords: Sorting algorithms, Merge sorting algorithms, Analysis of algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Tolerant Bipartiteness Testing in Dense Graphs

Authors: Arijit Ghosh, Gopinath Mishra, Rahul Raychaudhury, and Sayantan Sen


Abstract
Bipartite testing has been a central problem in the area of property testing since its inception in the seminal work of Goldreich, Goldwasser, and Ron. Though the non-tolerant version of bipartite testing has been extensively studied in the literature, the tolerant variant is not well understood. In this paper, we consider the following version of the tolerant bipartite testing problem: Given two parameters ε, δ ∈ (0,1), with δ > ε, and access to the adjacency matrix of a graph G, we have to decide whether G can be made bipartite by editing at most ε n² entries of the adjacency matrix of G, or we have to edit at least δ n² entries of the adjacency matrix to make G bipartite. We prove that for δ = (2+Ω(1))ε, tolerant bipartite testing can be decided by performing 𝒪̃(1/ε³) adjacency queries and in 2^𝒪̃(1/ε) time. This improves upon the state-of-the-art query and time complexities of this problem of 𝒪̃(1/ε⁶) and 2^𝒪̃(1/ε²), respectively, due to Alon, Fernandez de la Vega, Kannan and Karpinski, where 𝒪̃(⋅) hides a factor polynomial in log(1/ε).
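
The quantity being tested is the editing distance of G to bipartiteness; on a tiny graph it can be computed exactly by trying every bipartition, as in the Python sketch below (only deletions of edges inside the two parts are ever needed). This is purely for intuition: the point of the paper is to estimate the distance with query and time complexity independent of n.

def edits_to_bipartite(n, edges):
    """Minimum number of edge deletions that make the graph bipartite, i.e. the minimum
    over bipartitions (L, R) of the number of edges with both endpoints on the same side."""
    best = len(edges)
    for mask in range(1 << (n - 1)):      # fix vertex n-1 on one side; complements coincide
        side = [(mask >> v) & 1 for v in range(n - 1)] + [0]
        best = min(best, sum(1 for (u, v) in edges if side[u] == side[v]))
    return best

print(edits_to_bipartite(3, [(0, 1), (1, 2), (0, 2)]))           # 1 (triangle)
print(edits_to_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 0 (4-cycle is bipartite)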

Cite as

Arijit Ghosh, Gopinath Mishra, Rahul Raychaudhury, and Sayantan Sen. Tolerant Bipartiteness Testing in Dense Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 69:1-69:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{ghosh_et_al:LIPIcs.ICALP.2022.69,
  author =	{Ghosh, Arijit and Mishra, Gopinath and Raychaudhury, Rahul and Sen, Sayantan},
  title =	{{Tolerant Bipartiteness Testing in Dense Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{69:1--69:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.69},
  URN =		{urn:nbn:de:0030-drops-164101},
  doi =		{10.4230/LIPIcs.ICALP.2022.69},
  annote =	{Keywords: Tolerant Testing, Bipartite Testing, Query Complexity, Graph Property Testing}
}
Document
Track A: Algorithms, Complexity and Games
Homomorphism Tensors and Linear Equations

Authors: Martin Grohe, Gaurav Rattan, and Tim Seppelt


Abstract
Lovász (1967) showed that two graphs G and H are isomorphic if and only if they are homomorphism indistinguishable over the class of all graphs, i.e. for every graph F, the number of homomorphisms from F to G equals the number of homomorphisms from F to H. Recently, homomorphism indistinguishability over restricted classes of graphs such as bounded treewidth, bounded treedepth and planar graphs, has emerged as a surprisingly powerful framework for capturing diverse equivalence relations on graphs arising from logical equivalence and algebraic equation systems. In this paper, we provide a unified algebraic framework for such results by examining the linear-algebraic and representation-theoretic structure of tensors counting homomorphisms from labelled graphs. The existence of certain linear transformations between such homomorphism tensor subspaces can be interpreted both as homomorphism indistinguishability over a graph class and as feasibility of an equational system. Following this framework, we obtain characterisations of homomorphism indistinguishability over several natural graph classes, namely trees of bounded degree, graphs of bounded pathwidth (answering a question of Dell et al. (2018)), and graphs of bounded treedepth.
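
Lovász's characterisation can be checked directly on small graphs: count homomorphisms from each F of a family into G and into H and compare the counts. The brute-force counter below (Python, with an assumed input format of vertex counts plus undirected edge lists) is only meant to make homomorphism indistinguishability concrete; it is unrelated to the algebraic machinery of the paper.

from itertools import product

def count_homs(F_n, F_edges, G_n, G_edges):
    """Number of homomorphisms from F to G (both undirected and loopless, vertices 0..n-1)."""
    G_adj = {frozenset(e) for e in G_edges}
    return sum(
        all(frozenset((f[u], f[v])) in G_adj for (u, v) in F_edges)
        for f in product(range(G_n), repeat=F_n)
    )

# hom(K_2, G) = 2 * #edges(G), and hom(K_3, G) = 6 * #triangles(G).
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_homs(2, [(0, 1)], 4, C4))                   # 8
print(count_homs(3, [(0, 1), (1, 2), (0, 2)], 4, C4))   # 0, since C_4 is triangle-free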

Cite as

Martin Grohe, Gaurav Rattan, and Tim Seppelt. Homomorphism Tensors and Linear Equations. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 70:1-70:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{grohe_et_al:LIPIcs.ICALP.2022.70,
  author =	{Grohe, Martin and Rattan, Gaurav and Seppelt, Tim},
  title =	{{Homomorphism Tensors and Linear Equations}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{70:1--70:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.70},
  URN =		{urn:nbn:de:0030-drops-164113},
  doi =		{10.4230/LIPIcs.ICALP.2022.70},
  annote =	{Keywords: homomorphisms, labelled graphs, treewidth, pathwidth, treedepth, linear equations, Sherali-Adams relaxation, Wiegmann-Specht Theorem, Weisfeiler-Leman}
}
Document
Track A: Algorithms, Complexity and Games
Downsampling for Testing and Learning in Product Distributions

Authors: Nathaniel Harms and Yuichi Yoshida


Abstract
We study distribution-free property testing and learning problems where the unknown probability distribution is a product distribution over ℝ^d. For many important classes of functions, such as intersections of halfspaces, polynomial threshold functions, convex sets, and k-alternating functions, the known algorithms either have complexity that depends on the support size of the distribution, or are proven to work only for specific examples of product distributions. We introduce a general method, which we call downsampling, that resolves these issues. Downsampling uses a notion of "rectilinear isoperimetry" for product distributions, which further strengthens the connection between isoperimetry, testing and learning. Using this technique, we attain new efficient distribution-free algorithms under product distributions on ℝ^d:
1) A simpler proof for non-adaptive, one-sided monotonicity testing of functions [n]^d → {0,1}, and improved sample complexity for testing monotonicity over unknown product distributions, from O(d⁷) [Black, Chakrabarty, & Seshadhri, SODA 2020] to O(d³);
2) Polynomial-time agnostic learning algorithms for functions of a constant number of halfspaces, and constant-degree polynomial threshold functions;
3) An exp{O(d log(dk))}-time agnostic learning algorithm, and an exp{O(d log(dk))}-sample tolerant tester, for functions of k convex sets; and a 2^O(d) sample-based one-sided tester for convex sets; and
4) An exp{O(k√d)}-time agnostic learning algorithm for k-alternating functions, and a sample-based tolerant tester with the same complexity.
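
For intuition about the monotonicity-testing target, the Python sketch below brute-forces whether a function f: [n]^d -> {0,1} is monotone by scanning all comparable pairs of grid points; a tester has to distinguish monotone from far-from-monotone with far fewer queries. The grid encoding and the example functions are assumptions of this illustration.

from itertools import product

def is_monotone(f, n, d):
    """Check monotonicity of f: [n]^d -> {0,1} with respect to the coordinatewise order
    (brute force over all pairs of grid points)."""
    points = list(product(range(n), repeat=d))
    for x in points:
        for y in points:
            if all(a <= b for a, b in zip(x, y)) and f(x) > f(y):
                return False          # (x, y) is a violating pair
    return True

threshold = lambda x: int(sum(x) >= 3)
print(is_monotone(threshold, n=3, d=2))                   # True
print(is_monotone(lambda x: 1 - threshold(x), n=3, d=2))  # False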

Cite as

Nathaniel Harms and Yuichi Yoshida. Downsampling for Testing and Learning in Product Distributions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 71:1-71:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{harms_et_al:LIPIcs.ICALP.2022.71,
  author =	{Harms, Nathaniel and Yoshida, Yuichi},
  title =	{{Downsampling for Testing and Learning in Product Distributions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{71:1--71:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.71},
  URN =		{urn:nbn:de:0030-drops-164123},
  doi =		{10.4230/LIPIcs.ICALP.2022.71},
  annote =	{Keywords: property testing, learning, monotonicity, halfspaces, intersections of halfspaces, polynomial threshold functions}
}
Document
Track A: Algorithms, Complexity and Games
A Fixed-Parameter Algorithm for the Kneser Problem

Authors: Ishay Haviv


Abstract
The Kneser graph K(n,k) is defined for integers n and k with n ≥ 2k as the graph whose vertices are all the k-subsets of {1,2,…,n}, where two such sets are adjacent if they are disjoint. A classical result of Lovász asserts that the chromatic number of K(n,k) is n-2k+2. In the computational Kneser problem, we are given oracle access to a coloring of the vertices of K(n,k) with n-2k+1 colors, and the goal is to find a monochromatic edge. We present a randomized algorithm for the Kneser problem with running time n^O(1) ⋅ k^O(k). This shows that the problem is fixed-parameter tractable with respect to the parameter k. The analysis involves structural results on intersecting families and on induced subgraphs of Kneser graphs. We also study the Agreeable-Set problem of assigning a small subset of a set of m items to a group of 𝓁 agents, so that all agents value the subset at least as much as its complement. As an application of our algorithm for the Kneser problem, we obtain a randomized polynomial-time algorithm for the Agreeable-Set problem for instances that satisfy 𝓁 ≥ m - O({log m}/{log log m}). We further show that the Agreeable-Set problem is at least as hard as a variant of the Kneser problem with extended access to the input coloring.
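
A direct, exponential-time reference implementation of the computational Kneser problem for small n and k, only to fix the definitions: vertices are the k-subsets of {1,…,n}, disjoint sets are adjacent, and given a colouring with n-2k+1 colours we search for a monochromatic edge, which Lovász's theorem guarantees exists. The colouring in the demo is an arbitrary example, and the code is unrelated to the randomized algorithm of the paper.

from itertools import combinations

def monochromatic_edge(n, k, colouring):
    """Given a colouring (dict: frozenset -> colour) of the k-subsets of {1..n} with at
    most n-2k+1 colours, return two disjoint k-sets of equal colour (brute force)."""
    vertices = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    for i, A in enumerate(vertices):
        for B in vertices[i + 1:]:
            if not (A & B) and colouring[A] == colouring[B]:
                return A, B
    return None   # impossible with <= n-2k+1 colours, by Lovász's theorem

# Demo: K(5,2) (the Petersen graph) with 2 = n-2k+1 colours.
verts = [frozenset(c) for c in combinations(range(1, 6), 2)]
colouring = {v: min(v) % 2 for v in verts}
print(monochromatic_edge(5, 2, colouring))   # e.g. (frozenset({1, 2}), frozenset({3, 4}))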

Cite as

Ishay Haviv. A Fixed-Parameter Algorithm for the Kneser Problem. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 72:1-72:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{haviv:LIPIcs.ICALP.2022.72,
  author =	{Haviv, Ishay},
  title =	{{A Fixed-Parameter Algorithm for the Kneser Problem}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{72:1--72:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.72},
  URN =		{urn:nbn:de:0030-drops-164139},
  doi =		{10.4230/LIPIcs.ICALP.2022.72},
  annote =	{Keywords: Kneser graph, Fixed-parameter tractability, Agreeable Set}
}
Document
Track A: Algorithms, Complexity and Games
Delegation for Search Problems

Authors: Justin Holmgren, Andrea Lincoln, and Ron D. Rothblum


Abstract
The theory of proof systems in general, and interactive proofs in particular, has been immensely influential. Such proof systems allow a prover to convince a verifier whether a given statement is true or not - namely to solve a decision problem. In this work we initiate a study of interactive proofs for search problems. More precisely, we consider a setting in which a client C, given an input x, would like to find a solution y satisfying (x,y) ∈ R, for a given relation R. The client wishes to delegate this work to an (untrusted) advisor A, who has more resources than C. We seek solutions in which the communication from A is short, and, in particular, shorter than the length of the output y. (In particular, this precludes the trivial solution of the advisor sending y and then proving that (x,y) ∈ R using a standard interactive proof.) We show that such search delegation schemes exist for several problems of interest including (1) longest common subsequence (LCS) and edit distance, (2) parsing context-free grammars and (3) k-SAT.
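
As a reminder of one of the target relations, the standard quadratic dynamic program for the longest common subsequence is shown below; a search-delegation scheme would let a weak client obtain such a solution y with communication shorter than y itself, which the sketch does not attempt to model.

def lcs(a, b):
    """Classic O(|a|*|b|) dynamic program; returns one longest common subsequence."""
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            L[i][j] = L[i-1][j-1] + 1 if a[i-1] == b[j-1] else max(L[i-1][j], L[i][j-1])
    out, i, j = [], n, m
    while i and j:                                # trace back one optimal solution
        if a[i-1] == b[j-1]:
            out.append(a[i-1]); i -= 1; j -= 1
        elif L[i-1][j] >= L[i][j-1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("delegation", "relation"))   # prints "elation"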

Cite as

Justin Holmgren, Andrea Lincoln, and Ron D. Rothblum. Delegation for Search Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 73:1-73:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{holmgren_et_al:LIPIcs.ICALP.2022.73,
  author =	{Holmgren, Justin and Lincoln, Andrea and Rothblum, Ron D.},
  title =	{{Delegation for Search Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{73:1--73:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.73},
  URN =		{urn:nbn:de:0030-drops-164146},
  doi =		{10.4230/LIPIcs.ICALP.2022.73},
  annote =	{Keywords: Interactive Proofs, Fine-Grained Complexity, Delegation}
}
Document
Track A: Algorithms, Complexity and Games
Understanding the Moments of Tabulation Hashing via Chaoses

Authors: Jakob Bæk Tejs Houen and Mikkel Thorup


Abstract
Simple tabulation hashing dates back to Zobrist in 1970 and is defined as follows: Each key is viewed as c characters from some alphabet Σ; we have c fully random hash functions h₀, …, h_{c - 1} : Σ → {0, …, 2^l - 1}, and a key x = (x₀, …, x_{c - 1}) is hashed to h(x) = h₀(x₀) ⊕ … ⊕ h_{c - 1}(x_{c - 1}) where ⊕ is the bitwise XOR operation. The previous results on tabulation hashing by Pǎtraşcu and Thorup [J.ACM'11] and by Aamand et al. [STOC'20] focused on proving Chernoff-style tail bounds on hash-based sums, e.g., the number of keys hashing to a given value, for simple tabulation hashing, but their bounds do not cover the entire tail. Thus their results cannot bound moments. The paper by Dahlgaard et al. [FOCS'15] provides a bound on the moments of certain hash-based sums, but their bound only holds for constant moments, and we need logarithmic moments. Chaoses are random variables of the form ∑ a_{i₀, …, i_{c - 1}} X_{i₀} ⋅ … ⋅ X_{i_{c - 1}} where X_i are independent random variables. Chaoses are a well-studied concept from probability theory, and tight analyses have been proven in several instances, e.g., when the independent random variables are standard Gaussian variables and when the independent random variables have logarithmically convex tails. We notice that hash-based sums of simple tabulation hashing can be seen as a sum of chaoses that are not independent. This motivates us to use techniques from the theory of chaoses to analyze hash-based sums of simple tabulation hashing. In this paper, we obtain bounds for all the moments of hash-based sums for simple tabulation hashing which are tight up to constants depending only on c. In contrast with the previous attempts, our approach will mostly be analytical and does not employ intricate combinatorial arguments. The improved analysis of simple tabulation hashing allows us to obtain bounds for the moments of hash-based sums for the mixed tabulation hashing introduced by Dahlgaard et al. [FOCS'15]. With simple tabulation hashing, there are certain inputs for which the concentration is much worse than with fully random hashing. However, with mixed tabulation, we get logarithmic moment bounds that are only a constant factor worse than those with fully random hashing for any possible input. This is a strong addition to other powerful probabilistic properties of mixed tabulation hashing proved by Dahlgaard et al.
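
The definition of simple tabulation hashing above translates directly into code; the Python sketch below instantiates it with c characters of 8 bits each and l-bit hash values, filling the c tables with pseudo-random entries. It illustrates the hash function whose moments are being analysed, not the moment bounds themselves; the parameter choices are assumptions of the example.

import random

def make_simple_tabulation(c=4, char_bits=8, l=32, seed=0):
    """Simple tabulation hash for keys of c characters of `char_bits` bits each:
    h(x) = h_0[x_0] XOR ... XOR h_{c-1}[x_{c-1}] with c independent random tables."""
    rng = random.Random(seed)
    tables = [[rng.getrandbits(l) for _ in range(1 << char_bits)] for _ in range(c)]
    mask = (1 << char_bits) - 1

    def h(key):
        out = 0
        for i in range(c):
            out ^= tables[i][(key >> (i * char_bits)) & mask]   # hash the i-th character
        return out
    return h

h = make_simple_tabulation()
print(hex(h(0xDEADBEEF)), hex(h(0)))   # deterministic for a fixed seed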

Cite as

Jakob Bæk Tejs Houen and Mikkel Thorup. Understanding the Moments of Tabulation Hashing via Chaoses. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 74:1-74:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{houen_et_al:LIPIcs.ICALP.2022.74,
  author =	{Houen, Jakob B{\ae}k Tejs and Thorup, Mikkel},
  title =	{{Understanding the Moments of Tabulation Hashing via Chaoses}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{74:1--74:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.74},
  URN =		{urn:nbn:de:0030-drops-164154},
  doi =		{10.4230/LIPIcs.ICALP.2022.74},
  annote =	{Keywords: hashing, concentration bounds, moment bounds}
}
Document
Track A: Algorithms, Complexity and Games
In-Range Farthest Point Queries and Related Problem in High Dimensions

Authors: Ziyun Huang and Jinhui Xu


Abstract
The range-aggregate query is an important type of query with numerous applications. It aims to obtain some structural information (defined by an aggregate function F(⋅)) of the points (from a point set P) inside a given query range B. In this paper, we study the range-aggregate query problem in high dimensional space for two aggregate functions: (1) F(P ∩ B) is the farthest point in P ∩ B to a query point q in ℝ^d and (2) F(P ∩ B) is the minimum enclosing ball (MEB) of P ∩ B. For problem (1), called In-Range Farthest Point (IFP) Query, we develop a bi-criteria approximation scheme: For any ε > 0 that specifies the approximation ratio of the farthest distance and any γ > 0 that measures the "fuzziness" of the query range, we show that it is possible to pre-process P into a data structure of size Õ_{ε,γ}(dn^{1+ρ}) in Õ_{ε,γ}(dn^{1+ρ}) time such that given any ℝ^d query ball B and query point q, it outputs in Õ_{ε,γ}(dn^ρ) time a point p that is a (1-ε)-approximation of the farthest point to q among all points lying in a (1+γ)-expansion B(1+γ) of B, where 0 < ρ < 1 is a constant depending on ε and γ and the hidden constants in big-O notations depend only on ε, γ and Polylog(nd). For problem (2), we show that the IFP result can be applied to develop a query scheme with similar time and space complexities to achieve a (1+ε)-approximation for MEB. To the best of our knowledge, these are the first theoretical results on such high dimensional range-aggregate query problems. Our results are based on several new techniques, such as multi-scale construction and ball difference range query, which are interesting in their own right and could potentially be used to solve other range-aggregate problems in high dimensional space.
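
The exact, linear-time version of an IFP query is trivial and is shown below for reference: among the points of P inside the query ball B, return the one farthest from q. The data structure in the paper answers a bi-criteria approximate version of this in sublinear query time after preprocessing; the Python helper below is only a baseline with assumed names.

from math import dist

def ifp_query(P, ball_center, ball_radius, q):
    """Exact in-range farthest point: the point of P inside B(ball_center, ball_radius)
    that is farthest from q, or None if no point is in range."""
    in_range = [p for p in P if dist(p, ball_center) <= ball_radius]
    return max(in_range, key=lambda p: dist(p, q), default=None)

P = [(0, 0), (1, 1), (2, 2), (5, 5)]
print(ifp_query(P, ball_center=(1, 1), ball_radius=2.0, q=(0, 0)))   # (2, 2)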

Cite as

Ziyun Huang and Jinhui Xu. In-Range Farthest Point Queries and Related Problem in High Dimensions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 75:1-75:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{huang_et_al:LIPIcs.ICALP.2022.75,
  author =	{Huang, Ziyun and Xu, Jinhui},
  title =	{{In-Range Farthest Point Queries and Related Problem in High Dimensions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{75:1--75:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.75},
  URN =		{urn:nbn:de:0030-drops-164167},
  doi =		{10.4230/LIPIcs.ICALP.2022.75},
  annote =	{Keywords: Farthest Point Query, Range Aggregate Query, Minimum Enclosing Ball, Approximation, High Dimensional Space}
}
Document
Track A: Algorithms, Complexity and Games
Strong Approximations and Irrationality in Financial Networks with Derivatives

Authors: Stavros D. Ioannidis, Bart de Keijzer, and Carmine Ventre


Abstract
Financial networks model a set of financial institutions (firms) interconnected by obligations. Recent work has introduced to this model a class of obligations called credit default swaps, a certain kind of financial derivatives. The main computational challenge for such systems is known as the clearing problem, which is to determine which firms are in default and to compute their exposure to systemic risk, technically known as their recovery rates. It is known that the recovery rates form the set of fixed points of a simple function, and that these fixed points can be irrational. Furthermore, Schuldenzucker et al. (2016) have shown that finding a weakly (or "almost") approximate (rational) fixed point is PPAD-complete. We further study the clearing problem from the point of view of irrationality and approximation strength. Firstly, we observe that weakly approximate solutions may misrepresent the actual financial state of an institution. On this basis, we study the complexity of finding a strongly (or "near") approximate solution, and show FIXP-completeness. We then study the structural properties required for irrationality, and we give necessary conditions for irrational solutions to emerge: The presence of certain types of cycles in a financial network forces the recovery rates to take the form of roots of non-linear polynomials. In the absence of a large subclass of such cycles, we study the complexity of finding an exact fixed point, which we show to be a problem close to, albeit outside of, PPAD.

Cite as

Stavros D. Ioannidis, Bart de Keijzer, and Carmine Ventre. Strong Approximations and Irrationality in Financial Networks with Derivatives. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 76:1-76:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{ioannidis_et_al:LIPIcs.ICALP.2022.76,
  author =	{Ioannidis, Stavros D. and de Keijzer, Bart and Ventre, Carmine},
  title =	{{Strong Approximations and Irrationality in Financial Networks with Derivatives}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{76:1--76:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.76},
  URN =		{urn:nbn:de:0030-drops-164172},
  doi =		{10.4230/LIPIcs.ICALP.2022.76},
  annote =	{Keywords: FIXP, Financial Networks, Systemic Risk}
}
Document
Track A: Algorithms, Complexity and Games
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching

Authors: Arun Jambulapati, Yujia Jin, Aaron Sidford, and Kevin Tian


Abstract
Box-simplex games are a family of bilinear minimax objectives which encapsulate graph-structured problems such as maximum flow [Sherman, 2017], optimal transport [Arun Jambulapati et al., 2019], and bipartite matching [Sepehr Assadi et al., 2022]. We develop efficient near-linear time, high-accuracy solvers for regularized variants of these games. Beyond the immediate applications of such solvers for computing Sinkhorn distances, a prominent tool in machine learning, we show that these solvers can be used to obtain improved running times for maintaining a (fractional) ε-approximate maximum matching in a dynamic decremental bipartite graph against an adaptive adversary. We give a generic framework which reduces this dynamic matching problem to solving regularized graph-structured optimization problems to high accuracy. Through our reduction framework, our regularized box-simplex game solver implies a new algorithm for dynamic decremental bipartite matching in total time Õ(m ⋅ ε^{-3}), from an initial graph with m edges and n nodes. We further show how to use recent advances in flow optimization [Chen et al., 2022] to improve our runtime to m^{1 + o(1)} ⋅ ε^{-2}, thereby demonstrating the versatility of our reduction-based approach. These results improve upon the previous best runtime of Õ(m ⋅ ε^{-4}) [Aaron Bernstein et al., 2020] and illustrate the utility of using regularized optimization problem solvers for designing dynamic algorithms.

Cite as

Arun Jambulapati, Yujia Jin, Aaron Sidford, and Kevin Tian. Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 77:1-77:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{jambulapati_et_al:LIPIcs.ICALP.2022.77,
  author =	{Jambulapati, Arun and Jin, Yujia and Sidford, Aaron and Tian, Kevin},
  title =	{{Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{77:1--77:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.77},
  URN =		{urn:nbn:de:0030-drops-164181},
  doi =		{10.4230/LIPIcs.ICALP.2022.77},
  annote =	{Keywords: bipartite matching, decremental matching, dynamic algorithms, continuous optimization, box-simplex games, primal-dual method}
}
Document
Track A: Algorithms, Complexity and Games
A PTAS for Packing Hypercubes into a Knapsack

Authors: Klaus Jansen, Arindam Khan, Marvin Lira, and K. V. N. Sreenivas


Abstract
We study the d-dimensional hypercube knapsack problem ({d}-D Hc-Knapsack) where we are given a set of d-dimensional hypercubes with associated profits, and a knapsack which is a unit d-dimensional hypercube. The goal is to find an axis-aligned non-overlapping packing of a subset of hypercubes such that the profit of the packed hypercubes is maximized. For this problem, Harren (ICALP'06) gave an algorithm with an approximation ratio of (1+1/2^d+ε). For d = 2, Jansen and Solis-Oba (IPCO'08) showed that the problem admits a polynomial-time approximation scheme (PTAS); Heydrich and Wiese (SODA'17) further improved the running time and gave an efficient polynomial-time approximation scheme (EPTAS). Both results use structural properties of 2-D packing, which do not generalize to higher dimensions. For d > 2, it remains open to obtain a PTAS, and in fact, there has been no improvement since Harren’s result. We settle the problem by providing a PTAS. Our main technical contribution is a structural lemma which shows that any packing of hypercubes can be converted into another structured packing such that a highly profitable subset of hypercubes is packed into a constant number of special hypercuboids, called 𝒱-Boxes and 𝒩-Boxes. As a side result, we give an almost optimal algorithm for a variant of the strip packing problem in higher dimensions. This might have applications to other multidimensional geometric packing problems.

Cite as

Klaus Jansen, Arindam Khan, Marvin Lira, and K. V. N. Sreenivas. A PTAS for Packing Hypercubes into a Knapsack. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 78:1-78:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ICALP.2022.78,
  author =	{Jansen, Klaus and Khan, Arindam and Lira, Marvin and Sreenivas, K. V. N.},
  title =	{{A PTAS for Packing Hypercubes into a Knapsack}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{78:1--78:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.78},
  URN =		{urn:nbn:de:0030-drops-164192},
  doi =		{10.4230/LIPIcs.ICALP.2022.78},
  annote =	{Keywords: Multidimensional knapsack, geometric packing, cube packing, strip packing}
}
Document
Track A: Algorithms, Complexity and Games
A Faster Interior-Point Method for Sum-Of-Squares Optimization

Authors: Shunhua Jiang, Bento Natura, and Omri Weinstein


Abstract
We present a faster interior-point method for optimizing sum-of-squares (SOS) polynomials, which are a central tool in polynomial optimization and capture convex programming in the Lasserre hierarchy. Let p = ∑_i q_i² be an n-variate SOS polynomial of degree 2d. Denoting by L := binom(n+d,d) and U := binom(n+2d,2d) the dimensions of the vector spaces in which the q_i’s and p live, respectively, our algorithm runs in time Õ(LU^{1.87}). This is polynomially faster than state-of-the-art SOS and semidefinite programming solvers [Jiang et al., 2020; Huang et al., 2021; Papp and Yildiz, 2019], which achieve runtime Õ(L^{0.5} min{U^{2.37}, L^{4.24}}). The centerpiece of our algorithm is a dynamic data structure for maintaining the inverse of the Hessian of the SOS barrier function under the polynomial interpolant basis [Papp and Yildiz, 2019], which efficiently extends to multivariate SOS optimization, and requires maintaining spectral approximations to low-rank perturbations of elementwise (Hadamard) products. This is the main challenge and departure from recent IPM breakthroughs using inverse maintenance, where low-rank updates to the slack matrix readily imply the same for the Hessian matrix.
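
To make the quantitative comparison above easier to scan, the dimension parameters and the two runtimes can be restated in display form; this is only a transcription of the quantities already given in the abstract, not additional material from the paper:

L = \binom{n+d}{d}, \qquad U = \binom{n+2d}{2d},
\qquad \tilde{O}\!\left(L \cdot U^{1.87}\right) \ \text{(this paper)} \quad \text{vs.} \quad \tilde{O}\!\left(L^{0.5}\,\min\{U^{2.37},\, L^{4.24}\}\right) \ \text{(previous solvers)}.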

Cite as

Shunhua Jiang, Bento Natura, and Omri Weinstein. A Faster Interior-Point Method for Sum-Of-Squares Optimization. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 79:1-79:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{jiang_et_al:LIPIcs.ICALP.2022.79,
  author =	{Jiang, Shunhua and Natura, Bento and Weinstein, Omri},
  title =	{{A Faster Interior-Point Method for Sum-Of-Squares Optimization}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{79:1--79:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.79},
  URN =		{urn:nbn:de:0030-drops-164205},
  doi =		{10.4230/LIPIcs.ICALP.2022.79},
  annote =	{Keywords: Interior Point Methods, Sum-of-squares Optimization, Dynamic Matrix Inverse}
}
Document
Track A: Algorithms, Complexity and Games
Tight Approximation Algorithms for Two-Dimensional Guillotine Strip Packing

Authors: Arindam Khan, Aditya Lonkar, Arnab Maiti, Amatya Sharma, and Andreas Wiese


Abstract
In the Strip Packing problem (SP), we are given a vertical half-strip [0,W]×[0,∞) and a set of n axis-aligned rectangles of width at most W. The goal is to find a non-overlapping packing of all rectangles into the strip such that the height of the packing is minimized. A well-studied and frequently used practical constraint is to allow only those packings that are guillotine separable, i.e., every rectangle in the packing can be obtained by recursively applying a sequence of edge-to-edge axis-parallel cuts (guillotine cuts) that do not intersect any item of the solution. In this paper, we study approximation algorithms for the Guillotine Strip Packing problem (GSP), i.e., the Strip Packing problem where we require additionally that the packing needs to be guillotine separable. This problem generalizes the classical Bin Packing problem and also makespan minimization on identical machines, and thus it is already strongly NP-hard. Moreover, due to a reduction from the Partition problem, it is NP-hard to obtain a polynomial-time (3/2-ε)-approximation algorithm for GSP for any ε > 0 (exactly as Strip Packing). We provide a matching polynomial time (3/2+ε)-approximation algorithm for GSP. Furthermore, we present a pseudo-polynomial time (1+ε)-approximation algorithm for GSP. This is surprising as it is NP-hard to obtain a (5/4-ε)-approximation algorithm for (general) Strip Packing in pseudo-polynomial time. Thus, our results essentially settle the approximability of GSP for both the polynomial and the pseudo-polynomial settings.

Cite as

Arindam Khan, Aditya Lonkar, Arnab Maiti, Amatya Sharma, and Andreas Wiese. Tight Approximation Algorithms for Two-Dimensional Guillotine Strip Packing. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 80:1-80:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{khan_et_al:LIPIcs.ICALP.2022.80,
  author =	{Khan, Arindam and Lonkar, Aditya and Maiti, Arnab and Sharma, Amatya and Wiese, Andreas},
  title =	{{Tight Approximation Algorithms for Two-Dimensional Guillotine Strip Packing}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{80:1--80:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.80},
  URN =		{urn:nbn:de:0030-drops-164215},
  doi =		{10.4230/LIPIcs.ICALP.2022.80},
  annote =	{Keywords: Approximation Algorithms, Two-Dimensional Packing, Rectangle Packing, Guillotine Cuts, Computational Geometry}
}
Document
Track A: Algorithms, Complexity and Games
A Study of Weisfeiler-Leman Colorings on Planar Graphs

Authors: Sandra Kiefer and Daniel Neuen


Abstract
The Weisfeiler-Leman (WL) algorithm is a combinatorial procedure that computes colorings on graphs, which can often be used to detect their (non-)isomorphism. In particular, the 1- and 2-dimensional versions 1-WL and 2-WL have received much attention, due to their numerous links to other areas of computer science. Knowing the expressive power of a certain dimension of the algorithm usually amounts to understanding the computed colorings. An increase in the dimension leads to finer computed colorings and, thus, more graphs can be distinguished. For example, on the class of planar graphs, 3-WL solves the isomorphism problem. However, the expressive power of 2-WL on this class is poorly understood (and, in particular, it may well be that it decides isomorphism). In this paper, we investigate the colorings computed by 2-WL on planar graphs. Towards this end, we analyze the graphs induced by edge color classes in the graph. Based on the obtained classification, we show that for every 3-connected planar graph, at least one of the following holds: a) after coloring all pairs with their 2-WL color, the graph has fixing number 1 with respect to 1-WL, or b) there is a 2-WL-definable matching that can be used to transform the graph into a smaller one, or c) 2-WL detects a connected subgraph that is essentially the graph of a Platonic or Archimedean solid, a prism, a cycle, or a bipartite graph K_{2,𝓁}. In particular, the graphs from case (a) are identified by 2-WL.

Cite as

Sandra Kiefer and Daniel Neuen. A Study of Weisfeiler-Leman Colorings on Planar Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 81:1-81:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{kiefer_et_al:LIPIcs.ICALP.2022.81,
  author =	{Kiefer, Sandra and Neuen, Daniel},
  title =	{{A Study of Weisfeiler-Leman Colorings on Planar Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{81:1--81:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.81},
  URN =		{urn:nbn:de:0030-drops-164228},
  doi =		{10.4230/LIPIcs.ICALP.2022.81},
  annote =	{Keywords: Weisfeiler-Leman algorithm, planar graphs, edge-transitive graphs, fixing number}
}
Document
Track A: Algorithms, Complexity and Games
Beating Matrix Multiplication for n^{1/3}-Directed Shortcuts

Authors: Shimon Kogan and Merav Parter


Abstract
For an n-vertex digraph G = (V,E) and an integer parameter D, a D-shortcut is a small set H of directed edges taken from the transitive closure of G such that the diameter of G ∪ H is at most D. A recent work [Kogan and Parter, SODA 2022] presented shortcutting algorithms with improved diameter vs. size tradeoffs, most notably obtaining linear-size D-shortcuts for D = Õ(n^{1/3}) and thereby breaking the √n-diameter barrier. These algorithms run in O(n^{ω}) time, as they are based on the computation of the transitive closure of the graph. We present a new algorithmic approach for D-shortcuts that matches the bounds of [Kogan and Parter, SODA 2022], while running in o(n^{ω}) time for every D ≥ n^{1/3}. Our approach is based on a reduction to the min-cost max-flow problem, which can be solved in Õ(m+n^{3/2}) time due to the recent breakthrough result of [Brand et al., STOC 2021]. We also demonstrate the applicability of our techniques to computing minimum chain covers and dipath decompositions for directed acyclic graphs. For an n-vertex m-edge digraph G = (V,E), our key results are: - An Õ(n^{1/3}⋅ m+n^{3/2})-time algorithm for computing D-shortcuts of linear size for D = Õ(n^{1/3}), and an Õ(n^{1/4}⋅ m+n^{7/4})-time algorithm for computing D-shortcuts of Õ(n^{3/4}) edges for D = Õ(n^{1/2}). - For a DAG G, we provide Õ(m+n^{3/2})-time algorithms for computing its minimum chain covers, maximum antichain, and decomposition into dipaths and independent sets. This improves considerably over the state-of-the-art bounds by [Caceres et al., SODA 2022] and [Grandoni et al., SODA 2021]. Our results also provide a new connection between shortcutting sets and the seemingly less related problems of minimum chain covers and maximum antichains in DAGs.
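
As a compact restatement of the shortcut guarantee described in this abstract (nothing here beyond what the abstract already states; TC(G) denotes the transitive closure of G and diam the diameter):

H \subseteq \mathrm{TC}(G), \qquad \operatorname{diam}(G \cup H) \le D,
\qquad\text{with } |H| = \tilde{O}(n) \text{ for } D = \tilde{O}(n^{1/3})
\ \text{ and }\ |H| = \tilde{O}(n^{3/4}) \text{ for } D = \tilde{O}(n^{1/2}).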

Cite as

Shimon Kogan and Merav Parter. Beating Matrix Multiplication for n^{1/3}-Directed Shortcuts. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 82:1-82:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{kogan_et_al:LIPIcs.ICALP.2022.82,
  author =	{Kogan, Shimon and Parter, Merav},
  title =	{{Beating Matrix Multiplication for n^\{1/3\}-Directed Shortcuts}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{82:1--82:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.82},
  URN =		{urn:nbn:de:0030-drops-164230},
  doi =		{10.4230/LIPIcs.ICALP.2022.82},
  annote =	{Keywords: Directed Shortcuts, Transitive Closure, Width}
}
Document
Track A: Algorithms, Complexity and Games
Monotone Arithmetic Complexity of Graph Homomorphism Polynomials

Authors: Balagopal Komarath, Anurag Pandey, and Chengot Sankaramenon Rahul


Abstract
We study homomorphism polynomials, which are polynomials that enumerate all homomorphisms from a pattern graph H to n-vertex graphs. These polynomials have received a lot of attention recently for their crucial role in several new algorithms for counting and detecting graph patterns, and also for obtaining natural polynomial families which are complete for algebraic complexity classes VBP, VP, and VNP. We discover that, in the monotone setting, the formula complexity, the ABP complexity, and the circuit complexity of such polynomial families are exactly characterized by the treedepth, the pathwidth, and the treewidth of the pattern graph respectively. Furthermore, we establish a single, unified framework, using our characterization, to collect several known results that were obtained independently via different methods. For instance, we attain superpolynomial separations between circuits, ABPs, and formulas in the monotone setting, where the polynomial families separating the classes all correspond to well-studied combinatorial problems. Moreover, our proofs rediscover fine-grained separations between these models for constant-degree polynomials.

Cite as

Balagopal Komarath, Anurag Pandey, and Chengot Sankaramenon Rahul. Monotone Arithmetic Complexity of Graph Homomorphism Polynomials. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 83:1-83:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{komarath_et_al:LIPIcs.ICALP.2022.83,
  author =	{Komarath, Balagopal and Pandey, Anurag and Rahul, Chengot Sankaramenon},
  title =	{{Monotone Arithmetic Complexity of Graph Homomorphism Polynomials}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{83:1--83:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.83},
  URN =		{urn:nbn:de:0030-drops-164245},
  doi =		{10.4230/LIPIcs.ICALP.2022.83},
  annote =	{Keywords: Homomorphism polynomials, Monotone complexity, Algebraic complexity, Graph algorithms, Fine-grained complexity, Fixed-parameter algorithms and complexity, Treewidth, Pathwidth, Treedepth, Graph homomorphisms, Algebraic circuits, Algebraic branching programs, Algebraic formulas}
}
Document
Track A: Algorithms, Complexity and Games
Exact Recovery Algorithm for Planted Bipartite Graph in Semi-Random Graphs

Authors: Akash Kumar, Anand Louis, and Rameesh Paul


Abstract
The problem of finding the largest induced balanced bipartite subgraph in a given graph is NP-hard. This problem is closely related to the problem of finding the smallest Odd Cycle Transversal. In this work, we consider the following model of instances: starting with a set of vertices V, a set S ⊆ V of k vertices is chosen and an arbitrary d-regular bipartite graph is added on it; edges between pairs of vertices in S × (V⧵S) and (V⧵S) × (V⧵S) are added with probability p. Since, for d = 0, the problem reduces to recovering a planted independent set, we don't expect efficient algorithms for k = o(√n). This problem is a generalization of the planted balanced biclique problem, where the bipartite graph induced on S is a complete bipartite graph; [Yevgeny Levanzov, 2018] gave an algorithm for recovering S in this problem when k = Ω(√n). Our main result is an efficient algorithm that recovers (w.h.p.) the planted bipartite graph when k = Ω_p(√{n log n}) for a large range of parameters. Our results also hold for a natural semi-random model of instances, which involves the presence of a monotone adversary. Our proof shows that a natural SDP relaxation for the problem is integral by constructing an appropriate solution to its dual formulation. Our main technical contribution is a new approach for constructing the dual solution, in which we calibrate the eigenvectors of the adjacency matrix to be the eigenvectors of the dual matrix. We believe that this approach may have applications to other recovery problems in semi-random models as well. When k = Ω(√n), we give an algorithm for recovering S whose running time is exponential in the number of small eigenvalues of the graph induced on S; this algorithm is based on subspace enumeration techniques due to the works of [Alexandra Kolla and Madhur Tulsiani, 2007; Arora et al., 2010; Kolla, 2011].

Cite as

Akash Kumar, Anand Louis, and Rameesh Paul. Exact Recovery Algorithm for Planted Bipartite Graph in Semi-Random Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 84:1-84:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.ICALP.2022.84,
  author =	{Kumar, Akash and Louis, Anand and Paul, Rameesh},
  title =	{{Exact Recovery Algorithm for Planted Bipartite Graph in Semi-Random Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{84:1--84:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.84},
  URN =		{urn:nbn:de:0030-drops-164251},
  doi =		{10.4230/LIPIcs.ICALP.2022.84},
  annote =	{Keywords: SDP duality, Planted models, Semi-random models, Exact recovery, Threshold rank, Spectral embedding, Subspace enumeration}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Time-Backlog Tradeoffs for the Variable-Processor Cup Game

Authors: William Kuszmaul and Shyam Narayanan


Abstract
The p-processor cup game is a classic and widely studied scheduling problem that captures the setting in which a p-processor machine must assign tasks to processors over time in order to ensure that no individual task ever falls too far behind. The problem is formalized as a multi-round game in which two players, a filler (who assigns work to tasks) and an emptier (who schedules tasks), compete. The emptier’s goal is to minimize backlog, which is the maximum amount of outstanding work for any task. Recently, Kuszmaul and Westover (ITCS, 2021) proposed the variable-processor cup game, which considers the same problem, except that the amount of resources available to the players (i.e., the number p of processors) fluctuates between rounds of the game. They showed that this seemingly small modification fundamentally changes the dynamics of the game: whereas the optimal backlog in the fixed p-processor game is Θ(log n), independent of p, the optimal backlog in the variable-processor game is Θ(n). The latter result was only known to apply to games with exponentially many rounds, however, and it has remained an open question what the optimal tradeoff between time and backlog is for shorter games. This paper establishes a tight trade-off curve between time and backlog in the variable-processor cup game. We show that, for a game consisting of t rounds, the optimal backlog is Θ(b(t)), where b(t) = t if t ≤ log n; b(t) = t^{1/3} log^{2/3}(n^3/t + 1) if log n < t ≤ n^3; and b(t) = n if t > n^3. An important consequence is that the optimal backlog is Θ(n) if and only if t ≥ Ω(n³). Our techniques also allow us to resolve several other open questions concerning how the variable-processor cup game behaves in beyond-worst-case-analysis settings.
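
Because the piecewise bound above is easy to misparse inline, here it is once more as a display; this is only a transcription of the formula stated in the abstract:

b(t) =
\begin{cases}
  t, & t \le \log n,\\
  t^{1/3} \log^{2/3}\!\left(n^{3}/t + 1\right), & \log n < t \le n^{3},\\
  n, & t > n^{3}.
\end{cases}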

Cite as

William Kuszmaul and Shyam Narayanan. Optimal Time-Backlog Tradeoffs for the Variable-Processor Cup Game. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 85:1-85:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{kuszmaul_et_al:LIPIcs.ICALP.2022.85,
  author =	{Kuszmaul, William and Narayanan, Shyam},
  title =	{{Optimal Time-Backlog Tradeoffs for the Variable-Processor Cup Game}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{85:1--85:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.85},
  URN =		{urn:nbn:de:0030-drops-164263},
  doi =		{10.4230/LIPIcs.ICALP.2022.85},
  annote =	{Keywords: Cup Games, Potential Functions, Greedy}
}
Document
Track A: Algorithms, Complexity and Games
Near-Optimal Decremental Hopsets with Applications

Authors: Jakub Łącki and Yasamin Nazari


Abstract
Given a weighted undirected graph G = (V,E,w), a hopset H of hopbound β and stretch (1+ε) is a set of edges such that for any pair of nodes u, v ∈ V, there is a path in G ∪ H of at most β hops whose length is within a (1+ε) factor of the distance between u and v in G. We give the first efficient decremental algorithm for maintaining hopsets with a polylogarithmic hopbound. The update time of our algorithm matches the best known static algorithm up to polylogarithmic factors. All the previous decremental hopset constructions had a superpolylogarithmic (but subpolynomial) hopbound of 2^{log^{Ω(1)} n} [Bernstein, FOCS'09; HKN, FOCS'14; Chechik, FOCS'18]. By applying our decremental hopset construction, we get improved or near-optimal bounds for several distance problems. Most importantly, we show how to decrementally maintain (2k-1)(1+ε)-approximate all-pairs shortest paths (for any constant k ≥ 2), in Õ(n^{1/k}) amortized update time and O(k) query time. This improves (by a polynomial factor) over the update time of the best previously known decremental algorithm in the constant query time regime. Moreover, it improves over the result of [Chechik, FOCS'18], which has a query time of O(log log(nW)), where W is the aspect ratio, and an amortized update time of n^{1/k}⋅(1/ε)^{Õ(√{log n})}. For sparse graphs, our construction nearly matches the best known static running time / query time tradeoff. We also obtain near-optimal bounds for maintaining approximate multi-source shortest paths and distance sketches, and get improved bounds for approximate single-source shortest paths. Our algorithms are randomized and our bounds hold with high probability against an oblivious adversary.
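
Restating the hopset guarantee from the first sentence of the abstract as a single condition (here d_G denotes the weighted shortest-path distance in G and w(P) the length of a path P; both symbols are introduced only for this display):

\forall\, u, v \in V \;\; \exists\ \text{a path } P \text{ in } G \cup H \text{ with at most } \beta \text{ hops and } w(P) \le (1+\varepsilon)\, d_G(u,v).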

Cite as

Jakub Łącki and Yasamin Nazari. Near-Optimal Decremental Hopsets with Applications. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 86:1-86:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{lacki_et_al:LIPIcs.ICALP.2022.86,
  author =	{{\L}\k{a}cki, Jakub and Nazari, Yasamin},
  title =	{{Near-Optimal Decremental Hopsets with Applications}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{86:1--86:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.86},
  URN =		{urn:nbn:de:0030-drops-164277},
  doi =		{10.4230/LIPIcs.ICALP.2022.86},
  annote =	{Keywords: Dynamic Algorithms, Data Structures, Shortest Paths, Hopsets}
}
Document
Track A: Algorithms, Complexity and Games
Tight Vector Bin Packing with Few Small Items via Fast Exact Matching in Multigraphs

Authors: Alexandra Lassota, Aleksander Łukasiewicz, and Adam Polak


Abstract
We solve the Bin Packing problem in O^*(2^k) time, where k is the number of items of size at most one third of the bin capacity. This parameter measures the distance from the polynomially solvable case of only large (i.e., greater than one third) items. Our algorithm is actually designed to work for the more general Vector Bin Packing problem, in which items are multidimensional vectors. We improve over the previous fastest O^*(k! ⋅ 4^k)-time algorithm. Our algorithm works by reducing the problem to finding an exact-weight perfect matching in a (multi-)graph with O^*(2^k) edges, whose weights are integers of the order of O^*(2^k). To solve the matching problem in the desired time, we give a variant of the classic Mulmuley-Vazirani-Vazirani algorithm with only a linear dependence on the edge weights and the number of edges, which may be of independent interest. Moreover, we give a tight lower bound, under the Strong Exponential Time Hypothesis (SETH), showing that the constant 2 in the base of the exponent cannot be further improved for Vector Bin Packing. Our techniques also lead to improved algorithms for Vector Multiple Knapsack, Vector Bin Covering, and Perfect Matching with Hitting Constraints.

Cite as

Alexandra Lassota, Aleksander Łukasiewicz, and Adam Polak. Tight Vector Bin Packing with Few Small Items via Fast Exact Matching in Multigraphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 87:1-87:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{lassota_et_al:LIPIcs.ICALP.2022.87,
  author =	{Lassota, Alexandra and {\L}ukasiewicz, Aleksander and Polak, Adam},
  title =	{{Tight Vector Bin Packing with Few Small Items via Fast Exact Matching in Multigraphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{87:1--87:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.87},
  URN =		{urn:nbn:de:0030-drops-164286},
  doi =		{10.4230/LIPIcs.ICALP.2022.87},
  annote =	{Keywords: Bin Packing, Vector Bin Packing, Parameterized Complexity, Matching}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Complexity of Untangling Knots

Authors: Clément Legrand-Duchesne, Ashutosh Rai, and Martin Tancer


Abstract
Deciding whether a diagram of a knot can be untangled with a given number of moves (as a part of the input) is known to be NP-complete. In this paper we determine the parameterized complexity of this problem with respect to a natural parameter called defect. Roughly speaking, it measures the efficiency of the moves used in the shortest untangling sequence of Reidemeister moves. We show that the II^- moves in a shortest untangling sequence can be essentially performed greedily. Using that, we show that this problem belongs to W[P] when parameterized by the defect. We also show that this problem is W[P]-hard by a reduction from Minimum axiom set.

Cite as

Clément Legrand-Duchesne, Ashutosh Rai, and Martin Tancer. Parameterized Complexity of Untangling Knots. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 88:1-88:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{legrandduchesne_et_al:LIPIcs.ICALP.2022.88,
  author =	{Legrand-Duchesne, Cl\'{e}ment and Rai, Ashutosh and Tancer, Martin},
  title =	{{Parameterized Complexity of Untangling Knots}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{88:1--88:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.88},
  URN =		{urn:nbn:de:0030-drops-164296},
  doi =		{10.4230/LIPIcs.ICALP.2022.88},
  annote =	{Keywords: unknot recognition, parameterized complexity, Reidemeister moves, W\lbrackP\rbrack-complete}
}
Document
Track A: Algorithms, Complexity and Games
Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity

Authors: Chao Liao, Qingyun Chen, Bundit Laekhanukit, and Yuhao Zhang


Abstract
In the k-outconnected directed Steiner tree problem (k-DST), we are given an n-vertex directed graph G = (V,E) with edge costs, a connectivity requirement k, a root r ∈ V and a set of terminals T ⊆ V. The goal is to find a minimum-cost subgraph H ⊆ G that has k edge-disjoint paths from the root vertex r to every terminal t ∈ T. The problem is NP-hard, and inapproximability results are known in several parameters, e.g., hardness in terms of n: log^{2-ε}n-hardness for k = 1 [Halperin and Krauthgamer, STOC'03], 2^{log^{1-ε}n}-hardness for general case [Cheriyan, Laekhanukit, Naves and Vetta, SODA'12], hardness in terms of k [Cheriyan et al., SODA'12; Laekhanukit, SODA'14; Manurangsi, IPL'19] and hardness in terms of |T| [Laekhanukit, SODA'14]. In this paper, we show the approximation hardness of k-DST for various parameters. - Ω(|T|/log |T|)-approximation hardness, which holds under the standard complexity assumption NP≠ ZPP. The inapproximability ratio is tightened to Ω(|T|) under the Strongish Planted Clique Hypothesis [Manurangsi, Rubinstein and Schramm, ITCS 2021]. The latter hardness result matches the approximation ratio of |T| obtained by a trivial approximation algorithm, thus closing the long-standing open problem. - Ω(2^{k/2} / k)-approximation hardness for the general case of k-DST under the assumption NP≠ZPP. This is the first hardness result known for survivable network design problems with an inapproximability ratio exponential in k. - Ω((k/L)^{L/4})-approximation hardness for k-DST on L-layered graphs for L ≤ O(log n). This almost matches the approximation ratio of O(k^{L-1}⋅ L ⋅ log |T|) achieved in O(n^L)-time due to Laekhanukit [ICALP'16]. We further extend our hardness results in terms of |T| to the undirected cases of k-DST, namely the single-source k-vertex-connected Steiner tree and the k-edge-connected group Steiner tree problems. Thus, we obtain Ω(|T|/log |T|) and Ω(|T|) approximation hardness for both problems under the assumption NP≠ ZPP and the Strongish Planted Clique Hypothesis, respectively. This again matches the upper bound obtained by trivial algorithms.

Cite as

Chao Liao, Qingyun Chen, Bundit Laekhanukit, and Yuhao Zhang. Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 89:1-89:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{liao_et_al:LIPIcs.ICALP.2022.89,
  author =	{Liao, Chao and Chen, Qingyun and Laekhanukit, Bundit and Zhang, Yuhao},
  title =	{{Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{89:1--89:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.89},
  URN =		{urn:nbn:de:0030-drops-164309},
  doi =		{10.4230/LIPIcs.ICALP.2022.89},
  annote =	{Keywords: Directed Steiner Tree, Hardness of Approximation, Fault-Tolerant and Survivable Network Design}
}
Document
Track A: Algorithms, Complexity and Games
On Lower Bounds of Approximating Parameterized k-Clique

Authors: Bingkai Lin, Xuandi Ren, Yican Sun, and Xiuhan Wang


Abstract
Given a simple graph G and an integer k, the goal of the k-Clique problem is to decide if G contains a complete subgraph of size k. We say an algorithm approximates k-Clique within a factor g(k) if it can find a clique of size at least k/g(k) when G is guaranteed to have a k-clique. Recently, it was shown that approximating k-Clique within a constant factor is W[1]-hard [Bingkai Lin, 2021]. We study the approximation of k-Clique under the Exponential Time Hypothesis (ETH). The reduction of [Bingkai Lin, 2021] already implies an n^{Ω((log k)^{1/6})}-time lower bound under ETH. We improve this lower bound to n^{Ω(log k)}. Using the gap-amplification technique based on expander graphs, we also prove that there is no k^{o(1)}-factor FPT-approximation algorithm for k-Clique under ETH. We also suggest a new way to prove the Parameterized Inapproximability Hypothesis (PIH) under ETH. We show that if there is no n^{O(k/log k)}-time algorithm to approximate k-Clique within a constant factor, then PIH is true.

Cite as

Bingkai Lin, Xuandi Ren, Yican Sun, and Xiuhan Wang. On Lower Bounds of Approximating Parameterized k-Clique. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 90:1-90:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ICALP.2022.90,
  author =	{Lin, Bingkai and Ren, Xuandi and Sun, Yican and Wang, Xiuhan},
  title =	{{On Lower Bounds of Approximating Parameterized k-Clique}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{90:1--90:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.90},
  URN =		{urn:nbn:de:0030-drops-164317},
  doi =		{10.4230/LIPIcs.ICALP.2022.90},
  annote =	{Keywords: parameterized complexity, k-clique, hardness of approximation}
}
Document
Track A: Algorithms, Complexity and Games
Backdoor Sets on Nowhere Dense SAT

Authors: Daniel Lokshtanov, Fahad Panolan, and M. S. Ramanujan


Abstract
For a satisfiable CNF formula ϕ and an integer t, a weak backdoor set to treewidth-t is a set of variables such that there is an assignment to this set that reduces ϕ to a satisfiable formula that has an incidence graph of treewidth at most t. A natural research program in the work on fixed-parameter algorithms (FPT algorithms) for SAT is to delineate the tractability borders for the problem of detecting a small weak backdoor set to treewidth-t formulas. In this line of research, Gaspers and Szeider (ICALP 2012) showed that detecting a weak backdoor set of size at most k to treewidth-1 is W[2]-hard parameterized by k if the input is an arbitrary CNF formula. Fomin, Lokshtanov, Misra, Ramanujan and Saurabh (SODA 2015), showed that if the input is d-CNF, then detecting a weak backdoor set of size at most k to treewidth-t is fixed-parameter tractable (parameterized by k,t,d). These two results indicate that sparsity of the input plays a role in determining the parameterized complexity of detecting weak backdoor sets to treewidth-t. In this work, we take a major step towards characterizing the precise impact of sparsity on the parameterized complexity of this problem by obtaining algorithmic results for detecting small weak backdoor sets to treewidth-t for input formulas whose incidence graphs belong to a nowhere-dense graph class. Nowhere density provides a robust and well-understood notion of sparsity that is at the heart of several advances on model checking and structural graph theory. Moreover, nowhere-dense graph classes contain many well-studied graph classes such as bounded treewidth graphs, graphs that exclude a fixed (topological) minor and graphs of bounded expansion. Our main contribution is an algorithm that, given a formula ϕ whose incidence graph belongs to a fixed nowhere-dense graph class and an integer k, in time f(t,k)|ϕ|^O(1), either finds a satisfying assignment of ϕ, or concludes correctly that ϕ has no weak backdoor set of size at most k to treewidth-t. To obtain this algorithm, we develop a strategy that only relies on the fact that nowhere-dense graph classes are biclique-free. That is, for every nowhere-dense graph class, there is a p such that it is contained in the class of graphs that exclude K_{p,p} as a subgraph. This is a significant feature of our techniques since the class of biclique-free graphs also generalizes the class of graphs of bounded degeneracy, which are incomparable with nowhere-dense graph classes. As a result, our algorithm also generalizes the results of Fomin, Lokshtanov, Misra, Ramanujan and Saurabh (SODA 2015) for the special case of d-CNF formulas as input when d is fixed. This is because the incidence graphs of such formulas exclude K_{d+1,d+1} as a subgraph.

Cite as

Daniel Lokshtanov, Fahad Panolan, and M. S. Ramanujan. Backdoor Sets on Nowhere Dense SAT. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 91:1-91:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.ICALP.2022.91,
  author =	{Lokshtanov, Daniel and Panolan, Fahad and Ramanujan, M. S.},
  title =	{{Backdoor Sets on Nowhere Dense SAT}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{91:1--91:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.91},
  URN =		{urn:nbn:de:0030-drops-164323},
  doi =		{10.4230/LIPIcs.ICALP.2022.91},
  annote =	{Keywords: Fixed-parameter Tractability, Satisfiability, Backdoors, Treewidth}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Coding Theorems in Time-Bounded Kolmogorov Complexity

Authors: Zhenjian Lu, Igor C. Oliveira, and Marius Zimand


Abstract
The classical coding theorem in Kolmogorov complexity states that if an n-bit string x is sampled with probability δ by an algorithm with prefix-free domain then 𝖪(x) ≤ log(1/δ) + O(1). In a recent work, Lu and Oliveira [Zhenjian Lu and Igor C. Oliveira, 2021] established an unconditional time-bounded version of this result, by showing that if x can be efficiently sampled with probability δ then rKt(x) = O(log(1/δ)) + O(log n), where rKt denotes the randomized analogue of Levin’s Kt complexity. Unfortunately, this result is often insufficient when transferring applications of the classical coding theorem to the time-bounded setting, as it achieves an O(log(1/δ)) bound instead of the information-theoretically optimal log(1/δ). Motivated by this discrepancy, we investigate optimal coding theorems in the time-bounded setting. Our main contributions can be summarised as follows. • Efficient coding theorem for rKt with a factor of 2. Addressing a question from [Zhenjian Lu and Igor C. Oliveira, 2021], we show that if x can be efficiently sampled with probability at least δ then rKt(x) ≤ (2 + o(1)) ⋅ log(1/δ) + O(log n). As in previous work, our coding theorem is efficient in the sense that it provides a polynomial-time probabilistic algorithm that, when given x, the code of the sampler, and δ, outputs, with probability ≥ 0.99, a probabilistic representation of x that certifies this rKt complexity bound. • Optimality under a cryptographic assumption. Under a hypothesis about the security of cryptographic pseudorandom generators, we show that no efficient coding theorem can achieve a bound of the form rKt(x) ≤ (2 - o(1)) ⋅ log(1/δ) + poly(log n). Under a weaker assumption, we exhibit a gap between efficient coding theorems and existential coding theorems with near-optimal parameters. • Optimal coding theorem for pK^t and unconditional Antunes-Fortnow. We consider pK^t complexity [Halley Goldberg et al., 2022], a variant of rKt where the randomness is public and the time bound is fixed. We observe the existence of an optimal coding theorem for pK^t, and employ this result to establish an unconditional version of a theorem of Antunes and Fortnow [Luis Filipe Coelho Antunes and Lance Fortnow, 2009] which characterizes the worst-case running times of languages that are in average polynomial-time over all 𝖯-samplable distributions.
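
For quick comparison, the three coding-theorem bounds discussed in the abstract, written side by side (a restatement only, each under the sampling assumption stated above for it):

\begin{aligned}
\mathsf{K}(x) &\le \log(1/\delta) + O(1) && \text{(classical coding theorem)}\\
\mathrm{rKt}(x) &= O(\log(1/\delta)) + O(\log n) && \text{(Lu and Oliveira, 2021)}\\
\mathrm{rKt}(x) &\le (2 + o(1)) \cdot \log(1/\delta) + O(\log n) && \text{(this paper)}
\end{aligned}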

Cite as

Zhenjian Lu, Igor C. Oliveira, and Marius Zimand. Optimal Coding Theorems in Time-Bounded Kolmogorov Complexity. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 92:1-92:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.ICALP.2022.92,
  author =	{Lu, Zhenjian and Oliveira, Igor C. and Zimand, Marius},
  title =	{{Optimal Coding Theorems in Time-Bounded Kolmogorov Complexity}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{92:1--92:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.92},
  URN =		{urn:nbn:de:0030-drops-164331},
  doi =		{10.4230/LIPIcs.ICALP.2022.92},
  annote =	{Keywords: computational complexity, randomized algorithms, Kolmogorov complexity}
}
Document
Track A: Algorithms, Complexity and Games
Max Weight Independent Set in Graphs with No Long Claws: An Analog of the Gyárfás' Path Argument

Authors: Konrad Majewski, Tomáš Masařík, Jana Novotná, Karolina Okrasa, Marcin Pilipczuk, Paweł Rzążewski, and Marek Sokołowski


Abstract
We revisit recent developments for the Maximum Weight Independent Set problem in graphs excluding a subdivided claw S_{t,t,t} as an induced subgraph [Chudnovsky, Pilipczuk, Pilipczuk, Thomassé, SODA 2020] and provide a subexponential-time algorithm with improved running time 2^𝒪(√nlog n) and a quasipolynomial-time approximation scheme with improved running time 2^𝒪(ε^{-1} log⁵ n). The Gyárfás' path argument, a powerful tool that is the main building block for many algorithms in P_t-free graphs, ensures that given an n-vertex P_t-free graph, in polynomial time we can find a set P of at most t-1 vertices, such that every connected component of G-N[P] has at most n/2 vertices. Our main technical contribution is an analog of this result for S_{t,t,t}-free graphs: given an n-vertex S_{t,t,t}-free graph, in polynomial time we can find a set P of 𝒪(t log n) vertices and an extended strip decomposition (an appropriate analog of the decomposition into connected components) of G-N[P] such that every particle (an appropriate analog of a connected component to recurse on) of the said extended strip decomposition has at most n/2 vertices.

Cite as

Konrad Majewski, Tomáš Masařík, Jana Novotná, Karolina Okrasa, Marcin Pilipczuk, Paweł Rzążewski, and Marek Sokołowski. Max Weight Independent Set in Graphs with No Long Claws: An Analog of the Gyárfás' Path Argument. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 93:1-93:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{majewski_et_al:LIPIcs.ICALP.2022.93,
  author =	{Majewski, Konrad and Masa\v{r}{\'\i}k, Tom\'{a}\v{s} and Novotn\'{a}, Jana and Okrasa, Karolina and Pilipczuk, Marcin and Rz\k{a}\.{z}ewski, Pawe{\l} and Soko{\l}owski, Marek},
  title =	{{Max Weight Independent Set in Graphs with No Long Claws: An Analog of the Gy\'{a}rf\'{a}s' Path Argument}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{93:1--93:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.93},
  URN =		{urn:nbn:de:0030-drops-164343},
  doi =		{10.4230/LIPIcs.ICALP.2022.93},
  annote =	{Keywords: Max Independent Set, subdivided claw, QPTAS, subexponential-time algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds

Authors: Surya Mathialagan, Virginia Vassilevska Williams, and Yinzhan Xu


Abstract
The AP-LCA problem asks, given an n-node directed acyclic graph (DAG), to compute for every pair of vertices u and v in the DAG a lowest common ancestor (LCA) of u and v if one exists, i.e. a node that is an ancestor of both u and v such that no proper descendant of it is also a common ancestor of u and v. Recently, [Grandoni et al. SODA'21] obtained the first sub-n^{2.5} time algorithm for AP-LCA, running in O(n^{2.447}) time. Meanwhile, the only known conditional lower bound for AP-LCA is that the problem requires n^{ω-o(1)} time, where ω is the matrix multiplication exponent. In this paper we study several interesting variants of AP-LCA, providing both algorithms and fine-grained lower bounds for them. The lower bounds we obtain are the first conditional lower bounds for LCA problems higher than n^{ω-o(1)}. Some of our results include: - In any DAG, we can detect all vertex pairs that have at most two LCAs and list all of their LCAs in O(n^ω) time. This algorithm extends a result of [Kowaluk and Lingas ESA'07] which showed an Õ(n^ω) time algorithm that detects all pairs with a unique LCA in a DAG and outputs their corresponding LCAs. - Listing 7 LCAs per vertex pair in DAGs requires n^{3-o(1)} time under the popular assumption that 3-uniform 5-hyperclique detection requires n^{5-o(1)} time. This is surprising since essentially cubic time is sufficient to list all LCAs (if ω = 2). - Counting the number of LCAs for every vertex pair in a DAG requires n^{3-o(1)} time under the Strong Exponential Time Hypothesis, and n^{ω(1,2,1)-o(1)} time under the 4-Clique hypothesis. This shows that the algorithm of [Eckhardt, Mühling and Nowak ESA'07] for listing all LCAs for every pair of vertices is likely optimal. - Given a DAG and a vertex w_{u,v} for every vertex pair u,v, verifying whether all w_{u,v} are valid LCAs requires n^{2.5-o(1)} time, assuming 3-uniform 4-hyperclique detection requires n^{4-o(1)} time. This defies the common intuition that verification is easier than computation, since returning some LCA per vertex pair can be done in O(n^{2.447}) time.
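
A small brute-force sketch may help unpack the LCA definition used above (my own illustration, not the paper's algorithm; it assumes the usual convention that a node is an ancestor and descendant of itself, and the function names are invented here):

# Brute-force illustration of the LCA definition: a node w is an LCA of (u, v)
# in a DAG if w is a common ancestor of u and v and no proper descendant of w
# is also a common ancestor. For intuition only; far from the paper's runtimes.
from itertools import product

def descendants(adj, s):
    # All nodes reachable from s, including s itself.
    seen, stack = {s}, [s]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def all_lcas(adj):
    # Map every ordered pair (u, v) to its set of LCAs.
    n = len(adj)
    reach = [descendants(adj, w) for w in range(n)]                    # reach[w] = descendants of w
    anc = [{w for w in range(n) if v in reach[w]} for v in range(n)]   # anc[v] = ancestors of v
    return {
        (u, v): {w for w in anc[u] & anc[v]
                 if not any(x != w and x in anc[u] & anc[v] for x in reach[w])}
        for u, v in product(range(n), repeat=2)
    }

# Example DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3; node 0 is the unique LCA of (1, 2).
print(all_lcas([[1, 2], [3], [3], []])[(1, 2)])   # {0}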

Cite as

Surya Mathialagan, Virginia Vassilevska Williams, and Yinzhan Xu. Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 94:1-94:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{mathialagan_et_al:LIPIcs.ICALP.2022.94,
  author =	{Mathialagan, Surya and Vassilevska Williams, Virginia and Xu, Yinzhan},
  title =	{{Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{94:1--94:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.94},
  URN =		{urn:nbn:de:0030-drops-164359},
  doi =		{10.4230/LIPIcs.ICALP.2022.94},
  annote =	{Keywords: All-Pairs Lowest Common Ancestor, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
A PTAS for Capacitated Vehicle Routing on Trees

Authors: Claire Mathieu and Hang Zhou


Abstract
We give a polynomial time approximation scheme (PTAS) for the unit demand capacitated vehicle routing problem (CVRP) on trees, for the entire range of the tour capacity. The result extends to the splittable CVRP.

Cite as

Claire Mathieu and Hang Zhou. A PTAS for Capacitated Vehicle Routing on Trees. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 95:1-95:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{mathieu_et_al:LIPIcs.ICALP.2022.95,
  author =	{Mathieu, Claire and Zhou, Hang},
  title =	{{A PTAS for Capacitated Vehicle Routing on Trees}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{95:1--95:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.95},
  URN =		{urn:nbn:de:0030-drops-164369},
  doi =		{10.4230/LIPIcs.ICALP.2022.95},
  annote =	{Keywords: approximation algorithms, capacitated vehicle routing, graph algorithms, combinatorial optimization}
}
Document
Track A: Algorithms, Complexity and Games
Graph Reconstruction from Random Subgraphs

Authors: Andrew McGregor and Rik Sengupta


Abstract
We consider the problem of reconstructing a graph G in two natural sampling models: 1) each sample corresponds to a random induced subgraph and 2) for a fixed adjacency matrix A_G for G, each sample corresponds to a random principal submatrix (i.e., a submatrix formed by deleting the same set of rows and columns) of A_G. We refer to these models as the "unordered" and "ordered" models respectively. The two models are motivated by work on the reconstruction conjecture in combinatorics and trace reconstruction in theoretical computer science. Despite the superficial similarities between the models, we show that the sample complexity of reconstruction can be exponentially different. Our main results are as follows:
- In the unordered model, we show that almost all graphs can be reconstructed with Θ(p^{-2} log n) samples if each node is included in the random subgraph with any constant probability p; this is optimal. We show our upper bound extends to smaller values of p as well. In contrast, for arbitrary graphs, we show that exp(Ω(n)) samples are required for reconstruction even for 2-regular graphs. One of the key technical steps in the first result is showing that, with high probability, any subgraph isomorphism in a random graph has at most O(log n) non-fixed points.
- In the ordered model, we show that any graph with constant arboricity or degeneracy (i.e., every induced subgraph has constant average degree) can be reconstructed with exp(Õ(n^{1/3})) samples and that arbitrary graphs can be reconstructed with exp(Õ(n^{1/2})) samples. The results about almost all graphs in the first model carry over to the second. One of the key technical steps in the first result is showing that reconstruction of low degeneracy graphs can be reduced to learning a small number of moments of sets of the form {i-j: j < i,(i,j) ∈ E} and {j-i: i < j,(i,j) ∈ E} where G = ([n],E) is the unknown graph.
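
For concreteness, here is a small Python sketch (our own illustration, not part of the paper) of how one sample is drawn in each of the two models, starting from an adjacency matrix; in the unordered model the surviving vertices are relabelled at random so that only the isomorphism type of the induced subgraph is retained.

import random

def sample_unordered(adj, p):
    # Unordered model: each vertex is kept independently with probability p;
    # the induced subgraph is returned with vertices relabelled in a random
    # order so that the original vertex identities are discarded.
    n = len(adj)
    kept = [v for v in range(n) if random.random() < p]
    random.shuffle(kept)
    pos = {v: i for i, v in enumerate(kept)}
    edges = {(min(pos[u], pos[v]), max(pos[u], pos[v]))
             for u in kept for v in kept if u < v and adj[u][v]}
    return len(kept), edges

def sample_ordered(adj, p):
    # Ordered model: delete the same random set of rows and columns from the
    # adjacency matrix and return the resulting principal submatrix.
    n = len(adj)
    kept = [v for v in range(n) if random.random() < p]
    return [[adj[u][v] for v in kept] for u in kept]

# 4-cycle 0-1-2-3-0
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(sample_unordered(A, 0.5))
print(sample_ordered(A, 0.5))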

Cite as

Andrew McGregor and Rik Sengupta. Graph Reconstruction from Random Subgraphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 96:1-96:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{mcgregor_et_al:LIPIcs.ICALP.2022.96,
  author =	{McGregor, Andrew and Sengupta, Rik},
  title =	{{Graph Reconstruction from Random Subgraphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{96:1--96:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.96},
  URN =		{urn:nbn:de:0030-drops-164373},
  doi =		{10.4230/LIPIcs.ICALP.2022.96},
  annote =	{Keywords: graph reconstruction, sample complexity, deletion channel}
}
Document
Track A: Algorithms, Complexity and Games
The SDP Value of Random 2CSPs

Authors: Amulya Musipatla, Ryan O'Donnell, Tselil Schramm, and Xinyu Wu


Abstract
We consider a very wide class of models for sparse random Boolean 2CSPs; equivalently, degree-2 optimization problems over {±1}ⁿ. For each model ℳ, we identify the "high-probability value" s^*_ℳ of the natural SDP relaxation (equivalently, the quantum value). That is, for all ε > 0 we show that the SDP optimum of a random n-variable instance is (when normalized by n) in the range (s^*_ℳ-ε, s^*_ℳ+ε) with high probability. Our class of models includes non-regular CSPs, and ones where the SDP relaxation value is strictly smaller than the spectral relaxation value.
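
As background, the natural SDP relaxation of a degree-2 optimization problem over {±1}ⁿ replaces the rank-one matrix xxᵀ by a positive semidefinite matrix with unit diagonal. The Python sketch below is our own illustration; it assumes cvxpy with an SDP-capable solver is installed, handles only a homogeneous quadratic objective, and ignores the model-specific details of the paper. It computes this relaxed value for a symmetric matrix A, normalized by n as in the abstract.

import cvxpy as cp
import numpy as np

def sdp_value(A):
    # max <A, X> subject to X PSD and X_ii = 1: the standard relaxation of
    # max_{x in {-1,1}^n} x^T A x, with the optimum normalized by n.
    n = A.shape[0]
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), [cp.diag(X) == 1])
    prob.solve()
    return prob.value / n

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
print(sdp_value((B + B.T) / 2))   # a small random symmetric instance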

Cite as

Amulya Musipatla, Ryan O'Donnell, Tselil Schramm, and Xinyu Wu. The SDP Value of Random 2CSPs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 97:1-97:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{musipatla_et_al:LIPIcs.ICALP.2022.97,
  author =	{Musipatla, Amulya and O'Donnell, Ryan and Schramm, Tselil and Wu, Xinyu},
  title =	{{The SDP Value of Random 2CSPs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{97:1--97:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.97},
  URN =		{urn:nbn:de:0030-drops-164381},
  doi =		{10.4230/LIPIcs.ICALP.2022.97},
  annote =	{Keywords: Random constraint satisfaction problems}
}
Document
Track A: Algorithms, Complexity and Games
Strongly Sublinear Algorithms for Testing Pattern Freeness

Authors: Ilan Newman and Nithin Varma


Abstract
For a permutation π:[k] → [k], a function f:[n] → ℝ contains a π-appearance if there exists 1 ≤ i₁ < i₂ < … < i_k ≤ n such that for all s,t ∈ [k], f(i_s) < f(i_t) if and only if π(s) < π(t). The function is π-free if it has no π-appearances. In this paper, we investigate the problem of testing whether an input function f is π-free or whether f differs on at least ε n values from every π-free function. This is a generalization of the well-studied problem of monotonicity testing, and it was first studied by Newman, Rabinovich, Rajendraprasad and Sohler [Ilan Newman et al., 2019]. We show that for all constants k ∈ ℕ, ε ∈ (0,1), and permutation π:[k] → [k], there is a one-sided error ε-testing algorithm for π-freeness of functions f:[n] → ℝ that makes Õ(n^o(1)) queries. We improve significantly upon the previous best upper bound O(n^{1 - 1/(k-1)}) by Ben-Eliezer and Canonne [Omri Ben-Eliezer and Clément L. Canonne, 2018]. Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.
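
The definition of a π-appearance can be checked directly by brute force in O(n^k) time, which is of course far from the sublinear query complexity achieved in the paper; the Python sketch below is only meant to make the definition concrete.

from itertools import combinations

def has_pi_appearance(f, pi):
    # pi is a permutation of {1, ..., k} given as a tuple, e.g. (2, 1).
    # f contains a pi-appearance if there are indices i_1 < ... < i_k with
    # f(i_s) < f(i_t) exactly when pi(s) < pi(t).
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        vals = [f[i] for i in idx]
        if all((vals[s] < vals[t]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k) if s != t):
            return True
    return False

def is_pi_free(f, pi):
    return not has_pi_appearance(f, pi)

print(has_pi_appearance([1, 5, 2, 4, 3], (1, 3, 2)))   # True: e.g. values 1, 5, 2
print(is_pi_free([5, 4, 3, 2, 1], (1, 2)))             # True: strictly decreasing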

Cite as

Ilan Newman and Nithin Varma. Strongly Sublinear Algorithms for Testing Pattern Freeness. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 98:1-98:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{newman_et_al:LIPIcs.ICALP.2022.98,
  author =	{Newman, Ilan and Varma, Nithin},
  title =	{{Strongly Sublinear Algorithms for Testing Pattern Freeness}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{98:1--98:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.98},
  URN =		{urn:nbn:de:0030-drops-164390},
  doi =		{10.4230/LIPIcs.ICALP.2022.98},
  annote =	{Keywords: Property testing, Pattern freeness, Sublinear algorithms}
}
Document
Track A: Algorithms, Complexity and Games
An Optimal-Time RLBWT Construction in BWT-Runs Bounded Space

Authors: Takaaki Nishimoto, Shunsuke Kanda, and Yasuo Tabei


Abstract
The compression of highly repetitive strings (i.e., strings with many repetitions) has been a central research topic in string processing, and quite a few compression methods for these strings have been proposed thus far. Among them, a compression format attracting increasing attention is the run-length Burrows-Wheeler transform (RLBWT), the run-length encoding of the BWT, which is a reversible permutation of the input string determined by the lexicographical order of its suffixes. State-of-the-art construction algorithms for the RLBWT suffer from either (i) non-optimal computation time or (ii) a working space that grows linearly with the length of the input string. In this paper, we present r-comp, the first optimal-time construction algorithm for the RLBWT in BWT-runs bounded space. That is, the computational complexity of r-comp is O(n + r log r) time and O(r log n) bits of working space for the length n of an input string and the number r of equal-letter runs in the BWT. The computation time is optimal (i.e., O(n)) for strings with the property r = O(n/log n), which holds for most highly repetitive strings. Experiments using a real-world dataset of highly repetitive strings show the effectiveness of r-comp with respect to computation time and space.
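
To fix ideas, the following naive Python sketch builds the RLBWT of a string by sorting all rotations of the string extended with a unique end-marker (equivalent to sorting suffixes) and then run-length encoding the last column; it uses Θ(n² log n) time and Θ(n²) space and has nothing in common with r-comp beyond its output.

def rlbwt(s, end="\0"):
    # Append a unique end-marker, sort all rotations (equivalently, suffixes),
    # take the last column (the BWT), and run-length encode it.
    s += end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    bwt = "".join(rot[-1] for rot in rotations)
    runs = []                      # r = len(runs) is the number of BWT runs
    for c in bwt:
        if runs and runs[-1][0] == c:
            runs[-1] = (c, runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    return runs

print(rlbwt("abracadabraabracadabra"))   # a repetitive input tends to give few, long runs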

Cite as

Takaaki Nishimoto, Shunsuke Kanda, and Yasuo Tabei. An Optimal-Time RLBWT Construction in BWT-Runs Bounded Space. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 99:1-99:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{nishimoto_et_al:LIPIcs.ICALP.2022.99,
  author =	{Nishimoto, Takaaki and Kanda, Shunsuke and Tabei, Yasuo},
  title =	{{An Optimal-Time RLBWT Construction in BWT-Runs Bounded Space}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{99:1--99:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.99},
  URN =		{urn:nbn:de:0030-drops-164403},
  doi =		{10.4230/LIPIcs.ICALP.2022.99},
  annote =	{Keywords: lossless data compression, Burrows-Wheeler transform, highly repetitive text collections}
}
Document
Track A: Algorithms, Complexity and Games
Space Characterizations of Complexity Measures and Size-Space Trade-Offs in Propositional Proof Systems

Authors: Theodoros Papamakarios and Alexander Razborov


Abstract
We identify two new big clusters of proof complexity measures equivalent up to polynomial and log n factors. The first cluster contains, among others, the logarithm of tree-like resolution size, regularized (that is, multiplied by the logarithm of proof length) clause and monomial space, and clause space, both ordinary and regularized, in regular and tree-like resolution. As a consequence, separating clause or monomial space from the (logarithm of) tree-like resolution size is the same as showing a strong trade-off between clause or monomial space and proof length, and is the same as showing a super-critical trade-off between clause space and depth. The second cluster contains width, Σ₂ space (a generalization of clause space to depth 2 Frege systems), both ordinary and regularized, as well as the logarithm of tree-like size in the system R(log). As an application of some of these simulations, we improve a known size-space trade-off for polynomial calculus with resolution. In terms of lower bounds, we show a quadratic lower bound on tree-like resolution size for formulas refutable in clause space 4. We introduce on our way yet another proof complexity measure intermediate between depth and the logarithm of tree-like size that might be of independent interest.

Cite as

Theodoros Papamakarios and Alexander Razborov. Space Characterizations of Complexity Measures and Size-Space Trade-Offs in Propositional Proof Systems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 100:1-100:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{papamakarios_et_al:LIPIcs.ICALP.2022.100,
  author =	{Papamakarios, Theodoros and Razborov, Alexander},
  title =	{{Space Characterizations of Complexity Measures and Size-Space Trade-Offs in Propositional Proof Systems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{100:1--100:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.100},
  URN =		{urn:nbn:de:0030-drops-164419},
  doi =		{10.4230/LIPIcs.ICALP.2022.100},
  annote =	{Keywords: Proof Complexity, Resolution, Size-Space Trade-offs}
}
Document
Track A: Algorithms, Complexity and Games
Learning Algorithms Versus Automatability of Frege Systems

Authors: Ján Pich and Rahul Santhanam


Abstract
We connect learning algorithms and algorithms automating proof search in propositional proof systems: for every sufficiently strong, well-behaved propositional proof system P, we prove that the following statements are equivalent.
- Provable learning: P proves efficiently that p-size circuits are learnable by subexponential-size circuits over the uniform distribution with membership queries.
- Provable automatability: P proves efficiently that P is automatable by non-uniform circuits on propositional formulas expressing p-size circuit lower bounds.
Here, P is sufficiently strong and well-behaved if conditions I.-III. below hold:
I. P p-simulates Jeřábek’s system WF (which strengthens the Extended Frege system EF by a surjective weak pigeonhole principle);
II. P satisfies some basic properties of standard proof systems which p-simulate WF;
III. P proves efficiently, for some Boolean function h, that h is hard on average for circuits of subexponential size.
For example, if III. holds for P = WF, then Items 1 and 2 are equivalent for P = WF. The notion of automatability in Item 2 is slightly modified so that the automating algorithm outputs a proof of a given formula (expressing a p-size circuit lower bound) in p-time in the length of the shortest proof of a closely related but different formula (expressing an average-case subexponential-size circuit lower bound). If there is a function h ∈ NE ∩ coNE which is hard on average for circuits of size 2^{n/4}, for each sufficiently large n, then there is an explicit propositional proof system P satisfying properties I.-III., i.e. the equivalence of Items 1 and 2 holds for P.

Cite as

Ján Pich and Rahul Santhanam. Learning Algorithms Versus Automatability of Frege Systems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 101:1-101:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{pich_et_al:LIPIcs.ICALP.2022.101,
  author =	{Pich, J\'{a}n and Santhanam, Rahul},
  title =	{{Learning Algorithms Versus Automatability of Frege Systems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{101:1--101:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.101},
  URN =		{urn:nbn:de:0030-drops-164427},
  doi =		{10.4230/LIPIcs.ICALP.2022.101},
  annote =	{Keywords: learning algorithms, automatability, proof complexity}
}
Document
Track A: Algorithms, Complexity and Games
Algorithms and Data Structures for First-Order Logic with Connectivity Under Vertex Failures

Authors: Michał Pilipczuk, Nicole Schirrmacher, Sebastian Siebertz, Szymon Toruńczyk, and Alexandre Vigny


Abstract
We introduce a new data structure for answering connectivity queries in undirected graphs subject to batched vertex failures. Precisely, given any graph G and integer parameter k, we can in fixed-parameter time construct a data structure that can later be used to answer queries of the form: "are vertices s and t connected via a path that avoids vertices u₁,…, u_k?" in time 2^𝒪(k). In the terminology of the literature on data structures, this gives the first deterministic data structure for connectivity under vertex failures where for every fixed number of failures, all operations can be performed in constant time. With the aim of understanding the power and the limitations of our new techniques, we prove an algorithmic meta theorem for the recently introduced separator logic, which extends first-order logic with atoms for connectivity under vertex failures. We prove that the model-checking problem for separator logic is fixed-parameter tractable on every class of graphs that excludes a fixed topological minor. We also show a weak converse. This implies that from the point of view of parameterized complexity, under standard complexity-theoretic assumptions, the frontier of tractability of separator logic is almost exactly delimited by classes excluding a fixed topological minor. The backbone of our proof relies on a decomposition theorem of Cygan, Lokshtanov, Pilipczuk, Pilipczuk, and Saurabh [SICOMP '19], which provides a tree decomposition of a given graph into bags that are unbreakable. Crucially, unbreakability allows us to reduce separator logic to plain first-order logic within each bag individually. Guided by this observation, we design our model-checking algorithm using dynamic programming over the tree decomposition, where the transition at each bag amounts to running a suitable model-checking subprocedure for plain first-order logic. This approach is robust enough to also provide an extension to efficient enumeration of answers to a query expressed in separator logic.
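
As a point of reference only, the query semantics can be realized naively by one BFS per query in O(n + m) time with no preprocessing; the paper's contribution is a data structure that, after fixed-parameter preprocessing, answers the same queries in time 2^𝒪(k). The Python sketch below implements just the naive baseline.

from collections import deque

def connected_avoiding(adj, s, t, failures):
    # Answer the query "are s and t connected by a path avoiding `failures`?"
    # with a plain BFS in O(n + m) time per query.
    bad = set(failures)
    if s in bad or t in bad:
        return False
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in adj[v]:
            if w not in seen and w not in bad:
                seen.add(w)
                queue.append(w)
    return False

# Path graph 0-1-2-3: removing vertex 1 disconnects 0 from 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(connected_avoiding(adj, 0, 3, []))    # True
print(connected_avoiding(adj, 0, 3, [1]))   # False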

Cite as

Michał Pilipczuk, Nicole Schirrmacher, Sebastian Siebertz, Szymon Toruńczyk, and Alexandre Vigny. Algorithms and Data Structures for First-Order Logic with Connectivity Under Vertex Failures. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 102:1-102:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{pilipczuk_et_al:LIPIcs.ICALP.2022.102,
  author =	{Pilipczuk, Micha{\l} and Schirrmacher, Nicole and Siebertz, Sebastian and Toru\'{n}czyk, Szymon and Vigny, Alexandre},
  title =	{{Algorithms and Data Structures for First-Order Logic with Connectivity Under Vertex Failures}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{102:1--102:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.102},
  URN =		{urn:nbn:de:0030-drops-164432},
  doi =		{10.4230/LIPIcs.ICALP.2022.102},
  annote =	{Keywords: Combinatorics and graph theory, Computational applications of logic, Data structures, Fixed-parameter algorithms and complexity, Graph algorithms}
}
Document
Track A: Algorithms, Complexity and Games
A Perfect Sampler for Hypergraph Independent Sets

Authors: Guoliang Qiu, Yanheng Wang, and Chihao Zhang


Abstract
The problem of uniformly sampling hypergraph independent sets is revisited. We design an efficient perfect sampler for the problem under a condition similar to that of the asymmetric Lovász local lemma. When specialized to d-regular k-uniform hypergraphs on n vertices, our sampler terminates in expected O(n log n) time provided d ≤ c⋅ 2^{k/2}, where c > 0 is a constant, matching the rapid mixing condition for Glauber dynamics in Hermon, Sly and Zhang [Hermon et al., 2019]. The analysis of our algorithm is simple and clean.
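
To make the sampling problem concrete (our illustration, not the algorithm of the paper), and assuming the usual notion that an independent set of a hypergraph is a vertex set containing no hyperedge entirely: rejection sampling already gives a perfect, i.e. exactly uniform, sampler, but its expected running time is exponential in general. The point of the paper is to achieve exactness in expected O(n log n) time under the stated degree condition.

import random

def is_independent(S, hyperedges):
    # S is independent iff it does not fully contain any hyperedge.
    return not any(e <= S for e in hyperedges)

def uniform_independent_set(n, hyperedges, rng=random.Random(0)):
    # Perfect but slow baseline: draw uniform subsets of the vertex set and
    # reject until an independent set is found.  The output distribution is
    # exactly uniform over independent sets.
    while True:
        S = {v for v in range(n) if rng.random() < 0.5}
        if is_independent(S, hyperedges):
            return S

edges = [frozenset({0, 1, 2}), frozenset({2, 3, 4})]   # a tiny 3-uniform hypergraph
print(uniform_independent_set(5, edges))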

Cite as

Guoliang Qiu, Yanheng Wang, and Chihao Zhang. A Perfect Sampler for Hypergraph Independent Sets. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 103:1-103:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{qiu_et_al:LIPIcs.ICALP.2022.103,
  author =	{Qiu, Guoliang and Wang, Yanheng and Zhang, Chihao},
  title =	{{A Perfect Sampler for Hypergraph Independent Sets}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{103:1--103:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.103},
  URN =		{urn:nbn:de:0030-drops-164442},
  doi =		{10.4230/LIPIcs.ICALP.2022.103},
  annote =	{Keywords: Coupling from the past, Markov chains, Hypergraph independent sets}
}
Document
Track A: Algorithms, Complexity and Games
Threshold Rates of Code Ensembles: Linear Is Best

Authors: Nicolas Resch and Chen Yuan


Abstract
In this work, we prove new results concerning the combinatorial properties of random linear codes. By applying the thresholds framework from Mosheiff et al. (FOCS 2020), we derive fine-grained results concerning the list-decodability and -recoverability of random linear codes. Firstly, we prove a lower bound on the list size required for random linear codes over 𝔽_q ε-close to capacity to list-recover with error radius ρ and input lists of size 𝓁. We show that the list size L must be at least (log_q binom(q,𝓁) - R)/ε, where R is the rate of the random linear code. This is analogous to a lower bound for list-decoding that was recently obtained by Guruswami et al. (IEEE TIT 2021B). As a comparison, we also pin down the list size of random codes, which is log_q binom(q,𝓁)/ε. This result almost closes the O(q log L / L) gap left by Guruswami et al. (IEEE TIT 2021A). This leaves open the possibility (that we consider likely) that random linear codes perform better than random codes for list-recoverability, which is in contrast to a recent gap shown for the case of list-recovery from erasures (Guruswami et al., IEEE TIT 2021B). Next, we consider list-decoding with constant list sizes. Specifically, we obtain new lower bounds on the rate required for:
- list-of-3 decodability of random linear codes over 𝔽₂;
- list-of-2 decodability of random linear codes over 𝔽_q (for any q).
This expands upon Guruswami et al. (IEEE TIT 2021A), which only studied list-of-2 decodability of random linear codes over 𝔽₂. Further, in both cases we are able to show that the rate is larger than that which is possible for uniformly random codes. A conclusion that we draw from our work is that, for many combinatorial properties of interest, random linear codes actually perform better than uniformly random codes, in contrast to the apparently standard intuition that uniformly random codes are best.

Cite as

Nicolas Resch and Chen Yuan. Threshold Rates of Code Ensembles: Linear Is Best. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 104:1-104:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{resch_et_al:LIPIcs.ICALP.2022.104,
  author =	{Resch, Nicolas and Yuan, Chen},
  title =	{{Threshold Rates of Code Ensembles: Linear Is Best}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{104:1--104:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.104},
  URN =		{urn:nbn:de:0030-drops-164456},
  doi =		{10.4230/LIPIcs.ICALP.2022.104},
  annote =	{Keywords: Random Linear Codes, List-Decoding, List-Recovery, Threshold Rates}
}
Document
Track A: Algorithms, Complexity and Games
Explicit and Efficient Construction of Nearly Optimal Rate Codes for the Binary Deletion Channel and the Poisson Repeat Channel

Authors: Ittai Rubinstein


Abstract
Two of the most common models for channels with synchronisation errors are the Binary Deletion Channel with parameter p (BDC_p) - a channel where every bit of the codeword is deleted i.i.d. with probability p - and the Poisson Repeat Channel with parameter λ (PRC_λ) - a channel where every bit of the codeword is repeated Poisson(λ) times. Previous constructions based on synchronisation strings yielded codes with rates far lower than the capacities of these channels [Con and Shpilka, 2019; Guruswami and Li, 2018], and the only efficient construction achieving capacity on the BDC at the time of writing this paper is based on the far more advanced methods of polar codes [Tal et al., 2021]. In this work, we present a new method for concatenating synchronisation codes and use it to construct simple and efficient encoding and decoding algorithms for both channels with nearly optimal rates.
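
The two channel models are simple to simulate; the Python sketch below (our illustration, unrelated to the coding-theoretic constructions of the paper) applies BDC_p and PRC_λ to a word, drawing Poisson samples with Knuth's method so that only the standard library is needed.

import math
import random

def bdc(bits, p, rng=random.Random(0)):
    # Binary Deletion Channel BDC_p: every bit is deleted i.i.d. with probability p.
    return [b for b in bits if rng.random() >= p]

def prc(bits, lam, rng=random.Random(0)):
    # Poisson Repeat Channel PRC_lambda: every bit is repeated Poisson(lam) times.
    def poisson(l):
        # Knuth's method: multiply uniforms until the product drops below e^{-l}.
        k, prod, threshold = 0, 1.0, math.exp(-l)
        while True:
            prod *= rng.random()
            if prod <= threshold:
                return k
            k += 1
    out = []
    for b in bits:
        out.extend([b] * poisson(lam))
    return out

word = [1, 0, 1, 1, 0, 0, 1, 0]
print(bdc(word, p=0.3))
print(prc(word, lam=0.5))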

Cite as

Ittai Rubinstein. Explicit and Efficient Construction of Nearly Optimal Rate Codes for the Binary Deletion Channel and the Poisson Repeat Channel. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 105:1-105:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{rubinstein:LIPIcs.ICALP.2022.105,
  author =	{Rubinstein, Ittai},
  title =	{{Explicit and Efficient Construction of Nearly Optimal Rate Codes for the Binary Deletion Channel and the Poisson Repeat Channel}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{105:1--105:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.105},
  URN =		{urn:nbn:de:0030-drops-164466},
  doi =		{10.4230/LIPIcs.ICALP.2022.105},
  annote =	{Keywords: Error Correcting Codes, Algorithmic Coding Theory, Binary Deletion Channel}
}
Document
Track A: Algorithms, Complexity and Games
Maximizing Non-Monotone Submodular Functions over Small Subsets: Beyond 1/2-Approximation

Authors: Aviad Rubinstein and Junyao Zhao


Abstract
In this work we give two new algorithms that use similar techniques for (non-monotone) submodular function maximization subject to a cardinality constraint. The first is an offline fixed-parameter tractable algorithm that guarantees a 0.539-approximation for all non-negative submodular functions. The second algorithm works in the random-order streaming model. It guarantees a (1/2+c)-approximation for symmetric functions, and we complement it by showing that no space-efficient algorithm can beat 1/2 for asymmetric functions. To the best of our knowledge this is the first provable separation between symmetric and asymmetric submodular function maximization.
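
For readers who want the setting spelled out: a graph cut function is a standard example of a non-negative, non-monotone (indeed symmetric) submodular function, and the task is to maximize such a function over sets of size at most k. The Python sketch below is our own exponential-time enumeration baseline, not the paper's fixed-parameter or streaming algorithms; it only makes the optimization problem concrete.

from itertools import combinations

def cut_value(S, edges):
    # Cut function of a graph: a non-negative, non-monotone, symmetric
    # submodular function.
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def brute_force_max(f, ground, k):
    # Exact maximization of f subject to |S| <= k by enumeration (exponential).
    best, best_val = set(), f(set())
    for r in range(1, k + 1):
        for S in combinations(ground, r):
            v = f(set(S))
            if v > best_val:
                best, best_val = set(S), v
    return best, best_val

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
f = lambda S: cut_value(S, edges)
print(brute_force_max(f, range(4), k=2))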

Cite as

Aviad Rubinstein and Junyao Zhao. Maximizing Non-Monotone Submodular Functions over Small Subsets: Beyond 1/2-Approximation. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 106:1-106:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{rubinstein_et_al:LIPIcs.ICALP.2022.106,
  author =	{Rubinstein, Aviad and Zhao, Junyao},
  title =	{{Maximizing Non-Monotone Submodular Functions over Small Subsets: Beyond 1/2-Approximation}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{106:1--106:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.106},
  URN =		{urn:nbn:de:0030-drops-164478},
  doi =		{10.4230/LIPIcs.ICALP.2022.106},
  annote =	{Keywords: Submodular optimization, Fixed-parameter tractability, Random-order streaming}
}
Document
Track A: Algorithms, Complexity and Games
Approximate Triangle Counting via Sampling and Fast Matrix Multiplication

Authors: Jakub Tětek


Abstract
There is a simple O(n³/(ε²T)) time algorithm for (1±ε)-approximate triangle counting, where T is the number of triangles in the graph and n the number of vertices. At the same time, one may count triangles exactly using fast matrix multiplication in time Õ(n^ω). Is it possible to get a negative dependency on the number of triangles T while retaining the state-of-the-art n^ω dependency on n? We answer this question positively by providing an algorithm which runs in time O(n^ω/T^{ω-2})⋅poly(n^{o(1)}/ε). This is optimal in the sense that, as long as the exponent of T is independent of n and T, it cannot be improved while retaining the dependency on n. Our algorithm improves upon the state of the art when T ≫ 1 and T ≪ n. We also consider the problem of approximate triangle counting in sparse graphs, parameterized by the number of edges m. The best known algorithm runs in time Õ_ε(m^{3/2}/T) [Eden et al., SIAM Journal on Computing, 2017]; an algorithm by Alon et al. [JACM, 1995] counts triangles exactly in time Õ(m^{2ω/(ω + 1)}). We again get an algorithm whose complexity has a state-of-the-art dependency on m while having a negative dependency on T. Specifically, our algorithm runs in time O(m^{2ω/(ω+1)}/T^{2(ω-1)/(ω+1)}) ⋅ poly(n^{o(1)}/ε). This is again optimal in the sense that no better constant exponent of T is possible without worsening the dependency on m. This algorithm improves upon the state of the art when T ≫ 1 and T ≪ √m. In both cases, algorithms with time complexity matching query complexity lower bounds were known on some range of parameters. While those algorithms have optimal query complexity for the whole range of T, the time complexity departs from the query complexity and is no longer optimal (as we show) for T ≪ n and T ≪ √m, respectively. We focus on the time complexity in this range of T. To the best of our knowledge, this is the first paper considering the discrepancy between query and time complexity in graph parameter estimation.
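
The "simple algorithm" referred to in the first sentence can be sketched in a few lines (our illustration): sample uniform vertex triples and rescale the empirical triangle frequency; taking on the order of n³/(ε²T) samples yields a (1±ε)-approximation with good probability (T is unknown in advance, which is usually handled by a guess-and-double loop that we omit).

import random
from math import comb

def approx_triangles(adj, s, rng=random.Random(0)):
    # Sample s uniform vertex triples and scale up the hit rate.
    # With s = Theta(n^3 / (eps^2 * T)) this concentrates to (1 +- eps) * T.
    n = len(adj)
    hits = 0
    for _ in range(s):
        u, v, w = rng.sample(range(n), 3)
        if adj[u][v] and adj[v][w] and adj[u][w]:
            hits += 1
    return hits / s * comb(n, 3)

# K4 has exactly 4 triangles.
adj = [[i != j for j in range(4)] for i in range(4)]
print(approx_triangles(adj, 5000))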

Cite as

Jakub Tětek. Approximate Triangle Counting via Sampling and Fast Matrix Multiplication. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 107:1-107:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{tetek:LIPIcs.ICALP.2022.107,
  author =	{T\v{e}tek, Jakub},
  title =	{{Approximate Triangle Counting via Sampling and Fast Matrix Multiplication}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{107:1--107:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.107},
  URN =		{urn:nbn:de:0030-drops-164485},
  doi =		{10.4230/LIPIcs.ICALP.2022.107},
  annote =	{Keywords: Approximate triangle counting, Fast matrix multiplication, Sampling}
}
Document
Track A: Algorithms, Complexity and Games
Polynomial-Time Approximation of Zero-Free Partition Functions

Authors: Penghui Yao, Yitong Yin, and Xinyuan Zhang


Abstract
Zero-free based algorithms are a major technique for deterministic approximate counting. In Barvinok’s original framework [Barvinok, 2017], by calculating truncated Taylor expansions, a quasi-polynomial time algorithm was given for estimating zero-free partition functions. Patel and Regts [Patel and Regts, 2017] later gave a refinement of Barvinok’s framework, which gave a polynomial-time algorithm for a class of zero-free graph polynomials that can be expressed as counting induced subgraphs in bounded-degree graphs. In this paper, we give a polynomial-time algorithm for estimating classical and quantum partition functions specified by local Hamiltonians with bounded maximum degree, assuming a zero-free property for the temperature. Consequently, when the inverse temperature is close enough to zero by a constant gap, we have a polynomial-time approximation algorithm for all such partition functions. Our result is based on a new abstract framework that extends and generalizes the approach of Patel and Regts.

Cite as

Penghui Yao, Yitong Yin, and Xinyuan Zhang. Polynomial-Time Approximation of Zero-Free Partition Functions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 108:1-108:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{yao_et_al:LIPIcs.ICALP.2022.108,
  author =	{Yao, Penghui and Yin, Yitong and Zhang, Xinyuan},
  title =	{{Polynomial-Time Approximation of Zero-Free Partition Functions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{108:1--108:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.108},
  URN =		{urn:nbn:de:0030-drops-164494},
  doi =		{10.4230/LIPIcs.ICALP.2022.108},
  annote =	{Keywords: partition function, zero-freeness, local Hamiltonian}
}
Document
Track A: Algorithms, Complexity and Games
Faster Cut-Equivalent Trees in Simple Graphs

Authors: Tianyi Zhang


Abstract
Let G = (V, E) be an undirected connected simple graph on n vertices. A cut-equivalent tree of G is an edge-weighted tree on the same vertex set V, such that for any pair of vertices s, t ∈ V, the minimum (s, t)-cut in the tree is also a minimum (s, t)-cut in G, and these two cuts have the same cut value. In a recent paper [Abboud, Krauthgamer and Trabelsi, STOC 2021], the authors propose the first subcubic time algorithm for constructing a cut-equivalent tree. More specifically, their algorithm has Õ(n^{2.5}) running time. Later on, this running time was significantly improved to n^{2+o(1)} by two independent works [Abboud, Krauthgamer and Trabelsi, FOCS 2021] and [Li, Panigrahi, Saranurak, FOCS 2021], and then to (m+n^{1.9})^{1+o(1)} by [Abboud, Krauthgamer and Trabelsi, SODA 2022]. In this paper, we improve the running time to Õ(n²) if near-linear time max-flow algorithms exist, or to Õ(n^{17/8}) using the currently fastest max-flow algorithm. Although our algorithm is slower than previous works, the runtime bound becomes better by a sub-polynomial factor in dense simple graphs when assuming near-linear time max-flow algorithms.
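
To illustrate how a cut-equivalent tree is used (not how it is constructed): once the tree is available, the minimum (s, t)-cut value of G equals the smallest edge weight on the unique s-t path in the tree, so each query reduces to a bottleneck-path computation, as in the Python sketch below (ours, on a made-up example tree).

def tree_min_cut(tree_adj, s, t):
    # In a cut-equivalent tree, the minimum (s, t)-cut value of the original
    # graph is the smallest edge weight on the unique s-t path in the tree.
    def dfs(v, parent, bottleneck):
        if v == t:
            return bottleneck
        for w, weight in tree_adj[v]:
            if w != parent:
                res = dfs(w, v, min(bottleneck, weight))
                if res is not None:
                    return res
        return None

    return dfs(s, None, float("inf"))

# A small weighted tree on vertices 0..3 (edge weights are cut values):
#   0 --5-- 1 --2-- 2 --7-- 3
tree = {0: [(1, 5)], 1: [(0, 5), (2, 2)], 2: [(1, 2), (3, 7)], 3: [(2, 7)]}
print(tree_min_cut(tree, 0, 3))   # 2: the bottleneck on the path 0-1-2-3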

Cite as

Tianyi Zhang. Faster Cut-Equivalent Trees in Simple Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 109:1-109:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{zhang:LIPIcs.ICALP.2022.109,
  author =	{Zhang, Tianyi},
  title =	{{Faster Cut-Equivalent Trees in Simple Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{109:1--109:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.109},
  URN =		{urn:nbn:de:0030-drops-164507},
  doi =		{10.4230/LIPIcs.ICALP.2022.109},
  annote =	{Keywords: graph algorithms, minimum cuts, max-flow}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Universal Complexity Bounds Based on Value Iteration and Application to Entropy Games

Authors: Xavier Allamigeon, Stéphane Gaubert, Ricardo D. Katz, and Mateusz Skomra


Abstract
We develop value iteration-based algorithms to solve in a unified manner different classes of combinatorial zero-sum games with mean-payoff type rewards. These algorithms rely on an oracle that evaluates the dynamic programming operator up to a given precision. We show that the number of calls to the oracle needed to determine exact optimal (positional) strategies is, up to a factor polynomial in the dimension, of order R/sep, where the "separation" sep is defined as the minimal difference between distinct values arising from strategies, and R is a metric estimate involving the norm of approximate sub- and super-eigenvectors of the dynamic programming operator. We illustrate this method by two applications. The first one is a new proof, leading to improved complexity estimates, of a theorem of Boros, Elbassioni, Gurvich and Makino, showing that turn-based mean-payoff games with a fixed number of random positions can be solved in pseudo-polynomial time. The second one concerns entropy games, a model introduced by Asarin, Cervelle, Degorre, Dima, Horn and Kozyakin. The rank of an entropy game is defined as the maximal rank among all the ambiguity matrices determined by strategies of the two players. We show that entropy games with a fixed rank, in their original formulation, can be solved in polynomial time, and that an extension of entropy games incorporating weights can be solved in pseudo-polynomial time under the same fixed rank condition.

Cite as

Xavier Allamigeon, Stéphane Gaubert, Ricardo D. Katz, and Mateusz Skomra. Universal Complexity Bounds Based on Value Iteration and Application to Entropy Games. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 110:1-110:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{allamigeon_et_al:LIPIcs.ICALP.2022.110,
  author =	{Allamigeon, Xavier and Gaubert, St\'{e}phane and Katz, Ricardo D. and Skomra, Mateusz},
  title =	{{Universal Complexity Bounds Based on Value Iteration and Application to Entropy Games}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{110:1--110:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.110},
  URN =		{urn:nbn:de:0030-drops-164511},
  doi =		{10.4230/LIPIcs.ICALP.2022.110},
  annote =	{Keywords: Mean-payoff games, entropy games, value iteration, Perron root, separation bounds, parameterized complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Computability of Finite Simplicial Complexes

Authors: Djamel Eddine Amir and Mathieu Hoyrup


Abstract
The topological properties of a set have a strong impact on its computability properties. A striking illustration of this idea is given by spheres and closed manifolds: if a set X is homeomorphic to a sphere or a closed manifold, then any algorithm that semicomputes X in some sense can be converted into an algorithm that fully computes X. In other words, the topological properties of X enable one to derive full information about X from partial information about X. In that case, we say that X has computable type. Those results have been obtained by Miller, Iljazović, Sušić and others in recent years. A similar notion of computable type was also defined for pairs (X,A) in order to cover more spaces, such as compact manifolds with boundary and finite graphs with endpoints. We investigate the higher-dimensional analog of graphs, namely the pairs (X,A) where X is a finite simplicial complex and A is a subcomplex of X. We give two topological characterizations of the pairs having computable type. The first one uses a global property of the pair, that we call the ε-surjection property. The second one uses a local property of neighborhoods of vertices, called the surjection property. We give a further characterization for 2-dimensional simplicial complexes, by identifying which local neighborhoods have the surjection property. Using these characterizations, we give non-trivial applications to two famous sets: we prove that the dunce hat does not have computable type whereas Bing’s house does. Important concepts from topology, such as absolute neighborhood retracts and topological cones, play a key role in our proofs.

Cite as

Djamel Eddine Amir and Mathieu Hoyrup. Computability of Finite Simplicial Complexes. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 111:1-111:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{amir_et_al:LIPIcs.ICALP.2022.111,
  author =	{Amir, Djamel Eddine and Hoyrup, Mathieu},
  title =	{{Computability of Finite Simplicial Complexes}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{111:1--111:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.111},
  URN =		{urn:nbn:de:0030-drops-164522},
  doi =		{10.4230/LIPIcs.ICALP.2022.111},
  annote =	{Keywords: Computable Type, Simplicial Complex, Surjection Property, Topological Cone, Absolute Neighborhood Retract, Dunce Hat, Bing’s House}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Unboundedness for Recursion Schemes: A Simpler Type System

Authors: David Barozzini, Paweł Parys, and Jan Wróblewski


Abstract
Decidability of the problems of unboundedness and simultaneous unboundedness (a.k.a. the diagonal problem) for higher-order recursion schemes was established by Clemente, Parys, Salvati, and Walukiewicz (2016). Then a procedure of optimal complexity was presented by Parys (2017); this procedure used a complicated type system, involving multiple flags and markers. We present here a simpler and much more intuitive type system serving the same purpose. We prove that this type system allows us to solve the unboundedness problem for a widely considered subclass of recursion schemes, called safe schemes. For unsafe recursion schemes we only have soundness of the type system: if one can establish a type derivation claiming that a recursion scheme is unbounded, then it is indeed unbounded. Completeness of the type system for unsafe recursion schemes is left as an open question. Going further, we discuss an extension of the type system that allows us to handle the simultaneous unboundedness problem. We also design and implement an algorithm that fully automatically checks unboundedness of a given recursion scheme, completing in a short time for a wide variety of inputs.

Cite as

David Barozzini, Paweł Parys, and Jan Wróblewski. Unboundedness for Recursion Schemes: A Simpler Type System. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 112:1-112:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{barozzini_et_al:LIPIcs.ICALP.2022.112,
  author =	{Barozzini, David and Parys, Pawe{\l} and Wr\'{o}blewski, Jan},
  title =	{{Unboundedness for Recursion Schemes: A Simpler Type System}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{112:1--112:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.112},
  URN =		{urn:nbn:de:0030-drops-164533},
  doi =		{10.4230/LIPIcs.ICALP.2022.112},
  annote =	{Keywords: Higher-order recursion schemes, boundedness, intersection types, safe schemes}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Parameterized Safety Verification of Round-Based Shared-Memory Systems

Authors: Nathalie Bertrand, Nicolas Markey, Ocan Sankur, and Nicolas Waldburger


Abstract
We consider the parameterized verification problem for distributed algorithms where the goal is to develop techniques to prove the correctness of a given algorithm regardless of the number of participating processes. Motivated by an asynchronous binary consensus algorithm [James Aspnes, 2002], we consider round-based distributed algorithms communicating with shared memory. A particular challenge in these systems is that 1) the number of processes is unbounded, and, more importantly, 2) there is a fresh set of registers at each round. A verification algorithm thus needs to manage both sources of infinity. In this setting, we prove that the safety verification problem, which consists in deciding whether all possible executions avoid a given error state, is PSPACE-complete. For negative instances of the safety verification problem, we also provide exponential lower and upper bounds on the minimal number of processes needed for an error execution and on the minimal round on which the error state can be covered.

Cite as

Nathalie Bertrand, Nicolas Markey, Ocan Sankur, and Nicolas Waldburger. Parameterized Safety Verification of Round-Based Shared-Memory Systems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 113:1-113:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bertrand_et_al:LIPIcs.ICALP.2022.113,
  author =	{Bertrand, Nathalie and Markey, Nicolas and Sankur, Ocan and Waldburger, Nicolas},
  title =	{{Parameterized Safety Verification of Round-Based Shared-Memory Systems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{113:1--113:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.113},
  URN =		{urn:nbn:de:0030-drops-164541},
  doi =		{10.4230/LIPIcs.ICALP.2022.113},
  annote =	{Keywords: Verification, Parameterized models, Distributed algorithms}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Passive Learning of Deterministic Büchi Automata by Combinations of DFAs

Authors: León Bohn and Christof Löding


Abstract
We present an algorithm that constructs a deterministic Büchi automaton (DBA) in polynomial time from given sets of positive and negative example words. This learner constructs multiple DFAs by running, as a black box, a polynomial-time active learning algorithm on finite words with an oracle that we implement based on the given sample of ω-words, and then combines these DFAs into a single DBA. We prove that the resulting algorithm can learn a DBA for each DBA-recognizable language in the limit, by providing a characteristic sample for each DBA-recognizable language. We can only guarantee completeness of our algorithm for the full class of DBAs through characteristic samples that are, in general, exponential in the size of a minimal DBA for the target language. However, we show that for each fixed k these characteristic samples are of polynomial size for the class of DBAs in which each subset of pairwise language-equivalent states has size at most k.

Cite as

León Bohn and Christof Löding. Passive Learning of Deterministic Büchi Automata by Combinations of DFAs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 114:1-114:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bohn_et_al:LIPIcs.ICALP.2022.114,
  author =	{Bohn, Le\'{o}n and L\"{o}ding, Christof},
  title =	{{Passive Learning of Deterministic B\"{u}chi Automata by Combinations of DFAs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{114:1--114:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.114},
  URN =		{urn:nbn:de:0030-drops-164553},
  doi =		{10.4230/LIPIcs.ICALP.2022.114},
  annote =	{Keywords: deterministic B\"{u}chi automata, learning from examples, learning in the limit, active learning}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Strategy Synthesis for Global Window PCTL

Authors: Benjamin Bordais, Damien Busatto-Gaston, Shibashis Guha, and Jean-François Raskin


Abstract
Given a Markov decision process (MDP) M and a formula Φ, the strategy synthesis problem asks if there exists a strategy σ s.t. the resulting Markov chain M[σ] satisfies Φ. This problem is known to be undecidable for the probabilistic temporal logic PCTL. We study a class of formulae that can be seen as a fragment of PCTL where a local, bounded horizon property is enforced all along an execution. Moreover, we allow for linear expressions in the probabilistic inequalities. This logic is at the frontier of decidability, depending on the type of strategies considered. In particular, strategy synthesis is decidable when strategies are deterministic while the general problem is undecidable.

Cite as

Benjamin Bordais, Damien Busatto-Gaston, Shibashis Guha, and Jean-François Raskin. Strategy Synthesis for Global Window PCTL. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 115:1-115:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bordais_et_al:LIPIcs.ICALP.2022.115,
  author =	{Bordais, Benjamin and Busatto-Gaston, Damien and Guha, Shibashis and Raskin, Jean-Fran\c{c}ois},
  title =	{{Strategy Synthesis for Global Window PCTL}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{115:1--115:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.115},
  URN =		{urn:nbn:de:0030-drops-164562},
  doi =		{10.4230/LIPIcs.ICALP.2022.115},
  annote =	{Keywords: Markov decision processes, synthesis, PCTL}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Complexity of SPEs in Mean-Payoff Games

Authors: Léonard Brice, Jean-François Raskin, and Marie van den Bogaard


Abstract
We establish that the subgame perfect equilibrium (SPE) threshold problem for mean-payoff games is NP-complete. While the SPE threshold problem was recently shown to be decidable (in doubly exponential time) and NP-hard, its exact worst-case complexity was left open.

Cite as

Léonard Brice, Jean-François Raskin, and Marie van den Bogaard. The Complexity of SPEs in Mean-Payoff Games. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 116:1-116:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{brice_et_al:LIPIcs.ICALP.2022.116,
  author =	{Brice, L\'{e}onard and Raskin, Jean-Fran\c{c}ois and van den Bogaard, Marie},
  title =	{{The Complexity of SPEs in Mean-Payoff Games}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{116:1--116:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.116},
  URN =		{urn:nbn:de:0030-drops-164574},
  doi =		{10.4230/LIPIcs.ICALP.2022.116},
  annote =	{Keywords: Games on graphs, subgame-perfect equilibria, mean-payoff objectives}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On the Size of Good-For-Games Rabin Automata and Its Link with the Memory in Muller Games

Authors: Antonio Casares, Thomas Colcombet, and Karoliina Lehtinen


Abstract
In this paper, we look at good-for-games Rabin automata that recognise a Muller language (a language that is entirely characterised by the set of letters that appear infinitely often in each word). We establish that minimal such automata are exactly of the same size as the minimal memory required for winning Muller games that have this language as their winning condition. We show how to effectively construct such minimal automata. Finally, we establish that these automata can be exponentially more succinct than equivalent deterministic ones, thus proving as a consequence that chromatic memory for winning a Muller game can be exponentially larger than unconstrained memory.
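For concreteness, here is a hedged example of a Muller language in the sense used above; the alphabet and condition are chosen by us for illustration and do not come from the paper. Over the alphabet {a,b}, consider

  L = { w ∈ {a,b}^ω : Inf(w) = {a,b} },

where Inf(w) denotes the set of letters occurring infinitely often in w. Membership in L depends only on Inf(w), so L is a Muller language and can serve as the winning condition of a Muller game.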

Cite as

Antonio Casares, Thomas Colcombet, and Karoliina Lehtinen. On the Size of Good-For-Games Rabin Automata and Its Link with the Memory in Muller Games. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 117:1-117:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{casares_et_al:LIPIcs.ICALP.2022.117,
  author =	{Casares, Antonio and Colcombet, Thomas and Lehtinen, Karoliina},
  title =	{{On the Size of Good-For-Games Rabin Automata and Its Link with the Memory in Muller Games}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{117:1--117:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.117},
  URN =		{urn:nbn:de:0030-drops-164580},
  doi =		{10.4230/LIPIcs.ICALP.2022.117},
  annote =	{Keywords: Infinite duration games, Muller games, Rabin conditions, omega-regular languages, memory in games, good-for-games automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Dynamic Meta-Theorems for Distance and Matching

Authors: Samir Datta, Chetan Gupta, Rahul Jain, Anish Mukherjee, Vimal Raj Sharma, and Raghunath Tewari


Abstract
Reachability, distance, and matching are some of the most fundamental graph problems that have been of particular interest in dynamic complexity theory in recent years [Samir Datta et al., 2018; Samir Datta et al., 2018; Samir Datta et al., 2020]. Reachability can be maintained with first-order update formulas, or equivalently in DynFO in general graphs with n nodes [Samir Datta et al., 2018], even under O(log(n)/log log(n)) changes per step [Samir Datta et al., 2018]. Regarding how large a number of changes can be handled, it has recently been shown [Samir Datta et al., 2020] that under a polylogarithmic number of changes, reachability is in DynFOpar in planar, bounded treewidth, and related graph classes - in fact in any graph where small non-zero circulation weights can be computed in NC. We continue this line of investigation and extend the meta-theorem for reachability to distance and bipartite maximum matching with the same bounds. These are amongst the most general classes of graphs known where we can maintain these problems deterministically without using a majority quantifier and even maintain witnesses. For the bipartite matching result, modifying the approach from [Stephen A. Fenner et al., 2016], we convert the static non-zero circulation weights to dynamic matching-isolating weights. While reachability is in DynFOar under O(log(n)/log log(n)) changes, no such bound is known for either distance or matching in any non-trivial class of graphs under non-constant changes. We show that, in the same classes of graphs as before, bipartite maximum matching is in DynFOar under O(log(n)/log log(n)) changes per step. En route to showing this, we prove that the rank of a matrix can be maintained in DynFOar, also under O(log(n)/log log(n)) entry changes, improving upon the previous O(1) bound [Samir Datta et al., 2018]. This implies a similar extension for the non-uniform DynFO bound for maximum matching in general graphs and an alternate algorithm for maintaining reachability under O(log(n)/log log(n)) changes [Samir Datta et al., 2018].

Cite as

Samir Datta, Chetan Gupta, Rahul Jain, Anish Mukherjee, Vimal Raj Sharma, and Raghunath Tewari. Dynamic Meta-Theorems for Distance and Matching. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 118:1-118:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{datta_et_al:LIPIcs.ICALP.2022.118,
  author =	{Datta, Samir and Gupta, Chetan and Jain, Rahul and Mukherjee, Anish and Sharma, Vimal Raj and Tewari, Raghunath},
  title =	{{Dynamic Meta-Theorems for Distance and Matching}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{118:1--118:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.118},
  URN =		{urn:nbn:de:0030-drops-164598},
  doi =		{10.4230/LIPIcs.ICALP.2022.118},
  annote =	{Keywords: Dynamic Complexity, Distance, Matching, Derandomization, Isolation, Matrix Rank}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Circuit Extraction for ZX-Diagrams Can Be #P-Hard

Authors: Niel de Beaudrap, Aleks Kissinger, and John van de Wetering


Abstract
The ZX-calculus is a graphical language for reasoning about quantum computation using ZX-diagrams, a certain flexible generalisation of quantum circuits that can be used to represent linear maps from m to n qubits for any m,n ≥ 0. Some applications for the ZX-calculus, such as quantum circuit optimisation and synthesis, rely on being able to efficiently translate a ZX-diagram back into a quantum circuit of comparable size. While several sufficient conditions are known for describing families of ZX-diagrams that can be efficiently transformed back into circuits, it has previously been conjectured that the general problem of circuit extraction is hard. That is, that it should not be possible to efficiently convert an arbitrary ZX-diagram describing a unitary linear map into an equivalent quantum circuit. In this paper we prove this conjecture by showing that the circuit extraction problem is #P-hard, and so is itself at least as hard as strong simulation of quantum circuits. In addition to our main hardness result, which relies specifically on the circuit representation, we give a representation-agnostic hardness result. Namely, we show that any oracle that takes as input a ZX-diagram description of a unitary and produces samples of the output of the associated quantum computation enables efficient probabilistic solutions to NP-complete problems.

Cite as

Niel de Beaudrap, Aleks Kissinger, and John van de Wetering. Circuit Extraction for ZX-Diagrams Can Be #P-Hard. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 119:1-119:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{debeaudrap_et_al:LIPIcs.ICALP.2022.119,
  author =	{de Beaudrap, Niel and Kissinger, Aleks and van de Wetering, John},
  title =	{{Circuit Extraction for ZX-Diagrams Can Be #P-Hard}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{119:1--119:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.119},
  URN =		{urn:nbn:de:0030-drops-164601},
  doi =		{10.4230/LIPIcs.ICALP.2022.119},
  annote =	{Keywords: ZX-calculus, circuit extraction, quantum circuits, #P}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Hiding Pebbles When the Output Alphabet Is Unary

Authors: Gaëtan Douéneau-Tabot


Abstract
Pebble transducers are nested two-way transducers which can drop marks (named "pebbles") on their input word. Blind transducers were introduced by Nguyên et al. as a subclass of pebble transducers, which can nest two-way transducers but cannot drop pebbles on their input. In this paper, we study the classes of functions computed by pebble and blind transducers when the output alphabet is unary. Our main result shows how to decide whether a function computed by a pebble transducer can be computed by a blind transducer. We also provide characterizations of these classes in terms of Cauchy and Hadamard products, in the spirit of rational series. Furthermore, pumping-like characterizations of the functions computed by blind transducers are given.
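As a hedged reminder of the two products mentioned above (these are the standard definitions from the theory of rational series; the notation is ours, not the paper's): for functions f, g from words to values in a commutative semiring,

  (f ⋅ g)(w) = \sum_{w = uv} f(u) g(v)   (Cauchy product),
  (f ⊙ g)(w) = f(w) g(w)                 (Hadamard product).

With a unary output alphabet, outputs can be identified with natural numbers, so both products make sense for the function classes considered in the paper.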

Cite as

Gaëtan Douéneau-Tabot. Hiding Pebbles When the Output Alphabet Is Unary. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 120:1-120:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{doueneautabot:LIPIcs.ICALP.2022.120,
  author =	{Dou\'{e}neau-Tabot, Ga\"{e}tan},
  title =	{{Hiding Pebbles When the Output Alphabet Is Unary}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{120:1--120:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.120},
  URN =		{urn:nbn:de:0030-drops-164613},
  doi =		{10.4230/LIPIcs.ICALP.2022.120},
  annote =	{Keywords: polyregular functions, pebble transducers, rational series, factorization forests, Cauchy product, Hadamard product}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Regular Expressions for Tree-Width 2 Graphs

Authors: Amina Doumane


Abstract
We propose a syntax of regular expressions which describes languages of tree-width 2 graphs. We show that these languages correspond exactly to the languages of tree-width 2 graphs definable in counting monadic second-order logic (CMSO).

Cite as

Amina Doumane. Regular Expressions for Tree-Width 2 Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 121:1-121:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{doumane:LIPIcs.ICALP.2022.121,
  author =	{Doumane, Amina},
  title =	{{Regular Expressions for Tree-Width 2 Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{121:1--121:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.121},
  URN =		{urn:nbn:de:0030-drops-164627},
  doi =		{10.4230/LIPIcs.ICALP.2022.121},
  annote =	{Keywords: Tree width, MSO, Regular expressions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
A Generic Solution to Register-Bounded Synthesis with an Application to Discrete Orders

Authors: Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov


Abstract
We study synthesis of reactive systems interacting with environments using an infinite data domain. A popular formalism for specifying and modelling such systems is register automata and transducers. They extend finite-state automata by adding registers to store data values and to compare the incoming data values against stored ones. Synthesis from nondeterministic or universal register automata is undecidable in general. However, its register-bounded variant, where additionally a bound on the number of registers in a sought transducer is given, is known to be decidable for universal register automata which can compare data for equality, i.e., for data domain (ℕ, =). This paper extends the decidability border to the domain (ℕ, <) of natural numbers with linear order. Our solution is generic: we define a sufficient condition on data domains (regular approximability) for decidability of register-bounded synthesis. The condition is satisfied by natural data domains like (ℕ, <). It allows one to use simple language-theoretic arguments and avoid technical game-theoretic reasoning. Further, by defining a generic notion of reducibility between data domains, we show the decidability of synthesis in the domain (ℕ^d, <^d) of tuples of numbers equipped with the component-wise partial order and in the domain (Σ^*,≺) of finite strings with the prefix relation.

Cite as

Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov. A Generic Solution to Register-Bounded Synthesis with an Application to Discrete Orders. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 122:1-122:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{exibard_et_al:LIPIcs.ICALP.2022.122,
  author =	{Exibard, L\'{e}o and Filiot, Emmanuel and Khalimov, Ayrat},
  title =	{{A Generic Solution to Register-Bounded Synthesis with an Application to Discrete Orders}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{122:1--122:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.122},
  URN =		{urn:nbn:de:0030-drops-164634},
  doi =		{10.4230/LIPIcs.ICALP.2022.122},
  annote =	{Keywords: Synthesis, Register Automata, Transducers, Ordered Data Domains}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Twin-Width and Types

Authors: Jakub Gajarský, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk


Abstract
We study problems connected to first-order logic in graphs of bounded twin-width. Inspired by the approach of Bonnet et al. [FOCS 2020], we introduce a robust methodology of local types and describe their behavior in contraction sequences - the decomposition notion underlying twin-width. We showcase the applicability of the methodology by proving the following two algorithmic results. In both statements, we fix a first-order formula φ(x_1,…,x_k) and a constant d, and we assume that on input we are given a graph G together with a contraction sequence of width at most d. - One can in time 𝒪(n) construct a data structure that can answer the following queries in time 𝒪(log log n): given w_1,…,w_k, decide whether φ(w_1,…,w_k) holds in G. - After 𝒪(n)-time preprocessing, one can enumerate all tuples w₁,…,w_k that satisfy φ(x_1,…,x_k) in G with 𝒪(1) delay. In the first case, the query time can be reduced to 𝒪(1/ε) at the expense of increasing the construction time to 𝒪(n^{1+ε}), for any fixed ε > 0. Finally, we also apply our tools to prove the following statement, which shows optimal bounds on the VC density of set systems that are first-order definable in graphs of bounded twin-width. - Let G be a graph of twin-width d, A be a subset of vertices of G, and φ(x_1,…,x_k,y_1,…,y_l) be a first-order formula. Then the number of different subsets of A^k definable by φ using l-tuples of vertices from G as parameters, is bounded by O(|A|^l).

Cite as

Jakub Gajarský, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk. Twin-Width and Types. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 123:1-123:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{gajarsky_et_al:LIPIcs.ICALP.2022.123,
  author =	{Gajarsk\'{y}, Jakub and Pilipczuk, Micha{\l} and Przybyszewski, Wojciech and Toru\'{n}czyk, Szymon},
  title =	{{Twin-Width and Types}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{123:1--123:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.123},
  URN =		{urn:nbn:de:0030-drops-164640},
  doi =		{10.4230/LIPIcs.ICALP.2022.123},
  annote =	{Keywords: twin-width, FO logic, model checking, query answering, enumeration}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Reachability in Bidirected Pushdown VASS

Authors: Moses Ganardi, Rupak Majumdar, Andreas Pavlogiannis, Lia Schütze, and Georg Zetzsche


Abstract
A pushdown vector addition system with states (PVASS) extends the model of vector addition systems with a pushdown store. A PVASS is said to be bidirected if every transition (pushing/popping a symbol or modifying a counter) has an accompanying opposite transition that reverses the effect. Bidirectedness arises naturally in many models; it can also be seen as an overapproximation of reachability. We show that the reachability problem for bidirected PVASS is decidable in Ackermann time and primitive recursive for any fixed dimension. For the special case of one-dimensional bidirected PVASS, we show reachability is in PSPACE, and in fact in polynomial time if the stack is polynomially bounded. Our results are in contrast to the directed setting, where decidability of reachability is a long-standing open problem already for one-dimensional PVASS, and there is a PSPACE lower bound already for one-dimensional PVASS with bounded stack. The reachability relation in the bidirected (stateless) case is a congruence over ℕ^d. Our upper bounds exploit saturation techniques over congruences. In particular, we show novel elementary-time constructions of semilinear representations of congruences generated by finitely many vector pairs. In the case of one-dimensional PVASS, we employ a saturation procedure over bounded-size counters. We complement our upper bound with a TOWER-hardness result for arbitrary dimension and k-EXPSPACE hardness in dimension 2k+6 using a technique by Lazić and Totzke to implement iterative exponentiations.
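A minimal sketch of the bidirectedness requirement, in notation of our own choosing: for every push transition there is a matching reverse transition,

  p --(push γ, +v)--> q      implies      q --(pop γ, -v)--> p,

and symmetrically for pop transitions and pure counter updates, so every step of a run can be undone.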

Cite as

Moses Ganardi, Rupak Majumdar, Andreas Pavlogiannis, Lia Schütze, and Georg Zetzsche. Reachability in Bidirected Pushdown VASS. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 124:1-124:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{ganardi_et_al:LIPIcs.ICALP.2022.124,
  author =	{Ganardi, Moses and Majumdar, Rupak and Pavlogiannis, Andreas and Sch\"{u}tze, Lia and Zetzsche, Georg},
  title =	{{Reachability in Bidirected Pushdown VASS}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{124:1--124:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.124},
  URN =		{urn:nbn:de:0030-drops-164651},
  doi =		{10.4230/LIPIcs.ICALP.2022.124},
  annote =	{Keywords: Vector addition systems, Pushdown, Reachability, Decidability, Complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Distributed Controller Synthesis for Deadlock Avoidance

Authors: Hugo Gimbert, Corto Mascle, Anca Muscholl, and Igor Walukiewicz


Abstract
We consider the distributed control synthesis problem for systems with locks. The goal is to find local controllers so that the global system does not deadlock. With no restriction, this problem is undecidable even for three processes, each using a fixed number of locks. We propose two restrictions that make distributed control decidable. The first one is to allow each process to use at most two locks. The problem then becomes complete for the second level of the polynomial time hierarchy, and is even in Ptime under some additional assumptions. The dining philosophers problem satisfies these assumptions. The second restriction is a nested usage of locks. In this case, the synthesis problem is Nexptime-complete. The drinking philosophers problem falls in this case.

Cite as

Hugo Gimbert, Corto Mascle, Anca Muscholl, and Igor Walukiewicz. Distributed Controller Synthesis for Deadlock Avoidance. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 125:1-125:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{gimbert_et_al:LIPIcs.ICALP.2022.125,
  author =	{Gimbert, Hugo and Mascle, Corto and Muscholl, Anca and Walukiewicz, Igor},
  title =	{{Distributed Controller Synthesis for Deadlock Avoidance}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{125:1--125:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.125},
  URN =		{urn:nbn:de:0030-drops-164668},
  doi =		{10.4230/LIPIcs.ICALP.2022.125},
  annote =	{Keywords: Distributed Synthesis, Concurrency, Lock Synchronisation, Deadlock Avoidance}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Lower Bounds for Unambiguous Automata via Communication Complexity

Authors: Mika Göös, Stefan Kiefer, and Weiqiang Yuan


Abstract
We use results from communication complexity, both new and old ones, to prove lower bounds for unambiguous finite automata (UFAs). We show three results. 1) Complement: There is a language L recognised by an n-state UFA such that the complement language L̅ requires NFAs with n^Ω̃(log n) states. This improves on a lower bound by Raskin. 2) Union: There are languages L₁, L₂ recognised by n-state UFAs such that the union L₁∪L₂ requires UFAs with n^Ω̃(log n) states. 3) Separation: There is a language L such that both L and L̅ are recognised by n-state NFAs but such that L requires UFAs with n^Ω(log n) states. This refutes a conjecture by Colcombet.

Cite as

Mika Göös, Stefan Kiefer, and Weiqiang Yuan. Lower Bounds for Unambiguous Automata via Communication Complexity. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 126:1-126:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{goos_et_al:LIPIcs.ICALP.2022.126,
  author =	{G\"{o}\"{o}s, Mika and Kiefer, Stefan and Yuan, Weiqiang},
  title =	{{Lower Bounds for Unambiguous Automata via Communication Complexity}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{126:1--126:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.126},
  URN =		{urn:nbn:de:0030-drops-164679},
  doi =		{10.4230/LIPIcs.ICALP.2022.126},
  annote =	{Keywords: Unambiguous automata, communication complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Satisfiability Problems for Finite Groups

Authors: Paweł M. Idziak, Piotr Kawałek, Jacek Krzaczkowski, and Armin Weiß


Abstract
Over twenty years ago, Goldmann and Russell initiated the study of the complexity of the equation satisfiability problem (PolSat) and the NUDFA program satisfiability problem (ProgramSat) in finite groups. They showed that these problems are in 𝖯 for nilpotent groups, while they are NP-complete for non-solvable groups. In this work we completely characterize finite groups for which the problem ProgramSat can be solved in randomized polynomial time under the assumptions of the Randomized Exponential Time Hypothesis and the Constant Degree Hypothesis. We also determine the complexity of PolSat for a wide class of finite groups. As a by-product, we obtain a classification for ListPolSat, a version of PolSat where each variable can be restricted to an arbitrary subset. Finally, we also give unconditional algorithms for these problems in certain cases.
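For orientation, a hedged illustration of a PolSat instance over a finite group G; the concrete shape of the equation is ours, chosen only to show the flavour of the problem. Given constants g_1, g_2, h ∈ G, decide whether the equation

  g_1 x g_2 y x^{-1} = h

has a solution x, y ∈ G. ProgramSat poses an analogous satisfiability question for NUDFA programs over G.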

Cite as

Paweł M. Idziak, Piotr Kawałek, Jacek Krzaczkowski, and Armin Weiß. Satisfiability Problems for Finite Groups. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 127:1-127:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{idziak_et_al:LIPIcs.ICALP.2022.127,
  author =	{Idziak, Pawe{\l} M. and Kawa{\l}ek, Piotr and Krzaczkowski, Jacek and Wei{\ss}, Armin},
  title =	{{Satisfiability Problems for Finite Groups}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{127:1--127:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.127},
  URN =		{urn:nbn:de:0030-drops-164685},
  doi =		{10.4230/LIPIcs.ICALP.2022.127},
  annote =	{Keywords: Satisifiability, Solvable groups, ProgramSat, PolSat, Exponential Time Hypothesis}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Linearly Ordered Colourings of Hypergraphs

Authors: Tamio-Vesa Nakajima and Stanislav Živný


Abstract
A linearly ordered (LO) k-colouring of an r-uniform hypergraph assigns an integer from {1, …, k} to every vertex so that, in every edge, the (multi)set of colours has a unique maximum. Equivalently, for r = 3, if two vertices in an edge are assigned the same colour, then the third vertex is assigned a larger colour (as opposed to a different colour, as in classic non-monochromatic colouring). Barto, Battistelli, and Berg [STACS'21] studied LO colourings on 3-uniform hypergraphs in the context of promise constraint satisfaction problems (PCSPs). We show two results. First, given a 3-uniform hypergraph that admits an LO 2-colouring, one can find in polynomial time an LO k-colouring with k = O(√{nlog log n}/log n), where n is the number of vertices of the input hypergraph. This is established by building on ideas from algorithms designed for approximate graph colourings. Second, given an r-uniform hypergraph that admits an LO 2-colouring, we establish NP-hardness of finding an LO 3-colouring for every constant uniformity r ≥ 5. In fact, we determine the precise relationship of polymorphism minions for all uniformities r ≥ 3, which reveals a key difference between r = 3,4 and r ≥ 5 and which may be of independent interest. Using the algebraic approach to PCSPs, we actually show a more general result establishing NP-hardness of finding an LO (k+1)-colouring for LO k-colourable r-uniform hypergraphs for k ≥ 2 and r ≥ 5.
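To make the r = 3 condition above concrete, a small worked example of our own: within a single edge {u, v, w},

  colours (1, 1, 2) are allowed    (the maximum 2 is unique),
  colours (1, 2, 2) are not        (the maximum 2 occurs twice),
  colours (1, 2, 3) are allowed    (the maximum 3 is unique).

Classic non-monochromatic colouring would accept the middle assignment; this is exactly where linearly ordered colouring differs.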

Cite as

Tamio-Vesa Nakajima and Stanislav Živný. Linearly Ordered Colourings of Hypergraphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 128:1-128:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{nakajima_et_al:LIPIcs.ICALP.2022.128,
  author =	{Nakajima, Tamio-Vesa and \v{Z}ivn\'{y}, Stanislav},
  title =	{{Linearly Ordered Colourings of Hypergraphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{128:1--128:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.128},
  URN =		{urn:nbn:de:0030-drops-164692},
  doi =		{10.4230/LIPIcs.ICALP.2022.128},
  annote =	{Keywords: hypergraph colourings, promise constraint satisfaction, PCSP, polymorphisms, minions, algebraic approach}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Variance-Penalized Stochastic Shortest Path Problem

Authors: Jakob Piribauer, Ocan Sankur, and Christel Baier


Abstract
The stochastic shortest path problem (SSPP) asks to resolve the non-deterministic choices in a Markov decision process (MDP) such that the expected accumulated weight before reaching a target state is maximized. This paper addresses the optimization of the variance-penalized expectation (VPE) of the accumulated weight, which is a variant of the SSPP in which a multiple of the variance of accumulated weights is incurred as a penalty. It is shown that the optimal VPE in MDPs with non-negative weights as well as an optimal deterministic finite-memory scheduler can be computed in exponential space. The threshold problem whether the maximal VPE exceeds a given rational is shown to be EXPTIME-hard and to lie in NEXPTIME. Furthermore, a result of interest in its own right obtained on the way is that a variance-minimal scheduler among all expectation-optimal schedulers can be computed in polynomial time.
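Writing X for the weight accumulated before the target is reached, the objective described above can be read as follows (a sketch in our own notation; λ > 0 denotes the penalty factor and σ ranges over schedulers):

  VPE^σ = E^σ[X] − λ · Var^σ[X],

and the problem asks to maximise VPE^σ over all schedulers σ.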

Cite as

Jakob Piribauer, Ocan Sankur, and Christel Baier. The Variance-Penalized Stochastic Shortest Path Problem. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 129:1-129:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{piribauer_et_al:LIPIcs.ICALP.2022.129,
  author =	{Piribauer, Jakob and Sankur, Ocan and Baier, Christel},
  title =	{{The Variance-Penalized Stochastic Shortest Path Problem}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{129:1--129:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.129},
  URN =		{urn:nbn:de:0030-drops-164705},
  doi =		{10.4230/LIPIcs.ICALP.2022.129},
  annote =	{Keywords: Markov decision process, variance, stochastic shortest path problem}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Functions and References in the Pi-Calculus: Full Abstraction and Proof Techniques

Authors: Enguerrand Prebet


Abstract
We present a fully abstract encoding of λ^{ref}, the call-by-value λ-calculus with references, in the π-calculus. By contrast with previous full abstraction results for sequential languages in the π-calculus, the characterisation of contextual equivalence in the source language uses a labelled bisimilarity. To define the latter equivalence, we refine existing notions of typed bisimulation in the π-calculus, and introduce in particular a specific component to handle divergences. We obtain a proof technique that allows us to prove equivalences between λ^{ref} programs via the encoding. The resulting proofs correspond closely to normal form bisimulations in the λ-calculus, making proofs in the π-calculus expressible as if reasoning in λ^{ref}. We study how standard and new up-to techniques can be used to reason about diverging terms and simplify proofs of equivalence using the bisimulation we introduce. This shows how the π-calculus theory can be used to prove interesting equivalences between λ^{ref} programs, using algebraic reasoning and up-to techniques.

Cite as

Enguerrand Prebet. Functions and References in the Pi-Calculus: Full Abstraction and Proof Techniques. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 130:1-130:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{prebet:LIPIcs.ICALP.2022.130,
  author =	{Prebet, Enguerrand},
  title =	{{Functions and References in the Pi-Calculus: Full Abstraction and Proof Techniques}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{130:1--130:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.130},
  URN =		{urn:nbn:de:0030-drops-164715},
  doi =		{10.4230/LIPIcs.ICALP.2022.130},
  annote =	{Keywords: Call-by-value \lambda-calculus, imperative Programming, \pi-calculus, Bisimulation, Type System}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
What Can Oracles Teach Us About the Ultimate Fate of Life?

Authors: Ville Salo and Ilkka Törmä


Abstract
We settle two long-standing open problems about Conway’s Life, a two-dimensional cellular automaton. We solve the Generalized grandfather problem: for all n ≥ 0, there exists a configuration that has an nth predecessor but not an (n+1)st one. We also solve (one interpretation of) the Unique father problem: there exists a finite stable configuration that contains a finite subpattern that has no predecessor patterns except itself. In particular this gives the first example of an unsynthesizable still life. The new key concept is that of a spatiotemporally periodic configuration (agar) that has a unique chain of preimages; we show that this property is semidecidable, and find examples of such agars using a SAT solver. Our results about the topological dynamics of Game of Life are as follows: it never reaches its limit set; its dynamics on its limit set is chain-wandering, in particular it is not topologically transitive and does not have dense periodic points; and the spatial dynamics of its limit set is non-sofic, and does not admit a sublinear gluing radius in the cardinal directions (in particular it is not block-gluing). Our computability results are that Game of Life’s reachability problem, as well as the language of its limit set, are PSPACE-hard.

Cite as

Ville Salo and Ilkka Törmä. What Can Oracles Teach Us About the Ultimate Fate of Life?. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 131:1-131:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{salo_et_al:LIPIcs.ICALP.2022.131,
  author =	{Salo, Ville and T\"{o}rm\"{a}, Ilkka},
  title =	{{What Can Oracles Teach Us About the Ultimate Fate of Life?}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{131:1--131:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.131},
  URN =		{urn:nbn:de:0030-drops-164721},
  doi =		{10.4230/LIPIcs.ICALP.2022.131},
  annote =	{Keywords: Game of Life, cellular automata, limit set, symbolic dynamics}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Processes Parametrised by an Algebraic Theory

Authors: Todd Schmid, Wojciech Różowski, Jurriaan Rot, and Alexandra Silva


Abstract
We develop a (co)algebraic framework to study a family of process calculi with monadic branching structures and recursion operators. Our framework features a uniform semantics of process terms and a complete axiomatisation of semantic equivalence. We show that there are uniformly defined fragments of our calculi that capture well-known examples from the literature like regular expressions modulo bisimilarity and guarded Kleene algebra with tests. We also derive new calculi for probabilistic and convex processes with an analogue of Kleene star.

Cite as

Todd Schmid, Wojciech Różowski, Jurriaan Rot, and Alexandra Silva. Processes Parametrised by an Algebraic Theory. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 132:1-132:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{schmid_et_al:LIPIcs.ICALP.2022.132,
  author =	{Schmid, Todd and R\'{o}\.{z}owski, Wojciech and Rot, Jurriaan and Silva, Alexandra},
  title =	{{Processes Parametrised by an Algebraic Theory}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{132:1--132:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.132},
  URN =		{urn:nbn:de:0030-drops-164735},
  doi =		{10.4230/LIPIcs.ICALP.2022.132},
  annote =	{Keywords: process algebra, program semantics, coalgebra, regular expressions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Dimension Spectrum Conjecture for Planar Lines

Authors: D. M. Stull


Abstract
Let L_{a,b} be a line in the Euclidean plane with slope a and intercept b. The dimension spectrum sp(L_{a,b}) is the set of all effective dimensions of individual points on L_{a,b}. Jack Lutz, in the early 2000s, posed the dimension spectrum conjecture. This conjecture states that, for every line L_{a,b}, the spectrum of L_{a,b} contains a unit interval. In this paper we prove that the dimension spectrum conjecture is true. Specifically, let (a,b) be a slope-intercept pair, and let d = min{dim(a,b), 1}. For every s ∈ [0, 1], we construct a point x such that dim(x, ax + b) = d + s. Thus, we show that sp(L_{a,b}) contains the interval [d, 1+d].
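In compact form, the result stated above reads

  [d, d+1] ⊆ sp(L_{a,b})   with   d = min{dim(a,b), 1},

witnessed, for each s ∈ [0, 1], by a point x with dim(x, ax + b) = d + s.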

Cite as

D. M. Stull. The Dimension Spectrum Conjecture for Planar Lines. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 133:1-133:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{stull:LIPIcs.ICALP.2022.133,
  author =	{Stull, D. M.},
  title =	{{The Dimension Spectrum Conjecture for Planar Lines}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{133:1--133:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.133},
  URN =		{urn:nbn:de:0030-drops-164749},
  doi =		{10.4230/LIPIcs.ICALP.2022.133},
  annote =	{Keywords: Algorithmic randomness, Kolmogorov complexity, effective dimension}
}
