LIPIcs, Volume 297

51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)




Event

ICALP 2024, July 8-12, 2024, Tallinn, Estonia

Editors

Karl Bringmann
  • Saarland University, Saarland Informatics Campus, Saarbrücken, Germany, and Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Martin Grohe
  • RWTH Aachen University, Germany
Gabriele Puppis
  • University of Udine, Italy
Ola Svensson
  • EPFL, Lausanne, Switzerland

Publication Details

  • Published: 2024-07-02
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-322-5

Documents

Document
Complete Volume
LIPIcs, Volume 297, ICALP 2024, Complete Volume

Authors: Karl Bringmann, Martin Grohe, Gabriele Puppis, and Ola Svensson


Abstract
LIPIcs, Volume 297, ICALP 2024, Complete Volume

Cite as

51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 1-2938, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Proceedings{bringmann_et_al:LIPIcs.ICALP.2024,
  title =	{{LIPIcs, Volume 297, ICALP 2024, Complete Volume}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{1--2938},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024},
  URN =		{urn:nbn:de:0030-drops-201424},
  doi =		{10.4230/LIPIcs.ICALP.2024},
  annote =	{Keywords: LIPIcs, Volume 297, ICALP 2024, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Karl Bringmann, Martin Grohe, Gabriele Puppis, and Ola Svensson


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 0:i-0:xl, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2024.0,
  author =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{0:i--0:xl},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.0},
  URN =		{urn:nbn:de:0030-drops-201437},
  doi =		{10.4230/LIPIcs.ICALP.2024.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Limits of Symmetric Computation (Invited Talk)

Authors: Anuj Dawar


Abstract
I survey recent work on symmetric computation. A number of strands of work, from logic, circuit complexity, combinatorial optimization and other areas have converged on similar notions of symmetry in computation. This write-up of an invited talk gives a whirlwind tour through the results and pointers to the relevant literature.

Cite as

Anuj Dawar. Limits of Symmetric Computation (Invited Talk). In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 1:1-1:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dawar:LIPIcs.ICALP.2024.1,
  author =	{Dawar, Anuj},
  title =	{{Limits of Symmetric Computation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{1:1--1:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.1},
  URN =		{urn:nbn:de:0030-drops-201444},
  doi =		{10.4230/LIPIcs.ICALP.2024.1},
  annote =	{Keywords: Logic, Complexity Theory, Symmetric Computation}
}
Document
Invited Talk
Group Fairness: Multiwinner Voting and Beyond (Invited Talk)

Authors: Edith Elkind


Abstract
In multiwinner voting with approval ballots the agents are presented with a set of alternatives, each agent indicates which of these alternatives they approve, and the goal is to select a fixed-size subset of the alternatives, in a way that reflects the voters' preferences. This framework captures a variety of group decision-making scenarios, from choosing a list of speakers for an event to appointing a set of validators in a proof-of-stake blockchain. An important concern in many of these scenarios is group fairness: every sufficiently large group of agents with similar preferences should be represented in the winning set of alternatives. In this talk, we discuss how to formalise this idea and whether the resulting axioms can be satisfied by efficiently computable voting rules. We also discuss extensions of our framework to the more expressive setting of participatory budgeting, where the agents are presented with a slate of projects (which may have different costs) and the goal is to select a subset of projects subject to a budget constraint.
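
As an illustration only (not from the talk): a minimal Python sketch of the standard justified-representation check for approval-based committees mentioned in the keywords. The inputs approvals (one set of approved alternatives per voter), committee, and the committee size k are hypothetical.

def satisfies_jr(approvals, committee, k):
    # Justified representation: no group of at least n/k voters who jointly
    # approve some unselected candidate may be left with no approved member
    # of the committee.
    n = len(approvals)
    unrepresented = [A for A in approvals if not (A & committee)]
    outside = set().union(*approvals) - committee if approvals else set()
    for c in outside:
        if sum(1 for A in unrepresented if c in A) * k >= n:
            return False
    return True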

Cite as

Edith Elkind. Group Fairness: Multiwinner Voting and Beyond (Invited Talk). In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{elkind:LIPIcs.ICALP.2024.2,
  author =	{Elkind, Edith},
  title =	{{Group Fairness: Multiwinner Voting and Beyond}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.2},
  URN =		{urn:nbn:de:0030-drops-201459},
  doi =		{10.4230/LIPIcs.ICALP.2024.2},
  annote =	{Keywords: multiwinner voting, participatory budgeting, justified representation}
}
Document
Invited Talk
Cross-Paradigm Graph Algorithms (Invited Talk)

Authors: Danupon Nanongkai


Abstract
A goal of the theory of graph algorithms is to develop algorithmic techniques that enable computing devices to process graph data with limited resources (time, space, communication overhead, etc.). This has led to extensive studies of graph algorithms in various models of computation (sequential algorithms, distributed algorithms, streaming algorithms, etc.) by many sub-communities. Cross-paradigm graph algorithms is an effort to attack the same problem in many models of computation simultaneously, with the goal of generating new insights that may not emerge from the isolated viewpoint of a single model and of ultimately developing techniques that can be used to solve graph problems near-optimally across many models of computation. In this talk, I will discuss some recent advances in graph algorithmic techniques for basic graph problems (e.g. minimum cut, shortest path, and maximum flow) in connection to this research program, especially some insights that led to cross-paradigm algorithms and to answering notorious open questions. No background will be assumed from the audience beyond familiarity with textbook graph algorithms.

Cite as

Danupon Nanongkai. Cross-Paradigm Graph Algorithms (Invited Talk). In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{nanongkai:LIPIcs.ICALP.2024.3,
  author =	{Nanongkai, Danupon},
  title =	{{Cross-Paradigm Graph Algorithms}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.3},
  URN =		{urn:nbn:de:0030-drops-201467},
  doi =		{10.4230/LIPIcs.ICALP.2024.3},
  annote =	{Keywords: Graph Algorithms and Complexity, Efficient Algorithms, Models of Computation}
}
Document
Invited Talk
Graphs Shortcuts: New Bounds and Algorithms (Invited Talk)

Authors: Merav Parter


Abstract
For an n-vertex digraph G = (V,E), a shortcut set is a (small) subset of edges H taken from the transitive closure of G that, when added to G, guarantees that the diameter of G ∪ H is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every n-vertex digraph admits a shortcut set of linear size (i.e., of O(n) edges) that reduces the diameter to Õ(√n). Despite extensive research over the years, the question of whether one can reduce the diameter to o(√n) with Õ(n) shortcut edges has been left open. In this talk, I will present the first improved diameter-sparsity tradeoff for this problem, breaking the √n diameter barrier. Specifically, we show an O(n^ω)-time randomized algorithm for computing a linear-size shortcut set that reduces the diameter of the digraph to Õ(n^{1/3}). I will also present time-efficient algorithms for computing these shortcuts and explain the limitations of the current approaches. Finally, I will draw some connections between shortcuts and several forms of graph sparsification (e.g., reachability preservers, spanners). Based on joint work with Shimon Kogan (SODA 2022, ICALP 2022, FOCS 2022, SODA 2023).
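
For illustration only (not from the talk): a brute-force Python sketch of the quantity a shortcut set targets, namely the maximum hop-length of shortest paths over all reachable pairs. Calling it on the edge list of G ∪ H (a hypothetical combined edge list) measures the effect of a candidate shortcut set H.

from collections import deque

def hop_diameter(n, edges):
    # Maximum, over ordered pairs (u, v) with v reachable from u, of the
    # number of hops on a shortest u -> v path in the digraph on vertices
    # 0..n-1 given by the directed edge list.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    worst = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

# Example: hop_diameter(n, list(G_edges) + list(H_edges)) is the diameter of
# G ∪ H that a shortcut set H is meant to make small.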

Cite as

Merav Parter. Graphs Shortcuts: New Bounds and Algorithms (Invited Talk). In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{parter:LIPIcs.ICALP.2024.4,
  author =	{Parter, Merav},
  title =	{{Graphs Shortcuts: New Bounds and Algorithms}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.4},
  URN =		{urn:nbn:de:0030-drops-201476},
  doi =		{10.4230/LIPIcs.ICALP.2024.4},
  annote =	{Keywords: Shortcuts, Spanners, Distance Preservers}
}
Document
Track A: Algorithms, Complexity and Games
An O(loglog n)-Approximation for Submodular Facility Location

Authors: Fateme Abbasi, Marek Adamczyk, Miguel Bosch-Calvo, Jarosław Byrka, Fabrizio Grandoni, Krzysztof Sornat, and Antoine Tinguely


Abstract
In the Submodular Facility Location problem (SFL) we are given a collection of n clients and m facilities in a metric space. A feasible solution consists of an assignment of each client to some facility. For each client, one has to pay the distance to the associated facility. Furthermore, for each facility f to which we assign the subset of clients S^f, one has to pay the opening cost g(S^f), where g() is a monotone submodular function with g(∅) = 0. SFL is APX-hard since it includes the classical (metric uncapacitated) Facility Location problem (with uniform facility costs) as a special case. Svitkina and Tardos [SODA'06] gave the current-best O(log n) approximation algorithm for SFL. The same authors posed the open problem of whether SFL admits a constant-factor approximation and provided such an approximation for a very restricted special case of the problem. We make some progress towards the solution of the above open problem by presenting an O(loglog n) approximation. Our approach is rather flexible and can be easily extended to generalizations and variants of SFL. In more detail, we achieve the same approximation factor for the natural generalizations of SFL where the opening cost of each facility f is of the form p_f + g(S^f) or w_f ⋅ g(S^f), where p_f, w_f ≥ 0 are input values. We also obtain an improved approximation algorithm for the related Universal Stochastic Facility Location problem. In this problem one is given a classical (metric) facility location instance and has to a priori assign each client to some facility. Then a subset of active clients is sampled from some given distribution, and one has to pay (a posteriori) only the connection and opening costs induced by the active clients. The expected opening cost of each facility f can be modelled with a submodular function of the set of clients assigned to f.
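
Purely as an illustration of the objective (not code from the paper): a Python sketch that evaluates the SFL cost of a fixed assignment, assuming a user-supplied metric dist and a monotone submodular opening cost g with g(∅) = 0; both names are hypothetical inputs.

def sfl_cost(assignment, dist, g):
    # assignment: dict mapping each client to its chosen facility.
    connection = sum(dist(c, f) for c, f in assignment.items())
    clusters = {}
    for c, f in assignment.items():
        clusters.setdefault(f, set()).add(c)
    opening = sum(g(frozenset(clients)) for clients in clusters.values())
    return connection + opening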

Cite as

Fateme Abbasi, Marek Adamczyk, Miguel Bosch-Calvo, Jarosław Byrka, Fabrizio Grandoni, Krzysztof Sornat, and Antoine Tinguely. An O(loglog n)-Approximation for Submodular Facility Location. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abbasi_et_al:LIPIcs.ICALP.2024.5,
  author =	{Abbasi, Fateme and Adamczyk, Marek and Bosch-Calvo, Miguel and Byrka, Jaros{\l}aw and Grandoni, Fabrizio and Sornat, Krzysztof and Tinguely, Antoine},
  title =	{{An O(loglog n)-Approximation for Submodular Facility Location}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.5},
  URN =		{urn:nbn:de:0030-drops-201488},
  doi =		{10.4230/LIPIcs.ICALP.2024.5},
  annote =	{Keywords: approximation algorithms, facility location, submodular facility location, universal stochastic facility location}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Approximation For Robust Clustering in Discrete Geometric Spaces

Authors: Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, and Joachim Spoerhase


Abstract
We consider the well-studied Robust (k,z)-Clustering problem, which generalizes the classic k-Median, k-Means, and k-Center problems and arises in the domains of robust optimization [Anthony, Goyal, Gupta, Nagarajan, Math. Oper. Res. 2010] and in algorithmic fairness [Abbasi, Bhaskara, Venkatasubramanian, 2021 & Ghadiri, Samadi, Vempala, 2022]. Given a constant z ≥ 1, the input to Robust (k,z)-Clustering is a set P of n points in a metric space (M,δ), a weight function w: P → ℝ_{≥ 0} and a positive integer k. Further, each point belongs to one (or more) of the m different groups S_1,S_2,…,S_m ⊆ P. Our goal is to find a set X of k centers such that max_{i ∈ [m]} ∑_{p ∈ S_i} w(p) δ(p,X)^z is minimized. Complementing recent work on this problem, we give a comprehensive understanding of the parameterized approximability of the problem in geometric spaces where the parameter is the number k of centers. We prove the following results: 1) For a universal constant η₀ > 0.0006, we devise a 3^z(1-η₀)-factor FPT approximation algorithm for Robust (k,z)-Clustering in discrete high-dimensional Euclidean spaces where the set of potential centers is finite. This shows that the lower bound of 3^z for general metrics [Goyal, Jaiswal, Inf. Proc. Letters, 2023] no longer holds when the metric has geometric structure. 2) We show that Robust (k,z)-Clustering in discrete Euclidean spaces is (√{3/2} - o(1))-hard to approximate for FPT algorithms, even if we consider the special case k-Center in logarithmic dimensions. This rules out a (1+ε)-approximation algorithm running in time f(k,ε)poly(m,n) (also called efficient parameterized approximation scheme or EPAS), giving a striking contrast with the recent EPAS for the continuous setting where centers can be placed anywhere in the space [Abbasi et al., FOCS'23]. 3) However, we obtain an EPAS for Robust (k,z)-Clustering in discrete Euclidean spaces when the dimension is sublogarithmic (for the discrete problem, earlier work [Abbasi et al., FOCS'23] provides an EPAS only in dimension o(log log n)). Our EPAS works also for metrics of sub-logarithmic doubling dimension.
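
For illustration only (not from the paper): a Python sketch that evaluates the Robust (k,z)-Clustering objective for a candidate center set. The metric delta, the weight function w, and the list of groups are hypothetical inputs.

def robust_kz_cost(centers, groups, w, delta, z):
    # Objective: max over groups S_i of the weighted sum of z-th powers of
    # the distances from each point in S_i to its nearest chosen center.
    def dist_to_centers(p):
        return min(delta(p, x) for x in centers)
    return max(sum(w(p) * dist_to_centers(p) ** z for p in S) for S in groups)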

Cite as

Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, and Joachim Spoerhase. Parameterized Approximation For Robust Clustering in Discrete Geometric Spaces. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abbasi_et_al:LIPIcs.ICALP.2024.6,
  author =	{Abbasi, Fateme and Banerjee, Sandip and Byrka, Jaros{\l}aw and Chalermsook, Parinya and Gadekar, Ameet and Khodamoradi, Kamyar and Marx, D\'{a}niel and Sharma, Roohani and Spoerhase, Joachim},
  title =	{{Parameterized Approximation For Robust Clustering in Discrete Geometric Spaces}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.6},
  URN =		{urn:nbn:de:0030-drops-201494},
  doi =		{10.4230/LIPIcs.ICALP.2024.6},
  annote =	{Keywords: Clustering, approximation algorithms, parameterized complexity}
}
Document
Track A: Algorithms, Complexity and Games
Finer-Grained Reductions in Fine-Grained Hardness of Approximation

Authors: Elie Abboud and Noga Ron-Zewi


Abstract
We investigate the relation between δ and ε required for obtaining a (1+δ)-approximation in time N^{2-ε} for closest pair problems under various distance metrics, and for other related problems in fine-grained complexity. Specifically, our main result shows that if it is impossible to (exactly) solve the (bichromatic) inner product (IP) problem for vectors of dimension c log N in time N^{2-ε}, then there is no (1+δ)-approximation algorithm for (bichromatic) Euclidean Closest Pair running in time N^{2-2ε}, where δ ≈ (ε/c)² (where ≈ hides polylog factors). This improves on the prior result due to Chen and Williams (SODA 2019) which gave a smaller polynomial dependence of δ on ε, on the order of δ ≈ (ε/c)⁶. Our result implies in turn that no (1+δ)-approximation algorithm exists for Euclidean closest pair for δ ≈ ε⁴, unless an algorithmic improvement for IP is obtained. This in turn is very close to the approximation guarantee of δ ≈ ε³ for Euclidean closest pair, given by the best known algorithm of Alman, Chan, and Williams (FOCS 2016). By known reductions, a similar result follows for a host of other related problems in fine-grained hardness of approximation. Our reduction combines the hardness of approximation framework of Chen and Williams with an MA communication protocol for IP over a small alphabet, inspired by the MA protocol of Chen (Theory of Computing, 2020).

Cite as

Elie Abboud and Noga Ron-Zewi. Finer-Grained Reductions in Fine-Grained Hardness of Approximation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2024.7,
  author =	{Abboud, Elie and Ron-Zewi, Noga},
  title =	{{Finer-Grained Reductions in Fine-Grained Hardness of Approximation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.7},
  URN =		{urn:nbn:de:0030-drops-201507},
  doi =		{10.4230/LIPIcs.ICALP.2024.7},
  annote =	{Keywords: Fine-grained complexity, conditional lower bound, fine-grained reduction, Approximation algorithms, Analysis of algorithms, Computational geometry, Computational and structural complexity theory}
}
Document
Track A: Algorithms, Complexity and Games
Approximation Schemes for Geometric Knapsack for Packing Spheres and Fat Objects

Authors: Pritam Acharya, Sujoy Bhore, Aaryan Gupta, Arindam Khan, Bratin Mondal, and Andreas Wiese


Abstract
We study the geometric knapsack problem in which we are given a set of d-dimensional objects (each with associated profits) and the goal is to find the maximum profit subset that can be packed non-overlappingly into a given d-dimensional (unit hypercube) knapsack. Even if d = 2 and all input objects are disks, this problem is known to be NP-hard [Demaine, Fekete, Lang, 2010]. In this paper, we give polynomial time (1+ε)-approximation algorithms for the following types of input objects in any constant dimension d: - disks and hyperspheres, - a class of fat convex polygons that generalizes regular k-gons for k ≥ 5 (formally, polygons with a constant number of edges, whose lengths are in a bounded range, and in which each angle is strictly larger than π/2), - arbitrary fat convex objects that are sufficiently small compared to the knapsack. We remark that in our PTAS for disks and hyperspheres, we output the computed set of objects, but for O_ε(1) of them we determine their coordinates only up to an exponentially small error. However, it is not clear whether there always exists a (1+ε)-approximate solution that uses only rational coordinates for the disks' centers. We leave this as an open problem which is related to well-studied geometric questions in the realm of circle packing.

Cite as

Pritam Acharya, Sujoy Bhore, Aaryan Gupta, Arindam Khan, Bratin Mondal, and Andreas Wiese. Approximation Schemes for Geometric Knapsack for Packing Spheres and Fat Objects. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{acharya_et_al:LIPIcs.ICALP.2024.8,
  author =	{Acharya, Pritam and Bhore, Sujoy and Gupta, Aaryan and Khan, Arindam and Mondal, Bratin and Wiese, Andreas},
  title =	{{Approximation Schemes for Geometric Knapsack for Packing Spheres and Fat Objects}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.8},
  URN =		{urn:nbn:de:0030-drops-201511},
  doi =		{10.4230/LIPIcs.ICALP.2024.8},
  annote =	{Keywords: Approximation Algorithms, Polygon Packing, Circle Packing, Sphere Packing, Geometric Knapsack, Resource Augmentation}
}
Document
Track A: Algorithms, Complexity and Games
Detecting Disjoint Shortest Paths in Linear Time and More

Authors: Shyan Akmal, Virginia Vassilevska Williams, and Nicole Wein


Abstract
In the k-Disjoint Shortest Paths (k-DSP) problem, we are given a graph G (with positive edge weights) on n nodes and m edges with specified source vertices s_1, … , s_k, and target vertices t_1, … , t_k, and are tasked with determining if G contains vertex-disjoint (s_i,t_i)-shortest paths. For any constant k, it is known that k-DSP can be solved in polynomial time over undirected graphs and directed acyclic graphs (DAGs). However, the exact time complexity of k-DSP remains mysterious, with large gaps between the fastest known algorithms and best conditional lower bounds. In this paper, we obtain faster algorithms for important cases of k-DSP, and present better conditional lower bounds for k-DSP and its variants. Previous work solved 2-DSP over weighted undirected graphs in O(n⁷) time, and weighted DAGs in O(mn) time. For the main result of this paper, we present optimal linear time algorithms for solving 2-DSP on weighted undirected graphs and DAGs. Our linear time algorithms are algebraic however, and so only solve the detection rather than search version of 2-DSP (we show how to solve the search version in O(mn) time, which is faster than the previous best runtime in weighted undirected graphs, but only matches the previous best runtime for DAGs). We also obtain a faster algorithm for k-Edge Disjoint Shortest Paths (k-EDSP) in DAGs, the variant of k-DSP where one seeks edge-disjoint instead of vertex-disjoint paths between sources and their corresponding targets. Algorithms for k-EDSP on DAGs from previous work take Ω(m^k) time. We show that k-EDSP can be solved over DAGs in O(mn^{k-1}) time, matching the fastest known runtime for solving k-DSP over DAGs. Previous work established conditional lower bounds for solving k-DSP and its variants via reductions from detecting cliques in graphs. Prior work implied that k-Clique can be reduced to 2k-DSP in DAGs and undirected graphs with O((kn)²) nodes. We improve this reduction by showing how to reduce from k-Clique to k-DSP in DAGs and undirected graphs with O((kn)²) nodes (halving the number of paths needed in the reduced instance). A variant of k-DSP is the k-Disjoint Paths (k-DP) problem, where the solution paths no longer need to be shortest paths. Previous work reduced from k-Clique to p-DP in DAGs with O(kn) nodes, for p = k + k(k-1)/2. We improve this by showing a reduction from k-Clique to p-DP, for p = k + ⌊k²/4⌋. Under the k-Clique Hypothesis from fine-grained complexity, our results establish better conditional lower bounds for k-DSP for all k ≥ 4, and better conditional lower bounds for p-DP for all p ≤ 4031. Notably, our work gives the first nontrivial conditional lower bounds for 4-DP in DAGs and 4-DSP in undirected graphs and DAGs. Before our work, nontrivial conditional lower bounds were only known for k-DP and k-DSP on such graphs when k ≥ 6.

Cite as

Shyan Akmal, Virginia Vassilevska Williams, and Nicole Wein. Detecting Disjoint Shortest Paths in Linear Time and More. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{akmal_et_al:LIPIcs.ICALP.2024.9,
  author =	{Akmal, Shyan and Vassilevska Williams, Virginia and Wein, Nicole},
  title =	{{Detecting Disjoint Shortest Paths in Linear Time and More}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.9},
  URN =		{urn:nbn:de:0030-drops-201529},
  doi =		{10.4230/LIPIcs.ICALP.2024.9},
  annote =	{Keywords: disjoint shortest paths, algebraic graph algorithms, disjoint paths, fine-grained complexity, clique}
}
Document
Track A: Algorithms, Complexity and Games
The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants

Authors: Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang


Abstract
Many iterative algorithms in computer science require repeated computation of some algebraic expression whose input varies slightly from one iteration to the next. Although efficient data structures have been proposed for maintaining the solution of such algebraic expressions under low-rank updates, most of these results are only analyzed under exact arithmetic (real-RAM model and finite fields) which may not accurately reflect the more limited complexity guarantees of real computers. In this paper, we analyze the stability and bit complexity of such data structures for expressions that involve the inversion, multiplication, addition, and subtraction of matrices under the word-RAM model. We show that the bit complexity only increases linearly in the number of matrix operations in the expression. In addition, we consider the bit complexity of maintaining the determinant of a matrix expression. We show that the required bit complexity depends on the logarithm of the condition number of matrices instead of the logarithm of their determinant. Finally, we discuss rank maintenance and its connections to determinant maintenance. Our results have wide applications ranging from computational geometry (e.g., computing the volume of a polytope) to optimization (e.g., solving linear programs using the simplex algorithm).
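
As background illustration only (this is not the paper's data structure): the classical Sherman-Morrison rank-1 update is an example of the kind of low-rank inverse maintenance whose finite-precision behaviour is at stake; in floating point the result is only approximate, which is precisely the stability question the paper studies under the word-RAM model.

import numpy as np

def sherman_morrison_update(A_inv, u, v):
    # Given A^{-1}, returns (A + u v^T)^{-1} using O(n^2) arithmetic
    # operations instead of recomputing the inverse from scratch.
    # Valid only when 1 + v^T A^{-1} u != 0.
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au
    return A_inv - np.outer(Au, vA) / denom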

Cite as

Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang. The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{anand_et_al:LIPIcs.ICALP.2024.10,
  author =	{Anand, Emile and van den Brand, Jan and Ghadiri, Mehrdad and Zhang, Daniel J.},
  title =	{{The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.10},
  URN =		{urn:nbn:de:0030-drops-201538},
  doi =		{10.4230/LIPIcs.ICALP.2024.10},
  annote =	{Keywords: Data Structures, Online Algorithms, Bit Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Approximate Counting for Spin Systems in Sub-Quadratic Time

Authors: Konrad Anand, Weiming Feng, Graham Freifeld, Heng Guo, and Jiaheng Wang


Abstract
We present two randomised approximate counting algorithms with Õ(n^{2-c}/ε²) running time for some constant c > 0 and accuracy ε: 1) for the hard-core model with fugacity λ on graphs with maximum degree Δ when λ = O(Δ^{-1.5-c₁}) where c₁ = c/(2-2c); 2) for spin systems with strong spatial mixing (SSM) on planar graphs with quadratic growth, such as ℤ². For the hard-core model, Weitz’s algorithm (STOC, 2006) achieves sub-quadratic running time when correlation decays faster than the neighbourhood growth, namely when λ = o(Δ^{-2}). Our first algorithm does not require this property and extends the range where sub-quadratic algorithms exist. Our second algorithm appears to be the first to achieve sub-quadratic running time up to the SSM threshold, albeit on a restricted family of graphs. It also extends to (not necessarily planar) graphs with polynomial growth, such as ℤ^d, but with a running time of the form Õ(n²ε^{-2}/2^{c(log n)^{1/d}}) where d is the exponent of the polynomial growth and c > 0 is some constant.
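
For illustration only (not from the paper): a brute-force Python sketch of the hard-core partition function Z(λ) = Σ_I λ^{|I|}, summed over all independent sets I, the quantity that approximate counting algorithms for this model target; it is exponential time and only feasible for very small graphs.

from itertools import combinations

def hardcore_partition_function(n, edges, lam):
    # Sums lam**|I| over all independent sets I of the graph on vertices
    # 0..n-1 (definition/testing purposes only).
    E = {frozenset(e) for e in edges}
    def independent(S):
        return all(frozenset(p) not in E for p in combinations(S, 2))
    return sum(lam ** k
               for k in range(n + 1)
               for S in combinations(range(n), k)
               if independent(S))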

Cite as

Konrad Anand, Weiming Feng, Graham Freifeld, Heng Guo, and Jiaheng Wang. Approximate Counting for Spin Systems in Sub-Quadratic Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{anand_et_al:LIPIcs.ICALP.2024.11,
  author =	{Anand, Konrad and Feng, Weiming and Freifeld, Graham and Guo, Heng and Wang, Jiaheng},
  title =	{{Approximate Counting for Spin Systems in Sub-Quadratic Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.11},
  URN =		{urn:nbn:de:0030-drops-201543},
  doi =		{10.4230/LIPIcs.ICALP.2024.11},
  annote =	{Keywords: Randomised algorithm, Approximate counting, Spin system, Sub-quadratic algorithm}
}
Document
Track A: Algorithms, Complexity and Games
From Proof Complexity to Circuit Complexity via Interactive Protocols

Authors: Noel Arteche, Erfan Khaniki, Ján Pich, and Rahul Santhanam


Abstract
Folklore in complexity theory suspects that circuit lower bounds against NC¹ or P/poly, currently out of reach, are a necessary step towards proving strong proof complexity lower bounds for systems like Frege or Extended Frege. Establishing such a connection formally, however, is already daunting, as it would imply the breakthrough separation NEXP ⊈ P/poly, as recently observed by Pich and Santhanam [Pich and Santhanam, 2023]. We show such a connection conditionally for the Implicit Extended Frege proof system (iEF) introduced by Krajíček [Krajíček, 2004], capable of formalizing most of contemporary complexity theory. In particular, we show that if iEF proves efficiently the standard derandomization assumption that a concrete Boolean function is hard on average for subexponential-size circuits, then any superpolynomial lower bound on the length of iEF proofs implies #P ⊈ FP/poly (which would in turn imply, for example, PSPACE ⊈ P/poly). Our proof exploits the formalization inside iEF of the soundness of the sum-check protocol of Lund, Fortnow, Karloff, and Nisan [Lund et al., 1992]. This has consequences for the self-provability of circuit upper bounds in iEF. Interestingly, further improving our result seems to require progress in constructing interactive proof systems with more efficient provers.

Cite as

Noel Arteche, Erfan Khaniki, Ján Pich, and Rahul Santhanam. From Proof Complexity to Circuit Complexity via Interactive Protocols. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arteche_et_al:LIPIcs.ICALP.2024.12,
  author =	{Arteche, Noel and Khaniki, Erfan and Pich, J\'{a}n and Santhanam, Rahul},
  title =	{{From Proof Complexity to Circuit Complexity via Interactive Protocols}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.12},
  URN =		{urn:nbn:de:0030-drops-201557},
  doi =		{10.4230/LIPIcs.ICALP.2024.12},
  annote =	{Keywords: proof complexity, circuit complexity, interactive protocols}
}
Document
Track A: Algorithms, Complexity and Games
Learning Low-Degree Quantum Objects

Authors: Srinivasan Arunachalam, Arkopal Dutt, Francisco Escudero Gutiérrez, and Carlos Palazuelos


Abstract
We consider the problem of learning low-degree quantum objects up to ε-error in 𝓁₂-distance. We show the following results: (i) unknown n-qubit degree-d (in the Pauli basis) quantum channels and unitaries can be learned using O(1/ε^d) queries (which is independent of n), (ii) polynomials p:{-1,1}ⁿ → [-1,1] arising from d-query quantum algorithms can be learned from O((1/ε)^d ⋅ log n) many random examples (x,p(x)) (which implies learnability even for d = O(log n)), and (iii) degree-d polynomials p:{-1,1}ⁿ → [-1,1] can be learned through O(1/ε^d) queries to a quantum unitary U_p that block-encodes p. Our main technical contributions are new Bohnenblust-Hille inequalities for quantum channels and completely bounded polynomials.

Cite as

Srinivasan Arunachalam, Arkopal Dutt, Francisco Escudero Gutiérrez, and Carlos Palazuelos. Learning Low-Degree Quantum Objects. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arunachalam_et_al:LIPIcs.ICALP.2024.13,
  author =	{Arunachalam, Srinivasan and Dutt, Arkopal and Escudero Guti\'{e}rrez, Francisco and Palazuelos, Carlos},
  title =	{{Learning Low-Degree Quantum Objects}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{13:1--13:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.13},
  URN =		{urn:nbn:de:0030-drops-201563},
  doi =		{10.4230/LIPIcs.ICALP.2024.13},
  annote =	{Keywords: Tomography}
}
Document
Track A: Algorithms, Complexity and Games
A Multivariate to Bivariate Reduction for Noncommutative Rank and Related Results

Authors: Vikraman Arvind and Pushkar S. Joglekar


Abstract
We study the noncommutative rank problem, ncRANK, of computing the rank of matrices with linear entries in n noncommuting variables and the problem of noncommutative Rational Identity Testing, RIT, which is to decide if a given rational formula in n noncommuting variables is zero on its domain of definition. Motivated by the question whether these problems have deterministic NC algorithms, we revisit their interrelationship from a parallel complexity point of view. We show the following results: 1) Based on Cohn’s embedding theorem [Cohn, 1990; Cohn, 2006] we show deterministic NC reductions from multivariate ncRANK to bivariate ncRANK and from multivariate RIT to bivariate RIT. 2) We obtain a deterministic NC-Turing reduction from bivariate RIT to bivariate ncRANK, thereby proving that a deterministic NC algorithm for bivariate ncRANK would imply that both multivariate RIT and multivariate ncRANK are in deterministic NC.

Cite as

Vikraman Arvind and Pushkar S. Joglekar. A Multivariate to Bivariate Reduction for Noncommutative Rank and Related Results. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arvind_et_al:LIPIcs.ICALP.2024.14,
  author =	{Arvind, Vikraman and Joglekar, Pushkar S.},
  title =	{{A Multivariate to Bivariate Reduction for Noncommutative Rank and Related Results}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.14},
  URN =		{urn:nbn:de:0030-drops-201571},
  doi =		{10.4230/LIPIcs.ICALP.2024.14},
  annote =	{Keywords: noncommutative rank, rational formulas, identity testing, parallel complexity}
}
Document
Track A: Algorithms, Complexity and Games
List Update with Delays or Time Windows

Authors: Yossi Azar, Shahar Lewkowicz, and Danny Vainstein


Abstract
We address the problem of List Update, which is considered one of the fundamental problems in online algorithms and competitive analysis. In this context, we are presented with a list of elements and receive requests for these elements over time. Our objective is to fulfill these requests, incurring a cost proportional to their position in the list. Additionally, we can swap any two consecutive elements at a cost of 1. The renowned "Move to Front" algorithm, introduced by Sleator and Tarjan, immediately moves any requested element to the front of the list. They demonstrated that this algorithm achieves a competitive ratio of 2. While this bound is impressive, the actual cost of the algorithm’s solution can be excessively high. For example, if we request the last half of the list, the resulting solution cost becomes quadratic in the list’s length. To address this issue, we consider a more generalized problem called List Update with Time Windows. In this variant, each request arrives with a specific deadline by which it must be served, rather than being served immediately. Moreover, we allow the algorithm to process multiple requests simultaneously, accessing the corresponding elements in a single pass. The cost incurred in this case is determined by the position of the furthest element accessed, leading to a significant reduction in the total solution cost. We introduce this problem to explore lower solution costs, but it necessitates the development of new algorithms. For instance, Move-to-Front fails when handling the simple scenario of requesting the last half of the list with overlapping time windows. In our work, we present a natural O(1) competitive algorithm for this problem. While the algorithm itself is intuitive, its analysis is intricate, requiring the use of a novel potential function. Additionally, we delve into a more general problem called List Update with Delays, where the fixed deadlines are replaced with arbitrary delay functions. In this case, the cost includes not only the access and swapping costs, but also penalties for the delays incurred until the requests are served. This problem encompasses a special case known as the prize collecting version, where a request may go unserved up to a given deadline, resulting in a specified penalty. For this more comprehensive problem, we establish an O(1) competitive algorithm. However, the algorithm for the delay version is more complex, and its analysis involves significantly more intricate considerations.
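
For illustration only (not from the paper): a Python sketch of the classical Move-to-Front cost accounting for the standard (deadline-free) List Update problem, paying the position of each requested element plus one unit per adjacent swap used to bring it to the front.

def move_to_front_cost(initial_list, requests):
    lst = list(initial_list)
    total = 0
    for r in requests:
        i = lst.index(r)           # 0-based position of the request
        total += (i + 1) + i       # access cost (1-based position) + i swaps
        lst.insert(0, lst.pop(i))  # move the requested element to the front
    return total

# Requesting the last half of a list of length n this way costs Theta(n^2),
# the blow-up that the time-window variant discussed above is meant to avoid.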

Cite as

Yossi Azar, Shahar Lewkowicz, and Danny Vainstein. List Update with Delays or Time Windows. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 15:1-15:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{azar_et_al:LIPIcs.ICALP.2024.15,
  author =	{Azar, Yossi and Lewkowicz, Shahar and Vainstein, Danny},
  title =	{{List Update with Delays or Time Windows}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{15:1--15:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.15},
  URN =		{urn:nbn:de:0030-drops-201583},
  doi =		{10.4230/LIPIcs.ICALP.2024.15},
  annote =	{Keywords: Online, List Update, Delay, Time Window, Deadline}
}
Document
Track A: Algorithms, Complexity and Games
NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials

Authors: Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha


Abstract
An s-sparse polynomial has at most s monomials with nonzero coefficients. The Equivalence Testing problem for sparse polynomials (ETsparse) asks to decide if a given polynomial f is equivalent to (i.e., in the orbit of) some s-sparse polynomial. In other words, given f ∈ 𝔽[𝐱] and s ∈ ℕ, ETsparse asks to check if there exist A ∈ GL(|𝐱|, 𝔽) and 𝐛 ∈ 𝔽^|𝐱| such that f(A𝐱 + 𝐛) is s-sparse. We show that ETsparse is NP-hard over any field 𝔽, if f is given in the sparse representation, i.e., as a list of nonzero coefficients and exponent vectors. This answers a question posed by Gupta, Saha and Thankey (SODA 2023) and also, more explicitly, by Baraskar, Dewan and Saha (STACS 2024). The result implies that the Minimum Circuit Size Problem (MCSP) is NP-hard for a dense subclass of depth-3 arithmetic circuits if the input is given in sparse representation. We also show that approximating the smallest s₀ such that a given s-sparse polynomial f is in the orbit of some s₀-sparse polynomial to within a factor of s^{1/3 - ε} is NP-hard for any ε > 0; observe that s-factor approximation is trivial as the input is s-sparse. Finally, we show that for any constant σ ≥ 6, checking if a polynomial (given in sparse representation) is in the orbit of some support-σ polynomial is NP-hard. Support of a polynomial f is the maximum number of variables present in any monomial of f. These results are obtained via direct reductions from the 3-SAT problem.
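
Illustration only (not from the paper): a SymPy sketch that checks whether one concrete affine transform makes f s-sparse. ETsparse asks whether any invertible A and shift b work, which is the NP-hard question; here A (a nested list) and b (a list) are hypothetical inputs.

import sympy as sp

def is_s_sparse_after(f, xs, A, b, s):
    # Substitutes x_i -> sum_j A[i][j] * x_j + b[i] into f, expands, and
    # counts the monomials of the resulting polynomial.
    subs = {x: sum(A[i][j] * xs[j] for j in range(len(xs))) + b[i]
            for i, x in enumerate(xs)}
    g = sp.expand(f.subs(subs, simultaneous=True))
    return len(sp.Poly(g, *xs).terms()) <= s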

Cite as

Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha. NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 16:1-16:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{baraskar_et_al:LIPIcs.ICALP.2024.16,
  author =	{Baraskar, Omkar and Dewan, Agrim and Saha, Chandan and Sinha, Pulkit},
  title =	{{NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{16:1--16:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.16},
  URN =		{urn:nbn:de:0030-drops-201598},
  doi =		{10.4230/LIPIcs.ICALP.2024.16},
  annote =	{Keywords: Equivalence testing, MCSP, sparse polynomials, 3SAT}
}
Document
Track A: Algorithms, Complexity and Games
Vital Edges for (s,t)-Mincut: Efficient Algorithms, Compact Structures, & Optimal Sensitivity Oracles

Authors: Surender Baswana and Koustav Bhanja


Abstract
Let G be a directed weighted graph on n vertices and m edges with designated source and sink vertices s and t. An edge in G is vital if its removal reduces the capacity of (s,t)-mincut. Since the seminal work of Ford and Fulkerson [CJM 1956], a long line of work has been done on computing the most vital edge and all vital edges of G. However, even after 60 years, the existing results are for either undirected or unweighted graphs. We present the following results for directed weighted graphs; the first of them also solves an open problem by Ausiello, Franciosa, Lari, and Ribichini [NETWORKS 2019]. 1. Algorithmic Results: There is an algorithm that computes all vital edges as well as the most vital edge of G using {O}(n) maximum (s,t)-flow computations. Vital edges play a crucial role in the design of a sensitivity oracle for (s,t)-mincut - a compact data structure for reporting (s,t)-mincut after insertion/failure of any edge. For directed graphs, the only existing sensitivity oracle is for unweighted graphs by Picard and Queyranne [MPS 1982]. We present the first and optimal sensitivity oracle for directed weighted graphs as follows. 2. Sensitivity Oracles: a) There is an optimal O(n²) space data structure that can report an (s,t)-mincut C in O(|C|) time after the failure/insertion of any edge. b) There is an O(n) space data structure that can report the capacity of (s,t)-mincut after failure or insertion of any edge e in O(1) time if the capacity of edge e is known. A mincut for a vital edge e is an (s,t)-cut of the least capacity in which edge e is outgoing. For unweighted graphs, in a classical work, Picard and Queyranne [MPS 1982] designed an O(m) space directed acyclic graph (DAG) that stores and characterizes all mincuts for all vital edges. Conversely, there is a set containing at most n-1 (s,t)-cuts such that at least one mincut for every vital edge belongs to the set. We generalize these results for directed weighted graphs as follows. 3. Structural & Combinatorial Results: a) There is a set M containing at most n-1 (s,t)-cuts such that at least one mincut for every vital edge belongs to the set. This bound is tight as well. We also show that set M can be computed using O(n) maximum (s,t)-flow computations. b) We design two compact structures for storing and characterizing all mincuts for all vital edges - (i) an O(m) space DAG for partial and (ii) an O(mn) space structure for complete characterization. To arrive at our results, we develop new techniques, especially a generalization of the maxflow-mincut theorem of Ford and Fulkerson [CJM 1956], which might be of independent interest.
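
As a naive reference only (one max-flow computation per edge, rather than the O(n) max-flow computations described above): a Python/NetworkX sketch that identifies the vital edges of a capacitated digraph by testing whether deleting each edge lowers the (s,t)-mincut capacity.

import networkx as nx

def vital_edges(G, s, t):
    # G: nx.DiGraph with a 'capacity' attribute on every edge.
    base = nx.maximum_flow_value(G, s, t)
    vital = []
    for u, v in list(G.edges()):
        cap = G[u][v]["capacity"]
        G.remove_edge(u, v)
        if nx.maximum_flow_value(G, s, t) < base:
            vital.append((u, v))
        G.add_edge(u, v, capacity=cap)
    return vital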

Cite as

Surender Baswana and Koustav Bhanja. Vital Edges for (s,t)-Mincut: Efficient Algorithms, Compact Structures, & Optimal Sensitivity Oracles. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{baswana_et_al:LIPIcs.ICALP.2024.17,
  author =	{Baswana, Surender and Bhanja, Koustav},
  title =	{{Vital Edges for (s,t)-Mincut: Efficient Algorithms, Compact Structures, \& Optimal Sensitivity Oracles}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.17},
  URN =		{urn:nbn:de:0030-drops-201601},
  doi =		{10.4230/LIPIcs.ICALP.2024.17},
  annote =	{Keywords: maxflow, vital edges, graph algorithms, structures, st-cuts, sensitivity oracle}
}
Document
Track A: Algorithms, Complexity and Games
It’s Hard to HAC Average Linkage!

Authors: MohammadHossein Bateni, Laxman Dhulipala, Kishen N. Gowda, D. Ellis Hershkowitz, Rajesh Jayaram, and Jakub Łącki


Abstract
Average linkage Hierarchical Agglomerative Clustering (HAC) is an extensively studied and applied method for hierarchical clustering. Recent applications to massive datasets have driven significant interest in near-linear-time and efficient parallel algorithms for average linkage HAC. We provide hardness results that rule out such algorithms. On the sequential side, we establish a runtime lower bound of n^{3/2-ε} on n node graphs for sequential combinatorial algorithms under standard fine-grained complexity assumptions. This essentially matches the best-known running time for average linkage HAC. On the parallel side, we prove that average linkage HAC likely cannot be parallelized even on simple graphs by showing that it is CC-hard on trees of diameter 4. On the possibility side, we demonstrate that average linkage HAC can be efficiently parallelized (i.e., it is in NC) on paths and can be solved in near-linear time when the height of the output cluster hierarchy is small.
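
Definition-level illustration only (a naive baseline, not the near-linear-time or parallel algorithms under discussion): a Python sketch of average-linkage HAC on an edge-weighted graph that repeatedly merges the pair of clusters with the largest average inter-cluster edge weight, treating absent edges as weight 0 and merging until one cluster remains.

def average_linkage_hac(n, weight):
    # weight: dict mapping frozenset({u, v}) -> edge weight.
    # Returns the sequence of merges (the cluster hierarchy).
    clusters = [frozenset([v]) for v in range(n)]
    merges = []
    def avg(A, B):
        total = sum(weight.get(frozenset({a, b}), 0) for a in A for b in B)
        return total / (len(A) * len(B))
    while len(clusters) > 1:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: avg(clusters[ij[0]], clusters[ij[1]]))
        A, B = clusters[i], clusters[j]
        clusters.pop(j)
        clusters.pop(i)
        clusters.append(A | B)
        merges.append((A, B))
    return merges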

Cite as

MohammadHossein Bateni, Laxman Dhulipala, Kishen N. Gowda, D. Ellis Hershkowitz, Rajesh Jayaram, and Jakub Łącki. It’s Hard to HAC Average Linkage!. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bateni_et_al:LIPIcs.ICALP.2024.18,
  author =	{Bateni, MohammadHossein and Dhulipala, Laxman and Gowda, Kishen N. and Hershkowitz, D. Ellis and Jayaram, Rajesh and {\L}\k{a}cki, Jakub},
  title =	{{It’s Hard to HAC Average Linkage!}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.18},
  URN =		{urn:nbn:de:0030-drops-201613},
  doi =		{10.4230/LIPIcs.ICALP.2024.18},
  annote =	{Keywords: Clustering, Hierarchical Graph Clustering, HAC, Fine-Grained Complexity, Parallel Algorithms, CC}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear Algorithms for TSP via Path Covers

Authors: Soheil Behnezhad, Mohammad Roghani, Aviad Rubinstein, and Amin Saberi


Abstract
We study sublinear time algorithms for the traveling salesman problem (TSP). First, we focus on the closely related maximum path cover problem, which asks for a collection of vertex-disjoint paths that include the maximum number of edges. We show that for any fixed ε > 0, there is an algorithm that (1/2 - ε)-approximates the maximum path cover size of an n-vertex graph in Õ(n) time. This improves upon a (3/8-ε)-approximate Õ(n√n)-time algorithm of Chen, Kannan, and Khanna [ICALP'20]. Equipped with our path cover algorithm, we give an Õ(n) time algorithm that estimates the cost of (1,2)-TSP within a factor of (1.5+ε), which improves over a folklore (1.75+ε)-approximate Õ(n)-time algorithm as well as a (1.625+ε)-approximate Õ(n√n)-time algorithm of [CHK ICALP'20]. For graphic TSP, we present an Õ(n) time algorithm that estimates the cost of graphic TSP within a factor of 1.83, improving over a 1.92-approximate Õ(n) time algorithm due to [CHK ICALP'20, Behnezhad FOCS'21]. We show that the approximation can be further improved to 1.66 using n^{2-Ω(1)} time. All of our Õ(n) time algorithms are information-theoretically time-optimal up to polylog n factors. Additionally, we show that our approximation guarantees for path cover and (1,2)-TSP hit a natural barrier: better approximations require better sublinear time algorithms for the well-studied maximum matching problem.
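
To make the central object concrete: a path cover of a graph is a set of vertex-disjoint paths, and its size is the number of edges it uses. The routine below (Python; our own simple greedy, with no approximation guarantee claimed and no relation to the paper's sublinear-time estimator) builds one such cover by scanning the edges and keeping every vertex at degree at most two while never closing a cycle.

def greedy_path_cover(n, edges):
    # Maintain a union of vertex-disjoint paths: accept an edge only if both
    # endpoints currently have degree <= 1 in the cover and the edge does not
    # connect two vertices that already lie on the same path (no cycles).
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deg = [0] * n
    cover = []
    for u, v in edges:
        if deg[u] < 2 and deg[v] < 2:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                deg[u] += 1
                deg[v] += 1
                cover.append((u, v))
    return cover

# Toy example: on a 5-cycle the greedy keeps 4 edges, i.e. a Hamiltonian path.
print(greedy_path_cover(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))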

Cite as

Soheil Behnezhad, Mohammad Roghani, Aviad Rubinstein, and Amin Saberi. Sublinear Algorithms for TSP via Path Covers. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{behnezhad_et_al:LIPIcs.ICALP.2024.19,
  author =	{Behnezhad, Soheil and Roghani, Mohammad and Rubinstein, Aviad and Saberi, Amin},
  title =	{{Sublinear Algorithms for TSP via Path Covers}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.19},
  URN =		{urn:nbn:de:0030-drops-201623},
  doi =		{10.4230/LIPIcs.ICALP.2024.19},
  annote =	{Keywords: Sublinear Algorithms, Traveling Salesman Problem, Approximation Algorithm, (1, 2)-TSP, Graphic TSP}
}
Document
Track A: Algorithms, Complexity and Games
Better Space-Time-Robustness Trade-Offs for Set Reconciliation

Authors: Djamal Belazzougui, Gregory Kucherov, and Stefan Walzer


Abstract
We consider the problem of reconstructing the symmetric difference between similar sets from their representations (sketches) of size linear in the number of differences. Exact solutions to this problem are based on error-correcting coding techniques and suffer from a large decoding time. Existing probabilistic solutions based on Invertible Bloom Lookup Tables (IBLTs) are time-efficient but offer insufficient success guarantees for many applications. Here we propose a tunable trade-off between the two approaches, combining the efficiency of IBLTs with an exponentially decreasing failure probability. The proof relies on a refined analysis of IBLTs proposed in (Bæk Tejs Houen et al., SOSA 2023), which is of independent interest. We also propose a modification of our algorithm that makes it possible to tell, for each element of the symmetric difference, which of the two sets it belongs to.
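
To make the underlying data structure concrete, here is a bare-bones IBLT sketch (Python; the cell layout, hash choice, and parameters are our own simplifications, and production IBLTs additionally keep a per-cell key checksum). Each key is hashed into a few cells, each cell stores a counter and an XOR of keys, the two sketches are subtracted cellwise, and the symmetric difference is recovered by peeling cells of count ±1. The peeling can get stuck, which is precisely the failure mode whose probability the paper's trade-off drives down exponentially; this sketch is not the paper's construction.

import hashlib

def cell_indices(x, m, k=3):
    # k pseudo-random cell indices for the (nonzero integer) key x.
    out = []
    for seed in range(k):
        h = hashlib.blake2b(f"{seed}:{x}".encode(), digest_size=8).digest()
        out.append(int.from_bytes(h, "big") % m)
    return out

def iblt(elements, m):
    # Each of the m cells stores [count, xor-of-keys].
    cells = [[0, 0] for _ in range(m)]
    for x in elements:
        for i in cell_indices(x, m):
            cells[i][0] += 1
            cells[i][1] ^= x
    return cells

def decode_difference(cells_a, cells_b):
    # Subtract the sketches cellwise, then repeatedly "peel" cells whose count
    # is +1 or -1: such a cell (usually) holds exactly one key that occurs on
    # only one side of the symmetric difference.
    m = len(cells_a)
    diff = [[a[0] - b[0], a[1] ^ b[1]] for a, b in zip(cells_a, cells_b)]
    only_a, only_b = set(), set()
    progress = True
    while progress:
        progress = False
        for cell in diff:
            if cell[0] in (1, -1) and cell[1] != 0:
                x, side = cell[1], cell[0]
                (only_a if side == 1 else only_b).add(x)
                for i in cell_indices(x, m):
                    diff[i][0] -= side
                    diff[i][1] ^= x
                progress = True
    ok = all(c == [0, 0] for c in diff)   # False means decoding failed
    return only_a, only_b, ok

A = {1, 2, 3, 4, 5, 6, 7, 8}
B = {1, 2, 3, 4, 5, 6, 9}
m = 40   # number of cells; in practice Theta(|A Δ B|), over-provisioned here
print(decode_difference(iblt(A, m), iblt(B, m)))   # ({7, 8}, {9}, True) on success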

Cite as

Djamal Belazzougui, Gregory Kucherov, and Stefan Walzer. Better Space-Time-Robustness Trade-Offs for Set Reconciliation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 20:1-20:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{belazzougui_et_al:LIPIcs.ICALP.2024.20,
  author =	{Belazzougui, Djamal and Kucherov, Gregory and Walzer, Stefan},
  title =	{{Better Space-Time-Robustness Trade-Offs for Set Reconciliation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{20:1--20:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.20},
  URN =		{urn:nbn:de:0030-drops-201639},
  doi =		{10.4230/LIPIcs.ICALP.2024.20},
  annote =	{Keywords: data structures, hashing, set reconciliation, invertible Bloom lookup tables, random hypergraphs, BCH codes}
}
Document
Track A: Algorithms, Complexity and Games
Oracle Separation of QMA and QCMA with Bounded Adaptivity

Authors: Shalev Ben-David and Srijita Kundu


Abstract
We give an oracle separation between QMA and QCMA for quantum algorithms that have bounded adaptivity in their oracle queries; that is, the number of rounds of oracle calls is small, though each round may involve polynomially many queries in parallel. Our oracle construction is a simplified version of the construction used recently by Li, Liu, Pelecanos, and Yamakawa (2023), who showed an oracle separation between QMA and QCMA when the quantum algorithms are only allowed to access the oracle classically. To prove our results, we introduce a property of relations called slipperiness, which may be useful for getting a fully general classical oracle separation between QMA and QCMA.

Cite as

Shalev Ben-David and Srijita Kundu. Oracle Separation of QMA and QCMA with Bounded Adaptivity. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bendavid_et_al:LIPIcs.ICALP.2024.21,
  author =	{Ben-David, Shalev and Kundu, Srijita},
  title =	{{Oracle Separation of QMA and QCMA with Bounded Adaptivity}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.21},
  URN =		{urn:nbn:de:0030-drops-201642},
  doi =		{10.4230/LIPIcs.ICALP.2024.21},
  annote =	{Keywords: Quantum computing, computational complexity}
}
Document
Track A: Algorithms, Complexity and Games
Two-Sets Cut-Uncut on Planar Graphs

Authors: Matthias Bentert, Pål Grønås Drange, Fedor V. Fomin, Petr A. Golovach, and Tuukka Korhonen


Abstract
We study Two-Sets Cut-Uncut on planar graphs. Therein, one is given an undirected planar graph G and two disjoint sets S and T of vertices as input. The task is to remove a minimum number of edges from G so that all vertices in S are separated from all vertices in T, while every vertex of S stays in one common connected component and every vertex of T stays in one common connected component. We show that this problem can be solved in 2^{|S|+|T|} n^𝒪(1) time with a one-sided-error randomized algorithm. Our algorithm implies a polynomial-time algorithm for the network diversion problem on planar graphs, which resolves an open question from the literature. More generally, we show that Two-Sets Cut-Uncut is fixed-parameter tractable when parameterized by the number r of faces in a planar embedding covering the terminals S ∪ T, by providing a 2^𝒪(r) n^𝒪(1)-time algorithm.

Cite as

Matthias Bentert, Pål Grønås Drange, Fedor V. Fomin, Petr A. Golovach, and Tuukka Korhonen. Two-Sets Cut-Uncut on Planar Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bentert_et_al:LIPIcs.ICALP.2024.22,
  author =	{Bentert, Matthias and Drange, P\r{a}l Gr{\o}n\r{a}s and Fomin, Fedor V. and Golovach, Petr A. and Korhonen, Tuukka},
  title =	{{Two-Sets Cut-Uncut on Planar Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.22},
  URN =		{urn:nbn:de:0030-drops-201654},
  doi =		{10.4230/LIPIcs.ICALP.2024.22},
  annote =	{Keywords: planar graphs, cut-uncut, group-constrained paths}
}
Document
Track A: Algorithms, Complexity and Games
Splitting-Off in Hypergraphs

Authors: Kristóf Bérczi, Karthekeyan Chandrasekaran, Tamás Király, and Shubhang Kulkarni


Abstract
The splitting-off operation in undirected graphs is a fundamental reduction operation that detaches all edges incident to a given vertex and adds new edges between the neighbors of that vertex while preserving their degrees. Lovász [Lovász, 1974; Lovász, 1993] and Mader [Mader, 1978] showed the existence of this operation while preserving global and local connectivities, respectively, in graphs under certain conditions. These results have far-reaching applications in the graph algorithms literature [Lovász, 1976; Mader, 1978; Frank, 1993; Frank and Király, 2002; Király and Lau, 2008; Frank, 1992; Goemans and Bertsimas, 1993; Frank, 1994; Bang-Jensen et al., 1995; Frank, 2011; Nagamochi and Ibaraki, 2008; Nagamochi et al., 1997; Henzinger and Williamson, 1996; Goemans, 2001; Jordán, 2003; Kriesell, 2003; Jain et al., 2003; Chan et al., 2011; Bhalgat et al., 2008; Lau, 2007; Chekuri and Shepherd, 2008; Nägele and Zenklusen, 2020; Blauth and Nägele, 2023]. In this work, we introduce a splitting-off operation in hypergraphs. We show that there exists a local-connectivity-preserving complete splitting-off in hypergraphs and give a strongly polynomial-time algorithm to compute it in weighted hypergraphs. We illustrate the usefulness of our splitting-off operation in hypergraphs by showing two applications: (1) we give a constructive characterization of k-hyperedge-connected hypergraphs and (2) we give an alternate proof of an approximate min-max relation for max Steiner rooted-connected orientation of graphs and hypergraphs (due to Király and Lau [Király and Lau, 2008]). Our proof of the approximate min-max relation for graphs circumvents Nash-Williams' strong orientation theorem and uses tools developed for hypergraphs.
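
For readers unfamiliar with the operation, in the graph setting splitting off a pair of edges (su, sv) at s means deleting su and sv and adding a new edge uv; a complete splitting-off at s repeats this until s is isolated. Writing λ_G(x,y) for the maximum number of edge-disjoint x-y paths, the local-connectivity-preservation requirement (Mader's setting, generalized to hypergraphs in this paper) asks that the graph G' obtained by the splitting satisfy

\[
\lambda_{G'}(x,y) \;\ge\; \lambda_{G}(x,y) \qquad \text{for all } x,y \in V(G) \setminus \{s\}.
\]

(This is a paraphrase of the standard graph-setting definition, included here only for orientation.)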

Cite as

Kristóf Bérczi, Karthekeyan Chandrasekaran, Tamás Király, and Shubhang Kulkarni. Splitting-Off in Hypergraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{berczi_et_al:LIPIcs.ICALP.2024.23,
  author =	{B\'{e}rczi, Krist\'{o}f and Chandrasekaran, Karthekeyan and Kir\'{a}ly, Tam\'{a}s and Kulkarni, Shubhang},
  title =	{{Splitting-Off in Hypergraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.23},
  URN =		{urn:nbn:de:0030-drops-201660},
  doi =		{10.4230/LIPIcs.ICALP.2024.23},
  annote =	{Keywords: Hypergraphs, Hypergraph Connectivity, Splitting-off, Constructive Characterizations, Hypergraph Orientations, Submodular Functions, Combinatorial Optimization}
}
Document
Track A: Algorithms, Complexity and Games
Exponential Lower Bounds via Exponential Sums

Authors: Somnath Bhattacharjee, Markus Bläser, Pranjal Dutta, and Saswata Mukherjee


Abstract
Valiant’s famous VP vs. VNP conjecture states that the symbolic permanent polynomial does not have polynomial-size algebraic circuits. However, the best upper bound on the size of the circuits computing the permanent is exponential. Informally, VNP is an exponential sum of VP-circuits. In this paper we study whether, in general, exponential sums (of algebraic circuits) require exponential-size algebraic circuits. We show that the famous Shub-Smale τ-conjecture indeed implies such an exponential lower bound for an exponential sum. Our main tools come from parameterized complexity. Along the way, we also prove an exponential fpt (fixed-parameter tractable) lower bound for the parameterized algebraic complexity class VW⁰_{nb}[𝖯], assuming the same conjecture. VW⁰_{nb}[𝖯] can be thought of as the class of weighted sums of (unbounded-degree) circuits, where only ±1 constants are cost-free. To the best of our knowledge, this is the first time the Shub-Smale τ-conjecture has been applied to prove explicit exponential lower bounds. Furthermore, we prove that when this class is fpt, then a variant of the counting hierarchy, namely the linear counting hierarchy, collapses. Moreover, if a certain type of parameterized exponential sum is fpt, then integers, as well as polynomials whose coefficients are definable in the linear counting hierarchy, have subpolynomial τ-complexity. Finally, we characterize a related class VW[𝖥], in terms of permanents, where we consider an exponential sum of algebraic formulas instead of circuits. We show that when we sum over cycle covers that have one long cycle and all other cycles have constant length, then the resulting family of polynomials is complete for VW[𝖥] on certain types of graphs.

Cite as

Somnath Bhattacharjee, Markus Bläser, Pranjal Dutta, and Saswata Mukherjee. Exponential Lower Bounds via Exponential Sums. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bhattacharjee_et_al:LIPIcs.ICALP.2024.24,
  author =	{Bhattacharjee, Somnath and Bl\"{a}ser, Markus and Dutta, Pranjal and Mukherjee, Saswata},
  title =	{{Exponential Lower Bounds via Exponential Sums}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.24},
  URN =		{urn:nbn:de:0030-drops-201677},
  doi =		{10.4230/LIPIcs.ICALP.2024.24},
  annote =	{Keywords: Algebraic complexity, parameterized complexity, exponential sums, counting hierarchy, tau conjecture}
}
Document
Track A: Algorithms, Complexity and Games
Random Separating Hyperplane Theorem and Learning Polytopes

Authors: Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar


Abstract
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. The theorem asserts that for a point a not in a closed convex set K, there is a hyperplane with K on one side and a strictly on the other side. Our first result, the Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between a and a polytope K with k vertices and unit diameter in ℜ^d is at least δ, where δ is a fixed constant in (0,1), then a randomly chosen hyperplane separates a and K with probability at least 1/poly(k) and margin at least Ω(δ/√d). RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the "Hausdorff problem", of learning a unit-diameter polytope K within Hausdorff distance δ, given an optimization oracle for K. Using RSH, we show that with polynomially many random queries to the optimization oracle, K can be approximated within error O(δ). To our knowledge, this is the first provable algorithm for the Hausdorff problem in this setting. Building on this result, we show that if the vertices of K are well-separated, then an optimization oracle can be used to generate a list of points, each within distance O(δ) of K, with the property that the list contains a point close to each vertex of K. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling, LDA, optimization oracles do exist, provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption. This assumption states that each vertex of K is far from the convex hull of the remaining vertices of K, and is much weaker than other assumptions behind algorithms in the literature that find vertices of the latent polytope.
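
A quick numerical illustration of the RSH statement (NumPy; the instance construction, constants, and sampling scheme are our own choices, not the ones analyzed in the paper): sample a uniformly random unit direction g and check whether the hyperplane with normal g supporting K leaves a on the other side, recording the achieved margin.

import numpy as np

rng = np.random.default_rng(0)
d, k, delta = 10, 6, 0.3

# Hypothetical instance: k polytope vertices V rescaled to roughly unit
# diameter, and a point a whose distance to conv(V) is at least delta
# (it exceeds every vertex in coordinate 0 by delta).
V = rng.normal(size=(k, d))
V /= np.max(np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2))
a = V.mean(axis=0)
a[0] = V[:, 0].max() + delta

def margin(g, a, V):
    # Gap between a's projection and the polytope's maximum projection along
    # the unit direction g; positive means the supporting hyperplane with
    # normal g separates a from conv(V), with this value as the margin.
    return g @ a - np.max(V @ g)

margins = []
for _ in range(20000):
    g = rng.normal(size=d)
    g /= np.linalg.norm(g)
    margins.append(margin(g, a, V))
margins = np.array(margins)
pos = margins[margins > 0]
print("fraction of random directions that separate:", len(pos) / len(margins))
if len(pos) > 0:
    print("median positive margin:", np.median(pos), "  delta/sqrt(d):", delta / np.sqrt(d))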

Cite as

Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar. Random Separating Hyperplane Theorem and Learning Polytopes. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bhattacharyya_et_al:LIPIcs.ICALP.2024.25,
  author =	{Bhattacharyya, Chiranjib and Kannan, Ravindran and Kumar, Amit},
  title =	{{Random Separating Hyperplane Theorem and Learning Polytopes}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.25},
  URN =		{urn:nbn:de:0030-drops-201687},
  doi =		{10.4230/LIPIcs.ICALP.2024.25},
  annote =	{Keywords: Separating Hyperplane Theorem, Learning Polytopes, Optimization Oracles}
}
Document
Track A: Algorithms, Complexity and Games
Another Hamiltonian Cycle in Bipartite Pfaffian Graphs

Authors: Andreas Björklund, Petteri Kaski, and Jesper Nederlof


Abstract
Finding a Hamiltonian cycle in a given graph is computationally challenging, and in general remains so even when one is further given one Hamiltonian cycle in the graph and asked to find another. In fact, no significantly faster algorithms are known for finding another Hamiltonian cycle than for finding a first one, even in the setting where another Hamiltonian cycle is structurally guaranteed to exist, such as for odd-degree graphs. We identify a graph class - the bipartite Pfaffian graphs of minimum degree three - where it is NP-complete to decide whether a given graph in the class is Hamiltonian, but when presented with a Hamiltonian cycle as part of the input, another Hamiltonian cycle can be found efficiently. We prove that Thomason’s lollipop method [Ann. Discrete Math., 1978], a well-known algorithm for finding another Hamiltonian cycle, runs in a linear number of steps in cubic bipartite Pfaffian graphs. This was conjectured for cubic bipartite planar graphs by Haddadan [MSc thesis, Waterloo, 2015]; in contrast, examples are known of both cubic bipartite graphs and cubic planar graphs where the lollipop method takes exponential time. Beyond the reach of the lollipop method, we address a slightly more general graph class and present two algorithms, one running in linear time and one operating in logarithmic space, that take as input (i) a bipartite Pfaffian graph G of minimum degree three, (ii) a Hamiltonian cycle H in G, and (iii) an edge e in H, and output at least three other Hamiltonian cycles through the edge e in G. We also present further improved algorithms for finding optimal traveling salesperson tours and counting Hamiltonian cycles in bipartite planar graphs, with running times not yet achieved for general planar graphs. Our technique also has purely graph-theoretical consequences; for example, we show that every cubic bipartite Pfaffian graph has either zero or at least six distinct Hamiltonian cycles; the latter case is tight for the cube graph.

Cite as

Andreas Björklund, Petteri Kaski, and Jesper Nederlof. Another Hamiltonian Cycle in Bipartite Pfaffian Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 26:1-26:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bjorklund_et_al:LIPIcs.ICALP.2024.26,
  author =	{Bj\"{o}rklund, Andreas and Kaski, Petteri and Nederlof, Jesper},
  title =	{{Another Hamiltonian Cycle in Bipartite Pfaffian Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{26:1--26:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.26},
  URN =		{urn:nbn:de:0030-drops-201692},
  doi =		{10.4230/LIPIcs.ICALP.2024.26},
  annote =	{Keywords: Another Hamiltonian cycle, Pfaffian graph, planar graph, Thomason’s lollipop method}
}
Document
Track A: Algorithms, Complexity and Games
The Discrepancy of Shortest Paths

Authors: Greg Bodwin, Chengyuan Deng, Jie Gao, Gary Hoppenworth, Jalaj Upadhyay, and Chen Wang


Abstract
The hereditary discrepancy of a set system is a quantitative measure of the pseudorandom properties of the system. Roughly speaking, hereditary discrepancy measures how well one can 2-color the elements of the system so that each set contains approximately the same number of elements of each color. Hereditary discrepancy has numerous applications in computational geometry, communication complexity and derandomization. More recently, the hereditary discrepancy of the set system of shortest paths has found applications in differential privacy [Chen et al. SODA 23]. The contribution of this paper is to improve the upper and lower bounds on the hereditary discrepancy of set systems of unique shortest paths in graphs. In particular, we show that any system of unique shortest paths in an undirected weighted graph has hereditary discrepancy O(n^{1/4}), and we construct lower bound examples demonstrating that this bound is tight up to polylog n factors. Our lower bounds hold even for planar graphs and bipartite graphs, and improve a previous lower bound of Ω(n^{1/6}) obtained by applying the trace bound of Chazelle and Lvov [SoCG'00] to a classical point-line system of Erdős. As applications, we improve the lower bound on the additive error for differentially-private all pairs shortest distances from Ω(n^{1/6}) [Chen et al. SODA 23] to Ω̃(n^{1/4}), and we improve the lower bound on additive error for the differentially-private all sets range queries problem to Ω̃(n^{1/4}), which is tight up to polylog n factors [Deng et al. WADS 23].
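
For reference, the quantities bounded above have the following standard (textbook) definitions; for the shortest-path system, the ground set is the vertex (or edge) set and there is one set per vertex pair, namely its unique shortest path.

\[
\operatorname{disc}(\mathcal{S}) = \min_{\chi\colon [n] \to \{-1,+1\}} \; \max_{S \in \mathcal{S}} \Bigl|\sum_{i \in S} \chi(i)\Bigr|,
\qquad
\operatorname{herdisc}(\mathcal{S}) = \max_{U \subseteq [n]} \operatorname{disc}\bigl(\{\, S \cap U : S \in \mathcal{S} \,\}\bigr).
\]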

Cite as

Greg Bodwin, Chengyuan Deng, Jie Gao, Gary Hoppenworth, Jalaj Upadhyay, and Chen Wang. The Discrepancy of Shortest Paths. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 27:1-27:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bodwin_et_al:LIPIcs.ICALP.2024.27,
  author =	{Bodwin, Greg and Deng, Chengyuan and Gao, Jie and Hoppenworth, Gary and Upadhyay, Jalaj and Wang, Chen},
  title =	{{The Discrepancy of Shortest Paths}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{27:1--27:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.27},
  URN =		{urn:nbn:de:0030-drops-201705},
  doi =		{10.4230/LIPIcs.ICALP.2024.27},
  annote =	{Keywords: Discrepancy, hereditary discrepancy, shortest paths, differential privacy}
}
Document
Track A: Algorithms, Complexity and Games
Additive Spanner Lower Bounds with Optimal Inner Graph Structure

Authors: Greg Bodwin, Gary Hoppenworth, Virginia Vassilevska Williams, Nicole Wein, and Zixuan Xu


Abstract
We construct n-node graphs on which any O(n)-size spanner has additive error at least +Ω(n^{3/17}), improving on the previous best lower bound of Ω(n^{1/7}) [Bodwin-Hoppenworth FOCS '22]. Our construction completes the first two steps of a particular three-step research program, introduced in prior work and overviewed here, aimed at producing tight bounds for the problem by aligning aspects of the upper and lower bound constructions. More specifically, we develop techniques that enable the use of inner graphs in the lower bound framework whose technical properties are provably tight with the corresponding assumptions made in the upper bounds. As an additional application of our techniques, we improve the corresponding lower bound for O(n)-size additive emulators to +Ω(n^{1/14}).
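
For reference, the two objects whose size/error trade-offs are bounded above are standardly defined as follows. A subgraph H ⊆ G is a +β additive spanner of G if

\[
d_H(u,v) \;\le\; d_G(u,v) + \beta \qquad \text{for all } u,v \in V(G),
\]

and a +β emulator is any weighted graph H on vertex set V(G) (not necessarily a subgraph) satisfying d_G(u,v) ≤ d_H(u,v) ≤ d_G(u,v) + β. In this terminology, the constructions above force β = Ω(n^{3/17}) for O(n)-size spanners and β = Ω(n^{1/14}) for O(n)-size emulators.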

Cite as

Greg Bodwin, Gary Hoppenworth, Virginia Vassilevska Williams, Nicole Wein, and Zixuan Xu. Additive Spanner Lower Bounds with Optimal Inner Graph Structure. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bodwin_et_al:LIPIcs.ICALP.2024.28,
  author =	{Bodwin, Greg and Hoppenworth, Gary and Vassilevska Williams, Virginia and Wein, Nicole and Xu, Zixuan},
  title =	{{Additive Spanner Lower Bounds with Optimal Inner Graph Structure}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.28},
  URN =		{urn:nbn:de:0030-drops-201715},
  doi =		{10.4230/LIPIcs.ICALP.2024.28},
  annote =	{Keywords: Additive Spanners, Graph Theory}
}
Document
Track A: Algorithms, Complexity and Games
A Tight Monte-Carlo Algorithm for Steiner Tree Parameterized by Clique-Width

Authors: Narek Bojikian and Stefan Kratsch


Abstract
Given a graph G = (V,E), a set T ⊆ V, and an integer b, the Steiner Tree problem asks whether G has a connected subgraph H with at most b vertices that spans all of T. This work presents a 3^k⋅ n^𝒪(1) time one-sided Monte-Carlo algorithm for solving Steiner Tree when additionally a clique-expression of width k is provided. Known lower bounds for less expressive parameters imply that this dependence on the clique-width of G is optimal, assuming the Strong Exponential-Time Hypothesis (SETH). Indeed, our work establishes that the parameter dependence of Steiner Tree is the same for any graph parameter between cutwidth and clique-width, assuming SETH. Our work contributes to the program of determining the exact parameterized complexity of fundamental hard problems relative to structural graph parameters such as treewidth, which was initiated by Lokshtanov et al. [SODA 2011 & TALG 2018] and which by now has seen a plethora of results. Since the cut-and-count framework of Cygan et al. [FOCS 2011 & TALG 2022], connectivity problems have played a key role in this program as they pose many challenges for developing tight upper and lower bounds. Recently, Hegerfeld and Kratsch [ESA 2023] gave the first application of the cut-and-count technique to problems parameterized by clique-width and obtained tight bounds for Connected Dominating Set and Connected Vertex Cover, leaving open the complexity of other benchmark connectivity problems such as Steiner Tree and Feedback Vertex Set. Our algorithm for Steiner Tree does not follow the cut-and-count technique and instead works with the connectivity patterns of partial solutions. As a first technical contribution, we identify a special family of so-called complete patterns that has strong (existential) representation properties: working with these patterns, at least one solution is preserved. Furthermore, there is a family of 3^k basis patterns that (parity) represents the complete patterns, i.e., it has the same number of solutions modulo two. Our main technical contribution, a new technique called "isolating a representative," allows us to leverage both forms of representation (existential and parity). Both complete patterns and isolation of a representative will likely be applicable to other (connectivity) problems.

Cite as

Narek Bojikian and Stefan Kratsch. A Tight Monte-Carlo Algorithm for Steiner Tree Parameterized by Clique-Width. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bojikian_et_al:LIPIcs.ICALP.2024.29,
  author =	{Bojikian, Narek and Kratsch, Stefan},
  title =	{{A Tight Monte-Carlo Algorithm for Steiner Tree Parameterized by Clique-Width}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.29},
  URN =		{urn:nbn:de:0030-drops-201728},
  doi =		{10.4230/LIPIcs.ICALP.2024.29},
  annote =	{Keywords: Parameterized complexity, Steiner tree, clique-width}
}
Document
Track A: Algorithms, Complexity and Games
Õptimal Dynamic Time Warping on Run-Length Encoded Strings

Authors: Itai Boneh, Shay Golan, Shay Mozes, and Oren Weimann


Abstract
Dynamic Time Warping (DTW) distance is the optimal cost of matching two strings when extending runs of letters comes for free. Therefore, it is natural to measure the time complexity of DTW in terms of the number of runs n (rather than the string lengths N). In this paper, we give an Õ(n²) time algorithm for computing the DTW distance. This matches (up to log factors) the known (conditional) lower bound, and should be compared with the previous fastest O(n³) time exact algorithm and the Õ(n²) time approximation algorithm. Our method also immediately implies an Õ(nk) time algorithm when the distance is bounded by k. This should be compared with the previous fastest O(n²k) and O(Nk) time exact algorithms and the Õ(nk) time approximation algorithm.
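
For orientation, the classical dynamic program below computes the DTW distance over the uncompressed strings in O(N·M) time (Python; unit mismatch costs and names are our choice). The paper's algorithms instead work directly on the n runs of the run-length encodings, which this reference sketch does not attempt.

def dtw(x, y):
    # Classical DTW table: D[i][j] = cost of the best warping of x[:i] and
    # y[:j], where a letter may be "extended" (matched to several letters of
    # the other string) for free and each aligned pair costs 0 if the letters
    # are equal and 1 otherwise.
    INF = float("inf")
    N, M = len(x), len(y)
    D = [[INF] * (M + 1) for _ in range(N + 1)]
    D[0][0] = 0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[N][M]

print(dtw("aaabbb", "ab"))   # 0: runs can be extended for free
print(dtw("aaabbb", "ac"))   # 3: each of the three b's pays one mismatch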

Cite as

Itai Boneh, Shay Golan, Shay Mozes, and Oren Weimann. Õptimal Dynamic Time Warping on Run-Length Encoded Strings. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{boneh_et_al:LIPIcs.ICALP.2024.30,
  author =	{Boneh, Itai and Golan, Shay and Mozes, Shay and Weimann, Oren},
  title =	{{\~{O}ptimal Dynamic Time Warping on Run-Length Encoded Strings}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.30},
  URN =		{urn:nbn:de:0030-drops-201730},
  doi =		{10.4230/LIPIcs.ICALP.2024.30},
  annote =	{Keywords: Dynamic time warping, Fr\'{e}chet distance, edit distance, run-length encoding}
}
Document
Track A: Algorithms, Complexity and Games
Tight Bounds on Adjacency Labels for Monotone Graph Classes

Authors: Édouard Bonnet, Julien Duron, John Sylvester, Viktor Zamaraev, and Maksim Zhukovskii


Abstract
A class of graphs admits an adjacency labeling scheme of size b(n), if the vertices in each of its n-vertex graphs can be assigned binary strings (called labels) of length b(n) so that the adjacency of two vertices can be determined solely from their labels. We give bounds on the size of adjacency labels for every family of monotone (i.e., subgraph-closed) classes with a "well-behaved" growth function between 2^Ω(n log n) and 2^O(n^{2-δ}) for any δ > 0. Specifically, we show that for any function f: ℕ → ℝ satisfying log n ⩽ f(n) ⩽ n^{1-δ} for any fixed δ > 0, and some sub-multiplicativity condition, there are monotone graph classes with growth 2^O(nf(n)) that do not admit adjacency labels of size at most f(n) log n. On the other hand, any such class does admit adjacency labels of size O(f(n) log n). Surprisingly, this bound is a Θ(log n) factor away from the information-theoretic bound of Ω(f(n)). Our bounds are tight up to constant factors, and the special case when f = log implies that the recently-refuted Implicit Graph Conjecture [Hatami and Hatami, FOCS 2022] also fails within monotone classes. We further show that the Implicit Graph Conjecture holds for all monotone small classes. In other words, any monotone class with growth rate at most n! cⁿ for some constant c > 0 admits adjacency labels of information-theoretic order optimal size. In fact, we show a more general result that is of independent interest: any monotone small class of graphs has bounded degeneracy. We conjecture that the Implicit Graph Conjecture holds for all hereditary small classes.
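
The link between bounded degeneracy and short adjacency labels mentioned in the last paragraph can be made concrete with a folklore scheme (Python sketch; function names and the toy graph are ours): order the vertices by repeatedly removing a minimum-degree vertex, and label each vertex with its identifier plus the identifiers of its at most d neighbours that come later in that order. Adjacency then reduces to two membership tests, and for constant degeneracy d the labels take O(log n) bits.

def degeneracy_order(adj):
    # adj: dict vertex -> set of neighbours. Repeatedly remove a vertex of
    # minimum remaining degree; return the removal order and the degeneracy d.
    adj = {v: set(ns) for v, ns in adj.items()}
    order, d = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order, d

def labels(adj):
    order, d = degeneracy_order(adj)
    pos = {v: i for i, v in enumerate(order)}
    # Label = (own id, ids of the <= d neighbours placed later in the order).
    return {v: (v, sorted(u for u in adj[v] if pos[u] > pos[v])) for v in adj}, d

def adjacent(lab_u, lab_v):
    return lab_u[0] in lab_v[1] or lab_v[0] in lab_u[1]

# Toy example: a 4-cycle (degeneracy 2).
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
L, d = labels(G)
print(d, L)
print(adjacent(L[0], L[1]), adjacent(L[0], L[2]))   # True False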

Cite as

Édouard Bonnet, Julien Duron, John Sylvester, Viktor Zamaraev, and Maksim Zhukovskii. Tight Bounds on Adjacency Labels for Monotone Graph Classes. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 31:1-31:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bonnet_et_al:LIPIcs.ICALP.2024.31,
  author =	{Bonnet, \'{E}douard and Duron, Julien and Sylvester, John and Zamaraev, Viktor and Zhukovskii, Maksim},
  title =	{{Tight Bounds on Adjacency Labels for Monotone Graph Classes}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{31:1--31:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.31},
  URN =		{urn:nbn:de:0030-drops-201741},
  doi =		{10.4230/LIPIcs.ICALP.2024.31},
  annote =	{Keywords: Adjacency labeling, degeneracy, monotone classes, small classes, factorial classes, implicit graph conjecture}
}
Document
Track A: Algorithms, Complexity and Games
Two Choices Are Enough for P-LCPs, USOs, and Colorful Tangents

Authors: Michaela Borzechowski, John Fearnley, Spencer Gordon, Rahul Savani, Patrick Schnider, and Simon Weber


Abstract
We provide polynomial-time reductions between three search problems from three distinct areas: the P-matrix linear complementarity problem (P-LCP), finding the sink of a unique sink orientation (USO), and a variant of the α-Ham Sandwich problem. For all three settings, we show that "two choices are enough", meaning that the general non-binary version of the problem can be reduced in polynomial time to the binary version. This specifically means that generalized P-LCPs are equivalent to P-LCPs, and grid USOs are equivalent to cube USOs. These results are obtained by showing that both the P-LCP and our α-Ham Sandwich variant are equivalent to a new problem we introduce, P-Lin-Bellman. This problem can be seen as a new tool for formulating problems as P-LCPs.

Cite as

Michaela Borzechowski, John Fearnley, Spencer Gordon, Rahul Savani, Patrick Schnider, and Simon Weber. Two Choices Are Enough for P-LCPs, USOs, and Colorful Tangents. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{borzechowski_et_al:LIPIcs.ICALP.2024.32,
  author =	{Borzechowski, Michaela and Fearnley, John and Gordon, Spencer and Savani, Rahul and Schnider, Patrick and Weber, Simon},
  title =	{{Two Choices Are Enough for P-LCPs, USOs, and Colorful Tangents}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.32},
  URN =		{urn:nbn:de:0030-drops-201751},
  doi =		{10.4230/LIPIcs.ICALP.2024.32},
  annote =	{Keywords: P-LCP, Unique Sink Orientation, \alpha-Ham Sandwich, search complexity, TFNP, UEOPL}
}
Document
Track A: Algorithms, Complexity and Games
Kernelization Dichotomies for Hitting Subgraphs Under Structural Parameterizations

Authors: Marin Bougeret, Bart M. P. Jansen, and Ignasi Sau


Abstract
For a fixed graph H, the H-Subgraph Hitting problem consists in deleting the minimum number of vertices from an input graph to obtain a graph without any occurrence of H as a subgraph. This problem can be seen as a generalization of Vertex Cover, which corresponds to the case H = K₂. We initiate a study of H-Subgraph Hitting from the point of view of characterizing structural parameterizations that allow for polynomial kernels, within the recently active framework of taking as the parameter the number of vertex deletions to obtain a graph in a "simple" class 𝒞. Our main contribution is to identify graph parameters that, when H-Subgraph Hitting is parameterized by the vertex-deletion distance to a class 𝒞 where any of these parameters is bounded, and assuming standard complexity assumptions and that H is biconnected, allow us to prove the following sharp dichotomy: the problem admits a polynomial kernel if and only if H is a clique. These new graph parameters are inspired by the notion of 𝒞-elimination distance introduced by Bulian and Dawar [Algorithmica 2016], and generalize it in two directions. Our results also apply to the version of the problem where one wants to hit H as an induced subgraph, and imply, in particular, that the problems of hitting minors and hitting (induced) subgraphs have a substantially different behavior with respect to the existence of polynomial kernels under structural parameterizations.

Cite as

Marin Bougeret, Bart M. P. Jansen, and Ignasi Sau. Kernelization Dichotomies for Hitting Subgraphs Under Structural Parameterizations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bougeret_et_al:LIPIcs.ICALP.2024.33,
  author =	{Bougeret, Marin and Jansen, Bart M. P. and Sau, Ignasi},
  title =	{{Kernelization Dichotomies for Hitting Subgraphs Under Structural Parameterizations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.33},
  URN =		{urn:nbn:de:0030-drops-201766},
  doi =		{10.4230/LIPIcs.ICALP.2024.33},
  annote =	{Keywords: hitting subgraphs, hitting induced subgraphs, parameterized complexity, polynomial kernel, complexity dichotomy, elimination distance}
}
Document
Track A: Algorithms, Complexity and Games
Fundamental Problems on Bounded-Treewidth Graphs: The Real Source of Hardness

Authors: Barış Can Esmer, Jacob Focke, Dániel Marx, and Paweł Rzążewski


Abstract
It is known for many algorithmic problems that if a tree decomposition of width t is given in the input, then the problem can be solved with exponential dependence on t. A line of research initiated by Lokshtanov, Marx, and Saurabh [SODA 2011] produced lower bounds showing that in many cases known algorithms already achieve the best possible exponential dependence on t, assuming the Strong Exponential-Time Hypothesis (SETH). The main message of this paper is showing that the same lower bounds can already be obtained in a much more restricted setting: informally, a graph consisting of a block of t vertices connected to components of constant size already has the same hardness as a general tree decomposition of width t. Formally, a (σ,δ)-hub of a graph is a set Q of vertices such that every component of the graph after removing Q has size at most σ and is adjacent to at most δ vertices of Q. We explore whether the known tight lower bounds parameterized by the width of the given tree decomposition remain valid if we parameterize by the size of the given hub. - For every ε > 0, there are σ,δ > 0 such that Independent Set (equivalently Vertex Cover) cannot be solved in time (2-ε)^p ⋅ n, even if a (σ,δ)-hub of size p is given in the input, assuming the SETH. This matches the earlier tight lower bounds parameterized by the width of the tree decomposition. Similar tight bounds are obtained for Odd Cycle Transversal, Max Cut, q-Coloring, and edge/vertex deletion versions of q-Coloring. - For every ε > 0, there are σ,δ > 0 such that △-Partition cannot be solved in time (2-ε)^p ⋅ n, even if a (σ,δ)-hub of size p is given in the input, assuming the Set Cover Conjecture (SCC). In fact, we prove that this statement is equivalent to the SCC, thus it is unlikely that this could be proved assuming the SETH. - For Dominating Set, we can prove a non-tight lower bound ruling out (2-ε)^p ⋅ n^𝒪(1) algorithms, assuming either the SETH or the SCC, but this does not match the 3^p ⋅ n^{𝒪(1)} upper bound. Thus our results reveal that, for many problems, the research on lower bounds on the dependence on treewidth was never really about tree decompositions, but the real source of hardness comes from a much simpler structure. Additionally, we study whether the same lower bounds can be obtained if σ and δ are fixed universal constants (not depending on ε). We show that lower bounds of this form are possible for Max Cut and the edge-deletion version of q-Coloring, under the Max 3-Sat Hypothesis (M3SH). However, no such lower bounds are possible for Independent Set, Odd Cycle Transversal, and the vertex-deletion version of q-Coloring: better-than-brute-force algorithms are possible for every fixed (σ,δ).

Cite as

Barış Can Esmer, Jacob Focke, Dániel Marx, and Paweł Rzążewski. Fundamental Problems on Bounded-Treewidth Graphs: The Real Source of Hardness. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 34:1-34:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{canesmer_et_al:LIPIcs.ICALP.2024.34,
  author =	{Can Esmer, Bar{\i}\c{s} and Focke, Jacob and Marx, D\'{a}niel and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Fundamental Problems on Bounded-Treewidth Graphs: The Real Source of Hardness}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{34:1--34:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.34},
  URN =		{urn:nbn:de:0030-drops-201772},
  doi =		{10.4230/LIPIcs.ICALP.2024.34},
  annote =	{Keywords: Parameterized Complexity, Tight Bounds, Hub, Treewidth, Strong Exponential Time Hypothesis, Vertex Coloring, Vertex Deletion, Edge Deletion, Triangle Packing, Triangle Partition, Set Cover Hypothesis, Dominating Set}
}
Document
Track A: Algorithms, Complexity and Games
A Spectral Approach to Approximately Counting Independent Sets in Dense Bipartite Graphs

Authors: Charlie Carlson, Ewan Davies, Alexandra Kolla, and Aditya Potukuchi


Abstract
We give a randomized algorithm that approximates the number of independent sets in a dense, regular bipartite graph - in the language of approximate counting, we give an FPRAS for #BIS on the class of dense, regular bipartite graphs. Efficient counting algorithms typically apply to "high-temperature" problems on bounded-degree graphs, and our contribution is a notable exception as it applies to dense graphs in a low-temperature setting. Our methods give a counting-focused complement to the long line of work in combinatorial optimization showing that CSPs such as Max-Cut and Unique Games are easy on dense graphs via spectral arguments. Our contributions include a novel extension of the method of graph containers that differs considerably from other recent low-temperature algorithms. The additional key insights come from spectral graph theory and have previously been successful in approximation algorithms. As a result, we can overcome some limitations that seem inherent to the aforementioned class of algorithms. In particular, we exploit the fact that dense, regular graphs exhibit a kind of small-set expansion (i.e., bounded threshold rank), which, via subspace enumeration, lets us enumerate small cuts efficiently.
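
For concreteness, #BIS asks for the number of independent sets of a bipartite graph. The brute-force counter below (Python; names ours) merely defines the quantity that the FPRAS above approximates and of course takes exponential time.

from itertools import combinations

def count_independent_sets(n, edges):
    # Count all subsets S of {0,...,n-1} that contain no edge.
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if not any(frozenset(p) in edge_set for p in combinations(S, 2)):
                count += 1
    return count

# Toy example: the 4-cycle (bipartite) has 7 independent sets
# (the empty set, 4 singletons, and the 2 diagonal pairs).
print(count_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 7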

Cite as

Charlie Carlson, Ewan Davies, Alexandra Kolla, and Aditya Potukuchi. A Spectral Approach to Approximately Counting Independent Sets in Dense Bipartite Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 35:1-35:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{carlson_et_al:LIPIcs.ICALP.2024.35,
  author =	{Carlson, Charlie and Davies, Ewan and Kolla, Alexandra and Potukuchi, Aditya},
  title =	{{A Spectral Approach to Approximately Counting Independent Sets in Dense Bipartite Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{35:1--35:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.35},
  URN =		{urn:nbn:de:0030-drops-201782},
  doi =		{10.4230/LIPIcs.ICALP.2024.35},
  annote =	{Keywords: approximate counting, independent sets, bipartite graphs, graph containers}
}
Document
Track A: Algorithms, Complexity and Games
Vertex-Minor Universal Graphs for Generating Entangled Quantum Subsystems

Authors: Maxime Cautrès, Nathan Claudet, Mehdi Mhalla, Simon Perdrix, Valentin Savin, and Stéphan Thomassé


Abstract
We study the notion of k-stabilizer universal quantum state, that is, an n-qubit quantum state, such that it is possible to induce any stabilizer state on any k qubits, by using only local operations and classical communications. These states generalize the notion of k-pairable states introduced by Bravyi et al., and can be studied from a combinatorial perspective using graph states and k-vertex-minor universal graphs. First, we demonstrate the existence of k-stabilizer universal graph states that are optimal in size with n = Θ(k²) qubits. We also provide parameters for which a random graph state on Θ(k²) qubits is k-stabilizer universal with high probability. Our second contribution consists of two explicit constructions of k-stabilizer universal graph states on n = O(k⁴) qubits. Both rely upon the incidence graph of the projective plane over a finite field 𝔽_q. This provides a major improvement over the previously known explicit construction of k-pairable graph states with n = O(2^{3k}), bringing forth a new and potentially powerful family of multipartite quantum resources.

Cite as

Maxime Cautrès, Nathan Claudet, Mehdi Mhalla, Simon Perdrix, Valentin Savin, and Stéphan Thomassé. Vertex-Minor Universal Graphs for Generating Entangled Quantum Subsystems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 36:1-36:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{cautres_et_al:LIPIcs.ICALP.2024.36,
  author =	{Cautr\`{e}s, Maxime and Claudet, Nathan and Mhalla, Mehdi and Perdrix, Simon and Savin, Valentin and Thomass\'{e}, St\'{e}phan},
  title =	{{Vertex-Minor Universal Graphs for Generating Entangled Quantum Subsystems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{36:1--36:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.36},
  URN =		{urn:nbn:de:0030-drops-201796},
  doi =		{10.4230/LIPIcs.ICALP.2024.36},
  annote =	{Keywords: Quantum networks, graph states, vertex-minors, k-pairability}
}
Document
Track A: Algorithms, Complexity and Games
Fast Approximate Counting of Cycles

Authors: Keren Censor-Hillel, Tomer Even, and Virginia Vassilevska Williams


Abstract
We consider the problem of approximate counting of triangles and longer fixed-length cycles in directed graphs. For triangles, Tětek [ICALP'22] gave an algorithm that returns a (1±ε)-approximation in Õ(n^ω/t^{ω-2}) time, where t is the unknown number of triangles in the given n-node graph and ω < 2.372 is the matrix multiplication exponent. We obtain an improved algorithm whose running time is, within polylogarithmic factors, the same as that of multiplying an n × n/t matrix by an n/t × n matrix. We then extend our framework to obtain the first nontrivial (1±ε)-approximation algorithms for the number of h-cycles in a graph, for any constant h ≥ 3. Our running time is Õ(MM(n, n/t^{1/(h-2)}, n)), the time to multiply an n × n/t^{1/(h-2)} matrix by an n/t^{1/(h-2)} × n matrix. Finally, we show that under popular fine-grained hypotheses, this running time is optimal.
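
As a baseline for the triangle case, exact counting already reduces to matrix multiplication: for the adjacency matrix A of an undirected simple graph the number of triangles is trace(A³)/6 (and trace(A³)/3 for a digraph without self-loops). The NumPy snippet below is this exact baseline, not the paper's approximation algorithm, whose point is to be faster when the unknown count t is large.

import numpy as np

def count_triangles(A):
    # A: symmetric 0/1 adjacency matrix with zero diagonal.
    # Each triangle is counted 6 times among closed walks of length 3.
    return int(np.round(np.trace(A @ A @ A) / 6))

# Toy example: K_4 has 4 triangles.
A = np.ones((4, 4), dtype=float) - np.eye(4)
print(count_triangles(A))   # 4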

Cite as

Keren Censor-Hillel, Tomer Even, and Virginia Vassilevska Williams. Fast Approximate Counting of Cycles. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{censorhillel_et_al:LIPIcs.ICALP.2024.37,
  author =	{Censor-Hillel, Keren and Even, Tomer and Vassilevska Williams, Virginia},
  title =	{{Fast Approximate Counting of Cycles}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.37},
  URN =		{urn:nbn:de:0030-drops-201809},
  doi =		{10.4230/LIPIcs.ICALP.2024.37},
  annote =	{Keywords: Approximate triangle counting, Approximate cycle counting, Fast matrix multiplication, Fast rectangular matrix multiplication}
}
Document
Track A: Algorithms, Complexity and Games
The Group Access Bounds for Binary Search Trees

Authors: Parinya Chalermsook, Manoj Gupta, Wanchote Jiamjitrak, Akash Pareek, and Sorrachai Yingchareonthawornchai


Abstract
The access lemma (Sleator and Tarjan, JACM 1985) is a property of binary search trees (BSTs) that implies interesting consequences such as static optimality, static finger, and the working set property on any access sequence X = (x_1,x_2,…,x_m). However, there are known corollaries of dynamic optimality that cannot be derived via the access lemma, such as the dynamic finger bound and any o(log n)-competitive ratio to the optimal BST, where n is the number of keys. In this paper, we introduce the group access bound, which is defined with respect to a reference group access tree. Group access bounds generalize the access lemma and imply properties that are far stronger than those implied by the classical access lemma. For each of the following results, there is a group access tree whose group access bound (1) is O(√{log n})-competitive to the optimal BST; (2) achieves the k-finger bound with an additive term of O(m log k log log n) (randomized) when the reference tree is an almost complete binary tree; (3) satisfies the unified bound with an additive term of O(m log log n); and (4) matches the unified bound with a time window k with an additive term of O(m log k log log n) (randomized). Furthermore, we prove the simulation theorem: for every group access tree, there is an online BST algorithm that is O(1)-competitive with its group access bound. In particular, any new group access bound will automatically imply a new BST algorithm achieving the same bound. Thereby, we obtain an improved k-finger bound (when the reference tree is an almost complete binary tree), an improved unified bound with a time window k, and a bound matching the best-known bound for the unified bound in the BST model. Since any dynamically optimal BST must achieve the group access bounds, we believe our results provide a new direction towards proving o(log n)-competitiveness of the Splay tree and Greedy, two prime candidates for the dynamic optimality conjecture.
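
For context, the access lemma mentioned above can be stated as follows; this is its standard form in the BST-bounds literature, restated here for convenience rather than quoted from the paper. A BST algorithm satisfies the access lemma if, for every assignment of positive weights w to the keys with total weight W = Σ_y w(y), the amortized cost of accessing a key x is

\[
  O\!\left(1 + \log \frac{W}{w(x)}\right),
\]

so the total cost of serving X = (x_1, …, x_m) is O(m + Σ_{i=1}^{m} log(W / w(x_i))). Different choices of w then yield the consequences the abstract lists, such as static optimality and the static finger bound.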

Cite as

Parinya Chalermsook, Manoj Gupta, Wanchote Jiamjitrak, Akash Pareek, and Sorrachai Yingchareonthawornchai. The Group Access Bounds for Binary Search Trees. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 38:1-38:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chalermsook_et_al:LIPIcs.ICALP.2024.38,
  author =	{Chalermsook, Parinya and Gupta, Manoj and Jiamjitrak, Wanchote and Pareek, Akash and Yingchareonthawornchai, Sorrachai},
  title =	{{The Group Access Bounds for Binary Search Trees}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{38:1--38:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.38},
  URN =		{urn:nbn:de:0030-drops-201817},
  doi =		{10.4230/LIPIcs.ICALP.2024.38},
  annote =	{Keywords: Dynamic Optimality, Binary Search Tree, Online Algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Bounds for Distinct Quartics

Authors: Panagiotis Charalampopoulos, Paweł Gawrychowski, and Samah Ghazawi


Abstract
A fundamental concept related to strings is that of repetitions. It has been extensively studied in many versions, from both purely combinatorial and algorithmic angles. One of the most basic questions is how many distinct squares, i.e., distinct strings of the form UU, a string of length n can contain as fragments. It turns out that this is always 𝒪(n), and the bound cannot be improved to sublinear in n [Fraenkel and Simpson, JCTA 1998]. Several similar questions about repetitions in strings have been considered, and by now we seem to have a good understanding of their repetitive structure. For higher-dimensional strings, the basic concept of periodicity has been successfully extended and applied to design efficient algorithms, even though it is inherently more complex than for regular strings. Extending the notion of repetitions and understanding the repetitive structure of higher-dimensional strings is, however, far from complete. Quartics were introduced by Apostolico and Brimkov [TCS 2000] as analogues of squares in two dimensions. Charalampopoulos, Radoszewski, Rytter, Waleń, and Zuba [ESA 2020] proved that the number of distinct quartics in an n×n 2D string is 𝒪(n²log²n) and that they can be computed in 𝒪(n²log²n) time. Gawrychowski, Ghazawi, and Landau [SPIRE 2021] constructed an infinite family of n×n 2D strings with Ω(n²log n) distinct quartics. This brings the challenge of determining asymptotically tight bounds. Here, we settle both the combinatorial and the algorithmic aspects of this question: the number of distinct quartics in an n×n 2D string is 𝒪(n²log n) and they can be computed in the worst-case optimal 𝒪(n²log n) time. As expected, our solution heavily exploits the periodic structure implied by occurrences of quartics. However, the two-dimensional nature of the problem introduces some technical challenges. Somewhat surprisingly, we overcome the final challenge for the combinatorial bound using a result of Marcus and Tardos [JCTA 2004] for permutation avoidance on matrices.
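
To make the basic object concrete, the brute-force Python sketch below enumerates the distinct squares (fragments of the form UU) of a one-dimensional string. It is a cubic-time illustration of the 1D notion only, not of the paper's two-dimensional quartics or its O(n² log n) machinery.

def distinct_squares(s: str) -> set:
    # Collect every distinct fragment of the form UU by checking all
    # even-length substrings; roughly O(n^3) time, for illustration only.
    found = set()
    n = len(s)
    for length in range(2, n + 1, 2):
        half = length // 2
        for start in range(n - length + 1):
            if s[start:start + half] == s[start + half:start + length]:
                found.add(s[start:start + length])
    return found

# The distinct squares of "abaabab" are "aa", "abab", and "abaaba".
print(distinct_squares("abaabab"))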

Cite as

Panagiotis Charalampopoulos, Paweł Gawrychowski, and Samah Ghazawi. Optimal Bounds for Distinct Quartics. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 39:1-39:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{charalampopoulos_et_al:LIPIcs.ICALP.2024.39,
  author =	{Charalampopoulos, Panagiotis and Gawrychowski, Pawe{\l} and Ghazawi, Samah},
  title =	{{Optimal Bounds for Distinct Quartics}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{39:1--39:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.39},
  URN =		{urn:nbn:de:0030-drops-201823},
  doi =		{10.4230/LIPIcs.ICALP.2024.39},
  annote =	{Keywords: 2D strings, quartics, repetitions, periodicity}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Edge Coloring with Subquadratic Palette Size

Authors: Shiri Chechik, Doron Mukhtar, and Tianyi Zhang


Abstract
In this paper, we study the problem of computing an edge-coloring in the (one-pass) W-streaming model. In this setting, the edges of an n-node graph arrive in an arbitrary order at a machine with relatively small space, and the goal is to design an algorithm that outputs, as a stream, a proper coloring of the edges using as few colors as possible. Behnezhad et al. [Behnezhad et al., 2019] devised the first non-trivial algorithm for this problem, which computes in Õ(n) space a proper O(Δ²)-coloring w.h.p. (here Δ is the maximum degree of the graph). Subsequent papers improved upon this result, the latest of which [Ansari et al., 2022] showed that it is possible to deterministically compute an O(Δ²/s)-coloring in O(ns) space. However, none of the improvements succeeded in reducing the number of colors to O(Δ^{2-ε}) while keeping the same space bound of Õ(n). In particular, no progress was made on the question of whether computing an O(Δ)-coloring is possible with roughly O(n) space, which was stated in [Behnezhad et al., 2019] to be an interesting open problem. In this paper we bypass the quadratic bound by presenting a new randomized Õ(n)-space algorithm that uses Õ(Δ^{1.5}) colors.
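
For comparison with the palette sizes above, the Python sketch below shows the trivial offline greedy baseline: each edge conflicts with at most 2Δ-2 previously colored edges, so greedy never needs more than 2Δ-1 colors. It is not a W-streaming algorithm (it keeps per-vertex color sets for the whole graph) and is included only to make the objective concrete.

from collections import defaultdict

def greedy_edge_coloring(edges):
    # Offline greedy proper edge coloring: give each edge the smallest color
    # not already used at either endpoint. Uses at most 2*Delta - 1 colors.
    used_at = defaultdict(set)
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used_at[u] or c in used_at[v]:
            c += 1
        coloring[(u, v)] = c
        used_at[u].add(c)
        used_at[v].add(c)
    return coloring

print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (0, 3)]))
# {(0, 1): 0, (1, 2): 1, (2, 0): 2, (0, 3): 1}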

Cite as

Shiri Chechik, Doron Mukhtar, and Tianyi Zhang. Streaming Edge Coloring with Subquadratic Palette Size. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 40:1-40:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.40,
  author =	{Chechik, Shiri and Mukhtar, Doron and Zhang, Tianyi},
  title =	{{Streaming Edge Coloring with Subquadratic Palette Size}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{40:1--40:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.40},
  URN =		{urn:nbn:de:0030-drops-201831},
  doi =		{10.4230/LIPIcs.ICALP.2024.40},
  annote =	{Keywords: graph algorithms, streaming algorithms, edge coloring}
}
Document
Track A: Algorithms, Complexity and Games
Faster Algorithms for Dual-Failure Replacement Paths

Authors: Shiri Chechik and Tianyi Zhang


Abstract
Given a simple weighted directed graph G = (V, E, ω) on n vertices as well as two designated terminals s, t ∈ V, our goal is to compute the shortest path from s to t avoiding any pair of presumably failed edges f₁, f₂ ∈ E; this is a natural generalization of the classical replacement path problem, which considers single edge failures only. This dual failure replacement paths problem was recently studied by Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022], who designed a cubic-time algorithm for general weighted digraphs which is conditionally optimal; in the same paper, for unweighted graphs where ω ≡ 1, the authors presented an algebraic algorithm with runtime Õ(n^{2.9146}), as well as a conditional lower bound of n^{8/3-o(1)} against combinatorial algorithms. However, it was unknown in their work whether fast matrix multiplication is necessary for a subcubic runtime in unweighted digraphs. As our primary result, we present the first truly subcubic combinatorial algorithm for dual failure replacement paths in unweighted digraphs. Our runtime is Õ(n^{3-1/18}). In addition, we study algebraic algorithms for digraphs with small integer edge weights from {-M, -M+1, ⋯, M-1, M}. As our secondary result, we obtain a runtime of Õ(Mn^{2.8716}), which is faster than the previous bound of Õ(M^{2/3}n^{2.9144} + Mn^{2.8716}) from [Vassilevska Williams, Woldeghebriel and Xu, 2022].
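
The query underlying the problem can be made concrete with the naive approach: for a failed pair (f₁, f₂), simply rerun a shortest-path computation on the graph with those two edges deleted. The Python sketch below does exactly this with Dijkstra; answering all roughly m² failure pairs this way is the trivial baseline that the subcubic algorithms above improve upon.

import heapq
from collections import defaultdict

def replacement_distance(edges, s, t, failed):
    # Dijkstra from s to t on the directed graph with the edges in `failed`
    # removed; `edges` is a list of (u, v, weight) triples.
    adj = defaultdict(list)
    for u, v, w in edges:
        if (u, v) not in failed:
            adj[u].append((v, w))
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist.get(t, float("inf"))

edges = [(0, 1, 1), (1, 3, 1), (0, 2, 2), (2, 3, 2)]
print(replacement_distance(edges, 0, 3, failed={(0, 1), (1, 3)}))   # 4, via 0 -> 2 -> 3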

Cite as

Shiri Chechik and Tianyi Zhang. Faster Algorithms for Dual-Failure Replacement Paths. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 41:1-41:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.41,
  author =	{Chechik, Shiri and Zhang, Tianyi},
  title =	{{Faster Algorithms for Dual-Failure Replacement Paths}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{41:1--41:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.41},
  URN =		{urn:nbn:de:0030-drops-201849},
  doi =		{10.4230/LIPIcs.ICALP.2024.41},
  annote =	{Keywords: graph algorithms, shortest paths, replacement paths}
}
Document
Track A: Algorithms, Complexity and Games
Path-Reporting Distance Oracles with Logarithmic Stretch and Linear Size

Authors: Shiri Chechik and Tianyi Zhang


Abstract
Given an undirected graph G = (V, E, 𝐰) on n vertices with positive edge weights, a distance oracle is a space-efficient data structure that answers pairwise distance queries with fast query time. The quality of a distance oracle is measured by three parameters: space, query time, and stretch. In a landmark paper, [Thorup and Zwick, 2001] showed that for any integer parameter k ≥ 1, there exists a distance oracle with size O(kn^{1+1/k}), O(k) query time, and (2k-1)-stretch error on the approximate distances. After that, there has been a line of subsequent improvements, which culminated in the optimal trade-off of O(n^{1+1/k}) space, O(1) query time, and (2k-1)-stretch [Chechik, 2015]. However, this line of constructions did not require the distance oracle to be capable of reporting an actual path besides an approximate distance estimate, and there has been a performance gap between path-reporting distance oracles and ones that are not path-reporting. It is known that the earliest construction by [Thorup and Zwick, 2001] is path-reporting, but the parameters are worse by a factor of k. In a later construction by [Wulff-Nilsen, 2013], the query time was improved from O(k) to O(log k). Better trade-offs were discovered in [Elkin and Pettie, 2015], where the authors broke the O(kn^{1+1/k}) space barrier and achieved O(n^{1+1/k}log k) space with O(log k) query time, but their stretch was blown up to a polynomial O(k^{log_{4/3}7}); they also gave an alternative choice of O(n^{1+1/k}) space, which is optimal, and O(k)-stretch, which is also optimal up to a constant factor, but their query time rose to O(n^ε). In a recent work [Elkin and Shabat, 2023], the authors obtained significant improvements of O(n^{1+1/k}log k) space, O(k)-stretch, and O(log log k) query time, or O(n^{1+1/k}) space, O(klog k)-stretch, and O(log log k) query time. All the above constructions of path-reporting distance oracles share a common barrier: they could not achieve optimal space O(n^{1+1/k}) and stretch O(k) simultaneously within logarithmic query time; for example, in the natural regime where k = ⌈log n⌉, previous distance oracles had to pay an extra factor of log log n either in space or in stretch. As our result, we bypass this barrier with a new construction of path-reporting distance oracles with O(n^{1+1/k}) space, O(k)-stretch, and O(log log k) query time.
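
The "path-reporting" requirement discussed above is easy to illustrate at the trivial end of the trade-off: storing a BFS parent tree from every source gives an exact (stretch-1) oracle for unweighted graphs that can report an actual path, at the cost of Θ(n²) space. The Python sketch below is only this baseline, not any of the O(n^{1+1/k})-space constructions.

from collections import deque

def build_parents(adj):
    # BFS from every vertex; parent[s][v] is the predecessor of v on a
    # shortest s-v path (None for v = s). Theta(n^2) space overall.
    parent = {}
    for s in adj:
        par = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in par:
                    par[v] = u
                    queue.append(v)
        parent[s] = par
    return parent

def report_path(parent, s, t):
    # Walk parent pointers back from t; returns None if t is unreachable.
    if t not in parent[s]:
        return None
    path = [t]
    while path[-1] != s:
        path.append(parent[s][path[-1]])
    return path[::-1]

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
parents = build_parents(adj)
print(report_path(parents, 0, 3))   # [0, 1, 2, 3]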

Cite as

Shiri Chechik and Tianyi Zhang. Path-Reporting Distance Oracles with Logarithmic Stretch and Linear Size. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.42,
  author =	{Chechik, Shiri and Zhang, Tianyi},
  title =	{{Path-Reporting Distance Oracles with Logarithmic Stretch and Linear Size}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.42},
  URN =		{urn:nbn:de:0030-drops-201859},
  doi =		{10.4230/LIPIcs.ICALP.2024.42},
  annote =	{Keywords: graph algorithms, shortest paths, distance oracles}
}
Document
Track A: Algorithms, Complexity and Games
Robot Positioning Using Torus Packing for Multisets

Authors: Chung Shue Chen, Peter Keevash, Sean Kennedy, Élie de Panafieu, and Adrian Vetta


Abstract
We consider the design of a positioning system where a robot determines its position from local observations. This is a well-studied problem of considerable practical importance and mathematical interest. The dominant paradigm derives from the classical theory of de Bruijn sequences, where the robot has access to a window within a larger code and can determine its position if these windows are distinct. We propose an alternative model in which the robot has more limited observational powers, which we argue is more realistic in terms of engineering: the robot does not have access to the full pattern of colours (or letters) in the window, but only to the intensity of each colour (or the number of occurrences of each letter). This leads to a mathematically interesting problem with a different flavour to that arising in the classical paradigm, requiring new construction techniques. The parameters of our construction are optimal up to a constant factor, and computing the position requires only a constant number of arithmetic operations.
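
The classical paradigm referred to above can be illustrated in a few lines of Python: generate a de Bruijn sequence (here via the standard FKM/Lyndon-word construction) and index its length-n windows, so that a robot reading one full window recovers its position with a single lookup. This shows only the classical full-pattern model, not the intensity-only (multiset) observations the paper studies.

def de_bruijn(k, n):
    # Standard FKM construction of a de Bruijn sequence over {0,...,k-1}
    # in which every length-n word appears exactly once cyclically.
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def window_index(seq, n):
    # Map every length-n cyclic window to its unique starting position.
    m = len(seq)
    return {tuple(seq[(i + j) % m] for j in range(n)): i for i in range(m)}

seq = de_bruijn(2, 4)            # binary alphabet, window length 4, |seq| = 16
table = window_index(seq, 4)
print(table[(1, 1, 1, 1)])       # the observed window determines the position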

Cite as

Chung Shue Chen, Peter Keevash, Sean Kennedy, Élie de Panafieu, and Adrian Vetta. Robot Positioning Using Torus Packing for Multisets. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.43,
  author =	{Chen, Chung Shue and Keevash, Peter and Kennedy, Sean and de Panafieu, \'{E}lie and Vetta, Adrian},
  title =	{{Robot Positioning Using Torus Packing for Multisets}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.43},
  URN =		{urn:nbn:de:0030-drops-201862},
  doi =		{10.4230/LIPIcs.ICALP.2024.43},
  annote =	{Keywords: Universal cycles, positioning systems, de Bruijn sequences}
}
Document
Track A: Algorithms, Complexity and Games
Bayesian Calibrated Click-Through Auctions

Authors: Junjie Chen, Minming Li, Haifeng Xu, and Song Zuo


Abstract
We study information design in click-through auctions, in which the bidders/advertisers bid for winning an opportunity to show their ads but only pay for realized clicks. The payment may or may not happen, and its probability is called the click-through rate (CTR). This auction format is widely used in the industry of online advertising. Bidders have private values, whereas the seller has private information about each bidder’s CTRs. We are interested in the seller’s problem of partially revealing CTR information to maximize revenue. Information design in click-through auctions turns out to be intriguingly different from almost all previous studies in this space since any revealed information about CTRs will never affect bidders' bidding behaviors - they will always bid their true value per click - but only affect the auction’s allocation and payment rule. In some sense, this makes information design effectively a constrained mechanism design problem. Our first result is an FPTAS to compute an approximately optimal mechanism under a constant number of bidders. The design of this algorithm leverages Bayesian bidder values which help to "smooth" the seller’s revenue function and lead to better tractability. The design of this FPTAS is complex and primarily algorithmic. Our second main result pursues the design of "simple" mechanisms that are approximately optimal yet more practical. We primarily focus on the two-bidder situation, which is already notoriously challenging as demonstrated in recent works. When bidders' CTR distribution is symmetric, we develop a simple prior-free signaling scheme, whose construction relies on a parameter termed optimal signal ratio. The constructed scheme provably obtains a good approximation as long as the maximum and minimum of bidders' value density functions do not differ much.
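
A minimal numerical illustration of the point that revealed CTR information changes the allocation and payment but not the bids: assume, as a hypothetical standard format not spelled out in the abstract, a second-price per-click auction that ranks bidders by value × CTR and charges the winner the critical per-click bid. The Python sketch below computes the outcome for two different CTR estimates the seller might induce, with the truthful per-click bids held fixed.

def second_price_per_click(values, ctrs):
    # Rank bidders by value * CTR; the winner pays, per click, the smallest
    # bid that would still win, so expected revenue equals the runner-up
    # score. Assumes at least two bidders. Illustration only.
    scores = [v * c for v, c in zip(values, ctrs)]
    winner = max(range(len(values)), key=lambda i: scores[i])
    runner_up = max(s for i, s in enumerate(scores) if i != winner)
    payment_per_click = runner_up / ctrs[winner]
    expected_revenue = ctrs[winner] * payment_per_click
    return winner, payment_per_click, expected_revenue

# Same true per-click values, two different CTR estimates acted upon:
print(second_price_per_click([10.0, 8.0], [0.10, 0.20]))   # bidder 1 wins
print(second_price_per_click([10.0, 8.0], [0.30, 0.20]))   # bidder 0 wins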

Cite as

Junjie Chen, Minming Li, Haifeng Xu, and Song Zuo. Bayesian Calibrated Click-Through Auctions. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.44,
  author =	{Chen, Junjie and Li, Minming and Xu, Haifeng and Zuo, Song},
  title =	{{Bayesian Calibrated Click-Through Auctions}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.44},
  URN =		{urn:nbn:de:0030-drops-201878},
  doi =		{10.4230/LIPIcs.ICALP.2024.44},
  annote =	{Keywords: information design, ad auctions, online advertising, mechanism design}
}
Document
Track A: Algorithms, Complexity and Games
High-Accuracy Multicommodity Flows via Iterative Refinement

Authors: Li Chen and Mingquan Ye


Abstract
The multicommodity flow problem is a classic problem in network flow and combinatorial optimization, with applications in transportation, communication, logistics, and supply chain management, among others. Existing algorithms often focus on low-accuracy approximate solutions, while high-accuracy algorithms typically rely on general linear program solvers. In this paper, we present efficient high-accuracy algorithms for a broad family of multicommodity flow problems on undirected graphs, demonstrating improved running times compared to general linear program solvers. Our main result shows that we can solve the 𝓁_{q, p}-norm multicommodity flow problem to a (1 + ε) approximation in time O_{q, p}(m^{1+o(1)} k² log(1/ε)), where k is the number of commodities, and O_{q, p}(⋅) hides constants depending only on q or p. As q and p approach 1 and ∞, respectively, 𝓁_{q, p}-norm flow tends to maximum concurrent flow. We introduce the first iterative refinement framework for 𝓁_{q, p}-norm minimization problems, which reduces the problem to solving a series of decomposable residual problems. In the case of k-commodity flow, each residual problem can be decomposed into k single-commodity convex flow problems, each of which can be solved in almost-linear time. As many classical variants of multicommodity flows were shown to be complete for linear programs in the high-accuracy regime [Ding-Kyng-Zhang, ICALP'22], our result provides new directions for studying more efficient high-accuracy multicommodity flow algorithms.

Cite as

Li Chen and Mingquan Ye. High-Accuracy Multicommodity Flows via Iterative Refinement. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 45:1-45:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.45,
  author =	{Chen, Li and Ye, Mingquan},
  title =	{{High-Accuracy Multicommodity Flows via Iterative Refinement}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{45:1--45:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.45},
  URN =		{urn:nbn:de:0030-drops-201887},
  doi =		{10.4230/LIPIcs.ICALP.2024.45},
  annote =	{Keywords: High-accuracy multicommodity flow, Iterative refinement framework, Convex flow solver}
}
Document
Track A: Algorithms, Complexity and Games
On the Streaming Complexity of Expander Decomposition

Authors: Yu Chen, Michael Kapralov, Mikhail Makarov, and Davide Mazzali


Abstract
In this paper we study the problem of finding (ε, ϕ)-expander decompositions of a graph in the streaming model, in particular for dynamic streams of edge insertions and deletions. The goal is to partition the vertex set so that every component induces a ϕ-expander, while the number of inter-cluster edges is only an ε fraction of the total volume. It was recently shown that there exists a simple algorithm to construct a (O(ϕ log n), ϕ)-expander decomposition of an n-vertex graph using Õ(n/ϕ²) bits of space [Filtser, Kapralov, Makarov, ITCS'23]. This result calls for understanding the extent to which a dependence in space on the sparsity parameter ϕ is inherent. We move towards answering this question on two fronts. We prove that a (O(ϕ log n), ϕ)-expander decomposition can be found using Õ(n) space, for every ϕ. At the core of our result is the first streaming algorithm for computing boundary-linked expander decompositions, a recently introduced strengthening of the classical notion [Goranci et al., SODA'21]. The key advantage is that a classical sparsifier [Fung et al., STOC'11], with size independent of ϕ, preserves the cuts inside the clusters of a boundary-linked expander decomposition within a multiplicative error. Notable algorithmic applications use sequences of expander decompositions, in particular one often repeatedly computes a decomposition of the subgraph induced by the inter-cluster edges (e.g., the seminal work of Spielman and Teng on spectral sparsifiers [Spielman, Teng, SIAM Journal on Computing 40(4)], or the recent maximum flow breakthrough [Chen et al., FOCS'22], among others). We prove that any streaming algorithm that computes a sequence of (O(ϕ log n), ϕ)-expander decompositions requires Ω̃(n/ϕ) bits of space, even in insertion-only streams.
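
Spelling out the notion used above in symbols, with the usual conductance-based definition of a ϕ-expander (which the abstract leaves implicit): a partition V = V_1 ∪ … ∪ V_s is an (ε, ϕ)-expander decomposition if

\[
\Phi\big(G[V_i]\big) \;=\; \min_{\emptyset \neq S \subsetneq V_i} \frac{\big|E_{G[V_i]}(S,\, V_i \setminus S)\big|}{\min\{\mathrm{vol}(S),\, \mathrm{vol}(V_i \setminus S)\}} \;\ge\; \phi \quad \text{for every } i,
\qquad
\#\{\text{inter-cluster edges}\} \;\le\; \varepsilon \cdot \mathrm{vol}(G).
\]

The boundary-linked variant mentioned in the abstract strengthens this guarantee for cuts that touch cluster boundaries.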

Cite as

Yu Chen, Michael Kapralov, Mikhail Makarov, and Davide Mazzali. On the Streaming Complexity of Expander Decomposition. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 46:1-46:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.46,
  author =	{Chen, Yu and Kapralov, Michael and Makarov, Mikhail and Mazzali, Davide},
  title =	{{On the Streaming Complexity of Expander Decomposition}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{46:1--46:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.46},
  URN =		{urn:nbn:de:0030-drops-201890},
  doi =		{10.4230/LIPIcs.ICALP.2024.46},
  annote =	{Keywords: Graph Sketching, Dynamic Streaming, Expander Decomposition}
}
Document
Track A: Algorithms, Complexity and Games
Lower Bounds on 0-Extension with Steiner Nodes

Authors: Yu Chen and Zihan Tan


Abstract
In the 0-Extension problem, we are given an edge-weighted graph G = (V,E,c), a set T ⊆ V of its vertices called terminals, and a semi-metric D over T, and the goal is to find an assignment f of each non-terminal vertex to a terminal, minimizing the sum, over all edges (u,v) ∈ E, of the product of the edge weight c(u,v) and the distance D(f(u),f(v)) between the terminals that u and v are mapped to. The current best approximation algorithms for 0-Extension are based on rounding a linear programming relaxation called the semi-metric LP relaxation. The integrality gap of this LP, which is upper bounded by O(log|T|/log log|T|) and lower bounded by Ω((log|T|)^{2/3}), has been shown to be closely related to the quality of cut and flow vertex sparsifiers. We study a variant of the 0-Extension problem where Steiner vertices are allowed. Specifically, we focus on the integrality gap of the same semi-metric LP relaxation for this new problem. Following previous work, this new integrality gap turns out to be closely related to the quality achievable by cut/flow vertex sparsifiers with Steiner nodes, a major open problem in graph compression. We show that the new integrality gap remains superconstant, namely Ω(log log |T|), even if we allow a super-linear O(|T|log^{1-ε}|T|) number of Steiner nodes.
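
In symbols, the objective described above asks for

\[
\min_{\substack{f : V \to T \\ f(t) = t \ \text{for all } t \in T}} \;\; \sum_{(u,v) \in E} c(u,v)\, D\big(f(u), f(v)\big),
\]

and, in its standard form, the semi-metric LP relaxation replaces the assignment f by a semi-metric δ on V that agrees with D on the terminals, minimizing Σ_{(u,v) ∈ E} c(u,v) δ(u,v).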

Cite as

Yu Chen and Zihan Tan. Lower Bounds on 0-Extension with Steiner Nodes. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 47:1-47:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.47,
  author =	{Chen, Yu and Tan, Zihan},
  title =	{{Lower Bounds on 0-Extension with Steiner Nodes}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{47:1--47:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.47},
  URN =		{urn:nbn:de:0030-drops-201905},
  doi =		{10.4230/LIPIcs.ICALP.2024.47},
  annote =	{Keywords: Graph Algorithms, Zero Extension, Integrality Gap}
}
Document
Track A: Algorithms, Complexity and Games
Solving Woeginger’s Hiking Problem: Wonderful Partitions in Anonymous Hedonic Games

Authors: Andrei Constantinescu, Pascal Lenzner, Rebecca Reiffenhäuser, Daniel Schmand, and Giovanna Varricchio


Abstract
A decade ago, Gerhard Woeginger posed an open problem that became well-known as "Woeginger’s Hiking Problem": Consider a group of n people that want to go hiking; everyone expresses preferences over the size of their hiking group in the form of an interval between 1 and n. Is it possible to efficiently assign the n people to a set of hiking subgroups so that every person approves the size of their assigned subgroup? The problem is also known as efficiently deciding if an instance of an anonymous Hedonic Game with interval approval preferences admits a wonderful partition. We resolve the open problem in the affirmative by presenting an O(n⁵) time algorithm for Woeginger’s Hiking Problem. Our solution is based on employing a dynamic programming approach for a specific rectangle stabbing problem from computational geometry. Moreover, we propose natural, more demanding extensions of the problem, e.g., maximizing the number of satisfied participants and variants with single-peaked preferences, and show that they are also efficiently solvable. Last but not least, we employ our solution to efficiently compute a partition that maximizes the egalitarian welfare for anonymous single-peaked Hedonic Games.
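
To make the decision problem concrete, here is a brute-force Python sketch that searches all ways of grouping the people and checks every member's interval. It is exponential-time and serves only to pin down the notion of a wonderful partition, in contrast to the paper's O(n⁵) algorithm.

from itertools import combinations

def wonderful_partition(intervals):
    # intervals[i] = (a_i, b_i): person i approves any group size in [a_i, b_i].
    # Exhaustive search over set partitions; returns a list of groups or None.
    people = list(range(len(intervals)))

    def solve(remaining):
        if not remaining:
            return []
        first, rest = remaining[0], remaining[1:]
        for size in range(1, len(remaining) + 1):
            for others in combinations(rest, size - 1):
                group = (first,) + others
                if all(intervals[p][0] <= size <= intervals[p][1] for p in group):
                    sub = solve([p for p in rest if p not in others])
                    if sub is not None:
                        return [group] + sub
        return None

    return solve(people)

# Person 3 insists on a group of three, but nobody else tolerates size 3:
print(wonderful_partition([(1, 2), (1, 2), (1, 2), (3, 3)]))   # None
# A feasible instance, e.g. groups (0, 1, 2) and (3,):
print(wonderful_partition([(1, 3), (2, 3), (3, 3), (1, 1)]))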

Cite as

Andrei Constantinescu, Pascal Lenzner, Rebecca Reiffenhäuser, Daniel Schmand, and Giovanna Varricchio. Solving Woeginger’s Hiking Problem: Wonderful Partitions in Anonymous Hedonic Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 48:1-48:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{constantinescu_et_al:LIPIcs.ICALP.2024.48,
  author =	{Constantinescu, Andrei and Lenzner, Pascal and Reiffenh\"{a}user, Rebecca and Schmand, Daniel and Varricchio, Giovanna},
  title =	{{Solving Woeginger’s Hiking Problem: Wonderful Partitions in Anonymous Hedonic Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{48:1--48:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.48},
  URN =		{urn:nbn:de:0030-drops-201910},
  doi =		{10.4230/LIPIcs.ICALP.2024.48},
  annote =	{Keywords: Algorithmic Game Theory, Dynamic Programming, Anonymous Hedonic Games, Single-Peaked Preferences, Social Optimum, Wonderful Partitions}
}
Document
Track A: Algorithms, Complexity and Games
An Optimal Sparsification Lemma for Low-Crossing Matchings and Its Applications to Discrepancy and Approximations

Authors: Mónika Csikós and Nabil H. Mustafa


Abstract
Matchings with low crossing numbers were originally introduced in the late 1980s in the seminal works of Welzl [Welzl, 1988; Welzl, 1992] and Chazelle-Welzl [Chazelle and Welzl, 1989]. They have since become fundamental structures in combinatorics, computational geometry, and algorithms. In this paper, we study matchings with low crossing numbers and their relation to random samples. In particular, our main technical result states that, given a set system (X, 𝒮) with dual VC-dimension d and a parameter α ∈ (0, 1], a random set of Θ̃(n^{1+α}) edges of binom(X,2) contains a linear-sized matching with crossing number O (n^{1-α/d}). Furthermore, we show that this bound is optimal up to a logarithmic factor. By incorporating the above sampling step to existing algorithms, we obtain improved running times, by a factor of Θ̃(n), for computing matchings with low crossing numbers. This immediately implies new bounds for a number of well-studied problems, such as combinatorial discrepancy, ε-approximations and their applications. To the best of our knowledge, these are the first near-linear time algorithms for general, non-geometric set systems, for a) matchings with sub-linear crossing numbers, and b) discrepancy beating the standard deviation bound. As an immediate consequence we get fast algorithms for computing o(1/ε²)-sized ε-approximations.
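
For reference, the crossing number the abstract refers to has the usual definition, restated here since the abstract does not spell it out: a set S ∈ 𝒮 crosses an edge {x, y} of a matching M if it contains exactly one of its endpoints, and

\[
\operatorname{cr}(M) \;=\; \max_{S \in \mathcal{S}} \; \Big|\big\{\, \{x,y\} \in M \;:\; |S \cap \{x,y\}| = 1 \,\big\}\Big| .
\]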

Cite as

Mónika Csikós and Nabil H. Mustafa. An Optimal Sparsification Lemma for Low-Crossing Matchings and Its Applications to Discrepancy and Approximations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 49:1-49:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{csikos_et_al:LIPIcs.ICALP.2024.49,
  author =	{Csik\'{o}s, M\'{o}nika and Mustafa, Nabil H.},
  title =	{{An Optimal Sparsification Lemma for Low-Crossing Matchings and Its Applications to Discrepancy and Approximations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{49:1--49:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.49},
  URN =		{urn:nbn:de:0030-drops-201925},
  doi =		{10.4230/LIPIcs.ICALP.2024.49},
  annote =	{Keywords: low-crossing matchings, uniform sampling, discrepancy, approximations}
}
Document
Track A: Algorithms, Complexity and Games
Fully-Scalable MPC Algorithms for Clustering in High Dimension

Authors: Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý


Abstract
We design new parallel algorithms for clustering in high-dimensional Euclidean spaces. These algorithms run in the Massively Parallel Computation (MPC) model, and are fully scalable, meaning that the local memory in each machine may be n^σ for arbitrarily small fixed σ > 0. Importantly, the local memory may be substantially smaller than the number of clusters k, yet all our algorithms are fast, i.e., run in O(1) rounds. We first devise a fast MPC algorithm for O(1)-approximation of uniform Facility Location. This is the first fully-scalable MPC algorithm that achieves O(1)-approximation for any clustering problem in general geometric setting; previous algorithms only provide poly(log n)-approximation or apply to restricted inputs, like low dimension or small number of clusters k; e.g. [Bhaskara and Wijewardena, ICML'18; Cohen-Addad et al., NeurIPS'21; Cohen-Addad et al., ICML'22]. We then build on this Facility Location result and devise a fast MPC algorithm that achieves O(1)-bicriteria approximation for k-Median and for k-Means, namely, it computes (1+ε)k clusters of cost within O(1/ε²)-factor of the optimum for k clusters. A primary technical tool that we introduce, and may be of independent interest, is a new MPC primitive for geometric aggregation, namely, computing for every data point a statistic of its approximate neighborhood, for statistics like range counting and nearest-neighbor search. Our implementation of this primitive works in high dimension, and is based on consistent hashing (aka sparse partition), a technique that was recently used for streaming algorithms [Czumaj et al., FOCS'22].

Cite as

Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Fully-Scalable MPC Algorithms for Clustering in High Dimension. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{czumaj_et_al:LIPIcs.ICALP.2024.50,
  author =	{Czumaj, Artur and Gao, Guichen and Jiang, Shaofeng H.-C. and Krauthgamer, Robert and Vesel\'{y}, Pavel},
  title =	{{Fully-Scalable MPC Algorithms for Clustering in High Dimension}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.50},
  URN =		{urn:nbn:de:0030-drops-201938},
  doi =		{10.4230/LIPIcs.ICALP.2024.50},
  annote =	{Keywords: Massively parallel computing, high dimension, facility location, k-median, k-means}
}
Document
Track A: Algorithms, Complexity and Games
Computing Tree Decompositions with Small Independence Number

Authors: Clément Dallard, Fedor V. Fomin, Petr A. Golovach, Tuukka Korhonen, and Martin Milanič


Abstract
The independence number of a tree decomposition is the maximum of the independence numbers of the subgraphs induced by its bags. The tree-independence number of a graph is the minimum independence number of a tree decomposition of it. Several NP-hard graph problems, like maximum weight independent set, can be solved in time n^𝒪(k) if the input n-vertex graph is given together with a tree decomposition of independence number k. Yolov in [SODA 2018] gave an algorithm that, given an n-vertex graph G and an integer k, in time n^𝒪(k³) either constructs a tree decomposition of G whose independence number is 𝒪(k³) or correctly reports that the tree-independence number of G is larger than k. In this paper, we first give an algorithm for computing the tree-independence number with a better approximation ratio and running time, and then prove that our algorithm is, in some sense, the best one can hope for. More precisely, our algorithm runs in time 2^𝒪(k²) n^𝒪(k) and either outputs a tree decomposition of G with independence number at most 8k, or determines that the tree-independence number of G is larger than k. This implies 2^𝒪(k²) n^𝒪(k)-time algorithms for various problems, like maximum weight independent set, parameterized by the tree-independence number k without needing the decomposition as an input. Assuming Gap-ETH, an n^Ω(k) factor in the running time is unavoidable for any approximation algorithm for the tree-independence number. Our second result is that the exact computation of the tree-independence number is para-NP-hard: We show that for every constant k ≥ 4 it is NP-hard to decide if a given graph has tree-independence number at most k.
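
Written as a formula (using tree-α(G) as an ad hoc shorthand, since the abstract fixes no notation), the parameter discussed above is

\[
\text{tree-}\alpha(G) \;=\; \min_{(T,\,\{B_t\}_{t \in V(T)})} \;\; \max_{t \in V(T)} \; \alpha\big(G[B_t]\big),
\]

where the minimum ranges over all tree decompositions of G with bags B_t and α denotes the independence number.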

Cite as

Clément Dallard, Fedor V. Fomin, Petr A. Golovach, Tuukka Korhonen, and Martin Milanič. Computing Tree Decompositions with Small Independence Number. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 51:1-51:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{dallard_et_al:LIPIcs.ICALP.2024.51,
  author =	{Dallard, Cl\'{e}ment and Fomin, Fedor V. and Golovach, Petr A. and Korhonen, Tuukka and Milani\v{c}, Martin},
  title =	{{Computing Tree Decompositions with Small Independence Number}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{51:1--51:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.51},
  URN =		{urn:nbn:de:0030-drops-201945},
  doi =		{10.4230/LIPIcs.ICALP.2024.51},
  annote =	{Keywords: tree-independence number, approximation, parameterized algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering

Authors: Sami Davies, Benjamin Moseley, and Heather Newman


Abstract
This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously O(1)-approximate for all 𝓁_p-norms of the disagreement vector; in other words, a combinatorial O(1)-approximation of the all-norms objective for correlation clustering. This is the first proof that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. In addition, our algorithm is the first combinatorial approximation algorithm for the 𝓁₂-norm objective, and more generally the first combinatorial algorithm for the 𝓁_p-norm objective when 1 < p < ∞. It is also faster than all previous algorithms that minimize the 𝓁_p-norm of the disagreement vector, with run-time O(n^ω), where n^ω is the time for matrix multiplication on n × n matrices. When the maximum positive degree in the graph is at most Δ, this can be improved to a run-time of O(nΔ² log n).
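
Under the standard definition of the disagreement vector (not restated in the abstract), the quantity being approximated for every p simultaneously is the following: for a clustering 𝒞 of the complete graph with "+"/"-" labelled edges, let

\[
y_u \;=\; \Big|\big\{\, v \;:\; uv \text{ is a ``+'' edge separated by } \mathcal{C}, \text{ or a ``--'' edge inside a cluster of } \mathcal{C} \,\big\}\Big|,
\qquad
\lVert y \rVert_p \;=\; \Big(\sum_{u \in V} y_u^{\,p}\Big)^{1/p}.
\]

The 𝓁₁-norm recovers the classical total-disagreements objective, while 𝓁_∞ penalizes the single worst-off vertex.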

Cite as

Sami Davies, Benjamin Moseley, and Heather Newman. Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{davies_et_al:LIPIcs.ICALP.2024.52,
  author =	{Davies, Sami and Moseley, Benjamin and Newman, Heather},
  title =	{{Simultaneously Approximating All 𝓁\underlinep-Norms in Correlation Clustering}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.52},
  URN =		{urn:nbn:de:0030-drops-201950},
  doi =		{10.4230/LIPIcs.ICALP.2024.52},
  annote =	{Keywords: Approximation algorithms, correlation clustering, all-norms, lp-norms}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Algorithms for Coordinated Motion Planning: Minimizing Energy

Authors: Argyrios Deligkas, Eduard Eiben, Robert Ganian, Iyad Kanj, and M. S. Ramanujan


Abstract
We study the parameterized complexity of a generalization of the coordinated motion planning problem on graphs, where the goal is to route a specified subset of a given set of k robots to their destinations with the aim of minimizing the total energy (i.e., the total length traveled). We develop novel techniques to push beyond previously-established results that were restricted to solid grids. We design a fixed-parameter additive approximation algorithm for this problem parameterized by k alone. This result, which is of independent interest, allows us to prove the following two results pertaining to well-studied coordinated motion planning problems: (1) A fixed-parameter algorithm, parameterized by k, for routing a single robot to its destination while avoiding the other robots, which is related to the famous Rush-Hour Puzzle; and (2) a fixed-parameter algorithm, parameterized by k plus the treewidth of the input graph, for the standard Coordinated Motion Planning (CMP) problem in which we need to route all the k robots to their destinations. The latter of these results implies, among others, the fixed-parameter tractability of CMP parameterized by k on graphs of bounded outerplanarity, which include bounded-height subgrids. We complement the above results with a lower bound which rules out the fixed-parameter tractability for CMP when parameterized by the total energy. This contrasts with the recently obtained tractability of the problem on solid grids under the same parameterization. As our final result, we strengthen the aforementioned fixed-parameter tractability to hold not only on solid grids but on all graphs of bounded local treewidth - a class including, among others, all graphs of bounded genus.

Cite as

Argyrios Deligkas, Eduard Eiben, Robert Ganian, Iyad Kanj, and M. S. Ramanujan. Parameterized Algorithms for Coordinated Motion Planning: Minimizing Energy. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 53:1-53:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{deligkas_et_al:LIPIcs.ICALP.2024.53,
  author =	{Deligkas, Argyrios and Eiben, Eduard and Ganian, Robert and Kanj, Iyad and Ramanujan, M. S.},
  title =	{{Parameterized Algorithms for Coordinated Motion Planning: Minimizing Energy}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{53:1--53:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.53},
  URN =		{urn:nbn:de:0030-drops-201968},
  doi =		{10.4230/LIPIcs.ICALP.2024.53},
  annote =	{Keywords: coordinated motion planning, multi-agent path finding, parameterized complexity}
}
Document
Track A: Algorithms, Complexity and Games
Nearly Optimal Independence Oracle Algorithms for Edge Estimation in Hypergraphs

Authors: Holger Dell, John Lapinskas, and Kitty Meeks


Abstract
Consider a query model of computation in which an n-vertex k-hypergraph can be accessed only via its independence oracle or via its colourful independence oracle, and each oracle query may incur a cost depending on the size of the query. Several recent results (Dell and Lapinskas, STOC 2018; Dell, Lapinskas, and Meeks, SODA 2020) give efficient algorithms to approximately count the hypergraph’s edges in the colourful setting. These algorithms immediately imply fine-grained reductions from approximate counting to decision, with overhead only log^Θ(k) n over the running time n^α of the original decision algorithm, for many well-studied problems including k-Orthogonal Vectors, k-SUM, subgraph isomorphism problems including k-Clique and colourful-H, graph motifs, and k-variable first-order model checking. We explore the limits of what is achievable in this setting, obtaining unconditional lower bounds on the oracle cost of algorithms to approximately count the hypergraph’s edges in both the colourful and uncoloured settings. In both settings, we also obtain algorithms which essentially match these lower bounds; in the colourful setting, this requires significant changes to the algorithm of Dell, Lapinskas, and Meeks (SODA 2020) and reduces the total overhead to log^{Θ(k-α)}n. Our lower bound for the uncoloured setting shows that there is no fine-grained reduction from approximate counting to the corresponding uncoloured decision problem (except in the case α ≥ k-1): without an algorithm for the colourful decision problem, we cannot hope to avoid the much larger overhead of roughly n^{(k-α)²/4}. The uncoloured setting has previously been studied for the special case k = 2 (Peled, Ramamoorthy, Rashtchian, Sinha, ITCS 2018; Chen, Levi, and Waingarten, SODA 2020), and our work generalises the existing algorithms and lower bounds for this special case to k > 2 and to oracles with cost.

Cite as

Holger Dell, John Lapinskas, and Kitty Meeks. Nearly Optimal Independence Oracle Algorithms for Edge Estimation in Hypergraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{dell_et_al:LIPIcs.ICALP.2024.54,
  author =	{Dell, Holger and Lapinskas, John and Meeks, Kitty},
  title =	{{Nearly Optimal Independence Oracle Algorithms for Edge Estimation in Hypergraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.54},
  URN =		{urn:nbn:de:0030-drops-201977},
  doi =		{10.4230/LIPIcs.ICALP.2024.54},
  annote =	{Keywords: Graph oracles, Fine-grained complexity, Approximate counting, Hypergraphs}
}
Document
Track A: Algorithms, Complexity and Games
Exploiting Automorphisms of Temporal Graphs for Fast Exploration and Rendezvous

Authors: Konstantinos Dogeas, Thomas Erlebach, Frank Kammer, Johannes Meintrup, and William K. Moses Jr.


Abstract
Temporal graphs are dynamic graphs where the edge set can change in each time step, while the vertex set stays the same. Exploration of temporal graphs whose snapshot in each time step is a connected graph, called connected temporal graphs, has been widely studied. In this paper, we extend the concept of graph automorphisms from static graphs to temporal graphs and show for the first time that symmetries enable faster exploration: We prove that a connected temporal graph with n vertices and orbit number r (i.e., r is the number of automorphism orbits) can be explored in O(r n^{1+ε}) time steps, for any fixed ε > 0. For r = O(n^c) for constant c < 1, this is a significant improvement over the known tight worst-case bound of Θ(n²) time steps for arbitrary connected temporal graphs. We also give two lower bounds for temporal exploration, showing that Ω(n log n) time steps are required for some inputs with r = O(1) and that Ω(rn) time steps are required for some inputs for any r with 1 ≤ r ≤ n. Moreover, we show that the techniques we develop for fast exploration can be used to derive the following result for rendezvous: Two agents with different programs and without communication ability are placed by an adversary at arbitrary vertices and given full information about the connected temporal graph, except that they do not have consistent vertex labels. Then the two agents can meet at a common vertex after O(n^{1+ε}) time steps, for any constant ε > 0. For some connected temporal graphs with constant orbit number, we also present a complementary lower bound of Ω(n log n) time steps.

Cite as

Konstantinos Dogeas, Thomas Erlebach, Frank Kammer, Johannes Meintrup, and William K. Moses Jr.. Exploiting Automorphisms of Temporal Graphs for Fast Exploration and Rendezvous. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 55:1-55:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{dogeas_et_al:LIPIcs.ICALP.2024.55,
  author =	{Dogeas, Konstantinos and Erlebach, Thomas and Kammer, Frank and Meintrup, Johannes and Moses Jr., William K.},
  title =	{{Exploiting Automorphisms of Temporal Graphs for Fast Exploration and Rendezvous}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{55:1--55:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.55},
  URN =		{urn:nbn:de:0030-drops-201989},
  doi =		{10.4230/LIPIcs.ICALP.2024.55},
  annote =	{Keywords: dynamic graphs, parameterized algorithms, algorithmic graph theory, graph automorphism, orbit number}
}
Document
Track A: Algorithms, Complexity and Games
Lower Bounds for Matroid Optimization Problems with a Linear Constraint

Authors: Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai


Abstract
We study a family of matroid optimization problems with a linear constraint (MOL). In these problems, we seek a subset of elements which optimizes (i.e., maximizes or minimizes) a linear objective function subject to (i) a matroid independent set or matroid basis constraint, and (ii) an additional linear constraint. A notable member in this family is budgeted matroid independent set (BM), which can be viewed as classic 0/1-Knapsack with a matroid constraint. While special cases of BM, such as knapsack with a cardinality constraint and multiple-choice knapsack, admit a fully polynomial-time approximation scheme (Fully PTAS), the best known result for BM on a general matroid is an Efficient PTAS. Prior to this work, the existence of a Fully PTAS for BM, and more generally, for any problem in the family of MOL problems, had been open. In this paper, we answer this question negatively by showing that none of the (non-trivial) problems in this family admits a Fully PTAS. This resolves the complexity status of several well studied problems. Our main result is obtained by showing first that exact weight matroid basis (EMB) does not admit a pseudo-polynomial time algorithm. This distinguishes EMB from the special cases of k-subset sum and EMB on a linear matroid, which are solvable in pseudo-polynomial time. We then obtain unconditional hardness results for the family of MOL problems in the oracle model (even if randomization is allowed), and show that the same results hold when the matroids are encoded as part of the input, assuming P ≠ NP. For the hardness proof of EMB, we introduce the Π-matroid family. This intricate subclass of matroids, which exploits the interaction between a weight function and the matroid constraint, may find use in tackling other matroid optimization problems.
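
As a representative member of the family, budgeted matroid independent set (BM) can be written explicitly: given a matroid (E, ℐ), profits p, costs c, and a budget B (notation chosen here only for illustration), the goal is

\[
\max_{S \in \mathcal{I}} \;\; \sum_{e \in S} p(e)
\qquad \text{subject to} \qquad \sum_{e \in S} c(e) \;\le\; B .
\]

Taking ℐ to be all subsets of size at most some k yields knapsack with a cardinality constraint, one of the special cases mentioned above that does admit a Fully PTAS.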

Cite as

Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai. Lower Bounds for Matroid Optimization Problems with a Linear Constraint. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 56:1-56:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{doronarad_et_al:LIPIcs.ICALP.2024.56,
  author =	{Doron-Arad, Ilan and Kulik, Ariel and Shachnai, Hadas},
  title =	{{Lower Bounds for Matroid Optimization Problems with a Linear Constraint}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{56:1--56:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.56},
  URN =		{urn:nbn:de:0030-drops-201990},
  doi =		{10.4230/LIPIcs.ICALP.2024.56},
  annote =	{Keywords: matroid optimization, budgeted problems, knapsack, approximation schemes}
}
Document
Track A: Algorithms, Complexity and Games
Non-Linear Paging

Authors: Ilan Doron-Arad and Joseph (Seffi) Naor


Abstract
We formulate and study non-linear paging - a broad model of online paging where the size of subsets of pages is determined by a monotone non-linear set function of the pages. This model captures the well-studied classic weighted paging and generalized paging problems, and also submodular and supermodular paging, studied here for the first time, that have a range of applications from virtual memory to machine learning. Unlike classic paging, the cache threshold parameter k does not yield good competitive ratios for non-linear paging. Instead, we introduce a novel parameter 𝓁 that generalizes the notion of cache size to the non-linear setting. We obtain a tight deterministic 𝓁-competitive algorithm for general non-linear paging and a o(log²𝓁)-competitive lower bound for randomized algorithms. Our algorithm is based on a new generic LP for the problem that captures both submodular and supermodular paging, in contrast to LPs used for submodular cover settings. We finally focus on the supermodular paging problem, which is a variant of online set cover and online submodular cover, where sets are repeatedly requested to be removed from the cover. We obtain polylogarithmic lower and upper bounds and an offline approximation algorithm.

Cite as

Ilan Doron-Arad and Joseph (Seffi) Naor. Non-Linear Paging. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 57:1-57:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{doronarad_et_al:LIPIcs.ICALP.2024.57,
  author =	{Doron-Arad, Ilan and Naor, Joseph (Seffi)},
  title =	{{Non-Linear Paging}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{57:1--57:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.57},
  URN =		{urn:nbn:de:0030-drops-202000},
  doi =		{10.4230/LIPIcs.ICALP.2024.57},
  annote =	{Keywords: paging, competitive analysis, non-linear paging, submodular and supermodular functions}
}
Document
Track A: Algorithms, Complexity and Games
New Tradeoffs for Decremental Approximate All-Pairs Shortest Paths

Authors: Michal Dory, Sebastian Forster, Yasamin Nazari, and Tijn de Vos


Abstract
We provide new tradeoffs between approximation and running time for the decremental all-pairs shortest paths (APSP) problem. For undirected graphs with m edges and n nodes undergoing edge deletions, we provide four new approximate decremental APSP algorithms, two for weighted and two for unweighted graphs. Our first result is (2+ε)-APSP with total update time Õ(m^{1/2}n^{3/2}) (when m = n^{1+c} for any constant 0 < c < 1). Prior to our work the fastest algorithm for weighted graphs with approximation at most 3 had total Õ(mn) update time for (1+ε)-APSP [Bernstein, SICOMP 2016]. Our second result is (2+ε, W_{u,v})-APSP with total update time Õ(nm^{3/4}), where the second term is an additive stretch with respect to W_{u,v}, the maximum weight on the shortest path from u to v. Our third result is (2+ε)-APSP for unweighted graphs in Õ(m^{7/4}) update time, which for sparse graphs (m = o(n^{8/7})) is the first subquadratic (2+ε)-approximation. Our last result for unweighted graphs is (1+ε, 2(k-1))-APSP, for k ≥ 2, with Õ(n^{2-1/k}m^{1/k}) total update time (when m = n^{1+c} for any constant c > 0). For comparison, in the special case of (1+ε, 2)-approximation, this improves over the state-of-the-art algorithm by [Henzinger, Krinninger, Nanongkai, SICOMP 2016] with total update time of Õ(n^{2.5}). All of our results are randomized, work against an oblivious adversary, and have constant query time.

Cite as

Michal Dory, Sebastian Forster, Yasamin Nazari, and Tijn de Vos. New Tradeoffs for Decremental Approximate All-Pairs Shortest Paths. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 58:1-58:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dory_et_al:LIPIcs.ICALP.2024.58,
  author =	{Dory, Michal and Forster, Sebastian and Nazari, Yasamin and de Vos, Tijn},
  title =	{{New Tradeoffs for Decremental Approximate All-Pairs Shortest Paths}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{58:1--58:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.58},
  URN =		{urn:nbn:de:0030-drops-202012},
  doi =		{10.4230/LIPIcs.ICALP.2024.58},
  annote =	{Keywords: Decremental Shortest Path, All-Pairs Shortest Paths}
}
Document
Track A: Algorithms, Complexity and Games
Decremental Matching in General Weighted Graphs

Authors: Aditi Dudeja


Abstract
In this paper, we consider the problem of maintaining a (1-ε)-approximate maximum weight matching in a dynamic graph G, while the adversary makes changes to the edges of the graph. In the fully dynamic setting, where both edge insertions and deletions are allowed, Gupta and Peng [Manoj Gupta and Richard Peng, 2013] gave an algorithm for this problem with an update time of Õ_ε(√m). We study a natural relaxation of this problem, namely the decremental model, where the adversary is only allowed to delete edges. For the unweighted version of this problem in general (possibly non-bipartite) graphs, [Sepehr Assadi et al., 2022] gave a decremental algorithm with update time O_ε(poly(log n)). However, beating Õ_ε(√m) update time remained an open problem for the weighted version in general graphs. In this paper, we bridge the gap between unweighted and weighted general graphs for the decremental setting. We give an O_ε(poly(log n)) update time algorithm that maintains a (1-ε)-approximate maximum weight matching under adversarial deletions. Like the decremental algorithm of [Sepehr Assadi et al., 2022], our algorithm is randomized, but works against an adaptive adversary. It also matches the time bound for the unweighted version up to dependencies on ε and a log R factor, where R is the ratio between the maximum and minimum edge weight in G.

Cite as

Aditi Dudeja. Decremental Matching in General Weighted Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 59:1-59:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dudeja:LIPIcs.ICALP.2024.59,
  author =	{Dudeja, Aditi},
  title =	{{Decremental Matching in General Weighted Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{59:1--59:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.59},
  URN =		{urn:nbn:de:0030-drops-202020},
  doi =		{10.4230/LIPIcs.ICALP.2024.59},
  annote =	{Keywords: Weighted Matching, Dynamic Algorithms, Adaptive Adversary}
}
Document
Track A: Algorithms, Complexity and Games
Testing C_k-Freeness in Bounded-Arboricity Graphs

Authors: Talya Eden, Reut Levi, and Dana Ron


Abstract
We study the problem of testing C_k-freeness (k-cycle-freeness) for fixed constant k > 3 in graphs with bounded arboricity (but unbounded degrees). In particular, we are interested in one-sided error algorithms, so that they must detect a copy of C_k with high constant probability when the graph is ε-far from C_k-free. We next state our results for constant arboricity and constant ε with a focus on the dependence on the number of graph vertices, n. The query complexity of all our algorithms grows polynomially with 1/ε. 1) As opposed to the case of k = 3, where the complexity of testing C₃-freeness grows with the arboricity of the graph but not with the size of the graph (Levi, ICALP 2021), this is no longer the case already for k = 4. We show that Ω(n^{1/4}) queries are necessary for testing C₄-freeness, and that Õ(n^{1/4}) are sufficient. The same bounds hold for C₅. 2) For every fixed k ≥ 6, any one-sided error algorithm for testing C_k-freeness must perform Ω(n^{1/3}) queries. 3) For k = 6 we give a testing algorithm whose query complexity is Õ(n^{1/2}). 4) For any fixed k, the query complexity of testing C_k-freeness is upper bounded by O(n^{1-1/⌊k/2⌋}). The last upper bound builds on another result in which we show that for any fixed subgraph F, the query complexity of testing F-freeness is upper bounded by O(n^{1-1/𝓁(F)}), where 𝓁(F) is a parameter of F that is always upper bounded by the number of vertices in F (and in particular is k/2 in C_k for even k). We extend some of our results to bounded (non-constant) arboricity, where, in particular, we obtain sublinear upper bounds for all k. Our Ω(n^{1/4}) lower bound for testing C₄-freeness in constant arboricity graphs provides a negative answer to an open problem posed by (Goldreich, 2021).

Cite as

Talya Eden, Reut Levi, and Dana Ron. Testing C_k-Freeness in Bounded-Arboricity Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 60:1-60:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{eden_et_al:LIPIcs.ICALP.2024.60,
  author =	{Eden, Talya and Levi, Reut and Ron, Dana},
  title =	{{Testing C\underlinek-Freeness in Bounded-Arboricity Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{60:1--60:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.60},
  URN =		{urn:nbn:de:0030-drops-202033},
  doi =		{10.4230/LIPIcs.ICALP.2024.60},
  annote =	{Keywords: Property Testing, Cycle-Freeness, Bounded Arboricity}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Algorithms for Steiner Forest in Bounded Width Graphs

Authors: Andreas Emil Feldmann and Michael Lampis


Abstract
In this paper we reassess the parameterized complexity and approximability of the well-studied Steiner Forest problem in several graph classes of bounded width. The problem takes an edge-weighted graph and pairs of vertices as input, and the aim is to find a minimum cost subgraph in which each given vertex pair lies in the same connected component. It is known that this problem is APX-hard in general, and NP-hard on graphs of treewidth 3, treedepth 4, and feedback vertex set size 2. However, Bateni, Hajiaghayi and Marx [JACM, 2011] gave an approximation scheme with a runtime of n^O(k²/ε) on graphs of treewidth k. Our main result is a much faster efficient parameterized approximation scheme (EPAS) with a runtime of 2^O(k²/ε log k/ε)⋅n^O(1). If k instead is the vertex cover number of the input graph, we show how to compute the optimum solution in 2^O(k log k)⋅n^O(1) time, and we also prove that this runtime dependence on k is asymptotically best possible, under ETH. Furthermore, if k is the size of a feedback edge set, then we obtain a faster 2^O(k)⋅n^O(1) time algorithm, which again cannot be improved under ETH.

Cite as

Andreas Emil Feldmann and Michael Lampis. Parameterized Algorithms for Steiner Forest in Bounded Width Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 61:1-61:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{feldmann_et_al:LIPIcs.ICALP.2024.61,
  author =	{Feldmann, Andreas Emil and Lampis, Michael},
  title =	{{Parameterized Algorithms for Steiner Forest in Bounded Width Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{61:1--61:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.61},
  URN =		{urn:nbn:de:0030-drops-202048},
  doi =		{10.4230/LIPIcs.ICALP.2024.61},
  annote =	{Keywords: Steiner Forest, Approximation Algorithms, FPT algorithms}
}
Document
Track A: Algorithms, Complexity and Games
An FPRAS for Two Terminal Reliability in Directed Acyclic Graphs

Authors: Weiming Feng and Heng Guo


Abstract
We give a fully polynomial-time randomized approximation scheme (FPRAS) for two terminal reliability in directed acyclic graphs (DAGs). In contrast, we also show the complementing problem of approximating two terminal unreliability in DAGs is #BIS-hard.
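
To make the quantity concrete: each edge of the DAG is present independently with its own probability, and two terminal reliability is the probability that s still reaches t. The sketch below is a plain Monte Carlo estimator written only to illustrate the problem (the edge-probability input format is an assumption made here); naive sampling needs prohibitively many samples when the reliability is tiny, which is exactly why an FPRAS is non-trivial.

import random
from collections import defaultdict, deque

def estimate_two_terminal_reliability(edges, probs, s, t, samples=100_000):
    """Monte Carlo estimate of Pr[s reaches t] in a DAG whose edge e survives
    independently with probability probs[e]. Illustration only, not an FPRAS."""
    hits = 0
    for _ in range(samples):
        alive = defaultdict(list)
        for (u, v) in edges:
            if random.random() < probs[(u, v)]:
                alive[u].append(v)
        # BFS from s over the surviving edges
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                hits += 1
                break
            for v in alive[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return hits / samples

edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
probs = {e: 0.9 for e in edges}
print(estimate_two_terminal_reliability(edges, probs, "s", "t"))
# exact value here: 1 - (1 - 0.81)^2 = 0.9639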

Cite as

Weiming Feng and Heng Guo. An FPRAS for Two Terminal Reliability in Directed Acyclic Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 62:1-62:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{feng_et_al:LIPIcs.ICALP.2024.62,
  author =	{Feng, Weiming and Guo, Heng},
  title =	{{An FPRAS for Two Terminal Reliability in Directed Acyclic Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{62:1--62:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.62},
  URN =		{urn:nbn:de:0030-drops-202057},
  doi =		{10.4230/LIPIcs.ICALP.2024.62},
  annote =	{Keywords: Approximate counting, Network reliability, Sampling algorithm}
}
Document
Track A: Algorithms, Complexity and Games
A Note on Approximating Weighted Nash Social Welfare with Additive Valuations

Authors: Yuda Feng and Shi Li


Abstract
We give the first O(1)-approximation for the weighted Nash Social Welfare problem with additive valuations. The approximation ratio we obtain is e^{1/e} + ε ≈ 1.445 + ε, which matches the best known approximation ratio for the unweighted case [Barman et al., 2018]. Both our algorithm and analysis are simple. We solve a natural configuration LP for the problem, and obtain the allocation of items to agents using a randomized version of the Shmoys-Tardos rounding algorithm developed for unrelated machine scheduling problems [Shmoys and Tardos, 1993]. In the analysis, we show that the approximation ratio of the algorithm is at most the worst gap between the Nash social welfare of the optimum allocation and that of an EF1 allocation, for an unweighted Nash Social Welfare instance with identical additive valuations. This was shown to be at most e^{1/e} ≈ 1.445 by Barman et al. [Barman et al., 2018], leading to our approximation ratio.
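
For reference, the objective being approximated is the weighted geometric mean of the agents' additive values. The helper below only evaluates that objective for a given allocation (the toy instance and names are made up for illustration); the paper's actual algorithm, rounding a configuration LP, is not reproduced here.

import math

def weighted_nsw(values, weights, allocation):
    """values[i][j]: additive value of item j for agent i.
    weights[i]: weight of agent i.
    allocation[i]: list of item indices given to agent i.
    Returns (prod_i v_i(A_i)^{w_i})^{1 / sum_i w_i}, computed in log-space."""
    total_w = sum(weights)
    log_obj = 0.0
    for i, bundle in enumerate(allocation):
        v = sum(values[i][j] for j in bundle)
        if v == 0:
            return 0.0
        log_obj += weights[i] * math.log(v)
    return math.exp(log_obj / total_w)

# Two agents, three items, a toy allocation
values = [[4, 1, 2],
          [1, 3, 3]]
weights = [2.0, 1.0]
print(weighted_nsw(values, weights, allocation=[[0], [1, 2]]))  # (4^2 * 6)^(1/3)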

Cite as

Yuda Feng and Shi Li. A Note on Approximating Weighted Nash Social Welfare with Additive Valuations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 63:1-63:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{feng_et_al:LIPIcs.ICALP.2024.63,
  author =	{Feng, Yuda and Li, Shi},
  title =	{{A Note on Approximating Weighted Nash Social Welfare with Additive Valuations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{63:1--63:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.63},
  URN =		{urn:nbn:de:0030-drops-202068},
  doi =		{10.4230/LIPIcs.ICALP.2024.63},
  annote =	{Keywords: Nash Social Welfare, Configuration LP, Approximation Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time

Authors: Nick Fischer and Leo Wennmann


Abstract
In this work we revisit the elementary scheduling problem 1||∑ p_j U_j. The goal is to select, among n jobs with processing times and due dates, a subset of jobs with maximum total processing time that can be scheduled in sequence without violating their due dates. This problem is NP-hard, but a classical algorithm by Lawler and Moore from the 60s solves this problem in pseudo-polynomial time O(nP), where P is the total processing time of all jobs. With the aim to develop best-possible pseudo-polynomial-time algorithms, a recent wave of results has improved Lawler and Moore’s algorithm for 1||∑ p_j U_j: First to time Õ(P^{7/4}) [Bringmann, Fischer, Hermelin, Shabtay, Wellnitz; ICALP'20], then to time Õ(P^{5/3}) [Klein, Polak, Rohwedder; SODA'23], and finally to time Õ(P^{7/5}) [Schieber, Sitaraman; WADS'23]. It remained an exciting open question whether these works can be improved further. In this work we develop an algorithm in near-linear time Õ(P) for the 1||∑ p_j U_j problem. This running time not only significantly improves upon the previous results, but also matches conditional lower bounds based on the Strong Exponential Time Hypothesis or the Set Cover Hypothesis and is therefore likely optimal (up to subpolynomial factors). Our new algorithm also extends to the case of m machines in time Õ(P^m). In contrast to the previous improvements, we take a different, more direct approach inspired by the recent reductions from Modular Subset Sum to dynamic string problems. We thereby arrive at a satisfyingly simple algorithm.
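
The Lawler-Moore baseline cited above can be written as a short dynamic program over achievable on-time loads, processing jobs in earliest-due-date order. Here is a bitset sketch of that classical O(nP) algorithm; the Õ(P) algorithm of the paper relies on dynamic string data structures and is not shown.

def min_tardy_processing_time(jobs):
    """jobs: list of (processing_time, due_date) with non-negative integers.
    Classic Lawler-Moore dynamic program for 1 || sum p_j U_j: returns the
    minimum total processing time of late jobs."""
    jobs = sorted(jobs, key=lambda j: j[1])      # earliest-due-date order
    total = sum(p for p, _ in jobs)
    reachable = 1                                # bit t set <=> some on-time subset has load t
    for p, d in jobs:
        # schedule this job on time only if its completion time stays <= its due date
        reachable |= (reachable << p) & ((1 << (d + 1)) - 1)
    best_on_time = reachable.bit_length() - 1    # largest achievable on-time load
    return total - best_on_time

# Three jobs (p, d): the jobs with p = 2 and p = 3 can be on time, the job with p = 4 is late
print(min_tardy_processing_time([(2, 2), (3, 5), (4, 5)]))  # 4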

Cite as

Nick Fischer and Leo Wennmann. Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 64:1-64:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{fischer_et_al:LIPIcs.ICALP.2024.64,
  author =	{Fischer, Nick and Wennmann, Leo},
  title =	{{Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{64:1--64:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.64},
  URN =		{urn:nbn:de:0030-drops-202079},
  doi =		{10.4230/LIPIcs.ICALP.2024.64},
  annote =	{Keywords: Scheduling, Fine-Grained Complexity, Dynamic Strings}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Electrical Oblivious Routing on Expanders

Authors: Cella Florescu, Rasmus Kyng, Maximilian Probst Gutenberg, and Sushant Sachdeva


Abstract
In this paper, we investigate the question of whether the electrical flow routing is a good oblivious routing scheme on an m-edge graph G = (V, E) that is a Φ-expander, i.e., where |∂ S| ≥ Φ ⋅ vol(S) for every S ⊆ V with vol(S) ≤ vol(V)/2. Beyond its simplicity and structural importance, this question is well-motivated by the current state-of-the-art of fast algorithms for 𝓁_∞ oblivious routings that reduce to the expander case, which is in turn solved by electrical flow routing. Our main result proves that the electrical routing is an O(Φ^{-1} log m)-competitive oblivious routing in the 𝓁₁- and 𝓁_∞-norms. We further observe that the oblivious routing is O(log² m)-competitive in the 𝓁₂-norm and, in fact, O(log m)-competitive if 𝓁₂-localization is O(log m), which is widely believed. Using these three upper bounds, we can smoothly interpolate to obtain upper bounds for every p ∈ [2, ∞] and q given by 1/p + 1/q = 1. Assuming 𝓁₂-localization in O(log m), we obtain that in 𝓁_p and 𝓁_q, the electrical oblivious routing is O(Φ^{-(1-2/p)} log m)-competitive. Using the currently known result for 𝓁₂-localization, this ratio deteriorates by at most a sublogarithmic factor for every p, q ≠ 2. We complement our upper bounds with lower bounds that show that the electrical routing for any such p and q is Ω(Φ^{-(1-2/p)} log m)-competitive. This renders our results in 𝓁₁ and 𝓁_∞ unconditionally tight up to constants, and the result in any 𝓁_p- and 𝓁_q-norm to be tight in case of 𝓁₂-localization in O(log m).
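
For readers new to electrical routing: with unit resistances, the electrical flow that ships one unit from s to t comes from vertex potentials φ = L^+ (e_s - e_t) for the graph Laplacian L, and the flow on an edge is the potential difference across it. The numpy sketch below implements just this routing rule for a single demand (none of the competitive analysis above is reproduced).

import numpy as np

def electrical_flow(n, edges, s, t):
    """Unit-resistance electrical flow routing one unit of demand from s to t.
    Returns a dict edge -> signed flow (positive in the direction u -> v)."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    demand = np.zeros(n)
    demand[s], demand[t] = 1.0, -1.0
    potentials = np.linalg.pinv(L) @ demand          # phi = L^+ (e_s - e_t)
    return {(u, v): potentials[u] - potentials[v] for u, v in edges}

# 4-cycle 0-1-2-3-0: routing 0 -> 2 splits the unit of flow evenly over both paths
flow = electrical_flow(4, [(0, 1), (1, 2), (2, 3), (3, 0)], s=0, t=2)
print(flow)  # each edge carries flow of magnitude 0.5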

Cite as

Cella Florescu, Rasmus Kyng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Optimal Electrical Oblivious Routing on Expanders. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 65:1-65:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{florescu_et_al:LIPIcs.ICALP.2024.65,
  author =	{Florescu, Cella and Kyng, Rasmus and Gutenberg, Maximilian Probst and Sachdeva, Sushant},
  title =	{{Optimal Electrical Oblivious Routing on Expanders}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{65:1--65:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.65},
  URN =		{urn:nbn:de:0030-drops-202083},
  doi =		{10.4230/LIPIcs.ICALP.2024.65},
  annote =	{Keywords: Expanders, Oblivious routing for 𝓁\underlinep, Electrical flow routing}
}
Document
Track A: Algorithms, Complexity and Games
Problems in NP Can Admit Double-Exponential Lower Bounds When Parameterized by Treewidth or Vertex Cover

Authors: Florent Foucaud, Esther Galby, Liana Khazaliya, Shaohua Li, Fionn Mc Inerney, Roohani Sharma, and Prafullkumar Tale


Abstract
Treewidth serves as an important parameter that, when bounded, yields tractability for a wide class of problems. For example, graph problems expressible in Monadic Second Order (MSO) logic and Quantified SAT or, more generally, Quantified CSP, are fixed-parameter tractable parameterized by the treewidth of the input’s (primal) graph plus the length of the MSO-formula [Courcelle, Information & Computation 1990] and the quantifier rank [Chen, ECAI 2004], respectively. The algorithms generated by these (meta-)results have running times whose dependence on treewidth is a tower of exponents. A conditional lower bound by Fichte, Hecher, and Pfandler [LICS 2020] shows that, for Quantified SAT, the height of this tower is equal to the number of quantifier alternations. These types of lower bounds, which show that at least double-exponential factors in the running time are necessary, exhibit the extraordinary level of computational hardness for such problems, and are rare in the current literature: there are only a handful of such lower bounds (for treewidth and vertex cover parameterizations) and all of them are for problems that are #NP-complete, Σ₂^p-complete, Π₂^p-complete, or complete for even higher levels of the polynomial hierarchy. Our results demonstrate, for the first time, that it is not necessary to go higher up in the polynomial hierarchy to achieve double-exponential lower bounds: we derive double-exponential lower bounds in the treewidth (tw) and the vertex cover number (vc), for natural, important, and well-studied NP-complete graph problems. Specifically, we design a technique to obtain such lower bounds and show its versatility by applying it to three different problems: Metric Dimension, Strong Metric Dimension, and Geodetic Set. We prove that these problems do not admit 2^{2^o(tw)}⋅n^𝒪(1)-time algorithms, even on bounded diameter graphs, unless the ETH fails (here, n is the number of vertices in the graph). In fact, for Strong Metric Dimension, the double-exponential lower bound holds even for the vertex cover number. We further complement all our lower bounds with matching (and sometimes non-trivial) upper bounds. For the conditional lower bounds, we design and use a novel, yet simple technique based on Sperner families of sets. We believe that the amenability of our technique will lead to obtaining such lower bounds for many other problems in NP.
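
To make one of the studied problems concrete: a set S ⊆ V resolves G if every vertex has a distinct vector of distances to S, and Metric Dimension asks for the smallest resolving set. The brute-force checker below (exponential in n, assuming a connected graph given as an adjacency dict) only illustrates the definition; the paper's point is that even parameterized algorithms must pay a double-exponential price in the treewidth.

from itertools import combinations
from collections import deque

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_dimension(adj):
    """Smallest size of a resolving set, by brute force over all vertex subsets."""
    vertices = list(adj)
    dist = {v: bfs_distances(adj, v) for v in vertices}   # all-pairs distances (connected graph)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            signatures = {tuple(dist[s][v] for s in S) for v in vertices}
            if len(signatures) == len(vertices):          # all distance vectors distinct
                return k, S
    return len(vertices), tuple(vertices)

# Path on 4 vertices: a single endpoint already resolves everything
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(path))  # (1, (0,))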

Cite as

Florent Foucaud, Esther Galby, Liana Khazaliya, Shaohua Li, Fionn Mc Inerney, Roohani Sharma, and Prafullkumar Tale. Problems in NP Can Admit Double-Exponential Lower Bounds When Parameterized by Treewidth or Vertex Cover. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 66:1-66:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{foucaud_et_al:LIPIcs.ICALP.2024.66,
  author =	{Foucaud, Florent and Galby, Esther and Khazaliya, Liana and Li, Shaohua and Mc Inerney, Fionn and Sharma, Roohani and Tale, Prafullkumar},
  title =	{{Problems in NP Can Admit Double-Exponential Lower Bounds When Parameterized by Treewidth or Vertex Cover}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{66:1--66:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.66},
  URN =		{urn:nbn:de:0030-drops-202091},
  doi =		{10.4230/LIPIcs.ICALP.2024.66},
  annote =	{Keywords: Parameterized Complexity, ETH-based Lower Bounds, Double-Exponential Lower Bounds, Kernelization, Vertex Cover, Treewidth, Diameter, Metric Dimension, Strong Metric Dimension, Geodetic Sets}
}
Document
Track A: Algorithms, Complexity and Games
Subexponential Parameterized Directed Steiner Network Problems on Planar Graphs: A Complete Classification

Authors: Esther Galby, Sándor Kisfaludi-Bak, Dániel Marx, and Roohani Sharma


Abstract
In the Directed Steiner Network problem, the input is a directed graph G, a set T ⊆ V(G) of k terminals, and a demand graph D on T. The task is to find a subgraph H ⊆ G with the minimum number of edges such that for every (s,t) ∈ E(D), the solution H contains a directed s → t path. The goal of this paper is to investigate how the complexity of the problem depends on the demand pattern in planar graphs. Formally, if 𝒟 is a class of directed graphs, then the 𝒟-Steiner Network (𝒟-DSN) problem is the special case where the demand graph D is restricted to be from 𝒟. We give a complete characterization of the behavior of every 𝒟-DSN problem on planar graphs. We classify every class 𝒟 closed under transitive equivalence and identification of vertices into three cases: assuming ETH, either the problem is 1) solvable in time 2^O(k)⋅n^O(1), i.e., FPT parameterized by the number k of terminals, but not solvable in time 2^o(k)⋅n^O(1), 2) solvable in time f(k)⋅n^O(√k), but cannot be solved in time f(k)⋅n^o(√k), or 3) solvable in time f(k)⋅n^O(k), but cannot be solved in time f(k)⋅n^o(k). Our result is a far-reaching generalization and unification of earlier results on Directed Steiner Tree, Directed Steiner Network, and Strongly Connected Steiner Subgraph on planar graphs. As an important step of our lower bound proof, we discover a rare example of a genuinely planar problem (i.e., described by a planar graph and two sets of vertices) that cannot be solved in time f(k)⋅n^o(k): given two sets of terminals S and T with |S|+|T| = k, find a subgraph with minimum number of edges such that every vertex of T is reachable from every vertex of S.

Cite as

Esther Galby, Sándor Kisfaludi-Bak, Dániel Marx, and Roohani Sharma. Subexponential Parameterized Directed Steiner Network Problems on Planar Graphs: A Complete Classification. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 67:1-67:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{galby_et_al:LIPIcs.ICALP.2024.67,
  author =	{Galby, Esther and Kisfaludi-Bak, S\'{a}ndor and Marx, D\'{a}niel and Sharma, Roohani},
  title =	{{Subexponential Parameterized Directed Steiner Network Problems on Planar Graphs: A Complete Classification}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{67:1--67:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.67},
  URN =		{urn:nbn:de:0030-drops-202104},
  doi =		{10.4230/LIPIcs.ICALP.2024.67},
  annote =	{Keywords: Directed Steiner Network, Sub-exponential algorithm}
}
Document
Track A: Algorithms, Complexity and Games
A Tight Subexponential-Time Algorithm for Two-Page Book Embedding

Authors: Robert Ganian, Haiko Müller, Sebastian Ordyniak, Giacomo Paesani, and Mateusz Rychlicki


Abstract
A book embedding of a graph is a drawing that maps vertices onto a line and edges to simple pairwise non-crossing curves drawn into "pages", which are half-planes bounded by that line. Two-page book embeddings, i.e., book embeddings into 2 pages, are of special importance as they are both NP-hard to compute and have specific applications. We obtain a 2^𝒪(√n) algorithm for computing a book embedding of an n-vertex graph on two pages - a result which is asymptotically tight under the Exponential Time Hypothesis. As a key tool in our approach, we obtain a single-exponential fixed-parameter algorithm for the same problem when parameterized by the treewidth of the input graph. We conclude by establishing the fixed-parameter tractability of computing minimum-page book embeddings when parameterized by the feedback edge number, settling an open question arising from previous work on the problem.
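
Concretely, a two-page book embedding is a spine order of the vertices plus an assignment of edges to two pages such that no two same-page edges have interleaving endpoints along the spine. The verifier below checks a candidate embedding (the input conventions are assumptions made for the example); finding such an embedding in 2^𝒪(√n) time is the paper's contribution and is not sketched here.

def is_two_page_embedding(order, page_of):
    """order: list of vertices along the spine.
    page_of: dict mapping each edge (u, v) to page 0 or 1.
    Returns True iff no two edges on the same page cross."""
    pos = {v: i for i, v in enumerate(order)}
    edges = list(page_of)
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if page_of[edges[i]] != page_of[edges[j]]:
                continue
            a, b = sorted(pos[v] for v in edges[i])
            c, d = sorted(pos[v] for v in edges[j])
            if a < c < b < d or c < a < d < b:   # interleaving endpoints => crossing
                return False
    return True

# K4 with spine order 0,1,2,3: the two crossing chords (0,2) and (1,3) go on different pages
pages = {(0, 1): 0, (1, 2): 0, (2, 3): 0, (0, 3): 0, (0, 2): 0, (1, 3): 1}
print(is_two_page_embedding([0, 1, 2, 3], pages))  # True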

Cite as

Robert Ganian, Haiko Müller, Sebastian Ordyniak, Giacomo Paesani, and Mateusz Rychlicki. A Tight Subexponential-Time Algorithm for Two-Page Book Embedding. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 68:1-68:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{ganian_et_al:LIPIcs.ICALP.2024.68,
  author =	{Ganian, Robert and M\"{u}ller, Haiko and Ordyniak, Sebastian and Paesani, Giacomo and Rychlicki, Mateusz},
  title =	{{A Tight Subexponential-Time Algorithm for Two-Page Book Embedding}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{68:1--68:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.68},
  URN =		{urn:nbn:de:0030-drops-202114},
  doi =		{10.4230/LIPIcs.ICALP.2024.68},
  annote =	{Keywords: book embedding, page number, subexponential algorithms, subhamiltonicity, feedback edge number}
}
Document
Track A: Algorithms, Complexity and Games
Quantum Algorithms for Graph Coloring and Other Partitioning, Covering, and Packing Problems

Authors: Serge Gaspers and Jerry Zirui Li


Abstract
Let U be a universe on n elements, let k be a positive integer, and let ℱ be a family of (implicitly defined) subsets of U. We consider the problems of partitioning U into k sets from ℱ, covering U with k sets from ℱ, and packing k non-intersecting sets from ℱ into U. Classically, these problems can be solved via inclusion-exclusion in 2ⁿ n^O(1) time [Andreas Björklund et al., 2009]. Quantumly, there are faster algorithms for graph coloring with running time O(1.9140ⁿ) [Kazuya Shimizu and Ryuhei Mori, 2022] and for Set Cover with a small number of sets with running time O(1.7274ⁿ |ℱ|^O(1)) [Andris Ambainis et al., 2019]. In this paper, we give a quantum speedup for Set Partition, Set Cover, and Set Packing whenever there is a classical enumeration algorithm that lends itself to a quadratic quantum speedup, which, for any subinstance on a set X ⊆ U, enumerates at least one member of a k-partition, k-cover, or k-packing (if one exists) restricted to (or projected onto, in the case of k-cover) the set X in c^|X| n^O(1) time with c < 2. Our bounded-error quantum algorithm runs in time (2+c)^{n/2} n^O(1) for Set Partition, Set Cover, and Set Packing. It is obtained by combining three algorithms that have the best running time for some values of c. When c ≤ 1.147899, our algorithm is slightly faster than (2+c)^{n/2} n^O(1); when c approaches 1, it matches the O(1.7274ⁿ |ℱ|^O(1)) running time of [Andris Ambainis et al., 2019] for Set Cover when |ℱ| is subexponential in n. For covering, packing, and partitioning into maximal independent sets, maximal cliques, maximal bicliques, maximal cluster graphs, maximal triangle-free graphs, maximal cographs, maximal claw-free graphs, maximal trivially-perfect graphs, maximal threshold graphs, maximal split graphs, maximal line graphs, and maximal induced forests, we obtain bounded-error quantum algorithms with running times ranging from O(1.8554ⁿ) to O(1.9629ⁿ). Packing and covering by maximal induced matchings can be done quantumly in O(1.8934ⁿ) time. For Graph Coloring (covering with k maximal independent sets), we further improve the running time to O(1.7956ⁿ) by leveraging faster algorithms for coloring with a small number of colors to better balance our divide-and-conquer steps. For Domatic Number (packing k minimal dominating sets), we obtain a O((2-ε)ⁿ) running time for some ε > 0. Several of our results should be of interest to proponents of classical computing: - We present an inclusion-exclusion algorithm with running time O^*(∑_{i=0}^⌊αn⌋ binom(n,i)), which determines, for each X ⊆ U of size at most α n, 0 ≤ α ≤ 1, whether (X,ℱ) has a k-cover, k-partition, or k-packing. This running time is best-possible, up to polynomial factors. - We prove that for any linear-sized vertex subset X ⊆ V of a graph G = (V,E), the number of minimal dominating sets of G that are subsets of X is O((2-ε)^|X|) for some ε > 0.
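
The classical 2ⁿ n^O(1) inclusion-exclusion baseline referenced above decides k-colorability by counting covers with independent sets: G is k-colorable iff ∑_{X ⊆ V} (-1)^{|V∖X|} s(X)^k > 0, where s(X) counts the independent sets contained in X. A compact bitmask sketch of that classical algorithm follows (the quantum speedups are, of course, not reproduced here).

def is_k_colorable(n, edges, k):
    """Decide k-colorability of an n-vertex graph via inclusion-exclusion.
    Vertices are 0..n-1, edges are pairs (u, v)."""
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u

    # s[X] = number of independent sets that are subsets of X (including the empty set)
    s = [0] * (1 << n)
    s[0] = 1
    for X in range(1, 1 << n):
        v = (X & -X).bit_length() - 1                # lowest vertex in X
        # independent subsets of X either avoid v, or contain v and avoid N(v)
        s[X] = s[X & ~(1 << v)] + s[X & ~((1 << v) | nbr[v])]

    total = 0
    for X in range(1 << n):
        sign = -1 if (n - bin(X).count("1")) % 2 else 1
        total += sign * (s[X] ** k)                  # counts ordered covers of V by k independent sets
    return total > 0

# A 5-cycle is 3-colorable but not 2-colorable
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_k_colorable(5, c5, 2), is_k_colorable(5, c5, 3))  # False True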

Cite as

Serge Gaspers and Jerry Zirui Li. Quantum Algorithms for Graph Coloring and Other Partitioning, Covering, and Packing Problems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 69:1-69:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{gaspers_et_al:LIPIcs.ICALP.2024.69,
  author =	{Gaspers, Serge and Li, Jerry Zirui},
  title =	{{Quantum Algorithms for Graph Coloring and Other Partitioning, Covering, and Packing Problems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{69:1--69:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.69},
  URN =		{urn:nbn:de:0030-drops-202124},
  doi =		{10.4230/LIPIcs.ICALP.2024.69},
  annote =	{Keywords: Graph algorithms, quantum algorithms, graph coloring, domatic number, set cover, set partition, set packing}
}
Document
Track A: Algorithms, Complexity and Games
BQP, Meet NP: Search-To-Decision Reductions and Approximate Counting

Authors: Sevag Gharibian and Jonas Kamminga


Abstract
What is the power of polynomial-time quantum computation with access to an NP oracle? In this work, we focus on two fundamental tasks from the study of Boolean satisfiability (SAT) problems: search-to-decision reductions, and approximate counting. We first show that, in strong contrast to the classical setting where a poly-time Turing machine requires Θ(n) queries to an NP oracle to compute a witness to a given SAT formula, quantumly Θ(log n) queries suffice. We then show this is tight in the black-box model - any quantum algorithm with "NP-like" query access to a formula requires Ω(log n) queries to extract a solution with constant probability. Moving to approximate counting of SAT solutions, by exploiting a quantum link between search-to-decision reductions and approximate counting, we show that existing classical approximate counting algorithms are likely optimal. First, we give a lower bound in the "NP-like" black-box query setting: Approximate counting requires Ω(log n) queries, even on a quantum computer. We then give a "white-box" lower bound (i.e. where the input formula is not hidden in the oracle) - if there exists a randomized poly-time classical or quantum algorithm for approximate counting making o(log n) NP queries, then BPP^NP[o(n)] contains a 𝖯^NP-complete problem if the algorithm is classical and FBQP^NP[o(n)] contains an FP^NP-complete problem if the algorithm is quantum.
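
The classical Θ(n)-query search-to-decision reduction that the abstract contrasts with simply fixes one variable per NP query via self-reducibility. The sketch below does exactly that, with a hypothetical sat_oracle callable standing in for the NP oracle (instantiated here by brute force only so the example runs); the O(log n)-query quantum procedure is the paper's result and is not shown.

from itertools import product

def find_witness(formula, num_vars, sat_oracle):
    """formula: CNF as a list of clauses with DIMACS-style literals (+i is x_i, -i is NOT x_i).
    sat_oracle(cnf) -> bool stands in for the NP oracle.
    Returns a satisfying assignment using one initial query plus num_vars further
    queries, by classical self-reducibility (one variable fixed per query)."""
    if not sat_oracle(formula):
        return None
    assignment = {}
    current = list(formula)
    for i in range(1, num_vars + 1):
        trial = current + [[i]]                    # force x_i = True with a unit clause
        if sat_oracle(trial):
            assignment[i] = True
            current = trial
        else:                                      # formula is satisfiable, so x_i = False must work
            assignment[i] = False
            current = current + [[-i]]
    return assignment

def brute_force_sat(cnf):
    """Toy stand-in for the NP oracle: exhaustive search (exponential, illustration only)."""
    n = max((abs(l) for c in cnf for l in c), default=0)
    return any(all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in cnf)
               for bits in product([False, True], repeat=n))

print(find_witness([[1, 2], [-1, 3]], 3, brute_force_sat))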

Cite as

Sevag Gharibian and Jonas Kamminga. BQP, Meet NP: Search-To-Decision Reductions and Approximate Counting. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 70:1-70:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{gharibian_et_al:LIPIcs.ICALP.2024.70,
  author =	{Gharibian, Sevag and Kamminga, Jonas},
  title =	{{BQP, Meet NP: Search-To-Decision Reductions and Approximate Counting}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{70:1--70:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.70},
  URN =		{urn:nbn:de:0030-drops-202134},
  doi =		{10.4230/LIPIcs.ICALP.2024.70},
  annote =	{Keywords: Approximate Counting, Search to Decision Reduction, BQP, NP, Oracle Complexity Class}
}
Document
Track A: Algorithms, Complexity and Games
Low-Memory Algorithms for Online Edge Coloring

Authors: Prantar Ghosh and Manuel Stoeckl


Abstract
For edge coloring, the online and the W-streaming models seem somewhat orthogonal: the former needs edges to be assigned colors immediately after insertion, typically without any space restrictions, while the latter limits memory to be sublinear in the input size but allows an edge’s color to be announced any time after its insertion. We aim for the best of both worlds by designing small-space online algorithms for edge coloring. Our online algorithms significantly improve upon the memory used by prior ones while achieving an O(1)-competitive ratio. We study the problem under both (adversarial) edge arrivals and vertex arrivals. Under vertex arrivals of any n-node graph with maximum vertex-degree Δ, our online O(Δ)-coloring algorithm uses only semi-streaming space (i.e., Õ(n) space, where the Õ(.) notation hides polylog(n) factors). Under edge arrivals, we obtain an online O(Δ)-coloring in Õ(n√Δ) space. We also achieve a smooth color-space tradeoff: for any t = O(Δ), we get an O(Δt(log²Δ))-coloring in Õ(n√{Δ/t}) space, improving upon the state of the art that used Õ(nΔ/t) space for the same number of colors. The improvements stem from extensive use of random permutations that enable us to avoid previously used colors. Most of our algorithms can be derandomized and extended to multigraphs, where edge coloring is known to be considerably harder than for simple graphs.
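
As a baseline for the online model: coloring each arriving edge with the smallest color unused at both endpoints needs at most 2Δ-1 colors on a simple graph (hence O(1)-competitive in the number of colors) but remembers every incident color, i.e., Θ(m) state. The sketch below is that trivial algorithm; shrinking the memory is what the random-permutation technique above achieves, and it is not reproduced here.

from collections import defaultdict

class GreedyOnlineEdgeColoring:
    """Colors edges as they arrive with the smallest color free at both endpoints."""
    def __init__(self):
        self.used = defaultdict(set)     # vertex -> set of colors on its incident edges

    def insert(self, u, v):
        color = 0
        while color in self.used[u] or color in self.used[v]:
            color += 1
        self.used[u].add(color)
        self.used[v].add(color)
        return color

coloring = GreedyOnlineEdgeColoring()
for edge in [(0, 1), (1, 2), (2, 0), (1, 3)]:
    print(edge, "->", coloring.insert(*edge))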

Cite as

Prantar Ghosh and Manuel Stoeckl. Low-Memory Algorithms for Online Edge Coloring. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 71:1-71:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{ghosh_et_al:LIPIcs.ICALP.2024.71,
  author =	{Ghosh, Prantar and Stoeckl, Manuel},
  title =	{{Low-Memory Algorithms for Online Edge Coloring}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{71:1--71:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.71},
  URN =		{urn:nbn:de:0030-drops-202146},
  doi =		{10.4230/LIPIcs.ICALP.2024.71},
  annote =	{Keywords: Edge coloring, streaming model, online algorithms}
}
Document
Track A: Algorithms, Complexity and Games
On the Smoothed Complexity of Combinatorial Local Search

Authors: Yiannis Giannakopoulos, Alexander Grosz, and Themistoklis Melissourgos


Abstract
We propose a unifying framework for smoothed analysis of combinatorial local optimization problems, and show how a diverse selection of problems within the complexity class PLS can be cast within this model. This abstraction allows us to identify key structural properties, and corresponding parameters, that determine the smoothed running time of local search dynamics. We formalize this via a black-box tool that provides concrete bounds on the expected maximum number of steps needed until local search reaches an exact local optimum. This bound is particularly strong, in the sense that it holds for any starting feasible solution, any choice of pivoting rule, and does not rely on the choice of specific noise distributions that are applied on the input, but it is parameterized by just a global upper bound ϕ on the probability density. The power of this tool can be demonstrated by instantiating it for various PLS-hard problems of interest to derive efficient smoothed running times (as a function of ϕ and the input size). Most notably, we focus on the important local optimization problem of finding pure Nash equilibria in Congestion Games, that has not been studied before from a smoothed analysis perspective. Specifically, we propose novel smoothed analysis models for general and Network Congestion Games, under various representations, including explicit, step-function, and polynomial resource latencies. We study PLS-hard instances of these problems and show that their standard local search algorithms run in polynomial smoothed time. Further applications of our framework to a wide range of additional combinatorial problems can be found in the full version of our paper.
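
The local search dynamics being analyzed can be made concrete in the simplest congestion-game setting: each player controls one resource choice and repeatedly switches to a strictly cheaper resource, and Rosenthal's potential guarantees termination in a pure Nash equilibrium. The sketch below runs these better-response dynamics on a toy singleton game (the instance and names are made up; the smoothed-analysis model and bounds are the paper's and are not reproduced).

def better_response_dynamics(latencies, strategy_sets, start):
    """Singleton congestion game: each player picks one resource.
    latencies[r][x] = cost of resource r when x players use it (indexed by load, from 1).
    Runs better-response dynamics until a pure Nash equilibrium is reached."""
    choice = list(start)
    n = len(choice)

    def load(r):
        return sum(1 for c in choice if c == r)

    improved, steps = True, 0
    while improved:
        improved = False
        for i in range(n):
            current_cost = latencies[choice[i]][load(choice[i])]
            for r in strategy_sets[i]:
                if r == choice[i]:
                    continue
                new_cost = latencies[r][load(r) + 1]      # cost if player i joins r
                if new_cost < current_cost:               # strict improvement => potential drops
                    choice[i] = r
                    steps += 1
                    improved = True
                    break
    return choice, steps

# Two resources whose latency is 1x resp. 2x the load; three players may use either resource
latencies = {"a": [None, 1, 2, 3], "b": [None, 2, 4, 6]}
strategy_sets = [{"a", "b"}] * 3
print(better_response_dynamics(latencies, strategy_sets, start=["b", "b", "b"]))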

Cite as

Yiannis Giannakopoulos, Alexander Grosz, and Themistoklis Melissourgos. On the Smoothed Complexity of Combinatorial Local Search. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 72:1-72:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{giannakopoulos_et_al:LIPIcs.ICALP.2024.72,
  author =	{Giannakopoulos, Yiannis and Grosz, Alexander and Melissourgos, Themistoklis},
  title =	{{On the Smoothed Complexity of Combinatorial Local Search}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{72:1--72:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.72},
  URN =		{urn:nbn:de:0030-drops-202154},
  doi =		{10.4230/LIPIcs.ICALP.2024.72},
  annote =	{Keywords: Smoothed Analysis, local search, better-response dynamics, PLS-hardness, combinatorial local optimization, congestion games, pure Nash equilibria}
}
Document
Track A: Algorithms, Complexity and Games
A Characterization of Complexity in Public Goods Games

Authors: Matan Gilboa


Abstract
We complete the characterization of the computational complexity of equilibrium in public goods games on graphs. In this model, each vertex represents an agent deciding whether to produce a public good, with utility defined by a "best-response pattern" determining the best response to any number of productive neighbors. We prove that the equilibrium problem is NP-complete for every finite non-monotone best-response pattern. This answers the open problem of [Gilboa and Nisan, 2022], and completes the answer to a question raised by [Papadimitriou and Peng, 2021], for all finite best-response patterns.
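
In this model an agent's best response depends only on how many of its neighbors produce, summarized by a best-response pattern. The checker below verifies whether a production profile is a pure equilibrium for a given pattern, using the classic best-shot pattern as a toy example; it only illustrates the model, not the NP-completeness construction.

def is_equilibrium(adj, pattern, produces):
    """adj: dict vertex -> list of neighbors.
    pattern: list where pattern[x] in {0, 1} is the best response of an agent
             with exactly x producing neighbors.
    produces: dict vertex -> 0/1 current decision.
    Returns True iff every agent already plays its best response."""
    for v, neighbors in adj.items():
        producing_neighbors = sum(produces[u] for u in neighbors)
        if produces[v] != pattern[producing_neighbors]:
            return False
    return True

# Best-shot pattern: produce iff no neighbor produces.
# On the path 0-1-2, letting only the middle vertex produce is an equilibrium.
path = {0: [1], 1: [0, 2], 2: [1]}
best_shot = [1, 0, 0]
print(is_equilibrium(path, best_shot, {0: 0, 1: 1, 2: 0}))  # True
print(is_equilibrium(path, best_shot, {0: 1, 1: 1, 2: 0}))  # False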

Cite as

Matan Gilboa. A Characterization of Complexity in Public Goods Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 73:1-73:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{gilboa:LIPIcs.ICALP.2024.73,
  author =	{Gilboa, Matan},
  title =	{{A Characterization of Complexity in Public Goods Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{73:1--73:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.73},
  URN =		{urn:nbn:de:0030-drops-202164},
  doi =		{10.4230/LIPIcs.ICALP.2024.73},
  annote =	{Keywords: Nash Equilibrium, Public Goods, Computational Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error

Authors: Guy Goldberg


Abstract
Relaxed locally decodable codes (RLDCs) are error-correcting codes in which individual bits of the message can be recovered by querying only a few bits from a noisy codeword. For uncorrupted codewords, and for every bit, the decoder must decode the bit correctly with high probability. However, for a noisy codeword, a relaxed local decoder is allowed to output a "rejection" symbol, indicating that the decoding failed. We study the power of adaptivity and two-sided error for RLDCs. Our main result is that if the underlying code is linear, adaptivity and two-sided error do not give any power to relaxed local decoding. We construct a reduction from adaptive, two-sided error relaxed local decoders to non-adaptive, one-sided error ones. That is, the reduction produces a relaxed local decoder that never errs or rejects if its input is a valid codeword and makes queries based on its internal randomness (and the requested index to decode), independently of the input. The reduction essentially maintains the query complexity, requiring at most one additional query. For any input, the decoder’s error probability increases at most two-fold. Furthermore, assuming the underlying code is in systematic form, where the original message is embedded as the first bits of its encoding, the reduction also conserves both the code itself and its rate and distance properties. We base the reduction on our new notion of additive promise problems. A promise problem is additive if the sum of any two YES-instances is a YES-instance and the sum of any NO-instance and a YES-instance is a NO-instance. This novel framework captures both linear RLDCs and property testing (of linear properties), despite their significant differences. We prove that in general, algorithms for any additive promise problem do not gain power from adaptivity or two-sided error, and obtain the result for RLDCs as a special case. The result also holds for relaxed locally correctable codes (RLCCs), where a codeword bit should be recovered. As an application, we improve the best known lower bound for linear adaptive RLDCs. Specifically, we prove that such codes require block length of n ≥ k^{1+Ω(1/q²)}, where k denotes the message length and q denotes the number of queries.
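
For background on what a (non-relaxed) local decoder looks like, the classic example is the Hadamard code: message bit x_i equals the XOR of the codeword at a random position a and at a ⊕ e_i, so two non-adaptive queries suffice. The small sketch below is that standard decoder; it is only context for the notions above, not the paper's reduction.

import random

def hadamard_encode(x):
    """Hadamard code over GF(2): the codeword entry indexed by mask a is the
    parity of the message bits selected by a."""
    k = len(x)
    return [sum(x[j] for j in range(k) if a >> j & 1) % 2 for a in range(1 << k)]

def local_decode_bit(codeword, k, i):
    """Classic 2-query, non-adaptive local decoder: query a random position a and
    the shifted position a XOR e_i, and output the XOR of the two answers."""
    a = random.randrange(1 << k)
    return (codeword[a] + codeword[a ^ (1 << i)]) % 2

message = [1, 0, 1]
word = hadamard_encode(message)
word[5] ^= 1    # corrupt one of the 8 positions
print([local_decode_bit(word, len(message), i) for i in range(len(message))])
# each decoded bit is wrong only if one of its two queries hits position 5 (probability 1/4)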

Cite as

Guy Goldberg. Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 74:1-74:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{goldberg:LIPIcs.ICALP.2024.74,
  author =	{Goldberg, Guy},
  title =	{{Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{74:1--74:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.74},
  URN =		{urn:nbn:de:0030-drops-202174},
  doi =		{10.4230/LIPIcs.ICALP.2024.74},
  annote =	{Keywords: Locally decodable codes, Relaxed locally correctable codes, Relaxed locally decodable codes}
}
Document
Track A: Algorithms, Complexity and Games
Sharp Noisy Binary Search with Monotonic Probabilities

Authors: Lucas Gretta and Eric Price


Abstract
We revisit the noisy binary search model of [Karp and Kleinberg, 2007], in which we have n coins with unknown probabilities p_i that we can flip. The coins are sorted by increasing p_i, and we would like to find where the probability crosses (to within ε) a target value τ. This generalizes the fixed-noise model of [Burnashev and Zigangirov, 1974], in which p_i = 1/2 ± ε, to a setting where coins near the target may be indistinguishable from it. It was shown in [Karp and Kleinberg, 2007] that Θ(1/ε² log n) samples are necessary and sufficient for this task. We produce a practical algorithm by solving two theoretical challenges: high-probability behavior and sharp constants. We give an algorithm that succeeds with probability 1-δ from 1/C_{τ, ε} ⋅ (log₂ n + O(log^{2/3} n log^{1/3}(1/δ) + log(1/δ))) samples, where C_{τ, ε} is the optimal such constant achievable. For δ > n^{-o(1)} this is within 1 + o(1) of optimal, and for δ ≪ 1 it is the first bound within constant factors of optimal.
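
For intuition about the model, the folklore baseline flips each probed coin enough times to estimate its bias to within ε (with a union bound over the roughly log₂ n probes) and binary-searches on the empirical biases; its sample cost is far from the sharp constant C_{τ,ε} achieved above. A sketch of that naive procedure, with function names and the coin-flip interface as assumptions made for the example:

import math, random

def noisy_binary_search(flip, n, tau, eps, delta):
    """flip(i) returns one Bernoulli(p_i) sample, with p_0 <= p_1 <= ... <= p_{n-1}.
    Naive baseline: with probability >= 1 - delta every empirical bias is within
    eps of the true bias (Hoeffding + union bound over the ~log2(n) probes), so
    the returned index localizes the tau-crossing to within eps."""
    rounds = max(1, math.ceil(math.log2(n)))
    flips_per_coin = math.ceil(math.log(2 * rounds / delta) / (2 * eps ** 2))
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        estimate = sum(flip(mid) for _ in range(flips_per_coin)) / flips_per_coin
        if estimate < tau:
            lo = mid + 1      # the crossing lies to the right of mid (up to eps)
        else:
            hi = mid          # the crossing lies at mid or to its left (up to eps)
    return lo

# Toy instance: linearly increasing biases, target tau = 0.5
n = 100
probs = [i / (n - 1) for i in range(n)]
index = noisy_binary_search(lambda i: random.random() < probs[i], n, tau=0.5, eps=0.05, delta=0.1)
print(index, probs[index])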

Cite as

Lucas Gretta and Eric Price. Sharp Noisy Binary Search with Monotonic Probabilities. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 75:1-75:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{gretta_et_al:LIPIcs.ICALP.2024.75,
  author =	{Gretta, Lucas and Price, Eric},
  title =	{{Sharp Noisy Binary Search with Monotonic Probabilities}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{75:1--75:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.75},
  URN =		{urn:nbn:de:0030-drops-202188},
  doi =		{10.4230/LIPIcs.ICALP.2024.75},
  annote =	{Keywords: fine-grained algorithms, randomized/probabilistic methods, sublinear/streaming algorithms, noisy binary search}
}
Document
Track A: Algorithms, Complexity and Games
Solution Discovery via Reconfiguration for Problems in P

Authors: Mario Grobler, Stephanie Maaz, Nicole Megow, Amer E. Mouawad, Vijayaragunathan Ramamoorthi, Daniel Schmand, and Sebastian Siebertz


Abstract
In the recently introduced framework of solution discovery via reconfiguration [Fellows et al., ECAI 2023], we are given an initial configuration of k tokens on a graph and the question is whether we can transform this configuration into a feasible solution (for some problem) via a bounded number b of small modification steps. In this work, we study solution discovery variants of polynomial-time solvable problems, namely Spanning Tree Discovery, Shortest Path Discovery, Matching Discovery, and Vertex/Edge Cut Discovery in the unrestricted token addition/removal model, the token jumping model, and the token sliding model. In the unrestricted token addition/removal model, we show that all four discovery variants remain in P. For the token jumping model we also prove containment in P, except for Vertex/Edge Cut Discovery, for which we prove NP-completeness. Finally, in the token sliding model, almost all considered problems become NP-complete, the exception being Spanning Tree Discovery, which remains polynomial-time solvable. We then study the parameterized complexity of the NP-complete problems and provide a full classification of tractability with respect to the parameters solution size (number of tokens) k and transformation budget (number of steps) b. Along the way, we observe strong connections between the solution discovery variants of our base problems and their (weighted) rainbow variants as well as their red-blue variants with cardinality constraints.
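
To make the discovery framework concrete, here is the naive exponential-time formulation as breadth-first search over token configurations; a step either slides a token along an edge or jumps it to an arbitrary free vertex, depending on the model. The graph, the feasibility predicate and the budget b are assumed inputs, and nothing here reflects the paper's polynomial-time algorithms or hardness reductions.

from collections import deque

def discoverable(graph, start_tokens, is_feasible, budget, model="sliding"):
    # Naive BFS over configurations (illustration only; exponential in general).
    # graph: dict vertex -> set of neighbours, start_tokens: iterable of vertices,
    # is_feasible: predicate on a frozenset of token positions, budget: max number of steps,
    # model: "sliding" (move along an edge) or "jumping" (move to any free vertex).
    start = frozenset(start_tokens)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        config, steps = queue.popleft()
        if is_feasible(config):
            return True
        if steps == budget:
            continue
        for u in config:
            targets = graph[u] if model == "sliding" else graph.keys()
            for v in targets:
                if v in config:
                    continue
                nxt = frozenset((config - {u}) | {v})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))
    return False

# Example: Vertex Cover Discovery on the path a-b-c with one sliding step.
G = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
covers = lambda S: all(u in S or v in S for u in G for v in G[u])
print(discoverable(G, ["a"], covers, budget=1))   # True: slide the token from a to b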

Cite as

Mario Grobler, Stephanie Maaz, Nicole Megow, Amer E. Mouawad, Vijayaragunathan Ramamoorthi, Daniel Schmand, and Sebastian Siebertz. Solution Discovery via Reconfiguration for Problems in P. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 76:1-76:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{grobler_et_al:LIPIcs.ICALP.2024.76,
  author =	{Grobler, Mario and Maaz, Stephanie and Megow, Nicole and Mouawad, Amer E. and Ramamoorthi, Vijayaragunathan and Schmand, Daniel and Siebertz, Sebastian},
  title =	{{Solution Discovery via Reconfiguration for Problems in P}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{76:1--76:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.76},
  URN =		{urn:nbn:de:0030-drops-202195},
  doi =		{10.4230/LIPIcs.ICALP.2024.76},
  annote =	{Keywords: solution discovery, reconfiguration, spanning tree, shortest path, matching, cut}
}
Document
Track A: Algorithms, Complexity and Games
Towards Tight Bounds for the Graph Homomorphism Problem Parameterized by Cutwidth via Asymptotic Matrix Parameters

Authors: Carla Groenland, Isja Mannens, Jesper Nederlof, Marta Piecyk, and Paweł Rzążewski


Abstract
A homomorphism from a graph G to a graph H is an edge-preserving mapping from V(G) to V(H). In the graph homomorphism problem, denoted by Hom(H), the graph H is fixed and we need to determine if there exists a homomorphism from an instance graph G to H. We study the complexity of the problem parameterized by the cutwidth of G, i.e., we assume that G is given along with a linear ordering v_1,…,v_n of V(G) such that, for each i ∈ {1,…,n-1}, the number of edges with one endpoint in {v_1,…,v_i} and the other in {v_{i+1},…,v_n} is at most k. We aim, for each H, for algorithms for Hom(H) running in time c_H^k n^𝒪(1) and matching lower bounds that exclude c_H^{k⋅o(1)} n^𝒪(1) or c_H^{k(1-Ω(1))} n^𝒪(1) time algorithms under the (Strong) Exponential Time Hypothesis. In the paper we introduce a new parameter that we call mimsup(H). Our main contribution is strong evidence of a close connection between c_H and mimsup(H):
- an information-theoretic argument that the number of states needed in a natural dynamic programming algorithm is at most mimsup(H)^k,
- lower bounds that show that for almost all graphs H indeed we have c_H ≥ mimsup(H), assuming the (Strong) Exponential-Time Hypothesis, and
- an algorithm with running time exp(𝒪(mimsup(H)⋅k log k)) n^𝒪(1).
In the last result we do not need to assume that H is a fixed graph. Thus, as a consequence, we obtain that the problem of deciding whether G admits a homomorphism to H is fixed-parameter tractable, when parameterized by cutwidth of G and mimsup(H). The parameter mimsup(H) can be thought of as the p-th root of the maximum induced matching number in the graph obtained by multiplying p copies of H via a certain graph product, where p tends to infinity. It can also be defined as an asymptotic rank parameter of the adjacency matrix of H. Such parameters play a central role in, among others, algebraic complexity theory and additive combinatorics. Our results tightly link the parameterized complexity of a problem to such an asymptotic matrix parameter for the first time.
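
Written out, the asymptotic definition sketched at the end of the abstract takes the following schematic form (in LaTeX); here \boxtimes stands for the graph product that the abstract leaves unspecified and mim(⋅) for the maximum induced matching number, so the display renders the prose description rather than a precise definition from the paper.

% Schematic only: \boxtimes denotes the (unspecified) graph product and
% \mathrm{mim} the maximum induced matching number.
\mathrm{mimsup}(H) \;=\; \lim_{p \to \infty} \bigl(\mathrm{mim}(H^{\boxtimes p})\bigr)^{1/p}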

Cite as

Carla Groenland, Isja Mannens, Jesper Nederlof, Marta Piecyk, and Paweł Rzążewski. Towards Tight Bounds for the Graph Homomorphism Problem Parameterized by Cutwidth via Asymptotic Matrix Parameters. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 77:1-77:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{groenland_et_al:LIPIcs.ICALP.2024.77,
  author =	{Groenland, Carla and Mannens, Isja and Nederlof, Jesper and Piecyk, Marta and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Towards Tight Bounds for the Graph Homomorphism Problem Parameterized by Cutwidth via Asymptotic Matrix Parameters}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{77:1--77:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.77},
  URN =		{urn:nbn:de:0030-drops-202208},
  doi =		{10.4230/LIPIcs.ICALP.2024.77},
  annote =	{Keywords: graph homomorphism, cutwidth, asymptotic matrix parameters}
}
Document
Track A: Algorithms, Complexity and Games
Isomorphism for Tournaments of Small Twin Width

Authors: Martin Grohe and Daniel Neuen


Abstract
We prove that isomorphism of tournaments of twin width at most k can be decided in time k^O(log k) n^O(1). This implies that the isomorphism problem for classes of tournaments of bounded or moderately growing twin width is in polynomial time. By comparison, there are classes of undirected graphs of bounded twin width that are isomorphism complete, that is, the isomorphism problem for the classes is as hard as the general graph isomorphism problem. Twin width is a graph parameter that has been introduced only recently (Bonnet et al., FOCS 2020), but has received a lot of attention in structural graph theory since then. On directed graphs, it is functionally smaller than clique width. We prove that on tournaments (but not on general directed graphs) it is also functionally smaller than directed tree width (and thus, the same also holds for cut width and directed path width). Hence, our result implies that tournament isomorphism testing is also fixed-parameter tractable when parameterized by any of these parameters. Our isomorphism algorithm heavily employs group-theoretic techniques. This seems to be necessary: as a second main result, we show that the combinatorial Weisfeiler-Leman algorithm does not decide isomorphism of tournaments of twin width at most 35 if its dimension is o(n). (Throughout this abstract, n is the order of the input graphs.)
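
For readers who have not seen the Weisfeiler-Leman algorithm, the snippet below implements its simplest, 1-dimensional member (colour refinement) on a directed graph, distinguishing out- and in-neighbourhoods. It is included only to make the phrase "combinatorial Weisfeiler-Leman algorithm" concrete; the paper's negative result concerns the o(n)-dimensional versions, which are far more powerful than this sketch.

def wl1_colors(out_adj, rounds=None):
    # 1-dimensional Weisfeiler-Leman (colour refinement) for a directed graph.
    # out_adj: dict vertex -> set of out-neighbours.  Vertices end up with equal
    # colours whenever colour refinement cannot tell them apart.
    in_adj = {u: {v for v in out_adj if u in out_adj[v]} for u in out_adj}
    colors = {u: 0 for u in out_adj}
    for _ in range(rounds or len(out_adj)):
        sig = {u: (colors[u],
                   tuple(sorted(colors[v] for v in out_adj[u])),
                   tuple(sorted(colors[v] for v in in_adj[u])))
               for u in out_adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {u: palette[sig[u]] for u in out_adj}
        if new == colors:
            break
        colors = new
    return colors

# The cyclic tournament on three vertices is vertex-transitive, so all colours coincide.
print(wl1_colors({0: {1}, 1: {2}, 2: {0}}))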

Cite as

Martin Grohe and Daniel Neuen. Isomorphism for Tournaments of Small Twin Width. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 78:1-78:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{grohe_et_al:LIPIcs.ICALP.2024.78,
  author =	{Grohe, Martin and Neuen, Daniel},
  title =	{{Isomorphism for Tournaments of Small Twin Width}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{78:1--78:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.78},
  URN =		{urn:nbn:de:0030-drops-202216},
  doi =		{10.4230/LIPIcs.ICALP.2024.78},
  annote =	{Keywords: tournament isomorphism, twin width, fixed-parameter tractability, Weisfeiler-Leman algorithm}
}
Document
Track A: Algorithms, Complexity and Games
From Trees to Polynomials and Back Again: New Capacity Bounds with Applications to TSP

Authors: Leonid Gurvits, Nathan Klein, and Jonathan Leake


Abstract
We give simply exponential lower bounds on the probabilities of a given strongly Rayleigh distribution, depending only on its expectation. This resolves a weak version of a problem left open by Karlin-Klein-Oveis Gharan in their recent breakthrough work on metric TSP, and this resolution leads to a minor improvement of their approximation factor for metric TSP. Our results also allow for a more streamlined analysis of the algorithm. To achieve these new bounds, we build upon the work of Gurvits-Leake on the use of the productization technique for bounding the capacity of a real stable polynomial. This technique allows one to reduce certain inequalities for real stable polynomials to products of affine linear forms, which have an underlying matrix structure. In this paper, we push this technique further by characterizing the worst-case polynomials via bipartitioned forests. This rigid combinatorial structure yields a clean induction argument, which implies our stronger bounds. In general, we believe the results of this paper will lead to further improvement and simplification of the analysis of various combinatorial and probabilistic bounds and algorithms.
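
For orientation, the capacity of a polynomial referred to above is usually defined as follows in this line of work (the definition is standard background and is not spelled out in the abstract, so treat the display as context rather than a statement from the paper): for a polynomial p with nonnegative coefficients in variables x_1,…,x_n and a vector α ∈ ℝ_{≥0}ⁿ,

% Standard background definition of polynomial capacity (not restated in the abstract).
\operatorname{Cap}_{\alpha}(p) \;=\; \inf_{x_1,\dots,x_n > 0} \; \frac{p(x_1,\dots,x_n)}{x_1^{\alpha_1} \cdots x_n^{\alpha_n}}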

Cite as

Leonid Gurvits, Nathan Klein, and Jonathan Leake. From Trees to Polynomials and Back Again: New Capacity Bounds with Applications to TSP. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 79:1-79:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gurvits_et_al:LIPIcs.ICALP.2024.79,
  author =	{Gurvits, Leonid and Klein, Nathan and Leake, Jonathan},
  title =	{{From Trees to Polynomials and Back Again: New Capacity Bounds with Applications to TSP}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{79:1--79:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.79},
  URN =		{urn:nbn:de:0030-drops-202229},
  doi =		{10.4230/LIPIcs.ICALP.2024.79},
  annote =	{Keywords: traveling salesman problem, strongly Rayleigh distributions, polynomial capacity, probability lower bounds, combinatorial lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Distributed Fast Crash-Tolerant Consensus with Nearly-Linear Quantum Communication

Authors: Mohammad T. HajiAghayi, Dariusz R. Kowalski, and Jan Olkowski


Abstract
Fault-tolerant Consensus is about reaching agreement on some of the input values in a limited time by non-faulty autonomous processes, despite failures of processes or of the communication medium. This problem is particularly challenging and costly against an adaptive adversary with full information. Bar-Joseph and Ben-Or (PODC'98) were the first to prove an absolute lower bound Ω(√{n/log n}) on the expected time complexity of Consensus in any classical (i.e., randomized or deterministic) message-passing network with n processes succeeding with probability 1 against such a strong adaptive adversary crashing processes. The seminal work of Ben-Or and Hassidim (STOC'05) broke the Ω(√{n/log n}) barrier for consensus in classical (deterministic and randomized) networks by enhancing the model with quantum channels. In such networks, quantum communication between every pair of processes participating in the protocol is also allowed. They showed an (expected) constant-time quantum algorithm for a linear number of crashes t < n/3. In this paper, we improve upon that seminal work by reducing the number of quantum and communication bits to an arbitrarily small polynomial, and even further, to a polylogarithmic number - though the latter comes at the cost of a slightly larger polylogarithmic time (still exponentially smaller than the time lower bound Ω(√{n/log n}) for the classical computation models).

Cite as

Mohammad T. HajiAghayi, Dariusz R. Kowalski, and Jan Olkowski. Distributed Fast Crash-Tolerant Consensus with Nearly-Linear Quantum Communication. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 80:1-80:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hajiaghayi_et_al:LIPIcs.ICALP.2024.80,
  author =	{HajiAghayi, Mohammad T. and Kowalski, Dariusz R. and Olkowski, Jan},
  title =	{{Distributed Fast Crash-Tolerant Consensus with Nearly-Linear Quantum Communication}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{80:1--80:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.80},
  URN =		{urn:nbn:de:0030-drops-202235},
  doi =		{10.4230/LIPIcs.ICALP.2024.80},
  annote =	{Keywords: distributed algorithms, quantum algorithms, adaptive adversary, crash failures, Consensus, quantum common coin, approximate counting}
}
Document
Track A: Algorithms, Complexity and Games
Oracle-Augmented Prophet Inequalities

Authors: Sariel Har-Peled, Elfarouk Harb, and Vasilis Livanos


Abstract
In the classical prophet inequality setting, a gambler is given a sequence of n random variables X₁, … , X_n, taken from known distributions, observes their values in adversarial order and selects one of them, immediately after it is observed, aiming to select a value that is as high as possible. The classical prophet inequality shows a strategy that guarantees a value at least half the value of an omniscient prophet that always picks the maximum, and this ratio is optimal. Here, we generalize the prophet inequality, allowing the gambler some additional information about the future that is otherwise privy only to the prophet. Specifically, at any point in the process, the gambler is allowed to query an oracle 𝒪. The oracle responds with a single-bit answer: YES if the current realization is greater than the remaining realizations, and NO otherwise. We show that the oracle model with m oracle calls is equivalent to the Top-1-of-(m+1) model when the objective is maximizing the probability of selecting the maximum. This equivalence fails to hold when the objective is maximizing the competitive ratio, but we still show that any algorithm for the oracle model implies an equivalent competitive ratio for the Top-1-of-(m+1) model. We resolve the oracle model for any m, giving tight lower and upper bounds on the best possible competitive ratio compared to an almighty adversary. As a consequence, we provide new results as well as improvements on known results for the Top-1-of-m model.
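
The oracle model is easy to simulate once the realizations are known to the simulator. The toy strategy below spends an oracle call whenever the current value clears a threshold and accepts on a YES answer; both the threshold rule and the fallback of taking the last value are illustrative placeholders, not the strategies analysed in the paper.

import random

def oracle_gambler(values, m, threshold):
    # values: realized X_1..X_n in arrival order; m: number of oracle calls allowed;
    # threshold: only values at least this large trigger an oracle call.
    # The oracle answers YES iff the current value exceeds every remaining one.
    calls = m
    for i, x in enumerate(values[:-1]):
        if calls > 0 and x >= threshold:
            calls -= 1
            if x > max(values[i + 1:]):    # YES: nothing better is still to come
                return x
    return values[-1]                      # forced to take the last value

random.seed(0)
vals = [random.random() for _ in range(10)]
print(max(vals), oracle_gambler(vals, m=2, threshold=0.8))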

Cite as

Sariel Har-Peled, Elfarouk Harb, and Vasilis Livanos. Oracle-Augmented Prophet Inequalities. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 81:1-81:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{harpeled_et_al:LIPIcs.ICALP.2024.81,
  author =	{Har-Peled, Sariel and Harb, Elfarouk and Livanos, Vasilis},
  title =	{{Oracle-Augmented Prophet Inequalities}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{81:1--81:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.81},
  URN =		{urn:nbn:de:0030-drops-202245},
  doi =		{10.4230/LIPIcs.ICALP.2024.81},
  annote =	{Keywords: prophet inequalities, predictions, top-1-of-k model}
}
Document
Track A: Algorithms, Complexity and Games
Refuting Approaches to the Log-Rank Conjecture for XOR Functions

Authors: Hamed Hatami, Kaave Hosseini, Shachar Lovett, and Anthony Ostuni


Abstract
The log-rank conjecture, a longstanding problem in communication complexity, has persistently eluded resolution for decades. Consequently, some recent efforts have focused on potential approaches for establishing the conjecture in the special case of XOR functions, where the communication matrix is lifted from a boolean function, and the rank of the matrix equals the Fourier sparsity of the function, which is the number of its nonzero Fourier coefficients. In this note, we refute two conjectures. The first has origins in Montanaro and Osborne (arXiv'09) and is considered in Tsang, Wong, Xie, and Zhang (FOCS'13), and the second is due to Mande and Sanyal (FSTTCS'20). These conjectures were proposed in order to improve the best-known bound of Lovett (STOC'14) regarding the log-rank conjecture in the special case of XOR functions. Both conjectures speculate that the set of nonzero Fourier coefficients of the boolean function has some strong additive structure. We refute these conjectures by constructing two specific boolean functions tailored to each.
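
Since Fourier sparsity is the quantity that the refuted conjectures try to control, a brute-force computation of it may help fix ideas. The snippet below expands a Boolean function over ±1 inputs into its Fourier coefficients and counts the nonzero ones; the two-bit AND function is used purely as an example and has nothing to do with the counterexamples constructed in the paper.

from itertools import combinations, product

def fourier_coefficients(f, n):
    # Brute-force Fourier coefficients of f: {-1,1}^n -> reals,
    # f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ].
    points = list(product((-1, 1), repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total = 0
            for x in points:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(points)
    return coeffs

def fourier_sparsity(coeffs, tol=1e-9):
    return sum(1 for c in coeffs.values() if abs(c) > tol)

# Two-bit AND in the +/-1 convention: coefficients -1/2 on the empty set and
# +1/2 on {0}, {1} and {0,1}, hence Fourier sparsity 4.
AND = lambda x: 1 if x[0] == 1 and x[1] == 1 else -1
coeffs = fourier_coefficients(AND, 2)
print(coeffs, fourier_sparsity(coeffs))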

Cite as

Hamed Hatami, Kaave Hosseini, Shachar Lovett, and Anthony Ostuni. Refuting Approaches to the Log-Rank Conjecture for XOR Functions. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 82:1-82:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hatami_et_al:LIPIcs.ICALP.2024.82,
  author =	{Hatami, Hamed and Hosseini, Kaave and Lovett, Shachar and Ostuni, Anthony},
  title =	{{Refuting Approaches to the Log-Rank Conjecture for XOR Functions}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{82:1--82:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.82},
  URN =		{urn:nbn:de:0030-drops-202252},
  doi =		{10.4230/LIPIcs.ICALP.2024.82},
  annote =	{Keywords: Communication complexity, log-rank conjecture, XOR functions, additive structure}
}
Document
Track A: Algorithms, Complexity and Games
No Polynomial Kernels for Knapsack

Authors: Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay


Abstract
This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. In this way, a kernel for Knapsack for some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f. When f(k) = k^{O(1)}, we call such a reduction a polynomial kernel. Our study focuses on two natural parameters for Knapsack: the number w_# of different item weights, and the number p_# of different item profits. Our main technical contribution is a proof showing that Knapsack does not admit a polynomial kernel for either of these two parameters under standard complexity-theoretic assumptions. Our proof involves an elaborate application of the standard kernelization lower bound framework and develops along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# ⋅ p_#.

Cite as

Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay. No Polynomial Kernels for Knapsack. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{heeger_et_al:LIPIcs.ICALP.2024.83,
  author =	{Heeger, Klaus and Hermelin, Danny and Mnich, Matthias and Shabtay, Dvir},
  title =	{{No Polynomial Kernels for Knapsack}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.83},
  URN =		{urn:nbn:de:0030-drops-202261},
  doi =		{10.4230/LIPIcs.ICALP.2024.83},
  annote =	{Keywords: Knapsack, polynomial kernels, compositions, number of different weights, number of different profits}
}
Document
Track A: Algorithms, Complexity and Games
The k-Opt Algorithm for the Traveling Salesman Problem Has Exponential Running Time for k ≥ 5

Authors: Sophia Heimann, Hung P. Hoang, and Stefan Hougardy


Abstract
The k-Opt algorithm is a local search algorithm for the Traveling Salesman Problem. Starting with an initial tour, it iteratively replaces at most k edges in the tour with the same number of edges to obtain a better tour. Krentel (FOCS 1989) showed that the Traveling Salesman Problem with the k-Opt neighborhood is complete for the class PLS (polynomial time local search) and that the k-Opt algorithm can have exponential running time for any pivot rule. However, his proof requires k ≫ 1000 and has a substantial gap. We show the two properties above for a much smaller value of k, addressing an open question by Monien, Dumrauf, and Tscheuschner (ICALP 2010). In particular, we prove the PLS-completeness for k ≥ 17 and the exponential running time for k ≥ 5.
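
As a reminder of what the k-Opt neighbourhood looks like in code, here is the smallest interesting member of the family, plain 2-Opt: repeatedly reverse a segment of the tour whenever removing two edges and reinserting two others shortens it. This is only meant to illustrate the local search being analysed; the paper's hard instances concern k ≥ 5 and arbitrary pivot rules.

import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    # Plain 2-Opt local search: a move removes edges (a,b) and (c,d) and adds
    # (a,c) and (b,d), implemented by reversing the segment between them.
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d] + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
start = [0, 2, 4, 1, 3]
best = two_opt(start, dist)
print(round(tour_length(start, dist), 3), "->", round(tour_length(best, dist), 3))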

Cite as

Sophia Heimann, Hung P. Hoang, and Stefan Hougardy. The k-Opt Algorithm for the Traveling Salesman Problem Has Exponential Running Time for k ≥ 5. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 84:1-84:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{heimann_et_al:LIPIcs.ICALP.2024.84,
  author =	{Heimann, Sophia and Hoang, Hung P. and Hougardy, Stefan},
  title =	{{The k-Opt Algorithm for the Traveling Salesman Problem Has Exponential Running Time for k ≥ 5}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{84:1--84:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.84},
  URN =		{urn:nbn:de:0030-drops-202270},
  doi =		{10.4230/LIPIcs.ICALP.2024.84},
  annote =	{Keywords: Traveling Salesman Problem, k-Opt algorithm, PLS-completeness}
}
Document
Track A: Algorithms, Complexity and Games
Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration

Authors: Shuichi Hirahara and Naoto Ohsaka


Abstract
In the Minmax Set Cover Reconfiguration problem, given a set system ℱ over a universe 𝒰 and its two covers 𝒞^start and 𝒞^goal of size k, we wish to transform 𝒞^start into 𝒞^goal by repeatedly adding or removing a single set of ℱ while covering the universe 𝒰 in any intermediate state. Then, the objective is to minimize the maximum size of any intermediate cover during transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are PSPACE-hard to approximate within a factor of 2-(1/polyloglog N), where N is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) [Ohsaka, 2024] and Karthik C. S. and Manurangsi (2023) [Karthik C. S. and Manurangsi, 2023]. This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem because both problems admit a 2-factor approximation algorithm as per Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011) [Takehiro Ito et al., 2011]. Our proof is based on a reconfiguration analogue of the FGLSS reduction [Feige et al., 1996] from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (STOC 2024) [Hirahara and Ohsaka, 2024]. We also prove that for any constant ε ∈ (0,1), Minmax Hypergraph Vertex Cover Reconfiguration on poly(ε^-1)-uniform hypergraphs is PSPACE-hard to approximate within a factor of 2-ε.
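
The minmax objective can be pinned down by a brute-force formulation (exponential, and purely illustrative; the names below are placeholders and nothing resembles the paper's machinery): for increasing candidate bounds s, search the space of covers of size at most s connected by single add/remove moves and report the first s for which 𝒞^start reaches 𝒞^goal.

from collections import deque
from itertools import count

def min_max_cover_size(sets, universe, start, goal):
    # Brute force: smallest s such that `start` can be transformed into `goal` by
    # adding or removing one set at a time while every intermediate family covers
    # the universe and has size at most s.  `sets` is a list of frozensets;
    # `start` and `goal` are frozensets of indices into `sets`.
    def covers(family):
        covered = set()
        for i in family:
            covered |= sets[i]
        return universe <= covered
    for s in count(max(len(start), len(goal))):
        seen, queue = {start}, deque([start])
        while queue:
            fam = queue.popleft()
            if fam == goal:
                return s
            for i in range(len(sets)):
                nxt = frozenset(fam - {i}) if i in fam else frozenset(fam | {i})
                if len(nxt) <= s and nxt not in seen and covers(nxt):
                    seen.add(nxt)
                    queue.append(nxt)

U = frozenset({1, 2, 3, 4})
S = [frozenset({1, 2}), frozenset({3, 4}), frozenset({1, 2, 3, 4})]
print(min_max_cover_size(S, U, frozenset({0, 1}), frozenset({2})))   # 3: add S[2], then drop S[0] and S[1]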

Cite as

Shuichi Hirahara and Naoto Ohsaka. Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 85:1-85:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.ICALP.2024.85,
  author =	{Hirahara, Shuichi and Ohsaka, Naoto},
  title =	{{Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{85:1--85:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.85},
  URN =		{urn:nbn:de:0030-drops-202283},
  doi =		{10.4230/LIPIcs.ICALP.2024.85},
  annote =	{Keywords: reconfiguration problems, hardness of approximation, probabilistic proof systems, FGLSS reduction}
}
Document
Track A: Algorithms, Complexity and Games
Problems on Group-Labeled Matroid Bases

Authors: Florian Hörsch, András Imolay, Ryuhei Mizutani, Taihei Oki, and Tamás Schwarcz


Abstract
Consider a matroid equipped with a labeling of its ground set to an abelian group. We define the label of a subset of the ground set as the sum of the labels of its elements. We study a collection of problems on finding bases and common bases of matroids with restrictions on their labels. For zero bases and zero common bases, the results are mostly negative. While finding a non-zero basis of a matroid is not difficult, it turns out that the complexity of finding a non-zero common basis depends on the group. Namely, we show that the problem is hard for a fixed group if it contains an element of order two, and polynomially solvable otherwise. As a generalization of both zero and non-zero constraints, we further study F-avoiding constraints where we seek a basis or common basis whose label is not in a given set F of forbidden labels. Using algebraic techniques, we give a randomized algorithm for finding an F-avoiding common basis of two matroids represented over the same field for finite groups given as operation tables. The study of F-avoiding bases with groups given as oracles leads to a conjecture stating that whenever an F-avoiding basis exists, an F-avoiding basis can be obtained from an arbitrary basis by exchanging at most |F| elements. We prove the conjecture for the special cases when |F| ≤ 2 or the group is ordered. By relying on structural observations on matroids representable over fixed, finite fields, we verify a relaxed version of the conjecture for these matroids. As a consequence, we obtain a polynomial-time algorithm in these special cases for finding an F-avoiding basis when |F| is fixed.
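
The basic definitions (labels of subsets, F-avoiding bases) are simple enough to state as code. The brute force below works over the uniform matroid U_{k,n}, whose bases are exactly the k-element subsets, with labels in ℤ_m; it fixes the semantics only and says nothing about the general matroids, matroid intersection, or oracle models treated in the paper.

from itertools import combinations

def f_avoiding_basis_uniform(labels, k, m, F):
    # Uniform matroid U_{k,n}: every k-element subset of the ground set is a basis.
    # The label of a subset is the sum of its element labels in the group Z_m.
    # Return some basis whose label avoids the forbidden set F, or None.
    for B in combinations(range(len(labels)), k):
        if sum(labels[i] for i in B) % m not in F:
            return B
    return None

print(f_avoiding_basis_uniform([1, 2, 2, 3], k=2, m=4, F={0, 1}))   # (0, 1): label 3 avoids {0, 1}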

Cite as

Florian Hörsch, András Imolay, Ryuhei Mizutani, Taihei Oki, and Tamás Schwarcz. Problems on Group-Labeled Matroid Bases. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 86:1-86:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{horsch_et_al:LIPIcs.ICALP.2024.86,
  author =	{H\"{o}rsch, Florian and Imolay, Andr\'{a}s and Mizutani, Ryuhei and Oki, Taihei and Schwarcz, Tam\'{a}s},
  title =	{{Problems on Group-Labeled Matroid Bases}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{86:1--86:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.86},
  URN =		{urn:nbn:de:0030-drops-202299},
  doi =		{10.4230/LIPIcs.ICALP.2024.86},
  annote =	{Keywords: matroids, matroid intersection, congruency constraint, exact-weight constraint, additive combinatorics, algebraic algorithm, strongly base orderability}
}
Document
Track A: Algorithms, Complexity and Games
Finding Most-Shattering Minimum Vertex Cuts of Polylogarithmic Size in Near-Linear Time

Authors: Kevin Hua, Daniel Li, Jaewoo Park, and Thatchaphol Saranurak


Abstract
We show the first near-linear time randomized algorithms for listing all minimum vertex cuts of polylogarithmic size that separate the graph into at least three connected components (also known as shredders) and for finding the most shattering one, i.e., the one maximizing the number of connected components. Our algorithms break the quadratic time bound of Cheriyan and Thurimella (STOC'96) for both problems, which had remained unimproved for more than two decades. Our work also removes an important bottleneck to near-linear time algorithms for the vertex connectivity augmentation problem (Jordan '95) and finding an even-length directed cycle in a graph, a problem shown to be equivalent to many other fundamental problems (Vazirani and Yannakakis '90, Robertson et al. '99). Note that it is necessary to list only minimum vertex cuts that separate the graph into at least three components because there can be an exponential number of minimum vertex cuts in general. To obtain a near-linear time algorithm, we have extended techniques in local flow algorithms developed by Forster et al. (SODA'20) to list shredders on a local scale. We also exploit fast queries to a pairwise vertex connectivity oracle subject to vertex failures (Long and Saranurak FOCS'22, Kosinas ESA'23). This is the first use of connectivity oracles subject to vertex failures to speed up a static graph algorithm.

Cite as

Kevin Hua, Daniel Li, Jaewoo Park, and Thatchaphol Saranurak. Finding Most-Shattering Minimum Vertex Cuts of Polylogarithmic Size in Near-Linear Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 87:1-87:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hua_et_al:LIPIcs.ICALP.2024.87,
  author =	{Hua, Kevin and Li, Daniel and Park, Jaewoo and Saranurak, Thatchaphol},
  title =	{{Finding Most-Shattering Minimum Vertex Cuts of Polylogarithmic Size in Near-Linear Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{87:1--87:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.87},
  URN =		{urn:nbn:de:0030-drops-202302},
  doi =		{10.4230/LIPIcs.ICALP.2024.87},
  annote =	{Keywords: Graphs, Flows, Randomized Algorithms, Vertex Connectivity}
}
Document
Track A: Algorithms, Complexity and Games
Satisfiability to Coverage in Presence of Fairness, Matroid, and Global Constraints

Authors: Tanmay Inamdar, Pallavi Jain, Daniel Lokshtanov, Abhishek Sahu, Saket Saurabh, and Anannya Upasana


Abstract
In the MaxSAT with Cardinality Constraint problem (CC-MaxSAT), we are given a CNF-formula Φ, and a positive integer k, and the goal is to find an assignment β with at most k variables set to true (also called a weight k-assignment) such that the number of clauses satisfied by β is maximized. Maximum Coverage can be seen as a special case of CC-MaxSAT, where the formula Φ is monotone, i.e., does not contain any negative literals. CC-MaxSAT and Maximum Coverage are extremely well-studied problems in the approximation algorithms as well as the parameterized complexity literature. Our first conceptual contribution is that CC-MaxSAT and Maximum Coverage are equivalent to each other in the context of FPT-Approximation parameterized by k (here, the approximation is in terms of the number of clauses satisfied/elements covered). In particular, we give a randomized reduction from CC-MaxSAT to Maximum Coverage running in time 𝒪(1/ε)^{k} ⋅ (m+n)^{𝒪(1)} that preserves the approximation guarantee up to a factor of (1-ε). Furthermore, this reduction also works in the presence of "fairness" constraints on the satisfied clauses, as well as matroid constraints on the set of variables that are assigned true. Here, the "fairness" constraints are modeled by partitioning the clauses of the formula Φ into r different colors, and the goal is to find an assignment that satisfies at least t_j clauses of each color 1 ≤ j ≤ r. Armed with this reduction, we focus on designing FPT-Approximation schemes (FPT-ASes) for Maximum Coverage and its generalizations. Our algorithms are based on a novel combination of a variety of ideas, including a carefully designed probability distribution that exploits sparse coverage functions. These algorithms substantially generalize the results in Jain et al. [SODA 2023] for CC-MaxSAT and Maximum Coverage for K_{d,d}-free set systems (i.e., no d sets share d elements), as well as a recent FPT-AS for Matroid Constrained Maximum Coverage by Sellier [ESA 2023] for frequency-d set systems.
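
The problem statement itself translates directly into code. The brute-force routine below enumerates all assignments of weight at most k and reports the best one; it only pins down the definitions of CC-MaxSAT and, for monotone formulas, Maximum Coverage, and has nothing to do with the FPT-approximation schemes of the paper.

from itertools import combinations

def cc_maxsat(num_vars, clauses, k):
    # Brute force for CC-MaxSAT: maximize the number of satisfied clauses over
    # assignments setting at most k variables to true.  Clauses are collections of
    # nonzero integers: +i stands for x_i and -i for its negation (variables 1..num_vars).
    best, best_true = -1, set()
    for r in range(k + 1):
        for true_vars in combinations(range(1, num_vars + 1), r):
            T = set(true_vars)
            sat = sum(1 for cl in clauses
                      if any((lit > 0) == (abs(lit) in T) for lit in cl))
            if sat > best:
                best, best_true = sat, T
    return best, best_true

# A monotone formula is exactly a Maximum Coverage instance: clauses are elements,
# variables are sets, and setting a variable to true picks the corresponding set.
clauses = [[1, 2], [2, 3], [3], [1, 4]]
print(cc_maxsat(4, clauses, k=2))   # (4, {1, 3})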

Cite as

Tanmay Inamdar, Pallavi Jain, Daniel Lokshtanov, Abhishek Sahu, Saket Saurabh, and Anannya Upasana. Satisfiability to Coverage in Presence of Fairness, Matroid, and Global Constraints. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 88:1-88:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{inamdar_et_al:LIPIcs.ICALP.2024.88,
  author =	{Inamdar, Tanmay and Jain, Pallavi and Lokshtanov, Daniel and Sahu, Abhishek and Saurabh, Saket and Upasana, Anannya},
  title =	{{Satisfiability to Coverage in Presence of Fairness, Matroid, and Global Constraints}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{88:1--88:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.88},
  URN =		{urn:nbn:de:0030-drops-202318},
  doi =		{10.4230/LIPIcs.ICALP.2024.88},
  annote =	{Keywords: Partial Vertex Cover, Max SAT, FPT Approximation, Matroids}
}
Document
Track A: Algorithms, Complexity and Games
Breaking a Barrier in Constructing Compact Indexes for Parameterized Pattern Matching

Authors: Kento Iseri, Tomohiro I, Diptarama Hendrian, Dominik Köppl, Ryo Yoshinaka, and Ayumi Shinohara


Abstract
A parameterized string (p-string) is a string over an alphabet (Σ_s ∪ Σ_p), where Σ_s and Σ_p are disjoint alphabets for static symbols (s-symbols) and for parameter symbols (p-symbols), respectively. Two p-strings x and y are said to parameterized match (p-match) if and only if x can be transformed into y by applying a bijection on Σ_p to every occurrence of p-symbols in x. The indexing problem for p-matching is to preprocess a p-string T of length n so that we can efficiently find the occurrences of substrings of T that p-match with a given pattern. Let σ_s and σ_p be the numbers of distinct s-symbols and p-symbols that appear in T, respectively, and let σ = σ_s + σ_p. Extending the Burrows-Wheeler Transform (BWT) based index for exact string pattern matching, Ganguly et al. [SODA 2017] proposed parameterized BWTs (pBWTs) to design the first compact index for p-matching, and posed an open problem on how to construct the pBWT-based index in compact space, i.e., in O(n lg |Σ_s ∪ Σ_p|) bits of space. Hashimoto et al. [SPIRE 2022] showed how to construct the pBWT for T, under the assumption that Σ_s ∪ Σ_p = [0..O(σ)], in O(n lg σ) bits of space and O(n (σ_p lg n)/(lg lg n)) time in an online manner while reading the symbols of T from right to left. In this paper, we refine Hashimoto et al.’s algorithm to work in O(n lg σ) bits of space and O(n (lg σ_p lg n)/(lg lg n)) time under the more general assumption that Σ_s ∪ Σ_p = [0..n^{O(1)}]. Our result has an immediate application to constructing parameterized suffix arrays in O(n (lg σ_p lg n)/(lg lg n)) time and O(n lg σ) bits of working space. We also show that our data structure can support backward search, a core procedure of BWT-based indexes, at any stage of the online construction, making it the first compact index for p-matching that can be constructed in compact space and even in an online manner.
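
The definition of p-matching is easy to check directly, which may help readers unfamiliar with the model. The function below (an illustration of the definition only, not of any index structure from the paper) tries to build a single bijection on the parameter symbols that maps x onto y position by position.

def p_match(x, y, static_symbols):
    # True iff the p-strings x and y parameterized-match: equal length, identical
    # static symbols, and some bijection on parameter symbols maps the parameter
    # occurrences of x onto those of y.
    if len(x) != len(y):
        return False
    fwd, bwd = {}, {}          # the bijection on parameter symbols, built greedily
    for a, b in zip(x, y):
        if (a in static_symbols) != (b in static_symbols):
            return False
        if a in static_symbols:
            if a != b:
                return False
        elif fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
            return False       # the mapping would stop being a consistent bijection
    return True

# s-symbols: {a, b}; every other symbol is a p-symbol.
print(p_match("axbxy", "azbzx", {"a", "b"}))   # True: the bijection x -> z, y -> x works
print(p_match("axbx", "azby", {"a", "b"}))     # False: x would have to map to both z and y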

Cite as

Kento Iseri, Tomohiro I, Diptarama Hendrian, Dominik Köppl, Ryo Yoshinaka, and Ayumi Shinohara. Breaking a Barrier in Constructing Compact Indexes for Parameterized Pattern Matching. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 89:1-89:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{iseri_et_al:LIPIcs.ICALP.2024.89,
  author =	{Iseri, Kento and I, Tomohiro and Hendrian, Diptarama and K\"{o}ppl, Dominik and Yoshinaka, Ryo and Shinohara, Ayumi},
  title =	{{Breaking a Barrier in Constructing Compact Indexes for Parameterized Pattern Matching}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{89:1--89:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.89},
  URN =		{urn:nbn:de:0030-drops-202324},
  doi =		{10.4230/LIPIcs.ICALP.2024.89},
  annote =	{Keywords: Index for parameterized pattern matching, Parameterized Burrows-Wheeler Transform, Online construction}
}
Document
Track A: Algorithms, Complexity and Games
Dynamic PageRank: Algorithms and Lower Bounds

Authors: Rajesh Jayaram, Jakub Łącki, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski


Abstract
We consider the PageRank problem in the dynamic setting, where the goal is to explicitly maintain an approximate PageRank vector π ∈ ℝⁿ for a graph under a sequence of edge insertions and deletions. Our main result is a complete characterization of the complexity of dynamic PageRank maintenance for both multiplicative and additive (L₁) approximations. First, we establish matching lower and upper bounds for maintaining additive approximate PageRank in both incremental and decremental settings. In particular, we demonstrate that, in the worst case, (1/α)^{Θ(log log n)} update time is necessary and sufficient for this problem, where α is the desired additive approximation. On the other hand, we demonstrate that the commonly employed ForwardPush approach performs substantially worse than this optimal runtime. Specifically, we show that ForwardPush requires Ω(n^{1-δ}) time per update on average, for any δ > 0, even in the incremental setting. For multiplicative approximations, however, we demonstrate that the situation is significantly more challenging. Specifically, we prove that any algorithm that explicitly maintains a constant factor multiplicative approximation of the PageRank vector of a directed graph must have amortized update time Ω(n^{1-δ}), for any δ > 0, even in the incremental setting, thereby resolving a 13-year-old open question of Bahmani et al. (VLDB 2010). This sharply contrasts with the undirected setting, where we show that poly log n update time is feasible, even in the fully dynamic setting under an oblivious adversary.
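
For context, here is one common textbook formulation of the ForwardPush primitive mentioned above, written for personalized PageRank with teleport probability alpha on a static graph. The exact variant analysed in the paper (and its dynamic bookkeeping) may differ, so this is an assumption-laden sketch rather than the procedure whose lower bound is proved.

from collections import deque

def forward_push(graph, source, alpha=0.15, eps=1e-6):
    # Textbook ForwardPush for approximate personalized PageRank.
    # graph: dict u -> list of out-neighbours (all vertices appear as keys).
    # Maintains estimates p and residuals r; a vertex is pushed while its residual
    # exceeds eps times its out-degree.  Dangling vertices simply lose their
    # residual mass in this simplified sketch.
    p = {u: 0.0 for u in graph}
    r = {u: 0.0 for u in graph}
    r[source] = 1.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        deg = len(graph[u])
        if r[u] <= eps * max(deg, 1):
            continue
        mass = r[u]
        p[u] += alpha * mass
        r[u] = 0.0
        if deg == 0:
            continue
        share = (1 - alpha) * mass / deg
        for v in graph[u]:
            r[v] += share
            queue.append(v)    # re-checked against the threshold when popped
    return p

G = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(forward_push(G, "a"))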

Cite as

Rajesh Jayaram, Jakub Łącki, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski. Dynamic PageRank: Algorithms and Lower Bounds. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 90:1-90:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jayaram_et_al:LIPIcs.ICALP.2024.90,
  author =	{Jayaram, Rajesh and {\L}\k{a}cki, Jakub and Mitrovi\'{c}, Slobodan and Onak, Krzysztof and Sankowski, Piotr},
  title =	{{Dynamic PageRank: Algorithms and Lower Bounds}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{90:1--90:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.90},
  URN =		{urn:nbn:de:0030-drops-202336},
  doi =		{10.4230/LIPIcs.ICALP.2024.90},
  annote =	{Keywords: PageRank, dynamic algorithms, graph algorithms}
}
Document
Track A: Algorithms, Complexity and Games
A Sublinear Time Tester for Max-Cut on Clusterable Graphs

Authors: Agastya Vibhuti Jha and Akash Kumar


Abstract
One natural question in the area of sublinear time algorithms asks whether we can distinguish graphs with max-cut value at least 1-ε from graphs with max-cut value at most 1/2+ε in the adjacency list model where we can make degree queries and neighbor queries. Chiplunkar, Kapralov, Khanna, Mousavifar, and Peres (FOCS '18) showed that in graphs of bounded degree, one cannot hope for a factor 1/2+ε approximation to the max-cut value in time n^{1/2+o(ε)}. Recently, Peng and Yoshida (SODA '23) obtained o(n) time algorithms which can distinguish expanders with max-cut value at least 1-ε from expanders with small max-cut value (their running time is n^{1/2+O(ε)}). In this paper, going beyond the results of Peng-Yoshida, we develop sublinear time algorithms for this problem on clusterable graphs (a graph class with a good community structure). Our algorithms run in ≈ n^{0.5001+ O(ε)} time. A natural extension of the Peng-Yoshida approach does not seem to work for clusterable graphs. Indeed, their random walk based technique tracks the 𝓁₂ length of random walk vectors and they exploit the difference in the length of these vectors to tell apart expanders with large cut value from expanders with small cut value. Such approaches fail to be reliable when the graph has loosely connected clusters. Taking inspiration from [Ashish Chiplunkar et al., 2018], we exploit the more refined geometry of spectra of clusterable graphs, which leads to our sublinear time implementation. We prove a novel spectral lemma which shows that in a spectral expander 2 - λ_{n-1} ≥ Ω(λ₂). This lemma is leveraged to show that there is a suitable difference between the spectra of clusterable graphs with large cut value and the spectra of clusterable graphs with small cut value. We use this gap to obtain our sublinear time implementation. To do this, we obtain a nuanced understanding of the eigenvector structure of clusterable graphs and, in particular, we show that the eigenvectors of the normalized Laplacian of a clusterable graph corresponding to eigenvalues close to 2 have small infinity norm.
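
The quantities in the spectral lemma, eigenvalues of the normalized Laplacian lying in [0,2], can be inspected numerically on small examples; the snippet below does exactly that and nothing more (the indexing convention of the lemma and the expander assumption are the paper's, so the printout only shows the raw spectrum).

import numpy as np

def normalized_laplacian_spectrum(adj):
    # Eigenvalues, sorted ascending, of I - D^{-1/2} A D^{-1/2}.
    A = np.asarray(adj, dtype=float)
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))

# The complete graph K_5, a simple expander: spectrum {0, 5/4, 5/4, 5/4, 5/4}.
A = np.ones((5, 5)) - np.eye(5)
ev = normalized_laplacian_spectrum(A)
print("lambda_2 =", round(ev[1], 4), "  top of the spectrum =", np.round(ev[-2:], 4))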

Cite as

Agastya Vibhuti Jha and Akash Kumar. A Sublinear Time Tester for Max-Cut on Clusterable Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 91:1-91:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jha_et_al:LIPIcs.ICALP.2024.91,
  author =	{Jha, Agastya Vibhuti and Kumar, Akash},
  title =	{{A Sublinear Time Tester for Max-Cut on Clusterable Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{91:1--91:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.91},
  URN =		{urn:nbn:de:0030-drops-202344},
  doi =		{10.4230/LIPIcs.ICALP.2024.91},
  annote =	{Keywords: Sublinear Algorithms, Graph Algorithms, Clusterable Graphs, Property Testing}
}
Document
Track A: Algorithms, Complexity and Games
Algorithms for the Generalized Poset Sorting Problem

Authors: Shaofeng H.-C. Jiang, Wenqian Wang, Yubo Zhang, and Yuhao Zhang


Abstract
We consider a generalized poset sorting problem (GPS), in which we are given a query graph G = (V, E) and an unknown poset 𝒫(V, ≺) that is defined on the same vertex set V, and the goal is to make as few queries as possible to edges in G in order to fully recover 𝒫, where each query (u, v) returns the relation between u, v, i.e., u ≺ v, v ≺ u or u ≁ v. This generalizes both the poset sorting problem [Faigle et al., SICOMP 88] and the generalized sorting problem [Huang et al., FOCS 11]. We give algorithms with Õ(n poly(k)) query complexity when G is a complete bipartite graph or G is stochastic under the Erdős-Rényi model, where k is the width of the poset, and these generalize [Daskalakis et al., SICOMP 11] which only studies the complete graph G. Both results are based on a unified framework that reduces the poset sorting to partitioning the vertices with respect to a given pivot element, which may be of independent interest. Moreover, we also propose novel algorithms to implement this partition oracle. Notably, we suggest a randomized BFS with vertex skipping for the stochastic G, and it yields a nearly-tight bound even for the special case of generalized sorting (for stochastic G) which is comparable to the main result of a recent work [Kuszmaul et al., FOCS 21] but is conceptually different and simpler. Our study of GPS also leads to a new Õ(n^{1 - 1 / (2W)}) competitive ratio for the so-called weighted generalized sorting problem where W is the number of distinct weights in the query graph. This problem was considered as an open question in [Charikar et al., JCSS 02], and our result makes important progress as it yields the first nontrivial sublinear ratio for general weighted query graphs (for any bounded W). We obtain this via an Õ(nk + n^{1.5}) query complexity algorithm for the case where every edge in G is guaranteed to be comparable in the poset, which generalizes an Õ(n^{1.5}) bound for generalized sorting [Huang et al., FOCS 11].
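
The pivot-based reduction mentioned above has a trivial counterpart that spends one query per vertex; the sketch below only fixes the semantics of the three possible answers and ignores all of the query-efficiency questions the paper actually studies.

def partition_by_pivot(vertices, pivot, query):
    # Naive partition with respect to `pivot` using the three-way oracle:
    # query(u, v) returns '<' if u precedes v, '>' if v precedes u, and '||' if incomparable.
    below, above, incomparable = [], [], []
    for v in vertices:
        if v == pivot:
            continue
        ans = query(v, pivot)
        (below if ans == '<' else above if ans == '>' else incomparable).append(v)
    return below, above, incomparable

# Example poset: divisibility on {1,...,10}, where u precedes v iff u properly divides v.
def divisibility_query(u, v):
    if u != v and v % u == 0:
        return '<'
    if u != v and u % v == 0:
        return '>'
    return '||'

print(partition_by_pivot(range(1, 11), 4, divisibility_query))
# below: the proper divisors 1 and 2; above: the multiple 8; everything else is incomparable.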

Cite as

Shaofeng H.-C. Jiang, Wenqian Wang, Yubo Zhang, and Yuhao Zhang. Algorithms for the Generalized Poset Sorting Problem. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 92:1-92:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jiang_et_al:LIPIcs.ICALP.2024.92,
  author =	{Jiang, Shaofeng H.-C. and Wang, Wenqian and Zhang, Yubo and Zhang, Yuhao},
  title =	{{Algorithms for the Generalized Poset Sorting Problem}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{92:1--92:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.92},
  URN =		{urn:nbn:de:0030-drops-202359},
  doi =		{10.4230/LIPIcs.ICALP.2024.92},
  annote =	{Keywords: sorting, poset sorting, generalized sorting}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Algorithms for Connectivity Augmentation

Authors: Ce Jin, Michael Kapralov, Sepideh Mahabadi, and Ali Vakilian


Abstract
We study the k-connectivity augmentation problem (k-CAP) in the single-pass streaming model. Given a (k-1)-edge connected graph G = (V,E) that is stored in memory, and a stream of weighted edges (also called links) L with weights in {0,1,… ,W}, the goal is to choose a minimum weight subset L' ⊆ L of the links such that G' = (V,E∪ L') is k-edge connected. We give a (2+ε)-approximation algorithm for this problem which requires storing O(ε^{-1} n log n) words. Moreover, we show the tightness of our result: any algorithm achieving a better-than-2 approximation for the problem requires Ω(n²) bits of space even when k = 2. This establishes a gap between the optimal approximation factor one can obtain in the streaming vs the offline setting for k-CAP. We further consider a natural generalization to the fully streaming model where both E and L arrive in the stream in an arbitrary order. We show that this problem has a space lower bound that matches the best possible size of a spanner of the same approximation ratio. Following this, we give improved results for spanners on weighted graphs: we show a streaming algorithm that finds a (2t-1+ε)-approximate weighted spanner of size at most O(ε^{-1} n^{1+1/t} log n) for integer t, whereas the best prior streaming algorithm for spanners on weighted graphs had size depending on log W. We believe that this result is of independent interest. Using our spanner result, we provide an optimal O(t)-approximation for k-CAP in the fully streaming model with O(nk + n^{1+1/t}) words of space. Finally, we apply our results to network design problems such as the Steiner tree augmentation problem (STAP), k-edge connected spanning subgraph (k-ECSS) and the general Survivable Network Design problem (SNDP). In particular, we show a single-pass O(t log k)-approximation for SNDP using O(kn^{1+1/t}) words of space, where k is the maximum connectivity requirement.

Cite as

Ce Jin, Michael Kapralov, Sepideh Mahabadi, and Ali Vakilian. Streaming Algorithms for Connectivity Augmentation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 93:1-93:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jin_et_al:LIPIcs.ICALP.2024.93,
  author =	{Jin, Ce and Kapralov, Michael and Mahabadi, Sepideh and Vakilian, Ali},
  title =	{{Streaming Algorithms for Connectivity Augmentation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{93:1--93:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.93},
  URN =		{urn:nbn:de:0030-drops-202367},
  doi =		{10.4230/LIPIcs.ICALP.2024.93},
  annote =	{Keywords: streaming algorithms, connectivity augmentation}
}
Document
Track A: Algorithms, Complexity and Games
A Faster Algorithm for Pigeonhole Equal Sums

Authors: Ce Jin and Hongxun Wu


Abstract
An important area of research in exact algorithms is to solve Subset-Sum-type problems faster than meet-in-the-middle. In this paper we study Pigeonhole Equal Sums, a total search problem proposed by Papadimitriou (1994): given n positive integers w₁,… ,w_n of total sum ∑_{i = 1}ⁿ w_i < 2ⁿ-1, the task is to find two distinct subsets A, B ⊆ [n] such that ∑_{i ∈ A}w_i = ∑_{i ∈ B}w_i. Similar to the status of the Subset Sum problem, the best known algorithm for Pigeonhole Equal Sums runs in O^*(2^{n/2}) time, via either meet-in-the-middle or dynamic programming (Allcock, Hamoudi, Joux, Klingelhöfer, and Santha, 2022). Our main result is an improved algorithm for Pigeonhole Equal Sums in O^*(2^{0.4n}) time. We also give a polynomial-space algorithm in O^*(2^{0.75n}) time. Unlike many previous works in this area, our approach does not use the representation method, but rather exploits a simple structural characterization of input instances with few solutions.
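
A minimal sketch of the pigeonhole guarantee and of the trivial O^*(2ⁿ) enumeration baseline (not the paper's O^*(2^{0.4n}) algorithm): since the total sum is below 2ⁿ-1, the 2ⁿ subset sums cannot all be distinct, so a collision is always found.

# Naive enumeration for Pigeonhole Equal Sums: because sum(w) < 2^n - 1, the 2^n
# subset sums cannot all be distinct, so two distinct subsets must collide.
# This is the trivial O*(2^n) baseline, not the paper's O*(2^{0.4n}) algorithm.
from itertools import chain, combinations

def pigeonhole_equal_sums(w):
    seen = {}  # subset sum -> subset (as a tuple of indices)
    n = len(w)
    for subset in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
        s = sum(w[i] for i in subset)
        if s in seen:
            return seen[s], subset  # two distinct subsets with equal sum
        seen[s] = subset
    return None  # unreachable when sum(w) < 2**n - 1

print(pigeonhole_equal_sums([2, 3, 4, 5]))  # ((3,), (0, 1)): w[3] = 5 = 2 + 3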

Cite as

Ce Jin and Hongxun Wu. A Faster Algorithm for Pigeonhole Equal Sums. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 94:1-94:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jin_et_al:LIPIcs.ICALP.2024.94,
  author =	{Jin, Ce and Wu, Hongxun},
  title =	{{A Faster Algorithm for Pigeonhole Equal Sums}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{94:1--94:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.94},
  URN =		{urn:nbn:de:0030-drops-202375},
  doi =		{10.4230/LIPIcs.ICALP.2024.94},
  annote =	{Keywords: Subset Sum, Pigeonhole, PPP}
}
Document
Track A: Algorithms, Complexity and Games
Fully Dynamic Strongly Connected Components in Planar Digraphs

Authors: Adam Karczmarz and Marcin Smulewicz


Abstract
In this paper we consider maintaining strongly connected components (SCCs) of a directed planar graph subject to edge insertions and deletions. We show a data structure maintaining an implicit representation of the SCCs within Õ(n^{6/7}) worst-case time per update. The data structure supports, in O(log²{n}) time, reporting vertices of any specified SCC (with constant overhead per reported vertex) and aggregating vertex information (e.g., computing the maximum label) over all the vertices of that SCC. Furthermore, it can maintain global information about the structure of SCCs, such as the number of SCCs or the size of the largest SCC. To the best of our knowledge, no fully dynamic SCC data structures with sublinear update time were previously known for any major subclass of digraphs. Our result should be contrasted with the n^{1-o(1)} amortized update time lower bound conditional on SETH, which holds even for dynamically maintaining whether a general digraph has more than two SCCs.
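
For contrast with the data structure described above, a recompute-from-scratch baseline (assuming networkx) that pays a full static SCC computation for every update:

# Trivial fully dynamic SCC baseline: recompute SCCs from scratch after every
# update. This is the folklore linear-time-per-update approach that the
# Õ(n^{6/7})-per-update planar data structure improves upon. Assumes networkx.
import networkx as nx

class NaiveDynamicSCC:
    def __init__(self, edges=()):
        self.G = nx.DiGraph(edges)
        self._recompute()

    def _recompute(self):
        self.sccs = list(nx.strongly_connected_components(self.G))
        self.scc_of = {v: i for i, comp in enumerate(self.sccs) for v in comp}

    def insert(self, u, v):
        self.G.add_edge(u, v)
        self._recompute()

    def delete(self, u, v):
        self.G.remove_edge(u, v)
        self._recompute()

    def same_scc(self, u, v):
        return self.scc_of[u] == self.scc_of[v]

d = NaiveDynamicSCC([(1, 2), (2, 3)])
d.insert(3, 1)
print(d.same_scc(1, 3))  # True: the cycle 1 -> 2 -> 3 -> 1 forms one SCC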

Cite as

Adam Karczmarz and Marcin Smulewicz. Fully Dynamic Strongly Connected Components in Planar Digraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 95:1-95:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{karczmarz_et_al:LIPIcs.ICALP.2024.95,
  author =	{Karczmarz, Adam and Smulewicz, Marcin},
  title =	{{Fully Dynamic Strongly Connected Components in Planar Digraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{95:1--95:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.95},
  URN =		{urn:nbn:de:0030-drops-202388},
  doi =		{10.4230/LIPIcs.ICALP.2024.95},
  annote =	{Keywords: dynamic strongly connected components, dynamic strong connectivity, dynamic reachability, planar graphs}
}
Document
Track A: Algorithms, Complexity and Games
Minimizing Symmetric Convex Functions over Hybrid of Continuous and Discrete Convex Sets

Authors: Yasushi Kawase, Koichi Nishimura, and Hanna Sumita


Abstract
We study the problem of minimizing a given symmetric strictly convex function over the Minkowski sum of an integral base-polyhedron and an M-convex set. This problem has a hybrid of continuous and discrete structures. It emerges from the problem of allocating mixed goods, consisting of both divisible and indivisible goods, to agents with binary valuations so that a fairness measure, such as the Nash welfare, is maximized. It is known that both an integral base-polyhedron and an M-convex set have similar and nice properties, and that the non-hybrid case can be solved in polynomial time. While the hybrid case lacks some of these properties, we show the structure of an optimal solution. Moreover, we exploit a proximity inherent in the problem. Based on these findings, we demonstrate that our problem is NP-hard even in the fair allocation setting where all indivisible goods are identical. In contrast, we provide a polynomial-time algorithm for the fair allocation problem when all divisible goods are identical.

Cite as

Yasushi Kawase, Koichi Nishimura, and Hanna Sumita. Minimizing Symmetric Convex Functions over Hybrid of Continuous and Discrete Convex Sets. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kawase_et_al:LIPIcs.ICALP.2024.96,
  author =	{Kawase, Yasushi and Nishimura, Koichi and Sumita, Hanna},
  title =	{{Minimizing Symmetric Convex Functions over Hybrid of Continuous and Discrete Convex Sets}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.96},
  URN =		{urn:nbn:de:0030-drops-202393},
  doi =		{10.4230/LIPIcs.ICALP.2024.96},
  annote =	{Keywords: Integral base-polyhedron, Fair allocation, Matroid}
}
Document
Track A: Algorithms, Complexity and Games
Cut Sparsification and Succinct Representation of Submodular Hypergraphs

Authors: Yotam Kenneth and Robert Krauthgamer


Abstract
In cut sparsification, all cuts of a hypergraph H = (V,E,w) are approximated within a 1±ε factor by a small hypergraph H'. This widely applied method was recently generalized to a setting where the cost of cutting each hyperedge e is provided by a splitting function g_e: 2^e → ℝ_+. This generalization is called a submodular hypergraph when the functions {g_e}_{e ∈ E} are submodular, and it arises in machine learning, combinatorial optimization, and algorithmic game theory. Previous work studied the setting where H' is a reweighted sub-hypergraph of H, and measured the size of H' by the number of hyperedges in it. In this setting, we present two results: (i) all submodular hypergraphs admit sparsifiers of size polynomial in n = |V| and ε^{-1}; (ii) we propose a new parameter, called spread, and use it to obtain smaller sparsifiers in some cases. We also show that for a natural family of splitting functions, relaxing the requirement that H' be a reweighted sub-hypergraph of H yields a substantially smaller encoding of the cuts of H (almost a factor n in the number of bits). This is in contrast to graphs, where the most succinct representation is attained by reweighted subgraphs. A new tool in our construction of succinct representations is the notion of deformation, where a splitting function g_e is decomposed into a sum of functions of small description, and we provide upper and lower bounds for the deformation of common splitting functions.
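
To fix notation, a small sketch of how the cut value of a submodular hypergraph is evaluated, with each hyperedge e contributing g_e(e ∩ S); the helper names are illustrative only.

# Minimal sketch: the cut value of a vertex set S in a submodular hypergraph is
# the sum, over hyperedges e, of the splitting function g_e evaluated on e ∩ S.
# (With g_e(A) = 1 iff A is a proper nonempty subset of e, this recovers the
# usual all-or-nothing hypergraph cut.)
def cut_value(hyperedges, splitting_fns, S):
    """hyperedges: list of frozensets; splitting_fns: list of g_e, each taking a frozenset."""
    S = frozenset(S)
    return sum(g(e & S) for e, g in zip(hyperedges, splitting_fns))

# Example with the standard all-or-nothing splitting function.
def all_or_nothing(e):
    return lambda A: 1.0 if 0 < len(A) < len(e) else 0.0

E = [frozenset({1, 2, 3}), frozenset({3, 4})]
g = [all_or_nothing(e) for e in E]
print(cut_value(E, g, {1, 2}))  # 1.0: only the first hyperedge is cut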

Cite as

Yotam Kenneth and Robert Krauthgamer. Cut Sparsification and Succinct Representation of Submodular Hypergraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 97:1-97:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kenneth_et_al:LIPIcs.ICALP.2024.97,
  author =	{Kenneth, Yotam and Krauthgamer, Robert},
  title =	{{Cut Sparsification and Succinct Representation of Submodular Hypergraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{97:1--97:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.97},
  URN =		{urn:nbn:de:0030-drops-202406},
  doi =		{10.4230/LIPIcs.ICALP.2024.97},
  annote =	{Keywords: Cut Sparsification, Submodular Hypergraphs, Succinct Representation}
}
Document
Track A: Algorithms, Complexity and Games
Almost-Tight Bounds on Preserving Cuts in Classes of Submodular Hypergraphs

Authors: Sanjeev Khanna, Aaron (Louie) Putterman, and Madhu Sudan


Abstract
Recently, a number of variants of the notion of cut-preserving hypergraph sparsification have been studied in the literature. These variants include directed hypergraph sparsification, submodular hypergraph sparsification, general notions of approximation including spectral approximations, and more general notions like sketching that can answer cut queries using more general data structures than just sparsifiers. In this work, we provide reductions between these different variants of hypergraph sparsification and establish new upper and lower bounds on the space complexity of preserving their cuts. Specifically, we show that:
1) (1 ± ε) directed hypergraph spectral (respectively cut) sparsification on n vertices efficiently reduces to (1 ± ε) undirected hypergraph spectral (respectively cut) sparsification on n² + 1 vertices. Using the work of Lee and Jambulapati, Liu, and Sidford (STOC 2023), this gives us directed hypergraph spectral sparsifiers with O(n² log²(n) / ε²) hyperedges and, using the work of Chen, Khanna, and Nagda (FOCS 2020), directed hypergraph cut sparsifiers with O(n² log(n)/ ε²) hyperedges, both of which improve upon the work of Oko, Sakaue, and Tanigawa (ICALP 2023).
2) Any cut sketching scheme which preserves all cuts in any directed hypergraph on n vertices to a (1 ± ε) factor (for ε = 1/(2^{O(√{log(n)})})) must have worst-case bit complexity n^{3 - o(1)}. Because directed hypergraphs are a subclass of submodular hypergraphs, this also shows a worst-case sketching lower bound of n^{3 - o(1)} bits for sketching cuts in general submodular hypergraphs.
3) (1 ± ε) monotone submodular hypergraph cut sparsification on n vertices efficiently reduces to (1 ± ε) symmetric submodular hypergraph sparsification on n+1 vertices. Using the work of Jambulapati et al. (FOCS 2023), this gives us monotone submodular hypergraph sparsifiers with Õ(n / ε²) hyperedges, improving on the O(n³ / ε²) hyperedge bound of Kenneth and Krauthgamer (arXiv 2023).
At a high level, our results use the same general principle: by showing that cuts in one class of hypergraphs can be simulated by cuts in a simpler class of hypergraphs, we can leverage sparsification results for the simpler class of hypergraphs.

Cite as

Sanjeev Khanna, Aaron (Louie) Putterman, and Madhu Sudan. Almost-Tight Bounds on Preserving Cuts in Classes of Submodular Hypergraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 98:1-98:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{khanna_et_al:LIPIcs.ICALP.2024.98,
  author =	{Khanna, Sanjeev and Putterman, Aaron (Louie) and Sudan, Madhu},
  title =	{{Almost-Tight Bounds on Preserving Cuts in Classes of Submodular Hypergraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{98:1--98:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.98},
  URN =		{urn:nbn:de:0030-drops-202410},
  doi =		{10.4230/LIPIcs.ICALP.2024.98},
  annote =	{Keywords: Sparsification, sketching, hypergraphs}
}
Document
Track A: Algorithms, Complexity and Games
Constrained Level Planarity Is FPT with Respect to the Vertex Cover Number

Authors: Boris Klemz and Marie Diana Sieper


Abstract
The problem Level Planarity asks for a crossing-free drawing of a graph in the plane such that vertices are placed at prescribed y-coordinates (called levels) and such that every edge is realized as a y-monotone curve. In the variant Constrained Level Planarity, each level y is equipped with a partial order ≺_y on its vertices and in the desired drawing the left-to-right order of vertices on level y has to be a linear extension of ≺_y. Constrained Level Planarity is known to be a remarkably difficult problem: previous results by Klemz and Rote [ACM Trans. Alg.'19] and by Brückner and Rutter [SODA'17] imply that it remains NP-hard even when restricted to graphs whose tree-depth and feedback vertex set number are bounded by a constant and even when the instances are additionally required to be either proper, meaning that each edge spans two consecutive levels, or ordered, meaning that all given partial orders are total orders. In particular, these results rule out the existence of FPT-time (even XP-time) algorithms with respect to these and related graph parameters (unless P=NP). However, the parameterized complexity of Constrained Level Planarity with respect to the vertex cover number of the input graph remained open. In this paper, we show that Constrained Level Planarity can be solved in FPT-time when parameterized by the vertex cover number. In view of the previous intractability statements, our result is best-possible in several regards: a speed-up to polynomial time or a generalization to the aforementioned smaller graph parameters is not possible, even if restricting to proper or ordered instances.

Cite as

Boris Klemz and Marie Diana Sieper. Constrained Level Planarity Is FPT with Respect to the Vertex Cover Number. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 99:1-99:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{klemz_et_al:LIPIcs.ICALP.2024.99,
  author =	{Klemz, Boris and Sieper, Marie Diana},
  title =	{{Constrained Level Planarity Is FPT with Respect to the Vertex Cover Number}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{99:1--99:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.99},
  URN =		{urn:nbn:de:0030-drops-202428},
  doi =		{10.4230/LIPIcs.ICALP.2024.99},
  annote =	{Keywords: Parameterized Complexity, Graph Drawing, Planar Poset Diagram, Level Planarity, Constrained Level Planarity, Vertex Cover, FPT, Computational Geometry}
}
Document
Track A: Algorithms, Complexity and Games
Subquadratic Submodular Maximization with a General Matroid Constraint

Authors: Yusuke Kobayashi and Tatsuya Terao


Abstract
We consider fast algorithms for monotone submodular maximization with a general matroid constraint. We present a randomized (1 - 1/e - ε)-approximation algorithm that requires Õ_{ε}(√r n) independence oracle and value oracle queries, where n is the number of elements in the matroid and r ≤ n is the rank of the matroid. This improves upon the previous best algorithm by Buchbinder-Feldman-Schwartz [Mathematics of Operations Research 2017] that requires Õ_{ε}(r² + √rn) queries. Our algorithm is based on continuous relaxation, as with other submodular maximization algorithms in the literature. To achieve subquadratic query complexity, we develop a new rounding algorithm, which is our main technical contribution. The rounding algorithm takes as input a point represented as a convex combination of t bases of a matroid and rounds it to an integral solution. Our rounding algorithm requires Õ(r^{3/2} t) independence oracle queries, while the previous best rounding algorithm by Chekuri-Vondrák-Zenklusen [FOCS 2010] requires O(r² t) independence oracle queries. A key idea in our rounding algorithm is to use a directed cycle of arbitrary length in an auxiliary graph, while the algorithm of Chekuri-Vondrák-Zenklusen focused on directed cycles of length two.
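
The sketch below is not the paper's continuous-relaxation algorithm; it is the classical greedy 1/2-approximation for monotone submodular maximization under a matroid constraint, shown only to illustrate the value-oracle / independence-oracle query model that the bounds above count.

# Classical greedy 1/2-approximation under a matroid constraint, shown to
# illustrate the two oracles being counted; the paper's (1 - 1/e - eps)
# algorithm instead uses continuous relaxation plus a new rounding procedure.
def greedy_matroid(elements, value_oracle, independence_oracle):
    S = set()
    improved = True
    while improved:
        improved = False
        best_gain, best_e = 0.0, None
        for e in elements:
            if e in S or not independence_oracle(S | {e}):
                continue  # adding e would violate the matroid constraint
            gain = value_oracle(S | {e}) - value_oracle(S)
            if gain > best_gain:
                best_gain, best_e = gain, e
        if best_e is not None:
            S.add(best_e)
            improved = True
    return S

# Toy instance: coverage function (monotone submodular), uniform matroid of rank 2.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
indep = lambda S: len(S) <= 2
print(greedy_matroid(sets.keys(), f, indep))  # {'a', 'b'}, value 3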

Cite as

Yusuke Kobayashi and Tatsuya Terao. Subquadratic Submodular Maximization with a General Matroid Constraint. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 100:1-100:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kobayashi_et_al:LIPIcs.ICALP.2024.100,
  author =	{Kobayashi, Yusuke and Terao, Tatsuya},
  title =	{{Subquadratic Submodular Maximization with a General Matroid Constraint}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{100:1--100:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.100},
  URN =		{urn:nbn:de:0030-drops-202437},
  doi =		{10.4230/LIPIcs.ICALP.2024.100},
  annote =	{Keywords: submodular maximization, matroid constraint, approximation algorithm, rounding algorithm, query complexity}
}
Document
Track A: Algorithms, Complexity and Games
On the Space Usage of Approximate Distance Oracles with Sub-2 Stretch

Authors: Tsvi Kopelowitz, Ariel Korin, and Liam Roditty


Abstract
For an undirected unweighted graph G = (V,E) with n vertices and m edges, let d(u,v) denote the distance from u ∈ V to v ∈ V in G. An (α,β)-stretch approximate distance oracle (ADO) for G is a data structure that given u,v ∈ V returns in constant (or near constant) time a value dˆ(u,v) such that d(u,v) ≤ dˆ(u,v) ≤ α⋅ d(u,v) + β, for some reals α > 1, β. Thorup and Zwick [Mikkel Thorup and Uri Zwick, 2005] showed that one cannot beat stretch 3 with subquadratic space (in terms of n) for general graphs. Pǎtraşcu and Roditty [Mihai Pǎtraşcu and Liam Roditty, 2010] showed that one can obtain stretch 2 using O(m^{1/3}n^{4/3}) space, and so if m is subquadratic in n then the space usage is also subquadratic. Moreover, Pǎtraşcu and Roditty [Mihai Pǎtraşcu and Liam Roditty, 2010] showed that one cannot beat stretch 2 with subquadratic space even for graphs where m = Õ(n), based on the set-intersection hypothesis. In this paper we explore the conditions for which an ADO can beat stretch 2 while using subquadratic space. In particular, we show that if the maximum degree in G is Δ_G ≤ O(n^{1/k-ε}) for some 0 < ε ≤ 1/k, then there exists an ADO for G that uses Õ(n^{2-(kε)/3}) space and has a (2,1-k)-stretch. For k = 2 this result implies a subquadratic sub-2 stretch ADO for graphs with Δ_G ≤ O(n^{1/2-ε}). Moreover, we prove a conditional lower bound, based on the set intersection hypothesis, which states that for any positive integer k ≤ log n, obtaining a sub-(k+2)/k stretch for graphs with Δ_G = Θ(n^{1/k}) requires Ω̃(n²) space. Thus, for graphs with maximum degree Θ(n^{1/2}), obtaining a sub-2 stretch requires Ω̃(n²) space.
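
A small sanity-check sketch of the (α,β)-stretch definition (assuming networkx; the exact BFS distances are used here only for validation, not by any oracle).

# Check the (alpha, beta)-stretch guarantee of a distance estimator:
# d(u,v) <= dhat(u,v) <= alpha * d(u,v) + beta for all pairs. Assumes networkx.
import networkx as nx

def check_stretch(G, dhat, alpha, beta):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u in G:
        for v in G:
            if u == v:
                continue
            d, est = dist[u][v], dhat(u, v)
            if not (d <= est <= alpha * d + beta):
                return False, (u, v, d, est)
    return True, None

# A trivial "oracle" with stretch (2, 0): report twice the true distance.
G = nx.cycle_graph(6)
dist = dict(nx.all_pairs_shortest_path_length(G))
print(check_stretch(G, lambda u, v: 2 * dist[u][v], alpha=2, beta=0))  # (True, None)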

Cite as

Tsvi Kopelowitz, Ariel Korin, and Liam Roditty. On the Space Usage of Approximate Distance Oracles with Sub-2 Stretch. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 101:1-101:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kopelowitz_et_al:LIPIcs.ICALP.2024.101,
  author =	{Kopelowitz, Tsvi and Korin, Ariel and Roditty, Liam},
  title =	{{On the Space Usage of Approximate Distance Oracles with Sub-2 Stretch}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{101:1--101:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.101},
  URN =		{urn:nbn:de:0030-drops-202443},
  doi =		{10.4230/LIPIcs.ICALP.2024.101},
  annote =	{Keywords: Graph algorithms, Approximate distance oracle, data structures, shortest path}
}
Document
Track A: Algorithms, Complexity and Games
Lipschitz Continuous Allocations for Optimization Games

Authors: Soh Kumabe and Yuichi Yoshida


Abstract
In cooperative game theory, the primary focus is the equitable allocation of payoffs or costs among agents. However, in practical applications of cooperative games, accurately representing games is challenging. In such cases, using an allocation method sensitive to small perturbations in the game can lead to various problems, including dissatisfaction among agents and the potential for manipulation by agents seeking to maximize their own benefits. Therefore, the allocation method must be robust against game perturbations. In this study, we explore optimization games, in which the value of the characteristic function is provided as the optimal value of an optimization problem. To assess the robustness of the allocation methods, we use the Lipschitz constant, which quantifies the extent of change in the allocation vector in response to a unit perturbation in the weight vector of the underlying problem. We provide an algorithm for the matching game that returns an allocation belonging to the (1/2-ε)-approximate core with Lipschitz constant O(ε^{-1}). Additionally, we provide an algorithm for the minimum spanning tree game that returns an allocation belonging to the 4-approximate core with a constant Lipschitz constant. The Shapley value is a popular allocation that satisfies several desirable properties, so we also investigate its robustness. We demonstrate that the Lipschitz constant of the Shapley value for the minimum spanning tree game is constant, whereas that for the matching game is Ω(log n), where n denotes the number of vertices.
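
As a reference point for the allocations discussed above, a brute-force Shapley value computation on a toy matching game (exponential in the number of agents, illustration only).

# Brute-force Shapley value of a cooperative game: the average marginal
# contribution of each agent over all orderings. This only makes the allocation
# whose Lipschitz constant is analyzed concrete; it is exponential in n.
from itertools import permutations

def shapley(agents, v):
    """v: characteristic function taking a frozenset of agents."""
    phi = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = frozenset()
        for a in order:
            phi[a] += v(coalition | {a}) - v(coalition)
            coalition |= {a}
    return {a: x / len(perms) for a, x in phi.items()}

# Toy matching game on the path 1-2-3 with unit edge weights: v(S) is the size
# of a maximum matching inside S (here 1 if S contains an edge, else 0).
def v(S):
    edges = [(1, 2), (2, 3)]
    return max((1 for e in edges if set(e) <= S), default=0)

print(shapley([1, 2, 3], v))  # {1: 1/6, 2: 2/3, 3: 1/6}, summing to v({1,2,3}) = 1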

Cite as

Soh Kumabe and Yuichi Yoshida. Lipschitz Continuous Allocations for Optimization Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 102:1-102:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kumabe_et_al:LIPIcs.ICALP.2024.102,
  author =	{Kumabe, Soh and Yoshida, Yuichi},
  title =	{{Lipschitz Continuous Allocations for Optimization Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{102:1--102:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.102},
  URN =		{urn:nbn:de:0030-drops-202456},
  doi =		{10.4230/LIPIcs.ICALP.2024.102},
  annote =	{Keywords: Cooperative Games, Lipschitz Continuity}
}
Document
Track A: Algorithms, Complexity and Games
Towards an Analysis of Quadratic Probing

Authors: William Kuszmaul and Zoe Xi


Abstract
Since 1968, one of the simplest open questions in the theory of hash tables has been to prove anything nontrivial about the correctness of quadratic probing. We make the first tangible progress towards this goal, showing that there exists a positive-constant load factor at which quadratic probing is a constant-expected-time hash table. Our analysis applies more generally to any fixed-offset open-addressing hash table, and extends to higher load factors in the case where the hash table examines blocks of some size B = ω(1).
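
A minimal sketch of the kind of fixed-offset probing in question, namely quadratic probing with offsets 0, 1, 4, 9, … from the hashed slot (a toy table written for illustration, not an artifact of the paper).

# Minimal open-addressing hash table with quadratic probing: for key x, the
# slots h(x), h(x)+1, h(x)+4, h(x)+9, ... (mod m) are examined in order.
class QuadraticProbingTable:
    def __init__(self, capacity=16):
        self.m = capacity
        self.slots = [None] * capacity  # each slot holds (key, value) or None

    def _probe(self, key):
        h = hash(key) % self.m
        for i in range(self.m):
            yield (h + i * i) % self.m

    def insert(self, key, value):
        for idx in self._probe(key):
            if self.slots[idx] is None or self.slots[idx][0] == key:
                self.slots[idx] = (key, value)
                return
        raise RuntimeError("probe sequence exhausted (table too full)")

    def get(self, key):
        for idx in self._probe(key):
            entry = self.slots[idx]
            if entry is None:
                return None
            if entry[0] == key:
                return entry[1]
        return None

t = QuadraticProbingTable()
t.insert("a", 1)
t.insert("b", 2)
print(t.get("a"), t.get("b"))  # 1 2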

Cite as

William Kuszmaul and Zoe Xi. Towards an Analysis of Quadratic Probing. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 103:1-103:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kuszmaul_et_al:LIPIcs.ICALP.2024.103,
  author =	{Kuszmaul, William and Xi, Zoe},
  title =	{{Towards an Analysis of Quadratic Probing}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{103:1--103:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.103},
  URN =		{urn:nbn:de:0030-drops-202463},
  doi =		{10.4230/LIPIcs.ICALP.2024.103},
  annote =	{Keywords: quadratic probing, hashing, open addressing, witness trees}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Non-Adaptive Cell Probe Dictionaries and Hashing

Authors: Kasper Green Larsen, Rasmus Pagh, Giuseppe Persiano, Toniann Pitassi, Kevin Yeo, and Or Zamir


Abstract
We present a simple and provably optimal non-adaptive cell probe data structure for the static dictionary problem. Our data structure supports storing a set of n key-value pairs from [u]× [u] using s words of space and answering key lookup queries in t = O(lg(u/n)/lg(s/n)) non-adaptive probes. This generalizes a solution to the membership problem (i.e., where no values are associated with keys) due to Buhrman et al. We also present matching lower bounds for the non-adaptive static membership problem in the deterministic setting. Our lower bound implies that both our dictionary algorithm and the preceding membership algorithm are optimal, and in particular that there is an inherent complexity gap in these problems between no adaptivity and one round of adaptivity (with which hashing-based algorithms solve these problems in constant time). Using the ideas underlying our data structure, we also obtain the first implementation of an n-wise independent family of hash functions with optimal evaluation time in the cell probe model.
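
A toy sketch (not the paper's construction) of what non-adaptive lookups mean: the probe locations of a key are fixed functions of the key alone, so all probes can be issued before reading any cell. The helper names and the use of SHA-256 are illustrative assumptions.

# Toy non-adaptive dictionary: the t probe locations of a key depend only on
# the key, so a lookup reads all its cells at once and never adapts to their
# contents. This is an illustration of the model, not the paper's scheme.
import hashlib

def probes(key, t, m):
    """t probe locations in a table of m cells, determined by the key alone."""
    return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % m
            for i in range(t)]

def build(pairs, t=3, m=64):
    table = [None] * m
    for key, value in pairs:
        for idx in probes(key, t, m):
            if table[idx] is None:
                table[idx] = (key, value)
                break
        else:
            raise RuntimeError("all probe cells occupied; rebuild with larger m")
    return table

def lookup(table, key, t=3):
    cells = [table[idx] for idx in probes(key, t, len(table))]  # non-adaptive reads
    for entry in cells:
        if entry is not None and entry[0] == key:
            return entry[1]
    return None

table = build([("apple", 1), ("pear", 2), ("plum", 3)])
print(lookup(table, "pear"), lookup(table, "kiwi"))  # 2 None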

Cite as

Kasper Green Larsen, Rasmus Pagh, Giuseppe Persiano, Toniann Pitassi, Kevin Yeo, and Or Zamir. Optimal Non-Adaptive Cell Probe Dictionaries and Hashing. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 104:1-104:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{larsen_et_al:LIPIcs.ICALP.2024.104,
  author =	{Larsen, Kasper Green and Pagh, Rasmus and Persiano, Giuseppe and Pitassi, Toniann and Yeo, Kevin and Zamir, Or},
  title =	{{Optimal Non-Adaptive Cell Probe Dictionaries and Hashing}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{104:1--104:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.104},
  URN =		{urn:nbn:de:0030-drops-202471},
  doi =		{10.4230/LIPIcs.ICALP.2024.104},
  annote =	{Keywords: non-adaptive, cell probe, dictionary, hashing}
}
Document
Track A: Algorithms, Complexity and Games
An Improved Quantum Max Cut Approximation via Maximum Matching

Authors: Eunou Lee and Ojas Parekh


Abstract
Finding a high (or low) energy state of a given quantum Hamiltonian is a potential area to gain a provable and practical quantum advantage. A line of recent studies focuses on Quantum Max Cut, where one is asked to find a high energy state of a given antiferromagnetic Heisenberg Hamiltonian. In this work, we present a classical approximation algorithm for Quantum Max Cut that achieves an approximation ratio of 0.595, outperforming the previous best algorithms of Lee [Eunou Lee, 2022] (0.562, generic input graph) and King [King, 2023] (0.582, triangle-free input graph). The algorithm is based on finding the maximum weighted matching of an input graph and outputs a product of at most 2-qubit states, which is simpler than the fully entangled output states of the previous best algorithms.
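
Only the classical first step of the described approach is sketched below, namely computing a maximum weight matching with networkx; constructing the product of (at most) 2-qubit states from the matched pairs is the quantum part developed in the paper.

# Classical first step of the described approach: compute a maximum weight
# matching of the interaction graph. Turning the matched pairs into a product
# of (at most) 2-qubit output states is not reproduced here. Assumes networkx.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 2.0)])

matching = nx.max_weight_matching(G)  # set of matched vertex pairs
print(sorted(tuple(sorted(e)) for e in matching))  # [(0, 3), (1, 2)], total weight 4.0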

Cite as

Eunou Lee and Ojas Parekh. An Improved Quantum Max Cut Approximation via Maximum Matching. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 105:1-105:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{lee_et_al:LIPIcs.ICALP.2024.105,
  author =	{Lee, Eunou and Parekh, Ojas},
  title =	{{An Improved Quantum Max Cut Approximation via Maximum Matching}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{105:1--105:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.105},
  URN =		{urn:nbn:de:0030-drops-202482},
  doi =		{10.4230/LIPIcs.ICALP.2024.105},
  annote =	{Keywords: approximation, optimization, local Hamiltonian, rounding, SDP, matching}
}
Document
Track A: Algorithms, Complexity and Games
Polylogarithmic Approximations for Robust s-t Path

Authors: Shi Li, Chenyang Xu, and Ruilong Zhang


Abstract
The paper revisits the Robust s-t Path problem, one of the most fundamental problems in robust optimization. In the problem, we are given a directed graph with n vertices and k distinct cost functions (scenarios) defined over edges, and aim to choose an s-t path whose total cost is as small as possible no matter which scenario is realized. Viewing each cost function as an agent, our goal is to find a fair s-t path, which minimizes the maximum cost among all agents. The problem is NP-hard to approximate within a factor of o(log k) unless NP ⊆ DTIME(n^{polylog n}), and the best-known approximation ratio is Õ(√n), which is based on the natural flow linear program. A longstanding open question is whether we can achieve a polylogarithmic approximation for the problem; it remains open even if a quasi-polynomial running time is allowed. Our main result is an O(log n log k)-approximation for the Robust s-t Path problem in quasi-polynomial time, settling the open question in the quasi-polynomial-time regime. The algorithm is built on a novel linear program formulation for a decision-tree-type structure, which enables us to overcome the Ω(√n) integrality gap of the natural flow LP. Furthermore, we show that for graphs with bounded treewidth, the quasi-polynomial running time can be improved to a polynomial. We hope our techniques can offer new insights into this problem and other related problems in robust optimization.
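
To make the min-max objective explicit, a brute-force reference over all simple s-t paths (assuming networkx; exponential time, not the paper's algorithm).

# Brute force for Robust s-t Path, only to make the min-max (fairness) objective
# explicit: over all simple s-t paths, minimize the maximum scenario cost.
# Exponential time; this is not the quasi-polynomial-time algorithm. Assumes networkx.
import networkx as nx

def robust_st_path(G, scenarios, s, t):
    """scenarios: list of dicts mapping each edge (u, v) to its cost in that scenario."""
    best_path, best_val = None, float("inf")
    for path in nx.all_simple_paths(G, s, t):
        edges = list(zip(path, path[1:]))
        worst = max(sum(c[e] for e in edges) for c in scenarios)
        if worst < best_val:
            best_path, best_val = path, worst
    return best_path, best_val

G = nx.DiGraph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")])
scenarios = [
    {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 3, ("b", "t"): 0},
    {("s", "a"): 4, ("a", "t"): 1, ("s", "b"): 1, ("b", "t"): 1},
]
print(robust_st_path(G, scenarios, "s", "t"))  # (['s', 'b', 't'], 3)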

Cite as

Shi Li, Chenyang Xu, and Ruilong Zhang. Polylogarithmic Approximations for Robust s-t Path. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 106:1-106:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ICALP.2024.106,
  author =	{Li, Shi and Xu, Chenyang and Zhang, Ruilong},
  title =	{{Polylogarithmic Approximations for Robust s-t Path}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{106:1--106:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.106},
  URN =		{urn:nbn:de:0030-drops-202497},
  doi =		{10.4230/LIPIcs.ICALP.2024.106},
  annote =	{Keywords: Approximation Algorithm, Randomized LP Rounding, Robust s-t Path}
}
Document
Track A: Algorithms, Complexity and Games
Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems Under ETH

Authors: Shuangle Li, Bingkai Lin, and Yuwei Liu


Abstract
In this paper we present a new gap-creating randomized self-reduction for the parameterized Maximum Likelihood Decoding problem over 𝔽_p (k-MLD_p). The reduction takes a k-MLD_p instance with k⋅ n d-dimensional vectors as input, runs in O(d2^{O(k)}n^{1.01}) time for some computable function f, and outputs a (3/2-ε)-Gap-k'-MLD_p instance for any ε > 0, where k' = O(k²log k). Using this reduction, we show that assuming the randomized Exponential Time Hypothesis (ETH), no algorithm can approximate k-MLD_p (and therefore its dual problem k-NCP_p) within a factor of (3/2-ε) in f(k)⋅ n^{o(√{k/log k})} time for any ε > 0. We then use the reduction by Bhattacharyya, Ghoshal, Karthik and Manurangsi (ICALP 2018) to amplify the (3/2-ε) gap to any constant. As a result, we show that assuming ETH, no algorithm can approximate k-NCP_p and k-MDP_p within a factor of γ in f(k)⋅ n^{o(k^{ε_γ})} time for some constant ε_γ > 0. Combining this with the gap-preserving reduction by Bennett, Cheraghchi, Guruswami and Ribeiro (STOC 2023), we also obtain similar lower bounds for k-MDP_p, k-CVP_p and k-SVP_p. These results improve upon the previous f(k)⋅ n^{Ω(poly log k)} lower bounds for these problems under ETH using reductions by Bhattacharyya et al. (J. ACM 2021) and Bennett et al. (STOC 2023).

Cite as

Shuangle Li, Bingkai Lin, and Yuwei Liu. Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems Under ETH. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 107:1-107:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ICALP.2024.107,
  author =	{Li, Shuangle and Lin, Bingkai and Liu, Yuwei},
  title =	{{Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems Under ETH}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{107:1--107:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.107},
  URN =		{urn:nbn:de:0030-drops-202500},
  doi =		{10.4230/LIPIcs.ICALP.2024.107},
  annote =	{Keywords: Nearest Codeword Problem, Hardness of Approximations, Fine-grained Complexity, Parameterized Complexity, Minimum Distance Problem, Shortest Vector Problem}
}
Document
Track A: Algorithms, Complexity and Games
Two-Source and Affine Non-Malleable Extractors for Small Entropy

Authors: Xin Li and Yan Zhong


Abstract
Non-malleable extractors are generalizations and strengthenings of standard randomness extractors that are resilient to adversarial tampering. Such extractors have wide applications in cryptography and have become important cornerstones in recent breakthroughs of explicit constructions of two-source extractors and affine extractors for small entropy. However, explicit constructions of non-malleable extractors appear to be much harder than those of standard extractors. Indeed, in the well-studied models of two-source and affine non-malleable extractors, the previous best constructions only work for entropy rate > 2/3 and 1-γ for some small constant γ > 0, respectively, by Li (FOCS' 23). In this paper, we present explicit constructions of two-source and affine non-malleable extractors that match the state-of-the-art constructions of standard ones for small entropy. Our main results include:
- Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k ≥ log^C n and polynomially small error, matching the parameters of standard extractors by Chattopadhyay and Zuckerman (STOC' 16, Annals of Mathematics' 19) and Li (FOCS' 16).
- Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k = O(log n) and constant error, matching the parameters of standard extractors by Li (FOCS' 23).
Our constructions significantly improve previous results, and the parameters (entropy requirement and error) are the best possible without first improving the constructions of standard extractors. In addition, our improved affine non-malleable extractors give strong lower bounds for a certain kind of read-once linear branching programs, recently introduced by Gryaznov, Pudlák, and Talebanfard (CCC' 22) as a generalization of several well-studied computational models. These bounds match the previously best-known average-case hardness results given by Chattopadhyay and Liao (CCC' 23) and Li (FOCS' 23), where the branching program size lower bounds are close to optimal, but the explicit functions we use here are different. Our results also suggest a possible deeper connection between non-malleable extractors and standard ones.

Cite as

Xin Li and Yan Zhong. Two-Source and Affine Non-Malleable Extractors for Small Entropy. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 108:1-108:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ICALP.2024.108,
  author =	{Li, Xin and Zhong, Yan},
  title =	{{Two-Source and Affine Non-Malleable Extractors for Small Entropy}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{108:1--108:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.108},
  URN =		{urn:nbn:de:0030-drops-202512},
  doi =		{10.4230/LIPIcs.ICALP.2024.108},
  annote =	{Keywords: Randomness Extractors, Non-malleable, Two-source, Affine}
}
Document
Track A: Algorithms, Complexity and Games
Better Decremental and Fully Dynamic Sensitivity Oracles for Subgraph Connectivity

Authors: Yaowei Long and Yunfan Wang


Abstract
We study the sensitivity oracles problem for subgraph connectivity in the decremental and fully dynamic settings. In the fully dynamic setting, we preprocess an n-vertex m-edge undirected graph G in which n_{off} vertices are initially deactivated and the others are activated. Then we receive a single update D ⊆ V(G) of size |D| = d ≤ d_{⋆}, representing vertices whose states will be switched. Finally, we get a sequence of queries, each of which asks whether two given vertices u and v are connected in the activated subgraph. The decremental setting is the special case in which no vertex is deactivated initially, and it is also known as the vertex-failure connectivity oracles problem. We present a better deterministic vertex-failure connectivity oracle with Ô(d_{⋆}m) preprocessing time, Õ(m) space, Õ(d²) update time and O(d) query time, which improves the update time of the previous almost-optimal oracle [Long and Saranurak, 2022] from Ô(d²) to Õ(d²). We also present a better deterministic fully dynamic sensitivity oracle for subgraph connectivity with Ô(min{m(n_{off} + d_{⋆}),n^{ω}}) preprocessing time, Õ(min{m(n_{off} + d_{⋆}),n²}) space, Õ(d²) update time and O(d) query time, which significantly improves the update time of the state of the art [Bingbing Hu et al., 2023] from Õ(d⁴) to Õ(d²). Furthermore, our solution is even almost optimal assuming popular fine-grained complexity conjectures.
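
As a baseline for comparison, the naive no-preprocessing approach answers each query with a fresh search in the activated subgraph (assuming networkx).

# Naive baseline for the sensitivity-oracle problem (no preprocessing): apply
# the update D by switching vertex states and answer each connectivity query
# with a fresh path search. The oracles above replace this per-query search
# with O(d) query time after Õ(d^2) update time. Assumes networkx.
import networkx as nx

def answer_queries(G, deactivated, D, queries):
    off = set(deactivated) ^ set(D)        # switch the states of the vertices in D
    H = G.subgraph(v for v in G if v not in off)  # the activated subgraph
    return [u in H and v in H and nx.has_path(H, u, v) for u, v in queries]

G = nx.path_graph(5)                        # 0 - 1 - 2 - 3 - 4
print(answer_queries(G, deactivated=[], D=[2], queries=[(0, 1), (0, 4)]))
# [True, False]: deactivating vertex 2 disconnects 0 from 4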

Cite as

Yaowei Long and Yunfan Wang. Better Decremental and Fully Dynamic Sensitivity Oracles for Subgraph Connectivity. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 109:1-109:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{long_et_al:LIPIcs.ICALP.2024.109,
  author =	{Long, Yaowei and Wang, Yunfan},
  title =	{{Better Decremental and Fully Dynamic Sensitivity Oracles for Subgraph Connectivity}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{109:1--109:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.109},
  URN =		{urn:nbn:de:0030-drops-202523},
  doi =		{10.4230/LIPIcs.ICALP.2024.109},
  annote =	{Keywords: connectivity, sensitivity}
}
Document
Track A: Algorithms, Complexity and Games
Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity

Authors: Zhenjian Lu and Rahul Santhanam


Abstract
We develop new characterizations of Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the intractability of conditional Kolmogorov complexity 𝖪 and conditional probabilistic time-bounded Kolmogorov complexity pK^t. In our first set of results, we show that NP ⊆ BPP iff pK^t(x ∣ y) can be computed efficiently in the worst case when t is sublinear in |x| + |y|; DistNP ⊆ HeurBPP iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions when t is sublinear in |x| + |y|; and infinitely-often one-way functions fail to exist iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions for t a sufficiently large polynomial in |x| + |y|. These results characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland purely in terms of the tractability of conditional pK^t. Notably, the results imply that Pessiland fails to exist iff the average-case intractability of conditional pK^t is insensitive to the difference between sublinear and polynomially bounded t. As a corollary, while we prove conditional pK^t to be NP-hard for sublinear t, showing NP-hardness for large enough polynomially bounded t would eliminate Pessiland as a possible world of average-case complexity. In our second set of results, we characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the distributional tractability of a natural problem, i.e., approximating the conditional Kolmogorov complexity, that is provably intractable in the worst case. We show that NP ⊆ BPP iff conditional Kolmogorov complexity can be approximated in the semi-worst case; and DistNP ⊆ HeurBPP iff conditional Kolmogorov complexity can be approximated on average over all independent polynomial-time samplable distributions. It follows from a result by Ilango, Ren, and Santhanam (STOC 2022) that infinitely-often one-way functions fail to exist iff conditional Kolmogorov complexity can be approximated on average over all polynomial-time samplable distributions. Together, these results yield the claimed characterizations. Our techniques, combined with previous work, also yield a characterization of auxiliary-input one-way functions and equivalences between different average-case tractability assumptions for conditional Kolmogorov complexity and its variants. Our results suggest that novel average-case tractability assumptions such as tractability in the semi-worst case and over independent polynomial-time samplable distributions might be worthy of further study.

Cite as

Zhenjian Lu and Rahul Santhanam. Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 110:1-110:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.ICALP.2024.110,
  author =	{Lu, Zhenjian and Santhanam, Rahul},
  title =	{{Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{110:1--110:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.110},
  URN =		{urn:nbn:de:0030-drops-202538},
  doi =		{10.4230/LIPIcs.ICALP.2024.110},
  annote =	{Keywords: meta-complexity, Kolmogorov complexity, one-way functions, average-case complexity}
}
Document
Track A: Algorithms, Complexity and Games
Approximation Algorithms for 𝓁_p-Shortest Path and 𝓁_p-Group Steiner Tree

Authors: Yury Makarychev, Max Ovsiankin, and Erasmo Tani


Abstract
We present polylogarithmic approximation algorithms for variants of the Shortest Path, Group Steiner Tree, and Group ATSP problems with vector costs. In these problems, each edge e has a vector cost c_e ∈ ℝ_{≥0}^𝓁. For a feasible solution - a path, subtree, or tour (respectively) - we find the total vector cost of all the edges in the solution and then compute the 𝓁_p-norm of the obtained cost vector (we assume that p ≥ 1 is an integer). Our algorithms for series-parallel graphs run in polynomial time and those for arbitrary graphs run in quasi-polynomial time. To obtain our results, we introduce and use new flow-based Sum-of-Squares relaxations. We also obtain a number of hardness results.
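
For concreteness, the objective of these vector-cost variants can be written as follows (notation follows the abstract; the display itself is an editorial addition):

\[
  \min_{F \in \mathcal{F}} \; \Bigl\| \sum_{e \in F} c_e \Bigr\|_p
  \;=\;
  \min_{F \in \mathcal{F}} \Bigl( \sum_{i=1}^{\ell} \Bigl( \sum_{e \in F} c_e(i) \Bigr)^{p} \Bigr)^{1/p},
\]

where F ranges over the feasible solutions of the respective problem (s-t paths, group Steiner trees, or group ATSP tours), c_e(i) denotes the i-th coordinate of the cost vector c_e, and p ≥ 1 is an integer; p = 1 recovers the classical scalar-cost problems.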

Cite as

Yury Makarychev, Max Ovsiankin, and Erasmo Tani. Approximation Algorithms for 𝓁_p-Shortest Path and 𝓁_p-Group Steiner Tree. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 111:1-111:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{makarychev_et_al:LIPIcs.ICALP.2024.111,
  author =	{Makarychev, Yury and Ovsiankin, Max and Tani, Erasmo},
  title =	{{Approximation Algorithms for 𝓁\underlinep-Shortest Path and 𝓁\underlinep-Group Steiner Tree}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{111:1--111:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.111},
  URN =		{urn:nbn:de:0030-drops-202542},
  doi =		{10.4230/LIPIcs.ICALP.2024.111},
  annote =	{Keywords: Shortest Path, Asymmetric Group Steiner Tree, Sum-of-Squares}
}
Document
Track A: Algorithms, Complexity and Games
Testing Spreading Behavior in Networks with Arbitrary Topologies

Authors: Augusto Modanese and Yuichi Yoshida


Abstract
Given the full topology of a network, how hard is it to test if it is evolving according to a local rule or is far from doing so? Inspired by the works of Goldreich and Ron (J. ACM, 2017) and Nakar and Ron (ICALP, 2021), we initiate the study of property testing in dynamic environments with arbitrary topologies. Our focus is on the simplest non-trivial rule that can be tested, which corresponds to the 1-BP rule of bootstrap percolation and models a simple spreading behavior: Every "infected" node stays infected forever, and each "healthy" node becomes infected if and only if it has at least one infected neighbor. Our results are subdivided into two main groups:
- If we are testing a single time step of evolution, then the query complexity is O(Δ/ε) or Õ(√n/ε) (whichever is smaller), where Δ and n are the maximum degree of a node and the number of vertices in the underlying graph, respectively. We also give lower bounds for both one- and two-sided error testers that match our upper bounds up to Δ = o(√n) and Δ = O(n^{1/3}), respectively. If ε is constant, then the first of these also holds against adaptive testers.
- When testing the environment over T time steps, we have two algorithms that need O(Δ^{T-1}/εT) and Õ(|E|/εT) queries, respectively, where E is the set of edges of the underlying graph.
All of our algorithms are one-sided error, and all of them are also non-adaptive, with the single exception of the more complex Õ(√n/ε)-query tester for the case T = 2.
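
A minimal sketch of one synchronous step of the 1-BP rule being tested (plain Python, illustration only).

# One synchronous step of the 1-BP spreading rule: infected vertices stay
# infected, and a healthy vertex becomes infected exactly when it has at least
# one infected neighbor.
def bp_step(adj, infected):
    """adj: dict vertex -> iterable of neighbors; infected: set of vertices."""
    return infected | {v for v in adj if any(u in infected for u in adj[v])}

# Path 0 - 1 - 2 - 3 with vertex 1 initially infected.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(bp_step(adj, {1}))                 # {0, 1, 2}
print(bp_step(adj, bp_step(adj, {1})))   # {0, 1, 2, 3}

# A tester is given configurations at consecutive time steps and must decide,
# with query access, whether they (nearly) follow this rule.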

Cite as

Augusto Modanese and Yuichi Yoshida. Testing Spreading Behavior in Networks with Arbitrary Topologies. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 112:1-112:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{modanese_et_al:LIPIcs.ICALP.2024.112,
  author =	{Modanese, Augusto and Yoshida, Yuichi},
  title =	{{Testing Spreading Behavior in Networks with Arbitrary Topologies}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{112:1--112:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.112},
  URN =		{urn:nbn:de:0030-drops-202554},
  doi =		{10.4230/LIPIcs.ICALP.2024.112},
  annote =	{Keywords: Property testing, bootstrap percolation, local phenomena, expander graphs}
}
Document
Track A: Algorithms, Complexity and Games
Alphabet Reduction for Reconfiguration Problems

Authors: Naoto Ohsaka


Abstract
We present a reconfiguration analogue of alphabet reduction à la Dinur (J. ACM, 2007) and its applications. Given a binary constraint graph G and its two satisfying assignments ψ^ini and ψ^tar, the Maxmin 2-CSP Reconfiguration problem asks to transform ψ^ini into ψ^tar by repeatedly changing the value of a single vertex so that the minimum fraction of satisfied edges is maximized. We demonstrate a polynomial-time reduction from Maxmin 2-CSP Reconfiguration with arbitrarily large alphabet size W ∈ ℕ to itself with universal alphabet size W₀ ∈ ℕ such that 1) the perfect completeness is preserved, and 2) if any reconfiguration for the former violates ε-fraction of edges, then Ω(ε)-fraction of edges must be unsatisfied during any reconfiguration for the latter. The crux of its construction is the reconfigurability of Hadamard codes, which makes it possible to reconfigure between a pair of codewords while avoiding getting too close to the other codewords. Combining this alphabet reduction with gap amplification due to Ohsaka (SODA 2024), we are able to amplify the 1 vs. 1-ε gap for arbitrarily small ε ∈ (0,1) up to a 1 vs. 1-ε₀ gap for some universal ε₀ ∈ (0,1) without blowing up the alphabet size. In particular, a 1 vs. 1-ε₀ gap version of Maxmin 2-CSP Reconfiguration with alphabet size W₀ is PSPACE-hard, given a probabilistically checkable reconfiguration proof system having any soundness error 1-ε due to Hirahara and Ohsaka (STOC 2024) and Karthik C. S. and Manurangsi (2023). As an immediate corollary, we show that there exists a universal constant ε₀ ∈ (0,1) such that many popular reconfiguration problems are PSPACE-hard to approximate within a factor of 1-ε₀, including those of 3-SAT, Independent Set, Vertex Cover, Clique, Dominating Set, and Set Cover. This cannot be achieved by the gap amplification of Ohsaka alone, which makes the alphabet size gigantic, depending on ε^{-1}.
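
A small sketch of the Hadamard encoding mentioned above; it illustrates the relative distance 1/2 between distinct codewords (the reconfiguration argument itself is developed in the paper).

# Hadamard code: the codeword of x in F_2^k lists the inner products <x, y> mod 2
# over all y in F_2^k. Any two distinct codewords disagree on exactly half of the
# 2^k positions (relative distance 1/2).
from itertools import product

def hadamard_encode(x):
    k = len(x)
    return [sum(xi * yi for xi, yi in zip(x, y)) % 2
            for y in product((0, 1), repeat=k)]

a = hadamard_encode((1, 0, 1))
b = hadamard_encode((0, 1, 1))
print(sum(ai != bi for ai, bi in zip(a, b)) / len(a))  # 0.5: relative distance 1/2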

Cite as

Naoto Ohsaka. Alphabet Reduction for Reconfiguration Problems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 113:1-113:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{ohsaka:LIPIcs.ICALP.2024.113,
  author =	{Ohsaka, Naoto},
  title =	{{Alphabet Reduction for Reconfiguration Problems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{113:1--113:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.113},
  URN =		{urn:nbn:de:0030-drops-202560},
  doi =		{10.4230/LIPIcs.ICALP.2024.113},
  annote =	{Keywords: reconfiguration problems, hardness of approximation, Hadamard codes, alphabet reduction}
}
Document
Track A: Algorithms, Complexity and Games
Delineating Half-Integrality of the Erdős-Pósa Property for Minors: The Case of Surfaces

Authors: Christophe Paul, Evangelos Protopapas, Dimitrios M. Thilikos, and Sebastian Wiederrecht


Abstract
In 1986, Robertson and Seymour proved a generalization of the seminal result of Erdős and Pósa on the duality of packing and covering cycles: a graph has the Erdős-Pósa property for minors if and only if it is planar. In particular, for every non-planar graph H they gave examples showing that the Erdős-Pósa property does not hold for H. Recently, Liu confirmed a conjecture of Thomas and showed that every graph has the half-integral Erdős-Pósa property for minors. Liu’s proof is non-constructive and, to date, with the exception of a small number of examples, no constructive proof is known. In this paper, we initiate the delineation of the half-integrality of the Erdős-Pósa property for minors. We conjecture that for every graph H, there exists a unique (up to a suitable equivalence relation on graph parameters) graph parameter EP_H such that H has the Erdős-Pósa property in a minor-closed graph class 𝒢 if and only if sup{EP_H(G) ∣ G ∈ 𝒢} is finite. We prove this conjecture for the class ℋ of Kuratowski-connected shallow-vortex minors by showing that, for every non-planar H ∈ ℋ, the parameter EP_H(G) is precisely the maximum order of a Robertson-Seymour counterexample to the Erdős-Pósa property of H which can be found as a minor in G. Our results are constructive and imply, for the first time, parameterized algorithms that find either a packing, or a cover, or one of the Robertson-Seymour counterexamples, certifying the existence of a half-integral packing for the graphs in ℋ.

Cite as

Christophe Paul, Evangelos Protopapas, Dimitrios M. Thilikos, and Sebastian Wiederrecht. Delineating Half-Integrality of the Erdős-Pósa Property for Minors: The Case of Surfaces. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 114:1-114:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{paul_et_al:LIPIcs.ICALP.2024.114,
  author =	{Paul, Christophe and Protopapas, Evangelos and Thilikos, Dimitrios M. and Wiederrecht, Sebastian},
  title =	{{Delineating Half-Integrality of the Erd\H{o}s-P\'{o}sa Property for Minors: The Case of Surfaces}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{114:1--114:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.114},
  URN =		{urn:nbn:de:0030-drops-202576},
  doi =		{10.4230/LIPIcs.ICALP.2024.114},
  annote =	{Keywords: Erd\H{o}s-P\'{o}sa property, Erd\H{o}s-P\'{o}sa pair, Graph parameters, Graph minors, Universal obstruction, Surface containment}
}
Document
Track A: Algorithms, Complexity and Games
On the Cut-Query Complexity of Approximating Max-Cut

Authors: Orestis Plevrakis, Seyoon Ragavan, and S. Matthew Weinberg


Abstract
We consider the problem of query-efficient global max-cut on a weighted undirected graph in the value oracle model examined by [Rubinstein et al., 2018]. Graph algorithms in this cut query model and other query models have recently been studied for various other problems such as min-cut, connectivity, bipartiteness, and triangle detection. Max-cut in the cut query model can also be viewed as a natural special case of submodular function maximization: on query S ⊆ V, the oracle returns the total weight of the cut between S and V\S. Our first main technical result is a lower bound stating that a deterministic algorithm achieving a c-approximation for any c > 1/2 requires Ω(n) queries. This uses an extension of the cut dimension to rule out approximation (prior work of [Graur et al., 2020] introducing the cut dimension only rules out exact solutions). Secondly, we provide a randomized algorithm with Õ(n) queries that finds a c-approximation for any c < 1. We achieve this using a query-efficient sparsifier for undirected weighted graphs (prior work of [Rubinstein et al., 2018] holds only for unweighted graphs). To complement these results, for most constants c ∈ (0,1], we nail down the query complexity of achieving a c-approximation, for both deterministic and randomized algorithms (up to logarithmic factors). Analogously to general submodular function maximization in the same model, we observe a phase transition at c = 1/2: we design a deterministic algorithm for global c-approximate max-cut in O(log n) queries for any c < 1/2, and show that any randomized algorithm requires Ω(n/log n) queries to find a c-approximate max-cut for any c > 1/2. Additionally, we show that any deterministic algorithm requires Ω(n²) queries to find an exact max-cut (enough to learn the entire graph).
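
For readers unfamiliar with the cut-query model, the following Python sketch (illustrative only; the toy graph, weights, and helper names are ours, and this is not the paper's algorithm) implements the value oracle and two baseline facts: the total edge weight can be recovered from the n singleton queries, and a uniformly random side assignment is, in expectation, a trivial 1/2-approximation.

import random

def make_cut_oracle(weights):
    # weights: dict mapping frozenset({u, v}) -> weight of edge {u, v}.
    def cut(S):
        S = set(S)
        return sum(w for e, w in weights.items()
                   if len(e & S) == 1)  # exactly one endpoint inside S
    return cut

# Toy weighted graph on vertices 0..4.
w = {frozenset(e): wt for e, wt in
     [((0, 1), 3.0), ((1, 2), 1.0), ((2, 3), 2.5), ((3, 4), 4.0), ((0, 4), 1.5)]}
cut = make_cut_oracle(w)
V = range(5)

# cut({v}) is the weighted degree of v; summing over all v counts each edge
# twice, so n singleton queries recover the total weight.
total = sum(cut({v}) for v in V) / 2

# A uniformly random side assignment cuts each edge with probability 1/2,
# so its expected value is total/2, the trivial 1/2-approximation baseline.
random.seed(0)
samples = [cut({v for v in V if random.random() < 0.5}) for _ in range(2000)]
print(total / 2, sum(samples) / len(samples))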

Cite as

Orestis Plevrakis, Seyoon Ragavan, and S. Matthew Weinberg. On the Cut-Query Complexity of Approximating Max-Cut. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 115:1-115:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{plevrakis_et_al:LIPIcs.ICALP.2024.115,
  author =	{Plevrakis, Orestis and Ragavan, Seyoon and Weinberg, S. Matthew},
  title =	{{On the Cut-Query Complexity of Approximating Max-Cut}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{115:1--115:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.115},
  URN =		{urn:nbn:de:0030-drops-202587},
  doi =		{10.4230/LIPIcs.ICALP.2024.115},
  annote =	{Keywords: query complexity, maximum cut, approximation algorithms, graph sparsification}
}
Document
Track A: Algorithms, Complexity and Games
One-Way Communication Complexity of Partial XOR Functions

Authors: Vladimir V. Podolskii and Dmitrii Sluch


Abstract
A Boolean function F(x,y) for x,y ∈ {0,1}ⁿ is an XOR function if F(x,y) = f(x⊕ y) for some function f on n input bits, where ⊕ is the bit-wise XOR. XOR functions are relevant in communication complexity, in part because they admit Fourier-analytic techniques. For total XOR functions, it is known that the deterministic communication complexity of F is closely related to the parity decision tree complexity of f. Montanaro and Osbourne (2009) observed that the one-way communication complexity D_{cc}^{→}(F) of F is exactly equal to the non-adaptive parity decision tree complexity NADT^{⊕}(f) of f. Hatami et al. (2018) showed that the unrestricted communication complexity of F is polynomially related to the parity decision tree complexity of f. We initiate the study of a similar connection for partial functions. We show that, in the case of one-way communication complexity, whether these measures are equal depends on the number of undefined inputs of f. More precisely, if D_{cc}^{→}(F) = t and f is undefined on at most O((2^{n-t})/(√{n-t})) inputs, then NADT^{⊕}(f) = t. We also provide stronger bounds in the extreme cases of small and large complexity. We show that the restriction on the number of undefined inputs in these results is unavoidable. That is, for a wide range of values of D_{cc}^{→}(F) and NADT^{⊕}(f) (from constant to n-2) we provide partial functions (with more than Ω((2^{n-t})/(√{n-t})) undefined inputs, where t = D_{cc}^{→}(F)) for which D_{cc}^{→}(F) < NADT^{⊕}(f). In particular, we provide a function with an exponential gap between the two measures. Our separation results translate to the case of two-way communication complexity as well, in particular showing that the result of Hatami et al. (2018) cannot be generalized to partial functions. Previous results for total functions rely heavily on Boolean Fourier analysis, and this technique does not translate to partial functions. For the proofs of our results we instead build a linear-algebraic framework. The separation results are proved through a reduction to covering codes.
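
The easy direction of the Montanaro-Osbourne connection, D_{cc}^{→}(F) ≤ NADT^{⊕}(f), has a one-line explanation: parities are linear, so parity(x⊕y, S) = parity(x, S) ⊕ parity(y, S). The Python sketch below (a toy total function and query set of our own choosing, not taken from the paper) simulates a depth-1 non-adaptive parity decision tree by a one-way protocol in which Alice sends one bit per queried parity.

from itertools import product

def parity(z, S):
    return sum(z[i] for i in S) % 2

QUERIES = [frozenset({0, 1})]          # the single parity queried by the tree

def tree_output(answers):
    # The depth-1 non-adaptive tree just outputs the queried parity.
    return answers[0]

def f(z):
    # Toy total function on 4 bits: f(z) = z0 XOR z1.
    return (z[0] + z[1]) % 2

def alice_message(x):
    # Alice sends her parities on the query sets: t bits for a depth-t tree.
    return [parity(x, S) for S in QUERIES]

def bob_output(msg, y):
    # parity(x XOR y, S) = parity(x, S) XOR parity(y, S), so Bob can finish.
    answers = [(a + parity(y, S)) % 2 for a, S in zip(msg, QUERIES)]
    return tree_output(answers)

for x in product((0, 1), repeat=4):
    for y in product((0, 1), repeat=4):
        z = tuple(a ^ b for a, b in zip(x, y))
        assert bob_output(alice_message(x), y) == f(z)
print("a depth-t non-adaptive parity decision tree yields a one-way protocol of cost t")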

Cite as

Vladimir V. Podolskii and Dmitrii Sluch. One-Way Communication Complexity of Partial XOR Functions. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 116:1-116:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{podolskii_et_al:LIPIcs.ICALP.2024.116,
  author =	{Podolskii, Vladimir V. and Sluch, Dmitrii},
  title =	{{One-Way Communication Complexity of Partial XOR Functions}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{116:1--116:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.116},
  URN =		{urn:nbn:de:0030-drops-202591},
  doi =		{10.4230/LIPIcs.ICALP.2024.116},
  annote =	{Keywords: Partial functions, XOR functions, communication complexity, decision trees, covering codes}
}
Document
Track A: Algorithms, Complexity and Games
Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle

Authors: Aaron Potechin and Aaron Zhang


Abstract
We show that the minimum total coefficient size of a Nullstellensatz proof of the pigeonhole principle on n+1 pigeons and n holes is 2^{Θ(n)}. We also investigate the ordering principle and construct an explicit Nullstellensatz proof for the ordering principle on n elements with total coefficient size 2ⁿ - n.
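
For orientation, here is the standard polynomial encoding of the pigeonhole principle PHP^{n+1}_n over variables x_{i,j} (read: pigeon i sits in hole j), paraphrased from the usual proof-complexity setup rather than quoted from the paper. A Nullstellensatz refutation writes 1 as a polynomial combination of these axioms, and the total coefficient size is, roughly, the sum of the magnitudes of the coefficients appearing in that combination.

\[
\begin{aligned}
& \textstyle\sum_{j=1}^{n} x_{i,j} - 1, && i \in [n+1] \quad \text{(every pigeon is placed)},\\
& x_{i,j}\, x_{i',j}, && j \in [n],\ i \neq i' \quad \text{(no two pigeons share a hole)},\\
& x_{i,j}^2 - x_{i,j}, && \text{all } i,j \quad \text{(Boolean axioms)},
\end{aligned}
\qquad \text{refutation: } \sum_{P} q_P \cdot P = 1 .
\]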

Cite as

Aaron Potechin and Aaron Zhang. Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 117:1-117:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{potechin_et_al:LIPIcs.ICALP.2024.117,
  author =	{Potechin, Aaron and Zhang, Aaron},
  title =	{{Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{117:1--117:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.117},
  URN =		{urn:nbn:de:0030-drops-202604},
  doi =		{10.4230/LIPIcs.ICALP.2024.117},
  annote =	{Keywords: Proof complexity, Nullstellensatz, pigeonhole principle, coefficient size}
}
Document
Track A: Algorithms, Complexity and Games
Adaptive Sparsification for Matroid Intersection

Authors: Kent Quanrud


Abstract
We consider the matroid intersection problem in the independence oracle model. Given two matroids over n common elements such that the intersection has rank k, our main technique reduces approximate matroid intersection to logarithmically many primal-dual instances over subsets of size Õ(k). This technique is inspired by recent work by [Assadi, 2024] and requires additional insight into structuring and efficiently approximating the dual LP. This combination of ideas leads to faster approximate maximum cardinality and maximum weight matroid intersection algorithms in the independence oracle model. We obtain the first nearly linear time/query approximation schemes for the regime where k ≤ n^{2/3}.
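
As background on the independence-oracle model, the sketch below shows the oracle interface together with the classical greedy 1/2-approximation baseline; it is not the paper's adaptive-sparsification algorithm, and the bipartite-matching example and names are ours.

def greedy_common_independent(elements, indep1, indep2):
    # Greedily keep an element if the current set stays independent in both
    # matroids; classically a 1/2-approximation to the maximum common
    # independent set, using at most one oracle call per matroid per element.
    S = []
    for e in elements:
        if indep1(S + [e]) and indep2(S + [e]):
            S.append(e)
    return S

# Toy example: bipartite matching as matroid intersection.
# Elements are edges (u, v); matroid 1 allows at most one edge per left
# vertex, matroid 2 at most one edge per right vertex.
edges = [(0, 'a'), (0, 'b'), (1, 'a'), (2, 'b'), (2, 'c')]
indep_left  = lambda T: len({u for u, _ in T}) == len(T)
indep_right = lambda T: len({v for _, v in T}) == len(T)
print(greedy_common_independent(edges, indep_left, indep_right))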

Cite as

Kent Quanrud. Adaptive Sparsification for Matroid Intersection. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 118:1-118:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{quanrud:LIPIcs.ICALP.2024.118,
  author =	{Quanrud, Kent},
  title =	{{Adaptive Sparsification for Matroid Intersection}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{118:1--118:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.118},
  URN =		{urn:nbn:de:0030-drops-202614},
  doi =		{10.4230/LIPIcs.ICALP.2024.118},
  annote =	{Keywords: Matroid intersection, adaptive sparsification, multiplicative-weight updates, primal-dual}
}
Document
Track A: Algorithms, Complexity and Games
Better Sparsifiers for Directed Eulerian Graphs

Authors: Sushant Sachdeva, Anvith Thudi, and Yibin Zhao


Abstract
Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives for the fast computation of fundamental quantities related to random walks, such as stationary distributions, hitting and commute times, and personalized PageRank vectors. While spectral sparsification is well understood for undirected graphs, where for every graph G there exist (1+ε)-sparsifiers with O(nε^{-2}) edges [Batson-Spielman-Srivastava, STOC '09] (which is optimal), the best known constructions of Eulerian sparsifiers require Ω(nε^{-2}log⁴ n) edges and are based on short-cycle decompositions [Chu et al., FOCS '18]. In this paper, we give improved constructions of Eulerian sparsifiers, specifically: 1) We show that for every directed Eulerian graph G→, there exists an Eulerian sparsifier with O(nε^{-2} log² n log²log n + nε^{-4/3}log^{8/3} n) edges. This result is based on combining short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18, SICOMP] and [Parter-Yogev, ICALP '19] with recent progress on the matrix Spencer conjecture [Bansal-Meka-Jiang, STOC '23]. 2) We give an improved analysis of the constructions based on short-cycle decompositions, yielding an m^{1+δ}-time algorithm, for any constant δ > 0, for constructing Eulerian sparsifiers with O(nε^{-2}log³ n) edges.

Cite as

Sushant Sachdeva, Anvith Thudi, and Yibin Zhao. Better Sparsifiers for Directed Eulerian Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 119:1-119:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{sachdeva_et_al:LIPIcs.ICALP.2024.119,
  author =	{Sachdeva, Sushant and Thudi, Anvith and Zhao, Yibin},
  title =	{{Better Sparsifiers for Directed Eulerian Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{119:1--119:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.119},
  URN =		{urn:nbn:de:0030-drops-202628},
  doi =		{10.4230/LIPIcs.ICALP.2024.119},
  annote =	{Keywords: Graph algorithms, Linear algebra and computation, Discrepancy theory}
}
Document
Track A: Algorithms, Complexity and Games
Caching Connections in Matchings

Authors: Yaniv Sadeh and Haim Kaplan


Abstract
Motivated by the desire, prompted by recent advances in Software Defined Networks (SDNs), to utilize a limited number of configurable optical switches, we define an online problem which we call the Caching in Matchings problem. This problem has a natural combinatorial structure and may therefore find additional applications in theory and practice. In the Caching in Matchings problem our cache consists of k matchings of connections between servers that form a bipartite graph. To cache a connection we insert it into one of the k matchings, possibly evicting at most two other connections from this matching. This problem resembles the problem known as Connection Caching [Cohen et al., 2000], where we also cache connections, but the only restriction there is that they form a graph with bounded degree k. Our results show a somewhat surprising qualitative separation between the problems: the competitive ratio of any online algorithm for caching in matchings must depend on the size of the graph. Specifically, we give a deterministic O(nk)-competitive and a randomized O(n log k)-competitive algorithm for caching in matchings, where n is the number of servers and k is the number of matchings. We also show that the competitive ratio of any deterministic algorithm is Ω(max(n/k,k)) and that of any randomized algorithm is Ω(log (n/(k² log k)) ⋅ log k). In particular, the lower bound for randomized algorithms is Ω(log n) regardless of k, and can be as high as Ω(log² n) if k = n^{1/3}, for example. We also show that if we allow the algorithm to use at least 2k-1 matchings, compared to the k used by the optimum, then we match the competitive ratios of connection caching, which are independent of n. Interestingly, we also show that even a single extra matching for the algorithm yields substantially better bounds.
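
To make the cache model concrete, here is a minimal Python sketch of the state an algorithm maintains: k matchings over the servers, where inserting a connection (u, v) into matching i evicts the at most two connections of that matching currently touching u or v. The placement policy (which matching to use, and when) is the actual online problem and is deliberately not modeled; class and method names are ours.

class MatchingCache:
    # Cache = k matchings over servers.  Inserting a connection (u, v) into
    # matching i evicts whatever edges currently use u or v in that matching,
    # which is at most two evictions, as in the problem definition.
    def __init__(self, k):
        self.partner = [dict() for _ in range(k)]  # matching i: vertex -> partner

    def contains(self, u, v):
        return any(m.get(u) == v for m in self.partner)

    def insert(self, u, v, i):
        m = self.partner[i]
        evicted = []
        for x in (u, v):
            if x in m:
                y = m.pop(x)
                m.pop(y, None)
                evicted.append((x, y))
        m[u], m[v] = v, u
        return evicted

cache = MatchingCache(k=2)
print(cache.insert(1, 2, 0))   # [] : matching 0 was empty
print(cache.insert(2, 3, 0))   # [(2, 1)] : the old edge at server 2 is evicted
print(cache.insert(1, 4, 1))   # [] : placed in the second matching instead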

Cite as

Yaniv Sadeh and Haim Kaplan. Caching Connections in Matchings. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 120:1-120:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{sadeh_et_al:LIPIcs.ICALP.2024.120,
  author =	{Sadeh, Yaniv and Kaplan, Haim},
  title =	{{Caching Connections in Matchings}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{120:1--120:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.120},
  URN =		{urn:nbn:de:0030-drops-202639},
  doi =		{10.4230/LIPIcs.ICALP.2024.120},
  annote =	{Keywords: Caching, Matchings, Caching in Matchings, Edge Coloring, Online Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Edge Coloring with Asymptotically Optimal Colors

Authors: Mohammad Saneian and Soheil Behnezhad


Abstract
Given a graph G, an edge-coloring is an assignment of colors to the edges of G such that any two edges sharing an endpoint receive different colors. By Vizing’s celebrated theorem, any graph of maximum degree Δ needs at least Δ and at most (Δ + 1) colors to be properly edge colored. In this paper, we study edge colorings in the streaming setting. The edges arrive one by one in an arbitrary order. The algorithm takes a single pass over the input and must output a solution using much less space than the input size. Since the output of edge coloring is as large as its input, the assigned colors should also be reported in a streaming fashion. The streaming edge coloring problem has been studied in a series of works over the past few years. The main challenge is that the algorithm cannot "remember" all the color assignments that it returns. To ensure the validity of the solution, existing algorithms use many more colors than Vizing’s bound. Namely, in n-vertex graphs, the state-of-the-art algorithm with Õ(n s) space requires O(Δ²/s + Δ) colors. Note, in particular, that for an asymptotically optimal O(Δ) coloring, this algorithm requires Ω(nΔ) space, which is as large as the input. Whether such a coloring can be achieved with sublinear space has been left open. In this paper, we answer this question in the affirmative. We present a randomized algorithm that returns an asymptotically optimal O(Δ) edge coloring using Õ(n √{Δ}) space. More generally, our algorithm returns a proper O(Δ^{1.5}/s + Δ) edge coloring with Õ(n s) space, improving prior algorithms for the whole range of s.
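
For intuition about the streaming constraints (single pass, colors reported on the fly, small space), here is a deliberately naive one-pass scheme of our own devising: it keeps one counter per vertex, is always proper, but may use up to nΔ colors, far more than the O(Δ) achieved in the paper.

from collections import defaultdict

def stream_edge_coloring(edge_stream):
    # One pass, O(n) counters of space, colors reported immediately.
    # Color of edge {u, v} = (m, i) where m = min(u, v) and this is the i-th
    # edge whose smaller endpoint is m.  Two edges sharing a vertex always
    # differ either in the first or the second coordinate, so the coloring
    # is proper, but up to n * Delta distinct colors may be used.
    counter = defaultdict(int)
    for u, v in edge_stream:
        m = min(u, v)
        counter[m] += 1
        yield (u, v), (m, counter[m])

for edge, color in stream_edge_coloring([(1, 2), (2, 3), (1, 3), (3, 4)]):
    print(edge, "->", color)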

Cite as

Mohammad Saneian and Soheil Behnezhad. Streaming Edge Coloring with Asymptotically Optimal Colors. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 121:1-121:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{saneian_et_al:LIPIcs.ICALP.2024.121,
  author =	{Saneian, Mohammad and Behnezhad, Soheil},
  title =	{{Streaming Edge Coloring with Asymptotically Optimal Colors}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{121:1--121:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.121},
  URN =		{urn:nbn:de:0030-drops-202640},
  doi =		{10.4230/LIPIcs.ICALP.2024.121},
  annote =	{Keywords: Streaming, Edge coloring, Adversarial order}
}
Document
Track A: Algorithms, Complexity and Games
An Improved Integrality Gap for Disjoint Cycles in Planar Graphs

Authors: Niklas Schlomberg


Abstract
We present a new greedy rounding algorithm for the Cycle Packing Problem for uncrossable cycle families in planar graphs. This improves the best-known upper bound on the integrality gap of the natural packing LP to a constant slightly less than 3.5. Furthermore, the analysis works for both edge-disjoint and vertex-disjoint packing. The previously best-known constants were 4 for edge-disjoint and 5 for vertex-disjoint cycle packing. This result also immediately yields an improved Erdős-Pósa ratio: for any uncrossable cycle family in a planar graph, the minimum number of vertices (edges) needed to hit all cycles in the family is less than 8.38 times the maximum number of vertex-disjoint (respectively, edge-disjoint) cycles in the family. Uncrossable cycle families of interest to which the result applies include the family of all cycles in a directed or undirected graph and, in undirected graphs, the family of all odd cycles and the family of all cycles containing exactly one edge from a specified set of demand edges. The last example is an equivalent formulation of the fully planar Disjoint Paths Problem; here the Erdős-Pósa ratio translates to a ratio between integral multi-commodity flows and minimum cuts.
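
For reference, the natural packing LP for a cycle family 𝒞 in a graph G = (V, E) and its fractional covering dual are the standard pair below (vertex-disjoint version; the edge-disjoint version replaces vertices by edges). The integrality gap bounded in the paper is the worst-case ratio between the LP optimum and the size of the largest integral packing.

\[
\begin{aligned}
\max\ & \sum_{C \in \mathcal{C}} x_C & \qquad \min\ & \sum_{v \in V} y_v \\
\text{s.t.}\ & \sum_{C \ni v} x_C \le 1 \quad \forall v \in V, & \text{s.t.}\ & \sum_{v \in C} y_v \ge 1 \quad \forall C \in \mathcal{C}, \\
& x_C \ge 0 \quad \forall C \in \mathcal{C}, & & y_v \ge 0 \quad \forall v \in V.
\end{aligned}
\]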

Cite as

Niklas Schlomberg. An Improved Integrality Gap for Disjoint Cycles in Planar Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 122:1-122:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{schlomberg:LIPIcs.ICALP.2024.122,
  author =	{Schlomberg, Niklas},
  title =	{{An Improved Integrality Gap for Disjoint Cycles in Planar Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{122:1--122:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.122},
  URN =		{urn:nbn:de:0030-drops-202651},
  doi =		{10.4230/LIPIcs.ICALP.2024.122},
  annote =	{Keywords: Cycle packing, planar graphs, disjoint paths}
}
Document
Track A: Algorithms, Complexity and Games
Limits of Sequential Local Algorithms on the Random k-XORSAT Problem

Authors: Kingsley Yung


Abstract
The random k-XORSAT problem is a random constraint satisfaction problem over n Boolean variables and m = rn clauses, whose random instances can be expressed as GF(2) linear systems of the form Ax = b, where A is a random m × n matrix with k ones per row and b is a random vector. It is known that there exist two distinct thresholds r_{core}(k) < r_{sat}(k) such that, as n → ∞, for r < r_{sat}(k) the random instance has solutions with high probability, while for r_{core}(k) < r < r_{sat}(k) the solution space shatters into an exponential number of clusters. Sequential local algorithms are a natural class of algorithms which assign values to variables one by one, iteratively. In each iteration, the algorithm runs some heuristic, called a local rule, to decide the value assigned, based on the local neighborhood of the selected variable under the factor graph representation of the instance. We prove that for any r > r_{core}(k), sequential local algorithms with certain local rules fail to solve random k-XORSAT with high probability. These include (1) the algorithm using Unit Clause Propagation as its local rule, for k ≥ 9, and (2) the algorithms using any local rule that can calculate the exact marginal probabilities of variables in instances whose factor graphs are trees, for k ≥ 13. The well-known Belief Propagation and Survey Propagation are included in (2). Meanwhile, the best known linear-time algorithm succeeds with high probability for r < r_{core}(k). Our results support the intuition that r_{core}(k) is the sharp threshold for the existence of a linear-time algorithm for random k-XORSAT. Our approach is to apply the Overlap Gap Property (OGP) framework to the sub-instance induced by the core of the instance, instead of to the whole instance. By doing so, sequential local algorithms can be ruled out at densities as low as r_{core}(k), since the sub-instance exhibits the OGP at much lower clause density than the whole instance.
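
To ground the linear-algebra view: a random k-XORSAT instance is a system Ax = b over GF(2), so global Gaussian elimination decides satisfiability in polynomial time; the paper's point is that sequential local algorithms fail well below the satisfiability threshold. The generator and solver below are our own illustrative Python code, not anything from the paper.

import random

def random_kxorsat(n, m, k, seed=0):
    # Each clause is the XOR of k distinct variables, set equal to a random bit.
    rng = random.Random(seed)
    supports = [rng.sample(range(n), k) for _ in range(m)]
    rhs = [rng.randrange(2) for _ in range(m)]
    return supports, rhs

def gf2_solve(supports, rhs, n):
    # Gaussian elimination over GF(2); each equation is stored as a bitmask.
    rows = [(sum(1 << j for j in s), b) for s, b in zip(supports, rhs)]
    pivot_of = {}                        # pivot column -> index of its row
    for i in range(len(rows)):
        r, b = rows[i]
        for col, p in pivot_of.items():  # reduce against existing pivot rows
            if (r >> col) & 1:
                r ^= rows[p][0]
                b ^= rows[p][1]
        if r == 0:
            if b == 1:
                return None              # derived 0 = 1: unsatisfiable
            continue
        rows[i] = (r, b)
        pivot_of[(r & -r).bit_length() - 1] = i   # lowest set bit is the pivot
    x = [0] * n                          # free variables default to 0
    for col in sorted(pivot_of, reverse=True):    # back-substitution
        r, b = rows[pivot_of[col]]
        x[col] = b ^ (sum(x[j] for j in range(col + 1, n) if (r >> j) & 1) % 2)
    return x

n, k = 30, 3
supports, rhs = random_kxorsat(n, m=int(0.7 * n), k=k)
x = gf2_solve(supports, rhs, n)
assert x is None or all(sum(x[j] for j in s) % 2 == b for s, b in zip(supports, rhs))
print("satisfiable" if x is not None else "unsatisfiable")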

Cite as

Kingsley Yung. Limits of Sequential Local Algorithms on the Random k-XORSAT Problem. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 123:1-123:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{yung:LIPIcs.ICALP.2024.123,
  author =	{Yung, Kingsley},
  title =	{{Limits of Sequential Local Algorithms on the Random k-XORSAT Problem}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{123:1--123:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.123},
  URN =		{urn:nbn:de:0030-drops-202666},
  doi =		{10.4230/LIPIcs.ICALP.2024.123},
  annote =	{Keywords: Random k-XORSAT, Sequential local algorithms, Average-case complexity, Phase transition, Overlap gap property}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Lookahead Games and Efficient Determinisation of History-Deterministic Büchi Automata

Authors: Rohan Acharya, Marcin Jurdziński, and Aditya Prakash


Abstract
Our main technical contribution is a polynomial-time determinisation procedure for history-deterministic Büchi automata, which settles an open question of Kuperberg and Skrzypczak (2015). A key conceptual contribution is the lookahead game, a variant of Bagnol and Kuperberg’s token game in which Adam is given a fixed lookahead. We prove that the lookahead game is equivalent to the 1-token game. This allows us to show that the 1-token game characterises history-determinism for semantically-deterministic Büchi automata, which paves the way to our polynomial-time determinisation procedure.

Cite as

Rohan Acharya, Marcin Jurdziński, and Aditya Prakash. Lookahead Games and Efficient Determinisation of History-Deterministic Büchi Automata. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 124:1-124:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{acharya_et_al:LIPIcs.ICALP.2024.124,
  author =	{Acharya, Rohan and Jurdzi\'{n}ski, Marcin and Prakash, Aditya},
  title =	{{Lookahead Games and Efficient Determinisation of History-Deterministic B\"{u}chi Automata}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{124:1--124:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.124},
  URN =		{urn:nbn:de:0030-drops-202672},
  doi =		{10.4230/LIPIcs.ICALP.2024.124},
  annote =	{Keywords: History determinism, Good-for-games, Automata over infinite words, Games}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Edit Distance of Finite State Transducers

Authors: C. Aiswarya, Amaldev Manuel, and Saina Sunny


Abstract
We lift metrics over words to metrics over word-to-word transductions by defining the distance between two transductions as the supremum of the distances of their respective outputs over all inputs. This allows us to compare transducers beyond equivalence. Two transducers are close (resp. k-close) with respect to a metric if their distance is finite (resp. at most k). Over integer-valued metrics, computing the distance between transducers is equivalent to deciding the closeness and k-closeness problems. For common integer-valued edit distances, such as the Hamming, transposition, conjugacy, and Levenshtein family of distances, we show that the closeness and the k-closeness problems are decidable for functional transducers. Hence, the distance with respect to these metrics is also computable. Finally, we relate the notion of distance between functions to the notions of the diameter of a relation and the index of a relation in another. We show that computing the edit distance between functional transducers is equivalent to computing the diameter of a rational relation, and that both are specific instances of the index problem of rational relations.
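
To illustrate the lifted metric (not the decision procedure): two word-to-word functions are compared by taking, over inputs, the supremum of the edit distance of their outputs. In the Python sketch below the two toy "transducers" are plain functions of our own choosing, and the supremum is only sampled on a few inputs, whereas the paper shows how to actually decide closeness for functional transducers.

def levenshtein(a, b):
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Two toy word-to-word functions standing in for functional transducers:
# T1 copies its input, T2 appends a final marker '#'.
T1 = lambda w: w
T2 = lambda w: w + "#"

# Every input contributes output distance exactly 1, so T1 and T2 are 1-close.
samples = ["", "a", "ab", "abba", "babab"]
print(max(levenshtein(T1(w), T2(w)) for w in samples))   # 1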

Cite as

C. Aiswarya, Amaldev Manuel, and Saina Sunny. Edit Distance of Finite State Transducers. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 125:1-125:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{aiswarya_et_al:LIPIcs.ICALP.2024.125,
  author =	{Aiswarya, C. and Manuel, Amaldev and Sunny, Saina},
  title =	{{Edit Distance of Finite State Transducers}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{125:1--125:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.125},
  URN =		{urn:nbn:de:0030-drops-202682},
  doi =		{10.4230/LIPIcs.ICALP.2024.125},
  annote =	{Keywords: transducers, edit distance, conjugacy}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Separability in Büchi VASS and Singly Non-Linear Systems of Inequalities

Authors: Pascal Baumann, Eren Keskin, Roland Meyer, and Georg Zetzsche


Abstract
The ω-regular separability problem for Büchi VASS coverability languages has recently been shown to be decidable, but with an EXPSPACE lower bound and a non-primitive recursive upper bound; the exact complexity remained open. We close this gap and show that the problem is EXPSPACE-complete. A careful analysis of our complexity bounds additionally yields a PSPACE procedure in the case of fixed dimension ≥ 1, which matches a pre-established PSPACE lower bound for one-dimensional Büchi VASS. Our algorithm is a non-deterministic search for a witness whose size, as we show, can be suitably bounded. Part of the procedure is to decide the existence of runs in VASS that satisfy certain non-linear properties. Therefore, a key technical ingredient is to analyze a class of systems of inequalities in which one variable may occur in non-linear (polynomial) expressions. These so-called singly non-linear systems (SNLS) take the form A(x)⋅ y ≥ b(x), where A(x) is a matrix and b(x) a vector whose entries are polynomials in x, and y ranges over vectors in the rationals. Our main contribution on SNLS is an exponential upper bound on the size of rational solutions to singly non-linear systems. The proof consists of three steps. First, we give a tailor-made quantifier elimination to characterize all real solutions for x. Second, using the root separation theorem about the distance of real roots of polynomials, we show that if a rational solution exists, then there is one with at most polynomially many bits. Third, we insert the solution for x into the SNLS, making it linear and allowing us to invoke standard solution bounds from convex geometry. Finally, we combine the results about SNLS with several techniques from the area of VASS to devise an EXPSPACE decision procedure for ω-regular separability of Büchi VASS.
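
As a concrete (invented) example of the shape of a singly non-linear system: only the distinguished variable x may appear non-linearly, while y = (y_1, y_2) ranges over rational vectors.

\[
A(x)\cdot y \;\ge\; b(x), \qquad
A(x) = \begin{pmatrix} x^2 & 1 \\ -1 & x \end{pmatrix}, \qquad
b(x) = \begin{pmatrix} x + 1 \\ 0 \end{pmatrix}.
\]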

Cite as

Pascal Baumann, Eren Keskin, Roland Meyer, and Georg Zetzsche. Separability in Büchi VASS and Singly Non-Linear Systems of Inequalities. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 126:1-126:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{baumann_et_al:LIPIcs.ICALP.2024.126,
  author =	{Baumann, Pascal and Keskin, Eren and Meyer, Roland and Zetzsche, Georg},
  title =	{{Separability in B\"{u}chi VASS and Singly Non-Linear Systems of Inequalities}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{126:1--126:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.126},
  URN =		{urn:nbn:de:0030-drops-202695},
  doi =		{10.4230/LIPIcs.ICALP.2024.126},
  annote =	{Keywords: Vector addition systems, infinite words, separability, inequalities, quantifier elimination, rational, polynomials}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Decidability of Graph Neural Networks via Logical Characterizations

Authors: Michael Benedikt, Chia-Hsuan Lu, Boris Motik, and Tony Tan


Abstract
We present results concerning the expressiveness and decidability of a popular graph learning formalism, graph neural networks (GNNs), exploiting connections with logic. We use a family of recently-discovered decidable logics involving "Presburger quantifiers". We show how to use these logics to measure the expressiveness of classes of GNNs, in some cases getting exact correspondences between the expressiveness of logics and GNNs. We also employ the logics, and the techniques used to analyze them, to obtain decision procedures for verification problems over GNNs. We complement this with undecidability results for static analysis problems involving the logics, as well as for GNN verification problems.

Cite as

Michael Benedikt, Chia-Hsuan Lu, Boris Motik, and Tony Tan. Decidability of Graph Neural Networks via Logical Characterizations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 127:1-127:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{benedikt_et_al:LIPIcs.ICALP.2024.127,
  author =	{Benedikt, Michael and Lu, Chia-Hsuan and Motik, Boris and Tan, Tony},
  title =	{{Decidability of Graph Neural Networks via Logical Characterizations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{127:1--127:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.127},
  URN =		{urn:nbn:de:0030-drops-202708},
  doi =		{10.4230/LIPIcs.ICALP.2024.127},
  annote =	{Keywords: Logic, Graph Neural Networks}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Automata-Theoretic Characterisations of Branching-Time Temporal Logics

Authors: Massimo Benerecetti, Laura Bozzelli, Fabio Mogavero, and Adriano Peron


Abstract
Characterisation theorems serve as important tools in model theory and can be used to assess and compare the expressive power of temporal languages used for the specification and verification of properties in formal methods. While complete connections have been established for the linear-time case between temporal logics, predicate logics, algebraic models, and automata, the situation in the branching-time case remains considerably more fragmented. In this work, we provide an automata-theoretic characterisation of some important branching-time temporal logics, namely CTL* and ECTL* interpreted on arbitrary-branching trees, by identifying two variants of Hesitant Tree Automata that are proved equivalent to those logics. The characterisations also apply to Monadic Path Logic and to the bisimulation-invariant fragment of Monadic Chain Logic, again interpreted over trees. These results widen the characterisation landscape of the branching-time case and solve a forty-year-old open question.

Cite as

Massimo Benerecetti, Laura Bozzelli, Fabio Mogavero, and Adriano Peron. Automata-Theoretic Characterisations of Branching-Time Temporal Logics. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 128:1-128:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{benerecetti_et_al:LIPIcs.ICALP.2024.128,
  author =	{Benerecetti, Massimo and Bozzelli, Laura and Mogavero, Fabio and Peron, Adriano},
  title =	{{Automata-Theoretic Characterisations of Branching-Time Temporal Logics}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{128:1--128:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.128},
  URN =		{urn:nbn:de:0030-drops-202716},
  doi =		{10.4230/LIPIcs.ICALP.2024.128},
  annote =	{Keywords: Branching-Time Temporal Logics, Monadic Second-Order Logics, Tree Automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Complexity of Computing in Continuous Time: Space Complexity Is Precision

Authors: Manon Blanc and Olivier Bournez


Abstract
Models of computation over the integers are equivalent from a computability and complexity theory point of view by the (effective) Church-Turing thesis. For discrete-time models over the reals, no such unification is possible. The situation is unclear but simpler for continuous-time models, as there is a unifying mathematical model, provided by ordinary differential equations (ODEs). Each model corresponds to a particular class of ODEs. For example, the General Purpose Analog Computer model of Claude Shannon, introduced as a mathematical model of analogue machines (Differential Analyzers), is known to correspond to polynomial ODEs. However, the question of a robust complexity theory for such models and of its relations to classical (discrete) computation theory is an old problem. There has recently been significant progress: it has been proved that (classical) time complexity corresponds to the length of the involved curves, i.e. to the length of the solutions of the corresponding polynomial ODEs. The question of whether there is a simple and robust way to measure space complexity remained open. We argue that space complexity corresponds to precision, and conversely. Concretely, we propose and prove an algebraic characterisation of FPSPACE using continuous ODEs. Recent papers proposed algebraic characterisations of polynomial-time and polynomial-space complexity classes over the reals, but with discrete time: those algebras rely on discrete ODE schemes. Here, we use classical (continuous) ODEs, with the classical notion of derivative and hence with the more natural continuous-time setting associated with ODEs. We characterise the polynomial-space functions both over the integers and over the reals. This is done by proving two inclusions. The first is obtained using an original polynomial-space method for solving ODEs. For the other, we prove that Turing machines, with a proper representation of real numbers, can be simulated by continuous ODEs and not just discrete ODEs. A major consequence is that the associated space complexity is provably related to the numerical stability of the involved schemes and to the required precision. We obtain that a problem can be solved in polynomial space if and only if it can be simulated by some numerically stable ODE using polynomial precision.
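
The "space is precision" slogan can be given a purely numerical illustration (this is our own toy experiment, not the paper's construction): simulating even the simple polynomial ODE y' = y with forward Euler, the achievable output accuracy is governed by the step size, i.e. by how much precision the simulation carries.

import math

def euler(f, y0, t_end, steps):
    # Forward Euler for y' = f(y): the finer the step, the more precision
    # (and intermediate data) the simulation needs to carry.
    y, h = y0, t_end / steps
    for _ in range(steps):
        y += h * f(y)
    return y

exact = math.e                     # y' = y, y(0) = 1, evaluated at t = 1
for steps in (10, 100, 1000, 10000):
    print(steps, abs(euler(lambda y: y, 1.0, 1.0, steps) - exact))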

Cite as

Manon Blanc and Olivier Bournez. The Complexity of Computing in Continuous Time: Space Complexity Is Precision. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 129:1-129:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{blanc_et_al:LIPIcs.ICALP.2024.129,
  author =	{Blanc, Manon and Bournez, Olivier},
  title =	{{The Complexity of Computing in Continuous Time: Space Complexity Is Precision}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{129:1--129:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.129},
  URN =		{urn:nbn:de:0030-drops-202722},
  doi =		{10.4230/LIPIcs.ICALP.2024.129},
  annote =	{Keywords: Models of computation, Ordinary differential equations, Real computations, Analog computations, Complexity theory, Implicit complexity, Recursion scheme}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Function Spaces for Orbit-Finite Sets

Authors: Mikołaj Bojańczyk, Lê Thành Dũng (Tito) Nguyễn, and Rafał Stefański


Abstract
Orbit-finite sets are a generalisation of finite sets, and as such support many operations allowed for finite sets, such as pairing, quotienting, or taking subsets. However, they do not support function spaces, i.e. if X and Y are orbit-finite sets, then the space of finitely supported functions from X to Y is not orbit-finite. We propose a solution to this problem inspired by linear logic.

Cite as

Mikołaj Bojańczyk, Lê Thành Dũng (Tito) Nguyễn, and Rafał Stefański. Function Spaces for Orbit-Finite Sets. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 130:1-130:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2024.130,
  author =	{Boja\'{n}czyk, Miko{\l}aj and Nguy\~{ê}n, L\^{e} Th\`{a}nh D\~{u}ng (Tito) and Stefa\'{n}ski, Rafa{\l}},
  title =	{{Function Spaces for Orbit-Finite Sets}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{130:1--130:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.130},
  URN =		{urn:nbn:de:0030-drops-202730},
  doi =		{10.4230/LIPIcs.ICALP.2024.130},
  annote =	{Keywords: Orbit-finite sets, automata, linear types, game semantics}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Structure of Trees in the Pushdown Hierarchy

Authors: Arnaud Carayol and Lucien Charamond


Abstract
In this article, we investigate the structure of the trees in the pushdown hierarchy, a hierarchy of infinite graphs having a decidable MSO-theory. We show that a complete binary tree in the pushdown hierarchy must contain at least two distinct subtrees that are isomorphic. We extend this property to any tree with no leaves and with chains of unary vertices of bounded length. We provide two applications of this result. A first application, in formal language theory, gives a simple argument showing that some languages are not deterministic higher-order indexed languages. A second application, in number theory, shows that the real numbers defined by deterministic higher-order pushdown automata are either rational or transcendental.

Cite as

Arnaud Carayol and Lucien Charamond. The Structure of Trees in the Pushdown Hierarchy. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 131:1-131:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{carayol_et_al:LIPIcs.ICALP.2024.131,
  author =	{Carayol, Arnaud and Charamond, Lucien},
  title =	{{The Structure of Trees in the Pushdown Hierarchy}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{131:1--131:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.131},
  URN =		{urn:nbn:de:0030-drops-202749},
  doi =		{10.4230/LIPIcs.ICALP.2024.131},
  annote =	{Keywords: Pushdown hierarchy, Monadic second-order logic, Automatic numbers}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Integer Linear-Exponential Programming in NP by Quantifier Elimination

Authors: Dmitry Chistikov, Alessio Mansutti, and Mikhail R. Starchak


Abstract
This paper provides an NP procedure that decides whether a linear-exponential system of constraints has an integer solution. Linear-exponential systems extend standard integer linear programs with exponential terms 2^x and remainder terms (x mod 2^y). Our result implies that the existential theory of the structure (ℕ,0,1,+,2^(⋅),V_2(⋅,⋅), ≤) has an NP-complete satisfiability problem, thus improving upon a recent EXPSPACE upper bound. This theory extends the existential fragment of Presburger arithmetic with the exponentiation function x ↦ 2^x and the binary predicate V_2(x,y) that is true whenever y ≥ 1 is the largest power of 2 dividing x. Our procedure for solving linear-exponential systems uses the method of quantifier elimination. As a by-product, we modify the classical Gaussian variable elimination into a non-deterministic polynomial-time procedure for integer linear programming (or: existential Presburger arithmetic).
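
To fix ideas about the constraint language, here is a toy linear-exponential system (our own example, checked by brute force over a small box; the paper of course gives an NP decision procedure via quantifier elimination, not enumeration).

from itertools import product

# A toy linear-exponential system over unknowns (x, y, z):
#     2**x + 3*y   <= 40
#     z % 2**y     ==  5
#     x + y + z    >= 10
def satisfies(x, y, z):
    return (2**x + 3*y <= 40) and (z % 2**y == 5) and (x + y + z >= 10)

solutions = [(x, y, z) for x, y, z in product(range(8), repeat=3) if satisfies(x, y, z)]
print(solutions[:5])   # e.g. (2, 3, 5) is a solution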

Cite as

Dmitry Chistikov, Alessio Mansutti, and Mikhail R. Starchak. Integer Linear-Exponential Programming in NP by Quantifier Elimination. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 132:1-132:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chistikov_et_al:LIPIcs.ICALP.2024.132,
  author =	{Chistikov, Dmitry and Mansutti, Alessio and Starchak, Mikhail R.},
  title =	{{Integer Linear-Exponential Programming in NP by Quantifier Elimination}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{132:1--132:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.132},
  URN =		{urn:nbn:de:0030-drops-202758},
  doi =		{10.4230/LIPIcs.ICALP.2024.132},
  annote =	{Keywords: decision procedures, integer programming, quantifier elimination}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Finite-Memory Strategies for Almost-Sure Energy-MeanPayoff Objectives in MDPs

Authors: Mohan Dantam and Richard Mayr


Abstract
We consider finite-state Markov decision processes with the combined Energy-MeanPayoff objective. The controller tries to avoid running out of energy while simultaneously attaining a strictly positive mean payoff in a second dimension. We show that finite memory suffices for almost surely winning strategies for the Energy-MeanPayoff objective. This is in contrast to the closely related Energy-Parity objective, where almost surely winning strategies require infinite memory in general. We show that exponential memory is sufficient (even for deterministic strategies) and necessary (even for randomized strategies) for almost surely winning Energy-MeanPayoff. The upper bound holds even if the strictly positive mean payoff part of the objective is generalized to multidimensional strictly positive mean payoff. Finally, it is decidable in pseudo-polynomial time whether an almost surely winning strategy exists.

Cite as

Mohan Dantam and Richard Mayr. Finite-Memory Strategies for Almost-Sure Energy-MeanPayoff Objectives in MDPs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 133:1-133:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{dantam_et_al:LIPIcs.ICALP.2024.133,
  author =	{Dantam, Mohan and Mayr, Richard},
  title =	{{Finite-Memory Strategies for Almost-Sure Energy-MeanPayoff Objectives in MDPs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{133:1--133:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.133},
  URN =		{urn:nbn:de:0030-drops-202762},
  doi =		{10.4230/LIPIcs.ICALP.2024.133},
  annote =	{Keywords: Markov decision processes, energy, mean payoff, parity, strategy complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Functional Closure Properties of Finite ℕ-Weighted Automata

Authors: Julian Dörfler and Christian Ikenmeyer


Abstract
We determine all functional closure properties of finite ℕ-weighted automata, even all multivariate ones, and in particular all multivariate polynomials. We also determine all univariate closure properties in the promise setting, and all multivariate closure properties under certain assumptions on the promise; in particular, we determine all multivariate closure properties where the output vector lies on a monotone algebraic graph variety.

Cite as

Julian Dörfler and Christian Ikenmeyer. Functional Closure Properties of Finite ℕ-Weighted Automata. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 134:1-134:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{dorfler_et_al:LIPIcs.ICALP.2024.134,
  author =	{D\"{o}rfler, Julian and Ikenmeyer, Christian},
  title =	{{Functional Closure Properties of Finite \mathbb{N}-Weighted Automata}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{134:1--134:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.134},
  URN =		{urn:nbn:de:0030-drops-202777},
  doi =		{10.4230/LIPIcs.ICALP.2024.134},
  annote =	{Keywords: Finite automata, weighted automata, counting, closure properties, algebraic varieties}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
A Finite Presentation of Graphs of Treewidth at Most Three

Authors: Amina Doumane, Samuel Humeau, and Damien Pous


Abstract
We provide a finite equational presentation of graphs of treewidth at most three, solving an instance of an open problem by Courcelle and Engelfriet. We use a syntax generalising series-parallel expressions, denoting graphs with a small interface. We introduce appropriate notions of connectivity for such graphs (components, cutvertices, separation pairs). We use those concepts to analyse the structure of graphs of treewidth at most three, showing how they can be decomposed recursively, first canonically into connected parallel components, and then non-deterministically. The main difficulty consists in showing that all non-deterministic choices can be related using only finitely many equational axioms.

Cite as

Amina Doumane, Samuel Humeau, and Damien Pous. A Finite Presentation of Graphs of Treewidth at Most Three. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 135:1-135:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{doumane_et_al:LIPIcs.ICALP.2024.135,
  author =	{Doumane, Amina and Humeau, Samuel and Pous, Damien},
  title =	{{A Finite Presentation of Graphs of Treewidth at Most Three}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{135:1--135:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.135},
  URN =		{urn:nbn:de:0030-drops-202787},
  doi =		{10.4230/LIPIcs.ICALP.2024.135},
  annote =	{Keywords: Graphs, treewidth, connectedness, axiomatisation, series-parallel expressions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Improved Algorithm for Reachability in d-VASS

Authors: Yuxi Fu, Qizhe Yang, and Yangluo Zheng


Abstract
An 𝖥_{d} upper bound for the reachability problem in vector addition systems with states (VASS) in fixed dimension is given, where 𝖥_d is the d-th level of the Grzegorczyk hierarchy of complexity classes. The new algorithm combines the idea of the linear path scheme characterization of reachability in 2-dimensional VASS with the general decomposition algorithm by Mayr, Kosaraju, and Lambert. The result improves the 𝖥_{d + 4} upper bound due to Leroux and Schmitz (LICS 2019).
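For readers less familiar with the model, the following minimal sketch (ours, not part of the paper; the state name and vectors are purely illustrative) spells out the VASS step relation on which the reachability problem is posed: a configuration is a control state together with a vector of non-negative counters, and a transition adds a fixed integer vector provided no counter becomes negative.

from typing import Dict, List, Tuple

Config = Tuple[str, Tuple[int, ...]]                          # (control state, counter values)
Transitions = Dict[str, List[Tuple[Tuple[int, ...], str]]]    # state -> [(effect vector, target state)]

def successors(cfg: Config, delta: Transitions) -> List[Config]:
    """One-step successors of a configuration; all counters must stay non-negative."""
    state, counters = cfg
    out = []
    for effect, target in delta.get(state, []):
        new = tuple(c + e for c, e in zip(counters, effect))
        if all(v >= 0 for v in new):
            out.append((target, new))
    return out

# A 2-VASS loop moving one token from the first counter to the second:
delta = {"q": [((-1, +1), "q")]}
print(successors(("q", (2, 0)), delta))   # [('q', (1, 1))]
print(successors(("q", (0, 2)), delta))   # []  (the step would make a counter negative)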

Cite as

Yuxi Fu, Qizhe Yang, and Yangluo Zheng. Improved Algorithm for Reachability in d-VASS. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 136:1-136:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{fu_et_al:LIPIcs.ICALP.2024.136,
  author =	{Fu, Yuxi and Yang, Qizhe and Zheng, Yangluo},
  title =	{{Improved Algorithm for Reachability in d-VASS}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{136:1--136:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.136},
  URN =		{urn:nbn:de:0030-drops-202799},
  doi =		{10.4230/LIPIcs.ICALP.2024.136},
  annote =	{Keywords: Petri net, vector addition system, reachability}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Classes of Bounded Tree Rank, Their Interpretations, and Efficient Sparsification

Authors: Jakub Gajarský and Rose McCarty


Abstract
Graph classes of bounded tree rank were introduced recently in the context of the model checking problem for first-order logic of graphs. These graph classes are a common generalization of graph classes of bounded degree and bounded treedepth, and they are a special case of graph classes of bounded expansion. We introduce a notion of decomposition for these classes and show that these decompositions can be efficiently computed. Also, a natural extension of our decomposition leads to a new characterization and decomposition for graph classes of bounded expansion (and an efficient algorithm computing this decomposition). We then focus on interpretations of graph classes of bounded tree rank. We give a characterization of graph classes interpretable in graph classes of tree rank 2. Importantly, our characterization leads to an efficient sparsification procedure: For any graph class 𝒞 interpretable in a graph class of tree rank at most 2, there is a polynomial-time algorithm that, given any G ∈ 𝒞, computes a (sparse) graph H from a fixed graph class of tree rank at most 2 such that G = I(H) for a fixed interpretation I. To the best of our knowledge, this is the first efficient "interpretation reversal" result that generalizes the result of Gajarský et al. [LICS 2016], who showed an analogous result for graph classes interpretable in classes of graphs of bounded degree.

Cite as

Jakub Gajarský and Rose McCarty. On Classes of Bounded Tree Rank, Their Interpretations, and Efficient Sparsification. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 137:1-137:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gajarsky_et_al:LIPIcs.ICALP.2024.137,
  author =	{Gajarsk\'{y}, Jakub and McCarty, Rose},
  title =	{{On Classes of Bounded Tree Rank, Their Interpretations, and Efficient Sparsification}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{137:1--137:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.137},
  URN =		{urn:nbn:de:0030-drops-202802},
  doi =		{10.4230/LIPIcs.ICALP.2024.137},
  annote =	{Keywords: First-order model checking, structural graph theory, structural sparsity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Deciding Linear Height and Linear Size-To-Height Increase of Macro Tree Transducers

Authors: Paul Gallot, Sebastian Maneth, Keisuke Nakano, and Charles Peyrat


Abstract
We present a novel normal form for (total deterministic) macro tree transducers (mtts), called "depth proper normal form". If an mtt is in this normal form, then it is guaranteed that each parameter of each state appears at arbitrary depths in the output trees of that state. Intuitively, if some parameter only appears at certain bounded depths in the output trees of a state, then this parameter can be eliminated by in-lining the corresponding output paths at each call site of that state. We use regular look-ahead in order to determine which of the paths should be in-lined. As a consequence of changing the look-ahead, a parameter that was previously appearing at unbounded depths may be appearing at bounded depths for some new look-ahead; for this reason, our construction has to be iterated to obtain an mtt in depth proper normal form. Using the normal form, we can decide whether the translation of an mtt has linear height increase or has linear size-to-height increase.

Cite as

Paul Gallot, Sebastian Maneth, Keisuke Nakano, and Charles Peyrat. Deciding Linear Height and Linear Size-To-Height Increase of Macro Tree Transducers. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 138:1-138:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gallot_et_al:LIPIcs.ICALP.2024.138,
  author =	{Gallot, Paul and Maneth, Sebastian and Nakano, Keisuke and Peyrat, Charles},
  title =	{{Deciding Linear Height and Linear Size-To-Height Increase of Macro Tree Transducers}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{138:1--138:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.138},
  URN =		{urn:nbn:de:0030-drops-202818},
  doi =		{10.4230/LIPIcs.ICALP.2024.138},
  annote =	{Keywords: automata, formal language theory, macro tree transducer, normal form}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations

Authors: Raphael Douglas Giles, Vincent Jackson, and Christine Rizkallah


Abstract
We introduce a powerful termination algorithm for structurally recursive functions that improves on the core ideas behind lexicographic termination algorithms for functional programs. The algorithm generates linear-lexicographic combinations of primitive measure functions measuring the recursive structure of terms. We introduce a measure language that enables the simplification and comparison of measures, and we prove meta-theoretic properties of our measure language. Moreover, we demonstrate our algorithm on an untyped first-order functional language and prove its soundness and that it runs in polynomial time. We also provide a Haskell implementation. As part of this work, we also show how to solve the maximisation of negative vector-components as a linear program.
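For intuition only, here is a simplification of ours (not the paper's T-Rex algorithm): it checks a plain lexicographic descent on tuples of hypothetical primitive measures, whereas the paper's algorithm searches for linear-lexicographic combinations of such measures. Termination follows when every recursive call strictly decreases the chosen measure tuple.

def lex_decreases(caller, callee):
    """Strict lexicographic decrease on equal-length measure tuples."""
    return callee < caller        # Python compares tuples lexicographically

# (caller measures, callee measures) for the recursive calls of a hypothetical
# Ackermann-like function ack(m, n), using the primitive measures (m, n).
calls = [((3, 2), (3, 1)),        # ack(m, n-1): the second measure drops
         ((3, 2), (2, 5)),        # ack(m-1, ...): the first measure drops, the second may grow
         ((2, 0), (1, 7))]
print(all(lex_decreases(c, d) for c, d in calls))   # True: every call decreases the tuple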

Cite as

Raphael Douglas Giles, Vincent Jackson, and Christine Rizkallah. T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 139:1-139:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{giles_et_al:LIPIcs.ICALP.2024.139,
  author =	{Giles, Raphael Douglas and Jackson, Vincent and Rizkallah, Christine},
  title =	{{T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{139:1--139:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.139},
  URN =		{urn:nbn:de:0030-drops-202827},
  doi =		{10.4230/LIPIcs.ICALP.2024.139},
  annote =	{Keywords: Termination, Recursive functions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The 2-Dimensional Constraint Loop Problem Is Decidable

Authors: Quentin Guilmant, Engel Lefaucheux, Joël Ouaknine, and James Worrell


Abstract
A linear constraint loop is specified by a system of linear inequalities that define the relation between the values of the program variables before and after a single execution of the loop body. In this paper we consider the problem of determining whether such a loop terminates, i.e., whether all maximal executions are finite, regardless of how the loop is initialised and how the non-determinism in the loop body is resolved. We focus on the variant of the termination problem in which the loop variables range over ℝ. Our main result is that the termination problem is decidable over the reals in dimension 2. A more abstract formulation of our main result is that it is decidable whether a binary relation on ℝ² that is given as a conjunction of linear constraints is well-founded.
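For concreteness, here is an example of ours (not taken from the paper): a 2-dimensional linear constraint loop can be given as a conjunction of linear inequalities relating the values (x, y) before an iteration to the values (x2, y2) after it, and the loop terminates if and only if this step relation on ℝ² is well-founded. The sketch below only checks membership in one such step relation.

# Hypothetical loop body: x >= 1 (guard), x2 <= x + y, y2 <= y - 1.
def loop_step_allowed(x, y, x2, y2):
    return x >= 1 and x2 <= x + y and y2 <= y - 1

print(loop_step_allowed(5.0, 0.0, 5.0, -1.0))   # True: one more iteration is possible from (5, 0)
print(loop_step_allowed(0.5, 3.0, 1.0, 2.0))    # False: the guard x >= 1 fails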

Cite as

Quentin Guilmant, Engel Lefaucheux, Joël Ouaknine, and James Worrell. The 2-Dimensional Constraint Loop Problem Is Decidable. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 140:1-140:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{guilmant_et_al:LIPIcs.ICALP.2024.140,
  author =	{Guilmant, Quentin and Lefaucheux, Engel and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{The 2-Dimensional Constraint Loop Problem Is Decidable}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{140:1--140:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.140},
  URN =		{urn:nbn:de:0030-drops-202831},
  doi =		{10.4230/LIPIcs.ICALP.2024.140},
  annote =	{Keywords: Linear Constraints Loops, Minkowski-Weyl, Convex Sets, Asymptotic Expansions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Flattability of Priority Vector Addition Systems

Authors: Roland Guttenberg


Abstract
Vector addition systems (VAS), also known as Petri nets, are a popular model of concurrent systems. Many problems from many areas reduce to the reachability problem for VAS, which consists of deciding whether a target configuration of a VAS is reachable from a given initial configuration. One of the main approaches to solve the problem on practical instances is called flattening, intuitively removing nested loops. This technique is known to terminate for semilinear VAS due to [Jérôme Leroux, 2013]. In this paper, we prove that also for VAS with nested zero tests, called Priority VAS, flattening does in fact terminate for all semilinear reachability relations. Furthermore, we prove that Priority VAS admit semilinear inductive invariants. Both of these results are obtained by defining a well-quasi-order on runs of Priority VAS which has good pumping properties.

Cite as

Roland Guttenberg. Flattability of Priority Vector Addition Systems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 141:1-141:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{guttenberg:LIPIcs.ICALP.2024.141,
  author =	{Guttenberg, Roland},
  title =	{{Flattability of Priority Vector Addition Systems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{141:1--141:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.141},
  URN =		{urn:nbn:de:0030-drops-202848},
  doi =		{10.4230/LIPIcs.ICALP.2024.141},
  annote =	{Keywords: Priority Vector Addition Systems, Semilinear, Inductive Invariants, Geometry, Flattability, Almost Semilinear, Transformer Relation}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
An Efficient Quantifier Elimination Procedure for Presburger Arithmetic

Authors: Christoph Haase, Shankara Narayanan Krishna, Khushraj Madnani, Om Swostik Mishra, and Georg Zetzsche


Abstract
All known quantifier elimination procedures for Presburger arithmetic require doubly exponential time for eliminating a single block of existentially quantified variables. It has even been claimed in the literature that this upper bound is tight. We observe that this claim is incorrect and develop, as the main result of this paper, a quantifier elimination procedure eliminating a block of existentially quantified variables in singly exponential time. As corollaries, we can establish the precise complexity of numerous problems. Examples include deciding (i) monadic decomposability for existential formulas, (ii) whether an existential formula defines a well-quasi ordering or, more generally, (iii) certain formulas of Presburger arithmetic with Ramsey quantifiers. Moreover, despite the exponential blowup, our procedure shows that under mild assumptions, even NP upper bounds for decision problems about quantifier-free formulas can be transferred to existential formulas. The technical basis of our results is a kind of small model property for parametric integer programming that generalizes the seminal results by von zur Gathen and Sieveking on small integer points in convex polytopes.
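As a toy reminder of what quantifier elimination does here (this example is ours and is textbook material, not the paper's procedure): eliminating x from ∃x (y = 2x) yields the divisibility constraint "2 divides y", which is why Presburger quantifier elimination works over a signature extended with divisibility predicates. The snippet below just sanity-checks this equivalence on a finite range.

# Brute-force check of: Exists x (y = 2x)  <=>  2 | y, on a finite range of y.
def lhs(y, bound=50):
    return any(y == 2 * x for x in range(-bound, bound + 1))

def rhs(y):
    return y % 2 == 0

print(all(lhs(y) == rhs(y) for y in range(-40, 41)))   # True on this finite range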

Cite as

Christoph Haase, Shankara Narayanan Krishna, Khushraj Madnani, Om Swostik Mishra, and Georg Zetzsche. An Efficient Quantifier Elimination Procedure for Presburger Arithmetic. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 142:1-142:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{haase_et_al:LIPIcs.ICALP.2024.142,
  author =	{Haase, Christoph and Krishna, Shankara Narayanan and Madnani, Khushraj and Mishra, Om Swostik and Zetzsche, Georg},
  title =	{{An Efficient Quantifier Elimination Procedure for Presburger Arithmetic}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{142:1--142:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.142},
  URN =		{urn:nbn:de:0030-drops-202856},
  doi =		{10.4230/LIPIcs.ICALP.2024.142},
  annote =	{Keywords: Presburger arithmetic, quantifier elimination, parametric integer programming, convex geometry}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Forcing, Transition Algebras, and Calculi

Authors: Go Hashimoto, Daniel Găină, and Ionuţ Ţuţu


Abstract
We bring forward a logical system of transition algebras that enhances many-sorted first-order logic using features from dynamic logics. The sentences we consider include compositions, unions, and transitive closures of transition relations, which are treated similarly to the actions used in dynamic logics in order to define necessity and possibility operators. This leads to a higher degree of expressivity than that of many-sorted first-order logic. For example, one can finitely axiomatize both the finiteness and the reachability of models, neither of which are ordinarily possible in many-sorted first-order logic. We introduce syntactic entailment and study basic properties such as compactness and completeness, showing that the latter does not hold when standard finitary proof rules are used. Consequently, we define proof rules having both finite and countably infinite premises, and we provide conditions under which completeness can be proved. To that end, we generalize the forcing method introduced in model theory by Robinson from a single signature to a category of signatures, and we apply it to obtain a completeness result for signatures that are at most countable.

Cite as

Go Hashimoto, Daniel Găină, and Ionuţ Ţuţu. Forcing, Transition Algebras, and Calculi. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 143:1-143:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hashimoto_et_al:LIPIcs.ICALP.2024.143,
  author =	{Hashimoto, Go and G\u{a}in\u{a}, Daniel and \c{T}u\c{t}u, Ionu\c{t}},
  title =	{{Forcing, Transition Algebras, and Calculi}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{143:1--143:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.143},
  URN =		{urn:nbn:de:0030-drops-202868},
  doi =		{10.4230/LIPIcs.ICALP.2024.143},
  annote =	{Keywords: Forcing, institution theory, calculi, algebraic specification, transition systems}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Transcendence of Numbers Related to Sturmian and Arnoux-Rauzy Words

Authors: Pavol Kebis, Florian Luca, Joël Ouaknine, Andrew Scoones, and James Worrell


Abstract
We consider numbers of the form S_β(u) := ∑_{n=0}^∞ u_n/βⁿ, where u = ⟨u_n⟩_{n=0}^∞ is an infinite word over a finite alphabet and β ∈ ℂ satisfies |β| > 1. Our main contribution is to present a combinatorial criterion on u, called echoing, that implies that S_β(u) is transcendental whenever β is algebraic. We show that every Sturmian word is echoing, as is the Tribonacci word, a leading example of an Arnoux-Rauzy word. We furthermore characterise ℚ̄-linear independence of sets of the form {1, S_β(u₁),…,S_β(u_k)}, where u₁,…,u_k are Sturmian words having the same slope. Finally, we give an application of the above linear independence criterion to the theory of dynamical systems, showing that for a contracted rotation on the unit circle with algebraic slope, its limit set is either finite or consists exclusively of transcendental elements other than its endpoints 0 and 1. This confirms a conjecture of Bugeaud, Kim, Laurent, and Nogueira.
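For illustration only (this example is ours, not from the paper): the Fibonacci word, the prototypical Sturmian word, is obtained by iterating the substitution 0 → 01, 1 → 0, and the snippet below numerically approximates S_β(u) for β = 2; by the abstract's main result the exact value of this series is transcendental, since β = 2 is algebraic and every Sturmian word is echoing.

def fibonacci_word(iterations: int) -> str:
    w = "0"
    for _ in range(iterations):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def s_beta(word: str, beta: float, terms: int = 300) -> float:
    """Finite approximation of S_beta(u) = sum_n u_n / beta^n."""
    return sum(int(c) / beta ** n for n, c in enumerate(word[:terms]))

u = fibonacci_word(15)
print(u[:16])            # 0100101001001010
print(s_beta(u, 2.0))    # approximately 0.5804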

Cite as

Pavol Kebis, Florian Luca, Joël Ouaknine, Andrew Scoones, and James Worrell. On Transcendence of Numbers Related to Sturmian and Arnoux-Rauzy Words. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 144:1-144:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kebis_et_al:LIPIcs.ICALP.2024.144,
  author =	{Kebis, Pavol and Luca, Florian and Ouaknine, Jo\"{e}l and Scoones, Andrew and Worrell, James},
  title =	{{On Transcendence of Numbers Related to Sturmian and Arnoux-Rauzy Words}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{144:1--144:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.144},
  URN =		{urn:nbn:de:0030-drops-202873},
  doi =		{10.4230/LIPIcs.ICALP.2024.144},
  annote =	{Keywords: Transcendence, Subspace Theorem, Fibonacci Word, Tribonacci Word}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The Threshold Problem for Hypergeometric Sequences with Quadratic Parameters

Authors: George Kenison


Abstract
Hypergeometric sequences are rational-valued sequences that satisfy first-order linear recurrence relations with polynomial coefficients; that is, ⟨u_n⟩_{n=0}^∞ is hypergeometric if it satisfies a first-order linear recurrence of the form p(n)u_{n+1} = q(n)u_n with polynomial coefficients p,q ∈ ℤ[x] and u₀ ∈ ℚ. In this paper, we consider the Threshold Problem for hypergeometric sequences: given a hypergeometric sequence ⟨u_n⟩_{n=0}^∞ and a threshold t ∈ ℚ, determine whether u_n ≥ t for each n ∈ ℕ₀. We establish decidability for the Threshold Problem under the assumption that the coefficients p and q are monic polynomials whose roots lie in an imaginary quadratic extension of ℚ. We also establish conditional decidability results; for example, under the assumption that the coefficients p and q are monic polynomials whose roots lie in any number of quadratic extensions of ℚ, the Threshold Problem is decidable subject to the truth of Schanuel’s conjecture. Finally, we show how our approach both recovers and extends some of the recent decidability results on the Membership Problem for hypergeometric sequences with quadratic parameters.
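As a purely illustrative sketch (ours, not the paper's decision procedure; the polynomials and threshold are hypothetical), the snippet below unfolds a hypergeometric recurrence p(n)u_{n+1} = q(n)u_n and tests the threshold on a finite prefix. Such a test can only refute "u_n ≥ t for all n", whereas the paper decides the property for all n under the stated assumptions on p and q.

from fractions import Fraction

def prefix(p, q, u0: Fraction, length: int):
    """First `length` terms of the sequence u_{n+1} = q(n)/p(n) * u_n, assuming p(n) != 0."""
    u, out = u0, [u0]
    for n in range(length - 1):
        u = u * Fraction(q(n), p(n))
        out.append(u)
    return out

# Hypothetical monic quadratic coefficients: (n^2 + 1) u_{n+1} = (n^2 + n + 1) u_n, u_0 = 1.
p = lambda n: n * n + 1
q = lambda n: n * n + n + 1
us = prefix(p, q, Fraction(1), 10)
t = Fraction(1)
print(all(u >= t for u in us))   # True on this prefix: no counterexample found so far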

Cite as

George Kenison. The Threshold Problem for Hypergeometric Sequences with Quadratic Parameters. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 145:1-145:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kenison:LIPIcs.ICALP.2024.145,
  author =	{Kenison, George},
  title =	{{The Threshold Problem for Hypergeometric Sequences with Quadratic Parameters}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{145:1--145:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.145},
  URN =		{urn:nbn:de:0030-drops-202882},
  doi =		{10.4230/LIPIcs.ICALP.2024.145},
  annote =	{Keywords: Threshold Problem, Membership Problem, Hypergeometric Sequences}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Solving Promise Equations over Monoids and Groups

Authors: Alberto Larrauri and Stanislav Živný


Abstract
We give a complete complexity classification for the problem of finding a solution to a given system of equations over a fixed finite monoid, given that a solution over a more restricted monoid exists. As a corollary, we obtain a complexity classification for the same problem over groups.

Cite as

Alberto Larrauri and Stanislav Živný. Solving Promise Equations over Monoids and Groups. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 146:1-146:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{larrauri_et_al:LIPIcs.ICALP.2024.146,
  author =	{Larrauri, Alberto and \v{Z}ivn\'{y}, Stanislav},
  title =	{{Solving Promise Equations over Monoids and Groups}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{146:1--146:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.146},
  URN =		{urn:nbn:de:0030-drops-202893},
  doi =		{10.4230/LIPIcs.ICALP.2024.146},
  annote =	{Keywords: constraint satisfaction, promise constraint satisfaction, equations, minions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Smoothed Analysis of Deterministic Discounted and Mean-Payoff Games

Authors: Bruno Loff and Mateusz Skomra


Abstract
We devise a policy-iteration algorithm for deterministic two-player discounted and mean-payoff games that runs, with high probability, in polynomial time on any input where each payoff is chosen independently from a sufficiently random distribution and the underlying graph of the game is ergodic. This includes the case where an arbitrary set of payoffs has been perturbed by a Gaussian, showing for the first time that deterministic two-player games can be solved efficiently, in the sense of smoothed analysis. More generally, we devise a condition number for deterministic discounted and mean-payoff games played on ergodic graphs, and show that our algorithm runs in time polynomial in this condition number. Our result confirms a previous conjecture of Boros et al., which was claimed as a theorem [Boros et al., 2011] and later retracted [Boros et al., 2018]. It stands in contrast with a recent counter-example by Christ and Yannakakis [Christ and Yannakakis, 2023], showing that Howard’s policy-iteration algorithm does not run in smoothed polynomial time on stochastic single-player mean-payoff games. Our approach is inspired by the analysis of random optimal assignment instances by Frieze and Sorkin [Frieze and Sorkin, 2007], and the analysis of bias-induced policies for mean-payoff games by Akian, Gaubert and Hochart [Akian et al., 2018].

Cite as

Bruno Loff and Mateusz Skomra. Smoothed Analysis of Deterministic Discounted and Mean-Payoff Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 147:1-147:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{loff_et_al:LIPIcs.ICALP.2024.147,
  author =	{Loff, Bruno and Skomra, Mateusz},
  title =	{{Smoothed Analysis of Deterministic Discounted and Mean-Payoff Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{147:1--147:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.147},
  URN =		{urn:nbn:de:0030-drops-202908},
  doi =		{10.4230/LIPIcs.ICALP.2024.147},
  annote =	{Keywords: Mean-payoff games, discounted games, policy iteration, smoothed analysis}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
An Order out of Nowhere: A New Algorithm for Infinite-Domain CSPs

Authors: Antoine Mottet, Tomáš Nagy, and Michael Pinsker


Abstract
We consider the problem of satisfiability of sets of constraints in a given set of finite uniform hypergraphs. While the problem under consideration is similar in nature to the problem of satisfiability of constraints in graphs, the classical complexity reduction to finite-domain CSPs that was used in the proof of the complexity dichotomy for such problems cannot be used as a black box in our case. We therefore introduce an algorithmic technique inspired by classical notions from the theory of finite-domain CSPs, and prove its correctness based on symmetries that depend on a linear order that is external to the structures under consideration. Our second main result is a P/NP-complete complexity dichotomy for such problems over many sets of uniform hypergraphs. The proof is based on the translation of the problem into the framework of constraint satisfaction problems (CSPs) over infinite uniform hypergraphs. Our result confirms in particular the Bodirsky-Pinsker conjecture for CSPs of first-order reducts of some homogeneous hypergraphs. This forms a vast generalization of previous work by Bodirsky-Pinsker (STOC'11) and Bodirsky-Martin-Pinsker-Pongrácz (ICALP'16) on graph satisfiability.

Cite as

Antoine Mottet, Tomáš Nagy, and Michael Pinsker. An Order out of Nowhere: A New Algorithm for Infinite-Domain CSPs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 148:1-148:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{mottet_et_al:LIPIcs.ICALP.2024.148,
  author =	{Mottet, Antoine and Nagy, Tom\'{a}\v{s} and Pinsker, Michael},
  title =	{{An Order out of Nowhere: A New Algorithm for Infinite-Domain \{CSP\}s}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{148:1--148:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.148},
  URN =		{urn:nbn:de:0030-drops-202912},
  doi =		{10.4230/LIPIcs.ICALP.2024.148},
  annote =	{Keywords: Constraint Satisfaction Problems, Hypergraphs, Polymorphisms}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
A Complete Quantitative Axiomatisation of Behavioural Distance of Regular Expressions

Authors: Wojciech Różowski


Abstract
Deterministic automata have traditionally been studied from the point of view of language equivalence, but another perspective is given by the canonical notion of shortest-distinguishing-word distance, which quantifies the similarity of states. Intuitively, the longer the word needed to observe a difference between two states, the closer their behaviour is. In this paper, we give a sound and complete axiomatisation of the shortest-distinguishing-word distance between regular languages. Our axiomatisation relies on a recently developed quantitative analogue of equational logic, which allows one to manipulate rational-indexed judgements of the form e ≡_ε f, meaning that the term e is approximately equivalent to the term f within the error margin ε. The technical core of the paper is dedicated to the completeness argument, which draws techniques from order theory and Banach spaces to simplify the calculation of the behavioural distance to the point that it can then be mimicked by axiomatic reasoning.
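To make the distance concrete: a common formalisation assigns two languages the distance c^k, where k is the length of a shortest word on which they disagree (and distance 0 if they are equal). The sketch below (ours; the choice c = 1/2, the automata, and all names are illustrative, and the paper works axiomatically on regular expressions rather than operationally on automata) computes this value by breadth-first search over a product of DFAs.

from collections import deque

def distance(dfa1, dfa2, alphabet, c=0.5):
    """Each DFA is (initial state, transition dict (state, letter) -> state, set of accepting states)."""
    (i1, d1, f1), (i2, d2, f2) = dfa1, dfa2
    seen, queue = {(i1, i2)}, deque([(i1, i2, 0)])
    while queue:
        s1, s2, k = queue.popleft()              # BFS: first mismatch found has minimal word length k
        if (s1 in f1) != (s2 in f2):
            return c ** k
        for a in alphabet:
            nxt = (d1[(s1, a)], d2[(s2, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((*nxt, k + 1))
    return 0.0                                    # no distinguishing word: the languages are equal

even = ("e", {("e", "a"): "o", ("o", "a"): "e"}, {"e"})                 # number of a's is even
mod4 = (0, {(0, "a"): 1, (1, "a"): 2, (2, "a"): 3, (3, "a"): 0}, {0})   # number of a's divisible by 4
print(distance(even, mod4, ["a"]))   # 0.25: the shortest distinguishing word is "aa"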

Cite as

Wojciech Różowski. A Complete Quantitative Axiomatisation of Behavioural Distance of Regular Expressions. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 149:1-149:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{rozowski:LIPIcs.ICALP.2024.149,
  author =	{R\'{o}\.{z}owski, Wojciech},
  title =	{{A Complete Quantitative Axiomatisation of Behavioural Distance of Regular Expressions}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{149:1--149:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.149},
  URN =		{urn:nbn:de:0030-drops-202920},
  doi =		{10.4230/LIPIcs.ICALP.2024.149},
  annote =	{Keywords: Regular Expressions, Behavioural Distances, Quantitative Equational Theories}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Homogeneity and Homogenizability: Hard Problems for the Logic SNP

Authors: Jakub Rydval


Abstract
The infinite-domain CSP dichotomy conjecture extends the finite-domain CSP dichotomy theorem to reducts of finitely bounded homogeneous structures. Every countable finitely bounded homogeneous structure is uniquely described by a universal first-order sentence up to isomorphism, and every reduct of such a structure by a sentence of the logic SNP. By Fraïssé’s Theorem, testing the existence of a finitely bounded homogeneous structure for a given universal first-order sentence is equivalent to testing the amalgamation property for the class of its finite models. The present paper motivates a complexity-theoretic view on the classification problem for finitely bounded homogeneous structures. We show that this meta-problem is EXPSPACE-hard or PSPACE-hard, depending on whether the input is specified by a universal sentence or a set of forbidden substructures. By relaxing the input to SNP sentences and the question to the existence of a structure with a finitely bounded homogeneous expansion, we obtain a different meta-problem, closely related to the question of homogenizability. We show that this second meta-problem is already undecidable, even if the input SNP sentence comes from the Datalog fragment and uses at most binary relation symbols. As a byproduct of our proof, we also get the undecidability of some other properties for Datalog programs, e.g., whether they can be rewritten in the logic MMSNP, whether they solve some finite-domain CSP, or whether they define a structure with a homogeneous Ramsey expansion in a finite relational signature.

Cite as

Jakub Rydval. Homogeneity and Homogenizability: Hard Problems for the Logic SNP. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 150:1-150:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{rydval:LIPIcs.ICALP.2024.150,
  author =	{Rydval, Jakub},
  title =	{{Homogeneity and Homogenizability: Hard Problems for the Logic SNP}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{150:1--150:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.150},
  URN =		{urn:nbn:de:0030-drops-202939},
  doi =		{10.4230/LIPIcs.ICALP.2024.150},
  annote =	{Keywords: constraint satisfaction problems, finitely bounded, homogeneous, amalgamation property, universal, SNP, homogenizable}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Identifying Tractable Quantified Temporal Constraints Within Ord-Horn

Authors: Jakub Rydval, Žaneta Semanišinová, and Michał Wrona


Abstract
The constraint satisfaction problem, parameterized by a relational structure, provides a general framework for expressing computational decision problems. Already the restriction to the class of all finite structures forms an interesting microcosm on its own, but to express decision problems in temporal reasoning one has to take a step beyond the finite-domain realm. An important class of templates used in this context is that of temporal structures, i.e., structures over ℚ whose relations are first-order definable using the usual countable dense linear order without endpoints. In the standard setting, which allows only existential quantification over input variables, the complexity of finite and temporal constraints has been fully classified. In the quantified setting, i.e., when one also allows universal quantifiers, there is only a handful of partial classification results and many concrete cases of unknown complexity. This paper presents significant progress towards understanding the complexity of the quantified constraint satisfaction problem for temporal structures. We provide a complexity dichotomy for quantified constraints over the Ord-Horn fragment, which played an important role in understanding the complexity of constraints both over temporal structures and in Allen’s interval algebra. We show that all problems under consideration are in P or coNP-hard. In particular, we determine the complexity of the quantified constraint satisfaction problem for (ℚ; x = y ⇒ x ≥ z), thereby settling a question that has been open for more than ten years.

Cite as

Jakub Rydval, Žaneta Semanišinová, and Michał Wrona. Identifying Tractable Quantified Temporal Constraints Within Ord-Horn. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 151:1-151:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{rydval_et_al:LIPIcs.ICALP.2024.151,
  author =	{Rydval, Jakub and Semani\v{s}inov\'{a}, \v{Z}aneta and Wrona, Micha{\l}},
  title =	{{Identifying Tractable Quantified Temporal Constraints Within Ord-Horn}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{151:1--151:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.151},
  URN =		{urn:nbn:de:0030-drops-202944},
  doi =		{10.4230/LIPIcs.ICALP.2024.151},
  annote =	{Keywords: constraint satisfaction problems, quantifiers, dichotomy, temporal reasoning, Ord-Horn}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On Homomorphism Indistinguishability and Hypertree Depth

Authors: Benjamin Scheidt


Abstract
GC^k is a logic introduced by Scheidt and Schweikardt (2023) to express properties of hypergraphs. It is similar to first-order logic with counting quantifiers (C) adapted to the hypergraph setting. It has distinct sets of variables for vertices and for hyperedges and requires vertex variables to be guarded by hyperedge variables on every quantification. We prove that two hypergraphs G, H satisfy the same sentences in the logic GC^k with guard depth at most k if, and only if, they are homomorphism indistinguishable over the class of hypergraphs of strict hypertree depth at most k. This lifts the analogous result for tree depth ≤ k and sentences of first-order logic with counting quantifiers of quantifier rank at most k due to Grohe (2020) from graphs to hypergraphs. The guard depth of a formula is the quantifier rank with respect to hyperedge variables, and strict hypertree depth is a restriction of hypertree depth as defined by Adler, Gavenčiak and Klimošová (2012). To justify this restriction, we show that for every H, the strict hypertree depth of H is at most 1 larger than its hypertree depth, and we give additional evidence that strict hypertree depth can be viewed as a reasonable generalisation of tree depth for hypergraphs.

Cite as

Benjamin Scheidt. On Homomorphism Indistinguishability and Hypertree Depth. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 152:1-152:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{scheidt:LIPIcs.ICALP.2024.152,
  author =	{Scheidt, Benjamin},
  title =	{{On Homomorphism Indistinguishability and Hypertree Depth}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{152:1--152:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.152},
  URN =		{urn:nbn:de:0030-drops-202958},
  doi =		{10.4230/LIPIcs.ICALP.2024.152},
  annote =	{Keywords: homomorphism indistinguishability, counting logics, guarded logics, hypergraphs, incidence graphs, tree depth, elimination forest, hypertree width}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
On the Length of Strongly Monotone Descending Chains over ℕ^d

Authors: Sylvain Schmitz and Lia Schütze


Abstract
A recent breakthrough by Künnemann, Mazowiecki, Schütze, Sinclair-Banks, and Węgrzycki (ICALP 2023) bounds the running time for the coverability problem in d-dimensional vector addition systems under unary encoding to n^{2^{O(d)}}, improving on Rackoff’s n^{2^{O(d lg d)}} upper bound (Theor. Comput. Sci. 1978), and provides conditional matching lower bounds. In this paper, we revisit Lazić and Schmitz' "ideal view" of the backward coverability algorithm (Inform. Comput. 2021) in the light of this breakthrough. We show that the controlled strongly monotone descending chains of downwards-closed sets over ℕ^d that arise from the dual backward coverability algorithm of Lazić and Schmitz on d-dimensional unary vector addition systems also enjoy this tight n^{2^{O(d)}} upper bound on their length, and that this also translates into the same bound on the running time of the backward coverability algorithm. Furthermore, our analysis takes place in a more general setting than that of Lazić and Schmitz, which allows us to show the same results and to improve on the 2EXPSPACE upper bound derived by Benedikt, Duff, Sharad, and Worrell (LICS 2017) for the coverability problem in invertible affine nets.

Cite as

Sylvain Schmitz and Lia Schütze. On the Length of Strongly Monotone Descending Chains over ℕ^d. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 153:1-153:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{schmitz_et_al:LIPIcs.ICALP.2024.153,
  author =	{Schmitz, Sylvain and Sch\"{u}tze, Lia},
  title =	{{On the Length of Strongly Monotone Descending Chains over \mathbb{N}^d}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{153:1--153:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.153},
  URN =		{urn:nbn:de:0030-drops-202964},
  doi =		{10.4230/LIPIcs.ICALP.2024.153},
  annote =	{Keywords: Vector addition system, coverability, well-quasi-order, order ideal, affine net}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
FO Logic on Cellular Automata Orbits Equals MSO Logic

Authors: Guillaume Theyssier


Abstract
We introduce an extension of classical cellular automata (CA) to arbitrary labeled graphs, and show that FO logic on CA orbits is equivalent to MSO logic. We deduce various results from that equivalence, including a characterization of finitely generated groups on which FO model checking for CA orbits is undecidable, and undecidability of satisfiability of a fixed FO property for CA over finite graphs. We also show concrete examples of FO formulas for CA orbits whose model checking problem is equivalent to the domino problem, or its seeded or recurring variants, respectively, on any finitely generated group. For the recurring domino problem, we use an extension of the FO signature by a relation found in the well-known Garden of Eden theorem, but we also show a concrete FO formula without the extension and with one quantifier alternation whose model checking problem does not belong to the arithmetical hierarchy on the group ℤ².

Cite as

Guillaume Theyssier. FO Logic on Cellular Automata Orbits Equals MSO Logic. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 154:1-154:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{theyssier:LIPIcs.ICALP.2024.154,
  author =	{Theyssier, Guillaume},
  title =	{{FO Logic on Cellular Automata Orbits Equals MSO Logic}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{154:1--154:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.154},
  URN =		{urn:nbn:de:0030-drops-202972},
  doi =		{10.4230/LIPIcs.ICALP.2024.154},
  annote =	{Keywords: MSO logic, FO logic, cellular automata, domino problem, Cayley graphs}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Regular Expressions with Backreferences and Lookaheads Capture NLOG

Authors: Yuya Uezato


Abstract
Backreferences and lookaheads are vital features that make classical regular expressions (REGEX) practical. Although these features have been widely used, understanding of their unrestricted combination has been limited. In practice, most likely no implementation fully supports them. Theoretically, while some studies have addressed these features separately, few have dared to combine them. Those few studies showed that the amalgamation of these features significantly enhances the expressiveness of REGEX. However, no satisfactory expressivity bound for REWBLk (REGEX with backreferences and lookaheads) has been established. We elucidate this by establishing that REWBLk coincides with NLOG, the class of languages accepted by log-space nondeterministic Turing machines (NTMs). In translating REWBLk to log-space NTMs, negative lookaheads are the most challenging part, since they essentially require complementing log-space NTMs in nondeterministic log-space. To address this problem, we revisit the Immerman-Szelepcsényi theorem. In addition, we employ log-space nested-oracle NTMs to naturally handle the nested lookaheads of REWBLk. Utilizing such oracle machines, we also present the new result that the membership problem of REWBLk is PSPACE-complete.
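To illustrate the two features in isolation (these are standard examples of ours, run with Python's re module, which is not itself a REWBLk implementation and does not support their unrestricted combination in the paper's sense): a backreference already recognises the non-regular language of squares ww, and lookaheads act as a conjunction of patterns.

import re

# Backreferences exceed regular languages: ^(.+)\1$ matches exactly the squares ww.
square = re.compile(r"^(.+)\1$")
assert square.match("abab")          # w = "ab"
assert not square.match("aba")

# Lookaheads behave like intersection: words over {a, b} that start with 'a'
# (positive lookahead) and do not end with 'b' (negative lookahead).
combo = re.compile(r"^(?=a)(?!.*b$)[ab]+$")
assert combo.match("aba")
assert not combo.match("ab")
print("all example checks passed")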

Cite as

Yuya Uezato. Regular Expressions with Backreferences and Lookaheads Capture NLOG. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 155:1-155:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{uezato:LIPIcs.ICALP.2024.155,
  author =	{Uezato, Yuya},
  title =	{{Regular Expressions with Backreferences and Lookaheads Capture NLOG}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{155:1--155:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.155},
  URN =		{urn:nbn:de:0030-drops-202984},
  doi =		{10.4230/LIPIcs.ICALP.2024.155},
  annote =	{Keywords: Regular Expression, Automata Theory, Nondeterministic Log-Space}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Verification of Population Protocols with Unordered Data

Authors: Steffen van Bergerem, Roland Guttenberg, Sandra Kiefer, Corto Mascle, Nicolas Waldburger, and Chana Weil-Kennedy


Abstract
Population protocols are a well-studied model of distributed computation in which a group of anonymous finite-state agents communicates via pairwise interactions. Together they decide whether their initial configuration, i.e., the initial distribution of agents in the states, satisfies a property. To express properties of multisets over an infinite data domain, Blondin and Ladouceur (ICALP'23) introduced population protocols with unordered data (PPUD) as an extension of this model. In PPUD, each agent carries a fixed data value, and the interactions between agents depend on whether their data are equal or not. Blondin and Ladouceur also identified the interesting subclass of immediate observation PPUD (IOPPUD), where in every transition one of the two agents remains passive and does not move, and they characterised its expressive power. We study the decidability and complexity of formally verifying these protocols. The main verification problem for population protocols is well-specification, that is, checking whether the given PPUD computes some function. We show that well-specification is undecidable in general. By contrast, for IOPPUD, we exhibit a large yet natural class of problems, which includes well-specification among other classic problems, and establish that these problems are in ExpSpace. We also provide a lower complexity bound, namely coNExpTime-hardness.
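As a toy illustration (ours, not taken from the paper; the rule, states, and data values are hypothetical), a configuration is a multiset of (state, datum) pairs, and a pairwise interaction may depend on whether the two agents' data are equal.

from collections import Counter

def interact(config: Counter, a, b, rule):
    """Apply one pairwise interaction between agents a and b (present in config)."""
    (qa, da), (qb, db) = a, b
    qa2, qb2 = rule(qa, qb, da == db)
    new = config.copy()
    new[a] -= 1
    new[b] -= 1
    new[(qa2, da)] += 1
    new[(qb2, db)] += 1
    return +new            # unary + drops entries whose count fell to zero

# Hypothetical rule: two agents in state "x" with the same datum interact,
# and one of them moves to state "y"; all other pairs are left unchanged.
rule = lambda qa, qb, same: ("x", "y") if (qa, qb) == ("x", "x") and same else (qa, qb)
config = Counter({("x", 1): 2, ("x", 2): 1})
print(interact(config, ("x", 1), ("x", 1), rule))   # three agents remain; one with datum 1 is now in state "y"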

Cite as

Steffen van Bergerem, Roland Guttenberg, Sandra Kiefer, Corto Mascle, Nicolas Waldburger, and Chana Weil-Kennedy. Verification of Population Protocols with Unordered Data. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 156:1-156:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{vanbergerem_et_al:LIPIcs.ICALP.2024.156,
  author =	{van Bergerem, Steffen and Guttenberg, Roland and Kiefer, Sandra and Mascle, Corto and Waldburger, Nicolas and Weil-Kennedy, Chana},
  title =	{{Verification of Population Protocols with Unordered Data}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{156:1--156:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.156},
  URN =		{urn:nbn:de:0030-drops-202993},
  doi =		{10.4230/LIPIcs.ICALP.2024.156},
  annote =	{Keywords: Population protocols, Parameterized verification, Distributed computing, Well-specification}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Domain Reasoning in TopKAT

Authors: Cheng Zhang, Arthur Azevedo de Amorim, and Marco Gaboardi


Abstract
TopKAT is the algebraic theory of Kleene algebra with tests (KAT) extended with a top element. Compared to KAT, one pleasant feature of TopKAT is that, in relational models, the top element allows us to express the domain and codomain of a relation. This enables several applications in program logics, such as proving under-approximate specifications or reachability properties of imperative programs. However, while TopKAT inherits many desirable properties of KAT, such as having a decidable equational theory, it is incomplete with respect to relational models. In other words, there are properties that hold in all relational TopKATs but cannot be proved from the axioms of TopKAT. This issue is potentially worrisome for program-logic applications, in which relational models play a key role. In this paper, we further investigate the completeness properties of TopKAT with respect to relational models. We show that TopKAT is complete with respect to (co)domain comparison of KAT terms, but incomplete when comparing the (co)domain of arbitrary TopKAT terms. Since the encoding of under-approximate specifications in TopKAT hinges on the former type of formula, the aforementioned incompleteness results have a limited impact when using TopKAT to reason about such specifications.
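
The small Python sketch below illustrates the relational models mentioned above: relations over a finite carrier with composition, the full relation as the top element, and tests as sub-identity relations. The identity it checks, dom(R) ; TOP = R ; TOP, is a standard relational fact showing how the top element exposes domains; it is included only as an illustration and is not a result or encoding from the paper.

from itertools import product

CARRIER = {0, 1, 2}
TOP = frozenset(product(CARRIER, CARRIER))   # full relation: the top element
ID  = frozenset((x, x) for x in CARRIER)     # identity relation: the unit

def compose(r, s):
    # Relational composition r ; s.
    return frozenset((x, z) for (x, y1) in r for (y2, z) in s if y1 == y2)

def dom_test(r):
    # Domain of r expressed as a test, i.e. a sub-identity relation.
    return frozenset((x, x) for (x, _) in r)

R = frozenset({(0, 1), (1, 2)})  # example relation

# Both sides relate exactly the elements of dom(R) to every element:
assert compose(dom_test(R), TOP) == compose(R, TOP)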

Cite as

Cheng Zhang, Arthur Azevedo de Amorim, and Marco Gaboardi. Domain Reasoning in TopKAT. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 157:1-157:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{zhang_et_al:LIPIcs.ICALP.2024.157,
  author =	{Zhang, Cheng and de Amorim, Arthur Azevedo and Gaboardi, Marco},
  title =	{{Domain Reasoning in TopKAT}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{157:1--157:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.157},
  URN =		{urn:nbn:de:0030-drops-203003},
  doi =		{10.4230/LIPIcs.ICALP.2024.157},
  annote =	{Keywords: Kleene algebra, Kleene Algebra With Tests, Kleene Algebra With Domain, Kleene Algebra With Top and Tests, Completeness, Decidability}
}
