Document
Complete Volume
Authors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
Abstract
LIPIcs, Volume 334, ICALP 2025, Complete Volume
Cite as
52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 1-3274, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@Proceedings{censorhillel_et_al:LIPIcs.ICALP.2025,
title = {{LIPIcs, Volume 334, ICALP 2025, Complete Volume}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {1--3274},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025},
URN = {urn:nbn:de:0030-drops-237680},
doi = {10.4230/LIPIcs.ICALP.2025},
annote = {Keywords: LIPIcs, Volume 334, ICALP 2025, Complete Volume}
}
Document
Front Matter
Authors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
Abstract
Front Matter, Table of Contents, Preface, Conference Organization
Cite as
52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 0:i-0:xliv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{censorhillel_et_al:LIPIcs.ICALP.2025.0,
author = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
title = {{Front Matter, Table of Contents, Preface, Conference Organization}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {0:i--0:xliv},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.0},
URN = {urn:nbn:de:0030-drops-237675},
doi = {10.4230/LIPIcs.ICALP.2025.0},
annote = {Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Authors:
Anupam Gupta
Abstract
The analysis of algorithm performance in the worst case has long been the gold standard of theoretical computer science: it provides a simple, compelling, and robust model, which can often be predictive as well as descriptive. That said, in recent years we have seen an exciting surge in analyzing algorithms using models that go beyond the worst case. In particular, how can we use ideas from machine learning to inform algorithm design?
In this talk we will discuss some of the results and techniques that come out of this endeavor, in both offline and online settings. For example, we will study covering problems like set cover, load balancing problems like scheduling jobs on machines, and cut problems. We will see some of the modeling decisions in beyond worst-case frameworks, as well as the algorithmic ideas, some old and some new, that can be used to give more nuanced guarantees for these classical problems, complementing our understanding of these problems in the worst-case setting.
Cite as
Anupam Gupta. Online Algorithm Design Beyond the Worst Case (Invited Talk). In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gupta:LIPIcs.ICALP.2025.1,
author = {Gupta, Anupam},
title = {{Online Algorithm Design Beyond the Worst Case}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {1:1--1:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.1},
URN = {urn:nbn:de:0030-drops-233786},
doi = {10.4230/LIPIcs.ICALP.2025.1},
annote = {Keywords: Beyond Worst-Case Analysis, Algorithms with Predictions, Random Order Models}
}
Document
Invited Talk
Authors:
Dana Ron
Abstract
This short paper accompanies an invited talk given at ICALP 2025. It is an informal, high-level presentation of tolerant testing and distance approximation. It includes some general results as well as a few specific ones, with the aim of providing a taste of this research direction within the area of sublinear algorithms.
Cite as
Dana Ron. Let’s Try to Be More Tolerant: On Tolerant Property Testing and Distance Approximation (Invited Talk). In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 2:1-2:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{ron:LIPIcs.ICALP.2025.2,
author = {Ron, Dana},
title = {{Let’s Try to Be More Tolerant: On Tolerant Property Testing and Distance Approximation}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {2:1--2:10},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.2},
URN = {urn:nbn:de:0030-drops-233798},
doi = {10.4230/LIPIcs.ICALP.2025.2},
annote = {Keywords: Sublinear Algorithms, Tolerant Property Testing, Distance Approximation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Anders Aamand, Allen Liu, and Shyam Narayanan
Abstract
In the trace reconstruction problem, our goal is to learn an unknown string x ∈ {0,1}ⁿ given independent traces of x. A trace is obtained by independently deleting each bit of x with some probability δ and concatenating the remaining bits. It is a major open question whether the trace reconstruction problem can be solved with a polynomial number of traces when the deletion probability δ is constant. The best known upper and lower bounds are exp(Õ(n^{1/5})) [Zachary Chase, 2021a] and Ω̃(n^{3/2}) [Zachary Chase, 2021b], respectively. Our main result is that if the string x is mildly separated, meaning that the number of zeros between any two ones in x is at least polylog n, and if δ is a sufficiently small constant, then the trace reconstruction problem can be solved with O(n log n) traces and in polynomial time.
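For readers who want to experiment with the model, here is a minimal Python sketch of the deletion channel described above: it generates independent traces of a string x by deleting each bit with probability δ. The example string, the value of δ, and the function names are illustrative choices, not taken from the paper.

```python
import random

def trace(x: str, delta: float) -> str:
    """One pass of the deletion channel: keep each bit independently with probability 1 - delta."""
    return "".join(b for b in x if random.random() >= delta)

def sample_traces(x: str, delta: float, num_traces: int) -> list[str]:
    """Independent traces of x, the input to a trace-reconstruction algorithm."""
    return [trace(x, delta) for _ in range(num_traces)]

if __name__ == "__main__":
    # A "mildly separated" example: ones separated by long runs of zeros.
    x = ("1" + "0" * 10) * 8
    for t in sample_traces(x, delta=0.1, num_traces=3):
        print(t)
```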
Cite as
Anders Aamand, Allen Liu, and Shyam Narayanan. Near-Optimal Trace Reconstruction for Mildly Separated Strings. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{aamand_et_al:LIPIcs.ICALP.2025.3,
author = {Aamand, Anders and Liu, Allen and Narayanan, Shyam},
title = {{Near-Optimal Trace Reconstruction for Mildly Separated Strings}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {3:1--3:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.3},
URN = {urn:nbn:de:0030-drops-233801},
doi = {10.4230/LIPIcs.ICALP.2025.3},
annote = {Keywords: Trace Reconstruction, deletion channel, sample complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Pierre Aboulker, Édouard Bonnet, Timothé Picavet, and Nicolas Trotignon
Abstract
We exhibit a new obstacle to the nascent algorithmic theory for classes excluding an induced minor. We indeed show that on the class of string graphs - which avoids the 1-subdivision of, say, K₅ as an induced minor - Induced 2-Disjoint Paths is NP-complete. So, while k-Disjoint Paths, for a fixed k, is polynomial-time solvable in general graphs, the absence of a graph as an induced minor does not make its induced variant tractable, even for k = 2. This answers a question of Korhonen and Lokshtanov [SODA '24], and complements a polynomial-time algorithm for Induced k-Disjoint Paths in classes of bounded genus by Kobayashi and Kawarabayashi [SODA '09]. In addition to being string graphs, our produced hard instances are subgraphs of a constant power of bounded-degree planar graphs, hence have bounded twin-width and bounded maximum degree.
We also leverage our new result to show that there is a fixed subcubic graph H such that deciding if an input graph contains H as an induced subdivision is NP-complete. Until now, all the graphs H for which such a statement was known had a vertex of degree at least 4. This answers a question by Chudnovsky, Seymour, and Trotignon [JCTB '13], and by Le [JGT '19]. Finally we resolve another question of Korhonen and Lokshtanov by exhibiting a subcubic graph H without two adjacent degree-3 vertices and such that deciding if an input n-vertex graph contains H as an induced minor is NP-complete, and unless the Exponential-Time Hypothesis fails, requires time 2^{Ω(√ n)}. This complements an algorithm running in subexponential time 2^{Õ(n^{2/3})} by these authors [SODA '24] under the same technical condition.
Cite as
Pierre Aboulker, Édouard Bonnet, Timothé Picavet, and Nicolas Trotignon. Induced Disjoint Paths Without an Induced Minor. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{aboulker_et_al:LIPIcs.ICALP.2025.4,
author = {Aboulker, Pierre and Bonnet, \'{E}douard and Picavet, Timoth\'{e} and Trotignon, Nicolas},
title = {{Induced Disjoint Paths Without an Induced Minor}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {4:1--4:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.4},
URN = {urn:nbn:de:0030-drops-233813},
doi = {10.4230/LIPIcs.ICALP.2025.4},
annote = {Keywords: Induced Disjoint Paths, string graphs, induced subdivisions, induced minors}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Deeksha Adil, Shunhua Jiang, and Rasmus Kyng
Abstract
We propose a randomized multiplicative weight update (MWU) algorithm for 𝓁_{∞} regression that runs in Õ(n^{2+1/22.5} poly(1/ε)) time when ω = 2+o(1), improving upon the previous best Õ(n^{2+1/18} polylog(1/ε)) runtime in the low-accuracy regime. Our algorithm combines state-of-the-art inverse maintenance data structures with acceleration. In order to do so, we propose a novel acceleration scheme for MWU that exhibits stability and robustness, which are required for the efficient implementations of the inverse maintenance data structures.
We also design a faster deterministic MWU algorithm that runs in Õ(n^{2+1/12} poly(1/ε)) time when ω = 2+o(1), improving upon the previous best Õ(n^{2+1/6} polylog(1/ε)) runtime in the low-accuracy regime. We achieve this by showing a novel stability result that goes beyond previously known works based on interior point methods (IPMs).
Our work is the first to use acceleration and inverse maintenance together efficiently, finally making the two most important building blocks of modern structured convex optimization compatible.
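For orientation, the toy sketch below shows only the bare multiplicative-weight-update template for 𝓁_∞ regression (reweight rows by their residuals, re-solve a weighted least-squares subproblem). It omits the acceleration and inverse-maintenance machinery that the paper is actually about, and no runtime or accuracy guarantee is claimed for it; all parameter choices and names are illustrative.

```python
import numpy as np

def linf_regression_mwu(A, b, eta=0.5, iters=200):
    """Toy MWU-style heuristic for min_x ||Ax - b||_inf.

    Maintains multiplicative weights over the m rows; each round solves a
    weighted least-squares subproblem and boosts the weights of rows with
    large residuals. Illustrative only -- not the accelerated algorithm of
    the paper, and no approximation guarantee is claimed here.
    """
    m, n = A.shape
    w = np.ones(m) / m
    best_x, best_val = None, np.inf
    for _ in range(iters):
        # Weighted least-squares "best response" to the current row weights.
        W = np.sqrt(w)[:, None]
        x, *_ = np.linalg.lstsq(W * A, W[:, 0] * b, rcond=None)
        res = np.abs(A @ x - b)
        if res.max() < best_val:
            best_x, best_val = x, res.max()
        # Multiplicative update: rows with large residual get more weight.
        w *= np.exp(eta * res / (res.max() + 1e-12))
        w /= w.sum()
    return best_x, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
    x, val = linf_regression_mwu(A, b)
    print("l_inf residual:", val)
```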
Cite as
Deeksha Adil, Shunhua Jiang, and Rasmus Kyng. Acceleration Meets Inverse Maintenance: Faster 𝓁_∞-Regression. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 5:1-5:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{adil_et_al:LIPIcs.ICALP.2025.5,
author = {Adil, Deeksha and Jiang, Shunhua and Kyng, Rasmus},
title = {{Acceleration Meets Inverse Maintenance: Faster 𝓁\_∞-Regression}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {5:1--5:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.5},
URN = {urn:nbn:de:0030-drops-233823},
doi = {10.4230/LIPIcs.ICALP.2025.5},
annote = {Keywords: Regression, Inverse Maintenance, Multiplicative Weights Update}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Deeksha Adil and Thatchaphol Saranurak
Abstract
We present a dynamic algorithm for maintaining a (1+ε)-approximate maximum eigenvector and eigenvalue of a positive semi-definite matrix A undergoing decreasing updates, i.e., updates which may only decrease eigenvalues. Given an update vector v (so that A ← A - vv^⊤), our algorithm takes Õ(nnz(v)) amortized update time, i.e., polylogarithmic time per non-zero entry of the update vector.
Our technique is based on a novel analysis of the influential power method in the dynamic setting. The two previous sets of techniques have the following drawbacks: (1) algebraic techniques can maintain exact solutions, but their update time is at least polynomial per non-zero entry, and (2) sketching techniques admit polylogarithmic update time but suffer from a crude additive approximation.
Our algorithm assumes an oblivious adversary. Interestingly, we show that any algorithm with polylogarithmic update time per non-zero entry that works against an adaptive adversary and satisfies an additional natural property would imply a breakthrough for checking positive semi-definiteness of matrices in Õ(n²) time, instead of O(n^ω) time.
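As background, the static power method that the dynamic analysis builds on can be written in a few lines. The sketch below simply recomputes the top eigenpair from scratch after a downdate A ← A - vv^⊤, which is exactly the expensive behaviour the paper's data structure avoids; tolerances and names are illustrative.

```python
import numpy as np

def power_method(A, eps=1e-6, max_iters=10_000, rng=None):
    """Classical power method: approximate top eigenvalue/eigenvector of a PSD matrix A."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iters):
        y = A @ x
        new_lam = x @ y                    # Rayleigh quotient
        x = y / (np.linalg.norm(y) + 1e-15)
        if abs(new_lam - lam) <= eps * max(new_lam, 1e-15):
            break
        lam = new_lam
    return lam, x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    B = rng.standard_normal((6, 6))
    A = B @ B.T                            # PSD matrix
    lam, _ = power_method(A)
    v = 0.1 * rng.standard_normal(6)
    A_down = A - np.outer(v, v)            # small decreasing update (assumed to keep A PSD)
    print(lam, power_method(A_down)[0])
```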
Cite as
Deeksha Adil and Thatchaphol Saranurak. Decremental (1+ε)-Approximate Maximum Eigenvector: Dynamic Power Method. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{adil_et_al:LIPIcs.ICALP.2025.6,
author = {Adil, Deeksha and Saranurak, Thatchaphol},
title = {{Decremental (1+\epsilon)-Approximate Maximum Eigenvector: Dynamic Power Method}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {6:1--6:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.6},
URN = {urn:nbn:de:0030-drops-233834},
doi = {10.4230/LIPIcs.ICALP.2025.6},
annote = {Keywords: Power Method, Dynamic Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Panagiotis Aivasiliotis, Andreas Göbel, Marc Roth, and Johannes Schmitt
Abstract
We investigate the complexity of parameterised holant problems p-Holant(𝒮) for families of symmetric signatures 𝒮. The parameterised holant framework was introduced by Curticapean in 2015 as a counterpart to the classical and well-established theory of holographic reductions and algorithms, and it constitutes an extensive family of coloured and weighted counting constraint satisfaction problems on graph-like structures, encoding as special cases various well-studied counting problems in parameterised and fine-grained complexity theory, such as counting edge-colourful k-matchings, graph factors, Eulerian orientations or, more generally, subgraphs with weighted degree constraints. We establish an exhaustive complexity trichotomy along the set of signatures 𝒮: Depending on the signatures, p-Holant(𝒮) is either
1) solvable in "FPT-near-linear time", i.e., in time f(k)⋅ 𝒪̃(|x|), or
2) solvable in "FPT-matrix-multiplication time", i.e., in time f(k)⋅ 𝒪(n^{ω}), where n is the number of vertices of the underlying graph, but not solvable in FPT-near-linear time, unless the Triangle Conjecture fails, or
3) #W[1]-complete and no significant improvement over the naive brute force algorithm is possible unless the Exponential Time Hypothesis fails. This classification reveals a significant and surprising gap in the complexity landscape of parameterised Holants: Not only is every instance either fixed-parameter tractable or #W[1]-complete, but additionally, every FPT instance is solvable in time (at most) f(k)⋅ 𝒪(n^{ω}). We show that there are infinitely many instances of each of the types; for example, all constant signatures yield holant problems of type (1), and the problem of counting edge-colourful k-matchings modulo p is of type (p) for p ∈ {2,3}.
Finally, we also establish a complete classification for a natural uncoloured version of parameterised holant problem p-UnColHolant(𝒮), which encodes as special cases the non-coloured analogues of the aforementioned examples. We show that the complexity of p-UnColHolant(𝒮) is different: Depending on 𝒮 all instances are either solvable in FPT-near-linear time, or #W[1]-complete, that is, there are no instances of type (2).
Cite as
Panagiotis Aivasiliotis, Andreas Göbel, Marc Roth, and Johannes Schmitt. Parameterised Holant Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{aivasiliotis_et_al:LIPIcs.ICALP.2025.7,
author = {Aivasiliotis, Panagiotis and G\"{o}bel, Andreas and Roth, Marc and Schmitt, Johannes},
title = {{Parameterised Holant Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {7:1--7:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.7},
URN = {urn:nbn:de:0030-drops-233842},
doi = {10.4230/LIPIcs.ICALP.2025.7},
annote = {Keywords: holant problems, counting problems, parameterised algorithms, fine-grained complexity theory, homomorphisms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Hessa Al-Thani and Viswanath Nagarajan
Abstract
We study a fundamental stochastic selection problem involving n independent random variables, each of which can be queried at some cost. Given a tolerance level δ, the goal is to find a δ-approximately minimum (or maximum) value over all the random variables, at minimum expected cost. A solution to this problem is an adaptive sequence of queries, where the choice of the next query may depend on previously-observed values. Two variants arise, depending on whether the goal is to find a δ-minimum value or a δ-minimizer. When all query costs are uniform, we provide a 4-approximation algorithm for both variants. When query costs are non-uniform, we provide a 5.83-approximation algorithm for the δ-minimum value and a 7.47-approximation for the δ-minimizer. All our algorithms rely on non-adaptive policies (that perform a fixed sequence of queries), so we also upper bound the corresponding "adaptivity" gaps. Our analysis relates the stopping probabilities in the algorithm and optimal policies, where a key step is in proving and using certain stochastic dominance properties.
Cite as
Hessa Al-Thani and Viswanath Nagarajan. Identifying Approximate Minimizers Under Stochastic Uncertainty. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{althani_et_al:LIPIcs.ICALP.2025.8,
author = {Al-Thani, Hessa and Nagarajan, Viswanath},
title = {{Identifying Approximate Minimizers Under Stochastic Uncertainty}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {8:1--8:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.8},
URN = {urn:nbn:de:0030-drops-233854},
doi = {10.4230/LIPIcs.ICALP.2025.8},
annote = {Keywords: Approximation algorithms, stochastic optimization, selection problem}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Jonathan Allcock, Jinge Bao, Aleksandrs Belovs, Troy Lee, and Miklos Santha
Abstract
In this work, we initiate a systematic study of the time complexity of quantum divide and conquer (QD&C) algorithms for classical problems, and propose a general framework for their analysis. We establish generic conditions under which search and minimization problems with classical divide and conquer algorithms are amenable to quantum speedup, and apply these theorems to various problems involving strings, integers, and geometric objects. These include Longest Distinct Substring, Klee's Coverage, several optimization problems on stock transactions, and k-Increasing Subsequence. For most of these problems our quantum time upper bounds match the quantum query lower bounds, up to polylogarithmic factors.
We give a structured framework for describing and classifying a wide variety of QD&C algorithms so that quantum speedups can be more easily identified and applied, and prove general statements on QD&C time complexity covering a range of cases, accounting for the time required for all operations. In particular, we explicitly account for memory access operations in the commonly used QRAM (read-only) and QRAG (read-write) models, which are assumed to take unit time in the query model, and which require careful analysis when involved in recursion.
Our generic QD&C theorems have several nice features.
1) To apply them, it suffices to come up with a classical divide and conquer algorithm satisfying the conditions of the theorem. The quantization of the algorithm is then completely handled by the theorem. This can make it easier to find applications which admit a quantum speedup, and contrasts with dynamic programming algorithms, which can be difficult to quantize due to their highly sequential nature.
2) As these theorems give bounds on time complexity, they can be applied to a greater range of problems than those based on query complexity, e.g., where the best-known quantum algorithms require super-linear time.
3) They can handle minimization problems as well as Boolean functions, which allows us to improve on the query complexity result of Childs et al. [Childs et al., 2025] for k-Increasing Subsequence by a logarithmic factor.
Cite as
Jonathan Allcock, Jinge Bao, Aleksandrs Belovs, Troy Lee, and Miklos Santha. On the Quantum Time Complexity of Divide and Conquer. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 9:1-9:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{allcock_et_al:LIPIcs.ICALP.2025.9,
author = {Allcock, Jonathan and Bao, Jinge and Belovs, Aleksandrs and Lee, Troy and Santha, Miklos},
title = {{On the Quantum Time Complexity of Divide and Conquer}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {9:1--9:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.9},
URN = {urn:nbn:de:0030-drops-233863},
doi = {10.4230/LIPIcs.ICALP.2025.9},
annote = {Keywords: Quantum Computing, Quantum Algorithms, Divide and Conquer}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Aida Aminian, Shahin Kamali, Seyed-Mohammad Seyed-Javadi, and Sumedha
Abstract
In Telephone Broadcasting, the goal is to disseminate a message from a given source vertex of an input graph to all other vertices in the minimum number of rounds, where in each round an informed vertex can send the message to at most one of its uninformed neighbors. For general graphs on n vertices, the problem is NP-complete, and the best existing algorithm has an approximation factor of 𝒪(log n / log log n). Whether a constant-factor approximation exists for general graphs remains open.
In this paper, we study the problem in two simple families of sparse graphs, namely, cacti and graphs of bounded pathwidth. There have been several efforts to understand the complexity of the problem in cactus graphs, mostly establishing the presence of polynomial-time solutions for restricted families of cactus graphs (e.g., [Čevnik and Žerovnik, 2017; Ehresmann, 2021; Harutyunyan et al., 2009; Harutyunyan and Maraachlian, 2007; Harutyunyan and Maraachlian, 2008; Harutyunyan et al., 2023]). Despite these efforts, the complexity of the problem in arbitrary cactus graphs remained open. We settle this question by establishing the NP-completeness of telephone broadcasting in cactus graphs. For that, we show the problem is NP-complete in a simple subfamily of cactus graphs, which we call snowflake graphs. These graphs are not only cacti but also have pathwidth 2. These results establish that, despite being polynomial-time solvable in trees, the problem becomes NP-complete in very simple extensions of trees.
On the positive side, we present constant-factor approximation algorithms for the studied families of graphs, namely, an algorithm with an approximation factor of 2 for cactus graphs and an approximation factor of 𝒪(1) for graphs of bounded pathwidth.
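For context on the contrast drawn above, the folklore polynomial-time algorithm for trees is short: a vertex should inform its children in decreasing order of their own (recursively computed) broadcast times. A minimal sketch with illustrative names; this is not the approximation algorithm of the paper.

```python
def broadcast_time(tree: dict[int, list[int]], root: int) -> int:
    """Minimum number of rounds to broadcast from `root` in a tree.

    Classical bottom-up rule: a vertex serves its children in decreasing
    order of their own broadcast times; the i-th child served (1-indexed)
    finishes after i + t(child) rounds.
    """
    def t(v: int, parent: int) -> int:
        child_times = sorted((t(c, v) for c in tree[v] if c != parent), reverse=True)
        return max((i + ct for i, ct in enumerate(child_times, start=1)), default=0)
    return t(root, -1)

if __name__ == "__main__":
    # A star with 4 leaves: the centre informs one leaf per round -> 4 rounds.
    star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
    print(broadcast_time(star, 0))  # 4
```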
Cite as
Aida Aminian, Shahin Kamali, Seyed-Mohammad Seyed-Javadi, and Sumedha. On the Complexity of Telephone Broadcasting from Cacti to Bounded Pathwidth Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{aminian_et_al:LIPIcs.ICALP.2025.10,
author = {Aminian, Aida and Kamali, Shahin and Seyed-Javadi, Seyed-Mohammad and Sumedha},
title = {{On the Complexity of Telephone Broadcasting from Cacti to Bounded Pathwidth Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {10:1--10:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.10},
URN = {urn:nbn:de:0030-drops-233874},
doi = {10.4230/LIPIcs.ICALP.2025.10},
annote = {Keywords: Telephone Broadcasting, Approximation Algorithms, NP-Hardness, Graph Pathwidth, Cactus Graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Prashanth Amireddy, Amik Raj Behera, Srikanth Srinivasan, and Madhu Sudan
Abstract
The celebrated Ore-DeMillo-Lipton-Schwartz-Zippel (ODLSZ) lemma asserts that n-variate non-zero polynomial functions of degree d over a field 𝔽 are non-zero over any "grid" (points of the form Sⁿ for a finite subset S ⊆ 𝔽) with probability at least max{|S|^{-d/(|S|-1)}, 1-d/|S|} over the choice of a random point from the grid. In particular, over the Boolean cube (S = {0,1} ⊆ 𝔽), the lemma asserts non-zero polynomials are non-zero with probability at least 2^{-d}. In this work we extend the ODLSZ lemma optimally (up to lower-order terms) to "Boolean slices", i.e., points of Hamming weight exactly k. We show that non-zero polynomials on the slice are non-zero with probability (t/n)^{d}(1 - o_{n}(1)), where t = min{k,n-k}, for every d ≤ k ≤ (n-d). As with the ODLSZ lemma, our results extend to polynomials over Abelian groups. This bound is tight up to the error term, as evidenced by multilinear monomials of degree d, and it is also the case that some corrective term is necessary. A particularly interesting case is the "balanced slice" (k = n/2), where our lemma asserts that non-zero polynomials are non-zero with roughly the same probability on the slice as on the whole cube.
The behaviour of low-degree polynomials over Boolean slices has received much attention in recent years. However, the problem of proving a tight version of the ODLSZ lemma does not seem to have been considered before, except for a recent work of Amireddy, Behera, Paraashar, Srinivasan and Sudan (SODA 2025), who established a sub-optimal bound of approximately ((k/n)⋅ (1-(k/n)))^d using a proof similar to that of the standard ODLSZ lemma.
While the statement of our result mimics that of the ODLSZ lemma, our proof is significantly more intricate and involves spectral reasoning which is employed to show that a natural way of embedding a copy of the Boolean cube inside a balanced Boolean slice is a good sampler.
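The tightness example mentioned in the abstract can be made concrete: on the weight-k slice, the degree-d monomial x₁⋯x_d is non-zero on exactly binom(n-d, k-d) of the binom(n, k) points, i.e., with probability ∏_{i<d}(k-i)/(n-i), which approaches (k/n)^d for fixed d as k and n grow. A small illustrative calculation (not from the paper):

```python
from math import comb

def monomial_slice_probability(n: int, k: int, d: int) -> float:
    """Probability that x_1*...*x_d is non-zero on a uniformly random
    point of Hamming weight exactly k in {0,1}^n."""
    return comb(n - d, k - d) / comb(n, k)

if __name__ == "__main__":
    n, d = 40, 3
    for k in (n // 10, n // 4, n // 2):   # for these k, t = min(k, n-k) = k
        p = monomial_slice_probability(n, k, d)
        print(f"k={k:2d}: exact={p:.5f}   (k/n)^d={(k / n) ** d:.5f}")
```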
Cite as
Prashanth Amireddy, Amik Raj Behera, Srikanth Srinivasan, and Madhu Sudan. A Near-Optimal Polynomial Distance Lemma over Boolean Slices. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{amireddy_et_al:LIPIcs.ICALP.2025.11,
author = {Amireddy, Prashanth and Behera, Amik Raj and Srinivasan, Srikanth and Sudan, Madhu},
title = {{A Near-Optimal Polynomial Distance Lemma over Boolean Slices}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {11:1--11:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.11},
URN = {urn:nbn:de:0030-drops-233881},
doi = {10.4230/LIPIcs.ICALP.2025.11},
annote = {Keywords: Low-degree polynomials, Boolean slices, Schwartz-Zippel Lemma}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Aditya Anand, Euiwoong Lee, Jason Li, and Thatchaphol Saranurak
Abstract
Given a directed graph G with n vertices and m edges, a parameter k, and two disjoint subsets S,T ⊆ V(G), we show that the number of all-subsets important separators, which is the number of A-B important vertex separators of size at most k over all A ⊆ S and B ⊆ T, is at most β(|S|, |T|, k) = 4^k binom(|S|, ≤ k) binom(|T|, ≤ 2k), where binom(x, ≤ c) = ∑_{i = 1}^c binom(x,i), and that they can be enumerated in time 𝒪(β(|S|,|T|,k)k²(m+n)). This is a generalization of the folklore result stating that the number of A-B important separators for two fixed sets A and B is at most 4^k (first shown implicitly by Chen, Liu, and Lu [Algorithmica '09]). From this result, we obtain the following applications:
1) We give a construction for detection sets and sample sets in directed graphs, generalizing the results of Kleinberg (Internet Mathematics '03) and Feige and Mahdian (STOC '06) to directed graphs.
2) Via our new sample sets, we give the first FPT algorithm for finding balanced separators in directed graphs parameterized by k, the size of the separator. Our algorithm runs in time 2^{𝒪(k)} ⋅ (m + n).
3) Additionally, we show an 𝒪(√{log k})-approximation algorithm for finding balanced separators in directed graphs in polynomial time. This improves the best known approximation guarantee of 𝒪(√{log n}) and matches the known guarantee in undirected graphs by Feige, Hajiaghayi, and Lee (SICOMP '08).
4) Finally, using our algorithm for listing all-subsets important separators, we give a deterministic construction of vertex cut sparsifiers in directed graphs when we are interested in preserving min-cuts of size up to c between bipartitions of the terminal set. Our algorithm constructs a sparsifier of size 𝒪(binom(t, ≤ 3c)2^{𝒪(c)}) and runs in time 𝒪(binom(t, ≤ 3c) 2^{𝒪(c)}(m + n)), where t is the number of terminals, and the sparsifier additionally preserves the set of important separators of size at most c between bipartitions of the terminals.
Cite as
Aditya Anand, Euiwoong Lee, Jason Li, and Thatchaphol Saranurak. All-Subsets Important Separators with Applications to Sample Sets, Balanced Separators and Vertex Sparsifiers in Directed Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{anand_et_al:LIPIcs.ICALP.2025.12,
author = {Anand, Aditya and Lee, Euiwoong and Li, Jason and Saranurak, Thatchaphol},
title = {{All-Subsets Important Separators with Applications to Sample Sets, Balanced Separators and Vertex Sparsifiers in Directed Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {12:1--12:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.12},
URN = {urn:nbn:de:0030-drops-233892},
doi = {10.4230/LIPIcs.ICALP.2025.12},
annote = {Keywords: directed graphs, important separators, sample sets, balanced separators}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Simon Apers, Minbo Gao, Zhengfeng Ji, and Chenghua Liu
Abstract
We present a quantum algorithm for sampling random spanning trees from a weighted graph in Õ(√{mn}) time, where n and m denote the number of vertices and edges, respectively. Our algorithm has sublinear runtime for dense graphs and achieves a quantum speedup over the best-known classical algorithm, which runs in Õ(m) time. The approach carefully combines, on one hand, a classical method based on "large-step" random walks for reduced mixing time and, on the other hand, quantum algorithmic techniques, including quantum graph sparsification and a sampling-without-replacement variant of Hamoudi’s multiple-state preparation. We also establish a matching lower bound, proving the optimality of our algorithm up to polylogarithmic factors. These results highlight the potential of quantum computing in accelerating fundamental graph sampling problems.
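As a point of comparison for the object being sampled, the textbook Aldous-Broder random-walk sampler below returns a uniformly random spanning tree of a connected unweighted graph. It is far slower than both the Õ(m) classical algorithm and the Õ(√{mn}) quantum algorithm discussed above, and it does not handle edge weights; it is shown only to make the sampling task concrete.

```python
import random

def aldous_broder(adj: dict[int, list[int]], start: int, rng=random) -> set[tuple[int, int]]:
    """Uniformly random spanning tree of a connected, unweighted graph (Aldous-Broder).

    Walk randomly; the first edge used to enter each new vertex joins the tree.
    """
    visited = {start}
    tree = set()
    u = start
    while len(visited) < len(adj):
        v = rng.choice(adj[u])
        if v not in visited:
            visited.add(v)
            tree.add((min(u, v), max(u, v)))
        u = v
    return tree

if __name__ == "__main__":
    # 4-cycle: each spanning tree omits exactly one of the 4 edges.
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(aldous_broder(cycle, 0))
```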
Cite as
Simon Apers, Minbo Gao, Zhengfeng Ji, and Chenghua Liu. Quantum Speedup for Sampling Random Spanning Trees. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 13:1-13:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{apers_et_al:LIPIcs.ICALP.2025.13,
author = {Apers, Simon and Gao, Minbo and Ji, Zhengfeng and Liu, Chenghua},
title = {{Quantum Speedup for Sampling Random Spanning Trees}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {13:1--13:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.13},
URN = {urn:nbn:de:0030-drops-233907},
doi = {10.4230/LIPIcs.ICALP.2025.13},
annote = {Keywords: Quantum Computing, Quantum Algorithms, Random Spanning Trees}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Per Austrin, Ioana O. Bercea, Mayank Goswami, Nutan Limaye, and Adarsh Srinivasan
Abstract
Given a k-CNF formula and an integer s ≥ 2, we study algorithms that obtain s solutions to the formula that are as dispersed as possible. For s = 2, this problem of computing the diameter of a k-CNF formula was initiated by Crescenzi and Rossi, who showed strong hardness results even for k = 2. The current best upper bound [Angelsmark and Thapper '04] goes to 4ⁿ as k → ∞. As our first result, we show that this quadratic blow-up is not necessary by utilizing the Fast Fourier Transform (FFT) to give an O^*(2ⁿ)-time exact algorithm for computing the diameter of any k-CNF formula.
For s > 2, the problem was raised in the SAT community (Nadel '11) and several heuristics have been proposed for it, but no algorithms with theoretical guarantees are known. We give exact algorithms using FFT and clique-finding that run in O^*(2^{(s-1)n}) and O^*(s² |Ω_{𝐅}|^{ω ⌈ s/3 ⌉}) respectively, where |Ω_{𝐅}| is the size of the solution space of the formula 𝐅 and ω is the matrix multiplication exponent.
However, current SAT algorithms for finding one solution run in time O^*(2^{ε_{k}n}) for ε_{k} ≈ 1-Θ(1/k), which is much faster than all of the above running times. As our main result, we analyze two popular SAT algorithms, PPZ (Paturi, Pudlák, Zane '97) and Schöning’s ('02), and show that in time poly(s)⋅O^*(2^{ε_{k}n}) they can be used to approximate the diameter as well as the dispersion (s > 2) problem. While we need to modify Schöning’s original algorithm for technical reasons, we show that the PPZ algorithm, without any modification, samples solutions in a geometric sense. We believe this geometric sampling property of PPZ may be of independent interest.
Finally, we focus on diverse solutions to NP-complete optimization problems, and give bi-approximations running in time poly(s)O^*(2^{ε n}) with ε < 1 for several problems such as Maximum Independent Set, Minimum Vertex Cover, Minimum Hitting Set, Feedback Vertex Set, Multicut on Trees and Interval Vertex Deletion. For all of these problems, all existing exact methods for finding optimal diverse solutions have a runtime with at least an exponential dependence on the number of solutions s. Our methods show that by relaxing to bi-approximations, this dependence on s can be made polynomial.
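One standard way to realize the FFT idea over the Boolean cube is via the Walsh-Hadamard transform: the XOR-convolution of the solution-set indicator with itself records which difference vectors are realized by some pair of solutions, and the diameter is the largest Hamming weight among them. The sketch below illustrates this O^*(2ⁿ) transform trick on an explicitly given solution set; it is an illustration of the general technique under that interpretation, not the paper's algorithm or its dispersion extensions.

```python
import numpy as np

def wht(a: np.ndarray) -> np.ndarray:
    """Unnormalized Walsh-Hadamard transform; length of `a` must be a power of two."""
    a = a.astype(np.int64)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def diameter(solutions: list[int], n: int) -> int:
    """Max Hamming distance between two solutions, via XOR-convolution of the indicator."""
    f = np.zeros(1 << n, dtype=np.int64)
    for s in solutions:
        f[s] = 1
    conv = wht(f) ** 2
    conv = wht(conv) // (1 << n)          # inverse WHT = WHT / 2^n
    realized = np.nonzero(conv)[0]        # z such that some pair x, x^z are both solutions
    return max(bin(int(z)).count("1") for z in realized)

if __name__ == "__main__":
    # Solutions of (x1 or x2) and (not x1 or x3), encoded as bitmasks x1x2x3.
    sols = [0b010, 0b011, 0b101, 0b111]
    print(diameter(sols, 3))  # 3, e.g. distance between 010 and 101
```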
Cite as
Per Austrin, Ioana O. Bercea, Mayank Goswami, Nutan Limaye, and Adarsh Srinivasan. Algorithms for the Diverse-k-SAT Problem: The Geometry of Satisfying Assignments. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{austrin_et_al:LIPIcs.ICALP.2025.14,
author = {Austrin, Per and Bercea, Ioana O. and Goswami, Mayank and Limaye, Nutan and Srinivasan, Adarsh},
title = {{Algorithms for the Diverse-k-SAT Problem: The Geometry of Satisfying Assignments}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {14:1--14:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.14},
URN = {urn:nbn:de:0030-drops-233916},
doi = {10.4230/LIPIcs.ICALP.2025.14},
annote = {Keywords: Exponential time algorithms, Satisfiability, k-SAT, PPZ, Sch\"{o}ning, Dispersion, Diversity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Christine Awofeso, Patrick Greaves, Oded Lachish, Amit Levi, and Felix Reidl
Abstract
We study C_k-freeness in sparse graphs from a property testing perspective, specifically for graph classes with bounded r-admissibility. Our work is motivated by the large gap between upper and lower bounds in this area: C_k-freeness is known to be testable in planar graphs [Czumaj and Sohler, 2019], but not in graphs with bounded arboricity for k > 3 [Talya Eden et al., 2024]. There are a large number of interesting graph classes that include planar graphs and have bounded arboricity (e.g. classes excluding a minor), calling for a more fine-grained approach to the question of testing C_k-freeness in sparse graph classes.
One such approach, inspired by the work of Nešetřil and Ossona de Mendez [Nešetřil and Ossona de Mendez, 2012], is to consider the graph measure of r-admissibility, which naturally forms a hierarchy of graph families A₁ ⊃ A₂ ⊃ … ⊃ A_∞, where A_r contains all graph classes whose r-admissibility is bounded by some constant. The family A₁ contains classes with bounded arboricity, while the class A_∞ contains classes like planar graphs, graphs of bounded degree, and minor-free graphs. Awofeso et al. [Awofeso et al., 2025] recently made progress in this direction. They showed that C₄- and C₅-freeness is testable in A₂. They further showed that C_k-freeness is not testable in A_{⌊k/2⌋ -1} and conjectured that C_k-freeness is testable in A_{⌊k/2⌋}. In this work, we prove this conjecture: C_k-freeness is indeed testable in graphs of bounded ⌊k/2⌋-admissibility.
Cite as
Christine Awofeso, Patrick Greaves, Oded Lachish, Amit Levi, and Felix Reidl. Testing C_k-Freeness in Bounded Admissibility Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 15:1-15:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{awofeso_et_al:LIPIcs.ICALP.2025.15,
author = {Awofeso, Christine and Greaves, Patrick and Lachish, Oded and Levi, Amit and Reidl, Felix},
title = {{Testing C\_k-Freeness in Bounded Admissibility Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {15:1--15:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.15},
URN = {urn:nbn:de:0030-drops-233926},
doi = {10.4230/LIPIcs.ICALP.2025.15},
annote = {Keywords: Property Testing, Sparse Graphs, Cycle, Admissibility}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alkida Balliu, Mohsen Ghaffari, Fabian Kuhn, Augusto Modanese, Dennis Olivetti, Mikaël Rabie, Jukka Suomela, and Jara Uitto
Abstract
By prior work, we have many wonderful results related to distributed graph algorithms for problems that can be defined with local constraints; the formal framework used in prior work is locally checkable labeling problems (LCLs), introduced by Naor and Stockmeyer in the 1990s. It is known, for example, that if we have a deterministic algorithm that solves an LCL in o(log n) rounds, we can speed it up to O(log^* n) rounds, and if we have a randomized algorithm that solves an LCL in O(log^* n) rounds, we can derandomize it for free.
It is also known that randomness helps with some LCL problems: there are LCL problems with randomized complexity Θ(log log n) and deterministic complexity Θ(log n). However, so far there have not been any LCL problems in which the use of shared randomness has been necessary; in all prior algorithms it has been enough that the nodes have access to their own private sources of randomness.
Could it be the case that shared randomness never helps with LCLs? Could we have a general technique that takes any distributed graph algorithm for any LCL that uses shared randomness, and turns it into an equally fast algorithm where private randomness is enough?
In this work we show that the answer is no. We present an LCL problem Π such that the round complexity of Π is Ω(√n) in the usual randomized LOCAL model (with private randomness), but if the nodes have access to a source of shared randomness, then the complexity drops to O(log n).
As corollaries, we also resolve several other open questions related to the landscape of distributed computing in the context of LCL problems. In particular, problem Π demonstrates that distributed quantum algorithms for LCL problems strictly benefit from a shared quantum state. Problem Π also gives a separation between finitely dependent distributions and non-signaling distributions.
Cite as
Alkida Balliu, Mohsen Ghaffari, Fabian Kuhn, Augusto Modanese, Dennis Olivetti, Mikaël Rabie, Jukka Suomela, and Jara Uitto. Shared Randomness Helps with Local Distributed Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{balliu_et_al:LIPIcs.ICALP.2025.16,
author = {Balliu, Alkida and Ghaffari, Mohsen and Kuhn, Fabian and Modanese, Augusto and Olivetti, Dennis and Rabie, Mika\"{e}l and Suomela, Jukka and Uitto, Jara},
title = {{Shared Randomness Helps with Local Distributed Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {16:1--16:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.16},
URN = {urn:nbn:de:0030-drops-233931},
doi = {10.4230/LIPIcs.ICALP.2025.16},
annote = {Keywords: Distributed computing, locally checkable labelings, shared randomness}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sayan Bandyapadhyay, William Lochet, Daniel Lokshtanov, Dániel Marx, Pranabendu Misra, Daniel Neuen, Saket Saurabh, Prafullkumar Tale, and Jie Xue
Abstract
We prove a robust contraction decomposition theorem for H-minor-free graphs, which states that given an H-minor-free graph G and an integer p, one can partition in polynomial time the vertices of G into p sets Z₁,… ,Z_p such that tw(G/(Z_i ⧵ Z')) = O(p + |Z'|) for all i ∈ [p] and Z' ⊆ Z_i. Here, tw(⋅) denotes the treewidth of a graph and G/(Z_i ⧵ Z') denotes the graph obtained from G by contracting all edges with both endpoints in Z_i ⧵ Z'.
Our result generalizes earlier results by Klein [SICOMP 2008] and Demaine et al. [STOC 2011] based on partitioning E(G), and some recent theorems for planar graphs by Marx et al. [SODA 2022], for bounded-genus graphs (more generally, almost-embeddable graphs) by Bandyapadhyay et al. [SODA 2022], and for unit-disk graphs by Bandyapadhyay et al. [SoCG 2022].
The robust contraction decomposition theorem directly results in parameterized algorithms with running time 2^{Õ(√k)} ⋅ n^{O(1)} or n^{O(√k)} for every vertex/edge deletion problem on H-minor-free graphs that can be formulated as Permutation CSP Deletion or 2-Conn Permutation CSP Deletion. Consequently, we obtain the first subexponential-time parameterized algorithms for Subset Feedback Vertex Set, Subset Odd Cycle Transversal, Subset Group Feedback Vertex Set, and 2-Conn Component Order Connectivity on H-minor-free graphs. For other problems which already have subexponential-time parameterized algorithms on H-minor-free graphs (e.g., Odd Cycle Transversal, Vertex Multiway Cut, Vertex Multicut), our theorem gives much simpler algorithms with the same running time.
Cite as
Sayan Bandyapadhyay, William Lochet, Daniel Lokshtanov, Dániel Marx, Pranabendu Misra, Daniel Neuen, Saket Saurabh, Prafullkumar Tale, and Jie Xue. Robust Contraction Decomposition for Minor-Free Graphs and Its Applications. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 17:1-17:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bandyapadhyay_et_al:LIPIcs.ICALP.2025.17,
author = {Bandyapadhyay, Sayan and Lochet, William and Lokshtanov, Daniel and Marx, D\'{a}niel and Misra, Pranabendu and Neuen, Daniel and Saurabh, Saket and Tale, Prafullkumar and Xue, Jie},
title = {{Robust Contraction Decomposition for Minor-Free Graphs and Its Applications}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {17:1--17:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.17},
URN = {urn:nbn:de:0030-drops-233948},
doi = {10.4230/LIPIcs.ICALP.2025.17},
annote = {Keywords: subexponential time algorithms, graph decomposition, planar graphs, minor-free graphs, graph contraction}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kiril Bangachev and S. Matthew Weinberg
Abstract
For a set M of m elements, we define a decreasing chain of classes of normalized monotone-increasing valuation functions from 2^M to ℝ_{≥ 0}, parameterized by an integer q ∈ [2,m]. For a given q, we refer to the class as q-partitioning. A valuation function is subadditive if and only if it is 2-partitioning, and fractionally subadditive if and only if it is m-partitioning. Thus, our chain establishes an interpolation between subadditive and fractionally subadditive valuations. We show that this interpolation is smooth (q-partitioning valuations are "nearly" (q-1)-partitioning in a precise sense, Theorem 6), interpretable (the definition arises by analyzing the core of a cost-sharing game, à la the Bondareva-Shapley Theorem for fractionally subadditive valuations, Section 3.1), and non-trivial (the class of q-partitioning valuations is distinct for all q, Proposition 3).
For domains where provable separations exist between subadditive and fractionally subadditive, we interpolate the stronger guarantees achievable for fractionally subadditive valuations to all q ∈ {2,…, m}. Two highlights are the following:
1) An Ω ((log log q)/(log log m))-competitive posted price mechanism for q-partitioning valuations. Note that this matches asymptotically the state-of-the-art for both subadditive (q = 2) [Paul Dütting et al., 2020], and fractionally subadditive (q = m) [Feldman et al., 2015].
2) Two upper-tail concentration inequalities on 1-Lipschitz, q-partitioning valuations over independent items. One extends the state-of-the-art for q = m to q < m; the other improves upon the state-of-the-art for q = 2 when q > 2. Our concentration inequalities imply several corollaries that interpolate between subadditive and fractionally subadditive, for example: 𝔼[v(S)] ≤ (1 + 1/log q)Median[v(S)] + O(log q). To prove this, we develop a new isoperimetric inequality using Talagrand’s method of control by q points, which may be of independent interest.
We also discuss other probabilistic inequalities and game-theoretic applications of q-partitioning valuations, and connections to subadditive MPH-k valuations [Tomer Ezra et al., 2019].
Cite as
Kiril Bangachev and S. Matthew Weinberg. q-Partitioning Valuations: Exploring the Space Between Subadditive and Fractionally Subadditive Valuations. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 18:1-18:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bangachev_et_al:LIPIcs.ICALP.2025.18,
author = {Bangachev, Kiril and Weinberg, S. Matthew},
title = {{q-Partitioning Valuations: Exploring the Space Between Subadditive and Fractionally Subadditive Valuations}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {18:1--18:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.18},
URN = {urn:nbn:de:0030-drops-233956},
doi = {10.4230/LIPIcs.ICALP.2025.18},
annote = {Keywords: Subadditive Functions, Fractionally Subadditive Functions, Posted Price Mechanisms, Concentration Inequalities}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh
Abstract
The Maximum Submodular Matching (MSM) problem is a generalization of the classical Maximum Weight Matching (MWM) problem. In this problem, given a monotone submodular function f: 2^E → ℝ^{≥ 0} defined over subsets of edges of a graph G(V, E), we are asked to return a matching whose submodular value is maximum among all matchings in graph G(V, E). In this paper, we consider this problem in a fully dynamic setting against an oblivious adversary. In this setting, we are given a sequence 𝒮 of insertions and deletions of edges of the underlying graph G(V, E), along with an oracle access to the monotone submodular function f. The goal is to maintain a matching M such that, at any time t of sequence 𝒮, its submodular value is a good approximation of the value of the optimal submodular matching while keeping the number of operations minimal.
We develop the first dynamic algorithm for the submodular matching problem, which maintains a matching whose submodular value is, in expectation, within an (8 + ε)-approximation of the optimal submodular matching at any time t of sequence 𝒮, using expected amortized poly(log n, 1/ε) update time.
Our approach incorporates a range of novel techniques, notably the concept of Uniform Hierarchical Caches (UHC) data structure along with its invariants, which lead to the first algorithm for fully dynamic submodular matching and may be of independent interest for designing dynamic algorithms for other problems.
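As a minimal illustration of the fully dynamic model with oracle access (and only of the model: this is not the paper's UHC-based algorithm, and no approximation ratio is claimed), the Python sketch below processes a sequence of edge insertions and deletions and, after each update, naively recomputes a greedy matching using a value oracle. The oracle coverage_f, the helper greedy_matching, and the update sequence are illustrative assumptions.

```python
def coverage_f(edges):
    """Example monotone submodular oracle: number of distinct endpoints covered."""
    return len({v for e in edges for v in e})

def greedy_matching(edge_set, f):
    """Repeatedly add the edge with the largest marginal gain among edges
    whose endpoints are still unmatched.  No guarantee is claimed here."""
    matching, used = [], set()
    candidates = set(edge_set)
    while True:
        base = f(matching)
        best, best_gain = None, 0.0
        for (u, v) in candidates:
            if u in used or v in used:
                continue
            gain = f(matching + [(u, v)]) - base
            if gain > best_gain:
                best, best_gain = (u, v), gain
        if best is None:
            break
        matching.append(best)
        used.update(best)
        candidates.discard(best)
    return matching

# Fully dynamic sequence of insertions (+) and deletions (-) of edges.
updates = [('+', (1, 2)), ('+', (2, 3)), ('+', (3, 4)), ('-', (2, 3)), ('+', (4, 5))]
current = set()
for op, e in updates:
    if op == '+':
        current.add(e)
    else:
        current.discard(e)
    M = greedy_matching(current, coverage_f)
    print(op, e, '-> matching', M, 'value', coverage_f(M))
```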
Cite as
Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh. Dynamic Algorithms for Submodular Matching. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 19:1-19:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{banihashem_et_al:LIPIcs.ICALP.2025.19,
author = {Banihashem, Kiarash and Biabani, Leyla and Goudarzi, Samira and Hajiaghayi, MohammadTaghi and Jabbarzade, Peyman and Monemizadeh, Morteza},
title = {{Dynamic Algorithms for Submodular Matching}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {19:1--19:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.19},
URN = {urn:nbn:de:0030-drops-233969},
doi = {10.4230/LIPIcs.ICALP.2025.19},
annote = {Keywords: Matching, Submodular, Dynamic, Polylogarithmic}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ishan Bansal, Joe Cheriyan, Sanjeev Khanna, and Miles Simmons
Abstract
We present improved approximation algorithms for some problems in the related areas of Capacitated Network Design and Flexible Graph Connectivity.
In the Cap-k-ECSS problem, we are given a graph G = (V,E) whose edges have non-negative costs and positive integer capacities, and the goal is to find a minimum-cost edge-set F such that every non-trivial cut of the graph G' = (V,F) has capacity at least k. Let n = |V| and let u_{min} (respectively, u_{max}) denote the minimum (respectively, maximum) capacity of an edge; assume that u_{max} ≤ k. We present an O(log(k/u_{min}))-approximation algorithm for the Cap-k-ECSS problem, asymptotically improving upon the previous best approximation ratio of min(O(log n), k, 2u_{max}, 6 ⋅ ⌈k/u_{min}⌉) whenever log(k/u_{min}) = o(log n) and u_{max} is sufficiently large.
In the (p,q)-Flexible Graph Connectivity problem, denoted (p,q)-FGC, the input is a graph G = (V, E) where E is partitioned into safe and unsafe edges, and the goal is to find a minimum-cost edge-set F such that the subgraph G' = (V, F) remains p-edge connected upon removal of any q unsafe edges from F. We present an 8-approximation algorithm for the (1,q)-FGC problem that improves upon the previous best approximation ratio of (q+1).
Both of our results are obtained by using natural LP relaxations strengthened with the knapsack-cover inequalities, and then, during the rounding process, utilizing a recent O(1)-approximation algorithm for the Cover Small Cuts problem. In the latter problem, the goal is to find a minimum-cost set of links such that each non-trivial cut of capacity less than a specified value is covered by a link. We also show that the problem of covering small cuts inherently arises in another variant of (p,q)-FGC. Specifically, we give Cook reductions that preserve approximation ratios within O(1) factors between the (2,q)-FGC problem and the 2-Cover Small Cuts problem; in the latter problem, each small cut needs to be covered by two links.
Cite as
Ishan Bansal, Joe Cheriyan, Sanjeev Khanna, and Miles Simmons. Improved Approximation Algorithms for Capacitated Network Design and Flexible Graph Connectivity. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bansal_et_al:LIPIcs.ICALP.2025.20,
author = {Bansal, Ishan and Cheriyan, Joe and Khanna, Sanjeev and Simmons, Miles},
title = {{Improved Approximation Algorithms for Capacitated Network Design and Flexible Graph Connectivity}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {20:1--20:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.20},
URN = {urn:nbn:de:0030-drops-233973},
doi = {10.4230/LIPIcs.ICALP.2025.20},
annote = {Keywords: Approximation algorithms, Capacitated network design, Covering small cuts, Edge-connectivity of graphs, f-Connectivity problem, Flexible Graph Connectivity, Knapsack-cover inequalities}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Paul Beame and Michael Whitmeyer
Abstract
We prove several results concerning the communication complexity of a collision-finding problem, each of which has applications to the complexity of cutting-plane proofs, which make inferences based on integer linear inequalities.
In particular, we prove an Ω(n^{1-1/k} log k /2^k) lower bound on the k-party number-in-hand communication complexity of collision-finding. This implies a 2^{n^{1-o(1)}} lower bound on the size of tree-like cutting-planes refutations of the bit pigeonhole principle CNFs, which are compact and natural propositional encodings of the negation of the pigeonhole principle, improving on the best previous lower bound of 2^{Ω(√n)}. Using the method of density-restoring partitions, we also extend that previous lower bound to the full range of pigeonhole parameters.
Finally, using a refinement of a bottleneck-counting framework of Haken and Cook, and of Sokolov, for DAG-like communication protocols, we give a 2^{Ω(n^{1/4})} lower bound on the size of fully general (not necessarily tree-like) cutting planes refutations of the same bit pigeonhole principle formulas, improving on the best previous lower bound of 2^{Ω(n^{1/8})}.
Cite as
Paul Beame and Michael Whitmeyer. Multiparty Communication Complexity of Collision-Finding and Cutting Planes Proofs of Concise Pigeonhole Principles. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 21:1-21:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{beame_et_al:LIPIcs.ICALP.2025.21,
author = {Beame, Paul and Whitmeyer, Michael},
title = {{Multiparty Communication Complexity of Collision-Finding and Cutting Planes Proofs of Concise Pigeonhole Principles}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {21:1--21:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.21},
URN = {urn:nbn:de:0030-drops-233982},
doi = {10.4230/LIPIcs.ICALP.2025.21},
annote = {Keywords: Proof Complexity, Communication Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Amik Raj Behera, Nutan Limaye, Varun Ramanathan, and Srikanth Srinivasan
Abstract
In this work, we prove upper and lower bounds over fields of positive characteristic for several fragments of the Ideal Proof System (IPS), an algebraic proof system introduced by Grochow and Pitassi (J. ACM 2018). Our results extend the works of Forbes, Shpilka, Tzameret, and Wigderson (Theory of Computing 2021) and also of Govindasamy, Hakoniemi, and Tzameret (FOCS 2022). These works primarily focused on proof systems over fields of characteristic 0, and we are able to extend these results to positive characteristic.
The question of proving general IPS lower bounds over positive characteristic is motivated by the important question of proving AC⁰[p]-Frege lower bounds. This connection was observed by Grochow and Pitassi (J. ACM 2018). Additional motivation comes from recent developments in algebraic complexity theory due to Forbes (CCC 2024) who showed how to extend previous lower bounds over characteristic 0 to positive characteristic.
In our work, we adapt the functional lower bound method of Forbes et al. (Theory of Computing 2021) to prove exponential-size lower bounds for various subsystems of IPS. In order to establish these size lower bounds, we first prove a tight degree lower bound for a variant of Subset Sum over positive characteristic. This forms the core of all our lower bounds.
Additionally, we derive upper bounds for the instances presented above. We show that they have efficient constant-depth IPS refutations. This demonstrates that constant-depth IPS refutations are stronger than the proof systems considered above even in positive characteristic. We also show that constant-depth IPS can efficiently refute a general class of instances, namely all symmetric instances, thereby further uncovering the strength of these algebraic proofs in positive characteristic.
Cite as
Amik Raj Behera, Nutan Limaye, Varun Ramanathan, and Srikanth Srinivasan. New Bounds for the Ideal Proof System in Positive Characteristic. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{behera_et_al:LIPIcs.ICALP.2025.22,
author = {Behera, Amik Raj and Limaye, Nutan and Ramanathan, Varun and Srinivasan, Srikanth},
title = {{New Bounds for the Ideal Proof System in Positive Characteristic}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {22:1--22:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.22},
URN = {urn:nbn:de:0030-drops-233992},
doi = {10.4230/LIPIcs.ICALP.2025.22},
annote = {Keywords: Ideal Proof Systems, Algebraic Complexity, Positive Characteristic}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Omri Ben-Eliezer, Tomer Grossman, and Moni Naor
Abstract
Suppose you are given a function f: [n] → [n] via (black-box) query access to the function. You are looking to find something local, like a collision (a pair x ≠ y s.t. f(x) = f(y)). The question is whether knowing the "shape" of the function helps you or not (by shape we mean that some permutation of the function is known). Formally, we investigate the unlabeled instance optimality of substructure detection problems in graphs and functions. A problem is g(n)-instance optimal if it admits an algorithm A satisfying that for any possible input, the (randomized) query complexity of A is at most g(n) times larger than the query complexity of any algorithm A' which solves the same problem while holding an unlabeled copy of the input (i.e., any A' that "knows the structure of the input"). Our results point to a trichotomy of unlabeled instance optimality among substructure detection problems in graphs and functions:
- A few very simple properties have an O(1)-instance optimal algorithm.
- Most properties of graphs and functions, with examples such as containing a fixed point or a 3-collision in functions, or a triangle in graphs, are n^{c}-far from instance optimal for some constant c > 0.
- The problems of collision detection in functions and finding a claw in a graph serve as a middle ground between the two regimes. We show that these two properties are not Ω(log n)-instance optimal, and conjecture that this bound is tight. We provide evidence towards this conjecture, by proving that finding a claw in a graph is O(log(n))-instance optimal among all input graphs for which the query complexity of an algorithm holding an unlabeled certificate is O(√{n/(log n)}).
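For context on the black-box query model in this abstract (and not on the paper's instance-optimality analysis), the sketch below finds a collision in a function f: [n] → [n] by querying uniformly random points; by the birthday argument this takes on the order of √n queries in expectation when f is, say, a uniformly random function. All names are illustrative.

```python
import random

def find_collision(f, n, seed=0):
    """Query f at uniformly random points of [n] until two distinct queried
    points share a value.  When f has many collisions (e.g. a uniformly random
    function), the birthday bound gives O(sqrt(n)) queries in expectation.
    Note: this loops forever if f happens to be a permutation."""
    rng = random.Random(seed)
    seen = {}              # value -> some preimage we already queried
    queries = 0
    while True:
        x = rng.randrange(n)
        y = f(x)
        queries += 1
        if y in seen and seen[y] != x:
            return (seen[y], x), queries
        seen[y] = x

# A random function [n] -> [n], exposed only as a black box via a lambda.
n = 10_000
gen = random.Random(1)
table = [gen.randrange(n) for _ in range(n)]
(x1, x2), q = find_collision(lambda x: table[x], n)
assert table[x1] == table[x2] and x1 != x2
print(f"collision f({x1}) = f({x2}) after {q} queries (n = {n})")
```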
Cite as
Omri Ben-Eliezer, Tomer Grossman, and Moni Naor. On the Instance Optimality of Detecting Collisions and Subgraphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{beneliezer_et_al:LIPIcs.ICALP.2025.23,
author = {Ben-Eliezer, Omri and Grossman, Tomer and Naor, Moni},
title = {{On the Instance Optimality of Detecting Collisions and Subgraphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {23:1--23:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.23},
URN = {urn:nbn:de:0030-drops-234002},
doi = {10.4230/LIPIcs.ICALP.2025.23},
annote = {Keywords: instance optimality, instance complexity, unlabeled certificate, subgraph detection, collision detection}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Gal Beniamini and Nir Lavee
Abstract
We consider the well-studied pattern-counting problem: given a permutation π ∈ 𝕊_n and an integer k > 1, count the number of order-isomorphic occurrences of every pattern τ ∈ 𝕊_k in π.
Our first result is an 𝒪̃(n²)-time algorithm for k = 6 and k = 7. The proof relies heavily on a new family of graphs that we introduce, called pattern-trees. Every such tree corresponds to an integer linear combination of permutations in 𝕊_k, and is associated with linear extensions of partially ordered sets. We design an evaluation algorithm for these combinations, and apply it to a family of linearly-independent trees. For k = 8, we show a barrier: the subspace spanned by trees in the previous family has dimension exactly |𝕊₈| - 1, one less than required.
Our second result is an 𝒪̃(n^{7/4})-time algorithm for k = 5. This algorithm extends the framework of pattern-trees by speeding-up their evaluation in certain cases. A key component of the proof is the introduction of pair-rectangle-trees, a data structure for dominance counting.
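As a baseline that makes the counting task concrete (and is far slower than the pattern-tree algorithms of the paper), the following brute-force sketch enumerates all k-element subsequences of π and tallies the pattern of each; the function name and the example permutation are illustrative.

```python
from itertools import combinations, permutations

def pattern_counts(pi, k):
    """Count, for every pattern tau in S_k, the number of k-element subsequences
    of pi that are order-isomorphic to tau.  Brute force over all C(n, k)
    subsequences, purely to illustrate the problem statement."""
    counts = {tau: 0 for tau in permutations(range(1, k + 1))}
    for idx in combinations(range(len(pi)), k):
        vals = [pi[i] for i in idx]
        order = sorted(range(k), key=lambda j: vals[j])
        tau = [0] * k
        for rank, j in enumerate(order, start=1):
            tau[j] = rank               # rank of the j-th chosen entry
        counts[tuple(tau)] += 1
    return counts

pi = (3, 1, 4, 5, 2)                    # a permutation of {1,...,5}, one-line notation
for tau, c in sorted(pattern_counts(pi, 3).items()):
    print(tau, c)
```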
Cite as
Gal Beniamini and Nir Lavee. Counting Permutation Patterns with Multidimensional Trees. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 24:1-24:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{beniamini_et_al:LIPIcs.ICALP.2025.24,
author = {Beniamini, Gal and Lavee, Nir},
title = {{Counting Permutation Patterns with Multidimensional Trees}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {24:1--24:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.24},
URN = {urn:nbn:de:0030-drops-234018},
doi = {10.4230/LIPIcs.ICALP.2025.24},
annote = {Keywords: Pattern counting, patterns, permutations}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Benjamin Bergougnoux, Édouard Bonnet, and Julien Duron
Abstract
We show that it is NP-hard to distinguish graphs of linear mim-width at most 1211 from graphs of sim-width at least 1216. This implies that Mim-Width, Sim-Width, One-Sided Mim-Width, and their linear counterparts are all paraNP-complete, i.e., NP-complete to compute even when upper bounded by a constant. A key intermediate problem that we introduce and show NP-complete, Linear Degree Balancing, takes as input an edge-weighted graph G and an integer τ, and asks whether V(G) can be linearly ordered such that every vertex of G has weighted backward and forward degrees at most τ.
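To make the Linear Degree Balancing problem concrete, the sketch below merely verifies a given linear order against a threshold τ (deciding whether such an order exists at all is what the paper shows NP-complete); the graph, weights, and function name are illustrative assumptions.

```python
def respects_threshold(order, weighted_edges, tau):
    """order: list of vertices (a linear order).
    weighted_edges: dict {(u, v): w} describing an undirected edge-weighted graph.
    Returns True iff every vertex has weighted backward degree <= tau and
    weighted forward degree <= tau with respect to `order`."""
    pos = {v: i for i, v in enumerate(order)}
    back = {v: 0.0 for v in order}
    fwd = {v: 0.0 for v in order}
    for (u, v), w in weighted_edges.items():
        a, b = (u, v) if pos[u] < pos[v] else (v, u)   # a precedes b
        fwd[a] += w     # edge goes forward out of a
        back[b] += w    # and backward into b
    return all(back[v] <= tau and fwd[v] <= tau for v in order)

edges = {(1, 2): 2.0, (1, 3): 2.0, (1, 4): 2.0}          # a weighted star centred at 1
print(respects_threshold([2, 3, 1, 4], edges, tau=4.0))  # True: vertex 1 has backward 4, forward 2
print(respects_threshold([1, 2, 3, 4], edges, tau=4.0))  # False: vertex 1 has forward degree 6
```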
Cite as
Benjamin Bergougnoux, Édouard Bonnet, and Julien Duron. Mim-Width Is paraNP-Complete. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bergougnoux_et_al:LIPIcs.ICALP.2025.25,
author = {Bergougnoux, Benjamin and Bonnet, \'{E}douard and Duron, Julien},
title = {{Mim-Width Is paraNP-Complete}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {25:1--25:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.25},
URN = {urn:nbn:de:0030-drops-234020},
doi = {10.4230/LIPIcs.ICALP.2025.25},
annote = {Keywords: Mim-width, lower bounds, parameterized complexity, ordered graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Gaétan Berthe, Marin Bougeret, Daniel Gonçalves, and Jean-Florent Raymond
Abstract
The paper deals with the Feedback Vertex Set problem parameterized by the solution size. Given a graph G and a parameter k, one has to decide if there is a set S of at most k vertices such that G-S is acyclic. Assuming the Exponential Time Hypothesis, it is known that FVS cannot be solved in time 2^{o(k)}n^{𝒪(1)} in general graphs. To overcome this, many recent results considered FVS restricted to particular intersection graph classes and provided such 2^{o(k)}n^{𝒪(1)} algorithms.
In this paper we provide generic conditions on a graph class for the existence of an algorithm solving FVS in subexponential FPT time, i.e., time 2^{k^ε} poly(n), for some ε < 1, where n denotes the number of vertices of the instance and k the parameter. On the one hand this result unifies algorithms that have been proposed over the years for several graph classes such as planar graphs, map graphs, unit-disk graphs, pseudo-disk graphs, and string graphs of bounded edge-degree. On the other hand it extends the tractability horizon of FVS to new classes that are not amenable to previously used techniques, in particular intersection graphs of "thin" objects like segment graphs or more generally s-string graphs.
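As a small companion to the problem definition (not to the subexponential algorithms of the paper), the following sketch checks feasibility of a candidate solution, i.e., that G − S is acyclic, using a union-find cycle test; the example graph and function name are illustrative assumptions.

```python
def is_feedback_vertex_set(n, edges, S):
    """Check whether deleting the vertex set S from the undirected graph
    (vertices 0..n-1, edge list `edges`) leaves an acyclic graph (a forest).
    Uses a union-find cycle test; runs in near-linear time."""
    S = set(S)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in S or v in S:
            continue
        ru, rv = find(u), find(v)
        if ru == rv:            # this edge closes a cycle in G - S
            return False
        parent[ru] = rv
    return True

# A 4-cycle plus a chord: {3} is a feedback vertex set, the empty set is not.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(is_feedback_vertex_set(4, edges, S={3}))    # True
print(is_feedback_vertex_set(4, edges, S=set()))  # False
```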
Cite as
Gaétan Berthe, Marin Bougeret, Daniel Gonçalves, and Jean-Florent Raymond. Pushing the Frontiers of Subexponential FPT Time for Feedback Vertex Set. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{berthe_et_al:LIPIcs.ICALP.2025.26,
author = {Berthe, Ga\'{e}tan and Bougeret, Marin and Gon\c{c}alves, Daniel and Raymond, Jean-Florent},
title = {{Pushing the Frontiers of Subexponential FPT Time for Feedback Vertex Set}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {26:1--26:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.26},
URN = {urn:nbn:de:0030-drops-234036},
doi = {10.4230/LIPIcs.ICALP.2025.26},
annote = {Keywords: Subexponential FPT algorithms, geometric intersection graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Koustav Bhanja
Abstract
Let G = (V,E) be an undirected multi-graph on n = |V| vertices and let S ⊆ V be a Steiner set in G. The Steiner cut is a fundamental concept; both the global cut (|S| = n) and the (s,t)-cut (|S| = 2) are special cases of it. We study Steiner cuts of capacity minimum+1, and, as an important application, we provide a dual edge Sensitivity Oracle for Steiner mincut - a compact data structure for efficiently reporting a Steiner mincut after the failure or insertion of any pair of edges.
A compact data structure for cuts of capacity minimum+1 has been designed for both global cuts [Dinitz and Nutov, STOC 1995] and (s,t)-cuts [Baswana, Bhanja, and Pandey, ICALP 2022 & TALG 2023]. Moreover, both data structures are also used crucially to design a dual edge Sensitivity Oracle for their respective mincuts. Unfortunately, except for these two extreme scenarios of Steiner cuts, no generalization of these results is known. Therefore, to address this gap, we present the following first results on Steiner cuts for any S satisfying 2 ≤ |S| ≤ n.
1) Data Structure for Minimum+1 Steiner Cut: There is an {O}(n(n-|S|+1)) space data structure that, given any pair of vertices u,v, can determine in {O}(1) time whether the Steiner cut of the least capacity separating u and v has capacity minimum+1. It can report such a cut, if it exists, in {O}(n) time, which is worst-case optimal.
2) Dual Edge Sensitivity Oracle: We design the following pair of data structures. (a) There is an {O}(n(n-|S|+1)) space data structure that, after the failure or insertion of any pair of edges in G, can report the capacity of Steiner mincut in {O}(1) time and a Steiner mincut in {O}(n) time, which is worst-case optimal. (b) If we are interested in reporting only the capacity of Steiner mincut, there is a more compact data structure that occupies {O}((n-|S|)²+n) space and can report the capacity of Steiner mincut in {O}(1) time after the failure or insertion of any pair of edges.
3) Lower Bound for Sensitivity Oracle: For undirected multi-graphs, for any Steiner set S ⊆ V, any data structure that, after the failure or insertion of any pair of edges, can report the capacity of Steiner mincut must occupy Ω((n-|S|)²) bits of space in the worst case, irrespective of the query time. To arrive at our results, we provide several techniques, especially a generalization of the 3-Star Lemma given by Dinitz and Vainshtein [SICOMP 2000], which is of independent interest.
Our results match the space and time bounds of the existing results for the two extreme scenarios of Steiner cuts - the global cut and the (s,t)-cut. In addition, the space occupied by our data structures in (1) and (2) decreases as |S| tends to n; in particular, they occupy subquadratic space if |S| is close to n.
Cite as
Koustav Bhanja. Minimum+1 Steiner Cut and Dual Edge Sensitivity Oracle: Bridging Gap between Global and (s,t)-cut. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 27:1-27:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bhanja:LIPIcs.ICALP.2025.27,
author = {Bhanja, Koustav},
title = {{Minimum+1 Steiner Cut and Dual Edge Sensitivity Oracle: Bridging Gap between Global and (s,t)-cut}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {27:1--27:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.27},
URN = {urn:nbn:de:0030-drops-234040},
doi = {10.4230/LIPIcs.ICALP.2025.27},
annote = {Keywords: cut, mincut, minimum+1, steiner, edge fault, sensitivity oracle, dual edges}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Vishwas Bhargava and Devansh Shringi
Abstract
We present a deterministic 2^{k^{𝒪(1)}} poly(n,d) time algorithm for decomposing d-dimensional, width-n tensors of rank at most k over ℝ and ℂ. This improves upon the previous randomized algorithm of Peleg, Shpilka, and Volk (ITCS '24) that takes 2^{k^{k^{𝒪(k)}}} poly(n,d) time and the deterministic n^{k^k} time algorithms of Bhargava, Saraf, and Volkovich (STOC '21).
Our work resolves an open question asked by Peleg, Shpilka, and Volk (ITCS '24) on whether a deterministic Fixed Parameter Tractable (FPT) algorithm exists for worst-case tensor decomposition. We also make substantial progress on the fundamental problem of how the tractability of tensor decomposition varies as the tensor rank increases. Our result implies that we can achieve deterministic polynomial-time decomposition as long as the rank of the tensor is at most (log n)^{1/C}, where C is some fixed constant independent of n and d. Further, we note that there cannot exist a polynomial-time algorithm for k = ω(log n) unless ETH fails. Our algorithm works for all fields; however, the time complexity worsens to 2^{k^{k^{𝒪(1)}}} and requires randomization for finite fields of large characteristic. Both conditions are provably necessary unless there are improvements in the state of the art for system solving over the corresponding fields.
Our approach achieves this by designing a proper learning (reconstruction) algorithm for set-multilinear depth-3 arithmetic circuits. On a technical note, we design a "partial" clustering algorithm for set-multilinear depth-3 arithmetic circuits that lets us isolate a cluster from any set-multilinear depth-3 circuit while preserving the structure of the circuit.
Cite as
Vishwas Bhargava and Devansh Shringi. Faster & Deterministic FPT Algorithm for Worst-Case Tensor Decomposition. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bhargava_et_al:LIPIcs.ICALP.2025.28,
author = {Bhargava, Vishwas and Shringi, Devansh},
title = {{Faster \& Deterministic FPT Algorithm for Worst-Case Tensor Decomposition}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {28:1--28:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.28},
URN = {urn:nbn:de:0030-drops-234052},
doi = {10.4230/LIPIcs.ICALP.2025.28},
annote = {Keywords: Algebraic circuits, Deterministic algorithms, FPT algorithm, Learning circuits, Reconstruction, Tensor Decomposition, Tensor Rank}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Aditya Bhaskara, Sepideh Mahabadi, Madhusudhan Reddy Pittu, Ali Vakilian, and David P. Woodruff
Abstract
In this paper we study the constrained subspace approximation problem. Given a set of n points {a₁,…,a_n} in ℝ^d, the goal of the subspace approximation problem is to find a k-dimensional subspace that best approximates the input points. More precisely, for a given p ≥ 1, we aim to minimize the pth power of the 𝓁_p norm of the error vector (‖a₁-Pa₁‖,…,‖a_n-Pa_n‖), where P denotes the projection matrix onto the subspace and the norms are Euclidean. In constrained subspace approximation (CSA), we additionally have constraints on the projection matrix P. In its most general form, we require P to belong to a given subset 𝒮 that is described explicitly or implicitly.
We introduce a general framework for constrained subspace approximation. Our approach, which we term coreset-guess-solve, yields either (1+ε)-multiplicative or ε-additive approximations for a variety of constraints. We show that it provides new algorithms for partition-constrained subspace approximation with applications to fair subspace approximation, k-means clustering, and projected non-negative matrix factorization, among others. Specifically, while we recover the best known bounds for k-means clustering in Euclidean spaces, we improve the known results for the remainder of the problems.
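For the unconstrained special case with p = 2 only (none of the constrained variants or the coreset-guess-solve framework), the numpy sketch below evaluates the objective from the abstract and computes the optimal rank-k projection via the top singular vectors (Eckart-Young); function names and data are illustrative.

```python
import numpy as np

def subspace_cost(A, P, p):
    """A: n x d matrix of points (rows a_i); P: d x d projection matrix.
    Returns sum_i ||a_i - P a_i||_2^p, the objective from the abstract."""
    residuals = A - A @ P
    return float(np.sum(np.linalg.norm(residuals, axis=1) ** p))

def best_unconstrained_projection(A, k):
    """For p = 2 and no constraint on P, the optimum is the projection onto
    the span of the top-k right singular vectors (Eckart-Young)."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt[:k].T                      # d x k orthonormal basis
    return V @ V.T

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6))
P = best_unconstrained_projection(A, k=2)
print("p=2 cost of SVD projection:", subspace_cost(A, P, p=2))
```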
Cite as
Aditya Bhaskara, Sepideh Mahabadi, Madhusudhan Reddy Pittu, Ali Vakilian, and David P. Woodruff. Guessing Efficiently for Constrained Subspace Approximation. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 29:1-29:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bhaskara_et_al:LIPIcs.ICALP.2025.29,
author = {Bhaskara, Aditya and Mahabadi, Sepideh and Pittu, Madhusudhan Reddy and Vakilian, Ali and Woodruff, David P.},
title = {{Guessing Efficiently for Constrained Subspace Approximation}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {29:1--29:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.29},
URN = {urn:nbn:de:0030-drops-234068},
doi = {10.4230/LIPIcs.ICALP.2025.29},
annote = {Keywords: parameterized complexity, low rank approximation, fairness, non-negative matrix factorization, clustering}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sujoy Bhore and Lazar Milenković
Abstract
Lightness, sparsity, and hop-diameter are the fundamental parameters of geometric spanners. Arya et al. [STOC'95] showed in their seminal work that there exists a construction of Euclidean (1+ε)-spanners with hop-diameter O(log n) and lightness O(log n). They also gave a general tradeoff of hop-diameter k and sparsity O(α_k(n)), where α_k is a very slowly growing inverse of an Ackermann-style function. The former combination of logarithmic hop-diameter and lightness is optimal due to the lower bound by Dinitz et al. [FOCS'08]. Later, Elkin and Solomon [STOC'13] generalized the light spanner construction to doubling metrics and extended the tradeoff for more values of hop-diameter k. In a recent line of work [SoCG'22, SoCG'23], Le et al. proved that the aforementioned tradeoff between the hop-diameter and sparsity is tight for every choice of hop-diameter k. A fundamental question remains: What is the optimal tradeoff between the hop-diameter and lightness for every value of k?
In this paper, we present a general framework for constructing light spanners with small hop-diameter. Our framework is based on tree covers. In particular, we show that if a metric admits a tree cover with γ trees, stretch t, and lightness L, then it also admits a t-spanner with hop-diameter k and lightness O(kn^{2/k}⋅ γ L). Further, we note that the tradeoff for trees is tight due to a construction in the uniform line metric, which is perhaps the simplest tree metric. As a direct consequence of this framework, we obtain new tradeoffs between lightness and hop-diameter for doubling metrics.
Cite as
Sujoy Bhore and Lazar Milenković. Light Spanners with Small Hop-Diameter. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 30:1-30:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bhore_et_al:LIPIcs.ICALP.2025.30,
author = {Bhore, Sujoy and Milenkovi\'{c}, Lazar},
title = {{Light Spanners with Small Hop-Diameter}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {30:1--30:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.30},
URN = {urn:nbn:de:0030-drops-234075},
doi = {10.4230/LIPIcs.ICALP.2025.30},
annote = {Keywords: Geometric Spanners, Lightness, Hop-Diameter, Recurrences, Lower Bounds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Péter Biró, Gergely Csáji, and Ildikó Schlotter
Abstract
We study the NP-hard Stable Hypergraph Matching (SHM) problem and its generalization allowing capacities, the Stable Hypergraph b-Matching (SHbM) problem, and investigate their computational properties under various structural constraints. Our study is motivated by the fact that Scarf’s Lemma [Scarf, 1967] together with a result of Lovász [Lovász, 1972] guarantees the existence of a stable matching whenever the underlying hypergraph is normal. Furthermore, if the hypergraph is unimodular (i.e., its incidence matrix is totally unimodular), then even a stable b-matching is guaranteed to exist. However, no polynomial-time algorithm is known for finding a stable matching or b-matching in unimodular hypergraphs.
We identify subclasses of unimodular hypergraphs where SHM and SHbM are tractable, such as laminar hypergraphs or so-called subpath hypergraphs with bounded-size hyperedges; for the latter case, even a maximum-weight stable b-matching can be found efficiently. We complement our algorithms by showing that optimizing over stable matchings is NP-hard even in laminar hypergraphs. As a practically important special case of SHbM for unimodular hypergraphs, we investigate a tripartite stable matching problem with students, schools, and companies as agents, called the University Dual Admission problem, which models real-world scenarios in higher education admissions.
Finally, we examine a superclass of subpath hypergraphs that are normal but not necessarily unimodular, namely subtree hypergraphs where hyperedges correspond to subtrees of a tree. We establish that for such hypergraphs, stable matchings can be found in polynomial time but, in the setting with capacities, finding a stable b-matching is NP-hard.
Cite as
Péter Biró, Gergely Csáji, and Ildikó Schlotter. Stable Hypergraph Matching in Unimodular Hypergraphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 31:1-31:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{biro_et_al:LIPIcs.ICALP.2025.31,
author = {Bir\'{o}, P\'{e}ter and Cs\'{a}ji, Gergely and Schlotter, Ildik\'{o}},
title = {{Stable Hypergraph Matching in Unimodular Hypergraphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {31:1--31:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.31},
URN = {urn:nbn:de:0030-drops-234086},
doi = {10.4230/LIPIcs.ICALP.2025.31},
annote = {Keywords: stable hypergraph matching, Scarf’s Lemma, unimodular hypergraphs, university dual admission}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Greg Bodwin, Michael Dinitz, Ama Koranteng, and Lily Wang
Abstract
There has recently been significant interest in fault tolerant spanners, which are spanners that still maintain their stretch guarantees after some nodes or edges fail. This work has culminated in an almost complete understanding of the three-way tradeoff between stretch, sparsity, and number of faults tolerated. However, despite some progress in metric settings, there have been no results to date on the tradeoff in general graphs between stretch, lightness, and number of faults tolerated.
We initiate the study of light edge fault tolerant (EFT) graph spanners, obtaining the first such results. First, we observe that lightness can be unbounded if we use the traditional definition (normalizing by the MST). We then argue that a natural definition of fault-tolerant lightness is to instead normalize by a min-weight fault tolerant connectivity preserver; essentially, a fault-tolerant version of the MST. However, even with this, we show that it is still not generally possible to construct f-EFT spanners whose weight compares reasonably to the weight of a min-weight f-EFT connectivity preserver.
In light of this lower bound, it is natural to then consider bicriteria notions of lightness, where we compare the weight of an f-EFT spanner to a min-weight (f' > f)-EFT connectivity preserver. The most interesting question is to determine the minimum value of f' that allows for reasonable lightness upper bounds. Our main result is a precise answer to this question: f' = 2f. In particular, we show that the lightness can be untenably large (roughly n/k for a k-spanner) if one normalizes by the min-weight (2f-1)-EFT connectivity preserver. But if one normalizes by the min-weight 2f-EFT connectivity preserver, then we show that the lightness is bounded by just O(f^{1/2}) times the non-fault tolerant lightness (roughly n^{1/k} for a (1+ε)(2k-1)-spanner).
Cite as
Greg Bodwin, Michael Dinitz, Ama Koranteng, and Lily Wang. Light Edge Fault Tolerant Graph Spanners. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bodwin_et_al:LIPIcs.ICALP.2025.32,
author = {Bodwin, Greg and Dinitz, Michael and Koranteng, Ama and Wang, Lily},
title = {{Light Edge Fault Tolerant Graph Spanners}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {32:1--32:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.32},
URN = {urn:nbn:de:0030-drops-234093},
doi = {10.4230/LIPIcs.ICALP.2025.32},
annote = {Keywords: Fault Tolerant Spanners, Light Spanners}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Itai Boneh, Shay Golan, Shay Mozes, Daniel Prigan, and Oren Weimann
Abstract
We show how to preprocess a weighted undirected n-vertex planar graph in Õ(n^{4/3}) time, such that the distance between any pair of vertices can then be reported in Õ(1) time. This improves the previous Õ(n^{3/2}) preprocessing time [JACM'23].
Our main technical contribution is a near optimal construction of additively weighted Voronoi diagrams in undirected planar graphs. Namely, given a planar graph G and a face f, we show that one can preprocess G in Õ(n) time such that given any weight assignment to the vertices of f one can construct the additively weighted Voronoi diagram of f in near optimal Õ(|f|) time. This improves the Õ(√{n|f|}) construction time of [JACM'23].
Cite as
Itai Boneh, Shay Golan, Shay Mozes, Daniel Prigan, and Oren Weimann. Faster Construction of a Planar Distance Oracle with Õ(1) Query Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 33:1-33:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{boneh_et_al:LIPIcs.ICALP.2025.33,
author = {Boneh, Itai and Golan, Shay and Mozes, Shay and Prigan, Daniel and Weimann, Oren},
title = {{Faster Construction of a Planar Distance Oracle with \~{O}(1) Query Time}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {33:1--33:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.33},
URN = {urn:nbn:de:0030-drops-234106},
doi = {10.4230/LIPIcs.ICALP.2025.33},
annote = {Keywords: Distance Oracle, Planar Graph, Construction Time}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alex Bortolotti, Monaldo Mastrolilli, and Luis Felipe Vargas
Abstract
The Sum-of-Squares (SoS) hierarchy, also known as Lasserre hierarchy, has emerged as a promising tool in optimization. However, it remains unclear whether fixed-degree SoS proofs can be automated [O'Donnell (2017)]. Indeed, there are examples of polynomial systems with bounded coefficients that admit low-degree SoS proofs, but these proofs necessarily involve numbers with an exponential number of bits, implying that low-degree SoS proofs cannot always be found efficiently.
A sufficient condition derived from the Nullstellensatz proof system [Raghavendra and Weitz (2017)] identifies cases where bit complexity issues can be circumvented. One of the main problems left open by Raghavendra and Weitz is proving any result for refutations, as their condition applies only to polynomial systems with a large set of solutions.
In this work, we broaden the class of polynomial systems for which degree-d SoS proofs can be automated. To achieve this, we develop a new criterion and demonstrate how it applies to polynomial systems beyond the scope of Raghavendra and Weitz’s result. In particular, we establish a separation for instances arising from Constraint Satisfaction Problems (CSPs). Moreover, our result extends to refutations, establishing that polynomial-time refutation is possible for broad classes of polynomial-time solvable constraint problems, a first advancement in this area.
Cite as
Alex Bortolotti, Monaldo Mastrolilli, and Luis Felipe Vargas. On the Degree Automatability of Sum-Of-Squares Proofs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 34:1-34:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bortolotti_et_al:LIPIcs.ICALP.2025.34,
author = {Bortolotti, Alex and Mastrolilli, Monaldo and Vargas, Luis Felipe},
title = {{On the Degree Automatability of Sum-Of-Squares Proofs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {34:1--34:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.34},
URN = {urn:nbn:de:0030-drops-234110},
doi = {10.4230/LIPIcs.ICALP.2025.34},
annote = {Keywords: Sum of squares, Polynomial calculus, Polynomial ideal membership, Polymorphisms, Gr\"{o}bner basis theory, Constraint satisfaction problems, Proof complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Karl Bringmann, Nick Fischer, Bernhard Haeupler, and Rustam Latypov
Abstract
Low Diameter Decompositions (LDDs) are invaluable tools in the design of combinatorial graph algorithms. While historically they have been applied mainly to undirected graphs, in the recent breakthrough for the negative-length Single Source Shortest Path problem, Bernstein, Nanongkai, and Wulff-Nilsen [FOCS '22] extended the use of LDDs to directed graphs for the first time. Specifically, their LDD deletes each edge with probability at most O(1/D ⋅ log²n), while ensuring that each strongly connected component in the remaining graph has a (weak) diameter of at most D.
In this work, we make further advancements in the study of directed LDDs. We reveal a natural and intuitive (in hindsight) connection to Expander Decompositions, and leveraging this connection along with additional techniques, we establish the existence of an LDD with an edge-cutting probability of O(1/D ⋅ log n log log n). This improves the previous bound by nearly a logarithmic factor and closely approaches the lower bound of Ω(1/D ⋅ log n). With significantly more technical effort, we also develop two efficient algorithms for computing our LDDs: a deterministic algorithm that runs in time Õ(m poly(D)) and a randomized algorithm that runs in near-linear time Õ(m).
We believe that our work provides a solid conceptual and technical foundation for future research relying on directed LDDs, which will undoubtedly follow soon.
Cite as
Karl Bringmann, Nick Fischer, Bernhard Haeupler, and Rustam Latypov. Near-Optimal Directed Low-Diameter Decompositions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 35:1-35:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bringmann_et_al:LIPIcs.ICALP.2025.35,
author = {Bringmann, Karl and Fischer, Nick and Haeupler, Bernhard and Latypov, Rustam},
title = {{Near-Optimal Directed Low-Diameter Decompositions}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {35:1--35:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.35},
URN = {urn:nbn:de:0030-drops-234125},
doi = {10.4230/LIPIcs.ICALP.2025.35},
annote = {Keywords: Low Diameter Decompositions, Expander Decompositions, Directed Graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kevin Buchin, Maike Buchin, Zijin Huang, André Nusser, and Sampson Wong
Abstract
We study the problem of computing the Fréchet distance between two polygonal curves under transformations. First, we consider translations in the Euclidean plane. Given two curves π and σ of total complexity n and a threshold δ ≥ 0, we present an 𝒪̃(n^{7 + 1/3}) time algorithm to determine whether there exists a translation t ∈ ℝ² such that the Fréchet distance between π and σ + t is at most δ. This improves on the previous best result, which is an 𝒪(n⁸) time algorithm.
We then generalize this result to any class of rationally parameterized transformations, which includes translation, rotation, scaling, and arbitrary affine transformations. For a class T of rationally parametrized transformations with k degrees of freedom, we show that one can determine whether there is a transformation τ ∈ T such that the Fréchet distance between π and τ(σ) is at most δ in 𝒪̃(n^{3k+4/3}) time.
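As background for the threshold question "is the Fréchet distance at most δ?", the sketch below computes the discrete Fréchet distance between two fixed point sequences with the standard quadratic dynamic program; this is a simplification (discrete curves, no transformation) and not the algorithm of the paper. The example curves are illustrative.

```python
from math import dist   # Euclidean distance, Python 3.8+

def discrete_frechet(P, Q):
    """Standard O(|P|*|Q|) dynamic program for the discrete Frechet distance
    between point sequences P and Q in the plane."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1], ca[i - 1][j - 1]), d)
    return ca[n - 1][m - 1]

pi = [(0, 0), (1, 0), (2, 0)]
sigma = [(0, 1), (1, 1), (2, 1)]
delta = 1.0
print(discrete_frechet(pi, sigma) <= delta)   # True for these parallel segments
```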
Cite as
Kevin Buchin, Maike Buchin, Zijin Huang, André Nusser, and Sampson Wong. Faster Fréchet Distance Under Transformations. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 36:1-36:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{buchin_et_al:LIPIcs.ICALP.2025.36,
author = {Buchin, Kevin and Buchin, Maike and Huang, Zijin and Nusser, Andr\'{e} and Wong, Sampson},
title = {{Faster Fr\'{e}chet Distance Under Transformations}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {36:1--36:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.36},
URN = {urn:nbn:de:0030-drops-234137},
doi = {10.4230/LIPIcs.ICALP.2025.36},
annote = {Keywords: Fr\'{e}chet distance, curve similarity, shape matching}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Andrei A. Bulatov and Stanislav Živný
Abstract
The Mermin-Peres magic square is a celebrated example of a system of Boolean linear equations that is not (classically) satisfiable but is satisfiable via linear operators on a Hilbert space of dimension four. A natural question is then: for what kind of problems does such a phenomenon occur? Atserias, Kolaitis, and Severini answered this question for all Boolean Constraint Satisfaction Problems (CSPs): For 0-Valid-SAT, 1-Valid-SAT, 2-SAT, Horn-SAT, and Dual Horn-SAT, classical satisfiability and operator satisfiability are the same and thus there is no gap; for all other Boolean CSPs, these notions differ as there are gaps, i.e., there are unsatisfiable instances that are satisfiable via operators on Hilbert spaces.
We generalize their result to CSPs on arbitrary finite domains and give an almost complete classification: First, we show that NP-hard CSPs admit a separation between classical satisfiability and satisfiability via operators on finite- and infinite-dimensional Hilbert spaces. Second, we show that tractable CSPs of bounded width have no satisfiability gaps of any kind. Finally, we show that tractable CSPs of unbounded width can simulate, in a satisfiability-gap-preserving fashion, linear equations over an Abelian group of prime order p; for such CSPs, we obtain a separation of classical satisfiability and satisfiability via operators on infinite-dimensional Hilbert spaces. Furthermore, if p = 2, such CSPs also have gaps separating classical satisfiability and satisfiability via operators on finite- and infinite-dimensional Hilbert spaces.
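As a small companion to the Mermin-Peres example in the abstract, the sketch below brute-forces one common GF(2) encoding of the magic square (rows sum to 0, the first two columns sum to 0, the last column sums to 1) and confirms that it has no classical solution. The particular encoding is an assumption, and the operator-valued side of the story is of course not captured by a few lines of code.

```python
from itertools import product

def magic_square_satisfiable():
    """One common GF(2) encoding of the Mermin-Peres magic square: variables
    x[i][j] for i, j in {0,1,2}; every row sums to 0 (mod 2), the first two
    columns sum to 0, and the last column sums to 1.  Summing all six
    constraints gives 0 = 1, so no classical assignment exists; the brute
    force below confirms this over all 2^9 assignments."""
    for bits in product((0, 1), repeat=9):
        x = [bits[0:3], bits[3:6], bits[6:9]]
        rows_ok = all(sum(x[i]) % 2 == 0 for i in range(3))
        cols_ok = (sum(x[i][0] for i in range(3)) % 2 == 0
                   and sum(x[i][1] for i in range(3)) % 2 == 0
                   and sum(x[i][2] for i in range(3)) % 2 == 1)
        if rows_ok and cols_ok:
            return True
    return False

print(magic_square_satisfiable())   # False: classically unsatisfiable
```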
Cite as
Andrei A. Bulatov and Stanislav Živný. Satisfiability of Commutative vs. Non-Commutative CSPs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 37:1-37:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bulatov_et_al:LIPIcs.ICALP.2025.37,
author = {Bulatov, Andrei A. and \v{Z}ivn\'{y}, Stanislav},
title = {{Satisfiability of Commutative vs. Non-Commutative CSPs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {37:1--37:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.37},
URN = {urn:nbn:de:0030-drops-234149},
doi = {10.4230/LIPIcs.ICALP.2025.37},
annote = {Keywords: constraint satisfaction, quantum CSP, operator CSP}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Silvia Butti, Alberto Larrauri, and Stanislav Živný
Abstract
A celebrated result of Håstad established that, for any constant ε > 0, it is NP-hard to find an assignment satisfying a (1/|G|+ε)-fraction of the constraints of a given 3-LIN instance over an Abelian group G even if one is promised that an assignment satisfying a (1-ε)-fraction of the constraints exists. Engebretsen, Holmerin, and Russell showed the same result for 3-LIN instances over any finite (not necessarily Abelian) group. In other words, for almost-satisfiable instances of 3-LIN the random assignment achieves an optimal approximation guarantee. We prove that the random assignment algorithm is still best possible under a stronger promise that the 3-LIN instance is almost satisfiable over an arbitrarily more restrictive group.
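To illustrate the random-assignment baseline of 1/|G| mentioned in the abstract, the sketch below builds a random 3-LIN instance over the cyclic group Z_p (an assumption made purely for simplicity; the results above concern arbitrary finite groups) and checks empirically that a uniformly random assignment satisfies about a 1/p fraction of the equations. All names and parameters are illustrative.

```python
import random

def random_assignment_fraction(p, m, trials=200, seed=0):
    """Build a random 3-LIN instance over Z_p with m equations on 3m variables
    (so the equations use disjoint variable triples), then average, over random
    assignments, the fraction of satisfied equations.  The empirical value
    concentrates around 1/p, the guarantee of the random-assignment algorithm."""
    rng = random.Random(seed)
    eqs = [((3 * t, 3 * t + 1, 3 * t + 2), rng.randrange(p)) for t in range(m)]
    total = 0.0
    for _ in range(trials):
        x = [rng.randrange(p) for _ in range(3 * m)]
        sat = sum(1 for (i, j, k), c in eqs if (x[i] + x[j] + x[k]) % p == c)
        total += sat / m
    return total / trials

p = 5
print(f"empirical fraction: {random_assignment_fraction(p, m=500):.3f}  vs  1/p = {1/p:.3f}")
```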
Cite as
Silvia Butti, Alberto Larrauri, and Stanislav Živný. Optimal Inapproximability of Promise Equations over Finite Groups. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 38:1-38:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{butti_et_al:LIPIcs.ICALP.2025.38,
author = {Butti, Silvia and Larrauri, Alberto and \v{Z}ivn\'{y}, Stanislav},
title = {{Optimal Inapproximability of Promise Equations over Finite Groups}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {38:1--38:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.38},
URN = {urn:nbn:de:0030-drops-234150},
doi = {10.4230/LIPIcs.ICALP.2025.38},
annote = {Keywords: promise constraint satisfaction, approximation, linear equations}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Barış Can Esmer and Ariel Kulik
Abstract
In this paper, we present Sampling with a Black Box, a unified framework for the design of parameterized approximation algorithms for vertex deletion problems (e.g., Vertex Cover, Feedback Vertex Set, etc.). The framework relies on two components:
- A Sampling Step. A polynomial-time randomized algorithm that, given a graph G, returns a random vertex v such that the optimum of G⧵ {v} is smaller by 1 than the optimum of G, with some prescribed probability q. We show that such algorithms exist for multiple vertex deletion problems.
- A Black Box Algorithm. An exact parameterized algorithm, a polynomial-time approximation algorithm, or a parameterized approximation algorithm.
The framework combines these two components: the sampling step is applied iteratively to remove vertices from the input graph, and the solution is then extended using the black box algorithm. The process is repeated sufficiently many times so that the target approximation ratio is attained with constant probability.
We use the technique to derive parameterized approximation algorithms for several vertex deletion problems, including Feedback Vertex Set, d-Hitting Set and 𝓁-Path Vertex Cover. In particular, for every approximation ratio 1 < β < 2, we attain a parameterized β-approximation for Feedback Vertex Set, which is faster than the parameterized β-approximation of [Jana, Lokshtanov, Mandal, Rai and Saurabh, MFCS '23]. Furthermore, our algorithms are always faster than the algorithms attained using Fidelity Preserving Transformations [Fellows, Kulik, Rosamond, and Shachnai, JCSS '18].
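The overall loop of the framework can be sketched as follows (a minimal sketch only, assuming a networkx-style graph object; sample_vertex, black_box_solve, success_prob and the trial count are illustrative placeholders, not the authors' implementation).

    import math

    # Sketch of the two-component framework described above. The two callables
    # are hypothetical stand-ins: sample_vertex(G) returns a vertex guessed to
    # lower the optimum of G by one (the Sampling Step, succeeding with
    # probability success_prob), and black_box_solve(G) returns a feasible
    # deletion set for the residual graph (the Black Box).
    def sample_then_solve(graph, k, sample_vertex, black_box_solve,
                          success_prob, trials=None):
        if trials is None:
            # roughly 1 / success_prob^k independent trials, so that with
            # constant probability some trial samples k "good" vertices in a row
            trials = max(1, math.ceil(3 / (success_prob ** k)))
        best = None
        for _ in range(trials):
            G = graph.copy()                 # graph behaves like networkx.Graph
            removed = []
            for _ in range(k):
                v = sample_vertex(G)
                removed.append(v)
                G.remove_node(v)
            candidate = removed + list(black_box_solve(G))
            if best is None or len(candidate) < len(best):
                best = candidate
        return best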
Cite as
Barış Can Esmer and Ariel Kulik. Sampling with a Black Box: Faster Parameterized Approximation Algorithms for Vertex Deletion Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{canesmer_et_al:LIPIcs.ICALP.2025.39,
author = {Can Esmer, Bar{\i}\c{s} and Kulik, Ariel},
title = {{Sampling with a Black Box: Faster Parameterized Approximation Algorithms for Vertex Deletion Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {39:1--39:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.39},
URN = {urn:nbn:de:0030-drops-234165},
doi = {10.4230/LIPIcs.ICALP.2025.39},
annote = {Keywords: Parameterized Approximation Algorithms, Random Sampling}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Nairen Cao, Shi Li, and Jia Ye
Abstract
We revisit the simultaneous approximation model for the correlation clustering problem introduced by Davies, Moseley, and Newman [Davies et al., 2024]. The objective is to find a clustering that minimizes given norms of the disagreement vector over all vertices.
We present an efficient algorithm that produces a clustering that is simultaneously a 63.3-approximation for all monotone symmetric norms. This significantly improves upon the previous approximation ratio of 6348 due to Davies, Moseley, and Newman [Davies et al., 2024], which works only for 𝓁_p-norms.
To achieve this result, we first reduce the problem to approximating all top-k norms simultaneously, using the connection between monotone symmetric norms and top-k norms established by Chakrabarty and Swamy [Chakrabarty and Swamy, 2019]. Then we develop a novel procedure that constructs a 12.66-approximate fractional clustering for all top-k norms. Our 63.3-approximation ratio is obtained by combining this with the 5-approximate rounding algorithm by Kalhan, Makarychev, and Zhou [Kalhan et al., 2019].
We then demonstrate that, with a loss of ε in the approximation ratio, the algorithm can be adapted to run in nearly linear time and in the MPC (massively parallel computation) model with a poly-logarithmic number of rounds.
By allowing a further trade-off in the approximation ratio to (359+ε), the number of MPC rounds can be reduced to a constant.
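For concreteness (a small helper only, not the paper's algorithm), the disagreement vector of a clustering and its top-k norms, the objectives discussed above, can be computed directly:

    # Disagreement vector of a correlation clustering instance with "+"/"-"
    # edges, and its top-k norms (k = 1 gives l_inf, k = n gives l_1).
    def disagreement_vector(n, plus_edges, minus_edges, cluster_of):
        dis = [0] * n
        for u, v in plus_edges:          # a "+" edge disagrees if split apart
            if cluster_of[u] != cluster_of[v]:
                dis[u] += 1
                dis[v] += 1
        for u, v in minus_edges:         # a "-" edge disagrees if put together
            if cluster_of[u] == cluster_of[v]:
                dis[u] += 1
                dis[v] += 1
        return dis

    def top_k_norm(vec, k):
        return sum(sorted(vec, reverse=True)[:k])

    dis = disagreement_vector(4, plus_edges=[(0, 1), (2, 3)],
                              minus_edges=[(1, 2)], cluster_of=[0, 0, 0, 1])
    print(dis, top_k_norm(dis, 1), top_k_norm(dis, 4))   # [0, 1, 2, 1] 2 4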
Cite as
Nairen Cao, Shi Li, and Jia Ye. Simultaneously Approximating All Norms for Massively Parallel Correlation Clustering. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 40:1-40:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{cao_et_al:LIPIcs.ICALP.2025.40,
author = {Cao, Nairen and Li, Shi and Ye, Jia},
title = {{Simultaneously Approximating All Norms for Massively Parallel Correlation Clustering}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {40:1--40:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.40},
URN = {urn:nbn:de:0030-drops-234171},
doi = {10.4230/LIPIcs.ICALP.2025.40},
annote = {Keywords: Correlation Clustering, All-Norms, Approximation Algorithm, Massively Parallel Algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Agustín Caracci, Christoph Dürr, and José Verschae
Abstract
We study a generalized binary search problem on the line and general trees. On the line (e.g., a sorted array), binary search finds a target node in O(log n) queries in the worst case, where n is the number of nodes. In time-constrained applications, we might only have time to perform a sub-logarithmic number of queries. In this case, it is impossible to guarantee that the target will be found regardless of its position. Our main result is the construction of a randomized strategy that maximizes the minimum (over the target’s position) probability of finding the target. Such a strategy provides a natural solution when there is no a priori (stochastic) information about the target’s position. As with regular binary search, we can find and run the strategy in O(log n) time (and using only O(log n) random bits). Our construction is obtained by reinterpreting the problem as a two-player zero-sum game and exploiting an underlying number theoretical structure.
For the more general case on trees, querying an edge returns the edge’s endpoint closest to the target. Given a bound k on the number of queries, we quantify a fewer-queries-is-better approach by defining a seeker’s profit p that depends on the number of queries needed to locate the hider. For the linear programming formulation of the corresponding zero-sum game, we show that computing the best response for the hider (that is, the separation problem of the underlying dual LP) can be done in time O(n² 2^{2k}), where n is the size of the tree. This result allows us to compute a Nash equilibrium in polynomial time whenever k = O(log n). In contrast, computing the best response for the hider is NP-hard in general.
Cite as
Agustín Caracci, Christoph Dürr, and José Verschae. Randomized Binary and Tree Search Under Pressure. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 41:1-41:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{caracci_et_al:LIPIcs.ICALP.2025.41,
author = {Caracci, Agust{\'\i}n and D\"{u}rr, Christoph and Verschae, Jos\'{e}},
title = {{Randomized Binary and Tree Search Under Pressure}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {41:1--41:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.41},
URN = {urn:nbn:de:0030-drops-234181},
doi = {10.4230/LIPIcs.ICALP.2025.41},
annote = {Keywords: Binary Search, Search Trees on Trees, Nash Equilibrium}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Amir Carmel, Debarati Das, Evangelos Kipouridis, and Evangelos Pipis
Abstract
Fitting distances to tree metrics and ultrametrics are two widely used methods in hierarchical clustering, primarily explored within the context of numerical taxonomy. Formally, given a positive distance function D: binom(V,2) → ℝ_{>0}, the goal is to find a tree (or an ultrametric) T including all elements of set V, such that the difference between the distances among vertices in T and those specified by D is minimized. Numerical taxonomy was first introduced by Sneath and Sokal [Nature 1962], and since then it has been studied extensively in both biology and computer science.
In this paper, we initiate the study of ultrametric and tree metric fitting problems in the semi-streaming model, where the distances between pairs of elements from V (with |V| = n), defined by the function D, can arrive in an arbitrary order. We study these problems under various distance norms; namely the 𝓁₀ objective, which aims to minimize the number of modified entries in D to fit a tree-metric or an ultrametric; the 𝓁₁ objective, which seeks to minimize the total sum of distance errors across all pairs of points in V; and the 𝓁_∞ objective, which focuses on minimizing the maximum error incurred by any entries in D.
- Our first result addresses the 𝓁₀ objective. We provide a single-pass polynomial-time Õ(n)-space O(1) approximation algorithm for ultrametrics and prove that no single-pass exact algorithm exists, even with exponential time.
- Next, we show that the algorithm for 𝓁₀ implies an O(Δ/δ) approximation for the 𝓁₁ objective, where Δ is the maximum, and δ is the minimum absolute difference between distances in the input. This bound matches the best-known approximation for the RAM model using a combinatorial algorithm when Δ/δ = O(n).
- For the 𝓁_∞ objective, we provide a complete characterization of the ultrametric fitting problem. First, we present a single-pass polynomial-time Õ(n)-space 2-approximation algorithm and show that no better than 2-approximation is possible, even with exponential time. Furthermore, we show that with an additional pass, it is possible to achieve a polynomial-time exact algorithm for ultrametrics.
- Finally, we extend all these results to tree metrics by using only one additional pass through the stream and without asymptotically increasing the approximation factor.
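As a reference for the objectives above (offline helpers only, not the streaming algorithms of the paper), the ultrametric condition and the ℓ₀/ℓ₁/ℓ_∞ deviation between an input matrix D and a fitted matrix T can be checked directly:

    from itertools import combinations

    # Three-point condition for an ultrametric: in every triple the two largest
    # pairwise distances coincide (equivalently, M[x][z] <= max(M[x][y], M[y][z])).
    def is_ultrametric(M):
        n = len(M)
        for x, y, z in combinations(range(n), 3):
            a, b, c = sorted((M[x][y], M[y][z], M[x][z]))
            if b != c:
                return False
        return True

    # l_0, l_1 and l_inf deviation between the input D and a fitted matrix T,
    # measured over all unordered pairs.
    def fit_errors(D, T):
        n = len(D)
        diffs = [abs(D[i][j] - T[i][j]) for i, j in combinations(range(n), 2)]
        return sum(d > 0 for d in diffs), sum(diffs), max(diffs)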
Cite as
Amir Carmel, Debarati Das, Evangelos Kipouridis, and Evangelos Pipis. Fitting Tree Metrics and Ultrametrics in Data Streams. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 42:1-42:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{carmel_et_al:LIPIcs.ICALP.2025.42,
author = {Carmel, Amir and Das, Debarati and Kipouridis, Evangelos and Pipis, Evangelos},
title = {{Fitting Tree Metrics and Ultrametrics in Data Streams}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {42:1--42:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.42},
URN = {urn:nbn:de:0030-drops-234197},
doi = {10.4230/LIPIcs.ICALP.2025.42},
annote = {Keywords: Streaming, Clustering, Ultrametrics, Tree metrics, Distance fitting}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Karthekeyan Chandrasekaran, Chandra Chekuri, and Shubhang Kulkarni
Abstract
We consider deletion problems in graphs and supermodular functions where the goal is to reduce density. In Graph Density Deletion (GraphDD), we are given a graph G = (V,E) with non-negative vertex costs and a non-negative parameter ρ ≥ 0 and the goal is to remove a minimum cost subset S of vertices such that the densest subgraph in G-S has density at most ρ. This problem has an underlying matroidal structure and generalizes several classical problems such as vertex cover, feedback vertex set, and pseudoforest deletion set for appropriately chosen ρ ≤ 1, and all of these classical problems admit a 2-approximation. In sharp contrast, we prove that for every fixed integer ρ > 1, GraphDD is hard to approximate to within a logarithmic factor via a reduction from SetCover, thus showing a phase transition phenomenon. Next, we investigate a generalization of GraphDD to monotone supermodular functions, termed Supermodular Density Deletion (SupmodDD). In SupmodDD, we are given a monotone supermodular function f:2^V → ℤ_{≥0} via an evaluation oracle with element costs and a non-negative integer ρ ≥ 0, and the goal is to remove a minimum cost subset S ⊆ V such that the densest subset according to f in V-S has density at most ρ. We show that SupmodDD is approximation equivalent to the well-known Submodular Cover problem; this implies a tight logarithmic approximation and hardness for SupmodDD; it also implies a logarithmic approximation for GraphDD, thus matching our inapproximability bound. Motivated by these hardness results, we design bicriteria approximation algorithms for both GraphDD and SupmodDD.
Cite as
Karthekeyan Chandrasekaran, Chandra Chekuri, and Shubhang Kulkarni. On Deleting Vertices to Reduce Density in Graphs and Supermodular Functions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 43:1-43:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chandrasekaran_et_al:LIPIcs.ICALP.2025.43,
author = {Chandrasekaran, Karthekeyan and Chekuri, Chandra and Kulkarni, Shubhang},
title = {{On Deleting Vertices to Reduce Density in Graphs and Supermodular Functions}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {43:1--43:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.43},
URN = {urn:nbn:de:0030-drops-234200},
doi = {10.4230/LIPIcs.ICALP.2025.43},
annote = {Keywords: Combinatorial Optimization, Approximation Algorithms, Randomized Algorithms, Hardness of Approximation, Densest Subgraph, Supermodular Functions, Submodular Set Cover}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Karthekeyan Chandrasekaran, Chandra Chekuri, and Weihao Zhu
Abstract
Finding the maximum number of disjoint spanning trees in a given graph is a well-studied problem with several applications and connections. The Tutte-Nash-Williams theorem provides a min-max relation for this problem which also extends to disjoint bases in a matroid and leads to efficient algorithms [Schrijver, 2003]. Several other packing problems such as element disjoint Steiner trees, disjoint set covers, and disjoint dominating sets are NP-Hard but admit an O(log n)-approximation [Feige et al., 2002; Cheriyan and Salavatipour, 2007]. Călinescu, Chekuri, and Vondrák [G. Călinescu et al., 2009] viewed all these packing problems as packing bases of a polymatroid and provided a unified perspective. Motivated by applications in wireless networks, recent works have studied the problem of packing set covers in the online model [Pananjady et al., 2015; Emek et al., 2019; Bienkowski et al., 2025]. The online model poses new challenges for packing problems. In particular, it is not clear how to pack a maximum number of disjoint spanning trees in a graph when edges arrive online. Motivated by these applications and theoretical considerations, we formulate an online model for packing bases of a polymatroid, and describe a randomized algorithm with a polylogarithmic competitive ratio. Our algorithm is based on interesting connections to the notion of quotients of a polymatroid that has recently seen applications in polymatroid sparsification [Quanrud, 2024]. We generalize the previously known result for the online disjoint set cover problem [Emek et al., 2019] and also address several other packing problems in a unified fashion. For the special case of packing disjoint spanning trees in a graph (or a hypergraph) whose edges arrive online, we provide an alternative to our general algorithm that is simpler and faster while achieving the same poly-logarithmic competitive ratio.
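For contrast, a natural first-fit greedy (a baseline sketch only, not the paper's polylog-competitive algorithm) packs each arriving edge into the first forest in which it does not close a cycle, using one union-find structure per forest:

    # Online first-fit packing of edges into edge-disjoint forests. Each forest
    # that ends up acyclic and spanning is a spanning tree of the final graph.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]   # path halving
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return False
            self.parent[rx] = ry
            return True

    def greedy_forest_packing(n, edge_stream):
        forests, structures = [], []
        for u, v in edge_stream:
            for uf, forest in zip(structures, forests):
                if uf.union(u, v):          # edge joins two components: accept
                    forest.append((u, v))
                    break
            else:                           # opens a new forest
                uf = UnionFind(n)
                uf.union(u, v)
                structures.append(uf)
                forests.append([(u, v)])
        return forests

    # K_4 arriving edge by edge: the greedy recovers 2 disjoint spanning trees
    print(greedy_forest_packing(4, [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3), (0, 3)]))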
Cite as
Karthekeyan Chandrasekaran, Chandra Chekuri, and Weihao Zhu. Online Disjoint Spanning Trees and Polymatroid Bases. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 44:1-44:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chandrasekaran_et_al:LIPIcs.ICALP.2025.44,
author = {Chandrasekaran, Karthekeyan and Chekuri, Chandra and Zhu, Weihao},
title = {{Online Disjoint Spanning Trees and Polymatroid Bases}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {44:1--44:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.44},
URN = {urn:nbn:de:0030-drops-234212},
doi = {10.4230/LIPIcs.ICALP.2025.44},
annote = {Keywords: Disjoint Spanning Trees, Base Packing, Polymatroids, Online Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Karthekeyan Chandrasekaran, Yuri Faenza, Chengyue He, and Jay Sethuraman
Abstract
Scarf’s algorithm - a pivoting procedure that finds a dominating extreme point in a down-monotone polytope - can be used to show the existence of a fractional stable matching in hypergraphs. The problem of finding a fractional stable matching in hypergraphs, however, is PPAD-complete. In this work, we study the behavior of Scarf’s algorithm on arborescence hypergraphs, the family of hypergraphs in which hyperedges correspond to the paths of an arborescence. For arborescence hypergraphs, we prove that Scarf’s algorithm can be implemented to find an integral stable matching in polynomial time. En route to our result, we uncover novel structural properties of bases and pivots for the more general family of network hypergraphs. Our work provides the first proof of polynomial-time convergence of Scarf’s algorithm on hypergraphic stable matching problems, giving hope to the possibility of polynomial-time convergence of Scarf’s algorithm for other families of polytopes.
Cite as
Karthekeyan Chandrasekaran, Yuri Faenza, Chengyue He, and Jay Sethuraman. Scarf’s Algorithm on Arborescence Hypergraphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 45:1-45:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chandrasekaran_et_al:LIPIcs.ICALP.2025.45,
author = {Chandrasekaran, Karthekeyan and Faenza, Yuri and He, Chengyue and Sethuraman, Jay},
title = {{Scarf’s Algorithm on Arborescence Hypergraphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {45:1--45:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.45},
URN = {urn:nbn:de:0030-drops-234220},
doi = {10.4230/LIPIcs.ICALP.2025.45},
annote = {Keywords: Scarf’s algorithm, Arborescence Hypergraphs, Stable Matchings}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Karthekeyan Chandrasekaran, Siyue Liu, and R. Ravi
Abstract
Flows and colorings are disparate concepts in graph algorithms - the former is tractable while the latter is intractable. Tutte [Tutte, 1954; Tutte, 1966] introduced the concept of nowhere-zero flows to unify these two concepts. Jaeger [Jaeger, 1976] showed that nowhere-zero flows are equivalent to cut-balanced orientations. Motivated by connections between nowhere-zero flows, cut-balanced orientations, Nash-Williams' well-balanced orientations, and postman problems, we study optimization versions of nowhere-zero flows and cut-balanced orientations. Given a bidirected graph with asymmetric costs on two orientations of each edge, we study the min cost nowhere-zero k-flow problem and min cost k-cut-balanced orientation problem. We show that both problems are NP-hard to approximate within any finite factor. Given the strong inapproximability result, we design bicriteria approximations for both problems: we obtain a (6,6)-approximation to the min cost nowhere-zero k-flow and a (k,6)-approximation to the min cost k-cut-balanced orientation. For the case of symmetric costs (where the costs of both orientations are the same for every edge), we show that the nowhere-zero k-flow problem remains NP-hard and admits a 3-approximation.
Cite as
Karthekeyan Chandrasekaran, Siyue Liu, and R. Ravi. Minimum Cost Nowhere-Zero Flows and Cut-Balanced Orientations. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 46:1-46:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chandrasekaran_et_al:LIPIcs.ICALP.2025.46,
author = {Chandrasekaran, Karthekeyan and Liu, Siyue and Ravi, R.},
title = {{Minimum Cost Nowhere-Zero Flows and Cut-Balanced Orientations}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {46:1--46:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.46},
URN = {urn:nbn:de:0030-drops-234238},
doi = {10.4230/LIPIcs.ICALP.2025.46},
annote = {Keywords: Nowhere-zero Flows, Cut-balanced orientations, Bicriteria approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Arnab Chatterjee, Amin Coja-Oghlan, Mihyun Kang, Lena Krieg, Maurice Rolvien, and Gregory B. Sorkin
Abstract
We analyse the performance of Belief Propagation Guided Decimation, a physics-inspired message passing algorithm, on the random k-XORSAT problem. Specifically, we derive an explicit threshold up to which the algorithm succeeds with a strictly positive probability Ω(1) that we compute explicitly, but beyond which the algorithm with high probability fails to find a satisfying assignment. In addition, we analyse a thought experiment called the decimation process for which we identify a (non-) reconstruction and a condensation phase transition. The main results of the present work confirm physics predictions from [Ricci-Tersenghi and Semerjian: J. Stat. Mech. 2009] that link the phase transitions of the decimation process with the performance of the algorithm, and improve over partial results from a recent article [Yung: Proc. ICALP 2024].
Cite as
Arnab Chatterjee, Amin Coja-Oghlan, Mihyun Kang, Lena Krieg, Maurice Rolvien, and Gregory B. Sorkin. Belief Propagation Guided Decimation on Random k-XORSAT. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 47:1-47:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chatterjee_et_al:LIPIcs.ICALP.2025.47,
author = {Chatterjee, Arnab and Coja-Oghlan, Amin and Kang, Mihyun and Krieg, Lena and Rolvien, Maurice and Sorkin, Gregory B.},
title = {{Belief Propagation Guided Decimation on Random k-XORSAT}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {47:1--47:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.47},
URN = {urn:nbn:de:0030-drops-234248},
doi = {10.4230/LIPIcs.ICALP.2025.47},
annote = {Keywords: random k-XORSAT, belief propagation, decimation process, random matrices}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shiri Chechik, Hongyi Chen, and Tianyi Zhang
Abstract
Given a graph, an edge coloring assigns colors to edges so that no two adjacent edges share the same color. We are interested in edge coloring algorithms under the W-streaming model. In this model, the algorithm does not have enough memory to hold the entire graph, so the edges of the input graph are read from a data stream one by one in an unknown order, and the algorithm needs to print a valid edge coloring in an output stream. The performance of the algorithm is measured by the amount of space and the number of different colors it uses.
This streaming edge coloring problem has been studied by several works in recent years. When the input graph contains n vertices and has maximum vertex degree Δ, it is known that in the W-streaming model, an O(Δ²)-edge coloring can be computed deterministically with Õ(n) space [Ansari, Saneian, and Zarrabi-Zadeh, 2022], or an O(Δ^{1.5})-edge coloring can be computed by an Õ(n)-space randomized algorithm [Behnezhad, Saneian, 2024] [Chechik, Mukhtar, Zhang, 2024].
In this paper, we achieve polynomial improvement over previous results. Specifically, we show how to improve the number of colors to Õ(Δ^{4/3+ε}) using space Õ(n) deterministically, for any constant ε > 0. This is the first deterministic result that bypasses the quadratic bound on the number of colors while using near-linear space.
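For comparison with the coloring constraint above (an offline textbook baseline, not a W-streaming algorithm and not the paper's method), the classical greedy assigns each edge the smallest color unused at its two endpoints and needs at most 2Δ-1 colors:

    from collections import defaultdict

    # Offline greedy edge coloring: each edge gets the smallest color not yet
    # used at either endpoint, so at most 2*Delta - 1 colors are ever needed.
    def greedy_edge_coloring(edges):
        used = defaultdict(set)          # vertex -> colors on incident edges
        coloring = {}
        for u, v in edges:
            c = 0
            while c in used[u] or c in used[v]:
                c += 1
            coloring[(u, v)] = c
            used[u].add(c)
            used[v].add(c)
        return coloring

    print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (2, 3)]))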
Cite as
Shiri Chechik, Hongyi Chen, and Tianyi Zhang. Improved Streaming Edge Coloring. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 48:1-48:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chechik_et_al:LIPIcs.ICALP.2025.48,
author = {Chechik, Shiri and Chen, Hongyi and Zhang, Tianyi},
title = {{Improved Streaming Edge Coloring}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {48:1--48:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.48},
URN = {urn:nbn:de:0030-drops-234257},
doi = {10.4230/LIPIcs.ICALP.2025.48},
annote = {Keywords: edge coloring, streaming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Antares Chen, Lorenzo Orecchia, and Erasmo Tani
Abstract
Despite there being significant work on developing spectral- [Chan et al., 2018; Lau et al., 2023; Kwok et al., 2022], and metric-embedding-based [Louis and Makarychev, 2016] approximation algorithms for hypergraph conductance, little is known regarding the approximability of other hypergraph partitioning objectives.
This work proposes algorithms for a general model of hypergraph partitioning that unifies both undirected and directed versions of many well-studied partitioning objectives. The first contribution of this paper introduces polymatroidal cut functions, a large class of cut functions amenable to approximation algorithms via metric embeddings and routing multicommodity flows. We demonstrate a simple O(√{log n})-approximation, where n is the number of vertices in the hypergraph, for these problems by rounding relaxations to metrics of negative-type.
The second contribution of this paper generalizes the cut-matching game framework of Khandekar et al. [Khandekar et al., 2007] to tackle polymatroidal cut functions. This yields an almost-linear time O(log n)-approximation algorithm for standard versions of undirected and directed hypergraph partitioning [Kwok et al., 2022]. A technical contribution of our construction is a novel cut-matching game, which greatly relaxes the set of allowed actions by the cut player and allows for the use of approximate s-t maximum flows by the matching player. We believe this to be of independent interest.
Cite as
Antares Chen, Lorenzo Orecchia, and Erasmo Tani. Submodular Hypergraph Partitioning: Metric Relaxations and Fast Algorithms via an Improved Cut-Matching Game. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 49:1-49:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chen_et_al:LIPIcs.ICALP.2025.49,
author = {Chen, Antares and Orecchia, Lorenzo and Tani, Erasmo},
title = {{Submodular Hypergraph Partitioning: Metric Relaxations and Fast Algorithms via an Improved Cut-Matching Game}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {49:1--49:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.49},
URN = {urn:nbn:de:0030-drops-234261},
doi = {10.4230/LIPIcs.ICALP.2025.49},
annote = {Keywords: Hypergraph Partitioning, Cut Improvement, Cut-Matching Game}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kuowen Chen, Jian Li, Yuval Rabani, and Yiran Zhang
Abstract
We study the general norm optimization for combinatorial problems, initiated by Chakrabarty and Swamy (STOC 2019). We propose a general formulation that captures a large class of combinatorial structures: we are given a set 𝒰 of n weighted elements and a family of feasible subsets ℱ. Each subset S ∈ ℱ is called a feasible solution/set of the problem. We denote the value vector by v = {v_i}_{i ∈ [n]}, where v_i ≥ 0 is the value of element i. For any subset S ⊆ 𝒰, we use v[S] to denote the n-dimensional vector {v_e⋅ 𝟏[e ∈ S]}_{e ∈ 𝒰} (i.e., we zero out all entries that are not in S). Let f: ℝⁿ → ℝ_+ be a symmetric monotone norm function. Our goal is to minimize the norm objective f(v[S]) over feasible subset S ∈ ℱ. The problem significantly generalizes the corresponding min-sum and min-max problems.
We present a general equivalent reduction of the norm minimization problem to a multi-criteria optimization problem with logarithmic budget constraints, up to a constant approximation factor. Leveraging this reduction, we obtain constant factor approximation algorithms for the norm minimization versions of several covering problems, such as interval cover and multi-dimensional knapsack cover, and a logarithmic factor approximation for set cover. We also study the norm minimization versions for perfect matching, s-t path and s-t cut. We show that the natural linear programming relaxations for these problems have a large integrality gap. To complement the negative result, we show that, for perfect matching, it is possible to obtain a bi-criteria result: for any constant ε,δ > 0, we can find in polynomial time a nearly perfect matching (i.e., a matching that matches at least a (1-ε) fraction of the vertices) whose cost is at most (8+δ) times the optimum for perfect matching. Moreover, we establish the existence of a polynomial-time O(log log n)-approximation algorithm for the norm minimization variant of the s-t path problem. Specifically, our algorithm achieves an α-approximation with a time complexity of n^{O(log log n / α)}, where 9 ≤ α ≤ log log n.
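To make the objective f(v[S]) concrete (an illustration only, with arbitrary values): form the masked vector v[S] and evaluate a few symmetric monotone norms on it.

    import math

    # The masked vector v[S] from the abstract: keep coordinates indexed by S,
    # zero out the rest, and evaluate symmetric monotone norms of the result.
    def masked_vector(v, S):
        return [vi if i in S else 0.0 for i, vi in enumerate(v)]

    def l1(x):
        return sum(abs(t) for t in x)

    def l2(x):
        return math.sqrt(sum(t * t for t in x))

    def linf(x):
        return max(abs(t) for t in x)

    v = [3.0, 1.0, 4.0, 1.0, 5.0]
    vS = masked_vector(v, {0, 2, 4})
    print(vS, l1(vS), l2(vS), linf(vS))   # min-sum and min-max are special cases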
Cite as
Kuowen Chen, Jian Li, Yuval Rabani, and Yiran Zhang. New Results on a General Class of Minimum Norm Optimization Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chen_et_al:LIPIcs.ICALP.2025.50,
author = {Chen, Kuowen and Li, Jian and Rabani, Yuval and Zhang, Yiran},
title = {{New Results on a General Class of Minimum Norm Optimization Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {50:1--50:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.50},
URN = {urn:nbn:de:0030-drops-234276},
doi = {10.4230/LIPIcs.ICALP.2025.50},
annote = {Keywords: Approximation Algorithms, Minimum Norm Optimization, Linear Programming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang
Abstract
We consider the classic Knapsack problem. Let t and OPT be the capacity and the optimal value, respectively. If one seeks a solution with total profit at least OPT/(1 + ε) and total weight at most t, then Knapsack can be solved in Õ(n + (1/ε)²) time [Chen, Lian, Mao, and Zhang '24][Mao '24]. This running time is the best possible (up to a logarithmic factor), assuming that (min,+)-convolution cannot be solved in truly subquadratic time [Künnemann, Paturi, and Schneider '17][Cygan, Mucha, Węgrzycki, and Włodarczyk '19]. The same upper and lower bounds hold if one seeks a solution with total profit at least OPT and total weight at most (1 + ε)t. Therefore, it is natural to ask the following question.
If one seeks a solution with total profit at least OPT/(1+ε) and total weight at most (1 + ε)t, can Knapsack be solved in Õ(n + (1/ε)^{2-δ}) time for some constant δ > 0?
We answer this open question affirmatively by proposing an Õ(n + (1/ε)^{7/4})-time algorithm.
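For orientation only (a classical textbook baseline, unrelated to the subquadratic schemes above): sorting by profit density, taking the prefix that fits, and comparing with the single most profitable item already guarantees profit at least OPT/2 within weight t.

    # Classical density-greedy baseline for Knapsack: returns a feasible
    # solution of weight <= t whose profit is at least OPT/2 (assuming every
    # single item fits, i.e. each weight is at most t).
    def greedy_half_approx(items, t):
        order = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)
        prefix, weight = [], 0
        for p, w in order:
            if weight + w <= t:
                prefix.append((p, w))
                weight += w
        best_single = [max(items, key=lambda pw: pw[0])]
        return max(prefix, best_single, key=lambda sol: sum(p for p, _ in sol))

    print(greedy_half_approx([(60, 10), (100, 20), (120, 30)], 50))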
Cite as
Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. Weakly Approximating Knapsack in Subquadratic Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 51:1-51:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chen_et_al:LIPIcs.ICALP.2025.51,
author = {Chen, Lin and Lian, Jiayi and Mao, Yuchen and Zhang, Guochuan},
title = {{Weakly Approximating Knapsack in Subquadratic Time}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {51:1--51:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.51},
URN = {urn:nbn:de:0030-drops-234286},
doi = {10.4230/LIPIcs.ICALP.2025.51},
annote = {Keywords: Knapsack, FPTAS}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Xi Chen, William Pires, Toniann Pitassi, and Rocco A. Servedio
Abstract
We study the relative-error property testing model for Boolean functions that was recently introduced in the work of [X. Chen et al., 2025]. In relative-error testing, the testing algorithm gets uniform random satisfying assignments as well as black-box queries to f, and it must accept f with high probability whenever f has the property that is being tested and reject any f that is relative-error far from having the property. Here the relative-error distance from f to a function g is measured with respect to |f^{-1}(1)| rather than with respect to the entire domain size 2ⁿ as in the Hamming distance measure that is used in the standard model; thus, unlike the standard model, relative-error testing allows us to study the testability of sparse Boolean functions that have few satisfying assignments. It was shown in [X. Chen et al., 2025] that relative-error testing is at least as difficult as standard-model property testing, but for many natural and important Boolean function classes the precise relationship between the two notions is unknown.
In this paper we consider the well-studied and fundamental properties of being a conjunction and being a decision list. In the relative-error setting, we give an efficient one-sided error tester for conjunctions with running time and query complexity O(1/ε).
Secondly, we give a two-sided relative-error Õ(1/ε) tester for decision lists, matching the query complexity of the state-of-the-art algorithm in the standard model [Nader H. Bshouty, 2020; I. Diakonikolas et al., 2007].
Cite as
Xi Chen, William Pires, Toniann Pitassi, and Rocco A. Servedio. Relative-Error Testing of Conjunctions and Decision Lists. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 52:1-52:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chen_et_al:LIPIcs.ICALP.2025.52,
author = {Chen, Xi and Pires, William and Pitassi, Toniann and Servedio, Rocco A.},
title = {{Relative-Error Testing of Conjunctions and Decision Lists}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {52:1--52:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.52},
URN = {urn:nbn:de:0030-drops-234291},
doi = {10.4230/LIPIcs.ICALP.2025.52},
annote = {Keywords: Property Testing, Relative Error}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Yu Chen and Zihan Tan
Abstract
We study vertex sparsification for preserving cuts. Given a graph G with a subset T of its vertices called terminals, where |T| = k, a quality-q cut sparsifier is a graph G' that contains T, such that, for any partition (T₁,T₂) of T into non-empty subsets, the value of the min-cut in G' separating T₁ from T₂ is within a factor of q of the value of the min-cut in G separating T₁ from T₂. The construction of cut sparsifiers with good (small) quality and size has been a central problem in graph compression for years.
Planar graphs and quasi-bipartite graphs are two important special families studied in this research direction. The main results in this paper are new cut sparsifier constructions for them in the high-quality regime (where q = 1 or 1+ε for small ε > 0).
We first show that every planar graph admits a planar quality-(1+ε) cut sparsifier of size Õ(k/poly(ε)), which is in sharp contrast with the lower bound of 2^{Ω(k)} for the quality-1 case.
We then show that every quasi-bipartite graph admits a quality-1 cut sparsifier of size 2^{Õ(k²)}. This is the second graph family shown to improve over the doubly-exponential bound for general graphs (previously, only planar graphs were known to admit quality-1 cut sparsifiers of single-exponential size).
Lastly, we show that contraction, a common approach for constructing cut sparsifiers adopted in most previous works, does not always give optimal bounds for cut sparsifiers. We demonstrate this by showing that the optimal size bound for quality-(1+ε) contraction-based cut sparsifiers for quasi-bipartite graphs lies in the range [k^{Ω̃(1/ε)}, k^{O(1/ε²)}], while in previous work an upper bound of Õ(k/ε²) was achieved via a non-contraction approach.
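The quality of a candidate sparsifier can be checked against the definition above by brute force (exponential in k, for illustration only); the sketch below assumes networkx, undirected graphs with positive edge capacities, connectivity, and terminal names distinct from the auxiliary nodes "s" and "t".

    from itertools import combinations
    import networkx as nx

    # Min-cut separating terminal sets T1 and T2: attach an infinite-capacity
    # super source/sink and run max-flow on a bidirected copy of the graph.
    def terminal_mincut(G, T1, T2):
        H = nx.DiGraph()
        for u, v, data in G.edges(data=True):
            H.add_edge(u, v, capacity=data["capacity"])
            H.add_edge(v, u, capacity=data["capacity"])
        for x in T1:
            H.add_edge("s", x, capacity=float("inf"))
        for y in T2:
            H.add_edge(y, "t", capacity=float("inf"))
        return nx.minimum_cut_value(H, "s", "t")

    # Worst ratio between G and its candidate sparsifier Gp over all
    # bipartitions of the terminal set T (both graphs assumed connected).
    def sparsifier_quality(G, Gp, T):
        T, worst = list(T), 1.0
        for r in range(1, len(T)):
            for T1 in combinations(T, r):
                T2 = [x for x in T if x not in T1]
                a = terminal_mincut(G, T1, T2)
                b = terminal_mincut(Gp, T1, T2)
                worst = max(worst, a / b, b / a)
        return worst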
Cite as
Yu Chen and Zihan Tan. Cut-Preserving Vertex Sparsifiers for Planar and Quasi-Bipartite Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chen_et_al:LIPIcs.ICALP.2025.53,
author = {Chen, Yu and Tan, Zihan},
title = {{Cut-Preserving Vertex Sparsifiers for Planar and Quasi-Bipartite Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {53:1--53:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.53},
URN = {urn:nbn:de:0030-drops-234304},
doi = {10.4230/LIPIcs.ICALP.2025.53},
annote = {Keywords: Terminal Cut, Graph Sparsification}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Zejia Chen, Yulin Wang, Chihao Zhang, and Zihan Zhang
Abstract
We examine various perspectives on the decay of correlation for the uniform distribution over proper q-edge colorings of graphs with maximum degree Δ.
First, we establish the coupling independence property when q ≥ 3Δ for general graphs. Together with the recent work of Chen, Feng, Guo, Zhang and Zou (2024), this result implies a fully polynomial-time approximation scheme (FPTAS) for counting the number of proper q-edge colorings.
Next, we prove the strong spatial mixing property on trees, provided that q > (3+o(1))Δ. The strong spatial mixing property is derived from the spectral independence property of a version of the weighted edge coloring distribution, which is established using the matrix trickle-down method developed in Abdolazimi, Liu and Oveis Gharan (FOCS, 2021) and Wang, Zhang and Zhang (STOC, 2024).
Finally, we show that the weak spatial mixing property holds on trees with maximum degree Δ if and only if q ≥ 2Δ-1.
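The quantity that the FPTAS mentioned above approximates can be computed by brute force on tiny instances (exponential in the number of edges, for illustration only):

    from itertools import product

    # Count proper q-edge colorings of a tiny graph by enumeration: adjacent
    # edges (edges sharing a vertex) must receive different colors.
    def count_proper_edge_colorings(edges, q):
        m = len(edges)
        count = 0
        for coloring in product(range(q), repeat=m):
            ok = all(coloring[i] != coloring[j]
                     for i in range(m) for j in range(i + 1, m)
                     if set(edges[i]) & set(edges[j]))
            count += ok
        return count

    # path with 3 edges, q = 3 colors: 3 * 2 * 2 = 12 proper edge colorings
    print(count_proper_edge_colorings([(0, 1), (1, 2), (2, 3)], 3))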
Cite as
Zejia Chen, Yulin Wang, Chihao Zhang, and Zihan Zhang. Decay of Correlation for Edge Colorings When q > 3Δ. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 54:1-54:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chen_et_al:LIPIcs.ICALP.2025.54,
author = {Chen, Zejia and Wang, Yulin and Zhang, Chihao and Zhang, Zihan},
title = {{Decay of Correlation for Edge Colorings When q > 3\Delta}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {54:1--54:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.54},
URN = {urn:nbn:de:0030-drops-234314},
doi = {10.4230/LIPIcs.ICALP.2025.54},
annote = {Keywords: Strong Spatial Mixing, Edge Coloring, Approximate Counting}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shabarish Chenakkod, Michał Dereziński, and Xiaoyu Dong
Abstract
An oblivious subspace embedding is a random m × n matrix Π such that, for any d-dimensional subspace, with high probability Π preserves the norms of all vectors in that subspace within a 1±ε factor. In this work, we give an oblivious subspace embedding with the optimal dimension m = Θ(d/ε²) that has a near-optimal sparsity of Õ(1/ε) non-zero entries per column of Π. This is the first result to nearly match the conjecture of Nelson and Nguyen [FOCS 2013] in terms of the best sparsity attainable by an optimal oblivious subspace embedding, improving on a prior bound of Õ(1/ε⁶) non-zeros per column [Chenakkod et al., STOC 2024]. We further extend our approach to the non-oblivious setting, proposing a new family of Leverage Score Sparsified embeddings with Independent Columns, which yield faster runtimes for matrix approximation and regression tasks.
In our analysis, we develop a new method which uses a decoupling argument together with the cumulant method for bounding the edge universality error of isotropic random matrices. To achieve near-optimal sparsity, we combine this general-purpose approach with new trace inequalities that leverage the specific structure of our subspace embedding construction.
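A small numerical illustration of the definition (the sparse sign construction and parameter choices below are generic, not the paper's construction): if the singular values of ΠU are close to 1 for an orthonormal basis U of a subspace, then Π preserves the norm of every vector in that subspace up to 1±ε.

    import numpy as np

    # Generic sparse embedding: s random signs per column, scaled by 1/sqrt(s),
    # applied to an orthonormal basis U of a random d-dimensional subspace.
    rng = np.random.default_rng(0)
    n, d, eps = 2000, 20, 0.25
    m = int(4 * d / eps**2)              # embedding dimension on the order of d/eps^2
    s = 8                                # non-zeros per column

    Pi = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=s, replace=False)
        Pi[rows, col] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

    U, _ = np.linalg.qr(rng.standard_normal((n, d)))    # orthonormal basis
    sv = np.linalg.svd(Pi @ U, compute_uv=False)
    print("singular values of Pi @ U in", (sv.min(), sv.max()))   # roughly within 1 +/- eps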
Cite as
Shabarish Chenakkod, Michał Dereziński, and Xiaoyu Dong. Optimal Oblivious Subspace Embeddings with Near-Optimal Sparsity. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 55:1-55:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chenakkod_et_al:LIPIcs.ICALP.2025.55,
author = {Chenakkod, Shabarish and Derezi\'{n}ski, Micha{\l} and Dong, Xiaoyu},
title = {{Optimal Oblivious Subspace Embeddings with Near-Optimal Sparsity}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {55:1--55:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.55},
URN = {urn:nbn:de:0030-drops-234324},
doi = {10.4230/LIPIcs.ICALP.2025.55},
annote = {Keywords: Randomized linear algebra, matrix sketching, subspace embeddings}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Jiaqi Cheng and Rishab Goyal
Abstract
We design a generic compiler to boost any non-trivial succinct non-interactive argument of knowledge (SNARK) to full succinctness. Our results come in two flavors:
1) For any constant ε > 0, any SNARK with proof size |π| < |ω|/(λ^ε) + poly(λ, |x|) can be upgraded to a fully succinct SNARK, where all system parameters (such as proof/CRS sizes and setup/verifier run-times) grow as fixed polynomials in λ, independent of witness size.
2) Under an additional assumption that the underlying SNARK has an efficient knowledge extractor, we further improve our result to upgrade any non-trivial SNARK. For example, we show how to design fully succinct SNARKs from SNARKs with proofs of length |ω| - Ω(λ), or |ω|/(1+ε) + poly(λ, |x|) for any constant ε > 0. Our result reduces the long-standing challenge of designing fully succinct SNARKs to designing arguments of knowledge that beat the trivial construction. It also establishes optimality of rate-1 arguments of knowledge (such as NIZKs [Gentry-Groth-Ishai-Peikert-Sahai-Smith; JoC'15] and BARGs [Devadas-Goyal-Kalai-Vaikuntanathan, Paneth-Pass; FOCS'22]), and suggests that any further improvement is tantamount to designing fully succinct SNARKs, and thus requires bypassing established black-box barriers [Gentry-Wichs; STOC'11].
Cite as
Jiaqi Cheng and Rishab Goyal. Boosting SNARKs and Rate-1 Barrier in Arguments of Knowledge. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 56:1-56:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{cheng_et_al:LIPIcs.ICALP.2025.56,
author = {Cheng, Jiaqi and Goyal, Rishab},
title = {{Boosting SNARKs and Rate-1 Barrier in Arguments of Knowledge}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {56:1--56:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.56},
URN = {urn:nbn:de:0030-drops-234339},
doi = {10.4230/LIPIcs.ICALP.2025.56},
annote = {Keywords: SNARGs, RAM Delegation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shucheng Chi, Ran Duan, Benyu Wang, and Tianle Xie
Abstract
Given a graph G = (V,E) (n = |V|, m = |E|) and two vertices s,t ∈ V, the f-fault replacement path (fFRP) problem computes, for every set F of at most f edges, the distance from s to t when edges in F fail. A recent result shows that 2FRP in directed graphs can be solved in Õ(n³) time [Vassilevska Williams, Woldeghebriel, Xu 2022]. In this paper, we give a 3FRP algorithm with deterministic Õ(n³) running time for undirected weighted graphs, which almost matches the size of the output. This implies that fFRP in undirected graphs can be solved in nearly optimal Õ(n^f) time for all f ≥ 3.
To construct our 3FRP algorithm, we introduce an incremental distance sensitivity oracle (DSO) for undirected graphs with Õ(n²) worst-case update time, while preprocessing time, space, and query time are still Õ(n³), Õ(n²) and Õ(1), respectively, which match the static DSO [Bernstein and Karger 2009]. Here in a DSO, we can preprocess a graph so that the distance between any pair of vertices given any failed edge can be answered efficiently. From the recent result in [Peng and Rubinstein 2023], we can obtain an offline dynamic DSO from the incremental worst-case DSO, which makes the construction of our 3FRP algorithm more convenient. By the offline dynamic DSO, we can also construct a 2-fault single-source replacement path (2-fault SSRP) algorithm in Õ(n³) time, that is, from a given vertex s, we want to find the distance to any vertex t when any pair of edges fail. Thus the Õ(n³) time complexity for 2-fault SSRP is also nearly optimal.
Now we know that in undirected graphs 1FRP can be solved in Õ(m) time [Nardelli, Proietti, Widmayer 2001], and 2FRP and 3FRP in undirected graphs can be solved in Õ(n³) time. In this paper, we also show that a truly subcubic algorithm for 2FRP in undirected weighted graphs does not exist under the APSP hypothesis.
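For orientation, the naive baseline for 1FRP (far slower than the Õ(m) result cited above) re-runs Dijkstra once per edge of a fixed s-t shortest path; a failed edge off that path cannot change the s-t distance. A sketch assuming networkx:

    import networkx as nx

    # Naive 1-fault replacement paths: only the edges of one s-t shortest path
    # matter as single faults, so delete each in turn and recompute the distance.
    def naive_1frp(G, s, t):
        path = nx.dijkstra_path(G, s, t, weight="weight")
        answers = {}
        for u, v in zip(path, path[1:]):
            w = G[u][v]["weight"]
            G.remove_edge(u, v)
            try:
                answers[(u, v)] = nx.dijkstra_path_length(G, s, t, weight="weight")
            except nx.NetworkXNoPath:
                answers[(u, v)] = float("inf")
            G.add_edge(u, v, weight=w)
        return answers

    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (0, 2, 5), (2, 3, 1), (1, 3, 4)])
    print(naive_1frp(G, 0, 3))   # {(0, 1): 6, (1, 2): 5, (2, 3): 5}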
Cite as
Shucheng Chi, Ran Duan, Benyu Wang, and Tianle Xie. Undirected 3-Fault Replacement Path in Nearly Cubic Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 57:1-57:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chi_et_al:LIPIcs.ICALP.2025.57,
author = {Chi, Shucheng and Duan, Ran and Wang, Benyu and Xie, Tianle},
title = {{Undirected 3-Fault Replacement Path in Nearly Cubic Time}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {57:1--57:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.57},
URN = {urn:nbn:de:0030-drops-234346},
doi = {10.4230/LIPIcs.ICALP.2025.57},
annote = {Keywords: Graph Algorithm, Shortest Path, Replacement Path}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Flavio Chierichetti, Mirko Giacchini, Alessandro Panconesi, and Andrea Vattani
Abstract
Online Bipartite Matching with random user arrival is a fundamental problem in the online advertisement ecosystem. Over the last 30 years, many algorithms and impossibility results have been developed for this problem. In particular, the latest impossibility result was established by Manshadi, Oveis Gharan and Saberi [Manshadi et al., 2011] in 2011. Since then, several algorithms have been published in an effort to narrow the gap between the upper and the lower bounds on the competitive ratio.
In this paper we show that no algorithm can achieve a competitive ratio better than 1- e/(e^e) = 0.82062…, improving upon the 0.823 upper bound presented in [Manshadi et al., 2011]. Our construction is simple to state, accompanied by a fully analytic proof, and yields a competitive ratio bound intriguingly similar to 1 - 1/e, the optimal competitive ratio for the fully adversarial Online Bipartite Matching problem.
Although the tightness of our upper bound remains an open question, we show that our construction is extremal in a natural class of instances.
Cite as
Flavio Chierichetti, Mirko Giacchini, Alessandro Panconesi, and Andrea Vattani. A New Impossibility Result for Online Bipartite Matching Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 58:1-58:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chierichetti_et_al:LIPIcs.ICALP.2025.58,
author = {Chierichetti, Flavio and Giacchini, Mirko and Panconesi, Alessandro and Vattani, Andrea},
title = {{A New Impossibility Result for Online Bipartite Matching Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {58:1--58:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.58},
URN = {urn:nbn:de:0030-drops-234354},
doi = {10.4230/LIPIcs.ICALP.2025.58},
annote = {Keywords: Bipartite Matching, Random Graphs, Competitive Ratio}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Nathan Claudet and Simon Perdrix
Abstract
We describe an algorithm with quasi-polynomial runtime n^{log₂(n)+O(1)} for deciding local unitary (LU) equivalence of graph states. The algorithm builds on a recent graphical characterisation of LU-equivalence via generalised local complementation. By first transforming the corresponding graphs into a standard form using usual local complementations, LU-equivalence reduces to the existence of a single generalised local complementation that maps one graph to the other. We crucially demonstrate that this reduces to solving a system of quasi-polynomially many linear equations, avoiding an exponential blow-up. As a byproduct, we generalise Bouchet’s algorithm for deciding local Clifford (LC) equivalence of graph states by allowing the addition of arbitrary linear constraints. We also improve existing bounds on the size of graph states that are LU- but not LC-equivalent. While the smallest known examples involve 27 qubits, and it is established that no such examples exist for up to 8 qubits, we refine this bound by proving that LU- and LC-equivalence coincide for graph states involving up to 19 qubits.
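For readers unfamiliar with the graph operation underlying this result, the Python sketch below (our illustration) implements ordinary local complementation on an adjacency-set representation: the subgraph induced by the neighbourhood of the chosen vertex is complemented. It does not attempt the paper's generalised local complementation or its standard-form algorithm.

from itertools import combinations

def local_complement(adj, v):
    """Return a new adjacency dict after local complementation at v:
    every edge between two neighbours of v is toggled."""
    new = {u: set(nbrs) for u, nbrs in adj.items()}
    for a, b in combinations(sorted(adj[v]), 2):
        if b in new[a]:            # edge present: remove it
            new[a].discard(b)
            new[b].discard(a)
        else:                      # edge absent: add it
            new[a].add(b)
            new[b].add(a)
    return new

# Example: a star centred at 0; local complementation at 0 gives the complete graph on {0,1,2,3}.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(local_complement(star, 0))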
Cite as
Nathan Claudet and Simon Perdrix. Deciding Local Unitary Equivalence of Graph States in Quasi-Polynomial Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 59:1-59:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{claudet_et_al:LIPIcs.ICALP.2025.59,
author = {Claudet, Nathan and Perdrix, Simon},
title = {{Deciding Local Unitary Equivalence of Graph States in Quasi-Polynomial Time}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {59:1--59:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.59},
URN = {urn:nbn:de:0030-drops-234367},
doi = {10.4230/LIPIcs.ICALP.2025.59},
annote = {Keywords: Quantum computing, Graph theory, Entanglement, Local complementation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Roni Con, Zeyu Guo, Ray Li, and Zihan Zhang
Abstract
In this paper, we prove that with high probability, random Reed-Solomon codes approach the half-Singleton bound - the optimal rate versus error tradeoff for linear insdel codes - with linear-sized alphabets. More precisely, we prove that, for any ε > 0 and positive integers n and k, with high probability, random Reed-Solomon codes of length n and dimension k can correct (1-ε)n-2k+1 adversarial insdel errors over alphabets of size n+2^{poly(1/ε)}k. This significantly improves upon the alphabet size demonstrated in the work of Con, Shpilka, and Tamo (IEEE TIT, 2023), who showed the existence of Reed-Solomon codes with exponential alphabet size Õ(binom(n,2k-1)²) precisely achieving the half-Singleton bound.
Our methods are inspired by recent works on list-decoding Reed-Solomon codes. Brakensiek-Gopi-Makam (STOC 2023) showed that random Reed-Solomon codes are list-decodable up to capacity with exponential-sized alphabets, and Guo-Zhang (FOCS 2023) and Alrabiah-Guruswami-Li (STOC 2024) improved the alphabet size to linear. We achieve a similar alphabet-size reduction by likewise establishing strong bounds on the probability that certain random rectangular matrices are full rank. To accomplish this in our insdel context, our proof combines the random matrix techniques from list-decoding with structural properties of Longest Common Subsequences.
Cite as
Roni Con, Zeyu Guo, Ray Li, and Zihan Zhang. Random Reed-Solomon Codes Achieve the Half-Singleton Bound for Insertions and Deletions over Linear-Sized Alphabets. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 60:1-60:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{con_et_al:LIPIcs.ICALP.2025.60,
author = {Con, Roni and Guo, Zeyu and Li, Ray and Zhang, Zihan},
title = {{Random Reed-Solomon Codes Achieve the Half-Singleton Bound for Insertions and Deletions over Linear-Sized Alphabets}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {60:1--60:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.60},
URN = {urn:nbn:de:0030-drops-234372},
doi = {10.4230/LIPIcs.ICALP.2025.60},
annote = {Keywords: coding theory, error-correcting codes, Reed-Solomon codes, insdel, insertion-deletion errors, half-Singleton bound}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Arjan Cornelissen, Simon Apers, and Sander Gribling
Abstract
Estimating the volume of a convex body is a canonical problem in theoretical computer science. Its study has led to major advances in randomized algorithms, Markov chain theory, and computational geometry. In particular, determining the query complexity of volume estimation to a membership oracle has been a longstanding open question. Most of the previous work focuses on the high-dimensional limit. In this work, we tightly characterize the deterministic, randomized and quantum query complexity of this problem in the high-precision limit, i.e., when the dimension is constant.
Cite as
Arjan Cornelissen, Simon Apers, and Sander Gribling. How to Compute the Volume in Low Dimension?. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 61:1-61:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{cornelissen_et_al:LIPIcs.ICALP.2025.61,
author = {Cornelissen, Arjan and Apers, Simon and Gribling, Sander},
title = {{How to Compute the Volume in Low Dimension?}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {61:1--61:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.61},
URN = {urn:nbn:de:0030-drops-234381},
doi = {10.4230/LIPIcs.ICALP.2025.61},
annote = {Keywords: Query complexity, computational geometry, quantum computing, volume estimation, high-precision limit}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Martín Costa and Ermiya Farokhnejad
Abstract
The metric k-median problem is a textbook clustering problem. As input, we are given a metric space V of size n and an integer k, and our task is to find a subset S ⊆ V of at most k "centers" that minimizes the total distance from each point in V to its nearest center in S.
Mettu and Plaxton [UAI'02] gave a randomized algorithm for k-median that computes an O(1)-approximation in Õ(nk) time. They also showed that any algorithm for this problem with a bounded approximation ratio must have a running time of Ω(nk). Thus, the running time of their algorithm is optimal up to polylogarithmic factors.
For deterministic k-median, Guha et al. [FOCS'00] gave an algorithm that computes a poly(log (n/k))-approximation in Õ(nk) time, where the degree of the polynomial in the approximation is unspecified. To the best of our knowledge, this remains the state-of-the-art approximation of any deterministic k-median algorithm with this running time.
This leads us to the following natural question: What is the best approximation ratio achievable by a deterministic k-median algorithm with near-optimal running time? We make progress in answering this question by giving a deterministic algorithm that computes an O(log(n/k))-approximation in Õ(nk) time. We also provide a lower bound showing that any deterministic algorithm with this running time must have an approximation ratio of Ω(log n/(log k + log log n)), establishing a gap between the randomized and deterministic settings for k-median.
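To make the objective concrete, here is a minimal Python sketch (ours) of the k-median cost that the algorithms above approximate; the point set, metric, and center choice are illustrative only.

def k_median_cost(points, centers, dist):
    """Total distance from each point to its nearest chosen center."""
    return sum(min(dist(p, c) for c in centers) for p in points)

# Toy metric space on the line with k = 2 centers; the metric is absolute difference.
pts = [0, 1, 2, 10, 11, 12]
print(k_median_cost(pts, centers=[1, 11], dist=lambda a, b: abs(a - b)))  # -> 4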
Cite as
Martín Costa and Ermiya Farokhnejad. Deterministic k-Median Clustering in Near-Optimal Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 62:1-62:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{costa_et_al:LIPIcs.ICALP.2025.62,
author = {Costa, Mart{\'\i}n and Farokhnejad, Ermiya},
title = {{Deterministic k-Median Clustering in Near-Optimal Time}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {62:1--62:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.62},
URN = {urn:nbn:de:0030-drops-234395},
doi = {10.4230/LIPIcs.ICALP.2025.62},
annote = {Keywords: k-clustering, k-median, deterministic algorithms, approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Luís Felipe I. Cunha, Ignasi Sau, Uéverton S. Souza, and Mario Valencia-Pabon
Abstract
An elimination tree of a connected graph G is a rooted tree on the vertices of G obtained by choosing a root v and recursing on the connected components of G-v to obtain the subtrees of v. The graph associahedron of G is a polytope whose vertices correspond to elimination trees of G and whose edges correspond to tree rotations, a natural operation between elimination trees. These objects generalize associahedra, which correspond to the case where G is a path. Ito et al. [ICALP 2023] recently proved that the problem of computing distances on graph associahedra is NP-hard. In this paper we prove that the problem, for a general graph G, is fixed-parameter tractable parameterized by the distance k. Prior to our work, only the case where G is a path was known to be fixed-parameter tractable. To prove our result, we use a novel approach based on a marking scheme that restricts the search to a set of vertices whose size is bounded by a (large) function of k.
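The recursive definition of an elimination tree translates directly into code. The Python sketch below (our illustration, with an arbitrary root-selection rule passed as choose) builds one elimination tree of a connected graph as nested (root, subtrees) pairs; it is unrelated to the paper's marking scheme.

def elimination_tree(adj, vertices, choose=min):
    """Pick a root of the induced subgraph on `vertices`, delete it, and
    recurse on each connected component; returns (root, list of subtrees)."""
    v = choose(vertices)
    remaining = vertices - {v}
    components, seen = [], set()
    for s in remaining:                       # find connected components of G - v
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend((adj[u] & remaining) - comp)
        seen |= comp
        components.append(comp)
    return (v, [elimination_tree(adj, c, choose) for c in components])

# Example: the path 0-1-2-3, always rooting at the smallest remaining vertex.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(elimination_tree(path, {0, 1, 2, 3}))   # (0, [(1, [(2, [(3, [])])])])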
Cite as
Luís Felipe I. Cunha, Ignasi Sau, Uéverton S. Souza, and Mario Valencia-Pabon. Computing Distances on Graph Associahedra Is Fixed-Parameter Tractable. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 63:1-63:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{cunha_et_al:LIPIcs.ICALP.2025.63,
author = {Cunha, Lu{\'\i}s Felipe I. and Sau, Ignasi and Souza, U\'{e}verton S. and Valencia-Pabon, Mario},
title = {{Computing Distances on Graph Associahedra Is Fixed-Parameter Tractable}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {63:1--63:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.63},
URN = {urn:nbn:de:0030-drops-234408},
doi = {10.4230/LIPIcs.ICALP.2025.63},
annote = {Keywords: graph associahedra, elimination tree, rotation distance, parameterized complexity, fixed-parameter tractable algorithm, combinatorial shortest path, reconfiguration}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Artur Czumaj, Guichen Gao, Mohsen Ghaffari, and Shaofeng H.-C. Jiang
Abstract
The k-center problem is a fundamental optimization problem with numerous applications in machine learning, data analysis, data mining, and communication networks. It has been extensively studied in the classical sequential setting for several decades, and more recently there have been efforts to understand the problem in parallel computing, in the Massively Parallel Computation (MPC) model. By now, we have a good understanding of k-center in the case where each local MPC machine has sufficient local memory to store some representatives from each cluster, that is, when one has Ω(k) local memory per machine. While this setting covers the case of small values of k, for a large number of clusters these algorithms require undesirably large local memory, making them poorly scalable. The case of large k has been considered only recently, in the fully scalable low-local-memory MPC model, for Euclidean instances of the k-center problem. However, these earlier works considered only constant-dimensional Euclidean space, required a super-constant number of rounds, and produced only k(1+o(1)) centers whose cost is a super-constant-factor approximation of the optimal k-center cost.
In this work, we significantly improve upon the earlier results for the k-center problem in the fully scalable low-local-memory MPC model. In the low-dimensional Euclidean case in ℝ^d, we present the first constant-round fully scalable MPC algorithm achieving a (2+ε)-approximation. We push the ratio further to a (1+ε)-approximation, albeit using slightly more, namely (1+ε)k, centers. All these results naturally extend to slightly super-constant values of d. In the high-dimensional regime, we provide the first fully scalable MPC algorithm that, in a constant number of rounds, achieves an O(log n / log log n)-approximation for k-center.
Cite as
Artur Czumaj, Guichen Gao, Mohsen Ghaffari, and Shaofeng H.-C. Jiang. Fully Scalable MPC Algorithms for Euclidean k-Center. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 64:1-64:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{czumaj_et_al:LIPIcs.ICALP.2025.64,
author = {Czumaj, Artur and Gao, Guichen and Ghaffari, Mohsen and Jiang, Shaofeng H.-C.},
title = {{Fully Scalable MPC Algorithms for Euclidean k-Center}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {64:1--64:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.64},
URN = {urn:nbn:de:0030-drops-234416},
doi = {10.4230/LIPIcs.ICALP.2025.64},
annote = {Keywords: Massively Parallel Computing, Euclidean Spaces, k-Center Clustering}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Gianluca De Marco and Dariusz R. Kowalski
Abstract
A superimposed code is a collection of binary vectors (codewords) with the property that no vector is contained in the Boolean sum of any k others, enabling unique identification of codewords within any group of k. Superimposed codes are foundational combinatorial tools with applications in areas ranging from distributed computing and data retrieval to fault-tolerant communication. However, classical superimposed codes rely on strict alignment assumptions, limiting their effectiveness in asynchronous and fault-prone environments, which are common in modern systems and applications.
We introduce Ultra-Resilient Superimposed Codes (URSCs), a new class of codes that extends the classic superimposed framework by ensuring a stronger codeword-isolation property and resilience to two types of adversarial perturbations: arbitrary cyclic shifts and partial bitwise corruption (flips). Additionally, URSCs exhibit universality, adapting seamlessly to any number k of concurrent codewords without prior knowledge. This combination of properties is not achieved by any previous construction.
We provide the first polynomial-time construction of URSCs with near-optimal length, significantly outperforming previous constructions with less general features, all without requiring prior knowledge of the number of concurrent codewords, k. We demonstrate that our URSCs significantly advance the state of the art in multiple applications, including uncoordinated beeping networks, where our codes reduce time complexity for local broadcast by nearly two orders of magnitude, and generalized contention resolution in multi-access channel communication.
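For intuition about the classical property being extended, the Python sketch below (ours) checks the standard k-superimposed condition, that no codeword is contained in the Boolean sum of any k others, on codewords encoded as integers. It does not model the shift- and flip-resilience or the universality of URSCs.

from itertools import combinations

def is_k_superimposed(codewords, k):
    """True iff no codeword is covered by the bitwise OR of any k other codewords."""
    for i, c in enumerate(codewords):
        others = codewords[:i] + codewords[i + 1:]
        for group in combinations(others, k):
            boolean_sum = 0
            for w in group:
                boolean_sum |= w
            if (c & ~boolean_sum) == 0:   # every 1-bit of c appears in the OR
                return False
    return True

code = [0b0011, 0b0101, 0b1001, 0b0110]   # codewords as bit masks
print(is_k_superimposed(code, 1))          # True: no codeword contains another
print(is_k_superimposed(code, 2))          # False: e.g. 0b0011 is covered by 0b0101 | 0b0110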
Cite as
Gianluca De Marco and Dariusz R. Kowalski. Ultra-Resilient Superimposed Codes: Near-Optimal Construction and Applications. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 65:1-65:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{demarco_et_al:LIPIcs.ICALP.2025.65,
author = {De Marco, Gianluca and Kowalski, Dariusz R.},
title = {{Ultra-Resilient Superimposed Codes: Near-Optimal Construction and Applications}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {65:1--65:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.65},
URN = {urn:nbn:de:0030-drops-234429},
doi = {10.4230/LIPIcs.ICALP.2025.65},
annote = {Keywords: superimposed codes, ultra-resiliency, deterministic algorithms, uncoordinated beeping networks, contention resolution}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mahsa Derakhshan, Andisheh Ghasemi, and Rajmohan Rajaraman
Abstract
We study the communication complexity of the Minimum Vertex Cover (MVC) problem on general graphs within the k-party one-way communication model. Edges of an arbitrary n-vertex graph are distributed among k parties. The objective is for the parties to collectively find a small vertex cover of the graph while adhering to a communication protocol where each party sequentially sends a message to the next until the last party outputs a valid vertex cover of the whole graph. We are particularly interested in the trade-off between the size of the messages sent and the approximation ratio of the output solution.
It is straightforward to see that any constant approximation protocol for MVC requires communicating Ω(n) bits. Additionally, there exists a trivial 2-approximation protocol where the parties collectively find a maximal matching of the graph greedily and return the subset of vertices matched. This raises a natural question: What is the best approximation ratio achievable using optimal communication of O(n)? We design a protocol with an approximation ratio of (2-2^{-k+1}+ε) and O(n) communication for any arbitrarily small constant ε > 0, which is strictly better than 2 for any constant number of parties. Moreover, we show that achieving an approximation ratio smaller than 3/2 for the two-party case requires n^{1 + Ω(1/lg lg n)} communication, thereby establishing the tightness of our protocol for two parties.
A notable aspect of our protocol is that no edges are communicated between the parties. Instead, for any 1 ≤ i < k, the i-th party only communicates a constant number of vertex covers for all edges assigned to the first i parties. An interesting consequence is that the communication cost of our protocol is O(n) bits, as opposed to the typical Ω(n log n) bits required for many graph problems, such as maximum matching, where protocols commonly involve communicating edges.
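The trivial 2-approximation protocol mentioned above is the folklore greedy maximal-matching cover. The Python sketch below (ours, with made-up data) shows that baseline on a toy split of the edges among parties; it is not the improved (2-2^{-k+1}+ε) protocol of the paper.

def greedy_vertex_cover(edges):
    """Greedily build a maximal matching and return its endpoints,
    which form a vertex cover of size at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered: take both endpoints
            cover.add(u)
            cover.add(v)
    return cover

# Each party holds its own edge list; the baseline just processes them in order.
party_edges = [[(1, 2), (2, 3)], [(2, 4), (5, 6)]]
all_edges = [e for part in party_edges for e in part]
print(greedy_vertex_cover(all_edges))   # {1, 2, 5, 6}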
Cite as
Mahsa Derakhshan, Andisheh Ghasemi, and Rajmohan Rajaraman. One-Way Communication Complexity of Minimum Vertex Cover in General Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 66:1-66:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{derakhshan_et_al:LIPIcs.ICALP.2025.66,
author = {Derakhshan, Mahsa and Ghasemi, Andisheh and Rajaraman, Rajmohan},
title = {{One-Way Communication Complexity of Minimum Vertex Cover in General Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {66:1--66:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.66},
URN = {urn:nbn:de:0030-drops-234430},
doi = {10.4230/LIPIcs.ICALP.2025.66},
annote = {Keywords: Communication Complexity, Minimum Vertex Cover}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Mahsa Derakhshan and Mohammad Saneian
Abstract
In this paper, we study the weighted stochastic matching problem. Let G = (V, E) be a given edge-weighted graph, and let its realization 𝒢 be a random subgraph of G that includes each edge e ∈ E independently with a known probability p_e. The goal in this problem is to pick a sparse subgraph Q of G without prior knowledge of 𝒢, such that the maximum weight matching among the realized edges of Q (i.e., the subgraph Q ∩ 𝒢) in expectation approximates the maximum weight matching of the entire realization 𝒢.
Previous work established that attaining any constant approximation ratio for this problem requires selecting a subgraph of max-degree Ω(1/p), where p = min_{e ∈ E} p_e. On the positive side, there exists a (1-ε)-approximation algorithm by Behnezhad and Derakhshan [FOCS'20], albeit at the cost of a max-degree with exponential dependence on 1/p. Within the O(1/p) query regime, however, the best-known algorithm, due to Dughmi, Kalayci, and Patel [ICALP'23], achieves a 0.536 approximation ratio, improving over the 0.501-approximation algorithm of Behnezhad, Farhadi, Hajiaghayi, and Reyhani [SODA'19].
In this work, we present a 0.68-approximation algorithm with the asymptotically optimal O(1/p) queries per vertex. Our result not only substantially improves the approximation ratio for weighted graphs, but also breaks the well-known 2/3 barrier with the optimal number of queries - even for unweighted graphs. Our analysis involves reducing the problem to designing a randomized matching algorithm on a given stochastic graph with some variance-bounding properties. To achieve these properties, we leverage a randomized algorithm by MacRury and Ma [STOC'24] for a variant of online stochastic matching.
Cite as
Mahsa Derakhshan and Mohammad Saneian. Query Efficient Weighted Stochastic Matching. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 67:1-67:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{derakhshan_et_al:LIPIcs.ICALP.2025.67,
author = {Derakhshan, Mahsa and Saneian, Mohammad},
title = {{Query Efficient Weighted Stochastic Matching}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {67:1--67:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.67},
URN = {urn:nbn:de:0030-drops-234445},
doi = {10.4230/LIPIcs.ICALP.2025.67},
annote = {Keywords: Sublinear algorithms, Stochastic, Matching}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Stéphane Devismes, Yoann Dieudonné, and Arnaud Labourel
Abstract
A mobile agent, starting from a node s of a simple undirected connected graph G = (V,E), has to explore all nodes and edges of G using the minimum number of edge traversals. To do so, the agent uses a deterministic algorithm that allows it to gain information on G as it traverses its edges. During its exploration, the agent must always respect the constraint of knowing a path of length at most D back to node s. The upper bound D is fixed to (1+α)r, where r is the eccentricity of node s (i.e., the maximum distance from s to any other node) and α is any positive real constant. This task was introduced by Duncan et al. [Christian A. Duncan et al., 2006] and is known as distance-constrained exploration.
The penalty of an exploration algorithm running in G is the number of edge traversals made by the agent in excess of |E|. In [Petrisor Panaite and Andrzej Pelc, 1999], Panaite and Pelc gave an algorithm for exploration without any constraint on the moves that is guaranteed to work in every graph G with a (small) penalty of 𝒪(|V|). Hence, a natural question is whether we can obtain a distance-constrained exploration algorithm with the same guarantee.
In this paper, we provide a negative answer to this question. We also observe that an algorithm working in every graph G with a linear penalty in |V| cannot be obtained for the task of fuel-constrained exploration, another variant studied in the literature.
This solves an open problem posed by Duncan et al. in [Christian A. Duncan et al., 2006] and shows a fundamental separation with the task of exploration without constraint on the moves.
Cite as
Stéphane Devismes, Yoann Dieudonné, and Arnaud Labourel. Graph Exploration: The Impact of a Distance Constraint. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 68:1-68:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{devismes_et_al:LIPIcs.ICALP.2025.68,
author = {Devismes, St\'{e}phane and Dieudonn\'{e}, Yoann and Labourel, Arnaud},
title = {{Graph Exploration: The Impact of a Distance Constraint}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {68:1--68:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.68},
URN = {urn:nbn:de:0030-drops-234452},
doi = {10.4230/LIPIcs.ICALP.2025.68},
annote = {Keywords: exploration, graph, mobile agent}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Michael Dinitz, Ama Koranteng, and Yasamin Nazari
Abstract
For a given graph G, a hopset H with hopbound β and stretch α is a set of edges such that between every pair of vertices u and v, there is a path with at most β hops in G ∪ H that approximates the distance between u and v up to a multiplicative stretch of α. Hopsets have found a wide range of applications for distance-based problems in various computational models since the 90s. More recently, there has been significant interest in understanding these fundamental objects from an existential and structural perspective. But all of this work takes a worst-case (or existential) point of view: How many edges do we need to add to satisfy a given hopbound and stretch requirement for any input graph?
We initiate the study of the natural optimization variant of this problem: given a specific graph instance, what is the minimum number of edges that satisfy the hopbound and stretch requirements? We give approximation algorithms for a generalized hopset problem which, when combined with known existential bounds, lead to different approximation guarantees for various regimes depending on hopbound, stretch, and directed vs. undirected inputs. We complement our upper bounds with a lower bound that implies Label Cover hardness for directed hopsets and shortcut sets with hopbound at least 3.
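The hopset guarantee defined above can be verified by brute force on small instances. The Python sketch below (ours; all names are illustrative) computes β-hop-bounded distances via β rounds of Bellman-Ford relaxation and checks the stretch condition on a 5-vertex path.

import math

def bounded_hop_dist(n, edges, src, beta):
    """Distances from src using paths of at most beta edges (beta relaxation rounds)."""
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(beta):
        new = dist[:]
        for u, v, w in edges:            # undirected: relax both directions
            new[v] = min(new[v], dist[u] + w)
            new[u] = min(new[u], dist[v] + w)
        dist = new
    return dist

def is_hopset(n, g_edges, h_edges, beta, alpha):
    """Check that beta-hop distances in G ∪ H are within a factor alpha of true distances in G."""
    union = g_edges + h_edges
    for u in range(n):
        true = bounded_hop_dist(n, g_edges, u, n - 1)   # exact distances in G
        hop = bounded_hop_dist(n, union, u, beta)
        for v in range(n):
            if true[v] < math.inf and hop[v] > alpha * true[v]:
                return False
    return True

path = [(i, i + 1, 1.0) for i in range(4)]               # path on 5 vertices
candidate = [(0, 2, 2.0), (2, 4, 2.0)]                   # two added "shortcut" edges
print(is_hopset(5, path, candidate, beta=2, alpha=1.0))  # True
print(is_hopset(5, path, [], beta=2, alpha=1.0))         # False: vertices 0 and 3 need 3 hops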
Cite as
Michael Dinitz, Ama Koranteng, and Yasamin Nazari. Approximation Algorithms for Optimal Hopsets. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 69:1-69:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{dinitz_et_al:LIPIcs.ICALP.2025.69,
author = {Dinitz, Michael and Koranteng, Ama and Nazari, Yasamin},
title = {{Approximation Algorithms for Optimal Hopsets}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {69:1--69:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.69},
URN = {urn:nbn:de:0030-drops-234464},
doi = {10.4230/LIPIcs.ICALP.2025.69},
annote = {Keywords: Hopsets, Approximation Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sahar Diskin, Ilay Hoshen, and Maksim Zhukovskii
Abstract
We show that for every ε > 0 there exists a sufficiently large d₀ ∈ ℕ such that for every d ≥ d₀, whp the random d-regular graph G(n,d) contains a T-factor for every tree T on at most (1-ε)d/log d vertices. This is best possible since, for large enough integer d, whp G(n,d) does not contain a ((1+ε)d)/(log d)-star-factor. Our method gives a randomised algorithm which whp finds said T-factor and whose expected running time is O(n^{1+o(1)}), as well as an efficient deterministic counterpart.
Cite as
Sahar Diskin, Ilay Hoshen, and Maksim Zhukovskii. Tiling Random Regular Graphs Efficiently. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 70:1-70:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{diskin_et_al:LIPIcs.ICALP.2025.70,
author = {Diskin, Sahar and Hoshen, Ilay and Zhukovskii, Maksim},
title = {{Tiling Random Regular Graphs Efficiently}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {70:1--70:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.70},
URN = {urn:nbn:de:0030-drops-234477},
doi = {10.4230/LIPIcs.ICALP.2025.70},
annote = {Keywords: Random regular graphs, Tree tilings}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Dani Dorfman, Haim Kaplan, Robert E. Tarjan, Mikkel Thorup, and Uri Zwick
Abstract
We present a randomized Õ(n^{3.5})-time algorithm for computing optimal energetic paths for an electric car between all pairs of vertices in an n-vertex directed graph with positive and negative costs, or gains, which are defined to be the negatives of the costs. The optimal energetic paths are finite and well-defined even if the graph contains negative-cost, or equivalently, positive-gain, cycles. This makes the problem much more challenging than standard shortest paths problems.
More specifically, for every two vertices s and t in the graph, the algorithm computes α_B(s,t), the maximum amount of charge the car can reach t with, if it starts at s with full battery, i.e., with charge B, where B is the capacity of the battery. The algorithm also outputs a concise description of the optimal energetic paths that achieve these values. In the presence of positive-gain cycles, optimal paths are not necessarily simple. For dense graphs, our new Õ(n^{3.5}) time algorithm improves on a previous Õ(mn²)-time algorithm of Dorfman et al. [ESA 2023] for the problem.
The gain of an arc is the amount of charge added to the battery of the car when traversing the arc. The charge in the battery can never exceed the capacity B of the battery and can never be negative. An arc of positive gain may correspond, for example, to a downhill road segment, while an arc with a negative gain may correspond to an uphill segment. A positive-gain cycle, if one exists, can be used in certain cases to charge the battery to its capacity. This makes the problem more interesting and more challenging. As mentioned, optimal energetic paths are well-defined even in the presence of positive-gain cycles. Positive-gain cycles may arise when certain road segments have magnetic charging strips, or when the electric car has solar panels.
Combined with a result of Dorfman et al. [SOSA 2024], this also provides a randomized Õ(n^{3.5})-time algorithm for computing minimum-cost paths between all pairs of vertices in an n-vertex graph when the battery can be externally recharged, at varying costs, at intermediate vertices.
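The battery dynamics described above, charge capped at B and never allowed to go negative, are easy to state in code. The Python sketch below (ours) simulates the charge along a fixed sequence of arc gains and reports None when the traversal is infeasible; it has nothing to do with the Õ(n^{3.5}) algorithm itself.

def final_charge(gains, start_charge, capacity):
    """Charge left after traversing arcs with the given gains, or None if some
    arc cannot be traversed; the charge is capped at `capacity` and must stay >= 0."""
    charge = start_charge
    for g in gains:
        charge = min(capacity, charge + g)
        if charge < 0:
            return None
    return charge

# Capacity B = 10, starting with a full battery.
print(final_charge([4, -7, 5, -6], start_charge=10, capacity=10))  # -> 2
print(final_charge([4, -7, 5, -9], start_charge=10, capacity=10))  # -> None (charge would go negative)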
Cite as
Dani Dorfman, Haim Kaplan, Robert E. Tarjan, Mikkel Thorup, and Uri Zwick. Faster All-Pairs Optimal Electric Car Routing. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 71:1-71:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{dorfman_et_al:LIPIcs.ICALP.2025.71,
author = {Dorfman, Dani and Kaplan, Haim and Tarjan, Robert E. and Thorup, Mikkel and Zwick, Uri},
title = {{Faster All-Pairs Optimal Electric Car Routing}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {71:1--71:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.71},
URN = {urn:nbn:de:0030-drops-234486},
doi = {10.4230/LIPIcs.ICALP.2025.71},
annote = {Keywords: EV routing, Shortest Paths, Shortcuts, Sampling}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Max Dupré la Tour, Manuel Lafond, Ndiamé Ndiaye, and Adrian Vetta
Abstract
A graph G = (V,E) is a k-leaf power if there is a tree T whose leaves are the vertices of G, with the property that a pair of distinct leaves u and v share an edge in G if and only if they are distance at most k apart in T. For k ≤ 4, it is known that there exists a finite set F_k of graphs such that the class ℒ(k) of k-leaf power graphs is characterized as the set of strongly chordal graphs that do not contain any graph in F_k as an induced subgraph. We prove no such characterization holds for k ≥ 5. That is, for any k ≥ 5, there is no finite set F_k of graphs such that ℒ(k) is equivalent to the set of strongly chordal graphs that do not contain as an induced subgraph any graph in F_k.
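The definition of a k-leaf power translates into a short brute-force construction. The Python sketch below (ours; the tree and labels are illustrative) computes tree distances between leaves via BFS and adds an edge whenever the distance is at most k.

from collections import deque
from itertools import combinations

def k_leaf_power(tree_adj, leaves, k):
    """Edge set of the k-leaf power: two leaves are adjacent iff their tree distance is <= k."""
    def dists_from(s):
        d, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in tree_adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    return {(u, v) for u, v in combinations(sorted(leaves), 2)
            if dists_from(u)[v] <= k}

# A spider with centre c and three legs of length 2; the leaves are a, b, d (pairwise distance 4).
tree = {"c": ["x", "y", "z"], "x": ["c", "a"], "y": ["c", "b"], "z": ["c", "d"],
        "a": ["x"], "b": ["y"], "d": ["z"]}
print(k_leaf_power(tree, ["a", "b", "d"], k=3))   # set(): no edges
print(k_leaf_power(tree, ["a", "b", "d"], k=4))   # triangle on the three leaves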
Cite as
Max Dupré la Tour, Manuel Lafond, Ndiamé Ndiaye, and Adrian Vetta. k-Leaf Powers Cannot Be Characterized by a Finite Set of Forbidden Induced Subgraphs for k ≥ 5. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 72:1-72:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{duprelatour_et_al:LIPIcs.ICALP.2025.72,
author = {Dupr\'{e} la Tour, Max and Lafond, Manuel and Ndiaye, Ndiam\'{e} and Vetta, Adrian},
title = {{k-Leaf Powers Cannot Be Characterized by a Finite Set of Forbidden Induced Subgraphs for k ≥ 5}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {72:1--72:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.72},
URN = {urn:nbn:de:0030-drops-234499},
doi = {10.4230/LIPIcs.ICALP.2025.72},
annote = {Keywords: Leaf Powers, Forbidden Graph Characterizations, Strongly Chordal Graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Koppány István Encz, Monaldo Mastrolilli, and Eleonora Vercesi
Abstract
Branch-and-bound algorithms (B&B) and polynomial-time approximation schemes (PTAS) are two seemingly distant areas of combinatorial optimization. We intend to (partially) bridge the gap between them while expanding the boundary of theoretical knowledge on the B&B framework. Branch-and-bound algorithms typically guarantee that an optimal solution is eventually found. However, we show that the standard implementation of branch-and-bound for certain knapsack and scheduling problems also exhibits PTAS-like behaviour, yielding increasingly better solutions within polynomial time. Our findings are supported by computational experiments and comparisons with benchmark methods.
Cite as
Koppány István Encz, Monaldo Mastrolilli, and Eleonora Vercesi. Branch-And-Bound Algorithms as Polynomial-Time Approximation Schemes. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 73:1-73:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{encz_et_al:LIPIcs.ICALP.2025.73,
author = {Encz, Kopp\'{a}ny Istv\'{a}n and Mastrolilli, Monaldo and Vercesi, Eleonora},
title = {{Branch-And-Bound Algorithms as Polynomial-Time Approximation Schemes}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {73:1--73:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.73},
URN = {urn:nbn:de:0030-drops-234502},
doi = {10.4230/LIPIcs.ICALP.2025.73},
annote = {Keywords: Branch-and-bound algorithm, Polynomial-time approximation scheme, Parallel machine scheduling problem, Knapsack problem}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sándor P. Fekete, Peter Kramer, Jan-Marc Reinhardt, Christian Rieck, and Christian Scheffer
Abstract
Tilt models offer intuitive and clean definitions of complex systems in which particles are influenced by global control commands. Despite a wide range of applications, there has been almost no theoretical investigation into the associated issues of filling and draining geometric environments. This is partly because a globally controlled system (i.e., passive matter) exhibits highly complex behavior that cannot be locally restricted. Thus, there is a strong need for theoretical studies that investigate these models both (1) in terms of relative power to each other, and (2) from a complexity theory perspective. In this work, we provide (1) general tools for comparing and contrasting different models of global control, and (2) both complexity and algorithmic results on filling and draining.
Cite as
Sándor P. Fekete, Peter Kramer, Jan-Marc Reinhardt, Christian Rieck, and Christian Scheffer. Drainability and Fillability of Polyominoes in Diverse Models of Global Control. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 74:1-74:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{fekete_et_al:LIPIcs.ICALP.2025.74,
author = {Fekete, S\'{a}ndor P. and Kramer, Peter and Reinhardt, Jan-Marc and Rieck, Christian and Scheffer, Christian},
title = {{Drainability and Fillability of Polyominoes in Diverse Models of Global Control}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {74:1--74:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.74},
URN = {urn:nbn:de:0030-drops-234518},
doi = {10.4230/LIPIcs.ICALP.2025.74},
annote = {Keywords: Global control, full Tilt, single Tilt, Fillability, Drainability, Polyominoes, Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shiyuan Feng, William Swartworth, and David Woodruff
Abstract
We consider the heavy-hitters and F_p moment estimation problems in the sliding window model. For F_p moment estimation with 1 < p ≤ 2, we show that it is possible to give a (1 ± ε) multiplicative approximation to the F_p moment with 2/3 probability on any given window of size n using Õ((1/ε^p) log² n + (1/ε²) log n) bits of space. We complement this result with a lower bound showing that our algorithm gives tight bounds up to factors of log log n and log(1/ε). As a consequence of our F₂ moment estimation algorithm, we show that the heavy-hitters problem can be solved on an arbitrary window using O((1/ε²) log² n) space, which is tight.
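As a baseline for what the space-efficient algorithms above approximate, the Python sketch below (ours) computes the exact F_p moment, the sum of p-th powers of item frequencies, over the last `window` elements after every arrival; it uses space linear in the window size, unlike the sketches in the paper.

from collections import Counter, deque

def sliding_fp(stream, window, p):
    """Exact F_p moment of the most recent `window` items, reported after each arrival."""
    counts, buf, out = Counter(), deque(), []
    for x in stream:
        buf.append(x)
        counts[x] += 1
        if len(buf) > window:            # expire the oldest item
            old = buf.popleft()
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        out.append(sum(f ** p for f in counts.values()))
    return out

print(sliding_fp("abracadabra", window=5, p=2))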
Cite as
Shiyuan Feng, William Swartworth, and David Woodruff. Tight Bounds for Heavy-Hitters and Moment Estimation in the Sliding Window Model. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 75:1-75:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{feng_et_al:LIPIcs.ICALP.2025.75,
author = {Feng, Shiyuan and Swartworth, William and Woodruff, David},
title = {{Tight Bounds for Heavy-Hitters and Moment Estimation in the Sliding Window Model}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {75:1--75:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.75},
URN = {urn:nbn:de:0030-drops-234524},
doi = {10.4230/LIPIcs.ICALP.2025.75},
annote = {Keywords: sketching, streaming, heavy hitters, sliding window, moment estimation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ying Feng and Piotr Indyk
Abstract
For two d-dimensional point sets A,B of size up to n, the Chamfer distance from A to B is defined as CH(A,B) = ∑_{a ∈ A} min_{b ∈ B} ‖a-b‖. The Chamfer distance is a widely used measure for quantifying dissimilarity between sets of points, used in many machine learning and computer vision applications. A recent work of Bakshi et al. [NeurIPS'23] gave the first near-linear time (1+ε)-approximate algorithm, with a running time of 𝒪(nd log(n)/ε²). In this paper we improve the running time further, to 𝒪(nd(log log n + log(1/ε))/ε²). When ε is a constant, this reduces the gap between the upper bound and the trivial Ω(dn) lower bound significantly, from 𝒪(log n) to 𝒪(log log n).
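For concreteness, the quadratic-time brute-force evaluation of the Chamfer distance exactly as defined above is a few lines of Python (ours); the paper's contribution is approximating this quantity in near-linear time.

import math

def chamfer(A, B):
    """CH(A, B) = sum over a in A of the Euclidean distance from a to its nearest point in B."""
    return sum(min(math.dist(a, b) for b in B) for a in A)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 1.0), (1.0, 1.0), (5.0, 5.0)]
print(chamfer(A, B))   # 2.0; note that CH(A, B) and CH(B, A) differ in general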
Cite as
Ying Feng and Piotr Indyk. Even Faster Algorithm for the Chamfer Distance. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 76:1-76:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{feng_et_al:LIPIcs.ICALP.2025.76,
author = {Feng, Ying and Indyk, Piotr},
title = {{Even Faster Algorithm for the Chamfer Distance}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {76:1--76:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.76},
URN = {urn:nbn:de:0030-drops-234531},
doi = {10.4230/LIPIcs.ICALP.2025.76},
annote = {Keywords: Chamfer distance}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Adi Fine, Haim Kaplan, and Uri Stemmer
Abstract
We consider a simple load-balancing game between an algorithm and an adaptive adversary. In a simplified version of this game, the adversary observes the assignment of jobs to machines and selects a machine to kill. The algorithm must then restart the jobs from the failed machine on other machines. The adversary repeats this process, observing the new assignment and eliminating another machine, and so on. The adversary aims to force the algorithm to perform many restarts, while we seek a robust algorithm that minimizes restarts regardless of the adversary’s strategy. This game was recently introduced by Bhattacharya et al. for designing a 3-spanner with low recourse against an adaptive adversary.
We prove that a simple algorithm, which assigns each job to a randomly chosen live bin, incurs O(n log n) recourse against an adaptive adversary. This enables us to construct a much simpler 3-spanner with a recourse that is smaller by a factor of O(log² n) compared to the previous construction, without increasing the update time or the size of the spanner.
This motivates a careful examination of the range of attacks an adaptive adversary can deploy against simple algorithms before resorting to more complex ones. As our case study demonstrates, this attack space may not be as large as it initially appears, enabling the development of robust algorithms that are both simpler and easier to analyze.
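A toy simulation (ours) helps convey the game: the Python sketch below runs the simple assign-to-a-random-live-machine algorithm against one particular adaptive strategy, always killing the currently most loaded machine, and reports the total number of restarts. The paper's O(n log n) bound holds against every adaptive strategy, not just this one.

import random

def simulate_recourse(n, seed=0):
    """n jobs start one per machine; each round the most loaded machine is killed
    and its jobs are restarted on uniformly random live machines. Returns the
    total number of restarts until a single machine remains."""
    rng = random.Random(seed)
    machines = {m: {m} for m in range(n)}          # machine -> set of jobs
    restarts = 0
    while len(machines) > 1:
        victim = max(machines, key=lambda m: len(machines[m]))
        jobs = machines.pop(victim)
        restarts += len(jobs)
        live = list(machines)
        for job in jobs:
            machines[rng.choice(live)].add(job)
    return restarts

print(simulate_recourse(64))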
Cite as
Adi Fine, Haim Kaplan, and Uri Stemmer. Minimizing Recourse in an Adaptive Balls and Bins Game. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 77:1-77:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{fine_et_al:LIPIcs.ICALP.2025.77,
author = {Fine, Adi and Kaplan, Haim and Stemmer, Uri},
title = {{Minimizing Recourse in an Adaptive Balls and Bins Game}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {77:1--77:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.77},
URN = {urn:nbn:de:0030-drops-234544},
doi = {10.4230/LIPIcs.ICALP.2025.77},
annote = {Keywords: Adaptive adversary, load-balancing game, balls-and-bins, randomized algorithms, dynamic 3-spanner, dynamic graph algorithms, adversarial robustness}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Nick Fischer, Marvin Künnemann, Mirza Redžić, and Julian Stieß
Abstract
Is detecting a k-clique in k-partite regular (hyper-)graphs as hard as in the general case? Intuition suggests yes, but proving this - especially for hypergraphs - poses notable challenges. Concretely, we consider a strong notion of regularity in h-uniform hypergraphs, where we essentially require that any subset of at most h-1 vertices is incident to a uniform number of hyperedges. Such notions are studied intensively in the combinatorial block design literature. We show that any f(k)n^{g(k)}-time algorithm for detecting k-cliques in such graphs transfers to an f'(k)n^{g(k)}-time algorithm for the general case, establishing a fine-grained equivalence between the h-uniform hyperclique hypothesis and its natural regular analogue.
Equipped with this regularization result, we then fully resolve the fine-grained complexity of optimizing Boolean constraint satisfaction problems over assignments with k non-zeros. Our characterization depends on the maximum degree d of a constraint function. Specifically, if d ≤ 1, we obtain a linear-time solvable problem, if d = 2, the time complexity is essentially equivalent to k-clique detection, and if d ≥ 3 the problem requires exhaustive-search time under the 3-uniform hyperclique hypothesis. To obtain our hardness results, the regularization result plays a crucial role, enabling a very convenient approach when applied carefully. We believe that our regularization result will find further applications in the future.
Cite as
Nick Fischer, Marvin Künnemann, Mirza Redžić, and Julian Stieß. The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 78:1-78:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{fischer_et_al:LIPIcs.ICALP.2025.78,
author = {Fischer, Nick and K\"{u}nnemann, Marvin and Red\v{z}i\'{c}, Mirza and Stie{\ss}, Julian},
title = {{The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {78:1--78:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.78},
URN = {urn:nbn:de:0030-drops-234559},
doi = {10.4230/LIPIcs.ICALP.2025.78},
annote = {Keywords: fine-grained complexity theory, clique detections in hypergraphs, constraint satisfaction, parameterized algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Maxime Flin and Magnús M. Halldórsson
Abstract
We consider the problem of maintaining a proper (Δ + 1)-vertex coloring in a graph on n vertices with maximum degree Δ undergoing edge insertions and deletions. We give a randomized algorithm with amortized update time Õ(n^{2/3}) against adaptive adversaries, meaning that updates may depend on past decisions by the algorithm. This improves on the very recent Õ(n^{8/9})-update-time algorithm by Behnezhad, Rajaraman, and Wasim (SODA 2025) and matches a natural barrier for dynamic (Δ+1)-coloring algorithms. The main improvements are on the densest regions of the graph, where we use structural hints from the study of distributed graph algorithms.
Cite as
Maxime Flin and Magnús M. Halldórsson. Faster Dynamic (Δ+1)-Coloring Against Adaptive Adversaries. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 79:1-79:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{flin_et_al:LIPIcs.ICALP.2025.79,
author = {Flin, Maxime and Halld\'{o}rsson, Magn\'{u}s M.},
title = {{Faster Dynamic (\Delta+1)-Coloring Against Adaptive Adversaries}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {79:1--79:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.79},
URN = {urn:nbn:de:0030-drops-234560},
doi = {10.4230/LIPIcs.ICALP.2025.79},
annote = {Keywords: Dynamic Graph Algorithms, Coloring}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Pierre Fraigniaud, Maël Luce, Frédéric Magniez, and Ioan Todinca
Abstract
We show that, for every k ≥ 2, C_{2k}-freeness can be decided in O(n^{1-1/k}) rounds in the Broadcast CONGEST model, by a deterministic algorithm. This (deterministic) round-complexity is optimal for k = 2 up to logarithmic factors thanks to the lower bound for C₄-freeness by Drucker et al. [PODC 2014], which holds even for randomized algorithms. Moreover, it matches the round-complexity of the best known randomized algorithms by Censor-Hillel et al. [DISC 2020] for k ∈ {3,4,5}, and by Fraigniaud et al. [PODC 2024] for k ≥ 6. Our algorithm uses parallel BFS-explorations with deterministic selections of the set of paths that are forwarded at each round, in a way similar to what was done for the detection of odd-length cycles by Korhonen and Rybicki [OPODIS 2017]. However, the key element in the design and analysis of our algorithm is a new combinatorial result bounding the "local density" of graphs without 2k-cycles, which we believe is interesting on its own.
Cite as
Pierre Fraigniaud, Maël Luce, Frédéric Magniez, and Ioan Todinca. Deterministic Even-Cycle Detection in Broadcast CONGEST. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 80:1-80:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{fraigniaud_et_al:LIPIcs.ICALP.2025.80,
author = {Fraigniaud, Pierre and Luce, Ma\"{e}l and Magniez, Fr\'{e}d\'{e}ric and Todinca, Ioan},
title = {{Deterministic Even-Cycle Detection in Broadcast CONGEST}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {80:1--80:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.80},
URN = {urn:nbn:de:0030-drops-234573},
doi = {10.4230/LIPIcs.ICALP.2025.80},
annote = {Keywords: local computing, CONGEST model}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Cheng-Hao Fu, Andrea Lincoln, and Rene Reyes
Abstract
In this paper we present tight lower bounds and new upper bounds for hypergraph and database problems. We give tight lower bounds for finding minimum hypercycles, tight lower bounds for a substantial regime of unweighted hypercycle detection, and a new, faster algorithm for longer unweighted hypercycles. We give a worst-case to average-case reduction from detecting a subgraph of a hypergraph in the worst case to counting subgraphs of hypergraphs in the average case. We demonstrate two applications of this worst-case to average-case reduction, which yield average-case lower bounds for counting hypercycles in random hypergraphs and for answering queries in average-case databases. Our tight upper and lower bounds for hypercycle detection in the worst case have immediate implications for the average case via our worst-case to average-case reductions.
Cite as
Cheng-Hao Fu, Andrea Lincoln, and Rene Reyes. Worst-Case and Average-Case Hardness of Hypercycle and Database Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 81:1-81:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{fu_et_al:LIPIcs.ICALP.2025.81,
author = {Fu, Cheng-Hao and Lincoln, Andrea and Reyes, Rene},
title = {{Worst-Case and Average-Case Hardness of Hypercycle and Database Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {81:1--81:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.81},
URN = {urn:nbn:de:0030-drops-234581},
doi = {10.4230/LIPIcs.ICALP.2025.81},
annote = {Keywords: Hypergraphs, hypercycles, fine-grained complexity, average-case complexity, databases}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Harmender Gahlawat, Abhishek Rathod, and Meirav Zehavi
Abstract
The study of cycles through specified vertices is a central topic in graph theory. In this context, we focus on a well-studied computational problem, T-Cycle: given an undirected n-vertex graph G and a set of k vertices T ⊆ V(G), termed terminals, the objective is to determine whether G contains a simple cycle C through all the terminals. Our contribution is twofold: (i) we provide a 2^{O(√k log k)}⋅n-time fixed-parameter deterministic algorithm for T-Cycle on planar graphs; (ii) we provide a k^{O(1)}⋅n-time deterministic kernelization algorithm for T-Cycle on planar graphs where the produced instance is of size k log^{O(1)} k.
Both of our algorithms are optimal in terms of both k and n, up to (poly)logarithmic factors in k, under the ETH. In fact, ours is the first subexponential-time fixed-parameter algorithm for T-Cycle on planar graphs, as well as the first polynomial kernel for T-Cycle on planar graphs. This substantially improves upon and expands the known literature on the parameterized complexity of the problem.
Cite as
Harmender Gahlawat, Abhishek Rathod, and Meirav Zehavi. (Almost-)Optimal FPT Algorithm and Kernel for T-Cycle on Planar Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 82:1-82:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gahlawat_et_al:LIPIcs.ICALP.2025.82,
author = {Gahlawat, Harmender and Rathod, Abhishek and Zehavi, Meirav},
title = {{(Almost-)Optimal FPT Algorithm and Kernel for T-Cycle on Planar Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {82:1--82:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.82},
URN = {urn:nbn:de:0030-drops-234593},
doi = {10.4230/LIPIcs.ICALP.2025.82},
annote = {Keywords: FPT Algorithms, Kernelization, T-Cycle, Subexponential Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Andreas Galanis, Leslie Ann Goldberg, and Paulina Smolarova
Abstract
We consider sampling in the so-called low-temperature regime, which is typically characterised by non-local behaviour and strong global correlations. Canonical examples include sampling independent sets on bipartite graphs and sampling from the ferromagnetic q-state Potts model. Low-temperature sampling is computationally intractable for general graphs, but recent advances based on the polymer method have made significant progress for graph families that exhibit certain expansion properties that reinforce the correlations, including for example expanders, lattices and dense graphs.
One of the most natural graph classes that has so far escaped this algorithmic framework is the class of sparse Erdős-Rényi random graphs, whose expansion only manifests for sufficiently large subsets of vertices; small sets of vertices, on the other hand, have vanishing expansion, which makes them behave independently from the bulk of the graph and therefore weakens the correlations. At a more technical level, the expansion of small sets is crucial for establishing the Kotecký-Preiss condition which underpins the applicability of the framework.
Our main contribution is to develop the polymer method in the low-temperature regime for sparse random graphs. As our running example, we use the Potts and random-cluster models on G(n,d/n) for d = Θ(1), where we show a polynomial-time sampling algorithm for all sufficiently large q and d, at all temperatures. Our approach applies more generally for models that are monotone. Key to our result is a simple polymer definition that blends easily with the connectivity properties of the graph and allows us to show that polymers have size at most O(log n).
Cite as
Andreas Galanis, Leslie Ann Goldberg, and Paulina Smolarova. Low-Temperature Sampling on Sparse Random Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 83:1-83:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{galanis_et_al:LIPIcs.ICALP.2025.83,
author = {Galanis, Andreas and Goldberg, Leslie Ann and Smolarova, Paulina},
title = {{Low-Temperature Sampling on Sparse Random Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {83:1--83:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.83},
URN = {urn:nbn:de:0030-drops-234606},
doi = {10.4230/LIPIcs.ICALP.2025.83},
annote = {Keywords: approximate counting, Glauber dynamics, random cluster model, approximate sampling, Erd\H{o}s-R\'{e}nyi Graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Andreas Galanis, Leslie Ann Goldberg, and Xusheng Zhang
Abstract
Consider a k-SAT formula Φ where every variable appears at most d times, and let σ be a satisfying assignment of Φ sampled proportionally to e^{β m(σ)} where m(σ) is the number of variables set to true and β is a real parameter. Given Φ and σ, can we learn the value of β efficiently?
This problem falls into a recent line of works about single-sample ("one-shot") learning of Markov random fields. The k-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24) where they showed that single-sample learning is possible when roughly d ≤ 2^{k/6.45} and impossible when d ≥ (k+1) 2^{k-1}. Crucially, for their impossibility results they used the existence of unsatisfiable instances which, aside from the gap in d, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of k-SAT formulas of bounded degree.
Our main contribution is to answer this question negatively. We show that one-shot learning for k-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees d as low as k² when β is sufficiently large, and bootstrap this to small values of β when d scales exponentially with k, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on d in terms of β. In particular, for the uniform case β → 0 that has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition d≲ 2^{k/2}. This is nearly optimal (up to constant factors) in the sense that it is known that sampling a uniformly-distributed satisfying assignment is NP-hard for d≳ 2^{k/2}.
Cite as
Andreas Galanis, Leslie Ann Goldberg, and Xusheng Zhang. One-Shot Learning for k-SAT. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 84:1-84:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{galanis_et_al:LIPIcs.ICALP.2025.84,
author = {Galanis, Andreas and Goldberg, Leslie Ann and Zhang, Xusheng},
title = {{One-Shot Learning for k-SAT}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {84:1--84:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.84},
URN = {urn:nbn:de:0030-drops-234610},
doi = {10.4230/LIPIcs.ICALP.2025.84},
annote = {Keywords: Computational Learning Theory, k-SAT, Maximum likelihood estimation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Dániel Garamvölgyi, Ryuhei Mizutani, Taihei Oki, Tamás Schwarcz, and Yutaro Yamaguchi
Abstract
Consider a matroid M whose ground set is equipped with a labeling to an abelian group. A basis of M is called F-avoiding if the sum of the labels of its elements is not in a forbidden label set F. Hörsch, Imolay, Mizutani, Oki, and Schwarcz (2024) conjectured that if an F-avoiding basis exists, then any basis can be transformed into an F-avoiding basis by exchanging at most |F| elements. This proximity conjecture is known to hold for certain specific groups, in the case where |F| ≤ 2, or when the matroid is subsequence-interchangeably base orderable (SIBO), which is a weakening of the so-called strongly base orderable (SBO) property.
In this paper, we settle the proximity conjecture for sparse paving matroids, as well as in the case where |F| ≤ 4. Related to the latter result, we present the first known example of a non-SIBO matroid. We further address the setting of multiple group-label constraints, showing proximity results for the cases of two labelings, SIBO matroids, matroids representable over a fixed finite field, and sparse paving matroids.
Cite as
Dániel Garamvölgyi, Ryuhei Mizutani, Taihei Oki, Tamás Schwarcz, and Yutaro Yamaguchi. Towards the Proximity Conjecture on Group-Labeled Matroids. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 85:1-85:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{garamvolgyi_et_al:LIPIcs.ICALP.2025.85,
author = {Garamv\"{o}lgyi, D\'{a}niel and Mizutani, Ryuhei and Oki, Taihei and Schwarcz, Tam\'{a}s and Yamaguchi, Yutaro},
title = {{Towards the Proximity Conjecture on Group-Labeled Matroids}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {85:1--85:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.85},
URN = {urn:nbn:de:0030-drops-234628},
doi = {10.4230/LIPIcs.ICALP.2025.85},
annote = {Keywords: sparse paving matroid, subsequence-interchangeable base orderability, congruency constraint, multiple labelings}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Paweł Gawrychowski and Wojciech Janczewski
Abstract
A permutation graph is the intersection graph of a set of segments between two parallel lines. In other words, a permutation graph is defined by a permutation π on n elements, such that u and v are adjacent if and only if u < v but π(u) > π(v). We consider the problem of computing the distances in such a graph in the setting of informative labeling schemes.
The goal of such a scheme is to assign a short bitstring 𝓁(u) to every vertex u, such that the distance between u and v can be computed using only 𝓁(u) and 𝓁(v), and no further knowledge about the whole graph (other than that it is a permutation graph). This elegantly captures the intuition that we would like our data structure to be distributed, and often leads to interesting combinatorial challenges while trying to obtain lower and upper bounds that match up to the lower-order terms.
For distance labeling of permutation graphs on n vertices, Katz, Katz, and Peleg [STACS 2000] showed how to construct labels consisting of 𝒪(log² n) bits. Later, Bazzaro and Gavoille [Discret. Math. 309(11)] obtained an asymptotically optimal bound by showing how to construct labels consisting of 9 log n + 𝒪(1) bits, and proving that 3 log n - 𝒪(log log n) bits are necessary. This, however, leaves a quite large gap between the known lower and upper bounds. We close this gap by showing how to construct labels consisting of 3 log n + 𝒪(1) bits.
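As a concrete illustration of the adjacency rule above (and only of the rule, not of the 3 log n + 𝒪(1)-bit labeling scheme), a hypothetical helper that tests adjacency directly from π might look as follows.

```python
# Adjacency in a permutation graph: vertices u < v are adjacent exactly when
# the permutation inverts their order, i.e. pi(u) > pi(v).  pi is a 0-indexed
# list; this illustrates only the definition, not the labeling scheme.

def adjacent(pi, u, v):
    u, v = min(u, v), max(u, v)
    return pi[u] > pi[v]

pi = [2, 0, 3, 1]          # a permutation of {0, 1, 2, 3}
print(adjacent(pi, 0, 1))  # True:  0 < 1 and pi[0] = 2 > pi[1] = 0
print(adjacent(pi, 1, 2))  # False: pi[1] = 0 < pi[2] = 3
```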
Cite as
Paweł Gawrychowski and Wojciech Janczewski. Optimal Distance Labeling for Permutation Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 86:1-86:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2025.86,
author = {Gawrychowski, Pawe{\l} and Janczewski, Wojciech},
title = {{Optimal Distance Labeling for Permutation Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {86:1--86:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.86},
URN = {urn:nbn:de:0030-drops-234632},
doi = {10.4230/LIPIcs.ICALP.2025.86},
annote = {Keywords: informative labeling, permutation graph, distance labeling}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Giordano Giambartolomei, Frederik Mallmann-Trenn, and Raimundo Saona
Abstract
Prophet inequalities are a central object of study in optimal stopping theory. In the iid model, a gambler sees values in an online fashion, sampled independently from a given distribution. Upon observing each value, the gambler either accepts it as a reward, or irrevocably rejects it and proceeds to observe the next value. The goal of the gambler, who cannot see the future, is to maximise the expected value of the reward while competing against the expectation of a prophet (the offline maximum). In other words, one seeks to maximise the gambler-to-prophet ratio of the expectations.
This model has been studied with an infinite, a finite, and an unknown number of values. When the gambler faces a random number of values, the model is said to have a random horizon. We consider the model in which the gambler is given a priori knowledge of the horizon’s distribution. Alijani et al. (2020) designed a single-threshold algorithm achieving a ratio of 1/2 when the random horizon has an increasing hazard rate and is independent of the values. We prove that with a single threshold, a ratio of 1/2 is actually achievable for several larger classes of horizon distributions, the largest being known as the 𝒢 class in reliability theory. Moreover, we show that this does not extend to its dual, the ̅𝒢 class (which includes the decreasing hazard rate class), while it can be extended to low-variance horizons. Finally, we construct the first example of a family of horizons for which multiple thresholds are necessary to achieve a nonzero ratio. We establish that the Secretary Problem optimal stopping rule provides one such algorithm, paving the way towards the study of the model beyond single-threshold algorithms.
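To make the single-threshold notion concrete, here is a minimal sketch of such a stopping rule under an assumed Uniform[0,1] value distribution and a random horizon; the threshold below is a placeholder and is not the one analyzed in the paper.

```python
import random

# Illustrative single-threshold stopping rule in the iid model with a random
# horizon: accept the first value exceeding a fixed threshold tau, or the
# last value if none does.  The threshold below is a placeholder, not the
# one analyzed in the paper.

def single_threshold_gambler(values, tau):
    for i, v in enumerate(values):
        if v >= tau or i == len(values) - 1:
            return v

random.seed(0)
horizon = random.randint(1, 10)                     # random horizon
values = [random.random() for _ in range(horizon)]  # iid Uniform[0, 1] values
print(single_threshold_gambler(values, tau=0.5), max(values))
```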
Cite as
Giordano Giambartolomei, Frederik Mallmann-Trenn, and Raimundo Saona. IID Prophet Inequality with Random Horizon: Going Beyond Increasing Hazard Rates. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 87:1-87:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{giambartolomei_et_al:LIPIcs.ICALP.2025.87,
author = {Giambartolomei, Giordano and Mallmann-Trenn, Frederik and Saona, Raimundo},
title = {{IID Prophet Inequality with Random Horizon: Going Beyond Increasing Hazard Rates}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {87:1--87:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.87},
URN = {urn:nbn:de:0030-drops-234643},
doi = {10.4230/LIPIcs.ICALP.2025.87},
annote = {Keywords: Online algorithms, Prophet Inequality, Random Horizon, Secretary Problem}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Daniel Gibney, Jackson Huffstutler, Mano Prakash Parthasarathi, and Sharma V. Thankachan
Abstract
We study the problem of indexing a text T[1..n] to support pattern matching with wildcards. The input of a query is a pattern P[1..m] containing h ∈ [0, k] wildcard (a.k.a. don't care) characters and the output is the set of occurrences of P in T (i.e., starting positions of substrings of T that match P), where k = o(log n) is fixed at index construction. A classic solution by Cole et al. [STOC 2004] provides an index with space complexity O(n ⋅ (c log n)^k / k!) and query time O(m + 2^h log log n + occ), where c > 1 is a constant, and occ denotes the number of occurrences of P in T. We introduce a new data structure that significantly reduces space usage for highly repetitive texts while maintaining efficient query processing. Its space (in words) and query time are as follows:
O(δ log(n/δ) ⋅ c^k (1 + log^k(δ log n)/k!)) and O((m + 2^h + occ) log n)
The parameter δ, known as substring complexity, is a recently introduced measure of repetitiveness that serves as a unifying and lower-bounding metric for several popular measures, including the number of phrases in the LZ77 factorization (denoted by z) and the number of runs in the Burrows-Wheeler Transform (denoted by r). Moreover, O(δ log(n/δ)) represents the optimal space required to encode the data in terms of n and δ, helping us see how close our space is to the minimum required. In another trade-off, we match the query time of Cole et al.’s index using O(n + δ log(n/δ) ⋅ (c log δ)^{k+ε}/k!) space, where ε > 0 is an arbitrarily small constant. We also demonstrate how these techniques can be applied to a more general indexing problem, where the query pattern includes k gaps (a gap can be interpreted as a contiguous sequence of wildcard characters).
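For concreteness, the query semantics (not the index itself) can be specified by a naive scan; the '?' wildcard symbol and the function name below are illustrative assumptions.

```python
# Naive reference matcher specifying the query semantics ('?' stands for a
# wildcard and matches any single character).  The paper's contribution is
# an *index* that answers such queries fast in repetition-aware space; this
# scan is only a specification of the expected output.

def wildcard_occurrences(text, pattern, wildcard="?"):
    n, m = len(text), len(pattern)
    occ = []
    for start in range(n - m + 1):
        if all(p == wildcard or p == text[start + j]
               for j, p in enumerate(pattern)):
            occ.append(start)              # 0-indexed starting positions
    return occ

print(wildcard_occurrences("abracadabra", "a?ra"))  # [0, 7]
```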
Cite as
Daniel Gibney, Jackson Huffstutler, Mano Prakash Parthasarathi, and Sharma V. Thankachan. Repetition Aware Text Indexing for Matching Patterns with Wildcards. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 88:1-88:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gibney_et_al:LIPIcs.ICALP.2025.88,
author = {Gibney, Daniel and Huffstutler, Jackson and Parthasarathi, Mano Prakash and Thankachan, Sharma V.},
title = {{Repetition Aware Text Indexing for Matching Patterns with Wildcards}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {88:1--88:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.88},
URN = {urn:nbn:de:0030-drops-234656},
doi = {10.4230/LIPIcs.ICALP.2025.88},
annote = {Keywords: Pattern Matching, Text Indexing, Wildcard Matching}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Valentin Gledel, Nacim Oijid, Sébastien Tavenas, and Stéphan Thomassé
Abstract
Positional games were introduced by Hales and Jewett in 1963, and their study became more popular when Erdős and Selfridge showed their connection to Ramsey theory and hypergraph coloring in 1973. Several conventions of these games exist, and the most popular one, Maker-Breaker, was proved to be PSPACE-complete by Schaefer in 1978. The study of their complexity then stopped for decades, until 2017, when Bonnet, Jamain, and Saffidine proved that Maker-Breaker is W[1]-complete when parameterized by the number of moves. The study was then intensified when Rahman and Watson improved Schaefer’s result in 2021 by proving that the PSPACE-hardness holds for 6-uniform hypergraphs. More recently, Galliot, Gravier, and Sivignon proved that computing the winner on rank 3 hypergraphs is in P, and Keopke proved that the PSPACE-hardness also holds for 5-uniform hypergraphs.
We focus here on the Client-Waiter and the Waiter-Client conventions. Both were proved to be NP-hard by Csernenszky, Martin, and Pluhár in 2011, but neither completeness nor positive results were known. In this paper, we complete the study of these conventions by proving that the former is PSPACE-complete, even restricted to 6-uniform hypergraphs, and by providing an FPT-algorithm for the latter, parameterized by the size of its largest edge. In particular, the winner of Waiter-Client can be computed in polynomial time in rank k hypergraphs for any fixed integer k. Finally, in search of the exact location of the complexity gap in the Client-Waiter convention, we focus on rank 3 hypergraphs. We provide an algorithm that runs in polynomial time with an oracle in NP.
Cite as
Valentin Gledel, Nacim Oijid, Sébastien Tavenas, and Stéphan Thomassé. On the Complexity of Client-Waiter and Waiter-Client Games. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 89:1-89:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gledel_et_al:LIPIcs.ICALP.2025.89,
author = {Gledel, Valentin and Oijid, Nacim and Tavenas, S\'{e}bastien and Thomass\'{e}, St\'{e}phan},
title = {{On the Complexity of Client-Waiter and Waiter-Client Games}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {89:1--89:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.89},
URN = {urn:nbn:de:0030-drops-234666},
doi = {10.4230/LIPIcs.ICALP.2025.89},
annote = {Keywords: Complexity, positional games, Maker-Breaker, Client-Waiter, Waiter-Client, PSPACE-complete, FPT}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Guilherme de C. M. Gomes, Raul Lopes, and Ignasi Sau
Abstract
In the Directed Disjoint Paths problem (k-DDP), we are given a digraph and k pairs of terminals, and the goal is to find k pairwise vertex-disjoint paths connecting each pair of terminals. Bang-Jensen and Thomassen [SIAM J. Discrete Math. 1992] claimed that k-DDP is NP-complete on tournaments, and this result triggered a very active line of research about the complexity of the problem on tournaments and natural superclasses. We identify a flaw in their proof, which has been acknowledged by the authors, and provide a new NP-completeness proof. From an algorithmic point of view, Fomin and Pilipczuk [J. Comb. Theory B 2019] provided an FPT algorithm for the edge-disjoint version of the problem on semicomplete digraphs, and showed that their technique cannot work for the vertex-disjoint version. We overcome this obstacle by showing that the version of k-DDP where we allow congestion c on the vertices is FPT on semicomplete digraphs provided that c is greater than k/2. This is based on a quite elaborate irrelevant vertex argument inspired by the edge-disjoint version, and we show that our choice of c is best possible for this technique, with a counterexample with no irrelevant vertices when c ≤ k/2. We also prove that k-DDP on digraphs that can be partitioned into h semicomplete digraphs is W[1]-hard parameterized by k+h, which shows that the XP algorithm presented by Chudnovsky, Scott, and Seymour [J. Comb. Theory B 2019] is essentially optimal.
Cite as
Guilherme de C. M. Gomes, Raul Lopes, and Ignasi Sau. Revisiting Directed Disjoint Paths on Tournaments (And Relatives). In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 90:1-90:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{dec.m.gomes_et_al:LIPIcs.ICALP.2025.90,
author = {de C. M. Gomes, Guilherme and Lopes, Raul and Sau, Ignasi},
title = {{Revisiting Directed Disjoint Paths on Tournaments (And Relatives)}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {90:1--90:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.90},
URN = {urn:nbn:de:0030-drops-234678},
doi = {10.4230/LIPIcs.ICALP.2025.90},
annote = {Keywords: directed graphs, tournaments, semicomplete digraphs, directed disjoint paths, congestion, parameterized complexity, directed pathwidth}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Gramoz Goranci, Monika Henzinger, Harald Räcke, and A. R. Sricharan
Abstract
We give an algorithm that, with high probability, maintains a (1-ε)-approximate s-t maximum flow in undirected, uncapacitated n-vertex graphs undergoing m edge insertions in Õ(m+ n F^*/ε) total update time, where F^{*} is the maximum flow on the final graph. This is the first algorithm to achieve polylogarithmic amortized update time for dense graphs (m = Ω(n²)), and more generally, for graphs where F^* = Õ(m/n).
At the heart of our incremental algorithm is the residual graph sparsification technique of Karger and Levine [SICOMP '15], originally designed for computing exact maximum flows in the static setting. Our main contributions are (i) showing how to maintain such sparsifiers for approximate maximum flows in the incremental setting and (ii) generalizing the cut sparsification framework of Fung et al. [SICOMP '19] from undirected graphs to balanced directed graphs.
Cite as
Gramoz Goranci, Monika Henzinger, Harald Räcke, and A. R. Sricharan. Incremental Approximate Maximum Flow via Residual Graph Sparsification. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 91:1-91:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{goranci_et_al:LIPIcs.ICALP.2025.91,
author = {Goranci, Gramoz and Henzinger, Monika and R\"{a}cke, Harald and Sricharan, A. R.},
title = {{Incremental Approximate Maximum Flow via Residual Graph Sparsification}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {91:1--91:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.91},
URN = {urn:nbn:de:0030-drops-234686},
doi = {10.4230/LIPIcs.ICALP.2025.91},
annote = {Keywords: incremental flow, sparsification, approximate flow}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Gramoz Goranci, Adam Karczmarz, Ali Momeni, and Nikos Parotsidis
Abstract
Given a directed graph G, a transitive reduction G^t of G (first studied by Aho, Garey, and Ullman [SICOMP '72]) is a minimal subgraph of G that preserves the reachability relation between every two vertices in G.
In this paper, we study the computational complexity of transitive reduction in the dynamic setting. We obtain the first fully dynamic algorithms for maintaining a transitive reduction of a general directed graph undergoing updates such as edge insertions or deletions. Our first algorithm achieves O(m+n log n) amortized update time, which is near-optimal for sparse directed graphs, and can even support extended update operations such as inserting a set of edges all incident to the same vertex, or deleting an arbitrary set of edges. Our second algorithm relies on fast matrix multiplication and achieves O(m+ n^{1.585}) worst-case update time.
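The object being maintained can be illustrated by a brute-force static computation on a DAG (where the transitive reduction is unique); this sketch is not one of the paper's dynamic algorithms, and the function names are hypothetical.

```python
# Brute-force illustration of the definition on a DAG, where the transitive
# reduction is unique: an edge (u, v) is redundant iff v remains reachable
# from u after removing that edge.  This is only the maintained object, not
# one of the paper's dynamic algorithms.

def reachable(adj, s, t, skip_edge=None):
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if (u, v) != skip_edge and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def transitive_reduction_dag(n, edges):
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    return [(u, v) for (u, v) in edges
            if not reachable(adj, u, v, skip_edge=(u, v))]

# 0 -> 1 -> 2 plus the shortcut 0 -> 2, which the reduction drops.
print(transitive_reduction_dag(3, [(0, 1), (1, 2), (0, 2)]))  # [(0, 1), (1, 2)]
```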
Cite as
Gramoz Goranci, Adam Karczmarz, Ali Momeni, and Nikos Parotsidis. Fully Dynamic Algorithms for Transitive Reduction. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 92:1-92:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{goranci_et_al:LIPIcs.ICALP.2025.92,
author = {Goranci, Gramoz and Karczmarz, Adam and Momeni, Ali and Parotsidis, Nikos},
title = {{Fully Dynamic Algorithms for Transitive Reduction}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {92:1--92:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.92},
URN = {urn:nbn:de:0030-drops-234697},
doi = {10.4230/LIPIcs.ICALP.2025.92},
annote = {Keywords: Spectral sparsification, Dynamic algorithms, (Directed) hypergraphs, Data structures}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Adam Górkiewicz and Adam Karczmarz
Abstract
In this paper, we show new data structures maintaining approximate shortest paths in sparse directed graphs with polynomially bounded non-negative edge weights under edge insertions.
We give more efficient incremental (1+ε)-approximate APSP data structures that work against an adaptive adversary: a deterministic one with Õ(m^{3/2}n^{3/4}) total update time and a randomized one with Õ(m^{4/3}n^{5/6}) total update time. For sparse graphs, these both improve polynomially upon the best-known bound against an adaptive adversary [Karczmarz and Łącki, ESA 2019]. To achieve that, building on the ideas of [Chechik and Zhang, SODA 2021] and [Kyng, Meierhans and Probst Gutenberg, SODA 2022], we show a near-optimal (1+ε)-approximate incremental SSSP data structure for the special case when all edge updates are adjacent to the source, which might be of independent interest.
We also describe a very simple and near-optimal offline incremental (1+ε)-approximate SSSP data structure. While online near-linear partially dynamic SSSP data structures have been elusive so far (except for dense instances), our result excludes using certain types of impossibility arguments to rule them out. Additionally, our offline solution leads to a near-optimal and deterministic all-pairs bounded-leg shortest paths data structure for sparse graphs.
Cite as
Adam Górkiewicz and Adam Karczmarz. On Incremental Approximate Shortest Paths in Directed Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 93:1-93:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gorkiewicz_et_al:LIPIcs.ICALP.2025.93,
author = {G\'{o}rkiewicz, Adam and Karczmarz, Adam},
title = {{On Incremental Approximate Shortest Paths in Directed Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {93:1--93:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.93},
URN = {urn:nbn:de:0030-drops-234700},
doi = {10.4230/LIPIcs.ICALP.2025.93},
annote = {Keywords: dynamic shortest paths, incremental shortest paths, offline dynamic algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Tsubasa Harada and Toshiya Itoh
Abstract
For the online transportation problem with m server sites, it has long been known that the competitive ratio of any deterministic algorithm is at least 2m-1. Kalyanasundaram and Pruhs conjectured in 1998 that a deterministic (2m-1)-competitive algorithm exists for this problem, a conjecture that has remained open for over two decades.
In this paper, we propose a new deterministic algorithm for the online transportation problem and show that it achieves a competitive ratio of at most 8m-5. This is the first O(m)-competitive deterministic algorithm, coming within a constant factor of the lower bound of 2m-1.
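As a point of reference for the model only, a naive greedy rule (assign each arriving request to the nearest server site with remaining capacity) can be sketched as below; this is not the paper's (8m-5)-competitive algorithm, and greedy by itself is known to have a much worse competitive ratio.

```python
# Toy model only: m server sites with capacities on the real line, requests
# arriving online and matched irrevocably to the nearest site with remaining
# capacity.  This greedy rule is *not* the paper's (8m-5)-competitive
# algorithm and is known to perform much worse in the worst case.

def greedy_online_transportation(sites, capacities, requests):
    remaining = list(capacities)
    assignment, cost = [], 0.0
    for r in requests:                       # requests revealed one by one
        i = min((j for j in range(len(sites)) if remaining[j] > 0),
                key=lambda j: abs(sites[j] - r))
        remaining[i] -= 1
        cost += abs(sites[i] - r)
        assignment.append(i)
    return assignment, cost

sites, capacities = [0.0, 10.0], [1, 2]
print(greedy_online_transportation(sites, capacities, requests=[1.0, 2.0, 9.0]))
```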
Cite as
Tsubasa Harada and Toshiya Itoh. A Nearly Optimal Deterministic Algorithm for Online Transportation Problem. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 94:1-94:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{harada_et_al:LIPIcs.ICALP.2025.94,
author = {Harada, Tsubasa and Itoh, Toshiya},
title = {{A Nearly Optimal Deterministic Algorithm for Online Transportation Problem}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {94:1--94:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.94},
URN = {urn:nbn:de:0030-drops-234712},
doi = {10.4230/LIPIcs.ICALP.2025.94},
annote = {Keywords: Online algorithms, Competitive analysis, Online metric matching, Online weighted matching, Online minimum weight perfect matching, Online transportation problem, Online facility assignment, Greedy algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sebastian Haslebacher
Abstract
ARRIVAL is the problem of deciding which out of two possible destinations will be reached first by a token that moves deterministically along the edges of a directed graph, according to so-called switching rules. It is known to lie in NP ∩ CoNP, but not known to lie in 𝖯. The state-of-the-art algorithm due to Gärtner et al. (ICALP '21) runs in time 2^{𝒪(√n log n)} on an n-vertex graph.
We prove that ARRIVAL can be solved in time 2^{𝒪(k log² n)} on n-vertex graphs of treewidth k. Our algorithm is derived by adapting a simple recursive algorithm for a generalization of ARRIVAL called G-ARRIVAL. This simple recursive algorithm acts as a framework from which we can also rederive the subexponential upper bound of Gärtner et al.
Our second result is a reduction from G-ARRIVAL to the problem of finding an approximate fixed point of an 𝓁₁-contracting function f : [0, 1]ⁿ → [0, 1]ⁿ. Finding such fixed points is a well-studied problem in the case of the 𝓁₂-metric and the 𝓁_∞-metric, but little is known about the 𝓁₁-case.
Both of our results highlight parallels between ARRIVAL and the Simple Stochastic Games (SSG) problem. Concretely, Chatterjee et al. (SODA '23) gave an algorithm for SSG parameterized by treewidth that achieves a bound similar to the one we obtain for ARRIVAL, and SSG is known to reduce to 𝓁_∞-contraction.
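The switching rules themselves are easy to state in code: the sketch below simulates the deterministic walk directly (which may take exponentially many steps, hence the interest in the algorithmic results above); the instance and function name are illustrative.

```python
# Direct simulation of the switching walk: every vertex v has two successors
# succ[v] = (s0, s1) and a switch that alternates between them, starting at
# s0.  ARRIVAL asks which of two targets the token reaches first; the naive
# simulation below may need exponentially many steps on worst-case instances.

def arrival_by_simulation(succ, start, target_a, target_b, max_steps=10**6):
    state = {v: 0 for v in succ}           # current switch position per vertex
    v = start
    for _ in range(max_steps):
        if v == target_a:
            return "a"
        if v == target_b:
            return "b"
        nxt = succ[v][state[v]]
        state[v] ^= 1                      # flip the switch after using it
        v = nxt
    raise RuntimeError("step budget exceeded")

# Tiny instance: vertex 0 alternates between 1 and 2; vertex 1 returns to 0.
succ = {0: (1, 2), 1: (0, 0), 2: (2, 2), 3: (3, 3)}
print(arrival_by_simulation(succ, start=0, target_a=2, target_b=3))  # "a"
```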
Cite as
Sebastian Haslebacher. ARRIVAL: Recursive Framework & 𝓁₁-Contraction. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 95:1-95:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{haslebacher:LIPIcs.ICALP.2025.95,
author = {Haslebacher, Sebastian},
title = {{ARRIVAL: Recursive Framework \& 𝓁₁-Contraction}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {95:1--95:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.95},
URN = {urn:nbn:de:0030-drops-234723},
doi = {10.4230/LIPIcs.ICALP.2025.95},
annote = {Keywords: ARRIVAL, G-ARRIVAL, Deterministic Random Walk, Rotor-Routing, 𝓁₁-Contraction, Banach Fixed Point}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shuichi Hirahara and Naoto Ohsaka
Abstract
k-Coloring Reconfiguration is one of the most well-studied reconfiguration problems, which asks to transform a given proper k-coloring of a graph into another by repeatedly recoloring a single vertex. Its approximate version, Maxmin k-Cut Reconfiguration, is defined as an optimization problem of maximizing the minimum fraction of bichromatic edges during the transformation between (not necessarily proper) k-colorings. In this paper, we demonstrate that the optimal approximation factor of this problem is 1 - Θ(1/k) for every k ≥ 2. Specifically, we prove the PSPACE-hardness of approximating the objective value within a factor of 1 - ε/k for some universal constant ε > 0, whereas we develop a deterministic polynomial-time algorithm that achieves an approximation factor of 1 - 2/k.
To prove the hardness result, we propose a new probabilistic verifier that tests a "striped" pattern. Our approximation algorithm is based on a random transformation that passes through a random k-coloring.
Cite as
Shuichi Hirahara and Naoto Ohsaka. Asymptotically Optimal Inapproximability of Maxmin k-Cut Reconfiguration. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 96:1-96:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{hirahara_et_al:LIPIcs.ICALP.2025.96,
author = {Hirahara, Shuichi and Ohsaka, Naoto},
title = {{Asymptotically Optimal Inapproximability of Maxmin k-Cut Reconfiguration}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {96:1--96:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.96},
URN = {urn:nbn:de:0030-drops-234733},
doi = {10.4230/LIPIcs.ICALP.2025.96},
annote = {Keywords: reconfiguration problems, graph coloring, hardness of approximation}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shuichi Hirahara and Nobutaka Shimizu
Abstract
We present an optimal "worst-case exact to average-case approximate" reduction for matrix multiplication over a finite field of prime order p. Any efficient algorithm that correctly computes, in expectation, at least a (1/p + ε)-fraction of the entries of the product A ⋅ B of a pair (A, B) of uniformly random matrices over the finite field of order p, for a positive constant ε, can be transformed into an efficient randomized algorithm that computes A ⋅ B for all pairs (A, B) of matrices with high probability. Previously, such reductions were known only in a low-error regime (Gola, Shinkar and Singh; RANDOM 2024) or under non-uniform reductions (Hirahara and Shimizu; STOC 2025).
Cite as
Shuichi Hirahara and Nobutaka Shimizu. An Optimal Error-Correcting Reduction for Matrix Multiplication. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 97:1-97:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{hirahara_et_al:LIPIcs.ICALP.2025.97,
author = {Hirahara, Shuichi and Shimizu, Nobutaka},
title = {{An Optimal Error-Correcting Reduction for Matrix Multiplication}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {97:1--97:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.97},
URN = {urn:nbn:de:0030-drops-234742},
doi = {10.4230/LIPIcs.ICALP.2025.97},
annote = {Keywords: Matrix Multiplication, Error-Correcting Reduction, Average-Case Complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Zhiyi Huang, Chui Shan Lee, Xinkai Shu, and Zhaozi Wang
Abstract
We study the online allocation of divisible items to n agents with additive valuations for p-mean welfare maximization, a problem introduced by Barman, Khan, and Maiti (2022). Our algorithmic and hardness results characterize the optimal competitive ratios for the entire spectrum of -∞ ≤ p ≤ 1. Surprisingly, our improved algorithms for all p ≤ 1/(log n) are simply the greedy algorithm for the Nash welfare, supplemented with two auxiliary components to ensure all agents have non-zero utilities and to help a small number of agents with low utilities. In this sense, the long arm of Nashian allocation achieves near-optimal competitive ratios not only for Nash welfare but also all the way to egalitarian welfare.
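A minimal sketch of one natural Nash-welfare greedy step is given below, assuming items are assigned integrally to the agent with the largest gain in the log-product of utilities; the paper's algorithm is built around such a greedy rule but adds the two auxiliary components mentioned above and handles divisible allocations, none of which is reproduced here.

```python
import math

# Hypothetical sketch of a Nash-welfare greedy step: each arriving item goes,
# in full, to the agent whose gain increases the log-product of utilities the
# most.  The tiny seed utility stands in for the auxiliary component that
# keeps utilities non-zero; splitting items and helping low-utility agents,
# as done in the paper, are not modeled here.

def nash_greedy_online(values_per_item, seed=1e-9):
    n = len(values_per_item[0])
    utility = [seed] * n
    for values in values_per_item:          # items arrive one by one
        gain = [math.log(utility[i] + values[i]) - math.log(utility[i])
                for i in range(n)]
        winner = max(range(n), key=lambda i: gain[i])
        utility[winner] += values[winner]
    return utility

items = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]   # 3 items, values for 2 agents
print(nash_greedy_online(items))
```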
Cite as
Zhiyi Huang, Chui Shan Lee, Xinkai Shu, and Zhaozi Wang. The Long Arm of Nashian Allocation in Online p-Mean Welfare Maximization. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 98:1-98:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{huang_et_al:LIPIcs.ICALP.2025.98,
author = {Huang, Zhiyi and Lee, Chui Shan and Shu, Xinkai and Wang, Zhaozi},
title = {{The Long Arm of Nashian Allocation in Online p-Mean Welfare Maximization}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {98:1--98:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.98},
URN = {urn:nbn:de:0030-drops-234754},
doi = {10.4230/LIPIcs.ICALP.2025.98},
annote = {Keywords: Online Algorithms, Fair Division, Nash Welfare}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Yuni Iwamasa, Taihei Oki, and Tasuku Soma
Abstract
We study the semistability of quiver representations from an algorithmic perspective. We present efficient algorithms for several fundamental computational problems on the semistability of quiver representations: deciding the semistability and σ-semistability, finding the maximizers of King’s criterion, and computing the Harder-Narasimhan filtration. We also investigate a class of polyhedral cones defined by the linear system in King’s criterion, which we refer to as King cones. For rank-one representations, we demonstrate that these King cones can be encoded by submodular flow polytopes, enabling us to decide the σ-semistability in strongly polynomial time. Our approach employs submodularity in quiver representations, which may be of independent interest.
Cite as
Yuni Iwamasa, Taihei Oki, and Tasuku Soma. Algorithmic Aspects of Semistability of Quiver Representations. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 99:1-99:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{iwamasa_et_al:LIPIcs.ICALP.2025.99,
author = {Iwamasa, Yuni and Oki, Taihei and Soma, Tasuku},
title = {{Algorithmic Aspects of Semistability of Quiver Representations}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {99:1--99:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.99},
URN = {urn:nbn:de:0030-drops-234762},
doi = {10.4230/LIPIcs.ICALP.2025.99},
annote = {Keywords: quivers, \sigma-semistability, King’s criterion, operator scaling, submodular flow}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ragesh Jaiswal, Amit Kumar, and Jatin Yadav
Abstract
Sorting is one of the most basic primitives in many algorithms and data analysis tasks. Comparison-based sorting algorithms, like quick-sort and merge-sort, are known to be optimal when the outcome of each comparison is error-free. However, many real-world sorting applications operate in scenarios where the outcome of each comparison can be noisy. In this work, we explore settings where a bounded number of comparisons are potentially corrupted by erroneous agents, resulting in arbitrary, adversarial outcomes.
We model the sorting problem as a query-limited tournament graph where edges involving erroneous nodes may yield arbitrary results. Our primary contribution is a randomized algorithm inspired by quick-sort that, in expectation, produces an ordering close to the true total order while only querying Õ(n) edges. We achieve a distance of at most (3 + ε)|B| from the target order π, where B is the set of erroneous nodes, balancing the competing objectives of minimizing both query complexity and misalignment with π. Our algorithm needs to carefully balance two aspects: identifying a pivot that partitions the vertex set evenly and ensuring that this partition is "truthful", while querying as few "triangles" in the graph G as possible. Since the nodes in B can potentially hide in an intricate manner, our algorithm requires several technical steps that ensure that progress is made in each recursive step.
Additionally, we demonstrate significant implications for the Ulam-k-Median problem. This is a classical clustering problem where the metric is defined on the set of permutations on a set of d elements. Chakraborty, Das, and Krauthgamer gave a (2-ε) FPT approximation algorithm for this problem, where the running time is super-linear in both n and d. We give the first (2-ε) FPT linear time approximation algorithm for this problem. Our main technical result gives a strengthening of the results in Chakraborty et al. by showing that a good 1-median solution can be obtained from a constant-size random sample of the input. We use our robust sorting framework to find a good solution from such a random sample. We feel that the notion of robust sorting should have applications in several such settings.
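To fix ideas, the quick-sort-style recursion driven by a possibly corrupted comparison oracle can be sketched as follows; the pivot-validation and "triangle"-query machinery described above is deliberately omitted, and the oracle model shown is only an assumption made for illustration.

```python
import random

# Skeleton of a quick-sort recursion driven by a comparison oracle that may
# lie on comparisons involving erroneous nodes.  The pivot validation via
# "triangle" queries and the balancing steps from the paper are omitted;
# the oracle model below is an assumption made for illustration only.

def oracle_factory(true_rank, bad_nodes, rng):
    def compare(a, b):                      # True iff the oracle puts a before b
        if a in bad_nodes or b in bad_nodes:
            return rng.random() < 0.5       # arbitrary / adversarial answer
        return true_rank[a] < true_rank[b]
    return compare

def noisy_quicksort(items, compare, rng):
    if len(items) <= 1:
        return list(items)
    pivot = rng.choice(items)
    left = [x for x in items if x != pivot and compare(x, pivot)]
    right = [x for x in items if x != pivot and not compare(x, pivot)]
    return noisy_quicksort(left, compare, rng) + [pivot] + \
           noisy_quicksort(right, compare, rng)

rng = random.Random(0)
items = list(range(10))
compare = oracle_factory({x: x for x in items}, bad_nodes={3}, rng=rng)
print(noisy_quicksort(items, compare, rng))
```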
Cite as
Ragesh Jaiswal, Amit Kumar, and Jatin Yadav. Robust-Sorting and Applications to Ulam-Median. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 100:1-100:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{jaiswal_et_al:LIPIcs.ICALP.2025.100,
author = {Jaiswal, Ragesh and Kumar, Amit and Yadav, Jatin},
title = {{Robust-Sorting and Applications to Ulam-Median}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {100:1--100:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.100},
URN = {urn:nbn:de:0030-drops-234774},
doi = {10.4230/LIPIcs.ICALP.2025.100},
annote = {Keywords: Sorting, clustering, query complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Shaofeng H.-C. Jiang and Jianing Lou
Abstract
We devise ε-coresets for robust (k,z)-Clustering with m outliers through black-box reductions to vanilla clustering. Given an ε-coreset construction for vanilla clustering with size N, we construct coresets of size N ⋅ polylog(kmε^{-1}) + O_z(min{kmε^{-1}, m ε^{-2z}log^z(kmε^{-1})}) for various metric spaces, where O_z hides 2^{O(z log z)} factors. This increases the size of the vanilla coreset by a small multiplicative factor of polylog(kmε^{-1}), and the additive term is within an (ε^{-1}log(km))^{O(z)} factor of the size of the optimal robust coreset. Plugging in recent vanilla coreset results of [Cohen-Addad, Saulpic and Schwiegelshohn, STOC'21; Cohen-Addad, Draganov, Russo, Saulpic and Schwiegelshohn, SODA'25], we obtain the first coresets for (k,z)-Clustering with m outliers with size near-linear in k, while previous results have size at least Ω(k²) [Huang, Jiang, Lou and Wu, ICLR'23; Huang, Li, Lu and Wu, SODA'25].
Technically, we establish two conditions under which a vanilla coreset is also a robust coreset. The first condition requires the dataset to have a special structure: it can be broken into "dense" parts with bounded diameter. We combine this with a new bounded-diameter decomposition that has only O_z(km ε^{-1}) non-dense points to obtain the O_z(km ε^{-1}) additive bound. Another sufficient condition requires the vanilla coreset to possess an extra size-preserving property. To utilize this condition, we further give a black-box reduction that turns a vanilla coreset into one that satisfies the said size-preserving property, and this leads to the alternative O_z(mε^{-2z}log^{z}(kmε^{-1})) additive size bound.
We also give low-space implementations of our reductions in the dynamic streaming setting. Combined with known streaming constructions for vanilla coresets [Braverman, Frahling, Lang, Sohler and Yang, ICML'17; Hu, Song, Yang and Zhong, arXiv'1802.00459], we obtain the first dynamic streaming algorithms for coresets for k-Median (and k-Means) with m outliers, using space Õ(k + m) ⋅ poly(dε^{-1}log Δ) for inputs on a discrete grid [Δ]^d.
Cite as
Shaofeng H.-C. Jiang and Jianing Lou. Coresets for Robust Clustering via Black-Box Reductions to Vanilla Case. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 101:1-101:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{jiang_et_al:LIPIcs.ICALP.2025.101,
author = {Jiang, Shaofeng H.-C. and Lou, Jianing},
title = {{Coresets for Robust Clustering via Black-Box Reductions to Vanilla Case}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {101:1--101:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.101},
URN = {urn:nbn:de:0030-drops-234781},
doi = {10.4230/LIPIcs.ICALP.2025.101},
annote = {Keywords: Coresets, clustering, outliers, streaming algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Chris Jones and Lucas Pesenti
Abstract
We study a general class of nonlinear iterative algorithms which includes power iteration, belief propagation and approximate message passing, and many forms of gradient descent. When the input is a random matrix with i.i.d. entries, we use Boolean Fourier analysis to analyze these algorithms as low-degree polynomials in the entries of the input matrix. Each symmetrized Fourier character represents all monomials with a certain shape as specified by a small graph, which we call a Fourier diagram.
We prove fundamental asymptotic properties of the Fourier diagrams: over the randomness of the input, all diagrams with cycles are negligible; the tree-shaped diagrams form a basis of asymptotically independent Gaussian vectors; and, when restricted to the trees, iterative algorithms exactly follow an idealized Gaussian dynamic. We use this to prove a state evolution formula, giving a "complete" asymptotic description of the algorithm’s trajectory.
The restriction to tree-shaped monomials mirrors the assumption of the cavity method, a 40-year-old non-rigorous technique in statistical physics which has served as one of the most important techniques in the field. We demonstrate how to implement cavity method derivations by 1) restricting the iteration to its tree approximation, and 2) observing that heuristic cavity method-type arguments hold rigorously on the simplified iteration. Our proofs use combinatorial arguments similar to the trace method from random matrix theory.
Finally, we push the diagram analysis to a number of iterations that scales with the dimension n of the input matrix, proving that the tree approximation still holds for a simple variant of power iteration all the way up to n^{Ω(1)} iterations.
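A minimal numerical sketch of the simplest algorithm in this class, plain power iteration on a symmetric matrix with i.i.d. Gaussian entries, is given below; it only simulates the dynamics whose asymptotics the Fourier-diagram analysis describes, and the dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n, T = 2000, 20

# symmetric random matrix with i.i.d. N(0, 1/n) entries (Wigner-type normalization)
A = rng.normal(size=(n, n)) / np.sqrt(n)
A = (A + A.T) / np.sqrt(2)

# power iteration: x_{t+1} = normalize(A x_t), from a random unit start
x = rng.normal(size=n)
x /= np.linalg.norm(x)
for t in range(T):
    x = A @ x
    x /= np.linalg.norm(x)

# Rayleigh quotient tracks the top eigenvalue (close to 2 for this normalization)
print(float(x @ (A @ x)))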
Cite as
Chris Jones and Lucas Pesenti. Fourier Analysis of Iterative Algorithms. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 102:1-102:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{jones_et_al:LIPIcs.ICALP.2025.102,
author = {Jones, Chris and Pesenti, Lucas},
title = {{Fourier Analysis of Iterative Algorithms}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {102:1--102:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.102},
URN = {urn:nbn:de:0030-drops-234791},
doi = {10.4230/LIPIcs.ICALP.2025.102},
annote = {Keywords: Iterative Algorithms, Message-passing Algorithms, Random Matrix Theory}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Michael Kapralov, Akash Kumar, Silvio Lattanzi, Aida Mousavifar, and Weronika Wrzos-Kaminska
Abstract
Testing graph cluster structure has been a central object of study in property testing since the foundational work of Goldreich and Ron [STOC'96] on expansion testing, i.e. the problem of distinguishing between a single cluster (an expander) and a graph that is far from a single cluster. More generally, a (k, ε)-clusterable graph G is a graph whose vertex set admits a partition into k induced expanders, each with outer conductance bounded by ε. A recent line of work initiated by Czumaj, Peng and Sohler [STOC'15] has shown how to test whether a graph is close to (k, ε)-clusterable, and to locally determine which cluster a given vertex belongs to with misclassification rate ≈ ε, but no sublinear time algorithms for learning the structure of inter-cluster connections are known. As a simple example, can one locally distinguish between the "cluster graph" forming a line and a clique?
In this paper, we consider the problem of testing the hierarchical cluster structure of (k, ε)-clusterable graphs in sublinear time. Our measure of hierarchical clusterability is the well-established Dasgupta cost, and our main result is an algorithm that approximates the Dasgupta cost of a (k, ε)-clusterable graph in sublinear time, using a small number of randomly chosen seed vertices for which cluster labels are known. Specifically, we obtain an O(√{log k}) approximation to the Dasgupta cost of G in ≈ n^{1/2+O(ε)} time using ≈ n^{1/3} seeds, effectively giving a sublinear time simulation of the algorithm of Charikar and Chatziafratis [SODA'17] on clusterable graphs. To the best of our knowledge, ours is the first result on approximating the hierarchical clustering properties of such graphs in sublinear time.
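For reference, the Dasgupta cost of a hierarchy charges every edge its weight times the number of leaves under the lowest common ancestor of its endpoints. The sketch below evaluates this cost exactly on a toy graph and a hand-written hierarchy; it illustrates the objective being approximated, not the sublinear-time algorithm.

# Hierarchy as nested tuples; leaves are vertex labels.
tree = ((("a", "b"), ("c", "d")), (("e", "f"), ("g", "h")))

def leaves(t):
    return {t} if isinstance(t, str) else leaves(t[0]) | leaves(t[1])

def smallest_cluster(t, u, v):
    # size of the leaf set of the lowest common ancestor of u and v
    if isinstance(t, str):
        return 1
    for child in t:
        L = leaves(child)
        if u in L and v in L:
            return smallest_cluster(child, u, v)
    return len(leaves(t))

# weighted edges of a small graph on the 8 leaves
edges = [("a", "b", 3), ("a", "c", 1), ("e", "g", 1), ("d", "e", 1), ("b", "f", 1)]

cost = sum(w * smallest_cluster(tree, u, v) for u, v, w in edges)
print("Dasgupta cost:", cost)  # 30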
Cite as
Michael Kapralov, Akash Kumar, Silvio Lattanzi, Aida Mousavifar, and Weronika Wrzos-Kaminska. Approximating Dasgupta Cost in Sublinear Time from a Few Random Seeds. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 103:1-103:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{kapralov_et_al:LIPIcs.ICALP.2025.103,
author = {Kapralov, Michael and Kumar, Akash and Lattanzi, Silvio and Mousavifar, Aida and Wrzos-Kaminska, Weronika},
title = {{Approximating Dasgupta Cost in Sublinear Time from a Few Random Seeds}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {103:1--103:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.103},
URN = {urn:nbn:de:0030-drops-234804},
doi = {10.4230/LIPIcs.ICALP.2025.103},
annote = {Keywords: Sublinear algorithms, Hierarchical Clustering, Dasgupta’s Cost}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Debajyoti Kar, Arindam Khan, and Malin Rau
Abstract
We study three fundamental three-dimensional (3D) geometric packing problems: 3D (Geometric) Bin Packing (3D-BP), 3D Strip Packing (3D-SP), and Minimum Volume Bounding Box (3D-MVBB), where given a set of 3D (rectangular) cuboids, the goal is to find an axis-aligned nonoverlapping packing of all cuboids. In 3D-BP, we need to pack the given cuboids into the minimum number of unit cube bins. In 3D-SP, we need to pack them into a 3D cuboid with a unit square base and minimum height. Finally, in 3D-MVBB, the goal is to pack into a cuboid box of minimum volume.
It is NP-hard to even decide whether a set of rectangles can be packed into a unit square bin - giving an (absolute) approximation hardness of 2 for 3D-BP and 3D-SP. The previous best (absolute) approximation for all three problems is by Li and Cheng (SICOMP, 1990), who gave algorithms with approximation ratios of 13, 46/7, and 46/7+ε, respectively, for 3D-BP, 3D-SP, and 3D-MVBB. We provide improved approximation ratios of 6, 6, and 3+ε, respectively, for the three problems, for any constant ε > 0.
For 3D-BP, in the asymptotic regime, Bansal, Correa, Kenyon, and Sviridenko (Math. Oper. Res., 2006) showed that there is no asymptotic polynomial-time approximation scheme (APTAS) even when all items have the same height. Caprara (Math. Oper. Res., 2008) gave an asymptotic approximation ratio of T_{∞}² + ε ≈ 2.86, where T_{∞} is the well-known Harmonic constant in Bin Packing. We provide an algorithm with an improved asymptotic approximation ratio of 3 T_{∞}/2 + ε ≈ 2.54. Further, we show that unlike 3D-BP (and 3D-SP), 3D-MVBB admits an APTAS.
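The feasibility condition common to all three problems is an axis-aligned, non-overlapping placement inside the container. The small checker below verifies such a placement for an explicitly given solution; it is a hypothetical helper for intuition, not one of the approximation algorithms.

def overlaps(a, b):
    # a, b = (x, y, z, w, d, h): lower corner and dimensions of an axis-aligned cuboid
    return all(a[i] < b[i] + b[i + 3] and b[i] < a[i] + a[i + 3] for i in range(3))

def fits(box, container):
    return all(0 <= box[i] and box[i] + box[i + 3] <= container[i] for i in range(3))

def valid_packing(boxes, container):
    if not all(fits(b, container) for b in boxes):
        return False
    return not any(overlaps(boxes[i], boxes[j])
                   for i in range(len(boxes)) for j in range(i + 1, len(boxes)))

# two cuboids stacked inside a unit cube bin (as in 3D-BP)
bin_dims = (1.0, 1.0, 1.0)
placement = [(0, 0, 0, 1.0, 1.0, 0.4), (0, 0, 0.4, 0.5, 0.5, 0.6)]
print(valid_packing(placement, bin_dims))  # True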
Cite as
Debajyoti Kar, Arindam Khan, and Malin Rau. Improved Approximation Algorithms for Three-Dimensional Bin Packing. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 104:1-104:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{kar_et_al:LIPIcs.ICALP.2025.104,
author = {Kar, Debajyoti and Khan, Arindam and Rau, Malin},
title = {{Improved Approximation Algorithms for Three-Dimensional Bin Packing}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {104:1--104:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.104},
URN = {urn:nbn:de:0030-drops-234814},
doi = {10.4230/LIPIcs.ICALP.2025.104},
annote = {Keywords: Approximation Algorithms, Geometric Packing, Multidimensional Packing}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Prem Nigam Kar, David E. Roberson, Tim Seppelt, and Peter Zeman
Abstract
Mančinska and Roberson [FOCS'20] showed that two graphs are quantum isomorphic if and only if they are homomorphism indistinguishable over the class of planar graphs. Atserias et al. [JCTB'19] proved that quantum isomorphism is undecidable in general. The NPA hierarchy gives a sequence of semidefinite programming relaxations of quantum isomorphism. Recently, Roberson and Seppelt [ICALP'23] obtained a homomorphism indistinguishability characterization of the feasibility of each level of the Lasserre hierarchy of semidefinite programming relaxations of graph isomorphism. We prove a quantum analogue of this result by showing that each level of the NPA hierarchy of SDP relaxations for quantum isomorphism of graphs is equivalent to homomorphism indistinguishability over an appropriate class of planar graphs. By combining the convergence of the NPA hierarchy with the fact that the union of these graph classes is the set of all planar graphs, we are able to give a new proof of the result of Mančinska and Roberson [FOCS'20] that avoids the use of the theory of quantum groups. This homomorphism indistinguishability characterization also allows us to give a randomized polynomial-time algorithm deciding exact feasibility of each fixed level of the NPA hierarchy of SDP relaxations for quantum isomorphism.
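Homomorphism indistinguishability over a class 𝒞 means that every graph F in 𝒞 admits the same number of homomorphisms into the two graphs being compared. On small instances these counts can be computed by brute force, as in the sketch below; this only illustrates the notion, and the NPA and Lasserre machinery is not reproduced.

from itertools import product

def count_homomorphisms(F_vertices, F_edges, G_adj):
    # number of maps h: V(F) -> V(G) with h(u)h(v) an edge of G for every edge uv of F
    G_vertices = list(G_adj)
    count = 0
    for image in product(G_vertices, repeat=len(F_vertices)):
        h = dict(zip(F_vertices, image))
        if all(h[v] in G_adj[h[u]] for u, v in F_edges):
            count += 1
    return count

# evaluate the counts for a triangle F into a 4-cycle and into K4
F_v, F_e = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(count_homomorphisms(F_v, F_e, C4), count_homomorphisms(F_v, F_e, K4))  # 0 24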
Cite as
Prem Nigam Kar, David E. Roberson, Tim Seppelt, and Peter Zeman. NPA Hierarchy for Quantum Isomorphism and Homomorphism Indistinguishability. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 105:1-105:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{kar_et_al:LIPIcs.ICALP.2025.105,
author = {Kar, Prem Nigam and Roberson, David E. and Seppelt, Tim and Zeman, Peter},
title = {{NPA Hierarchy for Quantum Isomorphism and Homomorphism Indistinguishability}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {105:1--105:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.105},
URN = {urn:nbn:de:0030-drops-234828},
doi = {10.4230/LIPIcs.ICALP.2025.105},
annote = {Keywords: Quantum isomorphism, NPA hierarchy, homomorphism indistinguishability}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sanjeev Khanna, Christian Konrad, and Jacques Dark
Abstract
We initiate the study of the Maximal Matching problem in bounded-deletion graph streams. In this setting, a graph G is revealed as an arbitrary sequence of edge insertions and deletions, where the number of insertions is unrestricted but the number of deletions is guaranteed to be at most K, for some given parameter K. The single-pass streaming space complexity of this problem is known to be Θ(n²) when K is unrestricted, where n is the number of vertices of the input graph. In this work, we present new randomized and deterministic algorithms and matching lower bound results that together give a tight understanding (up to poly-log factors) of how the space complexity of Maximal Matching evolves as a function of the parameter K: The randomized space complexity of this problem is Θ̃(n ⋅ √K), while the deterministic space complexity is Θ̃(n ⋅ K). We further show that if we relax the maximal matching requirement to an α-approximation to Maximum Matching, for any constant α > 2, then the space complexity for both, deterministic and randomized algorithms, strikingly changes to Θ̃(n + K).
A key conceptual contribution of our work that underlies all our algorithmic results is the introduction of the hierarchical maximal matching data structure, which computes a hierarchy of L maximal matchings on the substream of edge insertions, for an integer L. This deterministic data structure allows recovering a Maximal Matching even in the presence of up to L-1 edge deletions, which immediately yields an optimal deterministic algorithm with space Õ(n ⋅ K). To reduce the space to Õ(n ⋅ √K), we compute only √K levels of our hierarchical matching data structure and utilize a randomized linear sketch, i.e., our matching repair data structure, to repair any damage due to edge deletions. Using our repair data structure, we show that the level that is least affected by deletions can be repaired back to be globally maximal. The repair data structure is computed independently of the hierarchical maximal matching data structure and stores information for vertices at different scales with a gradually smaller set of vertices storing more and more information about their incident edges. The repair process then makes progress either by rematching a vertex to a previously unmatched vertex, or by strategically matching it to another matched vertex whose current mate is in a better position to find a new mate in that we have stored more information about its incident edges.
Our lower bound result for randomized algorithms is obtained by establishing a lower bound for a generalization of the well-known Augmented-Index problem in the one-way two-party communication setting that we refer to as Embedded-Augmented-Index, and then showing that an instance of Embedded-Augmented-Index reduces to computing a maximal matching in bounded-deletion streams. To obtain our lower bound for deterministic algorithms, we utilize a compression argument to show that a deterministic algorithm with space o(n ⋅ K) would yield a scheme to compress a suitable class of graphs below the information-theoretic threshold.
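For intuition about what each level of the hierarchy maintains, here is the standard one-pass greedy maximal matching over the insertion stream; the paper's data structure layers several such matchings and adds a repair mechanism, neither of which is shown in this sketch.

def greedy_maximal_matching(edge_stream):
    matched = set()   # vertices currently matched
    matching = []     # edges taken into the matching
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

stream = [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)]
print(greedy_maximal_matching(stream))  # [(1, 2), (3, 4)]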
Cite as
Sanjeev Khanna, Christian Konrad, and Jacques Dark. Streaming Maximal Matching with Bounded Deletions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 106:1-106:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{khanna_et_al:LIPIcs.ICALP.2025.106,
author = {Khanna, Sanjeev and Konrad, Christian and Dark, Jacques},
title = {{Streaming Maximal Matching with Bounded Deletions}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {106:1--106:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.106},
URN = {urn:nbn:de:0030-drops-234834},
doi = {10.4230/LIPIcs.ICALP.2025.106},
annote = {Keywords: Streaming Algorithms, Maximal Matching, Maximum Matching, Bounded-Deletion Streams}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sanjeev Khanna, Aaron Putterman, and Madhu Sudan
Abstract
We initiate the study of spectral sparsification for instances of Constraint Satisfaction Problems (CSPs). In particular, we introduce a notion of the spectral energy of a fractional assignment for a Boolean CSP instance, and define a spectral sparsifier as a weighted subset of constraints that approximately preserves this energy for all fractional assignments. Our definition not only strengthens the combinatorial notion of a CSP sparsifier but also extends well-studied concepts such as spectral sparsifiers for graphs and hypergraphs.
Recent work by Khanna, Putterman, and Sudan [SODA 2024] demonstrated near-linear sized combinatorial sparsifiers for a broad class of CSPs, which they term field-affine CSPs. Our main result is a polynomial-time algorithm that constructs a spectral CSP sparsifier of near-quadratic size for all field-affine CSPs. This class of CSPs includes graph (and hypergraph) cuts, XORs, and more generally, any predicate which can be written as P(x₁, … x_r) = 𝟏[∑ a_i x_i ≠ b mod p].
Based on our notion of the spectral energy of a fractional assignment, we also define an analog of the second eigenvalue of a CSP instance. We then show an extension of Cheeger’s inequality for all even-arity XOR CSPs, showing that this second eigenvalue loosely captures the "expansion" of the underlying CSP. This extension specializes to the case of Cheeger’s inequality when all constraints are even XORs and thus gives a new generalization of this powerful inequality which converts the combinatorial notion of expansion to an analytic property.
Perhaps the most important effect of spectral sparsification is that it has led to certifiable sparsifiers for graphs and hypergraphs. This aspect remains open in our case even for XOR CSPs since the eigenvalues we describe in our Cheeger inequality are not known to be efficiently computable. Computing this efficiently, and/or finding other ways to certifiably sparsify CSPs are open questions emerging from our work. Another important open question is determining which classes of CSPs have near-linear size spectral sparsifiers.
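In the special case where every constraint is a graph cut (a binary XOR over two variables), the spectral energy of a fractional assignment x specializes to the familiar Laplacian quadratic form x^⊤ L x (up to the exact normalization used in the paper), which a spectral sparsifier must preserve for all real x. The sketch below merely compares this energy for a graph and a hypothetical reweighted subgraph on random test vectors; it does not construct a sparsifier.

import numpy as np

def laplacian(n, weighted_edges):
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

n = 4
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0), (1, 3, 1.0)]  # K4
H = [(0, 1, 1.5), (1, 2, 1.5), (2, 3, 1.5), (3, 0, 1.5)]  # candidate sparsifier, hypothetical weights

LG, LH = laplacian(n, G), laplacian(n, H)
rng = np.random.default_rng(1)
for _ in range(3):
    x = rng.normal(size=n)
    eg, eh = float(x @ LG @ x), float(x @ LH @ x)
    print(f"energy in G: {eg:.3f}  energy in H: {eh:.3f}  ratio: {eh / eg:.3f}")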
Cite as
Sanjeev Khanna, Aaron Putterman, and Madhu Sudan. A Theory of Spectral CSP Sparsification. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 107:1-107:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{khanna_et_al:LIPIcs.ICALP.2025.107,
author = {Khanna, Sanjeev and Putterman, Aaron and Sudan, Madhu},
title = {{A Theory of Spectral CSP Sparsification}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {107:1--107:12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.107},
URN = {urn:nbn:de:0030-drops-234840},
doi = {10.4230/LIPIcs.ICALP.2025.107},
annote = {Keywords: Sparsification, sketching, hypergraphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sanjeev Khanna, Aaron Putterman, and Madhu Sudan
Abstract
We study the problem of constructing hypergraph cut sparsifiers in the streaming model where a hypergraph on n vertices is revealed either via an arbitrary sequence of hyperedge insertions alone (insertion-only streaming model) or via an arbitrary sequence of hyperedge insertions and deletions (dynamic streaming model). For any ε ∈ (0,1), a (1 ± ε) hypergraph cut-sparsifier of a hypergraph H is a reweighted subgraph H' whose cut values approximate those of H to within a (1 ± ε) factor. Prior work shows that in the static setting, one can construct a (1 ± ε) hypergraph cut-sparsifier using Õ(nr/ε²) bits of space [Chen-Khanna-Nagda FOCS 2020], and in the setting of dynamic streams using Õ(nrlog m/ε²) bits of space [Khanna-Putterman-Sudan FOCS 2024]; here the Õ notation hides terms that are polylogarithmic in n, and we use m to denote the total number of hyperedges in the hypergraph. Up until now, the best known space complexity for insertion-only streams has been the same as that for the dynamic streams. This naturally poses the question of understanding the complexity of hypergraph sparsification in insertion-only streams.
Perhaps surprisingly, in this work we show that in insertion-only streams, a (1 ± ε) cut-sparsifier can be computed in Õ(nr/ε²) bits of space, matching the complexity of the static setting. As a consequence, this also establishes an Ω(log m) factor separation between the space complexity of hypergraph cut sparsification in insertion-only streams and dynamic streams, as the latter is provably known to require Ω(nr log m) bits of space. To better explain this gap, we then show a more general result: namely, if the stream has at most k hyperedge deletions then Õ(n r log k/ε²) bits of space suffice for hypergraph cut sparsification. Thus the space complexity smoothly interpolates between the insertion-only regime (k = 0) and the fully dynamic regime (k = m). Our algorithmic results are driven by a key technical insight: once sufficiently many hyperedges have been inserted into the stream (relative to the number of allowed deletions), we can significantly reduce the size of the underlying hypergraph by irrevocably contracting large subsets of vertices.
Finally, we complement this result with an essentially matching lower bound of Ω(n r log(k/n)) bits, thus providing essentially a tight characterization of the space complexity for hypergraph cut-sparsification across a spectrum of streaming models.
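The quantity being preserved is the hypergraph cut value: a hyperedge is cut by a vertex set S if it has at least one endpoint inside S and one outside. The sketch below evaluates all cut values of a tiny hypergraph and of a hypothetical reweighted sub-hypergraph; the streaming constructions themselves are not shown.

from itertools import combinations

def cut_value(hyperedges, weights, S):
    S = set(S)
    return sum(w for e, w in zip(hyperedges, weights)
               if any(v in S for v in e) and any(v not in S for v in e))

V = range(4)
E = [(0, 1, 2), (1, 2, 3), (0, 3), (2, 3)]
w_full = [1.0, 1.0, 1.0, 1.0]
# a hypothetical reweighted subset of the hyperedges
E_sparse = [(0, 1, 2), (1, 2, 3), (0, 3)]
w_sparse = [1.2, 1.2, 1.1]

for r in range(1, len(V)):
    for S in combinations(V, r):
        print(S, cut_value(E, w_full, S), cut_value(E_sparse, w_sparse, S))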
Cite as
Sanjeev Khanna, Aaron Putterman, and Madhu Sudan. Near-Optimal Hypergraph Sparsification in Insertion-Only and Bounded-Deletion Streams. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 108:1-108:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{khanna_et_al:LIPIcs.ICALP.2025.108,
author = {Khanna, Sanjeev and Putterman, Aaron and Sudan, Madhu},
title = {{Near-Optimal Hypergraph Sparsification in Insertion-Only and Bounded-Deletion Streams}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {108:1--108:11},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.108},
URN = {urn:nbn:de:0030-drops-234851},
doi = {10.4230/LIPIcs.ICALP.2025.108},
annote = {Keywords: Sparsification, sketching, hypergraphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Kacper Kluk, Marcin Pilipczuk, Michał Pilipczuk, and Giannos Stamoulis
Abstract
We show that for any fixed integer k ⩾ 0, there exists an algorithm that computes the diameter and the eccentricities of all vertices of an input unweighted, undirected n-vertex graph of Euler genus at most k in time 𝒪_k(n^{2-1/25}). Furthermore, for the more general class of graphs that can be constructed by clique-sums from graphs that are of Euler genus at most k after deletion of at most k vertices, we show an algorithm for the same task that achieves the running time bound 𝒪_k(n^{2-1/356} log^{6k} n). To date, the only known subquadratic algorithms for computing the diameter in these graph classes are those of [Ducoffe, Habib, Viennot; SICOMP 2022], [Le, Wulff-Nilsen; SODA 2024], and [Duraj, Konieczny, Potępa; ESA 2024]. These algorithms work in the more general setting of K_h-minor-free graphs, but the running time bound is 𝒪_h(n^{2-c_h}) for some constant c_h > 0 depending on h. That is, our savings in the exponent of the polynomial function of n, as compared to the naive quadratic algorithm, are independent of the parameter k.
The main technical ingredient of our work is an improved bound on the number of distance profiles, as defined in [Le, Wulff-Nilsen; SODA 2024], in graphs of bounded Euler genus.
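For contrast, the naive quadratic baseline that these results improve upon runs a breadth-first search from every vertex and returns the largest distance found; a minimal version for unweighted connected graphs is sketched below.

from collections import deque

def bfs_ecc(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())  # eccentricity of s (graph assumed connected)

def diameter(adj):
    return max(bfs_ecc(adj, s) for s in adj)  # O(n * (n + m)) time overall

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a path on 5 vertices
print(diameter(adj))  # 4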
Cite as
Kacper Kluk, Marcin Pilipczuk, Michał Pilipczuk, and Giannos Stamoulis. Faster Diameter Computation in Graphs of Bounded Euler Genus. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 109:1-109:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{kluk_et_al:LIPIcs.ICALP.2025.109,
author = {Kluk, Kacper and Pilipczuk, Marcin and Pilipczuk, Micha{\l} and Stamoulis, Giannos},
title = {{Faster Diameter Computation in Graphs of Bounded Euler Genus}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {109:1--109:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.109},
URN = {urn:nbn:de:0030-drops-234869},
doi = {10.4230/LIPIcs.ICALP.2025.109},
annote = {Keywords: Diameter, eccentricity, subquadratic algorithms, surface-embeddable graphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Evangelos Kosinas
Abstract
We present an optimal oracle for answering connectivity queries in undirected graphs in the presence of at most three vertex failures. Specifically, we show that we can process a graph G in O(n+m) time, in order to build a data structure that occupies O(n) space, which can be used in order to answer queries of the form "given a set F of at most three vertices, and two vertices x and y not in F, are x and y connected in G⧵ F?" in constant time, where n and m denote the number of vertices and edges, respectively, of G. The idea is to rely on the DFS-based framework introduced by Kosinas [ESA'23], for handling connectivity queries in the presence of multiple vertex failures. Our technical contribution is to show how to appropriately extend the toolkit of the DFS-based parameters, in order to optimally handle up to three vertex failures. Our approach has the interesting property that it does not rely on a compact representation of vertex cuts, and has the potential to provide optimal solutions for more vertex failures. Furthermore, we show that the DFS-based framework can be easily extended in order to answer vertex-cut queries, and the number of connected components in the presence of multiple vertex failures. In the case of three vertex failures, we can answer such queries in O(log n) time.
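The query semantics can be made concrete by the trivial baseline that re-runs a graph search after deleting the failed vertices, at a cost of O(n+m) per query rather than the constant time achieved by the oracle; the adjacency list below is hypothetical.

def connected_avoiding(adj, F, x, y):
    # Is x connected to y in G \ F?  (F is a set of at most three failed vertices)
    F = set(F)
    if x in F or y in F:
        return False
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        if u == y:
            return True
        for v in adj[u]:
            if v not in F and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(connected_avoiding(adj, {3}, 0, 4))  # False: every 0-4 path uses vertex 3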
Cite as
Evangelos Kosinas. An Optimal 3-Fault-Tolerant Connectivity Oracle. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 110:1-110:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{kosinas:LIPIcs.ICALP.2025.110,
author = {Kosinas, Evangelos},
title = {{An Optimal 3-Fault-Tolerant Connectivity Oracle}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {110:1--110:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.110},
URN = {urn:nbn:de:0030-drops-234879},
doi = {10.4230/LIPIcs.ICALP.2025.110},
annote = {Keywords: Graphs, Connectivity, Fault-Tolerant, Oracles}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Rasmus Kyng, Simon Meierhans, and Gernot Zöcklein
Abstract
We give a simple algorithm for maintaining an n^{o(1)}-approximate spanner H of a graph G with n vertices as G receives edge updates by reduction to the dynamic All-Pairs Shortest Paths (APSP) problem. Given an initially empty graph G, our algorithm processes m insertions and n deletions in total time m^{1 + o(1)} and maintains an initially empty spanner H with total recourse n^{1 + o(1)}. When the number of insertions is much larger than the number of deletions, this notably yields recourse sub-linear in the total number of updates.
Our simple algorithm can be extended to maintain, for any δ ≥ ω(1), a δ-approximate spanner with n^{1+o(1)} edges throughout a sequence of m insertions and D deletions with amortized update time n^{o(1)} and total recourse n^{1 + o(1)} + n^{o(1)} ⋅ D via batching.
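As background for the object being maintained, the classical static greedy spanner keeps an edge only if the current spanner distance between its endpoints already exceeds the allowed stretch. A minimal unweighted version is sketched below; the dynamic construction via APSP is not reproduced here.

from collections import deque

def spanner_dist(adj, u, v, cutoff):
    # BFS distance from u to v inside the current spanner, giving up beyond `cutoff`
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return dist[x]
        if dist[x] >= cutoff:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return float("inf")

def greedy_spanner(vertices, edges, stretch):
    adj = {v: set() for v in vertices}
    kept = []
    for u, v in edges:
        if spanner_dist(adj, u, v, stretch) > stretch:
            adj[u].add(v); adj[v].add(u)
            kept.append((u, v))
    return kept

V = range(5)
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
print(greedy_spanner(V, E, stretch=3))  # keeps the 5-cycle, drops the chord (0, 2)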
Cite as
Rasmus Kyng, Simon Meierhans, and Gernot Zöcklein. A Simple Dynamic Spanner via APSP. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 111:1-111:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{kyng_et_al:LIPIcs.ICALP.2025.111,
author = {Kyng, Rasmus and Meierhans, Simon and Z\"{o}cklein, Gernot},
title = {{A Simple Dynamic Spanner via APSP}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {111:1--111:11},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.111},
URN = {urn:nbn:de:0030-drops-234886},
doi = {10.4230/LIPIcs.ICALP.2025.111},
annote = {Keywords: Dynamic graph algorithms, Spanner, Dynamic Greedy Spanner}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Alexandra Lassota and Koen Ligthart
Abstract
We study integer linear programs (ILP) of the form min{c^⊤ x | Ax = b,l ≤ x ≤ u,x ∈ ℤⁿ} and analyze their parameterized complexity with respect to their distance to the generalized matching problem, following the well-established approach of capturing the hardness of a problem by the distance to triviality. The generalized matching problem is an ILP where each column of the constraint matrix has 1-norm of at most 2. It captures several well-known polynomial time solvable problems such as matching and flow problems. We parameterize by the size of variable and constraint backdoors, which measure the least number of columns or rows that must be deleted to obtain a generalized matching ILP. This extends generalized matching problems by allowing a parameterized number of additional arbitrary variables or constraints, yielding a novel parameter.
We present the following results: (i) a fixed-parameter tractable (FPT) algorithm for ILPs parameterized by the size p of a minimum variable backdoor to generalized matching; (ii) a randomized slice-wise polynomial (XP) time algorithm for ILPs parameterized by the size h of a minimum constraint backdoor to generalized matching as long as c and A are encoded in unary; (iii) we complement (ii) by proving that solving an ILP is W[1]-hard when parameterized by h even when c,A,l,u have coefficients of constant size. To obtain (i), we prove a variant of lattice-convexity of the degree sequences of weighted b-matchings, which we study in the light of SBO jump M-convex functions. This allows us to model the matching part as a polyhedral constraint on the integer backdoor variables. The resulting ILP is solved in FPT time using an integer programming algorithm. For (ii), the randomized XP time algorithm is obtained by pseudo-polynomially reducing the problem to the exact matching problem. To prevent an exponential blowup in terms of the encoding length of b, we bound the Graver complexity of the constraint matrix and employ a Graver augmentation local search framework. The hardness result (iii) is obtained through a parameterized reduction from ILP with h constraints and coefficients encoded in unary.
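To make the parameter concrete: a column belongs to the generalized matching structure exactly when its entries have 1-norm at most 2, so the set of violating columns gives a trivial upper bound on the variable backdoor size (finding an optimal backdoor is harder and is not attempted here). The matrix below is hypothetical.

import numpy as np

A = np.array([
    [1, 1, 0, 3],
    [0, 1, 1, -2],
    [1, 0, 1, 1],
])

col_norms = np.abs(A).sum(axis=0)           # 1-norm of each column
non_matching = np.where(col_norms > 2)[0]   # columns that must go into a variable backdoor
print("column 1-norms:", col_norms.tolist())                          # [2, 2, 2, 6]
print("columns outside the generalized matching structure:", non_matching.tolist())  # [3]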
Cite as
Alexandra Lassota and Koen Ligthart. Parameterized Algorithms for Matching Integer Programs with Additional Rows and Columns. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 112:1-112:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{lassota_et_al:LIPIcs.ICALP.2025.112,
author = {Lassota, Alexandra and Ligthart, Koen},
title = {{Parameterized Algorithms for Matching Integer Programs with Additional Rows and Columns}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {112:1--112:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.112},
URN = {urn:nbn:de:0030-drops-234895},
doi = {10.4230/LIPIcs.ICALP.2025.112},
annote = {Keywords: Integer Programming, fixed-parameter Tractability, polyhedral Optimization, Matchings}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Lvzhou Li and Jingquan Luo
Abstract
Quantum state preparation is a fundamental and significant subroutine in quantum computing. In this paper, we conduct a systematic investigation of the circuit size (the total count of elementary gates in the circuit) for sparse quantum state preparation. A quantum state is said to be d-sparse if it has only d non-zero amplitudes. For the task of preparing an n-qubit d-sparse quantum state, we obtain the following results:
- Without ancillary qubits: Any n-qubit d-sparse quantum state can be prepared by a quantum circuit of size O(nd/(log n) + n) without using ancillary qubits, which improves the previous best results. It is asymptotically optimal when d = poly(n), and this optimality holds for a broader scope under some reasonable assumptions.
- With limited ancillary qubits: (i) Based on the first result, we prove for the first time a trade-off between the number of ancillary qubits and the circuit size: any n-qubit d-sparse quantum state can be prepared by a quantum circuit of size O((nd)/(log(n + m)) + n) using m ancillary qubits for any m ∈ O((nd)/(log nd) + n). (ii) We establish a matching lower bound Ω((nd)/(log(n+m))+n) under some reasonable assumptions, and obtain a slightly weaker lower bound Ω((nd)/(log(n+m)+log d) + n) without any assumptions.
- With unlimited ancillary qubits: Given an arbitrary amount of ancillary qubits available, the circuit size for preparing n-qubit d-sparse quantum states is Θ((nd)/(log nd) + n).
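Concretely, a d-sparse n-qubit state is a unit vector in ℂ^{2ⁿ} with only d non-zero amplitudes; the numpy sketch below constructs such a state for hypothetical n and d, which is the object whose preparation circuits are being sized (the circuit constructions themselves are not shown).

import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 5                       # 6 qubits, 5 non-zero amplitudes (hypothetical sizes)
dim = 2 ** n

support = rng.choice(dim, size=d, replace=False)   # which basis states get amplitude
amps = rng.normal(size=d) + 1j * rng.normal(size=d)
amps /= np.linalg.norm(amps)                       # normalize to a valid quantum state

state = np.zeros(dim, dtype=complex)
state[support] = amps
print(np.count_nonzero(state), np.isclose(np.linalg.norm(state), 1.0))  # 5 True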
Cite as
Lvzhou Li and Jingquan Luo. Nearly Optimal Circuit Size for Sparse Quantum State Preparation. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 113:1-113:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{li_et_al:LIPIcs.ICALP.2025.113,
author = {Li, Lvzhou and Luo, Jingquan},
title = {{Nearly Optimal Circuit Size for Sparse Quantum State Preparation}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {113:1--113:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.113},
URN = {urn:nbn:de:0030-drops-234900},
doi = {10.4230/LIPIcs.ICALP.2025.113},
annote = {Keywords: Quantum computing, quantum state preparation, circuit complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Jingxun Liang and Renfei Zhou
Abstract
Fully indexable dictionaries (FID) store sets of integer keys while supporting rank/select queries. They serve as basic building blocks in many succinct data structures. Despite the great importance of FIDs, no known FID is succinct with efficient query time when the universe size U is a large polynomial in the number of keys n, which is the conventional parameter regime for dictionary problems. In this paper, we design an FID that uses log binom(U,n) + n/((log U/t)^{Ω(t)}) bits of space, and answers rank/select queries in O(t + log log n) time in the worst case, for any parameter 1 ≤ t ≤ log n / log log n, provided U = n^{1 + Θ(1)}. This time-space trade-off matches known lower bounds for FIDs [Pǎtraşcu and Thorup, 2006; Pǎtraşcu and Viola, 2010; Viola, 2023] when t ≤ log^{0.99} n.
Our techniques also lead to efficient succinct data structures for the fundamental problem of maintaining n integers each of 𝓁 = Θ(log n) bits and supporting partial-sum queries, with a trade-off between O(t) query time and n𝓁 + n / (log n / t)^{Ω(t)} bits of space. Prior to this work, no known data structure for the partial-sum problem achieves constant query time with n 𝓁 + o(n) bits of space usage.
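To pin down the interface: rank(x) counts the stored keys below x and select(j) returns the j-th smallest key. The naive structure below supports both queries but is nowhere near the succinct space bounds discussed above; it is only meant to fix terminology.

import bisect

class NaiveFID:
    # Stores a sorted list of integer keys: O(n log U) bits, rank in O(log n) time
    # by binary search, select in O(1) time by indexing. Interface illustration only.
    def __init__(self, keys):
        self.keys = sorted(keys)

    def rank(self, x):
        # number of stored keys strictly smaller than x
        return bisect.bisect_left(self.keys, x)

    def select(self, j):
        # the j-th smallest stored key (0-indexed)
        return self.keys[j]

fid = NaiveFID([3, 17, 42, 1000])
print(fid.rank(18), fid.select(2))  # 2 42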
Cite as
Jingxun Liang and Renfei Zhou. Optimal Static Fully Indexable Dictionaries. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 114:1-114:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{liang_et_al:LIPIcs.ICALP.2025.114,
author = {Liang, Jingxun and Zhou, Renfei},
title = {{Optimal Static Fully Indexable Dictionaries}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {114:1--114:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.114},
URN = {urn:nbn:de:0030-drops-234918},
doi = {10.4230/LIPIcs.ICALP.2025.114},
annote = {Keywords: data structures, dictionaries, space efficiency}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Leah London Arazi and Amir Shpilka
Abstract
This paper studies the hazard-free formula complexity of Boolean functions.
Our first result shows that unate functions are the only Boolean functions for which the monotone formula complexity of the hazard-derivative equals the hazard-free formula complexity of the function itself. Consequently, they are the only functions for which the hazard-derivative approach of Ikenmeyer et al. (J. ACM, 2019) yields optimal bounds.
Our second result proves that the hazard-free formula complexity of a uniformly random Boolean function is at most 2^{(1+o(1))n}. Prior to this, no better upper bound than O(3ⁿ) was known. Notably, unlike in the general case of Boolean circuits and formulas, where the typical complexity is derived from that of the multiplexer function with n-bit selector, the hazard-free formula complexity of a random function is smaller than the size of the optimal hazard-free formula for the multiplexer by a factor exponential in n.
We provide two proofs of this fact. The first is direct, bounding the number of prime implicants of a random Boolean function and using this bound to construct a DNF of the claimed size. The second introduces a new and independently interesting result: a weak converse to the hazard-derivative lower bound method, which gives an upper bound on the hazard-free complexity of a function in terms of the monotone complexity of a subset of its hazard-derivatives.
Additionally, we explore the hazard-free formula complexity of block composition of Boolean functions and obtain a result in the hazard-free setting that is analogous to a result of Karchmer, Raz, and Wigderson (Computational Complexity, 1995) in the monotone setting. We show that our result implies a stronger lower bound on the hazard-free formula depth of the block composition of the set covering function with the multiplexer function than the bound obtained via the hazard-derivative method.
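Recall that a formula has a hazard at a partially unstable input if its ternary (Kleene) evaluation outputs the unstable value u even though every Boolean resolution of the input yields the same value. The sketch below checks this for the standard DNF of the 2-to-1 multiplexer, the textbook example of a formula with a hazard; it illustrates the definition, not a construction from the paper.

from itertools import product

U = "u"  # the unstable/undefined value of Kleene's three-valued logic

def t_not(x):  return U if x == U else 1 - x
def t_and(x, y):
    if x == 0 or y == 0: return 0
    if x == U or y == U: return U
    return 1
def t_or(x, y):
    if x == 1 or y == 1: return 1
    if x == U or y == U: return U
    return 0

def mux(s, a, b):
    # 2-to-1 multiplexer as a DNF formula: (not s and a) or (s and b)
    return t_or(t_and(t_not(s), a), t_and(s, b))

def has_hazard(f, inp):
    unstable = [i for i, x in enumerate(inp) if x == U]
    outs = set()
    for bits in product((0, 1), repeat=len(unstable)):
        resolved = list(inp)
        for i, b in zip(unstable, bits):
            resolved[i] = b
        outs.add(f(*resolved))
    # hazard: all resolutions agree on a Boolean value, yet the ternary output is u
    return len(outs) == 1 and f(*inp) == U

print(has_hazard(mux, (U, 1, 1)))  # True: the standard DNF for MUX has a hazard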
Cite as
Leah London Arazi and Amir Shpilka. On the Complexity of Hazard-Free Formulas. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 115:1-115:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{londonarazi_et_al:LIPIcs.ICALP.2025.115,
author = {London Arazi, Leah and Shpilka, Amir},
title = {{On the Complexity of Hazard-Free Formulas}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {115:1--115:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.115},
URN = {urn:nbn:de:0030-drops-234920},
doi = {10.4230/LIPIcs.ICALP.2025.115},
annote = {Keywords: Hazard-free computation, Boolean formulas, monotone formulas, Karchmer-Wigderson games, communication complexity, lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Sepideh Mahabadi, Mohammad Roghani, and Jakub Tarnawski
Abstract
We study the problem of estimating the size of a maximum matching in sublinear time. The problem has been studied extensively in the literature and various algorithms and lower bounds are known for it. Our result is a 0.5109-approximation algorithm with a running time of Õ(n√n).
All previous algorithms either provide only a marginal improvement (e.g., 2^{-280}) over the 0.5-approximation that arises from estimating a maximal matching, or have a running time that is nearly n². Our approach is also arguably much simpler than other algorithms that beat 0.5-approximation.
Cite as
Sepideh Mahabadi, Mohammad Roghani, and Jakub Tarnawski. A 0.51-Approximation of Maximum Matching in Sublinear n^{1.5} Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 116:1-116:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{mahabadi_et_al:LIPIcs.ICALP.2025.116,
author = {Mahabadi, Sepideh and Roghani, Mohammad and Tarnawski, Jakub},
title = {{A 0.51-Approximation of Maximum Matching in Sublinear n^\{1.5\} Time}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {116:1--116:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.116},
URN = {urn:nbn:de:0030-drops-234932},
doi = {10.4230/LIPIcs.ICALP.2025.116},
annote = {Keywords: Sublinear Algorithms, Maximum Matching, Maximal Matching, Approximation Algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Samuel McCauley, Benjamin Moseley, Aidin Niaparast, Helia Niaparast, and Shikha Singh
Abstract
The algorithms-with-predictions framework has been used extensively to develop online algorithms with improved beyond-worst-case competitive ratios. Recently, there has been growing interest in leveraging predictions for designing data structures with improved beyond-worst-case running times. In this paper, we study the fundamental data structure problem of maintaining approximate shortest paths in incremental graphs in the algorithms-with-predictions model. Given a sequence σ of edges that are inserted one at a time, the goal is to maintain approximate shortest paths from the source to each vertex in the graph at each time step. Before any edges arrive, the data structure is given a prediction σ̂ of the online edge sequence, which is used to "warm start" its state.
As our main result, we design a learned algorithm that maintains (1+ε)-approximate single-source shortest paths, which runs in Õ(m η log W/ε) time, where W is the weight of the heaviest edge and η is the prediction error. We show these techniques immediately extend to the all-pairs shortest-path setting as well. Our algorithms are consistent (performing nearly as fast as the offline algorithm) when predictions are nearly perfect, have a smooth degradation in performance with respect to the prediction error and, in the worst case, match the best offline algorithm up to logarithmic factors. That is, the algorithms are "ideal" in the algorithms-with-predictions model.
As a building block, we study the offline incremental approximate single-source shortest-path (SSSP) problem. In the offline incremental SSSP problem, the edge sequence σ is known a priori and the goal is to construct a data structure that can efficiently return the length of the shortest paths in the intermediate graph G_t consisting of the first t edges, for all t. Note that the offline incremental problem is defined in the worst-case setting (without predictions) and is of independent interest.
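For intuition, exact incremental single-source shortest paths with non-negative weights can be maintained by relaxation: when an edge is inserted, propagate any distance improvement it creates. The simplified sketch below does exactly that, without the (1+ε) rounding, the prediction warm start, or the running-time guarantees of the learned algorithm.

import heapq
from collections import defaultdict

class IncrementalSSSP:
    def __init__(self, source):
        self.adj = defaultdict(list)
        self.dist = defaultdict(lambda: float("inf"))
        self.dist[source] = 0

    def insert_edge(self, u, v, w):
        self.adj[u].append((v, w))
        # propagate the improvement (a Dijkstra-style relaxation started at the new edge)
        if self.dist[u] + w < self.dist[v]:
            self.dist[v] = self.dist[u] + w
            heap = [(self.dist[v], v)]
            while heap:
                d, x = heapq.heappop(heap)
                if d > self.dist[x]:
                    continue
                for y, wy in self.adj[x]:
                    if d + wy < self.dist[y]:
                        self.dist[y] = d + wy
                        heapq.heappush(heap, (d + wy, y))

sssp = IncrementalSSSP(source=0)
for u, v, w in [(0, 1, 2), (1, 2, 2), (0, 2, 5), (2, 3, 1), (0, 3, 10)]:
    sssp.insert_edge(u, v, w)
print(dict(sssp.dist))  # {0: 0, 1: 2, 2: 4, 3: 5}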
Cite as
Samuel McCauley, Benjamin Moseley, Aidin Niaparast, Helia Niaparast, and Shikha Singh. Incremental Approximate Single-Source Shortest Paths with Predictions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 117:1-117:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{mccauley_et_al:LIPIcs.ICALP.2025.117,
author = {McCauley, Samuel and Moseley, Benjamin and Niaparast, Aidin and Niaparast, Helia and Singh, Shikha},
title = {{Incremental Approximate Single-Source Shortest Paths with Predictions}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {117:1--117:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.117},
URN = {urn:nbn:de:0030-drops-234946},
doi = {10.4230/LIPIcs.ICALP.2025.117},
annote = {Keywords: Algorithms with Predictions, Shortest Paths, Approximation Algorithms, Dynamic Graph Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Boning Meng, Juqiu Wang, and Mingji Xia
Abstract
In this article, we study the computational complexity of counting weighted Eulerian orientations, denoted as #EO. This problem is considered a pivotal scenario in the complexity classification for Holant, a counting framework of great significance. Our results consist of three parts. First, we prove a complexity dichotomy theorem for #EO defined by a set of binary and quaternary signatures, which generalizes the previous dichotomy for the six-vertex model. Second, we prove a dichotomy for #EO defined by a set of so-called pure signatures, which possess the closure property under gadget construction. Finally, we present a polynomial-time algorithm for #EO defined by specific rebalancing signatures, which extends the algorithm for pure signatures to a broader range of problems, including #EO defined by non-pure signatures such as f_40. We also construct a signature f_56 that is not rebalancing, and whether #EO(f_56) is computable in polynomial time remains open.
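The unweighted special case of #EO on a small graph can be computed by brute force: count the orientations in which every vertex has equal in- and out-degree. The sketch below does so for a "bowtie" graph (two triangles sharing a vertex); it illustrates the counting problem itself, not the dichotomies or the polynomial-time algorithms above.

from itertools import product
from collections import Counter

# the "bowtie": two triangles sharing vertex 2 (all vertex degrees are even)
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]

count = 0
for orientation in product((0, 1), repeat=len(edges)):
    deg = Counter()  # out-degree minus in-degree per vertex
    for (u, v), flip in zip(edges, orientation):
        tail, head = (v, u) if flip else (u, v)
        deg[tail] += 1
        deg[head] -= 1
    if all(d == 0 for d in deg.values()):
        count += 1
print("number of Eulerian orientations:", count)  # 4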
Cite as
Boning Meng, Juqiu Wang, and Mingji Xia. P-Time Algorithms for Typical #EO Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 118:1-118:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{meng_et_al:LIPIcs.ICALP.2025.118,
author = {Meng, Boning and Wang, Juqiu and Xia, Mingji},
title = {{P-Time Algorithms for Typical #EO Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {118:1--118:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.118},
URN = {urn:nbn:de:0030-drops-234953},
doi = {10.4230/LIPIcs.ICALP.2025.118},
annote = {Keywords: Counting complexity, Eulerian orientation, Holant, #P-hardness, Dichotomy theorem}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Slobodan Mitrović, Anish Mukherjee, Piotr Sankowski, and Wen-Horng Sheu
Abstract
We design a deterministic algorithm for the (1+ε)-approximate maximum matching problem. Our primary result demonstrates that this problem can be solved in O(ε^{-6}) semi-streaming passes, improving upon the O(ε^{-19}) pass-complexity algorithm by [Fischer, Mitrović, and Uitto, STOC'22]. This contributes substantially toward resolving Open question 2 from [Assadi, SOSA'24]. Leveraging the framework introduced in [FMU'22], our algorithm achieves an analogous round complexity speed-up for computing a (1+ε)-approximate maximum matching in both the Massively Parallel Computation (MPC) and CONGEST models.
The data structures maintained by our algorithm are formulated using blossom notation and represented through alternating trees. This approach enables a simplified correctness analysis by treating specific components as if operating on bipartite graphs, effectively circumventing certain technical intricacies present in prior work.
Cite as
Slobodan Mitrović, Anish Mukherjee, Piotr Sankowski, and Wen-Horng Sheu. Faster Semi-Streaming Matchings via Alternating Trees. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 119:1-119:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{mitrovic_et_al:LIPIcs.ICALP.2025.119,
author = {Mitrovi\'{c}, Slobodan and Mukherjee, Anish and Sankowski, Piotr and Sheu, Wen-Horng},
title = {{Faster Semi-Streaming Matchings via Alternating Trees}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {119:1--119:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.119},
URN = {urn:nbn:de:0030-drops-234965},
doi = {10.4230/LIPIcs.ICALP.2025.119},
annote = {Keywords: streaming algorithms, approximation algorithms, maximum matching}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Hendrik Molter, Meirav Zehavi, and Amit Zivan
Abstract
We provide the first algorithm for computing an optimal tree decomposition for a given graph G that runs in single exponential time in the feedback vertex number of G, that is, in time 2^{𝒪(fvn(G))}⋅ n^{𝒪(1)}, where fvn(G) is the feedback vertex number of G and n is the number of vertices of G. On a classification level, this improves the previously known results by Chapelle et al. [Discrete Applied Mathematics '17] and Fomin et al. [Algorithmica '18], who independently showed that an optimal tree decomposition can be computed in single exponential time in the vertex cover number of G.
One of the biggest open problems in the area of parameterized complexity is whether we can compute an optimal tree decomposition in single exponential time in the treewidth of the input graph. The currently best known algorithm by Korhonen and Lokshtanov [STOC '23] runs in 2^{𝒪(tw(G)²)}⋅ n⁴ time, where tw(G) is the treewidth of G. Our algorithm improves upon this result on graphs G where fvn(G) ∈ o(tw(G)²). On a different note, since fvn(G) is an upper bound on tw(G), our algorithm can also be seen either as an important step towards a positive resolution of the above-mentioned open problem, or, if the answer turns out to be negative, as marking the tractability border of single exponential time algorithms for the computation of treewidth.
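The parameter itself is elementary: the feedback vertex number is the least number of vertices whose deletion leaves a forest. A brute-force computation on a tiny graph is sketched below, purely to fix the definition; the single-exponential tree-decomposition algorithm is far beyond this illustration.

from itertools import combinations

def is_forest(vertices, edges):
    # acyclicity check via union-find: an edge closing a cycle joins two already-linked roots
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def feedback_vertex_number(vertices, edges):
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            rest = [v for v in vertices if v not in S]
            kept = [(u, v) for u, v in edges if u not in S and v not in S]
            if is_forest(rest, kept):
                return k

V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]  # two triangles sharing vertex 2
print(feedback_vertex_number(V, E))  # 1: deleting vertex 2 leaves a forest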
Cite as
Hendrik Molter, Meirav Zehavi, and Amit Zivan. Treewidth Parameterized by Feedback Vertex Number. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 120:1-120:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
Copy BibTex To Clipboard
@InProceedings{molter_et_al:LIPIcs.ICALP.2025.120,
author = {Molter, Hendrik and Zehavi, Meirav and Zivan, Amit},
title = {{Treewidth Parameterized by Feedback Vertex Number}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {120:1--120:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.120},
URN = {urn:nbn:de:0030-drops-234979},
doi = {10.4230/LIPIcs.ICALP.2025.120},
annote = {Keywords: Treewidth, Tree Decomposition, Exact Algorithms, Single Exponential Time, Feedback Vertex Number, Dynamic Programming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Tamio-Vesa Nakajima and Stanislav Živný
Abstract
Given a (multi)graph G which contains a bipartite subgraph with ρ edges, what is the largest triangle-free subgraph of G that can be found efficiently? We present an SDP-based algorithm that finds one with at least 0.8823 ρ edges, thus improving on the subgraph with 0.878 ρ edges obtained by the classic Max-Cut algorithm of Goemans and Williamson. On the other hand, by a reduction from Håstad’s 3-bit PCP we show that it is NP-hard to find a triangle-free subgraph with (25 / 26 + ε) ρ ≈ (0.961 + ε) ρ edges.
As an application, we classify the Maximum Promise Constraint Satisfaction Problem, denoted by MaxPCSP(G, H), for all bipartite G: Given an input (multi)graph X which admits a G-colouring satisfying ρ edges, find an H-colouring of X that satisfies ρ edges. This problem is solvable in polynomial time, apart from trivial cases, if H contains a triangle, and is NP-hard otherwise.
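The classic Max-Cut algorithm of Goemans and Williamson mentioned above rounds an SDP solution with a random hyperplane. As a small background sketch only (it shows neither the SDP solve nor the paper's SDP-based triangle-free algorithm; the function name and toy input are ours), assuming unit vectors from the relaxation are already available:
import numpy as np

def hyperplane_round(vectors, seed=0):
    # Goemans-Williamson rounding step: each vertex i has a unit vector
    # vectors[i] from the Max-Cut SDP relaxation; a random Gaussian
    # hyperplane splits the vertices into the two sides of the cut.
    g = np.random.default_rng(seed).standard_normal(vectors.shape[1])
    return vectors @ g >= 0  # boolean side of the hyperplane per vertex

# Toy usage: two antipodal unit vectors land on opposite sides of the cut.
print(hyperplane_round(np.array([[1.0, 0.0], [-1.0, 0.0]])))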
Cite as
Tamio-Vesa Nakajima and Stanislav Živný. Maximum Bipartite vs. Triangle-Free Subgraph. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 121:1-121:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{nakajima_et_al:LIPIcs.ICALP.2025.121,
author = {Nakajima, Tamio-Vesa and \v{Z}ivn\'{y}, Stanislav},
title = {{Maximum Bipartite vs. Triangle-Free Subgraph}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {121:1--121:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.121},
URN = {urn:nbn:de:0030-drops-234987},
doi = {10.4230/LIPIcs.ICALP.2025.121},
annote = {Keywords: approximation, promise constraint satisfaction, triangle-free subgraphs}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Naoto Ohsaka
Abstract
The Probabilistically Checkable Reconfiguration Proof (PCRP) theorem, proven by Hirahara and Ohsaka (STOC 2024) [Hirahara and Ohsaka, 2024] and Karthik C. S. and Manurangsi [Karthik C. S. and Manurangsi, 2023], provides a new PCP-type characterization of PSPACE: A language L is in PSPACE if and only if there exists a probabilistic verifier 𝒱 and a pair of polynomial-time computable proofs π^ini, π^end such that the following hold for every input x:
- If x ∈ L, then π^ini(x) can be transformed into π^end(x) by repeatedly flipping a single bit of the proof at a time, while making 𝒱(x) accept every intermediate proof with probability 1.
- If x ∉ L, then any such transformation induces a proof that is rejected by 𝒱(x) with probability more than 1/2. The PCRP theorem finds many applications in PSPACE-hardness of approximation for reconfiguration problems.
In this paper, we present an alternative proof of the PCRP theorem that is "simpler" than those of Hirahara and Ohsaka [Hirahara and Ohsaka, 2024] and Karthik C. S. and Manurangsi [Karthik C. S. and Manurangsi, 2023]. Our PCRP system is obtained by combining simple robustization and composition steps in a modular fashion, which renders its analysis more intuitive. The crux of implementing the robustization step is an error-correcting code that enjoys both list decodability and reconfigurability, the latter of which makes it possible to reconfigure between a pair of codewords while avoiding getting too close to any other codeword.
Cite as
Naoto Ohsaka. Yet Another Simple Proof of the PCRP Theorem. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 122:1-122:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{ohsaka:LIPIcs.ICALP.2025.122,
author = {Ohsaka, Naoto},
title = {{Yet Another Simple Proof of the PCRP Theorem}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {122:1--122:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.122},
URN = {urn:nbn:de:0030-drops-234995},
doi = {10.4230/LIPIcs.ICALP.2025.122},
annote = {Keywords: reconfiguration problems, hardness of approximation, probabilistic proof systems}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Chirag Pabbaraju and Ali Vakilian
Abstract
In the Markov paging model, one assumes that page requests are drawn from a Markov chain over the pages in memory, and the goal is to maintain a fast cache that suffers few page faults in expectation. While computing the optimal online algorithm (OPT) for this problem naively takes time exponential in the size of the cache, the best-known polynomial-time approximation algorithm is the dominating distribution algorithm due to Lund, Phillips and Reingold (FOCS 1994), who showed that the algorithm is 4-competitive against OPT. We substantially improve their analysis and show that the dominating distribution algorithm is in fact 2-competitive against OPT. We also show a lower bound of 1.5907-competitiveness for this algorithm - to the best of our knowledge, no such lower bound was previously known.
Cite as
Chirag Pabbaraju and Ali Vakilian. New and Improved Bounds for Markov Paging. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 123:1-123:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{pabbaraju_et_al:LIPIcs.ICALP.2025.123,
author = {Pabbaraju, Chirag and Vakilian, Ali},
title = {{New and Improved Bounds for Markov Paging}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {123:1--123:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.123},
URN = {urn:nbn:de:0030-drops-235005},
doi = {10.4230/LIPIcs.ICALP.2025.123},
annote = {Keywords: Beyond Worst-case Analysis, Online Paging, Markov Paging}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Daniel Paul-Pena and C. Seshadhri
Abstract
We study the classic problem of subgraph counting, where we wish to determine the number of occurrences of a fixed pattern graph H in an input graph G of n vertices. Our focus is on bounded degeneracy inputs, a rich family of graph classes that also characterizes real-world massive networks. Building on the seminal techniques introduced by Chiba-Nishizeki (SICOMP 1985), a recent line of work has built subgraph counting algorithms for bounded degeneracy graphs. Assuming fine-grained complexity conjectures, there is a complete characterization of patterns H for which linear time subgraph counting is possible. For every r ≥ 6, there exists an H with r vertices that cannot be counted in linear time.
In this paper, we initiate a study of subquadratic algorithms for subgraph counting on bounded degeneracy graphs. We prove that when H has at most 9 vertices, subgraph counting can be done in Õ(n^{5/3}) time. As a secondary result, we give improved algorithms for counting cycles of length at most 10. Previously, no subquadratic algorithms were known for the above problems on bounded degeneracy graphs.
Our main conceptual contribution is a framework that reduces subgraph counting in bounded degeneracy graphs to counting smaller hypergraphs in arbitrary graphs. We believe that our results will help build a general theory of subgraph counting for bounded degeneracy graphs.
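As background for the bounded-degeneracy setting above, the following is a minimal sketch (an illustrative helper of ours, not the paper's counting framework) of computing a degeneracy ordering by repeatedly removing a minimum-degree vertex, the ordering underlying Chiba-Nishizeki style orientations:
import heapq

def degeneracy_ordering(adj):
    # adj: dict mapping each vertex to a set/list of its neighbours.
    # Repeatedly remove a vertex of minimum remaining degree; the largest
    # degree seen at removal time is the degeneracy of the graph.
    deg = {v: len(adj[v]) for v in adj}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, order, degeneracy = set(), [], 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue  # stale heap entry
        removed.add(v)
        order.append(v)
        degeneracy = max(degeneracy, d)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return order, degeneracy

# Toy usage: a triangle plus a pendant vertex has degeneracy 2.
print(degeneracy_ordering({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}))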
Cite as
Daniel Paul-Pena and C. Seshadhri. Subgraph Counting in Subquadratic Time for Bounded Degeneracy Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 124:1-124:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{paulpena_et_al:LIPIcs.ICALP.2025.124,
author = {Paul-Pena, Daniel and Seshadhri, C.},
title = {{Subgraph Counting in Subquadratic Time for Bounded Degeneracy Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {124:1--124:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.124},
URN = {urn:nbn:de:0030-drops-235010},
doi = {10.4230/LIPIcs.ICALP.2025.124},
annote = {Keywords: Homomorphism counting, Bounded degeneracy graphs, Fine-grained complexity, Subgraph counting}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Aditya Potukuchi and Shikha Singh
Abstract
Properties of stable matchings in the popular random-matching-market model have been studied for over 50 years. In a random matching market, each agent has complete preferences drawn uniformly and independently at random. Wilson (1972), Knuth (1976) and Pittel (1989) proved that in balanced random matching markets, the proposers are matched to their ln nth choice on average. In this paper, we consider competitive markets with n jobs and n+k candidates, and partial lists where each agent only ranks their top d choices. Despite the long history of the problem, the following fundamental question remains unanswered for these generalized markets: what is the tight threshold on list length d that results in a perfect stable matching with high probability? In this paper, we answer this question exactly - we prove a sharp threshold d₀ = ln n ⋅ ln (n+k)/(k+1) on the existence of perfect stable matchings when k = o(n). That is, we show that if d < (1-ε) d₀, then no stable matching matches all jobs; moreover, if d > (1+ ε) d₀, then all jobs are matched in every stable matching with high probability. This bound improves and generalizes recent results by Kanoria, Min and Qian (2021).
Furthermore, we extend the line of work studying the effect of imbalance on the expected rank of the proposers (termed the "stark effect of competition"). We establish the regime in unbalanced markets that forces this stark effect to take shape in markets with partial preferences.
Cite as
Aditya Potukuchi and Shikha Singh. Unbalanced Random Matching Markets with Partial Preferences. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 125:1-125:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{potukuchi_et_al:LIPIcs.ICALP.2025.125,
author = {Potukuchi, Aditya and Singh, Shikha},
title = {{Unbalanced Random Matching Markets with Partial Preferences}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {125:1--125:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.125},
URN = {urn:nbn:de:0030-drops-235025},
doi = {10.4230/LIPIcs.ICALP.2025.125},
annote = {Keywords: stable matching, probabilistic method, Gale-Shapley algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Lars Rohwedder
Abstract
Given n jobs with processing times p₁,...,p_n ∈ ℕ and m ≤ n machines with speeds s₁,...,s_m ∈ ℕ, our goal is to allocate the jobs to machines so as to minimize the makespan. We present an algorithm that solves the problem in time p_{max}^{O(d)} ⋅ n, where p_{max} is the maximum processing time and d ≤ p_{max} is the number of distinct processing times. This is essentially the best possible due to a lower bound based on the exponential time hypothesis (ETH).
Our result improves over prior works that had a quadratic term in d in the exponent and answers an open question by Koutecký and Zink. The algorithm is based on integer programming techniques combined with novel ideas from modular arithmetic. It can also be implemented efficiently for the more compact high-multiplicity instance encoding.
Cite as
Lars Rohwedder. ETH-Tight FPT Algorithm for Makespan Minimization on Uniform Machines. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 126:1-126:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{rohwedder:LIPIcs.ICALP.2025.126,
author = {Rohwedder, Lars},
title = {{ETH-Tight FPT Algorithm for Makespan Minimization on Uniform Machines}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {126:1--126:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.126},
URN = {urn:nbn:de:0030-drops-235037},
doi = {10.4230/LIPIcs.ICALP.2025.126},
annote = {Keywords: Scheduling, Integer Programming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Lars Rohwedder, Arman Rouhani, and Leo Wennmann
Abstract
We present a dependent randomized rounding scheme, which rounds fractional solutions to integral solutions satisfying certain hard constraints on the output while preserving Chernoff-like concentration properties. In contrast to previous dependent rounding schemes, our algorithm guarantees that the cost of the rounded integral solution does not exceed that of the fractional solution. Our algorithm works for a class of assignment problems with restrictions similar to those of prior works.
In a non-trivial combination of our general result with a classical approach from Shmoys and Tardos [Math. Programm.'93] and more recent linear programming techniques developed for the restricted assignment variant by Bansal, Sviridenko [STOC'06] and Davies, Rothvoss, Zhang [SODA'20], we derive an O(log n)-approximation algorithm for the Budgeted Santa Claus Problem. In this new variant, the goal is to allocate resources with different values to players, maximizing the minimum value a player receives, and satisfying a budget constraint on player-resource allocation costs.
Cite as
Lars Rohwedder, Arman Rouhani, and Leo Wennmann. Cost Preserving Dependent Rounding for Allocation Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 127:1-127:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{rohwedder_et_al:LIPIcs.ICALP.2025.127,
author = {Rohwedder, Lars and Rouhani, Arman and Wennmann, Leo},
title = {{Cost Preserving Dependent Rounding for Allocation Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {127:1--127:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.127},
URN = {urn:nbn:de:0030-drops-235049},
doi = {10.4230/LIPIcs.ICALP.2025.127},
annote = {Keywords: Matching, Randomized Rounding, Santa Claus, Approximation Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Lars Rohwedder and Leander Schnaars
Abstract
We provide an algorithm giving a 140/41 (< 3.415)-approximation for Coflow Scheduling and a 4.36-approximation for Coflow Scheduling with release dates. This improves upon the best known 4- and 5-approximations, respectively, and addresses an open question posed by Agarwal, Rajakrishnan, Narayan, Agarwal, Shmoys, and Vahdat [Agarwal et al., 2018], Fukunaga [Fukunaga, 2022], and others. We additionally show that in an asymptotic setting, the algorithm achieves a (2+ε)-approximation, which is essentially optimal assuming P ≠ NP. The improvements are achieved using a novel edge allocation scheme based on iterated LP rounding, together with a framework that enables establishing strong bounds for combinations of several edge allocation algorithms.
Cite as
Lars Rohwedder and Leander Schnaars. 3.415-Approximation for Coflow Scheduling via Iterated Rounding. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 128:1-128:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{rohwedder_et_al:LIPIcs.ICALP.2025.128,
author = {Rohwedder, Lars and Schnaars, Leander},
title = {{3.415-Approximation for Coflow Scheduling via Iterated Rounding}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {128:1--128:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.128},
URN = {urn:nbn:de:0030-drops-235050},
doi = {10.4230/LIPIcs.ICALP.2025.128},
annote = {Keywords: Coflow Scheduling, Approximation Algorithms, Iterated Rounding}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Thomas Schneider and Pascal Schweitzer
Abstract
The Weisfeiler-Leman (WL) algorithms form a family of incomplete approaches to the graph isomorphism problem. They recently found various applications in algorithmic group theory and machine learning. In fact, the algorithms form a parameterized family: for each k ∈ ℕ there is a corresponding k-dimensional algorithm WLk. The algorithms become increasingly powerful with increasing dimension, but at the same time the running time increases. The WL-dimension of a graph G is the smallest k ∈ ℕ for which WLk correctly decides isomorphism between G and every other graph. In some sense, the WL-dimension measures how difficult it is to test isomorphism of one graph to others using a fairly general class of combinatorial algorithms. Nowadays, it is a standard measure in descriptive complexity theory for the structural complexity of a graph.
We prove that the WL-dimension of a graph on n vertices is at most 3/20 ⋅ n + o(n) = 0.15 ⋅ n + o(n).
Reducing the question to coherent configurations, the proof develops various techniques to analyze their structure. This includes sufficient conditions under which a fiber can be restored uniquely up to isomorphism if it is removed, a recursive proof exploiting a degree reduction and treewidth bounds, as well as an exhaustive analysis of interspaces involving small fibers.
As a base case, we also analyze the dimension of coherent configurations with small fiber size and thereby graphs with small color class size.
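For readers new to the WL family referenced above, here is a minimal sketch of its base case, the 1-dimensional algorithm (colour refinement); it only illustrates what the algorithms do, is unrelated to the proof of the 3/20·n bound, and the function name is ours:
def colour_refinement(adj):
    # adj: dict mapping each vertex to an iterable of its neighbours.
    # Iteratively refine vertex colours by the multiset of neighbouring
    # colours until the colouring is stable (1-dimensional Weisfeiler-Leman).
    colour = {v: 0 for v in adj}
    while True:
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colour:
            return colour
        colour = new

# Toy usage: on a path a-b-c, the endpoints get one colour, the middle another.
print(colour_refinement({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))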
Cite as
Thomas Schneider and Pascal Schweitzer. An Upper Bound on the Weisfeiler-Leman Dimension. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 129:1-129:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{schneider_et_al:LIPIcs.ICALP.2025.129,
author = {Schneider, Thomas and Schweitzer, Pascal},
title = {{An Upper Bound on the Weisfeiler-Leman Dimension}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {129:1--129:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.129},
URN = {urn:nbn:de:0030-drops-235065},
doi = {10.4230/LIPIcs.ICALP.2025.129},
annote = {Keywords: Weisfeiler-Leman dimension, descriptive complexity, coherent configurations}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ahmed Shalaby and Damien Woods
Abstract
The information-encoding molecules RNA and DNA bind via base pairing to form an exponentially large set of secondary structures. Practitioners need algorithms to predict the most favoured structures, called minimum free energy (MFE) structures, or to compute a partition function that allows assigning a probability to any structure. MFE prediction is NP-hard in the presence of pseudoknots - base pairings that violate a restricted planarity condition. However, for single-stranded unpseudoknotted structures, there are polynomial time dynamic programming algorithms. For multiple strands, the problem is significantly more complicated: Condon, Hajiaghayi and Thachuk [DNA27, 2021] proved it NP-hard for N bases and 𝒪(N) strands. Dirks, Bois, Schaeffer, Winfree and Pierce [SIAM Review, 2007] gave a polynomial time partition function algorithm for multiple (𝒪(1)) strands, now widely used; however, their technique did not generalise to MFE, which they left open.
We give an 𝒪(N⁴) time algorithm for unpseudoknotted multiple (𝒪(1)) strand MFE prediction, answering the open problem from Dirks et al. The challenge lies in considering the rotational symmetry of secondary structures, a global feature not immediately amenable to local subproblem decomposition used in dynamic programming. Our proof has two main technical contributions: First, a characterisation of symmetric secondary structures implying only quadratically many need to be considered when computing the rotational symmetry penalty. Second, that bound is leveraged by a backtracking algorithm to efficiently find the MFE in an exponential space of contenders.
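As a purely illustrative companion to the single-stranded, unpseudoknotted dynamic programs mentioned above, here is a Nussinov-style sketch that maximises base pairs; it is the textbook toy recurrence, not the paper's multi-stranded MFE algorithm, and it ignores realistic nearest-neighbour energy models (the function name and the minimum loop length are our choices):
def nussinov_max_pairs(seq, min_loop=3):
    # dp[i][j] = maximum number of non-crossing base pairs in seq[i..j],
    # where a pair (i, k) must satisfy k - i > min_loop (hairpin constraint).
    pairs = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]  # position i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:  # pair i with k
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1]

# Toy usage: the hairpin GGGAAAUCC admits 3 nested pairs.
print(nussinov_max_pairs("GGGAAAUCC"))  # prints 3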
Cite as
Ahmed Shalaby and Damien Woods. An Efficient Algorithm to Compute the Minimum Free Energy of Interacting Nucleic Acid Strands. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 130:1-130:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{shalaby_et_al:LIPIcs.ICALP.2025.130,
author = {Shalaby, Ahmed and Woods, Damien},
title = {{An Efficient Algorithm to Compute the Minimum Free Energy of Interacting Nucleic Acid Strands}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {130:1--130:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.130},
URN = {urn:nbn:de:0030-drops-235071},
doi = {10.4230/LIPIcs.ICALP.2025.130},
annote = {Keywords: Minimum free energy, MFE, partition function, nucleic acid, DNA, RNA, secondary structure, computational complexity, algorithm analysis and design, dynamic programming}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Aleksandros Sobczyk
Abstract
In this work we revisit the arithmetic and bit complexity of Hermitian eigenproblems. Recently, [BGVKS, FOCS 2020] proved that a (non-Hermitian) matrix A can be diagonalized with a randomized algorithm in O(n^{ω}log²(n/ε)) arithmetic operations, where ω≲ 2.371 is the square matrix multiplication exponent, and [Shah, SODA 2025] significantly improved the bit complexity for the Hermitian case. Our main goal is to obtain similar deterministic complexity bounds for various Hermitian eigenproblems. In the Real RAM model, we show that a Hermitian matrix can be diagonalized deterministically in O(n^{ω}log(n)+n²polylog(n/ε)) arithmetic operations, improving the classic deterministic Õ(n³) algorithms, and derandomizing the aforementioned state-of-the-art. The main technical step is a complete, detailed analysis of a well-known divide-and-conquer tridiagonal eigensolver of Gu and Eisenstat [GE95], when accelerated with the Fast Multipole Method, asserting that it can accurately diagonalize a symmetric tridiagonal matrix in nearly-O(n²) operations. In finite precision, we show that an algorithm by Schönhage [Sch72] to reduce a Hermitian matrix to tridiagonal form is stable in the floating point model, using O(log(n/ε)) bits of precision. This leads to a deterministic algorithm to compute all the eigenvalues of a Hermitian matrix in O(n^{ω}ℱ(log(n/ε)) + n²polylog(n/ε)) bit operations, where ℱ(b) ∈ Õ(b) is the bit complexity of a single floating point operation on b bits. This improves the best known Õ(n³) deterministic and O(n^{ω}log²(n/ε)ℱ(log(n/ε))) randomized complexities. We conclude with some other useful subroutines such as computing spectral gaps, condition numbers, and spectral projectors, and with some open problems.
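As a small, generic illustration of the tridiagonal reduction step discussed above (using SciPy's general Hessenberg reduction rather than Schönhage's algorithm or anything from the paper; the matrix size and seed are arbitrary choices of ours):
import numpy as np
from scipy.linalg import hessenberg

# For a Hermitian matrix A, unitary reduction to Hessenberg form produces a
# tridiagonal matrix T with A = Q T Q^H, the standard first step of the
# eigensolvers discussed above.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A = (A + A.conj().T) / 2                      # make A Hermitian
T, Q = hessenberg(A, calc_q=True)             # A = Q @ T @ Q.conj().T
print(np.allclose(Q @ T @ Q.conj().T, A))     # True: unitary similarity holds
print(np.allclose(np.triu(T, 2), 0),          # True, True: T is tridiagonal
      np.allclose(np.tril(T, -2), 0))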
Cite as
Aleksandros Sobczyk. Deterministic Complexity Analysis of Hermitian Eigenproblems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 131:1-131:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{sobczyk:LIPIcs.ICALP.2025.131,
author = {Sobczyk, Aleksandros},
title = {{Deterministic Complexity Analysis of Hermitian Eigenproblems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {131:1--131:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.131},
URN = {urn:nbn:de:0030-drops-235081},
doi = {10.4230/LIPIcs.ICALP.2025.131},
annote = {Keywords: Hermitian eigenproblem, eigenvalues, SVD, tridiagonal reduction, matrix multiplication time, diagonalization, bit complexity}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Aurelio L. Sulser and Maximilian Probst Gutenberg
Abstract
In this work, we present the first algorithm to compute expander decompositions in an m-edge directed graph with near-optimal time Õ(m). Further, our algorithm can maintain such a decomposition in a dynamic graph and again obtains near-optimal update times. Our result improves over previous algorithms [Bernstein et al., 2020; Hua et al., 2023] that only obtained algorithms optimal up to subpolynomial factors.
In order to obtain our new algorithm, we present a new push-pull-relabel flow framework that generalizes the classic push-relabel flow algorithm [Goldberg and Tarjan, 1988] which was later dynamized for computing expander decompositions in undirected graphs [Henzinger et al., 2020; Saranurak and Wang, 2019]. We then show that the flow problems formulated in recent work [Hua et al., 2023] to decompose directed graphs can be solved much more efficiently in the push-pull-relabel flow framework.
Our algorithm has recently been employed to obtain the currently fastest algorithm to compute min-cost flows [Van Den Brand et al., 2024]. We further believe that our algorithm can be used to speed up and simplify recent breakthroughs in combinatorial graph algorithms towards fast maximum flow algorithms [Chuzhoy and Khanna, 2024; Chuzhoy and Khanna, 2024; Bernstein et al., 2024].
Cite as
Aurelio L. Sulser and Maximilian Probst Gutenberg. Near-Optimal Algorithm for Directed Expander Decompositions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 132:1-132:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{sulser_et_al:LIPIcs.ICALP.2025.132,
author = {Sulser, Aurelio L. and Gutenberg, Maximilian Probst},
title = {{Near-Optimal Algorithm for Directed Expander Decompositions}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {132:1--132:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.132},
URN = {urn:nbn:de:0030-drops-235096},
doi = {10.4230/LIPIcs.ICALP.2025.132},
annote = {Keywords: Directed Expander Decomposition, Push-Pull-Relabel Algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ivor van der Hoog, Thijs van der Horst, and Tim Ophelders
Abstract
Given a trajectory T and a distance Δ, we wish to find a set C of curves of complexity at most 𝓁, such that we can cover T with subcurves that each are within Fréchet distance Δ to at least one curve in C. We call C an (𝓁,Δ)-clustering and aim to find an (𝓁,Δ)-clustering of minimum cardinality. This problem variant was introduced by Akitaya et al. (2021) and shown to be NP-complete. The main focus has therefore been on bicriteria approximation algorithms, allowing for the clustering to be an (𝓁, Θ(Δ))-clustering of roughly optimal size.
We present algorithms that construct (𝓁,4Δ)-clusterings of 𝒪(k log n) size, where k is the size of the optimal (𝓁, Δ)-clustering. We use 𝒪(n³) space and 𝒪(k n³ log⁴ n) time. Our algorithms significantly improve upon the clustering quality (improving the approximation factor in Δ) and size (whenever 𝓁 ∈ Ω(log n / log k)). We offer deterministic running times, improving known expected bounds by a factor near-linear in 𝓁. Additionally, we match the space usage of prior work and improve upon it substantially, by a factor super-linear in n𝓁, when compared to deterministic results.
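For readers unfamiliar with the Fréchet distance used above, the following is a minimal sketch of the classical quadratic dynamic program for the discrete Fréchet distance between two point sequences; the paper's continuous-Fréchet clustering algorithms are not reproduced, and the function name is ours:
import math

def discrete_frechet(P, Q):
    # dp[i][j] = discrete Frechet distance between prefixes P[:i+1], Q[:j+1];
    # the classical O(|P| * |Q|) dynamic program for polygonal curves given
    # as lists of 2D points.
    m, n = len(P), len(Q)
    dp = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                prev = 0.0
            elif i == 0:
                prev = dp[0][j - 1]
            elif j == 0:
                prev = dp[i - 1][0]
            else:
                prev = min(dp[i - 1][j], dp[i - 1][j - 1], dp[i][j - 1])
            dp[i][j] = max(prev, d)
    return dp[m - 1][n - 1]

# Toy usage: a curve and a unit vertical translate of it are at distance 1.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))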
Cite as
Ivor van der Hoog, Thijs van der Horst, and Tim Ophelders. Faster, Deterministic and Space Efficient Subtrajectory Clustering. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 133:1-133:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{vanderhoog_et_al:LIPIcs.ICALP.2025.133,
author = {van der Hoog, Ivor and van der Horst, Thijs and Ophelders, Tim},
title = {{Faster, Deterministic and Space Efficient Subtrajectory Clustering}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {133:1--133:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.133},
URN = {urn:nbn:de:0030-drops-235109},
doi = {10.4230/LIPIcs.ICALP.2025.133},
annote = {Keywords: Fr\'{e}chet distance, clustering, set cover}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Penghui Yao and Mingnan Zhao
Abstract
Developing explicit pseudorandom generators (PRGs) for prominent categories of Boolean functions is a key focus in computational complexity theory. In this paper, we investigate PRGs against functions of degree-d polynomial threshold functions (PTFs) over Gaussian space. Our main result is an explicit construction of a PRG with seed length poly(k, d, 1/ε)⋅log n that can fool any function of k degree-d PTFs with probability at least 1 - ε. More specifically, we show that the summation of L independent R-moment-matching Gaussian vectors ε-fools functions of k degree-d PTFs, where L = poly(k, d, 1/ε) and R = O(log kd/ε). The PRG is then obtained by applying an appropriate discretization to Gaussian vectors with bounded independence.
Cite as
Penghui Yao and Mingnan Zhao. A Pseudorandom Generator for Functions of Low-Degree Polynomial Threshold Functions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 134:1-134:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{yao_et_al:LIPIcs.ICALP.2025.134,
author = {Yao, Penghui and Zhao, Mingnan},
title = {{A Pseudorandom Generator for Functions of Low-Degree Polynomial Threshold Functions}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {134:1--134:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.134},
URN = {urn:nbn:de:0030-drops-235112},
doi = {10.4230/LIPIcs.ICALP.2025.134},
annote = {Keywords: Pseudorandom generators, polynomial threshold functions}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Daniel Ye
Abstract
We consider the independent set problem in the semi-streaming model. For any input graph G = (V, E) with n vertices, an independent set is a set of vertices with no edges between any two elements. In the semi-streaming model, G is presented as a stream of edges and any algorithm must use Õ(n) bits of memory to output a large independent set at the end of the stream.
Prior work has designed various semi-streaming algorithms for finding independent sets. Due to the hardness of finding maximum and maximal independent sets in the semi-streaming model, the focus has primarily been on finding independent sets in terms of certain parameters, such as the maximum degree Δ. In particular, there is a simple randomized algorithm that obtains independent sets of size n/(Δ+1) in expectation, which can also be achieved with high probability using more complicated algorithms. For deterministic algorithms, the best bounds are significantly weaker. The best we know is a straightforward algorithm that finds an Ω̃(n/(Δ²)) size independent set.
We show that this straightforward algorithm is nearly optimal by proving that any deterministic semi-streaming algorithm can only output an Õ(n/(Δ²)) size independent set. Our result proves a strong separation between the power of deterministic and randomized semi-streaming algorithms for the independent set problem.
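The n/(Δ+1) expectation bound mentioned above comes from a classical random-priority rule; here is a minimal offline sketch of that rule only (illustrative code of ours, not the semi-streaming algorithms analysed in the paper):
import random
from collections import defaultdict

def random_priority_independent_set(n, edges, seed=0):
    # Give every vertex an independent uniform priority and keep a vertex
    # iff it beats all of its neighbours. The output is an independent set,
    # each vertex survives with probability 1/(deg(v)+1), and so the expected
    # size is sum_v 1/(deg(v)+1) >= n/(Delta+1).
    rng = random.Random(seed)
    priority = [rng.random() for _ in range(n)]
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    return {v for v in range(n)
            if all(priority[v] < priority[u] for u in nbrs[v])}

# Toy usage: on a 4-cycle the rule keeps one or two non-adjacent vertices.
print(random_priority_independent_set(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))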
Cite as
Daniel Ye. Deterministic Independent Sets in the Semi-Streaming Model. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 135:1-135:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{ye:LIPIcs.ICALP.2025.135,
author = {Ye, Daniel},
title = {{Deterministic Independent Sets in the Semi-Streaming Model}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {135:1--135:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.135},
URN = {urn:nbn:de:0030-drops-235129},
doi = {10.4230/LIPIcs.ICALP.2025.135},
annote = {Keywords: Sublinear Algorithms, Derandomization, Semi-Streaming Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Ben Young
Abstract
The Holant theorem is a powerful tool for studying the computational complexity of counting problems. Due to the great expressiveness of the Holant framework, a converse to the Holant theorem would itself be a very powerful counting indistinguishability theorem. The most general converse does not hold, but we prove the following, still highly general, version: if any two sets of real-valued signatures are Holant-indistinguishable, then they are equivalent up to an orthogonal transformation. This resolves a partially open conjecture of Xia (2010). Consequences of this theorem include the well-known result that homomorphism counts from all graphs determine a graph up to isomorphism, the classical sufficient condition for simultaneous orthogonal similarity of sets of real matrices, and a combinatorial characterization of sets of simultaneously orthogonally decomposable (odeco) symmetric tensors.
Cite as
Ben Young. The Converse of the Real Orthogonal Holant Theorem. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 136:1-136:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{young:LIPIcs.ICALP.2025.136,
author = {Young, Ben},
title = {{The Converse of the Real Orthogonal Holant Theorem}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {136:1--136:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.136},
URN = {urn:nbn:de:0030-drops-235138},
doi = {10.4230/LIPIcs.ICALP.2025.136},
annote = {Keywords: Holant, Counting Indistinguishability, Odeco}
}
Document
Track A: Algorithms, Complexity and Games
Authors:
Junyao Zhao
Abstract
Online contention resolution scheme (OCRS) is a powerful technique for online decision making, which - in the case of matroids - given a matroid and a prior distribution of active elements, selects a subset of active elements that satisfies the matroid constraint in an online fashion. OCRS has been studied mostly for product distributions in the literature. Recently, universal OCRS, that works even for correlated distributions, has gained interest, because it naturally generalizes the classic notion, and its existence in the random-order arrival model turns out to be equivalent to the matroid secretary conjecture. However, currently very little is known about how to design universal OCRSs for any arrival model. In this work, we consider a natural and relatively flexible arrival model, where the OCRS is allowed to preselect (i.e., non-adaptively select) the arrival order of the elements, and within this model, we design simple and optimal universal OCRSs that are computationally efficient. In the course of deriving our OCRSs, we also discover an efficient reduction from universal online contention resolution to the matroid secretary problem for any arrival model, answering a question posed in [Dughmi, 2020].
Cite as
Junyao Zhao. Universal Online Contention Resolution with Preselected Order. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 137:1-137:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{zhao:LIPIcs.ICALP.2025.137,
author = {Zhao, Junyao},
title = {{Universal Online Contention Resolution with Preselected Order}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {137:1--137:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.137},
URN = {urn:nbn:de:0030-drops-235147},
doi = {10.4230/LIPIcs.ICALP.2025.137},
annote = {Keywords: Matroids, online contention resolution schemes, secretary problems}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Michal Ajdarów, James C. A. Main, Petr Novotný, and Mickael Randour
Abstract
Markov decision processes (MDPs) are a canonical model to reason about decision making within a stochastic environment. We study a fundamental class of infinite MDPs: one-counter MDPs (OC-MDPs). They extend finite MDPs via an associated counter taking natural values, thus inducing an infinite MDP over the set of configurations (current state and counter value). We consider two characteristic objectives: reaching a target state (state-reachability), and reaching a target state with counter value zero (selective termination). The synthesis problem for the latter is not known to be decidable and is connected to major open problems in number theory. Furthermore, even seemingly simple strategies (e.g., memoryless ones) in OC-MDPs might be impossible to build in practice (due to the underlying infinite configuration space): we need finite, and preferably small, representations.
To overcome these obstacles, we introduce two natural classes of concisely represented strategies based on a (possibly infinite) partition of counter values in intervals. For both classes, and both objectives, we study the verification problem (does a given strategy ensure a high enough probability for the objective?), and two synthesis problems (does there exist such a strategy?): one where the interval partition is fixed as input, and one where it is only parameterized. We develop a generic approach based on a compression of the induced infinite MDP that yields decidability in all cases, with all complexities within PSPACE.
Cite as
Michal Ajdarów, James C. A. Main, Petr Novotný, and Mickael Randour. Taming Infinity One Chunk at a Time: Concisely Represented Strategies in One-Counter MDPs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 138:1-138:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{ajdarow_et_al:LIPIcs.ICALP.2025.138,
author = {Ajdar\'{o}w, Michal and Main, James C. A. and Novotn\'{y}, Petr and Randour, Mickael},
title = {{Taming Infinity One Chunk at a Time: Concisely Represented Strategies in One-Counter MDPs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {138:1--138:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.138},
URN = {urn:nbn:de:0030-drops-235157},
doi = {10.4230/LIPIcs.ICALP.2025.138},
annote = {Keywords: one-counter Markov decision processes, randomised strategies, termination, reachability}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Djamel Eddine Amir and Benjamin Hellouin de Menibus
Abstract
Motivated by the notion of strong computable type for sets in computable analysis, we define the notion of strong computable type for G-shifts, where G is a finitely generated group with decidable word problem. A G-shift has strong computable type if one can compute its language from the complement of its language. We obtain a characterization of G-shifts with strong computable type in terms of a notion of minimality with respect to properties with a bounded computational complexity. We provide a self-contained direct proof, and also explain how this characterization can be obtained from an existing similar characterization for sets by Amir and Hoyrup, and discuss its connexions with results by Jeandel on closure spaces. We apply this characterization to several classes of shifts that are minimal with respect to specific properties. This provides a unifying approach that not only generalizes many existing results but also has the potential to yield new findings effortlessly. In contrast to the case of sets, we prove that strong computable type for G-shifts is preserved under products. We conclude by discussing some generalizations and future directions.
Cite as
Djamel Eddine Amir and Benjamin Hellouin de Menibus. Minimality and Computability of Languages of G-Shifts. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 139:1-139:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{amir_et_al:LIPIcs.ICALP.2025.139,
author = {Amir, Djamel Eddine and Hellouin de Menibus, Benjamin},
title = {{Minimality and Computability of Languages of G-Shifts}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {139:1--139:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.139},
URN = {urn:nbn:de:0030-drops-235161},
doi = {10.4230/LIPIcs.ICALP.2025.139},
annote = {Keywords: shifts, subshifts, minimal shifts, computable language, computability, strong computable type, descriptive complexity}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Alexey Barsukov, Michael Pinsker, and Jakub Rydval
Abstract
Guarded Monotone Strict NP (GMSNP) extends Monotone Monadic Strict NP (MMSNP) by guarded existentially quantified predicates of arbitrary arities. We prove that the containment problem for GMSNP is decidable, thereby settling an open question of Bienvenu, ten Cate, Lutz, and Wolter, later restated by Bourhis and Lutz. Our proof also comes with a 2NEXPTIME upper bound on the complexity of the problem, which matches the lower bound for containment of MMSNP due to Bourhis and Lutz. In order to obtain these results, we significantly improve the state of knowledge of the model-theoretic properties of GMSNP. Bodirsky, Knäuer, and Starke previously showed that every GMSNP sentence defines a finite union of CSPs of ω-categorical structures. We show that these structures can be used to obtain a reduction from the containment problem for GMSNP to the much simpler problem of testing the existence of a certain map called recolouring, albeit in a more general setting than GMSNP; a careful analysis of this yields said upper bound. As a secondary contribution, we refine the construction of Bodirsky, Knäuer, and Starke by adding a restricted form of homogeneity to the properties of these structures, making the logic amenable to future complexity classifications for query evaluation using techniques developed for infinite-domain CSPs.
Cite as
Alexey Barsukov, Michael Pinsker, and Jakub Rydval. Containment for Guarded Monotone Strict NP. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 140:1-140:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{barsukov_et_al:LIPIcs.ICALP.2025.140,
author = {Barsukov, Alexey and Pinsker, Michael and Rydval, Jakub},
title = {{Containment for Guarded Monotone Strict NP}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {140:1--140:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.140},
URN = {urn:nbn:de:0030-drops-235176},
doi = {10.4230/LIPIcs.ICALP.2025.140},
annote = {Keywords: guarded, monotone, SNP, forbidden patterns, query containment, recolouring, decidability, computational complexity, \omega-categoricity, constraint satisfaction, homogeneity, amalgamation property, Ramsey property, canonical function}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Gabriel Bathie, Nathanaël Fijalkow, and Corto Mascle
Abstract
Property testing is concerned with the design of algorithms making a sublinear number of queries to distinguish whether the input satisfies a given property or is far from having this property. A seminal paper of Alon, Krivelevich, Newman, and Szegedy in 2001 introduced property testing of formal languages: the goal is to determine whether an input word belongs to a given language, or is far from any word in that language. They constructed the first property testing algorithm for the class of all regular languages. This opened a line of work with improved complexity results and applications to streaming algorithms. In this work, we show a trichotomy result: the class of regular languages can be divided into three classes, each associated with an optimal query complexity. Our analysis yields effective characterizations for all three classes using so-called minimal blocking sequences, reasoning directly and combinatorially on automata.
Cite as
Gabriel Bathie, Nathanaël Fijalkow, and Corto Mascle. The Trichotomy of Regular Property Testing. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 141:1-141:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bathie_et_al:LIPIcs.ICALP.2025.141,
author = {Bathie, Gabriel and Fijalkow, Nathana\"{e}l and Mascle, Corto},
title = {{The Trichotomy of Regular Property Testing}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {141:1--141:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.141},
URN = {urn:nbn:de:0030-drops-235186},
doi = {10.4230/LIPIcs.ICALP.2025.141},
annote = {Keywords: property testing, regular languages}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Nikolay Bazhenov, Dariusz Kalociński, and Michał Wrocławski
Abstract
We investigate whether every computable member of a given class of structures admits a fully primitive recursive (also known as punctual) or fully P-TIME copy. A class with this property is referred to as punctually robust or P-TIME robust, respectively. We present both positive and negative results for structures corresponding to well-known representations of trees, such as binary trees, ordered trees, sequential (or prefix) trees, and partially ordered (poset) trees. A corollary of one of our results on trees is that semilattices and lattices are not punctually robust. In the main result of the paper, we demonstrate that, unlike Boolean algebras, modal algebras - that is, Boolean algebras with modality - are not punctually robust. The question of whether distributive lattices are punctually robust remains open. The paper contributes to a decades-old program on effective and feasible algebra, which has recently gained momentum due to rapid developments in punctual structure theory and its connections to online presentations of structures.
Cite as
Nikolay Bazhenov, Dariusz Kalociński, and Michał Wrocławski. Online and Feasible Presentability: From Trees to Modal Algebras. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 142:1-142:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bazhenov_et_al:LIPIcs.ICALP.2025.142,
author = {Bazhenov, Nikolay and Kaloci\'{n}ski, Dariusz and Wroc{\l}awski, Micha{\l}},
title = {{Online and Feasible Presentability: From Trees to Modal Algebras}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {142:1--142:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.142},
URN = {urn:nbn:de:0030-drops-235190},
doi = {10.4230/LIPIcs.ICALP.2025.142},
annote = {Keywords: Algebraic structure, computable structure, fully primitive recursive structure, punctual structure, polynomial-time computable structure, punctual robustness, tree, semilattice, lattice, Boolean algebra, modal algebra}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Valérie Berthé, Herman Goulet-Ouellet, and Dominique Perrin
Abstract
We study density of rational languages under shift invariant probability measures on spaces of two-sided infinite words, which generalizes the classical notion of density studied in formal languages and automata theory. The density of a language is defined as the limit in average (if it exists) of the probability that a word of a given length belongs to the language. We establish the existence of densities for all rational languages under all shift invariant measures. We also give explicit formulas under certain conditions, in particular when the language is aperiodic. Our approach combines tools and ideas from semigroup theory and ergodic theory.
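A minimal worked example of this notion of density (for orientation only, not from the paper): over the alphabet A = {a,b} with the uniform Bernoulli measure, which is shift invariant, consider the rational language L = (AA)* of words of even length. The probability that a word of length n belongs to L is 1 for even n and 0 for odd n, so the plain limit does not exist, whereas the limit in average, and hence the density of L, equals 1/2.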
Cite as
Valérie Berthé, Herman Goulet-Ouellet, and Dominique Perrin. Density of Rational Languages Under Shift Invariant Measures. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 143:1-143:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{berthe_et_al:LIPIcs.ICALP.2025.143,
author = {Berth\'{e}, Val\'{e}rie and Goulet-Ouellet, Herman and Perrin, Dominique},
title = {{Density of Rational Languages Under Shift Invariant Measures}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {143:1--143:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.143},
URN = {urn:nbn:de:0030-drops-235203},
doi = {10.4230/LIPIcs.ICALP.2025.143},
annote = {Keywords: Automata theory, Symbolic dynamics, Semigroup theory, Ergodic theory}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Markus Bläser, Julian Dörfler, Maciej Liśkiewicz, and Benito van der Zander
Abstract
We study the complexity of satisfiability problems in probabilistic and causal reasoning. Given random variables X₁, X₂,… over finite domains, the basic terms are probabilities of propositional formulas over atomic events X_i = x_i, such as ℙ(X₁ = x₁) or ℙ(X₁ = x₁ ∨ X₂ = x₂). The basic terms can be combined using addition (yielding linear terms) or multiplication (polynomial terms). The probabilistic satisfiability problem asks whether a joint probability distribution satisfies a Boolean combination of (in)equalities over such terms. Fagin et al. [Fagin et al., 1990] showed that for basic and linear terms, this problem is NP-complete, making it no harder than Boolean satisfiability, while Mossé et al. [Mossé et al., 2022] proved that for polynomial terms, it is complete for the existential theory of the reals.
Pearl’s Causal Hierarchy (PCH) extends the probabilistic setting with interventional and counterfactual reasoning, enriching the expressiveness of the languages. However, Mossé et al. [Mossé et al., 2022] found that the complexity of satisfiability remains unchanged. Van der Zander et al. [van der Zander et al., 2023] showed that introducing a marginalization operator to languages induces a significant increase in complexity.
We extend this line of work along two new dimensions by constraining the models. First, we fix the graph structure of the underlying structural causal model, motivated by settings like Pearl’s do-calculus, and give a nearly complete landscape across different arithmetics and PCH levels. Second, we study small models. While earlier work showed that satisfiable instances admit polynomial-size models, this is no longer guaranteed with compact marginalization. We characterize the complexities of satisfiability under small-model constraints across different settings.
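A toy instance of the probabilistic satisfiability problem with linear terms (an illustration, not from the paper): the conjunction ℙ(X₁ = 1) + ℙ(X₂ = 1) ≥ 3/2 ∧ ℙ(X₁ = 1 ∨ X₂ = 1) ≤ 1/2 is unsatisfiable, since every joint distribution satisfies ℙ(X₁ = 1 ∨ X₂ = 1) ≥ max(ℙ(X₁ = 1), ℙ(X₂ = 1)) ≥ 3/4. The problems above ask, for such Boolean combinations of (in)equalities, whether some joint distribution (here possibly constrained in its causal structure or size) satisfies them.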
Cite as
Markus Bläser, Julian Dörfler, Maciej Liśkiewicz, and Benito van der Zander. Probabilistic and Causal Satisfiability: Constraining the Model. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 144:1-144:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{blaser_et_al:LIPIcs.ICALP.2025.144,
author = {Bl\"{a}ser, Markus and D\"{o}rfler, Julian and Li\'{s}kiewicz, Maciej and van der Zander, Benito},
title = {{Probabilistic and Causal Satisfiability: Constraining the Model}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {144:1--144:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.144},
URN = {urn:nbn:de:0030-drops-235214},
doi = {10.4230/LIPIcs.ICALP.2025.144},
annote = {Keywords: Existential theory of the real numbers, Computational complexity, Probabilistic logic, Structural Causal Models}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Manuel Bodirsky, Georg Loho, and Mateusz Skomra
Abstract
We present a polynomial-time reduction from max-average constraints to the feasibility problem for semidefinite programs. This shows that Condon’s simple stochastic games, stochastic mean payoff games, and in particular mean payoff games and parity games can all be reduced to semidefinite programming.
Cite as
Manuel Bodirsky, Georg Loho, and Mateusz Skomra. Reducing Stochastic Games to Semidefinite Programming. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 145:1-145:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bodirsky_et_al:LIPIcs.ICALP.2025.145,
author = {Bodirsky, Manuel and Loho, Georg and Skomra, Mateusz},
title = {{Reducing Stochastic Games to Semidefinite Programming}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {145:1--145:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.145},
URN = {urn:nbn:de:0030-drops-235224},
doi = {10.4230/LIPIcs.ICALP.2025.145},
annote = {Keywords: Mean-payoff games, stochastic games, semidefinite programming, max-average constraints, max-atom problem}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
León Bohn, Yong Li, Christof Löding, and Sven Schewe
Abstract
Families of deterministic finite automata (FDFA) represent regular ω-languages through their ultimately periodic words (UP-words). An FDFA accepts pairs of words, where the first component corresponds to a prefix of the UP-word, and the second component represents a period of that UP-word. An FDFA is termed saturated if, for each UP-word, either all or none of the pairs representing that UP-word are accepted. We demonstrate that determining whether a given FDFA is saturated can be accomplished in polynomial time, thus improving the known PSPACE upper bound by an exponential. We illustrate the application of this result by presenting the first polynomial learning algorithms for representations of the class of all regular ω-languages. Furthermore, we establish that deciding a weaker property, referred to as almost saturation, is PSPACE-complete. Since FDFAs do not necessarily define regular ω-languages when they are not saturated, we also address the regularity problem and show that it is PSPACE-complete. Finally, we explore a variant of FDFAs called families of deterministic weak automata (FDWA), where the semantics for the periodic part of the UP-word considers ω-words instead of finite words. We demonstrate that saturation for FDWAs is also decidable in polynomial time, that FDWAs always define regular ω-languages, and we compare the succinctness of these different models.
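For orientation, a small illustration that is not taken from the paper: the UP-word (ab)^ω is represented, among others, by the pairs (ε, ab), (a, ba), (ab, ab), and (ab, abab), since a pair (u, v) stands for the ω-word uv^ω. Saturation requires the FDFA to accept either all of these pairs or none of them.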
Cite as
León Bohn, Yong Li, Christof Löding, and Sven Schewe. Saturation Problems for Families of Automata. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 146:1-146:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bohn_et_al:LIPIcs.ICALP.2025.146,
author = {Bohn, Le\'{o}n and Li, Yong and L\"{o}ding, Christof and Schewe, Sven},
title = {{Saturation Problems for Families of Automata}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {146:1--146:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.146},
URN = {urn:nbn:de:0030-drops-235239},
doi = {10.4230/LIPIcs.ICALP.2025.146},
annote = {Keywords: Families of Automata, automata learning, FDFAs}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Édouard Bonnet, Samuel Braunfeld, Ioannis Eleftheriadis, Colin Geniet, Nikolas Mählmann, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk
Abstract
A graph class 𝒞 is monadically dependent if one cannot interpret all graphs in colored graphs from 𝒞 using a fixed first-order interpretation. We prove that monadically dependent classes can be exactly characterized by the following property, which we call flip-separability: for every r ∈ ℕ, ε > 0, and every graph G ∈ 𝒞 equipped with a weight function on vertices, one can apply a bounded (in terms of 𝒞,r,ε) number of flips (complementations of the adjacency relation on a subset of vertices) to G so that in the resulting graph, every radius-r ball contains at most an ε-fraction of the total weight. On the way to this result, we introduce a robust toolbox for working with various notions of local separations in monadically dependent classes.
Cite as
Édouard Bonnet, Samuel Braunfeld, Ioannis Eleftheriadis, Colin Geniet, Nikolas Mählmann, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk. Separability Properties of Monadically Dependent Graph Classes. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 147:1-147:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{bonnet_et_al:LIPIcs.ICALP.2025.147,
author = {Bonnet, \'{E}douard and Braunfeld, Samuel and Eleftheriadis, Ioannis and Geniet, Colin and M\"{a}hlmann, Nikolas and Pilipczuk, Micha{\l} and Przybyszewski, Wojciech and Toru\'{n}czyk, Szymon},
title = {{Separability Properties of Monadically Dependent Graph Classes}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {147:1--147:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.147},
URN = {urn:nbn:de:0030-drops-235246},
doi = {10.4230/LIPIcs.ICALP.2025.147},
annote = {Keywords: Structural graph theory, Monadic dependence}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Jin-Yi Cai and Jin Soo Ihm
Abstract
Holant problems are a general framework to study the computational complexity of counting problems. It is a more expressive framework than counting constraint satisfaction problems (CSP), which are in turn more expressive than counting graph homomorphisms (GH). In this paper, we prove the first complexity dichotomy for Holant^*₃(ℱ), where ℱ is an arbitrary set of symmetric, real-valued constraint functions on domain size 3. We give an explicit tractability criterion and prove that if ℱ satisfies this criterion, then Holant^*₃(ℱ) is polynomial-time computable, and otherwise it is #P-hard, with no intermediate cases. We show that the geometry of the tensor decomposition of the constraint functions plays a central role in the formulation as well as the structural internal logic of the dichotomy.
Cite as
Jin-Yi Cai and Jin Soo Ihm. Holant* Dichotomy on Domain Size 3: A Geometric Perspective. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 148:1-148:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{cai_et_al:LIPIcs.ICALP.2025.148,
author = {Cai, Jin-Yi and Ihm, Jin Soo},
title = {{Holant* Dichotomy on Domain Size 3: A Geometric Perspective}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {148:1--148:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.148},
URN = {urn:nbn:de:0030-drops-235254},
doi = {10.4230/LIPIcs.ICALP.2025.148},
annote = {Keywords: Holant problem, Complexity dichotomy, Higher domain}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Antonio Casares and Pierre Ohlmann
Abstract
In the context of 2-player zero-sum infinite duration games played on (potentially infinite) graphs, the memory of an objective is the smallest integer k such that in any game won by Eve, she has a strategy with ≤ k states of memory. For ω-regular objectives, checking whether the memory equals a given number k was not known to be decidable. In this work, we focus on objectives in BC(Σ⁰₂), i.e. recognised by a potentially infinite deterministic parity automaton. We provide a class of automata that recognise objectives with memory ≤ k, leading to the following results:
- for ω-regular objectives, the memory can be computed in NP;
- given two objectives W₁ and W₂ in BC(Σ⁰₂) and assuming W₁ is prefix-independent, the memory of W₁ ∪ W₂ is at most the product of the memories of W₁ and W₂.
Our results also apply to chromatic memory, the variant where strategies can update their memory state only depending on which colour is seen.
Cite as
Antonio Casares and Pierre Ohlmann. The Memory of ω-Regular and BC(Σ⁰₂) Objectives. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 149:1-149:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{casares_et_al:LIPIcs.ICALP.2025.149,
author = {Casares, Antonio and Ohlmann, Pierre},
title = {{The Memory of \omega-Regular and BC(\Sigma⁰₂) Objectives}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {149:1--149:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.149},
URN = {urn:nbn:de:0030-drops-235267},
doi = {10.4230/LIPIcs.ICALP.2025.149},
annote = {Keywords: Infinite duration games, Strategy complexity, Omega-regular}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Krishnendu Chatterjee, Laurent Doyen, Jean-François Raskin, and Ocan Sankur
Abstract
We consider multiple-environment Markov decision processes (MEMDP), which consist of a finite set of MDPs over the same state space, representing different scenarios of transition structure and probability. The value of a strategy is the probability to satisfy the objective, here a parity objective, in the worst-case scenario, and the value of an MEMDP is the supremum of the values achievable by a strategy.
We show that deciding whether the value is 1 is a PSPACE-complete problem, and even in P when the number of environments is fixed, along with new insights into the almost-sure winning problem, which is to decide if there exists a strategy with value 1. Pure strategies are sufficient for these problems, whereas randomization is necessary in general when the value is smaller than 1. We present an algorithm to approximate the value, running in double exponential space. Our results are in contrast to the related model of partially observable MDPs, where all these problems are known to be undecidable.
Cite as
Krishnendu Chatterjee, Laurent Doyen, Jean-François Raskin, and Ocan Sankur. The Value Problem for Multiple-Environment MDPs with Parity Objective. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 150:1-150:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chatterjee_et_al:LIPIcs.ICALP.2025.150,
author = {Chatterjee, Krishnendu and Doyen, Laurent and Raskin, Jean-Fran\c{c}ois and Sankur, Ocan},
title = {{The Value Problem for Multiple-Environment MDPs with Parity Objective}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {150:1--150:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.150},
URN = {urn:nbn:de:0030-drops-235272},
doi = {10.4230/LIPIcs.ICALP.2025.150},
annote = {Keywords: Markov decision processes, imperfect information, randomized strategies, limit-sure winning}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Miroslav Chodil and Antonín Kučera
Abstract
The Probabilistic Computational Tree Logic (PCTL) is the main specification formalism for discrete probabilistic systems modeled by Markov chains. Despite serious research attempts, the decidability of PCTL satisfiability and validity problems remained unresolved for 30 years. We show that both problems are highly undecidable, i.e., beyond the arithmetical hierarchy. Consequently, there is no sound and complete deductive system for PCTL.
Cite as
Miroslav Chodil and Antonín Kučera. The Satisfiability and Validity Problems for Probabilistic Computational Tree Logic Are Highly Undecidable. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 151:1-151:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{chodil_et_al:LIPIcs.ICALP.2025.151,
author = {Chodil, Miroslav and Ku\v{c}era, Anton{\'\i}n},
title = {{The Satisfiability and Validity Problems for Probabilistic Computational Tree Logic Are Highly Undecidable}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {151:1--151:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.151},
URN = {urn:nbn:de:0030-drops-235281},
doi = {10.4230/LIPIcs.ICALP.2025.151},
annote = {Keywords: Satisfiability, temporal logics, probabilistic CTL}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Thomas Colcombet, Amina Doumane, and Denis Kuperberg
Abstract
We establish that, over finite transition systems, the bisimulation-invariant fragment of MSO is expressively equivalent to the modal μ-calculus, a question that had remained open for several decades.
The proof goes by translating the question to an algebraic framework, and showing that the languages of regular trees that are recognised by finitary tree algebras whose sorts zero and one are finite are the regular ones. This corresponds, for trees, to a weak form of the key translation of Wilke algebras to omega-semigroups over infinite words, and was also a missing piece in the algebraic theory of regular languages of infinite trees for twenty years.
Cite as
Thomas Colcombet, Amina Doumane, and Denis Kuperberg. Tree Algebras and Bisimulation-Invariant MSO on Finite Graphs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 152:1-152:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{colcombet_et_al:LIPIcs.ICALP.2025.152,
author = {Colcombet, Thomas and Doumane, Amina and Kuperberg, Denis},
title = {{Tree Algebras and Bisimulation-Invariant MSO on Finite Graphs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {152:1--152:16},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.152},
URN = {urn:nbn:de:0030-drops-235294},
doi = {10.4230/LIPIcs.ICALP.2025.152},
annote = {Keywords: MSO, mu-calculus, finite graphs, bisimulation, tree algebra}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Wojciech Czerwiński, Ismaël Jecker, Sławomir Lasota, and Łukasz Orlikowski
Abstract
The reachability problem in 3-dimensional vector addition systems with states (3-VASS) is known to be PSpace-hard, and to belong to Tower. We significantly narrow down the complexity gap by proving the problem to be solvable in doubly-exponential space. The result follows from a new upper bound on the length of the shortest path: if there is a path between two configurations of a 3-VASS then there is also one of at most triply-exponential length. We show it by introducing a novel technique of approximating the reachability sets of 2-VASS by small semi-linear sets.
Cite as
Wojciech Czerwiński, Ismaël Jecker, Sławomir Lasota, and Łukasz Orlikowski. Reachability in 3-VASS Is Elementary. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 153:1-153:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{czerwinski_et_al:LIPIcs.ICALP.2025.153,
author = {Czerwi\'{n}ski, Wojciech and Jecker, Isma\"{e}l and Lasota, S{\l}awomir and Orlikowski, {\L}ukasz},
title = {{Reachability in 3-VASS Is Elementary}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {153:1--153:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.153},
URN = {urn:nbn:de:0030-drops-235307},
doi = {10.4230/LIPIcs.ICALP.2025.153},
annote = {Keywords: vector addition systems, Petri nets, reachability problem, dimension three, doubly exponential space, length of shortest path}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Ruiwen Dong
Abstract
We show that Submonoid Membership is decidable in n-dimensional lamplighter groups (ℤ/pℤ) ≀ ℤⁿ for any prime p and integer n. More generally, we show decidability of Submonoid Membership in semidirect products of the form 𝒴 ⋊ ℤⁿ, where 𝒴 is any finitely presented module over the Laurent polynomial ring 𝔽_p[X₁^{±}, …, X_n^{±}]. Combined with a result of Shafrir (2024), this gives the first example of a group G and a finite index subgroup G̃ ≤ G, such that Submonoid Membership is decidable in G̃ but undecidable in G.
To obtain our decidability result, we reduce Submonoid Membership in 𝒴 ⋊ ℤⁿ to solving S-unit equations over 𝔽_p[X₁^{±}, …, X_n^{±}]-modules. We show that the solution set of such equations is effectively p-automatic, extending a result of Adamczewski and Bell (2012). As an intermediate result, we also obtain that the solution set of the Knapsack Problem in 𝒴 ⋊ ℤⁿ is effectively p-automatic.
Cite as
Ruiwen Dong. Submonoid Membership in n-Dimensional Lamplighter Groups and S-Unit Equations. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 154:1-154:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{dong:LIPIcs.ICALP.2025.154,
author = {Dong, Ruiwen},
title = {{Submonoid Membership in n-Dimensional Lamplighter Groups and S-Unit Equations}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {154:1--154:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.154},
URN = {urn:nbn:de:0030-drops-235316},
doi = {10.4230/LIPIcs.ICALP.2025.154},
annote = {Keywords: Submonoid Membership, lamplighter groups, S-unit equations, p-automatic sets, Knapsack in groups}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Emmanuel Filiot, Ismaël Jecker, Khushraj Madnani, and Saina Sunny
Abstract
Finite (word) state transducers extend finite state automata by defining a binary relation over finite words, called a rational relation. If the rational relation is the graph of a function, this function is said to be rational. The class of sequential functions is a strict subclass of rational functions, defined as the functions recognised by input-deterministic finite state transducers. The membership problems between these classes are known to be decidable. We consider approximate versions of these problems and show they are decidable as well. This includes the approximate functionality problem, which asks whether a given rational relation (given by a transducer) is close to a rational function, and the approximate determinisation problem, which asks whether a given rational function is close to a sequential function. We prove decidability results for several classical distances, including the Hamming and Levenshtein edit distances. Finally, we investigate the approximate uniformisation problem, which asks, given a rational relation R, whether there exists a sequential function that is close to some function uniformising R. As with its exact version, we prove that this problem is undecidable.
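A standard example separating the two classes (textbook material, not taken from the paper): the function that replaces every letter of its input by the last letter of the input, e.g. abab ↦ bbbb and abba ↦ aaaa, is rational, since a nondeterministic transducer can guess the last letter in advance, output it at every position, and verify the guess when the input ends; it is not sequential, because an input-deterministic transducer would have to produce output before knowing the last letter.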
Cite as
Emmanuel Filiot, Ismaël Jecker, Khushraj Madnani, and Saina Sunny. Approximate Problems for Finite Transducers. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 155:1-155:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{filiot_et_al:LIPIcs.ICALP.2025.155,
author = {Filiot, Emmanuel and Jecker, Isma\"{e}l and Madnani, Khushraj and Sunny, Saina},
title = {{Approximate Problems for Finite Transducers}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {155:1--155:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.155},
URN = {urn:nbn:de:0030-drops-235329},
doi = {10.4230/LIPIcs.ICALP.2025.155},
annote = {Keywords: Finite state transducers, Edit distance, Determinisation, Functionality}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Lukas Fleischer, Florian Stober, Alexander Thumm, and Armin Weiß
Abstract
The membership problem for an algebraic structure asks whether a given element is contained in some substructure, which is usually given by generators. In this work we study the membership problem, as well as the conjugacy problem, for finite inverse semigroups. The closely related membership problem for finite semigroups has been shown to be PSPACE-complete in the transformation model by Kozen (1977) and NL-complete in the Cayley table model by Jones, Lien, and Laaser (1976). More recently, both the membership and the conjugacy problem for finite inverse semigroups were shown to be PSPACE-complete in the partial bijection model by Jack (2023).
Here we present a more detailed analysis of the complexity of the membership and conjugacy problems parametrized by varieties of finite inverse semigroups. We establish dichotomy theorems for the partial bijection model and for the Cayley table model. In the partial bijection model these problems are in NC (resp. NP for conjugacy) for strict inverse semigroups and PSPACE-complete otherwise. In the Cayley table model we obtain general 𝖫-algorithms as well as NPOLYLOGTIME upper bounds for Clifford semigroups and 𝖫-completeness otherwise.
Furthermore, by applying our findings, we show the following: the intersection non-emptiness problem for inverse automata is PSPACE-complete even for automata with only two states; the subpower membership problem is in NC for every strict inverse semigroup and PSPACE-complete otherwise; the minimum generating set and the equation satisfiability problems are in NP for varieties of finite strict inverse semigroups and PSPACE-complete otherwise.
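As a minimal sketch of the Cayley table model mentioned above (an editorial illustration, not one of the paper's algorithms), the following brute-force closure decides membership in the subsemigroup generated by a given set; the paper's point is the much finer complexity landscape (𝖫, NC, NPOLYLOGTIME, PSPACE) depending on the variety.

def generated_subsemigroup(table, generators):
    """Elements of the subsemigroup generated by `generators`.

    table[x][y] is the product x*y in a finite semigroup with elements
    0..n-1, given by its Cayley table. Plain closure computation.
    """
    closure = set(generators)
    frontier = list(closure)
    while frontier:
        x = frontier.pop()
        for y in list(closure):
            for z in (table[x][y], table[y][x]):
                if z not in closure:
                    closure.add(z)
                    frontier.append(z)
    return closure

def is_member(table, generators, target):
    return target in generated_subsemigroup(table, generators)

# Toy check on (Z/4Z, +): 3 is generated by {1} but not by {2}.
Z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
assert is_member(Z4, {1}, 3) and not is_member(Z4, {2}, 3)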
Cite as
Lukas Fleischer, Florian Stober, Alexander Thumm, and Armin Weiß. Membership and Conjugacy in Inverse Semigroups. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 156:1-156:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{fleischer_et_al:LIPIcs.ICALP.2025.156,
author = {Fleischer, Lukas and Stober, Florian and Thumm, Alexander and Wei{\ss}, Armin},
title = {{Membership and Conjugacy in Inverse Semigroups}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {156:1--156:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.156},
URN = {urn:nbn:de:0030-drops-235330},
doi = {10.4230/LIPIcs.ICALP.2025.156},
annote = {Keywords: inverse semigroups, membership, conjugacy, finite automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Christina Gehnen, Dominique Unruh, and Joost-Pieter Katoen
Abstract
Conditioning is a key feature in probabilistic programming, enabling the modeling of the influence of data (also known as observations) on the probability distribution described by such programs. Determining the posterior distribution is also known as Bayesian inference. This paper equips a quantum while-language with conditioning, defines its denotational and operational semantics over infinite-dimensional Hilbert spaces, and shows their equivalence. We provide sufficient conditions for the existence of weakest (liberal) precondition-transformers and derive inductive characterizations of these transformers. It is shown how w(l)p-transformers can be used to assess the effect of Bayesian inference on (possibly diverging) quantum programs.
Cite as
Christina Gehnen, Dominique Unruh, and Joost-Pieter Katoen. Bayesian Inference in Quantum Programs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 157:1-157:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{gehnen_et_al:LIPIcs.ICALP.2025.157,
author = {Gehnen, Christina and Unruh, Dominique and Katoen, Joost-Pieter},
title = {{Bayesian Inference in Quantum Programs}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {157:1--157:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.157},
URN = {urn:nbn:de:0030-drops-235345},
doi = {10.4230/LIPIcs.ICALP.2025.157},
annote = {Keywords: Quantum Program Logics, Weakest Preconditions, Bayesian Inference, Program Semantics}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Santiago Guzmán-Pro and Barnaby Martin
Abstract
In recent years, much attention has been placed on the complexity of graph homomorphism problems when the input is restricted to ℙ_k-free and ℙ_k-subgraph-free graphs. We consider the directed version of this research line, by addressing the following question: is it true that digraph homomorphism problems CSP(H) have a P versus NP-complete dichotomy when the input is restricted to ℙ→_k-free (resp. ℙ→_k-subgraph-free) digraphs? Our main contribution in this direction shows that if CSP(H) is NP-complete, then there is a positive integer N such that CSP(H) remains NP-hard even for ℙ→_N-subgraph-free digraphs. Moreover, CSP(H) becomes polynomial-time solvable for ℙ→_{N-1}-subgraph-free acyclic digraphs. We then verify the questions above for digraphs on three vertices and a family of smooth tournaments. We prove these results by establishing a connection between F-(subgraph)-free algorithmics and constraint satisfaction theory. On the way, we introduce restricted CSPs, i.e., problems of the form CSP(H) restricted to yes-instances of CSP(H') - these were called restricted homomorphism problems by Hell and Nešetřil. Another main result of this paper presents a P versus NP-complete dichotomy for these problems. Moreover, this complexity dichotomy is accompanied by an algebraic dichotomy in the spirit of the finite domain CSP dichotomy.
Cite as
Santiago Guzmán-Pro and Barnaby Martin. Restricted CSPs and F-Free Digraph Algorithmics. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 158:1-158:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{guzmanpro_et_al:LIPIcs.ICALP.2025.158,
author = {Guzm\'{a}n-Pro, Santiago and Martin, Barnaby},
title = {{Restricted CSPs and F-Free Digraph Algorithmics}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {158:1--158:21},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.158},
URN = {urn:nbn:de:0030-drops-235352},
doi = {10.4230/LIPIcs.ICALP.2025.158},
annote = {Keywords: Digraph homomorphisms, constraint satisfaction problems, subgraph-free algorithmics}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Fugen Hagihara and Akitoshi Kawamura
Abstract
A real-valued sequence f = {f(n)}_{n ∈ ℕ} is said to be second-order holonomic if it satisfies a linear recurrence f(n+2) = P(n) f(n+1) + Q(n) f(n) for all sufficiently large n, where P, Q ∈ ℝ(x) are rational functions. We study the ultimate sign of such a sequence, i.e., the repeated pattern that the signs of f(n) follow for sufficiently large n. For each P, Q we determine all ultimate signs that f can have, and show how they partition the space of initial values of f. This completes the prior work by Neumann, Ouaknine and Worrell, who have settled some restricted cases. As a corollary, it follows that when P, Q have rational coefficients, f either has an ultimate sign of length 1, 2, 3, 4, 6, 8 or 12, or never falls into a repeated sign pattern. We also give a partial algorithm that finds the ultimate sign of f (or tells that there is none) in almost all cases.
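A minimal sketch of how such a sign pattern can emerge (our toy example, not the paper's partial algorithm): with P = 0 and Q = -1 the recurrence becomes f(n+2) = -f(n), and from positive initial values the signs settle into the pattern ++-- of length 4, one of the lengths listed in the corollary above.

from fractions import Fraction as F

def holonomic_signs(P, Q, f0, f1, terms=24):
    """Signs of f(0), ..., f(terms-1) for f(n+2) = P(n) f(n+1) + Q(n) f(n),
    computed exactly with rationals; P and Q are callables standing in for
    the rational functions of the recurrence."""
    f = [F(f0), F(f1)]
    for n in range(terms - 2):
        f.append(P(n) * f[n + 1] + Q(n) * f[n])
    return ''.join('+' if x > 0 else '-' if x < 0 else '0' for x in f)

print(holonomic_signs(lambda n: F(0), lambda n: F(-1), 1, 1))
# -> ++--++--++--++--++--++--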
Cite as
Fugen Hagihara and Akitoshi Kawamura. The Ultimate Signs of Second-Order Holonomic Sequences. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 159:1-159:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{hagihara_et_al:LIPIcs.ICALP.2025.159,
author = {Hagihara, Fugen and Kawamura, Akitoshi},
title = {{The Ultimate Signs of Second-Order Holonomic Sequences}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {159:1--159:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.159},
URN = {urn:nbn:de:0030-drops-235363},
doi = {10.4230/LIPIcs.ICALP.2025.159},
annote = {Keywords: Holonomic sequences, ultimate signs, Skolem Problem, Positivity Problem}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Olivier Idir and Karoliina Lehtinen
Abstract
The parity index problem of tree automata asks, given a regular tree language L and a set of priorities J, is L J-feasible, that is, recognised by a nondeterministic parity automaton with priorities J? This is a long-standing open problem, of which only a few sub-cases and variations are known to be decidable. In a significant but technically difficult step, Colcombet and Löding reduced the problem to the uniform universality of distance-parity automata. In this article, we revisit the index problem using tools from the parity game literature.
We add some counters to Lehtinen’s register game, originally used to solve parity games in quasipolynomial time, and use this novel game to characterise J-feasibility. This provides an alternative proof of Colcombet and Löding’s reduction.
We then provide a second characterisation, based on the notion of attractor decompositions and the complexity of their structure, as measured by a parameterised version of their Strahler number, which we call n-Strahler number. Finally, we rephrase this result using the notion of universal tree extended to automata: a guidable automaton recognises a [1,2j]-feasible language if and only if it admits a universal tree with n-Strahler number j, for some n. In particular, a language recognised by a guidable automaton 𝒜 is Büchi-feasible if and only if there is a uniform bound n ∈ ℕ such that all trees in the language admit an accepting run with an attractor decomposition of width bounded by n. Equivalently, the language is Büchi-feasible if and only if 𝒜 admits a finite universal tree.
While we do not solve the decidability of the index problem, our work makes the state-of-the-art more accessible and brings to light the deep relationships between the J-feasibility of a language and attractor decompositions, universal trees and Lehtinen’s register game.
Cite as
Olivier Idir and Karoliina Lehtinen. Using Games and Universal Trees to Characterise the Nondeterministic Index of Tree Languages. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 160:1-160:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{idir_et_al:LIPIcs.ICALP.2025.160,
author = {Idir, Olivier and Lehtinen, Karoliina},
title = {{Using Games and Universal Trees to Characterise the Nondeterministic Index of Tree Languages}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {160:1--160:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.160},
URN = {urn:nbn:de:0030-drops-235377},
doi = {10.4230/LIPIcs.ICALP.2025.160},
annote = {Keywords: Tree automata, parity automata, Mostowski index, Strahler number, attractor decomposition, universal trees}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Paweł M. Idziak, Piotr Kawałek, and Jacek Krzaczkowski
Abstract
Nonuniform deterministic finite automata (NUDFA) over monoids were invented by Barrington in [Barrington, 1985] to study boundaries of nonuniform constant-memory computation. Later, results on these automata helped to identify interesting classes of groups for which the equation satisfiability problem (PolSat) is solvable in (probabilistic) polynomial time [Mikael Goldmann and Alexander Russell, 2002; Idziak et al., 2022]. Based on these results, we present a full characterization of the groups for which the identity checking problem (called PolEqv) has a probabilistic polynomial-time algorithm. We also go beyond groups and propose how to generalise the notion of NUDFA to arbitrary finite algebraic structures. We study satisfiability of these automata in this more general setting. As a consequence, we present a full description of the finite algebras from congruence modular varieties for which testing circuit equivalence (CEqv) can be solved by a probabilistic polynomial-time procedure. In our proofs we use two computational complexity assumptions: the randomized Exponential Time Hypothesis and the Constant Degree Hypothesis.
Cite as
Paweł M. Idziak, Piotr Kawałek, and Jacek Krzaczkowski. Nonuniform Deterministic Finite Automata over Finite Algebraic Structures. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 161:1-161:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{idziak_et_al:LIPIcs.ICALP.2025.161,
author = {Idziak, Pawe{\l} M. and Kawa{\l}ek, Piotr and Krzaczkowski, Jacek},
title = {{Nonuniform Deterministic Finite Automata over Finite Algebraic Structures}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {161:1--161:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.161},
URN = {urn:nbn:de:0030-drops-235386},
doi = {10.4230/LIPIcs.ICALP.2025.161},
annote = {Keywords: program satisfiability, circuit equivalence, identity checking}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Simon Iosti, Denis Kuperberg, and Quentin Moreau
Abstract
We study the positive logic FO^+ on finite words, and its fragments, pursuing and refining the work initiated in [Denis Kuperberg, 2023]. First, we transpose well-known logic equivalences into positive first-order logic: FO^+ is equivalent to LTL^+, and its two-variable fragment FO^{2+} with (resp. without) successor available is equivalent to UTL^+ with (resp. without) the "next" operator X available. This shows that despite previous negative results, the class of FO^+-definable languages exhibits some form of robustness. We then exhibit an example of an FO-definable monotone language on one predicate that is not FO^+-definable, refining the example from [Denis Kuperberg, 2023], which used 3 predicates. Moreover, we show that such a counter-example cannot be FO²-definable. Finally, we provide a new example distinguishing the positive and monotone versions of FO² without quantifier alternation. This does not rely on a variant of the previously known counter-example, and witnesses a new phenomenon.
Cite as
Simon Iosti, Denis Kuperberg, and Quentin Moreau. Positive and Monotone Fragments of FO and LTL. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 162:1-162:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{iosti_et_al:LIPIcs.ICALP.2025.162,
author = {Iosti, Simon and Kuperberg, Denis and Moreau, Quentin},
title = {{Positive and Monotone Fragments of FO and LTL}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {162:1--162:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.162},
URN = {urn:nbn:de:0030-drops-235398},
doi = {10.4230/LIPIcs.ICALP.2025.162},
annote = {Keywords: Positive logic, LTL, separation, first-order, monotone}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Toghrul Karimov
Abstract
A discrete-time linear dynamical system (LDS) is given by an update matrix M ∈ ℝ^{d× d}, and has the trajectories ⟨s, Ms, M²s, …⟩ for s ∈ ℝ^d. Reachability-type decision problems of linear dynamical systems, most notably the Skolem Problem, lie at the forefront of decidability: typically, sound and complete algorithms are known only in low dimensions, and these rely on sophisticated tools from number theory and Diophantine approximation. Recently, however, o-minimality has emerged as a counterpoint to these number-theoretic tools that allows us to decide certain modifications of the classical problems of LDS without any dimension restrictions. In this paper, we first introduce the Decomposition Method, a framework that captures all applications of o-minimality to decision problems of LDS that are currently known to us. We then use the Decomposition Method to show decidability of the Robust Safety Problem (restricted to bounded initial sets) in arbitrary dimension: given a matrix M, a bounded semialgebraic set S of initial points, and a semialgebraic set T of unsafe points, it is decidable whether there exists ε > 0 such that all orbits that begin in the ε-ball around S avoid T.
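The sketch below only illustrates the objects involved (an editorial toy, not a decision procedure; it simulates a finite orbit prefix, whereas the Robust Safety Problem quantifies over all ε-perturbed orbits and all time steps):

import math

def step(M, x):
    """One step of the LDS: x -> Mx, for a 2x2 matrix M."""
    return (M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1])

def orbit_prefix(M, s, steps):
    """First `steps` points of the trajectory <s, Ms, M^2 s, ...>."""
    points, x = [], s
    for _ in range(steps):
        points.append(x)
        x = step(M, x)
    return points

# Toy contracting rotation; take the unsafe set T to be the half-plane {x >= 2}.
c, s_ = 0.9 * math.cos(0.5), 0.9 * math.sin(0.5)
M = ((c, -s_), (s_, c))
hits = [p for p in orbit_prefix(M, (1.0, 0.0), 100) if p[0] >= 2.0]
print("unsafe points among the first 100:", len(hits))  # 0: the orbit spirals inwards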
Cite as
Toghrul Karimov. Verification of Linear Dynamical Systems via O-Minimality of the Real Numbers. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 163:1-163:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{karimov:LIPIcs.ICALP.2025.163,
author = {Karimov, Toghrul},
title = {{Verification of Linear Dynamical Systems via O-Minimality of the Real Numbers}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {163:1--163:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.163},
URN = {urn:nbn:de:0030-drops-235401},
doi = {10.4230/LIPIcs.ICALP.2025.163},
annote = {Keywords: Linear dynamical systems, reachability problems, o-minimality}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Karoliina Lehtinen and Nathan Lhote
Abstract
Over words, nondeterministic Büchi automata and alternating weak automata are as expressive as parity automata with any number of priorities. Over trees, the Büchi acceptance condition is strictly weaker and the more priorities we allow, the more languages parity automata can recognise. We say that on words, the parity-index hierarchies of nondeterministic and alternating automata collapse to the Büchi and weak level, respectively, while both are infinite over trees.
We ask: when is Büchi enough? That is, on which classes of trees are nondeterministic Büchi automata as expressive as parity automata, and similarly for alternating weak automata? We work in the setting of unranked, unordered trees, in which there is no order among the children of nodes.
We find that for nondeterministic and alternating automata, the parity-index hierarchy collapses to the Büchi level and weak level, respectively, for any class of trees of finitely bounded Cantor-Bendixson rank, a topological measure of tree complexity. Over trees of countable Cantor-Bendixson rank (a.k.a. thin trees), the parity-index hierarchy of both nondeterministic and alternating automata collapses to the level [1,2,3], as was already known for ordered trees. These results are in some sense optimal: on the class of trees of finite but unbounded Cantor-Bendixson rank, two priorities do not suffice to recognise all parity-recognisable languages, even for alternating automata.
Cite as
Karoliina Lehtinen and Nathan Lhote. A Collapse of the Parity Index Hierarchy of Tree Automata, Based on Cantor-Bendixson Ranks. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 164:1-164:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{lehtinen_et_al:LIPIcs.ICALP.2025.164,
author = {Lehtinen, Karoliina and Lhote, Nathan},
title = {{A Collapse of the Parity Index Hierarchy of Tree Automata, Based on Cantor-Bendixson Ranks}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {164:1--164:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.164},
URN = {urn:nbn:de:0030-drops-235418},
doi = {10.4230/LIPIcs.ICALP.2025.164},
annote = {Keywords: Parity tree automata, alternating automata, Cantor-Bendixson rank}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Fabian Lenke, Stefan Milius, Henning Urbat, and Thorsten Wißmann
Abstract
Regular languages - the languages accepted by deterministic finite automata - are known to be precisely the languages recognized by finite monoids. This characterization is the origin of algebraic language theory. In this paper, we generalize the correspondence between automata and monoids to automata with generic computational effects given by a monad, providing the foundations of an effectful algebraic language theory. We show that, under suitable conditions on the monad, a language is computable by an effectful automaton precisely when it is recognizable by (1) an effectful monoid morphism into an effect-free finite monoid, and (2) a monoid morphism into a monad-monoid bialgebra whose carrier is a finitely generated algebra for the monad, the former mode of recognition being conceptually completely new. Our prime application is a novel algebraic approach to languages computed by probabilistic finite automata. Additionally, we derive new algebraic characterizations for nondeterministic probabilistic finite automata and for weighted finite automata over unrestricted semirings, generalizing previous results on weighted algebraic recognition over commutative rings.
Cite as
Fabian Lenke, Stefan Milius, Henning Urbat, and Thorsten Wißmann. Algebraic Language Theory with Effects. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 165:1-165:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{lenke_et_al:LIPIcs.ICALP.2025.165,
author = {Lenke, Fabian and Milius, Stefan and Urbat, Henning and Wi{\ss}mann, Thorsten},
title = {{Algebraic Language Theory with Effects}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {165:1--165:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.165},
URN = {urn:nbn:de:0030-drops-235423},
doi = {10.4230/LIPIcs.ICALP.2025.165},
annote = {Keywords: Automaton, Monoid, Monad, Effect, Algebraic language theory}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Moritz Lichter and Benedikt Pago
Abstract
We show that various recent algorithms for finite-domain constraint satisfaction problems (CSP), which are based on solving their affine integer relaxations, do not solve all tractable and not even all Maltsev CSPs. This rules them out as candidates for a universal polynomial-time CSP algorithm. The algorithms are ℤ-affine k-consistency, BLP+AIP, BA^{k}, and CLAP. We thereby answer a question by Brakensiek, Guruswami, Wrochna, and Živný [Joshua Brakensiek et al., 2020] whether a constant level of BA^{k} solves all tractable CSPs in the negative: indeed, not even a sublinear level k suffices. We also refute a conjecture by Dalmau and Opršal [Víctor Dalmau and Jakub Opršal, 2024] (LICS 2024) that every CSP is either solved by ℤ-affine k-consistency or admits a Datalog reduction from 3-colorability. For the cohomological k-consistency algorithm, which is also based on affine relaxations, we show that it correctly solves our counterexample but fails on an NP-complete template.
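For context (a standard construction, not this paper's contribution), the basic affine integer relaxation of a CSP instance can be sketched in one common formulation: introduce an integer variable λ_{v,a} for each CSP variable v and domain value a, and an integer variable μ_{C,t} for each constraint C with relation R_C and each tuple t ∈ R_C, and ask for an integer solution of
\begin{align*}
  \sum_{a} \lambda_{v,a} &= 1 && \text{for every variable } v,\\
  \sum_{t \in R_C} \mu_{C,t} &= 1 && \text{for every constraint } C,\\
  \sum_{t \in R_C,\; t(v) = a} \mu_{C,t} &= \lambda_{v,a} && \text{for every } C,\ \text{every } v \text{ in the scope of } C,\ \text{and every value } a.
\end{align*}
The algorithms named in the abstract (ℤ-affine k-consistency, BLP+AIP, BA^{k}, CLAP) strengthen this basic relaxation in various ways; the paper's counterexamples show that none of these strengthenings solves all tractable CSPs.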
Cite as
Moritz Lichter and Benedikt Pago. Limitations of Affine Integer Relaxations for Solving Constraint Satisfaction Problems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 166:1-166:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{lichter_et_al:LIPIcs.ICALP.2025.166,
author = {Lichter, Moritz and Pago, Benedikt},
title = {{Limitations of Affine Integer Relaxations for Solving Constraint Satisfaction Problems}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {166:1--166:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.166},
URN = {urn:nbn:de:0030-drops-235431},
doi = {10.4230/LIPIcs.ICALP.2025.166},
annote = {Keywords: constraint satisfaction, affine relaxation, promise CSPs, \mathbb{Z}-affine k-consistency, cohomological k-consistency algorithm, Tseitin, graph isomorphism}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Nikolas Mählmann
Abstract
The graph parameter shrub-depth is a dense analog of tree-depth. We characterize classes of bounded shrub-depth by forbidden induced subgraphs. The obstructions are well-controlled flips of large half-graphs and of disjoint unions of many long paths. Applying this characterization, we show that on every hereditary class of unbounded shrub-depth, MSO is more expressive than FO. This confirms a conjecture of [Gajarský and Hliněný; LMCS 2015], who proved that on classes of bounded shrub-depth, FO and MSO have the same expressive power. Combined, the two results fully characterize the hereditary classes on which FO and MSO coincide, answering an open question by [Elberfeld, Grohe, and Tantau; LICS 2012].
Our work is inspired by the notion of stability from model theory. A graph class 𝒞 is MSO-stable, if no MSO-formula can define arbitrarily long linear orders in graphs from 𝒞. We show that a hereditary graph class is MSO-stable if and only if it has bounded shrub-depth. As a key ingredient, we prove that every hereditary class of unbounded shrub-depth FO-interprets the class of all paths. This improves upon a result of [Ossona de Mendez, Pilipczuk, and Siebertz; Eur. J. Comb. 2025] who showed the same statement for FO-transductions instead of FO-interpretations.
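The half-graph obstruction mentioned above has a simple explicit description; the following minimal sketch (illustration only; the graph representation and vertex names are ad hoc) constructs the half-graph H_n, with vertices a_1,…,a_n, b_1,…,b_n and an edge between a_i and b_j exactly when i ≤ j.

# Minimal sketch of the half-graph H_n (illustration of the obstruction only,
# not of the paper's flip construction): vertices a_1..a_n and b_1..b_n,
# with an edge between a_i and b_j exactly when i <= j.

def half_graph(n: int) -> dict:
    """Return H_n as an adjacency dictionary."""
    adj = {f"a{i}": set() for i in range(1, n + 1)}
    adj.update({f"b{j}": set() for j in range(1, n + 1)})
    for i in range(1, n + 1):
        for j in range(i, n + 1):          # i <= j
            adj[f"a{i}"].add(f"b{j}")
            adj[f"b{j}"].add(f"a{i}")
    return adj

# H_2 has edges a1-b1, a1-b2, a2-b2.
assert sorted(half_graph(2)["a1"]) == ["b1", "b2"]

Large half-graphs are the canonical witnesses of instability, since their edge relation defines a long linear order on the a_i's, which is why they appear among the obstructions to bounded shrub-depth.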
Cite as
Nikolas Mählmann. Forbidden Induced Subgraphs for Bounded Shrub-Depth and the Expressive Power of MSO. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 167:1-167:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{mahlmann:LIPIcs.ICALP.2025.167,
author = {M\"{a}hlmann, Nikolas},
title = {{Forbidden Induced Subgraphs for Bounded Shrub-Depth and the Expressive Power of MSO}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {167:1--167:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.167},
URN = {urn:nbn:de:0030-drops-235444},
doi = {10.4230/LIPIcs.ICALP.2025.167},
annote = {Keywords: Shrub-Depth, Forbidden Induced Subgraphs, MSO, Stability Theory}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Olga Martynova and Alexander Okhotin
Abstract
It is proved that the family of tree languages recognized by nondeterministic tree-walking automata is not closed under complementation, solving a problem raised by Bojańczyk and Colcombet (https://doi.org/10.1137/050645427, SIAM J. Comp. 38 (2008) 658-701). In addition, it is shown that nondeterministic tree-walking automata are stronger than unambiguous tree-walking automata.
Cite as
Olga Martynova and Alexander Okhotin. Nondeterministic Tree-Walking Automata Are Not Closed Under Complementation. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 168:1-168:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{martynova_et_al:LIPIcs.ICALP.2025.168,
author = {Martynova, Olga and Okhotin, Alexander},
title = {{Nondeterministic Tree-Walking Automata Are Not Closed Under Complementation}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {168:1--168:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.168},
URN = {urn:nbn:de:0030-drops-235459},
doi = {10.4230/LIPIcs.ICALP.2025.168},
annote = {Keywords: Finite automata, tree-walking automata, complementation}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Tamio-Vesa Nakajima, Zephyr Verwimp, Marcin Wrochna, and Stanislav Živný
Abstract
Using the algebraic approach to promise constraint satisfaction problems, we establish complexity classifications of three natural variants of hypergraph colourings: standard nonmonochromatic colourings, conflict-free colourings, and linearly-ordered colourings.
Firstly, we show that finding an 𝓁-colouring of a k-colourable r-uniform hypergraph is NP-hard for all constant 2 ≤ k ≤ 𝓁 and r ≥ 3. This provides a shorter proof of a celebrated result by Dinur et al. [FOCS'02/Combinatorica'05].
Secondly, we show that finding an 𝓁-conflict-free colouring of an r-uniform hypergraph that admits a k-conflict-free colouring is NP-hard for all constant 2 ≤ k ≤ 𝓁 and r ≥ 4, except for r = 4 and k = 2 (and any 𝓁); this case is solvable in polynomial time. The case of r = 3 is the standard nonmonochromatic colouring, and the case of r = 2 is the notoriously difficult open problem of approximate graph colouring.
Thirdly, we show that finding an 𝓁-linearly-ordered colouring of an r-uniform hypergraph that admits a k-linearly-ordered colouring is NP-hard for all constant 3 ≤ k ≤ 𝓁 and r ≥ 4, thus improving on the results of Nakajima and Živný [ICALP'22/ACM TocT'23].
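To fix the three notions being compared, here is a small brute-force sketch (standard definitions; the function names and the toy instance are illustrative, not from the paper): a colouring is nonmonochromatic if no hyperedge is single-coloured, conflict-free if every hyperedge contains a colour used exactly once in it, and linearly-ordered (unique-maximum) if in every hyperedge the maximum colour occurs exactly once.

# Brute-force checkers for the three hypergraph colouring notions
# (standard definitions; colours are integers, hyperedges are tuples of vertices).
from collections import Counter

def nonmonochromatic(edges, colouring):
    """No hyperedge uses a single colour."""
    return all(len({colouring[v] for v in e}) >= 2 for e in edges)

def conflict_free(edges, colouring):
    """Every hyperedge contains some colour occurring exactly once in it."""
    return all(1 in Counter(colouring[v] for v in e).values() for e in edges)

def linearly_ordered(edges, colouring):
    """In every hyperedge, the maximum colour occurs exactly once."""
    def ok(e):
        counts = Counter(colouring[v] for v in e)
        return counts[max(counts)] == 1
    return all(ok(e) for e in edges)

# Tiny 3-uniform example on vertices 0..3.
edges = [(0, 1, 2), (1, 2, 3)]
col = {0: 1, 1: 2, 2: 1, 3: 3}
assert nonmonochromatic(edges, col)
assert conflict_free(edges, col)      # colour 2 (resp. 3) is unique in each edge
assert linearly_ordered(edges, col)   # the maximum colour occurs once per edge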
Cite as
Tamio-Vesa Nakajima, Zephyr Verwimp, Marcin Wrochna, and Stanislav Živný. Complexity of Approximate Conflict-Free, Linearly-Ordered, and Nonmonochromatic Hypergraph Colourings. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 169:1-169:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{nakajima_et_al:LIPIcs.ICALP.2025.169,
author = {Nakajima, Tamio-Vesa and Verwimp, Zephyr and Wrochna, Marcin and \v{Z}ivn\'{y}, Stanislav},
title = {{Complexity of Approximate Conflict-Free, Linearly-Ordered, and Nonmonochromatic Hypergraph Colourings}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {169:1--169:10},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.169},
URN = {urn:nbn:de:0030-drops-235460},
doi = {10.4230/LIPIcs.ICALP.2025.169},
annote = {Keywords: hypergraph colourings, conflict-free colourings, unique-maximum colourings, linearly-ordered colourings}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Tikhon Pshenitsyn
Abstract
The Lambek calculus is a substructural logic known to be closely related to formal language theory: on the one hand, it is used for generating formal languages by means of categorial grammars and, on the other hand, it has formal language semantics, with respect to which it is sound and complete. This paper studies a similar relation between first-order intuitionistic linear logic ILL1 (along with its multiplicative fragment MILL1) on the one hand and hypergraph grammar theory on the other. In the first part, we introduce a novel concept of hypergraph first-order logic categorial grammar, which is a generalisation of string MILL1 grammars introduced in Richard Moot’s works. We prove that hypergraph ILL1 grammars generate all recursively enumerable hypergraph languages and that hypergraph MILL1 grammars are as powerful as linear-time hypergraph transformation systems. In addition, we show that the class of languages generated by string MILL1 grammars is closed under intersection and that it includes a non-semilinear language as well as an NP-complete one. This shows how much more powerful string MILL1 grammars are compared with Lambek categorial grammars. In the second part, we develop hypergraph language models for MILL1. In such models, formulae of the logic are interpreted as hypergraph languages and multiplicative conjunction is interpreted using parallel composition, which is one of the operations of HR-algebras introduced by Courcelle. We prove completeness of the universal-implicative fragment of MILL1 with respect to these models and thus present a new kind of semantics for a fragment of first-order linear logic.
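As a reminder of the categorial-grammar mechanism being generalised, here is the textbook Lambek-grammar pattern (not an example from the paper): a lexicon assigns a formula to each word, and a string is generated exactly when the sequent formed by the assigned formulas derives the distinguished type s, e.g.
\[
  \text{John} \mapsto np, \qquad \text{sleeps} \mapsto np \backslash s,
  \qquad\text{and}\qquad np,\ np \backslash s \ \vdash\ s
\]
is derivable, so the string "John sleeps" is generated. String MILL1 grammars follow the same recipe with first-order MILL1 formulas in place of Lambek formulas, and the hypergraph grammars introduced in this paper lift the recipe from strings to hypergraphs.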
Cite as
Tikhon Pshenitsyn. First-Order Intuitionistic Linear Logic and Hypergraph Languages. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 170:1-170:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{pshenitsyn:LIPIcs.ICALP.2025.170,
author = {Pshenitsyn, Tikhon},
title = {{First-Order Intuitionistic Linear Logic and Hypergraph Languages}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {170:1--170:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.170},
URN = {urn:nbn:de:0030-drops-235473},
doi = {10.4230/LIPIcs.ICALP.2025.170},
annote = {Keywords: linear logic, categorial grammar, MILL1 grammar, first-order logic, hypergraph language, graph transformation, language semantics, HR-algebra}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Philippe Schnoebelen, J. Veron, and Isa Vialard
Abstract
We express the piecewise complexity of words using tools and concepts from tropical algebra. This allows us to define a notion of piecewise signature of a word, of size log(n) · m^{O(1)}, where m is the alphabet size and n is the length of the word. The piecewise signature of a concatenation can be computed from the signatures of its components, allowing a polynomial-time algorithm for computing the piecewise complexity of SLP-compressed words.
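Piecewise complexity is defined in terms of scattered subwords: Simon's congruence ∼_k identifies two words exactly when they have the same subsequences of length at most k. The following brute-force sketch of this underlying notion is for illustration only (names are ad hoc); it enumerates subsequences explicitly, which is precisely the blow-up the paper's tropical signatures are designed to avoid, in particular for SLP-compressed inputs.

# Brute-force illustration of Simon's congruence ~_k: two words are
# ~_k-equivalent iff they have the same scattered subwords of length <= k.
# (Exponential in k; the compositional signatures in the paper avoid this.)
from itertools import combinations

def subwords_up_to(word: str, k: int) -> frozenset:
    """All subsequences of length at most k, as a set of strings."""
    result = {""}
    for length in range(1, k + 1):
        for positions in combinations(range(len(word)), length):
            result.add("".join(word[p] for p in positions))
    return frozenset(result)

def simon_equivalent(u: str, v: str, k: int) -> bool:
    return subwords_up_to(u, k) == subwords_up_to(v, k)

assert simon_equivalent("abab", "abb", 1)      # same letters occur
assert not simon_equivalent("abab", "abb", 2)  # "ba" is a subword of abab only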
Cite as
Philippe Schnoebelen, J. Veron, and Isa Vialard. A Tropical Approach to the Compositional Piecewise Complexity of Words and Compressed Words. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 171:1-171:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{schnoebelen_et_al:LIPIcs.ICALP.2025.171,
author = {Schnoebelen, Philippe and Veron, J. and Vialard, Isa},
title = {{A Tropical Approach to the Compositional Piecewise Complexity of Words and Compressed Words}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {171:1--171:19},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.171},
URN = {urn:nbn:de:0030-drops-235481},
doi = {10.4230/LIPIcs.ICALP.2025.171},
annote = {Keywords: Tropical semiring, Subwords and subsequences, piecewise complexity, SLP-compressed words}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Authors:
Spencer Van Koevering, Wojciech Różowski, and Alexandra Silva
Abstract
We propose Weighted Guarded Kleene Algebra with Tests (wGKAT), an uninterpreted weighted programming language equipped with branching, conditionals, and loops. We provide an operational semantics for wGKAT using a variant of weighted automata and introduce a sound and complete axiomatization. We also provide a polynomial time decision procedure for bisimulation equivalence.
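As a rough companion to the decision procedure mentioned above, the classical partition-refinement scheme for weighted bisimilarity on a finite weighted transition system can be sketched as follows. This is a generic textbook algorithm, not the paper's procedure: wGKAT automata additionally involve tests and a specific output structure, which the sketch ignores, and all names and the toy example are illustrative.

# Naive partition refinement for weighted bisimilarity on a finite
# weighted transition system (textbook scheme; not the paper's automaton model).
# trans[state][action][target] = weight; out[state] = observable output weight.

def weighted_bisimulation(states, actions, trans, out):
    """Return the coarsest partition in which related states have equal output
    and, for every action and every block, equal total weight into that block."""
    # Start from the partition induced by outputs.
    blocks = {}
    for s in states:
        blocks.setdefault(out[s], []).append(s)
    partition = [frozenset(b) for b in blocks.values()]

    def signature(s, partition):
        # For each action and block: total weight from s into that block.
        return tuple(
            (a, blk, sum(trans.get(s, {}).get(a, {}).get(t, 0) for t in blk))
            for a in actions for blk in partition
        )

    changed = True
    while changed:
        changed = False
        new_partition = []
        for blk in partition:
            groups = {}
            for s in blk:
                groups.setdefault(signature(s, partition), []).append(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(frozenset(g) for g in groups.values())
        partition = new_partition
    return partition

# Two states looping on action 'a' with weight 1 and equal output are identified.
states = ["p", "q"]
trans = {"p": {"a": {"p": 1}}, "q": {"a": {"q": 1}}}
out = {"p": 0, "q": 0}
assert len(weighted_bisimulation(states, ["a"], trans, out)) == 1

Polynomial running time of this naive scheme comes from the fact that each round either splits some block or terminates, so there are at most as many rounds as states, each of polynomial cost.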
Cite as
Spencer Van Koevering, Wojciech Różowski, and Alexandra Silva. Weighted GKAT: Completeness and Complexity. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 172:1-172:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
@InProceedings{vankoevering_et_al:LIPIcs.ICALP.2025.172,
author = {Van Koevering, Spencer and R\'{o}\.{z}owski, Wojciech and Silva, Alexandra},
title = {{Weighted GKAT: Completeness and Complexity}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {172:1--172:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.172},
URN = {urn:nbn:de:0030-drops-235492},
doi = {10.4230/LIPIcs.ICALP.2025.172},
annote = {Keywords: Weighted Programming, Automata, Axiomatization, Decision Procedure}
}