LIPIcs, Volume 351

33rd Annual European Symposium on Algorithms (ESA 2025)




Event

ESA 2025, September 15-17, 2025, Warsaw, Poland

Editors

Anne Benoit
  • ENS Lyon, France
Haim Kaplan
  • Tel Aviv University, Israel
Sebastian Wild
  • University of Marburg, Germany
  • University of Liverpool, UK
Grzegorz Herman
  • Jagiellonian University, Kraków, Poland

Publication Details

  • Published: 2025-10-01
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-395-9


Documents

Complete Volume
LIPIcs, Volume 351, ESA 2025, Complete Volume

Authors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman


Abstract
LIPIcs, Volume 351, ESA 2025, Complete Volume

Cite as

33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 1-1880, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Proceedings{benoit_et_al:LIPIcs.ESA.2025,
  title =	{{LIPIcs, Volume 351, ESA 2025, Complete Volume}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{1--1880},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025},
  URN =		{urn:nbn:de:0030-drops-248072},
  doi =		{10.4230/LIPIcs.ESA.2025},
  annote =	{Keywords: LIPIcs, Volume 351, ESA 2025, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 0:i-0:xxvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{benoit_et_al:LIPIcs.ESA.2025.0,
  author =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{0:i--0:xxvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.0},
  URN =		{urn:nbn:de:0030-drops-248069},
  doi =		{10.4230/LIPIcs.ESA.2025.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Invited Talk
Graph Decompositions and Length-Constrained Expanders (Invited Talk)

Authors: Bernhard Haeupler


Abstract
Graph decompositions are powerful algorithmic tools with wide applications to graph structures (e.g., spanners, hopsets, sparsifiers, oblivious routings) and network optimization algorithms, including parallel, distributed and dynamic algorithms for flow and distance problems. Classical graph decompositions include:
  • low-diameter decomposition, which captures 𝓁_1-quantities like lengths and costs, and
  • expander decomposition, which captures 𝓁_∞-quantities like flows and congestion.
This keynote starts with a brief survey of these classical decompositions, then presents length-constrained expanders and length-constrained expander decompositions - a recent and technically rich generalization that simultaneously controls length and congestion (𝓁_1 & 𝓁_∞). Length-constrained expander decompositions significantly broaden the range of applications for graph decompositions, and this talk will discuss several examples and ways to leverage their power.

Cite as

Bernhard Haeupler. Graph Decompositions and Length-Constrained Expanders (Invited Talk). In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{haeupler:LIPIcs.ESA.2025.1,
  author =	{Haeupler, Bernhard},
  title =	{{Graph Decompositions and Length-Constrained Expanders}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{1:1--1:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.1},
  URN =		{urn:nbn:de:0030-drops-244699},
  doi =		{10.4230/LIPIcs.ESA.2025.1},
  annote =	{Keywords: Length-Constrained Expanders, Graph Decomposition, Network Optimization Algorithms}
}
Invited Talk
Securing Dynamic Data: A Primer on Differentially Private Data Structures (Invited Talk)

Authors: Monika Henzinger and Roodabeh Safavi


Abstract
We give an introduction to differential privacy in the dynamic setting, known as the continual observation setting.

Cite as

Monika Henzinger and Roodabeh Safavi. Securing Dynamic Data: A Primer on Differentially Private Data Structures (Invited Talk). In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{henzinger_et_al:LIPIcs.ESA.2025.2,
  author =	{Henzinger, Monika and Safavi, Roodabeh},
  title =	{{Securing Dynamic Data: A Primer on Differentially Private Data Structures}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{2:1--2:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.2},
  URN =		{urn:nbn:de:0030-drops-244702},
  doi =		{10.4230/LIPIcs.ESA.2025.2},
  annote =	{Keywords: Differential privacy, continual observation}
}
Generalized Graph Packing Problems Parameterized by Treewidth

Authors: Barış Can Esmer and Dániel Marx


Abstract
H-Packing is the problem of finding a maximum number of vertex-disjoint copies of H in a given graph G. H-Partition is the special case of finding a set of vertex-disjoint copies that cover each vertex of G exactly once. Our goal is to study these problems and some generalizations on bounded-treewidth graphs. The case of H being a triangle is well understood: given a tree decomposition of G having treewidth tw, the K₃-Packing problem can be solved in time 2^tw⋅ n^O(1), while Lokshtanov et al. [ACM Transactions on Algorithms 2018] showed, under the Strong Exponential-Time Hypothesis (SETH), that there is no (2-ε)^tw⋅ n^O(1) algorithm for any ε > 0, even for K₃-Partition. Similar results can be obtained for any other clique K_d for d ≥ 3. We provide generalizations in two directions:
  • We consider a generalization of the problem where every vertex can be used at most c times for some c ≥ 1. When H is any clique K_d with d ≥ 3, we give upper and lower bounds showing that the optimal running time increases to (c+1)^tw⋅ n^O(1). We consider two variants depending on whether a copy of H can be used multiple times in the packing.
  • If H is not a clique, then the dependence of the running time on treewidth need not even be single exponential. Specifically, we show that if H is any fixed graph in which not every 2-connected component is a clique, then there is no 2^o(tw log tw)⋅ n^O(1) algorithm for H-Partition, assuming the Exponential-Time Hypothesis (ETH).

Cite as

Barış Can Esmer and Dániel Marx. Generalized Graph Packing Problems Parameterized by Treewidth. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{canesmer_et_al:LIPIcs.ESA.2025.3,
  author =	{Can Esmer, Bar{\i}\c{s} and Marx, D\'{a}niel},
  title =	{{Generalized Graph Packing Problems Parameterized by Treewidth}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.3},
  URN =		{urn:nbn:de:0030-drops-244713},
  doi =		{10.4230/LIPIcs.ESA.2025.3},
  annote =	{Keywords: Graph Packing, Graph Partitioning, Parameterized Complexity, Treewidth, Pathwidth, pw-SETH, Single-Exponential Lower Bound, Slightly Superexponential Lower Bound}
}
Optimal Quantum Algorithm for Estimating Fidelity to a Pure State

Authors: Wang Fang and Qisheng Wang


Abstract
We present an optimal quantum algorithm for fidelity estimation between two quantum states when one of them is pure. In particular, the (square root) fidelity of a mixed state to a pure state can be estimated to within additive error ε by using Θ(1/ε) queries to their state-preparation circuits, achieving a quadratic speedup over the folklore O(1/ε²). Our approach is technically simple; moreover, it can estimate the quantity √{tr(ρσ²)}, which is not common in the literature. To the best of our knowledge, this is the first query-optimal approach to fidelity estimation involving mixed states.
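
For reference, the target quantity has a simple closed form: for a pure state |ψ⟩ and a mixed state ρ, the square-root fidelity is √(⟨ψ|ρ|ψ⟩). The sketch below only fixes this quantity classically, with full access to ρ; the paper's contribution is a quantum algorithm estimating it from Θ(1/ε) queries to state-preparation circuits. All names are illustrative.

import numpy as np

# Square-root fidelity F(rho, psi) = sqrt(<psi| rho |psi>) of a mixed
# state rho to a pure state |psi>, computed classically for reference.
def sqrt_fidelity_to_pure(rho: np.ndarray, psi: np.ndarray) -> float:
    psi = psi / np.linalg.norm(psi)  # normalize the pure state
    return float(np.sqrt(np.real(psi.conj() @ rho @ psi)))

# Example: rho is a 50/50 mixture of |0> and |+>, psi = |0>.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)
print(sqrt_fidelity_to_pure(rho, ket0))  # sqrt(0.5 + 0.25) ≈ 0.866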

Cite as

Wang Fang and Qisheng Wang. Optimal Quantum Algorithm for Estimating Fidelity to a Pure State. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{fang_et_al:LIPIcs.ESA.2025.4,
  author =	{Fang, Wang and Wang, Qisheng},
  title =	{{Optimal Quantum Algorithm for Estimating Fidelity to a Pure State}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{4:1--4:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.4},
  URN =		{urn:nbn:de:0030-drops-244727},
  doi =		{10.4230/LIPIcs.ESA.2025.4},
  annote =	{Keywords: Quantum computing, fidelity estimation, quantum algorithms, quantum query complexity}
}
External-Memory Priority Queues with Optimal Insertions

Authors: Gerth Stølting Brodal, Michael T. Goodrich, John Iacono, Jared Lo, Ulrich Meyer, Victor Pagan, Nodari Sitchinava, and Rolf Svenning


Abstract
We present an external-memory priority queue structure supporting Insert and DeleteMin with amortized 𝒪(1) and 𝒪(lg N) comparisons, respectively, and amortized 𝒪(1/B) and 𝒪((1/B) log_{M/B}(N/B)) I/Os, respectively. Here, M is the size of the internal memory, B is the block size of I/Os between internal and external memory, and N is the number of elements in the priority queue just before an operation is performed. Previous external-memory priority queues required amortized 𝒪(lg N) comparisons and 𝒪((1/B) log_{M/B}(N/B)) I/Os for both Insert and DeleteMin. The construction requires the minimal assumption M ≥ 2B.
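
To get a feel for these bounds, here is a small worked computation with assumed, purely hypothetical machine parameters: with B = 10³ elements per block and M = 10⁶, the logarithm is taken to base M/B = 10³, which keeps the DeleteMin I/O cost tiny even for N = 10⁹.

import math

# Hypothetical parameters: block size B, internal memory M, queue size N.
B, M, N = 10**3, 10**6, 10**9
insert_ios = 1 / B                       # O(1/B) amortized I/Os per Insert
delete_ios = math.log(N / B, M / B) / B  # O((1/B) log_{M/B}(N/B)) per DeleteMin
print(insert_ios, delete_ios)            # 0.001 and 0.002: both far below one I/O per operation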

Cite as

Gerth Stølting Brodal, Michael T. Goodrich, John Iacono, Jared Lo, Ulrich Meyer, Victor Pagan, Nodari Sitchinava, and Rolf Svenning. External-Memory Priority Queues with Optimal Insertions. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{brodal_et_al:LIPIcs.ESA.2025.5,
  author =	{Brodal, Gerth St{\o}lting and Goodrich, Michael T. and Iacono, John and Lo, Jared and Meyer, Ulrich and Pagan, Victor and Sitchinava, Nodari and Svenning, Rolf},
  title =	{{External-Memory Priority Queues with Optimal Insertions}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.5},
  URN =		{urn:nbn:de:0030-drops-244734},
  doi =		{10.4230/LIPIcs.ESA.2025.5},
  annote =	{Keywords: priority queues, external memory, cache aware, amortized complexity}
}
On the Complexity of Knapsack Under Explorable Uncertainty: Hardness and Algorithms

Authors: Jens Schlöter


Abstract
In the knapsack problem under explorable uncertainty, we are given a knapsack instance with uncertain item profits. Instead of having access to the precise profits, we are only given uncertainty intervals that are guaranteed to contain the corresponding profits. The actual item profit can be obtained via a query. The goal of the problem is to adaptively query item profits until the revealed information suffices to compute an optimal (or approximate) solution to the underlying knapsack instance. Since queries are costly, the objective is to minimize the number of queries. In the offline variant of this problem, we assume knowledge of the precise profits and the task is to compute a query set of minimum cardinality that a third party without access to the profits could use to identify an optimal (or approximate) knapsack solution. We show that this offline variant is complete for the second level of the polynomial hierarchy, i.e., Σ₂^p-complete, and cannot be approximated within a non-trivial factor unless Σ₂^p = Δ₂^p. Motivated by these strong hardness results, we consider a "resource-augmented" variant of the problem where the requirements on the query set computed by an algorithm are less strict than the requirements on the optimal solution we compare against. More precisely, a query set computed by the algorithm must reveal sufficient information to identify an approximate knapsack solution, while the optimal query set we compare against has to reveal sufficient information to identify an optimal solution. We show that this resource-augmented setting allows for interesting non-trivial algorithmic results.

Cite as

Jens Schlöter. On the Complexity of Knapsack Under Explorable Uncertainty: Hardness and Algorithms. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{schloter:LIPIcs.ESA.2025.6,
  author =	{Schl\"{o}ter, Jens},
  title =	{{On the Complexity of Knapsack Under Explorable Uncertainty: Hardness and Algorithms}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{6:1--6:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.6},
  URN =		{urn:nbn:de:0030-drops-244740},
  doi =		{10.4230/LIPIcs.ESA.2025.6},
  annote =	{Keywords: Explorable uncertainty, knapsack, queries, approximation algorithms}
}
Graph Modification of Bounded Size to Minor-Closed Classes as Fast as Vertex Deletion

Authors: Laure Morelle, Ignasi Sau, and Dimitrios M. Thilikos


Abstract
A replacement action is a function ℒ that maps each graph H to a collection of graphs of size at most |V(H)|. Given a graph class ℋ, we consider a general family of graph modification problems, called ℒ-Replacement to ℋ, where the input is a graph G and the question is whether it is possible to replace some induced subgraph H₁ of G on at most k vertices by a graph H₂ in ℒ(H₁) so that the resulting graph belongs to ℋ. ℒ-Replacement to ℋ can simulate many graph modification problems including vertex deletion, edge deletion/addition/edition/contraction, vertex identification, subgraph complementation, independent set deletion, (induced) matching deletion/contraction, etc. We present two algorithms. The first one solves ℒ-Replacement to ℋ in time 2^poly(k) ⋅ |V(G)|² for every minor-closed graph class ℋ, where poly is a polynomial whose degree depends on ℋ, under a mild technical condition on ℒ. This generalizes the results of Morelle, Sau, Stamoulis, and Thilikos [ICALP 2020, ICALP 2023] for the particular case of Vertex Deletion to ℋ within the same running time. Our second algorithm is an improvement of the first one when ℋ is the class of graphs embeddable in a surface of Euler genus at most g and runs in time 2^𝒪(k⁹) ⋅ |V(G)|², where the 𝒪(⋅) notation depends on g. To the best of our knowledge, these are the first parameterized algorithms with a reasonable parametric dependence for such a general family of graph modification problems to minor-closed classes.

Cite as

Laure Morelle, Ignasi Sau, and Dimitrios M. Thilikos. Graph Modification of Bounded Size to Minor-Closed Classes as Fast as Vertex Deletion. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{morelle_et_al:LIPIcs.ESA.2025.7,
  author =	{Morelle, Laure and Sau, Ignasi and Thilikos, Dimitrios M.},
  title =	{{Graph Modification of Bounded Size to Minor-Closed Classes as Fast as Vertex Deletion}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.7},
  URN =		{urn:nbn:de:0030-drops-244751},
  doi =		{10.4230/LIPIcs.ESA.2025.7},
  annote =	{Keywords: Graph modification problems, Parameterized complexity, Graph minors, Flat Wall theorem, Irrelevant vertex technique, Algorithmic meta-theorem, Parametric dependence, Dynamic programming}
}
Fast Computation of k-Runs, Parameterized Squares, and Other Generalised Squares

Authors: Yuto Nakashima, Jakub Radoszewski, and Tomasz Waleń


Abstract
A k-mismatch square is a string of the form XY where X and Y are two equal-length strings that have at most k mismatches. Kolpakov and Kucherov [Theor. Comput. Sci., 2003] defined two notions of k-mismatch repeats, called k-repetitions and k-runs, each representing a sequence of consecutive k-mismatch squares of equal length. They proposed algorithms for computing k-repetitions and k-runs working in 𝒪(nk log k + output) time for a string of length n over an integer alphabet, where output is the number of reported repeats. We show that output = 𝒪(nk log k), both in the case of k-repetitions and k-runs, which implies that the complexity of their algorithms is actually 𝒪(nk log k). We apply this result to computing parameterized squares. A parameterized square is a string of the form XY such that X and Y parameterized-match, i.e., there exists a bijection f on the alphabet such that f(X) = Y. Two parameterized squares XY and X'Y' are equivalent if they parameterized-match. Recently, Hamai et al. [SPIRE 2024] showed that a string of length n over an alphabet of size σ contains fewer than nσ non-equivalent parameterized squares, improving an earlier bound by Kociumaka et al. [Theor. Comput. Sci., 2016]. We apply our bound for k-mismatch repeats to propose an algorithm that reports all non-equivalent parameterized squares in 𝒪(nσ log σ) time. We also show that the number of non-equivalent parameterized squares can be computed in 𝒪(n log n) time. This last algorithm applies to squares under any substring-compatible equivalence relation and also to counting squares that are distinct as strings. In particular, this improves upon the 𝒪(nσ)-time algorithm of Gawrychowski et al. [CPM 2023] for counting order-preserving squares that are distinct as strings if σ = ω(log n).
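
The central definition is easy to pin down in code. A minimal checker for a single string follows (our own illustration; the paper's algorithms find all such occurrences in a text far faster than this naive per-string test):

# A string s is a k-mismatch square iff |s| is even and the Hamming
# distance between its two halves is at most k.
def is_k_mismatch_square(s: str, k: int) -> bool:
    if len(s) % 2 != 0:
        return False
    half = len(s) // 2
    mismatches = sum(a != b for a, b in zip(s[:half], s[half:]))
    return mismatches <= k

print(is_k_mismatch_square("abcaxc", 1))  # True: "abc" vs "axc" differ in one position
print(is_k_mismatch_square("abcaxy", 1))  # False: two mismatches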

Cite as

Yuto Nakashima, Jakub Radoszewski, and Tomasz Waleń. Fast Computation of k-Runs, Parameterized Squares, and Other Generalised Squares. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{nakashima_et_al:LIPIcs.ESA.2025.8,
  author =	{Nakashima, Yuto and Radoszewski, Jakub and Wale\'{n}, Tomasz},
  title =	{{Fast Computation of k-Runs, Parameterized Squares, and Other Generalised Squares}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.8},
  URN =		{urn:nbn:de:0030-drops-244768},
  doi =		{10.4230/LIPIcs.ESA.2025.8},
  annote =	{Keywords: string algorithm, k-mismatch square, parameterized square, order-preserving square, maximum gapped repeat}
}
MorphisHash: Improving Space Efficiency of ShockHash for Minimal Perfect Hashing

Authors: Stefan Hermann


Abstract
A minimal perfect hash function (MPHF) maps a set of n keys to unique positions {1, …, n}. Representing an MPHF requires at least log₂(e) ≈ 1.443 bits per key. ShockHash is a technique to construct an MPHF and requires just slightly more space. It gives each key two random candidate positions. If each key can be mapped to one of its two candidate positions such that there is exactly one key mapped to each position, then an MPHF is found. If not, ShockHash repeats the process with a new set of random candidate positions. ShockHash has to store how many repetitions were required and, for each key, to which of the two candidate positions it is mapped. However, when a given set of candidate positions can be used as an MPHF, there is not only one but multiple ways of mapping the keys to one of their candidate positions such that the mapping results in an MPHF. This redundancy accounts for the majority of the remaining space overhead in ShockHash. In this paper, we present MorphisHash, which almost completely eliminates this redundancy. Our theoretical result is that MorphisHash saves Θ(ln(n)) bits in expectation compared to ShockHash. This corresponds to a factor of 20 less space overhead in practice. Just like ShockHash, MorphisHash can be used as a building block within RecSplit to obtain MorphisHash-RS. When compared at the same space consumption, MorphisHash-RS can be constructed up to 21 times faster than ShockHash-RS. The technique used to accomplish this might be of more general interest for compressing data structures.
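
The retry principle behind ShockHash is compact enough to sketch. The toy version below (all names and hashing choices are our own illustrative assumptions, not the engineered implementation) derives two seeded candidate positions per key and retries seeds until the keys can be matched bijectively to positions, testing this with a simple augmenting-path matching; the successful seed is the kind of information a ShockHash-style structure stores.

import hashlib
from itertools import count

# Two seeded candidate positions in {0, ..., n-1} for a key.
def candidates(key: str, seed: int, n: int) -> tuple[int, int]:
    h = hashlib.blake2b(key.encode(), salt=seed.to_bytes(8, "little")).digest()
    return int.from_bytes(h[:8], "little") % n, int.from_bytes(h[8:16], "little") % n

# Try to map every key to one of its two candidates bijectively
# (Kuhn's augmenting-path bipartite matching); None if the seed fails.
def try_assign(keys: list[str], seed: int) -> dict[str, int] | None:
    n = len(keys)
    match: dict[int, str] = {}  # position -> key
    def augment(key: str, seen: set[int]) -> bool:
        for pos in candidates(key, seed, n):
            if pos in seen:
                continue
            seen.add(pos)
            if pos not in match or augment(match[pos], seen):
                match[pos] = key
                return True
        return False
    for key in keys:
        if not augment(key, set()):
            return None
    return {k: p for p, k in match.items()}

keys = ["apple", "banana", "cherry", "date"]
for seed in count():  # the successful seed is what gets stored
    mphf = try_assign(keys, seed)
    if mphf is not None:
        print(seed, mphf)  # each key -> a unique position in {0, ..., 3}
        break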

Cite as

Stefan Hermann. MorphisHash: Improving Space Efficiency of ShockHash for Minimal Perfect Hashing. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 9:1-9:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hermann:LIPIcs.ESA.2025.9,
  author =	{Hermann, Stefan},
  title =	{{MorphisHash: Improving Space Efficiency of ShockHash for Minimal Perfect Hashing}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{9:1--9:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.9},
  URN =		{urn:nbn:de:0030-drops-244779},
  doi =		{10.4230/LIPIcs.ESA.2025.9},
  annote =	{Keywords: compressed data structure, perfect hashing, random graph, pseudoforest, component}
}
Connected Partitions via Connected Dominating Sets

Authors: Aikaterini Niklanovits, Kirill Simonov, Shaily Verma, and Ziena Zeif


Abstract
The classical theorem due to Győri and Lovász states that any k-connected graph G admits a partition into k connected subgraphs, where each subgraph has a prescribed size and contains a prescribed vertex, as long as the total size of target subgraphs is equal to the size of G. However, this result is notoriously evasive in terms of efficient constructions, and it is still unknown whether such a partition can be computed in polynomial time, even for k = 5. We make progress towards an efficient constructive version of the Győri-Lovász theorem by considering a natural strengthening of the k-connectivity requirement. Specifically, we show that the desired connected partition can be found in polynomial time, if G contains k disjoint connected dominating sets. As a consequence of this result, we give several efficient approximate and exact constructive versions of the original Győri-Lovász theorem:
  • On general graphs, a Győri-Lovász partition with k parts can be computed in polynomial time when the input graph has connectivity Ω(k ⋅ log² n);
  • On convex bipartite graphs, connectivity of 4k is sufficient;
  • On biconvex graphs and interval graphs, connectivity of k is sufficient, meaning that our algorithm gives a "true" constructive version of the theorem on these graph classes.

Cite as

Aikaterini Niklanovits, Kirill Simonov, Shaily Verma, and Ziena Zeif. Connected Partitions via Connected Dominating Sets. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{niklanovits_et_al:LIPIcs.ESA.2025.10,
  author =	{Niklanovits, Aikaterini and Simonov, Kirill and Verma, Shaily and Zeif, Ziena},
  title =	{{Connected Partitions via Connected Dominating Sets}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.10},
  URN =		{urn:nbn:de:0030-drops-244785},
  doi =		{10.4230/LIPIcs.ESA.2025.10},
  annote =	{Keywords: Gy\H{o}ri-Lov\'{a}sz theorem, connected dominating sets, graph classes}
}
Courcelle’s Theorem for Lipschitz Continuity

Authors: Tatsuya Gima, Soh Kumabe, and Yuichi Yoshida


Abstract
Lipschitz continuity of algorithms, introduced by Kumabe and Yoshida (FOCS'23), measures the stability of an algorithm against small input perturbations. Algorithms with small Lipschitz continuity are desirable, as they ensure reliable decision-making and reproducible scientific research. Several studies have proposed Lipschitz continuous algorithms for various combinatorial optimization problems, but these algorithms are problem-specific, requiring a separate design for each problem. To address this issue, we provide the first algorithmic meta-theorem in the field of Lipschitz continuous algorithms. Our result can be seen as a Lipschitz continuous analogue of Courcelle’s theorem, which offers Lipschitz continuous algorithms for problems on bounded-treewidth graphs. Specifically, we consider the problem of finding a vertex set in a graph that maximizes or minimizes the total weight, subject to constraints expressed in monadic second-order logic (MSO₂). We show that for any ε > 0, there exists a (1±ε)-approximation algorithm for the problem with a polylogarithmic Lipschitz constant on bounded treewidth graphs. On such graphs, our result outperforms most existing Lipschitz continuous algorithms in terms of approximability and/or Lipschitz continuity. Further, we provide similar results for problems on bounded-clique-width graphs subject to constraints expressed in MSO₁. Additionally, we construct a Lipschitz continuous version of Baker’s decomposition using our meta-theorem as a subroutine.

Cite as

Tatsuya Gima, Soh Kumabe, and Yuichi Yoshida. Courcelle’s Theorem for Lipschitz Continuity. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gima_et_al:LIPIcs.ESA.2025.11,
  author =	{Gima, Tatsuya and Kumabe, Soh and Yoshida, Yuichi},
  title =	{{Courcelle’s Theorem for Lipschitz Continuity}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.11},
  URN =		{urn:nbn:de:0030-drops-244793},
  doi =		{10.4230/LIPIcs.ESA.2025.11},
  annote =	{Keywords: Fixed-Parameter Tractability, Algorithmic Meta-Theorem, Lipschitz Continuity}
}
Subtrajectory Clustering and Coverage Maximization in Cubic Time, or Better

Authors: Jacobus Conradi and Anne Driemel


Abstract
Many application areas collect unstructured trajectory data. In subtrajectory clustering, one is interested in finding patterns in this data using a hybrid combination of segmentation and clustering. We analyze two variants of this problem based on the well-known SetCover and CoverageMaximization problems. In both variants the set system is induced by metric balls under the Fréchet distance centered at polygonal curves. Our algorithms focus on improving the running time of the update step of the generic greedy algorithm by means of a careful combination of sweeps through a candidate space. In the first variant, we are given a polygonal curve P of complexity n, a distance threshold Δ and a complexity bound 𝓁, and the goal is to identify a minimum-size set of center curves 𝒞, where each center curve is of complexity at most 𝓁 and every point p on P is covered. A point p on P is covered if it is part of a subtrajectory π_p of P such that there is a center c ∈ 𝒞 whose Fréchet distance to π_p is at most Δ. We present an approximation algorithm for this problem with a running time of 𝒪((n²𝓁 + √{k_Δ}n^{5/2})log²n), where k_Δ is the size of an optimal solution. The algorithm gives a bicriterial approximation guarantee that relaxes the Fréchet distance threshold by a constant factor and the size of the solution by a factor of 𝒪(log n). The second problem variant asks for the maximum fraction of the input curve P that can be covered using k center curves, where k ≤ n is a parameter to the algorithm. For the second problem variant, our techniques lead to an algorithm with a running time of 𝒪((k+𝓁)n²log²n) and similar approximation guarantees. Note that in both algorithms k,k_Δ ∈ O(n) and hence the running time is cubic, or better if k ≪ n.
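
The metric balls in both set systems are defined with respect to the continuous Fréchet distance between polygonal curves. As background, the discrete variant of this distance, computable by a standard quadratic dynamic program, is a common proxy and illustrates what the Δ-balls measure; this is generic textbook code, not part of the paper.

from math import dist

# Discrete Frechet distance between polylines P and Q by dynamic
# programming over pairs of vertex indices.
def discrete_frechet(P: list[tuple[float, float]], Q: list[tuple[float, float]]) -> float:
    n, m = len(P), len(Q)
    D = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                D[i][j] = d
            elif i == 0:
                D[i][j] = max(D[i][j - 1], d)
            elif j == 0:
                D[i][j] = max(D[i - 1][j], d)
            else:
                D[i][j] = max(min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]), d)
    return D[n - 1][m - 1]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0: the curves stay at distance 1 throughout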

Cite as

Jacobus Conradi and Anne Driemel. Subtrajectory Clustering and Coverage Maximization in Cubic Time, or Better. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 12:1-12:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{conradi_et_al:LIPIcs.ESA.2025.12,
  author =	{Conradi, Jacobus and Driemel, Anne},
  title =	{{Subtrajectory Clustering and Coverage Maximization in Cubic Time, or Better}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{12:1--12:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.12},
  URN =		{urn:nbn:de:0030-drops-244806},
  doi =		{10.4230/LIPIcs.ESA.2025.12},
  annote =	{Keywords: Clustering, Set cover, Fr\'{e}chet distance, Approximation algorithms}
}
Tight Bounds for Some Classical Problems Parameterized by Cutwidth

Authors: Narek Bojikian, Vera Chekan, and Stefan Kratsch


Abstract
Cutwidth is a widely studied parameter that quantifies how well a graph can be decomposed along small edge-cuts. It complements pathwidth, which captures decomposition by small vertex separators, and it is well-known that cutwidth upper-bounds pathwidth. The SETH-tight parameterized complexity of problems on graphs of bounded pathwidth (and treewidth) has been actively studied over the past decade, while for cutwidth the complexity of many classical problems remained open. For Hamiltonian Cycle, it is known that a (2+√2)^{pw} n^𝒪(1) algorithm is optimal for pathwidth under SETH [Cygan et al. JACM 2018]. Van Geffen et al. [J. Graph Algorithms Appl. 2020] and Bojikian et al. [STACS 2023] asked which running time is optimal for this problem parameterized by cutwidth. We answer this question with (1+√2)^{ctw} n^𝒪(1) by providing matching upper and lower bounds. Second, as our main technical contribution, we close the gap left by van Heck [2018] for Partition Into Triangles (and Triangle Packing) by improving both the upper and the lower bound, obtaining a tight bound of (∛3)^{ctw} n^𝒪(1); to our knowledge, this exhibits the only known tight non-integral base apart from Hamiltonian Cycle [Cygan et al. JACM 2018] and C₄-Hitting Set [SODA 2025]. We show that the cuts inducing a disjoint union of paths of length three (unions of so-called Z-cuts) lie at the core of the complexity of the problem - usually, lower-bound constructions use simpler cuts inducing either a matching or a disjoint union of bicliques. Finally, we determine the optimal running times for Max Cut (2^{ctw} n^𝒪(1)) and Induced Matching (3^{ctw} n^𝒪(1)) by providing matching lower bounds for the existing algorithms; the latter result also answers an open question for treewidth by Chaudhary and Zehavi [WG 2023].

Cite as

Narek Bojikian, Vera Chekan, and Stefan Kratsch. Tight Bounds for Some Classical Problems Parameterized by Cutwidth. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bojikian_et_al:LIPIcs.ESA.2025.13,
  author =	{Bojikian, Narek and Chekan, Vera and Kratsch, Stefan},
  title =	{{Tight Bounds for Some Classical Problems Parameterized by Cutwidth}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.13},
  URN =		{urn:nbn:de:0030-drops-244815},
  doi =		{10.4230/LIPIcs.ESA.2025.13},
  annote =	{Keywords: Parameterized complexity, cutwidth, Hamiltonian cycle, triangle packing, max cut, induced matching}
}
Testing Sumsets Is Hard

Authors: Xi Chen, Shivam Nadimpalli, Tim Randolph, Rocco A. Servedio, and Or Zamir


Abstract
A subset S of the Boolean hypercube 𝔽₂ⁿ is a sumset if S = {a + b : a, b ∈ A} for some A ⊆ 𝔽₂ⁿ. Sumsets are central objects of study in additive combinatorics, where they play a role in several of the field’s most important results. We prove a lower bound of Ω(2^{n/2}) for the number of queries needed to test whether a Boolean function f:𝔽₂ⁿ → {0,1} is the indicator function of a sumset, ruling out an efficient testing algorithm for sumsets. Our lower bound for testing sumsets follows from sharp bounds on the related problem of shift testing, which may be of independent interest. We also give a near-optimal 2^{n/2} ⋅ poly(n)-query algorithm for a smoothed analysis formulation of the sumset refutation problem. Finally, we include a simple proof that the number of different sumsets in 𝔽₂ⁿ is 2^{(1±o(1))2^{n-1}}.
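
The object being tested is elementary to state in code: over 𝔽₂ⁿ, addition is bitwise XOR, so S is a sumset iff S = {a ⊕ b : a, b ∈ A} for some A. The naive test below (our own illustration) enumerates all candidate sets A and is doubly exponential; the point of the paper is that even query access to the indicator of S cannot avoid roughly 2^{n/2} work.

from itertools import chain, combinations

# Sumset of A inside F_2^n: addition is XOR.
def sumset(A: frozenset) -> frozenset:
    return frozenset(a ^ b for a in A for b in A)

# Brute force: does any A ⊆ F_2^n generate S as its sumset?
def is_sumset(S: set, n: int) -> bool:
    universe = range(2 ** n)
    subsets = chain.from_iterable(combinations(universe, r) for r in range(2 ** n + 1))
    return any(sumset(frozenset(A)) == frozenset(S) for A in subsets)

print(is_sumset({0, 1, 2, 3}, 2))  # True: A = {0, 1, 2} works, e.g. 1 ^ 2 = 3
print(is_sumset({1, 2, 3}, 2))     # False: 0 = a ^ a lies in every nonempty sumset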

Cite as

Xi Chen, Shivam Nadimpalli, Tim Randolph, Rocco A. Servedio, and Or Zamir. Testing Sumsets Is Hard. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ESA.2025.14,
  author =	{Chen, Xi and Nadimpalli, Shivam and Randolph, Tim and Servedio, Rocco A. and Zamir, Or},
  title =	{{Testing Sumsets Is Hard}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.14},
  URN =		{urn:nbn:de:0030-drops-244822},
  doi =		{10.4230/LIPIcs.ESA.2025.14},
  annote =	{Keywords: Sumsets, additive combinatorics, property testing, Boolean functions}
}
Linear Layouts Revisited: Stacks, Queues, and Exact Algorithms

Authors: Thomas Depian, Simon D. Fink, Robert Ganian, and Vaishali Surianarayanan


Abstract
In spite of the extensive study of stack and queue layouts, many fundamental questions remain open concerning the complexity-theoretic frontiers for computing stack and queue layouts. A stack (resp. queue) layout places vertices along a line and assigns edges to pages so that no two edges on the same page are crossing (resp. nested). We provide three new algorithms which together substantially expand our understanding of these problems:
  1) A fixed-parameter algorithm for computing minimum-page stack and queue layouts w.r.t. the vertex integrity of an n-vertex graph G. This result is motivated by an open question in the literature and generalizes the previous algorithms parameterizing by the vertex cover number of G. The proof relies on a newly developed Ramsey pruning technique. Vertex integrity intuitively measures the vertex deletion distance to a subgraph with only small connected components.
  2) An n^𝒪(q 𝓁) algorithm for computing 𝓁-page stack and queue layouts of page width at most q. This is the first algorithm avoiding a double-exponential dependency on the parameters. The page width of a layout measures the maximum number of edges one needs to cross on any page to reach the outer face.
  3) A 2^𝒪(n) algorithm for computing 1-page queue layouts. This improves upon the previously fastest n^𝒪(n) algorithm and can be seen as a counterpart to the recent subexponential algorithm for computing 2-page stack layouts [ICALP'24], but relies on an entirely different technique.
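
The two page conditions are symmetric and easy to check once vertices are identified with their positions on the line; the following sketch (our own illustration, with edges given as position pairs) makes the crossing/nesting distinction concrete.

# On a stack page no two edges may cross; on a queue page no two may nest.
def crossing(e: tuple[int, int], f: tuple[int, int]) -> bool:
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def nesting(e: tuple[int, int], f: tuple[int, int]) -> bool:
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < d < b or c < a < b < d

def valid_page(edges: list[tuple[int, int]], kind: str) -> bool:
    bad = crossing if kind == "stack" else nesting
    return not any(bad(edges[i], edges[j])
                   for i in range(len(edges)) for j in range(i + 1, len(edges)))

print(valid_page([(1, 3), (2, 4)], "stack"))  # False: the edges cross
print(valid_page([(1, 3), (2, 4)], "queue"))  # True: crossing is fine on a queue page
print(valid_page([(1, 4), (2, 3)], "queue"))  # False: (2,3) nests inside (1,4)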

Cite as

Thomas Depian, Simon D. Fink, Robert Ganian, and Vaishali Surianarayanan. Linear Layouts Revisited: Stacks, Queues, and Exact Algorithms. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 15:1-15:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{depian_et_al:LIPIcs.ESA.2025.15,
  author =	{Depian, Thomas and Fink, Simon D. and Ganian, Robert and Surianarayanan, Vaishali},
  title =	{{Linear Layouts Revisited: Stacks, Queues, and Exact Algorithms}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{15:1--15:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.15},
  URN =		{urn:nbn:de:0030-drops-244835},
  doi =		{10.4230/LIPIcs.ESA.2025.15},
  annote =	{Keywords: stack layouts, queue layouts, parameterized algorithms, vertex integrity, Ramsey theory}
}
On Algorithmic Applications of ℱ-Branchwidth

Authors: Benjamin Bergougnoux, Thekla Hamm, Lars Jaffke, and Paloma T. Lima


Abstract
F-branchwidth is a framework for width measures of graphs, recently introduced by Eiben et al. [ITCS 2022], that captures tree-width, co-tree-width, clique-width, and mim-width, and several of their generalizations and interpolations. In this work, we search for algorithmic applications of F-branchwidth measures that do not have an equivalent counterpart in the literature so far. Our first contribution is a minimal set of eleven F-branchwidth measures such that each of the infinitely many F-branchwidth measures is equivalent to one of the eleven. We observe that for the FO Model Checking problem, each F-branchwidth is either equivalent to clique-width (and therefore has an FPT-algorithm by formula length plus the width) or the problem remains as hard as on general graphs even on graphs of constant width. Next, we study the number of equivalence classes of the neighborhood equivalence in a decomposition, which upper bounds the run time of the model checking algorithm for ACDN logic recently introduced by Bergougnoux et al. [SODA 2023]. We give structural lower bounds that show that for each F-branchwidth, an efficient model checking algorithm was already known or cannot be obtained via this method. Lastly, we classify the complexity of Independent Set parameterized by any F-branchwidth except for one open case. Also here, our contributions are lower bounds. In this context, we also prove that Independent Set on graphs of mim-width w cannot be solved in time n^o(w) unless the Exponential Time Hypothesis fails, answering an open question in the literature.

Cite as

Benjamin Bergougnoux, Thekla Hamm, Lars Jaffke, and Paloma T. Lima. On Algorithmic Applications of ℱ-Branchwidth. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bergougnoux_et_al:LIPIcs.ESA.2025.16,
  author =	{Bergougnoux, Benjamin and Hamm, Thekla and Jaffke, Lars and Lima, Paloma T.},
  title =	{{On Algorithmic Applications of ℱ-Branchwidth}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.16},
  URN =		{urn:nbn:de:0030-drops-244849},
  doi =		{10.4230/LIPIcs.ESA.2025.16},
  annote =	{Keywords: Graph width parameters, parameterized complexity, F-branchwidth, tree-width, clique-width, rank-width, mim-width, FO model checking, DN logic, Independent Set, ETH}
}
Weighted Matching in a Poly-Streaming Model

Authors: Ahammed Ullah, S M Ferdous, and Alex Pothen


Abstract
We introduce the poly-streaming model, a generalization of streaming models of computation in which k processors process k data streams containing a total of N items. The algorithm is allowed 𝒪(f(k)⋅M₁) space, where M₁ is either o(N) or the space bound for a sequential streaming algorithm. Processors may communicate as needed. Algorithms are assessed by the number of passes, per-item processing time, total runtime, space usage, communication cost, and solution quality. We design a single-pass algorithm in this model for approximating the maximum weight matching (MWM) problem. Given k edge streams and a parameter ε > 0, the algorithm computes a (2+ε)-approximate MWM. We analyze its performance in a shared-memory parallel setting: for any constant ε > 0, it runs in time 𝒪̃(L_{max}+n), where n is the number of vertices and L_{max} is the maximum stream length. It supports 𝒪(1) per-edge processing time using 𝒪̃(k⋅n) space. We further generalize the design to hierarchical architectures, in which k processors are partitioned into r groups, each with its own shared local memory. The total intergroup communication is 𝒪̃(r⋅n) bits, while all other performance guarantees are preserved. We evaluate the algorithm on a shared-memory system using graphs with trillions of edges. It achieves substantial speedups as k increases and produces matchings with weights significantly exceeding the theoretical guarantee. On our largest test graph, it reduces runtime by nearly two orders of magnitude and memory usage by five orders of magnitude compared to an offline algorithm.
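
For context on the single-stream baseline: the classical one-pass local-ratio algorithm of Paz and Schwartzman [SODA 2017] also achieves a (2+ε)-approximate MWM from a single edge stream. The sketch below is that baseline, not the paper's poly-streaming algorithm (which coordinates k such streams); all names are illustrative.

# One-pass (2+eps)-approximate maximum weight matching from an edge
# stream, in the style of the Paz-Schwartzman local-ratio algorithm.
def streaming_mwm(edge_stream, eps: float = 0.1):
    phi: dict[int, float] = {}  # vertex potentials
    stack = []                  # edges retained by the local-ratio rule
    for u, v, w in edge_stream:
        if w <= (1 + eps) * (phi.get(u, 0.0) + phi.get(v, 0.0)):
            continue            # edge gains too little over its endpoints
        gain = w - phi.get(u, 0.0) - phi.get(v, 0.0)
        stack.append((u, v))
        phi[u] = phi.get(u, 0.0) + gain
        phi[v] = phi.get(v, 0.0) + gain
    matching, used = [], set()
    while stack:                # unwind the stack, later edges first
        u, v = stack.pop()
        if u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    return matching

edges = [(1, 2, 5.0), (2, 3, 6.0), (3, 4, 5.0), (1, 4, 1.0)]
print(streaming_mwm(edges))  # [(3, 4), (1, 2)]: total weight 10, here optimal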

Cite as

Ahammed Ullah, S M Ferdous, and Alex Pothen. Weighted Matching in a Poly-Streaming Model. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ullah_et_al:LIPIcs.ESA.2025.17,
  author =	{Ullah, Ahammed and Ferdous, S M and Pothen, Alex},
  title =	{{Weighted Matching in a Poly-Streaming Model}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.17},
  URN =		{urn:nbn:de:0030-drops-244858},
  doi =		{10.4230/LIPIcs.ESA.2025.17},
  annote =	{Keywords: Streaming Algorithms, Matchings, Graphs, Parallel Algorithms}
}
Efficiency of Learned Indexes on Genome Spectra

Authors: Md. Hasin Abrar, Paul Medvedev, and Giorgio Vinciguerra


Abstract
Data structures on a multiset of genomic k-mers are at the heart of many bioinformatic tools. As genomic datasets grow in scale, the efficiency of these data structures increasingly depends on how well they leverage the inherent patterns in the data. One recent and effective approach is the use of learned indexes that approximate the rank function of a multiset using a piecewise linear function with very few segments. However, theoretical worst-case analysis struggles to predict the practical performance of these indexes. We address this limitation by developing a novel measure of piecewise-linear approximability of the data, called CaPLa (Canonical Piecewise Linear approximability). CaPLa builds on the empirical observation that a power-law model often serves as a reasonable proxy for piecewise linear-approximability, while explicitly accounting for deviations from a true power-law fit. We prove basic properties of CaPLa and present an efficient algorithm to compute it. We then demonstrate that CaPLa can accurately predict space bounds for data structures on real data. Empirically, we analyze over 500 genomes through the lens of CaPLa, revealing that it varies widely across the tree of life and even within individual genomes. Finally, we study the robustness of CaPLa as a measure and the factors that make genomic k-mer multisets different from random ones.
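
To make "piecewise linear approximability of the rank function" concrete: a learned index (such as the PGM-index) covers the map key ↦ rank with segments whose vertical error is at most some ε. The greedy, anchored segmentation below is a simplification of the optimal streaming algorithm such indexes use (it may produce more segments than necessary) and assumes distinct keys; it is only meant to illustrate the quantity that CaPLa models.

# Greedy epsilon-PLA of the rank function key -> index over sorted keys.
def pla_segments(keys: list[int], eps: float):
    segments, i, n = [], 0, len(keys)
    while i < n:
        x0, y0 = keys[i], i                   # anchor the segment here
        lo, hi = float("-inf"), float("inf")  # feasible slope interval
        j = i + 1
        while j < n:
            dx = keys[j] - x0                 # > 0 since keys are distinct
            lo = max(lo, (j - eps - y0) / dx)
            hi = min(hi, (j + eps - y0) / dx)
            if lo > hi:                       # no single slope fits all points
                break
            j += 1
        slope = (lo + hi) / 2 if hi != float("inf") else 0.0
        segments.append((x0, y0, slope))
        i = j
    return segments

keys = [2, 4, 6, 8, 100, 101, 102, 103]
print(len(pla_segments(keys, eps=1)))  # 2: one segment per near-linear run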

Cite as

Md. Hasin Abrar, Paul Medvedev, and Giorgio Vinciguerra. Efficiency of Learned Indexes on Genome Spectra. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{abrar_et_al:LIPIcs.ESA.2025.18,
  author =	{Abrar, Md. Hasin and Medvedev, Paul and Vinciguerra, Giorgio},
  title =	{{Efficiency of Learned Indexes on Genome Spectra}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.18},
  URN =		{urn:nbn:de:0030-drops-244865},
  doi =		{10.4230/LIPIcs.ESA.2025.18},
  annote =	{Keywords: Genome spectra, piecewise linear approximation, learned index, k-mers}
}
On Finding 𝓁-Th Smallest Perfect Matchings

Authors: Nicolas El Maalouly, Sebastian Haslebacher, Adrian Taubner, and Lasse Wulf


Abstract
Given an undirected weighted graph G and an integer k, Exact-Weight Perfect Matching (EWPM) is the problem of finding a perfect matching of weight exactly k in G. In this paper, we study EWPM and its variants. The EWPM problem is famous, since in the case of unary-encoded weights, Mulmuley, Vazirani, and Vazirani showed almost 40 years ago that the problem can be solved in randomized polynomial time. However, to this day no derandomization is known. Our first result is a simple deterministic algorithm for EWPM that runs in time n^𝒪(𝓁), where 𝓁 is the number of distinct weights that perfect matchings in G can take. In fact, we show how to find an 𝓁-th smallest perfect matching in any weighted graph (even if the weights are encoded in binary, in which case EWPM in general is known to be NP-complete) in time n^𝒪(𝓁) for any integer 𝓁. Similar next-to-optimal variants have also been studied recently for the shortest path problem. For our second result, we extend the list of problems that are known to be equivalent to EWPM. We show that EWPM is equivalent under a weight-preserving reduction to the Exact Cycle Sum problem (ECS) in undirected graphs with a conservative (i.e., no negative cycles) weight function. To the best of our knowledge, we are the first to study this problem. As a consequence, the latter problem is contained in RP if the weights are encoded in unary. Finally, we identify a special case of EWPM, called BCPM, which was recently studied by El Maalouly, Steiner and Wulf. We show that BCPM is equivalent under a weight-preserving transformation to another problem recently studied by Schlotter and Sebő as well as Geelen and Kapadia: the Shortest Odd Cycle problem (SOC) in undirected graphs with conservative weights. Our n^𝒪(𝓁) algorithm works in this setting as well, identifying a tractable special case of SOC, BCPM, and ECS.
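
The notion of an 𝓁-th smallest perfect matching is easy to pin down with a brute-force reference on a toy graph (our own illustration; the paper's point is the n^𝒪(𝓁) bound, where 𝓁 counts distinct matching weights, whereas this enumeration is exponential).

# Enumerate all perfect matchings of a small weighted graph and rank
# their weights; edges are given as frozenset pairs with integer weights.
def perfect_matchings(vertices: list[int], edges: dict):
    if not vertices:
        yield frozenset(), 0
        return
    u, rest = vertices[0], vertices[1:]
    for v in rest:
        e = frozenset((u, v))
        if e in edges:
            remaining = [x for x in rest if x != v]
            for m, w in perfect_matchings(remaining, edges):
                yield m | {e}, w + edges[e]

def lth_smallest_weight(edges: dict, n: int, l: int) -> int:
    weights = sorted(w for _, w in perfect_matchings(list(range(n)), edges))
    return weights[l - 1]

# A weighted 4-cycle: exactly two perfect matchings, of weights 4 and 6.
E = {frozenset((0, 1)): 1, frozenset((1, 2)): 2,
     frozenset((2, 3)): 3, frozenset((3, 0)): 4}
print(lth_smallest_weight(E, 4, 1))  # 4: matching {01, 23}
print(lth_smallest_weight(E, 4, 2))  # 6: matching {12, 30}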

Cite as

Nicolas El Maalouly, Sebastian Haslebacher, Adrian Taubner, and Lasse Wulf. On Finding 𝓁-Th Smallest Perfect Matchings. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 19:1-19:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{elmaalouly_et_al:LIPIcs.ESA.2025.19,
  author =	{El Maalouly, Nicolas and Haslebacher, Sebastian and Taubner, Adrian and Wulf, Lasse},
  title =	{{On Finding 𝓁-Th Smallest Perfect Matchings}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{19:1--19:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.19},
  URN =		{urn:nbn:de:0030-drops-244875},
  doi =		{10.4230/LIPIcs.ESA.2025.19},
  annote =	{Keywords: Exact Matching, Perfect Matching, Exact-Weight Perfect Matching, Shortest Odd Cycle, Exact Cycle Sum, l-th Smallest Solution, l-th Largest Solution, k-th Best Solution, Derandomization}
}
Beeping Deterministic CONGEST Algorithms in Graphs

Authors: Pawel Garncarek, Dariusz R. Kowalski, Shay Kutten, and Miguel A. Mosteiro


Abstract
Beeping Network (BN) is a popular graph-based model of wireless computation, which applies the OR operation to one-bit messages sent simultaneously by neighbors. It admits fast (polylogarithmic in the number of nodes n) randomized solutions to many graph problems, but all known deterministic algorithms for non-trivial graph problems are at least polynomial in the maximum node degree Δ. We improve known results for deterministic algorithms by showing that this polynomial can be as low as Õ(Δ²). More precisely, we show how to simulate a single round of any CONGEST algorithm in any network in O(Δ² polylog n) beeping rounds, each accommodating at most one beep per node, even if the nodes intend to send different messages to different neighbors. This upper bound polynomially reduces the time for a deterministic simulation of CONGEST in a Beeping Network, compared to the best known algorithms, and nearly matches the time obtained recently using randomization (up to a poly-logarithmic factor), as well as the lower bound. Specifically, any algorithm designed for CONGEST networks can be run in BNs with O(Δ² polylog n) multiplicative overhead; e.g., we can now deterministically compute an MIS in any BN in O(Δ² polylog n) beeping rounds, improving the previous best Θ(Δ³)-round solution. For h-hop simulations, we prove a lower bound of Ω(Δ^{h+1}), and we design a nearly matching algorithm that is able to "pipeline" the node-to-node information in a faster way than beeping layer-by-layer.

Cite as

Pawel Garncarek, Dariusz R. Kowalski, Shay Kutten, and Miguel A. Mosteiro. Beeping Deterministic CONGEST Algorithms in Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{garncarek_et_al:LIPIcs.ESA.2025.20,
  author =	{Garncarek, Pawel and Kowalski, Dariusz R. and Kutten, Shay and Mosteiro, Miguel A.},
  title =	{{Beeping Deterministic CONGEST Algorithms in Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.20},
  URN =		{urn:nbn:de:0030-drops-244880},
  doi =		{10.4230/LIPIcs.ESA.2025.20},
  annote =	{Keywords: Beeping Networks, CONGEST Networks, deterministic simulations, graph algorithms}
}
Document
A Unified FPT Framework for Crossing Number Problems

Authors: Éric Colin de Verdière and Petr Hliněný


Abstract
The basic (and traditional) crossing number problem is to determine the minimum number of crossings in a topological drawing of an input graph in the plane. We develop a unified framework that smoothly captures many generalized crossing number problems, and that yields fixed-parameter tractable (FPT) algorithms for them not only in the plane but also on surfaces. Our framework takes the following form. We fix a surface S, an integer r, and a map κ from the set of topological drawings of graphs in S to ℤ_+ ∪ {∞}, satisfying some natural monotonicity conditions, but essentially describing the allowed drawings and how we want to count the crossings in them. Then deciding whether an input graph G has an allowed drawing D on S with κ(D) ≤ r can be done in time quadratic in the size of G (and exponential in other parameters). More generally, we may take as input an edge-colored graph, and distinguish crossings by the colors of the involved edges; and we may allow a bounded number of edge removals and vertex splits to be performed on G before drawing it. The proof is a reduction to the embeddability of a graph on a two-dimensional simplicial complex. This framework implies, in a unified way, quadratic FPT algorithms for many topological crossing number variants established in the graph drawing community. Some of these variants already had previously published FPT algorithms, mostly relying on Courcelle’s metatheorem, but for many of those, we obtain an algorithm with a better runtime. Moreover, our framework extends, at no cost, to these crossing number variants in any fixed surface.

Cite as

Éric Colin de Verdière and Petr Hliněný. A Unified FPT Framework for Crossing Number Problems. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{colindeverdiere_et_al:LIPIcs.ESA.2025.21,
  author =	{Colin de Verdi\`{e}re, \'{E}ric and Hlin\v{e}n\'{y}, Petr},
  title =	{{A Unified FPT Framework for Crossing Number Problems}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.21},
  URN =		{urn:nbn:de:0030-drops-244897},
  doi =		{10.4230/LIPIcs.ESA.2025.21},
  annote =	{Keywords: computational geometry, fixed-parameter tractability, graph drawing, graph embedding, crossing number, two-dimensional simplicial complex, surface}
}
Document
Constructing Long Paths in Graph Streams

Authors: Christian Konrad and Chhaya Trehan


Abstract
In the graph stream model of computation, an algorithm processes the edges of an n-vertex input graph in one or more sequential passes while using a memory that is sublinear in the input size. The streaming model poses significant challenges for algorithmically constructing long paths. Many known algorithms that are tasked with extending an existing path as a subroutine require an entire pass over the input to add a single additional edge. This raises a fundamental question: Are multiple passes inherently necessary to construct paths of non-trivial lengths, or can a single pass suffice? To address this question, we systematically study the Longest Path problem in the one-pass streaming model. In this problem, given a desired approximation factor α, the objective is to compute a path of length at least lp(G)/α, where lp(G) is the length of a longest path in the input graph G. We study the problem in the insertion-only and the insertion-deletion streaming models, and we give algorithms as well as space lower bounds for both undirected and directed graphs. Our results are: 1) We show that for undirected graphs, in both the insertion-only and the insertion-deletion models, there are semi-streaming algorithms, i.e., algorithms that use space O(n poly log n), that compute a path of length at least d/3 with high probability, where d is the average degree of the input graph. These algorithms can also yield an α-approximation to Longest Path using space Õ(n²/α). 2) Next, we show that such a result cannot be achieved for directed graphs, even in the insertion-only model. We show that computing an n^{1-o(1)}-approximation to Longest Path in directed graphs in the insertion-only model requires space Ω(n²). This result is in line with recent results that demonstrate that processing directed graphs is often significantly harder than processing undirected graphs in the streaming model. 3) We further complement our results with two additional lower bounds. First, we show that semi-streaming space is insufficient for small constant factor approximations to Longest Path for undirected graphs in the insertion-only model. Second, in undirected graphs in the insertion-deletion model, we show that computing an α-approximation requires space Ω(n²/α³).
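
For reference, the quantity lp(G) itself, in a brute-force Python sketch of our own (exponential time, since Longest Path is NP-hard; an α-approximate algorithm must output a path of length at least lp(G)/α):

def lp(adj):
    # Length (number of edges) of a longest simple path in an undirected graph.
    best = 0
    def dfs(v, seen, length):
        nonlocal best
        best = max(best, length)
        for u in adj[v]:
            if u not in seen:
                dfs(u, seen | {u}, length + 1)
    for s in adj:
        dfs(s, {s}, 0)
    return best

assert lp({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}) == 3  # a path on 4 vertices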

Cite as

Christian Konrad and Chhaya Trehan. Constructing Long Paths in Graph Streams. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{konrad_et_al:LIPIcs.ESA.2025.22,
  author =	{Konrad, Christian and Trehan, Chhaya},
  title =	{{Constructing Long Paths in Graph Streams}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.22},
  URN =		{urn:nbn:de:0030-drops-244902},
  doi =		{10.4230/LIPIcs.ESA.2025.22},
  annote =	{Keywords: Longest Path Problem, Streaming Algorithms, One-way Two-party Communication Complexity}
}
Document
Computing Largest Subsets of Points Whose Convex Hulls Have Bounded Area and Diameter

Authors: Gianmarco Picarella, Marc van Kreveld, Frank Staals, and Sjoerd de Vries


Abstract
We study the problem of computing a convex region with bounded area and diameter that contains the maximum number of points from a given point set P. We show that this problem can be solved in O(n⁶k) time and O(n³k) space, where n is the size of P and k is the maximum number of points in the found region. We experimentally compare this new algorithm with an existing algorithm that does the same but without the diameter constraint, and which runs in O(n³k) time. In the experiments, we run the new algorithm with several different diameter bounds. We use both synthetic data and data from an application in cancer detection, which motivated our research.
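
A brute-force reference for the problem statement, as a Python sketch of our own (exponential time, usable only for tiny inputs; max_area and max_diam are illustrative parameter names, not the paper's notation):

from itertools import combinations
from math import dist

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return chain(pts) + chain(pts[::-1])

def hull_area(h):
    # Shoelace formula; 0 for degenerate hulls.
    if len(h) < 3:
        return 0.0
    return abs(sum(h[i][0] * h[(i + 1) % len(h)][1]
                   - h[(i + 1) % len(h)][0] * h[i][1]
                   for i in range(len(h)))) / 2

def largest_subset(P, max_area, max_diam):
    # Largest subset whose convex hull respects both the area and diameter bounds.
    for k in range(len(P), 0, -1):
        for S in combinations(P, k):
            diam = max((dist(p, q) for p in S for q in S), default=0.0)
            if diam <= max_diam and hull_area(convex_hull(list(S))) <= max_area:
                return set(S)
    return set()

P = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(largest_subset(P, max_area=1.0, max_diam=2.0))  # {(0, 0), (1, 0), (0, 1)}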

Cite as

Gianmarco Picarella, Marc van Kreveld, Frank Staals, and Sjoerd de Vries. Computing Largest Subsets of Points Whose Convex Hulls Have Bounded Area and Diameter. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 23:1-23:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{picarella_et_al:LIPIcs.ESA.2025.23,
  author =	{Picarella, Gianmarco and van Kreveld, Marc and Staals, Frank and de Vries, Sjoerd},
  title =	{{Computing Largest Subsets of Points Whose Convex Hulls Have Bounded Area and Diameter}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{23:1--23:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.23},
  URN =		{urn:nbn:de:0030-drops-244919},
  doi =		{10.4230/LIPIcs.ESA.2025.23},
  annote =	{Keywords: convex polygon, dynamic programming, implementation}
}
Document
Fréchet Distance in Unweighted Planar Graphs

Authors: Ivor van der Hoog, Thijs van der Horst, Eva Rotenberg, and Lasse Wulf


Abstract
The Fréchet distance is a distance measure between trajectories in ℝ^d or walks in a graph G. Given constant-time shortest path queries, the Discrete Fréchet distance D_G(P, Q) between two walks P and Q can be computed in O(|P|⋅|Q|) time using a dynamic program. Driemel, van der Hoog, and Rotenberg [SoCG'22] show that for weighted planar graphs this approach is likely tight, as there can be no strongly-subquadratic algorithm to compute a 1.01-approximation of D_G(P, Q) unless the Orthogonal Vector Hypothesis (OVH) fails. Such quadratic-time conditional lower bounds are common to many Fréchet distance variants. However, they can be circumvented by assuming that the input comes from some well-behaved class: There exist (1+ε)-approximations, both in weighted graphs and in ℝ^d, that take near-linear time for c-packed or κ-straight walks in the graph. In ℝ^d there also exists a near-linear time algorithm to compute the Fréchet distance whenever all input edges are long compared to the distance. We consider computing the Fréchet distance in unweighted planar graphs. We show that no algorithm can compute a 1.25-approximation of the discrete Fréchet distance between two disjoint simple paths in an unweighted planar graph in strongly subquadratic time, unless OVH fails. This improves the previous lower bound, both in terms of generality and approximation factor. We subsequently show that adding graph structure circumvents this lower bound: If the graph is a regular tiling with unit-weighted edges, then there exists an Õ((|P|+|Q|)^{1.5})-time algorithm to compute D_G(P, Q). Our result has natural implications in the plane, as it allows us to define a new class of well-behaved curves that facilitate (1+ε)-approximations of their discrete Fréchet distance in subquadratic time.
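
The O(|P|⋅|Q|) dynamic program mentioned above, in a generic Python sketch; the oracle d stands in for the constant-time shortest-path queries, and the one-dimensional example distance is our own illustration:

from functools import lru_cache

def discrete_frechet(P, Q, d):
    # c(i, j): discrete Fréchet distance between prefixes P[..i] and Q[..j].
    @lru_cache(maxsize=None)
    def c(i, j):
        cost = d(P[i], Q[j])
        if i == 0 and j == 0:
            return cost
        if i == 0:
            return max(c(0, j - 1), cost)
        if j == 0:
            return max(c(i - 1, 0), cost)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), cost)
    return c(len(P) - 1, len(Q) - 1)

# Example with the trivial "graph" of integers and d(u, v) = |u - v|:
print(discrete_frechet([0, 1, 2], [0, 2, 2], lambda u, v: abs(u - v)))  # 1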

Cite as

Ivor van der Hoog, Thijs van der Horst, Eva Rotenberg, and Lasse Wulf. Fréchet Distance in Unweighted Planar Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{vanderhoog_et_al:LIPIcs.ESA.2025.24,
  author =	{van der Hoog, Ivor and van der Horst, Thijs and Rotenberg, Eva and Wulf, Lasse},
  title =	{{Fr\'{e}chet Distance in Unweighted Planar Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.24},
  URN =		{urn:nbn:de:0030-drops-244924},
  doi =		{10.4230/LIPIcs.ESA.2025.24},
  annote =	{Keywords: Fr\'{e}chet distance, planar graphs, lower bounds, approximation algorithms}
}
Document
Instance-Optimal Imprecise Convex Hull

Authors: Sarita de Berg, Ivor van der Hoog, Eva Rotenberg, Daniel Rutschmann, and Sampson Wong


Abstract
Imprecise measurements of a point set P = (p₁, …, p_n) can be modelled by a family of regions F = (R₁, …, R_n), where each imprecise region R_i ∈ F contains a unique point p_i ∈ P. A retrieval models an accurate measurement by replacing an imprecise region R_i with its corresponding point p_i. We construct the convex hull of an imprecise point set in the plane, by determining the cyclic ordering of the convex hull vertices of P as efficiently as possible. Efficiency is interpreted in two ways: (i) minimising the number of retrievals, and (ii) minimising the computation time needed to determine the set of regions that must be retrieved. Previous works focused on only one of these two aspects: either minimising retrievals or optimising algorithmic runtime. Our contribution is the first to simultaneously achieve both. Let r(F, P) denote the minimal number of retrievals required by any algorithm to determine the convex hull of P for a given instance (F, P). For a family F of n constant-complexity polygons, our main result is a reconstruction algorithm that performs Θ(r(F, P)) retrievals in O(r(F, P) log³ n) time. Compared to previous approaches that achieve optimal retrieval counts, we improve the runtime per retrieval from polynomial to polylogarithmic. We extend the generality of previous results to simple k-gons, to pairwise disjoint disks with radii in [1,k], and to unit disks where at most k disks overlap in a single point. Our runtime scales linearly with k.

Cite as

Sarita de Berg, Ivor van der Hoog, Eva Rotenberg, Daniel Rutschmann, and Sampson Wong. Instance-Optimal Imprecise Convex Hull. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{deberg_et_al:LIPIcs.ESA.2025.25,
  author =	{de Berg, Sarita and van der Hoog, Ivor and Rotenberg, Eva and Rutschmann, Daniel and Wong, Sampson},
  title =	{{Instance-Optimal Imprecise Convex Hull}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{25:1--25:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.25},
  URN =		{urn:nbn:de:0030-drops-244932},
  doi =		{10.4230/LIPIcs.ESA.2025.25},
  annote =	{Keywords: convex hull, imprecise geometry preprocessing model, partial information}
}
Document
Faster Dynamic 2-Edge Connectivity in Directed Graphs

Authors: Loukas Georgiadis, Konstantinos Giannis, and Giuseppe F. Italiano


Abstract
Let G be a directed graph with n vertices and m edges. We present a deterministic algorithm that maintains the 2-edge-connected components of G under a sequence of m edge insertions, with a total running time of O(n² log n). This significantly improves upon the previous best bound of O(mn) for graphs that are not very sparse. After each insertion, our algorithm supports the following queries with asymptotically optimal efficiency: - Test in constant time whether two query vertices v and w are 2-edge-connected in G. - Report in O(n) time all the 2-edge-connected components of G. Our approach builds on the recent framework of Georgiadis, Italiano, and Kosinas [FOCS 2024] for computing the 3-edge-connected components of a directed graph in linear time, which leverages the minset-poset technique of Gabow [TALG 2016]. Additionally, we provide a deterministic decremental algorithm for maintaining 2-edge-connectivity in strongly connected directed graphs. Given a sequence of m edge deletions, our algorithm maintains the 2-edge-connected components in total time n^{2+o(1)}, while supporting the same queries as the incremental algorithm. This result assumes that the edges of a fixed spanning tree of G and of its reverse graph G^R are not deleted. Previously, the best known bound for the decremental problem was O(mn log n), obtained by a randomized algorithm without restrictions on the deletions. In contrast to prior dynamic algorithms for 2-edge-connectivity in directed graphs, our method avoids the incremental computation of dominator trees, thereby circumventing the known conditional lower bound of Ω(mn).
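
For intuition about the supported queries, a brute-force Python check of our own (O(m·(n+m)) per query, versus the data structure's O(1)): two vertices are 2-edge-connected in a digraph iff they remain strongly connected after deleting any single edge.

def reachable(adj, s, banned):
    # Vertices reachable from s when the single edge `banned` is removed.
    seen, stack = {s}, [s]
    while stack:
        x = stack.pop()
        for y in adj.get(x, ()):
            if (x, y) != banned and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def two_edge_connected(adj, v, w):
    edges = [(x, y) for x in adj for y in adj[x]]
    return all(w in reachable(adj, v, e) and v in reachable(adj, w, e)
               for e in [None] + edges)

adj = {1: [2], 2: [3], 3: [1, 2]}       # cycle 1->2->3->1 plus the edge 3->2
print(two_edge_connected(adj, 2, 3))    # False: deleting 2->3 separates them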

Cite as

Loukas Georgiadis, Konstantinos Giannis, and Giuseppe F. Italiano. Faster Dynamic 2-Edge Connectivity in Directed Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 26:1-26:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{georgiadis_et_al:LIPIcs.ESA.2025.26,
  author =	{Georgiadis, Loukas and Giannis, Konstantinos and Italiano, Giuseppe F.},
  title =	{{Faster Dynamic 2-Edge Connectivity in Directed Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{26:1--26:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.26},
  URN =		{urn:nbn:de:0030-drops-244945},
  doi =		{10.4230/LIPIcs.ESA.2025.26},
  annote =	{Keywords: Connectivity, dynamic algorithms, directed graphs}
}
Document
Online Makespan Scheduling Under Scenarios

Authors: Ekin Ergen


Abstract
We consider a natural extension of online makespan scheduling on identical parallel machines by introducing scenarios. A scenario is a subset of jobs, and the task is to find a global assignment of the jobs to machines so that the maximum makespan under a scenario, i.e., the maximum makespan of any schedule restricted to a scenario, is minimized. For varying values of the number of scenarios and machines, we explore the competitiveness of online algorithms. We prove tight and near-tight bounds, several of which are achieved through novel constructions. In particular, we leverage the interplay between the unit processing time case of our problem and the hypergraph coloring problem both ways: We use hypergraph coloring techniques to steer an adversarial family of instances proving lower bounds for our problem, which in turn leads to lower bounds for several variants of online hypergraph coloring.
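
The objective, made concrete in a small Python sketch of our own (proc, assign, and scenarios are illustrative names): the value of a fixed global assignment is the worst makespan over all scenarios.

def scenario_makespan(proc, assign, scenarios, machines):
    # proc: job -> processing time; assign: job -> machine index.
    def makespan(jobs):
        load = [0] * machines
        for j in jobs:
            load[assign[j]] += proc[j]
        return max(load)
    return max(makespan(S) for S in scenarios)

proc = {"a": 2, "b": 1, "c": 2}
assign = {"a": 0, "b": 1, "c": 1}
print(scenario_makespan(proc, assign, [{"a", "b"}, {"b", "c"}], 2))  # 3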

Cite as

Ekin Ergen. Online Makespan Scheduling Under Scenarios. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 27:1-27:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ergen:LIPIcs.ESA.2025.27,
  author =	{Ergen, Ekin},
  title =	{{Online Makespan Scheduling Under Scenarios}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{27:1--27:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.27},
  URN =		{urn:nbn:de:0030-drops-244950},
  doi =		{10.4230/LIPIcs.ESA.2025.27},
  annote =	{Keywords: online scheduling, scenario-based model, online algorithms}
}
Document
Sliding Squares in Parallel

Authors: Hugo A. Akitaya, Sándor P. Fekete, Peter Kramer, Saba Molaei, Christian Rieck, Frederick Stock, and Tobias Wallner


Abstract
We consider algorithmic problems motivated by modular robotic reconfiguration in the sliding square model, in which we are given n square-shaped modules in a (labeled or unlabeled) start configuration and need to find a schedule of sliding moves to transform it into a desired goal configuration, maintaining connectivity of the configuration at all times. Recent work has aimed at minimizing the total number of moves, resulting in fully sequential schedules that can perform reconfiguration in 𝒪(n²) moves, or 𝒪(nP) for arrangements of bounding box perimeter size P. We provide the first results in the sliding square model that exploit parallel motion, performing reconfiguration with a worst-case optimal makespan of 𝒪(P). We also provide tight bounds on the complexity of the problem by showing that even deciding the possibility of reconfiguration within makespan 1 is NP-complete in the unlabeled case. In the labeled variant, we note that deciding the same for makespan 2 is NP-complete, while makespan 1 is straightforward.

Cite as

Hugo A. Akitaya, Sándor P. Fekete, Peter Kramer, Saba Molaei, Christian Rieck, Frederick Stock, and Tobias Wallner. Sliding Squares in Parallel. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{a.akitaya_et_al:LIPIcs.ESA.2025.28,
  author =	{A. Akitaya, Hugo and Fekete, S\'{a}ndor P. and Kramer, Peter and Molaei, Saba and Rieck, Christian and Stock, Frederick and Wallner, Tobias},
  title =	{{Sliding Squares in Parallel}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.28},
  URN =		{urn:nbn:de:0030-drops-244961},
  doi =		{10.4230/LIPIcs.ESA.2025.28},
  annote =	{Keywords: Sliding squares, parallel motion, reconfigurability, motion planning, multi-agent path finding, makespan, swarm robotics, computational geometry}
}
Document
The Tape Reconfiguration Problem and Its Consequences for Dominating Set Reconfiguration

Authors: Nicolas Bousquet, Quentin Deschamps, Arnaud Mary, Amer E. Mouawad, and Théo Pierron


Abstract
A dominating set of a graph G = (V,E) is a set of vertices D ⊆ V whose closed neighborhood is V, i.e., N[D] = V. We view a dominating set as a collection of tokens placed on the vertices of D. In the token sliding variant of the Dominating Set Reconfiguration problem (TS-DSR), we seek to transform a source dominating set into a target dominating set in G by sliding tokens along edges, while maintaining a dominating set all along the transformation. TS-DSR is known to be PSPACE-complete even restricted to graphs of pathwidth w, for some non-explicit constant w, and to be XL-complete parameterized by the size k of the solution. The first contribution of this article is a novel approach that provides the first explicit constant for which the TS-DSR problem is PSPACE-complete, a question that was left open in the literature. From a parameterized complexity perspective, the token jumping variant of DSR, i.e., where tokens can jump to arbitrary vertices, is known to be FPT when parameterized by the size of the dominating sets on nowhere dense classes of graphs. In contrast, no non-trivial result was known about TS-DSR. We prove that DSR is actually much harder in the sliding model, since it is XL-complete when restricted to bounded pathwidth graphs and even when parameterized by k plus the feedback vertex set number of the graph. This gives, for the first time, a difference in behavior between the complexity under token sliding and token jumping for some problem on graphs of bounded treewidth. All our results are obtained using a brand new method, based on the hardness of the so-called Tape Reconfiguration problem, a problem we believe to be of independent interest. We complement these hardness results with a positive result showing that DSR (parameterized by k) in the sliding model is FPT on planar graphs, also answering an open problem from the literature.

Cite as

Nicolas Bousquet, Quentin Deschamps, Arnaud Mary, Amer E. Mouawad, and Théo Pierron. The Tape Reconfiguration Problem and Its Consequences for Dominating Set Reconfiguration. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bousquet_et_al:LIPIcs.ESA.2025.29,
  author =	{Bousquet, Nicolas and Deschamps, Quentin and Mary, Arnaud and Mouawad, Amer E. and Pierron, Th\'{e}o},
  title =	{{The Tape Reconfiguration Problem and Its Consequences for Dominating Set Reconfiguration}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.29},
  URN =		{urn:nbn:de:0030-drops-244974},
  doi =		{10.4230/LIPIcs.ESA.2025.29},
  annote =	{Keywords: combinatorial reconfiguration, parameterized complexity, structural graph parameters, treewidth, dominating set}
}
Document
Reconstructing Random Graphs from Distance Queries

Authors: Michael Krivelevich and Maksim Zhukovskii


Abstract
We estimate the minimum number of distance queries that is sufficient to reconstruct the binomial random graph G(n,p) with constant diameter with high probability. We get a tight (up to a constant factor) answer for all p > n^{-1+o(1)} outside "threshold windows" around n^{-k/(k+1)+o(1)}, k ∈ ℤ_{> 0}: with high probability the query complexity equals Θ(n^{4-d}p^{2-d}), where d is the diameter of the random graph. This demonstrates the following non-monotone behaviour: the query complexity jumps down at moments when the diameter gets larger; yet, between these moments the query complexity grows. We also show that there exists a non-adaptive algorithm that reconstructs the random graph with O(n^{4-d}p^{2-d}ln n) distance queries with high probability, and this is best possible.
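
The query model, made concrete in a naive Python sketch of our own: the trivial non-adaptive strategy asks all roughly n²/2 pairwise distance queries and keeps exactly the pairs at distance 1; the paper shows far fewer queries suffice w.h.p. on G(n,p).

from itertools import combinations

def reconstruct(vertices, dist_query):
    # The edges are exactly the pairs at distance 1.
    return {frozenset((u, v)) for u, v in combinations(vertices, 2)
            if dist_query(u, v) == 1}

# On the path 1-2-3, with dist(u, v) = |u - v|:
print(reconstruct([1, 2, 3], lambda u, v: abs(u - v)))
# {frozenset({1, 2}), frozenset({2, 3})}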

Cite as

Michael Krivelevich and Maksim Zhukovskii. Reconstructing Random Graphs from Distance Queries. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{krivelevich_et_al:LIPIcs.ESA.2025.30,
  author =	{Krivelevich, Michael and Zhukovskii, Maksim},
  title =	{{Reconstructing Random Graphs from Distance Queries}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.30},
  URN =		{urn:nbn:de:0030-drops-244982},
  doi =		{10.4230/LIPIcs.ESA.2025.30},
  annote =	{Keywords: random graphs, graph reconstruction, distance queries, query complexity}
}
Document
An Optimal Algorithm for Shortest Paths in Unweighted Disk Graphs

Authors: Bruce W. Brewer and Haitao Wang


Abstract
Given in the plane a set S of n points and a set of disks centered at these points, the disk graph G(S) induced by these disks has vertex set S and an edge between two vertices if their disks intersect. Note that the disks may have different radii. We consider the problem of computing shortest paths from a source point s ∈ S to all vertices in G(S), where the length of a path in G(S) is defined as the number of edges in the path. The previous best algorithm solves the problem in O(n log² n) time. A lower bound of Ω(n log n) is also known for this problem under the algebraic decision tree model. In this paper, we present an O(n log n) time algorithm, which matches the lower bound and thus is optimal. Another virtue of our algorithm is that it is quite simple.
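
The graph model, made explicit in a Python sketch of our own: a naive O(n²) construction of G(S) (two disks intersect iff the distance between their centers is at most the sum of their radii) followed by plain BFS from s. The paper computes the same BFS distances in O(n log n) without ever building the edge set.

from collections import deque
from math import dist

def disk_graph_bfs(centers, radii, s):
    n = len(centers)
    adj = [[j for j in range(n) if j != i
            and dist(centers[i], centers[j]) <= radii[i] + radii[j]]
           for i in range(n)]
    d = {s: 0}                       # hop counts from the source disk s
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

print(disk_graph_bfs([(0, 0), (3, 0), (9, 0)], [1, 2, 1], 0))  # {0: 0, 1: 1}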

Cite as

Bruce W. Brewer and Haitao Wang. An Optimal Algorithm for Shortest Paths in Unweighted Disk Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 31:1-31:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{brewer_et_al:LIPIcs.ESA.2025.31,
  author =	{Brewer, Bruce W. and Wang, Haitao},
  title =	{{An Optimal Algorithm for Shortest Paths in Unweighted Disk Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{31:1--31:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.31},
  URN =		{urn:nbn:de:0030-drops-244997},
  doi =		{10.4230/LIPIcs.ESA.2025.31},
  annote =	{Keywords: disk graphs, weighted Voronoi diagrams, shortest paths}
}
Document
Linear-Time Multilevel Graph Partitioning via Edge Sparsification

Authors: Lars Gottesbüren, Nikolai Maas, Dominik Rosch, Peter Sanders, and Daniel Seemaier


Abstract
The current landscape of balanced graph partitioning is divided into high-quality but expensive multilevel algorithms and cheaper approaches with linear running time, such as single-level algorithms and streaming algorithms. We demonstrate how to achieve the best of both worlds with a linear-time multilevel algorithm. Multilevel algorithms construct a hierarchy of increasingly smaller graphs by repeatedly contracting clusters of nodes. Our approach preserves their distinct advantage, allowing refinement of the partition over multiple levels with increasing detail. At the same time, we use edge sparsification to guarantee geometric size reduction between the levels and thus linear running time. We provide a proof of the linear running time as well as additional insights into the behavior of multilevel algorithms, showing that graphs with low modularity are most likely to trigger worst-case running time. We evaluate multiple approaches for edge sparsification and integrate our algorithm into the state-of-the-art multilevel partitioner KaMinPar, maintaining its excellent parallel scalability. As demonstrated in detailed experiments, this results in a 1.49× average speedup (up to 4× for some instances) with only 1% loss in solution quality. Moreover, our algorithm clearly outperforms state-of-the-art single-level and streaming approaches.
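
For reference, the standard objective in balanced graph partitioning, as a small Python sketch of our own (edge cut subject to a balance constraint; eps is the usual imbalance parameter, and the function names are illustrative):

from math import ceil

def cut_size(edges, part):
    # Number of edges whose endpoints land in different blocks.
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, k, eps):
    # Standard constraint: no block exceeds (1 + eps) * ceil(n / k) vertices.
    counts = {}
    for b in part.values():
        counts[b] = counts.get(b, 0) + 1
    return max(counts.values()) <= (1 + eps) * ceil(len(part) / k)

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
part = {1: 0, 2: 0, 3: 1, 4: 1}
assert cut_size(edges, part) == 2 and is_balanced(part, 2, 0.0)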

Cite as

Lars Gottesbüren, Nikolai Maas, Dominik Rosch, Peter Sanders, and Daniel Seemaier. Linear-Time Multilevel Graph Partitioning via Edge Sparsification. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gottesburen_et_al:LIPIcs.ESA.2025.32,
  author =	{Gottesb\"{u}ren, Lars and Maas, Nikolai and Rosch, Dominik and Sanders, Peter and Seemaier, Daniel},
  title =	{{Linear-Time Multilevel Graph Partitioning via Edge Sparsification}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.32},
  URN =		{urn:nbn:de:0030-drops-245007},
  doi =		{10.4230/LIPIcs.ESA.2025.32},
  annote =	{Keywords: Graph Partitioning, Graph Algorithms, Linear Time Algorithms, Graph Sparsification}
}
Document
Better Indexing for Rectangular Pattern Matching

Authors: Paweł Gawrychowski and Adam Górkiewicz


Abstract
We revisit the complexity of building, given a two-dimensional string of size n, an indexing structure that allows locating all k occurrences of a two-dimensional pattern of size m. While a structure of size 𝒪(n) with query time 𝒪(m+k) is known for this problem under the additional assumption that the pattern is a square [Giancarlo, SICOMP 1995], a popular belief was that for rectangular patterns one cannot achieve such (or even similar) bounds, due to a lower bound for a certain natural class of approaches [Giancarlo, WADS 1993]. We show that, in fact, it is possible to construct a very simple structure of size 𝒪(n log n) that supports such queries for any rectangular pattern in 𝒪(m + k log^{ε} n) time, for any ε > 0.
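
The query semantics, as a naive Python reference of our own: report all occurrences of an m₁ × m₂ pattern in an n₁ × n₂ text by direct comparison (𝒪(nm) total work); the index above answers the same query in 𝒪(m + k log^{ε} n) time after preprocessing.

def occurrences(text, pat):
    # text, pat: lists of equal-length strings (rows of a 2D string).
    n1, n2, m1, m2 = len(text), len(text[0]), len(pat), len(pat[0])
    return [(i, j) for i in range(n1 - m1 + 1) for j in range(n2 - m2 + 1)
            if all(text[i + a][j:j + m2] == pat[a] for a in range(m1))]

print(occurrences(["abab", "baba"], ["ab"]))  # [(0, 0), (0, 2), (1, 1)]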

Cite as

Paweł Gawrychowski and Adam Górkiewicz. Better Indexing for Rectangular Pattern Matching. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 33:1-33:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2025.33,
  author =	{Gawrychowski, Pawe{\l} and G\'{o}rkiewicz, Adam},
  title =	{{Better Indexing for Rectangular Pattern Matching}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{33:1--33:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.33},
  URN =		{urn:nbn:de:0030-drops-245011},
  doi =		{10.4230/LIPIcs.ESA.2025.33},
  annote =	{Keywords: 2D strings, pattern matching, string indexing}
}
Document
On the Approximability of Train Routing and the Min-Max Disjoint Paths Problem

Authors: Umang Bhaskar, Katharina Eickhoff, Lennart Kauther, Jannik Matuschke, Britta Peis, and Laura Vargas Koch


Abstract
In train routing, the headway is the minimum distance that must be maintained between successive trains for safety and robustness. We introduce a model for train routing that requires a fixed headway to be maintained between trains, and study the problem of minimizing the makespan, i.e., the arrival time of the last train, in a single-source single-sink network. For this problem, we first show that there exists an optimal solution where trains move in convoys - that is, the optimal paths for any two trains are either the same or are arc-disjoint. Via this insight, we are able to reduce the approximability of our train routing problem to that of the min-max disjoint paths problem, which asks for a collection of disjoint paths where the maximum length of any path in the collection is as small as possible. While min-max disjoint paths inherits a strong inapproximability result on directed acyclic graphs from the multi-level bottleneck assignment problem, we show that a natural greedy composition approach yields a logarithmic approximation in the number of disjoint paths for series-parallel graphs. We also present an alternative analysis of this approach that yields a guarantee depending on how often the decomposition tree of the series-parallel graph alternates between series and parallel compositions on any root-leaf path.
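
To fix terminology, a tiny Python sketch of our own for the min-max disjoint paths objective (unit arc lengths here, for simplicity): a candidate solution is a collection of pairwise arc-disjoint paths, and its value is the maximum path length.

def arc_disjoint(paths):
    # Paths are vertex sequences; arcs are consecutive pairs.
    arcs = [a for p in paths for a in zip(p, p[1:])]
    return len(arcs) == len(set(arcs))

def min_max_value(paths):
    return max(len(p) - 1 for p in paths)

paths = [[0, 1, 4], [0, 2, 3, 4]]
assert arc_disjoint(paths) and min_max_value(paths) == 3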

Cite as

Umang Bhaskar, Katharina Eickhoff, Lennart Kauther, Jannik Matuschke, Britta Peis, and Laura Vargas Koch. On the Approximability of Train Routing and the Min-Max Disjoint Paths Problem. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bhaskar_et_al:LIPIcs.ESA.2025.34,
  author =	{Bhaskar, Umang and Eickhoff, Katharina and Kauther, Lennart and Matuschke, Jannik and Peis, Britta and Vargas Koch, Laura},
  title =	{{On the Approximability of Train Routing and the Min-Max Disjoint Paths Problem}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.34},
  URN =		{urn:nbn:de:0030-drops-245029},
  doi =		{10.4230/LIPIcs.ESA.2025.34},
  annote =	{Keywords: Train Routing, Scheduling, Approximation Algorithms, Flows over Time, Min-Max Disjoint Paths}
}
Document
The Geodesic Fréchet Distance Between Two Curves Bounding a Simple Polygon

Authors: Thijs van der Horst, Marc van Kreveld, Tim Ophelders, and Bettina Speckmann


Abstract
The Fréchet distance is a popular similarity measure that is well-understood for polygonal curves in ℝ^d: near-quadratic time algorithms exist, and conditional lower bounds suggest that these results cannot be improved significantly, even in one dimension and when approximating with a factor less than three. We consider the special case where the curves bound a simple polygon and distances are measured via geodesics inside this simple polygon. Here the conditional lower bounds do not apply; Efrat et al. (2002) were able to give a near-linear time 2-approximation algorithm. In this paper, we significantly improve upon their result: we present a (1+ε)-approximation algorithm, for any ε > 0, that runs in 𝒪((1/ε)(n + m log n) log(nm) log(1/ε)) time for a simple polygon bounded by two curves with n and m vertices, respectively. To do so, we show how to compute the reachability of specific groups of points in the free space at once, by interpreting the free space as one between separated one-dimensional curves. We solve this one-dimensional problem in near-linear time, generalizing a result by Bringmann and Künnemann (2015). Finally, we give a linear time exact algorithm if the two curves bound a convex polygon.

Cite as

Thijs van der Horst, Marc van Kreveld, Tim Ophelders, and Bettina Speckmann. The Geodesic Fréchet Distance Between Two Curves Bounding a Simple Polygon. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{vanderhorst_et_al:LIPIcs.ESA.2025.35,
  author =	{van der Horst, Thijs and van Kreveld, Marc and Ophelders, Tim and Speckmann, Bettina},
  title =	{{The Geodesic Fr\'{e}chet Distance Between Two Curves Bounding a Simple Polygon}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.35},
  URN =		{urn:nbn:de:0030-drops-245038},
  doi =		{10.4230/LIPIcs.ESA.2025.35},
  annote =	{Keywords: Fr\'{e}chet distance, approximation, geodesic, simple polygon}
}
Document
Efficient Contractions of Dynamic Graphs - With Applications

Authors: Monika Henzinger, Evangelos Kosinas, Robin Münk, and Harald Räcke


Abstract
A non-trivial minimum cut (NMC) sparsifier is a multigraph Ĝ that preserves all non-trivial minimum cuts of a given undirected graph G. We introduce a flexible data structure for fully dynamic graphs that can efficiently provide an NMC sparsifier upon request at any point during the sequence of updates. We employ simple dynamic forest data structures to achieve a fast from-scratch construction of the sparsifier at query time. Based on the strength of the adversary and desired type of time bounds, the data structure comes with different guarantees. Specifically, let G be a fully dynamic simple graph with n vertices and minimum degree δ. Then our data structure supports an insertion/deletion of an edge to/from G in n^o(1) worst-case time. Furthermore, upon request, it can return w.h.p. an NMC sparsifier of G that has O(n/δ) vertices and O(n) edges, in Ô(n) time. The probabilistic guarantees hold against an adaptive adversary. Alternatively, the update and query times can be improved to Õ(1) and Õ(n) respectively, if amortized-time guarantees are sufficient, or if the adversary is oblivious. Throughout the paper, we use Õ to hide polylogarithmic factors and Ô to hide subpolynomial (i.e., n^o(1)) factors. We discuss two applications of our new data structure. First, it can be used to efficiently report a cactus representation of all minimum cuts of a fully dynamic simple graph. Building this cactus for the NMC sparsifier instead of the original graph allows for a construction time that is sublinear in the number of edges. Against an adaptive adversary, we can with high probability output the cactus representation in worst-case Ô(n) time. Second, our data structure allows us to efficiently compute the maximal k-edge-connected subgraphs of undirected simple graphs, by repeatedly applying a minimum cut algorithm on the NMC sparsifier. Specifically, we can compute with high probability the maximal k-edge-connected subgraphs of a simple graph with n vertices and m edges in Õ(m+n²/k) time. This improves the best known time bounds for k = Ω(n^{1/8}) and naturally extends to the case of fully dynamic graphs.

Cite as

Monika Henzinger, Evangelos Kosinas, Robin Münk, and Harald Räcke. Efficient Contractions of Dynamic Graphs - With Applications. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{henzinger_et_al:LIPIcs.ESA.2025.36,
  author =	{Henzinger, Monika and Kosinas, Evangelos and M\"{u}nk, Robin and R\"{a}cke, Harald},
  title =	{{Efficient Contractions of Dynamic Graphs - With Applications}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.36},
  URN =		{urn:nbn:de:0030-drops-245047},
  doi =		{10.4230/LIPIcs.ESA.2025.36},
  annote =	{Keywords: Graph Algorithms, Cut Sparsifiers, Dynamic Algorithms}
}
Document
Hardness of Computation of Quantum Invariants on 3-Manifolds with Restricted Topology

Authors: Henrique Ennes and Clément Maria


Abstract
Quantum invariants in low-dimensional topology offer a wide variety of valuable invariants about knots and 3-manifolds, presented by explicit formulas that are readily computable. Their computational complexity has been actively studied and is tightly connected to topological quantum computing. In this article, we prove that for any 3-manifold quantum invariant in the Reshetikhin-Turaev model, there is a deterministic polynomial time algorithm that, given as input an arbitrary closed 3-manifold M, outputs a closed 3-manifold M' with the same quantum invariant, such that M' is hyperbolic, contains no low genus embedded incompressible surface, and is presented by a strongly irreducible Heegaard diagram. Our construction relies on properties of Heegaard splittings and the Hempel distance. At the level of computational complexity, this proves that the hardness of computing a given quantum invariant of 3-manifolds is preserved even when severely restricting the topology and the combinatorics of the input. This positively answers a question raised by Samperton [Samperton, 2023].

Cite as

Henrique Ennes and Clément Maria. Hardness of Computation of Quantum Invariants on 3-Manifolds with Restricted Topology. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 37:1-37:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ennes_et_al:LIPIcs.ESA.2025.37,
  author =	{Ennes, Henrique and Maria, Cl\'{e}ment},
  title =	{{Hardness of Computation of Quantum Invariants on 3-Manifolds with Restricted Topology}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{37:1--37:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.37},
  URN =		{urn:nbn:de:0030-drops-245057},
  doi =		{10.4230/LIPIcs.ESA.2025.37},
  annote =	{Keywords: 3-manifold, Heegaard splitting, Hempel distance, Quantum invariant, polynomial time reduction}
}
Document
Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique

Authors: Nikhil Kumar, J. J. Nan, and Chaitanya Swamy


Abstract
In the classical survivable-network-design problem (SNDP), we are given an undirected graph G = (V, E), non-negative edge costs, and k tuples (s_i,t_i,r_i), where s_i,t_i ∈ V and r_i ∈ ℤ_+. The objective is to find a minimum-cost subset H ⊆ E such that each s_i-t_i pair remains connected even after the failure of any r_i-1 edges. It is well-known that SNDP can be equivalently modeled using a weakly-supermodular cut-requirement function f, where the objective is to find the minimum-cost subset of edges that picks at least f(S) edges across every cut S ⊆ V. Recently, motivated by fault-tolerance in graph spanners, Dinitz, Koranteng, and Kortsarz proposed a variant of SNDP that enforces a relative level of fault tolerance with respect to G. Even if a feasible SNDP-solution may not exist due to G lacking the required fault-tolerance, the goal is to find a solution H that is at least as fault-tolerant as G itself. They formalize the latter condition in terms of paths and fault-sets, which gives rise to path-relative SNDP (which they call relative SNDP). Along these lines, we introduce a new model of relative network design, called cut-relative SNDP (CR-SNDP), where the goal is to select a minimum-cost subset of edges that satisfies the given (weakly-supermodular) cut-requirement function to the maximum extent possible, i.e., by picking min{f(S), |δ_G(S)|} edges across every cut S ⊆ V. Unlike SNDP, the cut-relative and path-relative versions of SNDP are not equivalent. The resulting cut-requirement function for CR-SNDP (as also path-relative SNDP) is not weakly supermodular, and extreme-point solutions to the natural LP-relaxation need not correspond to a laminar family of tight cut constraints. Consequently, standard techniques cannot be used directly to design approximation algorithms for this problem. We develop a novel decomposition technique to circumvent this difficulty and use it to give a tight 2-approximation algorithm for CR-SNDP. We also show some new hardness results for these relative-SNDP problems.
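
To make the cut-relative requirement concrete, a brute-force Python feasibility check of our own (exponential in |V|, intuition only; f is any cut-requirement function supplied by the caller):

from itertools import combinations

def cut_edges(edges, S):
    # Edges with exactly one endpoint inside S.
    return [e for e in edges if len(set(e) & S) == 1]

def cr_feasible(V, E, H, f):
    # H is CR-SNDP-feasible iff |delta_H(S)| >= min(f(S), |delta_G(S)|) for every cut S.
    for r in range(1, len(V)):
        for S in map(set, combinations(V, r)):
            if len(cut_edges(H, S)) < min(f(S), len(cut_edges(E, S))):
                return False
    return True

V, E = [1, 2, 3], [(1, 2), (2, 3)]
f = lambda S: 2                         # demand 2 edges across every cut
print(cr_feasible(V, E, E, f))          # True: G itself is always feasible
print(cr_feasible(V, E, [(1, 2)], f))   # False: the cut {3} is undercovered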

Cite as

Nikhil Kumar, J. J. Nan, and Chaitanya Swamy. Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 38:1-38:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.ESA.2025.38,
  author =	{Kumar, Nikhil and Nan, J. J. and Swamy, Chaitanya},
  title =	{{Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{38:1--38:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.38},
  URN =		{urn:nbn:de:0030-drops-245061},
  doi =		{10.4230/LIPIcs.ESA.2025.38},
  annote =	{Keywords: Approximation algorithms, Network Design, Cut-requirement functions, Weak Supermodularity, Iterative rounding, LP rounding algorithms}
}
Document
Going Beyond Surfaces in Diameter Approximation

Authors: Michał Włodarczyk


Abstract
Calculating the diameter of an undirected graph requires quadratic running time under the Strong Exponential Time Hypothesis, and this barrier works even against any approximation better than 3/2. For planar graphs with positive edge weights, there are known (1+ε)-approximation algorithms with running time poly(1/ε, log n)⋅ n. However, these algorithms rely on shortest path separators and this technique falls short of yielding efficient algorithms beyond graphs of bounded genus. In this work we depart from embedding-based arguments and obtain diameter approximations relying on VC set systems and the local treewidth property. We present two orthogonal extensions of the planar case by giving (1+ε)-approximation algorithms with the following running times: - 𝒪_h((1/ε)^𝒪(h) ⋅ n log² n)-time algorithm for graphs excluding an apex graph of size h as a minor, - 𝒪_d((1/ε)^𝒪(d) ⋅ n log² n)-time algorithm for the class of d-apex graphs. As a stepping stone, we obtain efficient (1+ε)-approximate distance oracles for graphs excluding an apex graph of size h as a minor. Our oracle has preprocessing time 𝒪_h((1/ε)⁸ ⋅ n log n log W) and query time 𝒪_h((1/ε)² ⋅ log n log W), where W is the metric stretch. Such oracles have been so far only known for bounded genus graphs. All our algorithms are deterministic.
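
For contrast with the quadratic barrier mentioned above, the textbook exact algorithm for the unweighted case (one BFS per vertex), in a short Python sketch of our own:

from collections import deque

def diameter(adj):
    # Exact diameter of a connected unweighted graph: max eccentricity over all vertices.
    def ecc(s):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return max(d.values())
    return max(ecc(s) for s in adj)

assert diameter({1: [2], 2: [1, 3], 3: [2]}) == 2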

Cite as

Michał Włodarczyk. Going Beyond Surfaces in Diameter Approximation. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{wlodarczyk:LIPIcs.ESA.2025.39,
  author =	{W{\l}odarczyk, Micha{\l}},
  title =	{{Going Beyond Surfaces in Diameter Approximation}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.39},
  URN =		{urn:nbn:de:0030-drops-245076},
  doi =		{10.4230/LIPIcs.ESA.2025.39},
  annote =	{Keywords: diameter, approximation, distance oracles, graph minors, treewidth}
}
Document
Maximum List r-Colorable Induced Subgraphs in kP₃-Free Graphs

Authors: Esther Galby, Paloma T. Lima, Andrea Munaro, and Amir Nikabadi


Abstract
We show that, for all fixed positive integers r and k, Max-Weight List r-Colorable Induced Subgraph admits a polynomial-time algorithm on kP₃-free graphs. This problem is a common generalization of Max-Weight Independent Set, Odd Cycle Transversal and List r-Coloring, among others. Our result has several consequences. First, it implies that, for every fixed r ≥ 5, assuming 𝖯 ≠ NP, Max-Weight List r-Colorable Induced Subgraph is polynomial-time solvable on H-free graphs if and only if H is an induced subgraph of either kP₃ or P₅+kP₁, for some k ≥ 1. Second, it makes considerable progress toward a complexity dichotomy for Odd Cycle Transversal on H-free graphs, allowing us to answer a question of Agrawal, Lima, Lokshtanov, Rzążewski, Saurabh, and Sharma [ACM Trans. Algorithms 2025]. Third, it gives a short and self-contained proof of the known result of Chudnovsky, Hajebi, and Spirkl [Combinatorica 2024] that List r-Coloring on kP₃-free graphs is polynomial-time solvable for every fixed r and k. We also consider two natural distance-d generalizations of Max-Weight Independent Set and List r-Coloring and provide polynomial-time algorithms on kP₃-free graphs for every fixed integers r, k, and d ≥ 6.

Cite as

Esther Galby, Paloma T. Lima, Andrea Munaro, and Amir Nikabadi. Maximum List r-Colorable Induced Subgraphs in kP₃-Free Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 40:1-40:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{galby_et_al:LIPIcs.ESA.2025.40,
  author =	{Galby, Esther and Lima, Paloma T. and Munaro, Andrea and Nikabadi, Amir},
  title =	{{Maximum List r-Colorable Induced Subgraphs in kP₃-Free Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{40:1--40:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.40},
  URN =		{urn:nbn:de:0030-drops-245086},
  doi =		{10.4230/LIPIcs.ESA.2025.40},
  annote =	{Keywords: Hereditary classes, list coloring, odd cycle transversal, independent set}
}
Document
Min-Max Correlation Clustering via Neighborhood Similarity

Authors: Nairen Cao, Steven Roche, and Hsin-Hao Su


Abstract
We present an efficient algorithm for the min-max correlation clustering problem. The input is a complete graph where edges are labeled as either positive (+) or negative (-), and the objective is to find a clustering that minimizes the 𝓁_∞-norm of the disagreement vector over all vertices. We address this problem with an efficient (3 + ε)-approximation algorithm that runs in nearly linear time, Õ(|E^+|), where |E^+| denotes the number of positive edges. This improves upon the previous best-known approximation guarantee of 4 by Heidrich, Irmai, and Andres [Heidrich et al., 2024], whose algorithm runs in O(|V|² + |V| D²) time, where |V| is the number of nodes and D is the maximum degree in the graph (V,E^+). Furthermore, we extend our algorithm to the massively parallel computation (MPC) model and the semi-streaming model. In the MPC model, our algorithm runs on machines with memory sublinear in the number of nodes and takes O(1) rounds. In the streaming model, our algorithm requires only Õ(|V|) space, where |V| is the number of vertices in the graph. Our algorithms are purely combinatorial. They are based on a novel structural observation about the optimal min-max instance, which enables the construction of a (3 + ε)-approximation algorithm using O(|E^+|) neighborhood similarity queries. By leveraging random projection, we further show these queries can be computed in nearly linear time.
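
To make the objective concrete, the following hedged Python sketch (illustrative names, not the paper's code) evaluates the 𝓁_∞-norm of the disagreement vector for a candidate clustering of a complete signed graph.

def minmax_disagreements(n, positive_edges, cluster_of):
    # For each vertex, count its disagreements: positive edges leaving
    # its cluster plus negative (absent-from-E^+) pairs inside it.
    positive = {frozenset(e) for e in positive_edges}
    disagree = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            same = cluster_of[u] == cluster_of[v]
            if same != (frozenset((u, v)) in positive):
                disagree[u] += 1
                disagree[v] += 1
    return max(disagree)  # the l_inf norm being minimized

# A positive triangle {0,1,2} clustered together, vertex 3 alone: cost 0.
print(minmax_disagreements(4, [(0, 1), (1, 2), (0, 2)], [0, 0, 0, 1]))  # 0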

Cite as

Nairen Cao, Steven Roche, and Hsin-Hao Su. Min-Max Correlation Clustering via Neighborhood Similarity. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 41:1-41:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{cao_et_al:LIPIcs.ESA.2025.41,
  author =	{Cao, Nairen and Roche, Steven and Su, Hsin-Hao},
  title =	{{Min-Max Correlation Clustering via Neighborhood Similarity}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{41:1--41:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.41},
  URN =		{urn:nbn:de:0030-drops-245098},
  doi =		{10.4230/LIPIcs.ESA.2025.41},
  annote =	{Keywords: Min Max Correlation Clustering, Approximate algorithms}
}
Document
(Multivariate) k-SUM as Barrier to Succinct Computation

Authors: Geri Gokaj, Marvin Künnemann, Sabine Storandt, and Carina Truschel


Abstract
How does the time complexity of a problem change when the input is given succinctly rather than explicitly? We study this question for several geometric problems defined on a set X of N points in ℤ^d. As succinct representation, we choose a sumset (or Minkowski sum) representation: Instead of receiving X explicitly, we are given sets A,B of n points that define X as A+B = {a+b∣ a ∈ A,b ∈ B}. We investigate the fine-grained complexity of this succinct version for several Õ(N)-time computable geometric primitives. Remarkably, we can tie their complexity tightly to the complexity of corresponding k-SUM problems. Specifically, we introduce as All-ints 3-SUM(n,n,k) the following multivariate, multi-output variant of 3-SUM: given sets A,B of size n and set C of size k, determine for all c ∈ C whether there are a ∈ A and b ∈ B with a+b = c. We obtain the following results: 1) Succinct closest L_∞-pair requires time N^{1-o(1)} under the 3-SUM hypothesis, while succinct furthest L_∞-pair can be solved in time Õ(n). 2) Succinct bichromatic closest L_∞-pair requires time N^{1-o(1)} iff the 4-SUM hypothesis holds. 3) The following problems are fine-grained equivalent to All-ints 3-SUM(n,n,k): succinct skyline computation in 2D with output size k and succinct batched orthogonal range search with k given ranges. This establishes conditionally tight Õ(min{nk, N})-time algorithms for these problems. We obtain further connections with All-ints 3-SUM(n,n,k) for succinctly computing independent sets in unit interval graphs. Thus, (Multivariate) k-SUM problems precisely capture the barrier for enabling sumset-succinct computation for various geometric primitives.
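
A hedged sketch of the All-ints 3-SUM(n,n,k) variant defined above, answered naively with a hash set of pairwise sums in O(n² + k) time; the function name is an illustrative assumption.

def all_ints_3sum(A, B, C):
    # Precompute all pairwise sums of A and B, then answer every c in C.
    sums = {a + b for a in A for b in B}  # O(n^2) time and space
    return {c: c in sums for c in C}

print(all_ints_3sum({1, 2}, {10, 20}, [11, 22, 5]))
# {11: True, 22: True, 5: False}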

Cite as

Geri Gokaj, Marvin Künnemann, Sabine Storandt, and Carina Truschel. (Multivariate) k-SUM as Barrier to Succinct Computation. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 42:1-42:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gokaj_et_al:LIPIcs.ESA.2025.42,
  author =	{Gokaj, Geri and K\"{u}nnemann, Marvin and Storandt, Sabine and Truschel, Carina},
  title =	{{(Multivariate) k-SUM as Barrier to Succinct Computation}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{42:1--42:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.42},
  URN =		{urn:nbn:de:0030-drops-245101},
  doi =		{10.4230/LIPIcs.ESA.2025.42},
  annote =	{Keywords: Fine-grained complexity theory, sumsets, additive combinatorics, succinct inputs, computational geometry}
}
Document
Edge Clique Partition and Cover Beyond Independence

Authors: Fedor V. Fomin, Petr A. Golovach, Danil Sagunov, and Kirill Simonov


Abstract
Covering and partitioning the edges of a graph into cliques are classical problems at the intersection of combinatorial optimization and graph theory, having been studied through a range of algorithmic and complexity-theoretic lenses. Despite the well-known fixed-parameter tractability of these problems when parameterized by the total number of cliques, such a parameterization often fails to be meaningful for sparse graphs. In many real-world instances, on the other hand, the minimum number of cliques in an edge cover or partition can be very close to the size of a maximum independent set α(G). Motivated by this observation, we investigate above-α parameterizations of the edge clique cover and partition problems. Concretely, we introduce and study Edge Clique Cover Above Independent Set (ECC/α) and Edge Clique Partition Above Independent Set (ECP/α), where the goal is to cover or partition all edges of a graph using at most α(G) + k cliques, and k is the parameter. Our main results reveal a distinct complexity landscape for the two variants. We show that ECP/α is fixed-parameter tractable, whereas ECC/α is NP-complete for all k ≥ 2, yet can be solved in polynomial time for k ∈ {0,1}. These findings highlight intriguing differences between the two problems when viewed through the lens of parameterization above a natural lower bound. Finally, we demonstrate that ECC/α becomes fixed-parameter tractable when parameterized by k + ω(G), where ω(G) is the size of a maximum clique of the graph G. This result is particularly relevant for sparse graphs, in which ω is typically small. For H-minor-free graphs, we design a subexponential algorithm with running time f(H)^{√k} ⋅ n^𝒪(1).
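
For readers new to the underlying notion, a hedged verifier (purely definitional, not part of the paper's algorithms) that checks whether a proposed family of cliques is a valid edge clique cover: every listed set must induce a complete subgraph and every edge must be covered.

from itertools import combinations

def is_edge_clique_cover(edges, cliques):
    # Each clique must induce a complete subgraph, and together the
    # cliques must cover every edge of the graph.
    E = {frozenset(e) for e in edges}
    covered = set()
    for Q in cliques:
        for pair in combinations(Q, 2):
            if frozenset(pair) not in E:
                return False  # Q is not a clique
            covered.add(frozenset(pair))
    return covered == E

# A triangle with a pendant edge is covered by two cliques.
print(is_edge_clique_cover([(0, 1), (1, 2), (0, 2), (2, 3)],
                           [[0, 1, 2], [2, 3]]))  # True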

Cite as

Fedor V. Fomin, Petr A. Golovach, Danil Sagunov, and Kirill Simonov. Edge Clique Partition and Cover Beyond Independence. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 43:1-43:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2025.43,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Sagunov, Danil and Simonov, Kirill},
  title =	{{Edge Clique Partition and Cover Beyond Independence}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{43:1--43:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.43},
  URN =		{urn:nbn:de:0030-drops-245113},
  doi =		{10.4230/LIPIcs.ESA.2025.43},
  annote =	{Keywords: edge clique partition, edge clique cover, independence number, parameterized complexity, above guarantee}
}
Document
Near-Optimal Vertex Fault-Tolerant Labels for Steiner Connectivity

Authors: Koustav Bhanja and Asaf Petruschka


Abstract
We present a compact labeling scheme for determining whether a designated set of terminals in a graph remains connected after any f (or fewer) vertex failures occur. An f-FT Steiner connectivity labeling scheme for an n-vertex graph G = (V,E) with terminal set U ⊆ V provides labels to the vertices of G, such that given only the labels of any subset F ⊆ V with |F| ≤ f, one can determine if U remains connected in G-F. The main complexity measure is the maximum label length. The special case U = V of global connectivity has been recently studied by Jiang, Parter, and Petruschka [Yonggang Jiang et al., 2025], who provided labels of n^{1-1/f} ⋅ poly(f,log n) bits. This is near-optimal (up to poly(f,log n) factors) by a lower bound of Long, Pettie and Saranurak [Yaowei Long et al., 2025]. Our scheme achieves labels of |U|^{1-1/f} ⋅ poly(f, log n) for general U ⊆ V, which is near-optimal for any given size |U| of the terminal set. To handle terminal sets, our approach differs from [Yonggang Jiang et al., 2025]. We use a well-structured Steiner tree for U produced by a decomposition theorem of Duan and Pettie [Ran Duan and Seth Pettie, 2020], and bypass the need for Nagamochi-Ibaraki sparsification [Hiroshi Nagamochi and Toshihide Ibaraki, 1992].
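
For orientation, a hedged brute-force version of the query that the labeling scheme answers: given the failure set F, test by BFS whether the terminals U stay connected in G-F. It assumes F avoids U and, unlike the scheme above, it needs the whole graph rather than only the labels of F.

from collections import deque

def terminals_connected(adj, U, F):
    # BFS from one terminal in G - F; U survives iff all of U is reached.
    F = set(F)
    start = next(iter(U))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in F and v not in seen:
                seen.add(v)
                queue.append(v)
    return set(U) <= seen

# Path 0-1-2-3 with terminals {0, 3}: failing vertex 1 disconnects them.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(terminals_connected(adj, {0, 3}, {1}))    # False
print(terminals_connected(adj, {0, 3}, set()))  # True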

Cite as

Koustav Bhanja and Asaf Petruschka. Near-Optimal Vertex Fault-Tolerant Labels for Steiner Connectivity. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 44:1-44:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bhanja_et_al:LIPIcs.ESA.2025.44,
  author =	{Bhanja, Koustav and Petruschka, Asaf},
  title =	{{Near-Optimal Vertex Fault-Tolerant Labels for Steiner Connectivity}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{44:1--44:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.44},
  URN =		{urn:nbn:de:0030-drops-245123},
  doi =		{10.4230/LIPIcs.ESA.2025.44},
  annote =	{Keywords: Fault Tolerance, Labeling Schemes, Steiner Connectivity}
}
Document
Bounded Weighted Edit Distance: Dynamic Algorithms and Matching Lower Bounds

Authors: Itai Boneh, Egor Gorbachev, and Tomasz Kociumaka


Abstract
The edit distance ed(X,Y) of two strings X,Y ∈ Σ^* is the minimum number of character edits (insertions, deletions, and substitutions) needed to transform X into Y. Its weighted counterpart ed^w(X,Y) minimizes the total cost of edits, where the costs of individual edits, depending on the edit type and the characters involved, are specified using a function w, normalized so that each edit costs at least one. The textbook dynamic-programming procedure, given strings X,Y ∈ Σ^{≤ n} and oracle access to w, computes ed^w(X,Y) in 𝒪(n²) time. Nevertheless, one can achieve better running times if the computed distance, denoted k, is small: 𝒪(n+k²) for unit weights [Landau and Vishkin; JCSS'88] and Õ(n+√{nk³}) for arbitrary weights [Cassis, Kociumaka, Wellnitz; FOCS'23]. In this paper, we study the dynamic version of the weighted edit distance problem, where the goal is to maintain ed^w(X,Y) for strings X,Y ∈ Σ^{≤ n} that change over time, with each update specified as an edit in X or Y. Very recently, Gorbachev and Kociumaka [STOC'25] showed that the unweighted distance ed(X,Y) can be maintained in Õ(k) time per update after Õ(n+k²)-time preprocessing; here, k denotes the current value of ed(X,Y). Their algorithm generalizes to small integer weights, but the underlying approach is incompatible with large weights. Our main result is a dynamic algorithm that maintains ed^w(X,Y) in Õ(k^{3-γ}) time per update after Õ(nk^γ)-time preprocessing. Here, γ ∈ [0,1] is a real trade-off parameter and k ≥ 1 is an integer threshold fixed at preprocessing time, with ∞ returned whenever ed^w(X,Y) > k. We complement our algorithm with conditional lower bounds showing fine-grained optimality of our trade-off for γ ∈ [0.5,1) and justifying our choice to fix k. We also generalize our solution to a much more robust setting while preserving the fine-grained optimal trade-off. Our full algorithm maintains X ∈ Σ^{≤ n} subject not only to character edits but also substring deletions and copy-pastes, each supported in Õ(k²) time. Instead of dynamically maintaining Y, it answers queries that, given any string Y specified through a sequence of 𝒪(k) arbitrary edits transforming X into Y, compute ed^w(X,Y) in Õ(k^{3-γ}) time or report that ed^w(X,Y) > k.
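
The textbook O(n²) dynamic program mentioned above, as a hedged Python sketch; the three weight oracles stand in for the normalized cost function w and are assumptions made for illustration.

def weighted_edit_distance(X, Y, w_ins, w_del, w_sub):
    # Classic DP table: D[i][j] = ed^w of the prefixes X[:i], Y[:j].
    m, n = len(X), len(Y)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + w_del(X[i - 1])
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + w_ins(Y[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if X[i - 1] == Y[j - 1] else w_sub(X[i - 1], Y[j - 1])
            D[i][j] = min(D[i - 1][j] + w_del(X[i - 1]),
                          D[i][j - 1] + w_ins(Y[j - 1]),
                          D[i - 1][j - 1] + sub)
    return D[m][n]

# Unit weights recover the unweighted edit distance.
print(weighted_edit_distance("kitten", "sitting",
                             lambda c: 1, lambda c: 1, lambda a, b: 1))  # 3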

Cite as

Itai Boneh, Egor Gorbachev, and Tomasz Kociumaka. Bounded Weighted Edit Distance: Dynamic Algorithms and Matching Lower Bounds. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{boneh_et_al:LIPIcs.ESA.2025.45,
  author =	{Boneh, Itai and Gorbachev, Egor and Kociumaka, Tomasz},
  title =	{{Bounded Weighted Edit Distance: Dynamic Algorithms and Matching Lower Bounds}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.45},
  URN =		{urn:nbn:de:0030-drops-245139},
  doi =		{10.4230/LIPIcs.ESA.2025.45},
  annote =	{Keywords: Edit distance, dynamic algorithms, conditional lower bounds}
}
Document
Max-Distance Sparsification for Diversification and Clustering

Authors: Soh Kumabe


Abstract
Let 𝒟 be a set family that is the solution domain of some combinatorial problem. The max-min diversification problem on 𝒟 is the problem of selecting k sets from 𝒟 such that the Hamming distance between any two selected sets is at least d. FPT algorithms parameterized by k+𝓁, where 𝓁 = max_{D ∈ 𝒟}|D|, and k+d have been actively studied recently for several specific domains. This paper provides unified algorithmic frameworks to solve this problem. Specifically, for each parameterization k+𝓁 and k+d, we provide an FPT oracle algorithm for the max-min diversification problem using oracles related to 𝒟. We then demonstrate that our frameworks provide the first FPT algorithms on several new domains 𝒟, including the domain of t-linear matroid intersection, almost 2-SAT, minimum edge s,t-flows, vertex sets of s,t-mincut, vertex sets of edge bipartization, and Steiner trees. We also demonstrate that our frameworks generalize most of the existing domain-specific tractability results. Our main technical breakthrough is introducing the notion of a max-distance sparsifier of 𝒟, a domain on which the max-min diversification problem is equivalent to the same problem on the original domain 𝒟. The core of our framework is to design FPT oracle algorithms that construct a constant-size max-distance sparsifier of 𝒟. Using max-distance sparsifiers, we provide FPT algorithms for the max-min and max-sum diversification problems on 𝒟, as well as the k-center and k-sum-of-radii clustering problems on 𝒟, which are also natural problems in the context of diversification and are of independent interest.
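
A small hedged helper showing the max-min objective itself: the smallest pairwise Hamming distance (symmetric-difference size) over the selected sets. The framework above is about finding a good selection inside a combinatorial domain 𝒟; this only pins down what is being maximized.

from itertools import combinations

def min_pairwise_hamming(selected):
    # Hamming distance between two sets = size of their symmetric difference.
    return min(len(a ^ b) for a, b in combinations(selected, 2))

print(min_pairwise_hamming([{1, 2}, {2, 3}, {1, 4}]))  # 2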

Cite as

Soh Kumabe. Max-Distance Sparsification for Diversification and Clustering. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kumabe:LIPIcs.ESA.2025.46,
  author =	{Kumabe, Soh},
  title =	{{Max-Distance Sparsification for Diversification and Clustering}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.46},
  URN =		{urn:nbn:de:0030-drops-245146},
  doi =		{10.4230/LIPIcs.ESA.2025.46},
  annote =	{Keywords: Fixed-Parameter Tractability, Diversification, Clustering}
}
Document
Fast and Lightweight Distributed Suffix Array Construction

Authors: Manuel Haag, Florian Kurpicz, Peter Sanders, and Matthias Schimek


Abstract
The suffix array contains the lexicographical order of all suffixes of a text. It is one of the most well-studied text indices with applications in bioinformatics, compression, and pattern matching. The main bottleneck of distributed-memory suffix array construction algorithms is their memory requirements. Even careful implementations require 30×-60× the input size as working memory. We present a scalable and lightweight distributed-memory adaptation of the difference cover (DCX) suffix array construction algorithm. Our approach relies on novel bucketing and random chunk redistribution techniques which reduce our memory requirement to 20×-26× the input size for medium-sized inputs and to 14×-15× for large-sized inputs. Regarding running time, we achieve speedups of up to 5× over current state-of-the-art distributed suffix array construction algorithms.
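
For readers new to the object being built, a naive single-machine suffix array construction by sorting suffixes; it is nothing like the distributed DCX algorithm of the paper and is included purely as a definitional sketch.

def suffix_array(text):
    # Sort all suffix start positions by the suffixes they begin.
    return sorted(range(len(text)), key=lambda i: text[i:])

print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]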

Cite as

Manuel Haag, Florian Kurpicz, Peter Sanders, and Matthias Schimek. Fast and Lightweight Distributed Suffix Array Construction. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 47:1-47:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{haag_et_al:LIPIcs.ESA.2025.47,
  author =	{Haag, Manuel and Kurpicz, Florian and Sanders, Peter and Schimek, Matthias},
  title =	{{Fast and Lightweight Distributed Suffix Array Construction}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{47:1--47:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.47},
  URN =		{urn:nbn:de:0030-drops-245154},
  doi =		{10.4230/LIPIcs.ESA.2025.47},
  annote =	{Keywords: Distributed Computing, Suffix Array Construction}
}
Document
The Support of Bin Packing Is Exponential

Authors: Klaus Jansen, Lis Pirotton, and Malte Tutas


Abstract
Consider the classical Bin Packing problem with d different item sizes s_i and amounts of items a_i. The support of a Bin Packing solution is the number of differently filled bins. In this work, we show a lower bound of 2^Ω(d) on the support for this problem. Our lower bound matches the upper bound of 2^d given by Eisenbrand and Shmonin [Oper. Research Letters '06] up to a constant factor. This result has direct implications for the time complexity of several Bin Packing algorithms, such as those of Goemans and Rothvoss [SODA '14], Jansen and Klein [SODA '17] and Jansen and Solis-Oba [IPCO '10]. To achieve our main result, we develop a technique to aggregate equality-constrained ILPs with many constraints into an equivalent ILP with one constraint. Our technique contrasts with existing aggregation techniques in that we manage to integrate upper bounds on variables into the resulting constraint. We believe this technique can be useful for solving general ILPs or the d-dimensional knapsack problem.
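
A small hedged sketch of the notion of support: count the differently filled bins of a solution, treating each bin as a multiset of item sizes. The lower bound above says this count can be forced to be 2^Ω(d).

from collections import Counter

def support(bins):
    # Canonicalize each bin as a sorted tuple; support = #distinct bin types.
    return len(Counter(tuple(sorted(b)) for b in bins))

# Two identically filled bins and one different bin: support 2.
print(support([[0.5, 0.5], [0.5, 0.5], [0.3, 0.7]]))  # 2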

Cite as

Klaus Jansen, Lis Pirotton, and Malte Tutas. The Support of Bin Packing Is Exponential. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 48:1-48:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ESA.2025.48,
  author =	{Jansen, Klaus and Pirotton, Lis and Tutas, Malte},
  title =	{{The Support of Bin Packing Is Exponential}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{48:1--48:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.48},
  URN =		{urn:nbn:de:0030-drops-245167},
  doi =		{10.4230/LIPIcs.ESA.2025.48},
  annote =	{Keywords: Bin Packing, Integer Programming, Support}
}
Document
Efficient Top-Down Updates in AVL Trees

Authors: Vincent Jugé


Abstract
Since AVL trees were invented in 1962, two major questions about their rebalancing operations, which have found positive answers in other balanced binary search trees, have remained open: can these operations be performed top-down (with a fixed look-ahead), and can they use an amortised constant number of write operations per update? We propose an algorithm that answers both questions positively.
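
As background, a hedged sketch of the constant-time local step that AVL rebalancing is built from: a single rotation with height updates. The node layout is an assumption, and this is not the paper's top-down algorithm.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.height = 1 + max(height(left), height(right))

def height(node):
    return node.height if node else 0

def rotate_right(y):
    # Lift y's left child x above y and hand x's right subtree to y;
    # each rotation costs O(1) writes, the resource the paper amortises.
    x = y.left
    y.left, x.right = x.right, y
    y.height = 1 + max(height(y.left), height(y.right))
    x.height = 1 + max(height(x.left), height(x.right))
    return x  # new subtree root

# Rebalance the left-leaning chain 3-2-1.
root = rotate_right(Node(3, left=Node(2, left=Node(1))))
print(root.key, root.left.key, root.right.key)  # 2 1 3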

Cite as

Vincent Jugé. Efficient Top-Down Updates in AVL Trees. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 49:1-49:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{juge:LIPIcs.ESA.2025.49,
  author =	{Jug\'{e}, Vincent},
  title =	{{Efficient Top-Down Updates in AVL Trees}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{49:1--49:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.49},
  URN =		{urn:nbn:de:0030-drops-245172},
  doi =		{10.4230/LIPIcs.ESA.2025.49},
  annote =	{Keywords: AVL trees, data structures, amortised complexity}
}
Document
Online Hitting Sets for Disks of Bounded Radii

Authors: Minati De, Satyam Singh, and Csaba D. Tóth


Abstract
We present algorithms for the online minimum hitting set problem in geometric range spaces: Given a set P of n points in the plane and a sequence of geometric objects that arrive one-by-one, we need to maintain a hitting set at all times. For disks of radii in the interval [1,M], we present an O(log M log n)-competitive algorithm. This result generalizes from disks to positive homothets of any convex body in the plane with scaling factors in the interval [1,M]. As a main technical tool, we reduce the problem to the online hitting set problem for a finite subset of integer points and bottomless rectangles. Specifically, for a given N > 1, we present an O(log N)-competitive algorithm for the variant where P is a subset of an N× N section of the integer lattice, and the geometric objects are bottomless rectangles.
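
A hedged skeleton of the online setting: objects arrive one by one and the hitting set may only grow. The point-selection rule below (take an arbitrary point of an unhit object) is a naive placeholder, not the O(log M log n)-competitive rule of the paper, and representing each object by the subset of P it contains is also a simplification.

def online_hitting_set(object_stream):
    # Maintain a monotonically growing hitting set H.
    H = set()
    for obj in object_stream:
        if H.isdisjoint(obj):  # the arriving object is not yet hit
            H.add(min(obj))    # naive, deterministic choice
    return H

stream = [{(0, 0), (1, 0)}, {(1, 0)}, {(2, 0)}]
print(sorted(online_hitting_set(stream)))  # [(0, 0), (1, 0), (2, 0)]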

Cite as

Minati De, Satyam Singh, and Csaba D. Tóth. Online Hitting Sets for Disks of Bounded Radii. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 50:1-50:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{de_et_al:LIPIcs.ESA.2025.50,
  author =	{De, Minati and Singh, Satyam and T\'{o}th, Csaba D.},
  title =	{{Online Hitting Sets for Disks of Bounded Radii}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{50:1--50:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.50},
  URN =		{urn:nbn:de:0030-drops-245181},
  doi =		{10.4230/LIPIcs.ESA.2025.50},
  annote =	{Keywords: Geometric Hitting Set, Online Algorithm, Homothets, Disks}
}
Document
Quantum Approximate k-Minimum Finding

Authors: Minbo Gao, Zhengfeng Ji, and Qisheng Wang


Abstract
Quantum k-minimum finding is a fundamental subroutine with numerous applications in combinatorial problems and machine learning. Previous approaches typically assume oracle access to exact function values, making it challenging to integrate this subroutine with other quantum algorithms. In this paper, we propose an (almost) optimal quantum k-minimum finding algorithm that works with approximate values for all k ≥ 1, extending a result of van Apeldoorn, Gilyén, Gribling, and de Wolf (FOCS 2017) for k = 1. As practical applications, we present efficient quantum algorithms for identifying the k smallest expectation values among multiple observables and for determining the k lowest ground state energies of a Hamiltonian with a known eigenbasis.

Cite as

Minbo Gao, Zhengfeng Ji, and Qisheng Wang. Quantum Approximate k-Minimum Finding. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 51:1-51:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gao_et_al:LIPIcs.ESA.2025.51,
  author =	{Gao, Minbo and Ji, Zhengfeng and Wang, Qisheng},
  title =	{{Quantum Approximate k-Minimum Finding}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{51:1--51:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.51},
  URN =		{urn:nbn:de:0030-drops-245192},
  doi =		{10.4230/LIPIcs.ESA.2025.51},
  annote =	{Keywords: Quantum Computing, Quantum Algorithms, Quantum Minimum Finding}
}
Document
Beating Competitive Ratio 4 for Graphic Matroid Secretary

Authors: Kiarash Banihashem, MohammadTaghi Hajiaghayi, Dariusz R. Kowalski, Piotr Krysta, Danny Mittal, and Jan Olkowski


Abstract
One of the classic problems in online decision-making is the secretary problem, where the goal is to hire the best secretary out of n rankable applicants or, in a natural extension, to maximize the probability of selecting the largest number from a sequence arriving in random order. Many works have considered generalizations of this problem where one can accept multiple values subject to a combinatorial constraint. The seminal work of Babaioff, Immorlica, Kempe, and Kleinberg (SODA'07, JACM'18) proposed the matroid secretary conjecture, suggesting that there exists an O(1)-competitive algorithm for the matroid constraint, and many works since have attempted to obtain algorithms for both general matroids and specific classes of matroids. The ultimate goal of these results is to obtain an e-competitive algorithm, and the strong matroid secretary conjecture states that this is possible for general matroids. One of the most important classes of matroids is the graphic matroid, where a set of edges in a graph is deemed independent if it contains no cycle. Given the rich combinatorial structure of graphs, obtaining algorithms for these matroids is often seen as a good first step towards solving the problem for general matroids. For matroid secretary, Babaioff et al. (SODA'07, JACM'18) first studied the graphic matroid case and obtained a 16-competitive algorithm. Subsequent works have improved the competitive ratio, most recently to 4 by Soto, Turkieltaub, and Verdugo (SODA'18). In this paper, we break the 4-competitive barrier for the problem, obtaining a new algorithm with a competitive ratio of 3.95. For the special case of simple graphs (i.e., graphs that do not contain parallel edges) we further improve this to 3.77. Intuitively, solving the problem for simple graphs is easier as they do not contain cycles of length two. A natural question that arises is whether we can obtain a ratio arbitrarily close to e by assuming the graph has a large enough girth. We answer this question affirmatively, proving that one can obtain a competitive ratio arbitrarily close to e even for constant values of girth, providing further evidence for the strong matroid secretary conjecture. We further show that this bound is tight: for any constant g, one cannot obtain a competitive ratio better than e even if we assume that the input graph has girth at least g. To our knowledge, such a bound was not previously known even for simple graphs.
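
For context, a hedged simulation of the classical single-choice secretary rule behind the e-competitiveness benchmark: observe roughly n/e values, then accept the first one beating all of them; the success probability tends to 1/e. This illustrates the benchmark only, not the matroid algorithms of the paper.

import math, random

def secretary_trial(n):
    # One run of the 1/e rule on a random arrival order of 0..n-1.
    values = list(range(n))
    random.shuffle(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=-1)
    for v in values[cutoff:]:
        if v > best_seen:
            return v == n - 1  # accepted; did we pick the maximum?
    return False

trials = 100_000
print(sum(secretary_trial(50) for _ in range(trials)) / trials)  # ≈ 0.37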

Cite as

Kiarash Banihashem, MohammadTaghi Hajiaghayi, Dariusz R. Kowalski, Piotr Krysta, Danny Mittal, and Jan Olkowski. Beating Competitive Ratio 4 for Graphic Matroid Secretary. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 52:1-52:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{banihashem_et_al:LIPIcs.ESA.2025.52,
  author =	{Banihashem, Kiarash and Hajiaghayi, MohammadTaghi and Kowalski, Dariusz R. and Krysta, Piotr and Mittal, Danny and Olkowski, Jan},
  title =	{{Beating Competitive Ratio 4 for Graphic Matroid Secretary}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{52:1--52:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.52},
  URN =		{urn:nbn:de:0030-drops-245205},
  doi =		{10.4230/LIPIcs.ESA.2025.52},
  annote =	{Keywords: online algorithms, graphic matroids, secretary problem}
}
Document
When Is String Reconstruction Using de Bruijn Graphs Hard?

Authors: Ben Bals, Sebastiaan van Krieken, Solon P. Pissis, Leen Stougie, and Hilde Verbeek


Abstract
The reduction of the fragment assembly problem to (variations of) the classical Eulerian trail problem [Pevzner et al., PNAS 2001] has led to remarkable progress in genome assembly. This reduction employs the notion of de Bruijn graph G = (V,E) of order k over an alphabet Σ. A single Eulerian trail in G represents a candidate genome reconstruction. Bernardini et al. have also introduced a complementary idea in data privacy [ALENEX 2020], based on z-anonymity. Let S be a private string that we would like to release, preventing, however, its full reconstruction. For a privacy threshold z > 0, we compute the largest k for which there exist at least z Eulerian trails in the order-k de Bruijn graph of S, and release a string S' obtained via a random Eulerian trail. The pressing question is: How hard is it to reconstruct a best string from a de Bruijn graph given a function that models domain knowledge? Such a function maps every length-k string to an interval of positions where it may occur in the reconstructed string. By the above reduction to de Bruijn graphs, the latter function translates into a function c mapping every edge to an interval where it may occur in an Eulerian trail. This gives rise to the following basic problem on graphs: Given an instance (G,c), can we efficiently compute an Eulerian trail respecting c? Hannenhalli et al. [CABIOS 1996] formalized this problem and showed that it is NP-complete. Ben-Dor et al. [J. Comput. Biol. 2002] showed that it is NP-complete, even on de Bruijn graphs with |Σ| = 4. In this work, we settle the lower-bound side of this problem by showing that finding a c-respecting Eulerian trail in de Bruijn graphs over alphabets of size 2 is NP-complete. We then shift our focus to parameterization, aiming to capture the quality of our domain knowledge in the complexity. Ben-Dor et al. developed an algorithm to solve the problem on de Bruijn graphs in 𝒪(m⋅w^{1.5} 4^w) time, where m = |E| and w is the maximum interval length over all edges in E. Bumpus and Meeks [Algorithmica 2023] later rediscovered the same algorithm on temporal graphs, which highlights the relevance of this problem in other contexts. Our central contribution is showing how combinatorial insights lead to exponential improvements over the state-of-the-art algorithm. In particular, for the important class of de Bruijn graphs, we develop an algorithm parameterized by w(log w + 1)/(k-1): for a de Bruijn graph of order k, it runs in 𝒪(mw⋅2^{w(log(w)+1)/(k-1)}) time. Our result improves on the state of the art by roughly an exponent of (log(w)+1)/(k-1). The existing algorithms have a natural interpretation for string reconstruction: when, for each length-k string, we know a small range of positions it must lie in, string reconstruction can be solved in linear time. Our improved algorithm shows that it is enough when the range of positions is small relative to k. We then generalize both the existing and our novel FPT algorithm by allowing the cost at every position of an interval to vary. In this optimization version, our hardness result translates into inapproximability and the FPT algorithms work with a slight extension. Surprisingly, even in this more general setting, we extend the FPT algorithms to count and enumerate the min-cost Eulerian trails. The counting result has direct applications in the data privacy framework of Bernardini et al.
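
A hedged sketch of the unconstrained reduction described above: build the order-k de Bruijn graph of a string and read off a candidate reconstruction as an Eulerian trail (via Hierholzer's algorithm). Without the interval function c this version is easy; the paper's hardness and FPT results concern the c-respecting case.

from collections import defaultdict

def debruijn_edges(S, k):
    # One edge per length-k substring, from its (k-1)-prefix to its suffix.
    adj = defaultdict(list)
    for i in range(len(S) - k + 1):
        w = S[i:i + k]
        adj[w[:-1]].append(w[1:])
    return adj

def eulerian_trail(adj, start):
    # Hierholzer's algorithm; consumes the edge lists of adj.
    stack, trail = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            stack.append(adj[u].pop())
        else:
            trail.append(stack.pop())
    return trail[::-1]

S, k = "ababc", 3
nodes = eulerian_trail(debruijn_edges(S, k), S[:k - 1])
print(nodes[0] + "".join(v[-1] for v in nodes[1:]))  # ababc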

Cite as

Ben Bals, Sebastiaan van Krieken, Solon P. Pissis, Leen Stougie, and Hilde Verbeek. When Is String Reconstruction Using de Bruijn Graphs Hard?. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 53:1-53:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bals_et_al:LIPIcs.ESA.2025.53,
  author =	{Bals, Ben and van Krieken, Sebastiaan and Pissis, Solon P. and Stougie, Leen and Verbeek, Hilde},
  title =	{{When Is String Reconstruction Using de Bruijn Graphs Hard?}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{53:1--53:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.53},
  URN =		{urn:nbn:de:0030-drops-245215},
  doi =		{10.4230/LIPIcs.ESA.2025.53},
  annote =	{Keywords: string algorithm, graph algorithm, de Bruijn graph, Eulerian trail}
}
Document
Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems

Authors: Bingbing Hu and Adam Polak


Abstract
Most of the known tight lower bounds for dynamic problems are based on the Online Boolean Matrix-Vector Multiplication (OMv) Hypothesis, which is not as well studied and understood as some more popular hypotheses in fine-grained complexity. It would be desirable to base hardness of dynamic problems on a more believable hypothesis. We propose analogues of the OMv Hypothesis for variants of matrix multiplication that are known to be harder than Boolean product in the offline setting, namely: equality, dominance, min-witness, min-max, and bounded monotone min-plus products. These hypotheses are a priori weaker assumptions than the standard (Boolean) OMv Hypothesis and yet we show that they are actually equivalent to it. This establishes the first such fine-grained equivalence class for dynamic problems.
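
To fix the setting, a hedged sketch of the naive OMv algorithm: the Boolean matrix is given up front, vectors arrive online, and each product must be output before the next vector is revealed. The n queries then cost O(n³) in total, and the hypothesis asserts this cannot be improved to O(n^{3-ε}).

def omv_naive(M, vector_stream):
    # Answer each Boolean matrix-vector product in O(n^2), online.
    n = len(M)
    for v in vector_stream:
        yield [any(M[i][j] and v[j] for j in range(n)) for i in range(n)]

M = [[1, 0], [1, 1]]
for answer in omv_naive(M, [[0, 1], [1, 0]]):
    print(answer)
# [False, True]
# [True, True]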

Cite as

Bingbing Hu and Adam Polak. Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 54:1-54:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{hu_et_al:LIPIcs.ESA.2025.54,
  author =	{Hu, Bingbing and Polak, Adam},
  title =	{{Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{54:1--54:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.54},
  URN =		{urn:nbn:de:0030-drops-245228},
  doi =		{10.4230/LIPIcs.ESA.2025.54},
  annote =	{Keywords: Fine-grained complexity, OMv hypothesis, reductions, equivalence class}
}
Document
Safe Sequences via Dominators in DAGs for Path-Covering Problems

Authors: Francisco Sena, Romeo Rizzi, and Alexandru I. Tomescu


Abstract
A path-covering problem on a directed acyclic graph (DAG) requires finding a set of source-to-sink paths that cover all the nodes, all the arcs, or subsets thereof, and that are additionally optimal with respect to some objective function. In this paper we study safe sequences of nodes or arcs, namely sequences that appear in some path of every path cover of a DAG. We show that safe sequences admit a simple characterization via cutnodes. Moreover, we establish a connection between maximal safe sequences and leaf-to-root paths in the source- and sink-dominator trees of the DAG, which may be of independent interest in the extensive literature on dominators. With dominator trees, safe sequences admit an O(n)-size representation and an output-sensitive enumeration algorithm running in time O(m + o), where n and m are the number of nodes and arcs, respectively, and o is the total length of the maximal safe sequences. We then apply maximal safe sequences to simplify Integer Linear Programs (ILPs) for two path-covering problems, LeastSquares and MinPathError, which are at the core of RNA transcript assembly problems from bioinformatics. On various datasets, maximal safe sequences can be computed in under 0.1 seconds per graph, on average, and ILP solvers whose search space is reduced in this manner exhibit significant speed-ups. For example, on graphs with a large width, average speed-ups are in the range 50-250× for MinPathError and in the range 80-350× for LeastSquares. Optimizing ILPs using safe sequences can thus become a fast building block of practical RNA transcript assembly tools and, more generally, of path-covering problems.

Cite as

Francisco Sena, Romeo Rizzi, and Alexandru I. Tomescu. Safe Sequences via Dominators in DAGs for Path-Covering Problems. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 55:1-55:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{sena_et_al:LIPIcs.ESA.2025.55,
  author =	{Sena, Francisco and Rizzi, Romeo and Tomescu, Alexandru I.},
  title =	{{Safe Sequences via Dominators in DAGs for Path-Covering Problems}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{55:1--55:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.55},
  URN =		{urn:nbn:de:0030-drops-245230},
  doi =		{10.4230/LIPIcs.ESA.2025.55},
  annote =	{Keywords: directed acyclic graph, path cover, dominator tree, integer linear programming, least squares, minimum path error}
}
Document
PLS-Completeness of String Permutations

Authors: Dominik Scheder and Johannes Tantow


Abstract
Bitstrings can be permuted via permutations and compared via the lexicographic order. In this paper we study the complexity of finding a lexicographic minimum of a bitstring under a given set of permutations. As finding a global optimum is known to be NP-complete [László Babai and Eugene M. Luks, 1983], we study local optima via the class PLS [David S. Johnson et al., 1988] and show PLS-hardness. Additionally, we show that even for a single permutation the global optimization problem is NP-complete, and we give a formula that has this permutation among its symmetries. This answers an open question inspired by Kołodziejczyk and Thapen [Leszek Aleksander Kolodziejczyk and Neil Thapen, 2024] and stated at the SAT and Interactions seminar in Dagstuhl.
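
A hedged sketch of the local-search dynamics the PLS question concerns: repeatedly apply any given permutation that lexicographically decreases the bitstring, until none does; the resulting string is a local minimum. The PLS-hardness above is about the complexity of finding such a local minimum, while the individual step is simple.

def local_minimum(s, permutations):
    # Greedy descent: apply any permutation that yields a lexicographically
    # smaller string; terminates since each step strictly decreases s.
    improved = True
    while improved:
        improved = False
        for perm in permutations:
            t = "".join(s[p] for p in perm)
            if t < s:
                s, improved = t, True
                break
    return s

# A transposition of the first two positions and a cyclic shift.
print(local_minimum("110", [(1, 0, 2), (2, 0, 1)]))  # 011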

Cite as

Dominik Scheder and Johannes Tantow. PLS-Completeness of String Permutations. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{scheder_et_al:LIPIcs.ESA.2025.56,
  author =	{Scheder, Dominik and Tantow, Johannes},
  title =	{{PLS-Completeness of String Permutations}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.56},
  URN =		{urn:nbn:de:0030-drops-245245},
  doi =		{10.4230/LIPIcs.ESA.2025.56},
  annote =	{Keywords: PLS, total search problems, local search, permutation groups, symmetry}
}
Document
Improved Hardness-Of-Approximation for Token-Swapping

Authors: Sam Hiken and Nicole Wein


Abstract
We study the token swapping problem, in which we are given a graph with an initial assignment of one distinct token to each vertex, and a final desired assignment (again with one token per vertex). The goal is to find the minimum length sequence of swaps of adjacent tokens required to get from the initial to the final assignment. The token swapping problem is known to be NP-complete. It is also known to have a polynomial-time 4-approximation algorithm. From the hardness-of-approximation side, it is known to be NP-hard to approximate with a ratio better than 1001/1000. Our main result is an improvement of the approximation ratio of the lower bound: We show that it is NP-hard to approximate with ratio better than 14/13. We then turn our attention to the 0/1-weighted version, in which every token has a weight of either 0 or 1, and the cost of a swap is the sum of the weights of the two participating tokens. Unlike standard token swapping, no constant-factor approximation is known for this version, and we provide an explanation. We prove that 0/1-weighted token swapping is NP-hard to approximate with ratio better than (1-ε) ln(n) for any constant ε > 0. Lastly, we prove two barrier results for the standard (unweighted) token swapping problem. We show that one cannot beat the current best known approximation ratio of 4 using a large class of algorithms which includes all known algorithms, nor can one beat it using a common analysis framework.
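
A hedged brute-force baseline that makes the objective concrete: breadth-first search over token placements finds an exact minimum swap sequence, but only for tiny instances, consistent with the NP-completeness above; the input format is an assumption for illustration.

from collections import deque

def min_token_swaps(adj, start, target):
    # BFS in the graph of token placements; one edge per adjacent swap.
    start, target = tuple(start), tuple(target)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cfg = queue.popleft()
        if cfg == target:
            return dist[cfg]
        for u in adj:
            for v in adj[u]:
                nxt = list(cfg)
                nxt[u], nxt[v] = nxt[v], nxt[u]
                nxt = tuple(nxt)
                if nxt not in dist:
                    dist[nxt] = dist[cfg] + 1
                    queue.append(nxt)
    return None

# Reversing the tokens on a path 0-1-2 needs 3 swaps.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(min_token_swaps(adj, [2, 1, 0], [0, 1, 2]))  # 3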

Cite as

Sam Hiken and Nicole Wein. Improved Hardness-Of-Approximation for Token-Swapping. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 57:1-57:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{hiken_et_al:LIPIcs.ESA.2025.57,
  author =	{Hiken, Sam and Wein, Nicole},
  title =	{{Improved Hardness-Of-Approximation for Token-Swapping}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{57:1--57:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.57},
  URN =		{urn:nbn:de:0030-drops-245251},
  doi =		{10.4230/LIPIcs.ESA.2025.57},
  annote =	{Keywords: algorithms, token-swapping, hardness-of-approximation, lower-bounds}
}
Document
Streaming Diameter of High-Dimensional Points

Authors: Magnús M. Halldórsson, Nicolaos Matsakis, and Pavel Veselý


Abstract
We improve the space bounds for streaming approximation not only of Diameter but also of Farthest Neighbor queries, Minimum Enclosing Ball and its Coreset, in high-dimensional Euclidean spaces. In particular, our deterministic streaming algorithms store 𝒪(ε^{-2}log(1/ε)) points. This improves by a factor of ε^{-1} the previous space bound of Agarwal and Sharathkumar (SODA 2010), while retaining the state-of-the-art approximation guarantees, such as √2+ε for Diameter or Farthest Neighbor queries, and also offering a simpler and more complete argument. Moreover, we show that storing Ω(ε^{-1}) points is necessary for a streaming (√2+ε)-approximation of Farthest Pair and Farthest Neighbor queries.
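
For contrast with the bounds above, a hedged one-pass baseline: storing a single anchor point and the largest distance to it yields a 2-approximation of the diameter by the triangle inequality, in any dimension. The algorithms in the paper store 𝒪(ε^{-2}log(1/ε)) points to reach the much stronger √2+ε guarantee.

import math

def stream_diameter_2approx(points):
    # Keep the first point; the max distance to it lies in [D/2, D].
    iterator = iter(points)
    anchor = next(iterator)
    return max((math.dist(anchor, p) for p in iterator), default=0.0)

print(stream_diameter_2approx([(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]))
# 1.0, while the true diameter is 2.0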

Cite as

Magnús M. Halldórsson, Nicolaos Matsakis, and Pavel Veselý. Streaming Diameter of High-Dimensional Points. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 58:1-58:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{halldorsson_et_al:LIPIcs.ESA.2025.58,
  author =	{Halld\'{o}rsson, Magn\'{u}s M. and Matsakis, Nicolaos and Vesel\'{y}, Pavel},
  title =	{{Streaming Diameter of High-Dimensional Points}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{58:1--58:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.58},
  URN =		{urn:nbn:de:0030-drops-245263},
  doi =		{10.4230/LIPIcs.ESA.2025.58},
  annote =	{Keywords: streaming algorithm, farthest pair, diameter, minimum enclosing ball, coreset}
}
Document
Improved Dominance Filtering for Unions and Minkowski Sums of Pareto Sets

Authors: Konstantinos Karathanasis, Spyros Kontogiannis, and Christos Zaroliagis


Abstract
A key task in multi-objective optimization is to compute the Pareto frontier (a.k.a. Pareto subset) P of a given d-dimensional objective space F; that is, a maximal subset P ⊆ F such that every element in P is non-dominated (i.e., compared with any other point of F, it is better in at least one criterion) within F. This process, called dominance-filtering, often involves handling objective spaces derived from either the union or the Minkowski sum of two given partial objective spaces which are Pareto sets themselves, and constitutes a major bottleneck in several multi-objective optimization techniques. In this work, we introduce three new data structures, ND^{+}-trees, QND^{+}-trees and TND^{+}-trees, which are designed for efficiently indexing non-dominated objective vectors and performing dominance-checks. We also devise three new algorithms that efficiently filter out dominated objective vectors from the union or the Minkowski sum of two Pareto sets. An extensive experimental evaluation on both synthetically generated and real-world data sets reveals that our new algorithms outperform state-of-the-art techniques for dominance-filtering of unions and Minkowski sums of Pareto sets, and scale well with the number d ≥ 3 of criteria and with the sets' sizes.
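
A hedged quadratic baseline for dominance-filtering (assuming minimization in every criterion): keep a vector iff no other vector is at least as good everywhere and strictly better somewhere. The tree structures above exist precisely to avoid these all-pairs dominance checks.

def dominates(p, q):
    # p dominates q: p is <= q in every criterion and < q in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_filter(vectors):
    # O(|F|^2) dominance-filtering of an objective space F.
    return [p for p in vectors
            if not any(dominates(q, p) for q in vectors if q != p)]

# E.g. filtering the union of two small Pareto sets.
print(pareto_filter([(1, 5), (2, 2), (5, 1), (3, 3)]))
# [(1, 5), (2, 2), (5, 1)]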

Cite as

Konstantinos Karathanasis, Spyros Kontogiannis, and Christos Zaroliagis. Improved Dominance Filtering for Unions and Minkowski Sums of Pareto Sets. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 59:1-59:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{karathanasis_et_al:LIPIcs.ESA.2025.59,
  author =	{Karathanasis, Konstantinos and Kontogiannis, Spyros and Zaroliagis, Christos},
  title =	{{Improved Dominance Filtering for Unions and Minkowski Sums of Pareto Sets}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{59:1--59:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.59},
  URN =		{urn:nbn:de:0030-drops-245277},
  doi =		{10.4230/LIPIcs.ESA.2025.59},
  annote =	{Keywords: Multi-Objective Optimization, Multi-Dimensional Data Structures, Pareto Sets, Algorithm Engineering}
}
Document
Fast and Memory-Efficient BWT Construction of Repetitive Texts Using Lyndon Grammars

Authors: Jannik Olbrich


Abstract
The Burrows-Wheeler Transform (BWT) serves as the basis for many important sequence indexes. On very large datasets (e.g. genomic databases), classical BWT construction algorithms are often infeasible because they usually need to have the entire dataset in main memory. Fortunately, such large datasets are often highly repetitive. It can thus be beneficial to compute the BWT from a compressed representation. We propose an algorithm for computing the BWT via the Lyndon straight-line program, a grammar based on the standard factorization of Lyndon words. Our algorithm can also be used to compute the extended BWT (eBWT) of a multiset of sequences. We empirically evaluate our implementation and find that we can compute the BWT and eBWT of very large datasets faster and/or with less memory than competing methods.
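
Purely to pin down the object being constructed, a hedged definitional sketch of the BWT via sorted rotations; it is quadratic and bears no resemblance to the grammar-based construction of the paper.

def bwt(text, sentinel="$"):
    # Last column of the sorted rotations of text + sentinel.
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(bwt("banana"))  # annb$aa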

Cite as

Jannik Olbrich. Fast and Memory-Efficient BWT Construction of Repetitive Texts Using Lyndon Grammars. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{olbrich:LIPIcs.ESA.2025.60,
  author =	{Olbrich, Jannik},
  title =	{{Fast and Memory-Efficient BWT Construction of Repetitive Texts Using Lyndon Grammars}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.60},
  URN =		{urn:nbn:de:0030-drops-245286},
  doi =		{10.4230/LIPIcs.ESA.2025.60},
  annote =	{Keywords: Burrows-Wheeler Transform, Grammar compression}
}
Document
ε-Net Algorithm Implementation on Hyperbolic Surfaces

Authors: Vincent Despré, Camille Lanuel, Marc Pouget, and Monique Teillaud


Abstract
We propose an implementation, using the CGAL library, of an algorithm to compute ε-nets on hyperbolic surfaces proposed by Despré, Lanuel and Teillaud [Despré et al., 2024]. We describe the data structure, detail the implemented algorithm, and report experimental results on hyperbolic surfaces of genus 2. The implementation differs from the cited algorithm in several aspects. In particular, we use a different data structure, based on combinatorial maps, to represent a triangulation of a surface. We explain how to generate fundamental polygons to represent our input hyperbolic surfaces and discuss the arithmetic issues related to the number type of the coordinates of their vertices.

Cite as

Vincent Despré, Camille Lanuel, Marc Pouget, and Monique Teillaud. ε-Net Algorithm Implementation on Hyperbolic Surfaces. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 61:1-61:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{despre_et_al:LIPIcs.ESA.2025.61,
  author =	{Despr\'{e}, Vincent and Lanuel, Camille and Pouget, Marc and Teillaud, Monique},
  title =	{{\epsilon-Net Algorithm Implementation on Hyperbolic Surfaces}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{61:1--61:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.61},
  URN =		{urn:nbn:de:0030-drops-245296},
  doi =		{10.4230/LIPIcs.ESA.2025.61},
  annote =	{Keywords: Hyperbolic surface, Delaunay triangulation, Data structure, Combinatorial map, Implementation, CGAL}
}
Document
Polynomial-Time Constant-Approximation for Fair Sum-Of-Radii Clustering

Authors: Sina Bagheri Nezhad, Sayan Bandyapadhyay, and Tianzhi Chen


Abstract
In a seminal work, Chierichetti et al. [Chierichetti et al., 2017] introduced the (t,k)-fair clustering problem: Given a set of red points and a set of blue points in a metric space, a clustering is called fair if the number of red points in each cluster is at most t times and at least 1/t times the number of blue points in that cluster. The goal is to compute a fair clustering with at most k clusters that optimizes a certain objective function. Considering this problem, they designed a polynomial-time O(1)- and O(t)-approximation for the k-center and the k-median objective, respectively. Recently, Carta et al. [Carta et al., 2024] studied this problem with the sum-of-radii objective and obtained a (6+ε)-approximation with running time O((k log_{1+ε}(k/ε))^k n^O(1)), i.e., fixed-parameter tractable in k. Here n is the input size. In this work, we design the first polynomial-time O(1)-approximation for (t,k)-fair clustering with the sum-of-radii objective, improving the result of Carta et al. Our result places sum-of-radii in the same group of objectives as k-center, which admit polynomial-time O(1)-approximations. This result also implies a polynomial-time O(1)-approximation for the Euclidean version of the problem, for which an f(k)⋅n^O(1)-time (1+ε)-approximation was known due to Drexler et al. [Drexler et al., 2023]. Here f is an exponential function of k. We also extend our result to any number 𝓁 ≥ 2 of colors when t = 1. This matches known results for the k-center and k-median objectives in this case. The significant disparity of sum-of-radii compared to k-center and k-median presents several complex challenges, all of which we overcome in our work. Our main contribution is a novel cluster-merging-based analysis technique for sum-of-radii that helps us achieve the constant-approximation bounds.
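
As a small illustration of the fairness constraint (our sketch of the definition above, not of the approximation algorithm), the following Python checks (t,k)-fairness of a given clustering:

from collections import Counter

def is_fair(reds, blues, labels, t):
    # Chierichetti et al.'s constraint: in every cluster, the number of
    # red points is at most t times and at least 1/t times the number
    # of blue points (equivalently, #red <= t*#blue and #blue <= t*#red).
    r = Counter(labels[p] for p in reds)
    b = Counter(labels[p] for p in blues)
    return all(r[c] <= t * b[c] and b[c] <= t * r[c]
               for c in set(r) | set(b))

labels = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(is_fair({1, 2, 4}, {3, 5, 6}, labels, t=2))  # True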

Cite as

Sina Bagheri Nezhad, Sayan Bandyapadhyay, and Tianzhi Chen. Polynomial-Time Constant-Approximation for Fair Sum-Of-Radii Clustering. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 62:1-62:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bagherinezhad_et_al:LIPIcs.ESA.2025.62,
  author =	{Bagheri Nezhad, Sina and Bandyapadhyay, Sayan and Chen, Tianzhi},
  title =	{{Polynomial-Time Constant-Approximation for Fair Sum-Of-Radii Clustering}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{62:1--62:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.62},
  URN =		{urn:nbn:de:0030-drops-245309},
  doi =		{10.4230/LIPIcs.ESA.2025.62},
  annote =	{Keywords: fair clustering, sum-of-radii clustering, approximation algorithms}
}
Document
Connected k-Median with Disjoint and Non-Disjoint Clusters

Authors: Jan Eube, Kelin Luo, Dorian Reineccius, Heiko Röglin, and Melanie Schmidt


Abstract
The connected k-median problem is a constrained clustering problem that combines distance-based k-clustering with connectivity information. The input consists of a metric space and an unweighted undirected connectivity graph G that may be completely unrelated to the metric space. The goal is to compute k centers and corresponding clusters such that each cluster forms a connected subgraph of G, and such that the k-median cost is minimized. The problem has applications in very different fields like geodesy (particularly districting), social network analysis (especially community detection), and bioinformatics. We study a version with overlapping clusters, where points can be part of multiple clusters, which is natural for the use case of community detection. This problem variant is Ω(log n)-hard to approximate, and our main result is an 𝒪(k² log n)-approximation algorithm for the problem. We complement it with an Ω(n^{1-ε})-hardness result for the case of disjoint clusters without overlap with general connectivity graphs, as well as an exact algorithm in this setting if the connectivity graph is a tree.
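
The feasibility condition couples two otherwise unrelated structures; the sketch below (ours, for intuition only) verifies that each cluster induces a connected subgraph of the connectivity graph G, given as an adjacency dict:

from collections import deque

def is_connected_clustering(G, clusters):
    # Each cluster must induce a connected subgraph of G; the k-median
    # cost is then measured separately, in the metric space.
    for C in clusters:
        if not C:
            continue
        start = next(iter(C))
        seen, q = {start}, deque([start])
        while q:
            u = q.popleft()
            for v in G[u]:
                if v in C and v not in seen:
                    seen.add(v)
                    q.append(v)
        if seen != C:
            return False
    return True

G = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_connected_clustering(G, [{1, 2}, {3, 4}]))  # True
print(is_connected_clustering(G, [{1, 3}, {2, 4}]))  # False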

Cite as

Jan Eube, Kelin Luo, Dorian Reineccius, Heiko Röglin, and Melanie Schmidt. Connected k-Median with Disjoint and Non-Disjoint Clusters. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{eube_et_al:LIPIcs.ESA.2025.63,
  author =	{Eube, Jan and Luo, Kelin and Reineccius, Dorian and R\"{o}glin, Heiko and Schmidt, Melanie},
  title =	{{Connected k-Median with Disjoint and Non-Disjoint Clusters}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.63},
  URN =		{urn:nbn:de:0030-drops-245317},
  doi =		{10.4230/LIPIcs.ESA.2025.63},
  annote =	{Keywords: Clustering, Connectivity constraints, Approximation algorithms}
}
Document
A Dynamic Piecewise-Linear Geometric Index with Worst-Case Guarantees

Authors: Emil Toftegaard Gæde, Ivor van der Hoog, Eva Rotenberg, and Tord Stordalen


Abstract
Indexing data is a fundamental problem in computer science. The input is a set S of n distinct integers from a universe 𝒰. Indexing queries take a value q ∈ 𝒰 and return the membership, predecessor or rank of q in S. A range query takes two values q, r ∈ 𝒰 and returns the set S ∩ [q,r]. Recently, various papers have studied a special case where the input data behaves in an approximately piecewise-linear way. Given the sorted (rank, value) pairs, and given some constant ε, one wants to maintain a small number of axis-disjoint line-segments such that, for each rank, the value is within ± ε of the corresponding line-segment. Ferragina and Vinciguerra (VLDB 2020) observe that this geometric problem is useful for solving indexing problems, particularly when the number of line-segments is small compared to the size of the dataset. We study the dynamic version of this geometric problem. In the dynamic setting, inserting or deleting just one data point may cause up to three line-segments to be merged, or one line-segment to be split at most three-way. To determine and compute this, we use techniques from dynamic maintenance of convex hulls, and provide new algorithms with worst-case guarantees, including an O(log n) algorithm to compute a separating line between two non-intersecting convex hulls - an operation previously missing from the literature. We then use our fully-dynamic geometry-based subroutine in an indexing data structure, combining it with a natural hashing technique. The resulting indexing data structure has theoretically efficient worst-case guarantees in expectation. We compare its practical performance to the solution of Ferragina and Vinciguerra, which was shown to perform better in certain structured settings [Sun, Zhou, Li VLDB 2023]. Our empirical analysis shows that our solution supports more efficient range queries in the special case where the update sequence contains many deletions.
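
A minimal sketch of the underlying geometric task (a simplified static greedy, not the paper's dynamic structure): split sorted values into maximal segments whose line stays within ±ε of every covered point, anchoring each segment at its first point for simplicity.

def greedy_segments(values, eps):
    # values[i] is the value of rank i (sorted). Maintain the interval
    # of feasible slopes for a line through the segment's first point;
    # start a new segment when the interval becomes empty. Anchoring at
    # the first point may use more segments than an optimal fit.
    segs, i, n = [], 0, len(values)
    while i < n:
        x0, y0 = i, values[i]
        lo, hi = float("-inf"), float("inf")
        j = i + 1
        while j < n:
            dx, dy = j - x0, values[j] - y0
            nlo, nhi = max(lo, (dy - eps) / dx), min(hi, (dy + eps) / dx)
            if nlo > nhi:
                break
            lo, hi = nlo, nhi
            j += 1
        slope = 0.0 if hi == float("inf") else (lo + hi) / 2
        segs.append((x0, y0, slope, j - 1))  # covers ranks x0 .. j-1
        i = j
    return segs

print(greedy_segments([0, 1, 2, 3, 10, 11, 12], eps=0.5))
# two segments of slope ~1.0, covering ranks 0-3 and 4-6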

Cite as

Emil Toftegaard Gæde, Ivor van der Hoog, Eva Rotenberg, and Tord Stordalen. A Dynamic Piecewise-Linear Geometric Index with Worst-Case Guarantees. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gaede_et_al:LIPIcs.ESA.2025.64,
  author =	{G{\ae}de, Emil Toftegaard and van der Hoog, Ivor and Rotenberg, Eva and Stordalen, Tord},
  title =	{{A Dynamic Piecewise-Linear Geometric Index with Worst-Case Guarantees}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.64},
  URN =		{urn:nbn:de:0030-drops-245323},
  doi =		{10.4230/LIPIcs.ESA.2025.64},
  annote =	{Keywords: Algorithms Engineering, Data Structures, Indexing, Convex Hulls}
}
Document
From Theory to Practice: Engineering Approximation Algorithms for Dynamic Orientation

Authors: Ernestine Grossmann, Henrik Reinstädtler, Eva Rotenberg, Christian Schulz, Ivor van der Hoog, and Juliette Vlieghe


Abstract
Dynamic graph algorithms have seen significant theoretical advancements, but practical evaluations often lag behind. This work bridges the gap between theory and practice by engineering and empirically evaluating recently developed approximation algorithms for dynamically maintaining graph orientations. We comprehensively describe the underlying data structures, including efficient bucketing techniques and round-robin updates. Our implementation has a natural parameter λ, which allows for a trade-off between algorithmic efficiency and the quality of the solution. In an extensive experimental evaluation, we demonstrate that our implementation offers a considerable speedup. Using different quality metrics, we show that our implementations are very competitive and can outperform previous methods. Overall, our approach solves more instances than other methods while being up to 112 times faster on instances that are solvable by all methods compared.

Cite as

Ernestine Grossmann, Henrik Reinstädtler, Eva Rotenberg, Christian Schulz, Ivor van der Hoog, and Juliette Vlieghe. From Theory to Practice: Engineering Approximation Algorithms for Dynamic Orientation. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 65:1-65:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{grossmann_et_al:LIPIcs.ESA.2025.65,
  author =	{Grossmann, Ernestine and Reinst\"{a}dtler, Henrik and Rotenberg, Eva and Schulz, Christian and van der Hoog, Ivor and Vlieghe, Juliette},
  title =	{{From Theory to Practice: Engineering Approximation Algorithms for Dynamic Orientation}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{65:1--65:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.65},
  URN =		{urn:nbn:de:0030-drops-245331},
  doi =		{10.4230/LIPIcs.ESA.2025.65},
  annote =	{Keywords: Dynamic graphs, out-orientation}
}
Document
Bicriteria Approximation for k-Edge-Connectivity

Authors: Zeev Nutov and Reut Cohen


Abstract
In the k-Edge Connected Spanning Subgraph (k-ECSS) problem we are given a (multi-)graph G = (V,E) with edge costs and an integer k, and seek a min-cost k-edge-connected spanning subgraph of G. The problem admits a 2-approximation algorithm and no better approximation ratio is known. Recently, Hershkowitz, Klein, and Zenklusen [STOC 24] gave a bicriteria (1,k-10)-approximation algorithm that computes a (k-10)-edge-connected spanning subgraph of cost at most the optimal value of a standard Cut-LP for k-ECSS. We improve the bicriteria approximation to (1,k-4) and also give another non-trivial bicriteria approximation (3/2,k-2). The k-Edge-Connected Spanning Multi-subgraph (k-ECSM) problem is almost the same as k-ECSS, except that any edge can be selected multiple times at the same cost. A (1,k-p) bicriteria approximation for k-ECSS w.r.t. Cut-LP implies approximation ratio 1+p/k for k-ECSM, hence our result also improves the approximation ratio for k-ECSM.
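
For reference, the Cut-LP that the bicriteria guarantees are measured against can be written as follows (a standard formulation, our transcription rather than a quote from the paper):

\[
\min \sum_{e \in E} c_e x_e
\quad \text{s.t.} \quad
\sum_{e \in \delta(S)} x_e \ge k \;\; (\emptyset \neq S \subsetneq V),
\qquad 0 \le x_e \le 1 \;\; (e \in E),
\]

where δ(S) is the set of edges with exactly one endpoint in S. A (1, k-p) bicriteria algorithm returns a (k-p)-edge-connected spanning subgraph of cost at most the optimum of this LP; for k-ECSM the upper bounds x_e ≤ 1 are dropped, since edges may be taken multiple times.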

Cite as

Zeev Nutov and Reut Cohen. Bicriteria Approximation for k-Edge-Connectivity. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 66:1-66:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{nutov_et_al:LIPIcs.ESA.2025.66,
  author =	{Nutov, Zeev and Cohen, Reut},
  title =	{{Bicriteria Approximation for k-Edge-Connectivity}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{66:1--66:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.66},
  URN =		{urn:nbn:de:0030-drops-245343},
  doi =		{10.4230/LIPIcs.ESA.2025.66},
  annote =	{Keywords: k-edge-connected subgraph, bicriteria approximation, iterative LP-rounding}
}
Document
Compact Representation of Semilinear and Terrain-Like Graphs

Authors: Jean Cardinal and Yelena Yuditsky


Abstract
We consider the existence and construction of biclique covers of graphs, consisting of coverings of their edge sets by complete bipartite graphs. The size of such a cover is the sum of the sizes of the bicliques. Small-size biclique covers of graphs are ubiquitous in computational geometry, and have been shown to be useful compact representations of graphs. We give a brief survey of classical and recent results on biclique covers and their applications, and give new families of graphs having biclique covers of near-linear size. In particular, we show that semilinear graphs, whose edges are defined by linear relations in bounded dimensional space, always have biclique covers of size O(n polylog n). This generalizes many previously known results on special classes of graphs including interval graphs, permutation graphs, and graphs of bounded boxicity, but also new classes such as intersection graphs of L-shapes in the plane. It also directly implies the bounds for Zarankiewicz’s problem derived by Basit, Chernikov, Starchenko, Tao, and Tran (Forum Math. Sigma, 2021). We also consider capped graphs, also known as terrain-like graphs, defined as ordered graphs forbidding a certain ordered pattern on four vertices. Terrain-like graphs contain the induced subgraphs of terrain visibility graphs. We give an elementary proof that these graphs admit biclique partitions of size O(n log³ n). This provides a simple combinatorial analogue of a classical result from Agarwal, Alon, Aronov, and Suri on polygon visibility graphs (Discrete Comput. Geom. 1994). Finally, we prove that there exist families of unit disk graphs on n vertices that do not admit biclique coverings of size o(n^{4/3}), showing that we are unlikely to improve on Szemerédi-Trotter type incidence bounds for higher-degree semialgebraic graphs.

Cite as

Jean Cardinal and Yelena Yuditsky. Compact Representation of Semilinear and Terrain-Like Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 67:1-67:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{cardinal_et_al:LIPIcs.ESA.2025.67,
  author =	{Cardinal, Jean and Yuditsky, Yelena},
  title =	{{Compact Representation of Semilinear and Terrain-Like Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{67:1--67:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.67},
  URN =		{urn:nbn:de:0030-drops-245359},
  doi =		{10.4230/LIPIcs.ESA.2025.67},
  annote =	{Keywords: Biclique covers, intersection graphs, visibility graphs, Zarankiewicz’s problem}
}
Document
Faster Algorithm for Second (s,t)-Mincut and Breaking Quadratic Barrier for Dual Edge Sensitivity for (s,t)-Mincut

Authors: Surender Baswana, Koustav Bhanja, and Anupam Roy


Abstract
Let G be a directed graph on n vertices and m edges. In this article, we study (s,t)-cuts of second minimum capacity and present the following algorithmic and graph-theoretic results. 1) Second (s,t)-mincut: Vazirani and Yannakakis [ICALP 1992] designed the first algorithm for computing an (s,t)-cut of second minimum capacity using O(n²) maximum (s,t)-flow computations. We present an algorithm that improves the running time significantly: for directed integer-weighted graphs, an (s,t)-cut of second minimum capacity can be computed using Õ(√n) maximum (s,t)-flow computations with high probability. To achieve this result, a close relationship of independent interest is established between (s,t)-cuts of second minimum capacity and global mincuts in directed weighted graphs. 2) Minimum+1 (s,t)-cuts: Minimum+1 (s,t)-cuts, a special case of second (s,t)-mincut, have been studied quite well recently [Baswana, Bhanja, and Pandey, ICALP 2022 & TALG 2023]. We present the first nontrivial algorithm and the following structural result for minimum+1 (s,t)-cuts. (a) Algorithm: For directed multi-graphs, we design an algorithm that, given any maximum (s,t)-flow, computes a minimum+1 (s,t)-cut, if it exists, in O(m) time. (b) Structure: The existing structures for storing and characterizing all minimum+1 (s,t)-cuts occupy O(mn) space [Baswana, Bhanja, and Pandey, TALG 2023]. For undirected multi-graphs, we design a directed acyclic graph (DAG) occupying only O(m) space that stores and characterizes all minimum+1 (s,t)-cuts. This matches the space bound of the widely-known DAG structure for all (s,t)-mincuts [Picard and Queyranne, Math. Prog. Studies 1980]. 3) Dual Edge Sensitivity Oracle: The study of minimum+1 (s,t)-cuts often turns out to be useful in designing dual edge sensitivity oracles - a compact data structure for efficiently reporting an (s,t)-mincut after insertion/failure of any given pair of query edges. It has been shown recently [Bhanja, ICALP 2025] that any dual edge sensitivity oracle for (s,t)-mincut in undirected multi-graphs must occupy Ω(n²) space in the worst case, irrespective of the query time. Interestingly, for undirected unweighted simple graphs, we break this quadratic barrier while achieving a non-trivial query time: there is an O(n√n) space data structure that can report an (s,t)-mincut in O(min{m,n√n}) time after the insertion/failure of any given pair of query edges. To arrive at our results, as one of our key techniques, we establish interesting relationships between (s,t)-cuts of capacity minimum+Δ, Δ ≥ 0, and maximum (s,t)-flow. We believe that these techniques and the graph-theoretic result in 2.(b) are of independent interest.

Cite as

Surender Baswana, Koustav Bhanja, and Anupam Roy. Faster Algorithm for Second (s,t)-Mincut and Breaking Quadratic Barrier for Dual Edge Sensitivity for (s,t)-Mincut. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 68:1-68:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{baswana_et_al:LIPIcs.ESA.2025.68,
  author =	{Baswana, Surender and Bhanja, Koustav and Roy, Anupam},
  title =	{{Faster Algorithm for Second (s,t)-Mincut and Breaking Quadratic Barrier for Dual Edge Sensitivity for (s,t)-Mincut}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{68:1--68:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.68},
  URN =		{urn:nbn:de:0030-drops-245369},
  doi =		{10.4230/LIPIcs.ESA.2025.68},
  annote =	{Keywords: mincut, second mincut, compact structure, fault tolerant, sensitivity oracle, dual edges, st mincut, global mincut, characterization}
}
Document
Bandwidth vs BFS Width in Matrix Reordering, Graph Reconstruction, and Graph Drawing

Authors: David Eppstein, Michael T. Goodrich, and Songyu (Alfred) Liu


Abstract
We provide the first approximation quality guarantees for the Cuthill-McKee heuristic for reordering symmetric matrices to have low bandwidth, and we provide an algorithm for reconstructing bounded-bandwidth graphs from distance oracles with near-linear query complexity. To prove these results we introduce a new width parameter, BFS width, and we prove polylogarithmic upper and lower bounds on the BFS width of graphs of bounded bandwidth. Unlike other width parameters, such as bandwidth, pathwidth, and treewidth, BFS width can easily be computed in polynomial time. Bounded BFS width implies bounded bandwidth, pathwidth, and treewidth, which in turn imply fixed-parameter tractable algorithms for many problems that are NP-hard for general graphs. In addition to the application to matrix reordering, we also provide applications of BFS width to graph reconstruction (reconstructing graphs from distance queries) and to graph drawing (constructing arc diagrams of small height).
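
As a quick illustration of bandwidth reduction by a BFS-based ordering (using SciPy's reverse Cuthill-McKee as a stand-in for the heuristic analyzed in the paper; assumes NumPy/SciPy are available):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(a):
    # Largest |i - j| over nonzero entries of a sparse matrix.
    r, c = a.nonzero()
    return int(np.max(np.abs(r - c))) if r.size else 0

# A path 0-1-2-3-4 presented in a scrambled vertex order.
order = [3, 0, 4, 2, 1]
a = np.zeros((5, 5), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    i, j = order.index(u), order.index(v)
    a[i, j] = a[j, i] = 1
a = csr_matrix(a)
perm = reverse_cuthill_mckee(a, symmetric_mode=True)
print(bandwidth(a), bandwidth(a[perm][:, perm]))  # bandwidth drops from 3 to 1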

Cite as

David Eppstein, Michael T. Goodrich, and Songyu (Alfred) Liu. Bandwidth vs BFS Width in Matrix Reordering, Graph Reconstruction, and Graph Drawing. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 69:1-69:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{eppstein_et_al:LIPIcs.ESA.2025.69,
  author =	{Eppstein, David and Goodrich, Michael T. and Liu, Songyu (Alfred)},
  title =	{{Bandwidth vs BFS Width in Matrix Reordering, Graph Reconstruction, and Graph Drawing}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{69:1--69:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.69},
  URN =		{urn:nbn:de:0030-drops-245373},
  doi =		{10.4230/LIPIcs.ESA.2025.69},
  annote =	{Keywords: Graph algorithms, graph theory, graph width, bandwidth, treewidth}
}
Document
Improved Parallel Derandomization via Finite Automata with Applications

Authors: Jeff Giliberti and David G. Harris


Abstract
A central approach to algorithmic derandomization is the construction of small-support probability distributions that "fool" randomized algorithms, often enabling efficient parallel (NC) implementations. An abstraction of this idea is fooling polynomial-space statistical tests computed via finite automata [Sivakumar STOC'02]; this encompasses a wide range of properties including k-wise independence and sums of random variables. We present new parallel algorithms to fool finite-state automata, with significantly reduced processor complexity. Briefly, our approach is to iteratively sparsify distributions using a work-efficient lattice rounding routine, and to maintain accuracy by tracking an aggregate weighted error determined by the Lipschitz value of the statistical tests being fooled. We illustrate with improved applications to the Gale-Berlekamp Switching Game and to approximate MAX-CUT via SDP rounding. These involve several further optimizations, such as truncating the state space of the automata and computing transition probabilities efficiently via FFT-based convolutions.
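
A tiny example of a statistical test computed by a finite automaton (our illustration of the framework, not of the sparsification algorithm): push the probability mass of n i.i.d. bits through the automaton's transitions; a fooling distribution must approximately reproduce the resulting state distribution with small support.

def automaton_distribution(n, step, start, p=0.5):
    # Exact state distribution of a DFA reading n independent
    # Bernoulli(p) bits; step(state, bit) -> next state.
    dist = {start: 1.0}
    for _ in range(n):
        nxt = {}
        for s, pr in dist.items():
            for bit, q in ((0, 1 - p), (1, p)):
                t = step(s, bit)
                nxt[t] = nxt.get(t, 0.0) + pr * q
        dist = nxt
    return dist

# Example test: parity of 5 fair bits, a 2-state automaton.
print(automaton_distribution(5, lambda s, b: s ^ b, 0))  # {0: 0.5, 1: 0.5}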

Cite as

Jeff Giliberti and David G. Harris. Improved Parallel Derandomization via Finite Automata with Applications. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 70:1-70:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{giliberti_et_al:LIPIcs.ESA.2025.70,
  author =	{Giliberti, Jeff and Harris, David G.},
  title =	{{Improved Parallel Derandomization via Finite Automata with Applications}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{70:1--70:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.70},
  URN =		{urn:nbn:de:0030-drops-245381},
  doi =		{10.4230/LIPIcs.ESA.2025.70},
  annote =	{Keywords: Parallel Algorithms, Derandomization, MAX-CUT, Gale-Berlekamp Switching Game}
}
Document
Simpler Universally Optimal Dijkstra

Authors: Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann


Abstract
Let G be a weighted (directed) graph with n vertices and m edges. Given a source vertex s, Dijkstra’s algorithm computes the shortest path lengths from s to all other vertices in O(m + n log n) time. This bound is known to be worst-case optimal via a reduction to sorting. Theoretical computer science has developed numerous fine-grained frameworks for analyzing algorithmic performance beyond standard worst-case analysis, such as instance optimality and output sensitivity. Haeupler, Hladík, Rozhoň, Tarjan, and Tětek [FOCS '24] consider the notion of universal optimality, a refined complexity measure that accounts for both the graph topology and the edge weights. For a fixed graph topology, the universal running time of a weighted graph algorithm is defined as its worst-case running time over all possible edge weightings of G. An algorithm is universally optimal if no other algorithm achieves a better asymptotic universal running time on any particular graph topology. Haeupler, Hladík, Rozhoň, Tarjan, and Tětek show that Dijkstra’s algorithm can be made universally optimal by replacing the heap with a custom data structure. Their approach builds on Iacono’s [SWAT '00] working-set bound ϕ(x): intuitively, for a heap element x, ϕ(x) counts the maximum number of simultaneously present elements that were pushed onto the heap while x was in the heap. They design a new heap data structure that can pop an element x in O(1 + log ϕ(x)) time, and they show that Dijkstra’s algorithm with their heap data structure is universally optimal. In this work, we revisit their result. We use a simpler heap property that we call timestamp optimality, where the cost of popping an element x is logarithmic in the number of elements inserted between pushing and popping x. We show that timestamp optimal heaps are not only easier to define but also easier to implement. Using these timestamps, we provide a significantly simpler proof that Dijkstra’s algorithm, with the right kind of heap, is universally optimal.
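
For orientation, here is textbook Dijkstra with a binary heap and lazy deletion (a baseline sketch; the paper's contribution is the heap that would replace heapq, as the comment notes):

import heapq

def dijkstra(adj, s):
    # adj: {u: [(v, w), ...]} with non-negative weights w.
    # A timestamp-optimal heap, in the sense above, would replace heapq
    # so that popping x costs time logarithmic in the number of elements
    # inserted between x's push and its pop.
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry from lazy deletion
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

print(dijkstra({"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}, "s"))
# {'s': 0, 'a': 2, 'b': 3}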

Cite as

Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann. Simpler Universally Optimal Dijkstra. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 71:1-71:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{vanderhoog_et_al:LIPIcs.ESA.2025.71,
  author =	{van der Hoog, Ivor and Rotenberg, Eva and Rutschmann, Daniel},
  title =	{{Simpler Universally Optimal Dijkstra}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{71:1--71:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.71},
  URN =		{urn:nbn:de:0030-drops-245390},
  doi =		{10.4230/LIPIcs.ESA.2025.71},
  annote =	{Keywords: Graph algorithms, instance optimality, Fibonnacci heaps, simplification}
}
Document
Color Distance Oracles and Snippets: Separation Between Exact and Approximate Solutions

Authors: Noam Horowicz and Tsvi Kopelowitz


Abstract
In the snippets problem, the goal is to preprocess a text T so that given two pattern queries, P₁ and P₂, one can quickly locate the occurrences of the two patterns in T that are closest to each other, or report the distance between these occurrences. Kopelowitz and Krauthgamer [CPM2016] showed upper-bound and conditional lower-bound tradeoffs for the snippets problem, by utilizing connections between the snippets problem and the problem of constructing a color distance oracle (CDO), which is a data structure that preprocesses a set of points with associated colors so that given two colors c and c' one can quickly find the (distance between the) closest pair of points where one has color c and the other has color c'. However, the existing upper bound and lower bound curves are not tight. Inspired by recent advances by Kopelowitz and Vassilevska-Williams [ICALP2020] regarding tradeoff curves for Set-disjointness data structures, in this paper we introduce new conditionally optimal algorithms for a (1+ε) approximation version of the snippets problem and a (1+ε) approximation version of the CDO problem, by applying fast matrix multiplication. For example, for CDO on n points in an array, if the preprocessing time is Õ(n^a) and the query time is Õ(n^b) then, assuming that ω = 2 (where ω is the exponent of n in the runtime of the fastest matrix multiplication algorithm on two square matrices of size n × n), we show that approximate CDO can be solved with the following tradeoff: a + 2b = 2 (if 0 ≤ b ≤ 1/3) and 2a + b = 3 (if 1/3 ≤ b ≤ 1). Moreover, we prove that for exact CDO on points in an array, the algorithm of Kopelowitz and Krauthgamer [CPM2016], which obtains a tradeoff of a + b = 2, is essentially optimal assuming that the strong all-pairs shortest paths hypothesis holds for randomized algorithms. Thus, we demonstrate that the exact version of CDO is strictly harder than the approximate version. Moreover, this separation carries over to the snippets problem.
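
A brute-force CDO for points on a line makes the problem concrete (our baseline sketch; it makes no attempt at the preprocessing/query tradeoffs above):

from bisect import bisect_left
from collections import defaultdict

class NaiveCDO:
    # Preprocess: group array positions by color, O(n).
    # Query: for each position of color c1, binary-search the nearest
    # position of color c2.
    def __init__(self, colors):
        self.pos = defaultdict(list)
        for i, c in enumerate(colors):
            self.pos[c].append(i)  # appended in sorted order
    def query(self, c1, c2):
        a, b = self.pos[c1], self.pos[c2]
        best = float("inf")
        for x in a:
            k = bisect_left(b, x)
            for j in (k - 1, k):
                if 0 <= j < len(b):
                    best = min(best, abs(x - b[j]))
        return best

print(NaiveCDO(["r", "g", "b", "r", "g"]).query("r", "g"))  # 1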

Cite as

Noam Horowicz and Tsvi Kopelowitz. Color Distance Oracles and Snippets: Separation Between Exact and Approximate Solutions. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 72:1-72:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{horowicz_et_al:LIPIcs.ESA.2025.72,
  author =	{Horowicz, Noam and Kopelowitz, Tsvi},
  title =	{{Color Distance Oracles and Snippets: Separation Between Exact and Approximate Solutions}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{72:1--72:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.72},
  URN =		{urn:nbn:de:0030-drops-245403},
  doi =		{10.4230/LIPIcs.ESA.2025.72},
  annote =	{Keywords: data structures, fast matrix multiplication, fine-grained complexity, pattern matching, distance oracles}
}
Document
Classical Algorithms for Constant Approximation of the Ground State Energy of Local Hamiltonians

Authors: François Le Gall


Abstract
We construct classical algorithms computing an approximation of the ground state energy of an arbitrary k-local Hamiltonian acting on n qubits. We first consider the setting where a good "guiding state" is available, which is the main setting where quantum algorithms are expected to achieve an exponential speedup over classical methods. We show that a constant approximation (i.e., an approximation with constant relative accuracy) of the ground state energy can be computed classically in poly(1/χ, n) time and poly(n) space, where χ denotes the overlap between the guiding state and the ground state (as in prior works in dequantization, we assume sample-and-query access to the guiding state). This gives a significant improvement over the recent classical algorithm by Gharibian and Le Gall (SICOMP 2023), and matches (up to a polynomial overhead) both the time and space complexities of quantum algorithms for constant approximation of the ground state energy. We also obtain classical algorithms for higher-precision approximation. For the setting where no guiding state is given (i.e., the standard version of the local Hamiltonian problem), we obtain a classical algorithm computing a constant approximation of the ground state energy in 2^O(n) time and poly(n) space. To our knowledge, before this work it was unknown how to classically achieve these bounds simultaneously, even for constant approximation. We also discuss complexity-theoretic aspects of our results.

Cite as

François Le Gall. Classical Algorithms for Constant Approximation of the Ground State Energy of Local Hamiltonians. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 73:1-73:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{legall:LIPIcs.ESA.2025.73,
  author =	{Le Gall, Fran\c{c}ois},
  title =	{{Classical Algorithms for Constant Approximation of the Ground State Energy of Local Hamiltonians}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{73:1--73:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.73},
  URN =		{urn:nbn:de:0030-drops-245419},
  doi =		{10.4230/LIPIcs.ESA.2025.73},
  annote =	{Keywords: approximation algorithms, quantum computing, dequantization}
}
Document
Core-Sparse Monge Matrix Multiplication: Improved Algorithm and Applications

Authors: Paweł Gawrychowski, Egor Gorbachev, and Tomasz Kociumaka


Abstract
Min-plus matrix multiplication is a fundamental tool for designing algorithms operating on distances in graphs and for various problems solvable by dynamic programming. We know that, assuming the APSP hypothesis, no subcubic-time algorithm exists for the case of general matrices. However, in many applications the matrices admit certain structural properties that can be used to design faster algorithms. For example, when considering a planar graph, one often works with a Monge matrix A, meaning that the density matrix A^◻ has non-negative entries, that is, A^◻_{i,j} := A_{i+1,j} + A_{i,j+1} - A_{i,j} - A_{i+1,j+1} ≥ 0. The min-plus product of two n×n Monge matrices can be computed in 𝒪(n²) time using the famous SMAWK algorithm. In applications such as longest common subsequence, edit distance, and longest increasing subsequence, the matrices are even more structured, as observed by Tiskin [J. Discrete Algorithms, 2008]: they are (or can be converted to) simple unit-Monge matrices, meaning that the density matrix is a permutation matrix and, furthermore, the first column and the last row of the matrix consist of only zeroes. Such matrices admit an implicit representation of size 𝒪(n) and, as shown by Tiskin [SODA 2010 & Algorithmica, 2015], their min-plus product can be computed in 𝒪(n log n) time. Russo [SPIRE 2010 & Theor. Comput. Sci., 2012] identified a general structural property of matrices that admit such efficient representation and min-plus multiplication algorithms: the core size δ, defined as the number of non-zero entries in the density matrices of the input and output matrices. He provided an adaptive implementation of the SMAWK algorithm that runs in 𝒪((n+δ)log³ n) or 𝒪((n+δ)log² n) time (depending on the representation of the input matrices). In this work, we further investigate the core size as the parameter that enables efficient min-plus matrix multiplication. On the combinatorial side, we provide a (linear) bound on the core size of the product matrix in terms of the core sizes of the input matrices. On the algorithmic side, we generalize Tiskin’s algorithm (but, arguably, with a more elementary analysis) to solve the core-sparse Monge matrix multiplication problem in 𝒪(n + δ log δ) ⊆ 𝒪(n + δ log n) time, matching the complexity for simple unit-Monge matrices. As witnessed by the recent work of Gorbachev and Kociumaka [STOC'25] for edit distance with integer weights, our generalization opens up the possibility of speed-ups for weighted sequence alignment problems. Furthermore, our multiplication algorithm is also capable of producing an efficient data structure for recovering the witness for any given entry of the output matrix. This allows us, for example, to preprocess an integer array of size n in Õ(n) time so that the longest increasing subsequence of any sub-array can be reconstructed in Õ(𝓁) time, where 𝓁 is the length of the reported subsequence. In comparison, Karthik C. S. and Rahul [arXiv, 2024] recently achieved 𝒪(𝓁 + n^{1/2} polylog n)-time reporting after 𝒪(n^{3/2} polylog n)-time preprocessing.
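
The definitions involved can be checked directly (our sketch; the naive product below is the cubic baseline, not the core-sparse algorithm):

def is_monge(A):
    # Monge condition via the density matrix:
    # A□[i][j] = A[i+1][j] + A[i][j+1] - A[i][j] - A[i+1][j+1] >= 0.
    n, m = len(A), len(A[0])
    return all(A[i+1][j] + A[i][j+1] - A[i][j] - A[i+1][j+1] >= 0
               for i in range(n - 1) for j in range(m - 1))

def minplus(A, B):
    # Naive O(n^3) min-plus product.
    n, k, m = len(A), len(B), len(B[0])
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[0, 1, 3], [1, 1, 2], [3, 2, 2]]
print(is_monge(A))          # True
print(minplus(A, A)[0][2])  # 3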

Cite as

Paweł Gawrychowski, Egor Gorbachev, and Tomasz Kociumaka. Core-Sparse Monge Matrix Multiplication: Improved Algorithm and Applications. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 74:1-74:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2025.74,
  author =	{Gawrychowski, Pawe{\l} and Gorbachev, Egor and Kociumaka, Tomasz},
  title =	{{Core-Sparse Monge Matrix Multiplication: Improved Algorithm and Applications}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{74:1--74:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.74},
  URN =		{urn:nbn:de:0030-drops-245427},
  doi =		{10.4230/LIPIcs.ESA.2025.74},
  annote =	{Keywords: Min-plus matrix multiplication, Monge matrix, longest increasing subsequence}
}
Document
An Improved Bound for Plane Covering Paths

Authors: Hugo A. Akitaya, Greg Aloupis, Ahmad Biniaz, Prosenjit Bose, Jean-Lou De Carufel, Cyril Gavoille, John Iacono, Linda Kleist, Michiel Smid, Diane Souvaine, and Leonidas Theocharous


Abstract
A covering path for a finite set P of points in the plane is a polygonal path such that every point of P lies on a segment of the path. The vertices of the path need not be at points of P. A covering path is plane if its segments do not cross each other. Let π(n) be the minimum number such that every set of n points in the plane admits a plane covering path with at most π(n) segments. We prove that π(n) ≤ ⌈6n/7⌉. This improves the previous best-known upper bound of ⌈21n/22⌉, due to Biniaz (SoCG 2023). Our proof is constructive and yields a simple O(n log n)-time algorithm for computing a plane covering path.

Cite as

Hugo A. Akitaya, Greg Aloupis, Ahmad Biniaz, Prosenjit Bose, Jean-Lou De Carufel, Cyril Gavoille, John Iacono, Linda Kleist, Michiel Smid, Diane Souvaine, and Leonidas Theocharous. An Improved Bound for Plane Covering Paths. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 75:1-75:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{a.akitaya_et_al:LIPIcs.ESA.2025.75,
  author =	{A. Akitaya, Hugo and Aloupis, Greg and Biniaz, Ahmad and Bose, Prosenjit and De Carufel, Jean-Lou and Gavoille, Cyril and Iacono, John and Kleist, Linda and Smid, Michiel and Souvaine, Diane and Theocharous, Leonidas},
  title =	{{An Improved Bound for Plane Covering Paths}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{75:1--75:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.75},
  URN =		{urn:nbn:de:0030-drops-245432},
  doi =		{10.4230/LIPIcs.ESA.2025.75},
  annote =	{Keywords: Covering Path, Upper Bound, Simple Algorithm}
}
Document
A 3.3904-Competitive Online Algorithm for List Update with Uniform Costs

Authors: Mateusz Basiak, Marcin Bienkowski, Martin Böhm, Marek Chrobak, Łukasz Jeż, Jiří Sgall, and Agnieszka Tatarczuk


Abstract
We consider the List Update problem where the cost of each swap is assumed to be 1. This is in contrast to the "standard" model, in which an algorithm is allowed to swap the requested item with previous items for free. We construct an online algorithm Full-Or-Partial-Move (FPM), whose competitive ratio is at most 3.3904, improving over the previous best known bound of 4.
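
A short simulator pins down the cost model (our reading of the uniform-cost model described above; in the standard model the swaps moving the requested item forward would be free):

def serve(requests, lst, move_to_front=True):
    # Accessing the item at (1-based) position i costs i; every swap of
    # adjacent items costs 1, including swaps moving the requested item.
    total = 0
    for r in requests:
        i = lst.index(r)
        total += i + 1              # access cost
        if move_to_front:
            total += i              # i paid swaps to bring r to front
            lst.insert(0, lst.pop(i))
    return total

print(serve(["c", "c", "c"], ["a", "b", "c"]))                       # 7
print(serve(["c", "c", "c"], ["a", "b", "c"], move_to_front=False))  # 9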

Cite as

Mateusz Basiak, Marcin Bienkowski, Martin Böhm, Marek Chrobak, Łukasz Jeż, Jiří Sgall, and Agnieszka Tatarczuk. A 3.3904-Competitive Online Algorithm for List Update with Uniform Costs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 76:1-76:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{basiak_et_al:LIPIcs.ESA.2025.76,
  author =	{Basiak, Mateusz and Bienkowski, Marcin and B\"{o}hm, Martin and Chrobak, Marek and Je\.{z}, {\L}ukasz and Sgall, Ji\v{r}{\'\i} and Tatarczuk, Agnieszka},
  title =	{{A 3.3904-Competitive Online Algorithm for List Update with Uniform Costs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{76:1--76:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.76},
  URN =		{urn:nbn:de:0030-drops-245442},
  doi =		{10.4230/LIPIcs.ESA.2025.76},
  annote =	{Keywords: List update, work functions, amortized analysis, online algorithms, competitive analysis}
}
Document
Tolerant Testers for Subgraph-Freeness

Authors: Reut Levi and Jonathan Meiri


Abstract
In this paper we study the problem of tolerantly testing the property of being H-free (which also implies distance approximation from being H-free). In the general-graphs model, we show that tolerant K_k-freeness testing can be achieved with query complexity that is polynomial in the arboricity of the input graph G, arb(G), and independent of the size of G (for graphs in which the average degree is Ω(1)). Specifically for triangles, our algorithm distinguishes graphs which are ε-close to being triangle-free from graphs that are 3ε(1+η)-far from being triangle-free with expected query complexity Õ(arb³(G)) (for constant η and ε). For general k-cliques our algorithm distinguishes graphs which are ε-close to being K_k-free from graphs which are binom(k,2)ε(1+η)-far from being K_k-free with expected query complexity polynomial in k, ε, γ and arb(G). We then generalize our result and provide a similar result for any motif H that is 2-connected and has radius 1. This includes, for example, the wheel graph. Finally, we show that our tester can be applied to the bounded-degree model for tolerantly testing H-freeness for any motif H. The query complexity of the algorithm is polynomial in the degree bound, d, improving on the previous state of the art by Marko and Ron (TALG 2009), which obtained query complexity quasi-polynomial in d.
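
The factor-3 gap for triangles can be made intuitive with a classical greedy bound (our illustration, unrelated to the tester's sampling strategy): repeatedly deleting all three edges of a found triangle removes at most three times the minimum number of edges whose deletion makes the graph triangle-free.

from itertools import combinations

def greedy_triangle_deletion(n, edges):
    # Returns the number of edges deleted; this is at most 3 times the
    # exact edge-deletion distance to triangle-freeness. Brute force,
    # for intuition on small graphs only.
    e = set(map(frozenset, edges))
    removed, changed = 0, True
    while changed:
        changed = False
        for a, b, c in combinations(range(n), 3):
            t = [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]
            if all(x in e for x in t):
                e.difference_update(t)
                removed += 3
                changed = True
                break
    return removed

k4_edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(greedy_triangle_deletion(4, k4_edges))  # 3 (exact distance is 2)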

Cite as

Reut Levi and Jonathan Meiri. Tolerant Testers for Subgraph-Freeness. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 77:1-77:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{levi_et_al:LIPIcs.ESA.2025.77,
  author =	{Levi, Reut and Meiri, Jonathan},
  title =	{{Tolerant Testers for Subgraph-Freeness}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{77:1--77:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.77},
  URN =		{urn:nbn:de:0030-drops-245456},
  doi =		{10.4230/LIPIcs.ESA.2025.77},
  annote =	{Keywords: Tolerant Testing, Property Testing, Subgraph freeness, distance approximation, arboricity}
}
Document
Testing Depth First Search Numbering

Authors: Artur Czumaj, Christian Sohler, and Stefan Walzer


Abstract
Property Testing is a formal framework to study the computational power and complexity of sampling from combinatorial objects. A central goal in standard graph property testing is to understand which graph properties are testable with sublinear query complexity. Here, a graph property P is testable with sublinear query complexity if there is an algorithm that makes a sublinear number of queries to the input graph, accepts with probability at least 2/3 if the graph has property P, and rejects with probability at least 2/3 if it is ε-far from every graph that has property P. In this paper, we introduce a new variant of the bounded-degree graph model. In this variant, in addition to the standard representation of a bounded-degree graph, we assume that every vertex v has a unique label num(v) from {1, … , |V|}, and in addition to the standard queries in the bounded-degree graph model, we also allow a property testing algorithm to query for the label of a vertex (but not for a vertex with a given label). Our new model is motivated by certain graph processes, such as a DFS traversal, which assign consecutive numbers (labels) to the vertices of the graph. We want to study which of these numberings can be tested in sublinear time. As a first step in understanding such a model, we develop a property testing algorithm for the discovery times of a DFS traversal with query complexity O(n^{1/3}/ε), and for constant ε > 0 we give a matching lower bound.
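
The numbering being tested is the standard one (our sketch of the definition): num(v) records the order in which a DFS first discovers v.

def dfs_numbering(adj, root=None):
    # Discovery times of a DFS traversal over adj: {u: [neighbors]}.
    # Recursive; fine for small illustration graphs.
    num, t = {}, [1]
    def visit(u):
        num[u] = t[0]
        t[0] += 1
        for v in adj[u]:
            if v not in num:
                visit(v)
    starts = [root] if root is not None else list(adj)
    for u in starts:
        if u not in num:
            visit(u)
    return num

print(dfs_numbering({1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}, root=1))
# {1: 1, 2: 2, 4: 3, 3: 4}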

Cite as

Artur Czumaj, Christian Sohler, and Stefan Walzer. Testing Depth First Search Numbering. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 78:1-78:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{czumaj_et_al:LIPIcs.ESA.2025.78,
  author =	{Czumaj, Artur and Sohler, Christian and Walzer, Stefan},
  title =	{{Testing Depth First Search Numbering}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{78:1--78:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.78},
  URN =		{urn:nbn:de:0030-drops-245466},
  doi =		{10.4230/LIPIcs.ESA.2025.78},
  annote =	{Keywords: Randomized Algorithms, Graph Algorithms, Property Testing}
}
Document
Semi-Streaming Algorithms for Hypergraph Matching

Authors: Henrik Reinstädtler, S M Ferdous, Alex Pothen, Bora Uçar, and Christian Schulz


Abstract
We propose two one-pass streaming algorithms for the NP-hard hypergraph matching problem. The first algorithm stores a small subset of potential matching edges in a stack using dual variables to select edges. It has an approximation guarantee of 1/(d(1+ε)) and requires 𝒪((n/ε)log²n) bits of memory, where n is the number of vertices in the hypergraph, d is the maximum number of vertices in a hyperedge, and ε > 0 is a parameter to be chosen. The second algorithm computes, stores, and updates a single matching as the edges stream, with an approximation ratio dependent on a parameter α. Its best approximation guarantee is 1/((2d-1) + 2 √{d(d-1)}), and it requires only 𝒪(n) memory. We have implemented both algorithms and compared them with respect to solution quality, memory consumption, and running times on two diverse sets of hypergraphs with a non-streaming greedy and a naive streaming algorithm. Our results show that the streaming algorithms achieve much better solution quality than naive algorithms when facing adverse orderings. Furthermore, these algorithms reduce the memory required by a factor of 13 in the geometric mean on our test problems, and also outperform the offline Greedy algorithm in running time.
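
For contrast with the streaming algorithms, the offline greedy baseline they are compared against can be sketched in a few lines (our sketch; the exact tie-breaking in the paper's Greedy may differ):

def greedy_hypergraph_matching(edges, weight):
    # Scan hyperedges by nonincreasing weight; keep an edge iff it is
    # vertex-disjoint from all edges kept so far. For hyperedges with at
    # most d vertices this is a 1/d-approximation.
    matched, used = [], set()
    for e in sorted(edges, key=weight, reverse=True):
        if used.isdisjoint(e):
            matched.append(e)
            used.update(e)
    return matched

edges = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
print(greedy_hypergraph_matching(edges, weight=len))
# [{1, 2, 3}, {4, 5, 6}]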

Cite as

Henrik Reinstädtler, S M Ferdous, Alex Pothen, Bora Uçar, and Christian Schulz. Semi-Streaming Algorithms for Hypergraph Matching. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 79:1-79:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{reinstadtler_et_al:LIPIcs.ESA.2025.79,
  author =	{Reinst\"{a}dtler, Henrik and Ferdous, S M and Pothen, Alex and U\c{c}ar, Bora and Schulz, Christian},
  title =	{{Semi-Streaming Algorithms for Hypergraph Matching}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{79:1--79:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.79},
  URN =		{urn:nbn:de:0030-drops-245478},
  doi =		{10.4230/LIPIcs.ESA.2025.79},
  annote =	{Keywords: hypergraph, matching, semi-streaming}
}
Document
Online Metric TSP

Authors: Christian Bertram


Abstract
In the online metric traveling salesperson problem, n points of a metric space arrive one by one and have to be placed (immediately and irrevocably) into empty cells of a size-n array. The goal is to minimize the sum of distances between consecutive points in the array. This problem was introduced by Abrahamsen, Bercea, Beretta, Klausen, and Kozma [ESA'24] as a generalization of the online sorting problem, which was introduced by Aamand, Abrahamsen, Beretta, and Kleist [SODA'23] as a tool in their study of online geometric packing problems. Online metric TSP has been studied for a range of fixed metric spaces. For 1-dimensional Euclidean space, the problem is equivalent to online sorting, where an optimal competitive ratio of Θ(√n) is known. For d-dimensional Euclidean space, the best-known upper bound is O(2^d √{dn log n}), leaving a gap to the Ω(√n) lower bound. Finally, for the uniform metric, where all distances are 0 or 1, the optimal competitive ratio is known to be Θ(log n). We study the problem for a general metric space, presenting an algorithm with competitive ratio O(√n). In particular, we close the gap for d-dimensional Euclidean space, completely removing the dependence on dimension. One might hope to simultaneously guarantee competitive ratio O(√n) in general and O(log n) for the uniform metric, but we show that this is impossible.
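
The objective and the offline benchmark are easy to state in code (our brute-force sketch, only to make the competitive ratio concrete):

from itertools import permutations

def cost(arrangement, dist):
    # Objective: sum of distances between consecutive array cells.
    return sum(dist(a, b) for a, b in zip(arrangement, arrangement[1:]))

def offline_opt(points, dist):
    # Exact offline optimum by enumeration; an online algorithm's
    # competitive ratio compares its cost against this value, in the
    # worst case over arrival orders.
    return min(cost(list(p), dist) for p in permutations(points))

print(offline_opt([0.0, 5.0, 1.0, 6.0], lambda a, b: abs(a - b)))  # 6.0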

Cite as

Christian Bertram. Online Metric TSP. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 80:1-80:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bertram:LIPIcs.ESA.2025.80,
  author =	{Bertram, Christian},
  title =	{{Online Metric TSP}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{80:1--80:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.80},
  URN =		{urn:nbn:de:0030-drops-245485},
  doi =		{10.4230/LIPIcs.ESA.2025.80},
  annote =	{Keywords: online algorithm, metric space, TSP}
}
Document
An O(n log n) Algorithm for Single-Source Shortest Paths in Disk Graphs

Authors: Mark de Berg and Sergio Cabello


Abstract
We prove that the single-source shortest-path problem on disk graphs can be solved in O(n log n) expected time, and that it can be solved on intersection graphs of fat triangles in O(n log³ n) time.

Cite as

Mark de Berg and Sergio Cabello. An O(n log n) Algorithm for Single-Source Shortest Paths in Disk Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 81:1-81:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{deberg_et_al:LIPIcs.ESA.2025.81,
  author =	{de Berg, Mark and Cabello, Sergio},
  title =	{{An O(nlog n) Algorithm for Single-Source Shortest Paths in Disk Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{81:1--81:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.81},
  URN =		{urn:nbn:de:0030-drops-245494},
  doi =		{10.4230/LIPIcs.ESA.2025.81},
  annote =	{Keywords: shortest path, geometric intersection graph, disk graph, fat triangles}
}
Document
Buffered Partially-Persistent External-Memory Search Trees

Authors: Gerth Stølting Brodal, Casper Moldrup Rysgaard, and Rolf Svenning


Abstract
We present an optimal partially-persistent external-memory search tree with amortized I/O bounds matching those achieved by the non-persistent B^{ε}-tree by Brodal and Fagerberg [SODA 2003]. In a partially-persistent data structure, each update creates a new version. All past versions can be queried, but only the current version can be updated. Operations should be efficient with respect to the size N_v of the accessed version v. For any parameter 0 < ε < 1, our data structure supports insertions and deletions in amortized 𝒪(1/(ε B^{1 - ε}) log_B N_v) I/Os, where B is the external-memory block size. It also supports successor and range reporting queries in amortized 𝒪(1/ε log_B N_v + K/B) I/Os, where K is the number of keys reported. The space usage of the data structure is linear in the total number of updates. We make the standard and minimal assumption that the internal memory has size M ≥ 2B. The previous state-of-the-art external-memory partially-persistent search tree by Arge, Danner and Teh [JEA 2003] supports all operations in worst-case 𝒪(log_B N_v + K/B) I/Os, matching the bounds achieved by the classical B-tree by Bayer and McCreight [Acta Informatica 1972]. Our data structure successfully combines buffering updates with partial persistence. The I/O bounds can also be achieved in the worst-case sense, by slightly modifying our data structure and under the requirement that the memory size M = Ω(B^{1-ε} log₂(max_v N_v)). For updates, where the I/O bound is o(1), we assume that the I/Os are spread evenly among the updates (by performing buffer overflows incrementally). The worst-case result slightly improves the memory requirement over the previous ephemeral external-memory dictionary by Das, Iacono, and Nekrich (ISAAC 2022), who achieved matching worst-case I/O bounds but required M = Ω(B log_B N), where N is the size of the current dictionary.
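
To make the parameter trade-off concrete, here is the update bound instantiated at two ends of the range (a direct substitution into the stated bound, not a new result):

\[ \mathcal{O}\Big(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N_v\Big) \ \text{I/Os} \quad\Longrightarrow\quad \varepsilon = \tfrac12:\ \mathcal{O}\Big(\frac{\log_B N_v}{\sqrt{B}}\Big), \qquad \varepsilon \to 1:\ \mathcal{O}(\log_B N_v). \]

For constant ε < 1, updates are thus roughly a factor B^{1-ε} cheaper than the O(log_B N_v) updates of a classical B-tree, which is where the buffering pays off.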

Cite as

Gerth Stølting Brodal, Casper Moldrup Rysgaard, and Rolf Svenning. Buffered Partially-Persistent External-Memory Search Trees. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 82:1-82:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{brodal_et_al:LIPIcs.ESA.2025.82,
  author =	{Brodal, Gerth St{\o}lting and Rysgaard, Casper Moldrup and Svenning, Rolf},
  title =	{{Buffered Partially-Persistent External-Memory Search Trees}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{82:1--82:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.82},
  URN =		{urn:nbn:de:0030-drops-245507},
  doi =		{10.4230/LIPIcs.ESA.2025.82},
  annote =	{Keywords: B-tree, buffered updates, partial persistence, external memory}
}
Document
Fault-Tolerant Matroid Bases

Authors: Matthias Bentert, Fedor V. Fomin, Petr A. Golovach, and Laure Morelle


Abstract
We investigate the problem of constructing fault-tolerant bases in matroids. Given a matroid ℳ and a redundancy parameter k, a k-fault-tolerant basis is a minimum-size set of elements such that, even after the removal of any k elements, the remaining subset still spans the entire ground set. Since matroids generalize linear independence across structures such as vector spaces, graphs, and set systems, this problem unifies and extends several fault-tolerant concepts appearing in prior research. Our main contribution is a fixed-parameter tractable (FPT) algorithm for the k-fault-tolerant basis problem, parameterized by both k and the rank r of the matroid. This two-variable parameterization by k + r is shown to be tight in the following sense. On the one hand, parameterizing by k alone does not suffice: the problem is already NP-hard for k = 1. On the other hand, parameterizing by r alone does not suffice either: the problem is para-NP-hard for r ≥ 3, yet polynomial-time solvable for r ≤ 2.
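
A minimal worked example (ours, for illustration, not from the paper): in the uniform matroid U_{r,n}, where a set spans if and only if it contains at least r elements, a set F is a k-fault-tolerant basis exactly when |F| - k ≥ r, so the minimum size is

\[ |F| = r + k \qquad (\text{assuming } n \ge r + k), \]

since removing the k worst elements from F must still leave a spanning set.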

Cite as

Matthias Bentert, Fedor V. Fomin, Petr A. Golovach, and Laure Morelle. Fault-Tolerant Matroid Bases. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 83:1-83:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bentert_et_al:LIPIcs.ESA.2025.83,
  author =	{Bentert, Matthias and Fomin, Fedor V. and Golovach, Petr A. and Morelle, Laure},
  title =	{{Fault-Tolerant Matroid Bases}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{83:1--83:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.83},
  URN =		{urn:nbn:de:0030-drops-245511},
  doi =		{10.4230/LIPIcs.ESA.2025.83},
  annote =	{Keywords: Parameterized Complexity, matroids, robust bases}
}
Document
Property Testing of Curve Similarity

Authors: Peyman Afshani, Maike Buchin, Anne Driemel, Marena Richter, and Sampson Wong


Abstract
We propose sublinear algorithms for probabilistic testing of the discrete and continuous Fréchet distance - a standard similarity measure for curves. We assume the algorithm is given access to the input curves via a query oracle: a query returns the set of vertices of the curve that lie within a radius δ of a specified vertex of the other curve. The goal is to use a small number of queries to determine with constant probability whether the two curves are similar (i.e., their discrete Fréchet distance is at most δ) or they are "ε-far" (for 0 < ε < 2) from being similar, i.e., more than an ε-fraction of the two curves must be ignored for them to become similar. We present two algorithms which are sublinear assuming that the curves are t-approximate shortest paths in the ambient metric space, for some t ≪ n. The first algorithm uses O((t/ε) log(t/ε)) queries and is given the value of t in advance. The second algorithm does not have explicit knowledge of the value of t and therefore needs to gain implicit knowledge of the straightness of the input curves through its queries. We show that the discrete Fréchet distance can still be tested using roughly O((t³ + t² log n)/ε) queries, ignoring logarithmic factors in t. Our algorithms work in a matrix representation of the input and may be of independent interest for matrix testing. Our algorithms use a mild uniform sampling condition that constrains the edge lengths of the curves, similar to a polynomially bounded aspect ratio. Applied to testing the continuous Fréchet distance of t-straight curves, our algorithms can be used for (1+ε')-approximate testing using essentially the same bounds as stated above with an additional factor of poly(1/ε').
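
For concreteness, a minimal Python sketch of the assumed query oracle (our illustration, taking the Euclidean plane as the ambient metric space): a query names a vertex of one curve and returns the indices of the other curve's vertices within distance δ.

import math

def make_oracle(curve, delta):
    # query(point) returns the indices of curve vertices within distance delta
    def query(point):
        px, py = point
        return {j for j, (qx, qy) in enumerate(curve)
                if math.hypot(px - qx, py - qy) <= delta}
    return query

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 0.1), (1, 0.4), (2, 2.0)]
query_Q = make_oracle(Q, delta=0.5)
print([query_Q(p) for p in P])  # [{0}, {1}, set()]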

Cite as

Peyman Afshani, Maike Buchin, Anne Driemel, Marena Richter, and Sampson Wong. Property Testing of Curve Similarity. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 84:1-84:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{afshani_et_al:LIPIcs.ESA.2025.84,
  author =	{Afshani, Peyman and Buchin, Maike and Driemel, Anne and Richter, Marena and Wong, Sampson},
  title =	{{Property Testing of Curve Similarity}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{84:1--84:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.84},
  URN =		{urn:nbn:de:0030-drops-245522},
  doi =		{10.4230/LIPIcs.ESA.2025.84},
  annote =	{Keywords: Fr\'{e}chet distance, Trajectory Analysis, Curve Similarity, Property Testing, Monotonicity Testing}
}
Document
A Simple yet Exact Analysis of the MultiQueue

Authors: Stefan Walzer and Marvin Williams


Abstract
The MultiQueue is a relaxed concurrent priority queue consisting of n internal priority queues, where an insertion uses a random queue and a deletion considers two random queues and deletes the minimum from the one with the smaller minimum. The rank error of a deletion is the number of smaller elements in the MultiQueue. Alistarh et al. [Alistarh et al., 2017] demonstrated, via a sophisticated potential argument, that the expected rank error remains bounded by 𝒪(n) over long sequences of deletions. In this paper we present a simpler analysis by identifying the stationary distribution of an underlying Markov chain and, with it, the exact long-term distribution of the rank error. Simple calculations then reveal the expected long-term rank error to be (5/6)n - 1 + 1/(6n). Our arguments generalize to deletion schemes where the probability to delete from a given queue depends only on the rank of the queue. Specifically, this includes deleting from the best of c randomly selected queues for any c > 1.
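
The process is easy to simulate sequentially; the following Python sketch (ours, ignoring all concurrency) implements the insertion and deletion rules from the abstract and reports the rank error of each deletion.

import heapq
import random

class MultiQueue:
    def __init__(self, n):
        self.queues = [[] for _ in range(n)]

    def insert(self, x):
        # insertion uses a uniformly random internal queue
        heapq.heappush(random.choice(self.queues), x)

    def delete_min(self):
        # deletion samples two queues and pops from the one with the smaller
        # minimum (here: two distinct queues; sampling with replacement
        # behaves essentially the same)
        a, b = random.sample(self.queues, 2)
        candidates = [q for q in (a, b) if q]
        if not candidates:
            return None
        q = min(candidates, key=lambda q: q[0])
        x = heapq.heappop(q)
        # rank error: number of elements in the MultiQueue smaller than x
        rank_error = sum(y < x for qq in self.queues for y in qq)
        return x, rank_error

random.seed(1)
mq = MultiQueue(8)
for v in random.sample(range(1000), 200):
    mq.insert(v)
print([mq.delete_min() for _ in range(5)])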

Cite as

Stefan Walzer and Marvin Williams. A Simple yet Exact Analysis of the MultiQueue. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 85:1-85:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{walzer_et_al:LIPIcs.ESA.2025.85,
  author =	{Walzer, Stefan and Williams, Marvin},
  title =	{{A Simple yet Exact Analysis of the MultiQueue}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{85:1--85:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.85},
  URN =		{urn:nbn:de:0030-drops-245533},
  doi =		{10.4230/LIPIcs.ESA.2025.85},
  annote =	{Keywords: MultiQueue, concurrent data structure, stochastic process, Markov chain}
}
Document
New Algorithms for Pigeonhole Equal Subset Sum

Authors: Ce Jin, Ryan Williams, and Stan Zhang


Abstract
We study the Pigeonhole Equal Subset Sum problem, which is a total-search variant of the Subset Sum problem introduced by Papadimitriou (1994): we are given a set of n positive integers {w₁,…,w_n} with the additional restriction that ∑_{i=1}^n w_i < 2ⁿ - 1, and want to find two different subsets A,B ⊆ [n] such that ∑_{i∈A} w_i = ∑_{i∈B} w_i. Very recently, Jin and Wu (ICALP 2024) gave a randomized algorithm solving Pigeonhole Equal Subset Sum in O^*(2^{0.4n}) time, beating the classical meet-in-the-middle algorithm with O^*(2^{n/2}) runtime. In this paper, we refine Jin and Wu’s techniques to improve the runtime even further to O^*(2^{n/3}).
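
The pigeonhole guarantee behind totality is easy to see in code: with ∑ w_i < 2ⁿ - 1 there are 2ⁿ subsets but at most 2ⁿ - 1 distinct sums, so a collision must exist. A brute-force Python illustration (exponential time, unlike the algorithms above):

def equal_subset_sums(w):
    # sum(w) < 2**n - 1 guarantees a collision among the 2**n subset sums
    n = len(w)
    assert sum(w) < 2**n - 1, "pigeonhole precondition violated"
    seen = {}  # subset sum -> bitmask of a subset attaining it
    for mask in range(2**n):
        s = sum(w[i] for i in range(n) if mask >> i & 1)
        if s in seen:
            return seen[s], mask  # two different subsets with equal sums
        seen[s] = mask

w = [2, 3, 5, 7, 9]  # sum = 26 < 2**5 - 1
A, B = equal_subset_sums(w)
print([x for i, x in enumerate(w) if A >> i & 1],
      [x for i, x in enumerate(w) if B >> i & 1])  # e.g. [2, 3] and [5]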

Cite as

Ce Jin, Ryan Williams, and Stan Zhang. New Algorithms for Pigeonhole Equal Subset Sum. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 86:1-86:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{jin_et_al:LIPIcs.ESA.2025.86,
  author =	{Jin, Ce and Williams, Ryan and Zhang, Stan},
  title =	{{New Algorithms for Pigeonhole Equal Subset Sum}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{86:1--86:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.86},
  URN =		{urn:nbn:de:0030-drops-245541},
  doi =		{10.4230/LIPIcs.ESA.2025.86},
  annote =	{Keywords: pigeonhole principle, subset sums}
}
Document
Multicut Problems in Almost-Planar Graphs: the Dependency of Complexity on the Demand Pattern

Authors: Florian Hörsch and Dániel Marx


Abstract
Given a graph G, a set T of terminal vertices, and a demand graph H on T, the Multicut problem asks for a set of edges of minimum weight that separates the pairs of terminals specified by the edges of H. The Multicut problem can be solved in polynomial time if the number of terminals and the genus of the graph are bounded (Colin de Verdière [Algorithmica, 2017]). Restricting the possible demand graphs in the input leads to special cases of Multicut whose complexity might be different from the general problem. Focke et al. [SoCG 2024] systematically characterized which special cases of Multicut are fixed-parameter tractable parameterized by the number of terminals on planar graphs. Moreover, extending these results beyond planar graphs, they precisely determined how the parameter genus influences the complexity and presented partial results of this form for graphs that can be made planar by the deletion of π edges. Continuing this line of work, we complete the picture on how this parameter π influences the complexity of different special cases and precisely determine the influence of the crossing number, another parameter measuring closeness to planarity. Formally, let ℋ be any class of graphs (satisfying a mild closure property) and let Multicut(ℋ) be the special case when the demand graph H is in ℋ. Our first main result is showing that if ℋ has the combinatorial property of having bounded distance to extended bicliques, then Multicut(ℋ) on unweighted graphs is FPT parameterized by the number t of terminals and π. For the case when ℋ does not have this combinatorial property, Focke et al. [SoCG 2024] showed that O(√t) is essentially the best possible exponent of the running time; together with our result, this gives a complete understanding of how the parameter π influences complexity on unweighted graphs. Our second main result is an algorithm showing that the parameter crossing number behaves analogously when Multicut(ℋ) is considered on weighted graphs.
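
As a reminder of the objects involved, the following Python helper (ours, for illustration) checks whether a given edge set is a multicut for a demand graph, i.e., whether its removal disconnects every demand pair:

from collections import deque

def is_multicut(n, edges, demands, F):
    # adjacency of G with the edges indexed by F removed
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        if idx not in F:
            adj[u].append(v)
            adj[v].append(u)

    def reachable(s):
        seen, dq = {s}, deque([s])
        while dq:
            x = dq.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    dq.append(y)
        return seen

    return all(t not in reachable(s) for s, t in demands)

# path 0-1-2 with demand pair (0, 2): deleting edge 1 (= {1, 2}) separates it
print(is_multicut(3, [(0, 1), (1, 2)], [(0, 2)], F={1}))  # True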

Cite as

Florian Hörsch and Dániel Marx. Multicut Problems in Almost-Planar Graphs: the Dependency of Complexity on the Demand Pattern. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 87:1-87:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{horsch_et_al:LIPIcs.ESA.2025.87,
  author =	{H\"{o}rsch, Florian and Marx, D\'{a}niel},
  title =	{{Multicut Problems in Almost-Planar Graphs: the Dependency of Complexity on the Demand Pattern}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{87:1--87:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.87},
  URN =		{urn:nbn:de:0030-drops-245553},
  doi =		{10.4230/LIPIcs.ESA.2025.87},
  annote =	{Keywords: MultiCut, Multiway Cut, Parameterized Complexity, Tight Bounds, Embedded Graph, Planar Graph, Crossing Number}
}
Document
Parameterized Approximability for Modular Linear Equations

Authors: Konrad K. Dabrowski, Peter Jonsson, Sebastian Ordyniak, George Osipov, and Magnus Wahlström


Abstract
We consider the Min-r-Lin(ℤ_m) problem: given a system S of length-r linear equations modulo m, find Z ⊆ S of minimum cardinality such that S-Z is satisfiable. The problem is NP-hard and UGC-hard to approximate in polynomial time within any constant factor even when r = m = 2. We focus on parameterized approximation with solution size as the parameter. Dabrowski, Jonsson, Ordyniak, Osipov and Wahlström [SODA-2023] showed that Min-r-Lin(ℤ_m) is in FPT if m is prime (i.e. ℤ_m is a field), and it is W[1]-hard if m is not a prime power. We show that Min-r-Lin(ℤ_{pⁿ}) is FPT-approximable within a factor of 2 for every prime p and integer n ≥ 2. This implies that Min-2-Lin(ℤ_m), m ∈ ℤ^+, is FPT-approximable within a factor of 2ω(m) where ω(m) counts the number of distinct prime divisors of m. The high-level idea behind the algorithm is to solve tighter and tighter relaxations of the problem, decreasing the set of possible values for the variables at each step. When working over ℤ_{pⁿ} and viewing the values in base-p, one can roughly think of a relaxation as fixing the number of trailing zeros and the least significant nonzero digits of the values assigned to the variables. To solve the relaxed problem, we construct a certain graph where solutions can be identified with a particular collection of cuts. The relaxation may hide obstructions that will only become visible in the next iteration of the algorithm, which makes it difficult to find optimal solutions. To deal with this, we use a strategy based on shadow removal [Marx & Razgon, STOC-2011] to compute solutions that (1) cost at most twice as much as the optimum and (2) allow us to reduce the set of values for all variables simultaneously. We complement the algorithmic result with two lower bounds, ruling out constant-factor FPT-approximation for Min-3-Lin(R) over any nontrivial ring R and for Min-2-Lin(R) over some finite commutative rings R.
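
A brute-force reference implementation of Min-r-Lin(ℤ_m) on toy instances helps fix the problem statement (Python, ours; exponential in both the number of equations and variables, so it says nothing about the paper's FPT-approximation):

from itertools import combinations, product

def min_deletion_set(equations, num_vars, m):
    # an equation is (coeffs, rhs), meaning sum(coeffs[i] * x_i) = rhs (mod m)
    def satisfiable(eqs):
        return any(all(sum(c * x for c, x in zip(coeffs, xs)) % m == rhs
                       for coeffs, rhs in eqs)
                   for xs in product(range(m), repeat=num_vars))
    for size in range(len(equations) + 1):
        for Z in combinations(range(len(equations)), size):
            rest = [e for i, e in enumerate(equations) if i not in Z]
            if satisfiable(rest):
                return set(Z)

# over Z_2: x0 + x1 = 1, x0 + x1 = 0, x0 - x1 = 0; deleting one equation suffices
eqs = [([1, 1], 1), ([1, 1], 0), ([1, -1], 0)]
print(min_deletion_set(eqs, num_vars=2, m=2))  # {0}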

Cite as

Konrad K. Dabrowski, Peter Jonsson, Sebastian Ordyniak, George Osipov, and Magnus Wahlström. Parameterized Approximability for Modular Linear Equations. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 88:1-88:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{dabrowski_et_al:LIPIcs.ESA.2025.88,
  author =	{Dabrowski, Konrad K. and Jonsson, Peter and Ordyniak, Sebastian and Osipov, George and Wahlstr\"{o}m, Magnus},
  title =	{{Parameterized Approximability for Modular Linear Equations}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{88:1--88:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.88},
  URN =		{urn:nbn:de:0030-drops-245562},
  doi =		{10.4230/LIPIcs.ESA.2025.88},
  annote =	{Keywords: parameterized complexity, approximation algorithms, linear equations}
}
Document
A Simple Algorithm for Trimmed Multipoint Evaluation

Authors: Nick Fischer, Melvin Kallmayer, and Leo Wennmann


Abstract
Evaluating a polynomial on a set of points is a fundamental task in computer algebra. In this work, we revisit a particular variant called trimmed multipoint evaluation: given an n-variate polynomial with bounded individual degree d and total degree D, the goal is to evaluate it on a natural class of input points. This problem arises as a key subroutine in recent algorithmic results [Dinur; SODA '21], [Dell, Haak, Kallmayer, Wennmann; SODA '25]. It is known that trimmed multipoint evaluation can be solved in near-linear time [van der Hoeven, Schost; AAECC '13] by a clever yet somewhat involved algorithm. We give a simple recursive algorithm that avoids heavy computer-algebraic machinery, and can be readily understood by researchers without specialized background.

Cite as

Nick Fischer, Melvin Kallmayer, and Leo Wennmann. A Simple Algorithm for Trimmed Multipoint Evaluation. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 89:1-89:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{fischer_et_al:LIPIcs.ESA.2025.89,
  author =	{Fischer, Nick and Kallmayer, Melvin and Wennmann, Leo},
  title =	{{A Simple Algorithm for Trimmed Multipoint Evaluation}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{89:1--89:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.89},
  URN =		{urn:nbn:de:0030-drops-245574},
  doi =		{10.4230/LIPIcs.ESA.2025.89},
  annote =	{Keywords: Algebraic Algorithms, Multipoint Evaluation, Interpolation, LU Decomposition}
}
Document
Separating Two Points with Obstacles in the Plane: Improved Upper and Lower Bounds

Authors: Jack Spalding-Jamieson and Anurag Murty Naredla


Abstract
Given two points in the plane, and a set of "obstacles" given as curves through the plane with assigned weights, we consider the point-separation problem, which asks for a minimum-weight subset of the obstacles separating the two points. A few computational models for this problem have been previously studied. We give a unified approach to this problem in all models via a reduction to a particular shortest-path problem, and obtain improved running times in essentially all cases. In addition, we also give fine-grained lower bounds for many cases.

Cite as

Jack Spalding-Jamieson and Anurag Murty Naredla. Separating Two Points with Obstacles in the Plane: Improved Upper and Lower Bounds. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 90:1-90:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{spaldingjamieson_et_al:LIPIcs.ESA.2025.90,
  author =	{Spalding-Jamieson, Jack and Naredla, Anurag Murty},
  title =	{{Separating Two Points with Obstacles in the Plane: Improved Upper and Lower Bounds}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{90:1--90:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.90},
  URN =		{urn:nbn:de:0030-drops-245598},
  doi =		{10.4230/LIPIcs.ESA.2025.90},
  annote =	{Keywords: obstacle separation, point separation, geometric intersection graph, Z₂-homology, fine-grained lower bounds}
}
Document
Near-Optimal Differentially Private Graph Algorithms via the Multidimensional AboveThreshold Mechanism

Authors: Laxman Dhulipala, Monika Henzinger, George Z. Li, Quanquan C. Liu, A. R. Sricharan, and Leqi Zhu


Abstract
Many differentially private and classical non-private graph algorithms rely crucially on determining whether some property of each vertex meets a threshold. For example, for the k-core decomposition problem, the classic peeling algorithm iteratively removes a vertex if its induced degree falls below a threshold. The sparse vector technique (SVT) is generally used to transform non-private threshold queries into private ones with only a small additive loss in accuracy. However, a naive application of SVT in the graph setting leads to an amplification of the error by a factor of n due to composition, as SVT is applied to every vertex. In this paper, we resolve this problem by formulating a novel generalized sparse vector technique, which we call the Multidimensional AboveThreshold (MAT) mechanism; it generalizes SVT (applied to vectors with one dimension) to vectors with multiple dimensions. When applied to vectors with n dimensions, we solve a number of important graph problems with better bounds than previous work. Specifically, we apply our MAT mechanism to obtain improved bounds for a variety of problems including k-core decomposition, densest subgraph, low out-degree ordering, and vertex coloring. We give a tight local edge differentially private (LEDP) algorithm for k-core decomposition that results in an approximation with O(ε^{-1} log n) additive error and no multiplicative error in O(n) rounds. We also give a new (2+η)-factor multiplicative, O(ε^{-1} log n) additive error algorithm in O(log² n) rounds for any constant η > 0. Both of these results are asymptotically tight against our new lower bound of Ω(log n) for any constant-factor approximation algorithm for k-core decomposition. Our new algorithms for k-core decomposition also directly lead to new algorithms for the related problems of densest subgraph and low out-degree ordering. Finally, we give novel LEDP defective coloring algorithms that use a number of colors bounded in terms of the arboricity of the graph.
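
For readers unfamiliar with SVT, here is the classical one-dimensional AboveThreshold mechanism that MAT generalizes (the standard textbook version for sensitivity-1 queries; the paper's multidimensional mechanism is not reproduced here):

import random

def above_threshold(query_values, T, eps):
    # Laplace(b) noise sampled as the difference of two exponentials
    lap = lambda b: random.expovariate(1 / b) - random.expovariate(1 / b)
    T_hat = T + lap(2 / eps)           # noisy threshold, drawn once
    answers = []
    for q in query_values:             # sensitivity-1 queries
        if q + lap(4 / eps) >= T_hat:
            answers.append("above")    # report and halt: budget is spent
            break
        answers.append("below")
    return answers

random.seed(0)
print(above_threshold([1, 3, 2, 9, 4], T=5, eps=1.0))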

Cite as

Laxman Dhulipala, Monika Henzinger, George Z. Li, Quanquan C. Liu, A. R. Sricharan, and Leqi Zhu. Near-Optimal Differentially Private Graph Algorithms via the Multidimensional AboveThreshold Mechanism. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 91:1-91:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{dhulipala_et_al:LIPIcs.ESA.2025.91,
  author =	{Dhulipala, Laxman and Henzinger, Monika and Li, George Z. and Liu, Quanquan C. and Sricharan, A. R. and Zhu, Leqi},
  title =	{{Near-Optimal Differentially Private Graph Algorithms via the Multidimensional AboveThreshold Mechanism}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{91:1--91:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.91},
  URN =		{urn:nbn:de:0030-drops-245601},
  doi =		{10.4230/LIPIcs.ESA.2025.91},
  annote =	{Keywords: differential privacy, abovethreshold, densest subgraph}
}
Document
Incremental Maximization for a Broad Class of Objectives

Authors: Yann Disser and David Weckbecker


Abstract
We consider incremental maximization problems, where the solution has to be built up gradually by adding elements one after the other. In every step, the incremental solution must be competitive, compared against the optimum solution of the current cardinality. We prove that a competitive solution always exists when the objective function is monotone and β-accountable, by providing a scaling algorithm that guarantees a constant competitive ratio. This generalizes known results and, importantly, yields the first competitive algorithm for the natural class of monotone and subadditive objective functions.

Cite as

Yann Disser and David Weckbecker. Incremental Maximization for a Broad Class of Objectives. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 92:1-92:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{disser_et_al:LIPIcs.ESA.2025.92,
  author =	{Disser, Yann and Weckbecker, David},
  title =	{{Incremental Maximization for a Broad Class of Objectives}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{92:1--92:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.92},
  URN =		{urn:nbn:de:0030-drops-245613},
  doi =		{10.4230/LIPIcs.ESA.2025.92},
  annote =	{Keywords: incremental maximization, competitive analysis, subadditive functions}
}
Document
Recognizing and Realizing Temporal Reachability Graphs

Authors: Thomas Erlebach, Othon Michail, and Nils Morawietz


Abstract
A temporal graph 𝒢 = (G,λ) can be represented by an underlying graph G = (V,E) together with a function λ that assigns to each edge e ∈ E the set of time steps during which e is present. The reachability graph of 𝒢 is the directed graph D = (V,A) with (u,v) ∈ A if and only if there is a temporal path from u to v. We study the Reachability Graph Realizability (RGR) problem, which asks whether a given directed graph D = (V,A) is the reachability graph of some temporal graph. The question can be asked for undirected or directed temporal graphs, for reachability defined via strict or non-strict temporal paths, and with or without restrictions on λ (simple, proper, or both). Answering an open question posed by Casteigts et al. (TCS 2024), we show that all variants of the problem are NP-complete, except for two variants that become trivial in the directed case. For undirected temporal graphs, we consider the complexity of the problem with respect to the solid graph, that is, the graph containing all edges that could potentially receive a label in any realization. We show that the RGR problem is fixed-parameter tractable parameterized by the feedback edge set number of the solid graph. As we show, the latter parameter can presumably not be replaced by smaller parameters such as the feedback vertex set number or treedepth, since the problem is W[2]-hard for them.
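
The forward direction, computing the reachability graph of a given temporal graph, is straightforward; the following Python sketch (ours) does so for strict temporal paths via an earliest-arrival search from each vertex:

import heapq

def reachability_graph(n, labels):
    # labels: {(u, v): sorted list of time steps for undirected edge {u, v}}
    adj = {u: [] for u in range(n)}
    for (u, v), times in labels.items():
        adj[u].append((v, times))
        adj[v].append((u, times))
    arcs = set()
    for s in range(n):
        arrival = {s: 0}  # we may leave s at any time step >= 1
        heap = [(0, s)]
        while heap:
            t, u = heapq.heappop(heap)
            if t > arrival.get(u, float("inf")):
                continue  # stale heap entry
            for v, times in adj[u]:
                nxt = next((tau for tau in times if tau > t), None)  # strict
                if nxt is not None and nxt < arrival.get(v, float("inf")):
                    arrival[v] = nxt
                    heapq.heappush(heap, (nxt, v))
        arcs |= {(s, v) for v in arrival if v != s}
    return arcs

# edge {0,1} active at time 2, edge {1,2} only at time 1:
# vertex 2 is not reachable from 0, but 0 is reachable from 2
print(reachability_graph(3, {(0, 1): [2], (1, 2): [1]}))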

Cite as

Thomas Erlebach, Othon Michail, and Nils Morawietz. Recognizing and Realizing Temporal Reachability Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 93:1-93:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{erlebach_et_al:LIPIcs.ESA.2025.93,
  author =	{Erlebach, Thomas and Michail, Othon and Morawietz, Nils},
  title =	{{Recognizing and Realizing Temporal Reachability Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{93:1--93:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.93},
  URN =		{urn:nbn:de:0030-drops-245627},
  doi =		{10.4230/LIPIcs.ESA.2025.93},
  annote =	{Keywords: parameterized complexity, temporal graphs, FPT algorithm, feedback edge set, directed graph recognition}
}
Document
Faster Algorithm for Bounded Tree Edit Distance in the Low-Distance Regime

Authors: Tomasz Kociumaka and Ali Shahali


Abstract
The tree edit distance is a natural dissimilarity measure between rooted ordered trees whose nodes are labeled over an alphabet Σ. It is defined as the minimum number of node edits - insertions, deletions, and relabelings - required to transform one tree into the other. The weighted variant assigns costs ≥ 1 to edits (based on node labels), minimizing total cost rather than edit count. The unweighted tree edit distance between two trees of total size n can be computed in 𝒪(n^{2.6857}) time; in contrast, determining the weighted tree edit distance is fine-grained equivalent to the All-Pairs Shortest Paths (APSP) problem and requires n³/2^Ω(√{log n}) time [Nogler, Polak, Saha, Vassilevska Williams, Xu, Ye; STOC'25]. These impractical super-quadratic times for large, similar trees motivate the bounded version, which parameterizes the runtime by the distance k to enable faster algorithms for k ≪ n. Prior algorithms for bounded unweighted tree edit distance achieve 𝒪(nk² log n) [Akmal & Jin; ICALP'21] and 𝒪(n + k⁷ log k) [Das, Gilbert, Hajiaghayi, Kociumaka, Saha; STOC'23]. For the weighted variant, only 𝒪(n + k^{15}) is known [Das, Gilbert, Hajiaghayi, Kociumaka, Saha; STOC'23]. We present an 𝒪(n + k⁶ log k)-time algorithm for bounded tree edit distance in both the weighted and unweighted settings. First, we devise a simpler weighted 𝒪(nk² log n)-time algorithm. Next, we exploit periodic structures in the input trees via an optimized universal kernel: we modify prior 𝒪(n)-time, 𝒪(k⁵)-size kernels so that they generate such structured instances, enabling efficient analysis.

Cite as

Tomasz Kociumaka and Ali Shahali. Faster Algorithm for Bounded Tree Edit Distance in the Low-Distance Regime. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 94:1-94:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kociumaka_et_al:LIPIcs.ESA.2025.94,
  author =	{Kociumaka, Tomasz and Shahali, Ali},
  title =	{{Faster Algorithm for Bounded Tree Edit Distance in the Low-Distance Regime}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{94:1--94:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.94},
  URN =		{urn:nbn:de:0030-drops-245634},
  doi =		{10.4230/LIPIcs.ESA.2025.94},
  annote =	{Keywords: tree edit distance, edit distance, kernelization, dynamic programming}
}
Document
The Planted Orthogonal Vectors Problem

Authors: David Kühnemann, Adam Polak, and Alon Rosen


Abstract
In the k-Orthogonal Vectors (k-OV) problem we are given k sets, each containing n binary vectors of dimension d = n^{o(1)}, and our goal is to pick one vector from each set so that at each coordinate at least one vector has a zero. It is a central problem in fine-grained complexity, conjectured to require n^{k-o(1)} time in the worst case. We propose a way to plant a solution among vectors with i.i.d. p-biased entries, for appropriately chosen p, so that the planted solution is the unique one. Our conjecture is that the resulting k-OV instances still require time n^{k-o(1)} to solve, on average. Our planted distribution has the property that any subset of strictly less than k vectors has the same marginal distribution as in the model distribution, consisting of i.i.d. p-biased random vectors. We use this property to give average-case search-to-decision reductions for k-OV.
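
The model distribution is simple to sample; the sketch below (Python, ours) generates i.i.d. p-biased instances and checks the k-OV condition that at every coordinate at least one vector of the tuple has a zero. The planting step itself is the paper's construction and is not reproduced:

import random

def biased_instance(k, n, d, p, seed=0):
    # k sets of n vectors with i.i.d. p-biased {0,1} entries (the model
    # distribution; the planted distribution modifies this)
    rng = random.Random(seed)
    return [[[int(rng.random() < p) for _ in range(d)] for _ in range(n)]
            for _ in range(k)]

def is_orthogonal(vectors):
    # k-OV condition: at every coordinate, at least one vector has a zero
    return all(min(v[j] for v in vectors) == 0 for j in range(len(vectors[0])))

sets = biased_instance(k=2, n=5, d=8, p=0.5)
print([(i, j) for i, u in enumerate(sets[0])
       for j, v in enumerate(sets[1]) if is_orthogonal((u, v))])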

Cite as

David Kühnemann, Adam Polak, and Alon Rosen. The Planted Orthogonal Vectors Problem. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 95:1-95:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kuhnemann_et_al:LIPIcs.ESA.2025.95,
  author =	{K\"{u}hnemann, David and Polak, Adam and Rosen, Alon},
  title =	{{The Planted Orthogonal Vectors Problem}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{95:1--95:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.95},
  URN =		{urn:nbn:de:0030-drops-245640},
  doi =		{10.4230/LIPIcs.ESA.2025.95},
  annote =	{Keywords: Average-case complexity, fine-grained complexity, orthogonal vectors}
}
Document
Counting Small Induced Subgraphs: Scorpions Are Easy but Not Trivial

Authors: Radu Curticapean, Simon Döring, and Daniel Neuen


Abstract
In the parameterized problem #IndSub(Φ) for fixed graph properties Φ, given as input a graph G and an integer k, the task is to compute the number of induced k-vertex subgraphs satisfying Φ. Dörfler et al. [Algorithmica 2022] and Roth et al. [SICOMP 2024] conjectured that #IndSub(Φ) is #W[1]-hard for all non-meager properties Φ, i.e., properties that are nontrivial for infinitely many k. This conjecture has been confirmed for several restricted types of properties, including all hereditary properties [STOC 2022] and all edge-monotone properties [STOC 2024]. We refute this conjecture by showing that induced k-vertex subgraphs that are scorpions can be counted in O(n⁴) time for all k. Scorpions were introduced more than 50 years ago in the context of the evasiveness conjecture. A simple variant of this construction results in graph properties that achieve arbitrary intermediate complexity assuming ETH. Moreover, we formulate an updated conjecture on the complexity of #IndSub(Φ) that correctly captures the complexity status of scorpions and related constructions.

Cite as

Radu Curticapean, Simon Döring, and Daniel Neuen. Counting Small Induced Subgraphs: Scorpions Are Easy but Not Trivial. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 96:1-96:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{curticapean_et_al:LIPIcs.ESA.2025.96,
  author =	{Curticapean, Radu and D\"{o}ring, Simon and Neuen, Daniel},
  title =	{{Counting Small Induced Subgraphs: Scorpions Are Easy but Not Trivial}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{96:1--96:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.96},
  URN =		{urn:nbn:de:0030-drops-245651},
  doi =		{10.4230/LIPIcs.ESA.2025.96},
  annote =	{Keywords: induced subgraphs, counting complexity, parameterized complexity, scorpions}
}
Document
A Fast and Simple Algorithm for the Resource Constrained Shortest Path Problem

Authors: Saman Ahmadi, Andrea Raith, and Mahdi Jalili


Abstract
Constrained pathfinding is a classic yet challenging network optimization problem with broad applicability across many real-world domains. The Resource-Constrained Shortest Path (RCSP) problem focuses on finding cost-optimal paths that satisfy multiple resource constraints. In this paper, we propose a novel heuristic-guided search framework that accelerates constrained search in large-scale networks, including those with negative costs and resources, by leveraging efficient queuing and pruning strategies. Experimental results on real-world benchmark maps show that our framework achieves up to two orders of magnitude speedup over state-of-the-art methods, demonstrating its effectiveness in solving challenging RCSP instances within limited time.

Cite as

Saman Ahmadi, Andrea Raith, and Mahdi Jalili. A Fast and Simple Algorithm for the Resource Constrained Shortest Path Problem. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 97:1-97:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ahmadi_et_al:LIPIcs.ESA.2025.97,
  author =	{Ahmadi, Saman and Raith, Andrea and Jalili, Mahdi},
  title =	{{A Fast and Simple Algorithm for the Resource Constrained Shortest Path Problem}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{97:1--97:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.97},
  URN =		{urn:nbn:de:0030-drops-245668},
  doi =		{10.4230/LIPIcs.ESA.2025.97},
  annote =	{Keywords: constrained pathfinding, shortest path problem, heuristic search}
}
Document
Fine-Grained Classification of Detecting Dominating Patterns

Authors: Jonathan Dransfeld, Marvin Künnemann, and Mirza Redzic


Abstract
We consider the following generalization of dominating sets: let G be a host graph and P a pattern graph. A dominating P-pattern in G is a subset S of vertices in G that (1) forms a dominating set in G and (2) induces a subgraph isomorphic to P. The graph theory literature studies the properties of dominating P-patterns for various patterns P, including cliques, matchings, independent sets, cycles and paths. Previous work (Künnemann, Redzic 2024) obtains algorithms and conditional lower bounds for detecting dominating P-patterns, in particular for P being a k-clique, a k-independent set and a k-matching. Their results give conditionally tight lower bounds if k is sufficiently large (where the bound depends on the matrix multiplication exponent ω). We ask: Can we obtain a classification of the fine-grained complexity for all patterns P? Indeed, we define a graph parameter ρ(P) such that if ω = 2, then (n^{ρ(P)} m^{(|V(P)|-ρ(P))/2})^{1±o(1)} is the optimal running time assuming the Orthogonal Vectors Hypothesis, for all patterns P except the triangle K₃. Here, the host graph G has n vertices and m = Θ(n^α) edges, where 1 ≤ α ≤ 2. The parameter ρ(P) is closely related to (but sometimes different from) a parameter δ(P) = max_{S ⊆ V(P)} |S|-|N(S)| studied in (Alon 1981) to tightly quantify the maximum number of occurrences of induced subgraphs isomorphic to P. Our results stand in contrast to the lack of a full fine-grained classification of detecting an arbitrary (not necessarily dominating) induced P-pattern.
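
A brute-force reference for the problem definition (Python, ours; exponential, so it says nothing about the fine-grained exponent the paper determines):

from itertools import combinations, permutations

def has_dominating_pattern(G, P):
    # G and P are adjacency sets, indexed 0..n-1 and 0..k-1 respectively
    n, k = len(G), len(P)
    for S in combinations(range(n), k):
        # (1) S must dominate G
        if not all(v in S or any(u in G[v] for u in S) for v in range(n)):
            continue
        # (2) the subgraph induced by S must be isomorphic to P
        for perm in permutations(S):
            if all((perm[b] in G[perm[a]]) == (b in P[a])
                   for a in range(k) for b in range(a + 1, k)):
                return True
    return False

C4 = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]  # 4-cycle host graph
I2 = [set(), set()]                    # pattern: independent set of size 2
print(has_dominating_pattern(C4, I2))  # True: two opposite vertices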

Cite as

Jonathan Dransfeld, Marvin Künnemann, and Mirza Redzic. Fine-Grained Classification of Detecting Dominating Patterns. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 98:1-98:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{dransfeld_et_al:LIPIcs.ESA.2025.98,
  author =	{Dransfeld, Jonathan and K\"{u}nnemann, Marvin and Redzic, Mirza},
  title =	{{Fine-Grained Classification of Detecting Dominating Patterns}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{98:1--98:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.98},
  URN =		{urn:nbn:de:0030-drops-245679},
  doi =		{10.4230/LIPIcs.ESA.2025.98},
  annote =	{Keywords: fine-grained complexity theory, domination in graphs, subgraph isomorphism, classification theorem, parameterized algorithms}
}
Document
Engineering Minimal k-Perfect Hash Functions

Authors: Stefan Hermann, Sebastian Kirmayer, Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer


Abstract
Given a set S of n keys, a k-perfect hash function (kPHF) is a data structure that maps the keys to the first m integers, where each output integer can be hit by at most k input keys. When m = ⌈n/k⌉, the resulting function is called a minimal k-perfect hash function (MkPHF). Applications of kPHFs can be found in external-memory data structures and in the construction of efficient 1-perfect hash functions, which in turn have a wide range of applications from databases to bioinformatics. Several papers from the 1980s look at external-memory data structures with small internal-memory indexes. However, actual k-perfect hash functions are surprisingly rare, and the area has not seen much research recently. At the same time, recent research in 1-perfect hashing shows that there is a lack of efficient kPHFs. In this paper, we revive the area of k-perfect hashing, presenting four new constructions. Our implementations simultaneously dominate older approaches in space consumption, construction time, and query time. We see this paper as a possible starting point of an active line of research, similar to the area of 1-perfect hashing.
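
The definition is easy to pin down in code; this small Python checker (ours, not one of the paper's constructions) verifies k-perfectness and minimality:

from collections import Counter
from math import ceil

def is_k_perfect(h, keys, m, k):
    # every key maps into [0, m) and no output value is hit more than k times
    buckets = Counter(h(x) for x in keys)
    return all(0 <= b < m for b in buckets) and max(buckets.values()) <= k

keys = list(range(10))
k = 3
m = ceil(len(keys) / k)      # m = 4, so a valid h below is minimal (an MkPHF)
h = lambda x: x // k         # buckets {0,1,2}, {3,4,5}, {6,7,8}, {9}
print(is_k_perfect(h, keys, m, k))  # True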

Cite as

Stefan Hermann, Sebastian Kirmayer, Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. Engineering Minimal k-Perfect Hash Functions. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 99:1-99:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{hermann_et_al:LIPIcs.ESA.2025.99,
  author =	{Hermann, Stefan and Kirmayer, Sebastian and Lehmann, Hans-Peter and Sanders, Peter and Walzer, Stefan},
  title =	{{Engineering Minimal k-Perfect Hash Functions}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{99:1--99:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.99},
  URN =		{urn:nbn:de:0030-drops-245685},
  doi =		{10.4230/LIPIcs.ESA.2025.99},
  annote =	{Keywords: Compressed Data Structures, Perfect Hashing}
}
Document
Cut-Query Algorithms with Few Rounds

Authors: Yotam Kenneth-Mordoch and Robert Krauthgamer


Abstract
In the cut-query model, the algorithm can access the input graph G = (V,E) only via cut queries that report, given a set S ⊆ V, the total weight of edges crossing the cut between S and V ∖ S. This model was introduced by Rubinstein, Schramm and Weinberg [ITCS'18] and its investigation has so far focused on the number of queries needed to solve optimization problems, such as global minimum cut. We turn attention to the round complexity of cut-query algorithms, and show that several classical problems can be solved in this model with only a constant number of rounds. Our main results are algorithms for finding a minimum cut in a graph that offer different tradeoffs between round complexity and query complexity, where n = |V| and δ(G) denotes the minimum degree of G: (i) Õ(n^{4/3}) cut queries in two rounds in unweighted graphs; (ii) Õ(rn^{1+1/r}/δ(G)^{1/r}) queries in 2r+1 rounds for any integer r ≥ 1, again in unweighted graphs; and (iii) Õ(rn^{1+(1+log_n W)/r}) queries in 4r+3 rounds for any integer r ≥ 1 in weighted graphs. We also provide algorithms that find a minimum (s,t)-cut and approximate the maximum cut in a few rounds.
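
A cut-query oracle is trivial to simulate, and simple identities already show how much the queries reveal: for disjoint S and T, the weight between them is (cut(S) + cut(T) - cut(S ∪ T))/2. A Python illustration (ours):

def make_cut_oracle(weighted_edges):
    # cut(S) = total weight of edges with exactly one endpoint in S
    def cut(S):
        S = set(S)
        return sum(w for u, v, w in weighted_edges if (u in S) != (v in S))
    return cut

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 5.0)]  # weighted triangle
cut = make_cut_oracle(edges)
print(cut({0}))                            # weighted degree of vertex 0: 7.0
S, T = {0}, {2}
print((cut(S) + cut(T) - cut(S | T)) / 2)  # weight between S and T: 5.0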

Cite as

Yotam Kenneth-Mordoch and Robert Krauthgamer. Cut-Query Algorithms with Few Rounds. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 100:1-100:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kennethmordoch_et_al:LIPIcs.ESA.2025.100,
  author =	{Kenneth-Mordoch, Yotam and Krauthgamer, Robert},
  title =	{{Cut-Query Algorithms with Few Rounds}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{100:1--100:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.100},
  URN =		{urn:nbn:de:0030-drops-245692},
  doi =		{10.4230/LIPIcs.ESA.2025.100},
  annote =	{Keywords: Cut Queries, Round Complexity, Submodular Optimization}
}
Document
Improved Algorithms for Quantum MaxCut via Partially Entangled Matchings

Authors: Anuj Apte, Eunou Lee, Kunal Marwaha, Ojas Parekh, and James Sud


Abstract
We introduce a 0.611-approximation algorithm for Quantum MaxCut and a (1+√5)/4 ≈ 0.809-approximation algorithm for the EPR Hamiltonian of [King, 2023]. A novel ingredient in both of these algorithms is to partially entangle pairs of qubits associated to edges in a matching, while preserving the direction of their single-qubit Bloch vectors. This allows us to interpolate between product states and matching-based states with a tunable parameter.

Cite as

Anuj Apte, Eunou Lee, Kunal Marwaha, Ojas Parekh, and James Sud. Improved Algorithms for Quantum MaxCut via Partially Entangled Matchings. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 101:1-101:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{apte_et_al:LIPIcs.ESA.2025.101,
  author =	{Apte, Anuj and Lee, Eunou and Marwaha, Kunal and Parekh, Ojas and Sud, James},
  title =	{{Improved Algorithms for Quantum MaxCut via Partially Entangled Matchings}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{101:1--101:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.101},
  URN =		{urn:nbn:de:0030-drops-245705},
  doi =		{10.4230/LIPIcs.ESA.2025.101},
  annote =	{Keywords: Quantum computing, Quantum MaxCut, Maximum matching}
}
Document
A Combinatorial Proof of Universal Optimality for Computing a Planar Convex Hull

Authors: Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann


Abstract
For a planar point set P, its convex hull is the smallest convex polygon that encloses all points in P. The construction of the convex hull from an array I_P containing P is a fundamental problem in computational geometry. By sorting I_P in lexicographical order, one can construct the convex hull of P in O(n log n) time, which is worst-case optimal. Standard worst-case analysis, however, has been criticized as overly coarse or pessimistic, and researchers have sought more refined analyses. For an algorithm A, worst-case analysis fixes n, and considers the maximum running time of A across all size-n point sets P and permutations I_P of P. Output-sensitive analysis fixes n and k, and considers the maximum running time across all size-n point sets P with k hull points and permutations I_P of P. Universal analysis provides an even stronger guarantee: it fixes a point set P and considers the maximum running time across all permutations I_P of P. Kirkpatrick, McQueen, and Seidel [SICOMP'86] consider output-sensitive analysis. If the convex hull of P contains k points, then their algorithm runs in O(n log k) time. Afshani, Barbay, and Chan [FOCS'07] prove that the algorithm by Kirkpatrick, McQueen, and Seidel is also universally optimal. Their proof restricts the model of computation to algebraic decision tree models in which the test functions have at most constant degree and at most a constant number of arguments. They rely upon involved algebraic arguments to construct, for each point set P, a lower bound that matches the universal running time of [SICOMP'86]. We provide a different proof of universal optimality. Instead of restricting the computational model, we further specify the output. We require as output (1) the convex hull, and (2) for each internal point of P, a witness for it being internal. Our argument is shorter, perhaps simpler, and applicable in more general models of computation.
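
For concreteness, here is the classical sort-then-scan route mentioned above, as Andrew's monotone chain in Python (standard textbook code, not the universally optimal algorithm of [SICOMP'86]):

def convex_hull(points):
    # sort lexicographically, then build lower and upper chains in O(n log n)
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(pts):
        chain = []
        for p in pts:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()  # pop points that make a non-left turn
            chain.append(p)
        return chain
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]   # counter-clockwise hull

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]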

Cite as

Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann. A Combinatorial Proof of Universal Optimality for Computing a Planar Convex Hull. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 102:1-102:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{vanderhoog_et_al:LIPIcs.ESA.2025.102,
  author =	{van der Hoog, Ivor and Rotenberg, Eva and Rutschmann, Daniel},
  title =	{{A Combinatorial Proof of Universal Optimality for Computing a Planar Convex Hull}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{102:1--102:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.102},
  URN =		{urn:nbn:de:0030-drops-245715},
  doi =		{10.4230/LIPIcs.ESA.2025.102},
  annote =	{Keywords: Convex hull, Combinatorial proofs, Universal optimality}
}
Document
On the Satisfiability of Random 3-SAT Formulas with k-Wise Independent Clauses

Authors: Ioannis Caragiannis, Nick Gravin, and Zhile Jiang


Abstract
The problem of identifying the satisfiability threshold of random 3-SAT formulas has received a lot of attention during the last decades and has inspired the study of other threshold phenomena in random combinatorial structures. The classical assumption in this line of research is that, for a given set of n Boolean variables, each clause is drawn uniformly at random among all sets of three literals from these variables, independently of the other clauses. Here, we keep the uniform distribution of each clause, but deviate significantly from the independence assumption and consider richer families of probability distributions. For integer parameters n, m, and k, we denote by ℱ_k(n,m) the family of probability distributions that produce formulas with m clauses, each selected uniformly at random from all sets of three literals from the n variables, so that the clauses are k-wise independent. Our aim is to make general statements about the satisfiability or unsatisfiability of formulas produced by distributions in ℱ_k(n,m) for different values of the parameters n, m, and k. Our technical results are as follows: First, all probability distributions in ℱ₂(n,m) with m ∈ Ω(n³) return unsatisfiable formulas with high probability. This result is tight: we show that there exists a probability distribution 𝒟 ∈ ℱ₃(n,m) with m ∈ O(n³) so that a random formula drawn from 𝒟 is almost always satisfiable. In contrast, for m ∈ Ω(n²), any probability distribution 𝒟 ∈ ℱ₄(n,m) returns an unsatisfiable formula with high probability. This is our most surprising and technically involved result. Finally, for any integer k ≥ 2, any probability distribution 𝒟 ∈ ℱ_k(n,m) with m ∈ O(n^{1-1/k}) returns a satisfiable formula with high probability.
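
For reference, the classical fully independent model is easy to sample; the Python sketch below (ours, reading "sets of three literals" as three distinct variables with random signs) pairs it with a brute-force satisfiability check for tiny instances:

import random
from itertools import product

def random_3sat(n, m, rng):
    # m i.i.d. clauses, each on three distinct variables with random signs
    return [tuple((v, rng.random() < 0.5) for v in rng.sample(range(n), 3))
            for _ in range(m)]

def satisfiable(n, clauses):
    # brute force over all 2**n assignments (tiny instances only)
    return any(all(any(assign[v] == sign for v, sign in clause)
                   for clause in clauses)
               for assign in product([False, True], repeat=n))

rng = random.Random(42)
formula = random_3sat(n=6, m=20, rng=rng)
print(satisfiable(6, formula))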

Cite as

Ioannis Caragiannis, Nick Gravin, and Zhile Jiang. On the Satisfiability of Random 3-SAT Formulas with k-Wise Independent Clauses. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 103:1-103:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{caragiannis_et_al:LIPIcs.ESA.2025.103,
  author =	{Caragiannis, Ioannis and Gravin, Nick and Jiang, Zhile},
  title =	{{On the Satisfiability of Random 3-SAT Formulas with k-Wise Independent Clauses}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{103:1--103:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.103},
  URN =		{urn:nbn:de:0030-drops-245721},
  doi =		{10.4230/LIPIcs.ESA.2025.103},
  annote =	{Keywords: Random 3-SAT, k-wise independence, Random bipartite graph}
}
Document
Optimal Antimatroid Sorting

Authors: Benjamin Aram Berendsohn


Abstract
The classical comparison-based sorting problem asks us to find the underlying total ordering of a given set of elements, where we can only access the elements via comparisons. In this paper, we study a restricted version where, as a hint, a set T of possible total orderings is given, usually in some compressed form. Recently, an algorithm called topological heapsort with optimal running time was found for the case where T is the set of topological orderings of a given directed acyclic graph, or, equivalently, T is the set of linear extensions of a partial ordering [Haeupler et al. 2024]. We show that a simple generalization of topological heapsort is applicable to a much broader class of restricted sorting problems, where T corresponds to a given antimatroid. As a consequence, we obtain optimal algorithms for the following restricted sorting problems, where the allowed total orders are … - … restricted by a given set of monotone precedence formulas; - … the perfect elimination orders of a given chordal graph; or - … the possible vertex search orders of a given connected rooted graph.
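
To make the base case concrete, here is a minimal sketch of the heap-based idea for the DAG case (a simplified flavor only; the optimal topological heapsort of Haeupler et al. and its antimatroid generalization in this paper are more refined): repeatedly extract the minimum among the currently available source elements.

import heapq

def dag_restricted_sort(values, edges):
    # values: distinct comparable elements whose sorted order is sought;
    # edges: index pairs (i, j) meaning element i precedes element j in every
    # allowed output. Assumes the sorted order is a linear extension of the DAG.
    n = len(values)
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for i, j in edges:
        succ[i].append(j)
        indeg[j] += 1
    heap = [(values[i], i) for i in range(n) if indeg[i] == 0]
    heapq.heapify(heap)                  # heap holds only "available" elements
    out = []
    while heap:
        v, i = heapq.heappop(heap)       # smallest available element comes next
        out.append(v)
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(heap, (values[j], j))
    return out

# The more the DAG constrains the outcome, the smaller the heap stays and the
# fewer comparisons are spent.
print(dag_restricted_sort([5, 1, 4, 2, 3], [(1, 3), (3, 4), (4, 2), (2, 0)]))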

Cite as

Benjamin Aram Berendsohn. Optimal Antimatroid Sorting. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 104:1-104:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{berendsohn:LIPIcs.ESA.2025.104,
  author =	{Berendsohn, Benjamin Aram},
  title =	{{Optimal Antimatroid Sorting}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{104:1--104:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.104},
  URN =		{urn:nbn:de:0030-drops-245735},
  doi =		{10.4230/LIPIcs.ESA.2025.104},
  annote =	{Keywords: sorting, working-set heap, greedy, antimatroid}
}
Document
Parameterized Algorithms for Computing Pareto Sets

Authors: Joshua Marc Könen, Heiko Röglin, and Tarek Stuck


Abstract
The problem of computing the set of Pareto-optimal solutions has been studied for a variety of multiobjective optimization problems. For many such problems, algorithms are known that compute the Pareto set in (weak) output-polynomial time. These algorithms are often based on dynamic programming; by weak output-polynomial time, we mean that the running time depends polynomially not only on the size of the Pareto set but also on the sizes of the Pareto sets of the subproblems that occur in the dynamic program. For some problems, like the multiobjective minimum spanning tree problem, such algorithms are not known to exist, and for other problems, like multiobjective versions of many NP-hard problems, such algorithms cannot exist unless 𝒫 = 𝒩𝒫. Dynamic programming over tree decompositions is a common technique in parameterized algorithms. In this paper, we study whether this technique can also be applied to compute Pareto sets of multiobjective optimization problems. We first derive an algorithm to compute the Pareto set for the multicriteria s-t cut problem and show how this result can be applied to a polygon aggregation problem arising in cartography that was recently introduced by Rottmann et al. (GIScience 2021). We then show how to apply these techniques to compute the Pareto sets of the multiobjective minimum spanning tree problem and the multiobjective TSP. The running time of our algorithms is O(f(w)⋅poly(n,p_{max})), where f is some function of the treewidth w, n is the input size, and p_{max} is an upper bound on the size of the Pareto sets of the subproblems that occur in the dynamic program. Finally, we present an experimental evaluation of computing Pareto sets on real-world instances of polygon aggregation problems. To this end, we devised a task-specific data structure that allows for efficient storage and modification of large sets of Pareto-optimal solutions. Throughout the implementation process, we incorporated several improved strategies and heuristics that significantly reduced both runtime and memory usage, enabling us to solve instances with treewidth up to 22 within a reasonable amount of time. Moreover, we conducted a preprocessing study comparing different tree decompositions in terms of their estimated overall runtime.
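
A recurring primitive in such dynamic programs is keeping only the non-dominated cost vectors of each subproblem. A minimal two-objective sketch (illustrative only; the paper's task-specific data structure is considerably more engineered):

def pareto_filter(points):
    # Keep the non-dominated (a, b) pairs under coordinate-wise minimization.
    front, best_b = [], float("inf")
    for a, b in sorted(set(points)):     # sort by a, tie-broken by b
        if b < best_b:
            front.append((a, b))
            best_b = b
    return front

def combine(front1, front2):
    # A typical DP transition: add the fronts of two subproblems element-wise,
    # then prune dominated sums.
    return pareto_filter([(a1 + a2, b1 + b2)
                          for a1, b1 in front1 for a2, b2 in front2])

print(pareto_filter([(1, 5), (2, 3), (2, 4), (3, 3), (4, 1)]))  # [(1,5),(2,3),(4,1)]
print(combine([(0, 2), (1, 0)], [(0, 1), (2, 0)]))              # [(0,3),(1,1),(3,0)]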

Cite as

Joshua Marc Könen, Heiko Röglin, and Tarek Stuck. Parameterized Algorithms for Computing Pareto Sets. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 105:1-105:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{konen_et_al:LIPIcs.ESA.2025.105,
  author =	{K\"{o}nen, Joshua Marc and R\"{o}glin, Heiko and Stuck, Tarek},
  title =	{{Parameterized Algorithms for Computing Pareto Sets}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{105:1--105:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.105},
  URN =		{urn:nbn:de:0030-drops-245749},
  doi =		{10.4230/LIPIcs.ESA.2025.105},
  annote =	{Keywords: parameterized algorithms, treewidth, multicriteria optimization problems, multicriteria MST, multicriteria TSP, polygon aggregation}
}
Document
On Estimating the Quantum 𝓁_α Distance

Authors: Yupan Liu and Qisheng Wang


Abstract
We study the computational complexity of estimating the quantum 𝓁_α distance T_α(ρ₀,ρ₁), defined via the Schatten α-norm ‖A‖_α := tr(|A|^α)^{1/α}, given poly(n)-size state-preparation circuits of n-qubit quantum states ρ₀ and ρ₁. This quantity serves as a lower bound on the trace distance for α > 1. For any constant α > 1, we develop an efficient rank-independent quantum estimator for T_α(ρ₀,ρ₁) with time complexity poly(n), achieving an exponential speedup over the prior best results of exp(n) due to Wang, Guan, Liu, Zhang, and Ying (IEEE Trans. Inf. Theory 2024). Our improvement leverages efficiently computable uniform polynomial approximations of signed positive power functions within quantum singular value transformation, thereby eliminating the dependence on the rank of the states. Our quantum algorithm reveals a dichotomy in the computational complexity of the Quantum State Distinguishability Problem with Schatten α-norm (QSD_α), which involves deciding whether T_α(ρ₀,ρ₁) is at least 2/5 or at most 1/5. This dichotomy arises between the cases of constant α > 1 and α = 1: - For any 1+Ω(1) ≤ α ≤ O(1), QSD_α is BQP-complete. - For any 1 ≤ α ≤ 1+1/n, QSD_α is QSZK-complete, implying that no efficient quantum estimator for T_α(ρ₀,ρ₁) exists unless BQP = QSZK. The hardness results follow from reductions based on new rank-dependent inequalities for the quantum 𝓁_α distance with 1 ≤ α ≤ ∞, which are of independent interest.
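
For intuition about the quantity itself (not about the quantum estimator): given explicit density matrices, the Schatten α-norm of ρ₀-ρ₁ is just a power sum of eigenvalues. A classical numpy sketch, assuming the normalization T_α(ρ₀,ρ₁) = ½‖ρ₀-ρ₁‖_α, which recovers the trace distance at α = 1:

import numpy as np

def schatten_norm(A, alpha):
    # ||A||_alpha = tr(|A|^alpha)^(1/alpha); for Hermitian A this is the
    # l_alpha norm of the eigenvalue vector.
    eig = np.linalg.eigvalsh(A)
    return float((np.abs(eig) ** alpha).sum() ** (1.0 / alpha))

def l_alpha_distance(rho0, rho1, alpha):
    # Assumed normalization: equals the trace distance at alpha = 1 and
    # lower-bounds it for alpha > 1 (Schatten norms are non-increasing in alpha).
    return 0.5 * schatten_norm(rho0 - rho1, alpha)

rho0 = np.diag([1.0, 0.0])              # |0><0|
rho1 = np.full((2, 2), 0.5)             # |+><+|
for a in (1.0, 1.5, 2.0):
    print(a, l_alpha_distance(rho0, rho1, a))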

Cite as

Yupan Liu and Qisheng Wang. On Estimating the Quantum 𝓁_α Distance. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 106:1-106:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ESA.2025.106,
  author =	{Liu, Yupan and Wang, Qisheng},
  title =	{{On Estimating the Quantum 𝓁\underline\alpha Distance}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{106:1--106:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.106},
  URN =		{urn:nbn:de:0030-drops-245758},
  doi =		{10.4230/LIPIcs.ESA.2025.106},
  annote =	{Keywords: quantum algorithms, quantum state testing, trace distance, Schatten norm}
}
Document
Length-Constrained Directed Expander Decomposition and Length-Constrained Vertex-Capacitated Flow Shortcuts

Authors: Bernhard Haeupler, Yaowei Long, Thatchaphol Saranurak, and Shengzhe Wang


Abstract
We show the existence of length-constrained expander decompositions in directed graphs and in undirected vertex-capacitated graphs. Previously, their existence was shown only in undirected edge-capacitated graphs [Bernhard Haeupler et al., 2022; Haeupler et al., 2024]. Along the way, we prove multi-commodity maxflow-mincut theorems for length-constrained expansion in both directed and undirected vertex-capacitated graphs. Based on our decomposition, we build a length-constrained flow shortcut for undirected vertex-capacitated graphs, which, roughly speaking, is a set of edges and vertices added to the graph so that every multi-commodity flow demand can be routed with approximately the same vertex-congestion and length, while all flow paths contain only a few edges. This generalizes the shortcut for undirected edge-capacitated graphs from [Bernhard Haeupler et al., 2024]. Length-constrained expander decompositions and flow shortcuts have been crucial in recent algorithms on undirected edge-capacitated graphs [Bernhard Haeupler et al., 2024; Haeupler et al., 2024]. Our work thus serves as a foundation for generalizing these concepts to directed and vertex-capacitated graphs.

Cite as

Bernhard Haeupler, Yaowei Long, Thatchaphol Saranurak, and Shengzhe Wang. Length-Constrained Directed Expander Decomposition and Length-Constrained Vertex-Capacitated Flow Shortcuts. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 107:1-107:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{haeupler_et_al:LIPIcs.ESA.2025.107,
  author =	{Haeupler, Bernhard and Long, Yaowei and Saranurak, Thatchaphol and Wang, Shengzhe},
  title =	{{Length-Constrained Directed Expander Decomposition and Length-Constrained Vertex-Capacitated Flow Shortcuts}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{107:1--107:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.107},
  URN =		{urn:nbn:de:0030-drops-245765},
  doi =		{10.4230/LIPIcs.ESA.2025.107},
  annote =	{Keywords: Length-Constrained Expander, Expander Decomposition, Shortcut}
}
Document
Deterministic Approximation Algorithm for Graph Burning

Authors: Matej Lieskovský


Abstract
Graph Burning models a contagion spreading in a network as a process in which, at each step, one additional node is infected and the infection also spreads to all neighbors of previously infected nodes. Formally, the burning number b(G) of a given graph G = (V,E), possibly with edge lengths, is the minimum number g such that there exists a sequence of nodes v₁,…,v_g satisfying the property that for each w ∈ V there exists i ∈ {1,…,g} such that the distance between w and v_i is at most g-i. We present an elegant deterministic 2.314-approximation algorithm for the Graph Burning problem on general graphs with arbitrary edge lengths. This matches the approximation ratio of the previous randomized 2.314-approximation algorithm and improves on the previous deterministic 3-approximation algorithm.
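
To pin the definition down, here is a brute-force sketch for small unweighted graphs (unit edge lengths are an assumption of the illustration; the search is exponential and unrelated to the paper's approximation algorithm): BFS gives the distances, and burning_number tries sequences of increasing length.

from collections import deque
from itertools import permutations

def bfs_dist(adj, s):
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def burns(adj, seq):
    # (v_1, ..., v_g) burns G iff every w is within distance g - i of some v_i.
    g = len(seq)
    dists = [bfs_dist(adj, v) for v in seq]
    return all(any(d.get(w, g + 1) <= g - i for i, d in enumerate(dists, 1))
               for w in adj)

def burning_number(adj):
    for g in range(1, len(adj) + 1):
        if any(burns(adj, seq) for seq in permutations(adj, g)):
            return g

path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(burning_number(path5))   # 3: a path on n vertices burns in ceil(sqrt(n)) steps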

Cite as

Matej Lieskovský. Deterministic Approximation Algorithm for Graph Burning. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 108:1-108:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{lieskovsky:LIPIcs.ESA.2025.108,
  author =	{Lieskovsk\'{y}, Matej},
  title =	{{Deterministic Approximation Algorithm for Graph Burning}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{108:1--108:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.108},
  URN =		{urn:nbn:de:0030-drops-245775},
  doi =		{10.4230/LIPIcs.ESA.2025.108},
  annote =	{Keywords: Graph Algorithms, Approximation Algorithms, Graph Burning}
}
Document
Combined Search and Encoding for Seeds, with an Application to Minimal Perfect Hashing

Authors: Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler


Abstract
Randomised algorithms often employ methods that can fail and that are retried with independent randomness until they succeed. Randomised data structures therefore often store indices of successful attempts, called seeds. If n such seeds are required (e.g., for independent substructures), the standard approach is to compute for each i ∈ [n] the smallest successful seed S_i and store S = (S_1,…,S_n). The central observation of this paper is that this is not space-optimal. We present a different algorithm that computes a sequence S' = (S_1',…,S_n') of successful seeds such that the entropy of S' undercuts the entropy of S by Ω(n) bits in most cases. To achieve a memory consumption of OPT+εn, the expected number of inspected seeds increases by a factor of 𝒪(1/ε). We demonstrate the usefulness of our findings with a novel construction for minimal perfect hash functions that, for n keys and any ε ∈ [n^{-3/7},1], has space requirement (1+ε)OPT and construction time 𝒪(n/ε). All previous approaches only support ε = ω(1/log n) or have construction times that increase exponentially with 1/ε. Our implementation beats the construction throughput of the state of the art by more than two orders of magnitude for ε ≤ 3%.
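
The baseline being improved, in sketch form: independently scan seeds 0, 1, 2, … for each of the n substructures and record the first success, so each stored seed is geometrically distributed. All names below are illustrative stand-ins; the paper's point is that recording successful, not necessarily minimal, seeds chosen jointly admits a shorter encoding.

import random

def make_trial(p):
    # Bernoulli(p) success predicate, deterministic per (i, seed) pair -
    # a stand-in for "attempt i of the randomised construction with seed s".
    def trial(i, s):
        return random.Random(i * 1_000_003 + s).random() < p
    return trial

def smallest_successful_seed(trial, i):
    # Standard approach: S_i is the first seed for which attempt i succeeds.
    s = 0
    while not trial(i, s):
        s += 1
    return s

trial = make_trial(p=0.25)
seeds = [smallest_successful_seed(trial, i) for i in range(8)]
print(seeds)   # i.i.d. Geometric(0.25) values; storing them costs about
               # n times the entropy of the geometric distribution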

Cite as

Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler. Combined Search and Encoding for Seeds, with an Application to Minimal Perfect Hashing. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 109:1-109:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{lehmann_et_al:LIPIcs.ESA.2025.109,
  author =	{Lehmann, Hans-Peter and Sanders, Peter and Walzer, Stefan and Ziegler, Jonatan},
  title =	{{Combined Search and Encoding for Seeds, with an Application to Minimal Perfect Hashing}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{109:1--109:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.109},
  URN =		{urn:nbn:de:0030-drops-245780},
  doi =		{10.4230/LIPIcs.ESA.2025.109},
  annote =	{Keywords: Random Seed, Encoding, Bernoulli Process, Backtracking, Perfect Hashing}
}
Document
Faster Exponential Algorithms for Cut Problems via Geometric Data Structures

Authors: László Kozma and Junqi Tan


Abstract
For many hard computational problems, simple algorithms that run in time 2ⁿ ⋅ n^O(1) arise, say, from enumerating all subsets of a size-n set. Finding (exponentially) faster algorithms is a natural goal that has driven much of the field of exact exponential algorithms (e.g., see Fomin and Kratsch, 2010). In this paper we obtain algorithms with running time O(1.9999977ⁿ) on input graphs with n vertices, for the following well-studied problems: - d-Cut: find a proper cut in which no vertex has more than d neighbors on the other side of the cut; - Internal Partition: find a proper cut in which every vertex has at least as many neighbors on its side of the cut as on the other side; and - (α,β)-Domination: given intervals α,β ⊆ [0,n], find a subset S of the vertices, so that for every vertex v ∈ S, the number of neighbors of v in S is from α and for every vertex v ∉ S, the number of neighbors of v in S is from β. Our algorithms are exceedingly simple, combining the split and list technique (Horowitz and Sahni, 1974; Williams, 2005) with a tool from computational geometry: orthogonal range searching in the moderate dimensional regime (Chan, 2017). Our technique is applicable to the decision, optimization, and counting versions of these problems and easily extends to various generalizations with more fine-grained, vertex-specific constraints, as well as to directed, balanced, and other variants. Algorithms with running times of the form cⁿ, for c < 2, were previously known for the first problem only for constant d, and for the third problem only for certain special cases of α and β; for the second problem, we are not aware of any such results.
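
The split-and-list half of the recipe, in its classical Horowitz-Sahni form for Subset Sum (this paper replaces the sorted-list join below with orthogonal range queries over vectors of neighbor counts, which is what handles the cut constraints):

from bisect import bisect_left
from itertools import combinations

def subset_sums(arr):
    return [sum(c) for r in range(len(arr) + 1) for c in combinations(arr, r)]

def subset_sum(arr, target):
    # Split and list: enumerate the 2^(n/2) sums of each half, sort one side,
    # and join by binary search - O(2^(n/2) * n) instead of 2^n.
    half = len(arr) // 2
    left = subset_sums(arr[:half])
    right = sorted(subset_sums(arr[half:]))
    for s in left:
        j = bisect_left(right, target - s)
        if j < len(right) and right[j] == target - s:
            return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True: 4 + 5
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False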

Cite as

László Kozma and Junqi Tan. Faster Exponential Algorithms for Cut Problems via Geometric Data Structures. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 110:1-110:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kozma_et_al:LIPIcs.ESA.2025.110,
  author =	{Kozma, L\'{a}szl\'{o} and Tan, Junqi},
  title =	{{Faster Exponential Algorithms for Cut Problems via Geometric Data Structures}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{110:1--110:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.110},
  URN =		{urn:nbn:de:0030-drops-245796},
  doi =		{10.4230/LIPIcs.ESA.2025.110},
  annote =	{Keywords: graph algorithms, cuts, exponential time, data structures}
}
Document
Hardness of Median and Center in the Ulam Metric

Authors: Nick Fischer, Elazar Goldenberg, Mursalin Habib, and Karthik C. S.


Abstract
The classical rank aggregation problem seeks to combine a set X of n permutations into a single representative "consensus" permutation. In this paper, we investigate two fundamental rank aggregation tasks under the well-studied Ulam metric: computing a median permutation (which minimizes the sum of Ulam distances to X) and computing a center permutation (which minimizes the maximum Ulam distance to X) in two settings. - Continuous Setting: In the continuous setting, the median/center is allowed to be any permutation. It is known that computing a center in the Ulam metric is NP-hard and we add to this by showing that computing a median is NP-hard as well via a simple reduction from the Max-Cut problem. While this result may not be unexpected, it had remained elusive until now and confirms a speculation by Chakraborty, Das, and Krauthgamer [SODA '21]. - Discrete Setting: In the discrete setting, the median/center must be a permutation from the input set. We fully resolve the fine-grained complexity of the discrete median and discrete center problems under the Ulam metric, proving that the naive Õ(n² L)-time algorithm (where L is the length of the permutation) is conditionally optimal. This resolves an open problem raised by Abboud, Bateni, Cohen-Addad, Karthik C. S., and Seddighin [APPROX '23]. Our reductions are inspired by the known fine-grained lower bounds for similarity measures, but we face and overcome several new highly technical challenges.
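
For concreteness: the Ulam distance between two length-L permutations is L minus the length of their longest common subsequence, computable in O(L log L) via longest increasing subsequence, and the naive discrete median is a double loop over the input. A sketch of exactly this baseline (whose conditional optimality is the paper's result):

from bisect import bisect_left

def ulam(p, q):
    # Ulam distance = L - LCS(p, q); for permutations, LCS reduces to the
    # longest increasing subsequence of q rewritten in p's positions.
    pos = {v: i for i, v in enumerate(p)}
    tails = []                       # patience sorting for LIS
    for x in (pos[v] for v in q):
        j = bisect_left(tails, x)
        if j == len(tails):
            tails.append(x)
        else:
            tails[j] = x
    return len(p) - len(tails)

def discrete_median(perms):
    # The naive ~O(n^2 * L log L) algorithm: try every input permutation.
    return min(perms, key=lambda p: sum(ulam(p, q) for q in perms))

X = [(1, 2, 3, 4), (2, 1, 3, 4), (1, 2, 4, 3), (2, 1, 4, 3)]
print(discrete_median(X))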

Cite as

Nick Fischer, Elazar Goldenberg, Mursalin Habib, and Karthik C. S. Hardness of Median and Center in the Ulam Metric. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 111:1-111:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{fischer_et_al:LIPIcs.ESA.2025.111,
  author =	{Fischer, Nick and Goldenberg, Elazar and Habib, Mursalin and Karthik C. S.},
  title =	{{Hardness of Median and Center in the Ulam Metric}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{111:1--111:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.111},
  URN =		{urn:nbn:de:0030-drops-245809},
  doi =		{10.4230/LIPIcs.ESA.2025.111},
  annote =	{Keywords: Ulam distance, median, center, rank aggregation, fine-grained complexity}
}
Document
A Faster Parametric Search for the Integral Quickest Transshipment Problem

Authors: Mariia Anapolska, Dario van den Boom, Christina Büsing, and Timo Gersing


Abstract
Algorithms for computing fractional solutions to the quickest transshipment problem have been significantly improved since Hoppe and Tardos first solved the problem in strongly polynomial time. For integral solutions, however, no structural improvements on their algorithm itself have yet been proposed. Runtime improvements are limited to general progress on submodular function minimization (SFM), which is an integral part of Hoppe and Tardos' algorithm. In fact, SFM constitutes the main computational load of the algorithm, as the runtime is blown up by its use within Megiddo's parametric search algorithm. We replace this part of Hoppe and Tardos' algorithm with a more efficient routine that solves only a linear number of SFM instances and, in contrast to previous techniques, exclusively uses minimum cost flow algorithms within Megiddo's parametric search. Our approach improves the state-of-the-art runtime from 𝒪̃(m⁴k¹⁵) down to 𝒪̃(m²k⁵ + m⁴k²), where k is the number of terminals and m is the number of arcs.

Cite as

Mariia Anapolska, Dario van den Boom, Christina Büsing, and Timo Gersing. A Faster Parametric Search for the Integral Quickest Transshipment Problem. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 112:1-112:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{anapolska_et_al:LIPIcs.ESA.2025.112,
  author =	{Anapolska, Mariia and van den Boom, Dario and B\"{u}sing, Christina and Gersing, Timo},
  title =	{{A Faster Parametric Search for the Integral Quickest Transshipment Problem}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{112:1--112:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.112},
  URN =		{urn:nbn:de:0030-drops-245817},
  doi =		{10.4230/LIPIcs.ESA.2025.112},
  annote =	{Keywords: Flow over time, dynamic transshipment, quickest transshipment, parametric submodular functions, efficient algorithms}
}
Document
Bootstrapping Dynamic APSP via Sparsification

Authors: Rasmus Kyng, Simon Meierhans, and Gernot Zöcklein


Abstract
We give a simple algorithm for the dynamic approximate All-Pairs Shortest Paths (APSP) problem. Given a graph G = (V, E, l) with polynomially bounded edge lengths, our data structure processes |E| edge insertions and deletions in total time |E|^{1+o(1)} and provides query access to |E|^o(1)-approximate distances in time Õ(1) per query. We produce a data structure that mimics Thorup-Zwick distance oracles [Thorup and Zwick, 2005], but is dynamic and deterministic. Our algorithm selects a small number of pivot vertices. Then, for every other vertex, it reduces distance computation to maintaining distances to a small neighborhood around that vertex and to the nearest pivot. We maintain distances between pivots efficiently by representing them in a smaller graph and recursing. We maintain these smaller graphs by (a) reducing vertex count using the dynamic distance-preserving core graphs of Kyng-Meierhans-Probst Gutenberg [Kyng et al., 2024] in a black-box manner and (b) reducing edge-count using a dynamic spanner akin to Chen-Kyng-Liu-Meierhans-Probst Gutenberg [Chen et al., 2024]. Our dynamic spanner internally uses an APSP data structure. Choosing a large enough size reduction factor in the first step allows us to simultaneously bootstrap a spanner and a dynamic APSP data structure. Notably, our approach does not need expander graphs, an otherwise ubiquitous tool in derandomization.
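
The static, randomized structure being mimicked, a Thorup-Zwick-style stretch-3 oracle, can be sketched for connected unweighted graphs as follows (illustrative only; the paper's contribution is a dynamic and deterministic counterpart, and the brute-force BFS per vertex below is just to keep the sketch short):

import random
from collections import deque

def bfs(adj, s):
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def build_oracle(adj, rng=random):
    n = len(adj)
    pivots = [v for v in adj if rng.random() < n ** -0.5] or [next(iter(adj))]
    piv_dist = {p: bfs(adj, p) for p in pivots}   # ~sqrt(n) full BFS trees
    near, bunch = {}, {}
    for u in adj:
        du = bfs(adj, u)            # brute force; Thorup-Zwick avoids this
        p = min(pivots, key=lambda q: du[q])
        near[u] = (p, du[p])
        bunch[u] = {v: dv for v, dv in du.items() if dv < du[p]}
    return near, bunch, piv_dist

def query(oracle, u, v):
    near, bunch, piv_dist = oracle
    if v in bunch[u]:
        return bunch[u][v]          # exact distance
    p, dup = near[u]                # v outside the bunch implies d(u,p) <= d(u,v),
    return dup + piv_dist[p][v]     # so this estimate is at most 3 * d(u,v)

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
oracle = build_oracle(cycle5)
print(query(oracle, 0, 2))          # true distance 2; estimate in [2, 6]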

Cite as

Rasmus Kyng, Simon Meierhans, and Gernot Zöcklein. Bootstrapping Dynamic APSP via Sparsification. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 113:1-113:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kyng_et_al:LIPIcs.ESA.2025.113,
  author =	{Kyng, Rasmus and Meierhans, Simon and Z\"{o}cklein, Gernot},
  title =	{{Bootstrapping Dynamic APSP via Sparsification}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{113:1--113:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.113},
  URN =		{urn:nbn:de:0030-drops-245826},
  doi =		{10.4230/LIPIcs.ESA.2025.113},
  annote =	{Keywords: Dynamic Graph Algorithms, Spanners, Vertex Sparsification, Bootstrapping}
}
Document
A Deterministic Partition Tree and Applications

Authors: Haitao Wang


Abstract
In this paper, we present a deterministic variant of Chan’s randomized partition tree [Discret. Comput. Geom., 2012]. This result leads to numerous applications. In particular, for d-dimensional simplex range counting (for any constant d ≥ 2), we construct a data structure using O(n) space and O(n^{1+ε}) preprocessing time, such that each query can be answered in o(n^{1-1/d}) time (specifically, O(n^{1-1/d} / log^Ω(1) n) time), thereby breaking an Ω(n^{1-1/d}) lower bound known for the semigroup setting. Notably, our approach does not rely on any bit-packing techniques. We also obtain deterministic improvements for several other classical problems, including simplex range stabbing counting and reporting, segment intersection detection, counting and reporting, ray-shooting among segments, and more. Similar to Chan’s original randomized partition tree, we expect that additional applications will emerge in the future, especially in situations where deterministic results are preferred.
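
The overall shape of such structures, recursing on a space partition and descending only into the parts a query range can cut, is easiest to see in a kd-tree for axis-aligned boxes, a much simpler relative of partition trees (simplex ranges and the machinery of the paper are not reproduced here):

def build(points, axis=0):
    # kd-tree node: (point, left subtree, right subtree), alternating axes.
    if not points:
        return None
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], 1 - axis),
            build(points[mid + 1:], 1 - axis))

def count_range(node, lo, hi, axis=0):
    # Count points p with lo <= p <= hi coordinate-wise.
    if node is None:
        return 0
    pt, left, right = node
    c = 1 if all(lo[i] <= pt[i] <= hi[i] for i in (0, 1)) else 0
    if lo[axis] <= pt[axis]:        # the range may extend into the left part
        c += count_range(left, lo, hi, 1 - axis)
    if pt[axis] <= hi[axis]:        # ... and/or into the right part
        c += count_range(right, lo, hi, 1 - axis)
    return c

pts = [(1, 1), (2, 5), (3, 3), (5, 2), (6, 6), (7, 4)]
root = build(pts)
print(count_range(root, (2, 2), (6, 5)))   # 3: the points (2,5), (3,3), (5,2)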

Cite as

Haitao Wang. A Deterministic Partition Tree and Applications. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 114:1-114:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{wang:LIPIcs.ESA.2025.114,
  author =	{Wang, Haitao},
  title =	{{A Deterministic Partition Tree and Applications}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{114:1--114:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.114},
  URN =		{urn:nbn:de:0030-drops-245836},
  doi =		{10.4230/LIPIcs.ESA.2025.114},
  annote =	{Keywords: partition trees, simplex range searching, segment intersection queries, ray-shootings, multi-level data structures}
}
Document
Smoothed Analysis of Online Metric Problems

Authors: Christian Coester and Jack Umenberger


Abstract
We study three classical online problems - k-server, k-taxi, and chasing size-k sets - through the lens of smoothed analysis. Our setting allows request locations to be adversarial up to small perturbations, interpolating between worst-case and average-case models. Specifically, we show that if the metric space is contained in a ball in any normed space and requests are drawn from distributions whose density functions are upper bounded by 1/σ times the uniform density over the ball, then all three problems admit polylog(k/σ)-competitive algorithms. Our approach is simple: it reduces smoothed instances to fully adversarial instances on finite metrics and leverages existing algorithms in a black-box manner. We also provide a lower bound showing that no algorithm can achieve a competitive ratio sub-polylogarithmic in k/σ, matching our upper bounds up to the exponent of the polylogarithm. In contrast, the best known competitive ratios for these problems in the fully adversarial setting are 2k-1, ∞, and Θ(k²), respectively.

Cite as

Christian Coester and Jack Umenberger. Smoothed Analysis of Online Metric Problems. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 115:1-115:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{coester_et_al:LIPIcs.ESA.2025.115,
  author =	{Coester, Christian and Umenberger, Jack},
  title =	{{Smoothed Analysis of Online Metric Problems}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{115:1--115:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.115},
  URN =		{urn:nbn:de:0030-drops-245847},
  doi =		{10.4230/LIPIcs.ESA.2025.115},
  annote =	{Keywords: Online Algorithms, Competitive Analysis, Smoothed Analysis, k-server, k-taxi, Metrical Service Systems}
}
Document
Fast Gaussian Elimination for Low Treewidth Matrices

Authors: Martin Fürer, Carlos Hoppen, and Vilmar Trevisan


Abstract
Let A = (a_{ij}) be an m × n matrix whose elements lie in an arbitrary field 𝔽, and let G be the bipartite graph with vertex set {v_1,…,v_m} ∪ {w_1,…,w_n} such that vertices v_i and w_j are adjacent if and only if a_{ij} ≠ 0. We introduce an algorithm that finds an m × n matrix U in row echelon form and a permutation matrix Q of order n, such that AQ is row equivalent to U. If a tree decomposition 𝒯 of G of width k and size O(k(m+n)) is part of the input, then Q and the columns of U that contain a pivot can be computed in time O(k²(m+n)). Among other things, this allows us to compute the rank and the determinant of A in time O(k²(m+n)). It also allows us to decide in time O(k²(m+n)) whether the linear system Ax = b has a solution and to compute a solution of the linear system in case it exists.
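
For reference, the dense baseline that this result accelerates: exact Gaussian elimination to row echelon form over the rationals, using O(mn·min(m,n)) field operations with no structure exploited (a minimal sketch; the paper's algorithm achieves O(k²(m+n)) given a width-k tree decomposition and additionally outputs the column permutation Q):

from fractions import Fraction

def row_echelon(A):
    # Returns (U, r) with U in row echelon form, row equivalent to A,
    # and r = rank(A). Exact arithmetic over the field Q.
    U = [[Fraction(x) for x in row] for row in A]
    m = len(U)
    n = len(U[0]) if U else 0
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if U[i][c] != 0), None)
        if piv is None:
            continue                        # no pivot in this column
        U[r], U[piv] = U[piv], U[r]         # bring the pivot row up
        for i in range(r + 1, m):
            f = U[i][c] / U[r][c]
            for j in range(c, n):
                U[i][j] -= f * U[r][j]
        r += 1
        if r == m:
            break
    return U, r

U, rank = row_echelon([[2, 1, 1], [4, 3, 3], [8, 7, 9]])
print(rank)                                 # 3
print([[str(x) for x in row] for row in U])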

Cite as

Martin Fürer, Carlos Hoppen, and Vilmar Trevisan. Fast Gaussian Elimination for Low Treewidth Matrices. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 116:1-116:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{furer_et_al:LIPIcs.ESA.2025.116,
  author =	{F\"{u}rer, Martin and Hoppen, Carlos and Trevisan, Vilmar},
  title =	{{Fast Gaussian Elimination for Low Treewidth Matrices}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{116:1--116:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.116},
  URN =		{urn:nbn:de:0030-drops-245855},
  doi =		{10.4230/LIPIcs.ESA.2025.116},
  annote =	{Keywords: Gaussian elimination, FPT algorithms, treewidth}
}
