LIPIcs, Volume 308

32nd Annual European Symposium on Algorithms (ESA 2024)



Event

ESA 2024, September 2-4, 2024, Royal Holloway, London, United Kingdom

Editors

Timothy Chan
  • University of Illinois at Urbana-Champaign, IL, USA
Johannes Fischer
  • TU Dortmund University, Germany
John Iacono
  • Université libre de Bruxelles, Belgium
Grzegorz Herman
  • Jagiellonian University, Kraków, Poland

Publication Details

  • published at: 2024-09-23
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-338-6
  • DBLP: db/conf/esa/esa2024

Documents
Document
Complete Volume
LIPIcs, Volume 308, ESA 2024, Complete Volume

Authors: Timothy Chan, Johannes Fischer, John Iacono, and Grzegorz Herman


Abstract
LIPIcs, Volume 308, ESA 2024, Complete Volume

Cite as

32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 1-1734, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Proceedings{chan_et_al:LIPIcs.ESA.2024,
  title =	{{LIPIcs, Volume 308, ESA 2024, Complete Volume}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{1--1734},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024},
  URN =		{urn:nbn:de:0030-drops-210704},
  doi =		{10.4230/LIPIcs.ESA.2024},
  annote =	{Keywords: LIPIcs, Volume 308, ESA 2024, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Timothy Chan, Johannes Fischer, John Iacono, and Grzegorz Herman


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 0:i-0:xxii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chan_et_al:LIPIcs.ESA.2024.0,
  author =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{0:i--0:xxii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.0},
  URN =		{urn:nbn:de:0030-drops-210714},
  doi =		{10.4230/LIPIcs.ESA.2024.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back (Invited Talk)

Authors: Vincent Cohen-Addad


Abstract
Correlation clustering is a classic model for clustering problems arising in machine learning and data mining. Given a set of data elements represented as vertices of a graph and pairwise similarity represented as edges, the goal is to find a partition of the vertex set so as to minimize the total number of edges across the parts plus the total number of non-edges within the parts. Introduced in the early 2000s [Bansal et al., 2004], correlation clustering has received a large amount of attention through the years. A natural linear programming relaxation was shown in 2005 to have an integrality gap of at least 2 and at most 2.5 [Ailon et al., 2008], and in 2015 the upper bound was improved to 2.06 [Chawla et al., 2015]. In 2021, motivated by large-scale applications, new structural insights made it possible to derive a simple, practical algorithm that achieves an O(1)-approximation in a variety of models (Massively Parallel, Sublinear, Streaming, or Differentially-private) [Cohen-Addad et al., 2021; Cohen-Addad et al., 2022]. These insights turned out to be a key building block in designing better algorithms: they yield a pre-clustering of the input graph that enables algorithms with approximation guarantees significantly better than 2 [Cohen-Addad et al., 2023; Cohen-Addad et al., 2022]. This pre-clustering is a key component in the new algorithm that achieves a 1.44-approximation [Cao et al., 2024] and in the new local-search based 1.84-approximation for the Massively Parallel, Sublinear, and Streaming models [Cohen-Addad et al., 2024]. This talk will review these recent developments and the main open research directions. A collection of joint works with Nairen Cao, Silvio Lattanzi, Euiwoong Lee, Shi Li, David Rasmussen Lolck, Slobodan Mitrovic, Alantha Newman, Ashkan Norouzi-Fard, Nikos Parotsidis, Marcin Pilipczuk, Jakub Tarnawski, Mikkel Thorup, Lukas Vogl, Shuyi Yan, Hanwen Zhang.
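
To make the objective concrete, here is a minimal Python sketch (ours, not from the talk) that evaluates the correlation clustering cost of a partition: the number of edges across parts plus the number of non-edges within parts.

from itertools import combinations

def correlation_clustering_cost(n, edges, parts):
    """n vertices 0..n-1; edges: set of frozensets; parts: list of vertex sets."""
    part_of = {}
    for i, part in enumerate(parts):
        for v in part:
            part_of[v] = i
    cost = 0
    for u, v in combinations(range(n), 2):
        same_part = part_of[u] == part_of[v]
        is_edge = frozenset((u, v)) in edges
        if is_edge and not same_part:    # similar pair separated
            cost += 1
        if not is_edge and same_part:    # dissimilar pair clustered together
            cost += 1
    return cost

edges = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2), (3, 4)]}
print(correlation_clustering_cost(5, edges, [{0, 1, 2}, {3, 4}]))  # 0 disagreements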

Cite as

Vincent Cohen-Addad. Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back (Invited Talk). In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cohenaddad:LIPIcs.ESA.2024.1,
  author =	{Cohen-Addad, Vincent},
  title =	{{Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{1:1--1:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.1},
  URN =		{urn:nbn:de:0030-drops-210728},
  doi =		{10.4230/LIPIcs.ESA.2024.1},
  annote =	{Keywords: Approximation Algorithms, Clustering, Local Model}
}
Document
Invited Talk
Simple (Invited Talk)

Authors: Eva Rotenberg


Abstract
Simplicity in algorithms has various aspects, interpretations, and implications. One is the simplicity of the algorithmic solution itself: if an algorithm (or data structure) has a brief verbal description or can be written with few lines of pseudocode, this can lead to easier, more robust, and possibly more efficient implementations. Another aspect of simplicity relates to the proofs of correctness and efficiency of our algorithmic solutions. Here, we experience that algorithms and data structures with simpler proofs of statements about their properties can be easier to understand, easier to teach, and sometimes easier to generalise. Simplification of proofs also receives attention in mathematics; here, too, simplification benefits clarity of exposition and the possibility of generalisation. There are even examples of proof simplification leading to the design of new and more efficient algorithms. This talk will present examples illustrating these various aspects of simplicity: examples where algorithmic simplification or proof simplification has led to improved performance of algorithms and data structures, in theory, in practice, or both. Finally, some of the most attractive questions in discrete mathematics and in the theory of computing have a property in common: they are very simple to pose but, surprisingly, to our knowledge, not very simple to answer. The talk will include examples of such questions, which I leave as open problems for the audience.

Cite as

Eva Rotenberg. Simple (Invited Talk). In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 2:1-2:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{rotenberg:LIPIcs.ESA.2024.2,
  author =	{Rotenberg, Eva},
  title =	{{Simple}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{2:1--2:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.2},
  URN =		{urn:nbn:de:0030-drops-210739},
  doi =		{10.4230/LIPIcs.ESA.2024.2},
  annote =	{Keywords: Simplicity, graph algorithms, computational geometry, algorithmic simplification, data structures, combinatorics, proof simplification, dynamic graphs}
}
Document
From Donkeys to Kings in Tournaments

Authors: Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon


Abstract
A tournament is an orientation of a complete graph. A vertex that can reach every other vertex within two steps is called a king. We study the complexity of finding k kings in a tournament graph. We show that the randomized query complexity of finding k ≤ 3 kings is O(n), and that in the deterministic case it takes the same number of queries (up to a constant factor) as finding a single king (the best known deterministic algorithm makes O(n^{3/2}) queries). On the other hand, we show that finding k ≥ 4 kings requires Ω(n²) queries, even in the randomized case. For k ≥ 4 we consider the RAM model. We show an algorithm that finds k kings in time O(kn²), which is optimal for constant values of k. Alternatively, one can also find k ≥ 4 kings in time n^{ω} (the time for matrix multiplication). We provide evidence that this is optimal for large k by suggesting a fine-grained reduction from a variant of the triangle detection problem.
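
As an illustration of the underlying definitions (not the paper's query-model algorithms), the following Python sketch relies on the classical fact that any vertex of maximum out-degree in a tournament is a king:

def is_king(adj, v):
    """adj[u][w] is True iff the edge between u and w is oriented u -> w."""
    n = len(adj)
    for w in range(n):
        if w == v or adj[v][w]:
            continue  # w is v itself or reached in one step
        # otherwise w beats v; look for an intermediate u with v -> u -> w
        if not any(adj[v][u] and adj[u][w] for u in range(n)):
            return False
    return True

def find_king(adj):
    # a maximum out-degree vertex is always a king
    return max(range(len(adj)), key=lambda v: sum(adj[v]))

# 3-cycle tournament 0 -> 1 -> 2 -> 0: every vertex is a king
adj = [[False, True, False], [False, False, True], [True, False, False]]
king = find_king(adj)
print(king, is_king(adj, king))  # 0 True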

Cite as

Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon. From Donkeys to Kings in Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.3,
  author =	{Abboud, Amir and Grossman, Tomer and Naor, Moni and Solomon, Tomer},
  title =	{{From Donkeys to Kings in Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.3},
  URN =		{urn:nbn:de:0030-drops-210740},
  doi =		{10.4230/LIPIcs.ESA.2024.3},
  annote =	{Keywords: Tournament Graphs, Kings, Query Complexity, Fine Grained Complexity}
}
Document
Worst-Case to Expander-Case Reductions: Derandomized and Generalized

Authors: Amir Abboud and Nathan Wallheimer


Abstract
A recent paper by Abboud and Wallheimer [ITCS 2023] presents self-reductions for various fundamental graph problems, which transform worst-case instances to expanders, thus proving that the complexity remains unchanged if the input is assumed to be an expander. An interesting corollary of their self-reductions is that if some problem admits such a reduction, then the popular algorithmic paradigm based on expander-decompositions is useless against it. In this paper, we improve their core gadget, which augments a graph to make it an expander while retaining its important structure. Our new core construction has the benefit of being simple to analyze and generalize, and it yields the following results:
- A derandomization of the self-reductions, showing that the equivalence between worst-case and expander-case holds even for deterministic algorithms, and ruling out the use of expander-decompositions as a derandomization tool.
- An extension of the results to other models of computation, such as the Fully Dynamic model and the Congested Clique model. In the former, we either improve or provide an alternative approach to some recent hardness results for dynamic expander graphs by Henzinger, Paz, and Sricharan [ESA 2022].
In addition, we continue this line of research by designing new self-reductions for more problems, such as Max-Cut and dynamic Densest Subgraph, and demonstrating that the core gadget can be utilized to lift lower bounds based on the OMv Conjecture to expanders.

Cite as

Amir Abboud and Nathan Wallheimer. Worst-Case to Expander-Case Reductions: Derandomized and Generalized. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.4,
  author =	{Abboud, Amir and Wallheimer, Nathan},
  title =	{{Worst-Case to Expander-Case Reductions: Derandomized and Generalized}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.4},
  URN =		{urn:nbn:de:0030-drops-210751},
  doi =		{10.4230/LIPIcs.ESA.2024.4},
  annote =	{Keywords: Fine-grained complexity, expander graphs, self-reductions, worst-case to expander-case, expander decomposition, dynamic algorithms, exact and parameterized complexity, max-cut, maximum matching, k-clique detection, densest subgraph}
}
Document
Online Sorting and Online TSP: Randomized, Stochastic, and High-Dimensional

Authors: Mikkel Abrahamsen, Ioana O. Bercea, Lorenzo Beretta, Jonas Klausen, and László Kozma


Abstract
In the online sorting problem, n items are revealed one by one and have to be placed (immediately and irrevocably) into empty cells of a size-n array. The goal is to minimize the sum of absolute differences between items in consecutive cells. This natural problem was recently introduced by Aamand, Abrahamsen, Beretta, and Kleist (SODA 2023) as a tool in their study of online geometric packing problems. They showed that when the items are reals from the interval [0,1], a competitive ratio of O(√n) is achievable, and no deterministic algorithm can improve this ratio asymptotically. In this paper, we extend and generalize the study of online sorting in three directions:
- randomized: we settle the open question of Aamand et al. by showing that the O(√n) competitive ratio for the online sorting of reals cannot be improved even with the use of randomness;
- stochastic: we consider inputs consisting of n samples drawn uniformly at random from an interval, and give an algorithm with an improved competitive ratio of Õ(n^{1/4}). The result reveals connections between online sorting and the design of efficient hash tables;
- high-dimensional: we show that Õ(√n)-competitive online sorting is possible even for items from ℝ^d, for arbitrary fixed d, in an adversarial model. This can be viewed as an online variant of the classical TSP problem where tasks (cities to visit) are revealed one by one and the salesperson assigns each task (immediately and irrevocably) to its timeslot.
Along the way, we also show a tight O(log n)-competitiveness result for uniform metrics, i.e., where items are of different types and the goal is to order them so as to minimize the number of switches between consecutive items of different types.
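
For concreteness, here is a minimal Python sketch of the cost model; the placement rule is a naive greedy of our own, not one of the paper's algorithms:

import math

def place_greedy(stream, n):
    """Place each arriving item into the empty cell that locally adds least cost."""
    cells = [None] * n
    for x in stream:
        best, best_cost = None, math.inf
        for i in range(n):
            if cells[i] is not None:
                continue
            cost = 0.0
            if i > 0 and cells[i - 1] is not None:
                cost += abs(x - cells[i - 1])
            if i < n - 1 and cells[i + 1] is not None:
                cost += abs(x - cells[i + 1])
            if cost < best_cost:
                best, best_cost = i, cost
        cells[best] = x
    return cells

def total_cost(cells):
    return sum(abs(a - b) for a, b in zip(cells, cells[1:]))

cells = place_greedy([0.9, 0.1, 0.5, 0.3], n=4)
print(cells, total_cost(cells))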

Cite as

Mikkel Abrahamsen, Ioana O. Bercea, Lorenzo Beretta, Jonas Klausen, and László Kozma. Online Sorting and Online TSP: Randomized, Stochastic, and High-Dimensional. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abrahamsen_et_al:LIPIcs.ESA.2024.5,
  author =	{Abrahamsen, Mikkel and Bercea, Ioana O. and Beretta, Lorenzo and Klausen, Jonas and Kozma, L\'{a}szl\'{o}},
  title =	{{Online Sorting and Online TSP: Randomized, Stochastic, and High-Dimensional}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{5:1--5:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.5},
  URN =		{urn:nbn:de:0030-drops-210766},
  doi =		{10.4230/LIPIcs.ESA.2024.5},
  annote =	{Keywords: sorting, online algorithm, TSP}
}
Document
Lower Envelopes of Surface Patches in 3-Space

Authors: Pankaj K. Agarwal, Esther Ezra, and Micha Sharir


Abstract
Let Σ be a collection of n surface patches, each being the graph of a partially defined semi-algebraic function of constant description complexity, and assume that any triple of them intersect in at most s = 2 points. We show that the complexity of the lower envelope of the surfaces in Σ is O(n² log^{6+ε} n), for any ε > 0. This almost settles a long-standing open problem posed by Halperin and Sharir thirty years ago, who showed the nearly optimal, albeit weaker, bound of O(n²⋅2^{c√{log n}}) on the complexity of the lower envelope, where c > 0 is some constant. Our approach is fairly simple and is based on hierarchical cuttings and gradations, as well as a simple charging scheme. We extend our analysis to the case s > 2, under a "favorable cross section" assumption, in which case we show that the bound on the complexity of the lower envelope is O(n² log^{11+ε} n), for any ε > 0. Incorporating these bounds into the randomized incremental construction algorithms of Boissonnat and Dobrindt, we obtain efficient constructions of lower envelopes of surface patches with the above properties, whose overall expected running time is O(n² polylog n), as well as efficient data structures that support point location queries in their minimization diagrams in O(log² n) expected time.
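
As a one-dimensional illustration of the lower-envelope notion (a simplification of our own; the paper concerns surface patches in 3-space), the envelope of partially defined functions is their pointwise minimum wherever they are defined:

def lower_envelope(patches, xs):
    """patches: list of (f, lo, hi), each f partially defined on [lo, hi]."""
    env = []
    for x in xs:
        vals = [f(x) for f, lo, hi in patches if lo <= x <= hi]
        env.append(min(vals) if vals else None)  # undefined where no patch lives
    return env

patches = [(lambda x: x * x, -1, 1), (lambda x: 2 - x, 0, 2)]
print(lower_envelope(patches, [-0.5, 0.5, 1.5]))  # [0.25, 0.25, 0.5]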

Cite as

Pankaj K. Agarwal, Esther Ezra, and Micha Sharir. Lower Envelopes of Surface Patches in 3-Space. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ESA.2024.6,
  author =	{Agarwal, Pankaj K. and Ezra, Esther and Sharir, Micha},
  title =	{{Lower Envelopes of Surface Patches in 3-Space}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{6:1--6:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.6},
  URN =		{urn:nbn:de:0030-drops-210772},
  doi =		{10.4230/LIPIcs.ESA.2024.6},
  annote =	{Keywords: Hierarchical cuttings, surface patches in 3-space, lower envelopes, charging scheme, gradation}
}
Document
Segment Proximity Graphs and Nearest Neighbor Queries Amid Disjoint Segments

Authors: Pankaj K. Agarwal, Haim Kaplan, Matthew J. Katz, and Micha Sharir


Abstract
In this paper we study a few proximity problems related to a set of pairwise-disjoint segments in ℝ². Let S be a set of n pairwise-disjoint segments in ℝ², and let r > 0 be a parameter. We define the segment proximity graph of S to be G_r(S) := (S,E), where E = {(e₁,e₂) ∣ dist(e₁,e₂) ≤ r} and dist(e₁,e₂) = min_{(p,q) ∈ e₁×e₂} ‖p-q‖ is the Euclidean distance between e₁ and e₂. We define the weight of an edge (e₁,e₂) ∈ E to be dist(e₁,e₂). We first present a simple grid-based O(n log² n)-time algorithm for computing a BFS tree of G_r(S). We apply it to obtain an O^*(n^{6/5}) + O(n log² n log Δ)-time algorithm for the so-called reverse shortest path problem, in which we want to find the smallest value r^* for which G_{r^*}(S) contains a path of some specified length between two designated start and target segments (where the O^*(⋅) notation hides polylogarithmic factors). Here Δ = max_{e ≠ e' ∈ S} dist(e,e')/min_{e ≠ e' ∈ S} dist(e,e') is the spread of S. Next, we present a dynamic data structure that can maintain a set S of pairwise-disjoint segments in the plane under insertions/deletions, so that, for a query segment e from an unknown set Q of pairwise-disjoint segments, such that e does not intersect any segment in (the current version of) S, the segment of S closest to e can be computed in O(log⁵ n) amortized time. The amortized update time is also O(log⁵ n). We note that if the segments in S∪Q are allowed to intersect then the known lower bounds on halfplane range searching suggest that a sequence of n updates and queries may take at least close to Ω(n^{4/3}) time. One thus has to strongly rely on the non-intersecting property of S and Q to perform updates and queries in O(polylog(n)) (amortized) time each. Using these results on nearest-neighbor (NN) searching for disjoint segments, we show that a DFS tree (or forest) of G_r(S) can be computed in O^*(n) time. We also obtain an O^*(n)-time algorithm for constructing a minimum spanning tree of G_r(S). Finally, we present an O^*(n^{4/3})-time algorithm for computing a single-source shortest-path tree in G_r(S). This is the only result that does not exploit the disjointness of the input segments.
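
The following Python sketch (helper names are ours) builds G_r(S) naively in O(n²) time and extracts a BFS tree; for pairwise-disjoint segments, the distance between two segments is attained at an endpoint of one of them:

from collections import deque

def point_seg_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on segment ab
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def seg_dist(e1, e2):
    (a, b), (c, d) = e1, e2
    return min(point_seg_dist(a, c, d), point_seg_dist(b, c, d),
               point_seg_dist(c, a, b), point_seg_dist(d, a, b))

def bfs_tree(segments, r, source=0):
    n = len(segments)
    adj = [[j for j in range(n)
            if j != i and seg_dist(segments[i], segments[j]) <= r] for i in range(n)]
    parent, queue = {source: None}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

S = [((0, 0), (1, 0)), ((2, 0), (3, 0)), ((5, 0), (6, 0))]
print(bfs_tree(S, r=2.5))  # {0: None, 1: 0, 2: 1}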

Cite as

Pankaj K. Agarwal, Haim Kaplan, Matthew J. Katz, and Micha Sharir. Segment Proximity Graphs and Nearest Neighbor Queries Amid Disjoint Segments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ESA.2024.7,
  author =	{Agarwal, Pankaj K. and Kaplan, Haim and Katz, Matthew J. and Sharir, Micha},
  title =	{{Segment Proximity Graphs and Nearest Neighbor Queries Amid Disjoint Segments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.7},
  URN =		{urn:nbn:de:0030-drops-210782},
  doi =		{10.4230/LIPIcs.ESA.2024.7},
  annote =	{Keywords: segment proximity graphs, nearest neighbor searching, dynamic data structures, BFS, DFS, unit-disk graphs}
}
Document
Interval Selection in Sliding Windows

Authors: Cezar-Mihail Alexandru and Christian Konrad


Abstract
We initiate the study of the Interval Selection problem in the (streaming) sliding window model of computation. In this problem, an algorithm receives a potentially infinite stream of intervals on the line, and the objective is to maintain at every moment an approximation to a largest possible subset of disjoint intervals among the L most recent intervals, for some integer L. We give the following results:
1) In the unit-length intervals case, we give a 2-approximation sliding window algorithm with space Õ(|OPT|), and we show that any sliding window algorithm that computes a (2-ε)-approximation requires space Ω(L), for any ε > 0.
2) In the arbitrary-length case, we give a (11/3+ε)-approximation sliding window algorithm with space Õ(|OPT|), for any constant ε > 0, which constitutes our main result. We also show that space Ω(L) is needed for algorithms that compute a (2.5-ε)-approximation, for any ε > 0.
Our main technical contribution is an improvement over the smooth histogram technique, which consists of running independent copies of a traditional streaming algorithm with different start times. By employing the one-pass 2-approximation streaming algorithm by Cabello and Pérez-Lantero [Theor. Comput. Sci. '17] for Interval Selection on arbitrary-length intervals as the underlying algorithm, the smooth histogram technique immediately yields a (4+ε)-approximation in this setting. Our improvement is obtained by forwarding the structure of the intervals identified in a run to the subsequent run, which constrains the shape of an optimal solution and allows us to target optimal intervals differently.
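
For reference, the offline baseline that the sliding-window algorithms approximate is the classic earliest-endpoint greedy for maximum disjoint intervals; a minimal sketch:

def max_disjoint(intervals):
    """Return a maximum-size set of pairwise-disjoint intervals (offline)."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start > last_end:  # disjoint from the previously chosen interval
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_disjoint([(1, 4), (2, 3), (3, 5), (6, 7)]))  # [(2, 3), (6, 7)]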

Cite as

Cezar-Mihail Alexandru and Christian Konrad. Interval Selection in Sliding Windows. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alexandru_et_al:LIPIcs.ESA.2024.8,
  author =	{Alexandru, Cezar-Mihail and Konrad, Christian},
  title =	{{Interval Selection in Sliding Windows}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{8:1--8:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.8},
  URN =		{urn:nbn:de:0030-drops-210795},
  doi =		{10.4230/LIPIcs.ESA.2024.8},
  annote =	{Keywords: Sliding window algorithms, Streaming algorithms, Interval selection}
}
Document
On Connections Between k-Coloring and Euclidean k-Means

Authors: Enver Aman, Karthik C. S., and Sharath Punna


Abstract
In the Euclidean k-means problem we are given as input a set P of n points in ℝ^d and the goal is to find a set of k points C ⊆ ℝ^d, so as to minimize the sum of the squared Euclidean distances from each point in P to its closest center in C. In this paper, we formally explore connections between the k-coloring problem on graphs and the Euclidean k-means problem. Our results are as follows:
- For all k ≥ 3, we provide a simple reduction from the k-coloring problem on regular graphs to the Euclidean k-means problem. Moreover, our technique extends to enable a reduction from a structured max-cut problem (which may be considered as a partial 2-coloring problem) to the Euclidean 2-means problem. Thus, we have a simple and alternate proof of the NP-hardness of the Euclidean 2-means problem.
- In the other direction, we mimic the O(1.7297ⁿ) time algorithm of Williams [TCS'05] for the max-cut problem on n vertices to obtain an algorithm for the Euclidean 2-means problem with the same runtime, improving on the naive exhaustive search running in 2ⁿ⋅poly(n,d) time.
- We prove similar results and connections as above for the Euclidean k-min-sum problem.
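
A minimal Python sketch of the Euclidean k-means objective as defined above (evaluation only; the reductions are the paper's contribution):

def kmeans_cost(P, C):
    """Sum of squared Euclidean distances from each point to its closest center."""
    return sum(min(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in C)
               for p in P)

P = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
print(kmeans_cost(P, C=[(0.5, 0.0), (10.0, 0.0)]))  # 0.25 + 0.25 + 0.0 = 0.5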

Cite as

Enver Aman, Karthik C. S., and Sharath Punna. On Connections Between k-Coloring and Euclidean k-Means. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{aman_et_al:LIPIcs.ESA.2024.9,
  author =	{Aman, Enver and Karthik C. S. and Punna, Sharath},
  title =	{{On Connections Between k-Coloring and Euclidean k-Means}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.9},
  URN =		{urn:nbn:de:0030-drops-210808},
  doi =		{10.4230/LIPIcs.ESA.2024.9},
  annote =	{Keywords: k-means, k-minsum, Euclidean space, fine-grained complexity}
}
Document
Sparse Outerstring Graphs Have Logarithmic Treewidth

Authors: Shinwoo An, Eunjin Oh, and Jie Xue


Abstract
An outerstring graph is the intersection graph of curves lying inside a disk, each having one endpoint on the boundary of the disk. We show that an outerstring graph with n vertices has treewidth O(α log n), where α denotes the arboricity of the graph, with an almost matching lower bound of Ω(α log(n/α)). As a corollary, we show that a t-biclique-free outerstring graph has treewidth O(t (log t) log n). This leads to polynomial-time algorithms for most of the central NP-complete problems, such as Independent Set, Vertex Cover, Dominating Set, Feedback Vertex Set, and Coloring, on sparse outerstring graphs. We also obtain subexponential-time (exact, parameterized, and approximation) algorithms for various NP-complete problems, such as Vertex Cover, Feedback Vertex Set, and Cycle Packing, on (not necessarily sparse) outerstring graphs.

Cite as

Shinwoo An, Eunjin Oh, and Jie Xue. Sparse Outerstring Graphs Have Logarithmic Treewidth. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{an_et_al:LIPIcs.ESA.2024.10,
  author =	{An, Shinwoo and Oh, Eunjin and Xue, Jie},
  title =	{{Sparse Outerstring Graphs Have Logarithmic Treewidth}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{10:1--10:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.10},
  URN =		{urn:nbn:de:0030-drops-210816},
  doi =		{10.4230/LIPIcs.ESA.2024.10},
  annote =	{Keywords: Outerstring graphs, geometric intersection graphs, treewidth}
}
Document
How to Reduce Temporal Cliques to Find Sparse Spanners

Authors: Sebastian Angrick, Ben Bals, Tobias Friedrich, Hans Gawendowicz, Niko Hastrich, Nicolas Klodt, Pascal Lenzner, Jonas Schmidt, George Skretas, and Armin Wells


Abstract
Many real-world networks, such as transportation or trade networks, are dynamic in the sense that the edge set may change over time, but these changes are known in advance. This behavior is captured by the temporal graphs model, which has recently become a trending topic in theoretical computer science. A core open problem in the field is to prove the existence of linear-size temporal spanners in temporal cliques, i.e., sparse subgraphs of complete temporal graphs that ensure all-pairs reachability via temporal paths. So far, the best known result is the existence of temporal spanners with 𝒪(n log n) many edges. We present significant progress towards resolving whether linear-size temporal spanners exist in all temporal cliques. We adapt techniques used in previous works and heavily expand and generalize them. This allows us to show that the existence of linear-size spanners in cliques and in bi-cliques is equivalent, and, using this, we provide a simpler and more intuitive proof of the 𝒪(n log n) bound by giving an efficient algorithm for finding linearithmic spanners. Moreover, we use our novel and efficiently computable approach to show that a large class of temporal cliques, called edge-pivotable graphs, admits linear-size temporal spanners. To contrast this, we investigate other classes of temporal cliques that do not belong to the class of edge-pivotable graphs. We introduce two such graph classes and develop novel algorithmic techniques for establishing the existence of linear-size temporal spanners in these graph classes as well.
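
A minimal Python sketch of temporal reachability as used above, assuming one time label per edge and strictly increasing labels along a temporal path (a common convention; the paper's model details may differ):

def temporal_reachability(tedges, source):
    """tedges: undirected time-stamped edges (u, v, t); returns earliest arrivals."""
    earliest = {source: 0}
    for u, v, t in sorted(tedges, key=lambda e: e[2]):  # scan edges by time label
        for a, b in ((u, v), (v, u)):
            if a in earliest and earliest[a] < t and earliest.get(b, float("inf")) > t:
                earliest[b] = t
    return earliest

# triangle with labels 1, 2, 3: vertex 0 reaches every vertex
print(temporal_reachability([(0, 1, 1), (1, 2, 2), (2, 0, 3)], source=0))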

Cite as

Sebastian Angrick, Ben Bals, Tobias Friedrich, Hans Gawendowicz, Niko Hastrich, Nicolas Klodt, Pascal Lenzner, Jonas Schmidt, George Skretas, and Armin Wells. How to Reduce Temporal Cliques to Find Sparse Spanners. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{angrick_et_al:LIPIcs.ESA.2024.11,
  author =	{Angrick, Sebastian and Bals, Ben and Friedrich, Tobias and Gawendowicz, Hans and Hastrich, Niko and Klodt, Nicolas and Lenzner, Pascal and Schmidt, Jonas and Skretas, George and Wells, Armin},
  title =	{{How to Reduce Temporal Cliques to Find Sparse Spanners}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.11},
  URN =		{urn:nbn:de:0030-drops-210822},
  doi =		{10.4230/LIPIcs.ESA.2024.11},
  annote =	{Keywords: Temporal Graphs, temporal Clique, temporal Spanner, Reachability, Graph Connectivity, Graph Sparsification}
}
Document
Outlier Robust Multivariate Polynomial Regression

Authors: Vipul Arora, Arnab Bhattacharyya, Mathews Boban, Venkatesan Guruswami, and Esty Kelman


Abstract
We study the problem of robust multivariate polynomial regression: let p: ℝⁿ → ℝ be an unknown n-variate polynomial of degree at most d in each variable. We are given as input a set of random samples (𝐱_i,y_i) ∈ [-1,1]ⁿ × ℝ that are noisy versions of (𝐱_i,p(𝐱_i)). More precisely, each 𝐱_i is sampled independently from some distribution χ on [-1,1]ⁿ, and for each i independently, y_i is arbitrary (i.e., an outlier) with probability at most ρ < 1/2, and otherwise satisfies |y_i-p(𝐱_i)| ≤ σ. The goal is to output a polynomial p̂, of degree at most d in each variable, within an 𝓁_∞-distance of at most O(σ) from p. Kane, Karmalkar, and Price [FOCS'17] solved this problem for n = 1. We generalize their results to the n-variate setting, showing an algorithm that achieves a sample complexity of O_n(dⁿ log d), where the hidden constant depends on n, if χ is the n-dimensional Chebyshev distribution. The sample complexity is O_n(d^{2n} log d) if the samples are drawn from the uniform distribution instead. The approximation error is guaranteed to be at most O(σ), and the run-time depends on log(1/σ). In the setting where each 𝐱_i and y_i are known up to N bits of precision, the run-time’s dependence on N is linear. We also show that our sample complexities are optimal in terms of dⁿ. Furthermore, we show that it is possible to make the run-time independent of 1/σ, at the cost of a higher sample complexity.
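
A minimal Python sketch of the sampling and noise model described above, with uniform 𝐱_i for simplicity (the Chebyshev case is analogous); all names are ours:

import random

def sample(p, n_vars, num, rho, sigma):
    """Draw num samples: outlier with prob. rho, else y within sigma of p(x)."""
    data = []
    for _ in range(num):
        x = [random.uniform(-1, 1) for _ in range(n_vars)]
        if random.random() < rho:
            y = random.uniform(-100, 100)             # arbitrary outlier
        else:
            y = p(x) + random.uniform(-sigma, sigma)  # bounded noise
        data.append((x, y))
    return data

p = lambda x: 3 * x[0] ** 2 - x[1]  # degree at most 2 in each variable
print(len(sample(p, n_vars=2, num=5, rho=0.1, sigma=0.05)))  # 5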

Cite as

Vipul Arora, Arnab Bhattacharyya, Mathews Boban, Venkatesan Guruswami, and Esty Kelman. Outlier Robust Multivariate Polynomial Regression. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arora_et_al:LIPIcs.ESA.2024.12,
  author =	{Arora, Vipul and Bhattacharyya, Arnab and Boban, Mathews and Guruswami, Venkatesan and Kelman, Esty},
  title =	{{Outlier Robust Multivariate Polynomial Regression}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.12},
  URN =		{urn:nbn:de:0030-drops-210830},
  doi =		{10.4230/LIPIcs.ESA.2024.12},
  annote =	{Keywords: Robust Statistics, Polynomial Regression, Sample Efficient Learning}
}
Document
Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights

Authors: Vikrant Ashvinkumar, Aaron Bernstein, Nairen Cao, Christoph Grunau, Bernhard Haeupler, Yonggang Jiang, Danupon Nanongkai, and Hsin-Hao Su


Abstract
This paper presents parallel, distributed, and quantum algorithms for single-source shortest paths when edges can have negative integer weights (negative-weight SSSP). We show a framework that reduces negative-weight SSSP in all these settings to n^{o(1)} calls to any SSSP algorithm that works on inputs with non-negative integer edge weights (non-negative-weight SSSP) with a virtual source. More specifically, for a directed graph with m edges, n vertices, undirected hop-diameter D, and polynomially bounded integer edge weights, we show randomized algorithms for negative-weight SSSP with
- W_{SSSP}(m,n)n^{o(1)} work and S_{SSSP}(m,n)n^{o(1)} span, given access to a non-negative-weight SSSP algorithm with W_{SSSP}(m,n) work and S_{SSSP}(m,n) span in the parallel model,
- T_{SSSP}(n,D)n^{o(1)} rounds, given access to a non-negative-weight SSSP algorithm that takes T_{SSSP}(n,D) rounds in CONGEST, and
- Q_{SSSP}(m,n)n^{o(1)} quantum edge queries, given access to a non-negative-weight SSSP algorithm that takes Q_{SSSP}(m,n) queries in the quantum edge query model.
This work builds on the recent result of Bernstein, Nanongkai, and Wulff-Nilsen [Bernstein et al., 2022], which gives a near-linear time algorithm for negative-weight SSSP in the sequential setting. Using current state-of-the-art non-negative-weight SSSP algorithms yields randomized algorithms for negative-weight SSSP with
- m^{1+o(1)} work and n^{1/2+o(1)} span in the parallel model,
- (n^{2/5}D^{2/5} + √n + D)n^{o(1)} rounds in CONGEST, and
- m^{1/2}n^{1/2+o(1)} quantum queries to the adjacency list or n^{1.5+o(1)} quantum queries to the adjacency matrix.
Up to an n^{o(1)} factor, the parallel and distributed results match the current best upper bounds for reachability [Jambulapati et al., 2019; Cao et al., 2021]. Consequently, any improvement to negative-weight SSSP in these models beyond the n^{o(1)} factor necessitates an improvement to the current best bounds for reachability. The quantum result matches the lower bound up to an n^{o(1)} factor [Berzina et al., 2004]. Our main technical contribution is an efficient reduction from computing a low-diameter decomposition (LDD) of directed graphs to computations of non-negative-weight SSSP with a virtual source. Efficiently computing an LDD has heretofore only been known for undirected graphs in both the parallel and distributed models, and it has been rather unstudied in quantum models. The directed LDD is a crucial step of the sequential algorithm in [Bernstein et al., 2022], and we think that its applications to other problems in parallel and distributed models are far from being exhausted. Other ingredients of our results include altering the recursion structure of the scaling algorithm in [Bernstein et al., 2022] to surmount difficulties that arise in these models, and an efficient reduction from computing strongly connected components to computations of SSSP with a virtual source in CONGEST. The latter result answers a question posed in [Bernstein and Nanongkai, 2019] in the negative.
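
For intuition about reducing negative weights to non-negative-weight SSSP with a virtual source, here is a Python sketch of the classical sequential idea (Johnson-style reweighting, an illustration of ours, not the paper's framework): one SSSP computation from a virtual source yields potentials phi with w(u,v) + phi[u] - phi[v] ≥ 0, after which any non-negative-weight SSSP algorithm applies.

import heapq

def potentials_from_virtual_source(n, edges):
    """Bellman-Ford from a virtual source with 0-weight edges to all vertices."""
    phi = [0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if phi[u] + w < phi[v]:
                phi[v] = phi[u] + w
    return phi  # assumes no negative cycle

def dijkstra(n, adj, s):
    dist, pq = [float("inf")] * n, [(0, s)]
    dist[s] = 0
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def sssp_negative(n, edges, s):
    phi = potentials_from_virtual_source(n, edges)
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + phi[u] - phi[v]))  # reduced weights are non-negative
    dist = dijkstra(n, adj, s)
    return [d + phi[v] - phi[s] if d < float("inf") else None
            for v, d in enumerate(dist)]

print(sssp_negative(3, [(0, 1, 2), (1, 2, -5), (0, 2, 0)], s=0))  # [0, 2, -3]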

Cite as

Vikrant Ashvinkumar, Aaron Bernstein, Nairen Cao, Christoph Grunau, Bernhard Haeupler, Yonggang Jiang, Danupon Nanongkai, and Hsin-Hao Su. Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 13:1-13:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ashvinkumar_et_al:LIPIcs.ESA.2024.13,
  author =	{Ashvinkumar, Vikrant and Bernstein, Aaron and Cao, Nairen and Grunau, Christoph and Haeupler, Bernhard and Jiang, Yonggang and Nanongkai, Danupon and Su, Hsin-Hao},
  title =	{{Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{13:1--13:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.13},
  URN =		{urn:nbn:de:0030-drops-210849},
  doi =		{10.4230/LIPIcs.ESA.2024.13},
  annote =	{Keywords: Parallel algorithm, distributed algorithm, shortest paths}
}
Document
Approximating Maximum-Size Properly Colored Forests

Authors: Yuhang Bai, Kristóf Bérczi, Gergely Csáji, and Tamás Schwarcz


Abstract
In the Properly Colored Spanning Tree problem, we are given an edge-colored undirected graph and the goal is to find a properly colored spanning tree. The problem is interesting not only from a graph coloring point of view but also because it is closely related to the Degree Bounded Spanning Tree and (1,2)-Traveling Salesman problems. We propose an optimization version, called the Maximum-size Properly Colored Forest problem, which aims to find a properly colored forest with as many edges as possible. We consider the problem in different graph classes and for different numbers of colors, and present polynomial-time approximation algorithms as well as inapproximability results for these settings. We also consider the Maximum-size Properly Colored Tree problem, asking for the maximum size of a properly colored tree not necessarily spanning all the vertices. We show that the optimum is significantly more difficult to approximate than in the forest case, and provide an approximation algorithm for complete multigraphs.
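
A minimal Python sketch of the central definition (a checker of our own): a forest is properly colored if no two edges sharing a vertex have the same color.

from collections import defaultdict

def is_properly_colored(edges):
    """edges: list of (u, v, color) forming a forest."""
    incident = defaultdict(set)  # vertex -> colors of incident edges
    for u, v, c in edges:
        if c in incident[u] or c in incident[v]:
            return False
        incident[u].add(c)
        incident[v].add(c)
    return True

print(is_properly_colored([(0, 1, "red"), (1, 2, "blue"), (2, 3, "red")]))  # True
print(is_properly_colored([(0, 1, "red"), (1, 2, "red")]))                  # False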

Cite as

Yuhang Bai, Kristóf Bérczi, Gergely Csáji, and Tamás Schwarcz. Approximating Maximum-Size Properly Colored Forests. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bai_et_al:LIPIcs.ESA.2024.14,
  author =	{Bai, Yuhang and B\'{e}rczi, Krist\'{o}f and Cs\'{a}ji, Gergely and Schwarcz, Tam\'{a}s},
  title =	{{Approximating Maximum-Size Properly Colored Forests}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.14},
  URN =		{urn:nbn:de:0030-drops-210858},
  doi =		{10.4230/LIPIcs.ESA.2024.14},
  annote =	{Keywords: Approximation algorithm, (1,2)-traveling salesman problem, Degree bounded spanning tree, Properly colored forest}
}
Document
A (5/3+ε)-Approximation for Tricolored Non-Crossing Euclidean TSP

Authors: Júlia Baligács, Yann Disser, Andreas Emil Feldmann, and Anna Zych-Pawlewicz


Abstract
In the Tricolored Euclidean Traveling Salesperson problem, we are given k = 3 sets of points in the plane and are looking for disjoint tours, each covering one of the sets. Arora (1998) famously gave a PTAS based on "patching" for the case k = 1 and, recently, Dross et al. (2023) generalized this result to k = 2. Our contribution is a (5/3+ε)-approximation algorithm for k = 3 that further generalizes Arora’s approach. It is believed that patching is generally no longer possible for more than two tours. We circumvent this issue by either applying a conditional patching scheme for three tours or using an alternative approach based on a weighted solution for k = 2.

Cite as

Júlia Baligács, Yann Disser, Andreas Emil Feldmann, and Anna Zych-Pawlewicz. A (5/3+ε)-Approximation for Tricolored Non-Crossing Euclidean TSP. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{baligacs_et_al:LIPIcs.ESA.2024.15,
  author =	{Balig\'{a}cs, J\'{u}lia and Disser, Yann and Feldmann, Andreas Emil and Zych-Pawlewicz, Anna},
  title =	{{A (5/3+\epsilon)-Approximation for Tricolored Non-Crossing Euclidean TSP}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.15},
  URN =		{urn:nbn:de:0030-drops-210862},
  doi =		{10.4230/LIPIcs.ESA.2024.15},
  annote =	{Keywords: Approximation Algorithms, geometric Network Optimization, Euclidean TSP, non-crossing Structures}
}
Document
Longest Common Substring with Gaps and Related Problems

Authors: Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan


Abstract
The longest common substring (also known as longest common factor) and longest common subsequence problems are two well-studied classical string problems. The former is solvable in optimal 𝒪(n) time for two strings of length m and n with m ≤ n, and the latter is solvable in 𝒪(nm) time, which is conditionally optimal under the Strong Exponential Time Hypothesis. In this work, we study the problem of longest common factor with gaps, that is, finding a set of at most k matching substrings obeying precedence conditions with maximum total length. For k = 1, this is equivalent to the longest common factor problem, and for k = m, this is equivalent to the longest common subsequence problem. Our work demonstrates that, for constant k, this problem can be solved in strongly subquadratic time, i.e., nm^{1 - Θ(1)}. Motivated by co-linear chaining applications in Computational Biology, we further demonstrate that the longest common factor with gaps results can be extended to the case where the matches are restricted to maximal exact matches (MEMs). To further demonstrate the applicability of our techniques, we show that a similar approach can be used for a restricted version of the episode matching problem where one seeks an ordered set of at most k matches whose concatenation equals a query pattern P and the length of the substring of T containing the matches is minimized. These solutions all run in strongly subquadratic time for constant k.
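
For reference, the k = 1 case is equivalent to the longest common factor problem; below is the classic quadratic dynamic program as a baseline sketch in Python (the optimal O(n) solution uses suffix trees, and the paper's contribution is the gapped, subquadratic setting):

def longest_common_factor(s, t):
    """O(|s||t|)-time DP: longest substring occurring in both s and t."""
    best, best_end = 0, 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the common suffix
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return s[best_end - best:best_end]

print(longest_common_factor("xabcdy", "zabcw"))  # "abc"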

Cite as

Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan. Longest Common Substring with Gaps and Related Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{banerjee_et_al:LIPIcs.ESA.2024.16,
  author =	{Banerjee, Aranya and Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{Longest Common Substring with Gaps and Related Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.16},
  URN =		{urn:nbn:de:0030-drops-210877},
  doi =		{10.4230/LIPIcs.ESA.2024.16},
  annote =	{Keywords: Pattern Matching, Longest Common Subsequence, Episode Matching}
}
Document
Cuts in Graphs with Matroid Constraints

Authors: Aritra Banik, Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Satyabrata Jana, and Saket Saurabh


Abstract
Vertex (s, t)-Cut and Vertex Multiway Cut are two fundamental graph separation problems in algorithmic graph theory. We study matroidal generalizations of these problems, where in addition to the usual input, we are given a representation R ∈ 𝔽^{r × n} of a linear matroid ℳ = (V(G), ℐ) of rank r, and the goal is to determine whether there exists a vertex subset S ⊆ V(G) that has the required cut properties, as well as is independent in the matroid ℳ. We refer to these problems as Independent Vertex (s, t)-Cut and Independent Multiway Cut, respectively. We show that these problems are fixed-parameter tractable (FPT) when parameterized by the solution size (which can be assumed to be equal to the rank of the matroid ℳ). These results are obtained by exploiting the recent technique of flow augmentation [Kim et al., STOC '22], combined with a dynamic programming algorithm on flow-paths à la [Feige and Mahdian, STOC '06] that maintains a representative family of solutions w.r.t. the given matroid [Marx, TCS '06; Fomin et al., JACM]. As a corollary, we also obtain FPT algorithms for the independent version of Odd Cycle Transversal. Further, our results can be generalized to other variants of the problems, e.g., weighted versions, or edge-deletion versions.

Cite as

Aritra Banik, Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Satyabrata Jana, and Saket Saurabh. Cuts in Graphs with Matroid Constraints. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{banik_et_al:LIPIcs.ESA.2024.17,
  author =	{Banik, Aritra and Fomin, Fedor V. and Golovach, Petr A. and Inamdar, Tanmay and Jana, Satyabrata and Saurabh, Saket},
  title =	{{Cuts in Graphs with Matroid Constraints}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.17},
  URN =		{urn:nbn:de:0030-drops-210887},
  doi =		{10.4230/LIPIcs.ESA.2024.17},
  annote =	{Keywords: s-t-cut, multiway Cut, matroid, odd cycle transversal, feedback vertex set, fixed-parameter tractability}
}
Document
Height-Bounded Lempel-Ziv Encodings

Authors: Hideo Bannai, Mitsuru Funakoshi, Diptarama Hendrian, Myuji Matsuda, and Simon J. Puglisi


Abstract
We introduce height-bounded LZ encodings (LZHB), a new family of compressed representations that are variants of Lempel-Ziv parsings with a focus on bounding the worst-case access time to arbitrary positions in the text directly via the compressed representation. An LZ-like encoding is a partitioning of the string into phrases of length 1, which can be encoded literally, or phrases of length at least 2, which have a previous occurrence in the string and can be encoded by the position and length of that occurrence. An LZ-like encoding induces an implicit referencing forest on the set of positions of the string. An LZHB encoding is an LZ-like encoding where the height of the implicit referencing forest is bounded. An LZHB encoding with height constraint h allows access to an arbitrary position of the underlying text using O(h) predecessor queries. While computing the optimal (i.e., smallest) LZHB encoding efficiently seems to be difficult [Cicalese & Ugazio 2024, arXiv, to appear at DLT 2024], we give the first linear-time algorithm for strings over a constant-size alphabet that computes the greedy LZHB encoding, i.e., the string is processed from beginning to end, and the longest prefix of the remaining string that can satisfy the height constraint is taken as the next phrase. Our algorithms significantly improve, both theoretically and practically, on the very recently and independently proposed algorithms by Lipták et al. (CPM 2024). We also analyze the size of height-bounded LZ encodings in the context of repetitiveness measures, and show that there exists a constant c such that the size ẑ_{HB(c log n)} of the optimal LZHB encoding whose height is bounded by c log n for any string of length n is O(ĝ_{rl}), where ĝ_{rl} is the size of the smallest run-length grammar. Furthermore, we show that there exists a family of strings such that ẑ_{HB(c log n)} = o(ĝ_{rl}), thus making ẑ_{HB(c log n)} one of the smallest known repetitiveness measures for which O(polylog n)-time access is possible using linear (O(ẑ_{HB(c log n)})) space.
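
A naive quadratic Python sketch of the greedy LZHB parsing described above (the paper's algorithm runs in linear time; for simplicity we also disallow sources that overlap the phrase):

def greedy_lzhb(s, h):
    """Greedy LZ-like parse where the referencing-forest height stays at most h."""
    n = len(s)
    height = [0] * n  # height of each position in the referencing forest
    phrases, i = [], 0
    while i < n:
        best_len, best_src = 1, None
        for src in range(i):  # try every earlier occurrence start
            length = 0
            while (i + length < n and src + length < i
                   and s[src + length] == s[i + length]
                   and height[src + length] + 1 <= h):
                length += 1
            if length >= 2 and length > best_len:
                best_len, best_src = length, src
        if best_src is None:
            phrases.append(s[i])          # literal phrase, height 0
            i += 1
        else:
            for off in range(best_len):   # copy phrase references the source
                height[i + off] = height[best_src + off] + 1
            phrases.append((best_src, best_len))
            i += best_len
    return phrases

print(greedy_lzhb("abababab", h=1))  # ['a', 'b', (0, 2), (0, 2), (0, 2)]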

Cite as

Hideo Bannai, Mitsuru Funakoshi, Diptarama Hendrian, Myuji Matsuda, and Simon J. Puglisi. Height-Bounded Lempel-Ziv Encodings. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bannai_et_al:LIPIcs.ESA.2024.18,
  author =	{Bannai, Hideo and Funakoshi, Mitsuru and Hendrian, Diptarama and Matsuda, Myuji and Puglisi, Simon J.},
  title =	{{Height-Bounded Lempel-Ziv Encodings}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.18},
  URN =		{urn:nbn:de:0030-drops-210899},
  doi =		{10.4230/LIPIcs.ESA.2024.18},
  annote =	{Keywords: Lempel-Ziv parsing, data compression}
}
Document
Longest Common Extensions with Wildcards: Trade-Off and Applications

Authors: Gabriel Bathie, Panagiotis Charalampopoulos, and Tatiana Starikovskaya


Abstract
We study the Longest Common Extension (LCE) problem in a string containing wildcards. Wildcards (also called "don't cares" or "holes") are special characters that match any other character in the alphabet, similar to the character "?" in Unix commands or "." in regular expression engines. We consider the problem parametrized by G, the number of maximal contiguous groups of wildcards in the input string. Our main contribution is a simple data structure for this problem that can be built in O(n (G/t) log n) time, occupies O(nG/t) space, and answers queries in O(t) time, for any t ∈ [1 .. G]. Up to the O(log n) factor, this interpolates smoothly between the data structure of Crochemore et al. [JDA 2015], which has O(nG) preprocessing time and space, and O(1) query time, and a simple solution based on the "kangaroo jumping" technique [Landau and Vishkin, STOC 1986], which has O(n) preprocessing time and space, and O(G) query time. By establishing a connection between this problem and Boolean matrix multiplication, we show that our solution is optimal up to subpolynomial factors when G = Ω(n) under a widely believed hypothesis. In addition, we develop a new simple, deterministic and combinatorial algorithm for sparse Boolean matrix multiplication. Finally, we show that our data structure can be used to obtain efficient algorithms for approximate pattern matching and structural analysis of strings with wildcards. First, we consider the problem of pattern matching with k errors (i.e., edit operations) in the setting where both the pattern and the text may contain wildcards. The "kangaroo jumping" technique can be adapted to yield an algorithm for this problem with runtime O(n(k+G)), where G is the total number of maximal contiguous groups of wildcards in the text and the pattern and n is the length of the text. By combining "kangaroo jumping" with a tailor-made data structure for LCE queries, Akutsu [IPL 1995] devised an O(n√{km} polylog m)-time algorithm. We improve on both algorithms when k ≪ G ≪ m by giving an algorithm with runtime O(n(k + √{Gk log n})). Second, we give O(n√G log n)-time and O(n)-space algorithms for computing the prefix array, as well as the quantum/deterministic border and period arrays of a string with wildcards. This is an improvement over the O(n√{n log n})-time algorithms of Iliopoulos and Radoszewski [CPM 2016] when G = O(n / log n).
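
The "kangaroo jumping" baseline mentioned above is simple enough to sketch. Below, a naive wildcard-aware LCE scan stands in for the paper's LCE data structure; the function names and calling convention are our own illustration:

def lce(s, t, i, j):
    # Naive longest common extension of s[i:] and t[j:], where '?' matches
    # anything. The paper replaces this linear scan with a data structure.
    l = 0
    while (i + l < len(s) and j + l < len(t)
           and (s[i + l] == t[j + l] or s[i + l] == '?' or t[j + l] == '?')):
        l += 1
    return l

def kangaroo_mismatches(text, pattern, pos, k):
    # Kangaroo jumping: verify whether pattern occurs at text[pos:] with at
    # most k mismatches using at most k+1 LCE queries. Assumes
    # pos + len(pattern) <= len(text).
    i, count = 0, 0
    while i < len(pattern):
        i += lce(text, pattern, pos + i, i)
        if i < len(pattern):
            count += 1        # a genuine mismatch at this alignment
            if count > k:
                return None   # more than k mismatches: no occurrence
            i += 1            # jump over the mismatch
    return count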

Cite as

Gabriel Bathie, Panagiotis Charalampopoulos, and Tatiana Starikovskaya. Longest Common Extensions with Wildcards: Trade-Off and Applications. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 19:1-19:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bathie_et_al:LIPIcs.ESA.2024.19,
  author =	{Bathie, Gabriel and Charalampopoulos, Panagiotis and Starikovskaya, Tatiana},
  title =	{{Longest Common Extensions with Wildcards: Trade-Off and Applications}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{19:1--19:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.19},
  URN =		{urn:nbn:de:0030-drops-210904},
  doi =		{10.4230/LIPIcs.ESA.2024.19},
  annote =	{Keywords: Longest common prefix, longest common extension, wildcards, Boolean matrix multiplication, approximate pattern matching, periodicity arrays}
}
Document
Pattern Matching with Mismatches and Wildcards

Authors: Gabriel Bathie, Panagiotis Charalampopoulos, and Tatiana Starikovskaya


Abstract
In this work, we address the problem of approximate pattern matching with wildcards. Given a pattern P of length m containing D wildcards, a text T of length n, and an integer k, our objective is to identify all fragments of T within Hamming distance k from P. Our primary contribution is an algorithm with runtime 𝒪(n + (D+k)(G+k)⋅ n/m) for this problem. Here, G ≤ D represents the number of maximal wildcard fragments in P. We derive this algorithm by elaborating in a non-trivial way on the ideas presented by [Charalampopoulos, Kociumaka, and Wellnitz, FOCS'20] for pattern matching with mismatches (without wildcards). Our algorithm improves over the state of the art when D, G, and k are small relative to n. For instance, if m = n/2, k = G = n^{2/5}, and D = n^{3/5}, our algorithm operates in 𝒪(n) time, surpassing the Ω(n^{6/5}) time requirement of all previously known algorithms. In the case of exact pattern matching with wildcards (k = 0), we present a much simpler algorithm with runtime 𝒪(n + DG ⋅ n/m) that clearly illustrates our main technical innovation: the utilisation of positions of P that do not belong to any fragment of P with a density of wildcards much larger than D/m as anchors for the sought (approximate) occurrences. Notably, our algorithm outperforms the best-known 𝒪(n log m)-time FFT-based algorithms of [Cole and Hariharan, STOC'02] and [Clifford and Clifford, IPL'04] if DG = o(m log m). We complement our algorithmic results with a structural characterization of the k-mismatch occurrences of P. We demonstrate that in a text of length 𝒪(m), these occurrences can be partitioned into 𝒪((D+k)(G+k)) arithmetic progressions. Additionally, we construct an infinite family of examples with Ω((D+k)k) arithmetic progressions of occurrences, leveraging a combinatorial result on progression-free sets [Elkin, SODA'10].

Cite as

Gabriel Bathie, Panagiotis Charalampopoulos, and Tatiana Starikovskaya. Pattern Matching with Mismatches and Wildcards. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bathie_et_al:LIPIcs.ESA.2024.20,
  author =	{Bathie, Gabriel and Charalampopoulos, Panagiotis and Starikovskaya, Tatiana},
  title =	{{Pattern Matching with Mismatches and Wildcards}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{20:1--20:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.20},
  URN =		{urn:nbn:de:0030-drops-210910},
  doi =		{10.4230/LIPIcs.ESA.2024.20},
  annote =	{Keywords: pattern matching, wildcards, mismatches, Hamming distance}
}
Document
Improved Space Bounds for Subset Sum

Authors: Tatiana Belova, Nikolai Chukhin, Alexander S. Kulikov, and Ivan Mihajlin


Abstract
More than 40 years ago, Schroeppel and Shamir presented an algorithm that solves the Subset Sum problem for n integers in time O^*(2^{0.5n}) and space O^*(2^{0.25n}). The time upper bound remains unbeaten, but the space upper bound has been improved to O^*(2^{0.249999n}) in a recent breakthrough paper by Nederlof and Węgrzycki (STOC 2021). Their algorithm is a clever combination of a number of previously known techniques with a new reduction and a new algorithm for the Orthogonal Vectors problem. In this paper, we give two new algorithms for Subset Sum. We start by presenting an Arthur-Merlin algorithm: upon receiving the verifier’s randomness, the prover sends an n/4-bit long proof to the verifier who checks it in (deterministic) time and space O^*(2^{n/4}). An interesting consequence of this result is the following fine-grained lower bound: assuming that 4-SUM cannot be solved in time O(n^{2-ε}) for all ε > 0, Circuit SAT cannot be solved in time O(g2^{(1-ε)n}), for all ε > 0 (where n and g denote the number of inputs and the number of gates, respectively). Then, we improve the space bound by Nederlof and Węgrzycki to O^*(2^{0.246n}) and also simplify their algorithm and its analysis. We achieve this space bound by further filtering sets of subsets using a random prime number. This allows us to reduce an instance of Subset Sum to a larger number of instances of smaller size.
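
For orientation, here is a sketch of the classical meet-in-the-middle baseline, which runs in O^*(2^{n/2}) time and space; Schroeppel and Shamir match its running time in only O^*(2^{n/4}) space, and the paper pushes the space bound below even that. This is background, not the new algorithm:

from bisect import bisect_left

def subset_sum_mitm(nums, target):
    # Meet-in-the-middle Subset Sum: O*(2^{n/2}) time AND space.
    # Split the items in half, enumerate all subset sums of each half,
    # and look for a complementary pair.
    half = len(nums) // 2

    def all_sums(arr):
        sums = {0}
        for x in arr:
            sums |= {s + x for s in sums}
        return sums

    right_sums = sorted(all_sums(nums[half:]))
    for s in all_sums(nums[:half]):
        idx = bisect_left(right_sums, target - s)
        if idx < len(right_sums) and right_sums[idx] == target - s:
            return True
    return False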

Cite as

Tatiana Belova, Nikolai Chukhin, Alexander S. Kulikov, and Ivan Mihajlin. Improved Space Bounds for Subset Sum. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 21:1-21:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{belova_et_al:LIPIcs.ESA.2024.21,
  author =	{Belova, Tatiana and Chukhin, Nikolai and Kulikov, Alexander S. and Mihajlin, Ivan},
  title =	{{Improved Space Bounds for Subset Sum}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{21:1--21:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.21},
  URN =		{urn:nbn:de:0030-drops-210925},
  doi =		{10.4230/LIPIcs.ESA.2024.21},
  annote =	{Keywords: algorithms, subset sum, complexity, space, upper bounds}
}
Document
Hypergraph Connectivity Augmentation in Strongly Polynomial Time

Authors: Kristóf Bérczi, Karthekeyan Chandrasekaran, Tamás Király, and Shubhang Kulkarni


Abstract
We consider hypergraph network design problems where the goal is to construct a hypergraph that satisfies certain connectivity requirements. For graph network design problems where the goal is to construct a graph that satisfies certain connectivity requirements, the number of edges in every feasible solution is at most quadratic in the number of vertices. In contrast, for hypergraph network design problems, we might have feasible solutions in which the number of hyperedges is exponential in the number of vertices. This presents an additional technical challenge in hypergraph network design problems compared to graph network design problems: in order to solve the problem in polynomial time, we first need to show that there exists a feasible solution in which the number of hyperedges is polynomial in the input size. The central theme of this work is to overcome this additional technical challenge for certain hypergraph network design problems. We show that these hypergraph network design problems admit solutions in which the number of hyperedges is polynomial in the number of vertices and moreover, can be solved in strongly polynomial time. Our work improves on the previous fastest pseudo-polynomial run-time for these problems. As applications of our results, we derive the first strongly polynomial time algorithms for (i) degree-specified hypergraph connectivity augmentation using hyperedges and (ii) degree-specified hypergraph node-to-area connectivity augmentation using hyperedges.

Cite as

Kristóf Bérczi, Karthekeyan Chandrasekaran, Tamás Király, and Shubhang Kulkarni. Hypergraph Connectivity Augmentation in Strongly Polynomial Time. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{berczi_et_al:LIPIcs.ESA.2024.22,
  author =	{B\'{e}rczi, Krist\'{o}f and Chandrasekaran, Karthekeyan and Kir\'{a}ly, Tam\'{a}s and Kulkarni, Shubhang},
  title =	{{Hypergraph Connectivity Augmentation in Strongly Polynomial Time}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.22},
  URN =		{urn:nbn:de:0030-drops-210938},
  doi =		{10.4230/LIPIcs.ESA.2024.22},
  annote =	{Keywords: Hypergraphs, Hypergraph Connectivity, Submodular Functions, Combinatorial Optimization}
}
Document
Density-Sensitive Algorithms for (Δ + 1)-Edge Coloring

Authors: Sayan Bhattacharya, Martín Costa, Nadav Panski, and Shay Solomon


Abstract
Vizing’s theorem asserts the existence of a (Δ+1)-edge coloring for any graph G, where Δ = Δ(G) denotes the maximum degree of G. Several polynomial time (Δ+1)-edge coloring algorithms are known, and the state-of-the-art running time (up to polylogarithmic factors) is Õ(min{m √n, m Δ}), by Gabow, Nishizeki, Kariv, Leven and Terada from 1985, where n and m denote the number of vertices and edges in the graph, respectively. Recently, Sinnamon shaved off a polylog(n) factor from the time bound of Gabow et al. The arboricity α = α(G) of a graph G is the minimum number of edge-disjoint forests into which its edge set can be partitioned, and it is a measure of the graph’s "uniform density". While α ≤ Δ in any graph, many natural and real-world graphs exhibit a significant separation between α and Δ. In this work we design a (Δ+1)-edge coloring algorithm with a running time of Õ(min{m √n, m Δ})⋅ α/Δ, thus improving the longstanding time barrier by a factor of α/Δ. In particular, we achieve a near-linear runtime for bounded arboricity graphs (i.e., α = Õ(1)) as well as when α = Õ(Δ/√n). Our algorithm builds on Gabow et al.’s and Sinnamon’s algorithms, and can be viewed as a density-sensitive refinement of them.
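
To see why Δ+1 is the interesting target, compare with the trivial greedy procedure below, which only guarantees 2Δ-1 colors. The sketch is ours and is unrelated to the paper's density-sensitive algorithm:

from collections import defaultdict

def greedy_edge_coloring(edges):
    # Greedy proper edge coloring of a simple graph: each edge takes the
    # smallest color unused at both endpoints, so at most 2*Delta - 1
    # colors are used. Vizing guarantees that Delta + 1 colors suffice;
    # computing such a coloring quickly is the hard part.
    used = defaultdict(set)       # vertex -> colors on incident edges
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring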

Cite as

Sayan Bhattacharya, Martín Costa, Nadav Panski, and Shay Solomon. Density-Sensitive Algorithms for (Δ + 1)-Edge Coloring. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bhattacharya_et_al:LIPIcs.ESA.2024.23,
  author =	{Bhattacharya, Sayan and Costa, Mart{\'\i}n and Panski, Nadav and Solomon, Shay},
  title =	{{Density-Sensitive Algorithms for (\Delta + 1)-Edge Coloring}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{23:1--23:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.23},
  URN =		{urn:nbn:de:0030-drops-210945},
  doi =		{10.4230/LIPIcs.ESA.2024.23},
  annote =	{Keywords: Graph Algorithms, Edge Coloring, Arboricity}
}
Document
A Parameterized Algorithm for Vertex and Edge Connectivity of Embedded Graphs

Authors: Therese Biedl, Prosenjit Bose, and Karthik Murali


Abstract
Computing the vertex connectivity and the edge connectivity of a graph are two classical problems in algorithmic graph theory. The focus of this paper is on computing these parameters for graphs drawn on the plane. A typical example of such graphs is the class of planar graphs, which can be embedded without any crossings. It has long been known that vertex and edge connectivity of planar embedded graphs can be computed in linear time. Very recently, Biedl and Murali extended the techniques from planar graphs to 1-plane graphs without ×-crossings, i.e., crossings whose endpoints induce a matching. While the tools used were novel, they were highly tailored to 1-plane graphs and did not provide much leeway for further extension. In this paper, we develop alternate techniques that are simpler, have wider applications to near-planar graphs, and can be used to test both vertex and edge connectivity. Our technique works for all those embedded graphs where any pair of crossing edges are connected by a path that, roughly speaking, can be covered with few cells of the drawing. Important examples of such graphs include optimal 2-planar and optimal 3-planar graphs, d-map graphs, d-framed graphs, graphs with bounded crossing number, and k-plane graphs with a bounded number of ×-crossings.

Cite as

Therese Biedl, Prosenjit Bose, and Karthik Murali. A Parameterized Algorithm for Vertex and Edge Connectivity of Embedded Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{biedl_et_al:LIPIcs.ESA.2024.24,
  author =	{Biedl, Therese and Bose, Prosenjit and Murali, Karthik},
  title =	{{A Parameterized Algorithm for Vertex and Edge Connectivity of Embedded Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.24},
  URN =		{urn:nbn:de:0030-drops-210950},
  doi =		{10.4230/LIPIcs.ESA.2024.24},
  annote =	{Keywords: Vertex Connectivity, Edge Connectivity, 1-planar, k-planar, Linear-time}
}
Document
Graph Spanners for Group Steiner Distances

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Alessandro Straziota


Abstract
A spanner is a sparse subgraph of a given graph G which preserves distances, measured w.r.t. some distance metric, up to a multiplicative stretch factor. This paper addresses the problem of constructing graph spanners w.r.t. the group Steiner metric, which generalizes the recently introduced beer distance metric. In such a metric we are given a collection of groups of required vertices, and we measure the distance between two vertices as the length of the shortest path between them that traverses at least one required vertex from each group. We discuss the relation between group Steiner spanners and classic spanners and we show that they exhibit strong ties with sourcewise spanners w.r.t. the shortest path metric. Nevertheless, group Steiner spanners capture several interesting scenarios that are not encompassed by existing spanners. This happens, e.g., in the singleton case, in which each group consists of a single required vertex, thus modeling the setting in which routes need to traverse certain points of interest (in any order). We provide several constructions of group Steiner spanners for both the all-pairs and single-source case, which exhibit various size-stretch trade-offs. Notably, we provide spanners with almost-optimal trade-offs for the singleton case. Moreover, some of our spanners also yield novel trade-offs for classical sourcewise spanners. Finally, we also investigate the query times that can be achieved when our spanners are turned into group Steiner distance oracles with the same size, stretch, and building time.
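
For the single-group special case (the beer distance that this metric generalizes), the group Steiner distance reduces to routing through the cheapest required vertex, as the sketch below illustrates for an undirected weighted graph. With two or more groups the visiting order matters and this simple formula no longer applies; the code and its naming are our own:

import heapq

def dijkstra(adj, src):
    # Textbook Dijkstra over an adjacency dict {u: [(v, weight), ...]}.
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def beer_distance(adj, u, v, group):
    # Single-group group Steiner distance in an undirected graph:
    # the cheapest u-v walk through at least one required vertex.
    du, dv = dijkstra(adj, u), dijkstra(adj, v)
    return min(du.get(b, float('inf')) + dv.get(b, float('inf'))
               for b in group)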

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Alessandro Straziota. Graph Spanners for Group Steiner Distances. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bilo_et_al:LIPIcs.ESA.2024.25,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Straziota, Alessandro},
  title =	{{Graph Spanners for Group Steiner Distances}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.25},
  URN =		{urn:nbn:de:0030-drops-210968},
  doi =		{10.4230/LIPIcs.ESA.2024.25},
  annote =	{Keywords: Network sparsification, Graph spanners, Group Steiner tree, Distance oracles}
}
Document
Achieving Envy-Freeness Through Items Sale

Authors: Vittorio Bilò, Evangelos Markakis, and Cosimo Vinci


Abstract
We consider a fair division setting of allocating indivisible items to a set of agents. In order to cope with the well-known impossibility results related to the non-existence of envy-free allocations, we allow the option of selling some of the items so as to compensate envious agents with monetary rewards. In fact, this approach is not new in practice, as it is applied in some countries in inheritance or divorce cases. A drawback of this approach is that it may create a value loss, since the market value derived by selling an item can be less than the value perceived by the agents. Therefore, given the market values of all items, a natural goal is to identify which items to sell so as to arrive at an envy-free allocation, while at the same time maximizing the overall social welfare. Our work is focused on the algorithmic study of this problem, and we provide both positive and negative results on its approximability. When the agents have a commonly accepted value for each item, our results show a sharp separation between the case of two agents and that of three or more. In particular, we establish a PTAS for two agents, and we complement this with a hardness result showing that, for three or more agents, the best approximation guarantee is provided by essentially selling all items. This hardness barrier, however, is relieved when the number of distinct item values is constant, as we provide an efficient algorithm for any number of agents. We also explore the generalization to heterogeneous valuations, where the hardness result continues to hold, and where we provide positive results for certain special cases.

Cite as

Vittorio Bilò, Evangelos Markakis, and Cosimo Vinci. Achieving Envy-Freeness Through Items Sale. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 26:1-26:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bilo_et_al:LIPIcs.ESA.2024.26,
  author =	{Bil\`{o}, Vittorio and Markakis, Evangelos and Vinci, Cosimo},
  title =	{{Achieving Envy-Freeness Through Items Sale}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{26:1--26:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.26},
  URN =		{urn:nbn:de:0030-drops-210977},
  doi =		{10.4230/LIPIcs.ESA.2024.26},
  annote =	{Keywords: Fair Item Allocation, Approximation Algorithms, Envy-freeness, Markets}
}
Document
Art Galleries and Mobile Guards: Revisiting O'Rourke’s Proof

Authors: Ahmad Biniaz


Abstract
O'Rourke (1983) proved that every n-vertex polygon, with n ≥ 4, can be guarded by ⌊n/4⌋ edges or diagonals - a variant of Chvátal’s theorem on the sufficiency of ⌊n/3⌋ vertices. We present a short proof for a somewhat stronger result that allows us to impose some constraints on the guards. We prove that for every given subset V of vertices, the polygon can be guarded by ⌊(n+2|V|)/4⌋ edges or diagonals that include at least one edge or diagonal incident to every vertex of V. This bound is the best achievable given the constraint on V. Our proof is by induction and suggests a simple linear-time algorithm after triangulating the polygon. The sufficiency of ⌊n/4⌋ guards is the special case of the new result where V is the empty set.

Cite as

Ahmad Biniaz. Art Galleries and Mobile Guards: Revisiting O'Rourke’s Proof. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 27:1-27:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{biniaz:LIPIcs.ESA.2024.27,
  author =	{Biniaz, Ahmad},
  title =	{{Art Galleries and Mobile Guards: Revisiting O'Rourke’s Proof}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{27:1--27:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.27},
  URN =		{urn:nbn:de:0030-drops-210989},
  doi =		{10.4230/LIPIcs.ESA.2024.27},
  annote =	{Keywords: Polygon guarding, Edge guarding, Short proof, Simple algorithm}
}
Document
A Faster Algorithm for the Fréchet Distance in 1D for the Imbalanced Case

Authors: Lotte Blank and Anne Driemel


Abstract
The fine-grained complexity of computing the Fréchet distance has been a topic of much recent work, starting with the quadratic SETH-based conditional lower bound by Bringmann from 2014. Subsequent work established largely the same complexity lower bounds for the Fréchet distance in 1D. However, the imbalanced case, which was shown by Bringmann to be tight in dimensions d ≥ 2, was still left open. Filling in this gap, we show that a faster algorithm for the Fréchet distance in the imbalanced case is possible: Given two 1-dimensional curves of complexity n and n^{α} for some α ∈ (0,1), we can compute their Fréchet distance in O(n^{2α} log² n + n log n) time. This rules out a conditional lower bound of the form O((nm)^{1-ε}) that Bringmann showed for d ≥ 2 and any ε > 0, in turn showing a strict separation from the setting d = 1. At the heart of our approach lies a data structure that stores a 1-dimensional curve P of complexity n, and supports queries with a curve Q of complexity m for the continuous Fréchet distance between P and Q. The data structure has size 𝒪(n log n) and query time 𝒪(m² log² n). Our proof uses a key lemma that is based on the concept of visiting orders and may be of independent interest. We demonstrate this by substantially simplifying the correctness proof of a clustering algorithm by Driemel, Krivošija and Sohler from 2015.
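
For readers unfamiliar with the coupling structure behind such algorithms, here is the textbook O(nm) dynamic program for the discrete Fréchet distance between 1D curves. The paper's continuous setting and its query data structure need substantially more machinery; this is only the discrete analogue, in our own naming:

def discrete_frechet(P, Q):
    # Textbook O(n*m) DP for the *discrete* Frechet distance between
    # 1D curves P and Q (lists of numbers).
    n, m = len(P), len(Q)
    INF = float('inf')
    dp = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(P[i] - Q[j])
            if i == 0 and j == 0:
                prev = 0.0
            elif i == 0:
                prev = dp[0][j - 1]
            elif j == 0:
                prev = dp[i - 1][0]
            else:
                prev = min(dp[i - 1][j], dp[i - 1][j - 1], dp[i][j - 1])
            dp[i][j] = max(prev, d)   # a coupling must pay its worst pair
    return dp[n - 1][m - 1]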

Cite as

Lotte Blank and Anne Driemel. A Faster Algorithm for the Fréchet Distance in 1D for the Imbalanced Case. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 28:1-28:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{blank_et_al:LIPIcs.ESA.2024.28,
  author =	{Blank, Lotte and Driemel, Anne},
  title =	{{A Faster Algorithm for the Fr\'{e}chet Distance in 1D for the Imbalanced Case}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{28:1--28:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.28},
  URN =		{urn:nbn:de:0030-drops-210999},
  doi =		{10.4230/LIPIcs.ESA.2024.28},
  annote =	{Keywords: \{Fr\'{e}chet distance\}, distance oracle, data structures, time series}
}
Document
A Euclidean Embedding for Computing Persistent Homology with Gaussian Kernels

Authors: Jean-Daniel Boissonnat and Kunal Dutta


Abstract
Computing persistent homology of large datasets using Gaussian kernels is useful in the domains of topological data analysis and machine learning as shown by Phillips, Wang and Zheng [SoCG 2015]. However, unlike in the case of persistent homology computation using the Euclidean distance or the k-distance, using Gaussian kernels involves significantly higher overhead, as all distance computations are in terms of the Gaussian kernel distance which is computationally more expensive. Further, most algorithmic implementations (e.g. Gudhi, Ripser, etc.) are based on Euclidean distances, so the question of finding a Euclidean embedding - preferably low-dimensional - that preserves the persistent homology computed with Gaussian kernels, is quite important. We consider the Gaussian kernel power distance (GKPD) given by Phillips, Wang and Zheng. Given an n-point dataset and a relative error parameter ε ∈ (0,1], we show that the persistent homology of the Čech filtration of the dataset computed using the GKPD can be approximately preserved using O(ε^{-2} log n) dimensions, under a high stable rank condition. Our results also extend to the Delaunay filtration and the (simpler) case of the weighted Rips filtrations constructed using the GKPD. Compared to the Euclidean embedding for the Gaussian kernel function in ∼ n dimensions, which uses the Cholesky decomposition of the matrix of the kernel function applied to all pairs of data points, our embedding may also be viewed as dimensionality reduction - reducing the dimensionality from n to ∼ log n dimensions. Our proof utilizes the embedding of Chen and Phillips [ALT 2017], based on the Random Fourier Functions of Rahimi and Recht [NeurIPS 2007], together with two novel ingredients. The first one is a new decomposition of the squared radii of Čech simplices computed using the GKPD, in terms of the pairwise GKPDs between the vertices, which we state and prove. The second is a new concentration inequality for sums of cosine functions of Gaussian random vectors, which we call Gaussian cosine chaoses. We believe these are of independent interest and will find other applications in the future.
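
A minimal sketch of the Random Fourier Features map of Rahimi and Recht that the embedding of Chen and Phillips builds on: an explicit Euclidean map whose inner products approximate the Gaussian kernel. The parameter names (gamma, dim) are ours, and this omits the paper's additional ingredients for the kernel power distance:

import numpy as np

def rff_embedding(X, dim, gamma, seed=0):
    # Random Fourier Features: rows z(x) satisfy
    # z(x) . z(y) ~= exp(-gamma * ||x - y||^2) in expectation.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, dim))
    b = rng.uniform(0.0, 2 * np.pi, size=dim)
    return np.sqrt(2.0 / dim) * np.cos(X @ W + b)

# Z = rff_embedding(X, 512, gamma); Z @ Z.T approximates the kernel matrix.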

Cite as

Jean-Daniel Boissonnat and Kunal Dutta. A Euclidean Embedding for Computing Persistent Homology with Gaussian Kernels. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{boissonnat_et_al:LIPIcs.ESA.2024.29,
  author =	{Boissonnat, Jean-Daniel and Dutta, Kunal},
  title =	{{A Euclidean Embedding for Computing Persistent Homology with Gaussian Kernels}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.29},
  URN =		{urn:nbn:de:0030-drops-211009},
  doi =		{10.4230/LIPIcs.ESA.2024.29},
  annote =	{Keywords: Persistent homology, Gaussian kernels, Random Fourier Features, Euclidean embedding}
}
Document
Exact Minimum Weight Spanners via Column Generation

Authors: Fritz Bökler, Markus Chimani, Henning Jasper, and Mirko H. Wagner


Abstract
Given a weighted graph G, a minimum weight α-spanner is a least-weight subgraph H ⊆ G that preserves minimum distances between all node pairs up to a factor of α. There are many results on heuristics and approximation algorithms, including a recent investigation of their practical performance [Markus Chimani and Finn Stutzenstein, 2022]. Exact approaches, in contrast, have long been dismissed as impractical: The first exact ILP (integer linear program) method [Sigurd and Zachariasen, 2004] from 2004 is based on a model with exponentially many path variables, solved via column generation. A second approach [Ahmed et al., 2019], modeling via arc-based multicommodity flow, was presented in 2019. In both cases, only graphs with 40-100 nodes were reported to be solvable. In this paper, we briefly report on a theoretical comparison between these two models from a polyhedral point of view, and then concentrate on improvements and engineering aspects. We evaluate their performance in a large-scale empirical study. We report that our tuned column generation approach, based on multicriteria shortest path computations, is able to solve instances with over 16 000 nodes within 13 min. Furthermore, now knowing optimal solutions for larger graphs, we are able to investigate the quality of the strongest known heuristic on reasonably sized instances for the first time.
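
Whatever the solution method, an exact approach ultimately certifies that a candidate subgraph preserves all distances up to the stretch factor. A brute-force version of that check (Floyd-Warshall, fine for small instances; the naming is ours):

def apsp(n, edges):
    # Floyd-Warshall all-pairs shortest paths; edges = [(u, v, w), ...],
    # vertices 0..n-1, undirected.
    INF = float('inf')
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def is_alpha_spanner(n, g_edges, h_edges, alpha):
    # Does the subgraph H preserve every G-distance up to factor alpha?
    dg, dh = apsp(n, g_edges), apsp(n, h_edges)
    return all(dh[i][j] <= alpha * dg[i][j]
               for i in range(n) for j in range(n)
               if 0 < dg[i][j] < float('inf'))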

Cite as

Fritz Bökler, Markus Chimani, Henning Jasper, and Mirko H. Wagner. Exact Minimum Weight Spanners via Column Generation. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bokler_et_al:LIPIcs.ESA.2024.30,
  author =	{B\"{o}kler, Fritz and Chimani, Markus and Jasper, Henning and Wagner, Mirko H.},
  title =	{{Exact Minimum Weight Spanners via Column Generation}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.30},
  URN =		{urn:nbn:de:0030-drops-211012},
  doi =		{10.4230/LIPIcs.ESA.2024.30},
  annote =	{Keywords: Graph spanners, ILP, algorithm engineering, experimental study}
}
Document
String 2-Covers with No Length Restrictions

Authors: Itai Boneh, Shay Golan, and Arseny Shur


Abstract
A λ-cover of a string S is a set of strings {C_1, …, C_λ} such that every index in S is contained in an occurrence of at least one string C_i. The existence of a 1-cover defines a well-known class of quasi-periodic strings. Quasi-periodicity can be decided in linear time, and all 1-covers of a string can be reported in linear time as well. Since in general it is NP-complete to decide whether a string has a λ-cover, the natural next step is the development of efficient algorithms for 2-covers. Radoszewski and Straszyński [ESA 2020] analysed the particular case where the strings in a 2-cover must be of the same length. They provided an algorithm that reports all such 2-covers of S in time near-linear in |S| and in the size of the output. In this work, we consider 2-covers in full generality. Since every length-n string has Ω(n²) trivial 2-covers (every prefix and suffix of total length at least n constitute such a 2-cover), we state the reporting problem as follows: given a string S and a number m, report all 2-covers {C₁,C₂} of S with length |C₁|+|C₂| upper bounded by m. We present an Õ(n + output) time algorithm solving this problem, where output is the size of the output. This algorithm admits a simpler modification that finds a 2-cover of minimum length. We also provide an Õ(n) time construction of a 2-cover oracle which, given two substrings C₁,C₂ of S, reports in poly-logarithmic time whether {C₁,C₂} is a 2-cover of S.
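
A naive reference check for the central definition, useful for testing; the paper's oracle answers the same question in poly-logarithmic time after Õ(n)-time preprocessing:

def occurrences(s, c):
    # Start positions of all occurrences of c in s (naive scan).
    return [i for i in range(len(s) - len(c) + 1) if s.startswith(c, i)]

def is_2_cover(s, c1, c2):
    # {c1, c2} is a 2-cover of s iff every index of s lies inside an
    # occurrence of c1 or of c2.
    covered = [False] * len(s)
    for c in (c1, c2):
        for i in occurrences(s, c):
            for j in range(i, i + len(c)):
                covered[j] = True
    return all(covered)

# is_2_cover('banana', 'ban', 'ana') -> True (a trivial prefix/suffix cover)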

Cite as

Itai Boneh, Shay Golan, and Arseny Shur. String 2-Covers with No Length Restrictions. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 31:1-31:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{boneh_et_al:LIPIcs.ESA.2024.31,
  author =	{Boneh, Itai and Golan, Shay and Shur, Arseny},
  title =	{{String 2-Covers with No Length Restrictions}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{31:1--31:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.31},
  URN =		{urn:nbn:de:0030-drops-211029},
  doi =		{10.4230/LIPIcs.ESA.2024.31},
  annote =	{Keywords: Quasi-periodicity, String cover, Range query, Range stabbing}
}
Document
Separable Convex Mixed-Integer Optimization: Improved Algorithms and Lower Bounds

Authors: Cornelius Brand, Martin Koutecký, Alexandra Lassota, and Sebastian Ordyniak


Abstract
We provide several novel algorithms and lower bounds in central settings of mixed-integer (non-)linear optimization, shedding new light on classic results in the field. This includes an improvement on record running time bounds obtained from a slight extension of Lenstra’s 1983 algorithm [Math. Oper. Res. '83] to optimizing under few constraints with small coefficients. This is important for ubiquitous tasks like knapsack, subset sum, or scheduling problems [Eisenbrand and Weismantel, SODA'18, Jansen and Rohwedder, ITCS'19]. Further, we extend our algorithm to an intermediate linear optimization problem when the matrix has many rows that exhibit 2-stage stochastic structure, which adds to a prominent line of recent results on this and similarly restricted cases [Jansen et al. ICALP'19, Cslovjecsek et al. SODA'21, Brand et al. AAAI'21, Klein, Reuter SODA'22, Cslovjecsek et al. SODA'24]. We also show that the generalization of two fundamental classes of structured constraints from these works (n-fold and 2-stage stochastic programs) to separable-convex mixed-integer optimization is harder than their mixed-integer, linear counterparts. This counters a widespread belief popularized initially by an influential paper of Hochbaum and Shanthikumar, namely that "convex separable optimization is not much harder than linear optimization" [J. ACM '90]. To obtain our algorithms, we employ the mixed Graver basis introduced by Hemmecke [Math. Prog. '03], and our work is the first to give bounds on the norm of its elements. Importantly, we use these bounds differently from how purely-integer Graver bounds are exploited in related approaches, and prove that, surprisingly, this cannot be avoided.

Cite as

Cornelius Brand, Martin Koutecký, Alexandra Lassota, and Sebastian Ordyniak. Separable Convex Mixed-Integer Optimization: Improved Algorithms and Lower Bounds. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{brand_et_al:LIPIcs.ESA.2024.32,
  author =	{Brand, Cornelius and Kouteck\'{y}, Martin and Lassota, Alexandra and Ordyniak, Sebastian},
  title =	{{Separable Convex Mixed-Integer Optimization: Improved Algorithms and Lower Bounds}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.32},
  URN =		{urn:nbn:de:0030-drops-211033},
  doi =		{10.4230/LIPIcs.ESA.2024.32},
  annote =	{Keywords: Mixed-Integer Programming, Separable Convex Optimization, Parameterized Algorithms, Parameterized Complexity}
}
Document
Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing

Authors: Karl Bringmann, Anita Dürr, and Adam Polak


Abstract
We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time Õ(n + t√{p_{max}}), where n is the number of items, t is the knapsack capacity, and p_{max} is the maximum item profit. This improves over the Õ(n + t p_{max})-time algorithm based on the convolution and prediction technique by Bateni et al. (STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal. Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the Õ(n^{1.5})-time algorithm for bounded monotone min-plus convolution by Chi et al. (STOC 2022) to the rectangular case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to balanced instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor. Using these techniques, we can also obtain algorithms that run in time Õ(n + OPT√{w_{max}}), Õ(n + (nw_{max}p_{max})^{1/3}t^{2/3}), and Õ(n + (nw_{max}p_{max})^{1/3} OPT^{2/3}), where OPT is the optimal total profit and w_{max} is the maximum item weight.
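
To make the convolution connection concrete: Knapsack profit tables compose under (max,+)-convolution, and the paper's speed-up replaces the naive quadratic convolution below with a faster one for monotone sequences of bounded entries. Both functions are our own naive sketches:

def maxplus_convolution(a, b):
    # (max,+)-convolution: c[k] = max over i+j=k of a[i] + b[j].
    # Naive quadratic version; the paper exploits monotone sequences
    # with bounded entries to beat this.
    c = [float('-inf')] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = max(c[i + j], x + y)
    return c

def knapsack_profits(items, t):
    # Classic O(n*t) table over items = [(weight, profit), ...]:
    # dp[cap] = best profit achievable within capacity cap. Knapsack is
    # an iterated (max,+)-convolution of the items' profit sequences,
    # which is the viewpoint the paper exploits.
    dp = [0] * (t + 1)
    for w, p in items:
        for cap in range(t, w - 1, -1):
            dp[cap] = max(dp[cap], dp[cap - w] + p)
    return dp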

Cite as

Karl Bringmann, Anita Dürr, and Adam Polak. Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.33,
  author =	{Bringmann, Karl and D\"{u}rr, Anita and Polak, Adam},
  title =	{{Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.33},
  URN =		{urn:nbn:de:0030-drops-211047},
  doi =		{10.4230/LIPIcs.ESA.2024.33},
  annote =	{Keywords: 0-1-Knapsack problem, bounded monotone min-plus convolution, fine-grained complexity}
}
Document
Exploring the Approximability Landscape of 3SUM

Authors: Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann


Abstract
Since an increasing number of problems in P have conditional lower bounds against exact algorithms, it is natural to study which of these problems can be efficiently approximated. Often, however, there are many potential ways to formulate an approximate version of a problem. We ask: How sensitive is the (in-)approximability of a problem in P to its precise formulation? To this end, we perform a case study using the popular 3SUM problem. Its many equivalent formulations give rise to a wide range of potential approximate relaxations. Specifically, to obtain an approximate relaxation in our framework, one can choose among the options: (a) 3SUM or Convolution 3SUM, (b) monochromatic or trichromatic, (c) allowing under-approximation, over-approximation, or both, (d) approximate decision or approximate optimization, (e) single output or multiple outputs and (f) implicit or explicit target (given as input). We show general reduction principles between some variants and find that we can classify the remaining problems (over polynomially bounded positive integers) into three regimes: 1) (1+ε)-approximable in near-linear time Õ(n + 1/ε), 2) (1+ε)-approximable in near-quadratic time Õ(n/ε) or Õ(n+1/ε²), or 3) non-approximable, i.e., requiring time n^{2± o(1)} even for any approximation factor. In each of these three regimes, we provide matching upper and conditional lower bounds. To prove our results, we establish two results that may be of independent interest: Over polynomially bounded integers, we show subquadratic equivalence of (min,+)-convolution and polyhedral 3SUM, and we prove equivalence of the Strong 3SUM conjecture and the Strong Convolution 3SUM conjecture.
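
For reference, the exact quadratic-time decision algorithm against which all these approximate relaxations are measured (sorted array plus two pointers; the naming is ours):

def three_sum(A, target=0):
    # Classic O(n^2) 3SUM: find a, b, c at distinct positions of A with
    # a + b + c == target, via sorting and two pointers.
    A = sorted(A)
    n = len(A)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = A[i] + A[lo] + A[hi]
            if s == target:
                return (A[i], A[lo], A[hi])
            if s < target:
                lo += 1
            else:
                hi -= 1
    return None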

Cite as

Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann. Exploring the Approximability Landscape of 3SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.34,
  author =	{Bringmann, Karl and Ghazy, Ahmed and K\"{u}nnemann, Marvin},
  title =	{{Exploring the Approximability Landscape of 3SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.34},
  URN =		{urn:nbn:de:0030-drops-211057},
  doi =		{10.4230/LIPIcs.ESA.2024.34},
  annote =	{Keywords: Fine-grained Complexity, Conditional Lower Bounds, Approximation Schemes, Min-Plus Convolution}
}
Document
On Finding Longest Palindromic Subsequences Using Longest Common Subsequences

Authors: Gerth Stølting Brodal, Rolf Fagerberg, and Casper Moldrup Rysgaard


Abstract
Two standard textbook problems illustrating dynamic programming are to find the longest common subsequence (LCS) between two strings and to find the longest palindromic subsequence (LPS) of a single string. A popular claim is that the longest palindromic subsequence in a string can be computed as the longest common subsequence between the string and the reversed string. We prove that the correctness of this claim depends on how the longest common subsequence is computed. In particular, we prove that the classical dynamic programming solution by Wagner and Fischer [JACM 1974] for finding an LCS in fact does find an LPS, while a slightly different LCS backtracking strategy makes the algorithm fail to always report a palindrome.
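
A sketch of the recipe under discussion: the Wagner-Fischer LCS table with one particular backtracking tie-break. Whether the string reported for LCS(S, reverse(S)) is always a palindrome is precisely what the paper settles, and it depends on this tie-break; we make no claim for the variant shown:

def lcs_backtrack(x, y):
    # Wagner-Fischer LCS table with one standard backtracking: prefer the
    # diagonal on equal characters, otherwise move to the larger of the
    # 'up'/'left' cells, breaking ties upward.
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:   # the tie-break under scrutiny
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

# The popular recipe: report lcs_backtrack(s, s[::-1]) as an LPS of s.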

Cite as

Gerth Stølting Brodal, Rolf Fagerberg, and Casper Moldrup Rysgaard. On Finding Longest Palindromic Subsequences Using Longest Common Subsequences. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{brodal_et_al:LIPIcs.ESA.2024.35,
  author =	{Brodal, Gerth St{\o}lting and Fagerberg, Rolf and Rysgaard, Casper Moldrup},
  title =	{{On Finding Longest Palindromic Subsequences Using Longest Common Subsequences}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.35},
  URN =		{urn:nbn:de:0030-drops-211068},
  doi =		{10.4230/LIPIcs.ESA.2024.35},
  annote =	{Keywords: Palindromic subsequence, longest common subsequence, dynamic programming}
}
Document
Bicriteria Approximation for Minimum Dilation Graph Augmentation

Authors: Kevin Buchin, Maike Buchin, Joachim Gudmundsson, and Sampson Wong


Abstract
Spanner constructions focus on the initial design of the network. However, networks tend to improve over time. In this paper, we focus on the improvement step. Given a graph and a budget k, which k edges do we add to the graph to minimise its dilation? Gudmundsson and Wong [TALG'22] provided the first positive result for this problem, but their approximation factor is linear in k. Our main result is a (2 ⋅ 2^{1/r} ⋅ k^{1/r}, 2r)-bicriteria approximation that runs in O(n³ log n) time, for all r ≥ 1. In other words, if t^* is the minimum dilation after adding any k edges to a graph, then our algorithm adds O(k^{1+1/r}) edges to the graph to obtain a dilation of 2rt^*. Moreover, our analysis of the algorithm is tight under the Erdős girth conjecture.

Cite as

Kevin Buchin, Maike Buchin, Joachim Gudmundsson, and Sampson Wong. Bicriteria Approximation for Minimum Dilation Graph Augmentation. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{buchin_et_al:LIPIcs.ESA.2024.36,
  author =	{Buchin, Kevin and Buchin, Maike and Gudmundsson, Joachim and Wong, Sampson},
  title =	{{Bicriteria Approximation for Minimum Dilation Graph Augmentation}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.36},
  URN =		{urn:nbn:de:0030-drops-211079},
  doi =		{10.4230/LIPIcs.ESA.2024.36},
  annote =	{Keywords: Greedy spanner, Graph augmentation}
}
Document
Online Flexible Busy Time Scheduling on Heterogeneous Machines

Authors: Gruia Călinescu, Sami Davies, Samir Khuller, and Shirley Zhang


Abstract
We study the online busy time scheduling model on heterogeneous machines. In our setting, jobs with uniform length arrive online with a deadline that becomes known to the algorithm at the job’s arrival time. An algorithm has access to machines, each with different associated capacities and costs. The goal is to schedule jobs on machines by their deadline, so that the total cost incurred by the scheduling algorithm is minimized. While busy time scheduling has been well-studied, relatively little is known when machines are heterogeneous (i.e., have different costs and capacities), despite this natural theoretical generalization being the most practical model for clients using cloud computing services. We make significant progress in understanding this model by designing an 8-competitive algorithm for the problem on unit-length jobs and providing a lower bound of 2 on the competitive ratio. The lower bound is tight in the setting when jobs form non-nested intervals. Our 8-competitive algorithm generalizes to one with competitive ratio 8(2p-1)/p < 16 when all jobs have uniform length p.

Cite as

Gruia Călinescu, Sami Davies, Samir Khuller, and Shirley Zhang. Online Flexible Busy Time Scheduling on Heterogeneous Machines. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 37:1-37:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{calinescu_et_al:LIPIcs.ESA.2024.37,
  author =	{C\u{a}linescu, Gruia and Davies, Sami and Khuller, Samir and Zhang, Shirley},
  title =	{{Online Flexible Busy Time Scheduling on Heterogeneous Machines}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{37:1--37:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.37},
  URN =		{urn:nbn:de:0030-drops-211083},
  doi =		{10.4230/LIPIcs.ESA.2024.37},
  annote =	{Keywords: Online algorithms, Scheduling, Competitive analysis}
}
Document
Local Optimization Algorithms for Maximum Planar Subgraph

Authors: Gruia Călinescu and Sumedha Uniyal


Abstract
Consider the NP-hard problem of finding, for a given simple graph G, a planar subgraph of G with the maximum number of edges. This is called the Maximum Planar Subgraph problem, and the best known approximation ratio is 4/9, obtained via sophisticated Graphic Matroid Parity algorithms. Here we show that applying a local optimization phase to the output of this known algorithm improves this approximation ratio by a small ε = 1/747 > 0. This is the first improvement in approximation ratio in more than a quarter century. The analysis relies on a more refined extremal bound on the Lovász cactus number in planar graphs, compared to the earlier (tight) bound of [Gruia Călinescu et al., 1998; Chalermsook et al., 2019]. A second local optimization algorithm achieves a tight ratio of 5/12 for Maximum Planar Subgraph without using Graphic Matroid Parity. We also show that applying a greedy algorithm before this second optimization algorithm improves its ratio to at least 91/216 < 4/9. The motivation for not using Graphic Matroid Parity is that it requires sophisticated algorithms that are not considered practical by previous work. The best previously published [Chalermsook and Schmid, 2017] approximation ratio without Graphic Matroid Parity is 13/33 < 5/12.

Cite as

Gruia Călinescu and Sumedha Uniyal. Local Optimization Algorithms for Maximum Planar Subgraph. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 38:1-38:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{calinescu_et_al:LIPIcs.ESA.2024.38,
  author =	{C\u{a}linescu, Gruia and Uniyal, Sumedha},
  title =	{{Local Optimization Algorithms for Maximum Planar Subgraph}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{38:1--38:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.38},
  URN =		{urn:nbn:de:0030-drops-211090},
  doi =		{10.4230/LIPIcs.ESA.2024.38},
  annote =	{Keywords: planar graph, maximum subgraph, approximation algorithm, matroid parity, local optimization}
}
Document
List Homomorphisms by Deleting Edges and Vertices: Tight Complexity Bounds for Bounded-Treewidth Graphs

Authors: Barış Can Esmer, Jacob Focke, Dániel Marx, and Paweł Rzążewski


Abstract
The goal of this paper is to investigate a family of optimization problems arising from list homomorphisms, and to understand what the best possible algorithms are if we restrict the problem to bounded-treewidth graphs. Given graphs G, H, and lists L(v) ⊆ V(H) for every v ∈ V(G), a list homomorphism from (G,L) to H is a function f:V(G) → V(H) that preserves the edges (i.e., uv ∈ E(G) implies f(u)f(v) ∈ E(H)) and respects the lists (i.e., f(v) ∈ L(v)). The graph H may have loops. For a fixed H, the input of the optimization problem LHomVD(H) is a graph G with lists L(v), and the task is to find a set X of vertices having minimum size such that (G-X,L) has a list homomorphism to H. We define analogously the edge-deletion variant LHomED(H), where we have to delete as few edges as possible from G to obtain a graph that has a list homomorphism. This expressive family of problems includes members that are essentially equivalent to fundamental problems such as Vertex Cover, Max Cut, Odd Cycle Transversal, and Edge/Vertex Multiway Cut. For both variants, we first characterize those graphs H that make the problem polynomial-time solvable and show that the problem is NP-hard for every other fixed H. Second, as our main result, we determine for every graph H for which the problem is NP-hard, the smallest possible constant c_H such that the problem can be solved in time c_H^t ⋅ n^{𝒪(1)} if a tree decomposition of G having width t is given in the input. Let i(H) be the maximum size of a set of vertices in H that have pairwise incomparable neighborhoods. For the vertex-deletion variant LHomVD(H), we show that the smallest possible constant is i(H)+1 for every H: - Given a tree decomposition of width t of G, LHomVD(H) can be solved in time (i(H)+1)^t ⋅ n^{𝒪(1)}. - For any ε > 0 and H, an (i(H)+1-ε)^t ⋅ n^{𝒪(1)} algorithm would violate the Strong Exponential-Time Hypothesis (SETH). The situation is more complex for the edge-deletion version. For every H, one can solve LHomED(H) in time i(H)^t ⋅ n^{𝒪(1)} if a tree decomposition of width t is given. However, the existence of a specific type of decomposition of H shows that there are graphs H where LHomED(H) can be solved significantly more efficiently and the best possible constant can be arbitrarily smaller than i(H). Nevertheless, we determine this best possible constant and (assuming the SETH) prove tight bounds for every fixed H.
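
To unpack the definition, a brute-force feasibility test for illustration only (the paper's subject is the deletion variants and their fine-grained complexity, not this check); all names here are ours, and edges of H are stored as frozensets so undirected edges and loops {v} are handled uniformly:

from itertools import product

def list_hom_exists(VG, EG, EH, L):
    # Is there f: V(G) -> V(H) with f(v) in L[v] for every v, and
    # f(u)f(v) an edge of H for every edge uv of G?
    for choice in product(*(L[v] for v in VG)):
        f = dict(zip(VG, choice))
        if all(frozenset((f[u], f[v])) in EH for (u, v) in EG):
            return True
    return False

# Example: G = path a-b-c; H = edge {0,1} with a loop at 1.
VG, EG = ["a", "b", "c"], [("a", "b"), ("b", "c")]
EH = {frozenset((0, 1)), frozenset((1,))}
L = {"a": [0, 1], "b": [1], "c": [0, 1]}
print(list_hom_exists(VG, EG, EH, L))  # True, e.g. f = {a: 0, b: 1, c: 1}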

Cite as

Barış Can Esmer, Jacob Focke, Dániel Marx, and Paweł Rzążewski. List Homomorphisms by Deleting Edges and Vertices: Tight Complexity Bounds for Bounded-Treewidth Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{canesmer_et_al:LIPIcs.ESA.2024.39,
  author =	{Can Esmer, Bar{\i}\c{s} and Focke, Jacob and Marx, D\'{a}niel and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{List Homomorphisms by Deleting Edges and Vertices: Tight Complexity Bounds for Bounded-Treewidth Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.39},
  URN =		{urn:nbn:de:0030-drops-211103},
  doi =		{10.4230/LIPIcs.ESA.2024.39},
  annote =	{Keywords: Graph Homomorphism, List Homomorphism, Vertex Deletion, Edge Deletion, Multiway Cut, Parameterized Complexity, Tight Bounds, Treewidth, SETH}
}
Document
Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams

Authors: Amit Chakrabarti, Andrew McGregor, and Anthony Wirth


Abstract
The maximum coverage problem is to select k sets, from a collection of m sets, such that the cardinality of their union, in a universe of size n, is maximized. We consider (1-1/e-ε)-approximation algorithms for this NP-hard problem in three standard data stream models. 1) Dynamic Model. The stream consists of a sequence of sets being inserted and deleted. Our multi-pass algorithm uses ε^{-2} k ⋅ polylog(n,m) space. The best previous result (Assadi and Khanna, SODA 2018) used (n + ε^{-4} k) polylog(n,m) space. While both algorithms use O(ε^{-1} log m) passes, our analysis shows that, when ε ≤ 1/log log m, it is possible to reduce the number of passes by a 1/log log m factor without incurring additional space. 2) Random Order Model. In this model, there are no deletions, and the sets forming the instance are uniformly randomly permuted to form the input stream. We show that a single pass and k polylog(n,m) space suffice for an arbitrarily small constant ε. The best previous result, by Warneke et al. (ESA 2023), used k² polylog(n,m) space. 3) Insert-Only Model. Lastly, our results, along with numerous previous results, use a sub-sampling technique introduced by McGregor and Vu (ICDT 2017) to sparsify the input instance. We explain how this technique and others used in the paper can be implemented such that the amortized update time of our algorithm is polylogarithmic. This also implies an improvement of the state-of-the-art insert-only algorithms in terms of the update time: polylog(m,n) update time suffices, whereas the best previous result by Jaud et al. (SEA 2023) required update time that was linear in k.
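
For orientation, the classical offline greedy whose (1 - 1/e) guarantee is the benchmark behind the approximation factor above; the paper's contribution is matching it in small space, which this sketch (ours, purely illustrative) does not attempt:

def greedy_max_coverage(sets, k):
    # Classic greedy: repeatedly pick the set covering the most
    # uncovered elements; a (1 - 1/e)-approximation offline.
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, 2))  # ([0, 2], {1, 2, 3, 4, 5, 6})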

Cite as

Amit Chakrabarti, Andrew McGregor, and Anthony Wirth. Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chakrabarti_et_al:LIPIcs.ESA.2024.40,
  author =	{Chakrabarti, Amit and McGregor, Andrew and Wirth, Anthony},
  title =	{{Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.40},
  URN =		{urn:nbn:de:0030-drops-211114},
  doi =		{10.4230/LIPIcs.ESA.2024.40},
  annote =	{Keywords: Data Stream Computation, Maximum Coverage, Submodular Maximization}
}
Document
Approximation Algorithms for Hop Constrained and Buy-At-Bulk Network Design via Hop Constrained Oblivious Routing

Authors: Chandra Chekuri and Rhea Jain


Abstract
We consider two-cost network design models in which edges of the input graph have an associated cost and length. We build upon recent advances in hop-constrained oblivious routing to obtain two sets of results. We address multicommodity buy-at-bulk network design in the nonuniform setting. Existing poly-logarithmic approximations are based on the junction tree approach [Chekuri et al., 2010; Guy Kortsarz and Zeev Nutov, 2011]. We obtain a new polylogarithmic approximation via a natural LP relaxation. This establishes an upper bound on its integrality gap and affirmatively answers an open question raised in [Chekuri et al., 2010]. The rounding is based on recent results in hop-constrained oblivious routing [Ghaffari et al., 2021], and this technique yields a polylogarithmic approximation in more general settings such as set connectivity. Our algorithm for buy-at-bulk network design is based on an LP-based reduction to h-hop constrained network design for which we obtain LP-based bicriteria approximation algorithms. We also consider a fault-tolerant version of h-hop constrained network design where one wants to design a low-cost network to guarantee short paths between a given set of source-sink pairs even when k-1 edges can fail. This model has been considered in network design [Luis Gouveia and Markus Leitner, 2017; Gouveia et al., 2018; Arslan et al., 2020] but no approximation algorithms were known. We obtain polylogarithmic bicriteria approximation algorithms for the single-source setting for any fixed k. We build upon the single-source algorithm and the junction-tree approach to obtain an approximation algorithm for the multicommodity setting when at most one edge can fail.

Cite as

Chandra Chekuri and Rhea Jain. Approximation Algorithms for Hop Constrained and Buy-At-Bulk Network Design via Hop Constrained Oblivious Routing. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 41:1-41:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chekuri_et_al:LIPIcs.ESA.2024.41,
  author =	{Chekuri, Chandra and Jain, Rhea},
  title =	{{Approximation Algorithms for Hop Constrained and Buy-At-Bulk Network Design via Hop Constrained Oblivious Routing}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{41:1--41:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.41},
  URN =		{urn:nbn:de:0030-drops-211124},
  doi =		{10.4230/LIPIcs.ESA.2024.41},
  annote =	{Keywords: Buy-at-bulk, Hop-constrained network design, LP integrality gap, Fault-tolerant network design}
}
Document
From Directed Steiner Tree to Directed Polymatroid Steiner Tree in Planar Graphs

Authors: Chandra Chekuri, Rhea Jain, Shubhang Kulkarni, Da Wei Zheng, and Weihao Zhu


Abstract
In the Directed Steiner Tree (DST) problem the input is a directed edge-weighted graph G = (V,E), a root vertex r and a set S ⊆ V of k terminals. The goal is to find a min-cost subgraph that connects r to each of the terminals. DST admits an O(log² k/log log k)-approximation in quasi-polynomial time [Grandoni et al., 2022; Rohan Ghuge and Viswanath Nagarajan, 2022], and an O(k^{ε})-approximation for any fixed ε > 0 in polynomial time [Alexander Zelikovsky, 1997; Moses Charikar et al., 1999]. Resolving the existence of a polynomial-time poly-logarithmic approximation is a major open problem in approximation algorithms. In a recent work, Friggstad and Mousavi [Zachary Friggstad and Ramin Mousavi, 2023] obtained a simple and elegant polynomial-time O(log k)-approximation for DST in planar digraphs via Thorup’s shortest path separator theorem [Thorup, 2004]. We build on their work and obtain several new results on DST and related problems. - We develop a tree embedding technique for rooted problems in planar digraphs via an interpretation of the recursion in [Zachary Friggstad and Ramin Mousavi, 2023]. Using this we obtain polynomial-time poly-logarithmic approximations for Group Steiner Tree [Naveen Garg et al., 2000], Covering Steiner Tree [Goran Konjevod et al., 2002] and the Polymatroid Steiner Tree [Gruia Călinescu and Alexander Zelikovsky, 2005] problems in planar digraphs. All these problems are hard to approximate to within a factor of Ω(log² n/log log n) even in trees [Eran Halperin and Robert Krauthgamer, 2003; Grandoni et al., 2022]. - We prove that the natural cut-based LP relaxation for DST has an integrality gap of O(log² k) in planar digraphs. This is in contrast to general graphs where the integrality gap of this LP is known to be Ω(√k) [Leonid Zosin and Samir Khuller, 2002] and Ω(n^{δ}) for some fixed δ > 0 [Shi Li and Bundit Laekhanukit, 2022]. - We combine the preceding results with density-based arguments to obtain poly-logarithmic approximations for the multi-rooted versions of the problems in planar digraphs. For DST our result improves the O(R + log k) approximation of [Zachary Friggstad and Ramin Mousavi, 2023], where R is the number of roots, when R = ω(log² k).

Cite as

Chandra Chekuri, Rhea Jain, Shubhang Kulkarni, Da Wei Zheng, and Weihao Zhu. From Directed Steiner Tree to Directed Polymatroid Steiner Tree in Planar Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 42:1-42:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chekuri_et_al:LIPIcs.ESA.2024.42,
  author =	{Chekuri, Chandra and Jain, Rhea and Kulkarni, Shubhang and Zheng, Da Wei and Zhu, Weihao},
  title =	{{From Directed Steiner Tree to Directed Polymatroid Steiner Tree in Planar Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{42:1--42:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.42},
  URN =		{urn:nbn:de:0030-drops-211134},
  doi =		{10.4230/LIPIcs.ESA.2024.42},
  annote =	{Keywords: Directed Planar Graphs, Submodular Functions, Steiner Tree, Network Design}
}
Document
Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments

Authors: Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki


Abstract
In the Maximum Weight Independent Set of Rectangles problem (MWISR) we are given a weighted set of n axis-parallel rectangles in the plane. The task is to find a subset of pairwise non-overlapping rectangles with the maximum possible total weight. This problem is NP-hard and the best-known polynomial-time approximation algorithm, due to Chalermsook and Walczak [SODA 2021], achieves approximation factor 𝒪(log log n). While constant-factor approximation algorithms are known in the unweighted setting, due to Mitchell [FOCS 2021] and to Gálvez et al. [SODA 2022], it remains open to extend these techniques to the weighted setting. In this paper, we consider MWISR through the lens of parameterized approximation. Grandoni, Kratsch and Wiese [ESA 2019] gave a (1-ε)-approximation algorithm running in k^{𝒪(k/ε⁸)} n^{𝒪(1/ε⁸)} time, where k is the number of rectangles in an optimum solution. Unfortunately, their algorithm works only in the unweighted setting, and they left it as an open problem to give a parameterized approximation scheme in the weighted setting. We give a parameterized approximation algorithm for MWISR that, given a parameter k ∈ ℕ, finds a set of non-overlapping rectangles of weight at least (1-ε) opt_k in 2^{𝒪(k log(k/ε))} n^{𝒪(1/ε)} time, where opt_k is the maximum weight of a solution of cardinality at most k. We also propose a parameterized approximation scheme with running time 2^{𝒪(k² log(k/ε))} n^{𝒪(1)} that finds a solution with cardinality at most k and total weight at least (1-ε) opt_k for the special case of axis-parallel segments.
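
To pin down the benchmark opt_k, a brute-force reference implementation (ours, exponential in k; purely definitional, not the parameterized scheme of the paper):

from itertools import combinations

def overlaps(r, s):
    # Axis-parallel rectangles as (x1, y1, x2, y2) with x1 < x2, y1 < y2;
    # "overlapping" here means sharing an interior point.
    return r[0] < s[2] and s[0] < r[2] and r[1] < s[3] and s[1] < r[3]

def opt_k(rects, weights, k):
    # Maximum total weight of at most k pairwise non-overlapping rectangles.
    best = 0
    for size in range(1, k + 1):
        for subset in combinations(range(len(rects)), size):
            if all(not overlaps(rects[i], rects[j])
                   for i, j in combinations(subset, 2)):
                best = max(best, sum(weights[i] for i in subset))
    return best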

Cite as

Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki. Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{cslovjecsek_et_al:LIPIcs.ESA.2024.43,
  author =	{Cslovjecsek, Jana and Pilipczuk, Micha{\l} and W\k{e}grzycki, Karol},
  title =	{{Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.43},
  URN =		{urn:nbn:de:0030-drops-211146},
  doi =		{10.4230/LIPIcs.ESA.2024.43},
  annote =	{Keywords: parameterized approximation, Maximum Weight Independent Set, rectangles, segments}
}
Document
An Optimal Randomized Algorithm for Finding the Saddlepoint

Authors: Justin Dallant, Frederik Haagensen, Riko Jacob, László Kozma, and Sebastian Wild


Abstract
A saddlepoint of an n × n matrix is an entry that is the maximum of its row and the minimum of its column. Saddlepoints give the value of a two-player zero-sum game, corresponding to its pure-strategy Nash equilibria; efficiently finding a saddlepoint is thus a natural and fundamental algorithmic task. For finding a strict saddlepoint (an entry that is the strict maximum of its row and the strict minimum of its column) an O(n log* n)-time algorithm was recently obtained by Dallant, Haagensen, Jacob, Kozma, and Wild, improving the O(n log n) bounds from 1991 of Bienstock, Chung, Fredman, Schäffer, Shor, Suri and of Byrne and Vaserstein. In this paper we present an optimal O(n)-time algorithm for finding a strict saddlepoint based on random sampling. Our algorithm, like earlier approaches, accesses matrix entries only via unit-cost binary comparisons. For finding a (non-strict) saddlepoint, we extend an existing lower bound to randomized algorithms, showing that the trivial O(n²) runtime cannot be improved even with the use of randomness.
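
As a reference point for the definition, a simple O(n²)-comparison brute force for the strict saddlepoint (the paper's algorithm achieves O(n) via random sampling; this is only the naive baseline, and the function name is ours). Since a strict saddlepoint must be the unique maximum of its row, it suffices to test each row's argmax:

def strict_saddlepoint(A):
    # Return (i, j) with A[i][j] the strict maximum of row i and the
    # strict minimum of column j, or None if no such entry exists.
    n = len(A)
    for i in range(n):
        j = max(range(n), key=lambda c: A[i][c])
        row_ok = all(A[i][c] < A[i][j] for c in range(n) if c != j)
        col_ok = all(A[r][j] > A[i][j] for r in range(n) if r != i)
        if row_ok and col_ok:
            return i, j
    return None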

Cite as

Justin Dallant, Frederik Haagensen, Riko Jacob, László Kozma, and Sebastian Wild. An Optimal Randomized Algorithm for Finding the Saddlepoint. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 44:1-44:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dallant_et_al:LIPIcs.ESA.2024.44,
  author =	{Dallant, Justin and Haagensen, Frederik and Jacob, Riko and Kozma, L\'{a}szl\'{o} and Wild, Sebastian},
  title =	{{An Optimal Randomized Algorithm for Finding the Saddlepoint}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{44:1--44:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.44},
  URN =		{urn:nbn:de:0030-drops-211154},
  doi =		{10.4230/LIPIcs.ESA.2024.44},
  annote =	{Keywords: saddlepoint, matrix, comparison, search, randomized algorithms}
}
Document
Near Optimal Dual Fault Tolerant Distance Oracle

Authors: Dipan Dey and Manoj Gupta


Abstract
We present a dual fault-tolerant distance oracle for undirected and unweighted graphs. Given a set F of two edges, as well as a source node s and a destination node t, our oracle returns the length of the shortest path from s to t that avoids F in O(1) time, with high probability. The space complexity of our oracle is Õ(n²), making it nearly optimal in terms of both space and query time. Prior to our work, Duan and Pettie [SODA 2009] designed a dual fault-tolerant distance oracle that required Õ(n²) space and O(log n) query time. In addition to improving the query time, our oracle is much simpler than the previous approach.
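
The query being supported, written as the naive recomputation that an oracle avoids (one BFS per query on the graph minus the two failed edges; the oracle returns the same value in O(1) after preprocessing). A small sketch of ours, not the paper's data structure:

from collections import deque

def dist_avoiding(adj, s, t, F):
    # BFS distance from s to t in an unweighted undirected graph,
    # ignoring the failed edges in F (given as pairs). Naive baseline.
    bad = {frozenset(e) for e in F}
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if frozenset((u, v)) not in bad and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")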

Cite as

Dipan Dey and Manoj Gupta. Near Optimal Dual Fault Tolerant Distance Oracle. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 45:1-45:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dey_et_al:LIPIcs.ESA.2024.45,
  author =	{Dey, Dipan and Gupta, Manoj},
  title =	{{Near Optimal Dual Fault Tolerant Distance Oracle}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{45:1--45:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.45},
  URN =		{urn:nbn:de:0030-drops-211164},
  doi =		{10.4230/LIPIcs.ESA.2024.45},
  annote =	{Keywords: Distance Sensitive Oracle, Dual Fault Distance Oracle}
}
Document
Deterministic Minimum Steiner Cut in Maximum Flow Time

Authors: Matthew Ding and Jason Li


Abstract
We devise a deterministic algorithm for minimum Steiner cut, which uses (log n)^{O(1)} maximum flow calls and additional near-linear time. This algorithm improves on Li and Panigrahi’s (FOCS 2020) algorithm, which uses (log n)^{O(1/ε⁴)} maximum flow calls and additional O(m^{1+ε}) time, for ε > 0. Our algorithm thus shows that deterministic minimum Steiner cut can be solved in maximum flow time up to polylogarithmic factors, given any black-box deterministic maximum flow algorithm. Our main technical contribution is a novel deterministic graph decomposition method for terminal vertices that generalizes all existing s-strong partitioning methods, which we believe may have future applications.
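
For contrast, the folklore reduction that uses |T|-1 maximum flow calls, which the paper improves to (log n)^{O(1)} calls. A sketch assuming networkx is available and edges carry a 'capacity' attribute; correctness follows because the optimal Steiner cut splits the terminal set, and the fixed terminal t0 lies on one side of it:

import networkx as nx

def min_steiner_cut_value(G, terminals):
    # Minimum capacity of an edge cut separating some pair of terminals:
    # fix t0 and take the cheapest cut from t0 to each other terminal.
    t0, *rest = list(terminals)
    return min(nx.minimum_cut(G, t0, t, capacity="capacity")[0] for t in rest)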

Cite as

Matthew Ding and Jason Li. Deterministic Minimum Steiner Cut in Maximum Flow Time. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{ding_et_al:LIPIcs.ESA.2024.46,
  author =	{Ding, Matthew and Li, Jason},
  title =	{{Deterministic Minimum Steiner Cut in Maximum Flow Time}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.46},
  URN =		{urn:nbn:de:0030-drops-211174},
  doi =		{10.4230/LIPIcs.ESA.2024.46},
  annote =	{Keywords: graph algorithms, minimum cut, deterministic}
}
Document
Bicriterial Approximation for the Incremental Prize-Collecting Steiner-Tree Problem

Authors: Yann Disser, Svenja M. Griesbach, Max Klimm, and Annette Lutz


Abstract
We consider an incremental variant of the rooted prize-collecting Steiner-tree problem with a growing budget constraint. While no incremental solution exists that simultaneously approximates the optimum for all budgets, we show that a bicriterial (α,μ)-approximation is possible, i.e., a solution that, for every budget B ∈ ℝ_{≥ 0}, when granted budget B+α, is a multiplicative μ-approximation of the optimum solution with budget B. For the case that the underlying graph is a tree, we present a polynomial-time density-greedy algorithm that computes a (χ,1)-approximation, where χ denotes the eccentricity of the root vertex in the underlying graph, and show that this is best possible. An adaptation of the density-greedy algorithm for general graphs is (γ,2)-competitive, where γ is the maximal length of a vertex-disjoint path starting in the root. While this algorithm does not run in polynomial time, it can be adapted to a (γ,3)-competitive algorithm that runs in polynomial time. We further devise a capacity-scaling algorithm that guarantees a (3χ,8)-approximation and, more generally, a ((4𝓁 - 1)χ, (2^{𝓁 + 2})/(2^𝓁 - 1))-approximation for every fixed 𝓁 ∈ ℕ.

Cite as

Yann Disser, Svenja M. Griesbach, Max Klimm, and Annette Lutz. Bicriterial Approximation for the Incremental Prize-Collecting Steiner-Tree Problem. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 47:1-47:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{disser_et_al:LIPIcs.ESA.2024.47,
  author =	{Disser, Yann and Griesbach, Svenja M. and Klimm, Max and Lutz, Annette},
  title =	{{Bicriterial Approximation for the Incremental Prize-Collecting Steiner-Tree Problem}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{47:1--47:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.47},
  URN =		{urn:nbn:de:0030-drops-211188},
  doi =		{10.4230/LIPIcs.ESA.2024.47},
  annote =	{Keywords: incremental optimization, competitive analysis, prize-collecting Steiner-tree}
}
Document
Scheduling with Obligatory Tests

Authors: Konstantinos Dogeas, Thomas Erlebach, and Ya-Chun Liang


Abstract
Motivated by settings such as medical treatments or aircraft maintenance, we consider a scheduling problem with jobs that consist of two operations, a test and a processing part. The time required to execute the test is known in advance while the time required to execute the processing part becomes known only upon completion of the test. We use competitive analysis to study algorithms for minimizing the sum of completion times for n given jobs on a single machine. As our main result, we prove using a novel analysis technique that the natural 1-SORT algorithm has competitive ratio at most 1.861. For the special case of uniform test times, we show that a simple threshold-based algorithm has competitive ratio at most 1.585. We also prove a lower bound that shows that no deterministic algorithm can be better than √2-competitive even in the case of uniform test times.

Cite as

Konstantinos Dogeas, Thomas Erlebach, and Ya-Chun Liang. Scheduling with Obligatory Tests. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 48:1-48:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dogeas_et_al:LIPIcs.ESA.2024.48,
  author =	{Dogeas, Konstantinos and Erlebach, Thomas and Liang, Ya-Chun},
  title =	{{Scheduling with Obligatory Tests}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{48:1--48:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.48},
  URN =		{urn:nbn:de:0030-drops-211194},
  doi =		{10.4230/LIPIcs.ESA.2024.48},
  annote =	{Keywords: Competitive ratio, Online algorithm, Scheduling with testing, Sum of completion times}
}
Document
Faster Min-Cost Flow and Approximate Tree Decomposition on Bounded Treewidth Graphs

Authors: Sally Dong and Guanghao Ye


Abstract
We present an algorithm for min-cost flow in graphs with n vertices and m edges, given a tree decomposition of width τ and size S, and polynomially bounded, integral edge capacities and costs, running in Õ(m√{τ} + S) time. This improves upon the previous fastest algorithm in this setting achieved by the bounded-treewidth linear program solver of [Gu and Song, 2022; Dong et al., 2024], which runs in Õ(m τ^{(ω+1)/2}) time, where ω ≈ 2.37 is the matrix multiplication exponent. Our approach leverages recent advances in structured linear program solvers and robust interior point methods (IPM). In general graphs, where treewidth is trivially bounded by n, the algorithm runs in Õ(m√n) time, which is the best-known result without using the Lee-Sidford barrier or 𝓁₁ IPM, demonstrating the surprising power of robust interior point methods. As a corollary, we obtain an Õ(tw³ ⋅ m) time algorithm to compute a tree decomposition of width O(tw ⋅ log n), given a graph with m edges, where tw denotes its treewidth.

Cite as

Sally Dong and Guanghao Ye. Faster Min-Cost Flow and Approximate Tree Decomposition on Bounded Treewidth Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dong_et_al:LIPIcs.ESA.2024.49,
  author =	{Dong, Sally and Ye, Guanghao},
  title =	{{Faster Min-Cost Flow and Approximate Tree Decomposition on Bounded Treewidth Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.49},
  URN =		{urn:nbn:de:0030-drops-211207},
  doi =		{10.4230/LIPIcs.ESA.2024.49},
  annote =	{Keywords: Min-cost flow, tree decomposition, interior point method, bounded treewidth graphs}
}
Document
Making Multicurves Cross Minimally on Surfaces

Authors: Loïc Dubois


Abstract
On an orientable surface S, consider a collection Γ of closed curves. The (geometric) intersection number i_S(Γ) is the minimum number of self-intersections that a collection Γ' can have, where Γ' results from a continuous deformation (homotopy) of Γ. We provide algorithms that compute i_S(Γ) and such a Γ', assuming that Γ is given by a collection of closed walks of length n in a graph M cellularly embedded on S, in O(n log n) time when M and S are fixed. The state of the art is a paper of Despré and Lazarus [SoCG 2017, J. ACM 2019], who compute i_S(Γ) in O(n²) time, and Γ' in O(n⁴) time if Γ is a single closed curve. Our result is more general since we can put an arbitrary number of closed curves in minimal position. Also, our algorithms are quasi-linear in n instead of quadratic and quartic. Most importantly, our proofs are simpler, shorter, and more structured. We use techniques from two-dimensional topology and from the theory of hyperbolic surfaces. Most notably, we prove a new property of the reducing triangulations introduced by Colin de Verdière, Despré, and Dubois [SODA 2024], reducing our problem to the case of surfaces with boundary. As a key subroutine, we rely on an algorithm of Fulek and Tóth [JCO 2020].

Cite as

Loïc Dubois. Making Multicurves Cross Minimally on Surfaces. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 50:1-50:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{dubois:LIPIcs.ESA.2024.50,
  author =	{Dubois, Lo\"{i}c},
  title =	{{Making Multicurves Cross Minimally on Surfaces}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{50:1--50:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.50},
  URN =		{urn:nbn:de:0030-drops-211216},
  doi =		{10.4230/LIPIcs.ESA.2024.50},
  annote =	{Keywords: Algorithms, Topology, Surfaces, Closed Curves, Geometric Intersection Number}
}
Document
Better Diameter Algorithms for Bounded VC-Dimension Graphs and Geometric Intersection Graphs

Authors: Lech Duraj, Filip Konieczny, and Krzysztof Potępa


Abstract
We develop a framework for algorithms finding the diameter in graphs of bounded distance Vapnik-Chervonenkis dimension, in (parameterized) subquadratic time complexity. The class of bounded distance VC-dimension graphs is wide, including, e.g., all minor-free graphs. We build on the work of Ducoffe et al. [SODA'20, SICOMP'22], improving their technique. With our approach the algorithms become simpler and faster, working in 𝒪(k ⋅ n^{1-1/d} ⋅ m ⋅ polylog(n)) time complexity for a graph on n vertices and m edges, where k is the diameter and d is the distance VC-dimension of the graph. Furthermore, it allows us to use the improved technique in a more general setting. In particular, we use this framework for geometric intersection graphs, i.e. graphs where vertices are identical geometric objects on a plane and the adjacency is defined by intersection. Applying our approach to these graphs, we partially answer a question posed by Bringmann et al. [SoCG'22], finding an 𝒪(n^{7/4} ⋅ polylog(n))-time parameterized diameter algorithm for unit square intersection graphs of size n, as well as a more general algorithm for convex polygon intersection graphs.

Cite as

Lech Duraj, Filip Konieczny, and Krzysztof Potępa. Better Diameter Algorithms for Bounded VC-Dimension Graphs and Geometric Intersection Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 51:1-51:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{duraj_et_al:LIPIcs.ESA.2024.51,
  author =	{Duraj, Lech and Konieczny, Filip and Pot\k{e}pa, Krzysztof},
  title =	{{Better Diameter Algorithms for Bounded VC-Dimension Graphs and Geometric Intersection Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{51:1--51:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.51},
  URN =		{urn:nbn:de:0030-drops-211229},
  doi =		{10.4230/LIPIcs.ESA.2024.51},
  annote =	{Keywords: Graph Diameter, Geometric Intersection Graphs, Vapnik-Chervonenkis Dimension}
}
Document
Toward Self-Adjusting k-Ary Search Tree Networks

Authors: Evgeniy Feder, Anton Paramonov, Pavel Mavrin, Iosif Salem, Vitaly Aksenov, and Stefan Schmid


Abstract
Datacenter networks are becoming increasingly flexible with the incorporation of new optical communication technologies, such as optical circuit switches, enabling self-adjusting topologies that can adapt to the traffic pattern in a demand-aware manner. In this paper, we take the first steps toward demand-aware and self-adjusting k-ary tree networks. These are more powerful generalizations of existing binary search tree networks (like SplayNet [Stefan Schmid et al., 2016]), which have been at the core of self-adjusting network (SAN) designs. k-ary search tree networks are a natural generalization offering nodes of higher degrees, reduced route lengths, and local routing in spite of reconfigurations (due to maintaining the search property). Our main results are two online heuristics for self-adjusting k-ary tree networks. Empirical results show that our heuristics work better than SplayNet in most of the real network traces and for synthetic traces of average to low locality, and are only slightly inferior to SplayNet in all remaining traces. We build our online algorithms by first solving the offline case. First, we compute an offline (optimal) static demand-aware network for arbitrary traffic patterns in 𝒪(n³ ⋅ k) time via dynamic programming, where n is the number of network nodes (e.g., datacenter racks), and also improve the bound for the special case of uniformly distributed traffic. Then, we present a centroid-based approach to demand-aware network designs that we use in both the offline static and online settings. In the offline uniform-workload case, we construct this centroid network in linear time 𝒪(n).

Cite as

Evgeniy Feder, Anton Paramonov, Pavel Mavrin, Iosif Salem, Vitaly Aksenov, and Stefan Schmid. Toward Self-Adjusting k-Ary Search Tree Networks. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{feder_et_al:LIPIcs.ESA.2024.52,
  author =	{Feder, Evgeniy and Paramonov, Anton and Mavrin, Pavel and Salem, Iosif and Aksenov, Vitaly and Schmid, Stefan},
  title =	{{Toward Self-Adjusting k-Ary Search Tree Networks}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.52},
  URN =		{urn:nbn:de:0030-drops-211235},
  doi =		{10.4230/LIPIcs.ESA.2024.52},
  annote =	{Keywords: self-adjusting networks, networks, splay-tree, k-ary tree}
}
Document
Semi-Streaming Algorithms for Weighted k-Disjoint Matchings

Authors: S M Ferdous, Bhargav Samineni, Alex Pothen, Mahantesh Halappanavar, and Bala Krishnamoorthy


Abstract
We design and implement two single-pass semi-streaming algorithms for the maximum weight k-disjoint matching (k-DM) problem. Given an integer k, the k-DM problem is to find k pairwise edge-disjoint matchings such that the sum of the weights of the matchings is maximized. For k ≥ 2, this problem is NP-hard. Our first algorithm is based on the primal-dual framework of a linear programming relaxation of the problem and is 1/(3+ε)-approximate. We also develop an approximation-preserving reduction from k-DM to the maximum weight b-matching problem. Leveraging this reduction and an existing semi-streaming b-matching algorithm, we design a (1/(2+ε))(1 - 1/(k+1))-approximate semi-streaming algorithm for k-DM. For any constant ε > 0, both of these algorithms require O(nk log²_{1+ε} n) bits of space. To the best of our knowledge, this is the first study of semi-streaming algorithms for the k-DM problem. We compare our two algorithms to state-of-the-art offline algorithms on 95 real-world and synthetic test problems, including thirteen graphs generated from data center network traces. On these instances, our streaming algorithms used significantly less memory (ranging from 6× to 512× less) and were faster than the offline algorithms. Our solutions were often within 5% of the best weights from the offline algorithms. We highlight that the existing offline algorithms run out of memory on a 1 TB machine for most of the large instances (> 1 billion edges), whereas our streaming algorithms can solve these problems using only 100 GB of memory for k = 8.
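
A simple offline greedy baseline that pins down the k-DM objective (ours, for illustration; not one of the paper's semi-streaming algorithms): run k rounds of weight-greedy maximal matching, removing used edges between rounds.

def greedy_k_disjoint_matchings(edges, k):
    # edges: list of (u, v, w). Returns k pairwise edge-disjoint
    # matchings, built greedily by decreasing weight, round by round.
    remaining = sorted(edges, key=lambda e: -e[2])
    matchings = []
    for _ in range(k):
        used, matching, leftover = set(), [], []
        for u, v, w in remaining:
            if u in used or v in used:
                leftover.append((u, v, w))
            else:
                matching.append((u, v, w))
                used.update((u, v))
        matchings.append(matching)
        remaining = leftover
    return matchings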

Cite as

S M Ferdous, Bhargav Samineni, Alex Pothen, Mahantesh Halappanavar, and Bala Krishnamoorthy. Semi-Streaming Algorithms for Weighted k-Disjoint Matchings. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 53:1-53:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{ferdous_et_al:LIPIcs.ESA.2024.53,
  author =	{Ferdous, S M and Samineni, Bhargav and Pothen, Alex and Halappanavar, Mahantesh and Krishnamoorthy, Bala},
  title =	{{Semi-Streaming Algorithms for Weighted k-Disjoint Matchings}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{53:1--53:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.53},
  URN =		{urn:nbn:de:0030-drops-211245},
  doi =		{10.4230/LIPIcs.ESA.2024.53},
  annote =	{Keywords: Matchings, Semi-Streaming Algorithms, Approximation Algorithms}
}
Document
Invertible Bloom Lookup Tables with Less Memory and Randomness

Authors: Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin


Abstract
In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing n elements and ensuring correctness with probability at least 1 - δ, existing IBLT constructions require Ω(n((log(1/δ))/(log n) + 1)) space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing n elements with a failure probability of at most δ, our data structure only requires O(n + log(1/δ) log log(1/δ)) space and O(log(log(n)/δ))-wise independent hash functions. As a key technical ingredient we show that hashing n keys with any k-wise independent hash function h:U → [Cn] for some sufficiently large constant C guarantees with probability 1 - 2^{-Ω(k)} that at least n/2 keys will have a unique hash value. Proving this is non-trivial as k approaches n. We believe that the techniques used to prove this statement may be of independent interest. We apply our new IBLTs to the encrypted compression problem, recently studied by Fleischhacker, Larsen, Simkin (Eurocrypt 2023). We extend their approach to work for a more general class of encryption schemes and using our new IBLT we achieve an asymptotically better compression rate.
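
A minimal insert-only IBLT, to make the peeling-based decoding concrete (a toy of ours with idealized seeded hashing and distinct positive integer keys; not the paper's construction, which economizes on both space and hash independence):

import random

class ToyIBLT:
    # m cells, up to r hashed positions per key. With insertions only,
    # a cell of count 1 holds exactly one key, so peeling it is safe.
    def __init__(self, m, r=3, seed=0):
        self.m, self.r, self.seed = m, r, seed
        self.count = [0] * m
        self.key_sum = [0] * m

    def positions(self, key):
        rng = random.Random(key * 1_000_003 + self.seed)
        return {rng.randrange(self.m) for _ in range(self.r)}

    def insert(self, key):
        for p in self.positions(key):
            self.count[p] += 1
            self.key_sum[p] ^= key

    def list_keys(self):
        # Peel cells containing exactly one key until none remain;
        # decoding succeeded iff all counters are zero afterwards.
        out = []
        while True:
            p = next((i for i in range(self.m) if self.count[i] == 1), None)
            if p is None:
                return out
            key = self.key_sum[p]
            out.append(key)
            for q in self.positions(key):
                self.count[q] -= 1
                self.key_sum[q] ^= key

With m = Cn cells for a suitable constant C, decoding succeeds except with the kind of failure probability δ quantified in the abstract.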

Cite as

Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin. Invertible Bloom Lookup Tables with Less Memory and Randomness. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{fleischhacker_et_al:LIPIcs.ESA.2024.54,
  author =	{Fleischhacker, Nils and Larsen, Kasper Green and Obremski, Maciej and Simkin, Mark},
  title =	{{Invertible Bloom Lookup Tables with Less Memory and Randomness}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.54},
  URN =		{urn:nbn:de:0030-drops-211252},
  doi =		{10.4230/LIPIcs.ESA.2024.54},
  annote =	{Keywords: Invertible Bloom Lookup Tables}
}
Document
Hitting Meets Packing: How Hard Can It Be?

Authors: Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki


Abstract
We study a general family of problems that form a common generalization of classic hitting (also referred to as covering or transversal) and packing problems. An instance of 𝒳-HitPack asks: Can removing k (deletable) vertices of a graph G prevent us from packing 𝓁 vertex-disjoint objects of type 𝒳? This problem captures a spectrum of problems with standard hitting and packing on opposite ends. Our main motivating question is whether the combination 𝒳-HitPack can be significantly harder than these two base problems. Already for one particular choice of 𝒳, this question can be posed for many different complexity notions, leading to a large, so-far unexplored domain at the intersection of the areas of hitting and packing problems. At a high level, we present two case studies: (1) 𝒳 being all cycles, and (2) 𝒳 being all copies of a fixed graph H. In each, we explore the classical complexity as well as the parameterized complexity with the natural parameters k+𝓁 and treewidth. We observe that the combined problem can be drastically harder than the base problems: for cycles or for H being a connected graph on at least 3 vertices, the problem is Σ₂^𝖯-complete and requires double-exponential dependence on the treewidth of the graph (assuming the Exponential-Time Hypothesis). In contrast, the combined problem admits qualitatively similar running times as the base problems in some cases, although significant novel ideas are required. For example, for 𝒳 being all cycles, we establish a 2^{poly(k+𝓁)} ⋅ n^{𝒪(1)} algorithm using an involved branching method. Also, for 𝒳 being all edges (i.e., H = K₂; this combines Vertex Cover and Maximum Matching) the problem can be solved in time 2^{poly(tw)} ⋅ n^{𝒪(1)} on graphs of treewidth tw. The key step enabling this running time relies on a combinatorial bound obtained from an algebraic (linear delta-matroid) representation of possible matchings.
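
The template at its simplest member, 𝒳 = all edges (H = K₂): a brute-force definitional checker of ours for tiny instances (exponential time; it only spells out the quantifier structure "exists a deletion set such that no packing exists"):

from itertools import combinations

def can_pack(edges, l, banned=frozenset()):
    # Is there a set of l pairwise vertex-disjoint edges avoiding banned vertices?
    if l == 0:
        return True
    for i, (u, v) in enumerate(edges):
        if u not in banned and v not in banned:
            if can_pack(edges[i + 1:], l - 1, banned | {u, v}):
                return True
    return False

def hitpack_edges(vertices, edges, k, l):
    # Can deleting at most k vertices leave no l vertex-disjoint edges?
    return any(
        not can_pack([e for e in edges if X.isdisjoint(e)], l)
        for size in range(k + 1)
        for X in map(frozenset, combinations(vertices, size))
    )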

Cite as

Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki. Hitting Meets Packing: How Hard Can It Be?. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 55:1-55:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{focke_et_al:LIPIcs.ESA.2024.55,
  author =	{Focke, Jacob and Frei, Fabian and Li, Shaohua and Marx, D\'{a}niel and Schepper, Philipp and Sharma, Roohani and W\k{e}grzycki, Karol},
  title =	{{Hitting Meets Packing: How Hard Can It Be?}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{55:1--55:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.55},
  URN =		{urn:nbn:de:0030-drops-211261},
  doi =		{10.4230/LIPIcs.ESA.2024.55},
  annote =	{Keywords: Hitting, Packing, Covering, Parameterized Algorithms, Lower Bounds, Treewidth}
}
Document
A Simple Deterministic Near-Linear Time Approximation Scheme for Transshipment with Arbitrary Positive Edge Costs

Authors: Emily Fox


Abstract
We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with positive real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph G = (V, E), vertex demands b ∈ R^V such that ∑_{v ∈ V} b(v) = 0, positive edge costs c ∈ R_{> 0}^E, and a parameter ε > 0. In O(ε^{-2} m log^{O(1)} n) time, it returns a flow f such that the net flow out of each vertex is equal to the vertex’s demand and the cost of the flow is within a (1 + ε) factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the Ω(n²) vertex-vertex distances that an approximation of this kind suggests, we also take advantage of the clustering method used in the well-known Thorup-Zwick distance oracle.
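
Written out, the problem the scheme (1+ε)-approximates, in the notation of the abstract: orient each undirected edge arbitrarily and allow negative flow, with δ⁺(v) and δ⁻(v) denoting the arcs leaving and entering v.

\[
\min_{f \in \mathbb{R}^{E}} \; \sum_{e \in E} c(e)\,\lvert f(e) \rvert
\quad \text{subject to} \quad
\sum_{e \in \delta^{+}(v)} f(e) \;-\; \sum_{e \in \delta^{-}(v)} f(e) \;=\; b(v)
\qquad \text{for all } v \in V .
\]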

Cite as

Emily Fox. A Simple Deterministic Near-Linear Time Approximation Scheme for Transshipment with Arbitrary Positive Edge Costs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{fox:LIPIcs.ESA.2024.56,
  author =	{Fox, Emily},
  title =	{{A Simple Deterministic Near-Linear Time Approximation Scheme for Transshipment with Arbitrary Positive Edge Costs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.56},
  URN =		{urn:nbn:de:0030-drops-211270},
  doi =		{10.4230/LIPIcs.ESA.2024.56},
  annote =	{Keywords: Transshipment, minimum cost flow, approximation algorithms}
}
Document
Removing the log Factor from (min,+)-Products on Bounded Range Integer Matrices

Authors: Dvir Fried, Tsvi Kopelowitz, and Ely Porat


Abstract
We revisit the problem of multiplying two square matrices over the (min, +) semi-ring, where all entries are integers from a bounded range [-M : M] ∪ {∞}. The current state of the art for this problem is a simple O(M n^{ω} log M) time algorithm by Alon, Galil and Margalit [JCSS'97], where ω is the exponent in the runtime of the fastest matrix multiplication (FMM) algorithm. We design a new simple algorithm whose runtime is O(M n^ω + M n² log M), thereby removing the log M factor in the runtime if ω > 2 or if n^ω = Ω(n² log n).
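
The product in question, as a cubic-time reference implementation of ours (the point of the paper is to beat this via FMM, which becomes possible precisely because the bounded entry range lets (min, +) be encoded into ordinary integer matrix products):

def min_plus_product(A, B):
    # C[i][j] = min over k of A[i][k] + B[k][j], over the (min, +) semiring;
    # use float("inf") for the ∞ entries.
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]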

Cite as

Dvir Fried, Tsvi Kopelowitz, and Ely Porat. Removing the log Factor from (min,+)-Products on Bounded Range Integer Matrices. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 57:1-57:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{fried_et_al:LIPIcs.ESA.2024.57,
  author =	{Fried, Dvir and Kopelowitz, Tsvi and Porat, Ely},
  title =	{{Removing the log Factor from (min,+)-Products on Bounded Range Integer Matrices}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{57:1--57:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.57},
  URN =		{urn:nbn:de:0030-drops-211283},
  doi =		{10.4230/LIPIcs.ESA.2024.57},
  annote =	{Keywords: FMM, (min , +)-product, FFT}
}
Document
Random-Order Online Independent Set of Intervals and Hyperrectangles

Authors: Mohit Garg, Debajyoti Kar, and Arindam Khan


Abstract
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of n (possibly overlapping) d-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For d = 1, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for d-dimensional hyperrectangles, polynomial-time (log n)^{O(d)}-approximation algorithms are known [Chalermsook and Chuzhoy, 2009]. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining feasibility. Even for interval scheduling, an Ω(n) lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention to study algorithms beyond the worst-case competitive analysis (see the survey by Gupta and Singla [Anupam Gupta and Sahil Singla, 2020]). Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple (log n)^{O(d)}-competitive algorithm for d-dimensional hyperrectangles in this model, which runs in Õ_d(n) time. Our approach also yields (log n)^{O(d)}-competitive algorithms in the random-order model for more general objects such as d-dimensional fat objects and ellipsoids. Furthermore, all our competitiveness guarantees hold with high probability, and not just in expectation.
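
The d = 1 offline baseline referenced above, for completeness: the earliest-finish-time greedy, which solves Interval Scheduling exactly. The contrast drawn by the paper is that even this easy case suffers an Ω(n) competitive ratio under adversarial online order, while random order recovers polylogarithmic guarantees.

def max_disjoint_intervals(intervals):
    # intervals: list of (start, end), treated as half-open [start, end).
    # Classic exchange argument: always keep the compatible interval
    # that ends first.
    chosen, last_end = [], float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:
            chosen.append((s, e))
            last_end = e
    return chosen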

Cite as

Mohit Garg, Debajyoti Kar, and Arindam Khan. Random-Order Online Independent Set of Intervals and Hyperrectangles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{garg_et_al:LIPIcs.ESA.2024.58,
  author =	{Garg, Mohit and Kar, Debajyoti and Khan, Arindam},
  title =	{{Random-Order Online Independent Set of Intervals and Hyperrectangles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.58},
  URN =		{urn:nbn:de:0030-drops-211298},
  doi =		{10.4230/LIPIcs.ESA.2024.58},
  annote =	{Keywords: Online Algorithms, Random-Order Model, Maximum Independent Set of Rectangles, Hyperrectangles, Fat Objects, Interval Scheduling}
}
Document
Finding Perfect Matchings in Bridgeless Cubic Multigraphs Without Dynamic (2-)connectivity

Authors: Paweł Gawrychowski and Mateusz Wasylkiewicz


Abstract
Petersen’s theorem, one of the earliest results in graph theory, states that every bridgeless cubic multigraph contains a perfect matching. While the original proof was neither constructive nor algorithmic, Biedl, Bose, Demaine, and Lubiw [J. Algorithms 38(1)] showed how to implement a later constructive proof by Frink in 𝒪(n log⁴ n) time using a fully dynamic 2-edge-connectivity structure. Then, Diks and Stańczyk [SOFSEM 2010] described a faster approach that only needs a fully dynamic connectivity structure and works in 𝒪(n log² n) time. Both algorithms, while reasonably simple, utilize non-trivial (2-edge-)connectivity structures. We show that this is not necessary, and in fact a structure for maintaining a dynamic tree, e.g. link-cut trees, suffices to obtain a simple 𝒪(n log n) time algorithm.

Cite as

Paweł Gawrychowski and Mateusz Wasylkiewicz. Finding Perfect Matchings in Bridgeless Cubic Multigraphs Without Dynamic (2-)connectivity. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 59:1-59:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2024.59,
  author =	{Gawrychowski, Pawe{\l} and Wasylkiewicz, Mateusz},
  title =	{{Finding Perfect Matchings in Bridgeless Cubic Multigraphs Without Dynamic (2-)connectivity}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{59:1--59:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.59},
  URN =		{urn:nbn:de:0030-drops-211301},
  doi =		{10.4230/LIPIcs.ESA.2024.59},
  annote =	{Keywords: perfect matching, cubic graphs, bridgeless graphs, link-cut tree}
}
Document
New Algorithms and Lower Bounds for Streaming Tournaments

Authors: Prantar Ghosh and Sahil Kuchlous


Abstract
We study fundamental directed graph (digraph) problems in the streaming model. An initial investigation by Chakrabarti, Ghosh, McGregor, and Vorotnikova [SODA'20] on streaming digraphs showed that while most of these problems are provably hard in general, some of them become tractable when restricted to the well-studied class of tournament graphs where every pair of nodes shares exactly one directed edge. Thus, we focus on tournaments and improve the state of the art for multiple problems in terms of both upper and lower bounds. Our primary upper bound is a deterministic single-pass semi-streaming algorithm (using Õ(n) space for n-node graphs, where Õ(.) hides polylog(n) factors) for decomposing a tournament into strongly connected components (SCC). It improves upon the previously best-known algorithm by Baweja, Jia, and Woodruff [ITCS'22] in terms of both space and passes: for p ⩾ 1, they used (p+1) passes and Õ(n^{1+1/p}) space. We further extend our algorithm to digraphs that are close to tournaments and establish tight bounds demonstrating that the problem’s complexity grows smoothly with the "distance" from tournaments. Applying our SCC-decomposition framework, we obtain improved - and in some cases, optimal - tournament algorithms for s,t-reachability, strong connectivity, Hamiltonian paths and cycles, and feedback arc set. On the other hand, we prove lower bounds exhibiting that some well-studied problems - such as (exact) feedback arc set and s,t-distance - remain hard (require Ω(n²) space) on tournaments. Moreover, we generalize the former problem’s lower bound to establish space-approximation tradeoffs: any single-pass (1± ε)-approximation algorithm requires Ω(n/√{ε}) space. Finally, we settle the streaming complexities of two basic digraph problems studied by prior work: acyclicity testing of tournaments and sink finding in DAGs. As a whole, our collection of results contributes significantly to the growing literature on streaming digraphs.

Cite as

Prantar Ghosh and Sahil Kuchlous. New Algorithms and Lower Bounds for Streaming Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{ghosh_et_al:LIPIcs.ESA.2024.60,
  author =	{Ghosh, Prantar and Kuchlous, Sahil},
  title =	{{New Algorithms and Lower Bounds for Streaming Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.60},
  URN =		{urn:nbn:de:0030-drops-211318},
  doi =		{10.4230/LIPIcs.ESA.2024.60},
  annote =	{Keywords: tournaments, streaming algorithms, graph algorithms, communication complexity, strongly connected components, reachability, feedback arc set}
}
Document
Practical Expander Decomposition

Authors: Lars Gottesbüren, Nikos Parotsidis, and Maximilian Probst Gutenberg


Abstract
The expander decomposition of a graph decomposes the set of vertices into clusters such that the induced subgraph of each cluster is a subgraph with high conductance, and there is only a small number of inter-cluster edges. Expander decompositions are at the forefront of recent theoretical developments in the area of efficient graph algorithms and act as a central component in several state-of-the-art graph algorithms for fundamental problems like maximum flow, min-cost flow, Gomory-Hu trees, global min-cut, and more. Despite this crucial role and the existence of theoretically efficient expander decomposition algorithms, little is known about their behavior in practice. In this paper, we explore the engineering design space of implementations for computing expander decompositions. We base our implementation on the near-linear time algorithm of Saranurak and Wang [SODA'19], and enhance it with practical optimizations that accelerate its running time in practice while preserving the theoretical runtime and approximation guarantees. We evaluate our algorithm on real-world graphs with up to tens of millions of edges. We demonstrate significant speedups of up to two orders of magnitude over the only prior implementation. To the best of our knowledge, our implementation is the first to compute expander decompositions at this scale within reasonable time.
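
For intuition (a minimal sketch, not from the paper): the quantity an expander decomposition certifies per cluster is conductance, cut(S) / min(vol(S), vol(V ∖ S)), where vol counts edge endpoints. Assuming an adjacency-set representation:

def conductance(adj, S):
    """adj: dict mapping each vertex to the set of its neighbours (undirected);
    S: iterable of vertices. Returns phi(S) = cut(S) / min(vol(S), vol(rest))."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_S
    denom = min(vol_S, vol_rest)
    return cut / denom if denom > 0 else 0.0

# On a 4-cycle, the side {0, 1} has cut 2 and volume 4, so phi = 0.5:
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(conductance(adj, {0, 1}))  # -> 0.5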

Cite as

Lars Gottesbüren, Nikos Parotsidis, and Maximilian Probst Gutenberg. Practical Expander Decomposition. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 61:1-61:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gottesburen_et_al:LIPIcs.ESA.2024.61,
  author =	{Gottesb\"{u}ren, Lars and Parotsidis, Nikos and Gutenberg, Maximilian Probst},
  title =	{{Practical Expander Decomposition}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{61:1--61:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.61},
  URN =		{urn:nbn:de:0030-drops-211323},
  doi =		{10.4230/LIPIcs.ESA.2024.61},
  annote =	{Keywords: Expander Decomposition, Clustering, Graph Algorithms}
}
Document
Optimizing Throughput and Makespan of Queuing Systems by Information Design

Authors: Svenja M. Griesbach, Max Klimm, Philipp Warode, and Theresa Ziemke


Abstract
We study the optimal provision of information for two natural performance measures of queuing systems: throughput and makespan. A set of parallel links (queues) is equipped with deterministic capacities and stochastic offsets where the latter depend on a realized state, and the number of states is assumed to be constant. A continuum of flow particles (agents) arrives at the system at a constant rate. A system operator knows the realization of the state and may (partially) reveal this information via a public signaling scheme to the flow particles. Upon arrival, the flow particles observe the signal issued by the system operator, form an updated belief about the realized state, and decide on which link they use. Inflow into a link exceeding the link’s capacity builds up in a queue that increases the cost (total travel time) on the link. Dynamic inflow rates are in a Bayesian dynamic equilibrium when the expected cost along all links with positive inflow is equal at every point in time and not larger than the expected cost of any unused link. For a given time horizon T, the throughput induced by a signaling scheme is the total volume of flow that leaves the links in the interval [0,T]. The public signaling scheme maximizing the throughput may involve irrational numbers. We provide an additive polynomial time approximation scheme (PTAS) that approximates the optimal throughput to within an arbitrary additive constant ε > 0. The algorithm solves a Lagrangian dual of the signaling problem with the Ellipsoid method, whose separation oracle is implemented by a cell decomposition technique. We also provide a multiplicative fully polynomial time approximation scheme (FPTAS) that does not rely on strong duality and, thus, allows computing the optimal signals. It uses a different cell decomposition technique together with a piecewise convex under-estimator of the optimal value function. Finally, we consider the makespan of a Bayesian dynamic equilibrium, which is defined as the last point in time when a total given value of flow leaves the system. Using a variational inequality argument, we show that full information revelation is a public signaling scheme that minimizes the makespan.

Cite as

Svenja M. Griesbach, Max Klimm, Philipp Warode, and Theresa Ziemke. Optimizing Throughput and Makespan of Queuing Systems by Information Design. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 62:1-62:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{griesbach_et_al:LIPIcs.ESA.2024.62,
  author =	{Griesbach, Svenja M. and Klimm, Max and Warode, Philipp and Ziemke, Theresa},
  title =	{{Optimizing Throughput and Makespan of Queuing Systems by Information Design}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{62:1--62:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.62},
  URN =		{urn:nbn:de:0030-drops-211336},
  doi =		{10.4230/LIPIcs.ESA.2024.62},
  annote =	{Keywords: Information Design, Dynamic Flows, Public Signals, Convex Envelope}
}
Document
Fractional Linear Matroid Matching Is in Quasi-NC

Authors: Rohit Gurjar, Taihei Oki, and Roshan Raj


Abstract
The matching and linear matroid intersection problems are solvable in quasi-NC, meaning that there exist deterministic algorithms that run in polylogarithmic time and use quasi-polynomially many parallel processors. However, such a parallel algorithm is unknown for linear matroid matching, which generalizes both of these problems. In this work, we propose a quasi-NC algorithm for fractional linear matroid matching, which is a relaxation of linear matroid matching and commonly generalizes fractional matching and linear matroid intersection. Our algorithm builds upon the connection of fractional matroid matching to the non-commutative Edmonds' problem recently revealed by Oki and Soma (2023). As a corollary, we also solve the black-box non-commutative Edmonds' problem with rank-two skew-symmetric coefficients.

Cite as

Rohit Gurjar, Taihei Oki, and Roshan Raj. Fractional Linear Matroid Matching Is in Quasi-NC. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gurjar_et_al:LIPIcs.ESA.2024.63,
  author =	{Gurjar, Rohit and Oki, Taihei and Raj, Roshan},
  title =	{{Fractional Linear Matroid Matching Is in Quasi-NC}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.63},
  URN =		{urn:nbn:de:0030-drops-211344},
  doi =		{10.4230/LIPIcs.ESA.2024.63},
  annote =	{Keywords: parallel algorithms, hitting set, non-commutative rank, Brascamp-Lieb polytope, algebraic algorithms}
}
Document
A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs

Authors: Kou Hamada, Sankardeep Chakraborty, Seungbum Jo, Takuto Koriyama, Kunihiko Sadakane, and Srinivasa Rao Satti


Abstract
Tree covering is a technique for decomposing a tree into smaller sized trees with desirable properties, and has been employed in various succinct data structures. However, significant hurdles stand in the way of a practical implementation of tree covering: many pointers are needed to maintain the tree-covering hierarchy, and the many indices for tree navigational queries consume space that is theoretically negligible yet vast in practice. To tackle these problems, we propose a simple representation of tree covering using a balanced-parenthesis (BP) representation. The key to our proposal is the observation that every micro tree splits into at most two intervals on the BP representation. Utilizing the representation, we propose several data structures that represent a tree and its tree cover, which consequently allow micro tree compression with arbitrary coding and efficient tree navigational queries. We also apply our data structure to the average-case optimal RMQ of Munro et al. [ESA 2021] and implement the resulting RMQ data structure. Our RMQ data structures use less than 2n bits and process queries in practical time across several performance-evaluation settings, reducing the gap between theoretical space complexity and actual space consumption. For example, our implementation consumes 1.822n bits and processes queries in 5µs on average for random queries and in 13µs on average for the worst query widths. We also implement tree navigational operations while using the same amount of space as the RMQ data structures. We believe the representation can be widely utilized for designing practically memory-efficient data structures based on tree covering.
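
For readers unfamiliar with BP, here is a minimal sketch of the textbook encoding the observation above operates on (an ordered tree given as a child dictionary; this is standard background, not the paper's full structure): a DFS writes '(' when entering a node and ')' when leaving it, turning an n-node tree into a length-2n parenthesis string.

def to_bp(children, root):
    """children: dict mapping node -> ordered list of children.
    Returns the BP string: '(' on entering a node, ')' on leaving it."""
    out, stack = [], [(root, False)]
    while stack:
        node, closing = stack.pop()
        if closing:
            out.append(")")
            continue
        out.append("(")
        stack.append((node, True))                 # close after the subtree
        for child in reversed(children.get(node, [])):
            stack.append((child, False))
    return "".join(out)

# Root 0 with children 1 and 2; node 1 has child 3:
print(to_bp({0: [1, 2], 1: [3]}, 0))  # -> ((())())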

Cite as

Kou Hamada, Sankardeep Chakraborty, Seungbum Jo, Takuto Koriyama, Kunihiko Sadakane, and Srinivasa Rao Satti. A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hamada_et_al:LIPIcs.ESA.2024.64,
  author =	{Hamada, Kou and Chakraborty, Sankardeep and Jo, Seungbum and Koriyama, Takuto and Sadakane, Kunihiko and Satti, Srinivasa Rao},
  title =	{{A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.64},
  URN =		{urn:nbn:de:0030-drops-211359},
  doi =		{10.4230/LIPIcs.ESA.2024.64},
  annote =	{Keywords: Hypersuccinct trees, Succinct data structures, Range minimum queries, Binary trees}
}
Document
A Nearly Linear Time Construction of Approximate Single-Source Distance Sensitivity Oracles

Authors: Kaito Harada, Naoki Kitamura, Taisuke Izumi, and Toshimitsu Masuzawa


Abstract
An α-approximate vertex fault-tolerant distance sensitivity oracle (α-VSDO) for a weighted input graph G = (V, E, w) and a source vertex s ∈ V is a data structure answering an α-approximate distance from s to t in G-x for any given query (x, t) ∈ V × V. It is a data structure version of the so-called single-source replacement path problem (SSRP). In this paper, we present a new nearly linear-time algorithm for constructing a (1 + ε)-VSDO for any directed input graph with polynomially bounded integer edge weights. More precisely, the presented oracle attains Õ(m log (nW)/ε + n log² (nW)/ε²) construction time, Õ(n log (nW)/ε) size, and Õ(1/ε) query time, where n is the number of vertices, m is the number of edges, and W is the maximum edge weight. These bounds are all optimal up to polylogarithmic factors. To the best of our knowledge, this is the first non-trivial algorithm for SSRP/VSDO beating Õ(mn) computation time for directed graphs with general edge weight functions, and also the first nearly linear-time construction breaking approximation factor 3. Such a construction has been unknown even for undirected and unweighted graphs. In addition, our result implies that the known conditional lower bounds for exact SSRP computation do not apply to the approximate setting.
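
To make the interface concrete (a naive baseline, not the paper's construction): a query (x, t) asks for the distance from s to t in G - x. Recomputing this with one Dijkstra per failed vertex x answers all SSRP queries in the Õ(mn)-type time that the oracle construction above is designed to beat.

import heapq

def dist_avoiding(n, adj, s, x, t):
    """adj[u]: list of (v, w) arcs of a directed graph; returns the s-t
    distance in G - x (float('inf') if unreachable)."""
    INF = float("inf")
    dist = [INF] * n
    pq = []
    if s != x:
        dist[s] = 0
        pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if v != x and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist[t]

# Path 0 -> 1 -> 2 of length 2, plus a direct arc 0 -> 2 of weight 5;
# failing vertex 1 forces the detour:
adj = [[(1, 1), (2, 5)], [(2, 1)], []]
print(dist_avoiding(3, adj, 0, 1, 2))  # -> 5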

Cite as

Kaito Harada, Naoki Kitamura, Taisuke Izumi, and Toshimitsu Masuzawa. A Nearly Linear Time Construction of Approximate Single-Source Distance Sensitivity Oracles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 65:1-65:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{harada_et_al:LIPIcs.ESA.2024.65,
  author =	{Harada, Kaito and Kitamura, Naoki and Izumi, Taisuke and Masuzawa, Toshimitsu},
  title =	{{A Nearly Linear Time Construction of Approximate Single-Source Distance Sensitivity Oracles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{65:1--65:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.65},
  URN =		{urn:nbn:de:0030-drops-211367},
  doi =		{10.4230/LIPIcs.ESA.2024.65},
  annote =	{Keywords: data structure, distance sensitivity oracle, replacement path problem, graph algorithm}
}
Document
Shortest Path Separators in Unit Disk Graphs

Authors: Elfarouk Harb, Zhengcheng Huang, and Da Wei Zheng


Abstract
We introduce a new balanced separator theorem for unit-disk graphs involving two shortest paths combined with the 1-hop neighbours of those paths and two other vertices. This answers an open problem of Yan, Xiang and Dragan [CGTA '12] and improves their result that requires removing the 3-hop neighbourhood of two shortest paths. Our proof uses very different ideas, including Delaunay triangulations and a generalization of the celebrated balanced separator theorem of Lipton and Tarjan [J. Appl. Math. '79] to systems of non-intersecting paths.

Cite as

Elfarouk Harb, Zhengcheng Huang, and Da Wei Zheng. Shortest Path Separators in Unit Disk Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{harb_et_al:LIPIcs.ESA.2024.66,
  author =	{Harb, Elfarouk and Huang, Zhengcheng and Zheng, Da Wei},
  title =	{{Shortest Path Separators in Unit Disk Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.66},
  URN =		{urn:nbn:de:0030-drops-211375},
  doi =		{10.4230/LIPIcs.ESA.2024.66},
  annote =	{Keywords: Balanced shortest path separators, unit disk graphs, crossings}
}
Document
Approximation Algorithms for Steiner Connectivity Augmentation

Authors: Daniel Hathcock and Michael Zlatin


Abstract
We consider connectivity augmentation problems in the Steiner setting, where the goal is to augment the edge-connectivity between a specified subset of terminal nodes. In the Steiner Augmentation of a Graph problem (k-SAG), we are given a k-edge-connected subgraph H of a graph G. The goal is to augment H by including links from G of minimum cost so that the edge-connectivity between nodes of H increases by 1. This is a generalization of the Weighted Connectivity Augmentation Problem, in which only links between pairs of nodes in H are available for the augmentation. In the Steiner Connectivity Augmentation Problem (k-SCAP), we are given a Steiner k-edge-connected graph connecting terminals R, and we seek to add links of minimum cost to create a Steiner (k+1)-edge-connected graph for R. Note that k-SAG is a special case of k-SCAP. The results of Ravi, Zhang and Zlatin for the Steiner Tree Augmentation problem yield a (1.5 + ε)-approximation for 1-SCAP and for k-SAG when k is odd [Ravi et al., 2023]. In this work, we give a (1 + ln 2 + ε)-approximation for the Steiner Ring Augmentation Problem (SRAP). This yields a polynomial time algorithm with approximation ratio (1 + ln 2 + ε) for 2-SCAP. We obtain an improved approximation guarantee for SRAP when the ring consists of only terminals, yielding a (1.5 + ε)-approximation for k-SAG for any k.

Cite as

Daniel Hathcock and Michael Zlatin. Approximation Algorithms for Steiner Connectivity Augmentation. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 67:1-67:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hathcock_et_al:LIPIcs.ESA.2024.67,
  author =	{Hathcock, Daniel and Zlatin, Michael},
  title =	{{Approximation Algorithms for Steiner Connectivity Augmentation}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{67:1--67:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.67},
  URN =		{urn:nbn:de:0030-drops-211387},
  doi =		{10.4230/LIPIcs.ESA.2024.67},
  annote =	{Keywords: Approximation Algorithms, Steiner Connectivity, Network Design}
}
Document
Minimizing the Weighted Number of Tardy Jobs Is W[1]-Hard

Authors: Klaus Heeger and Danny Hermelin


Abstract
We consider the 1∣∣∑ w_jU_j problem, the problem of minimizing the weighted number of tardy jobs on a single machine. This problem is one of the most basic and fundamental problems in scheduling theory, with several different applications both in theory and practice. Using a reduction from the Multicolored Clique problem, we prove that 1∣∣∑ w_jU_j is W[1]-hard with respect to the number p_# of different processing times in the input, as well as with respect to the number w_# of different weights in the input. This, along with previous work, provides a complete picture for 1∣∣∑ w_jU_j from the perspective of parameterized complexity, as well as almost tight complexity bounds for the problem under the Exponential Time Hypothesis (ETH).
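
For contrast with the hardness result, the classical pseudo-polynomial dynamic program of Lawler and Moore solves 1∣∣∑ w_jU_j in time O(n · ∑p_j). A minimal sketch (standard background, not from the paper): sort jobs by due date and track, for each total processing time t, the maximum weight of an on-time set finishing at t.

def min_weight_tardy(jobs):
    """jobs: list of (processing_time, due_date, weight) with integer times.
    Returns the minimum total weight of tardy jobs on a single machine."""
    jobs = sorted(jobs, key=lambda j: j[1])            # earliest due date first
    T = sum(p for p, _, _ in jobs)
    # best[t] = max weight of an on-time set with total processing time t
    best = [0] + [float("-inf")] * T
    for p, d, w in jobs:
        for t in range(min(d, T), p - 1, -1):          # job must finish by d
            if best[t - p] + w > best[t]:
                best[t] = best[t - p] + w
    return sum(w for _, _, w in jobs) - max(best)

# Two unit-length jobs both due at time 1: only one fits, drop the lighter.
print(min_weight_tardy([(1, 1, 5), (1, 1, 3)]))  # -> 3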

Cite as

Klaus Heeger and Danny Hermelin. Minimizing the Weighted Number of Tardy Jobs Is W[1]-Hard. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{heeger_et_al:LIPIcs.ESA.2024.68,
  author =	{Heeger, Klaus and Hermelin, Danny},
  title =	{{Minimizing the Weighted Number of Tardy Jobs Is W\lbrack1\rbrack-Hard}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.68},
  URN =		{urn:nbn:de:0030-drops-211392},
  doi =		{10.4230/LIPIcs.ESA.2024.68},
  annote =	{Keywords: single-machine scheduling, number of different weights, number of different processing times}
}
Document
PHOBIC: Perfect Hashing With Optimized Bucket Sizes and Interleaved Coding

Authors: Stefan Hermann, Hans-Peter Lehmann, Giulio Ermanno Pibiri, Peter Sanders, and Stefan Walzer


Abstract
A minimal perfect hash function (or MPHF) maps a set of n keys to [n] := {1, …, n} without collisions. Such functions find widespread application, e.g., in bioinformatics and databases. In this paper we revisit PTHash - a construction technique particularly designed for fast queries. PTHash distributes the input keys into small buckets and, for each bucket, it searches for a hash function seed that places its keys in the output domain without collisions. The collection of all seeds is then stored in a compressed way. Since buckets processed early, while most of the output domain is still free, are easier to place, buckets are considered in non-increasing order of size. Additionally, PTHash heuristically produces an imbalanced distribution of bucket sizes by distributing 60% of the keys into 30% of the buckets. Our main contribution is to characterize, up to lower order terms, an optimal choice for the expected bucket sizes, improving construction throughput for space-efficient configurations both in theory and practice. Further contributions include a new encoding scheme for seeds that works across partitions of the data structure and a GPU parallelization. Compared to PTHash, PHOBIC is 0.17 bits/key more space efficient for the same query time and construction throughput. For a configuration with fast queries, our GPU implementation can construct an MPHF at 2.17 bits/key in 28 ns/key, which can be queried in 37 ns/query on the CPU.
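
A minimal sketch of the PTHash-style search loop described above (with a simplified, hypothetical hash function and bucket count; not the paper's optimized implementation): distribute keys into buckets, process buckets largest-first, and for each bucket search for a seed that maps its keys to distinct free slots.

import hashlib

def h(key, seed, m):
    """A stand-in hash (our choice for the sketch): blake2b mod m."""
    digest = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % m

def build_mphf(keys):
    n = len(keys)
    n_buckets = max(1, n // 2)            # simplified; the paper tunes this
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[h(k, 0, n_buckets)].append(k)
    taken = [False] * n
    seeds = [0] * n_buckets
    # Largest buckets first: they are hardest to place once slots fill up.
    for b in sorted(range(n_buckets), key=lambda b: -len(buckets[b])):
        seed = 1
        while True:                       # sketch: no cap on seed trials
            slots = {h(k, seed, n) for k in buckets[b]}
            if len(slots) == len(buckets[b]) and not any(taken[s] for s in slots):
                break
            seed += 1
        seeds[b] = seed
        for s in slots:
            taken[s] = True
    return seeds, n_buckets, n

def query(key, seeds, n_buckets, n):
    return h(key, seeds[h(key, 0, n_buckets)], n)

keys = ["ACGT", "TTGA", "GATC", "CCAA", "AGGT"]
mphf = build_mphf(keys)
print(sorted(query(k, *mphf) for k in keys))  # -> [0, 1, 2, 3, 4]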

Cite as

Stefan Hermann, Hans-Peter Lehmann, Giulio Ermanno Pibiri, Peter Sanders, and Stefan Walzer. PHOBIC: Perfect Hashing With Optimized Bucket Sizes and Interleaved Coding. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 69:1-69:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hermann_et_al:LIPIcs.ESA.2024.69,
  author =	{Hermann, Stefan and Lehmann, Hans-Peter and Pibiri, Giulio Ermanno and Sanders, Peter and Walzer, Stefan},
  title =	{{PHOBIC: Perfect Hashing With Optimized Bucket Sizes and Interleaved Coding}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{69:1--69:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.69},
  URN =		{urn:nbn:de:0030-drops-211405},
  doi =		{10.4230/LIPIcs.ESA.2024.69},
  annote =	{Keywords: Compressed Data Structures, Minimal Perfect Hashing, GPU}
}
Document
Dynamic Embeddings of Dynamic Single-Source Upward Planar Graphs

Authors: Ivor van der Hoog, Irene Parada, and Eva Rotenberg


Abstract
A directed graph G is upward planar if it admits a planar embedding where each edge is y-monotone. Unlike planarity testing, upward planarity testing is NP-hard except in restricted cases, such as when the graph has the single-source property (i.e., each connected component has one source). In this paper, we present a dynamic data structure for maintaining an upward combinatorial embedding ℰ→(G) of a single-source upward planar graph subject to edge deletions, edge contractions, directed edge insertions across a face, and single-source-preserving vertex splits through specified corners (i.e., the gaps between pairs of consecutive edges that share a vertex and a face). We furthermore support changes to the embedding ℰ→(G) in the form of subgraph flips that mirror or slide the placement of a subgraph that is connected to the rest of the graph via at most two vertices. Updates that are incompatible with the current upward planar embedding are identified and rejected. All update operations are supported as long as the graph remains upward planar. In addition, we support queries that can tell whether two vertices can be connected with a directed edge while the graph remains single-source (we call these uplinkability queries). If a pair of vertices is not uplinkable, we facilitate one-flip-linkable queries: these point to a flip that makes them uplinkable, if any such flip exists. We dynamically maintain a linear-size data structure on G which supports incidence queries between a vertex and a face, and uplinkability queries for vertex pairs. We support all updates and queries in O(log² n) worst-case time.

Cite as

Ivor van der Hoog, Irene Parada, and Eva Rotenberg. Dynamic Embeddings of Dynamic Single-Source Upward Planar Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 70:1-70:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{vanderhoog_et_al:LIPIcs.ESA.2024.70,
  author =	{van der Hoog, Ivor and Parada, Irene and Rotenberg, Eva},
  title =	{{Dynamic Embeddings of Dynamic Single-Source Upward Planar Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{70:1--70:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.70},
  URN =		{urn:nbn:de:0030-drops-211410},
  doi =		{10.4230/LIPIcs.ESA.2024.70},
  annote =	{Keywords: dynamic graphs, data structures, computational geometry, graph drawing, graph algorithms, upward planarity}
}
Document
Towards Communication-Efficient Peer-To-Peer Networks

Authors: Khalid Hourani, William K. Moses Jr., and Gopal Pandurangan


Abstract
We focus on designing Peer-to-Peer (P2P) networks that enable efficient communication. Over the last two decades, there has been substantial algorithmic research on distributed protocols for building P2P networks with various desirable properties such as high expansion, low diameter, and robustness to a large number of deletions. A key underlying theme in all of these works is to distributively build a random graph topology that guarantees the above properties. Moreover, the random connectivity topology is widely deployed in many P2P systems today, including those that implement blockchains and cryptocurrencies. However, a major drawback of using a random graph topology for a P2P network is that the random topology does not respect the underlying (Internet) communication topology. This creates a large propagation delay, which is a major communication bottleneck in modern P2P networks. In this paper, we work towards designing P2P networks that are communication-efficient (having small propagation delay) with provable guarantees. Our main contribution is an efficient, decentralized protocol, Close-Weaver, that transforms a random graph topology embedded in an underlying Euclidean space into a topology that also respects the underlying metric. We then present efficient point-to-point routing and broadcast protocols that achieve essentially optimal performance with respect to the underlying space.

Cite as

Khalid Hourani, William K. Moses Jr., and Gopal Pandurangan. Towards Communication-Efficient Peer-To-Peer Networks. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 71:1-71:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hourani_et_al:LIPIcs.ESA.2024.71,
  author =	{Hourani, Khalid and Moses Jr., William K. and Pandurangan, Gopal},
  title =	{{Towards Communication-Efficient Peer-To-Peer Networks}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{71:1--71:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.71},
  URN =		{urn:nbn:de:0030-drops-211428},
  doi =		{10.4230/LIPIcs.ESA.2024.71},
  annote =	{Keywords: Peer-to-Peer Networks, Overlay Construction Protocol, Expanders, Broadcast, Geometric Routing}
}
Document
Connectivity Oracles for Predictable Vertex Failures

Authors: Bingbing Hu, Evangelos Kosinas, and Adam Polak


Abstract
The problem of designing connectivity oracles supporting vertex failures is one of the basic data structures problems for undirected graphs. It is already well understood: previous works [Duan-Pettie STOC'10; Long-Saranurak FOCS'22] achieve query time linear in the number of failed vertices, and it is conditionally optimal as long as we require preprocessing time polynomial in the size of the graph and update time polynomial in the number of failed vertices. We revisit this problem in the paradigm of algorithms with predictions: we ask if the query time can be improved if the set of failed vertices can be predicted beforehand up to a small number of errors. More specifically, we design a data structure that, given a graph G = (V,E) and a set of vertices predicted to fail D̂ ⊆ V of size d = |D̂|, preprocesses it in time Õ(d|E|) and then can receive an update given as the symmetric difference between the predicted and the actual set of failed vertices D̂△D = (D̂ ⧵ D) ∪ (D ⧵ D̂) of size η = |D̂△D|, process it in time Õ(η⁴), and after that answer connectivity queries in G ⧵ D in time O(η). Viewed from another perspective, our data structure provides an improvement over the state of the art for the fully dynamic subgraph connectivity problem in the sensitivity setting [Henzinger-Neumann ESA'16]. We argue that the preprocessing time and query time of our data structure are conditionally optimal under standard fine-grained complexity assumptions.

Cite as

Bingbing Hu, Evangelos Kosinas, and Adam Polak. Connectivity Oracles for Predictable Vertex Failures. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 72:1-72:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hu_et_al:LIPIcs.ESA.2024.72,
  author =	{Hu, Bingbing and Kosinas, Evangelos and Polak, Adam},
  title =	{{Connectivity Oracles for Predictable Vertex Failures}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{72:1--72:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.72},
  URN =		{urn:nbn:de:0030-drops-211437},
  doi =		{10.4230/LIPIcs.ESA.2024.72},
  annote =	{Keywords: Data structures, graph connectivity, algorithms with predictions}
}
Document
Laminar Matroid Secretary: Greedy Strikes Back

Authors: Zhiyi Huang, Zahra Parsaeian, and Zixuan Zhu


Abstract
We show that a simple greedy algorithm is 4.75-competitive for the Laminar Matroid Secretary Problem, improving the 3√3 ≈ 5.196-competitive algorithm based on the forbidden sets technique (Soto, Turkieltaub, and Verdugo, 2018).

Cite as

Zhiyi Huang, Zahra Parsaeian, and Zixuan Zhu. Laminar Matroid Secretary: Greedy Strikes Back. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 73:1-73:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.ESA.2024.73,
  author =	{Huang, Zhiyi and Parsaeian, Zahra and Zhu, Zixuan},
  title =	{{Laminar Matroid Secretary: Greedy Strikes Back}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{73:1--73:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.73},
  URN =		{urn:nbn:de:0030-drops-211443},
  doi =		{10.4230/LIPIcs.ESA.2024.73},
  annote =	{Keywords: Matroid Secretary, Greedy Algorithm, Laminar Matroid}
}
Document
Improved Approximations for Flexible Network Design

Authors: Dylan Hyatt-Denesik, Afrouz Jabal-Ameli, and Laura Sanità


Abstract
Flexible network design deals with building a network that guarantees some connectivity requirements between its vertices, even when some of its elements (like vertices or edges) fail. In particular, the set of edges (resp. vertices) of a given graph is here partitioned into safe and unsafe. The goal is to identify a minimum-size subgraph that is 2-edge-connected (resp. 2-vertex-connected), and stays so whenever any of the unsafe elements gets removed. In this paper, we provide improved approximation algorithms for flexible network design problems, considering both edge-connectivity and vertex-connectivity, as well as connectivity values higher than 2. For the vertex-connectivity variant, in particular, our algorithm is the first with approximation factor strictly better than 2.

Cite as

Dylan Hyatt-Denesik, Afrouz Jabal-Ameli, and Laura Sanità. Improved Approximations for Flexible Network Design. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 74:1-74:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hyattdenesik_et_al:LIPIcs.ESA.2024.74,
  author =	{Hyatt-Denesik, Dylan and Jabal-Ameli, Afrouz and Sanit\`{a}, Laura},
  title =	{{Improved Approximations for Flexible Network Design}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{74:1--74:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.74},
  URN =		{urn:nbn:de:0030-drops-211456},
  doi =		{10.4230/LIPIcs.ESA.2024.74},
  annote =	{Keywords: Approximation Algorithms, Network Design, Flexible Connectivity}
}
Document
Finding a Maximum Restricted t-Matching via Boolean Edge-CSP

Authors: Yuni Iwamasa, Yusuke Kobayashi, and Kenjiro Takazawa


Abstract
The problem of finding a maximum 2-matching without short cycles has received significant attention due to its relevance to the Hamilton cycle problem. This problem is generalized to finding a maximum t-matching which excludes specified complete t-partite subgraphs, where t is a fixed positive integer. The polynomial solvability of this generalized problem remains an open question. In this paper, we present polynomial-time algorithms for the following two cases of this problem: in the first case the forbidden complete t-partite subgraphs are edge-disjoint; and in the second case the maximum degree of the input graph is at most 2t-1. Our result for the first case extends the previous work of Nam (1994) showing the polynomial solvability of the problem of finding a maximum 2-matching without cycles of length four, in which the forbidden cycles of length four are vertex-disjoint. The second result expands upon the works of Bérczi and Végh (2010) and Kobayashi and Yin (2012), which focused on graphs with maximum degree at most t+1. Our algorithms are obtained from exploiting the discrete structure of restricted t-matchings and employing an algorithm for the Boolean edge-CSP.

Cite as

Yuni Iwamasa, Yusuke Kobayashi, and Kenjiro Takazawa. Finding a Maximum Restricted t-Matching via Boolean Edge-CSP. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 75:1-75:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{iwamasa_et_al:LIPIcs.ESA.2024.75,
  author =	{Iwamasa, Yuni and Kobayashi, Yusuke and Takazawa, Kenjiro},
  title =	{{Finding a Maximum Restricted t-Matching via Boolean Edge-CSP}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{75:1--75:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.75},
  URN =		{urn:nbn:de:0030-drops-211463},
  doi =		{10.4230/LIPIcs.ESA.2024.75},
  annote =	{Keywords: Polynomial algorithm, C\underlinek-free 2-matching, Jump system, Boolean edge-CSP}
}
Document
Steiner Tree Parameterized by Multiway Cut and Even Less

Authors: Bart M.P. Jansen and Céline M.F. Swennenhuis


Abstract
In the Steiner Tree problem we are given an undirected edge-weighted graph as input, along with a set K of vertices called terminals. The task is to output a minimum-weight connected subgraph that spans all the terminals. The famous Dreyfus-Wagner algorithm running in 3^{|K|} poly(n) time shows that the problem is fixed-parameter tractable parameterized by the number of terminals. We present fixed-parameter tractable algorithms for Steiner Tree using structurally smaller parameterizations. Our first result concerns the parameterization by a multiway cut S of the terminals, which is a vertex set S (possibly containing terminals) such that each connected component of G-S contains at most one terminal. We show that Steiner Tree can be solved in 2^{𝒪(|S| log |S|)} poly(n) time and polynomial space, where S is a minimum multiway cut for K. The algorithm is based on the insight that, after guessing how an optimal Steiner tree interacts with a multiway cut S, computing a minimum-cost solution of this type can be formulated as minimum-cost bipartite matching. Our second result concerns a new hybrid parameterization called K-free treewidth that simultaneously refines the number of terminals |K| and the treewidth of the input graph. By utilizing recent work on ℋ-Treewidth to find a corresponding decomposition of the graph, we give an algorithm that solves Steiner Tree in time 2^{𝒪(k)} poly(n), where k denotes the K-free treewidth of the input graph. To obtain this running time, we show how the rank-based approach for solving Steiner Tree parameterized by treewidth can be extended to work in the setting of K-free treewidth, by exploiting existing algorithms parameterized by |K| to compute the table entries of leaf bags of a tree K-free decomposition.
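
For reference, a minimal sketch of the classical Dreyfus-Wagner dynamic program cited above (standard background, not the paper's new parameterization): dp[S][v] is the cheapest tree connecting terminal subset S together with vertex v; subsets are merged at a common vertex and then relaxed along shortest paths.

import heapq

def steiner_tree_cost(n, edges, terminals):
    """Dreyfus-Wagner: n vertices 0..n-1, edges = [(u, v, weight)],
    terminals a list of vertices. Returns the optimal Steiner tree weight."""
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    k = len(terminals)
    full = (1 << k) - 1
    dp = [[INF] * n for _ in range(full + 1)]
    for i, t in enumerate(terminals):
        dp[1 << i][t] = 0
    for S in range(1, full + 1):
        # Merge: two subtrees for complementary subsets meeting at v.
        sub = (S - 1) & S
        while sub:
            for v in range(n):
                if dp[sub][v] + dp[S ^ sub][v] < dp[S][v]:
                    dp[S][v] = dp[sub][v] + dp[S ^ sub][v]
            sub = (sub - 1) & S
        # Grow: relax dp[S] along shortest paths (one Dijkstra pass).
        pq = [(d, v) for v, d in enumerate(dp[S]) if d < INF]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dp[S][u]:
                continue
            for v, w in adj[u]:
                if d + w < dp[S][v]:
                    dp[S][v] = d + w
                    heapq.heappush(pq, (d + w, v))
    return min(dp[full])

# Star: terminals 0, 1, 2 all attach to centre 3 by unit edges; cost 3.
print(steiner_tree_cost(4, [(0, 3, 1), (1, 3, 1), (2, 3, 1)], [0, 1, 2]))  # -> 3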

Cite as

Bart M.P. Jansen and Céline M.F. Swennenhuis. Steiner Tree Parameterized by Multiway Cut and Even Less. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 76:1-76:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ESA.2024.76,
  author =	{Jansen, Bart M.P. and Swennenhuis, C\'{e}line M.F.},
  title =	{{Steiner Tree Parameterized by Multiway Cut and Even Less}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{76:1--76:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.76},
  URN =		{urn:nbn:de:0030-drops-211471},
  doi =		{10.4230/LIPIcs.ESA.2024.76},
  annote =	{Keywords: fixed-parameter tractability, Steiner Tree, structural parameterization, H-treewidth}
}
Document
Near-Linear Algorithms for Visibility Graphs over a 1.5-Dimensional Terrain

Authors: Matthew J. Katz, Rachel Saban, and Micha Sharir


Abstract
We present several near-linear algorithms for problems involving visibility over a 1.5-dimensional terrain. Concretely, we have a 1.5-dimensional terrain T, i.e., a bounded x-monotone polygonal path in the plane, with n vertices, and a set P of m points that lie on or above T. The visibility graph VG(P,T) is the graph with P as its vertex set and {(p,q) | p and q are visible to each other} as its edge set. We present algorithms that perform BFS and DFS on VG(P,T), which run in O(n log n + m log³(m+n)) time. We also consider three optimization problems, in which P is a set of points on T, and we erect a vertical tower of height h at each p ∈ P. In the first problem, called the reverse shortest path problem, we are given two points s, t ∈ P, and an integer k, and wish to find the smallest height h^* for which VG(P(h^*),T) contains a path from s to t of at most k edges, where P(h^*) is the set of the tips of the towers of height h^* erected at the points of P. In the second problem we wish to find the smallest height h^* for which VG(P(h^*),T) contains a cycle, and in the third problem we wish to find the smallest height h^* for which VG(P(h^*),T) is nonempty; we refer to that problem as "Seeing the most without being seen". We present algorithms for the first two problems that run in O^*((m+n)^{6/5}) time, where the O^*(⋅) notation hides subpolynomial factors. The third problem can be solved by a faster algorithm, which runs in O((n+m) log³(m+n)) time.
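
A minimal sketch of the basic predicate defining VG(P,T) (assuming, as in the problem statement, that points lie on or above the terrain): since the segment pq and the terrain are both piecewise linear, p and q see each other iff pq is at or above the terrain at every terrain vertex strictly between them.

import bisect

def visible(terrain, p, q):
    """terrain: list of (x, y) vertices with strictly increasing x; p, q lie
    on or above the terrain. Grazing contact counts as visible."""
    (px, py), (qx, qy) = sorted([p, q])
    if px == qx:
        return True                       # no terrain vertex strictly between

    def seg_y(x):                         # height of segment pq at abscissa x
        return py + (qy - py) * (x - px) / (qx - px)

    xs = [x for x, _ in terrain]
    lo = bisect.bisect_right(xs, px)      # first vertex strictly right of p
    hi = bisect.bisect_left(xs, qx)       # first vertex at or right of q
    return all(seg_y(terrain[i][0]) >= terrain[i][1] for i in range(lo, hi))

T = [(0, 0), (2, 3), (4, 0)]              # one peak at (2, 3)
print(visible(T, (0, 0), (4, 0)))         # -> False: the peak blocks the view
print(visible(T, (0, 4), (4, 4)))         # -> True: the segment clears it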

Cite as

Matthew J. Katz, Rachel Saban, and Micha Sharir. Near-Linear Algorithms for Visibility Graphs over a 1.5-Dimensional Terrain. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 77:1-77:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{katz_et_al:LIPIcs.ESA.2024.77,
  author =	{Katz, Matthew J. and Saban, Rachel and Sharir, Micha},
  title =	{{Near-Linear Algorithms for Visibility Graphs over a 1.5-Dimensional Terrain}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{77:1--77:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.77},
  URN =		{urn:nbn:de:0030-drops-211482},
  doi =		{10.4230/LIPIcs.ESA.2024.77},
  annote =	{Keywords: 1.5-dimensional terrain, visibility, visibility graph, reverse shortest path, parametric search, shrink-and-bifurcate, range searching}
}
Document
Re²Pair: Increasing the Scalability of RePair by Decreasing Memory Usage

Authors: Justin Kim, Rahul Varki, Marco Oliva, and Christina Boucher


Abstract
The RePair compression algorithm produces a context-free grammar by iteratively substituting the most frequently occurring pair of consecutive symbols with a new symbol until all consecutive pairs of symbols appear only once in the compressed text. It is widely used in bioinformatics, machine learning, and information retrieval, where random access to the original input text is needed. For example, in pangenomics, RePair is used for random access to a population of genomes. BigRePair improves the scalability of the original RePair algorithm by using Prefix-Free Parsing (PFP) to preprocess the text prior to building the RePair grammar. Despite the efficiency of PFP on repetitive text, there is a scalability issue with the size of the parse, which causes a memory bottleneck in BigRePair. In this paper, we design and implement recursive RePair (denoted as Re²Pair), which builds the RePair grammar using recursive PFP. Our novel algorithm faces the challenge of constructing the RePair grammar without direct access to the parse of the text, relying solely on the dictionary of the text and the parse and dictionary of the parse of the text. We compare Re²Pair to BigRePair using SARS-CoV-2 haplotypes and haplotypes from the 1000 Genomes Project. We show that Re²Pair achieves over a 40% peak memory reduction and a speedup ranging from 12% to 79% compared to BigRePair when compressing the largest input texts in all experiments. Re²Pair is made publicly available under the GNU public license here: https://github.com/jkim210/Recursive-RePair
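
A minimal sketch of plain RePair as described in the first sentence above (a naive quadratic-time toy, not BigRePair or Re²Pair): repeatedly replace the most frequent pair of adjacent symbols with a fresh nonterminal until every pair occurs at most once.

from collections import Counter

def repair(text):
    """Naive RePair: returns (final sequence, grammar rules)."""
    seq = list(text)
    rules, next_id = {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                      # every pair now occurs at most once
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):               # left-to-right, non-overlapping
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

print(repair("abababab"))
# -> (['R1', 'R1'], {'R0': ('a', 'b'), 'R1': ('R0', 'R0')})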

Cite as

Justin Kim, Rahul Varki, Marco Oliva, and Christina Boucher. Re²Pair: Increasing the Scalability of RePair by Decreasing Memory Usage. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 78:1-78:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ESA.2024.78,
  author =	{Kim, Justin and Varki, Rahul and Oliva, Marco and Boucher, Christina},
  title =	{{Re²Pair: Increasing the Scalability of RePair by Decreasing Memory Usage}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{78:1--78:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.78},
  URN =		{urn:nbn:de:0030-drops-211496},
  doi =		{10.4230/LIPIcs.ESA.2024.78},
  annote =	{Keywords: RePair, Compressed Data Structures, Prefix-free Parsing}
}
Document
Giving Some Slack: Shortcuts and Transitive Closure Compressions

Authors: Shimon Kogan and Merav Parter


Abstract
We consider the fundamental problems of reachability shortcuts and compression schemes of the transitive closure (TC) of n-vertex directed acyclic graphs (DAGs) G when we are allowed to neglect the distance (or reachability) constraints for an ε fraction of the pairs in the transitive closure of G, denoted by TC(G). Shortcuts with Slack. For a directed graph G = (V,E), a d-reachability shortcut is a set of edges H ⊆ TC(G), whose addition decreases the directed diameter of G to at most d. We introduce the notion of shortcuts with slack, which provide the desired distance bound d for all but a small fraction ε of the vertex pairs in TC(G). For ε ∈ (0,1), a (d,ε)-shortcut H ⊆ TC(G) is a subset of edges with the property that dist_{G ∪ H}(u,v) ≤ d for at least a (1-ε) fraction of the (u,v) pairs in TC(G). Our constructions hold for any DAG G and their size bounds are parameterized by the width of the graph G, defined as the smallest number of directed paths in G that cover all vertices of G. - For every ε ∈ (0,1] and integer d ≥ 5, every n-vertex DAG G of width ω admits a (d,ε)-shortcut of size Õ(ω²/(εd) + n). A more delicate construction yields a (3,ε)-shortcut of size Õ(ω²/(εd) + n/ε), hence of linear size for ω ≤ √n. We show that without slack (i.e., for ε = 0), graphs with ω ≤ √n cannot be shortcut to diameter below n^{1/6} using a linear number of shortcut edges. - There exists an n-vertex DAG G for which any (3, ε = 1/2^{√{log ω}})-shortcut set has Ω(ω²/2^{√{log ω}} + n) edges. Hence, for d = Õ(1), our constructions are almost optimal. Approximate TC Representations. A key application of our shortcut constructions is a (1-ε)-approximate all-successors data structure which, given a vertex v, reports a list containing a (1-ε) fraction of the successors of v in the graph. We present an Õ(ω²/ε + n)-space data structure with a near linear (in the output size) query time. Using connections to Error Correcting Codes, we also present a near-matching space lower bound of Ω(ω² + n) bits (regardless of the query time) for constant ε. This improves upon the state-of-the-art space bounds of O(ω ⋅ n) for ε = 0 by the prior work of Jagadish [ACM Trans. Database Syst., 1990].

Cite as

Shimon Kogan and Merav Parter. Giving Some Slack: Shortcuts and Transitive Closure Compressions. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 79:1-79:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kogan_et_al:LIPIcs.ESA.2024.79,
  author =	{Kogan, Shimon and Parter, Merav},
  title =	{{Giving Some Slack: Shortcuts and Transitive Closure Compressions}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{79:1--79:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.79},
  URN =		{urn:nbn:de:0030-drops-211509},
  doi =		{10.4230/LIPIcs.ESA.2024.79},
  annote =	{Keywords: Reachability Shortcuts, Width, DAG}
}
Document
The Algorithmic Power of the Greene-Kleitman Theorem

Authors: Shimon Kogan and Merav Parter


Abstract
For a given n-vertex DAG G = (V,E) with transitive closure TC(G), a chain is a directed path in TC(G) and an antichain is an independent set in TC(G). The maximum k-antichain problem asks for computing the maximum k-colorable subgraph of the transitive closure. The related maximum h-chains problem asks for computing h disjoint chains (i.e., cliques in TC(G)) of largest total length. The celebrated Greene-Kleitman (GK) theorem [J. of Comb. Theory, 1976] demonstrates the (combinatorial) connections between these two problems. In this work we translate the combinatorial properties implied by the GK theorem into time-efficient covering algorithms. In contrast to prior results, our algorithms are applied directly to G and do not require the precomputation of its transitive closure. Let α_k(G) be the maximum number of vertices that can be covered by k antichains. We show:
- For every n-vertex m-edge DAG G = (V,E), one can compute at most (2k-1) disjoint antichains that cover α_k(G) vertices in time m^{1+o(1)} (hence, independent of k). This extends the recent m^{1+o(1)}-time Maximum-Antichain algorithm (where k = 1) by [Cáceres et al., SODA 2022] to any value of k.
- For every n-vertex m-edge Partially-Ordered-Set (poset) P = (V,E), one can compute (1+ε)k disjoint antichains that cover α_k(P) vertices in time O(√m ⋅ α_k(P) ⋅ n^{o(1)}/ε), hence at most n^{2+o(1)}/ε. This improves over the exact O(n³)-time solution of [Gavril, Networks 1987], at the cost of producing (1+ε)k antichains instead of exactly k.
The heart of our approach is a linear-time greedy-like algorithm that translates suitable chain collections 𝒞 into a parallel set of antichains 𝒜, in which |C_j ∩ A_i| = 1 for every C_j ∈ 𝒞 and A_i ∈ 𝒜. The correctness of this approach is underpinned by the GK theorem.
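
For intuition about the quantity α_k, the following brute-force Python sketch of ours (illustration only, feasible for tiny posets) computes α_k directly from its definition: the maximum number of vertices covered by k disjoint antichains.

from itertools import combinations

def alpha_k(n, edges, k):
    # Brute force for tiny DAGs on vertices 0..n-1; u,v are comparable
    # iff one reaches the other in the transitive closure.
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for w in range(n):                      # transitive closure
        for u in range(n):
            for v in range(n):
                reach[u][v] |= reach[u][w] and reach[w][v]
    def is_antichain(s):
        return all(not (reach[u][v] or reach[v][u])
                   for u, v in combinations(s, 2))
    antichains = [frozenset(s) for r in range(1, n + 1)
                  for s in combinations(range(n), r) if is_antichain(s)]
    def best(avail, j):
        if j == 0:
            return 0
        return max([len(a) + best(avail - a, j - 1)
                    for a in antichains if a <= avail] + [0])
    return best(frozenset(range(n)), k)

# Two chains 0<1<2 and 3<4: alpha_1 = 2, alpha_2 = 4 (each antichain
# can take at most one vertex per chain).
print(alpha_k(5, [(0, 1), (1, 2), (3, 4)], 1),
      alpha_k(5, [(0, 1), (1, 2), (3, 4)], 2))  # 2 4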

Cite as

Shimon Kogan and Merav Parter. The Algorithmic Power of the Greene-Kleitman Theorem. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kogan_et_al:LIPIcs.ESA.2024.80,
  author =	{Kogan, Shimon and Parter, Merav},
  title =	{{The Algorithmic Power of the Greene-Kleitman Theorem}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.80},
  URN =		{urn:nbn:de:0030-drops-211512},
  doi =		{10.4230/LIPIcs.ESA.2024.80},
  annote =	{Keywords: Chains, Antichains, DAG}
}
Document
Edge-Coloring Sparse Graphs with Δ Colors in Quasilinear Time

Authors: Łukasz Kowalik


Abstract
In this paper we show that every graph G of bounded maximum average degree mad(G) and with maximum degree Δ can be edge-colored using the optimal number of Δ colors in quasilinear time, whenever Δ ≥ 2mad(G). The maximum average degree is within a multiplicative constant of other popular graph sparsity parameters like arboricity, degeneracy or maximum density. Our algorithm extends previous results of Chrobak and Nishizeki [Marek Chrobak and Takao Nishizeki, 1990] and Bhattacharya, Costa, Panski and Solomon [Sayan Bhattacharya et al., 2023].
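As context for the sparsity parameters mentioned above, here is a short sketch of ours (not the paper's coloring algorithm) computing the degeneracy by classic min-degree peeling; the degeneracy d brackets the maximum average degree via d ≤ mad(G) ≤ 2d.

def degeneracy(adj):
    # Min-degree peeling; adj: {v: set(neighbors)} of a simple graph.
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    d = 0
    while alive:
        v = min(alive, key=deg.get)   # O(n) selection; fine for a sketch
        d = max(d, deg[v])
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
    return d

# A 5-cycle has degeneracy 2 and mad exactly 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(degeneracy(cycle))  # 2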

Cite as

Łukasz Kowalik. Edge-Coloring Sparse Graphs with Δ Colors in Quasilinear Time. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 81:1-81:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kowalik:LIPIcs.ESA.2024.81,
  author =	{Kowalik, {\L}ukasz},
  title =	{{Edge-Coloring Sparse Graphs with \Delta Colors in Quasilinear Time}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{81:1--81:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.81},
  URN =		{urn:nbn:de:0030-drops-211523},
  doi =		{10.4230/LIPIcs.ESA.2024.81},
  annote =	{Keywords: edge coloring, algorithm, sparse, graph, quasilinear}
}
Document
A Lower Bound for Local Search Proportional Approval Voting

Authors: Sonja Kraiczy and Edith Elkind


Abstract
Selecting k out of m items based on the preferences of n heterogeneous agents is a widely studied problem in algorithmic game theory. If agents have approval preferences over individual items and harmonic utility functions over bundles - an agent receives ∑_{j=1}^{t} 1/j utility if t of her approved items are selected - then welfare optimisation is captured by a voting rule known as Proportional Approval Voting (PAV). PAV also satisfies demanding fairness axioms. However, finding a winning set of items under PAV is NP-hard. In search of a tractable method with strong fairness guarantees, a bounded local search version of PAV was proposed [Aziz et al., 2018]. It proceeds by starting with an arbitrary size-k set W and, at each step, checking if there is a pair of candidates a ∈ W, b ∉ W such that swapping a and b increases the total welfare by at least ε; if yes, it performs the swap. Aziz et al. show that setting ε = n/k² ensures both the desired fairness guarantees and polynomial running time. However, they leave it open whether the algorithm converges in polynomial time if ε is very small (in particular, if we do not stop until there are no welfare-improving swaps). We resolve this open question by showing that if ε can be arbitrarily small, the running time of this algorithm may be super-polynomial. Specifically, we prove a lower bound of Ω(k^{log k}) if improvements are chosen lexicographically. To complement our lower bound, we provide an empirical comparison of two variants of local search - better-response and best-response - on several real-life data sets and a variety of synthetic data sets. Our experiments indicate that, empirically, better-response exhibits faster running time than best-response.
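
A minimal Python sketch of ours of the objects involved (illustrative, not the paper's constructions): the PAV score of a committee and one better-response swap step of the bounded local search.

def pav_score(ballots, committee):
    # ballots: list of sets of approved candidates; harmonic utility.
    total = 0.0
    for ballot in ballots:
        t = len(ballot & committee)
        total += sum(1.0 / j for j in range(1, t + 1))
    return total

def better_response_step(ballots, candidates, W, eps):
    # Perform the first swap improving the PAV score by at least eps.
    base = pav_score(ballots, W)
    for a in sorted(W):
        for b in sorted(candidates - W):
            W2 = (W - {a}) | {b}
            if pav_score(ballots, W2) >= base + eps:
                return W2
    return None  # no eps-improving swap: W is locally optimal

ballots = [{0, 1}, {0, 1}, {2}, {2}, {3}]
print(better_response_step(ballots, {0, 1, 2, 3}, {0, 1}, eps=0.1))
# {1, 2}: swapping 0 for 2 raises the PAV score from 3.0 to 4.0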

Cite as

Sonja Kraiczy and Edith Elkind. A Lower Bound for Local Search Proportional Approval Voting. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 82:1-82:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kraiczy_et_al:LIPIcs.ESA.2024.82,
  author =	{Kraiczy, Sonja and Elkind, Edith},
  title =	{{A Lower Bound for Local Search Proportional Approval Voting}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{82:1--82:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.82},
  URN =		{urn:nbn:de:0030-drops-211538},
  doi =		{10.4230/LIPIcs.ESA.2024.82},
  annote =	{Keywords: Computational Social Choice, Committee Elections, Local Search, Fairness}
}
Document
Scalable Distributed String Sorting

Authors: Florian Kurpicz, Pascal Mehnert, Peter Sanders, and Matthias Schimek


Abstract
String sorting is an important part of tasks such as building index data structures. Unfortunately, current string sorting algorithms do not scale to massively parallel distributed-memory machines since they either have latency (at least) proportional to the number of processors p or communicate the data a large number of times (at least logarithmic). We present practical and efficient algorithms for distributed-memory string sorting that scale to large p. Similar to state-of-the-art sorters for atomic objects, the algorithms have latency of about p^{1/k} when allowing the data to be communicated k times. Experiments indicate good scaling behavior on a wide range of inputs on up to 49152 cores. Overall, we achieve speedups of up to 4.9 over the current state-of-the-art distributed string sorting algorithms.
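As a toy illustration of the splitter-selection step common to sample-sort-style distributed sorters (our simplified single-process simulation, not the paper's multi-level algorithm), the sketch below samples strings from each "processor", picks splitters, and routes strings into globally ordered buckets.

import bisect
import random

def choose_splitters(local_data, p, oversample=4):
    # Each of the p processors contributes a small random sample; the
    # union is sorted and p-1 evenly spaced splitters are picked.
    sample = []
    for strings in local_data:
        sample.extend(random.sample(strings, min(len(strings), oversample)))
    sample.sort()
    step = max(1, len(sample) // p)
    return sample[step::step][:p - 1]

def route(local_data, splitters):
    # Assign every string to its bucket; concatenating the sorted
    # buckets yields the globally sorted sequence.
    buckets = [[] for _ in range(len(splitters) + 1)]
    for strings in local_data:
        for s in strings:
            buckets[bisect.bisect_right(splitters, s)].append(s)
    return [sorted(b) for b in buckets]

random.seed(1)
data = [["banana", "apple"], ["cherry", "fig"], ["date", "elder"]]
print(route(data, choose_splitters(data, p=3)))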

Cite as

Florian Kurpicz, Pascal Mehnert, Peter Sanders, and Matthias Schimek. Scalable Distributed String Sorting. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kurpicz_et_al:LIPIcs.ESA.2024.83,
  author =	{Kurpicz, Florian and Mehnert, Pascal and Sanders, Peter and Schimek, Matthias},
  title =	{{Scalable Distributed String Sorting}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.83},
  URN =		{urn:nbn:de:0030-drops-211541},
  doi =		{10.4230/LIPIcs.ESA.2024.83},
  annote =	{Keywords: sorting, strings, distributed-memory computing, distributed membership filters, scalability}
}
Document
Insights into (k, ρ)-Shortcutting Algorithms

Authors: Alexander Leonhardt, Ulrich Meyer, and Manuel Penschuck


Abstract
A graph is called a (k, ρ)-graph iff every node can reach ρ of its nearest neighbors in at most k hops. This property has proven useful in the analysis and design of parallel shortest-path algorithms [Blelloch et al., 2016; Dong et al., 2021]. Any graph can be transformed into a (k, ρ)-graph by adding shortcuts. Formally, the (k,ρ)-Minimum-Shortcut-Problem (kρ-MSP) asks to find an appropriate shortcut set of minimum cardinality. We show that kρ-MSP is NP-complete in the practical regime of k ≥ 3 and ρ = Θ(n^ε) for ε > 0. With a related construction, we bound the approximation factor of known kρ-MSP heuristics [Blelloch et al., 2016] from below and propose algorithmic countermeasures that improve the approximation quality. Further, we describe an integer linear program (ILP) that solves kρ-MSP optimally. Finally, we compare the practical performance and quality of all algorithms empirically.
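
A simplified checker of ours for the (k, ρ)-property, restricted to the unweighted case where the ρ nearest neighbors are simply the closest nodes in BFS order (an illustration of the definition, not part of the paper's algorithms):

from collections import deque

def is_k_rho(adj, k, rho):
    # Unweighted interpretation: every node must reach at least rho
    # other nodes within at most k hops.
    for s in adj:
        seen = {s}
        q = deque([(s, 0)])
        within_k = 0
        while q:
            u, d = q.popleft()
            if d == k:
                continue
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    within_k += 1
                    q.append((v, d + 1))
        if within_k < rho:
            return False
    return True

# Directed 6-cycle: each node reaches exactly 2 others within 2 hops.
cyc = {i: [(i + 1) % 6] for i in range(6)}
print(is_k_rho(cyc, k=2, rho=2), is_k_rho(cyc, k=2, rho=3))  # True False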

Cite as

Alexander Leonhardt, Ulrich Meyer, and Manuel Penschuck. Insights into (k, ρ)-Shortcutting Algorithms. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 84:1-84:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{leonhardt_et_al:LIPIcs.ESA.2024.84,
  author =	{Leonhardt, Alexander and Meyer, Ulrich and Penschuck, Manuel},
  title =	{{Insights into (k, \rho)-Shortcutting Algorithms}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{84:1--84:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.84},
  URN =		{urn:nbn:de:0030-drops-211554},
  doi =		{10.4230/LIPIcs.ESA.2024.84},
  annote =	{Keywords: Complexity, Approximation, Optimal algorithms, Parallel shortest path}
}
Document
Tree Decompositions Meet Induced Matchings: Beyond Max Weight Independent Set

Authors: Paloma T. Lima, Martin Milanič, Peter Muršič, Karolina Okrasa, Paweł Rzążewski, and Kenny Štorgel


Abstract
For a tree decomposition 𝒯 of a graph G, by μ(𝒯) we denote the size of a largest induced matching in G all of whose edges intersect one bag of 𝒯. The induced matching treewidth of a graph G is the minimum value of μ(𝒯) over all tree decompositions 𝒯 of G. Yolov [SODA 2018] proved that for graphs of bounded induced matching treewidth, tree decompositions with bounded μ(𝒯) can be computed in polynomial time and Max Weight Independent Set can be solved in polynomial time. In this paper we explore what other problems are tractable in such classes of graphs. As our main result, we give a polynomial-time algorithm for Min Weight Feedback Vertex Set. We also provide some positive results concerning packing induced subgraphs, which in particular imply a PTAS for the problem of finding a largest induced subgraph of bounded treewidth. These results suggest that in graphs of bounded induced matching treewidth, one could find in polynomial time a maximum-weight induced subgraph of bounded treewidth satisfying a given CMSO₂ formula. We conjecture that such a result indeed holds and prove it for graphs of bounded tree-independence number, which form a rich and important family of subclasses of graphs of bounded induced matching treewidth. We complement these algorithmic results with a number of complexity and structural results concerning induced matching treewidth, including a linear relation to treewidth for graphs with bounded degree.
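To pin down the parameter, here is a brute-force Python sketch of ours computing μ(𝒯) for a tiny graph from a given list of bags (only the bags matter for evaluating μ; illustration only, exponential time):

from itertools import combinations

def mu(edges, bags):
    # mu(T): max over bags B of the size of a largest induced matching
    # of G all of whose edges intersect B.
    E = {frozenset(e) for e in edges}
    def is_induced_matching(M):
        verts = set()
        for e in M:
            if e & verts:
                return False          # shared endpoint
            verts |= e
        for e1, e2 in combinations(M, 2):
            if any(frozenset((u, v)) in E for u in e1 for v in e2):
                return False          # an edge joins two matching edges
        return True
    best = 0
    for B in bags:
        touching = [e for e in E if e & set(B)]
        for r in range(1, len(touching) + 1):
            if any(is_induced_matching(M) for M in combinations(touching, r)):
                best = max(best, r)
    return best

# Path 0-1-2-3-4 with its natural path decomposition: mu = 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
bags = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]
print(mu(edges, bags))  # 1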

Cite as

Paloma T. Lima, Martin Milanič, Peter Muršič, Karolina Okrasa, Paweł Rzążewski, and Kenny Štorgel. Tree Decompositions Meet Induced Matchings: Beyond Max Weight Independent Set. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 85:1-85:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{lima_et_al:LIPIcs.ESA.2024.85,
  author =	{Lima, Paloma T. and Milani\v{c}, Martin and Mur\v{s}i\v{c}, Peter and Okrasa, Karolina and Rz\k{a}\.{z}ewski, Pawe{\l} and \v{S}torgel, Kenny},
  title =	{{Tree Decompositions Meet Induced Matchings: Beyond Max Weight Independent Set}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{85:1--85:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.85},
  URN =		{urn:nbn:de:0030-drops-211569},
  doi =		{10.4230/LIPIcs.ESA.2024.85},
  annote =	{Keywords: induced matching treewidth, tree-independence number, feedback vertex set, induced packing, algorithmic meta-theorem}
}
Document
A Textbook Solution for Dynamic Strings

Authors: Zsuzsanna Lipták, Francesco Masillo, and Gonzalo Navarro


Abstract
We consider the problem of maintaining a collection of strings while efficiently supporting splits and concatenations on them, as well as comparing two substrings and computing the longest common prefix of two suffixes. This problem can be solved in optimal time O(log N) whp for the updates and O(1) worst-case time for the queries, where N is the total collection size [Gawrychowski et al., SODA 2018]. We present here a much simpler solution based on a forest of enhanced splay trees (FeST), where both the updates and the substring comparisons take O(log n) amortized time, n being the length of the strings involved. The longest common prefix of length 𝓁 is computed in O(log n + log²𝓁) amortized time. Our query results are correct whp. Our simpler solution enables other more general updates in O(log n) amortized time, such as reversing a substring and/or mapping its symbols. We can also regard substrings as circular, or as their omega extension.
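
To fix the semantics of the supported operations, here is a naive Python reference of ours (the FeST achieves these in O(log n) amortized; this is only a specification, not the data structure):

def split(s, i):            return s[:i], s[i:]
def concat(s, t):           return s + t
def compare(s, i, t, j, l):
    # -1/0/1 comparison of the length-l substrings s[i:i+l] and t[j:j+l]
    return (s[i:i+l] > t[j:j+l]) - (s[i:i+l] < t[j:j+l])
def lcp(s, i, t, j):
    # longest common prefix of the suffixes s[i:] and t[j:]
    l = 0
    while i + l < len(s) and j + l < len(t) and s[i + l] == t[j + l]:
        l += 1
    return l

s = "abracadabra"
left, right = split(s, 4)        # "abra", "cadabra"
print(concat(right, left))       # "cadabraabra" (a rotation)
print(lcp(s, 0, s, 7))           # suffixes "abracadabra" vs "abra" -> 4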

Cite as

Zsuzsanna Lipták, Francesco Masillo, and Gonzalo Navarro. A Textbook Solution for Dynamic Strings. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 86:1-86:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{liptak_et_al:LIPIcs.ESA.2024.86,
  author =	{Lipt\'{a}k, Zsuzsanna and Masillo, Francesco and Navarro, Gonzalo},
  title =	{{A Textbook Solution for Dynamic Strings}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{86:1--86:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.86},
  URN =		{urn:nbn:de:0030-drops-211576},
  doi =		{10.4230/LIPIcs.ESA.2024.86},
  annote =	{Keywords: dynamic strings, splay trees, dynamic data structures, LCP, circular strings}
}
Document
Parameterized Dynamic Data Structure for Split Completion

Authors: Konrad Majewski, Michał Pilipczuk, and Anna Zych-Pawlewicz


Abstract
We design a randomized data structure that, for a fully dynamic graph G updated by edge insertions and deletions and integers k, d fixed upon initialization, maintains the answer to the Split Completion problem: whether one can add k edges to G to obtain a split graph. The data structure can be initialized on an edgeless n-vertex graph in time n ⋅ (k d ⋅ log n)^{𝒪(1)}, and the amortized time complexity of an update is 5^k ⋅ (k d ⋅ log n)^{𝒪(1)}. The answer provided by the data structure is correct with probability 1-𝒪(n^{-d}).
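For reference, the static problem the data structure answers can be stated in a few lines of brute-force Python (our illustration for tiny graphs; the paper's contribution is maintaining the answer dynamically):

from itertools import combinations

def is_split(n, edges):
    # G is split iff V partitions into a clique and an independent set.
    E = {frozenset(e) for e in edges}
    for r in range(n + 1):
        for K in combinations(range(n), r):
            clique = all(frozenset((u, v)) in E
                         for u, v in combinations(K, 2))
            rest = [v for v in range(n) if v not in set(K)]
            indep = all(frozenset((u, v)) not in E
                        for u, v in combinations(rest, 2))
            if clique and indep:
                return True
    return False

def split_completion(n, edges, k):
    # Can we add at most k edges to make G a split graph?
    E = {frozenset(e) for e in edges}
    non_edges = [frozenset((u, v)) for u, v in combinations(range(n), 2)
                 if frozenset((u, v)) not in E]
    return any(is_split(n, list(E) + list(extra))
               for j in range(k + 1)
               for extra in combinations(non_edges, j))

# The 4-cycle is not split, but adding one chord makes it split.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_split(4, c4), split_completion(4, c4, 1))  # False True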

Cite as

Konrad Majewski, Michał Pilipczuk, and Anna Zych-Pawlewicz. Parameterized Dynamic Data Structure for Split Completion. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 87:1-87:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{majewski_et_al:LIPIcs.ESA.2024.87,
  author =	{Majewski, Konrad and Pilipczuk, Micha{\l} and Zych-Pawlewicz, Anna},
  title =	{{Parameterized Dynamic Data Structure for Split Completion}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{87:1--87:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.87},
  URN =		{urn:nbn:de:0030-drops-211587},
  doi =		{10.4230/LIPIcs.ESA.2024.87},
  annote =	{Keywords: parameterized complexity, dynamic data structures, split graphs}
}
Document
Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion

Authors: Samuel McCauley


Abstract
Approximate nearest neighbor search (ANN) data structures have widespread applications in machine learning, computational biology, and text processing. The goal of ANN is to preprocess a set S so that, given a query q, we can find a point y whose distance from q approximates the smallest distance from q to any point in S. For most distance functions, the best-known ANN bounds for high-dimensional point sets are obtained using techniques based on locality-sensitive hashing (LSH). Unfortunately, space efficiency is a major challenge for LSH-based data structures. Classic LSH techniques require a very large amount of space, oftentimes polynomial in |S|. A long line of work has developed intricate techniques to reduce this space usage, but these techniques suffer from downsides: they must be hand-tailored to each specific LSH, are often complicated, and their space reduction comes at the cost of significantly increased query times. In this paper we explore a new way to improve the space efficiency of LSH using function inversion techniques, originally developed in (Fiat and Naor 2000). We begin by describing how function inversion can be used to improve LSH data structures. This gives a fairly simple, black box method to reduce LSH space usage. Then, we give a data structure that leverages function inversion to improve the query time of the best known near-linear space data structure for approximate nearest neighbor search under Euclidean distance: the ALRW data structure of (Andoni, Laarhoven, Razenshteyn, and Waingarten 2017). ALRW was previously shown to be optimal among "list-of-points" data structures for both Euclidean and Manhattan ANN; thus, in addition to giving improved bounds, our results imply that list-of-points data structures are not optimal for Euclidean or Manhattan ANN.
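
For readers unfamiliar with LSH, here is a generic bit-sampling scheme for Hamming distance in Python (our background illustration; neither the ALRW structure nor the function-inversion technique of the paper):

import random

def build_tables(points, dim, num_tables=8, bits_per_table=6):
    # Each table hashes a point by its values on a few random
    # coordinates, so nearby points collide in some table whp.
    tables = []
    for _ in range(num_tables):
        coords = random.sample(range(dim), bits_per_table)
        buckets = {}
        for p in points:
            buckets.setdefault(tuple(p[c] for c in coords), []).append(p)
        tables.append((coords, buckets))
    return tables

def query(tables, q):
    cands = set()
    for coords, buckets in tables:
        cands.update(buckets.get(tuple(q[c] for c in coords), []))
    return cands  # verify true distances among these candidates only

random.seed(0)
pts = [tuple(random.randint(0, 1) for _ in range(16)) for _ in range(100)]
tables = build_tables(pts, dim=16)
q = pts[0][:15] + (1 - pts[0][15],)   # Hamming distance 1 from pts[0]
print(pts[0] in query(tables, q))     # True with high probability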

Cite as

Samuel McCauley. Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 88:1-88:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{mccauley:LIPIcs.ESA.2024.88,
  author =	{McCauley, Samuel},
  title =	{{Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{88:1--88:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.88},
  URN =		{urn:nbn:de:0030-drops-211590},
  doi =		{10.4230/LIPIcs.ESA.2024.88},
  annote =	{Keywords: similarity search, locality-sensitive hashing, randomized algorithms, data structures, space efficiency, function inversion}
}
Document
Locally Computing Edge Orientations

Authors: Slobodan Mitrović, Ronitt Rubinfeld, and Mihir Singhal


Abstract
We consider the question of orienting the edges in a graph G such that every vertex has bounded out-degree. For graphs of arboricity α, there is an orientation in which every vertex has out-degree at most α and, moreover, the best possible maximum out-degree of an orientation is at least α - 1. We are thus interested in algorithms that can achieve a maximum out-degree close to α. A widely studied approach for this problem in the distributed algorithms setting is a "peeling algorithm" that provides an orientation with maximum out-degree (2+ε)α in a logarithmic number of iterations. We consider this problem in the local computation algorithm (LCA) model, which quickly answers queries of the form "What is the orientation of edge (u,v)?" by probing the input graph. When the peeling algorithm is executed in the LCA setting by applying standard techniques, e.g., the Parnas-Ron paradigm, it requires Ω(n) probes per query on an n-vertex graph. In the case where G has unbounded degree, we show that any LCA that orients its edges to yield maximum out-degree r must use Ω(√n/r) probes to G per query in the worst case, even if G is known to be a forest (that is, α = 1). We also show several algorithms with sublinear probe complexity when G has unbounded degree. When G is a tree such that the maximum degree Δ of G is bounded, we demonstrate an algorithm that uses Δ n^{1-log_Δ r + o(1)} probes to G per query. To obtain this result, we develop an edge-coloring approach that ultimately yields a graph-shattering-like result. We also use this shattering-like approach to demonstrate an LCA which 4-colors any tree using sublinear probes per query.
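
As background, a global (non-LCA) version of the peeling idea is a few lines of Python: peel a minimum-degree vertex and orient its remaining edges outward, giving maximum out-degree at most the degeneracy, which is at most 2α - 1 (our sketch, not the paper's LCA):

def peel_orientation(adj):
    # adj: {v: set(neighbors)}; returns a list of directed edges (u, v).
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    orient = []
    while alive:
        v = min(alive, key=deg.get)
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                orient.append((v, u))   # orient v -> u
                deg[u] -= 1
    return orient

# A tree (arboricity 1): every vertex ends with out-degree <= 1.
tree = {0: {1, 2}, 1: {0, 3, 4}, 2: {0}, 3: {1}, 4: {1}}
outdeg = {}
for u, _ in peel_orientation(tree):
    outdeg[u] = outdeg.get(u, 0) + 1
print(max(outdeg.values()))  # 1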

Cite as

Slobodan Mitrović, Ronitt Rubinfeld, and Mihir Singhal. Locally Computing Edge Orientations. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 89:1-89:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{mitrovic_et_al:LIPIcs.ESA.2024.89,
  author =	{Mitrovi\'{c}, Slobodan and Rubinfeld, Ronitt and Singhal, Mihir},
  title =	{{Locally Computing Edge Orientations}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{89:1--89:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.89},
  URN =		{urn:nbn:de:0030-drops-211603},
  doi =		{10.4230/LIPIcs.ESA.2024.89},
  annote =	{Keywords: local computation algorithms, edge orientation, tree coloring}
}
Document
Many-To-Many Polygon Matching à La Jaccard

Authors: Alexander Naumann, Annika Bonerath, and Jan-Henrik Haunert


Abstract
Integration of spatial data is a major field of research. An important task in data integration is finding correspondences between entities. Here, we focus on combining building footprint data from cadastre and from volunteered geographic information, in particular OpenStreetMap. Previous research on this topic has led to exact 1:1 matching approaches and heuristic m:n matching approaches, most of which lack a mathematical problem definition. We introduce a model for many-to-many polygon matching based on the well-established Jaccard index. This is a natural extension of the existing 1:1 matching approaches. We show that the problem is NP-complete and that a naive approach via integer programming fails easily. By analyzing the structure of the problem in detail, we can reduce the number of variables significantly. This approach yields an optimal m:n matching even for large real-world instances in reasonable running time. In particular, for the set of all building footprints of the city of Bonn (119,300 / 97,284 polygons), it yielded an optimal solution in approximately 1 hour.
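
The similarity measure the model is built on is easy to state in code; the sketch below (ours, assuming the shapely library is available; it is not part of the paper's ILP) computes the Jaccard index of two footprint polygons:

from shapely.geometry import Polygon

def jaccard(a: Polygon, b: Polygon) -> float:
    # Jaccard index = area of intersection / area of union.
    inter = a.intersection(b).area
    union = a.union(b).area
    return inter / union if union > 0 else 0.0

# Two unit squares overlapping in a 0.5 x 1 strip: J = 0.5 / 1.5 = 1/3.
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)])
print(round(jaccard(a, b), 3))  # 0.333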

Cite as

Alexander Naumann, Annika Bonerath, and Jan-Henrik Haunert. Many-To-Many Polygon Matching à La Jaccard. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 90:1-90:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{naumann_et_al:LIPIcs.ESA.2024.90,
  author =	{Naumann, Alexander and Bonerath, Annika and Haunert, Jan-Henrik},
  title =	{{Many-To-Many Polygon Matching \`{a} La Jaccard}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{90:1--90:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.90},
  URN =		{urn:nbn:de:0030-drops-211614},
  doi =		{10.4230/LIPIcs.ESA.2024.90},
  annote =	{Keywords: polygon matching, exact algorithm, Jaccard index}
}
Document
Euclidean Capacitated Vehicle Routing in the Random Setting: A 1.55-Approximation Algorithm

Authors: Zipei Nie and Hang Zhou


Abstract
We study the unit-demand capacitated vehicle routing problem in the random setting of the Euclidean plane. The objective is to visit n random terminals in a square using a set of tours of minimum total length, such that each tour visits the depot and at most k terminals. We design an algorithm combining the classical sweep heuristic and the framework for the Euclidean traveling salesman problem due to Arora [J. ACM 1998] and Mitchell [SICOMP 1999]. We show that our algorithm is a polynomial-time approximation of ratio at most 1.55 asymptotically almost surely. This improves on the prior ratio of 1.915 due to Mathieu and Zhou [RSA 2022]. In addition, we conjecture that, for any ε > 0, our algorithm is a (1+ε)-approximation asymptotically almost surely.
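The classical sweep heuristic mentioned above is simple enough to sketch (ours, for illustration; the paper combines it with the Arora/Mitchell framework): sort terminals by polar angle around the depot and cut the circular order into groups of at most k.

import math

def sweep_tours(depot, terminals, k):
    # Each group of at most k consecutive terminals (in angular order)
    # becomes one tour; ordering within a tour is a separate TSP step,
    # here just the sweep order.
    order = sorted(terminals,
                   key=lambda p: math.atan2(p[1] - depot[1], p[0] - depot[0]))
    return [order[i:i + k] for i in range(0, len(order), k)]

def tour_length(depot, tour):
    pts = [depot] + tour + [depot]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

depot = (0.5, 0.5)
terms = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.9), (0.8, 0.1), (0.5, 0.95)]
tours = sweep_tours(depot, terms, k=2)
print(len(tours), round(sum(tour_length(depot, t) for t in tours), 3))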

Cite as

Zipei Nie and Hang Zhou. Euclidean Capacitated Vehicle Routing in the Random Setting: A 1.55-Approximation Algorithm. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 91:1-91:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{nie_et_al:LIPIcs.ESA.2024.91,
  author =	{Nie, Zipei and Zhou, Hang},
  title =	{{Euclidean Capacitated Vehicle Routing in the Random Setting: A 1.55-Approximation Algorithm}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{91:1--91:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.91},
  URN =		{urn:nbn:de:0030-drops-211627},
  doi =		{10.4230/LIPIcs.ESA.2024.91},
  annote =	{Keywords: capacitated vehicle routing, approximation algorithm, combinatorial optimization}
}
Document
Parameterized Algorithms for Node Connectivity Augmentation Problems

Authors: Zeev Nutov


Abstract
A graph G is k-out-connected from its node s if it contains k internally disjoint sv-paths to every node v; G is k-connected if it is k-out-connected from every node. In connectivity augmentation problems, the goal is to augment a graph G₀ = (V,E₀) by a minimum-cost edge set J such that G₀ ∪ J has higher connectivity than G₀. In the k-Out-Connectivity Augmentation (k-OCA) problem, G₀ is (k-1)-out-connected from s and G₀ ∪ J should be k-out-connected from s; in the k-Connectivity Augmentation (k-CA) problem, G₀ is (k-1)-connected and G₀ ∪ J should be k-connected. The parameterized complexity status of these problems was open even for k = 3 and unit costs. We show that k-OCA and 3-CA can be solved in time 9^p ⋅ n^{O(1)}, where p is the size of an optimal solution. Our paper is the first to show fixed-parameter tractability of a k-node-connectivity augmentation problem for high values of k. We also consider the (2,k)-Connectivity Augmentation ((2,k)-CA) problem, where G₀ is (k-1)-edge-connected and G₀ ∪ J should be both k-edge-connected and 2-connected. We show that this problem can be solved in time 9^p ⋅ n^{O(1)} and, for unit costs, approximated within 1.892.
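
By Menger's theorem, k-out-connectivity can be checked with maximum flows via the standard node-splitting reduction; here is a small checker of ours (assuming networkx; an illustration of the definition, not the paper's algorithm):

import networkx as nx

def out_connectivity(G, s, v):
    # Number of internally vertex-disjoint s->v paths: split every vertex
    # other than s, v into an in/out pair of capacity 1.
    H = nx.DiGraph()
    for u in G.nodes:
        cap = float("inf") if u in (s, v) else 1
        H.add_edge((u, "in"), (u, "out"), capacity=cap)
    for a, b in G.edges:
        H.add_edge((a, "out"), (b, "in"), capacity=1)
    return nx.maximum_flow_value(H, (s, "out"), (v, "in"))

def is_k_out_connected(G, s, k):
    return all(out_connectivity(G, s, v) >= k for v in G.nodes if v != s)

G = nx.DiGraph([(0, 1), (0, 2), (1, 2), (2, 1), (1, 3), (2, 3)])
print(is_k_out_connected(G, 0, 2))  # True: 2 disjoint paths to each node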

Cite as

Zeev Nutov. Parameterized Algorithms for Node Connectivity Augmentation Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 92:1-92:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{nutov:LIPIcs.ESA.2024.92,
  author =	{Nutov, Zeev},
  title =	{{Parameterized Algorithms for Node Connectivity Augmentation Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{92:1--92:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.92},
  URN =		{urn:nbn:de:0030-drops-211639},
  doi =		{10.4230/LIPIcs.ESA.2024.92},
  annote =	{Keywords: node connectivity augmentation, fixed-parameter tractability}
}
Document
Parameterized Complexity of MinCSP over the Point Algebra

Authors: George Osipov, Marcin Pilipczuk, and Magnus Wahlström


Abstract
The input in the Minimum-Cost Constraint Satisfaction Problem (MinCSP) over the Point Algebra contains a set of variables, a collection of constraints of the form x < y, x = y, x ≤ y and x ≠ y, and a budget k. The goal is to check whether it is possible to assign rational values to the variables while breaking constraints of total cost at most k. This problem generalizes several prominent graph separation and transversal problems:
- MinCSP({<}) is equivalent to Directed Feedback Arc Set,
- MinCSP({<, ≤}) is equivalent to Directed Subset Feedback Arc Set,
- MinCSP({=, ≠}) is equivalent to Edge Multicut, and
- MinCSP({≤, ≠}) is equivalent to Directed Symmetric Multicut.
Apart from trivial cases, MinCSP(Γ) for Γ ⊆ {<, =, ≤, ≠} is NP-hard even to approximate within any constant factor under the Unique Games Conjecture. Hence, we study the parameterized complexity of this problem under a natural parameterization by the solution cost k. We obtain a complete classification: if Γ ⊆ {<, =, ≤, ≠} contains both ≤ and ≠, then MinCSP(Γ) is W[1]-hard; otherwise it is fixed-parameter tractable. For the positive cases, we solve MinCSP({<, =, ≠}), generalizing the FPT results for Directed Feedback Arc Set and Edge Multicut as well as their weighted versions. Our algorithm works by reducing the problem to a Boolean MinCSP, which is in turn solved by flow augmentation. For the lower bounds, we prove that Directed Symmetric Multicut is W[1]-hard, solving an open problem.
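
Satisfiability over the Point Algebra (cost 0) has a clean polynomial-time check, which makes the MinCSP objective concrete; below is a sketch of ours (assuming networkx for strongly connected components; the brute-force deletion loop is for illustration only, the paper's algorithms are far more efficient):

from itertools import combinations
import networkx as nx

def satisfiable(n, cons):
    # cons: list of (op, x, y), op in {"<", "<=", "=", "!="}, over the
    # rationals. Merge "=" classes, then contract SCCs of the <=/< graph:
    # inside an SCC all variables are forced equal.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for op, x, y in cons:
        if op == "=":
            parent[find(x)] = find(y)
    G = nx.DiGraph()
    G.add_nodes_from(find(v) for v in range(n))
    for op, x, y in cons:
        if op in ("<", "<="):
            G.add_edge(find(x), find(y))
    comp = {}
    for i, scc in enumerate(nx.strongly_connected_components(G)):
        for v in scc:
            comp[v] = i
    for op, x, y in cons:
        a, b = find(x), find(y)
        if op == "<" and comp[a] == comp[b]:
            return False      # x < y while x = y is forced
        if op == "!=" and (a == b or comp[a] == comp[b]):
            return False      # x != y while x = y is forced
    return True

def min_csp(n, cons):
    # Brute force: smallest number of constraints to delete.
    for k in range(len(cons) + 1):
        for keep in combinations(range(len(cons)), len(cons) - k):
            if satisfiable(n, [cons[i] for i in keep]):
                return k

# x < y, y < z, z <= x is unsatisfiable; deleting any one constraint helps.
print(min_csp(3, [("<", 0, 1), ("<", 1, 2), ("<=", 2, 0)]))  # 1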

Cite as

George Osipov, Marcin Pilipczuk, and Magnus Wahlström. Parameterized Complexity of MinCSP over the Point Algebra. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{osipov_et_al:LIPIcs.ESA.2024.93,
  author =	{Osipov, George and Pilipczuk, Marcin and Wahlstr\"{o}m, Magnus},
  title =	{{Parameterized Complexity of MinCSP over the Point Algebra}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{93:1--93:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.93},
  URN =		{urn:nbn:de:0030-drops-211640},
  doi =		{10.4230/LIPIcs.ESA.2024.93},
  annote =	{Keywords: parameterized complexity, constraint satisfaction, point algebra, multicut, feedback arc set}
}
Document
A Row Generation Algorithm for Finding Optimal Burning Sequences of Large Graphs

Authors: Felipe de Carvalho Pereira, Pedro Jussieu de Rezende, Tallys Yunes, and Luiz Fernando Batista Morato


Abstract
We propose an exact algorithm for the Graph Burning Problem (GBP), an NP-hard optimization problem that models the spread of influence on social networks. Given a graph G with vertex set V, the objective is to find a sequence of k vertices in V, namely, v₁, v₂, … , v_k, such that k is minimum and ⋃_{i=1}^{k} {u∈V: d(u,v_i) ≤ k-i} = V, where d(u,v) denotes the distance between u and v. We formulate the problem as a set covering integer programming model and design a row generation algorithm for the GBP. Our method exploits the fact that a very small number of covering constraints is often sufficient for solving the integer model, allowing the corresponding rows to be generated on demand. To date, the most efficient exact algorithm for the GBP, denoted here by GDCA, is able to obtain optimal solutions for graphs with up to 14,000 vertices within two hours of execution. In comparison, our algorithm finds provably optimal solutions approximately 236 times faster, on average, than GDCA. For larger graphs, memory space becomes a limiting factor for GDCA. Our algorithm, however, solves real-world instances with more than 3 million vertices in less than 19 minutes, increasing the size of graphs for which optimal solutions are known by a factor of 200. Additionally, we conduct tests on the proposed algorithm using a series of challenging instances composed of grid graphs containing up to 5,000 vertices. As a result, we achieve novel optimal solutions and tight optimality gaps that have not been previously reported in the literature.
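The defining condition of a burning sequence is directly checkable; the following Python sketch of ours (illustration of the objective, not the row generation algorithm) verifies a candidate sequence with one BFS ball per burning source:

from collections import deque

def is_burning_sequence(adj, seq):
    # Valid iff the balls of radius k - i around the v_i cover V (k = len(seq)).
    k = len(seq)
    burned = set()
    for i, v in enumerate(seq, start=1):
        dist = {v: 0}                    # BFS ball of radius k - i
        q = deque([v])
        while q:
            u = q.popleft()
            if dist[u] == k - i:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        burned |= set(dist)
    return burned == set(adj)

# Path on 5 vertices: burning number 3; (1, 4, 0) works, no 2-sequence does.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(is_burning_sequence(path, [1, 4, 0]),
      is_burning_sequence(path, [0, 4]))  # True False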

Cite as

Felipe de Carvalho Pereira, Pedro Jussieu de Rezende, Tallys Yunes, and Luiz Fernando Batista Morato. A Row Generation Algorithm for Finding Optimal Burning Sequences of Large Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 94:1-94:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{pereira_et_al:LIPIcs.ESA.2024.94,
  author =	{Pereira, Felipe de Carvalho and de Rezende, Pedro Jussieu and Yunes, Tallys and Morato, Luiz Fernando Batista},
  title =	{{A Row Generation Algorithm for Finding Optimal Burning Sequences of Large Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{94:1--94:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.94},
  URN =		{urn:nbn:de:0030-drops-211651},
  doi =		{10.4230/LIPIcs.ESA.2024.94},
  annote =	{Keywords: Graph Burning, Burning Number, Burning Sequence, Set Covering, Integer Programming, Row Generation}
}
Document
Competitive Capacitated Online Recoloring

Authors: Rajmohan Rajaraman and Omer Wasim


Abstract
In this paper, we revisit the online recoloring problem introduced recently by Azar, Machluf, Patt-Shamir and Touitou [Azar et al., 2022] to investigate algorithmic challenges that arise while scheduling virtual machines or processes in distributed systems and cloud services. In online recoloring, there is a fixed set V of n vertices and an initial coloring c₀: V → [k] for some k ∈ ℤ^{> 0}. Under an online sequence σ of requests where each request is an edge (u_t,v_t), a proper vertex coloring c of the graph G_t induced by requests until time t needs to be maintained for all t; i.e., for any (u,v) ∈ G_t, c(u) ≠ c(v). In the distributed systems application, a vertex corresponds to a VM, an edge corresponds to the requirement that the two endpoint VMs be on different clusters, and a coloring is an allocation of VMs to clusters. The objective is to minimize the total weight of vertices recolored for the sequence σ. In [Azar et al., 2022], the authors give competitive algorithms for two polynomially tractable cases - 2-coloring for bipartite G_t and (Δ+1)-coloring for Δ-degree G_t - and lower bounds for the fully dynamic case where G_t can be arbitrary. We obtain the first competitive algorithms for capacitated online recoloring and fully dynamic recoloring, in which there is a bound on the number or weight of vertices in each color. Our first set of results is for 2-recoloring using algorithms that are (1+ε)-resource augmented, where ε ∈ (0,1) is an arbitrarily small constant. Our main result is an O(log n)-competitive deterministic algorithm for weighted bipartite graphs, which is asymptotically optimal in light of an Ω(log n) lower bound that holds for an unbounded amount of augmentation. We also present an O(n log n)-competitive deterministic algorithm for fully dynamic recoloring, which is optimal within an O(log n) factor in light of an Ω(n) lower bound that holds for an unbounded amount of augmentation. Our second set of results is for Δ-recoloring in a (1+ε)-overprovisioned setting where the maximum degree of G_t is bounded by (1-ε)Δ for all t, and each color is assigned to at most (1+ε)n/Δ vertices, for an arbitrary ε > 0. Our main result is an O(1)-competitive randomized algorithm for Δ = O(√{n/log n}). We also present an O(Δ)-competitive deterministic algorithm for Δ ≤ ε n/2. Both results are asymptotically optimal.
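
For the uncapacitated 2-recoloring baseline, the classic "recolor the smaller side" idea fits in a few lines; this sketch of ours (unweighted, illustration only, not the paper's capacitated algorithms) keeps the total number of recolorings at O(n log n) by small-to-large merging:

def recolor_online(n, edges):
    # Maintain a proper 2-coloring under edge insertions (input assumed
    # to stay bipartite): on a conflict, flip the smaller component.
    color = [0] * n
    comp = {v: {v} for v in range(n)}   # shared component sets
    recolorings = 0
    for u, v in edges:
        if comp[u] is comp[v]:
            assert color[u] != color[v], "graph is no longer bipartite"
            continue
        small, big = sorted((comp[u], comp[v]), key=len)
        if color[u] == color[v]:
            for w in small:
                color[w] ^= 1           # flipping a whole side stays proper
            recolorings += len(small)
        big.update(small)
        for w in small:
            comp[w] = big
    return recolorings

edges = [(0, 1), (2, 3), (1, 2), (0, 3)]
print(recolor_online(4, edges))  # 2 recolorings in total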

Cite as

Rajmohan Rajaraman and Omer Wasim. Competitive Capacitated Online Recoloring. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 95:1-95:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{rajaraman_et_al:LIPIcs.ESA.2024.95,
  author =	{Rajaraman, Rajmohan and Wasim, Omer},
  title =	{{Competitive Capacitated Online Recoloring}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{95:1--95:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.95},
  URN =		{urn:nbn:de:0030-drops-211666},
  doi =		{10.4230/LIPIcs.ESA.2024.95},
  annote =	{Keywords: online algorithms, competitive ratio, recoloring, resource augmentation}
}
Document
Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM

Authors: Tim Randolph and Karol Węgrzycki


Abstract
We study the parameterized complexity of algorithmic problems whose input is an integer set A in terms of the doubling constant 𝒞 := |A+A| / |A|, a fundamental measure of additive structure. We present evidence that this new parameterization is algorithmically useful in the form of new results for two difficult, well-studied problems: Integer Programming and Subset Sum. First, we show that determining the feasibility of bounded Integer Programs is a tractable problem when parameterized by the doubling constant. Specifically, we prove that the feasibility of an integer program ℐ with n polynomially-bounded variables and m constraints can be determined in time n^{O_𝒞(1)} ⋅ poly(|ℐ|) when the column set of the constraint matrix has doubling constant 𝒞. Second, we show that the Subset Sum and Unbounded Subset Sum problems can be solved in time n^{O_𝒞(1)} and n^{O_𝒞(log log log n)}, respectively, where the O_𝒞 notation hides functions that depend only on the doubling constant 𝒞. We also show the equivalence of achieving an FPT algorithm for Subset Sum with bounded doubling and achieving a milestone result for the parameterized complexity of Box ILP. Finally, we give near-linear time algorithms for k-SUM, as well as tight lower bounds for 4-SUM and nearly tight lower bounds for k-SUM, under the k-SUM conjecture. Several of our results rely on a new proof that Freiman’s Theorem, a central result in additive combinatorics, can be made efficiently constructive. This result may be of independent interest.
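
The parameter itself is a one-liner to compute; a quick Python illustration of ours of the two extremes (arithmetic progressions have near-minimal doubling, sets with little additive structure have large doubling):

def doubling_constant(A):
    # C = |A + A| / |A| for a finite set of integers.
    A = set(A)
    sums = {a + b for a in A for b in A}
    return len(sums) / len(A)

# An arithmetic progression {1..10}: |A+A| = 19, so C = 1.9.
print(doubling_constant(range(1, 11)))                 # 1.9
# Powers of two: all 55 pairwise sums are distinct, so C = 5.5.
print(doubling_constant([2 ** i for i in range(10)]))  # 5.5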

Cite as

Tim Randolph and Karol Węgrzycki. Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{randolph_et_al:LIPIcs.ESA.2024.96,
  author =	{Randolph, Tim and W\k{e}grzycki, Karol},
  title =	{{Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.96},
  URN =		{urn:nbn:de:0030-drops-211672},
  doi =		{10.4230/LIPIcs.ESA.2024.96},
  annote =	{Keywords: Parameterized algorithms, parameterized complexity, additive combinatorics, Subset Sum, integer programming, doubling constant}
}
Document
Engineering Edge Orientation Algorithms

Authors: Henrik Reinstädtler, Christian Schulz, and Bora Uçar


Abstract
Given an undirected graph G, the edge orientation problem asks for assigning a direction to each edge, turning G into a directed graph. The aim is to minimize the maximum out-degree of a vertex in the resulting directed graph. This problem, which is solvable in polynomial time, arises in many applications. An ongoing challenge for edge orientation algorithms is their scalability, particularly in handling large-scale networks with millions or billions of edges efficiently. We propose a novel algorithmic framework based on finding and manipulating simple paths to address this challenge. Our framework builds on an existing algorithm and allows many algorithmic choices. By carefully exploring these choices and engineering the underlying algorithms, we obtain an implementation that is more efficient and scalable than the current state-of-the-art. Our experiments demonstrate significant performance improvements over state-of-the-art solvers: on average, our algorithm is 6.59 times faster.
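
The path-manipulation idea can be illustrated by the classic path-reversal scheme (our minimal sketch, not the paper's engineered framework): repeatedly reverse an oriented path from a maximum-out-degree vertex to a vertex whose out-degree is at least two lower; when no such path exists, the maximum out-degree is minimum.

from collections import deque

def minimize_max_outdegree(n, edges):
    out = {v: set() for v in range(n)}
    for u, v in edges:                    # arbitrary initial orientation
        out[u].add(v)
    while True:
        u = max(range(n), key=lambda v: len(out[v]))
        par = {u: None}                   # BFS along oriented edges
        q, w = deque([u]), None
        while q and w is None:
            x = q.popleft()
            for y in out[x]:
                if y not in par:
                    par[y] = x
                    if len(out[y]) <= len(out[u]) - 2:
                        w = y
                        break
                    q.append(y)
        if w is None:
            return out                    # no improving path left
        while par[w] is not None:         # reverse the u -> w path
            p = par[w]
            out[p].remove(w)
            out[w].add(p)
            w = p

# K4 admits an orientation with maximum out-degree 2, and no better.
K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
o = minimize_max_outdegree(4, K4)
print(max(len(s) for s in o.values()))  # 2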

Cite as

Henrik Reinstädtler, Christian Schulz, and Bora Uçar. Engineering Edge Orientation Algorithms. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 97:1-97:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{reinstadtler_et_al:LIPIcs.ESA.2024.97,
  author =	{Reinst\"{a}dtler, Henrik and Schulz, Christian and U\c{c}ar, Bora},
  title =	{{Engineering Edge Orientation Algorithms}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{97:1--97:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.97},
  URN =		{urn:nbn:de:0030-drops-211682},
  doi =		{10.4230/LIPIcs.ESA.2024.97},
  annote =	{Keywords: edge orientation, pseudoarboricity, graph algorithms}
}
Document
Local Max-Cut on Sparse Graphs

Authors: Gregory Schwartzman


Abstract
We bound the smoothed running time of the FLIP algorithm for local Max-Cut as a function of α, the arboricity of the input graph. We show that, with high probability and in expectation, the following holds (where n is the number of nodes and ϕ is the smoothing parameter):
1) When α = O(log^{1-δ} n), FLIP terminates in ϕ poly(n) iterations, where δ ∈ (0,1] is an arbitrarily small constant. Prior to our results, the only graph families for which FLIP was known to achieve a smoothed polynomial running time were complete graphs and graphs with logarithmic maximum degree.
2) For arbitrary values of α, we get a running time of ϕ n^{O(α/(log n) + log α)}. This improves over the best known running time for general graphs of ϕ n^{O(√{log n})} for α = o(log^{1.5} n). Specifically, when α = O(log n), we get a significantly faster running time of ϕ n^{O(log log n)}.
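
The FLIP algorithm itself is short; here is a plain Python version of ours (the paper analyzes its smoothed iteration count, not its implementation):

import random

def flip_local_max_cut(adj_w, seed=0):
    # FLIP: start from a random cut; while moving one vertex to the
    # other side increases the cut weight, perform such a move.
    rng = random.Random(seed)
    side = {v: rng.randint(0, 1) for v in adj_w}
    def gain(v):
        # change in cut weight if v switches sides
        same = sum(w for u, w in adj_w[v].items() if side[u] == side[v])
        cross = sum(w for u, w in adj_w[v].items() if side[u] != side[v])
        return same - cross
    improved, iters = True, 0
    while improved:
        improved = False
        for v in adj_w:
            if gain(v) > 0:
                side[v] ^= 1
                iters += 1
                improved = True
    return side, iters

# Unit-weight triangle: every nontrivial cut has weight 2 and is a
# local optimum, so FLIP stops quickly.
tri = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
print(flip_local_max_cut(tri))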

Cite as

Gregory Schwartzman. Local Max-Cut on Sparse Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 98:1-98:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{schwartzman:LIPIcs.ESA.2024.98,
  author =	{Schwartzman, Gregory},
  title =	{{Local Max-Cut on Sparse Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{98:1--98:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.98},
  URN =		{urn:nbn:de:0030-drops-211694},
  doi =		{10.4230/LIPIcs.ESA.2024.98},
  annote =	{Keywords: Algorithms, smoothed analysis}
}
Document
Parameterized Quantum Query Algorithms for Graph Problems

Authors: Tatsuya Terao and Ryuhei Mori


Abstract
In this paper, we consider the parameterized quantum query complexity of graph problems. We design parameterized quantum query algorithms for the k-vertex cover and k-matching problems, and present lower bounds on the parameterized quantum query complexity. We then show that our quantum query algorithms are optimal up to a constant factor when the parameters are small. Our main results are as follows.

Parameterized quantum query complexity of vertex cover. In the k-vertex cover problem, we are given an undirected graph G with n vertices and an integer k, and the objective is to determine whether G has a vertex cover of size at most k. We show that the quantum query complexity of the k-vertex cover problem is O(√{kn} + k^{3/2}√n) in the adjacency matrix model. For the design of the quantum query algorithm, we use the method of kernelization, a well-known tool for the design of parameterized classical algorithms, combined with Grover’s search.

Parameterized quantum query complexity of matching. In the k-matching problem, we are given an undirected graph G with n vertices and an integer k, and the objective is to determine whether G has a matching of size at least k. We show that the quantum query complexity of the k-matching problem is O(√{kn} + k²) in the adjacency matrix model. We obtain this upper bound by using Grover’s search carefully and by analyzing the number of Grover searches with the help of potential functions. We also show that the quantum query complexity of the maximum matching problem is O(√{pn} + p²), where p is the size of a maximum matching. For small p, this improves the known bounds of Õ(n^{3/2}) for bipartite graphs [Blikstad-v.d.Brand-Efron-Mukhopadhyay-Nanongkai, FOCS 2022] and O(n^{7/4}) for general graphs [Kimmel-Witter, WADS 2021].

Lower bounds on parameterized quantum query complexity. We also present lower bounds on the quantum query complexities of the k-vertex cover and k-matching problems. The lower bounds prove the optimality of the above parameterized quantum query algorithms up to a constant factor when k is small. Indeed, the quantum query complexities of the k-vertex cover and k-matching problems are both Θ(√{kn}), when k = O(√n) and k = O(n^{2/3}), respectively.
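
The classical kernelization step named above (the Buss rule for vertex cover) is easy to sketch; this is our classical illustration of the tool the quantum algorithm builds on, not the quantum algorithm itself:

def buss_kernel(edges, k):
    # Buss kernelization for k-Vertex Cover: any vertex of degree greater
    # than the remaining budget must be in the cover (otherwise all of its
    # neighbors would be); afterwards a yes-instance has <= budget^2 edges.
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k - len(forced):
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    if len(forced) > k or len(edges) > (k - len(forced)) ** 2:
        return None                 # provably a no-instance
    return forced, edges            # small kernel: finish by search

# Star K_{1,5} with k = 1: the center is forced, the kernel is empty.
star = [(0, i) for i in range(1, 6)]
print(buss_kernel(star, 1))   # ({0}, set())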

Cite as

Tatsuya Terao and Ryuhei Mori. Parameterized Quantum Query Algorithms for Graph Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 99:1-99:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{terao_et_al:LIPIcs.ESA.2024.99,
  author =	{Terao, Tatsuya and Mori, Ryuhei},
  title =	{{Parameterized Quantum Query Algorithms for Graph Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{99:1--99:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.99},
  URN =		{urn:nbn:de:0030-drops-211707},
  doi =		{10.4230/LIPIcs.ESA.2024.99},
  annote =	{Keywords: Quantum query complexity, parameterized algorithms, vertex cover, matching, kernelization}
}
Document
Fully Dynamic k-Means Coreset in Near-Optimal Update Time

Authors: Max Dupré la Tour, Monika Henzinger, and David Saulpic


Abstract
We study in this paper the problem of maintaining a solution to k-median and k-means clustering in a fully dynamic setting. To do so, we present an algorithm to efficiently maintain a coreset, a compressed version of the dataset, that allows easy computation of a clustering solution at query time. Our coreset algorithm has near-optimal update time of Õ(k) in general metric spaces, which reduces to Õ(d) in the Euclidean space ℝ^d. The query time is O(k²) in general metrics, and O(kd) in ℝ^d. To maintain a constant-factor approximation for k-median and k-means clustering in Euclidean space, this directly leads to an algorithm with update time Õ(d), and query time Õ(kd + k²). To maintain a O(polylog k)-approximation, the query time is reduced to Õ(kd).
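To illustrate what the coreset guarantee means (our toy illustration using uniform sampling; the paper's maintained coreset uses much stronger constructions), a weighted subset should reproduce the k-means cost of the full data for any fixed centers:

import random

def kmeans_cost(points, centers, weights=None):
    # Sum over points of weight times squared distance to nearest center.
    w = weights or [1.0] * len(points)
    return sum(wi * min((px - cx) ** 2 + (py - cy) ** 2
                        for cx, cy in centers)
               for (px, py), wi in zip(points, w))

random.seed(0)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
m = 200
core = random.sample(data, m)          # uniform sample, weight n/m each
weights = [len(data) / m] * m
centers = [(0.0, 0.0), (2.0, 2.0)]
print(round(kmeans_cost(data, centers), 1),
      round(kmeans_cost(core, centers, weights), 1))  # close to each other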

Cite as

Max Dupré la Tour, Monika Henzinger, and David Saulpic. Fully Dynamic k-Means Coreset in Near-Optimal Update Time. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 100:1-100:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{latour_et_al:LIPIcs.ESA.2024.100,
  author =	{la Tour, Max Dupr\'{e} and Henzinger, Monika and Saulpic, David},
  title =	{{Fully Dynamic k-Means Coreset in Near-Optimal Update Time}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{100:1--100:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.100},
  URN =		{urn:nbn:de:0030-drops-211716},
  doi =		{10.4230/LIPIcs.ESA.2024.100},
  annote =	{Keywords: clustering, fully-dynamic, coreset, k-means}
}
Document
Time-Efficient Quantum Entropy Estimator via Samplizer

Authors: Qisheng Wang and Zhicheng Zhang


Abstract
Entropy is a measure of the randomness of a system. Estimating the entropy of a quantum state is a basic problem in quantum information. In this paper, we introduce a time-efficient quantum approach to estimating the von Neumann entropy S(ρ) and Rényi entropy S_α(ρ) of an N-dimensional quantum state ρ, given access to independent samples of ρ. Specifically, we provide the following quantum estimators.
- A quantum estimator for S(ρ) with time complexity Õ(N²), improving the prior best time complexity Õ(N⁶) by Acharya, Issa, Shende, and Wagner (2020) and Bavarian, Mehraba, and Wright (2016).
- A quantum estimator for S_α(ρ) with time complexity Õ(N^{4/α-2}) for 0 < α < 1 and Õ(N^{4-2/α}) for α > 1, improving the prior best time complexity Õ(N^{6/α}) for 0 < α < 1 and Õ(N⁶) for α > 1 by Acharya, Issa, Shende, and Wagner (2020), though at the cost of a slightly larger sample complexity.
Moreover, these estimators are naturally extensible to the low-rank case. We also provide a sample lower bound Ω(max{N/ε, N^{1/α-1}/ε^{1/α}}) for estimating S_α(ρ). Technically, our method is quite different from the previous ones, which are based on weak Schur sampling and Young diagrams. At the heart of our construction is a novel tool called a samplizer, which can "samplize" a quantum query algorithm into a quantum algorithm with similar behavior using only samples of quantum states; this suggests a unified framework for estimating quantum entropies. Specifically, when a quantum oracle U block-encodes a mixed quantum state ρ, any quantum query algorithm using Q queries to U can be samplized to a δ-close (in the diamond norm) quantum algorithm using Θ̃(Q²/δ) samples of ρ. Moreover, this samplization is proven to be optimal, up to a polylogarithmic factor.

Cite as

Qisheng Wang and Zhicheng Zhang. Time-Efficient Quantum Entropy Estimator via Samplizer. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 101:1-101:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.ESA.2024.101,
  author =	{Wang, Qisheng and Zhang, Zhicheng},
  title =	{{Time-Efficient Quantum Entropy Estimator via Samplizer}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{101:1--101:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.101},
  URN =		{urn:nbn:de:0030-drops-211722},
  doi =		{10.4230/LIPIcs.ESA.2024.101},
  annote =	{Keywords: Quantum computing, entropy estimation, von Neumann entropy, R\'{e}nyi entropy, sample complexity}
}
Document
SubModST: A Fast Generic Solver for Submodular Maximization with Size Constraints

Authors: Henning Martin Woydt, Christian Komusiewicz, and Frank Sommer


Abstract
In the Cardinality-Constrained Maximization (Minimization) problem, the input is a universe 𝒰, a function f: 2^𝒰 → ℝ, and an integer k, and the task is to find a set S ⊆ 𝒰 with |S| ≤ k that maximizes (minimizes) f(S). Many well-studied problems such as Facility Location, Partial Dominating Set, Group Closeness Centrality, and Euclidean k-Medoid Clustering are special cases of Cardinality-Constrained Maximization (Minimization). All the above-mentioned problems have the diminishing-returns property, that is, the gain from adding an element e ∈ 𝒰 to a set S is at least as large as the gain from adding e to any superset of S. This property is called submodularity for maximization problems and supermodularity for minimization problems. In this work we develop SubModST, a new exact branch-and-cut algorithm for the generic Submodular Cardinality-Constrained Maximization and Supermodular Cardinality-Constrained Minimization problems. We develop several speed-ups for SubModST and show their effectiveness on six example problems. We show that SubModST outperforms the state-of-the-art solvers developed by Csókás and Vinkó [J. Glob. Optim. '24] and Uematsu et al. [J. Oper. Res. Soc. Japan '20] for Submodular Cardinality-Constrained Maximization by orders of magnitude.
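
SubModST itself is an exact branch-and-cut solver, but one of its named ingredients, lazy evaluation of marginal gains, is easiest to see in the classic lazy greedy heuristic for submodular maximization. The Python sketch below applies lazy greedy to max-coverage and illustrates only that one idea, not the paper's algorithm; by submodularity, a stale gain stored in the heap is an upper bound on the true gain, so most re-evaluations can be skipped.

import heapq

def lazy_greedy(universe_sets, k):
    # Greedy for max-coverage (a submodular function) under |S| <= k,
    # with lazily re-evaluated marginal gains.
    covered, chosen = set(), []
    heap = [(-len(s), i, 0) for i, s in enumerate(universe_sets)]
    heapq.heapify(heap)
    while heap and len(chosen) < k:
        neg_gain, i, stamp = heapq.heappop(heap)
        if stamp == len(chosen):               # gain is fresh: take it
            chosen.append(i)
            covered |= universe_sets[i]
        else:                                  # stale: recompute, push back
            gain = len(universe_sets[i] - covered)
            heapq.heappush(heap, (-gain, i, len(chosen)))
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(lazy_greedy(sets, 2))                    # ([0, 2], {1, 2, 3, 4, 5, 6})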

Cite as

Henning Martin Woydt, Christian Komusiewicz, and Frank Sommer. SubModST: A Fast Generic Solver for Submodular Maximization with Size Constraints. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 102:1-102:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{woydt_et_al:LIPIcs.ESA.2024.102,
  author =	{Woydt, Henning Martin and Komusiewicz, Christian and Sommer, Frank},
  title =	{{SubModST: A Fast Generic Solver for Submodular Maximization with Size Constraints}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{102:1--102:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.102},
  URN =		{urn:nbn:de:0030-drops-211730},
  doi =		{10.4230/LIPIcs.ESA.2024.102},
  annote =	{Keywords: Branch-and-Cut, Lazy Evaluations, Facility Location, Group Closeness Centrality, Partial Dominating Set}
}
Document
A Faster Algorithm for the 4-Coloring Problem

Authors: Pu Wu, Huanyu Gu, Huiqin Jiang, Zehui Shao, and Jin Xu


Abstract
We explore the 4-coloring problem, a fundamental combinatorial NP-hard problem. Given a graph G, the 4-coloring problem asks whether there exists a function f from the vertex set of G to {1,2,3,4} such that f(u) ≠ f(v) for each edge uv of G. Such a function f is referred to as a 4-coloring of G. The fastest known algorithm for the 4-coloring problem, introduced by Fomin, Gaspers, and Saurabh (COCOON 2007), runs in O(1.7272ⁿ) time and exponential space. In this paper, we propose an improved algorithm for the 4-coloring problem with a time complexity of O(1.7159ⁿ) and polynomial space. Our algorithm is deterministic and built upon a novel method. Specifically, inspired by previous algorithmic approaches to the 4-coloring problem, such as the aforementioned O(1.7272ⁿ)-time algorithm, we consider instances (G,I,S), where G is a graph and I, S are subsets of its vertex set representing vertices colored with 1 and vertices that cannot be colored with 1, respectively. For a given instance (G,I,S), we aim to determine whether there exists a 4-coloring f of G such that f(v) = 1 for v ∈ I and f(v) ≠ 1 for v ∈ S. Our key innovation lies in recognizing that, by leveraging certain combinatorial properties, the instance (G,I,S) can be solved efficiently when G-I-S is a union of K₃’s and K₄’s (where K₃ and K₄ denote the complete graphs on 3 and 4 vertices, respectively). The ability to solve such instances efficiently enables us to devise a branching algorithm that also handles instances (G,I,S) where G-I-S is not a union of K₃’s and K₄’s (the remaining case). Based on this method, we derive our final algorithm.
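
To make the (G,I,S) instances concrete, the following brute-force backtracking checker decides exactly the question posed above: is there a 4-coloring f with f(v) = 1 on I and f(v) ≠ 1 on S? It is the trivial exponential baseline, not the paper's O(1.7159ⁿ) branching algorithm, and the code is our own illustration.

def four_colorable(adj, I=frozenset(), S=frozenset()):
    # adj[v] lists the neighbors of vertex v (vertices are 0..n-1).
    n = len(adj)
    color = [0] * n                            # 0 means "not yet colored"

    def consistent(v, c):
        return all(color[u] != c for u in adj[v] if color[u] != 0)

    def solve(v):
        if v == n:
            return True
        # Constraints of the (G, I, S) instance restrict v's palette.
        allowed = (1,) if v in I else (2, 3, 4) if v in S else (1, 2, 3, 4)
        for c in allowed:
            if consistent(v, c):
                color[v] = c
                if solve(v + 1):
                    return True
                color[v] = 0
        return False

    return solve(0)

# K4 is 4-colorable, and stays 4-colorable if vertex 0 must avoid color 1.
k4 = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
print(four_colorable(k4), four_colorable(k4, S=frozenset({0})))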

Cite as

Pu Wu, Huanyu Gu, Huiqin Jiang, Zehui Shao, and Jin Xu. A Faster Algorithm for the 4-Coloring Problem. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 103:1-103:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{wu_et_al:LIPIcs.ESA.2024.103,
  author =	{Wu, Pu and Gu, Huanyu and Jiang, Huiqin and Shao, Zehui and Xu, Jin},
  title =	{{A Faster Algorithm for the 4-Coloring Problem}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{103:1--103:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.103},
  URN =		{urn:nbn:de:0030-drops-211749},
  doi =		{10.4230/LIPIcs.ESA.2024.103},
  annote =	{Keywords: Graph coloring, Graph algorithms, Exact algorithms}
}
Document
Solving Directed Multiway Cut Faster Than 2ⁿ

Authors: Mingyu Xiao


Abstract
In the Directed Multiway Cut problem, we are given a directed graph G = (V,E) and a subset T ⊆ V, called the terminal set. The aim is to find a minimum-size set S ⊆ V ⧵ T such that, after deleting S, no directed path exists from one terminal to another in the remaining graph. It has been an open question whether Directed Multiway Cut can be solved faster than the trivial running-time bound O^*(2^{|V|}). In this paper, we provide a positive answer to this question, presenting an algorithm with running-time bound O(1.9967^{|V|}).
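
The trivial O^*(2^{|V|}) bound that the paper improves on is exhaustive search over subsets of V ⧵ T. The small Python sketch below (usable only on tiny graphs; names are ours) shows exactly what that baseline computes.

from itertools import combinations

def reaches(adj, s, t):
    # Depth-first search: is there a directed path from s to t?
    stack, seen = [s], {s}
    while stack:
        v = stack.pop()
        if v == t:
            return True
        for u in adj.get(v, []):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return False

def directed_multiway_cut(n, edges, terminals):
    # Try every S subset of V \ T in order of increasing size; return the
    # first S whose removal leaves no terminal-to-terminal directed path.
    rest = [v for v in range(n) if v not in terminals]
    for size in range(len(rest) + 1):
        for S in combinations(rest, size):
            removed = set(S)
            adj = {v: [u for (w, u) in edges if w == v and u not in removed]
                   for v in range(n) if v not in removed}
            if not any(reaches(adj, s, t)
                       for s in terminals for t in terminals if s != t):
                return removed

# Terminals 0 and 3, connected through vertices 1 and 2: the cut is {1, 2}.
print(directed_multiway_cut(4, [(0, 1), (1, 3), (0, 2), (2, 3)], {0, 3}))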

Cite as

Mingyu Xiao. Solving Directed Multiway Cut Faster Than 2ⁿ. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 104:1-104:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{xiao:LIPIcs.ESA.2024.104,
  author =	{Xiao, Mingyu},
  title =	{{Solving Directed Multiway Cut Faster Than 2ⁿ}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{104:1--104:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.104},
  URN =		{urn:nbn:de:0030-drops-211758},
  doi =		{10.4230/LIPIcs.ESA.2024.104},
  annote =	{Keywords: Exact Algorithms, Parameterized Algorithms, Directed Multiway Cut, Directed Multicut, Directed Graphs}
}
Document
The Last Success Problem with Samples

Authors: Toru Yoshinaga and Yasushi Kawase


Abstract
The last success problem is an optimal stopping problem that aims to maximize the probability of stopping on the last success in a sequence of n independent Bernoulli trials. In the classical setting, where complete information about the distributions is available, Bruss [Bruss, 2000] provided an optimal stopping policy that ensures a winning probability of 1/e. However, assuming complete knowledge of the distributions is unrealistic in many practical applications. This paper investigates a variant of the last success problem in which samples from each distribution are available instead of complete knowledge of the distributions. When a single sample from each distribution is allowed, we provide a deterministic policy that guarantees a winning probability of 1/4. This is best possible, matching the upper bound provided by Nuti and Vondrák [Nuti and Vondrák, 2023]. Furthermore, for any positive constant ε, we show that a constant number of samples from each distribution suffices to guarantee a winning probability of 1/e - ε.
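
The full-information policy of Bruss that the sample-based variants relax is the odds algorithm: sum the odds r_i = p_i/(1-p_i) backwards from the last trial until the sum first reaches 1, obtaining a threshold index s, then stop on the first success at or after s. A short sketch with the standard formulas (function names and the example probabilities are ours):

def bruss_threshold(ps):
    # Scan from the end; the stop index s is where the suffix sum of odds
    # r_i = p_i / (1 - p_i) first reaches 1 (s = 0 if it never does).
    odds_sum, s = 0.0, 0
    for i in range(len(ps) - 1, -1, -1):
        odds_sum += ps[i] / (1 - ps[i])
        if odds_sum >= 1:
            s = i
            break
    return s

def win_probability(ps, s):
    # Probability that stopping on the first success at or after s wins:
    #   prod_{i >= s} (1 - p_i) * sum_{i >= s} p_i / (1 - p_i).
    q, r = 1.0, 0.0
    for p in ps[s:]:
        q *= 1 - p
        r += p / (1 - p)
    return q * r

ps = [1 / (i + 2) for i in range(20)]          # example success probabilities
s = bruss_threshold(ps)
print(s, win_probability(ps, s))               # winning probability >= 1/e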

Cite as

Toru Yoshinaga and Yasushi Kawase. The Last Success Problem with Samples. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 105:1-105:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{yoshinaga_et_al:LIPIcs.ESA.2024.105,
  author =	{Yoshinaga, Toru and Kawase, Yasushi},
  title =	{{The Last Success Problem with Samples}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{105:1--105:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.105},
  URN =		{urn:nbn:de:0030-drops-211762},
  doi =		{10.4230/LIPIcs.ESA.2024.105},
  annote =	{Keywords: The Last Success Problem, Secretary Problem, Sample Information Model, Optimal Stopping, Online Algorithms}
}
