LIPIcs, Volume 322

35th International Symposium on Algorithms and Computation (ISAAC 2024)




Event

ISAAC 2024, December 8-11, 2024, Sydney, Australia

Editors

Julián Mestre
  • School of Computer Science, The University of Sydney, Australia
Anthony Wirth
  • School of Computer Science, The University of Sydney, Australia

Publication Details

  • published at: 2024-12-04
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-354-6

Documents
Document
Complete Volume
LIPIcs, Volume 322, ISAAC 2024, Complete Volume

Authors: Julián Mestre and Anthony Wirth


Abstract
LIPIcs, Volume 322, ISAAC 2024, Complete Volume

Cite as

35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 1-956, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Proceedings{mestre_et_al:LIPIcs.ISAAC.2024,
  title =	{{LIPIcs, Volume 322, ISAAC 2024, Complete Volume}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{1--956},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024},
  URN =		{urn:nbn:de:0030-drops-222718},
  doi =		{10.4230/LIPIcs.ISAAC.2024},
  annote =	{Keywords: LIPIcs, Volume 322, ISAAC 2024, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Julián Mestre and Anthony Wirth


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 0:i-0:xviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{mestre_et_al:LIPIcs.ISAAC.2024.0,
  author =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{0:i--0:xviii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.0},
  URN =		{urn:nbn:de:0030-drops-222702},
  doi =		{10.4230/LIPIcs.ISAAC.2024.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Algorithmic Problems in Discrete Choice (Invited Talk)

Authors: Ravi Kumar


Abstract
In discrete choice, a user selects one option from a finite set of available alternatives, a process that is crucial for recommendation systems applications in e-commerce, social media, search engines, etc. A popular way to model discrete choice is through Random Utility Models (RUMs). RUMs assume that users assign values to options and choose the one with the highest value from among the available alternatives. RUMs have become increasingly important in the Web era; they offer an elegant mathematical framework for researchers to model user choices and predict user behavior based on (possibly limited) observations. While RUMs have been extensively studied in behavioral economics and social sciences, many basic algorithmic tasks remain poorly understood. In this talk, we will discuss various algorithmic and learning questions concerning RUMs.
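A minimal simulation may make the model concrete. The sketch below (ours, not from the talk; all function names are illustrative) samples choices from a RUM with i.i.d. Gumbel noise, the special case that recovers the classical multinomial-logit choice probabilities.

```python
import math
import random

def rum_choice(utilities, rng):
    """Sample one choice from a Random Utility Model with i.i.d. Gumbel
    noise (the multinomial-logit special case): each option gets its
    deterministic utility plus noise, and the argmax is chosen."""
    noisy = []
    for u in utilities:
        v = rng.uniform(1e-9, 1.0 - 1e-9)         # avoid log(0) at the endpoints
        noisy.append(u - math.log(-math.log(v)))  # standard Gumbel draw via inverse CDF
    return max(range(len(noisy)), key=noisy.__getitem__)

def choice_frequencies(utilities, trials=20000, seed=0):
    """Estimate choice probabilities from repeated simulated observations."""
    rng = random.Random(seed)
    counts = [0] * len(utilities)
    for _ in range(trials):
        counts[rum_choice(utilities, rng)] += 1
    return [c / trials for c in counts]
```

For utilities (1, 0) the empirical frequency of the first option should approach e/(1+e) ≈ 0.731, the logit probability.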

Cite as

Ravi Kumar. Algorithmic Problems in Discrete Choice (Invited Talk). In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


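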

@InProceedings{kumar:LIPIcs.ISAAC.2024.1,
  author =	{Kumar, Ravi},
  title =	{{Algorithmic Problems in Discrete Choice}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.1},
  URN =		{urn:nbn:de:0030-drops-221287},
  doi =		{10.4230/LIPIcs.ISAAC.2024.1},
  annote =	{Keywords: discrete choice theory, random utility models, user behavior}
}
Document
Invited Talk
Data Privacy: The Land Where Average Cases Don't Exist and Assumptions Quickly Perish (Invited Talk)

Authors: Olga Ohrimenko


Abstract
Machine learning on personal and sensitive data raises serious privacy concerns and creates potential for inadvertent information leakage (e.g., extraction of private messages or images from generative models). However, incorporating analysis of such data in decision making can benefit individuals and society at large (e.g., in healthcare). To strike a balance between these two conflicting objectives, one must ensure that data analysis with strong confidentiality guarantees is deployed and securely implemented. Differential privacy (DP) is emerging as a leading framework for analyzing data while maintaining mathematical privacy guarantees. Although it has seen some real-world deployment (e.g., by Apple, Microsoft, and Google), such instances remain limited and are often constrained to specific scenarios. Why? In this talk, I argue that part of the challenge lies in the assumptions DP makes about its deployment environment. By examining several DP systems and their assumptions, I demonstrate how private information can be extracted using, for example, side-channel information or the ability to rewind the system’s state. I then give an overview of efficient algorithms and protocols to realize these assumptions and ensure secure deployment of differential privacy.
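For context on the guarantee at stake, the basic ε-DP primitive such deployments build on is the Laplace mechanism; a minimal sketch (ours, not from the talk) follows. The sampling uses the fact that the difference of two i.i.d. exponential draws is Laplace-distributed.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace(sensitivity/epsilon) noise, the
    standard epsilon-differentially-private mechanism for a numeric
    query with the given L1 sensitivity."""
    scale = sensitivity / epsilon
    # Difference of two i.i.d. Exponential(1/scale) draws is Laplace(0, scale).
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_value + noise

# Example: privately release a count of 100 with sensitivity 1 and epsilon 0.5.
rng = random.Random(42)
releases = [laplace_mechanism(100, 1.0, 0.5, rng) for _ in range(20000)]
mean = sum(releases) / len(releases)
```

The released values are unbiased, so their average concentrates around the true count; the talk's point is that this mathematical guarantee can still be undermined by the deployment environment (side channels, state rewinding).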

Cite as

Olga Ohrimenko. Data Privacy: The Land Where Average Cases Don't Exist and Assumptions Quickly Perish (Invited Talk). In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ohrimenko:LIPIcs.ISAAC.2024.2,
  author =	{Ohrimenko, Olga},
  title =	{{Data Privacy: The Land Where Average Cases Don't Exist and Assumptions Quickly Perish}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.2},
  URN =		{urn:nbn:de:0030-drops-221290},
  doi =		{10.4230/LIPIcs.ISAAC.2024.2},
  annote =	{Keywords: Differential privacy, side-channel attacks, trusted execution environment, privacy budget, state continuity}
}
Document
Invited Talk
Role of Structured Matrices in Fine-Grained Algorithm Design (Invited Talk)

Authors: Barna Saha


Abstract
Fine-grained complexity attempts to precisely determine the time complexity of a problem and has emerged as a guide for algorithm design in recent times. Some of the central problems in fine-grained complexity deal with the computation of distances. For example, computing all pairs shortest paths in a weighted graph, computing edit distance between two sequences or two trees, and computing distance of a sequence from a context free language. Many of these problems reduce to computation of matrix products over various algebraic structures, predominantly over the (min,+) semiring. Obtaining a truly subcubic algorithm for (min,+) product is one of the outstanding open questions in computer science. Interestingly, many of the aforementioned distance computation problems have some additional structural properties. Specifically, when we perturb the inputs slightly, we do not expect a huge change in the output. This simple yet powerful observation has led to better algorithms for many problems for which we were able to improve the running time after several decades. This includes problems such as the Language Edit Distance, RNA folding, and Dyck Edit Distance. Indeed, this structure in the problem leads to matrices that have the Lipschitz property, and we gave the first truly subcubic time algorithm for computing the (min,+) product over such Lipschitz matrices. Follow-up work by several researchers obtained improved bounds for monotone matrices, and for (min,+) convolution under similar structures, leading to improved bounds for a series of optimization problems. These results yield not just faster exact algorithms but also better approximation algorithms. In particular, we show how fast (min,+) product computation over monotone matrices can lead to better additive approximation algorithms for computing all pairs shortest paths on unweighted undirected graphs, leading to improvements after twenty-four years.
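As a reference point for the products discussed above, here is the naive cubic (min,+) product (our baseline sketch, not the talk's algorithm). The structured-matrix results improve on exactly this computation; iterating it on a graph's weight matrix yields all-pairs shortest paths.

```python
def min_plus_product(A, B):
    """Naive (min,+) matrix product over the tropical semiring:
    C[i][j] = min_k A[i][k] + B[k][j], computed in O(n^3) time."""
    n, m, p = len(A), len(B), len(B[0])
    assert m == len(A[0]), "inner dimensions must agree"
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

Squaring a graph's distance matrix with this product (log n times) computes all-pairs shortest paths, which is why a truly subcubic (min,+) product would be such a breakthrough.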

Cite as

Barna Saha. Role of Structured Matrices in Fine-Grained Algorithm Design (Invited Talk). In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{saha:LIPIcs.ISAAC.2024.3,
  author =	{Saha, Barna},
  title =	{{Role of Structured Matrices in Fine-Grained Algorithm Design}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.3},
  URN =		{urn:nbn:de:0030-drops-221303},
  doi =		{10.4230/LIPIcs.ISAAC.2024.3},
  annote =	{Keywords: Fine-Grained Complexity, Fast Algorithms}
}
Document
Minimum Plane Bichromatic Spanning Trees

Authors: Hugo A. Akitaya, Ahmad Biniaz, Erik D. Demaine, Linda Kleist, Frederick Stock, and Csaba D. Tóth


Abstract
For a set of red and blue points in the plane, a minimum bichromatic spanning tree (MinBST) is a shortest spanning tree of the points such that every edge has a red and a blue endpoint. A MinBST can be computed in O(n log n) time where n is the number of points. In contrast to the standard Euclidean MST, which is always plane (noncrossing), a MinBST may have edges that cross each other. However, we prove that a MinBST is quasi-plane, that is, it does not contain three pairwise crossing edges, and we determine the maximum number of crossings. Moreover, we study the problem of finding a minimum plane bichromatic spanning tree (MinPBST) which is a shortest bichromatic spanning tree with pairwise noncrossing edges. This problem is known to be NP-hard. The previous best approximation algorithm, due to Borgelt et al. (2009), has a ratio of O(√n). It is also known that the optimum solution can be computed in polynomial time in some special cases, for instance, when the points are in convex position, collinear, semi-collinear, or when one color class has constant size. We present an O(log n)-factor approximation algorithm for the general case.
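For intuition, a MinBST itself is easy to compute naively: run Kruskal's algorithm on the complete bipartite red-blue graph. The sketch below (ours; O(n² log n) rather than the O(n log n) cited above) makes no attempt at planarity, which is exactly the paper's point: the output may have crossing edges.

```python
import math

def min_bst(red, blue):
    """Minimum bichromatic spanning tree via Kruskal's algorithm on the
    complete bipartite graph joining every red point to every blue one.
    Returns (edge list over indices into red+blue, total length)."""
    pts = red + blue
    nr = len(red)
    edges = sorted((math.dist(r, b), i, nr + j)
                   for i, r in enumerate(red) for j, b in enumerate(blue))
    parent = list(range(len(pts)))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, total = [], 0.0
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                  # keep the edge if it joins two components
            parent[ra] = rb
            tree.append((a, b))
            total += w
    return tree, total
```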

Cite as

Hugo A. Akitaya, Ahmad Biniaz, Erik D. Demaine, Linda Kleist, Frederick Stock, and Csaba D. Tóth. Minimum Plane Bichromatic Spanning Trees. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{a.akitaya_et_al:LIPIcs.ISAAC.2024.4,
  author =	{A. Akitaya, Hugo and Biniaz, Ahmad and Demaine, Erik D. and Kleist, Linda and Stock, Frederick and T\'{o}th, Csaba D.},
  title =	{{Minimum Plane Bichromatic Spanning Trees}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.4},
  URN =		{urn:nbn:de:0030-drops-221319},
  doi =		{10.4230/LIPIcs.ISAAC.2024.4},
  annote =	{Keywords: Bichromatic Spanning Tree, Minimum Spanning Tree, Plane Tree}
}
Document
Constrained Two-Line Center Problems

Authors: Taehoon Ahn and Sang Won Bae


Abstract
Given a set P of n points in the plane, the two-line center problem asks to find two lines that minimize the maximum distance from each point in P to its closer one of the two resulting lines. The currently best algorithm for the problem takes O(n² log² n) time by Jaromczyk and Kowaluk in 1995. In this paper, we present faster algorithms for three variants of the two-line center problem in which the orientations of the resulting lines are constrained. Specifically, our algorithms solve the problem in O(n log n) time when the orientations of both lines are fixed; in O(n log³ n) time when the orientation of one line is fixed; and in O(n² α(n) log n) time when the angle between the two lines is fixed, where α(n) denotes the inverse Ackermann function.
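To make the fixed-orientations variant concrete: the distance from a point to a line of fixed orientation depends only on the point's projection onto that line's unit normal, and the points served by one line always form a contiguous window in sorted projection order. The O(n²) brute force below (ours, for illustration; the paper achieves O(n log n)) enumerates these windows.

```python
import math

def two_line_cost_fixed_orientations(points, theta1, theta2):
    """Optimal covering radius when both line orientations are fixed.
    Projects points onto each line's unit normal; the set served by
    line 1 is a contiguous window in sorted projection order, so we
    enumerate all windows and cover the rest with line 2."""
    n1 = (-math.sin(theta1), math.cos(theta1))   # unit normal of line 1
    n2 = (-math.sin(theta2), math.cos(theta2))   # unit normal of line 2
    order = sorted(range(len(points)),
                   key=lambda i: points[i][0] * n1[0] + points[i][1] * n1[1])
    px = [points[i][0] * n1[0] + points[i][1] * n1[1] for i in order]
    py = [points[i][0] * n2[0] + points[i][1] * n2[1] for i in order]
    n = len(points)
    INF = float("inf")
    # Prefix/suffix min and max of the line-2 projections.
    pre_lo = [INF] * (n + 1); pre_hi = [-INF] * (n + 1)
    for i in range(n):
        pre_lo[i + 1] = min(pre_lo[i], py[i])
        pre_hi[i + 1] = max(pre_hi[i], py[i])
    suf_lo = [INF] * (n + 1); suf_hi = [-INF] * (n + 1)
    for i in range(n - 1, -1, -1):
        suf_lo[i] = min(suf_lo[i + 1], py[i])
        suf_hi[i] = max(suf_hi[i + 1], py[i])
    best = (suf_hi[0] - suf_lo[0]) / 2           # everything served by line 2
    for i in range(n):
        for j in range(i, n):                     # window [i..j] served by line 1
            w1 = (px[j] - px[i]) / 2
            lo = min(pre_lo[i], suf_lo[j + 1])
            hi = max(pre_hi[i], suf_hi[j + 1])
            w2 = 0.0 if hi < lo else (hi - lo) / 2
            best = min(best, max(w1, w2))
    return best
```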

Cite as

Taehoon Ahn and Sang Won Bae. Constrained Two-Line Center Problems. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 5:1-5:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ahn_et_al:LIPIcs.ISAAC.2024.5,
  author =	{Ahn, Taehoon and Bae, Sang Won},
  title =	{{Constrained Two-Line Center Problems}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{5:1--5:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.5},
  URN =		{urn:nbn:de:0030-drops-221327},
  doi =		{10.4230/LIPIcs.ISAAC.2024.5},
  annote =	{Keywords: two-line center problem, geometric location problem, geometric optimization}
}
Document
Dynamic Parameterized Problems on Unit Disk Graphs

Authors: Shinwoo An, Kyungjin Cho, Leo Jang, Byeonghyeon Jung, Yudam Lee, Eunjin Oh, Donghun Shin, Hyeonjun Shin, and Chanho Song


Abstract
In this paper, we study fundamental parameterized problems such as k-Path/Cycle, Vertex Cover, Triangle Hitting Set, Feedback Vertex Set, and Cycle Packing for dynamic unit disk graphs. Given a vertex set V changing dynamically under vertex insertions and deletions, our goal is to maintain data structures so that the aforementioned parameterized problems on the unit disk graph induced by V can be solved efficiently. Although dynamic parameterized problems on general graphs have been studied extensively, no previous work focuses on unit disk graphs. In this paper, we present the first data structures for fundamental parameterized problems on dynamic unit disk graphs. More specifically, our data structure supports 2^O(√k) update time and O(k) query time for k-Path/Cycle. For the other problems, our data structures support O(log n) update time and 2^O(√k) query time, where k denotes the output size.
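As a static reference for the dynamic setting above, the graph class itself is simple to construct (our naive O(n²) sketch; the paper's contribution is maintaining parameterized solutions on it under updates):

```python
import math

def unit_disk_graph(points):
    """Build the unit disk graph on the given points: two vertices are
    adjacent iff their Euclidean distance is at most 1."""
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= 1.0:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```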

Cite as

Shinwoo An, Kyungjin Cho, Leo Jang, Byeonghyeon Jung, Yudam Lee, Eunjin Oh, Donghun Shin, Hyeonjun Shin, and Chanho Song. Dynamic Parameterized Problems on Unit Disk Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{an_et_al:LIPIcs.ISAAC.2024.6,
  author =	{An, Shinwoo and Cho, Kyungjin and Jang, Leo and Jung, Byeonghyeon and Lee, Yudam and Oh, Eunjin and Shin, Donghun and Shin, Hyeonjun and Song, Chanho},
  title =	{{Dynamic Parameterized Problems on Unit Disk Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{6:1--6:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.6},
  URN =		{urn:nbn:de:0030-drops-221337},
  doi =		{10.4230/LIPIcs.ISAAC.2024.6},
  annote =	{Keywords: Unit disk graphs, dynamic parameterized algorithms, kernelization}
}
Document
On the Connected Minimum Sum of Radii Problem

Authors: Hyung-Chan An and Mong-Jen Kao


Abstract
In this paper, we study the connected minimum sum of radii problem. In this problem, we are given as input a metric defined on a set of facilities and clients, along with some cost parameters. The objective is to open a subset of facilities, assign every client to an open facility, and connect the open facilities using a Steiner tree so that the weighted (by cost parameters) sum of the maximum assignment distance of each facility and the Steiner tree cost is minimized. This problem introduces the min-sum radii objective, an objective function that is widely considered in the clustering literature, to the connected facility location problem, a well-studied network design/clustering problem. This problem is useful in communication network design on a shared medium, or energy optimization of mobile wireless chargers. We present both a constant-factor approximation algorithm and hardness results for this problem. Our algorithm is based on rounding an LP relaxation that jointly models the min-sum of radii problem and the rooted Steiner tree problem. To round the solution, we use a careful clustering procedure that guarantees that every open facility has a proxy client nearby. This allows part of the LP solution to be reinterpreted as a fractional rooted Steiner tree. Combined with a cost filtering technique, this yields a 5.542-approximation algorithm.
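To pin down the objective, the evaluator below (ours, for illustration; the single weight `mu` is a stand-in for the abstract's cost parameters) computes the value of a candidate solution: the sum of facility radii plus the weighted Steiner tree cost.

```python
def connected_msr_cost(dist, assignment, tree_edges, mu=1.0):
    """Objective of connected min sum of radii: each open facility's
    radius is the largest distance of a client assigned to it, and the
    total cost adds mu times the length of the tree connecting the
    open facilities.  dist is a symmetric metric given as dist[a][b]."""
    radius = {}
    for client, f in assignment.items():
        radius[f] = max(radius.get(f, 0.0), dist[client][f])
    tree_cost = sum(dist[u][v] for u, v in tree_edges)
    return sum(radius.values()) + mu * tree_cost
```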

Cite as

Hyung-Chan An and Mong-Jen Kao. On the Connected Minimum Sum of Radii Problem. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 7:1-7:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{an_et_al:LIPIcs.ISAAC.2024.7,
  author =	{An, Hyung-Chan and Kao, Mong-Jen},
  title =	{{On the Connected Minimum Sum of Radii Problem}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{7:1--7:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.7},
  URN =		{urn:nbn:de:0030-drops-221342},
  doi =		{10.4230/LIPIcs.ISAAC.2024.7},
  annote =	{Keywords: connected minimum sum of radii, minimum sum of radii, connected facility location, approximation algorithms, Steiner trees}
}
Document
Lower Bounds for Adaptive Relaxation-Based Algorithms for Single-Source Shortest Paths

Authors: Sunny Atalig, Alexander Hickerson, Arrdya Srivastav, Tingting Zheng, and Marek Chrobak


Abstract
We consider the classical single-source shortest path problem in directed weighted graphs. D. Eppstein recently proved an Ω(n³) lower bound for oblivious algorithms that use relaxation operations to update the tentative distances from the source vertex. We generalize this result by extending this Ω(n³) lower bound to adaptive algorithms that, in addition to relaxations, can perform queries involving some simple types of linear inequalities between edge weights and tentative distances. Our model captures as a special case the operations on tentative distances used by Dijkstra’s algorithm.
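Bellman-Ford is the canonical oblivious algorithm in this model: it performs a fixed sequence of relaxations regardless of the weights, which is exactly the class of operations the Ω(n³) bound counts (sketch ours).

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths using relaxation operations only.
    The update dist[v] = min(dist[v], dist[u] + w) is the 'relaxation'
    counted by the lower bound; the relaxation order here is oblivious
    (fixed in advance, independent of the edge weights)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):            # n-1 rounds over all edges suffice
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Dijkstra's algorithm also only relaxes edges, but chooses the order adaptively from the tentative distances, which is why the adaptive extension of the lower bound is needed to capture it.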

Cite as

Sunny Atalig, Alexander Hickerson, Arrdya Srivastav, Tingting Zheng, and Marek Chrobak. Lower Bounds for Adaptive Relaxation-Based Algorithms for Single-Source Shortest Paths. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{atalig_et_al:LIPIcs.ISAAC.2024.8,
  author =	{Atalig, Sunny and Hickerson, Alexander and Srivastav, Arrdya and Zheng, Tingting and Chrobak, Marek},
  title =	{{Lower Bounds for Adaptive Relaxation-Based Algorithms for Single-Source Shortest Paths}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.8},
  URN =		{urn:nbn:de:0030-drops-221356},
  doi =		{10.4230/LIPIcs.ISAAC.2024.8},
  annote =	{Keywords: single-source shortest paths, lower bounds, decision trees}
}
Document
Fault-Tolerant Bounded Flow Preservers

Authors: Shivam Bansal, Keerti Choudhary, Harkirat Dhanoa, and Harsh Wardhan


Abstract
Given a directed graph G = (V, E) with n vertices, m edges and a designated source vertex s ∈ V, we consider the question of finding a sparse subgraph H of G that preserves the flow from s up to a given threshold λ even after failure of k edges. We refer to such subgraphs as (λ,k)-fault-tolerant bounded-flow-preserver ((λ,k)-FT-BFP). Formally, for any F ⊆ E of at most k edges and any v ∈ V, the (s, v)-max-flow in H⧵F is equal to (s, v)-max-flow in G⧵F, if the latter is bounded by λ, and at least λ otherwise. Our contributions are summarized as follows: 1) We provide a polynomial time algorithm that given any graph G constructs a (λ,k)-FT-BFP of G with at most λ 2^kn edges. 2) We also prove a matching lower bound of Ω(λ 2^kn) on the size of (λ,k)-FT-BFP. In particular, we show that for every λ,k,n ⩾ 1, there exists an n-vertex directed graph whose optimal (λ,k)-FT-BFP contains Ω(min{2^kλ n, n²}) edges. 3) Furthermore, we show that the problem of computing approximate (λ,k)-FT-BFP is NP-hard for any approximation ratio that is better than O(log(λ^{-1} n)).

Cite as

Shivam Bansal, Keerti Choudhary, Harkirat Dhanoa, and Harsh Wardhan. Fault-Tolerant Bounded Flow Preservers. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bansal_et_al:LIPIcs.ISAAC.2024.9,
  author =	{Bansal, Shivam and Choudhary, Keerti and Dhanoa, Harkirat and Wardhan, Harsh},
  title =	{{Fault-Tolerant Bounded Flow Preservers}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.9},
  URN =		{urn:nbn:de:0030-drops-221363},
  doi =		{10.4230/LIPIcs.ISAAC.2024.9},
  annote =	{Keywords: Fault-tolerant Data-structures, Max-flow, Bounded Flow Preservers}
}
Document
Optimal Sensitivity Oracle for Steiner Mincut

Authors: Koustav Bhanja


Abstract
Let G = (V,E) be an undirected weighted graph on n = |V| vertices and S ⊆ V be a Steiner set. Steiner mincut is a well-studied concept, which also provides a generalization to both (s,t)-mincut (when |S| = 2) and global mincut (when |S| = n). Here, we address the problem of designing a compact data structure that can efficiently report a Steiner mincut and its capacity after the failure of any edge in G; such a data structure is known as a Sensitivity Oracle for Steiner mincut. In the area of minimum cuts, although many Sensitivity Oracles have been designed for unweighted graphs, in weighted graphs Sensitivity Oracles exist only for (s,t)-mincut [Annals of Operations Research 1991, NETWORKS 2019, ICALP 2024], which is just a special case of Steiner mincut. Here, we generalize this result from |S| = 2 to any arbitrary set S ⊆ V, that is, 2 ≤ |S| ≤ n. We first design an O(n²) space Sensitivity Oracle for Steiner mincut by suitably generalizing the approach used for (s,t)-mincuts [Annals of Operations Research 1991, NETWORKS 2019]. However, the main question that arises quite naturally is the following. Can we design a Sensitivity Oracle for Steiner mincut that breaks the O(n²) bound on space? In this article, we present the following two results that provide an answer to this question. 1. Sensitivity Oracle: Assuming the capacity of every edge is known, a) there is an O(n) space data structure that can report the capacity of a Steiner mincut in O(1) time and b) there is an O(n(n-|S|+1)) space data structure that can report a Steiner mincut in O(n) time after the failure of any edge in G. 2. Lower Bound: We show that any data structure that, after the failure of any edge in G, can report a Steiner mincut or its capacity must occupy Ω(n²) bits of space in the worst case, irrespective of the size of the Steiner set. The lower bound in (2) shows that the assumption in (1) is essential to break the Ω(n²) lower bound on space.
The Sensitivity Oracle in (1.b) occupies only subquadratic, that is O(n^{1+ε}), space if |S| = n-n^ε+1, for every ε ∈ [0,1). For |S| = n-k for any constant k ≥ 0, it occupies only O(n) space. So, we also present the first Sensitivity Oracle occupying O(n) space for global mincut. In addition, we match the existing best-known bounds on both space and query time for (s,t)-mincut [Annals of Operations Research 1991, NETWORKS 2019] in undirected graphs.

Cite as

Koustav Bhanja. Optimal Sensitivity Oracle for Steiner Mincut. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bhanja:LIPIcs.ISAAC.2024.10,
  author =	{Bhanja, Koustav},
  title =	{{Optimal Sensitivity Oracle for Steiner Mincut}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{10:1--10:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.10},
  URN =		{urn:nbn:de:0030-drops-221371},
  doi =		{10.4230/LIPIcs.ISAAC.2024.10},
  annote =	{Keywords: mincut, (s, t)-mincut, Steiner mincut, fault tolerant structures, data structure, vital edges, vitality, sensitivity oracle}
}
Document
Temporal Queries for Dynamic Temporal Forests

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, Guido Proietti, and Alessandro Straziota


Abstract
In a temporal forest each edge has an associated set of time labels that specify the time instants in which the edges are available. A temporal path from vertex u to vertex v in the forest is a selection of a label for each edge in the unique path from u to v, assuming it exists, such that the labels selected for any two consecutive edges are non-decreasing. We design linear-size data structures that maintain a temporal forest of rooted trees under addition and deletion of both edge labels and singleton vertices, insertion of root-to-node edges, and removal of edges with no labels. Such data structures can answer temporal reachability, earliest arrival, and latest departure queries. All queries and updates are handled in polylogarithmic worst-case time. Our results can be adapted to deal with latencies. More precisely, all the worst-case time bounds are asymptotically unaffected when latencies are uniform. For arbitrary latencies, the update time becomes amortized in the incremental case where only label additions and edge/singleton insertions are allowed as well as in the decremental case in which only label deletions and edge/singleton removals are allowed. To the best of our knowledge, the only previously known data structure supporting temporal reachability queries is due to Brito, Albertini, Casteigts, and Travençolo [Social Network Analysis and Mining, 2021], which can handle general temporal graphs, answers queries in logarithmic time in the worst case, but requires an amortized update time that is quadratic in the number of vertices, up to polylogarithmic factors.
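The query semantics can be stated in a few lines: along the unique tree path, greedily taking the smallest label that is at least the current time yields the earliest arrival, or certifies unreachability (sketch ours, ignoring latencies; the paper's contribution is supporting this under dynamic updates in polylogarithmic time).

```python
def earliest_arrival(label_sets):
    """Earliest arrival along a fixed path in a temporal forest: given,
    for each consecutive edge, its set of time labels, pick one label
    per edge so that the chosen labels are non-decreasing.  Greedily
    taking the smallest feasible label is optimal; returns the arrival
    time, or None if no temporal path exists."""
    t = float("-inf")
    for labels in label_sets:
        feasible = [l for l in labels if l >= t]   # labels usable at time t
        if not feasible:
            return None                            # path not temporally connected
        t = min(feasible)
    return t
```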

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, Guido Proietti, and Alessandro Straziota. Temporal Queries for Dynamic Temporal Forests. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bilo_et_al:LIPIcs.ISAAC.2024.11,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido and Straziota, Alessandro},
  title =	{{Temporal Queries for Dynamic Temporal Forests}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.11},
  URN =		{urn:nbn:de:0030-drops-221382},
  doi =		{10.4230/LIPIcs.ISAAC.2024.11},
  annote =	{Keywords: temporal graphs, temporal reachability, earliest arrival, latest departure, dynamic forests}
}
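The greedy label selection underlying earliest-arrival queries can be illustrated directly: walking along a (static) forest path, pick for each successive edge the smallest available label that is not below the label chosen so far. A minimal sketch — not the paper's dynamic data structure, which additionally supports polylogarithmic-time updates — assuming the path's edges are given in order, each with a sorted list of labels:

```python
from bisect import bisect_left

def earliest_arrival(path_labels, start=0):
    """Greedily pick, for each edge on the path, the smallest time label
    that is >= the label chosen for the previous edge.  Returns the label
    of the last edge (the earliest arrival time), or None if no
    non-decreasing label selection exists."""
    t = start
    for labels in path_labels:          # labels: sorted list of time labels
        i = bisect_left(labels, t)
        if i == len(labels):
            return None                 # no label on this edge is late enough
        t = labels[i]
    return t
```

Reachability from u to v then amounts to checking that the result is not None.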
Partitioning Problems with Splittings and Interval Targets

Authors: Samuel Bismuth, Vladislav Makarov, Erel Segal-Halevi, and Dana Shapira


Abstract
The n-way number partitioning problem is a classic problem in combinatorial optimization, with applications to diverse settings such as fair allocation and machine scheduling. All these problems are NP-hard, but various approximation algorithms are known. We consider three closely related kinds of approximations. The first two variants optimize the partition: in the first variant, some fixed number s of items can be split between two or more bins; in the second variant, we allow at most a fixed number t of splittings. The third variant is a decision problem: the largest bin sum must be within a pre-specified interval, parameterized by a fixed rational number u times the largest item size. When the number of bins n is unbounded, we show that every variant is strongly NP-complete. When the number of bins n is fixed, the running time depends on the fixed parameters s, t, u. For each variant, we give a complete picture of its running time. For n = 2, the running time is easy to identify. Our main results consider any fixed integer n ≥ 3. Using a two-way polynomial-time reduction between the first and the third variant, we show that n-way number partitioning with s split items can be solved in polynomial time if s ≥ n-2, and is NP-complete otherwise. Similarly, n-way number partitioning with t splittings can be solved in polynomial time if t ≥ n-1, and is NP-complete otherwise. Finally, we show that the third variant can be solved in polynomial time if u ≥ (n-2)/n, and is NP-complete otherwise. Our positive results for the optimization problems cover both min-max and max-min versions. Using the same reduction, we provide a fully polynomial-time approximation scheme for the case where the number of split items is smaller than n-2.

Cite as

Samuel Bismuth, Vladislav Makarov, Erel Segal-Halevi, and Dana Shapira. Partitioning Problems with Splittings and Interval Targets. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bismuth_et_al:LIPIcs.ISAAC.2024.12,
  author =	{Bismuth, Samuel and Makarov, Vladislav and Segal-Halevi, Erel and Shapira, Dana},
  title =	{{Partitioning Problems with Splittings and Interval Targets}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{12:1--12:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.12},
  URN =		{urn:nbn:de:0030-drops-221394},
  doi =		{10.4230/LIPIcs.ISAAC.2024.12},
  annote =	{Keywords: Number Partitioning, Fair Division, Identical Machine Scheduling}
}
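For context on the baseline these variants relax: the classical two-bin case with no splittings can be decided by the standard pseudo-polynomial subset-sum dynamic program. This is a textbook routine, not one of the paper's algorithms:

```python
def two_way_min_max(items):
    """Standard pseudo-polynomial DP for 2-way number partitioning with no
    splittings: returns the smallest achievable larger-bin sum."""
    total = sum(items)
    reachable = {0}                      # achievable subset sums
    for x in items:
        reachable |= {s + x for s in reachable}
    # best "half": the largest reachable sum not exceeding total/2
    best = max(s for s in reachable if 2 * s <= total)
    return total - best
```

With enough splittings allowed, the min-max optimum drops to the trivial fractional bound of total/n, which is the intuition behind the polynomial cases above.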
The Existential Theory of the Reals with Summation Operators

Authors: Markus Bläser, Julian Dörfler, Maciej Liśkiewicz, and Benito van der Zander


Abstract
To characterize the computational complexity of satisfiability problems for probabilistic and causal reasoning within Pearl’s Causal Hierarchy, van der Zander, Bläser, and Liśkiewicz [IJCAI 2023] introduce a new natural class, named succ-∃ℝ. This class can be viewed as a succinct variant of the well-studied class ∃ℝ based on the Existential Theory of the Reals (ETR). Analogously to ∃ℝ, succ-∃ℝ is an intermediate class between NEXP and EXPSPACE, the exponential versions of NP and PSPACE. The main contributions of this work are threefold. Firstly, we characterize the class succ-∃ℝ in terms of nondeterministic real Random-Access Machines (RAMs) and develop structural complexity theoretic results for real RAMs, including translation and hierarchy theorems. Notably, we demonstrate the separation of ∃ℝ and succ-∃ℝ. Secondly, we examine the complexity of model checking and satisfiability of fragments of existential second-order logic and probabilistic independence logic. We show succ-∃ℝ-completeness of several of these problems, for which the best-known complexity lower and upper bounds were previously NEXP-hardness and EXPSPACE, respectively. Thirdly, while succ-∃ℝ is characterized in terms of ordinary (non-succinct) ETR instances enriched by exponential sums and a mechanism to index exponentially many variables, in this paper, we prove that when only exponential sums are added, the corresponding class ∃ℝ^Σ is contained in PSPACE. We conjecture that this inclusion is strict, as this class is equivalent to adding a VNP-oracle to a polynomial time nondeterministic real RAM. Conversely, the addition of exponential products to ETR yields PSPACE. Furthermore, we study the satisfiability problem for probabilistic reasoning, with the additional requirement of a small model, and prove that this problem is complete for ∃ℝ^Σ.

Cite as

Markus Bläser, Julian Dörfler, Maciej Liśkiewicz, and Benito van der Zander. The Existential Theory of the Reals with Summation Operators. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{blaser_et_al:LIPIcs.ISAAC.2024.13,
  author =	{Bl\"{a}ser, Markus and D\"{o}rfler, Julian and Li\'{s}kiewicz, Maciej and van der Zander, Benito},
  title =	{{The Existential Theory of the Reals with Summation Operators}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{13:1--13:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.13},
  URN =		{urn:nbn:de:0030-drops-221407},
  doi =		{10.4230/LIPIcs.ISAAC.2024.13},
  annote =	{Keywords: Existential theory of the real numbers, Computational complexity, Probabilistic logic, Models of computation, Existential second order logic}
}
Routing from Pentagon to Octagon Delaunay Graphs

Authors: Prosenjit Bose, Jean-Lou De Carufel, and John Stuart


Abstract
The standard Delaunay triangulation is a geometric graph whose vertices are points in the plane, and two vertices share an edge if they lie on the boundary of an empty disk. If the disk is replaced with a homothet of a fixed convex shape C, then the resulting graph is called a C-Delaunay graph. We study the problem of local routing in C-Delaunay graphs where C is a regular polygon having five to eight sides. In particular, we generalize the routing algorithm of Chew for square-Delaunay graphs (Chew. SCG 1986, 169-177) in order to obtain the following approximate upper bounds of 4.640, 6.429, 8.531 and 4.054 on the spanning and routing ratios for pentagon-, hexagon-, septagon-, and octagon-Delaunay graphs, respectively. The exact expression for the upper bounds of the routing ratio is Ψ(n):= √{1+((cos(2π/n)+n-1)/sin(2π/n))^2} (if n ∈ {5,6,7}), √{1+((cos(π/8)cos(3π/8)+3)/(cos(π/8)sin(3π/8)))^2} (if n = 8). We show that these bounds are tight for the output of our routing algorithm by providing a point set where these bounds are achieved. We also include lower bounds of 1.708 and 1.995 on the spanning and routing ratios of the pentagon-Delaunay graph. Our upper bounds yield a significant improvement over the previous routing ratio upper bounds for this problem, which previously sat at around 400 for the pentagon, septagon, and octagon as well as 18 for the hexagon. Our routing ratios also provide significant improvements over the previously best known spanning ratios for pentagon-, septagon- and octagon-Delaunay graphs, which were around 45.

Cite as

Prosenjit Bose, Jean-Lou De Carufel, and John Stuart. Routing from Pentagon to Octagon Delaunay Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bose_et_al:LIPIcs.ISAAC.2024.14,
  author =	{Bose, Prosenjit and De Carufel, Jean-Lou and Stuart, John},
  title =	{{Routing from Pentagon to Octagon Delaunay Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.14},
  URN =		{urn:nbn:de:0030-drops-221411},
  doi =		{10.4230/LIPIcs.ISAAC.2024.14},
  annote =	{Keywords: Geometric Spanners, Generalized Delaunay Graphs, Local Routing Algorithms}
}
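The closed-form bound Ψ(n) stated in the abstract can be checked numerically; the sketch below simply evaluates the stated expressions and confirms the quoted approximations 4.640, 6.429, 8.531, and 4.054:

```python
from math import cos, sin, pi, sqrt

def psi(n):
    """Routing-ratio upper bound from the abstract for regular n-gon
    Delaunay graphs, n in {5, 6, 7, 8}."""
    if n in (5, 6, 7):
        return sqrt(1 + ((cos(2*pi/n) + n - 1) / sin(2*pi/n)) ** 2)
    if n == 8:
        return sqrt(1 + ((cos(pi/8)*cos(3*pi/8) + 3)
                         / (cos(pi/8)*sin(3*pi/8))) ** 2)
    raise ValueError("formula stated only for n in 5..8")
```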
On the Spanning and Routing Ratios of the Yao-Four Graph

Authors: Prosenjit Bose, Darryl Hill, Michiel Smid, and Tyler Tuttle


Abstract
The Yao graph is a geometric spanner that was independently introduced by Yao [SIAM J. Comput., 1982] and Flinchbaugh and Jones [SIAM J. Algebr. Discret. Appl., 1981]. We prove that for any two vertices of the undirected version of the Yao graph with four cones, there is a path between them with length at most 13 + 5/√2 ≈ 16.54 times the Euclidean distance between the vertices, improving the previous best bound of approximately 54.62. We also present an online routing algorithm for the directed Yao graph with four cones that constructs a path between any two vertices with length at most 17 + 9/√2 ≈ 23.36 times the Euclidean distance between the vertices. This is the first routing algorithm for a directed Yao graph with fewer than six cones. The algorithm uses knowledge of the coordinates of the current vertex, the (up to) four neighbours of the current vertex, and the destination vertex to make a routing decision. It also uses one additional bit of memory. We show how to dispense with this single bit at the cost of increasing the length of the path to √{331 + 154√2} ≈ 23.43 times the Euclidean distance between the vertices.

Cite as

Prosenjit Bose, Darryl Hill, Michiel Smid, and Tyler Tuttle. On the Spanning and Routing Ratios of the Yao-Four Graph. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 15:1-15:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bose_et_al:LIPIcs.ISAAC.2024.15,
  author =	{Bose, Prosenjit and Hill, Darryl and Smid, Michiel and Tuttle, Tyler},
  title =	{{On the Spanning and Routing Ratios of the Yao-Four Graph}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{15:1--15:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.15},
  URN =		{urn:nbn:de:0030-drops-221422},
  doi =		{10.4230/LIPIcs.ISAAC.2024.15},
  annote =	{Keywords: Yao graph, online routing, geometric spanners}
}
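The underlying Yao-four construction — from each point, one outgoing edge to the closest point in each of four quarter-plane cones — can be sketched as follows. The cone orientation here is an arbitrary choice, and this is only the graph construction, not the paper's routing algorithm:

```python
from math import atan2, pi, dist

def yao4_edges(points):
    """Directed Yao graph with four cones: for each point p, add an edge
    to the closest point in each of the four 90-degree cones around p
    (ties broken by iteration order)."""
    edges = []
    for p in points:
        best = {}                        # cone index -> closest point so far
        for q in points:
            if q == p:
                continue
            angle = atan2(q[1] - p[1], q[0] - p[0]) % (2 * pi)
            cone = int(angle // (pi / 2)) % 4   # % 4 guards float edge cases
            if cone not in best or dist(p, q) < dist(p, best[cone]):
                best[cone] = q
        edges += [(p, q) for q in best.values()]
    return edges
```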
FPT Approximations for Fair k-Min-Sum-Radii

Authors: Lena Carta, Lukas Drexler, Annika Hennes, Clemens Rösner, and Melanie Schmidt


Abstract
We consider the k-min-sum-radii (k-MSR) clustering problem with fairness constraints. The k-min-sum-radii problem is a mixture of the classical k-center and k-median problems. We are given a set of points P in a metric space and a number k and aim to partition the points into k clusters, each of the clusters having one designated center. The objective to minimize is the sum of the radii of the k clusters (where in k-center we would only consider the maximum radius and in k-median we would consider the sum of the individual points' costs). Various notions of fair clustering have been introduced lately, and we follow the definitions due to Chierichetti et al. [Flavio Chierichetti et al., 2017] which demand that cluster compositions shall follow the proportions of the input point set with respect to some given sensitive attribute. For the easier case where the sensitive attribute only has two possible values and each is equally frequent in the input, the aim is to compute a clustering where all clusters have a 1:1 ratio with respect to this attribute. We call this the 1:1 case. There has been a surge of FPT-approximation algorithms for the k-MSR problem lately, solving the problem both in the unconstrained case and in several constrained problem variants. We add to this research area by designing an FPT (6+ε)-approximation that works for k-MSR under the mentioned general fairness notion. For the special 1:1 case, we improve our algorithm to achieve a (3+ε)-approximation.

Cite as

Lena Carta, Lukas Drexler, Annika Hennes, Clemens Rösner, and Melanie Schmidt. FPT Approximations for Fair k-Min-Sum-Radii. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{carta_et_al:LIPIcs.ISAAC.2024.16,
  author =	{Carta, Lena and Drexler, Lukas and Hennes, Annika and R\"{o}sner, Clemens and Schmidt, Melanie},
  title =	{{FPT Approximations for Fair k-Min-Sum-Radii}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.16},
  URN =		{urn:nbn:de:0030-drops-221438},
  doi =		{10.4230/LIPIcs.ISAAC.2024.16},
  annote =	{Keywords: Clustering, k-min-sum-radii, fairness}
}
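The two objectives contrasted in the abstract are easy to state in code. A small sketch, under the assumption (a representation choice for this illustration, not from the paper) that each cluster's designated center is listed first:

```python
from math import dist

def sum_of_radii(clusters):
    """k-min-sum-radii objective: each cluster's radius is the maximum
    distance from its designated center (here, the first listed point);
    the cost is the sum of these radii."""
    return sum(max(dist(c[0], p) for p in c) for c in clusters)

def k_center_cost(clusters):
    """k-center objective on the same clustering: the maximum radius."""
    return max(max(dist(c[0], p) for p in c) for c in clusters)
```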
Succinct Data Structures for Baxter Permutation and Related Families

Authors: Sankardeep Chakraborty, Seungbum Jo, Geunho Kim, and Kunihiko Sadakane


Abstract
A permutation π: [n] → [n] is a Baxter permutation if and only if it does not contain either of the patterns 2-41-3 and 3-14-2. Baxter permutations are one of the most widely studied subclasses of general permutations due to their connections with various combinatorial objects, such as plane bipolar orientations and mosaic floorplans. In this paper, we introduce a novel succinct representation (i.e., using o(n) additional bits beyond the information-theoretic lower bound) for Baxter permutations of size n that supports π(i) and π^{-1}(j) queries for any i, j ∈ [n] in O(f₁(n)) and O(f₂(n)) time, respectively. Here, f₁(n) and f₂(n) are arbitrary increasing functions in ω(log n) and ω(log² n), respectively. This stands out as the first succinct representation with sub-linear worst-case query times for Baxter permutations. The main idea is to traverse the Cartesian tree on the permutation using a simple yet elegant two-stack algorithm that visits the nodes in ascending order of their corresponding labels and stores the necessary information throughout the algorithm. Additionally, we consider a subclass of Baxter permutations called separable permutations, which do not contain either of the patterns 2-4-1-3 and 3-1-4-2. We provide the first succinct representation of a separable permutation ρ: [n] → [n] that supports both ρ(i) and ρ^{-1}(j) queries in O(1) time. In particular, this result circumvents Golynski’s [SODA 2009] lower bound on trade-offs between redundancy and ρ(i) and ρ^{-1}(j) query times. Moreover, as applications of these permutations with the queries, we also introduce the first succinct representations for mosaic/slicing floorplans and plane bipolar orientations, which can further support specific navigational queries on them efficiently.

Cite as

Sankardeep Chakraborty, Seungbum Jo, Geunho Kim, and Kunihiko Sadakane. Succinct Data Structures for Baxter Permutation and Related Families. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 17:1-17:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chakraborty_et_al:LIPIcs.ISAAC.2024.17,
  author =	{Chakraborty, Sankardeep and Jo, Seungbum and Kim, Geunho and Sadakane, Kunihiko},
  title =	{{Succinct Data Structures for Baxter Permutation and Related Families}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{17:1--17:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.17},
  URN =		{urn:nbn:de:0030-drops-221441},
  doi =		{10.4230/LIPIcs.ISAAC.2024.17},
  annote =	{Keywords: Succinct data structure, Baxter permutation, Mosaic floorplan, Plane bipolar orientation}
}
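The defining pattern conditions can be tested by brute force using the standard positional reading of the vincular patterns 2-41-3 and 3-14-2 (the dash-free pair must occupy adjacent positions). For n = 4 this recovers the known count of 22 Baxter permutations out of 24, the two exceptions being 2413 and 3142:

```python
from itertools import permutations

def is_baxter(p):
    """Brute-force vincular-pattern check: p is Baxter iff it contains
    neither 2-41-3 nor 3-14-2, where the '41' / '14' pair must sit at
    adjacent positions j, j+1."""
    n = len(p)
    for j in range(n - 1):
        for i in range(j):                  # position of the '2' / '3'
            for k in range(j + 2, n):       # position of the '3' / '2'
                if p[j+1] < p[i] < p[k] < p[j]:   # pattern 2-41-3
                    return False
                if p[j] < p[k] < p[i] < p[j+1]:   # pattern 3-14-2
                    return False
    return True
```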
Enhancing Generalized Compressed Suffix Trees, with Applications

Authors: Sankardeep Chakraborty, Kunihiko Sadakane, and Wiktor Zuba


Abstract
Generalized suffix trees are data structures for storing and searching a set of strings. Though many string problems can be solved efficiently using them, their space usage can be large relative to the size of the input strings. For a set of strings with n characters in total, generalized suffix trees use O(n log n) bits of space, which is much larger than the n log σ bits occupied by the strings themselves, where σ is the alphabet size. Generalized compressed suffix trees use just O(n log σ) bits but support the same basic operations as generalized suffix trees. However, some sophisticated operations require auxiliary data structures of O(n log n) bits, which becomes a bottleneck for applications involving big data. In this paper, we enhance generalized compressed suffix trees while retaining their space efficiency. First, we give an auxiliary data structure of O(n) bits for generalized compressed suffix trees such that, given a suffix s of a string and another string t, we can find the suffix of t that is closest to s. Next, we give an o(n)-bit data structure for finding the ancestor of a node in a (generalized) compressed suffix tree with a given string depth. Finally, we give data structures for a generalization of the document listing problem from arrays to trees. We also show their applications to suffix-prefix matching problems.

Cite as

Sankardeep Chakraborty, Kunihiko Sadakane, and Wiktor Zuba. Enhancing Generalized Compressed Suffix Trees, with Applications. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chakraborty_et_al:LIPIcs.ISAAC.2024.18,
  author =	{Chakraborty, Sankardeep and Sadakane, Kunihiko and Zuba, Wiktor},
  title =	{{Enhancing Generalized Compressed Suffix Trees, with Applications}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.18},
  URN =		{urn:nbn:de:0030-drops-221453},
  doi =		{10.4230/LIPIcs.ISAAC.2024.18},
  annote =	{Keywords: suffix tree, compact data structure, suffix-prefix query, weighted level ancestor}
}
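The suffix-prefix matching problems mentioned at the end have a simple naive baseline: the longest suffix of one string that is a prefix of another. The quadratic sketch below only pins down the definition; the paper's contribution is supporting such queries within compressed space:

```python
def suffix_prefix_overlap(s, t):
    """Length of the longest suffix of s that is a prefix of t (naive,
    O(|s| * |t|) in the worst case)."""
    for length in range(min(len(s), len(t)), 0, -1):
        if s[-length:] == t[:length]:
            return length
    return 0
```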
Tight (Double) Exponential Bounds for Identification Problems: Locating-Dominating Set and Test Cover

Authors: Dipayan Chakraborty, Florent Foucaud, Diptapriyo Majumdar, and Prafullkumar Tale


Abstract
Foucaud et al. [ICALP 2024] demonstrated that some problems in NP can admit (tight) double-exponential lower bounds when parameterized by treewidth or vertex cover number. They showed these first-of-their-kind results by proving conditional lower bounds for certain graph problems, in particular, the metric-based identification problems (Strong) Metric Dimension. We continue this line of research and highlight the usefulness of this type of problem for proving relatively rare types of (tight) lower bounds. We investigate fine-grained algorithmic aspects of classical (non-metric-based) identification problems in graphs, namely Locating-Dominating Set, and in set systems, namely Test Cover. In the first problem, an input is a graph G on n vertices and an integer k, and the objective is to decide whether there is a subset S of k vertices such that any two distinct vertices not in S are dominated by distinct subsets of S. In the second problem, an input is a set of items U, a collection ℱ of subsets of U called tests, and an integer k, and the objective is to select a set S of at most k tests such that any two distinct items are contained in distinct subsets of tests of S. For our first result, we adapt the techniques introduced by Foucaud et al. [ICALP 2024] to prove similar (tight) lower bounds for these two problems. - Locating-Dominating Set (respectively, Test Cover) parameterized by the treewidth of the input graph (respectively, the natural auxiliary graph) does not admit an algorithm running in time 2^{2^o(tw)} ⋅ poly(n) (respectively, 2^{2^o(tw)} ⋅ poly(|U| + |ℱ|)), unless the ETH fails. This augments the short list of NP-complete problems that admit tight double-exponential lower bounds when parameterized by treewidth, and shows that "local" (non-metric-based) problems can also admit such bounds. We show that these lower bounds are tight by designing treewidth-based dynamic programming schemes with matching running times.
Next, we prove that these two problems also admit "exotic" (and tight) lower bounds when parameterized by the solution size k. We prove that unless the ETH fails, - Locating-Dominating Set does not admit an algorithm running in time 2^o(k²) ⋅ poly(n), nor a polynomial-time kernelization algorithm that reduces the solution size and outputs a kernel with 2^o(k) vertices, and - Test Cover does not admit an algorithm running in time 2^{2^o(k)} ⋅ poly(|U| + |ℱ|), nor a kernel with 2^{2^o(k)} vertices. Again, we show that these lower bounds are tight by designing (kernelization) algorithms with matching running times. To the best of our knowledge, Locating-Dominating Set is the first known problem that is FPT when parameterized by the solution size k and whose optimal running time has a quadratic function of k in the exponent. These results also extend the (very) small list of problems that admit an ETH-based lower bound on the number of vertices in a kernel, and (for Test Cover) a double-exponential lower bound when parameterized by the solution size; to the best of our knowledge, Test Cover is the first example admitting a double-exponential lower bound on the number of vertices in a kernel.

Cite as

Dipayan Chakraborty, Florent Foucaud, Diptapriyo Majumdar, and Prafullkumar Tale. Tight (Double) Exponential Bounds for Identification Problems: Locating-Dominating Set and Test Cover. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 19:1-19:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chakraborty_et_al:LIPIcs.ISAAC.2024.19,
  author =	{Chakraborty, Dipayan and Foucaud, Florent and Majumdar, Diptapriyo and Tale, Prafullkumar},
  title =	{{Tight (Double) Exponential Bounds for Identification Problems: Locating-Dominating Set and Test Cover}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{19:1--19:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.19},
  URN =		{urn:nbn:de:0030-drops-221469},
  doi =		{10.4230/LIPIcs.ISAAC.2024.19},
  annote =	{Keywords: Identification Problems, Locating-Dominating Set, Test Cover, Double Exponential Lower Bound, ETH, Kernelization Lower Bounds}
}
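The Locating-Dominating Set condition can be checked directly from the definition: every vertex outside S must be dominated by a nonempty subset of S that is distinct from that of every other outside vertex. A brute-force sketch for tiny graphs (exponential time, purely illustrative — the paper's point is precisely that sub-2^{o(k²)} algorithms are unlikely):

```python
from itertools import combinations

def is_locating_dominating(adj, S):
    """adj: dict vertex -> set of neighbours; S: candidate set.
    Valid iff every v outside S has a nonempty, pairwise-distinct
    dominating code N(v) & S."""
    codes = {}
    for v in adj:
        if v in S:
            continue
        code = frozenset(adj[v] & S)
        if not code or code in codes.values():
            return False
        codes[v] = code
    return True

def min_ld_set(adj):
    """Smallest locating-dominating set by exhaustive search."""
    vertices = list(adj)
    for r in range(1, len(vertices) + 1):
        for S in combinations(vertices, r):
            if is_locating_dominating(adj, set(S)):
                return set(S)
```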
Revisit the Scheduling Problem with Calibrations

Authors: Lin Chen, Yixiong Gao, Minming Li, Guohui Lin, and Kai Wang


Abstract
Research on scheduling with calibrations was initiated by the Integrated Stockpile Evaluation (ISE) program, which tests nuclear weapons periodically. The tests for these weapons require calibrations that are monetarily expensive. This model has many industrial applications where machines need to be calibrated periodically to ensure high-quality products, including robotics and digital cameras. In 2013, Bender et al. (SPAA '13) proposed a theoretical framework for the ISE problem. In this model, a machine can only be trusted to run a job when it is calibrated, and the calibration remains valid for a time period of length T, after which the machine must be recalibrated before running more jobs. The objective is to find a schedule that completes all jobs by their deadlines and minimizes the total number of calibrations. In this paper, we study the scheduling problem with calibrations on multiple parallel machines, considering unit-time jobs with release times and deadlines. We propose a dynamic programming algorithm with polynomial running time when the number of machines is constant, and another dynamic programming approach with polynomial running time when the length T of the calibration period is constant. Finally, we propose a PTAS: for any constant ε > 0, we give a (1+ε)-approximate solution on m machines.

Cite as

Lin Chen, Yixiong Gao, Minming Li, Guohui Lin, and Kai Wang. Revisit the Scheduling Problem with Calibrations. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{chen_et_al:LIPIcs.ISAAC.2024.20,
  author =	{Chen, Lin and Gao, Yixiong and Li, Minming and Lin, Guohui and Wang, Kai},
  title =	{{Revisit the Scheduling Problem with Calibrations}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{20:1--20:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.20},
  URN =		{urn:nbn:de:0030-drops-221476},
  doi =		{10.4230/LIPIcs.ISAAC.2024.20},
  annote =	{Keywords: Approximation Algorithm, Scheduling, Calibration, Resource Augmentation}
}
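The model's feasibility conditions are easy to state: each unit job occupies a distinct integer slot inside its release-deadline window, and every slot must fall inside some calibrated interval of length T. A single-machine checker sketch (the paper treats multiple parallel machines; this simplification is only meant to make the constraints concrete):

```python
def valid_schedule(jobs, calibrations, T, schedule):
    """jobs: dict job -> (release, deadline); calibrations: list of
    calibration start times; schedule: dict job -> integer start slot.
    A job in slot t runs during [t, t+1), which must fit in its window
    and inside some calibrated interval [c, c+T)."""
    used = set()
    for j, t in schedule.items():
        r, d = jobs[j]
        if not (r <= t and t + 1 <= d):
            return False                 # misses its window
        if t in used:
            return False                 # two jobs in one slot
        used.add(t)
        if not any(c <= t and t + 1 <= c + T for c in calibrations):
            return False                 # machine not calibrated then
    return True
```

The optimization problem asks for a feasible schedule using as few calibrations as possible.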
Mimicking Networks for Constrained Multicuts in Hypergraphs

Authors: Kyungjin Cho and Eunjin Oh


Abstract
In this paper, we study a multicut-mimicking network for a hypergraph over terminals T with a parameter c. It is a hypergraph preserving the minimum multicut values of any set of pairs over T where the value is at most c. This is a new variant of the multicut-mimicking network of a graph in [Wahlström ICALP'20], which introduces a parameter c and extends it to handle hypergraphs. Additionally, it is a natural extension of the connectivity-c mimicking network introduced by [Chalermsook et al. SODA'21] and [Jiang et al. ESA'22], which is a (hyper)graph preserving the minimum cut values between two subsets of terminals where the value is at most c. We propose an algorithm that, given a hypergraph, returns a multicut-mimicking network over terminals T with parameter c having |T| c^{O(r log c)} hyperedges in p^{1+o(1)} + |T| (c^r log n)^{Õ(rc)} ⋅ m time, where p and r are the total size and the rank, respectively, of the hypergraph.

Cite as

Kyungjin Cho and Eunjin Oh. Mimicking Networks for Constrained Multicuts in Hypergraphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{cho_et_al:LIPIcs.ISAAC.2024.21,
  author =	{Cho, Kyungjin and Oh, Eunjin},
  title =	{{Mimicking Networks for Constrained Multicuts in Hypergraphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.21},
  URN =		{urn:nbn:de:0030-drops-221487},
  doi =		{10.4230/LIPIcs.ISAAC.2024.21},
  annote =	{Keywords: hyperedge multicut, vertex sparsification, parameterized complexity}
}
On HTLC-Based Protocols for Multi-Party Cross-Chain Swaps

Authors: Emily Clark, Chloe Georgiou, Katelyn Poon, and Marek Chrobak


Abstract
In his 2018 paper, Herlihy introduced an atomic protocol for multi-party asset swaps across different blockchains. Practical implementation of this protocol is hampered by its intricacy and computational complexity, as it relies on elaborate smart contracts for asset transfers, and specifying the protocol’s steps on a given digraph requires solving an NP-hard problem of computing longest paths. Herlihy left open the question whether there is a simple and efficient protocol for cross-chain asset swaps in arbitrary digraphs. Addressing this, we study HTLC-based protocols, in which all asset transfers are implemented with standard hashed time-lock smart contracts (HTLCs). Our main contribution is a full characterization of swap digraphs that have such protocols, in terms of so-called reuniclus graphs. We give an atomic HTLC-based protocol for reuniclus graphs. Our protocol is simple and efficient. We then prove that non-reuniclus graphs do not have atomic HTLC-based swap protocols.

Cite as

Emily Clark, Chloe Georgiou, Katelyn Poon, and Marek Chrobak. On HTLC-Based Protocols for Multi-Party Cross-Chain Swaps. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{clark_et_al:LIPIcs.ISAAC.2024.22,
  author =	{Clark, Emily and Georgiou, Chloe and Poon, Katelyn and Chrobak, Marek},
  title =	{{On HTLC-Based Protocols for Multi-Party Cross-Chain Swaps}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.22},
  URN =		{urn:nbn:de:0030-drops-221498},
  doi =		{10.4230/LIPIcs.ISAAC.2024.22},
  annote =	{Keywords: distributed computing, blockchain, asset swaps}
}
Document
Simple Realizability of Abstract Topological Graphs

Authors: Giordano Da Lozzo, Walter Didimo, Fabrizio Montecchiani, Miriam Münch, Maurizio Patrignani, and Ignaz Rutter


Abstract
An abstract topological graph (AT-graph) is a pair A = (G, X), where G = (V,E) is a graph and X ⊆ binom(E,2) is a set of pairs of edges of G. A realization of A is a drawing Γ_A of G in the plane such that any two edges e₁,e₂ of G cross in Γ_A if and only if (e₁,e₂) ∈ X; Γ_A is simple if any two edges intersect at most once (either at a common endpoint or at a proper crossing). The AT-graph Realizability (ATR) problem asks whether an input AT-graph admits a realization. The version of this problem that requires a simple realization is called Simple AT-graph Realizability (SATR). It is a classical result that both ATR and SATR are NP-complete [Kratochvíl, 1991; Kratochvíl and Matoušek, 1989]. In this paper, we study the SATR problem from a new structural perspective. More precisely, we consider the size λ(A) of the largest connected component of the crossing graph of any realization of A, i.e., the graph C(A) = (E, X). This parameter represents a natural way to measure the level of interplay among edge crossings. First, we prove that SATR is NP-complete when λ(A) ≥ 6. On the positive side, we give an optimal linear-time algorithm that solves SATR when λ(A) ≤ 3 and returns a simple realization if one exists. Our algorithm is based on several ingredients, in particular the reduction to a new embedding problem subject to constraints that require certain pairs of edges to alternate (in the rotation system), and a sequence of transformations that exploit the interplay between alternation constraints and the SPQR-tree and PQ-tree data structures to eventually arrive at a simpler embedding problem that can be solved with standard techniques.

Cite as

Giordano Da Lozzo, Walter Didimo, Fabrizio Montecchiani, Miriam Münch, Maurizio Patrignani, and Ignaz Rutter. Simple Realizability of Abstract Topological Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{dalozzo_et_al:LIPIcs.ISAAC.2024.23,
  author =	{Da Lozzo, Giordano and Didimo, Walter and Montecchiani, Fabrizio and M\"{u}nch, Miriam and Patrignani, Maurizio and Rutter, Ignaz},
  title =	{{Simple Realizability of Abstract Topological Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.23},
  URN =		{urn:nbn:de:0030-drops-221501},
  doi =		{10.4230/LIPIcs.ISAAC.2024.23},
  annote =	{Keywords: Abstract Topological Graphs, SPQR-Trees, Synchronized PQ-Trees}
}
Document
Exact Algorithms for Clustered Planarity with Linear Saturators

Authors: Giordano Da Lozzo, Robert Ganian, Siddharth Gupta, Bojan Mohar, Sebastian Ordyniak, and Meirav Zehavi


Abstract
We study Clustered Planarity with Linear Saturators, which is the problem of augmenting an n-vertex planar graph whose vertices are partitioned into independent sets (called clusters) with paths - one for each cluster - that connect all the vertices in each cluster while maintaining planarity. We show that the problem can be solved in time 2^𝒪(n) for both the variable and fixed embedding case. Moreover, we show that it can be solved in subexponential time 2^𝒪(√n log n) in the fixed embedding case if additionally the input graph is connected. The latter time complexity is tight under the Exponential-Time Hypothesis. We also show that n can be replaced with the vertex cover number of the input graph by providing a linear (resp. polynomial) kernel for the variable-embedding (resp. fixed-embedding) case; these results contrast with the NP-hardness of the problem on graphs of bounded treewidth (and even on trees). Finally, we complement known lower bounds for the problem by showing that Clustered Planarity with Linear Saturators is NP-hard even when the number of clusters is at most 3, thus excluding the algorithmic use of the number of clusters as a parameter.

Cite as

Giordano Da Lozzo, Robert Ganian, Siddharth Gupta, Bojan Mohar, Sebastian Ordyniak, and Meirav Zehavi. Exact Algorithms for Clustered Planarity with Linear Saturators. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{dalozzo_et_al:LIPIcs.ISAAC.2024.24,
  author =	{Da Lozzo, Giordano and Ganian, Robert and Gupta, Siddharth and Mohar, Bojan and Ordyniak, Sebastian and Zehavi, Meirav},
  title =	{{Exact Algorithms for Clustered Planarity with Linear Saturators}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.24},
  URN =		{urn:nbn:de:0030-drops-221513},
  doi =		{10.4230/LIPIcs.ISAAC.2024.24},
  annote =	{Keywords: Clustered planarity, independent c-graphs, path saturation, graph drawing}
}
Document
The Complexity of Geodesic Spanners Using Steiner Points

Authors: Sarita de Berg, Tim Ophelders, Irene Parada, Frank Staals, and Jules Wulms


Abstract
A geometric t-spanner 𝒢 on a set S of n point sites in a metric space P is a subgraph of the complete graph on S such that for every pair of sites p,q the distance in 𝒢 is at most t times the distance d(p,q) in P. We call a connection between two sites a link. In some settings, such as when P is a simple polygon with m vertices and a link is a shortest path in P, links can consist of Θ(m) segments and thus have non-constant complexity. The spanner complexity, the sum of the complexities of all links in the spanner, measures how compact a spanner is. In this paper, we study what happens if we are allowed to introduce k Steiner points to reduce the spanner complexity. We study such Steiner spanners in simple polygons, polygonal domains, and edge-weighted trees. Surprisingly, we show that Steiner points have only limited utility. For a spanner that uses k Steiner points, we provide an Ω(nm/k) lower bound on the worst-case complexity of any (3-ε)-spanner, and an Ω(mn^{1/(t+1)}/k^{1/(t+1)}) lower bound on the worst-case complexity of any (t-ε)-spanner, for any constant ε ∈ (0,1) and integer constant t ≥ 2. These lower bounds hold in all settings. Additionally, we show NP-hardness for the problem of deciding whether a set of sites in a polygonal domain admits a 3-spanner with a given maximum complexity using k Steiner points. On the positive side, for trees we show how to build a 2t-spanner that uses k Steiner points of complexity O(mn^{1/t}/k^{1/t} + n log(n/k)), for any integer t ≥ 1. We generalize this result to forests, and apply it to obtain a 2√2t-spanner in a simple polygon with total complexity O(mn^{1/t}(log k)^{1+1/t}/k^{1/t} + n log² n). When a link in the spanner can be any path between two sites, we show how to improve the spanning ratio in a simple polygon to (2k+ε), for any constant ε ∈ (0,2k), and how to build a 6t-spanner in a polygonal domain with the same complexity.

Cite as

Sarita de Berg, Tim Ophelders, Irene Parada, Frank Staals, and Jules Wulms. The Complexity of Geodesic Spanners Using Steiner Points. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{deberg_et_al:LIPIcs.ISAAC.2024.25,
  author =	{de Berg, Sarita and Ophelders, Tim and Parada, Irene and Staals, Frank and Wulms, Jules},
  title =	{{The Complexity of Geodesic Spanners Using Steiner Points}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{25:1--25:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.25},
  URN =		{urn:nbn:de:0030-drops-221527},
  doi =		{10.4230/LIPIcs.ISAAC.2024.25},
  annote =	{Keywords: spanner, simple polygon, polygonal domain, geodesic distance, complexity}
}
Document
Constrained Boundary Labeling

Authors: Thomas Depian, Martin Nöllenburg, Soeren Terziadis, and Markus Wallinger


Abstract
Boundary labeling is a technique in computational geometry used to label dense sets of feature points in an illustration. It involves placing labels along an axis-aligned bounding box and connecting each label with its corresponding feature point using non-crossing leader lines. Although boundary labeling is well-studied, semantic constraints on the labels have not been investigated thoroughly. In this paper, we introduce grouping and ordering constraints in boundary labeling: Grouping constraints enforce that all labels in a group are placed consecutively on the boundary, and ordering constraints enforce a partial order over the labels. We show that it is NP-hard to find a labeling for arbitrarily sized labels with unrestricted positions along one side of the boundary. However, we obtain polynomial-time algorithms if we restrict this problem either to uniform-height labels or to a finite set of candidate positions. Finally, we show that finding a labeling on two opposite sides of the boundary is NP-complete, even for uniform-height labels and finite label positions.

Cite as

Thomas Depian, Martin Nöllenburg, Soeren Terziadis, and Markus Wallinger. Constrained Boundary Labeling. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 26:1-26:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{depian_et_al:LIPIcs.ISAAC.2024.26,
  author =	{Depian, Thomas and N\"{o}llenburg, Martin and Terziadis, Soeren and Wallinger, Markus},
  title =	{{Constrained Boundary Labeling}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{26:1--26:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.26},
  URN =		{urn:nbn:de:0030-drops-221539},
  doi =		{10.4230/LIPIcs.ISAAC.2024.26},
  annote =	{Keywords: Boundary labeling, Grouping constraints, Ordering constraints}
}
Document
Knapsack with Vertex Cover, Set Cover, and Hitting Set

Authors: Palash Dey, Ashlesha Hota, Sudeshna Kolay, and Sipra Singh


Abstract
In the Vertex Cover Knapsack problem, we are given an undirected graph G = (V, E), with weights (w(u))_{u ∈ V} and values (𝛂(u))_{u ∈ V} of the vertices, the size s of the knapsack, and a target value p, and the goal is to decide whether there exists a vertex cover U ⊆ V with total weight at most s and total value at least p. This problem simultaneously generalizes the classical vertex cover and knapsack problems. We show that this problem is strongly NP-complete. However, it admits a pseudo-polynomial time algorithm for trees. In fact, we show that there is an algorithm that runs in time O(2^tw ⋅ n ⋅ min) where tw is the treewidth of G. Moreover, we can compute a (1-ε)-approximate solution for maximizing the value of the solution given the knapsack size as input in time O(2^tw ⋅ poly(n,1/ε,log(∑_{v ∈ V} 𝛂(v)))), and a (1+ε)-approximate solution for minimizing the size of the solution given a target value as input in time O(2^tw ⋅ poly(n,1/ε,log(∑_{v ∈ V} w(v)))), for every ε > 0. Restricting our attention to polynomial-time algorithms, we present a 2-factor polynomial-time approximation algorithm for minimizing the total weight of the solution, which is optimal up to additive o(1) assuming the Unique Games Conjecture (UGC). On the other hand, we show that there is no ρ-factor polynomial-time approximation algorithm for maximizing the total value of the solution given a knapsack size, for any ρ > 1, unless 𝖯 = NP. Furthermore, we show similar results for the variants of the above problem in which the solution U must be a minimal vertex cover, a minimum vertex cover, or a vertex cover of size at most k for some input integer k. Then, we consider set families (equivalently, hypergraphs) and study the variants of the above problem in which the solution must be a set cover or a hitting set.
We show that there are H_d-factor and f-factor polynomial-time approximation algorithms for Set Cover Knapsack for minimizing the weight of the knapsack given a target value, where d is the maximum cardinality of any set and f is the maximum number of sets of the input family to which any element belongs, and a d-factor polynomial-time approximation algorithm for d-Hitting Set Knapsack; these are optimal up to additive o(1) assuming UGC. On the other hand, for both Set Cover Knapsack and d-Hitting Set Knapsack, we show that there is no ρ-factor polynomial-time approximation algorithm for maximizing the total value of the solution given a knapsack size, for any ρ > 1, unless 𝖯 = NP.
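The base decision problem defined at the start of the abstract can be checked directly by exhaustive search. The following minimal sketch (function and variable names are ours; exponential in |V|, so illustrative only) makes the definition concrete:

```python
from itertools import combinations

def vertex_cover_knapsack(vertices, edges, w, alpha, s, p):
    """Decide: is there a vertex cover U with total weight <= s
    and total value >= p? Exhaustive search over all vertex subsets."""
    for r in range(len(vertices) + 1):
        for U in combinations(vertices, r):
            cover = set(U)
            # U must cover every edge ...
            if not all(u in cover or v in cover for u, v in edges):
                continue
            # ... while fitting the knapsack and reaching the target value
            if sum(w[x] for x in U) <= s and sum(alpha[x] for x in U) >= p:
                return True
    return False
```

On a unit-weight, unit-value triangle, for example, a cover of weight 2 and value 2 exists ({1,2}), but no cover fits a knapsack of size 1.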

Cite as

Palash Dey, Ashlesha Hota, Sudeshna Kolay, and Sipra Singh. Knapsack with Vertex Cover, Set Cover, and Hitting Set. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 27:1-27:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{dey_et_al:LIPIcs.ISAAC.2024.27,
  author =	{Dey, Palash and Hota, Ashlesha and Kolay, Sudeshna and Singh, Sipra},
  title =	{{Knapsack with Vertex Cover, Set Cover, and Hitting Set}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{27:1--27:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.27},
  URN =		{urn:nbn:de:0030-drops-221540},
  doi =		{10.4230/LIPIcs.ISAAC.2024.27},
  annote =	{Keywords: Knapsack, vertex cover, minimal vertex cover, minimum vertex cover, hitting set, set cover, algorithm, approximation algorithm, parameterized complexity}
}
Document
Subsequence Matching and Analysis Problems for Formal Languages

Authors: Szilárd Zsolt Fazekas, Tore Koß, Florin Manea, Robert Mercaş, and Timo Specht


Abstract
In this paper, we study a series of algorithmic problems related to the subsequences occurring in the strings of a given language, under the assumption that this language is succinctly represented by a grammar generating it, or an automaton accepting it. In particular, we focus on the following problems: Given a string w and a language L, does there exist a word of L which has w as a subsequence? Do all words of L have w as a subsequence? Given an integer k alongside L, does there exist a word of L which has all strings of length k, over the alphabet of L, as subsequences? Do all words of L have all strings of length k as subsequences? For the last two problems, efficient algorithms were already presented in [Adamson et al., ISAAC 2023] for the case when L is a regular language, and efficient solutions can be easily obtained for the first two problems. We extend that work as follows: we give sufficient conditions on the class of input languages under which these problems are decidable; we provide efficient algorithms for all these problems in the case when the input language is context-free; we show that all problems are undecidable for context-sensitive languages. Finally, we provide a series of initial results related to a class of languages that strictly includes the regular languages and is strictly included in the class of context-sensitive languages, but is incomparable to the class of context-free languages; these results deviate significantly from those reported for language classes of the Chomsky hierarchy.
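Two elementary routines underlie all four questions: testing whether w is a subsequence of a given string, and computing the largest k such that every length-k string over the alphabet is a subsequence (the standard arch-factorization idea). A small self-contained sketch, with names of our own choosing:

```python
def is_subsequence(w, s):
    """True iff w occurs in s as a (scattered) subsequence."""
    it = iter(s)
    return all(c in it for c in w)  # `c in it` advances the iterator

def universality_index(s, alphabet):
    """Largest k such that every length-k string over `alphabet` is a
    subsequence of s: greedily count complete 'arches', i.e. minimal
    prefixes containing the whole alphabet."""
    k, seen, target = 0, set(), set(alphabet)
    for c in s:
        seen.add(c)
        if seen >= target:   # arch completed
            k, seen = k + 1, set()
    return k
```

For instance, "abcabc" contains every string of length 2 over {a,b,c} as a subsequence (two arches), while "aab" only covers length 1 over {a,b}.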

Cite as

Szilárd Zsolt Fazekas, Tore Koß, Florin Manea, Robert Mercaş, and Timo Specht. Subsequence Matching and Analysis Problems for Formal Languages. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 28:1-28:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{fazekas_et_al:LIPIcs.ISAAC.2024.28,
  author =	{Fazekas, Szil\'{a}rd Zsolt and Ko{\ss}, Tore and Manea, Florin and Merca\c{s}, Robert and Specht, Timo},
  title =	{{Subsequence Matching and Analysis Problems for Formal Languages}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{28:1--28:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.28},
  URN =		{urn:nbn:de:0030-drops-221551},
  doi =		{10.4230/LIPIcs.ISAAC.2024.28},
  annote =	{Keywords: Stringology, String Combinatorics, Subsequence, Formal Languages, Context-Free Languages, Context-Sensitive Languages}
}
Document
Coordinated Motion Planning: Multi-Agent Path Finding in a Densely Packed, Bounded Domain

Authors: Sándor P. Fekete, Ramin Kosfeld, Peter Kramer, Jonas Neutzner, Christian Rieck, and Christian Scheffer


Abstract
We study Multi-Agent Path Finding for arrangements of labeled agents in the interior of a simply connected domain: Given a unique start and target position for each agent, the goal is to find a sequence of parallel, collision-free agent motions that minimizes the overall time (the makespan) until all agents have reached their respective targets. A natural case is that of a simply connected polygonal domain with axis-parallel boundaries and integer coordinates, i.e., a simple polyomino, which amounts to a simply connected union of lattice unit squares or cells. We focus on the particularly challenging setting of densely packed agents, i.e., one per cell, which strongly restricts the mobility of agents, and requires intricate coordination of motion. We provide a variety of novel results for this problem, including (1) a characterization of polyominoes in which a reconfiguration plan is guaranteed to exist; (2) a characterization of shape parameters that induce worst-case bounds on the makespan; (3) a suite of algorithms to achieve asymptotically worst-case optimal performance with respect to the achievable stretch for cases with severely limited maneuverability. This corresponds to bounding the ratio between obtained makespan and the lower bound provided by the max-min distance between the start and target position of any agent and our shape parameters. Our results extend findings by Demaine et al. [Erik D. Demaine et al., 2018; Erik D. Demaine et al., 2019] who investigated the problem for solid rectangular domains, and in the closely related field of Permutation Routing, as presented by Alpert et al. [H. Alpert et al., 2022] for convex pieces of grid graphs.

Cite as

Sándor P. Fekete, Ramin Kosfeld, Peter Kramer, Jonas Neutzner, Christian Rieck, and Christian Scheffer. Coordinated Motion Planning: Multi-Agent Path Finding in a Densely Packed, Bounded Domain. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{fekete_et_al:LIPIcs.ISAAC.2024.29,
  author =	{Fekete, S\'{a}ndor P. and Kosfeld, Ramin and Kramer, Peter and Neutzner, Jonas and Rieck, Christian and Scheffer, Christian},
  title =	{{Coordinated Motion Planning: Multi-Agent Path Finding in a Densely Packed, Bounded Domain}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.29},
  URN =		{urn:nbn:de:0030-drops-221565},
  doi =		{10.4230/LIPIcs.ISAAC.2024.29},
  annote =	{Keywords: multi-agent path finding, coordinated motion planning, bounded stretch, makespan, swarm robotics, reconfigurability, parallel sorting}
}
Document
On the Complexity of Establishing Hereditary Graph Properties via Vertex Splitting

Authors: Alexander Firbas and Manuel Sorge


Abstract
Vertex splitting is a graph operation that replaces a vertex v with two nonadjacent new vertices u, w and makes each neighbor of v adjacent to one or both of u and w. Vertex splitting has been used in contexts from circuit design to statistical analysis. In this work, we generalize from specific vertex-splitting problems and systematically explore the computational complexity of achieving a given graph property Π by a limited number of vertex splits, formalized as the problem Π Vertex Splitting (Π-VS). We focus on hereditary graph properties and contribute four groups of results: First, we classify the classical complexity of Π-VS for graph properties characterized by forbidden subgraphs of order at most 3. Second, we provide a framework that allows one to show NP-completeness whenever one can construct a combination of a forbidden subgraph and prescribed vertex splits that satisfy certain conditions. Using this framework we show NP-completeness when Π is characterized by sufficiently well-connected forbidden subgraphs. In particular, we show that F-Free-VS is NP-complete for each biconnected graph F. Third, we study infinite families of forbidden subgraphs, obtaining NP-completeness for Bipartite-VS and Perfect-VS, contrasting the known result that Π-VS is in P if Π is the set of all cycles. Finally, we contribute to the study of the parameterized complexity of Π-VS with respect to the number of allowed splits. We show para-NP-hardness for K₃-Free-VS and derive an XP-algorithm when each vertex is only allowed to be split at most once, showing that the ability to split a vertex more than once is a key driver of the problems' complexity.
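The split operation itself is easy to state on adjacency sets. Below is an illustrative sketch (our own formulation, not the paper's) in which the caller supplies the two neighbor groups; splitting a vertex of a triangle, for instance, destroys the K₃:

```python
def split_vertex(adj, v, part_u, part_w, u, w):
    """Replace v by two new nonadjacent vertices u and w; every former
    neighbor of v becomes adjacent to u, to w, or to both.
    part_u and part_w must together cover N(v). Mutates `adj` in place."""
    neighbors = adj.pop(v)
    assert set(part_u) | set(part_w) == neighbors
    adj[u], adj[w] = set(part_u), set(part_w)
    for x in neighbors:
        adj[x].discard(v)
        if x in part_u:
            adj[x].add(u)
        if x in part_w:
            adj[x].add(w)
    return adj
```

Splitting vertex 1 of the triangle {1,2,3} into u (keeping neighbor 2) and w (keeping neighbor 3) yields a path, which is triangle-free; this is the kind of move K₃-Free-VS budgets.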

Cite as

Alexander Firbas and Manuel Sorge. On the Complexity of Establishing Hereditary Graph Properties via Vertex Splitting. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 30:1-30:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{firbas_et_al:LIPIcs.ISAAC.2024.30,
  author =	{Firbas, Alexander and Sorge, Manuel},
  title =	{{On the Complexity of Establishing Hereditary Graph Properties via Vertex Splitting}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{30:1--30:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.30},
  URN =		{urn:nbn:de:0030-drops-221572},
  doi =		{10.4230/LIPIcs.ISAAC.2024.30},
  annote =	{Keywords: NP-completeness, polynomial-time solvability, graph theory, graph transformation, graph modification}
}
Document
From Chinese Postman to Salesman and Beyond: Shortest Tour δ-Covering All Points on All Edges

Authors: Fabian Frei, Ahmed Ghazy, Tim A. Hartmann, Florian Hörsch, and Dániel Marx


Abstract
A well-studied continuous model of graphs, introduced by Dearing and Francis [Transportation Science, 1974], considers each edge as a continuous unit-length interval of points. For δ ≥ 0, we introduce the problem δ-Tour, where the objective is to find the shortest tour that comes within a distance of δ of every point on every edge. It can be observed that 0-Tour is essentially equivalent to the Chinese Postman Problem, which is solvable in polynomial time. In contrast, 1/2-Tour is essentially equivalent to the graphic Traveling Salesman Problem (TSP), which is NP-hard but admits a constant-factor approximation in polynomial time. We investigate δ-Tour for other values of δ, noting that the problem’s behavior and the insights required to understand it differ significantly across various δ regimes. On the one hand, we first examine the approximability of the problem for every fixed δ > 0: 1) For every fixed 0 < δ < 3/2, the problem δ-Tour admits a constant-factor approximation and is APX-hard, while for every fixed δ ≥ 3/2, the problem admits an O(log n)-approximation in polynomial time and has no polynomial-time o(log n)-approximation, unless P = NP. Our techniques also yield a new APX-hardness result for graphic TSP on cubic bipartite graphs. When parameterizing by the length of a shortest tour, it is relatively easy to show that 3/2 is the threshold of fixed-parameter tractability: 2) For every fixed 0 < δ < 3/2, the problem δ-Tour is fixed-parameter tractable (FPT) when parameterized by the length of a shortest tour, while it is W[2]-hard for every fixed δ ≥ 3/2. On the other hand, if δ is considered to be part of the input, then an interesting nontrivial phenomenon appears when δ is a constant fraction of the number of vertices: 3) If δ is part of the input, then the problem can be solved in time f(k)n^O(k), where k = ⌈n/δ⌉; however, assuming the Exponential-Time Hypothesis (ETH), there is no algorithm that solves the problem and runs in time f(k)n^o(k/log k).

Cite as

Fabian Frei, Ahmed Ghazy, Tim A. Hartmann, Florian Hörsch, and Dániel Marx. From Chinese Postman to Salesman and Beyond: Shortest Tour δ-Covering All Points on All Edges. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 31:1-31:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{frei_et_al:LIPIcs.ISAAC.2024.31,
  author =	{Frei, Fabian and Ghazy, Ahmed and Hartmann, Tim A. and H\"{o}rsch, Florian and Marx, D\'{a}niel},
  title =	{{From Chinese Postman to Salesman and Beyond: Shortest Tour \delta-Covering All Points on All Edges}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{31:1--31:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.31},
  URN =		{urn:nbn:de:0030-drops-221582},
  doi =		{10.4230/LIPIcs.ISAAC.2024.31},
  annote =	{Keywords: Chinese Postman Problem, Traveling Salesman Problem, Continuous Graphs, Approximation Algorithms, Inapproximability, Parameterized Complexity}
}
Document
When Can Cluster Deletion with Bounded Weights Be Solved Efficiently?

Authors: Jaroslav Garvardt, Christian Komusiewicz, and Nils Morawietz


Abstract
In the NP-hard Weighted Cluster Deletion problem, the input is an undirected graph G = (V,E) and an edge-weight function ω: E → ℕ, and the task is to partition the vertex set V into cliques so that the total weight of edges in the cliques is maximized. Recently, it has been shown that Weighted Cluster Deletion is NP-hard on some graph classes where Cluster Deletion, the special case where every edge has unit weight, can be solved in polynomial time. We study the influence of the value t of the largest edge weight assigned by ω on the problem complexity for such graph classes. Our main results are that Weighted Cluster Deletion is fixed-parameter tractable with respect to t on graph classes whose graphs consist of well-separated clusters that are connected by a sparse periphery. Concrete examples for such classes are split graphs and graphs that are close to cluster graphs. We complement our results by strengthening previous hardness results for Weighted Cluster Deletion. For example, we show that Weighted Cluster Deletion is NP-hard on restricted subclasses of cographs even when every edge has weight 1 or 2.
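On small instances the objective defined above can be evaluated by brute force over all partitions of V into cliques. The sketch below (purely illustrative; all names ours) makes the maximization concrete:

```python
from itertools import combinations

def partitions(items):
    """Yield all partitions of a list into nonempty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        yield [[first]] + smaller

def weighted_cluster_deletion(vertices, weight):
    """Maximum total weight of edges inside the blocks, over all partitions
    of `vertices` into cliques. `weight` maps frozenset({u, v}) -> w(uv);
    absent keys are non-edges."""
    best = 0
    for part in partitions(list(vertices)):
        pairs = [frozenset((u, v)) for block in part
                 for u, v in combinations(block, 2)]
        if all(e in weight for e in pairs):  # every block must be a clique
            best = max(best, sum(weight[e] for e in pairs))
    return best
```

On the path a-b-c with ω(ab) = 1 and ω(bc) = 2, the optimum keeps the heavier edge: the partition {a}, {b,c} achieves weight 2.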

Cite as

Jaroslav Garvardt, Christian Komusiewicz, and Nils Morawietz. When Can Cluster Deletion with Bounded Weights Be Solved Efficiently?. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 32:1-32:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{garvardt_et_al:LIPIcs.ISAAC.2024.32,
  author =	{Garvardt, Jaroslav and Komusiewicz, Christian and Morawietz, Nils},
  title =	{{When Can Cluster Deletion with Bounded Weights Be Solved Efficiently?}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{32:1--32:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.32},
  URN =		{urn:nbn:de:0030-drops-221592},
  doi =		{10.4230/LIPIcs.ISAAC.2024.32},
  annote =	{Keywords: Graph clustering, split graphs, cographs, parameterized complexity}
}
Document
Robust Bichromatic Classification Using Two Lines

Authors: Erwin Glazenburg, Thijs van der Horst, Tom Peters, Bettina Speckmann, and Frank Staals


Abstract
Given two sets R and B of n points in the plane, we present efficient algorithms to find a two-line linear classifier that best separates the "red" points in R from the "blue" points in B and is robust to outliers. More precisely, we find a region 𝒲_B bounded by two lines, that is, a halfplane, strip, wedge, or double wedge, containing (most of) the blue points B and few red points. Our running times range from an optimal O(n log n) up to roughly O(n³), depending on the type of region 𝒲_B and on whether we wish to minimize only red outliers, only blue outliers, or both.

Cite as

Erwin Glazenburg, Thijs van der Horst, Tom Peters, Bettina Speckmann, and Frank Staals. Robust Bichromatic Classification Using Two Lines. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{glazenburg_et_al:LIPIcs.ISAAC.2024.33,
  author =	{Glazenburg, Erwin and van der Horst, Thijs and Peters, Tom and Speckmann, Bettina and Staals, Frank},
  title =	{{Robust Bichromatic Classification Using Two Lines}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.33},
  URN =		{urn:nbn:de:0030-drops-221605},
  doi =		{10.4230/LIPIcs.ISAAC.2024.33},
  annote =	{Keywords: Geometric Algorithms, Separating Line, Classification, Bichromatic, Duality}
}
Document
Robust Classification of Dynamic Bichromatic Point Sets in R²

Authors: Erwin Glazenburg, Marc van Kreveld, and Frank Staals


Abstract
Let R ∪ B be a set of n points in R², and let k ∈ {1, …, n}. Our goal is to compute a line that "best" separates the "red" points R from the "blue" points B with at most k outliers. We present an efficient semi-online dynamic data structure that can maintain whether such a separator exists ("semi-online" meaning that when a point is inserted, we know when it will be deleted). Furthermore, we present efficient exact and approximation algorithms that compute a linear separator that is guaranteed to misclassify at most k points and minimizes the distance to the farthest outlier. Our exact algorithm runs in O(nk + n log n) time, and our (1+ε)-approximation algorithm runs in O(ε^(-1/2)(n + k²) log n) time. Based on our (1+ε)-approximation algorithm we then also obtain a semi-online data structure to maintain such a separator efficiently.

Cite as

Erwin Glazenburg, Marc van Kreveld, and Frank Staals. Robust Classification of Dynamic Bichromatic Point Sets in R². In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{glazenburg_et_al:LIPIcs.ISAAC.2024.34,
  author =	{Glazenburg, Erwin and van Kreveld, Marc and Staals, Frank},
  title =	{{Robust Classification of Dynamic Bichromatic Point Sets in R²}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.34},
  URN =		{urn:nbn:de:0030-drops-221615},
  doi =		{10.4230/LIPIcs.ISAAC.2024.34},
  annote =	{Keywords: classification, duality, data structures, dynamic, linear programming}
}
Document
Generating All Invertible Matrices by Row Operations

Authors: Petr Gregor, Hung P. Hoang, Arturo Merino, and Ondřej Mička


Abstract
We show that all invertible n × n matrices over any finite field 𝔽_q can be generated in a Gray code fashion. More specifically, there exists a listing such that (1) each matrix appears exactly once, and (2) two consecutive matrices differ by adding or subtracting one row from a previous or subsequent row, or by multiplying or dividing a row by the generator of the multiplicative group of 𝔽_q. This even holds in the more general setting where the pairs of rows that can be added or subtracted are specified by an arbitrary transition tree that has to satisfy some mild constraints. Moreover, we can prescribe the first and the last matrix if n ≥ 3, or n = 2 and q > 2. In other words, the corresponding flip graph on all invertible n × n matrices over 𝔽_q is Hamilton connected if it is not a cycle. This solves yet another special case of the Lovász conjecture on Hamiltonicity of vertex-transitive graphs.
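The exceptional cycle case (n = 2, q = 2) can be checked directly. The sketch below (an illustration of the flip-graph viewpoint, not the paper's construction) enumerates the six invertible 2×2 matrices over 𝔽₂ and verifies that the row-addition flip graph is a single 6-cycle:

```python
from itertools import product

def invertible_2x2_gf2():
    # All 2x2 matrices (a, b; c, d) over GF(2) with determinant 1 mod 2.
    for a, b, c, d in product((0, 1), repeat=4):
        if (a * d - b * c) % 2 == 1:
            yield (a, b, c, d)

def neighbors(m):
    # Over F_2, subtraction equals addition and the multiplicative group is
    # trivial, so the only moves are adding one row to the other.
    a, b, c, d = m
    yield ((a + c) % 2, (b + d) % 2, c, d)  # row 0 += row 1
    yield (a, b, (c + a) % 2, (d + b) % 2)  # row 1 += row 0

mats = list(invertible_2x2_gf2())
assert len(mats) == 6  # |GL(2, F_2)| = (4 - 1)(4 - 2) = 6

# Walk the 2-regular flip graph starting from the identity, never backtracking.
start = (1, 0, 0, 1)
cycle, prev, cur = [start], None, start
while True:
    nxt = next(n for n in neighbors(cur) if n != prev)
    if nxt == start:
        break
    cycle.append(nxt)
    prev, cur = cur, nxt

# The walk closes after visiting every matrix exactly once: a single 6-cycle,
# matching the "it is a cycle" exception for n = 2, q = 2.
assert set(cycle) == set(mats)
print(len(cycle))  # 6
```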

Cite as

Petr Gregor, Hung P. Hoang, Arturo Merino, and Ondřej Mička. Generating All Invertible Matrices by Row Operations. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gregor_et_al:LIPIcs.ISAAC.2024.35,
  author =	{Gregor, Petr and Hoang, Hung P. and Merino, Arturo and Mi\v{c}ka, Ond\v{r}ej},
  title =	{{Generating All Invertible Matrices by Row Operations}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.35},
  URN =		{urn:nbn:de:0030-drops-221621},
  doi =		{10.4230/LIPIcs.ISAAC.2024.35},
  annote =	{Keywords: Hamilton cycle, combinatorial Gray code, invertible matrices, finite field, general linear group, generation algorithms}
}
Document
Kernelization Complexity of Solution Discovery Problems

Authors: Mario Grobler, Stephanie Maaz, Amer E. Mouawad, Naomi Nishimura, Vijayaragunathan Ramamoorthi, and Sebastian Siebertz


Abstract
In the solution discovery variant of a vertex (edge) subset problem Π on graphs, we are given an initial configuration of tokens on the vertices (edges) of an input graph G together with a budget b. The question is whether we can transform this configuration into a feasible solution of Π on G with at most b modification steps. We consider the token sliding variant of the solution discovery framework, where each modification step consists of sliding a token to an adjacent vertex (edge). The framework of solution discovery was recently introduced by Fellows et al. [ECAI 2023], and for many solution discovery problems both the classical and the parameterized complexity have been established. In this work, we study the kernelization complexity of the solution discovery variants of Vertex Cover, Independent Set, Dominating Set, Shortest Path, Matching, and Vertex Cut with respect to the number of tokens k, the discovery budget b, and structural parameters such as pathwidth.

Cite as

Mario Grobler, Stephanie Maaz, Amer E. Mouawad, Naomi Nishimura, Vijayaragunathan Ramamoorthi, and Sebastian Siebertz. Kernelization Complexity of Solution Discovery Problems. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 36:1-36:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{grobler_et_al:LIPIcs.ISAAC.2024.36,
  author =	{Grobler, Mario and Maaz, Stephanie and Mouawad, Amer E. and Nishimura, Naomi and Ramamoorthi, Vijayaragunathan and Siebertz, Sebastian},
  title =	{{Kernelization Complexity of Solution Discovery Problems}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{36:1--36:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.36},
  URN =		{urn:nbn:de:0030-drops-221630},
  doi =		{10.4230/LIPIcs.ISAAC.2024.36},
  annote =	{Keywords: solution discovery, kernelization, cut, independent set, vertex cover, dominating set}
}
Document
Approximating the Fréchet Distance When Only One Curve Is c-Packed

Authors: Joachim Gudmundsson, Tiancheng Mai, and Sampson Wong


Abstract
One approach to studying the Fréchet distance is to consider curves that satisfy realistic assumptions. By now, the most popular realistic assumption for curves is c-packedness. Existing algorithms for computing the Fréchet distance between c-packed curves require both curves to be c-packed. In this paper, we only require one of the two curves to be c-packed. Our result is a nearly-linear time algorithm that (1+ε)-approximates the Fréchet distance between a c-packed curve and a general curve in ℝ^d, for constant values of ε, d and c.

Cite as

Joachim Gudmundsson, Tiancheng Mai, and Sampson Wong. Approximating the Fréchet Distance When Only One Curve Is c-Packed. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 37:1-37:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{gudmundsson_et_al:LIPIcs.ISAAC.2024.37,
  author =	{Gudmundsson, Joachim and Mai, Tiancheng and Wong, Sampson},
  title =	{{Approximating the Fr\'{e}chet Distance When Only One Curve Is c-Packed}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{37:1--37:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.37},
  URN =		{urn:nbn:de:0030-drops-221649},
  doi =		{10.4230/LIPIcs.ISAAC.2024.37},
  annote =	{Keywords: Fr\'{e}chet distance, c-packed curve, approximation algorithm}
}
Document
Basis Sequence Reconfiguration in the Union of Matroids

Authors: Tesshu Hanaka, Yuni Iwamasa, Yasuaki Kobayashi, Yuto Okada, and Rin Saito


Abstract
Given a graph G and two spanning trees T and T' in G, Spanning Tree Reconfiguration asks whether there is a step-by-step transformation from T to T' such that all intermediates are also spanning trees of G, by exchanging an edge in T with an edge outside T at a single step. This problem is naturally related to matroid theory, which shows that there always exists such a transformation for any pair of T and T'. Motivated by this example, we study the problem of transforming a sequence of spanning trees into another sequence of spanning trees. We formulate this problem in the language of matroid theory: Given two sequences of bases of matroids, the goal is to decide whether there is a transformation between these sequences. We design a polynomial-time algorithm for this problem, even if the matroids are given as basis oracles. To complement this algorithmic result, we show that the problem of finding a shortest transformation is NP-hard to approximate within a factor of c log n for some constant c > 0, where n is the total size of the ground sets of the input matroids.

Cite as

Tesshu Hanaka, Yuni Iwamasa, Yasuaki Kobayashi, Yuto Okada, and Rin Saito. Basis Sequence Reconfiguration in the Union of Matroids. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hanaka_et_al:LIPIcs.ISAAC.2024.38,
  author =	{Hanaka, Tesshu and Iwamasa, Yuni and Kobayashi, Yasuaki and Okada, Yuto and Saito, Rin},
  title =	{{Basis Sequence Reconfiguration in the Union of Matroids}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.38},
  URN =		{urn:nbn:de:0030-drops-221658},
  doi =		{10.4230/LIPIcs.ISAAC.2024.38},
  annote =	{Keywords: Combinatorial reconfiguration, Matroids, Polynomial-time algorithm, Inapproximability}
}
Document
Core Stability in Additively Separable Hedonic Games of Low Treewidth

Authors: Tesshu Hanaka, Noleen Köhler, and Michael Lampis


Abstract
Additively Separable Hedonic Games (ASHGs) are coalition-formation games where we are given a directed graph whose vertices represent n selfish agents and the weight of each arc uv denotes the preference of u for v. We revisit the computational complexity of the well-known notion of core stability of symmetric ASHGs, where the goal is to construct a partition of the agents into coalitions such that no group of agents would prefer to diverge from the given partition and form a new coalition. For Core Stability Verification (CSV), we first show the following hardness results: CSV remains coNP-complete on graphs of vertex cover number 2; CSV is coW[1]-hard parameterized by vertex integrity when edge weights are polynomially bounded; and CSV is coW[1]-hard parameterized by tree-depth even if all weights are from {-1,1}. We complement these results with essentially matching algorithms and an FPT algorithm parameterized by the treewidth tw plus the maximum degree Δ (improving a previous algorithm’s dependence from 2^O(twΔ²) to 2^O(twΔ)). We then move on to study Core Stability (CS), which one would naturally expect to be even harder than CSV. We confirm this intuition by showing that CS is Σ₂^p-complete even on graphs of bounded vertex cover number. On the positive side, we present a 2^{2^O(Δtw)}n^O(1)-time algorithm parameterized by tw+Δ, which is essentially optimal assuming the Exponential Time Hypothesis (ETH). Finally, we consider the notion of k-core stability: k denotes the maximum size of the allowed blocking (diverging) coalitions. We show that k-CSV is coW[1]-hard parameterized by k (even on unweighted graphs), while k-CS is NP-complete for all k ≥ 3 (even on graphs of bounded degree with bounded edge weights).

Cite as

Tesshu Hanaka, Noleen Köhler, and Michael Lampis. Core Stability in Additively Separable Hedonic Games of Low Treewidth. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 39:1-39:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hanaka_et_al:LIPIcs.ISAAC.2024.39,
  author =	{Hanaka, Tesshu and K\"{o}hler, Noleen and Lampis, Michael},
  title =	{{Core Stability in Additively Separable Hedonic Games of Low Treewidth}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{39:1--39:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.39},
  URN =		{urn:nbn:de:0030-drops-221662},
  doi =		{10.4230/LIPIcs.ISAAC.2024.39},
  annote =	{Keywords: Hedonic games, Treewidth, Core stability}
}
Document
Crossing Number Is NP-Hard for Constant Path-Width (And Tree-Width)

Authors: Petr Hliněný and Liana Khazaliya


Abstract
Crossing Number is a celebrated problem in graph drawing. It has been known to be NP-complete since the 1980s, and fairly involved techniques were already required to show its fixed-parameter tractability when parameterized by the vertex cover number. In this paper we prove that computing the crossing number exactly is NP-hard even for graphs of path-width 12 (and, as a result, for simple graphs of path-width 13 and tree-width 9). Thus, while tree-width and path-width have been very successful tools in many graph algorithm scenarios, our result shows that general crossing number computations are unlikely (under P ≠ NP) to be successfully tackled using graph decompositions of bounded width, which has been a "tantalizing open problem" [S. Cabello, Hardness of Approximation for Crossing Number, 2013] until now.

Cite as

Petr Hliněný and Liana Khazaliya. Crossing Number Is NP-Hard for Constant Path-Width (And Tree-Width). In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hlineny_et_al:LIPIcs.ISAAC.2024.40,
  author =	{Hlin\v{e}n\'{y}, Petr and Khazaliya, Liana},
  title =	{{Crossing Number Is NP-Hard for Constant Path-Width (And Tree-Width)}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.40},
  URN =		{urn:nbn:de:0030-drops-221677},
  doi =		{10.4230/LIPIcs.ISAAC.2024.40},
  annote =	{Keywords: Graph Drawing, Crossing Number, Tree-width, Path-width}
}
Document
A Polynomial Kernel for Deletion to the Scattered Class of Cliques and Trees

Authors: Ashwin Jacob, Diptapriyo Majumdar, and Meirav Zehavi


Abstract
The class of graph deletion problems has been extensively studied in theoretical computer science, particularly in the field of parameterized complexity. Recently, a new notion of graph deletion problems was introduced, called deletion to scattered graph classes, where after deletion, each connected component of the graph should belong to at least one of the given graph classes. While fixed-parameter algorithms were given for a wide variety of problems, little progress has been made on the kernelization complexity of any of them. Here, we present the first non-trivial polynomial kernel for one such deletion problem, where, after deletion, each connected component should be a clique or a tree - that is, as dense as possible or as sparse as possible (while being connected). We develop a kernel with O(k⁵) vertices for this problem.

Cite as

Ashwin Jacob, Diptapriyo Majumdar, and Meirav Zehavi. A Polynomial Kernel for Deletion to the Scattered Class of Cliques and Trees. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 41:1-41:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jacob_et_al:LIPIcs.ISAAC.2024.41,
  author =	{Jacob, Ashwin and Majumdar, Diptapriyo and Zehavi, Meirav},
  title =	{{A Polynomial Kernel for Deletion to the Scattered Class of Cliques and Trees}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{41:1--41:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.41},
  URN =		{urn:nbn:de:0030-drops-221687},
  doi =		{10.4230/LIPIcs.ISAAC.2024.41},
  annote =	{Keywords: Parameterized Complexity, Kernelization, Scattered Graph Classes, New Expansion Lemma, Cliques or Trees Vertex Deletion}
}
Document
Hardness Amplification for Dynamic Binary Search Trees

Authors: Shunhua Jiang, Victor Lecomte, Omri Weinstein, and Sorrachai Yingchareonthawornchai


Abstract
We prove direct-sum theorems for Wilber’s two lower bounds [Wilber, FOCS'86] on the cost of access sequences in the binary search tree (BST) model. These bounds are central to the question of dynamic optimality [Sleator and Tarjan, JACM'85]: the Alternation bound is the only bound to have yielded online BST algorithms beating the log n competitive ratio, while the Funnel bound has repeatedly been conjectured to exactly characterize the cost of executing an access sequence using the optimal tree [Wilber, FOCS'86, Kozma'16], and has been explicitly linked to splay trees [Levy and Tarjan, SODA'19]. Previously, the direct-sum theorem for the Alternation bound was known only when approximation was allowed [Chalermsook, Chuzhoy and Saranurak, APPROX'20, ToC'24]. We use these direct-sum theorems to amplify the sequences from [Lecomte and Weinstein, ESA'20] that separate between Wilber’s Alternation and Funnel bounds, increasing the Alternation and Funnel bounds while optimally maintaining the separation. As a corollary, we show that Tango trees [Demaine et al., FOCS'04] are optimal among all BST algorithms that charge their costs to the Alternation bound. This is true for any value of the Alternation bound, even values for which Tango trees achieve a competitive ratio of o(log log n) instead of the default O(log log n). Previously, the optimality of Tango trees was shown only for a limited range of the Alternation bound [Lecomte and Weinstein, ESA'20].

Cite as

Shunhua Jiang, Victor Lecomte, Omri Weinstein, and Sorrachai Yingchareonthawornchai. Hardness Amplification for Dynamic Binary Search Trees. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 42:1-42:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{jiang_et_al:LIPIcs.ISAAC.2024.42,
  author =	{Jiang, Shunhua and Lecomte, Victor and Weinstein, Omri and Yingchareonthawornchai, Sorrachai},
  title =	{{Hardness Amplification for Dynamic Binary Search Trees}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{42:1--42:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.42},
  URN =		{urn:nbn:de:0030-drops-221696},
  doi =		{10.4230/LIPIcs.ISAAC.2024.42},
  annote =	{Keywords: Data Structures, Amortized Analysis}
}
Document
Reconfiguration of Labeled Matchings in Triangular Grid Graphs

Authors: Naonori Kakimura and Yuta Mishima


Abstract
This paper introduces a new reconfiguration problem of matchings in a triangular grid graph. In this problem, we are given a nearly perfect matching in which each matching edge is labeled, and aim to transform it into a target matching by sliding edges one by one. This problem is motivated by the question of the solvability of a sliding-block puzzle called "Gourds" on a hexagonal grid board, introduced by Hamersma et al. [ISAAC 2020]. The main contribution of this paper is to prove that, if a triangular grid graph is factor-critical and has a vertex of degree 6, then any two matchings can be reconfigured into each other. Moreover, for a triangular grid graph (which may not have a degree-6 vertex), we present another sufficient condition using local connectivity. Both of our results provide broad sufficient conditions for the solvability of the Gourds puzzle on a hexagonal grid board with holes, which Hamersma et al. left as an open question.

Cite as

Naonori Kakimura and Yuta Mishima. Reconfiguration of Labeled Matchings in Triangular Grid Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 43:1-43:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kakimura_et_al:LIPIcs.ISAAC.2024.43,
  author =	{Kakimura, Naonori and Mishima, Yuta},
  title =	{{Reconfiguration of Labeled Matchings in Triangular Grid Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{43:1--43:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.43},
  URN =		{urn:nbn:de:0030-drops-221709},
  doi =		{10.4230/LIPIcs.ISAAC.2024.43},
  annote =	{Keywords: combinatorial reconfiguration, matching, factor-critical graphs, sliding-block puzzles}
}
Document
Composition Orderings for Linear Functions and Matrix Multiplication Orderings

Authors: Susumu Kubo, Kazuhisa Makino, and Souta Sakamoto


Abstract
We first consider composition orderings for linear functions of one variable. Given n linear functions f_1,… ,f_n: ℝ → ℝ and a constant c ∈ ℝ, the objective is to find a permutation σ:[n] → [n] that minimizes/maximizes f_σ(n)∘⋯∘f_σ(1)(c), where [n] = {1, … , n}. The problem was first studied in the area of time-dependent scheduling, and is known to be solvable in O(n log n) time if all functions are nondecreasing. In this paper, we present a complete characterization of optimal composition orderings for this case, by regarding linear functions as two-dimensional vectors. We also show the equivalence between local and global optimality in optimal composition orderings. Furthermore, using the characterization above, we provide a fixed-parameter tractable (FPT) algorithm for the composition ordering problem with general linear functions, with respect to the number of decreasing linear functions. We next deal with matrix multiplication as a generalization of composition of linear functions. Given n matrices M₁,… , M_n ∈ ℝ^{m×m} and two vectors w,y ∈ ℝ^m, where m is a positive integer, the objective is to find a permutation σ:[n] → [n] that minimizes/maximizes w^⊤ M_σ(n) ⋯ M_σ(1) y. The matrix multiplication ordering problem has been studied in the context of max-plus algebra, but despite being a natural problem, it has not been explored in the conventional algebra to date. By extending the results for composition orderings for linear functions, we show that the matrix multiplication ordering problem with 2× 2 matrices is solvable in O(n log n) time if all the matrices are simultaneously triangularizable and have nonnegative determinants, and is FPT with respect to the number of matrices with negative determinants if all the matrices are simultaneously triangularizable. On the negative side, we prove that three natural generalizations are NP-hard.
In addition, we derive the existing result for the minimum matrix multiplication ordering problem with 2 × 2 upper triangular matrices in max-plus algebra, which extends Johnson’s well-known rule for two-machine flow shop scheduling, as a corollary of our result in the conventional algebra.
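The composition ordering objective can be illustrated by brute force over all n! permutations (exponential, unlike the O(n log n) algorithm mentioned above for nondecreasing functions; the three-function instance below is made up for illustration):

```python
from itertools import permutations

def best_composition(funcs, c, maximize=True):
    # Each function is a pair (a, b) representing x -> a*x + b.
    # Try every permutation sigma and evaluate f_sigma(n) o ... o f_sigma(1)(c).
    results = []
    for order in permutations(range(len(funcs))):
        x = c
        for i in order:  # order[0] is sigma(1), applied first
            a, b = funcs[i]
            x = a * x + b
        results.append((x, order))
    return (max if maximize else min)(results)

# Hypothetical instance: three nondecreasing linear functions (a >= 0).
funcs = [(2, 1), (1, 3), (3, -1)]
val, order = best_composition(funcs, c=0)
print(val, order)  # the ordering matters: values range from 2 up to 20
```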

Cite as

Susumu Kubo, Kazuhisa Makino, and Souta Sakamoto. Composition Orderings for Linear Functions and Matrix Multiplication Orderings. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kubo_et_al:LIPIcs.ISAAC.2024.44,
  author =	{Kubo, Susumu and Makino, Kazuhisa and Sakamoto, Souta},
  title =	{{Composition Orderings for Linear Functions and Matrix Multiplication Orderings}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.44},
  URN =		{urn:nbn:de:0030-drops-221717},
  doi =		{10.4230/LIPIcs.ISAAC.2024.44},
  annote =	{Keywords: function composition, matrix multiplication, ordering problem, scheduling}
}
Document
A Simple Distributed Algorithm for Sparse Fractional Covering and Packing Problems

Authors: Qian Li, Minghui Ouyang, and Yuyi Wang


Abstract
This paper presents a distributed algorithm in the CONGEST model that achieves a (1+ε)-approximation for row-sparse fractional covering problems (RS-FCP) and the dual column-sparse fractional packing problems (CS-FPP). Compared with the best-known (1+ε)-approximation CONGEST algorithm for RS-FCP/CS-FPP, developed by Kuhn, Moscibroda, and Wattenhofer (SODA'06), our algorithm is not only much simpler but also significantly improves the dependency on ε.

Cite as

Qian Li, Minghui Ouyang, and Yuyi Wang. A Simple Distributed Algorithm for Sparse Fractional Covering and Packing Problems. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 45:1-45:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{li_et_al:LIPIcs.ISAAC.2024.45,
  author =	{Li, Qian and Ouyang, Minghui and Wang, Yuyi},
  title =	{{A Simple Distributed Algorithm for Sparse Fractional Covering and Packing Problems}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{45:1--45:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.45},
  URN =		{urn:nbn:de:0030-drops-221726},
  doi =		{10.4230/LIPIcs.ISAAC.2024.45},
  annote =	{Keywords: CONGEST model, row-sparse fractional covering, column-sparse fractional packing, positive linear programming, simple algorithms}
}
Document
Uniform Polynomial Kernel for Deletion to K_{2,p} Minor-Free Graphs

Authors: William Lochet and Roohani Sharma


Abstract
In the F-Deletion problem, where F is a fixed finite family of graphs, the input is a graph G and an integer k, and the goal is to determine if there exists a set of at most k vertices whose deletion results in a graph that does not contain any graph of F as a minor. The F-Deletion problem encapsulates a large class of natural and interesting graph problems like Vertex Cover, Feedback Vertex Set, Treewidth-η Deletion, Treedepth-η Deletion, Pathwidth-η Deletion, Outerplanar Deletion, Vertex Planarization and many more. We study the F-Deletion problem from the kernelization perspective. In a seminal work, Fomin et al. [FOCS 2012] gave a polynomial kernel for this problem when the family F contains at least one planar graph. The asymptotic growth of the size of the kernel is not uniform with respect to the family F: that is, the size of the kernel is k^{f(F)}, for some function f that depends only on F. Later Giannopoulou et al. [TALG 2017] showed that the non-uniformity in the kernel size bound is unavoidable, as Treewidth-η Deletion cannot admit a kernel of size 𝒪(k^{(η+1)/2 - ε}), for any ε > 0, unless NP ⊆ coNP/poly. On the other hand, it was also shown that Treedepth-η Deletion admits a uniform kernel of size f(F) ⋅ k⁶, showing that there are subclasses of F for which the asymptotic kernel sizes do not grow as a function of the family F. This work led to the question of determining classes of F for which the problem admits uniform polynomial kernels. In this paper, we show that if all the graphs in F are connected and F contains K_{2,p} (a bipartite graph with 2 vertices on one side and p vertices on the other), then the problem admits a uniform kernel of size f(F) ⋅ k¹⁰. The graph K_{2,p} is one natural extension of the graph θ_p, where θ_p is a graph on two vertices and p parallel edges. The case when F contains θ_p has been studied earlier and serves as (the only) other example where the problem admits a uniform polynomial kernel.

Cite as

William Lochet and Roohani Sharma. Uniform Polynomial Kernel for Deletion to K_{2,p} Minor-Free Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{lochet_et_al:LIPIcs.ISAAC.2024.46,
  author =	{Lochet, William and Sharma, Roohani},
  title =	{{Uniform Polynomial Kernel for Deletion to K\underline\{2,p\} Minor-Free Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.46},
  URN =		{urn:nbn:de:0030-drops-221731},
  doi =		{10.4230/LIPIcs.ISAAC.2024.46},
  annote =	{Keywords: Uniform polynomial kernel, ℱ-minor-free deletion, complete bipartite minor-free graphs, K\underline\{2,p\}, protrusions}
}
Document
Complexity Framework for Forbidden Subgraphs II: Edge Subdivision and the "H"-Graphs

Authors: Vadim Lozin, Barnaby Martin, Sukanya Pandey, Daniël Paulusma, Mark Siggers, Siani Smith, and Erik Jan van Leeuwen


Abstract
For a fixed set H of graphs, a graph G is H-subgraph-free if G does not contain any H ∈ H as a (not necessarily induced) subgraph. A recent framework gives a complete complexity classification for H-subgraph-free graphs (for finite sets H) for problems that are solvable in polynomial time on graph classes of bounded treewidth, NP-complete on subcubic graphs, and whose NP-hardness is preserved under edge subdivision. While many problems satisfy these conditions, there are also many problems that do not satisfy all three conditions and for which the complexity on H-subgraph-free graphs is unknown. We study problems for which only the first two conditions of the framework hold (they are solvable in polynomial time on classes of bounded treewidth and NP-complete on subcubic graphs, but NP-hardness is not preserved under edge subdivision). In particular, we make inroads into the classification of the complexity of four such problems: Hamilton Cycle, k-Induced Disjoint Paths, C₅-Colouring and Star 3-Colouring. Although we do not complete the classifications, we show that the boundary between polynomial time and NP-completeness differs among our problems and also from problems that do satisfy all three conditions of the framework, in particular when we forbid certain subdivisions of the "H"-graph (the graph that looks like the letter "H"). Hence, we exhibit a rich complexity landscape among problems on H-subgraph-free graph classes.

Cite as

Vadim Lozin, Barnaby Martin, Sukanya Pandey, Daniël Paulusma, Mark Siggers, Siani Smith, and Erik Jan van Leeuwen. Complexity Framework for Forbidden Subgraphs II: Edge Subdivision and the "H"-Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 47:1-47:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{lozin_et_al:LIPIcs.ISAAC.2024.47,
  author =	{Lozin, Vadim and Martin, Barnaby and Pandey, Sukanya and Paulusma, Dani\"{e}l and Siggers, Mark and Smith, Siani and van Leeuwen, Erik Jan},
  title =	{{Complexity Framework for Forbidden Subgraphs II: Edge Subdivision and the "H"-Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{47:1--47:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.47},
  URN =		{urn:nbn:de:0030-drops-221747},
  doi =		{10.4230/LIPIcs.ISAAC.2024.47},
  annote =	{Keywords: forbidden subgraph, complexity dichotomy, edge subdivision, treewidth}
}
Document
Complexity of Local Search for Euclidean Clustering Problems

Authors: Bodo Manthey, Nils Morawietz, Jesse van Rhijn, and Frank Sommer


Abstract
We show that the simplest local search heuristics for two natural Euclidean clustering problems are PLS-hard. First, we show that the Hartigan-Wong method, which is essentially the Flip heuristic, for k-Means clustering is PLS-hard, even when k = 2. Second, we show the same result for the Flip heuristic for Max Cut, even when the edge weights are given by the (squared) Euclidean distances between the points in some set 𝒳 ⊆ ℝ^d, a problem equivalent to Min Sum 2-Clustering.
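To make the Flip heuristic concrete: for squared-Euclidean Max Cut, it repeatedly moves a single point to the other side of the partition as long as doing so increases the cut weight. The following is a minimal brute-force sketch of that heuristic (function names are illustrative; this is the local search whose local optima the paper studies, not a hardness construction):

```python
def sq_dist(p, q):
    # squared Euclidean distance between two points
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cut_weight(points, side):
    # total squared-Euclidean weight of the edges crossing the partition
    n = len(points)
    return sum(sq_dist(points[i], points[j])
               for i in range(n) for j in range(i + 1, n)
               if side[i] != side[j])

def flip_local_search(points):
    # Flip heuristic: move one point across the cut while it strictly helps.
    # Terminates because the cut weight strictly increases at every flip.
    side = [i % 2 for i in range(len(points))]  # arbitrary starting partition
    best = cut_weight(points, side)
    improved = True
    while improved:
        improved = False
        for i in range(len(points)):
            side[i] ^= 1
            w = cut_weight(points, side)
            if w > best:
                best, improved = w, True
            else:
                side[i] ^= 1  # revert non-improving flip
    return side, best
```

On the four-point instance [(0,0), (0,1), (4,0), (4,1)], starting from the alternating partition, the search stops in a local optimum of weight 36, while the global maximum (splitting the two point pairs) has weight 66, illustrating why local optimality is the interesting notion here; the PLS-hardness result says reaching such a local optimum can take exponentially many flips in the worst case.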

Cite as

Bodo Manthey, Nils Morawietz, Jesse van Rhijn, and Frank Sommer. Complexity of Local Search for Euclidean Clustering Problems. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 48:1-48:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{manthey_et_al:LIPIcs.ISAAC.2024.48,
  author =	{Manthey, Bodo and Morawietz, Nils and van Rhijn, Jesse and Sommer, Frank},
  title =	{{Complexity of Local Search for Euclidean Clustering Problems}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{48:1--48:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.48},
  URN =		{urn:nbn:de:0030-drops-221755},
  doi =		{10.4230/LIPIcs.ISAAC.2024.48},
  annote =	{Keywords: Local search, PLS-complete, max cut, k-means, partitioning problem, flip-neighborhood}
}
Document
Online Multi-Level Aggregation with Delays and Stochastic Arrivals

Authors: Mathieu Mari, Michał Pawłowski, Runtian Ren, and Piotr Sankowski


Abstract
This paper presents a new research direction for online Multi-Level Aggregation (MLA) with delays. Given an edge-weighted rooted tree T as input, a sequence of requests arriving at its vertices needs to be served in an online manner. A request r is characterized by two parameters: its arrival time t(r) > 0 and its location l(r), a vertex of T. Once r arrives, we can either serve it immediately or postpone this action until any time t > t(r). A request that has not yet been served is called pending from its arrival until the moment it is served. We can serve several pending requests at the same time, paying a service cost equal to the weight of the subtree spanning the root of T and the locations of all the requests served. Postponing the service of a request r to time t > t(r) incurs an additional delay cost of t - t(r). The goal is to serve all requests in an online manner so that the total cost (i.e., the sum of service and delay costs) is minimized. The MLA problem generalizes several well-studied problems, including TCP Acknowledgment (trees of depth 1), Joint Replenishment (depth 2), and Multi-Level Message Aggregation (arbitrary depth). The current best algorithm achieves a competitive ratio of O(d²), where d denotes the depth of the tree. Here, we consider a stochastic version of MLA in which the requests follow a Poisson arrival process. We present a deterministic online algorithm that achieves a constant ratio of expectations, meaning that the ratio between the expected cost of the solution generated by our algorithm and that of the optimal offline solution is bounded by a constant. Our algorithm is obtained by carefully combining two strategies. In the first, we plan periodic oblivious visits to the subset of frequent vertices, whereas in the second, we greedily serve the pending requests at the remaining vertices.
The problem is rich enough to exhibit the rare phenomenon that "single-minded" or "sample-average" strategies alone do not suffice in stochastic optimization.
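The cost structure described above is easy to state in code: the service cost of a batch is the weight of the union of the root-paths of the served locations, and each served request additionally pays its waiting time. A small sketch under an illustrative parent-pointer tree representation (this computes costs of a given schedule; it is not the paper's online algorithm):

```python
def service_cost(parent, weight, served):
    # Weight of the minimal subtree containing the root and all served
    # vertices: the union of the root-paths of the served vertices.
    # parent[v] is v's parent (None at the root); weight[v] is the weight
    # of the edge from v to parent[v]; an edge is keyed by its lower end.
    edges = set()
    for v in served:
        while parent[v] is not None and v not in edges:
            edges.add(v)
            v = parent[v]
    return sum(weight[v] for v in edges)

def total_cost(parent, weight, requests, schedule):
    # requests: {rid: (arrival_time, vertex)}
    # schedule: list of (service_time, [rids served at that time])
    cost = 0.0
    for t, rids in schedule:
        cost += service_cost(parent, weight, [requests[r][1] for r in rids])
        cost += sum(t - requests[r][0] for r in rids)  # delay costs
    return cost
```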

Cite as

Mathieu Mari, Michał Pawłowski, Runtian Ren, and Piotr Sankowski. Online Multi-Level Aggregation with Delays and Stochastic Arrivals. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 49:1-49:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{mari_et_al:LIPIcs.ISAAC.2024.49,
  author =	{Mari, Mathieu and Paw{\l}owski, Micha{\l} and Ren, Runtian and Sankowski, Piotr},
  title =	{{Online Multi-Level Aggregation with Delays and Stochastic Arrivals}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{49:1--49:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.49},
  URN =		{urn:nbn:de:0030-drops-221768},
  doi =		{10.4230/LIPIcs.ISAAC.2024.49},
  annote =	{Keywords: online algorithms, online network design, stochastic model, Poisson arrivals}
}
Document
On the Parameterized Complexity of Diverse SAT

Authors: Neeldhara Misra, Harshil Mittal, and Ashutosh Rai


Abstract
We study the Boolean Satisfiability problem (SAT) in the framework of diversity, where one asks for multiple solutions that are mutually far apart (i.e., sufficiently dissimilar from each other) for a suitable notion of distance/dissimilarity between solutions. Interpreting assignments as bit vectors, we take their Hamming distance to quantify dissimilarity, and we focus on the problem of finding two solutions. Specifically, we define the problem Max Differ SAT (resp. Exact Differ SAT) as follows: Given a Boolean formula ϕ on n variables, decide whether ϕ has two satisfying assignments that differ on at least (resp. exactly) d variables. We study the classical and parameterized (in parameters d and n-d) complexities of Max Differ SAT and Exact Differ SAT, when restricted to some classes of formulas on which SAT is known to be polynomial-time solvable. In particular, we consider affine formulas, Krom formulas (i.e., 2-CNF formulas) and hitting formulas. For affine formulas, we show the following: Both problems are polynomial-time solvable when each equation has at most two variables. Exact Differ SAT is NP-hard, even when each equation has at most three variables and each variable appears in at most four equations. Also, Max Differ SAT is NP-hard, even when each equation has at most four variables. Both problems are 𝖶[1]-hard in the parameter n-d. In contrast, when parameterized by d, Exact Differ SAT is 𝖶[1]-hard, but Max Differ SAT admits a single-exponential FPT algorithm and a polynomial kernel. For Krom formulas, we show the following: Both problems are polynomial-time solvable when each variable appears in at most two clauses. Also, both problems are 𝖶[1]-hard in the parameter d (and therefore, it turns out, also NP-hard), even on monotone inputs (i.e., formulas with no negative literals). Finally, for hitting formulas, we show that both problems can be solved in polynomial time.
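On tiny instances both problems can be decided by brute force over all assignments, which also pins down the notation: solutions are bit vectors and dissimilarity is Hamming distance. A sketch (names illustrative; the paper's results concern restricted formula classes and parameterized algorithms, not this exponential enumeration):

```python
from itertools import product, combinations

def differ_sat(clauses, n, d, exact=False):
    # Brute-force Max Differ SAT (exact=False) / Exact Differ SAT (exact=True).
    # clauses: DIMACS-style CNF, each clause a list of nonzero ints where
    # literal l refers to variable |l|, negated iff l < 0. Exponential in n.
    def satisfies(a):
        return all(any((a[abs(l) - 1] == 1) == (l > 0) for l in c)
                   for c in clauses)
    solutions = [a for a in product((0, 1), repeat=n) if satisfies(a)]
    for a, b in combinations(solutions, 2):
        dist = sum(x != y for x, y in zip(a, b))  # Hamming distance
        if (dist == d) if exact else (dist >= d):
            return True
    return False
```

For example, the single clause (x₁ ∨ x₂) has the three models 01, 10, 11, so two of them differ on both variables, while the formula x₁ ∧ x₂ has a unique model and therefore no diverse pair at all.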

Cite as

Neeldhara Misra, Harshil Mittal, and Ashutosh Rai. On the Parameterized Complexity of Diverse SAT. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 50:1-50:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{misra_et_al:LIPIcs.ISAAC.2024.50,
  author =	{Misra, Neeldhara and Mittal, Harshil and Rai, Ashutosh},
  title =	{{On the Parameterized Complexity of Diverse SAT}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{50:1--50:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.50},
  URN =		{urn:nbn:de:0030-drops-221773},
  doi =		{10.4230/LIPIcs.ISAAC.2024.50},
  annote =	{Keywords: Diverse solutions, Affine formulas, 2-CNF formulas, Hitting formulas}
}
Document
Easier Ways to Prove Counting Hard: A Dichotomy for Generalized #SAT, Applied to Constraint Graphs

Authors: MIT Hardness Group, Josh Brunner, Erik D. Demaine, Jenny Diomidova, Timothy Gomez, Markus Hecher, Frederick Stock, and Zixiang Zhou


Abstract
To prove #P-hardness, a single-call reduction from #2SAT needs a clause gadget to have exactly the same number of solutions for all satisfying assignments, no matter how many and which literals satisfy the clause. In this paper, we relax this condition, making it easier to find #P-hardness reductions. Specifically, we introduce a framework called Generalized #SAT where each clause contributes a term to the total count of solutions based on a given function of the literals. For two-variable clauses (a natural generalization of #2SAT), we prove a dichotomy theorem characterizing when Generalized #SAT is in FP versus #P-complete. Equipped with these tools, we analyze the complexity of counting solutions to Constraint Graph Satisfiability (CGS), a framework previously used to prove NP-hardness (and PSPACE-hardness) of many puzzles and games. We prove that CGS is ASP-hard, meaning that there is a parsimonious reduction (with an algorithmic bijection on solutions) from every NP search problem, which implies #P-completeness. Then we analyze CGS restricted to various subsets of features (vertex and edge types), and prove most of them either easy (in FP) or hard (#P-complete). Most of our results also apply to planar constraint graphs. CGS is thus a second powerful framework for proving problems #P-hard, with reductions requiring very few gadgets.
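The Generalized #SAT framework can be stated operationally: each assignment contributes the product of its clauses' function values, and the task is to compute the total over all assignments. A brute-force sketch for two-variable clauses (illustrative only; the dichotomy characterizes when this quantity is computable in FP):

```python
from itertools import product

def generalized_sharp_sat(n, clauses):
    # n: number of Boolean variables x_0..x_{n-1}.
    # clauses: list of (i, j, f) where f maps each pair in {0,1}^2 to a
    # nonnegative weight. The answer is sum over assignments of the product
    # of clause weights. Ordinary #2SAT is the special case where f is the
    # 0/1 indicator of the clause being satisfied.
    total = 0
    for a in product((0, 1), repeat=n):
        term = 1
        for i, j, f in clauses:
            term *= f[(a[i], a[j])]
        total += term
    return total
```

With the indicator of (x₀ ∨ x₁) as the single clause function, this recovers the #2SAT count 3; with no clauses, every assignment contributes 1.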

Cite as

MIT Hardness Group, Josh Brunner, Erik D. Demaine, Jenny Diomidova, Timothy Gomez, Markus Hecher, Frederick Stock, and Zixiang Zhou. Easier Ways to Prove Counting Hard: A Dichotomy for Generalized #SAT, Applied to Constraint Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{mithardnessgroup_et_al:LIPIcs.ISAAC.2024.51,
  author =	{MIT Hardness Group and Brunner, Josh and Demaine, Erik D. and Diomidova, Jenny and Gomez, Timothy and Hecher, Markus and Stock, Frederick and Zhou, Zixiang},
  title =	{{Easier Ways to Prove Counting Hard: A Dichotomy for Generalized #SAT, Applied to Constraint Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.51},
  URN =		{urn:nbn:de:0030-drops-221790},
  doi =		{10.4230/LIPIcs.ISAAC.2024.51},
  annote =	{Keywords: Counting, Computational Complexity, Sharp-P, Dichotomy, Constraint Graph Satisfiability}
}
Document
Single Family Algebra Operation on BDDs and ZDDs Leads to Exponential Blow-Up

Authors: Kengo Nakamura, Masaaki Nishino, and Shuhei Denzumi


Abstract
Binary decision diagrams (BDDs) and zero-suppressed binary decision diagrams (ZDDs) are data structures that represent a family of (sub)sets compactly, and they can serve as succinct indexes for families of sets. To build a BDD/ZDD representing a desired family of sets, there are many transformation operations that take BDDs/ZDDs as inputs and output a BDD/ZDD representing the resultant family after performing operations such as set union and intersection. However, except for some basic operations, the worst-case time complexity of performing such transformations on BDDs/ZDDs has not been extensively studied, and some contradictory statements about it have arisen in the literature. In this paper, we show that many transformation operations on BDDs/ZDDs, including all operations for families of sets that appear in Knuth’s book, cannot be performed in worst-case polynomial time in the size of the input BDDs/ZDDs. This refutes some of the folklore circulated in past literature and resolves an open problem raised by Knuth. Our results are stronger in that such a blow-up of computational time occurs even when the variable ordering, which has a significant impact on the efficiency of manipulating BDDs/ZDDs, is chosen arbitrarily.

Cite as

Kengo Nakamura, Masaaki Nishino, and Shuhei Denzumi. Single Family Algebra Operation on BDDs and ZDDs Leads to Exponential Blow-Up. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 52:1-52:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{nakamura_et_al:LIPIcs.ISAAC.2024.52,
  author =	{Nakamura, Kengo and Nishino, Masaaki and Denzumi, Shuhei},
  title =	{{Single Family Algebra Operation on BDDs and ZDDs Leads to Exponential Blow-Up}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{52:1--52:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.52},
  URN =		{urn:nbn:de:0030-drops-221803},
  doi =		{10.4230/LIPIcs.ISAAC.2024.52},
  annote =	{Keywords: Binary decision diagrams, family of sets, family algebra}
}
Document
A Fast Algorithm for Computing a Planar Support for Non-Piercing Rectangles

Authors: Ambar Pal, Rajiv Raman, Saurabh Ray, and Karamjeet Singh


Abstract
For a hypergraph ℋ = (X,ℰ), a support is a graph G on X such that for each E ∈ ℰ, the subgraph of G induced on the elements of E is connected. If G is planar, we call it a planar support. A set ℛ of axis-parallel rectangles forms a non-piercing family if for any R₁, R₂ ∈ ℛ, R₁⧵R₂ is connected. Given a set P of n points in ℝ² and a set ℛ of m non-piercing axis-aligned rectangles, we give an algorithm that computes a planar support for the hypergraph (P,ℛ) in O(n log² n + (n+m) log m) time, where each R ∈ ℛ defines a hyperedge consisting of all points of P contained in R.
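Verifying that a given graph is a support makes the definition concrete: the points inside each rectangle must induce a connected subgraph. A small verification sketch (illustrative; the paper's contribution is computing a planar support quickly, not checking one):

```python
def is_support(points, rects, adj):
    # points: list of (x, y); rects: list of (x1, y1, x2, y2) with x1<=x2,
    # y1<=y2; adj: dict mapping point index -> set of neighbour indices.
    # Returns True iff every rectangle's point set induces a connected
    # subgraph of the candidate support graph.
    for x1, y1, x2, y2 in rects:
        inside = {i for i, (x, y) in enumerate(points)
                  if x1 <= x <= x2 and y1 <= y <= y2}
        if not inside:
            continue
        # DFS restricted to the induced subgraph on `inside`
        stack = [next(iter(inside))]
        seen = set(stack)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in inside and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if seen != inside:
            return False
    return True
```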

Cite as

Ambar Pal, Rajiv Raman, Saurabh Ray, and Karamjeet Singh. A Fast Algorithm for Computing a Planar Support for Non-Piercing Rectangles. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 53:1-53:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{pal_et_al:LIPIcs.ISAAC.2024.53,
  author =	{Pal, Ambar and Raman, Rajiv and Ray, Saurabh and Singh, Karamjeet},
  title =	{{A Fast Algorithm for Computing a Planar Support for Non-Piercing Rectangles}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{53:1--53:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.53},
  URN =		{urn:nbn:de:0030-drops-221819},
  doi =		{10.4230/LIPIcs.ISAAC.2024.53},
  annote =	{Keywords: Algorithms, Hypergraphs, Computational Geometry, Visualization}
}
Document
A Dichotomy Theorem for Linear Time Homomorphism Orbit Counting in Bounded Degeneracy Graphs

Authors: Daniel Paul-Pena and C. Seshadhri


Abstract
Counting the number of homomorphisms of a pattern graph H in a large input graph G is a fundamental problem in computer science. In many applications in databases, bioinformatics, and network science, we need more than just the total count. We wish to compute, for each vertex v of G, the number of H-homomorphisms that v participates in. This problem is referred to as homomorphism orbit counting, as it relates to the orbits of vertices of H under its automorphisms. Given the need for fast algorithms for this problem, we study when near-linear time algorithms are possible. A natural restriction is to assume that the input graph G has bounded degeneracy, a commonly observed property in modern massive networks. Can we characterize the patterns H for which homomorphism orbit counting can be done in near-linear time? We discover a dichotomy theorem that resolves this problem. For pattern H, let 𝓁 be the length of the longest induced path between any two vertices of the same orbit (under the automorphisms of H). If 𝓁 ≤ 5, then H-homomorphism orbit counting can be done in near-linear time for bounded degeneracy graphs. If 𝓁 > 5, then (assuming fine-grained complexity conjectures) there is no near-linear time algorithm for this problem. We build on existing work on dichotomy theorems for counting the total H-homomorphism count. Surprisingly, there exist (and we characterize) patterns H for which the total homomorphism count can be computed in near-linear time, but the corresponding orbit counting problem cannot be done in near-linear time.
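A brute-force version pins down the object being computed: for each vertex v of G, the number of homomorphisms of H that map a distinguished vertex of H to v (here we fix a single root of H rather than aggregating over a full orbit, a simplification for illustration). An exponential-time sketch, not the near-linear-time algorithm the dichotomy is about:

```python
from itertools import product

def hom_counts(h_edges, h, g_adj, root=0):
    # h_edges: undirected edges of the pattern H on vertices 0..h-1.
    # g_adj: symmetric adjacency dict of G (vertex -> set of neighbours).
    # Returns counts[v] = number of maps phi: V(H) -> V(G) with phi(root)=v
    # such that every edge of H lands on an edge of G.
    n = len(g_adj)
    counts = [0] * n
    for phi in product(range(n), repeat=h):
        if all(phi[b] in g_adj[phi[a]] for a, b in h_edges):
            counts[phi[root]] += 1
    return counts
```

For the single-edge pattern, counts[v] is simply the degree of v, e.g. [2, 2, 2] on a triangle.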

Cite as

Daniel Paul-Pena and C. Seshadhri. A Dichotomy Theorem for Linear Time Homomorphism Orbit Counting in Bounded Degeneracy Graphs. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 54:1-54:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{paulpena_et_al:LIPIcs.ISAAC.2024.54,
  author =	{Paul-Pena, Daniel and Seshadhri, C.},
  title =	{{A Dichotomy Theorem for Linear Time Homomorphism Orbit Counting in Bounded Degeneracy Graphs}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{54:1--54:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.54},
  URN =		{urn:nbn:de:0030-drops-221821},
  doi =		{10.4230/LIPIcs.ISAAC.2024.54},
  annote =	{Keywords: Homomorphism counting, Bounded degeneracy graphs, Fine-grained complexity, Orbit counting, Subgraph counting}
}
Document
Optimal Offline ORAM with Perfect Security via Simple Oblivious Priority Queues

Authors: Thore Thießen and Jan Vahrenhold


Abstract
Oblivious RAM (ORAM) is a well-researched primitive to hide the memory access pattern of a RAM computation; it has a variety of applications in trusted computing, outsourced storage, and multiparty computation. In this paper, we study the so-called offline ORAM in which the sequence of memory access locations to be hidden is known in advance. Apart from their theoretical significance, offline ORAMs can be used to construct efficient oblivious algorithms. We obtain the first optimal offline ORAM with perfect security from oblivious priority queues via time-forward processing. For this, we present a simple construction of an oblivious priority queue with perfect security. Our construction achieves an asymptotically optimal (amortized) runtime of Θ(log N) per operation for a capacity of N elements and is of independent interest. Building on our construction, we additionally present efficient external-memory instantiations of our oblivious, perfectly-secure construction: For the cache-aware setting, we match the optimal I/O complexity of Θ(1/B log N/M) per operation (amortized), and for the cache-oblivious setting we achieve a near-optimal I/O complexity of O(1/B log N/M log log_M N) per operation (amortized).

Cite as

Thore Thießen and Jan Vahrenhold. Optimal Offline ORAM with Perfect Security via Simple Oblivious Priority Queues. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 55:1-55:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{thieen_et_al:LIPIcs.ISAAC.2024.55,
  author =	{Thie{\ss}en, Thore and Vahrenhold, Jan},
  title =	{{Optimal Offline ORAM with Perfect Security via Simple Oblivious Priority Queues}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{55:1--55:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.55},
  URN =		{urn:nbn:de:0030-drops-221832},
  doi =		{10.4230/LIPIcs.ISAAC.2024.55},
  annote =	{Keywords: offline ORAM, oblivious priority queue, perfect security, external memory algorithm, cache-oblivious algorithm}
}
Document
Data Structures for Approximate Fréchet Distance for Realistic Curves

Authors: Ivor van der Hoog, Eva Rotenberg, and Sampson Wong


Abstract
The Fréchet distance is a popular distance measure between curves P and Q. Conditional lower bounds prohibit (1+ε)-approximate Fréchet distance computations in strongly subquadratic time, even when preprocessing P using any polynomial amount of time and space. As a consequence, the Fréchet distance has been studied under realistic input assumptions, for example, assuming both curves are c-packed. In this paper, we study c-packed curves in Euclidean space ℝ^d and in general geodesic metric spaces 𝒳. In ℝ^d, we provide a nearly-linear time static algorithm for computing the (1+ε)-approximate continuous Fréchet distance between c-packed curves. Our algorithm has a linear dependence on the dimension d, as opposed to previous algorithms, which have an exponential dependence on d. In general geodesic metric spaces 𝒳, little was previously known. We provide the first data structure, and thereby the first algorithm, under this model. Given a c-packed input curve P with n vertices, we preprocess it in O(n log n) time, so that given a query containing a constant ε and a curve Q with m vertices, we can return a (1+ε)-approximation of the discrete Fréchet distance between P and Q in time polylogarithmic in n and linear in m, 1/ε, and the realism parameter c. Finally, we show several extensions of our data structure: to support dynamic extend/truncate updates on P, to answer map matching queries, and to answer Hausdorff distance queries.
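For reference, the exact discrete Fréchet distance that the queries approximate is given by the classic quadratic-time dynamic program (this baseline is standard and is not the paper's data structure):

```python
import math

def discrete_frechet(P, Q):
    # O(nm) dynamic program: dp[i][j] is the discrete Fréchet distance
    # between the prefixes P[0..i] and Q[0..j], i.e. the smallest leash
    # length allowing both walkers to advance monotonically to (i, j).
    n, m = len(P), len(Q)
    dp = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                reach = 0.0
            elif i == 0:
                reach = dp[0][j - 1]
            elif j == 0:
                reach = dp[i - 1][0]
            else:
                reach = min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
            dp[i][j] = max(reach, d)
    return dp[n - 1][m - 1]
```

For two parallel unit-separated segments the distance is exactly 1, matching the intuition of walking both curves in lockstep with a leash of length 1.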

Cite as

Ivor van der Hoog, Eva Rotenberg, and Sampson Wong. Data Structures for Approximate Fréchet Distance for Realistic Curves. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 56:1-56:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{vanderhoog_et_al:LIPIcs.ISAAC.2024.56,
  author =	{van der Hoog, Ivor and Rotenberg, Eva and Wong, Sampson},
  title =	{{Data Structures for Approximate Fr\'{e}chet Distance for Realistic Curves}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{56:1--56:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.56},
  URN =		{urn:nbn:de:0030-drops-221846},
  doi =		{10.4230/LIPIcs.ISAAC.2024.56},
  annote =	{Keywords: Fr\'{e}chet distance, data structures, approximation algorithms}
}
Document
Constant Approximating Disjoint Paths on Acyclic Digraphs Is W[1]-Hard

Authors: Michał Włodarczyk


Abstract
In the Disjoint Paths problem, one is given a graph with a set of k vertex pairs (s_i,t_i) and the task is to connect each s_i to t_i with a path, so that the k paths are pairwise disjoint. In the optimization variant, Max Disjoint Paths, the goal is to maximize the number of vertex pairs to be connected. We study this problem on acyclic directed graphs, where Disjoint Paths is known to be W[1]-hard when parameterized by k. We show that in this setting Max Disjoint Paths is W[1]-hard to c-approximate for any constant c. To the best of our knowledge, this is the first non-trivial result regarding the parameterized approximation for Max Disjoint Paths with respect to the natural parameter k. Our proof is based on an elementary self-reduction that is guided by a certain combinatorial object constructed by the probabilistic method.

Cite as

Michał Włodarczyk. Constant Approximating Disjoint Paths on Acyclic Digraphs Is W[1]-Hard. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 57:1-57:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{wlodarczyk:LIPIcs.ISAAC.2024.57,
  author =	{W{\l}odarczyk, Micha{\l}},
  title =	{{Constant Approximating Disjoint Paths on Acyclic Digraphs Is W\lbrack1\rbrack-Hard}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{57:1--57:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.57},
  URN =		{urn:nbn:de:0030-drops-221853},
  doi =		{10.4230/LIPIcs.ISAAC.2024.57},
  annote =	{Keywords: fixed-parameter tractability, hardness of approximation, disjoint paths}
}
Document
Does Subset Sum Admit Short Proofs?

Authors: Michał Włodarczyk


Abstract
We investigate the question whether Subset Sum can be solved by a polynomial-time algorithm with access to a certificate of length poly(k) where k is the maximal number of bits in an input number. In other words, can it be solved using only few nondeterministic bits? This question has motivated us to initiate a systematic study of certification complexity of parameterized problems. Apart from Subset Sum, we examine problems related to integer linear programming, scheduling, and group theory. We reveal an equivalence class of problems sharing the same hardness with respect to having a polynomial certificate. These include Subset Sum and Boolean Linear Programming parameterized by the number of constraints. Secondly, we present new techniques for establishing lower bounds in this regime. In particular, we show that Subset Sum in permutation groups is at least as hard for nondeterministic computation as 3Coloring in bounded-pathwidth graphs.

Cite as

Michał Włodarczyk. Does Subset Sum Admit Short Proofs?. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 58:1-58:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{wlodarczyk:LIPIcs.ISAAC.2024.58,
  author =	{W{\l}odarczyk, Micha{\l}},
  title =	{{Does Subset Sum Admit Short Proofs?}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{58:1--58:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.58},
  URN =		{urn:nbn:de:0030-drops-221864},
  doi =		{10.4230/LIPIcs.ISAAC.2024.58},
  annote =	{Keywords: subset sum, nondeterminism, fixed-parameter tractability}
}
Document
Approximation Algorithms for Cumulative Vehicle Routing with Stochastic Demands

Authors: Jingyang Zhao and Mingyu Xiao


Abstract
In the Cumulative Vehicle Routing Problem (Cu-VRP), we need to find a feasible itinerary for a capacitated vehicle located at the depot to satisfy customers' demand, as in the well-known Vehicle Routing Problem (VRP), but the goal is to minimize the cumulative cost of the vehicle, which is based on the vehicle’s load throughout the itinerary. If the demand of each customer is unknown until the vehicle visits it, the problem is called Cu-VRP with Stochastic Demands (Cu-VRPSD). In this paper, we propose a randomized 3.456-approximation algorithm for Cu-VRPSD, improving the best-known approximation ratio of 6 (Discret. Appl. Math. 2020). Since VRP with Stochastic Demands (VRPSD) is a special case of Cu-VRPSD, as a corollary, we also obtain a randomized 3.25-approximation algorithm for VRPSD, improving the best-known approximation ratio of 3.5 (Oper. Res. 2012). At last, we give a randomized 3.194-approximation algorithm for Cu-VRP, improving the best-known approximation ratio of 4 (Oper. Res. Lett. 2013).

Cite as

Jingyang Zhao and Mingyu Xiao. Approximation Algorithms for Cumulative Vehicle Routing with Stochastic Demands. In 35th International Symposium on Algorithms and Computation (ISAAC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 322, pp. 59:1-59:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{zhao_et_al:LIPIcs.ISAAC.2024.59,
  author =	{Zhao, Jingyang and Xiao, Mingyu},
  title =	{{Approximation Algorithms for Cumulative Vehicle Routing with Stochastic Demands}},
  booktitle =	{35th International Symposium on Algorithms and Computation (ISAAC 2024)},
  pages =	{59:1--59:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-354-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{322},
  editor =	{Mestre, Juli\'{a}n and Wirth, Anthony},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2024.59},
  URN =		{urn:nbn:de:0030-drops-221878},
  doi =		{10.4230/LIPIcs.ISAAC.2024.59},
  annote =	{Keywords: Cumulative Vehicle Routing, Stochastic Demands, Approximation Algorithms}
}

Filters


Questions / Remarks / Feedback
X

Feedback for Dagstuhl Publishing


Thanks for your feedback!

Feedback submitted

Could not send message

Please try again later or send an E-mail