eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
1
960
10.4230/LIPIcs.ISAAC.2023
article
LIPIcs, Volume 283, ISAAC 2023, Complete Volume
Iwata, Satoru
1
2
https://orcid.org/0000-0002-6467-1335
Kakimura, Naonori
3
https://orcid.org/0000-0002-3918-3479
University of Tokyo, Tokyo, Japan
Hokkaido University, Sapporo, Japan
Keio University, Yokohama, Japan
LIPIcs, Volume 283, ISAAC 2023, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023/LIPIcs.ISAAC.2023.pdf
LIPIcs, Volume 283, ISAAC 2023, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
0:i
0:xvi
10.4230/LIPIcs.ISAAC.2023.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Iwata, Satoru
1
2
https://orcid.org/0000-0002-6467-1335
Kakimura, Naonori
3
https://orcid.org/0000-0002-3918-3479
University of Tokyo, Tokyo, Japan
Hokkaido University, Sapporo, Japan
Keio University, Yokohama, Japan
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.0/LIPIcs.ISAAC.2023.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
1:1
1:3
10.4230/LIPIcs.ISAAC.2023.1
article
Group Fairness: From Multiwinner Voting to Participatory Budgeting (Invited Talk)
Elkind, Edith
1
2
University of Oxford, UK
Alan Turing Institute, London, UK
Many cities around the world allocate a part of their budget based on residents' votes, following a process known as participatory budgeting. It is important to understand which outcomes of this process should be viewed as fair, and whether fair outcomes could be computed efficiently. We summarise recent progress on this topic. We first focus on a special case of participatory budgeting where all candidate projects have the same cost (known as multiwinner voting), formulate progressively more demanding notions of fairness for this setting, and identify efficiently computable voting rules that satisfy them. We then discuss the challenges of extending these ideas to the general model.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.1/LIPIcs.ISAAC.2023.1.pdf
multiwinner voting
participatory budgeting
justified representation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
2:1
2:1
10.4230/LIPIcs.ISAAC.2023.2
article
Faithful Graph Drawing (Invited Talk)
Hong, Seok-Hee
1
https://orcid.org/0000-0003-1698-3868
School of Computer Science, The University of Sydney, Australia
Graph drawing aims to compute good geometric representations of graphs in two or three dimensions. It has wide applications in network visualisation, for example of social networks and biological networks arising in many other disciplines.
This talk will review fundamental theoretical results as well as recent advances in graph drawing, including symmetric graph drawing, generalisations of Tutte’s barycenter theorem, Steinitz’s theorem, and Fáry’s theorem, and the so-called beyond-planar graphs such as k-planar graphs.
I will conclude my talk with recent progress in visualization of big complex graphs, including sublinear-time graph drawing algorithms and faithful graph drawing.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.2/LIPIcs.ISAAC.2023.2.pdf
Graph drawing
Planar graphs
Beyond planar graphs
Tutte’s barycenter theorem
Steinitz’s theorem
Fáry’s theorem
Sublinear-time graph drawing algorithm
Faithful graph drawing
Symmetric graph drawing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
3:1
3:20
10.4230/LIPIcs.ISAAC.2023.3
article
Realizability of Free Spaces of Curves
Akitaya, Hugo A.
1
https://orcid.org/0000-0002-6827-2200
Buchin, Maike
2
https://orcid.org/0000-0002-3446-4343
Mirzanezhad, Majid
3
https://orcid.org/0000-0002-2950-673X
Ryvkin, Leonie
4
https://orcid.org/0000-0002-7036-1341
Wenk, Carola
5
https://orcid.org/0000-0001-9275-5336
Department of Computer Science, University of Massachusetts Lowell, MA, USA
Department of Computer Science, Ruhr University Bochum, Germany
Transportation Research Institute, University of Michigan, Ann Arbor, MI, USA
Department of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands
Department of Computer Science, Tulane University, New Orleans, LA, USA
The free space diagram is a popular tool to compute the well-known Fréchet distance. As the Fréchet distance is used in many different fields, many variants have been established to cover the specific needs of these applications. Often the question arises whether a certain pattern in the free space diagram is realizable, i.e., whether there exists a pair of polygonal chains whose free space diagram corresponds to it. The answer to this question may help in deciding the computational complexity of these distance measures, as well as in designing more efficient algorithms for restricted input classes that avoid certain free space patterns. We therefore study the inverse problem: Given a potential free space diagram, do there exist curves that generate this diagram?
Our problem of interest is closely tied to the classic Distance Geometry problem. We settle the complexity of Distance Geometry in ℝ^{>2}, showing ∃ℝ-hardness. We use this to show that for curves in ℝ^{≥2} the realizability problem is ∃ℝ-complete, both for continuous and for discrete Fréchet distance. We prove that the continuous case in ℝ¹ is only weakly NP-hard, and we provide a pseudo-polynomial time algorithm and show that it is fixed-parameter tractable. Interestingly, for the discrete case in ℝ¹ we show that the problem becomes solvable in polynomial time.
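For concreteness, the discrete analogue of a free space diagram is simply a Boolean matrix marking which pairs of points from the two curves lie within a distance threshold ε; the realizability question asks which such matrices arise from some pair of curves. A minimal sketch of the forward direction (hypothetical helper name, not from the paper):

```python
import math

def discrete_free_space(P, Q, eps):
    """Boolean matrix F with F[i][j] = True iff |P[i] - Q[j]| <= eps.

    P, Q: lists of points (tuples); eps: distance threshold.
    The paper studies the inverse problem: given such a matrix,
    do curves P and Q generating it exist at all?
    """
    return [[math.dist(p, q) <= eps for q in Q] for p in P]
```

For example, `discrete_free_space([(0, 0), (1, 0)], [(0, 0), (2, 0)], 1.0)` marks three of the four point pairs as free.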
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.3/LIPIcs.ISAAC.2023.3.pdf
Fréchet distance
Distance Geometry
free space diagram
inverse problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
4:1
4:21
10.4230/LIPIcs.ISAAC.2023.4
article
k-Universality of Regular Languages
Adamson, Duncan
1
https://orcid.org/0000-0003-3343-2435
Fleischmann, Pamela
2
https://orcid.org/0000-0002-1531-7970
Huch, Annika
2
Koß, Tore
3
https://orcid.org/0000-0001-6002-1581
Manea, Florin
3
https://orcid.org/0000-0001-6094-3324
Nowotka, Dirk
2
https://orcid.org/0000-0002-5422-2229
Leverhulme Centre for Functional Material Design, University of Liverpool, UK
Department of Computer Science, Kiel University, Germany
Department of Computer Science, University of Göttingen, Germany
A subsequence of a word w is a word u such that u = w[i₁] w[i₂] … w[i_k], for some set of indices 1 ≤ i₁ < i₂ < … < i_k ≤ |w|. A word w is k-subsequence universal over an alphabet Σ if every word in Σ^k appears in w as a subsequence. In this paper, we study the intersection between the set of k-subsequence universal words over some alphabet Σ and regular languages over Σ. We call a regular language L k-∃-subsequence universal if there exists a k-subsequence universal word in L, and k-∀-subsequence universal if every word of L is k-subsequence universal. We give algorithms solving the problems of deciding if a given regular language, represented by a finite automaton recognising it, is k-∃-subsequence universal and, respectively, if it is k-∀-subsequence universal, for a given k. The algorithms are FPT w.r.t. the size of the input alphabet, and their run-time does not depend on k; they run in polynomial time in the number n of states of the input automaton when the size of the input alphabet is O(log n). Moreover, we show that the problem of deciding if a given regular language is k-∃-subsequence universal is NP-complete, when the language is over a large alphabet. Further, we provide algorithms for counting the number of k-subsequence universal words (paths) accepted by a given deterministic (respectively, nondeterministic) finite automaton, and ranking an input word (path) within the set of k-subsequence universal words accepted by a given finite automaton.
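The definition above can be checked with the standard greedy arch factorisation: scan the word left to right and count how many times the full alphabet is collected. A minimal sketch (hypothetical helper name, not part of the paper's automaton-based algorithms):

```python
def universality_index(w, alphabet):
    """Largest k such that w is k-subsequence universal over `alphabet`,
    computed by the greedy arch factorisation: collect letters until the
    whole alphabet has been seen, count one arch, and restart."""
    seen, arches = set(), 0
    full = set(alphabet)
    for c in w:
        seen.add(c)
        if seen == full:
            arches += 1
            seen = set()
    return arches
```

For instance, "abcabc" is 2-subsequence universal over {a, b, c}, while "abca" is only 1-subsequence universal.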
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.4/LIPIcs.ISAAC.2023.4.pdf
String Algorithms
Regular Languages
Finite Automata
Subsequences
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
5:1
5:19
10.4230/LIPIcs.ISAAC.2023.5
article
Unified Almost Linear Kernels for Generalized Covering and Packing Problems on Nowhere Dense Classes
Ahn, Jungho
1
https://orcid.org/0000-0003-0511-1976
Kim, Jinha
2
https://orcid.org/0000-0001-5982-7836
Kwon, O-joung
3
4
https://orcid.org/0000-0003-1820-1962
Korea Institute for Advanced Study, Seoul, South Korea
Department of Mathematics, Chonnam National University, Gwangju, South Korea
Department of Mathematics, Hanyang University, Seoul, South Korea
Discrete Mathematics Group, Institute for Basic Science, Daejeon, South Korea
Let ℱ be a family of graphs, and let p,r be nonnegative integers. For a graph G and an integer k, the (p,r,ℱ)-Covering problem asks whether there is a set D ⊆ V(G) of size at most k such that if the p-th power of G has an induced subgraph isomorphic to a graph in ℱ, then it is at distance at most r from D. The (p,r,ℱ)-Packing problem asks whether G^p has k induced subgraphs H₁,…,H_k such that each H_i is isomorphic to a graph in ℱ, and for i,j ∈ {1,…,k}, the distance between V(H_i) and V(H_j) in G is larger than r.
We show that for every fixed nonnegative integers p,r and every fixed nonempty finite family ℱ of connected graphs, (p,r,ℱ)-Covering with p ≤ 2r+1 and (p,r,ℱ)-Packing with p ≤ 2⌊r/2⌋+1 admit almost linear kernels on every nowhere dense class of graphs, parameterized by the solution size k. As corollaries, we prove that Distance-r Vertex Cover, Distance-r Matching, ℱ-Free Vertex Deletion, and Induced-ℱ-Packing for any fixed finite family ℱ of connected graphs admit almost linear kernels on every nowhere dense class of graphs. Our results extend the results for Distance-r Dominating Set by Drange et al. (STACS 2016) and Eickmeyer et al. (ICALP 2017), and for Distance-r Independent Set by Pilipczuk and Siebertz (EJC 2021).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.5/LIPIcs.ISAAC.2023.5.pdf
kernelization
independent set
dominating set
covering
packing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
6:1
6:19
10.4230/LIPIcs.ISAAC.2023.6
article
Geometric TSP on Sets
Alkema, Henk
1
de Berg, Mark
1
https://orcid.org/0000-0001-5770-3784
Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands
In One-of-a-Set TSP, also known as the Generalised TSP, the input is a collection 𝒫 := {P_1, ..., P_r} of sets in a metric space and the goal is to compute a minimum-length tour that visits one element from each set.
In the Euclidean variant of this problem, each P_i is a set of points in ℝ^d that is contained in a given hypercube H_i. We investigate how the complexity of Euclidean One-of-a-Set TSP depends on λ, the ply of the set ℋ := {H_1, ..., H_r} of hypercubes (the ply is the smallest λ such that every point in ℝ^d is in at most λ of the hypercubes). We show that the problem can be solved in 2^O(λ^{1/d} n^{1-1/d}) time, where n := ∑_{i=1}^r |P_i| is the total number of points. Finally, we show that the problem cannot be solved in 2^o(n) time when λ = Θ(n), unless the Exponential Time Hypothesis (ETH) fails.
In Rectilinear One-of-a-Cube TSP, the input is a set ℋ of hypercubes in ℝ^d and the goal is to compute a minimum-length rectilinear tour that visits every hypercube. We show that the problem can be solved in 2^O(λ^{1/d} n^{1-1/d} log n) time, where n is the number of hypercubes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.6/LIPIcs.ISAAC.2023.6.pdf
Euclidean TSP
TSP on Sets
Rectilinear TSP
TSP on Neighbourhoods
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
7:1
7:16
10.4230/LIPIcs.ISAAC.2023.7
article
Depth-Three Circuits for Inner Product and Majority Functions
Amano, Kazuyuki
1
https://orcid.org/0000-0003-2322-6072
Gunma University, Kiryu, Japan
We consider the complexity of depth-three Boolean circuits with limited bottom fan-in that compute some explicit functions. This is one of the simplest circuit classes for which we cannot derive tight bounds on the complexity for many functions. A Σ₃^k-circuit is a depth-three OR ∘ AND ∘ OR circuit in which each bottom gate has fan-in at most k.
First, we investigate the complexity of Σ₃^k-circuits computing the inner product mod two function IP_n on n pairs of variables for small values of k. We give an explicit construction of a Σ₃²-circuit of size smaller than 2^{0.952n} for IP_n as well as a Σ₃³-circuit of size smaller than 2^{0.692n}. These improve the known upper bounds of 2^{n-o(n)} for Σ₃²-circuits and 3^{n/2} ∼ 2^{0.792n} for Σ₃³-circuits by Golovnev, Kulikov and Williams (ITCS 2021), and also the upper bound of 2^{(0.965…)n} for Σ₃²-circuits shown in a recent concurrent work by Göös, Guan and Mosnoi (MFCS 2023).
Second, we investigate the complexity of the majority function MAJ_n aiming for exploring the effect of negations. Currently, the smallest known depth-three circuit for MAJ_n is a monotone circuit. A Σ₃^{(+k,-𝓁)}-circuit is a Σ₃-circuit in which each bottom gate has at most k positive literals and 𝓁 negative literals as its input. We show that, for k ≤ 2, the minimum size of a Σ₃^{(+k,-∞)}-circuit for MAJ_n is essentially equal to the minimum size of a monotone Σ₃^k-circuit for MAJ_n. In sharp contrast, we also show that, for k = 3,4 and 5, there exists a Σ₃^{(+k, -𝓁)}-circuit computing MAJ_n (for an appropriately chosen 𝓁) that is smaller than the smallest known monotone Σ₃^k-circuit for MAJ_n. Our results suggest that negations may help to speed up the computation of the majority function even for depth-three circuits. All these constructions rely on efficient circuits or formulas on a small number of variables that we found through a computer search.
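For reference, the inner product mod two function studied above has a one-line definition: IP_n(x, y) is the parity of the number of positions where both inputs are 1. A minimal sketch:

```python
def inner_product_mod2(x, y):
    """IP_n on n pairs of Boolean variables: parity of the number of
    positions i with x[i] = y[i] = 1."""
    return sum(a & b for a, b in zip(x, y)) % 2
```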
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.7/LIPIcs.ISAAC.2023.7.pdf
Circuit complexity
depth-3 circuits
upper bounds
lower bounds
computer-assisted proof
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
8:1
8:18
10.4230/LIPIcs.ISAAC.2023.8
article
Recognizing Unit Multiple Intervals Is Hard
Ardévol Martínez, Virginia
1
https://orcid.org/0000-0002-3703-2335
Rizzi, Romeo
2
https://orcid.org/0000-0002-2387-0952
Sikora, Florian
1
https://orcid.org/0000-0003-2670-6258
Vialette, Stéphane
3
https://orcid.org/0000-0003-2308-6970
Université Paris-Dauphine, PSL University, CNRS, LAMSADE, 75016 Paris, France
Department of Computer Science, University of Verona, Italy
LIGM, CNRS, Univ Gustave Eiffel, F77454 Marne-la-Vallée, France
Multiple interval graphs are a well-known generalization of interval graphs introduced in the 1970s to deal with situations arising naturally in scheduling and allocation. A d-interval is the union of d intervals on the real line, and a graph is a d-interval graph if it is the intersection graph of d-intervals. In particular, it is a unit d-interval graph if it admits a d-interval representation where every interval has unit length.
Whereas it has been known for a long time that recognizing 2-interval graphs and other related classes such as 2-track interval graphs is NP-complete, the complexity of recognizing unit 2-interval graphs remained open. Here, we settle this question by proving that the recognition of unit 2-interval graphs is also NP-complete. Our proof technique uses a completely different approach from the other hardness results for recognizing related classes. Furthermore, we extend the result to unit d-interval graphs for any d ⩾ 2, which does not follow directly for graph recognition problems; as an example, it took almost 20 years to close the gap between d = 2 and d > 2 for the recognition of d-track interval graphs. Our result has several implications, including that recognizing (x, …, x) d-interval graphs and depth r unit 2-interval graphs is NP-complete for every x ⩾ 11 and every r ⩾ 4.
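To make the definition of a d-interval graph concrete, the intersection graph of a small family of d-intervals can be built directly; a minimal sketch for experimenting with small instances (hypothetical helper, not part of the paper's reduction):

```python
def intersection_graph(d_intervals):
    """Edge set of the intersection graph of a family of d-intervals.

    Each d-interval is a list of (l, r) pairs (closed intervals on the
    real line); two d-intervals are adjacent iff any interval of one
    overlaps any interval of the other."""
    def overlap(a, b):
        return a[0] <= b[1] and b[0] <= a[1]
    n = len(d_intervals)
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if any(overlap(a, b)
                   for a in d_intervals[i] for b in d_intervals[j])}
```

For unit d-intervals, every (l, r) additionally satisfies r - l = 1; the paper shows that deciding whether a given graph admits such a representation is NP-complete.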
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.8/LIPIcs.ISAAC.2023.8.pdf
Interval graphs
unit multiple interval graphs
recognition
NP-hardness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
9:1
9:15
10.4230/LIPIcs.ISAAC.2023.9
article
Non-Clairvoyant Makespan Minimization Scheduling with Predictions
Bampis, Evripidis
1
https://orcid.org/0000-0002-4498-3040
Kononov, Alexander
2
3
https://orcid.org/0000-0001-6144-0251
Lucarelli, Giorgio
4
https://orcid.org/0000-0001-7368-355X
Pascual, Fanny
1
https://orcid.org/0000-0003-0215-409X
Sorbonne Université, CNRS, LIP6, F-75005 Paris, France
Sobolev Institute of Mathematics, Novosibirsk, Russia
Novosibirsk State University, Russia
LCOMS, University of Lorraine, Metz, France
We revisit the classical non-clairvoyant problem of scheduling a set of n jobs on a set of m parallel identical machines where the processing time of a job is not known until the job finishes. Our objective is the minimization of the makespan, i.e., the date at which the last job terminates its execution. We adopt the framework of learning-augmented algorithms and we study the question of whether (possibly erroneous) predictions may help design algorithms with a competitive ratio which is good when the prediction is accurate (consistency), deteriorates gradually with respect to the prediction error (smoothness), and not too bad and bounded when the prediction is arbitrarily bad (robustness). We first consider the non-preemptive case and we devise lower bounds, as a function of the error of the prediction, for any deterministic learning-augmented algorithm. Then we analyze a variant of Longest Processing Time first (LPT) algorithm (with and without release dates) and we prove that it is consistent, smooth, and robust. Furthermore, we study the preemptive case and we provide lower bounds for any deterministic algorithm with predictions as a function of the prediction error. Finally, we introduce a variant of the classical Round Robin algorithm (RR), the Predicted Proportional Round Robin algorithm (PPRR), which we prove to be consistent, smooth and robust.
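As background for the non-preemptive results, plain LPT with predicted processing times can be sketched as follows (this is the classical greedy rule run on predictions, assumed names; the paper analyses a more refined variant with robustness guarantees):

```python
import heapq

def lpt_with_predictions(predicted, m):
    """Assign jobs to m identical machines, longest predicted processing
    time first, always onto the currently least-loaded machine.
    Returns (assignment, makespan) under the *predicted* times."""
    heap = [(0.0, i) for i in range(m)]  # (load, machine), min-heap
    heapq.heapify(heap)
    assignment = [0] * len(predicted)
    for j in sorted(range(len(predicted)),
                    key=lambda j: predicted[j], reverse=True):
        load, i = heapq.heappop(heap)
        assignment[j] = i
        heapq.heappush(heap, (load + predicted[j], i))
    return assignment, max(load for load, _ in heap)
```

With perfectly accurate predictions this recovers LPT's classical 4/3-style guarantee; the paper's contribution is bounding how the ratio degrades as the predictions err.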
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.9/LIPIcs.ISAAC.2023.9.pdf
scheduling
online
learning-augmented algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
10:1
10:17
10.4230/LIPIcs.ISAAC.2023.10
article
Small-Space Algorithms for the Online Language Distance Problem for Palindromes and Squares
Bathie, Gabriel
1
2
https://orcid.org/0000-0003-2400-4914
Kociumaka, Tomasz
3
https://orcid.org/0000-0002-2477-1702
Starikovskaya, Tatiana
1
https://orcid.org/0000-0002-7193-9432
DIENS, École normale supérieure de Paris, PSL Research University, France
LaBRI, Université de Bordeaux, France
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
We study the online variant of the language distance problem for two classical formal languages, the language of palindromes and the language of squares, and for the two most fundamental distances, the Hamming distance and the edit (Levenshtein) distance. In this problem, defined for a fixed formal language L, we are given a string T of length n, and the task is to compute the minimal distance to L from every prefix of T. We focus on the low-distance regime, where one must compute only the distances smaller than a given threshold k. In this work, our contribution is twofold:
1) First, we show streaming algorithms, which access the input string T only through a single left-to-right scan. Both for palindromes and squares, our algorithms use O(k polylog n) space and time per character in the Hamming-distance case and O(k² polylog n) space and time per character in the edit-distance case. These algorithms are randomised by necessity, and they err with probability inverse-polynomial in n.
2) Second, we show deterministic read-only online algorithms, which are also provided with read-only random access to the already processed characters of T. Both for palindromes and squares, our algorithms use O(k polylog n) space and time per character in the Hamming-distance case and O(k⁴ polylog n) space and amortised time per character in the edit-distance case.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.10/LIPIcs.ISAAC.2023.10.pdf
Approximate pattern matching
streaming algorithms
palindromes
squares
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
11:1
11:13
10.4230/LIPIcs.ISAAC.2023.11
article
Sparse Graphs of Twin-Width 2 Have Bounded Tree-Width
Bergougnoux, Benjamin
1
https://orcid.org/0000-0002-6270-3663
Gajarský, Jakub
1
https://orcid.org/0000-0002-4761-3432
Guśpiel, Grzegorz
2
https://orcid.org/0000-0002-3303-8107
Hliněný, Petr
2
https://orcid.org/0000-0003-2125-1514
Pokrývka, Filip
2
https://orcid.org/0000-0003-1212-4927
Sokołowski, Marek
1
https://orcid.org/0000-0001-8309-0141
University of Warsaw, Poland
Masaryk University, Brno, Czech Republic
Twin-width is a structural width parameter introduced by Bonnet, Kim, Thomassé and Watrigant [FOCS 2020]. Very briefly, its essence is a gradual reduction (a contraction sequence) of the given graph down to a single vertex while maintaining limited difference of neighbourhoods of the vertices, and it can be seen as widely generalizing several other traditional structural parameters. Having such a sequence at hand allows to solve many otherwise hard problems efficiently. Our paper focuses on a comparison of twin-width to the more traditional tree-width on sparse graphs. Namely, we prove that if a graph G of twin-width at most 2 contains no K_{t,t} subgraph for some integer t, then the tree-width of G is bounded by a polynomial function of t. As a consequence, for any sparse graph class C we obtain a polynomial time algorithm which for any input graph G ∈ C either outputs a contraction sequence of width at most c (where c depends only on C), or correctly outputs that G has twin-width more than 2. On the other hand, we present an easy example of a graph class of twin-width 3 with unbounded tree-width, showing that our result cannot be extended to higher values of twin-width.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.11/LIPIcs.ISAAC.2023.11.pdf
twin-width
tree-width
excluded grid
sparsity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
12:1
12:19
10.4230/LIPIcs.ISAAC.2023.12
article
Substring Complexity in Sublinear Space
Bernardini, Giulia
1
https://orcid.org/0000-0001-6647-088X
Fici, Gabriele
2
https://orcid.org/0000-0002-3536-327X
Gawrychowski, Paweł
3
https://orcid.org/0000-0002-6993-5440
Pissis, Solon P.
4
5
https://orcid.org/0000-0002-1445-1932
University of Trieste, Italy
Dipartimento di Matematica e Informatica, University of Palermo, Italy
Institute of Computer Science, University of Wrocław, Poland
CWI, Amsterdam, The Netherlands
Vrije Universiteit, Amsterdam, The Netherlands
Shannon’s entropy is a definitive lower bound for statistical compression. Unfortunately, no such clear measure exists for the compressibility of repetitive strings. Thus, ad hoc measures are employed to estimate the repetitiveness of strings, e.g., the size z of the Lempel–Ziv parse or the number r of equal-letter runs of the Burrows–Wheeler transform. A more recent one is the size γ of a smallest string attractor. Let T be a string of length n. A string attractor of T is a set of positions of T capturing the occurrences of all the substrings of T. Unfortunately, Kempa and Prezza [STOC 2018] showed that computing γ is NP-hard. Kociumaka et al. [LATIN 2020] considered a new measure of compressibility that is based on the function S_T(k) counting the number of distinct substrings of length k of T, also known as the substring complexity of T. This new measure is defined as δ = sup{S_T(k)/k, k ≥ 1} and lower bounds all the relevant ad hoc measures previously considered. In particular, δ ≤ γ always holds and δ can be computed in 𝒪(n) time using Θ(n) working space. Kociumaka et al. showed that one can construct an 𝒪(δ log(n/δ))-sized representation of T supporting efficient direct access and efficient pattern matching queries on T. Given that for highly compressible strings, δ is significantly smaller than n, it is natural to pose the following question:
Can we compute δ efficiently using sublinear working space?
It is straightforward to show that in the comparison model, any algorithm computing δ using 𝒪(b) space requires Ω(n^{2-o(1)}/b) time through a reduction from the element distinctness problem [Yao, SIAM J. Comput. 1994]. We thus wanted to investigate whether we can indeed match this lower bound. We address this algorithmic challenge by showing the following bounds to compute δ:
- 𝒪((n³log b)/b²) time using 𝒪(b) space, for any b ∈ [1,n], in the comparison model.
- 𝒪̃(n²/b) time using 𝒪̃(b) space, for any b ∈ [√n,n], in the word RAM model. This gives an 𝒪̃(n^{1+ε})-time and 𝒪̃(n^{1-ε})-space algorithm to compute δ, for any 0 < ε ≤ 1/2.
Let us remark that our algorithms compute S_T(k), for all k, within the same complexities.
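For intuition, the measure δ has a direct (if space-hungry) baseline implementation straight from its definition; a minimal sketch using Θ(n²) space, which is exactly what the paper's sublinear-space algorithms avoid:

```python
def substring_complexity(T):
    """Return (delta, S) where S[k] is the number of distinct length-k
    substrings of T and delta = max_k S[k]/k. Naive baseline that
    materialises all substrings; the paper computes the same values
    in sublinear working space."""
    n = len(T)
    S = {k: len({T[i:i + k] for i in range(n - k + 1)})
         for k in range(1, n + 1)}
    delta = max(S[k] / k for k in S)
    return delta, S
```

For T = "abab" this gives S(1) = S(2) = S(3) = 2, S(4) = 1, hence δ = 2.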
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.12/LIPIcs.ISAAC.2023.12.pdf
sublinear-space algorithm
string algorithm
substring complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
13:1
13:18
10.4230/LIPIcs.ISAAC.2023.13
article
New Support Size Bounds for Integer Programming, Applied to Makespan Minimization on Uniformly Related Machines
Berndt, Sebastian
1
https://orcid.org/0000-0003-4177-8081
Brinkop, Hauke
2
https://orcid.org/0000-0002-7791-2353
Jansen, Klaus
2
https://orcid.org/0000-0001-8358-6796
Mnich, Matthias
3
https://orcid.org/0000-0002-4721-5354
Stamm, Tobias
3
https://orcid.org/0000-0002-5381-4935
Institute for Theoretical Computer Science, University of Lübeck, Germany
Kiel University, Germany
Hamburg University of Technology, Institute for Algorithms and Complexity, Germany
Mixed-integer linear programming (MILP) is at the core of many advanced algorithms for solving fundamental problems in combinatorial optimization. The complexity of solving MILPs directly correlates with their support size, which is the minimum number of non-zero integer variables in an optimal solution. A hallmark result by Eisenbrand and Shmonin (Oper. Res. Lett., 2006) shows that any feasible integer linear program (ILP) has a solution with support size s ≤ 2m⋅log(4mΔ), where m is the number of constraints, and Δ is the largest absolute coefficient in any constraint.
Our main combinatorial results are improved support size bounds for ILPs.
We show that any ILP has a solution with support size s ≤ m⋅(log(3A_max)+√{log(A_max)}), where A_max ≔ ‖A‖₁ denotes the 1-norm of the constraint matrix A. Furthermore, we show support bounds in the linearized form s ≤ 2m⋅log(1.46 A_max). Our upper bounds also hold with A_max replaced by √mΔ, which improves on the previously best constants in the linearized form.
Our main algorithmic results are the fastest known approximation schemes for fundamental scheduling problems, which use the improved support bounds as one ingredient.
We design an efficient approximation scheme (EPTAS) for makespan minimization on uniformly related machines (Q||C_{max}). Our EPTAS yields a (1+ε)-approximation for Q||C_{max} on N jobs in time 2^𝒪(1/ε log³(1/ε)log(log(1/ε))) + 𝒪(N), which improves over the previously fastest algorithm by Jansen, Klein and Verschae (Math. Oper. Res., 2020) with run time 2^𝒪(1/ε log⁴(1/ε)) + N^𝒪(1). Arguably, our approximation scheme is also simpler than all previous EPTASes for Q||C_{max}, as we reduce the problem to a novel MILP formulation which greatly benefits from the small support.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.13/LIPIcs.ISAAC.2023.13.pdf
Integer programming
scheduling algorithms
uniformly related machines
makespan minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
14:1
14:16
10.4230/LIPIcs.ISAAC.2023.14
article
Improved Guarantees for the a Priori TSP
Blauth, Jannis
1
https://orcid.org/0000-0001-5181-802X
Neuwohner, Meike
1
https://orcid.org/0000-0002-3664-3687
Puhlmann, Luise
1
https://orcid.org/0009-0001-0776-4586
Vygen, Jens
1
Research Inst. for Discrete Mathematics, Hausdorff Center for Math., University of Bonn, Germany
We revisit the a priori TSP (with independent activation) and prove stronger approximation guarantees than were previously known. In the a priori TSP, we are given a metric space (V,c) and an activation probability p(v) for each customer v ∈ V. We ask for a TSP tour T for V that minimizes the expected length after cutting T short by skipping the inactive customers.
All known approximation algorithms select a nonempty subset S of the customers and construct a master route solution, consisting of a TSP tour for S and two edges connecting every customer v ∈ V⧵S to a nearest customer in S.
We address the following questions. If we randomly sample the subset S, what should be the sampling probabilities? How much worse than the optimum can the best master route solution be? The answers to these questions (we provide almost matching lower and upper bounds) lead to improved approximation guarantees: less than 3.1 with randomized sampling, and less than 5.9 with a deterministic polynomial-time algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.14/LIPIcs.ISAAC.2023.14.pdf
A priori TSP
random sampling
stochastic combinatorial optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
15:1
15:14
10.4230/LIPIcs.ISAAC.2023.15
article
An FPT Algorithm for Splitting a Necklace Among Two Thieves
Borzechowski, Michaela
1
Schnider, Patrick
2
https://orcid.org/0000-0002-2172-9285
Weber, Simon
2
https://orcid.org/0000-0003-1901-3621
Department of Mathematics and Computer Science, Freie Universität Berlin, Germany
Department of Computer Science, ETH Zürich, Switzerland
It is well-known that the 2-Thief-Necklace-Splitting problem reduces to the discrete Ham Sandwich problem. In fact, this reduction was crucial in the proof of the PPA-completeness of the Ham Sandwich problem [Filos-Ratsikas and Goldberg, STOC'19]. Recently, a variant of the Ham Sandwich problem called α-Ham Sandwich has been studied, in which the point sets are guaranteed to be well-separated [Steiger and Zhao, DCG'10]. The complexity of this search problem remains unknown, but it is known to lie in the complexity class UEOPL [Chiu, Choudhary and Mulzer, ICALP'20]. We define the analogue of this well-separation condition for the necklace splitting problem: a necklace is n-separable if every subset A of the n types of jewels can be separated from the types [n]⧵A by at most n separator points. Since this version of necklace splitting reduces to α-Ham Sandwich in a solution-preserving way, it follows that instances of this version always have unique solutions.
We furthermore provide two FPT algorithms: The first FPT algorithm solves 2-Thief-Necklace-Splitting on (n-1+𝓁)-separable necklaces with n types of jewels and m total jewels in time 2^O(𝓁log𝓁) + O(m²). In particular, this shows that 2-Thief-Necklace-Splitting is polynomial-time solvable on n-separable necklaces. Thus, attempts to show hardness of α-Ham Sandwich through reduction from the 2-Thief-Necklace-Splitting problem cannot work. The second FPT algorithm tests (n-1+𝓁)-separability of a given necklace with n types of jewels in time 2^O(𝓁²) ⋅ n⁴. In particular, n-separability can thus be tested in polynomial time, even though testing well-separation of point sets is co-NP-complete [Bergold et al., SWAT'22].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.15/LIPIcs.ISAAC.2023.15.pdf
Necklace splitting
n-separability
well-separation
ham sandwich
FPT
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
16:1
16:16
10.4230/LIPIcs.ISAAC.2023.16
article
Fast Convolutions for Near-Convex Sequences
Brand, Cornelius
1
https://orcid.org/0000-0002-1929-055X
Lassota, Alexandra
2
https://orcid.org/0000-0001-6215-066X
Institute of Logic and Computation, Vienna University of Technology, Austria
Max Planck Institute for Informatics, SIC, Saarbrücken, Germany
We develop algorithms for (min,+)-Convolution and related convolution problems such as Super Additivity Testing, Convolution 3-Sum and Minimum Consecutive Subsums which use the degree of convexity of the instance as a parameter. Assuming the min-plus conjecture (Künnemann-Paturi-Schneider, ICALP'17 and Cygan et al., ICALP'17), our results interpolate in an optimal manner between fully convex instances, which can be solved in near-linear time using Legendre transformations, and general non-convex sequences, where the trivial quadratic-time algorithm is conjectured to be best possible, up to subpolynomial factors.
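For readers less familiar with the problem, the trivial quadratic-time baseline that the abstract refers to can be sketched in a few lines (an illustration with our own names and example values, not the paper's algorithm):

```python
def min_plus_convolution(a, b):
    """Naive (min,+)-convolution: c[k] = min over i+j=k of a[i] + b[j].
    Quadratic time; conjectured optimal for general (non-convex) sequences."""
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c
```

For fully convex sequences the same output can be computed in near-linear time via Legendre transformations; the paper's parameterization interpolates between these two regimes.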
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.16/LIPIcs.ISAAC.2023.16.pdf
(min,+)-convolution
fine-grained complexity
convex sequences
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
17:1
17:19
10.4230/LIPIcs.ISAAC.2023.17
article
Matrix Completion: Approximating the Minimum Diameter
Chakraborty, Diptarka
1
Dey, Sanjana
1
National University of Singapore, Singapore
In this paper, we focus on the matrix completion problem and aim to minimize the diameter over an arbitrary alphabet. Given a matrix M with missing entries, our objective is to complete the matrix by filling in the missing entries in a way that minimizes the maximum (Hamming) distance between any pair of rows in the completed matrix (also known as the diameter of the matrix). It is worth noting that this problem is already known to be NP-hard. Currently, the best-known upper bound is a 4-approximation algorithm derived by applying the triangle inequality together with a well-known 2-approximation algorithm for the radius minimization variant.
In this work, we make the following contributions:
- We present a novel 3-approximation algorithm for the diameter minimization variant of the matrix completion problem. To the best of our knowledge, this is the first approximation result that breaks below the straightforward 4-factor bound.
- Furthermore, we establish that the diameter minimization variant of the matrix completion problem is (2-ε)-inapproximable, for any ε > 0, even when considering a binary alphabet, under the assumption that 𝖯 ≠ NP. This is the first result that demonstrates a hardness of approximation for this problem.
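To make the objective concrete, the diameter being minimized can be sketched as follows, treating missing entries optimistically so that the value is a lower bound on any completion (an illustration only, with our own identifiers and `missing` sentinel; this is not the paper's 3-approximation):

```python
def hamming(r1, r2, missing=None):
    """Hamming distance between two rows; entries equal to `missing` could be
    filled to match, so they contribute 0 (an optimistic lower bound)."""
    return sum(1 for a, b in zip(r1, r2)
               if a != b and a != missing and b != missing)

def diameter(matrix, missing=None):
    """Maximum pairwise row distance (the diameter) under that lower bound."""
    return max((hamming(matrix[i], matrix[j], missing)
                for i in range(len(matrix))
                for j in range(i + 1, len(matrix))),
               default=0)
```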
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.17/LIPIcs.ISAAC.2023.17.pdf
Incomplete Data
Matrix Completion
Hamming Distance
Diameter Minimization
Approximation Algorithms
Hardness of Approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
18:1
18:19
10.4230/LIPIcs.ISAAC.2023.18
article
Distance Queries over Dynamic Interval Graphs
Chen, Jingbang
1
https://orcid.org/0000-0002-7279-0801
He, Meng
2
https://orcid.org/0000-0003-0358-7102
Munro, J. Ian
1
https://orcid.org/0000-0002-7165-7988
Peng, Richard
1
Wu, Kaiyu
1
https://orcid.org/0000-0001-7562-1336
Zhang, Daniel J.
3
https://orcid.org/0000-0002-3867-9608
Cheriton School of Computer Science, University of Waterloo, Canada
Faculty of Computer Science, Dalhousie University, Halifax, Canada
School of Computer Science, Georgia Tech, Atlanta, GA, USA
We design the first dynamic distance oracles for interval graphs, which are intersection graphs of a set of intervals on the real line, and for proper interval graphs, which are intersection graphs of a set of intervals in which no interval is properly contained in another.
For proper interval graphs, we design a linear space data structure which supports distance queries (computing the distance between two query vertices) and vertex insertion or deletion in O(lg n) worst-case time, where n is the number of vertices currently in G. Under incremental (insertion only) or decremental (deletion only) settings, we design linear space data structures that support distance queries in O(lg n) worst-case time and vertex insertion or deletion in O(lg n) amortized time, where n is the maximum number of vertices in the graph. Under fully dynamic settings, we design a data structure that represents an interval graph G in O(n) words of space to support distance queries in O(n lg n/S(n)) worst-case time and vertex insertion or deletion in O(S(n)+lg n) worst-case time, where n is the number of vertices currently in G and S(n) is an arbitrary function that satisfies S(n) = Ω(1) and S(n) = O(n). This implies an O(n)-word solution with O(√{nlg n})-time support for both distance queries and updates. All four data structures can answer shortest path queries by reporting the vertices in the shortest path between two query vertices in O(lg n) worst-case time per vertex.
We also study the hardness of supporting distance queries under updates over an intersection graph of 3D axis-aligned line segments, which generalizes our problem to 3D. Finally, we solve the problem of computing the diameter of a dynamic connected interval graph.
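As background for the oracles above, distances in a static interval graph can be computed by the classic greedy "farthest reach" argument: repeatedly hop to the interval extending farthest right. This static, unoptimized sketch (our own code, not the paper's data structure) conveys the structure the dynamic oracles maintain under updates:

```python
def interval_distance(intervals, s, t):
    """Greedy shortest-path distance between intervals s and t in a static
    interval graph; intervals is a list of (l, r) pairs."""
    ls, rs = intervals[s]
    lt, rt = intervals[t]
    if ls <= rt and lt <= rs:            # the two intervals intersect
        return 0 if s == t else 1
    if ls > lt:                          # ensure s lies left of t
        (ls, rs), (lt, rt) = (lt, rt), (ls, rs)
    reach = rs
    hops = 0
    while reach < lt:
        # extend the reachable window by the interval stretching farthest right
        best = max((r for (l, r) in intervals if l <= reach), default=reach)
        if best <= reach:
            return float("inf")          # t is unreachable
        reach = best
        hops += 1
    return hops + 1
```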
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.18/LIPIcs.ISAAC.2023.18.pdf
interval graph
proper interval graph
intersection graph
geometric intersection graph
distance oracle
distance query
shortest path query
dynamic graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
19:1
19:20
10.4230/LIPIcs.ISAAC.2023.19
article
FPT Approximation Using Treewidth: Capacitated Vertex Cover, Target Set Selection and Vector Dominating Set
Chu, Huairui
1
Lin, Bingkai
1
Nanjing University, China
Treewidth is a useful tool in designing graph algorithms. Although many NP-hard graph problems can be solved in linear time when the input graphs have small treewidth, there are problems which remain hard on graphs of bounded treewidth. In this paper, we consider three vertex selection problems that are W[1]-hard when parameterized by the treewidth of the input graph, namely the capacitated vertex cover problem, the target set selection problem and the vector dominating set problem. We provide two new methods to obtain FPT approximation algorithms for these problems. For the capacitated vertex cover problem and the vector dominating set problem, we obtain (1+o(1))-approximation FPT algorithms. For the target set selection problem, we give an FPT algorithm providing a tradeoff between its running time and the approximation ratio.
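For concreteness, the feasibility condition of one of the three problems, vector dominating set, can be stated as a short checker (an illustration with our own names; the paper's contribution is the FPT approximation algorithm, not this check):

```python
def is_vector_dominating(adj, demand, S):
    """S is a vector dominating set if every vertex v outside S has at least
    demand[v] neighbors inside S. adj maps each vertex to its neighbor list."""
    return all(v in S or sum(u in S for u in adj[v]) >= demand[v]
               for v in adj)
```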
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.19/LIPIcs.ISAAC.2023.19.pdf
FPT approximation algorithm
Treewidth
Capacitated vertex cover
Target set selection
Vector dominating set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
20:1
20:17
10.4230/LIPIcs.ISAAC.2023.20
article
Improved Approximation for Two-Dimensional Vector Multiple Knapsack
Cohen, Tomer
1
https://orcid.org/0009-0003-5241-1565
Kulik, Ariel
2
https://orcid.org/0000-0002-0533-3926
Shachnai, Hadas
1
https://orcid.org/0000-0002-6645-4350
Computer Science Department, Technion, Haifa, Israel
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
We study the uniform 2-dimensional vector multiple knapsack (2VMK) problem, a natural variant of multiple knapsack arising in real-world applications such as virtual machine placement. The input for 2VMK is a set of items, each associated with a 2-dimensional weight vector and a positive profit, along with m 2-dimensional bins of uniform (unit) capacity in each dimension. The goal is to find an assignment of a subset of the items to the bins, such that the total weight of items assigned to a single bin is at most one in each dimension, and the total profit is maximized.
Our main result is a (1 - (ln 2)/2 - ε)-approximation algorithm for 2VMK, for every fixed ε > 0, thus improving the best known ratio of (1 - 1/e - ε) which follows as a special case from a result of [Fleischer et al., MOR 2011]. Our algorithm relies on an adaptation of the Round&Approx framework of [Bansal et al., SICOMP 2010], originally designed for set covering problems, to maximization problems. The algorithm uses randomized rounding of a configuration-LP solution to assign items to ≈ m⋅ln 2 ≈ 0.693⋅m of the bins, followed by a reduction to the (1-dimensional) Multiple Knapsack problem for assigning items to the remaining bins.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.20/LIPIcs.ISAAC.2023.20.pdf
vector multiple knapsack
two-dimensional packing
randomized rounding
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
21:1
21:15
10.4230/LIPIcs.ISAAC.2023.21
article
A Compact DAG for Storing and Searching Maximal Common Subsequences
Conte, Alessio
1
https://orcid.org/0000-0003-0770-2235
Grossi, Roberto
1
https://orcid.org/0000-0002-7985-4222
Punzi, Giulia
2
https://orcid.org/0000-0001-8738-1595
Uno, Takeaki
2
https://orcid.org/0000-0001-7274-279X
Università di Pisa, Italy
National Institute of Informatics, Tokyo, Japan
Maximal Common Subsequences (MCSs) between two strings X and Y are subsequences of both X and Y that are maximal under inclusion. MCSs relax and generalize the well-known and widely used concept of Longest Common Subsequences (LCSs), which can be seen as MCSs of maximum length. While the number of both LCSs and MCSs can be exponential in the length of the strings, LCSs have long been exploited for string and text analysis, as simple compact representations of all LCSs between two strings, built via dynamic programming or automata, have been known since the '70s. MCSs appear to have a more challenging structure: even listing them efficiently was an open problem until recently; its solution narrowed the complexity difference between the two problems, but a significant gap remained. In this paper we close the complexity gap: we show how to build a DAG of polynomial size, in polynomial time, which allows for efficient operations on the set of all MCSs such as enumeration in Constant Amortized Time per solution (CAT), counting, and random access to the i-th element (i.e., rank and select operations). Beyond improving known algorithmic results, this work paves the way for new sequence analysis methods based on MCSs.
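For contrast with the MCS machinery above, the classic quadratic dynamic program for the LCS, the maximum-length special case of an MCS, fits in a few lines (a textbook sketch, not code from the paper):

```python
def lcs_length(x, y):
    """Classic O(|x|*|y|) dynamic program for the length of a longest
    common subsequence of strings x and y."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a == b
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]
```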
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.21/LIPIcs.ISAAC.2023.21.pdf
Maximal common subsequence
DAG
Compact data structures
Enumeration
Constant amortized time
Random access
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
22:1
22:15
10.4230/LIPIcs.ISAAC.2023.22
article
Prefix Sorting DFAs: A Recursive Algorithm
Cotumaccio, Nicola
1
2
https://orcid.org/0000-0002-1402-5298
Gran Sasso Science Institute, L'Aquila, Italy
Dalhousie University, Halifax, Canada
In the past thirty years, numerous algorithms for building the suffix array of a string have been proposed. In 2021, the notion of suffix array was extended from strings to DFAs, and it was shown that the resulting data structure can be built in O(m² + n^{5/2}) time, where n is the number of states and m is the number of edges [SODA 2021]. Recently, algorithms running in O(mn) and O(n²log n) time have been described [CPM 2023].
In this paper, we improve the previous bounds by proposing an O(n²) recursive algorithm inspired by Farach’s algorithm for building a suffix tree [FOCS 1997]. To this end, we provide insight into the rich lexicographic and combinatorial structure of a graph, thus contributing to the fascinating journey that might eventually lead to solving the long-standing open problem of building the suffix tree of a graph.
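For strings, the object being generalized can be built naively in one line; the paper's point is performing the analogous construction for DFAs in O(n²) time, so this sketch is only the string baseline (a textbook illustration, not the paper's algorithm):

```python
def suffix_array(s):
    """Naive suffix array: indices of the suffixes of s in lexicographic
    order. O(n^2 log n) because each comparison may scan a whole suffix."""
    return sorted(range(len(s)), key=lambda i: s[i:])
```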
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.22/LIPIcs.ISAAC.2023.22.pdf
Suffix Array
Burrows-Wheeler Transform
FM-index
Recursive Algorithms
Graph Theory
Pattern Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
23:1
23:15
10.4230/LIPIcs.ISAAC.2023.23
article
Clustering in Polygonal Domains
de Berg, Mark
1
https://orcid.org/0000-0001-5770-3784
Biabani, Leyla
1
Monemizadeh, Morteza
1
Theocharous, Leonidas
1
Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands
We study various clustering problems for a set D of n points in a polygonal domain P under the geodesic distance. We start by studying the discrete k-median problem for D in P. We develop an exact algorithm which runs in time poly(n,m) + n^O(√k), where m is the complexity of the domain. Subsequently, we show that our approach can also be applied to solve the k-center problem with z outliers in the same running time. Next, we turn our attention to approximation algorithms. In particular, we study the k-center problem in a simple polygon and show how to obtain a (1+ε)-approximation algorithm which runs in time 2^{O((k log(k))/ε)} (n log(m) + m). To obtain this, we demonstrate that a previous approach by Bădoiu et al. [Bădoiu et al., 2002; Bădoiu and Clarkson, 2003] that works in ℝ^d carries over to the setting of simple polygons. Finally, we study the 1-center problem in a simple polygon in the presence of z outliers. We show that a coreset C of size O(z) exists, such that the 1-center of C is a 3-approximation of the 1-center of D, when z outliers are allowed. This result is actually more general and carries over to any metric space, which to the best of our knowledge was not known so far. By extending this approach, we show that for the 1-center problem under the Euclidean metric in ℝ², there exists an ε-coreset of size O(z/ε).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.23/LIPIcs.ISAAC.2023.23.pdf
clustering
geodesic distance
coreset
outliers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
24:1
24:17
10.4230/LIPIcs.ISAAC.2023.24
article
Finding Diverse Minimum s-t Cuts
de Berg, Mark
1
López Martínez, Andrés
1
Spieksma, Frits
1
Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands
Recently, many studies have been devoted to finding diverse solutions in classical combinatorial problems, such as Vertex Cover (Baste et al., IJCAI'20), Matching (Fomin et al., ISAAC'20) and Spanning Tree (Hanaka et al., AAAI'21). Finding diverse solutions is important in settings where the user is not able to specify all criteria of the desired solution. Motivated by an application in the field of system identification, we initiate the algorithmic study of k-Diverse Minimum s-t Cuts which, given a directed graph G = (V, E), two specified vertices s,t ∈ V, and an integer k > 0, asks for a collection of k minimum s-t cuts in G that has maximum diversity. We investigate the complexity of the problem for two diversity measures for a collection of cuts: (i) the sum of all pairwise Hamming distances, and (ii) the cardinality of the union of cuts in the collection. We prove that k-Diverse Minimum s-t Cuts can be solved in strongly polynomial time for both diversity measures via submodular function minimization. We obtain this result by establishing a connection between ordered collections of minimum s-t cuts and the theory of distributive lattices. When restricted to finding only collections of mutually disjoint solutions, we provide a more practical algorithm that finds a maximum set of pairwise disjoint minimum s-t cuts. For graphs with small minimum s-t cut, it runs in the time of a single max-flow computation. These results stand in contrast to the problem of finding k diverse global minimum cuts - which is known to be NP-hard even for the disjoint case (Hanaka et al., AAAI'23) - and partially answer a long-standing open question of Wagner (Networks 1990) about improving the complexity of finding disjoint collections of minimum s-t cuts.
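The two diversity measures from the abstract can be stated directly over the edge sets of the cuts (an illustrative sketch with our own identifiers):

```python
from itertools import combinations

def pairwise_hamming_sum(cuts):
    """Measure (i): sum of pairwise Hamming distances, i.e. sizes of the
    symmetric differences between the cuts' edge sets."""
    return sum(len(a ^ b) for a, b in combinations(map(set, cuts), 2))

def union_size(cuts):
    """Measure (ii): cardinality of the union of the cuts in the collection."""
    return len(set().union(*map(set, cuts)))
```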
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.24/LIPIcs.ISAAC.2023.24.pdf
S-T MinCut
Diversity
Lattice Theory
Submodular Function Minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
25:1
25:17
10.4230/LIPIcs.ISAAC.2023.25
article
Efficient Algorithms for Euclidean Steiner Minimal Tree on Near-Convex Terminal Sets
Dhar, Anubhav
1
Hait, Soumita
1
Kolay, Sudeshna
1
Indian Institute of Technology Kharagpur, India
The Euclidean Steiner Minimal Tree problem takes as input a set P of points in the Euclidean plane and finds the minimum length network interconnecting all the points of P. In this paper, in continuation of the works of [Du et al., 1987] and [Weng and Booth, 1995], we study Euclidean Steiner Minimal Tree when P is formed by the vertices of a pair of regular, concentric and parallel n-gons.
We restrict our attention to the cases where the two polygons are not very close to each other. In such cases, we show that Euclidean Steiner Minimal Tree is polynomial-time solvable, and we describe an explicit structure of a Euclidean Steiner minimal tree for P.
We also consider point sets P of size n where the number of input points not on the convex hull of P is f(n) ≤ n. We give an exact algorithm with running time 2^𝒪(f(n) log n) for such input point sets P. Note that when f(n) = 𝒪(n/(log n)), our algorithm runs in single-exponential time, and when f(n) = o(n) the running time is 2^o(n log n) which is better than the known algorithm in [Hwang et al., 1992].
We know that no FPTAS exists for Euclidean Steiner Minimal Tree unless P = NP [Garey et al., 1977]. On the other hand FPTASes exist for Euclidean Steiner Minimal Tree on convex point sets [Scott Provan, 1988]. In this paper, we show that if the number of input points in P not belonging to the convex hull of P is 𝒪(log n), then an FPTAS exists for Euclidean Steiner Minimal Tree. In contrast, we show that for any ε ∈ (0,1], when there are Ω(n^ε) points not belonging to the convex hull of the input set, then no FPTAS can exist for Euclidean Steiner Minimal Tree unless P = NP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.25/LIPIcs.ISAAC.2023.25.pdf
Steiner minimal tree
Euclidean Geometry
Almost Convex point sets
FPTAS
strong NP-completeness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
26:1
26:20
10.4230/LIPIcs.ISAAC.2023.26
article
Rectilinear-Upward Planarity Testing of Digraphs
Didimo, Walter
1
https://orcid.org/0000-0002-4379-6059
Kaufmann, Michael
2
https://orcid.org/0000-0001-9186-3538
Liotta, Giuseppe
1
https://orcid.org/0000-0002-2886-9694
Ortali, Giacomo
1
https://orcid.org/0000-0002-4481-698X
Patrignani, Maurizio
3
https://orcid.org/0000-0001-9806-7411
Department of Engineering, University of Perugia, Italy
Department of Computer Science, University of Tübingen, Germany
Department of Civil, Computer and Aeronautical Engineering, Roma Tre University, Italy
A rectilinear-upward planar drawing of a digraph G is a crossing-free drawing of G where each edge is either a horizontal or a vertical segment, and such that no directed edge points downward. Rectilinear-Upward Planarity Testing is the problem of deciding whether a digraph G admits a rectilinear-upward planar drawing. We show that: (i) Rectilinear-Upward Planarity Testing is NP-complete, even if G is biconnected; (ii) it can be solved in linear time when an upward planar embedding of G is fixed; (iii) the problem is polynomial-time solvable for biconnected digraphs of treewidth at most two, i.e., for digraphs whose underlying undirected graph is a series-parallel graph; (iv) for any biconnected digraph the problem is fixed-parameter tractable when parameterized by the number of sources and sinks in the digraph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.26/LIPIcs.ISAAC.2023.26.pdf
Graph drawing
orthogonal drawings
upward drawings
rectilinear planarity
upward planarity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
27:1
27:17
10.4230/LIPIcs.ISAAC.2023.27
article
A Unified Worst Case for Classical Simplex and Policy Iteration Pivot Rules
Disser, Yann
1
https://orcid.org/0000-0002-2085-0454
Mosis, Nils
1
https://orcid.org/0000-0002-0692-0647
TU Darmstadt, Germany
We construct a family of Markov decision processes for which the policy iteration algorithm needs an exponential number of improving switches with Dantzig’s rule, with Bland’s rule, and with the Largest Increase pivot rule. This immediately translates to a family of linear programs for which the simplex algorithm needs an exponential number of pivot steps with the same three pivot rules. Our results yield a unified construction that simultaneously reproduces well-known lower bounds for these classical pivot rules, and we are able to infer that any (deterministic or randomized) combination of them cannot avoid an exponential worst-case behavior. Regarding the policy iteration algorithm, pivot rules typically switch multiple edges simultaneously, and our lower bounds for Dantzig’s rule and the Largest Increase rule, which perform only single switches, seem novel. Regarding the simplex algorithm, the individual lower bounds were previously obtained separately via deformed hypercube constructions. In contrast to previous bounds for the simplex algorithm via Markov decision processes, our rigorous analysis is reasonably concise.
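For reference, the entering-variable choices of two of the three pivot rules can be sketched on a vector of reduced costs (illustrative only; the paper's construction concerns their worst-case behavior, not implementation):

```python
def dantzig_entering(reduced_costs):
    """Dantzig's rule: enter the variable with the most negative reduced
    cost; None means all costs are nonnegative (the basis is optimal)."""
    j, c = min(enumerate(reduced_costs), key=lambda t: t[1])
    return j if c < 0 else None

def bland_entering(reduced_costs):
    """Bland's rule: the smallest index with a negative reduced cost."""
    return next((j for j, c in enumerate(reduced_costs) if c < 0), None)
```

The Largest Increase rule, by contrast, requires the full tableau: it picks the pivot yielding the largest objective improvement, not just the steepest reduced cost.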
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.27/LIPIcs.ISAAC.2023.27.pdf
Bland’s pivot rule
Dantzig’s pivot rule
Largest Increase pivot rule
Markov decision process
policy iteration
simplex algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
28:1
28:18
10.4230/LIPIcs.ISAAC.2023.28
article
Exact Matching: Correct Parity and FPT Parameterized by Independence Number
El Maalouly, Nicolas
1
https://orcid.org/0000-0002-1037-0203
Steiner, Raphael
1
https://orcid.org/0000-0002-4234-6136
Wulf, Lasse
2
https://orcid.org/0000-0001-7139-4092
Department of Computer Science, ETH Zürich, Switzerland
Institute of Discrete Mathematics, TU Graz, Austria
Given an integer k and a graph where every edge is colored either red or blue, the goal of the exact matching problem is to find a perfect matching with the property that exactly k of its edges are red. Soon after Papadimitriou and Yannakakis (JACM 1982) introduced the problem, a randomized polynomial-time algorithm solving the problem was described by Mulmuley et al. (Combinatorica 1987). Despite a lot of effort, it is still not known today whether a deterministic polynomial-time algorithm exists. This makes the exact matching problem an important candidate to test the popular conjecture that the complexity classes P and RP are equal. In a recent article (MFCS 2022), progress was made towards this goal by showing that for bipartite graphs of bounded bipartite independence number, a polynomial time algorithm exists. In terms of parameterized complexity, this algorithm was an XP-algorithm parameterized by the bipartite independence number. In this article, we introduce novel algorithmic techniques that allow us to obtain an FPT-algorithm. If the input is a general graph we show that one can at least compute a perfect matching M which has the correct number of red edges modulo 2, in polynomial time. This is motivated by our last result, in which we prove that an FPT algorithm for general graphs, parameterized by the independence number, reduces to the problem of finding in polynomial time a perfect matching M with at most k red edges and the correct number of red edges modulo 2.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.28/LIPIcs.ISAAC.2023.28.pdf
Perfect Matching
Exact Matching
Independence Number
Parameterized Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
29:1
29:17
10.4230/LIPIcs.ISAAC.2023.29
article
Approximation Guarantees for Shortest Superstrings: Simpler and Better
Englert, Matthias
1
https://orcid.org/0000-0002-8859-7731
Matsakis, Nicolaos
2
https://orcid.org/0000-0002-0386-749X
Veselý, Pavel
2
https://orcid.org/0000-0003-1169-7934
University of Warwick, Coventry, UK
Charles University, Prague, Czech Republic
The Shortest Superstring problem is an NP-hard problem, in which given as input a set of strings, we are looking for a string of minimum length that contains all input strings as substrings. The Greedy Conjecture (Tarhio and Ukkonen, 1988) states that the GREEDY algorithm, which repeatedly merges the two strings of maximum overlap, is 2-approximate. We have recently shown (STOC 2022) that the approximation guarantee of GREEDY is at most (13+√{57})/6 ≈ 3.425. Before that, the best established upper bound for this was 3.5 by Kaplan and Shafrir (IPL 2005), which improved upon the upper bound of 4 by Blum et al. (STOC 1991). To derive our previous result, we established two incomparable upper bounds on the overlap sum of all cycle-closing edges in an optimal cycle cover and utilized lemmas of Blum et al.
We improve the more involved one of the two bounds and, at the same time, make its proof more straightforward. This results in an improved approximation guarantee of (√{67}+2)/3 ≈ 3.396 for GREEDY. Additionally, our result implies an algorithm for the Shortest Superstring problem having an approximation guarantee of (√{67}+14)/9 ≈ 2.466, improving slightly upon the previously best guarantee of (√{57}+37)/18 ≈ 2.475 (STOC 2022).
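A minimal sketch of the GREEDY algorithm analyzed above, after discarding strings contained in others (names and tie-breaking are ours; real implementations compute overlaps with suffix structures rather than this quadratic scan):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """GREEDY: repeatedly merge the two strings with maximum overlap
    until a single superstring remains."""
    s = [x for x in strings if not any(x != y and x in y for y in strings)]
    while len(s) > 1:
        i, j, k = max(((i, j, overlap(s[i], s[j]))
                       for i in range(len(s))
                       for j in range(len(s)) if i != j),
                      key=lambda t: t[2])
        merged = s[i] + s[j][k:]
        s = [x for idx, x in enumerate(s) if idx not in (i, j)] + [merged]
    return s[0] if s else ""
```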
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.29/LIPIcs.ISAAC.2023.29.pdf
Shortest Superstring problem
Approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
30:1
30:13
10.4230/LIPIcs.ISAAC.2023.30
article
Rapid Mixing for the Hardcore Glauber Dynamics and Other Markov Chains in Bounded-Treewidth Graphs
Eppstein, David
1
Frishberg, Daniel
2
https://orcid.org/0000-0002-1861-5439
Department of Computer Science, University of California, Irvine, CA, USA
Department of Computer Science and Software Engineering, California Polytechnic State University, San Luis Obispo, CA, USA
We give a new rapid mixing result for a natural random walk on the independent sets of a graph G. We show that when G has bounded treewidth, this random walk - known as the Glauber dynamics for the hardcore model - mixes rapidly for all fixed values of the standard parameter λ > 0, giving a simple alternative to existing sampling algorithms for these structures. We also show rapid mixing for analogous Markov chains on dominating sets, b-edge covers, b-matchings, maximal independent sets, and maximal b-matchings. (For b-matchings, maximal independent sets, and maximal b-matchings we also require bounded degree.) Our results imply simpler alternatives to known algorithms for the sampling and approximate counting problems in these graphs. We prove our results by applying a divide-and-conquer framework we developed in a previous paper, as an alternative to the projection-restriction technique introduced by Jerrum, Son, Tetali, and Vigoda. We extend this prior framework to handle chains for which the application of that framework is not straightforward, strengthening existing results by Dyer, Goldberg, and Jerrum and by Heinrich for the Glauber dynamics on q-colorings of graphs of bounded treewidth and bounded degree.
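The chain in question is simple to state: at each step a uniformly random vertex is resampled conditionally on its neighbors. A sketch of this single-site update (parameter names ours; the paper's contribution is the mixing-time bound, not the sampler):

```python
import random

def hardcore_glauber(adj, lam, steps, seed=0):
    """Glauber dynamics for the hardcore model: pick a vertex v uniformly;
    if no neighbor is occupied, occupy v with probability lam/(1+lam),
    otherwise leave it unoccupied. Returns the final independent set."""
    rng = random.Random(seed)
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        if not any(u in occupied for u in adj[v]):
            if rng.random() < lam / (1 + lam):
                occupied.add(v)
    return occupied
```

Every state of the chain is an independent set by construction, and for λ > 0 the chain converges to the hardcore distribution, where a set I has probability proportional to λ^|I|.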
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.30/LIPIcs.ISAAC.2023.30.pdf
Glauber dynamics
mixing time
projection-restriction
multicommodity flow
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
31:1
31:16
10.4230/LIPIcs.ISAAC.2023.31
article
Matching Cuts in Graphs of High Girth and H-Free Graphs
Feghali, Carl
1
https://orcid.org/0000-0001-6727-7213
Lucke, Felicia
2
https://orcid.org/0000-0002-9860-2928
Paulusma, Daniël
3
https://orcid.org/0000-0001-5945-9287
Ries, Bernard
2
https://orcid.org/0000-0003-4395-5547
University of Lyon, EnsL, CNRS, LIP, F-69342, Lyon Cedex 07, France
Department of Informatics, University of Fribourg, Switzerland
Department of Computer Science, Durham University, UK
The (Perfect) Matching Cut problem is to decide if a connected graph has a (perfect) matching that is also an edge cut. The Disconnected Perfect Matching problem is to decide if a connected graph has a perfect matching that contains a matching cut. Both Matching Cut and Disconnected Perfect Matching are NP-complete for planar graphs of girth 5, whereas Perfect Matching Cut is known to be NP-complete even for subcubic bipartite graphs of arbitrarily large fixed girth. We prove that Matching Cut and Disconnected Perfect Matching are also NP-complete for bipartite graphs of arbitrarily large fixed girth and bounded maximum degree. Our result for Matching Cut resolves a 20-year old open problem. We also show that the more general problem d-Cut, for every fixed d ≥ 1, is NP-complete for bipartite graphs of arbitrarily large fixed girth and bounded maximum degree. Furthermore, we show that Matching Cut, Perfect Matching Cut and Disconnected Perfect Matching are NP-complete for H-free graphs whenever H contains a connected component with two vertices of degree at least 3. Afterwards, we update the state-of-the-art summaries for H-free graphs and compare them with each other, and with a known and full classification of the Maximum Matching Cut problem, which is to determine a largest matching cut of a graph G. Finally, by combining existing results, we obtain a complete complexity classification of Perfect Matching Cut for H-subgraph-free graphs where H is any finite set of graphs.
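To fix terminology, the central object, a matching cut, admits a short checker: the edge set must be a matching, and its removal must disconnect the graph (an illustrative sketch with our own identifiers; the paper's results concern the complexity of the decision problems):

```python
def is_matching_cut(n, edges, cut):
    """True iff `cut` is a matching (no shared endpoints) whose removal
    disconnects the graph on vertices 0..n-1 with the given edge list."""
    ends = [v for e in cut for v in e]
    if len(ends) != len(set(ends)):
        return False                      # two cut edges share an endpoint
    parent = list(range(n))               # union-find over remaining edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    cutset = {frozenset(e) for e in cut}
    for u, v in edges:
        if frozenset((u, v)) not in cutset:
            parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) > 1
```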
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.31/LIPIcs.ISAAC.2023.31.pdf
matching cut
perfect matching
girth
H-free graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
32:1
32:15
10.4230/LIPIcs.ISAAC.2023.32
article
Computing Paths of Large Rank in Planar Frameworks Deterministically
Fomin, Fedor V.
1
https://orcid.org/0000-0003-1955-4612
Golovach, Petr A.
1
https://orcid.org/0000-0002-2619-2990
Korhonen, Tuukka
1
https://orcid.org/0000-0003-0861-6515
Stamoulis, Giannos
2
https://orcid.org/0000-0002-4175-7793
Department of Informatics, University of Bergen, Norway
LIRMM, Université de Montpellier, CNRS, Montpellier, France
A framework consists of an undirected graph G and a matroid M whose elements correspond to the vertices of G. Recently, Fomin et al. [SODA 2023] and Eiben et al. [arXiv 2023] developed parameterized algorithms for computing paths of rank k in frameworks. More precisely, for vertices s and t of G, and an integer k, they gave FPT algorithms parameterized by k that decide whether there is an (s,t)-path in G whose vertex set contains a subset of elements of M of rank k. These algorithms are based on the Schwartz-Zippel lemma for polynomial identity testing and are therefore randomized; the existence of a deterministic FPT algorithm for this problem remains open.
We present the first deterministic FPT algorithm that solves the problem in frameworks whose underlying graph G is planar. While the running time of our algorithm is worse than the running times of the recent randomized algorithms, our algorithm works on more general classes of matroids. In particular, this is the first FPT algorithm for the case when matroid M is represented over rationals.
Our main technical contribution is the nontrivial adaptation of the classic irrelevant vertex technique to frameworks to reduce the given instance to one of bounded treewidth. This allows us to employ the toolbox of representative sets to design a dynamic programming procedure solving the problem efficiently on instances of bounded treewidth.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.32/LIPIcs.ISAAC.2023.32.pdf
Planar graph
longest path
linear matroid
irrelevant vertex
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
33:1
33:19
10.4230/LIPIcs.ISAAC.2023.33
article
Pattern-Avoiding Binary Trees - Generation, Counting, and Bijections
Gregor, Petr
1
https://orcid.org/0000-0002-3608-2533
Mütze, Torsten
2
1
https://orcid.org/0000-0002-6383-7436
Namrata
3
https://orcid.org/0000-0002-6582-4196
Department of Theoretical Computer Science and Mathematical Logic, Charles University, Prague, Czech Republic
Department of Computer Science, University of Warwick, United Kingdom
Department of Computer Science, University of Warwick, Coventry, UK
In this paper we propose a notion of pattern avoidance in binary trees that generalizes the avoidance of contiguous tree patterns studied by Rowland and non-contiguous tree patterns studied by Dairyko, Pudwell, Tyner, and Wynn. Specifically, we propose algorithms for generating different classes of binary trees that are characterized by avoiding one or more of these generalized patterns. This is achieved by applying the recent Hartung-Hoang-Mütze-Williams generation framework, by encoding binary trees via permutations. In particular, we establish a one-to-one correspondence between tree patterns and certain mesh permutation patterns. We also conduct a systematic investigation of all tree patterns on at most 5 vertices, and we establish bijections between pattern-avoiding binary trees and other combinatorial objects, in particular pattern-avoiding lattice paths and set partitions.
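As a small illustration of contiguous tree-pattern avoidance (a simplified sketch, not the paper's generation framework), binary trees can be encoded as nested pairs and checked against a pattern in which None matches any subtree; all names here are hypothetical:

```python
def trees(n):
    """All binary trees with n internal nodes, as nested pairs (left, right); None is the empty tree."""
    if n == 0:
        return [None]
    result = []
    for i in range(n):
        for left in trees(i):
            for right in trees(n - 1 - i):
                result.append((left, right))
    return result

def matches_at(t, p):
    # None in the pattern matches any subtree; a pattern node needs a tree node
    if p is None:
        return True
    if t is None:
        return False
    return matches_at(t[0], p[0]) and matches_at(t[1], p[1])

def contains(t, p):
    """Does tree t contain the contiguous pattern p rooted at some node?"""
    if t is None:
        return matches_at(t, p)
    return matches_at(t, p) or contains(t[0], p) or contains(t[1], p)
```

For instance, among the 5 binary trees with 3 internal nodes, exactly one (the right path) avoids the pattern "a node with a left child".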
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.33/LIPIcs.ISAAC.2023.33.pdf
Generation
binary tree
pattern avoidance
permutation
bijection
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
34:1
34:15
10.4230/LIPIcs.ISAAC.2023.34
article
Computing a Subtrajectory Cluster from c-Packed Trajectories
Gudmundsson, Joachim
1
https://orcid.org/0000-0002-6778-7990
Huang, Zijin
1
https://orcid.org/0000-0003-3417-5303
van Renssen, André
1
https://orcid.org/0000-0002-9294-9947
Wong, Sampson
2
https://orcid.org/0000-0003-3803-3804
The University of Sydney, Australia
BARC, University of Copenhagen, Denmark
We present a near-linear time approximation algorithm for the subtrajectory cluster problem on c-packed trajectories. Given a trajectory T of complexity n, an approximation factor ε, and a desired distance d, the problem asks for m subtrajectories of T whose pairwise Fréchet distance is at most (1 + ε)d, where at least one subtrajectory has length l or longer. A trajectory T is c-packed if the intersection of T and any ball B of radius r has length at most c⋅r.
Previous results by Gudmundsson and Wong [Gudmundsson and Wong, 2022] established an Ω(n³) lower bound unless the Strong Exponential Time Hypothesis fails, and they presented an O(n³ log² n) time algorithm. We circumvent this conditional lower bound by studying subtrajectory cluster on c-packed trajectories, resulting in an algorithm with an O((c² n/ε²)log(c/ε)log(n/ε)) time complexity.
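The Fréchet distance is the similarity measure underlying the clustering above. As background (the paper works with the continuous Fréchet distance; this sketch shows the simpler discrete variant on polygonal curves), the classic dynamic program is:

```python
import math

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between point sequences P and Q via dynamic programming."""
    n, m = len(P), len(Q)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            c = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = c
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], c)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], c)
            else:
                # advance on P, on Q, or on both; keep the cheapest coupling
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), c)
    return ca[-1][-1]
```

For two parallel horizontal segments at vertical distance 1, the value is exactly 1.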
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.34/LIPIcs.ISAAC.2023.34.pdf
Subtrajectory cluster
c-packed trajectories
Computational geometry
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
35:1
35:17
10.4230/LIPIcs.ISAAC.2023.35
article
Shortest Beer Path Queries in Digraphs with Bounded Treewidth
Gudmundsson, Joachim
1
Sha, Yuan
1
The University of Sydney, Australia
A beer digraph G is a real-valued weighted directed graph where some of the vertices have beer stores. A beer path from a vertex u to a vertex v in G is a path in G from u to v that visits at least one beer store.
In this paper we consider the online shortest beer path query in beer digraphs with bounded treewidth t. Assume that a tree decomposition of treewidth t on a beer digraph with n vertices is given. We show that after O(t³n) time preprocessing on the beer digraph, (i) a beer distance query can be answered in O(t³α(n)) time, where α(n) is the inverse Ackermann function, and (ii) a shortest beer path can be reported in O(t³α(n)L) time, where L is the number of edges on the path. In the process we show an improved O(t³α(n)L) time shortest path query algorithm, compared with the currently best O(t⁴α(n)L) time algorithm [Chaudhuri & Zaroliagis, 2000].
We also consider queries in a dynamic setting where the weight of an edge in G can change over time. We show two data structures. Assume t is constant and let β be any constant in (0,1). The first data structure uses O(n) preprocessing time, answers a beer distance query in O(α(n)) time and reports a shortest beer path in O(α(n) L) time. It can be updated in O(n^β) time after an edge weight change. The second data structure has O(n) preprocessing time, answers a beer distance query in O(log n) time, reports a shortest beer path in O(log n + L) time, and can be updated in O(log n) time after an edge weight change.
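For intuition, the beer distance defined above decomposes as min over beer vertices b of dist(u,b) + dist(b,v). A naive baseline (one Dijkstra per beer vertex, nowhere near the paper's index-based bounds; encoding and names are hypothetical) can be sketched as:

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths in a digraph given as {u: [(v, w), ...]}."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def beer_distance(adj, beer, u, v):
    """Shortest u-v path length that visits at least one beer vertex."""
    du = dijkstra(adj, u)
    best = float('inf')
    for b in beer:
        db = dijkstra(adj, b)
        best = min(best, du.get(b, float('inf')) + db.get(v, float('inf')))
    return best
```

For example, with arcs 0→1→2 of weight 1 each, a shortcut 0→2 of weight 5, and a beer store at vertex 1, the beer distance from 0 to 2 is 2: the shortcut is shorter only for plain paths.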
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.35/LIPIcs.ISAAC.2023.35.pdf
Graph algorithms
Shortest Path
Data structures
Bounded treewidth
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
36:1
36:14
10.4230/LIPIcs.ISAAC.2023.36
article
Coloring and Recognizing Mixed Interval Graphs
Gutowski, Grzegorz
1
https://orcid.org/0000-0003-3313-1237
Junosza-Szaniawski, Konstanty
2
https://orcid.org/0000-0003-0352-8583
Klesen, Felix
3
https://orcid.org/0000-0003-1136-5673
Rzążewski, Paweł
2
4
https://orcid.org/0000-0001-7696-3848
Wolff, Alexander
3
https://orcid.org/0000-0001-5872-718X
Zink, Johannes
3
https://orcid.org/0000-0002-7398-718X
Theoretical Computer Science Department, Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
Warsaw University of Technology, Poland
Universität Würzburg, Germany
Institute of Informatics, University of Warsaw, Poland
A mixed interval graph is an interval graph that has, for every pair of intersecting intervals, either an arc (directed arbitrarily) or an (undirected) edge. We are particularly interested in scenarios where edges and arcs are defined by the geometry of intervals. In a proper coloring of a mixed interval graph G, an interval u must receive a lower color than an interval v if G contains the arc (u,v), and a different color than v if G contains the edge {u,v}. Coloring of mixed graphs has applications, for example, in scheduling with precedence constraints; see the survey by Sotskov [Mathematics, 2020].
For coloring general mixed interval graphs, we present a min {ω(G), λ(G)+1}-approximation algorithm, where ω(G) is the size of a largest clique and λ(G) is the length of a longest directed path in G. For the subclass of bidirectional interval graphs (introduced recently for an application in graph drawing), we show that optimal coloring is NP-hard. This was known for general mixed interval graphs.
We introduce a new natural class of mixed interval graphs, which we call containment interval graphs. In such a graph, there is an arc (u,v) if interval u contains interval v, and there is an edge {u,v} if u and v overlap. We show that these graphs can be recognized in polynomial time, that coloring them with the minimum number of colors is NP-hard, and that there is a 2-approximation algorithm for coloring.
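The containment interval graph of a set of intervals can be built directly from the definition above (an illustrative sketch, not the paper's recognition algorithm; function name is hypothetical):

```python
def containment_mixed_graph(intervals):
    """Arcs for containment, edges for proper overlap, as defined for containment interval graphs."""
    arcs, edges = [], []
    for i, (l1, r1) in enumerate(intervals):
        for j in range(i + 1, len(intervals)):
            l2, r2 = intervals[j]
            if r1 < l2 or r2 < l1:
                continue  # disjoint intervals: no arc, no edge
            if l1 <= l2 and r2 <= r1:
                arcs.append((i, j))   # interval i contains interval j
            elif l2 <= l1 and r1 <= r2:
                arcs.append((j, i))
            else:
                edges.append((i, j))  # intervals overlap without containment
    return arcs, edges
```

For intervals [0,10], [1,2], [5,15] this yields one arc (0 contains 1) and one edge (0 and 2 overlap).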
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.36/LIPIcs.ISAAC.2023.36.pdf
Interval Graphs
Mixed Graphs
Graph Coloring
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
37:1
37:20
10.4230/LIPIcs.ISAAC.2023.37
article
Shortest Beer Path Queries Based on Graph Decomposition
Hanaka, Tesshu
1
https://orcid.org/0000-0001-6943-856X
Ono, Hirotaka
2
https://orcid.org/0000-0003-0845-3947
Sadakane, Kunihiko
3
https://orcid.org/0000-0002-8212-3682
Sugiyama, Kosuke
2
https://orcid.org/0009-0004-9419-9176
Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
Graduate School of Informatics, Nagoya University, Japan
Graduate School of Information Science and Technology, The University of Tokyo, Japan
Given a directed edge-weighted graph G = (V, E) with beer vertices B ⊆ V, a beer path between two vertices u and v is a path between u and v that visits at least one beer vertex in B, and the beer distance between two vertices is the length of a shortest beer path between them. We consider indexing problems on beer paths: a graph is given a priori, and we construct data structures (called indexes) for it; later, given two vertices, we find the beer distance or a beer path between them using the indexes. For such a scheme, efficient algorithms using indexes for beer distance and beer path queries have been proposed for outerplanar graphs and interval graphs. For example, Bacic et al. (2021) present indexes of size O(n) for outerplanar graphs and an algorithm using them that answers a beer distance query between two given vertices in O(α(n)) time, where α(⋅) is the inverse Ackermann function; this performance is shown to be optimal. This paper proposes indexing data structures and algorithms for beer path queries on general graphs based on two types of graph decomposition: the tree decomposition and the triconnected component decomposition. We propose indexes of size O(m+nr²) based on the triconnected component decomposition, where r is the size of the largest triconnected component. For a given query u,v ∈ V, our algorithm using the indexes outputs the beer distance in O(α(m)) query time. In particular, our indexing data structures and algorithms achieve the optimal performance (both space and query time) for series-parallel graphs, which form a wider class than outerplanar graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.37/LIPIcs.ISAAC.2023.37.pdf
graph algorithm
shortest path problem
SPQR tree
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
38:1
38:19
10.4230/LIPIcs.ISAAC.2023.38
article
Temporal Separators with Deadlines
Harutyunyan, Hovhannes A.
1
Koupayi, Kamran
1
Pankratov, Denis
1
Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
We study temporal analogues of the Unrestricted Vertex Separator problem from the static world. An (s,z)-temporal separator is a set of vertices whose removal disconnects vertex s from vertex z for every time step in a temporal graph. The (s,z)-Temporal Separator problem asks to find the minimum size of an (s,z)-temporal separator for the given temporal graph. The (s,z)-Temporal Separator problem is known to be NP-hard in general, although some special cases (such as bounded treewidth) admit efficient algorithms [Fluschnik et al., 2020].
We introduce a generalization of this problem called the (s,z,t)-Temporal Separator problem, where the goal is to find a smallest subset of vertices whose removal eliminates all temporal paths from s to z which take less than t time steps. Let τ denote the number of time steps over which the temporal graph is defined (we consider discrete time steps). We characterize the set of parameters τ and t when the problem is NP-hard and when it is polynomial time solvable. Then we present a τ-approximation algorithm for the (s,z)-Temporal Separator problem and convert it to a τ²-approximation algorithm for the (s,z,t)-Temporal Separator problem. We also present an inapproximability lower bound of Ω(ln(n) + ln(τ)) for the (s,z,t)-Temporal Separator problem assuming that NP ⊄ DTIME(n^{log log n}). Then we consider three special families of graphs: (1) graphs of branchwidth at most 2, (2) graphs G such that the removal of s and z leaves a tree, and (3) graphs of bounded pathwidth. We present polynomial-time algorithms to find a minimum (s,z,t)-temporal separator for (1) and (2). As for (3), we show a polynomial-time reduction from the Discrete Segment Covering problem with bounded-length segments to the (s,z,t)-Temporal Separator problem where the temporal graph has bounded pathwidth.
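The temporal paths that an (s,z,t)-separator must destroy can be made concrete with a small reachability sketch (illustrative only, not the paper's algorithms): each time step contributes an edge set, a path traverses at most one edge per step, and vertices may wait in place. The encoding is hypothetical:

```python
def temporal_reachable(timed_edges, s, z, t):
    """Can z be reached from s by a temporal path using only the first t time steps?

    timed_edges is a list, one undirected edge list per discrete time step.
    """
    reach = {s}
    for edges in timed_edges[:t]:
        new = set(reach)  # vertices may wait in place
        for u, v in edges:
            if u in reach:
                new.add(v)
            if v in reach:
                new.add(u)
        reach = new
        if z in reach:
            return True
    return z in reach
```

For example, with edge {0,1} available at step 1 and edge {1,2} at step 2, vertex 2 is reachable from 0 within two steps but not within one.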
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.38/LIPIcs.ISAAC.2023.38.pdf
Temporal graphs
dynamic graphs
vertex separator
vertex cut
separating set
deadlines
inapproximability
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
39:1
39:16
10.4230/LIPIcs.ISAAC.2023.39
article
Regularization of Low Error PCPs and an Application to MCSP
Hirahara, Shuichi
1
Moshkovitz, Dana
2
National Institute of Informatics, Tokyo, Japan
Department of Computer Science, University of Texas at Austin, TX, USA
In a regular PCP the verifier queries each proof symbol in the same number of tests. This number is called the degree of the proof, and it is at least 1/(sq), where s is the soundness error and q is the number of queries. Regularity and low degree are extremely useful properties in PCP constructions. An expander-based transformation by Papadimitriou and Yannakakis transforms any PCP with a constant number of queries and constant soundness error into a regular PCP with constant degree. There are also transformations for low-error projection and unique PCPs, and other PCPs are constructed especially to be regular. In this work we show how to regularize and reduce the degree of PCPs with a possibly large number of queries and low soundness error.
As an application, we prove NP-hardness of an unweighted variant of the collective minimum monotone satisfying assignment problem, which was introduced by Hirahara (FOCS'22) to prove NP-hardness of MCSP^* (the partial function variant of the Minimum Circuit Size Problem) under randomized reductions. We present a simplified proof and sufficient conditions under which MCSP^* is NP-hard under the standard notion of reduction: MCSP^* is NP-hard under deterministic polynomial-time many-one reductions if there exists a function in E that satisfies certain direct sum properties.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.39/LIPIcs.ISAAC.2023.39.pdf
PCP theorem
regularization
Minimum Circuit Size Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
40:1
40:14
10.4230/LIPIcs.ISAAC.2023.40
article
Structural Parameterizations of b-Coloring
Jaffke, Lars
1
https://orcid.org/0000-0003-4856-5863
Lima, Paloma T.
2
https://orcid.org/0000-0001-9304-4536
Sharma, Roohani
3
https://orcid.org/0000-0003-2212-1359
University of Bergen, Norway
IT University of Copenhagen, Denmark
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
The b-Coloring problem, which given a graph G and an integer k asks whether G has a proper k-coloring such that each color class has a vertex adjacent to all color classes except its own, is known to be FPT parameterized by the vertex cover number and XP and 𝖶[1]-hard parameterized by clique-width. Its complexity when parameterized by the treewidth of the input graph remained an open problem. We settle this question by showing that b-Coloring is XNLP-complete when parameterized by the pathwidth of the input graph. Besides determining the precise parameterized complexity of this problem, this implies that b-Coloring parameterized by pathwidth is 𝖶[t]-hard for all t, and resolves the parameterized complexity of b-Coloring parameterized by treewidth. We complement this result by showing that b-Coloring is FPT when parameterized by neighborhood diversity and by twin cover, two parameters that generalize vertex cover to more dense graphs, but are incomparable to pathwidth.
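The defining condition of a b-coloring is easy to verify directly: the coloring must be proper, and every color class must contain a b-vertex, i.e., a vertex adjacent to all other classes. A minimal checker (illustrative only; the graph is a hypothetical dict of neighbor sets):

```python
def is_b_coloring(adj, color):
    """Check that `color` is a b-coloring of the graph `adj` (dict mapping vertex -> set of neighbors)."""
    classes = set(color.values())
    # properness: no edge inside a color class
    for u in adj:
        for v in adj[u]:
            if color[u] == color[v]:
                return False
    # every color class needs a b-vertex seeing all other classes
    for c in classes:
        has_b_vertex = any(
            color[u] == c and {color[v] for v in adj[u]} >= classes - {c}
            for u in adj
        )
        if not has_b_vertex:
            return False
    return True
```

A rainbow-colored triangle is a b-coloring with 3 colors; a 5-vertex path colored 0,1,0,2,0 is proper but not a b-coloring, since no vertex of class 1 or 2 sees both other classes.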
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.40/LIPIcs.ISAAC.2023.40.pdf
b-coloring
structural parameterization
XNLP
pathwidth
neighborhood diversity
twin cover
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
41:1
41:16
10.4230/LIPIcs.ISAAC.2023.41
article
Clustering What Matters in Constrained Settings: Improved Outlier to Outlier-Free Reductions
Jaiswal, Ragesh
1
https://orcid.org/0009-0002-4475-0922
Kumar, Amit
1
https://orcid.org/0000-0002-3965-6627
CSE, IIT Delhi, India
Constrained clustering problems generalize classical clustering formulations, e.g., k-median, k-means, by imposing additional constraints on the feasibility of a clustering. There has been significant recent progress in obtaining approximation algorithms for these problems, both in the metric and the Euclidean settings. However, the outlier version of these problems, where the solution is allowed to leave out m points from the clustering, is not well understood. In this work, we give a general framework for reducing the outlier version of a constrained k-median or k-means problem to the corresponding outlier-free version with only (1+ε)-loss in the approximation ratio. The reduction is obtained by mapping the original instance of the problem to f(k, m, ε) instances of the outlier-free version, where f(k, m, ε) = ((k+m)/ε)^O(m). As specific applications, we get the following results:
- First FPT (in the parameters k and m) (1+ε)-approximation algorithm for the outlier version of capacitated k-median and k-means in Euclidean spaces with hard capacities.
- First FPT (in the parameters k and m) (3+ε) and (9+ε) approximation algorithms for the outlier version of capacitated k-median and k-means, respectively, in general metric spaces with hard capacities.
- First FPT (in the parameters k and m) (2-δ)-approximation algorithm for the outlier version of the k-median problem under the Ulam metric.
Our work generalizes the results of Bhattacharya et al. and Agrawal et al. to a larger class of constrained clustering problems. Further, our reduction works for arbitrary metric spaces and so can extend clustering algorithms for outlier-free versions in both Euclidean and arbitrary metric spaces.
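The outlier objective can be illustrated on a toy 1-D instance (a brute-force baseline, not the paper's reduction; names are hypothetical): choose k centers and charge only the n-m closest points, discarding the m farthest as outliers.

```python
from itertools import combinations

def kmedian_with_outliers(points, k, m):
    """Brute-force 1-D k-median with m outliers: centers restricted to input points."""
    best = float('inf')
    for centers in combinations(points, k):
        # assignment cost of each point to its nearest center
        d = sorted(min(abs(p - c) for c in centers) for p in points)
        # drop the m most expensive points as outliers
        best = min(best, sum(d[:len(points) - m]))
    return best
```

On points [0, 1, 10, 100] with k = 1 and m = 1, the optimum discards the point 100 and places the center at 1, for cost 1 + 0 + 9 = 10.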
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.41/LIPIcs.ISAAC.2023.41.pdf
clustering
constrained
outlier
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
42:1
42:18
10.4230/LIPIcs.ISAAC.2023.42
article
Single-Exponential FPT Algorithms for Enumerating Secluded ℱ-Free Subgraphs and Deleting to Scattered Graph Classes
Jansen, Bart M. P.
1
https://orcid.org/0000-0001-8204-1268
de Kroon, Jari J. H.
1
https://orcid.org/0000-0003-3328-9712
Włodarczyk, Michał
2
https://orcid.org/0000-0003-0968-8414
Eindhoven University of Technology, The Netherlands
University of Warsaw, Poland
The celebrated notion of important separators bounds the number of small (S,T)-separators in a graph which are "farthest from S" in a technical sense. In this paper, we introduce a generalization of this powerful algorithmic primitive, tailored to undirected graphs, that is phrased in terms of k-secluded vertex sets: sets with an open neighborhood of size at most k.
In this terminology, the bound on important separators says that there are at most 4^k maximal k-secluded connected vertex sets C containing S but disjoint from T. We generalize this statement significantly: even when we demand that G[C] avoids a finite set ℱ of forbidden induced subgraphs, the number of such maximal subgraphs is 2^𝒪(k) and they can be enumerated efficiently. This enumeration algorithm allows us to make significant improvements for two problems from the literature.
Our first application concerns the Connected k-Secluded ℱ-free subgraph problem, where ℱ is a finite set of forbidden induced subgraphs. Given a graph in which each vertex has a positive integer weight, the problem asks to find a maximum-weight connected k-secluded vertex set C ⊆ V(G) such that G[C] does not contain an induced subgraph isomorphic to any F ∈ ℱ. The parameterization by k is known to be solvable in triple-exponential time via the technique of recursive understanding, which we improve to single-exponential.
Our second application concerns the deletion problem to scattered graph classes. A scattered graph class is defined by demanding that every connected component is contained in at least one of the prescribed graph classes Π_1, …, Π_d. The deletion problem to a scattered graph class is to find a vertex set of size at most k whose removal yields a graph from the class. We obtain a single-exponential algorithm whenever each class Π_i is characterized by a finite number of forbidden induced subgraphs. This generalizes and improves upon earlier results in the literature.
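On small graphs, the k-secluded sets at the heart of the paper can be enumerated by brute force, which makes the definition concrete (illustrative only; the paper's enumeration runs in single-exponential FPT time):

```python
from itertools import combinations

def k_secluded_sets(adj, k, S, T):
    """Connected vertex sets C with S a subset of C, C disjoint from T,
    and open neighborhood N(C) of size at most k (brute force)."""
    verts = sorted(adj)
    result = []
    for r in range(1, len(verts) + 1):
        for comb in combinations(verts, r):
            C = set(comb)
            if not S <= C or C & T:
                continue
            # connectivity of the induced subgraph G[C]
            start = next(iter(C))
            seen, stack = {start}, [start]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v in C and v not in seen:
                        seen.add(v)
                        stack.append(v)
            if seen != C:
                continue
            if len({v for u in C for v in adj[u]} - C) <= k:
                result.append(frozenset(C))
    return result
```

On the path 0-1-2-3 with S = {0}, T = {3} and k = 1, the 1-secluded connected sets are {0}, {0,1} and {0,1,2}, with {0,1,2} the unique maximal one.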
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.42/LIPIcs.ISAAC.2023.42.pdf
fixed-parameter tractability
important separators
secluded subgraphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
43:1
43:18
10.4230/LIPIcs.ISAAC.2023.43
article
Is the Algorithmic Kadison-Singer Problem Hard?
Jourdan, Ben
1
Macgregor, Peter
1
Sun, He
1
University of Edinburgh, UK
We study the following KS₂(c) problem: let c ∈ ℝ^+ be some constant, and v₁,…, v_m ∈ ℝ^d be vectors such that ‖v_i‖² ≤ α for any i ∈ [m] and ∑_{i=1}^m ⟨v_i, x⟩² = 1 for any x ∈ ℝ^d with ‖x‖ = 1. The KS₂(c) problem asks to find some S ⊂ [m] such that |∑_{i∈S} ⟨v_i, x⟩² - 1/2| ≤ c⋅√α holds for all x ∈ ℝ^d with ‖x‖ = 1, or to report no if no such S exists. Based on the work of Marcus et al. [Adam Marcus et al., 2013] and Weaver [Nicholas Weaver, 2004], the KS₂(c) problem can be seen as the algorithmic Kadison-Singer problem with parameter c ∈ ℝ^+.
Our first result is a randomised algorithm with one-sided error for the KS₂(c) problem such that (1) the algorithm finds a valid set S ⊂ [m] with probability at least 1-2/d, if such an S exists, or (2) it reports no with probability 1, if no valid set exists. The algorithm has running time O(binom(m,n)⋅poly(m, d)) for n = O(d/ε² log(d) log(1/(c√α))), where ε is a parameter controlling the error of the algorithm. This is the first algorithm for the Kadison-Singer problem whose running time is quasi-polynomial in m in a certain regime, albeit with exponential dependence on d; moreover, it shows that the algorithmic Kadison-Singer problem is easier to solve in low dimensions. Our second result concerns the computational complexity of the KS₂(c) problem: we show that the KS₂(1/(4√2)) problem is FNP-hard for general values of d, and that solving the KS₂(1/(4√2)) problem is as hard as solving the NAE-3SAT problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.43/LIPIcs.ISAAC.2023.43.pdf
Kadison-Singer problem
spectral sparsification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
44:1
44:18
10.4230/LIPIcs.ISAAC.2023.44
article
Succinct Planar Encoding with Minor Operations
Kammer, Frank
1
https://orcid.org/0000-0002-2662-3471
Meintrup, Johannes
1
https://orcid.org/0000-0003-4001-1153
THM, University of Applied Sciences Mittelhessen, Giessen, Germany
Let G be an unlabeled planar and simple n-vertex graph. Unlabeled graphs are graphs whose label information is either not given or lost during the construction of data structures. We present a succinct encoding of G that provides induced-minor operations, i.e., edge contractions and vertex deletions. Any sequence of such operations is processed in O(n) time in the word-RAM model. At all times the encoding provides constant time (per element output) neighborhood access and degree queries. Optional hash tables extend the encoding with constant expected time adjacency queries and edge deletions (thus, all minor operations are supported) such that any number of edge deletions is computed in O(n) expected time. Constructing the encoding requires O(n) bits and O(n) time. The encoding requires ℋ(n) + o(n) bits of space, with ℋ(n) being the entropy of encoding a planar graph with n vertices. Our data structure is based on a recent result of Holm et al. [ESA 2017], who presented a linear time contraction data structure that maintains parallel edges and works for labeled graphs, but uses Θ(n log n) bits of space. We combine the techniques of Holm et al. with novel ideas and the succinct encoding of Blelloch and Farzan [CPM 2010] for arbitrary separable graphs. Our result partially answers the question raised by Blelloch and Farzan of whether their encoding can be modified to allow modifications of the graph.
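An edge contraction, the key operation supported by the encoding, can be sketched on a plain adjacency-dict representation (illustrative only; the paper's point is doing this succinctly, not this naive O(deg) version):

```python
def contract(adj, u, v):
    """Contract edge {u, v} into u in a simple-graph adjacency dict of sets."""
    adj[u] = (adj[u] | adj[v]) - {u, v}  # merge neighborhoods, drop the loop
    for w in adj[v]:
        adj[w].discard(v)                # redirect v's incident edges to u
        if w != u:
            adj[w].add(u)
    del adj[v]
```

Contracting any edge of a triangle leaves a single edge: parallel edges collapse because the representation is a simple graph.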
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.44/LIPIcs.ISAAC.2023.44.pdf
planar graph
r-division
separator
succinct encoding
graph minors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
45:1
45:14
10.4230/LIPIcs.ISAAC.2023.45
article
Improved Approximation Algorithm for Capacitated Facility Location with Uniform Facility Cost
Kao, Mong-Jen
1
https://orcid.org/0000-0002-7238-3093
Department of Computer Science, National Yang-Ming Chiao-Tung University, Hsinchu, Taiwan
We consider the hard-capacitated facility location problem with uniform facility cost (CFL-UFC). This problem arises as an indicator variation between the general CFL problem and the uncapacitated facility location (UFL) problem, and is related to the profound capacitated k-median problem (CKM).
In this work, we present a rounding-based 4-approximation algorithm for this problem, built on a two-staged rounding scheme that incorporates a set of novel ideas as well as techniques developed in the past for both facility location and capacitated covering problems. Our result improves the decades-old LP-based ratio of 5 for this problem, due to Levi et al. in 2004. We believe that the techniques developed in this work are of independent interest and may further lead to insights and implications for related problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.45/LIPIcs.ISAAC.2023.45.pdf
Capacitated facility location
Hard capacities
Uniform facility cost
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
46:1
46:13
10.4230/LIPIcs.ISAAC.2023.46
article
The st-Planar Edge Completion Problem Is Fixed-Parameter Tractable
Khazaliya, Liana
1
https://orcid.org/0009-0002-3012-7240
Kindermann, Philipp
2
https://orcid.org/0000-0001-5764-7719
Liotta, Giuseppe
3
https://orcid.org/0000-0002-2886-9694
Montecchiani, Fabrizio
3
https://orcid.org/0000-0002-0543-8912
Simonov, Kirill
4
https://orcid.org/0000-0001-9436-7310
Technische Universität Wien, Austria
FB IV - Informatikwissenschaften, Universität Trier, Germany
Department of Engineering, University of Perugia, Italy
Hasso Plattner Institute, Universität Potsdam, Germany
The problem of deciding whether a biconnected planar digraph G = (V,E) can be augmented to become an st-planar graph by adding a set of oriented edges E' ⊆ V × V is known to be NP-complete. We show that the problem is fixed-parameter tractable when parameterized by the size of the set E'.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.46/LIPIcs.ISAAC.2023.46.pdf
st-planar graphs
parameterized complexity
upward planarity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
47:1
47:17
10.4230/LIPIcs.ISAAC.2023.47
article
A Combinatorial Certifying Algorithm for Linear Programming Problems with Gainfree Leontief Substitution Systems
Kimura, Kei
1
https://orcid.org/0000-0002-0560-5127
Makino, Kazuhisa
2
Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
Research Institute for Mathematical Sciences, Kyoto University, Japan
Linear programming (LP) problems with gainfree Leontief substitution systems have been intensively studied in economics and operations research, and include the feasibility problem of a class of Horn systems, which arises in, e.g., polyhedral combinatorics and logic. This subclass of LP problems admits a strongly polynomial time algorithm, whereas devising such an algorithm for general LP problems is one of the major theoretical open questions in mathematical optimization and computer science. Recently, much attention has been paid to devising certifying algorithms in software engineering, since such algorithms enable one to confirm the correctness of a program's output with simple computations. Devising a combinatorial certifying algorithm for the feasibility of this fundamental class of Horn systems had remained open for almost a decade. In this paper, we provide the first combinatorial (and strongly polynomial time) certifying algorithm for LP problems with gainfree Leontief substitution systems. As a by-product, we resolve the open question on the feasibility of this class of Horn systems.
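For intuition about the Horn special case (a propositional analogue, not the paper's LP algorithm), feasibility of Horn clauses can be decided by unit propagation, and either outcome comes with an easily checked certificate: the derived minimal model, or a violated clause. A hedged sketch with a hypothetical (head, body) clause encoding:

```python
def horn_feasible(clauses):
    """Unit-propagation feasibility for Horn clauses given as (head, body) pairs.

    head is a variable or None (empty head); body is a list of variables.
    Returns the minimal model if feasible, else None.
    """
    true = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if set(body) <= true:           # all body variables already forced true
                if head is None:
                    return None             # violated clause certifies infeasibility
                if head not in true:
                    true.add(head)          # head is forced; part of the minimal model
                    changed = True
    return true
```

For example, {a}, {a → b}, {¬b} is infeasible, while dropping the last clause yields the minimal model {a, b}.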
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.47/LIPIcs.ISAAC.2023.47.pdf
linear programming problem
certifying algorithm
Horn system
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
48:1
48:14
10.4230/LIPIcs.ISAAC.2023.48
article
Reconfiguration of the Union of Arborescences
Kobayashi, Yusuke
1
https://orcid.org/0000-0001-9478-7307
Mahara, Ryoga
2
https://orcid.org/0000-0002-4471-7914
Schwarcz, Tamás
3
https://orcid.org/0000-0003-0373-7414
Research Institute for Mathematical Sciences, Kyoto University, Japan
Department of Mathematical Informatics, University of Tokyo, Japan
MTA-ELTE Momentum Matroid Optimization Research Group, Department of Operations Research, ELTE Eötvös Loránd University, Budapest, Hungary
An arborescence in a digraph is an acyclic arc subset in which every vertex except a root has exactly one incoming arc. In this paper, we show the reconfigurability of the union of k arborescences for fixed k in the following sense: for any pair of arc subsets that can be partitioned into k arborescences, one can be transformed into the other by exchanging arcs one by one so that every intermediate arc subset can also be partitioned into k arborescences. This generalizes the result by Ito et al. (2023), who showed the case with k = 1. Since the union of k arborescences can be represented as a common matroid basis of two matroids, our result gives a new non-trivial example of matroid pairs for which two common bases are always reconfigurable to each other.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.48/LIPIcs.ISAAC.2023.48.pdf
Arborescence packing
common matroid basis
combinatorial reconfiguration
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
49:1
49:10
10.4230/LIPIcs.ISAAC.2023.49
article
An Approximation Algorithm for Two-Edge-Connected Subgraph Problem via Triangle-Free Two-Edge-Cover
Kobayashi, Yusuke
1
https://orcid.org/0000-0001-9478-7307
Noguchi, Takashi
1
Research Institute for Mathematical Sciences, Kyoto University, Japan
The 2-Edge-Connected Spanning Subgraph problem (2-ECSS) is one of the most fundamental and well-studied problems in the context of network design. We are given an undirected graph G, and the objective is to find a 2-edge-connected spanning subgraph H of G with the minimum number of edges. For this problem, many approximation algorithms have been proposed in the literature. In particular, very recently, Garg, Grandoni, and Ameli gave an approximation algorithm for 2-ECSS with a factor of 1.326, which is the best known approximation ratio. In this paper, under the assumption that a maximum triangle-free 2-matching can be found in polynomial time in a graph, we give a (1.3+ε)-approximation algorithm for 2-ECSS, where ε is an arbitrarily small positive fixed constant. Note that a complicated polynomial-time algorithm for finding a maximum triangle-free 2-matching was announced by Hartvigsen in his PhD thesis, but it has not been peer-reviewed or checked in any other way. In our algorithm, we compute a minimum triangle-free 2-edge-cover in G with the aid of the algorithm for finding a maximum triangle-free 2-matching. Then, with the obtained triangle-free 2-edge-cover, we apply the arguments by Garg, Grandoni, and Ameli.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.49/LIPIcs.ISAAC.2023.49.pdf
approximation algorithm
survivable network design
minimum 2-edge-connected spanning subgraph
triangle-free 2-matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
50:1
50:15
10.4230/LIPIcs.ISAAC.2023.50
article
On Min-Max Graph Balancing with Strict Negative Correlation Constraints
Kuo, Ting-Yu
1
Chen, Yu-Han
2
Frosini, Andrea
3
https://orcid.org/0000-0001-7210-2231
Hsieh, Sun-Yuan
2
4
https://orcid.org/0000-0003-4746-3179
Tsai, Shi-Chun
1
https://orcid.org/0000-0002-0085-0377
Kao, Mong-Jen
1
https://orcid.org/0000-0002-7238-3093
Dept. of Computer Science, National Yang-Ming Chiao-Tung University, Hsinchu, Taiwan
Dept. of Computer Science and Information Engineering, National Cheng-Kung University, Tainan, Taiwan
Dept. of Mathematics and Informatics, University of Florence, Italy
Dept. of Computer Science and Information Engineering, National Chi-Nan University, Puli, Taiwan
We consider the min-max graph balancing problem with strict negative correlation (SNC) constraints. The graph balancing problem arises as an equivalent formulation of the classic unrelated machine scheduling problem, where we are given a hypergraph G = (V,E) with a vertex-dependent edge weight function p: E×V → ℤ^{≥0} that represents the processing time of the edges (jobs). The SNC constraints, which are given as edge subsets C_1,C_2,…,C_k, require that the edges in the same subset cannot be assigned to the same vertex at the same time. Under these constraints, the goal is to compute an edge orientation (assignment) that minimizes the maximum workload of the vertices.
In this paper, we conduct a general study on the approximability of this problem. First, we show that, in the presence of SNC constraints, the case with max_{e ∈ E} |e| = max_i |C_i| = 2 is the only case for which approximate solutions can be obtained. Any further generalization in either direction, i.e., in max_{e ∈ E} |e| or max_i |C_i|, makes even computing a feasible solution NP-complete. Then, we present a 2-approximation algorithm for the case with max_{e ∈ E} |e| = max_i |C_i| = 2, based on a set of structural simplifications and a tailored assignment LP for this problem. We note that our approach is general and can be applied to similar settings, e.g., scheduling with SNC constraints to minimize the weighted completion time, to obtain similar approximation guarantees.
Further cases are discussed to describe the landscape of the approximability of this problem. For the case with |V| ≤ 2, which is already known to be NP-hard, we present a fully polynomial-time approximation scheme (FPTAS). On the other hand, we show that the problem is at least as hard as vertex cover to approximate when |V| ≥ 3.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.50/LIPIcs.ISAAC.2023.50.pdf
Unrelated Scheduling
Graph Balancing
Strict Correlation Constraints
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
51:1
51:14
10.4230/LIPIcs.ISAAC.2023.51
article
On the Line-Separable Unit-Disk Coverage and Related Problems
Liu, Gang
1
Wang, Haitao
1
Kahlert School of Computing, University of Utah, Salt Lake City, UT, USA
Given a set P of n points and a set S of m disks in the plane, the disk coverage problem asks for a smallest subset of disks that together cover all points of P. The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of P by a line 𝓁. We present an m^{2/3} n^{2/3} 2^O(log^*(m+n)) + O((n+m)log(n+m)) time algorithm for the problem. This improves the previously best result of O(nm + n log n) time. Our techniques also solve the line-constrained version of the problem, where centers of all disks of S are located on a line 𝓁 while points of P can be anywhere in the plane. Our algorithm runs in O(m√n + (n+m)log(n+m)) time, which improves the previously best result of O(nm log(m+n)) time. In addition, our results lead to an algorithm of n^{10/3} 2^O(log^*n) time for a half-plane coverage problem (given n half-planes and n points, find a smallest subset of half-planes covering all points); this improves the previously best algorithm of O(n⁴log n) time. Further, if all half-planes are lower ones, our algorithm runs in n^{4/3} 2^O(log^*n) time while the previously best algorithm takes O(n²log n) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.51/LIPIcs.ISAAC.2023.51.pdf
disk coverage
line-separable
unit-disk
line-constrained
half-planes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
52:1
52:16
10.4230/LIPIcs.ISAAC.2023.52
article
Improved Smoothed Analysis of 2-Opt for the Euclidean TSP
Manthey, Bodo
1
https://orcid.org/0000-0001-6278-5059
van Rhijn, Jesse
1
https://orcid.org/0000-0002-3416-7672
Faculty of Electrical Engineering, Mathematics, and Computer Science, University of Twente, Enschede, The Netherlands
The 2-opt heuristic is a simple local search heuristic for the Travelling Salesperson Problem (TSP). Although it usually performs well in practice, its worst-case running time is poor. Attempts to reconcile this difference have used smoothed analysis, in which adversarial instances are perturbed probabilistically. We are interested in the classical model of smoothed analysis for the Euclidean TSP, in which the perturbations are Gaussian. This model was previously used by Manthey & Veenstra, who obtained smoothed complexity bounds polynomial in n, the dimension d, and the perturbation strength σ^{-1}. However, their analysis only works for d ≥ 4. The only previous analysis for d ≤ 3 was performed by Englert, Röglin & Vöcking, who used a different perturbation model which can be translated to Gaussian perturbations. Their model yields bounds polynomial in n and σ^{-d}, and super-exponential in d. Since the lack of a direct analysis for Gaussian perturbations yielding polynomial bounds for all d is somewhat unsatisfactory, we perform this missing analysis. Along the way, we improve all existing smoothed complexity bounds for Euclidean 2-opt with Gaussian perturbations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.52/LIPIcs.ISAAC.2023.52.pdf
Travelling salesman problem
smoothed analysis
probabilistic analysis
local search
heuristics
2-opt
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
53:1
53:17
10.4230/LIPIcs.ISAAC.2023.53
article
On the Complexity of the Eigenvalue Deletion Problem
Misra, Neeldhara
1
https://orcid.org/0000-0003-1727-5388
Mittal, Harshil
1
Saurabh, Saket
2
3
https://orcid.org/0000-0001-7847-6402
Thakkar, Dhara
1
https://orcid.org/0000-0002-4234-0105
Indian Institute of Technology, Gandhinagar, India
Institute of Mathematical Sciences, Chennai, India
University of Bergen, Norway
For any fixed positive integer r and a given budget k, the r-Eigenvalue Vertex Deletion (r-EVD) problem asks if a graph G admits a subset S of at most k vertices such that the adjacency matrix of G⧵S has at most r distinct eigenvalues. The edge deletion, edge addition, and edge editing variants are defined analogously. For r = 1, r-EVD is equivalent to the Vertex Cover problem. For r = 2, it turns out that r-EVD amounts to removing a subset S of at most k vertices so that G⧵ S is a cluster graph where all connected components have the same size.
We show that r-EVD is NP-complete even on bipartite graphs with maximum degree four for every fixed r > 2, and FPT when parameterized by the solution size and the maximum degree of the graph.
We also establish several results for the special case when r = 2. For the vertex deletion variant, we show that 2-EVD is NP-complete even on triangle-free and 3d-regular graphs for any d ≥ 2, and also NP-complete on d-regular graphs for any d ≥ 8. The edge deletion, addition, and editing variants are all NP-complete for r = 2. The edge deletion problem admits a polynomial time algorithm if the input is a cluster graph, while - in contrast - the edge addition variant is hard even when the input is a cluster graph. We show that the edge addition variant has a quadratic kernel. The edge deletion and vertex deletion variants admit a single-exponential FPT algorithm when parameterized by the solution size alone.
Our main contribution is to develop the complexity landscape for the problem of modifying a graph with the aim of reducing the number of distinct eigenvalues in the spectrum of its adjacency matrix. It turns out that this captures, apart from Vertex Cover, also a natural variation of the problem of modifying to a cluster graph as a special case, which we believe may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.53/LIPIcs.ISAAC.2023.53.pdf
Graph Modification
Rank Reduction
Eigenvalues
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
54:1
54:12
10.4230/LIPIcs.ISAAC.2023.54
article
Connected Vertex Cover on AT-Free Graphs
Mukherjee, Joydeep
1
Saha, Tamojit
1
2
Ramakrishna Mission Vivekananda Educational and Research Institute, Belur, India
Institute of Advancing Intelligence, TCG CREST, Kolkata, India
An asteroidal triple (AT) in a graph is an independent set of three vertices such that every pair of them has a path between them avoiding the neighbourhood of the third. A graph is called AT-free if it does not contain any asteroidal triple. A connected vertex cover of a graph is a subset of its vertices which contains at least one endpoint of each edge and induces a connected subgraph. Settling the complexity of computing a minimum connected vertex cover in an AT-free graph was mentioned as an open problem in Escoffier et al. [Escoffier et al., 2010]. In this paper we answer the question by presenting an exact polynomial-time algorithm for computing a minimum connected vertex cover on AT-free graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.54/LIPIcs.ISAAC.2023.54.pdf
Graph Algorithm
AT-free graphs
Connected Vertex Cover
Optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
55:1
55:18
10.4230/LIPIcs.ISAAC.2023.55
article
On the Fine-Grained Query Complexity of Symmetric Functions
Podder, Supartha
1
Yao, Penghui
2
3
Ye, Zekun
2
Department of Computer Science, Stony Brook University, New York, NY, USA
State Key Laboratory for Novel Software Technology, Nanjing University, China
Hefei National Laboratory, China
Watrous conjectured that the randomized and quantum query complexities of symmetric functions are polynomially equivalent, which was resolved by Ambainis and Aaronson [Scott Aaronson and Andris Ambainis, 2014], and was later improved in [André Chailloux, 2019; Shalev Ben-David et al., 2020]. This paper explores a fine-grained version of the Watrous conjecture, covering randomized and quantum algorithms with success probabilities arbitrarily close to 1/2. Our contributions include the following:
1) An analysis of the optimal success probability of quantum and randomized query algorithms of two fundamental partial symmetric Boolean functions given a fixed number of queries. We prove that for any quantum algorithm computing these two functions using T queries, there exist randomized algorithms using poly(T) queries that achieve the same success probability as the quantum algorithm, even if the success probability is arbitrarily close to 1/2. These two classes of functions are instrumental in analyzing general symmetric functions.
2) We establish that for any total symmetric Boolean function f, if a quantum algorithm uses T queries to compute f with success probability 1/2+β, then there exists a randomized algorithm using O(T²) queries to compute f with success probability 1/2 + Ω(δβ²) on a 1-δ fraction of inputs, where β,δ can be arbitrarily small positive values. As a corollary, we prove a randomized version of Aaronson-Ambainis Conjecture [Scott Aaronson and Andris Ambainis, 2014] for total symmetric Boolean functions in the regime where the success probability of algorithms can be arbitrarily close to 1/2.
3) We present polynomial equivalences for several fundamental complexity measures of partial symmetric Boolean functions. Specifically, we first prove that for certain partial symmetric Boolean functions, quantum query complexity is at most quadratic in approximate degree for any error arbitrarily close to 1/2. Next, we show exact quantum query complexity is at most quadratic in degree. Additionally, we give the tight bounds of several complexity measures, indicating their polynomial equivalence. Conversely, we exhibit an exponential separation between randomized and exact quantum query complexity for certain partial symmetric Boolean functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.55/LIPIcs.ISAAC.2023.55.pdf
Query complexity
Symmetric functions
Quantum advantages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
56:1
56:17
10.4230/LIPIcs.ISAAC.2023.56
article
Testing Properties of Distributions in the Streaming Model
Roy, Sampriti
1
https://orcid.org/0009-0003-7938-945X
Vasudev, Yadu
1
https://orcid.org/0000-0001-7918-7194
Department of Computer Science and Engineering, IIT Madras, Chennai, India
We study distribution testing in the standard access model and the conditional access model when the memory available to the testing algorithm is bounded. In both scenarios, the samples arrive in an online fashion. The goal is to test properties of a distribution using an optimal number of samples, subject to a memory constraint on how many samples can be stored at a given time. First, we provide a trade-off between the sample complexity and the space complexity for testing identity when the samples are drawn according to the conditional access oracle. We then show that we can efficiently learn a succinct representation of a monotone distribution under an almost-optimal memory constraint on the number of stored samples. We also show that the algorithm for monotone distributions can be extended to a larger class of decomposable distributions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.56/LIPIcs.ISAAC.2023.56.pdf
Property testing
distribution testing
streaming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-11-28
283
57:1
57:17
10.4230/LIPIcs.ISAAC.2023.57
article
A Strongly Polynomial-Time Algorithm for Weighted General Factors with Three Feasible Degrees
Shao, Shuai
1
https://orcid.org/0000-0003-0935-2929
Živný, Stanislav
2
https://orcid.org/0000-0002-0263-159X
School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China
Department of Computer Science, University of Oxford, UK
General factors are a generalization of matchings. Given a graph G with a set π(v) of feasible degrees, called a degree constraint, for each vertex v of G, the general factor problem is to find a (spanning) subgraph F of G such that deg_F(v) ∈ π(v) for every v of G. When all degree constraints are symmetric Δ-matroids, the problem is solvable in polynomial time. The weighted general factor problem is to find a general factor of the maximum total weight in an edge-weighted graph. Strongly polynomial-time algorithms are only known for weighted general factor problems that are reducible to the weighted matching problem by gadget constructions.
In this paper, we present a strongly polynomial-time algorithm for a type of weighted general factor problems with real-valued edge weights that is provably not reducible to the weighted matching problem by gadget constructions. As an application, we obtain a strongly polynomial-time algorithm for the terminal backup problem by reducing it to the weighted general factor problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol283-isaac2023/LIPIcs.ISAAC.2023.57/LIPIcs.ISAAC.2023.57.pdf
matchings
factors
edge constraint satisfaction problems
terminal backup problem
delta matroids