eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
1
1026
10.4230/LIPIcs.STACS.2023
article
LIPIcs, Volume 254, STACS 2023, Complete Volume
Berenbrink, Petra
1
Bouyer, Patricia
2
https://orcid.org/0000-0002-2823-0911
Dawar, Anuj
3
https://orcid.org/0000-0003-4014-8248
Kanté, Mamadou Moustapha
4
https://orcid.org/0000-0003-1838-7744
University of Hamburg, Germany
Université Paris-Saclay, CNRS, ENS Paris-Saclay, LMF, Gif-sur-Yvette, France
University of Cambridge, UK
Université Clermont Auvergne, Clermont Auvergne INP, LIMOS, CNRS, Aubière, France
LIPIcs, Volume 254, STACS 2023, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023/LIPIcs.STACS.2023.pdf
LIPIcs, Volume 254, STACS 2023, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
0:i
0:xxii
10.4230/LIPIcs.STACS.2023.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Berenbrink, Petra
1
Bouyer, Patricia
2
https://orcid.org/0000-0002-2823-0911
Dawar, Anuj
3
https://orcid.org/0000-0003-4014-8248
Kanté, Mamadou Moustapha
4
https://orcid.org/0000-0003-1838-7744
University of Hamburg, Germany
Université Paris-Saclay, CNRS, ENS Paris-Saclay, LMF, Gif-sur-Yvette, France
University of Cambridge, UK
Université Clermont Auvergne, Clermont Auvergne INP, LIMOS, CNRS, Aubière, France
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.0/LIPIcs.STACS.2023.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
1:1
1:2
10.4230/LIPIcs.STACS.2023.1
article
A Brief History of History-Determinism (Invited Talk)
Lehtinen, Karoliina
1
https://orcid.org/0000-0003-1171-8790
CNRS, Aix-Marseille University, LIS, Marseille, France
Most nondeterministic automata models are more expressive, or at least more succinct, than their deterministic counterparts; however, this comes at a cost, as deterministic automata tend to have better algorithmic properties. History-deterministic automata are an intermediate model that allows a restricted form of nondeterminism: all nondeterministic choices must be resolvable on-the-fly, with only the knowledge of the word prefix read so far - as opposed to general nondeterminism, which allows for guessing the future of the word. History-deterministic automata combine some of the algorithmic benefits of determinism with some of the increased power of nondeterminism, thus enjoying (some of) the best of both worlds.
History-determinism, as it is understood today, has its roots in several independently invented notions: Kupferman, Safra and Vardi’s automata recognising tree languages derived from word languages [Kupferman et al., 2006] (a notion that has been later referred to as automata that are good-for-trees [Udi Boker et al., 2013]), Henzinger and Piterman’s good-for-games automata [Thomas Henzinger and Nir Piterman, 2006], and Colcombet’s history-deterministic automata, introduced in his work on regular cost-automata [Colcombet, 2009]. In the ω-regular setting, where they were initially most studied, the notions of good-for-trees, good-for-games and history-determinism are equivalent, despite differences in their definitions. The key algorithmic appeal of these automata is that like deterministic automata, they have good compositional properties. This makes them particularly useful for applications such as reactive synthesis, where composition of games and automata is at the heart of effective solutions.
Since then, history-determinism has received its fair share of attention, not least because of its relevance to synthesis. Indeed, it turns out to be a natural and useful form of nondeterminism more broadly, and can be generalised to all sorts of different automata models: alternating automata [Udi Boker and Karoliina Lehtinen, 2019], pushdown automata [Enzo Erlich et al., 2022; Enzo Erlich et al., 2022], timed automata [Thomas A. Henzinger et al., 2022; Sougata Bose et al., 2022], Parikh automata [Enzo Erlich et al., 2022], and quantitative automata [Udi Boker and Karoliina Lehtinen, 2021], to name a few. In each of these models, history-determinism offers some trade-offs between the power of nondeterminism and the algorithmic properties of determinism. In particular, depending on the model, history-deterministic automata can be either more expressive or more succinct than their deterministic counterparts, while retaining better algorithmic properties - in particular with respect to deciding universality, language inclusion and games - than fully nondeterministic automata.
The drive to extend history-determinism to more powerful automata models has also led to a better understanding of the properties of these automata, of how they compare to related notions (such as good-for-games automata and determinisability by pruning), and of the various games and tools used to study them.
This talk aims to give a broad introduction to the notion of history-determinism as well as an overview of some of the recent developments on the topic. It will also highlight some of the many problems that remain open. It is loosely based on a recent survey, written jointly with Udi Boker, which gives an informal presentation of what are, in our view, the key aspects of history-determinism [Udi Boker and Karoliina Lehtinen, 2023].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.1/LIPIcs.STACS.2023.1.pdf
History-determinism
nondeterminism
automata
good-for-games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
2:1
2:2
10.4230/LIPIcs.STACS.2023.2
article
Amortised Analysis of Dynamic Data Structures (Invited Talk)
Rotenberg, Eva
1
https://orcid.org/0000-0001-5853-7909
Technical University of Denmark, Lyngby, Denmark
In dynamic data structures, one is interested in efficiently facilitating queries to a data set, while being able to efficiently perform updates as the data set undergoes changes. Often, relaxing the efficiency measure to the amortised setting allows for simpler algorithms. A well-known example of a data structure with amortised guarantees is the splay tree by Sleator and Tarjan [Daniel D. Sleator and Robert E. Tarjan, 1985].
Similarly, in data structures for dynamic graphs, one is interested in efficiently maintaining some information about the graph, or facilitating queries, as the graph undergoes changes in the form of insertion and deletion of edges. Examples of such information include connectivity, planarity, and approximate sparsity of the graph: is the graph presently connected? Is it planar? Has its arboricity grossly exceeded some specified number α̃? The related queries could be: is a connected to b? Are the edges uv and uw consecutive in the ordering around u in its current planar embedding? Or, report the O(α) out-edges of vertex x.
In this talk, we will see Brodal and Fagerberg’s amortised algorithm for orienting sparse graphs (i.e. of arboricity ≤ α), so that each vertex has O(α) out-edges [Gerth Stølting Brodal and Rolf Fagerberg, 1999]. The algorithm itself is extremely simple, and uses an elegant amortised argument in its analysis. Then, we will visit the problem of dynamic planarity testing: is the graph presently planar? Here, we will see an elegant amortised reduction to the seemingly easier problem, where planarity-violating edges may be detected and rejected [Eppstein et al., 1996]. We will see a sketch of how the current state-of-the-art algorithm for efficient planarity testing [Jacob Holm and Eva Rotenberg, 2020] uses ideas similar to those in [Gerth Stølting Brodal and Rolf Fagerberg, 1999] to analyse the behaviour of a greedy algorithm via a possibly inefficient algorithm with provably low recourse [Jacob Holm and Eva Rotenberg, 2020]. If time permits, we will touch upon a recent simple amortised data structure for maintaining information in dynamic forests [Jacob Holm et al., 2023], which builds on ideas from splay trees.
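The flavour of the Brodal-Fagerberg orientation algorithm mentioned above can be conveyed in a few lines. The sketch below is ours, not code from the talk: it orients each inserted edge arbitrarily and, whenever a vertex exceeds a cap on its out-degree, reverses all of that vertex's out-edges (the cap should be roughly twice the arboricity for the amortised analysis to go through; the names `out`, `cap`, and `insert_edge` are our own).

```python
from collections import defaultdict

def insert_edge(out, u, v, cap):
    """Insert edge {u, v} into the orientation `out` (vertex -> set of
    out-neighbours). Whenever a vertex has more than `cap` out-edges,
    flip all of its out-edges; the amortised argument bounds the total
    flipping work on graphs of bounded arboricity."""
    out[u].add(v)
    overloaded = [u]
    while overloaded:
        w = overloaded.pop()
        if len(out[w]) <= cap:
            continue
        for x in list(out[w]):   # reverse every out-edge of w
            out[w].remove(x)
            out[x].add(w)
            overloaded.append(x)  # x gained an out-edge and may overflow

# A sparse example: a path on 10 vertices plus a few extra edges out of vertex 0.
out = defaultdict(set)
edges = [(i, i + 1) for i in range(9)] + [(0, i) for i in range(2, 6)]
for u, v in edges:
    insert_edge(out, u, v, cap=3)
```

After all insertions every vertex keeps at most `cap` out-edges, and every edge is stored in exactly one direction, so out-edge queries take O(cap) time per vertex.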
The talk concludes with some open questions in the area.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.2/LIPIcs.STACS.2023.2.pdf
Amortised analysis
splaying
dynamic graphs
planarity testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
3:1
3:1
10.4230/LIPIcs.STACS.2023.3
article
Logical Algorithmics: From Theory to Practice (Invited Talk)
Vardi, Moshe Y.
1
https://orcid.org/0000-0002-0661-5773
Rice University, Houston, TX, USA
The standard approach to algorithm development is to focus on a specific problem and develop for it a specific algorithm. Codd’s introduction of the relational model in 1970 included two fundamental ideas: (1) Relations provide a universal data representation formalism, and (2) Relational databases can be queried using first-order logic. Realizing these ideas required the development of a meta-algorithm, which takes a declarative query and executes it with respect to a database. In this talk, I will describe this approach, which I call Logical Algorithmics, in detail, and explore its profound ramification.
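The two ideas from Codd's relational model can be illustrated in miniature: relations as sets of tuples, and a declarative first-order query evaluated by a generic procedure against whatever data is present. The relation name and data below are our own toy example, not from the talk.

```python
# A toy relational database: a relation is just a set of tuples.
parent = {("alice", "bob"), ("bob", "carol"), ("bob", "dave")}

# Declarative first-order query:
#   grandparent(x, z) := exists y. parent(x, y) AND parent(y, z)
# The comprehension below plays the role of the meta-algorithm: it executes
# the query against whatever tuples `parent` happens to contain.
grandparent = {(x, z)
               for (x, y1) in parent
               for (y2, z) in parent
               if y1 == y2}
```

Here `grandparent` evaluates to `{("alice", "carol"), ("alice", "dave")}`; the same evaluation code works unchanged for any instance of the `parent` relation.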
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.3/LIPIcs.STACS.2023.3.pdf
Logic
Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
4:1
4:12
10.4230/LIPIcs.STACS.2023.4
article
The Complexity of Checking Quasi-Identities over Finite Algebras with a Mal'cev Term
Aichinger, Erhard
1
https://orcid.org/0000-0001-8998-4138
Grünbacher, Simon
1
https://orcid.org/0000-0002-5404-520X
Institute for Algebra, Johannes Kepler Universität Linz, Austria
We consider finite algebraic structures and ask whether every solution of a given system of equations satisfies some other equation. This can be formulated as checking the validity of certain first-order formulae called quasi-identities. Checking the validity of quasi-identities is closely linked to solving systems of equations. For systems of equations over finite algebras with finitely many fundamental operations, a complete P/NP-complete dichotomy is known, while the situation appears to be more complicated for single equations. The complexity of checking the validity of a quasi-identity lies between the complexity of term equivalence (checking whether two terms induce the same function) and the complexity of solving systems of polynomial equations. We prove that for each finite algebra with a Mal'cev term and finitely many fundamental operations, checking the validity of quasi-identities is coNP-complete if the algebra is not abelian, and in P when the algebra is abelian.
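For intuition, validity of a quasi-identity over a finite algebra can always be checked by brute force over all variable assignments; the point of the paper is when one can do better than this exponential baseline. The sketch below is our own illustration (function and variable names are ours), using Z4 with addition as the algebra.

```python
from itertools import product

def quasi_identity_holds(domain, premises, conclusion, nvars):
    """Brute-force validity check of a quasi-identity over a finite algebra:
    for every assignment of the variables, if all premise equations hold,
    the conclusion equation must hold too. Each equation is a function
    mapping an assignment tuple to an (lhs, rhs) pair of values.
    Runs in time |domain|**nvars, i.e. exponential in the number of
    variables."""
    for a in product(domain, repeat=nvars):
        if all(l == r for (l, r) in (p(a) for p in premises)):
            l, r = conclusion(a)
            if l != r:
                return False
    return True

# Example over Z4 = ({0,1,2,3}, +):
#   "x + x = 0 implies x = 0" is NOT valid (take x = 2),
#   "x + x = 0 implies x + x + x + x = 0" IS valid.
add = lambda x, y: (x + y) % 4
premise = [lambda a: (add(a[0], a[0]), 0)]
```

With this setup, `quasi_identity_holds(range(4), premise, lambda a: (a[0], 0), 1)` returns `False`, while replacing the conclusion by the four-fold sum yields `True`.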
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.4/LIPIcs.STACS.2023.4.pdf
quasi-identities
conditional identities
systems of equations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
5:1
5:19
10.4230/LIPIcs.STACS.2023.5
article
Packing Odd Walks and Trails in Multiterminal Networks
Akhmedov, Maxim
1
https://orcid.org/0000-0002-7947-1416
Babenko, Maxim
2
https://orcid.org/0000-0003-2877-9725
Department of Mathematical Logic and Algorithms, Moscow State University, Russia
Higher School of Economics, Moscow, Russia
Let G be an undirected network with a distinguished set of terminals T ⊆ V(G) and edge capacities cap: E(G) → ℝ_+. By an odd T-walk we mean a walk in G (with possible vertex and edge self-intersections) connecting two distinct terminals and consisting of an odd number of edges. Inspired by the work of Schrijver and Seymour on odd path packing for two terminals, we consider packings of odd T-walks subject to capacities cap.
First, we present a strongly polynomial time algorithm for constructing a maximum fractional packing of odd T-walks. For even integer capacities, our algorithm constructs a packing that is half-integer. Additionally, if cap(δ(v)) is divisible by 4 for any v ∈ V(G)-T, our algorithm constructs an integer packing.
Second, we establish and prove the corresponding min-max relation.
Third, if G is inner Eulerian (i.e. degrees of all nodes in V(G)-T are even) and cap(e) = 2 for all e ∈ E, we show that there exists an integer packing of odd T-trails (i.e. odd T-walks with no repeated edges) of the same value as in case of odd T-walks, and this packing can be found in polynomial time.
To achieve the above goals, we establish a connection between packings of odd T-walks and T-trails and certain multiflow problems in undirected and bidirected graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.5/LIPIcs.STACS.2023.5.pdf
Odd path
signed and bidirected graph
multiflow
polynomial algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
6:1
6:21
10.4230/LIPIcs.STACS.2023.6
article
Improved Weighted Matching in the Sliding Window Model
Alexandru, Cezar-Mihail
1
Dvořák, Pavel
2
3
Konrad, Christian
1
Naidu, Kheeran K.
1
Department of Computer Science, University of Bristol, UK
Tata Institute of Fundamental Research, Mumbai, India
Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
We consider the Maximum-weight Matching (MWM) problem in the streaming sliding window model of computation. In this model, the input consists of a sequence of weighted edges on a given vertex set V of size n. The objective is to maintain an approximation of a maximum-weight matching in the graph spanned by the L most recent edges, for some integer L, using as little space as possible. Prior to our work, the state-of-the-art results were a (3.5+ε)-approximation algorithm for MWM by Biabani et al. [ISAAC'21] and a (3+ε)-approximation for (unweighted) Maximum Matching (MM) by Crouch et al. [ESA'13]. Both algorithms use space Õ(n).
We give the following results:
1) We give a (2+ε)-approximation algorithm for MWM with space Õ(√{nL}). Under the reasonable assumption that the graphs spanned by the edges in each sliding window are simple, our algorithm uses space Õ(n √n).
2) In the Õ(n) space regime, we give a (3+ε)-approximation algorithm for MWM, thereby closing the gap between the best-known approximation ratio for MWM and MM.
Similar to Biabani et al.’s MWM algorithm, both our algorithms execute multiple instances of the (2+ε)-approximation Õ(n)-space streaming algorithm for MWM by Paz and Schwartzman [SODA'17] on different portions of the stream. Our improvements are obtained by selecting these substreams differently. Furthermore, our (2+ε)-approximation algorithm runs the Paz-Schwartzman algorithm in reverse direction over some parts of the stream, and in forward direction over other parts, which allows for an improved approximation guarantee at the cost of increased space requirements.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.6/LIPIcs.STACS.2023.6.pdf
Sliding window algorithms
Streaming algorithms
Maximum-weight matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
7:1
7:23
10.4230/LIPIcs.STACS.2023.7
article
Approximate Sampling and Counting of Graphs with Near-Regular Degree Intervals
Amanatidis, Georgios
1
Kleer, Pieter
2
University of Essex, Colchester, UK
Tilburg University, The Netherlands
The approximate uniform sampling of graphs with a given degree sequence is a well-known, extensively studied problem in theoretical computer science and has significant applications, e.g., in the analysis of social networks. In this work we study a generalization of the problem, where degree intervals are specified instead of a single degree sequence. We are interested in sampling and counting graphs whose degree sequences satisfy the corresponding degree interval constraints. A natural scenario where this problem arises is in hypothesis testing on networks that are only partially observed. We provide the first fully polynomial almost uniform sampler (FPAUS) as well as the first fully polynomial randomized approximation scheme (FPRAS) for sampling and counting, respectively, graphs with near-regular degree intervals, i.e., graphs in which every node has a degree from an interval not too far away from a given r ∈ ℕ. In order to design our FPAUS, we rely on various state-of-the-art tools from Markov chain theory and combinatorics. In particular, by carefully using Markov chain decomposition and comparison arguments, we reduce part of our problem to the recent breakthrough of Anari, Liu, Oveis Gharan, and Vinzant (2019) on sampling a base of a matroid under a strongly log-concave probability distribution, and we provide the first non-trivial algorithmic application of a breakthrough asymptotic enumeration formula of Liebenau and Wormald (2017). As a more direct approach, we also study a natural Markov chain recently introduced by Rechner, Strowick and Müller-Hannemann (2018), based on three local operations - switches, hinge flips, and additions/deletions of an edge. We obtain the first theoretical results for this Markov chain, showing it is rapidly mixing for the case of near-regular degree intervals of size at most one.
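Of the three local operations of the Rechner-Strowick-Müller-Hannemann chain, the switch is the easiest to sketch. The lazy-chain implementation below is our own illustration (names are ours): it shows why a switch preserves every vertex degree, which is exactly what makes such chains suitable for sampling graphs with degree constraints.

```python
import random

def switch_step(edges, rng=random):
    """One switch move on a simple graph given as a set of frozenset edges:
    pick two distinct edges {a,b}, {c,d} and try to rewire them to
    {a,d}, {c,b}. Every vertex keeps its degree. The move is rejected
    (a lazy step returning the unchanged graph) whenever it would create
    a loop or a parallel edge, so the state stays a simple graph."""
    e1, e2 = rng.sample(list(edges), 2)
    (a, b), (c, d) = tuple(e1), tuple(e2)
    new1, new2 = frozenset((a, d)), frozenset((c, b))
    if len(new1) < 2 or len(new2) < 2 or new1 in edges or new2 in edges:
        return edges
    return (edges - {e1, e2}) | {new1, new2}

def degree_sequence(E):
    deg = {}
    for e in E:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    return sorted(deg.values())

# Run the chain on a small graph; the degree sequence is invariant.
rng = random.Random(1)
G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}
before = degree_sequence(G)
for _ in range(200):
    G = switch_step(G, rng)
```

Hinge flips and single-edge additions/deletions, by contrast, deliberately change a degree by one, which is what lets the chain move between the different degree sequences allowed by the intervals.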
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.7/LIPIcs.STACS.2023.7.pdf
graph sampling
degree interval
degree sequence
Markov Chain Monte Carlo method
switch Markov chain
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
8:1
8:18
10.4230/LIPIcs.STACS.2023.8
article
Enumerating Regular Languages with Bounded Delay
Amarilli, Antoine
1
https://orcid.org/0000-0002-7977-4441
Monet, Mikaël
2
https://orcid.org/0000-0002-6158-4607
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
We study the task, for a given language L, of enumerating the (generally infinite) sequence of its words, without repetitions, while bounding the delay between two consecutive words. To allow for delay bounds that do not depend on the current word length, we assume a model where we produce each word by editing the preceding word with a small edit script, rather than writing out the word from scratch. In particular, this witnesses that the language is orderable, i.e., we can write its words as an infinite sequence such that the Levenshtein edit distance between any two consecutive words is bounded by a value that depends only on the language. For instance, (a+b)^* is orderable (with a variant of the Gray code), but a^* + b^* is not.
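The orderability of (a+b)^* can be made concrete with a reflected Gray code. The variant below is our own (the abstract does not fix one): within each length, consecutive words differ in exactly one substitution, and the hop from the last word of length n (which is b a^(n-1)) to the first word of length n+1 (a^(n+1)) costs one substitution plus one insertion, so the Levenshtein distance between consecutive words never exceeds 2.

```python
def gray_words(n):
    """Reflected Gray code over {a, b}^n: consecutive words differ in
    exactly one position (one substitution)."""
    if n == 0:
        yield ""
        return
    shorter = list(gray_words(n - 1))
    for w in shorter:
        yield "a" + w
    for w in reversed(shorter):
        yield "b" + w

def enumerate_ab_star(max_len):
    """Enumerate (a+b)^* length by length, without repetitions, so that
    consecutive words are at Levenshtein distance at most 2."""
    for n in range(max_len + 1):
        yield from gray_words(n)

words = list(enumerate_ab_star(4))
```

This produces the 31 words of length at most 4 exactly once each; the language a^* + b^*, by contrast, has no such ordering, since its a-words and b-words of growing length cannot alternate with bounded edit cost.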
We characterize which regular languages are enumerable in this sense, and show that this can be decided in PTIME given an input deterministic finite automaton (DFA) for the language. In fact, we show that, given a DFA A, we can compute in PTIME automata A₁, …, A_t such that L(A) is partitioned as L(A₁) ⊔ … ⊔ L(A_t) and every L(A_i) is orderable in this sense. Further, we show that the value of t obtained is optimal, i.e., we cannot partition L(A) into fewer than t orderable languages.
In the case where L(A) is orderable (i.e., t = 1), we show that the ordering can be produced by a bounded-delay algorithm: specifically, the algorithm runs in a suitable pointer machine model, and produces a sequence of bounded-length edit scripts to visit the words of L(A) without repetitions, with bounded delay - exponential in |A| - between each script. In fact, we show that we can achieve this while only allowing the edit operations push and pop at the beginning and end of the word, which implies that the word can in fact be maintained in a double-ended queue.
By contrast, when fixing the distance bound d between consecutive words and the number of classes of the partition, it is NP-hard in the input DFA A to decide if L(A) is orderable in this sense, already for finite languages.
Last, we study the model where push-pop edits are only allowed at the end of the word, corresponding to a case where the word is maintained on a stack. We show that these operations are strictly weaker and that the slender languages are precisely those that can be partitioned into finitely many languages that are orderable in this sense. For the slender languages, we can again characterize the minimal number of languages in the partition, and achieve bounded-delay enumeration.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.8/LIPIcs.STACS.2023.8.pdf
Regular language
constant-delay enumeration
edit distance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
9:1
9:19
10.4230/LIPIcs.STACS.2023.9
article
Regular Separability in Büchi VASS
Baumann, Pascal
1
https://orcid.org/0000-0002-9371-0807
Meyer, Roland
2
https://orcid.org/0000-0001-8495-671X
Zetzsche, Georg
1
https://orcid.org/0000-0002-6421-4388
Max Planck Institute for Software Systems (MPI-SWS), Kaiserslautern, Germany
TU Braunschweig, Germany
We study the (ω-)regular separability problem for Büchi VASS languages: Given two Büchi VASS with languages L₁ and L₂, check whether there is a regular language that fully contains L₁ while remaining disjoint from L₂. We show that the problem is decidable in general and PSPACE-complete in the 1-dimensional case, assuming succinct counter updates. The results rely on several arguments. We characterize the set of all regular languages disjoint from L₂. Based on this, we derive a (sound and complete) notion of inseparability witnesses, non-regular subsets of L₁. Finally, we show how to symbolically represent inseparability witnesses and how to check their existence.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.9/LIPIcs.STACS.2023.9.pdf
Separability problem
Vector addition systems
Infinite words
Decidability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
10:1
10:15
10.4230/LIPIcs.STACS.2023.10
article
Approximating Highly Inapproximable Problems on Graphs of Bounded Twin-Width
Bergé, Pierre
1
https://orcid.org/0000-0002-6170-9473
Bonnet, Édouard
1
https://orcid.org/0000-0002-1653-5822
Déprés, Hugues
1
Watrigant, Rémi
1
https://orcid.org/0000-0002-6243-5910
Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France
For any ε > 0, we give a polynomial-time n^ε-approximation algorithm for Max Independent Set in graphs of bounded twin-width given with an O(1)-sequence. This result is derived from the following time-approximation trade-off: We establish an O(1)^{2^q-1}-approximation algorithm running in time exp(O_q(n^{2^{-q}})), for every integer q ⩾ 0. Guided by the same framework, we obtain similar approximation algorithms for Min Coloring and Max Induced Matching. In general graphs, all these problems are known to be highly inapproximable: for any ε > 0, a polynomial-time n^{1-ε}-approximation for any of them would imply that P=NP [Håstad, FOCS '96; Zuckerman, ToC '07; Chalermsook et al., SODA '13]. We generalize the algorithms for Max Independent Set and Max Induced Matching to the independent (induced) packing of any fixed connected graph H.
In contrast, we show that such approximation guarantees on graphs of bounded twin-width given with an O(1)-sequence are very unlikely for Min Independent Dominating Set, and somewhat unlikely for Longest Path and Longest Induced Path. Regarding the existence of better approximation algorithms, there is (very) light evidence that the obtained approximation factor of n^ε for Max Independent Set may be best possible. This is the first in-depth study of the approximability of problems in graphs of bounded twin-width. Prior to this paper, essentially the only such result was a polynomial-time O(1)-approximation algorithm for Min Dominating Set [Bonnet et al., ICALP '21].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.10/LIPIcs.STACS.2023.10.pdf
Approximation algorithms
bounded twin-width
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
11:1
11:17
10.4230/LIPIcs.STACS.2023.11
article
Tight Lower Bounds for Problems Parameterized by Rank-Width
Bergougnoux, Benjamin
1
https://orcid.org/0000-0002-6270-3663
Korhonen, Tuukka
2
https://orcid.org/0000-0003-0861-6515
Nederlof, Jesper
3
https://orcid.org/0000-0003-1848-0076
University of Warsaw, Poland
University of Bergen, Norway
Utrecht University, The Netherlands
We show that there is no 2^o(k²) n^O(1) time algorithm for Independent Set on n-vertex graphs with rank-width k, unless the Exponential Time Hypothesis (ETH) fails. Our lower bound matches the 2^O(k²) n^O(1) time algorithm given by Bui-Xuan, Telle, and Vatshelle [Discret. Appl. Math., 2010] and it answers the open question of Bergougnoux and Kanté [SIAM J. Discret. Math., 2021]. We also show that the known 2^O(k²) n^O(1) time algorithms for Weighted Dominating Set, Maximum Induced Matching and Feedback Vertex Set parameterized by rank-width k are optimal assuming ETH. Our results are the first tight ETH lower bounds parameterized by rank-width that do not follow directly from lower bounds for n-vertex graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.11/LIPIcs.STACS.2023.11.pdf
rank-width
exponential time hypothesis
Boolean-width
parameterized algorithms
independent set
dominating set
maximum induced matching
feedback vertex set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
12:1
12:18
10.4230/LIPIcs.STACS.2023.12
article
On the Multilinear Complexity of Associative Algebras
Bläser, Markus
1
Mayer, Hendrik
2
Shringi, Devansh
3
https://orcid.org/0000-0001-6442-2411
Universität des Saarlandes, Saarland Informatics Campus, Saarbrücken, Germany
Massachusetts Institute of Technology, Cambridge, MA, USA
University of Toronto, Canada
Christandl and Zuiddam [Matthias Christandl and Jeroen Zuiddam, 2019] study the multilinear complexity of d-fold matrix multiplication in the context of quantum communication complexity. Bshouty [Nader H. Bshouty, 2013] investigates the multilinear complexity of d-fold multiplication in commutative algebras to understand the size of so-called testers. The study of bilinear complexity is a classical topic in algebraic complexity theory, starting with the work by Strassen. However, there has been no systematic study of the multilinear complexity of multilinear maps.
In the present work, we systematically investigate the multilinear complexity of d-fold multiplication in arbitrary associative algebras. We prove a multilinear generalization of the famous Alder-Strassen theorem, which is a lower bound for the bilinear complexity of the (2-fold) multiplication in an associative algebra. We show that the multilinear complexity of the d-fold multiplication has a lower bound of d ⋅ dim A - (d-1)t, where t is the number of maximal two-sided ideals in A. This is optimal in the sense that there are algebras for which this lower bound is tight. Furthermore, we prove the following dichotomy: the quotient algebra A/rad A determines the complexity of the d-fold multiplication in A. When the semisimple algebra A/rad A is commutative, the multilinear complexity of the d-fold multiplication in A is polynomial in d. On the other hand, when A/rad A is noncommutative, the multilinear complexity of the d-fold multiplication in A is exponential in d.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.12/LIPIcs.STACS.2023.12.pdf
Multilinear computations
associative algebras
matrix multiplication
Alder-Strassen theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
13:1
13:17
10.4230/LIPIcs.STACS.2023.13
article
Strongly Hyperbolic Unit Disk Graphs
Bläsius, Thomas
1
https://orcid.org/0000-0003-2450-744X
Friedrich, Tobias
2
https://orcid.org/0000-0003-0076-6308
Katzmann, Maximilian
1
https://orcid.org/0000-0002-9302-5527
Stephan, Daniel
3
Karlsruhe Institute of Technology, Germany
Hasso Plattner Institute, University of Potsdam, Germany
GSV Algorithm Consulting UG (haftungsbeschränkt) & Co. KG, Potsdam, Germany
The class of Euclidean unit disk graphs is one of the most fundamental and well-studied graph classes with underlying geometry. In this paper, we identify this class as a special case in the broader class of hyperbolic unit disk graphs and introduce strongly hyperbolic unit disk graphs as a natural counterpart to the Euclidean variant. In contrast to the grid-like structures exhibited by Euclidean unit disk graphs, strongly hyperbolic networks feature hierarchical structures, which are also observed in complex real-world networks.
We investigate basic properties of strongly hyperbolic unit disk graphs, including adjacencies and the formation of cliques, and utilize the derived insights to demonstrate that the class is useful for the development and analysis of graph algorithms. Specifically, we develop a simple greedy routing scheme and analyze its performance on strongly hyperbolic unit disk graphs in order to prove that routing can be performed more efficiently on such networks than in general.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.13/LIPIcs.STACS.2023.13.pdf
hyperbolic geometry
unit disk graphs
greedy routing
hyperbolic random graphs
graph classes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
14:1
14:16
10.4230/LIPIcs.STACS.2023.14
article
Tight Bounds for Connectivity Problems Parameterized by Cutwidth
Bojikian, Narek
1
https://orcid.org/0000-0003-1072-4873
Chekan, Vera
1
https://orcid.org/0000-0002-6165-1566
Hegerfeld, Falko
1
https://orcid.org/0000-0003-2125-5048
Kratsch, Stefan
1
https://orcid.org/0000-0002-0193-7239
Humboldt-Universität zu Berlin, Germany
In this work we start the investigation of tight complexity bounds for connectivity problems parameterized by cutwidth assuming the Strong Exponential-Time Hypothesis (SETH). Van Geffen et al. [Bas A. M. van Geffen et al., 2020] posed this question for Odd Cycle Transversal and Feedback Vertex Set. We answer it for these two and four further problems, namely Connected Vertex Cover, Connected Dominating Set, Steiner Tree, and Connected Odd Cycle Transversal. For the latter two problems it sufficed to prove lower bounds that match the running time inherited from parameterization by treewidth; for the others we provide faster algorithms than relative to treewidth and prove matching lower bounds. For upper bounds we first extend the idea of Groenland et al. [Carla Groenland et al., 2022] to solve what we call coloring-like problems. Such problems are defined by a symmetric matrix M over 𝔽₂ indexed by a set of colors. The goal is to count the number (modulo some prime p) of colorings of a graph such that M has a 1-entry if indexed by the colors of the end-points of any edge. We show that this problem can be solved faster if M has small rank over 𝔽_p. We apply this result to get our upper bounds for CVC and CDS. The upper bounds for OCT and FVS use a subdivision trick to get below the bounds that matrix rank would yield.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.14/LIPIcs.STACS.2023.14.pdf
Parameterized complexity
connectivity problems
cutwidth
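To make the abstract's definition of coloring-like problems concrete, here is a brute-force sketch (exponential time, with illustrative function names of our own — this is a specification of the counting problem, not the paper's fast algorithm): it counts, modulo a prime p, the colorings of a graph in which the symmetric matrix M has a 1-entry at the colors of every edge's endpoints.

```python
from itertools import product

def count_colorings(n, edges, M, p):
    """Brute-force count, modulo p, of colorings c: {0..n-1} -> colors
    with M[c[u]][c[v]] == 1 for every edge (u, v).  Exponential time;
    only meant to illustrate the definition from the abstract."""
    colors = range(len(M))
    total = 0
    for c in product(colors, repeat=n):
        if all(M[c[u]][c[v]] == 1 for u, v in edges):
            total += 1
    return total % p
```

With M = [[0, 1], [1, 0]] this counts proper 2-colorings; the 𝔽_p-rank of M is the quantity the paper's faster algorithms exploit.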
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
15:1
15:16
10.4230/LIPIcs.STACS.2023.15
article
Twin-Width V: Linear Minors, Modular Counting, and Matrix Multiplication
Bonnet, Édouard
1
https://orcid.org/0000-0002-1653-5822
Giocanti, Ugo
1
Ossona de Mendez, Patrice
2
3
https://orcid.org/0000-0003-0724-3729
Thomassé, Stéphan
1
Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France
Centre d'Analyse et de Mathématique Sociales CNRS UMR 8557, Paris, France
Computer Science Institute, Charles University (IUUK), Prague, Czech Republic
We continue developing the theory around the twin-width of totally ordered binary structures (or equivalently, matrices over a finite alphabet), initiated in the previous paper of the series. We first introduce the notion of parity and linear minors of a matrix, which consists of iteratively replacing consecutive rows or consecutive columns with a linear combination of them. We show that a matrix class (i.e., a set of matrices closed under taking submatrices) has bounded twin-width if and only if its linear-minor closure does not contain all matrices. We observe that the fixed-parameter tractable (FPT) algorithm for first-order model checking on structures given with an O(1)-sequence (certificate of bounded twin-width) and the fact that first-order transductions of bounded twin-width classes have bounded twin-width, both established in Twin-width I, extend to first-order logic with modular counting quantifiers. We make explicit a win-win argument obtained as a by-product of Twin-width IV, and somewhat similar to bidimensionality, that we call rank-bidimensionality. This generalizes the seminal work of Guillemot and Marx [SODA '14], which builds on the Marcus-Tardos theorem [JCTA '04]. It works on general matrices (not only on classes of bounded twin-width) and, for example, yields FPT algorithms deciding if a small matrix is a parity or a linear minor of another matrix given in input, or exactly computing the grid or mixed number of a given matrix (i.e., the maximum integer k such that the row set and the column set of the matrix can be partitioned into k intervals, with each of the k² defined cells containing a non-zero entry, or two distinct rows and two distinct columns, respectively).
Armed with the above-mentioned extension to modular counting, we show that the twin-width of the product of two conformal matrices A, B (i.e., whose dimensions are such that AB is defined) over a finite field is bounded by a function of the twin-width of A, of B, and of the size of the field. Furthermore, if A and B are n × n matrices of twin-width d over F_q, we show that AB can be computed in time O_{d,q}(n² log n).
We finally present an ad hoc algorithm to efficiently multiply two matrices of bounded twin-width, with a single-exponential dependence in the twin-width bound. More precisely, pipelined to observations and results of Pilipczuk et al. [STACS '22], we obtain the following. If the inputs are given in a compact tree-like form (witnessing twin-width at most d), called twin-decomposition of width d, then two n × n matrices A, B over F₂ can be multiplied in time 4^{d+o(d)}n, in the sense that a twin-decomposition of their product AB, with width 2^{d+o(d)}, is output within that time, and each entry of AB can be queried in time O_d(log log n). Furthermore, for every ε > 0, the query time can be brought to constant time O(1/ε) if the running time is increased to near-linear 4^{d+o(d)}n^{1+ε}. Notably, the running time is sublinear (essentially square root) in the number of (non-zero) entries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.15/LIPIcs.STACS.2023.15.pdf
Twin-width
matrices
parity and linear minors
model theory
linear algebra
matrix multiplication
algorithms
computational complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
16:1
16:20
10.4230/LIPIcs.STACS.2023.16
article
Non-Adaptive Proper Learning Polynomials
Bshouty, Nader H.
1
Department of Computer Science, Technion, Haifa, Israel
We give the first polynomial-time non-adaptive proper learning algorithm for Boolean sparse multivariate polynomials under the uniform distribution. Our algorithm, for an s-sparse polynomial over n variables, makes q = (s/ε)^{γ(s,ε)}log n queries, where 2.66 ≤ γ(s,ε) ≤ 6.922, and runs in Õ(n)⋅ poly(s,1/ε) time. We also show that for any ε = 1/s^{O(1)}, any non-adaptive learning algorithm must make at least (s/ε)^{Ω(1)}log n queries. Therefore, the query complexity of our algorithm is also polynomial in the optimal query complexity and optimal in n.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.16/LIPIcs.STACS.2023.16.pdf
Polynomial
Learning
Testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
17:1
17:17
10.4230/LIPIcs.STACS.2023.17
article
Cut Paths and Their Remainder Structure, with Applications
Cairo, Massimo
1
Khan, Shahbaz
2
https://orcid.org/0000-0001-9352-0088
Rizzi, Romeo
3
https://orcid.org/0000-0002-2387-0952
Schmidt, Sebastian
1
https://orcid.org/0000-0003-4878-2809
Tomescu, Alexandru I.
1
https://orcid.org/0000-0002-5747-8350
Zirondelli, Elia C.
4
Department of Computer Science, University of Helsinki, Finland
Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
Department of Computer Science, University of Verona, Italy
Department of Mathematics, University of Trento, Italy
In a strongly connected graph G = (V,E), a cut arc (also called strong bridge) is an arc e ∈ E whose removal makes the graph no longer strongly connected. Equivalently, there exist u,v ∈ V, such that all u-v walks contain e. Cut arcs are a fundamental graph-theoretic notion, with countless applications, especially in reachability problems.
In this paper we initiate the study of cut paths, as a generalisation of cut arcs, which we naturally define as those paths P for which there exist u,v ∈ V, such that all u-v walks contain P as a subwalk. We first prove various properties of cut paths and define their remainder structures, which we use to present a simple O(m)-time verification algorithm for a cut path (|V| = n, |E| = m).
Secondly, we apply cut paths and their remainder structures to improve several reachability problems from bioinformatics, as follows. A walk is called safe if it is a subwalk of every node-covering closed walk of a strongly connected graph. Multi-safety is defined analogously, by considering node-covering sets of closed walks instead. We show that cut paths provide simple O(m)-time algorithms verifying if a walk is safe or multi-safe. For multi-safety, we present the first linear time algorithm, while for safety, we present a simple algorithm where the state-of-the-art employed complex data structures. Finally we show that the simultaneous computation of remainder structures of all subwalks of a cut path can be performed in linear time, since they are related in a structured way. These properties yield an O(mn)-time algorithm outputting all maximal multi-safe walks, improving over the state-of-the-art algorithm running in time O(m²+n³).
The results of this paper only scratch the surface in the study of cut paths, and we believe a rich structure of a graph can be revealed by considering the perspective of a path instead of just an arc.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.17/LIPIcs.STACS.2023.17.pdf
reachability
cut arc
strong bridge
covering walk
safety
persistence
essentiality
genome assembly
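As a concrete companion to the definition of cut arcs, here is a naive O(m(n+m)) checker (the function names are our own, and this is the brute-force characterisation, not the paper's linear-time machinery): an arc is a strong bridge exactly when deleting it destroys strong connectivity.

```python
def reachable(adj, s):
    # iterative DFS over an adjacency dict
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def strongly_connected(n, arcs):
    adj = {u: [] for u in range(n)}
    radj = {u: [] for u in range(n)}
    for u, v in arcs:
        adj[u].append(v)
        radj[v].append(u)
    # strongly connected iff vertex 0 reaches, and is reached by, everyone
    return len(reachable(adj, 0)) == n and len(reachable(radj, 0)) == n

def cut_arcs(n, arcs):
    # an arc is a cut arc (strong bridge) iff its removal breaks
    # strong connectivity; O(m * (n + m)) by brute force
    return [e for e in arcs
            if not strongly_connected(n, [f for f in arcs if f != e])]
```

In a directed cycle every arc is a cut arc; adding a chord exempts only the arc it bypasses.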
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
18:1
18:22
10.4230/LIPIcs.STACS.2023.18
article
Geometric Amortization of Enumeration Algorithms
Capelli, Florent
1
https://orcid.org/0000-0002-2842-8223
Strozecki, Yann
2
https://orcid.org/0000-0002-0891-3766
Université de Lille, CNRS, Inria, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France
Université Paris Saclay, UVSQ, DAVID, France
In this paper, we introduce a technique we call geometric amortization for enumeration algorithms, which can be used to make the delay of enumeration algorithms more regular, with little overhead in the space they use. More precisely, we consider enumeration algorithms having incremental linear delay, that is, algorithms enumerating, on input x, a set A(x) such that for every t ≤ ♯A(x), they output at least t solutions in time O(t⋅p(|x|)), where p is a polynomial. We call p the incremental delay of the algorithm. While it is folklore that one can transform such an algorithm into an algorithm with maximal delay O(p(|x|)), the naive transformation may use exponential space. We show that, using geometric amortization, such an algorithm can be transformed into an algorithm with delay O(p(|x|)log(♯A(x))) and space O(s log(♯A(x))), where s is the space used by the original algorithm. In terms of complexity, we prove that the classes DelayP and IncP₁ with polynomial space coincide.
We apply geometric amortization to show that one can trade the delay of flashlight search algorithms for their average delay up to a factor of O(log(♯A(x))). We illustrate how this tradeoff is advantageous for the enumeration of solutions of DNF formulas.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.18/LIPIcs.STACS.2023.18.pdf
Enumeration
Polynomial Delay
Incremental Delay
Amortization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
19:1
19:14
10.4230/LIPIcs.STACS.2023.19
article
One Drop of Non-Determinism in a Random Deterministic Automaton
Carayol, Arnaud
1
Duchon, Philippe
2
Koechlin, Florent
3
Nicaud, Cyril
1
Univ Gustave Eiffel, CNRS, LIGM, F-77454 Marne-la-Vallée, France
Univ. Bordeaux, CNRS UMR 5800, LaBRI, F-33400 Talence, France
Université de Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France
Every language recognized by a non-deterministic finite automaton can be recognized by a deterministic automaton, at the cost of a potential increase of the number of states, which in the worst case can go from n states to 2ⁿ states. In this article, we investigate this classical result in a probabilistic setting where we take a deterministic automaton with n states uniformly at random and add just one random transition. These automata are almost deterministic in the sense that only one state has a non-deterministic choice when reading an input letter. In our model each state has a fixed probability of being final. We prove that for any d ≥ 1, with non-negligible probability the minimal (deterministic) automaton of the language recognized by such an automaton has more than n^d states; as a byproduct, the expected size of its minimal automaton grows faster than any polynomial. Our result also holds when each state is final with some probability that depends on n, as long as it is not too close to 0 or 1, at distance at least Ω(1/√n) to be precise, therefore allowing models with a sublinear number of final states in expectation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.19/LIPIcs.STACS.2023.19.pdf
non-deterministic automaton
powerset construction
probabilistic analysis
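The powerset construction underlying this blow-up can be sketched as follows — a generic textbook implementation (our own naming), not the paper's probabilistic analysis:

```python
def determinize(alphabet, delta, start, finals):
    """Classical powerset construction.  delta maps (state, letter) to a
    set of successor states; returns the reachable subset-states of the
    equivalent DFA, its transition map, its start state, and its
    accepting states."""
    start_set = frozenset([start])
    seen = {start_set}
    todo = [start_set]
    trans = {}
    while todo:
        S = todo.pop()
        for a in alphabet:
            # the successor subset-state is the union of NFA successors
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return seen, trans, start_set, {S for S in seen if S & finals}
```

For instance, the 3-state NFA for "the second-to-last letter is a" determinizes into a 4-state DFA; forcing an a at the k-th position from the end yields the classical 2^k blow-up.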
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
20:1
20:14
10.4230/LIPIcs.STACS.2023.20
article
Improved NP-Hardness of Approximation for Orthogonality Dimension and Minrank
Chawin, Dror
1
Haviv, Ishay
1
School of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv, Israel
The orthogonality dimension of a graph G over ℝ is the smallest integer k for which one can assign a nonzero k-dimensional real vector to each vertex of G, such that every two adjacent vertices receive orthogonal vectors. We prove that for every sufficiently large integer k, it is NP-hard to decide whether the orthogonality dimension of a given graph over ℝ is at most k or at least 2^{(1-o(1))⋅k/2}. We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is NP-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than 3/2. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.20/LIPIcs.STACS.2023.20.pdf
hardness of approximation
graph coloring
orthogonality dimension
minrank
index coding
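To make the notion from the abstract concrete, here is a hedged checker for a candidate orthogonal representation (illustrative names of our own; the orthogonality dimension is the smallest k for which such an assignment exists):

```python
def is_orthogonal_representation(vectors, edges, tol=1e-9):
    """Check that every vertex received a nonzero vector and every pair
    of adjacent vertices received (numerically) orthogonal vectors."""
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))
    if any(abs(dot(v, v)) <= tol for v in vectors):
        return False  # zero vectors are not allowed
    return all(abs(dot(vectors[u], vectors[v])) <= tol for u, v in edges)
```

Assigning the standard basis of ℝⁿ to the vertices of the complete graph K_n shows its orthogonality dimension is at most n.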
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
21:1
21:20
10.4230/LIPIcs.STACS.2023.21
article
Extending Merge Resolution to a Family of QBF-Proof Systems
Chede, Sravanthi
1
https://orcid.org/0000-0001-7170-6156
Shukla, Anil
1
Indian Institute of Technology Ropar, Rupnagar, India
Merge Resolution (MRes [Olaf Beyersdorff et al., 2021]) is a recently introduced proof system for false QBFs. Unlike other known QBF proof systems, it builds winning strategies for the universal player (countermodels) within the proofs as merge maps. Merge maps are deterministic branching programs in which isomorphism checking is efficient; as a result, MRes is a polynomial-time verifiable proof system.
In this paper, we introduce a family of proof systems MRes-ℛ in which the information about countermodels is stored in any pre-fixed complete representation ℛ. Hence, corresponding to each possible complete representation ℛ, we have a sound and refutationally complete QBF-proof system in MRes-ℛ. To handle these arbitrary representations, we introduce consistency checking rules in MRes-ℛ instead of the isomorphism checking in MRes. As a result, these proof systems are not polynomial-time verifiable (Non-P). Consequently, the paper shows that using merge maps is too restrictive and that, with a slight change in the rules, they can be replaced with arbitrary representations, leading to several interesting proof systems.
We relate these new systems with the implicit proof system from the algorithm in [Joshua Blinkhorn et al., 2021], which was designed to solve DQBFs (Dependency QBFs) using clause-strategy pairs like MRes. We use the OBDD (Ordered Binary Decision Diagrams) representation suggested in [Joshua Blinkhorn et al., 2021] and deduce that "Ordered" versions of the proof systems in MRes-ℛ are indeed polynomial time verifiable.
On the lower bound side, we lift the lower bound result of regular MRes ([Olaf Beyersdorff et al., 2020]) by showing that the completion principle formulas (CR_n) from [Mikolás Janota and João Marques-Silva, 2015], which are shown to be hard for regular MRes in [Olaf Beyersdorff et al., 2020], are also hard for any regular proof system in MRes-ℛ. Thereby, the paper lifts the lower bound of regular MRes to an entire class of proof systems, which use various complete representations, including those yet undiscovered, instead of only merge maps. This proves that the hardness of CR_n formulas remains intact even after changing the weak isomorphism checking in MRes to the stronger consistency checking in MRes-ℛ.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.21/LIPIcs.STACS.2023.21.pdf
Proof complexity
QBFs
Merge Resolution
Simulation
Lower Bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
22:1
22:20
10.4230/LIPIcs.STACS.2023.22
article
On Hardness of Testing Equivalence to Sparse Polynomials Under Shifts
Chillara, Suryajith
1
Grichener, Coral
2
Shpilka, Amir
3
International Institute of Information Technology, Hyderabad, India
Google Research, Herzliya, Israel
Tel Aviv University, Israel
We say that two given polynomials f, g ∈ R[x_1, ..., x_n], over a ring R, are equivalent under shifts if there exists a vector (a_1, ..., a_n) ∈ Rⁿ such that f(x_1+a_1, ..., x_n+a_n) = g(x_1, ..., x_n). This is a special variant of the polynomial projection problem in Algebraic Complexity Theory.
Grigoriev and Karpinski (FOCS 1990), Lakshman and Saunders (SIAM J. Computing, 1995), and Grigoriev and Lakshman (ISSAC 1995) studied the problem of testing polynomial equivalence of a given polynomial to any t-sparse polynomial, over the rational numbers, and gave exponential time algorithms. In this paper, we provide hardness results for this problem.
Formally, for a ring R, let SparseShift_R be the following decision problem: given a polynomial P(X), is there a vector 𝐚 such that P(X+𝐚) contains fewer monomials than P(X)? We show that SparseShift_R is at least as hard as checking if a given system of polynomial equations over R[x_1,..., x_n] has a solution (Hilbert’s Nullstellensatz). As a consequence of this reduction, we get the following results.
1) SparseShift_ℤ is undecidable.
2) For any ring R (which is not a field) such that HN_R is NP_R-complete over the Blum-Shub-Smale model of computation, SparseShift_R is also NP_R-complete. In particular, SparseShift_ℤ is also NP_ℤ-complete.
We also study the gap version of SparseShift_R and show the following.
1) For every function β:ℕ → ℝ_+ such that β ∈ o(1), N^β-gap-SparseShift_ℤ is also undecidable (where N is the input length).
2) For R = 𝔽_p, ℚ, ℝ or ℤ_q and for every β > 1 the β-gap-SparseShift_R problem is NP-hard. Furthermore, there exists a constant α > 1 such that for every d = O(1) in the sparse representation model, and for every d ≤ n^O(1) in the arithmetic circuit model, the α^d-gap-SparseShift_R problem is NP-hard when given polynomials of degree at most d, in O(nd) many variables, as input.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.22/LIPIcs.STACS.2023.22.pdf
algebraic complexity
shift equivalence
polynomial equivalence
Hilbert’s Nullstellensatz
hardness of approximation
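A univariate toy instance of equivalence under shifts (the helper names are ours): the 3-sparse polynomial x² + 2x + 1 becomes the 1-sparse polynomial x² after the shift a = -1, which is exactly the kind of sparsity drop SparseShift_R asks about.

```python
from math import comb

def shift_poly(coeffs, a):
    """Coefficients of p(x + a), where coeffs[i] is the coefficient of
    x^i.  Uses the binomial theorem: x^i -> sum_j C(i, j) a^(i-j) x^j."""
    out = [0] * len(coeffs)
    for i, c in enumerate(coeffs):
        for j in range(i + 1):
            out[j] += c * comb(i, j) * a ** (i - j)
    return out

def sparsity(coeffs):
    # number of monomials with a non-zero coefficient
    return sum(1 for c in coeffs if c != 0)
```

Deciding whether *some* shift reduces sparsity is the hard part; applying a given shift, as above, is easy.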
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
23:1
23:24
10.4230/LIPIcs.STACS.2023.23
article
Online Paging with Heterogeneous Cache Slots
Chrobak, Marek
1
Haney, Samuel
2
Liaee, Mehraneh
3
Panigrahi, Debmalya
4
Rajaraman, Rajmohan
3
Sundaram, Ravi
3
Young, Neal E.
1
University of California at Riverside, CA, USA
Tumult Labs, Durham, NC, USA
Northeastern University, Boston, MA, USA
Duke University, Durham, NC, USA
It is natural to generalize the online k-Server problem by allowing each request to specify not only a point p, but also a subset S of servers that may serve it. To initiate a systematic study of this generalization, we focus on uniform and star metrics. For uniform metrics, the problem is equivalent to a generalization of Paging in which each request specifies not only a page p, but also a subset S of cache slots, and is satisfied by having a copy of p in some slot in S. We call this problem Slot-Heterogeneous Paging.
In realistic settings only certain subsets of cache slots or servers would appear in requests. Therefore we parameterize the problem by specifying a family 𝒮 ⊆ 2^[k] of requestable slot sets, and we establish bounds on the competitive ratio as a function of the cache size k and family 𝒮. If all request sets are allowed (𝒮 = 2^[k]), the optimal deterministic and randomized competitive ratios are exponentially worse than for standard Paging (𝒮 = {[k]}). As a function of |𝒮| and k, the optimal deterministic ratio is polynomial: at most O(k²|𝒮|) and at least Ω(√{|𝒮|}). For any laminar family 𝒮 of height h, the optimal ratios are O(hk) (deterministic) and O(h²log k) (randomized). The special case that we call All-or-One Paging extends standard Paging by allowing each request to specify a specific slot to put the requested page in. For All-or-One Paging the optimal competitive ratios are Θ(k) (deterministic) and Θ(log k) (randomized), while the offline problem is NP-hard. We extend the deterministic upper bound to the weighted variant of All-or-One Paging (a generalization of standard Weighted Paging), showing that it is also Θ(k).
Some results for the laminar case are shown via a reduction to the generalization of Paging in which each request specifies a set P of pages, and is satisfied by fetching any page from P into the cache. The optimal ratios for the latter problem (with laminar family of height h) are at most hk (deterministic) and hH_k (randomized).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.23/LIPIcs.STACS.2023.23.pdf
Caching and paging algorithms
k-server
weighted paging
laminar family
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
24:1
24:21
10.4230/LIPIcs.STACS.2023.24
article
On Rational Recursive Sequences
Clemente, Lorenzo
1
https://orcid.org/0000-0003-0578-9103
Donten-Bury, Maria
1
https://orcid.org/0000-0002-2138-4109
Mazowiecki, Filip
1
Pilipczuk, Michał
1
University of Warsaw, Poland
We study the class of rational recursive sequences (ratrec) over the rational numbers. A ratrec sequence is defined via a system of sequences using mutually recursive equations of depth 1, where the next values are computed as rational functions of the previous values. An alternative class is that of simple ratrec sequences, where one uses a single recursive equation, but of depth k: the next value is defined as a rational function of the k previous values.
We conjecture that the classes ratrec and simple ratrec coincide. The main contribution of this paper is a proof of a variant of this conjecture where the initial conditions are treated symbolically, using a formal variable per sequence, while the sequences themselves consist of rational functions over those variables. While the initial conjecture does not follow from this variant, we hope that the introduced algebraic techniques may eventually be helpful in resolving the problem.
The class ratrec strictly generalises a well-known class of polynomial recursive sequences (polyrec). These are defined like ratrec, but using polynomial functions instead of rational ones. One can observe that if our conjecture is true and effective, then we can improve the complexities of the zeroness and the equivalence problems for polyrec sequences. Currently, the only known upper bound is Ackermannian, which follows from results on polynomial automata. We complement this observation by proving a PSPACE lower bound for both problems for polyrec. Our lower bound construction also implies that the Skolem problem is PSPACE-hard for the polyrec class.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.24/LIPIcs.STACS.2023.24.pdf
recursive sequences
polynomial automata
zeroness problem
equivalence problem
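As a hedged illustration of a simple ratrec sequence of depth 2 (this specific example is ours, not taken from the paper), consider the Lyness recurrence a_{k+1} = (a_k + 1)/a_{k-1} over ℚ, which is famously periodic with period 5 for generic initial values:

```python
from fractions import Fraction

def lyness(a1, a2, n):
    """First n terms of the depth-2 rational recursion
    a_{k+1} = (a_k + 1) / a_{k-1}, computed with exact rationals."""
    seq = [Fraction(a1), Fraction(a2)]
    while len(seq) < n:
        seq.append((seq[-1] + 1) / seq[-2])
    return seq
```

With a1 = a2 = 1 the sequence cycles through 1, 1, 2, 3, 2 forever — a tiny instance where the zeroness/equivalence questions studied in the paper are trivially answerable.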
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
25:1
25:18
10.4230/LIPIcs.STACS.2023.25
article
Semigroup Intersection Problems in the Heisenberg Groups
Dong, Ruiwen
1
Department of Computer Science, University of Oxford, UK
We consider two algorithmic problems concerning sub-semigroups of Heisenberg groups and, more generally, two-step nilpotent groups. The first problem is Intersection Emptiness, which asks whether a finite number of given finitely generated semigroups have empty intersection. This problem was first studied by Markov in the 1940s. We show that Intersection Emptiness is PTIME decidable in the Heisenberg groups H_n(𝕂) over any algebraic number field 𝕂, as well as in direct products of Heisenberg groups. We also extend our decidability result to arbitrary finitely generated 2-step nilpotent groups.
The second problem is Orbit Intersection, which asks whether the orbits of two matrices under multiplication by two semigroups intersect with each other. This problem was first studied by Babai et al. (1996), who showed its decidability within commutative matrix groups. We show that Orbit Intersection is decidable within the Heisenberg group H₃(ℚ).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.25/LIPIcs.STACS.2023.25.pdf
semigroup intersection
orbit intersection
matrix semigroups
Heisenberg group
nilpotent groups
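For readers unfamiliar with the Heisenberg group, here is a minimal sketch of multiplication in H_3 (our own coordinate encoding of the standard upper unitriangular matrices; not part of the paper's algorithms):

```python
def h3_mul(g, h):
    """Multiply two elements of the Heisenberg group H_3, written as
    triples (a, b, c) encoding the matrix [[1, a, c], [0, 1, b], [0, 0, 1]]."""
    a1, b1, c1 = g
    a2, b2, c2 = h
    # the a*b cross term is what makes the group non-commutative
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

def h3_inv(g):
    """Inverse: (a, b, c)^-1 = (-a, -b, a*b - c)."""
    a, b, c = g
    return (-a, -b, a * b - c)
```

The commutator of the two generators (1,0,0) and (0,1,0) is the central element (0,0,1), witnessing that H_3 is 2-step nilpotent — the setting of both algorithmic problems above.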
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
26:1
26:19
10.4230/LIPIcs.STACS.2023.26
article
Solving Homogeneous Linear Equations over Polynomial Semirings
Dong, Ruiwen
1
Department of Computer Science, University of Oxford, UK
For a subset B of ℝ, denote by U(B) the semiring of (univariate) polynomials in ℝ[X] that are strictly positive on B. Let ℕ[X] be the semiring of (univariate) polynomials with non-negative integer coefficients. We study solutions of homogeneous linear equations over the polynomial semirings U(B) and ℕ[X]. In particular, we prove local-global principles for solving single homogeneous linear equations over these semirings. We then show PTIME decidability of determining the existence of non-zero solutions over ℕ[X] of single homogeneous linear equations.
Our study of these polynomial semirings is largely motivated by several semigroup algorithmic problems in the wreath product ℤ≀ℤ. As an application of our results, we show that the Identity Problem (whether a given semigroup contains the neutral element) and the Group Problem (whether a given semigroup is a group) for finitely generated sub-semigroups of the wreath product ℤ≀ℤ are decidable when the elements of the semigroup generating set have the form (y, ±1).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.26/LIPIcs.STACS.2023.26.pdf
wreath product
identity problem
polynomial semiring
positive polynomial
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
27:1
27:16
10.4230/LIPIcs.STACS.2023.27
article
An Approximation Algorithm for Distance-Constrained Vehicle Routing on Trees
Dufay, Marc
1
Mathieu, Claire
2
Zhou, Hang
3
École Polytechnique and IRIF, Palaiseau, France
CNRS Paris, France
École Polytechnique, Palaiseau, France
In the Distance-constrained Vehicle Routing Problem (DVRP), we are given a graph with integer edge weights, a depot, a set of n terminals, and a distance constraint D. The goal is to find a minimum number of tours starting and ending at the depot such that those tours together cover all the terminals and the length of each tour is at most D.
The DVRP on trees is of independent interest, because it is equivalent to the "virtual machine packing" problem on trees studied by Sindelar et al. [SPAA'11]. We design a simple and natural approximation algorithm for the tree DVRP, parameterized by ε > 0. We show that its approximation ratio is α + ε, where α ≈ 1.691, and in addition, that our analysis is essentially tight. The running time is polynomial in n and D. The approximation ratio improves on the ratio of 2 due to Nagarajan and Ravi [Networks'12].
The main novelty of this paper lies in the analysis of the algorithm. It relies on a reduction from the tree DVRP to the bounded space online bin packing problem via a new notion of "reduced length".
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.27/LIPIcs.STACS.2023.27.pdf
vehicle routing
distance constraint
approximation algorithms
trees
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
28:1
28:22
10.4230/LIPIcs.STACS.2023.28
article
Representation of Short Distances in Structurally Sparse Graphs
Dvořák, Zdeněk
1
https://orcid.org/0000-0002-8308-9746
Computer Science Institute, Charles University, Prague, Czech Republic
A partial orientation H of a graph G is a weak r-guidance system if for any two vertices at distance at most r in G, there exists a shortest path P between them such that H directs all but one edge in P towards this edge. In case that H has bounded maximum outdegree Δ, this gives an efficient representation of shortest paths of length at most r in G: For any pair of vertices, we can either determine the distance between them or decide the distance is more than r, and in the former case, find a shortest path between them, in time O(Δ^r). We show that graphs from many natural graph classes admit such weak guidance systems, and study the algorithmic aspects of this notion. We also apply the notion to obtain approximation algorithms for distance variants of the independence and domination number in graph classes that admit weak guidance systems of bounded maximum outdegree.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.28/LIPIcs.STACS.2023.28.pdf
distances
structurally sparse graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
29:1
29:17
10.4230/LIPIcs.STACS.2023.29
article
Exact Matching: Algorithms and Related Problems
El Maalouly, Nicolas
1
https://orcid.org/0000-0002-1037-0203
Department of Computer Science, ETH Zürich, Switzerland
In 1982, Papadimitriou and Yannakakis introduced the Exact Matching (EM) problem where given an edge colored graph, with colors red and blue, and an integer k, the goal is to decide whether or not the graph contains a perfect matching with exactly k red edges. Although they conjectured it to be NP-complete, soon after it was shown to be solvable in randomized polynomial time in the seminal work of Mulmuley et al., placing it in the complexity class RP. Since then, all attempts at finding a deterministic algorithm for EM have failed, thus leaving it as one of the few natural combinatorial problems in RP but not known to be contained in P, and making it an interesting instance for testing the hypothesis RP = P. Progress has been lacking even on very restrictive classes of graphs, despite the problem being quite well known, as evidenced by the number of works citing it.
In this paper we aim to gain more insight into EM by studying a new optimization problem we call Top-k Perfect Matching (TkPM) which we show to be polynomially equivalent to EM. By virtue of being an optimization problem, it is more natural to approximate TkPM so we provide approximation algorithms for it. Some of the approximation algorithms rely on a relaxation of EM on bipartite graphs where the output is required to be a perfect matching with a number of red edges differing from k by at most k/2, which is of independent interest and generalizes to the Exact Weight Perfect Matching (EWPM) problem. We also consider parameterized algorithms and show that TkPM can be solved in FPT time parameterized by k and the independence number of the graph. This result again relies on new tools developed for EM which are also of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.29/LIPIcs.STACS.2023.29.pdf
Perfect Matching
Exact Matching
Approximation algorithms
Independence number
Parameterized complexity
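To pin down the Exact Matching problem itself, here is an exponential-time brute-force specification (our own helper names; it is a definition made executable, emphatically not a candidate for the sought deterministic polynomial-time algorithm):

```python
from itertools import combinations

def exact_matching(n, red, blue, k):
    """Does the graph on vertices 0..n-1 with edge set red + blue have a
    perfect matching using exactly k red edges?  Checked by trying every
    set of n/2 edges."""
    edges = [(e, 'r') for e in red] + [(e, 'b') for e in blue]
    for chosen in combinations(edges, n // 2):
        covered = [v for e, _ in chosen for v in e]
        # n distinct covered vertices <=> the chosen edges form a
        # perfect matching
        if len(set(covered)) == n and sum(c == 'r' for _, c in chosen) == k:
            return True
    return False
```

On the 4-cycle with one red edge, perfect matchings with 0 or 1 red edge exist, but none with 2.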
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
30:1
30:19
10.4230/LIPIcs.STACS.2023.30
article
Counting Temporal Paths
Enright, Jessica
1
Meeks, Kitty
1
https://orcid.org/0000-0001-5299-3073
Molter, Hendrik
2
https://orcid.org/0000-0002-4590-798X
School of Computing Science, University of Glasgow, UK
Department of Computer Science and Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
The betweenness centrality of a vertex v is an important centrality measure that quantifies how many optimal paths between pairs of other vertices visit v. Computing betweenness centrality in a temporal graph, in which the edge set may change over discrete timesteps, requires us to count temporal paths that are optimal with respect to some criterion. For several natural notions of optimality, including foremost or fastest temporal paths, this counting problem reduces to #TEMPORAL PATH, the problem of counting all temporal paths between a fixed pair of vertices; like the problems of counting foremost and fastest temporal paths, #TEMPORAL PATH is #P-hard in general. Motivated by the many applications of this intractable problem, we initiate a systematic study of the parameterised and approximation complexity of #TEMPORAL PATH. We show that the problem presumably does not admit an FPT-algorithm when parameterised by the feedback vertex number of the static underlying graph, and that it is hard to approximate in general. On the positive side, we give several exact and approximate FPT-algorithms for special cases.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.30/LIPIcs.STACS.2023.30.pdf
Temporal Paths
Temporal Graphs
Parameterised Counting
Approximate Counting
#P-hard Counting Problems
Temporal Betweenness Centrality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
31:1
31:15
10.4230/LIPIcs.STACS.2023.31
article
Barriers for Faster Dimensionality Reduction
Nova Fandina, Ora
1
https://orcid.org/0000-0002-3545-1501
Møller Høgsgaard, Mikael
1
https://orcid.org/0000-0002-1286-5971
Green Larsen, Kasper
1
https://orcid.org/0000-0001-8841-5929
Aarhus University, Denmark
The Johnson-Lindenstrauss transform allows one to embed a dataset of n points in ℝ^d into ℝ^m, while preserving the pairwise distance between any pair of points up to a factor (1 ± ε), provided that m = Ω(ε^{-2} lg n). The transform has found an overwhelming number of algorithmic applications, speeding up algorithms and reducing memory consumption at the price of a small loss in accuracy. A central line of research on such transforms focuses on developing fast embedding algorithms, with the classic example being the Fast JL transform by Ailon and Chazelle. All known such algorithms have an embedding time of Ω(d lg d), but no lower bounds rule out a clean O(d) embedding time. In this work, we establish the first non-trivial lower bounds (of magnitude Ω(m lg m)) for a large class of embedding algorithms, including in particular most known upper bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.31/LIPIcs.STACS.2023.31.pdf
Dimensional reduction
Lower bound
Linear Circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
32:1
32:16
10.4230/LIPIcs.STACS.2023.32
article
A Regular and Complete Notion of Delay for Streaming String Transducers
Filiot, Emmanuel
1
https://orcid.org/0000-0002-2520-5630
Jecker, Ismaël
2
https://orcid.org/0000-0002-6527-4470
Löding, Christof
3
https://orcid.org/0000-0002-1529-2806
Winter, Sarah
1
https://orcid.org/0000-0002-3499-1995
Université libre de Bruxelles, Belgium
University of Warsaw, Poland
RWTH Aachen University, Germany
The notion of delay between finite transducers is a core element of numerous fundamental results of transducer theory. The goal of this work is to provide a similar notion for more complex abstract machines: we introduce a new notion of delay tailored to measure the similarity between streaming string transducers (SST). We show that our notion is regular: we design a finite automaton that can check whether the delay between any two SST executions is smaller than some given bound. As a consequence, our notion enjoys good decidability properties: in particular, while equivalence between non-deterministic SSTs is undecidable, we show that equivalence up to fixed delay is decidable. Moreover, we show that our notion has good completeness properties: we prove that two SSTs are equivalent if and only if they are equivalent up to some (computable) bounded delay. Together with the regularity of our delay notion, this provides an alternative proof that SST equivalence is decidable. Finally, the definition of our delay notion is machine-independent, as it only depends on the origin semantics of SSTs. As a corollary, the completeness result also holds for equivalent machine models such as deterministic two-way transducers, or MSO transducers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.32/LIPIcs.STACS.2023.32.pdf
Streaming string transducers
Delay
Origin
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
33:1
33:22
10.4230/LIPIcs.STACS.2023.33
article
New Clocks, Optimal Line Formation and Self-Replication Population Protocols
Gąsieniec, Leszek
1
https://orcid.org/0000-0003-1809-9814
Spirakis, Paul G.
1
https://orcid.org/0000-0001-5396-3749
Stachowiak, Grzegorz
2
https://orcid.org/0000-0003-0463-3676
University of Liverpool, UK
University of Wrocław, Poland
In this paper we consider a known variant of the standard population protocol model in which agents are allowed to be connected by edges, referred to as the network constructor model. During an interaction between two agents the relevant connecting edge can be formed, maintained or eliminated by the transition function. Since pairs of agents are chosen uniformly at random, the status of each edge is updated every Θ(n²) interactions in expectation, which coincides with Θ(n) parallel time. This phenomenon provides a natural lower bound on the time complexity for any non-trivial network construction designed for this variant. This is in contrast with the standard population protocol model, in which efficient protocols operate in O(polylog n) parallel time.
The main focus of this paper is the efficient manipulation of linear structures in the adopted model, including formation, self-replication, and distribution (with pipelining) of complex information.
- We propose and analyze a novel edge-based phase clock counting parallel time Θ(n log n) in the network constructor model, and show that its leader-based counterpart provides the same time guarantees in the standard population protocol model. Note that all currently known phase clocks can count parallel time not exceeding O(polylog n).
- We prove that any spanning line formation protocol requires Ω(n log n) parallel time if a high probability guarantee is imposed. We also show that the new clock enables an optimal O(n log n) parallel time spanning line construction, which improves dramatically on the best previously known O(n²) parallel time protocol, solving the main open problem in the considered model [O. Michail and P. Spirakis, 2016].
- We propose a new probabilistic bubble-sort algorithm in which random comparisons and transfers are limited to adjacent positions in the sequence. Utilising a novel potential function argument, we show that, rather surprisingly, this probabilistic sorting procedure requires O(n²) comparisons in expectation and whp, and is thus on par with its deterministic counterpart.
- We propose the first population protocol allowing self-replication of a strand of an arbitrary length k (carrying a k-bit message of size independent of the state space) in parallel time O(n(k+log n)). The bit-pipelining mechanism and the time complexity analysis of the self-replication process mimic those used in the probabilistic bubble-sort argument. The new protocol also permits simultaneous self-replication, where l copies of the strand can be created in parallel in time O(n(k+log n)log l). We also discuss the application of the strand self-replication protocol to pattern matching. All protocols are always correct and provide time guarantees with high probability, defined as 1-n^(-η) for a constant η > 0.
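The probabilistic bubble-sort procedure from the third item is straightforward to simulate. The sketch below is our simplified model only (random adjacent comparisons on a plain array, with an explicit sortedness check), not the population-protocol implementation from the paper.

```python
import random

def probabilistic_bubble_sort(seq, rng=random.Random(0)):
    """Randomized adjacent-comparison sort: repeatedly pick a random
    adjacent pair and swap it if out of order, counting comparisons,
    until the sequence is sorted. Each swap removes exactly one
    inversion, so the comparison count is at least the number of
    inversions of the input."""
    a = list(seq)
    comparisons = 0
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        i = rng.randrange(len(a) - 1)  # random adjacent position
        comparisons += 1
        if a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
    return a, comparisons
```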
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.33/LIPIcs.STACS.2023.33.pdf
Population protocols
constructors
probabilistic bubble-sort
self-replication
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
34:1
34:19
10.4230/LIPIcs.STACS.2023.34
article
Avoidance Games Are PSPACE-Complete
Gledel, Valentin
1
https://orcid.org/0000-0003-4736-4656
Oijid, Nacim
2
https://orcid.org/0000-0001-8313-639X
Department of Mathematics and Mathematical Statistics, Umeå University, Sweden
Univ. Lyon, Université Lyon 1, LIRIS UMR CNRS 5205, F-69621, Lyon, France
Avoidance games are games in which two players claim vertices of a hypergraph while trying to avoid certain structures. These games have been studied since the introduction of the game of SIM in 1968, but few complexity results are known about them. In 2001, Slany proved some partial results on the complexity of Avoider-Avoider games, and in 2017 Bonnet et al. proved that short Avoider-Enforcer games are co-W[1]-hard. More recently, in 2022, Miltzow and Stojaković proved that these games are NP-hard. As these games correspond to the misère version of the well-known Maker-Breaker games, introduced in 1963 and proven PSPACE-complete in 1978, one could expect them to be PSPACE-complete too, but the question has remained open. Here, we prove that both the Avoider-Avoider and the Avoider-Enforcer conventions are PSPACE-complete. Using the PSPACE-hardness of Avoider-Enforcer, we prove in the appendix that some particular Avoider-Enforcer games are PSPACE-complete as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.34/LIPIcs.STACS.2023.34.pdf
Games
Avoider-Enforcer
Maker-Breaker
Complexity
Avoider-Avoider
PSPACE-complete
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
35:1
35:19
10.4230/LIPIcs.STACS.2023.35
article
Parameterized Lower Bounds for Problems in P via Fine-Grained Cross-Compositions
Heeger, Klaus
1
https://orcid.org/0000-0001-8779-0890
Nichterlein, André
1
https://orcid.org/0000-0001-7451-9401
Niedermeier, Rolf
1
https://orcid.org/0000-0003-1703-1236
Algorithmics and Computational Complexity, Technische Universität Berlin, Germany
We provide a general framework to exclude parameterized running times of the form O(l^β + n^γ) for problems that have polynomial running time lower bounds under hypotheses from fine-grained complexity. Our framework is based on cross-compositions from parameterized complexity. We (conditionally) exclude running times of the form O(l^{γ/(γ-1) - ε} + n^γ) for any 1 < γ < 2 and ε > 0 for the following problems:
- Longest Common (Increasing) Subsequence: Given two length-n strings over an alphabet Σ (over ℕ) and l ∈ ℕ, is there a common (increasing) subsequence of length l in both strings?
- Discrete Fréchet Distance: Given two lists of n points each and k ∈ ℕ, is the Fréchet distance of the lists at most k? Here l is the maximum number of points which one list is ahead of the other list in an optimum traversal.
- Planar Motion Planning: Given a set of n non-intersecting axis-parallel line segment obstacles in the plane and a line segment robot (called rod), can the rod be moved to a specified target without touching any obstacles? Here l is the maximum number of segments any segment has in its vicinity.
Moreover, we exclude running times O(l^{2γ/(γ-1) - ε} + n^γ) for any 1 < γ < 3 and ε > 0 for:
- Negative Triangle: Given an edge-weighted graph with n vertices, is there a triangle whose sum of edge-weights is negative? Here l is the order of a maximum connected component.
- Triangle Collection: Given a vertex-colored graph with n vertices, is there for each triple of colors a triangle whose vertices have these three colors? Here l is the order of a maximum connected component.
- 2nd Shortest Path: Given an n-vertex edge-weighted digraph, vertices s and t, and k ∈ ℕ, is the length of the second shortest s-t-path at most k? Here l is the directed feedback vertex set number.
Except for 2nd Shortest Path, all these running time bounds are tight, that is, algorithms with running time O(l^{γ/(γ-1)} + n^γ) for any 1 < γ < 2 and O(l^{2γ/(γ-1)} + n^γ) for any 1 < γ < 3, respectively, are known. Our running time lower bounds also imply lower bounds on kernelization algorithms for these problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.35/LIPIcs.STACS.2023.35.pdf
FPT in P
Kernelization
Decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
36:1
36:16
10.4230/LIPIcs.STACS.2023.36
article
Dynamic Maintenance of Monotone Dynamic Programs and Applications
Henzinger, Monika
1
https://orcid.org/0000-0002-5008-6530
Neumann, Stefan
2
https://orcid.org/0000-0002-3981-1500
Räcke, Harald
3
https://orcid.org/0000-0001-8797-717X
Schmid, Stefan
4
5
https://orcid.org/0000-0002-7798-1711
Institute of Science and Technology Austria, Klosterneuburg, Austria
KTH Royal Institute of Technology, Stockholm, Sweden
TU München, Germany
TU Berlin, Germany
Fraunhofer SIT, Darmstadt, Germany
Dynamic programming (DP) is one of the fundamental paradigms in algorithm design. However, many DP algorithms have to fill in large DP tables, represented by two-dimensional arrays, which causes at least quadratic running time and space usage. This has led to the development of improved algorithms for special cases in which the DPs satisfy additional properties such as the Monge property or total monotonicity.
In this paper, we consider a new condition which assumes (among some other technical assumptions) that the rows of the DP table are monotone. Under this assumption, we introduce a novel data structure for computing (1+ε)-approximate DP solutions in near-linear time and space in the static setting, and with polylogarithmic update times when the DP entries change dynamically. To the best of our knowledge, our new condition is incomparable to previous conditions and is the first that allows one to derive dynamic algorithms from existing DPs. Instead of using two-dimensional arrays to store the DP tables, we store the rows of the DP tables using monotone piecewise constant functions. This allows us to store length-n DP table rows with entries in [0,W] using only polylog(n,W) bits, and to perform operations, such as (min,+)-convolution or rounding, on these functions in polylogarithmic time.
We further present several applications of our data structure. For bicriteria versions of k-balanced graph partitioning and simultaneous source location, we obtain the first dynamic algorithms with subpolynomial update times, as well as the first static algorithms using only near-linear time and space. Additionally, we obtain the currently fastest algorithm for fully dynamic knapsack.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.36/LIPIcs.STACS.2023.36.pdf
Dynamic programming
dynamic algorithms
data structures
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
37:1
37:23
10.4230/LIPIcs.STACS.2023.37
article
Approximate Selection with Unreliable Comparisons in Optimal Expected Time
Huang, Shengyu
1
Liu, Chih-Hung
2
https://orcid.org/0000-0001-9683-5982
Rutschmann, Daniel
3
Department of Computer Science, EPFL, Lausanne, Switzerland
Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
Department of Applied Mathematics and Computer Science, Technical University of Denmark, Copenhagen, Denmark
Given n elements, an integer k ≤ n/2 and a parameter ε ≥ 1/n, we study the problem of selecting an element with rank in (k-nε, k+nε] using unreliable comparisons where the outcome of each comparison is incorrect independently with a constant error probability, and multiple comparisons between the same pair of elements are independent. In this fault model, the fundamental problems of finding the minimum, selecting the k-th smallest element and sorting have been shown to require Θ(n log 1/Q), Θ(n log k/Q) and Θ(n log n/Q) comparisons, respectively, to achieve success probability 1-Q [Uriel Feige et al., 1994]. Considering the increasing complexity of modern computing, it is of great interest to develop approximation algorithms that enable a trade-off between the solution quality and the number of comparisons. In particular, approximation algorithms may even attain a sublinear number of comparisons. Very recently, Leucci and Liu [Stefano Leucci and Chih-Hung Liu, 2022] proved that the approximate minimum selection problem, which covers the case that k ≤ nε, requires expected Θ(ε^{-1} log 1/Q) comparisons, but the general case, i.e., for nε < k ≤ n/2, has remained open.
We develop a randomized algorithm that performs expected O(k/n ε^{-2} log 1/Q) comparisons to achieve success probability at least 1-Q. For k = n ε, the number of comparisons is O(ε^{-1} log 1/Q), matching Leucci and Liu’s result [Stefano Leucci and Chih-Hung Liu, 2022], whereas for k = n/2 (i.e., approximating the median), the number of comparisons is O(ε^{-2} log 1/Q). We also prove that even in the absence of comparison faults, any randomized algorithm with success probability at least 1-Q performs expected Ω(min{n, k/n ε^{-2} log 1/Q}) comparisons. As long as n is large enough, i.e., when n = Ω(k/n ε^{-2} log 1/Q), our lower bound demonstrates the optimality of our algorithm, covering the entire range in which a sublinear number of comparisons is attainable. Surprisingly, for constant Q, our algorithm performs expected O(k/n ε^{-2}) comparisons, matching the best possible approximation algorithm in the absence of comparison faults. In contrast, for the exact selection problem, the expected number of comparisons is Θ(n log k) with faults versus Θ(n) without faults. Our results also indicate a clear distinction between approximating the minimum and approximating the k-th smallest element, which holds even for the high probability guarantee, e.g., if k = n/2, Q = 1/n and ε = n^{-α} for α ∈ (0, 1/2), the asymptotic difference is almost quadratic, i.e., Θ̃(n^α) versus Θ̃(n^{2α}).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.37/LIPIcs.STACS.2023.37.pdf
Approximate Selection
Unreliable Comparisons
Independent Faults
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
38:1
38:18
10.4230/LIPIcs.STACS.2023.38
article
Relating Description Complexity to Entropy
Jaakkola, Reijo
1
https://orcid.org/0000-0003-4714-4637
Kuusisto, Antti
1
2
https://orcid.org/0000-0003-1356-8749
Vilander, Miikka
1
https://orcid.org/0000-0002-7301-939X
Tampere University, Finland
University of Helsinki, Finland
We demonstrate some novel links between entropy and description complexity, a notion referring to the minimal formula length for specifying given properties. Let MLU be the logic obtained by extending propositional logic with the universal modality, and let GMLU be the corresponding extension with the ability to count. In the finite, MLU is expressively complete for specifying sets of variable assignments, while GMLU is expressively complete for multisets. We show that for MLU, the model classes with maximal Boltzmann entropy are the ones with maximal description complexity. Concerning GMLU, we show that expected Boltzmann entropy is asymptotically equivalent to expected description complexity multiplied by the number of proposition symbols considered. To contrast these results, we prove that this link breaks when we move to considering first-order logic FO over vocabularies with higher-arity relations. To establish the aforementioned result, we show that almost all finite models require relatively large FO-formulas to define them. Our results relate to links between Kolmogorov complexity and entropy, demonstrating a way to conceive such results in the logic-based scenario where relational structures are classified by formulas of different sizes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.38/LIPIcs.STACS.2023.38.pdf
finite model theory
entropy
formula size
randomness
formula size game
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
39:1
39:21
10.4230/LIPIcs.STACS.2023.39
article
Induced Matching Below Guarantees: Average Paves the Way for Fixed-Parameter Tractability
Koana, Tomohiro
1
https://orcid.org/0000-0002-8684-0611
Faculty IV, Institute of Software Engineering and Theoretical Computer Science, Algorithmics and Computational Complexity, Technische Universität Berlin, Germany
In this work, we study the Induced Matching problem: Given an undirected graph G and an integer 𝓁, is there an induced matching M of size at least 𝓁? An edge subset M is an induced matching in G if M is a matching such that there is no edge between two distinct edges of M. Our work looks into the parameterized complexity of Induced Matching with respect to "below guarantee" parameterizations. We consider the parameterization u - 𝓁 for an upper bound u on the size of any induced matching. For instance, any induced matching is of size at most n/2 where n is the number of vertices, which gives us a parameter n/2 - 𝓁. In fact, there is a straightforward 9^{n/2 - 𝓁} ⋅ n^O(1)-time algorithm for Induced Matching [Moser and Thilikos, J. Discrete Algorithms]. Motivated by this, we ask: Is Induced Matching FPT for a parameter smaller than n/2 - 𝓁? In search for such parameters, we consider MM(G) - 𝓁 and IS(G) - 𝓁, where MM(G) is the maximum matching size and IS(G) is the maximum independent set size of G. We find that Induced Matching is presumably not FPT when parameterized by MM(G) - 𝓁 or IS(G) - 𝓁. In contrast to these intractability results, we find that taking the average of the two helps - our main result is a branching algorithm that solves Induced Matching in 49^{(MM(G) + IS(G))/ 2 - 𝓁} ⋅ n^O(1) time. Our algorithm makes use of the Gallai-Edmonds decomposition to find a structure to branch on.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.39/LIPIcs.STACS.2023.39.pdf
Parameterized Complexity
Below Guarantees
Induced Matching
Gallai-Edmonds Decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
40:1
40:20
10.4230/LIPIcs.STACS.2023.40
article
Finding and Counting Patterns in Sparse Graphs
Komarath, Balagopal
1
Kumar, Anant
1
Mishra, Suchismita
2
Sethia, Aditi
1
IIT Gandhinagar, India
Universidad Andrés Bello, Santiago, Chile
We consider algorithms for finding and counting small, fixed graphs in sparse host graphs. In the non-sparse setting, the parameters treedepth and treewidth play a crucial role in fast, constant-space and polynomial-space algorithms respectively. We discover two new parameters that we call matched treedepth and matched treewidth. We show that finding and counting patterns with low matched treedepth and low matched treewidth can be done asymptotically faster than the existing algorithms when the host graphs are sparse for many patterns. As an application to finding and counting fixed-size patterns, we discover Õ(m³)-time, constant-space algorithms for cycles of length at most 11 and Õ(m²)-time, polynomial-space algorithms for paths of length at most 10.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.40/LIPIcs.STACS.2023.40.pdf
Subgraph Detection and Counting
Homomorphism Polynomials
Treewidth and Treedepth
Matchings
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
41:1
41:22
10.4230/LIPIcs.STACS.2023.41
article
Maximum Matching via Maximal Matching Queries
Konrad, Christian
1
Naidu, Kheeran K.
1
Steward, Arun
1
Department of Computer Science, University of Bristol, UK
We study approximation algorithms for Maximum Matching that are given access to the input graph solely via an edge-query maximal matching oracle. More specifically, in each round, an algorithm queries a set of potential edges and the oracle returns a maximal matching in the subgraph spanned by the query edges that are also contained in the input graph. This model is more general than the vertex-query model introduced by binti Khalil and Konrad [FSTTCS'20], where each query consists of a subset of vertices and the oracle returns a maximal matching in the subgraph of the input graph induced by the queried vertices.
In this paper, we give tight bounds for deterministic edge-query algorithms for up to three rounds. In more detail:
1) As our main result, we give a deterministic 3-round edge-query algorithm with approximation factor 0.625 on bipartite graphs. This result establishes a separation between the edge-query and the vertex-query models since every deterministic 3-round vertex-query algorithm has an approximation factor of at most 0.6 [binti Khalil, Konrad, FSTTCS'20], even on bipartite graphs. Our algorithm can also be implemented in the semi-streaming model of computation in a straightforward manner and improves upon the state-of-the-art 3-pass 0.6111-approximation algorithm by Feldman and Szarf [APPROX'22] for bipartite graphs.
2) We show that the aforementioned algorithm is optimal in that every deterministic 3-round edge-query algorithm has an approximation factor of at most 0.625, even on bipartite graphs.
3) Last, we also give optimal bounds for one and two query rounds, where the best approximation factors achievable are 1/2 and 1/2 + Θ(1/n), respectively, where n is the number of vertices in the input graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.41/LIPIcs.STACS.2023.41.pdf
Maximum Matching
Query Model
Algorithms
Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
42:1
42:21
10.4230/LIPIcs.STACS.2023.42
article
Distributed Quantum Interactive Proofs
Le Gall, François
1
Miyamoto, Masayuki
1
Nishimura, Harumichi
2
Graduate School of Mathematics, Nagoya University, Japan
Graduate School of Informatics, Nagoya University, Japan
The study of distributed interactive proofs was initiated by Kol, Oshman, and Saxena [PODC 2018] as a generalization of distributed decision mechanisms (proof-labeling schemes, etc.), and has received a lot of attention in recent years. In distributed interactive proofs, the nodes of an n-node network G can exchange short messages (called certificates) with a powerful prover. The goal is to decide if the input (including G itself) belongs to some language, with as few turns of interaction and as few bits exchanged between nodes and the prover as possible. There are several results showing that the size of certificates can be reduced drastically with a constant number of interactions compared to non-interactive distributed proofs.
In this paper, we introduce the quantum counterpart of distributed interactive proofs: certificates can now be quantum bits, and the nodes of the network can perform quantum computation. The first result of this paper shows that by using distributed quantum interactive proofs, the number of interactions can be significantly reduced. More precisely, our result shows that for any constant k, the class of languages that can be decided by a k-turn classical (i.e., non-quantum) distributed interactive protocol with f(n)-bit certificate size is contained in the class of languages that can be decided by a 5-turn distributed quantum interactive protocol with O(f(n))-bit certificate size. We also show that if we additionally allow shared randomness, the number of turns can be reduced to three. Since no similar classical turn-reduction technique is currently known, our result gives evidence of the power of quantum computation in the setting of distributed interactive proofs as well.
As a corollary of our results, we show that there exist 5-turn/3-turn distributed quantum interactive protocols with small certificate size for problems that have been considered in prior works on distributed interactive proofs such as [Kol, Oshman, and Saxena PODC 2018, Naor, Parter, and Yogev SODA 2020].
We then utilize the framework of the distributed quantum interactive proofs to test closeness of two quantum states each of which is distributed over the entire network.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.42/LIPIcs.STACS.2023.42.pdf
distributed interactive proofs
distributed verification
quantum computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
43:1
43:21
10.4230/LIPIcs.STACS.2023.43
article
Reconfiguration of Digraph Homomorphisms
Lévêque, Benjamin
1
Mühlenthaler, Moritz
1
https://orcid.org/0000-0002-2729-127X
Suzan, Thomas
1
Laboratoire G-SCOP, Grenoble INP, Université Grenoble-Alpes, France
For a fixed graph H, the H-Recoloring problem asks whether, given two homomorphisms from a graph G to H, one homomorphism can be transformed into the other by changing the image of a single vertex in each step and maintaining a homomorphism to H throughout. The most general algorithmic result for H-Recoloring so far has been proposed by Wrochna in 2014, who introduced a topological approach to obtain a polynomial-time algorithm for any undirected loopless square-free graph H. We show that the topological approach can be used to recover essentially all previous algorithmic results for H-Recoloring and that it is applicable also in the more general setting of digraph homomorphisms. In particular, we show that H-Recoloring admits a polynomial-time algorithm i) if H is a loopless digraph that does not contain a 4-cycle of algebraic girth 0 and ii) if H is a reflexive digraph that contains no triangle of algebraic girth 1 and no 4-cycle of algebraic girth 0.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.43/LIPIcs.STACS.2023.43.pdf
Digraph Homomorphisms
Combinatorial Reconfiguration
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
44:1
44:14
10.4230/LIPIcs.STACS.2023.44
article
An 𝒪(3.82^k) Time FPT Algorithm for Convex Flip Distance
Li, Haohong
1
Xia, Ge
1
Department of Computer Science, Lafayette College, Easton, PA, USA
Let P be a convex polygon in the plane, and let T be a triangulation of P. An edge e in T is called a diagonal if it is shared by two triangles in T. A flip of a diagonal e is the operation of removing e and adding the opposite diagonal of the resulting quadrilateral to obtain a new triangulation of P from T. The flip distance between two triangulations of P is the minimum number of flips needed to transform one triangulation into the other. The Convex Flip Distance problem asks if the flip distance between two given triangulations of P is at most k, for some given parameter k ∈ ℕ.
We present an FPT algorithm for the Convex Flip Distance problem that runs in time 𝒪(3.82^k) and uses polynomial space, where k is the number of flips. This algorithm significantly improves the previous best FPT algorithms for the problem.
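The flip operation and flip distance can be made concrete for small polygons. The BFS checker below is our own illustrative sketch (exponential in general), not the paper's O(3.82^k) FPT algorithm: triangulations are encoded as frozensets of diagonals (i, j) with i < j, and each flip replaces a diagonal by the opposite diagonal of its surrounding quadrilateral.

```python
from collections import deque

def flip_distance_bfs(n, T1, T2):
    """Exact flip distance between two triangulations of a convex
    n-gon via breadth-first search over all single-diagonal flips."""
    def is_edge(T, a, b):
        # Polygon side (consecutive vertices) or diagonal of T.
        a, b = min(a, b), max(a, b)
        return (b - a == 1) or (a == 0 and b == n - 1) or (a, b) in T

    def neighbours(T):
        for (a, b) in T:
            # The two triangles sharing diagonal (a, b) have apexes
            # c, d; flipping replaces (a, b) with (c, d).
            apexes = [c for c in range(n) if c not in (a, b)
                      and is_edge(T, a, c) and is_edge(T, b, c)]
            if len(apexes) == 2:
                c, d = sorted(apexes)
                yield frozenset(T - {(a, b)} | {(c, d)})

    start, goal = frozenset(T1), frozenset(T2)
    dist, queue = {start: 0}, deque([start])
    while queue:
        T = queue.popleft()
        if T == goal:
            return dist[T]
        for S in neighbours(T):
            if S not in dist:
                dist[S] = dist[T] + 1
                queue.append(S)
```

For a square (n = 4), the two triangulations {(0,2)} and {(1,3)} are one flip apart.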
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.44/LIPIcs.STACS.2023.44.pdf
Flip distance
Rotation distance
Triangulations
Exact algorithms
Parameterized complexity
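To make the flip operation from the abstract above concrete (an illustrative brute-force sketch, not the paper's FPT algorithm), a triangulation of a convex n-gon can be represented by its set of diagonals, and the flip distance computed by BFS over the flip graph:

```python
from collections import deque
from itertools import combinations

def crosses(d1, d2):
    """Two diagonals of a convex polygon cross iff their endpoints interleave cyclically."""
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return (a < c < b < d) or (c < a < d < b)

def flip_neighbors(T, diagonals):
    """All triangulations reachable from T by one flip: replace a diagonal d
    by the unique diagonal crossing d and nothing else in T."""
    for d in T:
        rest = T - {d}
        for e in diagonals:
            if crosses(d, e) and not any(crosses(e, f) for f in rest):
                yield frozenset(rest | {e})

def flip_distance(n, T1, T2):
    """BFS over triangulations of the convex n-gon (diagonals as sorted vertex pairs)."""
    diagonals = [(i, j) for i, j in combinations(range(n), 2)
                 if (j - i) % n not in (1, n - 1)]  # exclude polygon sides
    T1, T2 = frozenset(T1), frozenset(T2)
    dist = {T1: 0}
    q = deque([T1])
    while q:
        T = q.popleft()
        if T == T2:
            return dist[T]
        for S in flip_neighbors(T, diagonals):
            if S not in dist:
                dist[S] = dist[T] + 1
                q.append(S)
    return None
```

The flip graph has Catalan-many vertices, so this exhaustive search is only feasible for small n; avoiding it when k is small is precisely the point of FPT algorithms for this problem.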
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
45:1
45:22
10.4230/LIPIcs.STACS.2023.45
article
Tight Bounds for Repeated Balls-Into-Bins
Los, Dimitrios
1
https://orcid.org/0000-0002-1574-5166
Sauerwald, Thomas
1
https://orcid.org/0000-0002-0882-283X
Department of Computer Science & Technology, University of Cambridge, UK
We study the repeated balls-into-bins process introduced by Becchetti, Clementi, Natale, Pasquale and Posta (2019). This process starts with m balls arbitrarily distributed across n bins. At each round t = 1,2,…, one ball is selected from each non-empty bin and then placed into a bin chosen independently and uniformly at random. We prove the following results:
- For any n ⩽ m ⩽ poly(n), we prove a lower bound of Ω(m/n ⋅ log n) on the maximum load. For the special case m = n, this matches the upper bound of 𝒪(log n), as shown in [Luca Becchetti et al., 2019]. It also provides a positive answer to the conjecture in [Luca Becchetti et al., 2019] that for m = n the maximum load is ω(log n/ log log n) at least once in a polynomially large time interval. For m ∈ [ω(n), n log n], our new lower bound disproves the conjecture in [Luca Becchetti et al., 2019] that the maximum load remains 𝒪(log n).
- For any n ⩽ m ⩽ poly(n), we prove an upper bound of 𝒪(m/n ⋅ log n) on the maximum load for all steps of a polynomially large time interval. This matches our lower bound up to multiplicative constants.
- For any m ⩾ n, our analysis also implies an 𝒪(m²/n) waiting time to reach a configuration with an 𝒪(m/n ⋅ log m) maximum load, even for worst-case initial distributions.
- For m ⩾ n, we show that every ball visits every bin in 𝒪(m log m) rounds. For m = n, this improves the previous upper bound of 𝒪(n log² n) in [Luca Becchetti et al., 2019]. We also prove that the upper bound is tight up to multiplicative constants for any n ⩽ m ⩽ poly(n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.45/LIPIcs.STACS.2023.45.pdf
Repeated balls-into-bins
self-stabilizing systems
balanced allocations
potential functions
random walks
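The process studied above is easy to simulate directly (a sketch for illustration; the parameter choices and the all-balls-in-one-bin start are arbitrary):

```python
import random

def step(load, rng):
    """One round: every non-empty bin emits one ball; each emitted ball
    lands in a bin chosen independently and uniformly at random."""
    n = len(load)
    moving = sum(1 for x in load if x > 0)
    load = [x - 1 if x > 0 else 0 for x in load]
    for _ in range(moving):
        load[rng.randrange(n)] += 1
    return load

def simulate(n, m, rounds, seed=0):
    """Run the process for a number of rounds from a worst-case-style start
    (all m balls in one bin) and record the maximum load after each round."""
    rng = random.Random(seed)
    load = [m] + [0] * (n - 1)
    maxima = []
    for _ in range(rounds):
        load = step(load, rng)
        maxima.append(max(load))
    return load, maxima
```

The total number of balls is conserved in every round; the quantity tracked in `maxima` is the maximum load whose tight Θ(m/n ⋅ log n) behaviour the paper establishes.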
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
46:1
46:13
10.4230/LIPIcs.STACS.2023.46
article
Maintaining CMSO₂ Properties on Dynamic Structures with Bounded Feedback Vertex Number
Majewski, Konrad
1
https://orcid.org/0000-0002-3922-7953
Pilipczuk, Michał
1
https://orcid.org/0000-0001-7891-1988
Sokołowski, Marek
1
https://orcid.org/0000-0001-8309-0141
Institute of Informatics, University of Warsaw, Poland
Let 𝜑 be a sentence of CMSO₂ (monadic second-order logic with quantification over edge subsets and counting modular predicates) over the signature of graphs. We present a dynamic data structure that, for a graph G updated by edge insertions and edge deletions, maintains whether 𝜑 is satisfied in G. The data structure is required to correctly report the outcome only when the feedback vertex number of G does not exceed a fixed constant k; otherwise, it reports that the feedback vertex number is too large. Under this assumption, we guarantee amortized update time O_{𝜑,k}(log n).
By combining this result with a classic theorem of Erdős and Pósa, we give a fully dynamic data structure that maintains whether a graph contains a packing of k vertex-disjoint cycles with amortized update time O_k(log n). Our data structure also works in a larger generality of relational structures over binary signatures.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.46/LIPIcs.STACS.2023.46.pdf
feedback vertex set
CMSO₂ formula
data structure
dynamic graphs
fixed-parameter tractability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
47:1
47:22
10.4230/LIPIcs.STACS.2023.47
article
Sublinear-Time Probabilistic Cellular Automata
Modanese, Augusto
1
Aalto University, Espoo, Finland
We propose and investigate a probabilistic model of sublinear-time one-dimensional cellular automata. In particular, we modify the model of ACA (which are cellular automata that accept if and only if all cells simultaneously accept) so that every cell changes its state depending not only on the states it sees in its neighborhood but also on an unbiased coin toss of its own. The resulting model is dubbed probabilistic ACA (PACA). We consider one- and two-sided error versions of the model (in the same spirit as the classes RP and BPP) and establish a separation between the classes of languages they can recognize all the way up to o(√n) time. As a consequence, we have an Ω(√n) lower bound for derandomizing constant-time one-sided error PACAs (using deterministic ACAs). We also prove that derandomization of T(n)-time PACAs (to polynomial-time deterministic cellular automata) for various regimes of T(n) = ω(log n) implies non-trivial derandomization results for the class RP (e.g., P = RP). The main contribution is an almost full characterization of the constant-time PACA classes: For one-sided error, the class equals that of the deterministic model; that is, constant-time one-sided error PACAs can be fully derandomized with only a constant multiplicative overhead in time complexity. As for two-sided error, we identify a natural class we call the linearly testable languages (LLT) and prove that the languages decidable by constant-time two-sided error PACAs are "sandwiched" in-between the closure of LLT under union and intersection and the class of locally threshold testable languages (LTT).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.47/LIPIcs.STACS.2023.47.pdf
Cellular automata
local computation
probabilistic models
subregular language classes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
48:1
48:20
10.4230/LIPIcs.STACS.2023.48
article
Real Numbers Equally Compressible in Every Base
Nandakumar, Satyadev
1
https://orcid.org/0000-0002-0214-0598
Pulari, Subin
1
https://orcid.org/0000-0001-8426-4326
Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India
This work solves an open question in finite-state compressibility posed by Lutz and Mayordomo [Lutz and Mayordomo, 2021] about compressibility of real numbers in different bases.
Finite-state compressibility, or equivalently, finite-state dimension, quantifies the asymptotic lower density of information in an infinite sequence.
Absolutely normal numbers, being finite-state incompressible in every base of expansion, are precisely those numbers which have finite-state dimension equal to 1 in every base. At the other extreme, for example, every rational number has finite-state dimension equal to 0 in every base.
Generalizing this, Lutz and Mayordomo in [Lutz and Mayordomo, 2021] (see also Lutz [Lutz, 2012]) posed the question: are there numbers which have absolute positive finite-state dimension strictly between 0 and 1 - equivalently, is there a real number ξ and a compressibility ratio s ∈ (0,1) such that for every base b, the compressibility ratio of the base-b expansion of ξ is precisely s? It is conceivable that there is no such number. Indeed, some works explore "zero-one" laws for other feasible dimensions [Fortnow et al., 2011] - i.e. sequences with certain properties either have feasible dimension 0 or 1, taking no value strictly in between.
However, we answer the question of Lutz and Mayordomo affirmatively by proving a more general result. We show that given any sequence of rational numbers ⟨q_b⟩_{b=2}^∞, we can explicitly construct a single number ξ such that for any base b, the finite-state dimension/compression ratio of ξ in base b is q_b. As a special case, this result implies the existence of absolutely dimensioned numbers for any given rational dimension between 0 and 1, as posed by Lutz and Mayordomo.
In our construction, we combine ideas from Wolfgang Schmidt’s construction of absolutely normal numbers from [Schmidt, 1961], results regarding low discrepancy sequences and several new estimates related to exponential sums.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.48/LIPIcs.STACS.2023.48.pdf
Finite-state dimension
Finite-state compression
Absolutely dimensioned numbers
Exponential sums
Weyl criterion
Normal numbers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
49:1
49:18
10.4230/LIPIcs.STACS.2023.49
article
Gap Preserving Reductions Between Reconfiguration Problems
Ohsaka, Naoto
1
https://orcid.org/0000-0001-9584-4764
CyberAgent, Inc., Tokyo, Japan
Combinatorial reconfiguration is a growing research field studying problems on the transformability between a pair of solutions for a search problem. For example, in SAT Reconfiguration, for a Boolean formula φ and two satisfying truth assignments σ_𝗌 and σ_𝗍 for φ, we are asked to determine whether there is a sequence of satisfying truth assignments for φ starting from σ_𝗌 and ending with σ_𝗍, each resulting from the previous one by flipping a single variable assignment. We consider the approximability of optimization variants of reconfiguration problems; e.g., Maxmin SAT Reconfiguration asks to maximize the minimum fraction of satisfied clauses of φ during the transformation from σ_𝗌 to σ_𝗍. By solving such optimization variants approximately, we may obtain a reasonable reconfiguration sequence comprising almost-satisfying truth assignments.
In this study, we prove a series of gap-preserving reductions to give evidence that a host of reconfiguration problems are PSPACE-hard to approximate, under some plausible assumption. Our starting point is a new working hypothesis called the Reconfiguration Inapproximability Hypothesis (RIH), which asserts that a gap version of Maxmin CSP Reconfiguration is PSPACE-hard. This hypothesis may be thought of as a reconfiguration analogue of the PCP theorem. Our main result is PSPACE-hardness of approximating Maxmin 3-SAT Reconfiguration of bounded occurrence under RIH. The crux of its proof is a gap-preserving reduction from Maxmin Binary CSP Reconfiguration to itself of bounded degree. Because a simple application of the degree reduction technique using expander graphs due to Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1991) does not preserve the perfect completeness, we modify the alphabet as if each vertex could take a pair of values simultaneously. To accomplish the soundness requirement, we further apply an explicit family of near-Ramanujan graphs and the expander mixing lemma. As an application of the main result, we demonstrate that under RIH, optimization variants of popular reconfiguration problems are PSPACE-hard to approximate, including Nondeterministic Constraint Logic due to Hearn and Demaine (Theor. Comput. Sci., 2005), Independent Set Reconfiguration, Clique Reconfiguration, and Vertex Cover Reconfiguration.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.49/LIPIcs.STACS.2023.49.pdf
combinatorial reconfiguration
hardness of approximation
gap reduction
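The Maxmin SAT Reconfiguration objective described above can be made concrete on toy instances (an illustrative widest-path sketch over the hypercube of assignments, not a method from the paper; the state space is exponential, so this is for intuition only):

```python
import heapq

def frac_sat(clauses, sigma):
    """Fraction of satisfied clauses; a clause is a list of (variable index, polarity)."""
    sat = sum(any(sigma[v] == p for v, p in c) for c in clauses)
    return sat / len(clauses)

def maxmin_reconfig(nvars, clauses, src, dst):
    """Bottleneck (widest-path) search: maximize, over single-flip sequences
    from src to dst, the minimum fraction of satisfied clauses seen along the way."""
    best = {src: frac_sat(clauses, src)}
    heap = [(-best[src], src)]
    while heap:
        neg, sigma = heapq.heappop(heap)
        val = -neg
        if sigma == dst:
            return val
        if val < best.get(sigma, -1):
            continue  # stale heap entry
        for i in range(nvars):
            tau = sigma[:i] + (not sigma[i],) + sigma[i + 1:]
            w = min(val, frac_sat(clauses, tau))
            if w > best.get(tau, -1):
                best[tau] = w
                heapq.heappush(heap, (-w, tau))
    return None
```

For φ = (x₁ ∨ x₂) ∧ (¬x₁ ∨ ¬x₂), any flip sequence between the two satisfying assignments must pass through an assignment satisfying only half the clauses, so the maxmin value is 1/2.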
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
50:1
50:22
10.4230/LIPIcs.STACS.2023.50
article
Dynamic Data Structures for Parameterized String Problems
Olkowski, Jędrzej
1
Pilipczuk, Michał
2
https://orcid.org/0000-0001-7891-1988
Rychlicki, Mateusz
3
https://orcid.org/0000-0002-8318-2588
Węgrzycki, Karol
4
5
https://orcid.org/0000-0001-9746-5733
Zych-Pawlewicz, Anna
2
https://orcid.org/0000-0002-5361-8969
Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, Poland
Institute of Informatics, University of Warsaw, Poland
School of Computing, University of Leeds, UK
Saarland University, Saarbrücken, Germany
Max Planck Institute for Informatics, Saarbrücken, Germany
We revisit classic string problems considered in the area of parameterized complexity, and study them through the lens of dynamic data structures. That is, instead of asking for a static algorithm that solves the given instance efficiently, our goal is to design a data structure that efficiently maintains a solution, or reports a lack thereof, upon updates in the instance.
We first consider the CLOSEST STRING problem, for which we design randomized dynamic data structures with amortized update times d^𝒪(d) and |Σ|^𝒪(d), respectively, where Σ is the alphabet and d is the assumed bound on the maximum distance. These are obtained by combining known static approaches to CLOSEST STRING with color-coding.
Next, we note that from a result of Frandsen et al. [J. ACM'97] one can easily infer a meta-theorem that provides dynamic data structures for parameterized string problems with worst-case update time of the form 𝒪_k(log log n), where k is the parameter in question and n is the length of the string. We showcase the utility of this meta-theorem by giving such data structures for the problems DISJOINT FACTORS and EDIT DISTANCE. We also give explicit data structures for these problems, with worst-case update times 𝒪(k 2^k log log n) and 𝒪(k²log log n), respectively. Finally, we discuss how a lower bound methodology introduced by Amarilli et al. [ICALP'21] can be used to show that obtaining update time 𝒪(f(k)) for DISJOINT FACTORS and EDIT DISTANCE is unlikely even for constant values of the parameter k.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.50/LIPIcs.STACS.2023.50.pdf
Parameterized algorithms
Dynamic data structures
String problems
Closest String
Edit Distance
Disjoint Factors
Predecessor problem
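For context on the static routines the abstract above builds on, a Closest String solver in the spirit of the classic d^𝒪(d) search-tree approach can be sketched as follows (an illustrative sketch; the paper's contribution is the dynamic maintenance, not this routine):

```python
def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def closest_string(strings, d):
    """Return a center within Hamming distance d of every input string, or None.
    Search tree: while some string is violated, move the center toward it at
    one of d+1 mismatch positions, spending one unit of a budget of d changes."""
    def branch(center, budget):
        center = list(center)
        for s in strings:
            if hamming(center, s) > d:
                if budget == 0:
                    return None
                mismatches = [i for i in range(len(s)) if center[i] != s[i]]
                for i in mismatches[:d + 1]:  # trying d+1 positions suffices
                    trial = center[:]
                    trial[i] = s[i]
                    res = branch(trial, budget - 1)
                    if res is not None:
                        return res
                return None
        return "".join(center)
    return branch(strings[0], d)
```

The search tree has depth at most d and branching factor d+1, giving the d^𝒪(d) running-time shape mentioned in the abstract.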
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
51:1
51:23
10.4230/LIPIcs.STACS.2023.51
article
An Algebraic Approach to Vectorial Programs
Paperman, Charles
1
https://orcid.org/0000-0002-6658-5238
Salvati, Sylvain
1
Soyez-Martin, Claire
1
Univ. Lille, CNRS, INRIA, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
Vectorial programming, the combination of SIMD instructions with usual processor instructions, is known to speed up many standard algorithms. Simple regular languages have benefited from this technology. This paper is a first step towards pushing these benefits further. We take advantage of the inner algebraic structure of regular languages and produce high-level representations of efficient vectorial programs that recognize certain classes of regular languages.
As a technical ingredient, we establish equivalences between classes of vectorial circuits and logical formalisms, namely unary temporal logic and first-order logic. The main result is the construction of compilation procedures that turn syntactic semigroups into vectorial circuits. The circuits we obtain are small in that they improve known upper bounds on representations of automata within the logical formalisms. The gain is mostly due to a careful sharing of sub-formulas based on algebraic tools.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.51/LIPIcs.STACS.2023.51.pdf
Automata theory
Semigroups
Vectorisation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
52:1
52:15
10.4230/LIPIcs.STACS.2023.52
article
Reconstructing Words Using Queries on Subwords or Factors
Richomme, Gwenaël
1
https://orcid.org/0000-0003-2211-7448
Rosenfeld, Matthieu
2
https://orcid.org/0000-0001-5467-5407
LIRMM, Université Paul-Valéry Montpellier 3, Université de Montpellier, CNRS, Montpellier, France
LIRMM, Université de Montpellier, CNRS, Montpellier, France
We study word reconstruction problems. Improving a previous result by P. Fleischmann, M. Lejeune, F. Manea, D. Nowotka and M. Rigo, we prove that, for any unknown word w of length n over an alphabet of cardinality k, w can be reconstructed from the numbers of occurrences as subwords (or scattered factors) of O(k²√(n log₂(n))) words. Two previous upper bounds obtained by S. S. Skiena and G. Sundaram are also slightly improved: one when considering information on the existence of subwords instead of on the numbers of their occurrences, and the other when considering information on the existence of factors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.52/LIPIcs.STACS.2023.52.pdf
Word reconstruction
Subwords
Factors
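Counting occurrences of u as a scattered subword of w, the basic quantity queried in the reconstruction results above, is a standard one-pass dynamic program (illustrative sketch):

```python
def subword_count(w, u):
    """Number of occurrences of u in w as a scattered subword,
    i.e. the number of ways to pick positions of u inside w in order
    (the 'binomial coefficient' of words)."""
    # dp[j] = number of embeddings of u[:j] into the prefix of w read so far
    dp = [0] * (len(u) + 1)
    dp[0] = 1
    for c in w:
        # iterate j downwards so each letter of w is used at most once per embedding
        for j in range(len(u), 0, -1):
            if u[j - 1] == c:
                dp[j] += dp[j - 1]
    return dp[len(u)]
```

For example, "ab" occurs 3 times as a subword of "abab" (positions {1,2}, {1,4}, {3,4}), and "aa" occurs C(3,2) = 3 times in "aaa".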
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
53:1
53:21
10.4230/LIPIcs.STACS.2023.53
article
Dynamic Binary Search Trees: Improved Lower Bounds for the Greedy-Future Algorithm
Sadeh, Yaniv
1
https://orcid.org/0000-0002-5712-1028
Kaplan, Haim
1
https://orcid.org/0000-0001-9586-8002
Tel Aviv University, Israel
Binary search trees (BSTs) are one of the most basic and widely used data structures. The best static tree for serving a sequence of queries (searches) can be computed by dynamic programming. In contrast, when the BSTs are allowed to be dynamic (i.e. change by rotations between searches), we still do not know how to compute the optimal algorithm (OPT) for a given sequence. One of the candidate algorithms whose serving cost is suspected to be optimal up to a (multiplicative) constant factor is known by the name Greedy Future (GF). In an equivalent geometric way of representing queries on BSTs, GF is in fact equivalent to another algorithm called Geometric Greedy (GG). Most of the results on GF are obtained using the geometric model and the study of GG. Despite this recent intensive and fruitful research, the best lower bound we have on the competitive ratio of GF is 4/3. Furthermore, it has been conjectured that the additive gap between the cost of GF and OPT is only linear in the number of queries. In this paper we prove a lower bound of 2 on the competitive ratio of GF, and we prove that the additive gap between the cost of GF and OPT can be Ω(m ⋅ log log n), where n is the number of items in the tree and m is the number of queries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.53/LIPIcs.STACS.2023.53.pdf
Binary Search Trees
Greedy Future
Geometric Greedy
Lower Bounds
Dynamic Optimality Conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
54:1
54:21
10.4230/LIPIcs.STACS.2023.54
article
The Complexity of Translationally Invariant Problems Beyond Ground State Energies
Watson, James D.
1
https://orcid.org/0000-0002-6077-4898
Bausch, Johannes
2
https://orcid.org/0000-0003-3189-9162
Gharibian, Sevag
3
https://orcid.org/0000-0002-9992-3379
University College London, UK
University of Cambridge, UK
Universität Paderborn, Germany
The physically motivated quantum generalisation of k-SAT, the k-Local Hamiltonian (k-LH) problem, is well-known to be QMA-complete ("quantum NP"-complete). What is surprising, however, is that while the former is easy on 1D Boolean formulae, the latter remains hard on 1D local Hamiltonians, even if all constraints are identical [Gottesman, Irani, FOCS 2009]. Such "translation-invariant" systems are much closer in structure to what one might see in Nature. Moving beyond k-LH, what is often more physically interesting is the computation of properties of the ground space (i.e. "solution space") itself. In this work, we focus on two such recent problems: Simulating local measurements on the ground space (APX-SIM, analogous to computing properties of optimal solutions to MAX-SAT formulae) [Ambainis, CCC 2014], and deciding if the low energy space has an energy barrier (GSCON, analogous to classical reconfiguration problems) [Gharibian, Sikora, ICALP 2015]. These problems are known to be P^{QMA[log]}- and QCMA-complete, respectively, in the general case. Yet, to date, it is not known whether they remain hard in such simple 1D translationally invariant systems.
In this work, we show that the 1D translationally invariant versions of both APX-SIM and GSCON are intractable, namely P^{QMA_{EXP}}-complete and QCMA^{EXP}-complete ("quantum P^{NEXP}" and "quantum NEXP"), respectively. Each of these results is attained by giving a generic "lifting theorem". For APX-SIM we give a framework for lifting any abstract local circuit-to-Hamiltonian mapping H satisfying mild assumptions to hardness of APX-SIM on the family of Hamiltonians produced by H, while preserving the structural properties of H (e.g. translation invariance, geometry, locality, etc). Each result also leverages counterintuitive properties of our constructions: for APX-SIM, we compress the answers to polynomially many parallel queries to a QMA oracle into a single qubit. For GSCON, we show strong robustness, i.e. soundness even against adversaries acting on all but a single qudit in the system.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.54/LIPIcs.STACS.2023.54.pdf
Complexity
Quantum Computing
Physics
Constraint Satisfaction
Combinatorial Reconfiguration
Many-Body Physics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-03-03
254
55:1
55:16
10.4230/LIPIcs.STACS.2023.55
article
Restless Temporal Path Parameterized Above Lower Bounds
Zschoche, Philipp
1
https://orcid.org/0000-0001-9846-0600
Faculty IV, Algorithmics and Computational Complexity, Technische Universität Berlin, Germany
Reachability questions are one of the most fundamental algorithmic primitives in temporal graphs - graphs whose edge set changes over discrete time steps. A core problem here is the NP-hard Short Restless Temporal Path: given a temporal graph G, two distinct vertices s and z, and two numbers δ and k, is there a δ-restless temporal s-z path of length at most k? A temporal path is a path whose edges appear in chronological order and a temporal path is δ-restless if two consecutive path edges appear at most δ time steps apart from each other. Among others, this problem has applications in neuroscience and epidemiology. While Short Restless Temporal Path is known to be computationally hard, e.g., it is NP-hard even if the temporal graph consists of three discrete time steps and it is W[1]-hard when parameterized by the feedback vertex number of the underlying graph, it is fixed-parameter tractable when parameterized by the path length k. We improve on this by showing that Short Restless Temporal Path can be solved in (randomized) 4^(k-d)|G|^O(1) time, where d is the minimum length of a temporal s-z path.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol254-stacs2023/LIPIcs.STACS.2023.55/LIPIcs.STACS.2023.55.pdf
temporal graphs
FPT
above-lower-bound parameterization
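The δ-restless path notion defined in the abstract above is easy to verify and, on small instances, to search for exhaustively (an illustrative brute-force sketch assuming strictly increasing time labels along a path; not the paper's randomized algorithm):

```python
def restless_path_exists(edges, s, z, delta, k):
    """Search for a delta-restless temporal s-z path of length <= k.
    edges: iterable of (u, v, t) time-stamped directed edges.
    Consecutive path edges must have strictly increasing times that
    differ by at most delta; the first edge may start at any time."""
    def dfs(v, t, length, visited):
        if v == z:
            return True
        if length == k:
            return False
        for a, b, tt in edges:
            ok_time = t is None or t < tt <= t + delta
            if a == v and b not in visited and ok_time:
                if dfs(b, tt, length + 1, visited | {b}):
                    return True
        return False
    return dfs(s, None, 0, {s})
```

This exhaustive search is exponential in the worst case; the point of the paper is the 4^(k-d)·|G|^O(1) bound above the trivial lower bound d.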