eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
1
1048
10.4230/LIPIcs.STACS.2024
article
LIPIcs, Volume 289, STACS 2024, Complete Volume
Beyersdorff, Olaf
1
https://orcid.org/0000-0002-2870-1648
Kanté, Mamadou Moustapha
2
https://orcid.org/0000-0003-1838-7744
Kupferman, Orna
3
https://orcid.org/0000-0003-4699-6117
Lokshtanov, Daniel
4
Friedrich Schiller University Jena, Germany
Université Clermont Auvergne, Clermont Auvergne INP, LIMOS, CNRS, Aubière, France
Hebrew University, Jerusalem, Israel
University of California Santa Barbara, CA, USA
LIPIcs, Volume 289, STACS 2024, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024/LIPIcs.STACS.2024.pdf
LIPIcs, Volume 289, STACS 2024, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
0:i
0:xx
10.4230/LIPIcs.STACS.2024.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Beyersdorff, Olaf
1
https://orcid.org/0000-0002-2870-1648
Kanté, Mamadou Moustapha
2
https://orcid.org/0000-0003-1838-7744
Kupferman, Orna
3
https://orcid.org/0000-0003-4699-6117
Lokshtanov, Daniel
4
Friedrich Schiller University Jena, Germany
Université Clermont Auvergne, Clermont Auvergne INP, LIMOS, CNRS, Aubière, France
Hebrew University, Jerusalem, Israel
University of California Santa Barbara, CA, USA
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.0/LIPIcs.STACS.2024.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
1:1
1:1
10.4230/LIPIcs.STACS.2024.1
article
Polynomial-Time Pseudodeterministic Constructions (Invited Talk)
Oliveira, Igor C.
1
https://orcid.org/0000-0003-4048-2385
University of Warwick, UK
A randomised algorithm for a search problem is pseudodeterministic if it produces a fixed canonical solution to the search problem with high probability. In their seminal work on the topic, Gat and Goldwasser (2011) posed as their main open problem whether prime numbers can be pseudodeterministically constructed in polynomial time.
We provide a positive solution to this question in the infinitely-often regime. In more detail, we give an unconditional polynomial-time randomised algorithm B such that, for infinitely many values of n, B(1ⁿ) outputs a canonical n-bit prime p_n with high probability. More generally, we prove that for every dense property Q of strings that can be decided in polynomial time, there is an infinitely-often pseudodeterministic polynomial-time construction of strings satisfying Q. This improves upon a subexponential-time pseudodeterministic construction of Oliveira and Santhanam (2017).
This talk will cover the main ideas behind these constructions and discuss their implications, such as the existence of infinitely many primes with succinct and efficient representations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.1/LIPIcs.STACS.2024.1.pdf
Pseudorandomness
Explicit Constructions
Pseudodeterministic Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
2:1
2:1
10.4230/LIPIcs.STACS.2024.2
article
The Role of Local Algorithms in Privacy (Invited Talk)
Raskhodnikova, Sofya
1
https://orcid.org/0000-0002-4902-050X
Department of Computer Science, Boston University, MA, USA
We will discuss research areas at the intersection of local algorithms and differential privacy. The main focus will be on using local Lipschitz filters to enable black-box differentially private queries to sensitive datasets. We will also cover new sublinear computational tasks arising in private data analysis. Finally, we will touch upon distributed models of privacy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.2/LIPIcs.STACS.2024.2.pdf
Sublinear algorithms
differential privacy
reconstruction of Lipschitz functions
local algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
3:1
3:1
10.4230/LIPIcs.STACS.2024.3
article
Structurally Tractable Graph Classes (Invited Talk)
Toruńczyk, Szymon
1
https://orcid.org/0000-0002-1130-9033
Institute of Informatics, University of Warsaw, Poland
Sparsity theory, initiated by Ossona de Mendez and Nešetřil, identifies those classes of sparse graphs that are tractable in various ways - algorithmically, combinatorially, and logically - as exactly the nowhere dense classes. An ongoing effort aims at generalizing sparsity theory to classes of graphs that are not necessarily sparse. Twin-width theory, developed by Bonnet, Thomassé and co-authors, is a step in that direction. A theory unifying the two is anticipated. It is conjectured that the relevant notion characterising dense graph classes that are tractable, generalising nowhere denseness and bounded twin-width, is the notion of a monadically dependent class, introduced by Shelah in model theory. I will survey the recent, rapid progress in the understanding of those classes, and of the related monadically stable classes. This development combines tools from structural graph theory, logic (finite and infinite model theory), and algorithms (parameterised algorithms and range search queries).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.3/LIPIcs.STACS.2024.3.pdf
Structural graph theory
Monadic dependence
monadic NIP
twin-width
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
4:1
4:15
10.4230/LIPIcs.STACS.2024.4
article
Max Weight Independent Set in Sparse Graphs with No Long Claws
Abrishami, Tara
1
Chudnovsky, Maria
2
https://orcid.org/0000-0002-8920-4944
Pilipczuk, Marcin
3
https://orcid.org/0000-0001-5680-7397
Rzążewski, Paweł
4
3
https://orcid.org/0000-0001-7696-3848
Department of Mathematics, University of Hamburg, Germany
Princeton University, NJ, USA
Institute of Informatics, Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland
Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
We revisit the recent polynomial-time algorithm for the Max Weight Independent Set (MWIS) problem in bounded-degree graphs that do not contain a fixed graph whose every component is a subdivided claw as an induced subgraph [Abrishami, Chudnovsky, Dibek, Rzążewski, SODA 2022].
First, we show that with an arguably simpler approach we can obtain a faster algorithm with running time n^{𝒪(Δ²)}, where n is the number of vertices of the instance and Δ is the maximum degree. Then we combine our technique with known results concerning tree decompositions and provide a polynomial-time algorithm for MWIS in graphs excluding a fixed graph whose every component is a subdivided claw as an induced subgraph, and a fixed biclique as a subgraph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.4/LIPIcs.STACS.2024.4.pdf
Max Weight Independent Set
subdivided claw
hereditary classes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
5:1
5:20
10.4230/LIPIcs.STACS.2024.5
article
Satisfiability of Context-Free String Constraints with Subword-Ordering and Transducers
Aiswarya, C.
1
2
https://orcid.org/0000-0002-4878-7581
Mal, Soumodev
1
https://orcid.org/0000-0001-5054-5664
Saivasan, Prakash
3
2
https://orcid.org/0000-0001-5060-0117
Chennai Mathematical Institute, India
CNRS, ReLaX, IRL 2000, Chennai, India
The Institute of Mathematical Sciences, HBNI, Chennai, India
We study the satisfiability of string constraints where context-free membership constraints may be imposed on variables. Additionally, a variable may be constrained to be a subword of a word obtained by shuffling variables and their transductions. The satisfiability problem is known to be undecidable even without rational transductions. It is known to be NExptime-complete without transductions if the subword relations between variables have no cyclic dependency between them. We show that the satisfiability problem stays decidable in this fragment even when rational transductions are added. It is 2NExptime-complete with context-free membership, and NExptime-complete with only regular membership. For the lower bound we prove a technical lemma that is of independent interest: The length of the shortest word in the intersection of a pushdown automaton (of size 𝒪(n)) and n finite-state automata (each of size 𝒪(n)) can be double exponential in n.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.5/LIPIcs.STACS.2024.5.pdf
satisfiability
subword
string constraints
context-free
transducers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
6:1
6:18
10.4230/LIPIcs.STACS.2024.6
article
On a Hierarchy of Spectral Invariants for Graphs
Arvind, V.
1
2
https://orcid.org/0000-0002-1988-7866
Fuhlbrück, Frank
3
https://orcid.org/0000-0001-6007-0922
Köbler, Johannes
3
https://orcid.org/0000-0002-1215-7270
Verbitsky, Oleg
3
https://orcid.org/0000-0002-9524-1901
The Institute of Mathematical Sciences (HBNI), Chennai, India
Chennai Mathematical Institute, India
Institut für Informatik, Humboldt-Universität zu Berlin, Germany
We consider a hierarchy of graph invariants that naturally extends the spectral invariants defined by Fürer (Lin. Alg. Appl. 2010) based on the angles formed by the set of standard basis vectors and their projections onto eigenspaces of the adjacency matrix. We provide a purely combinatorial characterization of this hierarchy in terms of the walk counts. This allows us to give a complete answer to Fürer’s question about the strength of his invariants in distinguishing non-isomorphic graphs in comparison to the 2-dimensional Weisfeiler-Leman algorithm, extending the recent work of Rattan and Seppelt (SODA 2023). As another application of the characterization, we prove that almost all graphs are determined up to isomorphism in terms of the spectrum and the angles, which is of interest in view of the long-standing open problem whether almost all graphs are determined by their eigenvalues alone. Finally, we describe the exact relationship between the hierarchy and the Weisfeiler-Leman algorithms for small dimensions, as well as some other important spectral characteristics of a graph such as the generalized and the main spectra.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.6/LIPIcs.STACS.2024.6.pdf
Graph Isomorphism
spectra of graphs
combinatorial refinement
strongly regular graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
7:1
7:19
10.4230/LIPIcs.STACS.2024.7
article
Computing Twin-Width Parameterized by the Feedback Edge Number
Balabán, Jakub
1
https://orcid.org/0000-0002-2475-8938
Ganian, Robert
2
https://orcid.org/0000-0002-7762-8045
Rocton, Mathis
2
https://orcid.org/0000-0002-7158-9022
Faculty of Informatics, Masaryk University, Brno, Czech Republic
Algorithms and Complexity Group, TU Wien, Austria
The problem of whether and how one can compute the twin-width of a graph - along with an accompanying contraction sequence - lies at the forefront of the area of algorithmic model theory. While significant effort has been aimed at obtaining a fixed-parameter approximation for the problem when parameterized by twin-width, here we approach the question from a different perspective and consider whether one can obtain (near-)optimal contraction sequences under a larger parameterization, notably the feedback edge number k. As our main contributions, under this parameterization we obtain (1) a linear bikernel for the problem of either computing a 2-contraction sequence or determining that none exists and (2) an approximate fixed-parameter algorithm which computes an 𝓁-contraction sequence (for an arbitrary specified 𝓁) or determines that the twin-width of the input graph is at least 𝓁. These algorithmic results rely on newly obtained insights into the structure of optimal contraction sequences, and as a byproduct of these we also slightly tighten the bound on the twin-width of graphs with small feedback edge number.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.7/LIPIcs.STACS.2024.7.pdf
twin-width
parameterized complexity
kernelization
feedback edge number
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
8:1
8:18
10.4230/LIPIcs.STACS.2024.8
article
Faster Graph Algorithms Through DAG Compression
Bannach, Max
1
https://orcid.org/0000-0002-6475-5512
Marwitz, Florian Andreas
2
https://orcid.org/0000-0002-9683-5250
Tantau, Till
3
https://orcid.org/0000-0002-3946-8028
European Space Agency, Advanced Concepts Team, Noordwijk, The Netherlands
Institute of Information Systems, Universität zu Lübeck, Germany
Institute for Theoretical Computer Science, Universität zu Lübeck, Germany
The runtime of graph algorithms such as depth-first search or Dijkstra’s algorithm is dominated by the fact that all edges of the graph need to be processed at least once, leading to prohibitive runtimes for large, dense graphs. We introduce a simple data structure for storing graphs (and more general structures) in a compressed manner using directed acyclic graphs (dags). We then show that numerous standard graph problems can be solved in time linear in the size of the dag compression of a graph, rather than in the number of edges of the graph. Crucially, many dense graphs, including but not limited to graphs of bounded twinwidth, have a dag compression of size linear in the number of vertices rather than edges. This insight allows us to improve the previous best results for the runtime of standard algorithms from quasi-linear to linear for the large class of graphs of bounded twinwidth, which includes all cographs, graphs of bounded treewidth, or graphs of bounded cliquewidth.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.8/LIPIcs.STACS.2024.8.pdf
graph compression
graph traversal
twinwidth
parameterized algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
9:1
9:22
10.4230/LIPIcs.STACS.2024.9
article
Testing Equivalence to Design Polynomials
Baraskar, Omkar
1
Dewan, Agrim
1
Saha, Chandan
1
Indian Institute of Science, Bengaluru, India
An n-variate polynomial g of degree d is a (n,d,t) design polynomial if the degree of the gcd of every pair of monomials of g is at most t-1. The power symmetric polynomial PSym_{n,d} : = ∑_{i = 1}ⁿ x^d_i and the sum-product polynomial SP_{s,d} : = ∑_{i = 1}^{s}∏_{j = 1}^{d} x_{i,j} are instances of design polynomials for t = 1. Another example is the Nisan-Wigderson design polynomial NW, which has been used extensively to prove various arithmetic circuit lower bounds. Given black-box access to an n-variate, degree-d polynomial f(𝐱) ∈ 𝔽[𝐱], how fast can we check if there exist an A ∈ GL(n, 𝔽) and a 𝐛 ∈ 𝔽ⁿ such that f(A𝐱+𝐛) is a (n,d,t) design polynomial? We call this problem "testing equivalence to design polynomials", or alternatively, "equivalence testing for design polynomials".
In this work, we present a randomized algorithm that finds (A, 𝐛) such that f(A𝐱+𝐛) is a (n,d,t) design polynomial, if such A and 𝐛 exist, provided t ≤ d/3. The algorithm runs in (nd)^O(t) time and works over any sufficiently large 𝔽 of characteristic 0 or > d. As applications of this test, we show two results - one is structural and the other is algorithmic. The structural result establishes a polynomial-time equivalence between the graph isomorphism problem and the polynomial equivalence problem for design polynomials. The algorithmic result implies that Patarin’s scheme (EUROCRYPT 1996) can be broken in quasi-polynomial time if a random sparse polynomial is used in the key generation phase.
We also give an efficient learning algorithm for n-variate random affine projections of multilinear degree-d design polynomials, provided n ≥ d⁴. If one obtains an analogous result under the weaker assumption "n ≥ d^ε, for any ε > 0", then the NW family is not VNP-complete unless there is a VNP-complete family whose random affine projections are learnable. It is not known if random affine projections of the permanent are learnable.
The above algorithms are obtained by using the vector space decomposition framework, introduced by Kayal and Saha (STOC 2019) and Garg, Kayal and Saha (FOCS 2020), for learning non-degenerate arithmetic circuits. A key technical difference between the analysis in the papers by Garg, Kayal and Saha (FOCS 2020) and Bhargava, Garg, Kayal and Saha (RANDOM 2022) and the analysis here is that a certain adjoint algebra, which turned out to be trivial (i.e., diagonalizable) in prior works, is non-trivial in our case. However, we show that the adjoint arising here is triangularizable which then helps in carrying out the vector space decomposition step.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.9/LIPIcs.STACS.2024.9.pdf
Polynomial equivalence
design polynomials
graph isomorphism
vector space decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
10:1
10:19
10.4230/LIPIcs.STACS.2024.10
article
Expressive Quantale-Valued Logics for Coalgebras: An Adjunction-Based Approach
Beohar, Harsh
1
https://orcid.org/0000-0001-5256-1334
Gurke, Sebastian
2
https://orcid.org/0009-0008-4343-1384
König, Barbara
2
https://orcid.org/0000-0002-4193-2889
Messing, Karla
2
https://orcid.org/0009-0003-1019-6449
Forster, Jonas
3
https://orcid.org/0000-0002-5050-2565
Schröder, Lutz
3
https://orcid.org/0000-0002-3146-5906
Wild, Paul
3
https://orcid.org/0000-0001-9796-9675
University of Sheffield, UK
Universität Duisburg-Essen, Germany
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
We address the task of deriving fixpoint equations from modal logics characterizing behavioural equivalences and metrics (summarized under the term conformances). We rely on an earlier work that obtains Hennessy-Milner theorems as corollaries to a fixpoint preservation property along Galois connections between suitable lattices. We instantiate this to the setting of coalgebras, in which we spell out the compatibility property ensuring that we can derive a behaviour function whose greatest fixpoint coincides with the logical conformance. We then concentrate on the linear-time case, for which we study coalgebras based on the machine functor living in Eilenberg-Moore categories, a scenario for which we obtain a particularly simple logic and fixpoint equation. The theory is instantiated to concrete examples, both in the branching-time case (bisimilarity and behavioural metrics) and in the linear-time case (trace equivalences and trace distances).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.10/LIPIcs.STACS.2024.10.pdf
modal logics
coalgebras
behavioural equivalences
behavioural metrics
linear-time semantics
Eilenberg-Moore categories
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
11:1
11:19
10.4230/LIPIcs.STACS.2024.11
article
A Characterization of Efficiently Compilable Constraint Languages
Berkholz, Christoph
1
https://orcid.org/0000-0002-3554-517X
Mengel, Stefan
2
https://orcid.org/0000-0003-1386-8784
Wilhelm, Hermann
1
https://orcid.org/0009-0004-2015-0855
Technische Universität Ilmenau, Germany
Univ. Artois, CNRS, Centre de Recherche en Informatique de Lens (CRIL), France
A central task in knowledge compilation is to compile a CNF-SAT instance into a succinct representation format that allows efficient operations such as testing satisfiability, counting, or enumerating all solutions. Useful representation formats studied in this area range from ordered binary decision diagrams (OBDDs) to circuits in decomposable negation normal form (DNNFs).
While it is known that there exist CNF formulas that require exponential size representations, the situation is less well studied for types of constraints other than Boolean disjunctive clauses. The constraint satisfaction problem (CSP) is a powerful framework that generalizes CNF-SAT by allowing arbitrary sets of constraints over any finite domain. The main goal of our work is to understand for which type of constraints (also called the constraint language) it is possible to efficiently compute representations of polynomial size. We answer this question completely and prove two tight characterizations of efficiently compilable constraint languages, depending on whether the target format is structured.
We first identify the combinatorial property of "strong blockwise decomposability" and show that if a constraint language has this property, we can compute DNNF representations of linear size. For all other constraint languages we construct families of CSP-instances that provably require DNNFs of exponential size. For a subclass of "strong uniformly blockwise decomposable" constraint languages we obtain a similar dichotomy for structured DNNFs. In fact, strong (uniform) blockwise decomposability even allows efficient compilation into multi-valued analogs of OBDDs and FBDDs, respectively. Thus, we get complete characterizations for all knowledge compilation classes between O(B)DDs and DNNFs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.11/LIPIcs.STACS.2024.11.pdf
constraint satisfaction
knowledge compilation
dichotomy
DNNF
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
12:1
12:17
10.4230/LIPIcs.STACS.2024.12
article
Modal Logic Is More Succinct Iff Bi-Implication Is Available in Some Form
Berkholz, Christoph
1
https://orcid.org/0000-0002-3554-517X
Kuske, Dietrich
1
Schwarz, Christian
1
Technische Universität Ilmenau, Germany
Is it possible to write significantly smaller formulae when using more Boolean operators in addition to the De Morgan basis (and, or, not)? For propositional logic, a negative answer was given by Pratt: every formula with additional operators can be translated to the De Morgan basis with only a polynomial increase in size.
Surprisingly, for modal logic the picture is different: we show that adding bi-implication allows one to write exponentially smaller formulae. Moreover, we provide a complete classification of finite sets of Boolean operators, showing that they are either of no help (they allow polynomial translations to the De Morgan basis) or can express properties as succinctly as modal logic with additional bi-implication. More precisely, these results are shown for the modal logic T (and therefore for K). We complement this result by showing that the modal logic S5 behaves like propositional logic: no additional Boolean operators make it possible to write significantly smaller formulae.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.12/LIPIcs.STACS.2024.12.pdf
succinctness
modal logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
13:1
13:12
10.4230/LIPIcs.STACS.2024.13
article
Temporalizing Digraphs via Linear-Size Balanced Bi-Trees
Bessy, Stéphane
1
Thomassé, Stéphan
2
Viennot, Laurent
3
LIRMM, Univ Montpellier, CNRS, Montpellier, France
Univ Lyon, EnsL, UCBL, CNRS, LIP, F-69342, LYON Cedex 07, France
Inria Paris, DI ENS, Paris, France
In a directed graph D on vertex set v₁,… ,v_n, a forward arc is an arc v_iv_j where i < j. A pair v_i,v_j is forward connected if there is a directed path from v_i to v_j consisting of forward arcs. In the Forward Connected Pairs Problem (FCPP), the input is a strongly connected digraph D, and the output is the maximum number of forward connected pairs in some vertex enumeration of D. We show that FCPP is in APX, as one can efficiently enumerate the vertices of D in order to achieve a quadratic number of forward connected pairs. For this, we construct a linear size balanced bi-tree T (an out-branching and an in-branching with the same size and the same root which are vertex disjoint in the sense that they share no vertex apart from their common root). The existence of such a T was left as an open problem (Brunelli, Crescenzi, Viennot, Networks 2023) motivated by the study of temporal paths in temporal networks. More precisely, T can be constructed in quadratic time (in the number of vertices) and has size at least n/3. The algorithm involves a particular depth-first search tree (Left-DFS) of independent interest, and shows that every strongly connected directed graph has a balanced separator which is a circuit. Remarkably, in the request version RFCPP of FCPP, where the input is a strong digraph D and a set of requests R consisting of pairs {x_i,y_i}, there is no constant c > 0 such that one can always find an enumeration realizing c⋅|R| forward connected pairs {x_i,y_i} (in either direction).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.13/LIPIcs.STACS.2024.13.pdf
digraph
temporal graph
temporalization
bi-tree
in-branching
out-branching
in-tree
out-tree
forward connected pairs
left-maximal DFS
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
14:1
14:18
10.4230/LIPIcs.STACS.2024.14
article
A Subquadratic Bound for Online Bisection
Bienkowski, Marcin
1
https://orcid.org/0000-0002-2453-7772
Schmid, Stefan
2
3
https://orcid.org/0000-0002-7798-1711
University of Wrocław, Poland
TU Berlin, Germany
Weizenbaum Institute, Berlin, Germany
The online bisection problem is a natural dynamic variant of the classic optimization problem, where one has to dynamically maintain a partition of n elements into two clusters of cardinality n/2. During runtime, an online algorithm is given a sequence of requests, each being a pair of elements: an inter-cluster request costs one unit while an intra-cluster one is free. The algorithm may change the partition, paying a unit cost for each element that changes its cluster.
This natural problem admits a simple deterministic O(n²)-competitive algorithm [Avin et al., DISC 2016]. While several significant improvements over this result have been obtained since the original work, all of them either limit the generality of the input or assume some form of resource augmentation (e.g., larger clusters). Moreover, the algorithm of Avin et al. achieves the best known competitive ratio even if randomization is allowed.
In this paper, we present the first randomized online algorithm that breaks this natural quadratic barrier and achieves a competitive ratio of Õ(n^{23/12}) without resource augmentation and for an arbitrary sequence of requests.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.14/LIPIcs.STACS.2024.14.pdf
Bisection
Graph Partitioning
online balanced Repartitioning
online Algorithms
competitive Analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
15:1
15:19
10.4230/LIPIcs.STACS.2024.15
article
An Improved Approximation Algorithm for Dynamic Minimum Linear Arrangement
Bienkowski, Marcin
1
https://orcid.org/0000-0002-2453-7772
Even, Guy
2
https://orcid.org/0000-0001-5407-330X
University of Wrocław, Poland
Tel Aviv University, Israel
The dynamic offline linear arrangement problem deals with reordering n elements subject to a sequence of edge requests. The input consists of a sequence of m edges (i.e., unordered pairs of elements). The output is a sequence of permutations (i.e., bijective mapping of the elements to n equidistant points). In step t, the order of the elements is changed to the t-th permutation, and then the t-th request is served. The cost of the output consists of two parts per step: request cost and rearrangement cost. The former is the current distance between the endpoints of the request, while the latter is proportional to the number of adjacent element swaps required to move from one permutation to the consecutive permutation. The goal is to find a minimum cost solution.
We present a deterministic O(log n log log n)-approximation algorithm for this problem, improving over a randomized O(log² n)-approximation by Olver et al. [Neil Olver et al., 2018]. Our algorithm is based on first solving spreading-metric LP relaxation on a time-expanded graph, applying a tree decomposition on the basis of the LP solution, and finally converting the tree decomposition to a sequence of permutations. The techniques we employ are general and have the potential to be useful for other dynamic graph optimization problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.15/LIPIcs.STACS.2024.15.pdf
Minimum Linear Arrangement
dynamic Variant
Optimization Problems
Graph Problems
approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
16:1
16:21
10.4230/LIPIcs.STACS.2024.16
article
Gapped String Indexing in Subquadratic Space and Sublinear Query Time
Bille, Philip
1
https://orcid.org/0000-0002-1120-5154
Gørtz, Inge Li
1
https://orcid.org/0000-0002-8322-4952
Lewenstein, Moshe
2
https://orcid.org/0000-0002-8272-244X
Pissis, Solon P.
3
4
https://orcid.org/0000-0002-1445-1932
Rotenberg, Eva
1
https://orcid.org/0000-0001-5853-7909
Steiner, Teresa Anna
1
https://orcid.org/0000-0003-1078-4075
Technical University of Denmark, Lyngby, Denmark
Bar-Ilan University, Ramat-Gan, Israel
CWI, Amsterdam, The Netherlands
Vrije Universiteit, Amsterdam, The Netherlands
In Gapped String Indexing, the goal is to compactly represent a string S of length n such that for any query consisting of two strings P₁ and P₂, called patterns, and an integer interval [α, β], called gap range, we can quickly find occurrences of P₁ and P₂ in S with distance in [α, β]. Gapped String Indexing is a central problem in computational biology and text mining and has thus received significant research interest, including parameterized and heuristic approaches. Despite this interest, the best-known time-space trade-offs for Gapped String Indexing are the straightforward 𝒪(n) space and 𝒪(n+ occ) query time or Ω(n²) space and Õ(|P₁| + |P₂| + occ) query time.
We break through this barrier obtaining the first interesting trade-offs with polynomially subquadratic space and polynomially sublinear query time. In particular, we show that, for every 0 ≤ δ ≤ 1, there is a data structure for Gapped String Indexing with either Õ(n^{2-δ/3}) or Õ(n^{3-2δ}) space and Õ(|P₁| + |P₂| + n^{δ}⋅ (occ+1)) query time, where occ is the number of reported occurrences.
As a new fundamental tool towards obtaining our main result, we introduce the Shifted Set Intersection problem: preprocess a collection of sets S₁, …, S_k of integers such that for any query consisting of three integers i,j,s, we can quickly output YES if and only if there exist a ∈ S_i and b ∈ S_j with a+s = b. We start by showing that the Shifted Set Intersection problem is equivalent to the indexing variant of 3SUM (3SUM Indexing) [Golovnev et al., STOC 2020]. We then give a data structure for Shifted Set Intersection with gaps, which entails a solution to the Gapped String Indexing problem. Furthermore, we enhance our data structure for deciding Shifted Set Intersection, so that we can support the reporting variant of the problem, i.e., outputting all certificates in the affirmative case. Via the obtained equivalence to 3SUM Indexing, we thus give new improved data structures for the reporting variant of 3SUM Indexing, and we show how this improves upon the state-of-the-art solution for Jumbled Indexing [Chan and Lewenstein, STOC 2015] for any alphabet of constant size σ > 5.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.16/LIPIcs.STACS.2024.16.pdf
data structures
string indexing
indexing with gaps
two patterns
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
17:1
17:18
10.4230/LIPIcs.STACS.2024.17
article
Contributions to the Domino Problem: Seeding, Recurrence and Satisfiability
Bitar, Nicolás
1
https://orcid.org/0000-0002-3460-9442
Université Paris-Saclay, CNRS, LISN, 91190 Gif-sur-Yvette, France
We study the seeded domino problem, the recurring domino problem and the k-SAT problem on finitely generated groups. These problems are generalizations of their original versions on ℤ² that were shown to be undecidable using the domino problem. We show that the seeded and recurring domino problems on a group are invariant under changes in the generating set, are many-one reduced from the respective problems on subgroups, and are positive equivalent to the problems on finite index subgroups. This leads to showing that the recurring domino problem is decidable for free groups. Coupled with the invariance properties, we conjecture that the only groups in which the seeded and recurring domino problems are decidable are virtually free groups. In the case of the k-SAT problem, we introduce a new generalization that is compatible with decision problems on finitely generated groups. We show that the subgroup membership problem many-one reduces to the 2-SAT problem, that in certain cases the k-SAT problem many-one reduces to the domino problem, and finally that the domino problem reduces to 3-SAT for the class of scalable groups.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.17/LIPIcs.STACS.2024.17.pdf
Tilings
Domino problem
SAT
Computability
Finitely generated groups
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
18:1
18:17
10.4230/LIPIcs.STACS.2024.18
article
Removable Online Knapsack and Advice
Böckenhauer, Hans-Joachim
1
https://orcid.org/0000-0001-9164-3674
Frei, Fabian
2
https://orcid.org/0000-0002-1368-3205
Rossmanith, Peter
3
https://orcid.org/0000-0003-0177-8028
Department of Computer Science, ETH Zürich, Switzerland
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
Department of Computer Science, RWTH Aachen, Germany
In the proportional knapsack problem, we are given a knapsack of some capacity and a set of variably sized items. The goal is to pack a selection of these items that fills the knapsack as much as possible. The online version of this problem reveals the items and their sizes not all at once but one by one. For each item, the algorithm has to decide immediately whether to pack it or not. We consider a natural variant of this online knapsack problem, which has been coined removable knapsack. It differs from the classical variant by allowing the removal of any packed item from the knapsack. Repacking is impossible, however: Once an item is removed, it is gone for good.
We analyze the advice complexity of this problem. It measures how many advice bits an omniscient oracle needs to provide for an online algorithm to reach any given competitive ratio, which, understood in its strict sense, is just the algorithm’s approximation factor. The online knapsack problem is known for its peculiar advice behavior involving three jumps in competitivity. We show that the advice complexity of the version with removability is quite different but just as interesting: The competitivity starts from the golden ratio when no advice is given. It then drops to 1+ε already for a constant amount of advice, whereas the classical version requires logarithmic advice for this. Removability comes as no relief to the perfectionist, however: Optimality still requires linear advice as before. These results are particularly noteworthy from a structural viewpoint for the exceptionally slow transition from near-optimality to optimality.
Our most important and demanding result shows that the general knapsack problem, which allows an item’s value to differ from its size, exhibits a similar behavior for removability, but with an even more pronounced jump from an unbounded competitive ratio to near-optimality within just constantly many advice bits. This is a unique behavior among the problems considered in the literature so far.
An advice analysis is interesting in its own right, as it allows us to measure the information content of a problem and leads to structural insights. But it also provides insurmountable lower bounds, applicable to any kind of additional information about the instances, including predictions provided by machine-learning algorithms and artificial intelligence. Unexpectedly, advice algorithms are useful in various real-life situations, too. For example, they provide smart strategies for cooperation in winner-take-all competitions, where several participants pool together to implement different strategies and share the obtained prize. Further illustrating the versatility of our advice-complexity bounds, our results automatically improve some of the best known lower bounds on the competitive ratio for removable knapsack with randomization. The presented advice algorithms also automatically yield deterministic algorithms for established deterministic models such as knapsack with a resource buffer and various problems with more than one knapsack. In their seminal paper introducing removability to the knapsack problem, Iwama and Taketomi have indeed proposed a multiple knapsack problem for which we can establish a one-to-one correspondence with the advice model; this paper therefore even provides a comprehensive analysis for this up until now neglected problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.18/LIPIcs.STACS.2024.18.pdf
Removable Online Knapsack
Multiple Knapsack
Advice Analysis
Advice Applications
Machine Learning and AI
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
19:1
19:20
10.4230/LIPIcs.STACS.2024.19
article
The Complexity of Homomorphism Reconstructibility
Böker, Jan
1
https://orcid.org/0000-0003-4584-121X
Härtel, Louis
1
https://orcid.org/0009-0004-3446-5874
Runde, Nina
1
https://orcid.org/0009-0000-4547-1023
Seppelt, Tim
1
https://orcid.org/0000-0002-6447-0568
Standke, Christoph
1
https://orcid.org/0000-0002-3034-730X
RWTH Aachen University, Germany
Representing graphs by their homomorphism counts has led to the beautiful theory of homomorphism indistinguishability in recent years. Moreover, homomorphism counts have promising applications in database theory and machine learning, where one would like to answer queries or classify graphs solely based on the representation of a graph G as a finite vector of homomorphism counts from some fixed finite set of graphs to G. We study the computational complexity of arguably the most fundamental computational problem associated with these representations, the homomorphism reconstructability problem: given a finite sequence of graphs and a corresponding vector of natural numbers, decide whether there exists a graph G that realises the given vector as the homomorphism counts from the given graphs.
We show that this problem yields a natural example of an NP^#𝖯-hard problem, which still can be NP-hard when restricted to a fixed number of input graphs of bounded treewidth and a fixed input vector of natural numbers, or alternatively, when restricted to a finite input set of graphs. We further show that, when restricted to a finite input set of graphs and given an upper bound on the order of the graph G as additional input, the problem cannot be NP-hard unless 𝖯 = NP. For this regime, we obtain partial positive results. We also investigate the problem’s parameterised complexity and provide fpt-algorithms for the case that a single graph is given and that multiple graphs of the same order with subgraph instead of homomorphism counts are given.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.19/LIPIcs.STACS.2024.19.pdf
graph homomorphism
counting complexity
parameterised complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
20:1
20:19
10.4230/LIPIcs.STACS.2024.20
article
Solving Discontinuous Initial Value Problems with Unique Solutions Is Equivalent to Computing over the Transfinite
Bournez, Olivier
1
https://orcid.org/0000-0002-9218-1130
Gozzi, Riccardo
1
2
https://orcid.org/0009-0007-0210-8508
École polytechnique, LIX, Paris, France
University Paris-Est Créteil Val de Marne, LACL, Paris, France
We study a precise class of dynamical systems that we call solvable ordinary differential equations. We prove that analog systems mathematically ruled by solvable ordinary differential equations can be used for transfinite computation, solving tasks such as the halting problem for Turing machines and any Turing jump of the halting problem in the hyperarithmetical hierarchy. We prove that the computational power of such analog systems is exactly that of transfinite computations of the hyperarithmetical hierarchy.
It has been proved recently that polynomial ordinary differential equations correspond unexpectedly naturally to Turing machines. Our results show that the more general class of solvable ordinary differential equations exhibited here corresponds, just as unexpectedly, to transfinite computations in an equally natural way. From a broad philosophical point of view, our results support the claim that the question of whether such analog systems can be used to solve intractable problems (in terms of complexity for polynomial systems and of computability for solvable systems) is provably related to the question of the relations between mathematical models, models of physics and our real world.
More technically, we study a precise class of dynamical systems: bounded initial value problems involving ordinary differential equations with a unique solution. We show that the solution of these systems can still be obtained analytically even in the presence of discontinuous dynamics once we carefully select the conditions that describe how discontinuities are distributed in the domain. We call the class of right-hand terms respecting these natural and simple conditions the class of solvable ordinary differential equations. We prove that there is a method for obtaining the solution of such systems based on transfinite recursion and taking at most a countable number of steps. We explain the relevance of these systems by providing several natural examples and showcasing the fact that these solutions can be used to perform limit computations and solve tasks such as the halting problem for Turing machines and any Turing jump of the halting problem in the hyperarithmetical hierarchy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.20/LIPIcs.STACS.2024.20.pdf
Analog models
computability
transfinite computations
dynamical systems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
21:1
21:18
10.4230/LIPIcs.STACS.2024.21
article
Local Certification of Local Properties: Tight Bounds, Trade-Offs and New Parameters
Bousquet, Nicolas
1
https://orcid.org/0000-0003-0170-0503
Feuilloley, Laurent
2
https://orcid.org/0000-0002-3994-0898
Zeitoun, Sébastien
2
https://orcid.org/0009-0003-2675-8581
Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205 F-69622 Villeurbanne, France
Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69622 Villeurbanne, France
Local certification is a distributed mechanism enabling the nodes of a network to check the correctness of the current configuration, thanks to small pieces of information called certificates. For many classic global properties, like checking the acyclicity of the network, the optimal size of the certificates depends on the size of the network, n. In this paper, we focus on properties for which the size of the certificates does not depend on n but on other parameters.
We focus on three such important properties and prove tight bounds for all of them. Namely, we prove that the optimal certification size is: Θ(log k) for k-colorability (and even exactly ⌈log k⌉ bits in the anonymous model, while previous works had only proved a 2-bit lower bound); (1/2)log t + o(log t) for dominating sets at distance t (an unexpected and tighter-than-usual bound); and Θ(log Δ) for perfect matching in graphs of maximum degree Δ (the first non-trivial bound parameterized by Δ). We also prove some surprising upper bounds; for example, certifying the existence of a perfect matching in a planar graph can be done with only two bits. In addition, we explore various specific cases for these properties, in particular improving our understanding of the trade-off between locality of the verification and certificate size.
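For intuition on the ⌈log k⌉-bit certificates for k-colorability, a natural scheme (sketched here under the assumption that each node's certificate is simply its own color) lets every node check locally that its color is valid and differs from its neighbors':

```python
import math

# Hypothetical sketch of a proof-labeling scheme for k-colorability: the
# certificate of a node is its color, encoded in ceil(log2 k) bits.
def verify_coloring(adj, cert, k):
    """Each node accepts iff its certificate is a color in [0, k) that
    differs from all neighbors'; the graph is accepted iff all accept."""
    bits = math.ceil(math.log2(k)) if k > 1 else 1
    for v, neighbors in adj.items():
        c = cert[v]
        if not (0 <= c < k):               # must encode a color
            return False
        assert c < 2 ** bits               # fits in ceil(log k) bits
        if any(cert[u] == c for u in neighbors):
            return False                   # a neighbor shares the color
    return True

# A triangle is 3-colorable: certificates 0, 1, 2 are accepted.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(verify_coloring(adj, {0: 0, 1: 1, 2: 2}, 3))  # True
print(verify_coloring(adj, {0: 0, 1: 0, 2: 2}, 3))  # False
```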
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.21/LIPIcs.STACS.2024.21.pdf
Local certification
local properties
proof-labeling schemes
locally checkable proofs
optimal certification size
colorability
dominating set
perfect matching
fault-tolerance
graph structure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
22:1
22:19
10.4230/LIPIcs.STACS.2024.22
article
Spectral Approach to the Communication Complexity of Multi-Party Key Agreement
Caillat-Grenier, Geoffroy
1
https://orcid.org/0000-0003-4171-0407
Romashchenko, Andrei
1
https://orcid.org/0000-0001-7723-7880
LIRMM, University of Montpellier, CNRS, Montpellier, France
We propose a linear algebraic method, rooted in the spectral properties of graphs, that can be used to prove lower bounds in communication complexity. Our proof technique effectively marries spectral bounds with information-theoretic inequalities. The key insight is the observation that, in specific settings, even when data sets X and Y are closely correlated and have high mutual information, the owner of X cannot convey a reasonably short message that maintains substantial mutual information with Y. In essence, from the perspective of the owner of Y, any sufficiently brief message m = m(X) would appear nearly indistinguishable from a random bit sequence.
We employ this argument in several problems of communication complexity. Our main result concerns cryptographic protocols. We establish a lower bound for communication complexity of multi-party secret key agreement with unconditional, i.e., information-theoretic security. Specifically, for one-round protocols (simultaneous messages model) of secret key agreement with three participants we obtain an asymptotically tight lower bound. This bound implies optimality of the previously known omniscience communication protocol (this result applies to a non-interactive secret key agreement with three parties and input data sets with an arbitrary symmetric information profile).
We consider communication problems in one-shot scenarios when the parties’ inputs are not produced by any i.i.d. sources, and there are no ergodicity assumptions on the input data. In this setting, we found it natural to present our results using the framework of Kolmogorov complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.22/LIPIcs.STACS.2024.22.pdf
communication complexity
Kolmogorov complexity
information-theoretic cryptography
multiparty secret key agreement
expander mixing lemma
information inequalities
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
23:1
23:19
10.4230/LIPIcs.STACS.2024.23
article
Fault-tolerant k-Supplier with Outliers
Chakrabarty, Deeparnab
1
Cote, Luc
2
https://orcid.org/0009-0005-3220-7289
Sarkar, Ankita
1
https://orcid.org/0000-0001-6787-7286
Department of Computer Science, Dartmouth College, Hanover, NH, USA
Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
We present approximation algorithms for the Fault-tolerant k-Supplier with Outliers (FkSO) problem. This is a common generalization of two known problems, k-Supplier with Outliers and Fault-tolerant k-Supplier, each of which generalizes the well-known k-Supplier problem. In the k-Supplier problem the goal is to serve n clients C by opening k facilities from a set of possible facilities F; the objective function is the farthest that any client must travel to access an open facility. In FkSO, each client v has a fault-tolerance 𝓁_v, and now desires 𝓁_v facilities to serve it; so each client v’s contribution to the objective function is now its distance to the 𝓁_v^th closest open facility. Furthermore, we are allowed to choose m clients that we will serve, and only those clients contribute to the objective function, while the remaining n-m are considered outliers.
Our main result is a (4t-1)-approximation for the FkSO problem, where t is the number of distinct values of 𝓁_v that appear in the instance. At t = 1, i.e. in the case where the 𝓁_v’s are uniformly some 𝓁, this yields a 3-approximation, improving upon the 11-approximation given for the uniform case by Inamdar and Varadarajan [2020], who also introduced the problem. Our result for the uniform case matches tight 3-approximations that exist for k-Supplier, k-Supplier with Outliers, and Fault-tolerant k-Supplier.
Our key technical contribution is an application of the round-or-cut schema to FkSO. Guided by an LP relaxation, we reduce to a simpler optimization problem, which we can solve to obtain distance bounds for the "round" step, and valid inequalities for the "cut" step. By varying how we reduce to the simpler problem, we get varying distance bounds - we include a variant that gives a (2^t + 1)-approximation, which is better for t ∈ {2,3}. In addition, for t = 1, we give a more straightforward application of round-or-cut, yielding a 3-approximation that is much simpler than our general algorithm.
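The FkSO objective described above can be made concrete with a small evaluator for a fixed solution (an illustrative sketch with hypothetical names; the paper's contribution is the approximation algorithm, not this helper):

```python
# Evaluate the FkSO objective for fixed open facilities: each served client
# contributes its distance to its l_v-th closest open facility, and only the
# m best-served clients count; the rest are outliers.
def fkso_objective(dist, open_facilities, fault_tol, m):
    """dist[v][f]: client-to-facility distances; fault_tol[v] = l_v."""
    costs = []
    for v, lv in enumerate(fault_tol):
        ds = sorted(dist[v][f] for f in open_facilities)
        if len(ds) < lv:
            return float("inf")        # cannot serve v with l_v facilities
        costs.append(ds[lv - 1])       # distance to the l_v-th closest
    costs.sort()
    return max(costs[:m]) if m > 0 else 0  # only the m best-served clients

dist = [[1, 5, 9], [2, 2, 7], [8, 3, 4]]
# Open facilities 0 and 1, every l_v = 1, serve m = 2 of the 3 clients:
# per-client costs are 1, 2, 3; dropping the worst as an outlier gives 2.
print(fkso_objective(dist, [0, 1], [1, 1, 1], 2))  # 2
```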
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.23/LIPIcs.STACS.2024.23.pdf
Clustering
approximation algorithms
round-or-cut
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
24:1
24:22
10.4230/LIPIcs.STACS.2024.24
article
Approximate Circular Pattern Matching Under Edit Distance
Charalampopoulos, Panagiotis
1
https://orcid.org/0000-0002-6024-1557
Pissis, Solon P.
2
3
https://orcid.org/0000-0002-1445-1932
Radoszewski, Jakub
4
https://orcid.org/0000-0002-0067-6401
Rytter, Wojciech
4
https://orcid.org/0000-0002-9162-6724
Waleń, Tomasz
4
https://orcid.org/0000-0002-7369-3309
Zuba, Wiktor
2
https://orcid.org/0000-0002-1988-3507
Birkbeck, University of London, UK
CWI, Amsterdam, The Netherlands
Vrije Universiteit, Amsterdam, The Netherlands
University of Warsaw, Poland
In the k-Edit Circular Pattern Matching (k-Edit CPM) problem, we are given a length-n text T, a length-m pattern P, and a positive integer threshold k, and we are to report all starting positions of the substrings of T that are at edit distance at most k from some cyclic rotation of P. In the decision version of the problem, we are to check if any such substring exists. Very recently, Charalampopoulos et al. [ESA 2022] presented 𝒪(nk²)-time and 𝒪(nk log³ k)-time solutions for the reporting and decision versions of k-Edit CPM, respectively. Here, we show that the reporting and decision versions of k-Edit CPM can be solved in 𝒪(n+(n/m) k⁶) time and 𝒪(n+(n/m) k⁵ log³ k) time, respectively, thus obtaining the first algorithms with a complexity of the type 𝒪(n+(n/m) poly(k)) for this problem. Notably, our algorithms run in 𝒪(n) time when m = Ω(k⁶) and are superior to the previous respective solutions when m = ω(k⁴). We provide a meta-algorithm that yields efficient algorithms in several other interesting settings, such as when the strings are given in a compressed form (as straight-line programs), when the strings are dynamic, or when we have a quantum computer.
We obtain our solutions by exploiting the structure of approximate circular occurrences of P in T, when T is relatively short w.r.t. P. Roughly speaking, either the starting positions of approximate occurrences of rotations of P form 𝒪(k⁴) intervals that can be computed efficiently, or some rotation of P is almost periodic (is at a small edit distance from a string with small period). Dealing with the almost periodic case is the most technically demanding part of this work; we tackle it using properties of locked fragments (originating from [Cole and Hariharan, SICOMP 2002]).
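For intuition, the k-Edit CPM decision problem admits a naive reference implementation (a hypothetical sketch, far slower than the algorithms in the paper): compare every candidate substring of T against every cyclic rotation of P.

```python
def edit_distance(a, b):
    # Standard Levenshtein DP: O(|a|*|b|) time, O(|b|) space.
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return prev[len(b)]

def k_edit_cpm_decision(T, P, k):
    # A substring within edit distance k of a length-m rotation has length
    # in [m - k, m + k]; brute-force over all of them.
    m = len(P)
    rotations = {P[i:] + P[:i] for i in range(m)}
    for start in range(len(T)):
        for length in range(max(0, m - k), m + k + 1):
            sub = T[start:start + length]
            if len(sub) < length:
                break                  # ran off the end of T
            if any(edit_distance(sub, R) <= k for R in rotations):
                return True
    return False

# "cdb" in T is within edit distance 1 of the rotation "cdab" of "abcd".
print(k_edit_cpm_decision("xxcdbxx", "abcd", 1))  # True
print(k_edit_cpm_decision("xxxxxxx", "abcd", 1))  # False
```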
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.24/LIPIcs.STACS.2024.24.pdf
circular pattern matching
approximate pattern matching
edit distance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
25:1
25:17
10.4230/LIPIcs.STACS.2024.25
article
Depth-3 Circuit Lower Bounds for k-OV
Choudhury, Tameem
1
https://orcid.org/0000-0002-5044-9717
Sreenivasaiah, Karteek
1
https://orcid.org/0000-0001-7396-3383
Department of Computer Science and Engineering, IIT Hyderabad, India
The 2-Orthogonal Vectors (2-OV) problem is the following: given two tuples A and B of n Boolean vectors, each of dimension d, decide if there exist vectors u ∈ A, and v ∈ B, such that u and v are orthogonal. This problem, and its generalization k-OV defined analogously for k tuples, are central problems in the area of fine-grained complexity. One of the major conjectures in fine-grained complexity is that k-OV cannot be solved by a randomised algorithm in n^{k-ε}poly(d) time for any constant ε > 0.
In this paper, we are interested in unconditional lower bounds against k-OV, but for weaker models of computation than the general Turing Machine. In particular, we are interested in circuit lower bounds to computing k-OV by Boolean circuit families of depth 3 of the form OR-AND-OR, or equivalently, a disjunction of CNFs.
We show that for all k ≤ d, any disjunction of t-CNFs computing k-OV requires size Ω((n/t)^k). In particular, when k is a constant, any disjunction of k-CNFs computing k-OV needs to use Ω(n^k) CNFs. This matches the brute-force construction, and for each fixed k > 2, this is the first unconditional Ω(n^k) lower bound against k-OV for a computation model that can compute it in size O(n^k). Our results partially resolve a conjecture by Kane and Williams [Daniel M. Kane and Richard Ryan Williams, 2019] (page 12, conjecture 10) about depth-3 AC⁰ circuits computing 2-OV.
As a secondary result, we show an exponential lower bound on the size of AND∘OR∘AND circuits computing 2-OV when d is very large. Since 2-OV reduces to k-OV by projections trivially, this lower bound works against k-OV as well.
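The 2-OV problem itself is simple to state in code; the following brute force runs in the O(n² d) time that the conjectured lower bound says cannot be beaten polynomially (an illustrative baseline, unrelated to the circuit constructions in the paper):

```python
from itertools import product

def two_ov(A, B):
    # True iff some u in A and v in B are orthogonal over the Booleans,
    # i.e. u[i] * v[i] == 0 in every coordinate.
    return any(all(x * y == 0 for x, y in zip(u, v))
               for u, v in product(A, B))

A = [(1, 0, 1), (1, 1, 0)]
print(two_ov(A, [(0, 1, 0), (1, 1, 1)]))  # (1,0,1) ⊥ (0,1,0): True
print(two_ov(A, [(1, 1, 1)]))             # no orthogonal pair: False
```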
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.25/LIPIcs.STACS.2024.25.pdf
fine grained complexity
k-OV
circuit lower bounds
depth-3 circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
26:1
26:19
10.4230/LIPIcs.STACS.2024.26
article
A Myhill-Nerode Theorem for Generalized Automata, with Applications to Pattern Matching and Compression
Cotumaccio, Nicola
1
2
https://orcid.org/0000-0002-1402-5298
Gran Sasso Science Institute, L'Aquila, Italy
Dalhousie University, Halifax, Canada
The model of generalized automata, introduced by Eilenberg in 1974, allows representing a regular language more concisely than conventional automata by allowing edges to be labeled not only with characters, but also with strings. Giammarresi and Montalbano introduced a notion of determinism for generalized automata [STACS 1995]. While generalized deterministic automata retain many properties of conventional deterministic automata, the uniqueness of a minimal generalized deterministic automaton is lost.
In the first part of the paper, we show that the lack of uniqueness can be explained by introducing a set 𝒲(𝒜) associated with a generalized automaton 𝒜. The set 𝒲(𝒜) is always trivially equal to the set of all prefixes of the language recognized by the automaton, if 𝒜 is a conventional automaton, but this need not be true for generalized automata. By fixing 𝒲(𝒜), we are able to derive for the first time a full Myhill-Nerode theorem for generalized automata, which contains the textbook Myhill-Nerode theorem for conventional automata as a degenerate case.
In the second part of the paper, we show that the set 𝒲(𝒜) leads to applications for pattern matching and data compression. Wheeler automata [TCS 2017, SODA 2020] are a popular class of automata that can be compactly stored using e log σ (1 + o(1)) + O(e) bits (e being the number of edges, σ being the size of the alphabet) in such a way that pattern matching queries can be solved in Õ(m) time (m being the length of the pattern). In the paper, we show how to extend these results to generalized automata. More precisely, a Wheeler generalized automaton can be stored using 𝔢 log σ (1 + o(1)) + O(e + rn) bits so that pattern matching queries can be solved in Õ(rm) time, where 𝔢 is the total length of all edge labels, r is the maximum length of an edge label and n is the number of states.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.26/LIPIcs.STACS.2024.26.pdf
Generalized Automata
Myhill-Nerode Theorem
Regular Languages
Wheeler Graphs
FM-index
Burrows-Wheeler Transform
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
27:1
27:16
10.4230/LIPIcs.STACS.2024.27
article
Nonnegativity Problems for Matrix Semigroups
D'Costa, Julian
1
https://orcid.org/0000-0003-2610-5241
Ouaknine, Joël
2
https://orcid.org/0000-0003-0031-9356
Worrell, James
1
https://orcid.org/0000-0001-8151-2443
Department of Computer Science, University of Oxford, UK
Max Planck Institute for Software Systems, Saarland Informatics Campus, Saarbrücken, Germany
The matrix semigroup membership problem asks, given square matrices M,M₁,…,M_k of the same dimension, whether M lies in the semigroup generated by M₁,…,M_k. It is classical that this problem is undecidable in general, but decidable in case M₁,…,M_k commute. In this paper we consider the problem of whether, given M₁,…,M_k, the semigroup generated by M₁,…,M_k contains a non-negative matrix. We show that in case M₁,…,M_k commute, this problem is decidable subject to Schanuel’s Conjecture. We show also that the problem is undecidable if the commutativity assumption is dropped. A key lemma in our decidability proof is a procedure to determine, given a matrix M, whether the sequence of matrices (Mⁿ)_{n = 0}^∞ is ultimately nonnegative. This answers a problem posed by S. Akshay [S. Akshay et al., 2022]. The latter result is in stark contrast to the notorious fact that it is not known how to determine, for any specific matrix index (i,j), whether the sequence (Mⁿ)_{i,j} is ultimately nonnegative. Indeed the latter is equivalent to the Ultimate Positivity Problem for linear recurrence sequences, a longstanding open problem.
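For intuition only, one can search for a nonnegative matrix in a finitely generated matrix semigroup by bounded enumeration of products; this is a semi-procedure that may fail to terminate with an answer (a hypothetical sketch — the paper's decidability result for the commutative case rests on Schanuel's Conjecture, not on enumeration):

```python
def matmul(A, B):
    # Plain n x n integer matrix product.
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def find_nonnegative(generators, max_len):
    # Breadth-first enumeration of products of the generators, by length.
    frontier = [tuple(map(tuple, G)) for G in generators]
    seen = set(frontier)
    for _ in range(max_len):
        for M in frontier:
            if all(x >= 0 for row in M for x in row):
                return M               # a nonnegative member, as a witness
        nxt = []
        for M in frontier:
            for G in generators:
                P = tuple(map(tuple, matmul(M, G)))
                if P not in seen:
                    seen.add(P)
                    nxt.append(P)
        frontier = nxt
    return None                        # inconclusive within the bound

M = [[-1, 0], [0, -1]]                 # M itself is negative, but M^2 = I
print(find_nonnegative([M], 3))        # ((1, 0), (0, 1))
```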
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.27/LIPIcs.STACS.2024.27.pdf
Decidability
Linear Recurrence Sequences
Schanuel’s Conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
28:1
28:16
10.4230/LIPIcs.STACS.2024.28
article
One n Remains to Settle the Tree Conjecture
Dippel, Jack
1
https://orcid.org/0000-0002-8087-3009
Vetta, Adrian
1
https://orcid.org/0000-0002-2213-4937
McGill University, Montreal, Canada
In the famous network creation game of Fabrikant et al. [Fabrikant et al., 2003] a set of agents play a game to build a connected graph. The n agents form the vertex set V of the graph and each vertex v ∈ V buys a set E_v of edges inducing a graph G = (V,⋃_{v∈V} E_v). The private objective of each vertex is to minimize the sum of its building cost (the cost of the edges it buys) plus its connection cost (the total distance from itself to every other vertex). Given a cost of α for each individual edge, a long-standing conjecture, called the tree conjecture, states that if α > n then every Nash equilibrium graph in the game is a spanning tree. After a plethora of work, it is known that the conjecture holds for any α > 3n-3. In this paper we prove the tree conjecture holds for α > 2n. This reduces by half the open range for α with only (n-3, 2n) remaining in order to settle the conjecture.
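The private objective in the abstract above can be sketched as a small cost evaluator (hypothetical helper names; α is the per-edge cost, and each vertex buys edges incident to itself):

```python
from collections import deque

def agent_cost(n, bought, v, alpha):
    """bought[u]: the set of vertices u buys an edge to (u's edge set E_u)."""
    adj = {u: set() for u in range(n)}
    for u, targets in bought.items():
        for w in targets:
            adj[u].add(w)
            adj[w].add(u)              # a bought edge is usable by everyone
    # BFS from v gives the connection cost (sum of distances).
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    connection = sum(dist[u] for u in range(n))   # assumes G is connected
    return alpha * len(bought.get(v, ())) + connection

# Path 0-1-2: vertex 0 buys edge {0,1}, vertex 1 buys edge {1,2}, alpha = 5.
print(agent_cost(3, {0: {1}, 1: {2}}, 0, 5))  # 5*1 + (0+1+2) = 8
print(agent_cost(3, {0: {1}, 1: {2}}, 2, 5))  # 5*0 + (2+1+0) = 3
```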
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.28/LIPIcs.STACS.2024.28.pdf
Algorithmic Game Theory
Network Creation Games
Tree Conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
29:1
29:19
10.4230/LIPIcs.STACS.2024.29
article
Semënov Arithmetic, Affine VASS, and String Constraints
Draghici, Andrei
1
https://orcid.org/0009-0000-9308-1169
Haase, Christoph
1
https://orcid.org/0000-0002-5452-936X
Manea, Florin
2
https://orcid.org/0000-0001-6094-3324
Department of Computer Science, University of Oxford, UK
Computer Science Department and Campus-Institut Data Science, Göttingen University, Germany
We study extensions of Semënov arithmetic, the first-order theory of the structure ⟨ℕ,+,2^x⟩. It is well-known that this theory becomes undecidable when extended with regular predicates over tuples of number strings, such as the Büchi V₂-predicate. We therefore restrict ourselves to the existential theory of Semënov arithmetic and show that this theory is decidable in EXPSPACE when extended with arbitrary regular predicates over tuples of number strings. Our approach relies on a reduction to the language emptiness problem for a restricted class of affine vector addition systems with states, which we show decidable in EXPSPACE. As an application of our result, we settle an open problem from the literature and show decidability of a class of string constraints involving length constraints.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.29/LIPIcs.STACS.2024.29.pdf
arithmetic theories
Büchi arithmetic
exponentiation
vector addition systems with states
string constraints
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
30:1
30:15
10.4230/LIPIcs.STACS.2024.30
article
Fixed-Parameter Debordering of Waring Rank
Dutta, Pranjal
1
https://orcid.org/0000-0001-9137-9025
Gesmundo, Fulvio
2
https://orcid.org/0000-0001-6402-021X
Ikenmeyer, Christian
3
https://orcid.org/0000-0003-4654-177X
Jindal, Gorav
4
https://orcid.org/0000-0002-9749-5032
Lysikov, Vladimir
5
https://orcid.org/0000-0002-7816-6524
School of Computing, National University of Singapore (NUS), Singapore
Institut de Mathématiques de Toulouse, Université Paul Sabatier, Toulouse, France
University of Warwick, UK
Max Planck Institute for Software Systems, Saarbrücken, Germany
Ruhr-Universität Bochum, Germany
Border complexity measures are defined via limits (or topological closures), so that any function which can be approximated arbitrarily closely by low complexity functions itself has low border complexity. Debordering is the task of proving an upper bound on some non-border complexity measure in terms of a border complexity measure, thus getting rid of limits.
Debordering is at the heart of understanding the difference between Valiant’s determinant vs permanent conjecture, and Mulmuley and Sohoni’s variation which uses border determinantal complexity. The debordering of matrix multiplication tensors by Bini played a pivotal role in the development of efficient matrix multiplication algorithms. Consequently, debordering finds applications in both establishing computational complexity lower bounds and facilitating algorithm design. Currently, very few debordering results are known.
In this work, we study the question of debordering the border Waring rank of polynomials. Waring rank and border Waring rank are well-studied measures in the context of invariant theory, algebraic geometry, and matrix multiplication algorithms. For the first time, we obtain a Waring rank upper bound that is exponential in the border Waring rank and only linear in the degree. All previously known results were exponential in the degree. For polynomials with constant border Waring rank, our results imply an upper bound on the Waring rank that is linear in the degree, which previously was only known for polynomials with border Waring rank at most 5.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.30/LIPIcs.STACS.2024.30.pdf
border complexity
Waring rank
debordering
apolarity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
31:1
31:16
10.4230/LIPIcs.STACS.2024.31
article
On the Power of Border Width-2 ABPs over Fields of Characteristic 2
Dutta, Pranjal
1
Ikenmeyer, Christian
2
Komarath, Balagopal
3
Mittal, Harshil
3
Nanoti, Saraswati Girish
3
Thakkar, Dhara
3
National University of Singapore, Singapore
University of Warwick, UK
Indian Institute of Technology Gandhinagar, India
The celebrated result by Ben-Or and Cleve [SICOMP92] showed that algebraic formulas are polynomially equivalent to width-3 algebraic branching programs (ABPs) for computing polynomials, i.e., VF = VBP₃. Further, there are simple polynomials, such as ∑_{i=1}^8 x_i y_i, that cannot be computed by width-2 ABPs [Allender and Wang, CC16]. Bringmann, Ikenmeyer and Zuiddam [JACM18], on the other hand, studied these questions in the setting of approximate (i.e., border complexity) computation, and showed the universality of border width-2 ABPs over fields of characteristic ≠ 2. In particular, they showed that polynomials that can be approximated by formulas can also be approximated (with only a polynomial blowup in size) by width-2 ABPs, i.e., the closure of VF equals the closure of VBP₂. The power of border width-2 algebraic branching programs when the characteristic of the field is 2 was left open.
In this paper, we show that width-2 ABPs can approximate every polynomial irrespective of the field characteristic. We show that any polynomial f with 𝓁 monomials and with at most t odd-power indeterminates per monomial can be approximated by width-2 ABPs of size 𝒪(𝓁⋅(deg(f)+2^t)). Since 𝓁 and t are finite, this proves the universality of border width-2 ABPs. For univariate polynomials, we improve this upper bound from O(deg(f)²) to O(deg(f)).
Moreover, we show that, if a polynomial f can be approximated by small formulas, then the polynomial f^d, for some small power d, can be approximated by small width-2 ABPs. Therefore, even over fields of characteristic two, border width-2 ABPs are a reasonably powerful computational model. Our construction works over any field.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.31/LIPIcs.STACS.2024.31.pdf
Algebraic branching programs
border complexity
characteristic 2
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
32:1
32:15
10.4230/LIPIcs.STACS.2024.32
article
O(1/ε) Is the Answer in Online Weighted Throughput Maximization
Eberle, Franziska
1
https://orcid.org/0000-0001-8636-9711
Technische Universität Berlin, Germany
We study a fundamental online scheduling problem where jobs with processing times, weights, and deadlines arrive online over time at their release dates. The task is to preemptively schedule these jobs on a single machine or on multiple (possibly unrelated) machines with the objective of maximizing the weighted throughput, i.e., the total weight of jobs that complete before their deadline. To overcome known lower bounds for the competitive analysis, we assume that each job arrives with some slack ε > 0; that is, the time window for processing job j on any machine i on which it can be executed has length at least (1+ε) times j’s processing time on machine i.
Our contribution is a best possible online algorithm for weighted throughput maximization on unrelated machines: our algorithm is 𝒪(1/ε)-competitive, which matches the lower bound for unweighted throughput maximization on a single machine. Even for a single machine, it was not known whether the problem with weighted jobs is "harder" than the problem with unweighted jobs. Thus, we answer this question and settle weighted throughput maximization on a single machine with the best possible competitive ratio Θ(1/ε).
While we focus on non-migratory schedules, on identical machines, our algorithm achieves the same (up to constants) performance guarantee when compared to an optimal migratory schedule.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.32/LIPIcs.STACS.2024.32.pdf
Deadline scheduling
weighted throughput
online algorithms
competitive analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
33:1
33:17
10.4230/LIPIcs.STACS.2024.33
article
On the Exact Matching Problem in Dense Graphs
El Maalouly, Nicolas
1
https://orcid.org/0000-0002-1037-0203
Haslebacher, Sebastian
1
https://orcid.org/0000-0003-3988-3325
Wulf, Lasse
2
https://orcid.org/0000-0001-7139-4092
Department of Computer Science, ETH Zurich, Switzerland
Institute of Discrete Mathematics, TU Graz, Austria
In the Exact Matching problem, we are given a graph whose edges are colored red or blue, and the task is to decide, for a given integer k, whether there is a perfect matching with exactly k red edges. It has been known since 1987 that the Exact Matching problem can be solved in randomized polynomial time. Despite numerous efforts, it is still not known today whether a deterministic polynomial-time algorithm exists as well. In this paper, we make substantial progress by solving the problem for a multitude of different classes of dense graphs. We solve the Exact Matching problem in deterministic polynomial time for complete r-partite graphs, for unit interval graphs, for bipartite unit interval graphs, for graphs of bounded neighborhood diversity, for chain graphs, and for graphs without a complete bipartite t-hole. We solve the problem in quasi-polynomial time for Erdős-Rényi random graphs G(n, 1/2). We also reprove an earlier result for bounded independence number/bipartite independence number. We use two main tools to obtain these results: a local search algorithm as well as a generalization of an earlier result by Karzanov.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.33/LIPIcs.STACS.2024.33.pdf
Exact Matching
Perfect Matching
Red-Blue Matching
Bounded Color Matching
Local Search
Derandomization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
34:1
34:19
10.4230/LIPIcs.STACS.2024.34
article
Hardness of Linearly Ordered 4-Colouring of 3-Colourable 3-Uniform Hypergraphs
Filakovský, Marek
1
https://orcid.org/0000-0001-5978-2623
Nakajima, Tamio-Vesa
2
https://orcid.org/0000-0003-3684-9412
Opršal, Jakub
3
https://orcid.org/0000-0003-1245-3456
Tasinato, Gianluca
4
https://orcid.org/0009-0008-7231-5753
Wagner, Uli
4
https://orcid.org/0000-0002-1494-0568
Masaryk University, Brno, Czech Republic
University of Oxford, UK
University of Birmingham, UK
ISTA, Klosterneuburg, Austria
A linearly ordered (LO) k-colouring of a hypergraph is a colouring of its vertices with colours 1, … , k such that each edge contains a unique maximal colour. Deciding whether an input hypergraph admits an LO k-colouring with a fixed number of colours is NP-complete (and in the special case of graphs, LO colouring coincides with the usual graph colouring).
Here, we investigate the complexity of approximating the "linearly ordered chromatic number" of a hypergraph. We prove that the following promise problem is NP-complete: Given a 3-uniform hypergraph, distinguish between the case that it is LO 3-colourable, and the case that it is not even LO 4-colourable. We prove this result by a combination of algebraic, topological, and combinatorial methods, building on and extending a topological approach for studying approximate graph colouring introduced by Krokhin, Opršal, Wrochna, and Živný (2023).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.34/LIPIcs.STACS.2024.34.pdf
constraint satisfaction problem
hypergraph colouring
promise problem
topological methods
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
35:1
35:13
10.4230/LIPIcs.STACS.2024.35
article
The 2-Attractor Problem Is NP-Complete
Fuchs, Janosch
1
https://orcid.org/0000-0003-3993-222X
Whittington, Philip
2
https://orcid.org/0009-0005-0910-6826
RWTH Aachen University, Germany
ETH Zürich, Switzerland
A k-attractor is a combinatorial object unifying dictionary-based compression. It allows one to compare the repetitiveness measures of different dictionary compressors such as Lempel-Ziv 77, the Burrows-Wheeler transform, straight-line programs, and macro schemes. For a string T ∈ Σⁿ, a k-attractor is defined as a set of positions Γ ⊆ [1,n] such that every distinct substring of length at most k is covered by at least one of the selected positions. Thus, if a substring occurs multiple times in T, one position suffices to cover it. A 1-attractor is easily computed in linear time, while Kempa and Prezza [STOC 2018] have shown that for k ≥ 3, it is NP-complete to compute the smallest k-attractor, by a reduction from k-set cover.
The main result of this paper answers the open question of the complexity of the 2-attractor problem, showing that it remains NP-complete. Kempa and Prezza’s proof for k ≥ 3 also reduces the 2-attractor problem to the 2-set cover problem, which is equivalent to edge cover, but that does not fully capture the complexity of the 2-attractor problem. For this reason, we extend edge cover by a color function on the edges, yielding the colorful edge cover problem. Any edge cover must then satisfy the additional constraint that each color is represented. This extension raises the complexity such that colorful edge cover becomes NP-complete, while also modeling the 2-attractor problem more precisely. We obtain a reduction showing the k-attractor problem to be NP-complete and APX-hard for any k ≥ 2.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.35/LIPIcs.STACS.2024.35.pdf
String attractors
dictionary compression
computational complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
36:1
36:20
10.4230/LIPIcs.STACS.2024.36
article
Directed Regular and Context-Free Languages
Ganardi, Moses
1
https://orcid.org/0000-0002-0775-7781
Sağlam, Irmak
1
https://orcid.org/0000-0002-4757-1631
Zetzsche, Georg
1
https://orcid.org/0000-0002-6421-4388
Max Planck Institute for Software Systems (MPI-SWS), Kaiserslautern, Germany
We study the problem of deciding whether a given language is directed. A language L is directed if every pair of words in L has a common (scattered) superword in L. Deciding directedness is a fundamental problem in connection with ideal decompositions of downward closed sets. Another motivation is that whether two directed context-free languages have the same downward closure can be decided in polynomial time, whereas for general context-free languages this problem is known to be coNEXP-complete.
We show that the directedness problem for regular languages, given as NFAs, belongs to AC¹ and is hence solvable in polynomial time. Moreover, it is NL-complete for fixed alphabet sizes. Furthermore, we show that for context-free languages, the directedness problem is PSPACE-complete.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.36/LIPIcs.STACS.2024.36.pdf
Subword
ideal
language
regular
context-free
equivalence
downward closure
compression
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
37:1
37:20
10.4230/LIPIcs.STACS.2024.37
article
Online Simple Knapsack with Bounded Predictions
Gehnen, Matthias
1
https://orcid.org/0000-0001-9595-2992
Lotze, Henri
1
https://orcid.org/0000-0001-5013-8831
Rossmanith, Peter
1
https://orcid.org/0000-0003-0177-8028
RWTH Aachen University, Germany
In the Online Simple Knapsack problem, an algorithm has to pack a knapsack of unit size as full as possible with items that arrive sequentially. The algorithm has no prior knowledge of the length or nature of the instance. Its performance is then measured against the best possible packing of all items of the same instance, over all possible instances.
In the classical model for online computation, it is well known that there exists no constant bound for the ratio between the size of an optimal packing and the size of an online algorithm’s packing. A recent variation of the classical online model is that of predictions. In this model, an algorithm is given knowledge about the instance in advance, which is in reality distorted by some factor δ that is commonly unknown to the algorithm. The algorithm only learns about the actual nature of the elements of an input once they are revealed, and an irrevocable and immediate decision has to be made. In this work, we study a slight variation of this model in which the error term, and thus the range of sizes that an announced item may actually lie in, is given to the algorithm in advance. It thus knows the range from which the actual size of each item is selected.
We find that the analysis of the Online Simple Knapsack problem under this model is surprisingly involved. For values of 0 < δ ≤ 1/7, we prove a tight competitive ratio of 2. From there on, we are able to prove that there are at least three alternating functions that describe the competitive ratio. We provide partially tight bounds for the whole range of 0 < δ < 1, showing in particular that the function of the competitive ratio depending on δ is not continuous.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.37/LIPIcs.STACS.2024.37.pdf
Online problem
Simple Knapsack
Predictions
Machine-Learned Advice
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
38:1
38:18
10.4230/LIPIcs.STACS.2024.38
article
The AC⁰-Complexity of Visibly Pushdown Languages
Göller, Stefan
1
Grosshans, Nathan
2
https://orcid.org/0000-0003-3400-1098
School of Electrical Engineering and Computer Science, Universität Kassel, Germany
Independent Scholar, Paris Region, France
We study the question of which visibly pushdown languages (VPLs) are in the complexity class AC⁰ and how to decide this question effectively. Our contribution is to introduce a particular subclass of one-turn VPLs, called intermediate VPLs, for which the raised question is entirely unclear: to the best of our knowledge, our research community is unaware of containment or non-containment in AC⁰ for any language in our newly introduced class. Our main result states that there is an algorithm that, given a visibly pushdown automaton, correctly outputs exactly one of the following: that its language L is in AC⁰; some m ≥ 2 such that MODₘ (the words over {0,1} having a number of 1’s divisible by m) is constant-depth reducible to L (implying that L is not in AC⁰); or a finite disjoint union of intermediate VPLs to which L is constant-depth equivalent. In the last of the three cases one can moreover effectively compute k,l ∈ ℕ_{> 0} with k ≠ l such that the concrete intermediate VPL L(S → ε ∣ ac^{k-1}Sb₁ ∣ ac^{l-1}Sb₂) is constant-depth reducible to the language L. Due to their particular nature, we conjecture that either all intermediate VPLs are in AC⁰ or none is. As a corollary of our main result we obtain that, in case the input language is a visibly counter language, our algorithm can effectively determine whether it is in AC⁰; hence our main result generalizes a result by Krebs et al. stating that it is decidable if a given visibly counter language is in AC⁰ (when restricted to well-matched words).
For our proofs we revisit so-called Ext-algebras (introduced by Czarnetzki et al.), which are closely related to forest algebras (introduced by Bojańczyk and Walukiewicz), and use Green’s relations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.38/LIPIcs.STACS.2024.38.pdf
Visibly pushdown languages
Circuit Complexity
AC0
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
39:1
39:19
10.4230/LIPIcs.STACS.2024.39
article
Quantum and Classical Communication Complexity of Permutation-Invariant Functions
Guan, Ziyi
1
Huang, Yunqi
2
Yao, Penghui
3
4
Ye, Zekun
3
EPFL, Lausanne, Switzerland
University of Technology Sydney, Australia
State Key Laboratory for Novel Software Technology, Nanjing University, China
Hefei National Laboratory, China
This paper gives a nearly tight characterization of the quantum communication complexity of permutation-invariant Boolean functions. With such a characterization, we show that the quantum and randomized communication complexities of permutation-invariant Boolean functions are quadratically equivalent (up to a logarithmic factor). Our results extend a recent line of research regarding query complexity [Scott Aaronson and Andris Ambainis, 2014; André Chailloux, 2019; Shalev Ben-David et al., 2020] to communication complexity, showing that symmetry prevents exponential quantum speedups.
Furthermore, we show that the Log-rank Conjecture holds for any non-trivial total permutation-invariant Boolean function. Moreover, we establish a relationship between the quantum/classical communication complexity and the approximate rank of permutation-invariant Boolean functions. This implies the correctness of the Log-approximate-rank Conjecture for permutation-invariant Boolean functions in both the randomized and quantum settings (up to a logarithmic factor).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.39/LIPIcs.STACS.2024.39.pdf
Communication complexity
Permutation-invariant functions
Log-rank Conjecture
Quantum advantages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
40:1
40:18
10.4230/LIPIcs.STACS.2024.40
article
A Faster Algorithm for Vertex Cover Parameterized by Solution Size
Harris, David G.
1
https://orcid.org/0000-0002-3021-3555
Narayanaswamy, N. S.
2
https://orcid.org/0000-0002-8771-3921
Department of Computer Science, University of Maryland, College Park, MD, USA
Department of Computer Science and Engineering, Indian Institute of Technology Madras, India
We describe a new algorithm for vertex cover with runtime O^*(1.25284^k), where k is the size of the desired solution and O^* hides polynomial factors in the input size. This improves over the previous runtime of O^*(1.2738^k) due to Chen, Kanj, & Xia (2010), which had stood for more than a decade. The key to our algorithm is to use a measure which simultaneously tracks k as well as the optimal value λ of the vertex cover LP relaxation. This allows us to make use of prior algorithms for Maximum Independent Set in bounded-degree graphs and Above-Guarantee Vertex Cover.
The main step in the algorithm is to branch on high-degree vertices, while ensuring that both k and μ = k - λ are decreased at each step. There can be local obstructions in the graph that prevent μ from decreasing in this process; we develop a number of novel branching steps to handle these situations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.40/LIPIcs.STACS.2024.40.pdf
Vertex cover
FPT
Graph algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
41:1
41:18
10.4230/LIPIcs.STACS.2024.41
article
Linear Loop Synthesis for Quadratic Invariants
Hitarth, S.
1
https://orcid.org/0000-0001-7419-3560
Kenison, George
2
https://orcid.org/0000-0002-7661-7061
Kovács, Laura
3
https://orcid.org/0000-0002-8299-2714
Varonka, Anton
3
https://orcid.org/0000-0001-5758-0657
Hong Kong University of Science and Technology, Hong Kong
Liverpool John Moores University, UK
TU Wien, Austria
Invariants are key to formal loop verification as they capture loop properties that are valid before and after each loop iteration. Yet, generating invariants is a notoriously difficult task, even for syntactically restricted classes of loops. Rather than generating invariants for given loops, in this paper we synthesise loops that exhibit a predefined behaviour given by an invariant. From the perspective of formal loop verification, the synthesised loops are thus correct by design and no longer need to be verified.
To overcome the hardness of reasoning with arbitrarily strong invariants, in this paper we construct simple (non-nested) while loops with linear updates that exhibit polynomial equality invariants. Rather than solving arbitrary polynomial equations, we consider loop properties defined by a single quadratic invariant in any number of variables. We present a procedure that, given a quadratic equation, decides whether a loop with affine updates satisfying this equation exists. Furthermore, if the answer is positive, the procedure synthesises a loop and ensures its variables achieve infinitely many different values.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.41/LIPIcs.STACS.2024.41.pdf
program synthesis
loop invariants
verification
Diophantine equations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
42:1
42:17
10.4230/LIPIcs.STACS.2024.42
article
Algorithms for Claims Trading
Hoefer, Martin
1
https://orcid.org/0000-0003-0131-5605
Ventre, Carmine
2
https://orcid.org/0000-0003-1464-1215
Wilhelmi, Lisa
1
https://orcid.org/0000-0003-0845-1941
Goethe University Frankfurt, Germany
King’s College London, UK
The recent banking crisis has again emphasized the importance of understanding and mitigating systemic risk in financial networks. In this paper, we study a market-driven approach to rescue a bank in distress based on the idea of claims trading, a notion defined in Chapter 11 of the U.S. Bankruptcy Code. We formalize the idea in the context of the seminal model of financial networks by Eisenberg and Noe [Eisenberg and Noe, 2001]. For two given banks v and w, we consider the operation that w takes over some claims of v and in return gives liquidity to v (or creditors of v) to ultimately rescue v (or mitigate contagion effects). We study the structural properties and computational complexity of decision and optimization problems for several variants of claims trading.
When trading incoming edges of v (i.e., claims for which v is the creditor), we show that there is no trade in which both banks v and w strictly improve their assets. We therefore consider creditor-positive trades, in which v profits strictly and w remains indifferent. For a given set C of incoming edges of v, we provide an efficient algorithm to compute payments by w that result in a creditor-positive trade and maximal assets of v. When the set C must also be chosen, the problem becomes weakly NP-hard. Our main result here is a bicriteria FPTAS to compute an approximate trade, which allows for slightly increased payments by w. The approximate trade results in nearly the optimal amount of assets of v in any exact trade. Our results extend to the case in which banks use general monotone payment functions to settle their debt and the emerging clearing state can be computed efficiently.
In contrast, for trading outgoing edges of v (i.e., claims for which v is the debtor), the goal is to maximize the increase in assets for the creditors of v. Notably, for these results the characteristics of the payment functions of the banks are essential. For payments ranking creditors one by one, we show NP-hardness of approximation within a factor polynomial in the network size, in both problem variants, i.e., whether or not the set of claims C is part of the input. Instead, for payments proportional to the value of each debt, our results indicate more favorable conditions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.42/LIPIcs.STACS.2024.42.pdf
Financial Networks
Claims Trade
Systemic Risk
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
43:1
43:17
10.4230/LIPIcs.STACS.2024.43
article
A Faster Algorithm for Constructing the Frequency Difference Consensus Tree
Jansson, Jesper
1
Sung, Wing-Kin
2
3
Tabatabaee, Seyed Ali
4
Yang, Yutong
3
Kyoto University, Japan
The Chinese University of Hong Kong, China
Hong Kong Genome Institute, Hong Kong Science Park, China
Dept. of Computer Science, University of British Columbia, Vancouver, Canada
A consensus tree is a phylogenetic tree that summarizes the evolutionary relationships inferred from a collection of phylogenetic trees with the same set of leaf labels. Among the many types of consensus trees that have been proposed in the last 50 years, the frequency difference consensus tree is one of the more finely resolved types that retains a large amount of information. This paper presents a new deterministic algorithm for constructing the frequency difference consensus tree. Given k phylogenetic trees with identical sets of n leaf labels, it runs in O(kn log n) time, improving on the best previously known solution.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.43/LIPIcs.STACS.2024.43.pdf
phylogenetic tree
frequency difference consensus tree
tree algorithm
centroid path decomposition
max-Manhattan Skyline Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
44:1
44:19
10.4230/LIPIcs.STACS.2024.44
article
Decremental Sensitivity Oracles for Covering and Packing Minors
Kanesh, Lawqueen
1
https://orcid.org/0000-0001-9274-4119
Panolan, Fahad
2
https://orcid.org/0000-0001-6213-8687
Ramanujan, M. S.
3
https://orcid.org/0000-0002-2116-6048
Strulo, Peter
3
Indian Institute of Technology Jodhpur, India
School of Computing, University of Leeds, UK
University of Warwick, UK
In this paper, we present the first decremental fixed-parameter sensitivity oracles for a number of basic covering and packing problems on graphs. In particular, we obtain the first decremental sensitivity oracles for Vertex Planarization (delete k vertices to make the graph planar) and Cycle Packing (pack k vertex-disjoint cycles in the given graph). That is, we give a sensitivity oracle that preprocesses the given graph in time f(k,𝓁)⋅n^{O(1)} such that, when given a set of 𝓁 edge deletions, the data structure decides in time f(k,𝓁) whether the updated graph is a positive instance of the problem. These results are obtained as a corollary of our central result, which is the first decremental sensitivity oracle for Topological Minor Deletion (cover all topological minors in the input graph that belong to a specified set, using k vertices).
Though our methodology closely follows the literature, we are able to produce the first explicit bounds on the preprocessing and query times for several problems. We also initiate the study of fixed-parameter sensitivity oracles with so-called structural parameterizations and give sufficient conditions for the existence of fixed-parameter sensitivity oracles where the parameter is just the treewidth of the graph. In contrast, all existing literature on this topic and the aforementioned results in this paper assume a bound on the solution size (a weaker parameter than treewidth for many problems). As corollaries, we obtain decremental sensitivity oracles for well-studied problems such as Vertex Cover and Dominating Set when only the treewidth of the input graph is bounded. A feature of our methodology behind these results is that we are able to obtain query times independent of treewidth.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.44/LIPIcs.STACS.2024.44.pdf
Sensitivity oracles
Data Structures
FPT algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
45:1
45:17
10.4230/LIPIcs.STACS.2024.45
article
Circuit Equivalence in 2-Nilpotent Algebras
Kawałek, Piotr
1
2
https://orcid.org/0000-0003-3592-1697
Kompatscher, Michael
3
https://orcid.org/0000-0002-0163-6604
Krzaczkowski, Jacek
2
https://orcid.org/0000-0003-2861-1156
Institute of Discrete Mathematics and Geometry, Vienna University of Technology, Austria
Institute of Computer Science, University of Maria Curie-Skłodowska, Lublin, Poland
Department of Algebra, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
The circuit equivalence problem Ceqv(A) of a finite algebra A is the problem of deciding whether two circuits over A compute the same function or not. This problem not only generalises the equivalence problem for Boolean circuits, but is also of interest in universal algebra, as it models the problem of checking identities in A. In this paper we prove that Ceqv(A) ∈ 𝖯, if A is a finite 2-nilpotent algebra from a congruence modular variety.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.45/LIPIcs.STACS.2024.45.pdf
circuit equivalence
identity checking
nilpotent algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
46:1
46:17
10.4230/LIPIcs.STACS.2024.46
article
The Subpower Membership Problem of 2-Nilpotent Algebras
Kompatscher, Michael
1
https://orcid.org/0000-0002-0163-6604
Department of Algebra, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
The subpower membership problem SMP(𝐀) of a finite algebraic structure 𝐀 asks whether a given partial function from Aⁿ to A can be interpolated by a term operation of 𝐀, or not. While this problem can be EXPTIME-complete in general, Willard asked whether it is always solvable in polynomial time if 𝐀 is a Mal'tsev algebra. In particular, this includes many important structures studied in abstract algebra, such as groups, quasigroups, rings, and Boolean algebras. In this paper we give an affirmative answer to Willard’s question for a large class of 2-nilpotent Mal'tsev algebras. We furthermore develop tools that might be essential in answering the question for general nilpotent Mal'tsev algebras in the future.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.46/LIPIcs.STACS.2024.46.pdf
subpower membership problem
Mal'tsev algebra
compact representation
nilpotence
clonoids
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
47:1
47:16
10.4230/LIPIcs.STACS.2024.47
article
Parameterized and Approximation Algorithms for Covering Points with Segments in the Plane
Kowalska, Katarzyna
1
Pilipczuk, Michał
2
Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, Poland
Institute of Informatics, University of Warsaw, Poland
We study parameterized and approximation algorithms for a variant of Set Cover, where the universe of elements to be covered consists of points in the plane and the sets with which the points should be covered are segments. We call this problem Segment Set Cover. We also consider a relaxation of the problem called δ-extension, where we need to cover the points by segments that are extended by a tiny fraction, but we compare the solution’s quality to the optimum without extension.
For the unparameterized variant, we prove that Segment Set Cover does not admit a PTAS unless P=NP, even if we restrict segments to be axis-parallel and allow 1/2-extension. On the other hand, we show that parameterization helps for the tractability of Segment Set Cover: we give an FPT algorithm for unweighted Segment Set Cover parameterized by the solution size k, a parameterized approximation scheme for Weighted Segment Set Cover with k being the parameter, and an FPT algorithm for Weighted Segment Set Cover with δ-extension parameterized by k and δ. In the last two results, relaxing the problem is probably necessary: we prove that Weighted Segment Set Cover without any relaxation is W[1]-hard and that, assuming ETH, it admits no algorithm running in time f(k)⋅ n^{o(k / log k)} for any computable function f. This holds even if one restricts attention to axis-parallel segments.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.47/LIPIcs.STACS.2024.47.pdf
Geometric Set Cover
fixed-parameter tractability
weighted parameterized problems
parameterized approximation scheme
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
48:1
48:17
10.4230/LIPIcs.STACS.2024.48
article
FPT Approximation of Generalised Hypertree Width for Bounded Intersection Hypergraphs
Lanzinger, Matthias
1
2
https://orcid.org/0000-0002-7601-3727
Razgon, Igor
3
https://orcid.org/0000-0002-7060-5780
TU Wien, Austria
University of Oxford, UK
Birkbeck, University of London, UK
Generalised hypertree width (ghw) is a hypergraph parameter that is central to the tractability of many prominent problems with natural hypergraph structure. Computing the ghw of a hypergraph is notoriously hard. The decision version of the problem, checking whether ghw(H) ≤ k, is paraNP-hard when parameterised by k. Furthermore, approximating ghw is at least as hard as approximating Set-Cover, which is known not to admit any FPT approximation algorithm.
Research on the computation of ghw so far has focused on identifying structural restrictions on hypergraphs - such as bounds on the size of edge intersections - that permit XP algorithms for ghw. Yet, even under these restrictions, the problem has so far evaded any kind of FPT algorithm. In this paper we make the first step towards FPT algorithms for ghw by showing that the parameter can be approximated in FPT time for hypergraphs of bounded edge intersection size. In concrete terms, we show that there exists an FPT algorithm, parameterised by k and d, that for an input hypergraph H with maximal cardinality of edge intersections d and an integer k either outputs a tree decomposition witnessing ghw(H) ≤ 4k(k+d+1)(2k-1), or rejects, in which case it is guaranteed that ghw(H) > k. Thus, in the special case of hypergraphs of bounded edge intersection, we obtain an FPT O(k³)-approximation algorithm for ghw.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.48/LIPIcs.STACS.2024.48.pdf
generalized hypertree width
hypergraphs
parameterized algorithms
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
49:1
49:18
10.4230/LIPIcs.STACS.2024.49
article
Sub-Exponential Time Lower Bounds for #VC and #Matching on 3-Regular Graphs
Liu, Ying
1
2
https://orcid.org/0009-0006-9666-0768
Chen, Shiteng
1
2
https://orcid.org/0000-0002-0658-628X
State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
This article focuses on the sub-exponential time lower bounds for two canonical #P-hard problems: counting the vertex covers of a given graph (#VC) and counting the matchings of a given graph (#Matching), under the well-known counting exponential time hypothesis (#ETH).
Interpolation is an essential method for building reductions in this article and in the literature. We use the idea of block interpolation to prove that neither #VC nor #Matching has a 2^{o(N)} time deterministic algorithm, even if the given graph with N vertices is a 3-regular graph. However, when it comes to proving the lower bounds for #VC and #Matching on planar graphs, neither block interpolation nor polynomial interpolation works. We prove that, for any integer N > 0, we can simulate N pairwise linearly independent unary functions by gadgets of only O(log N) size in the context of #VC and #Matching. We then use these log-size gadgets in the polynomial interpolation to prove that planar #VC and planar #Matching have no 2^{o(√{N/(log N)})} time deterministic algorithm. The lower bounds hold even if the given graph with N vertices is a 3-regular graph.
Under a stronger hypothesis, the randomized exponential time hypothesis (rETH), we can avoid using interpolation. We prove that if rETH holds, neither planar #VC nor planar #Matching has a 2^{o(√N)} time randomized algorithm, even if the given graph with N vertices is a planar 3-regular graph. The 2^{Ω(√N)} time lower bounds are tight, since there exist 2^{O(√N)} time algorithms for planar #VC and planar #Matching.
We also develop a fine-grained dichotomy for a class of counting problems, symmetric Holant*.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.49/LIPIcs.STACS.2024.49.pdf
computational complexity
planar Holant
polynomial interpolation
rETH
sub-exponential
#ETH
#Matching
#VC
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
50:1
50:18
10.4230/LIPIcs.STACS.2024.50
article
Arena-Independent Memory Bounds for Nash Equilibria in Reachability Games
Main, James C. A.
1
https://orcid.org/0009-0000-8471-4833
F.R.S.-FNRS & UMONS - Université de Mons, Belgium
We study the memory requirements of Nash equilibria in turn-based multiplayer games on possibly infinite graphs with reachability, shortest path and Büchi objectives.
We present constructions for finite-memory Nash equilibria in these games that apply to arbitrary game graphs, bypassing the finite-arena requirement that is central in existing approaches. We show that, for these three types of games, from any Nash equilibrium, we can derive another Nash equilibrium where all strategies are finite-memory such that the same players accomplish their objective, without increasing their cost for shortest path games.
Furthermore, we provide memory bounds that are independent of the size of the game graph for reachability and shortest path games. These bounds depend only on the number of players.
To the best of our knowledge, we provide the first results pertaining to finite-memory constrained Nash equilibria in infinite arenas and the first arena-independent memory bounds for Nash equilibria.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.50/LIPIcs.STACS.2024.50.pdf
multiplayer games on graphs
Nash equilibrium
finite-memory strategies
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
51:1
51:17
10.4230/LIPIcs.STACS.2024.51
article
Weighted HOM-Problem for Nonnegative Integers
Maletti, Andreas
1
https://orcid.org/0000-0003-3202-0498
Nász, Andreea-Teodora
1
https://orcid.org/0009-0006-9389-3339
Paul, Erik
1
https://orcid.org/0000-0002-0814-598X
Institute of Computer Science, Leipzig University, Germany
The HOM-problem asks whether the image of a regular tree language under a given tree homomorphism is again regular. It was recently shown to be decidable by Godoy, Giménez, Ramos, and Àlvarez. In this paper, the ℕ-weighted version of this problem is considered and its decidability is proved. More precisely, it is decidable in polynomial time whether the image of a regular ℕ-weighted tree language under a nondeleting, nonerasing tree homomorphism is regular.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.51/LIPIcs.STACS.2024.51.pdf
Weighted Tree Automaton
Decision Problem
Subtree Equality Constraint
Tree Homomorphism
HOM-Problem
Weighted Tree Grammar
Weighted HOM-Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
52:1
52:16
10.4230/LIPIcs.STACS.2024.52
article
Worst-Case and Smoothed Analysis of the Hartigan-Wong Method for k-Means Clustering
Manthey, Bodo
1
https://orcid.org/0000-0001-6278-5059
van Rhijn, Jesse
1
https://orcid.org/0000-0002-3416-7672
Faculty of Electrical Engineering, Mathematics, and Computer Science, University of Twente, Enschede, The Netherlands
We analyze the running time of the Hartigan-Wong method, an old algorithm for the k-means clustering problem. First, we construct an instance on the line on which the method can take 2^{Ω(n)} steps to converge, demonstrating that the Hartigan-Wong method has exponential worst-case running time even when k-means is easy to solve. As this is in contrast to the empirical performance of the algorithm, we also analyze the running time in the framework of smoothed analysis. In particular, given an instance of n points in d dimensions, we prove that the expected number of iterations needed for the Hartigan-Wong method to terminate is bounded by k^{12kd}⋅ poly(n, k, d, 1/σ) when the points in the instance are perturbed by independent d-dimensional Gaussian random variables of mean 0 and standard deviation σ.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.52/LIPIcs.STACS.2024.52.pdf
k-means clustering
smoothed analysis
probabilistic analysis
local search
heuristics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
53:1
53:12
10.4230/LIPIcs.STACS.2024.53
article
Homomorphism-Distinguishing Closedness for Graphs of Bounded Tree-Width
Neuen, Daniel
1
https://orcid.org/0000-0002-4940-0318
University of Bremen, Germany
Two graphs are homomorphism indistinguishable over a graph class 𝐅, denoted by G ≡_𝐅 H, if hom(F,G) = hom(F,H) for all F ∈ 𝐅, where hom(F,G) denotes the number of homomorphisms from F to G. A classical result of Lovász shows that isomorphism between graphs is equivalent to homomorphism indistinguishability over the class of all graphs. More recently, there has been a series of works giving natural algebraic and/or logical characterizations of homomorphism indistinguishability over certain restricted graph classes.
A class of graphs 𝐅 is homomorphism-distinguishing closed if, for every F ∉ 𝐅, there are graphs G and H such that G ≡_𝐅 H and hom(F,G) ≠ hom(F,H). Roberson conjectured that every class closed under taking minors and disjoint unions is homomorphism-distinguishing closed, which implies that every such class defines a distinct equivalence relation between graphs. In this work, we confirm this conjecture for the classes 𝒯_k, k ≥ 1, of all graphs of tree-width at most k.
As an application of this result, we also characterize which subgraph counts are detected by the k-dimensional Weisfeiler-Leman algorithm. This answers an open question from [Arvind et al., J. Comput. Syst. Sci., 2020].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.53/LIPIcs.STACS.2024.53.pdf
homomorphism indistinguishability
tree-width
Weisfeiler-Leman algorithm
subgraph counts
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
54:1
54:18
10.4230/LIPIcs.STACS.2024.54
article
Positionality in Σ⁰₂ and a Completeness Result
Ohlmann, Pierre
1
https://orcid.org/0000-0002-4685-5253
Skrzypczak, Michał
1
https://orcid.org/0000-0002-9647-4993
Institute of Informatics, University of Warsaw, Poland
We study the existence of positional strategies for the protagonist in infinite duration games over arbitrary game graphs. We prove that prefix-independent objectives in Σ⁰₂ which are positional and admit a (strongly) neutral letter are exactly those that are recognised by history-deterministic monotone co-Büchi automata over countable ordinals. This generalises a criterion proposed by [Kopczyński, ICALP 2006] and gives an alternative proof of closure under union for these objectives, which was known from [Ohlmann, TheoretiCS 2023].
We then give two applications of our result. First, we prove that the mean-payoff objective is positional over arbitrary game graphs. Second, we establish the following completeness result: for any objective W which is prefix-independent, admits a (weakly) neutral letter, and is positional over finite game graphs, there is an objective W' which is equivalent to W over finite game graphs and positional over arbitrary game graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.54/LIPIcs.STACS.2024.54.pdf
infinite duration games
positionality
Borel class Σ⁰₂
history determinism
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
55:1
55:18
10.4230/LIPIcs.STACS.2024.55
article
Tree-Layout Based Graph Classes: Proper Chordal Graphs
Paul, Christophe
1
https://orcid.org/0000-0001-6519-975X
Protopapas, Evangelos
1
https://orcid.org/0000-0003-0294-2985
CNRS, Université Montpellier, France
Many important graph classes are characterized by means of layouts (vertex orderings) excluding some patterns. For example, a graph G = (V,E) is a proper interval graph if and only if G has a layout 𝐋 such that for every triple of vertices x, y, z with x≺_𝐋 y≺_𝐋 z, if xz ∈ E, then xy ∈ E and yz ∈ E. Such a triple x, y, z is called an indifference triple. In this paper, we investigate the concept of excluding a set of patterns in tree-layouts rather than layouts. A tree-layout 𝐓_G = (T,r,ρ_G) of a graph G = (V,E) is a tree T rooted at some node r and equipped with a one-to-one mapping ρ_G between V and the nodes of T such that for every edge xy ∈ E, either x is an ancestor of y, denoted x≺_{𝐓_G} y, or y is an ancestor of x. Excluding patterns in a tree-layout is now defined using the ancestor relation. This leads to an unexplored territory of graph classes. In this paper, we initiate the study of such graph classes with the class of proper chordal graphs, defined by excluding indifference triples in tree-layouts. Our results combine a characterization, a compact and canonical representation, as well as polynomial-time algorithms for the recognition and the graph isomorphism of proper chordal graphs. For this, one of the key ingredients is the introduction of the concept of an FPQ-hierarchy, generalizing the celebrated PQ-tree data structure.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.55/LIPIcs.STACS.2024.55.pdf
Graph classes
Graph representation
Graph isomorphism
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
56:1
56:19
10.4230/LIPIcs.STACS.2024.56
article
Randomized Query Composition and Product Distributions
Sanyal, Swagato
1
https://orcid.org/0000-0003-1546-7749
Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, India
Let 𝖱_ε denote randomized query complexity for error probability ε, and let 𝖱 := 𝖱_{1/3}. In this work we investigate whether a perfect composition theorem 𝖱(f∘gⁿ) = Ω(𝖱(f)⋅ 𝖱(g)) holds for a relation f ⊆ {0,1}ⁿ × 𝒮 and a total inner function g:{0,1}^m → {0,1}.
Composition theorems of the form 𝖱(f∘gⁿ) = Ω(𝖱(f)⋅ 𝖬(g)) are known for various measures 𝖬. Such measures include the sabotage complexity RS defined by Ben-David and Kothari (ICALP 2015), the max-conflict complexity defined by Gavinsky, Lee, Santha and Sanyal (ICALP 2019), and the linearized complexity measure defined by Ben-David, Blais, Göös and Maystre (FOCS 2022). The above measures are asymptotically non-decreasing in the above order. However, for total Boolean functions no asymptotic separation is known between any two of them.
Let 𝖣^{prod} denote the maximum distributional query complexity with respect to any product (over variables) distribution. In this work we show that for any total Boolean function g, the sabotage complexity RS(g) = Ω̃(𝖣^{prod}(g)). This gives the composition theorem 𝖱(f∘gⁿ) = Ω̃(𝖱(f)⋅ 𝖣^{prod}(g)). In light of the minimax theorem, which states that 𝖱(g) is the maximum distributional complexity of g over all distributions, our result makes progress towards answering the composition question.
We prove our result by means of a complexity measure 𝖱_ε^{prod} that we define for total Boolean functions. Informally, 𝖱_ε^{prod}(g) is the minimum complexity of any randomized decision tree with unlabelled leaves with the property that, for every product distribution μ over the inputs, the average bias of its leaves is at least ((1-ε)-ε)/2 = 1/2-ε. It follows by standard arguments that 𝖱_{1/3}^{prod}(g) = Ω(𝖣^{prod}(g)). We show that 𝖱_{1/3}^{prod} is equivalent to the sabotage complexity up to a logarithmic factor.
Ben-David and Kothari asked whether RS(g) = Θ(𝖱(g)) for total functions g. We generalize their question and ask if for any error ε, 𝖱_ε^{prod}(g) = Θ̃(𝖱_ε(g)). We observe that the work by Ben-David, Blais, Göös and Maystre (FOCS 2022) implies that for a perfect composition theorem 𝖱_{1/3}(f∘gⁿ) = Ω(𝖱_{1/3}(f)⋅𝖱_{1/3}(g)) to hold for any relation f and total function g, a necessary condition is that 𝖱_{1/3}(g) = O(1/(ε)⋅ 𝖱_{1/2-ε}(g)) holds for any total function g. We show that 𝖱_ε^{prod}(g) admits a similar error-reduction 𝖱_{1/3}^{prod}(g) = Õ(1/(ε)⋅𝖱_{1/2-ε}^{prod}(g)). Note that from the definition of 𝖱_ε^{prod} it is not immediately clear that 𝖱_ε^{prod} admits any error-reduction at all.
We ask if our bound RS(g) = Ω̃(𝖣^{prod}(g)) is tight. We answer this question in the negative, by showing that for the NAND tree function, sabotage complexity is polynomially larger than 𝖣^{prod}. Our proof yields an alternative and different derivation of the tight lower bound on the bounded error randomized query complexity of the NAND tree function (originally proved by Santha in 1985), which may be of independent interest. Our result shows that sometimes, 𝖱_{1/3}^{prod} and sabotage complexity may be useful in producing an asymptotically larger lower bound on 𝖱(f∘gⁿ) than Ω̃(𝖱(f)⋅ 𝖣^{prod}(g)). In addition, this gives an explicit polynomial separation between 𝖱 and 𝖣^{prod} which, to our knowledge, was not known prior to our work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.56/LIPIcs.STACS.2024.56.pdf
Randomized query complexity
Decision Tree
Boolean function complexity
Analysis of Boolean functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
57:1
57:17
10.4230/LIPIcs.STACS.2024.57
article
Shortest Two Disjoint Paths in Conservative Graphs
Schlotter, Ildikó
1
2
https://orcid.org/0000-0002-0114-8280
Centre for Economic and Regional Studies, Budapest, Hungary
Budapest University of Technology and Economics, Hungary
We consider the following problem that we call the Shortest Two Disjoint Paths problem: given an undirected graph G = (V,E) with edge weights w:E → ℝ, two terminals s and t in G, find two internally vertex-disjoint paths between s and t with minimum total weight. As shown recently by Schlotter and Sebő (2022), this problem becomes NP-hard if edges can have negative weights, even if the weight function is conservative, i.e., there are no cycles in G with negative total weight. We propose a polynomial-time algorithm that solves the Shortest Two Disjoint Paths problem for conservative weights in the case when the negative-weight edges form a constant number of trees in G.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.57/LIPIcs.STACS.2024.57.pdf
Shortest paths
disjoint paths
conservative weights
graph algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
58:1
58:17
10.4230/LIPIcs.STACS.2024.58
article
Algorithms for Computing Closest Points for Segments
Wang, Haitao
1
https://orcid.org/0000-0001-8134-7409
Kahlert School of Computing, University of Utah, Salt Lake City, UT, USA
Given a set P of n points and a set S of n segments in the plane, we consider the problem of computing for each segment of S its closest point in P. The previously best algorithm solves the problem in n^{4/3}2^{O(log^*n)} time [Bespamyatnikh, 2003] and a lower bound (under a somewhat restricted model) Ω(n^{4/3}) has also been proved. In this paper, we present an O(n^{4/3}) time algorithm and thus solve the problem optimally (under the restricted model). In addition, we also present data structures for solving the online version of the problem, i.e., given a query segment (or a line as a special case), find its closest point in P. Our new results improve the previous work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.58/LIPIcs.STACS.2024.58.pdf
Closest points
Voronoi diagrams
Segment dragging queries
Hopcroft’s problem
Algebraic decision tree model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-03-11
289
59:1
59:17
10.4230/LIPIcs.STACS.2024.59
article
Lower Bounds for Set-Blocked Clauses Proofs
Yolcu, Emre
1
https://orcid.org/0000-0002-4255-9748
Carnegie Mellon University, Pittsburgh, PA, USA
We study propositional proof systems with inference rules that formalize restricted versions of the ability to make assumptions that hold without loss of generality, commonly used informally to shorten proofs. Each system we study is built on resolution. They are called BC⁻, RAT⁻, SBC⁻, and GER⁻, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution - all "without new variables." They may be viewed as weak versions of extended resolution (ER) since they are defined by first generalizing the extension rule and then taking away the ability to introduce new variables. Except for SBC⁻, they are known to be strictly between resolution and extended resolution.
Several separations between these systems were proved earlier by exploiting the fact that they effectively simulate ER. We answer the questions left open: We prove exponential lower bounds for SBC⁻ proofs of a binary encoding of the pigeonhole principle, which separates ER from SBC⁻. Using this new separation, we prove that both RAT⁻ and GER⁻ are exponentially separated from SBC⁻. This completes the picture of their relative strengths.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol289-stacs2024/LIPIcs.STACS.2024.59/LIPIcs.STACS.2024.59.pdf
proof complexity
separations
resolution
extended resolution
blocked clauses