eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
1
988
10.4230/LIPIcs.STACS.2021
article
LIPIcs, Volume 187, STACS 2021, Complete Volume
Bläser, Markus
1
Monmege, Benjamin
2
https://orcid.org/0000-0002-4717-9955
Saarland University, Germany
Aix-Marseille University, France
LIPIcs, Volume 187, STACS 2021, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021/LIPIcs.STACS.2021.pdf
LIPIcs, Volume 187, STACS 2021, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
0:i
0:xvi
10.4230/LIPIcs.STACS.2021.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Bläser, Markus
1
Monmege, Benjamin
2
https://orcid.org/0000-0002-4717-9955
Saarland University, Germany
Aix-Marseille University, France
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.0/LIPIcs.STACS.2021.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
1:1
1:20
10.4230/LIPIcs.STACS.2021.1
article
Optimization, Complexity and Invariant Theory (Invited Talk)
Bürgisser, Peter
1
Institut für Mathematik, Technische Universität Berlin, Germany
Invariant and representation theory studies symmetries by means of group actions and is a well established source of unifying principles in mathematics and physics. Recent research suggests its relevance for complexity and optimization through quantitative and algorithmic questions. The goal of the talk is to give an introduction to new algorithmic and analysis techniques that extend convex optimization from the classical Euclidean setting to a general geodesic setting. We also point out surprising connections to a diverse set of problems in different areas of mathematics, statistics, computer science, and physics.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.1/LIPIcs.STACS.2021.1.pdf
geometric invariant theory
geodesic optimization
non-commutative optimization
null cone
operator scaling
moment polytope
orbit closure intersection
geometric programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
2:1
2:7
10.4230/LIPIcs.STACS.2021.2
article
First-Order Transductions of Graphs (Invited Talk)
Ossona de Mendez, Patrice
1
2
https://orcid.org/0000-0003-0724-3729
Centre d'Analyse et de Mathématique Sociales CNRS UMR 8557, Paris, France
Charles University, Prague, Czech Republic
This paper is an extended abstract of my STACS 2021 talk "First-order transductions of graphs".
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.2/LIPIcs.STACS.2021.2.pdf
Finite model theory
structural graph theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
3:1
3:1
10.4230/LIPIcs.STACS.2021.3
article
On the Fluted Fragment (Invited Talk)
Tendera, Lidia
1
https://orcid.org/0000-0003-0681-4040
Institute of Computer Science, University of Opole, Poland
The fluted fragment is a recently rediscovered decidable fragment of first-order logic whose history dates back to Quine and the 1960s. The fragment is defined by simultaneously fixing the order in which variables occur in atomic formulas and the order in which variables are quantified; no further restrictions apply concerning, e.g., the number of variables, guardedness, or the usage of negation. In the talk we review the motivation for and history of the fragment, discuss the differences between the fluted fragment and other decidable fragments of first-order logic, present its basic model-theoretic and algorithmic properties, and discuss recent work concerning the limits of decidability of its extensions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.3/LIPIcs.STACS.2021.3.pdf
decidability
fluted fragment
first-order logic
complexity
satisfiability
non-elementary
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
4:1
4:20
10.4230/LIPIcs.STACS.2021.4
article
Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding
Aggarwal, Divesh
1
2
Chen, Yanlin
3
Kumar, Rajendra
4
2
https://orcid.org/0000-0002-4240-5458
Shen, Yixin
5
Centre for Quantum Technologies, Singapore
National University of Singapore, Singapore
Institute of Information Science, Academia Sinica, Taipei, Taiwan
IIT Kanpur, India
Université de Paris, IRIF, CNRS, F-75006, France
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state-of-the-art for provable classical/quantum algorithms for SVP. We present the following results.
1) A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer 4 ≤ q ≤ √n, our algorithm takes q^{13n+o(n)} time and requires poly(n)⋅ q^{16n/q²} memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.
2) A quantum algorithm that runs in time 2^{0.9533n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [Divesh Aggarwal et al., 2015] that has a time and space complexity 2^{n+o(n)}.
3) A classical algorithm for SVP that runs in 2^{1.741n+o(n)} time and 2^{0.5n+o(n)} space. This improves over an algorithm of [Yanlin Chen et al., 2018] that has the same space complexity.
The time complexities of our classical and quantum algorithms are expressed using a quantity related to the kissing number of a lattice. A known upper bound on this quantity is 2^{0.402n}, but in practice, for most lattices, it can be much smaller, even 2^{o(n)}. In that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.4/LIPIcs.STACS.2021.4.pdf
Lattices
Shortest Vector Problem
Discrete Gaussian Sampling
Time-Space Tradeoff
Quantum computation
Bounded distance decoding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
5:1
5:11
10.4230/LIPIcs.STACS.2021.5
article
An FPT Algorithm for Elimination Distance to Bounded Degree Graphs
Agrawal, Akanksha
1
https://orcid.org/0000-0002-0656-7572
Kanesh, Lawqueen
2
https://orcid.org/0000-0001-9274-4119
Panolan, Fahad
3
https://orcid.org/0000-0001-6213-8687
Ramanujan, M. S.
4
https://orcid.org/0000-0002-2116-6048
Saurabh, Saket
5
2
https://orcid.org/0000-0001-7847-6402
Indian Institute of Technology Madras, Chennai, India
The Institute of Mathematical Sciences, HBNI, Chennai, India
Indian Institute of Technology, Hyderabad, India
University of Warwick, Coventry, UK
University of Bergen, Norway
In the literature on parameterized graph problems, there has been an increased effort in recent years aimed at exploring novel notions of graph edit-distance that are more powerful than the size of a modulator to a specific graph class. In this line of research, Bulian and Dawar [Algorithmica, 2016] introduced the notion of elimination distance and showed that deciding whether a given graph has elimination distance at most k to any minor-closed class of graphs is fixed-parameter tractable parameterized by k [Algorithmica, 2017]. They showed that Graph Isomorphism parameterized by the elimination distance to bounded degree graphs is fixed-parameter tractable and asked whether determining the elimination distance to the class of bounded degree graphs is fixed-parameter tractable. Recently, Lindermayr et al. [MFCS 2020] obtained a fixed-parameter algorithm for this problem in the special case where the input is restricted to K₅-minor free graphs.
In this paper, we answer the question of Bulian and Dawar in the affirmative for general graphs. In fact, we give a more general result capturing elimination distance to any graph class characterized by a finite set of graphs as forbidden induced subgraphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.5/LIPIcs.STACS.2021.5.pdf
Elimination Distance
Fixed-parameter Tractability
Graph Modification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
6:1
6:13
10.4230/LIPIcs.STACS.2021.6
article
A Unified Framework of Quantum Walk Search
Apers, Simon
1
2
Gilyén, András
3
Jeffery, Stacey
4
Université libre de Bruxelles, Brussels, Belgium
CWI, Amsterdam, The Netherlands
California Institute of Technology, Pasadena, CA, USA
QuSoft, CWI, Amsterdam, The Netherlands
Many quantum algorithms critically rely on quantum walk search, or the use of quantum walks to speed up search problems on graphs. However, the main results on quantum walk search are scattered over different, incomparable frameworks, such as the hitting time framework, the MNRS framework, and the electric network framework. As a consequence, a number of pieces are currently missing. For example, recent work by Ambainis et al. (STOC'20) shows how quantum walks starting from the stationary distribution can always find elements quadratically faster. In contrast, the electric network framework allows quantum walks to start from an arbitrary initial state, but it only detects marked elements.
We present a new quantum walk search framework that unifies and strengthens these frameworks, leading to a number of new results. For example, the new framework effectively finds marked elements in the electric network setting. The new framework also allows one to interpolate between the hitting time framework, which minimizes the number of walk steps, and the MNRS framework, which minimizes the number of times elements are checked for being marked. This allows for a more natural tradeoff between resources. In addition to quantum walks and phase estimation, our new algorithm makes use of quantum fast-forwarding, similar to the recent results by Ambainis et al. This perspective also enables us to derive more general complexity bounds on the quantum walk algorithms, e.g., based on Monte Carlo type bounds for the corresponding classical walk. As a final result, we show how in certain cases we can avoid the use of phase estimation and quantum fast-forwarding, answering an open question of Ambainis et al.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.6/LIPIcs.STACS.2021.6.pdf
Quantum Algorithms
Quantum Walks
Graph Theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
7:1
7:17
10.4230/LIPIcs.STACS.2021.7
article
Achieving Anonymity via Weak Lower Bound Constraints for k-Median and k-Means
Arutyunova, Anna
1
Schmidt, Melanie
2
Universität Bonn, Germany
Universität Köln, Germany
We study k-clustering problems with lower bounds, including k-median and k-means clustering with lower bounds. In addition to the point set P and the number of centers k, a k-clustering problem with (uniform) lower bounds is given a number B. The solution space is restricted to clusterings where every cluster has at least B points. We demonstrate how to approximate k-median with lower bounds via a reduction to facility location with lower bounds, for which O(1)-approximation algorithms are known.
Then we propose a new constrained clustering problem with lower bounds where we allow points to be assigned multiple times (to different centers). This means that for every point, the clustering specifies a set of centers to which it is assigned. We call this clustering with weak lower bounds. We give an 8-approximation for k-median clustering with weak lower bounds and an O(1)-approximation for k-means with weak lower bounds.
We conclude by showing that, at a constant increase in the approximation factor, we can restrict the number of assignments of every point to 2 (or, if we allow fractional assignments, to 1+ε). This also leads to the first bicriteria approximation algorithm for k-means with (standard) lower bounds, where bicriteria is interpreted in the sense that the lower bounds are violated by a constant factor.
All algorithms in this paper run in time that is polynomial in n and k (and d for the Euclidean variants considered).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.7/LIPIcs.STACS.2021.7.pdf
Clustering with Constraints
Lower Bounds
k-Means
Anonymity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
8:1
8:15
10.4230/LIPIcs.STACS.2021.8
article
Bidimensional Linear Recursive Sequences and Universality of Unambiguous Register Automata
Barloy, Corentin
1
Clemente, Lorenzo
2
https://orcid.org/0000-0003-0578-9103
École Normale Supérieure de Paris, PSL, France
University of Warsaw, Poland
We study the universality and inclusion problems for register automata over equality data (A, =). We show that the universality problem L(B) = (Σ × A)^* and the inclusion problem L(A) ⊆ L(B) can be solved with 2-EXPTIME complexity when both automata are without guessing and B is unambiguous, improving on the currently best-known 2-EXPSPACE upper bound by Mottet and Quaas. When the number of registers of both automata is fixed, we obtain a lower EXPTIME complexity, also improving on the EXPSPACE upper bound of Mottet and Quaas for a fixed number of registers. We reduce inclusion to universality, and then we reduce universality to the problem of counting the number of orbits of runs of the automaton. We show that the orbit-counting function satisfies a system of bidimensional linear recursive equations with polynomial coefficients (linrec), which generalises analogous recurrences for the Stirling numbers of the second kind, and then we show that universality reduces to the zeroness problem for linrec sequences. While such a counting approach is classical and has successfully been applied to unambiguous finite automata and grammars over finite alphabets, its application to register automata over infinite alphabets is novel.
We provide two algorithms to decide the zeroness problem for bidimensional linear recursive sequences arising from orbit-counting functions. Both algorithms rely on techniques from linear non-commutative algebra. The first algorithm performs variable elimination and has elementary complexity. The second algorithm is a refined version of the first one and it relies on the computation of the Hermite normal form of matrices over a skew polynomial field. The second algorithm yields an EXPTIME decision procedure for the zeroness problem of linrec sequences, which in turn yields the claimed bounds for the universality and inclusion problems of register automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.8/LIPIcs.STACS.2021.8.pdf
unambiguous register automata
universality and inclusion problems
multi-dimensional linear recurrence sequences
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
9:1
9:17
10.4230/LIPIcs.STACS.2021.9
article
Tight Approximation Guarantees for Concave Coverage Problems
Barman, Siddharth
1
Fawzi, Omar
2
Fermé, Paul
2
Indian Institute of Science, Bangalore, India
Univ. Lyon, ENS Lyon, UCBL, CNRS, Inria, LIP, F-69342, Lyon Cedex 07, France
In the maximum coverage problem, we are given subsets T_1, …, T_m of a universe [n] along with an integer k and the objective is to find a subset S ⊆ [m] of size k that maximizes C(S) : = |⋃_{i ∈ S} T_i|. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of 1-e^{-1}.
In this work we consider a generalization of this problem wherein an element a can contribute by an amount that depends on the number of times it is covered. Given a concave, nondecreasing function φ, we define C^{φ}(S) := ∑_{a ∈ [n]}w_aφ(|S|_a), where |S|_a = |{i ∈ S : a ∈ T_i}|. The standard maximum coverage problem corresponds to taking φ(j) = min{j,1}. For any such φ, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of φ, defined by α_{φ} : = min_{x ∈ ℕ^*} 𝔼[φ(Poi(x))] / φ(𝔼[Poi(x)]). Complementing this approximation guarantee, we establish a matching NP-hardness result when φ grows in a sublinear way.
As special cases, we improve the result of [Siddharth Barman et al., 2020] on maximum multi-coverage, which was based on the unique games conjecture, and we recover the result of [Szymon Dudycz et al., 2020] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases, and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems, and approval-based voting for general rules.
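As an illustration of the baseline the abstract builds on, here is a minimal sketch of the classical greedy algorithm for maximum coverage, which achieves the optimal 1-e^{-1} ratio; the function name and the example sets are illustrative, not taken from the paper.

```python
# A sketch of greedy maximum coverage: pick k subsets, each time choosing
# the one that covers the most not-yet-covered elements.
# Inputs below are made up for illustration.

def greedy_max_coverage(subsets, k):
    """Greedily select k subsets maximizing the number of newly covered elements."""
    covered = set()
    chosen = []
    remaining = list(range(len(subsets)))
    for _ in range(k):
        # index of the subset with the largest marginal coverage gain
        best = max(remaining, key=lambda i: len(subsets[i] - covered))
        chosen.append(best)
        covered |= subsets[best]
        remaining.remove(best)
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, 2)
```

With k = 2 on this instance, the greedy choice covers the whole universe {1,…,6}.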
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.9/LIPIcs.STACS.2021.9.pdf
Approximation Algorithms
Coverage Problems
Concave Function
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
10:1
10:16
10.4230/LIPIcs.STACS.2021.10
article
Symmetric Promise Constraint Satisfaction Problems: Beyond the Boolean Case
Barto, Libor
1
https://orcid.org/0000-0002-8481-6458
Battistelli, Diego
1
Berg, Kevin M.
1
https://orcid.org/0000-0002-1555-4239
Department of Algebra, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
The Promise Constraint Satisfaction Problem (PCSP) is a recently introduced vast generalization of the Constraint Satisfaction Problem (CSP). We investigate the computational complexity of a class of PCSPs beyond the most studied cases: approximation variants of satisfiability and graph coloring problems. We give an almost complete classification for the class of PCSPs of the form: given a 3-uniform hypergraph that has an admissible 2-coloring, find an admissible 3-coloring, where admissibility is given by a ternary symmetric relation. The only PCSP of this sort whose complexity is left open in this work is a natural hypergraph coloring problem, where admissibility is given by the relation "if two colors are equal, then the remaining one is higher."
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.10/LIPIcs.STACS.2021.10.pdf
constraint satisfaction problems
promise constraint satisfaction
Boolean PCSP
polymorphism
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
11:1
11:17
10.4230/LIPIcs.STACS.2021.11
article
A Characterization of Wreath Products Where Knapsack Is Decidable
Bergsträßer, Pascal
1
https://orcid.org/0000-0002-4681-2149
Ganardi, Moses
2
https://orcid.org/0000-0002-0775-7781
Zetzsche, Georg
2
https://orcid.org/0000-0002-6421-4388
Fachbereich Informatik, Technische Universität Kaiserslautern, Germany
Max Planck Institute for Software Systems (MPI-SWS), Kaiserslautern, Germany
The knapsack problem for groups was introduced by Miasnikov, Nikolaev, and Ushakov. It is defined for each finitely generated group G and takes as input group elements g_1,…,g_n,g ∈ G and asks whether there are x_1,…,x_n ≥ 0 with g_1^{x_1}⋯ g_n^{x_n} = g. We study the knapsack problem for wreath products G≀H of groups G and H.
Our main result is a characterization of those wreath products G≀H for which the knapsack problem is decidable. The characterization is in terms of decidability properties of the individual factors G and H. To this end, we introduce two decision problems, the intersection knapsack problem and its restriction, the positive intersection knapsack problem.
Moreover, we apply our main result to H₃(ℤ), the discrete Heisenberg group, and to Baumslag-Solitar groups BS(1,q) for q ≥ 1. First, we show that the knapsack problem is undecidable for G≀H₃(ℤ) for any G ≠ 1. This implies that for G ≠ 1 and for infinite and virtually nilpotent groups H, the knapsack problem for G≀H is decidable if and only if H is virtually abelian and solvability of systems of exponent equations is decidable for G. Second, we show that the knapsack problem is decidable for G≀BS(1,q) if and only if solvability of systems of exponent equations is decidable for G.
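To make the problem statement concrete: for the simplest group G = (ℤ, +), the knapsack instance g_1^{x_1}⋯g_n^{x_n} = g from the abstract becomes finding x_i ≥ 0 with x_1·g_1 + … + x_n·g_n = g. The following naive bounded search is purely illustrative (the bound and inputs are made up); it is not the paper's method.

```python
# Naive sketch of the group knapsack problem specialized to (Z, +):
# search for non-negative exponents x with sum(x_i * g_i) == g.
from itertools import product

def z_knapsack(gs, g, bound=20):
    """Try all x in {0..bound}^n; return the first solution or None."""
    for xs in product(range(bound + 1), repeat=len(gs)):
        if sum(x * gi for x, gi in zip(xs, gs)) == g:
            return xs
    return None

sol = z_knapsack([3, 5], 11)  # 2*3 + 1*5 = 11
```

For general groups the exponentiated product is not commutative, which is exactly why decidability becomes subtle; this abelian toy case only fixes the shape of the question.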
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.11/LIPIcs.STACS.2021.11.pdf
knapsack
wreath products
decision problems in group theory
decidability
discrete Heisenberg group
Baumslag-Solitar groups
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
12:1
12:16
10.4230/LIPIcs.STACS.2021.12
article
Synchronizing Strongly Connected Partial DFAs
Berlinkov, Mikhail V.
1
Ferens, Robert
2
Ryzhikov, Andrew
3
Szykuła, Marek
2
Institute of Natural Sciences and Mathematics, Ural Federal University, Ekaterinburg, Russia
Institute of Computer Science, University of Wrocław, Poland
Université Gustave Eiffel, LIGM, Marne-la-Vallée, France
We study synchronizing partial DFAs, which extend the classical concept of synchronizing complete DFAs and are a special case of synchronizing unambiguous NFAs. A partial DFA is called synchronizing if it has a word (called a reset word) whose action brings a non-empty subset of states to a unique state and is undefined for all other states. While in the general case the problem of checking whether a partial DFA is synchronizing is PSPACE-complete, we show that in the strongly connected case this problem can be efficiently reduced to the same problem for a complete DFA. Using combinatorial, algebraic, and formal language methods, we develop techniques that relate the main synchronization problems for strongly connected partial DFAs to the same problems for complete DFAs. In particular, this includes the Černý and the rank conjectures, the problem of finding a reset word, and upper bounds on the length of the shortest reset words of literal automata of finite prefix codes. We conclude that solving fundamental synchronization problems is equally hard in both models, as an essential improvement of the results for one model implies an improvement for the other.
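The reset-word definition in the abstract can be checked directly; the following sketch does so for a partial DFA given as a transition table. The example automaton is made up for illustration.

```python
# Check whether a word is a reset word of a partial DFA, following the
# definition: the word's action is defined on a non-empty set of states
# and sends all of them to one and the same state.

def is_reset_word(delta, word):
    """delta: {state: {letter: state}} partial transition function."""
    results = []
    for q in delta:
        cur = q
        for a in word:
            cur = delta.get(cur, {}).get(a)
            if cur is None:  # action undefined from this starting state
                break
        if cur is not None:
            results.append(cur)
    return len(results) > 0 and len(set(results)) == 1

delta = {
    0: {'a': 1, 'b': 0},
    1: {'a': 1, 'b': 2},
    2: {'a': 1},          # 'b' is undefined in state 2
}
```

Here "ab" is a reset word (every state reaches state 2), while "b" is not.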
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.12/LIPIcs.STACS.2021.12.pdf
Černý conjecture
literal automaton
partial automaton
prefix code
rank conjecture
reset threshold
reset word
synchronizing automaton
synchronizing word
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
13:1
13:16
10.4230/LIPIcs.STACS.2021.13
article
On Euclidean Steiner (1+ε)-Spanners
Bhore, Sujoy
1
https://orcid.org/0000-0003-0104-1659
Tóth, Csaba D.
2
3
https://orcid.org/0000-0002-8769-3190
Université Libre de Bruxelles, Brussels, Belgium
California State University Northridge, Los Angeles, CA, USA
Tufts University, Medford, MA, USA
Lightness and sparsity are two natural parameters for Euclidean (1+ε)-spanners. Classical results show that, when the dimension d ∈ ℕ and ε > 0 are constant, every set S of n points in d-space admits a (1+ε)-spanner with O(n) edges and weight proportional to that of the Euclidean MST of S. Tight bounds on the dependence on ε > 0 for constant d ∈ ℕ have been established only recently. Le and Solomon (FOCS 2019) showed that Steiner points can substantially improve the lightness and sparsity of a (1+ε)-spanner. They gave upper bounds of Õ(ε^{-(d+1)/2}) for the minimum lightness in dimensions d ≥ 3, and Õ(ε^{-(d-1)/2}) for the minimum sparsity in d-space for all d ≥ 1. They obtained lower bounds only in the plane (d = 2). Le and Solomon (ESA 2020) also constructed Steiner (1+ε)-spanners of lightness O(ε^{-1}logΔ) in the plane, where Δ ∈ Ω(log n) is the spread of S, defined as the ratio between the maximum and minimum distance between a pair of points.
In this work, we improve several bounds on the lightness and sparsity of Euclidean Steiner (1+ε)-spanners. Using a new geometric analysis, we establish lower bounds of Ω(ε^{-d/2}) for the lightness and Ω(ε^{-(d-1)/2}) for the sparsity of such spanners in Euclidean d-space for all d ≥ 2. We use the geometric insight from our lower bound analysis to construct Steiner (1+ε)-spanners of lightness O(ε^{-1}log n) for n points in the Euclidean plane.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.13/LIPIcs.STACS.2021.13.pdf
Geometric spanner
(1+ε)-spanner
lightness
sparsity
minimum weight
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
14:1
14:17
10.4230/LIPIcs.STACS.2021.14
article
A Nearly Optimal Deterministic Online Algorithm for Non-Metric Facility Location
Bienkowski, Marcin
1
https://orcid.org/0000-0002-2453-7772
Feldkord, Björn
2
https://orcid.org/0000-0001-6591-2420
Schmidt, Paweł
1
Institute of Computer Science, University of Wrocław, Poland
Heinz Nixdorf Institut & Department of Computer Science, Universität Paderborn, Germany
In the online non-metric variant of the facility location problem, there is a given graph consisting of a set F of facilities (each with a certain opening cost), a set C of potential clients, and weighted connections between them. The online part of the input is a sequence of clients from C, and in response to any requested client, an online algorithm may open an additional subset of facilities and must connect the given client to an open facility.
We give an online, polynomial-time deterministic algorithm for this problem, with a competitive ratio of O(log |F| ⋅ (log |C| + log log |F|)). The result is optimal up to log log factors. Our algorithm improves over the O((log |C| + log |F|) ⋅ (log |C| + log log |F|))-competitive construction that first reduces the facility location instance to a set cover instance and then solves it using the deterministic algorithm by Alon et al. [TALG 2006]. This is an asymptotic improvement in a typical scenario where |F| ≪ |C|.
We achieve this by a more direct approach: we design an algorithm for a fractional relaxation of the non-metric facility location problem with clustered facilities. To handle the constraints of such a non-covering LP, we combine the dual fitting and multiplicative weight updates approaches. By maintaining certain additional monotonicity properties of the created fractional solution, we can handle the dependencies between facilities and connections in a rounding routine.
Our result, combined with the algorithm by Naor et al. [FOCS 2011], yields the first deterministic algorithm for the online node-weighted Steiner tree problem. The resulting competitive ratio is O(log k ⋅ log² 𝓁) on graphs with 𝓁 nodes and k terminals.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.14/LIPIcs.STACS.2021.14.pdf
Online algorithms
deterministic rounding
linear programming
facility location
set cover
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
15:1
15:12
10.4230/LIPIcs.STACS.2021.15
article
An Asymptotically Fast Polynomial Space Algorithm for Hamiltonicity Detection in Sparse Directed Graphs
Björklund, Andreas
1
Private, Lund, Sweden - This work was done while being employed by Ericsson Research
We present a polynomial space Monte Carlo algorithm that, given a directed graph on n vertices with average outdegree δ, detects whether the graph has a Hamiltonian cycle in 2^{n-Ω(n/δ)} time. This asymptotic scaling of the savings in the running time matches the fastest known exponential space algorithm by Björklund and Williams (ICALP 2019). By comparison, the previously best polynomial space algorithm by Kowalik and Majewski (IPEC 2020) guarantees a 2^{n-Ω(n/(2^δ))} time bound.
Our algorithm combines for the first time the idea of obtaining a fingerprint of the presence of a Hamiltonian cycle through an inclusion-exclusion summation over the Laplacian of the graph, from Björklund, Kaski, and Koutis (ICALP 2017), with the idea of sieving for the non-zero terms in an inclusion-exclusion summation by listing solutions to systems of linear equations over ℤ₂, from Björklund and Husfeldt (FOCS 2013).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.15/LIPIcs.STACS.2021.15.pdf
Hamiltonian cycle
directed graph
worst case analysis algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
16:1
16:18
10.4230/LIPIcs.STACS.2021.16
article
Online Simple Knapsack with Reservation Costs
Böckenhauer, Hans-Joachim
1
https://orcid.org/0000-0001-9164-3674
Burjons, Elisabet
2
https://orcid.org/0000-0001-6161-7440
Hromkovič, Juraj
1
Lotze, Henri
2
https://orcid.org/0000-0001-5013-8831
Rossmanith, Peter
2
https://orcid.org/0000-0003-0177-8028
Department of Computer Science, ETH Zürich, Switzerland
Department of Computer Science, RWTH Aachen, Germany
In the Online Simple Knapsack Problem we are given a knapsack of unit size 1. Items of size at most 1 are presented in an iterative fashion, and an algorithm has to decide whether to permanently reject or include each item into the knapsack, without any knowledge about the rest of the instance. The goal is then to pack the knapsack as full as possible. In this work, we introduce a third option in addition to packing and rejecting an item, namely that of reserving an item for the cost of a fixed fraction α of its size. An algorithm may pay this fraction in order to postpone its decision on whether to include or reject the item until after the last item of the instance has been presented.
While the classical Online Simple Knapsack Problem does not admit any constantly bounded competitive ratio in the deterministic setting, we find that adding the possibility of reservation makes the problem constantly competitive, with competitive ratios that vary with the value of α. We give upper and lower bounds for the whole range of reservation costs, with tight bounds for costs up to 1/6 (a range that is strictly 2-competitive) and for costs between √2-1 and 1 (a range that is strictly (2+α)-competitive up to ϕ-1 and strictly 1/(1-α)-competitive above ϕ-1, where ϕ is the golden ratio).
With our analysis, we find a counterintuitive characteristic of the problem: intuitively, one would expect that the possibility of rejecting items becomes more and more helpful for an online algorithm as reservation costs grow. However, for reservation costs above √2-1, an algorithm that is unable to reject any items tightly matches the lower bound and is thus best possible. On the other hand, for any positive reservation cost smaller than 1/6, any algorithm that is unable to reject items performs considerably worse than one that is able to.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.16/LIPIcs.STACS.2021.16.pdf
Online problem
Simple knapsack
Reservation costs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
17:1
17:13
10.4230/LIPIcs.STACS.2021.17
article
Inapproximability of Diameter in Super-Linear Time: Beyond the 5/3 Ratio
Bonnet, Édouard
1
https://orcid.org/0000-0002-1653-5822
Univ. Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France
We show, assuming the Strong Exponential Time Hypothesis, that for every ε > 0, approximating directed Diameter on m-arc graphs within ratio 7/4 - ε requires m^{4/3 - o(1)} time. Our construction uses non-negative edge weights, and the bound holds even for sparse digraphs, i.e., digraphs in which the number of vertices n and the number of arcs m satisfy m = Õ(n). This is the first result that conditionally rules out a near-linear time 5/3-approximation for a variant of Diameter.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.17/LIPIcs.STACS.2021.17.pdf
Diameter
inapproximability
SETH lower bounds
k-Orthogonal Vectors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
18:1
18:15
10.4230/LIPIcs.STACS.2021.18
article
The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem
Brodowsky, Ulrich A.
1
Hougardy, Stefan
2
https://orcid.org/0000-0001-8656-3418
Pontsheide 20, 52076 Aachen, Germany
Research Institute for Discrete Mathematics, Universität Bonn, Germany
The 2-Opt heuristic is a simple improvement heuristic for the Traveling Salesman Problem. It starts with an arbitrary tour and then repeatedly replaces two edges of the tour by two other edges, as long as this yields a shorter tour. We prove that for Euclidean Traveling Salesman Problems with n cities, the approximation ratio of the 2-Opt heuristic is Θ(log n / log log n). This improves the upper bound of O(log n) given by Chandra, Karloff, and Tovey [Barun Chandra et al., 1999] in 1999.
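A minimal sketch of the 2-Opt improvement loop described above (a generic textbook version for points in the plane, not the paper's analysis machinery):

```python
import math

def tour_length(points, tour):
    # Total Euclidean length of the closed tour.
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    # Repeatedly replace two tour edges by two other edges
    # as long as this shortens the tour.
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share the endpoint tour[0]
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(points[a], points[c])
                         + math.dist(points[b], points[d])
                         - math.dist(points[a], points[b])
                         - math.dist(points[c], points[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On the unit square with the crossing tour [0, 2, 1, 3], one exchange removes both diagonals and yields the optimal tour of length 4.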
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.18/LIPIcs.STACS.2021.18.pdf
traveling salesman problem
metric TSP
Euclidean TSP
2-Opt
approximation algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
19:1
19:19
10.4230/LIPIcs.STACS.2021.19
article
A Framework of Quantum Strong Exponential-Time Hypotheses
Buhrman, Harry
1
2
Patro, Subhasree
1
2
Speelman, Florian
1
2
QuSoft, CWI, Amsterdam, The Netherlands
University of Amsterdam, The Netherlands
The strong exponential-time hypothesis (SETH) is a commonly used conjecture in the field of complexity theory. It essentially states that determining whether a CNF formula is satisfiable cannot be done faster than exhaustive search over all possible assignments. This hypothesis and its variants gave rise to a fruitful field of research, fine-grained complexity, which obtains (mostly tight) lower bounds for many problems in P whose unconditional lower bounds are very likely beyond current techniques. In this work, we introduce an extensive framework of Quantum Strong Exponential-Time Hypotheses, as quantum analogues of what SETH is for classical computation.
Using the QSETH framework, we are able to translate quantum query lower bounds on black-box problems to conditional quantum time lower bounds for many problems in P. As an example, we provide a conditional quantum time lower bound of Ω(n^1.5) for the Longest Common Subsequence and Edit Distance problems. We also show that the n² SETH-based lower bound for a recent scheme for Proofs of Useful Work carries over to the quantum setting using our framework, maintaining a quadratic gap between verifier and prover.
Lastly, we show that the assumptions in our framework cannot be simplified further with relativizing proof techniques, as they are false in relativized worlds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.19/LIPIcs.STACS.2021.19.pdf
complexity theory
fine-grained complexity
longest common subsequence
edit distance
quantum query complexity
strong exponential-time hypothesis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
20:1
20:18
10.4230/LIPIcs.STACS.2021.20
article
The Complexity of the Distributed Constraint Satisfaction Problem
Butti, Silvia
1
https://orcid.org/0000-0002-0171-2021
Dalmau, Victor
1
https://orcid.org/0000-0002-9365-7372
Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
We study the complexity of the Distributed Constraint Satisfaction Problem (DCSP) on a synchronous, anonymous network from a theoretical standpoint. In this setting, variables and constraints are controlled by agents which communicate with each other by sending messages through fixed communication channels. Our results endorse the well-known fact from classical CSPs that the complexity of fixed-template computational problems depends on the template’s invariance under certain operations. Specifically, we show that DCSP(Γ) is polynomial-time tractable if and only if Γ is invariant under symmetric polymorphisms of all arities. Otherwise, there are no algorithms that solve DCSP(Γ) in finite time. We also show that the same condition holds for the search variant of DCSP.
As a by-product, our results unveil a feature of a process's neighbourhood in a distributed network, its iterated degree, which plays a major role in the analysis. We explore this notion, establishing a tight connection with the basic linear programming relaxation of a CSP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.20/LIPIcs.STACS.2021.20.pdf
Constraint Satisfaction Problems
Distributed Algorithms
Polymorphisms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
21:1
21:19
10.4230/LIPIcs.STACS.2021.21
article
Distance Computations in the Hybrid Network Model via Oracle Simulations
Censor-Hillel, Keren
1
https://orcid.org/0000-0003-4395-5205
Leitersdorf, Dean
1
Polosukhin, Volodymyr
1
Technion - Israel Institute of Technology, Haifa, Israel
The Hybrid network model was introduced in [Augustine et al., SODA '20] to lay a theoretical foundation for networks that combine two modes of communication: one mode allows high-bandwidth communication with neighboring nodes, and the other allows low-bandwidth communication over few long-range connections at a time. This fundamentally abstracts networks such as hybrid data centers and class-based software-defined networks.
Our technical contribution is a density-aware approach that allows us to simulate a set of oracles for an overlay skeleton graph over a Hybrid network.
As applications of our oracle simulations, with additional machinery that we provide, we derive fast algorithms for fundamental distance-related tasks. One of our core contributions is an algorithm in the Hybrid model for computing exact weighted shortest paths from Õ(n^{1/3}) sources which completes in Õ(n^{1/3}) rounds w.h.p. This improves, in both the runtime and the number of sources, upon the algorithm of [Kuhn and Schneider, PODC ’20], which computes shortest paths from a single source in Õ(n^{2/5}) rounds w.h.p.
We additionally show a 2-approximation for weighted diameter and a (1+ε)-approximation for unweighted diameter, both in Õ(n^{1/3}) rounds w.h.p., which is comparable to the Ω̃(n^{1/3}) lower bound of [Kuhn and Schneider, PODC '20] for a (2-ε)-approximation of the weighted diameter and an exact unweighted diameter. We also provide fast distance approximations from multiple sources and fast approximations for eccentricities.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.21/LIPIcs.STACS.2021.21.pdf
Distributed graph algorithms
Hybrid network model
Distance computations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
22:1
22:14
10.4230/LIPIcs.STACS.2021.22
article
Simple Multi-Pass Streaming Algorithms for Skyline Points and Extreme Points
Chan, Timothy M.
1
Rahul, Saladi
2
Dept. of Computer Science, University of Illinois at Urbana-Champaign, IL, USA
Dept. of Computer Science and Automation, Indian Institute of Science Bangalore, India
In this paper, we present simple randomized multi-pass streaming algorithms for two fundamental computational geometry problems: finding the skyline (maximal) points and finding the extreme points of the convex hull. For the skyline problem, one of our algorithms uses O(h) space and performs O(log n) passes, where h is the number of skyline points. This improves the space bound of the currently best known result by Das Sarma, Lall, Nanongkai, and Xu [VLDB'09] by a logarithmic factor. For the extreme points problem, we present the first non-trivial result for any constant dimension greater than two: an O(h log^{O(1)} n) space and O(log^d n) pass algorithm, where h is the number of extreme points. Finally, we argue why randomization seems unavoidable for these problems by proving lower bounds on the performance of deterministic algorithms for a related problem of finding maximal elements in a poset.
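For intuition, here is an in-memory computation of the 2-D skyline points; the paper's contribution is achieving this in a small-space multi-pass streaming model, which this sketch does not attempt.

```python
def skyline(points):
    # 2-D maximal points: p is on the skyline iff no other point
    # strictly exceeds it in both coordinates.  Sweep in decreasing x,
    # keeping each point whose y beats every y seen so far.
    result, best_y = [], float("-inf")
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            result.append((x, y))
            best_y = y
    return result
```

For example, among {(1,3), (2,2), (3,1), (0,0), (2,3)} only (3,1) and (2,3) are maximal; (1,3) is dominated by (2,3).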
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.22/LIPIcs.STACS.2021.22.pdf
multi-pass streaming algorithms
skyline
convex hull
extreme points
randomized algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
23:1
23:19
10.4230/LIPIcs.STACS.2021.23
article
One-Tape Turing Machine and Branching Program Lower Bounds for MCSP
Cheraghchi, Mahdi
1
Hirahara, Shuichi
2
Myrisiotis, Dimitrios
3
Yoshida, Yuichi
2
Department of EECS, University of Michigan, Ann Arbor, MI, USA
National Institute of Informatics, Tokyo, Japan
Department of Computing, Imperial College London, London, UK
For a size parameter s: ℕ → ℕ, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0,1}ⁿ → {0,1} (represented by a string of length N := 2ⁿ) is at most a threshold s(n). A recent line of work exhibited "hardness magnification" phenomena for MCSP: a very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ₁ > 0, if MCSP[2^{μ₁⋅n}] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N^{1.01}, then P≠NP.
In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs:
1) A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2^{μ₂⋅n}] in time N^{1.99}, for some constant μ₂ > μ₁.
2) A non-deterministic (or parity) branching program of size o(N^{1.5}/log N) cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nečiporuk method to MKTP, which previously appeared to be difficult.
3) The size of any non-deterministic, co-non-deterministic, or parity branching program computing MCSP is at least N^{1.5-o(1)}. These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models.
The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola 2019). En route, we obtain several related results:
1) There exists a (local) hitting set generator with seed length Õ(√N) secure against read-once polynomial-size non-deterministic branching programs on N-bit inputs.
2) Any read-once co-non-deterministic branching program computing MCSP must have size at least 2^Ω̃(N).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.23/LIPIcs.STACS.2021.23.pdf
Minimum Circuit Size Problem
Kolmogorov Complexity
One-Tape Turing Machines
Branching Programs
Lower Bounds
Pseudorandom Generators
Hitting Set Generators
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
24:1
24:15
10.4230/LIPIcs.STACS.2021.24
article
Inference and Mutual Information on Random Factor Graphs
Coja-Oghlan, Amin
1
Hahn-Klimroth, Max
1
Loick, Philipp
1
Müller, Noela
2
Panagiotou, Konstantinos
2
Pasch, Matija
2
Mathematics Institute, Goethe Universität Frankfurt am Main, Germany
Mathematics Institute, University of Munich, Germany
Random factor graphs provide a powerful framework for the study of inference problems such as decoding problems or the stochastic block model. Information-theoretically, the key quantity of interest is the mutual information between the observed factor graph and the underlying ground truth around which the factor graph was created; in the stochastic block model, this would be the planted partition. The mutual information gauges whether and how well the ground truth can be inferred from the observable data. For a very general model of random factor graphs, we verify a formula for the mutual information predicted by physics techniques. As an application, we prove a conjecture about low-density generator matrix codes from [Montanari: IEEE Transactions on Information Theory 2005]. Further applications include phase transitions of the stochastic block model and the mixed k-spin model from physics.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.24/LIPIcs.STACS.2021.24.pdf
Information theory
random factor graphs
inference problems
phase transitions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
25:1
25:19
10.4230/LIPIcs.STACS.2021.25
article
The Edit Distance to k-Subsequence Universality
Day, Joel D.
1
https://orcid.org/0000-0003-0738-9816
Fleischmann, Pamela
2
https://orcid.org/0000-0002-1531-7970
Kosche, Maria
3
https://orcid.org/0000-0002-2165-2695
Koß, Tore
3
https://orcid.org/0000-0001-6002-1581
Manea, Florin
3
4
https://orcid.org/0000-0001-6094-3324
Siemer, Stefan
3
https://orcid.org/0000-0001-7509-8135
Loughborough University, UK
Computer Science Department, Universität Kiel, Germany
Computer Science Department, Universität Göttingen, Germany
Campus-Institut Data Science, Göttingen, Germany
A word u is a subsequence of another word w if u can be obtained from w by deleting some of its letters. In the early 1970s, Imre Simon defined the relation ∼_k (now called the Simon congruence) as follows: two words are ∼_k-congruent if they have exactly the same set of subsequences of length at most k. This relation was central in defining and analysing piecewise testable languages, but has also found many applications in areas such as algorithmic learning theory, database theory, and computational linguistics. Recently, it was shown that testing whether two words are ∼_k-congruent can be done in optimal linear time. It is thus a natural next step to ask, for two words w and u which are not ∼_k-equivalent, what the minimal number of edit operations is that we need to perform on w in order to obtain a word which is ∼_k-equivalent to u.
In this paper, we consider this problem in a particularly interesting setting: when u is a k-subsequence universal word. A word u with alph(u) = Σ is called k-subsequence universal if the set of subsequences of length k of u contains all possible words of length k over Σ. Our results are thus a series of efficient algorithms computing the edit distance from w to the language of k-subsequence universal words.
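A brute-force checker for the k-subsequence universality property defined above (exponential in k; the hypothetical `is_k_universal` is for illustration only, not one of the paper's efficient algorithms):

```python
from itertools import product

def is_k_universal(word, k):
    # True iff every word of length k over alph(word) occurs in `word`
    # as a (scattered) subsequence.
    alphabet = sorted(set(word))

    def has_subseq(w, u):
        it = iter(w)
        return all(c in it for c in u)  # standard subsequence-check idiom

    return all(has_subseq(word, u) for u in product(alphabet, repeat=k))
```

For instance, "abcabc" is 2-subsequence universal over {a,b,c}, while "abc" is not (it misses "aa").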
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.25/LIPIcs.STACS.2021.25.pdf
Subsequence
Scattered factor
Subword
Universality
k-subsequence universality
Edit distance
Efficient algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
26:1
26:17
10.4230/LIPIcs.STACS.2021.26
article
Barrington Plays Cards: The Complexity of Card-Based Protocols
Dvořák, Pavel
1
Koucký, Michal
1
Charles University, Prague, Czech Republic
In this paper, we study the computational complexity of functions that have efficient card-based protocols. The study of card-based protocols was initiated by den Boer [den Boer, 1990] as a means for secure two-party computation. Our contribution is two-fold: we classify a large class of protocols with respect to the computational complexity of the functions they compute, and we propose other encodings of inputs which require fewer cards than the usual 2-card representation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.26/LIPIcs.STACS.2021.26.pdf
Efficient card-based protocol
Branching program
Turing machine
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
27:1
27:18
10.4230/LIPIcs.STACS.2021.27
article
Round-Competitive Algorithms for Uncertainty Problems with Parallel Queries
Erlebach, Thomas
1
https://orcid.org/0000-0002-4470-5868
Hoffmann, Michael
1
de Lima, Murilo Santos
1
https://orcid.org/0000-0002-2297-811X
School of Informatics, University of Leicester, UK
The area of computing with uncertainty considers problems where some information about the input elements is uncertain, but can be obtained using queries. For example, instead of the weight of an element, we may be given an interval that is guaranteed to contain the weight, and a query can be performed to reveal the weight. While previous work has considered models where queries are asked either sequentially (adaptive model) or all at once (non-adaptive model), and the goal is to minimize the number of queries that are needed to solve the given problem, we propose and study a new model where k queries can be made in parallel in each round, and the goal is to minimize the number of query rounds. We use competitive analysis and present upper and lower bounds on the number of query rounds required by any algorithm in comparison with the optimal number of query rounds. Given a set of uncertain elements and a family of m subsets of that set, we present an algorithm for determining the value of the minimum of each of the subsets that requires at most (2+ε) ⋅ opt_k + O((1/ε) ⋅ lg m) rounds for every 0 < ε < 1, where opt_k is the optimal number of rounds, as well as nearly matching lower bounds. For the problem of determining the i-th smallest value and identifying all elements with that value in a set of uncertain elements, we give a 2-round-competitive algorithm. We also show that the problem of sorting a family of sets of uncertain elements admits a 2-round-competitive algorithm, and that this is best possible.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.27/LIPIcs.STACS.2021.27.pdf
online algorithms
competitive analysis
explorable uncertainty
parallel algorithms
minimum problem
selection problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
28:1
28:16
10.4230/LIPIcs.STACS.2021.28
article
Church Synthesis on Register Automata over Linearly Ordered Data Domains
Exibard, Léo
1
2
Filiot, Emmanuel
2
Khalimov, Ayrat
2
Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
Université libre de Bruxelles, Brussels, Belgium
Register automata are finite automata equipped with a finite set of registers in which they can store data, i.e. elements from an unbounded or infinite alphabet. They provide a simple formalism to specify the behaviour of reactive systems operating over data ω-words. We study the synthesis problem for specifications given as register automata over a linearly ordered data domain (e.g. (ℕ, ≤) or (ℚ, ≤)), which allow data to be compared with respect to the linear order. To that end, we extend the classical Church synthesis game to infinite alphabets: two players, Adam and Eve, alternately play some data, and Eve wins whenever their interaction complies with the specification, which is a language of ω-words over ordered data. Such games are, however, undecidable, even when the specification is recognised by a deterministic register automaton. This is in contrast with the equality case, where the problem is only undecidable for nondeterministic and universal specifications.
Thus, we study one-sided Church games, where Eve instead operates over a finite alphabet, while Adam still manipulates data. We show that such games are determined, and that deciding the existence of a winning strategy is in ExpTime, both for ℚ and ℕ. This follows from a study of constraint sequences, which abstract the behaviour of register automata and allow us to reduce Church games to ω-regular games. Lastly, we apply these results to the transducer synthesis problem for input-driven register automata, where each output data value is restricted to be the content of some register, and show that if there exists an implementation, then there exists one which is a register transducer.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.28/LIPIcs.STACS.2021.28.pdf
Synthesis
Church Game
Register Automata
Transducers
Ordered Data Words
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
29:1
29:16
10.4230/LIPIcs.STACS.2021.29
article
A Faster Algorithm for Finding Tarski Fixed Points
Fearnley, John
1
Savani, Rahul
1
https://orcid.org/0000-0003-1262-7831
Department of Computer Science, University of Liverpool, UK
Dang et al. have given an algorithm that can find a Tarski fixed point in a k-dimensional lattice of width n using O(log^k n) queries [Chuangyin Dang et al., 2020]. Multiple authors have conjectured that this algorithm is optimal [Chuangyin Dang et al., 2020; Kousha Etessami et al., 2020], and indeed this has been proven for two-dimensional instances [Kousha Etessami et al., 2020]. We show that these conjectures are false in dimension three or higher by giving an O(log² n) query algorithm for the three-dimensional Tarski problem, which generalises to give an O(log^{k-1} n) query algorithm for the k-dimensional problem when k ≥ 3.
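As a warm-up for the query model, the one-dimensional case already shows how monotonicity drives a binary search; this is a standard observation, not the paper's three-dimensional algorithm.

```python
def tarski_1d(f, n):
    # Binary search for a fixed point of a monotone f : {1..n} -> {1..n}.
    # If f(m) > m, then f maps [m+1, n] into itself (a fixed point lies
    # there by Knaster-Tarski); symmetrically for f(m) < m.
    lo, hi = 1, n
    while True:
        mid = (lo + hi) // 2
        v = f(mid)
        if v == mid:
            return mid
        if v > mid:
            lo = mid + 1
        else:
            hi = mid - 1
```

Each step halves the search interval, so the query count is O(log n), matching the k = 1 case of the O(log^k n) bound.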
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.29/LIPIcs.STACS.2021.29.pdf
query complexity
Tarski fixed points
total function problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
30:1
30:17
10.4230/LIPIcs.STACS.2021.30
article
Solving One Variable Word Equations in the Free Group in Cubic Time
Ferens, Robert
1
https://orcid.org/0000-0002-0079-1936
Jeż, Artur
1
https://orcid.org/0000-0003-4321-3105
Institute of Computer Science, University of Wrocław, Poland
A word equation with one variable in a free group is given as U = V, where both U and V are words over the alphabet consisting of the generators of the free group together with X and X⁻¹, for a fixed variable X. An element of the free group is a solution when substituting it for X yields a true equality (interpreted in the free group) of left- and right-hand sides. It is known that the set of all solutions of a given word equation with one variable is a finite union of sets of the form {α wⁱ β : i ∈ ℤ}, where α, w, β are reduced words over the alphabet of generators, and a polynomial-time algorithm (of a high degree) computing this set is known. We provide a cubic time algorithm for this problem, which also shows that the set of solutions consists of at most a quadratic number of the above-mentioned sets. The algorithm uses only simple tools of word combinatorics and group theory and is simple to state. Its analysis is involved and focuses on the combinatorics of occurrences of powers of a word within a larger word.
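A sketch of verifying a candidate solution by substitution and free reduction. The encoding is an illustrative assumption (lowercase letters are generators, uppercase their inverses, 'X' marks the variable and 'Y' marks X⁻¹); the paper's cubic-time solver itself is far more involved.

```python
def free_reduce(word):
    # Cancel adjacent inverse pairs (e.g. 'aA' or 'Bb') until none remain.
    stack = []
    for c in word:
        if stack and stack[-1] == c.swapcase():
            stack.pop()
        else:
            stack.append(c)
    return "".join(stack)

def inv(word):
    # Inverse in the free group: reverse the word and invert each letter.
    return word[::-1].swapcase()

def is_solution(U, V, sol):
    # Substitute sol for the marker 'X' (and inv(sol) for 'Y'), then
    # compare the freely reduced left- and right-hand sides.
    def substitute(word):
        return free_reduce("".join(
            sol if c == 'X' else inv(sol) if c == 'Y' else c for c in word))
    return substitute(U) == substitute(V)
```

For the commutation equation Xa = aX, every power of a is a solution, while b is not.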
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.30/LIPIcs.STACS.2021.30.pdf
Word equations
free group
one-variable equations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
31:1
31:14
10.4230/LIPIcs.STACS.2021.31
article
Diverse Collections in Matroids and Graphs
Fomin, Fedor V.
1
https://orcid.org/0000-0003-1955-4612
Golovach, Petr A.
1
https://orcid.org/0000-0002-2619-2990
Panolan, Fahad
2
https://orcid.org/0000-0001-6213-8687
Philip, Geevarghese
3
4
https://orcid.org/0000-0003-0717-7303
Saurabh, Saket
5
1
University of Bergen, Norway
Department of Computer Science and Engineering, IIT Hyderabad, India
Chennai Mathematical Institute, India
UMI ReLaX, Chennai, India
Institute of Mathematical Sciences, Chennai, India
We investigate the parameterized complexity of finding diverse sets of solutions to three fundamental combinatorial problems, two from the theory of matroids and the third from graph theory. The input to the Weighted Diverse Bases problem consists of a matroid M, a weight function ω:E(M)→ℕ, and integers k ≥ 1, d ≥ 0. The task is to decide if there is a collection of k bases B_1, ..., B_k of M such that the weight of the symmetric difference of any pair of these bases is at least d. This is a diverse variant of the classical matroid base packing problem. The input to the Weighted Diverse Common Independent Sets problem consists of two matroids M₁, M₂ defined on the same ground set E, a weight function ω:E→ℕ, and integers k ≥ 1, d ≥ 0. The task is to decide if there is a collection of k common independent sets I_1, ..., I_k of M₁ and M₂ such that the weight of the symmetric difference of any pair of these sets is at least d. This is motivated by the classical weighted matroid intersection problem. The input to the Diverse Perfect Matchings problem consists of a graph G and integers k ≥ 1, d ≥ 0. The task is to decide if G contains k perfect matchings M_1, ..., M_k such that the symmetric difference of any two of these matchings has size at least d.
The underlying problem of finding one solution (basis, common independent set, or perfect matching) is known to be solvable in polynomial time for each of these problems, and Diverse Perfect Matchings is known to be NP-hard for k = 2. We show that Weighted Diverse Bases and Weighted Diverse Common Independent Sets are both NP-hard. We also show that Diverse Perfect Matchings cannot be solved in polynomial time (unless P=NP) even for the case d = 1. We derive fixed-parameter tractable (FPT) algorithms for all three problems with (k,d) as the parameter.
The above results on matroids are derived under the assumption that the input matroids are given as independence oracles. For Weighted Diverse Bases we present a polynomial-time algorithm that takes a representation of the input matroid over a finite field and computes a poly(k,d)-sized kernel for the problem.
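The diversity measure shared by all three problems is the pairwise symmetric difference; a minimal checker for the unweighted case (a hypothetical helper, shown for concreteness):

```python
from itertools import combinations

def is_diverse(collection, d):
    # True iff every pair of solutions (given as sets) has a symmetric
    # difference of size at least d.  The weighted variants would sum
    # element weights over A ^ B instead of taking its cardinality.
    return all(len(A ^ B) >= d for A, B in combinations(collection, 2))
```

For example, {1,2} and {3,4} are at symmetric-difference distance 4, while {1,2} and {2,3} are only at distance 2.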
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.31/LIPIcs.STACS.2021.31.pdf
Matroids
Diverse solutions
Fixed-parameter tractable algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
32:1
32:17
10.4230/LIPIcs.STACS.2021.32
article
Rice-Like Theorems for Automata Networks
Gamard, Guilhem
1
Guillon, Pierre
2
Perrot, Kevin
1
3
Theyssier, Guillaume
2
Aix-Marseille Université, Université de Toulon, CNRS, LIS, Marseille, France
Aix-Marseille Université, CNRS, I2M, Marseille, France
Université Côte d'Azur, CNRS, I3S, Sophia Antipolis, France
We prove general complexity lower bounds on automata networks, in the style of Rice’s theorem, but in the computable world. Our main result is that testing any fixed first-order property on the dynamics of an automata network is either trivial, or NP-hard, or coNP-hard. Moreover, there exist such properties that are arbitrarily high in the polynomial-time hierarchy. We also prove that testing a first-order property given as input on an automata network (also part of the input) is PSPACE-hard. Besides, we show that, under a natural effectiveness condition, any nontrivial property of the limit set of a nondeterministic network is PSPACE-hard. We also show that it is PSPACE-hard to separate deterministic networks with a very high and a very low number of limit configurations; however, the problem of deciding whether the number of limit configurations is maximal up to a polynomial quantity belongs to the polynomial-time hierarchy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.32/LIPIcs.STACS.2021.32.pdf
Automata networks
Rice theorem
complexity classes
polynomial hierarchy
hardness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
33:1
33:19
10.4230/LIPIcs.STACS.2021.33
article
Auction Algorithms for Market Equilibrium with Weak Gross Substitute Demands and Their Applications
Garg, Jugal
1
https://orcid.org/0000-0001-6439-7308
Husić, Edin
2
https://orcid.org/0000-0002-6708-5112
Végh, László A.
2
https://orcid.org/0000-0003-1152-200X
University of Illinois at Urbana-Champaign, IL, USA
Department of Mathematics, The London School of Economics and Political Science, UK
We consider the Arrow-Debreu exchange market model where agents' demands satisfy the weak gross substitutes (WGS) property. This is a well-studied property; in particular, it gives a sufficient condition for the convergence of the classical tâtonnement dynamics. In this paper, we present a simple auction algorithm that obtains an approximate market equilibrium for WGS demands. Such auction algorithms were previously known only for restricted classes of WGS demands. As an application of our technique, we obtain an efficient algorithm to find an approximate spending-restricted market equilibrium for WGS demands, a model that has recently been introduced as a continuous relaxation of the Nash social welfare (NSW) problem. This leads to a polynomial-time constant-factor approximation algorithm for NSW with budget-separable piecewise linear utility functions; only a pseudopolynomial approximation algorithm was known for this setting previously.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.33/LIPIcs.STACS.2021.33.pdf
auction algorithm
weak gross substitutes
Fisher equilibrium
Gale equilibrium
Nash social welfare
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
34:1
34:18
10.4230/LIPIcs.STACS.2021.34
article
Efficiently Testing Simon’s Congruence
Gawrychowski, Paweł
1
https://orcid.org/0000-0002-6993-5440
Kosche, Maria
2
https://orcid.org/0000-0002-2165-2695
Koß, Tore
2
https://orcid.org/0000-0001-6002-1581
Manea, Florin
2
3
https://orcid.org/0000-0001-6094-3324
Siemer, Stefan
2
https://orcid.org/0000-0001-7509-8135
Faculty of Mathematics and Computer Science, University of Wrocław, Poland
Computer Science Department, Universität Göttingen, Germany
Campus-Institut Data Science, Göttingen, Germany
Simon’s congruence ∼_k is a relation on words defined by Imre Simon in the 1970s and intensely studied since then. This congruence was initially used in connection with piecewise testable languages, but has also found many applications in, e.g., learning theory, database theory, and linguistics. The ∼_k-relation is defined as follows: two words are ∼_k-congruent if they have the same set of subsequences of length at most k. A long-standing open problem, stated already by Simon in his initial works on this topic, was to design an algorithm which computes, given two words s and t, the largest k for which s ∼_k t. We propose the first algorithm solving this problem in linear time O(|s|+|t|) when the input words are over the integer alphabet {1,…,|s|+|t|} (or any other alphabet that can be sorted in linear time). Our approach extends to an optimal algorithm in the case of general alphabets as well.
To achieve these results, we introduce a novel data structure, called the Simon-Tree, which allows us to construct a natural representation of the equivalence classes induced by ∼_k on the set of suffixes of a word, for all k ≥ 1. We show that such a tree can be constructed for an input word in linear time. Then, when working with two words s and t, we compute their respective Simon-Trees and efficiently build a correspondence between the nodes of these trees. This correspondence, which can also be constructed in linear time O(|s|+|t|), allows us to retrieve the largest k for which s∼_k t.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.34/LIPIcs.STACS.2021.34.pdf
Simon’s congruence
Subsequence
Scattered factor
Efficient algorithms
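The ∼_k relation described in the abstract above can be checked naively by comparing sets of bounded-length subsequences. The sketch below is an illustrative brute-force baseline (exponential in the worst case, unlike the paper's linear-time Simon-Tree algorithm); the function names are hypothetical, not from the paper.

```python
def subseq_set(w, k):
    # All distinct subsequences of w of length at most k (including the empty word).
    s = {""}
    for ch in w:
        s |= {x + ch for x in s if len(x) < k}
    return s

def largest_k(s, t):
    # Largest k with s ~_k t, i.e. the largest k such that s and t have the
    # same subsequences of length <= k. If the sets agree up to the longer
    # word's length, the words are congruent for every k; we return that cap.
    k, cap = 0, max(len(s), len(t))
    while k < cap and subseq_set(s, k + 1) == subseq_set(t, k + 1):
        k += 1
    return k
```

For example, "abab" and "abba" share all subsequences of length at most 2, but "aab" is a subsequence of "abab" only, so `largest_k("abab", "abba")` is 2.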
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
35:1
35:15
10.4230/LIPIcs.STACS.2021.35
article
Finding an Optimal Alphabet Ordering for Lyndon Factorization Is Hard
Gibney, Daniel
1
https://orcid.org/0000-0003-1493-5432
Thankachan, Sharma V.
1
Department of Computer Science, University of Central Florida, Orlando, FL, USA
This work establishes several strong hardness results on the problem of finding an ordering on a string’s alphabet that either minimizes or maximizes the number of factors in that string’s Lyndon factorization. In doing so, we demonstrate that these ordering problems are sufficiently complex to model a wide variety of ordering constraint satisfaction problems (OCSPs). Based on this, we prove that (i) the decision versions of both the minimization and maximization problems are NP-complete, (ii) under the Unique Games Conjecture, neither the minimization nor the maximization problem admits a polynomial-time constant-factor approximation algorithm, and (iii) under the Exponential Time Hypothesis, there does not exist an algorithm solving the minimization problem in time poly(|T|) ⋅ 2^o(σ log σ) for a string T over an alphabet of size σ (essentially, the brute-force approach of trying every alphabet order is hard to improve upon significantly).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.35/LIPIcs.STACS.2021.35.pdf
Lyndon Factorization
String Algorithms
Burrows-Wheeler Transform
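The quantity the ordering problems above minimize or maximize can be computed, for any fixed alphabet order, with Duval's standard linear-time Lyndon factorization algorithm (a classical method, not a contribution of the paper); a sketch with illustrative names:

```python
def lyndon_factors(s, order):
    # Duval's algorithm; 'order' is a string listing the alphabet from
    # smallest to largest, and r maps each symbol to its rank under it.
    r = {c: i for i, c in enumerate(order)}
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        # Extend the current run of Lyndon words as long as it stays sorted.
        while j < n and r[s[k]] <= r[s[j]]:
            k = i if r[s[k]] < r[s[j]] else k + 1
            j += 1
        # Emit the completed Lyndon factors of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

Changing the alphabet order changes the factor count: under the usual order, "banana" factors as b | an | an | a (four factors), while under the reversed alphabet it factors as ba | na | na (three).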
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
36:1
36:18
10.4230/LIPIcs.STACS.2021.36
article
Reachability in Two-Parametric Timed Automata with One Parameter Is EXPSPACE-Complete
Göller, Stefan
1
Hilaire, Mathieu
2
School of Electrical Engineering and Computer Science, Universität Kassel, Germany
Université Paris-Saclay, ENS Paris-Saclay, Laboratoire Spécification et Vérification (LSV), CNRS, Gif-sur-Yvette, France
Parametric timed automata (PTA) were introduced by Alur, Henzinger, and Vardi as an extension of timed automata in which clocks can be compared against parameters. The reachability problem asks for the existence of an assignment of the parameters to the non-negative integers such that reachability holds in the underlying timed automaton. The reachability problem for PTA has long been known to be undecidable, already over three parametric clocks.
A few years ago, Bundala and Ouaknine proved that the reachability problem for PTA over two parametric clocks and one parameter is decidable, and also showed a lower bound for the complexity class PSPACE^NEXP. Our main result is that the reachability problem for parametric timed automata over two parametric clocks and one parameter is EXPSPACE-complete.
For the EXPSPACE lower bound we make use of deep results from complexity theory, namely a serializability characterization of EXPSPACE (in turn based on Barrington’s Theorem) and a logspace translation of numbers in Chinese Remainder Representation to binary representation due to Chiu, Davida, and Litow. It is shown that with small PTA over two parametric clocks and one parameter one can simulate serializability computations.
For the EXPSPACE upper bound, we first give a careful exponential time reduction from PTA over two parametric clocks and one parameter to a (slight subclass of) parametric one-counter automata over one parameter based on a minor adjustment of a construction due to Bundala and Ouaknine. For solving the reachability problem for parametric one-counter automata with one parameter, we provide a series of techniques to partition a fictitious run into several carefully chosen subruns that allow us to prove that it is sufficient to consider a parameter value of exponential magnitude only. This allows us to show a doubly-exponential upper bound on the value of the only parameter of a PTA over two parametric clocks and one parameter. We hope that extensions of our techniques lead to finally establishing decidability of the long-standing open problem of reachability in parametric timed automata with two parametric clocks (and arbitrarily many parameters) and, if decidability holds, determining its precise computational complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.36/LIPIcs.STACS.2021.36.pdf
Parametric Timed Automata
Computational Complexity
Reachability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
37:1
37:18
10.4230/LIPIcs.STACS.2021.37
article
Refined Notions of Parameterized Enumeration Kernels with Applications to Matching Cut Enumeration
Golovach, Petr A.
1
https://orcid.org/0000-0002-2619-2990
Komusiewicz, Christian
2
https://orcid.org/0000-0003-0829-7032
Kratsch, Dieter
3
Le, Van Bang
4
https://orcid.org/0000-0002-3303-8326
Department of Informatics, University of Bergen, Norway
Fachbereich Mathematik und Informatik, Philipps-Universität Marburg, Germany
LGIMP, Université de Lorraine, Metz, France
Institut für Informatik, Universität Rostock, Germany
An enumeration kernel, as defined by Creignou et al. [Theory Comput. Syst. 2017] for a parameterized enumeration problem, consists of an algorithm that transforms each instance into one whose size is bounded by the parameter, together with a solution-lifting algorithm that efficiently enumerates all solutions of the original instance from the set of solutions of the kernel. We propose two new versions of enumeration kernels, asking that the solutions of the original instance can be enumerated in polynomial time or with polynomial delay from the kernel solutions. Using the NP-hard Matching Cut problem parameterized by structural parameters such as the vertex cover number or the cyclomatic number of the input graph, we show that the new enumeration kernels provide a useful notion of data reduction for enumeration problems which allows the set of feasible solutions to be represented compactly.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.37/LIPIcs.STACS.2021.37.pdf
enumeration problems
polynomial delay
output-sensitive algorithms
kernelization
structural parameterizations
matching cuts
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
38:1
38:17
10.4230/LIPIcs.STACS.2021.38
article
Average-Case Algorithms for Testing Isomorphism of Polynomials, Algebras, and Multilinear Forms
Grochow, Joshua A.
1
https://orcid.org/0000-0002-6466-0476
Qiao, Youming
2
https://orcid.org/0000-0003-4334-1449
Tang, Gang
2
Departments of Computer Science and Mathematics, University of Colorado Boulder, Boulder, CO, USA
Center for Quantum Software and Information, University of Technology Sydney, Ultimo, NSW 2007, Australia
We study the problems of testing isomorphism of polynomials, algebras, and multilinear forms. Our first main results are average-case algorithms for these problems. For example, we develop an algorithm that takes two cubic forms f, g ∈ 𝔽_q[x_1, … , x_n], and decides whether f and g are isomorphic in time q^O(n) for most f. This average-case setting has direct practical implications, having been studied in multivariate cryptography since the 1990s. Our second result concerns the complexity of testing equivalence of alternating trilinear forms. This problem is of interest in both mathematics and cryptography. We show that this problem is polynomial-time equivalent to testing equivalence of symmetric trilinear forms, by showing that both are Tensor Isomorphism-complete (Grochow & Qiao, ITCS 2021); it is therefore also equivalent to testing isomorphism of cubic forms over most fields.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.38/LIPIcs.STACS.2021.38.pdf
polynomial isomorphism
trilinear form equivalence
algebra isomorphism
average-case algorithms
tensor isomorphism complete
symmetric and alternating bilinear maps
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
39:1
39:15
10.4230/LIPIcs.STACS.2021.39
article
Geometric Cover with Outliers Removal
Guo, Zhengyang
1
https://orcid.org/0000-0002-8188-0665
Li, Yi
1
https://orcid.org/0000-0002-6420-653X
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
We study the problem of partial geometric cover, which asks to find the minimum number of geometric objects (unit squares and unit disks in this work) that cover at least (n-t) of n given planar points, where 0 ≤ t ≤ n/2. When t = 0, the problem is the classical geometric cover problem, for which many existing works adopt a general framework called the shifting strategy. The shifting strategy is a divide-and-conquer paradigm which partitions the plane into equal-width strips, applies a local algorithm on each strip and then merges the local solutions with only a small loss on the overall approximation ratio. A challenge in extending the shifting strategy to the case of outliers is to determine the number of outliers in each strip. We develop a shifting strategy incorporating the outlier distribution, which runs in O(tn log n) time. We also develop local algorithms on strips for the outliers case, improving the running time over previous algorithms, and consequently obtain approximation algorithms for partial geometric cover.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.39/LIPIcs.STACS.2021.39.pdf
Geometric Cover
Unit Square Cover
Unit Disk Cover
Shifting Strategy
Outliers Detection
Computational Geometry
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
40:1
40:17
10.4230/LIPIcs.STACS.2021.40
article
Parameterised Counting in Logspace
Haak, Anselm
1
https://orcid.org/0000-0003-1031-5922
Meier, Arne
1
https://orcid.org/0000-0002-8061-5376
Prakash, Om
2
Rao B. V., Raghavendra
2
Institut für Theoretische Informatik, Leibniz Universität Hannover, Germany
Department of Computer Science and Engineering, IIT Madras, Chennai, India
Logarithmic space bounded complexity classes such as L and NL play a central role in space bounded computation. The study of counting versions of these complexity classes has led to several interesting insights into the structure of computational problems such as computing the determinant and counting paths in directed acyclic graphs. Though parameterised complexity theory was initiated roughly three decades ago by Downey and Fellows, a satisfactory study of parameterised logarithmic space bounded computation was developed only in the last decade by Elberfeld, Stockhusen and Tantau (IPEC 2013, Algorithmica 2015).
In this paper, we introduce a new framework for parameterised counting in logspace, inspired by the parameterised space bounded models developed by Elberfeld, Stockhusen and Tantau (IPEC 2013, Algorithmica 2015). They defined the operators para_W and para_β for parameterised space complexity classes by allowing bounded nondeterminism with multiple-read and read-once access, respectively. Using these operators, they characterised the parameterised complexity of natural problems on graphs. In the spirit of the operators para_W and para_β by Stockhusen and Tantau, we introduce variants based on tail-nondeterminism, para_{W[1]} and para_{βtail}. Then, we consider counting versions of all four operators applied to logspace and obtain several natural complete problems for the resulting classes: counting of paths in digraphs, counting first-order models for formulas, and counting graph homomorphisms. Furthermore, we show that the complexity of a parameterised variant of the determinant function for (0,1)-matrices is #para_{βtail} L-hard and can be written as the difference of two functions in #para_{βtail} L. These problems exhibit the richness of the introduced counting classes. Our results further indicate interesting structural characteristics of these classes. For example, we show that the closure of #para_{βtail} L under parameterised logspace parsimonious reductions coincides with #para_β L, that is, modulo parameterised reductions, tail-nondeterminism with read-once access is the same as read-once nondeterminism.
Initiating the study of closure properties of these parameterised logspace counting classes, we show that all introduced classes are closed under addition and multiplication, and those without tail-nondeterminism are closed under parameterised logspace parsimonious reductions.
Also, we show that the counting classes defined can naturally be characterised by parameterised variants of classes based on branching programs in analogy to the classical counting classes.
Finally, we underline the significance of this topic by providing a promising outlook showing several open problems and options for further directions of research.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.40/LIPIcs.STACS.2021.40.pdf
Parameterized Complexity
Counting Complexity
Logspace
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
41:1
41:15
10.4230/LIPIcs.STACS.2021.41
article
Digraph Coloring and Distance to Acyclicity
Harutyunyan, Ararat
1
Lampis, Michael
1
https://orcid.org/0000-0002-5791-0887
Melissinos, Nikolaos
1
Université Paris-Dauphine, PSL Research University, CNRS, UMR 7243, LAMSADE, Paris, France
In k-Digraph Coloring we are given a digraph and are asked to partition its vertices into at most k sets, so that each set induces a DAG. This well-known problem is NP-hard, as it generalizes (undirected) k-Coloring, but becomes trivial if the input digraph is acyclic. This poses the natural parameterized complexity question of what happens when the input is "almost" acyclic. In this paper we study this question using parameters that measure the input’s distance to acyclicity in either the directed or the undirected sense.
In the directed sense perhaps the most natural notion of distance to acyclicity is directed feedback vertex set (DFVS). It is already known that, for all k ≥ 2, k-Digraph Coloring is NP-hard on digraphs of DFVS at most k+4. We strengthen this result to show that, for all k ≥ 2, k-Digraph Coloring is already NP-hard for DFVS exactly k. This immediately provides a dichotomy, as k-Digraph Coloring is trivial if DFVS is at most k-1. Refining our reduction we obtain two further consequences: (i) for all k ≥ 2, k-Digraph Coloring is NP-hard for graphs of feedback arc set (FAS) at most k²; interestingly, this leads to a second dichotomy, as we show that the problem is FPT by k if FAS is at most k²-1; (ii) k-Digraph Coloring is NP-hard for graphs of DFVS k, even if the maximum degree Δ is at most 4k-1; we show that this is also almost tight, as the problem becomes FPT for DFVS k and Δ ≤ 4k-3.
Since these results imply that the problem is also NP-hard on graphs of bounded directed treewidth, we then consider parameters that measure the distance from acyclicity of the underlying graph. On the positive side, we show that k-Digraph Coloring admits an FPT algorithm parameterized by treewidth, whose parameter dependence is (tw!)k^{tw}. Since this is considerably worse than the k^{tw} dependence of (undirected) k-Coloring, we pose the question of whether the tw! factor can be eliminated. Our main contribution in this part is to settle this question in the negative and show that our algorithm is essentially optimal, even for the much more restricted parameter treedepth and for k = 2. Specifically, we show that an FPT algorithm solving 2-Digraph Coloring with dependence td^o(td) would contradict the ETH.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.41/LIPIcs.STACS.2021.41.pdf
Digraph Coloring
Dichromatic number
NP-completeness
Parameterized complexity
Feedback vertex and arc sets
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
42:1
42:18
10.4230/LIPIcs.STACS.2021.42
article
Good r-Divisions Imply Optimal Amortized Decremental Biconnectivity
Holm, Jacob
1
https://orcid.org/0000-0001-6997-9251
Rotenberg, Eva
2
https://orcid.org/0000-0001-5853-7909
University of Copenhagen, Denmark
Technical University of Denmark, Lyngby, Denmark
We present a data structure that, given a graph G of n vertices and m edges, and a suitable pair of nested r-divisions of G, preprocesses G in O(m+n) time and handles any series of edge-deletions in O(m) total time while answering queries to pairwise biconnectivity in worst-case O(1) time. In case the vertices are not biconnected, the data structure can return a cutvertex separating them in worst-case O(1) time.
As an immediate consequence, this gives optimal amortized decremental biconnectivity, 2-edge connectivity, and connectivity for large classes of graphs, including planar graphs and other minor-free graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.42/LIPIcs.STACS.2021.42.pdf
Dynamic graphs
2-connectivity
graph minors
r-divisions
graph separators
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
43:1
43:15
10.4230/LIPIcs.STACS.2021.43
article
b-Coloring Parameterized by Clique-Width
Jaffke, Lars
1
Lima, Paloma T.
1
Lokshtanov, Daniel
2
University of Bergen, Norway
University of California Santa Barbara, CA, USA
We provide a polynomial-time algorithm for b-Coloring on graphs of constant clique-width. This unifies and extends nearly all previously known polynomial-time results on graph classes, and answers open questions posed by Campos and Silva [Algorithmica, 2018] and Bonomo et al. [Graphs Combin., 2009]. This constitutes the first result concerning structural parameterizations of this problem. We show that the problem is FPT when parameterized by the vertex cover number on general graphs, and on chordal graphs when parameterized by the number of colors. Additionally, we observe that our algorithm for graphs of bounded clique-width can be adapted to solve the Fall Coloring problem within the same runtime bound. The running times of the clique-width based algorithms for b-Coloring and Fall Coloring are tight under the Exponential Time Hypothesis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.43/LIPIcs.STACS.2021.43.pdf
b-Coloring
clique-width
vertex cover
structural parameterization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
44:1
44:13
10.4230/LIPIcs.STACS.2021.44
article
A Ramsey Theorem for Finite Monoids
Jecker, Ismaël
1
Institute of Science and Technology, Klosterneuburg, Austria
Repeated idempotent elements are commonly used to characterise iterable behaviours in abstract models of computation. Therefore, given a monoid M, it is natural to ask how long a sequence of elements of M needs to be to ensure the presence of consecutive idempotent factors. This question is formalised through the notion of the Ramsey function R_M associated to M, obtained by mapping every k ∈ ℕ to the minimal integer R_M(k) such that every word u ∈ M^* of length R_M(k) contains k consecutive non-empty factors that correspond to the same idempotent element of M.
In this work, we study the behaviour of the Ramsey function R_M by investigating the regular 𝒟-length of M, defined as the largest size L(M) of a submonoid of M isomorphic to the set of natural numbers {1,2, …, L(M)} equipped with the max operation. We show that the regular 𝒟-length of M determines the degree of R_M, by proving that k^L(M) ≤ R_M(k) ≤ (k|M|⁴)^L(M).
To allow applications of this result, we provide the value of the regular 𝒟-length of diverse monoids. In particular, we prove that the full monoid of n × n Boolean matrices, which is used to express transition monoids of non-deterministic automata, has a regular 𝒟-length of (n²+n+2)/2.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.44/LIPIcs.STACS.2021.44.pdf
Semigroup
monoid
idempotent
automaton
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
45:1
45:16
10.4230/LIPIcs.STACS.2021.45
article
An Improved Sketching Algorithm for Edit Distance
Jin, Ce
1
https://orcid.org/0000-0001-5264-1772
Nelson, Jelani
2
https://orcid.org/0000-0001-7370-3733
Wu, Kewen
2
https://orcid.org/0000-0002-5894-822X
MIT, Cambridge, MA, USA
University of California at Berkeley, CA, USA
We provide improved upper bounds for the simultaneous sketching complexity of edit distance. Consider two parties, Alice with input x ∈ Σⁿ and Bob with input y ∈ Σⁿ, that share public randomness and are given a promise that the edit distance ed(x,y) between their two strings is at most some given value k. Alice must send a message sx and Bob must send sy to a third party Charlie, who does not know the inputs but shares the same public randomness and also knows k. Charlie must output ed(x,y) precisely as well as a sequence of ed(x,y) edits required to transform x into y. The goal is to minimize the lengths |sx|, |sy| of the messages sent.
The protocol of Belazzougui and Zhang (FOCS 2016), building upon the random walk method of Chakraborty, Goldenberg, and Koucký (STOC 2016), achieves a maximum message length of Õ(k⁸) bits, where Õ(⋅) hides poly(log n) factors. In this work we build upon Belazzougui and Zhang’s protocol and provide an improved analysis demonstrating that a slight modification of their construction achieves a bound of Õ(k³).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.45/LIPIcs.STACS.2021.45.pdf
edit distance
sketching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
46:1
46:16
10.4230/LIPIcs.STACS.2021.46
article
Locality Sensitive Hashing for Efficient Similar Polygon Retrieval
Kaplan, Haim
1
Tenenbaum, Jay
1
School of Computer Science, Tel Aviv University, Israel
Locality Sensitive Hashing (LSH) is an effective method of indexing a set of items to support efficient nearest neighbors queries in high-dimensional spaces. The basic idea of LSH is that similar items should produce hash collisions with higher probability than dissimilar items.
We study LSH for (not necessarily convex) polygons, and use it to give efficient data structures for similar shape retrieval. Arkin et al. [Arkin et al., 1991] represent polygons by their "turning function": a function which follows the angle between the polygon’s tangent and the x-axis while traversing the perimeter of the polygon. They define the distance between polygons to be variations of the L_p (for p = 1,2) distance between their turning functions. This metric is invariant under translation, rotation and scaling (and the selection of the initial point on the perimeter) and therefore models well the intuitive notion of shape resemblance.
We develop and analyze LSH near neighbor data structures for several variations of the L_p distance for functions (for p = 1,2). By applying our schemes to the turning functions of a collection of polygons we obtain efficient near neighbor LSH-based structures for polygons. To tune our structures to turning functions of polygons, we prove some new properties of these turning functions that may be of independent interest.
As part of our analysis, we address the following problem which is of independent interest. Find the vertical translation of a function f that is closest in L₁ distance to a function g. We prove tight bounds on the approximation guarantee obtained by the translation which is equal to the difference between the averages of g and f.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.46/LIPIcs.STACS.2021.46.pdf
Locality sensitive hashing
polygons
turning function
L_p distance
nearest neighbors
similarity search
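The turning-function representation of Arkin et al. used in the abstract above can be sketched as a step function: cumulative tangent angle plotted against normalized arc length. The sketch below is a minimal illustration under assumed conventions (vertices given counter-clockwise; the function name is hypothetical).

```python
import math

def turning_function(poly):
    # poly: counter-clockwise list of (x, y) vertices.
    # Returns the breakpoints of the turning function as a list of
    # (normalized arc length, cumulative tangent angle) pairs.
    n = len(poly)
    edges = [(poly[(i + 1) % n][0] - poly[i][0],
              poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]
    perim = sum(lengths)
    angle = math.atan2(edges[0][1], edges[0][0])  # angle of the first edge
    steps, s = [(0.0, angle)], 0.0
    for i in range(1, n):
        s += lengths[i - 1] / perim
        prev, cur = edges[i - 1], edges[i]
        # Signed exterior turn between consecutive edges, via cross and dot.
        turn = math.atan2(prev[0] * cur[1] - prev[1] * cur[0],
                          prev[0] * cur[0] + prev[1] * cur[1])
        angle += turn
        steps.append((s, angle))
    return steps
```

For a counter-clockwise unit square the function steps up by π/2 at arc lengths 0.25, 0.5, and 0.75, reflecting the invariance to scaling and to where on the perimeter one starts.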
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
47:1
47:14
10.4230/LIPIcs.STACS.2021.47
article
Binary Matrix Completion Under Diameter Constraints
Koana, Tomohiro
1
https://orcid.org/0000-0002-8684-0611
Froese, Vincent
1
https://orcid.org/0000-0002-8499-0130
Niedermeier, Rolf
1
https://orcid.org/0000-0003-1703-1236
Algorithmics and Computational Complexity, Faculty IV, Technische Universität Berlin, Germany
We thoroughly study a novel but basic combinatorial matrix completion problem: Given a binary incomplete matrix, fill in the missing entries so that the resulting matrix has a specified maximum diameter (that is, upper-bounding the maximum Hamming distance between any two rows of the completed matrix) as well as a specified minimum Hamming distance between any two of the matrix rows. This scenario is closely related to consensus string problems as well as to recently studied clustering problems on incomplete data.
We obtain an almost complete picture concerning the complexity landscape (P vs NP) regarding the diameter constraints and regarding the number of missing entries per row of the incomplete matrix. We develop polynomial-time algorithms for maximum diameter three, which are based on Deza’s theorem [Discret. Math. 1973, J. Comb. Theory, Ser. B 1974] from extremal set theory. In this way, we also provide one of the rare links between sunflower techniques and stringology. On the negative side, we prove NP-hardness for diameter at least four. For the number of missing entries per row, we show polynomial-time solvability when there is only one missing entry and NP-hardness when there can be at least two missing entries. In general, our algorithms heavily rely on Deza’s theorem and the correspondingly identified sunflower structures pave the way towards solutions based on computing graph factors and solving 2-SAT instances.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.47/LIPIcs.STACS.2021.47.pdf
sunflowers
binary matrices
Hamming distance
stringology
consensus problems
complexity dichotomy
combinatorial algorithms
graph factors
2-Sat
Hamming radius
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
48:1
48:15
10.4230/LIPIcs.STACS.2021.48
article
Absorbing Patterns in BST-Like Expression-Trees
Koechlin, Florent
1
Rotondo, Pablo
1
Laboratoire d'Informatique Gaspard-Monge, Université Gustave Eiffel, Marne-la-Vallée, France
In this article we study the effect of simple semantic reductions on random BST-like expression-trees. Such random unary-binary expression-trees are often used in benchmarks for model-checking tools. We consider the reduction induced by an absorbing pattern for some given operator ⊛, which we apply bottom-up, producing an equivalent (and smaller) tree-expression. Our main result concerns the expected size of a random tree, of given input size n → ∞, after reduction. We show that there are two different thresholds, leading to a total of five regimes, ranging from no significant reduction at all, to almost complete reduction. These regimes are completely characterized according to the probability of the absorbing operator. Our results prove that random BST-like trees have to be considered with care, and that they offer a richer range of behaviours than uniform random trees.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.48/LIPIcs.STACS.2021.48.pdf
BST trees
absorbing pattern
reduction
analytic combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
49:1
49:16
10.4230/LIPIcs.STACS.2021.49
article
Cluster Editing Parameterized Above Modification-Disjoint P₃-Packings
Li, Shaohua
1
Pilipczuk, Marcin
1
Sorge, Manuel
1
2
Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland
Institute of Logic and Computation, TU Wien, Austria
Given a graph G = (V,E) and an integer k, the Cluster Editing problem asks whether we can transform G into a union of vertex-disjoint cliques by at most k modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph G = (V,E), a packing ℋ of modification-disjoint induced P₃s (no pair of P₃s in ℋ shares an edge or non-edge) and an integer 𝓁. The task is to decide whether G can be transformed into a union of vertex-disjoint cliques by at most 𝓁+|ℋ| modifications (edge deletions or insertions). We show that this problem is NP-hard even when 𝓁 = 0 (in which case the problem asks to turn G into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of ℋ) and when each vertex is in at most 23 P₃s of the packing. This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by C. Komusiewicz at Shonan meeting no. 144 in March 2019. We then initiate the study of finding the largest integer c such that the problem remains tractable when restricting to packings such that each vertex is in at most c packed P₃s. Van Bevern et al. showed that the case c = 1 is fixed-parameter tractable with respect to 𝓁 and we show that the case c = 2 is solvable in |V|^{2𝓁 + O(1)} time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.49/LIPIcs.STACS.2021.49.pdf
Graph algorithms
fixed-parameter tractability
parameterized complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
50:1
50:17
10.4230/LIPIcs.STACS.2021.50
article
Exploiting Dense Structures in Parameterized Complexity
Lochet, William
1
Lokshtanov, Daniel
2
Saurabh, Saket
3
1
Zehavi, Meirav
4
Department of Informatics, University of Bergen, Norway
University of California Santa Barbara, CA, USA
Institute of Mathematical Sciences, Chennai, India
Ben Gurion University of the Negev, Beer Sheva, Israel
Over the past few decades, the study of dense structures from the perspective of approximation algorithms has become a wide area of research. However, from the viewpoint of parameterized algorithms, this area is largely unexplored. In particular, properties of random samples have been successfully deployed to design approximation schemes for a number of fundamental problems on dense structures [Arora et al. FOCS 1995, Goldreich et al. FOCS 1996, Giotis and Guruswami SODA 2006, Karpinski and Schudy STOC 2009]. In this paper, we fill this gap, and harness the power of random samples as well as structure theory to design kernelization and parameterized algorithms on dense structures. In particular, we obtain linear vertex kernels for Edge-Disjoint Paths, Edge Odd Cycle Transversal, Minimum Bisection, d-Way Cut, Multiway Cut and Multicut on everywhere dense graphs. In fact, these kernels are obtained by designing a polynomial-time algorithm when the corresponding parameter is at most Ω(n). Additionally, we obtain a cubic kernel for Vertex-Disjoint Paths on everywhere dense graphs. In addition to kernelization results, we obtain randomized subexponential-time parameterized algorithms for Edge Odd Cycle Transversal, Minimum Bisection, and d-Way Cut. Finally, we show how all of our results (as well as EPASes for these problems) can be de-randomized.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.50/LIPIcs.STACS.2021.50.pdf
Dense graphs
disjoint paths
odd cycle transversal
kernels
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
51:1
51:17
10.4230/LIPIcs.STACS.2021.51
article
Subgroup Membership in GL(2,Z)
Lohrey, Markus
1
https://orcid.org/0000-0002-4680-7198
Universität Siegen, Germany
It is shown that the subgroup membership problem for a virtually free group can be decided in polynomial time where all group elements are represented by so-called power words, i.e., words of the form p_1^{z_1} p_2^{z_2} ⋯ p_k^{z_k}. Here the p_i are explicit words over the generating set of the group and all z_i are binary encoded integers. As a corollary, it follows that the subgroup membership problem for the matrix group GL(2,ℤ) can be decided in polynomial time when all matrix entries are given in binary notation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.51/LIPIcs.STACS.2021.51.pdf
free groups
virtually free groups
subgroup membership
matrix groups
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
52:1
52:13
10.4230/LIPIcs.STACS.2021.52
article
Lower Bounds for Graph-Walking Automata
Martynova, Olga
1
https://orcid.org/0000-0002-1249-5173
Okhotin, Alexander
1
https://orcid.org/0000-0002-1615-2725
Department of Mathematics and Computer Science, St. Petersburg State University, Russia
Graph-walking automata (GWA) traverse graphs by moving between the nodes following the edges, using a finite-state control to decide where to go next. It is known that every GWA can be transformed to a GWA that halts on every input, to a GWA returning to the initial node in order to accept, as well as to a reversible GWA. This paper establishes lower bounds on the state blow-up of these transformations: it is shown that making an n-state GWA traversing k-ary graphs return to the initial node requires at least 2(n-1)(k-3) states in the worst case; the same lower bound holds for the transformation to halting automata. Automata satisfying both properties at once must have at least 4(n-1)(k-3) states. A reversible automaton must have at least 4(n-1)(k-3)-1 states. These bounds are asymptotically tight to the upper bounds proved using the methods from the literature.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.52/LIPIcs.STACS.2021.52.pdf
Finite automata
graph-walking automata
halting
reversibility
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
53:1
53:20
10.4230/LIPIcs.STACS.2021.53
article
An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs
Neuwohner, Meike
1
https://orcid.org/0000-0002-3664-3687
Research Institute for Discrete Mathematics, Universität Bonn, Germany
In this paper, we consider the task of computing an independent set of maximum weight in a given d-claw free graph G = (V,E) equipped with a positive weight function w:V → ℝ^+. Here, d ≥ 2 is considered a constant. The previously best known approximation algorithm for this problem is the local improvement algorithm SquareImp proposed by Berman [Berman, 2000]. It achieves a performance ratio of d/2+ε in time 𝒪(|V(G)|^(d+1)⋅(|V(G)|+|E(G)|)⋅(d-1)²⋅ (d/(2ε)+1)²) for any ε > 0, which has remained unimproved for the last twenty years. By considering a broader class of local improvements, we obtain an approximation ratio of d/2-(1/63,700,992)+ε for any ε > 0 at the cost of an additional factor of 𝒪(|V(G)|^(d-1)²) in the running time. In particular, our result implies a polynomial-time d/2-approximation algorithm. Furthermore, the well-known reduction from the weighted k-Set Packing Problem to the Maximum Weight Independent Set Problem in (k+1)-claw free graphs provides a ((k+1)/2-(1/63,700,992)+ε)-approximation algorithm for the weighted k-Set Packing Problem for any ε > 0. This improves on the previously best known approximation guarantee of (k+1)/2 + ε originating from the result of Berman [Berman, 2000].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.53/LIPIcs.STACS.2021.53.pdf
d-Claw free Graphs
independent Set
local Improvement
k-Set Packing
weighted
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
54:1
54:17
10.4230/LIPIcs.STACS.2021.54
article
Complexity of the List Homomorphism Problem in Hereditary Graph Classes
Okrasa, Karolina
1
2
https://orcid.org/0000-0003-1414-3507
Rzążewski, Paweł
1
2
https://orcid.org/0000-0001-7696-3848
Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
Institute of Informatics, University of Warsaw, Poland
A homomorphism from a graph G to a graph H is an edge-preserving mapping from V(G) to V(H). For a fixed graph H, in the list homomorphism problem, denoted by LHom(H), we are given a graph G, whose every vertex v is equipped with a list L(v) ⊆ V(H). We ask if there exists a homomorphism f from G to H such that f(v) ∈ L(v) for every v ∈ V(G). Feder, Hell, and Huang [JGT 2003] proved that LHom(H) is polynomial-time solvable if H is a so-called bi-arc graph, and NP-complete otherwise.
We are interested in the complexity of the LHom(H) problem in F-free graphs, i.e., graphs excluding a copy of some fixed graph F as an induced subgraph. It is known that if F is connected and is not a path nor a subdivided claw, then for every non-bi-arc graph the LHom(H) problem is NP-complete and cannot be solved in subexponential time, unless the ETH fails. We consider the remaining cases for connected graphs F.
If F is a path, we exhibit a full dichotomy. We define a class called predacious graphs and show that if H is not predacious, then for every fixed t the LHom(H) problem can be solved in quasi-polynomial time in P_t-free graphs. On the other hand, if H is predacious, then there exists t, such that the existence of a subexponential-time algorithm for LHom(H) in P_t-free graphs would violate the ETH.
If F is a subdivided claw, we show a full dichotomy in two important cases: for H being irreflexive (i.e., with no loops), and for H being reflexive (i.e., where every vertex has a loop). Unless the ETH fails, for irreflexive H the LHom(H) problem can be solved in subexponential time in graphs excluding a fixed subdivided claw if and only if H is non-predacious and triangle-free. On the other hand, if H is reflexive, then LHom(H) cannot be solved in subexponential time whenever H is not a bi-arc graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.54/LIPIcs.STACS.2021.54.pdf
list homomorphism
fine-grained complexity
hereditary graph classes
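The LHom(H) problem defined in the abstract above is easy to state operationally. The following Python sketch (all graph names and helper functions are illustrative, not from the paper) checks a candidate mapping against the lists and solves tiny instances by brute-force search over all list-respecting mappings; the paper's point is precisely that, for some target graphs H, no subexponential improvement over such exhaustive search is possible under the ETH.

```python
from itertools import product

def is_list_hom(G_edges, H_edges, lists, f):
    """Check that f is a list homomorphism: f respects the lists and
    maps every edge of G onto an edge of H (H_edges is a set of
    frozensets; a loop at v would be the singleton frozenset({v}))."""
    if any(f[v] not in lists[v] for v in f):
        return False
    return all(frozenset((f[u], f[v])) in H_edges for (u, v) in G_edges)

def lhom(G_vertices, G_edges, H_edges, lists):
    """Brute-force LHom(H): try every mapping allowed by the lists."""
    for choice in product(*(sorted(lists[v]) for v in G_vertices)):
        f = dict(zip(G_vertices, choice))
        if is_list_hom(G_edges, H_edges, lists, f):
            return f
    return None

# Example: LHom(K_3) is list 3-coloring. Color a triangle x, y, z
# with lists {0}, {0,1}, {1,2}; the only solution is x->0, y->1, z->2.
K3 = {frozenset(p) for p in [(0, 1), (0, 2), (1, 2)]}
```

Note that because K_3 is irreflexive (no loops), two adjacent vertices of G can never receive the same color: `frozenset((c, c))` is a singleton and hence not in `K3`.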
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
55:1
55:19
10.4230/LIPIcs.STACS.2021.55
article
Spectrum Preserving Short Cycle Removal on Regular Graphs
Paredes, Pedro
1
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
We describe a new method to remove short cycles on regular graphs while maintaining spectral bounds (the nontrivial eigenvalues of the adjacency matrix), as long as the graphs have certain combinatorial properties. These combinatorial properties concern the number of short cycles and the distances between them, and are known to hold with high probability in uniformly random regular graphs.
Using this method we can show two results involving high girth spectral expander graphs. First, we show that given d ⩾ 3 and n, there exists an explicit distribution of d-regular Θ(n)-vertex graphs whose samples, with high probability, have girth Ω(log_{d-1} n) and are ε-near-Ramanujan; i.e., their eigenvalues are bounded in magnitude by 2√{d-1} + ε (excluding the single trivial eigenvalue of d). Then, for every constant d ⩾ 3 and ε > 0, we give a deterministic poly(n)-time algorithm that outputs a d-regular graph on Θ(n) vertices that is ε-near-Ramanujan and has girth Ω(√{log n}), based on the work of [Mohanty et al., 2020].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.55/LIPIcs.STACS.2021.55.pdf
Ramanujan Graphs
High Girth Regular Graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
56:1
56:17
10.4230/LIPIcs.STACS.2021.56
article
Fine-Grained Complexity of the List Homomorphism Problem: Feedback Vertex Set and Cutwidth
Piecyk, Marta
1
Rzążewski, Paweł
1
2
https://orcid.org/0000-0001-7696-3848
Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
Institute of Informatics, University of Warsaw, Poland
For graphs G,H, a homomorphism from G to H is an edge-preserving mapping from V(G) to V(H). In the list homomorphism problem, denoted by LHom(H), we are given a graph G, whose every vertex v is equipped with a list L(v) ⊆ V(H), and we need to determine whether there exists a homomorphism from G to H which additionally respects the lists L. List homomorphisms are a natural generalization of (list) colorings.
Very recently Okrasa, Piecyk, and Rzążewski [ESA 2020] studied the fine-grained complexity of the problem, parameterized by the treewidth of the instance graph G. They defined a new invariant i^*(H), and proved that for every relevant graph H, i.e., such that LHom(H) is NP-hard, this invariant is the correct base of the exponent in the running time of any algorithm solving the LHom(H) problem.
In this paper we continue this direction and study the complexity of the problem under different parameterizations. As the first result, we show that i^*(H) is also the right complexity base if the parameter is the size of a minimum feedback vertex set of G, denoted by fvs(G). In particular, for every relevant graph H, the LHom(H) problem
- can be solved in time i^*(H)^fvs(G) ⋅ |V(G)|^𝒪(1), if a minimum feedback vertex set of G is given,
- cannot be solved in time (i^*(H) - ε)^fvs(G) ⋅ |V(G)|^𝒪(1), for any ε > 0, unless the SETH fails.
Then we turn our attention to a parameterization by the cutwidth ctw(G) of G. Jansen and Nederlof [TCS 2019] showed that List k-Coloring (i.e., LHom(K_k)) can be solved in time c^ctw(G) ⋅ |V(G)|^𝒪(1) for an absolute constant c, i.e., the base of the exponential function does not depend on the number of colors. Jansen asked whether this behavior extends to graph homomorphisms. As the main result of the paper, we answer the question in the negative. We define a new graph invariant mim^*(H), closely related to the size of a maximum induced matching in H, and prove that for all relevant graphs H, the LHom(H) problem cannot be solved in time (mim^*(H)-ε)^{ctw(G)}⋅ |V(G)|^𝒪(1) for any ε > 0, unless the SETH fails. In particular, this implies that, assuming the SETH, there is no constant c, such that for every odd cycle the non-list version of the problem can be solved in time c^ctw(G) ⋅ |V(G)|^𝒪(1).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.56/LIPIcs.STACS.2021.56.pdf
list homomorphisms
fine-grained complexity
SETH
feedback vertex set
cutwidth
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
57:1
57:15
10.4230/LIPIcs.STACS.2021.57
article
6-Uniform Maker-Breaker Game Is PSPACE-Complete
Rahman, Md Lutfar
1
Watson, Thomas
1
University of Memphis, TN, USA
In a STOC 1976 paper, Schaefer proved that it is PSPACE-complete to determine the winner of the so-called Maker-Breaker game on a given set system, even when every set has size at most 11. Since then, there has been no improvement on this result. We prove that the game remains PSPACE-complete even when every set has size 6.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.57/LIPIcs.STACS.2021.57.pdf
Game
Maker-Breaker
Complexity
Reduction
PSPACE-complete
NL-hard
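The Maker-Breaker game from the abstract above admits a direct exhaustive-search solver on tiny instances: the two players alternately claim unclaimed elements, and Maker wins iff she eventually claims every element of some set. A minimal memoized sketch (illustrative only; the hardness result is exactly that nothing essentially better than such search is expected in general, even for sets of size 6), assuming Maker moves first:

```python
from functools import lru_cache

def maker_wins(sets, universe):
    """Decide a Maker-Breaker game by exhaustive game-tree search.
    Maker moves first and wins iff she claims all elements of some set;
    Breaker wins iff the board fills up before that happens."""
    sets = tuple(frozenset(s) for s in sets)
    universe = frozenset(universe)

    @lru_cache(maxsize=None)
    def win(maker, taken, maker_to_move):
        if any(s <= maker for s in sets):
            return True                       # Maker completed a set
        free = universe - taken
        if not free:
            return False                      # board full: Breaker wins
        if maker_to_move:                     # Maker needs one good move
            return any(win(maker | {x}, taken | {x}, False) for x in free)
        return all(win(maker, taken | {x}, True) for x in free)

    return win(frozenset(), frozenset(), True)
```

For example, on the edge set of a triangle Maker wins: after her first two moves she necessarily holds two of the three elements, which form a set. On a single 2-element set Breaker wins by claiming the other element.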
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
58:1
58:16
10.4230/LIPIcs.STACS.2021.58
article
Resolution with Symmetry Rule Applied to Linear Equations
Schweitzer, Pascal
1
Seebach, Constantin
1
TU Kaiserslautern, Germany
This paper considers the length of resolution proofs when using Krishnamurthy’s classic symmetry rules. We show that inconsistent linear equation systems of bounded width over a fixed finite field 𝔽_p with p a prime have, in their standard encoding as CNFs, polynomial length resolutions when using the local symmetry rule (SRC-II).
As a consequence it follows that the multipede instances for the graph isomorphism problem encoded as CNF formula have polynomial length resolution proofs. This contrasts exponential lower bounds for individualization-refinement algorithms on these graphs.
For the Cai-Fürer-Immerman graphs, for which Torán showed exponential lower bounds for resolution proofs (SAT 2013), we also show that already the global symmetry rule (SRC-I) suffices to allow for polynomial length proofs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.58/LIPIcs.STACS.2021.58.pdf
Logical Resolution
Symmetry Rule
Linear Equation System
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-03-10
187
59:1
59:12
10.4230/LIPIcs.STACS.2021.59
article
Quantum Approximate Counting with Nonadaptive Grover Iterations
Venkateswaran, Ramgopal
1
O'Donnell, Ryan
1
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
Approximate Counting refers to the problem where we are given query access to a function f : [N] → {0,1}, and we wish to estimate K = #{x : f(x) = 1} to within a factor of 1+ε (with high probability), while minimizing the number of queries. In the quantum setting, Approximate Counting can be done with O(min(√{N/ε}, √{N/K}/ε)) queries. It has recently been shown that this can be achieved by a simple algorithm that only uses "Grover iterations"; however, the algorithm performs these iterations adaptively. Motivated by concerns of computational simplicity, we consider algorithms that use Grover iterations with limited adaptivity. We show that algorithms using only nonadaptive Grover iterations can achieve O(√{N/ε}) query complexity, which is tight.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol187-stacs2021/LIPIcs.STACS.2021.59/LIPIcs.STACS.2021.59.pdf
quantum approximate counting
Grover search
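The query bounds in the last abstract rest on the standard behavior of Grover iterations: starting from the uniform superposition over N items with K marked, t iterations rotate the state so that the probability of measuring a marked item is sin²((2t+1)θ), where sin θ = √(K/N). Algorithms of this kind estimate K from measurement statistics at various iteration counts t. A small sketch of that standard formula (not the paper's algorithm):

```python
import math

def grover_success_prob(N, K, t):
    """Probability of measuring a marked item after t Grover iterations,
    starting from the uniform superposition over N items, K marked.
    Standard formula: p = sin^2((2t+1)*theta), with sin(theta) = sqrt(K/N)."""
    theta = math.asin(math.sqrt(K / N))
    return math.sin((2 * t + 1) * theta) ** 2

# With N = 4 and K = 1 we get theta = pi/6, so one iteration rotates the
# state to angle 3*theta = pi/2: the marked item is found with certainty.
```

At t = 0 the formula reduces to sin²θ = K/N, the probability of a blind uniform sample; the quadratic speedup comes from how quickly the angle (2t+1)θ grows with t.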