eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
1
912
10.4230/LIPIcs.FSTTCS.2020
article
LIPIcs, Volume 182, FSTTCS 2020, Complete Volume
Saxena, Nitin
1
https://orcid.org/0000-0001-6931-898X
Simon, Sunil
1
https://orcid.org/0000-0002-7489-7477
Indian Institute of Technology Kanpur, India
LIPIcs, Volume 182, FSTTCS 2020, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020/LIPIcs.FSTTCS.2020.pdf
LIPIcs, Volume 182, FSTTCS 2020, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
0:i
0:xvi
10.4230/LIPIcs.FSTTCS.2020.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Saxena, Nitin
1
https://orcid.org/0000-0001-6931-898X
Simon, Sunil
1
https://orcid.org/0000-0002-7489-7477
Indian Institute of Technology Kanpur, India
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.0/LIPIcs.FSTTCS.2020.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
1:1
1:1
10.4230/LIPIcs.FSTTCS.2020.1
article
The Quest for Mathematical Understanding of Deep Learning (Invited Talk)
Arora, Sanjeev
1
Computer Science Department, Princeton University, NJ, USA
Deep learning has transformed Machine Learning and Artificial Intelligence in the past decade. It raises fundamental questions for mathematics and theory of computer science, since it relies upon solving large-scale nonconvex problems via gradient descent and its variants. This talk will be an introduction to mathematical questions raised by deep learning, and some partial understanding obtained in recent years.
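As a toy illustration of the gradient-descent dynamics on a nonconvex objective that the abstract refers to (not material from the talk itself), the following sketch minimizes f(x, y) = (x² - 1)² + y², a simple nonconvex function with minima at (±1, 0):

```python
# Minimal sketch, for illustration only: plain gradient descent on the
# nonconvex function f(x, y) = (x^2 - 1)^2 + y^2, whose minima are (+-1, 0).

def grad(p):
    x, y = p
    # Analytic gradient: df/dx = 4x(x^2 - 1), df/dy = 2y
    return (4 * x * (x * x - 1), 2 * y)

def gradient_descent(p, lr=0.05, steps=200):
    # Repeatedly step against the gradient with a fixed learning rate.
    for _ in range(steps):
        gx, gy = grad(p)
        p = (p[0] - lr * gx, p[1] - lr * gy)
    return p

# From the start point (0.5, 1.0) the iterates approach the minimum (1, 0);
# a start with negative x would instead approach (-1, 0).
x, y = gradient_descent((0.5, 1.0))
```

Even on this one-line example, which basin the iterates land in depends on the initialization, hinting at why nonconvex convergence questions are mathematically subtle.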
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.1/LIPIcs.FSTTCS.2020.1.pdf
machine learning
artificial intelligence
deep learning
gradient descent
optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
2:1
2:1
10.4230/LIPIcs.FSTTCS.2020.2
article
Proofs of Soundness and Proof Search (Invited Talk)
Atserias, Albert
1
https://orcid.org/0000-0002-3732-1989
Universitat Politècnica de Catalunya, Barcelona, Spain
Let P be a sound proof system for propositional logic. For each CNF formula F, let SAT(F) be a CNF formula whose satisfying assignments are in 1-to-1 correspondence with those of F (e.g., F itself). For each integer s, let REF(F,s) be a CNF formula whose satisfying assignments are in 1-to-1 correspondence with the P-refutations of F of length s. Since P is sound, it is obvious that the conjunction formula SAT(F) & REF(F,s) is unsatisfiable for any choice of F and s. It has long been known that, for many natural proof systems P and for the most natural formalizations of the formulas SAT and REF, the unsatisfiability of SAT(F) & REF(F,s) can be established by a short refutation. In addition, for many P, these short refutations live in the proof system P itself. This is the case for all Frege proof systems. In contrast, it has been known since the early 2000s that Resolution proofs of Resolution’s soundness statements must have superpolynomial length. In this talk I will explain how the soundness formulas for a proof system P relate to the computational complexity of the proof search problem for P. In particular, I will explain how such formulas are used in the recent proof that the problem of approximating the minimum proof-length for Resolution is NP-hard (Atserias-Müller 2019). Besides playing a key role in this hardness of approximation result, the renewed interest in the soundness formulas led to a complete answer to the question of whether Resolution has subexponential-length proofs of its own soundness statements (Garlík 2019).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.2/LIPIcs.FSTTCS.2020.2.pdf
Proof complexity
automatability
Resolution
proof search
consistency statements
lower bounds
reflection principle
satisfiability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
3:1
3:1
10.4230/LIPIcs.FSTTCS.2020.3
article
Convex Optimization and Dynamic Data Structure (Invited Talk)
Lee, Yin Tat
1
University of Washington, Seattle, WA, USA
In the last three years, there have been many breakthroughs in optimization, such as nearly quadratic time algorithms for bipartite matching and linear programming algorithms that are as fast as solving Ax = b. All of these algorithms are based on a careful combination of optimization techniques and dynamic data structures. In this talk, we will explain the framework underlying all the recent breakthroughs.
Joint work with Jan van den Brand, Michael B. Cohen, Sally Dong, Haotian Jiang, Tarun Kathuria, Danupon Nanongkai, Swati Padmanabhan, Richard Peng, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang, Sam Chiu-wai Wong, Guanghao Ye, Qiuyi Zhang.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.3/LIPIcs.FSTTCS.2020.3.pdf
Convex Optimization
Dynamic Data Structure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
4:1
4:3
10.4230/LIPIcs.FSTTCS.2020.4
article
Holonomic Techniques, Periods, and Decision Problems (Invited Talk)
Ouaknine, Joël
1
2
https://orcid.org/0000-0003-0031-9356
Max Planck Institute for Software Systems, Saarland Informatics Campus, Saarbrücken, Germany
Department of Computer Science, Oxford University, UK
Holonomic techniques have deep roots going back to Wallis, Euler, and Gauss, and have evolved in modern times as an important subfield of computer algebra, thanks in large part to the work of Zeilberger and others over the past three decades. In this talk, I will give an overview of the area, and in particular will present a select survey of known and original results on decision problems for holonomic sequences and functions. (Holonomic sequences satisfy linear recurrence relations with polynomial coefficients, and holonomic functions satisfy linear differential equations with polynomial coefficients.) I will also discuss some surprising connections to the theory of periods and exponential periods, which are classical objects of study in algebraic geometry and number theory; in particular, I will relate the decidability of certain decision problems for holonomic sequences to deep conjectures about periods and exponential periods, notably those due to Kontsevich and Zagier.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.4/LIPIcs.FSTTCS.2020.4.pdf
holonomic techniques
decision problems
recurrence sequences
minimal solutions
Positivity Problem
continued fractions
special functions
periods
exponential periods
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
5:1
5:3
10.4230/LIPIcs.FSTTCS.2020.5
article
Algorithmic Improvisation for Dependable Intelligent Autonomy (Invited Talk)
Seshia, Sanjit A.
1
https://orcid.org/0000-0001-6190-8707
University of California, Berkeley, CA, USA
Algorithmic Improvisation, also called control improvisation or controlled improvisation, is a new framework for automatically synthesizing systems with specified random but controllable behavior. In this talk, I will present the theory of algorithmic improvisation and show how it can be used in a wide variety of applications where randomness can provide variety, robustness, or unpredictability while guaranteeing safety or other properties. Applications demonstrated to date include robotic surveillance, software fuzz testing, music improvisation, human modeling, generating test cases for simulating cyber-physical systems, and generation of synthetic data sets to train and test machine learning algorithms. In this talk, I will particularly focus on applications to the design of intelligent autonomous systems, presenting work on randomized planning for robotics and a domain-specific probabilistic programming language for the design and analysis of learning-based autonomous systems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.5/LIPIcs.FSTTCS.2020.5.pdf
Formal methods
synthesis
verification
randomized algorithms
formal specification
testing
machine learning
synthetic data generation
planning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
6:1
6:1
10.4230/LIPIcs.FSTTCS.2020.6
article
On Some Recent Advances in Algebraic Complexity (Invited Talk)
Shpilka, Amir
1
https://orcid.org/0000-0003-2384-425X
School of Computer Science, Tel Aviv University, Israel
Algebraic complexity is the field studying the intrinsic difficulty of algebraic problems in an algebraic model of computation, most notably arithmetic circuits. It is a very natural model of computation that attracted a large amount of research in the last few decades, partially due to its simplicity and elegance, but mostly because of its importance. Being a more structured model than Boolean circuits, one could hope that the fundamental problems of theoretical computer science, such as separating P from NP, deciding whether P = BPP and more, will be easier to solve for arithmetic circuits.
In this talk I will give the basic definitions, explain the main questions and how they relate to their Boolean counterparts, and discuss what I view as promising approaches to tackling the most fundamental problems in the field.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.6/LIPIcs.FSTTCS.2020.6.pdf
Algebraic Complexity
Arithmetic Circuits
Polynomial Identity Testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
7:1
7:15
10.4230/LIPIcs.FSTTCS.2020.7
article
Faster Property Testers in a Variation of the Bounded Degree Model
Adler, Isolde
1
https://orcid.org/0000-0002-9667-9841
Fahey, Polly
1
https://orcid.org/0000-0002-3781-5313
University of Leeds, School of Computing, UK
Property testing algorithms are highly efficient algorithms that come with probabilistic accuracy guarantees. For a property P, the goal is to correctly distinguish, with high probability, inputs that have P from those that are far from having P, by querying only a small number of local parts of the input. In property testing on graphs, the distance is measured by the number of edge modifications (additions or deletions) that are necessary to transform a graph into one with property P. Much research has focussed on the query complexity of such algorithms, i.e. the number of queries the algorithm makes to the input, but in view of applications, the running time of the algorithm is equally relevant.
In (Adler, Harwath STACS 2018), a natural extension of the bounded degree graph model of property testing to relational databases of bounded degree was introduced, and it was shown that on databases of bounded degree and bounded tree-width, every property that is expressible in monadic second-order logic with counting (CMSO) is testable with constant query complexity and sublinear running time. It remains open whether this can be improved to constant running time.
In this paper we introduce a new model, which is based on the bounded degree model, but the distance measure allows both edge (tuple) modifications and vertex (element) modifications. Our main theorem shows that on databases of bounded degree and bounded tree-width, every property that is expressible in CMSO is testable with constant query complexity and constant running time in the new model. We also show that every property that is testable in the classical model is testable in our model with the same query complexity and running time, but the converse is not true.
We argue that our model is natural and our meta-theorem showing constant-time CMSO testability supports this.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.7/LIPIcs.FSTTCS.2020.7.pdf
Constant Time Algorithms
Logic and Databases
Property Testing
Bounded Degree Model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
8:1
8:16
10.4230/LIPIcs.FSTTCS.2020.8
article
Clustering Under Perturbation Stability in Near-Linear Time
Agarwal, Pankaj K.
1
Chang, Hsien-Chih
2
Munagala, Kamesh
1
Taylor, Erin
1
Welzl, Emo
3
Department of Computer Science, Duke University, Durham, NC, USA
Department of Computer Science, Dartmouth College, Hanover, NH, USA
Department of Computer Science, ETH Zürich, Switzerland
We consider the problem of center-based clustering in low-dimensional Euclidean spaces under the perturbation stability assumption. An instance is α-stable if the underlying optimal clustering continues to remain optimal even when all pairwise distances are arbitrarily perturbed by a factor of at most α. Our main contribution is in presenting efficient exact algorithms for α-stable clustering instances whose running times depend near-linearly on the size of the data set when α ≥ 2 + √3. For k-center and k-means problems, our algorithms also achieve polynomial dependence on the number of clusters, k, when α ≥ 2 + √3 + ε for any constant ε > 0 in any fixed dimension. For k-median, our algorithms have polynomial dependence on k for α > 5 in any fixed dimension; and for α ≥ 2 + √3 in two dimensions. Our algorithms are simple, and only require applying techniques such as local search or dynamic programming to a suitably modified metric space, combined with careful choice of data structures.
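The local-search technique mentioned in the abstract can be illustrated by a generic single-swap local search for k-means (an illustrative baseline only, not the paper's near-linear-time algorithm or its modified metric space):

```python
import itertools

# Illustrative single-swap local search for k-means: repeatedly swap one
# chosen center for one unchosen candidate point while the cost improves.

def cost(points, centers):
    # Sum of squared distances from each point to its nearest center.
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def local_search_kmeans(points, k):
    centers = list(points[:k])  # arbitrary initial centers
    improved = True
    while improved:
        improved = False
        for out_c, in_c in itertools.product(list(centers), points):
            if in_c in centers:
                continue
            cand = [in_c if c == out_c else c for c in centers]
            if cost(points, cand) < cost(points, centers):
                centers, improved = cand, True
    return centers
```

On stable instances in the sense of the abstract, the optimal clustering is well separated, which is exactly the regime where such simple swap-based searches tend to reach the optimum.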
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.8/LIPIcs.FSTTCS.2020.8.pdf
clustering
stability
local search
dynamic programming
coreset
polyhedral metric
trapezoid decomposition
range query
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
9:1
9:18
10.4230/LIPIcs.FSTTCS.2020.9
article
Width Notions for Ordering-Related Problems
Arrighi, Emmanuel
1
https://orcid.org/0000-0002-0326-1893
Fernau, Henning
2
https://orcid.org/0000-0002-4444-3220
de Oliveira Oliveira, Mateus
1
https://orcid.org/0000-0001-7798-7446
Wolf, Petra
2
https://orcid.org/0000-0003-3097-3906
University of Bergen, Norway
University of Trier, Germany
We study a weighted version of the linear extension problem for a given finite partial order ρ, called Completion of an Ordering. While this problem is NP-complete, we show that it lies in FPT when parameterized by the interval width of ρ. This ordering problem can be used to model several ordering problems stemming from diverse application areas, such as graph drawing, computational social choice, or computer memory management. Each application yields a special ρ. We also relate the interval width of ρ to parameterizations such as maximum range that have been introduced earlier in these applications, sometimes improving on parameterized algorithms that have been developed for these parameterizations before. This approach also gives some practical sub-exponential time algorithms for ordering problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.9/LIPIcs.FSTTCS.2020.9.pdf
Parameterized algorithms
interval width
linear extension
one-sided crossing minimization
Kemeny rank aggregation
grouping by swapping
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
10:1
10:19
10.4230/LIPIcs.FSTTCS.2020.10
article
Optimal Output Sensitive Fault Tolerant Cuts
Banerjee, Niranka
1
Raman, Venkatesh
1
Saurabh, Saket
1
2
The Institute of Mathematical Sciences, HBNI, Chennai, India
University of Bergen, Norway
In this paper we consider two classic cut problems, Global Min-Cut and Min k-Cut, via the lens of fault tolerant network design. In particular, given a graph G on n vertices, and a positive integer f, our objective is to compute an upper bound on the size of the sparsest subgraph H of G that preserves the edge connectivity of G (denoted by λ(G)) in the case of Global Min-Cut, and λ(G,k) (the minimum number of edges whose removal partitions the graph into at least k connected components) in the case of Min k-Cut, upon failure of any f edges of G. The subgraph H corresponding to Global Min-Cut and Min k-Cut is called f-FTCS and f-FT-k-CS, respectively. We obtain the following results about the sizes of f-FTCS and f-FT-k-CS.
- There exists an f-FTCS with (n-1)(f+λ(G)) edges. We complement this upper bound with a matching lower bound, by constructing an infinite family of graphs where any f-FTCS must have at least ((n-λ(G)-1)(λ(G)+f-1))/2 + (n-λ(G)-1) + (λ(G)(λ(G)+1))/2 edges.
- There exists an f-FT-k-CS with min{(2f+λ(G,k)-(k-1))(n-1), (f+λ(G,k))(n-k)+𝓁} edges. We complement this upper bound with a lower bound, by constructing an infinite family of graphs where any f-FT-k-CS must have at least ((n-λ(G,k)-1)(λ(G,k)+f-k+1))/2 + n-λ(G,k)+k-3 + ((λ(G,k)-k+3)(λ(G,k)-k+2))/2 edges. Our upper bounds exploit the structural properties of k-connectivity certificates. On the other hand, for our lower bounds we construct an infinite family of graphs, such that for any graph in the family any f-FTCS (or f-FT-k-CS) must contain all its edges. We also add that our upper bounds are constructive. That is, there exist polynomial time algorithms that construct H with the aforementioned number of edges.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.10/LIPIcs.FSTTCS.2020.10.pdf
Fault tolerant
minimum cuts
upper bound
lower bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
11:1
11:16
10.4230/LIPIcs.FSTTCS.2020.11
article
Online Matching with Recourse: Random Edge Arrivals
Bernstein, Aaron
1
Dudeja, Aditi
1
Rutgers University, New Brunswick, NJ, USA
The matching problem in the online setting models the following situation: we are given a set of servers in advance, the clients arrive one at a time, and each client has edges to some of the servers. Each client must be matched to some incident server upon arrival (or left unmatched) and the algorithm is not allowed to reverse its decisions. Due to this no-reversal restriction, we are not able to guarantee an exact maximum matching in this model, only an approximate one.
Therefore, it is natural to study a different setting, where the top priority is to match as many clients as possible, and changes to the matching are possible but expensive. Formally, the goal is to always maintain a maximum matching while minimizing the number of changes made to the matching (denoted the recourse). This model is called the online model with recourse, and has been studied extensively over the past few years. For the specific problem of matching, the focus has been on the vertex-arrival model, where clients arrive one at a time with all their edges. A recent result of Bernstein et al. [Bernstein et al., 2019] gives an upper bound of O(n log² n) recourse for the case of general bipartite graphs. For trees the best known bound is O(n log n) recourse, due to Bosek et al. [Bosek et al., 2018]. These are nearly tight, as a lower bound of Ω(n log n) is known.
In this paper, we consider the more general model where all the vertices are known in advance, but the edges of the graph are revealed one at a time. Even for the simple case where the graph is a path, there is a lower bound of Ω(n²). Therefore, we instead consider the natural relaxation where the graph is worst-case, but the edges are revealed in a random order. This relaxation is motivated by the fact that in many related models, such as the streaming setting or the standard online setting without recourse, faster algorithms have been obtained for the matching problem when the input comes in a random order. Our results are as follows:
- Our main result is that for the case of general (non-bipartite) graphs, the problem with random edge arrivals is almost as hard as in the adversarial setting: we show a family of graphs for which the expected recourse is Ω(n²/log n).
- We show that for some special cases of graphs, random arrival is significantly easier. For the case of trees, we get an upper bound of O(n log² n) on the expected recourse. For the case of paths, this upper bound is O(n log n). We also show that the latter bound is tight, i.e. that the expected recourse is at least Ω(n log n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.11/LIPIcs.FSTTCS.2020.11.pdf
matchings
edge-arrival
online model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
12:1
12:15
10.4230/LIPIcs.FSTTCS.2020.12
article
Hard QBFs for Merge Resolution
Beyersdorff, Olaf
1
https://orcid.org/0000-0002-2870-1648
Blinkhorn, Joshua
1
https://orcid.org/0000-0001-7452-6521
Mahajan, Meena
2
https://orcid.org/0000-0002-9116-4398
Peitl, Tomáš
1
https://orcid.org/0000-0001-7799-1568
Sood, Gaurav
2
https://orcid.org/0000-0001-6501-6589
Institut für Informatik, Friedrich-Schiller-Universität Jena, Germany
The Institute of Mathematical Sciences, HBNI, Chennai, India
We prove the first proof size lower bounds for the proof system Merge Resolution (MRes [Olaf Beyersdorff et al., 2020]), a refutational proof system for prenex quantified Boolean formulas (QBF) with a CNF matrix. Unlike most QBF resolution systems in the literature, proofs in MRes consist of resolution steps together with information on countermodels, which are syntactically stored in the proofs as merge maps. As demonstrated in [Olaf Beyersdorff et al., 2020], this makes MRes quite powerful: it has strategy extraction by design and allows short proofs for formulas which are hard for classical QBF resolution systems.
Here we show the first exponential lower bounds for MRes, thereby uncovering limitations of MRes. Technically, the results are either transferred from circuit complexity bounds (for restricted versions of MRes) or obtained directly by combinatorial arguments (for full MRes). Our results imply that the MRes approach is largely orthogonal to other QBF resolution models such as the QCDCL resolution systems QRes and QURes and the expansion systems ∀Exp+Res and IR.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.12/LIPIcs.FSTTCS.2020.12.pdf
QBF
resolution
proof complexity
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
13:1
13:17
10.4230/LIPIcs.FSTTCS.2020.13
article
On Sampling Based Algorithms for k-Means
Bhattacharya, Anup
1
Goyal, Dishant
2
Jaiswal, Ragesh
2
Kumar, Amit
2
Indian Statistical Institute Kolkata, India
Indian Institute of Technology Delhi, India
We generalise the results of Bhattacharya et al. [Bhattacharya et al., 2018] for the list-k-means problem, defined as follows: for an (unknown) partition X₁, ..., X_k of the dataset X ⊆ ℝ^d, find a list of k-center-sets (each element in the list is a set of k centers) such that at least one of the k-center-sets {c₁, ..., c_k} in the list gives a (1+ε)-approximation with respect to the cost function min_{permutation π} [∑_{i = 1}^{k} ∑_{x ∈ X_i} ||x - c_{π(i)}||²]. The list-k-means problem is important for the constrained k-means problem since algorithms for the former can be converted to PTASs for various versions of the latter. The algorithm for the list-k-means problem by Bhattacharya et al. is a D²-sampling based algorithm that runs in k iterations. Making use of a constant factor solution for the (classical or unconstrained) k-means problem, we generalise the algorithm of Bhattacharya et al. in two ways: (i) for any fixed set X_{j₁}, ..., X_{j_t} of t ≤ k clusters, the algorithm produces a list of (k/ε)^{O(t/ε)} t-center sets such that (w.h.p.) at least one of them is good for X_{j₁}, ..., X_{j_t}, and (ii) the algorithm runs in a single iteration. The following are the consequences of our generalisations:
1) Faster PTAS under stability and a parameterised reduction: Property (i) of our generalisation is useful in scenarios where finding good centers becomes easier once good centers for a few "bad" clusters have been chosen. One such case is clustering under stability of Awasthi et al. [Awasthi et al., 2010] where the number of such bad clusters is a constant. Using property (i), we significantly improve the running time of their algorithm from O(dn³) (k log n)^{poly(1/β, 1/ε)} to O(dn³ (k/ε)^{O(1/(βε²))}). Another application is a parameterised reduction from the outlier version of k-means to the classical one where the bad clusters are the outliers.
2) Streaming algorithms: The sampling algorithm running in a single iteration (i.e., property (ii)) allows us to design a constant-pass, logspace streaming algorithm for the list-k-means problem. This can be converted to a constant-pass, logspace streaming PTAS for various constrained versions of the k-means problem. In particular, this gives a 3-pass, polylog-space streaming PTAS for the constrained binary k-means problem, which in turn gives a 4-pass, polylog-space streaming PTAS for the generalised binary 𝓁₀-rank-r approximation problem. This is the first constant-pass, polylog-space streaming algorithm for either of the two problems. Coreset-based techniques, another general approach for designing streaming algorithms, are not known to work for the constrained binary k-means problem to the best of our knowledge.
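The D²-sampling primitive underlying the paper's algorithms (the seeding rule also used by k-means++) can be sketched as follows; this is an illustrative baseline, not the paper's generalised single-iteration procedure:

```python
import random

# Illustrative sketch of D^2-sampling: after picking a first center
# uniformly, each subsequent center is sampled with probability
# proportional to the squared distance to the nearest center so far.

def d2_sample_centers(points, k, rng=random.Random(0)):
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        # Squared distance of each point to its nearest chosen center.
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
              for px, py in points]
        # Sample one point with probability proportional to d2.
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Points far from every current center are the most likely next picks, which is why a short sampling phase tends to hit every (well-separated) cluster with good probability.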
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.13/LIPIcs.FSTTCS.2020.13.pdf
k-means
low rank approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
14:1
14:17
10.4230/LIPIcs.FSTTCS.2020.14
article
String Indexing for Top-k Close Consecutive Occurrences
Bille, Philip
1
https://orcid.org/0000-0002-1120-5154
Gørtz, Inge Li
1
https://orcid.org/0000-0002-8322-4952
Pedersen, Max Rishøj
1
https://orcid.org/0000-0002-8850-6422
Rotenberg, Eva
1
https://orcid.org/0000-0001-5853-7909
Steiner, Teresa Anna
1
https://orcid.org/0000-0003-1078-4075
Technical University of Denmark, DTU Compute, Lyngby, Denmark
The classic string indexing problem is to preprocess a string S into a compact data structure that supports efficient subsequent pattern matching queries, that is, given a pattern string P, report all occurrences of P within S. In this paper, we study a basic and natural extension of string indexing called the string indexing for top-k close consecutive occurrences problem (Sitcco). Here, a consecutive occurrence is a pair (i,j), i < j, such that P occurs at positions i and j in S and there is no occurrence of P between i and j, and their distance is defined as j-i. Given a pattern P and a parameter k, the goal is to report the top-k consecutive occurrences of P in S of minimal distance. The challenge is to compactly represent S while supporting queries in time close to the length of P and k. We give two time-space trade-offs for the problem. Let n be the length of S, m the length of P, and ε ∈ (0,1]. Our first result achieves O(nlog n) space and optimal query time of O(m+k), and our second result achieves linear space and query time O(m+k^{1+ε}). Along the way, we develop several techniques of independent interest, including a new translation of the problem into a line segment intersection problem and a new recursive clustering technique for trees.
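A naive reference implementation of the Sitcco query (for defining the problem only; it does no preprocessing and is far from the paper's time-space trade-offs) looks like this:

```python
# Naive Sitcco query: find all occurrences of P in S, pair up consecutive
# occurrences, and report the top-k pairs of minimal distance j - i.

def top_k_close_consecutive(S, P, k):
    # All occurrence positions of P in S (O(nm) scan, for illustration).
    occ = [i for i in range(len(S) - len(P) + 1) if S[i:i + len(P)] == P]
    # Consecutive occurrences are adjacent positions in sorted order.
    pairs = [(j - i, (i, j)) for i, j in zip(occ, occ[1:])]
    pairs.sort()  # ascending by distance
    return [p for _, p in pairs[:k]]

# Example: "ab" occurs at 0, 2, 4, 7 in "abababcab", giving consecutive
# pairs (0,2), (2,4), (4,7); the two closest are (0,2) and (2,4).
top_k_close_consecutive("abababcab", "ab", 2)
```

The indexing problem studied in the paper is to precompute a structure over S so that this answer can be produced in time close to m + k rather than by rescanning S.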
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.14/LIPIcs.FSTTCS.2020.14.pdf
String indexing
pattern matching
consecutive occurrences
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
15:1
15:15
10.4230/LIPIcs.FSTTCS.2020.15
article
Fair Tree Connection Games with Topology-Dependent Edge Cost
Bilò, Davide
1
https://orcid.org/0000-0003-3169-4300
Friedrich, Tobias
2
https://orcid.org/0000-0003-0076-6308
Lenzner, Pascal
2
https://orcid.org/0000-0002-3010-1019
Melnichenko, Anna
2
https://orcid.org/0000-0003-1988-0848
Molitor, Louise
2
https://orcid.org/0000-0002-9166-9927
Department of Humanities and Social Sciences, University of Sassari, Via Roma 151, 07100 Sassari (SS), Italy
Hasso Plattner Institute, University of Potsdam, Prof.-Dr.-Helmert-Straße 2-3, 14482 Potsdam, Germany
How do rational agents self-organize when trying to connect to a common target? We study this question with a simple tree formation game which is related to the well-known fair single-source connection game by Anshelevich et al. (FOCS'04) and selfish spanning tree games by Gourvès and Monnot (WINE'08). In our game agents correspond to nodes in a network that activate a single outgoing edge to connect to the common target node (possibly via other nodes). Agents pay for their path to the common target, and edge costs are shared fairly among all agents using an edge. The main novelty of our model is dynamic edge costs that depend on the in-degree of the respective endpoint. This reflects that connecting to popular nodes that have increased internal coordination costs is more expensive since they can charge higher prices for their routing service.
In contrast to related models, we show that equilibria are not guaranteed to exist, but we prove the existence for infinitely many numbers of agents. Moreover, we analyze the structure of equilibrium trees and employ these insights to prove a constant upper bound on the Price of Anarchy as well as non-trivial lower bounds on both the Price of Anarchy and the Price of Stability. We also show that in comparison with the social optimum tree the overall cost of an equilibrium tree is more fairly shared among the agents. Thus, we prove that self-organization of rational agents yields on average only slightly higher cost per agent compared to the centralized optimum, and at the same time, it induces a fairer cost distribution. Moreover, equilibrium trees achieve a beneficial trade-off between a low height and low maximum degree, and hence these trees might be of independent interest from a combinatorics point of view. We conclude with a discussion of promising extensions of our model.
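The fair cost-sharing rule in the model can be made concrete with a toy computation (fixed edge costs for simplicity; in the paper the cost of an edge additionally depends on the in-degree of its endpoint):

```python
# Toy illustration of fair cost sharing on a fixed tree: each agent (every
# non-root node) pays, for each edge on its path to the target, that edge's
# cost divided by the number of agents whose paths use the edge.

def agent_costs(parent, edge_cost):
    # parent: child -> parent (root absent); edge_cost: child -> cost of
    # the edge from child to its parent.
    agents = list(parent)
    usage = {e: 0 for e in edge_cost}
    paths = {}
    for a in agents:
        path, v = [], a
        while v in parent:       # walk up to the root (target node)
            path.append(v)       # edges are identified by their child end
            usage[v] += 1
            v = parent[v]
        paths[a] = path
    return {a: sum(edge_cost[e] / usage[e] for e in paths[a]) for a in agents}
```

For example, with root 0, children 2 and 3 hanging off node 1, and edge costs 3, 1, 1, the cost-3 edge into the root is shared three ways, so agent 1 pays 1 while agents 2 and 3 each pay 2.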
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.15/LIPIcs.FSTTCS.2020.15.pdf
Network Design Games
Spanning Tree Games
Fair Cost Sharing
Price of Anarchy
Nash Equilibrium
Algorithmic Game Theory
Combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
16:1
16:17
10.4230/LIPIcs.FSTTCS.2020.16
article
Locally Decodable/Correctable Codes for Insertions and Deletions
Block, Alexander R.
1
Blocki, Jeremiah
1
Grigorescu, Elena
1
Kulkarni, Shubhang
2
Zhu, Minshen
1
Purdue University, West Lafayette, IN, USA
University of Illinois Urbana-Champaign, IL, USA
Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms.
In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions.
Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs. Specifically, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length.
Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.16/LIPIcs.FSTTCS.2020.16.pdf
Locally decodable/correctable codes
insert-delete channel
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
17:1
17:18
10.4230/LIPIcs.FSTTCS.2020.17
article
Maximum Clique in Disk-Like Intersection Graphs
Bonnet, Édouard
1
https://orcid.org/0000-0002-1653-5822
Grelier, Nicolas
2
Miltzow, Tillmann
3
Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France
Department of Computer Science, ETH Zürich, Switzerland
Department of Information and Computing Sciences, Utrecht University, The Netherlands
We study the complexity of Maximum Clique in intersection graphs of convex objects in the plane. On the algorithmic side, we extend the polynomial-time algorithm for unit disks [Clark '90, Raghavan and Spinrad '03] to translates of any fixed convex set. We also generalize the efficient polynomial-time approximation scheme (EPTAS) and subexponential algorithm for disks [Bonnet et al. '18, Bonamy et al. '18] to homothets of a fixed centrally symmetric convex set.
The main open question on this topic is the complexity of Maximum Clique in disk graphs. It is not known whether this problem is NP-hard. We observe that, so far, all the hardness proofs for Maximum Clique in intersection graph classes I follow the same road. They show that, for every graph G of a large-enough class C, the complement of an even subdivision of G belongs to the intersection class I. Then they conclude by invoking the hardness of Maximum Independent Set on the class C, and the fact that even subdivision preserves that hardness. However, there is strong evidence that this approach cannot work for disk graphs [Bonnet et al. '18]. We suggest a new approach, based on a problem that we dub Max Interval Permutation Avoidance, which we prove unlikely to have a subexponential-time approximation scheme. We transfer that hardness to Maximum Clique in intersection graphs of objects which can be either half-planes (or unit disks) or axis-parallel rectangles. That problem is not amenable to the previous approach. We hope that a scaled-down (merely NP-hard) variant of Max Interval Permutation Avoidance could help make progress on the disk case, for instance by showing NP-hardness for (convex) pseudo-disks.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.17/LIPIcs.FSTTCS.2020.17.pdf
Disk Graphs
Intersection Graphs
Maximum Clique
Algorithms
NP-hardness
APX-hardness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
18:1
18:15
10.4230/LIPIcs.FSTTCS.2020.18
article
Parameterized Complexity of Feedback Vertex Sets on Hypergraphs
Choudhary, Pratibha
1
Kanesh, Lawqueen
2
Lokshtanov, Daniel
3
Panolan, Fahad
4
Saurabh, Saket
2
5
Indian Institute of Technology Jodhpur, Jodhpur, India
Institute of Mathematical Sciences, HBNI, Chennai, India
University of California Santa Barbara, Santa Barbara, USA
Indian Institute of Technology Hyderabad, India
University of Bergen, Norway
A feedback vertex set in a hypergraph H is a set of vertices S such that deleting S from H results in an acyclic hypergraph. Here, deleting a vertex means removing the vertex and all incident hyperedges, and a hypergraph is acyclic if its vertex-edge incidence graph is acyclic. We study the parameterized complexity of the Hypergraph Feedback Vertex Set (HFVS) problem: given as input a hypergraph H and an integer k, determine whether H has a feedback vertex set of size at most k. It is easy to see that this problem generalizes the classic Feedback Vertex Set (FVS) problem on graphs. Remarkably, despite the central role of FVS in parameterized algorithms and complexity, the parameterized complexity of a generalization of FVS to hypergraphs has not been studied previously. In this paper, we fill this void. Our main results are as follows:
- HFVS is W[2]-hard (as opposed to FVS, which is fixed-parameter tractable).
- If the input hypergraph is restricted to a linear hypergraph (no two hyperedges intersect in more than one vertex), HFVS admits a randomized algorithm with running time 2^{𝒪(k³log k)}n^{𝒪(1)}.
- If the input hypergraph is restricted to a d-hypergraph (hyperedges have cardinality at most d), then HFVS admits a deterministic algorithm with running time d^{𝒪(k)}n^{𝒪(1)}. The algorithm for linear hypergraphs combines ideas from the randomized algorithm for FVS by Becker et al. [J. Artif. Intell. Res., 2000] with the branching algorithm for Point Line Cover by Langerman and Morin [Discrete & Computational Geometry, 2005].
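The acyclicity notion used above, that the vertex-edge incidence graph is a forest, is easy to operationalize. The following is an illustrative sketch, not from the paper; `is_acyclic_hypergraph` is a hypothetical helper that detects a cycle in the incidence graph via union-find:

```python
def is_acyclic_hypergraph(vertices, hyperedges):
    """Check whether a hypergraph is acyclic in the sense above:
    its vertex-edge incidence graph (one node per vertex, one per
    hyperedge, an edge for each incidence) contains no cycle."""
    parent = {}

    def find(x):
        # Find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in vertices:
        parent[("v", v)] = ("v", v)
    for i in range(len(hyperedges)):
        parent[("e", i)] = ("e", i)

    for i, e in enumerate(hyperedges):
        for v in e:
            re_, rv = find(("e", i)), find(("v", v))
            if re_ == rv:  # this incidence edge would close a cycle
                return False
            parent[re_] = rv
    return True
```

Deleting a candidate feedback vertex set S then amounts to removing its vertices and all incident hyperedges before running this check.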
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.18/LIPIcs.FSTTCS.2020.18.pdf
feedback vertex sets
hypergraphs
FPT
randomized algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
19:1
19:14
10.4230/LIPIcs.FSTTCS.2020.19
article
Size Bounds on Low Depth Circuits for Promise Majority
Cook, Joshua
1
The University Of Texas At Austin, TX, USA
We give two results on the size of AC0 circuits computing promise majority. ε-promise majority is the majority function under the promise that either at most an ε fraction of the input bits are 1, or at most an ε fraction are 0.
- First, we show super-quadratic size lower bounds on both monotone and general depth-3 circuits for promise majority.
- For any ε ∈ (0, 1/2), monotone depth-3 AC0 circuits for ε-promise majority have size Ω̃(ε³ n^{2 + (ln(1 - ε))/(ln(ε))}).
- For any ε ∈ (0, 1/2), general depth-3 AC0 circuits for ε-promise majority have size Ω̃(ε³ n^{2 + (ln(1 - ε²))/(2ln(ε))}). These are the first quadratic size lower bounds for depth-3 ε-promise majority circuits for ε < 0.49.
- Second, we give both uniform and non-uniform sub-quadratic size constant-depth circuits for promise majority.
- For integer k ≥ 1 and constant ε ∈ (0, 1/2), there exist monotone non-uniform AC0 circuits of depth-(2 + 2k) computing ε-promise majority with size Õ(n^{1/(1 - 2^{-k})}).
- For integer k ≥ 1 and constant ε ∈ (0, 1/2), there exist monotone uniform AC0 circuits of depth-(2 + 2k) computing ε-promise majority with size n^{1/(1 - (2/3)^k) + o(1)}. These circuits are based on incremental improvements to existing depth-3 circuits for promise majority given by Ajtai [Miklós Ajtai, 1983] and Viola [Emanuele Viola, 2009] combined with a divide-and-conquer strategy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.19/LIPIcs.FSTTCS.2020.19.pdf
AC0
Approximate Counting
Approximate Majority
Promise Majority
Depth 3 Circuits
Circuit Lower Bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
20:1
20:15
10.4230/LIPIcs.FSTTCS.2020.20
article
Lower Bounds for Semi-adaptive Data Structures via Corruption
Dvořák, Pavel
1
Loff, Bruno
2
Charles University, Prague, Czech Republic
INESC-Tec and University of Porto, Portugal
In a dynamic data structure problem we wish to maintain an encoding of some data in memory, in such a way that we may efficiently carry out a sequence of queries and updates to the data. A long-standing open problem in this area is to prove an unconditional polynomial lower bound on the trade-off between the update time and the query time of an adaptive dynamic data structure computing some explicit function. Ko and Weinstein provided such a lower bound for a restricted class of semi-adaptive data structures, which compute the Disjointness function. There, the data are subsets x₁,… ,x_k and y of {1,… ,n}, the updates can modify y (by inserting and removing elements), and the queries are an index i ∈ {1,… ,k} (query i should answer whether x_i and y are disjoint, i.e., it should compute the Disjointness function applied to (x_i, y)). The semi-adaptiveness places a restriction on how the data structure can be accessed in order to answer a query. We generalize the lower bound of Ko and Weinstein to work not just for Disjointness, but for any function having high complexity under the smooth corruption bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.20/LIPIcs.FSTTCS.2020.20.pdf
semi-adaptive dynamic data structure
polynomial lower bound
corruption bound
information theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
21:1
21:15
10.4230/LIPIcs.FSTTCS.2020.21
article
Stability-Preserving, Time-Efficient Mechanisms for School Choice in Two Rounds
Gajulapalli, Karthik
1
Liu, James A.
2
Mai, Tung
3
Vazirani, Vijay V.
1
Department of Computer Science, University of California Irvine, CA, US
K-Sky Limited, Hong Kong, Hong Kong
Adobe Research, San Jose, CA, US
We address the following dynamic version of the school choice question: a city, named City, admits students in two temporally-separated rounds, denoted R₁ and R₂. In round R₁, the capacity of each school is fixed and mechanism M₁ finds a student-optimal stable matching. In round R₂, certain parameters change, e.g., new students move into the City or the City is happy to allocate extra seats to specific schools. We study a number of settings of this kind and give polynomial-time algorithms for obtaining a stable matching for the new situations.
It is well established that switching the school of a student midway, unsynchronized with her classmates, can cause traumatic effects. This fact guides us to two types of results: the first simply disallows any re-allocations in round R₂, and the second asks for a stable matching that minimizes the number of re-allocations. For the latter, we prove that the stable matchings which minimize the number of re-allocations form a sublattice of the lattice of stable matchings. Observations about incentive compatibility are woven into these results. We also give a third type of result, namely proofs of NP-hardness for a mechanism for round R₂ under certain settings.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.21/LIPIcs.FSTTCS.2020.21.pdf
stable matching
mechanism design
NP-Hardness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
22:1
22:15
10.4230/LIPIcs.FSTTCS.2020.22
article
New Verification Schemes for Frequency-Based Functions on Data Streams
Ghosh, Prantar
1
Dartmouth College, Hanover, NH, USA
We study the general problem of computing frequency-based functions, i.e., the sum of any given function of data stream frequencies. Special cases include fundamental data stream problems such as computing the number of distinct elements (F₀), frequency moments (F_k), and heavy-hitters. It can also be applied to calculate the maximum frequency of an element (F_{∞}).
Given that exact computation of most of these special cases provably does not admit any sublinear-space algorithm, a natural approach is to consider them in an enhanced data streaming model, where we have a computationally unbounded but untrusted prover that can send proofs or help messages to ease the computation. Think of a memory-restricted client delegating the computation to a powerful cloud service. The client does not blindly trust the cloud, and with its limited memory, it wants to verify the proof that the cloud sends. Chakrabarti et al. (ICALP '09) introduced this model as the annotated data streaming model and showed that multiple problems including exact computation of frequency-based functions - that have no sublinear algorithms in basic streaming - do have algorithms, also called schemes, in the annotated streaming model with both space and proof-length sublinear in the input size.
We give a general scheme for computing any frequency-based function with both space usage and proof-size of O(n^{2/3}log n) bits, where n is the size of the universe. This improves upon the best known bound of O(n^{2/3}log^{4/3} n) given by the seminal paper of Chakrabarti et al. and as a result, also improves upon the best known bounds for the important special cases of computing F₀ and F_{∞}. We emphasize that while being quantitatively better, our scheme is also qualitatively better in the sense that it is simpler than the previously best scheme that uses intricate data structures and elaborate subroutines. Our scheme uses a simple technique tailored for this model: the verifier solves the problem partially by running an algorithm known to be helpful for it in the basic (sans prover) streaming model and then takes the prover’s help to solve the remaining part.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.22/LIPIcs.FSTTCS.2020.22.pdf
data streams
interactive proofs
Arthur-Merlin
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
23:1
23:14
10.4230/LIPIcs.FSTTCS.2020.23
article
Online Carpooling Using Expander Decompositions
Gupta, Anupam
1
Krishnaswamy, Ravishankar
2
Kumar, Amit
3
Singla, Sahil
4
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
Microsoft Research, Bengaluru, India
Department of Computer Science and Engineering, Indian Institute of Technology, Delhi, India
Department of Computer Science, Princeton University, NJ, USA
We consider the online carpooling problem: given n vertices, a sequence of edges arrives over time. When an edge e_t = (u_t, v_t) arrives at time step t, the algorithm must orient the edge either as v_t → u_t or u_t → v_t, with the objective of minimizing the maximum discrepancy of any vertex, i.e., the absolute difference between its in-degree and out-degree. Edges correspond to pairs of persons wanting to ride together, and orienting denotes designating the driver. The discrepancy objective then corresponds to every person driving close to their fair share of the rides they participate in.
In this paper, we design efficient algorithms which can maintain polylog(n,T) maximum discrepancy (w.h.p) over any sequence of T arrivals, when the arriving edges are sampled independently and uniformly from any given graph G. This provides the first polylogarithmic bounds for the online (stochastic) carpooling problem. Prior to this work, the best known bounds were O(√{n log n})-discrepancy for any adversarial sequence of arrivals, or O(log log n)-discrepancy bounds for the stochastic arrivals when G is the complete graph.
The technical crux of our paper is in showing that the simple greedy algorithm, which has provably good discrepancy bounds when the arriving edges are drawn uniformly at random from the complete graph, also has polylog discrepancy when G is an expander graph. We then combine this with known expander-decomposition results to design our overall algorithm.
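The greedy algorithm referred to above can be sketched as follows; this is an illustrative reconstruction under simple conventions, not the paper's exact procedure: on each arriving edge, designate as driver the endpoint whose current discrepancy (out-degree minus in-degree) is lower.

```python
def greedy_orient(n, edges):
    """Greedily orient a stream of edges on n vertices so as to keep
    each vertex's discrepancy (out-degree minus in-degree) small.
    Returns the chosen orientations and the final maximum |discrepancy|."""
    disc = [0] * n            # out-degree minus in-degree per vertex
    orientation = []
    for u, v in edges:
        if disc[u] <= disc[v]:
            orientation.append((u, v))   # u drives: u -> v
            disc[u] += 1
            disc[v] -= 1
        else:
            orientation.append((v, u))   # v drives: v -> u
            disc[v] += 1
            disc[u] -= 1
    return orientation, max(abs(d) for d in disc)
```

On stochastic arrivals from an expander, the paper shows this kind of greedy rule keeps the maximum discrepancy polylogarithmic.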
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.23/LIPIcs.FSTTCS.2020.23.pdf
Online Algorithms
Discrepancy Minimization
Carpooling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
24:1
24:17
10.4230/LIPIcs.FSTTCS.2020.24
article
On the (Parameterized) Complexity of Almost Stable Marriage
Gupta, Sushmita
1
Jain, Pallavi
2
Roy, Sanjukta
1
Saurabh, Saket
1
3
Zehavi, Meirav
4
The Institute of Mathematical Sciences, HBNI, Chennai, India
Indian Institute of Technology Jodhpur, India
University of Bergen, Norway
Ben Gurion University of the Negev, Beer Sheva, Israel
In the Stable Marriage problem, when the preference lists are complete, all agents of the smaller side can be matched. However, this need not be true when preference lists are incomplete. In most real-life situations, where agents participate in the matching market voluntarily and submit their preferences, it is natural to assume that each agent wants to be matched to someone in his/her preference list as opposed to being unmatched. In light of the Rural Hospital Theorem, we have to relax the "no blocking pair" condition for stable matchings in order to match more agents. In this paper, we study the question of matching more agents with the fewest possible blocking edges. In particular, the goal is to find a matching whose size exceeds that of a stable matching in the graph by at least t and has at most k blocking edges. We study this question in the realm of parameterized complexity with respect to several natural parameters, k, t, d, where d is the maximum length of a preference list. Unfortunately, the problem remains intractable even for the combined parameter k+t+d. Thus, we extend our study to the local search variant of this problem, in which we search for a matching that not only fulfills each of the above conditions but is also "closest", in terms of its symmetric difference, to the given stable matching, and we obtain an FPT algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.24/LIPIcs.FSTTCS.2020.24.pdf
Stable Matching
Parameterized Complexity
Local Search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
25:1
25:17
10.4230/LIPIcs.FSTTCS.2020.25
article
Min-Cost Popular Matchings
Kavitha, Telikepalli
1
Tata Institute of Fundamental Research, Mumbai, India
Let G = (A ∪ B, E) be a bipartite graph on n vertices where every vertex ranks its neighbors in a strict order of preference. A matching M in G is popular if there is no matching N such that vertices that prefer N to M outnumber those that prefer M to N. Popular matchings always exist in G since every stable matching is popular. Thus it is easy to find a popular matching in G - however it is NP-hard to compute a min-cost popular matching in G when there is a cost function on the edge set; moreover it is NP-hard to approximate this to any multiplicative factor. An O^*(2ⁿ) algorithm to compute a min-cost popular matching in G follows from known results. Here we show:
- an algorithm with running time O^*(2^{n/4}) ≈ O^*(1.19ⁿ) to compute a min-cost popular matching;
- assuming all edge costs are non-negative: given ε > 0, a randomized algorithm with running time poly(n, 1/ε) to compute a matching M such that cost(M) is at most twice the optimal cost and, with high probability, the fraction of all matchings more popular than M is at most 1/2 + ε.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.25/LIPIcs.FSTTCS.2020.25.pdf
Bipartite graphs
Stable matchings
Dual certificates
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
26:1
26:15
10.4230/LIPIcs.FSTTCS.2020.26
article
Constructing Large Matchings via Query Access to a Maximal Matching Oracle
Khalil, Lidiya Khalidah binti
1
Konrad, Christian
1
https://orcid.org/0000-0003-1802-4011
Department of Computer Science, University of Bristol, UK
Multi-pass streaming algorithms for Maximum Matching have been studied for more than 15 years, and various algorithmic results are known today, including 2-pass streaming algorithms that break the 1/2-approximation barrier, and (1-ε)-approximation streaming algorithms that run in O(poly 1/ε) passes in bipartite graphs and in O((1/ε)^(1/ε)) or O(poly (1/ε) ⋅ log n) passes in general graphs, where n is the number of vertices of the input graph. However, proving impossibility results for such algorithms has so far been elusive, and, for example, even the existence of 2-pass small-space streaming algorithms with approximation factor 0.999 has not yet been ruled out.
The key building block of all multi-pass streaming algorithms for Maximum Matching is the Greedy matching algorithm. Our aim is to understand the limitations of this approach: How many passes are required if the algorithm solely relies on the invocation of the Greedy algorithm?
In this paper, we initiate the study of lower bounds for restricted families of multi-pass streaming algorithms for Maximum Matching. We focus on the simple yet powerful class of algorithms that in each pass run Greedy on a vertex-induced subgraph of the input graph. In bipartite graphs, we show that 3 passes are necessary and sufficient to improve on the trivial approximation factor of 1/2: We give a lower bound of 0.6 on the approximation ratio of such algorithms, which is optimal. We further show that Ω(1/ε) passes are required for computing a (1-ε)-approximation, even in bipartite graphs. Last, the considered class of algorithms is not well-suited to general graphs: We show that Ω(n) passes are required in order to improve on the trivial approximation factor of 1/2.
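For concreteness, the Greedy matching subroutine that the considered class of algorithms invokes on a vertex-induced subgraph can be sketched as follows (an illustrative sketch under assumed conventions; `allowed` is the inducing vertex set an algorithm would choose in a given pass):

```python
def greedy_matching(edges, allowed):
    """Run Greedy on the subgraph induced by the vertex set `allowed`:
    scan edges in stream order and take an edge whenever both endpoints
    are in `allowed` and still unmatched. The result is a maximal
    matching of the induced subgraph."""
    matched = set()
    matching = []
    for u, v in edges:
        if u in allowed and v in allowed and u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

Since a maximal matching has at least half the size of a maximum one, a single such pass already yields the trivial 1/2-approximation that the lower bounds above are measured against.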
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.26/LIPIcs.FSTTCS.2020.26.pdf
Maximum matching approximation
Query model
Streaming algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
27:1
27:18
10.4230/LIPIcs.FSTTCS.2020.27
article
Planted Models for the Densest k-Subgraph Problem
Khanna, Yash
1
Louis, Anand
1
Indian Institute of Science, Bangalore, India
Given an undirected graph G, the Densest k-subgraph problem (DkS) asks to compute a set S ⊂ V of cardinality |S| ≤ k such that the weight of edges inside S is maximized. This is a fundamental NP-hard problem whose approximability, despite many decades of research, is yet to be settled. The current best known approximation algorithm due to Bhaskara et al. (2010) computes a 𝒪(n^{1/4 + ε}) approximation in time n^{𝒪(1/ε)}, for any ε > 0.
We ask: what are some "easier" instances of this problem? We propose some natural semi-random models of instances with a planted dense subgraph, and study approximation algorithms for computing the densest subgraph in them. These models are inspired by the semi-random models of instances studied for various other graph problems, such as the independent set problem and graph partitioning problems. For a large range of parameters of these models, we obtain significantly better approximation factors for the Densest k-subgraph problem. Moreover, our algorithm recovers a large part of the planted solution.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.27/LIPIcs.FSTTCS.2020.27.pdf
Densest k-Subgraph
Semi-Random models
Planted Models
Semidefinite Programming
Approximation Algorithms
Beyond Worst Case Analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
28:1
28:18
10.4230/LIPIcs.FSTTCS.2020.28
article
Sample-And-Gather: Fast Ruling Set Algorithms in the Low-Memory MPC Model
Kothapalli, Kishore
1
Pai, Shreyas
2
https://orcid.org/0000-0003-2409-7807
Pemmaraju, Sriram V.
2
IIIT Hyderabad, India
The University of Iowa, Iowa City, IA, USA
Motivated by recent progress on symmetry breaking problems such as maximal independent set (MIS) and maximal matching in the low-memory Massively Parallel Computation (MPC) model (e.g., Behnezhad et al. PODC 2019; Ghaffari-Uitto SODA 2019), we investigate the complexity of ruling set problems in this model. The MPC model has become very popular as a model for large-scale distributed computing and it comes with the constraint that the memory-per-machine is strongly sublinear in the input size. For graph problems, extremely fast MPC algorithms have been designed assuming Ω̃(n) memory-per-machine, where n is the number of nodes in the graph (e.g., the O(log log n) MIS algorithm of Ghaffari et al., PODC 2018). However, it has proven much more difficult to design fast MPC algorithms for graph problems in the low-memory MPC model, where the memory-per-machine is restricted to being strongly sublinear in the number of nodes, i.e., O(n^ε) for constant 0 < ε < 1.
In this paper, we present an algorithm for the 2-ruling set problem, running in Õ(log^{1/6} Δ) rounds whp, in the low-memory MPC model. Here Δ is the maximum degree of the graph. We then extend this result to β-ruling sets for any integer β > 1. Specifically, we show that a β-ruling set can be computed in the low-memory MPC model with O(n^ε) memory-per-machine in Õ(β ⋅ log^{1/(2^{β+1}-2)} Δ) rounds, whp. From this it immediately follows that a β-ruling set for β = Ω(log log log Δ) can be computed in just O(β log log n) rounds, whp. The above results assume a total memory of Õ(m + n^{1+ε}). We also present algorithms for β-ruling sets in the low-memory MPC model assuming that the total memory over all machines is restricted to Õ(m). For β > 1, these algorithms are all substantially faster than the Ghaffari-Uitto Õ(√{log Δ})-round MIS algorithm in the low-memory MPC model.
All our results follow from a Sample-and-Gather Simulation Theorem that shows how random-sampling-based Congest algorithms can be efficiently simulated in the low-memory MPC model. We expect this simulation theorem to be of independent interest beyond the ruling set algorithms derived here.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.28/LIPIcs.FSTTCS.2020.28.pdf
Massively Parallel Computation
Ruling Set
Simulation Theorems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
29:1
29:16
10.4230/LIPIcs.FSTTCS.2020.29
article
On Parity Decision Trees for Fourier-Sparse Boolean Functions
Mande, Nikhil S.
1
Sanyal, Swagato
2
Georgetown University, Washington, DC, USA
Indian Institute of Technology Kharagpur, India
We study parity decision trees for Boolean functions. The motivation of our study is the log-rank conjecture for XOR functions and its connection to Fourier analysis and parity decision tree complexity. Our contributions are as follows. Let f : 𝔽₂ⁿ → {-1, 1} be a Boolean function with Fourier support 𝒮 and Fourier sparsity k.
- We prove via the probabilistic method that there exists a parity decision tree of depth O(√k) that computes f. This matches the best known upper bound on the parity decision tree complexity of Boolean functions (Tsang, Wong, Xie, and Zhang, FOCS 2013). Moreover, while previous constructions (Tsang et al., FOCS 2013, Shpilka, Tal, and Volk, Comput. Complex. 2017) build the trees by carefully choosing the parities to be queried in each step, our proof shows that a naive sampling of the parities suffices.
- We generalize the above result by showing that if the Fourier spectra of Boolean functions satisfy a natural "folding property", then the above proof can be adapted to establish existence of a tree of complexity polynomially smaller than O(√ k). More concretely, the folding property we consider is that for most distinct γ, δ in 𝒮, there are at least a polynomial (in k) number of pairs (α, β) of parities in 𝒮 such that α+β = γ+δ. We make a conjecture in this regard which, if true, implies that the communication complexity of an XOR function is bounded above by the fourth root of the rank of its communication matrix, improving upon the previously known upper bound of square root of rank (Tsang et al., FOCS 2013, Lovett, J. ACM. 2016).
- Motivated by the above, we present some structural results about the Fourier spectra of Boolean functions. It can be shown by elementary techniques that for any Boolean function f and all (α, β) in binom(𝒮,2), there exists another pair (γ, δ) in binom(𝒮,2) such that α + β = γ + δ. One can view this as a "trivial" folding property that all Boolean functions satisfy. Prior to our work, it was conceivable that for all (α, β) ∈ binom(𝒮,2), there exists exactly one other pair (γ, δ) ∈ binom(𝒮,2) with α + β = γ + δ. We show, among other results, that there must exist several γ ∈ 𝔽₂ⁿ such that there are at least three pairs of parities (α₁, α₂) ∈ binom(𝒮,2) with α₁+α₂ = γ. This, in particular, rules out the possibility stated earlier.
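To make the folding property concrete, here is a small illustrative helper (not from the paper): encoding parities in 𝔽₂ⁿ as bitmask integers, with XOR as the group operation, it counts for each sum γ = α + β how many unordered pairs {α, β} ⊆ 𝒮 produce it. The folding property asks these multiplicities to be large for most sums.

```python
from itertools import combinations
from collections import Counter

def fold_multiplicities(support):
    """For a Fourier support `support` (parities as bitmask ints,
    group operation = XOR), map each sum a ^ b to the number of
    unordered pairs {a, b} of distinct parities producing it."""
    return Counter(a ^ b for a, b in combinations(support, 2))
```

For example, on the support {00, 01, 10, 11} every nonzero sum arises from exactly two pairs, illustrating the "trivial" folding property that every pair's sum is matched by at least one other pair.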
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.29/LIPIcs.FSTTCS.2020.29.pdf
Parity decision trees
log-rank conjecture
analysis of Boolean functions
communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
30:1
30:17
10.4230/LIPIcs.FSTTCS.2020.30
article
Colored Cut Games
Morawietz, Nils
1
Grüttemeier, Niels
1
https://orcid.org/0000-0002-6789-2918
Komusiewicz, Christian
1
https://orcid.org/0000-0003-0829-7032
Sommer, Frank
1
https://orcid.org/0000-0003-4034-525X
Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Germany
In a graph G = (V,E) with an edge coloring 𝓁:E → C and two distinguished vertices s and t, a colored (s,t)-cut is a set C̃ ⊆ C such that deleting all edges with some color c ∈ C̃ from G disconnects s and t. Motivated by applications in the design of robust networks, we introduce a family of problems called colored cut games. In these games, an attacker and a defender choose colors to delete and to protect, respectively, in an alternating fashion. It is the goal of the attacker to achieve a colored (s,t)-cut and the goal of the defender to prevent this. First, we show that for an unbounded number of alternations, colored cut games are PSPACE-complete. We then show that, even on subcubic graphs, colored cut games with a constant number i of alternations are complete for classes in the polynomial hierarchy whose level depends on i. To complete the dichotomy, we show that all colored cut games are polynomial-time solvable on graphs with degree at most two. Finally, we show that all colored cut games admit a polynomial kernel for the parameter k+κ_r where k denotes the total attacker budget and, for any constant r, κ_r is the number of vertex deletions that are necessary to transform G into a graph where the longest path has length at most r. In the case of r = 1, κ₁ is the vertex cover number vc of the input graph and we obtain a kernel with 𝒪(vc²k²) edges. Moreover, we introduce an algorithm solving the most basic colored cut game, Colored (s,t)-Cut, in 2^{vc + k}n^{𝒪(1)} time.
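As an illustration of the basic object, whether a color set is a colored (s,t)-cut can be verified by deleting every edge carrying a deleted color and checking reachability. A minimal sketch under hypothetical input conventions (edges as vertex pairs, `coloring` a map from edge to color):

```python
from collections import defaultdict, deque

def is_colored_cut(edges, coloring, s, t, deleted_colors):
    """Return True iff deleting all edges whose color is in
    `deleted_colors` disconnects s from t (plain BFS on the
    surviving edges)."""
    adj = defaultdict(list)
    for (u, v) in edges:
        if coloring[(u, v)] not in deleted_colors:
            adj[u].append(v)
            adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return t not in seen
```

In the games above, the attacker tries to steer the final deleted color set into one for which this check succeeds, while the defender protects colors to prevent it.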
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.30/LIPIcs.FSTTCS.2020.30.pdf
Labeled Cut
Labeled Path
Network Robustness
Kernelization
PSPACE
Polynomial Hierarchy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
31:1
31:16
10.4230/LIPIcs.FSTTCS.2020.31
article
Randomness Efficient Noise Stability and Generalized Small Bias Sets
Moshkovitz, Dana
1
Oh, Justin
1
Zuckerman, David
1
University of Texas at Austin, Department of Computer Science, TX, USA
We present a randomness-efficient version of the linear noise operator T_ρ from boolean function analysis by constructing a sparse linear operator on the space of boolean functions {0,1}ⁿ → {0,1} with similar eigenvalue profile to T_ρ. The linear operator we construct is a direct consequence of a generalization of ε-biased sets to the product distribution 𝒟_p on {0,1}ⁿ where the marginal of each coordinate is p = 1/2 - 1/2ρ. Such a generalization is a small support distribution that fools linear tests when the input of the test comes from 𝒟_p instead of the uniform distribution. We give an explicit construction of such a distribution that requires log n + O_p(log log n + log 1/ε) bits of uniform randomness to sample from, where the p subscript hides O(log² 1/p) factors. When p and ε are constant, this yields a support size nearly linear in n, whereas previous best known constructions only guarantee a size of poly(n). Furthermore, our construction implies an explicitly constructible "sparse" noisy hypercube graph that is a small set expander.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.31/LIPIcs.FSTTCS.2020.31.pdf
pseudorandomness
derandomization
epsilon biased sets
noise stability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
32:1
32:17
10.4230/LIPIcs.FSTTCS.2020.32
article
Connectivity Lower Bounds in Broadcast Congested Clique
Pai, Shreyas
1
https://orcid.org/0000-0003-2409-7807
Pemmaraju, Sriram V.
1
The University of Iowa, Iowa City, IA, USA
We prove three new lower bounds for graph connectivity in the 1-bit broadcast congested clique model, BCC(1). First, in the KT-0 version of BCC(1), in which nodes are aware of neighbors only through port numbers, we show an Ω(log n) round lower bound for Connectivity even for constant-error randomized Monte Carlo algorithms. The deterministic version of this result can be obtained via the well-known "edge-crossing" argument, but the randomized version of this result requires establishing new combinatorial results regarding the indistinguishability graph induced by inputs. In our second result, we show that the Ω(log n) lower bound result extends to the KT-1 version of the BCC(1) model, in which nodes are aware of IDs of all neighbors, though our proof works only for deterministic algorithms. This result substantially improves upon the existing Ω(log^* n) deterministic lower bound (Jurdziński et al., SIROCCO 2018) for this problem. Since nodes know IDs of their neighbors in the KT-1 model, it is no longer possible to play "edge-crossing" tricks; instead we present a reduction from the 2-party communication complexity problem Partition in which Alice and Bob are given two set partitions on [n] and are required to determine if the join of these two set partitions equals the trivial one-part set partition. While our KT-1 Connectivity lower bound holds only for deterministic algorithms, in our third result we extend this Ω(log n) KT-1 lower bound to constant-error Monte Carlo algorithms for the closely related ConnectedComponents problem. We use information-theoretic techniques to obtain this result. All our results hold for the seemingly easy special case of Connectivity in which an algorithm has to distinguish an instance with one cycle from an instance with multiple cycles. Our results showcase three rather different lower bound techniques and lay the groundwork for further improvements in lower bounds for Connectivity in the BCC(1) model.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.32/LIPIcs.FSTTCS.2020.32.pdf
Distributed Algorithms
Broadcast Congested Clique
Connectivity
Lower Bounds
Indistinguishability
Communication Complexity
Information Theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
33:1
33:19
10.4230/LIPIcs.FSTTCS.2020.33
article
Fully Dynamic Sequential and Distributed Algorithms for MAX-CUT
Wasim, Omer
1
https://orcid.org/0000-0003-4746-5211
King, Valerie
2
Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA
Department of Computer Science, University of Victoria, Canada
This paper initiates the study of the MAX-CUT problem in fully dynamic graphs. Given a graph G = (V,E), we present deterministic fully dynamic distributed and sequential algorithms to maintain a cut on G which always contains at least |E|/2 edges in sublinear update time under edge insertions and deletions to G. Our results include the following deterministic algorithms: i) an O(Δ) worst-case update time sequential algorithm, where Δ denotes the maximum degree of G, ii) the first fully dynamic distributed algorithm taking O(1) rounds and O(Δ) total bits of communication per update in the Massively Parallel Computation (MPC) model with n machines and O(n) words of memory per machine. The aforementioned algorithms require at most one adjustment, that is, a move of one vertex from one side of the cut to the other.
We also give the following fully dynamic sequential algorithms: i) a deterministic O(m^{1/2}) amortized update time algorithm where m denotes the maximum number of edges in G during any sequence of updates and, ii) a randomized algorithm which takes Õ(n^{2/3}) worst-case update time when edge updates come from an oblivious adversary.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.33/LIPIcs.FSTTCS.2020.33.pdf
data structures
dynamic graph algorithms
approximate maximum cut
distributed computing
parallel computing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
34:1
34:17
10.4230/LIPIcs.FSTTCS.2020.34
article
Weighted Tiling Systems for Graphs: Evaluation Complexity
Aiswarya, C.
1
2
https://orcid.org/0000-0002-4878-7581
Gastin, Paul
3
https://orcid.org/0000-0002-1313-7722
Chennai Mathematical Institute, India
IRL ReLaX, CNRS, France
LSV, ENS Paris-Saclay, CNRS, Université Paris-Saclay, France
We consider weighted tiling systems to represent functions from graphs to a commutative semiring such as the Natural semiring or the Tropical semiring. The system labels the nodes of a graph by its states, and checks if the neighbourhood of every node belongs to a set of permissible tiles, and assigns a weight accordingly. The weight of a labeling is the semiring-product of the weights assigned to the nodes, and the weight of the graph is the semiring-sum of the weights of labelings. We show that we can model interesting algorithmic questions using this formalism - like computing the clique number of a graph or computing the permanent of a matrix. The evaluation problem is, given a weighted tiling system and a graph, to compute the weight of the graph. We study the complexity of the evaluation problem and give tight upper and lower bounds for several commutative semirings. Further we provide an efficient evaluation algorithm if the input graph is of bounded tree-width.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.34/LIPIcs.FSTTCS.2020.34.pdf
Weighted graph tiling
tiling automata
Evaluation
Complexity
Tree-width
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
35:1
35:14
10.4230/LIPIcs.FSTTCS.2020.35
article
Process Symmetry in Probabilistic Transducers
Almagor, Shaull
1
https://orcid.org/0000-0001-9021-1175
Computer Science Department, Technion, Haifa, Israel
Model checking is the process of deciding whether a system satisfies a given specification. Often, when the setting comprises multiple processes, the specifications are over sets of input and output signals that correspond to individual processes. Then, many of the properties one wishes to specify are symmetric with respect to the processes' identities. In this work, we consider the problem of deciding whether the given system exhibits symmetry with respect to the processes' identities. When the system is symmetric, this gives insight into the behaviour of the system, as well as allows the designer to use only representative specifications, instead of iterating over all possible process identities.
Specifically, we consider probabilistic systems, and we propose several variants of symmetry. We start with precise symmetry, in which, given a permutation π, the system maintains the exact distribution of permuted outputs, given permuted inputs. We proceed to study approximate versions of symmetry, including symmetry induced by small L_∞ norm, variants of Parikh-image based symmetry, and qualitative symmetry. For each type of symmetry, we consider the problem of deciding whether a given system exhibits this type of symmetry.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.35/LIPIcs.FSTTCS.2020.35.pdf
Symmetry
Probabilistic Transducers
Model Checking
Permutations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
36:1
36:17
10.4230/LIPIcs.FSTTCS.2020.36
article
Reachability in Dynamical Systems with Rounding
Baier, Christel
1
https://orcid.org/0000-0002-5321-9343
Funke, Florian
1
https://orcid.org/0000-0001-7301-1550
Jantsch, Simon
1
https://orcid.org/0000-0003-1692-2408
Karimov, Toghrul
2
https://orcid.org/0000-0002-9405-2332
Lefaucheux, Engel
2
https://orcid.org/0000-0003-0875-300X
Ouaknine, Joël
2
3
https://orcid.org/0000-0003-0031-9356
Pouly, Amaury
4
https://orcid.org/0000-0002-2549-951X
Purser, David
2
https://orcid.org/0000-0003-0394-1634
Whiteland, Markus A.
2
https://orcid.org/0000-0002-6006-9902
Technische Universität Dresden, Germany
Max Planck Institute for Software Systems, Saarland Informatics Campus, Saarbrücken, Germany
Department of Computer Science, Oxford University, UK
Université de Paris, CNRS, IRIF, F-75006, Paris, France
We consider reachability in dynamical systems with discrete linear updates, but with fixed digital precision, i.e., such that values of the system are rounded at each step. Given a matrix M ∈ ℚ^{d × d}, an initial vector x ∈ ℚ^{d}, a granularity g ∈ ℚ_+ and a rounding operation [⋅] projecting a vector of ℚ^{d} onto another vector whose every entry is a multiple of g, we are interested in the behaviour of the orbit 𝒪 = ⟨[x], [M[x]],[M[M[x]]],… ⟩, i.e., the trajectory of a linear dynamical system in which the state is rounded after each step. For arbitrary rounding functions with bounded effect, we show that the complexity of deciding point-to-point reachability - whether a given target y ∈ ℚ^{d} belongs to 𝒪 - is PSPACE-complete for hyperbolic systems (when no eigenvalue of M has modulus one). We also establish decidability without any restrictions on eigenvalues for several natural classes of rounding functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.36/LIPIcs.FSTTCS.2020.36.pdf
dynamical systems
rounding
reachability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
37:1
37:15
10.4230/LIPIcs.FSTTCS.2020.37
article
Parameterized Complexity of Safety of Threshold Automata
Balasubramanian, A. R.
1
Technische Universität München, Germany
Threshold automata are a formalism for modeling fault-tolerant distributed algorithms. In this paper, we study the parameterized complexity of reachability of threshold automata. As a first result, we show that the problem becomes W[1]-hard even when parameterized by parameters which are quite small in practice. We then consider two restricted cases which arise in practice and provide fixed-parameter tractable algorithms for both these cases. Finally, we report on experimental results conducted on some protocols taken from the literature.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.37/LIPIcs.FSTTCS.2020.37.pdf
Threshold automata
distributed algorithms
parameterized complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
38:1
38:17
10.4230/LIPIcs.FSTTCS.2020.38
article
Uncertainty Reasoning for Probabilistic Petri Nets via Bayesian Networks
Bernemann, Rebecca
1
Cabrera, Benjamin
1
Heckel, Reiko
2
König, Barbara
1
University of Duisburg-Essen, Germany
University of Leicester, UK
This paper exploits extended Bayesian networks for uncertainty reasoning on Petri nets, where firing of transitions is probabilistic. In particular, Bayesian networks are used as symbolic representations of probability distributions, modelling the observer’s knowledge about the tokens in the net. The observer can study the net by monitoring successful and failed steps.
An update mechanism for Bayesian nets is enabled by relaxing some of their restrictions, leading to modular Bayesian nets that can conveniently be represented and modified. As for every symbolic representation, the question is how to derive information - in this case marginal probability distributions - from a modular Bayesian net. We show how to do this by generalizing the known method of variable elimination. The approach is illustrated by examples about the spreading of diseases (SIR model) and information diffusion in social networks. We have implemented our approach and provide runtime results.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.38/LIPIcs.FSTTCS.2020.38.pdf
uncertainty reasoning
probabilistic knowledge
Petri nets
Bayesian networks
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
39:1
39:17
10.4230/LIPIcs.FSTTCS.2020.39
article
Synthesizing Safe Coalition Strategies
Bertrand, Nathalie
1
https://orcid.org/0000-0002-9957-5394
Bouyer, Patricia
2
https://orcid.org/0000-0002-2823-0911
Majumdar, Anirban
1
2
Université Rennes, Inria, CNRS, IRISA, Rennes, France
Université Paris-Saclay, ENS Paris-Saclay, CNRS, LSV, Gif-sur-Yvette, France
Concurrent games with a fixed number of agents have been thoroughly studied, with various solution concepts and objectives for the agents. In this paper, we consider concurrent games with an arbitrary number of agents, and study the problem of synthesizing a coalition strategy to achieve a global safety objective. The problem is non-trivial since the agents do not know a priori how many they are when they start the game. We prove that the existence of a safe arbitrarily-large coalition strategy for safety objectives is a PSPACE-hard problem that can be decided in exponential space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.39/LIPIcs.FSTTCS.2020.39.pdf
concurrent games
parameterized verification
strategy synthesis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
40:1
40:16
10.4230/LIPIcs.FSTTCS.2020.40
article
Dynamic Network Congestion Games
Bertrand, Nathalie
1
https://orcid.org/0000-0002-9957-5394
Markey, Nicolas
1
https://orcid.org/0000-0003-1977-7525
Sadhukhan, Suman
1
Sankur, Ocan
1
https://orcid.org/0000-0001-8146-4429
Univ Rennes, Inria, CNRS, IRISA, Rennes, France
Congestion games are a classical type of games studied in game theory, in which n players choose a resource, and their individual cost increases with the number of other players choosing the same resource. In network congestion games (NCGs), the resources correspond to simple paths in a graph, e.g. representing routing options from a source to a target. In this paper, we introduce a variant of NCGs, referred to as dynamic NCGs: in this setting, players take transitions synchronously, they select their next transitions dynamically, and they are charged a cost that depends on the number of players simultaneously using the same transition.
We study, from a complexity perspective, standard concepts of game theory in dynamic NCGs: social optima, Nash equilibria, and subgame perfect equilibria. Our contributions are the following: the existence of a strategy profile with social cost bounded by a constant is in PSPACE and NP-hard. (Pure) Nash equilibria always exist in dynamic NCGs; the existence of a Nash equilibrium with bounded cost can be decided in EXPSPACE, and computing a witnessing strategy profile can be done in doubly-exponential time. The existence of a subgame perfect equilibrium with bounded cost can be decided in 2EXPSPACE, and a witnessing strategy profile can be computed in triply-exponential time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.40/LIPIcs.FSTTCS.2020.40.pdf
Congestion games
Nash equilibria
Subgame perfect equilibria
Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
41:1
41:13
10.4230/LIPIcs.FSTTCS.2020.41
article
On the Succinctness of Alternating Parity Good-For-Games Automata
Boker, Udi
1
Kuperberg, Denis
2
https://orcid.org/0000-0001-5406-717X
Lehtinen, Karoliina
3
https://orcid.org/0000-0003-1171-8790
Skrzypczak, Michał
4
https://orcid.org/0000-0002-9647-4993
Interdisciplinary Center (IDC) Herzliya, Israel
CNRS, LIP, École Normale Supérieure, Lyon, France
University of Liverpool, UK
Institute of Informatics, University of Warsaw, Poland
We study alternating parity good-for-games (GFG) automata, i.e., alternating parity automata where both conjunctive and disjunctive choices can be resolved in an online manner, without knowledge of the suffix of the input word still to be read.
We show that they can be exponentially more succinct than both their nondeterministic and universal counterparts. Furthermore, we present a single exponential determinisation procedure and an Exptime upper bound to the problem of recognising whether an alternating automaton is GFG.
We also study the complexity of deciding "half-GFGness", a property specific to alternating automata that only requires nondeterministic choices to be resolved in an online manner. We show that this problem is PSpace-hard already for alternating automata on finite words.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.41/LIPIcs.FSTTCS.2020.41.pdf
Good for games
history-determinism
alternation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
42:1
42:17
10.4230/LIPIcs.FSTTCS.2020.42
article
A Framework for Consistency Algorithms
Chini, Peter
1
Saivasan, Prakash
2
TU Braunschweig, Germany
The Institute of Mathematical Sciences, HBNI, Chennai, India
We present a framework that provides deterministic consistency algorithms for given memory models. Such an algorithm checks whether the executions of a shared-memory concurrent program are consistent under the axioms defined by a model. For memory models like SC and TSO, checking consistency is NP-complete. Our framework shows that, despite the hardness, fast deterministic consistency algorithms can be obtained by employing tools from fine-grained complexity.
The framework is based on a universal consistency problem which can be instantiated by different memory models. We construct an algorithm for the problem running in time 𝒪^*(2^k), where k is the number of write accesses in the execution that is checked for consistency. Each instance of the framework then admits an 𝒪^*(2^k)-time consistency algorithm. By applying the framework, we obtain corresponding consistency algorithms for SC, TSO, PSO, and RMO. Moreover, we show that the obtained algorithms for SC, TSO, and PSO are optimal in the fine-grained sense: there is no consistency algorithm for these running in time 2^{o(k)} unless the exponential time hypothesis fails.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.42/LIPIcs.FSTTCS.2020.42.pdf
Consistency
Weak Memory
Fine-Grained Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
43:1
43:14
10.4230/LIPIcs.FSTTCS.2020.43
article
Equivalence of Hidden Markov Models with Continuous Observations
Darwin, Oscar
1
https://orcid.org/0000-0001-5016-014X
Kiefer, Stefan
1
https://orcid.org/0000-0003-4173-6877
Department of Computer Science, Oxford University, UK
We consider Hidden Markov Models that emit sequences of observations that are drawn from continuous distributions. For example, such a model may emit a sequence of numbers, each of which is drawn from a uniform distribution, but the support of the uniform distribution depends on the state of the Hidden Markov Model. Such models generalise the more common version where each observation is drawn from a finite alphabet. We prove that one can determine in polynomial time whether two Hidden Markov Models with continuous observations are equivalent.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.43/LIPIcs.FSTTCS.2020.43.pdf
Markov chains
equivalence
probabilistic systems
verification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
44:1
44:14
10.4230/LIPIcs.FSTTCS.2020.44
article
Nivat-Theorem and Logic for Weighted Pushdown Automata on Infinite Words
Droste, Manfred
1
https://orcid.org/0000-0001-9128-8844
Dziadek, Sven
1
https://orcid.org/0000-0001-6767-7751
Kuich, Werner
2
Institut für Informatik, Universität Leipzig, Germany
Institut für Diskrete Mathematik und Geometrie, Technische Unversität Wien, Austria
Recently, weighted ω-pushdown automata have been introduced by Droste, Ésik, and Kuich. This new type of automaton has access to a stack and models quantitative aspects of infinite words. Here, we consider a simple version of those automata. The simple ω-pushdown automata do not use ε-transitions and have a very restricted stack access. In previous work, we could show this automaton model to be expressively equivalent to context-free ω-languages in the unweighted case. Furthermore, semiring-weighted simple ω-pushdown automata recognize all ω-algebraic series.
Here, we consider ω-valuation monoids as weight structures. As a first result, we prove that for this weight structure and for simple ω-pushdown automata, Büchi-acceptance and Muller-acceptance are expressively equivalent. In our second result, we derive a Nivat theorem for these automata stating that the behaviors of weighted ω-pushdown automata are precisely the projections of very simple ω-series restricted to ω-context-free languages. The third result is a weighted logic with the same expressive power as the new automaton model. To prove the equivalence, we use a similar result for weighted nested ω-word automata and apply our present result of expressive equivalence of Muller and Büchi acceptance.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.44/LIPIcs.FSTTCS.2020.44.pdf
Weighted automata
Pushdown automata
Infinite words
Weighted logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
45:1
45:15
10.4230/LIPIcs.FSTTCS.2020.45
article
Synchronization of Deterministic Visibly Push-Down Automata
Fernau, Henning
1
https://orcid.org/0000-0002-4444-3220
Wolf, Petra
1
https://orcid.org/0000-0003-3097-3906
Universität Trier, Fachbereich IV, Informatikwissenschaften, Germany
We generalize the concept of synchronizing words for finite automata, which map all states of the automata to the same state, to deterministic visibly push-down automata. Here, a synchronizing word w does not only map all states to the same state but also fulfills some conditions on the stack content of each run after reading w. We consider three types of these stack constraints: after reading w, the stack (1) is empty in each run, (2) contains the same sequence of stack symbols in each run, or (3) contains an arbitrary sequence which is independent of the other runs. We show that in contrast to general deterministic push-down automata, it is decidable for deterministic visibly push-down automata whether there exists a synchronizing word with each of these stack constraints, more precisely, the problems are in EXPTIME. Under the constraint (1), the problem is even in P. For the sub-classes of deterministic very visibly push-down automata, the problem is in P for all three types of constraints. We further study variants of the synchronization problem where the number of turns in the stack height behavior caused by a synchronizing word is restricted, as well as the problem of synchronizing a variant of a sequential transducer, which shows some visibly behavior, by a word that synchronizes the states and produces the same output on all runs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.45/LIPIcs.FSTTCS.2020.45.pdf
Synchronizing word
Deterministic visibly push-down automata
Deterministic finite automata
Finite-turn push-down automata
Sequential transducer
Computational complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
46:1
46:16
10.4230/LIPIcs.FSTTCS.2020.46
article
Synthesis from Weighted Specifications with Partial Domains over Finite Words
Filiot, Emmanuel
1
Löding, Christof
2
Winter, Sarah
1
Université libre de Bruxelles, Belgium
RWTH Aachen University, Germany
In this paper, we investigate the synthesis problem of terminating reactive systems from quantitative specifications. Such systems are modeled as finite transducers whose executions are represented as finite words in (I × O)^*, where I, O are finite sets of input and output symbols, respectively. A weighted specification S assigns a rational value (or -∞) to words in (I × O)^*, and we consider three kinds of objectives for synthesis, namely threshold objectives where the system’s executions are required to be above some given threshold, best-value and approximate objectives where the system is required to perform as best as it can by providing output symbols that yield the best value and ε-best value respectively w.r.t. S. We establish a landscape of decidability results for these three objectives and weighted specifications with partial domain over finite words given by deterministic weighted automata equipped with sum, discounted-sum and average measures. The resulting objectives are not regular in general and we develop an infinite game framework to solve the corresponding synthesis problems, namely the class of (weighted) critical prefix games.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.46/LIPIcs.FSTTCS.2020.46.pdf
synthesis
weighted games
weighted automata on finite words
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
47:1
47:17
10.4230/LIPIcs.FSTTCS.2020.47
article
Reachability for Updatable Timed Automata Made Faster and More Effective
Gastin, Paul
1
https://orcid.org/0000-0002-1313-7722
Mukherjee, Sayan
2
https://orcid.org/0000-0001-6473-3172
Srivathsan, B
2
https://orcid.org/0000-0003-2666-0691
LSV, ENS Paris-Saclay, CNRS, Université Paris-Saclay, France
Chennai Mathematical Institute, India
Updatable timed automata (UTA) are extensions of classical timed automata that allow special updates to clock variables, like x := x - 1, x := y + 2, etc., on transitions. Reachability for UTA is undecidable in general. Various subclasses with decidable reachability have been studied. A generic approach to UTA reachability consists of two phases: first, a static analysis of the automaton is performed to compute a set of clock constraints at each state; in the second phase, reachable sets of configurations, called zones, are enumerated. In this work, we improve the algorithm for the static analysis. Compared to the existing algorithm, our method computes smaller sets of constraints and guarantees termination for more UTA, making reachability faster and more effective. As the main application, we get an alternate proof of decidability and a more efficient algorithm for timed automata with bounded subtraction, a class of UTA widely used for modelling scheduling problems. We have implemented our procedure in the tool TChecker and conducted experiments that validate the benefits of our approach.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.47/LIPIcs.FSTTCS.2020.47.pdf
Updatable timed automata
Reachability
Zones
Simulations
Static analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
48:1
48:16
10.4230/LIPIcs.FSTTCS.2020.48
article
Active Prediction for Discrete Event Systems
Haar, Stefan
1
https://orcid.org/0000-0002-1892-2703
Haddad, Serge
2
https://orcid.org/0000-0002-1759-1201
Schwoon, Stefan
2
https://orcid.org/0000-0001-6622-6510
Ye, Lina
3
https://orcid.org/0000-0002-2217-4752
INRIA, LSV, ENS Paris-Saclay, CNRS, Université Paris-Saclay, France
LSV, ENS Paris-Saclay, CNRS, INRIA, Université Paris-Saclay, France
LRI, Université Paris-Saclay, CentraleSupélec, France
A central task in partially observed controllable systems is to detect or prevent the occurrence of certain events called faults. Systems for which one can design a controller avoiding the faults are called actively safe. Otherwise, one may require that a fault is eventually detected, which is the task of diagnosis. Systems for which one can design a controller detecting the faults are called actively diagnosable. An intermediate requirement is prediction, which consists in determining that a fault will occur whatever the future behaviour of the system. When a system is not predictable, one may be interested in designing a controller to make it so. Here we study the latter problem, called active prediction, and its associated property, active predictability. In other words, we investigate how to determine whether or not a system enjoys the active predictability property, i.e., there exists an active predictor for the system.
Our contributions are threefold. From a semantical point of view, we refine the notion of predictability by adding two quantitative requirements: the minimal and maximal delay before the occurrence of the fault, and we characterize the requirements fulfilled by a controller that performs predictions. Then we show that active predictability is EXPTIME-complete where the upper bound is obtained via a game-based approach. Finally we establish that active predictability is equivalent to active safety when the maximal delay is beyond a threshold depending on the size of the system, and we show that this threshold is accurate by exhibiting a family of systems fulfilling active predictability but not active safety.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.48/LIPIcs.FSTTCS.2020.48.pdf
Automata Theory
Partially observed systems
Diagnosability
Predictability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
49:1
49:16
10.4230/LIPIcs.FSTTCS.2020.49
article
Comparing Labelled Markov Decision Processes
Kiefer, Stefan
1
https://orcid.org/0000-0003-4173-6877
Tang, Qiyi
1
https://orcid.org/0000-0002-9265-3011
Department of Computer Science, University of Oxford, UK
A labelled Markov decision process is a labelled Markov chain with nondeterminism, i.e., together with a strategy a labelled MDP induces a labelled Markov chain. The model is related to interval Markov chains. Motivated by applications of equivalence checking for the verification of anonymity, we study the algorithmic comparison of two labelled MDPs, in particular, whether there exist strategies such that the MDPs become equivalent/inequivalent, both in terms of trace equivalence and in terms of probabilistic bisimilarity. We provide the first polynomial-time algorithms for computing memoryless strategies to make the two labelled MDPs inequivalent if such strategies exist. We also study the computational complexity of qualitative problems about making the total variation distance and the probabilistic bisimilarity distance less than one or equal to one.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.49/LIPIcs.FSTTCS.2020.49.pdf
Markov decision processes
Markov chains
Behavioural metrics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
50:1
50:18
10.4230/LIPIcs.FSTTCS.2020.50
article
Computable Analysis for Verified Exact Real Computation
Konečný, Michal
1
https://orcid.org/0000-0003-2374-9017
Steinberg, Florian
2
Thies, Holger
3
https://orcid.org/0000-0003-3959-0741
School of Engineering and Applied Science, Aston University, UK
Inria Saclay, France
Department of Informatics, Kyushu University, Japan
We use ideas from computable analysis to formalize exact real number computation in the Coq proof assistant. Our formalization is built on top of the Incone library, a Coq library for computable analysis. We use the theoretical framework that computable analysis provides to systematically generate target specifications for real number algorithms. First we give very simple algorithms that fulfill these specifications based on rational approximations. To provide more efficient algorithms, we develop alternate representations that utilize an existing formalization of floating-point algorithms and interval arithmetic in combination with methods used by software packages for exact real arithmetic that focus on execution speed. We also define a general framework to define real number algorithms independently of their concrete encoding and to prove them correct. Algorithms verified in our framework can be extracted to Haskell programs for efficient computation. The performance of the extracted code is comparable to programs produced using non-verified software packages. This is without the need to optimize the extracted code by hand.
As an example, we formalize an algorithm for the square root function based on the Heron method. The algorithm is parametric in the implementation of the real number datatype, not referring to any details of its implementation. Thus the same verified algorithm can be used with different real number representations. Since Boolean valued comparisons of real numbers are not decidable, our algorithms use basic operations that take values in the Kleeneans and Sierpinski space. We develop some of the theory of these spaces. To capture the semantics of non-sequential operations, such as the "parallel or", we use multivalued functions.
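The Heron iteration mentioned in the abstract can be sketched concretely. The following is a minimal, unverified illustration in Python using exact rational arithmetic (the paper's verified Coq development is far more general); the function name and the restriction to inputs x ≥ 1 are assumptions made here for brevity:

```python
from fractions import Fraction

def heron_sqrt(x: Fraction, eps: Fraction) -> Fraction:
    """Heron (Babylonian) iteration y <- (y + x/y)/2, for x >= 1.
    For such x the iterates stay above sqrt(x), so the computable
    quantity y - x/y bounds the true error y - sqrt(x) from above.
    Returns a rational y with |y - sqrt(x)| <= eps."""
    y = x  # initial guess; y >= sqrt(x) holds for x >= 1
    while y - x / y > eps:
        y = (y + x / y) / 2
    return y

approx = heron_sqrt(Fraction(2), Fraction(1, 10**6))
```

The same shape of specification, a rational answer within a requested error bound, is what the framework described above generates as a target for real number algorithms.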
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.50/LIPIcs.FSTTCS.2020.50.pdf
Computable Analysis
exact real computation
formal proofs
proof assistant
Coq
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
51:1
51:16
10.4230/LIPIcs.FSTTCS.2020.51
article
Perspective Games with Notifications
Kupferman, Orna
1
Shenwald, Noam
1
School of Engineering and Computer Science, Hebrew University, Jerusalem, Israel
A reactive system has to satisfy its specification in all environments. Accordingly, design of correct reactive systems corresponds to the synthesis of winning strategies in games that model the interaction between the system and its environment. The game is played on a graph whose vertices are partitioned among the players. The players jointly generate a path in the graph, with each player deciding the successor vertex whenever the path reaches a vertex she owns. The objective of the system player is to force the computation induced by the generated infinite path to satisfy a given specification. The traditional way of modelling uncertainty in such games is observation-based. There, uncertainty is longitudinal: the players partially observe all vertices in the history. Recently, researchers introduced perspective games, where uncertainty is transverse: players fully observe the vertices they own and have no information about the behavior of the computation between visits in such vertices. We introduce and study perspective games with notifications: uncertainty is still transverse, yet a player may be notified about events that happen between visits in vertices she owns. We distinguish between structural notifications, for example about visits in some vertices, and behavioral notifications, for example about the computation exhibiting a certain behavior. We study the theoretic properties of perspective games with notifications, and the problem of deciding whether a player has a winning perspective strategy. Such a strategy depends only on the visible history, which consists of both visits in vertices the player owns and notifications during visits in other vertices. We show that the problem is EXPTIME-complete for objectives given by a deterministic or universal parity automaton over an alphabet that labels the vertices of the game, and notifications given by a deterministic satellite, and is 2EXPTIME-complete for LTL objectives. 
In all cases, the complexity in the size of the graph and the satellite is polynomial - exponentially easier than games with observation-based partial visibility. We also analyze the complexity of the problem for richer types of satellites.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.51/LIPIcs.FSTTCS.2020.51.pdf
Games
Incomplete Information
Automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
52:1
52:35
10.4230/LIPIcs.FSTTCS.2020.52
article
On the Complexity of Multi-Pushdown Games
Meyer, Roland
1
van der Wall, Sören
1
TU Braunschweig, Germany
We study the influence of parameters like the number of contexts, phases, and stacks on the complexity of solving parity games over concurrent recursive programs. Our first result shows that k-context games are b-EXPTIME-complete, where b = max{k-2, 1}. This means up to three contexts do not increase the complexity over an analysis for the sequential case. Our second result shows that for ordered k-stack as well as k-phase games the complexity jumps to k-EXPTIME-complete.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.52/LIPIcs.FSTTCS.2020.52.pdf
concurrency
complexity
games
infinite state
multi-pushdown
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
53:1
53:14
10.4230/LIPIcs.FSTTCS.2020.53
article
Higher-Order Nonemptiness Step by Step
Parys, Paweł
1
https://orcid.org/0000-0001-7247-1408
Institute of Informatics, University of Warsaw, Poland
We show a new simple algorithm that checks whether a given higher-order grammar generates a nonempty language of trees. The algorithm amounts to a procedure that transforms a grammar of order n to a grammar of order n-1, preserving nonemptiness, and increasing the size only exponentially. After repeating the procedure n times, we obtain a grammar of order 0, whose nonemptiness can be easily checked. Since the size grows exponentially at each step, the overall complexity is n-EXPTIME, which is known to be optimal. More precisely, the transformation (and hence the whole algorithm) is linear in the size of the grammar, assuming that the arity of employed nonterminals is bounded by a constant. The same algorithm allows us to check whether an infinite tree generated by a higher-order recursion scheme is accepted by an alternating safety (or reachability) automaton, because this question can be reduced to the nonemptiness problem by taking a product of the recursion scheme with the automaton.
A proof of correctness of the algorithm is formalised in the proof assistant Coq. Our transformation is motivated by a similar transformation of Asada and Kobayashi (2020) changing a word grammar of order n to a tree grammar of order n-1. The step-by-step approach contrasts with previous algorithms, which solve the nonemptiness problem "in one step" and are necessarily more complicated.
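The base case of the iteration, nonemptiness of an order-0 grammar (a regular tree grammar), reduces to a standard least-fixpoint computation of productive nonterminals. A minimal Python sketch follows; the dictionary encoding of productions is illustrative and not taken from the paper:

```python
def nonempty(productions, start):
    """Least-fixpoint check that `start` derives some finite tree.
    productions: dict nonterminal -> list of alternatives; each
    alternative lists the nonterminals on its right-hand side
    (terminal symbols are irrelevant for nonemptiness and omitted).
    A nonterminal is productive iff some alternative uses only
    productive nonterminals."""
    productive = set()
    changed = True
    while changed:
        changed = False
        for nt, alts in productions.items():
            if nt not in productive and any(
                all(x in productive for x in alt) for alt in alts
            ):
                productive.add(nt)
                changed = True
    return start in productive

# S -> a(A, B) | c;  A -> b;  B -> f(B)   (B never terminates)
g = {"S": [["A", "B"], []], "A": [[]], "B": [["B"]]}
```

Running the order-lowering transformation n times and then this check yields the overall n-EXPTIME procedure described above.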
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.53/LIPIcs.FSTTCS.2020.53.pdf
Higher-order grammars
Nonemptiness
Model-checking
Transformation
Order reduction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
54:1
54:16
10.4230/LIPIcs.FSTTCS.2020.54
article
The Degree of a Finite Set of Words
Perrin, Dominique
1
Ryzhikov, Andrew
1
https://orcid.org/0000-0002-2031-2488
Université Gustave Eiffel, LIGM, Marne-la-Vallée, France
We generalize the notions of the degree and composition from uniquely decipherable codes to arbitrary finite sets of words. We prove that if X = Y∘Z is a composition of finite sets of words with Y complete, then d(X) = d(Y) ⋅ d(Z), where d(T) is the degree of T. We also show that a finite set is synchronizing if and only if its degree equals one.
This is done by considering, for an arbitrary finite set X of words, the transition monoid of an automaton recognizing X^* with multiplicities. We prove a number of results for such monoids, which generalize corresponding results for unambiguous monoids of relations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.54/LIPIcs.FSTTCS.2020.54.pdf
synchronizing set
degree of a set
group of a set
monoid of relations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
55:1
55:14
10.4230/LIPIcs.FSTTCS.2020.55
article
What You Must Remember When Transforming Datawords
Praveen, M.
1
2
Chennai Mathematical Institute, India
UMI ReLaX, Indo-French joint research unit
Streaming Data String Transducers (SDSTs) were introduced to model a class of imperative and a class of functional programs, manipulating lists of data items. These can be used to write commonly used routines such as insert, delete and reverse. SDSTs can handle data values from a potentially infinite data domain. The model of Streaming String Transducers (SSTs) is the fragment of SDSTs where the infinite data domain is dropped and only finite alphabets are considered. SSTs have been much studied from a language theoretical point of view. We introduce data back into SSTs, just like data was introduced to finite state automata to get register automata. The result is Streaming String Register Transducers (SSRTs), which is a subclass of SDSTs.
We use origin semantics for SSRTs and give a machine independent characterization, along the lines of the Myhill-Nerode theorem. Machine independent characterizations for similar models are the basis of learning algorithms and enable us to understand fragments of the models. Origin semantics of transducers track which positions of the output originate from which positions of the input. Although a restriction, using origin semantics is well justified and is known to simplify many problems related to transducers. We use origin semantics as a technical building block, in addition to characterizations of deterministic register automata. However, we need to build more on top of these to overcome some challenges unique to SSRTs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.55/LIPIcs.FSTTCS.2020.55.pdf
Streaming String Transducers
Data words
Machine independent characterization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
56:1
56:13
10.4230/LIPIcs.FSTTCS.2020.56
article
Minimising Good-For-Games Automata Is NP-Complete
Schewe, Sven
1
https://orcid.org/0000-0002-9093-9518
University of Liverpool, UK
This paper discusses the hardness of finding minimal good-for-games (GFG) Büchi, Co-Büchi, and parity automata with state based acceptance. The problem appears to sit between finding small deterministic and finding small nondeterministic automata, where minimality is NP-complete and PSPACE-complete, respectively. However, recent work of Radi and Kupferman has shown that minimising Co-Büchi automata with transition based acceptance is tractable, which suggests that the complexity of minimising GFG automata might be cheaper than minimising deterministic automata.
We show, for the standard state based acceptance, that minimality is NP-complete for Büchi, Co-Büchi, and parity GFG automata. The proofs are a surprisingly straightforward generalisation of the proofs for deterministic Büchi automata: they use similar reductions and the same hard class of languages.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.56/LIPIcs.FSTTCS.2020.56.pdf
Good-for-Games Automata
Automata Minimisation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
57:1
57:20
10.4230/LIPIcs.FSTTCS.2020.57
article
Static Race Detection for RTOS Applications
Tulsyan, Rishi
1
Pai, Rekha
1
D'Souza, Deepak
1
Indian Institute of Science Bangalore, India
We present a static analysis technique for detecting data races in Real-Time Operating System (RTOS) applications. These applications are often employed in safety-critical tasks and the presence of races may lead to erroneous behaviour with serious consequences. Analyzing these applications is challenging due to the variety of non-standard synchronization mechanisms they use. We propose a technique based on the notion of an "occurs-in-between" relation between statements. This notion enables us to capture the interplay of various synchronization mechanisms. We use a pre-analysis and a small set of not-occurs-in-between patterns to detect whether two statements may race with each other. Our experimental evaluation shows that the technique is efficient and effective in identifying races with high precision.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.57/LIPIcs.FSTTCS.2020.57.pdf
Static analysis
concurrency
data-race detection
RTOS
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2020-12-04
182
58:1
58:14
10.4230/LIPIcs.FSTTCS.2020.58
article
Synchronization Under Dynamic Constraints
Wolf, Petra
1
https://orcid.org/0000-0003-3097-3906
Universität Trier, Fachbereich IV, Informatikwissenschaften, Germany
We introduce a new natural variant of the synchronization problem. Our aim is to model different constraints on the order in which a potential synchronizing word might traverse through the states. We discuss how a word can induce a state-order and examine the computational complexity of different variants of the problem whether an automaton can be synchronized with a word whose induced order agrees with a given relation. While most of the problems are PSPACE-complete we also observe NP-complete variants and variants solvable in polynomial time. One of them is the careful synchronization problem for partial weakly acyclic automata (which are partial automata whose states can be ordered such that no transition leads to a smaller state), which is shown to be solvable in time 𝒪(k² n²) where n is the size of the state set and k is the alphabet size. The algorithm even computes a synchronizing word as a witness. This is quite surprising as the careful synchronization problem tends to be a hard problem for most classes of automata. We also observe a drop in the complexity if we track the orders of states on several paths simultaneously instead of tracking the set of active states. Further, we give upper bounds on the length of a synchronizing word depending on the size of the input relation and show that (despite the partiality) the bound of the Černý conjecture also holds for partial weakly acyclic automata.
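The notion of careful synchronization used above can be made concrete: a word carefully synchronizes a partial automaton if it never takes an undefined transition from any active state and ends with a single active state. A minimal checking sketch in Python (the transition-table encoding and the example automaton are illustrative, not from the paper):

```python
def carefully_synchronizes(delta, states, word):
    """delta: dict (state, letter) -> state, possibly partial.
    Returns True iff `word` is defined on every active state at
    every step and drives all of `states` into one state."""
    active = set(states)
    for a in word:
        if any((q, a) not in delta for q in active):
            return False  # not careful: an undefined transition is hit
        active = {delta[(q, a)] for q in active}
    return len(active) == 1

# A partial weakly acyclic automaton: under 'a' no transition leads
# to a smaller state (0 -> 1 -> 2 -> 2); 'b' is defined only at 0.
delta = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 2, (0, "b"): 0}
```

Checking a candidate word is easy; the result quoted above is that for partial weakly acyclic automata such a word can even be found in time 𝒪(k² n²).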
https://drops.dagstuhl.de/storage/00lipics/lipics-vol182-fsttcs2020/LIPIcs.FSTTCS.2020.58/LIPIcs.FSTTCS.2020.58.pdf
Synchronizing automaton
Černý conjecture
Reset sequence
Dynamic constraints
Computational complexity