LIPIcs, Volume 202

46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)



Event

MFCS 2021, August 23-27, 2021, Tallinn, Estonia

Editors

Filippo Bonchi
  • University of Pisa, Italy
Simon J. Puglisi
  • University of Helsinki, Finland

Publication Details

  • published at: 2021-08-18
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-201-3
  • DBLP: db/conf/mfcs/mfcs2021

Documents

Document
Complete Volume
LIPIcs, Volume 202, MFCS 2021, Complete Volume

Authors: Filippo Bonchi and Simon J. Puglisi


Abstract
LIPIcs, Volume 202, MFCS 2021, Complete Volume

Cite as

46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 1-1560, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{bonchi_et_al:LIPIcs.MFCS.2021,
  title =	{{LIPIcs, Volume 202, MFCS 2021, Complete Volume}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{1--1560},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021},
  URN =		{urn:nbn:de:0030-drops-144396},
  doi =		{10.4230/LIPIcs.MFCS.2021},
  annote =	{Keywords: LIPIcs, Volume 202, MFCS 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Filippo Bonchi and Simon J. Puglisi


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bonchi_et_al:LIPIcs.MFCS.2021.0,
  author =	{Bonchi, Filippo and Puglisi, Simon J.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.0},
  URN =		{urn:nbn:de:0030-drops-144409},
  doi =		{10.4230/LIPIcs.MFCS.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Non-Axiomatizability of the Equational Theories of Positive Relation Algebras (Invited Talk)

Authors: Amina Doumane


Abstract
In the literature, there are two ways to show that the equational theory of relations over a given signature is not finitely axiomatizable. The first one is based on games and a construction called the Rainbow construction. This method is very technical, but it shows a strong result: the equational theory cannot be axiomatized by any finite set of first-order formulas. There is another method, based on a graph characterization of the equational theory of relations, which is easier to obtain and to understand, but proves a weaker result: the equational theory cannot be axiomatized by any finite set of equations. In this presentation, I will show how to extend the second technique so as to obtain the stronger result of non-axiomatizability by first-order formulas.

Cite as

Amina Doumane. Non-Axiomatizability of the Equational Theories of Positive Relation Algebras (Invited Talk). In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{doumane:LIPIcs.MFCS.2021.1,
  author =	{Doumane, Amina},
  title =	{{Non-Axiomatizability of the Equational Theories of Positive Relation Algebras}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.1},
  URN =		{urn:nbn:de:0030-drops-144417},
  doi =		{10.4230/LIPIcs.MFCS.2021.1},
  annote =	{Keywords: Relation algebra, Graph homomorphism, Equational theories, First-order logic}
}
Document
Invited Talk
A Deep Dive into the Weisfeiler-Leman Algorithm (Invited Talk)

Authors: Martin Grohe


Abstract
The Weisfeiler-Leman algorithm is a well-known combinatorial graph isomorphism test going back to work of Weisfeiler and Leman in the late 1960s. The algorithm has a surprising number of seemingly unrelated characterisations in terms of logic, algebra, linear and semi-definite programming, and graph homomorphisms. Due to its simplicity and efficiency, it is an important subroutine of all modern graph isomorphism tools. In recent years, further applications in linear optimisation, probabilistic inference, and machine learning have surfaced. In my talk, I will introduce the Weisfeiler-Leman algorithm and some extensions. I will discuss its expressiveness and the various characterisations, and I will speak about its applications.
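
The talk's subject can be made concrete with a minimal sketch of the classical colour refinement (1-dimensional Weisfeiler-Leman) procedure. The function names and the adjacency-list representation are ours, not from the talk; differing stable colour histograms certify non-isomorphism, while equal histograms are inconclusive.

```python
from collections import Counter

def wl_colour_refinement(adj):
    """One-dimensional Weisfeiler-Leman (colour refinement).
    adj maps each vertex to an iterable of its neighbours;
    returns a stable colouring as a dict vertex -> colour id."""
    colour = {v: 0 for v in adj}                 # start with a uniform colouring
    while True:
        # New signature of v: its own colour plus the multiset of neighbour colours.
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v]))) for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        refined = {v: ids[sig[v]] for v in adj}
        if refined == colour:                    # partition is stable
            return colour
        colour = refined

def wl_distinguishes(adj1, adj2):
    """Run refinement on the disjoint union of the two graphs and compare
    the resulting colour histograms."""
    union = {('a', v): [('a', u) for u in ns] for v, ns in adj1.items()}
    union.update({('b', v): [('b', u) for u in ns] for v, ns in adj2.items()})
    col = wl_colour_refinement(union)
    h1 = Counter(c for (side, _), c in col.items() if side == 'a')
    h2 = Counter(c for (side, _), c in col.items() if side == 'b')
    return h1 != h2
```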

Cite as

Martin Grohe. A Deep Dive into the Weisfeiler-Leman Algorithm (Invited Talk). In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{grohe:LIPIcs.MFCS.2021.2,
  author =	{Grohe, Martin},
  title =	{{A Deep Dive into the Weisfeiler-Leman Algorithm}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.2},
  URN =		{urn:nbn:de:0030-drops-144429},
  doi =		{10.4230/LIPIcs.MFCS.2021.2},
  annote =	{Keywords: Weisfeiler-Leman algorithm, graph isomorphism, counting homomorphisms, finite variable logics}
}
Document
Invited Talk
Holonomic Techniques, Periods, and Decision Problems (Invited Talk)

Authors: Joël Ouaknine


Abstract
Holonomic techniques have deep roots going back to Wallis, Euler, and Gauss, and have evolved in modern times as an important subfield of computer algebra, thanks in large part to the work of Zeilberger and others over the past three decades (see, e.g., [Doron Zeilberger, 1990; Petkovšek et al., 1997]). In this talk, I give an overview of the area, and in particular present a select survey of known and original results on decision problems for holonomic sequences and functions. I also discuss some surprising connections to the theory of periods and exponential periods, which are classical objects of study in algebraic geometry and number theory; in particular, I relate the decidability of certain decision problems for holonomic sequences to deep conjectures about periods and exponential periods, notably those due to Kontsevich and Zagier. Parts of this exposition draw upon [George Kenison et al., 2021].

Cite as

Joël Ouaknine. Holonomic Techniques, Periods, and Decision Problems (Invited Talk). In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ouaknine:LIPIcs.MFCS.2021.3,
  author =	{Ouaknine, Jo\"{e}l},
  title =	{{Holonomic Techniques, Periods, and Decision Problems}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.3},
  URN =		{urn:nbn:de:0030-drops-144431},
  doi =		{10.4230/LIPIcs.MFCS.2021.3},
  annote =	{Keywords: Holonomic and hypergeometric sequences, Inequality problems, Continued fractions, Periods}
}
Document
Invited Talk
On Dynamic Graphs (Invited Talk)

Authors: Eva Rotenberg


Abstract
In graph algorithms, many questions about a graph can be answered in time proportional to the size of the input, and such linear time algorithms are considered the epitome of efficiency. However, when the graph changes slightly, e.g. by the insertion or deletion of an edge or a vertex, it is undesirable to consider the entire input again. Rather, one would wish to keep some of the partial answers to questions about the old graph, and re-use them when computing answers to questions about the resulting graph. The art of handling such changes is studied in dynamic graph algorithms. In this talk, we will see some examples of ideas and techniques for efficiently maintaining knowledge about a dynamically changing graph. We will consider classical and natural graph properties such as connectivity and planarity, and we will focus on deterministic algorithms.
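
As a toy illustration of re-using previously computed answers when the graph changes, here is a union-find based incremental connectivity structure. It handles edge insertions only; deletions, and the deterministic planarity and connectivity structures discussed in the talk, require far more machinery. All names are ours.

```python
class IncrementalConnectivity:
    """Answers connectivity queries while edges are inserted, re-using all
    previously computed structure instead of re-reading the whole graph."""

    def __init__(self, n):
        self.parent = list(range(n))      # one rooted tree per component

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def insert_edge(self, u, v):
        self.parent[self.find(u)] = self.find(v)           # union the components

    def connected(self, u, v):
        return self.find(u) == self.find(v)

dc = IncrementalConnectivity(4)
dc.insert_edge(0, 1)
dc.insert_edge(2, 3)
print(dc.connected(0, 1), dc.connected(0, 3))   # True False
```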

Cite as

Eva Rotenberg. On Dynamic Graphs (Invited Talk). In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{rotenberg:LIPIcs.MFCS.2021.4,
  author =	{Rotenberg, Eva},
  title =	{{On Dynamic Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.4},
  URN =		{urn:nbn:de:0030-drops-144445},
  doi =		{10.4230/LIPIcs.MFCS.2021.4},
  annote =	{Keywords: Graph algorithms, dynamic graphs, connectivity, planarity, matching, online algorithms}
}
Document
Invited Talk
Sublinear Algorithms for Edit Distance (Invited Talk)

Authors: Barna Saha


Abstract
The edit distance is a way of quantifying how similar two strings are to one another by counting the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. A simple dynamic program computes the edit distance between two strings of length n in O(n²) time, and a more sophisticated algorithm runs in time O(n+t²), where t is the distance (Landau, Myers and Schmidt, SICOMP 1998). In pursuit of faster running times, the last couple of decades have seen a flurry of research on approximating edit distance, including a polylogarithmic approximation in near-linear time (Andoni, Krauthgamer and Onak, FOCS 2010) and a constant-factor approximation in subquadratic time (Chakrabarty, Das, Goldenberg, Koucký and Saks, FOCS 2018). In this talk, we will discuss recent progress that goes below linear time and studies sublinear-time algorithms for edit distance. We will also discuss the role preprocessing might play in designing fast algorithms. This is joint work with Elazar Goldenberg, Tomasz Kociumaka, Robert Krauthgamer, and Aviad Rubinstein.
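
For reference, the quadratic baseline mentioned in the abstract is the textbook dynamic program; a minimal sketch (not part of the talk):

```python
def edit_distance(a, b):
    """Classic O(n*m) dynamic program: dp[i][j] is the edit distance
    between the prefixes a[:i] and b[:j]."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i                      # delete all of a[:i]
    for j in range(m + 1):
        dp[0][j] = j                      # insert all of b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                           # deletion
                dp[i][j - 1] + 1,                           # insertion
                dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # substitution / match
            )
    return dp[n][m]

assert edit_distance("kitten", "sitting") == 3
```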

Cite as

Barna Saha. Sublinear Algorithms for Edit Distance (Invited Talk). In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{saha:LIPIcs.MFCS.2021.5,
  author =	{Saha, Barna},
  title =	{{Sublinear Algorithms for Edit Distance}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.5},
  URN =		{urn:nbn:de:0030-drops-144452},
  doi =		{10.4230/LIPIcs.MFCS.2021.5},
  annote =	{Keywords: Edit distance, sublinear algorithms, string processing}
}
Document
An Approximation Algorithm for the Matrix Tree Multiplication Problem

Authors: Mahmoud Abo-Khamis, Ryan Curtin, Sungjin Im, Benjamin Moseley, Hung Ngo, Kirk Pruhs, and Alireza Samadian


Abstract
We consider the Matrix Tree Multiplication problem, a generalization of the classic Matrix Chain Multiplication problem covered in the dynamic programming chapter of many introductory algorithms textbooks. An instance of the Matrix Tree Multiplication problem consists of a rooted tree with a matrix associated with each edge. The output is, for each leaf in the tree, the product of the matrices on the path from the root to that leaf. Partial products that are shared between different root-to-leaf paths need only be computed once. Algorithms are evaluated by the number of scalar multiplications performed. Our main result is a linear time algorithm for which the number of scalar multiplications performed is at most 15 times the optimal number of scalar multiplications.
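
To make the problem statement concrete, here is a minimal sketch (the function name and tree representation are ours) that computes all root-to-leaf products while sharing common prefixes. It fixes one particular multiplication order (top-down, left-to-right) and is not the paper's 15-approximation, which chooses the order to minimise scalar multiplications.

```python
import numpy as np

def leaf_products(children, mat, root):
    """children: dict vertex -> list of child vertices (leaves map to []).
    mat: dict (parent, child) -> np.ndarray attached to that edge.
    Returns {leaf: product of the matrices on the root-to-leaf path}."""
    results = {}

    def walk(v, prefix):
        if not children.get(v):
            results[v] = prefix
            return
        for c in children[v]:
            edge = mat[(v, c)]
            # The prefix product is computed once per internal vertex and
            # then shared by every leaf below it.
            walk(c, edge if prefix is None else prefix @ edge)

    walk(root, None)
    return results
```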

Cite as

Mahmoud Abo-Khamis, Ryan Curtin, Sungjin Im, Benjamin Moseley, Hung Ngo, Kirk Pruhs, and Alireza Samadian. An Approximation Algorithm for the Matrix Tree Multiplication Problem. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{abokhamis_et_al:LIPIcs.MFCS.2021.6,
  author =	{Abo-Khamis, Mahmoud and Curtin, Ryan and Im, Sungjin and Moseley, Benjamin and Ngo, Hung and Pruhs, Kirk and Samadian, Alireza},
  title =	{{An Approximation Algorithm for the Matrix Tree Multiplication Problem}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.6},
  URN =		{urn:nbn:de:0030-drops-144464},
  doi =		{10.4230/LIPIcs.MFCS.2021.6},
  annote =	{Keywords: Matrix Multiplication, Approximation Algorithm}
}
Document
Depth-First Search in Directed Planar Graphs, Revisited

Authors: Eric Allender, Archit Chauhan, and Samir Datta


Abstract
We present an algorithm for constructing a depth-first search tree in planar digraphs; the algorithm can be implemented in the complexity class AC^1(UL∩co-UL), which is contained in AC². Prior to this (for more than a quarter-century), the fastest uniform deterministic parallel algorithm for this problem ran in time O(log^{10} n) (corresponding to the complexity class AC^{10} ⊆ NC^{11}). We also consider the problem of computing depth-first search trees in other classes of graphs, and obtain additional new upper bounds.

Cite as

Eric Allender, Archit Chauhan, and Samir Datta. Depth-First Search in Directed Planar Graphs, Revisited. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{allender_et_al:LIPIcs.MFCS.2021.7,
  author =	{Allender, Eric and Chauhan, Archit and Datta, Samir},
  title =	{{Depth-First Search in Directed Planar Graphs, Revisited}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.7},
  URN =		{urn:nbn:de:0030-drops-144478},
  doi =		{10.4230/LIPIcs.MFCS.2021.7},
  annote =	{Keywords: Depth-First Search, Planar Digraphs, Parallel Algorithms, Space-Bounded Complexity Classes}
}
Document
Order Reconfiguration Under Width Constraints

Authors: Emmanuel Arrighi, Henning Fernau, Mateus de Oliveira Oliveira, and Petra Wolf


Abstract
In this work, we consider the following order reconfiguration problem: Given a graph G together with linear orders ω and ω' of the vertices of G, can one transform ω into ω' by a sequence of swaps of adjacent elements in such a way that at each time step the resulting linear order has cutwidth (pathwidth) at most k? We show that this problem always has an affirmative answer when the input linear orders ω and ω' have cutwidth (pathwidth) at most k/2. Using this result, we establish a connection between two apparently unrelated problems: the reachability problem for two-letter string rewriting systems and the graph isomorphism problem for graphs of bounded cutwidth. This opens an avenue for the study of the famous graph isomorphism problem using techniques from term rewriting theory.
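
The decision question can be made concrete with a brute-force sketch (ours, exponential, for toy instances only): breadth-first search over linear orders, swapping adjacent elements while keeping the cutwidth within the budget k.

```python
from collections import deque

def cutwidth(order, edges):
    """Maximum number of edges crossing any gap of the linear order."""
    if len(order) < 2:
        return 0
    pos = {v: i for i, v in enumerate(order)}
    return max(
        sum(1 for u, v in edges if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        for gap in range(len(order) - 1)
    )

def reconfigure(start, target, edges, k):
    """BFS over linear orders reachable by adjacent swaps with cutwidth <= k.
    Returns the sequence of orders, or None if no such sequence exists."""
    start, target = tuple(start), tuple(target)
    if cutwidth(start, edges) > k or cutwidth(target, edges) > k:
        return None
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            path = []
            while cur is not None:
                path.append(list(cur))
                cur = prev[cur]
            return path[::-1]
        for i in range(len(cur) - 1):
            swapped = list(cur)
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            swapped = tuple(swapped)
            if swapped not in prev and cutwidth(swapped, edges) <= k:
                prev[swapped] = cur
                queue.append(swapped)
    return None
```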

Cite as

Emmanuel Arrighi, Henning Fernau, Mateus de Oliveira Oliveira, and Petra Wolf. Order Reconfiguration Under Width Constraints. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arrighi_et_al:LIPIcs.MFCS.2021.8,
  author =	{Arrighi, Emmanuel and Fernau, Henning and de Oliveira Oliveira, Mateus and Wolf, Petra},
  title =	{{Order Reconfiguration Under Width Constraints}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.8},
  URN =		{urn:nbn:de:0030-drops-144486},
  doi =		{10.4230/LIPIcs.MFCS.2021.8},
  annote =	{Keywords: Parameterized Complexity, Order Reconfiguration, String Rewriting Systems}
}
Document
Universal Gauge-Invariant Cellular Automata

Authors: Pablo Arrighi, Marin Costes, and Nathanaël Eon


Abstract
Gauge symmetries play a fundamental role in Physics, as they provide a mathematical justification for the fundamental forces. Usually, one starts from a non-interacting theory which governs "matter" and features a global symmetry. One then extends the theory so as to make the global symmetry into a local one (a.k.a. gauge-invariance). We formalise a discrete counterpart of this process, known as gauge extension, within the Computer Science framework of Cellular Automata (CA). We prove that the CA which admit a relative gauge extension are exactly the globally symmetric ones (a.k.a. the colour-blind ones). We prove that any CA admits a non-relative gauge extension. Both constructions yield universal gauge-invariant CA, but the latter allows for a first example where the gauge extension mediates interactions within the initial CA.

Cite as

Pablo Arrighi, Marin Costes, and Nathanaël Eon. Universal Gauge-Invariant Cellular Automata. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arrighi_et_al:LIPIcs.MFCS.2021.9,
  author =	{Arrighi, Pablo and Costes, Marin and Eon, Nathana\"{e}l},
  title =	{{Universal Gauge-Invariant Cellular Automata}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.9},
  URN =		{urn:nbn:de:0030-drops-144490},
  doi =		{10.4230/LIPIcs.MFCS.2021.9},
  annote =	{Keywords: Cellular automata, Gauge-invariance, Universality}
}
Document
Equivalence Testing of Weighted Automata over Partially Commutative Monoids

Authors: V. Arvind, Abhranil Chatterjee, Rajit Datta, and Partha Mukhopadhyay


Abstract
Motivated by equivalence testing of k-tape automata, we study the equivalence testing of weighted automata in the more general setting of partially commutative monoids (pc monoids, for short), and show efficient algorithms in some special cases, exploiting the structure of the underlying non-commutation graph of the monoid. Specifically, if the edge clique cover number of the non-commutation graph of the pc monoid is a constant, we obtain a deterministic quasi-polynomial time algorithm for equivalence testing. As a corollary, we obtain the first deterministic quasi-polynomial time algorithms for equivalence testing of k-tape weighted automata and for equivalence testing of deterministic k-tape automata for constant k. Prior to this, the best complexity upper bound for these k-tape automata problems was randomized polynomial time, shown by Worrell [James Worrell, 2013]. Finding a polynomial-time deterministic algorithm for equivalence testing of deterministic k-tape automata for constant k has been open for several years [Emily P. Friedman and Sheila A. Greibach, 1982], and our results make progress on this question. We also consider pc monoids for which the non-commutation graphs have an edge cover consisting of at most k cliques and star graphs for any constant k. We obtain a randomized polynomial-time algorithm for equivalence testing of weighted automata over such monoids. Our results are obtained by designing efficient zero-testing algorithms for weighted automata over such pc monoids.

Cite as

V. Arvind, Abhranil Chatterjee, Rajit Datta, and Partha Mukhopadhyay. Equivalence Testing of Weighted Automata over Partially Commutative Monoids. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arvind_et_al:LIPIcs.MFCS.2021.10,
  author =	{Arvind, V. and Chatterjee, Abhranil and Datta, Rajit and Mukhopadhyay, Partha},
  title =	{{Equivalence Testing of Weighted Automata over Partially Commutative Monoids}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.10},
  URN =		{urn:nbn:de:0030-drops-144503},
  doi =		{10.4230/LIPIcs.MFCS.2021.10},
  annote =	{Keywords: Weighted Automata, Automata Equivalence, Partially Commutative Monoid}
}
Document
Finitely Tractable Promise Constraint Satisfaction Problems

Authors: Kristina Asimi and Libor Barto


Abstract
The Promise Constraint Satisfaction Problem (PCSP) is a generalization of the Constraint Satisfaction Problem (CSP) that includes approximation variants of satisfiability and graph coloring problems. Barto [LICS '19] has shown that a specific PCSP, the problem of finding a valid Not-All-Equal solution to a 1-in-3-SAT instance, is not finitely tractable: it can be solved by a trivial reduction to a tractable CSP, but any such CSP is necessarily over an infinite domain (unless P = NP). We initiate a systematic study of this phenomenon by giving a general necessary condition for finite tractability and by characterizing finite tractability within a class of templates - the "basic" tractable cases in the dichotomy theorem for symmetric Boolean PCSPs allowing negations by Brakensiek and Guruswami [SODA'18].

Cite as

Kristina Asimi and Libor Barto. Finitely Tractable Promise Constraint Satisfaction Problems. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{asimi_et_al:LIPIcs.MFCS.2021.11,
  author =	{Asimi, Kristina and Barto, Libor},
  title =	{{Finitely Tractable Promise Constraint Satisfaction Problems}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.11},
  URN =		{urn:nbn:de:0030-drops-144519},
  doi =		{10.4230/LIPIcs.MFCS.2021.11},
  annote =	{Keywords: Constraint satisfaction problems, promise constraint satisfaction, Boolean PCSP, polymorphism, finite tractability, homomorphic relaxation}
}
Document
A Generic Strategy Improvement Method for Simple Stochastic Games

Authors: David Auger, Xavier Badin de Montjoye, and Yann Strozecki


Abstract
We present a generic strategy improvement algorithm (GSIA) to find an optimal strategy of simple stochastic games (SSG). We prove the correctness of GSIA, and derive a general complexity bound, which implies and improves on the results of several articles. First, we remove the assumption that the SSG is stopping, which is usually obtained by a polynomial blowup of the game. Second, we prove a tight bound on the denominator of the values associated to a strategy, and use it to prove that all strategy improvement algorithms are in fact fixed-parameter tractable in the number r of random vertices. All known strategy improvement algorithms can be seen as instances of GSIA, which allows us to analyze the complexity of the converge-from-below algorithm of Condon [Condon, 1993] and to propose a class of algorithms generalising Gimbert and Horn’s algorithm [Gimbert and Horn, 2008; Gimbert and Horn, 2009]. These algorithms terminate in at most r! iterations, and for binary SSGs, they perform fewer iterations than the current best deterministic algorithm, given by Ibsen-Jensen and Miltersen [Ibsen-Jensen and Miltersen, 2012].

Cite as

David Auger, Xavier Badin de Montjoye, and Yann Strozecki. A Generic Strategy Improvement Method for Simple Stochastic Games. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{auger_et_al:LIPIcs.MFCS.2021.12,
  author =	{Auger, David and Badin de Montjoye, Xavier and Strozecki, Yann},
  title =	{{A Generic Strategy Improvement Method for Simple Stochastic Games}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.12},
  URN =		{urn:nbn:de:0030-drops-144524},
  doi =		{10.4230/LIPIcs.MFCS.2021.12},
  annote =	{Keywords: Simple Stochastic Games, Strategy Improvement, Parametrized Complexity, Stopping, Meta Algorithm, f-strategy}
}
Document
(Un)Decidability for History Preserving True Concurrent Logics

Authors: Paolo Baldan, Alberto Carraro, and Tommaso Padoan


Abstract
We investigate the satisfiability problem for a logic for true concurrency, whose formulae predicate about events in computations and their causal (in)dependencies. Variants of such logics have been studied, with different expressiveness, corresponding to a number of true concurrent behavioural equivalences. Here we focus on a mu-calculus style logic that represents the counterpart of history-preserving (hp-)bisimilarity, a typical equivalence in the true concurrent spectrum of bisimilarities. It is known that one can decide whether or not two 1-safe Petri nets (and in general finite asynchronous transition systems) are hp-bisimilar. Moreover, for the logic that captures hp-bisimilarity the model-checking problem is decidable with respect to prime event structures satisfying suitable regularity conditions. To the best of our knowledge, the problem of satisfiability has been scarcely investigated in the realm of true concurrent logics. We show that satisfiability for the logic for hp-bisimilarity is undecidable via a reduction from domino tilings. The fragment of the logic without fixpoints, instead, turns out to be decidable. We consider these results a first step towards a more complete investigation of the satisfiability problem for true concurrent logics, which we believe to have notable solvable cases.

Cite as

Paolo Baldan, Alberto Carraro, and Tommaso Padoan. (Un)Decidability for History Preserving True Concurrent Logics. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{baldan_et_al:LIPIcs.MFCS.2021.13,
  author =	{Baldan, Paolo and Carraro, Alberto and Padoan, Tommaso},
  title =	{{(Un)Decidability for History Preserving True Concurrent Logics}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.13},
  URN =		{urn:nbn:de:0030-drops-144532},
  doi =		{10.4230/LIPIcs.MFCS.2021.13},
  annote =	{Keywords: Event structures, history-preserving bisimilarity, true concurrent behavioural logics, satisfiability, decidability, domino systems}
}
Document
Parameterized Complexity of Feature Selection for Categorical Data Clustering

Authors: Sayan Bandyapadhyay, Fedor V. Fomin, Petr A. Golovach, and Kirill Simonov


Abstract
We develop new algorithmic methods with provable guarantees for feature selection in regard to categorical data clustering. While feature selection is one of the most common approaches to reduce dimensionality in practice, most of the known feature selection methods are heuristics. We study the following mathematical model. We assume that there are some inadvertent (or undesirable) features of the input data that unnecessarily increase the cost of clustering. Consequently, we want to select a subset of the original features from the data such that there is a small-cost clustering on the selected features. More precisely, for given integers l (the number of irrelevant features) and k (the number of clusters), budget B, and a set of n categorical data points (represented by m-dimensional vectors whose elements belong to a finite set of values Σ), we want to select m-l relevant features such that the cost of any optimal k-clustering on these features does not exceed B. Here the cost of a cluster is the sum of Hamming distances (l0-distances) between the selected features of the elements of the cluster and its center. The clustering cost is the total sum of the costs of the clusters. We use the framework of parameterized complexity to identify how the complexity of the problem depends on parameters k, B, and |Σ|. Our main result is an algorithm that solves the Feature Selection problem in time f(k,B,|Σ|)⋅m^{g(k,|Σ|)}⋅n² for some functions f and g. In other words, the problem is fixed-parameter tractable parameterized by B when |Σ| and k are constants. Our algorithm for Feature Selection is based on a solution to a more general problem, Constrained Clustering with Outliers. In this problem, we want to delete a certain number of outliers such that the remaining points could be clustered around centers satisfying specific constraints. One interesting fact about Constrained Clustering with Outliers is that besides Feature Selection, it encompasses many other fundamental problems regarding categorical data such as Robust Clustering, Binary and Boolean Low-rank Matrix Approximation with Outliers, and Binary Robust Projective Clustering. Thus as a byproduct of our theorem, we obtain algorithms for all these problems. We also complement our algorithmic findings with complexity lower bounds.
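
The cost model in the abstract can be made concrete with a small sketch (names and data representation are ours): for a fixed partition into clusters and a fixed set of selected features, the optimal centre value of each feature within a cluster is its most frequent value, and the cost counts the disagreements (Hamming distance 1 per disagreeing coordinate).

```python
from collections import Counter

def clustering_cost(points, clusters, selected_features):
    """Cost of a clustering restricted to the selected features:
    points   -- list of equal-length tuples over a finite alphabet
    clusters -- list of lists of point indices (a partition)
    selected_features -- indices of the features kept as relevant."""
    cost = 0
    for cluster in clusters:
        if not cluster:
            continue
        for f in selected_features:
            counts = Counter(points[i][f] for i in cluster)
            # every point disagreeing with the majority value pays 1
            cost += len(cluster) - max(counts.values())
    return cost

# Toy instance: 4 binary points, 2 clusters, feature 0 treated as irrelevant.
pts = [(0, 1, 1), (1, 1, 1), (0, 0, 0), (1, 0, 0)]
print(clustering_cost(pts, [[0, 1], [2, 3]], selected_features=[1, 2]))  # 0
```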

Cite as

Sayan Bandyapadhyay, Fedor V. Fomin, Petr A. Golovach, and Kirill Simonov. Parameterized Complexity of Feature Selection for Categorical Data Clustering. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bandyapadhyay_et_al:LIPIcs.MFCS.2021.14,
  author =	{Bandyapadhyay, Sayan and Fomin, Fedor V. and Golovach, Petr A. and Simonov, Kirill},
  title =	{{Parameterized Complexity of Feature Selection for Categorical Data Clustering}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.14},
  URN =		{urn:nbn:de:0030-drops-144544},
  doi =		{10.4230/LIPIcs.MFCS.2021.14},
  annote =	{Keywords: Robust clustering, PCA, Low rank approximation, Hypergraph enumeration}
}
Document
Decision Questions for Probabilistic Automata on Small Alphabets

Authors: Paul C. Bell and Pavel Semukhin


Abstract
We study the emptiness and λ-reachability problems for unary and binary Probabilistic Finite Automata (PFA) and characterise the complexity of these problems in terms of the degree of ambiguity of the automaton and the size of its alphabet. Our main result is that emptiness and λ-reachability are solvable in EXPTIME for polynomially ambiguous unary PFA and if, in addition, the transition matrix is over {0, 1}, we show they are in NP. In contrast to the Skolem-hardness of the λ-reachability and emptiness problems for exponentially ambiguous unary PFA, we show that these problems are NP-hard even for finitely ambiguous unary PFA. For binary polynomially ambiguous PFA with commuting transition matrices, we prove NP-hardness of the λ-reachability (dimension 9), nonstrict emptiness (dimension 37) and strict emptiness (dimension 40) problems.
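
As a concrete reading of the quantities involved, here is a small sketch (names are ours) of the acceptance probability of a unary PFA and of a bounded-horizon version of λ-reachability; the decision problems studied in the paper quantify over all input lengths, which is where the hardness lies.

```python
import numpy as np

def acceptance_prob(init, M, accepting, n):
    """Acceptance probability of the word a^n for a unary PFA:
    initial distribution times the n-th power of the row-stochastic
    transition matrix, summed over the accepting states."""
    return float(init @ np.linalg.matrix_power(M, n) @ accepting)

def bounded_lambda_reachability(init, M, accepting, lam, bound):
    """Check a^0 .. a^bound for acceptance probability strictly above lam."""
    return any(acceptance_prob(init, M, accepting, n) > lam
               for n in range(bound + 1))

# Tiny example: probability of having moved to the accepting sink after n steps.
init = np.array([1.0, 0.0])
M = np.array([[0.5, 0.5],
              [0.0, 1.0]])
accepting = np.array([0.0, 1.0])
print(bounded_lambda_reachability(init, M, accepting, lam=0.9, bound=10))  # True
```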

Cite as

Paul C. Bell and Pavel Semukhin. Decision Questions for Probabilistic Automata on Small Alphabets. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 15:1-15:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bell_et_al:LIPIcs.MFCS.2021.15,
  author =	{Bell, Paul C. and Semukhin, Pavel},
  title =	{{Decision Questions for Probabilistic Automata on Small Alphabets}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{15:1--15:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.15},
  URN =		{urn:nbn:de:0030-drops-144559},
  doi =		{10.4230/LIPIcs.MFCS.2021.15},
  annote =	{Keywords: Probabilistic finite automata, unary alphabet, emptiness problem, bounded ambiguity}
}
Document
Ideal Membership Problem for Boolean Minority and Dual Discriminator

Authors: Arpitha P. Bharathi and Monaldo Mastrolilli


Abstract
The polynomial Ideal Membership Problem (IMP) tests if an input polynomial f ∈ 𝔽[x_1,… ,x_n] with coefficients from a field 𝔽 belongs to a given ideal I ⊆ 𝔽[x_1,… ,x_n]. It is a well-known fundamental problem with many important applications, though notoriously intractable in the general case. In this paper we consider the IMP for polynomial ideals encoding combinatorial problems and where the input polynomial f has degree at most d = O(1) (we call this problem IMP_d). A dichotomy result between "hard" (NP-hard) and "easy" (polynomial time) IMPs was achieved for Constraint Satisfaction Problems over finite domains [Andrei A. Bulatov, 2017; Dmitriy Zhuk, 2020] (this is equivalent to IMP_0) and IMP_d for the Boolean domain [Mastrolilli, 2019], both based on the classification of the IMP through functions called polymorphisms. For the latter result, there are only six polymorphisms to be studied in order to achieve a full dichotomy result for the IMP_d. The complexity of the IMP_d for five of these polymorphisms has been solved in [Mastrolilli, 2019] whereas for the ternary minority polymorphism it was incorrectly declared in [Mastrolilli, 2019] to have been resolved by a previous result. In this paper we provide the missing link by proving that the IMP_d for Boolean combinatorial ideals whose constraints are closed under the minority polymorphism can be solved in polynomial time. This completes the identification of the precise borderline of tractability for the IMP_d for constrained problems over the Boolean domain. We also prove that the proof of membership for the IMP_d for problems constrained by the dual discriminator polymorphism over any finite domain can also be found in polynomial time. Bulatov and Rafiey [Andrei A. Bulatov and Akbar Rafiey, 2020] recently proved that the IMP_d for this polymorphism is decidable in polynomial time, without needing a proof of membership. Our result gives a proof of membership and can be used in applications such as Nullstellensatz and Sum-of-Squares proofs.
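
To make the membership question itself concrete (independently of the polymorphism machinery the paper studies), here is a minimal sketch over a toy ideal, assuming SymPy's groebner and GroebnerBasis.contains; it illustrates the IMP, not the paper's algorithm or its combinatorial ideals.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Toy ideal I = <x**2 - 1, y - x> in Q[x, y].
G = groebner([x**2 - 1, y - x], x, y, order='lex')

# f lies in I iff its normal form modulo a Groebner basis of I is zero.
print(G.contains(y**2 - 1))   # True:  y**2 - 1 = (y + x)*(y - x) + (x**2 - 1)
print(G.contains(x + 1))      # False: x + 1 does not vanish at the point (1, 1)
```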

Cite as

Arpitha P. Bharathi and Monaldo Mastrolilli. Ideal Membership Problem for Boolean Minority and Dual Discriminator. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 16:1-16:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bharathi_et_al:LIPIcs.MFCS.2021.16,
  author =	{Bharathi, Arpitha P. and Mastrolilli, Monaldo},
  title =	{{Ideal Membership Problem for Boolean Minority and Dual Discriminator}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{16:1--16:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.16},
  URN =		{urn:nbn:de:0030-drops-144560},
  doi =		{10.4230/LIPIcs.MFCS.2021.16},
  annote =	{Keywords: Polynomial ideal membership, Polymorphisms, Gr\"{o}bner basis theory, Constraint satisfaction problems}
}
Document
Graph Traversals as Universal Constructions

Authors: Siddharth Bhaskar and Robin Kaarsgaard


Abstract
We exploit a decomposition of graph traversals to give a novel characterization of depth-first and breadth-first traversals by means of universal constructions. Specifically, we introduce functors from two different categories of edge-ordered directed graphs into two different categories of transitively closed edge-ordered graphs; one defines the lexicographic depth-first traversal and the other the lexicographic breadth-first traversal. We show that each functor factors as a composition of universal constructions, and that the usual presentation of traversals as linear orders on vertices can be recovered with the addition of an inclusion functor. Finally, we raise the question of the extent to which search algorithms can be recovered from the categorical description of the traversal they compute.
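
For readers who want the non-categorical baseline, here is a minimal sketch of the lexicographic traversals themselves, i.e. the usual presentation as linear orders on vertices that the paper recovers; the adjacency representation and function names are ours.

```python
from collections import deque

def lex_dfs(adj, root):
    """Lexicographic depth-first traversal of an edge-ordered directed graph:
    out-edges of each vertex are explored in their given order."""
    order, seen = [], set()

    def visit(v):
        seen.add(v)
        order.append(v)
        for w in adj[v]:              # adj[v] is an *ordered* list of successors
            if w not in seen:
                visit(w)

    visit(root)
    return order

def lex_bfs(adj, root):
    """Breadth-first counterpart: the FIFO queue preserves the edge order
    in which vertices are first discovered."""
    order, seen, queue = [], {root}, deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order
```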

Cite as

Siddharth Bhaskar and Robin Kaarsgaard. Graph Traversals as Universal Constructions. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhaskar_et_al:LIPIcs.MFCS.2021.17,
  author =	{Bhaskar, Siddharth and Kaarsgaard, Robin},
  title =	{{Graph Traversals as Universal Constructions}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.17},
  URN =		{urn:nbn:de:0030-drops-144573},
  doi =		{10.4230/LIPIcs.MFCS.2021.17},
  annote =	{Keywords: graph traversals, adjunctions, universal constructions, category theory}
}
Document
Space-Efficient Fault-Tolerant Diameter Oracles

Authors: Davide Bilò, Sarel Cohen, Tobias Friedrich, and Martin Schirneck


Abstract
We design f-edge fault-tolerant diameter oracles (f-FDO, or simply FDO if f = 1). For a given directed or undirected and possibly edge-weighted graph G with n vertices and m edges and a positive integer f, we preprocess the graph and construct a data structure that, when queried with a set F of edges, where |F| ⩽ f, returns the diameter of G-F. An f-FDO has stretch σ ⩾ 1 if the returned value D^ satisfies diam(G-F) ⩽ D^ ⩽ σ diam(G-F). For the case of a single edge failure (f = 1) in an unweighted directed graph, there exists an approximate FDO by Henzinger et al. [ITCS 2017] with stretch (1+ε), constant query time, space O(m), and a combinatorial preprocessing time of Õ(mn + n^{1.5} √{Dm/ε}), where D is the diameter. We present an FDO for directed graphs with the same stretch, query time, and space. It has a preprocessing time of Õ(mn + n²/ε), which is better for constant ε > 0. The preprocessing time nearly matches a conditional lower bound for combinatorial algorithms, also by Henzinger et al. With fast matrix multiplication, we achieve a preprocessing time of Õ(n^{2.5794} + n²/ε). We further prove an information-theoretic lower bound showing that any FDO with stretch better than 3/2 requires Ω(m) bits of space. Thus, for constant 0 < ε < 3/2, our combinatorial (1+ε)-approximate FDO is near-optimal in all parameters. In the case of multiple edge failures (f > 1) in undirected graphs with non-negative edge weights, we give an f-FDO with stretch (f+2), query time O(f²log²{n}), Õ(fn) space, and preprocessing time Õ(fm). We complement this with a lower bound excluding any finite stretch in o(fn) space. Many real-world networks have polylogarithmic diameter. We show that for those graphs and up to f = o(log n/ log log n) failures one can swap approximation for query time and space. We present an exact combinatorial f-FDO with preprocessing time mn^{1+o(1)}, query time n^o(1), and space n^{2+o(1)}. When using fast matrix multiplication instead, the preprocessing time can be improved to n^{ω+o(1)}, where ω < 2.373 is the matrix multiplication exponent.
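
To fix the interface being designed, here is a trivial, preprocessing-free "oracle" for the undirected, unweighted case that answers each query exactly by recomputing the diameter with one BFS per vertex; all names are ours, and it serves only as a correctness baseline for the space/query-time/stretch trade-offs described above.

```python
from collections import deque

def diameter_after_failures(adj, failed_edges):
    """Exact diameter of G - F for an undirected, unweighted graph given as
    an adjacency dict; failed_edges is the query set F of edge pairs."""
    failed = {frozenset(e) for e in failed_edges}
    vertices = list(adj)
    diameter = 0
    for s in vertices:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in dist or frozenset((u, v)) in failed:
                    continue
                dist[v] = dist[u] + 1
                queue.append(v)
        if len(dist) < len(vertices):
            return float('inf')           # G - F is disconnected
        diameter = max(diameter, max(dist.values()))
    return diameter
```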

Cite as

Davide Bilò, Sarel Cohen, Tobias Friedrich, and Martin Schirneck. Space-Efficient Fault-Tolerant Diameter Oracles. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.MFCS.2021.18,
  author =	{Bil\`{o}, Davide and Cohen, Sarel and Friedrich, Tobias and Schirneck, Martin},
  title =	{{Space-Efficient Fault-Tolerant Diameter Oracles}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.18},
  URN =		{urn:nbn:de:0030-drops-144581},
  doi =		{10.4230/LIPIcs.MFCS.2021.18},
  annote =	{Keywords: derandomization, diameter, distance sensitivity oracle, fault-tolerant data structure, space lower bound}
}
Document
ω-Forest Algebras and Temporal Logics

Authors: Achim Blumensath and Jakub Lédl


Abstract
We use the algebraic framework for languages of infinite trees introduced in [A. Blumensath, 2020] to derive effective characterisations of various temporal logics, in particular the logic EF (a fragment of CTL) and its counting variant cEF.

Cite as

Achim Blumensath and Jakub Lédl. ω-Forest Algebras and Temporal Logics. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 19:1-19:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blumensath_et_al:LIPIcs.MFCS.2021.19,
  author =	{Blumensath, Achim and L\'{e}dl, Jakub},
  title =	{{\omega-Forest Algebras and Temporal Logics}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{19:1--19:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.19},
  URN =		{urn:nbn:de:0030-drops-144594},
  doi =		{10.4230/LIPIcs.MFCS.2021.19},
  annote =	{Keywords: forest algebras, temporal logics, bisimulation}
}
Document
Constructing Deterministic ω-Automata from Examples by an Extension of the RPNI Algorithm

Authors: León Bohn and Christof Löding


Abstract
The RPNI algorithm (Oncina and Garcia, 1992) constructs deterministic finite automata from finite sets of negative and positive example words. We propose and analyze an extension of this algorithm to deterministic ω-automata with different types of acceptance conditions. In order to obtain this generalization of RPNI, we develop algorithms for the standard acceptance conditions of ω-automata that check, for a given set of example words and a deterministic transition system, whether these example words can be accepted in the transition system with a corresponding acceptance condition. Based on these algorithms, we can define the extension of RPNI to infinite words. We prove that it can learn all deterministic ω-automata with an informative right congruence in the limit with polynomial time and data. We also show that the algorithm, while it can learn some automata that do not have an informative right congruence, cannot learn deterministic ω-automata for all regular ω-languages in the limit. Finally, we prove that active learning with membership and equivalence queries is not easier for automata with an informative right congruence than for general deterministic ω-automata.

Cite as

León Bohn and Christof Löding. Constructing Deterministic ω-Automata from Examples by an Extension of the RPNI Algorithm. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bohn_et_al:LIPIcs.MFCS.2021.20,
  author =	{Bohn, Le\'{o}n and L\"{o}ding, Christof},
  title =	{{Constructing Deterministic \omega-Automata from Examples by an Extension of the RPNI Algorithm}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.20},
  URN =		{urn:nbn:de:0030-drops-144607},
  doi =		{10.4230/LIPIcs.MFCS.2021.20},
  annote =	{Keywords: deterministic omega-automata, learning from examples, learning in the limit, constructing acceptance conditions, active learning}
}
Document
Computational Complexity of Covering Multigraphs with Semi-Edges: Small Cases

Authors: Jan Bok, Jiří Fiala, Petr Hliněný, Nikola Jedličková, and Jan Kratochvíl


Abstract
We initiate the study of computational complexity of graph coverings, aka locally bijective graph homomorphisms, for graphs with semi-edges. The notion of graph covering is a discretization of coverings between surfaces or topological spaces, a notion well known and deeply studied in classical topology. Graph covers have found applications in discrete mathematics for constructing highly symmetric graphs, and in computer science in the theory of local computations. In 1991, Abello et al. asked for a classification of the computational complexity of deciding if an input graph covers a fixed target graph, in the ordinary setting (of graphs with only edges). Although many general results are known, the full classification is still open. In spite of that, we propose to study the more general case of covering graphs composed of normal edges (including multiedges and loops) and so-called semi-edges. Semi-edges are becoming increasingly popular in modern topological graph theory, as well as in mathematical physics. They also naturally occur in the local computation setting, since they are lifted to matchings in the covering graph. We show that the presence of semi-edges makes the covering problem considerably harder; e.g., it is no longer sufficient to specify the vertex mapping induced by the covering, but one necessarily has to deal with the edge mapping as well. We show some solvable cases and, in particular, completely characterize the complexity of the already very nontrivial problem of covering one- and two-vertex (multi)graphs with semi-edges. Our NP-hardness results are proven for simple input graphs, and in the case of regular two-vertex target graphs, even for bipartite ones. We remark that our new characterization results also strengthen previously known results for covering graphs without semi-edges, and they in turn apply to an infinite class of simple target graphs with at most two vertices of degree more than two. Some of the results are moreover proven in a more general setting (e.g., finding k-tuples of pairwise disjoint perfect matchings in regular graphs, or finding equitable partitions of regular bipartite graphs).

Cite as

Jan Bok, Jiří Fiala, Petr Hliněný, Nikola Jedličková, and Jan Kratochvíl. Computational Complexity of Covering Multigraphs with Semi-Edges: Small Cases. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bok_et_al:LIPIcs.MFCS.2021.21,
  author =	{Bok, Jan and Fiala, Ji\v{r}{\'\i} and Hlin\v{e}n\'{y}, Petr and Jedli\v{c}kov\'{a}, Nikola and Kratochv{\'\i}l, Jan},
  title =	{{Computational Complexity of Covering Multigraphs with Semi-Edges: Small Cases}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.21},
  URN =		{urn:nbn:de:0030-drops-144611},
  doi =		{10.4230/LIPIcs.MFCS.2021.21},
  annote =	{Keywords: graph cover, covering projection, semi-edges, multigraphs, complexity}
}
Document
Coherent Control and Distinguishability of Quantum Channels via PBS-Diagrams

Authors: Cyril Branciard, Alexandre Clément, Mehdi Mhalla, and Simon Perdrix


Abstract
Even though coherent control of quantum operations appears to be achievable in practice, it is not yet well understood. Among theoretical challenges, standard completely positive trace-preserving (CPTP) maps are known not to be appropriate to represent coherently controlled quantum channels. We introduce here a graphical language for coherent control of general quantum channels inspired by practical quantum optical setups involving polarising beam splitters (PBS). We consider different situations of coherent control and disambiguate CPTP maps by considering purified channels, an extension of Stinespring’s dilation. First, we show that in classical control settings, the observational equivalence classes of purified channels correspond to the standard definition of quantum channels (CPTP maps). Then, we propose a refinement of this equivalence class generalising the "half quantum switch" situation, where one is allowed to coherently control which quantum channel is applied; in this case, quantum channel implementations can be distinguished using a so-called transformation matrix. A further refinement characterising observational equivalence with general extended PBS-diagrams as contexts is also obtained. Finally, we propose a refinement that could be used for more general coherent control settings.

Cite as

Cyril Branciard, Alexandre Clément, Mehdi Mhalla, and Simon Perdrix. Coherent Control and Distinguishability of Quantum Channels via PBS-Diagrams. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{branciard_et_al:LIPIcs.MFCS.2021.22,
  author =	{Branciard, Cyril and Cl\'{e}ment, Alexandre and Mhalla, Mehdi and Perdrix, Simon},
  title =	{{Coherent Control and Distinguishability of Quantum Channels via PBS-Diagrams}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.22},
  URN =		{urn:nbn:de:0030-drops-144629},
  doi =		{10.4230/LIPIcs.MFCS.2021.22},
  annote =	{Keywords: Quantum Computing, Diagrammatic Language, Quantum Control, Polarising Beam Splitter, Categorical Quantum Mechanics, Quantum Switch}
}
Document
Reconfiguring Independent Sets on Interval Graphs

Authors: Marcin Briański, Stefan Felsner, Jędrzej Hodor, and Piotr Micek


Abstract
We study reconfiguration of independent sets in interval graphs under the token sliding rule. We show that if two independent sets of size k are reconfigurable in an n-vertex interval graph, then there is a reconfiguration sequence of length 𝒪(k⋅ n²). We also provide a construction in which the shortest reconfiguration sequence is of length Ω(k²⋅ n). As a counterpart to these results, we also establish that Independent Set Reconfiguration is PSPACE-hard on incomparability graphs, of which interval graphs are a special case.
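
As an illustration of the token sliding rule used above, the brute-force Python sketch below runs a BFS over all reachable independent sets of a tiny graph and reports the length of a shortest reconfiguration sequence. It is exponential and purely didactic; the function name, encoding and example are assumptions of this note and have nothing to do with the paper's 𝒪(k⋅n²) bound or its PSPACE-hardness proof.

from collections import deque
from itertools import combinations

def token_sliding_distance(adj, start, target):
    """Shortest number of token slides between two independent sets
    (given as iterables of vertices) in the graph adj: vertex -> set of
    neighbours; returns None if no reconfiguration sequence exists.
    Brute-force BFS over all reachable independent sets -- exponential,
    for tiny illustrative examples only."""
    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    start, target = frozenset(start), frozenset(target)
    assert independent(start) and independent(target)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            return dist[cur]
        for u in cur:                      # pick a token...
            for v in adj[u]:               # ...and slide it along an edge
                if v in cur:
                    continue
                nxt = (cur - {u}) | {v}
                if independent(nxt) and nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
    return None

# The path 0-1-2-3-4 (an interval graph); moving the tokens from {0,2}
# to {2,4} needs 4 slides: {0,2} -> {0,3} -> {0,4} -> {1,4} -> {2,4}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(token_sliding_distance(path, {0, 2}, {2, 4}))  # 4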

Cite as

Marcin Briański, Stefan Felsner, Jędrzej Hodor, and Piotr Micek. Reconfiguring Independent Sets on Interval Graphs. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{brianski_et_al:LIPIcs.MFCS.2021.23,
  author =	{Bria\'{n}ski, Marcin and Felsner, Stefan and Hodor, J\k{e}drzej and Micek, Piotr},
  title =	{{Reconfiguring Independent Sets on Interval Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.23},
  URN =		{urn:nbn:de:0030-drops-144633},
  doi =		{10.4230/LIPIcs.MFCS.2021.23},
  annote =	{Keywords: reconfiguration, independent sets, interval graphs}
}
Document
Finite Convergence of μ-Calculus Fixpoints on Genuinely Infinite Structures

Authors: Florian Bruse, Marco Sälzer, and Martin Lange


Abstract
The modal μ-calculus can only express bisimulation-invariant properties. It is a simple consequence of Kleene’s Fixpoint Theorem that on structures with finite bisimulation quotients, the fixpoint iteration of any formula converges after finitely many steps. We show that the converse does not hold: we construct a word with an infinite bisimulation quotient that is locally regular, so that the iteration for any fixpoint formula of the modal μ-calculus on it converges after finitely many steps. This entails decidability of μ-calculus model-checking over this word. We also show that the reason for the discrepancy between infinite bisimulation quotients and transfinite fixpoint convergence lies in the fact that the μ-calculus can only express regular properties.
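
For readers who want to see the finite-state situation the abstract contrasts with, the Python sketch below runs Kleene iteration for the least fixpoint μX.(p ∨ ◇X) ("some successor path reaches p") on a finite transition system, starting from the empty set and unfolding until the approximants stabilise. The state space, transition relation and function name are assumptions of this note; the paper's genuinely infinite structures are of course not modelled here.

def lfp_eventually(states, trans, p):
    """Kleene iteration for the least fixpoint mu X. (p \/ <>X) on a
    finite transition system: start from the empty set and apply the
    formula's one-step function until it stabilises.  trans maps each
    state to its set of successors; p is the set of states where the
    atomic proposition holds."""
    X = set()
    while True:
        # one unfolding: a state enters the next approximant if it
        # satisfies p or has a successor already in X
        nxt = {s for s in states if s in p or trans[s] & X}
        if nxt == X:
            return X          # fixpoint reached after finitely many steps
        X = nxt

# A 4-state cycle with the proposition p holding only in state 3:
states = {0, 1, 2, 3}
trans = {0: {1}, 1: {2}, 2: {3}, 3: {0}}
print(sorted(lfp_eventually(states, trans, {3})))  # [0, 1, 2, 3]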

Cite as

Florian Bruse, Marco Sälzer, and Martin Lange. Finite Convergence of μ-Calculus Fixpoints on Genuinely Infinite Structures. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 24:1-24:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{bruse_et_al:LIPIcs.MFCS.2021.24,
  author =	{Bruse, Florian and S\"{a}lzer, Marco and Lange, Martin},
  title =	{{Finite Convergence of \mu-Calculus Fixpoints on Genuinely Infinite Structures}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{24:1--24:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.24},
  URN =		{urn:nbn:de:0030-drops-144643},
  doi =		{10.4230/LIPIcs.MFCS.2021.24},
  annote =	{Keywords: temporal logic, fixpoint iteration, bisimulation}
}
Document
Dots & Boxes Is PSPACE-Complete

Authors: Kevin Buchin, Mart Hagedoorn, Irina Kostitsyna, and Max van Mulken


Abstract
Exactly 20 years ago at MFCS, Demaine posed the open problem of whether the game of Dots & Boxes is PSPACE-complete. Dots & Boxes has been studied extensively, with, for instance, a chapter in Berlekamp et al.’s Winning Ways for Your Mathematical Plays, a whole book on the game (The Dots and Boxes Game: Sophisticated Child’s Play by Berlekamp), and numerous articles in the Games of No Chance series. While the game was known to be NP-hard, the question of its complexity remained open. We resolve this question, proving that the game is PSPACE-complete by a reduction from a game played on propositional formulas.

Cite as

Kevin Buchin, Mart Hagedoorn, Irina Kostitsyna, and Max van Mulken. Dots & Boxes Is PSPACE-Complete. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 25:1-25:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{buchin_et_al:LIPIcs.MFCS.2021.25,
  author =	{Buchin, Kevin and Hagedoorn, Mart and Kostitsyna, Irina and van Mulken, Max},
  title =	{{Dots \& Boxes Is PSPACE-Complete}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{25:1--25:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.25},
  URN =		{urn:nbn:de:0030-drops-144657},
  doi =		{10.4230/LIPIcs.MFCS.2021.25},
  annote =	{Keywords: Dots \& Boxes, PSPACE-complete, combinatorial game}
}
Document
Uncertain Curve Simplification

Authors: Kevin Buchin, Maarten Löffler, Aleksandr Popov, and Marcel Roeloffzen


Abstract
We study the problem of polygonal curve simplification under uncertainty, where instead of a sequence of exact points, each uncertain point is represented by a region which contains the (unknown) true location of the vertex. The regions we consider are disks, line segments, convex polygons, and discrete sets of points. We are interested in finding the shortest subsequence of uncertain points such that no matter what the true location of each uncertain point is, the resulting polygonal curve is a valid simplification of the original polygonal curve under the Hausdorff or the Fréchet distance. For both these distance measures, we present polynomial-time algorithms for this problem.

Cite as

Kevin Buchin, Maarten Löffler, Aleksandr Popov, and Marcel Roeloffzen. Uncertain Curve Simplification. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 26:1-26:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{buchin_et_al:LIPIcs.MFCS.2021.26,
  author =	{Buchin, Kevin and L\"{o}ffler, Maarten and Popov, Aleksandr and Roeloffzen, Marcel},
  title =	{{Uncertain Curve Simplification}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{26:1--26:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.26},
  URN =		{urn:nbn:de:0030-drops-144666},
  doi =		{10.4230/LIPIcs.MFCS.2021.26},
  annote =	{Keywords: Curves, Uncertainty, Simplification, Fr\'{e}chet Distance, Hausdorff Distance}
}
Document
Fractional Homomorphism, Weisfeiler-Leman Invariance, and the Sherali-Adams Hierarchy for the Constraint Satisfaction Problem

Authors: Silvia Butti and Víctor Dalmau


Abstract
Given a pair of graphs 𝐀 and 𝐁, the problems of deciding whether there exists either a homomorphism or an isomorphism from 𝐀 to 𝐁 have received a lot of attention. While graph homomorphism is known to be NP-complete, the complexity of the graph isomorphism problem is not fully understood. A well-known combinatorial heuristic for graph isomorphism is the Weisfeiler-Leman test together with its higher order variants. On the other hand, both problems can be reformulated as integer programs and various LP methods can be applied to obtain high-quality relaxations that can still be solved efficiently. We study so-called fractional relaxations of these programs in the more general context where 𝐀 and 𝐁 are not graphs but arbitrary relational structures. We give a combinatorial characterization of the Sherali-Adams hierarchy applied to the homomorphism problem in terms of fractional isomorphism. Collaterally, we also extend a number of known results from graph theory to give a characterization of the notion of fractional isomorphism for relational structures in terms of the Weisfeiler-Leman test, equitable partitions, and counting homomorphisms from trees. As a result, we obtain a description of the families of CSPs that are closed under Weisfeiler-Leman invariance in terms of their polymorphisms as well as decidability by the first level of the Sherali-Adams hierarchy.
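
The Weisfeiler-Leman test mentioned above is, in its one-dimensional form, ordinary colour refinement: vertices are repeatedly recoloured by the multiset of their neighbours' colours until the partition is equitable. The Python sketch below implements this textbook routine; the data representation, function name and example are assumptions of this note, and none of the paper's Sherali-Adams or fractional-isomorphism machinery appears here.

def color_refinement(adj):
    """One-dimensional Weisfeiler-Leman (colour refinement): refine
    vertex colours by the multiset of neighbouring colours until the
    partition stabilises.  adj: vertex -> set of neighbours."""
    colour = {v: 0 for v in adj}              # start with a single colour
    while True:
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        # canonically rename the signatures to small integers
        renaming = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {v: renaming[signature[v]] for v in adj}
        if new == colour:
            return colour                     # stable (equitable) partition
        colour = new

# A path on 4 vertices: endpoints and middle vertices get different colours.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(color_refinement(P4))  # {0: 0, 1: 1, 2: 1, 3: 0}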

Cite as

Silvia Butti and Víctor Dalmau. Fractional Homomorphism, Weisfeiler-Leman Invariance, and the Sherali-Adams Hierarchy for the Constraint Satisfaction Problem. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{butti_et_al:LIPIcs.MFCS.2021.27,
  author =	{Butti, Silvia and Dalmau, V{\'\i}ctor},
  title =	{{Fractional Homomorphism, Weisfeiler-Leman Invariance, and the Sherali-Adams Hierarchy for the Constraint Satisfaction Problem}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.27},
  URN =		{urn:nbn:de:0030-drops-144679},
  doi =		{10.4230/LIPIcs.MFCS.2021.27},
  annote =	{Keywords: Weisfeiler-Leman algorithm, Sherali-Adams hierarchy, Graph homomorphism, Constraint Satisfaction Problem}
}
Document
A Decidable Equivalence for a Turing-Complete, Distributed Model of Computation

Authors: Arnaldo Cesco and Roberto Gorrieri


Abstract
Place/Transition Petri nets with inhibitor arcs (PTI nets for short), which are a well-known Turing-complete, distributed model of computation, are equipped with a decidable, behavioral equivalence, called pti-place bisimilarity, that conservatively extends place bisimilarity defined over Place/Transition nets (without inhibitor arcs). We prove that pti-place bisimilarity is sensible, as it respects the causal semantics of PTI nets.

Cite as

Arnaldo Cesco and Roberto Gorrieri. A Decidable Equivalence for a Turing-Complete, Distributed Model of Computation. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 28:1-28:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{cesco_et_al:LIPIcs.MFCS.2021.28,
  author =	{Cesco, Arnaldo and Gorrieri, Roberto},
  title =	{{A Decidable Equivalence for a Turing-Complete, Distributed Model of Computation}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{28:1--28:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.28},
  URN =		{urn:nbn:de:0030-drops-144686},
  doi =		{10.4230/LIPIcs.MFCS.2021.28},
  annote =	{Keywords: Petri nets, Inhibitor arc, Behavioral equivalence, Bisimulation, Decidability}
}
Document
Black-Box Hypotheses and Lower Bounds

Authors: Brynmor K. Chapman and R. Ryan Williams


Abstract
What sort of code is so difficult to analyze that every potential analyst can discern essentially no information from the code, other than its input-output behavior? In their seminal work on program obfuscation, Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang (CRYPTO 2001) proposed the Black-Box Hypothesis, which roughly states that every property of Boolean functions which has an efficient "analyst" and is "code independent" can also be computed by an analyst that only has black-box access to the code. In their formulation of the Black-Box Hypothesis, the "analysts" are arbitrary randomized polynomial-time algorithms, and the "codes" are general (polynomial-size) circuits. If true, the Black-Box Hypothesis would immediately imply NP ⊄ BPP. We consider generalized forms of the Black-Box Hypothesis, where the set of "codes" 𝒞 and the set of "analysts" 𝒜 may correspond to other efficient models of computation, from more restricted models such as AC⁰ to more general models such as nondeterministic circuits. We show how lower bounds of the form 𝒞 ⊄ 𝒜 often imply a corresponding Black-Box Hypothesis for those respective codes and analysts. We investigate the possibility of "complete" problems for the Black-Box Hypothesis: problems in 𝒞 such that they are not in 𝒜 if and only if their corresponding Black-Box Hypothesis is true. Along the way, we prove an equivalence: for nondeterministic circuit classes 𝒞, the "𝒞-circuit satisfiability problem" is not in 𝒜 if and only if the Black-Box Hypothesis is true for analysts in 𝒜.

Cite as

Brynmor K. Chapman and R. Ryan Williams. Black-Box Hypotheses and Lower Bounds. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 29:1-29:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chapman_et_al:LIPIcs.MFCS.2021.29,
  author =	{Chapman, Brynmor K. and Williams, R. Ryan},
  title =	{{Black-Box Hypotheses and Lower Bounds}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{29:1--29:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.29},
  URN =		{urn:nbn:de:0030-drops-144698},
  doi =		{10.4230/LIPIcs.MFCS.2021.29},
  annote =	{Keywords: Black-Box hypothesis, circuit complexity, lower bounds}
}
Document
Geometry of Interaction for ZX-Diagrams

Authors: Kostia Chardonnet, Benoît Valiron, and Renaud Vilmart


Abstract
ZX-Calculus is a versatile graphical language for quantum computation equipped with an equational theory. Getting inspiration from Geometry of Interaction, in this paper we propose a token-machine-based asynchronous model of both pure ZX-Calculus and its extension to mixed processes. We also show how to connect this new semantics to the usual standard interpretation of ZX-diagrams. This model allows us to have a new look at what ZX-diagrams compute, and give a more local, operational view of the semantics of ZX-diagrams.

Cite as

Kostia Chardonnet, Benoît Valiron, and Renaud Vilmart. Geometry of Interaction for ZX-Diagrams. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 30:1-30:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chardonnet_et_al:LIPIcs.MFCS.2021.30,
  author =	{Chardonnet, Kostia and Valiron, Beno\^{i}t and Vilmart, Renaud},
  title =	{{Geometry of Interaction for ZX-Diagrams}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{30:1--30:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.30},
  URN =		{urn:nbn:de:0030-drops-144701},
  doi =		{10.4230/LIPIcs.MFCS.2021.30},
  annote =	{Keywords: Quantum Computation, Linear Logic, ZX-Calculus, Geometry of Interaction}
}
Document
Diameter Versus Certificate Complexity of Boolean Functions

Authors: Siddhesh Chaubal and Anna Gál


Abstract
In this paper, we introduce a measure of Boolean functions we call diameter, which captures the relationship between certificate complexity and several other measures of Boolean functions. Our measure can be viewed as a variation on alternating number, but while alternating number can be exponentially larger than certificate complexity, we show that diameter is always upper bounded by certificate complexity. We argue that estimating diameter may help to get improved bounds on certificate complexity in terms of sensitivity, and other measures. Previous results due to Lin and Zhang [Krishnamoorthy Dinesh and Jayalal Sarma, 2018] imply that s(f) ≥ Ω(n^{1/3}) for transitive functions with constant alternating number. We improve and extend this bound and prove that s(f) ≥ √n for transitive functions with constant alternating number, as well as for transitive functions with constant diameter. We also show that bs(f) ≥ Ω(n^{3/7}) for transitive functions under the weaker condition that the "minimum" diameter is constant. Furthermore, we prove that the log-rank conjecture holds for functions of the form f(x ⊕ y) for functions f with diameter bounded above by a polynomial of the logarithm of the Fourier sparsity of the function f.
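
For readers unfamiliar with the measures being compared, the brute-force Python sketch below evaluates sensitivity and certificate complexity of a tiny Boolean function directly from their definitions. The function names and the OR example are assumptions of this note; the diameter measure introduced in the paper is not implemented here.

from itertools import combinations, product

def sensitivity(f, n):
    """Max over all inputs x of the number of bit positions whose flip
    changes f(x).  Brute force over all 2^n inputs -- for tiny n only."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

def certificate_complexity(f, n):
    """Max over all inputs x of the size of a smallest set S of positions
    such that fixing x on S already forces the value f(x)."""
    best = 0
    inputs = list(product((0, 1), repeat=n))
    for x in inputs:
        for k in range(n + 1):
            if any(all(f(y) == f(x) for y in inputs
                       if all(y[i] == x[i] for i in S))
                   for S in combinations(range(n), k)):
                best = max(best, k)
                break
    return best

OR3 = lambda x: int(any(x))          # the 3-bit OR function
print(sensitivity(OR3, 3), certificate_complexity(OR3, 3))  # 3 3 (both equal n for OR_n)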

Cite as

Siddhesh Chaubal and Anna Gál. Diameter Versus Certificate Complexity of Boolean Functions. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 31:1-31:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chaubal_et_al:LIPIcs.MFCS.2021.31,
  author =	{Chaubal, Siddhesh and G\'{a}l, Anna},
  title =	{{Diameter Versus Certificate Complexity of Boolean Functions}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{31:1--31:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.31},
  URN =		{urn:nbn:de:0030-drops-144713},
  doi =		{10.4230/LIPIcs.MFCS.2021.31},
  annote =	{Keywords: Sensitivity Conjecture, Boolean Functions, Certificate Complexity, Block Sensitivity, Log-rank Conjecture, Alternating Number}
}
Document
Budgeted Dominating Sets in Uncertain Graphs

Authors: Keerti Choudhary, Avi Cohen, N. S. Narayanaswamy, David Peleg, and R. Vijayaragunathan


Abstract
We study the Budgeted Dominating Set (BDS) problem on uncertain graphs, namely, graphs with a probability distribution p associated with the edges, such that an edge e exists in the graph with probability p(e). The input to the problem consists of a vertex-weighted uncertain graph 𝒢 = (V, E, p, ω) and an integer budget (or solution size) k, and the objective is to compute a vertex set S of size k that maximizes the expected total domination (or total weight) of vertices in the closed neighborhood of S. We refer to the problem as the Probabilistic Budgeted Dominating Set (PBDS) problem. In this article, we present the following results on the complexity of the PBDS problem.
1) We show that the PBDS problem is NP-complete even when restricted to uncertain trees of diameter at most four. This is in sharp contrast with the well-known fact that the BDS problem is solvable in polynomial time in trees. We further show that PBDS is 𝖶[1]-hard for the budget parameter k, and under the Exponential Time Hypothesis it cannot be solved in n^{o(k)} time.
2) We show that if one is willing to settle for a (1-ε) approximation, then there exists a PTAS for PBDS on trees. Moreover, for the scenario of uniform edge-probabilities, the problem can be solved optimally in polynomial time.
3) We consider the parameterized complexity of the PBDS problem, and show that Uni-PBDS (where all edge probabilities are identical) is 𝖶[1]-hard for the parameter pathwidth. On the other hand, we show that it is FPT in the combined parameters of the budget k and the treewidth.
4) Finally, we extend some of our parameterized results to planar and apex-minor-free graphs.
Our first hardness proof (Thm. 1) makes use of the new problem of k-Subset Σ-Π Maximization (k-SPM), which we believe is of independent interest. We prove its NP-hardness by a reduction from the well-known k-SUM problem, presenting a close relationship between the two problems.
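
To fix the objective informally described above, the Python sketch below evaluates the expected total weight dominated by a candidate set S when every edge is present independently with its own probability. The encoding (edge probabilities keyed by frozensets) and the function name are assumptions of this note rather than the paper's notation, and none of its algorithms or hardness reductions are reproduced.

def expected_domination(prob, weight, S):
    """Expected total weight dominated by the vertex set S in an
    uncertain graph: each edge e exists independently with probability
    prob[e], and a vertex is dominated if it lies in S or has at least
    one present edge to S.  prob maps frozenset({u, v}) -> p(e);
    weight maps vertices to nonnegative weights."""
    S = set(S)
    total = 0.0
    for v in weight:
        if v in S:
            total += weight[v]
            continue
        # probability that no edge from v to S is present
        miss = 1.0
        for s in S:
            miss *= 1.0 - prob.get(frozenset({v, s}), 0.0)
        total += weight[v] * (1.0 - miss)
    return total

# A star with centre c and leaves 1..3, each edge present with probability 1/2:
prob = {frozenset({'c', i}): 0.5 for i in (1, 2, 3)}
weight = {'c': 1, 1: 1, 2: 1, 3: 1}
print(expected_domination(prob, weight, {'c'}))  # 1 + 3 * 0.5 = 2.5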

Cite as

Keerti Choudhary, Avi Cohen, N. S. Narayanaswamy, David Peleg, and R. Vijayaragunathan. Budgeted Dominating Sets in Uncertain Graphs. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 32:1-32:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{choudhary_et_al:LIPIcs.MFCS.2021.32,
  author =	{Choudhary, Keerti and Cohen, Avi and Narayanaswamy, N. S. and Peleg, David and Vijayaragunathan, R.},
  title =	{{Budgeted Dominating Sets in Uncertain Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{32:1--32:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.32},
  URN =		{urn:nbn:de:0030-drops-144723},
  doi =		{10.4230/LIPIcs.MFCS.2021.32},
  annote =	{Keywords: Uncertain graphs, Dominating set, NP-hard, PTAS, treewidth, planar graph}
}
Document
On the Complexity of the Escape Problem for Linear Dynamical Systems over Compact Semialgebraic Sets

Authors: Julian D'Costa, Engel Lefaucheux, Eike Neumann, Joël Ouaknine, and James Worrell


Abstract
We study the computational complexity of the Escape Problem for discrete-time linear dynamical systems over compact semialgebraic sets, or equivalently the Termination Problem for affine loops with compact semialgebraic guard sets. Consider the fragment of the theory of the reals consisting of negation-free ∃ ∀-sentences without strict inequalities. We derive several equivalent characterisations of the associated complexity class which demonstrate its robustness and illustrate its expressive power. We show that the Compact Escape Problem is complete for this class.

Cite as

Julian D'Costa, Engel Lefaucheux, Eike Neumann, Joël Ouaknine, and James Worrell. On the Complexity of the Escape Problem for Linear Dynamical Systems over Compact Semialgebraic Sets. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 33:1-33:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dcosta_et_al:LIPIcs.MFCS.2021.33,
  author =	{D'Costa, Julian and Lefaucheux, Engel and Neumann, Eike and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{On the Complexity of the Escape Problem for Linear Dynamical Systems over Compact Semialgebraic Sets}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{33:1--33:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.33},
  URN =		{urn:nbn:de:0030-drops-144734},
  doi =		{10.4230/LIPIcs.MFCS.2021.33},
  annote =	{Keywords: Discrete linear dynamical systems, Program termination, Compact semialgebraic sets, Theory of the reals}
}
Document
The Pseudo-Skolem Problem is Decidable

Authors: Julian D'Costa, Toghrul Karimov, Rupak Majumdar, Joël Ouaknine, Mahmoud Salamati, Sadegh Soudjani, and James Worrell


Abstract
We study fundamental decision problems on linear dynamical systems in discrete time. We focus on pseudo-orbits, the collection of trajectories of the dynamical system for which there is an arbitrarily small perturbation at each step. Pseudo-orbits are generalizations of orbits in the topological theory of dynamical systems. We study the pseudo-orbit problem, whether a state belongs to the pseudo-orbit of another state, and the pseudo-Skolem problem, whether a hyperplane is reachable by an ε-pseudo-orbit for every ε. These problems are analogous to the well-studied orbit problem and Skolem problem on unperturbed dynamical systems. Our main results show that the pseudo-orbit problem is decidable in polynomial time and the Skolem problem on pseudo-orbits is decidable. The former extends the seminal result of Kannan and Lipton from orbits to pseudo-orbits. The latter is in contrast to the Skolem problem for linear dynamical systems, which remains open for proper orbits.

Cite as

Julian D'Costa, Toghrul Karimov, Rupak Majumdar, Joël Ouaknine, Mahmoud Salamati, Sadegh Soudjani, and James Worrell. The Pseudo-Skolem Problem is Decidable. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 34:1-34:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dcosta_et_al:LIPIcs.MFCS.2021.34,
  author =	{D'Costa, Julian and Karimov, Toghrul and Majumdar, Rupak and Ouaknine, Jo\"{e}l and Salamati, Mahmoud and Soudjani, Sadegh and Worrell, James},
  title =	{{The Pseudo-Skolem Problem is Decidable}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{34:1--34:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.34},
  URN =		{urn:nbn:de:0030-drops-144742},
  doi =		{10.4230/LIPIcs.MFCS.2021.34},
  annote =	{Keywords: Pseudo-orbits, Orbit problem, Skolem problem, linear dynamical systems}
}
Document
A Recursion-Theoretic Characterization of the Probabilistic Class PP

Authors: Ugo Dal Lago, Reinhard Kahle, and Isabel Oitavem


Abstract
Probabilistic complexity classes, despite capturing the notion of feasibility, have escaped any treatment by the tools of so-called implicit-complexity. Their inherently semantic nature is of course a barrier to the characterization of classes like BPP or ZPP, but not all classes are semantic. In this paper, we introduce a recursion-theoretic characterization of the probabilistic class PP, using recursion schemata with pointers.

Cite as

Ugo Dal Lago, Reinhard Kahle, and Isabel Oitavem. A Recursion-Theoretic Characterization of the Probabilistic Class PP. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 35:1-35:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dallago_et_al:LIPIcs.MFCS.2021.35,
  author =	{Dal Lago, Ugo and Kahle, Reinhard and Oitavem, Isabel},
  title =	{{A Recursion-Theoretic Characterization of the Probabilistic Class PP}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{35:1--35:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.35},
  URN =		{urn:nbn:de:0030-drops-144754},
  doi =		{10.4230/LIPIcs.MFCS.2021.35},
  annote =	{Keywords: Implicit complexity, tree-recursion, probabilistic classes, polynomial time, PP}
}
Document
Parallel Polynomial Permanent Mod Powers of 2 and Shortest Disjoint Cycles

Authors: Samir Datta and Kishlaya Jaiswal


Abstract
We present a parallel algorithm for permanent mod 2^k of a matrix of univariate integer polynomials. It places the problem in ⨁L ⊆ NC². This extends the techniques of Valiant [Leslie G. Valiant, 1979], Braverman, Kulkarni and Roy [Mark Braverman et al., 2009] and Björklund and Husfeldt [Andreas Björklund and Thore Husfeldt, 2019] and yields a (randomized) parallel algorithm for shortest two disjoint paths improving upon the recent (randomized) polynomial time algorithm [Andreas Björklund and Thore Husfeldt, 2019]. We also recognize the disjoint paths problem as a special case of finding disjoint cycles, and present (randomized) parallel algorithms for finding a shortest cycle and shortest two disjoint cycles passing through any given fixed number of vertices or edges.
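
As background for the abstract above, the permanent itself can be computed sequentially for small matrices with Ryser's inclusion-exclusion formula, and reducing the result mod 2^k is then immediate. The Python sketch below is such a baseline; the example matrix and function name are assumptions of this note, and the paper's parallel ⊕L algorithm for matrices of polynomials is in no way reflected by it.

from itertools import combinations

def permanent(A):
    """Permanent of a square integer matrix via Ryser's
    inclusion-exclusion formula, in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** (n - r) * prod
    return total

A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print(permanent(A), permanent(A) % 2 ** 3)  # 3 3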

Cite as

Samir Datta and Kishlaya Jaiswal. Parallel Polynomial Permanent Mod Powers of 2 and Shortest Disjoint Cycles. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{datta_et_al:LIPIcs.MFCS.2021.36,
  author =	{Datta, Samir and Jaiswal, Kishlaya},
  title =	{{Parallel Polynomial Permanent Mod Powers of 2 and Shortest Disjoint Cycles}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.36},
  URN =		{urn:nbn:de:0030-drops-144763},
  doi =		{10.4230/LIPIcs.MFCS.2021.36},
  annote =	{Keywords: permanent mod powers of 2, parallel computation, graphs, shortest disjoint paths, shortest disjoint cycles}
}
Document
On the Relative Power of Linear Algebraic Approximations of Graph Isomorphism

Authors: Anuj Dawar and Danny Vagnozzi


Abstract
We compare the capabilities of two approaches to approximating graph isomorphism using linear algebraic methods: the invertible map tests (introduced by Dawar and Holm) and proof systems with algebraic rules, namely polynomial calculus, monomial calculus and Nullstellensatz calculus. In the case of fields of characteristic zero, these variants are all essentially equivalent to the Weisfeiler-Leman algorithms. In positive characteristic we show that the distinguishing power of the monomial calculus is no greater than the invertible map method by simulating the former in a fixed-point logic with solvability operators. In turn, we show that the distinctions made by this logic can be implemented in the Nullstellensatz calculus.

Cite as

Anuj Dawar and Danny Vagnozzi. On the Relative Power of Linear Algebraic Approximations of Graph Isomorphism. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 37:1-37:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dawar_et_al:LIPIcs.MFCS.2021.37,
  author =	{Dawar, Anuj and Vagnozzi, Danny},
  title =	{{On the Relative Power of Linear Algebraic Approximations of Graph Isomorphism}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{37:1--37:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.37},
  URN =		{urn:nbn:de:0030-drops-144774},
  doi =		{10.4230/LIPIcs.MFCS.2021.37},
  annote =	{Keywords: Graph isomorphism, proof complexity, invertible map tests}
}
Document
Maximum Cut on Interval Graphs of Interval Count Four Is NP-Complete

Authors: Celina M. H. de Figueiredo, Alexsander A. de Melo, Fabiano S. Oliveira, and Ana Silva


Abstract
The computational complexity of the MaxCut problem restricted to interval graphs has been open since the 1980s, being one of the problems proposed by Johnson in his Ongoing Guide to NP-completeness, and has been settled as NP-complete only recently by Adhikary, Bose, Mukherjee and Roy. On the other hand, many flawed proofs of polynomiality for MaxCut on the more restrictive class of unit/proper interval graphs (or graphs with interval count 1) have been presented over the years, and the classification of the problem is still not known. In this paper, we present the first NP-completeness proof for MaxCut when restricted to interval graphs with bounded interval count, namely graphs with interval count 4.

Cite as

Celina M. H. de Figueiredo, Alexsander A. de Melo, Fabiano S. Oliveira, and Ana Silva. Maximum Cut on Interval Graphs of Interval Count Four Is NP-Complete. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 38:1-38:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{defigueiredo_et_al:LIPIcs.MFCS.2021.38,
  author =	{de Figueiredo, Celina M. H. and de Melo, Alexsander A. and Oliveira, Fabiano S. and Silva, Ana},
  title =	{{Maximum Cut on Interval Graphs of Interval Count Four Is NP-Complete}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{38:1--38:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.38},
  URN =		{urn:nbn:de:0030-drops-144781},
  doi =		{10.4230/LIPIcs.MFCS.2021.38},
  annote =	{Keywords: maximum cut, interval graphs, interval lengths, interval count, NP-complete}
}
Document
Fuzzy Simultaneous Congruences

Authors: Max A. Deppert, Klaus Jansen, and Kim-Manuel Klein


Abstract
We introduce a very natural generalization of the well-known problem of simultaneous congruences. Instead of searching for a positive integer s that is specified by n fixed remainders modulo integer divisors a₁, …, a_n, we consider remainder intervals R₁, …, R_n such that s is feasible if and only if s is congruent to r_i modulo a_i for some remainder r_i in interval R_i for all i. This problem is a special case of a 2-stage integer program with only two variables per constraint, which is closely related to directed Diophantine approximation as well as the mixing set problem. We give a hardness result showing that the problem is NP-hard in general. By investigating the case of harmonic divisors, i.e., a_{i+1}/a_i is an integer for all i < n, which was heavily studied for the mixing set problem as well, we also answer a recent algorithmic question from the field of real-time systems. We present an algorithm to decide the feasibility of an instance in time 𝒪(n²) and we show that, if it exists, even the smallest feasible solution can be computed in strongly polynomial time 𝒪(n³).
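
Reading the problem statement directly: s is feasible when s mod a_i lies in the interval R_i for every i, and since feasibility depends only on s modulo lcm(a_1, …, a_n), a brute-force scan over one period suffices for tiny instances. The Python sketch below (Python 3.9+ for math.lcm) does exactly that; the function name and example instance are assumptions of this note, not the paper's 𝒪(n²) feasibility test or 𝒪(n³) algorithm for harmonic divisors.

from math import lcm

def smallest_feasible(divisors, intervals):
    """Brute-force search for the smallest positive integer s with
    lo_i <= s mod a_i <= hi_i for every i, where divisors = [a_1, ...]
    and intervals = [(lo_1, hi_1), ...] with 0 <= lo_i <= hi_i < a_i.
    Since s mod a_i depends only on s modulo lcm(a_1, ..., a_n), it is
    enough to try s = 1, ..., lcm.  Exponential in the input size in
    general -- a reference implementation of the problem statement only."""
    L = lcm(*divisors)
    for s in range(1, L + 1):
        if all(lo <= s % a <= hi for a, (lo, hi) in zip(divisors, intervals)):
            return s
    return None  # infeasible

# Divisors 4 and 6 with remainder intervals [1,2] and [3,4]:
print(smallest_feasible([4, 6], [(1, 2), (3, 4)]))  # 9 (9 mod 4 = 1, 9 mod 6 = 3)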

Cite as

Max A. Deppert, Klaus Jansen, and Kim-Manuel Klein. Fuzzy Simultaneous Congruences. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 39:1-39:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{deppert_et_al:LIPIcs.MFCS.2021.39,
  author =	{Deppert, Max A. and Jansen, Klaus and Klein, Kim-Manuel},
  title =	{{Fuzzy Simultaneous Congruences}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{39:1--39:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.39},
  URN =		{urn:nbn:de:0030-drops-144792},
  doi =		{10.4230/LIPIcs.MFCS.2021.39},
  annote =	{Keywords: Simultaneous congruences, Integer programming, Mixing Set, Real-time scheduling, Diophantine approximation}
}
Document
Pebble Transducers with Unary Output

Authors: Gaëtan Douéneau-Tabot


Abstract
Bojańczyk recently initiated an intensive study of deterministic pebble transducers, which are two-way automata that can drop marks (named "pebbles") on their input word, and produce an output word. They describe functions from words to words. Two natural restrictions of this definition have been investigated: marble transducers by Douéneau-Tabot et al., and comparison-free pebble transducers (that we rename here "blind transducers") by Nguyên et al. Here, we study the decidability of membership problems between the classes of functions computed by pebble, marble and blind transducers that produce a unary output. First, we show that pebble and marble transducers have the same expressive power when the outputs are unary (which is false over non-unary outputs). Then, we characterize 1-pebble transducers with unary output that describe a function computable by a blind transducer, and show that the membership problem is decidable. These results can be interpreted in terms of automated simplification of programs.

Cite as

Gaëtan Douéneau-Tabot. Pebble Transducers with Unary Output. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 40:1-40:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{doueneautabot:LIPIcs.MFCS.2021.40,
  author =	{Dou\'{e}neau-Tabot, Ga\"{e}tan},
  title =	{{Pebble Transducers with Unary Output}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{40:1--40:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.40},
  URN =		{urn:nbn:de:0030-drops-144805},
  doi =		{10.4230/LIPIcs.MFCS.2021.40},
  annote =	{Keywords: polyregular functions, pebble transducers, marble transducers, streaming string transducers, factorization forests}
}
Document
Graph Characterization of the Universal Theory of Relations

Authors: Amina Doumane


Abstract
The equational theory of relations can be characterized using graphs and homomorphisms. This result, found independently by Freyd and Scedrov and by Andréka and Bredikhin, shows that the equational theory of relations is decidable. In this paper, we extend this characterization to the whole universal first-order theory of relations. Using our characterization, we show that the positive universal fragment is also decidable.

Cite as

Amina Doumane. Graph Characterization of the Universal Theory of Relations. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{doumane:LIPIcs.MFCS.2021.41,
  author =	{Doumane, Amina},
  title =	{{Graph Characterization of the Universal Theory of Relations}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.41},
  URN =		{urn:nbn:de:0030-drops-144815},
  doi =		{10.4230/LIPIcs.MFCS.2021.41},
  annote =	{Keywords: Relation algebra, Graph homomorphism, Equational theories, First-order logic}
}
Document
Co-Degeneracy and Co-Treewidth: Using the Complement to Solve Dense Instances

Authors: Gabriel L. Duarte, Mateus de Oliveira Oliveira, and Uéverton S. Souza


Abstract
Clique-width and treewidth are two of the most important and useful graph parameters, and several problems can be solved efficiently when restricted to graphs of bounded clique-width or treewidth. Bounded treewidth implies bounded clique-width, but not vice versa. Problems like Longest Cycle, Longest Path, MaxCut, Edge Dominating Set, and Graph Coloring are fixed-parameter tractable when parameterized by the treewidth, but they cannot be solved in FPT time when parameterized by the clique-width unless FPT = W[1], as shown by Fomin, Golovach, Lokshtanov, and Saurabh [SIAM J. Comput. 2010, SIAM J. Comput. 2014]. For a given problem that is fixed-parameter tractable when parameterized by treewidth, but intractable when parameterized by clique-width, there may exist infinite families of instances of bounded clique-width and unbounded treewidth where the problem can be solved efficiently. In this work, we initiate a systematic study of the parameters co-treewidth (the treewidth of the complement of the input graph) and co-degeneracy (the degeneracy of the complement of the input graph). We show that Longest Cycle, Longest Path, and Edge Dominating Set are FPT when parameterized by co-degeneracy. On the other hand, Graph Coloring is para-NP-complete when parameterized by co-degeneracy but FPT when parameterized by the co-treewidth. Concerning MaxCut, we give an FPT algorithm parameterized by co-treewidth, while we leave open the complexity of the problem parameterized by co-degeneracy. Additionally, we show that Precoloring Extension is fixed-parameter tractable when parameterized by co-treewidth, while this problem is known to be W[1]-hard when parameterized by treewidth. These results give evidence that co-treewidth is a useful width parameter for handling dense instances of problems for which an FPT algorithm for clique-width is unlikely to exist. Finally, we develop an algorithmic framework for co-degeneracy based on the notion of Bondy-Chvátal closure.
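
Since co-degeneracy is by definition the degeneracy of the complement graph, it can be evaluated naively by building the complement and repeatedly peeling off a minimum-degree vertex, which is perfectly adequate on the dense instances the parameter targets. The Python sketch below does this; the function names and the clique example are assumptions of this note and carry none of the paper's FPT algorithms.

def degeneracy(adj):
    """Degeneracy of a graph, computed by repeatedly removing a vertex of
    minimum degree; the largest minimum degree seen is the degeneracy.
    adj: vertex -> set of neighbours; quadratic and unoptimised."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

def co_degeneracy(adj):
    """Degeneracy of the complement graph, i.e. the co-degeneracy
    parameter discussed above, via an explicit complement."""
    vertices = set(adj)
    comp = {v: vertices - adj[v] - {v} for v in vertices}
    return degeneracy(comp)

# A clique on 5 vertices: degeneracy 4, but co-degeneracy 0.
K5 = {v: set(range(5)) - {v} for v in range(5)}
print(degeneracy(K5), co_degeneracy(K5))  # 4 0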

Cite as

Gabriel L. Duarte, Mateus de Oliveira Oliveira, and Uéverton S. Souza. Co-Degeneracy and Co-Treewidth: Using the Complement to Solve Dense Instances. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 42:1-42:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{duarte_et_al:LIPIcs.MFCS.2021.42,
  author =	{Duarte, Gabriel L. and de Oliveira Oliveira, Mateus and Souza, U\'{e}verton S.},
  title =	{{Co-Degeneracy and Co-Treewidth: Using the Complement to Solve Dense Instances}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{42:1--42:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.42},
  URN =		{urn:nbn:de:0030-drops-144828},
  doi =		{10.4230/LIPIcs.MFCS.2021.42},
  annote =	{Keywords: FPT, treewidth, degeneracy, complement graph, Bondy-Chv\'{a}tal closure}
}
Document
Isometric Embeddings in Trees and Their Use in Distance Problems

Authors: Guillaume Ducoffe


Abstract
We present powerful techniques for computing the diameter, all the eccentricities, and other related distance problems on some geometric graph classes, by exploiting their "tree-likeness" properties. We illustrate the usefulness of our approach as follows:
- We propose a subquadratic-time algorithm for computing all eccentricities on partial cubes of bounded lattice dimension and isometric dimension O(n^{0.5-ε}). This is one of the first positive results achieved for the diameter problem on a subclass of partial cubes beyond median graphs.
- Then, we obtain almost linear-time algorithms for computing all eccentricities in some classes of face-regular plane graphs, including benzenoid systems, with applications to chemistry. Previously, only a linear-time algorithm for computing the diameter and the center was known (and an Õ(n^{5/3})-time algorithm for computing all the eccentricities).
- We also present an almost linear-time algorithm for computing the eccentricities in a polygon graph with an additive one-sided error of at most 2.
- Finally, on any cube-free median graph, we can compute its absolute center in almost linear time. Independently from this work, Bergé and Habib have recently presented a linear-time algorithm for computing all eccentricities in this graph class (LAGOS'21), which also implies a linear-time algorithm for the absolute center problem.
Our strategy here consists in exploiting the existence of some embeddings of these graphs in either a system or a product of trees, or in a single tree but where each vertex of the graph is embedded in a subset of nodes. While this may look like a natural idea, the way it can be done efficiently, which is our main technical contribution in the paper, is surprisingly intricate.

Cite as

Guillaume Ducoffe. Isometric Embeddings in Trees and Their Use in Distance Problems. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 43:1-43:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{ducoffe:LIPIcs.MFCS.2021.43,
  author =	{Ducoffe, Guillaume},
  title =	{{Isometric Embeddings in Trees and Their Use in Distance Problems}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{43:1--43:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.43},
  URN =		{urn:nbn:de:0030-drops-144835},
  doi =		{10.4230/LIPIcs.MFCS.2021.43},
  annote =	{Keywords: Tree embeddings, Range queries, Centroid decomposition, Heavy-path decomposition, Diameter, Radius and all Eccentricities computations}
}
Document
On Computing the Average Distance for Some Chordal-Like Graphs

Authors: Guillaume Ducoffe


Abstract
The Wiener index of a graph G is the sum of all its distances. Up to renormalization, it is also the average distance in G. The problem of computing this parameter has different applications in chemistry and networks. We here study when it can be done in truly subquadratic time (in the size n+m of the input) on n-vertex m-edge graphs. Our main result is a complete answer to this question, assuming the Strong Exponential-Time Hypothesis (SETH), for all the hereditary subclasses of chordal graphs. Interestingly, the exact same result also holds for the diameter problem. The case of non-hereditary chordal subclasses happens to be more challenging. For the chordal Helly graphs we propose an intricate Õ(m^{3/2})-time algorithm for computing the Wiener index, where m denotes the number of edges. We complete our results with the first known linear-time algorithm for this problem on the dually chordal graphs. The former algorithm also computes the median set.
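
A natural baseline for the problem above is the textbook computation of the Wiener index by one BFS per vertex, which costs O(n(n+m)) time on an unweighted connected graph; the paper asks when one can do truly subquadratically better in n+m. The Python sketch below implements that baseline; the function name and the path example are assumptions of this note, not the paper's chordal-graph techniques.

from collections import deque

def wiener_index(adj):
    """Sum of the distances over all unordered pairs of vertices of a
    connected unweighted graph, computed by a BFS from every vertex.
    adj: vertex -> set of neighbours."""
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2   # every pair was counted from both endpoints

# Path on 4 vertices: distances 1+2+3 + 1+2 + 1 = 10.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(wiener_index(P4))  # 10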

Cite as

Guillaume Ducoffe. On Computing the Average Distance for Some Chordal-Like Graphs. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 44:1-44:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{ducoffe:LIPIcs.MFCS.2021.44,
  author =	{Ducoffe, Guillaume},
  title =	{{On Computing the Average Distance for Some Chordal-Like Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{44:1--44:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.44},
  URN =		{urn:nbn:de:0030-drops-144841},
  doi =		{10.4230/LIPIcs.MFCS.2021.44},
  annote =	{Keywords: Wiener index, Graph diameter, Hardness in P, Chordal graphs, Helly graphs}
}
Document
A Cubic Vertex-Kernel for Trivially Perfect Editing

Authors: Maël Dumas, Anthony Perez, and Ioan Todinca


Abstract
We consider the Trivially Perfect Editing problem, where one is given an undirected graph G = (V,E) and a parameter k ∈ ℕ and seeks to edit (add or delete) at most k edges from G to obtain a trivially perfect graph. The related Trivially Perfect Completion and Trivially Perfect Deletion problems are obtained by only allowing edge additions or edge deletions, respectively. Trivially perfect graphs are both chordal graphs and cographs, and have applications related to the tree-depth width parameter and to social network analysis. All variants of the problem are known to be NP-complete [Burzyn et al., 2006; James Nastos and Yong Gao, 2013] and to admit so-called polynomial kernels [Pål Grønås Drange and Michał Pilipczuk, 2018; Jiong Guo, 2007]. More precisely, the existence of an O(k³) vertex-kernel for Trivially Perfect Completion was announced by Guo [Jiong Guo, 2007] but without a stand-alone proof. More recently, Drange and Pilipczuk [Pål Grønås Drange and Michał Pilipczuk, 2018] provided O(k⁷) vertex-kernels for these problems and left open the existence of cubic vertex-kernels. In this work, we answer this question positively for all three variants of the problem.

Cite as

Maël Dumas, Anthony Perez, and Ioan Todinca. A Cubic Vertex-Kernel for Trivially Perfect Editing. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{dumas_et_al:LIPIcs.MFCS.2021.45,
  author =	{Dumas, Ma\"{e}l and Perez, Anthony and Todinca, Ioan},
  title =	{{A Cubic Vertex-Kernel for Trivially Perfect Editing}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.45},
  URN =		{urn:nbn:de:0030-drops-144851},
  doi =		{10.4230/LIPIcs.MFCS.2021.45},
  annote =	{Keywords: Parameterized complexity, kernelization algorithms, graph modification, trivially perfect graphs}
}
Document
Lower Bounds on Avoiding Thresholds

Authors: Robert Ferens, Marek Szykuła, and Vojtěch Vorel


Abstract
For a DFA, a word avoids a subset of states if, after reading that word, the automaton cannot be in any state from the subset, regardless of its initial state. A subset that admits an avoiding word is avoidable. The k-avoiding threshold of a DFA is the smallest number such that every avoidable subset of size k can be avoided with a word no longer than that number. We study the problem of determining the maximum possible k-avoiding thresholds. For every fixed k ≥ 1, we show a general construction of strongly connected DFAs with n states and the k-avoiding threshold in Θ(n^k). This meets the known upper bound for k ≥ 3. For k = 1 and k = 2, the known upper bounds are respectively in 𝒪(n²) and in 𝒪(n³). For k = 1, we show that 2n-3 is attainable for every number of states n in the class of strongly connected synchronizing binary DFAs, which is conjectured to be best possible in the class of all DFAs for n ≥ 8. For k = 2, we show that the conjectured solution for k = 1 (an upper bound in 𝒪(n)) also implies a tight upper bound in 𝒪(n²) on the 2-avoiding threshold. Finally, we discuss the possibility of using k-avoiding thresholds of synchronizing automata to improve upper bounds on the length of the shortest reset words.
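
As a small illustration of the central definition (not of the paper's constructions), the following Python sketch checks whether a given word avoids a given subset of states of a complete DFA; the transition-table format and the toy automaton are hypothetical.

# A word avoids a subset S of states if, from every possible initial state,
# reading the word leaves the DFA outside S.
def avoids(delta, states, word, subset):
    """delta: dict (state, letter) -> state; returns True iff `word` avoids `subset`."""
    for q in states:
        for a in word:
            q = delta[(q, a)]
        if q in subset:
            return False
    return True

# Toy 3-state DFA over the alphabet {a, b}.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 2, (1, 'b'): 0, (2, 'a'): 2, (2, 'b'): 0}
print(avoids(delta, {0, 1, 2}, "b", {1, 2}))  # True: 'b' sends every state to 0
print(avoids(delta, {0, 1, 2}, "a", {2}))     # False: reading 'a' from state 1 or 2 lands in 2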

Cite as

Robert Ferens, Marek Szykuła, and Vojtěch Vorel. Lower Bounds on Avoiding Thresholds. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{ferens_et_al:LIPIcs.MFCS.2021.46,
  author =	{Ferens, Robert and Szyku{\l}a, Marek and Vorel, Vojt\v{e}ch},
  title =	{{Lower Bounds on Avoiding Thresholds}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.46},
  URN =		{urn:nbn:de:0030-drops-144869},
  doi =		{10.4230/LIPIcs.MFCS.2021.46},
  annote =	{Keywords: avoiding word, \v{C}ern\'{y} conjecture, rank conjecture, reset threshold, reset word, synchronizing automaton, synchronizing word}
}
Document
HyperLTL Satisfiability Is Σ₁¹-Complete, HyperCTL* Satisfiability Is Σ₁²-Complete

Authors: Marie Fortin, Louwe B. Kuijer, Patrick Totzke, and Martin Zimmermann


Abstract
Temporal logics for the specification of information-flow properties are able to express relations between multiple executions of a system. The two most important such logics are HyperLTL and HyperCTL*, which generalise LTL and CTL* by trace quantification. It is known that this expressiveness comes at a price, i.e. satisfiability is undecidable for both logics. In this paper we settle the exact complexity of these problems, showing that both are in fact highly undecidable: we prove that HyperLTL satisfiability is Σ₁¹-complete and HyperCTL* satisfiability is Σ₁²-complete. These are significant increases over the previously known lower bounds and the first upper bounds. To prove Σ₁²-membership for HyperCTL*, we prove that every satisfiable HyperCTL* sentence has a model that is equinumerous to the continuum, the first upper bound of this kind. We prove this bound to be tight. Finally, we show that the membership problem for every level of the HyperLTL quantifier alternation hierarchy is Π₁¹-complete.

Cite as

Marie Fortin, Louwe B. Kuijer, Patrick Totzke, and Martin Zimmermann. HyperLTL Satisfiability Is Σ₁¹-Complete, HyperCTL* Satisfiability Is Σ₁²-Complete. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 47:1-47:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{fortin_et_al:LIPIcs.MFCS.2021.47,
  author =	{Fortin, Marie and Kuijer, Louwe B. and Totzke, Patrick and Zimmermann, Martin},
  title =	{{HyperLTL Satisfiability Is \Sigma₁¹-Complete, HyperCTL* Satisfiability Is \Sigma₁²-Complete}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{47:1--47:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.47},
  URN =		{urn:nbn:de:0030-drops-144870},
  doi =		{10.4230/LIPIcs.MFCS.2021.47},
  annote =	{Keywords: HyperLTL, HyperCTL*, Satisfiability, Analytical Hierarchy}
}
Document
Matching Patterns with Variables Under Hamming Distance

Authors: Paweł Gawrychowski, Florin Manea, and Stefan Siemer


Abstract
A pattern α is a string of variables and terminal letters. We say that α matches a word w, consisting only of terminal letters, if w can be obtained by replacing the variables of α by terminal words. The matching problem, i.e., deciding whether a given pattern matches a given word, has been heavily investigated: it is NP-complete in general, but can be solved efficiently for classes of patterns with restricted structure. In this paper, we approach this problem in a generalized setting, by considering approximate pattern matching under Hamming distance. More precisely, we are interested in the minimum Hamming distance between w and any word u obtained by replacing the variables of α by terminal words. Firstly, we address the class of regular patterns (in which no variable occurs twice) and propose efficient algorithms for this problem, as well as matching conditional lower bounds. We show that the problem can still be solved efficiently if we allow repeated variables, but restrict the way the different variables can be interleaved according to a locality parameter. However, as soon as we allow a variable to occur more than once and its occurrences can be interleaved arbitrarily with those of other variables, even if none of them occurs more than once, the problem becomes intractable.
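
To pin down the problem statement, here is a toy brute force for the special case of regular patterns (each variable occurs once). It assumes, for illustration only, that variables may be substituted by possibly empty terminal words; it is unrelated to the paper's efficient algorithms and lower bounds.

# For a regular pattern, variable positions can always be made to match, so
# only terminal positions may contribute mismatches; we try all ways to
# distribute the remaining length among the variables.
from itertools import product

def min_hamming_regular(pattern, w, variables):
    """pattern: sequence of symbols; `variables` is the set of variable symbols."""
    var_positions = [i for i, s in enumerate(pattern) if s in variables]
    free = len(w) - (len(pattern) - len(var_positions))  # letters left for the variables
    if free < 0:
        return None
    best = None
    for lengths in product(range(free + 1), repeat=len(var_positions)):
        if sum(lengths) != free:
            continue
        mism, pos, k = 0, 0, 0
        for s in pattern:
            if s in variables:
                pos += lengths[k]; k += 1
            else:
                mism += (w[pos] != s); pos += 1
        best = mism if best is None else min(best, mism)
    return best

# Pattern x a b y against w = "cab": choosing x = "c" and y = "" gives distance 0.
print(min_hamming_regular("xaby", "cab", {"x", "y"}))  # -> 0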

Cite as

Paweł Gawrychowski, Florin Manea, and Stefan Siemer. Matching Patterns with Variables Under Hamming Distance. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 48:1-48:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.MFCS.2021.48,
  author =	{Gawrychowski, Pawe{\l} and Manea, Florin and Siemer, Stefan},
  title =	{{Matching Patterns with Variables Under Hamming Distance}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{48:1--48:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.48},
  URN =		{urn:nbn:de:0030-drops-144886},
  doi =		{10.4230/LIPIcs.MFCS.2021.48},
  annote =	{Keywords: Pattern with variables, Matching algorithms, Hamming distance, Conditional lower bounds, Patterns with structural restrictions}
}
Document
Keyboards as a New Model of Computation

Authors: Yoan Géran, Bastien Laboureix, Corto Mascle, and Valentin D. Richard


Abstract
We introduce a new formalisation of language computation, called keyboards. We consider a set of atomic operations (writing a letter, erasing a letter, going to the right or to the left) and we define a keyboard as a set of finite sequences of such operations, called keys. The generated language is the set of words obtained by applying some non-empty sequence of those keys. Unlike classical models of computation, every key can be applied anytime. We define various classes of languages based on different sets of atomic operations, and compare their expressive powers. We also compare them to rational, context-free and context-sensitive languages. We obtain a strict hierarchy of classes, whose expressiveness is orthogonal to that of the aforementioned classical models. We also study closure properties of those classes, as well as fundamental complexity problems on keyboards.
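
The following Python sketch simulates one plausible reading of the atomic operations on a configuration consisting of a word and a cursor; the precise semantics (in particular at the boundaries) are fixed in the paper, so this is only an informal illustration of how a key transforms a word, with a hypothetical two-letter alphabet.

# Apply a key (a finite sequence of atomic operations) to a word with a cursor.
def apply_key(word, key, alphabet={"a", "b"}):
    buf, cur = list(word), len(word)        # cursor starts at the right end
    for op in key:
        if op in alphabet:                  # write: insert the letter at the cursor
            buf.insert(cur, op); cur += 1
        elif op == "<-" and cur > 0:        # erase: delete the letter left of the cursor
            del buf[cur - 1]; cur -= 1
        elif op == "L" and cur > 0:         # move left
            cur -= 1
        elif op == "R" and cur < len(buf):  # move right
            cur += 1
    return "".join(buf), cur

# Applying the key (a, L, b) to the empty word writes "a", steps left, writes "b".
print(apply_key("", ["a", "L", "b"]))  # -> ('ba', 1)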

Cite as

Yoan Géran, Bastien Laboureix, Corto Mascle, and Valentin D. Richard. Keyboards as a New Model of Computation. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 49:1-49:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{geran_et_al:LIPIcs.MFCS.2021.49,
  author =	{G\'{e}ran, Yoan and Laboureix, Bastien and Mascle, Corto and Richard, Valentin D.},
  title =	{{Keyboards as a New Model of Computation}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{49:1--49:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.49},
  URN =		{urn:nbn:de:0030-drops-144896},
  doi =		{10.4230/LIPIcs.MFCS.2021.49},
  annote =	{Keywords: formal languages, models of computation, automata theory}
}
Document
Quantum Speedups for Dynamic Programming on n-Dimensional Lattice Graphs

Authors: Adam Glos, Martins Kokainis, Ryuhei Mori, and Jevgēnijs Vihrovs


Abstract
Motivated by the quantum speedup for dynamic programming on the Boolean hypercube by Ambainis et al. (2019), we investigate which graphs admit a similar quantum advantage. In this paper, we examine a generalization of the Boolean hypercube graph, the n-dimensional lattice graph Q(D,n) with vertices in {0,1,…,D}ⁿ. We study the complexity of the following problem: given a subgraph G of Q(D,n) via query access to the edges, determine whether there is a path from 0ⁿ to Dⁿ. While the classical query complexity is Θ̃((D+1)ⁿ), we show a quantum algorithm with complexity Õ(T_Dⁿ), where T_D < D+1. The first few values of T_D are T₁ ≈ 1.817, T₂ ≈ 2.660, T₃ ≈ 3.529, T₄ ≈ 4.421, T₅ ≈ 5.332. We also prove that T_D ≥ (D+1)/e (here, e ≈ 2.718 is Euler’s number); thus, for general D, this algorithm does not provide, for example, a speedup polynomial in the size of the lattice. While the presented quantum algorithm is a natural generalization of the known quantum algorithm for D = 1 by Ambainis et al., the complexity analysis is rather involved. For the precise analysis, we use the saddle-point method, which is a common tool in analytic combinatorics, but has not been widely used in this field. We then show an implementation of this algorithm with time and space complexity poly(n)^{log n} T_Dⁿ in the QRAM model, and apply it to the Set Multicover problem. In this problem, m subsets of [n] are given, and the task is to find the smallest number of these subsets that cover each element of [n] at least D times. While the time complexity of the best known classical algorithm is O(m(D+1)ⁿ), the time complexity of our quantum algorithm is poly(m,n)^{log n} T_Dⁿ.
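
For orientation, here is a classical dynamic-programming baseline (in Python) for the reachability question stated above, under the assumption that the edges of Q(D,n) connect tuples differing by one in a single coordinate; it enumerates all (D+1)ⁿ vertices, mimics "query access" through a hypothetical edge_present callback, and is unrelated to the paper's quantum algorithm.

# Is D^n reachable from 0^n using only the lattice edges present in the subgraph?
from itertools import product

def reachable(D, n, edge_present):
    """edge_present(u, v) says whether the lattice edge u -> v is in the subgraph."""
    start, goal = (0,) * n, (D,) * n
    reach = {start}
    # process vertices in order of coordinate sum, so predecessors come first
    for v in sorted(product(range(D + 1), repeat=n), key=sum):
        if v in reach:
            continue
        for i in range(n):
            if v[i] > 0:
                u = v[:i] + (v[i] - 1,) + v[i + 1:]
                if u in reach and edge_present(u, v):
                    reach.add(v)
                    break
    return goal in reach

# Example for D = 2, n = 2: all lattice edges present except those entering (1, 1).
blocked = {(1, 1)}
print(reachable(2, 2, lambda u, v: v not in blocked))  # True: go around via (2,0) or (0,2)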

Cite as

Adam Glos, Martins Kokainis, Ryuhei Mori, and Jevgēnijs Vihrovs. Quantum Speedups for Dynamic Programming on n-Dimensional Lattice Graphs. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 50:1-50:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{glos_et_al:LIPIcs.MFCS.2021.50,
  author =	{Glos, Adam and Kokainis, Martins and Mori, Ryuhei and Vihrovs, Jevg\={e}nijs},
  title =	{{Quantum Speedups for Dynamic Programming on n-Dimensional Lattice Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{50:1--50:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.50},
  URN =		{urn:nbn:de:0030-drops-144901},
  doi =		{10.4230/LIPIcs.MFCS.2021.50},
  annote =	{Keywords: Quantum query complexity, Dynamic programming, Lattice graphs}
}
Document
A Note on the Join of Varieties of Monoids with LI

Authors: Nathan Grosshans


Abstract
In this note, we give a characterisation in terms of identities of the join of V with the variety of finite locally trivial semigroups LI for several well-known varieties of finite monoids V by using classical algebraic-automata-theoretic techniques. To achieve this, we use the new notion of essentially-V stamps defined by Grosshans, McKenzie and Segoufin and show that it actually coincides with the join of V and LI precisely when some natural condition on the variety of languages corresponding to V is satisfied. This work is, from a rather different angle, a kind of rediscovery of work done by J. C. Costa around 20 years ago: Costa’s work relies on advanced developments in profinite topology, whereas what is presented here essentially uses an algebraic, language-based approach.

Cite as

Nathan Grosshans. A Note on the Join of Varieties of Monoids with LI. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 51:1-51:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{grosshans:LIPIcs.MFCS.2021.51,
  author =	{Grosshans, Nathan},
  title =	{{A Note on the Join of Varieties of Monoids with LI}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{51:1--51:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.51},
  URN =		{urn:nbn:de:0030-drops-144918},
  doi =		{10.4230/LIPIcs.MFCS.2021.51},
  annote =	{Keywords: Varieties of monoids, join, LI}
}
Document
Optimal Regular Expressions for Palindromes of Given Length

Authors: Hermann Gruber and Markus Holzer


Abstract
The language P_n (P̃_n, respectively) consists of all words that are palindromes of length 2n (2n-1, respectively) over a fixed binary alphabet. We construct a regular expression that specifies P_n (P̃_n, respectively) of alphabetic width 4⋅2ⁿ-4 (3⋅2ⁿ-4, respectively) and show that this is optimal, that is, the expression has minimum alphabetic width among all expressions that describe P_n (P̃_n, respectively). To this end we give optimal expressions for the first k palindromes in lexicographic order of odd and even length, proving that the optimal bound is 2n+4(k-1)-2S₂(k-1) in case of odd length and 2n+3(k-1)-2S₂(k-1)-1 for even length, respectively. Here S₂(n) refers to the Hamming weight function, which denotes the number of ones in the binary expansion of the number n.
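
The following few lines of Python merely transcribe the bounds stated above, to make the role of the Hamming weight function S₂ concrete; the numerical values printed are illustrative only.

# S2(n) counts the ones in the binary expansion of n; the widths below are
# the claimed optimal alphabetic widths for the first k palindromes.
def S2(n):
    return bin(n).count("1")

def width_odd(n, k):
    return 2*n + 4*(k - 1) - 2*S2(k - 1)

def width_even(n, k):
    return 2*n + 3*(k - 1) - 2*S2(k - 1) - 1

print(S2(5))                            # 5 = 101 in binary -> 2
print(width_odd(3, 4), width_even(3, 4))  # sample evaluations of the two formulas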

Cite as

Hermann Gruber and Markus Holzer. Optimal Regular Expressions for Palindromes of Given Length. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{gruber_et_al:LIPIcs.MFCS.2021.52,
  author =	{Gruber, Hermann and Holzer, Markus},
  title =	{{Optimal Regular Expressions for Palindromes of Given Length}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.52},
  URN =		{urn:nbn:de:0030-drops-144921},
  doi =		{10.4230/LIPIcs.MFCS.2021.52},
  annote =	{Keywords: regular expression, descriptional complexity, lower bound, upper bound, recurrence, sum of digits}
}
Document
A Bit of Nondeterminism Makes Pushdown Automata Expressive and Succinct

Authors: Shibashis Guha, Ismaël Jecker, Karoliina Lehtinen, and Martin Zimmermann


Abstract
We study the expressiveness and succinctness of good-for-games pushdown automata (GFG-PDA) over finite words, that is, pushdown automata whose nondeterminism can be resolved based on the run constructed so far, but independently of the remainder of the input word. We prove that GFG-PDA recognise more languages than deterministic PDA (DPDA) but not all context-free languages (CFL). This class is orthogonal to unambiguous CFL. We further show that GFG-PDA can be exponentially more succinct than DPDA, while PDA can be double-exponentially more succinct than GFG-PDA. We also study GFGness in visibly pushdown automata (VPA), which enjoy better closure properties than PDA, and for which we show GFGness to be ExpTime-complete. GFG-VPA can be exponentially more succinct than deterministic VPA, while VPA can be exponentially more succinct than GFG-VPA. Both of these lower bounds are tight. Finally, we study the complexity of resolving nondeterminism in GFG-PDA. Every GFG-PDA has a positional resolver, a function that resolves nondeterminism and that depends only on the current configuration. Pushdown transducers are sufficient to implement the resolvers of GFG-VPA, but not those of GFG-PDA. GFG-PDA with finite-state resolvers are determinisable.

Cite as

Shibashis Guha, Ismaël Jecker, Karoliina Lehtinen, and Martin Zimmermann. A Bit of Nondeterminism Makes Pushdown Automata Expressive and Succinct. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{guha_et_al:LIPIcs.MFCS.2021.53,
  author =	{Guha, Shibashis and Jecker, Isma\"{e}l and Lehtinen, Karoliina and Zimmermann, Martin},
  title =	{{A Bit of Nondeterminism Makes Pushdown Automata Expressive and Succinct}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.53},
  URN =		{urn:nbn:de:0030-drops-144932},
  doi =		{10.4230/LIPIcs.MFCS.2021.53},
  annote =	{Keywords: Pushdown Automata, Good-for-games, Synthesis, Succintness}
}
Document
Perfect Forests in Graphs and Their Extensions

Authors: Gregory Gutin and Anders Yeo


Abstract
Let G be a graph on n vertices. For i ∈ {0,1} and a connected graph G, a spanning forest F of G is called an i-perfect forest if every tree in F is an induced subgraph of G and exactly i vertices of F have even degree (including zero). An i-perfect forest of G is proper if it has no vertices of degree zero. Scott (2001) showed that every connected graph with an even number of vertices contains a (proper) 0-perfect forest. We prove that one can find a 0-perfect forest with a minimum number of edges in polynomial time, but it is NP-hard to obtain a 0-perfect forest with a maximum number of edges. We also prove that for a prescribed edge e of G, it is NP-hard to obtain a 0-perfect forest containing e, but we can find a 0-perfect forest not containing e in polynomial time. It is easy to see that every graph with an odd number of vertices has a 1-perfect forest. This is not the case for proper 1-perfect forests. We give a characterization of when a connected graph has a proper 1-perfect forest.

Cite as

Gregory Gutin and Anders Yeo. Perfect Forests in Graphs and Their Extensions. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 54:1-54:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{gutin_et_al:LIPIcs.MFCS.2021.54,
  author =	{Gutin, Gregory and Yeo, Anders},
  title =	{{Perfect Forests in Graphs and Their Extensions}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{54:1--54:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.54},
  URN =		{urn:nbn:de:0030-drops-144947},
  doi =		{10.4230/LIPIcs.MFCS.2021.54},
  annote =	{Keywords: graphs, odd degree subgraphs, perfect forests, polynomial algorithms}
}
Document
On Deciding Linear Arithmetic Constraints Over p-adic Integers for All Primes

Authors: Christoph Haase and Alessio Mansutti


Abstract
Given an existential formula Φ of linear arithmetic over p-adic integers together with valuation constraints, we study the p-universality problem which consists of deciding whether Φ is satisfiable for all primes p, and the analogous problem for the closely related existential theory of Büchi arithmetic. Our main result is a coNEXP upper bound for both problems, together with a matching lower bound for existential Büchi arithmetic. On a technical level, our results are obtained from analysing properties of a certain class of p-automata, finite-state automata whose languages encode sets of tuples of natural numbers.

Cite as

Christoph Haase and Alessio Mansutti. On Deciding Linear Arithmetic Constraints Over p-adic Integers for All Primes. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 55:1-55:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{haase_et_al:LIPIcs.MFCS.2021.55,
  author =	{Haase, Christoph and Mansutti, Alessio},
  title =	{{On Deciding Linear Arithmetic Constraints Over p-adic Integers for All Primes}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{55:1--55:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.55},
  URN =		{urn:nbn:de:0030-drops-144953},
  doi =		{10.4230/LIPIcs.MFCS.2021.55},
  annote =	{Keywords: linear arithmetic, B\"{u}chi arithmetic, p-adic numbers, automatic structures}
}
Document
Obstructing Classification via Projection

Authors: Pantea Haghighatkhah, Wouter Meulemans, Bettina Speckmann, Jérôme Urhausen, and Kevin Verbeek


Abstract
Machine learning and data mining techniques are effective tools to classify large amounts of data. But they tend to preserve any inherent bias in the data, for example, with regard to gender or race. Removing such bias from data or the learned representations is quite challenging. In this paper we study a geometric problem which models a possible approach for bias removal. Our input is a set of points P in Euclidean space ℝ^d and each point is labeled with k binary-valued properties. A priori we assume that it is "easy" to classify the data according to each property. Our goal is to obstruct the classification according to one property by a suitable projection to a lower-dimensional Euclidean space ℝ^m (m < d), while classification according to all other properties remains easy. What it means for classification to be easy depends on the classification model used. We first consider classification by linear separability as employed by support vector machines. We use Kirchberger’s Theorem to show that, under certain conditions, a simple projection to ℝ^{d-1} suffices to eliminate the linear separability of one of the properties whilst maintaining the linear separability of the other properties. We also study the problem of maximizing the linear "inseparability" of the chosen property. Second, we consider more complex forms of separability and prove a connection between the number of projections required to obstruct classification and the Helly-type properties of such separabilities.

Cite as

Pantea Haghighatkhah, Wouter Meulemans, Bettina Speckmann, Jérôme Urhausen, and Kevin Verbeek. Obstructing Classification via Projection. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 56:1-56:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{haghighatkhah_et_al:LIPIcs.MFCS.2021.56,
  author =	{Haghighatkhah, Pantea and Meulemans, Wouter and Speckmann, Bettina and Urhausen, J\'{e}r\^{o}me and Verbeek, Kevin},
  title =	{{Obstructing Classification via Projection}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{56:1--56:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.56},
  URN =		{urn:nbn:de:0030-drops-144965},
  doi =		{10.4230/LIPIcs.MFCS.2021.56},
  annote =	{Keywords: Projection, classification, models of learning}
}
Document
Online Domination: The Value of Getting to Know All Your Neighbors

Authors: Hovhannes A. Harutyunyan, Denis Pankratov, and Jesse Racicot


Abstract
We study the dominating set problem in an online setting. An algorithm is required to guarantee competitiveness against an adversary that reveals the input graph one node at a time. When a node is revealed, the algorithm learns about the entire neighborhood of the node (including those nodes that have not yet been revealed). Furthermore, the adversary is required to keep the revealed portion of the graph connected at all times. We present an algorithm that achieves 2-competitiveness on trees. We also present algorithms that achieve 2.5-competitiveness on cactus graphs, (t-1)-competitiveness on K_{1,t}-free graphs, and Θ(√Δ)-competitiveness on graphs of maximum degree Δ. We show that all of those competitive ratios are tight. Then, we study several more general classes of graphs, such as threshold, bipartite planar, and series-parallel graphs, and show that they do not admit competitive algorithms (i.e., algorithms whose competitive ratio is independent of the input size). Previously, the dominating set problem was considered in a different input model (often together with the restriction that the input graph is always connected), where a vertex is revealed alongside its restricted neighborhood: those neighbors that are among already revealed vertices. Thus, conceptually, our results quantify the value of knowing the entire neighborhood at the time a vertex is revealed as compared to the restricted neighborhood. For instance, it was known in the restricted neighborhood model that 3-competitiveness is optimal for trees, whereas knowing the neighbors allows us to improve it to 2-competitiveness.

Cite as

Hovhannes A. Harutyunyan, Denis Pankratov, and Jesse Racicot. Online Domination: The Value of Getting to Know All Your Neighbors. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 57:1-57:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{harutyunyan_et_al:LIPIcs.MFCS.2021.57,
  author =	{Harutyunyan, Hovhannes A. and Pankratov, Denis and Racicot, Jesse},
  title =	{{Online Domination: The Value of Getting to Know All Your Neighbors}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{57:1--57:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.57},
  URN =		{urn:nbn:de:0030-drops-144979},
  doi =		{10.4230/LIPIcs.MFCS.2021.57},
  annote =	{Keywords: Dominating set, online algorithms, competitive ratio, trees, cactus graphs, bipartite planar graphs, series-parallel graphs, closed neighborhood}
}
Document
A Linear-Time Nominal μ-Calculus with Name Allocation

Authors: Daniel Hausmann, Stefan Milius, and Lutz Schröder


Abstract
Logics and automata models for languages over infinite alphabets, such as Freeze LTL and register automata, serve the verification of processes or documents with data. They relate tightly to formalisms over nominal sets, such as nondeterministic orbit-finite automata (NOFAs), where names play the role of data. Reasoning problems in such formalisms tend to be computationally hard. Name-binding nominal automata models such as regular nondeterministic nominal automata (RNNAs) have been shown to be computationally more tractable. In the present paper, we introduce a linear-time fixpoint logic Bar-μTL for finite words over an infinite alphabet, which features full negation and freeze quantification via name binding. We show by a nontrivial reduction to extended regular nondeterministic nominal automata that even though Bar-μTL allows unrestricted nondeterminism and unboundedly many registers, model checking Bar-μTL over RNNAs and satisfiability checking both have elementary complexity. For example, model checking is in 2ExpSpace, more precisely in parametrized ExpSpace, effectively with the number of registers as the parameter.

Cite as

Daniel Hausmann, Stefan Milius, and Lutz Schröder. A Linear-Time Nominal μ-Calculus with Name Allocation. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{hausmann_et_al:LIPIcs.MFCS.2021.58,
  author =	{Hausmann, Daniel and Milius, Stefan and Schr\"{o}der, Lutz},
  title =	{{A Linear-Time Nominal \mu-Calculus with Name Allocation}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.58},
  URN =		{urn:nbn:de:0030-drops-144987},
  doi =		{10.4230/LIPIcs.MFCS.2021.58},
  annote =	{Keywords: Model checking, linear-time logic, nominal sets}
}
Document
Test of Quantumness with Small-Depth Quantum Circuits

Authors: Shuichi Hirahara and François Le Gall


Abstract
Recently Brakerski, Christiano, Mahadev, Vazirani and Vidick (FOCS 2018) have shown how to construct a test of quantumness based on the learning with errors (LWE) assumption: a test that can be solved efficiently by a quantum computer but cannot be solved by a classical polynomial-time computer under the LWE assumption. This test has led to several cryptographic applications. In particular, it has been applied to producing certifiable randomness from a single untrusted quantum device, self-testing a single quantum device and device-independent quantum key distribution. In this paper, we show that this test of quantumness, and essentially all the above applications, can actually be implemented by a very weak class of quantum circuits: constant-depth quantum circuits combined with logarithmic-depth classical computation. This reveals novel complexity-theoretic properties of this fundamental test of quantumness and gives new concrete evidence of the superiority of small-depth quantum circuits over classical computation.

Cite as

Shuichi Hirahara and François Le Gall. Test of Quantumness with Small-Depth Quantum Circuits. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 59:1-59:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.MFCS.2021.59,
  author =	{Hirahara, Shuichi and Le Gall, Fran\c{c}ois},
  title =	{{Test of Quantumness with Small-Depth Quantum Circuits}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{59:1--59:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.59},
  URN =		{urn:nbn:de:0030-drops-144996},
  doi =		{10.4230/LIPIcs.MFCS.2021.59},
  annote =	{Keywords: Quantum computing, small-depth circuits, quantum cryptography}
}
Document
On Search Complexity of Discrete Logarithm

Authors: Pavel Hubáček and Jan Václavek


Abstract
In this work, we study the discrete logarithm problem in the context of TFNP - the complexity class of search problems with a syntactically guaranteed existence of solutions for all instances. Our main results establish that suitable variants of the discrete logarithm problem are complete for the complexity class PPP, respectively PWPP, i.e., the subclasses of TFNP capturing total search problems with a solution guaranteed by the pigeonhole principle, respectively the weak pigeonhole principle. Besides answering an open problem from the recent work of Sotiraki, Zampetakis, and Zirdelis (FOCS’18), our completeness results for PPP and PWPP have implications for the recent line of work proving conditional lower bounds for problems in TFNP under cryptographic assumptions. In particular, they highlight that any attempt at basing average-case hardness in subclasses of TFNP (other than PWPP and PPP) on the average-case hardness of the discrete logarithm problem must exploit its structural properties beyond what is necessary for constructions of collision-resistant hash functions. Additionally, our reductions provide new structural insights into the class PWPP by establishing two new PWPP-complete problems. First, the problem Dove, a relaxation of the PPP-complete problem Pigeon. Dove is the first PWPP-complete problem not defined in terms of an explicitly shrinking function. Second, the problem Claw, a total search problem capturing the computational complexity of breaking claw-free permutations. In the context of TFNP, the PWPP-completeness of Claw matches the known intrinsic relationship between collision-resistant hash functions and claw-free permutations established in the cryptographic literature.

Cite as

Pavel Hubáček and Jan Václavek. On Search Complexity of Discrete Logarithm. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 60:1-60:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{hubacek_et_al:LIPIcs.MFCS.2021.60,
  author =	{Hub\'{a}\v{c}ek, Pavel and V\'{a}clavek, Jan},
  title =	{{On Search Complexity of Discrete Logarithm}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{60:1--60:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.60},
  URN =		{urn:nbn:de:0030-drops-145006},
  doi =		{10.4230/LIPIcs.MFCS.2021.60},
  annote =	{Keywords: discrete logarithm, total search problems, completeness, TFNP, PPP, PWPP}
}
Document
A Homological Condition on Equational Unifiability

Authors: Mirai Ikebuchi


Abstract
Equational unification is the problem of solving an equation modulo equational axioms. In this paper, we provide a relationship between equational unification and homological algebra for equational theories. We will construct a functor from the category of sets of equational axioms to the category of abelian groups. Then, our main theorem gives a necessary condition of equational unifiability that is described in terms of abelian groups associated with equational axioms and homomorphisms between them. To construct our functor, we use a ringoid (a category enriched over the category of abelian groups) obtained from the equational axioms and a free resolution of a "good" module over the ringoid, which was developed by Malbos and Mimram.

Cite as

Mirai Ikebuchi. A Homological Condition on Equational Unifiability. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 61:1-61:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{ikebuchi:LIPIcs.MFCS.2021.61,
  author =	{Ikebuchi, Mirai},
  title =	{{A Homological Condition on Equational Unifiability}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{61:1--61:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.61},
  URN =		{urn:nbn:de:0030-drops-145010},
  doi =		{10.4230/LIPIcs.MFCS.2021.61},
  annote =	{Keywords: Equational unification, Homological algebra, equational theories}
}
Document
Ordered Fragments of First-Order Logic

Authors: Reijo Jaakkola


Abstract
Using a recently introduced algebraic framework for classifying fragments of first-order logic, we study the complexity of the satisfiability problem for several ordered fragments of first-order logic, which are obtained from the ordered logic and the fluted logic by modifying some of their syntactical restrictions.

Cite as

Reijo Jaakkola. Ordered Fragments of First-Order Logic. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 62:1-62:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jaakkola:LIPIcs.MFCS.2021.62,
  author =	{Jaakkola, Reijo},
  title =	{{Ordered Fragments of First-Order Logic}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{62:1--62:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.62},
  URN =		{urn:nbn:de:0030-drops-145025},
  doi =		{10.4230/LIPIcs.MFCS.2021.62},
  annote =	{Keywords: ordered logic, fluted logic, complexity, decidability}
}
Document
The Simplest Non-Regular Deterministic Context-Free Language

Authors: Petr Jančar and Jiří Šíma


Abstract
We introduce a new notion of 𝒞-simple problems for a class 𝒞 of decision problems (i.e. languages), w.r.t. a particular reduction. A problem is 𝒞-simple if it can be reduced to each problem in 𝒞. This can be viewed as a conceptual counterpart to 𝒞-hard problems to which all problems in 𝒞 reduce. Our concrete example is the class of non-regular deterministic context-free languages (DCFL'), with a truth-table reduction by Mealy machines. The main technical result is a proof that the DCFL' language L_# = {0^n1^n ∣ n ≥ 1} is DCFL'-simple, and can thus be viewed as one of the simplest languages in the class DCFL', in a precise sense. The notion of DCFL'-simple languages is nontrivial: e.g., the language L_R = {wcw^R∣ w ∈ {a,b}^*} is not DCFL'-simple. By describing an application in the area of neural networks (elaborated in another paper), we demonstrate that 𝒞-simple problems under suitable reductions can provide a tool for expanding the lower-bound results known for single problems to whole classes of problems.
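
For concreteness, here is a short membership test for the language L_# named above; the paper's contribution concerns its role under Mealy-machine truth-table reductions, not its recognition, so this is only a reminder of what the language is.

# L_# = {0^n 1^n : n >= 1}: a non-empty even-length word of zeros followed by ones.
def in_L_sharp(w):
    n = len(w) // 2
    return len(w) >= 2 and len(w) % 2 == 0 and w == "0" * n + "1" * n

print(in_L_sharp("0011"), in_L_sharp("0101"))  # True False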

Cite as

Petr Jančar and Jiří Šíma. The Simplest Non-Regular Deterministic Context-Free Language. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 63:1-63:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jancar_et_al:LIPIcs.MFCS.2021.63,
  author =	{Jan\v{c}ar, Petr and \v{S}{\'\i}ma, Ji\v{r}{\'\i}},
  title =	{{The Simplest Non-Regular Deterministic Context-Free Language}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{63:1--63:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.63},
  URN =		{urn:nbn:de:0030-drops-145037},
  doi =		{10.4230/LIPIcs.MFCS.2021.63},
  annote =	{Keywords: deterministic context-free language, truth-table reduction, Mealy automaton, pushdown automaton}
}
Document
On the Hardness of Compressing Weights

Authors: Bart M. P. Jansen, Shivesh K. Roy, and Michał Włodarczyk


Abstract
We investigate computational problems involving large weights through the lens of kernelization, which is a framework of polynomial-time preprocessing aimed at compressing the instance size. Our main focus is the weighted Clique problem, where we are given an edge-weighted graph and the goal is to detect a clique of total weight equal to a prescribed value. We show that the weighted variant, parameterized by the number of vertices n, is significantly harder than the unweighted problem by presenting an 𝒪(n^{3 - ε}) lower bound on the size of the kernel, under the assumption that NP ⊈ coNP/poly. This lower bound is essentially tight: we show that we can reduce the problem to the case with weights bounded by 2^𝒪(n), which yields a randomized kernel of 𝒪(n³) bits. We generalize these results to the weighted d-Uniform Hyperclique problem, Subset Sum, and weighted variants of Boolean Constraint Satisfaction Problems (CSPs). We also study weighted minimization problems and show that weight compression is easier when we only want to preserve the collection of optimal solutions. Namely, we show that for node-weighted Vertex Cover on bipartite graphs it is possible to maintain the set of optimal solutions using integer weights from the range [1, n], but if we want to maintain the ordering of the weights of all inclusion-minimal solutions, then weights as large as 2^Ω(n) are necessary.
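
As a reference point for the problem statement (and nothing more), here is an exponential-time brute force for the weighted Clique question described above, reading "total weight" as the sum of the weights of the clique's edges; the input format is a hypothetical choice made here, and the paper is about kernelization lower bounds, not algorithms.

# Does the graph contain a clique (of size >= 2) whose edge weights sum to `target`?
from itertools import combinations

def has_weight_t_clique(n, w, target):
    """w: dict mapping frozenset({u, v}) to the weight of edge uv (absent = no edge)."""
    for k in range(2, n + 1):
        for S in combinations(range(n), k):
            pairs = [frozenset(p) for p in combinations(S, 2)]
            if all(p in w for p in pairs) and sum(w[p] for p in pairs) == target:
                return True
    return False

w = {frozenset({0, 1}): 3, frozenset({1, 2}): 4, frozenset({0, 2}): 5}
print(has_weight_t_clique(3, w, 12))  # True: the triangle {0,1,2} has edge weight 3+4+5
print(has_weight_t_clique(3, w, 7))   # False: no clique has total edge weight 7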

Cite as

Bart M. P. Jansen, Shivesh K. Roy, and Michał Włodarczyk. On the Hardness of Compressing Weights. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 64:1-64:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.MFCS.2021.64,
  author =	{Jansen, Bart M. P. and Roy, Shivesh K. and W{\l}odarczyk, Micha{\l}},
  title =	{{On the Hardness of Compressing Weights}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{64:1--64:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.64},
  URN =		{urn:nbn:de:0030-drops-145049},
  doi =		{10.4230/LIPIcs.MFCS.2021.64},
  annote =	{Keywords: kernelization, compression, edge-weighted clique, constraint satisfaction problems}
}
Document
Griddings of Permutations and Hardness of Pattern Matching

Authors: Vít Jelínek, Michal Opler, and Jakub Pekárek


Abstract
We study the complexity of the decision problem known as Permutation Pattern Matching, or PPM. The input of PPM consists of a pair of permutations τ (the "text") and π (the "pattern"), and the goal is to decide whether τ contains π as a subpermutation. On general inputs, PPM is known to be NP-complete by a result of Bose, Buss and Lubiw. In this paper, we focus on restricted instances of PPM where the text is assumed to avoid a fixed (small) pattern σ; this restriction is known as Av(σ)-PPM. It has been previously shown that Av(σ)-PPM is polynomial for any σ of size at most 3, while it is NP-hard for any σ containing a monotone subsequence of length four. In this paper, we present a new hardness reduction which allows us to show, in a uniform way, that Av(σ)-PPM is hard for every σ of size at least 6, for every σ of size 5 except the symmetry class of 41352, as well as for every σ symmetric to one of the three permutations 4321, 4312 and 4231. Moreover, assuming the exponential time hypothesis, none of these hard cases of Av(σ)-PPM can be solved in time 2^o(n/log n). Previously, such a conditional lower bound was not known even for the unconstrained PPM problem. On the tractability side, we combine the CSP approach of Guillemot and Marx with the structural results of Huczynska and Vatter to show that for any monotone-griddable permutation class 𝒞, PPM is polynomial when the text is restricted to a permutation from 𝒞.
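
To fix the PPM problem statement, the following brute-force Python check decides whether a text permutation contains a pattern as a subpermutation (a subsequence whose entries are ordered the same way); it is exponential and unrelated to the paper's reductions, and the example permutations are arbitrary.

# tau contains pi iff some length-|pi| subsequence of tau is order-isomorphic to pi.
from itertools import combinations

def contains_pattern(tau, pi):
    k = len(pi)
    order = sorted(range(k), key=lambda i: pi[i])
    for idx in combinations(range(len(tau)), k):
        sub = [tau[i] for i in idx]
        # sub realizes pi iff its entries have the same relative order (same argsort)
        if sorted(range(k), key=lambda i: sub[i]) == order:
            return True
    return False

print(contains_pattern([3, 1, 4, 2, 5], [2, 1, 3]))  # True, e.g. the subsequence 3, 1, 4
print(contains_pattern([1, 2, 3, 4], [2, 1]))        # False: the text has no descent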

Cite as

Vít Jelínek, Michal Opler, and Jakub Pekárek. Griddings of Permutations and Hardness of Pattern Matching. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 65:1-65:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jelinek_et_al:LIPIcs.MFCS.2021.65,
  author =	{Jel{\'\i}nek, V{\'\i}t and Opler, Michal and Pek\'{a}rek, Jakub},
  title =	{{Griddings of Permutations and Hardness of Pattern Matching}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{65:1--65:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.65},
  URN =		{urn:nbn:de:0030-drops-145050},
  doi =		{10.4230/LIPIcs.MFCS.2021.65},
  annote =	{Keywords: Permutation, pattern matching, NP-hardness}
}
Document
Sets of Linear Forms Which Are Hard to Compute

Authors: Michael Kaminski and Igor E. Shparlinski


Abstract
We present a uniform description of sets of m linear forms in n variables over the field of rational numbers whose computation requires m(n - 1) additions. Our result is based on bounds on the height of the annihilating polynomials in the Perron theorem and an effective form of the Lindemann-Weierstrass theorem which is due to Sert (1999).

Cite as

Michael Kaminski and Igor E. Shparlinski. Sets of Linear Forms Which Are Hard to Compute. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 66:1-66:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kaminski_et_al:LIPIcs.MFCS.2021.66,
  author =	{Kaminski, Michael and Shparlinski, Igor E.},
  title =	{{Sets of Linear Forms Which Are Hard to Compute}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{66:1--66:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.66},
  URN =		{urn:nbn:de:0030-drops-145065},
  doi =		{10.4230/LIPIcs.MFCS.2021.66},
  annote =	{Keywords: Linear algorithms, additive complexity, effective Perron theorem, effective Lindemann-Weierstrass theorem}
}
Document
On Positivity and Minimality for Second-Order Holonomic Sequences

Authors: George Kenison, Oleksiy Klurman, Engel Lefaucheux, Florian Luca, Pieter Moree, Joël Ouaknine, Markus A. Whiteland, and James Worrell


Abstract
An infinite sequence ⟨u_n⟩_n of real numbers is holonomic (also known as P-recursive or P-finite) if it satisfies a linear recurrence relation with polynomial coefficients. Such a sequence is said to be positive if each u_n ≥ 0, and minimal if, given any other linearly independent sequence ⟨v_n⟩_n satisfying the same recurrence relation, the ratio u_n/v_n → 0 as n → ∞. In this paper we give a Turing reduction of the problem of deciding positivity of second-order holonomic sequences to that of deciding minimality of such sequences. More specifically, we give a procedure for determining positivity of second-order holonomic sequences that terminates in all but an exceptional number of cases, and we show that in these exceptional cases positivity can be determined using an oracle for deciding minimality.
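
To make "second-order holonomic" concrete, the sketch below iterates a toy recurrence with polynomial coefficients and inspects a finite prefix; the particular recurrence is an arbitrary example, and of course checking a prefix does not decide the Positivity Problem, which is the subject of the paper.

# A second-order holonomic sequence satisfies p(n) u_{n+2} + q(n) u_{n+1} + r(n) u_n = 0
# with polynomial coefficients p, q, r.
def holonomic_prefix(p, q, r, u0, u1, N):
    u = [u0, u1]
    for n in range(N - 2):
        u.append(-(q(n) * u[n + 1] + r(n) * u[n]) / p(n))
    return u

# Example: (n+2) u_{n+2} - (2n+3) u_{n+1} + (n+1) u_n = 0 with u_0 = u_1 = 1.
prefix = holonomic_prefix(lambda n: n + 2, lambda n: -(2 * n + 3), lambda n: n + 1, 1, 1, 10)
print(prefix)                       # the constant sequence of ones
print(all(x >= 0 for x in prefix))  # a positive prefix alone does not settle positivity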

Cite as

George Kenison, Oleksiy Klurman, Engel Lefaucheux, Florian Luca, Pieter Moree, Joël Ouaknine, Markus A. Whiteland, and James Worrell. On Positivity and Minimality for Second-Order Holonomic Sequences. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 67:1-67:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kenison_et_al:LIPIcs.MFCS.2021.67,
  author =	{Kenison, George and Klurman, Oleksiy and Lefaucheux, Engel and Luca, Florian and Moree, Pieter and Ouaknine, Jo\"{e}l and Whiteland, Markus A. and Worrell, James},
  title =	{{On Positivity and Minimality for Second-Order Holonomic Sequences}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{67:1--67:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.67},
  URN =		{urn:nbn:de:0030-drops-145071},
  doi =		{10.4230/LIPIcs.MFCS.2021.67},
  annote =	{Keywords: Holonomic sequences, Minimal solutions, Positivity Problem}
}
Document
Improved Upper Bounds for the Rigidity of Kronecker Products

Authors: Bohdan Kivva


Abstract
The rigidity of a matrix A for target rank r is the minimum number of entries of A that need to be changed in order to obtain a matrix of rank at most r. At MFCS'77, Valiant introduced matrix rigidity as a tool to prove circuit lower bounds for linear functions and since then this notion has received much attention and found applications in other areas of complexity theory. The problem of constructing an explicit family of matrices that are sufficiently rigid for Valiant’s reduction (Valiant-rigid) still remains open. Moreover, since 2017 most of the long-studied candidates have been shown not to be Valiant-rigid. Some of those former candidates for rigidity are Kronecker products of small matrices. In a recent paper (STOC'21), Alman gave a general non-rigidity result for such matrices: he showed that if an n × n matrix A (over any field) is a Kronecker product of d × d matrices M₁, …, M_k (so n = d^k, with d ≥ 2), then by changing only n^{1+ε} entries of A one can reduce its rank to ≤ n^{1-γ}, where 1/γ is roughly 2^d/ε². In this note we improve this result in two directions. First, we do not require the matrices M_i to have equal size. Second, we reduce 1/γ from exponential in d to roughly d^{3/2}/ε² (where d is the maximum size of the matrices M_i), and to nearly linear (roughly d/ε²) for matrices M_i of sizes within a constant factor of each other. As an application of our results we significantly expand the class of Hadamard matrices that are known not to be Valiant-rigid; these now include the Kronecker products of Paley-Hadamard matrices and Hadamard matrices of bounded size.
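
As a quick numerical aside (not part of the paper's argument), the following numpy snippet illustrates the Kronecker-product structure that underlies these results, using the standard fact that rank is multiplicative under Kronecker products; the matrices chosen are arbitrary examples.

# rank(M1 ⊗ M2) = rank(M1) * rank(M2)
import numpy as np

M1 = np.array([[1, 1], [1, -1]])                   # a 2x2 Hadamard matrix, rank 2
M2 = np.array([[1, 2, 3], [2, 4, 6], [0, 0, 1]])   # rank 2 (first two rows are dependent)
A = np.kron(M1, M2)                                # the 6x6 Kronecker product

print(np.linalg.matrix_rank(M1), np.linalg.matrix_rank(M2))  # 2 2
print(np.linalg.matrix_rank(A))                              # 4 = 2 * 2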

Cite as

Bohdan Kivva. Improved Upper Bounds for the Rigidity of Kronecker Products. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 68:1-68:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kivva:LIPIcs.MFCS.2021.68,
  author =	{Kivva, Bohdan},
  title =	{{Improved Upper Bounds for the Rigidity of Kronecker Products}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{68:1--68:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.68},
  URN =		{urn:nbn:de:0030-drops-145081},
  doi =		{10.4230/LIPIcs.MFCS.2021.68},
  annote =	{Keywords: Matrix rigidity, Kronecker product, Hadamard matrices}
}
Document
The Power of One Clean Qubit in Communication Complexity

Authors: Hartmut Klauck and Debbie Lim


Abstract
We study quantum communication protocols, in which the players' storage starts out in a state where one qubit is in a pure state, and all other qubits are totally mixed (i.e. in a random state), and no other storage is available (for messages or internal computations). This restriction on the available quantum memory has been studied extensively in the model of quantum circuits, and it is known that classically simulating quantum circuits operating on such memory is hard when the additive error of the simulation is exponentially small (in the input length), under the assumption that the polynomial hierarchy does not collapse. We study this setting in communication complexity. The goal is to consider larger additive error for simulation-hardness results, and to not use unproven assumptions. We define a complexity measure for this model that takes into account that standard error reduction techniques do not work here. We define a clocked and a semi-unclocked model, and describe efficient simulations between those. We characterize a one-way communication version of the model in terms of weakly unbounded error communication complexity. Our main result is that there is a quantum protocol using one clean qubit only and using O(log n) qubits of communication, such that any classical protocol simulating the acceptance behaviour of the quantum protocol within additive error 1/poly(n) needs communication Ω(n). We also describe a candidate problem, for which an exponential gap between the one-clean-qubit communication complexity and the randomized communication complexity is likely to hold, and hence a classical simulation of the one-clean-qubit model within constant additive error might be hard in communication complexity. We describe a geometrical conjecture that implies the lower bound.

Cite as

Hartmut Klauck and Debbie Lim. The Power of One Clean Qubit in Communication Complexity. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 69:1-69:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{klauck_et_al:LIPIcs.MFCS.2021.69,
  author =	{Klauck, Hartmut and Lim, Debbie},
  title =	{{The Power of One Clean Qubit in Communication Complexity}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{69:1--69:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.69},
  URN =		{urn:nbn:de:0030-drops-145097},
  doi =		{10.4230/LIPIcs.MFCS.2021.69},
  annote =	{Keywords: Quantum Complexity Theory, Quantum Communication Complexity, One Clean Qubit Model}
}
Document
Connecting Constructive Notions of Ordinals in Homotopy Type Theory

Authors: Nicolai Kraus, Fredrik Nordvall Forsberg, and Chuangjie Xu


Abstract
In classical set theory, there are many equivalent ways to introduce ordinals. In a constructive setting, however, the different notions split apart, with different advantages and disadvantages for each. We consider three different notions of ordinals in homotopy type theory, and show how they relate to each other: A notation system based on Cantor normal forms, a refined notion of Brouwer trees (inductively generated by zero, successor and countable limits), and wellfounded extensional orders. For Cantor normal forms, most properties are decidable, whereas for wellfounded extensional transitive orders, most are undecidable. Formulations for Brouwer trees are usually partially decidable. We demonstrate that all three notions have properties expected of ordinals: their order relations, although defined differently in each case, are all extensional and wellfounded, and the usual arithmetic operations can be defined in each case. We connect these notions by constructing structure preserving embeddings of Cantor normal forms into Brouwer trees, and of these in turn into wellfounded extensional orders. We have formalised most of our results in cubical Agda.
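
Purely to make the shape of the second notion concrete, here is a naive Python rendering of the three Brouwer-tree constructors (zero, successor, countable limit) together with ordinal addition defined by recursion on the second argument; it ignores the paper's refinement of Brouwer trees and all order-theoretic structure, and the class names are ad hoc.

# Brouwer trees: inductively generated by zero, successor and countable limits.
class Zero: pass
class Succ:
    def __init__(self, pred): self.pred = pred
class Limit:
    def __init__(self, seq): self.seq = seq      # seq: naturals -> trees

def from_nat(n):
    t = Zero()
    for _ in range(n):
        t = Succ(t)
    return t

omega = Limit(from_nat)                          # limit of 0, 1, 2, ...

def add(a, b):                                   # ordinal addition a + b
    if isinstance(b, Zero):  return a
    if isinstance(b, Succ):  return Succ(add(a, b.pred))
    return Limit(lambda n: add(a, b.seq(n)))

def to_nat(t):                                   # finite trees only
    return 0 if isinstance(t, Zero) else 1 + to_nat(t.pred)

print(to_nat(add(from_nat(2), from_nat(3))))     # 5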

Cite as

Nicolai Kraus, Fredrik Nordvall Forsberg, and Chuangjie Xu. Connecting Constructive Notions of Ordinals in Homotopy Type Theory. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 70:1-70:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kraus_et_al:LIPIcs.MFCS.2021.70,
  author =	{Kraus, Nicolai and Nordvall Forsberg, Fredrik and Xu, Chuangjie},
  title =	{{Connecting Constructive Notions of Ordinals in Homotopy Type Theory}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{70:1--70:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.70},
  URN =		{urn:nbn:de:0030-drops-145100},
  doi =		{10.4230/LIPIcs.MFCS.2021.70},
  annote =	{Keywords: Constructive ordinals, Cantor normal forms, Brouwer trees}
}
Document
Maximum Votes Pareto-Efficient Allocations via Swaps on a Social Network

Authors: Fu Li and Xiong Zheng


Abstract
In recent work, Gourvès, Lesca, and Wilczynski (IJCAI 17) propose a variant of the classic housing markets model in which the matching between agents and objects evolves through Pareto-improving swaps between pairs of agents who are adjacent in a social network. To explore the swap dynamics of their model, they pose several basic questions concerning the set of reachable matchings, and investigate the computational complexity of these questions when the graph structure of the social network is a star, path, or tree, or is unrestricted. We are interested in how to direct the agents to swap objects with each other in order to arrive at a reachable matching that is both efficient and most agreeable. In particular, we study the computational complexity of reaching a Pareto-efficient matching that maximizes the number of agents who prefer their match to their initial endowments. We consider various graph structures of the social network: path, star, tree, or being unrestricted. Additionally, we consider two assumptions regarding preference relations of agents: strict (ties among objects not allowed) or weak (ties among objects allowed). By designing two polynomial-time algorithms and two NP-hardness reductions, we resolve the complexity of all cases not yet known. Our main contributions include a polynomial-time algorithm for path networks with strict preferences and an NP-hardness result in a star network with weak preferences.
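
The elementary move of the swap dynamics described above can be stated in a few lines: two agents trade objects only if they are adjacent in the social network and each strictly prefers what the other currently holds. The preferences, network and endowments below are toy data invented for the example.

prefs = {                        # smaller rank = more preferred
    "a1": {"o1": 2, "o2": 1, "o3": 3},
    "a2": {"o1": 1, "o2": 3, "o3": 2},
    "a3": {"o1": 2, "o2": 3, "o3": 1},
}
network = {("a1", "a2"), ("a2", "a3")}           # a path a1 - a2 - a3
holding = {"a1": "o1", "a2": "o2", "a3": "o3"}   # initial endowments

def improving_swap(x, y):
    adjacent = (x, y) in network or (y, x) in network
    return (adjacent
            and prefs[x][holding[y]] < prefs[x][holding[x]]
            and prefs[y][holding[x]] < prefs[y][holding[y]])

print(improving_swap("a1", "a2"))    # True: a1 wants o2, a2 wants o1
print(improving_swap("a1", "a3"))    # False: not adjacent in the network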

Cite as

Fu Li and Xiong Zheng. Maximum Votes Pareto-Efficient Allocations via Swaps on a Social Network. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 71:1-71:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.MFCS.2021.71,
  author =	{Li, Fu and Zheng, Xiong},
  title =	{{Maximum Votes Pareto-Efficient Allocations via Swaps on a Social Network}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{71:1--71:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.71},
  URN =		{urn:nbn:de:0030-drops-145112},
  doi =		{10.4230/LIPIcs.MFCS.2021.71},
  annote =	{Keywords: Housing markets, Distributed process, Algorithms, Complexity}
}
Document
Finite Models for a Spatial Logic with Discrete and Topological Path Operators

Authors: Sven Linker, Fabio Papacchini, and Michele Sevegnani


Abstract
This paper analyses models of a spatial logic with path operators based on the class of neighbourhood spaces, also called pretopological or closure spaces, a generalisation of topological spaces. For this purpose, we distinguish two dimensions: the type of spaces on which models are built, and the type of allowed paths. For the spaces, we investigate general neighbourhood spaces and the subclass of quasi-discrete spaces, which closely resemble graphs. For the paths, we analyse the cases of quasi-discrete paths, which consist of an enumeration of points, and topological paths, based on the unit interval. We show that the logic admits finite models over quasi-discrete spaces, both with quasi-discrete and topological paths. Finally, we prove that for general neighbourhood spaces, the logic does not have the finite model property, either for quasi-discrete or topological paths.

Cite as

Sven Linker, Fabio Papacchini, and Michele Sevegnani. Finite Models for a Spatial Logic with Discrete and Topological Path Operators. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 72:1-72:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{linker_et_al:LIPIcs.MFCS.2021.72,
  author =	{Linker, Sven and Papacchini, Fabio and Sevegnani, Michele},
  title =	{{Finite Models for a Spatial Logic with Discrete and Topological Path Operators}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{72:1--72:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.72},
  URN =		{urn:nbn:de:0030-drops-145120},
  doi =		{10.4230/LIPIcs.MFCS.2021.72},
  annote =	{Keywords: spatial logic, topology, finite models}
}
Document
Recursive Backdoors for SAT

Authors: Nikolas Mählmann, Sebastian Siebertz, and Alexandre Vigny


Abstract
A strong backdoor in a formula φ of propositional logic to a tractable class C of formulas is a set B of variables of φ such that every assignment of the variables in B results in a formula from C. Strong backdoors of small size or with a good structure, e.g. with small backdoor treewidth, lead to efficient solutions for the propositional satisfiability problem SAT. In this paper we propose the new notion of recursive backdoors, which is inspired by the observation that in order to solve SAT we can independently recurse into the components that are created by partial assignments of variables. The quality of a recursive backdoor is measured by its recursive backdoor depth. Similar to the concept of backdoor treewidth, recursive backdoors of bounded depth include backdoors of unbounded size that have a certain treelike structure. However, the two concepts are incomparable and our results yield new tractability results for SAT.
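
The definition of a strong backdoor can be checked literally on small inputs: every assignment to the backdoor variables must leave a residual formula in the tractable class. The sketch below uses 2-CNF as an arbitrary illustrative class (not one singled out by the paper); clauses are tuples of non-zero integers, with negative integers denoting negated variables.

from itertools import product

def simplify(phi, assignment):
    """Simplify a CNF under a partial assignment (dict var -> bool)."""
    out = []
    for clause in phi:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                              # clause already satisfied
        out.append(tuple(l for l in clause if abs(l) not in assignment))
    return out

def is_strong_backdoor(phi, B, in_class=lambda f: all(len(c) <= 2 for c in f)):
    return all(in_class(simplify(phi, dict(zip(B, bits))))
               for bits in product([False, True], repeat=len(B)))

phi = [(1, 2, 3), (-1, 2, -3), (2, -4)]
print(is_strong_backdoor(phi, B=[2]))             # True: every setting of x2 leaves a 2-CNF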

Cite as

Nikolas Mählmann, Sebastian Siebertz, and Alexandre Vigny. Recursive Backdoors for SAT. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 73:1-73:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{mahlmann_et_al:LIPIcs.MFCS.2021.73,
  author =	{M\"{a}hlmann, Nikolas and Siebertz, Sebastian and Vigny, Alexandre},
  title =	{{Recursive Backdoors for SAT}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{73:1--73:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.73},
  URN =		{urn:nbn:de:0030-drops-145138},
  doi =		{10.4230/LIPIcs.MFCS.2021.73},
  annote =	{Keywords: Propositional satisfiability SAT, Backdoors, Parameterized Algorithms}
}
Document
Parallel Algorithms for Power Circuits and the Word Problem of the Baumslag Group

Authors: Caroline Mattes and Armin Weiß


Abstract
Power circuits were introduced in 2012 by Myasnikov, Ushakov and Won as a data structure for non-elementarily compressed integers supporting the arithmetic operations addition and (x,y) ↦ x⋅2^y. The same authors applied power circuits to give a polynomial-time solution to the word problem of the Baumslag group, which has a non-elementary Dehn function. In this work, we examine power circuits and the word problem of the Baumslag group under parallel complexity aspects. In particular, we establish that the word problem of the Baumslag group can be solved in NC, even though one of the essential steps is to compare two integers given by power circuits, a problem that is, in general, shown to be 𝖯-complete. The key observation is that the depth of the occurring power circuits is logarithmic, and such power circuits can be compared in NC.
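
The two supported operations are easy to state on ordinary integers, which also shows why a compressed representation is needed: iterating (x,y) ↦ x⋅2^y produces towers of exponentials. The snippet below only makes this observation; it is not an implementation of power circuits.

def op(x, y):
    return x << y            # x * 2**y on plain Python integers

v = 1
for _ in range(4):
    v = op(1, v)             # 2, 4, 16, 65536: a tower of 2s grows
print(v)                     # a few more iterations are already astronomically large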

Cite as

Caroline Mattes and Armin Weiß. Parallel Algorithms for Power Circuits and the Word Problem of the Baumslag Group. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 74:1-74:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{mattes_et_al:LIPIcs.MFCS.2021.74,
  author =	{Mattes, Caroline and Wei{\ss}, Armin},
  title =	{{Parallel Algorithms for Power Circuits and the Word Problem of the Baumslag Group}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{74:1--74:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.74},
  URN =		{urn:nbn:de:0030-drops-145148},
  doi =		{10.4230/LIPIcs.MFCS.2021.74},
  annote =	{Keywords: Word problem, Baumslag group, power circuit, parallel complexity}
}
Document
The Complexity of Transitively Orienting Temporal Graphs

Authors: George B. Mertzios, Hendrik Molter, Malte Renken, Paul G. Spirakis, and Philipp Zschoche


Abstract
In a temporal network with discrete time-labels on its edges, entities and information can only "flow" along sequences of edges whose time-labels are non-decreasing (resp. increasing), i.e. along temporal (resp. strict temporal) paths. Nevertheless, in the model for temporal networks of [Kempe, Kleinberg, Kumar, JCSS, 2002], the individual time-labeled edges remain undirected: an edge e = {u,v} with time-label t specifies that "u communicates with v at time t". This is a symmetric relation between u and v, and it can be interpreted that the information can flow in either direction. In this paper we make a first attempt to understand how the direction of information flow on one edge can impact the direction of information flow on other edges. More specifically, naturally extending the classical notion of a transitive orientation in static graphs, we introduce the fundamental notion of a temporal transitive orientation and we systematically investigate its algorithmic behavior in various situations. An orientation of a temporal graph is called temporally transitive if, whenever u has a directed edge towards v with time-label t₁ and v has a directed edge towards w with time-label t₂ ≥ t₁, then u also has a directed edge towards w with some time-label t₃ ≥ t₂. If we just demand that this implication holds whenever t₂ > t₁, the orientation is called strictly temporally transitive, as it is based on the fact that there is a strict directed temporal path from u to w. Our main result is a conceptually simple, yet technically quite involved, polynomial-time algorithm for recognizing whether a given temporal graph 𝒢 is transitively orientable. In stark contrast, we prove that, surprisingly, it is NP-hard to recognize whether 𝒢 is strictly transitively orientable. Additionally, we introduce and investigate further problems related to temporal transitivity, notably among them the temporal transitive completion problem, for which we prove both algorithmic and hardness results.
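
The temporal transitivity condition itself can be checked directly on a small oriented temporal graph, as in the sketch below; the triples are made-up example data, and this brute-force check is unrelated to the recognition algorithm of the paper.

# Orientation given as a set of (u, v, t) triples: u -> v at time-label t.
def temporally_transitive(oriented):
    for (u, v, t1) in oriented:
        for (x, w, t2) in oriented:
            if x == v and t2 >= t1 and u != w:    # degenerate u == w case skipped
                if not any(a == u and b == w and t3 >= t2
                           for (a, b, t3) in oriented):
                    return False
    return True

edges = {("u", "v", 1), ("v", "w", 2), ("u", "w", 3)}
print(temporally_transitive(edges))                        # True
print(temporally_transitive(edges - {("u", "w", 3)}))      # False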

Cite as

George B. Mertzios, Hendrik Molter, Malte Renken, Paul G. Spirakis, and Philipp Zschoche. The Complexity of Transitively Orienting Temporal Graphs. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 75:1-75:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{mertzios_et_al:LIPIcs.MFCS.2021.75,
  author =	{Mertzios, George B. and Molter, Hendrik and Renken, Malte and Spirakis, Paul G. and Zschoche, Philipp},
  title =	{{The Complexity of Transitively Orienting Temporal Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{75:1--75:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.75},
  URN =		{urn:nbn:de:0030-drops-145157},
  doi =		{10.4230/LIPIcs.MFCS.2021.75},
  annote =	{Keywords: Temporal graph, transitive orientation, transitive closure, polynomial-time algorithm, NP-hardness, satisfiability}
}
Document
Temporal Reachability Minimization: Delaying vs. Deleting

Authors: Hendrik Molter, Malte Renken, and Philipp Zschoche


Abstract
We study spreading processes in temporal graphs, i.e., graphs whose connections change over time. These processes naturally model real-world phenomena such as infectious diseases or information flows. More precisely, we investigate how such a spreading process, emerging from a given set of sources, can be contained to a small part of the graph. To this end we consider two ways of modifying the graph, which are (1) deleting connections and (2) delaying connections. We show a close relationship between the two associated problems and give a polynomial time algorithm when the graph has tree structure. For the general version, we consider parameterization by the number of vertices to which the spread is contained. Surprisingly, we prove W[1]-hardness for the deletion variant but fixed-parameter tractability for the delaying variant.

Cite as

Hendrik Molter, Malte Renken, and Philipp Zschoche. Temporal Reachability Minimization: Delaying vs. Deleting. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 76:1-76:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{molter_et_al:LIPIcs.MFCS.2021.76,
  author =	{Molter, Hendrik and Renken, Malte and Zschoche, Philipp},
  title =	{{Temporal Reachability Minimization: Delaying vs. Deleting}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{76:1--76:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.76},
  URN =		{urn:nbn:de:0030-drops-145161},
  doi =		{10.4230/LIPIcs.MFCS.2021.76},
  annote =	{Keywords: Temporal Graphs, Temporal Paths, Disease Spreading, Network Flows, Parameterized Algorithms, NP-hard Problems}
}
Document
A Timecop’s Chase Around the Table

Authors: Nils Morawietz and Petra Wolf


Abstract
We consider the cops and robbers game variant consisting of one cop and one robber on time-varying graphs (TVG). The considered TVGs are edge periodic graphs, i.e., for each edge, a binary string s_e determines in which time steps the edge is present: the edge e is present in time step t if and only if the string s_e contains a 1 at position t mod |s_e|. This periodicity allows for a compact representation of an infinite TVG. We prove that, even for very simple underlying graphs, i.e., directed and undirected cycles, the problem of whether a cop-winning strategy exists is NP-hard and W[1]-hard parameterized by the number of vertices. Our second main result is a matching lower bound for the ratio between the length of the underlying cycle and the least common multiple (lcm) of the lengths of the binary strings describing edge-periodicities over which the graph is robber-winning. Our third main result improves the previously known EXPTIME upper bound for Periodic Cop & Robber on general edge periodic graphs to PSPACE-membership.
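
The edge-presence rule used above is a one-liner, shown here on a toy 3-cycle whose pattern strings are invented for the example.

patterns = {             # underlying graph: 3-cycle a-b-c-a
    ("a", "b"): "1",     # always present
    ("b", "c"): "10",    # present at even steps
    ("c", "a"): "110",   # absent whenever t % 3 == 2
}

def present(edge, t):
    s = patterns[edge]
    return s[t % len(s)] == "1"     # edge present at step t iff s_e[t mod |s_e|] = 1

print([e for e in patterns if present(e, 5)])   # edges usable at step 5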

Cite as

Nils Morawietz and Petra Wolf. A Timecop’s Chase Around the Table. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 77:1-77:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{morawietz_et_al:LIPIcs.MFCS.2021.77,
  author =	{Morawietz, Nils and Wolf, Petra},
  title =	{{A Timecop’s Chase Around the Table}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{77:1--77:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.77},
  URN =		{urn:nbn:de:0030-drops-145176},
  doi =		{10.4230/LIPIcs.MFCS.2021.77},
  annote =	{Keywords: Time variable graph, Edge periodic cycle, Game of cops and robbers, Computational complexity}
}
Document
Syntactic Minimization Of Nondeterministic Finite Automata

Authors: Robert S. R. Myers and Henning Urbat


Abstract
Nondeterministic automata may be viewed as succinct programs implementing deterministic automata, i.e. complete specifications. Converting a given deterministic automaton into a small nondeterministic one is known to be computationally very hard; in fact, the ensuing decision problem is PSPACE-complete. This paper stands in stark contrast to the status quo. We restrict attention to subatomic nondeterministic automata, whose individual states accept unions of syntactic congruence classes. They are general enough to cover almost all structural results concerning nondeterministic state-minimality. We prove that converting a monoid recognizing a regular language into a small subatomic acceptor corresponds to an NP-complete problem. The NP certificates are solutions of simple equations involving relations over the syntactic monoid. We also consider the subclass of atomic nondeterministic automata introduced by Brzozowski and Tamm. Given a deterministic automaton and another one for the reversed language, computing small atomic acceptors is shown to be NP-complete with analogous certificates. Our complexity results emerge from an algebraic characterization of (sub)atomic acceptors in terms of deterministic automata with semilattice structure, combined with an equivalence of categories leading to succinct representations.

Cite as

Robert S. R. Myers and Henning Urbat. Syntactic Minimization Of Nondeterministic Finite Automata. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 78:1-78:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{myers_et_al:LIPIcs.MFCS.2021.78,
  author =	{Myers, Robert S. R. and Urbat, Henning},
  title =	{{Syntactic Minimization Of Nondeterministic Finite Automata}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{78:1--78:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.78},
  URN =		{urn:nbn:de:0030-drops-145186},
  doi =		{10.4230/LIPIcs.MFCS.2021.78},
  annote =	{Keywords: Algebraic language theory, Nondeterministic automata, NP-completeness}
}
Document
Idempotent Turing Machines

Authors: Keisuke Nakano


Abstract
A function f is said to be idempotent if f(f(x)) = f(x) holds whenever f(x) is defined. This paper presents a computation model for idempotent functions, called an idempotent Turing machine. The computation model is exactly as expressive as required, in the sense that it always computes an idempotent function and, conversely, every idempotent computable function can be computed by some idempotent Turing machine. Furthermore, a few typical properties of the computation model, such as robustness and universality, are shown. Our computation model is expected to serve as a basis for special-purpose (or domain-specific) programming languages in which exactly the idempotent computable functions can be defined.
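
The defining equation can be tested pointwise on a finite sample, which is all the tiny sketch below does; it says nothing about the machine model itself.

def is_idempotent_on(f, sample):
    """Check f(f(x)) == f(x) on a finite sample of inputs."""
    return all(f(f(x)) == f(x) for x in sample)

print(is_idempotent_on(abs, range(-10, 10)))           # True
print(is_idempotent_on(lambda x: x + 1, range(10)))    # False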

Cite as

Keisuke Nakano. Idempotent Turing Machines. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 79:1-79:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{nakano:LIPIcs.MFCS.2021.79,
  author =	{Nakano, Keisuke},
  title =	{{Idempotent Turing Machines}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{79:1--79:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.79},
  URN =		{urn:nbn:de:0030-drops-145191},
  doi =		{10.4230/LIPIcs.MFCS.2021.79},
  annote =	{Keywords: Turing machines, Idempotent functions, Computable functions, Computation model}
}
Document
Ergodic Theorems and Converses for PSPACE Functions

Authors: Satyadev Nandakumar and Subin Pulari


Abstract
We initiate the study of effective pointwise ergodic theorems in resource-bounded settings. Classically, the convergence of the ergodic averages for integrable functions can be arbitrarily slow [Ulrich Krengel, 1978]. In contrast, we show that for a class of PSPACE L¹ functions, and a class of PSPACE computable measure-preserving ergodic transformations, the ergodic average exists and is equal to the space average on every EXP random. We establish a partial converse that PSPACE non-randomness can be characterized as non-convergence of ergodic averages. Further, we prove that there is a class of resource-bounded randoms, viz. SUBEXP-space randoms, on which the corresponding ergodic theorem has an exact converse - a point x is SUBEXP-space random if and only if the corresponding effective ergodic theorem holds for x.
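
For orientation only, the classical (non-resource-bounded) statement can be seen numerically: for an ergodic measure-preserving transformation, the time average of an observable along an orbit approaches its space average. The irrational rotation below is a standard textbook example and is unrelated to the PSPACE/EXP-random refinement of the paper.

import math

alpha = math.sqrt(2) - 1                     # irrational rotation angle
T = lambda x: (x + alpha) % 1.0              # measure-preserving, ergodic for Lebesgue measure
f = lambda x: 1.0 if x < 0.25 else 0.0       # indicator of [0, 1/4); space average 0.25

x, total, N = 0.1, 0.0, 200_000
for _ in range(N):
    total += f(x)
    x = T(x)
print(total / N)                             # time average, close to 0.25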

Cite as

Satyadev Nandakumar and Subin Pulari. Ergodic Theorems and Converses for PSPACE Functions. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 80:1-80:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{nandakumar_et_al:LIPIcs.MFCS.2021.80,
  author =	{Nandakumar, Satyadev and Pulari, Subin},
  title =	{{Ergodic Theorems and Converses for PSPACE Functions}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{80:1--80:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.80},
  URN =		{urn:nbn:de:0030-drops-145204},
  doi =		{10.4230/LIPIcs.MFCS.2021.80},
  annote =	{Keywords: Ergodic Theorem, Resource-bounded randomness, Computable analysis, Complexity theory}
}
Document
On Guidable Index of Tree Automata

Authors: Damian Niwiński and Michał Skrzypczak


Abstract
We study guidable parity automata over infinite trees, introduced by Colcombet and Löding, which form an expressively complete subclass of all non-deterministic tree automata. We show that, for any non-deterministic automaton, an equivalent guidable automaton with the smallest possible index can be effectively found. Moreover, if the input automaton is of a special kind, i.e. it is a deterministic or a game automaton, then a guidable automaton with an optimal index can be chosen to be a deterministic (respectively game) automaton as well. Recall that the problem of whether an equivalent non-deterministic automaton with the smallest possible index can be effectively found is open, and a positive answer is known only in the case when the input automaton is deterministic or, more generally, a game automaton.

Cite as

Damian Niwiński and Michał Skrzypczak. On Guidable Index of Tree Automata. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 81:1-81:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{niwinski_et_al:LIPIcs.MFCS.2021.81,
  author =	{Niwi\'{n}ski, Damian and Skrzypczak, Micha{\l}},
  title =	{{On Guidable Index of Tree Automata}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{81:1--81:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.81},
  URN =		{urn:nbn:de:0030-drops-145214},
  doi =		{10.4230/LIPIcs.MFCS.2021.81},
  annote =	{Keywords: guidable automata, index problem, \omega-regular games}
}
Document
Feedback Vertex Set and Even Cycle Transversal for H-Free Graphs: Finding Large Block Graphs

Authors: Giacomo Paesani, Daniël Paulusma, and Paweł Rzążewski


Abstract
We prove new complexity results for Feedback Vertex Set and Even Cycle Transversal on H-free graphs, that is, graphs that do not contain some fixed graph H as an induced subgraph. In particular, we prove that both problems are polynomial-time solvable for sP₃-free graphs for every integer s ≥ 1; here, the graph sP₃ denotes the disjoint union of s paths on three vertices. Our results show that both problems exhibit the same behaviour on H-free graphs (subject to some open cases). This is in part explained by a new general algorithm we design for finding in a graph G a largest induced subgraph whose blocks belong to some finite class C of graphs. We also compare our results with the state-of-the-art results for the Odd Cycle Transversal problem, which is known to behave differently on H-free graphs. A brute-force illustration of the central object is sketched below.
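
To fix what object the results above are about, the sketch below computes a minimum feedback vertex set of a tiny graph by exhaustive search (a smallest vertex set whose removal leaves a forest); the example graph is arbitrary and the search is exponential, unlike the polynomial-time algorithms of the paper.

from itertools import combinations

def is_forest(vertices, edges):
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]        # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                         # this edge closes a cycle
        parent[ru] = rv
    return True

def min_fvs(vertices, edges):
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            rest = set(vertices) - set(S)
            if is_forest(rest, [(u, v) for u, v in edges if u in rest and v in rest]):
                return set(S)

V = range(5)
E = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]   # two triangles sharing vertex 2
print(min_fvs(V, E))                                   # {2}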

Cite as

Giacomo Paesani, Daniël Paulusma, and Paweł Rzążewski. Feedback Vertex Set and Even Cycle Transversal for H-Free Graphs: Finding Large Block Graphs. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 82:1-82:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{paesani_et_al:LIPIcs.MFCS.2021.82,
  author =	{Paesani, Giacomo and Paulusma, Dani\"{e}l and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Feedback Vertex Set and Even Cycle Transversal for H-Free Graphs: Finding Large Block Graphs}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{82:1--82:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.82},
  URN =		{urn:nbn:de:0030-drops-145224},
  doi =		{10.4230/LIPIcs.MFCS.2021.82},
  annote =	{Keywords: Feedback vertex set, even cycle transversal, odd cactus, forest, block}
}
Document
Stabilization Bounds for Influence Propagation from a Random Initial State

Authors: Pál András Papp and Roger Wattenhofer


Abstract
We study the stabilization time of two common types of influence propagation. In majority processes, nodes in a graph want to switch to the most frequent state in their neighborhood, while in minority processes, nodes want to switch to the least frequent state in their neighborhood. We consider the sequential model of these processes, and assume that every node starts out from a uniform random state. We first show that if nodes change their state for any small improvement in the process, then stabilization can last for up to Θ(n²) steps in both cases. Furthermore, we also study the proportional switching case, when nodes only decide to change their state if they are in conflict with a (1+λ)/2 fraction of their neighbors, for some parameter λ ∈ (0,1). In this case, we show that if λ < 1/3, then there is a construction where stabilization can indeed last for Ω(n^{1+c}) steps for some constant c > 0. On the other hand, if λ > 1/2, we prove that the stabilization time of the processes is upper-bounded by O(n ⋅ log n).
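
A toy sequential simulation of the majority process with proportional switching makes the setup concrete: a node flips only if at least a (1+λ)/2 fraction of its neighbours disagree with it. The graph, size, λ and seed below are arbitrary choices, and the sketch does not reproduce the stabilization bounds stated above.

import random

def simulate_majority(adj, lam, seed=0):
    rng = random.Random(seed)
    state = {v: rng.randint(0, 1) for v in adj}          # uniform random initial state
    steps = 0
    while True:
        unhappy = [v for v in adj
                   if sum(state[u] != state[v] for u in adj[v])
                      >= (1 + lam) / 2 * len(adj[v])]
        if not unhappy:
            return steps
        state[rng.choice(unhappy)] ^= 1                  # sequential: one switch per step
        steps += 1

n = 30
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(simulate_majority(cycle, lam=0.4))                 # number of steps until stable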

Cite as

Pál András Papp and Roger Wattenhofer. Stabilization Bounds for Influence Propagation from a Random Initial State. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 83:1-83:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{papp_et_al:LIPIcs.MFCS.2021.83,
  author =	{Papp, P\'{a}l Andr\'{a}s and Wattenhofer, Roger},
  title =	{{Stabilization Bounds for Influence Propagation from a Random Initial State}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{83:1--83:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.83},
  URN =		{urn:nbn:de:0030-drops-145239},
  doi =		{10.4230/LIPIcs.MFCS.2021.83},
  annote =	{Keywords: Majority process, Minority process, Stabilization time, Random initialization, Asynchronous model}
}
Document
Parameterized (Modular) Counting and Cayley Graph Expanders

Authors: Norbert Peyerimhoff, Marc Roth, Johannes Schmitt, Jakob Stix, and Alina Vdovina


Abstract
We study the problem #EdgeSub(Φ) of counting k-edge subgraphs satisfying a given graph property Φ in a large host graph G. Building upon the breakthrough result of Curticapean, Dell and Marx (STOC 17), we express the number of such subgraphs as a finite linear combination of graph homomorphism counts and derive the complexity of computing this number by studying its coefficients. Our approach relies on novel constructions of low-degree Cayley graph expanders of p-groups, which might be of independent interest. The properties of those expanders allow us to analyse the coefficients in the aforementioned linear combinations over the field 𝔽_p which gives us significantly more control over the cancellation behaviour of the coefficients. Our main result is an exhaustive and fine-grained complexity classification of #EdgeSub(Φ) for minor-closed properties Φ, closing the missing gap in previous work by Roth, Schmitt and Wellnitz (ICALP 21). Additionally, we observe that our methods also apply to modular counting. Among others, we obtain novel intractability results for the problems of counting k-forests and matroid bases modulo a prime p. Furthermore, from an algorithmic point of view, we construct algorithms for the problems of counting k-paths and k-cycles modulo 2 that outperform the best known algorithms for their non-modular counterparts. In the course of our investigations we also provide an exhaustive parameterized complexity classification for the problem of counting graph homomorphisms modulo a prime p.

Cite as

Norbert Peyerimhoff, Marc Roth, Johannes Schmitt, Jakob Stix, and Alina Vdovina. Parameterized (Modular) Counting and Cayley Graph Expanders. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 84:1-84:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{peyerimhoff_et_al:LIPIcs.MFCS.2021.84,
  author =	{Peyerimhoff, Norbert and Roth, Marc and Schmitt, Johannes and Stix, Jakob and Vdovina, Alina},
  title =	{{Parameterized (Modular) Counting and Cayley Graph Expanders}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{84:1--84:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.84},
  URN =		{urn:nbn:de:0030-drops-145246},
  doi =		{10.4230/LIPIcs.MFCS.2021.84},
  annote =	{Keywords: Cayley graphs, counting complexity, expander graphs, fine-grained complexity, parameterized complexity}
}
Document
A Hierarchy of Nondeterminism

Authors: Bader Abu Radi, Orna Kupferman, and Ofer Leshkowitz


Abstract
We study three levels in a hierarchy of nondeterminism: A nondeterministic automaton A is determinizable by pruning (DBP) if we can obtain a deterministic automaton equivalent to A by removing some of its transitions. Then, A is good-for-games (GFG) if its nondeterministic choices can be resolved in a way that only depends on the past. Finally, A is semantically deterministic (SD) if different nondeterministic choices in A lead to equivalent states. Some applications of automata in formal methods require deterministic automata, yet in fact can use automata with some level of nondeterminism. For example, DBP automata are useful in the analysis of online algorithms, and GFG automata are useful in synthesis and control. For automata on finite words, the three levels in the hierarchy coincide. We study the hierarchy for Büchi, co-Büchi, and weak automata on infinite words. We show that the hierarchy is strict, study the expressive power of the different levels in it, as well as the complexity of deciding the membership of a language in a given level. Finally, we describe a probability-based analysis of the hierarchy, which relates the level of nondeterminism with the probability that a random run on a word in the language is accepting.

Cite as

Bader Abu Radi, Orna Kupferman, and Ofer Leshkowitz. A Hierarchy of Nondeterminism. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 85:1-85:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{aburadi_et_al:LIPIcs.MFCS.2021.85,
  author =	{Abu Radi, Bader and Kupferman, Orna and Leshkowitz, Ofer},
  title =	{{A Hierarchy of Nondeterminism}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{85:1--85:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.85},
  URN =		{urn:nbn:de:0030-drops-145254},
  doi =		{10.4230/LIPIcs.MFCS.2021.85},
  annote =	{Keywords: Automata on Infinite Words, Expressive power, Complexity, Games}
}
Document
Boolean Automata and Atoms of Regular Languages

Authors: Hellis Tamm


Abstract
We examine the role that atoms of regular languages play in boolean automata. We observe that the size of a minimal boolean automaton of a regular language is directly related to the number of atoms of the language. We present a method to construct minimal boolean automata, using the atoms of a given regular language. The "illegal" cover problem of the Kameda-Weiner method for NFA minimization implies that using only the union operation to construct an automaton from a cover, as is the case with NFAs, is not sufficient. We show that by using the union and the intersection operations (without the complementation operation), it is possible to construct boolean automata accepting a given language, for a given maximal cover.

Cite as

Hellis Tamm. Boolean Automata and Atoms of Regular Languages. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 86:1-86:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{tamm:LIPIcs.MFCS.2021.86,
  author =	{Tamm, Hellis},
  title =	{{Boolean Automata and Atoms of Regular Languages}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{86:1--86:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.86},
  URN =		{urn:nbn:de:0030-drops-145267},
  doi =		{10.4230/LIPIcs.MFCS.2021.86},
  annote =	{Keywords: Boolean automaton, Regular language, Atoms}
}
Document
The Gödel Fibration

Authors: Davide Trotta, Matteo Spadetto, and Valeria de Paiva


Abstract
We introduce the notion of a Gödel fibration, which is a fibration categorically embodying both the logical principles of traditional Skolemization (we can exchange the order of quantifiers paying the price of a functional) and the existence of a prenex normal form presentation for every logical formula. Building up from Hofstra’s earlier fibrational characterization of de Paiva’s categorical Dialectica construction, we show that a fibration is an instance of the Dialectica construction if and only if it is a Gödel fibration. This result establishes an intrinsic presentation of the Dialectica fibration, contributing to the understanding of the Dialectica construction itself and of its properties from a logical perspective.

Cite as

Davide Trotta, Matteo Spadetto, and Valeria de Paiva. The Gödel Fibration. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 87:1-87:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{trotta_et_al:LIPIcs.MFCS.2021.87,
  author =	{Trotta, Davide and Spadetto, Matteo and de Paiva, Valeria},
  title =	{{The G\"{o}del Fibration}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{87:1--87:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.87},
  URN =		{urn:nbn:de:0030-drops-145272},
  doi =		{10.4230/LIPIcs.MFCS.2021.87},
  annote =	{Keywords: Dialectica category, G\"{o}del fibration, Pseudo-monad}
}
Document
Abstract Congruence Criteria for Weak Bisimilarity

Authors: Stelios Tsampas, Christian Williams, Andreas Nuyts, Dominique Devriese, and Frank Piessens


Abstract
We introduce three general compositionality criteria over operational semantics and prove that, when all three are satisfied together, they guarantee weak bisimulation being a congruence. Our work is founded upon Turi and Plotkin’s mathematical operational semantics and the coalgebraic approach to weak bisimulation by Brengos. We demonstrate each criterion with various examples of success and failure and establish a formal connection with the simply WB cool rule format of Bloom and van Glabbeek. In addition, we show that the three criteria induce lax models in the sense of Bonchi et al.

Cite as

Stelios Tsampas, Christian Williams, Andreas Nuyts, Dominique Devriese, and Frank Piessens. Abstract Congruence Criteria for Weak Bisimilarity. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 88:1-88:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{tsampas_et_al:LIPIcs.MFCS.2021.88,
  author =	{Tsampas, Stelios and Williams, Christian and Nuyts, Andreas and Devriese, Dominique and Piessens, Frank},
  title =	{{Abstract Congruence Criteria for Weak Bisimilarity}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{88:1--88:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.88},
  URN =		{urn:nbn:de:0030-drops-145281},
  doi =		{10.4230/LIPIcs.MFCS.2021.88},
  annote =	{Keywords: Structural Operational Semantics, distributive laws, weak bisimilarity}
}
Document
Quantum Multiple-Valued Decision Diagrams in Graphical Calculi

Authors: Renaud Vilmart


Abstract
Graphical calculi such as the ZH-calculus are powerful tools in the study and analysis of quantum processes, with links to other models of quantum computation such as quantum circuits, measurement-based computing, etc. A somewhat compact but systematic way to describe a quantum process is through the use of quantum multiple-valued decision diagrams (QMDDs), which have already been used for the synthesis of quantum circuits as well as for verification. We show in this paper how to turn a QMDD into an equivalent ZH-diagram, and vice versa, and show how reducing a QMDD translates in the ZH-calculus, hence allowing tools from one formalism to be used in the other.

Cite as

Renaud Vilmart. Quantum Multiple-Valued Decision Diagrams in Graphical Calculi. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 89:1-89:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{vilmart:LIPIcs.MFCS.2021.89,
  author =	{Vilmart, Renaud},
  title =	{{Quantum Multiple-Valued Decision Diagrams in Graphical Calculi}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{89:1--89:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.89},
  URN =		{urn:nbn:de:0030-drops-145295},
  doi =		{10.4230/LIPIcs.MFCS.2021.89},
  annote =	{Keywords: Quantum Computing, ZH-Calculus, Decision Diagrams}
}
Document
Decision Problems for Origin-Close Top-Down Tree Transducers

Authors: Sarah Winter


Abstract
Tree transductions are binary relations over finite trees. For tree transductions defined by non-deterministic top-down tree transducers, inclusion, equivalence and synthesis problems are known to be undecidable. Adding origin semantics to tree transductions, i.e., tagging each output node with the input node it originates from, is a known way to recover decidability for inclusion and equivalence. Since the origin semantics is rather rigid, in this work we introduce a similarity measure for transducers with origin semantics and show that we can decide inclusion, equivalence and synthesis problems for origin-close non-deterministic top-down tree transducers.

Cite as

Sarah Winter. Decision Problems for Origin-Close Top-Down Tree Transducers. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 90:1-90:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{winter:LIPIcs.MFCS.2021.90,
  author =	{Winter, Sarah},
  title =	{{Decision Problems for Origin-Close Top-Down Tree Transducers}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{90:1--90:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.90},
  URN =		{urn:nbn:de:0030-drops-145308},
  doi =		{10.4230/LIPIcs.MFCS.2021.90},
  annote =	{Keywords: tree transducers, equivalence, uniformization, synthesis, origin semantics}
}
