LIPIcs, Volume 83

42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)




Event

MFCS 2017, August 21-25, 2017, Aalborg, Denmark

Editors

Kim G. Larsen
Hans L. Bodlaender
Jean-Francois Raskin

Publication Details

  • Published: 2017-12-01
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-046-0
  • DBLP: db/conf/mfcs/mfcs2017

Documents

Document
Complete Volume
LIPIcs, Volume 83, MFCS'17, Complete Volume

Authors: Kim G. Larsen, Hans L. Bodlaender, and Jean-Francois Raskin


Abstract
LIPIcs, Volume 83, MFCS'17, Complete Volume

Cite as

42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Proceedings{larsen_et_al:LIPIcs.MFCS.2017,
  title =	{{LIPIcs, Volume 83, MFCS'17, Complete Volume}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017},
  URN =		{urn:nbn:de:0030-drops-82073},
  doi =		{10.4230/LIPIcs.MFCS.2017},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Kim G. Larsen, Hans L. Bodlaender, and Jean-Francois Raskin


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{larsen_et_al:LIPIcs.MFCS.2017.0,
  author =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.0},
  URN =		{urn:nbn:de:0030-drops-80564},
  doi =		{10.4230/LIPIcs.MFCS.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Does Looking Inside a Circuit Help?

Authors: Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, Pierre McKenzie, and Shadab Romani


Abstract
The Black-Box Hypothesis states that any property of Boolean functions decided efficiently (e.g., in BPP) with inputs represented by circuits can also be decided efficiently in the black-box setting, where an algorithm is given oracle access to the input function and an upper bound on its circuit size. If this hypothesis is true, then P ≠ NP. We focus on the consequences of the hypothesis being false, showing that (under general conditions on the structure of a counterexample) it implies a non-trivial algorithm for CSAT. More specifically, we show that if there is a property F of Boolean functions such that F has high sensitivity on some input function f of subexponential circuit complexity (which is a sufficient condition for F being a counterexample to the Black-Box Hypothesis), then CSAT is solvable by a subexponential-size circuit family. Moreover, if such a counterexample F is symmetric, then CSAT is in P/poly. These results provide some evidence towards the conjecture (made in this paper) that the Black-Box Hypothesis is false if and only if CSAT is easy.

Cite as

Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, Pierre McKenzie, and Shadab Romani. Does Looking Inside a Circuit Help?. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{impagliazzo_et_al:LIPIcs.MFCS.2017.1,
  author =	{Impagliazzo, Russell and Kabanets, Valentine and Kolokolova, Antonina and McKenzie, Pierre and Romani, Shadab},
  title =	{{Does Looking Inside a Circuit Help?}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.1},
  URN =		{urn:nbn:de:0030-drops-80975},
  doi =		{10.4230/LIPIcs.MFCS.2017.1},
  annote =	{Keywords: Black-Box Hypothesis, Rice's theorem, circuit complexity, SAT, sensitivity of boolean functions, decision tree complexity}
}
Document
The Power of Programs over Monoids in DA

Authors: Nathan Grosshans, Pierre McKenzie, and Luc Segoufin


Abstract
The program-over-monoid model of computation originates with Barrington's proof that it captures the complexity class NC^1. Here we make progress in understanding the subtleties of the model. First, we identify a new tameness condition on a class of monoids that entails a natural characterization of the regular languages recognizable by programs over monoids from the class. Second, we prove that the class known as DA satisfies tameness and hence that the regular languages recognized by programs over monoids in DA are precisely those recognizable in the classical sense by morphisms from QDA. Third, we show by contrast that the well-studied class of monoids called J is not tame, and we exhibit a regular language that is recognized by a program over a monoid from J yet is not recognizable classically by morphisms from the class QJ. Finally, we exhibit a program-length-based hierarchy within the class of languages recognized by programs over monoids from DA.

Cite as

Nathan Grosshans, Pierre McKenzie, and Luc Segoufin. The Power of Programs over Monoids in DA. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 2:1-2:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{grosshans_et_al:LIPIcs.MFCS.2017.2,
  author =	{Grosshans, Nathan and McKenzie, Pierre and Segoufin, Luc},
  title =	{{The Power of Programs over Monoids in DA}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{2:1--2:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.2},
  URN =		{urn:nbn:de:0030-drops-80909},
  doi =		{10.4230/LIPIcs.MFCS.2017.2},
  annote =	{Keywords: Programs over monoids, DA, lower-bounds}
}
Document
Regular Language Distance and Entropy

Authors: Austin J. Parker, Kelly B. Yancey, and Matthew P. Yancey


Abstract
This paper addresses the problem of determining the distance between two regular languages. It shows how to extend the Jaccard distance, which works on finite sets, to potentially infinite regular languages. The entropy of a regular language plays a large role in this extension, and much of the paper is spent investigating it. This includes addressing issues that have forced previous authors to rely on the upper limit of Shannon's traditional formulation of channel capacity, because the limit in that formulation does not always exist. The paper also proposes a new limit-based formulation for the entropy of a regular language and proves that this formulation both exists and is equivalent to Shannon's original formulation (when the latter exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from symbolic dynamics, consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
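
For reference, the two classical ingredients the abstract builds on can be stated as follows (standard definitions; the notation d_J, h and Sigma is our choice): the Jaccard distance on finite sets and the Shannon-style entropy (channel capacity) of a language L over an alphabet Sigma.

\[
  d_J(A,B) \;=\; 1 - \frac{|A \cap B|}{|A \cup B|},
  \qquad
  h(L) \;=\; \lim_{n \to \infty} \frac{\log_2 \lvert L \cap \Sigma^n \rvert}{n}
  \quad \text{(when the limit exists).}
\]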

Cite as

Austin J. Parker, Kelly B. Yancey, and Matthew P. Yancey. Regular Language Distance and Entropy. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{parker_et_al:LIPIcs.MFCS.2017.3,
  author =	{Parker, Austin J. and Yancey, Kelly B. and Yancey, Matthew P.},
  title =	{{Regular Language Distance and Entropy}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.3},
  URN =		{urn:nbn:de:0030-drops-80945},
  doi =		{10.4230/LIPIcs.MFCS.2017.3},
  annote =	{Keywords: regular languages, channel capacity, entropy, Jaccard, symbolic dynamics}
}
Document
The Complexity of Boolean Surjective General-Valued CSPs

Authors: Peter Fulla and Stanislav Zivny


Abstract
Valued constraint satisfaction problems (VCSPs) are discrete optimisation problems with the objective function given as a sum of fixed-arity functions; the values are rational numbers or infinity. In Boolean surjective VCSPs, variables take on labels from D={0,1} and an optimal assignment is required to use both labels from D. A classic example is the global min-cut problem in graphs. Building on the work of Uppman, we establish a dichotomy theorem and thus give a complete complexity classification of Boolean surjective VCSPs. The newly discovered tractable case has an interesting structure related to projections of downsets and upsets. Our work generalises the dichotomy for {0,infinity}-valued constraint languages (corresponding to CSPs) obtained by Creignou and Hebrard, and the dichotomy for {0,1}-valued constraint languages (corresponding to Min-CSPs) obtained by Uppman.
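
To make the min-cut example concrete (a worked instance of our own, not taken from the paper): global min-cut in a graph (V,E) is the Boolean surjective VCSP that minimises

\[
  \sum_{\{u,v\} \in E} f(x_u, x_v),
  \qquad
  f(a,b) =
  \begin{cases}
    1 & \text{if } a \neq b,\\
    0 & \text{otherwise,}
  \end{cases}
\]

over assignments x : V -> {0,1}; the surjectivity requirement is exactly what forces both sides of the cut to be non-empty.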

Cite as

Peter Fulla and Stanislav Zivny. The Complexity of Boolean Surjective General-Valued CSPs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{fulla_et_al:LIPIcs.MFCS.2017.4,
  author =	{Fulla, Peter and Zivny, Stanislav},
  title =	{{The Complexity of Boolean Surjective General-Valued CSPs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.4},
  URN =		{urn:nbn:de:0030-drops-80623},
  doi =		{10.4230/LIPIcs.MFCS.2017.4},
  annote =	{Keywords: constraint satisfaction problems, surjective CSP, valued CSP, min-cut, polymorphisms, multimorphisms}
}
Document
On the Expressive Power of Quasiperiodic SFT

Authors: Bruno Durand and Andrei Romashchenko


Abstract
In this paper we study shifts, i.e., the shift-invariant and topologically closed sets of configurations over a finite alphabet in Z^d. The minimal shifts are those shifts in which all configurations contain exactly the same patterns. Two classes of shifts play a prominent role in symbolic dynamics, in language theory and in the theory of computability: the shifts of finite type (obtained by forbidding a finite number of finite patterns) and the effective shifts (obtained by forbidding a computably enumerable set of finite patterns). We prove that every effective minimal shift can be represented as a factor of a projective subdynamics of a minimal shift of finite type in a bigger (by 1) dimension. This result transfers to the class of minimal shifts a theorem by M. Hochman known for the class of all effective shifts and thus answers an open question by E. Jeandel. We prove a similar result for quasiperiodic shifts and also show that there exists a quasiperiodic shift of finite type for which the Kolmogorov complexity of all patterns of size n\times n is \Omega(n).

Cite as

Bruno Durand and Andrei Romashchenko. On the Expressive Power of Quasiperiodic SFT. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{durand_et_al:LIPIcs.MFCS.2017.5,
  author =	{Durand, Bruno and Romashchenko, Andrei},
  title =	{{On the Expressive Power of Quasiperiodic SFT}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.5},
  URN =		{urn:nbn:de:0030-drops-80985},
  doi =		{10.4230/LIPIcs.MFCS.2017.5},
  annote =	{Keywords: minimal SFT, tilings, quasiperiodicity}
}
Document
Parameterized Algorithms for Partitioning Graphs into Highly Connected Clusters

Authors: Ivan Bliznets and Nikolai Karpov


Abstract
Clustering is a well-known and important problem with numerous applications. The graph-based model is one of the typical cluster models. In the graph model, clusters are generally defined as cliques. However, such an approach might be too restrictive, as in some applications not all objects from the same cluster must be connected. That is why different types of clique relaxations are often considered as clusters. In our work, we consider the problem of partitioning a graph into clusters and the problem of isolating a cluster of a special type, where by a cluster we mean a highly connected subgraph. Such a clusterization was initially proposed by Hartuv and Shamir, and their HCS clustering algorithm has been extensively applied in practice. It was used to cluster cDNA fingerprints, to find complexes in protein-protein interaction data, to group protein sequences hierarchically into superfamily and family clusters, and to find families of regulatory RNA structures. The HCS algorithm partitions a graph into highly connected subgraphs, but it achieves this by deleting a number of edges that is not necessarily minimum. In our work, we try to minimize the number of edge deletions. We consider the problems from the parameterized point of view, where the main parameter is the number of allowed edge deletions. The presented algorithms significantly improve the previously known running times for the Highly Connected Deletion (improved from O*(81^k) to O*(3^k)), Isolated Highly Connected Subgraph (from O*(4^k) to O*(k^{O(k^{2/3})})), and Seeded Highly Connected Edge Deletion (from O*(16^{k^{3/4}}) to O*(k^{\sqrt{k}})) problems. Furthermore, we present a subexponential algorithm for the Highly Connected Deletion problem if the number of clusters is bounded. Overall, our work contains three subexponential algorithms, which is unusual, as until very recently very few problems were known to admit subexponential algorithms.
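
As a small illustration of the cluster notion used above (assuming the Hartuv-Shamir definition that a graph on n > 1 vertices is highly connected when its edge connectivity exceeds n/2), the property can be tested directly, e.g. with networkx:

import networkx as nx

def is_highly_connected(G: nx.Graph) -> bool:
    # Highly connected (in the Hartuv-Shamir sense): edge connectivity > n/2.
    n = G.number_of_nodes()
    if n <= 1:
        return True
    return nx.edge_connectivity(G) > n / 2

print(is_highly_connected(nx.complete_graph(5)))  # True
print(is_highly_connected(nx.path_graph(5)))      # False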

Cite as

Ivan Bliznets and Nikolai Karpov. Parameterized Algorithms for Partitioning Graphs into Highly Connected Clusters. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bliznets_et_al:LIPIcs.MFCS.2017.6,
  author =	{Bliznets, Ivan and Karpov, Nikolai},
  title =	{{Parameterized Algorithms for Partitioning Graphs into Highly Connected Clusters}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.6},
  URN =		{urn:nbn:de:0030-drops-80728},
  doi =		{10.4230/LIPIcs.MFCS.2017.6},
  annote =	{Keywords: clustering, parameterized complexity, highly connected}
}
Document
Hypercube LSH for Approximate near Neighbors

Authors: Thijs Laarhoven


Abstract
A celebrated technique for finding near neighbors for the angular distance involves using a set of random hyperplanes to partition the space into hash regions [Charikar, STOC 2002]. Experiments later showed that using a set of orthogonal hyperplanes, thereby partitioning the space into the Voronoi regions induced by a hypercube, leads to even better results [Terasawa and Tanaka, WADS 2007]. However, no theoretical explanation for this improvement was ever given, and it remained unclear how the resulting hypercube hash method scales in high dimensions. In this work, we provide explicit asymptotics for the collision probabilities when using hypercubes to partition the space. For instance, two near-orthogonal vectors are expected to collide with probability (1/pi)^d in dimension d, compared to (1/2)^d when using random hyperplanes. Vectors at angle pi/3 collide with probability (sqrt(3)/pi)^d, compared to (2/3)^d for random hyperplanes, and near-parallel vectors collide with similar asymptotic probabilities in both cases. For c-approximate nearest neighbor searching, this translates to a decrease in the exponent rho of locality-sensitive hashing (LSH) methods by a factor of up to log2(pi) ~ 1.652 compared to hyperplane LSH. For c = 2, we obtain rho ~ 0.302 for hypercube LSH, improving upon the rho ~ 0.377 for hyperplane LSH. We further describe how to use hypercube LSH in practice, and we consider an example application in the area of lattice algorithms.
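
A rough numerical illustration of the two hash families being compared (a sketch of our own; the probabilities quoted above are asymptotics in the dimension d, so small-d estimates will not match them exactly). Hyperplane LSH uses d independent random hyperplanes, while hypercube LSH uses the coordinate signs after one random orthogonal rotation:

import numpy as np

def collision_rates(theta, d=3, trials=50000, seed=0):
    # Estimate P[all d hash bits agree] for two unit vectors at angle theta.
    rng = np.random.default_rng(seed)
    x = np.zeros(d); x[0] = 1.0
    y = np.zeros(d); y[0] = np.cos(theta); y[1] = np.sin(theta)
    hyperplane_hits = hypercube_hits = 0
    for _ in range(trials):
        g = rng.standard_normal((d, d))
        # hyperplane LSH: d independent random hyperplanes (rows of g)
        hyperplane_hits += np.all(np.sign(g @ x) == np.sign(g @ y))
        # hypercube LSH: signs after a random orthogonal rotation (QR of g)
        q, _ = np.linalg.qr(g)
        hypercube_hits += np.all(np.sign(q @ x) == np.sign(q @ y))
    return hyperplane_hits / trials, hypercube_hits / trials

print(collision_rates(np.pi / 2))  # near-orthogonal vectors
print(collision_rates(np.pi / 3))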

Cite as

Thijs Laarhoven. Hypercube LSH for Approximate near Neighbors. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{laarhoven:LIPIcs.MFCS.2017.7,
  author =	{Laarhoven, Thijs},
  title =	{{Hypercube LSH for Approximate near Neighbors}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.7},
  URN =		{urn:nbn:de:0030-drops-80926},
  doi =		{10.4230/LIPIcs.MFCS.2017.7},
  annote =	{Keywords: (approximate) near neighbors, locality-sensitive hashing, large deviations, dimensionality reduction, lattice algorithms}
}
Document
Generalized Predecessor Existence Problems for Boolean Finite Dynamical Systems

Authors: Akinori Kawachi, Mitsunori Ogihara, and Kei Uchizawa


Abstract
A Boolean Finite Synchronous Dynamical System (BFDS, for short) consists of a finite number of objects, each of which maintains a Boolean state, where after individually receiving state assignments, the objects update their state with respect to object-specific, time-independent Boolean functions synchronously in discrete time steps. The present paper studies the computational complexity of determining, given a Boolean finite synchronous dynamical system, a configuration, which is a Boolean vector representing the states of the objects, and a positive integer t, whether there exists another configuration from which the given configuration can be reached in t steps. It was previously shown that this problem, which we call the t-Predecessor Problem, is NP-complete even for t = 1 if the update function of an object is either the conjunction of arbitrary fan-in or the disjunction of arbitrary fan-in. This paper studies the computational complexity of the t-Predecessor Problem for a variety of sets of permissible update functions as well as for polynomially bounded t. It also studies the t-Garden-Of-Eden Problem, a variant of the t-Predecessor Problem that asks whether a configuration has a t-predecessor which itself has no predecessor. The paper obtains complexity-theoretic characterizations of all but one of these problems.
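
To fix the objects of study (an illustrative brute-force reference of our own, not an algorithm from the paper), a BFDS can be represented by one update function per object, and the t-Predecessor question can be decided by exhaustive search over all 2^n configurations:

from itertools import product

def step(config, updates):
    # One synchronous update: every object reads the full current configuration.
    return tuple(f(config) for f in updates)

def has_t_predecessor(target, updates, t):
    n = len(target)
    for candidate in product((False, True), repeat=n):
        cfg = candidate
        for _ in range(t):
            cfg = step(cfg, updates)
        if cfg == target:
            return True
    return False

# Example: two objects, each updating to the AND of both current states.
updates = [lambda c: c[0] and c[1], lambda c: c[0] and c[1]]
print(has_t_predecessor((True, True), updates, 1))   # True
print(has_t_predecessor((True, False), updates, 1))  # False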

Cite as

Akinori Kawachi, Mitsunori Ogihara, and Kei Uchizawa. Generalized Predecessor Existence Problems for Boolean Finite Dynamical Systems. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kawachi_et_al:LIPIcs.MFCS.2017.8,
  author =	{Kawachi, Akinori and Ogihara, Mitsunori and Uchizawa, Kei},
  title =	{{Generalized Predecessor Existence Problems for Boolean Finite Dynamical Systems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.8},
  URN =		{urn:nbn:de:0030-drops-81078},
  doi =		{10.4230/LIPIcs.MFCS.2017.8},
  annote =	{Keywords: Computational complexity, dynamical systems, Garden of Eden, predecessor}
}
Document
Dividing Splittable Goods Evenly and With Limited Fragmentation

Authors: Peter Damaschke


Abstract
A splittable good provided in n pieces shall be divided as evenly as possible among m agents, where every agent can take shares of at most F pieces. We call F the fragmentation. For F=1 we can solve the max-min and min-max problems in linear time. The case F=2 has neat formulations and structural characterizations in terms of weighted graphs. Here we focus on perfectly balanced solutions. While the problem is strongly NP-hard in general, it can be solved in linear time if m>=n-1, and a solution always exists in this case. Moreover, case F=2 is fixed-parameter tractable in the parameter 2m-n. The results also give rise to various open problems.

Cite as

Peter Damaschke. Dividing Splittable Goods Evenly and With Limited Fragmentation. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 9:1-9:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{damaschke:LIPIcs.MFCS.2017.9,
  author =	{Damaschke, Peter},
  title =	{{Dividing Splittable Goods Evenly and With Limited Fragmentation}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{9:1--9:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.9},
  URN =		{urn:nbn:de:0030-drops-80579},
  doi =		{10.4230/LIPIcs.MFCS.2017.9},
  annote =	{Keywords: packing, load balancing, weighted graph, linear-time algorithm, parameterized algorithm}
}
Document
Small-Space LCE Data Structure with Constant-Time Queries

Authors: Yuka Tanimura, Takaaki Nishimoto, Hideo Bannai, Shunsuke Inenaga, and Masayuki Takeda


Abstract
The longest common extension (LCE) problem is to preprocess a given string w of length n so that the length of the longest common prefix between suffixes of w that start at any two given positions is answered quickly. In this paper, we present a data structure of O(z \tau^2 + \frac{n}{\tau}) words of space which answers LCE queries in O(1) time and can be built in O(n \log \sigma) time, where 1 \leq \tau \leq \sqrt{n} is a parameter, z is the size of the Lempel-Ziv 77 factorization of w and \sigma is the alphabet size. The proposed LCE data structure does not access the input string w when answering queries, and thus w can be deleted after preprocessing. On top of this main result, we obtain further results using (variants of) our LCE data structure, which include the following: - For highly repetitive strings where the z\tau^2 term is dominated by \frac{n}{\tau}, we obtain a constant-time and sub-linear space LCE query data structure. - Even when the input string is not well compressible via Lempel-Ziv 77 factorization, we still can obtain a constant-time and sub-linear space LCE data structure for suitable \tau and for \sigma \leq 2^{o(\log n)}. - The time-space trade-off lower bounds for the LCE problem by Bille et al. [J. Discrete Algorithms, 25:42-50, 2014] and by Kosolobov [CoRR, abs/1611.02891, 2016] do not apply in some cases with our LCE data structure.
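
As a baseline for what an LCE query returns (a naive scan, our own illustration; the point of the paper is to answer such queries in O(1) time from a small structure without storing w):

def lce(w: str, i: int, j: int) -> int:
    # Length of the longest common prefix of the suffixes w[i:] and w[j:].
    k = 0
    while i + k < len(w) and j + k < len(w) and w[i + k] == w[j + k]:
        k += 1
    return k

w = "abracadabra"
print(lce(w, 0, 7))  # 4, since both suffixes start with "abra"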

Cite as

Yuka Tanimura, Takaaki Nishimoto, Hideo Bannai, Shunsuke Inenaga, and Masayuki Takeda. Small-Space LCE Data Structure with Constant-Time Queries. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{tanimura_et_al:LIPIcs.MFCS.2017.10,
  author =	{Tanimura, Yuka and Nishimoto, Takaaki and Bannai, Hideo and Inenaga, Shunsuke and Takeda, Masayuki},
  title =	{{Small-Space LCE Data Structure with Constant-Time Queries}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.10},
  URN =		{urn:nbn:de:0030-drops-81021},
  doi =		{10.4230/LIPIcs.MFCS.2017.10},
  annote =	{Keywords: longest common extension, truncated suffix trees, t-covers}
}
Document
ZX-Calculus: Cyclotomic Supplementarity and Incompleteness for Clifford+T Quantum Mechanics

Authors: Emmanuel Jeandel, Simon Perdrix, Renaud Vilmart, and Quanlong Wang


Abstract
The ZX-Calculus is a powerful graphical language for quantum mechanics and quantum information processing. The completeness of the language - i.e. the ability to derive any true equation - is a crucial question. In the quest for a complete ZX-calculus, supplementarity has recently been proved to be necessary for quantum diagram reasoning (MFCS 2016). Roughly speaking, supplementarity consists in merging two subdiagrams when they are parameterized by antipodal angles. We introduce a generalised supplementarity - called cyclotomic supplementarity - which consists in merging n subdiagrams at once, when the n angles divide the circle into equal parts. We show that when n is an odd prime number, the cyclotomic supplementarity cannot be derived, leading to a countable family of new axioms for diagrammatic quantum reasoning. We exhibit another new simple axiom that cannot be derived from the existing rules of the ZX-Calculus, implying in particular the incompleteness of the language for so-called Clifford+T quantum mechanics. We end up with a new axiomatisation of an extended ZX-Calculus, including an axiom schema for the cyclotomic supplementarity.

Cite as

Emmanuel Jeandel, Simon Perdrix, Renaud Vilmart, and Quanlong Wang. ZX-Calculus: Cyclotomic Supplementarity and Incompleteness for Clifford+T Quantum Mechanics. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 11:1-11:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{jeandel_et_al:LIPIcs.MFCS.2017.11,
  author =	{Jeandel, Emmanuel and Perdrix, Simon and Vilmart, Renaud and Wang, Quanlong},
  title =	{{ZX-Calculus: Cyclotomic Supplementarity and Incompleteness for Clifford+T Quantum Mechanics}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{11:1--11:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.11},
  URN =		{urn:nbn:de:0030-drops-81173},
  doi =		{10.4230/LIPIcs.MFCS.2017.11},
  annote =	{Keywords: Categorical Quantum Mechanics, ZX-Calculus, Completeness, Cyclotomic Supplementarity, Clifford+T}
}
Document
Counting Problems for Parikh Images

Authors: Christoph Haase, Stefan Kiefer, and Markus Lohrey


Abstract
Given finite-state automata (or context-free grammars) A,B over the same alphabet and a Parikh vector p, we study the complexity of deciding whether the number of words in the language of A with Parikh image p is greater than the number of such words in the language of B. Recently, this problem turned out to be tightly related to the cost problem for weighted Markov chains. We classify the complexity according to whether A and B are deterministic, the size of the alphabet, and the encoding of p (binary or unary).
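
For concreteness (the definition is standard; the example words and alphabet are ours), the Parikh image of a word counts the occurrences of each letter, forgetting their order, so distinct words can share a Parikh vector:

from collections import Counter

def parikh_image(word: str, alphabet: str) -> tuple:
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)

print(parikh_image("abbab", "ab"))  # (2, 3)
print(parikh_image("baabb", "ab"))  # (2, 3): different word, same Parikh image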

Cite as

Christoph Haase, Stefan Kiefer, and Markus Lohrey. Counting Problems for Parikh Images. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 12:1-12:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{haase_et_al:LIPIcs.MFCS.2017.12,
  author =	{Haase, Christoph and Kiefer, Stefan and Lohrey, Markus},
  title =	{{Counting Problems for Parikh Images}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{12:1--12:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.12},
  URN =		{urn:nbn:de:0030-drops-80597},
  doi =		{10.4230/LIPIcs.MFCS.2017.12},
  annote =	{Keywords: Parikh images, finite automata, counting problems}
}
Document
Communication Complexity of Pairs of Graph Families with Applications

Authors: Sudeshna Kolay, Fahad Panolan, and Saket Saurabh


Abstract
Given a graph G and a pair (\mathcal{F}_1,\mathcal{F}_2) of graph families, the function \mathsf{GDISJ}_{G,\mathcal{F}_1,\mathcal{F}_2} takes as input two induced subgraphs G_1 and G_2 of G, such that G_1 \in \mathcal{F}_1 and G_2 \in \mathcal{F}_2, and returns 1 if V(G_1)\cap V(G_2)=\emptyset and 0 otherwise. We study the communication complexity of this problem in the two-party model. In particular, we look at pairs of hereditary graph families. We show that the communication complexity of this function, when the two graph families are hereditary, is sublinear if and only if there are finitely many graphs in the intersection of these two families. Then, using concepts from parameterized complexity, we obtain nuanced upper bounds on the communication complexity of \mathsf{GDISJ}_{G,\mathcal{F}_1,\mathcal{F}_2}. A concept related to communication protocols is that of a (\mathcal{F}_1,\mathcal{F}_2)-separating family of a graph G. A collection \mathcal{F} of subsets of V(G) is called a (\mathcal{F}_1,\mathcal{F}_2)-separating family for G if, for any two vertex-disjoint induced subgraphs G_1\in \mathcal{F}_1 and G_2\in \mathcal{F}_2, there is a set F \in \mathcal{F} with V(G_1) \subseteq F and V(G_2) \cap F = \emptyset. Given a graph G on n vertices, for any pair (\mathcal{F}_1,\mathcal{F}_2) of hereditary graph families with sublinear communication complexity for \mathsf{GDISJ}_{G,\mathcal{F}_1,\mathcal{F}_2}, we give an enumeration algorithm that finds a subexponential-sized (\mathcal{F}_1,\mathcal{F}_2)-separating family. In fact, we give an enumeration algorithm that finds a 2^{o(k)} n^{O(1)}-sized (\mathcal{F}_1,\mathcal{F}_2)-separating family, where k denotes the size of a minimum-sized set S of vertices such that V(G)\setminus S has a bipartition (V_1,V_2) with G[V_1] \in \mathcal{F}_1 and G[V_2]\in \mathcal{F}_2. We exhibit a wide range of applications for these separating families, to obtain combinatorial bounds, enumeration algorithms, as well as exact and FPT algorithms for several problems.

Cite as

Sudeshna Kolay, Fahad Panolan, and Saket Saurabh. Communication Complexity of Pairs of Graph Families with Applications. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kolay_et_al:LIPIcs.MFCS.2017.13,
  author =	{Kolay, Sudeshna and Panolan, Fahad and Saurabh, Saket},
  title =	{{Communication Complexity of Pairs of Graph Families with Applications}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.13},
  URN =		{urn:nbn:de:0030-drops-80849},
  doi =		{10.4230/LIPIcs.MFCS.2017.13},
  annote =	{Keywords: Communication Complexity, Separating Family, FPT algorithms}
}
Document
Monitor Logics for Quantitative Monitor Automata

Authors: Erik Paul


Abstract
We introduce a new logic called Monitor Logic and show that it is expressively equivalent to Quantitative Monitor Automata.

Cite as

Erik Paul. Monitor Logics for Quantitative Monitor Automata. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 14:1-14:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{paul:LIPIcs.MFCS.2017.14,
  author =	{Paul, Erik},
  title =	{{Monitor Logics for Quantitative Monitor Automata}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{14:1--14:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.14},
  URN =		{urn:nbn:de:0030-drops-81133},
  doi =		{10.4230/LIPIcs.MFCS.2017.14},
  annote =	{Keywords: Quantitative Monitor Automata, Nested Weighted Automata, Monitor Logics, Weighted Logics}
}
Document
The Complexity of Quantum Disjointness

Authors: Hartmut Klauck


Abstract
We introduce the communication problem QNDISJ, short for Quantum (Unique) Non-Disjointness, and study its complexity under different modes of communication complexity. The main motivation for the problem is that it is a candidate for the separation of the quantum communication complexity classes QMA and QCMA. The problem generalizes the Vector-in-Subspace and Non-Disjointness problems. We give tight bounds for the QMA, quantum, and randomized communication complexities of the problem. We show polynomially related upper and lower bounds for the MA complexity. We also show an upper bound for QCMA protocols, and show that the bound is tight for a natural class of QCMA protocols for the problem. The latter lower bound is based on a geometric lemma, which states that every subset of the n-dimensional sphere of measure 2^{-p} must contain an orthonormal set of points of size Omega(n/p). We also study a "small-spaces" version of the problem, and give upper and lower bounds for its randomized complexity that show that the QNDISJ problem is harder than Non-disjointness for randomized protocols. Interestingly, for quantum modes the complexity depends only on the dimension of the smaller space, whereas for classical modes the dimension of the larger space matters.

Cite as

Hartmut Klauck. The Complexity of Quantum Disjointness. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 15:1-15:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{klauck:LIPIcs.MFCS.2017.15,
  author =	{Klauck, Hartmut},
  title =	{{The Complexity of Quantum Disjointness}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.15},
  URN =		{urn:nbn:de:0030-drops-81010},
  doi =		{10.4230/LIPIcs.MFCS.2017.15},
  annote =	{Keywords: Communication Complexity, Quantum Proof Systems}
}
Document
Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis

Authors: Xiaotie Deng, Yansong Gao, and Jie Zhang


Abstract
The approximation ratio has become one of the dominant measures in mechanism design problems. In light of analysis of algorithms, we define the smoothed approximation ratio to compare the performance of the optimal mechanism and a truthful mechanism when the inputs are subject to random perturbations of the worst-case inputs, and define the average-case approximation ratio to compare the performance of these two mechanisms when the inputs follow a distribution. For the one-sided matching problem, Filos-Ratsikas et al. [2014] show that, amongst all truthful mechanisms, random priority achieves the tight approximation ratio bound of Theta(sqrt{n}). We prove that, despite this worst-case bound, random priority has a constant smoothed approximation ratio. To the best of our knowledge, this is the first work that asymptotically differentiates the smoothed approximation ratio from the worst-case approximation ratio for mechanism design problems. For the average case, we show that our approximation ratio can be improved to 1+e. These results partially explain why random priority has been successfully used in practice, although in the worst case the optimal social welfare is Theta(sqrt{n}) times what random priority achieves. These results also pave the way for further studies of smoothed and average-case analysis for approximate mechanism design problems, beyond the worst-case analysis.
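
For readers unfamiliar with the mechanism, random priority (random serial dictatorship) for one-sided matching can be sketched as follows; the input format and tie handling are simplifications of our own:

import random

def random_priority(preferences, rng=random.Random(0)):
    # preferences[i] lists agent i's items from most to least preferred;
    # agents are processed in a uniformly random order and each takes its
    # most preferred item that is still available.
    order = list(range(len(preferences)))
    rng.shuffle(order)
    taken, assignment = set(), {}
    for agent in order:
        item = next(x for x in preferences[agent] if x not in taken)
        assignment[agent] = item
        taken.add(item)
    return assignment

prefs = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]
print(random_priority(prefs))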

Cite as

Xiaotie Deng, Yansong Gao, and Jie Zhang. Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{deng_et_al:LIPIcs.MFCS.2017.16,
  author =	{Deng, Xiaotie and Gao, Yansong and Zhang, Jie},
  title =	{{Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{16:1--16:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.16},
  URN =		{urn:nbn:de:0030-drops-80936},
  doi =		{10.4230/LIPIcs.MFCS.2017.16},
  annote =	{Keywords: mechanism design, approximation ratio, smoothed analysis, average-case analysis}
}
Document
Time Complexity of Constraint Satisfaction via Universal Algebra

Authors: Peter Jonsson, Victor Lagerkvist, and Biman Roy


Abstract
The exponential-time hypothesis (ETH) states that 3-SAT is not solvable in subexponential time, i.e. not solvable in O(c^n) time for arbitrary c > 1, where n denotes the number of variables. Problems like k-SAT can be viewed as special cases of the constraint satisfaction problem (CSP), which is the problem of determining whether a set of constraints is satisfiable. In this paper we study the worst-case time complexity of NP-complete CSPs. Our main interest is in the CSP problem parameterized by a constraint language Gamma (CSP(Gamma)), and how the choice of Gamma affects the time complexity. It is believed that CSP(Gamma) is either tractable or NP-complete, and the algebraic CSP dichotomy conjecture gives a sharp delineation of these two classes based on algebraic properties of constraint languages. Under this conjecture and the ETH, we first rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Gamma) problems. This result also extends to certain infinite-domain CSPs and structurally restricted CSP(Gamma) problems. We then begin a study of the complexity of NP-complete CSPs where one is allowed to arbitrarily restrict the values of individual variables, which is a very well-studied subclass of CSPs. For such CSPs with finite domain D, we identify a relation S_D such that (1) CSP({S_D}) is NP-complete and (2) if CSP(Gamma) over D is NP-complete and solvable in O(c^n) time, then CSP({S_D}) is solvable in O(c^n) time, too. Hence, the time complexity of CSP({S_D}) is a lower bound for all CSPs of this particular kind. We also prove that the complexity of CSP({S_D}) is decreasing when |D| increases, unless the ETH is false. This implies, for instance, that for every c > 1 there exists a constraint language Gamma over a finite domain such that CSP(Gamma) is NP-complete and solvable in O(c^n) time.

Cite as

Peter Jonsson, Victor Lagerkvist, and Biman Roy. Time Complexity of Constraint Satisfaction via Universal Algebra. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{jonsson_et_al:LIPIcs.MFCS.2017.17,
  author =	{Jonsson, Peter and Lagerkvist, Victor and Roy, Biman},
  title =	{{Time Complexity of Constraint Satisfaction via Universal Algebra}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.17},
  URN =		{urn:nbn:de:0030-drops-80710},
  doi =		{10.4230/LIPIcs.MFCS.2017.17},
  annote =	{Keywords: Clone Theory, Universal Algebra, Constraint Satisfaction Problems}
}
Document
The Hardness of Solving Simple Word Equations

Authors: Joel D. Day, Florin Manea, and Dirk Nowotka


Abstract
We investigate the class of regular-ordered word equations. In such equations, each variable occurs at most once on each side, and the order of the variables occurring on the left- and right-hand sides is the same (the variables can, however, be separated by potentially distinct constant factors). Surprisingly, we obtain that solving such simple equations, even when both sides contain exactly the same variables, is NP-hard. By considerations regarding the combinatorial structure of the minimal solutions of the more general quadratic equations, we obtain that the satisfiability problem for regular-ordered equations is in NP. The complexity of solving such word equations under regular constraints is also settled. Finally, we show that a related class of simple word equations, which generalises one-variable equations, is in P.
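
A small example of the shape of equation considered (an instance of our own): in

\[
  x_1 \, a \, x_2 \;=\; a \, x_1 \, x_2
\]

each variable occurs exactly once on each side and in the same order on both sides; its solutions are exactly the assignments with x_1 in a^* and x_2 arbitrary, since cancelling the common suffix x_2 leaves x_1 a = a x_1.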

Cite as

Joel D. Day, Florin Manea, and Dirk Nowotka. The Hardness of Solving Simple Word Equations. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{day_et_al:LIPIcs.MFCS.2017.18,
  author =	{Day, Joel D. and Manea, Florin and Nowotka, Dirk},
  title =	{{The Hardness of Solving Simple Word Equations}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.18},
  URN =		{urn:nbn:de:0030-drops-81233},
  doi =		{10.4230/LIPIcs.MFCS.2017.18},
  annote =	{Keywords: Word Equations, Regular Patterns, Regular Constraints}
}
Document
Comparison of Max-Plus Automata and Joint Spectral Radius of Tropical Matrices

Authors: Laure Daviaud, Pierre Guillon, and Glenn Merlet


Abstract
Weighted automata over the tropical semiring Zmax are closely related to finitely generated semigroups of matrices over Zmax. In this paper, we use results in automata theory to study two quantities associated with sets of matrices: the joint spectral radius and the ultimate rank. We prove that these two quantities are not computable over the tropical semiring, i.e. there is no algorithm that takes as input a finite set of matrices S and provides as output the joint spectral radius (resp. the ultimate rank) of S. On the other hand, we prove that the joint spectral radius is nevertheless approximable and we exhibit restricted cases in which the joint spectral radius and the ultimate rank are computable. To reach this aim, we study the problem of comparing functions computed by weighted automata over the tropical semiring. This problem is known to be undecidable, and we prove that it remains undecidable in some specific subclasses of automata.
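
For readers less familiar with the setting (illustration only, example matrices our own): over Zmax, "addition" is max and "multiplication" is +, with minus infinity acting as the zero element, so the product generating these matrix semigroups is the following max-plus product:

NEG_INF = float("-inf")

def maxplus_mul(A, B):
    # (A (x) B)[i][j] = max_k (A[i][k] + B[k][j]) over the tropical semiring Zmax.
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[0, 1], [NEG_INF, 0]]
B = [[2, NEG_INF], [0, 3]]
print(maxplus_mul(A, B))  # [[2, 4], [0, 3]]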

Cite as

Laure Daviaud, Pierre Guillon, and Glenn Merlet. Comparison of Max-Plus Automata and Joint Spectral Radius of Tropical Matrices. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{daviaud_et_al:LIPIcs.MFCS.2017.19,
  author =	{Daviaud, Laure and Guillon, Pierre and Merlet, Glenn},
  title =	{{Comparison of Max-Plus Automata and Joint Spectral Radius of Tropical Matrices}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.19},
  URN =		{urn:nbn:de:0030-drops-81052},
  doi =		{10.4230/LIPIcs.MFCS.2017.19},
  annote =	{Keywords: max-plus automata, max-plus matrices, weighted automata, tropical semiring, joint spectral radius, ultimate rank}
}
Document
Binary Search in Graphs Revisited

Authors: Argyrios Deligkas, George B. Mertzios, and Paul G. Spirakis


Abstract
In the classical binary search in a path the aim is to detect an unknown target by asking as few queries as possible, where each query reveals the direction to the target. This binary search algorithm has been recently extended by [Emamjomeh-Zadeh et al., STOC, 2016] to the problem of detecting a target in an arbitrary graph. Similarly to the classical case in the path, the algorithm of Emamjomeh-Zadeh et al. maintains a candidates' set for the target, while each query asks an appropriately chosen vertex, the "median", which minimises a potential \Phi among the vertices of the candidates' set. In this paper we address three open questions posed by Emamjomeh-Zadeh et al., namely (a) detecting a target when the query response is a direction to an approximately shortest path to the target, (b) detecting a target when querying a vertex that is an approximate median of the current candidates' set (instead of an exact one), and (c) detecting multiple targets, for which to the best of our knowledge no progress has been made so far. We resolve questions (a) and (b) by providing appropriate upper and lower bounds, as well as a new potential \Gamma that guarantees efficient target detection even when querying an approximate median each time. With respect to (c), we initiate a systematic study of detecting two targets in graphs and we identify sufficient conditions on the queries that allow for strong (linear) lower bounds and strong (polylogarithmic) upper bounds on the number of queries. All of our positive results can be derived using our new potential \Gamma, which allows querying approximate medians.
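
A minimal sketch of the exact-median, error-free strategy recalled above (our own simplified implementation for unweighted graphs; the paper itself is concerned with approximate responses, approximate medians and multiple targets):

import networkx as nx

def find_target(G, target):
    # Query the vertex minimising the potential Phi (sum of distances to the
    # candidates); the oracle returns a neighbour on a shortest path to the
    # target, and candidates inconsistent with that answer are discarded.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    candidates, queries = set(G.nodes), 0
    while True:
        q = min(G.nodes, key=lambda v: sum(dist[v][c] for c in candidates))
        queries += 1
        if q == target:
            return q, queries
        answer = next(u for u in G.neighbors(q)
                      if dist[u][target] == dist[q][target] - 1)
        candidates = {c for c in candidates
                      if dist[answer][c] == dist[q][c] - 1}

G = nx.cycle_graph(16)
print(find_target(G, 11))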

Cite as

Argyrios Deligkas, George B. Mertzios, and Paul G. Spirakis. Binary Search in Graphs Revisited. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 20:1-20:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{deligkas_et_al:LIPIcs.MFCS.2017.20,
  author =	{Deligkas, Argyrios and Mertzios, George B. and Spirakis, Paul G.},
  title =	{{Binary Search in Graphs Revisited}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{20:1--20:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.20},
  URN =		{urn:nbn:de:0030-drops-80589},
  doi =		{10.4230/LIPIcs.MFCS.2017.20},
  annote =	{Keywords: binary search, graph, approximate query, probabilistic algorithm, lower bound.}
}
Document
A Formal Semantics of Influence in Bayesian Reasoning

Authors: Bart Jacobs and Fabio Zanasi


Abstract
This paper proposes a formal definition of influence in Bayesian reasoning, based on the notions of state (as probability distribution), predicate, validity and conditioning. Our approach highlights how conditioning a joint entwined/entangled state with a predicate on one of its components has 'crossover' influence on the other components. We use the total variation metric on probability distributions to quantitatively measure such influence. These insights are applied to give a rigorous explanation of the fundamental concept of d-separation in Bayesian networks.
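
A toy numerical example of the "crossover" influence being formalised (the joint distribution, the predicate and the code are ours): conditioning an entwined joint state on a predicate about the first component shifts the marginal of the second component, and the total variation distance measures by how much.

def total_variation(p, q):
    # Total variation distance between two distributions on the same finite set.
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

def marginal_B(joint):
    # Marginal of the second component, renormalising the given (sub)distribution.
    z = sum(joint.values())
    return {b: sum(p for (a, bb), p in joint.items() if bb == b) / z for b in (0, 1)}

# Joint state over (A, B) in which the two components tend to agree.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

prior = marginal_B(joint)
# Condition on the predicate "A = 1" about the first component, then renormalise.
posterior = marginal_B({ab: p for ab, p in joint.items() if ab[0] == 1})
print(prior, posterior, total_variation(prior, posterior))  # influence = 0.3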

Cite as

Bart Jacobs and Fabio Zanasi. A Formal Semantics of Influence in Bayesian Reasoning. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{jacobs_et_al:LIPIcs.MFCS.2017.21,
  author =	{Jacobs, Bart and Zanasi, Fabio},
  title =	{{A Formal Semantics of Influence in Bayesian Reasoning}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.21},
  URN =		{urn:nbn:de:0030-drops-80896},
  doi =		{10.4230/LIPIcs.MFCS.2017.21},
  annote =	{Keywords: probability distribution, Bayesian network, influence}
}
Document
The Complexity of SORE-definability Problems

Authors: Ping Lu, Zhilin Wu, and Haiming Chen


Abstract
Single occurrence regular expressions (SORE) are a special kind of deterministic regular expressions, which are extensively used in the schema languages DTD and XSD for XML documents. In this paper, motivated by the simplification of XML schemas, we consider the SORE-definability problem: given a regular expression, decide whether it has an equivalent SORE. We investigate the complexity of the SORE-definability problem extensively: we consider both (standard) regular expressions and regular expressions with counting, and distinguish between alphabets of size at least two and unary alphabets. In all cases, we obtain tight complexity bounds. In addition, we consider another variant of this problem, the bounded SORE-definability problem, which is to decide, given a regular expression E and a number M (encoded in unary or binary), whether there is an SORE that is equivalent to E on the set of words of length at most M. We show that in several cases there is an exponential decrease in complexity when switching from the SORE-definability problem to its bounded variant.
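For readers unfamiliar with the object: a single occurrence regular expression is one in which every alphabet symbol occurs at most once, e.g. (a+b)*c is single-occurrence while a(a+b)* is not. The small Python check below verifies only this syntactic property on a string representation; it has nothing to do with the much harder definability question studied in the paper, and the names are illustrative.

from collections import Counter

def is_single_occurrence(expr, alphabet):
    # counts only alphabet symbols, ignoring operators and parentheses
    counts = Counter(ch for ch in expr if ch in alphabet)
    return all(c <= 1 for c in counts.values())

assert is_single_occurrence("(a+b)*c", alphabet="abc")
assert not is_single_occurrence("a(a+b)*", alphabet="abc")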

Cite as

Ping Lu, Zhilin Wu, and Haiming Chen. The Complexity of SORE-definability Problems. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.MFCS.2017.22,
  author =	{Lu, Ping and Wu, Zhilin and Chen, Haiming},
  title =	{{The Complexity of SORE-definability Problems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.22},
  URN =		{urn:nbn:de:0030-drops-80822},
  doi =		{10.4230/LIPIcs.MFCS.2017.22},
  annote =	{Keywords: Single occurrence regular expressions, Definability, Complexity}
}
Document
TC^0 Circuits for Algorithmic Problems in Nilpotent Groups

Authors: Alexei Myasnikov and Armin Weiß


Abstract
Recently, Macdonald et al. showed that many algorithmic problems for finitely generated nilpotent groups, including computation of normal forms, the subgroup membership problem, the conjugacy problem, and computation of subgroup presentations, can be done in LOGSPACE. Here we follow their approach and show that all these problems are complete for the uniform circuit class TC^0 - uniformly for all r-generated nilpotent groups of class at most c, for fixed r and c. Moreover, if we allow a certain binary representation of the inputs, then the word problem and computation of normal forms are still in uniform TC^0, while all the other problems we examine are shown to be TC^0-Turing reducible to the problem of computing greatest common divisors and expressing them as linear combinations.

Cite as

Alexei Myasnikov and Armin Weiß. TC^0 Circuits for Algorithmic Problems in Nilpotent Groups. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{myasnikov_et_al:LIPIcs.MFCS.2017.23,
  author =	{Myasnikov, Alexei and Wei{\ss}, Armin},
  title =	{{TC^0 Circuits for Algorithmic Problems in Nilpotent Groups}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.23},
  URN =		{urn:nbn:de:0030-drops-81089},
  doi =		{10.4230/LIPIcs.MFCS.2017.23},
  annote =	{Keywords: nilpotent groups, TC^0, abelian groups, word problem, conjugacy problem, subgroup membership problem, greatest common divisors}
}
Document
Better Complexity Bounds for Cost Register Automata

Authors: Eric Allender, Andreas Krebs, and Pierre McKenzie


Abstract
Cost register automata (CRAs) are one-way finite automata whose transitions have the side effect that a register is set to the result of applying a state-dependent semiring operation to a pair of registers. Here it is shown that CRAs over the tropical semiring (N U {infinity}, min, +) can simulate polynomial time computation, proving along the way that a naturally defined width-k circuit value problem over the tropical semiring is P-complete. Then the copyless variant of the CRA, requiring that semiring operations be applied to distinct registers, is shown to be no more powerful than NC^1 when the semiring is (Z,+,x) or (Gamma^*,max,concat). This relates questions left open in recent work on the complexity of CRA-computable functions to long-standing class separation conjectures in complexity theory, such as NC versus P and NC^1 versus GapNC^1.
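As a concrete illustration of the model (not of the paper's results), here is a small Python simulator for a cost register automaton, together with a one-state example over the tropical semiring (N U {infinity}, min, +) that computes min(#a, #b); encoding register updates as plain Python functions is our own simplification.

def run_cra(word, start_state, delta, output, registers):
    # delta[(state, letter)] = (next_state, update); update maps the current
    # register valuation to the new one; output[state] produces the result
    state, regs = start_state, dict(registers)
    for letter in word:
        state, update = delta[(state, letter)]
        regs = update(regs)
    return output[state](regs)

# One-state CRA computing min(#a, #b): the counters use the semiring product
# (ordinary +), and the output combines them with the semiring sum (min).
delta = {
    ("q", "a"): ("q", lambda r: {"ca": r["ca"] + 1, "cb": r["cb"]}),
    ("q", "b"): ("q", lambda r: {"ca": r["ca"], "cb": r["cb"] + 1}),
}
output = {"q": lambda r: min(r["ca"], r["cb"])}
assert run_cra("abbab", "q", delta, output, {"ca": 0, "cb": 0}) == 2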

Cite as

Eric Allender, Andreas Krebs, and Pierre McKenzie. Better Complexity Bounds for Cost Register Automata. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 24:1-24:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{allender_et_al:LIPIcs.MFCS.2017.24,
  author =	{Allender, Eric and Krebs, Andreas and McKenzie, Pierre},
  title =	{{Better Complexity Bounds for Cost Register Automata}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{24:1--24:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.24},
  URN =		{urn:nbn:de:0030-drops-80661},
  doi =		{10.4230/LIPIcs.MFCS.2017.24},
  annote =	{Keywords: computational complexity, cost registers}
}
Document
Kernelization of the Subset General Position Problem in Geometry

Authors: Jean-Daniel Boissonnat, Kunal Dutta, Arijit Ghosh, and Sudeshna Kolay


Abstract
In this paper, we consider variants of the Geometric Subset General Position problem. In defining this problem, a geometric subsystem is specified, such as a subsystem of lines, hyperplanes or spheres. The input of the problem is a set of n points in \mathbb{R}^d and a positive integer k. The objective is to find a subset of at least k input points such that this subset is in general position with respect to the specified subsystem. For example, a set of points is in general position with respect to a subsystem of hyperplanes in \mathbb{R}^d if no d+1 points lie on the same hyperplane. In this paper, we study the Hyperplane Subset General Position problem under two parameterizations. When parameterized by k, we exhibit a polynomial kernelization for the problem. When parameterized by the dual parameter h = n-k, we exhibit polynomial kernels which are also tight, under standard complexity-theoretic assumptions. We also exhibit similar kernelization results for d-Polynomial Subset General Position, where a vector space of polynomials of degree at most d is specified as the underlying subsystem, with b the size of a basis for this vector space. The objective is to find a set of at least k input points, or in the dual to delete at most h = n-k points, such that no b+1 points lie on the same polynomial. Notice that this is a generalization of many well-studied geometric variants of the Set Cover problem, such as Circle Subset General Position. We also study general projective variants of these problems. These problems are also related to other geometric problems, like the Subset Delaunay Triangulation problem.
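For intuition only, a brute-force Python sketch of the planar special case, where general position with respect to lines in R^2 means that no three chosen points are collinear; the paper's contribution is kernelization, not this exhaustive search, and all names are illustrative.

from itertools import combinations

def collinear(p, q, r):
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    # cross product of (q - p) and (r - p) vanishes exactly for collinear points
    return (x2 - x1) * (y3 - y1) == (y2 - y1) * (x3 - x1)

def max_general_position_subset(points):
    # try subset sizes from largest to smallest and return the first that works
    for size in range(len(points), 0, -1):
        for subset in combinations(points, size):
            if not any(collinear(p, q, r) for p, q, r in combinations(subset, 3)):
                return list(subset)
    return []

pts = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)]   # (0,0), (1,1), (2,2) are collinear
assert len(max_general_position_subset(pts)) == 4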

Cite as

Jean-Daniel Boissonnat, Kunal Dutta, Arijit Ghosh, and Sudeshna Kolay. Kernelization of the Subset General Position Problem in Geometry. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 25:1-25:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{boissonnat_et_al:LIPIcs.MFCS.2017.25,
  author =	{Boissonnat, Jean-Daniel and Dutta, Kunal and Ghosh, Arijit and Kolay, Sudeshna},
  title =	{{Kernelization of the Subset General Position Problem in Geometry}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{25:1--25:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.25},
  URN =		{urn:nbn:de:0030-drops-80863},
  doi =		{10.4230/LIPIcs.MFCS.2017.25},
  annote =	{Keywords: Incidence Geometry, Kernel Lower bounds, Hyperplanes, Bounded degree polynomials}
}
Document
Satisfiable Tseitin Formulas Are Hard for Nondeterministic Read-Once Branching Programs

Authors: Ludmila Glinskih and Dmitry Itsykson


Abstract
We consider satisfiable Tseitin formulas TS_{G,c} based on d-regular expanders G with the absolute value of the second largest eigenvalue less than d/3. We prove that any nondeterministic read-once branching program (1-NBP) representing TS_{G,c} has size 2^{\Omega(n)}, where n is the number of vertices in G. This extends the recent result of Itsykson et al. [STACS 2017] from OBDDs to 1-NBPs. On the other hand, it is easy to see that TS_{G,c} can be represented as a read-2 branching program (2-BP) of size O(n), as the negation of a nondeterministic read-once branching program (1-coNBP) of size O(n), and as a CNF formula of size O(n). Thus TS_{G,c} gives the best possible separations (up to a constant in the exponent) between 1-NBP and 2-BP, between 1-NBP and 1-coNBP, and between 1-NBP and CNF.
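To make the formulas concrete, here is an illustrative Python sketch that writes the parity constraints of a Tseitin formula as CNF clauses: one XOR constraint per vertex over its incident edge variables, with each wrong-parity assignment forbidden by one clause. The encoding choices are ours, not taken from the paper.

from itertools import product

def tseitin_cnf(edges, charge):
    # edges: list of (u, v); charge: dict vertex -> 0/1 (default 0).
    # Variable i+1 stands for edge edges[i]; clauses are DIMACS-style lists.
    incident = {}
    for i, (u, v) in enumerate(edges):
        incident.setdefault(u, []).append(i + 1)
        incident.setdefault(v, []).append(i + 1)
    clauses = []
    for vertex, variables in incident.items():
        for assignment in product((0, 1), repeat=len(variables)):
            if sum(assignment) % 2 != charge.get(vertex, 0):
                # forbid this wrong-parity assignment with a single clause
                clauses.append([var if bit == 0 else -var
                                for var, bit in zip(variables, assignment)])
    return clauses

# Triangle with a single odd charge: the total charge is odd, so TS_{G,c} is
# unsatisfiable; making the total charge even yields a satisfiable formula.
triangle = [(0, 1), (1, 2), (2, 0)]
clauses = tseitin_cnf(triangle, {0: 1})
assert len(clauses) == 6   # 3 vertices, 2 forbidden parity patterns each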

Cite as

Ludmila Glinskih and Dmitry Itsykson. Satisfiable Tseitin Formulas Are Hard for Nondeterministic Read-Once Branching Programs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 26:1-26:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{glinskih_et_al:LIPIcs.MFCS.2017.26,
  author =	{Glinskih, Ludmila and Itsykson, Dmitry},
  title =	{{Satisfiable Tseitin Formulas Are Hard for Nondeterministic Read-Once Branching Programs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{26:1--26:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.26},
  URN =		{urn:nbn:de:0030-drops-80767},
  doi =		{10.4230/LIPIcs.MFCS.2017.26},
  annote =	{Keywords: Tseitin formula, read-once branching program, expander}
}
Document
The Complexity of Quantified Constraints Using the Algebraic Formulation

Authors: Catarina Carvalho, Barnaby Martin, and Dmitriy Zhuk


Abstract
Let A be an idempotent algebra on a finite domain. We combine results of Chen, Zhuk and Carvalho et al. to argue that if A satisfies the polynomially generated powers property (PGP), then QCSP(Inv(A)) is in NP. We then use the result of Zhuk to prove a converse, that if Inv(A) satisfies the exponentially generated powers property (EGP), then QCSP(Inv(A)) is co-NP-hard. Since Zhuk proved that only PGP and EGP are possible, we derive a full dichotomy for the QCSP, justifying the moral correctness of what we term the Chen Conjecture. We examine in closer detail the situation for domains of size three. Over any finite domain, the only type of PGP that can occur is switchability. Switchability was introduced by Chen as a generalisation of the already-known Collapsibility. For three-element domain algebras A that are Switchable, we prove that for every finite subset Delta of Inv(A), Pol(Delta) is Collapsible. The significance of this is that, for QCSP on finite structures (over three-element domain), all QCSP tractability explained by Switchability is already explained by Collapsibility. Finally, we present a three-element domain complexity classification vignette, using known as well as derived results.

Cite as

Catarina Carvalho, Barnaby Martin, and Dmitriy Zhuk. The Complexity of Quantified Constraints Using the Algebraic Formulation. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{carvalho_et_al:LIPIcs.MFCS.2017.27,
  author =	{Carvalho, Catarina and Martin, Barnaby and Zhuk, Dmitriy},
  title =	{{The Complexity of Quantified Constraints Using the Algebraic Formulation}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.27},
  URN =		{urn:nbn:de:0030-drops-80793},
  doi =		{10.4230/LIPIcs.MFCS.2017.27},
  annote =	{Keywords: Quantified Constraints, Computational Complexity, Universal Algebra, Constraint Satisfaction}
}
Document
Induced Embeddings into Hamming Graphs

Authors: Martin Milanic, Peter Mursic, and Marcelo Mydlarz


Abstract
Let d be a positive integer. Can a given graph G be realized in R^d so that vertices are mapped to distinct points, two vertices being adjacent if and only if the corresponding points lie on a common line that is parallel to some axis? Graphs admitting such realizations have been studied in the literature for decades under different names. Peterson asked in [Discrete Appl. Math., 2003] about the complexity of the recognition problem. While the two-dimensional case corresponds to the class of line graphs of bipartite graphs and is well understood, the complexity question has remained open for all higher dimensions. In this paper, we answer this question. We establish the NP-completeness of the recognition problem for any fixed dimension, even in the class of bipartite graphs. To do this, we strengthen a characterization of induced subgraphs of 3-dimensional Hamming graphs due to Klavžar and Peterin. We complement the hardness result by showing that, for some important classes of perfect graphs, including chordal graphs and distance-hereditary graphs, the minimum dimension of the Euclidean space in which the graph can be realized, or the impossibility of doing so, can be determined in linear time.
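As a sanity check of the definition (not the paper's recognition algorithm), a small illustrative Python routine verifies whether a placement of vertices in R^d realizes a given graph in the sense above: points are distinct, and two vertices are adjacent if and only if their points differ in exactly one coordinate, i.e. lie on a common axis-parallel line. The example realizes a 4-cycle on the corners of a unit square.

from itertools import combinations

def is_realization(vertices, edges, placement):
    points = [placement[v] for v in vertices]
    if len(set(points)) != len(points):
        return False                     # points must be distinct
    for u, v in combinations(vertices, 2):
        differing = sum(a != b for a, b in zip(placement[u], placement[v]))
        on_common_axis_line = (differing == 1)
        if on_common_axis_line != ((u, v) in edges or (v, u) in edges):
            return False
    return True

vertices = ["a", "b", "c", "d"]
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")}
placement = {"a": (0, 0), "b": (0, 1), "c": (1, 1), "d": (1, 0)}
assert is_realization(vertices, edges, placement)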

Cite as

Martin Milanic, Peter Mursic, and Marcelo Mydlarz. Induced Embeddings into Hamming Graphs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 28:1-28:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{milanic_et_al:LIPIcs.MFCS.2017.28,
  author =	{Milanic, Martin and Mursic, Peter and Mydlarz, Marcelo},
  title =	{{Induced Embeddings into Hamming Graphs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{28:1--28:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.28},
  URN =		{urn:nbn:de:0030-drops-81289},
  doi =		{10.4230/LIPIcs.MFCS.2017.28},
  annote =	{Keywords: gridline graph, Hamming graph, induced embedding, NP-completeness, chordal graph}
}
Document
Structured Connectivity Augmentation

Authors: Fedor V. Fomin, Petr A. Golovach, and Dimitrios M. Thilikos


Abstract
We initiate the algorithmic study of the following "structured augmentation" question: is it possible to increase the connectivity of a given graph G by superposing it with another given graph H? More precisely, a graph F is the superposition of G and H with respect to an injective mapping \phi:V(H)->V(G) if every edge uv of F is either an edge of G, or \phi^{-1}(u)\phi^{-1}(v) is an edge of H. Thus F contains both G and H as subgraphs, and the edge set of F is the union of the edge sets of G and \phi(H). We consider the following optimization problem. Given graphs G, H, and a weight function \omega assigning non-negative weights to pairs of vertices of V(G), the task is to find \phi of minimum weight \omega(\phi)=\sum_{xy\in E(H)}\omega(\phi(x)\phi(y)) such that the edge connectivity of the superposition F of G and H with respect to \phi is higher than the edge connectivity of G. Our main result is the following "dichotomy" complexity classification. We say that a class of graphs C has bounded vertex-cover number if there is a constant t, depending on C only, such that the vertex-cover number of every graph from C does not exceed t. We show that for every class of graphs C with bounded vertex-cover number, the problems of superposing into a connected graph F and into a 2-edge-connected graph F are solvable in polynomial time when H\in C. On the other hand, for any hereditary class C with unbounded vertex-cover number, both problems are NP-hard when H\in C. For the unweighted variants of the structured augmentation problems, i.e. the problems where the task is to identify whether there is a superposition of graphs of the required connectivity, we provide necessary and sufficient combinatorial conditions for the existence of such superpositions. These conditions imply polynomial time algorithms solving the unweighted variants of the problems.
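The superposition itself is easy to compute once \phi is fixed; the hard part is choosing \phi. A minimal illustrative Python sketch of the two definitions (the edge set of F and the weight \omega(\phi)), with a toy instance where superposing a single edge of H onto a path turns it into a triangle and so raises the edge connectivity from 1 to 2; all names are ours.

def superposition(edges_G, edges_H, phi):
    # E(F) = E(G) together with the phi-image of E(H)
    return ({frozenset(e) for e in edges_G}
            | {frozenset((phi[x], phi[y])) for x, y in edges_H})

def weight(edges_H, phi, omega):
    # omega(phi) = sum of omega(phi(x)phi(y)) over the edges xy of H
    return sum(omega[frozenset((phi[x], phi[y]))] for x, y in edges_H)

edges_G, edges_H, phi = [(1, 2), (2, 3)], [("a", "b")], {"a": 1, "b": 3}
omega = {frozenset((1, 3)): 5}
assert superposition(edges_G, edges_H, phi) == {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
assert weight(edges_H, phi, omega) == 5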

Cite as

Fedor V. Fomin, Petr A. Golovach, and Dimitrios M. Thilikos. Structured Connectivity Augmentation. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 29:1-29:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.MFCS.2017.29,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Thilikos, Dimitrios M.},
  title =	{{Structured Connectivity Augmentation}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{29:1--29:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.29},
  URN =		{urn:nbn:de:0030-drops-80603},
  doi =		{10.4230/LIPIcs.MFCS.2017.29},
  annote =	{Keywords: connectivity augmentation, graph superposition, complexity}
}
Document
Combinatorial Properties and Recognition of Unit Square Visibility Graphs

Authors: Katrin Casel, Henning Fernau, Alexander Grigoriev, Markus L. Schmid, and Sue Whitesides


Abstract
Unit square (grid) visibility graphs (USV and USGV, resp.) are described by axis-parallel visibility between unit squares placed (on integer grid coordinates) in the plane. We investigate combinatorial properties of these graph classes and the hardness of variants of the recognition problem, i.e., the problem of representing USGV with fixed visibilities within small area and, for USV, the general recognition problem.

Cite as

Katrin Casel, Henning Fernau, Alexander Grigoriev, Markus L. Schmid, and Sue Whitesides. Combinatorial Properties and Recognition of Unit Square Visibility Graphs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 30:1-30:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{casel_et_al:LIPIcs.MFCS.2017.30,
  author =	{Casel, Katrin and Fernau, Henning and Grigoriev, Alexander and Schmid, Markus L. and Whitesides, Sue},
  title =	{{Combinatorial Properties and Recognition of Unit Square Visibility Graphs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{30:1--30:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.30},
  URN =		{urn:nbn:de:0030-drops-80770},
  doi =		{10.4230/LIPIcs.MFCS.2017.30},
  annote =	{Keywords: Visibility graphs, visibility layout, NP-completeness, exact algorithms}
}
Document
Weighted Operator Precedence Languages

Authors: Manfred Droste, Stefan Dück, Dino Mandrioli, and Matteo Pradella


Abstract
In recent years, renewed investigation of operator precedence languages (OPL) has led to the discovery of important properties thereof: OPL are closed with respect to all major operations, are characterized, besides the original grammar family, in terms of an automata family (OPA) and an MSO logic, and significantly generalize the well-known visibly pushdown languages (VPL). In another area of research, quantitative models of systems are also greatly in demand. In this paper, we lay the foundation to marry these two research fields. We introduce weighted operator precedence automata and show how they are strict extensions of both OPA and weighted visibly pushdown automata. We prove a Nivat-like result which shows that quantitative OPL can be described by unweighted OPA and very particular weighted OPA. In a Büchi-like theorem, we show that weighted OPA are expressively equivalent to a weighted MSO logic for OPL.

Cite as

Manfred Droste, Stefan Dück, Dino Mandrioli, and Matteo Pradella. Weighted Operator Precedence Languages. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 31:1-31:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{droste_et_al:LIPIcs.MFCS.2017.31,
  author =	{Droste, Manfred and D\"{u}ck, Stefan and Mandrioli, Dino and Pradella, Matteo},
  title =	{{Weighted Operator Precedence Languages}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{31:1--31:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.31},
  URN =		{urn:nbn:de:0030-drops-81150},
  doi =		{10.4230/LIPIcs.MFCS.2017.31},
  annote =	{Keywords: Quantitative automata, operator precedence languages, input-driven languages, visibly pushdown languages, quantitative logic}
}
Document
Model Checking and Validity in Propositional and Modal Inclusion Logics

Authors: Lauri Hella, Antti Kuusisto, Arne Meier, and Jonni Virtema


Abstract
Propositional and modal inclusion logic are formalisms that belong to the family of logics based on team semantics. This article investigates the model checking and validity problems of these logics. We identify complexity bounds for both problems, covering both lax and strict team semantics. By doing so, we come close to finalising the programme that ultimately aims to classify the complexities of the basic reasoning problems for modal and propositional dependence, independence, and inclusion logics.

Cite as

Lauri Hella, Antti Kuusisto, Arne Meier, and Jonni Virtema. Model Checking and Validity in Propositional and Modal Inclusion Logics. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hella_et_al:LIPIcs.MFCS.2017.32,
  author =	{Hella, Lauri and Kuusisto, Antti and Meier, Arne and Virtema, Jonni},
  title =	{{Model Checking and Validity in Propositional and Modal Inclusion Logics}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.32},
  URN =		{urn:nbn:de:0030-drops-81007},
  doi =		{10.4230/LIPIcs.MFCS.2017.32},
  annote =	{Keywords: Inclusion Logic, Model Checking, Complexity}
}
Document
Emptiness Problems for Integer Circuits

Authors: Dominik Barth, Moritz Beck, Titus Dose, Christian Glaßer, Larissa Michler, and Marc Technau


Abstract
We study the computational complexity of emptiness problems for circuits over sets of natural numbers with the operations union, intersection, complement, addition, and multiplication. For most settings of allowed operations we precisely characterize the complexity in terms of completeness for classes like NL, NP, and PSPACE. The case where intersection, addition, and multiplication are allowed turns out to be equivalent to the complement of polynomial identity testing (PIT). Our results imply the following improvements and insights on problems studied in earlier papers. We improve the bounds for the membership problem MC(\cup,\cap,¯,+,×) studied by McKenzie and Wagner 2007 and for the equivalence problem EQ(\cup,\cap,¯,+,×) studied by Glaßer et al. 2010. Moreover, it turns out that the following problems are equivalent to PIT, which shows that the challenge to improve their bounds is just a reformulation of a major open problem in algebraic computing complexity:
1. the membership problem MC(\cap,+,×) studied by McKenzie and Wagner 2007;
2. the integer membership problems MC_Z(+,×) and MC_Z(\cap,+,×) studied by Travers 2006;
3. the equivalence problem EQ(+,×) studied by Glaßer et al. 2010.
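To fix notation, a tiny illustrative Python evaluator for such circuits restricted to union, intersection, addition and multiplication (complement is omitted here, since it would require the infinite universe N); the gate encoding is our own, and brute-force evaluation like this is of course only feasible for toy circuits.

def evaluate(circuit, gate):
    op, *args = circuit[gate]
    if op == "const":
        return {args[0]}
    left, right = (evaluate(circuit, a) for a in args)
    if op == "union":
        return left | right
    if op == "intersection":
        return left & right
    if op == "add":                     # element-wise sums of the two sets
        return {x + y for x in left for y in right}
    if op == "mul":                     # element-wise products of the two sets
        return {x * y for x in left for y in right}
    raise ValueError(op)

# ({2} + {3}) intersected with ({1} x {5}) equals {5}, so the circuit is non-empty.
circuit = {
    "a": ("const", 2), "b": ("const", 3), "c": ("const", 1), "d": ("const", 5),
    "s": ("add", "a", "b"), "p": ("mul", "c", "d"),
    "out": ("intersection", "s", "p"),
}
assert evaluate(circuit, "out") == {5}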

Cite as

Dominik Barth, Moritz Beck, Titus Dose, Christian Glaßer, Larissa Michler, and Marc Technau. Emptiness Problems for Integer Circuits. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{barth_et_al:LIPIcs.MFCS.2017.33,
  author =	{Barth, Dominik and Beck, Moritz and Dose, Titus and Gla{\ss}er, Christian and Michler, Larissa and Technau, Marc},
  title =	{{Emptiness Problems for Integer Circuits}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.33},
  URN =		{urn:nbn:de:0030-drops-80641},
  doi =		{10.4230/LIPIcs.MFCS.2017.33},
  annote =	{Keywords: computational complexity, integer expressions, integer circuits, polynomial identity testing}
}
Document
Another Characterization of the Higher K-Trivials

Authors: Paul-Elliot Anglès d'Auriac and Benoit Monin


Abstract
In algorithmic randomness, the class of K-trivial sets has proved itself to be remarkable, due to its numerous different characterizations. In this paper we pursue some work already initiated on K-trivials in the context of higher randomness. In particular, we give another characterization of the non-hyperarithmetic higher K-trivial sets.

Cite as

Paul-Elliot Anglès d'Auriac and Benoit Monin. Another Characterization of the Higher K-Trivials. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 34:1-34:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{anglesdauriac_et_al:LIPIcs.MFCS.2017.34,
  author =	{Angl\`{e}s d'Auriac, Paul-Elliot and Monin, Benoit},
  title =	{{Another Characterization of the Higher K-Trivials}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{34:1--34:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.34},
  URN =		{urn:nbn:de:0030-drops-80837},
  doi =		{10.4230/LIPIcs.MFCS.2017.34},
  annote =	{Keywords: Algorithmic randomness, higher computability, K-triviality, effective descriptive set theory, Kolmogorov complexity}
}
Document
The Quantum Monad on Relational Structures

Authors: Samson Abramsky, Rui Soares Barbosa, Nadish de Silva, and Octavio Zapata


Abstract
Homomorphisms between relational structures play a central role in finite model theory, constraint satisfaction, and database theory. A central theme in quantum computation is to show how quantum resources can be used to gain advantage in information processing tasks. In particular, non-local games have been used to exhibit quantum advantage in boolean constraint satisfaction, and to obtain quantum versions of graph invariants such as the chromatic number. We show how quantum strategies for homomorphism games between relational structures can be viewed as Kleisli morphisms for a quantum monad on the (classical) category of relational structures and homomorphisms. We use these results to exhibit a wide range of examples of contextuality-powered quantum advantage, and to unify several apparently diverse strands of previous work.

Cite as

Samson Abramsky, Rui Soares Barbosa, Nadish de Silva, and Octavio Zapata. The Quantum Monad on Relational Structures. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 35:1-35:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{abramsky_et_al:LIPIcs.MFCS.2017.35,
  author =	{Abramsky, Samson and Barbosa, Rui Soares and de Silva, Nadish and Zapata, Octavio},
  title =	{{The Quantum Monad on Relational Structures}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{35:1--35:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.35},
  URN =		{urn:nbn:de:0030-drops-81290},
  doi =		{10.4230/LIPIcs.MFCS.2017.35},
  annote =	{Keywords: non-local games, quantum computation, monads}
}
Document
Towards a Polynomial Kernel for Directed Feedback Vertex Set

Authors: Benjamin Bergougnoux, Eduard Eiben, Robert Ganian, Sebastian Ordyniak, and M. S. Ramanujan


Abstract
In the Directed Feedback Vertex Set (DFVS) problem, the input is a directed graph D and an integer k. The objective is to determine whether there exists a set of at most k vertices intersecting every directed cycle of D. DFVS was shown to be fixed-parameter tractable when parameterized by solution size by Chen, Liu, Lu, O'Sullivan and Razgon [JACM 2008]; since then, the existence of a polynomial kernel for this problem has become one of the largest open problems in the area of parameterized algorithmics. In this paper, we study DFVS parameterized by the feedback vertex set number of the underlying undirected graph. We provide two main contributions: a polynomial kernel for this problem on general instances, and a linear kernel for the case where the input digraph is embeddable on a surface of bounded genus.
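For concreteness, a brute-force Python check of the underlying DFVS question (does some set of at most k vertices meet every directed cycle?); this only illustrates the problem statement and is unrelated to the kernelization techniques of the paper, and all names are ours.

from itertools import combinations

def is_acyclic(vertices, arcs):
    # Kahn-style check: repeatedly remove vertices of in-degree zero
    indeg = {v: 0 for v in vertices}
    for u, v in arcs:
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for x, y in arcs:
            if x == u:
                indeg[y] -= 1
                if indeg[y] == 0:
                    queue.append(y)
    return removed == len(vertices)

def has_dfvs(vertices, arcs, k):
    for size in range(k + 1):
        for s in combinations(vertices, size):
            rest = [v for v in vertices if v not in s]
            kept = [(u, v) for u, v in arcs if u not in s and v not in s]
            if is_acyclic(rest, kept):
                return True
    return False

# Two directed triangles sharing vertex 0: deleting vertex 0 breaks both cycles.
arcs = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
assert has_dfvs(range(5), arcs, 1)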

Cite as

Benjamin Bergougnoux, Eduard Eiben, Robert Ganian, Sebastian Ordyniak, and M. S. Ramanujan. Towards a Polynomial Kernel for Directed Feedback Vertex Set. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bergougnoux_et_al:LIPIcs.MFCS.2017.36,
  author =	{Bergougnoux, Benjamin and Eiben, Eduard and Ganian, Robert and Ordyniak, Sebastian and Ramanujan, M. S.},
  title =	{{Towards a Polynomial Kernel for Directed Feedback Vertex Set}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.36},
  URN =		{urn:nbn:de:0030-drops-81126},
  doi =		{10.4230/LIPIcs.MFCS.2017.36},
  annote =	{Keywords: parameterized algorithms, kernelization, (directed) feedback vertex set}
}
Document
Timed Network Games

Authors: Guy Avni, Shibashis Guha, and Orna Kupferman


Abstract
Network games are widely used as a model for selfish resource-allocation problems. In the classical model, each player selects a path connecting her source and target vertices. The cost of traversing an edge depends on the number of players that traverse it. Thus, the classical model abstracts away the fact that different users may use a resource at different times and for different durations, even though this plays an important role in defining the costs of the users in reality. For example, when transmitting packets in a communication network, routing traffic in a road network, or processing a task in a production system, the traversal of the network involves an inherent delay, and so sharing and congestion of resources crucially depend on time. We study timed network games, which add a time component to network games. Each vertex v in the network is associated with a cost function, mapping the load on v to the price that a player pays for staying in v for one time unit with this load. In addition, each edge has a guard, describing time intervals in which the edge can be traversed, forcing the players to spend time on vertices. Unlike earlier work that adds a time component to network games, the time in our model is continuous and cannot be discretized. In particular, players have uncountably many strategies, and a game may have uncountably many pure Nash equilibria. We study properties of timed network games with cost-sharing or congestion cost functions: their stability, equilibrium inefficiency, and complexity. In particular, we show that the answer to the question whether we can restrict attention to boundary strategies, namely ones in which edges are traversed only at the boundaries of guards, is mixed.

Cite as

Guy Avni, Shibashis Guha, and Orna Kupferman. Timed Network Games. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 37:1-37:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{avni_et_al:LIPIcs.MFCS.2017.37,
  author =	{Avni, Guy and Guha, Shibashis and Kupferman, Orna},
  title =	{{Timed Network Games}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{37:1--37:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.37},
  URN =		{urn:nbn:de:0030-drops-80675},
  doi =		{10.4230/LIPIcs.MFCS.2017.37},
  annote =	{Keywords: Network Games, Timed Automata, Nash Equilibrium, Equilibrium Inefficiency}
}
Document
Efficient Identity Testing and Polynomial Factorization in Nonassociative Free Rings

Authors: Vikraman Arvind, Rajit Datta, Partha Mukhopadhyay, and S. Raja


Abstract
In this paper we study arithmetic computations in the nonassociative and noncommutative free polynomial ring F{X}. Prior to this work, nonassociative arithmetic computation was considered by Hrubes, Wigderson, and Yehudayoff, who showed lower bounds and proved completeness results. We consider Polynomial Identity Testing and Polynomial Factorization in F{X} and show the following results.
1. Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give a deterministic polynomial-time algorithm to decide if f is identically zero. Our result is obtained by a suitable adaptation of the PIT algorithm of Raz and Shpilka for noncommutative ABPs.
2. Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give an efficient deterministic algorithm to compute circuits for the irreducible factors of f in polynomial time when F is the field of rationals. Over finite fields of characteristic p, our algorithm runs in time polynomial in the input size and p.

Cite as

Vikraman Arvind, Rajit Datta, Partha Mukhopadhyay, and S. Raja. Efficient Identity Testing and Polynomial Factorization in Nonassociative Free Rings. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 38:1-38:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{arvind_et_al:LIPIcs.MFCS.2017.38,
  author =	{Arvind, Vikraman and Datta, Rajit and Mukhopadhyay, Partha and Raja, S.},
  title =	{{Efficient Identity Testing and Polynomial Factorization in  Nonassociative Free Rings}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{38:1--38:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.38},
  URN =		{urn:nbn:de:0030-drops-80690},
  doi =		{10.4230/LIPIcs.MFCS.2017.38},
  annote =	{Keywords: Circuits, Nonassociative, Noncommutative, Polynomial Identity Testing, Factorization}
}
Document
Faster Algorithms for Mean-Payoff Parity Games

Authors: Krishnendu Chatterjee, Monika Henzinger, and Alexander Svozil


Abstract
Graph games provide the foundation for modeling and synthesis of reactive processes. Such games are played over graphs where the vertices are controlled by two adversarial players. We consider graph games where the objective of the first player is the conjunction of a qualitative objective (specified as a parity condition) and a quantitative objective (specified as a mean-payoff condition). There are two variants of the problem, namely, the threshold problem where the quantitative goal is to ensure that the mean-payoff value is above a threshold, and the value problem where the quantitative goal is to ensure the optimal mean-payoff value; in both cases ensuring the qualitative parity objective. The previous best-known algorithms for game graphs with n vertices, m edges, parity objectives with d priorities, and maximal absolute reward value W for mean-payoff objectives, are as follows: O(n^(d+1)·m·W) for the threshold problem, and O(n^(d+2)·m·W) for the value problem. Our main contributions are faster algorithms, and the running times of our algorithms are as follows: O(n^(d-1)·m·W) for the threshold problem, and O(n^d·m·W·log(n·W)) for the value problem. For mean-payoff parity objectives with two priorities, our algorithms match the best-known bounds of the algorithms for mean-payoff games (without conjunction with parity objectives). Our results are relevant in synthesis of reactive systems with both functional requirement (given as a qualitative objective) and performance requirement (given as a quantitative objective).

Cite as

Krishnendu Chatterjee, Monika Henzinger, and Alexander Svozil. Faster Algorithms for Mean-Payoff Parity Games. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.MFCS.2017.39,
  author =	{Chatterjee, Krishnendu and Henzinger, Monika and Svozil, Alexander},
  title =	{{Faster Algorithms for Mean-Payoff Parity Games}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{39:1--39:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.39},
  URN =		{urn:nbn:de:0030-drops-80809},
  doi =		{10.4230/LIPIcs.MFCS.2017.39},
  annote =	{Keywords: graph games, mean-payoff parity games}
}
Document
Attainable Values of Reset Thresholds

Authors: Michalina Dzyga, Robert Ferens, Vladimir V. Gusev, and Marek Szykula


Abstract
An automaton is synchronizing if there exists a word that sends all states of the automaton to a single state. The reset threshold is the length of the shortest such word. We study the set RT_n of attainable reset thresholds of automata with n states. Relying on constructions of digraphs with known local exponents, we show that the intervals [1, (n^2-3n+4)/2] and [(p-1)(q-1), p(q-2)+n-q+1], where 2 <= p < q <= n, p+q > n, gcd(p,q)=1, belong to RT_n, even if we restrict our attention to strongly connected automata. Moreover, we prove that in this case the smallest value that does not belong to RT_n is at least n^2 - O(n^{1.7625} log n / log log n). This value can be increased further assuming certain conjectures about the gaps between consecutive prime numbers. We also show that any value smaller than n(n-1)/2 is attainable by an automaton with a sink state, and any value smaller than n^2-O(n^{1.5}) is attainable in the general case. Furthermore, we solve the problem of the existence of slowly synchronizing automata over an arbitrarily large alphabet, by presenting, for every fixed size of the alphabet, an infinite series of irreducibly synchronizing automata with reset threshold n^2-O(n).
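As an illustration of the notions, a short Python routine computes the reset threshold of a small automaton by breadth-first search over subsets of states; the example is the classic Cerny automaton on 4 states, whose reset threshold is (4-1)^2 = 9. This brute force is exponential in general and is not a technique from the paper.

from collections import deque

def reset_threshold(states, alphabet, delta):
    # BFS over subsets: the reset threshold is the length of a shortest word
    # collapsing the full state set to a singleton
    start = frozenset(states)
    seen = {start: 0}
    queue = deque([start])
    while queue:
        subset = queue.popleft()
        if len(subset) == 1:
            return seen[subset]
        for letter in alphabet:
            image = frozenset(delta[(q, letter)] for q in subset)
            if image not in seen:
                seen[image] = seen[subset] + 1
                queue.append(image)
    return None   # the automaton is not synchronizing

# Cerny automaton on 4 states: 'b' is the cyclic shift, 'a' moves only state 0.
n = 4
delta = {}
for q in range(n):
    delta[(q, "a")] = 1 if q == 0 else q
    delta[(q, "b")] = (q + 1) % n
assert reset_threshold(range(n), "ab", delta) == 9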

Cite as

Michalina Dzyga, Robert Ferens, Vladimir V. Gusev, and Marek Szykula. Attainable Values of Reset Thresholds. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dzyga_et_al:LIPIcs.MFCS.2017.40,
  author =	{Dzyga, Michalina and Ferens, Robert and Gusev, Vladimir V. and Szykula, Marek},
  title =	{{Attainable Values of Reset Thresholds}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.40},
  URN =		{urn:nbn:de:0030-drops-81220},
  doi =		{10.4230/LIPIcs.MFCS.2017.40},
  annote =	{Keywords: Cerny conjecture, exponent, primitive digraph, reset word, synchronizing automaton}
}
Document
Lower Bounds and PIT for Non-Commutative Arithmetic Circuits with Restricted Parse Trees

Authors: Guillaume Lagarde, Nutan Limaye, and Srikanth Srinivasan


Abstract
We investigate the power of Non-commutative Arithmetic Circuits, which compute polynomials over the free non-commutative polynomial ring F<x_1,...,x_N>, where variables do not commute. We consider circuits that are restricted in the ways in which they can compute monomials: this can be seen as restricting the families of parse trees that appear in the circuit. Such restrictions capture essentially all non-commutative circuit models for which lower bounds are known. We prove several results about such circuits.
- We show explicit exponential lower bounds for circuits with up to an exponential number of parse trees, strengthening the work of Lagarde, Malod, and Perifel (ECCC 2016), who prove such a result for Unique Parse Tree (UPT) circuits which have a single parse tree.
- We show explicit exponential lower bounds for circuits whose parse trees are rotations of a single tree. This simultaneously generalizes recent lower bounds of Limaye, Malod, and Srinivasan (Theory of Computing 2016) and the above lower bounds of Lagarde et al., which are known to be incomparable.
- We make progress on a question of Nisan (STOC 1991) regarding separating the power of Algebraic Branching Programs (ABPs) and Formulas in the non-commutative setting by showing a tight lower bound of n^{Omega(log d)} for any UPT formula computing the product of d n*n matrices. When d <= log n, we can also prove superpolynomial lower bounds for formulas with up to 2^{o(d)} many parse trees (for computing the same polynomial). Improving this bound to allow for 2^{O(d)} trees would yield an unconditional separation between ABPs and Formulas.
- We give deterministic white-box PIT algorithms for UPT circuits over any field (strengthening a result of Lagarde et al. (2016)) and also for sums of a constant number of UPT circuits with different parse trees.

Cite as

Guillaume Lagarde, Nutan Limaye, and Srikanth Srinivasan. Lower Bounds and PIT for Non-Commutative Arithmetic Circuits with Restricted Parse Trees. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lagarde_et_al:LIPIcs.MFCS.2017.41,
  author =	{Lagarde, Guillaume and Limaye, Nutan and Srinivasan, Srikanth},
  title =	{{Lower Bounds and PIT for Non-Commutative Arithmetic Circuits with Restricted Parse Trees}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.41},
  URN =		{urn:nbn:de:0030-drops-81094},
  doi =		{10.4230/LIPIcs.MFCS.2017.41},
  annote =	{Keywords: Non-commutative Arithemetic circuits, Partial derivatives, restrictions}
}
Document
Approximation and Parameterized Algorithms for Geometric Independent Set with Shrinking

Authors: Michal Pilipczuk, Erik Jan van Leeuwen, and Andreas Wiese


Abstract
Consider the Maximum Weight Independent Set problem for rectangles: given a family of weighted axis-parallel rectangles in the plane, find a maximum-weight subset of non-overlapping rectangles. The problem is notoriously hard both in the approximation and in the parameterized setting. The best known polynomial-time approximation algorithms achieve super-constant approximation ratios [Chalermsook & Chuzhoy, Proc. SODA 2009; Chan & Har-Peled, Discrete & Comp. Geometry, 2012], even though there is a (1+epsilon)-approximation running in quasi-polynomial time [Adamaszek & Wiese, Proc. FOCS 2013; Chuzhoy & Ene, Proc. FOCS 2016]. When parameterized by the target size of the solution, the problem is W[1]-hard even in the unweighted setting [Marx, ESA 2005]. To achieve tractability, we study the following shrinking model: one is allowed to shrink each input rectangle by a multiplicative factor 1-delta for some fixed delta > 0, but the performance is still compared against the optimal solution for the original, non-shrunk instance. We prove that in this regime, the problem admits an EPTAS with running time f(epsilon,delta) n^{O(1)}, and an FPT algorithm with running time f(k,delta) n^{O(1)}, in the setting where a maximum-weight solution of size at most k is to be computed. This improves and significantly simplifies a PTAS given earlier for this problem [Adamaszek, Chalermsook & Wiese, Proc. APPROX/RANDOM 2015], and provides the first parameterized results for the shrinking model. Furthermore, we explore kernelization in the shrinking model, by giving efficient kernelization procedures for several variants of the problem when the input rectangles are squares.

Cite as

Michal Pilipczuk, Erik Jan van Leeuwen, and Andreas Wiese. Approximation and Parameterized Algorithms for Geometric Independent Set with Shrinking. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{pilipczuk_et_al:LIPIcs.MFCS.2017.42,
  author =	{Pilipczuk, Michal and van Leeuwen, Erik Jan and Wiese, Andreas},
  title =	{{Approximation and Parameterized Algorithms for Geometric Independent Set with Shrinking}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.42},
  URN =		{urn:nbn:de:0030-drops-80917},
  doi =		{10.4230/LIPIcs.MFCS.2017.42},
  annote =	{Keywords: Combinatorial optimization, Approximation algorithms, Fixed-parameter algorithms}
}
Document
Eilenberg Theorems for Free

Authors: Henning Urbat, Jiri Adámek, Liang-Ting Chen, and Stefan Milius


Abstract
Eilenberg-type correspondences, relating varieties of languages (e.g., of finite words, infinite words, or trees) to pseudovarieties of finite algebras, form the backbone of algebraic language theory. We show that they all arise from the same recipe: one models languages and the algebras recognizing them by monads on an algebraic category, and applies a Stone-type duality. Our main contribution is a variety theorem that covers e.g. Wilke's and Pin's work on infinity-languages, the variety theorem for cost functions of Daviaud, Kuperberg, and Pin, and unifies the two categorical approaches of Bojanczyk and of Adamek et al. In addition we derive new results, such as an extension of the local variety theorem of Gehrke, Grigorieff, and Pin from finite to infinite words.

Cite as

Henning Urbat, Jiri Adámek, Liang-Ting Chen, and Stefan Milius. Eilenberg Theorems for Free. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 43:1-43:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{urbat_et_al:LIPIcs.MFCS.2017.43,
  author =	{Urbat, Henning and Ad\'{a}mek, Jiri and Chen, Liang-Ting and Milius, Stefan},
  title =	{{Eilenberg Theorems for Free}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{43:1--43:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.43},
  URN =		{urn:nbn:de:0030-drops-81032},
  doi =		{10.4230/LIPIcs.MFCS.2017.43},
  annote =	{Keywords: Eilenberg's theorem, variety of languages, pseudovariety, monad, duality}
}
Document
Membership Problem in GL(2, Z) Extended by Singular Matrices

Authors: Igor Potapov and Pavel Semukhin


Abstract
We consider the membership problem for matrix semigroups, which is the problem of deciding whether a matrix belongs to a given finitely generated matrix semigroup. In general, the decidability and complexity of this problem for two-dimensional matrix semigroups remain open. Recently there has been significant progress on this open problem: the membership problem was shown to be decidable for 2x2 nonsingular integer matrices. In this paper we focus on membership for singular integer matrices and prove that this problem is decidable for 2x2 integer matrices whose determinants are equal to 0, 1, or -1 (i.e. for matrices from GL(2,Z) together with arbitrary singular matrices). Our algorithm relies on a translation of numerical problems on matrices into combinatorial problems on words and on a conversion of the membership problem into a decision problem on regular languages.
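
For intuition only, here is a brute-force Python sketch that enumerates bounded products of the generators and tests whether a target matrix appears. It is merely a semi-decision procedure and is not the paper's algorithm, which works via the translation to words and regular languages described above.

```python
# Hypothetical brute-force sketch for 2x2 integer matrix semigroup membership.
from itertools import product

def mat_mul(a, b):
    """Multiply two 2x2 integer matrices given as ((a11, a12), (a21, a22))."""
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def in_semigroup_up_to(generators, target, max_len):
    """Search all products of at most max_len generators for the target matrix."""
    for length in range(1, max_len + 1):
        for word in product(generators, repeat=length):
            m = word[0]
            for g in word[1:]:
                m = mat_mul(m, g)
            if m == target:
                return True
    return False

if __name__ == "__main__":
    g1 = ((1, 1), (0, 1))          # determinant 1
    g2 = ((1, 0), (0, 0))          # determinant 0 (singular)
    print(in_semigroup_up_to([g1, g2], ((1, 2), (0, 1)), 3))   # True: g1 * g1
    print(in_semigroup_up_to([g1, g2], ((2, 0), (0, 2)), 3))   # False: determinant 4 unreachable
```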

Cite as

Igor Potapov and Pavel Semukhin. Membership Problem in GL(2, Z) Extended by Singular Matrices. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 44:1-44:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{potapov_et_al:LIPIcs.MFCS.2017.44,
  author =	{Potapov, Igor and Semukhin, Pavel},
  title =	{{Membership Problem in GL(2, Z) Extended by Singular Matrices}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{44:1--44:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.44},
  URN =		{urn:nbn:de:0030-drops-80737},
  doi =		{10.4230/LIPIcs.MFCS.2017.44},
  annote =	{Keywords: Matrix Semigroups, Membership Problem, General Linear Group, Singular Matrices, Automata and Formal Languages}
}
Document
Grammars for Indentation-Sensitive Parsing

Authors: Härmel Nestra


Abstract
Adams' extension of parsing expression grammars enables specifying indentation sensitivity using two non-standard grammar constructs: indentation by a binary relation and alignment. This paper is a theoretical study of Adams' grammars. It proposes a step-by-step transformation of well-formed Adams' grammars that eliminates the alignment construct from the grammar. The idea that alignment could be avoided was suggested by Adams, but no process for achieving this had been described before. This paper also establishes general conditions that the binary relations used in indentation constructs must satisfy in order to enable efficient parsing.

Cite as

Härmel Nestra. Grammars for Indentation-Sensitive Parsing. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 45:1-45:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{nestra:LIPIcs.MFCS.2017.45,
  author =	{Nestra, H\"{a}rmel},
  title =	{{Grammars for Indentation-Sensitive Parsing}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{45:1--45:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.45},
  URN =		{urn:nbn:de:0030-drops-80788},
  doi =		{10.4230/LIPIcs.MFCS.2017.45},
  annote =	{Keywords: Parsing expression grammars, indentation, grammar transformation}
}
Document
The Power of Linear-Time Data Reduction for Maximum Matching

Authors: George B. Mertzios, André Nichterlein, and Rolf Niedermeier


Abstract
Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m\sqrt{n}) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We start a deeper and more systematic study, both for general graphs and for bipartite graphs. Our data reduction algorithms combine easily (in the form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.
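
To illustrate what linear-time data reduction for matching can look like, here is a Python sketch of two classic reduction rules (isolated vertices and degree-1 vertices). These are textbook rules and are not claimed to be the exact kernelization rules of the paper.

```python
# A minimal sketch of two classic data reduction rules for Maximum Matching,
# applied as linear-time preprocessing:
#   Rule 1: delete isolated vertices.
#   Rule 2: if v has degree 1, match v to its unique neighbour u and remove both.
from collections import deque

def reduce_matching_instance(adj):
    """adj: dict vertex -> set of neighbours.  Returns (forced_matching, reduced_adj)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    forced = []
    queue = deque(v for v in adj if len(adj[v]) <= 1)
    while queue:
        v = queue.popleft()
        if v not in adj:                   # already removed
            continue
        if not adj[v]:                     # Rule 1: isolated vertex
            del adj[v]
            continue
        u = next(iter(adj[v]))             # Rule 2: degree-1 vertex, match (v, u)
        forced.append((v, u))
        for w in adj[u]:                   # remove u and v from the graph
            if w != v:
                adj[w].discard(u)
                if len(adj[w]) <= 1:
                    queue.append(w)
        del adj[u], adj[v]
    return forced, adj

if __name__ == "__main__":
    # A path a-b-c-d: the rules alone already find a maximum matching of size 2.
    g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(reduce_matching_instance(g))
```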

Cite as

George B. Mertzios, André Nichterlein, and Rolf Niedermeier. The Power of Linear-Time Data Reduction for Maximum Matching. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{mertzios_et_al:LIPIcs.MFCS.2017.46,
  author =	{Mertzios, George B. and Nichterlein, Andr\'{e} and Niedermeier, Rolf},
  title =	{{The Power of Linear-Time Data Reduction for Maximum Matching}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.46},
  URN =		{urn:nbn:de:0030-drops-81166},
  doi =		{10.4230/LIPIcs.MFCS.2017.46},
  annote =	{Keywords: Maximum-cardinality matching, bipartite graphs, linear-time algorithms, kernelization, parameterized complexity analysis, FPT in P}
}
Document
Two-Planar Graphs Are Quasiplanar

Authors: Michael Hoffmann and Csaba D. Tóth


Abstract
It is shown that every 2-planar graph is quasiplanar, that is, if a simple graph admits a drawing in the plane such that every edge is crossed at most twice, then it also admits a drawing in which no three edges pairwise cross. We further show that quasiplanarity is witnessed by a simple topological drawing, that is, any two edges cross at most once and adjacent edges do not cross.

Cite as

Michael Hoffmann and Csaba D. Tóth. Two-Planar Graphs Are Quasiplanar. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hoffmann_et_al:LIPIcs.MFCS.2017.47,
  author =	{Hoffmann, Michael and T\'{o}th, Csaba D.},
  title =	{{Two-Planar Graphs Are Quasiplanar}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.47},
  URN =		{urn:nbn:de:0030-drops-80811},
  doi =		{10.4230/LIPIcs.MFCS.2017.47},
  annote =	{Keywords: graph drawing, near-planar graph, simple topological plane graph}
}
Document
The Shortest Identities for Max-Plus Automata with Two States

Authors: Laure Daviaud and Marianne Johnson


Abstract
Max-plus automata are quantitative extensions of automata designed to associate an integer with every non-empty word. A pair of distinct words is said to be an identity for a class of max-plus automata if each of the automata in the class computes the same value on the two words. We give the shortest identities holding for the class of max-plus automata with two states. For this, we exhibit an interesting list of necessary conditions for an identity to hold. Moreover, this result provides a counter-example to a conjecture of Izhakian concerning the minimality of certain identities.
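
For readers unfamiliar with the model, the following Python sketch shows how a max-plus automaton assigns a value to a word via tropical matrix products. The two-state automaton used in the example is our own toy, not one of the identity witnesses from the paper.

```python
# A small sketch of how a max-plus automaton assigns an integer to a word:
# each letter has a matrix over the max-plus semiring (integers with -infinity,
# max as addition, + as multiplication), and the value of a word is
# initial (x) M(a1) (x) ... (x) M(an) (x) final under the tropical product.
NEG_INF = float("-inf")

def trop_mat_vec(m, v):
    """Tropical (max, +) product of a matrix with a column vector."""
    return [max(m[i][j] + v[j] for j in range(len(v))) for i in range(len(m))]

def value(word, matrices, initial, final):
    """Value of `word` in a max-plus automaton with given letter matrices."""
    vec = final[:]                              # fold from the right
    for letter in reversed(word):
        vec = trop_mat_vec(matrices[letter], vec)
    return max(i + x for i, x in zip(initial, vec))

if __name__ == "__main__":
    mats = {
        "a": [[1, NEG_INF], [NEG_INF, 0]],      # 'a' has weight 1 in state 0, weight 0 in state 1
        "b": [[0, NEG_INF], [NEG_INF, 2]],      # 'b' has weight 0 in state 0, weight 2 in state 1
    }
    initial = [0, 0]
    final = [0, 0]
    print(value("aab", mats, initial, final))   # max(#a, 2 * #b) = max(2, 2) = 2
    print(value("abb", mats, initial, final))   # max(1, 4) = 4
```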

Cite as

Laure Daviaud and Marianne Johnson. The Shortest Identities for Max-Plus Automata with Two States. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 48:1-48:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{daviaud_et_al:LIPIcs.MFCS.2017.48,
  author =	{Daviaud, Laure and Johnson, Marianne},
  title =	{{The Shortest Identities for Max-Plus Automata with Two States}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{48:1--48:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.48},
  URN =		{urn:nbn:de:0030-drops-81048},
  doi =		{10.4230/LIPIcs.MFCS.2017.48},
  annote =	{Keywords: Max-plus automata, Weighted automata, Identities, Tropical matrices}
}
Document
On the Upward/Downward Closures of Petri Nets

Authors: Mohamed Faouzi Atig, Roland Meyer, Sebastian Muskalla, and Prakash Saivasan


Abstract
We study the size and the complexity of computing finite state automata (FSA) representing and approximating the downward and the upward closure of Petri net languages with coverability as the acceptance condition. We show how to construct an FSA recognizing the upward closure of a Petri net language in doubly-exponential time, and therefore the size is at most doubly exponential. For downward closures, we prove that the size of the minimal automata can be non-primitive recursive. In the case of BPP nets, a well-known subclass of Petri nets, we show that an FSA accepting the downward/upward closure can be constructed in exponential time. Furthermore, we consider the problem of checking whether a simple regular language is included in the downward/upward closure of a Petri net/BPP net language. We show that this problem is EXPSPACE-complete (resp. NP-complete) in the case of Petri nets (resp. BPP nets). Finally, we show that it is decidable whether a Petri net language is upward/downward closed.
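
As background, the analogous closure constructions on a plain finite automaton are straightforward; the Python sketch below shows them (this is the standard NFA construction, not the Petri net constructions of the paper): letters may be dropped for the downward closure and inserted for the upward closure.

```python
# Standard constructions on an NFA: the downward closure of the accepted
# language is obtained by letting every transition also be taken as an epsilon
# move (letters may be dropped), the upward closure by adding self-loops on
# every letter (letters may be inserted).
# Transitions: dict (state, letter) -> set of states; "" stands for epsilon.

def downward_closure(delta):
    """Allow every lettered transition to be taken without reading its letter."""
    closed = {k: set(v) for k, v in delta.items()}
    for (p, a), qs in delta.items():
        if a != "":
            closed.setdefault((p, ""), set()).update(qs)
    return closed

def upward_closure(delta, states, alphabet):
    """Allow any letter to be read at any state without moving."""
    closed = {k: set(v) for k, v in delta.items()}
    for p in states:
        for a in alphabet:
            closed.setdefault((p, a), set()).add(p)
    return closed

if __name__ == "__main__":
    # An NFA for the single word "ab" (initial state 0, accepting state 2).
    states, alphabet = {0, 1, 2}, {"a", "b"}
    delta = {(0, "a"): {1}, (1, "b"): {2}}
    print(downward_closure(delta))                    # now also accepts "a", "b", ""
    print(upward_closure(delta, states, alphabet))    # accepts every superword of "ab"
```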

Cite as

Mohamed Faouzi Atig, Roland Meyer, Sebastian Muskalla, and Prakash Saivasan. On the Upward/Downward Closures of Petri Nets. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{atig_et_al:LIPIcs.MFCS.2017.49,
  author =	{Atig, Mohamed Faouzi and Meyer, Roland and Muskalla, Sebastian and Saivasan, Prakash},
  title =	{{On the Upward/Downward Closures of Petri Nets}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.49},
  URN =		{urn:nbn:de:0030-drops-81278},
  doi =		{10.4230/LIPIcs.MFCS.2017.49},
  annote =	{Keywords: Petri nets, BPP nets, downward closure, upward closure}
}
Document
On Multidimensional and Monotone k-SUM

Authors: Chloe Ching-Yun Hsu and Chris Umans


Abstract
The well-known k-SUM conjecture is that integer k-SUM requires time Omega(n^{\lceil k/2 \rceil - o(1)}). Recent work has studied multidimensional k-SUM in F_p^d, where the best known algorithm takes time \tilde O(n^{\lceil k/2 \rceil}). Bhattacharyya et al. [ICS 2011] proved a min(2^{\Omega(d)},n^{\Omega(k)}) lower bound for k-SUM in F_p^d under the Exponential Time Hypothesis. We give a more refined lower bound under the standard k-SUM conjecture: for sufficiently large p, k-SUM in F_p^d requires time Omega(n^{k/2-o(1)}) if k is even, and Omega(n^{\lceil k/2 \rceil - 2k(log k)/(log p) - o(1)}) if k is odd. For a special case of the multidimensional problem, bounded monotone d-dimensional 3SUM, Chan and Lewenstein [STOC 2015] gave a surprising \tilde O(n^{2-2/(d+13)}) algorithm using additive combinatorics. We show this algorithm is essentially optimal. To be more precise, bounded monotone d-dimensional 3SUM requires time Omega(n^{2-\frac{4}{d}-o(1)}) under the standard 3SUM conjecture, and time Omega(n^{2-\frac{2}{d}-o(1)}) under the so-called strong 3SUM conjecture. Thus, even though one might hope to further exploit the structural advantage of monotonicity, no substantial improvements beyond those obtained by Chan and Lewenstein are possible for bounded monotone d-dimensional 3SUM.
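
For orientation, the following Python sketch is the quadratic-time hashing baseline for 3SUM over F_p^d; the paper is concerned with far finer upper and lower bounds than this textbook reference point.

```python
# A simple quadratic-time hashing baseline for 3SUM over F_p^d: decide whether
# three input vectors (at distinct indices) sum to zero componentwise mod p.

def three_sum_mod(vectors, p):
    """True iff some triple (i, j, k) of distinct indices sums to 0 in F_p^d."""
    n = len(vectors)
    index = {}
    for k, v in enumerate(vectors):                  # hash every vector
        index.setdefault(tuple(x % p for x in v), []).append(k)
    for i in range(n):
        for j in range(i + 1, n):
            need = tuple((-vectors[i][t] - vectors[j][t]) % p
                         for t in range(len(vectors[i])))
            if any(k != i and k != j for k in index.get(need, [])):
                return True
    return False

if __name__ == "__main__":
    p = 7
    vecs = [(1, 2), (2, 3), (4, 2), (0, 1)]
    print(three_sum_mod(vecs, p))   # (1,2)+(2,3)+(4,2) = (7,7) = 0 mod 7 -> True
```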

Cite as

Chloe Ching-Yun Hsu and Chris Umans. On Multidimensional and Monotone k-SUM. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hsu_et_al:LIPIcs.MFCS.2017.50,
  author =	{Hsu, Chloe Ching-Yun and Umans, Chris},
  title =	{{On Multidimensional and Monotone k-SUM}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{50:1--50:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.50},
  URN =		{urn:nbn:de:0030-drops-80618},
  doi =		{10.4230/LIPIcs.MFCS.2017.50},
  annote =	{Keywords: 3SUM, kSUM, monotone 3SUM, strong 3SUM conjecture}
}
Document
Parameterized Complexity of the List Coloring Reconfiguration Problem with Graph Parameters

Authors: Tatsuhiko Hatanaka, Takehiro Ito, and Xiao Zhou


Abstract
Let G be a graph such that each vertex has its list of available colors, and assume that each list is a subset of the common set consisting of k colors. For two given list colorings of G, we study the problem of transforming one into the other by changing only one vertex color assignment at a time, while at all times maintaining a list coloring. This problem is known to be PSPACE-complete even for bounded bandwidth graphs and a fixed constant k. In this paper, we study the fixed-parameter tractability of the problem when parameterized by several graph parameters. We first give a fixed-parameter algorithm for the problem when parameterized by k and the modular-width of an input graph. We next give a fixed-parameter algorithm for the shortest variant which computes the length of a shortest transformation when parameterized by k and the size of a minimum vertex cover of an input graph. As corollaries, we show that the problem for cographs and the shortest variant for split graphs are fixed-parameter tractable even when only k is taken as a parameter. On the other hand, we prove that the problem is W[1]-hard when parameterized only by the size of a minimum vertex cover of an input graph.
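
To make the reconfiguration rule concrete, here is a brute-force BFS over the reconfiguration graph in Python, usable only on tiny instances; it is not the fixed-parameter algorithm of the paper, and the instance in the example is our own.

```python
# Brute-force BFS over the reconfiguration graph of list colorings: recolor one
# vertex at a time, always staying a proper list coloring.
from collections import deque

def reachable(adj, lists, start, target):
    """adj: dict v -> set of neighbours; lists: dict v -> iterable of colors."""
    def proper(col, v, c):
        return all(col[u] != c for u in adj[v])
    start, target = dict(start), dict(target)
    seen = {tuple(sorted(start.items()))}
    queue = deque([start])
    while queue:
        col = queue.popleft()
        if col == target:
            return True
        for v in adj:                          # recolor one vertex at a time
            for c in lists[v]:
                if c != col[v] and proper(col, v, c):
                    nxt = dict(col)
                    nxt[v] = c
                    key = tuple(sorted(nxt.items()))
                    if key not in seen:
                        seen.add(key)
                        queue.append(nxt)
    return False

if __name__ == "__main__":
    path = {1: {2}, 2: {1, 3}, 3: {2}}              # a path on 3 vertices
    lists = {1: [1, 2], 2: [1, 2, 3], 3: [1, 2]}
    print(reachable(path, lists, {1: 1, 2: 2, 3: 1}, {1: 2, 2: 1, 3: 2}))   # True
```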

Cite as

Tatsuhiko Hatanaka, Takehiro Ito, and Xiao Zhou. Parameterized Complexity of the List Coloring Reconfiguration Problem with Graph Parameters. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 51:1-51:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hatanaka_et_al:LIPIcs.MFCS.2017.51,
  author =	{Hatanaka, Tatsuhiko and Ito, Takehiro and Zhou, Xiao},
  title =	{{Parameterized Complexity of the List Coloring Reconfiguration Problem with Graph Parameters}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{51:1--51:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.51},
  URN =		{urn:nbn:de:0030-drops-81060},
  doi =		{10.4230/LIPIcs.MFCS.2017.51},
  annote =	{Keywords: combinatorial reconfiguration, fixed-parameter tractability, graph algorithm, list coloring, W\lbrack1\rbrack-hardness}
}
Document
Automata in the Category of Glued Vector Spaces

Authors: Thomas Colcombet and Daniela Petrisan


Abstract
In this paper we adopt a category-theoretic approach to the conception of automata classes enjoying minimization by design. The main instantiation of our construction is a new class of automata that are hybrid between deterministic automata and automata weighted over a field.

Cite as

Thomas Colcombet and Daniela Petrisan. Automata in the Category of Glued Vector Spaces. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{colcombet_et_al:LIPIcs.MFCS.2017.52,
  author =	{Colcombet, Thomas and Petrisan, Daniela},
  title =	{{Automata in the Category of Glued Vector Spaces}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.52},
  URN =		{urn:nbn:de:0030-drops-81324},
  doi =		{10.4230/LIPIcs.MFCS.2017.52},
  annote =	{Keywords: hybrid set-vector automata, automata minimization in a category}
}
Document
The Equivalence, Unambiguity and Sequentiality Problems of Finitely Ambiguous Max-Plus Tree Automata are Decidable

Authors: Erik Paul


Abstract
We show that the equivalence, unambiguity and sequentiality problems are decidable for finitely ambiguous max-plus tree automata.

Cite as

Erik Paul. The Equivalence, Unambiguity and Sequentiality Problems of Finitely Ambiguous Max-Plus Tree Automata are Decidable. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 53:1-53:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{paul:LIPIcs.MFCS.2017.53,
  author =	{Paul, Erik},
  title =	{{The Equivalence, Unambiguity and Sequentiality Problems of Finitely Ambiguous Max-Plus Tree Automata are Decidable}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{53:1--53:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.53},
  URN =		{urn:nbn:de:0030-drops-81147},
  doi =		{10.4230/LIPIcs.MFCS.2017.53},
  annote =	{Keywords: Tree Automata, Max-Plus Automata, Equivalence, Unambiguity, Sequentiality, Decidability}
}
Document
New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems

Authors: Eric Allender and Shuichi Hirahara


Abstract
The Minimum Circuit Size Problem (MCSP) and a related problem (MKTP) that deals with time-bounded Kolmogorov complexity are prominent candidates for NP-intermediate status. We show that, under very modest cryptographic assumptions (such as the existence of one-way functions), the problem of approximating the minimum circuit size (or time-bounded Kolmogorov complexity) within a factor of n^{1 - o(1)} is indeed NP-intermediate. To the best of our knowledge, these problems are the first natural NP-intermediate problems under the existence of an arbitrary one-way function. We also prove that MKTP is hard for the complexity class DET under non-uniform NC^0 reductions. This is surprising, since prior work on MCSP and MKTP had highlighted weaknesses of "local" reductions such as NC^0 reductions. We exploit this local reduction to obtain several new consequences: * MKTP is not in AC^0[p]. * Circuit size lower bounds are equivalent to hardness of a relativized version MKTP^A of MKTP under a class of uniform AC^0 reductions, for a large class of sets A. * Hardness of MCSP^A implies hardness of MKTP^A for a wide class of sets A. This is the first result directly relating the complexity of MCSP^A and MKTP^A, for any A.

Cite as

Eric Allender and Shuichi Hirahara. New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 54:1-54:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{allender_et_al:LIPIcs.MFCS.2017.54,
  author =	{Allender, Eric and Hirahara, Shuichi},
  title =	{{New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{54:1--54:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.54},
  URN =		{urn:nbn:de:0030-drops-80636},
  doi =		{10.4230/LIPIcs.MFCS.2017.54},
  annote =	{Keywords: computational complexity, Kolmogorov complexity, circuit size}
}
Document
Strategy Complexity of Concurrent Safety Games

Authors: Krishnendu Chatterjee, Kristoffer Arnsfelt Hansen, and Rasmus Ibsen-Jensen


Abstract
We consider two-player, zero-sum, finite-state concurrent reachability games, played for an infinite number of rounds, where in every round each player simultaneously and independently of the other player chooses an action, whereafter the successor state is determined by a probability distribution given by the current state and the chosen actions. Player 1 wins iff a designated goal state is eventually visited. We are interested in the complexity of stationary strategies measured by their patience, which is defined as the inverse of the smallest non-zero probability employed. Our main results are as follows. We show that: (i) the optimal bound on the patience of optimal and epsilon-optimal strategies, for both players, is doubly exponential; and (ii) even in games with a single non-absorbing state, patience exponential in the number of actions is necessary.

Cite as

Krishnendu Chatterjee, Kristoffer Arnsfelt Hansen, and Rasmus Ibsen-Jensen. Strategy Complexity of Concurrent Safety Games. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 55:1-55:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.MFCS.2017.55,
  author =	{Chatterjee, Krishnendu and Hansen, Kristoffer Arnsfelt and Ibsen-Jensen, Rasmus},
  title =	{{Strategy Complexity of Concurrent Safety Games}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{55:1--55:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.55},
  URN =		{urn:nbn:de:0030-drops-81203},
  doi =		{10.4230/LIPIcs.MFCS.2017.55},
  annote =	{Keywords: Concurrent games, Reachability and safety, Patience of strategies}
}
Document
A Characterisation of Pi^0_2 Regular Tree Languages

Authors: Filippo Cavallari, Henryk Michalewski, and Michal Skrzypczak


Abstract
We give an algorithm that, for a given regular tree language L, decides if L is in Pi^0_2, that is, if L belongs to the second level of the Borel hierarchy. Moreover, if L is in Pi^0_2, then we construct a weak alternating automaton of index (0, 2) which recognises L. We also prove that a given language L is recognisable by a weak alternating (1, 3)-automaton if and only if it is recognisable by a weak non-deterministic (1, 3)-automaton.

Cite as

Filippo Cavallari, Henryk Michalewski, and Michal Skrzypczak. A Characterisation of Pi^0_2 Regular Tree Languages. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cavallari_et_al:LIPIcs.MFCS.2017.56,
  author =	{Cavallari, Filippo and Michalewski, Henryk and Skrzypczak, Michal},
  title =	{{A Characterisation of Pi^0\underline2 Regular Tree Languages}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.56},
  URN =		{urn:nbn:de:0030-drops-80683},
  doi =		{10.4230/LIPIcs.MFCS.2017.56},
  annote =	{Keywords: infinite trees, Rabin-Mostowski hierarchy, regular languages}
}
Document
On the Exact Amount of Missing Information that Makes Finding Possible Winners Hard

Authors: Palash Dey and Neeldhara Misra


Abstract
We consider election scenarios with incomplete information, a situation that arises often in practice. There are several models of incomplete information and accordingly, different notions of outcomes of such elections. In one well-studied model of incompleteness, the votes are given by partial orders over the candidates. In this context we can frame the problem of finding a possible winner, which involves determining whether a given candidate wins in at least one completion of a given set of partial votes for a specific voting rule. The Possible Winner problem is well-known to be NP-Complete in general, and it is in fact known to be NP-Complete for several voting rules where the number of undetermined pairs in every vote is bounded only by some constant. In this paper, we address the question of determining precisely the smallest number of undetermined pairs for which the Possible Winner problem remains NP-Complete. In particular, we find the exact values of t for which the Possible Winner problem transitions to being NP-Complete from being in P, where t is the maximum number of undetermined pairs in every vote. We demonstrate tight results for a broad subclass of scoring rules which includes all the commonly used scoring rules (such as plurality, veto, Borda, and k-approval), Copeland^\alpha for every \alpha in [0,1], maximin, and Bucklin voting rules. A somewhat surprising aspect of our results is that for many of these rules, the Possible Winner problem turns out to be hard even if every vote has at most one undetermined pair of candidates.
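
The following Python sketch only illustrates the Possible Winner definition for the plurality rule by enumerating all completions of the partial votes; it is exponential and has nothing to do with the dichotomy machinery of the paper. The tiny profile in the example is our own.

```python
# Naive check of the Possible Winner definition for plurality: try every
# completion of every partial vote and see whether the distinguished candidate
# wins (as a co-winner) in at least one completed profile.
from itertools import permutations, product

def completions(candidates, partial):
    """All linear orders of `candidates` consistent with the pairs in `partial`."""
    for order in permutations(candidates):
        pos = {c: i for i, c in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in partial):
            yield order

def possible_winner_plurality(candidates, partial_votes, c):
    for profile in product(*(completions(candidates, pv) for pv in partial_votes)):
        scores = {x: 0 for x in candidates}
        for vote in profile:
            scores[vote[0]] += 1               # plurality: the top choice scores 1
        if scores[c] == max(scores.values()):  # c wins (possibly tied)
            return True
    return False

if __name__ == "__main__":
    cands = ["a", "b", "c"]
    votes = [[("a", "b"), ("a", "c")],         # a is preferred to both b and c
             [("b", "c"), ("c", "a")]]         # fully determined: b > c > a
    print(possible_winner_plurality(cands, votes, "a"))   # True
    print(possible_winner_plurality(cands, votes, "c"))   # False
```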

Cite as

Palash Dey and Neeldhara Misra. On the Exact Amount of Missing Information that Makes Finding Possible Winners Hard. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dey_et_al:LIPIcs.MFCS.2017.57,
  author =	{Dey, Palash and Misra, Neeldhara},
  title =	{{On the Exact Amount of Missing Information that Makes Finding Possible Winners Hard}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.57},
  URN =		{urn:nbn:de:0030-drops-81354},
  doi =		{10.4230/LIPIcs.MFCS.2017.57},
  annote =	{Keywords: Computational Social Choice, Dichotomy, NP-completeness, Maxflow, Voting, Possible winner}
}
Document
Fractal Intersections and Products via Algorithmic Dimension

Authors: Neil Lutz


Abstract
Algorithmic dimensions quantify the algorithmic information density of individual points and may be defined in terms of Kolmogorov complexity. This work uses these dimensions to bound the classical Hausdorff and packing dimensions of intersections and Cartesian products of fractals in Euclidean spaces. This approach shows that a known intersection formula for Borel sets holds for arbitrary sets, and it significantly simplifies the proof of a known product formula. Both of these formulas are prominent, fundamental results in fractal geometry that are taught in typical undergraduate courses on the subject.

Cite as

Neil Lutz. Fractal Intersections and Products via Algorithmic Dimension. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 58:1-58:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lutz:LIPIcs.MFCS.2017.58,
  author =	{Lutz, Neil},
  title =	{{Fractal Intersections and Products via Algorithmic Dimension}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{58:1--58:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.58},
  URN =		{urn:nbn:de:0030-drops-80875},
  doi =		{10.4230/LIPIcs.MFCS.2017.58},
  annote =	{Keywords: algorithmic randomness, geometric measure theory, Hausdorff dimension, Kolmogorov complexity}
}
Document
Domains for Higher-Order Games

Authors: Matthew Hague, Roland Meyer, and Sebastian Muskalla


Abstract
We study two-player inclusion games played over word-generating higher-order recursion schemes. While inclusion checks are known to capture verification problems, two-player games generalize this relationship to program synthesis. In such games, non-terminals of the grammar are controlled by opposing players. The goal of the existential player is to avoid producing a word that lies outside of a regular language of safe words. We contribute a new domain that provides a representation of the winning region of such games. Our domain is based on (functions over) potentially infinite Boolean formulas with words as atomic propositions. We develop an abstract interpretation framework that we instantiate to abstract this domain into a domain where the propositions are replaced by states of a finite automaton. This second domain is therefore finite and we obtain, via standard fixed-point techniques, a direct algorithm for the analysis of two-player inclusion games. We show, via a second instantiation of the framework, that our finite domain can be optimized, leading to a (k+1)EXP algorithm for order-k recursion schemes. We give a matching lower bound, showing that our approach is optimal. Since our approach is based on standard Kleene iteration, existing techniques and tools for fixed-point computations can be applied.
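
Since the last sentence appeals to standard Kleene iteration, here is a generic Python sketch of that fixed-point computation on a finite powerset lattice; the paper's domains (functions over Boolean formulas, respectively over automaton states) are of course much richer, and the toy example is our own.

```python
# Generic Kleene iteration: compute the least fixed point of a monotone
# function on a finite lattice by iterating from the bottom element.

def kleene_lfp(f, bottom):
    """Least fixed point of a monotone f, starting from bottom and iterating
    until the value stops changing (terminates on finite domains)."""
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:
            return current
        current = nxt

if __name__ == "__main__":
    # Toy example: states of a finite graph reachable from state 0, computed
    # as the least fixed point of X |-> {0} union post(X).
    edges = {0: {1}, 1: {2}, 2: {2}, 3: {0}}
    def step(x):
        return frozenset({0}) | frozenset(q for p in x for q in edges.get(p, ()))
    print(sorted(kleene_lfp(step, frozenset())))   # [0, 1, 2]
```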

Cite as

Matthew Hague, Roland Meyer, and Sebastian Muskalla. Domains for Higher-Order Games. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 59:1-59:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hague_et_al:LIPIcs.MFCS.2017.59,
  author =	{Hague, Matthew and Meyer, Roland and Muskalla, Sebastian},
  title =	{{Domains for Higher-Order Games}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{59:1--59:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.59},
  URN =		{urn:nbn:de:0030-drops-81409},
  doi =		{10.4230/LIPIcs.MFCS.2017.59},
  annote =	{Keywords: higher-order recursion schemes, games, semantics, abstract interpretation, fixed points}
}
Document
Fine-Grained Complexity of Rainbow Coloring and its Variants

Authors: Akanksha Agrawal


Abstract
Consider a graph G and an edge-coloring c_R:E(G) \rightarrow [k]. A rainbow path between u,v \in V(G) is a path P from u to v such that for all e,e' \in E(P), where e \neq e', we have c_R(e) \neq c_R(e'). In the Rainbow k-Coloring problem we are given a graph G, and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all u,v \in V(G) there is a rainbow path between u and v in G. Several variants of Rainbow k-Coloring have been studied, two of which are defined as follows. The Subset Rainbow k-Coloring takes as an input a graph G and a set S \subseteq V(G) \times V(G), and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all (u,v) \in S there is a rainbow path between u and v in G. The problem Steiner Rainbow k-Coloring takes as an input a graph G and a set S \subseteq V(G), and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all u,v \in S there is a rainbow path between u and v in G. In an attempt to resolve open problems posed by Kowalik et al. (ESA 2016), we obtain the following results. - For every k \geq 3, Rainbow k-Coloring does not admit an algorithm running in time 2^{o(|E(G)|)}n^{O(1)}, unless ETH fails. - For every k \geq 3, Steiner Rainbow k-Coloring does not admit an algorithm running in time 2^{o(|S|^2)}n^{O(1)}, unless ETH fails. - Subset Rainbow k-Coloring admits an algorithm running in time 2^{O(|S|)}n^{O(1)}. This also implies an algorithm running in time 2^{O(|S|^2)}n^{O(1)} for Steiner Rainbow k-Coloring, which matches the lower bound we obtain.
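
To fix the definitions, here is a brute-force Python checker for rainbow connectivity and Rainbow k-Coloring on tiny graphs; it is essentially the trivial exponential algorithm that the lower bounds above say cannot be improved substantially, not anything from the paper.

```python
# Brute-force check of the Rainbow k-Coloring definition on tiny graphs:
# a coloring is a rainbow k-coloring if every pair of vertices is joined by a
# path whose edges all receive pairwise distinct colors.
from itertools import product

def rainbow_connected(n, edges, coloring):
    """coloring: dict frozenset({u, v}) -> color.  Checks all vertex pairs."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def rainbow_path_exists(u, v, used_colors, visited):
        if u == v:
            return True
        for w in adj[u]:
            col = coloring[frozenset((u, w))]
            if w not in visited and col not in used_colors:
                if rainbow_path_exists(w, v, used_colors | {col}, visited | {w}):
                    return True
        return False

    return all(rainbow_path_exists(u, v, frozenset(), frozenset({u}))
               for u in range(n) for v in range(u + 1, n))

def exists_rainbow_k_coloring(n, edges, k):
    """Try all k^{|E(G)|} edge colorings (the trivial exponential algorithm)."""
    keys = [frozenset(e) for e in edges]
    return any(rainbow_connected(n, edges, dict(zip(keys, assignment)))
               for assignment in product(range(k), repeat=len(edges)))

if __name__ == "__main__":
    cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(exists_rainbow_k_coloring(4, cycle4, 2))   # True: alternate two colors
```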

Cite as

Akanksha Agrawal. Fine-Grained Complexity of Rainbow Coloring and its Variants. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{agrawal:LIPIcs.MFCS.2017.60,
  author =	{Agrawal, Akanksha},
  title =	{{Fine-Grained Complexity of Rainbow Coloring and its Variants}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{60:1--60:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.60},
  URN =		{urn:nbn:de:0030-drops-80990},
  doi =		{10.4230/LIPIcs.MFCS.2017.60},
  annote =	{Keywords: Rainbow Coloring, Lower bound, ETH, Fine-grained Complexity}
}
Document
Faster Monte-Carlo Algorithms for Fixation Probability of the Moran Process on Undirected Graphs

Authors: Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Martin A. Nowak


Abstract
Evolutionary graph theory studies the evolutionary dynamics in a population structure given as a connected graph. Each node of the graph represents an individual of the population, and edges determine how offspring are placed. We consider the classical birth-death Moran process where there are two types of individuals, namely, the residents with fitness 1 and mutants with fitness r. The fitness indicates the reproductive strength. The evolutionary dynamics happens as follows: in the initial step, in a population of all resident individuals a mutant is introduced, and then at each step, an individual is chosen proportional to the fitness of its type to reproduce, and the offspring replaces a neighbor uniformly at random. The process stops when all individuals are either residents or mutants. The probability that all individuals in the end are mutants is called the fixation probability, which is a key factor in the rate of evolution. We consider the problem of approximating the fixation probability. The class of algorithms that is extremely relevant for approximation of the fixation probabilities is the Monte-Carlo simulation of the process. Previous results present a polynomial-time Monte-Carlo algorithm for undirected graphs when r is given in unary. First, we present a simple modification: instead of simulating each step, we discard ineffective steps, where no node changes type (i.e., either residents replace residents, or mutants replace mutants). Using the above simple modification and our result that the number of effective steps is concentrated around the expected number of effective steps, we present faster polynomial-time Monte-Carlo algorithms for undirected graphs. Our algorithms are always at least a factor O(n^2/log n) faster as compared to the previous algorithms, where n is the number of nodes, and are polynomial even if r is given in binary. We also present lower bounds showing that the upper bound on the expected number of effective steps we present is asymptotically tight for undirected graphs.
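
For concreteness, the following Python sketch performs naive Monte-Carlo runs of the Moran process described above; the paper's speed-up comes from discarding ineffective steps and from concentration bounds, neither of which this sketch implements. The graph and parameter choices in the example are our own.

```python
# A naive Monte-Carlo run of the birth-death Moran process on an undirected
# graph.  Ineffective steps (resident replaces resident, mutant replaces
# mutant) simply leave the configuration unchanged here.
import random

def moran_fixation_run(adj, r, rng=random):
    """adj: dict node -> list of neighbours; r: mutant fitness.
    Returns True if the mutant type fixates, False if it goes extinct."""
    nodes = list(adj)
    mutants = {rng.choice(nodes)}                    # one uniformly placed mutant
    while 0 < len(mutants) < len(nodes):
        # choose the reproducing individual proportionally to fitness
        weights = [r if v in mutants else 1.0 for v in nodes]
        reproducer = rng.choices(nodes, weights=weights, k=1)[0]
        replaced = rng.choice(adj[reproducer])       # offspring replaces a random neighbour
        if reproducer in mutants:
            mutants.add(replaced)
        else:
            mutants.discard(replaced)
    return len(mutants) == len(nodes)

def estimate_fixation_probability(adj, r, runs=2000, seed=0):
    rng = random.Random(seed)
    return sum(moran_fixation_run(adj, r, rng) for _ in range(runs)) / runs

if __name__ == "__main__":
    cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    # For regular graphs the estimate should be close to the well-mixed value
    # (1 - 1/r) / (1 - 1/r**n), here about 0.52 for r = 2 and n = 5.
    print(estimate_fixation_probability(cycle, r=2.0))
```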

Cite as

Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Martin A. Nowak. Faster Monte-Carlo Algorithms for Fixation Probability of the Moran Process on Undirected Graphs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 61:1-61:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.MFCS.2017.61,
  author =	{Chatterjee, Krishnendu and Ibsen-Jensen, Rasmus and Nowak, Martin A.},
  title =	{{Faster Monte-Carlo Algorithms for Fixation Probability of the Moran Process on Undirected Graphs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{61:1--61:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.61},
  URN =		{urn:nbn:de:0030-drops-81213},
  doi =		{10.4230/LIPIcs.MFCS.2017.61},
  annote =	{Keywords: Graph algorithms, Evolutionary biology, Monte-Carlo algorithms}
}
Document
The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis

Authors: Tomoyuki Yamakami


Abstract
We aim at investigating the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, using simultaneously polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT (the 2CNF Boolean formula satisfiability problem) parameterized by the number of Boolean variables. It is shown that 2SAT with n variables and m clauses can be solved simultaneously in polynomial time and (n/2^{c \sqrt{log(n)}}) polylog(m+n) space for an absolute constant c>0. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that 2SAT_3, a restricted variant of 2SAT in which each variable of a given 2CNF formula appears as a literal in at most 3 clauses, cannot be solved simultaneously in polynomial time using strictly "sub-linear" (i.e., n^{epsilon} polylog(n) for a certain constant epsilon in (0,1)) space. An immediate consequence of this working hypothesis is L neq NL. Moreover, we use our hypothesis as a plausible basis to derive the insolvability of various NL search problems as well as the nonapproximability of NL optimization problems. For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we need to introduce a new, practical notion of "short reduction". It turns out that \overline{2SAT}_3 is complete for a restricted version of NL, called Syntactic NL or simply SNL, under such short reductions. This fact supports the legitimacy of our working hypothesis.
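
As a reference point, here is the textbook linear-time 2SAT algorithm via the implication graph in Python. It uses linear space, so it is exactly the kind of algorithm that the linear space hypothesis asserts cannot be pushed to strictly sub-linear space for 2SAT_3; it is not the sub-linear-space algorithm of the paper.

```python
# Textbook 2SAT via strongly connected components of the implication graph
# (Aspvall-Plass-Tarjan, here with Kosaraju's SCC computation).

def two_sat(n, clauses):
    """Variables 1..n; a clause is a pair of non-zero ints, negative = negated.
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    def node(lit):                                  # literal -> vertex in 0..2n-1
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    graph = [[] for _ in range(2 * n)]
    for a, b in clauses:                            # (a or b) = (!a -> b) and (!b -> a)
        graph[node(-a)].append(node(b))
        graph[node(-b)].append(node(a))

    # First pass: record vertices in order of DFS completion.
    order, visited = [], [False] * (2 * n)
    for s in range(2 * n):
        if visited[s]:
            continue
        visited[s] = True
        stack = [(s, iter(graph[s]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v)
                stack.pop()
            elif not visited[w]:
                visited[w] = True
                stack.append((w, iter(graph[w])))

    # Second pass: SCCs on the reversed graph; component ids follow a
    # topological order of the condensation (source components get smaller ids).
    reverse = [[] for _ in range(2 * n)]
    for u in range(2 * n):
        for v in graph[u]:
            reverse[v].append(u)
    comp, c = [-1] * (2 * n), 0
    for u in reversed(order):
        if comp[u] != -1:
            continue
        comp[u], stack = c, [u]
        while stack:
            v = stack.pop()
            for w in reverse[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    assignment = {}
    for x in range(1, n + 1):
        if comp[node(x)] == comp[node(-x)]:
            return None                             # x and !x equivalent: unsatisfiable
        assignment[x] = comp[node(x)] > comp[node(-x)]   # true iff x's SCC comes later
    return assignment

if __name__ == "__main__":
    # (x1 or x2) and (!x1 or x2) and (!x2 or x3): forces x2 and x3 to be true.
    print(two_sat(3, [(1, 2), (-1, 2), (-2, 3)]))
    print(two_sat(1, [(1, 1), (-1, -1)]))           # contradictory: None
```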

Cite as

Tomoyuki Yamakami. The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 62:1-62:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{yamakami:LIPIcs.MFCS.2017.62,
  author =	{Yamakami, Tomoyuki},
  title =	{{The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{62:1--62:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.62},
  URN =		{urn:nbn:de:0030-drops-81344},
  doi =		{10.4230/LIPIcs.MFCS.2017.62},
  annote =	{Keywords: sub-linear space, linear space hypothesis, short reduction, Boolean formula satisfiability problem, NL search, NL optimization, Syntactic NL}
}
Document
Variations on Inductive-Recursive Definitions

Authors: Neil Ghani, Conor McBride, Fredrik Nordvall Forsberg, and Stephan Spahn


Abstract
Dybjer and Setzer introduced the definitional principle of inductive-recursively defined families, i.e. of families (U : Set, T : U -> D) such that the inductive definition of U may depend on the recursively defined T, by defining a type DS D E of codes. Each c : DS D E defines a functor [c] : Fam D -> Fam E, and (U, T) = \mu [c] : Fam D is exhibited as the initial algebra of [c]. This paper considers the composition of DS-definable functors: given F : Fam C -> Fam D and G : Fam D -> Fam E, is G \circ F : Fam C -> Fam E DS-definable if F and G are? We show that this is the case if and only if powers of families are DS-definable, which seems unlikely. To construct composition, we present two new systems UF and PN of codes for inductive-recursive definitions, with UF a subsystem of DS, and DS a subsystem of PN. Both UF and PN are closed under composition. Since PN defines a potentially larger class of functors, we show that there is a model where initial algebras of PN-functors exist, by adapting Dybjer-Setzer's proof for DS.

Cite as

Neil Ghani, Conor McBride, Fredrik Nordvall Forsberg, and Stephan Spahn. Variations on Inductive-Recursive Definitions. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 63:1-63:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{ghani_et_al:LIPIcs.MFCS.2017.63,
  author =	{Ghani, Neil and McBride, Conor and Nordvall Forsberg, Fredrik and Spahn, Stephan},
  title =	{{Variations on Inductive-Recursive Definitions}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{63:1--63:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.63},
  URN =		{urn:nbn:de:0030-drops-81184},
  doi =		{10.4230/LIPIcs.MFCS.2017.63},
  annote =	{Keywords: Type Theory, induction-recursion, initial-algebra semantics}
}
Document
One-Dimensional Logic over Trees

Authors: Emanuel Kieronski and Antti Kuusisto


Abstract
A one-dimensional fragment of first-order logic is obtained by restricting quantification to blocks of existential quantifiers that leave at most one variable free. This fragment contains two-variable logic, and it is known that over words both formalisms have the same complexity and expressive power. Here we investigate the one-dimensional fragment over trees. We consider unranked unordered trees accessible by one or both of the descendant and child relations, as well as ordered trees equipped additionally with sibling relations. We show that over unordered trees the satisfiability problem is ExpSpace-complete when only the descendant relation is available, and 2ExpTime-complete with both the descendant and child relations or with only the child relation. Over ordered trees the problem remains 2ExpTime-complete. Regarding expressivity, we show that over ordered trees, and over unordered trees accessible by both the descendant and child relations, the one-dimensional fragment is equivalent to the two-variable fragment with counting quantifiers.

Cite as

Emanuel Kieronski and Antti Kuusisto. One-Dimensional Logic over Trees. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 64:1-64:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kieronski_et_al:LIPIcs.MFCS.2017.64,
  author =	{Kieronski, Emanuel and Kuusisto, Antti},
  title =	{{One-Dimensional Logic over Trees}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{64:1--64:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.64},
  URN =		{urn:nbn:de:0030-drops-80658},
  doi =		{10.4230/LIPIcs.MFCS.2017.64},
  annote =	{Keywords: satisfiability, expressivity, trees, fragments of first-order logic}
}
Document
An Improved FPT Algorithm for the Flip Distance Problem

Authors: Shaohua Li, Qilong Feng, Xiangzhong Meng, and Jianxin Wang


Abstract
Given a set \cal P of points in the Euclidean plane and two triangulations of \cal P, the flip distance between these two triangulations is the minimum number of flips required to transform one triangulation into the other. The Parameterized Flip Distance problem is to decide if the flip distance between two given triangulations is equal to a given integer k. The previous best FPT algorithm runs in time O^*(k\cdot c^k) (c\leq 2\times 14^{11}), where each step has fourteen possible choices and the length of the action sequence is bounded by 11k. By applying a backtracking strategy and analyzing the underlying properties of the flip sequence, we reduce the number of possible choices in each step of our algorithm to five. Based on an auxiliary graph G, we prove that the length of the action sequence for our algorithm is bounded by 2|G|. As a result, we present an FPT algorithm running in time O^*(k\cdot 32^k).
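
To recall the elementary move being counted, here is a Python sketch of a single flip in a triangulation stored as a set of triangles, with a convexity test for flippability; it illustrates the move only, not the bounded search of the paper, and the representation is our own.

```python
# A sketch of the elementary "flip" move on a triangulation stored as a set of
# triangles (each a frozenset of three point indices).

def orient(p, q, r):
    """Twice the signed area of triangle pqr (positive = counter-clockwise)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def flip(triangulation, points, edge):
    """Flip the diagonal `edge` = frozenset({a, b}); returns a new triangulation,
    or None if the edge is not flippable (not shared by two triangles forming a
    convex quadrilateral)."""
    a, b = tuple(edge)
    sharing = [t for t in triangulation if edge <= t]
    if len(sharing) != 2:
        return None
    (c,) = sharing[0] - edge
    (d,) = sharing[1] - edge
    # the quadrilateral a, c, b, d must be convex for c-d to be a valid diagonal
    quad = [points[a], points[c], points[b], points[d]]
    signs = [orient(quad[i], quad[(i + 1) % 4], quad[(i + 2) % 4]) for i in range(4)]
    if not (all(s > 0 for s in signs) or all(s < 0 for s in signs)):
        return None
    new = set(triangulation) - set(sharing)
    new |= {frozenset({a, c, d}), frozenset({b, c, d})}
    return new

if __name__ == "__main__":
    pts = [(0, 0), (2, 0), (2, 2), (0, 2)]                 # a square
    tri = {frozenset({0, 1, 2}), frozenset({0, 2, 3})}     # diagonal 0-2
    print(flip(tri, pts, frozenset({0, 2})))               # triangles on diagonal 1-3
```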

Cite as

Shaohua Li, Qilong Feng, Xiangzhong Meng, and Jianxin Wang. An Improved FPT Algorithm for the Flip Distance Problem. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 65:1-65:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.MFCS.2017.65,
  author =	{Li, Shaohua and Feng, Qilong and Meng, Xiangzhong and Wang, Jianxin},
  title =	{{An Improved FPT Algorithm for the Flip Distance Problem}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{65:1--65:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.65},
  URN =		{urn:nbn:de:0030-drops-81100},
  doi =		{10.4230/LIPIcs.MFCS.2017.65},
  annote =	{Keywords: triangulation, flip distance, FPT algorithm}
}
Document
Reversible Kleene lattices

Authors: Paul Brunet


Abstract
We investigate the equational theory of reversible Kleene lattices, that is, algebras of languages with the regular operations (union, composition and Kleene star), together with intersection and mirror image. Building on results by Andréka, Mikulás and Németi from 2011, we construct the free representation of this algebra. We then provide an automaton model to compare representations. These automata are adapted from Petri automata, which we introduced with Pous in 2015 to tackle a similar problem for algebras of binary relations. This allows us to show that testing the validity of equations in this algebra is decidable, and in fact ExpSpace-complete.
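
To fix intuitions about the signature, the Python sketch below realizes the operations (union, composition, a truncated Kleene star, intersection and mirror image) on finite sets of words; the paper is about the equational theory of these operations over arbitrary languages, not about such enumerations.

```python
# The signature of reversible Kleene lattices made concrete on finite sets of
# words, with Kleene star truncated to a fixed power so everything stays finite.

def union(k, l): return k | l
def inter(k, l): return k & l
def comp(k, l): return {u + v for u in k for v in l}
def mirror(k): return {w[::-1] for w in k}

def star(k, bound):
    """Words obtainable as concatenations of at most `bound` factors from k."""
    result, layer = {""}, {""}
    for _ in range(bound):
        layer = comp(layer, k)
        result |= layer
    return result

if __name__ == "__main__":
    a, b = {"a"}, {"b"}
    lhs = mirror(comp(a, b))                 # mirror of a.b      = {"ba"}
    rhs = comp(mirror(b), mirror(a))         # mirror(b).mirror(a) = {"ba"}
    print(lhs == rhs)                        # mirror anti-distributes over composition
    print(inter(star(union(a, b), 2), comp(a, a)))   # {"aa"}
```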

Cite as

Paul Brunet. Reversible Kleene lattices. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{brunet:LIPIcs.MFCS.2017.66,
  author =	{Brunet, Paul},
  title =	{{Reversible Kleene lattices}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.66},
  URN =		{urn:nbn:de:0030-drops-81334},
  doi =		{10.4230/LIPIcs.MFCS.2017.66},
  annote =	{Keywords: Kleene algebra, Automata, Petri nets, Decidability, Complexity, Formal languages, Lattice}
}
Document
Lossy Kernels for Hitting Subgraphs

Authors: Eduard Eiben, Danny Hermelin, and M. S. Ramanujan


Abstract
In this paper, we study the Connected H-hitting Set and Dominating Set problems from the perspective of approximate kernelization, a framework recently introduced by Lokshtanov et al. [STOC 2017]. For the Connected H-hitting set problem, we obtain an \alpha-approximate kernel for every \alpha>1 and complement it with a lower bound for the natural weighted version. We then perform a refined analysis of the tradeoff between the approximation factor and kernel size for the Dominating Set problem on d-degenerate graphs and provide an interpolation of approximate kernels between the known d^2-approximate kernel of constant size and 1-approximate kernel of size k^{O(d^2)}.

Cite as

Eduard Eiben, Danny Hermelin, and M. S. Ramanujan. Lossy Kernels for Hitting Subgraphs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{eiben_et_al:LIPIcs.MFCS.2017.67,
  author =	{Eiben, Eduard and Hermelin, Danny and Ramanujan, M. S.},
  title =	{{Lossy Kernels for Hitting Subgraphs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.67},
  URN =		{urn:nbn:de:0030-drops-80955},
  doi =		{10.4230/LIPIcs.MFCS.2017.67},
  annote =	{Keywords: parameterized algorithms, lossy kernelization, graph theory}
}
Document
Undecidable Problems for Probabilistic Network Programming

Authors: David M. Kahn


Abstract
The software-defined networking language NetKAT is able to verify many useful properties of networks automatically via a PSPACE decision procedure for program equality. However, for its probabilistic extension ProbNetKAT, no such decision procedure is known. We show that several potentially useful properties of ProbNetKAT are in fact undecidable, including emptiness of support intersection and certain kinds of distribution bounds and program comparisons. We do so by embedding the Post Correspondence Problem in ProbNetKAT via direct product expressions, and by directly embedding probabilistic finite automata.
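
For readers unfamiliar with the Post Correspondence Problem used in the embedding, the bounded brute-force search below (an illustration only, not the paper's reduction; the function name and the sample instance are the sketch's own) spells out what a PCP instance and a solution look like.

# Illustrative sketch only: the Post Correspondence Problem asks for a non-empty
# index sequence i1..ik with top[i1]+...+top[ik] == bot[i1]+...+bot[ik].
from itertools import product

def pcp_solution(tiles, max_len=6):
    """Brute-force search for a PCP solution using at most max_len tiles.
    tiles is a list of (top, bottom) string pairs.  PCP is undecidable, so a
    bounded search like this one can only ever certify positive instances."""
    for k in range(1, max_len + 1):
        for seq in product(range(len(tiles)), repeat=k):
            top = "".join(tiles[i][0] for i in seq)
            bot = "".join(tiles[i][1] for i in seq)
            if top == bot:
                return list(seq)
    return None

if __name__ == "__main__":
    # A solvable instance; the shortest solution is [2, 1, 2, 0], since
    # "bba"+"ab"+"bba"+"a" == "bb"+"aa"+"bb"+"baa".
    tiles = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
    print(pcp_solution(tiles))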

Cite as

David M. Kahn. Undecidable Problems for Probabilistic Network Programming. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 68:1-68:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kahn:LIPIcs.MFCS.2017.68,
  author =	{Kahn, David M.},
  title =	{{Undecidable Problems for Probabilistic Network Programming}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{68:1--68:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.68},
  URN =		{urn:nbn:de:0030-drops-80969},
  doi =		{10.4230/LIPIcs.MFCS.2017.68},
  annote =	{Keywords: Software-defined networking, NetKAT, ProbNetKAT, Undecidability, Probabilistic finite automata}
}
Document
Computational Complexity of Graph Partition under Vertex-Compaction to an Irreflexive Hexagon

Authors: Narayan Vikas


Abstract
In this paper, we solve a long-standing graph partition problem under vertex-compaction that has been of interest since about 1999. The problem is to decide whether the vertices of a graph can be partitioned into six distinct non-empty sets A, B, C, D, E, and F such that each set is independent, i.e., contains no edge, and edges are allowed (but not required) only between the pairs of sets A and B, B and C, C and D, D and E, E and F, and F and A, with no edge between any other pair of sets. We study the problem as the vertex-compaction problem for an irreflexive hexagon (6-cycle). Determining its computational complexity has been open since about 1999, especially after the author's 1999 results on a related compaction problem, which resolved several open questions. We show in this paper that the vertex-compaction problem for an irreflexive hexagon is NP-complete. Our proof extends to larger even irreflexive cycles, showing that the vertex-compaction problem for an irreflexive even k-cycle is NP-complete for all even k \geq 6.
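
The partition condition can be checked mechanically; the short Python sketch below (an illustration, not part of the paper; the function name and encoding are the sketch's own) verifies that a candidate 6-partition of a graph satisfies it.

# Illustrative sketch only: check that a partition into six parts A..F has all
# parts non-empty and independent, with edges only between cyclically
# consecutive parts (A-B, B-C, C-D, D-E, E-F, F-A).
def is_hexagon_partition(edges, parts):
    if len(parts) != 6 or any(len(p) == 0 for p in parts):
        return False
    where = {v: i for i, p in enumerate(parts) for v in p}
    for u, v in edges:
        i, j = where[u], where[v]
        if i == j:                      # edge inside a part: part not independent
            return False
        if (j - i) % 6 not in (1, 5):   # edge between non-consecutive parts
            return False
    return True

if __name__ == "__main__":
    # The 6-cycle itself, one vertex per part, trivially qualifies.
    hexagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
    print(is_hexagon_partition(hexagon, [{i} for i in range(6)]))   # True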

Cite as

Narayan Vikas. Computational Complexity of Graph Partition under Vertex-Compaction to an Irreflexive Hexagon. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 69:1-69:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{vikas:LIPIcs.MFCS.2017.69,
  author =	{Vikas, Narayan},
  title =	{{Computational Complexity of Graph Partition under Vertex-Compaction to an Irreflexive Hexagon}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{69:1--69:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.69},
  URN =		{urn:nbn:de:0030-drops-81374},
  doi =		{10.4230/LIPIcs.MFCS.2017.69},
  annote =	{Keywords: computational complexity, algorithms, graph, partition, colouring, homomorphism, retraction, compaction, vertex-compaction}
}
Document
Recognizing Graphs Close to Bipartite Graphs

Authors: Marthe Bonamy, Konrad K. Dabrowski, Carl Feghali, Matthew Johnson, and Daniël Paulusma


Abstract
We continue research into a well-studied family of problems that ask if the vertices of a graph can be partitioned into sets A and B, where A is an independent set and B induces a graph from some specified graph class G. We let G be the class of k-degenerate graphs. The problem is known to be polynomial-time solvable if k=0 (bipartite graphs) and NP-complete if k=1 (near-bipartite graphs) even for graphs of diameter 4, as shown by Yang and Yuan, who also proved polynomial-time solvability for graphs of diameter 2. We show that recognizing near-bipartite graphs of diameter 3 is NP-complete, resolving their open problem. To answer another open problem, we consider graphs of maximum degree D on n vertices. We show how to find A and B in O(n) time for k=1 and D=3, and in O(n^2) time for k >= 2 and D >= 4. These results also provide an algorithmic version of a result of Catlin [JCTB, 1979] and enable us to complete the complexity classification of another problem: finding a path in the vertex colouring reconfiguration graph between two given k-colourings of a graph of bounded maximum degree.
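
As a purely illustrative rendering of the k=1 case (not one of the paper's algorithms; all names are the sketch's own), the brute-force Python check below decides near-bipartiteness of tiny graphs by trying every candidate independent set A and testing whether the remaining vertices induce a forest.

# Illustrative sketch only: exponential-time near-bipartiteness test.
from itertools import combinations

def induces_forest(vs, edges):
    """True iff the subgraph induced by vs is acyclic (checked via union-find)."""
    parent = {v: v for v in vs}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        if u in parent and v in parent:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
    return True

def is_near_bipartite(vertices, edges):
    vertices = list(vertices)
    for r in range(len(vertices) + 1):
        for a in map(set, combinations(vertices, r)):
            independent = all(not (u in a and v in a) for u, v in edges)
            if independent and induces_forest(set(vertices) - a, edges):
                return True
    return False

if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (2, 0)]
    k4 = list(combinations(range(4), 2))
    print(is_near_bipartite(range(3), triangle))   # True: A = {0}, B = {1, 2}
    print(is_near_bipartite(range(4), k4))         # False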

Cite as

Marthe Bonamy, Konrad K. Dabrowski, Carl Feghali, Matthew Johnson, and Daniël Paulusma. Recognizing Graphs Close to Bipartite Graphs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 70:1-70:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bonamy_et_al:LIPIcs.MFCS.2017.70,
  author =	{Bonamy, Marthe and Dabrowski, Konrad K. and Feghali, Carl and Johnson, Matthew and Paulusma, Dani\"{e}l},
  title =	{{Recognizing Graphs Close to Bipartite Graphs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{70:1--70:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.70},
  URN =		{urn:nbn:de:0030-drops-80740},
  doi =		{10.4230/LIPIcs.MFCS.2017.70},
  annote =	{Keywords: degenerate graphs, near-bipartite graphs, reconfiguration graphs}
}
Document
Parameterized Algorithms and Kernels for Rainbow Matching

Authors: Sushmita Gupta, Sanjukta Roy, Saket Saurabh, and Meirav Zehavi


Abstract
In this paper, we study the NP-complete colorful variant of the classical Matching problem, namely, the Rainbow Matching problem. Given an edge-colored graph G and a positive integer k, this problem asks whether there exists a matching of size at least k such that all the edges in the matching have distinct colors. We first develop a deterministic algorithm that solves Rainbow Matching on paths in time O*(((1+\sqrt{5})/2)^k) and polynomial space. This algorithm is based on a curious combination of the method of bounded search trees and a "divide-and-conquer-like" approach, where the branching process is guided by the maintenance of an auxiliary bipartite graph where one side captures "divided-and-conquered" pieces of the path. Our second result is a randomized algorithm that solves Rainbow Matching on general graphs in time O*(2^k) and polynomial space. Here, we show how a result by Björklund et al. [JCSS, 2017] can be invoked as a black box, wrapped by a probability-based analysis tailored to our problem. We also complement our two main results by designing kernels for Rainbow Matching on general and bounded-degree graphs.
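
To fix the problem statement, here is an illustrative brute-force search (not one of the paper's algorithms; names and the toy instance are the sketch's own) for a rainbow matching of size k in a small edge-coloured graph.

# Illustrative sketch only: edges are (u, v, colour) triples; a rainbow matching
# of size k is a set of k pairwise vertex-disjoint edges with k distinct colours.
from itertools import combinations

def has_rainbow_matching(edges, k):
    for subset in combinations(edges, k):
        endpoints = [x for (u, v, _) in subset for x in (u, v)]
        colours = [c for (_, _, c) in subset]
        if len(set(endpoints)) == 2 * k and len(set(colours)) == k:
            return True
    return False

if __name__ == "__main__":
    # A path a-b-c-d-e with edge colours 1, 2, 1, 3.
    path = [("a", "b", 1), ("b", "c", 2), ("c", "d", 1), ("d", "e", 3)]
    print(has_rainbow_matching(path, 2))   # True, e.g. {(a,b,1), (d,e,3)}
    print(has_rainbow_matching(path, 3))   # False: the path has no matching of size 3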

Cite as

Sushmita Gupta, Sanjukta Roy, Saket Saurabh, and Meirav Zehavi. Parameterized Algorithms and Kernels for Rainbow Matching. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 71:1-71:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.MFCS.2017.71,
  author =	{Gupta, Sushmita and Roy, Sanjukta and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Parameterized Algorithms and Kernels for Rainbow Matching}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{71:1--71:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.71},
  URN =		{urn:nbn:de:0030-drops-81245},
  doi =		{10.4230/LIPIcs.MFCS.2017.71},
  annote =	{Keywords: Rainbow Matching, Parameterized Algorithm, Bounded Search Trees, Divide-and-Conquer, 3-Set Packing, 3-Dimensional Matching}
}
Document
Compositional Weak Metrics for Group Key Update

Authors: Ruggero Lanotte, Massimo Merro, and Simone Tini


Abstract
We investigate the compositionality of both weak bisimilarity metric and weak similarity quasi-metric semantics with respect to a variety of standard operators, in the context of probabilistic process algebra. We show how compositionality with respect to nondeterministic and probabilistic choice requires resorting to rooted semantics. As a main application, we demonstrate how our results can be successfully used to conduct compositional reasoning to estimate the performance of group key update protocols in a multicast setting.

Cite as

Ruggero Lanotte, Massimo Merro, and Simone Tini. Compositional Weak Metrics for Group Key Update. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 72:1-72:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{lanotte_et_al:LIPIcs.MFCS.2017.72,
  author =	{Lanotte, Ruggero and Merro, Massimo and Tini, Simone},
  title =	{{Compositional Weak Metrics for Group Key Update}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{72:1--72:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.72},
  URN =		{urn:nbn:de:0030-drops-81262},
  doi =		{10.4230/LIPIcs.MFCS.2017.72},
  annote =	{Keywords: Behavioural metric, compositional reasoning, group key update}
}
Document
Clique-Width for Graph Classes Closed under Complementation

Authors: Alexandre Blanché, Konrad K. Dabrowski, Matthew Johnson, Vadim V. Lozin, Daniël Paulusma, and Viktor Zamaraev


Abstract
Clique-width is an important graph parameter due to its algorithmic and structural properties. A graph class is hereditary if it can be characterized by a (not necessarily finite) set H of forbidden induced subgraphs. We initiate a systematic study into the boundedness of clique-width of hereditary graph classes closed under complementation. First, we extend the known classification for the |H|=1 case by classifying the boundedness of clique-width for every set H of self-complementary graphs. We then completely settle the |H|=2 case. In particular, we determine one new class of (H1, complement of H1)-free graphs of bounded clique-width (as a side effect, this leaves only six classes of (H1, H2)-free graphs for which it is not known whether their clique-width is bounded). Once we have obtained the classification of the |H|=2 case, we study the effect of forbidding self-complementary graphs on the boundedness of clique-width. Surprisingly, we show that for a set F of self-complementary graphs on at least five vertices, the classification of the boundedness of clique-width for ({H1, complement of H1} + F)-free graphs coincides with the one for the |H|=2 case if and only if F does not include the bull (the only non-empty self-complementary graphs on fewer than five vertices are P_1 and P_4, and P_4-free graphs have clique-width at most 2). Finally, we discuss the consequences of our results for COLOURING.

Cite as

Alexandre Blanché, Konrad K. Dabrowski, Matthew Johnson, Vadim V. Lozin, Daniël Paulusma, and Viktor Zamaraev. Clique-Width for Graph Classes Closed under Complementation. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{blanche_et_al:LIPIcs.MFCS.2017.73,
  author =	{Blanch\'{e}, Alexandre and Dabrowski, Konrad K. and Johnson, Matthew and Lozin, Vadim V. and Paulusma, Dani\"{e}l and Zamaraev, Viktor},
  title =	{{Clique-Width for Graph Classes Closed under Complementation}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.73},
  URN =		{urn:nbn:de:0030-drops-80756},
  doi =		{10.4230/LIPIcs.MFCS.2017.73},
  annote =	{Keywords: clique-width, self-complementary graph, forbidden induced subgraph}
}
Document
Computing the Maximum using (min, +) Formulas

Authors: Meena Mahajan, Prajakta Nimbhorkar, and Anuj Tawari


Abstract
We study computation by formulas over (min,+). We consider the computation of max{x_1,...,x_n} over N as a difference of (min,+) formulas, and show that size n + n \log n is sufficient and necessary. Our proof also shows that any (min,+) formula computing the minimum of all sums of n-1 out of n variables must have n \log n leaves; this too is tight. Our proofs use a complexity measure for (min,+) functions based on minterm-like behaviour and on the entropy of an associated graph.
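
The identity behind the difference representation is easy to check numerically: max{x_1, ..., x_n} equals the sum of all n variables minus the minimum over i of the sum of the other n-1 variables, i.e. a plus-only formula minus a (min,+) formula. The Python sketch below (illustrative only; the paper's contribution is the tight size bound, not the identity itself) verifies this on random inputs.

# Illustrative sketch only: max as a difference of two (min,+)-expressible terms.
import random

def max_as_difference(xs):
    total = sum(xs)                        # the plus-only formula
    drop_one = [total - x for x in xs]     # all sums of n-1 of the n variables
    return total - min(drop_one)           # difference of the two formulas

if __name__ == "__main__":
    for _ in range(1000):
        xs = [random.randrange(100) for _ in range(random.randrange(1, 10))]
        assert max_as_difference(xs) == max(xs)
    print("identity verified on 1000 random inputs")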

Cite as

Meena Mahajan, Prajakta Nimbhorkar, and Anuj Tawari. Computing the Maximum using (min, +) Formulas. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 74:1-74:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{mahajan_et_al:LIPIcs.MFCS.2017.74,
  author =	{Mahajan, Meena and Nimbhorkar, Prajakta and Tawari, Anuj},
  title =	{{Computing the Maximum using (min, +) Formulas}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{74:1--74:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.74},
  URN =		{urn:nbn:de:0030-drops-80706},
  doi =		{10.4230/LIPIcs.MFCS.2017.74},
  annote =	{Keywords: Formulas, Circuits, Lower bounds, Tropical semiring}
}
Document
Selecting Nodes and Buying Links to Maximize the Information Diffusion in a Network

Authors: Gianlorenzo D'Angelo, Lorenzo Severini, and Yllka Velaj


Abstract
The Independent Cascade Model (ICM) is a widely studied model that aims to capture the dynamics of information diffusion in social networks and in general complex networks. In this model, we can distinguish between active nodes which spread the information and inactive ones. The process starts from a set of initially active nodes called seeds. Recursively, currently active nodes can activate their neighbours according to a probability distribution on the set of edges. After a certain number of these recursive cycles, a large number of nodes might become active. The process terminates when no further node gets activated. Starting from the work of Domingos and Richardson [Domingos et al. 2001], several studies have been conducted with the aim of shaping a given diffusion process so as to maximize the number of activated nodes at the end of the process. One of the most studied problems has been formalized by Kempe et al. and consists in finding a set of initial seeds that maximizes the expected number of active nodes under a budget constraint [Kempe et al. 2003]. In this paper we study a generalization of the problem of Kempe et al. in which we are allowed to spend part of the budget to create new edges incident to the seeds. That is, the budget can be spent to buy seeds or edges according to a cost function. The problem does not admit a PTAS, unless P=NP. We propose two approximation algorithms: the former one gives an approximation ratio that depends on the edge costs and increases when these costs are high; the latter algorithm gives a constant approximation guarantee which is greater than that of the first algorithm when the edge costs can be small.
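
For concreteness, the following Python sketch (illustrative only, not from the paper; the graph encoding and names are the sketch's own) estimates the expected spread of a seed set under the Independent Cascade Model by Monte Carlo simulation.

# Illustrative sketch only: ICM simulation.  The graph maps each node to a list
# of (neighbour, activation_probability) pairs; each newly activated node gets a
# single chance to activate each of its inactive neighbours.
import random

def simulate_icm(graph, seeds, trials=1000):
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            new_frontier = []
            for u in frontier:
                for v, p in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        new_frontier.append(v)
            frontier = new_frontier
        total += len(active)
    return total / trials            # estimated expected number of active nodes

if __name__ == "__main__":
    graph = {"s": [("a", 0.5), ("b", 0.5)], "a": [("c", 0.5)], "b": [("c", 0.5)]}
    print(simulate_icm(graph, {"s"}))    # roughly 2.4 on average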

Cite as

Gianlorenzo D'Angelo, Lorenzo Severini, and Yllka Velaj. Selecting Nodes and Buying Links to Maximize the Information Diffusion in a Network. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dangelo_et_al:LIPIcs.MFCS.2017.75,
  author =	{D'Angelo, Gianlorenzo and Severini, Lorenzo and Velaj, Yllka},
  title =	{{Selecting Nodes and Buying Links to Maximize the Information Diffusion in a Network}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.75},
  URN =		{urn:nbn:de:0030-drops-80853},
  doi =		{10.4230/LIPIcs.MFCS.2017.75},
  annote =	{Keywords: Approximation algorithms, information diffusion, complex networks, independent cascade model, network augmentation}
}
Document
K4-free Graphs as a Free Algebra

Authors: Enric Cosme Llópez and Damien Pous


Abstract
Graphs of treewidth at most two are the ones excluding the clique with four vertices as a minor. Equivalently, they are the graphs whose biconnected components are series-parallel. We turn those graphs into a free algebra, answering positively a question by Courcelle and Engelfriet, in the case of treewidth two. First we propose a syntax for denoting them: in addition to series and parallel compositions, it suffices to consider the neutral elements of those operations and a unary transpose operation. Then we give a finite equational presentation and we prove it complete: two terms from the syntax are congruent if and only if they denote the same graph.

Cite as

Enric Cosme Llópez and Damien Pous. K4-free Graphs as a Free Algebra. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cosmellopez_et_al:LIPIcs.MFCS.2017.76,
  author =	{Cosme Ll\'{o}pez, Enric and Pous, Damien},
  title =	{{K4-free Graphs as a Free Algebra}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.76},
  URN =		{urn:nbn:de:0030-drops-80883},
  doi =		{10.4230/LIPIcs.MFCS.2017.76},
  annote =	{Keywords: Universal Algebra, Graph theory, Axiomatisation, Tree decompositions, Graph minors}
}
Document
Making Metric Temporal Logic Rational

Authors: Shankara Narayanan Krishna, Khushraj Madnani, and Paritosh K. Pandya


Abstract
We study an extension of MTL in pointwise time with regular expression guarded modality Reg_I(re) where re is a rational expression over subformulae. We study the decidability and expressiveness of this extension (MTL+Ureg+Reg), called RegMTL, as well as its fragment SfrMTL where only star-free rational expressions are allowed. Using the technique of temporal projections, we show that RegMTL has decidable satisfiability by giving an equisatisfiable reduction to MTL. We also identify a subclass MITL+UReg of RegMTL for which our equisatisfiable reduction gives rise to formulae of MITL, yielding elementary decidability. As our second main result, we show a tight automaton-logic connection between SfrMTL and partially ordered (or very weak) 1-clock alternating timed automata.

Cite as

Shankara Narayanan Krishna, Khushraj Madnani, and Paritosh K. Pandya. Making Metric Temporal Logic Rational. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 77:1-77:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{krishna_et_al:LIPIcs.MFCS.2017.77,
  author =	{Krishna, Shankara Narayanan and Madnani, Khushraj and Pandya, Paritosh K.},
  title =	{{Making Metric Temporal Logic Rational}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{77:1--77:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.77},
  URN =		{urn:nbn:de:0030-drops-81112},
  doi =		{10.4230/LIPIcs.MFCS.2017.77},
  annote =	{Keywords: Metric Temporal Logic, Timed Automata, Regular Expression, Equisatisfiability, Expressiveness}
}
Document
Complexity of Restricted Variants of Skolem and Related Problems

Authors: Akshay S., Nikhil Balaji, and Nikhil Vyas


Abstract
Given a linear recurrence sequence (LRS), the Skolem problem asks whether it ever becomes zero. The decidability of this problem has been open for several decades. Currently, decidability is known only for LRS of order up to 4. For arbitrary orders (i.e., the number of earlier terms on which the n-th term depends), the only known complexity result is NP-hardness, by a result of Blondel and Portier from 2002. In this paper, we give a different proof of this hardness result, which is arguably simpler and pinpoints the source of hardness. To demonstrate this, we identify a subclass of LRS for which the Skolem problem is in fact NP-complete. We show the generic nature of our lower-bound technique by adapting it to show stronger lower bounds for a related problem that encompasses many known decision problems on linear recurrence sequences.
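
As an illustration of the problem statement only (not of the paper's results; names and the sample recurrences are the sketch's own), the Python sketch below searches the initial terms of a linear recurrence sequence for a zero; such a bounded search can only certify "yes" instances, which hints at why deciding the problem in general is delicate.

# Illustrative sketch only: u_{n+k} = a_1*u_{n+k-1} + ... + a_k*u_n.
def lrs_first_zero(coeffs, initial, bound=1000):
    """coeffs = [a_1, ..., a_k], initial = [u_0, ..., u_{k-1}].
    Returns the first index n < bound with u_n == 0, or None."""
    window = list(initial)
    for n, value in enumerate(window):
        if value == 0:
            return n
    for n in range(len(window), bound):
        nxt = sum(a * x for a, x in zip(coeffs, reversed(window)))
        if nxt == 0:
            return n
        window = window[1:] + [nxt]
    return None

if __name__ == "__main__":
    # u_{n+2} = u_{n+1} - u_n:  with u_0, u_1 = 1, 2 the sequence is periodic
    # (1, 2, 1, -1, -2, -1, ...) and never zero; with u_0, u_1 = 1, 1 it hits 0 at n = 2.
    print(lrs_first_zero([1, -1], [1, 2]))   # None
    print(lrs_first_zero([1, -1], [1, 1]))   # 2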

Cite as

Akshay S., Nikhil Balaji, and Nikhil Vyas. Complexity of Restricted Variants of Skolem and Related Problems. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 78:1-78:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{s._et_al:LIPIcs.MFCS.2017.78,
  author =	{S., Akshay and Balaji, Nikhil and Vyas, Nikhil},
  title =	{{Complexity of Restricted Variants of Skolem and Related Problems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{78:1--78:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.78},
  URN =		{urn:nbn:de:0030-drops-81306},
  doi =		{10.4230/LIPIcs.MFCS.2017.78},
  annote =	{Keywords: Linear recurrence sequences, Skolem problem, NP-completeness, Program termination}
}
Document
Being Even Slightly Shallow Makes Life Hard

Authors: Irene Muzi, Michael P. O'Brien, Felix Reidl, and Blair D. Sullivan


Abstract
We study the computational complexity of identifying dense substructures, namely r/2-shallow topological minors and r-subdivisions. Of particular interest is the case r = 1, when these substructures correspond to very localized relaxations of subgraphs. Since Densest Subgraph can be solved in polynomial time, we ask whether these slight relaxations also admit efficient algorithms. In the following, we provide a negative answer: Dense r/2-Shallow Topological Minor and Dense r-Subdivision are already NP-hard for r = 1 in very sparse graphs. Further, they do not admit algorithms with running time 2^(o(tw^2)) n^O(1) when parameterized by the treewidth of the input graph for r > 2 unless ETH fails.

Cite as

Irene Muzi, Michael P. O'Brien, Felix Reidl, and Blair D. Sullivan. Being Even Slightly Shallow Makes Life Hard. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 79:1-79:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{muzi_et_al:LIPIcs.MFCS.2017.79,
  author =	{Muzi, Irene and O'Brien, Michael P. and Reidl, Felix and Sullivan, Blair D.},
  title =	{{Being Even Slightly Shallow Makes Life Hard}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{79:1--79:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.79},
  URN =		{urn:nbn:de:0030-drops-81257},
  doi =		{10.4230/LIPIcs.MFCS.2017.79},
  annote =	{Keywords: Topological minors, NP Completeness, Treewidth, ETH, FPT algorithms}
}
Document
Walrasian Pricing in Multi-Unit Auctions

Authors: Simina Brânzei, Aris Filos-Ratsikas, Peter Bro Miltersen, and Yulong Zeng


Abstract
Multi-unit auctions are a paradigmatic model, where a seller brings multiple units of a good, while several buyers bring monetary endowments. It is well known that Walrasian equilibria do not always exist in this model, however compelling relaxations such as Walrasian envy-free pricing do. In this paper we design an optimal envy-free mechanism for multi-unit auctions with budgets. When the market is even mildly competitive, the approximation ratios of this mechanism are small constants for both the revenue and welfare objectives, and in fact for welfare the approximation converges to 1 as the market becomes fully competitive. We also give an impossibility theorem, showing that truthfulness requires discarding resources, and in particular, is incompatible with (Pareto) efficiency.

Cite as

Simina Brânzei, Aris Filos-Ratsikas, Peter Bro Miltersen, and Yulong Zeng. Walrasian Pricing in Multi-Unit Auctions. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{branzei_et_al:LIPIcs.MFCS.2017.80,
  author =	{Br\^{a}nzei, Simina and Filos-Ratsikas, Aris and Miltersen, Peter Bro and Zeng, Yulong},
  title =	{{Walrasian Pricing in Multi-Unit Auctions}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.80},
  URN =		{urn:nbn:de:0030-drops-81197},
  doi =		{10.4230/LIPIcs.MFCS.2017.80},
  annote =	{Keywords: mechanism design, multi-unit auctions, Walrasian pricing, market share}
}
Document
Distributed Strategies Made Easy

Authors: Simon Castellan, Pierre Clairambault, and Glynn Winskel


Abstract
Distributed/concurrent strategies have been introduced as special maps of event structures. As such they factor through their "rigid images," themselves strategies. By concentrating on such "rigid image" strategies we are able to give an elementary account of distributed strategies and their composition, resulting in a category of games and strategies. This is in contrast to the usual development, where composition involves the pullback of event structures explicitly and results in a bicategory. It is shown how, in this simpler setting, strategies extend to probabilistic strategies; and it is indicated how, through probability, we can track nondeterministic branching behaviour that one might otherwise think irrevocably lost in restricting attention to "rigid image" strategies.

Cite as

Simon Castellan, Pierre Clairambault, and Glynn Winskel. Distributed Strategies Made Easy. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 81:1-81:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{castellan_et_al:LIPIcs.MFCS.2017.81,
  author =	{Castellan, Simon and Clairambault, Pierre and Winskel, Glynn},
  title =	{{Distributed Strategies Made Easy}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{81:1--81:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.81},
  URN =		{urn:nbn:de:0030-drops-81315},
  doi =		{10.4230/LIPIcs.MFCS.2017.81},
  annote =	{Keywords: Games, Strategies, Event Structures, Probability}
}
Document
Invited Talk
On Definable and Recognizable Properties of Graphs of Bounded Treewidth (Invited Talk)

Authors: Michal Pilipczuk


Abstract
This is an overview of the invited talk delivered at the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017).

Cite as

Michal Pilipczuk. On Definable and Recognizable Properties of Graphs of Bounded Treewidth (Invited Talk). In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 82:1-82:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{pilipczuk:LIPIcs.MFCS.2017.82,
  author =	{Pilipczuk, Michal},
  title =	{{On Definable and Recognizable Properties of Graphs of Bounded Treewidth}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{82:1--82:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.82},
  URN =		{urn:nbn:de:0030-drops-81384},
  doi =		{10.4230/LIPIcs.MFCS.2017.82},
  annote =	{Keywords: monadic second-order logic, treewidth, recognizability}
}
Document
Invited Talk
Hardness and Approximation of High-Dimensional Search Problems (Invited Talk)

Authors: Rasmus Pagh


Abstract
Hardness and Approximation of High-Dimensional Search Problems.

Cite as

Rasmus Pagh. Hardness and Approximation of High-Dimensional Search Problems (Invited Talk). In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, p. 83:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{pagh:LIPIcs.MFCS.2017.83,
  author =	{Pagh, Rasmus},
  title =	{{Hardness and Approximation of High-Dimensional Search Problems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{83:1--83:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.83},
  URN =		{urn:nbn:de:0030-drops-81416},
  doi =		{10.4230/LIPIcs.MFCS.2017.83},
  annote =	{Keywords: Hardness, high-dimensional search}
}
Document
Invited Talk
Temporal Logics for Multi-Agent Systems (Invited Talk)

Authors: Nicolas Markey


Abstract
This is an overview of an invited talk delivered at the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017).

Cite as

Nicolas Markey. Temporal Logics for Multi-Agent Systems (Invited Talk). In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 84:1-84:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{markey:LIPIcs.MFCS.2017.84,
  author =	{Markey, Nicolas},
  title =	{{Temporal Logics for Multi-Agent Systems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{84:1--84:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.84},
  URN =		{urn:nbn:de:0030-drops-81368},
  doi =		{10.4230/LIPIcs.MFCS.2017.84},
  annote =	{Keywords: Temporal logics, verification, game theory, strategic reasoning}
}
Document
Invited Talk
Ideal-Based Algorithms for the Symbolic Verification of Well-Structured Systems (Invited Talk)

Authors: Philippe Schnoebelen


Abstract
We explain how the downward-closed subsets of a well-quasi-ordering (X,\leq) can be represented via the ideals of X and how this leads to simple and efficient algorithms for the verification of well-structured systems.
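
As a small worked example (not from the talk; the encoding is this sketch's own assumption), consider the wqo (N^k, <=) with the componentwise order: its ideals are products I_1 x ... x I_k where each I_i is either a finite down-set [0, n_i] or all of N, and every downward-closed subset is a finite union of such ideals. The Python sketch below uses that representation for membership and ideal-inclusion tests.

# Illustrative sketch only: an ideal of (N^k, <=) is encoded as a k-tuple whose
# entries are an int n (meaning the interval [0, n]) or None (meaning all of N);
# a downward-closed set is a list of such ideals.
def in_ideal(point, ideal):
    return all(b is None or x <= b for x, b in zip(point, ideal))

def in_downward_closed(point, ideals):
    return any(in_ideal(point, i) for i in ideals)

def ideal_included(i1, i2):
    # Componentwise: [0,a] is included in [0,b] iff a <= b, and always in N.
    return all(b2 is None or (b1 is not None and b1 <= b2) for b1, b2 in zip(i1, i2))

if __name__ == "__main__":
    # D = {(x, y) : x <= 2}  union  {(x, y) : y = 0}
    D = [(2, None), (None, 0)]
    print(in_downward_closed((1, 7), D))      # True, via the first ideal
    print(in_downward_closed((5, 3), D))      # False
    print(ideal_included((1, 3), (2, None)))  # True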

Cite as

Philippe Schnoebelen. Ideal-Based Algorithms for the Symbolic Verification of Well-Structured Systems (Invited Talk). In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 85:1-85:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{schnoebelen:LIPIcs.MFCS.2017.85,
  author =	{Schnoebelen, Philippe},
  title =	{{Ideal-Based Algorithms for the Symbolic Verification of Well-Structured Systems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{85:1--85:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.85},
  URN =		{urn:nbn:de:0030-drops-81393},
  doi =		{10.4230/LIPIcs.MFCS.2017.85},
  annote =	{Keywords: Well-structured systems and verification, Order theory}
}
