eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
0
0
10.4230/LIPIcs.MFCS.2017
article
LIPIcs, Volume 83, MFCS'17, Complete Volume
Larsen, Kim G.
Bodlaender, Hans L.
Raskin, Jean-Francois
LIPIcs, Volume 83, MFCS'17, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017/LIPIcs.MFCS.2017.pdf
Theory of Computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
0:i
0:xvi
10.4230/LIPIcs.MFCS.2017.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Larsen, Kim G.
Bodlaender, Hans L.
Raskin, Jean-Francois
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.0/LIPIcs.MFCS.2017.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
1:1
1:13
10.4230/LIPIcs.MFCS.2017.1
article
Does Looking Inside a Circuit Help?
Impagliazzo, Russell
Kabanets, Valentine
Kolokolova, Antonina
McKenzie, Pierre
Romani, Shadab
The Black-Box Hypothesis states that any property of Boolean functions decided efficiently (e.g., in BPP) with inputs represented by circuits can also be decided efficiently in the black-box setting, where an algorithm is given oracle access to the input function and an upper bound on its circuit size. If this hypothesis is true, then P neq NP. We focus on the consequences of the hypothesis being false, showing that (under general conditions on the structure of a counterexample) it implies a non-trivial algorithm for CSAT. More specifically, we show that if there is a property F of Boolean functions such that F has high sensitivity on some input function f of subexponential circuit complexity (which is a sufficient condition for F being a counterexample to the Black-Box Hypothesis), then CSAT is solvable by a subexponential-size circuit family. Moreover, if such a counterexample F is symmetric, then CSAT is in P/poly. These results provide some evidence towards the conjecture (made in this paper) that the Black-Box Hypothesis is false if and only if CSAT is easy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.1/LIPIcs.MFCS.2017.1.pdf
Black-Box Hypothesis
Rice's theorem
circuit complexity
SAT
sensitivity of boolean functions
decision tree complexity
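The abstract above hinges on the sensitivity of a property F on an input function f. As a minimal illustration of the underlying notion (for ordinary Boolean functions on bit strings, not the property-of-functions setting of the paper), sensitivity can be computed by brute force; all names here are illustrative:

```python
from itertools import product

def sensitivity_at(f, x):
    """Number of positions i where flipping x[i] changes f(x)."""
    base = f(x)
    return sum(
        1 for i in range(len(x))
        if f(x[:i] + (1 - x[i],) + x[i + 1:]) != base
    )

def sensitivity(f, n):
    """Maximum sensitivity of f over all 2^n inputs (brute force; tiny n only)."""
    return max(sensitivity_at(f, x) for x in product((0, 1), repeat=n))

# Example: OR on 3 bits. The all-zero input is sensitive at every position,
# since flipping any single bit changes the output from 0 to 1.
f_or = lambda x: int(any(x))
```

Here sensitivity(f_or, 3) is 3, attained at the all-zero input.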
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
2:1
2:20
10.4230/LIPIcs.MFCS.2017.2
article
The Power of Programs over Monoids in DA
Grosshans, Nathan
McKenzie, Pierre
Segoufin, Luc
The program-over-monoid model of computation originates with Barrington's proof that it captures the complexity class NC^1. Here we make progress in understanding the subtleties of the model. First, we identify a new tameness condition on a class of monoids that entails a natural characterization of the regular languages recognizable by programs over monoids from the class. Second, we prove that the class known as DA satisfies tameness and hence that the regular languages recognized by programs over monoids in DA are precisely those recognizable in the classical sense by morphisms from QDA. Third, we show by contrast that the well studied class of monoids called J is not tame and we exhibit a regular language, recognized by a program over a monoid from J, yet not recognizable classically by morphisms from the class QJ. Finally, we exhibit a program-length-based hierarchy within the class of languages recognized by programs over monoids from DA.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.2/LIPIcs.MFCS.2017.2.pdf
Programs over monoids
DA
lower-bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
3:1
3:14
10.4230/LIPIcs.MFCS.2017.3
article
Regular Language Distance and Entropy
Parker, Austin J.
Yancey, Kelly B.
Yancey, Matthew P.
This paper addresses the problem of determining the distance between two regular languages. It shows how to extend the Jaccard distance, which is defined on finite sets, to potentially infinite regular languages.
The entropy of a regular language plays a large role in this extension, and much of the paper is devoted to investigating it. This includes addressing issues that have forced previous authors to rely on the upper limit in Shannon's traditional formulation of channel capacity, whose limit does not always exist. The paper also proposes a new limit-based formulation for the entropy of a regular language and proves that this formulation always exists and is equivalent to Shannon's original formulation (whenever the latter exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from symbolic dynamics, consequently also showing Shannon's original formulation to be equivalent to topological entropy.
Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is proposed, and is shown to be granular in certain situations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.3/LIPIcs.MFCS.2017.3.pdf
regular languages
channel capacity
entropy
Jaccard
symbolic dynamics
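The extension above starts from the Jaccard distance on finite sets. For reference, the finite-set version being generalised is a direct implementation of 1 - |A ∩ B| / |A ∪ B|; the empty-set convention below is an assumption:

```python
def jaccard_distance(a, b):
    """Jaccard distance between finite sets: 1 - |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # convention assumed here: two empty sets are at distance 0
    return 1.0 - len(a & b) / len(a | b)
```

For example, {1, 2, 3} and {2, 3, 4} share 2 of 4 elements, giving distance 0.5. For infinite regular languages this ratio is undefined, which is why the paper turns to entropy.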
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
4:1
4:14
10.4230/LIPIcs.MFCS.2017.4
article
The Complexity of Boolean Surjective General-Valued CSPs
Fulla, Peter
Zivny, Stanislav
Valued constraint satisfaction problems (VCSPs) are discrete optimisation problems with the objective function given as a sum of fixed-arity functions; the values are rational numbers or infinity.
In Boolean surjective VCSPs, variables take on labels from D = {0,1} and an optimal assignment is required to use both labels from D. A classic example is the global min-cut problem in graphs. Building on the work of Uppman, we establish a dichotomy theorem and thus give a complete complexity classification of Boolean surjective VCSPs. The newly discovered tractable case has an interesting structure related to projections of downsets and upsets. Our work generalises the dichotomy for {0, infinity}-valued constraint languages (corresponding to CSPs) obtained by Creignou and Hebrard, and the dichotomy for {0, 1}-valued constraint languages (corresponding to Min-CSPs) obtained by Uppman.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.4/LIPIcs.MFCS.2017.4.pdf
constraint satisfaction problems
surjective CSP
valued CSP
min-cut
polymorphisms
multimorphisms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
5:1
5:14
10.4230/LIPIcs.MFCS.2017.5
article
On the Expressive Power of Quasiperiodic SFT
Durand, Bruno
Romashchenko, Andrei
In this paper we study the shifts, which are the shift-invariant and topologically closed sets of configurations over a finite alphabet in Z^d. The minimal shifts are those shifts in which all configurations contain exactly the same patterns. Two classes of shifts play a prominent role in symbolic dynamics, in language theory and in the theory of computability: the shifts of finite type (obtained by forbidding a finite number of finite patterns) and the effective shifts (obtained by forbidding a computably enumerable set of finite patterns).
We prove that every effective minimal shift can be represented as a factor of a projective subdynamics of a minimal shift of finite type in a bigger (by 1) dimension. This result transfers to the class of minimal shifts a theorem of M. Hochman known for the class of all effective shifts, and thus answers an open question of E. Jeandel. We prove a similar result for quasiperiodic shifts, and also show that there exists a quasiperiodic shift of finite type for which the Kolmogorov complexity of all patterns of size n \times n is \Omega(n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.5/LIPIcs.MFCS.2017.5.pdf
minimal SFT
tilings
quasiperiodicity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
6:1
6:14
10.4230/LIPIcs.MFCS.2017.6
article
Parameterized Algorithms for Partitioning Graphs into Highly Connected Clusters
Bliznets, Ivan
Karpov, Nikolai
Clustering is a well-known and important problem with numerous applications. The graph-based model is one of the typical cluster models, and in it clusters are generally defined as cliques. However, such an approach might be too restrictive: in some applications, not all objects from the same cluster must be connected. That is why different types of clique relaxations are often considered as clusters.
In our work, we consider the problem of partitioning a graph into clusters and the problem of isolating a cluster of a special type, where by a cluster we mean a highly connected subgraph. Such a clusterization was initially proposed by Hartuv and Shamir, and their HCS clustering algorithm has been extensively applied in practice: it was used to cluster cDNA fingerprints, to find complexes in protein-protein interaction data, to group protein sequences hierarchically into superfamily and family clusters, and to find families of regulatory RNA structures. The HCS algorithm partitions a graph into highly connected subgraphs, but it does so by deleting a number of edges that is not necessarily minimum. In our work, we try to minimize the number of edge deletions. We consider these problems from the parameterized point of view, where the main parameter is the number of allowed edge deletions. The presented algorithms significantly improve the previously known running times for the Highly Connected Deletion (improved from O*(81^k) to O*(3^k)), Isolated Highly Connected Subgraph (from O*(4^k) to O*(k^{O(k^{2/3})})), and Seeded Highly Connected Edge Deletion (from O*(16^{k^{3/4}}) to O*(k^{\sqrt{k}})) problems. Furthermore, we present a subexponential algorithm for the Highly Connected Deletion problem when the number of clusters is bounded. Overall, our work contains three subexponential algorithms, which is notable, as until very recently few problems were known to admit subexponential algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.6/LIPIcs.MFCS.2017.6.pdf
clustering
parameterized complexity
highly connected
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
7:1
7:20
10.4230/LIPIcs.MFCS.2017.7
article
Hypercube LSH for Approximate near Neighbors
Laarhoven, Thijs
A celebrated technique for finding near neighbors for the angular distance involves using a set of random hyperplanes to partition the space into hash regions [Charikar, STOC 2002]. Experiments later showed that using a set of orthogonal hyperplanes, thereby partitioning the space into the Voronoi regions induced by a hypercube, leads to even better results [Terasawa and Tanaka, WADS 2007]. However, no theoretical explanation for this improvement was ever given, and it remained unclear how the resulting hypercube hash method scales in high dimensions.
In this work, we provide explicit asymptotics for the collision probabilities when using hypercubes to partition the space. For instance, two near-orthogonal vectors are expected to collide with probability (1/pi)^d in dimension d, compared to (1/2)^d when using random hyperplanes. Vectors at angle pi/3 collide with probability (sqrt[3]/pi)^d, compared to (2/3)^d for random hyperplanes, and near-parallel vectors collide with similar asymptotic probabilities in both cases.
For c-approximate nearest neighbor searching, this translates to a decrease in the exponent rho of locality-sensitive hashing (LSH) methods of a factor up to log2(pi) ~ 1.652 compared to hyperplane LSH. For c = 2, we obtain rho ~ 0.302 for hypercube LSH, improving upon the rho ~ 0.377 for hyperplane LSH. We further describe how to use hypercube LSH in practice, and we consider an example application in the area of lattice algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.7/LIPIcs.MFCS.2017.7.pdf
(approximate) near neighbors
locality-sensitive hashing
large deviations
dimensionality reduction
lattice algorithms
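The hyperplane baseline quoted above follows from Charikar's result that a single random hyperplane separates two vectors at angle theta with probability theta/pi. A short sketch reproducing the (1/2)^d and (2/3)^d hyperplane-LSH figures from the abstract (the hypercube asymptotics are the paper's contribution and are not reproduced here):

```python
import math

def hyperplane_collision_prob(theta, d):
    """P[two vectors at angle theta land in the same region of d random hyperplanes].

    Each hyperplane separates the pair with probability theta/pi
    [Charikar, STOC 2002], and the d hyperplanes are drawn independently.
    """
    return (1.0 - theta / math.pi) ** d

d = 10
# Near-orthogonal vectors (theta = pi/2): probability (1/2)^d.
p_orth = hyperplane_collision_prob(math.pi / 2, d)
# Vectors at angle pi/3: probability (2/3)^d.
p_60 = hyperplane_collision_prob(math.pi / 3, d)
```

For hypercube hashing the abstract replaces these rates with (1/pi)^d and (sqrt(3)/pi)^d respectively, which is the source of the improved LSH exponent rho.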
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
8:1
8:13
10.4230/LIPIcs.MFCS.2017.8
article
Generalized Predecessor Existence Problems for Boolean Finite Dynamical Systems
Kawachi, Akinori
Ogihara, Mitsunori
Uchizawa, Kei
A Boolean Finite Synchronous Dynamical System (BFDS, for short) consists of a finite number of objects that each maintain a Boolean state, where after individually receiving state assignments, the objects update their state according to object-specific, time-independent Boolean functions synchronously in discrete time steps.
The present paper studies the computational complexity of determining, given a Boolean finite synchronous dynamical system, a configuration (a Boolean vector representing the states of the objects), and a positive integer t, whether there exists another configuration from which the given configuration can be reached in t steps. It was previously shown that this problem, which we call the t-Predecessor Problem, is NP-complete even for t = 1 if the update function of an object is either the conjunction of arbitrary fan-in or the disjunction of arbitrary fan-in.
This paper studies the computational complexity of the t-Predecessor Problem for a variety of sets of permissible update functions, as well as for polynomially bounded t. It also studies the t-Garden-Of-Eden Problem, a variant of the t-Predecessor Problem that asks whether a configuration has a t-predecessor that itself has no predecessor. The paper obtains complexity-theoretical characterizations of all but one of these problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.8/LIPIcs.MFCS.2017.8.pdf
Computational complexity
dynamical systems
Garden of Eden
predecessor
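For very small systems, the t-predecessor question from the abstract can be checked by brute force over all 2^n start configurations. A sketch with illustrative update functions (the paper's point is precisely that the problem is NP-complete in general, so no efficient algorithm is implied by this):

```python
from itertools import product

def step(config, update_fns):
    """One synchronous update: every object applies its own Boolean function."""
    return tuple(f(config) for f in update_fns)

def has_t_predecessor(target, update_fns, t):
    """Brute force over all 2^n start configurations (exponential; tiny n only)."""
    n = len(target)
    for start in product((0, 1), repeat=n):
        c = start
        for _ in range(t):
            c = step(c, update_fns)
        if c == target:
            return True
    return False

# Illustrative systems on two objects:
# 'swap' exchanges the two states each step; 'both_and' sets both objects
# to the conjunction of the current states, so mixed configurations like
# (1, 0) are Gardens of Eden (they have no predecessor).
swap = (lambda c: c[1], lambda c: c[0])
both_and = (lambda c: c[0] & c[1], lambda c: c[0] & c[1])
```

Under both_and, every reachable configuration has equal coordinates, so (1, 0) has no 1-predecessor while (1, 1) does.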
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
9:1
9:13
10.4230/LIPIcs.MFCS.2017.9
article
Dividing Splittable Goods Evenly and With Limited Fragmentation
Damaschke, Peter
A splittable good provided in n pieces shall be divided as evenly as possible among m agents, where every agent can take shares of at most F pieces. We call F the fragmentation. For F=1 we can solve the max-min and min-max problems in linear time. The case F=2 has neat formulations and structural characterizations in terms of weighted graphs. Here we focus on perfectly balanced solutions. While the problem is strongly NP-hard in general, it can be solved in linear time if m>=n-1, and a solution always exists in this case. Moreover, case F=2 is fixed-parameter tractable in the parameter 2m-n. The results also give rise to various open problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.9/LIPIcs.MFCS.2017.9.pdf
packing
load balancing
weighted graph
linear-time algorithm
parameterized algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
10:1
10:15
10.4230/LIPIcs.MFCS.2017.10
article
Small-Space LCE Data Structure with Constant-Time Queries
Tanimura, Yuka
Nishimoto, Takaaki
Bannai, Hideo
Inenaga, Shunsuke
Takeda, Masayuki
The longest common extension (LCE) problem is to preprocess a given string w of length n so that the length of the longest common prefix between suffixes of w that start at any two given positions can be answered quickly. In this paper, we present a data structure of O(z \tau^2 + \frac{n}{\tau}) words of space which answers LCE queries in O(1) time and can be built in O(n \log \sigma) time, where 1 \leq \tau \leq \sqrt{n} is a parameter, z is the size of the Lempel-Ziv 77 factorization of w, and \sigma is the alphabet size. The proposed LCE data structure does not access the input string w when answering queries, and thus w can be deleted after preprocessing. On top of this main result, we obtain further results using (variants of) our LCE data structure, which include the following:
- For highly repetitive strings where the z\tau^2 term is dominated by \frac{n}{\tau}, we obtain a constant-time and sub-linear space LCE query data structure.
- Even when the input string is not well compressible via Lempel-Ziv 77 factorization, we still can obtain a constant-time and sub-linear space LCE data structure for suitable \tau and for \sigma \leq 2^{o(\log n)}.
- The time-space trade-off lower bounds for the LCE problem by Bille et al. [J. Discrete Algorithms, 25:42-50, 2014] and by Kosolobov [CoRR, abs/1611.02891, 2016] do not apply in some cases with our LCE data structure.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.10/LIPIcs.MFCS.2017.10.pdf
longest common extension
truncated suffix trees
t-covers
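For reference, an LCE query has a trivial O(n)-time answer by direct character comparison; the paper's contribution is answering the same query in O(1) time with small space and without the string. A sketch of the naive baseline:

```python
def lce_naive(w, i, j):
    """Longest common extension: length of the longest common prefix of the
    suffixes of w starting at positions i and j. O(n) per query; the data
    structure in the paper answers the same query in O(1) time."""
    k = 0
    while i + k < len(w) and j + k < len(w) and w[i + k] == w[j + k]:
        k += 1
    return k

# Example string: suffixes at positions 0 and 2 of "abababca" share "abab".
w = "abababca"
```

Here lce_naive(w, 0, 2) is 4, since "abababca" and "ababca" agree on their first four characters.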
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
11:1
11:13
10.4230/LIPIcs.MFCS.2017.11
article
ZX-Calculus: Cyclotomic Supplementarity and Incompleteness for Clifford+T Quantum Mechanics
Jeandel, Emmanuel
Perdrix, Simon
Vilmart, Renaud
Wang, Quanlong
The ZX-Calculus is a powerful graphical language for quantum mechanics and quantum information processing. The completeness of the language - i.e. the ability to derive any true equation - is a crucial question. In the quest of a complete ZX-calculus, supplementarity has been recently proved to be necessary for quantum diagram reasoning (MFCS 2016). Roughly speaking, supplementarity consists in merging two subdiagrams when they are parameterized by antipodal angles.
We introduce a generalised supplementarity - called cyclotomic supplementarity - which consists in merging n subdiagrams at once, when the n angles divide the circle into equal parts. We show that when n is an odd prime number, the cyclotomic supplementarity cannot be derived, leading to a countable family of new axioms for diagrammatic quantum reasoning.
We exhibit another new simple axiom that cannot be derived from the existing rules of the ZX-Calculus, implying in particular the incompleteness of the language for the so-called Clifford+T quantum mechanics. We end up with a new axiomatisation of an extended ZX-Calculus, including an axiom schema for the cyclotomic supplementarity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.11/LIPIcs.MFCS.2017.11.pdf
Categorical Quantum Mechanics
ZX-Calculus
Completeness
Cyclotomic Supplementarity
Clifford+T
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
12:1
12:13
10.4230/LIPIcs.MFCS.2017.12
article
Counting Problems for Parikh Images
Haase, Christoph
Kiefer, Stefan
Lohrey, Markus
Given finite-state automata (or context-free grammars) A,B over the same alphabet and a Parikh vector p, we study the complexity of deciding whether the number of words in the language of A with Parikh image p is greater than the number of such words in the language of B. Recently, this problem turned out to be tightly related to the cost problem for weighted Markov chains. We classify the complexity according to whether A and B are deterministic, the size of the alphabet, and the encoding of p (binary or unary).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.12/LIPIcs.MFCS.2017.12.pdf
Parikh images
finite automata
counting problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
13:1
13:13
10.4230/LIPIcs.MFCS.2017.13
article
Communication Complexity of Pairs of Graph Families with Applications
Kolay, Sudeshna
Panolan, Fahad
Saurabh, Saket
Given a graph G and a pair (\mathcal{F}_1, \mathcal{F}_2) of graph families, the function GDISJ_{G, \mathcal{F}_1, \mathcal{F}_2} takes as input two induced subgraphs G_1 and G_2 of G, such that G_1 \in \mathcal{F}_1 and G_2 \in \mathcal{F}_2, and returns 1 if V(G_1) \cap V(G_2) = \emptyset and 0 otherwise. We study the communication complexity of this problem in the two-party model. In particular, we look at pairs of hereditary graph families. We show that the communication complexity of this function, when the two graph families are hereditary, is sublinear if and only if there are finitely many graphs in the intersection of these two families. Then, using concepts from parameterized complexity, we obtain nuanced upper bounds on the communication complexity of GDISJ_{G, \mathcal{F}_1, \mathcal{F}_2}. A concept related to communication protocols is that of a (\mathcal{F}_1, \mathcal{F}_2)-separating family of a graph G. A collection \mathcal{F} of subsets of V(G) is called a (\mathcal{F}_1, \mathcal{F}_2)-separating family for G if, for any two vertex-disjoint induced subgraphs G_1 \in \mathcal{F}_1 and G_2 \in \mathcal{F}_2, there is a set F \in \mathcal{F} with V(G_1) \subseteq F and V(G_2) \cap F = \emptyset.
Given a graph G on n vertices, for any pair (\mathcal{F}_1, \mathcal{F}_2) of hereditary graph families with sublinear communication complexity for GDISJ_{G, \mathcal{F}_1, \mathcal{F}_2}, we give an enumeration algorithm that finds a subexponential-sized (\mathcal{F}_1, \mathcal{F}_2)-separating family. In fact, we give an enumeration algorithm that finds a 2^{o(k)} n^{O(1)} sized (\mathcal{F}_1, \mathcal{F}_2)-separating family, where k denotes the size of a minimum-sized set S of vertices such that V(G) \setminus S has a bipartition (V_1, V_2) with G[V_1] \in \mathcal{F}_1 and G[V_2] \in \mathcal{F}_2. We exhibit a wide range of applications for these separating families, to obtain combinatorial bounds, enumeration algorithms, as well as exact and FPT algorithms for several problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.13/LIPIcs.MFCS.2017.13.pdf
Communication Complexity
Separating Family
FPT algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
14:1
14:13
10.4230/LIPIcs.MFCS.2017.14
article
Monitor Logics for Quantitative Monitor Automata
Paul, Erik
We introduce a new logic called Monitor Logic and show that it is expressively equivalent to Quantitative Monitor Automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.14/LIPIcs.MFCS.2017.14.pdf
Quantitative Monitor Automata
Nested Weighted Automata
Monitor Logics
Weighted Logics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
15:1
15:13
10.4230/LIPIcs.MFCS.2017.15
article
The Complexity of Quantum Disjointness
Klauck, Hartmut
We introduce the communication problem QNDISJ, short for Quantum (Unique) Non-Disjointness, and study its complexity under different modes of communication complexity. The main motivation for the problem is that it is a candidate for the separation of the quantum communication complexity classes QMA and QCMA. The problem generalizes the Vector-in-Subspace and Non-Disjointness problems. We give tight bounds for the QMA, quantum, and randomized communication complexities of the problem. We show polynomially related upper and lower bounds for the MA complexity. We also show an upper bound for QCMA protocols, and show that the bound is tight for a natural class of QCMA protocols for the problem. The latter lower bound is based on a geometric lemma, which states that every subset of the n-dimensional sphere of measure 2^{-p} must contain an orthonormal set of points of size Omega(n/p).
We also study a "small-spaces" version of the problem, and give upper and lower bounds for its randomized complexity that show that the QNDISJ problem is harder than Non-disjointness for randomized protocols. Interestingly, for quantum modes the complexity depends only on the dimension of the smaller space, whereas for classical modes the dimension of the larger space matters.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.15/LIPIcs.MFCS.2017.15.pdf
Communication Complexity
Quantum Proof Systems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
16:1
16:15
10.4230/LIPIcs.MFCS.2017.16
article
Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis
Deng, Xiaotie
Gao, Yansong
Zhang, Jie
The approximation ratio has become one of the dominant measures in mechanism design problems. In light of analysis of algorithms, we define the smoothed approximation ratio to compare the performance of the optimal mechanism and a truthful mechanism when the inputs are subject to random perturbations of the worst-case inputs, and define the average-case approximation ratio to compare the performance of these two mechanisms when the inputs follow a distribution. For the one-sided matching problem, Filos-Ratsikas et al. [2014] show that, amongst all truthful mechanisms, random priority achieves the tight approximation ratio bound of Theta(sqrt{n}). We prove that, despite this worst-case bound, random priority has a constant smoothed approximation ratio. This is, to the best of our knowledge, the first work that asymptotically differentiates the smoothed approximation ratio from the worst-case approximation ratio for mechanism design problems. For the average case, we show that our approximation ratio can be improved to 1+e. These results partially explain why random priority has been successfully used in practice, although in the worst case the optimal social welfare is Theta(sqrt{n}) times what random priority achieves.
These results also pave the way for further studies of smoothed and average-case analysis for approximate mechanism design problems, beyond the worst-case analysis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.16/LIPIcs.MFCS.2017.16.pdf
mechanism design
approximation ratio
smoothed analysis
average-case analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
17:1
17:15
10.4230/LIPIcs.MFCS.2017.17
article
Time Complexity of Constraint Satisfaction via Universal Algebra
Jonsson, Peter
Lagerkvist, Victor
Roy, Biman
The exponential-time hypothesis (ETH) states that 3-SAT is not solvable in subexponential time, i.e. not solvable in O(c^n) time for arbitrary c > 1, where n denotes the number of variables. Problems like k-SAT can be viewed as special cases of the constraint satisfaction problem (CSP), which is the problem of determining whether a set of constraints is satisfiable. In this paper we study the worst-case time complexity of NP-complete CSPs. Our main interest is in the CSP problem parameterized by a constraint language Gamma (CSP(Gamma)), and how the choice of Gamma affects the time complexity. It is believed that CSP(Gamma) is either tractable or NP-complete, and the algebraic CSP dichotomy conjecture gives a sharp delineation of these two classes based on algebraic properties of constraint languages. Under this conjecture and the ETH, we first rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Gamma) problems. This result also extends to certain infinite-domain CSPs and structurally restricted CSP(Gamma) problems. We then begin a study of the complexity of NP-complete CSPs where one is allowed to arbitrarily restrict the values of individual variables, which is a very well-studied subclass of CSPs. For such CSPs with finite domain D, we identify a relation SD such that (1) CSP({SD}) is NP-complete and (2) if CSP(Gamma) over D is NP-complete and solvable in O(c^n) time, then CSP({SD}) is solvable in O(c^n) time, too. Hence, the time complexity of CSP({SD}) is a lower bound for all CSPs of this particular kind. We also prove that the complexity of CSP({SD}) is decreasing when |D| increases, unless the ETH is false. This implies, for instance, that for every c > 1 there exists a finite-domain Gamma such that CSP(Gamma) is NP-complete and solvable in O(c^n) time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.17/LIPIcs.MFCS.2017.17.pdf
Clone Theory
Universal Algebra
Constraint Satisfaction Problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
18:1
18:14
10.4230/LIPIcs.MFCS.2017.18
article
The Hardness of Solving Simple Word Equations
Day, Joel D.
Manea, Florin
Nowotka, Dirk
We investigate the class of regular-ordered word equations. In such equations, each variable occurs at most once in each side and the order of the variables occurring in both left and right hand sides is preserved (the variables can be, however, separated by potentially distinct constant factors). Surprisingly, we obtain that solving such simple equations, even when the sides contain exactly the same variables, is NP-hard. By considerations regarding the combinatorial structure of the minimal solutions of the more general quadratic equations we obtain that the satisfiability problem for regular-ordered equations is in NP. The complexity of solving such word equations under regular constraints is also settled. Finally, we show that a related class of simple word equations, that generalises one-variable equations, is in P.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.18/LIPIcs.MFCS.2017.18.pdf
Word Equations
Regular Patterns
Regular Constraints
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
19:1
19:14
10.4230/LIPIcs.MFCS.2017.19
article
Comparison of Max-Plus Automata and Joint Spectral Radius of Tropical Matrices
Daviaud, Laure
Guillon, Pierre
Merlet, Glenn
Weighted automata over the tropical semiring Zmax are closely related to finitely generated semigroups of matrices over Zmax. In this paper, we use results in automata theory to study two quantities associated with sets of matrices: the joint spectral radius and the ultimate rank. We prove that these two quantities are not computable over the tropical semiring, i.e. there is no algorithm that takes as input a finite set of matrices S and provides as output the joint spectral radius (resp. the ultimate rank) of S. On the other hand, we prove that the joint spectral radius is nevertheless approximable and we exhibit restricted cases in which the joint spectral radius and the ultimate rank are computable. To reach this aim, we study the problem of comparing functions computed by weighted automata over the tropical semiring. This problem is known to be undecidable, and we prove that it remains undecidable in some specific subclasses of automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.19/LIPIcs.MFCS.2017.19.pdf
max-plus automata
max-plus matrices
weighted automata
tropical semiring
joint spectral radius
ultimate rank
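The matrix semigroups discussed above are generated under the max-plus matrix product, where the classical + and × are replaced by max and +. A minimal sketch of that product only (the joint spectral radius and ultimate rank concern asymptotics of long such products and are not computed here):

```python
NEG_INF = float("-inf")  # additive identity of the max-plus semiring Zmax

def maxplus_mul(A, B):
    """Matrix product over the tropical (max-plus) semiring:
    (A ⊗ B)[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Illustrative 2x2 max-plus matrix; NEG_INF plays the role of 0 in (Z, +, *).
A = [[0, 1], [NEG_INF, 0]]
```

In this sketch A ⊗ A equals A itself, i.e. A is idempotent under the tropical product.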
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
20:1
20:14
10.4230/LIPIcs.MFCS.2017.20
article
Binary Search in Graphs Revisited
Deligkas, Argyrios
Mertzios, George B.
Spirakis, Paul G.
In the classical binary search in a path, the aim is to detect an unknown target by asking as few queries as possible, where each query reveals the direction to the target. This binary search algorithm has recently been extended by [Emamjomeh-Zadeh et al., STOC, 2016] to the problem of detecting a target in an arbitrary graph. Similarly to the classical case in the path, the algorithm of Emamjomeh-Zadeh et al. maintains a candidates' set for the target, while each query asks an appropriately chosen vertex, the "median", which minimises a potential \Phi among the vertices of the candidates' set. In this paper we address three open questions posed by Emamjomeh-Zadeh et al., namely (a) detecting a target when the query response is a direction to an approximately shortest path to the target, (b) detecting a target when querying a vertex that is an approximate median of the current candidates' set (instead of an exact one), and (c) detecting multiple targets, for which to the best of our knowledge no progress has been made so far. We resolve questions (a) and (b) by providing appropriate upper and lower bounds, as well as a new potential \Gamma that guarantees efficient target detection even by querying an approximate median each time. With respect to (c), we initiate a systematic study for detecting two targets in graphs and we identify sufficient conditions on the queries that allow for strong (linear) lower bounds and strong (polylogarithmic) upper bounds on the number of queries. All of our positive results can be derived using our new potential \Gamma, which allows querying approximate medians.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.20/LIPIcs.MFCS.2017.20.pdf
binary search
graph
approximate query
probabilistic algorithm
lower bound
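As an editorial aside (not part of the paper above), the classical binary search on a path that this work generalises can be sketched as follows; `direction` is a hypothetical query oracle returning -1, 0, or +1 according to whether the target lies left of, at, or right of the queried vertex.

```python
def find_target(n, direction):
    """Classical binary search on the path 0, 1, ..., n-1.
    `direction(v)` answers -1 / 0 / +1 for "target is left of / at /
    right of vertex v".  Uses O(log n) queries."""
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        d = direction(mid)
        if d == 0:
            return mid
        lo, hi = (mid + 1, hi) if d > 0 else (lo, mid - 1)
    raise ValueError("oracle answers are inconsistent")

# Example: a hidden target at vertex 13 on a 100-vertex path.
print(find_target(100, lambda v: (13 > v) - (13 < v)))  # 13
```

The graph generalisation replaces the midpoint by a "median" vertex minimising a potential over the candidates' set, but the query/response shape is the same.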
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
21:1
21:14
10.4230/LIPIcs.MFCS.2017.21
article
A Formal Semantics of Influence in Bayesian Reasoning
Jacobs, Bart
Zanasi, Fabio
This paper proposes a formal definition of influence in Bayesian reasoning, based on the notions of state (as probability distribution), predicate, validity and conditioning. Our approach highlights how conditioning a joint entwined/entangled state with a predicate on one of its components has 'crossover' influence on the other components. We use the total variation metric on probability distributions to quantitatively measure such influence. These insights are applied to give a rigorous explanation of the fundamental concept of d-separation in Bayesian networks.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.21/LIPIcs.MFCS.2017.21.pdf
probability distribution
Bayesian network
influence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
22:1
22:15
10.4230/LIPIcs.MFCS.2017.22
article
The Complexity of SORE-definability Problems
Lu, Ping
Wu, Zhilin
Chen, Haiming
Single occurrence regular expressions (SORE) are a special kind of deterministic regular expressions, which are extensively used in the schema languages DTD and XSD for XML documents. In this paper, with motivations from the simplification of XML schemas, we consider the SORE-definability problem: Given a regular expression, decide whether it has an equivalent SORE. We investigate the complexity of the SORE-definability problem extensively: We consider both (standard) regular expressions and regular expressions with counting, and distinguish between alphabets of size at least two and unary alphabets. In all cases, we obtain tight complexity bounds. In addition, we consider another variant of this problem, the bounded SORE-definability problem, which is to decide, given a regular expression E and a number M (encoded in unary or binary), whether there is an SORE that is equivalent to E on the set of words of length at most M. We show that in several cases, there is an exponential decrease in the complexity when switching from the SORE-definability problem to its bounded variant.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.22/LIPIcs.MFCS.2017.22.pdf
Single occurrence regular expressions
Definability
Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
23:1
23:14
10.4230/LIPIcs.MFCS.2017.23
article
TC^0 Circuits for Algorithmic Problems in Nilpotent Groups
Myasnikov, Alexei
Weiß, Armin
Recently, Macdonald et al. showed that many algorithmic problems for finitely generated nilpotent groups, including computation of normal forms, the subgroup membership problem, the conjugacy problem, and computation of subgroup presentations, can be done in LOGSPACE. Here we follow their approach and show that all these problems are complete for the uniform circuit class TC^0, uniformly for all r-generated nilpotent groups of class at most c, for fixed r and c.
Moreover, if we allow a certain binary representation of the inputs, then the word problem and computation of normal forms are still in uniform TC^0, while all the other problems we examine are shown to be TC^0-Turing reducible to the problem of computing greatest common divisors and expressing them as linear combinations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.23/LIPIcs.MFCS.2017.23.pdf
nilpotent groups
TC^0
abelian groups
word problem
conjugacy problem
subgroup membership problem
greatest common divisors
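As an editorial aside (not part of the paper above), the reduction target named in the abstract, computing greatest common divisors and expressing them as linear combinations, is exactly what the extended Euclidean algorithm delivers; a standard sketch:

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with
    g = gcd(a, b) and a*x + b*y == g (Bezout coefficients)."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1  # update x-coefficients alongside the remainders
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

# Example: gcd(240, 46) = 2 expressed as a linear combination.
g, x, y = ext_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)  # 2 -9 47 2
```

The paper's point is that this task, not the group-theoretic problems themselves, is the complexity bottleneck under binary input encodings.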
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
24:1
24:14
10.4230/LIPIcs.MFCS.2017.24
article
Better Complexity Bounds for Cost Register Automata
Allender, Eric
Krebs, Andreas
McKenzie, Pierre
Cost register automata (CRAs) are one-way finite automata whose transitions have the side effect that a register is set to the result of applying a state-dependent semiring operation to a pair of registers. Here it is shown that CRAs over the tropical semiring (N U {infinity},\min,+) can simulate polynomial time computation, proving along the way that a naturally defined width-k circuit value problem over the tropical semiring is P-complete.
Then the copyless variant of the CRA, requiring that semiring operations be applied to distinct registers, is shown to be no more powerful than NC^1 when the semiring is (Z,+,x) or (Gamma^*,max,concat). This relates questions left open in recent work on the complexity of CRA-computable functions to long-standing class separation conjectures in complexity theory, such as NC versus P and NC^1 versus GapNC^1.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.24/LIPIcs.MFCS.2017.24.pdf
computational complexity
cost registers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
25:1
25:13
10.4230/LIPIcs.MFCS.2017.25
article
Kernelization of the Subset General Position Problem in Geometry
Boissonnat, Jean-Daniel
Dutta, Kunal
Ghosh, Arijit
Kolay, Sudeshna
In this paper, we consider variants of the Geometric Subset General Position problem. In defining this problem, a geometric subsystem is specified, like a subsystem of lines, hyperplanes or spheres. The input of the problem is a set of n points in \mathbb{R}^d and a positive integer k. The objective is to find a subset of at least k input points such that this subset is in general position with respect to the specified subsystem. For example, a set of points is in general position with respect to a subsystem of hyperplanes in \mathbb{R}^d if no d+1 points lie on the same hyperplane. In this paper, we study the Hyperplane Subset General Position problem under two parameterizations. When parameterized by k, we exhibit a polynomial kernelization for the problem. When parameterized by the dual parameter h=n-k, we exhibit polynomial kernels which are also tight, under standard complexity-theoretic assumptions.
We can also exhibit similar kernelization results for d-Polynomial Subset General Position, where a vector space of polynomials of degree at most d is specified as the underlying subsystem, such that the size of the basis for this vector space is b. The objective is to find a set of at least k input points, or in the dual to delete at most h = n-k points, such that no b+1 points lie on the same polynomial. Notice that this is a generalization of many well-studied geometric variants of the Set Cover problem, such as Circle Subset General Position. We also study general projective variants of these problems. These problems are also related to other geometric problems, like the Subset Delaunay Triangulation problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.25/LIPIcs.MFCS.2017.25.pdf
Incidence Geometry
Kernel Lower bounds
Hyperplanes
Bounded degree polynomials
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
26:1
26:12
10.4230/LIPIcs.MFCS.2017.26
article
Satisfiable Tseitin Formulas Are Hard for Nondeterministic Read-Once Branching Programs
Glinskih, Ludmila
Itsykson, Dmitry
We consider satisfiable Tseitin formulas TS_{G,c} based on d-regular expanders G with the absolute value of the second largest eigenvalue less than d/3. We prove that any nondeterministic read-once branching program (1-NBP) representing TS_{G,c} has size 2^{\Omega(n)}, where n is the number of vertices in G. This extends the recent result of Itsykson et al. [STACS 2017] from OBDDs to 1-NBPs.
On the other hand, it is easy to see that TS_{G,c} can be represented as a read-2 branching program (2-BP) of size O(n), as the negation of a nondeterministic read-once branching program (1-coNBP) of size O(n), and as a CNF formula of size O(n). Thus TS_{G,c} gives the best possible separations (up to a constant in the exponent) between 1-NBP and 2-BP, between 1-NBP and 1-coNBP, and between 1-NBP and CNF.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.26/LIPIcs.MFCS.2017.26.pdf
Tseitin formula
read-once branching program
expander
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
27:1
27:14
10.4230/LIPIcs.MFCS.2017.27
article
The Complexity of Quantified Constraints Using the Algebraic Formulation
Carvalho, Catarina
Martin, Barnaby
Zhuk, Dmitriy
Let A be an idempotent algebra on a finite domain. We combine results of Chen, Zhuk and Carvalho et al. to argue that if A satisfies the polynomially generated powers property (PGP), then QCSP(Inv(A)) is in NP. We then use the result of Zhuk to prove a converse, that if Inv(A) satisfies the exponentially generated powers property (EGP), then QCSP(Inv(A)) is co-NP-hard. Since Zhuk proved that only PGP and EGP are possible, we derive a full dichotomy for the QCSP, justifying the moral correctness of what we term the Chen Conjecture.
We examine in closer detail the situation for domains of size three. Over any finite domain, the only type of PGP that can occur is switchability. Switchability was introduced by Chen as a generalisation of the already-known Collapsibility. For three-element domain algebras A that are Switchable, we prove that for every finite subset Delta of Inv(A), Pol(Delta) is Collapsible. The significance of this is that, for QCSP on finite structures (over three-element domain), all QCSP tractability explained by Switchability is already explained by Collapsibility.
Finally, we present a three-element domain complexity classification vignette, using known as well as derived results.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.27/LIPIcs.MFCS.2017.27.pdf
Quantified Constraints
Computational Complexity
Universal Algebra
Constraint Satisfaction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
28:1
28:15
10.4230/LIPIcs.MFCS.2017.28
article
Induced Embeddings into Hamming Graphs
Milanic, Martin
Mursic, Peter
Mydlarz, Marcelo
Let d be a positive integer. Can a given graph G be realized in R^d so that vertices are mapped to distinct points, two vertices being adjacent if and only if the corresponding points lie on a common line that is parallel to some axis? Graphs admitting such realizations have been studied in the literature for decades under different names. Peterson asked in [Discrete Appl. Math., 2003] about the complexity of the recognition problem. While the two-dimensional case corresponds to the class of line graphs of bipartite graphs and is well-understood, the complexity question has remained open for all higher dimensions.
In this paper, we answer this question. We establish the NP-completeness of the recognition problem for any fixed dimension, even in the class of bipartite graphs. To do this, we strengthen a characterization of induced subgraphs of 3-dimensional Hamming graphs due to Klavžar and Peterin. We complement the hardness result by showing that for some important classes of perfect graphs –including chordal graphs and distance-hereditary graphs– the minimum dimension of the Euclidean space in which the graph can be realized, or the impossibility of doing so, can be determined in linear time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.28/LIPIcs.MFCS.2017.28.pdf
gridline graph
Hamming graph
induced embedding
NP-completeness
chordal graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
29:1
29:13
10.4230/LIPIcs.MFCS.2017.29
article
Structured Connectivity Augmentation
Fomin, Fedor V.
Golovach, Petr A.
Thilikos, Dimitrios M.
We initiate the algorithmic study of the following "structured augmentation" question: is it possible to increase the connectivity of a given graph G by superposing it with another given graph H? More precisely, graph F is the superposition of G and H with respect to an injective mapping \phi:V(H)->V(G) if every edge uv of F is either an edge of G, or \phi^{-1}(u)\phi^{-1}(v) is an edge of H. Thus F contains both G and H as subgraphs, and the edge set of F is the union of the edge sets of G and \phi(H). We consider the following optimization problem. Given graphs G, H, and a weight function \omega assigning non-negative weights to pairs of vertices of V(G), the task is to find \phi of minimum weight \omega(\phi)=\sum_{xy\in E(H)}\omega(\phi(x)\phi(y)) such that the edge connectivity of the superposition F of G and H with respect to \phi is higher than the edge connectivity of G. Our main result is the following "dichotomy" complexity classification. We say that a class of graphs C has bounded vertex-cover number if there is a constant t, depending on C only, such that the vertex-cover number of every graph from C does not exceed t. We show that for every class of graphs C with bounded vertex-cover number, the problems of superposing into a connected graph F and into a 2-edge-connected graph F are solvable in polynomial time when H\in C. On the other hand, for any hereditary class C with unbounded vertex-cover number, both problems are NP-hard when H\in C. For the unweighted variants of structured augmentation problems, i.e., the problems where the task is to identify whether there is a superposition of graphs of required connectivity, we provide necessary and sufficient combinatorial conditions for the existence of such superpositions. These conditions imply polynomial time algorithms solving the unweighted variants of the problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.29/LIPIcs.MFCS.2017.29.pdf
connectivity augmentation
graph superposition
complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
30:1
30:15
10.4230/LIPIcs.MFCS.2017.30
article
Combinatorial Properties and Recognition of Unit Square Visibility Graphs
Casel, Katrin
Fernau, Henning
Grigoriev, Alexander
Schmid, Markus L.
Whitesides, Sue
Unit square (grid) visibility graphs (USV and USGV, resp.) are described by axis-parallel visibility between unit squares placed (on integer grid coordinates) in the plane. We investigate combinatorial properties of these graph classes and the hardness of variants of the recognition problem, i.e., the problem of representing USGV with fixed visibilities within small area and, for USV, the general recognition problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.30/LIPIcs.MFCS.2017.30.pdf
Visibility graphs
visibility layout
NP-completeness
exact algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
31:1
31:15
10.4230/LIPIcs.MFCS.2017.31
article
Weighted Operator Precedence Languages
Droste, Manfred
Dück, Stefan
Mandrioli, Dino
Pradella, Matteo
In recent years, renewed investigation of operator precedence languages (OPL) has led to the discovery of important properties thereof: OPL are closed under all major operations and are characterized, besides the original grammar family, in terms of an automata family (OPA) and an MSO logic; furthermore, they significantly generalize the well-known visibly pushdown languages (VPL). In another area of research, quantitative models of systems are also in great demand. In this paper, we lay the foundation to marry these two research fields. We introduce weighted operator precedence automata and show how they are strict extensions of both OPA and weighted visibly pushdown automata. We prove a Nivat-like result which shows that quantitative OPL can be described by unweighted OPA and very particular weighted OPA. In a Büchi-like theorem, we show that weighted OPA are expressively equivalent to a weighted MSO logic for OPL.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.31/LIPIcs.MFCS.2017.31.pdf
Quantitative automata
operator precedence languages
input-driven languages
visibly pushdown languages
quantitative logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
32:1
32:14
10.4230/LIPIcs.MFCS.2017.32
article
Model Checking and Validity in Propositional and Modal Inclusion Logics
Hella, Lauri
Kuusisto, Antti
Meier, Arne
Virtema, Jonni
Propositional and modal inclusion logic are formalisms that belong to the family of logics based on team semantics. This article investigates the model checking and validity problems of these logics. We identify complexity bounds for both problems, covering both lax and strict team semantics. By doing so, we come close to finalising the programme that ultimately aims to classify the complexities of the basic reasoning problems for modal and propositional dependence, independence, and inclusion logics.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.32/LIPIcs.MFCS.2017.32.pdf
Inclusion Logic
Model Checking
Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
33:1
33:14
10.4230/LIPIcs.MFCS.2017.33
article
Emptiness Problems for Integer Circuits
Barth, Dominik
Beck, Moritz
Dose, Titus
Glaßer, Christian
Michler, Larissa
Technau, Marc
We study the computational complexity of emptiness problems for circuits over sets of natural numbers with the operations union, intersection, complement, addition, and multiplication. For most settings of allowed operations we precisely characterize the complexity in terms of completeness for classes like NL, NP, and PSPACE. The case where intersection, addition, and multiplication are allowed turns out to be equivalent to the complement of polynomial identity testing (PIT).
Our results imply the following improvements and insights on problems studied in earlier papers. We improve the bounds for the membership problem MC(\cup,\cap,¯,+,×) studied by McKenzie and Wagner 2007 and for the equivalence problem EQ(\cup,\cap,¯,+,×) studied by Glaßer et al. 2010. Moreover, it turns out that the following problems are equivalent to PIT, which shows that the challenge to improve their bounds is just a reformulation of a major open problem in algebraic complexity:
1. membership problem MC(\cap,+,×) studied by McKenzie and Wagner 2007
2. integer membership problems MC_Z(+,×), MC_Z(\cap,+,×) studied by Travers 2006
3. equivalence problem EQ(+,×) studied by Glaßer et al. 2010
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.33/LIPIcs.MFCS.2017.33.pdf
computational complexity
integer expressions
integer circuits
polynomial identity testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
34:1
34:13
10.4230/LIPIcs.MFCS.2017.34
article
Another Characterization of the Higher K-Trivials
Anglès d'Auriac, Paul-Elliot
Monin, Benoit
In algorithmic randomness, the class of K-trivial sets has proved remarkable due to its numerous different characterizations. In this paper we pursue work already initiated on K-trivials in the context of higher randomness. In particular, we give another characterization of the non-hyperarithmetic higher K-trivial sets.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.34/LIPIcs.MFCS.2017.34.pdf
Algorithmic randomness
higher computability
K-triviality
effective descriptive set theory
Kolmogorov complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
35:1
35:19
10.4230/LIPIcs.MFCS.2017.35
article
The Quantum Monad on Relational Structures
Abramsky, Samson
Barbosa, Rui Soares
de Silva, Nadish
Zapata, Octavio
Homomorphisms between relational structures play a central role in finite model theory, constraint satisfaction, and database theory. A central theme in quantum computation is to show how quantum resources can be used to gain advantage in information processing tasks. In particular, non-local games have been used to exhibit quantum advantage in boolean constraint satisfaction, and to obtain quantum versions of graph invariants such as the chromatic number. We show how quantum strategies for homomorphism games between relational structures can be viewed as Kleisli morphisms for a quantum monad on the (classical) category of relational structures and homomorphisms. We use these results to exhibit a wide range of examples of contextuality-powered quantum advantage, and to unify several apparently diverse strands of previous work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.35/LIPIcs.MFCS.2017.35.pdf
non-local games
quantum computation
monads
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
36:1
36:15
10.4230/LIPIcs.MFCS.2017.36
article
Towards a Polynomial Kernel for Directed Feedback Vertex Set
Bergougnoux, Benjamin
Eiben, Eduard
Ganian, Robert
Ordyniak, Sebastian
Ramanujan, M. S.
In the Directed Feedback Vertex Set (DFVS) problem, the input is a directed graph D and an integer k. The objective is to determine whether there exists a set of at most k vertices intersecting every directed cycle of D. DFVS was shown to be fixed-parameter tractable when parameterized by solution size by Chen, Liu, Lu, O'Sullivan and Razgon [JACM 2008]; since then, the existence of a polynomial kernel for this problem has become one of the largest open problems in the area of parameterized algorithmics.
In this paper, we study DFVS parameterized by the feedback vertex set number of the underlying undirected graph. We provide two main contributions: a polynomial kernel for this problem on general instances, and a linear kernel for the case where the input digraph is embeddable on a surface of bounded genus.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.36/LIPIcs.MFCS.2017.36.pdf
parameterized algorithms
kernelization
(directed) feedback vertex set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
37:1
37:16
10.4230/LIPIcs.MFCS.2017.37
article
Timed Network Games
Avni, Guy
Guha, Shibashis
Kupferman, Orna
Network games are widely used as a model for selfish resource-allocation problems. In the classical model, each player selects a path connecting her source and target vertex. The cost of traversing an edge depends on the number of players that traverse it. The model thus abstracts away the fact that different users may use a resource at different times and for different durations, which plays an important role in defining the costs of the users in reality. For example, when transmitting packets in a communication network, routing traffic in a road network, or processing a task in a production system, the traversal of the network involves an inherent delay, and so sharing and congestion of resources crucially depend on time.
We study timed network games, which add a time component to network games. Each vertex v in the network is associated with a cost function, mapping the load on v to the price that a player pays for staying in v for one time unit with this load. In addition, each edge has a guard, describing time intervals in which the edge can be traversed, forcing the players to spend time on vertices. Unlike earlier work that adds a time component to network games, time in our model is continuous and cannot be discretized. In particular, players have uncountably many strategies, and a game may have uncountably many pure Nash equilibria.
We study properties of timed network games with cost-sharing or congestion cost functions: their stability, equilibrium inefficiency, and complexity. In particular, we show that the answer to the question whether we can restrict attention to boundary strategies, namely ones in which edges are traversed only at the boundaries of guards, is mixed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.37/LIPIcs.MFCS.2017.37.pdf
Network Games
Timed Automata
Nash Equilibrium
Equilibrium Inefficiency
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
38:1
38:13
10.4230/LIPIcs.MFCS.2017.38
article
Efficient Identity Testing and Polynomial Factorization in Nonassociative Free Rings
Arvind, Vikraman
Datta, Rajit
Mukhopadhyay, Partha
Raja, S.
In this paper we study arithmetic computations in the nonassociative, and noncommutative free polynomial ring F{X}. Prior to this work, nonassociative arithmetic computation was considered by Hrubes, Wigderson, and Yehudayoff, and they showed lower bounds and proved completeness results. We consider Polynomial Identity Testing and Polynomial Factorization in F{X} and show the following results.
1. Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give a deterministic polynomial-time algorithm to decide if f is identically zero. Our result is obtained by a suitable adaptation of the PIT algorithm of Raz and Shpilka for noncommutative ABPs.
2. Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give an efficient deterministic algorithm to compute circuits for the irreducible factors of f in polynomial time when F is the field of rationals. Over finite fields of characteristic p, our algorithm runs in time polynomial in the input size and p.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.38/LIPIcs.MFCS.2017.38.pdf
Circuits
Nonassociative
Noncommutative
Polynomial Identity Testing
Factorization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
39:1
39:14
10.4230/LIPIcs.MFCS.2017.39
article
Faster Algorithms for Mean-Payoff Parity Games
Chatterjee, Krishnendu
Henzinger, Monika
Svozil, Alexander
Graph games provide the foundation for the modeling and synthesis of reactive processes. Such games are played over graphs whose vertices are controlled by two adversarial players. We consider graph games where the objective of the first player is the conjunction of a qualitative objective (specified as a parity condition) and a quantitative objective (specified as a mean-payoff condition). There are two variants of the problem: the threshold problem, where the quantitative goal is to ensure that the mean-payoff value is above a threshold, and the value problem, where the quantitative goal is to ensure the optimal mean-payoff value; in both cases the qualitative parity objective must also be ensured. The previous best-known algorithms for game graphs with n vertices, m edges, parity objectives with d priorities, and maximal absolute reward value W for mean-payoff objectives are as follows: O(n^(d+1)·m·W) for the threshold problem, and O(n^(d+2)·m·W) for the value problem.
Our main contributions are faster algorithms, with the following running times: O(n^(d-1)·m·W) for the threshold problem, and O(n^d·m·W·log(n·W)) for the value problem. For mean-payoff parity objectives with two priorities, our algorithms match the best-known bounds for mean-payoff games (without conjunction with parity objectives). Our results are relevant to the synthesis of reactive systems with both a functional requirement (given as a qualitative objective) and a performance requirement (given as a quantitative objective).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.39/LIPIcs.MFCS.2017.39.pdf
graph games
mean-payoff parity games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
40:1
40:14
10.4230/LIPIcs.MFCS.2017.40
article
Attainable Values of Reset Thresholds
Dzyga, Michalina
Ferens, Robert
Gusev, Vladimir V.
Szykula, Marek
An automaton is synchronizing if there exists a word that sends all states of the automaton to a single state. The reset threshold is the length of the shortest such word. We study the set RT_n of reset thresholds attainable by automata with n states. Relying on constructions of digraphs with known local exponents, we show that the intervals [1, (n^2-3n+4)/2] and [(p-1)(q-1), p(q-2)+n-q+1], where 2 <= p < q <= n, p+q > n, gcd(p,q)=1, belong to RT_n, even if we restrict our attention to strongly connected automata. Moreover, we prove that in this case the smallest value that does not belong to RT_n is at least n^2 - O(n^{1.7625} log n / log log n).
This value is increased further assuming certain conjectures about the gaps between consecutive prime numbers.
We also show that any value smaller than n(n-1)/2 is attainable by an automaton with a sink state, and any value smaller than n^2-O(n^{1.5}) is attainable in the general case.
Furthermore, we solve the problem of the existence of slowly synchronizing automata over an arbitrarily large alphabet by presenting, for every fixed alphabet size, an infinite series of irreducibly synchronizing automata with reset threshold n^2-O(n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.40/LIPIcs.MFCS.2017.40.pdf
Cerny conjecture
exponent
primitive digraph
reset word
synchronizing automaton
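As an editorial aside (not part of the paper above), reset thresholds of small automata can be computed exactly by breadth-first search in the subset automaton; the sketch below recovers the extremal Černý value (n-1)^2 = 9 for n = 4.

```python
def reset_threshold(n, delta):
    """Length of a shortest reset (synchronizing) word of an n-state
    automaton, or None if no reset word exists.  `delta[a][q]` is the
    state reached from state q on letter a.  BFS over the subsets of
    states reachable from the full state set."""
    start = frozenset(range(n))
    dist = {start: 0}
    queue = [start]
    for s in queue:                      # BFS: the list grows while iterating
        if len(s) == 1:
            return dist[s]               # first singleton reached = shortest word
        for letter in delta:
            t = frozenset(letter[q] for q in s)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return None

# Example: the Cerny automaton on 4 states
# (letter a: cyclic shift; letter b: identity except it merges state 3 into 0).
cerny4 = [[1, 2, 3, 0], [0, 1, 2, 0]]
print(reset_threshold(4, cerny4))  # 9
```

The exponential blow-up of the subset automaton is, of course, why attainability questions like those above require structural constructions rather than search.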
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
41:1
41:14
10.4230/LIPIcs.MFCS.2017.41
article
Lower Bounds and PIT for Non-Commutative Arithmetic Circuits with Restricted Parse Trees
Lagarde, Guillaume
Limaye, Nutan
Srinivasan, Srikanth
We investigate the power of Non-commutative Arithmetic Circuits, which compute polynomials over the free non-commutative polynomial ring F<x_1,...,x_N>, where variables do not commute. We consider circuits that are restricted in the ways in which they can compute monomials: this can be seen as restricting the families of parse trees that appear in the circuit. Such restrictions capture essentially all non-commutative circuit models for which lower bounds are known. We prove several results about such circuits.
- We show explicit exponential lower bounds for circuits with up to an exponential number of parse trees, strengthening the work of Lagarde, Malod, and Perifel (ECCC 2016), who prove such a result for Unique Parse Tree (UPT) circuits which have a single parse tree.
- We show explicit exponential lower bounds for circuits whose parse trees are rotations of a single tree. This simultaneously generalizes recent lower bounds of Limaye, Malod, and Srinivasan (Theory of Computing 2016) and the above lower bounds of Lagarde et al., which are known to be incomparable.
- We make progress on a question of Nisan (STOC 1991) regarding separating the power of Algebraic Branching Programs (ABPs) and Formulas in the non-commutative setting by showing a tight lower bound of n^{Omega(log d)} for any UPT formula computing the product of d n*n matrices.
When d <= log n, we can also prove superpolynomial lower bounds for formulas with up to 2^{o(d)} many parse trees (for computing the same polynomial). Improving this bound to allow for 2^{O(d)} trees would yield an unconditional separation between ABPs and Formulas.
- We give deterministic white-box PIT algorithms for UPT circuits over any field (strengthening a result of Lagarde et al. (2016)) and also for sums of a constant number of UPT circuits with different parse trees.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.41/LIPIcs.MFCS.2017.41.pdf
Non-commutative Arithmetic circuits
Partial derivatives
restrictions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
42:1
42:13
10.4230/LIPIcs.MFCS.2017.42
article
Approximation and Parameterized Algorithms for Geometric Independent Set with Shrinking
Pilipczuk, Michal
van Leeuwen, Erik Jan
Wiese, Andreas
Consider the Maximum Weight Independent Set problem for rectangles: given a family of weighted axis-parallel rectangles in the plane, find a maximum-weight subset of non-overlapping rectangles. The problem is notoriously hard both in the approximation and in the parameterized setting. The best known polynomial-time approximation algorithms achieve super-constant approximation ratios [Chalermsook & Chuzhoy, Proc. SODA 2009; Chan & Har-Peled, Discrete & Comp. Geometry, 2012], even though there is a (1+epsilon)-approximation running in quasi-polynomial time [Adamaszek & Wiese, Proc. FOCS 2013; Chuzhoy & Ene, Proc. FOCS 2016]. When parameterized by the target size of the solution, the problem is W[1]-hard even in the unweighted setting [Marx, ESA 2005].
To achieve tractability, we study the following shrinking model: one is allowed to shrink each input rectangle by a multiplicative factor 1-delta for some fixed delta > 0, but the performance is still compared against the optimal solution for the original, non-shrunk instance. We prove that in this regime, the problem admits an EPTAS with running time f(epsilon,delta) n^{O(1)}, and an FPT algorithm with running time f(k,delta) n^{O(1)}, in the setting where a maximum-weight solution of size at most k is to be computed. This improves and significantly simplifies a PTAS given earlier for this problem [Adamaszek, Chalermsook & Wiese, Proc. APPROX/RANDOM 2015], and provides the first parameterized results for the shrinking model. Furthermore, we explore kernelization in the shrinking model, by giving efficient kernelization procedures for several variants of the problem when the input rectangles are squares.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.42/LIPIcs.MFCS.2017.42.pdf
Combinatorial optimization
Approximation algorithms
Fixed-parameter algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
43:1
43:15
10.4230/LIPIcs.MFCS.2017.43
article
Eilenberg Theorems for Free
Urbat, Henning
Adámek, Jiri
Chen, Liang-Ting
Milius, Stefan
Eilenberg-type correspondences, relating varieties of languages (e.g., of finite words, infinite words, or trees) to pseudovarieties of finite algebras, form the backbone of algebraic language theory. We show that they all arise from the same recipe: one models languages and the algebras recognizing them by monads on an algebraic category, and applies a Stone-type duality. Our main contribution is a variety theorem that covers e.g. Wilke's and Pin's work on infinity-languages and the variety theorem for cost functions of Daviaud, Kuperberg, and Pin, and unifies the two categorical approaches of Bojanczyk and of Adamek et al. In addition, we derive new results, such as an extension of the local variety theorem of Gehrke, Grigorieff, and Pin from finite to infinite words.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.43/LIPIcs.MFCS.2017.43.pdf
Eilenberg's theorem
variety of languages
pseudovariety
monad
duality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
44:1
44:13
10.4230/LIPIcs.MFCS.2017.44
article
Membership Problem in GL(2, Z) Extended by Singular Matrices
Potapov, Igor
Semukhin, Pavel
We consider the membership problem for matrix semigroups: the problem of deciding whether a given matrix belongs to a finitely generated matrix semigroup.
In general, the decidability and complexity of this problem for two-dimensional matrix semigroups remain open. Recently, significant progress was made on this open problem by showing that membership is decidable for 2x2 nonsingular integer matrices. In this paper we focus on membership for singular integer matrices and prove that the problem is decidable for 2x2 integer matrices whose determinants are equal to 0, 1, or -1 (i.e., for matrices from GL(2,Z) together with arbitrary singular matrices). Our algorithm relies on a translation of numerical problems on matrices into combinatorial problems on words, and on a conversion of the membership problem into decision problems on regular languages.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.44/LIPIcs.MFCS.2017.44.pdf
Matrix Semigroups
Membership Problem
General Linear Group
Singular Matrices
Automata and Formal Languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
45:1
45:13
10.4230/LIPIcs.MFCS.2017.45
article
Grammars for Indentation-Sensitive Parsing
Nestra, Härmel
Adams' extension of parsing expression grammars enables specifying indentation sensitivity using two non-standard grammar constructs: indentation by a binary relation, and alignment. This paper is a theoretical study of Adams' grammars. It proposes a step-by-step transformation of well-formed Adams' grammars that eliminates the alignment construct from the grammar. The idea that alignment could be avoided was suggested by Adams, but no process for achieving this aim had been described before. This paper also establishes general conditions that binary relations used in indentation constructs must satisfy in order to enable efficient parsing.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.45/LIPIcs.MFCS.2017.45.pdf
Parsing expression grammars
indentation
grammar transformation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
46:1
46:14
10.4230/LIPIcs.MFCS.2017.46
article
The Power of Linear-Time Data Reduction for Maximum Matching
Mertzios, George B.
Nichterlein, André
Niedermeier, Rolf
Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m\sqrt{n}) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization and initiate a deeper, systematic study both for general graphs and for bipartite graphs. Our data reduction algorithms easily comply (in the form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.46/LIPIcs.MFCS.2017.46.pdf
Maximum-cardinality matching
bipartite graphs
linear-time algorithms
kernelization
parameterized complexity analysis
FPT in P
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
47:1
47:14
10.4230/LIPIcs.MFCS.2017.47
article
Two-Planar Graphs Are Quasiplanar
Hoffmann, Michael
Tóth, Csaba D.
It is shown that every 2-planar graph is quasiplanar, that is, if a simple graph admits a drawing in the plane such that every edge is crossed at most twice, then it also admits a drawing in which no three edges pairwise cross. We further show that quasiplanarity is witnessed by a simple topological drawing, that is, any two edges cross at most once and adjacent edges do not cross.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.47/LIPIcs.MFCS.2017.47.pdf
graph drawing
near-planar graph
simple topological plane graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
48:1
48:13
10.4230/LIPIcs.MFCS.2017.48
article
The Shortest Identities for Max-Plus Automata with Two States
Daviaud, Laure
Johnson, Marianne
Max-plus automata are quantitative extensions of automata designed to associate an integer with every non-empty word. A pair of distinct words is said to be an identity for a class of max-plus automata if each of the automata in the class computes the same value on the two words. We give the shortest identities holding for the class of max-plus automata with two states. For this, we exhibit an interesting list of necessary conditions for an identity to hold. Moreover, this result provides a counter-example to a conjecture of Izhakian concerning the minimality of certain identities.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.48/LIPIcs.MFCS.2017.48.pdf
Max-plus automata
Weighted automata
Identities
Tropical matrices
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
49:1
49:14
10.4230/LIPIcs.MFCS.2017.49
article
On the Upward/Downward Closures of Petri Nets
Atig, Mohamed Faouzi
Meyer, Roland
Muskalla, Sebastian
Saivasan, Prakash
We study the size and the complexity of computing finite state automata (FSA) representing and approximating the downward and the upward closure of Petri net languages with coverability as the acceptance condition.
We show how to construct an FSA recognizing the upward closure of a Petri net language in doubly-exponential time, and therefore the size is at most doubly exponential.
For downward closures, we prove that the size of the minimal automata can be non-primitive recursive.
In the case of BPP nets, a well-known subclass of Petri nets, we show that an FSA accepting the downward/upward closure can be constructed in exponential time.
Furthermore, we consider the problem of checking whether a simple regular language is included in the downward/upward closure of a Petri net/BPP net language.
We show that this problem is EXPSPACE-complete (resp. NP-complete) in the case of Petri nets (resp. BPP nets).
Finally, we show that it is decidable whether a Petri net language is upward/downward closed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.49/LIPIcs.MFCS.2017.49.pdf
Petri nets
BPP nets
downward closure
upward closure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
50:1
50:13
10.4230/LIPIcs.MFCS.2017.50
article
On Multidimensional and Monotone k-SUM
Hsu, Chloe Ching-Yun
Umans, Chris
The well-known k-SUM conjecture is that integer k-SUM requires time Omega(n^{\ceil{k/2}-o(1)}). Recent work has studied multidimensional k-SUM in F_p^d, where the best known algorithm takes time \tilde O(n^{\ceil{k/2}}). Bhattacharyya et al. [ICS 2011] proved a min(2^{\Omega(d)},n^{\Omega(k)}) lower bound for k-SUM in F_p^d under the Exponential Time Hypothesis. We give a more refined lower bound under the standard k-SUM conjecture: for sufficiently large p, k-SUM in F_p^d requires time Omega(n^{k/2-o(1)}) if k is even, and Omega(n^{\ceil{k/2}-2k(log k)/(log p)-o(1)}) if k is odd.
For a special case of the multidimensional problem, bounded monotone d-dimensional 3SUM, Chan and Lewenstein [STOC 2015] gave a surprising \tilde O(n^{2-2/(d+13)}) algorithm using additive combinatorics. We show this algorithm is essentially optimal. To be more precise, bounded monotone d-dimensional 3SUM requires time Omega(n^{2-4/d-o(1)}) under the standard 3SUM conjecture, and time Omega(n^{2-2/d-o(1)}) under the so-called strong 3SUM conjecture. Thus, even though one might hope to further exploit the structural advantage of monotonicity, no substantial improvements beyond those obtained by Chan and Lewenstein are possible for bounded monotone d-dimensional 3SUM.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.50/LIPIcs.MFCS.2017.50.pdf
3SUM
kSUM
monotone 3SUM
strong 3SUM conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
51:1
51:13
10.4230/LIPIcs.MFCS.2017.51
article
Parameterized Complexity of the List Coloring Reconfiguration Problem with Graph Parameters
Hatanaka, Tatsuhiko
Ito, Takehiro
Zhou, Xiao
Let G be a graph such that each vertex has its list of available colors, and assume that each list is a subset of the common set consisting of k colors. For two given list colorings of G, we study the problem of transforming one into the other by changing only one vertex color assignment at a time, while at all times maintaining a list coloring. This problem is known to be PSPACE-complete even for bounded bandwidth graphs and a fixed constant k. In this paper, we study the fixed-parameter tractability of the problem when parameterized by several graph parameters. We first give a fixed-parameter algorithm for the problem when parameterized by k and the modular-width of an input graph. We next give a fixed-parameter algorithm for the shortest variant which computes the length of a shortest transformation when parameterized by k and the size of a minimum vertex cover of an input graph. As corollaries, we show that the problem for cographs and the shortest variant for split graphs are fixed-parameter tractable even when only k is taken as a parameter. On the other hand, we prove that the problem is W[1]-hard when parameterized only by the size of a minimum vertex cover of an input graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.51/LIPIcs.MFCS.2017.51.pdf
combinatorial reconfiguration
fixed-parameter tractability
graph algorithm
list coloring
W[1]-hardness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
52:1
52:14
10.4230/LIPIcs.MFCS.2017.52
article
Automata in the Category of Glued Vector Spaces
Colcombet, Thomas
Petrisan, Daniela
In this paper we adopt a category-theoretic approach to the conception of automata classes enjoying minimization by design. The main instantiation of our construction is a new class of automata that are hybrid between deterministic automata and automata weighted over a field.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.52/LIPIcs.MFCS.2017.52.pdf
hybrid set-vector automata
automata minimization in a category
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
53:1
53:13
10.4230/LIPIcs.MFCS.2017.53
article
The Equivalence, Unambiguity and Sequentiality Problems of Finitely Ambiguous Max-Plus Tree Automata are Decidable
Paul, Erik
We show that the equivalence, unambiguity and sequentiality problems are decidable for finitely ambiguous max-plus tree automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.53/LIPIcs.MFCS.2017.53.pdf
Tree Automata
Max-Plus Automata
Equivalence
Unambiguity
Sequentiality
Decidability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
54:1
54:14
10.4230/LIPIcs.MFCS.2017.54
article
New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems
Allender, Eric
Hirahara, Shuichi
The Minimum Circuit Size Problem (MCSP) and a related problem (MKTP) that deals with time-bounded Kolmogorov complexity are prominent candidates for NP-intermediate status. We show that, under very modest cryptographic assumptions (such as the existence of one-way functions), the problem of approximating the minimum circuit size (or time-bounded Kolmogorov complexity) within a factor of n^{1 - o(1)} is indeed NP-intermediate. To the best of our knowledge, these are the first natural problems shown to be NP-intermediate assuming only the existence of an arbitrary one-way function.
We also prove that MKTP is hard for the complexity class DET under
non-uniform NC^0 reductions. This is surprising, since prior work on MCSP and MKTP had highlighted weaknesses of "local" reductions such as NC^0 reductions. We exploit this local reduction to obtain several new consequences:
* MKTP is not in AC^0[p].
* Circuit size lower bounds are equivalent to hardness of a relativized version MKTP^A of MKTP under a class of uniform AC^0 reductions, for a large class of sets A.
* Hardness of MCSP^A implies hardness of MKTP^A for a wide class of
sets A. This is the first result directly relating the complexity of
MCSP^A and MKTP^A, for any A.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.54/LIPIcs.MFCS.2017.54.pdf
computational complexity
Kolmogorov complexity
circuit size
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
55:1
55:13
10.4230/LIPIcs.MFCS.2017.55
article
Strategy Complexity of Concurrent Safety Games
Chatterjee, Krishnendu
Hansen, Kristoffer Arnsfelt
Ibsen-Jensen, Rasmus
We consider two-player, zero-sum, finite-state concurrent reachability games, played for an infinite number of rounds, where in every round each player simultaneously, and independently of the other, chooses an action, after which the successor state is determined by a probability distribution given by the current state and the chosen actions. Player 1 wins iff a designated goal state is eventually visited. We are interested in the complexity of stationary strategies measured by their patience, which is defined as the inverse of the smallest non-zero probability employed. Our main results are as follows: we show that (i) the optimal bound on the patience of optimal and epsilon-optimal strategies for both players is doubly exponential; and (ii) even in games with a single non-absorbing state, exponential (in the number of actions) patience is necessary.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.55/LIPIcs.MFCS.2017.55.pdf
Concurrent games
Reachability and safety
Patience of strategies
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
56:1
56:14
10.4230/LIPIcs.MFCS.2017.56
article
A Characterisation of Pi^0_2 Regular Tree Languages
Cavallari, Filippo
Michalewski, Henryk
Skrzypczak, Michal
We show an algorithm that for a given regular tree language L decides if L is in Pi^0_2, that is if L belongs to the second level of Borel Hierarchy. Moreover, if L is in Pi^0_2, then we construct a weak alternating automaton of index (0, 2) which recognises L. We also prove that for a given language L, L is recognisable by a weak alternating (1, 3)-automaton if and only if it is recognisable by a weak non-deterministic (1, 3)-automaton.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.56/LIPIcs.MFCS.2017.56.pdf
infinite trees
Rabin-Mostowski hierarchy
regular languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
57:1
57:14
10.4230/LIPIcs.MFCS.2017.57
article
On the Exact Amount of Missing Information that Makes Finding Possible Winners Hard
Dey, Palash
Misra, Neeldhara
We consider election scenarios with incomplete information, a situation that arises often in practice. There are several models of incomplete information and accordingly, different notions of outcomes of such elections. In one well-studied model of incompleteness, the votes are given by partial orders over the candidates. In this context we can frame the problem of finding a possible winner, which involves determining whether a given candidate wins in at least one completion of a given set of partial votes for a specific voting rule.
The Possible Winner problem is well-known to be NP-Complete in general, and it is in fact known to be NP-Complete for several voting rules where the number of undetermined pairs in every vote is bounded only by some constant. In this paper, we address the question of determining precisely the smallest number of undetermined pairs for which the Possible Winner problem remains NP-Complete. In particular, we find the exact values of t for which the Possible Winner problem transitions to being NP-Complete from being in P, where t is the maximum number of undetermined pairs in every vote. We demonstrate tight results for a broad subclass of scoring rules which includes all the commonly used scoring rules (such as plurality, veto, Borda, and k-approval), Copeland^\alpha for every \alpha in [0,1], maximin, and Bucklin voting rules. A somewhat surprising aspect of our results is that for many of these rules, the Possible Winner problem turns out to be hard even if every vote has at most one undetermined pair of candidates.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.57/LIPIcs.MFCS.2017.57.pdf
Computational Social Choice
Dichotomy
NP-completeness
Maxflow
Voting
Possible winner
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
58:1
58:12
10.4230/LIPIcs.MFCS.2017.58
article
Fractal Intersections and Products via Algorithmic Dimension
Lutz, Neil
Algorithmic dimensions quantify the algorithmic information density of individual points and may be defined in terms of Kolmogorov complexity. This work uses these dimensions to bound the classical Hausdorff and packing dimensions of intersections and Cartesian products of fractals in Euclidean spaces. This approach shows that a known intersection formula for Borel sets holds for arbitrary sets, and it significantly simplifies the proof of a known product formula. Both of these formulas are prominent, fundamental results in fractal geometry that are taught in typical undergraduate courses on the subject.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.58/LIPIcs.MFCS.2017.58.pdf
algorithmic randomness
geometric measure theory
Hausdorff dimension
Kolmogorov complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
59:1
59:15
10.4230/LIPIcs.MFCS.2017.59
article
Domains for Higher-Order Games
Hague, Matthew
Meyer, Roland
Muskalla, Sebastian
We study two-player inclusion games played over word-generating higher-order recursion schemes.
While inclusion checks are known to capture verification problems, two-player games generalize this relationship to program synthesis.
In such games, non-terminals of the grammar are controlled by opposing players.
The goal of the existential player is to avoid producing a word that lies outside of a regular language of safe words.
We contribute a new domain that provides a representation of the winning region of such games. Our domain is based on (functions over) potentially infinite Boolean formulas with words as atomic propositions. We develop an abstract interpretation framework that we instantiate to abstract this domain into a domain where the propositions are replaced by states of a finite automaton.
This second domain is therefore finite and we obtain, via standard fixed-point techniques, a direct algorithm for the analysis of two-player inclusion games. We show, via a second instantiation of the framework, that our finite domain can be optimized, leading to a (k+1)EXP algorithm for order-k recursion schemes. We give a matching lower bound, showing that our approach is optimal. Since our approach is based on standard Kleene iteration, existing techniques and tools for fixed-point computations can be applied.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.59/LIPIcs.MFCS.2017.59.pdf
higher-order recursion schemes
games
semantics
abstract interpretation
fixed points
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
60:1
60:14
10.4230/LIPIcs.MFCS.2017.60
article
Fine-Grained Complexity of Rainbow Coloring and its Variants
Agrawal, Akanksha
Consider a graph G and an edge-coloring c_R:E(G) \rightarrow [k]. A rainbow path between u,v \in V(G) is a path P from u to v such that for all e,e' \in E(P), where e \neq e' we have c_R(e) \neq c_R(e'). In the Rainbow k-Coloring problem we are given a graph G, and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all u,v \in V(G) there is a rainbow path between u and v in G. Several variants of Rainbow k-Coloring have been studied, two of which are defined as follows. The Subset Rainbow k-Coloring takes as an input a graph G and a set S \subseteq V(G) \times V(G), and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all (u,v) \in S there is a rainbow path between u and v in G. The problem Steiner Rainbow k-Coloring takes as an input a graph G and a set S \subseteq V(G), and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all u,v \in S there is a rainbow path between u and v in G. In an attempt to resolve open problems posed by Kowalik et al. (ESA 2016), we obtain the following results.
- For every k \geq 3, Rainbow k-Coloring does not admit an algorithm running in time 2^{o(|E(G)|)}n^{O(1)}, unless ETH fails.
- For every k \geq 3, Steiner Rainbow k-Coloring does not admit an algorithm running in time 2^{o(|S|^2)}n^{O(1)}, unless ETH fails.
- Subset Rainbow k-Coloring admits an algorithm running in time 2^{O(|S|)}n^{O(1)}. This also implies an algorithm running in time 2^{O(|S|^2)}n^{O(1)} for Steiner Rainbow k-Coloring, which matches the lower bound we obtain.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.60/LIPIcs.MFCS.2017.60.pdf
Rainbow Coloring
Lower bound
ETH
Fine-grained Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
61:1
61:13
10.4230/LIPIcs.MFCS.2017.61
article
Faster Monte-Carlo Algorithms for Fixation Probability of the Moran Process on Undirected Graphs
Chatterjee, Krishnendu
Ibsen-Jensen, Rasmus
Nowak, Martin A.
Evolutionary graph theory studies the evolutionary dynamics in a population structure given as a connected graph. Each node of the graph represents an individual of the population, and edges determine how offspring are placed. We consider the classical birth-death Moran process where there are two types of individuals, namely, the residents with fitness 1 and mutants with fitness r. The fitness indicates the reproductive strength. The evolutionary dynamics happens as follows: in the initial step, in a population of all resident individuals a mutant is introduced, and then at each step, an individual is chosen proportionally to the fitness of its type to reproduce, and the offspring replaces a neighbor uniformly at random. The process stops when all individuals are either residents or mutants. The probability that all individuals in the end are mutants is called the fixation probability, which is a key factor in the rate of evolution. We consider the problem of approximating the fixation probability. The class of algorithms that is extremely relevant for approximation of the fixation probabilities is the Monte-Carlo simulation of the process. Previous results present a polynomial-time Monte-Carlo algorithm for undirected graphs when r is given in unary. First, we present a simple modification: instead of simulating each step, we discard ineffective steps, where no node changes type (i.e., either residents replace residents, or mutants replace mutants). Using this simple modification and our result that the number of effective steps is concentrated around the expected number of effective steps, we present faster polynomial-time Monte-Carlo algorithms for undirected graphs. Our algorithms are always at least a factor O(n^2/log n) faster than the previous algorithms, where n is the number of nodes, and run in polynomial time even if r is given in binary.
We also present lower bounds showing that the upper bound on the expected number of effective steps we present is asymptotically tight for undirected graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.61/LIPIcs.MFCS.2017.61.pdf
Graph algorithms
Evolutionary biology
Monte-Carlo algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
62:1
62:14
10.4230/LIPIcs.MFCS.2017.62
article
The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis
Yamakami, Tomoyuki
We aim at investigating the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, using simultaneously polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT (the 2CNF Boolean formula satisfiability problem) parameterized by the number of Boolean variables. It is shown that 2SAT with n variables and m clauses can be solved simultaneously in polynomial time and (n/2^{c sqrt{log n}}) polylog(m+n) space for an absolute constant c>0. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that 2SAT_3 (a restricted variant of 2SAT in which each variable of a given 2CNF formula appears as a literal in at most 3 clauses) cannot be solved simultaneously in polynomial time using strictly "sub-linear" (i.e., n^{epsilon} polylog(n) for a certain constant epsilon in (0,1)) space. An immediate consequence of this working hypothesis is L neq NL. Moreover, we use our hypothesis as a plausible basis to establish the insolvability of various NL search problems as well as the nonapproximability of NL optimization problems.
For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we need to introduce a new, practical notion of "short reduction." It turns out that overline{2SAT}_3 is complete for a restricted version of NL, called Syntactic NL or simply SNL, under such short reductions. This fact supports the legitimacy of our working hypothesis.
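As background for the problem itself, the classical route to 2SAT is the implication-graph/SCC algorithm (Aspvall-Plass-Tarjan style), which runs in linear time but uses linear space, so it is not the sub-linear-space algorithm studied in the paper. A compact sketch of our own:

```python
def two_sat(n, clauses):
    """Classical implication-graph 2SAT solver. Variables are 1..n; a literal
    is +v or -v; clauses are pairs of literals. Returns a satisfying
    assignment as a list of booleans, or None if unsatisfiable."""
    idx = lambda l: 2 * (abs(l) - 1) + (1 if l < 0 else 0)
    N = 2 * n
    g = [[] for _ in range(N)]   # implication graph
    gr = [[] for _ in range(N)]  # reverse graph
    for a, b in clauses:         # (a or b) yields ~a -> b and ~b -> a
        g[idx(-a)].append(idx(b)); gr[idx(b)].append(idx(-a))
        g[idx(-b)].append(idx(a)); gr[idx(a)].append(idx(-b))
    seen, order = [False] * N, []
    for s in range(N):           # first pass: record finish order on g
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(g[s]))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(g[v])))
                    break
            else:
                order.append(node)
                stack.pop()
    comp, c = [-1] * N, 0        # second pass: SCCs on the reverse graph
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in gr[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n)):
        return None              # x and ~x in one SCC: unsatisfiable
    # x is true iff its component comes later in topological order than ~x's
    return [comp[2 * v] > comp[2 * v + 1] for v in range(n)]
```

The contrast with this textbook solver is exactly the point of the paper: shrinking its Theta(n) workspace below n^epsilon polylog(n) while staying in polynomial time is what the linear space hypothesis conjectures to be impossible for 2SAT_3.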
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.62/LIPIcs.MFCS.2017.62.pdf
sub-linear space
linear space hypothesis
short reduction
Boolean formula satisfiability problem
NL search
NL optimization
Syntactic NL
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
63:1
63:13
10.4230/LIPIcs.MFCS.2017.63
article
Variations on Inductive-Recursive Definitions
Ghani, Neil
McBride, Conor
Nordvall Forsberg, Fredrik
Spahn, Stephan
Dybjer and Setzer introduced the definitional principle of inductive-recursively defined families --- i.e., of families (U : Set, T : U -> D) such that the inductive definition of U may depend on the recursively defined T --- by defining a type DS D E of codes. Each c : DS D E defines a functor [c] : Fam D -> Fam E, and
(U, T) = \mu [c] : Fam D is exhibited as the initial algebra of [c].
This paper considers the composition of DS-definable functors: given F : Fam C -> Fam D and G : Fam D -> Fam E, is G \circ F : Fam C -> Fam E DS-definable, if F and G are? We show that this is the case if and only if powers of families are DS-definable, which seems unlikely. To construct composition, we present two new systems UF and PN of codes for inductive-recursive definitions, with UF a subsystem of DS, and DS a subsystem of PN. Both UF and PN are closed under composition. Since PN defines a potentially larger class of functors, we show that there is a model where initial algebras of PN-functors exist, by adapting Dybjer-Setzer's proof for DS.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.63/LIPIcs.MFCS.2017.63.pdf
Type Theory
induction-recursion
initial-algebra semantics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
64:1
64:13
10.4230/LIPIcs.MFCS.2017.64
article
One-Dimensional Logic over Trees
Kieronski, Emanuel
Kuusisto, Antti
A one-dimensional fragment of first-order logic is obtained by restricting quantification to blocks of existential quantifiers that leave at most one variable free. This fragment contains two-variable logic, and it is known that over words both formalisms have the same complexity and expressive power. Here we investigate the one-dimensional fragment over trees. We consider unranked unordered trees accessible by one or both of the descendant and child relations, as well as ordered trees equipped additionally with sibling relations. We show that over unordered trees the satisfiability problem is ExpSpace-complete when only the descendant relation is available, and 2ExpTime-complete with both the descendant and child relations or with only the child relation. Over ordered trees the problem remains 2ExpTime-complete. Regarding expressivity, we show that over ordered trees, and over unordered trees accessible by both the descendant and child relations, the one-dimensional fragment is equivalent to the two-variable fragment with counting quantifiers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.64/LIPIcs.MFCS.2017.64.pdf
satisfiability
expressivity
trees
fragments of first-order logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
65:1
65:13
10.4230/LIPIcs.MFCS.2017.65
article
An Improved FPT Algorithm for the Flip Distance Problem
Li, Shaohua
Feng, Qilong
Meng, Xiangzhong
Wang, Jianxin
Given a set \cal P of points in the Euclidean plane and two triangulations of \cal P, the flip distance between these two triangulations is the minimum number of flips required to transform one triangulation into the other. The Parameterized Flip Distance problem is to decide if the flip distance between two given triangulations is equal to a given integer k. The previous best FPT algorithm runs in time O^*(k\cdot c^k) (c\leq 2\times 14^{11}), where each step has fourteen possible choices and the length of the action sequence is bounded by 11k. By applying the backtracking strategy and analyzing the underlying property of the flip sequence, each step of our algorithm has only five possible choices. Based on an auxiliary graph G, we prove that the length of the action sequence for our algorithm is bounded by 2|G|. As a result, we present an FPT algorithm running in time O^*(k\cdot 32^k).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.65/LIPIcs.MFCS.2017.65.pdf
triangulation
flip distance
FPT algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
66:1
66:14
10.4230/LIPIcs.MFCS.2017.66
article
Reversible Kleene lattices
Brunet, Paul
We investigate the equational theory of reversible Kleene lattices, that is, algebras of languages with the regular operations (union, composition and Kleene star) together with intersection and mirror image. Building on results by Andréka, Mikulás and Németi from 2011, we construct the free representation of this algebra. We then provide an automaton model to compare representations. These automata are adapted from Petri automata, which we introduced with Pous in 2015 to tackle a similar problem for algebras of binary relations. This allows us to show that testing the validity of equations in this algebra is decidable, and in fact ExpSpace-complete.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.66/LIPIcs.MFCS.2017.66.pdf
Kleene algebra
Automata
Petri nets
Decidability
Complexity
Formal languages
Lattice
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
67:1
67:14
10.4230/LIPIcs.MFCS.2017.67
article
Lossy Kernels for Hitting Subgraphs
Eiben, Eduard
Hermelin, Danny
Ramanujan, M. S.
In this paper, we study the Connected H-Hitting Set and Dominating Set problems from the perspective of approximate kernelization, a framework recently introduced by Lokshtanov et al. [STOC 2017]. For the Connected H-Hitting Set problem, we obtain an \alpha-approximate kernel for every \alpha>1 and complement it with a lower bound for the natural weighted version. We then perform a refined analysis of the tradeoff between the approximation factor and kernel size for the Dominating Set problem on d-degenerate graphs, and provide an interpolation of approximate kernels between the known d^2-approximate kernel of constant size and the 1-approximate kernel of size k^{O(d^2)}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.67/LIPIcs.MFCS.2017.67.pdf
parameterized algorithms
lossy kernelization
graph theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
68:1
68:17
10.4230/LIPIcs.MFCS.2017.68
article
Undecidable Problems for Probabilistic Network Programming
Kahn, David M.
The software-defined networking language NetKAT is able to verify many useful properties of networks automatically via a PSPACE decision procedure for program equality. However, for its probabilistic extension ProbNetKAT, no such decision procedure is known. We show that several potentially useful properties of ProbNetKAT are in fact undecidable, including emptiness of support intersection and certain kinds of distribution bounds and program comparisons. We do so by embedding the Post Correspondence Problem in ProbNetKAT via direct product expressions, and by directly embedding probabilistic finite automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.68/LIPIcs.MFCS.2017.68.pdf
Software-defined networking
NetKAT
ProbNetKAT
Undecidability
Probabilistic finite automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
69:1
69:14
10.4230/LIPIcs.MFCS.2017.69
article
Computational Complexity of Graph Partition under Vertex-Compaction to an Irreflexive Hexagon
Vikas, Narayan
In this paper, we solve a long-standing graph partition problem under vertex-compaction that has been of interest since about 1999, especially after the author's results on a related compaction problem appeared that year. The problem is to decide whether the vertices of a graph can be partitioned into six distinct non-empty sets A, B, C, D, E, and F such that each set is independent, i.e., there is no edge within any set, and an edge is possible (but not necessary) only between the pairs of sets A and B, B and C, C and D, D and E, E and F, and F and A, with no edge between any other pair of sets. We study the problem as the vertex-compaction problem for an irreflexive hexagon (6-cycle). We show in this paper that the vertex-compaction problem for an irreflexive hexagon is NP-complete. Our proof extends to larger even irreflexive cycles, showing that the vertex-compaction problem for an irreflexive even k-cycle is NP-complete for all even k \geq 6.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.69/LIPIcs.MFCS.2017.69.pdf
computational complexity
algorithms
graph
partition
colouring
homomorphism
retraction
compaction
vertex-compaction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
70:1
70:14
10.4230/LIPIcs.MFCS.2017.70
article
Recognizing Graphs Close to Bipartite Graphs
Bonamy, Marthe
Dabrowski, Konrad K.
Feghali, Carl
Johnson, Matthew
Paulusma, Daniël
We continue research into a well-studied family of problems that ask if the vertices of a graph can be partitioned into sets A and B, where A is an independent set and B induces a graph from some specified graph class G. We let G be the class of k-degenerate graphs. The problem is known to be polynomial-time solvable if k=0 (bipartite graphs) and NP-complete if k=1 (near-bipartite graphs) even for graphs of diameter 4, as shown by Yang and Yuan, who also proved polynomial-time solvability for graphs of diameter 2. We show that recognizing near-bipartite graphs of diameter 3 is NP-complete, resolving their open problem. To answer another open problem, we consider graphs of maximum degree D on n vertices. We show how to find A and B in O(n) time for k=1 and D=3, and in O(n^2) time for k >= 2 and D >= 4. These results also provide an algorithmic version of a result of Catlin [JCTB, 1979] and enable us to complete the complexity classification of another problem: finding a path in the vertex colouring reconfiguration graph between two given k-colourings of a graph of bounded maximum degree.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.70/LIPIcs.MFCS.2017.70.pdf
degenerate graphs
near-bipartite graphs
reconfiguration graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
71:1
71:13
10.4230/LIPIcs.MFCS.2017.71
article
Parameterized Algorithms and Kernels for Rainbow Matching
Gupta, Sushmita
Roy, Sanjukta
Saurabh, Saket
Zehavi, Meirav
In this paper, we study the NP-complete colorful variant of the classical Matching problem, namely, the Rainbow Matching problem. Given an edge-colored graph G and a positive integer k, this problem asks whether there exists a matching of size at least k such that all the edges in the matching have distinct colors. We first develop a deterministic algorithm that solves Rainbow Matching on paths in time O*(((1+\sqrt{5})/2)^k) and polynomial space. This algorithm is based on a curious combination of the method of bounded search trees and a "divide-and-conquer-like" approach, where the branching process is guided by the maintenance of an auxiliary bipartite graph where one side captures "divided-and-conquered" pieces of the path. Our second result is a randomized algorithm that solves Rainbow Matching on general graphs in time O*(2^k) and polynomial space. Here, we show how a result by Björklund et al. [JCSS, 2017] can be invoked as a black box, wrapped by a probability-based analysis tailored to our problem. We also complement our two main results by designing kernels for Rainbow Matching on general and bounded-degree graphs.
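The object the abstract studies is easy to state operationally. As a point of reference only (this is a brute-force check, not the paper's O*(2^k) randomized algorithm or its bounded-search-tree method), a rainbow matching of size k can be verified by exhaustive search:

```python
from itertools import combinations

def has_rainbow_matching(edges, k):
    """Brute-force check for a rainbow matching of size k: k edges that are
    pairwise vertex-disjoint and carry pairwise distinct colors.
    `edges` is a list of (u, v, color) triples. Exponential-time reference."""
    for cand in combinations(edges, k):
        verts = [x for (u, v, _) in cand for x in (u, v)]
        cols = {c for (_, _, c) in cand}
        # a valid candidate uses 2k distinct vertices and k distinct colors
        if len(set(verts)) == 2 * k and len(cols) == k:
            return True
    return False
```

On the edge-colored path `[(0, 1, 'r'), (1, 2, 'r'), (2, 3, 'b')]`, a rainbow matching of size 2 exists (the first and last edges), but none of size 3.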
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.71/LIPIcs.MFCS.2017.71.pdf
Rainbow Matching
Parameterized Algorithm
Bounded Search Trees
Divide-and-Conquer
3-Set Packing
3-Dimensional Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
72:1
72:16
10.4230/LIPIcs.MFCS.2017.72
article
Compositional Weak Metrics for Group Key Update
Lanotte, Ruggero
Merro, Massimo
Tini, Simone
We investigate the compositionality of both weak bisimilarity metric and weak similarity quasi-metric semantics with respect to a variety of standard operators, in the context of probabilistic process algebra. We show how compositionality with respect to nondeterministic and probabilistic choice requires resorting to rooted semantics. As a main application, we demonstrate how our results can be successfully used to conduct compositional reasoning to estimate the performance of group key update protocols in a multicast setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.72/LIPIcs.MFCS.2017.72.pdf
Behavioural metric
compositional reasoning
group key update
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
73:1
73:14
10.4230/LIPIcs.MFCS.2017.73
article
Clique-Width for Graph Classes Closed under Complementation
Blanché, Alexandre
Dabrowski, Konrad K.
Johnson, Matthew
Lozin, Vadim V.
Paulusma, Daniël
Zamaraev, Viktor
Clique-width is an important graph parameter due to its algorithmic and structural properties. A graph class is hereditary if it can be characterized by a (not necessarily finite) set H of forbidden induced subgraphs. We initiate a systematic study into the boundedness of clique-width of hereditary graph classes closed under complementation. First, we extend the known classification for the |H|=1 case by classifying the boundedness of clique-width for every set H of self-complementary graphs. We then completely settle the |H|=2 case. In particular, we determine one new class of (H1, complement of H1)-free graphs of bounded clique-width (as a side effect, this leaves only six classes of (H1, H2)-free graphs, for which it is not known whether their clique-width is bounded).
Once we have obtained the classification of the |H|=2 case, we investigate the effect of forbidding self-complementary graphs on the boundedness of clique-width. Surprisingly, we show that for a set F of self-complementary graphs on at least five vertices, the classification of the boundedness of clique-width for ({H1, complement of H1} + F)-free graphs coincides with the one for the |H|=2 case if and only if F does not include the bull (the only non-empty self-complementary graphs on fewer than five vertices are P_1 and P_4, and P_4-free graphs have clique-width at most 2).
Finally, we discuss the consequences of our results for COLOURING.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.73/LIPIcs.MFCS.2017.73.pdf
clique-width
self-complementary graph
forbidden induced subgraph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
74:1
74:11
10.4230/LIPIcs.MFCS.2017.74
article
Computing the Maximum using (min, +) Formulas
Mahajan, Meena
Nimbhorkar, Prajakta
Tawari, Anuj
We study computation by formulas over (min,+). We consider the computation of max{x_1,...,x_n} over N as a difference of (min,+) formulas, and show that size n + n \log n is sufficient and necessary. Our proof also shows that any (min,+) formula computing the minimum of all sums of n-1 out of n variables must have n \log n leaves; this too is tight. Our proofs use a complexity measure for (min,+) functions based on minterm-like behaviour and on the entropy of an associated graph.
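The difference-of-(min,+)-formulas representation of the maximum rests on the identity max{x_1,...,x_n} = (x_1+...+x_n) - min_i (sum of the n-1 variables other than x_i), since dropping the largest variable leaves the smallest such partial sum. A one-function numeric check of this identity (our own illustration of the representation, not the paper's size-optimal formula construction):

```python
def max_via_min_plus(xs):
    """max{x_1,...,x_n} expressed as a difference of (min,+) expressions:
    the sum of all variables minus the minimum over all sums of n-1 of
    the n variables."""
    total = sum(xs)                        # a (min,+) 'product' of all variables
    drop_one = min(total - x for x in xs)  # min over sums of n-1 variables
    return total - drop_one
```

For instance, `max_via_min_plus([3, 1, 4, 1, 5])` returns 5, agreeing with the ordinary maximum.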
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.74/LIPIcs.MFCS.2017.74.pdf
Formulas
Circuits
Lower bounds
Tropical semiring
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
75:1
75:14
10.4230/LIPIcs.MFCS.2017.75
article
Selecting Nodes and Buying Links to Maximize the Information Diffusion in a Network
D'Angelo, Gianlorenzo
Severini, Lorenzo
Velaj, Yllka
The Independent Cascade Model (ICM) is a widely studied model that aims to capture the dynamics of the information diffusion in social networks and in general complex networks. In this model, we can distinguish between active nodes which spread the information and inactive ones. The process starts from a set of initially active nodes called seeds. Recursively, currently active nodes can activate their neighbours according to a probability distribution on the set of edges. After a certain number of these recursive cycles, a large number of nodes might become active. The process terminates when no further node gets activated.
Starting from the work of Domingos and Richardson [Domingos et al. 2001], several studies have been conducted with the aim of shaping a given diffusion process so as to maximize the number of activated nodes at the end of the process. One of the most studied problems has been formalized by Kempe et al. and consists in finding a set of initial seeds that maximizes the expected number of active nodes under a budget constraint [Kempe et al. 2003].
In this paper we study a generalization of the problem of Kempe et al. in which we are allowed to spend part of the budget to create new edges incident to the seeds. That is, the budget can be spent to buy seeds or edges according to a cost function. The problem does not admit a PTAS, unless P=NP. We propose two approximation algorithms: the former gives an approximation ratio that depends on the edge costs and increases when these costs are high; the latter gives a constant approximation guarantee, which is greater than that of the first algorithm when the edge costs can be small.
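The Independent Cascade Model dynamics described in the abstract can be estimated by straightforward Monte-Carlo simulation. A generic ICM sketch for illustration (function and parameter names are our own, not from the paper):

```python
import random

def icm_spread(edges_p, seeds, trials=1000, rng=None):
    """Monte-Carlo estimate of the expected number of active nodes under the
    Independent Cascade Model. `edges_p` maps a directed edge (u, v) to its
    activation probability; `seeds` is the initially active set."""
    rng = rng or random.Random(0)
    out = {}
    for (u, v), p in edges_p.items():
        out.setdefault(u, []).append((v, p))
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:          # each newly active node gets exactly one
            nxt = []             # activation attempt per outgoing edge
            for u in frontier:
                for v, p in out.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials
```

With all edge probabilities equal to 1 on a directed path 0 -> 1 -> 2 and seed {0}, every trial activates all three nodes, so the estimate is exactly 3.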
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.75/LIPIcs.MFCS.2017.75.pdf
Approximation algorithms
information diffusion
complex networks
independent cascade model
network augmentation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
76:1
76:14
10.4230/LIPIcs.MFCS.2017.76
article
K4-free Graphs as a Free Algebra
Cosme Llópez, Enric
Pous, Damien
Graphs of treewidth at most two are the ones excluding the clique with four vertices as a minor. Equivalently, they are the graphs whose biconnected components are series-parallel.
We turn those graphs into a free algebra, answering positively a question by Courcelle and Engelfriet, in the case of treewidth two.
First we propose a syntax for denoting them: in addition to series and parallel compositions, it suffices to consider the neutral elements of those operations and a unary transpose operation. Then we give a finite equational presentation and we prove it complete: two terms from the syntax are congruent if and only if they denote the same graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.76/LIPIcs.MFCS.2017.76.pdf
Universal Algebra
Graph theory
Axiomatisation
Tree decompositions
Graph minors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
77:1
77:14
10.4230/LIPIcs.MFCS.2017.77
article
Making Metric Temporal Logic Rational
Krishna, Shankara Narayanan
Madnani, Khushraj
Pandya, Paritosh K.
We study an extension of MTL in pointwise time with regular expression guarded modality Reg_I(re) where re is a rational expression over subformulae. We study the decidability and expressiveness of this extension (MTL+Ureg+Reg), called RegMTL, as well as its fragment SfrMTL where only star-free rational expressions are allowed. Using the technique of temporal projections, we show that RegMTL has decidable satisfiability by giving an equisatisfiable reduction to MTL. We also identify a subclass MITL+UReg of RegMTL for which our equisatisfiable reduction gives rise to formulae of MITL, yielding elementary decidability. As our second main result, we show a tight automaton-logic connection between SfrMTL and partially ordered (or very weak) 1-clock alternating timed automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.77/LIPIcs.MFCS.2017.77.pdf
Metric Temporal Logic
Timed Automata
Regular Expression
Equisatisfiability
Expressiveness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
78:1
78:14
10.4230/LIPIcs.MFCS.2017.78
article
Complexity of Restricted Variants of Skolem and Related Problems
S., Akshay
Balaji, Nikhil
Vyas, Nikhil
Given a linear recurrence sequence (LRS), the Skolem problem asks whether it ever becomes zero. The decidability of this problem has been open for several decades. Currently, decidability is known only for LRS of order up to 4. For arbitrary orders (i.e., the number of previous terms on which the n-th term depends), the only known complexity result is NP-hardness, by a result of Blondel and Portier from 2002.
In this paper, we give a different proof of this hardness result, which is arguably simpler and pinpoints the source of hardness. To demonstrate this, we identify a subclass of LRS for which the Skolem problem is in fact NP-complete. We show the generic nature of our lower-bound technique by adapting it to show stronger lower bounds for a related problem that encompasses many known decision problems on linear recurrence sequences.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.78/LIPIcs.MFCS.2017.78.pdf
Linear recurrence sequences
Skolem problem
NP-completeness
Program termination
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
79:1
79:13
10.4230/LIPIcs.MFCS.2017.79
article
Being Even Slightly Shallow Makes Life Hard
Muzi, Irene
O'Brien, Michael P.
Reidl, Felix
Sullivan, Blair D.
We study the computational complexity of identifying dense substructures, namely r/2-shallow topological minors and r-subdivisions. Of particular interest is the case r = 1, when these substructures correspond to very localized relaxations of subgraphs. Since Densest Subgraph can be solved in polynomial time, we ask whether these slight relaxations also admit efficient algorithms.
In the following, we provide a negative answer: Dense r/2-Shallow Topological Minor and Dense r-Subdivision are already NP-hard for r = 1 in very sparse graphs. Further, they do not admit algorithms with running time 2^(o(tw^2)) n^O(1) when parameterized by the treewidth of the input graph for r > 2 unless ETH fails.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.79/LIPIcs.MFCS.2017.79.pdf
Topological minors
NP Completeness
Treewidth
ETH
FPT algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
80:1
80:14
10.4230/LIPIcs.MFCS.2017.80
article
Walrasian Pricing in Multi-Unit Auctions
Brânzei, Simina
Filos-Ratsikas, Aris
Miltersen, Peter Bro
Zeng, Yulong
Multi-unit auctions are a paradigmatic model in which a seller brings multiple units of a good, while several buyers bring monetary endowments. It is well known that Walrasian equilibria do not always exist in this model; however, compelling relaxations such as Walrasian envy-free pricing do. In this paper we design an optimal envy-free mechanism for multi-unit auctions with budgets. When the market is even mildly competitive, the approximation ratios of this mechanism are small constants for both the revenue and welfare objectives, and in fact for welfare the approximation converges to 1 as the market becomes fully competitive. We also give an impossibility theorem, showing that truthfulness requires discarding resources and, in particular, is incompatible with (Pareto) efficiency.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.80/LIPIcs.MFCS.2017.80.pdf
mechanism design
multi-unit auctions
Walrasian pricing
market share
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
81:1
81:13
10.4230/LIPIcs.MFCS.2017.81
article
Distributed Strategies Made Easy
Castellan, Simon
Clairambault, Pierre
Winskel, Glynn
Distributed/concurrent strategies have been introduced as special maps of event structures. As such they factor through their "rigid images," themselves strategies. By concentrating on such "rigid image" strategies we are able to give an elementary account of distributed strategies and their composition, resulting in a category of games and strategies. This is in contrast to the usual development, where composition involves the pullback of event structures explicitly and results in a bicategory. We show how, in this simpler setting, strategies extend to probabilistic strategies, and indicate how, through probability, we can track nondeterministic branching behaviour that one might otherwise think irrevocably lost by restricting attention to "rigid image" strategies.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.81/LIPIcs.MFCS.2017.81.pdf
Games
Strategies
Event Structures
Probability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
82:1
82:2
10.4230/LIPIcs.MFCS.2017.82
article
On Definable and Recognizable Properties of Graphs of Bounded Treewidth (Invited Talk)
Pilipczuk, Michal
This is an overview of the invited talk delivered at the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.82/LIPIcs.MFCS.2017.82.pdf
monadic second-order logic
treewidth
recognizability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
83:1
83:1
10.4230/LIPIcs.MFCS.2017.83
article
Hardness and Approximation of High-Dimensional Search Problems (Invited Talk)
Pagh, Rasmus
Hardness and Approximation of High-Dimensional Search Problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.83/LIPIcs.MFCS.2017.83.pdf
Hardness
high-dimensional search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
84:1
84:3
10.4230/LIPIcs.MFCS.2017.84
article
Temporal Logics for Multi-Agent Systems (Invited Talk)
Markey, Nicolas
This is an overview of an invited talk delivered at the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.84/LIPIcs.MFCS.2017.84.pdf
Temporal logics
verification
game theory
strategic reasoning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2017-12-01
83
85:1
85:4
10.4230/LIPIcs.MFCS.2017.85
article
Ideal-Based Algorithms for the Symbolic Verification of Well-Structured Systems (Invited Talk)
Schnoebelen, Philippe
We explain how the downward-closed subsets of a well-quasi-ordering (X,\leq) can be represented via the ideals of X and how this leads to simple and efficient algorithms for the verification of well-structured systems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol083-mfcs2017/LIPIcs.MFCS.2017.85/LIPIcs.MFCS.2017.85.pdf
Well-structured systems and verification
Order theory