30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013), STACS 2013, February 27 to March 2, 2013, Kiel, Germany
STACS 2013
February 27 to March 2, 2013
Kiel, Germany
Symposium on Theoretical Aspects of Computer Science
STACS
http://www.stacs-conf.org/
https://dblp.org/db/conf/stacs
Leibniz International Proceedings in Informatics
LIPIcs
https://www.dagstuhl.de/dagpub/1868-8969
https://dblp.org/db/series/lipics
1868-8969
Natacha
Portier
Natacha Portier
Thomas
Wilke
Thomas Wilke
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
20
2013
978-3-939897-50-7
https://www.dagstuhl.de/dagpub/978-3-939897-50-7
Frontmatter, Table of Contents, Preface, Workshop Organization
Frontmatter, Table of Contents, Preface, Workshop Organization
Frontmatter
Table of Contents
Preface
Workshop Organization
i-xvii
Front Matter
Natacha
Portier
Natacha Portier
Thomas
Wilke
Thomas Wilke
10.4230/LIPIcs.STACS.2013.i
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The complexity of analyzing infinite-state Markov chains, Markov decision processes, and stochastic games (Invited Talk)
In recent years, a considerable amount of research has been devoted to understanding the computational complexity of basic analysis problems, and model checking problems, for finitely presented, countably infinite-state probabilistic systems. In particular, we have studied recursive Markov chains (RMCs), recursive Markov decision processes (RMDPs), and recursive stochastic games (RSGs). These arise by adding a natural recursion feature to finite-state Markov chains, MDPs, and stochastic games. RMCs and RMDPs provide natural abstract models of probabilistic procedural programs with recursion, and they are expressively equivalent to probabilistic and MDP extensions of pushdown automata. Moreover, a number of well-studied stochastic processes, including multi-type branching processes, (discrete-time) quasi-birth-death processes, and stochastic context-free grammars, can be suitably captured by subclasses of RMCs.
A central computational problem for analyzing various classes of recursive probabilistic systems is the computation of their (optimal) termination probabilities. These form a key ingredient for many other analyses, including model checking. For RMCs, and for important subclasses of RMDPs and RSGs, computing their termination values is equivalent to computing the least fixed point (LFP) solution of a corresponding monotone system of polynomial (min/max) equations. The complexity of computing the LFP solution for such equation systems is an intriguing problem, with connections to several areas of research. The LFP solution may in general be irrational. So, one possible aim is to compute it to within a desired additive error epsilon > 0. For general RMCs, approximating their termination probability within any non-trivial constant additive error < 1/2 is at least as hard as long-standing open problems in the complexity of numerical computation which are not even known to be in NP. For several key subclasses of RMCs and RMDPs, computing their termination values turns out to be much more tractable.
In this talk I will survey algorithms for, and discuss the computational complexity of, key analysis problems for classes of infinite-state recursive MCs, MDPs, and stochastic games. In particular, I will discuss recent joint work with Alistair Stewart and Mihalis Yannakakis (in papers that appeared at STOC'12 and ICALP'12), in which we have obtained polynomial time algorithms for computing, to within arbitrary desired precision, the LFP solution of probabilistic polynomial (min/max) systems of equations. Using this, we obtained the first P-time algorithms for computing (within desired precision) the extinction probabilities of multi-type branching processes, the probability that an arbitrary given stochastic context-free grammar generates a given string, and the optimum (maximum or minimum) extinction probabilities for branching MDPs and context-free MDPs. For branching MDPs, their corresponding equations amount to Bellman optimality equations for minimizing/maximizing their termination probabilities. Our algorithms combine variations and generalizations of Newton's method with other techniques, including linear programming. The algorithms are fairly easy to implement, but analyzing their worst-case running time is mathematically quite involved.
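The Newton-based LFP computation surveyed above can be illustrated in the simplest setting, a single-type branching process; the offspring distribution below is a made-up toy example, not one from the talk:

```python
# Extinction probability of a single-type branching process is the
# least fixed point of x = f(x), where f is the offspring probability
# generating function. Hypothetical offspring distribution: 0 children
# w.p. 1/4, 1 child w.p. 1/4, 2 children w.p. 1/2.

def f(x):
    return 0.25 + 0.25 * x + 0.5 * x * x

def f_prime(x):
    return 0.25 + x

def newton_lfp(tol=1e-12):
    """Newton's method on g(x) = f(x) - x, started at 0. For monotone
    polynomial systems of this kind, the iterates increase monotonically
    to the least fixed point without overshooting."""
    x = 0.0
    while True:
        step = (f(x) - x) / (1.0 - f_prime(x))
        x += step
        if step < tol:
            return x

p = newton_lfp()  # the fixed points of f are 0.5 and 1; the LFP is 0.5
```

Here the equation x = 1/4 + x/4 + x^2/2 has roots 0.5 and 1, and iteration from 0 converges to the smaller one, matching the probabilistic interpretation that extinction is not certain.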
recursive Markov chains
Markov decision processes
stochastic games
monotone systems of nonlinear equations
least fixed points
Newton's method
co
1-2
Invited Talk
Kousha
Etessami
Kousha Etessami
10.4230/LIPIcs.STACS.2013.1
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Graph coloring, communication complexity and the stubborn problem (Invited Talk)
We discuss three equivalent forms of the same problem arising in communication complexity, constraint satisfaction problems, and graph coloring. Some partial results are discussed.
stubborn problem
graph coloring
Clique-Stable set separation
Alon-Saks-Seymour conjecture
bipartite packing
3-4
Invited Talk
Nicolas
Bousquet
Nicolas Bousquet
Aurélie
Lagoutte
Aurélie Lagoutte
Stéphan
Thomassé
Stéphan Thomassé
10.4230/LIPIcs.STACS.2013.3
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Physarum Computations (Invited Talk)
Physarum is a slime mold. It was observed over the past 10 years that the mold is able to solve shortest path problems and to construct good Steiner networks [9, 11, 8]. In a nutshell, the shortest path experiment is as follows: a maze is covered with mold, food is then provided at two positions s and t, and the evolution of the slime is observed. Over time, the slime retracts to the shortest s-t-path. A video showing the wet-lab experiment can be found at
http://www.youtube.com/watch?v=tLO2n3YMcXw&t=4m43s. We strongly recommend watching this video.
A mathematical model of the slime's dynamic behavior was proposed in 2007 [10]. Extensive computer simulations of the mathematical model confirm the wet-lab findings. For the edges on the shortest path, the diameter converges to one, and for the edges off the shortest path, the diameter converges to zero.
We review the wet-lab and the computer experiments and provide a proof for these experimental findings. The proof was developed over a sequence of papers [6, 7, 4, 2, 1, 3]. We recommend the last two papers for first reading.
An interesting connection between Physarum and ant computations is made in [5].
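As a minimal illustration of the dynamics described above, here is an Euler discretization on the smallest interesting network: two parallel s-t edges of lengths 1 and 2. The network, initial diameters, and step size are our own toy choices, not taken from the cited papers:

```python
def physarum_two_paths(L1=1.0, L2=2.0, dt=0.01, steps=20000):
    """Euler-discretized Physarum dynamics on two parallel s-t edges.
    A unit s-t flow splits like electrical current according to the
    conductances x_e / L_e, and each edge diameter evolves as
    dx_e/dt = |Q_e| - x_e, where Q_e is the flow through edge e."""
    x1, x2 = 1.0, 1.0
    for _ in range(steps):
        g1, g2 = x1 / L1, x2 / L2
        q1 = g1 / (g1 + g2)      # flow through the shorter edge
        q2 = 1.0 - q1            # flow through the longer edge
        x1 += dt * (q1 - x1)
        x2 += dt * (q2 - x2)
    return x1, x2

x1, x2 = physarum_two_paths()
# the diameter of the shorter edge tends to 1, the longer edge to 0
```

The simulation reproduces, on this tiny instance, exactly the behavior the abstract states: the diameter of the edge on the shortest path converges to one, the other to zero.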
Biological computation
shortest path problems
5-6
Invited Talk
Kurt
Mehlhorn
Kurt Mehlhorn
10.4230/LIPIcs.STACS.2013.5
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Algorithmic Graph Structure Theory (Tutorial)
The Graph Minors project of Robertson and Seymour uncovered a very deep structural theory of graphs. This theory has had several important consequences, among them the proof of Wagner's Conjecture. While the whole theory, presented in a series of 23 very dense papers, is notoriously difficult to understand, it has to be emphasized that these papers introduced several elementary concepts and tools that have had a strong impact on algorithms, complexity, and combinatorics. Moreover, even some of the very deep results can be stated in a compact and useful way, and it is possible to build upon these results without a complete understanding of the underlying machinery.
In the first part of the lecture, I will introduce the concept of treewidth, which can be thought of as an elementary entry point to graph minors theory. I will overview its graph-theoretic and algorithmic properties that make it especially important in the design of parameterized algorithms and approximation schemes on planar graphs. Furthermore, I will briefly explain some of the connections of treewidth to complexity and automata theory.
In the next part of the lecture, we will turn our attention to the more advanced topic of graphs excluding a fixed minor: the structure of such graphs, finding minors, and the well-quasi-ordering of the minor relation. The primary goal here is to provide clear and useful statements of these results and to show how they generalize the concepts of treewidth and planar graphs. Finally, I will briefly overview some more recent results involving different kinds of excluded structures, such as graphs excluding odd minors and topological minors.
Graph theory
graph minors
structure theorems
7-7
Tutorial
Dániel
Marx
Dániel Marx
10.4230/LIPIcs.STACS.2013.7
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Searching for better fill-in
Minimum Fill-in is a fundamental and classical problem arising in sparse matrix computations. In terms of graphs, it can be formulated as the problem of finding a triangulation of a given graph with the minimum number of edges. By the classical result of Rose, Tarjan, Lueker, and Ohtsuki from 1976, an inclusion-minimal triangulation of a graph can be found in polynomial time; but, as shown by Yannakakis in 1981, finding a triangulation with the minimum number of edges is NP-hard.
In this paper, we study the parameterized complexity of local search for the Minimum Fill-in problem in the following form: given a triangulation H of a graph G, is there a better triangulation, i.e., a triangulation with fewer edges than H, within a given distance from H? We prove that this problem is fixed-parameter tractable (FPT) when parameterized by the distance from the initial triangulation, by providing an algorithm that in time O(f(k) |G|^{O(1)}) decides if a better triangulation of G can be obtained by swapping at most k edges of H.
Our result adds Minimum Fill-in to the list of very few problems for which local search is known to be FPT.
Local Search
Parameterized Complexity
Fill-in
Triangulation
Chordal graph
8-19
Regular Paper
Fedor V.
Fomin
Fedor V. Fomin
Yngve
Villanger
Yngve Villanger
10.4230/LIPIcs.STACS.2013.8
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Probably Optimal Graph Motifs
We show an O^*(2^k)-time polynomial space algorithm for the k-sized Graph Motif problem. We also introduce a new optimization variant of the problem, called Closest Graph Motif and solve it within the same time bound. The Closest Graph Motif problem encompasses several previously studied optimization variants, like Maximum Graph Motif, Min-Substitute, and Min-Add.
Moreover, we provide a piece of evidence that our result might be essentially tight: the existence of an O^*((2-epsilon)^k)-time algorithm for the Graph Motif problem implies a ((2-epsilon')^n)-time algorithm for Set Cover.
graph motif
FPT algorithm
20-31
Regular Paper
Andreas
Björklund
Andreas Björklund
Petteri
Kaski
Petteri Kaski
Lukasz
Kowalik
Lukasz Kowalik
10.4230/LIPIcs.STACS.2013.20
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Tight bounds for Parameterized Complexity of Cluster Editing
In the Correlation Clustering problem, also known as Cluster Editing, we are given an undirected graph G and a positive integer k; the task is to decide whether G can be transformed into a cluster graph, i.e., a disjoint union of cliques, by changing at most k adjacencies, that is, by adding or deleting at most k edges. The motivation of the problem stems from various tasks in computational biology (Ben-Dor et al., Journal of Computational Biology 1999) and machine learning (Bansal et al., Machine Learning 2004). Although in general Correlation Clustering is APX-hard (Charikar et al., FOCS 2003), the version of the problem where the number of cliques may not exceed a prescribed constant p admits a PTAS (Giotis and Guruswami, SODA 2006).
We study the parameterized complexity of Correlation Clustering with this restriction on the number of cliques to be created. We give an algorithm that, in time O(2^{O(sqrt{pk})} + n + m), decides whether a graph G on n vertices and m edges can be transformed into a cluster graph with exactly p cliques by changing at most k adjacencies.
We complement these algorithmic findings by the following, surprisingly tight lower bound on the asymptotic behavior of our algorithm. We show that, unless the Exponential Time Hypothesis (ETH) fails, for any constant 0 <= sigma <= 1 there is p = Theta(k^sigma) such that there is no algorithm deciding in time 2^{o(sqrt{pk})} n^{O(1)} whether an n-vertex graph G can be transformed into a cluster graph with at most p cliques by changing at most k adjacencies.
Thus, our upper and lower bounds provide an asymptotically tight analysis of the multivariate parameterized complexity of the problem for the whole range of values of p from constant to a linear function of k.
parameterized complexity
cluster editing
correlation clustering
subexponential algorithms
tight bounds
32-43
Regular Paper
Fedor V.
Fomin
Fedor V. Fomin
Stefan
Kratsch
Stefan Kratsch
Marcin
Pilipczuk
Marcin Pilipczuk
Michal
Pilipczuk
Michal Pilipczuk
Yngve
Villanger
Yngve Villanger
10.4230/LIPIcs.STACS.2013.32
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Bounded-width QBF is PSPACE-complete
Tree-width is a well-studied parameter of structures that measures their similarity to a tree. Many important NP-complete problems, such as Boolean satisfiability (SAT), are tractable on bounded tree-width instances. In this paper we focus on the canonical PSPACE-complete problem QBF, the fully-quantified version of SAT. It was shown by Pan and Vardi [LICS 2006] that this problem is PSPACE-complete even for formulas whose tree-width grows extremely slowly. Vardi also posed the question of whether the problem is tractable when restricted to instances of bounded tree-width. We answer this question by showing that QBF on instances with constant tree-width is PSPACE-complete.
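For intuition about the QBF problem itself (not the paper's hardness construction), a brute-force evaluator for prenex QBF fits in a few lines; the encoding of literals as signed integers is our own convention:

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a prenex QBF by recursion on the quantifier prefix.
    prefix: list of ('A' | 'E', var); clauses: CNF as lists of signed
    ints, where literal v means var v is true and -v means it is false.
    Exponential time, as expected for a PSPACE-complete problem."""
    if assignment is None:
        assignment = {}
    if not prefix:  # prefix exhausted: evaluate the CNF matrix
        return all(any((lit > 0) == assignment[abs(lit)] for lit in c)
                   for c in clauses)
    q, v = prefix[0]
    results = (eval_qbf(prefix[1:], clauses, {**assignment, v: b})
               for b in (False, True))
    return all(results) if q == 'A' else any(results)

# forall x1 exists x2: (x1 or x2) and (not x1 or not x2) -- true, take x2 = not x1
print(eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))  # -> True
```

Swapping the quantifiers to "exists x1 forall x2" makes the same matrix false, which is the order-sensitivity that separates QBF from plain SAT.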
Tree-width
QBF
PSPACE-complete
44-54
Regular Paper
Albert
Atserias
Albert Atserias
Sergi
Oliva
Sergi Oliva
10.4230/LIPIcs.STACS.2013.44
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Model Counting for CNF Formulas of Bounded Modular Treewidth
The modular treewidth of a graph is its treewidth after the contraction of modules. Modular treewidth properly generalizes treewidth and is itself properly generalized by clique-width. We show that the number of satisfying assignments of a CNF formula whose incidence graph has bounded modular treewidth can be computed in polynomial time. This provides new tractable classes of formulas for which #SAT is polynomial. In particular, our result generalizes known results for the treewidth of incidence graphs and is incomparable with known results for clique-width (or rank-width) of signed incidence graphs. The contraction of modules is an effective data reduction procedure. Our algorithm is the first one to harness this technique for #SAT. The order of the polynomial time bound of our algorithm depends on the modular treewidth. We show that this dependency cannot be avoided subject to an assumption from Parameterized Complexity.
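As a baseline for the #SAT problem the abstract refers to, exhaustive model counting (exponential in the number of variables, using our own signed-integer clause encoding) looks like this; the polynomial-time algorithm of the paper applies when the incidence graph has bounded modular treewidth, which this sketch ignores:

```python
from itertools import product

def count_sat(n, clauses):
    """Count satisfying assignments of a CNF formula over variables
    1..n by exhaustive enumeration of all 2^n truth assignments.
    Literal v means variable v is true; -v means it is false."""
    count = 0
    for bits in product((False, True), repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in c)
               for c in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) over 3 variables
print(count_sat(3, [[1, 2], [-1, 3]]))  # -> 4
```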
Satisfiability
Model Counting
Parameterized Complexity
55-66
Regular Paper
Daniel
Paulusma
Daniel Paulusma
Friedrich
Slivovsky
Friedrich Slivovsky
Stefan
Szeider
Stefan Szeider
10.4230/LIPIcs.STACS.2013.55
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Backdoors to q-Horn
The class q-Horn, introduced by Boros, Crama and Hammer in 1990, is one of the largest known classes of propositional CNF formulas for which satisfiability can be decided in polynomial time. This class properly contains the fundamental classes of Horn and Krom formulas as well as the class of renamable (or disguised) Horn formulas.
In this paper we extend this class so that its favorable algorithmic properties can be made accessible to formulas that are outside but "close" to this class. We show that deciding satisfiability is fixed-parameter tractable parameterized by the distance of the given formula from q-Horn. The distance is measured by the smallest number of variables that we need to delete from the formula in order to get a q-Horn formula, i.e., the size of a smallest deletion backdoor set into the class q-Horn.
This result generalizes known fixed-parameter tractability results for satisfiability decision with respect to the parameters distance from Horn, Krom, and renamable Horn.
Algorithms and data structures
Backdoor sets
Satisfiability
Fixed Parameter Tractability
67-79
Regular Paper
Serge
Gaspers
Serge Gaspers
Sebastian
Ordyniak
Sebastian Ordyniak
M. S.
Ramanujan
M. S. Ramanujan
Saket
Saurabh
Saket Saurabh
Stefan
Szeider
Stefan Szeider
10.4230/LIPIcs.STACS.2013.67
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
On Polynomial Kernels for Sparse Integer Linear Programs
Integer linear programs (ILPs) are a widely applied framework for dealing with combinatorial problems that arise in practice. It is known, e.g., by the success of CPLEX, that preprocessing and simplification can greatly speed up the process of optimizing an ILP. The present work seeks to further the theoretical understanding of preprocessing for ILPs by initiating a rigorous study within the framework of parameterized complexity and kernelization.
A famous result of Lenstra (Mathematics of Operations Research, 1983) shows that feasibility of any ILP with n variables and m constraints can be decided in time O(c^{n^3} m^{c'}). Thus, by a folklore argument, any such ILP admits a kernelization to an equivalent instance of size O(c^{n^3}). It is known that, unless NP \subseteq coNP/poly and the polynomial hierarchy collapses, no kernelization with size bound polynomial in n is possible. However, this lower bound only applies to the case when constraints may include an arbitrary number of variables, since it follows from lower bounds for SAT and Hitting Set, whose bounded-arity variants admit polynomial kernelizations.
We consider the feasibility problem for ILPs Ax <= b where A is an r-row-sparse matrix, parameterized by the number of variables. We show that the kernelizability of this problem depends strongly on the range of the variables. If the range is unbounded, then this problem does not admit a polynomial kernelization unless NP \subseteq coNP/poly. If, on the other hand, the range of each variable is polynomially bounded in n, then we do get a polynomial kernelization. Additionally, this holds also for the more general case when the maximum range d is an additional parameter, i.e., the size obtained is polynomial in n+d.
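A box-bounded brute-force feasibility check makes the role of the variable range concrete; the instance below is a hypothetical toy, not one from the paper:

```python
from itertools import product

def ilp_feasible(A, b, lo, hi):
    """Brute-force feasibility of Ax <= b with every variable restricted
    to the box [lo, hi]. There are (hi - lo + 1)^n candidates, so this
    is only sensible when the range is small -- the same bounded-range
    regime in which the abstract obtains a polynomial kernelization.
    Returns the first feasible point found, or None."""
    n = len(A[0])
    for x in product(range(lo, hi + 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            return x
    return None

# x1 + x2 <= 3 and x1 >= 2 (written as -x1 <= -2), variables in [0, 3]
print(ilp_feasible([[1, 1], [-1, 0]], [3, -2], 0, 3))  # -> (2, 0)
```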
integer linear programs
kernelization
parameterized complexity
80-91
Regular Paper
Stefan
Kratsch
Stefan Kratsch
10.4230/LIPIcs.STACS.2013.80
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Linear kernels for (connected) dominating set on graphs with excluded topological subgraphs
We give the first linear kernels for the Dominating Set and Connected Dominating Set problems on graphs excluding a fixed graph H as a topological minor.
Parameterized complexity
kernelization
algorithmic graph minors
dominating set
connected dominating set
92-103
Regular Paper
Fedor V.
Fomin
Fedor V. Fomin
Daniel
Lokshtanov
Daniel Lokshtanov
Saket
Saurabh
Saket Saurabh
Dimitrios M.
Thilikos
Dimitrios M. Thilikos
10.4230/LIPIcs.STACS.2013.92
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The PCP theorem for NP over the reals
In this paper we show that the PCP theorem holds as well in the real number computational model introduced by Blum, Shub, and Smale. More precisely, the real number counterpart NP_R of the classical Turing model class NP can be characterized as NP_R = PCP_R(O(log n), O(1)). Our proof structurally follows the one by Dinur for classical NP. However, a lot of minor and major changes are necessary due to the real numbers as underlying computational structure. The analogous result holds for the complex numbers and NP_C.
PCP
real number computation
systems of polynomials
104-115
Regular Paper
Martijn
Baartse
Martijn Baartse
Klaus
Meer
Klaus Meer
10.4230/LIPIcs.STACS.2013.104
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Mutual Dimension
We define the lower and upper mutual dimensions mdim(x:y) and Mdim(x:y) between any two points x and y in Euclidean space. Intuitively these are the lower and upper densities of the algorithmic information shared by x and y. We show that these quantities satisfy the main desiderata for a satisfactory measure of mutual algorithmic information. Our main theorem, the data processing inequality for mutual dimension, says that, if f : R^m -> R^n is computable and Lipschitz, then the inequalities mdim(f(x):y) <= mdim(x:y) and Mdim(f(x):y) <= Mdim(x:y) hold for all x \in R^m and y \in R^t. We use this inequality and related inequalities that we prove in like fashion to establish conditions under which various classes of computable functions on Euclidean space preserve or otherwise transform mutual dimensions between points.
computable analysis
data processing inequality
effective fractal dimensions
Kolmogorov complexity
mutual information
116-126
Regular Paper
Adam
Case
Adam Case
Jack H.
Lutz
Jack H. Lutz
10.4230/LIPIcs.STACS.2013.116
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Exact and Approximation Algorithms for the Maximum Constraint Satisfaction Problem over the Point Algebra
We study the constraint satisfaction problem over the point algebra.
In this problem, an instance consists of a set of variables and a set of binary constraints of the form (x < y), (x <= y), (x \neq y), or (x = y). The objective is then to assign integers to the variables so as to satisfy as many constraints as possible. This problem contains many important problems, such as Correlation Clustering, Maximum Acyclic Subgraph, and Feedback Arc Set.
We first give an exact algorithm that runs in O^*(3^{\frac{log 5}{log 6}n}) time, which improves the previous best O^*(3^n) obtained by standard dynamic programming.
Our algorithm combines the dynamic programming with the split-and-list technique. The split-and-list technique involves matrix products and we make use of sparsity of matrices to speed up the computation.
As for approximation, we give a 0.4586-approximation algorithm when the objective is maximizing the number of satisfied constraints, and give an O(log n log log n)-approximation algorithm when the objective is minimizing the number of unsatisfied constraints.
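To make the problem statement concrete, here is a naive exhaustive baseline (far slower than the O^*(3^n) dynamic programming the abstract improves on); since only the relative order of values matters, it suffices to try assignments over {0, ..., n-1}. The instance is our own toy example:

```python
from itertools import product
import operator

OPS = {'<': operator.lt, '<=': operator.le, '!=': operator.ne, '=': operator.eq}

def max_point_algebra(n, constraints):
    """Brute-force maximum constraint satisfaction over the point
    algebra: try every assignment of n variables to values in
    {0, ..., n-1} and return the largest number of satisfied
    constraints. Each constraint is a triple (x, op, y)."""
    best = 0
    for assign in product(range(n), repeat=n):
        sat = sum(1 for (x, op, y) in constraints
                  if OPS[op](assign[x], assign[y]))
        best = max(best, sat)
    return best

# x0 < x1, x1 < x2, x2 < x0 is cyclic: at most 2 of the 3 can hold
print(max_point_algebra(3, [(0, '<', 1), (1, '<', 2), (2, '<', 0)]))  # -> 2
```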
Constraint Satisfaction Problems
Point Algebra
Exact Algorithms
Approximation Algorithms
127-138
Regular Paper
Yoichi
Iwata
Yoichi Iwata
Yuichi
Yoshida
Yuichi Yoshida
10.4230/LIPIcs.STACS.2013.127
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Local Search is Better than Random Assignment for Bounded Occurrence Ordering k-CSPs
We prove that the Bounded Occurrence Ordering k-CSP Problem is not approximation resistant. We give a very simple local search algorithm that always performs better than the random assignment algorithm (unless the number of satisfied constraints does not depend on the ordering). Specifically, the expected value of the solution returned by the algorithm is at least ALG >= AVG + alpha(B,k)(OPT-AVG), where OPT is the value of the optimal solution, AVG is the expected value of a random solution, and alpha(B,k) = Omega_k(B^{-(k+O(1))}) is a parameter depending only on k (the arity of the CSP) and B (the maximum number of times each variable is used in constraints).
The question of whether bounded occurrence ordering k-CSPs are approximation resistant was raised by Guruswami and Zhou (2012), who recently showed that bounded occurrence 3-CSPs and "monotone" k-CSPs admit a non-trivial approximation.
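The flavor of local search for an ordering 2-CSP can be sketched on Maximum Acyclic Subgraph (one of the problems this class captures): order the vertices, count forward edges, and repeatedly reinsert a single vertex wherever it helps. This is a generic illustration of the local-search idea, not the paper's specific algorithm or analysis:

```python
import random

def local_search_mas(n, edges, seed=0):
    """Local search for Maximum Acyclic Subgraph viewed as an ordering
    2-CSP: starting from a random ordering (which satisfies half the
    edges in expectation), move one vertex at a time to a position that
    strictly increases the number of forward edges, until no single
    move helps."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)

    def value(o):
        pos = {v: i for i, v in enumerate(o)}
        return sum(1 for u, v in edges if pos[u] < pos[v])

    improved = True
    while improved:
        improved = False
        for v in range(n):
            cur = value(order)
            base = [u for u in order if u != v]
            for i in range(n):
                cand = base[:i] + [v] + base[i:]
                if value(cand) > cur:
                    order, improved = cand, True
                    break
    return order, value(order)

# a directed 4-cycle: any ordering misses >= 1 edge, so the optimum is 3
order, val = local_search_mas(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

On the 4-cycle a random ordering satisfies 2 edges on average, while every local optimum of this move set reaches the optimal 3, matching the abstract's claim that local search strictly beats random assignment.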
approximation algorithms
approximation resistance
ordering CSPs
139-147
Regular Paper
Konstantin
Makarychev
Konstantin Makarychev
10.4230/LIPIcs.STACS.2013.139
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The complexity of approximating conservative counting CSPs
We study the complexity of approximation for a weighted counting constraint satisfaction problem #CSP(F). In the conservative case, where F contains all unary functions, a classification is known for the Boolean domain. We give a classification for problems with general finite domain. We define weak log-modularity and weak log-supermodularity, and show that #CSP(F) is in FP if F is weakly log-modular. Otherwise, it is at least as hard to approximate as #BIS, counting independent sets in bipartite graphs, which is believed to be intractable. We further subdivide the #BIS-hard case. If F is weakly log-supermodular, we show that #CSP(F) is as easy as Boolean log-supermodular weighted #CSP. Otherwise, it is NP-hard to approximate. Finally, we give a trichotomy for the arity-2 case: #CSP(F) is in FP, is #BIS-equivalent, or is equivalent to #SAT, the problem of approximately counting the satisfying assignments of a CNF Boolean formula.
counting constraint satisfaction problem
approximation
complexity
148-159
Regular Paper
Xi
Chen
Xi Chen
Martin
Dyer
Martin Dyer
Leslie Ann
Goldberg
Leslie Ann Goldberg
Mark
Jerrum
Mark Jerrum
Pinyan
Lu
Pinyan Lu
Colin
McQuillan
Colin McQuillan
David
Richerby
David Richerby
10.4230/LIPIcs.STACS.2013.148
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Lossy Chains and Fractional Secret Sharing
Motivated by the goal of controlling the amount of work required to access a shared resource or to solve a cryptographic puzzle, we introduce and study the related notions of lossy chains and fractional secret sharing.
Fractional secret sharing generalizes traditional secret sharing by allowing a fine-grained control over the amount of uncertainty about the secret. More concretely, a fractional secret sharing scheme realizes a fractional access structure f : 2^{[n]} -> {0,...,m-1} by guaranteeing that from the point of view of each set T \subseteq [n] of parties, the secret is uniformly distributed over a set of f(T) + 1 potential secrets. We show that every (monotone) fractional access structure can be realized. For symmetric structures, in which f(T) depends only on the size of T, we give an efficient construction with share size poly(n,log m).
Our construction of fractional secret sharing schemes is based on the new notion of lossy chains which may be of independent interest.
A lossy chain is a Markov chain (X_0,...,X_n) which starts with a random secret X_0 and gradually loses information about it at a rate which is specified by a loss function g. Concretely, in every step t, the distribution of X_0 conditioned on the value of X_t should always be uniformly distributed over a set of size g(t). We show how to construct such lossy chains efficiently for any possible loss function g, and prove that our construction achieves an optimal asymptotic information rate.
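A toy special case makes the lossy-chain definition tangible: when the loss function's values divide one another (and divide the secret space m), the deterministic map X_t = X_0 // g(t) already satisfies the definition, since X_{t+1} is a function of X_t and each X_t value groups exactly g(t) secrets. This divisibility trick is our own illustration, not the paper's general construction:

```python
from collections import defaultdict

def lossy_chain_states(m, g):
    """Check the lossy-chain property for the map X_t = X_0 // g(t),
    assuming g(0) = 1 and each g(t) divides g(t+1) and m. Conditioned
    on X_t, the secret X_0 is then uniform over exactly g(t) values,
    and X_{t+1} = X_t // (g(t+1) // g(t)) makes the process Markov."""
    for gt in g:
        groups = defaultdict(list)
        for x0 in range(m):
            groups[x0 // gt].append(x0)
        # every observed state of X_t is compatible with g(t) secrets
        assert all(len(v) == gt for v in groups.values())
    return True

print(lossy_chain_states(12, [1, 2, 4, 12]))  # -> True
```

The paper's point is that such chains exist, with optimal information rate, for arbitrary loss functions g; the divisibility assumption here only keeps the demonstration short.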
Cryptography
secret sharing
Markov chains
160-171
Regular Paper
Yuval
Ishai
Yuval Ishai
Eyal
Kushilevitz
Eyal Kushilevitz
Omer
Strulovich
Omer Strulovich
10.4230/LIPIcs.STACS.2013.160
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Two Hands Are Better Than One (up to constant factors): Self-Assembly In The 2HAM vs. aTAM
We study the difference between the standard seeded model (aTAM) of tile self-assembly, and the "seedless" two-handed model of tile self-assembly (2HAM). Most of our results suggest that the two-handed model is more powerful. In particular, we show how to simulate any seeded system with a two-handed system that is essentially just a constant factor larger. We exhibit finite shapes with a busy-beaver separation in the number of distinct tiles required by seeded versus two-handed, and exhibit an infinite shape that can be constructed two-handed but not seeded. Finally, we show that verifying whether a given system uniquely assembles a desired supertile is co-NP-complete in the two-handed model, while it was known to be polynomially solvable in the seeded model.
abstract tile assembly model
hierarchical tile assembly model
two-handed tile assembly model
algorithmic self-assembly
DNA computing
biocomputing
172-184
Regular Paper
Sarah
Cannon
Sarah Cannon
Erik D.
Demaine
Erik D. Demaine
Martin L.
Demaine
Martin L. Demaine
Sarah
Eisenstat
Sarah Eisenstat
Matthew J.
Patitz
Matthew J. Patitz
Robert T.
Schweller
Robert T. Schweller
Scott M
Summers
Scott M Summers
Andrew
Winslow
Andrew Winslow
10.4230/LIPIcs.STACS.2013.172
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Unlabeled Data Does Provably Help
A fully supervised learner needs access to correctly labeled examples, whereas a semi-supervised learner has access to examples part of which are labeled and part of which are not. The hope is that a large collection of unlabeled examples significantly reduces the need for labeled ones. It is widely believed that this reduction of "label complexity" is marginal unless the hidden target concept and the domain distribution satisfy some "compatibility assumptions". There are some recent papers in support of this belief. In this paper, we revitalize the discussion by presenting a result that goes in the other direction. To this end, we consider the PAC-learning model in two settings: the (classical) fully supervised setting and the semi-supervised setting. We show that the "label-complexity gap" between the semi-supervised and the fully supervised setting can become arbitrarily large for concept classes of infinite VC-dimension (or sequences of classes whose VC-dimensions are finite but become arbitrarily large). On the other hand, this gap is bounded by O(ln |C|) for each finite concept class C that contains the constant zero- and the constant one-function. A similar statement holds for all classes C of finite VC-dimension.
algorithmic learning
sample complexity
semi-supervised learning
185-196
Regular Paper
Malte
Darnstädt
Malte Darnstädt
Hans Ulrich
Simon
Hans Ulrich Simon
Balázs
Szörényi
Balázs Szörényi
10.4230/LIPIcs.STACS.2013.185
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Computing cutwidth and pathwidth of semi-complete digraphs via degree orderings
The notions of cutwidth and pathwidth of digraphs play a central role in the containment theory for tournaments, or more generally semi-complete digraphs, developed in a recent series of papers by Chudnovsky, Fradkin, Kim, Scott, and Seymour (Maria Chudnovsky, Alexandra Fradkin, and Paul Seymour, 2012; Maria Chudnovsky, Alex Scott, and Paul Seymour, 2011; Maria Chudnovsky and Paul D. Seymour, 2011; Alexandra Fradkin and Paul Seymour, 2010; Alexandra Fradkin and Paul Seymour, 2011; Ilhee Kim and Paul Seymour, 2012). In this work we introduce a new approach to computing these width measures on semi-complete digraphs, via degree orderings. Using the new technique we are able to reprove the main results of (Maria Chudnovsky, Alexandra Fradkin, and Paul Seymour, 2012; Alexandra Fradkin and Paul Seymour, 2011) in a unified and significantly simplified way, as well as obtain new results. First, we present polynomial-time approximation algorithms for both cutwidth and pathwidth, faster and simpler than the previously known ones; the most significant improvement is in the case of pathwidth, where, instead of the previously known O(OPT)-approximation in fixed-parameter tractable time (Fedor V. Fomin and Michal Pilipczuk, 2013), we obtain a constant-factor approximation in polynomial time. Second, by exploiting the new set of obstacles for cutwidth and pathwidth, we show that topological containment and immersion in semi-complete digraphs can be tested in single-exponential fixed-parameter tractable time. Finally, we show how the new approach can be used to obtain exact fixed-parameter tractable algorithms for cutwidth and pathwidth, with single-exponential running time dependency on the optimal width.
semi-complete digraph
tournament
pathwidth
cutwidth
197-208
Regular Paper
Michal
Pilipczuk
Michal Pilipczuk
10.4230/LIPIcs.STACS.2013.197
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
On Pairwise Spanners
Given an undirected n-node unweighted graph G = (V, E), a spanner with stretch function f(.) is a subgraph H \subseteq G such that, if two nodes are at distance d in G, then they are at distance at most f(d) in H. Spanners are very well studied in the literature. The typical goal is to construct the sparsest possible spanner for a given stretch function.
In this paper we study pairwise spanners, where we require to approximate the u-v distance only for pairs (u,v) in a given set P \subseteq V x V. Such P-spanners were studied before [Coppersmith,Elkin'05] only in the special case that f(.) is the identity function, i.e. distances between relevant pairs must be preserved exactly (a.k.a. pairwise preservers).
Here we present pairwise spanners which are at the same time sparser than the best known preservers (on the same P) and than the best known spanners (with the same f(.)).
In more detail, for arbitrary P, we show that there exists a P-spanner of size O(n(|P|log n)^{1/4}) with f(d) = d + 4 log n. Alternatively, for any epsilon > 0, there exists a P-spanner of size O(n|P|^{1/4} sqrt{(log n) / epsilon}) with f(d) = (1 + epsilon)d + 4. We also consider the relevant special case that there is a critical set of nodes S \subseteq V, and we wish to approximate either the distances within nodes in S or from nodes in S to any other node. We show that there exists an (S x S)-spanner of size O(n sqrt{|S|}) with f(d) = d + 2, and an (S x V)-spanner of size O(n sqrt{|S| log n}) with f(d) = d + 2 log n. All the mentioned pairwise spanners can be constructed in polynomial time.
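As a concrete illustration of the definition above, here is a minimal brute-force checker (not from the paper) that verifies the P-spanner condition d_H(u,v) <= f(d_G(u,v)) on small unweighted graphs:

```python
from collections import defaultdict, deque

def bfs_dist(adj, src, n):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_pairwise_spanner(n, edges_G, edges_H, P, f):
    """Check that the subgraph H of G satisfies d_H(u,v) <= f(d_G(u,v)) for all (u,v) in P."""
    def adjacency(edges):
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj
    adj_G, adj_H = adjacency(edges_G), adjacency(edges_H)
    for u, v in P:
        d_G = bfs_dist(adj_G, u, n)[v]
        d_H = bfs_dist(adj_H, u, n)[v]
        if d_G is not None and (d_H is None or d_H > f(d_G)):
            return False
    return True

# The 5-cycle G and the path H obtained by dropping the edge (4, 0):
# d_G(0,4) = 1 but d_H(0,4) = 4, so H is a {(0,4)}-spanner
# for f(d) = d + 3 but not for f(d) = d + 2.
edges_G = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
edges_H = [(0, 1), (1, 2), (2, 3), (3, 4)]
```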
Undirected graphs
shortest paths
additive spanners
distance preservers
209-220
Regular Paper
Marek
Cygan
Marek Cygan
Fabrizio
Grandoni
Fabrizio Grandoni
Telikepalli
Kavitha
Telikepalli Kavitha
10.4230/LIPIcs.STACS.2013.209
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Excluded vertex-minors for graphs of linear rank-width at most k
Linear rank-width is a graph width parameter, which is a variation of rank-width by restricting its tree to a caterpillar. As a corollary of known theorems, for each k, there is a finite set \mathcal{O}_k of graphs such that a graph G has linear rank-width at most k if and only if no vertex-minor of G is isomorphic to a graph in \mathcal{O}_k. However, no attempts have been made to bound the number of graphs in \mathcal{O}_k for k >= 2. We construct, for each k, 2^{\Omega(3^k)} pairwise locally non-equivalent graphs that are excluded vertex-minors for graphs of linear rank-width at most k.
Therefore the number of graphs in \mathcal{O}_k is at least double exponential.
rank-width
linear rank-width
vertex-minor
well-quasi-ordering
221-232
Regular Paper
Jisu
Jeong
Jisu Jeong
O-joung
Kwon
O-joung Kwon
Sang-il
Oum
Sang-il Oum
10.4230/LIPIcs.STACS.2013.221
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Recompression: a simple and powerful technique for word equations
We present an application of a local recompression technique, previously developed by the author in the context of compressed membership problems and compressed pattern matching, to word equations. The technique is based on local modification of variables (replacing X by aX or Xa) and replacement of pairs of letters appearing in the equation by a 'fresh' letter. This can be seen as a bottom-up compression of the solution of the given word equation; more specifically, it builds an SLP (Straight-Line Programme) for the solution of the word equation.
Using this technique we give new self-contained proofs of many known results for word equations: the presented nondeterministic algorithm runs in O(n log n) space and in time polynomial in log N and n, where N is the size of the length-minimal solution of the word equation.
It can be easily generalised to a generator of all solutions of the word equation. A further analysis of the algorithm yields a doubly exponential upper bound on the size of the length-minimal solution.
The presented algorithm does not use the exponential bound on the exponent of periodicity. Conversely, the analysis of the algorithm yields a new proof of the exponential bound on the exponent of periodicity. For O(1) variables with arbitrarily many appearances it works in linear space.
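The pair-compression step described above can be sketched as follows; this is only an illustration of the basic operation (replacing a pair ab, with a != b, by a fresh letter), not the full algorithm with its local variable modifications:

```python
def compress_pair(w, a, b, fresh):
    """Replace every occurrence of the pair ab (with a != b) in w by the fresh letter.
    Since a != b, occurrences cannot overlap, so a greedy left-to-right scan finds them all."""
    assert a != b
    out = []
    i = 0
    while i < len(w):
        if w[i] == a and i + 1 < len(w) and w[i + 1] == b:
            out.append(fresh)
            i += 2
        else:
            out.append(w[i])
            i += 1
    return ''.join(out)

# Each round shrinks a word containing many ab-pairs by a constant factor,
# e.g. compress_pair("ababab", "a", "b", "c") yields "ccc".
```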
Word equations
exponent of periodicity
string unification
233-244
Regular Paper
Artur
Jez
Artur Jez
10.4230/LIPIcs.STACS.2013.233
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Fast Algorithms for Abelian Periods in Words and Greatest Common Divisor Queries
We present efficient algorithms computing all Abelian periods of two types in a word. Regular Abelian periods are computed in O(n log log n) randomized time, which improves over the best previously known algorithm by almost a factor of n. The other algorithm, for full Abelian periods, works in O(n) time. As a tool we develop an O(n)-time construction of a data structure that allows O(1)-time gcd(i,j) queries for all 1 <= i,j <= n; this is a result of independent interest.
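For illustration, here is a naive checker for full Abelian periods (the abstract does not spell out the definition; we assume the standard one, where the word splits into blocks of length p that are permutations of one another). The paper's O(n)-time algorithm is far more refined than this sketch:

```python
from collections import Counter

def has_full_abelian_period(w, p):
    """Naive check whether p is a full Abelian period of w, i.e. whether w
    factors into blocks of length p that all have the same letter multiset."""
    if p <= 0 or len(w) % p != 0:
        return False
    ref = Counter(w[:p])
    return all(Counter(w[i:i + p]) == ref for i in range(p, len(w), p))

# "abba" = "ab" . "ba": both blocks are permutations of {a, b},
# so 2 is a full Abelian period of "abba".
```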
Abelian period
greatest common divisor
245-256
Regular Paper
Tomasz
Kociumaka
Tomasz Kociumaka
Jakub
Radoszewski
Jakub Radoszewski
Wojciech
Rytter
Wojciech Rytter
10.4230/LIPIcs.STACS.2013.245
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Finding Pseudo-repetitions
Pseudo-repetitions are a natural generalization of the classical notion of repetitions in sequences. We solve fundamental algorithmic questions on pseudo-repetitions by application of insightful combinatorial results on words. More precisely, we efficiently decide whether a word is a pseudo-repetition and find all the pseudo-repetitive factors of a word.
Stringology
Pattern matching
Repetition
Pseudo-repetition
257-268
Regular Paper
Pawel
Gawrychowski
Pawel Gawrychowski
Florin
Manea
Florin Manea
Robert
Mercas
Robert Mercas
Dirk
Nowotka
Dirk Nowotka
Catalin
Tiseanu
Catalin Tiseanu
10.4230/LIPIcs.STACS.2013.257
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Algorithms for Designing Pop-Up Cards
We prove that every simple polygon can be made as a (2D) pop-up card/book that opens to any desired angle between 0 and 360°.
More precisely, given a simple polygon attached to the two walls of the open pop-up, our polynomial-time algorithm subdivides the polygon into a single-degree-of-freedom linkage structure, such that closing the pop-up flattens the linkage without collision. This result solves an open problem of Hara and Sugihara from 2009. We also show how to obtain a more efficient construction for the special case of orthogonal polygons, and how to make 3D orthogonal polyhedra from pop-ups that open to 90°, 180°, 270°, or 360°.
geometric folding
linkages
universality
269-280
Regular Paper
Zachary
Abel
Zachary Abel
Erik D.
Demaine
Erik D. Demaine
Martin L.
Demaine
Martin L. Demaine
Sarah
Eisenstat
Sarah Eisenstat
Anna
Lubiw
Anna Lubiw
André
Schulz
André Schulz
Diane L.
Souvaine
Diane L. Souvaine
Giovanni
Viglietta
Giovanni Viglietta
Andrew
Winslow
Andrew Winslow
10.4230/LIPIcs.STACS.2013.269
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Space-Time Trade-offs for Stack-Based Algorithms
In memory-constrained algorithms we have read-only access to the input, and the number of additional variables is limited. In this paper we introduce the compressed stack technique, a method that allows one to transform algorithms whose space bottleneck is a stack into memory-constrained algorithms. Given an algorithm A that runs in O(n) time using a stack of length Theta(n), we can modify it so that it runs in O(n^2/2^s) time using a workspace of O(s) variables (for any s \in o(log n)) or in O(n log n/log p) time using O(p log n/log p) variables (for any 2 <= p <= n). We also show how the technique can be applied to solve various geometric problems, namely computing the convex hull of a simple polygon, a triangulation of a monotone polygon, the shortest path between two points inside a monotone polygon, 1-dimensional pyramid approximation of a 1-dimensional vector, and the visibility profile of a point inside a simple polygon. Our approach exceeds or matches the best-known results for these problems in constant-workspace models (when they exist), and gives a trade-off between the size of the workspace and running time. To the best of our knowledge, this is the first general framework for obtaining memory-constrained algorithms.
space-time trade-off
constant workspace
stack algorithms
281-292
Regular Paper
Luis
Barba
Luis Barba
Matias
Korman
Matias Korman
Stefan
Langerman
Stefan Langerman
Rodrigo I.
Silveira
Rodrigo I. Silveira
Kunihiko
Sadakane
Kunihiko Sadakane
10.4230/LIPIcs.STACS.2013.281
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
L_1 Shortest Path Queries among Polygonal Obstacles in the Plane
Given a point s and a set of h pairwise disjoint polygonal obstacles with a total of n vertices in the plane, after the free space is triangulated, we present an O(n+h log h) time and O(n) space algorithm for building a data structure (called shortest path map) of size O(n) such that for any query point t, the length of the L_1 shortest obstacle-avoiding path from s to t can be reported in O(log n) time and the actual path can be found in additional time proportional to the number of edges of the path. Previously, the best algorithm computes such a shortest path map in O(n log n) time and O(n) space. In addition, our techniques also yield an improved algorithm for computing the L_1 geodesic Voronoi diagram of m point sites among the obstacles.
computational geometry
shortest path queries
shortest paths among obstacles
L_1/L_infty/rectilinear metric
shortest path maps
geodesic Voronoi diagram
293-304
Regular Paper
Danny Z.
Chen
Danny Z. Chen
Haitao
Wang
Haitao Wang
10.4230/LIPIcs.STACS.2013.293
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Quantifier Alternation in Two-Variable First-Order Logic with Successor Is Decidable
We consider the quantifier alternation hierarchy within two-variable first-order logic FO^2[<,suc] over finite words with linear order and binary successor predicate. We give a single identity of omega-terms for each level of this hierarchy. This shows that for a given regular language and a non-negative integer m it is decidable whether the language is definable by a formula in FO^2[<,suc] which has at most m quantifier alternations. We also consider the alternation hierarchy of unary temporal logic TL[X,F,Y,P] defined by the maximal number of nested negations. This hierarchy coincides with the FO^2[<,suc] quantifier alternation hierarchy.
automata theory
semigroups
regular languages
first-order logic
305-316
Regular Paper
Manfred
Kufleitner
Manfred Kufleitner
Alexander
Lauser
Alexander Lauser
10.4230/LIPIcs.STACS.2013.305
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
FO^2 with one transitive relation is decidable
We show that the satisfiability problem for the two-variable first-order logic, FO^2, over transitive structures when only one relation is required to be transitive, is decidable. The result is optimal, as FO^2 over structures with two transitive relations, or with one transitive and one equivalence relation, are known to be undecidable, so in fact, our result completes the classification of FO^2-logics over transitive structures with respect to decidability.
We show that the satisfiability problem is in 2-NExpTime.
Decidability of the finite satisfiability problem remains open.
classical decision problem
two-variable first-order logic
decidability
computational complexity
317-328
Regular Paper
Wieslaw
Szwast
Wieslaw Szwast
Lidia
Tendera
Lidia Tendera
10.4230/LIPIcs.STACS.2013.317
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Two-variable first order logic with modular predicates over words
We consider first order formulae over the signature consisting of the symbols of the alphabet, the symbol < (interpreted as a linear order) and the set MOD of modular numerical predicates. We study the expressive power of FO^2[<,MOD], the two-variable first order logic over this signature, interpreted over finite words. We give an algebraic characterization of the corresponding regular languages in terms of their syntactic morphisms and we also give simple unambiguous regular expressions for them. It follows that one can decide whether a given regular language is captured by FO^2[<,MOD]. Our proofs rely on a combination of arguments from semigroup theory (stamps), model theory (Ehrenfeucht-Fraïssé games) and combinatorics.
First order logic
automata theory
semigroup
modular predicates
329-340
Regular Paper
Luc
Dartois
Luc Dartois
Charles
Paperman
Charles Paperman
10.4230/LIPIcs.STACS.2013.329
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Abusing the Tutte Matrix: An Algebraic Instance Compression for the K-set-cycle Problem
We give an algebraic, determinant-based algorithm for the K-Cycle problem, i.e., the problem of finding a cycle through a set of specified elements. Our approach gives a simple FPT algorithm for the problem, matching the O^*(2^|K|) running time of the algorithm of Björklund et al. (SODA, 2012). Furthermore, our approach is open for treatment by classical algebraic tools (e.g., Gaussian elimination), and we show that it leads to a polynomial compression of the problem, i.e., a polynomial-time reduction of the K-Cycle problem into an algebraic problem with coding size O(|K|^3).
This is surprising, as several related problems (e.g., k-Cycle and the Disjoint Paths problem) are known not to admit such a reduction unless the polynomial hierarchy collapses. Furthermore, despite the result, we are not aware of any witness for the K-Cycle problem of size polynomial in |K|+ log n, which seems (for now) to separate the notions of polynomial compression and polynomial kernelization (as a polynomial kernelization for a problem in NP necessarily implies a small witness).
Parameterized complexity
graph theory
kernelization
algebraic algorithms
341-352
Regular Paper
Magnus
Wahlström
Magnus Wahlström
10.4230/LIPIcs.STACS.2013.341
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Subexponential-Time Parameterized Algorithm for Steiner Tree on Planar Graphs
The well-known bidimensionality theory provides a method for designing fast, subexponential-time parameterized algorithms for a vast number of NP-hard problems on sparse graph classes such as planar graphs, bounded genus graphs, or, more generally, graphs with a fixed excluded minor. However, in order to apply the bidimensionality framework the considered problem needs to fulfill a special density property. Some well-known problems do not have this property, unfortunately, with probably the most prominent and important example being the Steiner Tree problem. Hence the question whether a subexponential-time parameterized algorithm for Steiner Tree on planar graphs exists has remained open. In this paper, we answer this question positively and develop an algorithm running in O(2^{O((k log k)^{2/3})}n) time and polynomial space, where k is the size of the Steiner tree and n is the number of vertices of the graph.
Our algorithm does not rely on tools from bidimensionality theory or graph minors theory, apart from Baker's classical approach. Instead, we introduce new tools and concepts to the study of the parameterized complexity of problems on sparse graphs.
planar graph
Steiner tree
subexponential-time algorithms
353-364
Regular Paper
Marcin
Pilipczuk
Marcin Pilipczuk
Michal
Pilipczuk
Michal Pilipczuk
Piotr
Sankowski
Piotr Sankowski
Erik Jan
van Leeuwen
Erik Jan van Leeuwen
10.4230/LIPIcs.STACS.2013.353
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The arithmetic complexity of tensor contractions
We investigate the algebraic complexity of tensor calculus. We consider a generalization of iterated matrix product to tensors and show that the resulting formulas exactly capture VP, the class of polynomial families efficiently computable by arithmetic circuits. This gives a natural and robust characterization of this complexity class which, despite its naturalness, is not very well understood so far.
algebraic complexity
arithmetic circuits
tensor calculus
365-376
Regular Paper
Florent
Capelli
Florent Capelli
Arnaud
Durand
Arnaud Durand
Stefan
Mengel
Stefan Mengel
10.4230/LIPIcs.STACS.2013.365
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Search versus Decision for Election Manipulation Problems
Most theoretical definitions about the complexity of manipulating elections focus on the decision problem of recognizing which instances can be successfully manipulated, rather than the search problem of finding the successful manipulative actions. Since the latter is a far more natural goal for manipulators, that definitional focus may be misguided if these two complexities can differ. Our main result is that they probably do differ: If integer factoring is hard, then for election manipulation, election bribery, and some types of election control, there are election systems for which recognizing which instances can be successfully manipulated is in polynomial time but producing the successful manipulations cannot be done in polynomial time.
Search vs. decision
application of structural complexity theory
377-388
Regular Paper
Edith
Hemaspaandra
Edith Hemaspaandra
Lane A.
Hemaspaandra
Lane A. Hemaspaandra
Curtis
Menton
Curtis Menton
10.4230/LIPIcs.STACS.2013.377
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Improved Bounds for Online Preemptive Matching
When designing a preemptive online algorithm for the maximum matching problem, we wish to maintain a valid matching M while edges of the underlying graph are presented one after the other. When presented with an edge e, the algorithm should decide whether to augment the matching M by adding e (in which case e may be removed later on) or to keep M in its current form without adding e (in which case e is lost for good). The objective is to eventually hold a matching M with maximum weight.
The main contribution of this paper is to establish new lower and upper bounds on the competitive ratio achievable by preemptive online algorithms:
- We provide a lower bound of 1 + ln 2 \approx 1.693 on the competitive ratio of any randomized algorithm for the maximum cardinality matching problem, thus improving on the currently best known bound of e / (e-1) \approx 1.581 due to Karp, Vazirani, and Vazirani [STOC'90].
- We devise a randomized algorithm that achieves an expected competitive ratio of 5.356 for maximum weight matching. This finding demonstrates the power of randomization in this context, showing how to beat the tight bound of 3 + 2\sqrt{2} \approx 5.828 for deterministic algorithms, obtained by combining the 5.828 upper bound of McGregor [APPROX'05] and the recent 5.828 lower bound of Varadaraja [ICALP'11].
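To illustrate the preemptive model, here is a sketch of the deterministic replacement rule behind the 3 + 2\sqrt{2} deterministic bound (essentially McGregor's algorithm with a parameter gamma; this is not the paper's randomized algorithm): a newly arriving edge evicts its conflicting matched edges only if its weight exceeds (1 + gamma) times their total weight.

```python
def preemptive_matching(edges, gamma):
    """Process weighted edges (u, v, w) online; keep a matching, evicting
    conflicting edges only when the newcomer is heavier by a factor 1 + gamma."""
    matched = {}  # vertex -> (edge, weight) of its current matched edge
    in_M = {}     # edge -> weight, the current matching
    for u, v, w in edges:
        # at most two matched edges conflict with (u, v); dedupe by edge
        conflicts = {matched[x][0]: matched[x] for x in (u, v) if x in matched}
        if w > (1 + gamma) * sum(cw for _, cw in conflicts.values()):
            for (cu, cv), _ in conflicts.values():
                matched.pop(cu, None)
                matched.pop(cv, None)
                del in_M[(cu, cv)]
            matched[u] = matched[v] = ((u, v), w)
            in_M[(u, v)] = w
    return in_M
```

With gamma = 1/sqrt(2) this deterministic rule matches the tight 3 + 2\sqrt{2} ratio mentioned above.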
Online algorithms
matching
lower bound
389-399
Regular Paper
Leah
Epstein
Leah Epstein
Asaf
Levin
Asaf Levin
Danny
Segev
Danny Segev
Oren
Weimann
Oren Weimann
10.4230/LIPIcs.STACS.2013.389
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Parameterized Matching in the Streaming Model
We study the problem of parameterized matching in a stream where we want to output matches between a pattern of length m and the last m symbols of the stream before the next symbol arrives. Parameterized matching is a natural generalisation of exact matching where an arbitrary one-to-one relabelling of pattern symbols is allowed. We show how this problem can be solved in constant time per arriving stream symbol and sublinear, near optimal space with high probability. Our results are surprising and important: it has been shown that almost no streaming pattern matching problems can be solved (not even randomised) in less than Theta(m) space, with exact matching as the only known problem to have a sublinear, near optimal space solution. Here we demonstrate that a similar sublinear, near optimal space solution is achievable for an even more challenging problem.
Pattern matching
streaming algorithms
randomized algorithms
400-411
Regular Paper
Markus
Jalsenius
Markus Jalsenius
Benny
Porat
Benny Porat
Benjamin
Sach
Benjamin Sach
10.4230/LIPIcs.STACS.2013.400
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Popular Matchings: Structure and Cheating Strategies
We consider the cheating strategies for the popular matchings problem.
Let G = (A \cup P, E) be a bipartite graph where A denotes a set of agents, P denotes a set of posts and the edges in E are ranked. Each agent ranks a subset of posts in an order of preference, possibly involving ties.
A matching M is popular if there exists no matching M' such that the number of agents that prefer M' to M exceeds the number of agents that prefer M to M'.
Consider a centralized market where agents submit their preferences and a central authority matches
agents to posts according to the notion of popularity.
Since a popular matching need not be unique, we assume that the central authority chooses an arbitrary popular matching. Let a_1 be the sole manipulative agent who is aware of the true
preference lists of all other agents.
The goal of a_1 is to falsify her preference list to get better always, that is, to improve the set of posts she gets matched to in the falsified instance.
We show that the optimal cheating strategy for a single agent to get better always can be computed in O(m+n) time when preference lists are all strict and in O(\sqrt{n}m) time when preference lists are allowed to contain ties. Here n = |A| + |P| and m = |E|.
To compute the cheating strategies, we develop a switching graph characterization of the popular matchings problem involving ties. The switching graph characterization was studied for the case of strict lists by McDermid and Irving (J. Comb. Optim. 2011) and was open for the case of ties. We show an O(\sqrt{n}m) time algorithm to compute the set of popular pairs using the switching graph.
These results are of independent interest and answer a part of the open questions posed by McDermid and Irving.
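The popularity condition defined above can be checked by brute force on tiny instances; the following sketch (exponential in the number of agents, for illustration only, with hypothetical agent/post names) makes the "no matching preferred by a majority" condition concrete:

```python
from itertools import product

def is_popular(M, agents, prefs):
    """Brute-force popularity test. prefs[a][p] is the rank of post p for agent a
    (lower is better, equal ranks model ties); an unmatched agent is treated as
    ranking below every listed post. M maps agents to their posts."""
    INF = float('inf')

    def rank(a, matching):
        p = matching.get(a)
        return prefs[a].get(p, INF) if p is not None else INF

    # Enumerate every matching: each agent takes one of her listed posts or stays unmatched.
    for choice in product(*[[None] + list(prefs[a]) for a in agents]):
        used = [p for p in choice if p is not None]
        if len(used) != len(set(used)):
            continue  # two agents on the same post: not a matching
        M2 = {a: p for a, p in zip(agents, choice) if p is not None}
        pro = sum(rank(a, M2) < rank(a, M) for a in agents)  # agents preferring M2
        con = sum(rank(a, M) < rank(a, M2) for a in agents)  # agents preferring M
        if pro > con:
            return False
    return True
```

For example, with two agents both ranking p1 over p2, the matching {a1: p1, a2: p2} is popular (any change that helps one agent is opposed by the other), whereas the empty matching is not.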
bipartite matchings
preferences
cheating strategies
412-423
Regular Paper
Meghana
Nasre
Meghana Nasre
10.4230/LIPIcs.STACS.2013.412
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Fooling One-Sided Quantum Protocols
We use the venerable "fooling set" method to prove new lower bounds on the quantum communication complexity of various functions. Let f : X x Y -> {0,1} be a Boolean function, fool^1(f) its maximal fooling set size among 1-inputs, Q_1^*(f) its one-sided-error quantum communication complexity with prior entanglement, and NQ(f) its nondeterministic quantum communication complexity (without prior entanglement; this model is trivial with shared randomness or entanglement). Our main results are the following, where logs are to base 2:
- If the maximal fooling set is "upper triangular" (which is for instance the case for the equality, disjointness, and greater-than functions), then we have Q_1^*(f) >= 1/2 log fool^1(f) - 1/2, which (by superdense coding) is essentially optimal for functions like equality, disjointness, and greater-than. No super-constant lower bound for equality seems to follow from earlier techniques.
- For all f we have Q_1^*(f) >= 1/4 log fool^1(f) - 1/2.
- NQ(f) >= 1/2 log fool^1(f) + 1. We do not know if the factor 1/2 is needed in this result, but it cannot be replaced by 1: we give an example where NQ(f) \approx 0.613 log fool^1(f).
Quantum computing
communication complexity
fooling set
lower bound
424-433
Regular Paper
Hartmut
Klauck
Hartmut Klauck
Ronald
de Wolf
Ronald de Wolf
10.4230/LIPIcs.STACS.2013.424
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Explicit relation between all lower bound techniques for quantum query complexity
The polynomial method and the adversary method are the two main techniques to prove lower bounds on quantum query complexity, and they have so far been considered as unrelated approaches. Here, we show an explicit reduction from the polynomial method to the multiplicative adversary method. The proof goes by extending the polynomial method from Boolean functions to quantum state generation problems. In the process, the bound is even strengthened. We then show that this extended polynomial method is a special case of the multiplicative adversary method with an adversary matrix that is independent of the function. This new result therefore provides insight on the reason why in some cases the adversary method is stronger than the polynomial method. It also reveals a clear picture of the relation between the different lower bound techniques, as it implies that all known techniques reduce to the multiplicative adversary method.
Quantum computation
lower bound
adversary method
polynomial method
434-445
Regular Paper
Loïck
Magnin
Loïck Magnin
Jérémie
Roland
Jérémie Roland
10.4230/LIPIcs.STACS.2013.434
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Optimal quantum query bounds for almost all Boolean functions
We show that almost all n-bit Boolean functions have bounded-error quantum query complexity at least n/2, up to lower-order terms. This improves over an earlier n/4 lower bound of Ambainis (A. Ambainis, 1999), and shows that van Dam's oracle interrogation (W. van Dam, 1998) is essentially optimal for almost all functions. Our proof uses the fact that the acceptance probability of a T-query algorithm can be written as the sum of squares of degree-T polynomials.
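The fact invoked in the last sentence is the standard polynomial-method observation (going back to Beals et al.): after T queries, every amplitude is a polynomial of degree at most T in the input bits, so the acceptance probability can be written as

```latex
P(x) \;=\; \sum_{z \in \mathrm{acc}} |\alpha_z(x)|^2
      \;=\; \sum_{z \in \mathrm{acc}}
            \bigl(\operatorname{Re}\alpha_z(x)\bigr)^2
          + \bigl(\operatorname{Im}\alpha_z(x)\bigr)^2,
\qquad \deg \alpha_z \le T,
```

a sum of squares of real polynomials of degree at most T, and hence itself a polynomial of degree at most 2T.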
Quantum computing
query complexity
lower bounds
polynomial method
446-453
Regular Paper
Andris
Ambainis
Andris Ambainis
Arturs
Backurs
Arturs Backurs
Juris
Smotrovs
Juris Smotrovs
Ronald
de Wolf
Ronald de Wolf
10.4230/LIPIcs.STACS.2013.446
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Streaming Complexity of Checking Priority Queues
This work is in the line of designing efficient checkers for testing the reliability of some massive data structures. Given a sequential access to the insert/extract operations on such a structure, one would like to decide, a posteriori only, if it corresponds to the evolution of a reliable structure.
In a context of massive data, one would like to minimize both the amount of reliable memory of the checker and the number of passes on the sequence of operations.
Chu, Kannan and McGregor (M. Chu, S. Kannan, and A. McGregor, 2007) initiated the study of checking priority queues in this setting. They showed that the use of timestamps allows one to check a priority queue with a single pass and memory space \tilde{O}(\sqrt{N}). Later, Chakrabarti, Cormode, Kondapally and McGregor (A. Chakrabarti, G. Cormode, R. Kondapally, and A. McGregor, 2010) removed the use of timestamps, and proved that more passes do not help.
We show that, even in the presence of timestamps, more passes do not help, solving an open problem of (M. Chu, S. Kannan, and A. McGregor, 2007; A. Chakrabarti, G. Cormode, R. Kondapally, and A. McGregor, 2010). On the other hand, we show that a second pass, in reverse direction, shrinks the memory space to \tilde{O}((\log N)^2), extending a phenomenon first observed by Magniez, Mathieu and Nayak (F. Magniez, C. Mathieu, and A. Nayak, 2010) for checking well-parenthesized expressions.
Streaming Algorithms
Communication Complexity
Priority Queue
454-465
Regular Paper
Nathanael
Francois
Nathanael Francois
Frédéric
Magniez
Frédéric Magniez
10.4230/LIPIcs.STACS.2013.454
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Deterministic algorithms for skewed matrix products
Recently, Pagh presented a randomized approximation algorithm for the multiplication of real-valued matrices building upon work for detecting the most frequent items in data streams. We continue this line of research and present new deterministic matrix multiplication algorithms.
Motivated by applications in data mining, we first consider the case of real-valued, nonnegative n-by-n input matrices A and B, and show how to obtain a deterministic approximation of the weights of individual entries, as well as the entrywise p-norm, of the product AB. The algorithm is simple, space-efficient and runs in one pass over the input matrices. For a user-defined b \in (0, n^2) the algorithm runs in time O(nb + n Sort(n)) and space O(n + b) and returns an approximation of the entries of AB within an additive factor of ||AB||_{E1}/b, where ||C||_{E1} = sum_{i, j} |C_{ij}| is the entrywise 1-norm of a matrix C and Sort(n) is the time required to sort n real numbers in linear space. Building upon a result by Berinde et al. we show that for skewed matrix products (a common situation in many real-life applications) the algorithm is more efficient and achieves better approximation guarantees than previously known randomized algorithms.
When the input matrices are not restricted to nonnegative entries, we present a new deterministic group testing algorithm detecting nonzero entries in the matrix product with large absolute value. The algorithm is clearly outperformed by randomized matrix multiplication algorithms, but as a byproduct we obtain the first O(n^{2 + epsilon})-time deterministic algorithm for matrix products with O(sqrt(n)) nonzero entries.
approximate deterministic memory-efficient matrix multiplication
466-477
Regular Paper
Konstantin
Kutzkov
Konstantin Kutzkov
10.4230/LIPIcs.STACS.2013.466
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The Simulated Greedy Algorithm for Several Submodular Matroid Secretary Problems
We study the matroid secretary problems with submodular valuation functions. In these problems, the elements arrive in random order. When one element arrives, we have to make an immediate and irrevocable decision on whether to accept it or not. The set of accepted elements must form an independent set in a predefined matroid. Our objective is to maximize the value of the accepted elements. In this paper, we focus on the case that the valuation function is a non-negative and monotonically non-decreasing submodular function.
We introduce a general algorithm for such submodular matroid secretary problems. In particular, we obtain constant-competitive algorithms for laminar matroids and transversal matroids. Our algorithms can further be applied to any independent set system defined by the intersection of a constant number of laminar matroids, while still achieving constant competitive ratios. Notice that laminar matroids generalize uniform matroids and partition matroids.
On the other hand, when the underlying valuation function is linear, our algorithm achieves a competitive ratio of 9.6 for laminar matroids, which significantly improves the previous result.
secretary problem
submodular function
matroid
online algorithm
478-489
Regular Paper
Tengyu
Ma
Tengyu Ma
Bo
Tang
Bo Tang
Yajun
Wang
Yajun Wang
10.4230/LIPIcs.STACS.2013.478
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Hardness of Conjugacy, Embedding and Factorization of multidimensional Subshifts of Finite Type
Subshifts of finite type are sets of colorings of the plane defined by local constraints. They can be seen as a discretization of continuous dynamical systems. We investigate here the hardness of deciding factorization, conjugacy and embedding of subshifts of finite type (SFTs) in dimension d > 1. In particular, we prove that the factorization problem is Sigma^0_3-complete and that the conjugacy and embedding problems are Sigma^0_1-complete in the arithmetical hierarchy.
Subshifts
Computability
Factorization
Embedding
Conjugacy
490-501
Regular Paper
Emmanuel
Jeandel
Emmanuel Jeandel
Pascal
Vanier
Pascal Vanier
10.4230/LIPIcs.STACS.2013.490
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The finiteness of a group generated by a 2-letter invertible-reversible Mealy automaton is decidable
We prove that a semigroup generated by a reversible two-state Mealy automaton is either finite or free of rank 2. This fact leads to the decidability of finiteness for groups generated by two-state or two-letter invertible-reversible Mealy automata and to the decidability of freeness for semigroups generated by two-state invertible-reversible Mealy automata.
Mealy automata
automaton semigroups
decidability of finiteness
decidability of freeness
Nerode equivalence
502-513
Regular Paper
Ines
Klimann
Ines Klimann
10.4230/LIPIcs.STACS.2013.502
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Mortality of Iterated Piecewise Affine Functions over the Integers: Decidability and Complexity (Extended Abstract)
In the theory of discrete-time dynamical systems, one studies the limiting behaviour of processes defined by iterating a fixed function f over a given space. A much-studied case involves piecewise affine functions on R^n. Blondel et al. (2001) studied the decidability of questions such as mortality for such functions with rational coefficients. Mortality means that every trajectory includes a 0; if the iteration is viewed as the loop while (x \ne 0) x := f(x), this means the loop is guaranteed to terminate.
Blondel et al. proved that the problems are undecidable when the dimension n of the state space is at least two. They assume that the variables range over the rationals; this is an essential assumption. From a program analysis (and discrete computability) viewpoint, it would be more interesting to consider integer-valued variables.
This paper establishes (un)decidability results for the integer setting. We show that also over integers, undecidability (moreover, Pi^0_2 completeness) begins at two dimensions. We further investigate the effect of several restrictions on the iterated functions. Specifically, we consider bounding the size of the partition defining f, and restricting the coefficients of the linear components. In the decidable cases, we give complexity results. The complexity is PTIME for affine functions, but for piecewise-affine ones it is PSPACE-complete. The undecidability proofs use some variants of the Collatz problem, which may be of independent interest.
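The loop view of mortality is easy to make executable. A famous instance of this flavour, mentioned above, is the Collatz map, here in its piecewise affine "shortcut" form on the integers (this is an illustration of the iteration scheme, not of the paper's constructions):

```python
# Mortality asks whether "while x != 0: x = f(x)" halts from every start.
# The shortcut Collatz map is a one-dimensional piecewise affine integer
# function whose global behaviour is a famous open problem.

def collatz_step(x):
    return x // 2 if x % 2 == 0 else (3 * x + 1) // 2

def iterate_until(x, target, max_steps=1000):
    """Iterate the map from x; return steps taken to reach target, or None."""
    for steps in range(max_steps):
        if x == target:
            return steps
        x = collatz_step(x)
    return None

# Every starting point tried so far reaches 1, but no proof is known
# that all of them do.
assert all(iterate_until(x, 1) is not None for x in range(1, 100))
```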
discrete-time dynamical systems
termination
Collatz problem
514-525
Extended Abstract
Amir M.
Ben-Amram
Amir M. Ben-Amram
10.4230/LIPIcs.STACS.2013.514
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
On the practically interesting instances of MAXCUT
For many optimization problems, the instances of practical interest often occupy just a tiny part of the algorithm's space of instances.
Following (Y. Bilu and N. Linial, 2010), we apply this perspective to MAXCUT, viewed as a clustering problem. Using a variety of techniques, we investigate practically interesting instances of this problem. Specifically, we show how to solve in polynomial time distinguished, metric, expanding and dense instances of MAXCUT under mild stability assumptions. In particular, (1 + epsilon)-stability (which is optimal) suffices for metric and dense MAXCUT. We also show how to solve in polynomial time Omega(sqrt(n))-stable instances of MAXCUT, substantially improving the best previously known result.
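To fix the objective the abstract refers to: MAXCUT seeks a bipartition of the vertices maximizing the total weight of edges crossing it. A brute-force sketch for tiny graphs (the paper's stability-based algorithms are polynomial; this naive check is exponential and purely illustrative):

```python
from itertools import product

# Brute-force MAXCUT: try all 2^n vertex bipartitions and keep the best.

def max_cut(n, edges):
    best = 0
    for side in product([0, 1], repeat=n):
        best = max(best, sum(w for (u, v, w) in edges if side[u] != side[v]))
    return best

# A triangle with unit weights: any bipartition cuts exactly 2 edges.
triangle = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
assert max_cut(3, triangle) == 2
```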
MAXCUT
Clustering
Hardness in practice
Stability
Non worst-case analysis
526-537
Regular Paper
Yonatan
Bilu
Yonatan Bilu
Amit
Daniely
Amit Daniely
Nati
Linial
Nati Linial
Michael
Saks
Michael Saks
10.4230/LIPIcs.STACS.2013.526
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
First Fit bin packing: A tight analysis
In the bin packing problem we are given an instance consisting of a sequence of items with sizes between 0 and 1. The objective is to pack these items into the smallest possible number of bins of unit size. The FirstFit algorithm packs each item into the first bin where it fits, opening a new bin only if the item does not fit into any currently open bin. In the early seventies it was shown that the asymptotic approximation ratio of FirstFit bin packing is equal to 1.7.
We prove that also the absolute approximation ratio for FirstFit bin packing is exactly 1.7. This means that if the optimum needs OPT bins, FirstFit always uses at most \lfloor 1.7 OPT \rfloor bins.
Furthermore we show matching lower bounds for a majority of values of OPT, i.e., we give instances on which FirstFit uses exactly \lfloor 1.7 OPT \rfloor bins.
Such matching upper and lower bounds were previously known only for finitely many small values of OPT. The previous published bound on the absolute approximation ratio of FirstFit was 12/7 \approx 1.7143. Recently a bound of 101/59 \approx 1.7119 was claimed.
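FirstFit itself takes only a few lines; the following sketch packs a small instance and checks it against the absolute bound \lfloor 1.7 OPT \rfloor stated above (the optimum here is read off by hand, not computed):

```python
import math

# FirstFit as described above: each item goes into the first open bin
# with room, opening a new bin only when none fits.

def first_fit(items):
    bins = []                          # remaining capacity of each open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free + 1e-12:   # tolerate float rounding
                bins[i] = free - size
                break
        else:
            bins.append(1.0 - size)
    return len(bins)

items = [0.5, 0.7, 0.5, 0.3]
used = first_fit(items)        # packs {0.5, 0.5} and {0.7, 0.3}: 2 bins
opt = 2                        # these items clearly need two unit bins
assert used <= math.floor(1.7 * opt)
```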
Approximation algorithms
online algorithms
bin packing
First Fit
538-549
Regular Paper
György
Dósa
György Dósa
Jiri
Sgall
Jiri Sgall
10.4230/LIPIcs.STACS.2013.538
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Constrained Binary Identification Problem
We consider the problem of building a binary decision tree to locate an object within a set using the least number of membership queries. This problem is equivalent to the "20 questions game" of information theory and is closely related to lossless source compression. If any query is admissible, Huffman coding is optimal, with close to H[P] questions on average, where H[P] is the entropy of the prior distribution P over objects. However, in many realistic scenarios there are constraints on which queries can be asked, and solving the problem optimally is NP-hard.
We provide novel polynomial-time approximation algorithms where constraints are defined in terms of "graph", general "cost", and "submodular" functions. In particular, we show that under graph constraints, there exists a constant-approximation algorithm for locating the target in the set. We then extend our approach to scenarios where the constraints are defined in terms of general cost functions that depend only on the size of the query, and provide an approximation algorithm that can find the target within an O(log(log n)) gap from the cost of the optimum algorithm. Submodular functions come as a natural generalization of cost functions with decreasing marginals. Under submodular set constraints, we devise an approximation algorithm that can find the target within an O(log n) gap from the cost of the optimum algorithm. The proposed algorithms are greedy in the sense that at each step they select a query that most evenly splits the set without violating the underlying constraints. These results can be applied to network tomography, active learning and interactive content search.
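The unconstrained baseline mentioned above, Huffman coding, satisfies the classical source-coding bound: the expected number of questions lies in [H(P), H(P) + 1). A minimal sketch of that bound (not of the paper's constrained algorithms):

```python
import heapq
import math

# Huffman tree over a prior P: repeatedly merge the two lightest groups;
# each merge pushes every symbol in both groups one level deeper.

def huffman_depths(probs):
    """Return the depth of each symbol's leaf in a Huffman tree."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    depths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            depths[i] += 1             # merged symbols sink one level
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return depths

P = [0.5, 0.25, 0.125, 0.125]
depths = huffman_depths(P)
expected = sum(p * d for p, d in zip(P, depths))
entropy = -sum(p * math.log2(p) for p in P)
assert entropy <= expected < entropy + 1   # H(P) <= E[#questions] < H(P)+1
```

For this dyadic prior the bound is tight: the expected depth equals the entropy exactly.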
Network Tomography
Binary Identification Problem
Approximation Algorithms
Graph Algorithms
Tree Search Strategies
Entropy
550-561
Regular Paper
Amin
Karbasi
Amin Karbasi
Morteza
Zadimoghaddam
Morteza Zadimoghaddam
10.4230/LIPIcs.STACS.2013.550
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Regular languages of thin trees
An infinite tree is called thin if it contains only countably many infinite branches. Thin trees can be seen as intermediate structures between infinite words and infinite trees. In this work we investigate properties of regular languages of thin trees.
Our main tool is an algebra suitable for thin trees. Using this framework we characterize various classes of regular languages: commutative, open in the standard topology, closed under two variants of bisimulational equivalence, and definable in WMSO logic among all trees.
We also show that in various meanings thin trees are not as rich as all infinite trees. In particular we observe a parity index collapse to level (1,3) and a topological complexity collapse to co-analytic sets. Moreover, a gap property is shown: a regular language of thin trees is either WMSO-definable among all trees or co-analytic-complete.
infinite trees
regular languages
effective characterizations
topological complexity
562-573
Regular Paper
Mikolaj
Bojanczyk
Mikolaj Bojanczyk
Tomasz
Idziaszek
Tomasz Idziaszek
Michal
Skrzypczak
Michal Skrzypczak
10.4230/LIPIcs.STACS.2013.562
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Approximate comparison of distance automata
Distance automata are automata weighted over the semiring (\mathbb{N} \cup \{\infty\}, \min, +) (the tropical semiring). Such automata compute functions from words to \mathbb{N} \cup \{\infty\}, such as the number of occurrences of a given letter. It is known that testing f <= g is an undecidable problem for f, g computed by distance automata. The main contribution of this paper is to show that an approximation of this problem becomes decidable.
We present an algorithm which, given epsilon > 0 and two functions f, g computed by distance automata, answers "yes" if f <= (1-epsilon) g, "no" if f \not\leq g, and may answer "yes" or "no" in all other cases. This result substantially refines previously known decidability results of the same type.
The core argument behind this quasi-decision procedure is an algorithm which is able to provide an approximate finite presentation of the closure under products of sets of matrices over the tropical semiring.
We also prove an affine domination theorem, which shows that previously known decision procedures for cost automata have an improved precision when used over distance automata.
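The min-plus semantics is worth seeing once. A minimal hypothetical example: a one-state distance automaton over {a, b} whose function is the number of occurrences of 'a', computed as a min-plus product of transition matrices:

```python
# Weights live in the tropical semiring (N ∪ {∞}, min, +): semiring
# "addition" is min, semiring "multiplication" is +.

def tropical_mul(M, N):
    n = len(M)
    return [[min(M[i][k] + N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# One-state automaton: reading 'a' costs 1, reading 'b' costs 0.
delta = {"a": [[1]], "b": [[0]]}

def weight(word):
    M = [[0]]                          # tropical identity for one state
    for letter in word:
        M = tropical_mul(M, delta[letter])
    return M[0][0]

assert weight("abab") == 2             # two occurrences of 'a'
assert weight("bbb") == 0
```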
Distance automata
tropical semiring
decidability
cost functions
574-585
Regular Paper
Thomas
Colcombet
Thomas Colcombet
Laure
Daviaud
Laure Daviaud
10.4230/LIPIcs.STACS.2013.574
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
The Rank of Tree-Automatic Linear Orderings
A tree-automatic structure is a structure whose domain can be encoded by a regular tree language such that each relation is recognisable by a finite automaton processing tuples of trees synchronously. The finite condensation rank (FC-rank) of a linear ordering measures how far it is from being dense. We prove that the FC-rank of every tree-automatic linear ordering is below omega^omega. This generalises Delhommé's result that each tree-automatic ordinal is less than omega^omega^omega. Furthermore, we show an analogue for tree-automatic linear orderings where the branching complexity of the trees involved is bounded.
tree-automatic structures
linear orderings
finite condensation rank
computable model theory
586-597
Regular Paper
Martin
Huschenbett
Martin Huschenbett
10.4230/LIPIcs.STACS.2013.586
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
A general framework for the realistic analysis of sorting and searching algorithms. Application to some popular algorithms
We describe a general framework for the realistic analysis of sorting and searching algorithms, and we apply it to the average-case analysis of five basic algorithms: three sorting algorithms (QuickSort, InsertionSort, BubbleSort) and two selection algorithms (QuickMin and SelectionMin). Usually, the analysis deals with the mean number of key comparisons, but, here, we view keys as words produced by the same source, which are compared via their symbols in lexicographic order. The "realistic" cost of the algorithm is now the total number of symbol comparisons performed by the algorithm, and, in this context, the average-case analysis aims to provide estimates for the mean number of symbol comparisons used by the algorithm. For sorting algorithms, and with respect to key comparisons, the average-case complexity of QuickSort is asymptotic to 2n log n, of InsertionSort to n^2/4, and of BubbleSort to n^2/2. With respect to symbol comparisons, we prove that their average-case complexity becomes Theta(n log^2 n), Theta(n^2), and Theta(n^2 log n), respectively. For selection algorithms, and with respect to key comparisons, the average-case complexity of QuickMin is asymptotic to 2n, and of SelectionMin to n - 1. With respect to symbol comparisons, we prove that their average-case complexity remains Theta(n). In all five cases, we describe the dominant constants, which exhibit the probabilistic behaviour of the source (namely, entropy and various notions of coincidence) with respect to the algorithm.
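The distinction between key comparisons and symbol comparisons is easy to observe directly: one lexicographic key comparison inspects symbols only up to the first mismatch. A small hypothetical instrumentation of InsertionSort:

```python
# A counter makes the "realistic" cost (symbol comparisons) observable,
# as opposed to the usual cost measure (key comparisons).

symbol_comparisons = 0

def less(u, v):
    """Lexicographic u < v, counting every symbol comparison made."""
    global symbol_comparisons
    for a, b in zip(u, v):
        symbol_comparisons += 1
        if a != b:
            return a < b
    return len(u) < len(v)

def insertion_sort(words):
    words = list(words)
    for i in range(1, len(words)):
        j = i
        while j > 0 and less(words[j], words[j - 1]):
            words[j], words[j - 1] = words[j - 1], words[j]
            j -= 1
    return words

words = ["baba", "abba", "abab", "bbbb"]
assert insertion_sort(words) == ["abab", "abba", "baba", "bbbb"]
# symbol_comparisons now holds the realistic cost of this run.
```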
Probabilistic analysis of algorithms
Sorting and searching algorithms
Pattern matching
Permutations
Information theory
Rice formula
Asymptotic estimates
598-609
Regular Paper
Julien
Clément
Julien Clément
Thu Hien
Nguyen Thi
Thu Hien Nguyen Thi
Brigitte
Vallée
Brigitte Vallée
10.4230/LIPIcs.STACS.2013.598
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Search using queries on indistinguishable items
We investigate the problem of determining a set S of k indistinguishable integers in the range [1, n].
The algorithm is allowed to query an integer q \in [1,n], and receive a response comparing this integer to an integer randomly chosen from S. The algorithm has no control over which element of S the query q is compared to. We show tight bounds for this problem. In particular, we show that in the natural regime where k <= n, the optimal number of queries to attain n^{-Omega(1)} error probability is Theta(k^3 log n). In the regime where k > n, the optimal number of queries is Theta(n^2 k log n).
Our main technical tools include the use of information theory to derive the lower bounds, and the application of noisy binary search in the spirit of Feige, Raghavan, Peleg, and Upfal (1994). In particular, our lower bound technique is likely to be applicable in other situations that involve search under uncertainty.
Search
Noisy Search
Information Theory
Query Complexity
610-621
Regular Paper
Mark
Braverman
Mark Braverman
Gal
Oshri
Gal Oshri
10.4230/LIPIcs.STACS.2013.610
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Pebbling, Entropy and Branching Program Size Lower Bounds
We contribute to the program of proving lower bounds on the size of branching programs solving the Tree Evaluation Problem introduced in (Stephen A. Cook, Pierre McKenzie, Dustin Wehr, Mark Braverman, and Rahul Santhanam, 2012). Proving an exponential lower bound for the size of non-deterministic thrifty branching programs would separate NL from P under the thrifty hypothesis. In this context, we consider a restriction of non-deterministic thrifty branching programs called bitwise-independence. We show that any bitwise-independent non-deterministic thrifty branching program solving BT_2(h,k) must have at least 1/2 k^{h/2} states. Prior to this work, lower bounds were known for general branching programs only for fixed heights h=2,3,4 (Cook et al., 2012). Our lower bounds are also tight (up to a factor of k), since the known non-deterministic thrifty branching programs for this problem (Cook et al., 2012), of size O(k^{h/2+1}), are bitwise-independent. We prove our results by associating a fractional pebbling strategy with any bitwise-independent non-deterministic thrifty branching program solving the Tree Evaluation Problem. Such a connection was not known previously, even for fixed heights.
Our main technique is the entropy method introduced by Jukna and Žák (2003), originally in the context of proving lower bounds for read-once branching programs. We also show that the lower bounds previously known for deterministic branching programs for the Tree Evaluation Problem (Cook et al., 2012) can be obtained using this approach. Using this method, we also show tight lower bounds for any k-way deterministic branching program solving the Tree Evaluation Problem when the instances are restricted to have the same group operation in all internal nodes.
Pebbling
Entropy Method
Branching Programs
Size Lower Bounds
622-633
Regular Paper
Balagopal
Komarath
Balagopal Komarath
Jayalal M. N.
Sarma
Jayalal M. N. Sarma
10.4230/LIPIcs.STACS.2013.622
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Advice Lower Bounds for the Dense Model Theorem
We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution D that is delta-dense in a distribution that is epsilon'-indistinguishable from uniform, there exists a "dense model" for D, that is, a distribution that is delta-dense in the uniform distribution and is epsilon-indistinguishable from D. This epsilon-indistinguishability is with respect to an arbitrary small class of functions F. For the natural case where epsilon' >= Omega(epsilon delta) and epsilon >= delta^{O(1)}, our lower bound implies that Omega(sqrt{(1/epsilon)log(1/delta)} log|F|) advice bits are necessary. There is only a polynomial gap between our lower bound and the best upper bound for this case (due to Zhang), which is O((1/epsilon^2)log(1/delta) log|F|). Our lower bound can be viewed as an analog of list size lower bounds for list-decoding of error-correcting codes, but for "dense model decoding" instead. Our proof introduces some new techniques which may be of independent interest, including an analysis of a majority of majorities of p-biased bits. The latter analysis uses an extremely tight lower bound on the tail of the binomial distribution, which we could not find in the literature.
Pseudorandomness
advice lower bounds
dense model theorem
634-645
Regular Paper
Thomas
Watson
Thomas Watson
10.4230/LIPIcs.STACS.2013.634
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode
Author Index
Author Index
Author Index
646-647
Regular Paper
Natacha
Portier
Natacha Portier
Thomas
Wilke
Thomas Wilke
10.4230/LIPIcs.STACS.2013.646
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode