Volume

LIPIcs, Volume 9

28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)




Event

STACS 2011, March 10-12, 2011, Dortmund, Germany

Editors

Thomas Schwentick
Christoph Dürr

Publication Details

  • published at: 2011-03-10
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-25-5
  • DBLP: db/conf/stacs/stacs2011

Documents
Document
Complete Volume
LIPIcs, Volume 9, STACS'11, Complete Volume

Authors: Thomas Schwentick and Christoph Dürr


Abstract
LIPIcs, Volume 9, STACS'11, Complete Volume

Cite as

28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Proceedings{schwentick_et_al:LIPIcs.STACS.2011,
  title =	{{LIPIcs, Volume 9, STACS'11, Complete Volume}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011},
  URN =		{urn:nbn:de:0030-drops-41031},
  doi =		{10.4230/LIPIcs.STACS.2011},
  annote =	{Keywords: Models of Computation, Nonnumerical Algorithms and Problems, Mathematical Logic, Formal Languages, Combinatorics, Graph Theory}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization

Authors: Thomas Schwentick and Christoph Dürr


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization

Cite as

28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. i-xvii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{schwentick_et_al:LIPIcs.STACS.2011.i,
  author =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{i--xvii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.i},
  URN =		{urn:nbn:de:0030-drops-29939},
  doi =		{10.4230/LIPIcs.STACS.2011.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization}
}
Document
Algorithms for Dynamic Speed Scaling

Authors: Susanne Albers


Abstract
Many modern microprocessors allow the speed/frequency to be set dynamically. The general goal is to execute a sequence of jobs on a variable-speed processor so as to minimize energy consumption. This paper surveys algorithmic results on dynamic speed scaling. We address settings where (1) jobs have strict deadlines and (2) job flow times are to be minimized.
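
The deadline setting surveyed here is commonly studied under the power model P(s) = s^alpha; below is a minimal sketch of the classical YDS offline algorithm for that setting (a standard algorithm in this literature, not code from the paper; the job data and alpha are illustrative).

def yds(jobs, alpha=3.0):
    """Energy of the schedule computed by the classical YDS algorithm for
    deadline-based speed scaling, assuming power = speed ** alpha.
    jobs: list of (release, deadline, work) with release < deadline."""
    jobs = list(jobs)
    energy = 0.0
    while jobs:
        # Find the densest interval [t1, t2]: total work of the jobs that
        # must run entirely inside it, divided by its length.
        times = sorted({t for r, d, _ in jobs for t in (r, d)})
        best = None
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                idx = [k for k, (r, d, _) in enumerate(jobs)
                       if r >= t1 and d <= t2]
                if idx:
                    density = sum(jobs[k][2] for k in idx) / (t2 - t1)
                    if best is None or density > best[0]:
                        best = (density, t1, t2, idx)
        density, t1, t2, idx = best
        energy += density ** alpha * (t2 - t1)  # critical jobs run at speed = density
        # Remove the scheduled jobs and collapse [t1, t2] to a point.
        def collapse(t):
            return t if t <= t1 else (t1 if t <= t2 else t - (t2 - t1))
        removed = set(idx)
        jobs = [(collapse(r), collapse(d), w)
                for k, (r, d, w) in enumerate(jobs) if k not in removed]
    return energy

print(yds([(0, 2, 4), (0, 4, 2), (3, 5, 3)]))  # total energy of the schedule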

Cite as

Susanne Albers. Algorithms for Dynamic Speed Scaling. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{albers:LIPIcs.STACS.2011.1,
  author =	{Albers, Susanne},
  title =	{{Algorithms for Dynamic Speed Scaling}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{1--11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.1},
  URN =		{urn:nbn:de:0030-drops-29954},
  doi =		{10.4230/LIPIcs.STACS.2011.1},
  annote =	{Keywords: competitive analysis, energy-efficiency, flow time, job deadline, offline algorithm, online algorithm, response time, scheduling, variable-speed processor}
}
Document
Structural Decomposition Methods and What They are Good For

Authors: Markus Aschinger, Conrad Drescher, Georg Gottlob, Peter Jeavons, and Evgenij Thorstensen


Abstract
This paper reviews structural problem decomposition methods, such as tree and path decompositions. It is argued that these notions can be applied in two distinct ways: either to show that a problem is efficiently solvable when a width parameter is fixed, or to prove that the unrestricted (or some width-parameter-free) version of a problem is tractable by using a width notion as a mathematical tool for directly solving the problem at hand. Examples are given for both cases. As a new showcase for the latter usage, we report some recent results on the Partner Units Problem, a form of configuration problem arising in an industrial context. We use the notion of a path decomposition to identify and solve a tractable class of instances of this problem with practical relevance.

Cite as

Markus Aschinger, Conrad Drescher, Georg Gottlob, Peter Jeavons, and Evgenij Thorstensen. Structural Decomposition Methods and What They are Good For. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 12-28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{aschinger_et_al:LIPIcs.STACS.2011.12,
  author =	{Aschinger, Markus and Drescher, Conrad and Gottlob, Georg and Jeavons, Peter and Thorstensen, Evgenij},
  title =	{{Structural Decomposition Methods and What They are Good For}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{12--28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.12},
  URN =		{urn:nbn:de:0030-drops-29960},
  doi =		{10.4230/LIPIcs.STACS.2011.12},
  annote =	{Keywords: decompositions}
}
Document
How to prove security of communication protocols? A discussion on the soundness of formal models w.r.t. computational ones.

Authors: Hubert Comon-Lundh and Véronique Cortier


Abstract
Security protocols are short programs that aim at securing communication over a public network. Their design is known to be error-prone, with flaws found years later. That is why they deserve a careful security analysis, with rigorous proofs. Two main lines of research have been (independently) developed to analyse the security of protocols. On the one hand, formal methods provide symbolic models and often automatic proofs. On the other hand, cryptographic models propose a tighter modeling, but proofs are more difficult to write and to check. An approach developed during the last decade consists in bridging the two approaches, showing that symbolic models are sound w.r.t. computational ones, yielding strong security guarantees using automatic tools. These results have been developed for several cryptographic primitives (e.g. symmetric and asymmetric encryption, signatures, hash functions) and security properties. While proving soundness of symbolic models is a very promising approach, several technical details are often not satisfactory. Focusing on symmetric encryption, we describe the difficulties and limitations of the available results.

Cite as

Hubert Comon-Lundh and Véronique Cortier. How to prove security of communication protocols? A discussion on the soundness of formal models w.r.t. computational ones. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 29-44, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{comonlundh_et_al:LIPIcs.STACS.2011.29,
  author =	{Comon-Lundh, Hubert and Cortier, V\'{e}ronique},
  title =	{{How to prove security of communication protocols? A discussion on the soundness of formal models w.r.t. computational ones.}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{29--44},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.29},
  URN =		{urn:nbn:de:0030-drops-29993},
  doi =		{10.4230/LIPIcs.STACS.2011.29},
  annote =	{Keywords: verification, security, cryptography}
}
Document
Local dependency dynamic programming in the presence of memory faults

Authors: Saverio Caminiti, Irene Finocchi, and Emanuele G. Fusco


Abstract
We investigate the design of dynamic programming algorithms in unreliable memories, i.e., in the presence of faults that may arbitrarily corrupt memory locations during the algorithm execution. As a main result, we devise a general resilient framework that can be applied to all local dependency dynamic programming problems, where updates to entries in the auxiliary table are determined by the contents of neighboring cells. Consider, as an example, the computation of the edit distance between two strings of length n and m. We prove that, for any arbitrarily small constant epsilon in (0,1] and n >= m, this problem can be solved correctly with high probability in O(nm + alpha delta^{1+epsilon}) worst-case time and O(nm + n delta) space, when up to delta memory faults can be inserted by an adversary with unbounded computational power and alpha <= delta is the actual number of faults occurring during the computation. We also show that an optimal edit sequence can be constructed in additional time O(n delta + alpha delta^{1+epsilon}). It follows that our resilient algorithms match the running time and space usage of the standard non-resilient implementations while tolerating almost linearly-many faults.
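
For reference, the non-resilient baseline that the resilient framework matches is the textbook edit-distance dynamic program, whose entries depend only on neighboring cells (the "local dependency" structure mentioned above). A minimal sketch:

def edit_distance(a, b):
    """Standard O(nm) dynamic program. Entry D[i][j] depends only on the
    neighboring cells D[i-1][j], D[i][j-1], D[i-1][j-1] -- the local
    dependency structure exploited by the resilient framework."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i            # i deletions
    for j in range(m + 1):
        D[0][j] = j            # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # delete a[i-1]
                          D[i][j - 1] + 1,        # insert b[j-1]
                          D[i - 1][j - 1] + sub)  # match / substitute
    return D[n][m]

assert edit_distance("kitten", "sitting") == 3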

Cite as

Saverio Caminiti, Irene Finocchi, and Emanuele G. Fusco. Local dependency dynamic programming in the presence of memory faults. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 45-56, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{caminiti_et_al:LIPIcs.STACS.2011.45,
  author =	{Caminiti, Saverio and Finocchi, Irene and Fusco, Emanuele G.},
  title =	{{Local dependency dynamic programming in the presence of memory faults}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{45--56},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.45},
  URN =		{urn:nbn:de:0030-drops-29988},
  doi =		{10.4230/LIPIcs.STACS.2011.45},
  annote =	{Keywords: unreliable memories, fault-tolerant algorithms, local dependency dynamic programming, edit distance}
}
Document
Tight bounds for rumor spreading in graphs of a given conductance

Authors: George Giakkoupis


Abstract
We study the connection between the rate at which a rumor spreads throughout a graph and the conductance of the graph -- a standard measure of a graph's expansion properties. We show that for any n-node graph with conductance phi, the classical PUSH-PULL algorithm distributes a rumor to all nodes of the graph in O(phi^(-1) log(n)) rounds with high probability (w.h.p.). This bound improves a recent result of Chierichetti, Lattanzi, and Panconesi [STOC 2010], and it is tight in the sense that there exist graphs where Omega(phi^(-1) log(n)) rounds of the PUSH-PULL algorithm are required to distribute a rumor w.h.p. We also explore the PUSH and the PULL algorithms, and derive conditions that are both necessary and sufficient for the above upper bound to hold for those algorithms as well. An interesting finding is that every graph contains a node such that the PULL algorithm takes O(phi^(-1) log(n)) rounds w.h.p. to distribute a rumor started at that node. In contrast, there are graphs where the PUSH algorithm requires significantly more rounds for any start node.
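
The PUSH-PULL protocol itself is simple to state; the following simulation sketch (our illustration, not code from the paper) counts rounds on a given graph, assuming it is connected.

import random

def push_pull_rounds(adj, start):
    """Rounds of the classical PUSH-PULL rumor-spreading protocol on a
    connected graph. adj: dict node -> list of neighbors. Each round every
    node calls a uniformly random neighbor, and the rumor crosses the call
    in both directions (PUSH and PULL)."""
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        newly = set()
        for u, nbrs in adj.items():
            v = random.choice(nbrs)
            if u in informed or v in informed:   # an informed endpoint spreads the rumor
                newly.update((u, v))
        informed |= newly
    return rounds

# Example: a ring of 16 nodes.
ring = {i: [(i - 1) % 16, (i + 1) % 16] for i in range(16)}
print(push_pull_rounds(ring, 0))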

Cite as

George Giakkoupis. Tight bounds for rumor spreading in graphs of a given conductance. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 57-68, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{giakkoupis:LIPIcs.STACS.2011.57,
  author =	{Giakkoupis, George},
  title =	{{Tight bounds for rumor spreading in graphs of a given conductance}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{57--68},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.57},
  URN =		{urn:nbn:de:0030-drops-29977},
  doi =		{10.4230/LIPIcs.STACS.2011.57},
  annote =	{Keywords: conductance, rumor spreading}
}
Document
Tight Bounds For Distributed MST Verification

Authors: Liah Kor, Amos Korman, and David Peleg


Abstract
This paper establishes tight bounds for the Minimum-weight Spanning Tree (MST) verification problem in the distributed setting. Specifically, we provide an MST verification algorithm that achieves simultaneously ~O(|E|) messages and ~O(sqrt(n) + D) time, where |E| is the number of edges in the given graph G and D is G's diameter. On the negative side, we show that any MST verification algorithm must send Omega(|E|) messages and incur ~Omega(sqrt(n) + D) time in the worst case. Our upper bound result appears to indicate that the verification of an MST may be easier than its construction, since for MST construction, both lower bounds of Omega(|E|) messages and Omega(sqrt(n) + D) time hold, but at the moment there is no known distributed algorithm that constructs an MST and achieves simultaneously ~O(|E|) messages and ~O(sqrt(n) + D) time. Specifically, the best known time-optimal algorithm (using ~O(sqrt(n) + D) time) requires O(|E| + n^{3/2}) messages, and the best known message-optimal algorithm (using ~O(|E|) messages) requires O(n) time. On the other hand, our lower bound results indicate that the verification of an MST is not significantly easier than its construction.

Cite as

Liah Kor, Amos Korman, and David Peleg. Tight Bounds For Distributed MST Verification. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 69-80, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{kor_et_al:LIPIcs.STACS.2011.69,
  author =	{Kor, Liah and Korman, Amos and Peleg, David},
  title =	{{Tight Bounds For Distributed MST Verification}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{69--80},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.69},
  URN =		{urn:nbn:de:0030-drops-30000},
  doi =		{10.4230/LIPIcs.STACS.2011.69},
  annote =	{Keywords: distributed algorithms, distributed verification, labeling schemes, minimum-weight spanning tree}
}
Document
Automata based verification over linearly ordered data domains

Authors: Luc Segoufin and Szymon Torunczyk


Abstract
In this paper we work over linearly ordered data domains equipped with finitely many unary predicates and constants. We consider nondeterministic automata processing words and storing finitely many variables ranging over the domain. During a transition, these automata can compare the data values of the current configuration with those of the previous configuration using the linear order, the unary predicates and the constants. We show that emptiness for such automata is decidable, both over finite and infinite words, under reasonable computability assumptions on the linear order. Finally, we show how our automata model can be used for verifying properties of workflow specifications in the presence of an underlying database.

Cite as

Luc Segoufin and Szymon Torunczyk. Automata based verification over linearly ordered data domains. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 81-92, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{segoufin_et_al:LIPIcs.STACS.2011.81,
  author =	{Segoufin, Luc and Torunczyk, Szymon},
  title =	{{Automata based verification over linearly ordered data domains}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{81--92},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.81},
  URN =		{urn:nbn:de:0030-drops-30017},
  doi =		{10.4230/LIPIcs.STACS.2011.81},
  annote =	{Keywords: register automata, data values, linear order}
}
Document
Bottom-up automata on data trees and vertical XPath

Authors: Diego Figueira and Luc Segoufin


Abstract
A data tree is a tree in which every node carries a label from a finite alphabet and a datum from some infinite domain. We introduce a new model of automata over unranked data trees with a decidable emptiness problem. It is essentially a bottom-up alternating automaton with one register, enriched with epsilon-transitions that perform tests on the data values of the subtree. We show that it captures the expressive power of the vertical fragment of XPath -- containing the child, descendant, parent and ancestor axes -- thus obtaining a decision procedure for its satisfiability problem.

Cite as

Diego Figueira and Luc Segoufin. Bottom-up automata on data trees and vertical XPath. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 93-104, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{figueira_et_al:LIPIcs.STACS.2011.93,
  author =	{Figueira, Diego and Segoufin, Luc},
  title =	{{Bottom-up automata on data trees and vertical XPath}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{93--104},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.93},
  URN =		{urn:nbn:de:0030-drops-30029},
  doi =		{10.4230/LIPIcs.STACS.2011.93},
  annote =	{Keywords: Decidability, XPath, Data trees, Bottom-up tree automata}
}
Document
Data Monoids

Authors: Mikolaj Bojanczyk


Abstract
We develop an algebraic theory for languages of data words. We prove that, under certain conditions, a language of data words is definable in first-order logic if and only if its syntactic monoid is aperiodic.
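
For classical words without data, the stated correspondence is the Schützenberger/McNaughton-Papert theorem, and aperiodicity of a transition monoid can be checked mechanically. A sketch under that classical-words assumption (the DFA and helper names are ours):

def transition_monoid(delta, n):
    """delta: dict letter -> tuple of length n (state transformation).
    Returns the monoid of transformations generated under composition."""
    identity = tuple(range(n))
    monoid, frontier = {identity}, [identity]
    while frontier:
        t = frontier.pop()
        for g in delta.values():
            composed = tuple(g[t[q]] for q in range(n))  # apply t, then g
            if composed not in monoid:
                monoid.add(composed)
                frontier.append(composed)
    return monoid

def is_aperiodic(monoid, n):
    """A finite monoid is aperiodic iff every element m satisfies
    m^k = m^(k+1) for some k (no nontrivial groups)."""
    def compose(a, b):
        return tuple(b[a[q]] for q in range(n))
    for m in monoid:
        p = m
        for _ in range(len(monoid) + 1):
            q = compose(p, m)
            if q == p:
                break
            p = q
        else:
            return False
    return True

# Example: minimal DFA of (a+b)*ab, states 0,1,2 with accepting state 2.
delta = {"a": (1, 1, 1), "b": (0, 2, 0)}
print(is_aperiodic(transition_monoid(delta, 3), 3))  # True: star-free, FO-definable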

Cite as

Mikolaj Bojanczyk. Data Monoids. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 105-116, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{bojanczyk:LIPIcs.STACS.2011.105,
  author =	{Bojanczyk, Mikolaj},
  title =	{{Data Monoids}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{105--116},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.105},
  URN =		{urn:nbn:de:0030-drops-30030},
  doi =		{10.4230/LIPIcs.STACS.2011.105},
  annote =	{Keywords: Monoid, Data Words, Nominal Set, First-Order Logic}
}
Document
Minimum s-t cut in undirected planar graphs when the source and the sink are close

Authors: Haim Kaplan and Yahav Nussbaum


Abstract
Consider the minimum s-t cut problem in an embedded undirected planar graph. Let p be the minimum number of faces that a curve from s to t passes through. If p = 1, that is, the vertices s and t are on the boundary of the same face, then the minimum cut can be found in O(n) time. For general planar graphs this cut can be found in O(n log n) time. We unify these results and give an O(n log p) time algorithm. We use cut-cycles to obtain the value of the minimum cut, and study the structure of these cycles to get an efficient algorithm.

Cite as

Haim Kaplan and Yahav Nussbaum. Minimum s-t cut in undirected planar graphs when the source and the sink are close. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 117-128, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.STACS.2011.117,
  author =	{Kaplan, Haim and Nussbaum, Yahav},
  title =	{{Minimum s-t cut in undirected planar graphs when the source and the sink are close}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{117--128},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.117},
  URN =		{urn:nbn:de:0030-drops-30049},
  doi =		{10.4230/LIPIcs.STACS.2011.117},
  annote =	{Keywords: planar graph, minimum cut, shortest path, cut cycle}
}
Document
Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing

Authors: Petr Kolman and Christian Scheideler


Abstract
An elementary h-route flow, for an integer h >= 1, is a set of h edge-disjoint paths between a source and a sink, each path carrying a unit of flow, and an h-route flow is a non-negative linear combination of elementary h-route flows. An h-route cut is a set of edges whose removal decreases the maximum h-route flow between a given source-sink pair (or between every source-sink pair in the multicommodity setting) to zero. The main result of this paper is an approximate duality theorem for multicommodity h-route cuts and flows, for h <= 3: The size of a minimum h-route cut is at least f/h and at most O(log^3(k) f), where f is the size of the maximum h-route flow and k is the number of commodities. The main step towards the proof of this duality is the design and analysis of a polynomial-time approximation algorithm for the minimum h-route cut problem for h = 3 that has an approximation ratio of O(log^3 k). Previously, polylogarithmic approximation was known only for h-route cuts for h <= 2. A key ingredient of our algorithm is a novel rounding technique that we call multilevel ball-growing. Though the proof of the duality relies on this algorithm, it is not a straightforward corollary of it as in the case of classical multicommodity flows and cuts. Similar results are shown also for the sparsest multiroute cut problem.
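
By Menger's theorem, an elementary h-route flow between s and t exists iff there are h edge-disjoint s-t paths, which a unit-capacity max-flow computation detects. A small illustrative sketch (ours, not the paper's algorithm):

from collections import deque

def edge_disjoint_paths(edges, s, t):
    """Maximum number of edge-disjoint s-t paths in an undirected graph,
    via unit-capacity max flow (Edmonds-Karp). An elementary h-route flow
    between s and t exists iff this value is at least h."""
    cap = {}
    for u, v in edges:                    # undirected edge -> two unit arcs
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap[(v, u)] = cap.get((v, u), 0) + 1
    adj = {}
    for u, v in cap:
        adj.setdefault(u, set()).add(v)
    flow = 0
    while True:
        parent = {s: None}                # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t                             # augment by one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Two edge-disjoint paths between 0 and 3 in a 4-cycle:
print(edge_disjoint_paths([(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))  # 2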

Cite as

Petr Kolman and Christian Scheideler. Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 129-140, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{kolman_et_al:LIPIcs.STACS.2011.129,
  author =	{Kolman, Petr and Scheideler, Christian},
  title =	{{Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{129--140},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.129},
  URN =		{urn:nbn:de:0030-drops-30051},
  doi =		{10.4230/LIPIcs.STACS.2011.129},
  annote =	{Keywords: Multicommodity flow, Multiroute flow, Cuts, Duality}
}
Document
Compact Visibility Representation of Plane Graphs

Authors: Jiun-Jie Wang and Xin He


Abstract
The visibility representation (VR for short) is a classical representation of plane graphs. It has various applications and has been extensively studied. A main focus of the study is to minimize the size of the VR. It is known that there exists a plane graph G with n vertices where any VR of G requires a grid of size at least (2/3)n x ((4/3)n - 3) (width x height). For upper bounds, it is known that every plane graph has a VR with grid size at most (2/3)n x (2n - 5), and a VR with grid size at most (n-1) x (4/3)n. It has been an open problem to find a VR with both height and width simultaneously bounded away from the trivial upper bounds (namely with size at most c_h n x c_w n with c_h < 1 and c_w < 2). In this paper, we provide the first VR construction with this property. We prove that every plane graph of n vertices has a VR with height <= max{(23/24)n + 2 ceil(sqrt(n)) + 4, (11/12)n + 13} and width <= (23/12)n. The area (height x width) of our VR is larger than the area of some previous results. However, bounding one dimension of the VR only requires finding a good st-orientation or a good dual s^*t^*-orientation of G. On the other hand, to bound both dimensions of the VR simultaneously, one must find a good st-orientation and a good dual s^*t^*-orientation at the same time, which is far more challenging. Since st-orientations are a very useful concept in other applications, this result may be of independent interest.

Cite as

Jiun-Jie Wang and Xin He. Compact Visibility Representation of Plane Graphs. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 141-152, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{wang_et_al:LIPIcs.STACS.2011.141,
  author =	{Wang, Jiun-Jie and He, Xin},
  title =	{{Compact Visibility Representation of Plane Graphs}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{141--152},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.141},
  URN =		{urn:nbn:de:0030-drops-30064},
  doi =		{10.4230/LIPIcs.STACS.2011.141},
  annote =	{Keywords: plane graph, plane triangulation, visibility representation, st-orientation}
}
Document
Telling convex from reflex allows to map a polygon

Authors: Jeremie Chalopin, Shantanu Das, Yann Disser, Matus Mihalak, and Peter Widmayer


Abstract
We consider the exploration of a simple polygon P by a robot that moves from vertex to vertex along edges of the visibility graph of P. The visibility graph has a vertex for every vertex of P and an edge between two vertices if they see each other, i.e., if the line segment connecting them lies entirely inside P. While located at a vertex, the robot is capable of ordering the vertices it sees in counter-clockwise order as they appear on the boundary, and for every two such vertices, it can distinguish whether the angle between them is convex (<= pi) or reflex (> pi). Other than that, distant vertices are indistinguishable to the robot. We assume that an upper bound on the number of vertices is known and show that the robot is always capable of reconstructing the visibility graph of P. We also show that multiple identical, indistinguishable and deterministic such robots can always position themselves such that they mutually see each other, i.e., such that they form a clique in the visibility graph.

Cite as

Jeremie Chalopin, Shantanu Das, Yann Disser, Matus Mihalak, and Peter Widmayer. Telling convex from reflex allows to map a polygon. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 153-164, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{chalopin_et_al:LIPIcs.STACS.2011.153,
  author =	{Chalopin, Jeremie and Das, Shantanu and Disser, Yann and Mihalak, Matus and Widmayer, Peter},
  title =	{{Telling convex from reflex allows to map a polygon}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{153--164},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.153},
  URN =		{urn:nbn:de:0030-drops-30077},
  doi =		{10.4230/LIPIcs.STACS.2011.153},
  annote =	{Keywords: polygon mapping, map construction, autonomous agent, simple robot, visibility graph reconstruction}
}
Document
Cross-Composition: A New Technique for Kernelization Lower Bounds

Authors: Hans L. Bodlaender, Bart M. P. Jansen, and Stefan Kratsch


Abstract
We introduce a new technique for proving kernelization lower bounds, called cross-composition. A classical problem L cross-composes into a parameterized problem Q if an instance of Q with polynomially bounded parameter value can express the logical OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008), we show that if an NP-hard problem cross-composes into a parameterized problem Q, then Q does not admit a polynomial kernel unless the polynomial hierarchy collapses. Our technique generalizes and strengthens the recent techniques of using OR-composition algorithms and of transferring the lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations: e.g., Chromatic Number, Clique, and Weighted Feedback Vertex Set do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. We have similar lower bounds for Feedback Vertex Set.

Cite as

Hans L. Bodlaender, Bart M. P. Jansen, and Stefan Kratsch. Cross-Composition: A New Technique for Kernelization Lower Bounds. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 165-176, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{bodlaender_et_al:LIPIcs.STACS.2011.165,
  author =	{Bodlaender, Hans L. and Jansen, Bart M. P. and Kratsch, Stefan},
  title =	{{Cross-Composition: A New Technique for Kernelization Lower Bounds}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{165--176},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.165},
  URN =		{urn:nbn:de:0030-drops-30082},
  doi =		{10.4230/LIPIcs.STACS.2011.165},
  annote =	{Keywords: kernelization, lower bounds, parameterized complexity}
}
Document
Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter

Authors: Bart M. P. Jansen and Hans L. Bodlaender


Abstract
Kernelization is a concept that enables the formal mathematical analysis of data reduction through the framework of parameterized complexity. Intensive research into the Vertex Cover problem has shown that there is a preprocessing algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G',k') in polynomial time with the guarantee that G' has at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the terminology of parameterized complexity, we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Theta(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated to the vertex covering number VC(G), since fvs(G) <= VC(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G',X',k') such that k' <= k, |X'| <= |X| and, most importantly, |V(G')| <= 2k and |V(G')| in O(|X'|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast, we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by fvs(G) unless the polynomial hierarchy collapses to the third level (PH = Sigma_3^p). Our work is one of the first examples of research in kernelization using a non-standard parameter, and shows that this approach can yield interesting computational insights. To obtain our results we make extensive use of the combinatorial structure of independent sets in forests.
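
For readers new to kernelization, the classical high-degree ("Buss") reduction for k-Vertex Cover illustrates the concept; the sketch below shows that textbook rule only, not the 2k-vertex kernel or the fvs-parameterized kernels discussed above.

def buss_kernel(edges, k):
    """Classical Buss kernelization for k-Vertex Cover (textbook version,
    illustrating the notion of a kernel). Returns (reduced edge set,
    reduced budget), or None if (G,k) is a no-instance. The reduced graph
    has at most k^2 edges."""
    edges = set(map(frozenset, edges))
    while True:
        if k < 0:
            return None
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        # A vertex of degree > k must be in every vertex cover of size <= k.
        edges = {e for e in edges if high not in e}
        k -= 1
    if len(edges) > k * k:   # k vertices of degree <= k cover at most k^2 edges
        return None
    return edges, k

print(buss_kernel([(1, 2), (1, 3), (1, 4), (2, 3)], 2))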

Cite as

Bart M. P. Jansen and Hans L. Bodlaender. Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 177-188, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{jansen_et_al:LIPIcs.STACS.2011.177,
  author =	{Jansen, Bart M. P. and Bodlaender, Hans L.},
  title =	{{Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{177--188},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.177},
  URN =		{urn:nbn:de:0030-drops-30097},
  doi =		{10.4230/LIPIcs.STACS.2011.177},
  annote =	{Keywords: kernelization, lower bounds, parameterized complexity}
}
Document
Hitting forbidden minors: Approximation and Kernelization

Authors: Fedor V. Fomin, Daniel Lokshtanov, Neeldhara Misra, Geevarghese Philip, and Saket Saurabh


Abstract
We study a general class of problems called F-Deletion problems. In an F-Deletion problem, we are asked whether a subset of at most k vertices can be deleted from a graph G such that the resulting graph does not contain as a minor any graph from the family F of forbidden minors. We obtain a number of algorithmic results on the F-Deletion problem when F contains a planar graph. We give (i) a linear vertex kernel on graphs excluding the t-claw K_{1,t}, the star with t leaves, as an induced subgraph, where t is a fixed integer, and (ii) an approximation algorithm achieving an approximation ratio of O(log^{3/2} OPT), where OPT is the size of an optimal solution on general undirected graphs. Finally, we obtain polynomial kernels for the case when F contains only the graph theta_c as a minor for a fixed integer c. The graph theta_c consists of two vertices connected by c parallel edges. Even though this may appear to be a very restricted class of problems, it already encompasses well-studied problems such as Vertex Cover, Feedback Vertex Set and Diamond Hitting Set. The generic kernelization algorithm is based on a non-trivial application of protrusion techniques, previously used only for problems on topological graph classes.

Cite as

Fedor V. Fomin, Daniel Lokshtanov, Neeldhara Misra, Geevarghese Philip, and Saket Saurabh. Hitting forbidden minors: Approximation and Kernelization. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 189-200, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{fomin_et_al:LIPIcs.STACS.2011.189,
  author =	{Fomin, Fedor V. and Lokshtanov, Daniel and Misra, Neeldhara and Philip, Geevarghese and Saurabh, Saket},
  title =	{{Hitting forbidden minors: Approximation and Kernelization}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{189--200},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.189},
  URN =		{urn:nbn:de:0030-drops-30103},
  doi =		{10.4230/LIPIcs.STACS.2011.189},
  annote =	{Keywords: kernelization}
}
Document
Extended Abstract
Self-Assembly of Arbitrary Shapes Using RNAse Enzymes: Meeting the Kolmogorov Bound with Small Scale Factor (Extended Abstract)

Authors: Erik D. Demaine, Matthew J. Patitz, Robert T. Schweller, and Scott M. Summers


Abstract
We consider a model of algorithmic self-assembly of geometric shapes out of square Wang tiles studied in SODA 2010, in which there are two types of tiles (e.g., constructed out of DNA and RNA material) and one operation that destroys all tiles of a particular type (e.g., an RNAse enzyme destroys all RNA tiles). We show that a single use of this destruction operation enables much more efficient construction of arbitrary shapes. In particular, an arbitrary shape can be constructed using an asymptotically optimal number of distinct tile types (related to the shape's Kolmogorov complexity), after scaling the shape by only a logarithmic factor. By contrast, without the destruction operation, the best such result has a scale factor at least linear in the size of the shape and is connected only by a spanning tree of the scaled tiles. We also characterize a large collection of shapes that can be constructed efficiently without any scaling.

Cite as

Erik D. Demaine, Matthew J. Patitz, Robert T. Schweller, and Scott M. Summers. Self-Assembly of Arbitrary Shapes Using RNAse Enzymes: Meeting the Kolmogorov Bound with Small Scale Factor (Extended Abstract). In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 201-212, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{demaine_et_al:LIPIcs.STACS.2011.201,
  author =	{Demaine, Erik D. and Patitz, Matthew J. and Schweller, Robert T. and Summers, Scott M.},
  title =	{{Self-Assembly of Arbitrary Shapes Using RNAse Enzymes: Meeting the Kolmogorov Bound with Small Scale Factor}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{201--212},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.201},
  URN =		{urn:nbn:de:0030-drops-30118},
  doi =		{10.4230/LIPIcs.STACS.2011.201},
  annote =	{Keywords: Biomolecular computation, RNAse enzyme self-assembly, algorithmic self-assembly, Kolmogorov complexity}
}
Document
Weakly Unambiguous Morphisms

Authors: Dominik D. Freydenberger, Hossein Nevisi, and Daniel Reidenbach


Abstract
A nonerasing morphism sigma is said to be weakly unambiguous with respect to a word w if sigma is the only nonerasing morphism that can map w to sigma(w), i.e., there does not exist any other nonerasing morphism tau satisfying tau(w) = sigma(w). In the present paper, we wish to characterise those words with respect to which there exists such a morphism. This question is nontrivial if we consider so-called length-increasing morphisms, which map a word to an image that is strictly longer than the word. Our main result is a compact characterisation that holds for all morphisms with ternary or larger target alphabets. We also comprehensively describe those words that have a weakly unambiguous length-increasing morphism with a unary target alphabet, but we have to leave the problem open for binary alphabets, where we can merely give some non-characteristic conditions.
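
The definition can be tested by brute force on short words: fix an image length for each letter, slice sigma(w) accordingly, and check consistency. A sketch (exponential, for illustration only; function names are ours):

from itertools import product

def morphisms_mapping(w, target):
    """All nonerasing morphisms tau (dicts letter -> image, on the letters
    of w) with tau(w) == target, found by enumerating image lengths and
    slicing the target. Exponential; small instances only."""
    letters = sorted(set(w))
    counts = {c: w.count(c) for c in letters}
    found = []
    for lens in product(range(1, len(target) + 1), repeat=len(letters)):
        L = dict(zip(letters, lens))
        if sum(counts[c] * L[c] for c in letters) != len(target):
            continue
        tau, pos, ok = {}, 0, True
        for c in w:                       # slice target according to w
            piece = target[pos:pos + L[c]]
            pos += L[c]
            if tau.setdefault(c, piece) != piece:
                ok = False                # same letter got two different images
                break
        if ok:
            found.append(tau)
    return found

def weakly_unambiguous(w, sigma):
    """True iff sigma is the only nonerasing morphism mapping w to sigma(w)."""
    target = "".join(sigma[c] for c in w)
    return morphisms_mapping(w, target) == [sigma]

sigma = {"a": "ab", "b": "ba"}
print(weakly_unambiguous("ab", sigma))  # False: a -> "a", b -> "bba" also works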

Cite as

Dominik D. Freydenberger, Hossein Nevisi, and Daniel Reidenbach. Weakly Unambiguous Morphisms. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 213-224, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{freydenberger_et_al:LIPIcs.STACS.2011.213,
  author =	{Freydenberger, Dominik D. and Nevisi, Hossein and Reidenbach, Daniel},
  title =	{{Weakly Unambiguous Morphisms}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{213--224},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.213},
  URN =		{urn:nbn:de:0030-drops-30123},
  doi =		{10.4230/LIPIcs.STACS.2011.213},
  annote =	{Keywords: Combinatorics on words, Morphisms, Ambiguity}
}
Document
On Minimal Sturmian Partial Words

Authors: Francine Blanchet-Sadri and John Lensmire


Abstract
Partial words, which are sequences that may have some undefined positions called holes, can be viewed as sequences over an extended alphabet A_diamond = A cup {diamond}, where diamond stands for a hole and matches (or is compatible with) every letter in A. The subword complexity of a partial word w, denoted by p_w(n), is the number of distinct full words (those without holes) over the alphabet that are compatible with factors of length n of w. A function f: N -> N is (k,h)-feasible if for each integer N >= 1, there exists a k-ary partial word w with h holes such that p_w(n) = f(n) for all n, 1 <= n <= N. We show that when dealing with feasibility in the context of finite binary partial words, the only linear functions that need investigation are f(n) = n+1 and f(n) = 2n. It turns out that both are (2,h)-feasible for all non-negative integers h. We classify all minimal partial words with h holes of order N with respect to f(n) = n+1, called Sturmian, computing their lengths as well as their numbers, except when h = 0, in which case we describe an algorithm that generates all minimal Sturmian full words. We show that, up to reversal and complement, any minimal Sturmian partial word with one hole is of the form a^i diamond a^j b a^l, where i, j, l are integers satisfying some restrictions, that all minimal Sturmian partial words with two holes are one-periodic, and that, up to complement, diamond (a^{N-1} diamond)^{h-1} is the only minimal Sturmian partial word with h >= 3 holes. Finally, we give upper bounds on the lengths of minimal partial words with respect to f(n) = 2n, which are tight for h = 0, 1, or 2.
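
The central quantity p_w(n) is straightforward to compute for small instances; a minimal sketch, using '?' as our stand-in for the hole symbol diamond:

from itertools import product

HOLE = "?"   # stands in for the diamond (hole) symbol

def subword_complexity(w, n, alphabet):
    """p_w(n): number of distinct full words of length n over `alphabet`
    compatible with some length-n factor of the partial word w."""
    full_words = set()
    for i in range(len(w) - n + 1):
        factor = w[i:i + n]
        holes = [j for j, c in enumerate(factor) if c == HOLE]
        for fill in product(alphabet, repeat=len(holes)):
            word = list(factor)
            for j, c in zip(holes, fill):
                word[j] = c               # every filling is compatible with a hole
            full_words.add("".join(word))
    return len(full_words)

# The abstract's f(n) = n + 1 (Sturmian) case, checked on a small example:
w = "a?aba"
print([subword_complexity(w, n, "ab") for n in range(1, 4)])  # [2, 3, 4]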

Cite as

Francine Blanchet-Sadri and John Lensmire. On Minimal Sturmian Partial Words. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 225-236, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{blanchetsadri_et_al:LIPIcs.STACS.2011.225,
  author =	{Blanchet-Sadri, Francine and Lensmire, John},
  title =	{{On Minimal Sturmian Partial Words}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{225--236},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.225},
  URN =		{urn:nbn:de:0030-drops-30131},
  doi =		{10.4230/LIPIcs.STACS.2011.225},
  annote =	{Keywords: Automata and formal languages; Combinatorics on words; Graph theory; Subword complexity; Feasible functions; Partial words; Sturmian words.}
}
Document
Improving PPSZ for 3-SAT using Critical Variables

Authors: Timon Hertli, Robin A. Moser, and Dominik Scheder


Abstract
A critical variable of a satisfiable CNF formula is a variable that has the same value in all satisfying assignments. Using a simple case distinction on the fraction of critical variables of a CNF formula, we improve the running time for 3-SAT from O(1.32216^n) [Rolf 2006] to O(1.32153^n). Using a different approach, Iwama et al. very recently achieved a running time of O(1.32113^n). Our method nicely combines with theirs, yielding the currently fastest known algorithm with running time O(1.32065^n). We also improve the bound for 4-SAT from O(1.47390^n) [Hofmeister et al. 2002] to O(1.46928^n), where O(1.46981^n) can be obtained using the methods of Hofmeister and Rolf.
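
The notion of a critical variable has a direct brute-force reading that is handy for small experiments (exponential time; illustrative only, not the paper's algorithm):

from itertools import product

def critical_variables(n, clauses):
    """Variables taking the same value in every satisfying assignment of a
    CNF formula. Variables are 1..n; a clause is a tuple of nonzero ints,
    negative for negated literals (DIMACS style). Brute force, 2^n time."""
    def satisfies(assign, clause):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
    models = [a for a in product([False, True], repeat=n)
              if all(satisfies(a, c) for c in clauses)]
    if not models:
        return None                       # unsatisfiable: no critical variables
    return [v + 1 for v in range(n)
            if len({m[v] for m in models}) == 1]

# (x1 or x2) and (not x1 or x2): x2 is critical (always True), x1 is free.
print(critical_variables(2, [(1, 2), (-1, 2)]))   # [2]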

Cite as

Timon Hertli, Robin A. Moser, and Dominik Scheder. Improving PPSZ for 3-SAT using Critical Variables. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 237-248, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{hertli_et_al:LIPIcs.STACS.2011.237,
  author =	{Hertli, Timon and Moser, Robin A. and Scheder, Dominik},
  title =	{{Improving PPSZ for 3-SAT using Critical Variables}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{237--248},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.237},
  URN =		{urn:nbn:de:0030-drops-30147},
  doi =		{10.4230/LIPIcs.STACS.2011.237},
  annote =	{Keywords: SAT, satisfiability, randomized, exponential time, algorithm, 3-SAT, 4-SAT}
}
Document
The Complexity of Weighted Boolean #CSP Modulo k

Authors: Heng Guo, Sangxia Huang, Pinyan Lu, and Mingji Xia


Abstract
We prove a complexity dichotomy theorem for counting weighted Boolean CSP modulo k for any positive integer k > 1. This generalizes a theorem by Faben for the unweighted setting. In the weighted setting, there are new interesting tractable problems. We first prove a dichotomy theorem for the finite field case where k is a prime. It turns out that the dichotomy theorem for the finite field is very similar to the one for the complex weighted Boolean #CSP, found by [Cai, Lu and Xia, STOC 2009]. Then we further extend the result to an arbitrary integer k.
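
Concretely, the quantity being classified is the partition function of a weighted Boolean CSP, reduced mod k; a brute-force sketch for small instances (names and the example constraint are ours):

from itertools import product

def weighted_csp_mod_k(n, constraints, k):
    """Partition function of a weighted Boolean CSP, modulo k.
    constraints: list of (scope, table), where scope is a tuple of variable
    indices and table maps each Boolean tuple on the scope to an integer
    weight. The value is the sum over assignments of the product of
    constraint weights, reduced mod k. Brute force, 2^n time."""
    total = 0
    for assign in product((0, 1), repeat=n):
        weight = 1
        for scope, table in constraints:
            weight = (weight * table[tuple(assign[v] for v in scope)]) % k
        total = (total + weight) % k
    return total

# Two variables constrained by an equality function with weight 2 on agreement:
EQ2 = {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 2}
print(weighted_csp_mod_k(2, [((0, 1), EQ2)], 3))   # (2 + 2) mod 3 = 1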

Cite as

Heng Guo, Sangxia Huang, Pinyan Lu, and Mingji Xia. The Complexity of Weighted Boolean #CSP Modulo k. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 249-260, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{guo_et_al:LIPIcs.STACS.2011.249,
  author =	{Guo, Heng and Huang, Sangxia and Lu, Pinyan and Xia, Mingji},
  title =	{{The Complexity of Weighted Boolean #CSP Modulo k}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{249--260},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.249},
  URN =		{urn:nbn:de:0030-drops-30158},
  doi =		{10.4230/LIPIcs.STACS.2011.249},
  annote =	{Keywords: #CSP, dichotomy theorem, counting problems, computational complexity}
}
Document
The #CSP Dichotomy is Decidable

Authors: Martin Dyer and David Richerby


Abstract
Bulatov (2008) and Dyer and Richerby (2010) have established the following dichotomy for the counting constraint satisfaction problem (#CSP): for any constraint language Gamma, the problem of computing the number of satisfying assignments to constraints drawn from Gamma is either in FP or is #P-complete, depending on the structure of Gamma. The principal question left open by this research was whether the criterion of the dichotomy is decidable. We show that it is; in fact, it is in NP.

Cite as

Martin Dyer and David Richerby. The #CSP Dichotomy is Decidable. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 261-272, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{dyer_et_al:LIPIcs.STACS.2011.261,
  author =	{Dyer, Martin and Richerby, David},
  title =	{{The #CSP Dichotomy is Decidable}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{261--272},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.261},
  URN =		{urn:nbn:de:0030-drops-30161},
  doi =		{10.4230/LIPIcs.STACS.2011.261},
  annote =	{Keywords: constraint satisfaction problem, counting problems, complexity dichotomy, decidability}
}
Document
A speed-up of oblivious multi-head finite automata by cellular automata

Authors: Alex Borelllo, Gaetan Richard, and Veronique Terrier


Abstract
In this paper, we present a parallel speed-up of a simple, yet significantly powerful, sequential model by cellular automata. The simulated model is called oblivious multi-head finite automata and is characterized by the fact that the trajectory of the heads only depends on the length of the input word. While the original k-head finite automaton works in time O(n^k), its corresponding cellular automaton performs the same task in time O(n^(k-1) log(n)) and space O(n^(k-1)).
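
As a toy illustration of obliviousness (our example, not from the paper): a two-head automaton recognizing palindromes, where one head scans left to right and the other right to left, so the head trajectories depend only on the input length.

def is_palindrome_two_heads(word):
    """A two-head finite automaton for palindromes: head i scans left to
    right, head j right to left; the moves depend only on the input
    length, which is what makes the automaton oblivious. (A strictly
    oblivious machine would always finish the sweep; we stop early
    for brevity.)"""
    i, j = 0, len(word) - 1
    while i < j:
        if word[i] != word[j]:   # the finite control compares the two scanned symbols
            return False
        i, j = i + 1, j - 1
    return True

print(is_palindrome_two_heads("abcba"), is_palindrome_two_heads("abca"))  # True False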

Cite as

Alex Borelllo, Gaetan Richard, and Veronique Terrier. A speed-up of oblivious multi-head finite automata by cellular automata. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 273-283, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{borelllo_et_al:LIPIcs.STACS.2011.273,
  author =	{Borelllo, Alex and Richard, Gaetan and Terrier, Veronique},
  title =	{{A speed-up of oblivious multi-head finite automata by cellular automata}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{273--283},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.273},
  URN =		{urn:nbn:de:0030-drops-30172},
  doi =		{10.4230/LIPIcs.STACS.2011.273},
  annote =	{Keywords: oblivious multi-head finite automata, cellular automata, parallel speed-up, simulation}
}
Document
Stochastic Cellular Automata Solve the Density Classification Problem with an Arbitrary Precision

Authors: Nazim Fates


Abstract
The density classification problem consists in using a binary cellular automaton (CA) to decide whether an initial configuration contains more 0s or 1s. This problem is known for having no exact solution in the case of binary, deterministic, one-dimensional CA. Stochastic cellular automata have been studied as an alternative for solving the problem. This paper is aimed at presenting techniques to analyse the behaviour of stochastic CA rules, seen as a ``blend'' of deterministic CA rules. Using analytical calculations and numerical simulations, we analyse two previously studied rules and present a new rule. We estimate their quality of classification and their average time of classification. We show that the new rule solves the problem with an arbitrary precision. From a practical point of view, this rule is effective and exhibits a high quality of classification, even when the simulation time is kept small.
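
As an illustrative sketch (our toy mixture; see the paper for the rules it actually analyses and the precise guarantees), a stochastic rule obtained by blending two deterministic elementary CA rules, the traffic rule 184 and the majority rule 232, can be simulated directly:

# Stochastic CA as a ``blend'' of deterministic rules: each cell
# independently applies rule 184 with probability eta, rule 232 otherwise.
# The particular mixture is an illustrative choice, not the paper's rule.
import random

def eca_bit(rule, l, c, r):
    # Standard Wolfram encoding: the new state is bit (4l + 2c + r) of rule.
    return (rule >> (4 * l + 2 * c + r)) & 1

def step(config, eta, rng):
    n = len(config)
    return [eca_bit(184 if rng.random() < eta else 232,
                    config[i - 1], config[i], config[(i + 1) % n])
            for i in range(n)]

def classify(config, eta=0.5, max_steps=2000, rng=None):
    """Run until the configuration is uniform; return the surviving state."""
    rng = rng or random.Random(0)
    for _ in range(max_steps):
        if all(b == config[0] for b in config):
            return config[0]
        config = step(config, eta, rng)
    return None  # undecided within the time budget

cfg = [1] * 30 + [0] * 20
random.Random(1).shuffle(cfg)
print(classify(cfg))  # 1 is hoped for, since 1s are in the majority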

Cite as

Nazim Fates. Stochastic Cellular Automata Solve the Density Classification Problem with an Arbitrary Precision. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 284-295, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{fates:LIPIcs.STACS.2011.284,
  author =	{Fates, Nazim},
  title =	{{Stochastic Cellular Automata Solve the Density Classification Problem with an Arbitrary Precision}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{284--295},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.284},
  URN =		{urn:nbn:de:0030-drops-30187},
  doi =		{10.4230/LIPIcs.STACS.2011.284},
  annote =	{Keywords: stochastic and probabilistic cellular automata, density classification problem, models of spatially distributed computing, stochastic process}
}
Document
Probabilistic cellular automata, invariant measures, and perfect sampling

Authors: Ana Busic, Jean Mairesse, and Irene Marcovici


Abstract
In a probabilistic cellular automaton (PCA), the cells are updated synchronously and independently, according to a distribution depending on a finite neighborhood. A PCA can be viewed as a Markov chain whose ergodicity is investigated. A classical cellular automaton (CA) is a particular case of PCA. For a 1-dimensional CA, we prove that ergodicity is equivalent to nilpotency, and is therefore undecidable. We then propose an efficient perfect sampling algorithm for the invariant measure of an ergodic PCA. Our algorithm does not assume any monotonicity property of the local rule. It is based on a bounding process which is shown to be also a PCA.
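
For intuition, the following is a classic Propp-Wilson coupling-from-the-past sketch for a toy PCA with 4 binary cells (the local rule is our own example). It tracks all 16 start configurations explicitly, exactly the exhaustive bookkeeping that the paper's bounding-process construction is designed to avoid; the example rule also happens to be monotone, which the paper notably does not require.

# Grand-coupling CFTP for a tiny PCA: feasible only for toy sizes.
import random
from itertools import product

N = 4  # number of cells
P = (0.1, 0.3, 0.7, 0.95)  # P[s]: prob. of becoming 1 given s ones nearby

def local_rule(left, self_, right, u):
    # One shared uniform u per cell drives every start state (the coupling).
    return 1 if u < P[left + self_ + right] else 0

def pca_step(config, randomness):
    return tuple(local_rule(config[i - 1], config[i], config[(i + 1) % N],
                            randomness[i]) for i in range(N))

def cftp(seed=0):
    rng = random.Random(seed)
    steps = []  # randomness for times -1, -2, ... (most recent first)
    while True:
        steps.append([rng.random() for _ in range(N)])
        ends = set()
        for start in product((0, 1), repeat=N):
            cfg = start
            for t in range(len(steps) - 1, -1, -1):  # from the past to time 0
                cfg = pca_step(cfg, steps[t])
            ends.add(cfg)
        if len(ends) == 1:      # coalescence: an exact sample from the
            return ends.pop()    # invariant measure of this ergodic PCA

print(cftp())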

Cite as

Ana Busic, Jean Mairesse, and Irene Marcovici. Probabilistic cellular automata, invariant measures, and perfect sampling. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 296-307, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{busic_et_al:LIPIcs.STACS.2011.296,
  author =	{Busic, Ana and Mairesse, Jean and Marcovici, Irene},
  title =	{{Probabilistic cellular automata, invariant measures, and perfect sampling}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{296--307},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.296},
  URN =		{urn:nbn:de:0030-drops-30190},
  doi =		{10.4230/LIPIcs.STACS.2011.296},
  annote =	{Keywords: probabilistic cellular automata, perfect sampling, ergodicity}
}
Document
Analysis of Agglomerative Clustering

Authors: Marcel R. Ackermann, Johannes Bloemer, Daniel Kuntze, and Christian Sohler


Abstract
The diameter k-clustering problem is the problem of partitioning a finite subset of R^d into k subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem for all values of k is the agglomerative clustering algorithm with the complete linkage strategy. For decades this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension d is a constant, we show that for any k the solution computed by this algorithm is an O(log k)-approximation to the diameter k-clustering problem. Moreover, our analysis holds not only for the Euclidean distance but for any metric based on a norm.
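
The analysed algorithm itself fits in a few lines; a minimal, unoptimized sketch:

# Agglomerative clustering with complete linkage: start from singletons
# and repeatedly merge the two clusters at minimum complete-linkage
# distance, i.e. minimum maximum pairwise distance between them.
from math import dist  # Euclidean distance (Python >= 3.8)

def complete_linkage(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # j > i, so index i stays valid
    return clusters

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (5, 5)]
print(complete_linkage(pts, 2))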

Cite as

Marcel R. Ackermann, Johannes Bloemer, Daniel Kuntze, and Christian Sohler. Analysis of Agglomerative Clustering. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 308-319, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{ackermann_et_al:LIPIcs.STACS.2011.308,
  author =	{Ackermann, Marcel R. and Bloemer, Johannes and Kuntze, Daniel and Sohler, Christian},
  title =	{{Analysis of Agglomerative Clustering}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{308--319},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.308},
  URN =		{urn:nbn:de:0030-drops-29942},
  doi =		{10.4230/LIPIcs.STACS.2011.308},
  annote =	{Keywords: agglomerative clustering, hierarchical clustering, complete linkage, approximation guarantees}
}
Document
Measuring Learning Complexity with Criteria Epitomizers

Authors: John Case and Timo Koetzing


Abstract
In prior papers, beginning with the seminal work by Freivalds et al. 1995, the notion of intrinsic complexity is used to analyze the learning complexity of sets of functions in a Gold-style learning setting. Herein are pointed out some weaknesses of this notion. Offered is an alternative based on epitomizing sets of functions -- sets which are learnable under a given learning criterion, but not under other criteria which are not at least as powerful. To capture the idea of epitomizing sets, new reducibility notions are given based on robust learning (closure of learning under certain classes of operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to corresponding reducibility notions! These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing. Furthermore, a scheme is provided to generate very strong epitomizers easily for a multitude of learning criteria. These strong epitomizers are the so-called self-learning sets, previously applied by Case & Koetzing, 2010. They can be generated and employed in a myriad of settings to witness the strict separation in learning power between the criteria so epitomized and other, less powerful criteria!

Cite as

John Case and Timo Koetzing. Measuring Learning Complexity with Criteria Epitomizers. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 320-331, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{case_et_al:LIPIcs.STACS.2011.320,
  author =	{Case, John and Koetzing, Timo},
  title =	{{Measuring Learning Complexity with Criteria Epitomizers}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{320--331},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.320},
  URN =		{urn:nbn:de:0030-drops-30232},
  doi =		{10.4230/LIPIcs.STACS.2011.320},
  annote =	{Keywords: Algorithmic Learning Theory, Learning Complexity, Robustness in Learning}
}
Document
On Parsimonious Explanations For 2-D Tree- and Linearly-Ordered Data

Authors: Howard Karloff, Flip Korn, Konstantin Makarychev, and Yuval Rabani


Abstract
This paper studies the ``explanation problem'' for tree- and linearly-ordered array data, a problem motivated by database applications and recently solved for the one-dimensional tree-ordered case. In this paper, one is given a matrix A=(a_{ij}) whose rows and columns have semantics: special subsets of the rows and special subsets of the columns are meaningful, others are not. A submatrix in A is said to be meaningful if and only if it is the cross product of a meaningful row subset and a meaningful column subset, in which case we call it an ``allowed rectangle.'' The goal is to ``explain'' A as a sparse sum of weighted allowed rectangles. Specifically, we wish to find as few weighted allowed rectangles as possible such that, for all i,j, a_ij equals the sum of the weights of all rectangles which include cell (i,j). In this paper we consider the natural cases in which the matrix dimensions are tree-ordered or linearly-ordered. In the tree-ordered case, we are given a rooted tree $T_1$ whose leaves are the rows of $A$ and another, $T_2$, whose leaves are the columns. Nodes of the trees correspond in an obvious way to the sets of their leaf descendants. In the linearly-ordered case, a set of rows or columns is meaningful if and only if it is contiguous. For tree-ordered data, we prove the explanation problem NP-hard and give a randomized $2$-approximation algorithm for it. For linearly-ordered data, we prove the explanation problem NP-hard and give a $2.56$-approximation algorithm. To our knowledge, these are the first results for the problem of sparsely and exactly representing matrices by weighted rectangles.
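
As a toy companion (ours), the one-dimensional linearly-ordered case can be solved exactly on tiny inputs by exhaustive search: pick candidate interval sets of increasing size and solve for the weights.

# Brute-force minimal interval explanation of a vector; exponential,
# only sensible for tiny n, in contrast to the paper's algorithms.
from itertools import combinations
import numpy as np

def explain(a, tol=1e-9):
    """Express a exactly as a weighted sum of as few contiguous intervals
    as possible. The n singletons always fit, so the search terminates."""
    if np.allclose(a, 0, atol=tol):
        return []
    n = len(a)
    intervals = [(i, j) for i in range(n) for j in range(i, n)]
    for k in range(1, len(intervals) + 1):
        for chosen in combinations(intervals, k):
            # Column (i, j) is the 0/1 indicator vector of interval [i, j].
            M = np.zeros((n, k))
            for col, (i, j) in enumerate(chosen):
                M[i:j + 1, col] = 1.0
            w, *_ = np.linalg.lstsq(M, np.asarray(a, dtype=float), rcond=None)
            if np.allclose(M @ w, a, atol=tol):
                return list(zip(chosen, w))

print(explain([2.0, 5.0, 5.0, 2.0]))
# e.g. weight 2 on interval [0, 3] plus weight 3 on interval [1, 2]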

Cite as

Howard Karloff, Flip Korn, Konstantin Makarychev, and Yuval Rabani. On Parsimonious Explanations For 2-D Tree- and Linearly-Ordered Data. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 332-343, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{karloff_et_al:LIPIcs.STACS.2011.332,
  author =	{Karloff, Howard and Korn, Flip and Makarychev, Konstantin and Rabani, Yuval},
  title =	{{On Parsimonious Explanations For 2-D Tree- and Linearly-Ordered Data}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{332--343},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.332},
  URN =		{urn:nbn:de:0030-drops-30246},
  doi =		{10.4230/LIPIcs.STACS.2011.332},
  annote =	{Keywords: ordered data, explanation problem}
}
Document
Unary negation

Authors: Balder ten Cate and Luc Segoufin


Abstract
We study fragments of first-order logic and of least fixed point logic that allow only unary negation: negation of formulas with at most one free variable. These logics generalize many interesting known formalisms, including modal logic and the mu-calculus, as well as conjunctive queries and monadic Datalog. We show that satisfiability and finite satisfiability are decidable for both fragments, and we pinpoint the complexity of satisfiability, finite satisfiability, and model checking. We also show that the unary negation fragment of first-order logic is model-theoretically very well behaved. In particular, it enjoys Craig interpolation and the Beth property.

Cite as

Balder ten Cate and Luc Segoufin. Unary negation. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 344-355, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{tencate_et_al:LIPIcs.STACS.2011.344,
  author =	{ten Cate, Balder and Segoufin, Luc},
  title =	{{Unary negation}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{344--355},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.344},
  URN =		{urn:nbn:de:0030-drops-30256},
  doi =		{10.4230/LIPIcs.STACS.2011.344},
  annote =	{Keywords: Decidability, Logic, Unary negation}
}
Document
First-order Fragments with Successor over Infinite Words

Authors: Jakub Kallas, Manfred Kufleitner, and Alexander Lauser


Abstract
We consider fragments of first-order logic and as models we allow finite and infinite words simultaneously. The only binary relations apart from equality are order comparison < and the successor predicate +1. We give characterizations of the fragments Sigma_2 = Sigma_2[<,+1] and FO^2 = FO^2[<,+1] in terms of algebraic and topological properties. To this end we introduce the factor topology over infinite words. It turns out that a language $L$ is in FO^2 cap Sigma_2 if and only if $L$ is the interior of an FO^2 language. Symmetrically, a language is in FO^2 cap Pi_2 if and only if it is the topological closure of an FO^2 language. The fragment Delta_2 = Sigma_2 cap Pi_2 contains exactly the clopen languages in FO^2. In particular, over infinite words Delta_2 is a strict subclass of FO^2. Our characterizations yield decidability of the membership problem for all these fragments over finite and infinite words. Moreover, we give a new decidable algebraic characterization of dot-depth 3/2 over finite words. Decidability of dot-depth 3/2 over finite words was first shown by Glasser and Schmitz in STACS 2000, and decidability of the membership problem for FO^2 over infinite words was shown in 1998 by Wilke in his habilitation thesis, whereas decidability of Sigma_2 over infinite words is new.

Cite as

Jakub Kallas, Manfred Kufleitner, and Alexander Lauser. First-order Fragments with Successor over Infinite Words. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 356-367, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{kallas_et_al:LIPIcs.STACS.2011.356,
  author =	{Kallas, Jakub and Kufleitner, Manfred and Lauser, Alexander},
  title =	{{First-order Fragments with Successor over Infinite Words}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{356--367},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.356},
  URN =		{urn:nbn:de:0030-drops-30267},
  doi =		{10.4230/LIPIcs.STACS.2011.356},
  annote =	{Keywords: infinite words, regular languages, first-order logic, automata theory, semi-groups, topology}
}
Document
The model checking problem for propositional intuitionistic logic with one variable is AC^1-complete

Authors: Martin Mundhenk and Felix Weiß


Abstract
We investigate the complexity of the model checking problem for propositional intuitionistic logic. We show that the model checking problem for intuitionistic logic with one variable is complete for logspace-uniform AC^1, and for intuitionistic logic with two variables it is P-complete. For superintuitionistic logics with one variable, we obtain NC^1-completeness for the model checking problem and for the tautology problem.
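
For concreteness, the model checking problem in question is easy to state in code: a naive recursive checker (ours) that implements the Kripke semantics of intuitionistic logic directly, with no attempt at the AC^1 bound.

# Worlds are 0..n-1; up[w] lists all worlds >= w (including w itself);
# valuations must be upward closed (persistence).
def forces(w, phi, up, val):
    kind = phi[0]
    if kind == 'bot':
        return False
    if kind == 'atom':
        return w in val[phi[1]]
    if kind == 'and':
        return forces(w, phi[1], up, val) and forces(w, phi[2], up, val)
    if kind == 'or':
        return forces(w, phi[1], up, val) or forces(w, phi[2], up, val)
    if kind == 'imp':  # an implication must hold at every later world
        return all((not forces(v, phi[1], up, val)) or forces(v, phi[2], up, val)
                   for v in up[w])
    raise ValueError(kind)

# Two-world frame 0 <= 1 with p true only at world 1: the classical
# tautology p \/ ~p fails at world 0, a standard intuitionistic example.
up = {0: [0, 1], 1: [1]}
val = {'p': {1}}
neg_p = ('imp', ('atom', 'p'), ('bot',))
print(forces(0, ('or', ('atom', 'p'), neg_p)))  # False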

Cite as

Martin Mundhenk and Felix Weiß. The model checking problem for propositional intuitionistic logic with one variable is AC^1-complete. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 368-379, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{mundhenk_et_al:LIPIcs.STACS.2011.368,
  author =	{Mundhenk, Martin and Wei{\ss}, Felix},
  title =	{{The model checking problem for propositional intuitionistic logic with one variable is AC^1-complete}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{368--379},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.368},
  URN =		{urn:nbn:de:0030-drops-30278},
  doi =		{10.4230/LIPIcs.STACS.2011.368},
  annote =	{Keywords: complexity, intuitionistic logic, model checking, AC^i}
}
Document
A Fast Algorithm for Multi-Machine Scheduling Problems with Jobs of Equal Processing Times

Authors: Alejandro Lopez-Ortiz and Claude-Guy Quimper


Abstract
Consider the problem of scheduling a set of tasks of length p without preemption on $m$ identical machines with given release and deadline times. We present a new algorithm for computing the schedule with minimal completion times and makespan. The algorithm has time complexity O(min(1,p/m)n^2) which improves substantially over the best known algorithm with complexity O(mn^2).
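
As a hedged illustration of the setting (not the paper's algorithm), the classical unit-length special case p = 1 is solved by a greedy earliest-deadline-first rule:

# EDF for unit jobs with integer release times and deadlines on m
# identical machines; greedy is optimal for feasibility in this special
# case. The paper handles arbitrary equal lengths p and optimizes
# completion times.
import heapq

def edf_unit(jobs, m):
    """jobs: list of (release, deadline); returns {job index: start time}
    or None if some deadline is missed."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    schedule, ready, i, t = {}, [], 0, 0
    while i < len(order) or ready:
        if not ready and jobs[order[i]][0] > t:
            t = jobs[order[i]][0]            # idle until the next release
        while i < len(order) and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(ready, (jobs[j][1], j))
            i += 1
        for _ in range(min(m, len(ready))):  # run m earliest deadlines
            deadline, j = heapq.heappop(ready)
            if t + 1 > deadline:
                return None                  # infeasible
            schedule[j] = t
        t += 1
    return schedule

print(edf_unit([(0, 2), (0, 2), (0, 1), (1, 3)], m=2))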

Cite as

Alejandro Lopez-Ortiz and Claude-Guy Quimper. A Fast Algorithm for Multi-Machine Scheduling Problems with Jobs of Equal Processing Times. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 380-391, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{lopezortiz_et_al:LIPIcs.STACS.2011.380,
  author =	{Lopez-Ortiz, Alejandro and Quimper, Claude-Guy},
  title =	{{A Fast Algorithm for Multi-Machine Scheduling Problems with Jobs of Equal Processing Times}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{380--391},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.380},
  URN =		{urn:nbn:de:0030-drops-30282},
  doi =		{10.4230/LIPIcs.STACS.2011.380},
  annote =	{Keywords: Scheduling}
}
Document
Scheduling for Weighted Flow Time and Energy with Rejection Penalty

Authors: Sze-Hang Chan, Tak-Wah Lam, and Lap-Kei Lee


Abstract
This paper revisits the online problem of flow-time scheduling on a single processor when jobs can be rejected at some penalty [Bansal et al. 2003]. The user cost of a job is defined as the weighted flow time of the job plus the penalty if it is rejected before completion. For jobs with arbitrary weights and arbitrary penalties, [Bansal et al. 2003] gave an online algorithm that is O((log W + log C)^2)-competitive for minimizing the total user cost when using a slightly faster processor, where W and C are the ratios of the largest to the smallest job weight and job penalty, respectively. In this paper we improve this result with a new algorithm that achieves a constant competitive ratio independent of W and C when using a slightly faster processor. Note that the above results assume a processor running at a fixed speed. This paper then extends the above study to the dynamic speed scaling model, where the processor can vary the speed dynamically and the rate of energy consumption is a cubic or any increasing function of speed. A scheduling algorithm has to control job admission and determine the order and speed of job execution. This paper studies the tradeoff between the above-mentioned user cost and energy, and it shows two O(1)-competitive algorithms and a lower bound result on minimizing the user cost plus energy. These algorithms can also be regarded as a generalization of the recent work on minimizing flow time plus energy when all jobs must be completed (see the survey paper [Albers 2010]).

Cite as

Sze-Hang Chan, Tak-Wah Lam, and Lap-Kei Lee. Scheduling for Weighted Flow Time and Energy with Rejection Penalty. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 392-403, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.STACS.2011.392,
  author =	{Chan, Sze-Hang and Lam, Tak-Wah and Lee, Lap-Kei},
  title =	{{Scheduling for Weighted Flow Time and Energy with Rejection Penalty}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{392--403},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.392},
  URN =		{urn:nbn:de:0030-drops-30293},
  doi =		{10.4230/LIPIcs.STACS.2011.392},
  annote =	{Keywords: online scheduling, weighted flow time, rejection penalty, speed scaling}
}
Document
Clique-width: When Hard Does Not Mean Impossible

Authors: Robert Ganian, Petr Hlineny, and Jan Obdrzalek


Abstract
In recent years, the parameterized complexity approach has led to the introduction of many new algorithms and frameworks on graphs and digraphs of bounded clique-width and, equivalently, rank-width. However, despite intensive work on the subject, there still exist well-established hard problems for which neither a parameterized algorithm nor a theoretical obstacle to its existence is known. Our article focuses mainly on the digraph case, targeting the well-known Minimum Leaf Out-Branching (cf. also Minimum Leaf Spanning Tree) and Edge Disjoint Paths problems on digraphs of bounded clique-width with non-standard new approaches. The first part of the article deals with the Minimum Leaf Out-Branching problem and introduces a novel XP-time algorithm wrt. clique-width. We remark that this problem is known to be W[2]-hard, and that our algorithm does not resemble any of the previously published attempts solving special cases of it such as the Hamiltonian Path. The second part then looks at the Edge Disjoint Paths problem (both on graphs and digraphs) from a different perspective -- rather surprisingly showing that this problem has a definition in the MSO_1 logic of graphs. The linear-time FPT algorithm wrt. clique-width then follows as a direct consequence.

Cite as

Robert Ganian, Petr Hlineny, and Jan Obdrzalek. Clique-width: When Hard Does Not Mean Impossible. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 404-415, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{ganian_et_al:LIPIcs.STACS.2011.404,
  author =	{Ganian, Robert and Hlineny, Petr and Obdrzalek, Jan},
  title =	{{Clique-width: When Hard Does Not Mean Impossible}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{404--415},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.404},
  URN =		{urn:nbn:de:0030-drops-30309},
  doi =		{10.4230/LIPIcs.STACS.2011.404},
  annote =	{Keywords: clique-width, bi-rank-width, minimum leaf out-branching, minimum leaf spanning tree, edge-disjoint paths}
}
Document
From Pathwidth to Connected Pathwidth

Authors: Dariusz Dereniowski


Abstract
It is proven that the connected pathwidth of any graph G is at most 2*pw(G)+1, where pw(G) is the pathwidth of G. The method is constructive, i.e. it yields an efficient algorithm that for a given path decomposition of width k computes a connected path decomposition of width at most 2k+1. The running time of the algorithm is O(dk^2), where d is the number of `bags' in the input path decomposition. The motivation for studying connected path decompositions comes from the connection between the pathwidth and some graph searching games. One of the advantages of the above bound for connected pathwidth is an inequality $csn(G) <= 2*sn(G)+3$, where $csn(G)$ is the connected search number of a graph $G$ and $sn(G)$ is its search number, which holds for any graph $G$. Moreover, the algorithm presented in this work can be used to efficiently convert a given search strategy using $k$ searchers into a connected one using $2k+3$ searchers and starting at an arbitrary homebase.
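
To fix the definitions, here is a small validity checker (ours) for path decompositions, using one standard reading of connectedness: every prefix union of bags induces a connected subgraph.

def is_path_decomposition(n, edges, bags):
    # every vertex must occur in a contiguous run of bags
    for v in range(n):
        hits = [i for i, b in enumerate(bags) if v in b]
        if not hits or hits != list(range(hits[0], hits[-1] + 1)):
            return False
    # every edge must be contained in some bag
    return all(any(u in b and v in b for b in bags) for u, v in edges)

def width(bags):
    return max(len(b) for b in bags) - 1

def is_connected_decomposition(n, edges, bags):
    for i in range(1, len(bags) + 1):
        seen = set.union(*bags[:i])
        comp, stack = {min(seen)}, [min(seen)]  # BFS inside the prefix union
        while stack:
            u = stack.pop()
            for a, b in edges:
                for x, y in ((a, b), (b, a)):
                    if x == u and y in seen and y not in comp:
                        comp.add(y)
                        stack.append(y)
        if comp != seen:
            return False
    return True

path = [(0, 1), (1, 2), (2, 3)]                  # the path P4
bags = [{0, 1}, {1, 2}, {2, 3}]
print(is_path_decomposition(4, path, bags),      # True
      width(bags),                               # 1
      is_connected_decomposition(4, path, bags)) # True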

Cite as

Dariusz Dereniowski. From Pathwidth to Connected Pathwidth. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 416-427, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{dereniowski:LIPIcs.STACS.2011.416,
  author =	{Dereniowski, Dariusz},
  title =	{{From Pathwidth to Connected Pathwidth}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{416--427},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.416},
  URN =		{urn:nbn:de:0030-drops-30311},
  doi =		{10.4230/LIPIcs.STACS.2011.416},
  annote =	{Keywords: connected pathwidth, connected searching, fugitive search games, graph searching, pathwidth}
}
Document
Polynomial Fitting of Data Streams with Applications to Codeword Testing

Authors: Andrew McGregor, Atri Rudra, and Steve Uurtamo


Abstract
Given a stream of $(x,y)$ points, we consider the problem of finding univariate polynomials that best fit the data. Over finite fields, this problem encompasses the well-studied problem of decoding Reed-Solomon codes while over the reals it corresponds to the well-studied polynomial regression problem. We present one-pass algorithms for two natural problems: i) find the polynomial of a given degree $k$ that minimizes the error and ii) find the polynomial of smallest degree that interpolates through the points with at most a given error bound. We consider a range of error models including the average error per point, the maximum error, and the number of points that are not fitted exactly. Many of our results apply to both the reals and finite fields. As a consequence we also solve an open question regarding the tolerant testing of codes in the data stream model.
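
A hedged one-pass sketch for the real-valued least-squares variant of problem (i): a degree-k fit only needs O(k) power sums, maintained below with standard moment bookkeeping (not the paper's algorithms, which also cover other error measures and the finite-field setting).

import numpy as np

class StreamingPolyFit:
    def __init__(self, k):
        self.k = k
        self.sx = np.zeros(2 * k + 1)  # sx[i]  = sum of x**i
        self.sxy = np.zeros(k + 1)     # sxy[i] = sum of y * x**i

    def update(self, x, y):            # O(k) work per stream item
        powers = x ** np.arange(2 * self.k + 1)
        self.sx += powers
        self.sxy += y * powers[:self.k + 1]

    def coefficients(self):            # solve the normal equations at the end
        k = self.k
        A = np.array([[self.sx[i + j] for j in range(k + 1)]
                      for i in range(k + 1)])
        return np.linalg.solve(A, self.sxy)  # c[0] + c[1] x + ... + c[k] x^k

fit = StreamingPolyFit(k=2)
for x in np.linspace(-1, 1, 50):
    fit.update(x, 3 * x ** 2 - x + 0.5)      # exact quadratic, no noise
print(np.round(fit.coefficients(), 6))       # ~ [0.5, -1.0, 3.0]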

Cite as

Andrew McGregor, Atri Rudra, and Steve Uurtamo. Polynomial Fitting of Data Streams with Applications to Codeword Testing. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 428-439, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{mcgregor_et_al:LIPIcs.STACS.2011.428,
  author =	{McGregor, Andrew and Rudra, Atri and Uurtamo, Steve},
  title =	{{Polynomial Fitting of Data Streams with Applications to Codeword Testing}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{428--439},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.428},
  URN =		{urn:nbn:de:0030-drops-30322},
  doi =		{10.4230/LIPIcs.STACS.2011.428},
  annote =	{Keywords: Streaming, Polynomial Interpolation, Polynomial Regression}
}
Document
Spectral Sparsification in the Semi-Streaming Setting

Authors: Jonathan A. Kelner and Alex Levin


Abstract
Let G be a graph with n vertices and m edges. A sparsifier of G is a sparse graph on the same vertex set approximating $G$ in some natural way. It allows us to say useful things about $G$ while considering far fewer than m edges. The strongest commonly-used notion of sparsification is spectral sparsification; H is a spectral sparsifier of G if the quadratic forms induced by the Laplacians of G and H approximate one another well. This notion is strictly stronger than the earlier concept of combinatorial sparsification. In this paper, we consider a semi-streaming setting, where we have only ~O(n) storage space, and we thus cannot keep all of G. In this case, maintaining a sparsifier instead gives us a useful approximation to G, allowing us to answer certain questions about the original graph without storing all of it. We introduce an algorithm for constructing a spectral sparsifier of G with O(n log n/epsilon^2) edges (where epsilon is a parameter measuring the quality of the sparsifier), taking ~O(m) time and requiring only one pass over G. In addition, our algorithm has the property that it maintains at all times a valid sparsifier for the subgraph of G that we have received. Our algorithm is natural and conceptually simple. As we read edges of $G$, we add them to the sparsifier $H$. Whenever $H$ gets too big, we resparsify it in ~O(n) time. Adding edges to a graph changes the structure of its sparsifier's restriction to the already existing edges. It would thus seem that the above procedure would cause errors to compound each time that we resparsify, and that we should need to either retain significantly more information or reexamine previously discarded edges in order to construct the new sparsifier. However, we show how to use the information contained in $H$ to perform this resparsification using only the edges retained by earlier steps in nearly linear time.
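
For intuition about the sampling primitive behind resparsification, here is an offline sketch in the style of Spielman and Srivastava: sample edges proportionally to weight times effective resistance and reweight to keep the Laplacian unbiased. Unlike the algorithm above, it sees the whole graph at once and uses a dense pseudoinverse, so it is illustrative only.

import numpy as np

def sparsify(n, edges, q, seed=0):
    """edges: list of (u, v, weight) of a connected graph; q: sample count."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w; L[u, v] -= w; L[v, u] -= w
    Linv = np.linalg.pinv(L)
    # effective resistance R(u,v) = (e_u - e_v)^T L^+ (e_u - e_v)
    R = [Linv[u, u] + Linv[v, v] - 2 * Linv[u, v] for u, v, _ in edges]
    p = np.array([w * r for (_, _, w), r in zip(edges, R)])
    p /= p.sum()
    rng = np.random.default_rng(seed)
    H = {}
    for idx in rng.choice(len(edges), size=q, p=p):
        u, v, w = edges[idx]
        H[(u, v)] = H.get((u, v), 0.0) + w / (q * p[idx])  # unbiased reweight
    return H  # sampled, reweighted edges of the sparsifier

ring = [(i, (i + 1) % 6, 1.0) for i in range(6)] + [(0, 3, 1.0)]
print(sparsify(6, ring, q=5))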

Cite as

Jonathan A. Kelner and Alex Levin. Spectral Sparsification in the Semi-Streaming Setting. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 440-451, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{kelner_et_al:LIPIcs.STACS.2011.440,
  author =	{Kelner, Jonathan A. and Levin, Alex},
  title =	{{Spectral Sparsification in the Semi-Streaming Setting}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{440--451},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.440},
  URN =		{urn:nbn:de:0030-drops-30332},
  doi =		{10.4230/LIPIcs.STACS.2011.440},
  annote =	{Keywords: algorithms and data structures, graph algorithms, spectral graph theory, sub-linear space algorithms, spectral sparsification}
}
Document
Solovay functions and K-triviality

Authors: Laurent Bienvenu, Wolfgang Merkle, and Andre Nies


Abstract
As part of his groundbreaking work on algorithmic randomness, Solovay demonstrated in the 1970s the remarkable fact that there are computable upper bounds of prefix-free Kolmogorov complexity $K$ that are tight on infinitely many values (up to an additive constant). Such computable upper bounds are called Solovay functions. Recent work of Bienvenu and Downey [STACS 2009, LIPIcs 3, pp 147-158] indicates that Solovay functions are deeply connected with central concepts of algorithmic randomness such as $Omega$ numbers, K-triviality, and Martin-Loef randomness. In what follows, among other results we answer two open problems posed by Bienvenu and Downey about the definition of $K$-triviality and about the Gacs-Miller-Yu characterization of Martin-Loef randomness. The former defines a sequence A to be K-trivial if K(A↾n) <=^+ K(n); the latter asserts that a sequence A is Martin-Loef random iff C(A↾n) >=^+ n-K(n). So both involve the noncomputable function K. As our main results we show that in both cases K(n) can be equivalently replaced by any Solovay function, and, what is more, that among all computable functions such a replacement is possible exactly for the Solovay functions. Moreover, similar statements hold for the larger class of all right-c.e. functions in place of the computable functions. These full characterizations, besides having significant theoretical interest on their own, will be useful as tools when working with K-trivial and Martin-Loef random sequences.

Cite as

Laurent Bienvenu, Wolfgang Merkle, and Andre Nies. Solovay functions and K-triviality. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 452-463, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{bienvenu_et_al:LIPIcs.STACS.2011.452,
  author =	{Bienvenu, Laurent and Merkle, Wolfgang and Nies, Andre},
  title =	{{Solovay functions and K-triviality}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{452--463},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.452},
  URN =		{urn:nbn:de:0030-drops-30345},
  doi =		{10.4230/LIPIcs.STACS.2011.452},
  annote =	{Keywords: Algorithmic randomness, Kolmogorov complexity, K-triviality}
}
Document
Everywhere complex sequences and the probabilistic method

Authors: Andrey Yu. Rumyantsev


Abstract
The main subject of the paper is everywhere complex sequences. An everywhere complex sequence is a sequence that does not contain substrings of Kolmogorov complexity less than alpha n-O(1) where n is the length of the substring and alpha is a constant between 0 and 1. First, we prove that no randomized algorithm can produce an everywhere complex sequence with positive probability. On the other hand, for weaker notions of everywhere complex sequences the situation is different. For example, there is a probabilistic algorithm that produces (with probability 1) sequences whose substrings of length $n$ have complexity sqrt(n) - O(1). Finally, one may replace the complexity of a substring (in the definition of everywhere complex sequences) by its conditional complexity when the position is given. This gives a stronger notion of everywhere complex sequence, and no randomized algorithm can produce (with positive probability) such a sequence even if alpha n is replaced by sqrt(n), log*(n) or any other monotone unbounded computable function.

Cite as

Andrey Yu. Rumyantsev. Everywhere complex sequences and the probabilistic method. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 464-471, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{rumyantsev:LIPIcs.STACS.2011.464,
  author =	{Rumyantsev, Andrey Yu.},
  title =	{{Everywhere complex sequences and the probabilistic method}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{464--471},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.464},
  URN =		{urn:nbn:de:0030-drops-30353},
  doi =		{10.4230/LIPIcs.STACS.2011.464},
  annote =	{Keywords: Kolmogorov complexity, everywhere complex sequences, randomized algorithms, Medvedev reducibility, Muchnik reducibility}
}
Document
Online Scheduling with Interval Conflicts

Authors: Magnus M. Halldorsson, Boaz Patt-Shamir, and Dror Rawitz


Abstract
In the problem of Scheduling with Interval Conflicts, there is a ground set of items indexed by integers, and the input is a collection of conflicts, each containing all the items whose index lies within some interval on the real line. Conflicts arrive in an online fashion. A scheduling algorithm must select, from each conflict, at most one survivor item, and the goal is to maximize the number (or weight) of items that survive all the conflicts they are involved in. We present a centralized deterministic online algorithm whose competitive ratio is O(log sigma), where sigma is the size of the largest conflict. For the distributed setting, we present another deterministic algorithm whose competitive ratio is 2 log sigma, in the special contiguous case, in which the item indices constitute a contiguous interval of integers. Our upper bounds are complemented by two lower bounds: one showing that even in the contiguous case, all deterministic algorithms (centralized or distributed) have competitive ratio Omega(log sigma), and another showing that in the non-contiguous case, no deterministic oblivious algorithm (i.e., a distributed algorithm that does not use communication) can have a bounded competitive ratio.

Cite as

Magnus M. Halldorsson, Boaz Patt-Shamir, and Dror Rawitz. Online Scheduling with Interval Conflicts. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 472-483, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{halldorsson_et_al:LIPIcs.STACS.2011.472,
  author =	{Halldorsson, Magnus M. and Patt-Shamir, Boaz and Rawitz, Dror},
  title =	{{Online Scheduling with Interval Conflicts}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{472--483},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.472},
  URN =		{urn:nbn:de:0030-drops-30363},
  doi =		{10.4230/LIPIcs.STACS.2011.472},
  annote =	{Keywords: online scheduling, online set packing, interval conflicts, competitive analysis, compound tasks, distributed algorithms}
}
Document
Analysis of multi-stage open shop processing systems

Authors: Christian E.J. Eggermont, Alexander Schrijver, and Gerhard J. Woeginger


Abstract
We study algorithmic problems in multi-stage open shop processing systems that are centered around reachability and deadlock detection questions. We characterize safe and unsafe system states. We show that it is easy to recognize system states that can be reached from the initial state (where the system is empty), but that in general it is hard to decide whether one given system state is reachable from another given system state. We show that the problem of identifying reachable deadlock states is hard in general open shop systems, but is easy in the special case where no job needs processing on more than two machines (by linear programming and matching theory), and in the special case where all machines have capacity one (by graph-theoretic arguments).

Cite as

Christian E.J. Eggermont, Alexander Schrijver, and Gerhard J. Woeginger. Analysis of multi-stage open shop processing systems. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 484-494, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{eggermont_et_al:LIPIcs.STACS.2011.484,
  author =	{Eggermont, Christian E.J. and Schrijver, Alexander and Woeginger, Gerhard J.},
  title =	{{Analysis of multi-stage open shop processing systems}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{484--494},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.484},
  URN =		{urn:nbn:de:0030-drops-30373},
  doi =		{10.4230/LIPIcs.STACS.2011.484},
  annote =	{Keywords: scheduling, resource allocation, deadlock, computational complexity}
}
Document
Graphs Encoded by Regular Expressions

Authors: Stefan Gulan


Abstract
In the conversion of finite automata to regular expressions, an exponential blowup in size can generally not be avoided. This is due to graph-structural properties of automata which cannot be directly encoded by regular expressions and cause the blowup combinatorially. In order to identify these structures, we generalize the class of arc-series-parallel digraphs to the acyclic case. The resulting digraphs are shown to be reversibly encoded by linear-sized regular expressions. We further derive a characterization of our new class by a finite set of forbidden minors and argue that these minors constitute the primitives causing the blowup in the conversion from automata to expressions.

Cite as

Stefan Gulan. Graphs Encoded by Regular Expressions. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 495-506, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{gulan:LIPIcs.STACS.2011.495,
  author =	{Gulan, Stefan},
  title =	{{Graphs Encoded by Regular Expressions}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{495--506},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.495},
  URN =		{urn:nbn:de:0030-drops-30386},
  doi =		{10.4230/LIPIcs.STACS.2011.495},
  annote =	{Keywords: Digraphs, Regular Expressions, Finite Automata, Forbidden Minors}
}
Document
Extended Regular Expressions: Succinctness and Decidability

Authors: Dominik D. Freydenberger


Abstract
Most modern implementations of regular expression engines allow the use of variables (also called back references). The resulting extended regular expressions (which, in the literature, are also called practical regular expressions, rewbr, or regex) are able to express non-regular languages. The present paper demonstrates that extended regular expressions cannot be minimized effectively (neither with respect to length, nor number of variables), and that the tradeoff in size between extended and ``classical'' regular expressions is not bounded by any recursive function. In addition to this, we prove the undecidability of several decision problems (universality, equivalence, inclusion, regularity, and cofiniteness) for extended regular expressions. Furthermore, we show that all these results hold even if the extended regular expressions contain only a single variable.
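
A two-line demonstration of the extra power of variables (our example, in Python's re syntax, where \1 must repeat whatever group 1 matched): the pattern below accepts exactly the non-regular language of squares ww.

import re

squares = re.compile(r'^(.+)\1$')
print(bool(squares.match('abcabc')))  # True:  w = 'abc'
print(bool(squares.match('abcab')))   # False: not of the form ww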

Cite as

Dominik D. Freydenberger. Extended Regular Expressions: Succinctness and Decidability. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 507-518, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{freydenberger:LIPIcs.STACS.2011.507,
  author =	{Freydenberger, Dominik D.},
  title =	{{Extended Regular Expressions: Succinctness and Decidability}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{507--518},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.507},
  URN =		{urn:nbn:de:0030-drops-30396},
  doi =		{10.4230/LIPIcs.STACS.2011.507},
  annote =	{Keywords: extended regular expressions, regex, decidability, non-recursive tradeoffs}
}
Document
New Exact and Approximation Algorithms for the Star Packing Problem in Undirected Graphs

Authors: Maxim Babenko and Alexey Gusakov


Abstract
By a T-star we mean a complete bipartite graph K_{1,t} for some t <= T. For an undirected graph G, a T-star packing is a collection of node-disjoint T-stars in G. For example, we get ordinary matchings for $T = 1$ and packings of paths of length 1 and 2 for $T = 2$. Hereinafter we assume that T >= 2. Hell and Kirkpatrick devised an ad-hoc augmenting algorithm that finds a T-star packing covering the maximum number of nodes. The latter algorithm also yields a min-max formula. We show that T-star packings are reducible to network flows, hence the above problem is solvable in $O(m sqrt(n))$ time (hereinafter n denotes the number of nodes in G and m the number of edges). For the edge-weighted case (in which weights may be assumed positive) finding a maximum $T$-star packing is NP-hard. A novel 9/4 * T/(T+1)-factor approximation algorithm is presented. For non-negative node weights the problem reduces to a special case of a max-cost flow. We develop a divide-and-conquer approach that solves it in O(m sqrt(n) log(n)) time. The node-weighted problem with arbitrary weights is more difficult. We prove that it is NP-hard for T >= 3 and is solvable in strongly-polynomial time for T = 2.
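
As a toy reference implementation (ours; exponential, only for checking tiny cases against the flow-based algorithms above):

# Enumerate all T-stars (a centre plus 1..T of its neighbours) and search
# for a node-disjoint collection covering the most nodes.
from itertools import combinations

def max_tstar_cover(n, edges, T):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    stars = [frozenset({c, *leaves})
             for c in range(n)
             for t in range(1, T + 1)
             for leaves in combinations(adj[c], t)]

    def best(avail, stars_left):
        if not stars_left:
            return 0
        s, rest = stars_left[0], stars_left[1:]
        skip = best(avail, rest)
        if s <= avail:  # star still node-disjoint from the ones taken
            return max(skip, len(s) + best(avail - s, rest))
        return skip

    return best(frozenset(range(n)), stars)

# Path on 5 nodes, T = 2: a 2-star centred at node 1 plus the edge {3, 4}
print(max_tstar_cover(5, [(0, 1), (1, 2), (2, 3), (3, 4)], T=2))  # -> 5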

Cite as

Maxim Babenko and Alexey Gusakov. New Exact and Approximation Algorithms for the Star Packing Problem in Undirected Graphs. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 519-530, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{babenko_et_al:LIPIcs.STACS.2011.519,
  author =	{Babenko, Maxim and Gusakov, Alexey},
  title =	{{New Exact and Approximation Algorithms for the Star Packing Problem in Undirected Graphs}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{519--530},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.519},
  URN =		{urn:nbn:de:0030-drops-30402},
  doi =		{10.4230/LIPIcs.STACS.2011.519},
  annote =	{Keywords: graph algorithms, approximation algorithms, generalized matchings, flows, weighted packings}
}
Document
Balanced Interval Coloring

Authors: Antonios Antoniadis, Falk Hueffner, Pascal Lenzner, Carsten Moldenhauer, and Alexander Souza


Abstract
We consider the discrepancy problem of coloring n intervals with k colors such that at each point on the line, the maximal difference between the number of intervals of any two colors is minimal. Somewhat surprisingly, a coloring with maximal difference at most one always exists. Furthermore, we give an algorithm with running time O(n log n + kn log k) for its construction. This is in particular interesting because many known results for discrepancy problems are non-constructive. This problem naturally models a load balancing scenario, where n tasks with given start and end times have to be distributed among k servers. Our results imply that this can be done in an ideally balanced way. When generalizing to $d$-dimensional boxes (instead of intervals), a solution with difference at most one is not always possible. We show that for any d >= 2 and any k >= 2 it is NP-complete to decide if such a solution exists, which implies also NP-hardness of the respective minimization problem. In an online scenario, where intervals arrive over time and the color has to be decided upon arrival, the maximal difference in the size of color classes can become arbitrarily high for any online algorithm.
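
A brute-force companion (ours): search all k-colorings of a small instance for one with pointwise imbalance at most one, using a discrepancy checker that probes interval endpoints and segment midpoints. The paper of course constructs such a coloring directly instead of searching.

from itertools import product

def max_imbalance(intervals, coloring, k):
    # Active sets only change at endpoints, so probing all endpoints and
    # the midpoints between consecutive ones covers every point.
    pts = sorted({p for iv in intervals for p in iv})
    probes = pts + [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    worst = 0
    for x in probes:
        counts = [0] * k
        for (lo, hi), c in zip(intervals, coloring):
            if lo <= x <= hi:
                counts[c] += 1
        worst = max(worst, max(counts) - min(counts))
    return worst

def balanced_coloring(intervals, k):
    for coloring in product(range(k), repeat=len(intervals)):
        if max_imbalance(intervals, coloring, k) <= 1:
            return coloring
    return None  # never reached for intervals, by the existence theorem

ivs = [(0, 4), (1, 3), (2, 6), (5, 8), (0, 8)]
print(balanced_coloring(ivs, k=2))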

Cite as

Antonios Antoniadis, Falk Hueffner, Pascal Lenzner, Carsten Moldenhauer, and Alexander Souza. Balanced Interval Coloring. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 531-542, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{antoniadis_et_al:LIPIcs.STACS.2011.531,
  author =	{Antoniadis, Antonios and Hueffner, Falk and Lenzner, Pascal and Moldenhauer, Carsten and Souza, Alexander},
  title =	{{Balanced Interval Coloring}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{531--542},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.531},
  URN =		{urn:nbn:de:0030-drops-30413},
  doi =		{10.4230/LIPIcs.STACS.2011.531},
  annote =	{Keywords: Load balancing, discrepancy theory, NP-hardness}
}
Document
Symmetric Determinantal Representation of Weakly-Skew Circuits

Authors: Bruno Grenet, Erich L. Kaltofen, Pascal Koiran, and Natacha Portier


Abstract
We deploy algebraic complexity theoretic techniques for constructing symmetric determinantal representations of weakly-skew circuits, which include formulas. Our representations produce matrices of much smaller dimensions than those given in the convex geometry literature when applied to polynomials having a concise representation (as a sum of monomials, or more generally as an arithmetic formula or a weakly-skew circuit). These representations are valid in any field of characteristic different from 2. In characteristic 2 we are led to an almost complete solution to a question of Buergisser on the VNP-completeness of the partial permanent. In particular, we show that the partial permanent cannot be VNP-complete in a finite field of characteristic 2 unless the polynomial hierarchy collapses.
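
As a toy check of the notion (our 2x2 example, not the paper's construction): a symmetric matrix with entries affine in the variables whose determinant equals a target polynomial.

from sympy import symbols, Matrix, expand

x, y = symbols('x y')
M = Matrix([[x, y],
            [y, x]])      # symmetric, entries affine in x and y
assert M.T == M
print(expand(M.det()))    # x**2 - y**2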

Cite as

Bruno Grenet, Erich L. Kaltofen, Pascal Koiran, and Natacha Portier. Symmetric Determinantal Representation of Weakly-Skew Circuits. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 543-554, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{grenet_et_al:LIPIcs.STACS.2011.543,
  author =	{Grenet, Bruno and Kaltofen, Erich L. and Koiran, Pascal and Portier, Natacha},
  title =	{{Symmetric Determinantal Representation of Weakly-Skew Circuits}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{543--554},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.543},
  URN =		{urn:nbn:de:0030-drops-30426},
  doi =		{10.4230/LIPIcs.STACS.2011.543},
  annote =	{Keywords: algebraic complexity, determinant and permanent of symmetric matrices, formulas, skew circuits, Valiant’s classes}
}
Document
Randomness Efficient Testing of Sparse Black Box Identities of Unbounded Degree over the Reals

Authors: Markus Blaeser and Christian Engels


Abstract
We construct a hitting set generator for sparse multivariate polynomials over the reals. The seed length of our generator is O(log^2 (mn/epsilon)), where m is the number of monomials, n is the number of variables, and 1 - epsilon is the hitting probability. The generator can be evaluated in time polynomial in log m, n, and log(1/epsilon). This is the first hitting set generator whose seed length is independent of the degree of the polynomial. The seed length of the previously best generator, by Klivans and Spielman [STOC 2001], depends logarithmically on the degree. From this, we get a randomized algorithm for testing sparse black box polynomial identities over the reals using O(log^2 (mn/epsilon)) random bits with running time polynomial in log m, n, and log(1/epsilon). We also design a deterministic test with running time ~O(m^3 n^3). Here, the ~O-notation suppresses polylogarithmic factors. The previously best deterministic test, by Lipton and Vishnoi [SODA 2003], has a running time that depends polynomially on log delta, where delta is the degree of the black box polynomial.
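For orientation, the baseline that the paper improves on is the classical Schwartz-Zippel randomized test: evaluate the black box at a random point and declare the polynomial zero if the value is zero. A minimal Python sketch follows (the names black_box and degree_bound are ours); note that, unlike the generator of the paper, its randomness requirement grows with the degree bound:

import random

def is_probably_zero(black_box, num_vars, degree_bound, trials=20):
    # Schwartz-Zippel: a nonzero polynomial of degree <= degree_bound vanishes
    # at a random point of S^num_vars with probability <= degree_bound / |S|.
    sample_size = 100 * max(degree_bound, 1)  # |S|; error <= 1/100 per trial
    for _ in range(trials):
        point = [random.randrange(sample_size) for _ in range(num_vars)]
        if black_box(point) != 0:
            return False  # a nonzero value certifies the polynomial is nonzero
    return True  # zero at all sampled points: probably identically zero

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero:
print(is_probably_zero(lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2), 2, 2))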

Cite as

Markus Blaeser and Christian Engels. Randomness Efficient Testing of Sparse Black Box Identities of Unbounded Degree over the Reals. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 555-566, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{blaeser_et_al:LIPIcs.STACS.2011.555,
  author =	{Blaeser, Markus and Engels, Christian},
  title =	{{Randomness Efficient Testing of Sparse Black Box Identities of Unbounded Degree over the Reals}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{555--566},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.555},
  URN =		{urn:nbn:de:0030-drops-30433},
  doi =		{10.4230/LIPIcs.STACS.2011.555},
  annote =	{Keywords: Descartes’ rule of signs, polynomial identity testing, sparse polynomials, black box testing}
}
Document
On Isomorphism Testing of Groups with Normal Hall Subgroups

Authors: Youming Qiao, Jayalal Sarma M.N., and Bangsheng Tang


Abstract
A normal Hall subgroup N of a group G is a normal subgroup whose order is coprime to its index. The Schur-Zassenhaus theorem states that every normal Hall subgroup has a complement, that is, a set of coset representatives H which also forms a subgroup of G. In this paper, we present a framework for testing isomorphism of groups with at least one normal Hall subgroup, when the groups are given as multiplication tables. To establish the framework, we first observe that a proof of the Schur-Zassenhaus theorem is constructive, and formulate a necessary and sufficient condition for testing isomorphism in terms of the associated actions of the semidirect products and isomorphisms of the normal and complement parts. We then focus on the case when the normal subgroup is abelian. Utilizing basic facts from the representation theory of finite groups and a technique by Le Gall [STACS 2009], we first obtain an efficient isomorphism testing algorithm when the complement has a bounded number of generators. For the case when the complement subgroup is elementary abelian, which does not necessarily have a bounded number of generators, we obtain a polynomial-time isomorphism testing algorithm by reducing to the generalized code isomorphism problem. A solution to the latter can be obtained by a mild extension of the singly exponential (in the number of coordinates) time algorithm for the code isomorphism problem developed recently by Babai. En route to obtaining the above reduction, we study the following computational problem in the representation theory of finite groups: given two representations rho and tau of a group H over Z_p^d, p a prime, determine whether there exists an automorphism phi: H -> H such that the induced representation rho_phi = rho o phi and tau are equivalent, in time poly(|H|, p^d).
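To make the central definition concrete: the normal Hall condition is directly checkable from a multiplication table. The sketch below is ours and only illustrates the definition, not the paper's algorithm; it assumes N is already known to be a subgroup and that index 0 is the identity:

from math import gcd

def is_normal_hall(table, N):
    # table[g][h] is the index of g*h; N is a set of element indices.
    n = len(table)
    if not N or n % len(N) != 0:
        return False
    inv = [row.index(0) for row in table]  # inverse of g: the h with g*h = identity
    # normality: g * x * g^{-1} stays in N for every g in G and x in N
    normal = all(table[table[g][x]][inv[g]] in N for g in range(n) for x in N)
    # Hall condition: |N| coprime to the index [G:N]
    return normal and gcd(len(N), n // len(N)) == 1

# Z_6 written additively: table[i][j] = (i + j) mod 6.
z6 = [[(i + j) % 6 for j in range(6)] for i in range(6)]
print(is_normal_hall(z6, {0, 2, 4}))  # order 3, index 2 -> True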

Cite as

Youming Qiao, Jayalal Sarma M.N., and Bangsheng Tang. On Isomorphism Testing of Groups with Normal Hall Subgroups. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 567-578, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{qiao_et_al:LIPIcs.STACS.2011.567,
  author =	{Qiao, Youming and Sarma M.N., Jayalal and Tang, Bangsheng},
  title =	{{On Isomorphism Testing of Groups with Normal Hall Subgroups}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{567--578},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.567},
  URN =		{urn:nbn:de:0030-drops-30443},
  doi =		{10.4230/LIPIcs.STACS.2011.567},
  annote =	{Keywords: Group Isomorphism Problem, Normal Hall Subgroups, Computational Complexity}
}
Document
Space Complexity of Perfect Matching in Bounded Genus Bipartite Graphs

Authors: Samir Datta, Raghav Kulkarni, Raghunath Tewari, and N. Variyam Vinodchandran


Abstract
We investigate the space complexity of certain perfect matching problems over bipartite graphs embedded on surfaces of constant genus (orientable or non-orientable). We show that the problems of deciding whether such graphs have (1) a perfect matching and (2) a unique perfect matching are in the logspace complexity class SPL. Since SPL is contained in the logspace counting classes oplus L (in fact in Mod_k L for all k >= 2), C=L, and PL, our upper bound places the above-mentioned matching problems in these counting classes as well. We also show that the search version -- computing a perfect matching -- for this class of graphs is in FL^SPL. Our results extend the upper bounds known earlier for these problems over bipartite planar graphs. As our main technical result, we design a logspace-computable and polynomially bounded weight function which isolates a minimum weight perfect matching in bipartite graphs embedded on surfaces of constant genus. We use results from algebraic topology to prove the correctness of the weight function.
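The standard randomized counterpart of such a weight function is the Mulmuley-Vazirani-Vazirani isolation lemma: random polynomially bounded edge weights isolate a unique minimum-weight perfect matching with probability at least 1/2. The paper's contribution is a deterministic, logspace-computable replacement for bounded-genus bipartite graphs; the randomized baseline is easy to demonstrate on a toy instance (all names below are ours, and the brute-force enumeration is only for illustration):

import random
from itertools import permutations

def min_weight_matchings(edges, n, weight):
    # All minimum-weight perfect matchings of a bipartite graph on {0..n-1} x {0..n-1}.
    best, argmin = None, []
    for perm in permutations(range(n)):  # brute force, fine for tiny n
        matching = [(i, perm[i]) for i in range(n)]
        if all(e in edges for e in matching):
            w = sum(weight[e] for e in matching)
            if best is None or w < best:
                best, argmin = w, [matching]
            elif w == best:
                argmin.append(matching)
    return argmin

edges = {(i, j) for i in range(3) for j in range(3)}            # K_{3,3}: 6 matchings
weight = {e: random.randint(1, 2 * len(edges)) for e in edges}  # random small weights
print(len(min_weight_matchings(edges, 3, weight)))              # usually 1: isolated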

Cite as

Samir Datta, Raghav Kulkarni, Raghunath Tewari, and N. Variyam Vinodchandran. Space Complexity of Perfect Matching in Bounded Genus Bipartite Graphs. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 579-590, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{datta_et_al:LIPIcs.STACS.2011.579,
  author =	{Datta, Samir and Kulkarni, Raghav and Tewari, Raghunath and Vinodchandran, N. Variyam},
  title =	{{Space Complexity of Perfect Matching in Bounded Genus Bipartite Graphs}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{579--590},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.579},
  URN =		{urn:nbn:de:0030-drops-30450},
  doi =		{10.4230/LIPIcs.STACS.2011.579},
  annote =	{Keywords: perfect matching, bounded genus graphs, isolation problem}
}
Document
The Recognition of Triangle Graphs

Authors: George B. Mertzios


Abstract
Trapezoid graphs are the intersection graphs of trapezoids, where every trapezoid has a pair of opposite sides lying on two parallel lines L_1 and L_2 of the plane. Strictly between permutation and trapezoid graphs lie the simple-triangle graphs -- also known as PI graphs (for Point-Interval) -- where the objects are triangles with one point of the triangle on L_1 and the other two points (i.e., an interval) on L_2, and the triangle graphs -- also known as PI^* graphs -- where again the objects are triangles, but now there is no restriction on which line contains one point of the triangle and which contains the other two. The complexity status of both the triangle and the simple-triangle recognition problems (namely, the problems of deciding whether a given graph is a triangle or a simple-triangle graph, respectively) has been among the most fundamental open problems on these classes of graphs since their introduction two decades ago. Moreover, since triangle and simple-triangle graphs lie naturally between permutation and trapezoid graphs, and since they share a very similar structure with them, it was expected that the recognition of triangle and simple-triangle graphs is polynomial, as is the case for permutation and trapezoid graphs. In this article we prove the surprising result that the recognition of triangle graphs is NP-complete, even when the input graph is known to be a trapezoid graph.

Cite as

George B. Mertzios. The Recognition of Triangle Graphs. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 591-602, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{mertzios:LIPIcs.STACS.2011.591,
  author =	{Mertzios, George B.},
  title =	{{The Recognition of Triangle Graphs}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{591--602},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.591},
  URN =		{urn:nbn:de:0030-drops-30469},
  doi =		{10.4230/LIPIcs.STACS.2011.591},
  annote =	{Keywords: Intersection graphs, trapezoid graphs, PI graphs, PI∗ graphs, recognition problem, NP-complete}
}
Document
Collapse Operation Increases Expressive Power of Deterministic Higher Order Pushdown Automata

Authors: Pawel Parys


Abstract
We show that collapsible deterministic second-level pushdown automata can recognize more languages than deterministic second-level pushdown automata without collapse. This implies that there exists a tree generated by a second-level recursion scheme which is not generated by any safe second-level recursion scheme.
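For readers unfamiliar with the model: the storage of a second-level pushdown automaton is a stack of ordinary stacks. A minimal Python sketch of the storage operations (our simplification; a collapsible automaton additionally attaches to each pushed symbol a link into the outer stack, and the collapse operation truncates the storage to that link):

class Level2Pushdown:
    def __init__(self):
        self.stacks = [[]]                   # a stack of level-1 stacks
    def push1(self, symbol):
        self.stacks[-1].append(symbol)       # push onto the topmost level-1 stack
    def pop1(self):
        return self.stacks[-1].pop()         # pop from the topmost level-1 stack
    def push2(self):
        self.stacks.append(list(self.stacks[-1]))  # duplicate the topmost stack
    def pop2(self):
        self.stacks.pop()                    # discard the topmost stack

pda = Level2Pushdown()
pda.push1('a'); pda.push2(); pda.push1('b')
print(pda.stacks)  # [['a'], ['a', 'b']]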

Cite as

Pawel Parys. Collapse Operation Increases Expressive Power of Deterministic Higher Order Pushdown Automata. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 603-614, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{parys:LIPIcs.STACS.2011.603,
  author =	{Parys, Pawel},
  title =	{{Collapse Operation Increases Expressive Power of Deterministic Higher Order Pushdown Automata}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{603--614},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.603},
  URN =		{urn:nbn:de:0030-drops-30478},
  doi =		{10.4230/LIPIcs.STACS.2011.603},
  annote =	{Keywords: pushdown automata}
}
Document
Temporal Synthesis for Bounded Systems and Environments

Authors: Orna Kupferman, Yoad Lustig, Moshe Y. Vardi, and Mihalis Yannakakis


Abstract
Temporal synthesis is the automated construction of a system from its temporal specification. It is by now realized that requiring the synthesized system to satisfy the specification against all possible environments may be too demanding, and, dually, allowing all systems may not be demanding enough. In this work we study bounded temporal synthesis, in which bounds on the sizes of the state spaces of the system and the environment are additional parameters of the synthesis problem. This study is motivated by the fact that such bounds may indeed change the answer to the synthesis problem, as well as its theoretical and computational aspects. In particular, a finer analysis of synthesis, which takes system and environment sizes into account, yields deeper insight into the quantificational structure of the synthesis problem and the relationship between strong synthesis (there exists a system such that for all environments the specification holds) and weak synthesis (for all environments there exists a system such that the specification holds). We first show that unlike the unbounded setting, where determinacy of regular games implies that strong and weak synthesis coincide, these notions do not coincide in the bounded setting. We then turn to study the complexity of deciding strong and weak synthesis. We show that bounding the size of the system, or of both the system and the environment, turns the synthesis problem into a search problem, and one cannot expect to do better than brute-force search. In particular, the synthesis problem for bounded systems and environments is Sigma^P_2-complete (in terms of the bounds, for a specification given by a deterministic automaton). We also show that while bounding the environment may lead to the synthesis of specifications that are otherwise unrealizable, such a relaxation of the problem comes at a high price from a complexity-theoretic point of view.
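The quantificational contrast described above can be stated in one line (our paraphrase of the abstract, with S ranging over systems and E over environments of the given bounded sizes):

  strong synthesis: \exists S . \forall E . (S \| E) \models \psi
  weak synthesis:   \forall E . \exists S . (S \| E) \models \psi

In the unbounded setting the two coincide by determinacy of regular games; the paper shows they separate once sizes are bounded.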

Cite as

Orna Kupferman, Yoad Lustig, Moshe Y. Vardi, and Mihalis Yannakakis. Temporal Synthesis for Bounded Systems and Environments. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 615-626, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{kupferman_et_al:LIPIcs.STACS.2011.615,
  author =	{Kupferman, Orna and Lustig, Yoad and Vardi, Moshe Y. and Yannakakis, Mihalis},
  title =	{{Temporal Synthesis for Bounded Systems and Environments}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{615--626},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.615},
  URN =		{urn:nbn:de:0030-drops-30481},
  doi =		{10.4230/LIPIcs.STACS.2011.615},
  annote =	{Keywords: temporal synthesis}
}
Document
Linear temporal logic for regular cost functions

Authors: Denis Kuperberg


Abstract
Regular cost functions have been introduced recently as an extension of regular languages with counting capabilities, which retains strong closure, equivalence, and decidability properties. The distinguishing feature of cost functions is that exact values are not considered, but only estimated. In this paper, we define an extension of Linear Temporal Logic (LTL) over finite words to describe cost functions. We give an explicit translation from this new logic to automata. We then algebraically characterize the expressive power of this logic, using a new syntactic congruence for cost functions introduced in this paper.

Cite as

Denis Kuperberg. Linear temporal logic for regular cost functions. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 627-636, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{kuperberg:LIPIcs.STACS.2011.627,
  author =	{Kuperberg, Denis},
  title =	{{Linear temporal logic for regular cost functions}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{627--636},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.627},
  URN =		{urn:nbn:de:0030-drops-30499},
  doi =		{10.4230/LIPIcs.STACS.2011.627},
  annote =	{Keywords: linear temporal logic, regular cost functions, regular languages}
}
Document
Bounds on the maximum multiplicity of some common geometric graphs

Authors: Adrian Dumitrescu, Andre Schulz, Adam Sheffer, and Csaba D. Toth


Abstract
We obtain new lower and upper bounds for the maximum multiplicity of certain weighted and, respectively, unweighted common geometric graphs drawn on n points in the plane in general position (with no three points collinear): perfect matchings, spanning trees, spanning cycles (tours), and triangulations. (i) We present a new lower bound construction for the maximum number of triangulations a set of n points in general position can have. In particular, we show that a generalized double chain formed by two almost convex chains admits Omega(8.65^n) different triangulations. This improves the bound Omega(8.48^n) achieved by the previous best construction, the double zig-zag chain studied by Aichholzer et al. (ii) We present a new lower bound of Omega(11.97^n) for the number of non-crossing spanning trees of the double chain composed of two convex chains. The previous bound, Omega(10.42^n), stood unchanged for more than 10 years. (iii) Using a recent upper bound of 30^n for the number of triangulations, due to Sharir and Sheffer, we show that n points in the plane in general position admit at most O(68.664^n) non-crossing spanning cycles. (iv) We derive exponential lower bounds for the number of maximum and minimum weighted geometric graphs (matchings, spanning trees, and tours). It was known that the number of longest non-crossing spanning trees of a point set can be exponentially large, and here we show that this can also be realized with points in convex position. For points in convex position we obtain tight bounds for the number of longest and shortest tours. We give a combinatorial characterization of the longest tours, which leads to an O(n log n) time algorithm for computing them.
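For very small point sets, the multiplicity of such structures can be counted directly; the sketch below (ours, purely illustrative, and exponential-time) counts non-crossing perfect matchings, one of the unweighted structures considered above:

def segments_cross(p, q, r, s):
    # Proper crossing test for segments pq and rs (points in general position).
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return orient(p, q, r) * orient(p, q, s) < 0 and orient(r, s, p) * orient(r, s, q) < 0

def count_noncrossing_matchings(points):
    # Counts perfect matchings whose segments pairwise do not cross (len(points) even).
    def rec(remaining, chosen):
        if not remaining:
            return 1
        first = min(remaining)
        total = 0
        for mate in remaining - {first}:
            seg = (points[first], points[mate])
            if all(not segments_cross(seg[0], seg[1], a, b) for (a, b) in chosen):
                total += rec(remaining - {first, mate}, chosen + [seg])
        return total
    return rec(frozenset(range(len(points))), [])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(count_noncrossing_matchings(square))  # 2: the two diagonals cross each other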

Cite as

Adrian Dumitrescu, Andre Schulz, Adam Sheffer, and Csaba D. Toth. Bounds on the maximum multiplicity of some common geometric graphs. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 637-648, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{dumitrescu_et_al:LIPIcs.STACS.2011.637,
  author =	{Dumitrescu, Adrian and Schulz, Andre and Sheffer, Adam and Toth, Csaba D.},
  title =	{{Bounds on the maximum multiplicity of some common geometric graphs}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{637--648},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.637},
  URN =		{urn:nbn:de:0030-drops-30505},
  doi =		{10.4230/LIPIcs.STACS.2011.637},
  annote =	{Keywords: combinatorial geometry, matching, triangulation, spanning tree, spanning cycle, weighted structure, non-crossing condition}
}
Document
On the computational complexity of Ham-Sandwich cuts, Helly sets, and related problems

Authors: Christian Knauer, Hans Raj Tiwary, and Daniel Werner


Abstract
We study several canonical decision problems arising from some well-known theorems from combinatorial geometry. Among others, we show that computing the minimum size of a Caratheodory set and of a Helly set, as well as certain decision versions of the ham-sandwich cut problem, are W[1]-hard (and NP-hard) if the dimension is part of the input. This is done by fpt-reductions (which are actually ptime-reductions) from the d-Sum problem. Our reductions also imply that the problems we consider cannot be solved in time n^{o(d)} (where n is the size of the input), unless the Exponential-Time Hypothesis (ETH) is false. The technique of embedding d-Sum into a geometric setting is conceptually much simpler than direct fpt-reductions from purely combinatorial W[1]-hard problems (like the clique problem) and has great potential to show (parameterized) hardness and (conditional) lower bounds for many other problems.
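The source problem of these reductions, d-Sum, asks whether some d of n given numbers sum to zero; parameterized by d it is W[1]-hard, and the trivial n^d algorithm is essentially what the ETH-based lower bound says cannot be beaten. A reference brute force (ours, just to pin down the problem):

from itertools import combinations

def d_sum(numbers, d, target=0):
    # Exhaustive search over all d-subsets: O(n^d) time.
    return any(sum(c) == target for c in combinations(numbers, d))

print(d_sum([7, -3, 1, 2, -5, 4], 3))  # True: -3 + 1 + 2 = 0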

Cite as

Christian Knauer, Hans Raj Tiwary, and Daniel Werner. On the computational complexity of Ham-Sandwich cuts, Helly sets, and related problems. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 649-660, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{knauer_et_al:LIPIcs.STACS.2011.649,
  author =	{Knauer, Christian and Tiwary, Hans Raj and Werner, Daniel},
  title =	{{On the computational complexity of Ham-Sandwich cuts, Helly sets, and related problems}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{649--660},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.649},
  URN =		{urn:nbn:de:0030-drops-30514},
  doi =		{10.4230/LIPIcs.STACS.2011.649},
  annote =	{Keywords: computational geometry, combinatorial geometry, ham-sandwich cuts, parameterized complexity}
}
Document
Quantum query complexity of minor-closed graph properties

Authors: Andrew M. Childs and Robin Kothari


Abstract
We study the quantum query complexity of minor-closed graph properties, which include such problems as determining whether an $n$-vertex graph is planar, is a forest, or does not contain a path of a given length. We show that most minor-closed properties -- those that cannot be characterized by a finite set of forbidden subgraphs -- have quantum query complexity Theta(n^(3/2)). To establish this, we prove an adversary lower bound using a detailed analysis of the structure of minor-closed properties with respect to forbidden topological minors and forbidden subgraphs. On the other hand, we show that minor-closed properties (and more generally, sparse graph properties) that can be characterized by finitely many forbidden subgraphs can be solved strictly faster, in o(n^(3/2)) queries. Our algorithms are a novel application of the quantum walk search framework and give improved upper bounds for several subgraph-finding problems.

Cite as

Andrew M. Childs and Robin Kothari. Quantum query complexity of minor-closed graph properties. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 661-672, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{childs_et_al:LIPIcs.STACS.2011.661,
  author =	{Childs, Andrew M. and Kothari, Robin},
  title =	{{Quantum query complexity of minor-closed graph properties}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{661--672},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.661},
  URN =		{urn:nbn:de:0030-drops-30521},
  doi =		{10.4230/LIPIcs.STACS.2011.661},
  annote =	{Keywords: quantum query complexity, quantum algorithms, lower bounds, graph minors, graph properties}
}
Document
Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length

Authors: Anna Gal and Andrew Mills


Abstract
Locally decodable codes are error correcting codes with the extra property that, in order to retrieve the correct value of just one position of the input with high probability, it is sufficient to read a small number of positions of the corresponding, possibly corrupted codeword. A breakthrough result by Yekhanin showed that 3-query linear locally decodable codes may have subexponential length. The construction of Yekhanin, and the three-query constructions that followed, achieve correctness only up to a certain limit, which is 1 - 3 delta for nonbinary codes, where an adversary is allowed to corrupt up to a delta fraction of the codeword. The largest correctness for a subexponential-length 3-query binary code is achieved in a construction by Woodruff, and it is below 1 - 3 delta. We show that achieving slightly larger correctness (as a function of delta) requires exponential codeword length for 3-query codes. Previously, no lower bounds larger than quadratic were known for locally decodable codes with more than 2 queries, even in the case of 3-query linear codes. Our results hold for linear codes over arbitrary finite fields and for binary nonlinear codes. Considering larger numbers of queries, we obtain lower bounds for q-query codes for q > 3, under certain assumptions on the decoding algorithm that have been commonly used in previous constructions. We also prove bounds on the largest correctness achievable by these decoding algorithms, regardless of the length of the code. Our results explain the limitations on correctness in previous constructions using such decoding algorithms. In addition, our results imply tradeoffs on the parameters of error correcting data structures.
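To make the definition concrete, here is the textbook 2-query locally decodable code, the Hadamard code. It has exponential length, which also illustrates why subexponential-length 3-query constructions such as Yekhanin's were a breakthrough; this sketch is standard material, not the paper's construction:

import random

def hadamard_encode(x_bits):
    # One codeword bit <x, a> mod 2 for every a in {0,1}^n, so length 2^n.
    n = len(x_bits)
    return [sum(x_bits[j] & ((a >> j) & 1) for j in range(n)) % 2
            for a in range(2 ** n)]

def decode_bit(codeword, n, i):
    # Two queries: C(r) XOR C(r XOR e_i) = <x, e_i> = x_i by linearity.
    # Each queried position is uniform, so a delta fraction of corruptions
    # is hit with probability at most 2*delta: correctness >= 1 - 2*delta.
    r = random.randrange(2 ** n)
    return codeword[r] ^ codeword[r ^ (1 << i)]

x = [1, 0, 1]
cw = hadamard_encode(x)
cw[5] ^= 1  # adversarially corrupt one position
print([decode_bit(cw, 3, i) for i in range(3)])  # usually [1, 0, 1]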

Cite as

Anna Gal and Andrew Mills. Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 673-684, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{gal_et_al:LIPIcs.STACS.2011.673,
  author =	{Gal, Anna and Mills, Andrew},
  title =	{{Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{673--684},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.673},
  URN =		{urn:nbn:de:0030-drops-30534},
  doi =		{10.4230/LIPIcs.STACS.2011.673},
  annote =	{Keywords: error correcting codes}
}
Document
Index of Authors

Authors: Thomas Schwentick and Christoph Dürr


Abstract
Author Index.

Cite as

Thomas Schwentick and Christoph Dürr. Index of Authors. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 685-686, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{schwentick_et_al:LIPIcs.STACS.2011.685,
  author =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  title =	{{Index of Authors}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{685--686},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.685},
  URN =		{urn:nbn:de:0030-drops-30542},
  doi =		{10.4230/LIPIcs.STACS.2011.685},
  annote =	{Keywords: Author Index}
}
