Volume

LIPIcs, Volume 5

27th International Symposium on Theoretical Aspects of Computer Science



Event

STACS 2010, March 4-6, 2010, Nancy, France

Editors

Jean-Yves Marion
Thomas Schwentick

Publication Details

  • published at: 2010-03-09
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-16-3
  • DBLP: db/conf/stacs/stacs2010

Documents
Document
Complete Volume
LIPIcs, Volume 5, STACS'10, Complete Volume

Authors: Jean-Yves Marion and Thomas Schwentick


Abstract
LIPIcs, Volume 5, STACS'10, Complete Volume

Cite as

27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Proceedings{marion_et_al:LIPIcs.STACS.2010,
  title =	{{LIPIcs, Volume 5, STACS'10, Complete Volume}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010},
  URN =		{urn:nbn:de:0030-drops-40996},
  doi =		{10.4230/LIPIcs.STACS.2010},
  annote =	{Keywords: LIPIcs, Volume 5, STACS'10, Complete Volume}
}
Document
Front Matter
Foreword -- 27th International Symposium on Theoretical Aspects of Computer Science

Authors: Jean-Yves Marion and Thomas Schwentick


Abstract
The STACS conference of March 4-6, 2010, held in Nancy, is the 27th in this series. The STACS 2010 call for papers led to over 238 submissions from 40 countries. Each paper was assigned to three program committee members. The committee selected 54 papers during a two-week electronic meeting held in November.

Cite as

27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. i-vi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{marion_et_al:LIPIcs.STACS.2010.2439,
  author =	{Marion, Jean-Yves and Schwentick, Thomas},
  title =	{{Foreword -- 27th International Symposium on Theoretical Aspects of Computer Science}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{i--vi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2439},
  URN =		{urn:nbn:de:0030-drops-24393},
  doi =		{10.4230/LIPIcs.STACS.2010.2439},
  annote =	{Keywords: Foreword}
}
Document
Table of Contents -- 27th International Symposium on Theoretical Aspects of Computer Science

Authors: Jean-Yves Marion and Thomas Schwentick


Abstract
Table of contents

Cite as

Jean-Yves Marion and Thomas Schwentick. Table of Contents -- 27th International Symposium on Theoretical Aspects of Computer Science. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 7-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{marion_et_al:LIPIcs.STACS.2010.2505,
  author =	{Marion, Jean-Yves and Schwentick, Thomas},
  title =	{{Table of Contents -- 27th International Symposium on Theoretical Aspects of Computer Science}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{7--10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2505},
  URN =		{urn:nbn:de:0030-drops-25059},
  doi =		{10.4230/LIPIcs.STACS.2010.2505},
  annote =	{Keywords: Table of contents}
}
Document
Beyond omega-Regular Languages

Authors: Mikolaj Bojanczyk


Abstract
The paper presents some automata and logics on $\omega$-words, which capture all $\omega$-regular languages, and yet still have good closure and decidability properties.

Cite as

Mikolaj Bojanczyk. Beyond omega-Regular Languages. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 11-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{bojanczyk:LIPIcs.STACS.2010.2440,
  author =	{Bojanczyk, Mikolaj},
  title =	{{Beyond omega-Regular Languages}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{11--16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2440},
  URN =		{urn:nbn:de:0030-drops-24409},
  doi =		{10.4230/LIPIcs.STACS.2010.2440},
  annote =	{Keywords: Automata, monadic second-order logic}
}
Document
Reflections on Multivariate Algorithmics and Problem Parameterization

Authors: Rolf Niedermeier


Abstract
Research on parameterized algorithmics for NP-hard problems has steadily grown over the last years. We survey and discuss how parameterized complexity analysis naturally develops into the field of multivariate algorithmics. Correspondingly, we describe how to perform a systematic investigation and exploitation of the ``parameter space'' of computationally hard problems.
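
As an elementary illustration of problem parameterization (not taken from the survey), the classic bounded search tree for Vertex Cover runs in time $O(2^k (n+m))$ when the parameter is the solution size $k$; the Python sketch below assumes an edge-list representation and is only meant to make the notion of a fixed-parameter algorithm concrete.

# Hypothetical sketch: Vertex Cover parameterized by solution size k,
# solved with a depth-k bounded search tree (a classic FPT technique).
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists."""
    uncovered = next(iter(edges), None)   # any edge not yet covered
    if uncovered is None:
        return set()                      # no edges left: empty cover suffices
    if k == 0:
        return None                       # an edge remains but no budget left
    u, v = uncovered
    for w in (u, v):                      # branch: u or v must be in the cover
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# Example: a 4-cycle has a vertex cover of size 2.
print(vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)], 2))   # e.g. {1, 3}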

Cite as

Rolf Niedermeier. Reflections on Multivariate Algorithmics and Problem Parameterization. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 17-32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{niedermeier:LIPIcs.STACS.2010.2495,
  author =	{Niedermeier, Rolf},
  title =	{{Reflections on Multivariate Algorithmics and Problem Parameterization}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{17--32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2495},
  URN =		{urn:nbn:de:0030-drops-24952},
  doi =		{10.4230/LIPIcs.STACS.2010.2495},
  annote =	{Keywords: Multivariate algorithmics, problem parameterization}
}
Document
Mathematics, Cryptology, Security

Authors: Jacques Stern


Abstract
In this talk, I will review some of the work performed by the research community in cryptology and security since the invention of public key cryptography by Diffie and Hellman in 1976. This community has developed many challenging lines of research. I will only focus on some of these, and moreover I will adopt an extremely specific perspective: for each chosen example, I will try to trace the original mathematics that underlie the methods in use. Over the years, maybe due to my original training as a mathematician, I have come to consider that linking recent advances and challenges in cryptology and security to the work of past mathematicians is indeed fascinating. The range of examples will span both theory and practice: I will show that the celebrated RSA algorithm is intimately connected to mathematics that go back to the middle of the XVIIIth century. I will also cover alternatives to RSA, the method of "provable security", as well as some aspects of the security of electronic payments.
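
The RSA connection mentioned above rests on Euler's theorem ($a^{\varphi(n)} \equiv 1 \pmod n$ for $\gcd(a,n)=1$). The toy Python snippet below, with deliberately tiny and insecure parameters of my own choosing, only illustrates that dependence and is not part of the talk.

# Toy RSA with tiny, insecure parameters, illustrating that decryption
# correctness rests on Euler's theorem: m^(e*d) = m^(1 + k*phi(n)) = m (mod n).
p, q = 61, 53                # small illustrative primes (not secure)
n = p * q                    # modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

m = 1234                     # a "message" smaller than n
c = pow(m, e, n)             # encryption
assert pow(c, d, n) == m     # decryption recovers m, by Euler's theorem
print(n, e, d, c)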

Cite as

Jacques Stern. Mathematics, Cryptology, Security. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 33-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{stern:LIPIcs.STACS.2010.2441,
  author =	{Stern, Jacques},
  title =	{{Mathematics, Cryptology, Security}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{33--34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2441},
  URN =		{urn:nbn:de:0030-drops-24416},
  doi =		{10.4230/LIPIcs.STACS.2010.2441},
  annote =	{Keywords: Mathematics, Cryptology, Security}
}
Document
Large-Girth Roots of Graphs

Authors: Anna Adamaszek and Michal Adamaszek


Abstract
We study the problem of recognizing graph powers and computing roots of graphs. We provide a polynomial time recognition algorithm for $r$-th powers of graphs of girth at least $2r+3$, thus improving a bound conjectured by Farzad et al. (STACS 2009). Our algorithm also finds all $r$-th roots of a given graph that have girth at least $2r+3$ and no degree one vertices, which is a step towards a recent conjecture of Levenshtein that such a root should be unique. On the negative side, we prove that recognition becomes an NP-complete problem when the girth bound is roughly halved. Similar results have so far only been attempted for $r=2,3$.
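
For orientation, the easy forward direction of the definitions (not the paper's recognition algorithm) is computing the $r$-th power $G^r$, which joins two vertices whenever their distance in $G$ is at most $r$; a small hypothetical Python sketch using breadth-first search:

from collections import deque

# Hypothetical helper: the r-th power of a graph given as an adjacency dict
# {v: set of neighbours}; G^r joins u and v iff their distance in G is <= r.
def graph_power(adj, r):
    power = {}
    for s in adj:
        dist = {s: 0}                      # BFS from s, exploring up to depth r
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == r:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        power[s] = {v for v, d in dist.items() if 0 < d <= r}
    return power

# Example: the square of the path 1-2-3-4 additionally joins vertices at distance 2.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(graph_power(path, 2))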

Cite as

Anna Adamaszek and Michal Adamaszek. Large-Girth Roots of Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 35-46, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{adamaszek_et_al:LIPIcs.STACS.2010.2442,
  author =	{Adamaszek, Anna and Adamaszek, Michal},
  title =	{{Large-Girth Roots of Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{35--46},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2442},
  URN =		{urn:nbn:de:0030-drops-24429},
  doi =		{10.4230/LIPIcs.STACS.2010.2442},
  annote =	{Keywords: Graph roots, Graph powers, NP-completeness, Recognition algorithms}
}
Document
The Tropical Double Description Method

Authors: Xavier Allamigeon, Stéphane Gaubert, and Éric Goubault


Abstract
We develop a tropical analogue of the classical double description method, allowing one to compute an internal representation (in terms of vertices) of a polyhedron defined externally (by inequalities). The heart of the tropical algorithm is a characterization of the extreme points of a polyhedron in terms of a system of constraints which define it. We show that checking the extremality of a point reduces to checking whether there is only one minimal strongly connected component in a hypergraph. The latter problem can be solved in almost linear time, which allows us to quickly eliminate redundant generators. We report extensive tests (including benchmarks from an application to static analysis) showing that the method experimentally outperforms previous ones by orders of magnitude. The present tools also lead to worst-case bounds which improve on those provided by previous methods.
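
For readers new to the setting, a tropical (max-plus) halfspace replaces addition by max and multiplication by +: a point $x$ satisfies row $i$ of $A \odot x \le B \odot x$ when $\max_j(A_{ij}+x_j) \le \max_j(B_{ij}+x_j)$. The fragment below is only a hypothetical membership check for an externally described tropical polyhedron, not the double description algorithm of the paper.

# Hypothetical membership test for an externally described tropical polyhedron:
# the i-th constraint is  max_j(A[i][j] + x[j])  <=  max_j(B[i][j] + x[j]).
NEG_INF = float("-inf")            # tropical "zero": neutral element for max

def trop_dot(row, x):
    # Tropical inner product: max over j of row[j] + x[j].
    return max(a + v for a, v in zip(row, x))

def in_tropical_polyhedron(A, B, x):
    return all(trop_dot(a, x) <= trop_dot(b, x) for a, b in zip(A, B))

# A single tropical halfspace in the plane encoding x1 <= x2.
A = [[0, NEG_INF]]
B = [[NEG_INF, 0]]
print(in_tropical_polyhedron(A, B, [0, 1]), in_tropical_polyhedron(A, B, [2, 1]))  # True False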

Cite as

Xavier Allamigeon, Stéphane Gaubert, and Éric Goubault. The Tropical Double Description Method. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 47-58, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{allamigeon_et_al:LIPIcs.STACS.2010.2443,
  author =	{Allamigeon, Xavier and Gaubert, St\'{e}phane and Goubault, \'{E}ric},
  title =	{{The Tropical Double Description Method}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{47--58},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2443},
  URN =		{urn:nbn:de:0030-drops-24435},
  doi =		{10.4230/LIPIcs.STACS.2010.2443},
  annote =	{Keywords: Convexity in tropical algebra, algorithmics and combinatorics of tropical polyhedra, computational geometry, discrete event systems, static analysis}
}
Document
The Remote Point Problem, Small Bias Spaces, and Expanding Generator Sets

Authors: Vikraman Arvind and Srikanth Srinivasan


Abstract
Using $\varepsilon$-bias spaces over $\mathbb{F}_2$, we show that the Remote Point Problem (RPP), introduced by Alon et al. \cite{APY09}, has an $\mathrm{NC}^2$ algorithm (achieving the same parameters as \cite{APY09}). We study a generalization of the Remote Point Problem to groups: we replace $\mathbb{F}_2^n$ by $\mathcal{G}^n$ for an arbitrary fixed group $\mathcal{G}$. When $\mathcal{G}$ is abelian we give an $\mathrm{NC}^2$ algorithm for RPP, again using $\varepsilon$-bias spaces. For non-abelian $\mathcal{G}$, we give a deterministic polynomial-time algorithm for RPP. We also show the connection to the construction of expanding generator sets for the group $\mathcal{G}^n$. All our algorithms for the RPP achieve essentially the same parameters as \cite{APY09}.

Cite as

Vikraman Arvind and Srikanth Srinivasan. The Remote Point Problem, Small Bias Spaces, and Expanding Generator Sets. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 59-70, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{arvind_et_al:LIPIcs.STACS.2010.2444,
  author =	{Arvind, Vikraman and Srinivasan, Srikanth},
  title =	{{The Remote Point Problem, Small Bias Spaces, and Expanding Generator Sets}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{59--70},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2444},
  URN =		{urn:nbn:de:0030-drops-24449},
  doi =		{10.4230/LIPIcs.STACS.2010.2444},
  annote =	{Keywords: Small Bias Spaces, Expander Graphs, Cayley Graphs, Remote Point Problem}
}
Document
Evasiveness and the Distribution of Prime Numbers

Authors: László Babai, Anandam Banerjee, Raghav Kulkarni, and Vipul Naik


Abstract
A Boolean function on $N$ variables is called \emph{evasive} if its decision-tree complexity is $N$. A sequence $B_n$ of Boolean functions is \emph{eventually evasive} if $B_n$ is evasive for all sufficiently large $n$. We confirm the eventual evasiveness of several classes of monotone graph properties under widely accepted number theoretic hypotheses. In particular we show that Chowla's conjecture on Dirichlet primes implies that (a) for any graph $H$, ``forbidden subgraph $H$'' is eventually evasive and (b) all nontrivial monotone properties of graphs with $\le n^{3/2-\epsilon}$ edges are eventually evasive. ($n$ is the number of vertices.) While Chowla's conjecture is not known to follow from the Extended Riemann Hypothesis (ERH, the Riemann Hypothesis for Dirichlet's $L$ functions), we show (b) with the bound $O(n^{5/4-\epsilon})$ under ERH. We also prove unconditional results: (a$'$) for any graph $H$, the query complexity of ``forbidden subgraph $H$'' is $\binom{n}{2} - O(1)$; (b$'$) for some constant $c>0$, all nontrivial monotone properties of graphs with $\le cn\log n+O(1)$ edges are eventually evasive. Even these weaker, unconditional results rely on deep results from number theory such as Vinogradov's theorem on the Goldbach conjecture. Our technical contribution consists in connecting the topological framework of Kahn, Saks, and Sturtevant (1984), as further developed by Chakrabarti, Khot, and Shi (2002), with a deeper analysis of the orbital structure of permutation groups and their connection to the distribution of prime numbers. Our unconditional results include stronger versions and generalizations of some result of Chakrabarti et al.

Cite as

László Babai, Anandam Banerjee, Raghav Kulkarni, and Vipul Naik. Evasiveness and the Distribution of Prime Numbers. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 71-82, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{babai_et_al:LIPIcs.STACS.2010.2445,
  author =	{Babai, L\'{a}szl\'{o} and Banerjee, Anandam and Kulkarni, Raghav and Naik, Vipul},
  title =	{{Evasiveness and the Distribution of Prime Numbers}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{71--82},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2445},
  URN =		{urn:nbn:de:0030-drops-24451},
  doi =		{10.4230/LIPIcs.STACS.2010.2445},
  annote =	{Keywords: Decision tree complexity, evasiveness, graph property, group action, Dirichlet primes, Extended Riemann Hypothesis}
}
Document
Dynamic Sharing of a Multiple Access Channel

Authors: Marcin Bienkowski, Marek Klonowski, Miroslaw Korzeniowski, and Dariusz R. Kowalski


Abstract
In this paper we consider the mutual exclusion problem on a multiple access channel. Mutual exclusion is one of the fundamental problems in distributed computing. In the classic version of this problem, $n$ processes perform a concurrent program which occasionally triggers some of them to use shared resources, such as memory, communication channel, device, etc. The goal is to design a distributed algorithm to control entries and exits to/from the shared resource in such a way that at any time there is at most one process accessing it. We consider both the classic and a slightly weaker version of mutual exclusion, called $\epsilon$-mutual-exclusion, where for each period of a process staying in the critical section the probability that there is some other process in the critical section is at most $\epsilon$. We show that there are channel settings where the classic mutual exclusion is not feasible even for randomized algorithms, while $\epsilon$-mutual-exclusion is. In more relaxed channel settings, we prove an exponential gap between the makespan complexity of the classic mutual exclusion problem and its weaker $\epsilon$-exclusion version. We also show how to guarantee fairness of mutual exclusion algorithms, i.e., that each process that wants to enter the critical section will eventually succeed.

Cite as

Marcin Bienkowski, Marek Klonowski, Miroslaw Korzeniowski, and Dariusz R. Kowalski. Dynamic Sharing of a Multiple Access Channel. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 83-94, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{bienkowski_et_al:LIPIcs.STACS.2010.2446,
  author =	{Bienkowski, Marcin and Klonowski, Marek and Korzeniowski, Miroslaw and Kowalski, Dariusz R.},
  title =	{{Dynamic Sharing of a Multiple Access Channel}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{83--94},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2446},
  URN =		{urn:nbn:de:0030-drops-24467},
  doi =		{10.4230/LIPIcs.STACS.2010.2446},
  annote =	{Keywords: Distributed algorithms, multiple access channel, mutual exclusion}
}
Document
Exact Covers via Determinants

Authors: Andreas Björklund


Abstract
Given a $k$-uniform hypergraph on $n$ vertices, partitioned in $k$ equal parts such that every hyperedge includes one vertex from each part, the $k$-Dimensional Matching problem asks whether there is a disjoint collection of the hyperedges which covers all vertices. We show it can be solved by a randomized polynomial space algorithm in $O^*(2^{n(k-2)/k})$ time. The $O^*()$ notation hides factors polynomial in $n$ and $k$. The general Exact Cover by $k$-Sets problem asks the same when the partition constraint is dropped and arbitrary hyperedges of cardinality $k$ are permitted. We show it can be solved by a randomized polynomial space algorithm in $O^*(c_k^n)$ time, where $c_3=1.496, c_4=1.642, c_5=1.721$, and provide a general bound for larger $k$. Both results substantially improve on the previous best algorithms for these problems, especially for small $k$. They follow from the new observation that Lov\'asz' perfect matching detection via determinants (Lov\'asz, 1979) admits an embedding in the recently proposed inclusion--exclusion counting scheme for set covers, \emph{despite} its inability to count the perfect matchings.
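
The determinant test of Lovász referred to above is easy to sketch for bipartite graphs: substitute independent random field elements into the Edmonds matrix (entry $(i,j)$ nonzero iff edge $ij$ exists); with high probability the determinant over a large prime field is nonzero exactly when a perfect matching exists. The Python sketch below illustrates only that classical subroutine, not the paper's embedding of it into the inclusion-exclusion counting scheme.

import random

P = (1 << 31) - 1   # a large prime; the random substitution lives in GF(P)

def det_mod_p(M, p=P):
    # Determinant over GF(p) by Gaussian elimination.
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c]), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], -1, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for k in range(c, n):
                M[r][k] = (M[r][k] - f * M[c][k]) % p
    return det % p

def has_perfect_matching(n, edges):
    # Randomized Edmonds-matrix test for a bipartite graph with n vertices per side.
    M = [[0] * n for _ in range(n)]
    for i, j in edges:
        M[i][j] = random.randrange(1, P)
    return det_mod_p(M) != 0           # nonzero w.h.p. iff a perfect matching exists

print(has_perfect_matching(2, [(0, 0), (0, 1), (1, 0), (1, 1)]))  # K_{2,2}: True (w.h.p.)
print(has_perfect_matching(2, [(0, 0), (0, 1)]))                  # no perfect matching: False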

Cite as

Andreas Björklund. Exact Covers via Determinants. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 95-106, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{bjorklund:LIPIcs.STACS.2010.2447,
  author =	{Bj\"{o}rklund, Andreas},
  title =	{{Exact Covers via Determinants}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{95--106},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2447},
  URN =		{urn:nbn:de:0030-drops-24474},
  doi =		{10.4230/LIPIcs.STACS.2010.2447},
  annote =	{Keywords: Moderately Exponential Time Algorithms, Exact Set Cover, \$k\$-Dimensional Matching}
}
Document
On Iterated Dominance, Matrix Elimination, and Matched Paths

Authors: Felix Brandt, Felix Fischer, and Markus Holzer


Abstract
We study computational problems arising from the iterated removal of weakly dominated actions in anonymous games. Our main result shows that it is NP-complete to decide whether an anonymous game with three actions can be solved via iterated weak dominance. The two-action case can be reformulated as a natural elimination problem on a matrix, the complexity of which turns out to be surprisingly difficult to characterize and ultimately remains open. We however establish connections to a matching problem along paths in a directed graph, which is computationally hard in general but can also be used to identify tractable cases of matrix elimination. We finally identify different classes of anonymous games where iterated dominance is in P and NP-complete, respectively.
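
To make the elimination dynamics concrete, here is a hypothetical sketch for ordinary two-player bimatrix games (not the anonymous games studied in the paper) that repeatedly removes a weakly dominated action of either player until none remains.

# Iterated removal of weakly dominated actions in a two-player bimatrix game.
# row_pay[i][j] and col_pay[i][j] are the payoffs when row plays i and column plays j.
def find_weakly_dominated(actions, opp_actions, payoff):
    # Return an action weakly dominated by another remaining action, if any.
    for a in actions:
        for b in actions:
            if b != a and \
               all(payoff(b, o) >= payoff(a, o) for o in opp_actions) and \
               any(payoff(b, o) > payoff(a, o) for o in opp_actions):
                return a
    return None

def iterated_weak_dominance(row_pay, col_pay):
    rows = set(range(len(row_pay)))
    cols = set(range(len(row_pay[0])))
    while True:
        r = find_weakly_dominated(sorted(rows), sorted(cols), lambda a, o: row_pay[a][o])
        if r is not None:
            rows.remove(r)
            continue
        c = find_weakly_dominated(sorted(cols), sorted(rows), lambda a, o: col_pay[o][a])
        if c is not None:
            cols.remove(c)
            continue
        return rows, cols

row_pay = [[1, 1], [0, 0]]   # row action 0 weakly dominates row action 1
col_pay = [[1, 0], [1, 0]]   # column action 0 weakly dominates column action 1
print(iterated_weak_dominance(row_pay, col_pay))   # ({0}, {0}): the game is solved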

Cite as

Felix Brandt, Felix Fischer, and Markus Holzer. On Iterated Dominance, Matrix Elimination, and Matched Paths. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 107-118, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{brandt_et_al:LIPIcs.STACS.2010.2448,
  author =	{Brandt, Felix and Fischer, Felix and Holzer, Markus},
  title =	{{On Iterated Dominance, Matrix Elimination, and Matched Paths}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{107--118},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2448},
  URN =		{urn:nbn:de:0030-drops-24485},
  doi =		{10.4230/LIPIcs.STACS.2010.2448},
  annote =	{Keywords: Algorithmic Game Theory, Computational Complexity, Iterated Dominance, Matching}
}
Document
AMS Without 4-Wise Independence on Product Domains

Authors: Vladimir Braverman, Kai-Min Chung, Zhenming Liu, Michael Mitzenmacher, and Rafail Ostrovsky


Abstract
In their seminal work, Alon, Matias, and Szegedy introduced several sketching techniques, including showing that $4$-wise independence is sufficient to obtain good approximations of the second frequency moment. In this work, we show that their sketching technique can be extended to product domains $[n]^k$ by using the product of $4$-wise independent functions on $[n]$. Our work extends that of Indyk and McGregor, who showed the result for $k = 2$. Their primary motivation was the problem of identifying correlations in data streams. In their model, a stream of pairs $(i,j) \in [n]^2$ arrive, giving a joint distribution $(X,Y)$, and they find approximation algorithms for how close the joint distribution is to the product of the marginal distributions under various metrics, which naturally corresponds to how close $X$ and $Y$ are to being independent. By using our technique, we obtain a new result for the problem of approximating the $\ell_2$ distance between the joint distribution and the product of the marginal distributions for $k$-ary vectors, instead of just pairs, in a single pass. Our analysis gives a randomized algorithm that is a $(1\pm \epsilon)$ approximation (with probability $1-\delta$) that requires space logarithmic in $n$ and $m$ and proportional to $3^k$.
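
For reference, the original AMS estimator for the second frequency moment keeps counters $Z=\sum_i f_i s(i)$ for 4-wise independent random signs $s(i)\in\{\pm 1\}$ and uses $Z^2$ as an unbiased estimate of $F_2$; a random degree-3 polynomial over a prime field is one standard way to obtain (approximately) 4-wise independent signs. The sketch below covers only the single domain $[n]$, not the product-domain extension of the paper, and simply averages repetitions instead of taking a median of means.

import random

P = 2_147_483_647   # prime field for the hash family

def four_wise_sign(coeffs):
    # A random degree-3 polynomial over GF(P) gives a 4-wise independent family;
    # its parity is turned into an (approximately unbiased) +/-1 sign.
    a, b, c, d = coeffs
    return lambda i: 1 if (((a * i + b) * i + c) * i + d) % P % 2 == 0 else -1

def ams_f2_estimate(stream, repetitions=64):
    # Each repetition keeps one counter Z = sum_i f_i * s(i); E[Z^2] = F_2.
    estimates = []
    for _ in range(repetitions):
        s = four_wise_sign([random.randrange(P) for _ in range(4)])
        z = sum(s(i) for i in stream)
        estimates.append(z * z)
    return sum(estimates) / repetitions

stream = [1] * 10 + [2] * 5 + [3] * 1                 # frequencies 10, 5, 1
print(ams_f2_estimate(stream), 10**2 + 5**2 + 1**2)   # estimate vs. exact F_2 = 126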

Cite as

Vladimir Braverman, Kai-Min Chung, Zhenming Liu, Michael Mitzenmacher, and Rafail Ostrovsky. AMS Without 4-Wise Independence on Product Domains. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 119-130, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.STACS.2010.2449,
  author =	{Braverman, Vladimir and Chung, Kai-Min and Liu, Zhenming and Mitzenmacher, Michael and Ostrovsky, Rafail},
  title =	{{AMS Without 4-Wise Independence on Product Domains}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{119--130},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2449},
  URN =		{urn:nbn:de:0030-drops-24496},
  doi =		{10.4230/LIPIcs.STACS.2010.2449},
  annote =	{Keywords: Data Streams, Randomized Algorithms, Streaming Algorithms, Independence, Sketches}
}
Document
Quantum Algorithms for Testing Properties of Distributions

Authors: Sergey Bravyi, Aram W. Harrow, and Avinatan Hassidim


Abstract
Suppose one has access to oracles generating samples from two unknown probability distributions $p$ and $q$ on some $N$-element set. How many samples does one need to test whether the two distributions are close or far from each other in the $L_1$-norm? This and related questions have been extensively studied in recent years in the field of property testing. In the present paper we study quantum algorithms for testing properties of distributions. It is shown that the $L_1$-distance $\|p-q\|_1$ can be estimated with a constant precision using only $O(N^{1/2})$ queries in the quantum setting, whereas classical computers need $\Omega(N^{1-o(1)})$ queries. We also describe quantum algorithms for testing Uniformity and Orthogonality with query complexity $O(N^{1/3})$. The classical query complexity of these problems is known to be $\Omega(N^{1/2})$. A quantum algorithm for testing Uniformity has been recently independently discovered by Chakraborty et al. \cite{CFMW09}.

Cite as

Sergey Bravyi, Aram W. Harrow, and Avinatan Hassidim. Quantum Algorithms for Testing Properties of Distributions. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 131-142, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{bravyi_et_al:LIPIcs.STACS.2010.2450,
  author =	{Bravyi, Sergey and Harrow, Aram W. and Hassidim, Avinatan},
  title =	{{Quantum Algorithms for Testing Properties of Distributions}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{131--142},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2450},
  URN =		{urn:nbn:de:0030-drops-24502},
  doi =		{10.4230/LIPIcs.STACS.2010.2450},
  annote =	{Keywords: Quantum computing, property testing, sampling}
}
Document
Optimal Query Complexity for Reconstructing Hypergraphs

Authors: Nader H. Bshouty and Hanna Mazzawi


Abstract
In this paper we consider the problem of reconstructing a hidden weighted hypergraph of constant rank using additive queries. We prove the following: Let $G$ be a weighted hidden hypergraph of constant rank with $n$ vertices and $m$ hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $$ O\left(\frac{m\log n}{\log m}\right) $$ additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal Query Complexity Bounds for Finding Graphs. STOC, 749--758, 2008]. When the weights of the hypergraph are integers that are less than $O(\mathrm{poly}(n^d/m))$, where $d$ is the rank of the hypergraph (and therefore for unweighted hypergraphs), there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $$ O\left(\frac{m\log \frac{n^d}{m}}{\log m}\right) $$ additive queries. Using the information-theoretic bound, the above query complexities are tight.

Cite as

Nader H. Bshouty and Hanna Mazzawi. Optimal Query Complexity for Reconstructing Hypergraphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 143-154, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{bshouty_et_al:LIPIcs.STACS.2010.2496,
  author =	{Bshouty, Nader H. and Mazzawi, Hanna},
  title =	{{Optimal Query Complexity for Reconstructing Hypergraphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{143--154},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2496},
  URN =		{urn:nbn:de:0030-drops-24968},
  doi =		{10.4230/LIPIcs.STACS.2010.2496},
  annote =	{Keywords: Query complexity, hypergraphs}
}
Document
Ultimate Traces of Cellular Automata

Authors: Julien Cervelle, Enrico Formenti, and Pierre Guillon


Abstract
A cellular automaton (CA) is a parallel synchronous computing model, which consists of a juxtaposition of finite automata (cells) whose state evolves according to that of their neighbors. Its trace is the set of infinite words representing the sequence of states taken by some particular cell. In this paper we study the ultimate trace of CA and partial CA (a CA restricted to a particular subshift). The ultimate trace is the trace observed after a long run of the CA. We give sufficient conditions for a set of infinite words to be the trace of some CA and prove the undecidability of all properties over traces that are stable by ultimate coincidence.
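
The notion of a trace is easy to visualize on a one-dimensional CA: iterate the global map and record the successive states of one fixed cell. The toy below uses elementary rule 110 on a cyclic configuration, an arbitrary choice for illustration rather than an example from the paper.

# Trace of one cell of an elementary cellular automaton (rule 110, cyclic grid).
RULE = 110

def step(config):
    n = len(config)
    return [(RULE >> (config[(i - 1) % n] * 4 + config[i] * 2 + config[(i + 1) % n])) & 1
            for i in range(n)]

def trace(config, cell, steps):
    # The finite prefix of the word of states taken by `cell` under iteration.
    out = []
    for _ in range(steps):
        out.append(config[cell])
        config = step(config)
    return out

start = [0] * 15 + [1] + [0] * 15
print("".join(map(str, trace(start, cell=15, steps=20))))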

Cite as

Julien Cervelle, Enrico Formenti, and Pierre Guillon. Ultimate Traces of Cellular Automata. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 155-166, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{cervelle_et_al:LIPIcs.STACS.2010.2451,
  author =	{Cervelle, Julien and Formenti, Enrico and Guillon, Pierre},
  title =	{{Ultimate Traces of Cellular Automata}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{155--166},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2451},
  URN =		{urn:nbn:de:0030-drops-24518},
  doi =		{10.4230/LIPIcs.STACS.2010.2451},
  annote =	{Keywords: Discrete dynamical systems, cellular automata, symbolic dynamics, sofic systems, formal languages, decidability}
}
Document
Two-phase Algorithms for the Parametric Shortest Path Problem

Authors: Sourav Chakraborty, Eldar Fischer, Oded Lachish, and Raphael Yuster


Abstract
A {\em parametric weighted graph} is a graph whose edges are labeled with continuous real functions of a single common variable. For any instantiation of the variable, one obtains a standard edge-weighted graph. Parametric weighted graph problems are generalizations of weighted graph problems, and arise in various natural scenarios. Parametric weighted graph algorithms consist of two phases: a {\em preprocessing phase}, whose input is a parametric weighted graph and whose output is a data structure (the advice) that is later used by the {\em instantiation phase}, where a specific value for the variable is given. The instantiation phase outputs the solution to the (standard) weighted graph problem that arises from the instantiation. The goal is for the running time of the instantiation phase to improve upon the running time of any algorithm that solves the weighted graph problem from scratch, by taking advantage of the advice. In this paper we construct several parametric algorithms for the shortest path problem. For the case of linear function weights we present an algorithm for the single source shortest path problem. Its preprocessing phase runs in $\tilde{O}(V^4)$ time, while its instantiation phase runs in only $O(E+V \log V)$ time. The fastest standard algorithm for single source shortest path runs in $O(VE)$ time. For the case of weight functions defined by degree $d$ polynomials, we present an algorithm with quasi-polynomial preprocessing time $O(V^{(1 + \log f(d))\log V})$ and instantiation time only $\tilde{O}(V)$. In fact, for any pair of vertices $u,v$, the instantiation phase computes the distance from $u$ to $v$ in only $O(\log^2 V)$ time. Finally, for linear function weights, we present a randomized algorithm whose preprocessing time is $\tilde{O}(V^{3.5})$ and such that, for any pair of vertices $u,v$ and any instantiation variable, the instantiation phase computes, in $O(1)$ time, the length of a path from $u$ to $v$ that is at most (additively) $\epsilon$ larger than the length of a shortest path. In particular, an all-pairs shortest path solution, up to an additive constant error, can be computed in $O(V^2)$ time.
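
To fix ideas about the model (this is only the trivial instantiation step, not the paper's preprocessing/advice construction): with linear weights $w_e(\lambda)=a_e+b_e\lambda$, an instantiation substitutes the value $\lambda$ and runs an ordinary shortest-path algorithm. A hypothetical sketch:

import heapq

# Parametric graph: adj[u] is a list of (v, a, b), the edge u->v having weight a + b*lam.
def instantiate_and_dijkstra(adj, source, lam):
    # Naive "instantiation phase": plug in lam, then run plain Dijkstra from source.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, a, b in adj.get(u, []):
            w = a + b * lam                      # instantiated edge weight (assumed >= 0)
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

adj = {"s": [("a", 1.0, 2.0), ("b", 4.0, 0.0)], "a": [("t", 1.0, 0.0)], "b": [("t", 1.0, 0.0)]}
print(instantiate_and_dijkstra(adj, "s", 0.0))   # at lambda = 0, s->a->t (cost 2) is shortest
print(instantiate_and_dijkstra(adj, "s", 3.0))   # at lambda = 3, s->b->t (cost 5) is shortest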

Cite as

Sourav Chakraborty, Eldar Fischer, Oded Lachish, and Raphael Yuster. Two-phase Algorithms for the Parametric Shortest Path Problem. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 167-178, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.STACS.2010.2452,
  author =	{Chakraborty, Sourav and Fischer, Eldar and Lachish, Oded and Yuster, Raphael},
  title =	{{Two-phase Algorithms for the Parametric Shortest Path Problem}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{167--178},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2452},
  URN =		{urn:nbn:de:0030-drops-24523},
  doi =		{10.4230/LIPIcs.STACS.2010.2452},
  annote =	{Keywords: Parametric Algorithms, Shortest path problem}
}
Document
Continuous Monitoring of Distributed Data Streams over a Time-based Sliding Window

Authors: Ho-Leung Chan, Tak-Wah Lam, Lap-Kei Lee, and Hing-Fung Ting


Abstract
The past decade has witnessed many interesting algorithms for maintaining statistics over a data stream. This paper initiates a theoretical study of algorithms for monitoring distributed data streams over a time-based sliding window (which contains a variable number of items and possibly out-of-order items). The concern is how to minimize the communication between individual streams and the root, while allowing the root, at any time, to report the global statistics of all streams within a given error bound. This paper presents communication-efficient algorithms for three classical statistics, namely, basic counting, frequent items and quantiles. The worst-case communication cost over a window is $O(\frac{k}{\varepsilon} \log \frac{\varepsilon N}{k})$ bits for basic counting and $O(\frac{k}{\varepsilon} \log \frac{N}{k})$ words for the remaining two, where $k$ is the number of distributed data streams, $N$ is the total number of items in the streams that arrive or expire in the window, and $\varepsilon < 1$ is the desired error bound. Matching and nearly matching lower bounds are also obtained.
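
As background for the single-stream case (the distributed, communication-bounded protocols are the paper's contribution), basic counting over a time-based sliding window can be maintained exactly with a queue of timestamps; a minimal sketch assuming in-order arrivals:

from collections import deque

class SlidingWindowCounter:
    """Exact basic counting of the items that arrived in the last `window` time units."""
    def __init__(self, window):
        self.window = window
        self.times = deque()        # arrival timestamps, assumed non-decreasing

    def arrive(self, t):
        self.times.append(t)

    def count(self, now):
        while self.times and self.times[0] <= now - self.window:
            self.times.popleft()    # expire items that left the window
        return len(self.times)

c = SlidingWindowCounter(window=10)
for t in [1, 2, 5, 8, 12]:
    c.arrive(t)
print(c.count(now=13))   # 3: the items at times 5, 8 and 12 are still in the window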

Cite as

Ho-Leung Chan, Tak-Wah Lam, Lap-Kei Lee, and Hing-Fung Ting. Continuous Monitoring of Distributed Data Streams over a Time-based Sliding Window. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 179-190, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{chan_et_al:LIPIcs.STACS.2010.2453,
  author =	{Chan, Ho-Leung and Lam, Tak-Wah and Lee, Lap-Kei and Ting, Hing-Fung},
  title =	{{Continuous Monitoring of Distributed Data Streams over a Time-based Sliding Window}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{179--190},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2453},
  URN =		{urn:nbn:de:0030-drops-24536},
  doi =		{10.4230/LIPIcs.STACS.2010.2453},
  annote =	{Keywords: Algorithms, distributed data streams, communication efficiency, frequent items}
}
Document
Robust Fault Tolerant Uncapacitated Facility Location

Authors: Shiri Chechik and David Peleg


Abstract
In the {\em uncapacitated facility location} problem, given a graph, a set of demands and opening costs, it is required to find a set of facilities $R$, so as to minimize the sum of the cost of opening the facilities in $R$ and the cost of assigning all node demands to open facilities. This paper concerns the {\em robust fault-tolerant} version of the uncapacitated facility location problem (RFTFL). In this problem, one or more facilities might fail, and each demand should be supplied by the closest open facility that did not fail. It is required to find a set of facilities $R$, so as to minimize the sum of the cost of opening the facilities in $R$ and the cost of assigning all node demands to open facilities that did not fail, after the failure of up to $\alpha$ facilities. We present a polynomial time algorithm that yields a 6.5-approximation for this problem with at most one failure and a $(1.5 + 7.5\alpha)$-approximation for the problem with at most $\alpha > 1$ failures. We also show that the RFTFL problem is NP-hard even on trees, and even in the case of a single failure.

Cite as

Shiri Chechik and David Peleg. Robust Fault Tolerant Uncapacitated Facility Location. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 191-202, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{chechik_et_al:LIPIcs.STACS.2010.2454,
  author =	{Chechik, Shiri and Peleg, David},
  title =	{{Robust Fault Tolerant Uncapacitated Facility Location}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{191--202},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2454},
  URN =		{urn:nbn:de:0030-drops-24547},
  doi =		{10.4230/LIPIcs.STACS.2010.2454},
  annote =	{Keywords: Facility location, approximation algorithms, fault-tolerance}
}
Document
Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation

Authors: Victor Chen, Elena Grigorescu, and Ronald de Wolf


Abstract
We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers \emph{most} queries correctly with high probability and for the remaining queries, the decoder with high probability either answers correctly or declares ``don't know.'' Furthermore, if there is no noise on the data structure, it answers \emph{all} queries correctly with high probability. Our model is the common generalization of an error-correcting data structure model proposed recently by de Wolf, and the notion of ``relaxed locally decodable codes'' developed in the PCP literature. We measure the efficiency of a data structure in terms of its \emph{length} (the number of bits in its representation), and query-answering time, measured by the number of \emph{bit-probes} to the (possibly corrupted) representation. We obtain results for the following two data structure problems:
  • (Membership) Store a subset $S$ of size at most $s$ from a universe of size $n$ such that membership queries can be answered efficiently, i.e., decide if a given element from the universe is in $S$. We construct an error-correcting data structure for this problem with length nearly linear in $s\log n$ that answers membership queries with $O(1)$ bit-probes. This nearly matches the asymptotically optimal parameters for the noiseless case: length $O(s\log n)$ and one bit-probe, due to Buhrman, Miltersen, Radhakrishnan, and Venkatesh.
  • (Univariate polynomial evaluation) Store a univariate polynomial $g$ of degree $\deg(g)\leq s$ over the integers modulo $n$ such that evaluation queries can be answered efficiently, i.e., we can evaluate the output of $g$ on a given integer modulo $n$. We construct an error-correcting data structure for this problem with length nearly linear in $s\log n$ that answers evaluation queries with $\mathrm{polylog}(s)\cdot\log^{1+o(1)}n$ bit-probes. This nearly matches the parameters of the best-known noiseless construction, due to Kedlaya and Umans.

Cite as

Victor Chen, Elena Grigorescu, and Ronald de Wolf. Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 203-214, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{chen_et_al:LIPIcs.STACS.2010.2455,
  author =	{Chen, Victor and Grigorescu, Elena and de Wolf, Ronald},
  title =	{{Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{203--214},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2455},
  URN =		{urn:nbn:de:0030-drops-24558},
  doi =		{10.4230/LIPIcs.STACS.2010.2455},
  annote =	{Keywords: Data Structures, Error-Correcting Codes, Membership, Polynomial Evaluation}
}
Document
Log-space Algorithms for Paths and Matchings in k-trees

Authors: Bireswar Das, Samir Datta, and Prajakta Nimbhorkar


Abstract
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in logspace (L) for graphs of tree-width $2$ \cite{JT07}. However, for graphs of tree-width larger than $2$, no bound better than NL is known. In this paper, we improve these bounds for $k$-trees, where $k$ is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed $k$-trees, and for computation of shortest and longest paths in directed acyclic $k$-trees. Besides the path problems mentioned above, we consider the problem of deciding whether a $k$-tree has a perfect matching (decision version), and if so, finding a perfect matching (search version), and prove that these problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs \cite{DKR08}. Our results settle the complexity of these problems for the class of $k$-trees. The results are also applicable to bounded tree-width graphs, when a tree-decomposition is given as input. The technique central to our algorithms is a careful implementation of a divide-and-conquer approach in log-space, along with some ideas from \cite{JT07} and \cite{LMR07}.

Cite as

Bireswar Das, Samir Datta, and Prajakta Nimbhorkar. Log-space Algorithms for Paths and Matchings in k-trees. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 215-226, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{das_et_al:LIPIcs.STACS.2010.2456,
  author =	{Das, Bireswar and Datta, Samir and Nimbhorkar, Prajakta},
  title =	{{Log-space Algorithms for Paths and Matchings in k-trees}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{215--226},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2456},
  URN =		{urn:nbn:de:0030-drops-24563},
  doi =		{10.4230/LIPIcs.STACS.2010.2456},
  annote =	{Keywords: k-trees, reachability, matching, log-space}
}
Document
Restricted Space Algorithms for Isomorphism on Bounded Treewidth Graphs

Authors: Bireswar Das, Jacobo Torán, and Fabian Wagner


Abstract
The Graph Isomorphism problem restricted to graphs of bounded treewidth or bounded tree distance width is known to be solvable in polynomial time \cite{Bo90}, \cite{YBFT}. We give restricted space algorithms for these problems, proving the following results:
  • Isomorphism for bounded tree distance width graphs is in L and thus complete for the class. We also show that for such graphs a canon can be computed within logspace.
  • For bounded treewidth graphs, when both input graphs are given together with a tree decomposition, the problem of whether there is an isomorphism which respects the decompositions (i.e., considering only isomorphisms mapping bags in one decomposition blockwise onto bags in the other decomposition) is in L.
  • For bounded treewidth graphs, when one of the input graphs is given with a tree decomposition, the isomorphism problem is in LogCFL.
  • As a corollary, the isomorphism problem for bounded treewidth graphs is in LogCFL. This improves the known $\mathrm{TC}^1$ upper bound for the problem given by Grohe and Verbitsky \cite{GV06}.

Cite as

Bireswar Das, Jacobo Torán, and Fabian Wagner. Restricted Space Algorithms for Isomorphism on Bounded Treewidth Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 227-238, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{das_et_al:LIPIcs.STACS.2010.2457,
  author =	{Das, Bireswar and Tor\'{a}n, Jacobo and Wagner, Fabian},
  title =	{{Restricted Space Algorithms for Isomorphism on Bounded Treewidth Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{227--238},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2457},
  URN =		{urn:nbn:de:0030-drops-24570},
  doi =		{10.4230/LIPIcs.STACS.2010.2457},
  annote =	{Keywords: Complexity, Algorithms, Graph Isomorphism Problem, Treewidth, LogCFL}
}
Document
The Traveling Salesman Problem under Squared Euclidean Distances

Authors: Fred van Nijnatten, René Sitters, Gerhard J. Woeginger, Alexander Wolff, and Mark de Berg


Abstract
Let $P$ be a set of points in $\mathbb{R}^d$, and let $\alpha \ge 1$ be a real number. We define the distance between two points $p,q\in P$ as $|pq|^{\alpha}$, where $|pq|$ denotes the standard Euclidean distance between $p$ and $q$. We denote the traveling salesman problem under this distance function by TSP$(d,\alpha)$. We design a 5-approximation algorithm for TSP(2,2) and generalize this result to obtain an approximation factor of $3^{\alpha-1}+\sqrt{6}^{\,\alpha}\!/3$ for $d=2$ and all $\alpha\ge2$. We also study the variant Rev-TSP of the problem where the traveling salesman is allowed to revisit points. We present a polynomial-time approximation scheme for Rev-TSP$(2,\alpha)$ with $\alpha\ge2$, and we show that Rev-TSP$(d, \alpha)$ is APX-hard if $d\ge3$ and $\alpha>1$. The APX-hardness proof carries over to TSP$(d, \alpha)$ for the same parameter ranges.
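
To see how the exponent $\alpha$ changes the objective, the tiny brute force below (feasible only for a handful of points, and purely illustrative rather than any of the paper's algorithms) compares optimal tours under $|pq|$ and $|pq|^2$:

from itertools import permutations
from math import dist

def tour_cost(points, order, alpha):
    # Cost of the closed tour visiting `order`, under the distance |pq|**alpha.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]]) ** alpha
               for i in range(len(order)))

def brute_force_tsp(points, alpha):
    n = len(points)
    # Fix point 0 as the start so that rotations of a tour are not re-examined.
    return min(((0,) + perm for perm in permutations(range(1, n))),
               key=lambda order: tour_cost(points, order, alpha))

points = [(0, 0), (0, 3), (4, 0), (5, 4)]
for alpha in (1, 2):
    best = brute_force_tsp(points, alpha)
    print(alpha, best, round(tour_cost(points, best, alpha), 2))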

Cite as

Fred van Nijnatten, René Sitters, Gerhard J. Woeginger, Alexander Wolff, and Mark de Berg. The Traveling Salesman Problem under Squared Euclidean Distances. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 239-250, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{vannijnatten_et_al:LIPIcs.STACS.2010.2458,
  author =	{van Nijnatten, Fred and Sitters, Ren\'{e} and Woeginger, Gerhard J. and Wolff, Alexander and de Berg, Mark},
  title =	{{The Traveling Salesman Problem under Squared Euclidean Distances}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{239--250},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2458},
  URN =		{urn:nbn:de:0030-drops-24580},
  doi =		{10.4230/LIPIcs.STACS.2010.2458},
  annote =	{Keywords: Geometric traveling salesman problem, power-assignment in wireless networks, distance-power gradient, NP-hard, APX-hard}
}
Document
Beyond Bidimensionality: Parameterized Subexponential Algorithms on Directed Graphs

Authors: Frederic Dorn, Fedor V. Fomin, Daniel Lokshtanov, Venkatesh Raman, and Saket Saurabh


Abstract
In this paper we make the first step beyond bidimensionality by obtaining subexponential time algorithms for problems on directed graphs. We develop two different methods to achieve subexponential time parameterized algorithms for problems on sparse directed graphs. We exemplify our approaches with two well-studied problems. For the first problem, $k$-Leaf Out-Branching, which is to find an oriented spanning tree with at least $k$ leaves, we obtain an algorithm solving the problem in time $2^{O(\sqrt{k} \log k)} n + n^{O(1)}$ on directed graphs whose underlying undirected graph excludes some fixed graph $H$ as a minor. For the special case when the input directed graph is planar, the running time can be improved to $2^{O(\sqrt{k})}n + n^{O(1)}$. The second example is a generalization of the Directed Hamiltonian Path problem, namely $k$-Internal Out-Branching, which is to find an oriented spanning tree with at least $k$ internal vertices. We obtain an algorithm solving the problem in time $2^{O(\sqrt{k} \log k)} + n^{O(1)}$ on directed graphs whose underlying undirected graph excludes some fixed apex graph $H$ as a minor. Finally, we observe that for any $\varepsilon>0$, the $k$-Directed Path problem is solvable in time $O((1+\varepsilon)^k n^{f(\varepsilon)})$, where $f$ is some function of $\varepsilon$. Our methods are based on non-trivial combinations of obstruction theorems for undirected graphs, kernelization, problem specific combinatorial structures and a layering technique similar to the one employed by Baker to obtain PTASs for planar graphs.

Cite as

Frederic Dorn, Fedor V. Fomin, Daniel Lokshtanov, Venkatesh Raman, and Saket Saurabh. Beyond Bidimensionality: Parameterized Subexponential Algorithms on Directed Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 251-262, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{dorn_et_al:LIPIcs.STACS.2010.2459,
  author =	{Dorn, Frederic and Fomin, Fedor V. and Lokshtanov, Daniel and Raman, Venkatesh and Saurabh, Saket},
  title =	{{Beyond Bidimensionality: Parameterized Subexponential Algorithms on Directed Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{251--262},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2459},
  URN =		{urn:nbn:de:0030-drops-24599},
  doi =		{10.4230/LIPIcs.STACS.2010.2459},
  annote =	{Keywords: Parameterized Subexponential Algorithms, Directed Graphs, Out-Branching, Internal Out-Branching}
}
Document
Planar Subgraph Isomorphism Revisited

Authors: Frederic Dorn


Abstract
The problem of {\sc Subgraph Isomorphism} is defined as follows: Given a \emph{pattern} $H$ and a \emph{host graph} $G$ on $n$ vertices, does $G$ contain a subgraph that is isomorphic to $H$? Eppstein [SODA 95, J'GAA 99] gives the first linear time algorithm for subgraph isomorphism for a fixed-size pattern, say of order $k$, and arbitrary planar host graph, improving upon the $O(n^{\sqrt{k}})$-time algorithm when using the ``Color-coding'' technique of Alon et al [J'ACM 95]. Eppstein's algorithm runs in time $k^{O(k)} n$, that is, the dependency on $k$ is superexponential. We improve the running time to $2^{O(k)} n$, that is, single exponential in $k$ while keeping the term in $n$ linear. Next to deciding subgraph isomorphism, we can construct a solution and count all solutions in the same asymptotic running time. We may enumerate $\omega$ subgraphs with an additive term $O(\omega k)$ in the running time of our algorithm. We introduce the technique of ``embedded dynamic programming'' on a suitably structured graph decomposition, which exploits the number and topology of the underlying drawings of the subgraph pattern (rather than of the host graph).

Cite as

Frederic Dorn. Planar Subgraph Isomorphism Revisited. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 263-274, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{dorn:LIPIcs.STACS.2010.2460,
  author =	{Dorn, Frederic},
  title =	{{Planar Subgraph Isomorphism Revisited}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{263--274},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2460},
  URN =		{urn:nbn:de:0030-drops-24605},
  doi =		{10.4230/LIPIcs.STACS.2010.2460},
  annote =	{Keywords: Graph algorithms, Subgraph Isomorphism, NP-hard problems, Dynamic programming, Topological graph theory}
}
Document
Intrinsic Universality in Self-Assembly

Authors: David Doty, Jack H. Lutz, Matthew J. Patitz, Scott M. Summers, and Damien Woods


Abstract
We show that the Tile Assembly Model exhibits a strong notion of universality where the goal is to give a single tile assembly system that simulates the behavior of any other tile assembly system. We give a tile assembly system that is capable of simulating a very wide class of tile systems, including itself. Specifically, we give a tile set that simulates the assembly of any tile assembly system in a class of systems that we call \emph{locally consistent}: each tile binds with exactly the strength needed to stay attached, and there are no glue mismatches between tiles in any produced assembly. Our construction is reminiscent of the studies of \emph{intrinsic universality} of cellular automata by Ollinger and others, in the sense that our simulation of a tile system $T$ by a tile system $U$ represents each tile in an assembly produced by $T$ by a $c \times c$ block of tiles in $U$, where $c$ is a constant depending on $T$ but not on the size of the assembly $T$ produces (which may in fact be infinite). Also, our construction improves on earlier simulations of tile assembly systems by other tile assembly systems (in particular, those of Soloveichik and Winfree, and of Demaine et al.) in that we simulate the actual process of self-assembly, not just the end result, as in Soloveichik and Winfree's construction, and we do not discriminate against infinite structures. Both previous results simulate only temperature 1 systems, whereas our construction simulates tile assembly systems operating at temperature 2.

Cite as

David Doty, Jack H. Lutz, Matthew J. Patitz, Scott M. Summers, and Damien Woods. Intrinsic Universality in Self-Assembly. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 275-286, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{doty_et_al:LIPIcs.STACS.2010.2461,
  author =	{Doty, David and Lutz, Jack H. and Patitz, Matthew J. and Summers, Scott M. and Woods, Damien},
  title =	{{Intrinsic Universality in Self-Assembly}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{275--286},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2461},
  URN =		{urn:nbn:de:0030-drops-24619},
  doi =		{10.4230/LIPIcs.STACS.2010.2461},
  annote =	{Keywords: Biological computing, Molecular computation, intrinsic universality, self-assembly}
}
Document
Sponsored Search, Market Equilibria, and the Hungarian Method

Authors: Paul Dütting, Monika Henzinger, and Ingmar Weber


Abstract
Two-sided matching markets play a prominent role in economic theory. A prime example of such a market is the sponsored search market where $n$ advertisers compete for the assignment of one of $k$ sponsored search results, also known as ``slots'', for certain keywords they are interested in. Here, as in other markets of that kind, market equilibria correspond to stable matchings. In this paper, we show how to modify Kuhn's Hungarian Method (Kuhn, 1955) so that it finds an optimal stable matching between advertisers and advertising slots in settings with generalized linear utilities, per-bidder-item reserve prices, and per-bidder-item maximum prices. The only algorithm for this problem presented so far (Aggarwal et al., 2009) requires the market to be in ``general position''. We do not make this assumption.
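For readers who want to experiment, the sketch below computes a plain maximum-utility assignment of advertisers to slots with SciPy's Hungarian-type solver. It is only a baseline under made-up utilities and does not implement the paper's modification (generalized linear utilities, per-bidder-item reserve and maximum prices, or equilibrium price computation).

# Baseline sketch only: a maximum-utility assignment of n advertisers to k slots,
# computed with a Hungarian-type algorithm. It does not implement the paper's
# modified method or compute market-clearing prices.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical utilities u[i][j] of advertiser i for slot j (n = 4, k = 3).
utilities = np.array([
    [9.0, 6.0, 3.0],
    [8.0, 5.5, 2.5],
    [4.0, 3.0, 1.5],
    [7.0, 4.5, 2.0],
])

# linear_sum_assignment minimizes cost, so negate utilities to maximize.
rows, cols = linear_sum_assignment(-utilities)
for adv, slot in zip(rows, cols):
    print(f"advertiser {adv} -> slot {slot} (utility {utilities[adv, slot]})")
print("total utility:", utilities[rows, cols].sum())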

Cite as

Paul Dütting, Monika Henzinger, and Ingmar Weber. Sponsored Search, Market Equilibria, and the Hungarian Method. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 287-298, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{dutting_et_al:LIPIcs.STACS.2010.2463,
  author =	{D\"{u}tting, Paul and Henzinger, Monika and Weber, Ingmar},
  title =	{{Sponsored Search, Market Equilibria, and the Hungarian Method}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{287--298},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2463},
  URN =		{urn:nbn:de:0030-drops-24636},
  doi =		{10.4230/LIPIcs.STACS.2010.2463},
  annote =	{Keywords: Stable matching, truthful matching mechanism, general position}
}
Document
Dispersion in Unit Disks

Authors: Adrian Dumitrescu and Minghui Jiang


Abstract
We present two new approximation algorithms with (improved) constant ratios for selecting $n$ points in $n$ unit disks such that the minimum pairwise distance among the points is maximized. (I) A very simple $O(n \log{n})$-time algorithm with ratio $0.5110$ for disjoint unit disks. In combination with an algorithm of Cabello~\cite{Ca07}, it yields an $O(n^2)$-time algorithm with a ratio of $0.4487$ for dispersion in $n$ not necessarily disjoint unit disks. (II) A more sophisticated LP-based algorithm with ratio $0.6495$ for disjoint unit disks that uses a linear number of variables and constraints, and runs in polynomial time. The algorithm introduces a novel technique which combines linear programming and projections for approximating distances. The previous best approximation ratio for disjoint unit disks was $\frac{1}{2}$. Our results give a partial answer to an open question raised by Cabello~\cite{Ca07}, who asked whether $\frac{1}{2}$ could be improved.

Cite as

Adrian Dumitrescu and Minghui Jiang. Dispersion in Unit Disks. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 299-310, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{dumitrescu_et_al:LIPIcs.STACS.2010.2464,
  author =	{Dumitrescu, Adrian and Jiang, Minghui},
  title =	{{Dispersion in Unit Disks}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{299--310},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2464},
  URN =		{urn:nbn:de:0030-drops-24646},
  doi =		{10.4230/LIPIcs.STACS.2010.2464},
  annote =	{Keywords: Dispersion problem, linear programming, approximation algorithm}
}
Document
Long Non-crossing Configurations in the Plane

Authors: Adrian Dumitrescu and Csaba D. Tóth


Abstract
We revisit several maximization problems for geometric network design under the non-crossing constraint, first studied by Alon, Rajagopalan and Suri (ACM Symposium on Computational Geometry, 1993). Given a set of $n$ points in the plane in general position (no three points collinear), compute a longest non-crossing configuration composed of straight line segments that is (a) a matching, (b) a Hamiltonian path, or (c) a spanning tree. Here we obtain new results for (b) and (c), as well as for the Hamiltonian cycle problem: (i) For the longest non-crossing Hamiltonian path problem, we give an approximation algorithm with ratio $\frac{2}{\pi+1} \approx 0.4829$. The previous best ratio, due to Alon et al., was $1/\pi \approx 0.3183$. Moreover, the ratio of our algorithm is close to $2/\pi$ on a relatively broad class of instances: for point sets whose perimeter (or diameter) is much shorter than the maximum length matching. The algorithm runs in $O(n^{7/3}\log{n})$ time. (ii) For the longest non-crossing spanning tree problem, we give an approximation algorithm with ratio $0.502$ which runs in $O(n \log{n})$ time. The previous ratio, $1/2$, due to Alon et al., was achieved by a quadratic-time algorithm. Along the way, we first re-derive the result of Alon et al. with a faster $O(n \log{n})$-time algorithm and a very simple analysis. (iii) For the longest non-crossing Hamiltonian cycle problem, we give an approximation algorithm whose ratio is close to $2/\pi$ on a relatively broad class of instances: for point sets with the product $\bf{\langle}$~diameter~$\times$~convex hull size~$\bf{\rangle}$ much smaller than the maximum length matching. The algorithm runs in $O(n^{7/3}\log{n})$ time. No previous approximation results were known for this problem.

Cite as

Adrian Dumitrescu and Csaba D. Tóth. Long Non-crossing Configurations in the Plane. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 311-322, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{dumitrescu_et_al:LIPIcs.STACS.2010.2465,
  author =	{Dumitrescu, Adrian and T\'{o}th, Csaba D.},
  title =	{{Long Non-crossing Configurations in the Plane}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{311--322},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2465},
  URN =		{urn:nbn:de:0030-drops-24655},
  doi =		{10.4230/LIPIcs.STACS.2010.2465},
  annote =	{Keywords: Longest non-crossing Hamiltonian path, longest non-crossing Hamiltonian cycle, longest non-crossing spanning tree, approximation algorithm.}
}
Document
The Complexity of Approximating Bounded-Degree Boolean #CSP

Authors: Martin Dyer, Leslie Ann Goldberg, Markus Jalsenius, and David Richerby


Abstract
The degree of a CSP instance is the maximum number of times that a variable may appear in the scope of constraints. We consider the approximate counting problem for Boolean CSPs with bounded-degree instances, for constraint languages containing the two unary constant relations $\{0\}$ and $\{1\}$. When the maximum degree is at least $25$, we obtain a complete classification of the complexity of this problem. It is solvable exactly in polynomial time if every relation in the constraint language is affine. It is equivalent to the problem of approximately counting independent sets in bipartite graphs if every relation can be expressed as conjunctions of $\{0\}$, $\{1\}$ and binary implication. Otherwise, there is no FPRAS unless $\NPtime = \RPtime$. For lower degree bounds, additional cases arise in which the complexity is related to the complexity of approximately counting independent sets in hypergraphs.

Cite as

Martin Dyer, Leslie Ann Goldberg, Markus Jalsenius, and David Richerby. The Complexity of Approximating Bounded-Degree Boolean #CSP. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 323-334, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{dyer_et_al:LIPIcs.STACS.2010.2466,
  author =	{Dyer, Martin and Goldberg, Leslie Ann and Jalsenius, Markus and Richerby, David},
  title =	{{The Complexity of Approximating Bounded-Degree Boolean #CSP}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{323--334},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2466},
  URN =		{urn:nbn:de:0030-drops-24669},
  doi =		{10.4230/LIPIcs.STACS.2010.2466},
  annote =	{Keywords: Boolean constraint satisfaction problem, generalized satisfiability, counting, approximation algorithms}
}
Document
The Complexity of the List Homomorphism Problem for Graphs

Authors: László Egri, Andrei Krokhin, Benoit Larose, and Pascal Tesson


Abstract
We completely classify the computational complexity of the list $\bH$-colouring problem for graphs (with possible loops) in combinatorial and algebraic terms: for every graph $\bH$ the problem is either NP-complete, NL-complete, L-complete or is first-order definable; descriptive complexity equivalents are given as well via Datalog and its fragments. Our algebraic characterisations match important conjectures in the study of constraint satisfaction problems.

Cite as

László Egri, Andrei Krokhin, Benoit Larose, and Pascal Tesson. The Complexity of the List Homomorphism Problem for Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 335-346, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{egri_et_al:LIPIcs.STACS.2010.2467,
  author =	{Egri, L\'{a}szl\'{o} and Krokhin, Andrei and Larose, Benoit and Tesson, Pascal},
  title =	{{The Complexity of the List Homomorphism Problem for Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{335--346},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2467},
  URN =		{urn:nbn:de:0030-drops-24675},
  doi =		{10.4230/LIPIcs.STACS.2010.2467},
  annote =	{Keywords: Graph homomorphism, constraint satisfaction problem, complexity, universal algebra, Datalog}
}
Document
Improved Approximation Guarantees for Weighted Matching in the Semi-Streaming Model

Authors: Leah Epstein, Asaf Levin, Julián Mestre, and Danny Segev


Abstract
We study the maximum weight matching problem in the semi-streaming model, and improve on the currently best one-pass algorithm due to Zelke (Proc.\ STACS~'08, pages 669--680) by devising a deterministic approach whose performance guarantee is $4.91 + \eps$. In addition, we study {\em preemptive} online algorithms, a sub-class of one-pass algorithms where we are only allowed to maintain a feasible matching in memory at any point in time. All known results prior to Zelke's belong to this sub-class. We provide a lower bound of $4.967$ on the competitive ratio of any such deterministic algorithm, and hence show that future improvements will have to store in memory a set of edges which is not necessarily a feasible matching. We conclude by presenting an empirical study, conducted in order to compare the practical performance of our approach to that of previously suggested algorithms.
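The preemptive sub-class discussed above can be made concrete with the following generic one-pass rule (a standard textbook-style sketch, not the authors' $4.91+\eps$ algorithm): keep a feasible matching in memory and accept a newly arrived edge only if its weight exceeds $(1+\gamma)$ times the total weight of the matched edges it conflicts with, evicting those. The threshold $\gamma$ and the function name are our own choices.

# Generic preemptive one-pass matching rule (not the paper's improved algorithm):
# keep a feasible matching M; on arrival of edge (u, v, w), take it only if
# w > (1 + gamma) * (total weight of edges of M sharing u or v), evicting those.
def preemptive_matching(edge_stream, gamma=0.5):
    matched = {}          # vertex -> the matched edge (u, v, w) covering it, if any
    for u, v, w in edge_stream:
        conflicts = {e for e in (matched.get(u), matched.get(v)) if e is not None}
        if w > (1 + gamma) * sum(e[2] for e in conflicts):
            for a, b, _ in conflicts:          # evict conflicting matched edges
                matched.pop(a, None)
                matched.pop(b, None)
            matched[u] = matched[v] = (u, v, w)  # insert the new edge
    return set(matched.values())

if __name__ == "__main__":
    stream = [(1, 2, 1.0), (2, 3, 2.0), (3, 4, 1.5), (1, 4, 5.0)]
    for u, v, w in sorted(preemptive_matching(stream)):
        print(u, v, w)     # final matching: (1,4,5.0) and (2,3,2.0)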

Cite as

Leah Epstein, Asaf Levin, Julián Mestre, and Danny Segev. Improved Approximation Guarantees for Weighted Matching in the Semi-Streaming Model. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 347-358, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{epstein_et_al:LIPIcs.STACS.2010.2476,
  author =	{Epstein, Leah and Levin, Asaf and Mestre, Juli\'{a}n and Segev, Danny},
  title =	{{Improved Approximation Guarantees for Weighted Matching in the Semi-Streaming Model}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{347--358},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2476},
  URN =		{urn:nbn:de:0030-drops-24766},
  doi =		{10.4230/LIPIcs.STACS.2010.2476},
  annote =	{Keywords: Approximation guarantees, semi-streaming model, one-pass algorithm}
}
Document
Computing Least Fixed Points of Probabilistic Systems of Polynomials

Authors: Javier Esparza, Andreas Gaiser, and Stefan Kiefer


Abstract
We study systems of equations of the form $X_1 = f_1(X_1, \ldots, X_n), \ldots, X_n = f_n(X_1, \ldots, X_n)$ where each $f_i$ is a polynomial with nonnegative coefficients that add up to~$1$. The least nonnegative solution, say~$\mu$, of such equation systems is central to problems from various areas, like physics, biology, computational linguistics and probabilistic program verification. We give a simple and strongly polynomial algorithm to decide whether $\mu=(1,\ldots,1)$ holds. Furthermore, we present an algorithm that computes reliable sequences of lower and upper bounds on~$\mu$, converging linearly to~$\mu$. Our algorithm has these features despite using inexact arithmetic for efficiency. We report on experiments that show the performance of our algorithms.
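A minimal way to see why $\mu$ can be approached by iteration (though without the reliable upper bounds the paper provides) is Kleene iteration: starting from the all-zero vector and repeatedly applying the system converges to the least nonnegative solution from below. The small example system below is our own.

# Minimal sketch: Kleene iteration x_{k+1} = f(x_k) starting from 0 converges
# from below to the least nonnegative fixed point mu of such a system.
# This is not the paper's algorithm, which also certifies lower and upper bounds.
def kleene_iteration(f, n, steps=200):
    x = [0.0] * n
    for _ in range(steps):
        x = f(x)
    return x

# Example with nonnegative coefficients summing to 1 in the equation:
#   X1 = 0.25 + 0.75 * X1^2   (extinction probability of a branching process)
# Its least nonnegative solution is 1/3; the other solution is 1.
f = lambda x: [0.25 + 0.75 * x[0] ** 2]
print(kleene_iteration(f, 1))   # approx. 0.3333..., approaching 1/3 from below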

Cite as

Javier Esparza, Andreas Gaiser, and Stefan Kiefer. Computing Least Fixed Points of Probabilistic Systems of Polynomials. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 359-370, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{esparza_et_al:LIPIcs.STACS.2010.2468,
  author =	{Esparza, Javier and Gaiser, Andreas and Kiefer, Stefan},
  title =	{{Computing Least Fixed Points of Probabilistic Systems of Polynomials}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{359--370},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2468},
  URN =		{urn:nbn:de:0030-drops-24685},
  doi =		{10.4230/LIPIcs.STACS.2010.2468},
  annote =	{Keywords: Computing fixed points, numerical approximation, stochastic models, branching processes}
}
Document
The k-in-a-path Problem for Claw-free Graphs

Authors: Jiri Fiala, Marcin Kaminski, Bernard Lidický, and Daniël Paulusma


Abstract
Testing whether there is an induced path in a graph spanning $k$ given vertices is already \NP-complete in general graphs when $k=3$. We show how to solve this problem in polynomial time on claw-free graphs, when $k$ is not part of the input but an arbitrarily fixed integer.

Cite as

Jiri Fiala, Marcin Kaminski, Bernard Lidický, and Daniël Paulusma. The k-in-a-path Problem for Claw-free Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 371-382, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{fiala_et_al:LIPIcs.STACS.2010.2469,
  author =	{Fiala, Jiri and Kaminski, Marcin and Lidick\'{y}, Bernard and Paulusma, Dani\"{e}l},
  title =	{{The k-in-a-path Problem for Claw-free Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{371--382},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2469},
  URN =		{urn:nbn:de:0030-drops-24692},
  doi =		{10.4230/LIPIcs.STACS.2010.2469},
  annote =	{Keywords: Induced path, claw-free graph, polynomial-time algorithm}
}
Document
Finding Induced Subgraphs via Minimal Triangulations

Authors: Fedor V. Fomin and Yngve Villanger


Abstract
Potential maximal cliques and minimal separators are combinatorial objects which were introduced and studied in the realm of minimal triangulation problems including Minimum Fill-in and Treewidth. We discover unexpected applications of these notions to the field of moderately exponential algorithms. In particular, we show that, given an $n$-vertex graph $G$ together with its set of potential maximal cliques and an integer $t$, it is possible, in time proportional to the number of potential maximal cliques times $n^{O(t)}$, to find a maximum induced subgraph of treewidth $t$ in $G$ and, for a given graph $F$ of treewidth $t$, to decide whether $G$ contains an induced subgraph isomorphic to $F$. Combined with an improved algorithm enumerating all potential maximal cliques in time $O(1.734601^n)$, this yields that both problems are solvable in time $1.734601^n \cdot n^{O(t)}$.

Cite as

Fedor V. Fomin and Yngve Villanger. Finding Induced Subgraphs via Minimal Triangulations. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 383-394, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.STACS.2010.2470,
  author =	{Fomin, Fedor V. and Villanger, Yngve},
  title =	{{Finding Induced Subgraphs via Minimal Triangulations}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{383--394},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2470},
  URN =		{urn:nbn:de:0030-drops-24708},
  doi =		{10.4230/LIPIcs.STACS.2010.2470},
  annote =	{Keywords: Bounded treewidth, minimal triangulation, moderately exponential time algorithms}
}
Document
Inseparability and Strong Hypotheses for Disjoint NP Pairs

Authors: Lance Fortnow, Jack H. Lutz, and Elvira Mayordomo


Abstract
This paper investigates the existence of inseparable disjoint pairs of NP languages and related strong hypotheses in computational complexity. Our main theorem says that, if NP does not have measure 0 in EXP, then there exist disjoint pairs of NP languages that are P-inseparable, in fact TIME$(2^{n^k})$-inseparable. We also relate these conditions to strong hypotheses concerning randomness and genericity of disjoint pairs.

Cite as

Lance Fortnow, Jack H. Lutz, and Elvira Mayordomo. Inseparability and Strong Hypotheses for Disjoint NP Pairs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 395-404, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{fortnow_et_al:LIPIcs.STACS.2010.2471,
  author =	{Fortnow, Lance and Lutz, Jack H. and Mayordomo, Elvira},
  title =	{{Inseparability and Strong Hypotheses for Disjoint NP Pairs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{395--404},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2471},
  URN =		{urn:nbn:de:0030-drops-24711},
  doi =		{10.4230/LIPIcs.STACS.2010.2471},
  annote =	{Keywords: Computational Complexity, Disjoint NP-pairs, Resource-Bounded Measure, Genericity}
}
Document
Branching-time Model Checking of One-counter Processes

Authors: Stefan Göller and Markus Lohrey


Abstract
One-counter processes (OCPs) are pushdown processes which operate only on a unary stack alphabet. We study the computational complexity of model checking computation tree logic ($\CTL$) over OCPs. A $\PSPACE$ upper bound is inherited from the modal $\mu$-calculus for this problem. First, we analyze the periodic behaviour of $\CTL$ over OCPs and derive a model checking algorithm whose running time is exponential only in the number of control locations and a syntactic notion of the formula that we call leftward until depth. Thus, model checking fixed OCPs against $\CTL$ formulas with a fixed leftward until depth is in $\P$. This generalizes a result of the first author, Mayr, and To for the expression complexity of $\CTL$'s fragment $\EF$. Second, we prove that already over some fixed OCP, $\CTL$ model checking is $\PSPACE$-hard. Third, we show that there already exists a fixed $\CTL$ formula for which model checking of OCPs is $\PSPACE$-hard. For the latter, we employ two results from complexity theory: (i) Converting a natural number in Chinese remainder presentation into binary presentation is in logspace-uniform $\NC^1$ and (ii) $\PSPACE$ is $\AC^0$-serializable. We demonstrate that our approach can be used to answer further open questions.

Cite as

Stefan Göller and Markus Lohrey. Branching-time Model Checking of One-counter Processes. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 405-416, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{goller_et_al:LIPIcs.STACS.2010.2472,
  author =	{G\"{o}ller, Stefan and Lohrey, Markus},
  title =	{{Branching-time Model Checking of One-counter Processes}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{405--416},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2472},
  URN =		{urn:nbn:de:0030-drops-24722},
  doi =		{10.4230/LIPIcs.STACS.2010.2472},
  annote =	{Keywords: Model checking, computation tree logic, complexity theory}
}
Document
Evolving Multialgebras Unify All Usual Sequential Computation Models

Authors: Serge Grigorieff and Pierre Valarcher


Abstract
It is well-known that Abstract State Machines (ASMs) can simulate ``step-by-step'' any type of machine (Turing machines, RAMs, etc.). We aim to overcome two facts: 1) simulation is not identification, 2) the ASMs simulating machines of some type do not constitute a natural class among all ASMs. We modify Gurevich's notion of ASM to that of EMA (``Evolving MultiAlgebra'') by replacing the program (which is a syntactic object) by a semantic object: a functional which has to be very simply definable over the static part of the ASM. We prove that very natural classes of EMAs correspond via ``literal identifications'' to slight extensions of the usual machine models and also to grammar models. Though we modify these models, we keep their computational approach: only some contingencies are modified. Thus, EMAs appear as the mathematical model unifying all kinds of sequential computation paradigms.

Cite as

Serge Grigorieff and Pierre Valarcher. Evolving Multialgebras Unify All Usual Sequential Computation Models. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 417-428, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{grigorieff_et_al:LIPIcs.STACS.2010.2473,
  author =	{Grigorieff, Serge and Valarcher, Pierre},
  title =	{{Evolving Multialgebras Unify All Usual Sequential Computation Models}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{417--428},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2473},
  URN =		{urn:nbn:de:0030-drops-24737},
  doi =		{10.4230/LIPIcs.STACS.2010.2473},
  annote =	{Keywords: Abstract state machines, Models of machines, Computability, Universality, Logic in computer science, Theory of algorithms}
}
Document
Collapsing and Separating Completeness Notions under Average-Case and Worst-Case Hypotheses

Authors: Xiaoyang Gu, John M. Hitchcock, and Aduri Pavan


Abstract
This paper presents the following results on sets that are complete for $\NP$. \begin{enumerate} \item If there is a problem in $\NP$ that requires $\twonO$ time at almost all lengths, then every many-one NP-complete set is complete under length-increasing reductions that are computed by polynomial-size circuits. \item If there is a problem in $\CoNP$ that cannot be solved by polynomial-size nondeterministic circuits, then every many-one complete set is complete under length-increasing reductions that are computed by polynomial-size circuits. \item If there exists a one-way permutation that is secure against subexponential-size circuits and there is a hard tally language in $\NP \cap \CoNP$, then there is a Turing complete language for $\NP$ that is not many-one complete. \end{enumerate} Our first two results use worst-case hardness hypotheses, whereas earlier work that showed similar results relied on average-case or almost-everywhere hardness assumptions. The use of average-case and worst-case hypotheses in the last result is unique, as previous results obtaining the same consequence relied on almost-everywhere hardness results.

Cite as

Xiaoyang Gu, John M. Hitchcock, and Aduri Pavan. Collapsing and Separating Completeness Notions under Average-Case and Worst-Case Hypotheses. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 429-440, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{gu_et_al:LIPIcs.STACS.2010.2462,
  author =	{Gu, Xiaoyang and Hitchcock, John M. and Pavan, Aduri},
  title =	{{Collapsing and Separating Completeness Notions under Average-Case and Worst-Case Hypotheses}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{429--440},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2462},
  URN =		{urn:nbn:de:0030-drops-24627},
  doi =		{10.4230/LIPIcs.STACS.2010.2462},
  annote =	{Keywords: Computational complexity, NP-completeness}
}
Document
Revisiting the Rice Theorem of Cellular Automata

Authors: Pierre Guillon and Gaétan Richard


Abstract
A cellular automaton is a parallel synchronous computing model, which consists in a juxtaposition of finite automata whose state evolves according to that of their neighbors. It induces a dynamical system on the set of configurations, \ie the infinite sequences of cell states. The limit set of the cellular automaton is the set of configurations which can be reached arbitrarily late in the evolution. In this paper, we prove that all properties of limit sets of cellular automata with binary-state cells are undecidable, except surjectivity. This is a refinement of the classical ``Rice Theorem'' that Kari proved on cellular automata with arbitrary state sets.

Cite as

Pierre Guillon and Gaétan Richard. Revisiting the Rice Theorem of Cellular Automata. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 441-452, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{guillon_et_al:LIPIcs.STACS.2010.2474,
  author =	{Guillon, Pierre and Richard, Ga\'{e}tan},
  title =	{{Revisiting the Rice Theorem of Cellular Automata}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{441--452},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2474},
  URN =		{urn:nbn:de:0030-drops-24744},
  doi =		{10.4230/LIPIcs.STACS.2010.2474},
  annote =	{Keywords: Cellular automata, undecidability}
}
Document
On Optimal Heuristic Randomized Semidecision Procedures, with Application to Proof Complexity

Authors: Edward A. Hirsch and Dmitry Itsykson


Abstract
The existence of a ($p$-)optimal propositional proof system is a major open question in (proof) complexity; many people conjecture that such systems do not exist. Kraj\'{\i}\v{c}ek and Pudl\'{a}k \cite{KP} show that this question is equivalent to the existence of an algorithm that is optimal\footnote{Recent papers \cite{Monroe} call such algorithms \emph{$p$-optimal} while traditionally Levin's algorithm was called \emph{optimal}. We follow the older tradition. Also there is some mess in terminology here, thus please see formal definitions in Sect.~\ref{sec:prelim} below.} on all propositional tautologies. Monroe \cite{Monroe} recently gave a conjecture implying that such algorithm does not exist. We show that in the presence of errors such optimal algorithms \emph{do} exist. The concept is motivated by the notion of heuristic algorithms. Namely, we allow the algorithm to claim a small number of false ``theorems'' (according to any polynomial-time samplable distribution on non-tautologies) and err with bounded probability on other inputs. Our result can also be viewed as the existence of an optimal proof system in a class of proof systems obtained by generalizing automatizable proof systems.

Cite as

Edward A. Hirsch and Dmitry Itsykson. On Optimal Heuristic Randomized Semidecision Procedures, with Application to Proof Complexity. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 453-464, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{hirsch_et_al:LIPIcs.STACS.2010.2475,
  author =	{Hirsch, Edward A. and Itsykson, Dmitry},
  title =	{{On Optimal Heuristic Randomized Semidecision Procedures, with Application to Proof Complexity}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{453--464},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2475},
  URN =		{urn:nbn:de:0030-drops-24753},
  doi =		{10.4230/LIPIcs.STACS.2010.2475},
  annote =	{Keywords: Propositional proof complexity, optimal algorithm}
}
Document
Weakening Assumptions for Deterministic Subexponential Time Non-Singular Matrix Completion

Authors: Maurice Jansen


Abstract
Kabanets and Impagliazzo \cite{KaIm04} show how to decide the circuit polynomial identity testing problem (CPIT) in deterministic subexponential time, assuming hardness of some explicit multilinear polynomial family $\{f_m\}_{m \geq 1}$ for arithmetic circuits. In this paper, a special case of CPIT is considered, namely non-singular matrix completion ($\NSMC$) under a low-individual-degree promise. For this subclass of problems it is shown how to obtain the same deterministic time bound, using a weaker assumption in terms of the {\em determinantal complexity} $\dcomp(f_m)$ of $f_m$. Building on work by Agrawal \cite{Agr05}, hardness-randomness tradeoffs will also be shown in the converse direction, in an effort to make progress on Valiant's $\VP$ versus $\VNP$ problem. To separate $\VP$ and $\VNP$, it is known to be sufficient to prove that the determinantal complexity of the $m\times m$ permanent is $m^{\omega(\log m)}$. In this paper it is shown, for an appropriate notion of explicitness, that the existence of an explicit multilinear polynomial family $\{f_m\}_{m \geq 1}$ with $\dcomp(f_m) = m^{\omega(\log m)}$ is equivalent to the existence of an efficiently computable {\em generator} $\{G_n\}_{n\geq 1}$ {\em for} multilinear $\NSMC$ with seed length $O(n^{1/\sqrt{\log n}})$. The latter is a combinatorial object that provides an efficient deterministic black-box algorithm for $\NSMC$. ``Multilinear $\NSMC$'' indicates that $G_n$ only has to work for matrices $M(x)$ of $poly(n)$ size in $n$ variables, for which $\det(M(x))$ is a multilinear polynomial.

Cite as

Maurice Jansen. Weakening Assumptions for Deterministic Subexponential Time Non-Singular Matrix Completion. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 465-476, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{jansen:LIPIcs.STACS.2010.2477,
  author =	{Jansen, Maurice},
  title =	{{Weakening Assumptions for Deterministic Subexponential Time Non-Singular Matrix Completion}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{465--476},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2477},
  URN =		{urn:nbn:de:0030-drops-24770},
  doi =		{10.4230/LIPIcs.STACS.2010.2477},
  annote =	{Keywords: Computational complexity, arithmetic circuits, hardness-randomness tradeoffs, identity testing, determinant versus permanent}
}
Document
On Equations over Sets of Integers

Authors: Artur Jez and Alexander Okhotin


Abstract
Systems of equations with sets of integers as unknowns are considered. It is shown that the class of sets representable by unique solutions of equations using the operations of union and addition $S+T=\makeset{m+n}{m \in S, \: n \in T}$ and with ultimately periodic constants is exactly the class of hyper-arithmetical sets. Equations using addition only can represent every hyper-arithmetical set under a simple encoding. All hyper-arithmetical sets can also be represented by equations over sets of natural numbers equipped with union, addition and subtraction $S \dotminus T=\makeset{m-n}{m \in S, \: n \in T, \: m \geqslant n}$. Testing whether a given system has a solution is $\Sigma^1_1$-complete for each model. These results, in particular, settle the expressive power of the most general types of language equations, as well as equations over subsets of free groups.
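To make the two operations concrete, the following lines evaluate $S+T$ and $S \dotminus T$ on small finite sets; the paper itself works with infinite sets and ultimately periodic constants, so this is only an illustration of the definitions.

# Concrete illustration of the two set operations on *finite* sets of integers.
def set_add(S, T):
    # S + T = { m + n : m in S, n in T }
    return {m + n for m in S for n in T}

def set_monus(S, T):
    # S -. T = { m - n : m in S, n in T, m >= n }
    return {m - n for m in S for n in T if m >= n}

S, T = {0, 2, 5}, {1, 2}
print(set_add(S, T))     # {1, 2, 3, 4, 6, 7}
print(set_monus(S, T))   # {0, 1, 3, 4}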

Cite as

Artur Jez and Alexander Okhotin. On Equations over Sets of Integers. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 477-488, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{jez_et_al:LIPIcs.STACS.2010.2478,
  author =	{Jez, Artur and Okhotin, Alexander},
  title =	{{On Equations over Sets of Integers}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{477--488},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2478},
  URN =		{urn:nbn:de:0030-drops-24780},
  doi =		{10.4230/LIPIcs.STACS.2010.2478},
  annote =	{Keywords: Language equations, computability, arithmetical hierarchy, hyper-arithmetical hierarchy}
}
Document
Randomized Algorithm for Agreeable Deadlines Packet Scheduling

Authors: Lukasz Jez


Abstract
In 2005, Li~et~al. gave a \(\phi\)-competitive deterministic online algorithm for scheduling packets with agreeable deadlines~\cite{DBLP:conf/soda/LiSS05}, with a very interesting analysis. This is known to be optimal due to a lower bound by Hajek~\cite{Hajek-det-lb}. We claim that the algorithm by Li~et~al. can be slightly simplified while retaining its competitive ratio. Then we introduce randomness to the modified algorithm and argue that its competitive ratio against an oblivious adversary is at most \(\frac{4}{3}\). Note that a gap remains to the best known lower bound of \(\frac{5}{4}\), due to Chin~et~al.~\cite{DBLP:journals/algorithmica/ChinF03}, for randomized algorithms against an oblivious adversary.

Cite as

Lukasz Jez. Randomized Algorithm for Agreeable Deadlines Packet Scheduling. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 489-500, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{jez:LIPIcs.STACS.2010.2479,
  author =	{Jez, Lukasz},
  title =	{{Randomized Algorithm for Agreeable Deadlines Packet Scheduling}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{489--500},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2479},
  URN =		{urn:nbn:de:0030-drops-24795},
  doi =		{10.4230/LIPIcs.STACS.2010.2479},
  annote =	{Keywords: Online algorithms, scheduling, buffer management}
}
Document
Collapsible Pushdown Graphs of Level 2 are Tree-Automatic

Authors: Alexander Kartzow


Abstract
We show that graphs generated by collapsible pushdown systems of level $2$ are tree-automatic. Even when we allow $\varepsilon$-contractions and add a reachability predicate (with regular constraints) for pairs of configurations, the structures remain tree-automatic. Hence, their \FO theories are decidable, even when expanded by a reachability predicate. As a corollary, we obtain the tree-automaticity of the second level of the Caucal-hierarchy.

Cite as

Alexander Kartzow. Collapsible Pushdown Graphs of Level 2 are Tree-Automatic. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 501-512, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{kartzow:LIPIcs.STACS.2010.2480,
  author =	{Kartzow, Alexander},
  title =	{{Collapsible Pushdown Graphs of Level 2 are Tree-Automatic}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{501--512},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2480},
  URN =		{urn:nbn:de:0030-drops-24801},
  doi =		{10.4230/LIPIcs.STACS.2010.2480},
  annote =	{Keywords: Tree-automatic structures, collapsible pushdown graphs, collapsible pushdown systems, first-order decidability, reachability}
}
Document
Approximate Shortest Paths Avoiding a Failed Vertex: Optimal Size Data Structures for Unweighted Graphs

Authors: Neelesh Khanna and Surender Baswana


Abstract
Let $G=(V,E)$ be any undirected graph with vertex set $V$ and edge set $E$. A path $\textbf{P}$ between two vertices $u,v\in V$ is said to be a $t$-approximate shortest path if its length is at most $t$ times the length of the shortest path between $u$ and $v$. We consider the problem of building a compact data structure for a given graph $G$ which is capable of answering the following query for any $u,v,z\in V$ and $t>1$: \centerline{\em report a $t$-approximate shortest path between $u$ and $v$ when vertex $z$ fails} We present data structures for the single-source as well as the all-pairs version of this problem. Our data structures guarantee optimal query time. The most impressive feature of our data structures is that their size {\em nearly} matches that of their best static counterparts.
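For contrast with the compact data structures presented in the paper, the naive baseline answers each query from scratch by running a BFS in $G$ with the failed vertex removed. The sketch below (our own, for unweighted graphs) shows exactly that query semantics, at the cost of linear time per query and no preprocessing.

# Naive baseline for the query (not the paper's compact data structure):
# answer "shortest u-v distance avoiding failed vertex z" by a fresh BFS in G - z.
from collections import deque

def shortest_path_avoiding(adj, u, v, z):
    # BFS distance from u to v in the unweighted graph `adj` with vertex z removed.
    if u == z or v == z:
        return None
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in adj[x]:
            if y != z and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return None   # v is unreachable once z has failed

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(shortest_path_avoiding(adj, 1, 4, 2))   # 2, via the path 1-3-4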

Cite as

Neelesh Khanna and Surender Baswana. Approximate Shortest Paths Avoiding a Failed Vertex: Optimal Size Data Structures for Unweighted Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 513-524, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{khanna_et_al:LIPIcs.STACS.2010.2481,
  author =	{Khanna, Neelesh and Baswana, Surender},
  title =	{{Approximate Shortest Paths Avoiding a Failed Vertex: Optimal Size Data Structures for Unweighted Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{513--524},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2481},
  URN =		{urn:nbn:de:0030-drops-24812},
  doi =		{10.4230/LIPIcs.STACS.2010.2481},
  annote =	{Keywords: Shortest path, distance, distance queries, oracle}
}
Document
Holant Problems for Regular Graphs with Complex Edge Functions

Authors: Michael Kowalczyk and Jin-Yi Cai


Abstract
We prove a complexity dichotomy theorem for Holant Problems on $3$-regular graphs with an arbitrary complex-valued edge function. Three new techniques are introduced: (1) higher dimensional iterations in interpolation; (2) Eigenvalue Shifted Pairs, which allow us to prove that a pair of combinatorial gadgets \emph{in combination} succeed in proving \#P-hardness; and (3) algebraic symmetrization, which significantly lowers the \emph{symbolic complexity} of the proof for computational complexity. With \emph{holographic reductions} the classification theorem also applies to problems beyond the basic model.

Cite as

Michael Kowalczyk and Jin-Yi Cai. Holant Problems for Regular Graphs with Complex Edge Functions. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 525-536, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{kowalczyk_et_al:LIPIcs.STACS.2010.2482,
  author =	{Kowalczyk, Michael and Cai, Jin-Yi},
  title =	{{Holant Problems for Regular Graphs with Complex Edge Functions}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{525--536},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2482},
  URN =		{urn:nbn:de:0030-drops-24826},
  doi =		{10.4230/LIPIcs.STACS.2010.2482},
  annote =	{Keywords: Computational complexity}
}
Document
Is Ramsey's Theorem omega-automatic?

Authors: Dietrich Kuske


Abstract
We study the existence of infinite cliques in $\omega$-automatic (hyper-)graphs. It turns out that the situation is much nicer than in general uncountable graphs, but not as nice as for automatic graphs. More specifically, we show that every uncountable $\omega$-automatic graph contains an uncountable co-context-free clique or anticlique, but not necessarily a context-free (let alone regular) clique or anticlique. We also show that uncountable $\omega$-automatic ternary hypergraphs need not have uncountable cliques or anticliques at all.

Cite as

Dietrich Kuske. Is Ramsey's Theorem omega-automatic?. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 537-548, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{kuske:LIPIcs.STACS.2010.2483,
  author =	{Kuske, Dietrich},
  title =	{{Is Ramsey's Theorem omega-automatic?}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{537--548},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2483},
  URN =		{urn:nbn:de:0030-drops-24838},
  doi =		{10.4230/LIPIcs.STACS.2010.2483},
  annote =	{Keywords: Logic in computer science, automata, Ramsey theory}
}
Document
An Efficient Quantum Algorithm for Some Instances of the Group Isomorphism Problem

Authors: François Le Gall


Abstract
In this paper we consider the problem of testing whether two finite groups are isomorphic. Whereas the case where both groups are abelian is well understood and can be solved efficiently, very little is known about the complexity of isomorphism testing for nonabelian groups. Le Gall has constructed an efficient classical algorithm for a class of groups corresponding to one of the most natural ways of constructing nonabelian groups from abelian groups: the groups that are extensions of an abelian group $A$ by a cyclic group $\Int_m$ with the order of $A$ coprime with $m$. More precisely, the running time of that algorithm is almost linear in the order of the input groups. In this paper we present a \emph{quantum} algorithm solving the same problem in time polynomial in the \emph{logarithm} of the order of the input groups. This algorithm works in the black-box setting and is the first quantum algorithm solving instances of the nonabelian group isomorphism problem exponentially faster than the best known classical algorithms.

Cite as

François Le Gall. An Efficient Quantum Algorithm for Some Instances of the Group Isomorphism Problem. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 549-560, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{legall:LIPIcs.STACS.2010.2484,
  author =	{Le Gall, Fran\c{c}ois},
  title =	{{An Efficient Quantum Algorithm for Some Instances of the Group Isomorphism Problem}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{549--560},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2484},
  URN =		{urn:nbn:de:0030-drops-24843},
  doi =		{10.4230/LIPIcs.STACS.2010.2484},
  annote =	{Keywords: Quantum algorithms, group isomorphism problem, black-box groups}
}
Document
Treewidth Reduction for Constrained Separation and Bipartization Problems

Authors: Dániel Marx, Barry O'Sullivan, and Igor Razgon


Abstract
We present a method for reducing the treewidth of a graph while preserving all the minimal $s$-$t$ separators. This technique turns out to be very useful for establishing the fixed-parameter tractability of constrained separation and bipartization problems. To demonstrate the power of this technique, we prove the fixed-parameter tractability of a number of well-known separation and bipartization problems with various additional restrictions (e.g., the vertices being removed from the graph form an independent set). These results answer a number of open questions in the area of parameterized complexity.

Cite as

Dániel Marx, Barry O'Sullivan, and Igor Razgon. Treewidth Reduction for Constrained Separation and Bipartization Problems. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 561-572, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{marx_et_al:LIPIcs.STACS.2010.2485,
  author =	{Marx, D\'{a}niel and O'Sullivan, Barry and Razgon, Igor},
  title =	{{Treewidth Reduction for Constrained Separation and Bipartization Problems}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{561--572},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2485},
  URN =		{urn:nbn:de:0030-drops-24850},
  doi =		{10.4230/LIPIcs.STACS.2010.2485},
  annote =	{Keywords: Fixed-parameter algorithms, graph separation problems, treewidth}
}
Document
Online Correlation Clustering

Authors: Claire Mathieu, Ocan Sankur, and Warren Schudy


Abstract
We study the online clustering problem in which data items arrive one at a time. The algorithm maintains a clustering of the data items into similarity classes. Upon arrival of v, the relation between v and the previously arrived items is revealed, i.e., for each u we are told whether v is similar to u. The algorithm can create a new cluster for v and merge existing clusters. When the objective is to minimize disagreements between the clustering and the input, we prove that a natural greedy algorithm is O(n)-competitive, and that this is optimal. When the objective is to maximize agreements between the clustering and the input, we prove that the greedy algorithm is 0.5-competitive and that no online algorithm can be better than 0.834-competitive; we also show that it is possible to beat 1/2 by exhibiting a randomized algorithm with competitive ratio 0.5+c for a small fixed constant c>0.
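The online model above is easy to make concrete. The Python sketch below shows one common formulation of a greedy step, in which the new item either joins (and thereby merges) all clusters containing items similar to it, or opens a singleton cluster. The exact greedy rule analyzed in the paper may differ in details, so treat this as an illustration of the online model rather than as the authors' algorithm.

def greedy_online_step(clusters, v, similar_to_v):
    # clusters: list of disjoint sets of previously arrived items
    # similar_to_v: set of previously arrived items declared similar to v
    touched = [c for c in clusters if c & similar_to_v]
    rest = [c for c in clusters if not (c & similar_to_v)]
    if not touched:
        rest.append({v})                    # no similar predecessor: open a new cluster
    else:
        merged = set().union(*touched)      # merge every cluster that v connects to
        merged.add(v)
        rest.append(merged)
    return rest

# Usage: start with clusters = [] and call greedy_online_step once per arriving
# item, together with its revealed similarity information.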

Cite as

Claire Mathieu, Ocan Sankur, and Warren Schudy. Online Correlation Clustering. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 573-584, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{mathieu_et_al:LIPIcs.STACS.2010.2486,
  author =	{Mathieu, Claire and Sankur, Ocan and Schudy, Warren},
  title =	{{Online Correlation Clustering}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{573--584},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2486},
  URN =		{urn:nbn:de:0030-drops-24862},
  doi =		{10.4230/LIPIcs.STACS.2010.2486},
  annote =	{Keywords: Correlation clustering, online algorithms}
}
Document
The Recognition of Tolerance and Bounded Tolerance Graphs

Authors: George B. Mertzios, Ignasi Sau, and Shmuel Zaks


Abstract
Tolerance graphs model interval relations in such a way that intervals can tolerate a certain degree of overlap without being in conflict. This subclass of perfect graphs has been extensively studied, due to both its interesting structure and its numerous applications. Several efficient algorithms for optimization problems that are NP-hard on general graphs have been designed for tolerance graphs. In spite of this, the recognition of tolerance graphs --~namely, the problem of deciding whether a given graph is a tolerance graph~-- as well as the recognition of their main subclass of bounded tolerance graphs, have been the most fundamental open problems on this class of graphs (cf.~the book on tolerance graphs~\cite{GolTol04}) since their introduction in 1982~\cite{GoMo82}. In this article we prove that both recognition problems are NP-complete, even in the case where the input graph is a trapezoid graph. The presented results are surprising because, on the one hand, most subclasses of perfect graphs admit polynomial recognition algorithms and, on the other hand, bounded tolerance graphs were believed to be efficiently recognizable as they are a natural special case of trapezoid graphs (which can be recognized in polynomial time) and share a very similar structure with them. For our reduction we extend the notion of an \emph{acyclic orientation} of permutation and trapezoid graphs. Our main tool is a new algorithm that uses \emph{vertex splitting} to transform a given trapezoid graph into a permutation graph, while preserving this new acyclic orientation property. This method of vertex splitting is of independent interest; very recently, it has been proved a powerful tool also in the design of efficient recognition algorithms for other classes of graphs~\cite{MC-Trapezoid}.

Cite as

George B. Mertzios, Ignasi Sau, and Shmuel Zaks. The Recognition of Tolerance and Bounded Tolerance Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 585-596, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{mertzios_et_al:LIPIcs.STACS.2010.2487,
  author =	{Mertzios, George B. and Sau, Ignasi and Zaks, Shmuel},
  title =	{{The Recognition of Tolerance and Bounded Tolerance Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{585--596},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2487},
  URN =		{urn:nbn:de:0030-drops-24876},
  doi =		{10.4230/LIPIcs.STACS.2010.2487},
  annote =	{Keywords: Tolerance graphs, bounded tolerance graphs, recognition, vertex splitting, NP-complete, trapezoid graphs, permutation graphs}
}
Document
Decidability of the Interval Temporal Logic ABB over the Natural Numbers

Authors: Angelo Montanari, Gabriele Puppis, Pietro Sala, and Guido Sciavicco


Abstract
In this paper, we focus our attention on the interval temporal logic of Allen's relations ``meets'', ``begins'', and ``begun by'' (ABB for short), interpreted over the natural numbers. We first introduce the logic and show that it is expressive enough to model distinctive interval properties, such as accomplishment conditions, to capture basic modalities of point-based temporal logic, such as the until operator, and to encode relevant metric constraints. Then, we prove that the satisfiability problem for ABB over the natural numbers is decidable by providing a small model theorem based on an original contraction method. Finally, we prove the EXPSPACE-completeness of the problem.
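For readers unfamiliar with the three modalities, the sketch below spells out one common reading of Allen's relations ``meets'', ``begins'', and ``begun by'' on intervals [a, b] of natural numbers. The precise semantics adopted in the paper (e.g., whether point intervals are admitted, and which argument plays the role of the prefix) may differ; this is only an orientation aid.

def meets(i, j):
    # [a, b] meets [c, d] iff b == c: the first interval ends where the second starts
    return i[1] == j[0]

def begins(i, j):
    # [a, b] begins [c, d] iff a == c and b < d: same left endpoint, proper prefix
    return i[0] == j[0] and i[1] < j[1]

def begun_by(i, j):
    # the inverse relation of "begins"
    return begins(j, i)

# Example: [0, 3] meets [3, 7]; [0, 3] begins [0, 7]; [0, 7] is begun by [0, 3].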

Cite as

Angelo Montanari, Gabriele Puppis, Pietro Sala, and Guido Sciavicco. Decidability of the Interval Temporal Logic ABB over the Natural Numbers. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 597-608, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{montanari_et_al:LIPIcs.STACS.2010.2488,
  author =	{Montanari, Angelo and Puppis, Gabriele and Sala, Pietro and Sciavicco, Guido},
  title =	{{Decidability of the Interval Temporal Logic ABB over the Natural Numbers}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{597--608},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2488},
  URN =		{urn:nbn:de:0030-drops-24884},
  doi =		{10.4230/LIPIcs.STACS.2010.2488},
  annote =	{Keywords: Interval temporal logics, compass structures, decidability, complexity}
}
Document
Relaxed Spanners for Directed Disk Graphs

Authors: David Peleg and Liam Roditty


Abstract
Let $(V,\delta)$ be a finite metric space, where $V$ is a set of $n$ points and $\delta$ is a distance function defined for these points. Assume that $(V,\delta)$ has a constant doubling dimension $d$ and assume that each point $p\in V$ has a disk of radius $r(p)$ around it. The disk graph that corresponds to $V$ and $r(\cdot)$ is a \emph{directed} graph $I(V,E,r)$, whose vertices are the points of $V$ and whose edge set includes a directed edge from $p$ to $q$ if $\delta(p,q)\leq r(p)$. In~\cite{PeRo08} we presented an algorithm for constructing a $(1+\varepsilon)$-spanner of size $O(n/\varepsilon^d \log M)$, where $M$ is the maximal radius $r(p)$. The current paper presents two results. The first shows that the spanner of~\cite{PeRo08} is essentially optimal, i.e., for metrics of constant doubling dimension it is not possible to guarantee a spanner whose size is independent of $M$. The second result shows that by slightly relaxing the requirements and allowing a small perturbation of the radius assignment, considerably better spanners can be constructed. In particular, we show that if it is allowed to use edges of the disk graph $I(V,E,r_{1+\varepsilon})$, where $r_{1+\varepsilon}(p) = (1+\varepsilon)\cdot r(p)$ for every $p\in V$, then it is possible to get a $(1+\varepsilon)$-spanner of size $O(n/\varepsilon^d)$ for $I(V,E,r)$. Our algorithm is simple and can be implemented efficiently.
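The directed disk graph itself is determined entirely by the point set, the metric, and the radius assignment, so it can be built directly from the definition above; the sketch below does exactly that, for both $I(V,E,r)$ and its $(1+\varepsilon)$-relaxed variant. The function name and the use of the Euclidean metric are illustrative assumptions, and the spanner construction of the paper is not reproduced here.

from math import dist  # Euclidean distance; any metric delta(p, q) would do

def disk_graph_edges(points, radius, eps=0.0):
    # points: list of coordinate tuples; radius[i] = r(p_i)
    # returns the directed edges (i, j) with delta(p_i, p_j) <= (1 + eps) * r(p_i)
    scale = 1.0 + eps
    edges = set()
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i != j and dist(p, q) <= scale * radius[i]:
                edges.add((i, j))
    return edges

# With eps = 0 this is I(V, E, r); with eps > 0 it is the relaxed graph
# I(V, E, r_{1+eps}) whose edges the relaxed spanner is allowed to use.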

Cite as

David Peleg and Liam Roditty. Relaxed Spanners for Directed Disk Graphs. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 609-620, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{peleg_et_al:LIPIcs.STACS.2010.2489,
  author =	{Peleg, David and Roditty, Liam},
  title =	{{Relaxed Spanners for Directed Disk Graphs}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{609--620},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2489},
  URN =		{urn:nbn:de:0030-drops-24898},
  doi =		{10.4230/LIPIcs.STACS.2010.2489},
  annote =	{Keywords: Spanners, directed graphs}
}
Document
Unsatisfiable Linear CNF Formulas Are Large and Complex

Authors: Dominik Scheder


Abstract
We call a CNF formula {\em linear} if any two clauses have at most one variable in common. We show that there exist unsatisfiable linear $k$-CNF formulas with at most $4k^2\cdot 4^k$ clauses, and on the other hand, any linear $k$-CNF formula with at most $\frac{4^k}{8e^2k^2}$ clauses is satisfiable. The upper bound uses probabilistic means, and we have no explicit construction coming even close to it. One reason for this is that unsatisfiable linear formulas exhibit a more complex structure than general (non-linear) formulas: First, any treelike resolution refutation of any unsatisfiable linear $k$-CNF formula has size at least $2^{2^{\frac{k}{2}-1}}$. This implies that small unsatisfiable linear $k$-CNF formulas are hard instances for Davis-Putnam style splitting algorithms. Second, if we require that the formula $F$ have a {\em strict} resolution tree, i.e., every clause of $F$ is used only once in the resolution tree, then we need at least $a^{a^{\iddots^a}}$ clauses, where $a \approx 2$ and the height of this tower is roughly $k$.
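The linearity condition is purely syntactic and easy to test. The short sketch below is a direct transcription of the definition, with clauses represented as sets of DIMACS-style signed integers (a representation chosen here purely for illustration).

def is_linear(clauses):
    # clauses: iterable of clauses, each a set of non-zero integers
    # (positive = positive literal, negative = negated literal)
    variable_sets = [frozenset(abs(lit) for lit in clause) for clause in clauses]
    return all(len(variable_sets[i] & variable_sets[j]) <= 1
               for i in range(len(variable_sets))
               for j in range(i + 1, len(variable_sets)))

# {x1, x2}, {-x2, x3}, {x1, x3} pairwise share at most one variable:
assert is_linear([{1, 2}, {-2, 3}, {1, 3}])
# {x1, x2, x3} and {x1, x2, x4} share two variables, so the formula is not linear:
assert not is_linear([{1, 2, 3}, {1, 2, 4}])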

Cite as

Dominik Scheder. Unsatisfiable Linear CNF Formulas Are Large and Complex. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 621-632, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{scheder:LIPIcs.STACS.2010.2490,
  author =	{Scheder, Dominik},
  title =	{{Unsatisfiable Linear CNF Formulas Are Large and Complex}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{621--632},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2490},
  URN =		{urn:nbn:de:0030-drops-24901},
  doi =		{10.4230/LIPIcs.STACS.2010.2490},
  annote =	{Keywords: Extremal combinatorics, proof complexity, probabilistic method}
}
Document
Construction Sequences and Certifying 3-Connectedness

Authors: Jens M. Schmidt


Abstract
Given two $3$-connected graphs $G$ and $H$, a \emph{construction sequence} constructs $G$ from $H$ (e.g., from the $K_4$) with three basic operations, called the \emph{Barnette-Gr\"unbaum operations}. These operations are known to be able to construct all $3$-connected graphs. We extend this result by identifying every intermediate graph in the construction sequence with a subdivision in $G$ and showing, under some minor assumptions, that there is still a construction sequence to $G$ when we start from an \emph{arbitrary prescribed} $H$-subdivision. This leads to the first algorithm that computes a construction sequence in time $O(|V(G)|^2)$. As an application, we develop a certificate for the $3$-connectedness of graphs that can be easily computed and verified. Based on this, a certifying test for $3$-connectedness is designed.

Cite as

Jens M. Schmidt. Construction Sequences and Certifying 3-Connectedness. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 633-644, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{schmidt:LIPIcs.STACS.2010.2491,
  author =	{Schmidt, Jens M.},
  title =	{{Construction Sequences and Certifying 3-Connectedness}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{633--644},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2491},
  URN =		{urn:nbn:de:0030-drops-24918},
  doi =		{10.4230/LIPIcs.STACS.2010.2491},
  annote =	{Keywords: Construction sequence, 3-connected graph, nested subdivisions, inductive characterization, 3-connectedness, certifying algorithm}
}
Document
Named Models in Coalgebraic Hybrid Logic

Authors: Lutz Schröder and Dirk Pattinson


Abstract
Hybrid logic extends modal logic with support for reasoning about individual states, designated by so-called nominals. We study hybrid logic in the broad context of coalgebraic semantics, where Kripke frames are replaced with coalgebras for a given functor, thus covering a wide range of reasoning principles including, e.g., probabilistic, graded, default, or coalitional operators. Specifically, we establish generic criteria for a given coalgebraic hybrid logic to admit named canonical models, with ensuing completeness proofs for pure extensions on the one hand, and for an extended hybrid language with local binding on the other. We instantiate our framework with a number of examples. Notably, we prove completeness of graded hybrid logic with local binding.

Cite as

Lutz Schröder and Dirk Pattinson. Named Models in Coalgebraic Hybrid Logic. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 645-656, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{schroder_et_al:LIPIcs.STACS.2010.2492,
  author =	{Schr\"{o}der, Lutz and Pattinson, Dirk},
  title =	{{Named Models in Coalgebraic Hybrid Logic}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{645--656},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2492},
  URN =		{urn:nbn:de:0030-drops-24920},
  doi =		{10.4230/LIPIcs.STACS.2010.2492},
  annote =	{Keywords: Logic in computer science, semantics, deduction, modal logic, coalgebra}
}
Document
A Dichotomy Theorem for the General Minimum Cost Homomorphism Problem

Authors: Rustem Takhanov


Abstract
In the constraint satisfaction problem ($CSP$), the aim is to find an assignment of values to a set of variables subject to specified constraints. In the minimum cost homomorphism problem ($MinHom$), one is additionally given weights $c_{va}$ for every variable $v$ and value $a$, and the aim is to find an assignment $f$ to the variables that minimizes $\sum_{v} c_{vf(v)}$. Let $MinHom\left( \Gamma \right)$ denote the $MinHom$ problem parameterized by the set of predicates allowed for constraints. $MinHom\left( \Gamma \right)$ is related to many well-studied combinatorial optimization problems, and concrete applications can be found in, for instance, defence logistics and machine learning. We show that $MinHom\left( \Gamma \right)$ can be studied by using algebraic methods similar to those used for CSPs. With the aid of algebraic techniques, we classify the computational complexity of $MinHom\left( \Gamma \right)$ for all choices of $\Gamma$. Our result settles a general dichotomy conjecture previously resolved only for certain classes of directed graphs, [Gutin, Hell, Rafiey, Yeo, European J. of Combinatorics, 2008].
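To make the objective concrete, here is a brute-force evaluator for a tiny $MinHom(\Gamma)$ instance, written directly from the definition above (variables, a finite domain, constraints given as sets of allowed tuples, and weights $c_{va}$). It is an exponential-time illustration of what is being minimized, not an algorithm from the paper.

from itertools import product

def min_hom(variables, domain, constraints, cost):
    # constraints: list of (scope, relation) pairs, where scope is a tuple of
    # variables and relation is a set of allowed value tuples over that scope
    # cost: cost[v][a] = c_{va}
    best = None
    for values in product(domain, repeat=len(variables)):
        f = dict(zip(variables, values))
        if all(tuple(f[v] for v in scope) in relation
               for scope, relation in constraints):
            total = sum(cost[v][f[v]] for v in variables)
            if best is None or total < best[0]:
                best = (total, f)
    return best  # (minimum cost, optimal assignment), or None if no assignment satisfies the constraints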

Cite as

Rustem Takhanov. A Dichotomy Theorem for the General Minimum Cost Homomorphism Problem. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 657-668, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{takhanov:LIPIcs.STACS.2010.2493,
  author =	{Takhanov, Rustem},
  title =	{{A Dichotomy Theorem for the General Minimum Cost Homomorphism Problem}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{657--668},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2493},
  URN =		{urn:nbn:de:0030-drops-24936},
  doi =		{10.4230/LIPIcs.STACS.2010.2493},
  annote =	{Keywords: Minimum cost homomorphisms problem, relational clones, constraint satisfaction problem, perfect graphs, supervised learning}
}
Document
Alternation-Trading Proofs, Linear Programming, and Lower Bounds

Authors: Ryan Williams


Abstract
A fertile area of recent research has demonstrated concrete polynomial time lower bounds for solving natural hard problems on restricted computational models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path, $\text{MOD}_6\text{-SAT}$, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs of these lower bounds follow a certain proof-by-contradiction strategy that we call {\em alternation-trading}. An important open problem is to determine how powerful such proofs can possibly be. We propose a methodology for studying these proofs that makes them amenable to both formal analysis and automated theorem proving. We prove that the search for better lower bounds can often be turned into a problem of solving a large series of linear programming instances. Implementing a small-scale theorem prover based on this result, we extract new human-readable time lower bounds for several problems. This framework can also be used to prove concrete limitations on the current techniques.

Cite as

Ryan Williams. Alternation-Trading Proofs, Linear Programming, and Lower Bounds. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 669-680, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{williams:LIPIcs.STACS.2010.2494,
  author =	{Williams, Ryan},
  title =	{{Alternation-Trading Proofs, Linear Programming, and Lower Bounds}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{669--680},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2494},
  URN =		{urn:nbn:de:0030-drops-24940},
  doi =		{10.4230/LIPIcs.STACS.2010.2494},
  annote =	{Keywords: Time-space tradeoffs, lower bounds, alternation, linear programming}
}
