Volume

LIPIcs, Volume 18

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)




Event

FSTTCS 2012, December 15-17, 2012, Hyderabad, India

Editors

Deepak D'Souza
Jaikumar Radhakrishnan
Kavitha Telikepalli

Publication Details

  • Published at: 2012-12-14
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-47-7
  • DBLP: db/conf/fsttcs/fsttcs2012

Documents

Document
Complete Volume
LIPIcs, Volume 18, FSTTCS'12, Complete Volume

Authors: Deepak D'Souza, Telikepalli Kavitha, and Jaikumar Radhakrishnan


Abstract
LIPIcs, Volume 18, FSTTCS'12, Complete Volume

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Proceedings{dsouza_et_al:LIPIcs.FSTTCS.2012,
  title =	{{LIPIcs, Volume 18, FSTTCS'12, Complete Volume}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012},
  URN =		{urn:nbn:de:0030-drops-41121},
  doi =		{10.4230/LIPIcs.FSTTCS.2012},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems, Specifying and Verifying and Reasoning about Programs, Mathematical Logic, Formal Languages}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization

Authors: Deepak D'Souza, Jaikumar Radhakrishnan, and Kavitha Telikepalli


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. i-xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{dsouza_et_al:LIPIcs.FSTTCS.2012.i,
  author =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{i--xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.i},
  URN =		{urn:nbn:de:0030-drops-38411},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization}
}
Document
Learning Mixtures of Distributions over Large Discrete Domains

Authors: Yuval Rabani


Abstract
We discuss recent results giving algorithms for learning mixtures of unstructured distributions.

Cite as

Yuval Rabani. Learning Mixtures of Distributions over Large Discrete Domains. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{rabani:LIPIcs.FSTTCS.2012.1,
  author =	{Rabani, Yuval},
  title =	{{Learning Mixtures of Distributions over Large Discrete Domains}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{1--3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.1},
  URN =		{urn:nbn:de:0030-drops-38428},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.1},
  annote =	{Keywords: machine learning, mixture models, topic models}
}
Document
Imperative Programming in Sets with Atoms

Authors: Mikolaj Bojanczyk and Szymon Torunczyk


Abstract
We define an imperative programming language, which extends while programs with a type for storing atoms or hereditarily orbit-finite sets. To deal with an orbit-finite set, the language has a loop construction, which is executed in parallel for all elements of an orbit-finite set. We show examples of programs in this language, e.g. a program for minimising deterministic orbit-finite automata.

Cite as

Mikolaj Bojanczyk and Szymon Torunczyk. Imperative Programming in Sets with Atoms. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 4-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{bojanczyk_et_al:LIPIcs.FSTTCS.2012.4,
  author =	{Bojanczyk, Mikolaj and Torunczyk, Szymon},
  title =	{{Imperative Programming in Sets with Atoms}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{4--15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.4},
  URN =		{urn:nbn:de:0030-drops-38437},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.4},
  annote =	{Keywords: Nominal sets, sets with atoms, while programs}
}
Document
Algorithmic Improvements of the Lovász Local Lemma via Cluster Expansion

Authors: Dimitris Achlioptas and Themis Gouleakis


Abstract
The Lovász Local Lemma (LLL) is a powerful tool that can be used to prove, via the probabilistic method, that an object having none of a set of bad properties exists. In many applications of the LLL it is also desirable to explicitly construct the combinatorial object. Recently it was shown that this is possible using a randomized algorithm in the full asymmetric LLL setting [R. Moser and G. Tardos, 2010]. A strengthening of the LLL for the case of dense local neighborhoods proved in [R. Bissacot et al., 2010] was recently also made constructive in [W. Pegden, 2011]. In another recent work [B. Haeupler, B. Saha, A. Srinivasan, 2010], it was proved that the algorithm of Moser and Tardos is still efficient even when the number of events is exponential. Here we prove that these last two contributions can be combined to yield a new version of the LLL.
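
As a concrete illustration of the constructive LLL discussed above, here is a minimal sketch of the Moser-Tardos resampling scheme applied to k-SAT; the clause encoding and function name are our own illustrative choices, not taken from the paper.

import random

def moser_tardos_ksat(clauses, num_vars, rng=random.Random(0)):
    # Each clause is a list of non-zero ints: literal v means variable |v|
    # must be True, -v means it must be False.  In the LLL regime (each
    # clause shares variables with few others) this terminates quickly
    # with high probability.
    assign = [rng.choice([False, True]) for _ in range(num_vars + 1)]

    def violated(clause):
        # A clause is violated when every one of its literals is false.
        return all(assign[abs(lit)] != (lit > 0) for lit in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign[1:]
        # Resample all variables of one violated clause and repeat.
        for lit in bad[0]:
            assign[abs(lit)] = rng.choice([False, True])

# Example: (x1 or x2) and (not x1 or x3)
print(moser_tardos_ksat([[1, 2], [-1, 3]], num_vars=3))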

Cite as

Dimitris Achlioptas and Themis Gouleakis. Algorithmic Improvements of the Lovász Local Lemma via Cluster Expansion. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 16-23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{achlioptas_et_al:LIPIcs.FSTTCS.2012.16,
  author =	{Achlioptas, Dimitris and Gouleakis, Themis},
  title =	{{Algorithmic Improvements of the Lov\'{a}sz Local Lemma via Cluster Expansion}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{16--23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.16},
  URN =		{urn:nbn:de:0030-drops-38440},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.16},
  annote =	{Keywords: Probabilistic Method, Lov\'{a}sz Local Lemma, Algorithms}
}
Document
Test Generation Using Symbolic Execution

Authors: Patrice Godefroid


Abstract
This paper presents a short introduction to automatic code-driven test generation using symbolic execution. It discusses some key technical challenges, solutions and milestones, but is not an exhaustive survey of this research area.
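
To make the idea concrete, the following toy sketch (our illustration, not the paper's system) symbolically explores the two paths of a tiny function and asks an SMT solver for an input that drives each path; it assumes the z3-solver Python package is available.

from z3 import Int, Not, Solver, sat   # assumes the z3-solver package is installed

def program(x):                 # tiny program under test
    if x > 10:
        return "big"
    return "small"

x = Int("x")                    # symbolic counterpart of the concrete input
for path_constraint in (x > 10, Not(x > 10)):   # one constraint per program path
    solver = Solver()
    solver.add(path_constraint)
    if solver.check() == sat:
        test_input = solver.model()[x].as_long()
        print(path_constraint, "->", test_input, program(test_input))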

Cite as

Patrice Godefroid. Test Generation Using Symbolic Execution. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 24-33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{godefroid:LIPIcs.FSTTCS.2012.24,
  author =	{Godefroid, Patrice},
  title =	{{Test Generation Using Symbolic Execution}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{24--33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.24},
  URN =		{urn:nbn:de:0030-drops-38456},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.24},
  annote =	{Keywords: Testing, Symbolic Execution, Verification, Test Generation}
}
Document
Automated Reasoning and Natural Proofs for Programs Manipulating Data Structures

Authors: Madhusudan Parthasarathy


Abstract
We consider the problem of automatically verifying programs that manipulate a dynamic heap, maintaining complex and multiple data-structures, given modular pre-post conditions and loop invariants. We discuss specification logics for heaps, and discuss two classes of automatic procedures for reasoning with these logics. The first identifies fragments of logics that admit completely decidable reasoning. The second is a new approach called the natural proof method that builds proof procedures for very expressive logics that are automatic and sound (but incomplete), and that embody natural proof tactics learnt from manual verification.

Cite as

Madhusudan Parthasarathy. Automated Reasoning and Natural Proofs for Programs Manipulating Data Structures. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 34-35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{parthasarathy:LIPIcs.FSTTCS.2012.34,
  author =	{Parthasarathy, Madhusudan},
  title =	{{Automated Reasoning and Natural Proofs for Programs Manipulating Data Structures}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{34--35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.34},
  URN =		{urn:nbn:de:0030-drops-38897},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.34},
  annote =	{Keywords: logic, heap structures, data structures, program verification}
}
Document
Certifying polynomials for AC^0(parity) circuits, with applications

Authors: Swastik Kopparty and Srikanth Srinivasan


Abstract
In this paper, we introduce and develop the method of certifying polynomials for proving AC^0 circuit lower bounds. We use this method to show that Approximate Majority cannot be computed by AC^0(parity) circuits of size n^{1 + o(1)}. This implies a separation between the power of AC^0(parity) circuits of near-linear size and uniform AC^0(parity) (and even AC^0) circuits of polynomial size. This also implies a separation between randomized AC^0(parity) circuits of linear size and deterministic AC^0(parity) circuits of near-linear size. Our proof using certifying polynomials extends the deterministic restrictions technique of Chaudhuri and Radhakrishnan, who showed that Approximate Majority cannot be computed by AC^0 circuits of size n^{1+o(1)}. At the technical level, we show that for every AC^0(parity) circuit C of near-linear size, there is a low-degree variety V over F_2 such that the restriction of C to V is constant. We also prove other results exploring various aspects of the power of certifying polynomials. In the process, we show an essentially optimal lower bound of Omega(log^{Theta(d)} s * log(1/epsilon)) on the degree of epsilon-approximating polynomials for AC^0(parity) circuits of size s.

Cite as

Swastik Kopparty and Srikanth Srinivasan. Certifying polynomials for AC^0(parity) circuits, with applications. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 36-47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{kopparty_et_al:LIPIcs.FSTTCS.2012.36,
  author =	{Kopparty, Swastik and Srinivasan, Srikanth},
  title =	{{Certifying polynomials for AC^0(parity) circuits, with applications}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{36--47},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.36},
  URN =		{urn:nbn:de:0030-drops-38467},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.36},
  annote =	{Keywords: Constant-depth Boolean circuits, Polynomials over finite fields, Size hierarchies}
}
Document
Randomly-oriented k-d Trees Adapt to Intrinsic Dimension

Authors: Santosh S. Vempala


Abstract
The classic k-d tree data structure continues to be widely used in spite of its vulnerability to the so-called curse of dimensionality. Here we provide a rigorous explanation: for randomly rotated data, a k-d tree adapts to the intrinsic dimension of the data and is not affected by the ambient dimension, thus keeping the data structure efficient for objects such as low-dimensional manifolds and sparse data. The main insight of the analysis can be used as an algorithmic pre-processing step to realize the same benefit: rotate the data randomly; then build a k-d tree. Our work can be seen as a refinement of Random Projection trees [Dasgupta 2008], which also adapt to intrinsic dimension but incur higher traversal costs as the resulting cells are polyhedra and not cuboids. Using k-d trees after a random rotation results in cells that are cuboids, thus preserving the traversal efficiency of standard k-d trees.
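
The pre-processing step described above is easy to state in code; the sketch below (illustrative only, using numpy and scipy rather than anything from the paper, with parameters of our choosing) applies a random orthogonal transform before building a standard k-d tree.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Points near a 2-dimensional subspace of R^50, i.e. low intrinsic dimension.
n, D, d = 1000, 50, 2
basis = rng.standard_normal((d, D))
data = rng.standard_normal((n, d)) @ basis

# Random rotation: orthogonalize a Gaussian matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
tree = cKDTree(data @ Q)                      # ordinary k-d tree on rotated data

query = rng.standard_normal((1, d)) @ basis   # fresh point from the same subspace
dist, idx = tree.query(query @ Q, k=1)        # nearest neighbour in rotated space
print(dist, idx)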

Cite as

Santosh S. Vempala. Randomly-oriented k-d Trees Adapt to Intrinsic Dimension. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 48-57, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{vempala:LIPIcs.FSTTCS.2012.48,
  author =	{Vempala, Santosh S.},
  title =	{{Randomly-oriented k-d Trees Adapt to Intrinsic Dimension}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{48--57},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.48},
  URN =		{urn:nbn:de:0030-drops-38470},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.48},
  annote =	{Keywords: Data structures, Nearest Neighbors, Intrinsic Dimension, k-d Tree}
}
Document
Lower Bounds for the Average and Smoothed Number of Pareto Optima

Authors: Navin Goyal and Luis Rademacher


Abstract
Smoothed analysis of multiobjective 0-1 linear optimization has drawn considerable attention recently. In this literature, the number of Pareto-optimal solutions (i.e., solutions with the property that no other solution is at least as good in all the coordinates and better in at least one) for multiobjective optimization problems is the central object of study. In this paper, we prove several lower bounds for the expected number of Pareto optima. Our basic result is a lower bound of Omega_d(n^{d-1}) for optimization problems with d objectives and n variables, under fairly general conditions on the distributions of the linear objectives. Our proof relates the problem of lower bounding the number of Pareto optima to results in discrete geometry and geometric probability connected to arrangements of hyperplanes. We use our basic result to derive (1) the first lower bound, to our knowledge, for natural multiobjective optimization problems; we illustrate this for the maximum spanning tree problem with randomly chosen edge weights, and our technique is sufficiently flexible to yield such lower bounds for other standard objective functions studied in this setting (such as multiobjective shortest path, TSP tour, matching); and (2) a smoothed lower bound of min(Omega_d(n^{d-1.5} phi^{(d - log d)(1 - Theta(1/phi))}), 2^{Theta(n)}) for a version of the 0-1 knapsack problem with d profits under phi-semirandom distributions. This improves the recent lower bound of Brunsch and Röglin.
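
To make the central object concrete, here is a small brute-force sketch (illustrative only; names and parameters are ours) that counts the Pareto-optimal 0-1 solutions of a random instance with d linear objectives.

import itertools, random

def count_pareto_optima(n, d, rng=random.Random(0)):
    # d random linear objectives over 0-1 vectors of length n (maximization).
    weights = [[rng.random() for _ in range(n)] for _ in range(d)]
    solutions = list(itertools.product((0, 1), repeat=n))
    values = [tuple(sum(w[i] * x[i] for i in range(n)) for w in weights)
              for x in solutions]

    def dominated(v):
        # v is dominated if some other value vector is at least as good
        # everywhere and strictly better somewhere.
        return any(u != v and all(u[j] >= v[j] for j in range(d)) for u in values)

    return sum(1 for v in values if not dominated(v))

print(count_pareto_optima(n=10, d=2))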

Cite as

Navin Goyal and Luis Rademacher. Lower Bounds for the Average and Smoothed Number of Pareto Optima. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 58-69, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{goyal_et_al:LIPIcs.FSTTCS.2012.58,
  author =	{Goyal, Navin and Rademacher, Luis},
  title =	{{Lower Bounds for the Average and Smoothed Number of Pareto Optima}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{58--69},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.58},
  URN =		{urn:nbn:de:0030-drops-38488},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.58},
  annote =	{Keywords: geometric probability, smoothed analysis, multiobjective optimization, Pareto optimal, knapsack}
}
Document
Exponential Space Improvement for minwise Based Algorithms

Authors: Guy Feigenblat, Ely Porat, and Ariel Shiftan


Abstract
In this paper we introduce a general framework that exponentially improves the space, the degree of independence, and the time needed by min-wise based algorithms. In SODA 2011, the authors introduced an exponential time improvement for min-wise based algorithms by defining and constructing an almost k-min-wise independent family of hash functions. Here we develop an alternative approach that achieves both exponential time and exponential space improvement. The new approach relaxes the need for approximately min-wise hash functions and hence gets around the Omega(log(1/epsilon)) independence lower bound in [Patrascu 2010]. This is done by defining and constructing a d-k-min-wise independent family of hash functions. Surprisingly, for most cases only 8-wise independence is needed for the additional improvement. Moreover, as the degree of independence is a small constant, our function can be implemented efficiently. Informally, under this definition, all subsets of size d of any fixed set X have an equal probability to have hash values among the minimal k values in X, where the probability is over the random choice of hash function from the family. This property measures the randomness of the family, as choosing a truly random function obviously satisfies the definition for d=k=|X|. We define and give an efficient time and space construction of an approximately d-k-min-wise independent family of hash functions for the case d=2, as this is sufficient for the additional exponential improvement. We discuss how this construction can be used to improve many min-wise based algorithms. To our knowledge, such definitions for hash functions have not been studied before, and no construction was previously given. As an example, we show how to apply it to similarity and rarity estimation over data streams. Other min-wise based algorithms can be adjusted in the same way.
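
For background, the sketch below shows the textbook min-wise (MinHash) estimator of Jaccard similarity that such hash families are used to accelerate; it is a generic illustration with stand-in hashing, not the paper's d-k-min-wise construction.

import random

def minhash_signature(items, k, seed=0):
    # k independent min-wise hash values of a set; the salted built-in hash
    # is only an illustrative stand-in for a proper hash family.
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    # The probability that two sets agree on a min-wise hash value equals
    # their Jaccard similarity, so we average agreements over k trials.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A, B = set(range(0, 1000)), set(range(500, 1500))
print(estimated_jaccard(minhash_signature(A, 200), minhash_signature(B, 200)))
# true Jaccard similarity of A and B is 500/1500 = 1/3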

Cite as

Guy Feigenblat, Ely Porat, and Ariel Shiftan. Exponential Space Improvement for minwise Based Algorithms. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 70-85, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{feigenblat_et_al:LIPIcs.FSTTCS.2012.70,
  author =	{Feigenblat, Guy and Porat, Ely and Shiftan, Ariel},
  title =	{{Exponential Space Improvement for minwise Based Algorithms}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{70--85},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.70},
  URN =		{urn:nbn:de:0030-drops-38495},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.70},
  annote =	{Keywords: Streaming, Min-Wise, Hash Functions, Similarity, On line algorithms, Sub-linear algorithms}
}
Document
An effective characterization of the alternation hierarchy in two-variable logic

Authors: Andreas Krebs and Howard Straubing


Abstract
We characterize the languages in the individual levels of the quantifier alternation hierarchy of first-order logic with two variables by identities. This implies decidability of the individual levels. More generally we show that two-sided semidirect products with J as the right-hand factor preserve decidability.

Cite as

Andreas Krebs and Howard Straubing. An effective characterization of the alternation hierarchy in two-variable logic. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 86-98, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{krebs_et_al:LIPIcs.FSTTCS.2012.86,
  author =	{Krebs, Andreas and Straubing, Howard},
  title =	{{An effective characterization of the alternation hierarchy in two-variable logic}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{86--98},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.86},
  URN =		{urn:nbn:de:0030-drops-38501},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.86},
  annote =	{Keywords: FO^2, Quantifier Alternation, J, Pseudovarieties, Identities}
}
Document
Decidable classes of documents for XPath

Authors: Vince Bárány, Mikolaj Bojanczyk, Diego Figueira, and Pawel Parys


Abstract
We study the satisfiability problem for XPath over XML documents of bounded depth. We define two parameters, called match width and braid width, that assign a number to any class of documents. We show that for all k, satisfiability for XPath restricted to bounded depth documents with match width at most k is decidable; and that XPath is undecidable on any class of documents with unbounded braid width. We conjecture that these two parameters are equivalent, in the sense that a class of documents has bounded match width iff it has bounded braid width.

Cite as

Vince Bárány, Mikolaj Bojanczyk, Diego Figueira, and Pawel Parys. Decidable classes of documents for XPath. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 99-111, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{barany_et_al:LIPIcs.FSTTCS.2012.99,
  author =	{B\'{a}r\'{a}ny, Vince and Bojanczyk, Mikolaj and Figueira, Diego and Parys, Pawel},
  title =	{{Decidable classes of documents for XPath}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{99--111},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.99},
  URN =		{urn:nbn:de:0030-drops-38512},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.99},
  annote =	{Keywords: XPath, XML, class automata, data trees, data words, satisfiability}
}
Document
Faster Deciding MSO Properties of Trees of Fixed Height, and Some Consequences

Authors: Jakub Gajarsky and Petr Hlineny


Abstract
We prove, in the universe of trees of bounded height, that for any MSO formula with m variables there exists a set of kernels such that the size of each of these kernels can be bounded by an elementary function of m. This yields a faster MSO model checking algorithm for trees of bounded height than the one for general trees. From that we obtain, by means of interpretation, corresponding results for the classes of graphs of bounded tree-depth (MSO_2) and shrub-depth (MSO_1), and thus we give wide generalizations of Lampis' (ESA 2010) and Ganian's (IPEC 2011) results. In the second part of the paper we use this kernel structure to show that FO has the same expressive power as MSO_1 on the graph classes of bounded shrub-depth. This makes bounded shrub-depth a good candidate for characterization of the hereditary classes of graphs on which FO and MSO_1 coincide, a problem recently posed by Elberfeld, Grohe, and Tantau (LICS 2012).

Cite as

Jakub Gajarsky and Petr Hlineny. Faster Deciding MSO Properties of Trees of Fixed Height, and Some Consequences. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 112-123, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{gajarsky_et_al:LIPIcs.FSTTCS.2012.112,
  author =	{Gajarsky, Jakub and Hlineny, Petr},
  title =	{{Faster Deciding MSO Properties of Trees of Fixed Height, and Some Consequences}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{112--123},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.112},
  URN =		{urn:nbn:de:0030-drops-38553},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.112},
  annote =	{Keywords: MSO graph property, tree-width, tree-depth, shrub-depth}
}
Document
Cost-Parity and Cost-Streett Games

Authors: Nathanael Fijalkow and Martin Zimmermann


Abstract
We consider two-player games played on finite graphs equipped with costs on edges and introduce two winning conditions, cost-parity and cost-Streett, which require bounds on the cost between requests and their responses. Both conditions generalize the corresponding classical omega-regular conditions as well as the corresponding finitary conditions. For cost-parity games we show that the first player has positional winning strategies and that determining the winner lies in NP intersection Co-NP. For cost-Streett games we show that the first player has finite-state winning strategies and that determining the winner is EXPTIME-complete. This unifies the complexity results for the classical and finitary variants of these games. Both types of cost games can be solved by solving linearly many instances of their classical variants.

Cite as

Nathanael Fijalkow and Martin Zimmermann. Cost-Parity and Cost-Streett Games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 124-135, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{fijalkow_et_al:LIPIcs.FSTTCS.2012.124,
  author =	{Fijalkow, Nathanael and Zimmermann, Martin},
  title =	{{Cost-Parity and Cost-Streett Games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{124--135},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.124},
  URN =		{urn:nbn:de:0030-drops-38538},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.124},
  annote =	{Keywords: Parity Games, Streett Games, Costs, Scoring Functions}
}
Document
Super-Fast 3-Ruling Sets

Authors: Kishore Kothapalli and Sriram Pemmaraju


Abstract
A t-ruling set of a graph G = (V, E) is a vertex-subset S that is independent and satisfies the property that every vertex v in V is at a distance of at most t from some vertex in S. A maximal independent set (MIS) is a 1-ruling set. The problem of computing an MIS on a network is a fundamental problem in distributed algorithms, and the fastest algorithm for this problem is the O(log n)-round algorithm due to Luby (SICOMP 1986) and Alon et al. (J. Algorithms 1986) from more than 25 years ago. Since then the problem has resisted all efforts to yield to a sub-logarithmic round algorithm. There has been recent progress on this problem, most importantly an O(log Delta . sqrt(log n))-round algorithm on graphs with n vertices and maximum degree Delta, due to Barenboim et al. (to appear FOCS 2012). The time complexity of this algorithm is sub-logarithmic for Delta = 2^{o(sqrt(log n))}. We approach the MIS problem from a different angle and ask whether O(1)-ruling sets can be computed faster than the currently known fastest algorithm for an MIS. As an answer to this question, we show how to compute a 2-ruling set of an n-vertex graph in O((log n)^{3/4}) rounds. We also show that the above result can be improved for special classes of graphs. For instance, on high-girth graphs (girth 6 or more), trees, and graphs of bounded arboricity, we show how to compute 3-ruling sets in exp(O(sqrt(log log n))) rounds, O((log log n)^2 . log log log n) rounds, and O((log log n)^3) rounds, respectively. Our main technique involves randomized sparsification that rapidly reduces the graph degree while ensuring that every deleted vertex is close to some vertex that remains. This technique may have further applications in other contexts, e.g., in designing sub-logarithmic distributed approximation algorithms. Our results raise intriguing questions about how quickly an MIS (or 1-ruling sets) can be computed, given that 2-ruling sets can be computed in sub-logarithmic rounds.
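
The definition above is easy to check directly; the following small sketch (ours, purely to illustrate the definition rather than the distributed algorithms of the paper) verifies whether a given set is a t-ruling set of a graph.

from collections import deque

def is_t_ruling_set(adj, S, t):
    # adj maps each vertex to its neighbours; S must be independent and
    # every vertex must lie within distance t of some vertex of S.
    S = set(S)
    if any(v in S for u in S for v in adj[u]):        # independence check
        return False
    dist = {v: 0 for v in S}                          # multi-source BFS from S
    queue = deque(S)
    while queue:
        u = queue.popleft()
        if dist[u] == t:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return all(v in dist for v in adj)                # domination check

# Path on 5 vertices: {2} is a 2-ruling set, {0} is not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_t_ruling_set(path, {2}, 2), is_t_ruling_set(path, {0}, 2))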

Cite as

Kishore Kothapalli and Sriram Pemmaraju. Super-Fast 3-Ruling Sets. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 136-147, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{kothapalli_et_al:LIPIcs.FSTTCS.2012.136,
  author =	{Kothapalli, Kishore and Pemmaraju, Sriram},
  title =	{{Super-Fast 3-Ruling Sets}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{136--147},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.136},
  URN =		{urn:nbn:de:0030-drops-38549},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.136},
  annote =	{Keywords: MIS, ruling sets, graph sparsification, distributed algorithms}
}
Document
New bounds on the classical and quantum communication complexity of some graph properties

Authors: Gábor Ivanyos, Hartmut Klauck, Troy Lee, Miklos Santha, and Ronald de Wolf


Abstract
We study the communication complexity of a number of graph properties where the edges of the graph G are distributed between Alice and Bob (i.e., each receives some of the edges as input). Our main results are: 1. An Omega(n) lower bound on the quantum communication complexity of deciding whether an n-vertex graph G is connected, nearly matching the trivial classical upper bound of O(n log n) bits of communication. 2. A deterministic upper bound of O(n^{3/2} log n) bits for deciding if a bipartite graph contains a perfect matching, and a quantum lower bound of Omega(n) for this problem. 3. A Theta(n^2) bound for the randomized communication complexity of deciding if a graph has an Eulerian tour, and a Theta(n^{3/2}) bound for its quantum communication complexity. 4. The first two quantum lower bounds are obtained by exhibiting a reduction from the n-bit Inner Product problem to these graph problems, which solves an open question of Babai, Frankl and Simon [Babai et al 1986]. The third quantum lower bound comes from recent results about the quantum communication complexity of composed functions. We also obtain essentially tight bounds for the quantum communication complexity of a few other problems, such as deciding if G is triangle-free, or if G is bipartite, as well as computing the determinant of a distributed matrix.

Cite as

Gábor Ivanyos, Hartmut Klauck, Troy Lee, Miklos Santha, and Ronald de Wolf. New bounds on the classical and quantum communication complexity of some graph properties. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 148-159, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{ivanyos_et_al:LIPIcs.FSTTCS.2012.148,
  author =	{Ivanyos, G\'{a}bor and Klauck, Hartmut and Lee, Troy and Santha, Miklos and de Wolf, Ronald},
  title =	{{New bounds on the classical and quantum communication complexity of some graph properties}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{148--159},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.148},
  URN =		{urn:nbn:de:0030-drops-38523},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.148},
  annote =	{Keywords: Graph properties, communication complexity, quantum communication}
}
Document
On Bisimilarity of Higher-Order Pushdown Automata: Undecidability at Order Two

Authors: Christopher Broadbent and Stefan Göller


Abstract
We show that bisimulation equivalence of order-two pushdown automata is undecidable. Moreover, we study the lower order problem of higher-order pushdown automata, which asks, given an order-k pushdown automaton and some k'<k, to determine if there exists a reachable configuration that is bisimilar to some order-k' pushdown automaton. We show that the lower order problem is undecidable for each k >= 2 even when the input k-PDA is deterministic and real-time.

Cite as

Christopher Broadbent and Stefan Göller. On Bisimilarity of Higher-Order Pushdown Automata: Undecidability at Order Two. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 160-172, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{broadbent_et_al:LIPIcs.FSTTCS.2012.160,
  author =	{Broadbent, Christopher and G\"{o}ller, Stefan},
  title =	{{On Bisimilarity of Higher-Order Pushdown Automata: Undecidability at Order Two}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{160--172},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.160},
  URN =		{urn:nbn:de:0030-drops-38583},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.160},
  annote =	{Keywords: Bisimulation equivalence, Higher-order pushdown automata}
}
Document
Scope-bounded Multistack Pushdown Systems: Fixed-Point, Sequentialization, and Tree-Width

Authors: Salvatore La Torre and Gennaro Parlato


Abstract
We present a novel fixed-point algorithm to solve reachability of multi-stack pushdown systems restricted to runs where matching push and pop transitions happen within a bounded number of context switches. The followed approach is compositional, in the sense that the runs of the system are summarized by bounded-size interfaces. Moreover, it is suitable for a direct implementation and can be exploited to prove two new results. We give a sequentialization for this class of systems, i.e., for each such multi-stack pushdown system we construct an equivalent single-stack pushdown system that faithfully simulates the behavior of each thread. We prove that the behavior graphs (multiply nested words) for these systems have bounded tree-width, and thus a number of decidability results can be derived from Courcelle's theorem.

Cite as

Salvatore La Torre and Gennaro Parlato. Scope-bounded Multistack Pushdown Systems: Fixed-Point, Sequentialization, and Tree-Width. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 173-184, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{latorre_et_al:LIPIcs.FSTTCS.2012.173,
  author =	{La Torre, Salvatore and Parlato, Gennaro},
  title =	{{Scope-bounded Multistack Pushdown Systems: Fixed-Point, Sequentialization, and Tree-Width}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{173--184},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.173},
  URN =		{urn:nbn:de:0030-drops-38563},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.173},
  annote =	{Keywords: Model-checking, multi-threaded programs, sequentialization, tree-width}
}
Document
Scheduling with Setup Costs and Monotone Penalties

Authors: Rohit Khandekar, Kirsten Hildrum, Deepak Rajan, and Joel Wolf


Abstract
We consider single processor preemptive scheduling with job-dependent setup times. In this model, a job-dependent setup time is incurred when a job is started for the first time, and each time it is restarted after preemption. This model is a common generalization of preemptive scheduling, and actually of non-preemptive scheduling as well. The objective is to minimize the sum of any general non-negative, non-decreasing cost functions of the completion times of the jobs -- this generalizes objectives of minimizing weighted flow time, flow-time squared, tardiness or the number of tardy jobs among many others. Our main result is a randomized polynomial time O(1)-speed O(1)-approximation algorithm for this problem. Without speedup, no polynomial time finite multiplicative approximation is possible unless P=NP. We extend the approach of Bansal et al. (FOCS 2007) of rounding a linear programming relaxation which accounts for costs incurred due to the non-preemptive nature of the schedule. A key new idea used in the rounding is that a point in the intersection polytope of two matroids can be decomposed as a convex combination of incidence vectors of sets that are independent in both matroids. In fact, we use this for the intersection of a partition matroid and a laminar matroid, in which case the decomposition can be found efficiently using network flows. Our approach gives a randomized polynomial time offline O(1)-speed O(1)-approximation algorithm for the broadcast scheduling problem with general cost functions as well.

Cite as

Rohit Khandekar, Kirsten Hildrum, Deepak Rajan, and Joel Wolf. Scheduling with Setup Costs and Monotone Penalties. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 185-198, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{khandekar_et_al:LIPIcs.FSTTCS.2012.185,
  author =	{Khandekar, Rohit and Hildrum, Kirsten and Rajan, Deepak and Wolf, Joel},
  title =	{{Scheduling with Setup Costs and Monotone Penalties}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{185--198},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.185},
  URN =		{urn:nbn:de:0030-drops-38576},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.185},
  annote =	{Keywords: Scheduling, resource augmentation, approximation algorithm, preemption, setup times}
}
Document
Scheduling Resources for Executing a Partial Set of Jobs

Authors: Venkatesan T. Chakaravarthy, Arindam Pal, Sambuddha Roy, and Yogish Sabharwal


Abstract
In this paper, we consider the problem of choosing a minimum cost set of resources for executing a specified set of jobs. Each input job is an interval, determined by its start-time and end-time. Each resource is also an interval determined by its start-time and end-time; moreover, every resource has a capacity and a cost associated with it. We consider two versions of this problem. In the partial covering version, we are also given as input a number k, specifying the number of jobs that must be performed. The goal is to choose k jobs and find a minimum cost set of resources to perform the chosen k jobs (at any point of time the capacity of the chosen set of resources should be sufficient to execute the jobs active at that time). We present an O(log n)-factor approximation algorithm for this problem. We also consider the prize collecting version, wherein every job also has a penalty associated with it. The feasible solution consists of a subset of the jobs, and a set of resources, to perform the chosen subset of jobs. The goal is to find a feasible solution that minimizes the sum of the costs of the selected resources and the penalties of the jobs that are not selected. We present a constant factor approximation algorithm for this problem.

Cite as

Venkatesan T. Chakaravarthy, Arindam Pal, Sambuddha Roy, and Yogish Sabharwal. Scheduling Resources for Executing a Partial Set of Jobs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 199-210, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{chakaravarthy_et_al:LIPIcs.FSTTCS.2012.199,
  author =	{Chakaravarthy, Venkatesan T. and Pal, Arindam and Roy, Sambuddha and Sabharwal, Yogish},
  title =	{{Scheduling Resources for Executing a Partial Set of Jobs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{199--210},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.199},
  URN =		{urn:nbn:de:0030-drops-38598},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.199},
  annote =	{Keywords: Approximation Algorithms, Partial Covering, Interval Graphs}
}
Document
Visibly Rational Expressions

Authors: Laura Bozzelli and César Sánchez


Abstract
Regular Expressions (RE) are an algebraic formalism for expressing regular languages, widely used in string search and as a specification language in verification. In this paper we introduce and investigate Visibly Rational Expressions (VRE), an extension of RE for the well-known class of Visibly Pushdown Languages (VPL). We show that VRE capture the class of VPL. Moreover, we identify an equally expressive fragment of VRE which admits a quadratic time compositional translation into the automata acceptors of VPL. We also prove that, for this fragment, universality, inclusion and language equivalence are EXPTIME-complete. Finally, we provide an extension of VRE for VPL over infinite words.

Cite as

Laura Bozzelli and César Sánchez. Visibly Rational Expressions. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 211-223, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{bozzelli_et_al:LIPIcs.FSTTCS.2012.211,
  author =	{Bozzelli, Laura and S\'{a}nchez, C\'{e}sar},
  title =	{{Visibly Rational Expressions}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{211--223},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.211},
  URN =		{urn:nbn:de:0030-drops-38604},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.211},
  annote =	{Keywords: Visibly Pushdown Languages, Context-free specifications, Regular expressions, Algebraic characterization}
}
Document
Safety Verification of Communicating One-Counter Machines

Authors: Alexander Heußner, Tristan Le Gall, and Grégoire Sutre


Abstract
In order to verify protocols that tag messages with integer values, we investigate the decidability of the reachability problem for systems of communicating one-counter machines. These systems consist of local one-counter machines that asynchronously communicate by exchanging the value of their counters via, a priori unbounded, FIFO channels. This model extends communicating finite-state machines (CFSM) by infinite-state local processes and an infinite message alphabet. The main result of the paper is a complete characterization of the communication topologies that have a solvable reachability question. As already CFSM exclude the possibility of automatic verification in presence of mutual communication, we also consider an under-approximative approach to the reachability problem, based on rendezvous synchronization.

Cite as

Alexander Heußner, Tristan Le Gall, and Grégoire Sutre. Safety Verification of Communicating One-Counter Machines. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 224-235, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{heuner_et_al:LIPIcs.FSTTCS.2012.224,
  author =	{Heu{\ss}ner, Alexander and Le Gall, Tristan and Sutre, Gr\'{e}goire},
  title =	{{Safety Verification of Communicating One-Counter Machines}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{224--235},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.224},
  URN =		{urn:nbn:de:0030-drops-38614},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.224},
  annote =	{Keywords: Counter Machines, FIFO Channels, Reachability Problem, Data Words}
}
Document
Density Functions subject to a Co-Matroid Constraint

Authors: Venkatesan T. Chakaravarthy, Natwar Modani, Sivaramakrishnan R. Natarajan, Sambuddha Roy, and Yogish Sabharwal


Abstract
In this paper we consider the problem of finding the densest subset subject to co-matroid constraints. We are given a monotone supermodular set function f defined over a universe U, and the density of a subset S is defined to be f(S)/|S|. This generalizes the concept of graph density. Co-matroid constraints are the following: given a matroid M, a set S is feasible iff the complement of S is independent in M. Under such constraints, the problem becomes NP-hard. The specific case of graph density has been considered in the literature under specific co-matroid constraints, for example, the cardinality matroid and the partition matroid. We show a 2-approximation for finding the densest subset subject to co-matroid constraints. This improves the known approximation guarantees for the case of partition matroids.
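
As a small worked example of these notions (our own illustration, by brute force rather than the paper's algorithm), the sketch below computes graph density f(S)/|S| and finds the densest subset whose complement has at most one vertex, i.e. a cardinality-type co-matroid constraint.

import itertools

def graph_density(edges, S):
    # f(S)/|S| where f(S) counts the edges with both endpoints inside S.
    S = set(S)
    return sum(1 for u, v in edges if u in S and v in S) / len(S)

# Triangle 0-1-2 plus a pendant edge 2-3; feasible sets leave out at most
# one vertex (the complement is independent in the rank-1 uniform matroid).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
feasible = [S for r in (3, 4) for S in itertools.combinations(range(4), r)]
best = max(feasible, key=lambda S: graph_density(edges, S))
print(best, graph_density(edges, best))   # the triangle {0, 1, 2} has density 1.0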

Cite as

Venkatesan T. Chakaravarthy, Natwar Modani, Sivaramakrishnan R. Natarajan, Sambuddha Roy, and Yogish Sabharwal. Density Functions subject to a Co-Matroid Constraint. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 236-248, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{chakaravarthy_et_al:LIPIcs.FSTTCS.2012.236,
  author =	{Chakaravarthy, Venkatesan T. and Modani, Natwar and Natarajan, Sivaramakrishnan R. and Roy, Sambuddha and Sabharwal, Yogish},
  title =	{{Density Functions subject to a Co-Matroid Constraint}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{236--248},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.236},
  URN =		{urn:nbn:de:0030-drops-38627},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.236},
  annote =	{Keywords: Approximation Algorithms, Submodular Functions, Graph Density}
}
Document
Efficient on-line algorithm for maintaining k-cover of sparse bit-strings

Authors: Amit Kumar, Preeti R. Panda, and Smruti Sarangi


Abstract
We consider the on-line problem of representing a sparse bit string by a set of k intervals, where k is much smaller than the length of the string. The goal is to minimize the total length of these intervals under the condition that each 1-bit must be in one of these intervals. We give an efficient greedy algorithm that takes O(log k) time per update (an update converts a 0-bit to a 1-bit), independent of the length of the entire string. We prove that this greedy algorithm is 2-competitive. We use a natural linear programming relaxation for this problem, and analyze the algorithm by finding a dual feasible solution whose value matches the cost of the greedy algorithm.
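
One natural greedy rule of this flavour merges the two intervals separated by the smallest gap whenever more than k intervals are present; the sketch below is our illustrative reconstruction (with O(k) updates for simplicity; a heap over the gaps would give the stated O(log k) bound), not necessarily the exact algorithm of the paper.

import bisect

class KCover:
    def __init__(self, k):
        self.k = k
        self.intervals = []            # sorted, disjoint [start, end] pairs

    def set_bit(self, pos):
        iv = self.intervals
        i = bisect.bisect_left(iv, [pos, pos])
        if i < len(iv) and iv[i][0] <= pos <= iv[i][1]:
            return                     # already covered
        if i > 0 and iv[i - 1][0] <= pos <= iv[i - 1][1]:
            return                     # already covered
        iv.insert(i, [pos, pos])       # new unit interval for the 1-bit
        if len(iv) > self.k:
            # Merge the adjacent pair with the smallest gap between them.
            j = min(range(len(iv) - 1), key=lambda t: iv[t + 1][0] - iv[t][1])
            iv[j][1] = iv[j + 1][1]
            del iv[j + 1]

    def total_length(self):
        return sum(end - start + 1 for start, end in self.intervals)

cover = KCover(k=2)
for p in (3, 100, 101, 4, 50):
    cover.set_bit(p)
print(cover.intervals, cover.total_length())   # [[3, 50], [100, 101]] 50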

Cite as

Amit Kumar, Preeti R. Panda, and Smruti Sarangi. Efficient on-line algorithm for maintaining k-cover of sparse bit-strings. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 249-256, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.FSTTCS.2012.249,
  author =	{Kumar, Amit and Panda, Preeti R. and Sarangi, Smruti},
  title =	{{Efficient on-line algorithm for maintaining k-cover of sparse bit-strings}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{249--256},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.249},
  URN =		{urn:nbn:de:0030-drops-38639},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.249},
  annote =	{Keywords: On-line algorithms, string algorithms}
}
Document
Maintaining Approximate Maximum Weighted Matching in Fully Dynamic Graphs

Authors: Abhash Anand, Surender Baswana, Manoj Gupta, and Sandeep Sen


Abstract
We present a fully dynamic algorithm for maintaining an approximate maximum weight matching in general weighted graphs. The algorithm maintains a matching M whose weight is at least (1/8) M*, where M* is the weight of a maximum weight matching. The algorithm achieves an expected amortized O(log n log C) time per edge insertion or deletion, where C is the ratio of the largest edge weight to the smallest edge weight in the given graph.

Cite as

Abhash Anand, Surender Baswana, Manoj Gupta, and Sandeep Sen. Maintaining Approximate Maximum Weighted Matching in Fully Dynamic Graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 257-266, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{anand_et_al:LIPIcs.FSTTCS.2012.257,
  author =	{Anand, Abhash and Baswana, Surender and Gupta, Manoj and Sen, Sandeep},
  title =	{{Maintaining Approximate Maximum Weighted Matching in Fully Dynamic Graphs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{257--266},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.257},
  URN =		{urn:nbn:de:0030-drops-38648},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.257},
  annote =	{Keywords: Matching, Dynamic Algorithm, Graph Algorithm}
}
Document
Approximation Algorithms for the Unsplittable Flow Problem on Paths and Trees

Authors: Khaled Elbassioni, Naveen Garg, Divya Gupta, Amit Kumar, Vishal Narula, and Arindam Pal


Abstract
We study the Unsplittable Flow Problem (UFP) and related variants, namely UFP with Bag Constraints and UFP with Rounds, on paths and trees. We provide improved constant factor approximation algorithms for all these problems under the no bottleneck assumption (NBA), which says that the maximum demand for any source-sink pair is at most the minimum capacity of any edge. We obtain these improved results by expressing a feasible solution to a natural LP relaxation of the UFP as a near-convex combination of feasible integral solutions.
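
The sketch below (using scipy on purely illustrative data) writes down the natural LP relaxation of UFP on a path that the abstract refers to: one variable per task in [0, 1] and one capacity constraint per edge. The paper's contribution lies in rounding such fractional solutions into a near-convex combination of integral solutions, which is not shown here.

import numpy as np
from scipy.optimize import linprog

# Path with 4 edges and their capacities.
cap = np.array([2.0, 2.0, 1.0, 2.0])
# Tasks: (first edge, last edge, demand, profit), edges indexed 0..3.
tasks = [(0, 2, 1.0, 3.0), (1, 3, 1.0, 2.0), (2, 2, 1.0, 4.0)]

A = np.zeros((len(cap), len(tasks)))
for j, (s, t, d, _) in enumerate(tasks):
    A[s:t + 1, j] = d                      # task j loads every edge on its sub-path
c = -np.array([p for *_, p in tasks])      # maximize profit = minimize -profit

res = linprog(c, A_ub=A, b_ub=cap, bounds=[(0, 1)] * len(tasks))
print(res.x, -res.fun)                     # x = [0, 0, 1], optimum 4.0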

Cite as

Khaled Elbassioni, Naveen Garg, Divya Gupta, Amit Kumar, Vishal Narula, and Arindam Pal. Approximation Algorithms for the Unsplittable Flow Problem on Paths and Trees. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 267-275, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{elbassioni_et_al:LIPIcs.FSTTCS.2012.267,
  author =	{Elbassioni, Khaled and Garg, Naveen and Gupta, Divya and Kumar, Amit and Narula, Vishal and Pal, Arindam},
  title =	{{Approximation Algorithms for the Unsplittable Flow Problem on Paths and Trees}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{267--275},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.267},
  URN =		{urn:nbn:de:0030-drops-38650},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.267},
  annote =	{Keywords: Approximation Algorithms, Integer Decomposition, Linear Programming, Scheduling, Unsplittable Flows}
}
Document
Graphs, Rewriting and Pathway Reconstruction for Rule-Based Models

Authors: Vincent Danos, Jerome Feret, Walter Fontana, Russell Harmer, Jonathan Hayman, Jean Krivine, Chris Thompson-Walsh, and Glynn Winskel


Abstract
In this paper, we introduce a novel way of constructing concise causal histories (pathways) to represent how specified structures are formed during simulation of systems represented by rule-based models. This is founded on a new, clean, graph-based semantics introduced in the first part of this paper for Kappa, a rule-based modelling language that has emerged as a natural description of protein-protein interactions in molecular biology [Bachman 2011]. The semantics is capable of capturing the whole of Kappa, including subtle side-effects on deletion of structure, and its structured presentation provides the basis for the translation of techniques to other models. In particular, we give a notion of trajectory compression, which restricts a trace culminating in the production of a given structure to the actions necessary for the structure to occur. This is central to the reconstruction of biochemical pathways due to the failure of traditional techniques to provide adequately concise causal histories, and we expect it to be applicable in a range of other modelling situations.

Cite as

Vincent Danos, Jerome Feret, Walter Fontana, Russell Harmer, Jonathan Hayman, Jean Krivine, Chris Thompson-Walsh, and Glynn Winskel. Graphs, Rewriting and Pathway Reconstruction for Rule-Based Models. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 276-288, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{danos_et_al:LIPIcs.FSTTCS.2012.276,
  author =	{Danos, Vincent and Feret, Jerome and Fontana, Walter and Harmer, Russell and Hayman, Jonathan and Krivine, Jean and Thompson-Walsh, Chris and Winskel, Glynn},
  title =	{{Graphs, Rewriting and Pathway Reconstruction for Rule-Based Models}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{276--288},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.276},
  URN =		{urn:nbn:de:0030-drops-38669},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.276},
  annote =	{Keywords: concurrency, rule-based models, graph rewriting, pathways, causality}
}
Document
On the Complexity of Parameterized Reachability in Reconfigurable Broadcast Networks

Authors: Giorgio Delzanno, Arnaud Sangnier, Riccardo Traverso, and Gianluigi Zavattaro


Abstract
We investigate the impact of dynamic topology reconfiguration on the complexity of verification problems for models of protocols with broadcast communication. We first consider reachability of a configuration with a given set of control states and show that parameterized verification is decidable with polynomial time complexity. We then move to richer queries and show how the complexity changes when considering properties with negation or cardinality constraints.

Cite as

Giorgio Delzanno, Arnaud Sangnier, Riccardo Traverso, and Gianluigi Zavattaro. On the Complexity of Parameterized Reachability in Reconfigurable Broadcast Networks. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 289-300, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{delzanno_et_al:LIPIcs.FSTTCS.2012.289,
  author =	{Delzanno, Giorgio and Sangnier, Arnaud and Traverso, Riccardo and Zavattaro, Gianluigi},
  title =	{{On the Complexity of Parameterized Reachability in Reconfigurable Broadcast Networks}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{289--300},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.289},
  URN =		{urn:nbn:de:0030-drops-38671},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.289},
  annote =	{Keywords: Broadcast Communication, Parameterized Verification, Complexity}
}
Document
Extending the Rackoff technique to Affine nets

Authors: Rémi Bonnet, Alain Finkel, and M. Praveen


Abstract
We study the possibility of extending the Rackoff technique to Affine nets, which are Petri nets extended with affine functions. The Rackoff technique has been used for establishing EXPSPACE upper bounds for the coverability and boundedness problems for Petri nets. We show that this technique can be extended to strongly increasing Affine nets, obtaining better upper bounds compared to known results. The possible copies between places of a strongly increasing Affine net make this extension non-trivial. One cannot expect similar results for the entire class of Affine nets since coverability is Ackermann-hard and boundedness is undecidable. Moreover, it can be proved that model checking a logic expressing generalized coverability properties is undecidable for strongly increasing Affine nets, while it is known to be EXPSPACE-complete for Petri nets.

Cite as

Rémi Bonnet, Alain Finkel, and M. Praveen. Extending the Rackoff technique to Affine nets. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 301-312, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.FSTTCS.2012.301,
  author =	{Bonnet, R\'{e}mi and Finkel, Alain and Praveen, M.},
  title =	{{Extending the Rackoff technique to Affine nets}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{301--312},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.301},
  URN =		{urn:nbn:de:0030-drops-38688},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.301},
  annote =	{Keywords: Complexity of VASS, Affine nets, Rackoff technique, model checking, coverability, boundedness}
}
Document
Accelerating tree-automatic relations

Authors: Anthony Widjaja Lin


Abstract
We study the problem of computing the transitive closure of tree-automatic (binary) relations, which are represented by tree automata. Such relations include classes of infinite systems generated by pushdown systems (PDS), ground tree rewrite systems (GTRS), PA-processes, and Turing machines, to name a few. Although this problem is unsolvable in general, we provide a semi-algorithm for the problem and prove completeness guarantees for PDS, GTRS, and PA-processes. The semi-algorithm is an extension of a known semi-algorithm for structure-preserving tree-automatic relations, for which completeness is guaranteed for several interesting parameterized systems over a tree topology. Hence, there is a single generic procedure that solves reachability for PDS, GTRS, PA-processes, and several parameterized systems in a uniform way. As an application, we provide a single generic semi-algorithm for checking repeated reachability over tree-automatic relations, for which completeness is guaranteed for the aforementioned classes.
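
For intuition, here is the fixpoint shape of a transitive-closure computation on a finite relation given explicitly as a set of pairs. In the paper the relation is tree-automatic (generally infinite), the analogous iteration is only a semi-algorithm, and acceleration is what makes it terminate on the listed classes.

def transitive_closure(R):
    """Fixpoint iteration for a *finite* relation R given as a set of pairs."""
    closure = set(R)
    while True:
        # Compose the current closure with itself and add the new pairs.
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]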

Cite as

Anthony Widjaja Lin. Accelerating tree-automatic relations. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 313-324, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{lin:LIPIcs.FSTTCS.2012.313,
  author =	{Lin, Anthony Widjaja},
  title =	{{Accelerating tree-automatic relations}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{313--324},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.313},
  URN =		{urn:nbn:de:0030-drops-38691},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.313},
  annote =	{Keywords: Semi-algorithm, Model Checking, Infinite Systems, Automata}
}
Document
k-delivery traveling salesman problem on tree networks

Authors: Binay Bhattacharya and Yuzhuang Hu


Abstract
In this paper we study the k-delivery traveling salesman problem (TSP) on trees, a variant of the non-preemptive capacitated vehicle routing problem with pickups and deliveries. We are given n pickup locations and n delivery locations on a tree, with exactly one item at each pickup location. The k-delivery TSP is to find a minimum length tour for a vehicle of finite capacity k that picks up the items and delivers exactly one item to each delivery location. We show that an optimal solution for the k-delivery TSP on paths can be found that allows succinct representations of the routes. By exploiting the symmetry inherent in the k-delivery TSP, we design a 5/3-approximation algorithm for the k-delivery TSP on trees of arbitrary heights. The ratio can be improved to (3/2 - 1/2k) for the problem on trees of height 2. The developed algorithms are based on the following observation: under certain conditions, it makes sense for a non-empty vehicle to turn around and pick up additional loads.

Cite as

Binay Bhattacharya and Yuzhuang Hu. k-delivery traveling salesman problem on tree networks. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 325-336, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{bhattacharya_et_al:LIPIcs.FSTTCS.2012.325,
  author =	{Bhattacharya, Binay and Hu, Yuzhuang},
  title =	{{k-delivery traveling salesman problem on tree networks}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{325--336},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.325},
  URN =		{urn:nbn:de:0030-drops-38707},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.325},
  annote =	{Keywords: k-delivery traveling salesman problem, approximation algorithms}
}
Document
Rerouting shortest paths in planar graphs

Authors: Paul Bonsma


Abstract
A rerouting sequence is a sequence of shortest st-paths such that consecutive paths differ in one vertex. We study the Shortest Path Rerouting Problem, which asks, given two shortest st-paths P and Q in a graph G, whether a rerouting sequence exists from P to Q. This problem is PSPACE-hard in general, but we show that it can be solved in polynomial time if G is planar. To this end, we introduce a dynamic programming method for reconfiguration problems.
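
A naive instantiation of the problem definition: enumerate all shortest st-paths, join two paths by an edge when they differ in exactly one vertex, and run BFS over this reconfiguration graph. This is exponential in general (the point of the paper is a polynomial-time algorithm for planar graphs); the sketch below is only meant to make the definition concrete.

from collections import deque

def all_shortest_paths(adj, s, t):
    """All shortest s-t paths in an unweighted graph given as an adjacency dict."""
    dist, parents = {s: 0}, {s: []}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], parents[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                parents[v].append(u)
    def build(v):
        if v == s:
            yield [s]
        else:
            for u in parents.get(v, []):
                for prefix in build(u):
                    yield prefix + [v]
    return [tuple(p) for p in build(t)] if t in dist else []

def rerouting_sequence_exists(adj, s, t, P, Q):
    """BFS over the reconfiguration graph whose nodes are the shortest s-t paths
    and whose edges join paths differing in exactly one vertex (exponential)."""
    paths = all_shortest_paths(adj, s, t)
    adjacent = lambda p, q: sum(a != b for a, b in zip(p, q)) == 1
    P, Q = tuple(P), tuple(Q)
    seen, queue = {P}, deque([P])
    while queue:
        cur = queue.popleft()
        if cur == Q:
            return True
        for nxt in paths:
            if nxt not in seen and adjacent(cur, nxt):
                seen.add(nxt)
                queue.append(nxt)
    return False

# 4-cycle s-a-t-b: the two shortest s-t paths differ in exactly one vertex.
adj = {'s': ['a', 'b'], 'a': ['s', 't'], 'b': ['s', 't'], 't': ['a', 'b']}
print(rerouting_sequence_exists(adj, 's', 't', ['s', 'a', 't'], ['s', 'b', 't']))  # True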

Cite as

Paul Bonsma. Rerouting shortest paths in planar graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 337-349, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{bonsma:LIPIcs.FSTTCS.2012.337,
  author =	{Bonsma, Paul},
  title =	{{Rerouting shortest paths in planar graphs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{337--349},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.337},
  URN =		{urn:nbn:de:0030-drops-38715},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.337},
  annote =	{Keywords: shortest path, rerouting, reconfiguration problem, planar graph, polynomial time, dynamic programming}
}
Document
Space Efficient Edge-Fault Tolerant Routing

Authors: Varun Rajan


Abstract
Let G be an undirected weighted graph with n vertices and m edges, and let k >= 1 be an integer. We preprocess the graph in O^~(mn) time, constructing a data structure of size O^~(k deg(v) + n^{1/k}) words per vertex v in V, which is then used by our routing scheme to ensure successful routing of packets even in the presence of a single edge fault. The scheme adds only O(k) words of information to the message. Moreover, the stretch of the routing scheme, i.e., the maximum ratio of the cost of the path along which the packet is routed to the cost of the actual shortest path that avoids the fault, is only O(k^2). Our results match the best known results for routing schemes that do not consider failures, with only the stretch being larger by a small factor of O(k). Moreover, a 1963 girth conjecture of Erdos, known to hold for k=1,2,3 and 5, implies that Omega(n^{1+1/k}) space is required by any routing scheme that has stretch less than 2k+1. Hence our data structures are essentially space efficient. The algorithms are extremely simple, easy to implement, and, with minor modifications, can be used in a centralized setting to efficiently answer distance queries in the presence of faults. An important component of our routing scheme, which may be of independent interest, is an algorithm to compute the shortest cycle passing through each edge. As an intermediate result, we show that computing this in a distributed model that stores at each vertex the shortest path tree rooted at that vertex requires Theta(mn) message passings in the worst case.

Cite as

Varun Rajan. Space Efficient Edge-Fault Tolerant Routing. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 350-361, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{rajan:LIPIcs.FSTTCS.2012.350,
  author =	{Rajan, Varun},
  title =	{{Space Efficient Edge-Fault Tolerant Routing}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{350--361},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.350},
  URN =		{urn:nbn:de:0030-drops-38721},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.350},
  annote =	{Keywords: Routing, Fault tolerant algorithms, Space efficiency, Distributed data structures, Tight bounds}
}
Document
Approximate Determinization of Quantitative Automata

Authors: Udi Boker and Thomas A. Henzinger


Abstract
Quantitative automata are nondeterministic finite automata with edge weights. They value a run by some function from the sequence of visited weights to the reals, and value a word by its minimal/maximal run. They generalize boolean automata, and have gained much attention in recent years. Unfortunately, important automaton classes, such as sum, discounted-sum, and limit-average automata, cannot be determinized. Yet, the quantitative setting provides the potential of approximate determinization. We define approximate determinization with respect to a distance function, and investigate this potential. We show that sum automata cannot be determinized approximately with respect to any distance function. However, restricting to nonnegative weights allows for approximate determinization with respect to some distance functions. Discounted-sum automata allow for approximate determinization, as the influence of a word's suffix is decaying. However, the naive approach, of unfolding the automaton computations up to a sufficient level, is shown to be doubly exponential in the discount factor. We provide an alternative construction that is singly exponential in the discount factor, in the precision, and in the number of states. We prove matching lower bounds, showing exponential dependency on each of these three parameters. Average and limit-average automata are shown to prohibit approximate determinization with respect to any distance function, and this is the case even for two weights, 0 and 1.
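
For a feel of why discounted-sum automata are amenable to approximation, the snippet below uses one common convention (weights divided by powers of a discount factor lambda > 1; the exact convention here is an assumption, not taken from the paper) and shows that the contribution of a suffix decays geometrically.

def discounted_sum(weights, discount):
    """Value of a run under the discounted-sum measure: sum_i w_i / discount**i."""
    return sum(w / discount**i for i, w in enumerate(weights))

lam = 2.0          # hypothetical discount factor > 1
run = [1, 0, 1, 1]
print(discounted_sum(run, lam))            # 1 + 0 + 0.25 + 0.125 = 1.375
# Changing weights from position 10 onwards changes the value by at most
# sum_{i >= 10} w_max / lam**i = (w_max / lam**10) * lam / (lam - 1):
w_max = 1
print(w_max / lam**10 * lam / (lam - 1))   # 0.001953125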

Cite as

Udi Boker and Thomas A. Henzinger. Approximate Determinization of Quantitative Automata. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 362-373, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{boker_et_al:LIPIcs.FSTTCS.2012.362,
  author =	{Boker, Udi and Henzinger, Thomas A.},
  title =	{{Approximate Determinization of Quantitative Automata}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{362--373},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.362},
  URN =		{urn:nbn:de:0030-drops-38739},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.362},
  annote =	{Keywords: Quantitative, Automata, Determinization, Approximation}
}
Document
Timed Lossy Channel Systems

Authors: Parosh Aziz Abdulla, Mohamed Faouzi Atig, and Jonathan Cederberg


Abstract
Lossy channel systems are a classical model with applications ranging from the modeling of communication protocols to programs running on weak memory models. All existing work assumes that messages traveling inside the channels are picked from a finite alphabet. In this paper, we extend the model by assuming that each message is equipped with a clock representing the age of the message, thus obtaining the model of Timed Lossy Channel Systems (TLCS). The main contribution of the paper is to show that the control state reachability problem is decidable for TLCS.

Cite as

Parosh Aziz Abdulla, Mohamed Faouzi Atig, and Jonathan Cederberg. Timed Lossy Channel Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 374-386, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{abdulla_et_al:LIPIcs.FSTTCS.2012.374,
  author =	{Abdulla, Parosh Aziz and Atig, Mohamed Faouzi and Cederberg, Jonathan},
  title =	{{Timed Lossy Channel Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{374--386},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.374},
  URN =		{urn:nbn:de:0030-drops-38746},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.374},
  annote =	{Keywords: Lossy channel systems, timed automata, model checking}
}
Document
Solving the Canonical Representation and Star System Problems for Proper Circular-Arc Graphs in Logspace

Authors: Johannes Köbler, Sebastian Kuhnert, and Oleg Verbitsky


Abstract
We present a logspace algorithm that constructs a canonical intersection model for a given proper circular-arc graph, where canonical means that isomorphic graphs receive identical models. This implies that the recognition and the isomorphism problems for these graphs are solvable in logspace. For the broader class of concave-round graphs, which still possess (not necessarily proper) circular-arc models, we show that a canonical circular-arc model can also be constructed in logspace. As a building block for these results, we design a logspace algorithm for computing canonical circular-arc models of circular-arc hypergraphs; this important class of hypergraphs corresponds to matrices with the circular ones property. Furthermore, we consider the Star System Problem that consists in reconstructing a graph from its closed neighborhood hypergraph. We show that this problem is solvable in logarithmic space for the classes of proper circular-arc, concave-round, and co-convex graphs.

Cite as

Johannes Köbler, Sebastian Kuhnert, and Oleg Verbitsky. Solving the Canonical Representation and Star System Problems for Proper Circular-Arc Graphs in Logspace. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 387-399, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{kobler_et_al:LIPIcs.FSTTCS.2012.387,
  author =	{K\"{o}bler, Johannes and Kuhnert, Sebastian and Verbitsky, Oleg},
  title =	{{Solving the Canonical Representation and Star System Problems for Proper Circular-Arc Graphs in Logspace}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{387--399},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.387},
  URN =		{urn:nbn:de:0030-drops-38757},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.387},
  annote =	{Keywords: Proper circular-arc graphs, graph isomorphism, canonization, circular ones property, logspace complexity}
}
Document
Directed Acyclic Subgraph Problem Parameterized above the Poljak-Turzik Bound

Authors: Robert Crowston, Gregory Gutin, and Mark Jones


Abstract
An oriented graph is a directed graph without directed 2-cycles. Poljak and Turzik (1986) proved that every connected oriented graph G on n vertices and m arcs contains an acyclic subgraph with at least m/2 + (n-1)/4 arcs. Raman and Saurabh (2006) gave another proof of this result and left it as an open question to establish the parameterized complexity of the following problem: does G have an acyclic subgraph with at least m/2 + (n-1)/4 + k arcs, where k is the parameter? We answer this question by showing that the problem can be solved by an algorithm of runtime (12k)! n^{O(1)}. Thus, the problem is fixed-parameter tractable. We also prove that there is a polynomial time algorithm that either establishes that the input instance of the problem is a Yes-instance or reduces the input instance to an equivalent one of size O(k^2).
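
A small sketch of the quantities involved (not of the paper's FPT algorithm): compute the target m/2 + (n-1)/4 + k and test a candidate arc set for acyclicity with Kahn's algorithm.

from collections import deque

def poljak_turzik_target(n_vertices, n_arcs, k=0):
    """Arc count an acyclic subgraph must reach in the parameterized problem:
    the Poljak-Turzik bound m/2 + (n-1)/4 plus the surplus parameter k."""
    return n_arcs / 2 + (n_vertices - 1) / 4 + k

def is_acyclic(n, arcs):
    """Kahn's algorithm: the arc set is acyclic iff every vertex gets popped."""
    indeg, out = [0] * n, [[] for _ in range(n)]
    for u, v in arcs:
        out[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == n

# Directed triangle: m = 3, n = 3, bound = 1.5 + 0.5 = 2; dropping one arc
# leaves an acyclic subgraph with 2 arcs, matching the bound (k = 0).
print(poljak_turzik_target(3, 3))       # 2.0
print(is_acyclic(3, [(0, 1), (1, 2)]))  # True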

Cite as

Robert Crowston, Gregory Gutin, and Mark Jones. Directed Acyclic Subgraph Problem Parameterized above the Poljak-Turzik Bound. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 400-411, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{crowston_et_al:LIPIcs.FSTTCS.2012.400,
  author =	{Crowston, Robert and Gutin, Gregory and Jones, Mark},
  title =	{{Directed Acyclic Subgraph Problem Parameterized above the Poljak-Turzik Bound}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{400--411},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.400},
  URN =		{urn:nbn:de:0030-drops-38765},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.400},
  annote =	{Keywords: Acyclic Subgraph, Fixed-parameter tractable, Polynomial Kernel}
}
Document
Beyond Max-Cut: lambda-Extendible Properties Parameterized Above the Poljak-Turzik Bound

Authors: Matthias Mnich, Geevarghese Philip, Saket Saurabh, and Ondrej Suchy


Abstract
Poljak and Turzík (Discrete Math. 1986) introduced the notion of lambda-extendible properties of graphs as a generalization of the property of being bipartite. They showed that for any 0 < lambda < 1 and lambda-extendible property Pi, any connected graph G on n vertices and m edges contains a spanning subgraph H in Pi with at least lambda m + (1-lambda)/2 (n-1) edges. The property of being bipartite is lambda-extendible for lambda = 1/2, and thus the Poljak-Turzík bound generalizes the well-known Edwards-Erdos bound for MAXCUT. We define a variant, namely strong lambda-extendibility, to which the Poljak-Turzík bound applies. For a strongly lambda-extendible graph property Pi, we define the parameterized Above Poljak-Turzík problem as follows: Given a connected graph G on n vertices and m edges and an integer parameter k, does there exist a spanning subgraph H of G such that H is in Pi and H has at least lambda m + (1-lambda)/2 (n-1) + k edges? The parameter is k, the surplus over the number of edges guaranteed by the Poljak-Turzík bound. We consider properties Pi for which the Above Poljak-Turzík problem is fixed-parameter tractable (FPT) on graphs that are O(k) vertices away from being a graph in which each block is a clique. We show that for all such properties, Above Poljak-Turzík is FPT for all 0 < lambda < 1. Our results hold for properties of oriented graphs and graphs with edge labels. Our results generalize the recent result of Crowston et al. (ICALP 2012) on MAXCUT parameterized above the Edwards-Erdos bound, and yield FPT algorithms for several graph problems parameterized above lower bounds. For instance, we get that the above-guarantee Max q-Colorable Subgraph problem is FPT. Our results also imply that the parameterized above-guarantee Oriented Max Acyclic Digraph problem is FPT, thus solving an open question of Raman and Saurabh (Theor. Comput. Sci. 2006).

Cite as

Matthias Mnich, Geevarghese Philip, Saket Saurabh, and Ondrej Suchy. Beyond Max-Cut: lambda-Extendible Properties Parameterized Above the Poljak-Turzik Bound. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 412-423, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{mnich_et_al:LIPIcs.FSTTCS.2012.412,
  author =	{Mnich, Matthias and Philip, Geevarghese and Saurabh, Saket and Suchy, Ondrej},
  title =	{{Beyond Max-Cut: lambda-Extendible Properties Parameterized Above the Poljak-Turzik Bound}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{412--423},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.412},
  URN =		{urn:nbn:de:0030-drops-38776},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.412},
  annote =	{Keywords: Algorithms and data structures; fixed-parameter algorithms; bipartite graphs; above-guarantee parameterization}
}
Document
Subexponential Parameterized Odd Cycle Transversal on Planar Graphs

Authors: Daniel Lokshtanov, Saket Saurabh, and Magnus Wahlström


Abstract
In the Odd Cycle Transversal (OCT) problem we are given a graph G on n vertices and an integer k; the objective is to determine whether there exists a vertex set O in G of size at most k such that G - O is bipartite. Reed, Smith and Vetta [Oper. Res. Lett., 2004] gave an algorithm for OCT with running time 3^k n^{O(1)}. Assuming the exponential time hypothesis of Impagliazzo, Paturi and Zane, the running time cannot be improved to 2^{o(k)} n^{O(1)}. We show that OCT admits a randomized algorithm running in O(n^{O(1)} + 2^{O(sqrt{k} log k)} n) time when the input graph is planar. As a byproduct we also obtain a linear-time (in n) algorithm for OCT on planar graphs with running time 2^{O(sqrt{k} log k)} n. This improves over an algorithm of Fiorini et al. [Disc. Appl. Math., 2008].
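
To make the problem statement concrete, here is a brute-force check of the OCT definition (is there a set O of at most k vertices whose removal leaves a bipartite graph?); it is exponential in k and unrelated to the paper's subexponential technique.

from itertools import combinations

def is_bipartite(vertices, edges):
    """Iterative 2-colouring check."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = {}
    for start in vertices:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    stack.append(v)
                elif colour[v] == colour[u]:
                    return False
    return True

def has_oct_of_size(vertices, edges, k):
    """Brute force over all vertex sets O with |O| <= k; check G - O bipartite."""
    for size in range(k + 1):
        for O in combinations(vertices, size):
            rest = [v for v in vertices if v not in O]
            rest_edges = [(u, v) for u, v in edges if u not in O and v not in O]
            if is_bipartite(rest, rest_edges):
                return True
    return False

# A 5-cycle needs one vertex removed to become bipartite.
V = list(range(5))
E = [(i, (i + 1) % 5) for i in range(5)]
print(has_oct_of_size(V, E, 0), has_oct_of_size(V, E, 1))  # False True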

Cite as

Daniel Lokshtanov, Saket Saurabh, and Magnus Wahlström. Subexponential Parameterized Odd Cycle Transversal on Planar Graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 424-434, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.FSTTCS.2012.424,
  author =	{Lokshtanov, Daniel and Saurabh, Saket and Wahlstr\"{o}m, Magnus},
  title =	{{Subexponential Parameterized Odd Cycle Transversal on Planar Graphs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{424--434},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.424},
  URN =		{urn:nbn:de:0030-drops-38783},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.424},
  annote =	{Keywords: Graph Theory, Parameterized Algorithms, Odd Cycle Transversal}
}
Document
Deciding Probabilistic Automata Weak Bisimulation in Polynomial Time

Authors: Holger Hermanns and Andrea Turrini


Abstract
Deciding weak probabilistic bisimulation efficiently in the context of probabilistic automata has been an open problem for about a decade. In this work we close this problem by proposing a procedure that checks in polynomial time the existence of a weak combined transition satisfying the step condition of the bisimulation. This enables us to arrive at a polynomial time algorithm for deciding weak probabilistic bisimulation. We also present several extensions to interesting related problems, setting the ground for the development of more effective and compositional analysis algorithms for probabilistic systems.

Cite as

Holger Hermanns and Andrea Turrini. Deciding Probabilistic Automata Weak Bisimulation in Polynomial Time. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 435-447, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{hermanns_et_al:LIPIcs.FSTTCS.2012.435,
  author =	{Hermanns, Holger and Turrini, Andrea},
  title =	{{Deciding Probabilistic Automata Weak Bisimulation in Polynomial Time}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{435--447},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.435},
  URN =		{urn:nbn:de:0030-drops-38794},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.435},
  annote =	{Keywords: Probabilistic Automata, Weak probabilistic bisimulation, Linear Programming problem, Polynomial decision algorithm}
}
Document
Bisimilarity of Probabilistic Pushdown Automata

Authors: Vojtech Forejt, Petr Jancar, Stefan Kiefer, and James Worrell


Abstract
We study the bisimilarity problem for probabilistic pushdown automata (pPDA) and subclasses thereof. Our definition of pPDA allows both probabilistic and non-deterministic branching, generalising the classical notion of pushdown automata (without epsilon-transitions). Our first contribution is a general construction that reduces checking bisimilarity of probabilistic transition systems to checking bisimilarity of non-deterministic transition systems. This construction directly yields decidability of bisimilarity for pPDA, as well as an elementary upper bound for the bisimilarity problem on the subclass of probabilistic basic process algebras, i.e., single-state pPDA. We further show that, with careful analysis, the general reduction can be used to prove an EXPTIME upper bound for bisimilarity of probabilistic visibly pushdown automata. Here we also provide a matching lower bound, establishing EXPTIME-completeness. Finally we prove that deciding bisimilarity of probabilistic one-counter automata, another subclass of pPDA, is PSPACE-complete. Here we use a more specialised argument to obtain optimal complexity bounds.

Cite as

Vojtech Forejt, Petr Jancar, Stefan Kiefer, and James Worrell. Bisimilarity of Probabilistic Pushdown Automata. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 448-460, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{forejt_et_al:LIPIcs.FSTTCS.2012.448,
  author =	{Forejt, Vojtech and Jancar, Petr and Kiefer, Stefan and Worrell, James},
  title =	{{Bisimilarity of Probabilistic Pushdown Automata}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{448--460},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.448},
  URN =		{urn:nbn:de:0030-drops-38800},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.448},
  annote =	{Keywords: bisimilarity, probabilistic systems, pushdown automata}
}
Document
Average Case Analysis of the Classical Algorithm for Markov Decision Processes with Büchi Objectives

Authors: Krishnendu Chatterjee, Manas Joglekar, and Nisarg Shah


Abstract
We consider Markov decision processes (MDPs) with specifications given as Büchi (liveness) objectives. We consider the problem of computing the set of almost-sure winning vertices, from which the objective can be ensured with probability 1. We study for the first time the average case complexity of the classical algorithm for computing the set of almost-sure winning vertices for MDPs with Büchi objectives. Our contributions are as follows: First, we show that for MDPs with constant out-degree the expected number of iterations is at most logarithmic and the average case running time is linear (as compared to the worst case linear number of iterations and quadratic time complexity). Second, for the average case analysis over all MDPs we show that the expected number of iterations is constant and the average case running time is linear (again as compared to the worst case linear number of iterations and quadratic time complexity). Finally, we also show that, given that all MDPs are equally likely, the probability that the classical algorithm requires more than a constant number of iterations is exponentially small.

Cite as

Krishnendu Chatterjee, Manas Joglekar, and Nisarg Shah. Average Case Analysis of the Classical Algorithm for Markov Decision Processes with Büchi Objectives. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 461-473, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.FSTTCS.2012.461,
  author =	{Chatterjee, Krishnendu and Joglekar, Manas and Shah, Nisarg},
  title =	{{Average Case Analysis of the Classical Algorithm for Markov Decision Processes with B\"{u}chi Objectives}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{461--473},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.461},
  URN =		{urn:nbn:de:0030-drops-38817},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.461},
  annote =	{Keywords: MDPs, Buchi objectives, Average case analysis}
}
Document
Verification of Open Interactive Markov Chains

Authors: Tomas Brazdil, Holger Hermanns, Jan Krcal, Jan Kretinsky, and Vojtech Rehak


Abstract
Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience (owed to compositionality properties) with effective verification algorithms and tools (owed to Markov properties). Thus far, however, IMC verification has not considered compositionality properties, but only closed systems. This paper discusses the evaluation of IMC in an open, and thus compositional, interpretation. For this we embed the IMC into a game that is played with the environment. We devise algorithms that enable us to derive bounds on reachability probabilities that are assured to hold in any composition context.

Cite as

Tomas Brazdil, Holger Hermanns, Jan Krcal, Jan Kretinsky, and Vojtech Rehak. Verification of Open Interactive Markov Chains. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 474-485, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{brazdil_et_al:LIPIcs.FSTTCS.2012.474,
  author =	{Brazdil, Tomas and Hermanns, Holger and Krcal, Jan and Kretinsky, Jan and Rehak, Vojtech},
  title =	{{Verification of Open Interactive Markov Chains}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{474--485},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.474},
  URN =		{urn:nbn:de:0030-drops-38826},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.474},
  annote =	{Keywords: IMC, compositional verification, synthesis, time bounded reachability, discretization}
}
Document
On the Sensitivity of Shape Fitting Problems

Authors: Kasturi Varadarajan and Xin Xiao


Abstract
In this article, we study shape fitting problems, epsilon-coresets, and total sensitivity. We focus on the (j,k)-projective clustering problems, including k-median/k-means, k-line clustering, j-subspace approximation, and the integer (j,k)-projective clustering problem. We derive upper bounds of total sensitivities for these problems, and obtain epsilon-coresets using these upper bounds. Using a dimension-reduction type argument, we are able to greatly simplify earlier results on total sensitivity for the k-median/k-means clustering problems, and obtain positively-weighted epsilon-coresets for several variants of the (j,k)-projective clustering problem. We also extend an earlier result on epsilon-coresets for the integer (j,k)-projective clustering problem in fixed dimension to the case of high dimension.

Cite as

Kasturi Varadarajan and Xin Xiao. On the Sensitivity of Shape Fitting Problems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 486-497, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{varadarajan_et_al:LIPIcs.FSTTCS.2012.486,
  author =	{Varadarajan, Kasturi and Xiao, Xin},
  title =	{{On the Sensitivity of Shape Fitting Problems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{486--497},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.486},
  URN =		{urn:nbn:de:0030-drops-38830},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.486},
  annote =	{Keywords: Coresets, shape fitting, k-means, subspace approximation}
}
Document
Overlap of Convex Polytopes under Rigid Motion

Authors: Hee-Kap Ahn, Siu-Wing Cheng, Hyuk Jun Kweon, and Juyoung Yon


Abstract
We present an algorithm to compute an approximate overlap of two convex polytopes P_1 and P_2 in R^3 under rigid motion. Given any epsilon in (0,1/2], our algorithm runs in O(epsilon^{-3}n log^{3.5}n) time with probability 1 - n^{-O(1)} and returns a (1-epsilon)-approximate maximum overlap, provided that the maximum overlap is at least lambda max(|P_1|,|P_2|) for some given constant lambda in (0,1].

Cite as

Hee-Kap Ahn, Siu-Wing Cheng, Hyuk Jun Kweon, and Juyoung Yon. Overlap of Convex Polytopes under Rigid Motion. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 498-509, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{ahn_et_al:LIPIcs.FSTTCS.2012.498,
  author =	{Ahn, Hee-Kap and Cheng, Siu-Wing and Kweon, Hyuk Jun and Yon, Juyoung},
  title =	{{Overlap of Convex Polytopes under Rigid Motion}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{498--509},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.498},
  URN =		{urn:nbn:de:0030-drops-38845},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.498},
  annote =	{Keywords: convex polytope, overlap, approximation, rigid motion}
}
Document
Minimum Enclosing Circle with Few Extra Variables

Authors: Minati De, Subhas C. Nandy, and Sasanka Roy


Abstract
Asano et al. [JoCG 2011] proposed an open problem of computing the minimum enclosing circle of a set of n points in R^2 given in a read-only array in sub-quadratic time. We show that Megiddo's prune and search algorithm for computing the minimum radius circle enclosing the given points can be tailored to work in a read-only environment in O(n^{1+epsilon}) time using O(log n) extra space, where epsilon is a positive constant less than 1. As a warm-up, we first solve the same problem in an in-place setup in linear time with O(1) extra space.
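
For reference, a brute-force minimum enclosing circle (the optimum is determined by two or three input points); it uses linear extra space and is only meant to pin down the problem, whereas the paper's point is achieving near-linear time with O(log n) extra variables over a read-only array.

from itertools import combinations
from math import dist, hypot

def circle_two(p, q):
    centre = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return centre, dist(p, q) / 2

def circle_three(p, q, r):
    """Circumcircle of three non-collinear points (None if collinear)."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), hypot(ux - ax, uy - ay)

def min_enclosing_circle(points):
    """O(n^4) brute force over circles through 2 or 3 input points."""
    candidates = [circle_two(p, q) for p, q in combinations(points, 2)]
    candidates += [c for t in combinations(points, 3)
                   if (c := circle_three(*t)) is not None]
    best = None
    for centre, rad in candidates:
        if all(dist(centre, p) <= rad + 1e-9 for p in points):
            if best is None or rad < best[1]:
                best = (centre, rad)
    return best

print(min_enclosing_circle([(0, 0), (2, 0), (1, 1), (1, 0.2)]))
# ((1.0, 0.0), 1.0) -- the circle with diameter (0,0)-(2,0)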

Cite as

Minati De, Subhas C. Nandy, and Sasanka Roy. Minimum Enclosing Circle with Few Extra Variables. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 510-521, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{de_et_al:LIPIcs.FSTTCS.2012.510,
  author =	{De, Minati and Nandy, Subhas C. and Roy, Sasanka},
  title =	{{Minimum Enclosing Circle with Few Extra Variables}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{510--521},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.510},
  URN =		{urn:nbn:de:0030-drops-38855},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.510},
  annote =	{Keywords: Minimum enclosing circle, space-efficient algorithm, prune-and-search}
}
Document
Static Analysis for Checking Data Format Compatibility of Programs

Authors: Pranavadatta Devaki and Aditya Kanade


Abstract
Large software systems are developed by composing multiple programs. If the programs manipulate and exchange complex data, such as network packets or files, it is essential to establish that they follow compatible data formats. Most of the complexity of data formats is associated with the headers. In this paper, we address compatibility of programs operating over headers of network packets, files, images, etc. As format specifications are rarely available, we infer the format associated with headers by a program as a set of guarded layouts. In terms of these formats, we define and check compatibility of (a) producer-consumer programs and (b) different versions of producer (or consumer) programs. A compatible producer-consumer pair is free of type mismatches and logical incompatibilities such as the consumer rejecting valid outputs generated by the producer. A backward compatible producer (resp. consumer) is guaranteed to be compatible with consumers (resp. producers) that were compatible with its older version. With our prototype tool, we identified 5 known bugs and 1 potential bug in (a) sender-receiver modules of Linux network drivers of 3 vendors and (b) different versions of a TIFF image library.

Cite as

Pranavadatta Devaki and Aditya Kanade. Static Analysis for Checking Data Format Compatibility of Programs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 522-533, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{devaki_et_al:LIPIcs.FSTTCS.2012.522,
  author =	{Devaki, Pranavadatta and Kanade, Aditya},
  title =	{{Static Analysis for Checking Data Format Compatibility of Programs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{522--533},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.522},
  URN =		{urn:nbn:de:0030-drops-38862},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.522},
  annote =	{Keywords: Data format compatibility, producer-consumer/version-related programs}
}
Document
The Complexity of Quantitative Information Flow in Recursive Programs

Authors: Rohit Chadha and Michael Ummels


Abstract
Information-theoretic measures based upon mutual information can be employed to quantify the information that an execution of a program reveals about its secret inputs. The information leakage bounding problem asks whether the information leaked by a program does not exceed a given threshold. We consider this problem for two scenarios: a) the outputs of the program are revealed, and b) the timing (measured in the number of execution steps) of the program is revealed. For both scenarios, we establish complexity results in the context of deterministic boolean programs, both for programs with and without recursion. In particular, we prove that for recursive programs the information leakage bounding problem is no harder than checking reachability.
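
As a back-of-the-envelope illustration (this sketch is ours, not the paper's construction): for a deterministic program with a uniformly distributed secret, the mutual information between the secret and the attacker's observation equals the Shannon entropy of the observation, so the leakage bounding problem reduces to comparing that entropy against the threshold. The example programs and helpers below are hypothetical.

# Hedged sketch: leakage of a deterministic boolean program under a uniform
# secret, measured as the Shannon entropy of the observed output.
from collections import Counter
from itertools import product
from math import log2

def leakage(program, n_bits):
    """I(secret; observation) for a deterministic program over n_bits
    uniformly distributed secret bits; here this equals H(observation)."""
    counts = Counter(program(bits) for bits in product([0, 1], repeat=n_bits))
    total = 2 ** n_bits
    return -sum((c / total) * log2(c / total) for c in counts.values())

def bounds_leakage(program, n_bits, threshold):
    """Information leakage bounding problem: is the leak at most threshold?"""
    return leakage(program, n_bits) <= threshold

# Scenario (a): the output is revealed. A password check on a 3-bit secret
# leaks little; revealing the secret's parity leaks exactly one bit.
check = lambda s: int(s == (1, 0, 1))
parity = lambda s: sum(s) % 2
print(leakage(check, 3))                 # ~0.54 bits
print(bounds_leakage(parity, 3, 1.0))    # True: exactly 1 bit leaked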

Cite as

Rohit Chadha and Michael Ummels. The Complexity of Quantitative Information Flow in Recursive Programs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 534-545, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{chadha_et_al:LIPIcs.FSTTCS.2012.534,
  author =	{Chadha, Rohit and Ummels, Michael},
  title =	{{The Complexity of Quantitative Information Flow in Recursive Programs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{534--545},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.534},
  URN =		{urn:nbn:de:0030-drops-38872},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.534},
  annote =	{Keywords: quantitative information flow, recursive programs, program analysis, verification, computational complexity}
}
Document
Computationally Complete Symbolic Attacker in Action

Authors: Gergei Bana, Pedro Adao, and Hideki Sakurada


Abstract
We show that the recent technique of computationally complete symbolic attackers proposed by Bana and Comon-Lundh [POST 2012] for computationally sound verification of security protocols is powerful enough to verify actual protocols. In their work, Bana and Comon-Lundh presented only the general framework, but they did not introduce sufficiently many axioms to actually prove protocols. We present a set of axioms -- some generic axioms that are computationally sound for all PPT algorithms, and two specific axioms that are sound for CCA2 secure encryptions -- and illustrate the power of this technique by giving the first computationally sound verification (secrecy and authentication) via symbolic attackers of the NSL Protocol that does not need any further restrictive assumptions about the computational implementation. The axioms are entirely modular, not particular to the NSL protocol.
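
For orientation only: the NSL protocol verified in the paper is the three-message Needham-Schroeder-Lowe public-key authentication protocol. The sketch below shows just that message flow; the enc and nonce symbols are placeholders of ours, whereas the paper reasons about encryption axiomatically (e.g. via CCA2 soundness), not through any concrete implementation.

# Hedged sketch of the three NSL (Needham-Schroeder-Lowe) messages.
def enc(pk, *payload):              # symbolic public-key encryption
    return ("enc", pk, payload)

def nsl_run(A, B, pkA, pkB, nonce):
    nA, nB = nonce(A), nonce(B)
    msg1 = enc(pkB, nA, A)          # A -> B : {nA, A}_pkB
    msg2 = enc(pkA, nA, nB, B)      # B -> A : {nA, nB, B}_pkA  (Lowe's fix: B's identity included)
    msg3 = enc(pkB, nB)             # A -> B : {nB}_pkB
    return msg1, msg2, msg3

# Toy run with symbolic keys and fresh-nonce placeholders.
print(nsl_run("Alice", "Bob", "pk_A", "pk_B", lambda who: ("n", who)))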

Cite as

Gergei Bana, Pedro Adao, and Hideki Sakurada. Computationally Complete Symbolic Attacker in Action. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 546-560, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{bana_et_al:LIPIcs.FSTTCS.2012.546,
  author =	{Bana, Gergei and Adao, Pedro and Sakurada, Hideki},
  title =	{{Computationally Complete Symbolic Attacker in Action}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{546--560},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.546},
  URN =		{urn:nbn:de:0030-drops-38888},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.546},
  annote =	{Keywords: Security Protocols, Symbolic Adversary, Computational Soundness}
}
