LIPIcs, Volume 13

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)



Event

FSTTCS 2011, December 12-14, 2011, Mumbai, India

Editors

Supratik Chakraborty
Amit Kumar

Publication Details

  • published at: 2011-12-01
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-34-7
  • DBLP: db/conf/fsttcs/fsttcs2011

Documents
Document
Complete Volume
LIPIcs, Volume 13, FSTTCS'11, Complete Volume

Authors: Supratik Chakraborty and Amit Kumar


Abstract
LIPIcs, Volume 13, FSTTCS'11, Complete Volume

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


@Proceedings{chakraborty_et_al:LIPIcs.FSTTCS.2011,
  title =	{{LIPIcs, Volume 13, FSTTCS'11, Complete Volume}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011},
  URN =		{urn:nbn:de:0030-drops-41077},
  doi =		{10.4230/LIPIcs.FSTTCS.2011},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems Specifying and Verifying and Reasoning about Programs}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers

Authors: Supratik Chakraborty and Amit Kumar


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. i-xv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2011.i,
  author =	{Chakraborty, Supratik and Kumar, Amit},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{i--xv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.i},
  URN =		{urn:nbn:de:0030-drops-33196},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers}
}
Document
Author Index

Authors: Supratik Chakraborty and Amit Kumar


Abstract
Author Index

Cite as

Supratik Chakraborty and Amit Kumar. Author Index. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 16-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2011.xvi,
  author =	{Chakraborty, Supratik and Kumar, Amit},
  title =	{{Author Index}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{16--17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.xvi},
  URN =		{urn:nbn:de:0030-drops-33636},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.xvi},
  annote =	{Keywords: Author Index}
}
Document
Invited Talk
Energy-Efficient Algorithms (Invited Talk)

Authors: Susanne Albers


Abstract
This presentation surveys algorithmic techniques for energy savings. We address power-down as well as dynamic speed scaling mechanisms.

Cite as

Susanne Albers. Energy-Efficient Algorithms (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{albers:LIPIcs.FSTTCS.2011.1,
  author =	{Albers, Susanne},
  title =	{{Energy-Efficient Algorithms}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{1--2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.1},
  URN =		{urn:nbn:de:0030-drops-33380},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.1},
  annote =	{Keywords: energy efficiency, power-down mechanisms, dynamic speed scaling, offline algorithm, online algorithm}
}
Document
Invited Talk
Constraints, Graphs, Algebra, Logic, and Complexity (Invited Talk)

Authors: Moshe Y. Vardi


Abstract
A large class of problems in AI and other areas of computer science can be viewed as constraint-satisfaction problems. This includes problems in database query optimization, machine vision, belief maintenance, scheduling, temporal reasoning, type reconstruction, graph theory, and satisfiability. All of these problems can be recast as questions regarding the existence of homomorphisms between two directed graphs. It is well known that the constraint-satisfaction problem is NP-complete. This motivated an extensive research program to identify tractable cases of constraint satisfaction. This research proceeds along two major lines. The first line of research focuses on non-uniform constraint satisfaction, where the target graph is fixed. The goal is to identify those target graphs that give rise to a tractable constraint-satisfaction problem. The second line of research focuses on identifying large classes of source graphs for which constraint satisfaction is tractable. We show how tools from graph theory, universal algebra, logic, and complexity theory shed light on the tractability of constraint satisfaction.
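
The homomorphism view in this abstract is easy to make concrete. The sketch below is ours, not part of the talk: deciding 3-colourability of a graph is exactly deciding whether it admits a homomorphism into the complete graph K3, and a brute-force search over all maps illustrates the definition (the function name and the tiny instances are hypothetical choices).

    from itertools import product

    def has_homomorphism(src_nodes, src_edges, tgt_nodes, tgt_edges):
        """Is there a map h with (h(u), h(v)) an edge of the target for every
        source edge (u, v)?  Exponential search, for tiny examples only."""
        tgt_edges = set(tgt_edges)
        for image in product(tgt_nodes, repeat=len(src_nodes)):
            h = dict(zip(src_nodes, image))
            if all((h[u], h[v]) in tgt_edges for (u, v) in src_edges):
                return True
        return False

    # 3-colourability of the 5-cycle = homomorphism into K3 (edges listed in both directions)
    c5_nodes = [0, 1, 2, 3, 4]
    c5_edges = [(i, (i + 1) % 5) for i in c5_nodes] + [((i + 1) % 5, i) for i in c5_nodes]
    k3_nodes = ["r", "g", "b"]
    k3_edges = [(a, b) for a in k3_nodes for b in k3_nodes if a != b]
    print(has_homomorphism(c5_nodes, c5_edges, k3_nodes, k3_edges))   # True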

Cite as

Moshe Y. Vardi. Constraints, Graphs, Algebra, Logic, and Complexity (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, p. 3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{vardi:LIPIcs.FSTTCS.2011.3,
  author =	{Vardi, Moshe Y.},
  title =	{{Constraints, Graphs, Algebra, Logic, and Complexity}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{3--3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.3},
  URN =		{urn:nbn:de:0030-drops-33580},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.3},
  annote =	{Keywords: constraint satisfaction, NP completeness, dichotomy}
}
Document
Invited Talk
Physical limits of Communication (Invited Talk)

Authors: Madhu Sudan


Abstract
We describe recent work with Sanjeev Khanna (U. Penn.) where we explore potential axioms about the mechanics of information transmission with a view to understanding whether continuous signals can carry more information than digital signals.

Cite as

Madhu Sudan. Physical limits of Communication (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 4-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{sudan:LIPIcs.FSTTCS.2011.4,
  author =	{Sudan, Madhu},
  title =	{{Physical limits of Communication}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{4--5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.4},
  URN =		{urn:nbn:de:0030-drops-33392},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.4},
  annote =	{Keywords: Analog signals, information capacity, delays}
}
Document
Invited Talk
A Domain-Specific Language for Computing on Encrypted Data (Invited Talk)

Authors: Alex Bain, John Mitchell, Rahul Sharma, Deian Stefan, and Joe Zimmerman


Abstract
In cloud computing, a client may request computation on confidential data that is sent to untrusted servers. While homomorphic encryption and secure multiparty computation provide building blocks for secure computation, software must be properly structured to preserve confidentiality. Using a general definition of secure execution platform, we propose a single Haskell-based domain-specific language for cryptographic cloud computing and prove correctness and confidentiality for two representative and distinctly different implementations of the same programming language. The secret sharing execution platform provides information-theoretic security against colluding servers. The homomorphic encryption execution platform requires only one server, but has limited efficiency, and provides secrecy against a computationally-bounded adversary. Experiments with our implementation suggest promising computational feasibility, as cryptography improves, and show how code can be developed uniformly for a variety of secure cloud platforms, without explicitly programming separate clients and servers.
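
The paper's DSL is embedded in Haskell; the fragment below is not that DSL but a minimal generic sketch, in Python, of the additive secret-sharing idea behind the first execution platform: a value is split into random shares that individually reveal nothing, and sums can be computed share-wise by independent servers. The modulus and the function names are illustrative assumptions.

    import secrets

    P = 2**61 - 1   # public prime modulus (illustrative choice)

    def share(x, n=3):
        """Split x into n additive shares modulo P; any n-1 shares look uniformly random."""
        parts = [secrets.randbelow(P) for _ in range(n - 1)]
        parts.append((x - sum(parts)) % P)
        return parts

    def reconstruct(shares):
        return sum(shares) % P

    a, b = 20, 22
    sa, sb = share(a), share(b)
    # each "server" adds its own shares locally, without ever seeing a or b
    sc = [(x + y) % P for x, y in zip(sa, sb)]
    print(reconstruct(sc))   # 42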

Cite as

Alex Bain, John Mitchell, Rahul Sharma, Deian Stefan, and Joe Zimmerman. A Domain-Specific Language for Computing on Encrypted Data (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 6-24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{bain_et_al:LIPIcs.FSTTCS.2011.6,
  author =	{Bain, Alex and Mitchell, John and Sharma, Rahul and Stefan, Deian and Zimmerman, Joe},
  title =	{{A Domain-Specific Language for Computing on Encrypted Data}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{6--24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.6},
  URN =		{urn:nbn:de:0030-drops-33604},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.6},
  annote =	{Keywords: Domain-Specific Language, Secret Sharing, Homomorphic Encryption}
}
Document
Invited Talk
Schema Mappings and Data Examples: Deriving Syntax from Semantics (Invited Talk)

Authors: Phokion G. Kolaitis


Abstract
Schema mappings are high-level specifications that describe the relationship between two database schemas. Schema mappings are considered to be the essential building blocks in such critical data interoperability tasks as data exchange and data integration. For this reason, they have been the focus of extensive research investigations over the past several years. Since in real-life applications schema mappings can be quite complex, it is important to develop methods and tools for illustrating, explaining, and deriving schema mappings. A promising approach to this effect is to use “good” data examples that illustrate the schema mapping at hand. In this talk, we present an overview of recent work on characterizing and deriving schema mappings via a finite set of data examples. We show that every LAV schema mapping (i.e., a schema mapping specified by a finite set of local-as-view tuple-generating dependencies) is uniquely characterized by a finite set of universal data examples with respect to the class of all LAV schema mappings. We also show that this type of result does not hold for arbitrary GAV schema mappings (i.e., schema mappings specified by a finite set of global-as-view tuple-generating dependencies). After this, we give a necessary and sufficient algorithmic condition for a GAV schema mapping to be uniquely characterizable by a finite set of universal examples with respect to the class of all GAV schema mappings. Along the way, we establish tight connections between unique characterizability of schema mappings and homomorphism dualities. This is joint work with Bogdan Alexe (IBM Research - Almaden), Balder ten Cate (UC Santa Cruz), and Wang-Chiew Tan (UC Santa Cruz and IBM Research - Almaden) based on [1, 2, 3].

Cite as

Phokion G. Kolaitis. Schema Mappings and Data Examples: Deriving Syntax from Semantics (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, p. 25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{kolaitis:LIPIcs.FSTTCS.2011.25,
  author =	{Kolaitis, Phokion G.},
  title =	{{Schema Mappings and Data Examples: Deriving Syntax from Semantics}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{25--25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.25},
  URN =		{urn:nbn:de:0030-drops-33598},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.25},
  annote =	{Keywords: Schema mappings, database constraints, data exchange, data integration, universal solutions, homomorphism dualities}
}
Document
Invited Talk
Quantum State Description Complexity (Invited Talk)

Authors: Umesh V. Vazirani


Abstract
Quantum states generally require exponential sized classical descriptions, but the long conjectured area law provides hope that a large class of natural quantum states can be described succinctly. Recent progress in formally proving the area law is described.

Cite as

Umesh V. Vazirani. Quantum State Description Complexity (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 26-27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{vazirani:LIPIcs.FSTTCS.2011.26,
  author =	{Vazirani, Umesh V.},
  title =	{{Quantum State Description Complexity}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{26--27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.26},
  URN =		{urn:nbn:de:0030-drops-33408},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.26},
  annote =	{Keywords: area law, Hamiltonian, description complexity, detectability lemma, entanglement}
}
Document
Approximation Algorithms for Union and Intersection Covering Problems

Authors: Marek Cygan, Fabrizio Grandoni, Stefano Leonardi, Marcin Mucha, Marcin Pilipczuk, and Piotr Sankowski


Abstract
In a classical covering problem, we are given a set of requests that we need to satisfy (fully or partially), by buying a subset of items at minimum cost. For example, in the k-MST problem we want to find the cheapest tree spanning at least k nodes of an edge-weighted graph. Here, nodes represent requests whereas edges correspond to items. In this paper, we initiate the study of a new family of multi-layer covering problems. Each such problem consists of a collection of h distinct instances of a standard covering problem (layers), with the constraint that all layers share the same set of requests. We identify two main subfamilies of these problems: - in a union multi-layer problem, a request is satisfied if it is satisfied in at least one layer; - in an intersection multi-layer problem, a request is satisfied if it is satisfied in all layers. To see some natural applications, consider both generalizations of k-MST. Union k-MST can model a problem where we are asked to connect a set of users to at least one of two communication networks, e.g., a wireless and a wired network. On the other hand, Intersection k-MST can formalize the problem of providing both electricity and water to at least k users.

Cite as

Marek Cygan, Fabrizio Grandoni, Stefano Leonardi, Marcin Mucha, Marcin Pilipczuk, and Piotr Sankowski. Approximation Algorithms for Union and Intersection Covering Problems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 28-40, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{cygan_et_al:LIPIcs.FSTTCS.2011.28,
  author =	{Cygan, Marek and Grandoni, Fabrizio and Leonardi, Stefano and Mucha, Marcin and Pilipczuk, Marcin and Sankowski, Piotr},
  title =	{{Approximation Algorithms for Union and Intersection Covering Problems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{28--40},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.28},
  URN =		{urn:nbn:de:0030-drops-33213},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.28},
  annote =	{Keywords: Approximation algorithms, Partial covering problems}
}
Document
Tight Gaps for Vertex Cover in the Sherali-Adams SDP Hierarchy

Authors: Siavosh Benabbas, Siu On Chan, Konstantinos Georgiou, and Avner Magen


Abstract
We give the first tight integrality gap for Vertex Cover in the Sherali-Adams SDP system. More precisely, we show that for every \epsilon >0, the standard SDP for Vertex Cover that is strengthened with the level-6 Sherali-Adams system has integrality gap 2-\epsilon. To the best of our knowledge this is the first nontrivial tight integrality gap for the Sherali-Adams SDP hierarchy for a combinatorial problem with hard constraints. For our proof we introduce a new tool to establish Local-Global Discrepancy which uses simple facts from high-dimensional geometry. This allows us to give Sherali-Adams solutions with objective value n(1/2+o(1)) for graphs with small (2+o(1)) vector chromatic number. Since such graphs with no linear size independent sets exist, this immediately gives a tight integrality gap for the Sherali-Adams system for a superconstant number of tightenings. In order to obtain a Sherali-Adams solution that also satisfies semidefinite conditions, we reduce semidefiniteness to a condition on the Taylor expansion of a reasonably simple function that we are able to establish up to constant-level SDP tightenings. We conjecture that this condition holds even for superconstant levels, which would imply that in fact our solution is valid for superconstant-level Sherali-Adams SDPs.

Cite as

Siavosh Benabbas, Siu On Chan, Konstantinos Georgiou, and Avner Magen. Tight Gaps for Vertex Cover in the Sherali-Adams SDP Hierarchy. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 41-54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{benabbas_et_al:LIPIcs.FSTTCS.2011.41,
  author =	{Benabbas, Siavosh and Chan, Siu On and Georgiou, Konstantinos and Magen, Avner},
  title =	{{Tight Gaps for Vertex Cover in the Sherali-Adams SDP Hierarchy}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{41--54},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.41},
  URN =		{urn:nbn:de:0030-drops-33299},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.41},
  annote =	{Keywords: Vertex Cover, Integrality Gap, Lift-and-Project systems, Linear Programming, Semidefinite Programming}
}
Document
Applications of Discrepancy Theory in Multiobjective Approximation

Authors: Christian Glaßer, Christian Reitwießner, and Maximilian Witek


Abstract
We apply a multi-color extension of the Beck-Fiala theorem to show that the multiobjective maximum traveling salesman problem is randomized 1/2-approximable on directed graphs and randomized 2/3-approximable on undirected graphs. Using the same technique we show that the multiobjective maximum satisfiability problem is 1/2-approximable.

Cite as

Christian Glaßer, Christian Reitwießner, and Maximilian Witek. Applications of Discrepancy Theory in Multiobjective Approximation. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 55-65, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{glaer_et_al:LIPIcs.FSTTCS.2011.55,
  author =	{Gla{\ss}er, Christian and Reitwie{\ss}ner, Christian and Witek, Maximilian},
  title =	{{Applications of Discrepancy Theory in Multiobjective Approximation}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{55--65},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.55},
  URN =		{urn:nbn:de:0030-drops-33233},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.55},
  annote =	{Keywords: Discrepancy Theory, Multiobjective Optimization, Satisfiability, Traveling Salesman}
}
Document
Quasi-Weak Cost Automata: A New Variant of Weakness

Authors: Denis Kuperberg and Michael Vanden Boom


Abstract
Cost automata have a finite set of counters which can be manipulated on each transition but do not affect control flow. Based on the evolution of the counter values, these automata define functions from a domain like words or trees to ℕ ∪ {∞}, modulo an equivalence relation which ignores exact values but preserves boundedness properties. These automata have been studied by Colcombet et al. as part of a "theory of regular cost functions", an extension of the theory of regular languages which retains robust equivalences, closure properties, and decidability like the classical theory. We extend this theory by introducing quasi-weak cost automata. Unlike traditional weak automata which have a hard-coded bound on the number of alternations between accepting and rejecting states, quasi-weak automata bound the alternations using the counter values (which can vary across runs). We show that these automata are strictly more expressive than weak cost automata over infinite trees. The main result is a Rabin-style characterization theorem: a function is quasi-weak definable if and only if it is definable using two dual forms of non-deterministic Büchi cost automata. This yields a new decidability result for cost functions over infinite trees.

Cite as

Denis Kuperberg and Michael Vanden Boom. Quasi-Weak Cost Automata: A New Variant of Weakness. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 66-77, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{kuperberg_et_al:LIPIcs.FSTTCS.2011.66,
  author =	{Kuperberg, Denis and Vanden Boom, Michael},
  title =	{{Quasi-Weak Cost Automata: A New Variant of Weakness}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{66--77},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.66},
  URN =		{urn:nbn:de:0030-drops-33517},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.66},
  annote =	{Keywords: Automata, infinite trees, cost functions, weak}
}
Document
Using non-convex approximations for efficient analysis of timed automata

Authors: Frédéric Herbreteau, Dileep Kini, B. Srivathsan, and Igor Walukiewicz


Abstract
The reachability problem for timed automata asks if there exists a path from an initial state to a target state. The standard solution to this problem involves computing the zone graph of the automaton, which in principle could be infinite. In order to make the graph finite, zones are approximated using an extrapolation operator. For reasons of efficiency, in current algorithms the extrapolation of a zone is always a zone; in particular, it is convex. In this paper, we propose to solve the reachability problem without such extrapolation operators. To ensure termination, we provide an efficient algorithm to check if a zone is included in the so-called region closure of another. Although theoretically better, closure cannot be used in the standard algorithm since the closure of a zone may not be convex. An additional benefit of the proposed approach is that it makes it possible to calculate approximating parameters on the fly during exploration of the zone graph, as opposed to current methods which do so by a static analysis of the automaton prior to the exploration. This allows for further improvements in the algorithm. Promising experimental results are presented.

Cite as

Frédéric Herbreteau, Dileep Kini, B. Srivathsan, and Igor Walukiewicz. Using non-convex approximations for efficient analysis of timed automata. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 78-89, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{herbreteau_et_al:LIPIcs.FSTTCS.2011.78,
  author =	{Herbreteau, Fr\'{e}d\'{e}ric and Kini, Dileep and Srivathsan, B. and Walukiewicz, Igor},
  title =	{{Using non-convex approximations for efficient analysis of timed automata}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{78--89},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.78},
  URN =		{urn:nbn:de:0030-drops-33619},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.78},
  annote =	{Keywords: Timed Automata, Model-checking, Non-convex abstraction, On-the-fly abstraction}
}
Document
Shrinking Timed Automata

Authors: Ocan Sankur, Patricia Bouyer, and Nicolas Markey


Abstract
We define and study a new approach to the implementability of timed automata, where the semantics is perturbed by imprecisions and the finite frequency of the hardware. In order to circumvent these effects, we introduce parametric shrinking of clock constraints, which corresponds to tightening them. We propose symbolic procedures to decide the existence of (and then compute) parameters under which the shrunk version of a given timed automaton is non-blocking and can time-abstract simulate the exact semantics. We then define an implementation semantics for timed automata with a digital clock and positive reaction times, and show that for shrinkable timed automata, non-blockingness and time-abstract simulation are preserved in implementation.

Cite as

Ocan Sankur, Patricia Bouyer, and Nicolas Markey. Shrinking Timed Automata. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 90-102, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{sankur_et_al:LIPIcs.FSTTCS.2011.90,
  author =	{Sankur, Ocan and Bouyer, Patricia and Markey, Nicolas},
  title =	{{Shrinking Timed Automata}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{90--102},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.90},
  URN =		{urn:nbn:de:0030-drops-33627},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.90},
  annote =	{Keywords: Timed automata, implementability, robustness}
}
Document
The Quantitative Linear-Time--Branching-Time Spectrum

Authors: Uli Fahrenberg, Axel Legay, and Claus Thrane


Abstract
We present a distance-agnostic approach to quantitative verification. Taking as input an unspecified distance on system traces, or executions, we develop a game-based framework which allows us to define a spectrum of different interesting system distances corresponding to the given trace distance. Thus we extend the classic linear-time--branching-time spectrum to a quantitative setting, parametrized by trace distance. We also provide fixed-point characterizations of all system distances, and we prove a general transfer principle which allows us to transfer counterexamples from the qualitative to the quantitative setting, showing that all system distances are mutually topologically inequivalent.

Cite as

Uli Fahrenberg, Axel Legay, and Claus Thrane. The Quantitative Linear-Time--Branching-Time Spectrum. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 103-114, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{fahrenberg_et_al:LIPIcs.FSTTCS.2011.103,
  author =	{Fahrenberg, Uli and Legay, Axel and Thrane, Claus},
  title =	{{The Quantitative Linear-Time--Branching-Time Spectrum}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{103--114},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.103},
  URN =		{urn:nbn:de:0030-drops-33324},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.103},
  annote =	{Keywords: Quantitative verification, System distance, Distance hierarchy, Linear time, Branching time}
}
Document
Isomorphism testing of read-once functions and polynomials

Authors: Raghavendra Rao B. V. and Jayalal Sarma M. N.


Abstract
In this paper, we study the isomorphism testing problem of formulas in the Boolean and arithmetic settings. We show that isomorphism testing of Boolean formulas in which a variable is read at most once (known as read-once formulas) is complete for log-space. In contrast, we observe that the problem becomes polynomial-time equivalent to the graph isomorphism problem when the input formulas can be represented as an OR of two or more monotone read-once formulas. This classifies the complexity of the problem in terms of the number of reads, as the read-3 formula isomorphism problem is hard for coNP. We address the polynomial isomorphism problem, a special case of the polynomial equivalence problem, which in turn is important from a cryptographic perspective [Patarin, EUROCRYPT'96; Kayal, SODA'11]. As our main result, we propose a deterministic polynomial-time canonization scheme for polynomials computed by constant-free read-once arithmetic formulas. In contrast, we show that when the arithmetic formula is allowed to read a variable twice, this problem is as hard as the graph isomorphism problem.

Cite as

Raghavendra Rao B. V. and Jayalal Sarma M. N. Isomorphism testing of read-once functions and polynomials. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 115-126, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{raob.v._et_al:LIPIcs.FSTTCS.2011.115,
  author =	{Rao B .V., Raghavendra and Sarma M. N., Jayalal},
  title =	{{Isomorphism testing of read-once functions and polynomials}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{115--126},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.115},
  URN =		{urn:nbn:de:0030-drops-33202},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.115},
  annote =	{Keywords: Isomorphism Problems, Computational Complexity, Read-once formulas, Read-once Polynomials}
}
Document
The Limited Power of Powering: Polynomial Identity Testing and a Depth-four Lower Bound for the Permanent

Authors: Bruno Grenet, Pascal Koiran, Natacha Portier, and Yann Strozecki


Abstract
Polynomial identity testing and arithmetic circuit lower bounds are two central questions in algebraic complexity theory. It is an intriguing fact that these questions are actually related. One of the authors of the present paper has recently proposed a "real tau-conjecture" which is inspired by this connection. The real tau-conjecture states that the number of real roots of a sum of products of sparse univariate polynomials should be polynomially bounded. It implies a superpolynomial lower bound on the size of arithmetic circuits computing the permanent polynomial. In this paper we show that the real tau-conjecture holds true for a restricted class of sums of products of sparse polynomials. This result yields lower bounds for a restricted class of depth-4 circuits: we show that polynomial size circuits from this class cannot compute the permanent, and we also give a deterministic polynomial identity testing algorithm for the same class of circuits.
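
As a toy illustration of the objects in the real tau-conjecture (our example, not taken from the paper): a sum of products of sparse univariate polynomials can have high degree, and the conjecture asserts that its number of real roots stays polynomially bounded in the sparsity parameters. The Python sketch below assumes SymPy is available and counts the real roots of one such instance exactly.

    import sympy as sp

    x = sp.symbols('x')

    # a sum of two products, each factor a sparse (few-monomial) polynomial
    f = sp.expand((x**50 - 1) * (x**3 + 2) + (x**20 + x) * (x**7 - 5))

    roots = sp.real_roots(f)              # exact isolation of all real roots
    print(sp.degree(f, x), len(roots))    # degree 53 vs. the (small) real-root count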

Cite as

Bruno Grenet, Pascal Koiran, Natacha Portier, and Yann Strozecki. The Limited Power of Powering: Polynomial Identity Testing and a Depth-four Lower Bound for the Permanent. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 127-139, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{grenet_et_al:LIPIcs.FSTTCS.2011.127,
  author =	{Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
  title =	{{The Limited Power of Powering: Polynomial Identity Testing and a Depth-four Lower Bound for the Permanent}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{127--139},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.127},
  URN =		{urn:nbn:de:0030-drops-33501},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.127},
  annote =	{Keywords: Algebraic Complexity, Sparse Polynomials, Descartes' Rule of Signs, Lower Bound for the Permanent, Polynomial Identity Testing}
}
Document
Petri Net Reachability Graphs: Decidability Status of FO Properties

Authors: Philippe Darondeau, Stéphane Demri, Roland Meyer, and Christophe Morvan


Abstract
We investigate the decidability and complexity status of model-checking problems on unlabelled reachability graphs of Petri nets by considering first-order, modal and pattern-based languages without labels on transitions or atomic propositions on markings. We consider several parameters to separate decidable problems from undecidable ones. Not only are we able to provide precise borders and a systematic analysis, but we also demonstrate the robustness of our proof techniques.

Cite as

Philippe Darondeau, Stéphane Demri, Roland Meyer, and Christophe Morvan. Petri Net Reachability Graphs: Decidability Status of FO Properties. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 140-151, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{darondeau_et_al:LIPIcs.FSTTCS.2011.140,
  author =	{Darondeau, Philippe and Demri, St\'{e}phane and Meyer, Roland and Morvan, Christophe},
  title =	{{Petri Net Reachability Graphs: Decidability Status of FO Properties}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{140--151},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.140},
  URN =		{urn:nbn:de:0030-drops-33563},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.140},
  annote =	{Keywords: Petri nets, First order logic, Reachability graph}
}
Document
Approximating Petri Net Reachability Along Context-free Traces

Authors: Mohamed Faouzi Atig and Pierre Ganty


Abstract
We investigate the problem asking whether the intersection of a context-free language (CFL) and a Petri net language (PNL) (with reachability as acceptance condition) is empty. Our contribution to solving this long-standing problem, which relates, for instance, to the reachability analysis of recursive programs over unbounded data domains, is to identify a class of CFLs called the finite-index CFLs for which the problem is decidable. The k-index approximation of a CFL can be obtained by discarding all the words that cannot be derived within a budget k on the number of occurrences of non-terminals. A finite-index CFL is thus a CFL which coincides with its k-index approximation for some k. We decide whether the intersection of a finite-index CFL and a PNL is empty by reducing it to the reachability problem of Petri nets with weak inhibitor arcs, a class of systems with infinitely many states for which reachability is known to be decidable. Conversely, we show that the reachability problem for a Petri net with weak inhibitor arcs reduces to the emptiness problem of a finite-index CFL intersected with a PNL.

Cite as

Mohamed Faouzi Atig and Pierre Ganty. Approximating Petri Net Reachability Along Context-free Traces. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 152-163, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{atig_et_al:LIPIcs.FSTTCS.2011.152,
  author =	{Atig, Mohamed Faouzi and Ganty, Pierre},
  title =	{{Approximating Petri Net Reachability Along Context-free Traces}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{152--163},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.152},
  URN =		{urn:nbn:de:0030-drops-33448},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.152},
  annote =	{Keywords: Petri nets, Context-free Grammars, Reachability Problem}
}
Document
Minimum Fill-in of Sparse Graphs: Kernelization and Approximation

Authors: Fedor V. Fomin, Geevarghese Philip, and Yngve Villanger


Abstract
The Minimum Fill-in problem is to decide if a graph can be triangulated by adding at most k edges. The problem has important applications in numerical algebra, in particular in sparse matrix computations. We develop kernelization algorithms for the problem on several classes of sparse graphs. We obtain linear kernels on planar graphs, and kernels of size O(k^{3/2}) in graphs excluding some fixed graph as a minor and in graphs of bounded degeneracy. As a byproduct of our results, we obtain approximation algorithms with approximation ratios O(log k) on planar graphs and O(√k log k) on H-minor-free graphs. These results significantly improve the previously known kernelization and approximation results for Minimum Fill-in on sparse graphs.
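
For readers new to the problem, here is a naive exhaustive sketch (ours, sized only for toy graphs) of the decision version: can at most k of the missing edges be added so that the graph becomes chordal, i.e. triangulated? It assumes the networkx library; the function name is hypothetical.

    from itertools import combinations
    import networkx as nx

    def fill_in_at_most_k(G, k):
        """Decide Minimum Fill-in by exhaustive search: try every set of at most k
        non-edges and test chordality.  Exponential; for illustration only."""
        non_edges = list(nx.non_edges(G))
        for size in range(k + 1):
            for extra in combinations(non_edges, size):
                H = G.copy()
                H.add_edges_from(extra)
                if nx.is_chordal(H):
                    return True
        return False

    C4 = nx.cycle_graph(4)              # a chordless 4-cycle needs exactly one chord
    print(fill_in_at_most_k(C4, 0))     # False
    print(fill_in_at_most_k(C4, 1))     # True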

Cite as

Fedor V. Fomin, Geevarghese Philip, and Yngve Villanger. Minimum Fill-in of Sparse Graphs: Kernelization and Approximation. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 164-175, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{fomin_et_al:LIPIcs.FSTTCS.2011.164,
  author =	{Fomin, Fedor V. and Philip, Geevarghese and Villanger, Yngve},
  title =	{{Minimum Fill-in of Sparse Graphs: Kernelization and Approximation}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{164--175},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.164},
  URN =		{urn:nbn:de:0030-drops-33451},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.164},
  annote =	{Keywords: Minimum Fill-In, Approximation, Kernelization, Sparse graphs}
}
Document
Cubicity, Degeneracy, and Crossing Number

Authors: Abhijin Adiga, L. Sunil Chandran, and Rogers Mathew


Abstract
A k-box B=(R_1,R_2,...,R_k), where each R_i is a closed interval on the real line, is defined to be the Cartesian product R_1 × R_2 × ... × R_k. If each R_i is a unit length interval, we call B a k-cube. Boxicity of a graph G, denoted as box(G), is the minimum integer k such that G is an intersection graph of k-boxes. Similarly, the cubicity of G, denoted as cub(G), is the minimum integer k such that G is an intersection graph of k-cubes. It was shown in [L. Sunil Chandran, Mathew C. Francis, and Naveen Sivadasan. Representing graphs as the intersection of axis-parallel cubes. MCDES-2008, IISc Centenary Conference, available at CoRR, abs/cs/0607092, 2006.] that, for a graph G with maximum degree Δ, cub(G) ≤ ⌈4(Δ+1) ln n⌉. In this paper we show that, for a k-degenerate graph G, cub(G) ≤ (k+2)⌈2e log n⌉. Since k is at most Δ and can be much lower, this clearly is a stronger result. We also give an efficient deterministic algorithm that runs in O(n^2 k) time to output an 8k(⌈2.42 log n⌉ + 1) dimensional cube representation for G. The crossing number of a graph G, denoted as CR(G), is the minimum number of crossing pairs of edges, over all drawings of G in the plane. An important consequence of the above result is that if the crossing number of a graph G is t, then box(G) is O(t^{1/4} ⌈log t⌉^{3/4}). This bound is tight up to a factor of O((log t)^{3/4}). Let (P,≤) be a partially ordered set and let G_P denote its underlying comparability graph. Let dim(P) denote the poset dimension of P. Another interesting consequence of our result is to show that dim(P) ≤ 2(k+2)⌈2e log n⌉, where k denotes the degeneracy of G_P. Also, we get a deterministic algorithm that runs in O(n^2 k) time to construct a 16k(⌈2.42 log n⌉ + 1) sized realizer for P. As far as we know, though very good upper bounds exist for poset dimension in terms of the maximum degree of its underlying comparability graph, no upper bounds in terms of the degeneracy of the underlying comparability graph are known in the literature.
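
The box definitions at the start of this abstract are easy to check mechanically. The sketch below is our illustration, not from the paper: it builds the intersection graph of a given k-box representation, here a 2-box representation of the 4-cycle a-b-c-d-a; the representation and names are illustrative choices.

    from itertools import combinations

    def intersection_graph(boxes):
        """boxes: dict vertex -> list of k closed intervals (lo, hi).
        Two vertices are adjacent iff their boxes overlap in every coordinate."""
        edges = set()
        for u, v in combinations(boxes, 2):
            if all(lo1 <= hi2 and lo2 <= hi1
                   for (lo1, hi1), (lo2, hi2) in zip(boxes[u], boxes[v])):
                edges.add((u, v))
        return edges

    # a 2-box representation of the 4-cycle a-b-c-d-a (edges ab, bc, cd, da only)
    rep = {
        'a': [(0, 2), (0, 1)],
        'b': [(0, 1), (0, 3)],
        'c': [(0, 2), (2, 3)],
        'd': [(2, 3), (0, 3)],
    }
    print(sorted(intersection_graph(rep)))   # [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'd')]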

Cite as

Abhijin Adiga, L. Sunil Chandran, and Rogers Mathew. Cubicity, Degeneracy, and Crossing Number. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 176-190, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{adiga_et_al:LIPIcs.FSTTCS.2011.176,
  author =	{Adiga, Abhijin and Chandran, L. Sunil and Mathew, Rogers},
  title =	{{Cubicity, Degeneracy, and Crossing Number}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{176--190},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.176},
  URN =		{urn:nbn:de:0030-drops-33428},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.176},
  annote =	{Keywords: Degeneracy, Cubicity, Boxicity, Crossing Number, Interval Graph, Intersection Graph, Poset Dimension, Comparability Graph}
}
Document
Conditional Reactive Systems

Authors: H. J. Sander Bruggink, Raphaël Cauderlier, Mathias Hülsbusch, and Barbara König


Abstract
We lift the notion of nested application conditions from graph transformation systems to the general categorical setting of reactive systems as defined by Leifer and Milner. This serves two purposes: first, we enrich the formalism of reactive systems by adding application conditions for rules; second, it turns out that some constructions for graph transformation systems (such as computing weakest preconditions and strongest postconditions and showing local confluence by means of critical pair analysis) can be done very elegantly in the more general setting.

Cite as

H. J. Sander Bruggink, Raphaël Cauderlier, Mathias Hülsbusch, and Barbara König. Conditional Reactive Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 191-203, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{bruggink_et_al:LIPIcs.FSTTCS.2011.191,
  author =	{Bruggink, H. J. Sander and Cauderlier, Rapha\"{e}l and H\"{u}lsbusch, Mathias and K\"{o}nig, Barbara},
  title =	{{Conditional Reactive Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{191--203},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.191},
  URN =		{urn:nbn:de:0030-drops-33257},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.191},
  annote =	{Keywords: reactive systems, graph transformation, graph logic, Hoare triples, critical pair analysis}
}
Document
Transforming Password Protocols to Compose

Authors: Céline Chevalier, Stéphanie Delaune, and Steve Kremer


Abstract
Formal, symbolic techniques are extremely useful for modelling and analysing security protocols. They have improved our understanding of security protocols, allowed flaws to be discovered, and provide support for protocol design. However, such analyses usually consider that the protocol is executed in isolation or assume a bounded number of protocol sessions. Hence, no security guarantee is provided when the protocol is executed in a more complex environment. In this paper, we study whether password protocols can be safely composed, even when the same password is reused. More precisely, we present a transformation which maps a password protocol that is secure for a single protocol session (a decidable problem) to a protocol that is secure for an unbounded number of sessions. Our result provides an effective strategy to design secure password protocols: (i) design a protocol intended to be secure for one protocol session; (ii) apply our transformation and obtain a protocol which is secure for an unbounded number of sessions. Our technique also applies to composing different password protocols, allowing us to obtain both inter-protocol and inter-session composition.

Cite as

Céline Chevalier, Stéphanie Delaune, and Steve Kremer. Transforming Password Protocols to Compose. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 204-216, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{chevalier_et_al:LIPIcs.FSTTCS.2011.204,
  author =	{Chevalier, C\'{e}line and Delaune, St\'{e}phanie and Kremer, Steve},
  title =	{{Transforming Password Protocols to Compose}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{204--216},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.204},
  URN =		{urn:nbn:de:0030-drops-33273},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.204},
  annote =	{Keywords: Security, cryptographic protocols, composition}
}
Document
Obtaining a Bipartite Graph by Contracting Few Edges

Authors: Pinar Heggernes, Pim van 't Hof, Daniel Lokshtanov, and Christophe Paul


Abstract
We initiate the study of the Bipartite Contraction problem from the perspective of parameterized complexity. In this problem we are given a graph G on n vertices and an integer k, and the task is to determine whether we can obtain a bipartite graph from G by a sequence of at most k edge contractions. Our main result is an f(k) n^{O(1)} time algorithm for Bipartite Contraction. Despite a strong resemblance between Bipartite Contraction and the classical Odd Cycle Transversal (OCT) problem, the methods developed to tackle OCT do not seem to be directly applicable to Bipartite Contraction. To obtain our result, we combine several techniques and concepts that are central in parameterized complexity: iterative compression, irrelevant vertex, and important separators. To the best of our knowledge, this is the first time the irrelevant vertex technique and the concept of important separators are applied in unison. Furthermore, our algorithm may serve as a comprehensible example of the usage of the irrelevant vertex technique.
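
To make the problem concrete, here is an exhaustive sketch (ours, exponential, toy instances only) of the Bipartite Contraction decision problem, assuming the networkx library; the function name is hypothetical.

    import networkx as nx

    def bipartite_contraction(G, k):
        """Can G be made bipartite by at most k edge contractions?  Exhaustive search."""
        if nx.is_bipartite(G):
            return True
        if k == 0:
            return False
        for e in list(G.edges()):
            H = nx.contracted_edge(G, e, self_loops=False)
            if bipartite_contraction(H, k - 1):
                return True
        return False

    C5 = nx.cycle_graph(5)                 # odd cycle: one contraction yields C4, which is bipartite
    print(bipartite_contraction(C5, 0))    # False
    print(bipartite_contraction(C5, 1))    # True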

Cite as

Pinar Heggernes, Pim van 't Hof, Daniel Lokshtanov, and Christophe Paul. Obtaining a Bipartite Graph by Contracting Few Edges. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 217-228, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


@InProceedings{heggernes_et_al:LIPIcs.FSTTCS.2011.217,
  author =	{Heggernes, Pinar and van 't Hof, Pim and Lokshtanov, Daniel and Paul, Christophe},
  title =	{{Obtaining a Bipartite Graph by Contracting Few Edges}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{217--228},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.217},
  URN =		{urn:nbn:de:0030-drops-33579},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.217},
  annote =	{Keywords: fixed parameter tractability, graph modification problems, edge contractions, bipartite graphs}
}
Document
Simultaneously Satisfying Linear Equations Over F_2: MaxLin2 and Max-r-Lin2 Parameterized Above Average

Authors: Robert Crowston, Michael Fellows, Gregory Gutin, Mark Jones, Frances Rosamond, Stéphan Thomassé, and Anders Yeo


Abstract
In the parameterized problem MaxLin2-AA[k], we are given a system with variables x_1,...,x_n consisting of equations of the form Product_{i in I}x_i = b, where x_i,b in {-1, 1} and I is a nonempty subset of {1,...,n}, each equation has a positive integral weight, and we are to decide whether it is possible to simultaneously satisfy equations of total weight at least W/2+k, where W is the total weight of all equations and k is the parameter (if k=0, the possibility is assured). We show that MaxLin2-AA[k] has a kernel with at most O(k^2 log k) variables and can be solved in time 2^{O(k log k)}(nm)^{O(1)}. This solves an open problem of Mahajan et al. (2006). The problem Max-r-Lin2-AA[k,r] is the same as MaxLin2-AA[k] with two differences: each equation has at most r variables and r is the second parameter. We prove a theorem on Max-r-Lin2-AA[k,r] which implies that Max-r-Lin2-AA[k,r] has a kernel with at most (2k-1)r variables, improving a number of results including one by Kim and Williams (2010). The theorem also implies a lower bound on the maximum of a function f that maps {-1,1}^n to the set of reals and whose Fourier expansion (which is a multilinear polynomial) is of degree r. We show applicability of the lower bound by giving a new proof of the Edwards-Erdös bound (each connected graph on n vertices and m edges has a bipartite subgraph with at least m/2 +(n-1)/4 edges) and obtaining a generalization.
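
As an editorial illustration of the MaxLin2-AA[k] question (not the paper's kernelization or algorithm), the following exponential-time Python sketch evaluates all assignments of a hypothetical toy instance and checks whether satisfied weight at least W/2 + k is reachable:

from itertools import product
from math import prod

def maxlin2_aa(equations, k):
    """equations: list of (I, b, w) with I a set of variable indices,
    b in {-1, 1} the right-hand side, and w a positive integer weight."""
    variables = sorted(set().union(*(I for I, _, _ in equations)))
    W = sum(w for _, _, w in equations)
    best = 0
    for values in product((-1, 1), repeat=len(variables)):
        x = dict(zip(variables, values))
        sat = sum(w for I, b, w in equations if prod(x[i] for i in I) == b)
        best = max(best, sat)
    return best >= W / 2 + k

# Hypothetical instance: x1*x2 = 1 (weight 3), x2*x3 = -1 (weight 1),
# x1*x3 = 1 (weight 2), so W = 6 and we ask for satisfied weight >= 3 + k.
eqs = [({1, 2}, 1, 3), ({2, 3}, -1, 1), ({1, 3}, 1, 2)]
print(maxlin2_aa(eqs, 2))  # True: x1 = x2 = x3 = 1 satisfies weight 5 >= 5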

Cite as

Robert Crowston, Michael Fellows, Gregory Gutin, Mark Jones, Frances Rosamond, Stéphan Thomassé, and Anders Yeo. Simultaneously Satisfying Linear Equations Over F_2: MaxLin2 and Max-r-Lin2 Parameterized Above Average. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 229-240, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{crowston_et_al:LIPIcs.FSTTCS.2011.229,
  author =	{Crowston, Robert and Fellows, Michael and Gutin, Gregory and Jones, Mark and Rosamond, Frances and Thomass\'{e}, St\'{e}phan and Yeo, Anders},
  title =	{{Simultaneously Satisfying Linear Equations Over F\underline2: MaxLin2 and Max-r-Lin2 Parameterized Above Average}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{229--240},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.229},
  URN =		{urn:nbn:de:0030-drops-33416},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.229},
  annote =	{Keywords: MaxLin, fixed-parameter tractability, kernelization, pseudo-boolean functions}
}
Document
Rainbow Connectivity: Hardness and Tractability

Authors: Prabhanjan Ananth, Meghana Nasre, and Kanthi K. Sarpatwar


Abstract
A path in an edge colored graph is said to be a rainbow path if no two edges on the path have the same color. An edge colored graph is (strongly) rainbow connected if there exists a (geodesic) rainbow path between every pair of vertices. The (strong) rainbow connectivity of a graph G, denoted rc(G) (respectively src(G)), is the smallest number of colors required to edge color the graph such that G is (strongly) rainbow connected. In this paper we study the rainbow connectivity problem and the strong rainbow connectivity problem from a computational point of view. Our main results can be summarised as follows: 1) For every fixed k >= 3, it is NP-complete to decide whether src(G) <= k, even when the graph G is bipartite. 2) For every fixed odd k >= 3, it is NP-complete to decide whether rc(G) <= k. This resolves one of the open problems posed by Chakraborty et al. (J. Comb. Opt., 2011), where they prove hardness for the even case. 3) The following problem is fixed parameter tractable: Given a graph G, determine the maximum number of pairs of vertices that can be rainbow connected using two colors. 4) For a directed graph G, it is NP-complete to decide whether rc(G) <= 2.
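
To make the definition concrete, the following Python sketch (editorial, exponential-time, with a hypothetical 4-cycle instance) checks whether a given edge colouring rainbow-connects a small graph; it illustrates that rc(C4) <= 2 and is unrelated to the paper's hardness constructions:

from itertools import permutations

def rainbow_connected(vertices, colour):
    # colour maps frozenset({u, v}) to the colour of edge {u, v}.
    adj = {v: set() for v in vertices}
    for e in colour:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)

    def rainbow_path(u, target, used, seen):
        if u == target:
            return True
        for w in adj[u] - seen:
            c = colour[frozenset({u, w})]
            if c not in used and rainbow_path(w, target, used | {c}, seen | {w}):
                return True
        return False

    return all(rainbow_path(u, v, frozenset(), {u})
               for u, v in permutations(vertices, 2))

# A 4-cycle coloured with two colours alternately is rainbow connected, so rc(C4) <= 2.
col = {frozenset({1, 2}): 'a', frozenset({2, 3}): 'b',
       frozenset({3, 4}): 'a', frozenset({4, 1}): 'b'}
print(rainbow_connected([1, 2, 3, 4], col))  # True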

Cite as

Prabhanjan Ananth, Meghana Nasre, and Kanthi K. Sarpatwar. Rainbow Connectivity: Hardness and Tractability. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 241-251, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{ananth_et_al:LIPIcs.FSTTCS.2011.241,
  author =	{Ananth, Prabhanjan and Nasre, Meghana and Sarpatwar, Kanthi K.},
  title =	{{Rainbow Connectivity: Hardness and Tractability}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{241--251},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.241},
  URN =		{urn:nbn:de:0030-drops-33535},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.241},
  annote =	{Keywords: Computational Complexity, Rainbow Connectivity, Graph Theory, Fixed Parameter Tractable Algorithms}
}
Document
Dependence logic with a majority quantifier

Authors: Arnaud Durand, Johannes Ebbing, Juha Kontinen, and Heribert Vollmer


Abstract
We study the extension of dependence logic D by a majority quantifier M over finite structures. We show that the resulting logic is equi-expressive with the extension of second-order logic by second-order majority quantifiers of all arities. Our results imply that, from the point of view of descriptive complexity theory, D(M) captures the complexity class counting hierarchy.

Cite as

Arnaud Durand, Johannes Ebbing, Juha Kontinen, and Heribert Vollmer. Dependence logic with a majority quantifier. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 252-263, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{durand_et_al:LIPIcs.FSTTCS.2011.252,
  author =	{Durand, Arnaud and Ebbing, Johannes and Kontinen, Juha and Vollmer, Heribert},
  title =	{{Dependence logic with a majority quantifier}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{252--263},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.252},
  URN =		{urn:nbn:de:0030-drops-33467},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.252},
  annote =	{Keywords: dependence logic, counting hierarchy, majority quantifier, second order logic, descriptive complexity, finite model theory}
}
Document
Modal Logics Definable by Universal Three-Variable Formulas

Authors: Emanuel Kieronski, Jakub Michaliszyn, and Jan Otop


Abstract
We consider the satisfiability problem for modal logic over classes of structures definable by universal first-order formulas with three variables. We exhibit a simple formula for which the problem is undecidable. This improves an earlier result in which nine variables were used. We also show that for classes defined by three-variable, universal Horn formulas the problem is decidable. This subsumes decidability results for many natural modal logics, including T, B, K4, S4, S5.

Cite as

Emanuel Kieronski, Jakub Michaliszyn, and Jan Otop. Modal Logics Definable by Universal Three-Variable Formulas. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 264-275, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{kieronski_et_al:LIPIcs.FSTTCS.2011.264,
  author =	{Kieronski, Emanuel and Michaliszyn, Jakub and Otop, Jan},
  title =	{{Modal Logics Definable by Universal Three-Variable Formulas}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{264--275},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.264},
  URN =		{urn:nbn:de:0030-drops-33495},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.264},
  annote =	{Keywords: modal logic, decidability}
}
Document
The First-Order Theory of Ground Tree Rewrite Graphs

Authors: Stefan Göller and Markus Lohrey


Abstract
We prove that the complexity of the uniform first-order theory of ground tree rewrite graphs is in ATIME(2^{2^{poly(n)}}, O(n)). Providing a matching lower bound, we show that there is some fixed ground tree rewrite graph whose first-order theory is hard for ATIME(2^{2^{poly(n)}}, poly(n)) with respect to logspace reductions. Finally, we prove that there exists a fixed ground tree rewrite graph together with a single unary predicate in the form of a regular tree language such that the resulting structure has a non-elementary first-order theory.

Cite as

Stefan Göller and Markus Lohrey. The First-Order Theory of Ground Tree Rewrite Graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 276-287, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{goller_et_al:LIPIcs.FSTTCS.2011.276,
  author =	{G\"{o}ller, Stefan and Lohrey, Markus},
  title =	{{The First-Order Theory of Ground Tree Rewrite Graphs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{276--287},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.276},
  URN =		{urn:nbn:de:0030-drops-33220},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.276},
  annote =	{Keywords: ground tree rewriting systems, first-order theories, complexity}
}
Document
Layer Systems for Proving Confluence

Authors: Bertram Felgenhauer, Harald Zankl, and Aart Middeldorp


Abstract
We introduce layer systems for proving generalizations of the modularity of confluence for first-order rewrite systems. Layer systems specify how terms can be divided into layers. We establish structural conditions on those systems that imply confluence. Our abstract framework covers known results like many-sorted persistence, layer-preservation and currying. We present a counterexample to an extension of the former to order-sorted rewriting and derive new sufficient conditions for the extension to hold.

Cite as

Bertram Felgenhauer, Harald Zankl, and Aart Middeldorp. Layer Systems for Proving Confluence. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 288-299, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{felgenhauer_et_al:LIPIcs.FSTTCS.2011.288,
  author =	{Felgenhauer, Bertram and Zankl, Harald and Middeldorp, Aart},
  title =	{{Layer Systems for Proving Confluence}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{288--299},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.288},
  URN =		{urn:nbn:de:0030-drops-33265},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.288},
  annote =	{Keywords: Term rewriting, Confluence, Modularity, Persistence}
}
Document
The Semi-stochastic Ski-rental Problem

Authors: Aleksander Madry and Debmalya Panigrahi


Abstract
In this paper, we introduce the semi-stochastic model for dealing with input uncertainty in optimization problems. This model is a hybrid between the overly pessimistic online model and the highly optimistic stochastic (or Bayesian) model. In this model, the algorithm can obtain only limited stochastic information about the future (i.e. about the input distribution); as the amount of stochastic information we make available to the algorithm grows from no information to full information, we interpolate between the online and stochastic models. The central question in this framework is the trade-off between the performance of an algorithm and the stochastic information that it can access. As a first step towards understanding this trade-off, we consider the ski-rental problem in the semi-stochastic setting. More precisely, given a desired competitive ratio, we give upper and lower bounds on the amount of stochastic information required by a deterministic algorithm for the ski-rental problem to achieve that competitive ratio.
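
For readers unfamiliar with the underlying problem, the following Python sketch illustrates the classical deterministic ski-rental setting (renting costs 1 per day, buying costs B) and verifies the well-known (2 - 1/B)-competitive break-even strategy; the paper's semi-stochastic model is not reproduced here:

def break_even_cost(B, days_skied):
    """Cost of the break-even strategy when skiing lasts days_skied days."""
    if days_skied < B:
        return days_skied          # rented every day
    return (B - 1) + B             # rented B - 1 days, then bought

def optimal_cost(B, days_skied):
    return min(days_skied, B)      # offline optimum: rent throughout or buy at once

B = 10
ratio = max(break_even_cost(B, d) / optimal_cost(B, d) for d in range(1, 100))
print(ratio)  # 1.9 == 2 - 1/B, attained when skiing lasts at least B days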

Cite as

Aleksander Madry and Debmalya Panigrahi. The Semi-stochastic Ski-rental Problem. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 300-311, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{madry_et_al:LIPIcs.FSTTCS.2011.300,
  author =	{Madry, Aleksander and Panigrahi, Debmalya},
  title =	{{The Semi-stochastic Ski-rental Problem}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{300--311},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.300},
  URN =		{urn:nbn:de:0030-drops-33305},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.300},
  annote =	{Keywords: online optimization, stochastic algorithm}
}
Document
Streamability of Nested Word Transductions

Authors: Emmanuel Filiot, Olivier Gauwin, Pierre-Alain Reynier, and Frédéric Servais


Abstract
We consider the problem of evaluating in streaming (i.e. in a single left-to-right pass) a nested word transduction with a limited amount of memory. A transduction T is said to be height bounded memory (HBM) if it can be evaluated with a memory that depends only on the size of T and on the height of the input word. We show that deciding whether a nested word transduction defined by a visibly pushdown transducer (VPT) is HBM is in coNPTime. In this case, the required amount of memory may depend exponentially on the height of the word. We exhibit a sufficient, decidable condition for a VPT to be evaluated with a memory that depends quadratically on the height of the word. This condition defines a class of transductions that strictly contains all determinizable VPTs.

Cite as

Emmanuel Filiot, Olivier Gauwin, Pierre-Alain Reynier, and Frédéric Servais. Streamability of Nested Word Transductions. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 312-324, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{filiot_et_al:LIPIcs.FSTTCS.2011.312,
  author =	{Filiot, Emmanuel and Gauwin, Olivier and Reynier, Pierre-Alain and Servais, Fr\'{e}d\'{e}ric},
  title =	{{Streamability of Nested Word Transductions}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{312--324},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.312},
  URN =		{urn:nbn:de:0030-drops-33523},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.312},
  annote =	{Keywords: nested word, visibly pushdown transducer, streaming, XML}
}
Document
The update complexity of selection and related problems

Authors: Manoj Gupta, Yogish Sabharwal, and Sandeep Sen


Abstract
We present a framework for computing with input data specified by intervals, representing uncertainty in the values of the input parameters. To compute a solution, the algorithm can query the input parameters to obtain more refined estimates in the form of sub-intervals, and the objective is to minimize the number of queries. The previous approaches address the scenario where every query returns an exact value. Our framework is more general as it can deal with a wider variety of inputs and query responses, and we establish interesting relationships between them that have not been investigated previously. Although some of the approaches of the previous restricted models can be adapted to the more general model, we require more sophisticated techniques for the analysis, and we also obtain improved algorithms for the previous model. We address selection problems in the generalized model and show that there exist 2-update competitive algorithms that do not depend on the lengths or distribution of the sub-intervals and hold against the worst case adversary. We also obtain similar bounds on the competitive ratio for the MST problem in graphs.
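
The following Python fragment is an editorial sketch of the uncertainty model only (not the paper's algorithms): values are known through intervals, and an element can be reported as the minimum only once its interval lies strictly below all others; otherwise further queries (refinements) are needed. All names and the toy instances are hypothetical.

def can_report_min(intervals):
    """intervals: list of (lo, hi).  Returns the index of a provable minimum,
    or None if the current interval information is insufficient."""
    for i, (lo_i, hi_i) in enumerate(intervals):
        if all(hi_i < lo_j for j, (lo_j, _) in enumerate(intervals) if j != i):
            return i
    return None

print(can_report_min([(1, 2), (3, 5), (4, 9)]))  # 0: interval (1, 2) lies below the rest
print(can_report_min([(1, 4), (3, 5)]))          # None: the intervals overlap, query more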

Cite as

Manoj Gupta, Yogish Sabharwal, and Sandeep Sen. The update complexity of selection and related problems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 325-338, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.FSTTCS.2011.325,
  author =	{Gupta, Manoj and Sabharwal, Yogish and Sen, Sandeep},
  title =	{{The update complexity of selection and related problems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{325--338},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.325},
  URN =		{urn:nbn:de:0030-drops-33314},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.325},
  annote =	{Keywords: Uncertain data, Competitive analysis}
}
Document
A Tight Lower Bound for Streett Complementation

Authors: Yang Cai and Ting Zhang


Abstract
Finite automata on infinite words (omega-automata) proved to be a powerful weapon for modeling and reasoning about infinite behaviors of reactive systems. Complementation of omega-automata is crucial in many of these applications. But the problem is non-trivial; even after extensive study during the past two decades, we still have an important type of omega-automata, namely Streett automata, for which the gap between the current best lower bound 2^(Omega(n lg nk)) and upper bound 2^(O(nk lg nk)) is substantial, for the Streett index size k can be exponential in the number of states n. In a previous work we showed a construction for complementing Streett automata with the upper bound 2^(O(n lg n+nk lg k)) for k = O(n) and 2^(O(n^2 lg n)) for k = omega(n). In this paper we establish a matching lower bound 2^(Omega(n lg n+nk lg k)) for k = O(n) and 2^(Omega(n^2 lg n)) for k = omega(n), thereby showing that the construction is asymptotically optimal with respect to the 2^(Theta(.)) notation.

Cite as

Yang Cai and Ting Zhang. A Tight Lower Bound for Streett Complementation. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 339-350, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.FSTTCS.2011.339,
  author =	{Cai, Yang and Zhang, Ting},
  title =	{{A Tight Lower Bound for Streett Complementation}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{339--350},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.339},
  URN =		{urn:nbn:de:0030-drops-33474},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.339},
  annote =	{Keywords: omega-automata, Streett automata, complementation, lower bounds}
}
Document
Parameterized Regular Expressions and Their Languages

Authors: Pablo Barceló, Leonid Libkin, and Juan L. Reutter


Abstract
We study regular expressions that use variables, or parameters, which are interpreted as alphabet letters. We consider two classes of languages denoted by such expressions: under the possibility semantics, a word belongs to the language if it is denoted by some regular expression obtained by replacing variables with letters; under the certainty semantics, the word must be denoted by every such expression. Such languages are regular, and we show that they naturally arise in several applications such as querying graph databases and program analysis. As the main contribution of the paper, we provide a complete characterization of the complexity of the main computational problems related to such languages: nonemptiness, universality, containment, membership, as well as the problem of constructing NFAs capturing such languages. We also look at the extension where domains of variables can be arbitrary regular languages, and show that under the certainty semantics, languages remain regular and the complexity of the main computational problems does not change.
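
As an editorial illustration of the two semantics (not the paper's constructions), the following Python sketch brute-forces all substitutions of a single variable x by alphabet letters for a small hypothetical expression and compares the possibility and certainty semantics:

import re
from itertools import product

ALPHABET = "ab"
TEMPLATE = "a({x}|a)"          # hypothetical parameterized expression with one variable x

def substitutions(template, variables):
    for letters in product(ALPHABET, repeat=len(variables)):
        yield template.format(**dict(zip(variables, letters)))

def possibly(word):
    # Some substitution of the variables denotes the word.
    return any(re.fullmatch(expr, word) for expr in substitutions(TEMPLATE, ["x"]))

def certainly(word):
    # Every substitution of the variables denotes the word.
    return all(re.fullmatch(expr, word) for expr in substitutions(TEMPLATE, ["x"]))

print(possibly("ab"), certainly("ab"))   # True False (only x -> b denotes "ab")
print(possibly("aa"), certainly("aa"))   # True True  (every substitution denotes "aa")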

Cite as

Pablo Barceló, Leonid Libkin, and Juan L. Reutter. Parameterized Regular Expressions and Their Languages. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 351-362, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{barcelo_et_al:LIPIcs.FSTTCS.2011.351,
  author =	{Barcel\'{o}, Pablo and Libkin, Leonid and Reutter, Juan L.},
  title =	{{Parameterized Regular Expressions and Their Languages}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{351--362},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.351},
  URN =		{urn:nbn:de:0030-drops-33334},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.351},
  annote =	{Keywords: Regular expressions, complexity, decision problems, regular languages}
}
Document
Definable Operations On Weakly Recognizable Sets of Trees

Authors: Jacques Duparc, Alessandro Facchini, and Filip Murlak


Abstract
Alternating automata on infinite trees induce operations on languages which do not preserve natural equivalence relations, like having the same Mostowski-Rabin index, the same Borel rank, or being continuously reducible to each other (Wadge equivalence). In order to prevent this, alternation needs to be restricted to the choice of direction in the tree. For weak alternating automata with restricted alternation, a small set of computable operations generates all definable operations, which implies that the Wadge degree of a given automaton is computable. The weak index and the Borel rank coincide, and are computable. An equivalent automaton of minimal index can be computed in polynomial time (if the productive states of the automaton are given).

Cite as

Jacques Duparc, Alessandro Facchini, and Filip Murlak. Definable Operations On Weakly Recognizable Sets of Trees. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 363-374, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{duparc_et_al:LIPIcs.FSTTCS.2011.363,
  author =	{Duparc, Jacques and Facchini, Alessandro and Murlak, Filip},
  title =	{{Definable Operations On Weakly Recognizable Sets of Trees}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{363--374},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.363},
  URN =		{urn:nbn:de:0030-drops-33361},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.363},
  annote =	{Keywords: alternating automata, Wadge hierarchy, index hierarchy}
}
Document
Nash Equilibria in Concurrent Games with Büchi Objectives

Authors: Patricia Bouyer, Romain Brenguier, Nicolas Markey, and Michael Ummels


Abstract
We study the problem of computing pure-strategy Nash equilibria in multiplayer concurrent games with Büchi-definable objectives. First, when the objectives are Büchi conditions on the game, we prove that the existence problem can be solved in polynomial time. In a second part, we extend our technique to objectives defined by deterministic Büchi automata, and prove that the problem then becomes EXPTIME-complete. We prove PSPACE-completeness for the case where the Büchi automata are 1-weak.

Cite as

Patricia Bouyer, Romain Brenguier, Nicolas Markey, and Michael Ummels. Nash Equilibria in Concurrent Games with Büchi Objectives. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 375-386, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{bouyer_et_al:LIPIcs.FSTTCS.2011.375,
  author =	{Bouyer, Patricia and Brenguier, Romain and Markey, Nicolas and Ummels, Michael},
  title =	{{Nash Equilibria in Concurrent Games with B\"{u}chi Objectives}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{375--386},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.375},
  URN =		{urn:nbn:de:0030-drops-33340},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.375},
  annote =	{Keywords: Concurrent games, Nash equilibria, B\"{u}chi Objectives}
}
Document
A Perfect-Information Construction for Coordination in Games

Authors: Dietmar Berwanger, Lukasz Kaiser, and Bernd Puchala


Abstract
We present a general construction for eliminating imperfect information from games with several players who coordinate against nature, and for transforming them into two-player games with perfect information while preserving winning strategy profiles. The construction yields an infinite game tree with epistemic models associated to nodes. To obtain a more succinct representation, we define an abstraction based on homomorphic equivalence, which we prove to be sound for games with observable winning conditions. The abstraction generates finite game graphs in several relevant cases, and leads to a new semi-decision procedure for multi-player games with imperfect information.

Cite as

Dietmar Berwanger, Lukasz Kaiser, and Bernd Puchala. A Perfect-Information Construction for Coordination in Games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 387-398, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{berwanger_et_al:LIPIcs.FSTTCS.2011.387,
  author =	{Berwanger, Dietmar and Kaiser, Lukasz and Puchala, Bernd},
  title =	{{A Perfect-Information Construction for Coordination in Games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{387--398},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.387},
  URN =		{urn:nbn:de:0030-drops-33354},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.387},
  annote =	{Keywords: Distributed Synthesis, Games on Graphs, Imperfect Information, Epistemic Models}
}
Document
Efficient Approximation of Optimal Control for Continuous-Time Markov Games

Authors: John Fearnley, Markus Rabe, Sven Schewe, and Lijun Zhang


Abstract
We study the time-bounded reachability problem for continuous time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretization to break time into discrete intervals, and optimal control is approximated for each interval separately. Current techniques provide an accuracy of O(epsilon^2) on each interval, which leads to an infeasibly large number of intervals. We propose a sequence of approximations that achieve accuracies of O(epsilon^3), O(epsilon^4), and O(epsilon^5), allowing us to drastically reduce the number of intervals that are considered. For CTMDPs, the resulting algorithms are comparable to the heuristic approach given by Buchholz and Schulz, while also being theoretically justified. All of our results generalise to CTMGs, where our results yield the first practically implementable algorithms for this problem. We also provide positional strategies for both players that achieve similar error bounds.

Cite as

John Fearnley, Markus Rabe, Sven Schewe, and Lijun Zhang. Efficient Approximation of Optimal Control for Continuous-Time Markov Games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 399-410, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{fearnley_et_al:LIPIcs.FSTTCS.2011.399,
  author =	{Fearnley, John and Rabe, Markus and Schewe, Sven and Zhang, Lijun},
  title =	{{Efficient Approximation of Optimal Control for Continuous-Time Markov Games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{399--410},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.399},
  URN =		{urn:nbn:de:0030-drops-33547},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.399},
  annote =	{Keywords: Continuous time Markov decision processes and games, discretisation}
}
Document
Minimal Disclosure in Partially Observable Markov Decision Processes

Authors: Nathalie Bertrand and Blaise Genest


Abstract
For security and efficiency reasons, most systems do not give users full access to their information. One key specification formalism for these systems is the so-called Partially Observable Markov Decision Process (POMDP for short), which has been extensively studied in several research communities, among them AI and model-checking. In this paper we tackle the problem of the minimal information a user needs at runtime to achieve a simple goal, modeled as reaching an objective with probability one. More precisely, to achieve her goal, the user can at each step either choose to use the partial information, or pay a fixed cost and receive the full information. The natural question is then to minimize the cost the user needs to fulfill her objective. This optimization question gives rise to two different problems, depending on whether we minimize the worst case cost or the average cost. On the one hand, concerning the worst case cost, we show that efficient techniques from the model checking community can be adapted to compute the optimal worst case cost and give optimal strategies for the users. On the other hand, we show that the optimal average cost (a question typically considered in the AI community) cannot be computed in general, nor can it be approximated in polynomial time even up to a large approximation factor.

Cite as

Nathalie Bertrand and Blaise Genest. Minimal Disclosure in Partially Observable Markov Decision Processes. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 411-422, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{bertrand_et_al:LIPIcs.FSTTCS.2011.411,
  author =	{Bertrand, Nathalie and Genest, Blaise},
  title =	{{Minimal Disclosure in Partially Observable Markov Decision Processes}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{411--422},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.411},
  URN =		{urn:nbn:de:0030-drops-33286},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.411},
  annote =	{Keywords: Partially Observable Markov Decision Processes, Stochastic Games, Model-Checking, Worst-Case/Average-Case Analysis}
}
Document
Optimal Packed String Matching

Authors: Oren Ben-Kiki, Philip Bille, Dany Breslauer, Leszek Gasieniec, Roberto Grossi, and Oren Weimann


Abstract
In the packed string matching problem, each machine word accommodates alpha characters, thus an n-character text occupies n/alpha memory words. We extend the Crochemore-Perrin constant-space O(n)-time string matching algorithm to run in optimal O(n/alpha) time and even in real-time, achieving a factor alpha speedup over traditional algorithms that examine each character individually. Our solution can be efficiently implemented, unlike prior theoretical packed string matching work. We adapt the standard RAM model and only use its AC0 instructions (i.e. no multiplication) plus two specialized AC0 packed string instructions. The main string-matching instruction is available in commodity processors (i.e. Intel's SSE4.2 and AVX Advanced String Operations); the other maximal-suffix instruction is only required during pattern preprocessing. In the absence of these two specialized instructions, we propose theoretically-efficient emulation using integer multiplication (not AC0) and table lookup.
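
The following Python fragment is an editorial sketch of the packing idea only, not the Crochemore-Perrin based algorithm of the paper: with an assumed alpha = 8 characters per word, whole blocks of the text are compared against the pattern in one operation (slice comparison stands in for a word comparison):

ALPHA = 8  # assumed number of characters per machine word

def packed_find(text: bytes, pattern: bytes) -> int:
    """Naive matcher that compares ALPHA characters at a time; Python's
    slice comparison plays the role of the packed word comparison."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if all(text[i + j:i + min(j + ALPHA, m)] == pattern[j:j + ALPHA]
               for j in range(0, m, ALPHA)):
            return i
    return -1

print(packed_find(b"abracadabra", b"cadab"))  # 4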

Cite as

Oren Ben-Kiki, Philip Bille, Dany Breslauer, Leszek Gasieniec, Roberto Grossi, and Oren Weimann. Optimal Packed String Matching. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 423-432, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{benkiki_et_al:LIPIcs.FSTTCS.2011.423,
  author =	{Ben-Kiki, Oren and Bille, Philip and Breslauer, Dany and Gasieniec, Leszek and Grossi, Roberto and Weimann, Oren},
  title =	{{Optimal Packed String Matching}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{423--432},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.423},
  URN =		{urn:nbn:de:0030-drops-33558},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.423},
  annote =	{Keywords: String matching, bit parallelism, real time, space efficiency}
}
Document
Dynamic programming in faulty memory hierarchies (cache-obliviously)

Authors: Saverio Caminiti, Irene Finocchi, Emanuele G. Fusco, and Francesco Silvestri


Abstract
Random access memories suffer from transient errors that lead the logical state of some bits to be read differently from how they were last written. Due to technological constraints, caches in the memory hierarchy of modern computer platforms appear to be particularly prone to bit flips. Since algorithms implicitly assume data to be stored in reliable memories, they might easily exhibit unpredictable behaviors even in the presence of a small number of faults. In this paper we investigate the design of dynamic programming algorithms in faulty memory hierarchies. Previous works on resilient algorithms considered a one-level faulty memory model and, with respect to dynamic programming, could address only problems with local dependencies. Our improvement upon these works is two-fold: (1) we significantly extend the class of problems that can be solved resiliently via dynamic programming in the presence of faults, settling challenging non-local problems such as all-pairs shortest paths and matrix multiplication; (2) we investigate the connection between resiliency and cache-efficiency, providing cache-oblivious implementations that incur an (almost) optimal number of cache misses. Our approach yields the first resilient algorithms that can tolerate faults at any level of the memory hierarchy, while maintaining cache-efficiency. All our algorithms are correct with high probability and match the running time and cache misses of their standard non-resilient counterparts while tolerating a large (polynomial) number of faults. Our results also extend to Fast Fourier Transform.

Cite as

Saverio Caminiti, Irene Finocchi, Emanuele G. Fusco, and Francesco Silvestri. Dynamic programming in faulty memory hierarchies (cache-obliviously). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 433-444, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{caminiti_et_al:LIPIcs.FSTTCS.2011.433,
  author =	{Caminiti, Saverio and Finocchi, Irene and Fusco, Emanuele G. and Silvestri, Francesco},
  title =	{{Dynamic programming in faulty memory hierarchies (cache-obliviously)}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{433--444},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.433},
  URN =		{urn:nbn:de:0030-drops-33241},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.433},
  annote =	{Keywords: Unreliable memories, fault-tolerant algorithms, dynamic programming, cache-oblivious algorithms, Gaussian elimination paradigm}
}
Document
Deciding Probabilistic Simulation between Probabilistic Pushdown Automata and Finite-State Systems

Authors: Hongfei Fu and Joost-Pieter Katoen


Abstract
This paper studies the decidability and computational complexity of checking probabilistic simulation pre-order between probabilistic pushdown automata (pPDA) and (probabilistic) finite-state systems. We show that checking classical and combined probabilistic similarity is EXPTIME-complete in both directions and becomes polynomial if both the number of control states of the pPDA and the size of the finite-state system are fixed. These results show that checking probabilistic similarity is as hard as checking similarity in the standard, i.e., non-probabilistic setting.

Cite as

Hongfei Fu and Joost-Pieter Katoen. Deciding Probabilistic Simulation between Probabilistic Pushdown Automata and Finite-State Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 445-456, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{fu_et_al:LIPIcs.FSTTCS.2011.445,
  author =	{Fu, Hongfei and Katoen, Joost-Pieter},
  title =	{{Deciding Probabilistic Simulation between Probabilistic Pushdown Automata and Finite-State Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{445--456},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.445},
  URN =		{urn:nbn:de:0030-drops-33435},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.445},
  annote =	{Keywords: infinite-state systems, probabilistic simulation, probabilistic pushdown automata}
}
Document
Parameterised Pushdown Systems with Non-Atomic Writes

Authors: Matthew Hague


Abstract
We consider the master/slave parameterised reachability problem for networks of pushdown systems, where communication is via a global store using only non-atomic reads and writes. We show that the control-state reachability problem is decidable. As part of the result, we provide a constructive extension of a theorem by Ehrenfeucht and Rozenberg to produce an NFA equivalent to certain kinds of CFG. Finally, we show that the non-parameterised version is undecidable.

Cite as

Matthew Hague. Parameterised Pushdown Systems with Non-Atomic Writes. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 457-468, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{hague:LIPIcs.FSTTCS.2011.457,
  author =	{Hague, Matthew},
  title =	{{Parameterised Pushdown Systems with Non-Atomic Writes}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{457--468},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.457},
  URN =		{urn:nbn:de:0030-drops-33485},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.457},
  annote =	{Keywords: Verification, Concurrency, Pushdown Systems, Reachability, Parameterised Systems, Non-atomicity}
}
Document
Higher order indexed monadic systems

Authors: Didier Caucal and Teodor Knapik


Abstract
A word rewriting system is called monadic if each of its right hand sides is either a single letter or the empty word. We study the images of higher order indexed languages (defined by Maslov) under inverse derivations of infinite monadic systems. We show that the inverse derivations of deterministic level n indexed languages by confluent regular monadic systems are deterministic level n+1 languages, and that the inverse derivations of level n indexed monadic systems preserve level n indexed languages. Both results are established using a fine structural study of classes of infinite automata accepting level n indexed languages. Our work generalizes formerly known results about regular and context-free languages which form the first two levels of the indexed language hierarchy.

Cite as

Didier Caucal and Teodor Knapik. Higher order indexed monadic systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 469-480, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{caucal_et_al:LIPIcs.FSTTCS.2011.469,
  author =	{Caucal, Didier and Knapik, Teodor},
  title =	{{Higher order indexed monadic systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{469--480},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.469},
  URN =		{urn:nbn:de:0030-drops-33379},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.469},
  annote =	{Keywords: Higher-order indexed languages, monadic systems}
}
