LIPIcs, Volume 8

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)



Event

FSTTCS 2010, December 15-18, 2010, Chennai, India

Editors

Kamal Lodaya
Meena Mahajan

Publication Details

  • published at: 2010-12-13
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-23-1
  • DBLP: db/conf/fsttcs/fsttcs2010

Documents

Document
Complete Volume
LIPIcs, Volume 8, FSTTCS'10, Complete Volume

Authors: Kamal Lodaya and Meena Mahajan


Abstract
LIPIcs, Volume 8, FSTTCS'10, Complete Volume

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


@Proceedings{lodaya_et_al:LIPIcs.FSTTCS.2010,
  title =	{{LIPIcs, Volume 8, FSTTCS'10, Complete Volume}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010},
  URN =		{urn:nbn:de:0030-drops-41027},
  doi =		{10.4230/LIPIcs.FSTTCS.2010},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems, Specifying and Verifying and Reasoning about Programs}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization, Author Index

Authors: Kamal Lodaya and Meena Mahajan


Abstract
This proceedings volume contains the papers presented at the 30th annual conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010), held at the Institute of Mathematical Sciences (IMSc), Chennai, during 15–18 December 2010. The conference attracted 128 submissions from 35 countries on 6 continents, most of them of very high quality. We thank the authors who submitted their work for making this such a competitive conference. The PC succeeded in obtaining the help of 216 external reviewers, who in all produced 400 referee reports; these were of immeasurable help in selecting the 38 contributed papers that appear in this publication.

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. i-xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{lodaya_et_al:LIPIcs.FSTTCS.2010.i,
  author =	{Lodaya, Kamal and Mahajan, Meena},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization, Author Index}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{i--xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.i},
  URN =		{urn:nbn:de:0030-drops-28475},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization, Author Index}
}
Document
Expressiveness of streaming string transducers

Authors: Rajeev Alur and Pavol Černý


Abstract
Streaming string transducers define (partial) functions from input strings to output strings. A streaming string transducer makes a single pass through the input string and uses a finite set of variables that range over strings from the output alphabet. At every step, the transducer processes an input symbol, and updates all the variables in parallel using assignments whose right-hand sides are concatenations of output symbols and variables, with the restriction that a variable can be used at most once in a right-hand-side expression. It has been shown that streaming string transducers operating on strings over infinite data domains are of interest in algorithmic verification of list-processing programs, as they lead to Pspace decision procedures for checking pre/postconditions and for checking semantic equivalence, for a well-defined class of heap-manipulating programs. In order to understand the theoretical expressiveness of streaming transducers, we focus on streaming transducers processing strings over finite alphabets, given the existence of a robust and well-studied class of ``regular'' transductions for this case. Such regular transductions can be defined either by two-way deterministic finite-state transducers, or using a logical MSO-based characterization. Our main result is that the expressiveness of streaming string transducers coincides exactly with this class of regular transductions.
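
As an illustration of the update mechanism described above, here is a minimal sketch (not taken from the paper; names are chosen for illustration) of a streaming string transducer with one control state and one string variable that computes string reversal:

# Illustrative sketch, not code from the paper.
def reverse_sst(word):
    # One control state and one string variable x, initially empty.
    x = ""
    for symbol in word:
        # Copyless parallel update: x := symbol . x
        # (the old value of x appears exactly once on the right-hand side).
        x = symbol + x
    # Output function: emit the final content of x.
    return x

assert reverse_sst("abc") == "cba"

Reversal is a standard example of a regular transduction that a one-way finite-state transducer cannot realize, while a single-variable copyless streaming string transducer can.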

Cite as

Rajeev Alur and Pavol Černý. Expressiveness of streaming string transducers. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{alur_et_al:LIPIcs.FSTTCS.2010.1,
  author =	{Alur, Rajeev and \v{C}ern\'{y}, Pavol},
  title =	{{Expressiveness of streaming string transducers}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{1--12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.1},
  URN =		{urn:nbn:de:0030-drops-28538},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.1},
  annote =	{Keywords: streaming string transducer, list processing, heap manipulation, monadic second-order transduction}
}
Document
Special tree-width and the verification of monadic second-order graph properties

Authors: Bruno Courcelle


Abstract
The model-checking problem for monadic second-order logic on graphs is fixed-parameter tractable with respect to tree-width and clique-width. The proof constructs finite deterministic automata from monadic second-order sentences, but this computation produces automata of hyper-exponential sizes, and this is not avoidable. To overcome this difficulty, we propose to consider particular monadic second-order graph properties that are nevertheless interesting for Graph Theory and to interpret automata instead of trying to compile them (joint work with I. Durand). For checking monadic second-order sentences written with edge set quantifications, the appropriate parameter is tree-width. We introduce special tree-width, a graph complexity measure between path-width and tree-width. The corresponding automata are easier to construct than those for tree-width.

Cite as

Bruno Courcelle. Special tree-width and the verification of monadic second-order graph properties. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 13-29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{courcelle:LIPIcs.FSTTCS.2010.13,
  author =	{Courcelle, Bruno},
  title =	{{Special tree-width and the verification of monadic second-order graph properties}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{13--29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.13},
  URN =		{urn:nbn:de:0030-drops-28494},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.13},
  annote =	{Keywords: model-checking, monadic second-order logic, fixed-parameter tractability, special tree-width}
}
Document
On extracting computations from propositional proofs (a survey)

Authors: Pavel Pudlák


Abstract
This paper describes a project that aims at showing that propositional proofs of certain tautologies in weak proof systems give upper bounds on the computational complexity of functions associated with the tautologies. Such bounds can potentially be used to prove (conditional or unconditional) lower bounds on the lengths of proofs of these tautologies and show separations of some weak proof systems. The prototypes are the results showing the feasible interpolation property for resolution. In order to prove similar results for systems stronger than resolution, one needs to define suitable generalizations of boolean circuits. We will survey the known results concerning this project and sketch in which direction we want to generalize them.

Cite as

Pavel Pudlák. On extracting computations from propositional proofs (a survey). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 30-41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{pudlak:LIPIcs.FSTTCS.2010.30,
  author =	{Pudl\'{a}k, Pavel},
  title =	{{On extracting computations from propositional proofs (a survey)}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{30--41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.30},
  URN =		{urn:nbn:de:0030-drops-28512},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.30},
  annote =	{Keywords: proof complexity, propositional tautology, boolean circuits, resolution, feasible interpolation}
}
Document
Recent Progress and Open Problems in Algorithmic Convex Geometry

Authors: Santosh S. Vempala


Abstract
This article is a survey of developments in algorithmic convex geometry over the past decade. These include algorithms for sampling, optimization, integration, rounding and learning, as well as mathematical tools such as isoperimetric and concentration inequalities. Several open problems and conjectures are discussed on the way.

Cite as

Santosh S. Vempala. Recent Progress and Open Problems in Algorithmic Convex Geometry. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 42-64, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{vempala:LIPIcs.FSTTCS.2010.42,
  author =	{Vempala, Santosh S.},
  title =	{{Recent Progress and Open Problems in Algorithmic Convex Geometry}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{42--64},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.42},
  URN =		{urn:nbn:de:0030-drops-28529},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.42},
  annote =	{Keywords: convex geometry, geometric inequalities, algorithms, sampling, optimization, integration, rounding, learning}
}
Document
Playing in stochastic environment: from multi-armed bandits to two-player games

Authors: Wieslaw Zielonka


Abstract
Given a zero-sum infinite game, we examine the question of whether players have optimal memoryless deterministic strategies. It turns out that, under some general conditions, the problem for two-player games can be reduced to the same problem for one-player games, which in turn can be reduced to a simpler related problem for multi-armed bandits.

Cite as

Wieslaw Zielonka. Playing in stochastic environment: from multi-armed bandits to two-player games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 65-72, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{zielonka:LIPIcs.FSTTCS.2010.65,
  author =	{Zielonka, Wieslaw},
  title =	{{Playing in stochastic environment: from multi-armed bandits to two-player games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{65--72},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.65},
  URN =		{urn:nbn:de:0030-drops-28500},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.65},
  annote =	{Keywords: two-player zero-sum game, one-player zero-sum game, multi-armed bandit, memoryless deterministic strategy}
}
Document
Better Algorithms for Satisfiability Problems for Formulas of Bounded Rank-width

Authors: Robert Ganian, Petr Hlinený, and Jan Obdrzálek


Abstract
We provide a parameterized polynomial algorithm for the propositional model counting problem #SAT, the runtime of which is single-exponential in the rank-width of a formula. Previously, analogous algorithms were known (e.g., by Fischer, Makowsky, and Ravve) with a single-exponential dependency on the clique-width of a formula. Our algorithm thus presents an exponential runtime improvement (since clique-width reaches up to exponentially higher values than rank-width), and can be of practical interest for small values of rank-width. We also provide an algorithm for the MAX-SAT problem along the same lines.
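
For orientation, the exponential gap mentioned above reflects the standard comparison between the two parameters (a well-known bound, not stated in this abstract): every graph $G$ satisfies $rwd(G) \le cwd(G) \le 2^{rwd(G)+1} - 1$, so a runtime single-exponential in rank-width can indeed be exponentially smaller than one single-exponential in clique-width.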

Cite as

Robert Ganian, Petr Hlinený, and Jan Obdrzálek. Better Algorithms for Satisfiability Problems for Formulas of Bounded Rank-width. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 73-83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{ganian_et_al:LIPIcs.FSTTCS.2010.73,
  author =	{Ganian, Robert and Hlinen\'{y}, Petr and Obdrz\'{a}lek, Jan},
  title =	{{Better Algorithms for Satisfiability Problems for Formulas of Bounded Rank-width}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{73--83},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.73},
  URN =		{urn:nbn:de:0030-drops-28541},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.73},
  annote =	{Keywords: propositional model counting; satisfiability; rank-width; clique-width; parameterized complexity}
}
Document
Satisfiability of Acyclic and Almost Acyclic CNF Formulas

Authors: Sebastian Ordyniak, Daniel Paulusma, and Stefan Szeider


Abstract
We study the propositional satisfiability problem (SAT) on classes of CNF formulas (formulas in Conjunctive Normal Form) that obey certain structural restrictions in terms of their hypergraph structure, by associating to a CNF formula the hypergraph obtained by ignoring negations and considering clauses as hyperedges on variables. We show that satisfiability of CNF formulas with so-called ``beta-acyclic hypergraphs'' can be decided in polynomial time. We also study the parameterized complexity of SAT for ``almost'' beta-acyclic instances, using as parameter the formula's distance from being beta-acyclic. As distance we use the size of a smallest strong backdoor set and the beta-hypertree width. As a by-product we obtain the W[1]-hardness of SAT parameterized by the (undirected) clique-width of the incidence graph, which disproves a conjecture by Fischer, Makowsky, and Ravve (Discr. Appl. Math. 156, 2008).
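
The following small sketch (purely illustrative, not from the paper) builds the hypergraph associated with a CNF formula as described above: negations are dropped and each clause becomes a hyperedge over its variables. The DIMACS-style clause encoding is an assumption made for the example.

# Illustrative sketch; the clause encoding is an assumption, not from the paper.
def cnf_hypergraph(clauses):
    # clauses: list of clauses, each a list of non-zero signed integers
    # (DIMACS-style literals, e.g. -3 stands for "not x3").
    vertices = set()
    hyperedges = []
    for clause in clauses:
        edge = frozenset(abs(lit) for lit in clause)  # ignore negations
        vertices |= edge
        hyperedges.append(edge)
    return vertices, hyperedges

# (x1 or not x2) and (x2 or x3 or not x1)
print(cnf_hypergraph([[1, -2], [2, 3, -1]]))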

Cite as

Sebastian Ordyniak, Daniel Paulusma, and Stefan Szeider. Satisfiability of Acyclic and Almost Acyclic CNF Formulas. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 84-95, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{ordyniak_et_al:LIPIcs.FSTTCS.2010.84,
  author =	{Ordyniak, Sebastian and Paulusma, Daniel and Szeider, Stefan},
  title =	{{Satisfiability of Acyclic and Almost Acyclic CNF Formulas}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{84--95},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.84},
  URN =		{urn:nbn:de:0030-drops-28556},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.84},
  annote =	{Keywords: Satisfiability, chordal bipartite graphs, beta-acyclic hypergraphs, backdoor sets, parameterized complexity}
}
Document
The effect of girth on the kernelization complexity of Connected Dominating Set

Authors: Neeldhara Misra, Geevarghese Philip, Venkatesh Raman, and Saket Saurabh


Abstract
In the Connected Dominating Set problem we are given as input a graph $G$ and a positive integer $k$, and are asked if there is a set $S$ of at most $k$ vertices of $G$ such that $S$ is a dominating set of $G$ and the subgraph induced by $S$ is connected. This is a basic connectivity problem that is known to be NP-complete, and it has been extensively studied using several algorithmic approaches. In this paper we study the effect of excluding short cycles, as a subgraph, on the kernelization complexity of Connected Dominating Set. Kernelization algorithms are polynomial-time algorithms that take an input and a positive integer $k$ (the parameter) and output an equivalent instance where the size of the new instance and the new parameter are both bounded by some function $g(k)$. The new instance is called a $g(k)$ kernel for the problem. If $g(k)$ is a polynomial in $k$ then we say that the problem admits polynomial kernels. The girth of a graph $G$ is the length of a shortest cycle in $G$. It turns out that Connected Dominating Set is ``hard'' on graphs with small cycles, and becomes progressively easier as the girth increases. More specifically, we obtain the following interesting trichotomy: Connected Dominating Set (a) does not have a kernel of any size on graphs of girth $3$ or $4$ (since the problem is W[2]-hard); (b) admits a $g(k)$ kernel, where $g(k)$ is $k^{O(k)}$, on graphs of girth $5$ or $6$ but has no polynomial kernel (unless the Polynomial Hierarchy (PH) collapses to the third level) on these graphs; (c) has a cubic ($O(k^3)$) kernel on graphs of girth at least $7$. While there is a large and growing collection of parameterized complexity results available for problems on graph classes characterized by excluded minors, our results add to the very few known in the field for graph classes characterized by excluded subgraphs.
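
For concreteness, here is a minimal verifier (not from the paper) for the property defined above; the graph is assumed to be given as an adjacency dictionary mapping each vertex to its set of neighbours.

# Illustrative sketch; the graph representation is an assumption.
from collections import deque

def is_connected_dominating_set(G, S):
    S = set(S)
    if not S:
        return False
    # Domination: every vertex outside S has a neighbour in S.
    if any(v not in S and not (G[v] & S) for v in G):
        return False
    # Connectivity: BFS restricted to the subgraph induced by S.
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in G[u] & S:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == S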

Cite as

Neeldhara Misra, Geevarghese Philip, Venkatesh Raman, and Saket Saurabh. The effect of girth on the kernelization complexity of Connected Dominating Set. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 96-107, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{misra_et_al:LIPIcs.FSTTCS.2010.96,
  author =	{Misra, Neeldhara and Philip, Geevarghese and Raman, Venkatesh and Saurabh, Saket},
  title =	{{The effect of girth on the kernelization complexity of Connected Dominating Set}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{96--107},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.96},
  URN =		{urn:nbn:de:0030-drops-28567},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.96},
  annote =	{Keywords: Connected Dominating Set, parameterized complexity, kernelization, girth}
}
Document
One-Counter Stochastic Games

Authors: Tomás Brázdil, Václav Brozek, and Kousha Etessami


Abstract
We study the computational complexity of basic decision problems for one-counter simple stochastic games (OC-SSGs), under various objectives. OC-SSGs are 2-player turn-based stochastic games played on the transition graph of classic one-counter automata. We study primarily the termination objective, where the goal of one player is to maximize the probability of reaching counter value 0, while the other player wishes to avoid this. Partly motivated by the goal of understanding termination objectives, we also study certain ``limit'' and ``long run average'' reward objectives that are closely related to some well-studied objectives for stochastic games with rewards. Examples of problems we address include: does player 1 have a strategy to ensure that the counter eventually hits 0, i.e., terminates, almost surely, regardless of what player 2 does? Or that the $\liminf$ (or $\limsup$) counter value equals $\infty$ with a desired probability? Or that the long run average reward is $>0$ with a desired probability? We show that the qualitative termination problem for OC-SSGs is in $NP \cap coNP$, and is in P-time for 1-player OC-SSGs, or equivalently for one-counter Markov Decision Processes (OC-MDPs). Moreover, we show that quantitative limit problems for OC-SSGs are in $NP \cap coNP$, and are in P-time for 1-player OC-MDPs. Both qualitative limit problems and qualitative termination problems for OC-SSGs are already at least as hard as Condon's quantitative decision problem for finite-state SSGs.

Cite as

Tomás Brázdil, Václav Brozek, and Kousha Etessami. One-Counter Stochastic Games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 108-119, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{brazdil_et_al:LIPIcs.FSTTCS.2010.108,
  author =	{Br\'{a}zdil, Tom\'{a}s and Brozek, V\'{a}clav and Etessami, Kousha},
  title =	{{One-Counter Stochastic Games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{108--119},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.108},
  URN =		{urn:nbn:de:0030-drops-28571},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.108},
  annote =	{Keywords: one-counter automata, simple stochastic games, Markov decision process, termination, limit, long run average reward}
}
Document
ATL with Strategy Contexts: Expressiveness and Model Checking

Authors: Arnaud Da Costa, François Laroussinie, and Nicolas Markey


Abstract
We study the alternating-time temporal logics ATL and ATL* extended with strategy contexts: these make agents commit to their strategies during the evaluation of formulas, contrary to plain ATL and ATL* where strategy quantifiers reset previously selected strategies. We illustrate the expressive power added by strategy contexts by proving that they make the extended logics, namely ATLsc and ATLsc*, equally expressive: any formula in ATLsc* can be translated into an equivalent, linear-size ATLsc formula. Despite the high expressiveness of these logics, we prove that their model-checking problems remain decidable by designing a tree-automata-based algorithm for model-checking ATLsc* on the full class of $n$-player concurrent game structures.

Cite as

Arnaud Da Costa, François Laroussinie, and Nicolas Markey. ATL with Strategy Contexts: Expressiveness and Model Checking. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 120-132, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{dacosta_et_al:LIPIcs.FSTTCS.2010.120,
  author =	{Da Costa, Arnaud and Laroussinie, Fran\c{c}ois and Markey, Nicolas},
  title =	{{ATL with Strategy Contexts: Expressiveness and Model Checking}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{120--132},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.120},
  URN =		{urn:nbn:de:0030-drops-28589},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.120},
  annote =	{Keywords: alternating temporal logic, agent, strategy quantifier}
}
Document
Reasoning About Strategies

Authors: Fabio Mogavero, Aniello Murano, and Moshe Y. Vardi


Abstract
In open systems verification, to formally check for reliability, one needs an appropriate formalism to model the interaction between open entities and express that the system is correct no matter how the environment behaves. An important contribution in this context is given by modal logics for strategic ability, in the setting of multi-agent games, such as ATL, ATL*, and the like. Recently, Chatterjee, Henzinger, and Piterman introduced Strategy Logic, which we denote here by SL-CHP, with the aim of getting a powerful framework for reasoning explicitly about strategies. SL-CHP is obtained by using first-order quantifications over strategies and it has been investigated in the specific setting of two-agent turn-based game structures, where a non-elementary model-checking algorithm has been provided. While SL-CHP is a very expressive logic, we claim that it does not fully capture the strategic aspects of multi-agent systems. In this paper, we introduce and study a more general strategy logic, denoted SL, for reasoning about strategies in multi-agent concurrent systems. We prove that SL strictly includes SL-CHP, while maintaining a decidable model-checking problem. Indeed, we show that it is 2ExpTime-complete, thus no harder than model checking for ATL*, and a remarkable improvement over the corresponding problem for SL-CHP. We also consider the satisfiability problem and show that it is undecidable already for the sub-logic SL-CHP under the concurrent game semantics.

Cite as

Fabio Mogavero, Aniello Murano, and Moshe Y. Vardi. Reasoning About Strategies. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 133-144, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{mogavero_et_al:LIPIcs.FSTTCS.2010.133,
  author =	{Mogavero, Fabio and Murano, Aniello and Vardi, Moshe Y.},
  title =	{{Reasoning About Strategies}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{133--144},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.133},
  URN =		{urn:nbn:de:0030-drops-28597},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.133},
  annote =	{Keywords: open systems, multi-agent systems, verification, strategy quantifier, alternating temporal logic, model-checking}
}
Document
New Results on Quantum Property Testing

Authors: Sourav Chakraborty, Eldar Fischer, Arie Matsliah, and Ronald de Wolf


Abstract
We present several new examples of speed-ups obtainable by quantum algorithms in the context of property testing. First, motivated by sampling algorithms, we consider probability distributions given in the form of an oracle $f:[n]\to[m]$. Here the probability $P_f(j)$ of an outcome $j$ in $[m]$ is the fraction of its domain that $f$ maps to $j$. We give quantum algorithms for testing whether two such distributions are identical or $\epsilon$-far in $L_1$-norm. Recently, Bravyi, Hassidim, and Harrow showed that if $P_f$ and $P_g$ are both unknown (i.e., given by oracles $f$ and $g$), then this testing can be done in roughly $\sqrt{m}$ quantum queries to the functions. We consider the case where the second distribution is known, and show that testing can be done with roughly $m^{1/3}$ quantum queries, which we prove to be essentially optimal. In contrast, it is known that classical testing algorithms need about $m^{2/3}$ queries in the unknown-unknown case and about $\sqrt{m}$ queries in the known-unknown case. Based on this result, we also reduce the query complexity of graph isomorphism testers with quantum oracle access. While those examples provide polynomial quantum speed-ups, our third example gives a much larger improvement (constant quantum queries vs polynomial classical queries) for the problem of testing periodicity, based on Shor's algorithm and a modification of a classical lower bound by Lachish and Newman. This provides an alternative to a recent constant-vs-polynomial speed-up due to Aaronson.
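
A small classical sketch (for illustration only; function names are hypothetical) of the objects discussed above: the distribution $P_f$ induced by an oracle $f:[n]\to[m]$ and the $L_1$ distance underlying the notion of being $\epsilon$-far.

# Illustrative sketch; the list encoding of the oracle is an assumption.
from collections import Counter

def oracle_distribution(f_values, m):
    # f_values: the list [f(1), ..., f(n)] with outcomes in {1, ..., m}.
    n = len(f_values)
    counts = Counter(f_values)
    return [counts[j] / n for j in range(1, m + 1)]

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

p = oracle_distribution([1, 1, 2, 3], m=3)   # P_f = (1/2, 1/4, 1/4)
q = oracle_distribution([1, 2, 2, 3], m=3)   # P_g = (1/4, 1/2, 1/4)
print(l1_distance(p, q))                     # 0.5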

Cite as

Sourav Chakraborty, Eldar Fischer, Arie Matsliah, and Ronald de Wolf. New Results on Quantum Property Testing. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 145-156, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2010.145,
  author =	{Chakraborty, Sourav and Fischer, Eldar and Matsliah, Arie and de Wolf, Ronald},
  title =	{{New Results on Quantum Property Testing}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{145--156},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.145},
  URN =		{urn:nbn:de:0030-drops-28603},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.145},
  annote =	{Keywords: quantum algorithm, property testing}
}
Document
Lower bounds for Quantum Oblivious Transfer

Authors: André Chailloux, Iordanis Kerenidis, and Jamie Sikora


Abstract
Oblivious transfer is a fundamental primitive in cryptography. While perfect information theoretic security is impossible, quantum oblivious transfer protocols can limit the dishonest players' cheating. Finding the optimal security parameters in such protocols is an important open question. In this paper we show that every 1-out-of-2 oblivious transfer protocol allows a dishonest party to cheat with probability bounded below by a constant strictly larger than $1/2$. Alice's cheating is defined as her probability of guessing Bob's index, and Bob's cheating is defined as his probability of guessing both input bits of Alice. In our proof, we relate these cheating probabilities to the cheating probabilities of a coin flipping protocol and conclude by using Kitaev's coin flipping lower bound. Then, we present an oblivious transfer protocol with two messages and cheating probabilities at most $3/4$. Last, we extend Kitaev's semidefinite programming formulation to more general primitives, where the security is against a dishonest player trying to force the outcome of the other player, and prove optimal lower and upper bounds for them.

Cite as

André Chailloux, Iordanis Kerenidis, and Jamie Sikora. Lower bounds for Quantum Oblivious Transfer. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 157-168, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{chailloux_et_al:LIPIcs.FSTTCS.2010.157,
  author =	{Chailloux, Andr\'{e} and Kerenidis, Iordanis and Sikora, Jamie},
  title =	{{Lower bounds for Quantum Oblivious Transfer}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{157--168},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.157},
  URN =		{urn:nbn:de:0030-drops-28613},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.157},
  annote =	{Keywords: quantum oblivious transfer, coin flipping protocol, semidefinite programming}
}
Document
Minimizing Busy Time in Multiple Machine Real-time Scheduling

Authors: Rohit Khandekar, Baruch Schieber, Hadas Shachnai, and Tami Tamir


Abstract
We consider the following fundamental scheduling problem. The input consists of $n$ jobs to be scheduled on a set of machines of bounded capacities. Each job is associated with a release time, a due date, a processing time and a demand for machine capacity. The goal is to schedule all of the jobs non-preemptively in their release-time-deadline windows, subject to machine capacity constraints, such that the total busy time of the machines is minimized. Our problem has important applications in power-aware scheduling, optical network design and unit commitment in power systems. Scheduling to minimize busy times is APX-hard already in the special case where all jobs have the same (unit) processing times and can be scheduled in a fixed time interval. Our main result is a $5$-approximation algorithm for general instances. We extend this result to obtain an algorithm with the same approximation ratio for the problem of scheduling moldable jobs, which also requires determining, for each job, one of several processing-time vs. demand configurations. Better bounds and exact algorithms are derived for several special cases, including proper interval graphs, intervals forming a clique and laminar families of intervals.
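
To make the objective concrete, the following sketch (an illustration under stated assumptions, not the paper's algorithm) computes the total busy time of a fixed assignment, where a machine's busy time is the length of the union of the intervals of the jobs placed on it.

# Illustrative sketch; the interval encoding of scheduled jobs is an assumption.
def busy_time(intervals):
    # Total length of the union of a machine's processing intervals.
    total, current_end = 0, None
    for start, end in sorted(intervals):
        if current_end is None or start > current_end:
            total += end - start
            current_end = end
        elif end > current_end:
            total += end - current_end
            current_end = end
    return total

def total_busy_time(assignment):
    # assignment: machine -> list of (start, end) intervals scheduled on it.
    return sum(busy_time(intervals) for intervals in assignment.values())

print(total_busy_time({0: [(0, 3), (2, 5)], 1: [(1, 2)]}))  # 5 + 1 = 6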

Cite as

Rohit Khandekar, Baruch Schieber, Hadas Shachnai, and Tami Tamir. Minimizing Busy Time in Multiple Machine Real-time Scheduling. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 169-180, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{khandekar_et_al:LIPIcs.FSTTCS.2010.169,
  author =	{Khandekar, Rohit and Schieber, Baruch and Shachnai, Hadas and Tamir, Tami},
  title =	{{Minimizing Busy Time in Multiple Machine Real-time Scheduling}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{169--180},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.169},
  URN =		{urn:nbn:de:0030-drops-28909},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.169},
  annote =	{Keywords: real-time scheduling, busy time, preemption, approximation algorithm}
}
Document
A Near-linear Time Constant Factor Algorithm for Unsplittable Flow Problem on Line with Bag Constraints

Authors: Venkatesan T. Chakaravarthy, Anamitra R. Choudhury, and Yogish Sabharwal


Abstract
Consider a scenario where we need to schedule a set of jobs on a system offering some resource (such as electrical power or communication bandwidth), which we shall refer to as bandwidth. Each job consists of a set (or bag) of job instances. For each job instance, the input specifies the start time, finish time, bandwidth requirement and profit. The bandwidth offered by the system varies at different points of time and is specified as part of the input. A feasible solution is to choose a subset of instances such that at any point of time, the sum of bandwidth requirements of the chosen instances does not exceed the bandwidth available at that point of time, and furthermore, at most one instance is picked from each job. The goal is to find a maximum profit feasible solution. We study this problem under a natural assumption called the no-bottleneck assumption (NBA), wherein the bandwidth requirement of any job instance is at most the minimum bandwidth available. We present a simple, near-linear time constant factor approximation algorithm for this problem, under NBA. When each job consists of only one job instance, the above problem is the same as the well-studied unsplittable flow problem (UFP) on lines. A constant factor approximation algorithm is known for the UFP on line, under NBA. Our result leads to an alternative constant factor approximation algorithm for this problem. Though the approximation ratio achieved by our algorithm is inferior, it is much simpler, deterministic and faster in comparison to the existing algorithms. Our algorithm runs in near-linear time ($O(n \log^2 n)$), whereas the running time of the known algorithms is a high order polynomial. The core idea behind our algorithm is a reduction from the varying bandwidth case to the easier uniform bandwidth case, using a technique that we call slicing.
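
As an illustration of the feasibility condition stated above (capacity respected at every point of time, at most one instance per job), here is a small checker; the tuple encoding of instances and the discretisation of time to integer points are assumptions made for this sketch.

# Illustrative sketch, not the paper's algorithm.
def is_feasible(chosen, bandwidth, horizon):
    # chosen: list of instances (job_id, start, finish, demand);
    # bandwidth(t): capacity available at time t.
    jobs = [job for job, *_ in chosen]
    if len(jobs) != len(set(jobs)):          # at most one instance per job
        return False
    for t in range(horizon):                 # capacity at every time point
        load = sum(d for _, s, f, d in chosen if s <= t < f)
        if load > bandwidth(t):
            return False
    return True

# Two instances of different jobs on a profile with capacity 3 everywhere.
print(is_feasible([(1, 0, 4, 2), (2, 2, 5, 1)], bandwidth=lambda t: 3, horizon=6))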

Cite as

Venkatesan T. Chakaravarthy, Anamitra R. Choudhury, and Yogish Sabharwal. A Near-linear Time Constant Factor Algorithm for Unsplittable Flow Problem on Line with Bag Constraints. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 181-191, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{chakaravarthy_et_al:LIPIcs.FSTTCS.2010.181,
  author =	{Chakaravarthy, Venkatesan T. and Choudhury, Anamitra R. and Sabharwal, Yogish},
  title =	{{A Near-linear Time Constant Factor Algorithm for Unsplittable Flow Problem on Line with Bag Constraints}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{181--191},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.181},
  URN =		{urn:nbn:de:0030-drops-28623},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.181},
  annote =	{Keywords: Approximation Algorithms; Scheduling; Resource Allocation}
}
Document
Place-Boundedness for Vector Addition Systems with one zero-test

Authors: Rémi Bonnet, Alain Finkel, Jérôme Leroux, and Marc Zeitoun


Abstract
Reachability and boundedness problems have been shown decidable for Vector Addition Systems with one zero-test. Surprisingly, place-boundedness remained open. We provide here a variation of the Karp-Miller algorithm to compute a basis of the downward closure of the reachability set, which allows us to decide place-boundedness. This forward algorithm is able to pass the zero-tests thanks to a finer cover, hybrid between the reachability and cover sets, reclaiming accuracy on one component. We show that this filtered cover is still recursive, but that equality of two such filtered covers, even for usual Vector Addition Systems (with no zero-test), is undecidable.

Cite as

Rémi Bonnet, Alain Finkel, Jérôme Leroux, and Marc Zeitoun. Place-Boundedness for Vector Addition Systems with one zero-test. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 192-203, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{bonnet_et_al:LIPIcs.FSTTCS.2010.192,
  author =	{Bonnet, R\'{e}mi and Finkel, Alain and Leroux, J\'{e}r\^{o}me and Zeitoun, Marc},
  title =	{{Place-Boundedness for Vector Addition Systems with one zero-test}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{192--203},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.192},
  URN =		{urn:nbn:de:0030-drops-28638},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.192},
  annote =	{Keywords: Place-boundedness, vector addition system with one zero-test, Karp-Miller algorithm}
}
Document
Model checking time-constrained scenario-based specifications

Authors: S. Akshay, Paul Gastin, Madhavan Mukund, and K. Narayan Kumar


Abstract
We consider the problem of model checking message-passing systems with real-time requirements. As behavioural specifications, we use message sequence charts (MSCs) annotated with timing constraints. Our system model is a network of communicating finite state machines with local clocks, whose global behaviour can be regarded as a timed automaton. Our goal is to verify that all timed behaviours exhibited by the system conform to the timing constraints imposed by the specification. In general, this corresponds to checking inclusion for timed languages, which is an undecidable problem even for timed regular languages. However, we show that we can translate regular collections of time-constrained MSCs into a special class of event-clock automata that can be determinized and complemented, thus permitting an algorithmic solution to the model checking problem.

Cite as

S. Akshay, Paul Gastin, Madhavan Mukund, and K. Narayan Kumar. Model checking time-constrained scenario-based specifications. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 204-215, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{akshay_et_al:LIPIcs.FSTTCS.2010.204,
  author =	{Akshay, S. and Gastin, Paul and Mukund, Madhavan and Narayan Kumar, K.},
  title =	{{Model checking time-constrained scenario-based specifications}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{204--215},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.204},
  URN =		{urn:nbn:de:0030-drops-28649},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.204},
  annote =	{Keywords: model-checking, message-passing system, time-constrained MSC}
}
Document
Global Model Checking of Ordered Multi-Pushdown Systems

Authors: Mohamed Faouzi Atig


Abstract
In this paper, we address the verification problem for ordered multi-pushdown systems: a multi-stack extension of pushdown systems that comes with a constraint on stack operations such that a pop can only be performed on the first non-empty stack. First, we show that for an ordered multi-pushdown system the set of all predecessors of a regular set of configurations is an effectively constructible regular set. Then, we exploit this result to solve the global model checking problem, which consists in computing the set of all configurations of an ordered multi-pushdown system that satisfy a given $\omega$-regular property (expressible in linear-time temporal logics or the linear-time $\mu$-calculus). As an immediate consequence of this result, we obtain a 2ETIME upper bound for the model checking problem of $\omega$-regular properties for ordered multi-pushdown systems (matching its lower bound).
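
A tiny sketch (illustrative only; class and method names are hypothetical) of the stack discipline described above, in which a pop is only permitted on the first non-empty stack while pushes are unrestricted.

# Illustrative sketch, not from the paper.
class OrderedMultiPushdown:
    def __init__(self, num_stacks):
        self.stacks = [[] for _ in range(num_stacks)]

    def push(self, i, symbol):
        self.stacks[i].append(symbol)

    def pop(self, i):
        # Only the first non-empty stack may be popped.
        first_nonempty = next((j for j, s in enumerate(self.stacks) if s), None)
        if first_nonempty != i:
            raise ValueError("pop is only allowed on the first non-empty stack")
        return self.stacks[i].pop()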

Cite as

Mohamed Faouzi Atig. Global Model Checking of Ordered Multi-Pushdown Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 216-227, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{atig:LIPIcs.FSTTCS.2010.216,
  author =	{Atig, Mohamed Faouzi},
  title =	{{Global Model Checking of Ordered Multi-Pushdown Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{216--227},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.216},
  URN =		{urn:nbn:de:0030-drops-28653},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.216},
  annote =	{Keywords: Concurrent Programs, Pushdown Systems, Global Model-Checking}
}
Document
The Complexity of Model Checking (Collapsible) Higher-Order Pushdown Systems

Authors: Matthew Hague and Anthony Widjaja To


Abstract
We study (collapsible) higher-order pushdown systems --- theoretically robust and well-studied models of higher-order programs --- along with their natural subclass called (collapsible) higher-order basic process algebras. We provide a comprehensive analysis of the model checking complexity of a range of both branching-time and linear-time temporal logics. We obtain tight bounds on data, expression, and combined-complexity for both (collapsible) higher-order pushdown systems and (collapsible) higher-order basic process algebra. At order-$k$, results range from polynomial to $(k+1)$-exponential time. Finally, we study (collapsible) higher-order basic process algebras as graph generators and show that they are almost as powerful as (collapsible) higher-order pushdown systems up to MSO interpretations.

Cite as

Matthew Hague and Anthony Widjaja To. The Complexity of Model Checking (Collapsible) Higher-Order Pushdown Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 228-239, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{hague_et_al:LIPIcs.FSTTCS.2010.228,
  author =	{Hague, Matthew and To, Anthony Widjaja},
  title =	{{The Complexity of Model Checking (Collapsible) Higher-Order Pushdown Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{228--239},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.228},
  URN =		{urn:nbn:de:0030-drops-28663},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.228},
  annote =	{Keywords: Higher-Order, Collapsible, Pushdown Systems, Temporal Logics, Complexity, Model Checking}
}
Document
A graph polynomial for independent sets of bipartite graphs

Authors: Qi Ge and Daniel Stefankovic


Abstract
We introduce a new graph polynomial that encodes interesting properties of graphs, for example, the number of matchings, the number of perfect matchings, and, for bipartite graphs, the number of independent sets (#BIS). We analyze the complexity of exact evaluation of the polynomial at rational points and show a dichotomy result: for most points exact evaluation is #P-hard (assuming the generalized Riemann hypothesis), while for the remaining points exact evaluation is trivial. We propose a natural Markov chain to approximately evaluate the polynomial for a range of parameters. We prove an upper bound on the mixing time of the Markov chain on trees. As a by-product we show that the ``single bond flip'' Markov chain for the random cluster model is rapidly mixing on constant tree-width graphs.

Cite as

Qi Ge and Daniel Stefankovic. A graph polynomial for independent sets of bipartite graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 240-250, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{ge_et_al:LIPIcs.FSTTCS.2010.240,
  author =	{Ge, Qi and Stefankovic, Daniel},
  title =	{{A graph polynomial for independent sets of bipartite graphs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{240--250},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.240},
  URN =		{urn:nbn:de:0030-drops-28676},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.240},
  annote =	{Keywords: graph polynomials, #P-complete, independent sets, approximate counting problems, Markov chain Monte Carlo}
}
Document
Finding Independent Sets in Unions of Perfect Graphs

Authors: Venkatesan T. Chakaravarthy, Vinayaka Pandit, Sambuddha Roy, and Yogish Sabharwal


Abstract
The maximum independent set problem (MaxIS) on general graphs is known to be NP-hard to approximate within a factor of $n^{1-\epsilon}$, for any $\epsilon > 0$. However, there are many ``easy'' classes of graphs on which the problem can be solved in polynomial time. In this context, an interesting question is that of computing the maximum independent set in a graph that can be expressed as the union of a small number of graphs from an easy class. The MaxIS problem has been studied on unions of interval graphs and chordal graphs. We study the MaxIS problem on unions of perfect graphs (which generalize the above two classes). We present an $O(\sqrt{n})$-approximation algorithm when the input graph is the union of two perfect graphs. We also show that the MaxIS problem on unions of two comparability graphs (a subclass of perfect graphs) cannot be approximated within any constant factor.

Cite as

Venkatesan T. Chakaravarthy, Vinayaka Pandit, Sambuddha Roy, and Yogish Sabharwal. Finding Independent Sets in Unions of Perfect Graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 251-259, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


@InProceedings{chakaravarthy_et_al:LIPIcs.FSTTCS.2010.251,
  author =	{Chakaravarthy, Venkatesan T. and Pandit, Vinayaka and Roy, Sambuddha and Sabharwal, Yogish},
  title =	{{Finding Independent Sets in Unions of Perfect Graphs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{251--259},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.251},
  URN =		{urn:nbn:de:0030-drops-28683},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.251},
  annote =	{Keywords: Approximation Algorithms; Comparability Graphs; Hardness of approximation}
}
Document
Fast equivalence-checking for normed context-free processes

Authors: Wojciech Czerwinski and Slawomir Lasota


Abstract
Bisimulation equivalence is decidable in polynomial time over normed graphs generated by a context-free grammar. We present a new algorithm, working in time $O(n^5)$, thus improving the previously known complexity $O(n^8 \cdot polylog(n))$. It also improves the previously known complexity $O(n^6 \cdot polylog(n))$ of the equality problem for simple grammars.

Cite as

Wojciech Czerwinski and Slawomir Lasota. Fast equivalence-checking for normed context-free processes. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 260-271, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{czerwinski_et_al:LIPIcs.FSTTCS.2010.260,
  author =	{Czerwinski, Wojciech and Lasota, Slawomir},
  title =	{{Fast equivalence-checking for normed context-free processes}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{260--271},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.260},
  URN =		{urn:nbn:de:0030-drops-28690},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.260},
  annote =	{Keywords: bisimulation, norm, context-free grammar, simple grammar}
}
Document
Generalizing the powerset construction, coalgebraically

Authors: Alexandra Silva, Filippo Bonchi, Marcello M. Bonsangue, and Jan J. M. M. Rutten


Abstract
Coalgebra is an abstract framework for the uniform study of different kinds of dynamical systems. An endofunctor $F$ determines both the type of systems ($F$-coalgebras) and a notion of behavioral equivalence ($\sim_F$) amongst them. Many types of transition systems and their equivalences can be captured by a functor $F$. For example, for deterministic automata the derived equivalence is language equivalence, while for non-deterministic automata it is ordinary bisimilarity. The powerset construction is a standard method for converting a nondeterministic automaton into an equivalent deterministic one as far as language is concerned. In this paper, we lift the powerset construction on automata to the more general framework of coalgebras with structured state spaces. Examples of applications include partial Mealy machines, (structured) Moore automata, and Rabin probabilistic automata.
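As background, here is a minimal Python sketch of the classical powerset (subset) construction that the paper lifts to the coalgebraic level: it determinizes a nondeterministic automaton while preserving its language. The encoding of the NFA (a dict from (state, letter) pairs to successor sets) and the function name are assumptions made for this example only.

from collections import deque

def determinize(alphabet, delta, init, finals):
    # delta maps (state, letter) to a set of successor states.
    start = frozenset([init])
    seen, queue, dfa_delta = {start}, deque([start]), {}
    while queue:
        S = queue.popleft()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return seen, dfa_delta, start, dfa_finals

# NFA over {a, b} accepting exactly the words ending in 'a'.
nfa_delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
states, trans, start, finals = determinize('ab', nfa_delta, 0, {1})
print(len(states), finals)   # 2 reachable subset states; {0, 1} is accepting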

Cite as

Alexandra Silva, Filippo Bonchi, Marcello M. Bonsangue, and Jan J. M. M. Rutten. Generalizing the powerset construction, coalgebraically. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 272-283, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{silva_et_al:LIPIcs.FSTTCS.2010.272,
  author =	{Silva, Alexandra and Bonchi, Filippo and Bonsangue, Marcello M. and Rutten, Jan J. M. M.},
  title =	{{Generalizing the powerset construction, coalgebraically}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{272--283},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.272},
  URN =		{urn:nbn:de:0030-drops-28706},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.272},
  annote =	{Keywords: coalgebra, language equivalence, bisimilarity, powerset construction}
}
Document
Uniqueness of Normal Forms is Decidable for Shallow Term Rewrite Systems

Authors: Nicholas Radcliffe and Rakesh M. Verma


Abstract
Uniqueness of normal forms (UN=) is an important property of term rewrite systems. UN= is decidable for ground (i.e., variable-free) systems and undecidable in general. Recently it was shown to be decidable for linear, shallow systems. We generalize this previous result and show that this property is decidable for shallow rewrite systems, in contrast to confluence, reachability and other properties, which are all undecidable for flat systems. Our result is also optimal in some sense, since we prove that the UN= property is undecidable for two superclasses of flat systems: left-flat, left-linear systems in which right-hand sides are of depth at most two and right-flat, right-linear systems in which left-hand sides are of depth at most two.

Cite as

Nicholas Radcliffe and Rakesh M. Verma. Uniqueness of Normal Forms is Decidable for Shallow Term Rewrite Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 284-295, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{radcliffe_et_al:LIPIcs.FSTTCS.2010.284,
  author =	{Radcliffe, Nicholas and Verma, Rakesh M.},
  title =	{{Uniqueness of Normal Forms is Decidable for Shallow Term Rewrite Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{284--295},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.284},
  URN =		{urn:nbn:de:0030-drops-28715},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.284},
  annote =	{Keywords: term rewrite systems, uniqueness of normal forms, decidability, shallow rewrite systems, flat rewrite systems}
}
Document
Deterministic Black-Box Identity Testing $\pi$-Ordered Algebraic Branching Programs

Authors: Maurice Jansen, Youming Qiao, and Jayalal Sarma M.N.


Abstract
In this paper we study algebraic branching programs (ABPs) with restrictions on the order and the number of reads of variables in the program. An ABP is given by a layered directed acyclic graph with source $s$ and sink $t$, whose edges are labeled by variables taken from the set $\{x_1, x_2, \ldots, x_n\}$ or field constants. It computes the sum of weights of all paths from $s$ to $t$, where the weight of a path is defined as the product of edge-labels on the path. Given a permutation $\pi$ of the $n$ variables, for a $\pi$-ordered ABP ($\pi$-OABP), for any directed path $p$ from $s$ to $t$, a variable can appear at most once on $p$, and the order in which variables appear on $p$ must respect $\pi$. One can think of OABPs as being the arithmetic analogue of ordered binary decision diagrams (OBDDs). We say an ABP $A$ is of read $r$, if any variable appears at most $r$ times in $A$. Our main result pertains to the polynomial identity testing problem, i.e. the problem of deciding whether a given $n$-variate polynomial is identical to the zero polynomial or not. We prove that over any field $\mathbb{F}$, and in the black-box model, i.e. given only query access to the polynomial, read $r$ $\pi$-OABP computable polynomials can be tested in $\mathrm{DTIME}[2^{O(r\log r \cdot \log^2 n \log\log n)}]$. In case $\mathbb{F}$ is a finite field, the above time bound holds provided the identity testing algorithm is allowed to make queries to extension fields of $\mathbb{F}$. To establish this result, we combine some basic tools from algebraic geometry with ideas from derandomization in the Boolean domain. Our next set of results investigates the computational limitations of OABPs. It is shown that any OABP computing the determinant or permanent requires size $\Omega(2^n/n)$ and read $\Omega(2^n/n^2)$. We give a multilinear polynomial $p$ in $2n+1$ variables over some specifically selected field $\mathbb{G}$, such that any OABP computing $p$ must read some variable at least $2^n$ times. We prove a strict separation for the computational power of read $(r-1)$ and read $r$ OABPs. Namely, we show that the elementary symmetric polynomial of degree $r$ in $n$ variables can be computed by a size $O(rn)$ read $r$ OABP, but not by a read $(r-1)$ OABP, for any $0 < 2r-1 \leq n$. Finally, we give an example of a polynomial $p$ and two variable orders $\pi \neq \pi'$, such that $p$ can be computed by a read-once $\pi$-OABP, but where any $\pi'$-OABP computing $p$ must read some variable at least $2^n$ times.
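To make the definition concrete, the polynomial computed by an ABP is the sum, over all $s$-to-$t$ paths, of the products of the edge labels. The short Python sketch below evaluates this quantity for a DAG whose labels are already field constants; the graph encoding and the function name are just assumptions for the example, not anything from the paper.

def abp_value(topo_order, edges, s, t):
    # edges maps a node to a list of (successor, label) pairs; topo_order lists
    # the nodes so that every edge goes from an earlier node to a later one.
    acc = {v: 0 for v in topo_order}
    acc[s] = 1                      # the empty path from s to s has weight 1
    for v in topo_order:
        for w, label in edges.get(v, []):
            acc[w] += acc[v] * label
    return acc[t]

# Two parallel length-2 paths with labels (2, 3) and (5, 7): value = 2*3 + 5*7 = 41.
edges = {'s': [('u', 2), ('v', 5)], 'u': [('t', 3)], 'v': [('t', 7)]}
print(abp_value(['s', 'u', 'v', 't'], edges, 's', 't'))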

Cite as

Maurice Jansen, Youming Qiao, and Jayalal Sarma M.N.. Deterministic Black-Box Identity Testing $\pi$-Ordered Algebraic Branching Programs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 296-307, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.FSTTCS.2010.296,
  author =	{Jansen, Maurice and Qiao, Youming and Sarma M.N., Jayalal},
  title =	{{Deterministic Black-Box Identity Testing \$pi\$-Ordered Algebraic Branching Programs}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{296--307},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.296},
  URN =		{urn:nbn:de:0030-drops-28728},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.296},
  annote =	{Keywords: ordered algebraic branching program, polynomial identity testing}
}
Document
Computing Rational Radical Sums in Uniform TC^0

Authors: Paul Hunter, Patricia Bouyer, Nicolas Markey, Joël Ouaknine, and James Worrell


Abstract
A fundamental problem in numerical computation and computational geometry is to determine the sign of arithmetic expressions in radicals. Here we consider the simpler problem of deciding whether $\sum_{i=1}^m C_i A_i^{X_i}$ is zero for given rational numbers $A_i$, $C_i$, $X_i$. It has been known for almost twenty years that this can be decided in polynomial time. In this paper we improve this result by showing membership in uniform TC^0. This requires several significant departures from Blömer's polynomial-time algorithm as the latter crucially relies on primitives, such as gcd computation and binary search, that are not known to be in TC^0.
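For orientation, the decision question itself is easy to state, as the naive floating-point sketch below shows for small instances; it is only a heuristic with a tolerance, not an exact procedure, and certainly not the TC^0 algorithm of the paper. The function name is hypothetical.

from fractions import Fraction

def radical_sum_is_probably_zero(terms, tol=1e-9):
    # terms is a list of (C, A, X) triples of rationals; we approximate
    # sum_i C_i * A_i^{X_i} in floating point and compare against a tolerance.
    total = sum(float(C) * float(A) ** float(X) for C, A, X in terms)
    return abs(total) < tol

# sqrt(8) - 2*sqrt(2) = 0:
terms = [(Fraction(1), Fraction(8), Fraction(1, 2)),
         (Fraction(-2), Fraction(2), Fraction(1, 2))]
print(radical_sum_is_probably_zero(terms))   # True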

Cite as

Paul Hunter, Patricia Bouyer, Nicolas Markey, Joël Ouaknine, and James Worrell. Computing Rational Radical Sums in Uniform TC^0. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 308-316, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{hunter_et_al:LIPIcs.FSTTCS.2010.308,
  author =	{Hunter, Paul and Bouyer, Patricia and Markey, Nicolas and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{Computing Rational Radical Sums in Uniform TC^0}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{308--316},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.308},
  URN =		{urn:nbn:de:0030-drops-28739},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.308},
  annote =	{Keywords: Sum of square roots, Threshold circuits, Complexity}
}
Document
Graph Isomorphism is not AC^0 reducible to Group Isomorphism

Authors: Arkadev Chattopadhyay, Jacobo Torán, and Fabian Wagner


Abstract
We give a new upper bound for the Group and Quasigroup Isomorphism problems when the input structures are given explicitly by multiplication tables. We show that these problems can be computed by polynomial size nondeterministic circuits of unbounded fan-in with $O(\log\log n)$ depth and $O(\log^2 n)$ nondeterministic bits, where $n$ is the number of group elements. This improves the existing upper bound of Wolf (1994) for the problems. In the previous upper bound the circuits have bounded fan-in but depth $O(\log^2 n)$ and also $O(\log^2 n)$ nondeterministic bits. We then prove that the kind of circuits from our upper bound cannot compute the Parity function. Since Parity is AC^0 reducible to Graph Isomorphism, this implies that Graph Isomorphism is strictly harder than Group or Quasigroup Isomorphism under the ordering defined by AC^0 reductions.
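For contrast with the small-depth circuits of the paper, here is the obvious exponential-time baseline: testing isomorphism of two structures given by multiplication tables by trying all relabelings. The table encoding and function name are assumptions made for this sketch.

from itertools import permutations

def tables_isomorphic(table1, table2):
    # table[i][j] is the product of elements i and j; both tables are n x n
    # over the element set {0, ..., n-1}.
    n = len(table1)
    for pi in permutations(range(n)):
        if all(pi[table1[i][j]] == table2[pi[i]][pi[j]]
               for i in range(n) for j in range(n)):
            return True
    return False

# The two-element group written with its elements listed in two different orders.
print(tables_isomorphic([[0, 1], [1, 0]], [[1, 0], [0, 1]]))   # True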

Cite as

Arkadev Chattopadhyay, Jacobo Torán, and Fabian Wagner. Graph Isomorphism is not AC^0 reducible to Group Isomorphism. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 317-326, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{chattopadhyay_et_al:LIPIcs.FSTTCS.2010.317,
  author =	{Chattopadhyay, Arkadev and Tor\'{a}n, Jacobo and Wagner, Fabian},
  title =	{{Graph Isomorphism is not AC^0 reducible to Group Isomorphism}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{317--326},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.317},
  URN =		{urn:nbn:de:0030-drops-28748},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.317},
  annote =	{Keywords: Complexity, Algorithms, Group Isomorphism Problem, Circuit Complexity}
}
Document
Colored Hypergraph Isomorphism is Fixed Parameter Tractable

Authors: V. Arvind, Bireswar Das, Johannes Köbler, and Seinosuke Toda


Abstract
We describe a fixed parameter tractable (fpt) algorithm for Colored Hypergraph Isomorphism which has running time $2^{O(b)}N^{O(1)}$, where the parameter $b$ is the maximum size of the color classes of the given hypergraphs and $N$ is the input size. We also describe fpt algorithms for certain permutation group problems that are used as subroutines in our algorithm.

Cite as

V. Arvind, Bireswar Das, Johannes Köbler, and Seinosuke Toda. Colored Hypergraph Isomorphism is Fixed Parameter Tractable. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 327-337, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{arvind_et_al:LIPIcs.FSTTCS.2010.327,
  author =	{Arvind, V. and Das, Bireswar and K\"{o}bler, Johannes and Toda, Seinosuke},
  title =	{{Colored Hypergraph Isomorphism is Fixed Parameter Tractable}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{327--337},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.327},
  URN =		{urn:nbn:de:0030-drops-28751},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.327},
  annote =	{Keywords: Fixed parameter tractability, fpt algorithms, graph isomorphism, computational complexity}
}
Document
Global Escape in Multiparty Sessions

Authors: Sara Capecchi, Elena Giachino, and Nobuko Yoshida


Abstract
This paper proposes a global escape mechanism which can handle unexpected or unwanted conditions changing the default execution of distributed communicational flows, preserving compatibility of the multiparty conversations. Our escape is realised by a collection of asynchronous local exceptions which can be thrown at any stage of the communication and to any subsets of participants in a multiparty session. This flexibility makes it possible to model complex exceptions such as criss-crossing global interactions and fault tolerance for distributed cooperating threads. Guided by multiparty session types, our semantics automatically provides an efficient termination algorithm for global escapes with low complexity of exception messages.

Cite as

Sara Capecchi, Elena Giachino, and Nobuko Yoshida. Global Escape in Multiparty Sessions. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 338-351, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{capecchi_et_al:LIPIcs.FSTTCS.2010.338,
  author =	{Capecchi, Sara and Giachino, Elena and Yoshida, Nobuko},
  title =	{{Global Escape in Multiparty Sessions}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{338--351},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.338},
  URN =		{urn:nbn:de:0030-drops-28760},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.338},
  annote =	{Keywords: escape mechanism, exception handling, multiparty communication}
}
Document
Computationally Sound Abstraction and Verification of Secure Multi-Party Computations

Authors: Michael Backes, Matteo Maffei, and Esfandiar Mohammadi


Abstract
We devise an abstraction of secure multi-party computations in the applied $\pi$-calculus. Based on this abstraction, we propose a methodology to mechanically analyze the security of cryptographic protocols employing secure multi-party computations. We exemplify the applicability of our framework by analyzing the SIMAP sugar-beet double auction protocol. We finally study the computational soundness of our abstraction, proving that the analysis of protocols expressed in the applied $\pi$-calculus and based on our abstraction provides computational security guarantees.

Cite as

Michael Backes, Matteo Maffei, and Esfandiar Mohammadi. Computationally Sound Abstraction and Verification of Secure Multi-Party Computations. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 352-363, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{backes_et_al:LIPIcs.FSTTCS.2010.352,
  author =	{Backes, Michael and Maffei, Matteo and Mohammadi, Esfandiar},
  title =	{{Computationally Sound Abstraction and Verification of Secure Multi-Party  Computations}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{352--363},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.352},
  URN =		{urn:nbn:de:0030-drops-28771},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.352},
  annote =	{Keywords: Computational soundness, Secure multi-party computation, Process calculi, Protocol verification}
}
Document
Model Checking Concurrent Programs with Nondeterminism and Randomization

Authors: Rohit Chadha, A. Prasad Sistla, and Mahesh Viswanathan


Abstract
For concurrent probabilistic programs having process-level nondeterminism, it is often necessary to restrict the class of schedulers that resolve nondeterminism to obtain sound and precise model checking algorithms. In this paper, we introduce two classes of schedulers called view consistent and locally Markovian schedulers and consider the model checking problem of concurrent, probabilistic programs under these alternate semantics. Specifically, given a Büchi automaton $Spec$, a threshold $x$ in $[0,1]$, and a concurrent program $P$, the model checking problem asks if the measure of computations of $P$ that satisfy $Spec$ is at least $x$, under all view consistent (or locally Markovian) schedulers. We give precise complexity results for the model checking problem (for different classes of Büchi automata specifications) and contrast it with the complexity under the standard semantics that considers all schedulers.

Cite as

Rohit Chadha, A. Prasad Sistla, and Mahesh Viswanathan. Model Checking Concurrent Programs with Nondeterminism and Randomization. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 364-375, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{chadha_et_al:LIPIcs.FSTTCS.2010.364,
  author =	{Chadha, Rohit and Sistla, A. Prasad and Viswanathan, Mahesh},
  title =	{{Model Checking Concurrent Programs with Nondeterminism and  Randomization}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{364--375},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.364},
  URN =		{urn:nbn:de:0030-drops-28788},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.364},
  annote =	{Keywords: view consistent scheduler, locally Markovian scheduler, model-checking, probabilistic program}
}
Document
Two Size Measures for Timed Languages

Authors: Eugene Asarin and Aldric Degorre


Abstract
Quantitative properties of timed regular languages, such as information content (growth rate, entropy), are explored. The approach suggested by the same authors is extended to languages of timed automata with punctual (equalities) and non-punctual (non-equalities) transition guards. Two size measures for such languages are identified: mean dimension and volumetric entropy. The former is the linear growth rate of the dimension of the language; it is characterized as the spectral radius of a max-plus matrix associated with the automaton. The latter is the exponential growth rate of the volume of the language; it is characterized as the logarithm of the spectral radius of a matrix integral operator on some Banach space associated with the automaton. The relation of the two size measures to classical information-theoretic concepts is explored.

Cite as

Eugene Asarin and Aldric Degorre. Two Size Measures for Timed Languages. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 376-387, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{asarin_et_al:LIPIcs.FSTTCS.2010.376,
  author =	{Asarin, Eugene and Degorre, Aldric},
  title =	{{Two Size Measures for Timed Languages}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{376--387},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.376},
  URN =		{urn:nbn:de:0030-drops-28793},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.376},
  annote =	{Keywords: timed automata, entropy, mean dimension}
}
Document
Average Analysis of Glushkov Automata under a BST-Like Model

Authors: Cyril Nicaud, Carine Pivoteau, and Benoît Razet


Abstract
We study the average number of transitions in Glushkov automata built from random regular expressions. This statistic highly depends on the probabilistic distribution set on the expressions. A recent work shows that, under the uniform distribution, regular expressions lead to automata with a linear number of transitions. However, uniform regular expressions are not necessarily a satisfying model. Therefore, we instead focus on another model, inspired by random binary search trees (BST), which is widely used, in particular for testing. We establish that, in this case, the average number of transitions becomes quadratic in the size of the regular expression.

Cite as

Cyril Nicaud, Carine Pivoteau, and Benoît Razet. Average Analysis of Glushkov Automata under a BST-Like Model. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 388-399, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{nicaud_et_al:LIPIcs.FSTTCS.2010.388,
  author =	{Nicaud, Cyril and Pivoteau, Carine and Razet, Beno\^{i}t},
  title =	{{Average Analysis of Glushkov Automata under a BST-Like Model}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{388--399},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.388},
  URN =		{urn:nbn:de:0030-drops-28809},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.388},
  annote =	{Keywords: Glushkov automata, random binary search tree, regular expression}
}
Document
Beyond Hyper-Minimisation---Minimising DBAs and DPAs is NP-Complete

Authors: Sven Schewe


Abstract
In this paper we study the problem of minimising deterministic automata over finite and infinite words. Deterministic finite automata are the simplest devices to recognise regular languages, and deterministic Büchi, co-Büchi, and parity automata play a similar role in the recognition of $\omega$-regular languages. While it is well known that the minimisation of deterministic finite and weak automata is cheap, the complexity of minimising deterministic Büchi and parity automata has remained an open challenge. We establish the NP-completeness of these problems. A second contribution of this paper is the introduction of almost equivalence, an equivalence relation that lies strictly between language equivalence for deterministic Büchi or co-Büchi automata and language equivalence for deterministic finite automata. Two finite automata are almost equivalent if, when used as monitors, they provide a different answer only a bounded number of times in any run, and we call the minimal such automaton relatively minimal. Minimisation of DFAs, hyper-minimisation, relative minimisation, and the minimisation of deterministic Büchi (or co-Büchi) automata are operations of increasing reduction power, as the respective equivalence relations on automata become coarser from left to right. Besides being a natural equivalence relation for finite automata, almost equivalence is language preserving for weak automata, and can therefore also be viewed as a generalisation of language equivalence for weak automata to a more general class of automata. From the perspective of Büchi and co-Büchi automata, we gain a cheap algorithm for state-space reduction that also turns out to be beneficial for further heuristic or exhaustive state-space reductions put on top of it.

Cite as

Sven Schewe. Beyond Hyper-Minimisation---Minimising DBAs and DPAs is NP-Complete. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 400-411, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{schewe:LIPIcs.FSTTCS.2010.400,
  author =	{Schewe, Sven},
  title =	{{Beyond Hyper-Minimisation---Minimising DBAs and DPAs is NP-Complete}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{400--411},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.400},
  URN =		{urn:nbn:de:0030-drops-28816},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.400},
  annote =	{Keywords: Automata Theory, Complexity, B\"{u}chi Automata, Parity Automata}
}
Document
Parityizing Rabin and Streett

Authors: Udi Boker, Orna Kupferman, and Avital Steinitz


Abstract
The parity acceptance condition for $\omega$-regular languages is a special case of the Rabin and Streett acceptance conditions. While the parity acceptance condition is as expressive as the richer conditions, in both the deterministic and nondeterministic settings, Rabin and Streett automata are more succinct, and their translation to parity automata may blow up the state space. The appealing properties of the parity condition, mainly the fact that it is dualizable and allows for memoryless strategies, make such a translation useful in various decision procedures. In this paper we study languages that are recognizable by an automaton on top of which one can define both a Rabin and a Streett condition for the language. We show that if the underlying automaton is deterministic, then we can define on top of it also a parity condition for the language. We also show that this relation does not hold in the nondeterministic setting. Finally, we use the construction of the parity condition in the deterministic case in order to solve the problem of deciding whether a given Rabin or Streett automaton has an equivalent parity automaton on the same structure, and show that it is PTIME-complete in the deterministic setting and is PSPACE-complete in the nondeterministic setting.

Cite as

Udi Boker, Orna Kupferman, and Avital Steinitz. Parityizing Rabin and Streett. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 412-423, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{boker_et_al:LIPIcs.FSTTCS.2010.412,
  author =	{Boker, Udi and Kupferman, Orna and Steinitz, Avital},
  title =	{{Parityizing Rabin and Streett}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{412--423},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.412},
  URN =		{urn:nbn:de:0030-drops-28822},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.412},
  annote =	{Keywords: omega-automata, Rabin condition, Streett condition, parity condition}
}
Document
Finding Sparser Directed Spanners

Authors: Piotr Berman, Sofya Raskhodnikova, and Ge Ruan


Abstract
A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, a subgraph $H = (V,E_H)$ is a $k$-spanner of a graph $G=(V,E)$ if for every pair of vertices $u,v \in V$, the shortest path distance $dist_H(u,v)$ from $u$ to $v$ in $H$ is at most $k \cdot dist_G(u,v)$. We focus on spanners of directed graphs and a related notion of transitive-closure spanners. The latter captures the idea that a spanner should have a small diameter but preserve the connectivity of the original graph. We study the computational problem of finding the sparsest $k$-spanner (resp., $k$-TC-spanner) of a given directed graph, which we refer to as DIRECTED $k$-SPANNER (resp., $k$-TC-SPANNER). We improve all known approximation algorithms for these problems for $k\geq 3$. (For $k=2$, the current ratios are tight, assuming P$\neq$NP.) Along the way, we prove several structural results about the size of the sparsest spanners of directed graphs.
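The defining condition is easy to check directly for small unweighted digraphs; the Python sketch below does so with breadth-first search. It illustrates the definition only, not the approximation algorithms of the paper, and the function names are hypothetical.

from collections import deque

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_k_spanner(nodes, adj_G, adj_H, k):
    # H is a k-spanner of G iff dist_H(u, v) <= k * dist_G(u, v) for all pairs.
    for u in nodes:
        dG, dH = bfs_distances(adj_G, u), bfs_distances(adj_H, u)
        if any(dH.get(v, float('inf')) > k * d for v, d in dG.items()):
            return False
    return True

# Dropping the shortcut edge 0->2 from G leaves a 2-spanner.
G = {0: [1, 2], 1: [2]}
H = {0: [1], 1: [2]}
print(is_k_spanner([0, 1, 2], G, H, 2))   # True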

Cite as

Piotr Berman, Sofya Raskhodnikova, and Ge Ruan. Finding Sparser Directed Spanners. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 424-435, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{berman_et_al:LIPIcs.FSTTCS.2010.424,
  author =	{Berman, Piotr and Raskhodnikova, Sofya and Ruan, Ge},
  title =	{{Finding Sparser Directed Spanners}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{424--435},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.424},
  URN =		{urn:nbn:de:0030-drops-28830},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.424},
  annote =	{Keywords: Approximation algorithms, directed graphs, spanners}
}
Document
Combinatorial Problems with Discounted Price Functions in Multi-agent Systems

Authors: Gagan Goel, Pushkar Tripathi, and Lei Wang


Abstract
Motivated by economic thought, a recent research agenda has suggested the algorithmic study of combinatorial optimization problems under functions which satisfy the property of decreasing marginal cost. A natural first step to model such functions is to consider submodular functions. However, many fundamental problems have turned out to be extremely hard to approximate under general submodular functions, thus indicating the need for a systematic study of subclasses of submodular functions that are practically motivated and yield good approximation ratios. In this paper, we introduce and study an important subclass of submodular functions, which we call discounted price functions. These functions are succinctly representable and generalize linear (additive) price functions. We study the following fundamental combinatorial optimization problems: edge cover, spanning tree, perfect matching and $s$-$t$ path. We give both upper and lower bounds for the approximability of these problems.

Cite as

Gagan Goel, Pushkar Tripathi, and Lei Wang. Combinatorial Problems with Discounted Price Functions in Multi-agent Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 436-446, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{goel_et_al:LIPIcs.FSTTCS.2010.436,
  author =	{Goel, Gagan and Tripathi, Pushkar and Wang, Lei},
  title =	{{Combinatorial Problems with Discounted Price Functions in Multi-agent Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{436--446},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.436},
  URN =		{urn:nbn:de:0030-drops-28847},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.436},
  annote =	{Keywords: Approximation algorithm, decreasing marginal cost, submodular function, discounted price function}
}
Document
Quasi-Random PCP and Hardness of 2-Catalog Segmentation

Authors: Rishi Saket


Abstract
We study the problem of 2-Catalog Segmentation which is one of the several variants of segmentation problems, introduced by Kleinberg et al., that naturally arise in data mining applications. Formally, given a bipartite graph $G = (U, V, E)$ and parameter $r$, the goal is to output two subsets $V_1, V_2 \subseteq V$, each of size $r$, to maximize $\sum_{u \in U} \max\{|E(u, V_1)|, |E(u, V_2)|\}$, where $E(u, V_i)$ is the set of edges between $u$ and the vertices in $V_i$ for $i = 1, 2$. There is a simple 2-approximation for this problem, and stronger approximation factors are known for the special case when $r = |V|/2$. On the other hand, it is known to be NP-hard, and Feige showed a constant factor hardness based on an assumption of average case hardness of random 3SAT. In this paper we show that there is no PTAS for $2$-Catalog Segmentation assuming that NP does not have subexponential time probabilistic algorithms, i.e. NP $\not\subseteq \cap_{\varepsilon > 0}$ BPTIME($2^{n^\varepsilon}$). In order to prove our result we strengthen the analysis of the Quasi-Random PCP of Khot, which we transform into an instance of $2$-Catalog Segmentation. Our improved analysis of the Quasi-Random PCP proves stronger properties of the PCP which might be useful in other applications.
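To make the objective concrete, the sketch below evaluates it and finds an optimal pair of size-$r$ subsets by exhaustive search. This is exponential in $|V|$ and only meant to illustrate the problem, not the reductions in the paper; the encoding and function names are assumptions.

from itertools import combinations

def objective(U, adj, V1, V2):
    # adj[u] is the set of neighbours of u in V; the value is
    # the sum over u of max(|E(u, V1)|, |E(u, V2)|).
    return sum(max(len(adj[u] & V1), len(adj[u] & V2)) for u in U)

def best_segmentation(U, V, adj, r):
    return max(((set(V1), set(V2))
                for V1 in combinations(V, r) for V2 in combinations(V, r)),
               key=lambda pair: objective(U, adj, pair[0], pair[1]))

U, V = ['u1', 'u2'], ['a', 'b', 'c']
adj = {'u1': {'a', 'b'}, 'u2': {'c'}}
print(best_segmentation(U, V, adj, 1))   # e.g. ({'a'}, {'c'}), objective value 2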

Cite as

Rishi Saket. Quasi-Random PCP and Hardness of 2-Catalog Segmentation. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 447-458, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{saket:LIPIcs.FSTTCS.2010.447,
  author =	{Saket, Rishi},
  title =	{{Quasi-Random PCP and Hardness of 2-Catalog Segmentation}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{447--458},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.447},
  URN =		{urn:nbn:de:0030-drops-28858},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.447},
  annote =	{Keywords: Hardness of Approximation, PCPs, Catalog Segmentation}
}
Document
Determining the Winner of a Dodgson Election is Hard

Authors: Michael Fellows, Bart M. P. Jansen, Daniel Lokshtanov, Frances A. Rosamond, and Saket Saurabh


Abstract
Computing the Dodgson Score of a candidate in an election is a hard computational problem, which has been analyzed using classical and parameterized analysis. In this paper we resolve two open problems regarding the parameterized complexity of DODGSON SCORE. We show that DODGSON SCORE parameterized by the target score value $k$ does not have a polynomial kernel unless the polynomial hierarchy collapses to the third level; this complements a result of Fellows, Rosamond and Slinko who obtain a non-trivial kernel of exponential size for a generalization of this problem. We also prove that DODGSON SCORE parameterized by the number $n$ of votes is hard for $W[1]$.
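As a reminder of the problem being parameterized: the Dodgson score of a candidate is the minimum number of swaps of adjacent candidates in the voters' rankings needed to make that candidate a Condorcet winner, i.e. one who beats every other candidate in a pairwise majority contest. The brute-force breadth-first search below computes it for tiny elections; it is exponential and serves only to illustrate the definition, in contrast with the hardness results above. The profile encoding and function names are assumptions.

from collections import deque

def is_condorcet_winner(profile, c):
    # c must beat every other candidate by a strict pairwise majority.
    candidates = profile[0]
    for d in candidates:
        if d == c:
            continue
        wins = sum(1 for vote in profile if vote.index(c) < vote.index(d))
        if 2 * wins <= len(profile):
            return False
    return True

def dodgson_score(profile, c):
    # profile is a tuple of votes, each vote a tuple ranking all candidates.
    seen, queue = {profile}, deque([(profile, 0)])
    while queue:
        prof, k = queue.popleft()
        if is_condorcet_winner(prof, c):
            return k
        for i, vote in enumerate(prof):          # swap two adjacent candidates
            for j in range(len(vote) - 1):
                new_vote = list(vote)
                new_vote[j], new_vote[j + 1] = new_vote[j + 1], new_vote[j]
                new_prof = prof[:i] + (tuple(new_vote),) + prof[i + 1:]
                if new_prof not in seen:
                    seen.add(new_prof)
                    queue.append((new_prof, k + 1))

profile = (('a', 'b', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b'))
print(dodgson_score(profile, 'a'))   # 1: one adjacent swap makes 'a' a Condorcet winner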

Cite as

Michael Fellows, Bart M. P. Jansen, Daniel Lokshtanov, Frances A. Rosamond, and Saket Saurabh. Determining the Winner of a Dodgson Election is Hard. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 459-468, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{fellows_et_al:LIPIcs.FSTTCS.2010.459,
  author =	{Fellows, Michael and Jansen, Bart M. P. and Lokshtanov, Daniel and Rosamond, Frances A. and Saurabh, Saket},
  title =	{{Determining the Winner of a Dodgson Election is Hard}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{459--468},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.459},
  URN =		{urn:nbn:de:0030-drops-28866},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.459},
  annote =	{Keywords: Dodgson Score, Parameterized Complexity, Kernelization Lower Bounds}
}
Document
Verifying Recursive Active Documents with Positive Data Tree Rewriting

Authors: Blaise Genest, Anca Muscholl, and Zhilin Wu


Abstract
This paper considers a tree-rewriting framework for modeling documents evolving through service calls. We focus on the automatic verification of properties of documents that may contain data from an infinite domain. We establish the boundaries of decidability: while verifying documents with recursive calls is undecidable, we obtain decidability as soon as either the documents are in the positive-bounded fragment (while calls are unrestricted), or there is a bound on the number of service calls (bounded model-checking of unrestricted documents). In the latter case, the complexity is NexpTime-complete. Our data tree-rewriting framework resembles Guarded Active XML, a platform handling XML repositories that evolve through web services. The model here captures the basic features of Guarded Active XML and extends it by node renaming and subtree deletion.

Cite as

Blaise Genest, Anca Muscholl, and Zhilin Wu. Verifying Recursive Active Documents with Positive Data Tree Rewriting. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 469-480, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{genest_et_al:LIPIcs.FSTTCS.2010.469,
  author =	{Genest, Blaise and Muscholl, Anca and Wu, Zhilin},
  title =	{{Verifying Recursive Active Documents with Positive Data Tree Rewriting}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{469--480},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.469},
  URN =		{urn:nbn:de:0030-drops-28877},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.469},
  annote =	{Keywords: Active documents, Guarded Active XML, verification, data trees, tree rewriting, well-structured systems.}
}
Document
Temporal Logics on Words with Multiple Data Values

Authors: Ahmet Kara, Thomas Schwentick, and Thomas Zeume


Abstract
The paper proposes and studies temporal logics for attributed words, that is, data words with a (finite) set of (attribute, value)-pairs at each position. It considers a basic logic which is a semantical fragment of the logic $\mathrm{LTL}^{\downarrow}_1$ of Demri and Lazic with operators for navigation into the future and the past. By reduction to the emptiness problem for data automata it is shown that this basic logic is decidable. Whereas the basic logic only allows navigation to positions where a fixed data value occurs, extensions are studied that also allow navigation to positions with different data values. Besides some undecidability results, it is shown that the extension by a certain UNTIL-operator with an inequality target condition remains decidable.

Cite as

Ahmet Kara, Thomas Schwentick, and Thomas Zeume. Temporal Logics on Words with Multiple Data Values. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 481-492, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{kara_et_al:LIPIcs.FSTTCS.2010.481,
  author =	{Kara, Ahmet and Schwentick, Thomas and Zeume, Thomas},
  title =	{{Temporal Logics on Words with Multiple Data Values}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{481--492},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.481},
  URN =		{urn:nbn:de:0030-drops-28883},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.481},
  annote =	{Keywords: Expressiveness, Decidability, Temporal Logic, Data words}
}
Document
First-Order Logic with Reachability Predicates on Infinite Systems

Authors: Stefan Schulz


Abstract
This paper focuses on first-order logic (FO) extended by reachability predicates such that the expressiveness and hence decidability properties lie between FO and monadic second-order logic (MSO): in FO(R) one can demand that a node is reachable from another by some sequence of edges, whereas in FO(Reg) a regular set of allowed edge sequences can be given additionally. We study FO(Reg) logic in infinite grid-like structures, which are important in verification. The decidability of logics between FO and MSO on those simple structures turns out to be sensitive to various parameters. Furthermore we introduce a transformation for infinite graphs called set-based unfolding which is based on an idea of Lohrey and Ondrusch. It allows the decidability of MSO to be transferred to FO(Reg) on the class of transformed structures. Finally we extend regular ground tree rewriting with a skeleton tree. We show that graphs specified in this way coincide with those expressible by vertex replacement and product operators. This allows decidability results of Colcombet for FO(R) to be extended to those graphs.

Cite as

Stefan Schulz. First-Order Logic with Reachability Predicates on Infinite Systems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 493-504, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{schulz:LIPIcs.FSTTCS.2010.493,
  author =	{Schulz, Stefan},
  title =	{{First-Order Logic with Reachability Predicates on Infinite Systems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{493--504},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.493},
  URN =		{urn:nbn:de:0030-drops-28896},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.493},
  annote =	{Keywords: First-Order Logic, Reachability, Infinite Grid, Structure Transformation, Unfolding, Ground Tree Rewriting, Vertex Replacement with Product}
}
Document
Generalized Mean-payoff and Energy Games

Authors: Krishnendu Chatterjee, Laurent Doyen, Thomas A. Henzinger, and Jean-François Raskin


Abstract
In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Generalized mean-payoff and energy games replace individual weights by tuples, and the limit average (resp. running sum) of each coordinate must be (resp. remain) nonnegative. These games have applications in the synthesis of resource-bounded processes with multiple resources. We prove the finite-memory determinacy of generalized energy games and show the inter-reducibility of generalized mean-payoff and energy games for finite-memory strategies. We also improve the computational complexity for solving both classes of games with finite-memory strategies: while the previously best known upper bound was EXPSPACE, and no lower bound was known, we give an optimal coNP-complete bound. For memoryless strategies, we show that the problem of deciding the existence of a winning strategy for the protagonist is NP-complete.
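For intuition, the generalized energy condition can be checked along any finite play prefix: starting from an (optional) initial credit vector, add the weight vectors one by one and require every coordinate to stay nonnegative. Below is a minimal Python sketch of this check, illustrative only and unrelated to the algorithms for solving the games; the function name is hypothetical.

def satisfies_energy(weights, initial_credit=None):
    # weights is a list of equal-length integer tuples (one per edge of the play).
    credit = list(initial_credit) if initial_credit else [0] * len(weights[0])
    for w in weights:
        credit = [c + x for c, x in zip(credit, w)]
        if any(c < 0 for c in credit):
            return False
    return True

play = [(1, -1), (-1, 2), (1, -1)]
print(satisfies_energy(play))           # False: the second coordinate goes negative
print(satisfies_energy(play, (0, 1)))   # True with initial credit (0, 1)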

Cite as

Krishnendu Chatterjee, Laurent Doyen, Thomas A. Henzinger, and Jean-François Raskin. Generalized Mean-payoff and Energy Games. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Leibniz International Proceedings in Informatics (LIPIcs), Volume 8, pp. 505-516, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.FSTTCS.2010.505,
  author =	{Chatterjee, Krishnendu and Doyen, Laurent and Henzinger, Thomas A. and Raskin, Jean-Fran\c{c}ois},
  title =	{{Generalized Mean-payoff and Energy Games}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =	{505--516},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-23-1},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{8},
  editor =	{Lodaya, Kamal and Mahajan, Meena},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2010.505},
  URN =		{urn:nbn:de:0030-drops-28484},
  doi =		{10.4230/LIPIcs.FSTTCS.2010.505},
  annote =	{Keywords: mean-payoff games, energy games, finite memory strategies, determinacy}
}
