LIPIcs, Volume 93

37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)



Event

FSTTCS 2017, December 11-15, 2017, Kanpur, India

Editors

Satya Lokam
R. Ramanujam

Publication Details

  • published at: 2018-02-12
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-055-2
  • DBLP: db/conf/fsttcs/fsttcs2017

Documents
Document
Complete Volume
LIPIcs, Volume 93, FSTTCS'17, Complete Volume

Authors: Satya Lokam and R. Ramanujam


Abstract
LIPIcs, Volume 93, FSTTCS'17, Complete Volume

Cite as

37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Proceedings{lokam_et_al:LIPIcs.FSTTCS.2017,
  title =	{{LIPIcs, Volume 93, FSTTCS'17, Complete Volume}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017},
  URN =		{urn:nbn:de:0030-drops-85391},
  doi =		{10.4230/LIPIcs.FSTTCS.2017},
  annote =	{Keywords: Complexity Measures and Classes, Analysis of Algorithms and Problem Complexity, Logics and Meanings of Programs}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Satya Lokam and R. Ramanujam


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 0:i-0:x, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{lokam_et_al:LIPIcs.FSTTCS.2017.0,
  author =	{Lokam, Satya and Ramanujam, R.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{0:i--0:x},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.0},
  URN =		{urn:nbn:de:0030-drops-83720},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers}
}
Document
Justified Representation in Multiwinner Voting: Axioms and Algorithms

Authors: Edith Elkind


Abstract
Suppose that a group of voters wants to select k ≥ 1 alternatives from a given set, and each voter indicates which of the alternatives are acceptable to her: the alternatives could be conference submissions, applicants for a scholarship or locations for a fast food chain. In this setting it is natural to require that the winning set represents the voters fairly, in the sense that large groups of voters with similar preferences have at least some of their approved alternatives in the winning set. We describe several ways to formalize this idea, and show how to use it to classify voting rules; surprisingly, two voting rules proposed in the XIXth century turn out to play an important role in our analysis.
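
As a concrete illustration of the representation requirement sketched above, here is a small brute-force check of the justified-representation condition in the standard approval-ballot formalization (a committee of size k fails the condition if some group of at least n/k voters shares an approved candidate yet none of them has any approved alternative in the committee). This is our own hypothetical sketch, not code from the paper; all function and variable names are invented.

    def satisfies_jr(approvals, committee, k):
        """approvals: one set of approved alternatives per voter; committee: set of size k.
        Returns True iff no group of >= n/k voters with a commonly approved candidate
        is left entirely unrepresented. Illustrative brute force only."""
        n = len(approvals)
        threshold = n / k
        unrepresented = [ballot for ballot in approvals if not (ballot & committee)]
        if len(unrepresented) < threshold:
            return True
        for candidate in set.union(*unrepresented):
            if sum(1 for ballot in unrepresented if candidate in ballot) >= threshold:
                return False
        return True

    ballots = [{"a", "b"}, {"a"}, {"c"}, {"d"}]
    print(satisfies_jr(ballots, {"a", "c"}, k=2))  # True
    print(satisfies_jr(ballots, {"c", "d"}, k=2))  # False: voters 1 and 2 share "a" but get nothing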

Cite as

Edith Elkind. Justified Representation in Multiwinner Voting: Axioms and Algorithms. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 1:1-1:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{elkind:LIPIcs.FSTTCS.2017.1,
  author =	{Elkind, Edith},
  title =	{{Justified Representation in Multiwinner Voting: Axioms and Algorithms}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{1:1--1:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.1},
  URN =		{urn:nbn:de:0030-drops-84179},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.1},
  annote =	{Keywords: voting, committee selection, axioms, PAV}
}
Document
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)

Authors: Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, and Aaron Sidford


Abstract
This work provides a simplified proof of the statistical minimax optimality of (iterate averaged) stochastic gradient descent (SGD), for the special case of least squares. This result is obtained by analyzing SGD as a stochastic process and by sharply characterizing the stationary covariance matrix of this process. The finite rate optimality characterization captures the constant factors and addresses model mis-specification.
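
For orientation, the recursion being analyzed is the familiar SGD update for least squares combined with iterate averaging; the display below uses our own notation and is only a sketch of the setting, not a statement from the paper.

    \[
      w_{t+1} \;=\; w_t - \eta\,\bigl(\langle w_t, x_t\rangle - y_t\bigr)\,x_t,
      \qquad
      \bar{w}_T \;=\; \frac{1}{T}\sum_{t=1}^{T} w_t,
    \]

    where (x_t, y_t) is the t-th sample and \eta a step size; the abstract's analysis views
    the iterates (w_t) as a stochastic process and characterizes its stationary covariance.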

Cite as

Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, and Aaron Sidford. A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares). In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 2:1-2:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{jain_et_al:LIPIcs.FSTTCS.2017.2,
  author =	{Jain, Prateek and Kakade, Sham M. and Kidambi, Rahul and Netrapalli, Praneeth and Pillutla, Venkata Krishna and Sidford, Aaron},
  title =	{{A Markov Chain Theory Approach to Characterizing the Minimax Optimality  of Stochastic Gradient Descent  (for Least Squares)}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{2:1--2:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.2},
  URN =		{urn:nbn:de:0030-drops-83941},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.2},
  annote =	{Keywords: Stochastic Gradient Descent, Minimax Optimality, Least Squares Regression}
}
Document
Automated Synthesis: a Distributed Viewpoint

Authors: Anca Muscholl


Abstract
Distributed algorithms are inherently hard to get right, and a major challenge is to come up with automated techniques for error detection and recovery. The talk will survey recent results on the synthesis of distributed monitors and controllers.

Cite as

Anca Muscholl. Automated Synthesis: a Distributed Viewpoint. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 3:1-3:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{muscholl:LIPIcs.FSTTCS.2017.3,
  author =	{Muscholl, Anca},
  title =	{{Automated Synthesis: a Distributed Viewpoint}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{3:1--3:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.3},
  URN =		{urn:nbn:de:0030-drops-84151},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.3},
  annote =	{Keywords: Synthesis, distributed program}
}
Document
Matrix Estimation, Latent Variable Model and Collaborative Filtering

Authors: Devavrat Shah


Abstract
Estimating a matrix based on partial, noisy observations is prevalent in a variety of modern applications, with recommendation systems being a prototypical example. The non-parametric latent variable model provides a canonical representation for such matrix data when the underlying distribution satisfies "exchangeability", with graphons and the stochastic block model being recent examples of interest. Collaborative filtering has been a successfully utilized heuristic in practice since the dawn of e-commerce. In this extended abstract, we argue that collaborative filtering (and its variants) solves matrix estimation for a generic latent variable model with near-optimal sample complexity.
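
The collaborative-filtering heuristic referred to above can be illustrated by a minimal user-user variant: predict a missing entry from the ratings of the most similar users. This is a generic textbook sketch under our own naming, not the algorithm analyzed in the paper.

    import numpy as np

    def predict_missing(R, u, i, num_neighbors=1):
        """Estimate the unobserved entry R[u, i] (np.nan marks missing entries) by
        averaging the item-i ratings of the users most similar to u on commonly
        rated items (cosine similarity). Purely illustrative."""
        observed = ~np.isnan(R)
        sims = []
        for v in range(R.shape[0]):
            if v == u or not observed[v, i]:
                continue
            common = observed[u] & observed[v]
            if not common.any():
                continue
            a, b = R[u, common], R[v, common]
            sims.append((a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12), v))
        top = sorted(sims, reverse=True)[:num_neighbors]
        return np.mean([R[v, i] for _, v in top]) if top else np.nan

    R = np.array([[5, 4, np.nan],
                  [5, 4, 2],
                  [1, 1, 5]], dtype=float)
    print(predict_missing(R, u=0, i=2))  # 2.0: the nearest neighbour (row 1) rated item 2 with a 2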

Cite as

Devavrat Shah. Matrix Estimation, Latent Variable Model and Collaborative Filtering. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 4:1-4:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{shah:LIPIcs.FSTTCS.2017.4,
  author =	{Shah, Devavrat},
  title =	{{Matrix Estimation, Latent Variable Model and Collaborative Filtering}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{4:1--4:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.4},
  URN =		{urn:nbn:de:0030-drops-84193},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.4},
  annote =	{Keywords: Matrix Estimation, Graphon Estimation, Collaborative Filtering}
}
Document
Some Open Problems in Information-Theoretic Cryptography

Authors: Vinod Vaikuntanathan


Abstract
Information-theoretic cryptography is full of open problems with a communication-complexity flavor. We will describe several such problems that arise in the study of private information retrieval, secure multi-party computation, secret sharing, private simultaneous messages (PSM) and conditional disclosure of secrets (CDS). In all these cases, there is a huge (exponential) gap between the best known upper and lower bounds. We will also describe the connections between these problems, some old and some new.

Cite as

Vinod Vaikuntanathan. Some Open Problems in Information-Theoretic Cryptography. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 5:1-5:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{vaikuntanathan:LIPIcs.FSTTCS.2017.5,
  author =	{Vaikuntanathan, Vinod},
  title =	{{Some Open Problems in Information-Theoretic Cryptography}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{5:1--5:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.5},
  URN =		{urn:nbn:de:0030-drops-84188},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.5},
  annote =	{Keywords: Cryptography, Information-Theoretic Security, Private Information Retrieval, Secret Sharing, Multiparty Computation}
}
Document
Backward Deterministic Büchi Automata on Infinite Words

Authors: Thomas Wilke


Abstract
This paper describes how backward deterministic Büchi automata are defined, what their main features are, and how they can be applied to solve problems in temporal logic.

Cite as

Thomas Wilke. Backward Deterministic Büchi Automata on Infinite Words. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 6:1-6:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{wilke:LIPIcs.FSTTCS.2017.6,
  author =	{Wilke, Thomas},
  title =	{{Backward Deterministic B\"{u}chi Automata on Infinite Words}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{6:1--6:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.6},
  URN =		{urn:nbn:de:0030-drops-84164},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.6},
  annote =	{Keywords: finite automata, infinite words, determinism, backward automata, temporal logic, separated formulas}
}
Document
Monitoring for Silent Actions

Authors: Luca Aceto, Antonis Achilleos, Adrian Francalanza, and Anna Ingólfsdóttir


Abstract
Silent actions are an essential mechanism for system modelling and specification. They are used to abstractly report the occurrence of computation steps without divulging their precise details, thereby enabling the description of important aspects such as the branching structure of a system. Yet, their use rarely features in specification logics used in runtime verification. We study monitorability aspects of a branching-time logic that employs silent actions, identifying which formulas are monitorable for a number of instrumentation setups. We also consider defective instrumentation setups that imprecisely report silent events, and establish monitorability results for tolerating these imperfections.

Cite as

Luca Aceto, Antonis Achilleos, Adrian Francalanza, and Anna Ingólfsdóttir. Monitoring for Silent Actions. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{aceto_et_al:LIPIcs.FSTTCS.2017.7,
  author =	{Aceto, Luca and Achilleos, Antonis and Francalanza, Adrian and Ing\'{o}lfsd\'{o}ttir, Anna},
  title =	{{Monitoring for Silent Actions}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.7},
  URN =		{urn:nbn:de:0030-drops-84023},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.7},
  annote =	{Keywords: Runtime Verification, Monitorability, Hennessy-Milner Logic with Recursion, Silent Actions}
}
Document
Maintaining Reeb Graphs of Triangulated 2-Manifolds

Authors: Pankaj K. Agarwal, Kyle Fox, and Abhinandan Nath


Abstract
Let M be a triangulated, orientable 2-manifold of genus g without boundary, and let h be a height function over M that is linear within each triangle. We present a kinetic data structure (KDS) for maintaining the Reeb graph R of h as the heights of M's vertices vary continuously with time. Assuming the heights of two vertices of M become equal only O(1) times, the KDS processes O((k + g) n polylog n) events; n is the number of vertices in M, and k is the number of external events which change the combinatorial structure of R. Each event is processed in O(log^2 n) time, and the total size of our KDS is O(gn). The KDS can be extended to maintain an augmented Reeb graph as well.

Cite as

Pankaj K. Agarwal, Kyle Fox, and Abhinandan Nath. Maintaining Reeb Graphs of Triangulated 2-Manifolds. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.FSTTCS.2017.8,
  author =	{Agarwal, Pankaj K. and Fox, Kyle and Nath, Abhinandan},
  title =	{{Maintaining Reeb Graphs of Triangulated 2-Manifolds}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.8},
  URN =		{urn:nbn:de:0030-drops-84043},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.8},
  annote =	{Keywords: Reeb graphs, 2-manifolds, topological graph theory}
}
Document
On the Parameterized Complexity of Simultaneous Deletion Problems

Authors: Akanksha Agrawal, R. Krithika, Daniel Lokshtanov, Amer E. Mouawad, and M. S. Ramanujan


Abstract
For a family of graphs F, an n-vertex graph G, and a positive integer k, the F-Deletion problem asks whether we can delete at most k vertices from G to obtain a graph in F. F-Deletion generalizes many classical graph problems such as Vertex Cover, Feedback Vertex Set, and Odd Cycle Transversal. A (multi) graph G = (V, \cup_{i=1}^{\alpha} E_{i}), where the edge set of G is partitioned into \alpha color classes, is called an \alpha-edge-colored graph. A natural extension of the F-Deletion problem to edge-colored graphs is the Simultaneous (F_1, \ldots, F_\alpha)-Deletion problem. In the latter problem, we are given an \alpha-edge-colored graph G and the goal is to find a set S of at most k vertices such that each graph G_i - S, where G_i = (V, E_i) and 1 \leq i \leq \alpha, is in F_i. Recently, a subset of the authors considered the aforementioned problem with F_1 = \ldots = F_\alpha being the family of all forests. They showed that the problem is fixed-parameter tractable when parameterized by k and \alpha, and can be solved in O(2^{O(\alpha k)}n^{O(1)}) time. In this work, we initiate the investigation of the complexity of Simultaneous (F_1, \ldots, F_\alpha)-Deletion with different families of graphs. In the process, we obtain a complete characterization of the parameterized complexity of this problem when one or more of the F_i's is the class of bipartite graphs and the rest (if any) are forests. We show that if F_1 is the family of all bipartite graphs and each of F_2 = F_3 = \ldots = F_\alpha is the family of all forests, then the problem is fixed-parameter tractable parameterized by k and \alpha. However, even when F_1 and F_2 are both the family of all bipartite graphs, the Simultaneous (F_1, F_2)-Deletion problem itself is already W[1]-hard.

Cite as

Akanksha Agrawal, R. Krithika, Daniel Lokshtanov, Amer E. Mouawad, and M. S. Ramanujan. On the Parameterized Complexity of Simultaneous Deletion Problems. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.FSTTCS.2017.9,
  author =	{Agrawal, Akanksha and Krithika, R. and Lokshtanov, Daniel and Mouawad, Amer E. and Ramanujan, M. S.},
  title =	{{On the Parameterized Complexity of Simultaneous Deletion Problems}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.9},
  URN =		{urn:nbn:de:0030-drops-84128},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.9},
  annote =	{Keywords: parameterized complexity, feedback vertex set, odd cycle transversal, edge-colored graphs, simultaneous deletion}
}
Document
A Composition Theorem for Randomized Query Complexity

Authors: Anurag Anshu, Dmitry Gavinsky, Rahul Jain, Srijita Kundu, Troy Lee, Priyanka Mukhopadhyay, Miklos Santha, and Swagato Sanyal


Abstract
Let the randomized query complexity of a relation for error probability epsilon be denoted by R_epsilon(). We prove that for any relation f contained in {0,1}^n times R and Boolean function g:{0,1}^m -> {0,1}, R_{1/3}(f o g^n) = Omega(R_{4/9}(f).R_{1/2-1/n^4}(g)), where f o g^n is the relation obtained by composing f and g. We also show using an XOR lemma that R_{1/3}(f o (g^{xor}_{O(log n)})^n) = Omega(log n . R_{4/9}(f) . R_{1/3}(g)), where g^{xor}_{O(log n)} is the function obtained by composing the XOR function on O(log n) bits and g.
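
Typeset for readability, the two bounds stated above are (same quantities as in the abstract; only the rendering is ours, with g^{\oplus} standing for the abstract's g^{xor}):

    \[
      R_{1/3}(f \circ g^n) \;=\; \Omega\bigl(R_{4/9}(f)\cdot R_{1/2-1/n^4}(g)\bigr),
      \qquad
      R_{1/3}\bigl(f \circ (g^{\oplus}_{O(\log n)})^{n}\bigr) \;=\;
      \Omega\bigl(\log n \cdot R_{4/9}(f)\cdot R_{1/3}(g)\bigr).
    \]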

Cite as

Anurag Anshu, Dmitry Gavinsky, Rahul Jain, Srijita Kundu, Troy Lee, Priyanka Mukhopadhyay, Miklos Santha, and Swagato Sanyal. A Composition Theorem for Randomized Query Complexity. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 10:1-10:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{anshu_et_al:LIPIcs.FSTTCS.2017.10,
  author =	{Anshu, Anurag and Gavinsky, Dmitry and Jain, Rahul and Kundu, Srijita and Lee, Troy and Mukhopadhyay, Priyanka and Santha, Miklos and Sanyal, Swagato},
  title =	{{A Composition Theorem for Randomized Query Complexity}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.10},
  URN =		{urn:nbn:de:0030-drops-83967},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.10},
  annote =	{Keywords: Query algorithms and complexity, Decision trees, Composition theorem, XOR lemma, Hardness amplification}
}
Document
Verification of Asynchronous Programs with Nested Locks

Authors: Mohamed Faouzi Atig, Ahmed Bouajjani, K. Narayan Kumar, and Prakash Saivasan


Abstract
In this paper, we consider asynchronous programs consisting of multiple recursive threads running in parallel. Each of the threads is equipped with a multi-set. The threads can create tasks and post them onto the multi-sets or read a task from their own multi-set. In addition, they can synchronise through a finite set of locks. We show that the reachability problem for this class of asynchronous programs is undecidable even under the nested locking policy. We then show that the reachability problem becomes decidable (EXPSPACE-complete) when the locks are not allowed to be held across tasks. Finally, we show that the problem is NP-complete when, in addition to the previous restrictions, threads always read tasks from the same state.

Cite as

Mohamed Faouzi Atig, Ahmed Bouajjani, K. Narayan Kumar, and Prakash Saivasan. Verification of Asynchronous Programs with Nested Locks. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{atig_et_al:LIPIcs.FSTTCS.2017.11,
  author =	{Atig, Mohamed Faouzi and Bouajjani, Ahmed and Narayan Kumar, K. and Saivasan, Prakash},
  title =	{{Verification of Asynchronous Programs with Nested Locks}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.11},
  URN =		{urn:nbn:de:0030-drops-84106},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.11},
  annote =	{Keywords: asynchronous programs, locks, concurrency, multi-set pushdown systems, multi-threaded programs, reachability, model checking, verification, nested locking}
}
Document
Modulo Counting on Words and Trees

Authors: Bartosz Bednarczyk and Witold Charatonik


Abstract
We consider the satisfiability problem for the two-variable fragment of the first-order logic extended with modulo counting quantifiers and interpreted over finite words or trees. We prove a small-model property of this logic, which gives a technique for deciding the satisfiability problem. In the case of words this gives a new proof of EXPSPACE upper bound, and in the case of trees it gives a 2EXPTIME algorithm. This algorithm is optimal: we prove a matching lower bound by a generic reduction from alternating Turing machines working in exponential space; the reduction involves a development of a new version of tiling games.
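
To make the quantifiers concrete: under one standard convention, a modulo counting quantifier \exists^{r \bmod q} x.\,\varphi(x) states that the number of elements x satisfying \varphi is congruent to r modulo q. For instance, over finite words,

    \[
      \exists^{0 \bmod 2} x.\ a(x)
    \]

    holds exactly when the number of positions carrying the letter a is even. This is generic background on the quantifiers under a common convention, not a claim taken from the paper.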

Cite as

Bartosz Bednarczyk and Witold Charatonik. Modulo Counting on Words and Trees. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bednarczyk_et_al:LIPIcs.FSTTCS.2017.12,
  author =	{Bednarczyk, Bartosz and Charatonik, Witold},
  title =	{{Modulo Counting on Words and Trees}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.12},
  URN =		{urn:nbn:de:0030-drops-84083},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.12},
  annote =	{Keywords: satisfiability, trees, words, two-variable logic, modulo quantifiers}
}
Document
Probabilistic Disclosure: Maximisation vs. Minimisation

Authors: Béatrice Bérard, Serge Haddad, and Engel Lefaucheux


Abstract
We consider opacity questions where an observation function provides to an external attacker a view of the states along executions and secret executions are those visiting some state from a fixed subset. Disclosure occurs when the observer can deduce from a finite observation that the execution is secret, the epsilon-disclosure variant corresponding to the execution being secret with probability greater than 1 - epsilon. In a probabilistic and nondeterministic setting, where an internal agent can choose between actions, there are two points of view, depending on the status of this agent: the successive choices can either help the attacker trying to disclose the secret, if the system has been corrupted, or they can prevent disclosure as much as possible if these choices are part of the system design. In the former situation, corresponding to a worst case, the disclosure value is the supremum over the strategies of the probability to disclose the secret (maximisation), whereas in the latter case, the disclosure is the infimum (minimisation). We address quantitative problems (comparing the optimal value with a threshold) and qualitative ones (when the threshold is zero or one) related to both forms of disclosure for a fixed or finite horizon. For all problems, we characterise their decidability status and their complexity. We discover a surprising asymmetry: on the one hand, optimal strategies may be chosen among deterministic ones in maximisation problems, while this is not the case for minimisation. On the other hand, for the questions addressed here, more minimisation problems than maximisation ones are decidable.

Cite as

Béatrice Bérard, Serge Haddad, and Engel Lefaucheux. Probabilistic Disclosure: Maximisation vs. Minimisation. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{berard_et_al:LIPIcs.FSTTCS.2017.13,
  author =	{B\'{e}rard, B\'{e}atrice and Haddad, Serge and Lefaucheux, Engel},
  title =	{{Probabilistic Disclosure: Maximisation vs. Minimisation}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.13},
  URN =		{urn:nbn:de:0030-drops-83844},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.13},
  annote =	{Keywords: Partially observed systems, Opacity, Markov chain, Markov decision process}
}
Document
Reasons for Hardness in QBF Proof Systems

Authors: Olaf Beyersdorff, Luke Hinde, and Ján Pich


Abstract
We aim to understand inherent reasons for lower bounds for QBF proof systems, and revisit and compare two previous approaches in this direction. The first of these relates size lower bounds for strong QBF Frege systems to circuit lower bounds via strategy extraction (Beyersdorff & Pich, LICS'16). Here we show a refined version of strategy extraction and thereby for any QBF proof system obtain a trichotomy for hardness: (1) via circuit lower bounds, (2) via propositional Resolution lower bounds, or (3) `genuine' QBF lower bounds. The second approach tries to explain QBF lower bounds through quantifier alternations in a system called relaxing QU-Res (Chen, ICALP'16). We prove a strong lower bound for relaxing QU-Res, which also exhibits significant shortcomings of that model. Prompted by this we propose an alternative, improved version, allowing more flexible oracle queries in proofs. We show that lower bounds in our new model correspond to the trichotomy obtained via strategy extraction.

Cite as

Olaf Beyersdorff, Luke Hinde, and Ján Pich. Reasons for Hardness in QBF Proof Systems. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{beyersdorff_et_al:LIPIcs.FSTTCS.2017.14,
  author =	{Beyersdorff, Olaf and Hinde, Luke and Pich, J\'{a}n},
  title =	{{Reasons for Hardness in QBF Proof Systems}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{14:1--14:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.14},
  URN =		{urn:nbn:de:0030-drops-83824},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.14},
  annote =	{Keywords: proof complexity, quantified Boolean formulas, resolution, lower bounds}
}
Document
An Improved Dictatorship Test with Perfect Completeness

Authors: Amey Bhangale, Subhash Khot, and Devanathan Thiruvenkatachari


Abstract
A Boolean function f:{0,1}^n -> {0,1} is called a dictator if it depends on exactly one variable, i.e., f(x_1, x_2, ..., x_n) = x_i for some i in [n]. In this work, we study a k-query dictatorship test. Dictatorship tests are central in proving many hardness results for constraint satisfaction problems. The dictatorship test is said to have perfect completeness if it accepts any dictator function. The soundness of a test is the maximum probability with which it accepts any function far from a dictator. Our main result is a k-query dictatorship test with perfect completeness and soundness (2k + 1)/(2^k), where k is of the form 2^t - 1 for any integer t > 2. This improves upon the result of [Tamaki-Yoshida, Random Structures & Algorithms, 2015], which gave a dictatorship test with soundness (2k + 3)/(2^k).
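
As a tiny, self-contained illustration of the object being tested (our own sketch, not the paper's test), the following checks from a truth table whether a Boolean function is a dictator, i.e. whether f(x) = x_i for some coordinate i.

    from itertools import product

    def dictator_index(f, n):
        """Return the coordinate i with f(x) == x[i] for all x in {0,1}^n, else None."""
        inputs = list(product((0, 1), repeat=n))
        for i in range(n):
            if all(f(x) == x[i] for x in inputs):
                return i
        return None

    print(dictator_index(lambda x: x[1], 3))         # 1
    print(dictator_index(lambda x: x[0] ^ x[1], 3))  # None: parity depends on two variables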

Cite as

Amey Bhangale, Subhash Khot, and Devanathan Thiruvenkatachari. An Improved Dictatorship Test with Perfect Completeness. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 15:1-15:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bhangale_et_al:LIPIcs.FSTTCS.2017.15,
  author =	{Bhangale, Amey and Khot, Subhash and Thiruvenkatachari, Devanathan},
  title =	{{An Improved Dictatorship Test with Perfect Completeness}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{15:1--15:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.15},
  URN =		{urn:nbn:de:0030-drops-83854},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.15},
  annote =	{Keywords: Property Testing, Dictatorship Test, Fourier Analysis}
}
Document
Forward Analysis for WSTS, Part III: Karp-Miller Trees

Authors: Michael Blondin, Alain Finkel, and Jean Goubault-Larrecq


Abstract
This paper is a sequel of "Forward Analysis for WSTS, Part I: Completions" [STACS 2009, LZI Intl. Proc. in Informatics 3, 433–444] and "Forward Analysis for WSTS, Part II: Complete WSTS" [Logical Methods in Computer Science 8(3), 2012]. In these two papers, we provided a framework to conduct forward reachability analyses of WSTS, using finite representations of downwards-closed sets. We further develop this framework to obtain a generic Karp-Miller algorithm for the new class of very-WSTS. This allows us to show that coverability sets of very-WSTS can be computed as their finite ideal decompositions. Under natural assumptions on positive sequences, we also show that LTL model checking for very-WSTS is decidable. The termination of our procedure rests on a new notion of acceleration levels, which we study. We characterize those domains that allow for only finitely many accelerations, based on ordinal ranks.

Cite as

Michael Blondin, Alain Finkel, and Jean Goubault-Larrecq. Forward Analysis for WSTS, Part III: Karp-Miller Trees. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{blondin_et_al:LIPIcs.FSTTCS.2017.16,
  author =	{Blondin, Michael and Finkel, Alain and Goubault-Larrecq, Jean},
  title =	{{Forward Analysis for WSTS, Part III: Karp-Miller Trees}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{16:1--16:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.16},
  URN =		{urn:nbn:de:0030-drops-84033},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.16},
  annote =	{Keywords: WSTS, model checking, coverability, Karp-Miller algorithm, ideals}
}
Document
Rabin vs. Streett Automata

Authors: Udi Boker


Abstract
The Rabin and Streett acceptance conditions are dual. Accordingly, deterministic Rabin and Streett automata are dual. Yet, when adding nondeterminism, the picture changes dramatically. In fact, the state blowup involved in translations between Rabin and Streett automata is a longstanding open problem, having an exponential gap between the known lower and upper bounds. We resolve the problem, showing that the translation of Streett to Rabin automata involves a state blowup in Theta(n^2), whereas in the other direction, the translations of both deterministic and nondeterministic Rabin automata to nondeterministic Streett automata involve a state blowup in 2^{Theta(n)}. Analyzing this substantial difference between the two directions, we conclude that when studying translations between automata, one should not only consider the state blowup, but also the size blowup, where the latter takes into account all of the automaton elements. More precisely, the size of an automaton is defined to be the maximum of the alphabet length, the number of states, the number of transitions, and the acceptance condition length (index). Indeed, size-wise, the results are opposite. That is, the translation of Rabin to Streett involves a size blowup in Theta(n^2) and of Streett to Rabin in 2^{Theta(n)}. The core difference between state blowup and size blowup stems from the tradeoff between the index and the number of states. (Recall that the index of Rabin and Streett automata might be exponential in the number of states.) We continue with resolving the open problem of translating deterministic Rabin and Streett automata to the weaker types of deterministic co-Büchi and Büchi automata, respectively. We show that the state blowup involved in these translations, when possible, is in 2^{Theta(n)}, whereas the size blowup is in Theta(n^2).
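
For reference, with one common convention the dual pair of conditions reads as follows, where Inf(rho) is the set of states a run rho visits infinitely often and {(E_i, F_i)} are the acceptance pairs; this is standard background notation rather than material from the paper.

    \[
      \text{Rabin: } \exists i.\; \mathrm{Inf}(\rho)\cap E_i = \emptyset \,\wedge\, \mathrm{Inf}(\rho)\cap F_i \neq \emptyset,
      \qquad
      \text{Streett: } \forall i.\; \mathrm{Inf}(\rho)\cap F_i \neq \emptyset \,\Rightarrow\, \mathrm{Inf}(\rho)\cap E_i \neq \emptyset.
    \]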

Cite as

Udi Boker. Rabin vs. Streett Automata. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{boker:LIPIcs.FSTTCS.2017.17,
  author =	{Boker, Udi},
  title =	{{Rabin vs. Streett Automata}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.17},
  URN =		{urn:nbn:de:0030-drops-83782},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.17},
  annote =	{Keywords: Finite automata on infinite words, translations, automata size, state space}
}
Document
How Deterministic are Good-For-Games Automata?

Authors: Udi Boker, Orna Kupferman, and Michal Skrzypczak


Abstract
In good for games (GFG) automata, it is possible to resolve nondeterminism in a way that only depends on the past and still accepts all the words in the language. The motivation for GFG automata comes from their adequacy for games and synthesis, wherein general nondeterminism is inappropriate. We continue the ongoing effort of studying the power of nondeterminism in GFG automata. Initial indications have hinted that every GFG automaton embodies a deterministic one. Today we know that this is not the case, and in fact GFG automata may be exponentially more succinct than deterministic ones. We focus on the typeness question, namely the question of whether a GFG automaton with a certain acceptance condition has an equivalent GFG automaton with a weaker acceptance condition on the same structure. Beyond the theoretical interest in studying typeness, its existence implies efficient translations among different acceptance conditions. This practical issue is of special interest in the context of games, where the Büchi and co-Büchi conditions admit memoryless strategies for both players. Typeness is known to hold for deterministic automata and not to hold for general nondeterministic automata. We show that GFG automata enjoy the benefits of typeness, similarly to the case of deterministic automata. In particular, when Rabin or Streett GFG automata have equivalent Büchi or co-Büchi GFG automata, respectively, then such equivalent automata can be defined on a substructure of the original automata. Using our typeness results, we further study the place of GFG automata in between deterministic and nondeterministic ones. Specifically, considering automata complementation, we show that GFG automata lean toward nondeterministic ones, admitting an exponential state blow-up in the complementation of a Streett automaton into a Rabin automaton, as opposed to the constant blow-up in the deterministic case.

Cite as

Udi Boker, Orna Kupferman, and Michal Skrzypczak. How Deterministic are Good-For-Games Automata?. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{boker_et_al:LIPIcs.FSTTCS.2017.18,
  author =	{Boker, Udi and Kupferman, Orna and Skrzypczak, Michal},
  title =	{{How Deterministic are Good-For-Games Automata?}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.18},
  URN =		{urn:nbn:de:0030-drops-83776},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.18},
  annote =	{Keywords: finite automata on infinite words, determinism, good-for-games}
}
Document
Popular Matchings with Multiple Partners

Authors: Florian Brandl and Telikepalli Kavitha


Abstract
Our input is a bipartite graph G=(A\cup B,E) where each vertex in A\cup B has a preference list strictly ranking its neighbors. The vertices in A and in B are called students and courses, respectively. Each student a seeks to be matched to cap(a)\geq 1 many courses while each course b seeks cap(b)\geq 1 many students to be matched to it. The Gale-Shapley algorithm computes a pairwise-stable matching (one with no blocking edge) in G in linear time. We consider the problem of computing a popular matching in G - a matching M is popular if M cannot lose an election to any matching where vertices cast votes for one matching versus another. Our main contribution is to show that a max-size popular matching in G can be computed by the 2-level Gale-Shapley algorithm in linear time. This is an extension of the classical Gale-Shapley algorithm and we prove its correctness via linear programming.
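
For readers unfamiliar with the classical one-to-one Gale-Shapley algorithm mentioned above, here is a compact sketch; the paper's 2-level, capacitated variant is more involved, and the data layout and names below are our own illustrative choices (complete preference lists are assumed).

    def gale_shapley(student_prefs, course_prefs):
        """Students propose in preference order; each course keeps the best proposer
        seen so far. Returns a dict mapping course -> matched student."""
        rank = {c: {s: r for r, s in enumerate(p)} for c, p in course_prefs.items()}
        free = list(student_prefs)          # students still unmatched
        next_choice = {s: 0 for s in student_prefs}
        match = {}                          # course -> student
        while free:
            s = free.pop()
            c = student_prefs[s][next_choice[s]]
            next_choice[s] += 1
            if c not in match:
                match[c] = s
            elif rank[c][s] < rank[c][match[c]]:   # course prefers the new proposer
                free.append(match[c])
                match[c] = s
            else:
                free.append(s)
        return match

    students = {"s1": ["c1", "c2"], "s2": ["c1", "c2"]}
    courses = {"c1": ["s2", "s1"], "c2": ["s1", "s2"]}
    print(gale_shapley(students, courses))  # {'c1': 's2', 'c2': 's1'}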

Cite as

Florian Brandl and Telikepalli Kavitha. Popular Matchings with Multiple Partners. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 19:1-19:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{brandl_et_al:LIPIcs.FSTTCS.2017.19,
  author =	{Brandl, Florian and Kavitha, Telikepalli},
  title =	{{Popular Matchings with Multiple Partners}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{19:1--19:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.19},
  URN =		{urn:nbn:de:0030-drops-83765},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.19},
  annote =	{Keywords: Bipartite graphs, Linear programming duality, Gale-Shapley algorithm}
}
Document
Non-Adaptive Data Structure Bounds for Dynamic Predecessor

Authors: Joseph Boninger, Joshua Brody, and Owen Kephart


Abstract
In this work, we continue the examination of the role non-adaptivity plays in maintaining dynamic data structures, initiated by Brody and Larsen. We consider non-adaptive data structures for predecessor search in the w-bit cell probe model. In this problem, the goal is to dynamically maintain a subset T of up to n elements from {1, ..., m}, while supporting insertions, deletions, and a predecessor query Pred(x), which returns the largest element in T that is less than or equal to x. Predecessor search is one of the most well-studied data structure problems. For this problem, using non-adaptivity comes at a steep price. We provide exponential cell probe complexity separations between (i) adaptive and non-adaptive data structures and (ii) non-adaptive and memoryless data structures for predecessor search. A classic data structure of van Emde Boas solves dynamic predecessor search in log(log(m)) probes; this data structure is adaptive. For dynamic data structures which make non-adaptive updates, we show the cell probe complexity is O(log(m)/log(w/log(m))). We also give a nearly-matching Omega(log(m)/log(w)) lower bound. We also give an m/w lower bound for memoryless data structures. Our lower bound technique is tailored to non-adaptive (as opposed to memoryless) updates and might be of independent interest.
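
To fix the interface of the problem, here is a trivial comparison-based (and fully adaptive) baseline for the three operations; it has nothing to do with the cell-probe structures studied in the paper, and the class name is our own.

    import bisect

    class PredecessorSet:
        """Sorted-list baseline: Pred(x) returns the largest stored key <= x, or None."""
        def __init__(self):
            self.keys = []
        def insert(self, x):
            i = bisect.bisect_left(self.keys, x)
            if i == len(self.keys) or self.keys[i] != x:
                self.keys.insert(i, x)
        def delete(self, x):
            i = bisect.bisect_left(self.keys, x)
            if i < len(self.keys) and self.keys[i] == x:
                self.keys.pop(i)
        def pred(self, x):
            i = bisect.bisect_right(self.keys, x)
            return self.keys[i - 1] if i else None

    T = PredecessorSet()
    for v in (3, 7, 11):
        T.insert(v)
    print(T.pred(9), T.pred(2))  # 7 None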

Cite as

Joseph Boninger, Joshua Brody, and Owen Kephart. Non-Adaptive Data Structure Bounds for Dynamic Predecessor. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 20:1-20:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{boninger_et_al:LIPIcs.FSTTCS.2017.20,
  author =	{Boninger, Joseph and Brody, Joshua and Kephart, Owen},
  title =	{{Non-Adaptive Data Structure Bounds for Dynamic Predecessor}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{20:1--20:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.20},
  URN =		{urn:nbn:de:0030-drops-83892},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.20},
  annote =	{Keywords: dynamic data structures, lower bounds, predecessor search, non-adaptivity}
}
Document
On Colourability of Polygon Visibility Graphs

Authors: Onur Cagirici, Petr Hlinený, and Bodhayan Roy


Abstract
We study the problem of colouring the visibility graphs of polygons. In particular, we provide a polynomial-time algorithm for 4-colouring of polygon visibility graphs, and prove that the 6-colourability question is already NP-complete for them.

Cite as

Onur Cagirici, Petr Hlinený, and Bodhayan Roy. On Colourability of Polygon Visibility Graphs. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{cagirici_et_al:LIPIcs.FSTTCS.2017.21,
  author =	{Cagirici, Onur and Hlinen\'{y}, Petr and Roy, Bodhayan},
  title =	{{On Colourability of Polygon Visibility Graphs}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.21},
  URN =		{urn:nbn:de:0030-drops-83814},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.21},
  annote =	{Keywords: polygon visibility graph, graph coloring, NP-completeness}
}
Document
Vertex Deletion Problems on Chordal Graphs

Authors: Yixin Cao, Yuping Ke, Yota Otachi, and Jie You


Abstract
Containing many classic optimization problems, the family of vertex deletion problems has an important position in algorithm and complexity study. The celebrated result of Lewis and Yannakakis gives a complete dichotomy of their complexity. It however has nothing to say about the case when the input graph is also special. This paper initiates a systematic study of vertex deletion problems from one subclass of chordal graphs to another. We give polynomial-time algorithms or proofs of NP-completeness for most of the problems. In particular, we show that the vertex deletion problem from chordal graphs to interval graphs is NP-complete.

Cite as

Yixin Cao, Yuping Ke, Yota Otachi, and Jie You. Vertex Deletion Problems on Chordal Graphs. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{cao_et_al:LIPIcs.FSTTCS.2017.22,
  author =	{Cao, Yixin and Ke, Yuping and Otachi, Yota and You, Jie},
  title =	{{Vertex Deletion Problems on Chordal Graphs}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.22},
  URN =		{urn:nbn:de:0030-drops-83799},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.22},
  annote =	{Keywords: vertex deletion problem, maximum subgraph, chordal graph, (unit) interval graph, split graph, hereditary property, NP-complete, polynomial-time algorithm}
}
Document
A Lifting Theorem with Applications to Symmetric Functions

Authors: Arkadev Chattopadhyay and Nikhil S. Mande


Abstract
We use a technique of “lifting” functions, introduced by Krause and Pudlak [Theor. Comput. Sci., 1997], to amplify degree-hardness measures of a function to corresponding monomial-hardness properties of the lifted function. We then show that any symmetric function F projects onto a “lift” of another suitable symmetric function f. These two key results enable us to prove several results on the complexity of symmetric functions in various models, as given below:
1. We provide a characterization of the approximate spectral norm of symmetric functions in terms of the spectrum of the underlying predicate, affirming a conjecture of Ada et al. [APPROX-RANDOM, 2012] which has several consequences.
2. We characterize symmetric functions computable by quasi-polynomial sized Threshold of Parity circuits.
3. We show that the approximate spectral norm of a symmetric function f characterizes the (quantum and classical) bounded error communication complexity of f o XOR.
4. Finally, we characterize the weakly-unbounded error communication complexity of symmetric XOR functions, resolving a weak form of a conjecture by Shi and Zhang [Quantum Information & Computation, 2009].

Cite as

Arkadev Chattopadhyay and Nikhil S. Mande. A Lifting Theorem with Applications to Symmetric Functions. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.FSTTCS.2017.23,
  author =	{Chattopadhyay, Arkadev and Mande, Nikhil S.},
  title =	{{A Lifting Theorem with Applications to Symmetric Functions}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.23},
  URN =		{urn:nbn:de:0030-drops-83839},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.23},
  annote =	{Keywords: Symmetric functions, lifting, circuit complexity, communication complexity}
}
Document
Local Patterns

Authors: Joel D. Day, Pamela Fleischmann, Florin Manea, and Dirk Nowotka


Abstract
A pattern is a word consisting of constants from an alphabet Sigma of terminal symbols and variables from a set X. Given a pattern alpha, the decision problem of whether a given word w can be obtained by substituting the variables in alpha with words over Sigma is called the matching problem. While this problem is, in general, NP-complete, several classes of patterns for which it can be solved efficiently are already known. We present two new classes of patterns, called k-local and strongly-nested, and show that the respective matching problems, as well as the corresponding membership problems, can be solved efficiently for any fixed k.
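
As a concrete illustration of the matching problem itself (not of the paper's efficient algorithms for k-local or strongly-nested patterns), here is a brute-force Python matcher; the convention that variables are substituted by non-empty words is an assumption made here for simplicity.

def matches(pattern, word, constants, subst=None):
    """pattern: list of symbols; symbols in `constants` are terminals, all
    others are variables. Returns a substitution dict if word can be obtained
    from pattern, otherwise None. Brute force, exponential in general."""
    subst = dict(subst or {})
    if not pattern:
        return subst if not word else None
    head, rest = pattern[0], pattern[1:]
    if head in constants:                        # a terminal must match literally
        if word and word[0] == head:
            return matches(rest, word[1:], constants, subst)
        return None
    if head in subst:                            # variable already bound to an image
        img = subst[head]
        if word.startswith(img):
            return matches(rest, word[len(img):], constants, subst)
        return None
    for cut in range(1, len(word) + 1):          # try every non-empty image for the variable
        subst[head] = word[:cut]
        result = matches(rest, word[cut:], constants, subst)
        if result is not None:
            return result
        del subst[head]
    return None

print(matches(list("xax"), "bab", constants={"a"}))    # {'x': 'b'}
print(matches(list("xax"), "babab", constants={"a"}))  # None: no consistent image for x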

Cite as

Joel D. Day, Pamela Fleischmann, Florin Manea, and Dirk Nowotka. Local Patterns. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 24:1-24:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{day_et_al:LIPIcs.FSTTCS.2017.24,
  author =	{Day, Joel D. and Fleischmann, Pamela and Manea, Florin and Nowotka, Dirk},
  title =	{{Local Patterns}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{24:1--24:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.24},
  URN =		{urn:nbn:de:0030-drops-84013},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.24},
  annote =	{Keywords: Patterns with Variables, Local Patterns, Combinatorial Pattern Matching, Descriptive Patterns}
}
Document
On Symbolic Heaps Modulo Permission Theories

Authors: Stéphane Demri, Etienne Lozes, and Denis Lugiez


Abstract
We address the entailment problem for separation logic with symbolic heaps admitting list predicates and permissions for memory cells that are essential to express ownership of a heap region. In the permission-free case, the entailment problem is known to be in P. Herein, we design new decision procedures for solving the satisfiability and entailment problems that are parameterised by the permission theories. This permits the use of solvers dealing with the permission theory at hand, independently of the shape analysis. We also show that the entailment problem without list predicates is coNP-complete for several permission models, such as counting permissions and binary tree shares but the problem is in P for fractional permissions. Furthermore, when list predicates are added, we prove that the entailment problem is coNP-complete when the entailment problem for permission formulae is in coNP, assuming the write permission can be split into as many read permissions as desired. Finally, we show that the entailment problem for any Boolean permission model with infinite width is coNP-complete.

Cite as

Stéphane Demri, Etienne Lozes, and Denis Lugiez. On Symbolic Heaps Modulo Permission Theories. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{demri_et_al:LIPIcs.FSTTCS.2017.25,
  author =	{Demri, St\'{e}phane and Lozes, Etienne and Lugiez, Denis},
  title =	{{On Symbolic Heaps Modulo Permission Theories}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.25},
  URN =		{urn:nbn:de:0030-drops-83887},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.25},
  annote =	{Keywords: separation logic, entailment, permission, reasoning modulo theories}
}
Document
Symmetric Synthesis

Authors: Rüdiger Ehlers and Bernd Finkbeiner


Abstract
We study the problem of determining whether a given temporal specification can be implemented by a symmetric system, i.e., a system composed from identical components. Symmetry is an important goal in the design of distributed systems, because systems that are composed from identical components are easier to build and maintain. We show that for the class of rotation-symmetric architectures, i.e., multi-process architectures where all processes have access to all system inputs, but see different rotations of the inputs, the symmetric synthesis problem is EXPTIME-complete in the number of processes. In architectures where the processes do not have access to all input variables, the symmetric synthesis problem becomes undecidable, even in cases where the standard distributed synthesis problem is decidable.

Cite as

Rüdiger Ehlers and Bernd Finkbeiner. Symmetric Synthesis. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ehlers_et_al:LIPIcs.FSTTCS.2017.26,
  author =	{Ehlers, R\"{u}diger and Finkbeiner, Bernd},
  title =	{{Symmetric Synthesis}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{26:1--26:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.26},
  URN =		{urn:nbn:de:0030-drops-83996},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.26},
  annote =	{Keywords: Reactive Synthesis, Symmetry}
}
Document
Approximation Algorithms for Stochastic k-TSP

Authors: Alina Ene, Viswanath Nagarajan, and Rishi Saket


Abstract
This paper studies the stochastic variant of the classical k-TSP problem where rewards at the vertices are independent random variables which are instantiated upon the tour's visit. The objective is to minimize the expected length of a tour that collects reward at least k. The solution is a policy describing the tour, which may (adaptive) or may not (non-adaptive) depend on the observed rewards. Our work presents an adaptive O(log k)-approximation algorithm for Stochastic k-TSP, along with a non-adaptive O(log^2 k)-approximation algorithm which also upper bounds the adaptivity gap by O(log^2 k). We also show that the adaptivity gap of Stochastic k-TSP is at least e, even in the special case of stochastic knapsack cover.

Cite as

Alina Ene, Viswanath Nagarajan, and Rishi Saket. Approximation Algorithms for Stochastic k-TSP. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ene_et_al:LIPIcs.FSTTCS.2017.27,
  author =	{Ene, Alina and Nagarajan, Viswanath and Saket, Rishi},
  title =	{{Approximation Algorithms for Stochastic k-TSP}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.27},
  URN =		{urn:nbn:de:0030-drops-83910},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.27},
  annote =	{Keywords: Stochastic TSP, algorithms, approximation, adaptivity gap}
}
Document
Synthesis in Distributed Environments

Authors: Bernd Finkbeiner and Paul Gölz


Abstract
Most approaches to the synthesis of reactive systems study the problem in terms of a two-player game with complete observation. In many applications, however, the system's environment consists of several distinct entities, and the system must actively communicate with these entities in order to obtain information available in the environment. In this paper, we model such environments as a team of players and keep track of the information known to each individual player. This allows us to synthesize programs that interact with a distributed environment and leverage multiple interacting sources of information. The synthesis problem in distributed environments corresponds to solving a special class of Petri games, i.e., multi-player games played over Petri nets, where the net has a distinguished token representing the system and an arbitrary number of tokens representing the environment. While, in general, even the decidability of Petri games is an open question, we show that the synthesis problem in distributed environments can be solved in polynomial time for nets with up to two environment tokens. For an arbitrary but fixed number of three or more environment tokens, the problem is NP-complete. If the number of environment tokens grows with the size of the net, the problem is EXPTIME-complete.

Cite as

Bernd Finkbeiner and Paul Gölz. Synthesis in Distributed Environments. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 28:1-28:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{finkbeiner_et_al:LIPIcs.FSTTCS.2017.28,
  author =	{Finkbeiner, Bernd and G\"{o}lz, Paul},
  title =	{{Synthesis in Distributed Environments}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{28:1--28:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.28},
  URN =		{urn:nbn:de:0030-drops-84068},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.28},
  annote =	{Keywords: reactive synthesis, distributed information, causal memory, Petri nets}
}
Document
Train Scheduling on a Unidirectional Path

Authors: Apoorv Garg and Abhiram G. Ranade


Abstract
We formulate what might be the simplest train scheduling problem considered in the literature and show it to be NP-hard. We also give a log-factor randomised algorithm for it. In our problem we have a unidirectional train track with equidistant stations, each station initially having at most one train. In addition, there can be at most one train poised to enter each station. The trains must move to their destinations subject to the constraint that at every time instant there can be at most one train at each station and on the track between stations. The goal is to minimise the maximum delay of any train. Our problem can also be interpreted as a packet routing problem, and our work strengthens the hardness results from that literature.

Cite as

Apoorv Garg and Abhiram G. Ranade. Train Scheduling on a Unidirectional Path. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 29:1-29:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.FSTTCS.2017.29,
  author =	{Garg, Apoorv and Ranade, Abhiram G.},
  title =	{{Train Scheduling on a Unidirectional Path}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{29:1--29:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.29},
  URN =		{urn:nbn:de:0030-drops-84134},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.29},
  annote =	{Keywords: Combinatorial optimization, Train scheduling, Max-delay minimization, Complexity analysis, Approximation algorithm}
}
Document
On the Control of Asynchronous Automata

Authors: Hugo Gimbert


Abstract
The decidability of the distributed version of the Ramadge and Wonham controller synthesis problem, where both the plant and the controllers are modeled as asynchronous automata and the controllers have causal memory, is a challenging open problem. There exist three classes of plants for which the existence of a correct controller with causal memory has been shown decidable: when the dependency graph of actions is series-parallel, when the processes are connectedly communicating, and when the dependency graph of processes is a tree. We design a class of plants, called decomposable games, with a decidable controller synthesis problem. This provides a unified proof of the three existing decidability results as well as new examples of decidable plants.

Cite as

Hugo Gimbert. On the Control of Asynchronous Automata. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 30:1-30:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gimbert:LIPIcs.FSTTCS.2017.30,
  author =	{Gimbert, Hugo},
  title =	{{On the Control of Asynchronous Automata}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{30:1--30:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.30},
  URN =		{urn:nbn:de:0030-drops-84142},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.30},
  annote =	{Keywords: Asynchronous automata, Controller synthesis}
}
Document
Streaming for Aibohphobes: Longest Palindrome with Mismatches

Authors: Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou


Abstract
A palindrome is a string that reads the same as its reverse, such as "aibohphobia" (fear of palindromes). Given a metric and an integer d > 0, a d-near-palindrome is a string of Hamming distance at most d from its reverse. We study the natural problem of identifying the longest d-near-palindrome in data streams. The problem is relevant to the analysis of DNA databases, and to the task of repairing recursive structures in documents such as XML and JSON. We present the first streaming algorithm for the longest d-near-palindrome problem that returns a d-near-palindrome whose length is within a multiplicative (1+eps)-factor of the longest d-near-palindrome. Our algorithm also returns the set of mismatched indices of the d-near-palindrome, and uses O((d log^7 n)/(eps log(1+eps))) bits of space and O((d log^6 n)/(eps log(1+eps))) update time per arriving symbol. We show that, for d = o(sqrt(n)), any randomized algorithm with multiplicative approximation (1+eps) that succeeds with probability at least 1 - 1/n requires Omega(d log n) space. We further obtain a streaming algorithm that returns a d-near-palindrome whose length is within an additive error E of the longest d-near-palindrome. The algorithm uses O((d n log^6 n)/E) bits of space and O((d n log^5 n)/E) update time. As before, we show that any randomized streaming algorithm that solves the longest d-near-palindrome problem with additive error E and success probability at least 1 - 1/n uses Omega(dn/E) space. Finally, we give an exact two-pass algorithm that solves the longest d-near-palindrome problem using O(d^2 sqrt(n) log^6 n) bits of space.
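
To make the d-near-palindrome notion concrete, the following naive offline Python sketch checks the definition and finds the longest d-near-palindromic substring by brute force; it is an offline analogue for intuition only and ignores the space and time constraints that the paper's streaming algorithms are designed to meet.

def hamming_to_reverse(s):
    # Hamming distance between s and its reverse
    return sum(1 for a, b in zip(s, reversed(s)) if a != b)

def is_d_near_palindrome(s, d):
    return hamming_to_reverse(s) <= d

def longest_d_near_palindrome(s, d):
    # brute force over all substrings; O(n^3), suitable for tiny inputs only
    best = ""
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n + 1):
            t = s[i:j]
            if len(t) > len(best) and hamming_to_reverse(t) <= d:
                best = t
    return best

print(is_d_near_palindrome("aibohphobia", 0))      # True: an exact palindrome
print(longest_d_near_palindrome("abcxacba", 2))    # 'abcxacba': its reverse differs in 2 positions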

Cite as

Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou. Streaming for Aibohphobes: Longest Palindrome with Mismatches. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 31:1-31:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{grigorescu_et_al:LIPIcs.FSTTCS.2017.31,
  author =	{Grigorescu, Elena and Sadeqi Azer, Erfan and Zhou, Samson},
  title =	{{Streaming for Aibohphobes: Longest Palindrome with Mismatches}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{31:1--31:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.31},
  URN =		{urn:nbn:de:0030-drops-84053},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.31},
  annote =	{Keywords: Longest palindrome with mismatches, Streaming algorithms, Hamming distance}
}
Document
Understanding the Correlation Gap For Matchings

Authors: Guru Guruganesh and Euiwoong Lee


Abstract
Given a set of vertices V with |V| = n, a weight vector w in (R^+ ∪ {0})^{binom(V,2)}, and a probability vector x in [0,1]^{binom(V,2)} in the matching polytope, we study the quantity E_G[nu_w(G)] / (sum_{(u,v) in binom(V,2)} w_{u,v} x_{u,v}), where G is a random graph in which each edge e with weight w_e appears independently with probability x_e, and nu_w(G) denotes the weight of a maximum matching of G. This quantity is closely related to the correlation gap and to contention resolution schemes, which are important tools in the design of approximation algorithms, algorithmic game theory, and stochastic optimization. We provide lower bounds for the above quantity for general and bipartite graphs, and for weighted and unweighted settings. The best known upper bound is 0.54, due to Karp and Sipser, and the best known lower bound is 0.4. We show that the quantity is at least 0.47 for unweighted bipartite graphs, at least 0.45 for weighted bipartite graphs, and at least 0.43 for weighted general graphs. To achieve our results, we construct local distribution schemes on the dual which may be of independent interest.
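
The following dependency-free Python sketch estimates the ratio from the abstract by Monte Carlo sampling on a tiny instance (unweighted K4 with x_e = 1/3, a point in the matching polytope); the maximum-weight matching is found by brute force. The instance and sampling setup are illustrative choices made here to make the quantity tangible, and the printed value for one instance is not a bound from the paper (the stated bounds are worst-case over all instances).

import itertools, random

def max_matching_weight(present_edges, w):
    # brute force over all edge subsets; fine only for very small graphs
    best = 0.0
    for r in range(len(present_edges) + 1):
        for sub in itertools.combinations(present_edges, r):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):          # vertex-disjoint, i.e. a matching
                best = max(best, sum(w[e] for e in sub))
    return best

def estimate_ratio(vertices, w, x, samples=3000, seed=0):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(vertices, 2))
    denom = sum(w[e] * x[e] for e in pairs)          # the fractional value sum_e w_e x_e
    total = 0.0
    for _ in range(samples):
        present = [e for e in pairs if rng.random() < x[e]]   # edge e kept with probability x_e
        total += max_matching_weight(present, w)
    return (total / samples) / denom

V = list(range(4))
edges = list(itertools.combinations(V, 2))
w = {e: 1.0 for e in edges}                          # unweighted K4
x = {e: 1.0 / 3 for e in edges}                      # lies in the matching polytope
print(estimate_ratio(V, w, x))                       # Monte Carlo estimate of E[nu_w(G)] / 2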

Cite as

Guru Guruganesh and Euiwoong Lee. Understanding the Correlation Gap For Matchings. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 32:1-32:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{guruganesh_et_al:LIPIcs.FSTTCS.2017.32,
  author =	{Guruganesh, Guru and Lee, Euiwoong},
  title =	{{Understanding the Correlation Gap For Matchings}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{32:1--32:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.32},
  URN =		{urn:nbn:de:0030-drops-84003},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.32},
  annote =	{Keywords: Matchings, Randomized Algorithms, Correlation Gap}
}
Document
Hardness of Rainbow Coloring Hypergraphs

Authors: Venkatesan Guruswami and Rishi Saket


Abstract
A hypergraph is k-rainbow colorable if there exists a vertex coloring using k colors such that each hyperedge has all the k colors. Unlike usual hypergraph coloring, rainbow coloring becomes harder as the number of colors increases. This work studies the rainbow colorability of hypergraphs which are guaranteed to be nearly balanced rainbow colorable. Specifically, we show that for any Q, k >= 2 and ℓ <= k/2, given a Qk-uniform hypergraph which admits a k-rainbow coloring in which, in each hyperedge e, for some ℓ_e <= ℓ, all but 2ℓ_e colors occur exactly Q times and the rest occur (Q ± 1) times, it is NP-hard to compute an independent set containing a (1 - (ℓ+1)/k + eps)-fraction of the vertices, for any constant eps > 0. In particular, this implies the hardness of even (k/ℓ)-rainbow coloring such hypergraphs. The result is based on a novel long-code PCP test that ensures the strong balancedness property desired of the k-rainbow coloring in the completeness case. The soundness analysis relies on a mixing bound based on uniform reverse hypercontractivity due to Mossel, Oleszkiewicz, and Sen, which was also used in earlier proofs of the hardness of ω(1)-coloring 2-colorable 4-uniform hypergraphs due to Saket, and of k-rainbow colorable 2k-uniform hypergraphs due to Guruswami and Lee.
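
For intuition about the k-rainbow colorability notion only (the paper is about hardness, not algorithms), here is an exponential brute-force Python check on a toy 3-uniform hypergraph chosen for this illustration.

import itertools

def is_k_rainbow_colorable(num_vertices, hyperedges, k):
    # try every k-coloring; a coloring works if each hyperedge sees all k colors
    for coloring in itertools.product(range(k), repeat=num_vertices):
        if all({coloring[v] for v in e} == set(range(k)) for e in hyperedges):
            return True
    return False

# Every hyperedge takes one vertex from each of the classes {0,1}, {2,3}, {4,5},
# so coloring by class is a 3-rainbow coloring.
edges = [(0, 2, 4), (1, 3, 5), (0, 3, 5), (1, 2, 4)]
print(is_k_rainbow_colorable(6, edges, 3))                 # True
# Adding the hyperedge (0, 1, 2) rules out every 3-coloring.
print(is_k_rainbow_colorable(6, edges + [(0, 1, 2)], 3))   # False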

Cite as

Venkatesan Guruswami and Rishi Saket. Hardness of Rainbow Coloring Hypergraphs. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.FSTTCS.2017.33,
  author =	{Guruswami, Venkatesan and Saket, Rishi},
  title =	{{Hardness of Rainbow Coloring Hypergraphs}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.33},
  URN =		{urn:nbn:de:0030-drops-83905},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.33},
  annote =	{Keywords: Fourier analysis of Boolean functions, hypergraph coloring, Inapproximability, Label Cover, PCP}
}
Document
Network Construction with Ordered Constraints

Authors: Yi Huang, Mano Vikash Janardhanan, and Lev Reyzin


Abstract
In this paper, we study the problem of constructing a network by observing ordered connectivity constraints, which we define herein. These ordered constraints are made to capture realistic properties of real-world problems that are not reflected in previous, more general models. We give hardness of approximation results and nearly-matching upper bounds for the offline problem, and we study the online problem in both general graphs and restricted sub-classes. In the online problem, for general graphs, we give exponentially better upper bounds than exist for algorithms for general connectivity problems. For the restricted classes of stars and paths we are able to find algorithms with optimal competitive ratios, the latter of which involve analysis using a potential function defined over PQ-trees.

Cite as

Yi Huang, Mano Vikash Janardhanan, and Lev Reyzin. Network Construction with Ordered Constraints. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 34:1-34:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.FSTTCS.2017.34,
  author =	{Huang, Yi and Janardhanan, Mano Vikash and Reyzin, Lev},
  title =	{{Network Construction with Ordered Constraints}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{34:1--34:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.34},
  URN =		{urn:nbn:de:0030-drops-84090},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.34},
  annote =	{Keywords: subgraph connectivity, network construction, ordered connectivity constraints, PQ-trees}
}
Document
Complexity of Model Checking MDPs against LTL Specifications

Authors: Dileep Kini and Mahesh Viswanathan


Abstract
Given a Markov Decision Process (MDP) M, an LTL formula φ, and a threshold θ ∈ [0,1], the verification question is to determine if there is a scheduler with respect to which the executions of M satisfying φ have probability greater than (or greater than or equal to) θ. When θ = 0, we call it the qualitative verification problem, and when θ ∈ (0,1], we call it the quantitative verification problem. In this paper we study the precise complexity of these problems when the specification is constrained to be in different fragments of LTL.

Cite as

Dileep Kini and Mahesh Viswanathan. Complexity of Model Checking MDPs against LTL Specifications. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 35:1-35:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kini_et_al:LIPIcs.FSTTCS.2017.35,
  author =	{Kini, Dileep and Viswanathan, Mahesh},
  title =	{{Complexity of Model Checking MDPs against LTL Specifications}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{35:1--35:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.35},
  URN =		{urn:nbn:de:0030-drops-83928},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.35},
  annote =	{Keywords: Markov Decision Processes, Linear Temporal Logic, model checking, complexity}
}
Document
A Unified Method for Placing Problems in Polylogarithmic Depth

Authors: Andreas Krebs, Nutan Limaye, and Michael Ludwig


Abstract
In this work we consider the term evaluation problem which is, given a term over some algebra and a valid input to the term, computing the value of the term on that input. In contrast to previous methods we allow the algebra to be completely general and consider the problem of obtaining an efficient upper bound for this problem. Many variants of the problem where the algebra is well behaved have been studied, for example over the Boolean semiring or over the semiring (N,+,*). We extend this line of work. Our efficient term evaluation algorithm then serves as a tool for obtaining polylogarithmic depth upper bounds for various well-studied problems. To demonstrate the utility of our result we show new bounds and reprove known results for a large spectrum of problems. In particular, the applications of the algorithm we consider include (but are not restricted to) arithmetic formula evaluation, word problems for tree and visibly pushdown automata, and various problems related to bounded tree-width and clique-width graphs.

Cite as

Andreas Krebs, Nutan Limaye, and Michael Ludwig. A Unified Method for Placing Problems in Polylogarithmic Depth. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{krebs_et_al:LIPIcs.FSTTCS.2017.36,
  author =	{Krebs, Andreas and Limaye, Nutan and Ludwig, Michael},
  title =	{{A Unified Method for Placing Problems in Polylogarithmic Depth}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.36},
  URN =		{urn:nbn:de:0030-drops-83805},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.36},
  annote =	{Keywords: Polylogarithmic depth, Term evaluation, Parallel algorithms}
}
Document
Finding Pseudorandom Colorings of Pseudorandom Graphs

Authors: Akash Kumar, Anand Louis, and Madhur Tulsiani


Abstract
We consider the problem of recovering a planted pseudorandom 3-coloring in expanding and low threshold-rank graphs. Alon and Kahale [SICOMP 1997] gave a spectral algorithm to recover the coloring for a random graph with a planted random 3-coloring. We show that their analysis can be adapted to work when the coloring is pseudorandom, i.e., all color classes are of equal size and the size of the intersection of the neighborhood of a random vertex with each color class has small variance. We also extend our results to partial colorings and low threshold-rank graphs to show the following: * For graphs on n vertices with threshold-rank r, for which there exists a 3-coloring that is eps-pseudorandom and properly colors the induced subgraph on (1-gamma)n vertices, we show how to recover the coloring for (1 - O(gamma + eps))n vertices in time (rn)^{O(r)}. * For expanding graphs on n vertices, which admit a pseudorandom 3-coloring properly coloring all the vertices, we show how to recover such a coloring in polynomial time. Our results are obtained by combining the method of Alon and Kahale with eigenspace enumeration methods used for solving constraint satisfaction problems on low threshold-rank graphs.

Cite as

Akash Kumar, Anand Louis, and Madhur Tulsiani. Finding Pseudorandom Colorings of Pseudorandom Graphs. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 37:1-37:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.FSTTCS.2017.37,
  author =	{Kumar, Akash and Louis, Anand and Tulsiani, Madhur},
  title =	{{Finding Pseudorandom Colorings of Pseudorandom Graphs}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{37:1--37:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.37},
  URN =		{urn:nbn:de:0030-drops-83956},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.37},
  annote =	{Keywords: Graph Coloring, Expanders, Spectral algorithms}
}
Document
Flow Games

Authors: Orna Kupferman, Gal Vardi, and Moshe Y. Vardi


Abstract
In the traditional maximal-flow problem, the goal is to transfer maximum flow in a network by directing, in each vertex in the network, incoming flow into outgoing edges. While the problem has been extensively used in order to optimize the performance of networks in numerous application areas, it corresponds to a setting in which the authority has control on all vertices of the network. Today's computing environment involves parties that should be considered adversarial. We introduce and study flow games, which capture settings in which the authority can control only part of the vertices. In these games, the vertices are partitioned between two players: the authority and the environment. While the authority aims at maximizing the flow, the environment need not cooperate. We argue that flow games capture many modern settings, such as partially-controlled pipe or road systems or hybrid software-defined communication networks. We show that the problem of finding the maximal flow as well as an optimal strategy for the authority in an acyclic flow game is Sigma_2^P-complete, and is already Sigma_2^P-hard to approximate. We study variants of the game: a restriction to strategies that ensure no loss of flow, an extension to strategies that allow non-integral flows, which we prove to be stronger, and a dynamic setting in which a strategy for a vertex is chosen only once flow reaches the vertex. We discuss additional variants and their applications, and point to several interesting open problems.

Cite as

Orna Kupferman, Gal Vardi, and Moshe Y. Vardi. Flow Games. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kupferman_et_al:LIPIcs.FSTTCS.2017.38,
  author =	{Kupferman, Orna and Vardi, Gal and Vardi, Moshe Y.},
  title =	{{Flow Games}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.38},
  URN =		{urn:nbn:de:0030-drops-83738},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.38},
  annote =	{Keywords: Flow networks, Two-player Games, Algorithms}
}
Document
A Dichotomy Theorem for the Inverse Satisfiability Problem

Authors: Victor Lagerkvist and Biman Roy


Abstract
The inverse satisfiability problem over a set of Boolean relations Gamma (Inv-SAT(Gamma)) is the computational decision problem of, given a set of models R, deciding whether there exists a SAT(Gamma) instance with R as its set of models. This problem is co-NP-complete in general, and a dichotomy theorem for finite Gamma containing the constant Boolean relations was obtained by Kavvadias and Sideri. In this paper we remove the latter condition and prove that Inv-SAT(Gamma) is always either tractable or co-NP-complete for all finite sets of relations Gamma, thus solving a problem open since 1998. Very few of the techniques used by Kavvadias and Sideri are applicable, and we have to turn to more recently developed algebraic approaches based on partial polymorphisms. We also consider the case when Gamma is infinite, where the situation differs markedly from the case of SAT. More precisely, we show that there exists an infinite Gamma such that Inv-SAT(Gamma) is tractable even though there exists a finite subset Delta of Gamma such that Inv-SAT(Delta) is co-NP-complete.

Cite as

Victor Lagerkvist and Biman Roy. A Dichotomy Theorem for the Inverse Satisfiability Problem. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lagerkvist_et_al:LIPIcs.FSTTCS.2017.39,
  author =	{Lagerkvist, Victor and Roy, Biman},
  title =	{{A Dichotomy Theorem for the Inverse Satisfiability Problem}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{39:1--39:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.39},
  URN =		{urn:nbn:de:0030-drops-83745},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.39},
  annote =	{Keywords: Complexity Theory, Inverse Satisfiability, Clone Theory}
}
Document
Balanced Judicious Bipartition is Fixed-Parameter Tractable

Authors: Daniel Lokshtanov, Saket Saurabh, Roohani Sharma, and Meirav Zehavi


Abstract
The family of judicious partitioning problems, introduced by Bollobás and Scott to the field of extremal combinatorics, has been extensively studied from a structural point of view for over two decades. This rich realm of problems aims to counterbalance the objectives of classical partitioning problems such as Min Cut, Min Bisection and Max Cut. While these classical problems focus solely on the minimization/maximization of the number of edges crossing the cut, judicious (bi)partitioning problems ask the natural question of the minimization/maximization of the number of edges lying in the (two) sides of the cut. In particular, Judicious Bipartition (JB) seeks a bipartition that is "judicious" in the sense that neither side is burdened by too many edges, and Balanced JB (BJB) also requires that the sizes of the sides themselves are "balanced" in the sense that neither of them is too large. Both of these problems were defined in the work by Bollobás and Scott, and have received notable scientific attention since then. In this paper, we shed light on the study of judicious partitioning problems from the viewpoint of algorithm design. Specifically, we prove that BJB is FPT (which also proves that JB is FPT).

Cite as

Daniel Lokshtanov, Saket Saurabh, Roohani Sharma, and Meirav Zehavi. Balanced Judicious Bipartition is Fixed-Parameter Tractable. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.FSTTCS.2017.40,
  author =	{Lokshtanov, Daniel and Saurabh, Saket and Sharma, Roohani and Zehavi, Meirav},
  title =	{{Balanced Judicious Bipartition is Fixed-Parameter Tractable}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.40},
  URN =		{urn:nbn:de:0030-drops-84115},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.40},
  annote =	{Keywords: Judicious Partition, Tree Decomposition, Parameterized Complexity, Odd Cycle Transversal, Minimum Bisection}
}
Document
On Hashing-Based Approaches to Approximate DNF-Counting

Authors: Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi


Abstract
Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without the use of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting has poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: Can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques: Symbolic Hashing and Stochastic Cell Counting, along with a new hash family of Row-Echelon hash functions. These innovations allow us to design a hashing-based FPRAS for DNF counting with complexity similar (up to polylog factors) to that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.
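
The Monte Carlo FPRAS that the abstract uses as its point of comparison is the classical Karp-Luby-style estimator; the Python sketch below implements that baseline (it is not the paper's hashing-based algorithm and does not use Symbolic Hashing, Stochastic Cell Counting, or Row-Echelon hash functions), simply to show what quantity is being approximated. The encoding of terms as partial assignments is an assumption made here.

import random

def karp_luby(terms, n, samples=100000, seed=0):
    """terms: list of dicts {variable index: required bool}; the DNF formula is
    the OR of these terms over n Boolean variables. Returns an unbiased estimate
    of the number of satisfying assignments."""
    rng = random.Random(seed)
    weights = [2 ** (n - len(t)) for t in terms]     # |sat(T_i)| for each term
    U = sum(weights)
    hits = 0
    for _ in range(samples):
        # pick a term with probability proportional to |sat(T_i)| ...
        i = rng.choices(range(len(terms)), weights=weights)[0]
        # ... then a uniform assignment satisfying that term
        a = {v: rng.random() < 0.5 for v in range(n)}
        a.update(terms[i])
        # count the sample iff i is the first term satisfied by the assignment,
        # so every satisfying assignment is counted exactly once across terms
        first = next(j for j, t in enumerate(terms)
                     if all(a[v] == b for v, b in t.items()))
        hits += (first == i)
    return U * hits / samples

# x1 OR x2 over 3 variables has exactly 6 models; the estimate should be close to 6.
terms = [{0: True}, {1: True}]
print(karp_luby(terms, n=3))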

Cite as

Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi. On Hashing-Based Approaches to Approximate DNF-Counting. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{meel_et_al:LIPIcs.FSTTCS.2017.41,
  author =	{Meel, Kuldeep S. and Shrotri, Aditya A. and Vardi, Moshe Y.},
  title =	{{On Hashing-Based Approaches to Approximate DNF-Counting}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.41},
  URN =		{urn:nbn:de:0030-drops-84073},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.41},
  annote =	{Keywords: Model Counting, Approximation, DNF, Hash Functions}
}
Document
Average Stack Cost of Büchi Pushdown Automata

Authors: Jakub Michaliszyn and Jan Otop


Abstract
We study the average stack cost of Büchi pushdown automata (Büchi PDA). We associate a non-negative price with each stack symbol and define the cost of a stack as the sum of the costs of all its elements. We introduce and study the average stack cost problem (ASC), which asks whether there exists an accepting run of a given Büchi PDA such that the long-run average of the stack costs is below some given threshold. The ASC problem generalises the mean-payoff objective and can be used to express quantitative properties of pushdown systems. In particular, we can compute the average response time using the ASC problem. We show that the ASC problem can be solved in polynomial time.

Cite as

Jakub Michaliszyn and Jan Otop. Average Stack Cost of Büchi Pushdown Automata. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{michaliszyn_et_al:LIPIcs.FSTTCS.2017.42,
  author =	{Michaliszyn, Jakub and Otop, Jan},
  title =	{{Average Stack Cost of B\"{u}chi Pushdown Automata}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.42},
  URN =		{urn:nbn:de:0030-drops-83971},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.42},
  annote =	{Keywords: pushdown automata, average stack cost, weighted pushdown systems}
}
Document
Querying Best Paths in Graph Databases

Authors: Jakub Michaliszyn, Jan Otop, and Piotr Wieczorek


Abstract
Querying graph databases has recently received much attention. We propose a new approach to this problem, which balances competing goals of expressive power, language clarity and computational complexity. A distinctive feature of our approach is the ability to express properties of minimal (e.g. shortest) and maximal (e.g. most valuable) paths satisfying given criteria. To express complex properties in a modular way, we introduce labelling-generating ontologies. The resulting formalism is computationally attractive: queries can be answered in non-deterministic logarithmic space in the size of the database.

Cite as

Jakub Michaliszyn, Jan Otop, and Piotr Wieczorek. Querying Best Paths in Graph Databases. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 43:1-43:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{michaliszyn_et_al:LIPIcs.FSTTCS.2017.43,
  author =	{Michaliszyn, Jakub and Otop, Jan and Wieczorek, Piotr},
  title =	{{Querying Best Paths in Graph Databases}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{43:1--43:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.43},
  URN =		{urn:nbn:de:0030-drops-83989},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.43},
  annote =	{Keywords: graph databases, queries, aggregation}
}
Document
Popular Matchings with Lower Quotas

Authors: Meghana Nasre and Prajakta Nimbhorkar


Abstract
We consider the well-studied Hospital Residents (HR) problem in the presence of lower quotas (LQ). The input instance consists of a bipartite graph G = (R U H, E), where R and H denote sets of residents and hospitals, respectively. Every vertex has a preference list that imposes a strict ordering on its neighbors. In addition, each hospital has an associated upper-quota and a lower-quota. A matching M in G is an assignment of residents to hospitals, and M is said to be feasible if every resident is assigned to at most one hospital and every hospital is assigned at least its lower-quota many and at most its upper-quota many residents. Stability is a de facto notion of optimality in a model where both sets of vertices have preferences. A matching is stable if no unassigned pair has an incentive to deviate from it. It is well-known that an instance of the HRLQ problem need not admit a feasible stable matching. In this paper, we consider the notion of popularity for the HRLQ problem. A matching M is popular if no other matching M' gets more votes than M when vertices vote between M and M'. When there are no lower quotas, there always exists a stable matching, and it is known that every stable matching is popular. We show that in an HRLQ instance, although a feasible stable matching need not exist, there is always a matching that is popular in the set of feasible matchings. We give an efficient algorithm to compute a maximum cardinality matching that is popular amongst all the feasible matchings in an HRLQ instance.
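
As a minimal illustration of feasibility in the HRLQ setting only (the paper's contribution is computing a maximum-cardinality popular feasible matching, which this snippet does not attempt), consider this small Python check on a made-up instance.

def is_feasible(matching, residents, lower, upper):
    """matching: dict resident -> hospital (each resident assigned at most once).
    lower/upper: dicts hospital -> lower-quota / upper-quota."""
    load = {h: 0 for h in lower}
    for r, h in matching.items():
        if r not in residents or h not in load:
            return False
        load[h] += 1
    return all(lower[h] <= load[h] <= upper[h] for h in load)

residents = {"r1", "r2", "r3"}
lower = {"h1": 1, "h2": 1}
upper = {"h1": 2, "h2": 1}
print(is_feasible({"r1": "h1", "r2": "h1", "r3": "h2"}, residents, lower, upper))  # True
print(is_feasible({"r1": "h1", "r2": "h1", "r3": "h1"}, residents, lower, upper))  # False: quotas violated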

Cite as

Meghana Nasre and Prajakta Nimbhorkar. Popular Matchings with Lower Quotas. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 44:1-44:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{nasre_et_al:LIPIcs.FSTTCS.2017.44,
  author =	{Nasre, Meghana and Nimbhorkar, Prajakta},
  title =	{{Popular Matchings with Lower Quotas}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{44:1--44:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.44},
  URN =		{urn:nbn:de:0030-drops-83937},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.44},
  annote =	{Keywords: bipartite matchings, preferences, hospital residents, lower-quota, popular matchings}
}
Document
The Complexity of the Diagonal Problem for Recursion Schemes

Authors: Pawel Parys


Abstract
We consider nondeterministic higher-order recursion schemes as recognizers of languages of finite words or finite trees. We establish the complexity of the diagonal problem for schemes: given a set of letters A and a scheme G, is it the case that for every number n the scheme accepts a word (a tree) in which every letter from A appears at least n times? We prove that this problem is (m-1)-EXPTIME-complete for word-recognizing schemes of order m, and m-EXPTIME-complete for tree-recognizing schemes of order m.

Cite as

Pawel Parys. The Complexity of the Diagonal Problem for Recursion Schemes. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{parys:LIPIcs.FSTTCS.2017.45,
  author =	{Parys, Pawel},
  title =	{{The Complexity of the Diagonal Problem for Recursion Schemes}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.45},
  URN =		{urn:nbn:de:0030-drops-83757},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.45},
  annote =	{Keywords: diagonal problem, higher-order recursion schemes, intersection types, downward closure}
}
Document
A Combinatorial Proof of Ihara-Bass's Formula for the Zeta Function of Regular Graphs

Authors: Bharatram Rangarajan


Abstract
We give an elementary combinatorial proof of Bass's determinant formula for the zeta function of a finite regular graph. This is done by expressing the number of non-backtracking cycles of a given length in terms of Chebyshev polynomials in the eigenvalues of the adjacency operator of the graph. A related observation of independent interest is that the Ramanujan property of a regular graph is equivalent to tight bounds on the number of non-backtracking cycles of every length.
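
A small numerical companion to the abstract, assuming numpy is available: the Hashimoto (non-backtracking) matrix B of a graph is indexed by directed edges, and tr(B^k) counts the closed non-backtracking walks of length k with a marked starting edge, the quantities that enter the zeta function. The Chebyshev-polynomial identity proved in the paper is not reproduced here; the snippet only computes the walk counts directly.

import itertools
import numpy as np

def hashimoto_matrix(edges):
    # one "dart" per direction of every undirected edge
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    B = np.zeros((len(darts), len(darts)))
    for (u, v) in darts:
        for (x, w) in darts:
            if x == v and w != u:   # (u,v) -> (v,w) allowed unless it immediately backtracks
                B[idx[(u, v)], idx[(x, w)]] = 1.0
    return B

# K4 is 3-regular; its 4 triangles, traversed in 2 directions with 3 possible
# starting darts each, give 4 * 2 * 3 = 24 closed non-backtracking walks of length 3.
K4 = list(itertools.combinations(range(4), 2))
B = hashimoto_matrix(K4)
print(int(round(np.trace(np.linalg.matrix_power(B, 3)))))   # 24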

Cite as

Bharatram Rangarajan. A Combinatorial Proof of Ihara-Bass's Formula for the Zeta Function of Regular Graphs. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{rangarajan:LIPIcs.FSTTCS.2017.46,
  author =	{Rangarajan, Bharatram},
  title =	{{A Combinatorial Proof of Ihara-Bass's Formula for the Zeta Function of Regular Graphs}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.46},
  URN =		{urn:nbn:de:0030-drops-83861},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.46},
  annote =	{Keywords: non-backtracking, Ihara zeta, Chebyshev polynomial, Ramanujan graph, Hashimoto matrix}
}
Document
VLDL Satisfiability and Model Checking via Tree Automata

Authors: Alexander Weinert


Abstract
We present novel algorithms solving the satisfiability problem and the model checking problem for Visibly Linear Dynamic Logic (VLDL) in asymptotically optimal time via a reduction to the emptiness problem for tree automata with Büchi acceptance. Since VLDL allows for the specification of important properties of recursive systems, this reduction enables the efficient analysis of such systems. Furthermore, as the problem of tree automata emptiness is well-studied, this reduction enables leveraging the mature algorithms and tools for that problem in order to solve the satisfiability problem and the model checking problem for VLDL.

Cite as

Alexander Weinert. VLDL Satisfiability and Model Checking via Tree Automata. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{weinert:LIPIcs.FSTTCS.2017.47,
  author =	{Weinert, Alexander},
  title =	{{VLDL Satisfiability and Model Checking via Tree Automata}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.47},
  URN =		{urn:nbn:de:0030-drops-83871},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.47},
  annote =	{Keywords: Visibly Linear Dynamic Logic, Visibly Pushdown Languages, Satisfiability, Model Checking, Tree Automata}
}
