LIPIcs, Volume 150

39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)




Event

FSTTCS 2019, December 11-13, 2019, Bombay, India

Editors

Arkadev Chattopadhyay
  • Tata Institute of Fundamental Research, Mumbai, India
Paul Gastin
  • LSV, ENS Paris-Saclay, CNRS, Université Paris-Saclay, France

Publication Details

  • published at: 2019-12-04
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-131-3
  • DBLP: db/conf/fsttcs/fsttcs2019

Documents

Document
Complete Volume
LIPIcs, Volume 150, FSTTCS'19, Complete Volume

Authors: Arkadev Chattopadhyay and Paul Gastin


Abstract
LIPIcs, Volume 150, FSTTCS'19, Complete Volume

Cite as

39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@Proceedings{chattopadhyay_et_al:LIPIcs.FSTTCS.2019,
  title =	{{LIPIcs, Volume 150, FSTTCS'19, Complete Volume}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019},
  URN =		{urn:nbn:de:0030-drops-116426},
  doi =		{10.4230/LIPIcs.FSTTCS.2019},
  annote =	{Keywords: Theory of computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Arkadev Chattopadhyay and Paul Gastin


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

Arkadev Chattopadhyay and Paul Gastin. Front Matter, Table of Contents, Preface, Conference Organization. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chattopadhyay_et_al:LIPIcs.FSTTCS.2019.0,
  author =	{Chattopadhyay, Arkadev and Gastin, Paul},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.0},
  URN =		{urn:nbn:de:0030-drops-115621},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Practical Formal Methods for Real World Cryptography (Invited Talk)

Authors: Karthikeyan Bhargavan and Prasad Naldurg


Abstract
Cryptographic algorithms, protocols, and applications are difficult to implement correctly, and errors and vulnerabilities in their code can remain undiscovered for long periods before they are exploited. Even highly-regarded cryptographic libraries suffer from bugs like buffer overruns, incorrect numerical computations, and timing side-channels, which can lead to the exposure of sensitive data and long-term secrets. We describe a tool chain and framework based on the F* programming language to formally specify, verify and compile high-performance cryptographic software that is secure by design. This tool chain has been used to build a verified cryptographic library called HACL*, and provably secure implementations of sophisticated secure communication protocols like Signal and TLS. We describe these case studies and conclude with ongoing work on using our framework to build verified implementations of privacy preserving machine learning software.

Cite as

Karthikeyan Bhargavan and Prasad Naldurg. Practical Formal Methods for Real World Cryptography (Invited Talk). In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 1:1-1:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bhargavan_et_al:LIPIcs.FSTTCS.2019.1,
  author =	{Bhargavan, Karthikeyan and Naldurg, Prasad},
  title =	{{Practical Formal Methods for Real World Cryptography}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{1:1--1:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.1},
  URN =		{urn:nbn:de:0030-drops-115632},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.1},
  annote =	{Keywords: Formal verification, Applied cryptography, Security protocols, Machine learning}
}
Document
Invited Talk
Sketching Graphs and Combinatorial Optimization (Invited Talk)

Authors: Robert Krauthgamer


Abstract
Graph-sketching algorithms summarize an input graph G in a manner that suffices to later answer (perhaps approximately) one or more optimization problems on G, like distances, cuts, and matchings. Two famous examples are the Gomory-Hu tree, which represents all the minimum st-cuts in a graph G using a tree on the same vertex set V(G); and the cut-sparsifier of Benczúr and Karger, which is a sparse graph (often a reweighted subgraph) that approximates every cut in G within factor 1 +/- epsilon. Another genre of these problems limits the queries to designated terminal vertices, denoted T subseteq V(G), and the sketch size depends on |T| instead of |V(G)|. The talk will survey this topic, particularly cut and flow problems such as the three examples above. Currently, most known sketches are based on a graphical representation, often called edge and vertex sparsification, which leaves room for potential improvements like smaller storage by using another representation, and faster running time to answer a query. These algorithms employ a host of techniques, ranging from combinatorial methods, like graph partitioning and edge or vertex sampling, to standard tools in data-stream algorithms and in sparse recovery. There are also several lower bounds known, either combinatorial (for the graphical representation) or based on communication complexity and information theory. Many of the recent efforts focus on characterizing the tradeoff between accuracy and sketch size, yet many intriguing and very accessible problems are still open, and I will describe them in the talk.

Cite as

Robert Krauthgamer. Sketching Graphs and Combinatorial Optimization (Invited Talk). In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{krauthgamer:LIPIcs.FSTTCS.2019.2,
  author =	{Krauthgamer, Robert},
  title =	{{Sketching Graphs and Combinatorial Optimization}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.2},
  URN =		{urn:nbn:de:0030-drops-115646},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.2},
  annote =	{Keywords: Sketching, edge sparsification, vertex sparsification, Gomory-Hu tree, mimicking networks, graph sampling, succinct data structures}
}
Document
Invited Talk
Finkel Was Right: Counter-Examples to Several Conjectures on Variants of Vector Addition Systems (Invited Talk)

Authors: Ranko Lazić


Abstract
Studying one-dimensional grammar vector addition systems has long been advocated by Alain Finkel. In this presentation, we shall see how research on those systems has led to the recent breakthrough tower lower bound for the reachability problem on vector addition systems, obtained by Czerwiński et al. In fact, we shall look at how appropriate modifications of an underlying technical construction can lead to counter-examples to several conjectures on one-dimensional grammar vector addition systems, fixed-dimensional vector addition systems, and fixed-dimensional flat vector addition systems.

Cite as

Ranko Lazić. Finkel Was Right: Counter-Examples to Several Conjectures on Variants of Vector Addition Systems (Invited Talk). In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 3:1-3:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{lazic:LIPIcs.FSTTCS.2019.3,
  author =	{Lazi\'{c}, Ranko},
  title =	{{Finkel Was Right: Counter-Examples to Several Conjectures on Variants of Vector Addition Systems}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{3:1--3:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.3},
  URN =		{urn:nbn:de:0030-drops-115653},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.3},
  annote =	{Keywords: Petri nets, vector addition systems, reachability}
}
Document
Invited Talk
Progress in Lifting and Applications in Lower Bounds (Invited Talk)

Authors: Toniann Pitassi


Abstract
Ever since Yao introduced the communication complexity model in 1979, it has played a pivotal role in our understanding of limitations for a wide variety of problems in Computer Science. In this talk, I will present the lifting method, whereby communication lower bounds are obtained by lifting much simpler lower bounds. I will show how lifting theorems have been used to solve many open problems in a variety of areas of computer science, including circuit complexity, proof complexity, combinatorial optimization (size of linear programming formulations), cryptography (linear secret sharing schemes), game theory and privacy. At the end of the talk, I will sketch the proof of a unified lifting theorem for deterministic and randomized communication (joint work with Arkadev Chattopadhyay, Yuval Filmus, Sajin Koroth, and Or Meir).

Cite as

Toniann Pitassi. Progress in Lifting and Applications in Lower Bounds (Invited Talk). In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{pitassi:LIPIcs.FSTTCS.2019.4,
  author =	{Pitassi, Toniann},
  title =	{{Progress in Lifting and Applications in Lower Bounds}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.4},
  URN =		{urn:nbn:de:0030-drops-115664},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.4},
  annote =	{Keywords: complexity theory, lower bounds, communication complexity}
}
Document
Invited Talk
How Computer Science Informs Modern Auction Design (Invited Talk)

Authors: Tim Roughgarden


Abstract
Over the last twenty years, computer science has relied on concepts borrowed from game theory and economics to reason about applications ranging from internet routing to real-time auctions for online advertising. More recently, ideas have increasingly flowed in the opposite direction, with concepts and techniques from computer science beginning to influence economic theory and practice. In this lecture, I will illustrate this point with a detailed case study of the 2016-2017 Federal Communications Commission incentive auction for repurposing wireless spectrum. Computer science techniques, ranging from algorithms for NP-hard problems to nondeterministic communication complexity, have played a critical role both in the design of the reverse auction (with the government procuring existing licenses from television broadcasters) and in the analysis of the forward auction (when the procured licenses sell to the highest bidder).

Cite as

Tim Roughgarden. How Computer Science Informs Modern Auction Design (Invited Talk). In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{roughgarden:LIPIcs.FSTTCS.2019.5,
  author =	{Roughgarden, Tim},
  title =	{{How Computer Science Informs Modern Auction Design}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.5},
  URN =		{urn:nbn:de:0030-drops-115678},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.5},
  annote =	{Keywords: Game Theory, Auction Design, Algorithms}
}
Document
Invited Talk
An Algebraic Framework to Reason About Concurrency (Invited Talk)

Authors: Alexandra Silva


Abstract
Kleene algebra with tests (KAT) is an algebraic framework for reasoning about the control flow of sequential programs. Hoare, Struth, and collaborators proposed a concurrent extension of Kleene Algebra (CKA) as a first step towards developing algebraic reasoning for concurrent programs. Completing their research program and extending KAT to encompass concurrent behaviour has however proven to be more challenging than initially expected. The core problem appears because when generalising KAT to reason about concurrent programs, axioms native to KAT in conjunction with expected axioms for reasoning about concurrency lead to an unexpected equation about programs. In this talk, we will revise the literature on CKA(T) and explain the challenges and solutions in the development of an algebraic framework for concurrency. The talk is based on a series of papers joint with Tobias Kappé, Paul Brunet, Bas Luttik, Jurriaan Rot, Jana Wagemaker, and Fabio Zanasi [Tobias Kappé et al., 2017; Tobias Kappé et al., 2019; Kappé et al., 2018]. Additional references can be found on the CoNeCo project website: https://coneco-project.org/.

Cite as

Alexandra Silva. An Algebraic Framework to Reason About Concurrency (Invited Talk). In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, p. 6:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{silva:LIPIcs.FSTTCS.2019.6,
  author =	{Silva, Alexandra},
  title =	{{An Algebraic Framework to Reason About Concurrency}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{6:1--6:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.6},
  URN =		{urn:nbn:de:0030-drops-115687},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.6},
  annote =	{Keywords: Kleene Algebra, Concurrency, Automata}
}
Document
Connected Search for a Lazy Robber

Authors: Isolde Adler, Christophe Paul, and Dimitrios M. Thilikos


Abstract
The node search game against a lazy (or, respectively, agile) invisible robber has been introduced as a search-game analogue of the treewidth parameter (and, respectively, pathwidth). In the connected variants of the above two games, we additionally demand that, at each moment of the search, the clean territories are connected. The connected search game against an agile and invisible robber has been extensively examined. The monotone variant (where we also demand that the clean territories are progressively increasing) of this game corresponds to the graph parameter of connected pathwidth. It is known that the price of connectivity to search for an agile robber is bounded by 2, that is, the connected pathwidth of a graph is at most twice (plus some constant) its pathwidth. In this paper, we investigate the connected search game against a lazy robber. A lazy robber moves only when the searchers' strategy threatens the location that he currently occupies. We introduce two alternative graph-theoretic formulations of this game, one in terms of connected tree decompositions, and one in terms of (connected) layouts, leading to the graph parameter of connected treewidth. We observe that the connected treewidth parameter is closed under contractions and prove that for every k >= 2, the set of contraction obstructions of the class of graphs with connected treewidth at most k is infinite. Our main result is a complete characterization of the obstruction set for k=2. One may observe that, so far, only a few complete obstruction sets are explicitly known for contraction-closed graph classes. We finally show that, in contrast to the agile robber game, the price of connectivity is unbounded.

Cite as

Isolde Adler, Christophe Paul, and Dimitrios M. Thilikos. Connected Search for a Lazy Robber. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{adler_et_al:LIPIcs.FSTTCS.2019.7,
  author =	{Adler, Isolde and Paul, Christophe and Thilikos, Dimitrios M.},
  title =	{{Connected Search for a Lazy Robber}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.7},
  URN =		{urn:nbn:de:0030-drops-115695},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.7},
  annote =	{Keywords: Graph searching, Tree-decomposition, Obstructions}
}
Document
Parameterized Streaming Algorithms for Min-Ones d-SAT

Authors: Akanksha Agrawal, Arindam Biswas, Édouard Bonnet, Nick Brettell, Radu Curticapean, Dániel Marx, Tillmann Miltzow, Venkatesh Raman, and Saket Saurabh


Abstract
In this work, we initiate the study of the Min-Ones d-SAT problem in the parameterized streaming model. An instance of the problem consists of a d-CNF formula F and an integer k, and the objective is to determine if F has a satisfying assignment which sets at most k variables to 1. In the parameterized streaming model, input is provided as a stream, just as in the usual streaming model. A key difference is that the bound on the read-write memory available to the algorithm is O(f(k) log n) (f: N -> N, a computable function) as opposed to the O(log n) bound of the usual streaming model. The other important difference is that the number of passes the algorithm makes over its input must be a (preferably small) function of k. We design a (k + 1)-pass parameterized streaming algorithm that solves Min-Ones d-SAT (d >= 2) using space O((kd^(ck) + k^d)log n) (c > 0, a constant) and a (d + 1)^k-pass algorithm that uses space O(k log n). We also design a streaming kernelization for Min-Ones 2-SAT that makes (k + 2) passes and uses space O(k^6 log n) to produce a kernel with O(k^6) clauses. To complement these positive results, we show that any k-pass algorithm for Min-Ones d-SAT (d >= 2) requires space Omega(max{n^(1/k) / 2^k, log(n / k)}) on instances (F, k). This is achieved via a reduction from the streaming problem POT Pointer Chasing (Guha and McGregor [ICALP 2008]), which might be of independent interest. Given this, our (k + 1)-pass parameterized streaming algorithm is the best possible as far as the number of passes is concerned. In contrast to the results of Fafianie and Kratsch [MFCS 2014] and Chitnis et al. [SODA 2015], who independently showed that there are 1-pass parameterized streaming algorithms for Vertex Cover (a restriction of Min-Ones 2-SAT), we show using lower bounds from Communication Complexity that for any d >= 1, a 1-pass streaming algorithm for Min-Ones d-SAT requires space Omega(n). This excludes the possibility of a 1-pass parameterized streaming algorithm for the problem. Additionally, we show that any p-pass algorithm for the problem requires space Omega(n/p).
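
To pin down the problem statement (quite separately from the streaming algorithms above), a brute-force reference check might look as follows; the clause encoding (DIMACS-style signed integers) and the helper names are illustrative choices, not from the paper.

from itertools import combinations

def satisfies(clauses, true_vars):
    # An assignment is identified with the set of variables it sets to 1;
    # literal +i is satisfied iff i is in that set, literal -i iff it is not.
    return all(any((lit > 0) == (abs(lit) in true_vars) for lit in clause)
               for clause in clauses)

def min_ones_dsat(clauses, n, k):
    # Decide Min-Ones d-SAT by exhaustive search: is there a satisfying
    # assignment of the n variables that sets at most k of them to 1?
    return any(satisfies(clauses, set(chosen))
               for weight in range(k + 1)
               for chosen in combinations(range(1, n + 1), weight))

clauses = [[1, 2], [-1, 3]]                  # (x1 or x2) and (not x1 or x3)
print(min_ones_dsat(clauses, n=3, k=1))      # True: setting only x2 to 1 works
print(min_ones_dsat(clauses, n=3, k=0))      # False: the all-zeros assignment falsifies x1 or x2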

Cite as

Akanksha Agrawal, Arindam Biswas, Édouard Bonnet, Nick Brettell, Radu Curticapean, Dániel Marx, Tillmann Miltzow, Venkatesh Raman, and Saket Saurabh. Parameterized Streaming Algorithms for Min-Ones d-SAT. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{agrawal_et_al:LIPIcs.FSTTCS.2019.8,
  author =	{Agrawal, Akanksha and Biswas, Arindam and Bonnet, \'{E}douard and Brettell, Nick and Curticapean, Radu and Marx, D\'{a}niel and Miltzow, Tillmann and Raman, Venkatesh and Saurabh, Saket},
  title =	{{Parameterized Streaming Algorithms for Min-Ones d-SAT}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.8},
  URN =		{urn:nbn:de:0030-drops-115708},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.8},
  annote =	{Keywords: min, ones, sat, d-sat, parameterized, kernelization, streaming, space, efficient, algorithm, parameter}
}
Document
Fast Exact Algorithms Using Hadamard Product of Polynomials

Authors: V. Arvind, Abhranil Chatterjee, Rajit Datta, and Partha Mukhopadhyay


Abstract
Let C be an arithmetic circuit of poly(n) size given as input that computes a polynomial f in F[X], where X={x_1,x_2,...,x_n} and F is any field where the field arithmetic can be performed efficiently. We obtain new algorithms for the following two problems first studied by Koutis and Williams [Ioannis Koutis, 2008; Ryan Williams, 2009; Ioannis Koutis and Ryan Williams, 2016]. - (k,n)-MLC: Compute the sum of the coefficients of all degree-k multilinear monomials in the polynomial f. - k-MMD: Test if there is a nonzero degree-k multilinear monomial in the polynomial f. Our algorithms are based on the fact that the Hadamard product f o S_{n,k} is the degree-k multilinear part of f, where S_{n,k} is the k^{th} elementary symmetric polynomial. - For the (k,n)-MLC problem, we give a deterministic algorithm of run time O^*(n^(k/2+c log k)) (where c is a constant), answering an open question of Koutis and Williams [Ioannis Koutis and Ryan Williams, 2016]. As corollaries, we show O^*(binom{n}{downarrow k/2})-time exact counting algorithms for several combinatorial problems: k-Tree, t-Dominating Set, m-Dimensional k-Matching. - For the k-MMD problem, we give a randomized algorithm of run time 4.32^k * poly(n,k). Our algorithm uses only poly(n,k) space. This matches the run time of a recent algorithm [Cornelius Brand et al., 2018] for k-MMD which requires exponential (in k) space. Other results include fast deterministic algorithms for the (k,n)-MLC and k-MMD problems for depth three circuits.
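
To make the central identity concrete on a toy scale: for an explicitly expanded polynomial (the paper, of course, works with circuits and never expands f), the Hadamard product with S_{n,k} keeps exactly the degree-k multilinear terms, so (k,n)-MLC is the sum of their coefficients. A small illustrative sketch, with polynomials stored as dicts from exponent vectors to coefficients:

from itertools import combinations

def hadamard(f, g):
    # Coefficient-wise (Hadamard) product of two expanded polynomials.
    return {mono: coef * g[mono] for mono, coef in f.items() if mono in g}

def elem_symmetric(n, k):
    # S_{n,k}: coefficient 1 on every degree-k multilinear monomial, 0 elsewhere.
    return {tuple(1 if i in chosen else 0 for i in range(n)): 1
            for chosen in combinations(range(n), k)}

def mlc(f, n, k):
    # (k,n)-MLC on an expanded polynomial: sum of the coefficients of f o S_{n,k}.
    return sum(hadamard(f, elem_symmetric(n, k)).values())

# f = x1*x2 + 3*x1*x3 + 2*x1^2 + 5 over variables x1, x2, x3:
f = {(1, 1, 0): 1, (1, 0, 1): 3, (2, 0, 0): 2, (0, 0, 0): 5}
print(mlc(f, n=3, k=2))   # 4, the sum of the coefficients of x1*x2 and x1*x3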

Cite as

V. Arvind, Abhranil Chatterjee, Rajit Datta, and Partha Mukhopadhyay. Fast Exact Algorithms Using Hadamard Product of Polynomials. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{arvind_et_al:LIPIcs.FSTTCS.2019.9,
  author =	{Arvind, V. and Chatterjee, Abhranil and Datta, Rajit and Mukhopadhyay, Partha},
  title =	{{Fast Exact Algorithms Using Hadamard Product of Polynomials}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.9},
  URN =		{urn:nbn:de:0030-drops-115711},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.9},
  annote =	{Keywords: Hadamard Product, Multilinear Monomial Detection and Counting, Rectangular Permanent, Symmetric Polynomial}
}
Document
Approximate Online Pattern Matching in Sublinear Time

Authors: Diptarka Chakraborty, Debarati Das, and Michal Koucký


Abstract
We consider the approximate pattern matching problem under edit distance. In this problem we are given a pattern P of length m and a text T of length n over some alphabet Sigma, and a positive integer k. The goal is to find all the positions j in T such that there is a substring of T ending at j which has edit distance at most k from the pattern P. Recall that the edit distance between two strings is the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. For a position t in {1,...,n}, let k_t be the smallest edit distance between P and any substring of T ending at t. In this paper we give a constant factor approximation to the sequence k_1,k_2,...,k_n. We consider both offline and online settings. In the offline setting, where both P and T are available, we present an algorithm that for all t in {1,...,n}, computes the value of k_t approximately within a constant factor. The worst case running time of our algorithm is O~(n m^(3/4)). In the online setting, we are given P and then T arrives one symbol at a time. We design an algorithm that upon arrival of the t-th symbol of T computes k_t approximately within an O(1)-multiplicative factor and m^(8/9)-additive error. Our algorithm takes O~(m^(1-(7/54))) amortized time per symbol arrival and takes O~(m^(1-(1/54))) additional space apart from storing the pattern P. Both of our algorithms are randomized and produce the correct answer with high probability. To the best of our knowledge this is the first algorithm that takes worst-case sublinear (in the length of the pattern) time and sublinear extra space for the online approximate pattern matching problem. To get our result we build on the technique of Chakraborty, Das, Goldenberg, Koucký and Saks [FOCS'18] for computing a constant factor approximation of edit distance in sub-quadratic time.
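
For reference, the sequence k_1, ..., k_n defined above can be computed exactly by the classical O(nm)-time dynamic program for approximate matching; the paper's point is approximating it in sublinear time and space. A sketch of that exact baseline (the function name is illustrative):

def exact_kt(P, T):
    # Row i, column j holds the min edit distance between P[0:i] and a substring
    # of T ending at j; row 0 is all zeros because a match may start anywhere for free.
    m, n = len(P), len(T)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        cur = [i] + [0] * n            # matching against the empty substring costs i deletions
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                             # delete P[i-1]
                         cur[j - 1] + 1,                          # insert T[j-1]
                         prev[j - 1] + (P[i - 1] != T[j - 1]))    # match / substitute
        prev = cur
    return prev[1:]                    # the sequence k_1, ..., k_n

print(exact_kt("abc", "xabcyabd"))     # k_4 = 0 since "abc" occurs as a substring ending at position 4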

Cite as

Diptarka Chakraborty, Debarati Das, and Michal Koucký. Approximate Online Pattern Matching in Sublinear Time. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2019.10,
  author =	{Chakraborty, Diptarka and Das, Debarati and Kouck\'{y}, Michal},
  title =	{{Approximate Online Pattern Matching in Sublinear Time}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.10},
  URN =		{urn:nbn:de:0030-drops-115726},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.10},
  annote =	{Keywords: Approximate Pattern Matching, Online Pattern Matching, Edit Distance, Sublinear Algorithm, Streaming Algorithm}
}
Document
Constructing Faithful Homomorphisms over Fields of Finite Characteristic

Authors: Prerona Chatterjee and Ramprasad Saptharishi


Abstract
We study the question of algebraic rank or transcendence degree preserving homomorphisms over finite fields. This concept was first introduced by Beecken et al. [Malte Beecken et al., 2013] and exploited by them and Agrawal et al. [Manindra Agrawal et al., 2016] to design algebraic independence based identity tests using the Jacobian criterion over characteristic zero fields. An analogue of such constructions over fields of finite characteristic was unknown due to the failure of the Jacobian criterion over such fields. Building on a recent criterion of Pandey, Saxena and Sinhababu [Anurag Pandey et al., 2018], we construct explicit faithful maps for some natural classes of polynomials in fields of positive characteristic, when a certain parameter called the inseparable degree of the underlying polynomials is bounded (this parameter is always 1 in fields of characteristic zero). This presents the first generalisation of some of the results of Beecken, Mittmann and Saxena [Malte Beecken et al., 2013] and Agrawal, Saha, Saptharishi, Saxena [Manindra Agrawal et al., 2016] in the positive characteristic setting.

Cite as

Prerona Chatterjee and Ramprasad Saptharishi. Constructing Faithful Homomorphisms over Fields of Finite Characteristic. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chatterjee_et_al:LIPIcs.FSTTCS.2019.11,
  author =	{Chatterjee, Prerona and Saptharishi, Ramprasad},
  title =	{{Constructing Faithful Homomorphisms over Fields of Finite Characteristic}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.11},
  URN =		{urn:nbn:de:0030-drops-115733},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.11},
  annote =	{Keywords: Faithful Homomorphisms, Identity Testing, Algebraic Independence, Finite characteristic fields}
}
Document
Maximum-Area Rectangles in a Simple Polygon

Authors: Yujin Choi, Seungjun Lee, and Hee-Kap Ahn


Abstract
We study the problem of finding maximum-area rectangles contained in a polygon in the plane. There has been a fair amount of work for this problem when the rectangles have to be axis-aligned or when the polygon is convex. We consider this problem in a simple polygon with n vertices, possibly with holes, and with no restriction on the orientation of the rectangles. We present an algorithm that computes a maximum-area rectangle in O(n^3 log n) time using O(kn^2) space, where k is the number of reflex vertices of the input polygon. Our algorithm can report all maximum-area rectangles in the same time using O(n^3) space. We also present a simple algorithm that finds a maximum-area rectangle contained in a convex polygon with n vertices in O(n^3) time using O(n) space.

Cite as

Yujin Choi, Seungjun Lee, and Hee-Kap Ahn. Maximum-Area Rectangles in a Simple Polygon. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{choi_et_al:LIPIcs.FSTTCS.2019.12,
  author =	{Choi, Yujin and Lee, Seungjun and Ahn, Hee-Kap},
  title =	{{Maximum-Area Rectangles in a Simple Polygon}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.12},
  URN =		{urn:nbn:de:0030-drops-115745},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.12},
  annote =	{Keywords: Maximum-area rectangle, largest rectangle, simple polygon}
}
Document
Motif Counting in Preferential Attachment Graphs

Authors: Jan Dreier and Peter Rossmanith


Abstract
Network motifs are small patterns that occur in a network significantly more often than expected. They have gathered a lot of interest, as they may describe functional dependencies of complex networks and yield insights into their basic structure [Milo et al., 2002]. Therefore, a large amount of work went into the development of methods for network motif detection in complex networks [Kashtan et al., 2004; Schreiber and Schwöbbermeyer, 2005; Chen et al., 2006; Wernicke, 2006; Grochow and Kellis, 2007; Alon et al., 2008; Omidi et al., 2009]. The underlying problem of motif detection is to count how often a copy of a pattern graph H occurs in a target graph G. This problem is #W[1]-hard when parameterized by the size of H [Flum and Grohe, 2004] and cannot be solved in time f(|H|)n^o(|H|) under #ETH [Chen et al., 2005]. Preferential attachment graphs [Barabási and Albert, 1999] are a very popular random graph model designed to mimic complex networks. They are constructed by a random process that iteratively adds vertices and attaches them preferentially to vertices that already have high degree. Preferential attachment has been empirically observed in real growing networks [Newman, 2001; Jeong et al., 2003]. We show that one can count subgraph copies of a graph H in the preferential attachment graph G^n_m (with n vertices and nm edges, where m is usually a small constant) in expected time f(|H|) m^O(|H|^6) log(n)^O(|H|^12) n. This means the motif counting problem can be solved in expected quasilinear FPT time on preferential attachment graphs with respect to the parameters |H| and m. In particular, for fixed H and m the expected run time is O(n^(1+epsilon)) for every epsilon>0. Our results are obtained using new concentration bounds for degrees in preferential attachment graphs. Assume the (total) degree of a set of vertices at a time t of the random process is d. We show that if d is sufficiently large then the degree of the same set at a later time n is likely to be in the interval (1 +/- epsilon)d sqrt(n/t) (for epsilon > 0) for all n >= t. More specifically, the probability that this interval is left is exponentially small in d.
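
For intuition about the model only: the exact construction of G^n_m differs slightly between papers, so the generator below is an illustrative variant in which each new vertex attaches m edges to endpoints sampled proportionally to their current degree, followed by a brute-force count of one motif (the triangle).

import random
from itertools import combinations

def preferential_attachment(n, m, seed=0):
    # Each entry of `chances` is a vertex repeated once per incident edge end,
    # so sampling uniformly from it samples proportionally to degree.
    rng = random.Random(seed)
    edges, chances = set(), list(range(m))   # m seed vertices with one chance each
    for t in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(chances))
        for u in targets:
            edges.add((u, t))
            chances.extend([u, t])
    return edges

def count_triangles(edges, n):
    # Brute-force motif count for the pattern H = triangle.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for a, b, c in combinations(range(n), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

edges = preferential_attachment(n=200, m=3)
print(len(edges), count_triangles(edges, n=200))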

Cite as

Jan Dreier and Peter Rossmanith. Motif Counting in Preferential Attachment Graphs. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{dreier_et_al:LIPIcs.FSTTCS.2019.13,
  author =	{Dreier, Jan and Rossmanith, Peter},
  title =	{{Motif Counting in Preferential Attachment Graphs}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.13},
  URN =		{urn:nbn:de:0030-drops-115750},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.13},
  annote =	{Keywords: random graphs, motif counting, average case analysis, preferential attachment graphs}
}
Document
Parameterized k-Clustering: Tractability Island

Authors: Fedor V. Fomin, Petr A. Golovach, and Kirill Simonov


Abstract
In k-Clustering we are given a multiset of n vectors X subset Z^d and a nonnegative number D, and we need to decide whether X can be partitioned into k clusters C_1, ..., C_k such that the cost sum_{i=1}^k min_{c_i in R^d} sum_{x in C_i} |x-c_i|_p^p <= D, where |*|_p is the Minkowski (L_p) norm of order p. For p=1, k-Clustering is the well-known k-Median. For p=2, the case of the Euclidean distance, k-Clustering is k-Means. We study k-Clustering from the perspective of parameterized complexity. The problem is known to be NP-hard for k=2 and it is also NP-hard for d=2. It is a long-standing open question whether the problem is fixed-parameter tractable (FPT) for the combined parameter d+k. In this paper, we focus on the parameterization by D. We complement the known negative results by showing that for p=0 and p=infty, k-Clustering is W[1]-hard when parameterized by D. Interestingly, the complexity landscape of the problem appears to be more intricate than expected. We discover a tractability island of k-Clustering: for every p in (0,1], k-Clustering is solvable in time 2^O(D log D) (nd)^O(1).
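
A concrete reading of the objective, as a sketch restricted to p = 1 and p = 2, where the optimal centre of a fixed cluster is the coordinate-wise median and mean respectively (other values of p would require a one-dimensional numerical minimisation; function names are illustrative):

from statistics import mean, median

def clustering_cost(clusters, p):
    # Cost of a fixed partition under sum_i min_{c_i} sum_{x in C_i} |x - c_i|_p^p.
    # The inner minimisation decomposes coordinate-wise; the optimal coordinate
    # is a median for p = 1 and the mean for p = 2.
    centre_of = {1: median, 2: mean}[p]
    total = 0.0
    for C in clusters:
        dim = len(C[0])
        centre = [centre_of(x[j] for x in C) for j in range(dim)]
        total += sum(abs(x[j] - centre[j]) ** p for x in C for j in range(dim))
    return total

# k = 2 clusters of points in Z^2, k-Means cost (p = 2):
print(clustering_cost([[(0, 0), (2, 0)], [(10, 1), (10, 3)]], p=2))   # 4.0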

Cite as

Fedor V. Fomin, Petr A. Golovach, and Kirill Simonov. Parameterized k-Clustering: Tractability Island. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{fomin_et_al:LIPIcs.FSTTCS.2019.14,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Simonov, Kirill},
  title =	{{Parameterized k-Clustering: Tractability Island}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{14:1--14:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.14},
  URN =		{urn:nbn:de:0030-drops-115761},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.14},
  annote =	{Keywords: clustering, parameterized complexity, k-means, k-median}
}
Document
Nonnegative Rank Measures and Monotone Algebraic Branching Programs

Authors: Hervé Fournier, Guillaume Malod, Maud Szusterman, and Sébastien Tavenas


Abstract
Inspired by Nisan’s characterization of noncommutative complexity (Nisan 1991), we study different notions of nonnegative rank, associated complexity measures and their link with monotone computations. In particular we answer negatively an open question of Nisan asking whether nonnegative rank characterizes monotone noncommutative complexity for algebraic branching programs. We also prove a rather tight lower bound for the computation of elementary symmetric polynomials by algebraic branching programs in the monotone setting or, equivalently, in the homogeneous syntactically multilinear setting.

Cite as

Hervé Fournier, Guillaume Malod, Maud Szusterman, and Sébastien Tavenas. Nonnegative Rank Measures and Monotone Algebraic Branching Programs. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{fournier_et_al:LIPIcs.FSTTCS.2019.15,
  author =	{Fournier, Herv\'{e} and Malod, Guillaume and Szusterman, Maud and Tavenas, S\'{e}bastien},
  title =	{{Nonnegative Rank Measures and Monotone Algebraic Branching Programs}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.15},
  URN =		{urn:nbn:de:0030-drops-115774},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.15},
  annote =	{Keywords: Elementary symmetric polynomials, lower bounds}
}
Document
Unambiguous Catalytic Computation

Authors: Chetan Gupta, Rahul Jain, Vimal Raj Sharma, and Raghunath Tewari


Abstract
The catalytic Turing machine is a model of computation defined by Buhrman, Cleve, Koucký, Loff, and Speelman (STOC 2014). Compared to the classical space-bounded Turing machine, this model has extra space that is filled with arbitrary content, in addition to the clean space. In such a model we study whether this additional filled space can be used to increase the power of computation, with the condition that its initial content must be restored at the end of the computation. In this paper, we define the notion of an unambiguous catalytic Turing machine and prove that, under a standard derandomization assumption, the class of problems solved by an unambiguous catalytic Turing machine is the same as the class of problems solved by a general nondeterministic catalytic Turing machine in the logspace setting.

Cite as

Chetan Gupta, Rahul Jain, Vimal Raj Sharma, and Raghunath Tewari. Unambiguous Catalytic Computation. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 16:1-16:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{gupta_et_al:LIPIcs.FSTTCS.2019.16,
  author =	{Gupta, Chetan and Jain, Rahul and Sharma, Vimal Raj and Tewari, Raghunath},
  title =	{{Unambiguous Catalytic Computation}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.16},
  URN =		{urn:nbn:de:0030-drops-115782},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.16},
  annote =	{Keywords: Catalytic computation, Logspace, Reinhardt-Allender}
}
Document
A Fast Exponential Time Algorithm for Max Hamming Distance X3SAT

Authors: Gordon Hoi, Sanjay Jain, and Frank Stephan


Abstract
X3SAT is the problem of whether one can satisfy a given set of clauses with up to three literals such that in every clause, exactly one literal is true and the others are false. A related question is to determine the maximal Hamming distance between two solutions of the instance. Dahllöf provided an algorithm with runtime O(1.8348^n) for Maximum Hamming Distance XSAT, a more general problem than the X3SAT variant; Fu, Zhou and Yin considered Maximum Hamming Distance for X3SAT and found for this problem an algorithm with runtime O(1.6760^n). In this paper, we propose an algorithm running in O(1.3298^n) time to solve the Max Hamming Distance X3SAT problem; the algorithm actually counts, for each k, the number of pairs of solutions which have Hamming distance k.
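
On small instances the quantities in question are easy to spell out by exhaustive search (which is exactly the 2^n behaviour the paper's branching algorithm improves on); an illustrative sketch using DIMACS-style signed literals:

from itertools import product

def x3sat_solutions(clauses, n):
    # All X3SAT solutions by brute force: assignments under which every clause
    # has exactly one true literal (+i means x_i, -i means not x_i).
    return [bits for bits in product([0, 1], repeat=n)
            if all(sum((lit > 0) == bool(bits[abs(lit) - 1]) for lit in clause) == 1
                   for clause in clauses)]

def hamming_pair_counts(clauses, n):
    # For each k, the number of unordered solution pairs at Hamming distance k;
    # the largest k with a nonzero count is the Max Hamming Distance X3SAT value.
    sols, counts = x3sat_solutions(clauses, n), {}
    for i in range(len(sols)):
        for j in range(i + 1, len(sols)):
            k = sum(a != b for a, b in zip(sols[i], sols[j]))
            counts[k] = counts.get(k, 0) + 1
    return counts

# x1 + x2 + x3 = 1 (exactly one of the three variables is true):
print(hamming_pair_counts([[1, 2, 3]], n=3))   # {2: 3} - every pair of solutions differs in 2 bits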

Cite as

Gordon Hoi, Sanjay Jain, and Frank Stephan. A Fast Exponential Time Algorithm for Max Hamming Distance X3SAT. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{hoi_et_al:LIPIcs.FSTTCS.2019.17,
  author =	{Hoi, Gordon and Jain, Sanjay and Stephan, Frank},
  title =	{{A Fast Exponential Time Algorithm for Max Hamming Distance X3SAT}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.17},
  URN =		{urn:nbn:de:0030-drops-115799},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.17},
  annote =	{Keywords: X3SAT Problem, Maximum Hamming Distance of Solutions, Exponential Time Algorithms, DPLL Algorithms}
}
Document
Exact and Approximate Digraph Bandwidth

Authors: Pallavi Jain, Lawqueen Kanesh, William Lochet, Saket Saurabh, and Roohani Sharma


Abstract
In this paper, we introduce a directed variant of the classical Bandwidth problem and study it from the viewpoint of moderately exponential time algorithms, both exactly and approximately. Motivated by the definitions of the directed variants of the classical Cutwidth and Pathwidth problems, we define Digraph Bandwidth as follows. Given a digraph D and an ordering sigma of its vertices, the digraph bandwidth of sigma with respect to D is equal to the maximum value of sigma(v)-sigma(u) over all arcs (u,v) of D going forward along sigma (that is, when sigma(u) < sigma(v)). The Digraph Bandwidth problem takes as input a digraph D and asks to output an ordering with the minimum digraph bandwidth. The undirected Bandwidth problem easily reduces to Digraph Bandwidth, which immediately implies that Digraph Bandwidth is NP-hard. While an O^*(n!) time algorithm for the problem is trivial, the goal of this paper is to design algorithms for Digraph Bandwidth which have running times of the form 2^O(n). In particular, we obtain the following results. Here, n and m denote the number of vertices and arcs of the input digraph D, respectively. - Digraph Bandwidth can be solved in O^*(3^n * 2^m) time. This result implies a 2^O(n) time algorithm on sparse graphs, such as graphs of bounded average degree. - Let G be the underlying undirected graph of the input digraph. If the treewidth of G is at most t, then Digraph Bandwidth can be solved in time O^*(2^(n + (t+2) log n)). This result implies a 2^(n+O(sqrt(n) log n)) algorithm for directed planar graphs and, in general, for the class of digraphs whose underlying undirected graph excludes some fixed graph H as a minor. - Digraph Bandwidth can be solved in min{O^*(4^n * b^n), O^*(4^n * 2^(b log b log n))} time, where b denotes the optimal digraph bandwidth of D. This allows us to deduce a 2^O(n) algorithm in many cases, for example when b <= n/(log^2 n). - Finally, we give a (Single) Exponential Time Approximation Scheme for Digraph Bandwidth. In particular, we show that for any fixed real epsilon > 0, we can find an ordering whose digraph bandwidth is at most (1+epsilon) times the optimal digraph bandwidth, in time O^*(4^n * (ceil[4/epsilon])^n).
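
The definition of digraph bandwidth and the trivial O^*(n!) baseline mentioned above translate directly into a few lines; this is purely illustrative and not one of the paper's algorithms.

from itertools import permutations

def digraph_bandwidth(order, arcs):
    # Bandwidth of one ordering: max of sigma(v) - sigma(u) over arcs (u, v)
    # that point forward along the ordering; backward arcs contribute nothing.
    pos = {v: i for i, v in enumerate(order)}
    return max((pos[v] - pos[u] for u, v in arcs if pos[u] < pos[v]), default=0)

def minimum_digraph_bandwidth(vertices, arcs):
    # The trivial O*(n!) algorithm: try every ordering of the vertices.
    return min(digraph_bandwidth(order, arcs) for order in permutations(vertices))

arcs = [(1, 2), (2, 3), (3, 1)]                     # a directed triangle
print(minimum_digraph_bandwidth([1, 2, 3], arcs))   # 1, e.g. with the ordering 3, 1, 2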

Cite as

Pallavi Jain, Lawqueen Kanesh, William Lochet, Saket Saurabh, and Roohani Sharma. Exact and Approximate Digraph Bandwidth. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{jain_et_al:LIPIcs.FSTTCS.2019.18,
  author =	{Jain, Pallavi and Kanesh, Lawqueen and Lochet, William and Saurabh, Saket and Sharma, Roohani},
  title =	{{Exact and Approximate Digraph Bandwidth}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.18},
  URN =		{urn:nbn:de:0030-drops-115802},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.18},
  annote =	{Keywords: directed bandwidth, digraph bandwidth, approximation scheme, exact exponential algorithms}
}
Document
An O(n^(1/4 +epsilon)) Space and Polynomial Algorithm for Grid Graph Reachability

Authors: Rahul Jain and Raghunath Tewari


Abstract
The reachability problem is to determine if there exists a path from one vertex to another in a graph. Grid graphs are the class of graphs where vertices are present on the lattice points of a two-dimensional grid, and an edge can occur between a vertex and its immediate horizontal or vertical neighbor only. Asano et al. gave the first simultaneous time-space bound for reachability in grid graphs by presenting an algorithm that solves the problem in polynomial time and O(n^(1/2 + epsilon)) space. In 2018, the space bound was improved to O~(n^(1/3)) by Ashida and Nakagawa. In this paper, we show that reachability in an n vertex grid graph can be decided by an algorithm using O(n^(1/4 + epsilon)) space and polynomial time simultaneously.
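
As a point of comparison for the space bound: a standard BFS decides grid-graph reachability in polynomial time but stores Theta(n) vertices in the worst case. The sketch below treats the grid graph as undirected for simplicity; the directed case works the same way except that only the out-neighbours of a cell are followed.

from collections import deque

def grid_reachable(cells, s, t):
    # `cells` is the set of lattice points present in the graph; two present
    # cells are adjacent iff they differ by exactly 1 in one coordinate.
    # BFS may store up to |cells| points, i.e. linear space.
    if s not in cells or t not in cells:
        return False
    seen, queue = {s}, deque([s])
    while queue:
        x, y = queue.popleft()
        if (x, y) == t:
            return True
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False

cells = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 0)}
print(grid_reachable(cells, (0, 0), (2, 0)))               # True, via (0,1), (1,1), (2,1)
print(grid_reachable(cells - {(1, 1)}, (0, 0), (2, 0)))    # False once the bridging cell is removed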

Cite as

Rahul Jain and Raghunath Tewari. An O(n^(1/4 +epsilon)) Space and Polynomial Algorithm for Grid Graph Reachability. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{jain_et_al:LIPIcs.FSTTCS.2019.19,
  author =	{Jain, Rahul and Tewari, Raghunath},
  title =	{{An O(n^(1/4 +epsilon)) Space and Polynomial Algorithm for Grid Graph Reachability}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.19},
  URN =		{urn:nbn:de:0030-drops-115813},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.19},
  annote =	{Keywords: graph reachability, grid graph, graph algorithm, sublinear space algorithm}
}
Document
Popular Roommates in Simply Exponential Time

Authors: Telikepalli Kavitha


Abstract
We consider the popular matching problem in a graph G = (V,E) on n vertices with strict preferences. A matching M is popular if there is no matching N in G such that vertices that prefer N to M outnumber those that prefer M to N. It is known that it is NP-hard to decide if G has a popular matching or not. There is no faster algorithm known for this problem than the brute force algorithm that could take n! time. Here we show a simply exponential time algorithm for this problem, i.e., one that runs in O^*(k^n) time, where k is a constant. We use the recent breakthrough result on the maximum number of stable matchings possible in such instances to analyze our algorithm for the popular matching problem. We identify a natural (also, hard) subclass of popular matchings called truly popular matchings and show an O^*(2^n) time algorithm for the truly popular matching problem.
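
The head-to-head vote in the definition of popularity is easy to state in code. The sketch below only illustrates the definition (it is not an algorithm for finding popular matchings): it counts the votes between two matchings of a small roommates instance with cyclic preferences; all names are illustrative.

def votes(M, N, prefs):
    # Compare matchings M and N (dicts vertex -> partner, vertex absent if unmatched).
    # A vertex prefers whichever matching gives it a better-ranked partner;
    # being unmatched is worse than any partner on its preference list.
    def rank(v, partner):
        return prefs[v].index(partner) if partner is not None else len(prefs[v])
    for_N = for_M = 0
    for v in prefs:
        rM, rN = rank(v, M.get(v)), rank(v, N.get(v))
        if rN < rM:
            for_N += 1
        elif rM < rN:
            for_M += 1
    return for_N, for_M

prefs = {'a': ['b', 'c'], 'b': ['c', 'a'], 'c': ['a', 'b']}
M = {'a': 'b', 'b': 'a'}      # a-b matched, c unmatched
N = {'b': 'c', 'c': 'b'}      # b-c matched, a unmatched
print(votes(M, N, prefs))     # (2, 1): two vertices prefer N and one prefers M, so M is not popular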

Cite as

Telikepalli Kavitha. Popular Roommates in Simply Exponential Time. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{kavitha:LIPIcs.FSTTCS.2019.20,
  author =	{Kavitha, Telikepalli},
  title =	{{Popular Roommates in Simply Exponential Time}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{20:1--20:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.20},
  URN =		{urn:nbn:de:0030-drops-115820},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.20},
  annote =	{Keywords: Roommates instance, Popular matching, Stable matching, Dual certificate}
}
Document
The Complexity of Finding S-Factors in Regular Graphs

Authors: Sanjana Kolisetty, Linh Le, Ilya Volkovich, and Mihalis Yannakakis


Abstract
A graph G has an S-factor if there exists a spanning subgraph F of G such that for all v in V: deg_F(v) in S. The simplest example of such a factor is a 1-factor, which corresponds to a perfect matching in a graph. In this paper we study the computational complexity of finding S-factors in regular graphs. Our techniques combine some classical as well as recent tools from graph theory.
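
The following minimal check illustrates only the definition (the paper's contribution is a complexity classification, not this test): F is an S-factor of G if F is a spanning subgraph in which every degree lies in S, and a perfect matching is exactly a {1}-factor.

def is_s_factor(vertices, G_edges, F_edges, S):
    if not F_edges <= G_edges:                 # F must be a subgraph of G
        return False
    deg = {v: 0 for v in vertices}             # spanning: every vertex of G counts
    for (u, v) in F_edges:
        deg[u] += 1
        deg[v] += 1
    return all(deg[v] in S for v in vertices)

V = {1, 2, 3, 4}
G = {(1, 2), (2, 3), (3, 4), (1, 4)}              # a 4-cycle
print(is_s_factor(V, G, {(1, 2), (3, 4)}, {1}))   # True: a perfect matching
print(is_s_factor(V, G, {(1, 2)}, {1}))           # False: vertices 3, 4 have degree 0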

Cite as

Sanjana Kolisetty, Linh Le, Ilya Volkovich, and Mihalis Yannakakis. The Complexity of Finding S-Factors in Regular Graphs. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 21:1-21:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{kolisetty_et_al:LIPIcs.FSTTCS.2019.21,
  author =	{Kolisetty, Sanjana and Le, Linh and Volkovich, Ilya and Yannakakis, Mihalis},
  title =	{{The Complexity of Finding S-Factors in Regular Graphs}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{21:1--21:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.21},
  URN =		{urn:nbn:de:0030-drops-115834},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.21},
  annote =	{Keywords: constraint satisfaction problem, Dichotomy theorem, Graph Factors, Regular Graphs}
}
Document
More on AC^0[oplus] and Variants of the Majority Function

Authors: Nutan Limaye, Srikanth Srinivasan, and Utkarsh Tripathi


Abstract
In this paper we prove two results about AC^0[oplus] circuits. (1) We show that for d(N) = o(sqrt(log N/log log N)) and N <= s(N) <= 2^(dN^(1/4d^2)) there is an explicit family of functions {f_N:{0,1}^N -> {0,1}} such that (a) f_N has uniform AC^0 formulas of depth d and size at most s, and (b) f_N does not have AC^0[oplus] formulas of depth d and size s^epsilon, where epsilon is a fixed absolute constant. This gives a quantitative improvement on the recent result of Limaye, Srinivasan, Sreenivasaiah, Tripathi, and Venkitesh (STOC, 2019), which proved a similar Fixed-Depth Size-Hierarchy theorem but for d << log log N and s << exp(N^(1/2^Omega(d))). As in the previous result, we use the Coin Problem to prove our hierarchy theorem. Our main technical result is the construction of uniform size-optimal formulas for solving the coin problem with improved sample complexity (1/delta)^O(d) (down from (1/delta)^(2^O(d)) in the previous result). (2) In our second result, we show that randomness buys depth in the AC^0[oplus] setting. Formally, we show that for any fixed constant d >= 2, there is a family of Boolean functions that has polynomial-sized randomized uniform AC^0 circuits of depth d but no polynomial-sized (deterministic) AC^0[oplus] circuits of depth d. Previously, Viola (Computational Complexity, 2014) showed that an increase in depth (by at least 2) is essential to avoid superpolynomial blow-up while derandomizing randomized AC^0 circuits. We show that an increase in depth (by at least 1) is essential even for AC^0[oplus]. As in Viola’s result, the separating examples are promise variants of the Majority function on N inputs that accept inputs of weight at least N/2 + N/(log N)^(d-1) and reject inputs of weight at most N/2 - N/(log N)^(d-1).
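
For background on the Coin Problem used in the proof (this is the standard sampling view, not the AC^0 formulas constructed in the paper): the task is to decide whether a coin has heads-probability 1/2 + delta or 1/2 - delta, and a majority vote over roughly 1/delta^2 independent tosses succeeds with high probability by a Chernoff bound. The constant 10 below is an arbitrary choice for the example.

import random

def guess_sign(delta, samples):
    # Toss a coin with heads-probability 1/2 + delta and return the majority guess.
    heads = sum(random.random() < 0.5 + delta for _ in range(samples))
    return +1 if 2 * heads > samples else -1

delta = 0.05
n = int(10 / delta ** 2)            # ~ 1/delta^2 samples (constant 10 chosen arbitrarily)
trials = 200
correct = sum(guess_sign(+delta, n) == +1 for _ in range(trials))
print(correct, "out of", trials, "correct on the +delta coin")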

Cite as

Nutan Limaye, Srikanth Srinivasan, and Utkarsh Tripathi. More on AC^0[oplus] and Variants of the Majority Function. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{limaye_et_al:LIPIcs.FSTTCS.2019.22,
  author =	{Limaye, Nutan and Srinivasan, Srikanth and Tripathi, Utkarsh},
  title =	{{More on AC^0\lbrackoplus\rbrack and Variants of the Majority Function}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.22},
  URN =		{urn:nbn:de:0030-drops-115844},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.22},
  annote =	{Keywords: AC^0\lbrackoplus\rbrack, Coin Problem, Promise Majority}
}
Document
Planted Models for k-Way Edge and Vertex Expansion

Authors: Anand Louis and Rakesh Venkat


Abstract
Graph partitioning problems are a central topic of study in algorithms and complexity theory. Edge expansion and vertex expansion, two popular graph partitioning objectives, seek a 2-partition of the vertex set of the graph that minimizes the considered objective. However, for many natural applications, one might require a graph to be partitioned into k parts, for some k >= 2. For a k-partition S_1, ..., S_k of the vertex set of a graph G = (V,E), the k-way edge expansion (resp. vertex expansion) of {S_1, ..., S_k} is defined as max_{i in [k]} Phi(S_i), and the balanced k-way edge expansion (resp. vertex expansion) of G is defined as min_{{S_1, ..., S_k} in P_k} max_{i in [k]} Phi(S_i), where P_k is the set of all balanced k-partitions of V (i.e., each part of a k-partition in P_k should have cardinality |V|/k), and Phi(S) denotes the edge expansion (resp. vertex expansion) of S subset V. We study a natural planted model for graphs where the vertex set of a graph has a k-partition S_1, ..., S_k such that the graph induced on each S_i has large expansion, but each S_i has small edge expansion (resp. vertex expansion) in the graph. We give bi-criteria approximation algorithms for computing the balanced k-way edge expansion (resp. vertex expansion) of instances in this planted model.
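
As a small illustration of the objective being approximated (using one common normalisation, Phi(S) = |E(S, V \ S)| / |S|; the paper's exact definition of Phi may differ), the value of a k-partition is the worst expansion over its parts:

def edge_expansion(edges, part):
    part = set(part)
    cut = sum(1 for (u, v) in edges if (u in part) != (v in part))
    return cut / len(part)

def k_way_value(edges, parts):
    return max(edge_expansion(edges, P) for P in parts)

# Two triangles joined by a single bridge edge: the planted 2-partition has value 1/3.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
print(k_way_value(edges, [[0, 1, 2], [3, 4, 5]]))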

Cite as

Anand Louis and Rakesh Venkat. Planted Models for k-Way Edge and Vertex Expansion. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{louis_et_al:LIPIcs.FSTTCS.2019.23,
  author =	{Louis, Anand and Venkat, Rakesh},
  title =	{{Planted Models for k-Way Edge and Vertex Expansion}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.23},
  URN =		{urn:nbn:de:0030-drops-115857},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.23},
  annote =	{Keywords: Vertex Expansion, k-way partitioning, Semi-Random models, Planted Models, Approximation Algorithms, Beyond Worst Case Analysis}
}
Document
Online Non-Preemptive Scheduling to Minimize Maximum Weighted Flow-Time on Related Machines

Authors: Giorgio Lucarelli, Benjamin Moseley, Nguyen Kim Thang, Abhinav Srivastav, and Denis Trystram


Abstract
We consider the problem of scheduling jobs to minimize the maximum weighted flow-time on a set of related machines. When jobs can be preempted, this problem is well understood; for example, there exists a constant competitive algorithm using speed augmentation. When jobs must be scheduled non-preemptively, only hardness results are known. In this paper, we present the first online guarantees for the non-preemptive variant. We present the first constant competitive algorithm for minimizing the maximum weighted flow-time on related machines by relaxing the problem and assuming that the online algorithm can reject a small fraction of the total weight of jobs. This is essentially the best result possible, given the strong lower bounds on the non-preemptive problem without rejection.
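
To fix the objective (this only evaluates a given schedule and is not the paper's competitive algorithm): the weighted flow-time of a job is its weight times the time it spends in the system, and the cost of a schedule is the maximum over all jobs. The job and schedule encodings below are assumptions of the example.

def max_weighted_flow_time(jobs, completion):
    # jobs: {job_id: (release_time, weight)}; completion: {job_id: completion_time}
    return max(w * (completion[j] - r) for j, (r, w) in jobs.items())

jobs = {"a": (0, 1.0), "b": (2, 3.0)}
completion = {"a": 5, "b": 4}
print(max_weighted_flow_time(jobs, completion))   # max(1*(5-0), 3*(4-2)) = 6.0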

Cite as

Giorgio Lucarelli, Benjamin Moseley, Nguyen Kim Thang, Abhinav Srivastav, and Denis Trystram. Online Non-Preemptive Scheduling to Minimize Maximum Weighted Flow-Time on Related Machines. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 24:1-24:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{lucarelli_et_al:LIPIcs.FSTTCS.2019.24,
  author =	{Lucarelli, Giorgio and Moseley, Benjamin and Thang, Nguyen Kim and Srivastav, Abhinav and Trystram, Denis},
  title =	{{Online Non-Preemptive Scheduling to Minimize Maximum Weighted Flow-Time on Related Machines}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{24:1--24:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.24},
  URN =		{urn:nbn:de:0030-drops-115867},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.24},
  annote =	{Keywords: Online Algorithms, Scheduling, Resource Augmentation}
}
Document
On the AC^0[oplus] Complexity of Andreev’s Problem

Authors: Aditya Potukuchi


Abstract
Andreev’s Problem is the following: given an integer d and a subset S of F_q x F_q, is there a polynomial y = p(x) of degree at most d such that for every a in F_q, (a,p(a)) is in S? We show an AC^0[oplus] lower bound for this problem. This problem appears to be similar to the list recovery problem for degree-d Reed-Solomon codes over F_q, which is the following: given subsets A_1,...,A_q of F_q, output all (if any) Reed-Solomon codewords contained in A_1 x ... x A_q. In particular, we study this problem when the lists A_1, ..., A_q are randomly chosen, and are of a certain size. This may be of independent interest.
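
To make the statement concrete, here is a brute-force check over all polynomials of degree at most d for a tiny prime field (the paper proves an AC^0[oplus] lower bound for the problem; nothing below reflects its techniques):

from itertools import product

def andreev(q, d, S):
    # Search for coefficients (c_0, ..., c_d) of p(x) = sum_i c_i x^i with graph inside S.
    for coeffs in product(range(q), repeat=d + 1):
        if all((a, sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q) in S
               for a in range(q)):
            return coeffs
    return None

q, d = 5, 1
S = {(a, (2 * a + 3) % q) for a in range(q)} | {(0, 0)}   # contains the line y = 2x + 3
print(andreev(q, d, S))                                   # (3, 2), i.e. p(x) = 3 + 2x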

Cite as

Aditya Potukuchi. On the AC^0[oplus] Complexity of Andreev’s Problem. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{potukuchi:LIPIcs.FSTTCS.2019.25,
  author =	{Potukuchi, Aditya},
  title =	{{On the AC^0\lbrackoplus\rbrack Complexity of Andreev’s Problem}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.25},
  URN =		{urn:nbn:de:0030-drops-115879},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.25},
  annote =	{Keywords: List Recovery, Sharp Threshold, Fourier Analysis}
}
Document
The Preemptive Resource Allocation Problem

Authors: Kanthi Sarpatwar, Baruch Schieber, and Hadas Shachnai


Abstract
We revisit a classical scheduling model to incorporate modern trends in data center networks and cloud services. Addressing some key challenges in the allocation of shared resources to user requests (jobs) in such settings, we consider the following variants of the classic resource allocation problem (RAP). The input to our problems is a set J of jobs and a set M of homogeneous hosts, each of which has an available amount of some resource. A job is associated with a release time, a due date, a weight and a given length, as well as its resource requirement. A feasible schedule is an allocation of the resource to a subset of the jobs, satisfying the job release times/due dates as well as the resource constraints. A crucial distinction between classic RAP and our problems is that we allow preemption and migration of jobs, motivated by virtualization techniques. We consider two natural objectives: throughput maximization (MaxT), which seeks a maximum weight subset of the jobs that can be feasibly scheduled on the hosts in M, and resource minimization (MinR), that is, finding the minimum number of (homogeneous) hosts needed to feasibly schedule all jobs. Both problems are known to be NP-hard. We first present an Omega(1)-approximation algorithm for MaxT instances where time-windows form a laminar family of intervals. We then extend the algorithm to handle instances with arbitrary time-windows, assuming there is sufficient slack for each job to be completed. For MinR we study a more general setting with d resources and derive an O(log d)-approximation for any fixed d >= 1, under the assumption that time-windows are not too small. This assumption can be removed, leading to a slightly worse ratio of O(log d log^* T), where T is the maximum due date of any job.

Cite as

Kanthi Sarpatwar, Baruch Schieber, and Hadas Shachnai. The Preemptive Resource Allocation Problem. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{sarpatwar_et_al:LIPIcs.FSTTCS.2019.26,
  author =	{Sarpatwar, Kanthi and Schieber, Baruch and Shachnai, Hadas},
  title =	{{The Preemptive Resource Allocation Problem}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{26:1--26:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.26},
  URN =		{urn:nbn:de:0030-drops-115886},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.26},
  annote =	{Keywords: Machine Scheduling, Resource Allocation, Vector Packing, Approximation Algorithms}
}
Document
Online and Offline Algorithms for Circuit Switch Scheduling

Authors: Roy Schwartz, Mohit Singh, and Sina Yazdanbod


Abstract
Motivated by the use of high speed circuit switches in large scale data centers, we consider the problem of circuit switch scheduling. In this problem we are given demands between pairs of servers and the goal is to schedule at every time step a matching between the servers while maximizing the total satisfied demand over time. The crux of this scheduling problem is that once one shifts from one matching to a different one, a fixed delay delta is incurred during which no data can be transmitted. For the offline version of the problem we present an algorithm with approximation ratio (1-1/e-epsilon) (for any constant epsilon > 0). Since the natural linear programming relaxation for the problem has an unbounded integrality gap, we adopt a hybrid approach that combines a combinatorial greedy algorithm with randomized rounding of a different, suitable linear program. For the online version of the problem we present a bi-criteria algorithm with competitive ratio ((e-1)/(2e-1)-epsilon) (for any constant epsilon > 0) that exceeds the allotted time by an additive factor of O(delta/epsilon). We note that no uni-criteria online algorithm is possible. Surprisingly, we obtain the result by reducing the online version to the offline one.
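
A toy evaluation of the objective may help (it only scores a given schedule under the stated reconfiguration delay; the pair-based demand encoding and the fixed horizon are assumptions of the example, and none of this reflects the paper's algorithms):

def satisfied_demand(demand, schedule, delta, horizon):
    # schedule: list of (matching, duration); each switch costs delta time units.
    remaining, sent, t = dict(demand), 0.0, 0.0
    for matching, duration in schedule:
        t += delta                                    # reconfiguration: nothing is sent
        active = max(0.0, min(duration, horizon - t))
        for pair in matching:                         # matched pairs transmit in parallel
            moved = min(remaining.get(pair, 0.0), active)
            remaining[pair] = remaining.get(pair, 0.0) - moved
            sent += moved
        t += active
    return sent

demand = {("s1", "s2"): 3.0, ("s3", "s4"): 2.0, ("s1", "s4"): 1.0}
schedule = [([("s1", "s2"), ("s3", "s4")], 2.0), ([("s1", "s4")], 1.0)]
print(satisfied_demand(demand, schedule, delta=0.5, horizon=5.0))   # 5.0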

Cite as

Roy Schwartz, Mohit Singh, and Sina Yazdanbod. Online and Offline Algorithms for Circuit Switch Scheduling. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{schwartz_et_al:LIPIcs.FSTTCS.2019.27,
  author =	{Schwartz, Roy and Singh, Mohit and Yazdanbod, Sina},
  title =	{{Online and Offline Algorithms for Circuit Switch Scheduling}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.27},
  URN =		{urn:nbn:de:0030-drops-115893},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.27},
  annote =	{Keywords: approximation algorithm, online, matching, scheduling}
}
Document
On the Probabilistic Degrees of Symmetric Boolean Functions

Authors: Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh


Abstract
The probabilistic degree of a Boolean function f:{0,1}^n -> {0,1} is defined to be the smallest d such that there is a random polynomial P of degree at most d that agrees with f at each point with high probability. Since the notion was introduced by Razborov (1987), upper and lower bounds on the probabilistic degrees of Boolean functions - specifically symmetric Boolean functions - have been used to prove explicit lower bounds, design pseudorandom generators, and devise algorithms for combinatorial problems. In this paper, we characterize the probabilistic degrees of all symmetric Boolean functions up to polylogarithmic factors over all fields of fixed characteristic (positive or zero).
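
For readers new to the notion, the classical Razborov-style construction for OR over F_2 illustrates it (this is standard background, not the characterisation proved in the paper): with t random subsets S_1, ..., S_t of the coordinates, the degree-t polynomial 1 - prod_i (1 - sum_{j in S_i} x_j) agrees with OR on any fixed input except with probability at most 2^(-t).

import random

def sample_or_approx(n, t):
    subsets = [[j for j in range(n) if random.random() < 0.5] for _ in range(t)]
    def P(x):                                   # arithmetic over F_2
        prod = 1
        for S in subsets:
            q = sum(x[j] for j in S) % 2
            prod *= (1 - q)
        return 1 - prod
    return P

n, t = 8, 5
x = (0, 1, 0, 0, 0, 0, 0, 0)                    # OR(x) = 1
errors = sum(sample_or_approx(n, t)(x) != 1 for _ in range(1000))
print(errors / 1000, "empirical error; bound is", 2 ** -t)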

Cite as

Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh. On the Probabilistic Degrees of Symmetric Boolean Functions. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 28:1-28:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{srinivasan_et_al:LIPIcs.FSTTCS.2019.28,
  author =	{Srinivasan, Srikanth and Tripathi, Utkarsh and Venkitesh, S.},
  title =	{{On the Probabilistic Degrees of Symmetric Boolean Functions}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{28:1--28:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.28},
  URN =		{urn:nbn:de:0030-drops-115908},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.28},
  annote =	{Keywords: Symmetric Boolean function, probabilistic degree}
}
Document
Classification Among Hidden Markov Models

Authors: S. Akshay, Hugo Bazille, Eric Fabre, and Blaise Genest


Abstract
An important task in AI is that of classifying an observation as belonging to one class among several (e.g., image classification). We revisit this problem in a verification context: given k partially observable systems modeled as Hidden Markov Models (also called labeled Markov chains), and an execution of one of them, can we eventually classify which system performed this execution, just by looking at its observations? Interestingly, this problem generalizes several problems in verification and control, such as fault diagnosis and opacity. Also, classification has strong connections with different notions of distances between stochastic models. In this paper, we study a general and practical notion of classifiers, namely limit-sure classifiers, which allow misclassification, i.e., errors in classification, as long as the probability of misclassification tends to 0 as the length of the observation grows. To study the complexity of several notions of classification, we develop techniques based on a simple but powerful notion of stationary distributions for HMMs. We prove that one cannot classify among HMMs iff there is a finite separating word from their stationary distributions. This provides a direct proof that classifiability can be checked in PTIME, as an alternative to existing proofs using separating events (i.e., sets of infinite separating words) for the total variation distance. Our approach also allows us to introduce and tackle new notions of classifiability which are applicable in a security context.
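
A toy likelihood-based classifier conveys the flavour of the task (the paper analyses when classification succeeds in the limit; the encoding of the two models below and the maximum-likelihood rule are assumptions of the example):

def likelihood(hmm, obs):
    # hmm = (init, trans, emit) with init[s], trans[s][t], emit[s][o] probabilities.
    init, trans, emit = hmm
    alpha = {s: init[s] * emit[s][obs[0]] for s in init}
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha) * emit[t][o]
                 for t in init}
    return sum(alpha.values())

def classify(hmms, obs):
    return max(range(len(hmms)), key=lambda i: likelihood(hmms[i], obs))

A = ({"s": 1.0}, {"s": {"s": 1.0}}, {"s": {"a": 0.9, "b": 0.1}})
B = ({"s": 1.0}, {"s": {"s": 1.0}}, {"s": {"a": 0.1, "b": 0.9}})
print(classify([A, B], "aabaa"))   # 0: the observation is far more likely under A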

Cite as

S. Akshay, Hugo Bazille, Eric Fabre, and Blaise Genest. Classification Among Hidden Markov Models. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 29:1-29:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{akshay_et_al:LIPIcs.FSTTCS.2019.29,
  author =	{Akshay, S. and Bazille, Hugo and Fabre, Eric and Genest, Blaise},
  title =	{{Classification Among Hidden Markov Models}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{29:1--29:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.29},
  URN =		{urn:nbn:de:0030-drops-115917},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.29},
  annote =	{Keywords: verification: probabilistic systems, partially observable systems}
}
Document
Minimisation of Event Structures

Authors: Paolo Baldan and Alessandra Raffaetà


Abstract
Event structures are fundamental models in concurrency theory, providing a representation of events in computation and of their relations, notably concurrency, conflict and causality. In this paper we present a theory of minimisation for event structures. Working in a class of event structures that generalises many stable event structure models in the literature (e.g., prime, asymmetric, flow and bundle event structures), we study a notion of behaviour-preserving quotient, taking hereditary history preserving bisimilarity as a reference behavioural equivalence. We show that for any event structure a uniquely determined minimal quotient always exists. We observe that each event structure can be seen as the quotient of a prime event structure, and that quotients of general event structures arise from quotients of (suitably defined) corresponding prime event structures. This gives a special relevance to quotients in the class of prime event structures, which are then studied in detail, providing a characterisation and showing that prime event structures, too, always admit a unique minimal quotient.

Cite as

Paolo Baldan and Alessandra Raffaetà. Minimisation of Event Structures. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 30:1-30:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{baldan_et_al:LIPIcs.FSTTCS.2019.30,
  author =	{Baldan, Paolo and Raffaet\`{a}, Alessandra},
  title =	{{Minimisation of Event Structures}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{30:1--30:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.30},
  URN =		{urn:nbn:de:0030-drops-115923},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.30},
  annote =	{Keywords: Event structures, minimisation, history-preserving bisimilarity, behaviour-preserving quotient}
}
Document
Concurrent Parameterized Games

Authors: Nathalie Bertrand, Patricia Bouyer, and Anirban Majumdar


Abstract
Traditional concurrent games on graphs involve a fixed number of players, who take decisions simultaneously, determining the next state of the game. In this paper, we introduce a parameterized variant of concurrent games on graphs, where the parameter is precisely the number of players. Parameterized concurrent games are described by finite graphs, in which the transitions bear regular languages to describe the possible move combinations that lead from one vertex to another. We consider the problem of determining whether the first player, say Eve, has a strategy to ensure a reachability objective against any strategy profile of her opponents as a coalition. In particular, Eve’s strategy should be independent of the number of opponents she actually has. Technically, this paper focuses on an a priori simpler setting where the languages labeling transitions only constrain the number of opponents (but not their precise action choices). These constraints are described as semilinear sets, finite unions of intervals, or intervals. We establish the precise complexities of the parameterized reachability game problem, ranging from PTIME-complete to PSPACE-complete, in a variety of situations depending on the constraints (semilinear predicates, unions of intervals, or intervals) and on the presence or not of non-determinism.

Cite as

Nathalie Bertrand, Patricia Bouyer, and Anirban Majumdar. Concurrent Parameterized Games. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 31:1-31:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bertrand_et_al:LIPIcs.FSTTCS.2019.31,
  author =	{Bertrand, Nathalie and Bouyer, Patricia and Majumdar, Anirban},
  title =	{{Concurrent Parameterized Games}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{31:1--31:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.31},
  URN =		{urn:nbn:de:0030-drops-115931},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.31},
  annote =	{Keywords: concurrent games, parameterized verification}
}
Document
Expected Window Mean-Payoff

Authors: Benjamin Bordais, Shibashis Guha, and Jean-François Raskin


Abstract
We study the expected value of the window mean-payoff measure in Markov decision processes (MDPs) and Markov chains (MCs). The window mean-payoff measure strengthens the classical mean-payoff measure by measuring the mean-payoff over a window of bounded length that slides along an infinite path. This measure ensures better stability properties than the classical mean-payoff. Window mean-payoff has been introduced previously for two-player zero-sum games. As in the case of games, we study several variants of this definition: the measure can be defined to be prefix-independent or not, and for a fixed window length or for a window length that is left parametric. For fixed window length, we provide polynomial time algorithms for the prefix-independent version for both MDPs and MCs. When the length is left parametric, the problem of computing the expected value on MDPs is as hard as computing the mean-payoff value in two-player zero-sum games, a problem for which it is not known if it can be solved in polynomial time. For the prefix-dependent version, surprisingly, the expected window mean-payoff value cannot be computed in polynomial time unless P=PSPACE. For the parametric case and the prefix-dependent case, we manage to obtain algorithms with better complexities for MCs.
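
As a rough, finite illustration of the bounded-window idea (the measure in the paper is defined on infinite paths and comes in several variants; this is not its exact definition), one can record, at each position of a weight sequence, the best average over windows of length at most l_max starting there:

def windowed_averages(weights, l_max):
    out = []
    for i in range(len(weights)):
        best = max(sum(weights[i:i + l]) / l
                   for l in range(1, min(l_max, len(weights) - i) + 1))
        out.append(best)
    return out

print(windowed_averages([2, -1, -1, 4, 0], l_max=3))   # approximately [2.0, 0.67, 1.5, 4.0, 0.0]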

Cite as

Benjamin Bordais, Shibashis Guha, and Jean-François Raskin. Expected Window Mean-Payoff. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 32:1-32:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bordais_et_al:LIPIcs.FSTTCS.2019.32,
  author =	{Bordais, Benjamin and Guha, Shibashis and Raskin, Jean-Fran\c{c}ois},
  title =	{{Expected Window Mean-Payoff}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{32:1--32:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.32},
  URN =		{urn:nbn:de:0030-drops-115940},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.32},
  annote =	{Keywords: mean-payoff, Markov decision processes, synthesis}
}
Document
Interval Temporal Logic for Visibly Pushdown Systems

Authors: Laura Bozzelli, Angelo Montanari, and Adriano Peron


Abstract
In this paper, we introduce and investigate an extension of Halpern and Shoham’s interval temporal logic HS for the specification and verification of branching-time context-free requirements of pushdown systems under a state-based semantics over Kripke structures. Both homogeneity and visibility are assumed. The proposed logic, called nested BHS, supports branching-time both in the past and in the future, and is able to express non-regular properties of linear and branching behaviours of procedural contexts in a natural way. It strictly subsumes well-known linear time context-free extensions of LTL such as CaRet [R. Alur et al., 2004] and NWTL [R. Alur et al., 2007]. The main result is the decidability of the visibly pushdown model-checking problem against nested BHS. The proof exploits a non-trivial automata-theoretic construction.

Cite as

Laura Bozzelli, Angelo Montanari, and Adriano Peron. Interval Temporal Logic for Visibly Pushdown Systems. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bozzelli_et_al:LIPIcs.FSTTCS.2019.33,
  author =	{Bozzelli, Laura and Montanari, Angelo and Peron, Adriano},
  title =	{{Interval Temporal Logic for Visibly Pushdown Systems}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.33},
  URN =		{urn:nbn:de:0030-drops-115959},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.33},
  annote =	{Keywords: Pushdown systems, Interval temporal logic, Model checking}
}
Document
Taming the Complexity of Timeline-Based Planning over Dense Temporal Domains

Authors: Laura Bozzelli, Angelo Montanari, and Adriano Peron


Abstract
The problem of timeline-based planning (TP) over dense temporal domains is known to be undecidable. In this paper, we introduce two semantic variants of TP, called strong minimal and weak minimal semantics, which make it possible to express meaningful properties. Both semantics are based on the minimality in the time distances of the existentially-quantified time events from the universally-quantified reference event, but the weak minimal variant distinguishes minimality in the past from minimality in the future. Surprisingly, we show that, despite the (apparently) small difference between the two semantics, for the strong minimal one, the TP problem is still undecidable, while for the weak minimal one, the TP problem is just PSPACE-complete. Membership in PSPACE is determined by exploiting a strictly more expressive extension (ECA^+) of the well-known robust class of Event-Clock Automata (ECA) that makes it possible to encode the weak minimal TP problem and to reduce it to non-emptiness of Timed Automata (TA). Finally, an extension of ECA^+ (ECA^{++}) is considered, proving that its non-emptiness problem is undecidable. We believe that the two extensions of ECA (ECA^+ and ECA^{++}), introduced for technical reasons, are actually valuable per se in the field of TA.

Cite as

Laura Bozzelli, Angelo Montanari, and Adriano Peron. Taming the Complexity of Timeline-Based Planning over Dense Temporal Domains. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bozzelli_et_al:LIPIcs.FSTTCS.2019.34,
  author =	{Bozzelli, Laura and Montanari, Angelo and Peron, Adriano},
  title =	{{Taming the Complexity of Timeline-Based Planning over Dense Temporal Domains}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.34},
  URN =		{urn:nbn:de:0030-drops-115963},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.34},
  annote =	{Keywords: Timeline-based planning, timed automata, event-clock automata}
}
Document
Dynamics on Games: Simulation-Based Techniques and Applications to Routing

Authors: Thomas Brihaye, Gilles Geeraerts, Marion Hallet, Benjamin Monmege, and Bruno Quoitin


Abstract
We consider multi-player games played on graphs, in which the players aim at fulfilling their own (not necessarily antagonistic) objectives. In the spirit of evolutionary game theory, we suppose that the players have the right to repeatedly update their respective strategies (for instance, to improve the outcome w.r.t. the current strategy profile). This generates a dynamics in the game which may eventually stabilise to an equilibrium. The objective of the present paper is twofold. First, we aim at drawing a general framework to reason about the termination of such dynamics. In particular, we identify preorders on games (inspired by the classical notion of simulation between transition systems, and by the notion of graph minor) which preserve termination of dynamics. Second, we show the applicability of the previously developed framework to interdomain routing problems.

Cite as

Thomas Brihaye, Gilles Geeraerts, Marion Hallet, Benjamin Monmege, and Bruno Quoitin. Dynamics on Games: Simulation-Based Techniques and Applications to Routing. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{brihaye_et_al:LIPIcs.FSTTCS.2019.35,
  author =	{Brihaye, Thomas and Geeraerts, Gilles and Hallet, Marion and Monmege, Benjamin and Quoitin, Bruno},
  title =	{{Dynamics on Games: Simulation-Based Techniques and Applications to Routing}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.35},
  URN =		{urn:nbn:de:0030-drops-115978},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.35},
  annote =	{Keywords: games on graphs, dynamics, simulation, network}
}
Document
Query Preserving Watermarking Schemes for Locally Treelike Databases

Authors: Agnishom Chattopadhyay and M. Praveen


Abstract
Watermarking is a way of embedding information in digital documents. Much research has been done on techniques for watermarking relational databases and XML documents, where the process of embedding information should not distort query outputs too much. Recently, techniques have been proposed to watermark some classes of relational structures preserving first-order and monadic second order queries. For relational structures whose Gaifman graphs have bounded degree, watermarking can be done preserving first-order queries. We extend this line of work and study watermarking schemes for other classes of structures. We prove that for relational structures whose Gaifman graphs belong to a class of graphs that have locally bounded tree-width and is closed under minors, watermarking schemes exist that preserve first-order queries. We use previously known properties of logical formulas and graphs, and build on them with some technical work to make them work in our context. This constitutes part of the first steps towards understanding the extent to which techniques from algorithm design and computational learning theory can be adapted for watermarking.

Cite as

Agnishom Chattopadhyay and M. Praveen. Query Preserving Watermarking Schemes for Locally Treelike Databases. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chattopadhyay_et_al:LIPIcs.FSTTCS.2019.36,
  author =	{Chattopadhyay, Agnishom and Praveen, M.},
  title =	{{Query Preserving Watermarking Schemes for Locally Treelike Databases}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.36},
  URN =		{urn:nbn:de:0030-drops-115988},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.36},
  annote =	{Keywords: Locally bounded tree-width, closure under minors, first-order queries, watermarking}
}
Document
Complexity of Liveness in Parameterized Systems

Authors: Peter Chini, Roland Meyer, and Prakash Saivasan


Abstract
We investigate the fine-grained complexity of liveness verification for leader contributor systems. These consist of a designated leader thread and an arbitrary number of identical contributor threads communicating via a shared memory. The liveness verification problem asks whether there is an infinite computation of the system in which the leader reaches a final state infinitely often. Like its reachability counterpart, the problem is known to be NP-complete. Our results show that, even from a fine-grained point of view, the complexities differ only by a polynomial factor. Liveness verification decomposes into reachability and cycle detection. We present a fixed point iteration solving the latter in polynomial time. For reachability, we reconsider the two standard parameterizations. When parameterized by the number of states of the leader L and the size of the data domain D, we show an (L + D)^O(L + D)-time algorithm. It improves on a previous algorithm, thereby settling an open problem. When parameterized by the number of states of the contributor C, we reuse an O^*(2^C)-time algorithm. We show how to connect both algorithms with the cycle detection to obtain algorithms for liveness verification. The running times of the composed algorithms match those of reachability, proving that the fine-grained lower bounds for liveness verification are met.
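
On a plain finite graph the decomposition into reachability and cycle detection reads as follows (this ignores the leader/contributor structure and the fine-grained bounds that are the subject of the paper): some run visits a final state infinitely often iff a final state is reachable from the initial state and lies on a cycle.

def reachable(adj, src):
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def has_live_run(adj, init, finals):
    from_init = reachable(adj, init)
    return any(f in from_init and any(f in reachable(adj, v) for v in adj.get(f, []))
               for f in finals)

adj = {"q0": ["q1"], "q1": ["q2"], "q2": ["q1"]}
print(has_live_run(adj, "q0", {"q1"}))   # True: q1 is reachable and lies on a cycle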

Cite as

Peter Chini, Roland Meyer, and Prakash Saivasan. Complexity of Liveness in Parameterized Systems. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chini_et_al:LIPIcs.FSTTCS.2019.37,
  author =	{Chini, Peter and Meyer, Roland and Saivasan, Prakash},
  title =	{{Complexity of Liveness in Parameterized Systems}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.37},
  URN =		{urn:nbn:de:0030-drops-115993},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.37},
  annote =	{Keywords: Liveness Verification, Fine-Grained Complexity, Parameterized Systems}
}
Document
Greibach Normal Form for omega-Algebraic Systems and Weighted Simple omega-Pushdown Automata

Authors: Manfred Droste, Sven Dziadek, and Werner Kuich


Abstract
In weighted automata theory, many classical results on formal languages have been extended into a quantitative setting. Here, we investigate weighted context-free languages of infinite words, a generalization of omega-context-free languages (Cohen, Gold 1977) and an extension of weighted context-free languages of finite words (Chomsky, Schützenberger 1963). As in the theory of formal grammars, these weighted languages, or omega-algebraic series, can be represented as solutions of mixed omega-algebraic systems of equations and by weighted omega-pushdown automata. In our first main result, we show that mixed omega-algebraic systems can be transformed into Greibach normal form. Our second main result proves that simple omega-reset pushdown automata recognize all omega-algebraic series that are a solution of an omega-algebraic system in Greibach normal form. Simple reset automata do not use epsilon-transitions and can change the stack only by at most one symbol. These results generalize fundamental properties of context-free languages to weighted languages.

Cite as

Manfred Droste, Sven Dziadek, and Werner Kuich. Greibach Normal Form for omega-Algebraic Systems and Weighted Simple omega-Pushdown Automata. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 38:1-38:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{droste_et_al:LIPIcs.FSTTCS.2019.38,
  author =	{Droste, Manfred and Dziadek, Sven and Kuich, Werner},
  title =	{{Greibach Normal Form for omega-Algebraic Systems and Weighted Simple omega-Pushdown Automata}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{38:1--38:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.38},
  URN =		{urn:nbn:de:0030-drops-116003},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.38},
  annote =	{Keywords: Weighted omega-Context-Free Grammars, Algebraic Systems, Greibach Normal Form, Weighted Automata, omega-Pushdown Automata}
}
Document
Transformations of Boolean Functions

Authors: Jeffrey M. Dudek and Dror Fried


Abstract
Boolean functions are characterized by the unique structure of their solution space. Some properties of the solution space, such as the possible existence of a solution, are well sought after but difficult to obtain. To better reason about such properties, we define transformations as functions that change one Boolean function to another while maintaining some properties of the solution space. We explore transformations of Boolean functions, compactly described as Boolean formulas, where the property to maintain is the number of solutions in the solution spaces. We first discuss general characteristics of such transformations. Next, we reason about the computational complexity of transforming one Boolean formula to another. Finally, we demonstrate the versatility of transformations by extensively discussing transformations of Boolean formulas to "blocks," which are solution spaces in which the set of solutions forms a prefix of the solution space under a lexicographic order of the variables.
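
A minimal example of a transformation that preserves the number of solutions (chosen for illustration; the paper studies such transformations in general): negating one variable permutes the assignments, so the model count is unchanged even though the solution set is not.

from itertools import product

def count_models(f, n):
    return sum(bool(f(x)) for x in product((0, 1), repeat=n))

f = lambda x: (x[0] or x[1]) and not x[2]        # an arbitrary function on 3 variables
g = lambda x: f((1 - x[0],) + x[1:])             # the same function with x_0 negated
print(count_models(f, 3), count_models(g, 3))    # 3 3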

Cite as

Jeffrey M. Dudek and Dror Fried. Transformations of Boolean Functions. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{dudek_et_al:LIPIcs.FSTTCS.2019.39,
  author =	{Dudek, Jeffrey M. and Fried, Dror},
  title =	{{Transformations of Boolean Functions}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{39:1--39:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.39},
  URN =		{urn:nbn:de:0030-drops-116019},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.39},
  annote =	{Keywords: Boolean Formulas, Boolean Functions, Transformations, Model Counting}
}
Document
Two-Way Parikh Automata

Authors: Emmanuel Filiot, Shibashis Guha, and Nicolas Mazzocchi


Abstract
Parikh automata extend automata with counters whose values can only be tested at the end of the computation, with respect to membership in a semi-linear set. Parikh automata have found several applications, for instance in transducer theory, as they enjoy a decidable emptiness problem. In this paper, we study two-way Parikh automata. We show that emptiness becomes undecidable in the non-deterministic case. However, it is PSPACE-complete when the number of visits to any input position is bounded and the semi-linear set is given as an existential Presburger formula. We also give tight complexity bounds for the inclusion, equivalence and universality problems. Finally, we characterise precisely the complexity of those problems when the semi-linear constraint is given by an arbitrary Presburger formula.
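
For readers unfamiliar with the model, the (one-way) acceptance condition can be sketched as follows: a run additionally produces a vector of counter values that is tested only at the end, here against a semilinear set given as a predicate. This illustrates only the base model, not the two-way extension studied in the paper.

def accepts(transitions, initial, finals, constraint, word, dim):
    # transitions: dict (state, letter) -> list of (next_state, counter_increment_vector)
    configs = {(initial, (0,) * dim)}
    for a in word:
        configs = {(q2, tuple(c + d for c, d in zip(vec, delta)))
                   for (q, vec) in configs
                   for (q2, delta) in transitions.get((q, a), [])}
    return any(q in finals and constraint(vec) for (q, vec) in configs)

# Counts a's and b's and accepts iff the counts are equal (a semilinear constraint).
T = {("q", "a"): [("q", (1, 0))], ("q", "b"): [("q", (0, 1))]}
print(accepts(T, "q", {"q"}, lambda v: v[0] == v[1], "aabb", 2))   # True
print(accepts(T, "q", {"q"}, lambda v: v[0] == v[1], "aab", 2))    # False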

Cite as

Emmanuel Filiot, Shibashis Guha, and Nicolas Mazzocchi. Two-Way Parikh Automata. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{filiot_et_al:LIPIcs.FSTTCS.2019.40,
  author =	{Filiot, Emmanuel and Guha, Shibashis and Mazzocchi, Nicolas},
  title =	{{Two-Way Parikh Automata}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.40},
  URN =		{urn:nbn:de:0030-drops-116027},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.40},
  annote =	{Keywords: Parikh automata, two-way automata, Presburger arithmetic}
}
Document
The Well Structured Problem for Presburger Counter Machines

Authors: Alain Finkel and Ekanshdeep Gupta


Abstract
We introduce the well structured problem as the question of whether a model (here a counter machine) is well structured (here for the usual ordering on integers). We show that it is undecidable for most of the (Presburger-defined) counter machines except for Affine VASS of dimension one. However, the strong well structured problem is decidable for all Presburger counter machines. While Affine VASS of dimension one are not, in general, well structured, we give an algorithm that computes the set of predecessors of a configuration; as a consequence, this allows us to decide the well structured problem for 1-Affine VASS.

Cite as

Alain Finkel and Ekanshdeep Gupta. The Well Structured Problem for Presburger Counter Machines. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{finkel_et_al:LIPIcs.FSTTCS.2019.41,
  author =	{Finkel, Alain and Gupta, Ekanshdeep},
  title =	{{The Well Structured Problem for Presburger Counter Machines}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.41},
  URN =		{urn:nbn:de:0030-drops-116037},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.41},
  annote =	{Keywords: Well structured transition systems, infinite state systems, Presburger counter machines, reachability, coverability}
}
Document
A Categorical Account of Replicated Data Types

Authors: Fabio Gadducci, Hernán Melgratti, Christian Roldán, and Matteo Sammartino


Abstract
Replicated Data Types (RDTs) have been introduced as a suitable abstraction for dealing with weakly consistent data stores, which may (temporarily) expose multiple, inconsistent views of their state. In the literature, RDTs are commonly specified in terms of two relations: visibility, which accounts for the different views that a store may have, and arbitration, which states the logical order imposed on the operations executed over the store. Different flavours, e.g., operational, axiomatic and functional, have recently been proposed for the specification of RDTs. In this work, we propose an algebraic characterisation of RDT specifications. We define categories of visibility relations and arbitrations, show the existence of relevant limits and colimits, and characterize RDT specifications as functors between such categories that preserve these additional structures.

Cite as

Fabio Gadducci, Hernán Melgratti, Christian Roldán, and Matteo Sammartino. A Categorical Account of Replicated Data Types. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 42:1-42:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{gadducci_et_al:LIPIcs.FSTTCS.2019.42,
  author =	{Gadducci, Fabio and Melgratti, Hern\'{a}n and Rold\'{a}n, Christian and Sammartino, Matteo},
  title =	{{A Categorical Account of Replicated Data Types}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{42:1--42:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.42},
  URN =		{urn:nbn:de:0030-drops-116042},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.42},
  annote =	{Keywords: Replicated data type, Specification, Functorial characterisation}
}
Document
New Results on Cutting Plane Proofs for Horn Constraint Systems

Authors: Hans Kleine Büning, Piotr Wojciechowski, and K. Subramani


Abstract
In this paper, we investigate properties of cutting plane based refutations for a class of integer programs called Horn constraint systems (HCS). Briefly, a system of linear inequalities A * x >= b is called a Horn constraint system if each entry in A belongs to the set {0,1,-1} and, furthermore, there is at most one positive entry per row. Our focus is on deriving refutations, i.e., proofs of unsatisfiability, of such programs using cutting planes as a proof system. We also look at several properties of these refutations. Horn constraint systems can be considered as a more general form of propositional Horn formulas, i.e., CNF formulas with at most one positive literal per clause. Cutting plane calculus (CP) is a well-known calculus for deciding the unsatisfiability of propositional CNF formulas and integer programs. Usually, CP consists of a pair of inference rules, called the addition rule (ADD) and the division rule (DIV). In this paper, we show that cutting plane calculus is still complete for Horn constraints when every intermediate constraint is required to be Horn. We also investigate the lengths of cutting plane proofs for Horn constraint systems.
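
To recall the two rules on a tiny Horn system (one common formulation of the calculus on constraints a * x >= b over integer variables, illustrating the rules rather than the paper's Horn-specific results): ADD takes a nonnegative integer combination of two constraints, and DIV divides a constraint whose coefficients share a common positive divisor, rounding the bound up.

from math import ceil

def add(c1, c2, l1=1, l2=1):
    (a1, b1), (a2, b2) = c1, c2
    return ([l1 * x + l2 * y for x, y in zip(a1, a2)], l1 * b1 + l2 * b2)

def div(c, d):
    a, b = c
    assert d > 0 and all(x % d == 0 for x in a)
    return ([x // d for x in a], ceil(b / d))      # sound for integer solutions

# Horn constraints over (x1, x2):  x1 - x2 >= 0  and  -x1 - x2 >= -1
c1, c2 = ([1, -1], 0), ([-1, -1], -1)
c3 = add(c1, c2)          # ([0, -2], -1); note c3 itself is not Horn, a situation the
                          # paper studies when intermediates are required to stay Horn
print(div(c3, 2))         # ([0, -1], 0), i.e. -x2 >= 0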

Cite as

Hans Kleine Büning, Piotr Wojciechowski, and K. Subramani. New Results on Cutting Plane Proofs for Horn Constraint Systems. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kleinebuning_et_al:LIPIcs.FSTTCS.2019.43,
  author =	{Kleine B\"{u}ning, Hans and Wojciechowski, Piotr and Subramani, K.},
  title =	{{New Results on Cutting Plane Proofs for Horn Constraint Systems}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.43},
  URN =		{urn:nbn:de:0030-drops-116056},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.43},
  annote =	{Keywords: Horn constraints, cutting planes, proof length}
}
Document
The Tree-Generative Capacity of Combinatory Categorial Grammars

Authors: Marco Kuhlmann, Andreas Maletti, and Lena Katharina Schiffer


Abstract
The generative capacity of combinatory categorial grammars as acceptors of tree languages is investigated. It is demonstrated that the tree languages obtained in this way can also be generated by simple monadic context-free tree grammars. However, the subclass of pure combinatory categorial grammars cannot even accept all regular tree languages. Additionally, the tree languages accepted by combinatory categorial grammars with limited rule degrees are characterized: if only application rules are allowed, then they can accept only a proper subset of the regular tree languages, whereas they can accept exactly the regular tree languages once first-degree composition rules are permitted.
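
For readers unfamiliar with the rule types mentioned above, here is a minimal sketch (an illustrative Python encoding, not the paper's formalism) of forward application and first-degree forward composition on slash categories; the backward variants are analogous.

def apply_fwd(left, right):
    """Forward application:  X/Y  Y  =>  X."""
    if isinstance(left, tuple) and left[1] == '/' and left[2] == right:
        return left[0]
    return None

def compose_fwd(left, right):
    """First-degree forward composition:  X/Y  Y/Z  =>  X/Z."""
    if (isinstance(left, tuple) and left[1] == '/' and
            isinstance(right, tuple) and right[1] == '/' and
            left[2] == right[0]):
        return (left[0], '/', right[2])
    return None

S_NP = ('S', '/', 'NP')   # a category expecting an NP to its right
NP_N = ('NP', '/', 'N')
assert apply_fwd(S_NP, 'NP') == 'S'
assert compose_fwd(S_NP, NP_N) == ('S', '/', 'N')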

Cite as

Marco Kuhlmann, Andreas Maletti, and Lena Katharina Schiffer. The Tree-Generative Capacity of Combinatory Categorial Grammars. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kuhlmann_et_al:LIPIcs.FSTTCS.2019.44,
  author =	{Kuhlmann, Marco and Maletti, Andreas and Schiffer, Lena Katharina},
  title =	{{The Tree-Generative Capacity of Combinatory Categorial Grammars}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.44},
  URN =		{urn:nbn:de:0030-drops-116063},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.44},
  annote =	{Keywords: Combinatory Categorial Grammar, Regular Tree Language, Linear Context-free Tree Language}
}
Document
Cyclic Proofs and Jumping Automata

Authors: Denis Kuperberg, Laureline Pinault, and Damien Pous


Abstract
We consider a fragment of a cyclic sequent proof system for Kleene algebra, and we see it as a computational device for recognising languages of words. The starting proof system is linear and we show that it captures precisely the regular languages. When the standard contraction rule is added, the expressivity increases significantly; we characterise the corresponding class of languages using a new notion of multi-head finite automata, where heads can jump.

Cite as

Denis Kuperberg, Laureline Pinault, and Damien Pous. Cyclic Proofs and Jumping Automata. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kuperberg_et_al:LIPIcs.FSTTCS.2019.45,
  author =	{Kuperberg, Denis and Pinault, Laureline and Pous, Damien},
  title =	{{Cyclic Proofs and Jumping Automata}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.45},
  URN =		{urn:nbn:de:0030-drops-116071},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.45},
  annote =	{Keywords: Cyclic proofs, regular languages, multi-head automata, transducers}
}
Document
Reachability in Concurrent Uninterpreted Programs

Authors: Salvatore La Torre and Madhusudan Parthasarathy


Abstract
We study the safety verification (reachability problem) for concurrent programs with uninterpreted functions/relations. By extending the notion of coherence, recently identified for sequential programs, to concurrent programs, we show that reachability in coherent concurrent programs under various scheduling restrictions is decidable by a reduction to multistack pushdown automata, and establish precise complexity bounds for them. We also prove that the coherence restriction for these various scheduling restrictions is itself a decidable property.

Cite as

Salvatore La Torre and Madhusudan Parthasarathy. Reachability in Concurrent Uninterpreted Programs. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{latorre_et_al:LIPIcs.FSTTCS.2019.46,
  author =	{La Torre, Salvatore and Parthasarathy, Madhusudan},
  title =	{{Reachability in Concurrent Uninterpreted Programs}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.46},
  URN =		{urn:nbn:de:0030-drops-116082},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.46},
  annote =	{Keywords: Verification, uninterpreted programs, concurrent programs, shared memory}
}
Document
Distance Between Mutually Reachable Petri Net Configurations

Authors: Jérôme Leroux


Abstract
Petri nets are a classical model of concurrency widely used and studied in formal verification, with many applications in modeling and analyzing hardware and software, databases, and reactive systems. The reachability problem is central since many other problems reduce to reachability questions. In 2011, we proved that a variant of the reachability problem, called the reversible reachability problem, is exponential-space complete. Recently, this problem has found several unexpected applications, in particular in the theory of population protocols. In this paper we revisit the reversible reachability problem in order to prove that the minimal distance in the reachability graph of two mutually reachable configurations is linear with respect to the Euclidean distance between those two configurations.
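
As a toy illustration of the two distances being compared (a Python sketch, not from the paper), consider a one-counter vector addition system with transitions +1 and -1: every pair of configurations is mutually reachable, and the shortest path between configurations 0 and 5 in the reachability graph has length 5, matching the Euclidean distance between them.

from collections import deque

def graph_distance(src, dst, transitions, bound=100):
    """BFS distance between configurations of a 1-dimensional VAS,
    exploring counter values in [0, bound]."""
    frontier, dist = deque([src]), {src: 0}
    while frontier:
        c = frontier.popleft()
        if c == dst:
            return dist[c]
        for t in transitions:
            n = c + t
            if 0 <= n <= bound and n not in dist:
                dist[n] = dist[c] + 1
                frontier.append(n)
    return None

# Shortest path from 0 to 5 takes 5 firings, exactly the Euclidean distance.
assert graph_distance(0, 5, transitions=[+1, -1]) == 5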

Cite as

Jérôme Leroux. Distance Between Mutually Reachable Petri Net Configurations. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{leroux:LIPIcs.FSTTCS.2019.47,
  author =	{Leroux, J\'{e}r\^{o}me},
  title =	{{Distance Between Mutually Reachable Petri Net Configurations}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.47},
  URN =		{urn:nbn:de:0030-drops-116094},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.47},
  annote =	{Keywords: Petri nets, Vector addition systems, Formal verification, Reachability problem}
}
Document
Boolean Algebras from Trace Automata

Authors: Alexandre Mansard


Abstract
We consider trace automata. Their vertices are Mazurkiewicz traces and they accept finite words. Considering the length of a trace as the length of its Foata normal form, we define the operations of level-length synchronization and of superposition of trace automata. We show that if a family F of trace automata is closed under these operations, then for any deterministic automaton H in F, the word languages accepted by the deterministic automata of F that are length-reducible to H form a Boolean algebra. We show that the family of trace suffix automata with level-regular contexts and the subfamily of vector addition systems satisfy these closure properties. In particular, this yields various Boolean algebras of word languages accepted by deterministic vector addition systems.

Cite as

Alexandre Mansard. Boolean Algebras from Trace Automata. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 48:1-48:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{mansard:LIPIcs.FSTTCS.2019.48,
  author =	{Mansard, Alexandre},
  title =	{{Boolean Algebras from Trace Automata}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{48:1--48:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.48},
  URN =		{urn:nbn:de:0030-drops-116107},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.48},
  annote =	{Keywords: Boolean algebras, traces, automata, synchronization, pushdown automata, vector addition systems}
}
Document
Widths of Regular and Context-Free Languages

Authors: David Mestel


Abstract
Given a partially-ordered finite alphabet Sigma and a language L subseteq Sigma^*, how large can an antichain in L be (where L is given the lexicographic ordering)? More precisely, since L will in general be infinite, we should ask about the rate of growth of maximum antichains consisting of words of length n. This fundamental property of partial orders is known as the width, and in a companion work [Mestel, 2019] we show that the problem of computing the information leakage permitted by a deterministic interactive system modeled as a finite-state transducer can be reduced to the problem of computing the width of a certain regular language. In this paper, we show that if L is regular then there is a dichotomy between polynomial and exponential antichain growth. We give a polynomial-time algorithm to distinguish the two cases, and to compute the order of polynomial growth, with the language specified as an NFA. For context-free languages we show that there is a similar dichotomy, but now the problem of distinguishing the two cases is undecidable. Finally, we generalise the lexicographic order to tree languages, and show that for regular tree languages there is a trichotomy between polynomial, exponential and doubly exponential antichain growth.
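
As a small illustrative sketch (Python, with an assumed encoding, not from the paper) of the order in question: the lexicographic order induced on equal-length words by a partial order on the alphabet, and how antichain growth depends on that partial order. With two incomparable letters, all 2^n words of length n form an antichain; with a totally ordered two-letter alphabet, any two distinct equal-length words are comparable.

from itertools import product

def leq(u, v, alpha_leq):
    """u <= v lexicographically: at the first differing position, the letter
    of u is below the letter of v in the alphabet order (equal length assumed)."""
    for a, b in zip(u, v):
        if a != b:
            return alpha_leq(a, b)
    return True

def is_antichain(words, alpha_leq):
    return all(not leq(u, v, alpha_leq) and not leq(v, u, alpha_leq)
               for u in words for v in words if u != v)

words3 = [''.join(w) for w in product('ab', repeat=3)]

# 'a' and 'b' incomparable: all 8 words of length 3 form an antichain,
# so antichain size grows exponentially with word length.
incomparable = lambda a, b: False
assert is_antichain(words3, incomparable)

# 'a' < 'b': distinct equal-length words are always comparable, so maximum
# antichains of each length have size 1.
total = lambda a, b: a <= b
assert not is_antichain(['aab', 'aba'], total)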

Cite as

David Mestel. Widths of Regular and Context-Free Languages. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{mestel:LIPIcs.FSTTCS.2019.49,
  author =	{Mestel, David},
  title =	{{Widths of Regular and Context-Free Languages}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.49},
  URN =		{urn:nbn:de:0030-drops-116111},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.49},
  annote =	{Keywords: Formal languages, combinatorics on words, information flow}
}
Document
Degrees of Ambiguity of Büchi Tree Automata

Authors: Alexander Rabinovich and Doron Tiferet


Abstract
An automaton is unambiguous if for every input it has at most one accepting computation. An automaton is finitely (respectively, countably) ambiguous if for every input it has at most finitely (respectively, countably) many accepting computations. An automaton is boundedly ambiguous if there is a k in N such that for every input it has at most k accepting computations. We consider nondeterministic Büchi automata (NBA) over infinite trees and prove that it is decidable in polynomial time whether an automaton is unambiguous, boundedly ambiguous, finitely ambiguous, or countably ambiguous.
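
For intuition only, here is the same counting notion over finite words rather than infinite trees (a Python sketch with assumed names; the paper itself concerns Büchi tree automata): count_runs returns the number of accepting runs of an NFA on a word, and an automaton is unambiguous exactly when this count never exceeds 1 on any input.

def count_runs(delta, initial, accepting, word):
    """Number of accepting runs of an NFA on a finite word, where
    delta maps (state, letter) to a set of successor states."""
    runs = {initial: 1}   # state -> number of runs reaching it
    for letter in word:
        nxt = {}
        for state, k in runs.items():
            for succ in delta.get((state, letter), set()):
                nxt[succ] = nxt.get(succ, 0) + k
        runs = nxt
    return sum(k for state, k in runs.items() if state in accepting)

# Two ways to accept 'ab' (guess early or late), so this NFA is 2-ambiguous.
delta = {('q0', 'a'): {'q0', 'q1'}, ('q0', 'b'): {'q2'},
         ('q1', 'b'): {'q2'}}
assert count_runs(delta, 'q0', {'q2'}, 'ab') == 2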

Cite as

Alexander Rabinovich and Doron Tiferet. Degrees of Ambiguity of Büchi Tree Automata. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{rabinovich_et_al:LIPIcs.FSTTCS.2019.50,
  author =	{Rabinovich, Alexander and Tiferet, Doron},
  title =	{{Degrees of Ambiguity of B\"{u}chi Tree Automata}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.50},
  URN =		{urn:nbn:de:0030-drops-116122},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.50},
  annote =	{Keywords: automata on infinite trees, ambiguous automata}
}
Document
Regular Separability and Intersection Emptiness Are Independent Problems

Authors: Ramanathan S. Thinniyam and Georg Zetzsche


Abstract
The problem of regular separability asks, given two languages K and L, whether there exists a regular language S that includes K and is disjoint from L. This problem becomes interesting when the input languages K and L are drawn from language classes beyond the regular languages. For such classes, a mild and useful assumption is that they are full trios, i.e., closed under rational transductions. All the results on regular separability for full trios obtained so far exhibited a noteworthy correspondence with the intersection emptiness problem: in each case, regular separability is decidable if and only if intersection emptiness is decidable. This raises the question whether, for full trios, regular separability can be reduced to intersection emptiness or vice versa. We present counterexamples showing that neither of the two problems can be reduced to the other. More specifically, we describe full trios C_1, D_1, C_2, D_2 such that (i) intersection emptiness is decidable for C_1 and D_1, but regular separability is undecidable for C_1 and D_1, and (ii) regular separability is decidable for C_2 and D_2, but intersection emptiness is undecidable for C_2 and D_2.
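
As a small illustration of the definition (a Python spot-check, not from the paper): take K = { a^n b^n } and L = the words of odd length over {a,b}; the regular language S of even-length words includes K and is disjoint from L, witnessing regular separability of this particular pair.

from itertools import product

def in_K(w):  # a^n b^n
    n = len(w) // 2
    return w == 'a' * n + 'b' * n

in_L = lambda w: len(w) % 2 == 1   # odd length
in_S = lambda w: len(w) % 2 == 0   # candidate regular separator: even length

words = [''.join(p) for n in range(7) for p in product('ab', repeat=n)]
assert all(in_S(w) for w in words if in_K(w))        # K is included in S
assert not any(in_S(w) and in_L(w) for w in words)   # S is disjoint from L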

Cite as

Ramanathan S. Thinniyam and Georg Zetzsche. Regular Separability and Intersection Emptiness Are Independent Problems. In 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 150, pp. 51:1-51:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{thinniyam_et_al:LIPIcs.FSTTCS.2019.51,
  author =	{Thinniyam, Ramanathan S. and Zetzsche, Georg},
  title =	{{Regular Separability and Intersection Emptiness Are Independent Problems}},
  booktitle =	{39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)},
  pages =	{51:1--51:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-131-3},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{150},
  editor =	{Chattopadhyay, Arkadev and Gastin, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2019.51},
  URN =		{urn:nbn:de:0030-drops-116138},
  doi =		{10.4230/LIPIcs.FSTTCS.2019.51},
  annote =	{Keywords: Regular separability, intersection emptiness, decidability}
}
