eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
0
0
10.4230/LIPIcs.FSTTCS.2015
article
LIPIcs, Volume 45, FSTTCS'15, Complete Volume
Harsha, Prahladh
Ramalingam, G.
LIPIcs, Volume 45, FSTTCS'15, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015/LIPIcs.FSTTCS.2015.pdf
Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems, Specifying and Verifying and Reasoning about Programs, Mathematical Logic, Formal Languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
i
xiv
10.4230/LIPIcs.FSTTCS.2015.i
article
Front Matter, Table of Contents, Preface, Conference Organization
Harsha, Prahladh
Ramalingam, G.
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.i/LIPIcs.FSTTCS.2015.i.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
1
1
10.4230/LIPIcs.FSTTCS.2015.1
article
Bypassing Worst Case Analysis: Tensor Decomposition and Clustering (Invited Talk)
Charikar, Moses S.
Typical worst case analysis of algorithms has led to a rich theory, but suffers from many pitfalls.
This has inspired several approaches to bypass worst case analysis.
In this talk, I will describe two vignettes from recent work in this realm.
In the first part of the talk, I will discuss tensor decomposition -- a natural higher dimensional analog of matrix decomposition.
Low rank tensor decompositions have proved to be a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. Yet, they pose significant challenges for algorithm design as most problems about tensors
are NP-hard. I will discuss a smoothed analysis framework for analyzing algorithms for tensor decomposition which models realistic instances of learning problems and allows us to overcome many of the usual limitations of using tensor methods.
In the second part of the talk, I will explore the phenomenon of convex relaxations returning integer solutions. Clearly this is not true in the worst case. I will discuss instances of discrete optimization problems where, for a suitable distribution on inputs, LP and SDP relaxations produce integer solutions with high probability. This has been studied in the context of LP decoding, sparse recovery, stochastic block models and so on. I will mention some recent results for clustering problems: when points are drawn from a distribution over k sufficiently separated clusters, the well known k-median relaxation and a (not so well known) SDP relaxation for k-means exactly recover the clusters.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.1/LIPIcs.FSTTCS.2015.1.pdf
tensor decomposition
smoothed analysis
convex relaxations
integrality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
2
4
10.4230/LIPIcs.FSTTCS.2015.2
article
Checking Correctness of Concurrent Objects: Tractable Reductions to Reachability (Invited Talk)
Bouajjani, Ahmed
Emmi, Michael
Enea, Constantin
Hamza, Jad
Efficient implementations of concurrent objects such as semaphores, locks, and atomic collections including stacks and queues are vital to modern computer systems. Programming them is however error prone. To minimize synchronization overhead between concurrent object-method invocations, implementors avoid blocking operations like lock acquisition, allowing methods to execute concurrently. However, concurrency risks unintended inter-operation interference. The correctness of these objects is captured by observational refinement, which ensures conformance to atomic reference implementations. Formally, given two libraries L_1 and L_2 implementing the methods of some concurrent object, we say L_1 refines L_2 if and only if every computation of every program using L_1 would also be possible were L_2 used instead.
Linearizability, being an equivalent property, is the predominant proof technique for establishing observational refinement: one shows that each concurrent execution has a linearization which is a valid sequential execution according to a specification, given by an abstract data type or atomic reference implementation.
However, checking linearizability is intrinsically hard. Indeed, even in the case where method implementations are finite-state and object specifications are also finite-state, and when a fixed number of threads (invoking methods in parallel) is considered, the linearizability problem is EXPSPACE-complete, and it becomes undecidable when the number of threads is unbounded. These results show in particular that there is a complexity/decidability gap between the problem of checking linearizability and the problem of
checking reachability (i.e., the dual of checking safety/invariance properties), the latter being PSPACE-complete and EXPSPACE-complete in the above considered cases, respectively.
We address here the issue of investigating cases where tractable reductions of the observational refinement/linearizability problem to the reachability problem, or dually to invariant checking, are possible. Our aim is (1) to develop algorithmic approaches that avoid a systematic exploration of all possible linearizations of all computations, (2) to exploit existing techniques and tools for
efficient invariant checking to check observational refinement, and (3) to establish decidability and complexity results for significant classes of concurrent objects and data structures.
We present two approaches that we have proposed recently. The first approach introduces a parameterized approximation schema for detecting observational refinement violations. This approach exploits a fundamental property of shared-memory library executions: their histories are interval orders, a special case of partial orders which admit canonical representations in which each operation o is mapped to a positive-integer-bounded interval I(o). Interval orders are equipped with a natural notion of length, which corresponds to the smallest integer constant for which an interval mapping exists. Then, we define a notion of bounded-interval-length analysis, and demonstrate its efficiency, in terms of complexity, coverage, and scalability, for detecting observational refinement bugs.
The second approach focuses on a specific class of abstract data types, including common concurrent objects and data structures such as stacks and queues. We show that for this class of objects, the linearizability problem is actually as hard as the control-state reachability problem. Indeed, we prove that in this case, the
existence of linearizability violations (i.e., finite computations that are not linearizable), can be captured completely by a finite number of finite-state automata, even when an unbounded number of parallel operations is allowed (assuming that libraries are data-independent).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.2/LIPIcs.FSTTCS.2015.2.pdf
Concurrent objects
linearizability
verification
bug detection
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
5
6
10.4230/LIPIcs.FSTTCS.2015.5
article
Reachability Problems for Continuous Linear Dynamical Systems (Invited Talk)
Worrell, James
This talk is about reachability problems for continuous-time linear
dynamical systems. A central decision problem in the area is the
Continuous Skolem Problem. In particular, this problem lies at the heart of several reachability questions in continuous-time Markov chains and linear hybrid automata.
We describe some recent work, done in collaboration with Chonev and Ouaknine, that uses results in transcendence theory and real algebraic geometry to obtain decidability for certain variants of the problem. In particular, we consider a bounded version of the Continuous Skolem Problem, corresponding to time-bounded reachability. We prove decidability of the bounded problem assuming Schanuel's conjecture, a central conjecture in transcendence theory. We also describe some partial decidability results in the unbounded case for functions satisfying differential equations of fixed low order.
Finally, we give evidence of significant mathematical obstacles to
proving decidability of the Continuous Skolem Problem in full
generality by exhibiting some number-theoretic consequences of the
existence of a decision procedure for this problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.5/LIPIcs.FSTTCS.2015.5.pdf
Linear Differential Equations
Continuous-Time Markov Chains
Hybrid Automata
Schanuel's Conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
7
7
10.4230/LIPIcs.FSTTCS.2015.7
article
Convexity, Bayesianism, and the Quest Towards Optimal Algorithms (Invited Talk)
Barak, Boaz
In this high-level and accessible talk, I will describe a recent line of work aimed at understanding the intrinsic complexity of computational problems by finding optimal algorithms for large classes of such problems. In particular, I will talk about efforts centered on convex programming as a source of such candidate algorithms. As we will see, a byproduct of this effort is a computational analog of Bayesian probability that is of interest in its own right.
I will demonstrate the approach using the example of the planted clique (also known as hidden clique) problem, a central problem in average-case complexity with connections to machine learning, community detection, compressed sensing, finding Nash equilibria, and more. While the complexity of the planted clique problem is still wide open, this line of work has led to interesting insights into it.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.7/LIPIcs.FSTTCS.2015.7.pdf
Convex programming
Bayesian probability
Average-case complexity
Planted clique
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
8
8
10.4230/LIPIcs.FSTTCS.2015.8
article
Beyond Matrix Completion (Invited Talk)
Moitra, Ankur
Here we study some of the statistical and algorithmic problems that arise in recommendation systems. We will be interested in what happens when we move beyond the matrix setting, to work with higher order objects, namely tensors. To what extent does inference over more complex objects yield better predictions, and at what cost in running time? We will explore the computational vs. statistical tradeoffs for some basic problems about recovering approximately low rank tensors from few observations, and will show that our algorithms are nearly optimal among all polynomial time algorithms, under natural complexity-theoretic assumptions.
This is based on joint work with Boaz Barak.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.8/LIPIcs.FSTTCS.2015.8.pdf
matrix completion
recommendation systems
tensor prediction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
9
9
10.4230/LIPIcs.FSTTCS.2015.9
article
Relational Refinement Types for Higher-Order Shape Transformers (Invited Talk)
Jagannathan, Suresh
Understanding, discovering, and proving useful properties of sophisticated data structures are central problems in program verification. A particularly challenging exercise for shape analyses
involves reasoning about sophisticated shape transformers that preserve the shape of a data structure (e.g., the data structure skeleton is always maintained as a balanced tree) or the relationship among values contained therein (e.g., the in-order relation of the elements of a tree or the parent-child relation of the elements of a heap) across program transformations.
In this talk, we consider the specification and verification of such transformers for ML programs. The structural properties preserved by transformers can often be naturally expressed as inductively-defined relations over the recursive structure evident in the definitions of the datatypes they manipulate. By carefully augmenting a refinement type system with support for reasoning about structural relations over algebraic datatypes, we realize an expressive yet decidable specification language, capable of capturing useful structural invariants, which can nonetheless be automatically verified using off-the-shelf type checkers and theorem provers. Notably, our technique generalizes to definitions of parametric relations for polymorphic data types which, in turn, lead to highly composable specifications over higher-order polymorphic shape transformers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.9/LIPIcs.FSTTCS.2015.9.pdf
Relational Specifications
Inductive and Parametric Relations
Refinement Types
Shape Analysis
Data Structure Verification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
10
24
10.4230/LIPIcs.FSTTCS.2015.10
article
Robust Reoptimization of Steiner Trees
Goyal, Keshav
Mömke, Tobias
In reoptimization problems, one is given an optimal solution to a problem instance and a local modification of the instance. The goal is to obtain a solution for the modified instance. The additional information about the instance provided by the given solution plays a central role: we aim to use that information in order to obtain better solutions than we are able to compute from scratch.
In this paper, we consider Steiner tree reoptimization and address the optimality requirement of the provided solution. Instead of assuming that we are provided an optimal solution, we relax the assumption to the more realistic scenario where we are given an approximate solution with an upper bound on its performance guarantee.
We show that for Steiner tree reoptimization there is a clear separation between local modifications where optimality is crucial for obtaining improved approximations and those instances where approximate solutions are acceptable starting points. For some of the local modifications that have been considered in previous research, we show that for every fixed epsilon > 0, approximating the reoptimization problem with respect to a given (1+epsilon)-approximation is as hard as approximating the Steiner tree problem itself (whereas with a given optimal solution to the original problem it is known that one can obtain considerably improved results). Furthermore, we provide a new algorithmic technique that, with some further insights, allows us to obtain improved performance guarantees for Steiner tree reoptimization with respect to all remaining local modifications that have been considered in the literature: a required node of degree more than one becomes a Steiner node; a Steiner node becomes a required node; the cost of one edge is increased.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.10/LIPIcs.FSTTCS.2015.10.pdf
reoptimization
approximation algorithms
Steiner tree problem
robustness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
25
37
10.4230/LIPIcs.FSTTCS.2015.25
article
Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model
Roy Choudhury, Anamitra
Das, Syamantak
Kumar, Amit
We consider the online scheduling problem to minimize the weighted ell_p-norm of flow-time of jobs. We study this problem under the rejection model introduced by Choudhury et al. (SODA 2015), in which the online algorithm is allowed to leave an eps-fraction of the requests unserved. We consider the restricted assignment setting, where each job can go to a specified subset of machines. Our main result is an immediate-dispatch, non-migratory 1/eps^{O(1)}-competitive algorithm for this problem when one is allowed to reject at most an eps-fraction of the total weight of jobs arriving. This is in contrast with the speed augmentation model, under which no online algorithm for this problem can achieve a competitive ratio independent of p.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.25/LIPIcs.FSTTCS.2015.25.pdf
online scheduling
flow-time
competitive analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
38
51
10.4230/LIPIcs.FSTTCS.2015.38
article
On Correcting Inputs: Inverse Optimization for Online Structured Prediction
Daumé III, Hal
Khuller, Samir
Purohit, Manish
Sanders, Gregory
Algorithm designers typically assume that the input data is correct, and then proceed to find "optimal" or "sub-optimal" solutions using this input data. However, this assumption of correct data does not always hold in practice, especially in the context of online learning systems where the objective is to learn appropriate feature weights given some training samples. Such scenarios necessitate the study of inverse optimization problems, where one is given an input instance as well as a desired output and the task is to adjust the input data so that the given output is indeed optimal. Motivated by learning structured prediction models, in this paper we consider inverse optimization with a margin, i.e., we require the given output to be better than all other feasible outputs by a desired margin. We consider such inverse optimization problems for maximum weight matroid basis, matroid intersection, perfect matchings, minimum cost maximum flows, and shortest paths, and derive the first known results for such problems with a non-zero margin. The effectiveness of these algorithmic approaches to online learning for structured prediction is also discussed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.38/LIPIcs.FSTTCS.2015.38.pdf
Inverse Optimization
Structured Prediction
Online Learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
52
68
10.4230/LIPIcs.FSTTCS.2015.52
article
Dynamic Sketching for Graph Optimization Problems with Applications to Cut-Preserving Sketches
Assadi, Sepehr
Khanna, Sanjeev
Li, Yang
Tannen, Val
In this paper, we introduce a new model for sublinear algorithms called dynamic sketching. In this model, the underlying data is partitioned into a large static part and a small dynamic part, and the goal is to compute a summary of the static part (i.e., a sketch) such that, given any update for the dynamic part, one can combine it with the sketch to compute a given function. We say that a sketch is compact if its size is bounded by a polynomial function of the length of the dynamic data, (essentially) independent of the size of the static part.
A graph optimization problem P in this model is defined as follows. The input is a graph G(V,E) and a set T \subseteq V of k terminals; the edges between the terminals are the dynamic part and the other edges in G are the static part. The goal is to summarize the graph G into a compact sketch (of size poly(k)) such that given any set Q of edges between the terminals, one can answer the problem P for the graph obtained by inserting all edges in Q to G, using only the sketch.
We study the fundamental problem of computing a maximum matching and prove tight bounds on the sketch size. In particular, we show that there exists a (compact) dynamic sketch of size O(k^2) for the matching problem and any such sketch has to be of size \Omega(k^2). Our sketch for matchings can be further used to derive compact dynamic sketches for other fundamental graph problems involving cuts and connectivities. Interestingly, our sketch for matchings can also be used to give an elementary construction of a cut-preserving vertex sparsifier with space O(kC^2) for k-terminal graphs, which matches the best known upper bound; here C is the total capacity of the edges incident on the terminals. Additionally, we give an improved lower bound (in terms of C) of \Omega(C/log{C}) on the size of cut-preserving vertex sparsifiers, and establish that progress on dynamic sketching of the s-t max-flow problem (either upper bound or lower bound) immediately leads to better bounds for the size of cut-preserving vertex sparsifiers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.52/LIPIcs.FSTTCS.2015.52.pdf
Small-space Algorithms
Maximum Matchings
Vertex Sparsifiers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
69
83
10.4230/LIPIcs.FSTTCS.2015.69
article
Weighted Strategy Logic with Boolean Goals Over One-Counter Games
Bouyer, Patricia
Gardy, Patrick
Markey, Nicolas
Strategy Logic (SL) is a powerful specification language for expressing non-zero-sum properties of multi-player games. SL conveniently extends the logic ATL with explicit quantification and assignment of strategies. In this paper, we consider games over one-counter automata, and a quantitative extension 1cSL of SL with assertions over the value of the counter. We prove two results: we first show that, if decidable, model checking the so-called Boolean-goal fragment of 1cSL has non-elementary complexity; we actually prove the result for the Boolean-goal fragment of SL over finite-state games, which was an open question in [Mogavero et al. Reasoning about strategies: On the model-checking problem. ACM ToCL 15(4), 2014]. As a first step towards proving decidability, we then show that the Boolean-goal fragment of 1cSL over one-counter games enjoys a nice periodicity property.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.69/LIPIcs.FSTTCS.2015.69.pdf
Temporal logics
multi-player games
strategy logic
quantitative games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
84
97
10.4230/LIPIcs.FSTTCS.2015.84
article
Decidability in the Logic of Subsequences and Supersequences
Karandikar, Prateek
Schnoebelen, Philippe
We consider first-order logics of sequences ordered by the subsequence ordering, aka sequence embedding. We show that the Sigma_2 theory is undecidable, answering a question left open by Kuske. Regarding fragments with a bounded number of variables, we show that the FO^2 theory is decidable while the FO^3 theory is undecidable.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.84/LIPIcs.FSTTCS.2015.84.pdf
subsequence
subword
logic
first-order logic
decidability
piecewise-testability
Simon’s congruence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
98
111
10.4230/LIPIcs.FSTTCS.2015.98
article
Fragments of Fixpoint Logic on Data Words
Colcombet, Thomas
Manuel, Amaldev
We study fragments of a mu-calculus over data words whose primary modalities are 'go to next position' (X^g), 'go to previous position' (Y^g), 'go to next position with the same data value' (X^c), and 'go to previous position with the same data value' (Y^c). Our focus is on two fragments, called the bounded mode alternation fragment (BMA) and the bounded reversal fragment (BR). BMA is the fragment of those formulas whose unfoldings contain only a bounded number of alternations between global modalities (X^g, Y^g) and class modalities (X^c, Y^c). Similarly, BR is the fragment of formulas whose unfoldings contain only a bounded number of alternations between left modalities (Y^g, Y^c) and right modalities (X^g, X^c). We show that these fragments are decidable (by inclusion in Data Automata), enjoy effective Boolean closure, and contain previously defined logics such as the two-variable fragment of first-order logic and DataLTL. More precisely, the languages definable in these formalisms obey the following effective inclusions:
FO^2 subsetneq DataLTL subsetneq BMA subsetneq BR subsetneq nu subseteq Data Automata.
Our main contribution is a method to prove inexpressibility results on the fragment BMA by reducing them to inexpressibility results for combinatorial expressions. More precisely, we prove the following hierarchy of definable languages:
emptyset = BMA^0 subsetneq BMA^1 subsetneq ... subsetneq BMA subsetneq BR, where BMA^k is the set of all formulas whose unfoldings contain at most k-1 alternations between global modalities (X^g, Y^g) and class modalities (X^c, Y^c). Since the class BMA is a generalization of FO^2 and DataLTL, the inexpressibility results carry over to them as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.98/LIPIcs.FSTTCS.2015.98.pdf
Data words
Data automata
Fixpoint logic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
112
124
10.4230/LIPIcs.FSTTCS.2015.112
article
Efficient Algorithms for Morphisms over Omega-Regular Languages
Fleischer, Lukas
Kufleitner, Manfred
Morphisms to finite semigroups can be used for recognizing omega-regular languages. The so-called strongly recognizing morphisms can be seen as a deterministic computation model which provides minimal objects (known as the syntactic morphism) and a trivial complementation procedure. We give a quadratic-time algorithm for computing the syntactic morphism from any given strongly recognizing morphism, thereby showing that minimization is easy as well. In addition, we give algorithms for efficiently solving various decision problems for weakly recognizing morphisms. Weakly recognizing morphisms are often smaller than their strongly recognizing counterparts. Finally, we describe the language operations needed for converting formulas in monadic second-order logic (MSO) into strongly recognizing morphisms, and we give some experimental results.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.112/LIPIcs.FSTTCS.2015.112.pdf
Büchi automata
omega-regular language
syntactic semigroup
recognizing morphism
MSO
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
125
135
10.4230/LIPIcs.FSTTCS.2015.125
article
Approximating the Regular Graphic TSP in Near Linear Time
Chiplunkar, Ashish
Vishwanathan, Sundar
We present a randomized approximation algorithm for computing traveling salesperson tours in undirected regular graphs. Given an n-vertex, k-regular graph, the algorithm computes a tour of length at most (1+frac{4+ln 4+varepsilon}{ln k-O(1)})n, with high probability, in O(nk log k) time. This improves upon the result by Vishnoi ([Vishnoi12], FOCS 2012) for the same problem, in terms of both approximation factor and running time. Furthermore, our result is incomparable with the recent result by Feige, Ravi, and Singh ([FeigeRS14], IPCO 2014), since our algorithm runs in linear time, for any fixed k. The key ingredient of our algorithm is a technique that uses edge-coloring algorithms to sample a cycle cover with O(n/log k) cycles, with high probability, in near linear time.
Additionally, we give a deterministic frac{3}{2}+O(frac{1}{sqrt{k}}) factor approximation algorithm for the TSP on n-vertex, k-regular graphs running in time O(nk).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.125/LIPIcs.FSTTCS.2015.125.pdf
traveling salesperson problem
approximation
linear time
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
136
150
10.4230/LIPIcs.FSTTCS.2015.136
article
On Weighted Bipartite Edge Coloring
Khan, Arindam
Singh, Mohit
We study the weighted bipartite edge coloring problem, which is a generalization of two classical problems: bin packing and edge coloring. This problem is inspired by the study of Clos networks in multirate switching environments in communication networks. In the weighted bipartite edge coloring problem, we are given an edge-weighted bipartite multi-graph G=(V,E) with weights w:E\rightarrow [0,1]. The goal is to find a proper weighted coloring of the edges with as few colors as possible. An edge coloring of the weighted graph is called a proper weighted coloring if the sum of the weights of the edges incident to a vertex of any color is at most one. Chung and Ross conjectured that 2m-1 colors are sufficient for a proper weighted coloring, where m denotes the minimum number of unit-sized bins needed to pack the weights of all edges incident at any vertex. We give an algorithm that returns a coloring with at most \lceil 2.2223m \rceil colors, improving on the previous bound of \frac{9m}{4} by Feige and Singh.
Our algorithm is purely combinatorial and combines König's theorem for edge coloring bipartite graphs with the first-fit decreasing heuristic for bin packing. However, our analysis uses the configuration linear program for the bin packing problem to obtain the improved result.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.136/LIPIcs.FSTTCS.2015.136.pdf
Edge coloring
Bin packing
Clos networks
Approximation algorithms
Graph algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
151
162
10.4230/LIPIcs.FSTTCS.2015.151
article
Deciding Orthogonality in Construction-A Lattices
Chandrasekaran, Karthekeyan
Gandikota, Venkata
Grigorescu, Elena
Lattices are discrete mathematical objects with widespread applications to integer programs as well as modern cryptography. A fundamental problem in both domains is the Closest Vector Problem (popularly known as CVP). It is well-known that CVP can be easily solved in lattices that have an orthogonal basis if the orthogonal basis is specified. This motivates the orthogonality decision problem: verify whether a given lattice has an orthogonal basis. Surprisingly, the orthogonality decision problem is not known to be either NP-complete or in P.
In this paper, we focus on the orthogonality decision problem for a well-known family of lattices, namely Construction-A lattices. These are lattices of the form C + qZ^n, where C is an error-correcting q-ary code, and are studied in communication settings. We provide a complete characterization of lattices obtained from binary and ternary codes using Construction-A that have an orthogonal basis. This characterization leads to an efficient algorithm solving the orthogonality decision problem, which also finds an orthogonal basis if one exists for this family of lattices. We believe that these results could provide a better understanding of the complexity of the orthogonality decision problem in general.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.151/LIPIcs.FSTTCS.2015.151.pdf
Orthogonal Lattices
Construction-A
Orthogonal Decomposition
Lattice isomorphism
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
163
177
10.4230/LIPIcs.FSTTCS.2015.163
article
Ordered Tree-Pushdown Systems
Clemente, Lorenzo
Parys, Pawel
Salvati, Sylvain
Walukiewicz, Igor
We define a new class of pushdown systems where the pushdown is a tree instead of a word. We allow a limited form of lookahead on the pushdown conforming to a certain ordering restriction, and we show that the resulting class enjoys a decidable reachability problem. This follows from a preservation of recognizability result for the backward reachability relation of such systems. As an application, we show that our simple model can encode several formalisms generalizing pushdown systems, such as ordered multi-pushdown systems, annotated higher-order pushdown systems, the Krivine machine, and ordered annotated multi-pushdown systems. In each case, our procedure yields tight complexity bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.163/LIPIcs.FSTTCS.2015.163.pdf
reachability analysis
saturation technique
pushdown automata
ordered pushdown automata
higher-order pushdown automata
higher-order recursive schemes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
178
191
10.4230/LIPIcs.FSTTCS.2015.178
article
One-way Definability of Sweeping Transducers
Baschenis, Félix
Gauwin, Olivier
Muscholl, Anca
Puppis, Gabriele
Two-way finite-state transducers on words are strictly more expressive than one-way transducers. It has been shown recently how to decide if a two-way functional transducer has an equivalent one-way transducer, and the complexity of the algorithm is non-elementary. We propose an alternative and simpler characterization for sweeping functional transducers, namely, for transducers that can only reverse their head direction at the extremities of the input. Our algorithm works in 2EXPSPACE and, in the positive case, produces an equivalent one-way transducer of doubly exponential size. We also show that the bound on the size of the transducer is tight, and that the one-way definability problem is undecidable for (sweeping) non-functional transducers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.178/LIPIcs.FSTTCS.2015.178.pdf
Regular word transductions
sweeping transducers
one-way definability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
192
205
10.4230/LIPIcs.FSTTCS.2015.192
article
What's Decidable about Availability Languages?
Abdulla, Parosh Aziz
Atig, Mohamed Faouzi
Meyer, Roland
Seyed Salehi, Mehdi
We study here the algorithmic analysis of systems modeled in terms of availability languages. Our first main result is a positive answer to the emptiness problem: it is decidable whether a given availability language contains a word. The key idea is an inductive construction that replaces availability languages with Parikh-equivalent regular languages. As a second contribution, we solve the intersection problem modulo bounded languages: given availability languages and a bounded language, it is decidable whether the intersection of the former contains a word from the bounded language. We show that the problem is NP-complete. The idea is to reduce to satisfiability of existential Presburger arithmetic. Since the (general) intersection problem for availability languages is known to be undecidable, our results characterize the decidability border for this model. Our last contribution is a study of the containment problem between regular and availability languages. We show that safety verification, i.e., checking containment of an availability language in a regular language, is decidable. The containment problem of regular languages in availability languages is proven undecidable.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.192/LIPIcs.FSTTCS.2015.192.pdf
Availability
formal languages
emptiness
decidability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
206
220
10.4230/LIPIcs.FSTTCS.2015.206
article
Towards Better Separation between Deterministic and Randomized Query Complexity
Mukhopadhyay, Sagnik
Sanyal, Swagato
We show that there exists a Boolean function F which gives the following separations among deterministic query complexity (D(F)), randomized zero-error query complexity (R_0(F)) and randomized one-sided error query complexity (R_1(F)): R_1(F) = ~O(sqrt{D(F)}) and R_0(F) = ~O(D(F)^{3/4}). This refutes the conjecture made by Saks and Wigderson that for any Boolean function f, R_0(f) = Omega(D(f)^{0.753...}). This also shows the widest separation between R_1(f) and D(f) for any Boolean function. The function F was defined by Göös, Pitassi and Watson, who studied it to show a separation between deterministic decision tree complexity and unambiguous non-deterministic decision tree complexity. Independently of us, Ambainis et al. proved that different variants of the function F certify an optimal (quadratic) separation between D(f) and R_0(f), and a polynomial separation between R_0(f) and R_1(f). Viewed as separation results, our results are subsumed by those of Ambainis et al. However, while the functions considered in the work of Ambainis et al. are different variants of F, in this work we show that the original function F itself suffices to refute the Saks-Wigderson conjecture and to obtain the widest possible separation between deterministic and one-sided error randomized query complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.206/LIPIcs.FSTTCS.2015.206.pdf
Deterministic Decision Tree
Randomized Decision Tree
Query Complexity
Models of Computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
221
235
10.4230/LIPIcs.FSTTCS.2015.221
article
Dimension, Pseudorandomness and Extraction of Pseudorandomness
Agrawal, Manindra
Chakraborty, Diptarka
Das, Debarati
Nandakumar, Satyadev
In this paper we propose a quantification of distributions on a set of strings, in terms of how close to pseudorandom a distribution is. The quantification is an adaptation of the theory of dimension of sets of infinite sequences introduced by Lutz. Adapting Hitchcock's work, we also show that the logarithmic loss incurred by a predictor on a distribution is quantitatively equivalent to the notion of dimension we define. Roughly, this captures the equivalence between pseudorandomness defined via indistinguishability and via unpredictability. Later we show some natural properties of our notion of dimension. We also do a comparative study among our proposed notion of dimension and two well known notions of computational analogue of entropy, namely HILL-type pseudo min-entropy and next-bit pseudo Shannon entropy.
Further, we apply our quantification to the following problem: if we know that the dimension of a distribution on the set of n-length strings is s in (0,1], can we extract O(sn) pseudorandom bits from the distribution? We show that constructing such an extractor requires at least Omega(log n) bits of pure randomness; however, whether O(log n) random bits suffice is still open. We show that deterministic extraction is possible in a special case analogous to the bit-fixing sources introduced by Chor et al., which we term nonpseudorandom bit-fixing sources. We adapt the techniques of Gabizon, Raz and Shaltiel to construct a deterministic pseudorandom extractor for this source.
Finally, we make some progress towards the P vs. BPP problem by showing that the existence of an optimal stretching function, which stretches O(log n) input bits to n output bits such that the output distribution has dimension s in (0,1], implies P = BPP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.221/LIPIcs.FSTTCS.2015.221.pdf
Pseudorandomness
Dimension
Martingale
Unpredictability
Pseudoentropy
Pseudorandom Extractor
Hard Function
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
236
245
10.4230/LIPIcs.FSTTCS.2015.236
article
On the NP-Completeness of the Minimum Circuit Size Problem
Hitchcock, John M.
Pavan, A.
We study the Minimum Circuit Size Problem (MCSP): given the truth-table of a Boolean function f and a number k, does there exist a Boolean circuit of size at most k computing f? This is a fundamental NP problem that is not known to be NP-complete. Previous work has studied consequences of the NP-completeness of MCSP. We extend this work and consider whether MCSP may be complete for NP under more powerful reductions. We also show that NP-completeness of MCSP allows for amplification of circuit complexity.
We show the following results.
- If MCSP is NP-complete via many-one reductions, the following circuit complexity amplification result holds: if NP cap co-NP requires 2^{n^{Omega(1)}}-size circuits, then E^NP requires 2^{Omega(n)}-size circuits.
- If MCSP is NP-complete under truth-table reductions, then EXP neq NP cap SIZE(2^{n^epsilon}) for some epsilon > 0 and EXP neq ZPP. This result extends to polylog Turing reductions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.236/LIPIcs.FSTTCS.2015.236.pdf
Minimum Circuit Size
NP-completeness
truth-table reductions
circuit complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
246
260
10.4230/LIPIcs.FSTTCS.2015.246
article
Counting Euler Tours in Undirected Bounded Treewidth Graphs
Balaji, Nikhil
Datta, Samir
Ganesan, Venkatesh
We show that counting Euler tours in undirected bounded tree-width graphs is tractable even in parallel - by proving a GapL upper bound. This is in stark contrast to #P-completeness of the same problem in general graphs.
Our main technical contribution is to show how (an instance of) dynamic programming on bounded clique-width graphs can be performed efficiently in parallel. Thus we show that the sequential result of Espelage, Gurski and Wanke for efficiently computing Hamiltonian paths in bounded clique-width graphs can be adapted to the parallel setting to count the number of Hamiltonian paths, which in turn is a tool for counting the number of Euler tours in bounded tree-width graphs. Our technique also yields parallel algorithms for counting longest paths and bipartite perfect matchings in bounded clique-width graphs.
While establishing that counting Euler tours in bounded tree-width graphs can be computed by non-uniform monotone arithmetic circuits of polynomial degree (which characterize #SAC^1) is relatively easy, establishing a uniform #SAC^1 bound needs a careful use of polynomial interpolation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.246/LIPIcs.FSTTCS.2015.246.pdf
Euler Tours
Bounded Treewidth
Bounded clique-width
Hamiltonian cycles
Parallel algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
261
277
10.4230/LIPIcs.FSTTCS.2015.261
article
Revisiting Robustness in Priced Timed Games
Guha, Shibashis
Krishna, Shankara Narayanan
Manasa, Lakshmi
Trivedi, Ashutosh
Priced timed games are optimal-cost reachability games played between two players---the controller and the environment---by moving a token along the edges of infinite graphs of configurations of priced timed automata. The goal of the controller is to reach a given set of target locations as cheaply as possible, while the goal of the environment is the opposite. Priced timed games are known to be undecidable for timed automata with 3 or more clocks, while they are known to be decidable for automata with 1 clock. In an attempt to recover decidability for priced timed games, Bouyer, Markey, and Sankur studied robust priced timed games where the environment has the power to slightly perturb delays proposed by the controller.
Unfortunately, however, they showed that the natural problem of deciding the existence of an optimal limit-strategy---an optimal strategy of the controller where the perturbations tend to vanish in the limit---is undecidable with 10 or more clocks. In this paper we revisit this problem and improve our understanding of the decidability of these games. We show that the limit-strategy problem is already undecidable for a subclass of robust priced timed games with 5 or more clocks. On the positive side, we show the decidability of the existence of almost optimal strategies for the same subclass of one-clock robust priced timed games by adapting a classical construction by Bouyer et al. for one-clock priced timed games.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.261/LIPIcs.FSTTCS.2015.261.pdf
Priced Timed Games
Decidability
Optimal strategies
Robustness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
278
292
10.4230/LIPIcs.FSTTCS.2015.278
article
Simple Priced Timed Games are not That Simple
Brihaye, Thomas
Geeraerts, Gilles
Haddad, Axel
Lefaucheux, Engel
Monmege, Benjamin
Priced timed games are two-player zero-sum games played on priced timed automata (whose locations and transitions are labeled by weights modeling the costs of spending time in a state and executing an action, respectively). The goals of the players are to minimise and maximise the cost to reach a target location, respectively. We consider priced timed games with one clock and arbitrary (positive and negative) weights and show that, for an important subclass (the so-called simple priced timed games), one can compute, in exponential time, the optimal values that the players can achieve, together with their associated optimal strategies. As side results, we also show that one-clock priced timed games are determined and that we can use our result on simple priced timed games to solve the more general class of so-called reset-acyclic priced timed games (with arbitrary weights and one clock).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.278/LIPIcs.FSTTCS.2015.278.pdf
Priced timed games
real-time systems
game theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
293
306
10.4230/LIPIcs.FSTTCS.2015.293
article
Quantitative Games under Failures
Brihaye, Thomas
Geeraerts, Gilles
Haddad, Axel
Monmege, Benjamin
Pérez, Guillermo A.
Renault, Gabriel
We study a generalisation of sabotage games, a model of dynamic network games introduced by van Benthem. The original definition of the game is inherently finite and therefore does not allow one to model infinite processes. We propose an extension of the sabotage games in which the first player (Runner) traverses an arena with dynamic weights determined by the second player (Saboteur). In our model of quantitative sabotage games, Saboteur is now given a budget that he can distribute amongst the edges of the graph, whilst Runner attempts to minimise the quantity of budget witnessed while completing his task. We show that, on the one hand, for most of the classical cost functions considered in the literature, the problem of determining if Runner has a strategy to ensure a cost below some threshold is EXPTIME-complete. On the other hand, if the budget of Saboteur is fixed a priori, then the problem is in PTIME for most cost functions. Finally, we show that restricting the dynamics of the game also leads to better complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.293/LIPIcs.FSTTCS.2015.293.pdf
Quantitative games
verification
synthesis
game theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
307
319
10.4230/LIPIcs.FSTTCS.2015.307
article
Games with Delays - A Frankenstein Approach
Berwanger, Dietmar
van den Bogaard, Marie
We investigate infinite games on finite graphs where the information flow is perturbed by non-deterministic signalling delays. It is known that such perturbations make synthesis problems virtually unsolvable, in the general case. On the classical model where signals are attached to states, tractable cases are rare and difficult to identify.
In this paper, we propose a model where signals are detached from control states, and we identify a subclass on which equilibrium outcomes can be preserved, even if signals are delivered with a delay that is finitely bounded. To offset the perturbation, our solution procedure combines responses from a collection of virtual plays following an equilibrium strategy in the instant-signalling game to synthesise, in a Dr. Frankenstein manner, an equivalent equilibrium strategy for the delayed-signalling game.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.307/LIPIcs.FSTTCS.2015.307.pdf
infinite games on graphs
imperfect information
delayed monitoring
distributed synthesis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
320
335
10.4230/LIPIcs.FSTTCS.2015.320
article
Forbidden Extension Queries
Biswas, Sudip
Ganguly, Arnab
Shah, Rahul
Thankachan, Sharma V.
Document retrieval is one of the most fundamental problems in information retrieval. The objective is to retrieve all documents from a document collection that are relevant to an input pattern.
Several variations of this problem such as ranked document retrieval, document listing with two patterns and forbidden patterns have been studied. We introduce the problem of document retrieval with forbidden extensions.
Let D = {T_1, T_2, ..., T_D} be a collection of D string documents of n characters in total, and let P^+ and P^- be two query patterns, where P^+ is a proper prefix of P^-. We call P^- the forbidden extension of the included pattern P^+. A forbidden extension query < P^+, P^- > asks to report all occ documents in D that contain P^+ as a substring but do not contain P^- as one. A top-k forbidden extension query < P^+, P^-, k > asks to report those k documents among the occ documents that are most relevant to P^+. We present a linear index (in words) with an O(|P^-| + occ) query time for the document listing problem. For the top-k version of the problem, we achieve the following results, when the relevance of a document is based on PageRank:
- an O(n) space (in words) index with O(|P^-|log sigma+ k) query time, where sigma is the size of the alphabet from which characters in D are chosen. For constant alphabets, this yields an optimal query time of O(|P^-|+ k).
- for any constant epsilon > 0, a |CSA| + |CSA^*| + D log(n/D) + O(n) bits index with O(search(P) + k cdot tsa cdot log^{2+epsilon} n) query time, where search(P) is the time to find the suffix range of a pattern P, tsa is the time to find a suffix (or inverse suffix) array value, and |CSA^*| denotes the maximum of the space needed to store the compressed suffix array CSA of the concatenated text of all documents, or the total space needed to store the individual CSA of each document.
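To make the query semantics concrete, a naive linear-scan baseline (purely illustrative; this is not the paper's index, which answers the query in O(|P^-| + occ) time) looks like:

```python
# Naive forbidden extension query: report all documents that contain
# P^+ as a substring but do NOT contain its proper extension P^-.
# A linear scan over the whole collection, used here only as a
# correctness reference for the query semantics.

def forbidden_extension(docs, p_plus, p_minus):
    assert p_minus.startswith(p_plus) and p_minus != p_plus
    return [i for i, text in enumerate(docs)
            if p_plus in text and p_minus not in text]

docs = ["banana", "bandana", "cabana"]
# "ban" occurs in all three documents, but "band" rules out "bandana".
print(forbidden_extension(docs, "ban", "band"))  # [0, 2]
```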
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.320/LIPIcs.FSTTCS.2015.320.pdf
document retrieval
suffix trees
range queries
succinct data structure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
336
349
10.4230/LIPIcs.FSTTCS.2015.336
article
On Density, Threshold and Emptiness Queries for Intervals in the Streaming Model
Bishnu, Arijit
Chakrabarti, Amit
Nandy, Subhas C.
Sen, Sandeep
In this paper, we study the maximum density, threshold and emptiness queries for intervals in the streaming model. The input is a stream S of n points in the real line R and a floating closed interval W of width alpha. The specific problems we consider in this paper are as follows.
- Maximum density: find a placement of W in R containing the maximum number of points of S.
- Threshold query: find a placement of W in R, if it exists, that contains at least Delta elements of S.
- Emptiness query: find, if possible, a placement of W within the extent of S so that the interior of W does not contain any element of S.
The stream S, being huge, does not fit into main memory and can be read sequentially at most a constant number of times, usually once.
The problems studied here in the geometric setting are related to frequency estimation and heavy hitter identification in a stream of data. We provide lower bounds and results on the trade-off between extra space and quality of solution. We also discuss generalizations to the higher-dimensional variants in a few cases.
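For intuition, the offline (non-streaming) version of the maximum-density query is a classic sweep: sort the points and slide a two-pointer window of width alpha. A sketch, assuming the interval is closed on both sides:

```python
# Offline baseline for the maximum-density query: place a closed
# interval of width alpha on the line so that it covers as many
# points as possible. With each point taken as the left endpoint,
# a second pointer extends while points fit in [p, p + alpha].

def max_density(points, alpha):
    pts = sorted(points)
    best, j = 0, 0
    for i in range(len(pts)):
        while j < len(pts) and pts[j] <= pts[i] + alpha:
            j += 1
        best = max(best, j - i)
    return best

print(max_density([0.0, 0.5, 0.9, 2.0, 2.1], 1.0))  # 3
```

The streaming setting studied in the paper is harder precisely because this sort-and-sweep approach needs the whole stream in memory.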
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.336/LIPIcs.FSTTCS.2015.336.pdf
Density
threshold
emptiness queries
interval queries
streaming model
heavy hitter
frequency estimation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
350
364
10.4230/LIPIcs.FSTTCS.2015.350
article
Clustering on Sliding Windows in Polylogarithmic Space
Braverman, Vladimir
Lang, Harry
Levin, Keith
Monemizadeh, Morteza
In PODS 2003, Babcock, Datar, Motwani and O'Callaghan gave the first streaming solution for the k-median problem on sliding windows using O((k/tau^4) W^{2 tau} log^2 W) space, with a 2^{O(1/tau)} approximation factor, where W is the window size and tau in (0,1/2) is a user-specified parameter. They left as an open question whether it is possible to improve this to polylogarithmic space. Despite much progress on clustering and sliding windows, this question has remained open for more than a decade.
In this paper, we partially answer the main open question posed by Babcock, Datar, Motwani and O'Callaghan. We present an algorithm yielding an exponential improvement in space compared to the previous result given in Babcock, et al. In particular, we give the first polylogarithmic space (alpha,beta)-approximation for metric k-median clustering in the sliding window model, where alpha and beta are constants, under the assumption, also made by Babcock et al., that the optimal k-median cost on any given window is bounded by a polynomial in the window size. We justify this assumption by showing that when the cost is exponential in the window size, no sublinear space approximation is possible. Our main technical contribution is a simple but elegant extension of smooth functions as introduced by Braverman and Ostrovsky, which allows us to apply well-known techniques for solving problems in the sliding window model to functions that are not smooth, such as the k-median cost.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.350/LIPIcs.FSTTCS.2015.350.pdf
Streaming
Clustering
Sliding windows
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
365
379
10.4230/LIPIcs.FSTTCS.2015.365
article
Congestion Games with Multisets of Resources and Applications in Synthesis
Avni, Guy
Kupferman, Orna
Tamir, Tami
In classical congestion games, players' strategies are subsets of resources. We introduce and study multiset congestion games, where players' strategies are multisets of resources. Thus, in each strategy a player may need to use each resource a different number of times, and his cost for using the resource depends on the load that he and the other players generate on the resource.
Beyond the theoretical interest in examining the effect of a repeated use of resources, our study enables a better understanding of non-cooperative systems and environments whose behavior is not covered by previously studied models. Indeed, congestion games with multiset strategies arise, for example, in production planning and network formation with tasks that are more involved than reachability. We study in detail the application of synthesis from component libraries: different users synthesize systems by gluing together components from a component library. A component may be used in several systems and may be used several times in a system. The performance of a component, and hence the system's quality, depends on the load on it.
Our results reveal how the richer setting of multiset congestion games affects stability and equilibrium efficiency compared to standard congestion games. In particular, while we present very simple instances with no pure Nash equilibrium and prove tighter and simpler lower bounds for equilibrium inefficiency, we are also able to show that some of the positive results known for affine and weighted congestion games apply to the richer setting of multisets.
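To illustrate the cost model: with multiset strategies, a player's contribution to the load on a resource is its multiplicity in his strategy, and his cost sums, over resources, multiplicity times a load-dependent latency. A toy sketch with linear latencies (the resource names and latency coefficients are assumed for illustration, not taken from the paper):

```python
from collections import Counter

# Toy multiset congestion game: each strategy is a multiset (Counter)
# of resources. The load on a resource is the total multiplicity over
# all players, and a player pays mult * latency(load) per resource,
# here with a linear latency l_r(x) = a[r] * x.

def player_cost(player, profile, a):
    load = Counter()
    for strategy in profile:
        load.update(strategy)
    return sum(mult * a[r] * load[r]
               for r, mult in profile[player].items())

# Player 0 uses resource e twice; player 1 uses e once and f once.
profile = [Counter({"e": 2}), Counter({"e": 1, "f": 1})]
a = {"e": 1, "f": 3}
print(player_cost(0, profile, a))  # 2 * 1 * 3 = 6
print(player_cost(1, profile, a))  # 1*1*3 + 1*3*1 = 6
```

In a standard congestion game every multiplicity would be 0 or 1; the repeated use of "e" by player 0 is exactly what the multiset model adds.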
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.365/LIPIcs.FSTTCS.2015.365.pdf
Congestion games
Multiset strategies
Equilibrium existence and computation
Equilibrium inefficiency
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
380
393
10.4230/LIPIcs.FSTTCS.2015.380
article
The Sensing Cost of Monitoring and Synthesis
Almagor, Shaull
Kuperberg, Denis
Kupferman, Orna
In FSTTCS 2014, we introduced sensing as a new complexity measure for the complexity of regular languages. Intuitively, the sensing cost quantifies the detail in which a random input word has to be read by a deterministic automaton in order to decide its membership in the language. In this paper, we consider sensing in two principal applications of deterministic automata. The first is monitoring: we are given a computation in an on-line manner, and we have to decide whether it satisfies the specification. The second is synthesis: we are given a sequence of inputs in an on-line manner and we have to generate a sequence of outputs so that the resulting computation satisfies the specification. In the first, our goal is to design a monitor that handles all computations and minimizes the expected average number of sensors used in the monitoring process. In the second, our goal is to design a transducer that realizes the specification for all input sequences and minimizes the expected average number of sensors used for reading the inputs.
We argue that the two applications require new and different frameworks for reasoning about sensing, and develop such frameworks. We focus on safety languages. We show that for monitoring, minimal sensing is attained by a monitor based on the minimal deterministic automaton for the language. For synthesis, however, the setting is more challenging: minimizing the sensing may require exponentially bigger transducers, and the problem of synthesizing a minimally-sensing transducer is EXPTIME-complete even for safety specifications given by deterministic automata.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.380/LIPIcs.FSTTCS.2015.380.pdf
Automata
regular languages
omega-regular languages
complexity
sensing
minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
394
407
10.4230/LIPIcs.FSTTCS.2015.394
article
An omega-Algebra for Real-Time Energy Problems
Cachera, David
Fahrenberg, Uli
Legay, Axel
We develop a *-continuous Kleene omega-algebra of real-time energy functions. Together with corresponding automata, these can be used to model systems which can consume and regain energy (or other types of resources) depending on available time. Using recent results on *-continuous Kleene omega-algebras and computability of certain manipulations on real-time energy functions, it follows that reachability and Büchi acceptance in real-time energy automata can be decided in a static way which only involves manipulations of real-time energy functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.394/LIPIcs.FSTTCS.2015.394.pdf
Energy problem
Real time
Star-continuous Kleene algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
408
419
10.4230/LIPIcs.FSTTCS.2015.408
article
Parameterized Complexity of Secluded Connectivity Problems
Fomin, Fedor V.
Golovach, Petr A.
Karpov, Nikolay
Kulikov, Alexander S.
The Secluded Path problem, introduced by Chechik et al. in [ESA 2013], models a situation where sensitive information has to be transmitted between a pair of nodes along a path in a network.
The measure of the quality of a selected path is its exposure, which is the total weight of vertices in its closed neighborhood. In order to minimize the risk of intercepting the information, we are interested in selecting a secluded path, i.e. a path with a small exposure. Similarly, the Secluded Steiner Tree problem is to find a tree in a graph connecting a given set of terminals such that the exposure of the tree is minimized. In this work, we obtain the following results about parameterized complexity of secluded connectivity problems.
We start from the observation that, parameterized by the size of the exposure, the problem is fixed-parameter tractable (FPT). More precisely, we give an algorithm deciding whether a graph G with a given cost function w: V(G) -> N contains a secluded path of exposure at most k with cost at most C in time O(3^{k/3}(n+m) log W), where W is the maximum value of w on the input graph G. Similarly, Secluded Steiner Tree is solvable in time O(2^k k^2 (n+m) log W).
The main result of this paper is about "above guarantee" parameterizations for secluded problems. We show that Secluded Steiner Tree is FPT parameterized by r+p, where p is the number of terminals, l is the size of an optimum Steiner tree, and r = k - l. We complement this result by showing that the problem is co-W[1]-hard when parameterized by r only.
We also investigate Secluded Steiner Tree from the kernelization perspective and provide several lower and upper bounds when the parameters are the treewidth, the size of a vertex cover, the maximum vertex degree and the solution size. Finally, we refine the algorithmic result of Chechik et al. by improving the exponential dependence on the treewidth of the input graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.408/LIPIcs.FSTTCS.2015.408.pdf
Secluded path
Secluded Steiner tree
parameterized complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
420
433
10.4230/LIPIcs.FSTTCS.2015.420
article
Parameterized Algorithms for Deletion to (r,ell)-Graphs
Kolay, Sudeshna
Panolan, Fahad
For fixed integers r,ell geq 0, a graph G is called an (r,ell)-graph if the vertex set V(G) can be partitioned into r independent sets and ell cliques. This brings us to the following natural parameterized questions: Vertex (r,ell)-Partization and Edge (r,ell)-Partization. An input to these problems consists of a graph G and a positive integer k, and the objective is to decide whether there exists a set S subseteq V(G) (S subseteq E(G)) such that the deletion of S from G results in an (r,ell)-graph. These problems generalize well studied problems such as Odd Cycle Transversal, Edge Odd Cycle Transversal, Split Vertex Deletion and Split Edge Deletion. We cannot hope for parameterized algorithms for either Vertex (r,ell)-Partization or Edge (r,ell)-Partization when either r or ell is at least 3, as the recognition problem itself is NP-complete. This leaves the case of r,ell in {1,2}. We almost complete the parameterized complexity dichotomy for these problems by obtaining the following results:
- We show that Vertex (r,ell)-Partization is fixed parameter tractable (FPT) for r,ell in {1,2}. We also design O(sqrt{log n})-factor approximation algorithms for these problems. These approximation algorithms are then utilized to design polynomial-sized randomized Turing kernels for these problems.
- Edge (r,ell)-Partization is FPT when (r,ell)in{(1,2),(2,1)}. However, the parameterized complexity of Edge (2,2)-Partization remains open.
For our approximation algorithms, and thus for the Turing kernels, we use an interesting finite forbidden induced subgraph characterization for a class of graphs known as (r,ell)-split graphs, which properly contains the class of (r,ell)-graphs. This approach to obtaining approximation algorithms could be of independent interest.
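For small graphs, the definition of an (r,ell)-graph can be checked directly by brute force over all assignments of vertices to r independent-set parts and ell clique parts. An exponential-time sketch, for illustration only (the recognition problem is NP-complete when r or ell is at least 3):

```python
from itertools import product

# Brute-force check that a graph is an (r, ell)-graph: try every
# assignment of the n vertices to r independent-set parts and
# ell clique parts, and verify each part has the required structure.

def is_r_ell_graph(n, edges, r, ell):
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    for assign in product(range(r + ell), repeat=n):
        ok = True
        for u in range(n):
            for v in range(u + 1, n):
                if assign[u] != assign[v]:
                    continue
                if assign[u] < r and adj[u][v]:
                    ok = False  # edge inside an independent-set part
                elif assign[u] >= r and not adj[u][v]:
                    ok = False  # missing edge inside a clique part
        if ok:
            return True
    return False

# A 4-cycle is bipartite, hence a (2,0)-graph, but not a (1,0)-graph.
print(is_r_ell_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2, 0))  # True
print(is_r_ell_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 1, 0))  # False
```

Note that (2,0)-graphs are exactly the bipartite graphs and (1,1)-graphs are the split graphs, which is why the partization problems above generalize Odd Cycle Transversal and Split Vertex Deletion.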
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.420/LIPIcs.FSTTCS.2015.420.pdf
FPT
Turing kernels
Approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
434
447
10.4230/LIPIcs.FSTTCS.2015.434
article
Finding Even Subgraphs Even Faster
Goyal, Prachi
Misra, Pranabendu
Panolan, Fahad
Philip, Geevarghese
Saurabh, Saket
Problems of the following kind have been the focus of much recent research in the realm of parameterized complexity: given an input graph (digraph) on n vertices and a positive integer parameter k, find if there exist k edges (arcs) whose deletion results in a graph that satisfies some specified parity constraints. In particular, when the objective is to obtain a connected graph in which all the vertices have even degrees, i.e., the resulting graph is Eulerian, the problem is called Undirected Eulerian Edge Deletion. The corresponding problem in digraphs, where the resulting graph should be strongly connected and every vertex should have the same in-degree as its out-degree, is called Directed Eulerian Edge Deletion. Cygan et al. [Algorithmica, 2014] showed that these problems are fixed parameter tractable (FPT), and gave algorithms with running time 2^O(k log k)n^O(1). They also asked, as an open problem, whether there exist FPT algorithms which solve these problems in time 2^O(k)n^O(1). The same question was posed as an open problem at the School on Parameterized Algorithms and Complexity 2014, Bedlewo, Poland.
In this paper we answer their question in the affirmative: using the technique of computing representative families of co-graphic matroids, we design algorithms which solve these problems in time 2^O(k)n^O(1). The crucial insight we bring to these problems is to view the solution as an independent set of a co-graphic matroid. We believe that this viewpoint will be useful in other problems where one of the constraints to be satisfied is connectivity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.434/LIPIcs.FSTTCS.2015.434.pdf
Eulerian Edge Deletion
FPT
Representative Family
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
448
462
10.4230/LIPIcs.FSTTCS.2015.448
article
The Parameterized Complexity of the Minimum Shared Edges Problem
Fluschnik, Till
Kratsch, Stefan
Niedermeier, Rolf
Sorge, Manuel
We study the NP-complete Minimum Shared Edges (MSE) problem. Given an undirected graph, a source and a sink vertex, and two integers p and k, the question is whether there are p paths in the graph connecting the source with the sink and sharing at most k edges. Herein, an edge is shared if it appears in at least two paths. We show that MSE is W[1]-hard when parameterized by the treewidth of the input graph and the number k of shared edges combined. We show that MSE is fixed-parameter tractable with respect to p, but does not admit a polynomial-size kernel (unless NP is a subset of coNP/poly). In the proof of the fixed-parameter tractability of MSE parameterized by p, we employ the treewidth reduction technique due to Marx, O'Sullivan, and Razgon [ACM TALG 2013].
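To make the objective concrete, here is a small Python sketch (my own illustration, not from the paper) that counts the shared edges of a given collection of source-sink paths; the vertex-list representation of paths is an assumption for this example.

```python
from collections import Counter

def shared_edges(paths):
    """Count the edges used by at least two of the given paths
    (an edge is 'shared' in the MSE sense). Paths are vertex
    lists; edges are undirected, so endpoints are normalized."""
    counts = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            counts[frozenset((u, v))] += 1
    return sum(1 for c in counts.values() if c >= 2)
```

MSE then asks whether p source-sink paths exist whose shared-edge count, computed this way, is at most k.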
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.448/LIPIcs.FSTTCS.2015.448.pdf
Parameterized complexity
kernelization
treewidth
treewidth reduction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
463
474
10.4230/LIPIcs.FSTTCS.2015.463
article
Control Improvisation
Fremont, Daniel J.
Donzé, Alexandre
Seshia, Sanjit A.
Wessel, David
We formalize and analyze a new automata-theoretic problem termed control improvisation. Given an automaton, the problem is to produce an improviser, a probabilistic algorithm that randomly generates words in its language, subject to two additional constraints: the satisfaction of an admissibility predicate, and the exhibition of a specified amount of randomness. Control improvisation has multiple applications, including, for example, generating musical improvisations that satisfy rhythmic and melodic constraints, where admissibility is determined by some bounded divergence from a reference melody. We analyze the complexity of the control improvisation problem, giving cases where it is efficiently solvable and cases where it is #P-hard or undecidable. We also show how symbolic techniques based on Boolean satisfiability (SAT) solvers can be used to approximately solve some of the intractable cases.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.463/LIPIcs.FSTTCS.2015.463.pdf
finite automata
random sampling
Boolean satisfiability
testing
computational music
control theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
475
488
10.4230/LIPIcs.FSTTCS.2015.475
article
A Provably Correct Sampler for Probabilistic Programs
Hur, Chung-Kil
Nori, Aditya V.
Rajamani, Sriram K.
Samuel, Selva
We consider the problem of inferring the implicit distribution specified by a probabilistic program. A popular inference technique for probabilistic programs called Markov Chain Monte Carlo or
MCMC sampling involves running the program repeatedly and generating sample values by perturbing values produced in "previous runs". This simulates a Markov chain whose stationary distribution is the distribution specified by the probabilistic program. However, it is non-trivial to implement MCMC sampling for probabilistic programs, since each variable could be updated at multiple program points. In such cases, it is unclear which values from the "previous run" should be used to generate samples for the "current run". We present an algorithm to solve this problem for the general case and formally prove that the algorithm is correct. Our algorithm handles variables that are updated multiple times along the same path, updated along different paths in a conditional statement, or repeatedly updated inside loops. We have implemented our algorithm in a tool called InferX. We empirically demonstrate that InferX produces the correct result for various benchmarks, whereas existing tools such as R2 and Stan produce incorrect results on several of these benchmarks.
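The difficulty of matching values across runs can be illustrated with a toy trace-based Metropolis-Hastings sampler. This is a sketch of the standard single-site approach, not the paper's InferX algorithm; the model, the program-point addressing scheme, and the deliberately simplified acceptance rule are all my own assumptions.

```python
import math
import random

def norm_logpdf(x, mu, sigma):
    # Log density of N(mu, sigma^2) at x.
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def program(sample):
    # Toy model in which x is updated at two program points, so random
    # choices are addressed by program point rather than variable name.
    x = sample("pt1", 0.0, 1.0)          # x ~ N(0, 1) at the first point
    if x > 0:
        x = sample("pt2", 2.0, 1.0)      # x resampled at a second point
    return x, norm_logpdf(1.5, x, 0.5)   # observe y = 1.5 near x

def mh(n_iters, seed=0):
    rng = random.Random(seed)

    def run(prev, resample_addr):
        new = {}
        def sample(addr, mu, sigma):
            if addr in prev and addr != resample_addr:
                new[addr] = prev[addr]            # reuse previous run's value
            else:
                new[addr] = rng.gauss(mu, sigma)  # draw a fresh value
            return new[addr]
        x, loglik = program(sample)
        return new, x, loglik

    trace, x, loglik = run({}, None)
    samples = []
    for _ in range(n_iters):
        addr = rng.choice(sorted(trace))          # perturb one program point
        cand, cx, cll = run(trace, addr)
        # Simplified accept step: a full sampler must also account for the
        # prior and proposal terms, which is exactly where correctness
        # arguments such as the paper's are needed.
        if math.log(rng.random()) < cll - loglik:
            trace, x, loglik = cand, cx, cll
        samples.append(x)
    return samples
```

Note that when "pt1" is resampled to a negative value, the choice at "pt2" simply disappears from the new trace; deciding which stale values to keep, drop, or reuse is the question the paper's algorithm answers in general.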
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.475/LIPIcs.FSTTCS.2015.475.pdf
Probabilistic Programming
Program Correctness
Probabilistic Inference
Markov Chain Monte Carlo Sampling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
489
502
10.4230/LIPIcs.FSTTCS.2015.489
article
On the Problem of Computing the Probability of Regular Sets of Trees
Michalewski, Henryk
Mio, Matteo
We consider the problem of computing the probability of regular languages of infinite trees with respect to the natural coin-flipping measure. We propose an algorithm which computes the probability of languages recognizable by game automata. In particular this algorithm is applicable to all deterministic automata. We then use the algorithm to prove, through examples, three properties of the measure: (1) there exist regular sets having irrational probability, (2) there exist comeager regular sets having probability 0, and (3) the probability of the game languages W_{i,k}, known from automata theory, is 0 if k is odd and 1 otherwise.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.489/LIPIcs.FSTTCS.2015.489.pdf
regular languages of trees
probability
meta-parity games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
503
516
10.4230/LIPIcs.FSTTCS.2015.503
article
Probabilistic Regular Expressions and MSO Logic on Finite Trees
Weidner, Thomas
We introduce probabilistic regular tree expressions and give a Kleene-like theorem for probabilistic tree automata (PTA). Furthermore, we define probabilistic MSO logic. This logic is more expressive than PTA. We define bottom-up PTA, which are strictly more expressive than PTA. Using bottom-up PTA, we prove a Büchi-like theorem for probabilistic MSO logic. We obtain a Nivat-style theorem as an additional result.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.503/LIPIcs.FSTTCS.2015.503.pdf
Probabilistic Regular Expressions
MSO Logic
Tree Automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
517
528
10.4230/LIPIcs.FSTTCS.2015.517
article
Rumors Across Radio, Wireless, Telephone
Iglesias, Jennifer
Rajaraman, Rajmohan
Ravi, R.
Sundaram, Ravi
We study the problem of computing a minimum-time schedule to spread rumors in a given graph under several models: in the radio model, all neighbors of a transmitting node listen to the message and are able to record it only when no other neighbor is transmitting; in the wireless model (also called the edge-star model), each transmitter broadcasts at a different frequency to which any neighbor can tune, but only one neighboring transmission can be accessed in this way; in the telephone model, the set of transmitter-receiver pairs forms a matching in the graph. The rumor spreading problems assume a message at one or several nodes of the graph that must reach a target node or set of nodes. The transmission proceeds in synchronous rounds under the rules of the corresponding model. The goal is to compute a schedule that completes in the minimum number of rounds.
We present a comprehensive study of approximation algorithms for these problems, and show several reductions from the harder to the easier models for special demands. We show a new hardness of approximation of Omega(n^1/2 - epsilon) for the minimum radio gossip time by a connection to maximum induced matchings. We give the first sublinear approximation algorithms for the most general case of the problem under the wireless model; we also consider various special cases such as instances with symmetric demands and give better approximation algorithms. Our work exposes the relationships across the models and opens up several new avenues for further study.
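For instance, one synchronous round of the radio model can be simulated in a few lines. This is a hypothetical sketch under the collision rule described above; the adjacency-dict representation and function name are my own.

```python
def radio_round(adj, transmitters, informed):
    """One synchronous round in the radio model: a node receives the
    rumor iff exactly one of its neighbours transmits (two or more
    simultaneous transmissions collide, and the node hears nothing).
    adj maps each vertex to the set of its neighbours."""
    newly = set()
    for v in adj:
        if v in informed:
            continue
        talking = [u for u in adj[v] if u in transmitters]
        if len(talking) == 1:       # exactly one transmitter: no collision
            newly.add(v)
    return informed | newly
```

On a star with center c and leaves a, b, having both leaves transmit at once produces a collision at c, while a single transmitter informs it; this collision constraint is what a radio schedule must work around.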
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.517/LIPIcs.FSTTCS.2015.517.pdf
Broadcast
Gossip
Approximation algorithms
Graph algorithms
Hardness of Approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
529
542
10.4230/LIPIcs.FSTTCS.2015.529
article
The Price of Local Power Control in Wireless Scheduling
Halldórsson, Magnús M.
Tonoyan, Tigran
We consider the problem of scheduling wireless links in the physical model, where we seek an assignment of power levels and a partition of the given set of links into the minimum number of subsets satisfying the signal-to-interference-and-noise-ratio (SINR) constraints. Specifically, we are interested in the efficiency of local power assignment schemes, or oblivious power schemes, in approximating wireless scheduling. Oblivious power schemes are motivated by networking scenarios when power levels must be decided in advance, and not as part of the scheduling computation.
We present the first O(log log Delta)-approximation algorithm, which is known to be best possible (in terms of Delta) for oblivious power schemes, where Delta is the longest-to-shortest link length ratio. We achieve this by representing interference with a conflict graph, which allows the application of graph-theoretic results to a variety of related problems, including the weighted capacity problem. We explore further the contours of approximability and find that the choice of power assignment matters; that not all metric spaces are equal; and that the presence of weak links makes the problem harder. Combined, our results resolve the price of local power for wireless scheduling, or the value of allowing unfettered power control.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.529/LIPIcs.FSTTCS.2015.529.pdf
Wireless Scheduling
Physical Model
Oblivious Power
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
543
559
10.4230/LIPIcs.FSTTCS.2015.543
article
Allocation of Divisible Goods Under Lexicographic Preferences
Schulman, Leonard J.
Vazirani, Vijay V.
We present a simple and natural non-pricing mechanism for allocating divisible goods among strategic agents having lexicographic preferences. Our mechanism has the favorable property of strategy-proofness (incentive compatibility). In addition, and even when extended to the case of Leontief bundles, it enjoys Pareto efficiency, envy-freeness, and time efficiency.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.543/LIPIcs.FSTTCS.2015.543.pdf
Mechanism design
lexicographic preferences
strategyproof
Pareto optimal
incentive compatible
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
560
574
10.4230/LIPIcs.FSTTCS.2015.560
article
On the Expressiveness of Multiparty Sessions
Demangeon, Romain
Yoshida, Nobuko
This paper explores the expressiveness of asynchronous multiparty sessions. We model the behaviours of endpoint implementations in several ways: (i) by the existence of different buffers and queues used to store messages exchanged asynchronously, (ii) by the ability of an endpoint to lightly reconfigure its behaviour at runtime (flexibility), and (iii) by the presence of explicit parallelism or interruptions (exceptional actions) in endpoint behaviour. For a given protocol we define several denotations, based on traces of events, corresponding to the different implementations, and compare them.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.560/LIPIcs.FSTTCS.2015.560.pdf
concurrency
message-passing
session
asynchrony
expressiveness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
575
589
10.4230/LIPIcs.FSTTCS.2015.575
article
Secure Refinements of Communication Channels
Cheval, Vincent
Cortier, Véronique
le Morvan, Eric
It is a common practice to design a protocol (say Q) assuming some secure channels. Then the secure channels are implemented using any standard protocol, e.g. TLS. In this paper, we study when such a practice is indeed secure.
We provide a characterization of both confidential and authenticated channels. As an application, we study several protocols of the literature including TLS and BAC protocols. Thanks to our result, we can consider a larger number of sessions when analyzing complex protocols resulting from explicit implementation of the secure channels of some more abstract protocol Q.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.575/LIPIcs.FSTTCS.2015.575.pdf
Protocol
Composition
Formal methods
Channels
Implementation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2015-12-14
45
590
603
10.4230/LIPIcs.FSTTCS.2015.590
article
Failure-aware Runtime Verification of Distributed Systems
Basin, David
Klaedtke, Felix
Zalinescu, Eugen
Prior runtime-verification approaches for distributed systems are limited as they do not account for network failures and they assume that system messages are received in the order they are sent. To overcome these limitations, we present an online algorithm for verifying observed system behavior at runtime with respect to specifications written in the real-time logic MTL that efficiently handles out-of-order message deliveries and operates in the presence of failures. Our algorithm uses a three-valued semantics for MTL, where the third truth value models knowledge gaps, and it resolves knowledge gaps as it propagates Boolean values through the formula structure. We establish the algorithm's soundness and provide completeness guarantees. We also show that it supports distributed system monitoring, where multiple monitors cooperate and exchange their observations and conclusions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol045-fsttcs2015/LIPIcs.FSTTCS.2015.590/LIPIcs.FSTTCS.2015.590.pdf
Runtime verification
monitoring algorithm
real-time logics
multi-valued semantics
distributed systems
asynchronous communication