52 Search Results for "Ramalingam, G."


Volume

LIPIcs, Volume 45

35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)

FSTTCS 2015, December 16-18, 2015, Bangalore, India

Editors: Prahladh Harsha and G. Ramalingam

Document
Safe Transferable Regions

Authors: Gowtham Kaki and G. Ramalingam

Published in: LIPIcs, Volume 109, 32nd European Conference on Object-Oriented Programming (ECOOP 2018)


Abstract
There is increasing interest in alternative memory management schemes that seek to combine the convenience of garbage collection and the performance of manual memory management in a single language framework. Unfortunately, ensuring safety in the presence of manual memory management remains as great a challenge as ever. In this paper, we present a C#-like object-oriented language called Broom that uses a combination of a region type system and lightweight runtime checks to enforce safety in the presence of user-managed memory regions called transferable regions. Unsafe transferable regions have been previously used to contain the latency due to unbounded GC pauses. Our approach shows that it is possible to restore safety without compromising on the benefits of transferable regions. We prove the type safety of Broom in a formal framework that includes its C#-inspired features, such as higher-order functions and generics. We complement our type system with a type inference algorithm, which eliminates the need for programmers to write region annotations on types. The inference algorithm has been proven sound and relatively complete. We describe a prototype implementation of the inference algorithm, and our experience of using it to enforce memory safety in dataflow programs.
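To make the hazard concrete, here is a minimal hypothetical sketch in plain Java (not Broom syntax; the Region class and its methods are invented purely for illustration): a reference that escapes a user-managed region still points into memory the sender no longer owns once the region is transferred to another actor, which is the kind of use a region type system like Broom's is designed to reject at compile time.

import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a transferable region (illustration only, not Broom's API):
// objects are allocated inside the region, and once the region is transferred to
// another actor the sending side must no longer touch them.
class Region {
    private List<Object> objects = new ArrayList<>();
    private boolean transferred = false;

    <T> T alloc(T obj) {
        if (transferred) throw new IllegalStateException("region already transferred");
        objects.add(obj);
        return obj;
    }

    void transferToOtherActor() {
        transferred = true;   // ownership leaves this actor
        objects = null;       // the sender's view of the region is gone
    }

    boolean isTransferred() { return transferred; }
}

public class TransferableRegionSketch {
    public static void main(String[] args) {
        Region r = new Region();
        int[] payload = r.alloc(new int[] {1, 2, 3}); // reference escapes the region

        r.transferToOtherActor();

        // 'payload' still refers to data in the transferred region. A region type
        // system rejects this use statically; a purely dynamic scheme can only
        // catch it at run time, as the check below does.
        if (!r.isTransferred()) {
            System.out.println(payload[0]);
        } else {
            System.out.println("use of a region object after transfer would be unsafe");
        }
    }
}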

Cite as

Gowtham Kaki and G. Ramalingam. Safe Transferable Regions. In 32nd European Conference on Object-Oriented Programming (ECOOP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 109, pp. 11:1-11:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kaki_et_al:LIPIcs.ECOOP.2018.11,
  author =	{Kaki, Gowtham and Ramalingam, G.},
  title =	{{Safe Transferable Regions}},
  booktitle =	{32nd European Conference on Object-Oriented Programming (ECOOP 2018)},
  pages =	{11:1--11:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-079-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{109},
  editor =	{Millstein, Todd},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2018.11},
  URN =		{urn:nbn:de:0030-drops-92160},
  doi =		{10.4230/LIPIcs.ECOOP.2018.11},
  annote =	{Keywords: Memory Safety, Formal Methods, Type System, Type Inference, Regions, Featherweight Java}
}
Document
Complete Volume
LIPIcs, Volume 45, FSTTCS'15, Complete Volume

Authors: Prahladh Harsha and G. Ramalingam

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
LIPIcs, Volume 45, FSTTCS'15, Complete Volume

Cite as

35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Proceedings{harsha_et_al:LIPIcs.FSTTCS.2015,
  title =	{{LIPIcs, Volume 45, FSTTCS'15, Complete Volume}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015},
  URN =		{urn:nbn:de:0030-drops-56671},
  doi =		{10.4230/LIPIcs.FSTTCS.2015},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems, Specifying and Verifying and Reasoning about Programs, Mathematical Logic, Formal Languages}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Prahladh Harsha and G. Ramalingam

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. i-xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{harsha_et_al:LIPIcs.FSTTCS.2015.i,
  author =	{Harsha, Prahladh and Ramalingam, G.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{i--xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.i},
  URN =		{urn:nbn:de:0030-drops-56174},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.i},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Bypassing Worst Case Analysis: Tensor Decomposition and Clustering (Invited Talk)

Authors: Moses S. Charikar

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Typical worst case analysis of algorithms has led to a rich theory, but suffers from many pitfalls. This has inspired several approaches to bypass worst case analysis. In this talk, I will describe two vignettes from recent work in this realm. In the first part of the talk, I will discuss tensor decomposition -- a natural higher dimensional analog of matrix decomposition. Low rank tensor decompositions have proved to be a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. Yet, they pose significant challenges for algorithm design as most problems about tensors are NP-hard. I will discuss a smoothed analysis framework for analyzing algorithms for tensor decomposition which models realistic instances of learning problems and allows us to overcome many of the usual limitations of using tensor methods. In the second part of the talk, I will explore the phenomenon of convex relaxations returning integer solutions. Clearly this is not true in the worst case. I will discuss instances of discrete optimization problems where, for a suitable distribution on inputs, LP and SDP relaxations produce integer solutions with high probability. This has been studied in the context of LP decoding, sparse recovery, stochastic block models and so on. I will mention some recent results for clustering problems: when points are drawn from a distribution over k sufficiently separated clusters, the well known k-median relaxation and a (not so well known) SDP relaxation for k-means exactly recover the clusters.
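For reference (standard background, not specific to this talk): a rank-r decomposition of a third-order tensor T expresses it as a sum of rank-one terms,

  T = \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i, \qquad a_i, b_i, c_i \in \mathbb{R}^{n},

and, unlike low-rank matrix factorizations, such decompositions are unique under mild conditions on the factors, which is the advantage over matrix decomposition methods mentioned above.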

Cite as

Moses S. Charikar. Bypassing Worst Case Analysis: Tensor Decomposition and Clustering (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{charikar:LIPIcs.FSTTCS.2015.1,
  author =	{Charikar, Moses S.},
  title =	{{Bypassing Worst Case Analysis: Tensor Decomposition and Clustering}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{1--1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.1},
  URN =		{urn:nbn:de:0030-drops-56647},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.1},
  annote =	{Keywords: tensor decomposition, smoothed analysis, convex relaxations, integrality}
}
Document
Invited Talk
Checking Correctness of Concurrent Objects: Tractable Reductions to Reachability (Invited Talk)

Authors: Ahmed Bouajjani, Michael Emmi, Constantin Enea, and Jad Hamza

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Efficient implementations of concurrent objects such as semaphores, locks, and atomic collections including stacks and queues are vital to modern computer systems. Programming them is, however, error-prone. To minimize synchronization overhead between concurrent object-method invocations, implementors avoid blocking operations like lock acquisition, allowing methods to execute concurrently. However, concurrency risks unintended inter-operation interference. The correctness of such implementations is captured by observational refinement, which ensures conformance to atomic reference implementations. Formally, given two libraries L_1 and L_2 implementing the methods of some concurrent object, we say L_1 refines L_2 if and only if every computation of every program using L_1 would also be possible were L_2 used instead. Linearizability, being an equivalent property, is the predominant proof technique for establishing observational refinement: one shows that each concurrent execution has a linearization which is a valid sequential execution according to a specification, given by an abstract data type or atomic reference implementation. However, checking linearizability is intrinsically hard. Indeed, even in the case where method implementations and object specifications are both finite-state, and when a fixed number of threads (invoking methods in parallel) is considered, the linearizability problem is EXPSPACE-complete, and it becomes undecidable when the number of threads is unbounded. These results show in particular that there is a complexity/decidability gap between the problem of checking linearizability and the problem of checking reachability (i.e., the dual of checking safety/invariance properties), the latter being PSPACE-complete and EXPSPACE-complete in the above cases, respectively.

We address here the issue of investigating cases where tractable reductions of the observational refinement/linearizability problem to the reachability problem, or dually to invariant checking, are possible. Our aim is (1) to develop algorithmic approaches that avoid a systematic exploration of all possible linearizations of all computations, (2) to exploit existing techniques and tools for efficient invariant checking to check observational refinement, and (3) to establish decidability and complexity results for significant classes of concurrent objects and data structures. We present two approaches that we have proposed recently.

The first approach introduces a parameterized approximation schema for detecting observational refinement violations. It exploits a fundamental property of shared-memory library executions: their histories are interval orders, a special case of partial orders which admit canonical representations in which each operation o is mapped to a positive-integer-bounded interval I(o). Interval orders are equipped with a natural notion of length, which corresponds to the smallest integer constant for which an interval mapping exists. We then define a notion of bounded-interval-length analysis and demonstrate its efficiency, in terms of complexity, coverage, and scalability, for detecting observational refinement bugs.

The second approach focuses on a specific class of abstract data types, including common concurrent objects and data structures such as stacks and queues. We show that for this class of objects, the linearizability problem is actually as hard as the control-state reachability problem. Indeed, we prove that in this case, the existence of linearizability violations (i.e., finite computations that are not linearizable) can be captured completely by a finite number of finite-state automata, even when an unbounded number of parallel operations is allowed (assuming that libraries are data-independent).
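Written out, the refinement relation defined above is: with obs(P[L]) denoting the set of observable behaviours of a program P linked against library L,

  L_1 \sqsubseteq L_2 \iff \forall P.\; \mathrm{obs}(P[L_1]) \subseteq \mathrm{obs}(P[L_2]),

and linearizability is the equivalent criterion used in practice to establish it.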

Cite as

Ahmed Bouajjani, Michael Emmi, Constantin Enea, and Jad Hamza. Checking Correctness of Concurrent Objects: Tractable Reductions to Reachability (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 2-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bouajjani_et_al:LIPIcs.FSTTCS.2015.2,
  author =	{Bouajjani, Ahmed and Emmi, Michael and Enea, Constantin and Hamza, Jad},
  title =	{{Checking Correctness of Concurrent Objects: Tractable Reductions to Reachability}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{2--4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.2},
  URN =		{urn:nbn:de:0030-drops-56638},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.2},
  annote =	{Keywords: Concurrent objects, linearizability, verification, bug detection}
}
Document
Invited Talk
Reachability Problems for Continuous Linear Dynamical Systems (Invited Talk)

Authors: James Worrell

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
This talk is about reachability problems for continuous-time linear dynamical systems. A central decision problem in the area is the Continuous Skolem Problem. In particular, this problem lies at the heart of several reachability questions in continuous-time Markov chains and linear hybrid automata. We describe some recent work, done in collaboration with Chonev and Ouaknine, that uses results in transcendence theory and real algebraic geometry to obtain decidability for certain variants of the problem. In particular, we consider a bounded version of the Continuous Skolem Problem, corresponding to time-bounded reachability. We prove decidability of the bounded problem assuming Schanuel's conjecture, a central conjecture in transcendence theory. We also describe some partial decidability results in the unbounded case for functions satisfying differential equations of fixed low order. Finally, we give evidence of significant mathematical obstacles to proving decidability of the Continuous Skolem Problem in full generality by exhibiting some number-theoretic consequences of the existence of a decision procedure for this problem.
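In its standard formulation (stated here for orientation; precise conventions vary between papers), the Continuous Skolem Problem asks: given rational coefficients a_0, ..., a_{n-1} and rational initial values f(0), f'(0), ..., f^{(n-1)}(0) of a function f satisfying the linear differential equation

  f^{(n)}(t) = a_{n-1} f^{(n-1)}(t) + \cdots + a_1 f'(t) + a_0 f(t),

decide whether f(t) = 0 for some t \ge 0. The bounded variant restricts t to a given interval [0, T], which is the time-bounded reachability question referred to in the abstract.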

Cite as

James Worrell. Reachability Problems for Continuous Linear Dynamical Systems (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 5-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{worrell:LIPIcs.FSTTCS.2015.5,
  author =	{Worrell, James},
  title =	{{Reachability Problems for Continuous Linear Dynamical Systems}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{5--6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.5},
  URN =		{urn:nbn:de:0030-drops-56410},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.5},
  annote =	{Keywords: Linear Differential Equations, Continuous-Time Markov Chains, Hybrid Automata, Schanuel's Conjecture}
}
Document
Invited Talk
Convexity, Bayesianism, and the Quest Towards Optimal Algorithms (Invited Talk)

Authors: Boaz Barak

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
In this high-level and accessible talk I will describe a recent line of work aimed at trying to understand the intrinsic complexity of computational problems by finding optimal algorithms for large classes of such problems. In particular, I will talk about efforts centered on convex programming as a source for such candidate algorithms. As we will see, a byproduct of this effort is a computational analog of Bayesian probability that is of interest in its own right. I will demonstrate the approach using the example of the planted clique (also known as hidden clique) problem - a central problem in average-case complexity with connections to machine learning, community detection, compressed sensing, finding Nash equilibria, and more. While the complexity of the planted clique problem is still wide open, this line of work has led to interesting insights on it.
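For orientation (standard background): in the planted clique problem one draws a random graph G \sim G(n, 1/2), adds all edges on a uniformly chosen vertex set S of size k, and must recover S from the resulting graph,

  G' = G(n, \tfrac{1}{2}) \cup K_S, \qquad |S| = k, \quad \text{goal: recover } S.

Cliques of size slightly above 2 \log_2 n are already statistically identifiable, while known polynomial-time algorithms require k = \Omega(\sqrt{n}); closing this gap is the open problem the talk refers to.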

Cite as

Boaz Barak. Convexity, Bayesianism, and the Quest Towards Optimal Algorithms (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, p. 7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{barak:LIPIcs.FSTTCS.2015.7,
  author =	{Barak, Boaz},
  title =	{{Convexity, Bayesianism, and the Quest Towards Optimal Algorithms}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{7--7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.7},
  URN =		{urn:nbn:de:0030-drops-56626},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.7},
  annote =	{Keywords: Convex programming, Bayesian probability, Average-case complexity, Planted clique}
}
Document
Invited Talk
Beyond Matrix Completion (Invited Talk)

Authors: Ankur Moitra

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Here we study some of the statistical and algorithmic problems that arise in recommendation systems. We will be interested in what happens when we move beyond the matrix setting, to work with higher order objects, namely tensors. To what extent does inference over more complex objects yield better predictions, and at what cost in running time? We will explore the computational vs. statistical tradeoffs for some basic problems about recovering approximately low rank tensors from few observations, and will show that our algorithms are nearly optimal among all polynomial time algorithms, under natural complexity-theoretic assumptions. This is based on joint work with Boaz Barak.

Cite as

Ankur Moitra. Beyond Matrix Completion (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, p. 8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{moitra:LIPIcs.FSTTCS.2015.8,
  author =	{Moitra, Ankur},
  title =	{{Beyond Matrix Completion}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{8--8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.8},
  URN =		{urn:nbn:de:0030-drops-56650},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.8},
  annote =	{Keywords: matrix completion, recommendation systems, tensor prediction}
}
Document
Invited Talk
Relational Refinement Types for Higher-Order Shape Transformers (Invited Talk)

Authors: Suresh Jagannathan

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Understanding, discovering, and proving useful properties of sophisticated data structures are central problems in program verification. A particularly challenging exercise for shape analyses involves reasoning about sophisticated shape transformers that preserve the shape of a data structure (e.g., the data structure skeleton is always maintained as a balanced tree) or the relationship among values contained therein (e.g., the in-order relation of the elements of a tree or the parent-child relation of the elements of a heap) across program transformations. In this talk, we consider the specification and verification of such transformers for ML programs. The structural properties preserved by transformers can often be naturally expressed as inductively-defined relations over the recursive structure evident in the definitions of the datatypes they manipulate. By carefully augmenting a refinement type system with support for reasoning about structural relations over algebraic datatypes, we realize an expressive yet decidable specification language, capable of capturing useful structural invariants, which can nonetheless be automatically verified using off-the-shelf type checkers and theorem provers. Notably, our technique generalizes to definitions of parametric relations for polymorphic data types which, in turn, lead to highly composable specifications over higher-order polymorphic shape transformers.

Cite as

Suresh Jagannathan. Relational Refinement Types for Higher-Order Shape Transformers (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, p. 9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{jagannathan:LIPIcs.FSTTCS.2015.9,
  author =	{Jagannathan, Suresh},
  title =	{{Relational Refinement Types for Higher-Order Shape Transformers}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{9--9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.9},
  URN =		{urn:nbn:de:0030-drops-56406},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.9},
  annote =	{Keywords: Relational Specifications, Inductive and Parametric Relations, Refinement Types, Shape Analysis, Data Structure Verification}
}
Document
Robust Reoptimization of Steiner Trees

Authors: Keshav Goyal and Tobias Mömke

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
In reoptimization problems, one is given an optimal solution to a problem instance and a local modification of the instance. The goal is to obtain a solution for the modified instance. The additional information about the instance provided by the given solution plays a central role: we aim to use that information in order to obtain better solutions than we are able to compute from scratch. In this paper, we consider Steiner tree reoptimization and address the optimality requirement of the provided solution. Instead of assuming that we are provided an optimal solution, we relax the assumption to the more realistic scenario where we are given an approximate solution with an upper bound on its performance guarantee. We show that for Steiner tree reoptimization there is a clear separation between local modifications where optimality is crucial for obtaining improved approximations and those instances where approximate solutions are acceptable starting points. For some of the local modifications that have been considered in previous research, we show that for every fixed epsilon > 0, approximating the reoptimization problem with respect to a given (1+epsilon)-approximation is as hard as approximating the Steiner tree problem itself (whereas with a given optimal solution to the original problem it is known that one can obtain considerably improved results). Furthermore, we provide a new algorithmic technique that, with some further insights, allows us to obtain improved performance guarantees for Steiner tree reoptimization with respect to all remaining local modifications that have been considered in the literature: a required node of degree more than one becomes a Steiner node; a Steiner node becomes a required node; the cost of one edge is increased.

Cite as

Keshav Goyal and Tobias Mömke. Robust Reoptimization of Steiner Trees. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 10-24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{goyal_et_al:LIPIcs.FSTTCS.2015.10,
  author =	{Goyal, Keshav and M\"{o}mke, Tobias},
  title =	{{Robust Reoptimization of Steiner Trees}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{10--24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.10},
  URN =		{urn:nbn:de:0030-drops-56251},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.10},
  annote =	{Keywords: reoptimization, approximation algorithms, Steiner tree problem, robustness}
}
Document
Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model

Authors: Anamitra Roy Choudhury, Syamantak Das, and Amit Kumar

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
We consider the online scheduling problem to minimize the weighted ell_p-norm of flow-time of jobs. We study this problem under the rejection model introduced by Choudhury et al. (SODA 2015) - here the online algorithm is allowed to not serve an eps-fraction of the requests. We consider the restricted assignments setting where each job can go to a specified subset of machines. Our main result is an immediate dispatch non-migratory 1/eps^{O(1)}-competitive algorithm for this problem when one is allowed to reject at most eps-fraction of the total weight of jobs arriving. This is in contrast with the speed augmentation model under which no online algorithm for this problem can achieve a competitive ratio independent of p.
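Spelled out, the objective is the weighted ell_p-norm of flow-times: for a job j with release time r_j, completion time C_j, and weight w_j, its flow-time is F_j = C_j - r_j, and the algorithm seeks to minimize

  \Bigl( \sum_j w_j \, F_j^{\,p} \Bigr)^{1/p},

while the rejection model permits it to leave unserved a set of jobs whose total weight is at most an eps-fraction of the total weight of all arriving jobs.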

Cite as

Anamitra Roy Choudhury, Syamantak Das, and Amit Kumar. Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 25-37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{roychoudhury_et_al:LIPIcs.FSTTCS.2015.25,
  author =	{Roy Choudhury, Anamitra and Das, Syamantak and Kumar, Amit},
  title =	{{Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{25--37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.25},
  URN =		{urn:nbn:de:0030-drops-56341},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.25},
  annote =	{Keywords: online scheduling, flow-time, competitive analysis}
}
Document
On Correcting Inputs: Inverse Optimization for Online Structured Prediction

Authors: Hal Daumé III, Samir Khuller, Manish Purohit, and Gregory Sanders

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Algorithm designers typically assume that the input data is correct, and then proceed to find "optimal" or "sub-optimal" solutions using this input data. However, this assumption of correct data does not always hold in practice, especially in the context of online learning systems where the objective is to learn appropriate feature weights given some training samples. Such scenarios necessitate the study of inverse optimization problems, where one is given an input instance as well as a desired output and the task is to adjust the input data so that the given output is indeed optimal. Motivated by learning structured prediction models, in this paper we consider inverse optimization with a margin, i.e., we require the given output to be better than all other feasible outputs by a desired margin. We consider such inverse optimization problems for maximum weight matroid basis, matroid intersection, perfect matchings, minimum cost maximum flows, and shortest paths, and derive the first known results for such problems with a non-zero margin. The effectiveness of these algorithmic approaches to online learning for structured prediction is also discussed.
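One common way to formalize inverse optimization with a margin (a generic sketch; the paper's exact objective and choice of norm may differ) is: given feature weights w, a desired output y^*, and a margin \gamma \ge 0, perturb w as little as possible so that y^* beats every other feasible output by at least \gamma,

  \min_{w'} \; \lVert w' - w \rVert \quad \text{s.t.} \quad w' \cdot \phi(y^*) \ \ge\ w' \cdot \phi(y) + \gamma \quad \text{for all feasible } y \neq y^*,

where \phi(y) is the feature representation of output y; for minimization objectives such as shortest paths or minimum cost flows, the inequality is reversed.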

Cite as

Hal Daumé III, Samir Khuller, Manish Purohit, and Gregory Sanders. On Correcting Inputs: Inverse Optimization for Online Structured Prediction. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 38-51, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{daumeiii_et_al:LIPIcs.FSTTCS.2015.38,
  author =	{Daum\'{e} III, Hal and Khuller, Samir and Purohit, Manish and Sanders, Gregory},
  title =	{{On Correcting Inputs: Inverse Optimization for Online Structured Prediction}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{38--51},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.38},
  URN =		{urn:nbn:de:0030-drops-56375},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.38},
  annote =	{Keywords: Inverse Optimization, Structured Prediction, Online Learning}
}
Document
Dynamic Sketching for Graph Optimization Problems with Applications to Cut-Preserving Sketches

Authors: Sepehr Assadi, Sanjeev Khanna, Yang Li, and Val Tannen

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
In this paper, we introduce a new model for sublinear algorithms called dynamic sketching. In this model, the underlying data is partitioned into a large static part and a small dynamic part, and the goal is to compute a summary of the static part (i.e., a sketch) such that, given any update for the dynamic part, one can combine it with the sketch to compute a given function. We say that a sketch is compact if its size is bounded by a polynomial function of the length of the dynamic data, (essentially) independent of the size of the static part. A graph optimization problem P in this model is defined as follows. The input is a graph G(V,E) and a set T \subseteq V of k terminals; the edges between the terminals are the dynamic part and the other edges in G are the static part. The goal is to summarize the graph G into a compact sketch (of size poly(k)) such that given any set Q of edges between the terminals, one can answer the problem P for the graph obtained by inserting all edges in Q into G, using only the sketch. We study the fundamental problem of computing a maximum matching and prove tight bounds on the sketch size. In particular, we show that there exists a (compact) dynamic sketch of size O(k^2) for the matching problem, and that any such sketch has to be of size \Omega(k^2). Our sketch for matchings can be further used to derive compact dynamic sketches for other fundamental graph problems involving cuts and connectivities. Interestingly, our sketch for matchings can also be used to give an elementary construction of a cut-preserving vertex sparsifier with space O(kC^2) for k-terminal graphs, which matches the best known upper bound; here C is the total capacity of the edges incident on the terminals. Additionally, we give an improved lower bound (in terms of C) of Omega(C/log{C}) on the size of cut-preserving vertex sparsifiers, and establish that progress on dynamic sketching of the s-t max-flow problem (either upper bound or lower bound) immediately leads to better bounds for the size of cut-preserving vertex sparsifiers.

Cite as

Sepehr Assadi, Sanjeev Khanna, Yang Li, and Val Tannen. Dynamic Sketching for Graph Optimization Problems with Applications to Cut-Preserving Sketches. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 52-68, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.FSTTCS.2015.52,
  author =	{Assadi, Sepehr and Khanna, Sanjeev and Li, Yang and Tannen, Val},
  title =	{{Dynamic Sketching for Graph Optimization Problems with Applications to Cut-Preserving Sketches}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{52--68},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.52},
  URN =		{urn:nbn:de:0030-drops-56361},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.52},
  annote =	{Keywords: Small-space Algorithms, Maximum Matchings, Vertex Sparsifiers}
}
Document
Weighted Strategy Logic with Boolean Goals Over One-Counter Games

Authors: Patricia Bouyer, Patrick Gardy, and Nicolas Markey

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Strategy Logic (SL) is a powerful specification language for expressing non-zero-sum properties of multi-player games. SL conveniently extends the logic ATL with explicit quantification and assignment of strategies. In this paper, we consider games over one-counter automata, and a quantitative extension 1cSL of SL with assertions over the value of the counter. We prove two results: we first show that, if decidable, model checking the so-called Boolean-goal fragment of 1cSL has non-elementary complexity; we actually prove the result for the Boolean-goal fragment of SL over finite-state games, which was an open question in [Mogavero et al. Reasoning about strategies: On the model-checking problem. ACM ToCL 15(4), 2014]. As a first step towards proving decidability, we then show that the Boolean-goal fragment of 1cSL over one-counter games enjoys a nice periodicity property.

Cite as

Patricia Bouyer, Patrick Gardy, and Nicolas Markey. Weighted Strategy Logic with Boolean Goals Over One-Counter Games. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 69-83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bouyer_et_al:LIPIcs.FSTTCS.2015.69,
  author =	{Bouyer, Patricia and Gardy, Patrick and Markey, Nicolas},
  title =	{{Weighted Strategy Logic with Boolean Goals Over One-Counter Games}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{69--83},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.69},
  URN =		{urn:nbn:de:0030-drops-56537},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.69},
  annote =	{Keywords: Temporal logics, multi-player games, strategy logic, quantitative games}
}
  • Refine by Author
  • 3 Ramalingam, G.
  • 2 Brihaye, Thomas
  • 2 Geeraerts, Gilles
  • 2 Haddad, Axel
  • 2 Harsha, Prahladh

  • Refine by Classification
  • 1 Software and its engineering → Allocation / deallocation strategies
  • 1 Software and its engineering → Data flow languages
  • 1 Software and its engineering → Object oriented languages
  • 1 Software and its engineering → Software verification and validation

  • Refine by Keyword
  • 3 Approximation algorithms
  • 2 FPT
  • 2 Graph algorithms
  • 2 decidability
  • 2 game theory

  • Refine by Type
  • 51 document
  • 1 volume

  • Refine by Publication Year
  • 51 2015
  • 1 2018
