59 Search Results for "Sen, Sandeep"


Volume

LIPIcs, Volume 65

36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)

FSTTCS 2016, December 13-15, 2016, Chennai, India

Editors: Akash Lal, S. Akshay, Saket Saurabh, and Sandeep Sen

Document
No-Dimensional Tverberg Partitions Revisited

Authors: Sariel Har-Peled and Eliot W. Robson

Published in: LIPIcs, Volume 294, 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)


Abstract
Given a set P ⊂ ℝ^d of n points, with diameter Δ, and a parameter δ ∈ (0,1), it is known that there is a partition of P into sets P_1, …, P_t, each of size O(1/δ²), such that their convex hulls all intersect a common ball of radius δΔ. We prove that a random partition, with a simple alteration step, yields the desired partition, resulting in a (randomized) linear time algorithm (i.e., O(dn)). We also provide a deterministic algorithm with running time O(dn log n). Previous proofs were either existential (i.e., at least exponential time), or required much bigger sets. In addition, the algorithm and its proof of correctness are significantly simpler than previous work, and the constants are slightly better. We also include a number of applications and extensions using the same central ideas. For example, we provide a linear time algorithm for computing a "fuzzy" centerpoint, and prove a no-dimensional weak ε-net theorem with an improved constant.
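The randomized partition phase is simple enough to sketch. Below is a minimal Python illustration of that phase only, under our reading of the abstract: shuffle, then cut into groups of size O(1/δ²). The alteration step that repairs the few bad groups, and the convex-hull checks, are omitted; the function name and the concrete constant are ours.

import math
import random

def random_partition(points, delta):
    """Shuffle the points and cut them into consecutive groups of size
    ceil(1/delta^2) (illustrative constant; the last group may be smaller).
    The paper's alteration step is omitted from this sketch."""
    group_size = math.ceil(1.0 / delta ** 2)
    pts = list(points)
    random.shuffle(pts)
    return [pts[i:i + group_size] for i in range(0, len(pts), group_size)]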

Cite as

Sariel Har-Peled and Eliot W. Robson. No-Dimensional Tverberg Partitions Revisited. In 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 294, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{harpeled_et_al:LIPIcs.SWAT.2024.26,
  author =	{Har-Peled, Sariel and Robson, Eliot W.},
  title =	{{No-Dimensional Tverberg Partitions Revisited}},
  booktitle =	{19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-318-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{294},
  editor =	{Bodlaender, Hans L.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2024.26},
  URN =		{urn:nbn:de:0030-drops-200664},
  doi =		{10.4230/LIPIcs.SWAT.2024.26},
  annote =	{Keywords: Points, partitions, convex hull, high dimension}
}
Document
Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings

Authors: Shashwat Banchhor, Rishikesh Gajjala, Yogish Sabharwal, and Sandeep Sen

Published in: LIPIcs, Volume 213, 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)


Abstract
In this paper, we study the problem of designing prefix-free encoding schemes having minimum average code length that can be decoded efficiently under a decode cost model that captures memory-hierarchy-induced cost functions. We also study a special case of this problem that is closely related to the length limited Huffman coding (LLHC) problem; we call this the soft-length limited Huffman coding problem. In this version, there is a penalty associated with each of the n characters of the alphabet whose encodings exceed a specified bound D (≤ n), where the penalty increases linearly with the length of the encoding beyond D. The goal of the problem is to find a prefix-free encoding having minimum average code length and total penalty within a pre-specified bound P. This generalizes the LLHC problem. We present an algorithm to solve this problem that runs in time O(nD). We study a further generalization in which the penalty function and the objective function can both be arbitrary monotonically non-decreasing functions of the codeword length. We provide dynamic-programming-based exact and PTAS algorithms for this setting.
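To pin down the objective, here is a small Python sketch (ours, following the definitions above) that evaluates a candidate codeword-length assignment: prefix-freeness is feasible iff the Kraft inequality holds, the objective is the average code length, and each character whose codeword exceeds D pays a penalty linear in the excess. The per-character penalty coefficients are our reading of "increases linearly".

def evaluate_code(lengths, freqs, unit_penalties, D):
    """Return (average code length, total penalty) for codeword lengths
    l_i, character frequencies f_i, per-character penalty rates p_i,
    and soft length bound D.  A feasible solution must also keep the
    total penalty within the pre-specified bound P."""
    assert sum(2.0 ** -l for l in lengths) <= 1.0 + 1e-12, "violates Kraft"
    avg_len = sum(l * f for l, f in zip(lengths, freqs))
    penalty = sum(p * max(0, l - D) for l, p in zip(lengths, unit_penalties))
    return avg_len, penalty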

Cite as

Shashwat Banchhor, Rishikesh Gajjala, Yogish Sabharwal, and Sandeep Sen. Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 8:1-8:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{banchhor_et_al:LIPIcs.FSTTCS.2021.8,
  author =	{Banchhor, Shashwat and Gajjala, Rishikesh and Sabharwal, Yogish and Sen, Sandeep},
  title =	{{Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{8:1--8:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.8},
  URN =		{urn:nbn:de:0030-drops-155193},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.8},
  annote =	{Keywords: Approximation algorithms, Hierarchical memory, Prefix free codes}
}
Document
A Unified Approach to Tail Estimates for Randomized Incremental Construction

Authors: Sandeep Sen

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
By combining several interesting applications of random sampling in geometric algorithms like point location, linear programming, segment intersections, binary space partitioning, Clarkson and Shor [Kenneth L. Clarkson and Peter W. Shor, 1989] developed a general framework of randomized incremental construction (RIC). The basic idea is to add objects in a random order and show that this approach yields efficient/optimal bounds on expected running time. Even quicksort can be viewed as a special case of this paradigm. However, unlike quicksort, for most of these problems, sharper tail estimates on their running times are not known. Barring some promising attempts in [Kurt Mehlhorn et al., 1993; Kenneth L. Clarkson et al., 1992; Raimund Seidel, 1991], the general question remains unresolved. In this paper we present a general technique to obtain tail estimates for RIC and provide applications to some fundamental problems like Delaunay triangulations and construction of visibility maps of intersecting line segments. The main result of the paper is derived from a new and careful application of Freedman's [David Freedman, 1975] inequality for martingale concentration that overcomes the bottleneck of the better known Azuma-Hoeffding inequality. Further, we explore instances where an RIC based algorithm may not have inverse polynomial tail estimates. In particular, we show that the RIC time bound for the trapezoidal map can encounter a running time of Ω(n log n log log n) with probability exceeding 1/√n. This rules out inverse polynomial concentration bounds within a constant factor of the O(n log n) expected running time.
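For orientation, Freedman's inequality in a commonly cited form (the paper's exact variant may differ): for a martingale X_0, X_1, ... with differences bounded by M,

\Pr\Bigl[\exists n:\ X_n - X_0 \ge a \ \text{and}\ \sum_{k \le n} \operatorname{Var}\bigl(X_k - X_{k-1} \mid \mathcal{F}_{k-1}\bigr) \le b\Bigr] \;\le\; \exp\Bigl(-\frac{a^2}{2(b + Ma/3)}\Bigr).

Unlike Azuma-Hoeffding, which uses only the worst-case bound M on the differences, this bound exploits the (typically much smaller) predictable quadratic variation b.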

Cite as

Sandeep Sen. A Unified Approach to Tail Estimates for Randomized Incremental Construction. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 58:1-58:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{sen:LIPIcs.STACS.2019.58,
  author =	{Sen, Sandeep},
  title =	{{A Unified Approach to Tail Estimates for Randomized Incremental Construction}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{58:1--58:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.58},
  URN =		{urn:nbn:de:0030-drops-102977},
  doi =		{10.4230/LIPIcs.STACS.2019.58},
  annote =	{Keywords: ric, tail estimates, martingale, lower bound}
}
Document
Complete Volume
LIPIcs, Volume 65, FSTTCS'16, Complete Volume

Authors: Akash Lal, S. Akshay, Saket Saurabh, and Sandeep Sen

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
LIPIcs, Volume 65, FSTTCS'16, Complete Volume

Cite as

36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Proceedings{lal_et_al:LIPIcs.FSTTCS.2016,
  title =	{{LIPIcs, Volume 65, FSTTCS'16, Complete Volume}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016},
  URN =		{urn:nbn:de:0030-drops-69074},
  doi =		{10.4230/LIPIcs.FSTTCS.2016},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems, Specifying and Verifying and Reasoning about Programs, Mathematical Logic, Formal Languages}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers

Authors: Akash Lal, S. Akshay, Saket Saurabh, and Sandeep Sen

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers

Cite as

36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{lal_et_al:LIPIcs.FSTTCS.2016.0,
  author =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.0},
  URN =		{urn:nbn:de:0030-drops-68412},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers}
}
Document
Invited Talk
Fast and Powerful Hashing Using Tabulation (Invited Talk)

Authors: Mikkel Thorup

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical. Here we survey recent results on how simple hashing schemes based on tabulation provide unexpectedly strong guarantees. Simple tabulation hashing dates back to Zobrist [1970]. Keys are viewed as consisting of c characters and we have precomputed character tables h_1, ..., h_c mapping characters to random hash values. A key x = (x_1, ..., x_c) is hashed to h_1[x_1] ⊕ h_2[x_2] ⊕ ... ⊕ h_c[x_c], where ⊕ denotes bit-wise exclusive-or. This scheme is very fast with character tables in cache. While simple tabulation is not even 4-independent, it does provide many of the guarantees that are normally obtained via higher independence, e.g., linear probing and Cuckoo hashing. Next we consider twisted tabulation where one character is "twisted" with some simple operations. The resulting hash function has powerful distributional properties: Chernoff-Hoeffding type tail bounds and a very small bias for min-wise hashing. Finally, we consider double tabulation where we compose two simple tabulation functions, applying one to the output of the other, and show that this yields very high independence in the classic framework of Carter and Wegman [1977]. In fact, w.h.p., for a given set of size proportional to that of the space consumed, double tabulation gives fully-random hashing. While these tabulation schemes are all easy to implement and use, their analysis is not.
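A minimal Python sketch of simple tabulation hashing as described (parameters are illustrative):

import random

def make_simple_tabulation(c=4, char_bits=8, hash_bits=64):
    """One table of random hash values per character position; a key,
    viewed as c characters, is hashed by XOR-ing its table lookups."""
    tables = [[random.getrandbits(hash_bits) for _ in range(1 << char_bits)]
              for _ in range(c)]

    def h(key):  # key given as a sequence of c character values
        out = 0
        for table, x in zip(tables, key):
            out ^= table[x]
        return out

    return h

h = make_simple_tabulation()
print(h([0x12, 0x34, 0x56, 0x78]))  # hash of one 32-bit key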

Cite as

Mikkel Thorup. Fast and Powerful Hashing Using Tabulation (Invited Talk). In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{thorup:LIPIcs.FSTTCS.2016.1,
  author =	{Thorup, Mikkel},
  title =	{{Fast and Powerful Hashing Using Tabulation}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{1:1--1:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.1},
  URN =		{urn:nbn:de:0030-drops-68869},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.1},
  annote =	{Keywords: Hashing, Randomized Algorithms}
}
Document
Invited Talk
Simple Invariants for Proving the Safety of Distributed Protocols (Invited Talk)

Authors: Mooly Sagiv

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
Safety of a distributed protocol means that the protocol never reaches a bad state, e.g., a state where two nodes become leaders in a leader-election protocol. Proving safety is obviously undecidable since such protocols are run by an unbounded number of nodes, and their safety needs to be established for any number of nodes. I will describe a deductive approach for proving safety, based on the concept of universally quantified inductive invariants, an adaptation of the mathematical concept of induction to the domain of programs. In the deductive approach, the programmer specifies a candidate inductive invariant and the system automatically checks if it is inductive. By restricting the invariants to be universally quantified, this approach can be effectively implemented with a SAT solver. This is joint work with Ken McMillan (Microsoft Research), Oded Padon (Tel Aviv University), Aurojit Panda (UC Berkeley), and Sharon Shoham (Tel Aviv University), and it was integrated into the IVY system. The work is inspired by Shachar Itzhaky's thesis.
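As a toy illustration of the kind of universally quantified invariant meant here (our example, matching the leader-election bad state above):

\forall n_1, n_2.\ \mathit{leader}(n_1) \wedge \mathit{leader}(n_2) \rightarrow n_1 = n_2.

Restricting candidate invariants to such universally quantified formulas is what keeps the inductiveness check inside a fragment that SAT-based tools handle effectively.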

Cite as

Mooly Sagiv. Simple Invariants for Proving the Safety of Distributed Protocols (Invited Talk). In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{sagiv:LIPIcs.FSTTCS.2016.2,
  author =	{Sagiv, Mooly},
  title =	{{Simple Invariants for Proving the Safety of Distributed Protocols}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.2},
  URN =		{urn:nbn:de:0030-drops-68877},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.2},
  annote =	{Keywords: Program verification, Distributed protocols, Deductive reasoning}
}
Document
Invited Talk
My O Is Bigger Than Yours (Invited Talk)

Authors: Holger Hermanns

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
This invited talk starts off with a review of probabilistic safety assessment (PSA) methods currently exercised across the nuclear power plant domain worldwide. It then elaborates on crucial aspects of the Fukushima Dai-ichi accident which are not considered properly in contemporary PSA studies. New kinds of PSA are needed so as to take into account external hazards, dynamic aspects of accident progression, and partial information. All of these come with obvious increases in algorithmic analysis complexity. This motivates our ongoing work to gradually tackle the resulting modelling and analysis problems. They revolve around static and dynamic fault trees, open interpretations of compositional Markov models and advances in their effective numerical analysis.

Cite as

Holger Hermanns. My O Is Bigger Than Yours (Invited Talk). In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 3:1-3:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{hermanns:LIPIcs.FSTTCS.2016.3,
  author =	{Hermanns, Holger},
  title =	{{My O Is Bigger Than Yours}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{3:1--3:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.3},
  URN =		{urn:nbn:de:0030-drops-68881},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.3},
  annote =	{Keywords: Probabilistic Safety Analysis, Fault Trees, Compositionality, Markov Models, Model Checking}
}
Document
Invited Talk
Continuous Optimization: The “Right” Language for Graph Algorithms? (Invited Talk)

Authors: Aleksander Madry

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
Traditionally, we view graphs as purely combinatorial objects and tend to design our graph algorithms to be combinatorial as well. In fact, in the context of algorithms, "combinatorial" became a synonym of "fast". Recent work, however, shows that a number of such "inherently combinatorial" graph problems can be solved much faster using methods that are very "non-combinatorial": specifically, by approaching these problems with tools and notions borrowed from linear algebra and, more broadly, from continuous optimization. Notable examples here are the recent lines of work on the maximum flow problem, the bipartite matching problem, and the shortest path problem in graphs with negative-length arcs. This raises an intriguing question: Is continuous optimization a more suitable and principled optics for fast graph algorithms than the classic combinatorial view? In this talk, I will discuss this question as well as the developments that motivated it.

Cite as

Aleksander Madry. Continuous Optimization: The “Right” Language for Graph Algorithms? (Invited Talk). In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 4:1-4:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{madry:LIPIcs.FSTTCS.2016.4,
  author =	{Madry, Aleksander},
  title =	{{Continuous Optimization: The “Right” Language for Graph Algorithms?}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{4:1--4:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.4},
  URN =		{urn:nbn:de:0030-drops-68890},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.4},
  annote =	{Keywords: maximum flow problem, bipartite matchings, interior-point methods, gradient descent method, Laplacian linear systems}
}
Document
Invited Talk
Graph Decompositions and Algorithms (Invited Talk)

Authors: Fedor V. Fomin

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
We overview the recent progress in solving intractable optimization problems on planar graphs as well as other classes of sparse graphs. In particular, we discuss how tools from Graph Minors theory can be used to obtain, on these classes of graphs:
  • subexponential parameterized algorithms,
  • approximation algorithms, and
  • preprocessing and kernelization algorithms.

Cite as

Fedor V. Fomin. Graph Decompositions and Algorithms (Invited Talk). In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fomin:LIPIcs.FSTTCS.2016.5,
  author =	{Fomin, Fedor V.},
  title =	{{Graph Decompositions and Algorithms}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.5},
  URN =		{urn:nbn:de:0030-drops-68903},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.5},
  annote =	{Keywords: Algorithms, logic, graph minor}
}
Document
Invited Talk
Side Channel Analysis Using a Model Counting Constraint Solver and Symbolic Execution (Invited Talk)

Authors: Tevfik Bultan

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
A crucial problem in software security is the detection of side-channels. Information gained by observing non-functional properties of program executions (such as execution time or memory usage) can enable attackers to infer secret information (such as a password). In this talk, I will discuss how symbolic execution, combined with a model counting constraint solver, can be used for quantifying side-channel leakage in Java programs. In addition to computing information leakage for a single run of a program, I will also discuss computation of information leakage for multiple runs for a class of side channels called segmented oracles. In segmented oracles, the attacker is able to explore each segment of a secret (for example each character of a password) independently. For segmented oracles, it is possible to compute information leakage for multiple runs using only the path constraints generated from a single-run symbolic execution. These results have been implemented as an extension to the symbolic execution tool Symbolic Path Finder (SPF) using the SMT solver Z3 and two model counting constraint solvers, LattE and ABC.
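To make the segmented-oracle arithmetic concrete, a back-of-the-envelope Python sketch (ours, not SPF/ABC output): under a uniformly distributed secret, fully resolving a segment over an alphabet of size k leaks log2(k) bits, and leakage composes additively across independently explored segments.

import math

def segmented_oracle_leakage(alphabet_size, segments_resolved):
    """Shannon leakage, in bits, when the attacker has fully resolved
    the given number of segments of a secret whose segments are each
    uniform over the alphabet (e.g. characters of a password)."""
    return segments_resolved * math.log2(alphabet_size)

print(segmented_oracle_leakage(26, 3))  # ~14.1 bits for 3 lowercase characters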

Cite as

Tevfik Bultan. Side Channel Analysis Using a Model Counting Constraint Solver and Symbolic Execution (Invited Talk). In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 6:1-6:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bultan:LIPIcs.FSTTCS.2016.6,
  author =	{Bultan, Tevfik},
  title =	{{Side Channel Analysis Using a Model Counting Constraint Solver and Symbolic Execution}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{6:1--6:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.6},
  URN =		{urn:nbn:de:0030-drops-68914},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.6},
  annote =	{Keywords: Side-channels, quantitative information flow, symbolic execution, model counting, constraint solvers}
}
Document
Mixed-Criticality Scheduling to Minimize Makespan

Authors: Sanjoy Baruah, Arvind Easwaran, and Zhishan Guo

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
In the mixed-criticality job model, each job is characterized by two execution time parameters, representing a smaller (less conservative) estimate and a larger (more conservative) estimate on its actual, unknown, execution time. Each job is further classified as being either less critical or more critical. The desired execution semantics are that all jobs should execute correctly provided all jobs complete upon being allowed to execute for up to the smaller of their execution time estimates, whereas if some jobs need to execute beyond their smaller execution time estimates (but not beyond their larger execution time estimates), then only the jobs classified as being more critical are required to execute correctly. The scheduling of collections of such mixed-criticality jobs upon identical multiprocessor platforms in order to minimize the makespan is considered here.
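The job model translates directly into a data type; a minimal Python sketch (field names are ours):

from dataclasses import dataclass

@dataclass
class MCJob:
    """A mixed-criticality job: two execution-time estimates and a
    criticality class, as in the model described above."""
    c_lo: float          # smaller (less conservative) execution-time estimate
    c_hi: float          # larger (more conservative) estimate; c_hi >= c_lo
    more_critical: bool  # must still run correctly if jobs overrun c_lo

The scheduling question is then to place a collection of such jobs on identical processors so that the makespan is minimized under the two-mode semantics above.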

Cite as

Sanjoy Baruah, Arvind Easwaran, and Zhishan Guo. Mixed-Criticality Scheduling to Minimize Makespan. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 7:1-7:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{baruah_et_al:LIPIcs.FSTTCS.2016.7,
  author =	{Baruah, Sanjoy and Easwaran, Arvind and Guo, Zhishan},
  title =	{{Mixed-Criticality Scheduling to Minimize Makespan}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{7:1--7:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.7},
  URN =		{urn:nbn:de:0030-drops-68429},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.7},
  annote =	{Keywords: Scheduling, Mixed criticality, Identical parallel machines, Makespan minimization, Approximation algorithm}
}
Document
Capacitated k-Center Problem with Vertex Weights

Authors: Aounon Kumar

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
We study the capacitated k-center problem with vertex weights. It is a generalization of the well-known k-center problem. In this variant each vertex has a weight and a capacity. The assignment cost of a vertex to a center is given by the product of the weight of the vertex and its distance to the center. The distances are assumed to form a metric. Each center can only serve as many vertices as its capacity. We show n^{1-ε}-approximation hardness for this problem, for any ε > 0, where n is the number of vertices in the input. Both the capacitated and the weighted versions of the k-center problem individually can be approximated within a constant factor. Yet the common extension of both the generalizations cannot be approximated efficiently within a constant factor, unless P = NP. This problem, to the best of our knowledge, is the first facility location problem with metric distances known to have a super-constant inapproximability result. The hardness result easily generalizes to versions of the problem that consider the p-norm of the assignment costs (weighted distances) as the objective function. We give n^{1 - 1/p - ε}-approximation hardness for this problem, for p > 1. We complement the hardness result by showing a simple n-approximation algorithm for this problem. We also give a bi-criteria constant factor approximation algorithm, for the case of uniform capacities, which opens at most 2k centers.

Cite as

Aounon Kumar. Capacitated k-Center Problem with Vertex Weights. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kumar:LIPIcs.FSTTCS.2016.8,
  author =	{Kumar, Aounon},
  title =	{{Capacitated k-Center Problem with Vertex Weights}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.8},
  URN =		{urn:nbn:de:0030-drops-68438},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.8},
  annote =	{Keywords: approximation hardness, k-center, gadget reduction}
}
Document
Improved Pseudo-Polynomial-Time Approximation for Strip Packing

Authors: Waldo Gálvez, Fabrizio Grandoni, Salvatore Ingala, and Arindam Khan

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
We study the strip packing problem, a classical packing problem which generalizes both bin packing and makespan minimization. Here we are given a set of axis-parallel rectangles in the two-dimensional plane and the goal is to pack them in a vertical strip of fixed width such that the height of the obtained packing is minimized. The packing must be non-overlapping and the rectangles cannot be rotated. A reduction from the partition problem shows that no approximation better than 3/2 is possible for strip packing in polynomial time (assuming P ≠ NP). Nadiradze and Wiese [SODA 2016] overcame this barrier by presenting a (7/5 + ε)-approximation algorithm in pseudo-polynomial time (PPT). As the problem is strongly NP-hard, it does not admit an exact PPT algorithm (though a PPT approximation scheme might exist). In this paper we make further progress on the PPT approximability of strip packing, by presenting a (4/3 + ε)-approximation algorithm. Our result is based on a non-trivial repacking of some rectangles in the "empty space" left by the construction by Nadiradze and Wiese, and in some sense pushes their approach to its limit. Our PPT algorithm can be adapted to the case where we are allowed to rotate the rectangles by 90 degrees, achieving the same approximation factor and breaking the polynomial-time approximation barrier of 3/2 for the case with rotations as well.

Cite as

Waldo Gálvez, Fabrizio Grandoni, Salvatore Ingala, and Arindam Khan. Improved Pseudo-Polynomial-Time Approximation for Strip Packing. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{galvez_et_al:LIPIcs.FSTTCS.2016.9,
  author =	{G\'{a}lvez, Waldo and Grandoni, Fabrizio and Ingala, Salvatore and Khan, Arindam},
  title =	{{Improved Pseudo-Polynomial-Time Approximation for Strip Packing}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.9},
  URN =		{urn:nbn:de:0030-drops-68445},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.9},
  annote =	{Keywords: approximation algorithms, strip packing, rectangle packing, cutting-stock problem}
}
