Search Results

Documents authored by Rao, S. Srinivasa


Found 3 Possible Name Variants:

Rao, S. Srinivasa

Document
More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries

Authors: Roberto Grossi, Alessio Orlandi, Rajeev Raman, and S. Srinivasa Rao

Published in: LIPIcs, Volume 3, 26th International Symposium on Theoretical Aspects of Computer Science (2009)


Abstract
We consider the problem of representing, in a compressed format, a bit-vector~$S$ of $m$ bits with $n$ $\mathbf{1}$s, supporting the following operations, where $b \in \{ \mathbf{0}, \mathbf{1} \}$: \begin{itemize} \item $\mathtt{rank}_b(S,i)$ returns the number of occurrences of bit $b$ in the prefix $S\left[1..i\right]$; \item $\mathtt{select}_b(S,i)$ returns the position of the $i$th occurrence of bit $b$ in $S$. \end{itemize} Such a data structure is called a \emph{fully indexable dictionary (\textsc{fid})} [Raman, Raman, and Rao, 2007], and is at least as powerful as predecessor data structures. Viewing $S$ as a set $X = \{ x_1, x_2, \ldots, x_n \}$ of $n$ distinct integers drawn from a universe $[m] = \{1, \ldots, m\}$, the predecessor of integer $y \in [m]$ in $X$ is given by $\mathtt{select}_1(S, \mathtt{rank}_1(S,y-1))$. {\textsc{fid}}s have many applications in succinct and compressed data structures, as they are often involved in the construction of succinct representations for a variety of abstract data types. Our focus is on space-efficient {\textsc{fid}}s on the \textsc{ram} model with word size $\Theta(\lg m)$ and constant time for all operations, so that the time cost is independent of the input size. Given the bitstring $S$ to be encoded, having length $m$ and containing $n$ ones, the minimal amount of information that needs to be stored is $B(n,m) = \lceil \log {{m}\choose{n}} \rceil$. The state of the art in building a \textsc{fid}\ for~$S$ is given in [P\v{a}tra\c{s}cu, 2008] using $B(n,m)+O( m / ( (\log m/ t) ^t) ) + O(m^{3/4}) $ bits, to support the operations in $O(t)$ time. Here, we propose a parametric data structure exhibiting a time/space trade-off such that, for any real constants $0 < \delta \leq 1/2$, $0 < \varepsilon \leq 1$, and integer $s > 0$, it uses \[ B(n,m) + O\left(n^{1+\delta} + n \left(\frac{m}{n^s}\right)^\varepsilon\right) \] bits and performs all the operations in time $O(s\delta^{-1} + \varepsilon^{-1})$. The improvement is twofold: our redundancy can be lowered parametrically and, fixing $s = O(1)$, we get a constant-time \textsc{fid}\ whose space is $B(n,m) + O(m^\varepsilon/\mathrm{poly}(n))$ bits, for sufficiently large $m$. This is a significant improvement compared to the previous bounds for the general case.
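
A minimal, non-succinct Python sketch of this interface may help fix ideas: it stores the bit-vector verbatim (a real FID would use close to B(n,m) bits and answer in O(1) time) and demonstrates the predecessor identity from the abstract. The class name and example values are illustrative only, not from the paper.

# Plain (non-succinct) model of the FID interface; queries are 1-indexed.
class PlainFID:
    def __init__(self, bits):
        self.bits = bits
        self.prefix = [0]                       # prefix[i] = #1s in S[1..i]
        for bit in bits:
            self.prefix.append(self.prefix[-1] + bit)

    def rank(self, b, i):
        """Number of occurrences of bit b in the prefix S[1..i]."""
        ones = self.prefix[i]
        return ones if b == 1 else i - ones

    def select(self, b, i):
        """Position of the i-th occurrence of bit b in S, for i >= 1."""
        if i < 1:
            raise ValueError("i must be >= 1")
        count = 0
        for pos, bit in enumerate(self.bits, start=1):
            count += (bit == b)
            if count == i:
                return pos
        raise ValueError("fewer than i occurrences of b")

# Predecessor of y in X via select_1(S, rank_1(S, y - 1)):
S = [0, 1, 0, 0, 1, 1, 0, 1]                    # X = {2, 5, 6, 8}, m = 8
fid = PlainFID(S)
print(fid.select(1, fid.rank(1, 7 - 1)))        # -> 6, predecessor of y = 7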

Cite as

Roberto Grossi, Alessio Orlandi, Rajeev Raman, and S. Srinivasa Rao. More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries. In 26th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 3, pp. 517-528, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{grossi_et_al:LIPIcs.STACS.2009.1847,
  author =	{Grossi, Roberto and Orlandi, Alessio and Raman, Rajeev and Rao, S. Srinivasa},
  title =	{{More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries}},
  booktitle =	{26th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{517--528},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-09-5},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{3},
  editor =	{Albers, Susanne and Marion, Jean-Yves},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2009.1847},
  URN =		{urn:nbn:de:0030-drops-18470},
  doi =		{10.4230/LIPIcs.STACS.2009.1847},
}

Rao, Satish

Document
New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy

Authors: Qiuyi (Richard) Zhang, Satish Rao, and Tandy Warnow

Published in: LIPIcs, Volume 113, 18th International Workshop on Algorithms in Bioinformatics (WABI 2018)


Abstract
Absolute fast converging (AFC) phylogeny estimation methods are ones that have been proven to recover the true tree with high probability given sequences whose lengths are polynomial in the number of leaves in the tree (once the shortest and longest branch lengths are fixed). While there has been a large literature on AFC methods, the best in terms of empirical performance was DCM_NJ, published in SODA 2001. The main empirical advantage of DCM_NJ over other AFC methods is its use of neighbor joining (NJ) to construct trees on smaller taxon subsets, which are then combined into a tree on the full set of species using a supertree method; in contrast, the other AFC methods in essence depend on quartet trees that are computed independently of each other, which reduces accuracy compared to neighbor joining. However, DCM_NJ is unlikely to scale to large datasets due to its reliance on supertree methods, as no current supertree methods are able to scale to large datasets with high accuracy. In this study we present a new approach to large-scale phylogeny estimation that shares some of the features of DCM_NJ but bypasses the use of supertree methods. We prove that this new approach is AFC and runs in polynomial time. Furthermore, we describe variations on this basic approach that can be used with leaf-disjoint constraint trees (computed using methods such as maximum likelihood) to produce other AFC methods that are likely to provide even better accuracy. Thus, we present a new generalizable technique for large-scale tree estimation that is designed to improve scalability for phylogeny estimation methods to ultra-large datasets, and that can be used in a variety of settings (including tree estimation from unaligned sequences, and species tree estimation from gene trees).
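
Since neighbor joining is the subroutine the abstract contrasts against quartet-based methods, a compact textbook NJ sketch may be useful. This is plain NJ on a distance matrix, not the paper's method or the DCM_NJ pipeline; the taxon names and distances below encode a small additive tree and are purely illustrative.

# Textbook neighbor joining: repeatedly merge the pair minimizing the
# NJ criterion Q(i,j) = (n-2)*d(i,j) - sum_k d(i,k) - sum_k d(j,k).
def neighbor_join(dist):
    dist = {i: dict(row) for i, row in dist.items()}   # defensive copy
    edges, fresh = [], 0
    while len(dist) > 2:
        n = len(dist)
        total = {i: sum(dist[i].values()) for i in dist}
        i, j = min(((a, b) for a in dist for b in dist if a < b),
                   key=lambda p: (n - 2) * dist[p[0]][p[1]]
                                 - total[p[0]] - total[p[1]])
        u = "internal%d" % fresh; fresh += 1
        li = dist[i][j] / 2 + (total[i] - total[j]) / (2 * (n - 2))
        edges += [(i, u, li), (j, u, dist[i][j] - li)]
        # Distances from the new internal node u to all remaining nodes.
        dist[u] = {k: (dist[i][k] + dist[j][k] - dist[i][j]) / 2
                   for k in dist if k not in (i, j)}
        for k in dist[u]:
            dist[k][u] = dist[u][k]
        for gone in (i, j):
            del dist[gone]
            for row in dist.values():
                row.pop(gone, None)
    a, b = dist                                        # two nodes remain
    return edges + [(a, b, dist[a][b])]

# Distances realized by the tree ((A:2,B:3):1,(C:4,D:5)); NJ recovers it.
taxa = {"A": {"B": 5, "C": 7, "D": 8}, "B": {"A": 5, "C": 8, "D": 9},
        "C": {"A": 7, "B": 8, "D": 9}, "D": {"A": 8, "B": 9, "C": 9}}
print(neighbor_join(taxa))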

Cite as

Qiuyi (Richard) Zhang, Satish Rao, and Tandy Warnow. New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy. In 18th International Workshop on Algorithms in Bioinformatics (WABI 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 113, pp. 8:1-8:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{zhang_et_al:LIPIcs.WABI.2018.8,
  author =	{Zhang, Qiuyi (Richard) and Rao, Satish and Warnow, Tandy},
  title =	{{New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy}},
  booktitle =	{18th International Workshop on Algorithms in Bioinformatics (WABI 2018)},
  pages =	{8:1--8:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-082-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{113},
  editor =	{Parida, Laxmi and Ukkonen, Esko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2018.8},
  URN =		{urn:nbn:de:0030-drops-93108},
  doi =		{10.4230/LIPIcs.WABI.2018.8},
  annote =	{Keywords: phylogeny estimation, short quartets, sample complexity, absolute fast converging methods, neighbor joining, maximum likelihood}
}
Document
Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction

Authors: Di Wang, Satish Rao, and Michael W. Mahoney

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
In a series of recent breakthroughs, Allen-Zhu and Orecchia [Allen-Zhu/Orecchia, STOC 2015; Allen-Zhu/Orecchia, SODA 2015] leveraged insights from the linear coupling method [Allen-Zhu/Orecchia, arXiv 2014], a first-order optimization scheme, to provide improved algorithms for packing and covering linear programs. The result in [Allen-Zhu/Orecchia, STOC 2015] is particularly interesting, as the algorithm for packing LPs achieves both width-independence and Nesterov-like acceleration, which was not known to be possible before. Somewhat surprisingly, however, while the dependence of the convergence rate on the error parameter epsilon for packing problems was improved to O(1/epsilon), which corresponds to what accelerated gradient methods are designed to achieve, the dependence for covering problems was only improved to O(1/epsilon^{1.5}), and even that required a different, more complicated algorithm rather than Nesterov-like acceleration. Given the primal-dual connection between packing and covering problems, and since previous algorithms for these closely related problems have achieved the same epsilon dependence, this discrepancy is surprising, and it leaves open the question of the exact role that linear coupling plays in coordinating the complementary gradient and mirror descent steps of the algorithm. In this paper, we clarify these issues, illustrating that the linear coupling method can lead to an improved O(1/epsilon) dependence for both packing and covering problems in a unified manner, i.e., with the same algorithm and almost identical analysis. Our main technical result is a novel dimension lifting method that reduces the coordinate-wise diameters of the feasible region for covering LPs, which is the key structural property enabling the same Nesterov-like acceleration as in the case of packing LPs. The technique is of independent interest and may be useful in applying the accelerated linear coupling method to other combinatorial problems.
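
For orientation, the primal-dual pair the abstract refers to is the packing LP max{1^T x : Ax <= 1, x >= 0} and its dual covering LP min{1^T y : A^T y >= 1, y >= 0}, which share one optimal value. The sketch below exhibits that pairing with a generic LP solver; it is emphatically not the width-independent accelerated first-order method of the paper, and the matrix A is an arbitrary example.

# Packing LP and its dual covering LP agree by LP duality.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 2.0]])                  # nonnegative matrix

# Packing:  max 1^T x  s.t. A x <= 1, x >= 0    (linprog minimizes, so negate)
pack = linprog(c=-np.ones(3), A_ub=A, b_ub=np.ones(2), bounds=(0, None))

# Covering: min 1^T y  s.t. A^T y >= 1, y >= 0  (flip signs to get <= form)
cover = linprog(c=np.ones(2), A_ub=-A.T, b_ub=-np.ones(3), bounds=(0, None))

print(-pack.fun, cover.fun)                      # equal optimal values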

Cite as

Di Wang, Satish Rao, and Michael W. Mahoney. Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{wang_et_al:LIPIcs.ICALP.2016.50,
  author =	{Wang, Di and Rao, Satish and Mahoney, Michael W.},
  title =	{{Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{50:1--50:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.50},
  URN =		{urn:nbn:de:0030-drops-63308},
  doi =		{10.4230/LIPIcs.ICALP.2016.50},
  annote =	{Keywords: Convex optimization, Accelerated gradient descent, Linear program, Approximation algorithm, Packing and covering}
}
Document
Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time

Authors: Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
We study the problem of approximately solving positive linear programs (LPs). This class of LPs models a wide range of fundamental problems in combinatorial optimization and operations research, such as many resource allocation problems, solving non-negative linear systems, computed tomography, and single/multi-commodity flows on graphs. For the special cases of pure packing or pure covering LPs, a recent result by Allen-Zhu and Orecchia [Allen-Zhu/Orecchia, SODA'15] gives an O˜(1/(epsilon^3))-time parallel algorithm, which breaks the longstanding O˜(1/(epsilon^4)) running time bound of the seminal work of Luby and Nisan [Luby/Nisan, STOC'93]. We present a new parallel algorithm with running time O˜(1/(epsilon^3)) for the more general mixed packing and covering LPs, which improves upon the O˜(1/(epsilon^4))-time algorithm of Young [Young, FOCS'01; Young, arXiv 2014]. Our work leverages ideas from both the optimization-oriented approach [Allen-Zhu/Orecchia, SODA'15; Wang/Mahoney/Mohan/Rao, arXiv 2015] and the more combinatorial approach with phases [Young, FOCS'01; Young, arXiv 2014]. In addition, our algorithm, when directly applied to pure packing or pure covering LPs, gives an improved running time of O˜(1/(epsilon^2)).
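
Concretely, a mixed packing and covering LP asks for x >= 0 satisfying Px <= p (packing constraints) and Cx >= c (covering constraints) simultaneously. The check below uses a generic LP solver only to make the problem shape concrete; the paper's contribution is a parallel epsilon-approximate solver, which this sketch does not attempt, and the instance data are invented for illustration.

# Mixed packing/covering feasibility: find x >= 0 with P x <= p, C x >= c.
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 2.0], [2.0, 1.0]]); p = np.ones(2)       # packing side
C = np.array([[1.0, 1.0]]);             c = np.array([0.6])  # covering side

res = linprog(c=np.zeros(2),                     # pure feasibility problem
              A_ub=np.vstack([P, -C]),           # C x >= c  <=>  -C x <= -c
              b_ub=np.concatenate([p, -c]),
              bounds=(0, None))
print("feasible" if res.status == 0 else "infeasible")       # -> feasible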

Cite as

Michael W. Mahoney, Satish Rao, Di Wang, and Peng Zhang. Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^{-3}) Time. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{mahoney_et_al:LIPIcs.ICALP.2016.52,
  author =	{Mahoney, Michael W. and Rao, Satish and Wang, Di and Zhang, Peng},
  title =	{{Approximating the Solution to Mixed Packing and Covering LPs in Parallel O˜(epsilon^\{-3\}) Time}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.52},
  URN =		{urn:nbn:de:0030-drops-63335},
  doi =		{10.4230/LIPIcs.ICALP.2016.52},
  annote =	{Keywords: Mixed packing and covering, Linear program, Approximation algorithm, Parallel algorithm}
}

Rao, Sankeerth

Document
Torus Polynomials: An Algebraic Approach to ACC Lower Bounds

Authors: Abhishek Bhrushundi, Kaave Hosseini, Shachar Lovett, and Sankeerth Rao

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We propose an algebraic approach to proving circuit lower bounds for ACC^0 by defining and studying the notion of torus polynomials. We show how currently known polynomial-based approximation results for AC^0 and ACC^0 can be reformulated in this framework, implying that ACC^0 can be approximated by low-degree torus polynomials. Furthermore, as a step towards proving ACC^0 lower bounds for the majority function via our approach, we show that MAJORITY cannot be approximated by low-degree symmetric torus polynomials. We also pose several open problems related to our framework.

Cite as

Abhishek Bhrushundi, Kaave Hosseini, Shachar Lovett, and Sankeerth Rao. Torus Polynomials: An Algebraic Approach to ACC Lower Bounds. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bhrushundi_et_al:LIPIcs.ITCS.2019.13,
  author =	{Bhrushundi, Abhishek and Hosseini, Kaave and Lovett, Shachar and Rao, Sankeerth},
  title =	{{Torus Polynomials: An Algebraic Approach to ACC Lower Bounds}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.13},
  URN =		{urn:nbn:de:0030-drops-101066},
  doi =		{10.4230/LIPIcs.ITCS.2019.13},
  annote =	{Keywords: Circuit complexity, ACC, lower bounds, polynomials}
}
Document
A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n

Authors: Daniel Kane and Sankeerth Rao

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
We construct and analyze a pseudorandom generator for degree-2 Boolean polynomial threshold functions. Random constructions achieve the optimal seed length of O(log n + log 1/epsilon); however, the best known explicit construction [Ilias Diakonikolas, 2010] uses a seed length of O(log n * epsilon^{-8}). In this work we give an explicit construction that uses a seed length of O(log n + (1/epsilon)^{o(1)}). Note that this improves the seed length substantially and that the dependence on the error epsilon is additive and grows only subpolynomially, as opposed to the previously known multiplicative polynomial dependence. Our generator applies dimensionality reduction to a Nisan-Wigderson-based pseudorandom generator of Kabanets and Lu [Kabanets and Lu, 2018].
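
To make the object concrete: a degree-2 Boolean polynomial threshold function is f(x) = sign(x^T A x + b^T x + c) on x in {-1,1}^n, and a generator epsilon-fools f if the acceptance probability under its output is within epsilon of the truly uniform one. The Monte Carlo check below illustrates that test; the toy seed-stretching map is an arbitrary stand-in with no pseudorandomness guarantee, not the Nisan-Wigderson-based construction the abstract describes.

# Empirical fooling test for a degree-2 PTF against a toy generator.
import numpy as np

rng = np.random.default_rng(0)
n, seed_dim, trials = 40, 12, 20000

A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # quadratic part
b = rng.standard_normal(n); c = 0.1

def ptf(x):                                  # x in {-1,+1}^n
    return x @ A @ x + b @ x + c > 0

G = rng.choice([-1.0, 1.0], size=(n, seed_dim))      # fixed public matrix
def toy_stretch():                           # seed_dim bits -> n bits
    s = rng.choice([-1.0, 1.0], size=seed_dim)
    return np.where(G @ s >= 0, 1.0, -1.0)

p_uniform = np.mean([ptf(rng.choice([-1.0, 1.0], size=n))
                     for _ in range(trials)])
p_toy = np.mean([ptf(toy_stretch()) for _ in range(trials)])
print(abs(p_uniform - p_toy))                # a PRG must make this <= epsilon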

Cite as

Daniel Kane and Sankeerth Rao. A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 2:1-2:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kane_et_al:LIPIcs.CCC.2018.2,
  author =	{Kane, Daniel and Rao, Sankeerth},
  title =	{{A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{2:1--2:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.2},
  URN =		{urn:nbn:de:0030-drops-88861},
  doi =		{10.4230/LIPIcs.CCC.2018.2},
  annote =	{Keywords: Pseudorandomness, Polynomial Threshold Functions}
}