2 Search Results for "Kakade, Sham M."


Document
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)

Authors: Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, and Aaron Sidford

Published in: LIPIcs, Volume 93, 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)


Abstract
This work provides a simplified proof of the statistical minimax optimality of (iterate averaged) stochastic gradient descent (SGD), for the special case of least squares. This result is obtained by analyzing SGD as a stochastic process and by sharply characterizing the stationary covariance matrix of this process. The finite rate optimality characterization captures the constant factors and addresses model mis-specification.
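
As a concrete illustration of the estimator the abstract refers to, the following minimal sketch runs constant-step-size SGD on a synthetic least-squares stream and maintains the Polyak-Ruppert average of the iterates. All parameters here (dimension, Gaussian design, noise level, step size eta) are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 20_000

# Well-specified linear model y = <w_star, x> + noise; the paper's
# guarantees also cover the mis-specified case.
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

eta = 0.01           # constant step size (illustrative choice)
w = np.zeros(d)      # current SGD iterate
w_bar = np.zeros(d)  # running average of the iterates

for t in range(n):
    x_t, y_t = X[t], y[t]
    # Stochastic gradient of the per-sample squared loss:
    # grad = (<w, x_t> - y_t) * x_t
    w -= eta * (x_t @ w - y_t) * x_t
    # Iterate averaging: w_bar is the mean of the first t+1 iterates.
    w_bar += (w - w_bar) / (t + 1)

print("final iterate error:   ", np.linalg.norm(w - w_star))
print("averaged iterate error:", np.linalg.norm(w_bar - w_star))

Viewed as a Markov chain, the SGD iterates fluctuate around their stationary distribution; averaging washes out this stationary covariance, which is why the averaged iterate typically lands much closer to w_star than the last iterate does.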

Cite as

Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, and Aaron Sidford. A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares). In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 2:1-2:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{jain_et_al:LIPIcs.FSTTCS.2017.2,
  author =	{Jain, Prateek and Kakade, Sham M. and Kidambi, Rahul and Netrapalli, Praneeth and Pillutla, Venkata Krishna and Sidford, Aaron},
  title =	{{A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{2:1--2:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.2},
  URN =		{urn:nbn:de:0030-drops-83941},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.2},
  annote =	{Keywords: Stochastic Gradient Descent, Minimax Optimality, Least Squares Regression}
}

Document
Recovering Structured Probability Matrices

Authors: Qingqing Huang, Sham M. Kakade, Weihao Kong, and Gregory Valiant

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We consider the problem of accurately recovering a matrix B of size M by M, which represents a probability distribution over M^2 outcomes, given access to an observed matrix of "counts" generated by taking independent samples from the distribution B. How can structural properties of the underlying matrix B be leveraged to yield computationally efficient and information theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime? This basic problem lies at the core of a number of questions that are currently being considered by different communities, including building recommendation systems and collaborative filtering in the sparse data regime, community detection in sparse random graphs, learning structured models such as topic models or hidden Markov models, and the efforts from the natural language processing community to compute "word embeddings". Many aspects of this problem---both in terms of learning and property testing/estimation and on both the algorithmic and information theoretic sides---remain open. Our results apply to the setting where B has a low rank structure. For this setting, we propose an efficient (and practically viable) algorithm that accurately recovers the underlying M by M matrix using O(M) samples (where we assume the rank is a constant). This linear sample complexity is optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix (such as whether it has rank 1 or 2) requires Omega(M) samples. Additionally, we provide an even stronger lower bound showing that distinguishing whether a sequence of observations was drawn from the uniform distribution over M observations versus being generated by a well-conditioned Hidden Markov Model with two hidden states requires Omega(M) observations, while our positive results for recovering B immediately imply that O(M) observations suffice to learn such an HMM. This lower bound precludes sublinear-sample hypothesis tests for basic properties, such as identity or uniformity, as well as sublinear-sample estimators for quantities such as the entropy rate of HMMs.
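
To make the sampling model concrete, the sketch below draws O(M) independent samples from a synthetic rank-2 probability matrix B and forms a naive rank-truncated estimate from the empirical counts via SVD. This only illustrates the problem setup and is not the paper's algorithm, which is designed for the sparse regime where plain SVD of raw counts breaks down; all names below (B_hat, B_lowrank) are ours, and the sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
M, rank = 200, 2

# Synthetic rank-2 probability matrix B, normalized to sum to 1.
U = rng.random((M, rank))
V = rng.random((M, rank))
B = sum(np.outer(U[:, k], V[:, k]) for k in range(rank))
B /= B.sum()

# Draw N = 20*M i.i.d. samples; each sample is a pair (i, j)
# observed with probability B[i, j].
N = 20 * M
flat = rng.choice(M * M, size=N, p=B.ravel())
counts = np.bincount(flat, minlength=M * M).reshape(M, M)
B_hat = counts / N  # empirical (normalized count) matrix

# Naive estimator: project the empirical matrix onto rank-2
# matrices via truncated SVD.
Uh, s, Vh = np.linalg.svd(B_hat)
B_lowrank = (Uh[:, :rank] * s[:rank]) @ Vh[:rank]

print("empirical L1 error:     ", np.abs(B_hat - B).sum())
print("rank-truncated L1 error:", np.abs(B_lowrank - B).sum())

Even at roughly 20 samples per row, the rank truncation should noticeably denoise the raw counts here; the paper's contribution is an algorithm with provable recovery guarantees all the way down to O(M) total samples.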

Cite as

Qingqing Huang, Sham M. Kakade, Weihao Kong, and Gregory Valiant. Recovering Structured Probability Matrices. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{huang_et_al:LIPIcs.ITCS.2018.46,
  author =	{Huang, Qingqing and Kakade, Sham M. and Kong, Weihao and Valiant, Gregory},
  title =	{{Recovering Structured Probability Matrices}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.46},
  URN =		{urn:nbn:de:0030-drops-83625},
  doi =		{10.4230/LIPIcs.ITCS.2018.46},
  annote =	{Keywords: Random matrices, matrix recovery, stochastic block model, Hidden Markov Models}
}