Embeddings of Schatten Norms with Applications to Data Streams

Authors: Yi Li, David P. Woodruff




File

LIPIcs.ICALP.2017.60.pdf
  • Filesize: 0.52 MB
  • 14 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ICALP.2017.60


Cite As

Yi Li and David P. Woodruff. Embeddings of Schatten Norms with Applications to Data Streams. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017) https://doi.org/10.4230/LIPIcs.ICALP.2017.60

Abstract

Given an n×d matrix A, its Schatten-p norm, for p >= 1, is defined as |A|_p = (sum_{i=1}^{rank(A)} sigma_i(A)^p)^{1/p}, where sigma_i(A) is the i-th largest singular value of A. These norms have been studied in functional analysis in the context of non-commutative L_p-spaces, and recently in the data stream and linear sketching models of computation. Basic questions on the relations between these norms, such as their embeddability, are still open. Specifically, given a set of matrices A_1, ..., A_{poly(nd)} in R^{n×d}, suppose we want to construct a linear map L such that L(A_i) in R^{n'×d'} for each i, where n' < n and d' < d, and further, |A_i|_p <= |L(A_i)|_q <= D_{p,q} |A_i|_p for a given approximation factor D_{p,q} and real number q >= 1. How large do n' and d' need to be as a function of D_{p,q}?
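For concreteness, here is a minimal NumPy sketch (our own illustration, not taken from the paper) that evaluates the Schatten-p norm directly from the singular values, matching the definition above; the helper name schatten_norm and the numerical rank threshold are assumptions made for this example.

```python
import numpy as np

def schatten_norm(A: np.ndarray, p: float) -> float:
    """Schatten-p norm: (sum of sigma_i(A)^p over the rank(A) nonzero singular values)^(1/p), p >= 1."""
    sigma = np.linalg.svd(A, compute_uv=False)        # singular values, largest first
    sigma = sigma[sigma > 1e-12 * max(sigma.max(), 1e-300)]  # keep the (numerically) nonzero values
    return float(np.sum(sigma ** p) ** (1.0 / p))

A = np.random.randn(5, 3)
print(schatten_norm(A, 1.0))                               # Schatten-1 (trace/nuclear) norm
print(schatten_norm(A, 2.0))                               # Schatten-2 norm = Frobenius norm
print(abs(schatten_norm(A, 2.0) - np.linalg.norm(A, 'fro')))  # should be ~0
```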

We nearly resolve this question for every p, q >= 1, for the case where L(A_i) can be expressed as R·A_i·S, where R and S are arbitrary matrices that are allowed to depend on A_1, ..., A_{poly(nd)}; that is, L(A_i) can be implemented by left and right matrix multiplication. Namely, for every p, q >= 1, we provide nearly matching upper and lower bounds on the size of n' and d' as a function of D_{p,q}. Importantly, our upper bounds are oblivious, meaning that R and S do not depend on the A_i, while our lower bounds hold even if R and S do depend on the A_i. As an application of our upper bounds, we answer a recent open question of Blasiok et al. about space-approximation trade-offs for the Schatten-1 norm, showing that in a data stream it is possible to estimate the Schatten-1 norm up to a factor of D >= 1 using O~(min(n, d)^2 / D^4) space.
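To make the sketching model concrete, the toy snippet below (our own illustration, hedged) shows only the *form* of a bilinear map L(A) = R·A·S with oblivious R and S. The Gaussian choice of R and S here is an assumption for demonstration; it is not the construction from the paper and carries no distortion guarantee D_{p,q}.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 80          # original dimensions
n_, d_ = 20, 15         # sketch dimensions n' < n, d' < d

# Oblivious sketching matrices: drawn once, independently of the inputs A_i.
R = rng.standard_normal((n_, n)) / np.sqrt(n_)
S = rng.standard_normal((d, d_)) / np.sqrt(d_)

A = rng.standard_normal((n, d))
LA = R @ A @ S          # the sketch L(A) lives in R^{n' x d'}

# The paper's question: how small can n', d' be so that
# ||A||_p <= ||L(A)||_q <= D_{p,q} * ||A||_p for all inputs?
sv_A = np.linalg.svd(A, compute_uv=False)
sv_LA = np.linalg.svd(LA, compute_uv=False)
print("Schatten-1 norm of A:   ", sv_A.sum())
print("Schatten-1 norm of L(A):", sv_LA.sum())
```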

Keywords
  • data stream algorithms
  • embeddings
  • matrix norms
  • sketching

References

  1. Alexandr Andoni. Nearest neighbor search in high-dimensional spaces. In the workshop: Barriers in Computational Complexity II, 2010. URL: http://www.mit.edu/~andoni/nns-barriers.pdf.
  2. Alexandr Andoni, Robert Krauthgamer, and Ilya Razenshteyn. Sketching and embedding are equivalent for norms. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 479-488. ACM, 2015.
  3. Jaroslaw Blasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang. Streaming symmetric norms via measure concentration. arXiv:1511.01111 [cs.DS], 2016.
  4. Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang. Sketches for matrix norms: Faster, smaller and more general. arXiv:1609.05885 [cs.DS], 2016.
  5. Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111-119, 2012.
  6. Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 81-90. ACM, 2013.
  7. Michael B. Cohen, Jelani Nelson, and David P. Woodruff. Optimal approximate matrix product in terms of stable rank. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy, pages 11:1-11:14, 2016.
  8. Aram W. Harrow, Ashley Montanaro, and Anthony J. Short. Limitations on quantum dimensionality reduction. In International Colloquium on Automata, Languages, and Programming, pages 86-97. Springer, 2011.
  9. Weihao Kong and Gregory Valiant. Spectrum estimation from samples. arXiv:1602.00061 [cs.LG], 2016.
  10. Kasper Green Larsen and Jelani Nelson. The Johnson-Lindenstrauss lemma is optimal for linear dimensionality reduction. In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi, editors, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), volume 55 of Leibniz International Proceedings in Informatics (LIPIcs), pages 82:1-82:11, 2016.
  11. Michel Ledoux and Michel Talagrand. Probability in Banach Spaces. Springer-Verlag, Berlin, 1991.
  12. Yi Li, Huy L. Nguyen, and David P. Woodruff. On sketching matrix norms and the top singular vector. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1562-1581, 2014.
  13. Yi Li and David P. Woodruff. On approximating functions of the singular values in a stream. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 726-739, 2016.
  14. Yi Li and David P. Woodruff. Tight bounds for sketching the operator norm, Schatten norms, and subspace embeddings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2016, September 7-9, 2016, Paris, France, pages 39:1-39:11, 2016.
  15. Françoise Lust-Piquard. Inégalités de Khintchine dans C_p (1 < p < ∞). Comptes Rendus de l'Académie des Sciences - Series I - Mathematics, 303:289-292, 1986.
  16. Xiangrui Meng and Michael W. Mahoney. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In Symposium on Theory of Computing Conference, STOC'13, Palo Alto, CA, USA, June 1-4, 2013, pages 91-100, 2013.
  17. Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff. Spectral sums beyond fast matrix multiplication: Algorithms and hardness. arXiv:1704.04163 [cs.DS], 2017.
  18. Jelani Nelson and Huy L. Nguyen. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, 26-29 October, 2013, Berkeley, CA, USA, pages 117-126, 2013.
  19. Shashanka Ubaru, Jie Chen, and Yousef Saad. Fast estimation of tr(f(A)) via stochastic Lanczos quadrature, 2016. URL: http://www-users.cs.umn.edu/~saad/PDF/ys-2016-04.pdf.
  20. Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Yonina C. Eldar and Gitta Kutyniok, editors, Compressed Sensing: Theory and Applications, pages 210-268. Cambridge University Press, 2012.
  21. Andreas J. Winter. Quantum and classical message identification via quantum channels. Quantum Information &amp; Computation, 5(7):605-606, 2005.
  22. David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1-157, 2014.