Faster Algorithms for Schatten-p Low Rank Approximation

Authors: Praneeth Kacham, David P. Woodruff




File

LIPIcs.APPROX-RANDOM.2024.55.pdf
  • Filesize: 0.94 MB
  • 19 pages

Author Details

Praneeth Kacham
  • Carnegie Mellon University, Pittsburgh, PA, USA
  • Google Research, New York, USA
David P. Woodruff
  • Carnegie Mellon University, Pittsburgh, PA, USA

Acknowledgements

We would like to thank Cameron Musco, Christopher Musco, and Aleksandros Sobczyk for helpful discussions. We acknowledge partial support from a Simons Investigator Award and NSF grant CCF-2335411.

Cite As

Praneeth Kacham and David P. Woodruff. Faster Algorithms for Schatten-p Low Rank Approximation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 55:1-55:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2024.55

Abstract

We study algorithms for the Schatten-p Low Rank Approximation (LRA) problem. First, we show that by using fast rectangular matrix multiplication algorithms and different block sizes, we can improve the running time of the algorithms in the recent work of Bakshi, Clarkson, and Woodruff (STOC 2022). We then show that by carefully combining our new algorithm with the algorithm of Li and Woodruff (ICML 2020), we obtain even faster algorithms for Schatten-p LRA. While the block-based algorithms are fast in the real number model, we do not have a stability analysis showing that they work when implemented on a machine with polylogarithmic bits of precision. We show that the LazySVD algorithm of Allen-Zhu and Li (NeurIPS 2016) can be implemented on a floating point machine with a number of bits of precision that is only logarithmic in the input parameters. As far as we are aware, this is the first stability analysis of any algorithm that uses O((k/√ε)·poly(log n)) matrix-vector products with the matrix A to output a (1+ε)-approximate solution to the rank-k Schatten-p LRA problem.
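The abstract measures cost in matrix-vector products with A. As a minimal illustration of the objective being approximated (not the paper's algorithm), the following NumPy sketch runs a generic block Krylov low-rank approximation in the style of Musco and Musco [11] and compares its Schatten-p error to that of the optimal truncated SVD; all function names and parameter choices here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def schatten_p_norm(M, p):
    """Schatten-p norm: the l_p norm of the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

def block_krylov_lra(A, k, q, rng):
    """Rank-k approximation from the block Krylov subspace
    span[AG, (AA^T)AG, ..., (AA^T)^q AG], with G a Gaussian start block.
    Each Krylov step costs two matrix-vector products per column of G."""
    n, d = A.shape
    G = rng.standard_normal((d, k))
    blocks = [A @ G]
    for _ in range(q):
        blocks.append(A @ (A.T @ blocks[-1]))
    Q, _ = np.linalg.qr(np.concatenate(blocks, axis=1))
    # Rank-k truncation of the projected matrix Q^T A gives the output.
    Ub, sb, Vbt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub[:, :k]) * sb[:k] @ Vbt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
k, p = 5, 3
A_hat = block_krylov_lra(A, k, q=6, rng=rng)

# The truncated SVD is optimal under every Schatten-p norm, so the
# ratio err/opt measures how close the Krylov solution is to optimal.
s = np.linalg.svd(A, compute_uv=False)
opt = float((s[k:] ** p).sum() ** (1.0 / p))
err = schatten_p_norm(A - A_hat, p)
print(err / opt)  # at least 1, and close to 1 when q is large enough
```

The paper's contribution concerns how few such matrix-vector products (and how little floating-point precision) suffice; this sketch only fixes the notion of (1+ε)-approximation in the Schatten-p norm.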

Subject Classification

ACM Subject Classification
  • Theory of computation → Mathematical optimization
  • Mathematics of computing → Mathematical analysis
Keywords
  • Low Rank Approximation
  • Schatten Norm
  • Rectangular Matrix Multiplication
  • Stability Analysis

References

  1. Zeyuan Allen-Zhu and Yuanzhi Li. LazySVD: Even faster SVD decomposition yet without agonizing pain. Advances in Neural Information Processing Systems, 29, 2016. URL: https://proceedings.neurips.cc/paper/2016/hash/c6e19e830859f2cb9f7c8f8cacb8d2a6-Abstract.html.
  2. Ainesh Bakshi, Kenneth L. Clarkson, and David P. Woodruff. Low-rank approximation with 1/ε^(1/3) matrix-vector products. STOC 2022. arXiv:2202.05120, 2022. URL: https://doi.org/10.48550/arXiv.2202.05120.
  3. Jess Banks, Jorge Garza-Vargas, Archit Kulkarni, and Nikhil Srivastava. Pseudospectral shattering, the sign function, and diagonalization in nearly matrix multiplication time. Foundations of Computational Mathematics, pages 1-89, 2023. URL: https://doi.org/10.1007/s10208-022-09577-5.
  4. Kenneth L. Clarkson and David P. Woodruff. Low-rank approximation and regression in input sparsity time. J. ACM, 63(6):Art. 54, 45, 2017. URL: https://doi.org/10.1145/3019134.
  5. Ran Duan, Hongxun Wu, and Renfei Zhou. Faster matrix multiplication via asymmetric hashing. FOCS 2023. arXiv:2210.10173, 2022. URL: https://doi.org/10.48550/arXiv.2210.10173.
  6. François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1029-1046. SIAM, 2018. URL: https://doi.org/10.1137/1.9781611975031.67.
  7. François Le Gall. Faster algorithms for rectangular matrix multiplication. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, pages 514-523. IEEE, 2012. URL: https://doi.org/10.1109/FOCS.2012.80.
  8. François Le Gall. Faster rectangular matrix multiplication by combination loss analysis. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3765-3791. SIAM, 2024. URL: https://doi.org/10.1137/1.9781611977912.133.
  9. Yi Li and David Woodruff. Input-sparsity low rank approximation in Schatten norm. In International Conference on Machine Learning, pages 6001-6009. PMLR, 2020. URL: http://proceedings.mlr.press/v119/li20q.html.
  10. Grazia Lotti and Francesco Romani. On the asymptotic complexity of rectangular matrix multiplication. Theoretical Computer Science, 23(2):171-185, 1983. URL: https://doi.org/10.1016/0304-3975(83)90054-3.
  11. Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. Advances in Neural Information Processing Systems, 28, 2015. URL: https://proceedings.neurips.cc/paper/2015/hash/1efa39bcaec6f3900149160693694536-Abstract.html.
  12. Cameron Musco, Christopher Musco, and Aaron Sidford. Stability of the Lanczos method for matrix function approximation. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1605-1624. SIAM, Philadelphia, PA, 2018. URL: https://doi.org/10.1137/1.9781611975031.105.
  13. Sushant Sachdeva and Nisheeth K. Vishnoi. Faster algorithms via approximation theory. Foundations and Trends® in Theoretical Computer Science, 9(2):125-210, 2014. URL: https://doi.org/10.1561/0400000065.
  14. Aleksandros Sobczyk, Marko Mladenović, and Mathieu Luisier. Hermitian pseudospectral shattering, Cholesky, Hermitian eigenvalues, and density functional theory in nearly matrix multiplication time. arXiv preprint arXiv:2311.10459, 2023. URL: https://doi.org/10.48550/arXiv.2311.10459.
  15. Lloyd N. Trefethen and David Bau, III. Numerical linear algebra. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997. URL: https://doi.org/10.1137/1.9780898719574.
  16. Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Advances in Adaptive Data Analysis, 3(1-2):115-126, 2011. URL: https://doi.org/10.1142/S1793536911000787.