
Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation

Authors: Cameron Musco, Christopher Musco, David P. Woodruff




File
  • LIPIcs.ITCS.2021.6.pdf (0.55 MB, 20 pages)

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2021.6

Author Details

Cameron Musco
  • College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
Christopher Musco
  • Department of Computer Science and Engineering, New York University Tandon School of Engineering, NY, USA
David P. Woodruff
  • Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA

Acknowledgements

Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing.

Cite As

Cameron Musco, Christopher Musco, and David P. Woodruff. Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/LIPIcs.ITCS.2021.6
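
A BibTeX entry for this work, assembled from the citation data above; all field values are taken directly from the citation and DOI, while the entry key and field layout follow common LIPIcs conventions and are illustrative rather than the official record.

```bibtex
@InProceedings{musco_et_al:LIPIcs.ITCS.2021.6,
  author    = {Musco, Cameron and Musco, Christopher and Woodruff, David P.},
  title     = {Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation},
  booktitle = {12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  volume    = {185},
  pages     = {6:1--6:20},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  year      = {2021},
  doi       = {10.4230/LIPIcs.ITCS.2021.6}
}
```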

Abstract

In the masked low-rank approximation problem, one is given a data matrix A ∈ ℝ^{n × n} and a binary mask matrix W ∈ {0,1}^{n × n}. The goal is to find a rank-k matrix L for which cost(L) := ∑_{i=1}^n ∑_{j=1}^n W_{i,j} ⋅ (A_{i,j} - L_{i,j})² ≤ OPT + ε ‖A‖_F², where OPT = min_{rank-k L̂} cost(L̂) and ε is a given error parameter. Depending on the choice of W, the above problem captures factor analysis, low-rank plus diagonal decomposition, robust PCA, low-rank matrix completion, low-rank plus block matrix approximation, low-rank recovery from monotone missing data, and a number of other important problems. Many of these problems are NP-hard, and while algorithms with provable guarantees are known in some cases, they either 1) run in time n^Ω(k²/ε) or 2) make strong assumptions, for example, that A is incoherent or that the entries in W are chosen independently and uniformly at random. In this work, we show that a common polynomial-time heuristic, which simply sets A to 0 where W is 0 and then finds a standard low-rank approximation, yields bicriteria approximation guarantees for this problem. In particular, for rank k' > k depending on the public coin partition number of W, the heuristic outputs a rank-k' matrix L with cost(L) ≤ OPT + ε ‖A‖_F². This partition number is in turn bounded by the randomized communication complexity of W, when interpreted as a two-player communication matrix. For many important cases, including all those listed above, this yields bicriteria approximation guarantees with rank k' = k ⋅ poly(log n/ε). Beyond this result, we show that different notions of communication complexity yield bicriteria algorithms for natural variants of masked low-rank approximation. For example, multi-player number-in-hand communication complexity connects to masked tensor decomposition, and non-deterministic communication complexity connects to masked Boolean low-rank factorization.
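
To make the heuristic concrete, here is a minimal NumPy sketch (not code from the paper): it zeroes out A wherever W is 0, takes a truncated-SVD low-rank approximation of the result, and evaluates the masked cost. The example mask (all ones except the diagonal, i.e., the low-rank-plus-diagonal pattern) and the choice k' = 2k are illustrative only; the paper's guarantee sets k' based on the public coin partition number of W.

```python
import numpy as np

def masked_cost(A, W, L):
    """cost(L) = sum_{i,j} W[i,j] * (A[i,j] - L[i,j])**2."""
    return np.sum(W * (A - L) ** 2)

def zero_out_heuristic(A, W, k_prime):
    # Step 1: set A to 0 wherever the mask W is 0 (entrywise product).
    M = W * A
    # Step 2: best rank-k' approximation of the masked matrix via truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k_prime] * s[:k_prime]) @ Vt[:k_prime, :]

# Illustrative instance: low-rank plus diagonal, where W ignores A's diagonal.
rng = np.random.default_rng(0)
n, k = 50, 3
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank-k signal
A += np.diag(10.0 * rng.standard_normal(n))                    # arbitrary diagonal
W = np.ones((n, n)) - np.eye(n)

L = zero_out_heuristic(A, W, k_prime=2 * k)  # bicriteria: output rank k' > k
print("masked cost of heuristic output:", masked_cost(A, W, L))
```

Note that the heuristic never looks at the masked-out entries of A; the bicriteria rank increase from k to k' is what compensates for ignoring them.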

Subject Classification

ACM Subject Classification
  • Theory of computation → Approximation algorithms analysis
  • Theory of computation → Communication complexity
Keywords
  • low-rank approximation
  • communication complexity
  • weighted low-rank approximation
  • bicriteria approximation algorithms

