Improved Quantum Lower and Upper Bounds for Matrix Scaling

Authors: Sander Gribling, Harold Nieuwboer



File

LIPIcs.STACS.2022.35.pdf
  • Filesize: 0.87 MB
  • 23 pages

Author Details

Sander Gribling
  • IRIF, Université de Paris, CNRS, France
Harold Nieuwboer
  • Korteweg-de Vries Institute for Mathematics and QuSoft, University of Amsterdam, The Netherlands

Acknowledgements

We thank Joran van Apeldoorn, Michael Walter and Ronald de Wolf for interesting and helpful discussions, and the latter two for giving feedback on a first version of this paper. Moreover, we thank Ronald de Wolf for pointing us to [Troy Lee and Jérémie Roland, 2013], which allows for an exponentially small success probability in Theorem 2.1.

Cite As

Sander Gribling and Harold Nieuwboer. Improved Quantum Lower and Upper Bounds for Matrix Scaling. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 35:1-35:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022) https://doi.org/10.4230/LIPIcs.STACS.2022.35

Abstract

Matrix scaling is a simple-to-state yet widely applicable linear-algebraic problem: the goal is to scale the rows and columns of a given non-negative matrix such that the rescaled matrix has prescribed row and column sums. Motivated by recent results on first-order quantum algorithms for matrix scaling, we investigate the possibilities for quantum speedups for classical second-order algorithms, which comprise the state of the art in the classical setting.
We first show that there can be essentially no quantum speedup in terms of the input size in the high-precision regime: any quantum algorithm that solves the matrix scaling problem for n × n matrices with at most m non-zero entries and with 𝓁₂-error ε = Θ~(1/m) must make Ω(m) queries to the matrix, even when the success probability is exponentially small in n. Additionally, we show that for ε ∈ [1/n,1/2], any quantum algorithm capable of producing ε/100-𝓁₁-approximations of the row-sum vector of a (dense) normalized matrix uses Ω(n/ε) queries, and that there exists a constant ε₀ > 0 for which this problem takes Ω(n^{1.5}) queries.
To complement these results we give improved quantum algorithms in the low-precision regime: with quantum graph sparsification and amplitude estimation, a box-constrained Newton method can be sped up in the large-ε regime, and outperforms previous quantum algorithms. For entrywise-positive matrices, we find an ε-𝓁₁-scaling in time O~(n^{1.5}/ε²), whereas the best previously known bounds were O~(n²polylog(1/ε)) (classical) and O~(n^{1.5}/ε³) (quantum).
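As a point of reference for the problem statement above, the classical first-order baseline is the Sinkhorn iteration, which alternately rescales rows and columns until both marginals are close to the target. The following is a minimal pure-Python sketch for the special case of uniform target marginals (doubly stochastic scaling); the function name, stopping criterion, and parameters are illustrative and not taken from the paper:

```python
# A minimal Sinkhorn-style scaling sketch: alternately normalize the
# rows and columns of an entrywise-positive matrix toward uniform
# marginals (doubly stochastic up to l1-error eps).
def sinkhorn_scale(A, eps=1e-6, max_iters=10_000):
    n = len(A)
    A = [row[:] for row in A]   # work on a copy
    target = 1.0 / n            # uniform row/column sums
    for _ in range(max_iters):
        # Row step: rescale each row to sum to the target.
        for i in range(n):
            r = sum(A[i])
            A[i] = [a * target / r for a in A[i]]
        # Column step: rescale each column to sum to the target.
        for j in range(n):
            c = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] *= target / c
        # Columns are now exact, so the l1-error of the row marginals
        # measures how far the matrix still is from doubly stochastic.
        if sum(abs(sum(row) - target) for row in A) <= eps:
            break
    return A

S = sinkhorn_scale([[2.0, 1.0], [1.0, 3.0]])
```

Each full pass costs O(m) for a matrix with m non-zero entries, which is why the paper's lower bounds in terms of m rule out generic quantum speedups in the high-precision regime, while its upper bounds target the low-precision regime with second-order methods instead.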

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
  • Theory of computation → Quantum computation theory
Keywords
  • Matrix scaling
  • quantum algorithms
  • lower bounds


References

  1. Zeyuan Allen-Zhu, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson. Much faster algorithms for matrix scaling. In Proceedings of IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS'17), pages 890-901, 2017. URL: http://arxiv.org/abs/1704.02315.
  2. Jason Altschuler, Jonathan Niles-Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Advances in Neural Information Processing Systems, volume 30, pages 1964-1974, 2017. URL: http://arxiv.org/abs/1705.09634.
  3. Andris Ambainis. Quantum lower bounds by quantum arguments. Journal of Computer and System Sciences, 64(4):750-767, 2002. Earlier version in STOC'00. URL: https://arxiv.org/abs/quant-ph/0002066.
  4. Edward Anderson, Zhaojun Bai, Christian Bischof, L Susan Blackford, James Demmel, Jack Dongarra, Jeremy Du Croz, Anne Greenbaum, Sven Hammarling, Alan McKenney, et al. LAPACK Users' guide. SIAM, 1999.
  5. Joran van Apeldoorn, Sander Gribling, Yinan Li, Harold Nieuwboer, Michael Walter, and Ronald de Wolf. Quantum Algorithms for Matrix Scaling and Matrix Balancing. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198, pages 110:1-110:17, 2021. http://arxiv.org/abs/2011.12823v1, URL: https://doi.org/10.4230/LIPIcs.ICALP.2021.110.
  6. Simon Apers and Ronald de Wolf. Quantum speedup for graph sparsification, cut approximation and Laplacian solving. Proceedings of IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS'20), pages 637-648, 2020. URL: http://arxiv.org/abs/1911.07306.
  7. Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf. Quantum lower bounds by polynomials. Journal of the ACM (JACM), 48(4):778-797, 2001. URL: http://arxiv.org/abs/quant-ph/9802049.
  8. Yvonne M. M. Bishop, Stephen E. Fienberg, and Paul W. Holland. Discrete Multivariate Analysis: Theory and Practice. MIT Press, 1975.
  9. David T. Brown. A note on approximations to discrete probability distributions. Information and Control, 2(4):386-392, 1959.
  10. Peter Bürgisser, Cole Franks, Ankit Garg, Rafael Oliveira, Michael Walter, and Avi Wigderson. Towards a theory of non-commutative optimization: geodesic first and second order methods for moment maps and polytopes. In Proceedings of 60th IEEE Annual Symposium on Foundations of Computer Science (FOCS'19), pages 845-861, 2019. URL: http://arxiv.org/abs/1910.12375.
  11. Deeparnab Chakrabarty and Sanjeev Khanna. Better and simpler error analysis of the Sinkhorn-Knopp algorithm for matrix scaling. Mathematical Programming, pages 1-13, 2020.
  12. Michael B. Cohen, Aleksander Madry, Dimitris Tsipras, and Adrian Vladu. Matrix scaling and balancing via box constrained Newton’s method and interior point methods. In Proceedings of IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS'17), pages 902-913, 2017. URL: https://doi.org/10.1109/FOCS.2017.88.
  13. Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, volume 26, pages 2292-2300, 2013. URL: http://arxiv.org/abs/1306.0895.
  14. W. Edwards Deming and Frederick F. Stephan. On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. The Annals of Mathematical Statistics, 11(4):427-444, 1940.
  15. Pavel E. Dvurechensky, Alexander V. Gasnikov, and Alexey Kroshnin. Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn’s Algorithm. In ICML, 2018. URL: http://arxiv.org/abs/1802.04367.
  16. Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson. Operator scaling: theory and applications. Foundations of Computational Mathematics, pages 1-68, 2019. Earlier version in FOCS'16.
  17. Sander Gribling and Harold Nieuwboer. Improved quantum lower and upper bounds for matrix scaling, 2021. URL: http://arxiv.org/abs/2109.15282v1.
  18. Peter Høyer, Troy Lee, and Robert Špalek. Negative weights make adversaries stronger. In Proceedings of the 39th Annual ACM SIGACT Symposium on Theory of Computing (STOC'07), pages 526-535, 2007. http://arxiv.org/abs/quant-ph/0611054, URL: https://doi.org/10.1145/1250790.1250867.
  19. Martin Idel. A review of matrix scaling and Sinkhorn’s normal form for matrices and positive maps, 2016. URL: http://arxiv.org/abs/1609.06349.
  20. B. Kalantari, I. Lari, F. Ricca, and B. Simeone. On the complexity of general matrix scaling and entropy minimization via the RAS algorithm. Mathematical Programming, 112:371-401, 2008.
  21. Bahman Kalantari and Leonid Khachiyan. On the rate of convergence of deterministic and randomized RAS matrix scaling algorithms. Operations Research Letters, 14(5):237-244, 1993. URL: https://doi.org/10.1016/0167-6377(93)90087-W.
  22. Bahman Kalantari and Leonid Khachiyan. On the complexity of nonnegative-matrix scaling. Linear Algebra and its Applications, 240:87-103, 1996. URL: https://doi.org/10.1016/0024-3795(94)00188-X.
  23. J. Kruithof. Telefoonverkeersrekening [Telephone traffic calculation]. De Ingenieur, 52:E15-E25, 1937.
  24. Rasmus Kyng, Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman. Sparsified Cholesky and multigrid solvers for connection Laplacians. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC'16), pages 842-850, 2016. http://arxiv.org/abs/1512.01892, URL: https://doi.org/10.1145/2897518.2897640.
  25. Troy Lee, Rajat Mittal, Ben Reichardt, Robert Špalek, and Mario Szegedy. Quantum query complexity of state conversion. Proceedings of IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS'11), pages 344-353, 2011. http://arxiv.org/abs/1011.3020, URL: https://doi.org/10.1109/FOCS.2011.75.
  26. Troy Lee and Jérémie Roland. A strong direct product theorem for quantum query complexity. Computational Complexity, 22(2):429-462, 2013. Earlier version in CCC'12. URL: https://arxiv.org/abs/1104.4468.
  27. Yin Tat Lee, Richard Peng, and Daniel A. Spielman. Sparsified Cholesky solvers for SDD linear systems, 2015. URL: http://arxiv.org/abs/1506.08204.
  28. Nathan Linial, Alex Samorodnitsky, and Avi Wigderson. A deterministic strongly polynomial algorithm for matrix scaling and approximate permanents. Combinatorica, 20(4):545-568, 2000.
  29. Ashwin Nayak and Felix Wu. The quantum query complexity of approximating the median and related statistics. In Proceedings of the 31st Annual ACM SIGACT Symposium on Theory of Computing (STOC'99), pages 384-393, 1999. http://arxiv.org/abs/quant-ph/9804066, URL: https://doi.org/10.1145/301250.301349.
  30. Uriel G. Rothblum and Hans Schneider. Scalings of matrices which have prespecified row sums and column sums via optimization. Linear Algebra and its Applications, 114:737-764, 1989.
  31. Richard Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics, 35(2):876-879, 1964.