No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization

Authors: Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif



File

LIPIcs.ITCS.2021.53.pdf
  • Filesize: 0.51 MB
  • 20 pages

Author Details

Ankit Garg
  • Microsoft Research India, Bangalore, India
Robin Kothari
  • Microsoft Quantum and Microsoft Research, Redmond, WA, USA
Praneeth Netrapalli
  • Microsoft Research India, Bangalore, India
Suhail Sherif
  • Microsoft Research India, Bangalore, India
  • Tata Institute of Fundamental Research, Mumbai, India

Acknowledgements

We thank Sébastien Bubeck, Ronald de Wolf, and András Gilyén for helpful conversations about this work. RK thanks Vamsi Pritham Pingali for many helpful conversations about multivariable calculus. We would also like to thank Matthias Kleinmann for asking us some questions about Lemma 17 that helped us improve its presentation.

Cite As

Ankit Garg, Robin Kothari, Praneeth Netrapalli, and Suhail Sherif. No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021) https://doi.org/10.4230/LIPIcs.ITCS.2021.53

Abstract

We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function f:ℝⁿ → ℝ and its (sub)gradient. Our goal is to find an ε-approximate minimum of f starting from a point that is at distance at most R from the true minimum. If f is G-Lipschitz, then the classic gradient descent algorithm solves this problem with O((GR/ε)²) queries. Importantly, the number of queries is independent of the dimension n, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve a better query complexity that is still independent of the dimension n.
In this paper we reprove the randomized lower bound of Ω((GR/ε)²) using a simpler argument than previous lower bounds. We then show that although the function family used in this lower bound is hard for randomized algorithms, it can be solved using O(GR/ε) quantum queries. Finally, we prove an improved lower bound against quantum algorithms using a different set of instances, and establish our main result: in general, even quantum algorithms need Ω((GR/ε)²) queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
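
For intuition, below is a minimal sketch (not taken from the paper) of the classical projected subgradient method whose O((GR/ε)²) query bound the abstract refers to. The function handle f_and_subgrad, the example objective, and the parameter values are illustrative assumptions introduced here, not part of the original work.

import numpy as np

def subgradient_descent(f_and_subgrad, x0, G, R, eps):
    """Find an eps-approximate minimum of a G-Lipschitz convex f, assuming
    the true minimum lies within distance R of the starting point x0."""
    T = int(np.ceil((G * R / eps) ** 2))   # O((GR/eps)^2) (sub)gradient queries
    eta = R / (G * np.sqrt(T))             # standard fixed step size
    x, best_x, best_f = x0.copy(), x0.copy(), np.inf
    for _ in range(T):
        fx, gx = f_and_subgrad(x)          # one black-box query
        if fx < best_f:
            best_f, best_x = fx, x.copy()
        x = x - eta * gx                   # subgradient step
        d = np.linalg.norm(x - x0)         # project back onto the radius-R ball around x0
        if d > R:
            x = x0 + (x - x0) * (R / d)
    return best_x, best_f

# Illustrative usage: f(x) = ||x||_1 is sqrt(n)-Lipschitz in the Euclidean norm,
# with subgradient sign(x); its minimum value 0 is attained at the origin.
if __name__ == "__main__":
    n = 5
    f = lambda x: (np.sum(np.abs(x)), np.sign(x))
    x_min, f_min = subgradient_descent(f, np.ones(n), G=np.sqrt(n), R=np.sqrt(n), eps=0.1)
    print(f_min)  # close to 0, i.e. within eps of the true minimum

The guarantee behind this sketch is the standard one: after T = (GR/ε)² steps with step size η = R/(G√T), the best iterate seen is within ε of the minimum value, with no dependence on the dimension n, matching the classical upper bound stated in the abstract.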

Subject Classification

ACM Subject Classification
  • Theory of computation → Quantum computation theory
  • Theory of computation → Convex optimization
Keywords
  • Quantum algorithms
  • Gradient descent
  • Convex optimization
