A Faster Interior-Point Method for Sum-Of-Squares Optimization

Authors: Shunhua Jiang, Bento Natura, Omri Weinstein




File

LIPIcs.ICALP.2022.79.pdf
  • Filesize: 0.88 MB
  • 20 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ICALP.2022.79

Author Details

Shunhua Jiang
  • Columbia University, New York, NY, USA
Bento Natura
  • London School of Economics, UK
Omri Weinstein
  • The Hebrew University, Jerusalem, Israel
  • Columbia University, New York, NY, USA

Acknowledgements

The second author would like to thank Vissarion Fisikopoulos and Elias Tsigaridas for introducing him to Sum-of-Squares optimization under the interpolant basis from a practical perspective.

Cite As

Shunhua Jiang, Bento Natura, and Omri Weinstein. A Faster Interior-Point Method for Sum-Of-Squares Optimization. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 79:1-79:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022) https://doi.org/10.4230/LIPIcs.ICALP.2022.79

Abstract

We present a faster interior-point method for optimizing sum-of-squares (SOS) polynomials, which are a central tool in polynomial optimization and capture convex programming in the Lasserre hierarchy. Let p = ∑_i q_i² be an n-variate SOS polynomial of degree 2d. Denoting by L := binom(n+d,d) and U := binom(n+2d,2d) the dimensions of the vector spaces in which the q_i and p live, respectively, our algorithm runs in time Õ(LU^{1.87}). This is polynomially faster than state-of-the-art SOS and semidefinite programming solvers [Jiang et al., 2020; Huang et al., 2021; Papp and Yildiz, 2019], which achieve runtime Õ(L^{0.5} min{U^{2.37}, L^{4.24}}).
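
As a quick, informal illustration of the parameters in this bound (my own sketch, not taken from the paper), the following Python snippet computes L and U for a small arbitrary example (n = 4, d = 3) and evaluates both runtime expressions numerically, ignoring the hidden Õ(·) factors.

    from math import comb

    def sos_dimensions(n, d):
        # L = dimension of the space the q_i live in (monomials of degree <= d),
        # U = dimension of the space p lives in (monomials of degree <= 2d).
        return comb(n + d, d), comb(n + 2 * d, 2 * d)

    n, d = 4, 3                                         # arbitrary small example
    L, U = sos_dimensions(n, d)                         # L = 35, U = 210
    this_paper = L * U ** 1.87                          # Õ(L · U^1.87)
    prior_work = L ** 0.5 * min(U ** 2.37, L ** 4.24)   # Õ(L^0.5 · min{U^2.37, L^4.24})
    print(L, U, this_paper, prior_work)
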
The centerpiece of our algorithm is a dynamic data structure for maintaining the inverse of the Hessian of the SOS barrier function under the polynomial interpolant basis [Papp and Yildiz, 2019], which efficiently extends to multivariate SOS optimization, and requires maintaining spectral approximations to low-rank perturbations of elementwise (Hadamard) products. This is the main challenge and departure from recent IPM breakthroughs using inverse-maintenance, where low-rank updates to the slack matrix readily imply the same for the Hessian matrix.
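
To make the Hadamard-product structure concrete, here is a minimal NumPy sketch (my own illustration under assumed notation, not the paper's data structure). Assume P is a U × L matrix of degree-≤ d basis polynomials evaluated at U interpolation points, and x ∈ R^U is a vector for which Λ(x) = Pᵀ Diag(x) P is positive definite; for the barrier F(x) = -log det(Λ(x)), the Hessian equals the elementwise (Hadamard) square of M = P Λ(x)⁻¹ Pᵀ. Maintaining a spectral approximation of this Hessian and its inverse under low-rank changes along the central path is the part the paper's data structure handles and is not attempted here.

    # Sketch, not from the paper: the Hadamard-square structure of the
    # interpolant-basis SOS barrier Hessian, verified by finite differences.
    import numpy as np

    def barrier_hessian(P, x):
        # Lambda(x) = P^T diag(x) P  (L x L);  F(x) = -log det(Lambda(x)).
        # Hessian entries are (p_i^T Lambda^{-1} p_j)^2 with p_i the i-th row
        # of P, i.e. the Hadamard square of M = P Lambda^{-1} P^T  (U x U).
        Lam = P.T @ (x[:, None] * P)
        M = P @ np.linalg.solve(Lam, P.T)
        return M * M

    rng = np.random.default_rng(0)
    U_dim, L_dim = 8, 4
    P = rng.standard_normal((U_dim, L_dim))   # random stand-in for the basis matrix
    x = rng.uniform(1.0, 2.0, U_dim)          # positive weights, so Lambda(x) is PD
    H = barrier_hessian(P, x)

    # Central finite-difference check of one mixed second derivative of F.
    F = lambda y: -np.linalg.slogdet(P.T @ (y[:, None] * P))[1]
    i, j, eps = 2, 5, 1e-3
    e_i, e_j = np.eye(U_dim)[i], np.eye(U_dim)[j]
    fd = (F(x + eps*e_i + eps*e_j) - F(x + eps*e_i - eps*e_j)
          - F(x - eps*e_i + eps*e_j) + F(x - eps*e_i - eps*e_j)) / (4 * eps**2)
    print(H[i, j], fd)                        # the two values should agree closely
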

Subject Classification

ACM Subject Classification
  • Mathematics of computing → Continuous functions
  • Mathematics of computing → Convex optimization
  • Mathematics of computing → Semidefinite programming
  • Mathematics of computing → Stochastic control and optimization
Keywords
  • Interior Point Methods
  • Sum-of-squares Optimization
  • Dynamic Matrix Inverse


References

  1. Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 522-539. SIAM, 2021.
  2. Christine Bachoc and Frank Vallentin. New upper bounds for kissing numbers from semidefinite programming. Journal of the American Mathematical Society, 2006.
  3. Brandon Ballinger, Grigoriy Blekherman, Henry Cohn, Noah Giansiracusa, Elizabeth Kelly, and Achill Schürmann. Experimental study of energy-minimizing point configurations on spheres. Experimental Mathematics, 18(3):257-283, 2009. URL: https://doi.org/10.1080/10586458.2009.10129052.
  4. Boaz Barak, Samuel B. Hopkins, Jonathan A. Kelner, Pravesh K. Kothari, Ankur Moitra, and Aaron Potechin. A nearly tight sum-of-squares lower bound for the planted clique problem. SIAM J. Comput., 48(2):687-735, 2019. URL: https://doi.org/10.1137/17M1138236.
  5. Boaz Barak, Prasad Raghavendra, and David Steurer. Rounding semidefinite programming hierarchies via global correlation. In 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 472-481. IEEE, 2011.
  6. Markus Bläser. Fast matrix multiplication. Theory of Computing, pages 1-60, 2013.
  7. Grigoriy Blekherman, Pablo A. Parrilo, and Rekha R. Thomas. Semidefinite optimization and convex algebraic geometry. SIAM, 2012.
  8. L. Bos, S. De Marchi, A. Sommariva, and M. Vianello. Computing multivariate Fekete and Leja points by numerical linear algebra. SIAM Journal on Numerical Analysis, 48(5):1984-1999, 2010. URL: https://doi.org/10.1137/090779024.
  9. Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. In Proceedings of the 51st Annual ACM Symposium on Theory of Computing (STOC), 2019.
  10. François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In Proceedings of the 2018 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1029-1046. SIAM, 2018.
  11. Bissan Ghaddar, Jakub Marecek, and M. Mevissen. Optimal power flow as a polynomial optimization problem. IEEE Transactions on Power Systems, 31:539-546, 2016.
  12. Roxana Heß, Didier Henrion, Jean-Bernard Lasserre, and Tien Son Pham. Semidefinite approximations of the polynomial abscissa. SIAM J. Control. Optim., 54(3):1633-1656, 2016. URL: https://doi.org/10.1137/15M1033198.
  13. Samuel B. Hopkins, Pravesh K. Kothari, Aaron Potechin, Prasad Raghavendra, Tselil Schramm, and David Steurer. The power of sum-of-squares for detecting hidden structures. In 58th IEEE Annual Symposium on Foundations of Computer Science, (FOCS), pages 720-731. IEEE Computer Society, 2017. URL: https://doi.org/10.1109/FOCS.2017.72.
  14. Samuel B. Hopkins and Jerry Li. Mixture models, robustness, and sum of squares proofs. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, (STOC), pages 1021-1034. ACM, 2018. URL: https://doi.org/10.1145/3188745.3188748.
  15. Baihe Huang, Shunhua Jiang, Zhao Song, Runzhou Tao, and Ruizhe Zhang. Solving SDP faster: A robust IPM framework and efficient implementation, 2021. URL: http://arxiv.org/abs/2101.08208.
  16. Haotian Jiang, Tarun Kathuria, Yin Tat Lee, Swati Padmanabhan, and Zhao Song. A faster interior point method for semidefinite programming. In 61st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 910-918. IEEE, 2020.
  17. Shunhua Jiang, Yunze Man, Zhao Song, Zheng Yu, and Danyang Zhuo. Fast graph neural tangent kernel via Kronecker sketching. In AAAI Conference on Artificial Intelligence (AAAI), 2022. URL: http://arxiv.org/abs/2112.02446.
  18. Narendra Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the 16th Annual ACM Symposium on Theory of Computing (STOC), pages 302-311, 1984.
  19. Jean Bernard Lasserre. An Introduction to Polynomial and Semi-Algebraic Optimization. Cambridge Texts in Applied Mathematics. Cambridge University Press, 2015. URL: https://doi.org/10.1017/CBO9781107447226.
  20. M. Laurent. Sums of squares, moment matrices and optimization over polynomials, pages 155-270. Number 149 in The IMA Volumes in Mathematics and its Applications Series. Springer Verlag, Germany, 2009.
  21. François Le Gall. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 296-303, 2014.
  22. Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 424-433. IEEE, 2014.
  23. Yin Tat Lee, Zhao Song, and Qiuyi Zhang. Solving empirical risk minimization in the current matrix multiplication time. In Conference on Learning Theory (COLT), pages 2140-2157. PMLR, 2019.
  24. Yurii Nesterov. Squared functional systems and optimization problems. In High Performance Optimization, pages 405-440. Springer, 2000.
  25. Yurii Nesterov and Arkadi Nemirovski. Interior-Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics. SIAM, 1994. URL: https://doi.org/10.1137/1.9781611970791.
  26. Dávid Papp. Optimal designs for rational function regression. Journal of the American Statistical Association, 107(497):400-411, 2012. URL: https://doi.org/10.1080/01621459.2012.656035.
  27. Dávid Papp and Sercan Yildiz. Sum-of-squares optimization without semidefinite programming. SIAM Journal on Optimization, 29(1):822-851, 2019.
  28. Pablo Parrilo. Sum of Squares: Theory and Applications (AMS Short Course, January 14-15, 2019, Baltimore, Maryland). American Mathematical Society, Providence, Rhode Island, 2020.
  29. Mihai Putinar and Florian-Horia Vasilescu. Positive polynomials on semi-algebraic sets. Comptes Rendus de l'Académie des Sciences - Series I - Mathematics, 328(7):585-589, 1999. URL: https://doi.org/10.1016/S0764-4442(99)80251-1.
  30. James Renegar. A Mathematical View of Interior-Point Methods in Convex Optimization. Society for Industrial and Applied Mathematics, January 2001. URL: https://doi.org/10.1137/1.9780898718812.
  31. Tae Roh, Bogdan Dumitrescu, and Lieven Vandenberghe. Multidimensional FIR filter design via trigonometric sum-of-squares optimization. J. Sel. Topics Signal Processing, 1(4):641-650, 2007. URL: https://doi.org/10.1109/JSTSP.2007.910261.
  32. Alvise Sommariva and Marco Vianello. Computing approximate Fekete points by QR factorizations of Vandermonde matrices. Computers & Mathematics with Applications, 57(8):1324-1336, 2009.
  33. Zhao Song, Shuo Yang, and Ruizhe Zhang. Does preprocessing help training over-parameterized neural networks? Advances in Neural Information Processing Systems, 34, 2021.
  34. Zhao Song, Lichen Zhang, and Ruizhe Zhang. Training multi-layer over-parametrized neural network in subquadratic time. arXiv preprint, 2021. URL: http://arxiv.org/abs/2112.07628.
  35. Gilbert Strang. Karmarkar’s algorithm and its place in applied mathematics. The Mathematical Intelligencer, 9(2):4-10, 1987.
  36. Ning Tan. On the Power of Lasserre SDP Hierarchy. PhD thesis, EECS Department, University of California, Berkeley, December 2015. URL: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-236.html.
  37. Pravin M. Vaidya. Speeding-up linear programming using fast matrix multiplication. In 30th Annual Symposium on Foundations of Computer Science (FOCS), pages 332-337. IEEE, 1989.
  38. Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein. Training (overparametrized) neural networks in near-linear time. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021), volume 185, pages 63:1-63:15, 2021. URL: https://doi.org/10.4230/LIPIcs.ITCS.2021.63.
  39. Yinyu Ye, Michael J. Todd, and Shinji Mizuno. An O(√n L)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19(1):53-67, 1994.